RAID error email reporting with the 3ware 9550SXU-8L and tw_cli

This is a follow-up to my previous post, dmraid error reporting by email, this time for a hardware raid controller – the 3ware/AMCC 9550SXU-8L. The concept is exactly the same; the only difference is the script that checks the raid array status.

We will be using the command line tool tw_cli, which can be downloaded from the 3ware (now LSI) website – go to support, select your product, then download the command line tools zip (currently “CLI Linux – 10.2 code set”). In this example, the tw_cli binary has been extracted to / and chmodded 755.

Follow the other post and instead of inserting the other script insert this:

# check raid status and email if not ok
STATUS=`/tw_cli info c0 | grep "RAID"`
OK=`echo "$STATUS" | grep "OK"`
if [ "$STATUS" != "$OK" ]; then
/tw_cli info c0 | mail -s "RAID ERROR ON `hostname`"
fi

This works for my setup using the 10.2 version of the command line tools. I have 2 separate raid arrays on the card (which is why I query all lines containing the word “RAID” and then check that they also contain the word “OK”). If your card shows up as a different controller number, or you have multiple cards, you may need to change the command line options.
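The grep-against-grep logic can be exercised without the controller by feeding it sample output; a sketch (the `tw_cli` output lines below are fabricated for illustration – real output will differ):

```shell
# simulated `tw_cli info c0` output: two units, one healthy set, one degraded set
sample_ok="u0 RAID-1 OK - - - 232.82 ON
u1 RAID-5 OK - - - 698.48 ON"
sample_bad="u0 RAID-1 OK - - - 232.82 ON
u1 RAID-5 DEGRADED - - - 698.48 ON"

# same logic as the check script: compare the RAID lines with the RAID+OK lines
check() {
    STATUS=`echo "$1" | grep "RAID"`
    OK=`echo "$STATUS" | grep "OK"`
    if [ "$STATUS" != "$OK" ]; then
        echo "ALERT"
    else
        echo "OK"
    fi
}

check "$sample_ok"   # prints OK
check "$sample_bad"  # prints ALERT
```

Any unit line that drops its “OK” makes the two strings differ, which is what triggers the mail.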

dmraid error reporting by email

dmraid is a software raid/fakeraid/onboard raid tool.

As far as I can tell, the only error reporting that dmraid does is by hooking into logwatch – which emails me a very long report I rarely read – whereas I would like to know immediately if my raid array is degraded.

This works for me on CentOS 5.6 with dmraid installed – no guarantees on other flavours/combinations. My dmraid version is dmraid-1.0.0.rc13-63.el5. I haven’t tested the output from other versions of dmraid, but it would be pretty trivial to update the script if they are different (or your path is different).

So what we are going to do is write a simple shell script that checks the array status and emails if there is a problem. Then we run the script every 15 (or whatever) minutes.

dmraid needs to be run as root, so you might as well su - for the whole of this.

To create the file just vi /, hit i to insert and paste this:

# check raid status and email if not ok
STATUS=`/sbin/dmraid -s | grep "status"`
if [ "$STATUS" != "status : ok" ]; then
/sbin/dmraid -s | mail -s "RAID ERROR ON `hostname`: $STATUS"
fi

Hit Esc, ZZ to save, then make the file executable:

chmod 755 /

Now add it to cron so that it runs regularly:

crontab -e

…and insert the line (i to insert – Esc, ZZ to save):

00,15,30,45 * * * * /

Voila! dmraid with email error reporting.

PHP Recursive File Copy Function

I couldn’t find a function online that copies folders recursively in PHP and actually works, so I wrote my own:

function recursiveCopy($src, $dest) {
if (!is_dir($src)) return;
$dir = opendir($src);
while (($file = readdir($dir)) !== false) {
if ($file != '.' && $file != '..') {
if (!is_dir($src.'/'.$file)) copy($src.'/'.$file, $dest.'/'.$file);
else {
@mkdir($dest.'/'.$file, 0750);
recursiveCopy($src.'/'.$file, $dest.'/'.$file);
} //else
} //if
} //while
closedir($dir);
} //function

To summarise: if the source is a folder, open it and start reading the files. If the files are not folders, copy them straight to the destination, if they are folders, create a new folder at the destination and then run the function again (within itself) for the new folder.
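For comparison, the same walk can be sketched as a shell function (the function name is mine; the subshell body keeps the loop variables private during recursion):

```shell
# recursively copy src into dest, mirroring the PHP logic:
# files are copied straight over, directories are created then recursed into
recursive_copy() (
    src=$1; dest=$2
    mkdir -p "$dest"
    for entry in "$src"/*; do
        [ -e "$entry" ] || continue        # skip if the glob matched nothing
        name=${entry##*/}
        if [ -d "$entry" ]; then
            recursive_copy "$entry" "$dest/$name"   # folder: recurse
        else
            cp "$entry" "$dest/$name"               # plain file: copy
        fi
    done
)
```

Usage is `recursive_copy source_dir dest_dir`; unlike `cp -R` it lets you hook extra logic (permissions, filtering) into the loop, which is the same reason for writing the PHP version by hand.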



Javascript (JQuery): Social networking feeds – new Facebook authentication

After my previous post, Javascript (JQuery): Social networking feeds all in one place, Facebook went and added authentication to the feed retrieval. After much head-scratching, this is how to enable the Facebook feed under the new OAuth system.

You need an access token to get to the data, so what we are going to do is create a Facebook App which the user then permits to access their information and that will give us the token we need.

So first you need to create a Facebook App. This is simpler than it sounds; we don’t need to create an App that actually does anything or even exists, we just need it for authentication. So, install the Developer App on Facebook, then go to that App and select Set Up New App. Enter the details of the App and be sure to give it a URL and domain – e.g. as URL and as domain. What you put in doesn’t matter that much.

The new App will have an Application id – a load of numbers. Now, this is the method to get the access token. Log in as the user you want the feed for (I am assuming you are using this to retrieve your own feed) and then go to the page:,offline_access

Replace your App id and URL in the example. What we are doing here is creating an App request to the user to access data, including the feed (read_stream) and to access the data when they are offline (offline_access) with a token that does not expire (ever, if I’m reading the docs correctly, even if they uninstall the App).

Once you have accepted, the script continues to your URL, passing a very long code to it (you can just copy it from the address bar) – copy this code, the part after ?code=

Then we can finally request the access token. As well as the two codes we just used we also need your App Secret from your Facebook App page. Get the secret and then go to the following page:

Obviously replace your App id, URL, App secret and the long code with your variables. The script passes back to you an access token (check the source code if your browser isn’t displaying it).

All you need to do now is add that access token to the feed request (see the previous post for the rest of the scripts):


…replacing the user id of the feed to want to retrieve.


You don’t want people who check your source code to have access to your Facebook account, so we need to hide that token. This is how I did it – call a PHP proxy script from the Javascript and have the PHP return the content minus the access token. So you change that line to:


Or whatever the name of your new PHP file is. And then the contents of that new file are:

$access_token = 'YOUR_ACCESS_TOKEN';
header('Content-Type: text/javascript; charset=UTF-8');
ini_set('user_agent', $_SERVER['HTTP_USER_AGENT']);
$handle = fopen(''.$access_token.'&limit=5&callback='.$_GET['callback'], 'rb');
$contents = '';
if ($handle) {
while (!feof($handle)) {
$contents .= fread($handle, 8192);
}
fclose($handle);
}
$contents = str_replace($access_token, '', $contents);
print $contents;

Replace your access token and user id in the above example.

What this code does is as follows: it defines your access token; tells the browser that it’s outputting Javascript (so that JQuery interprets the results properly); spoofs some browser information so that Facebook returns the data correctly; opens a connection to the JSON page using your access token and the callback reference that JQuery assigned; strips all references to your access token from the output (it appears in links that are returned); and finally prints the output so that the original JQuery function can interpret it. Voila! What was so simple just a week ago is now quite a bit more complicated…

Javascript (JQuery): Social networking feeds all in one place

Today we are going to get feeds from Twitter, Facebook, Youtube and Flickr all in one place for your website using JSON and the JQuery library:

Be careful on this page with wrapped lines!

Firstly we declare the jquery library in our head section (obviously download it first and change the path to match your system):

<script src="lib/jquery-1.4.2.min.js"></script>

Now we define a div to put all the feeds in:

<div id="l_tweets"><br /><br />Social Networks Update:</div>

Now let’s write some Javascript! We’ll start with Twitter:

<script type="text/javascript">
//twitter - use your own username in the getJSON line
//we'll get the feed and work out how many days ago the content was posted
$.each(data, function(i,item){
dp = item.created_at.split(" ");
cr = Date.parse(dp[1]+' '+dp[2]+' '+dp[5]);
tm = (new Date()).getTime();
dy = (tm - cr) / 86400000;
ct = item.text;
ct = ct.replace(/http:\/\/\S+/g,'<a href="$&" target="_blank">$&</a>'); //make urls into links, do the same for hashtags etc:
ct = ct.replace(/\s(@)(\w+)/g,' @<a onclick="javascript:pageTracker._trackPageview(\'/outgoing/\');" href="$2" target="_blank">$2</a>');
ct = ct.replace(/\s(#)(\w+)/g,' #<a onclick="javascript:pageTracker._trackPageview(\'/outgoing/\');" href="$2" target="_blank">$2</a>');
//add the feed to the div
$("#l_tweets").append('<div> '+ct+'<br />'+Math.round(dy)+' days ago</div><br />');
}); //each
} //function
); //json

Note that the # and @ replace lines may wrap here, but they should be on one line.

[EDIT – this no longer works for Facebook since they change their authentication methods, please see my new post to get the Facebook feed]

Let’s do the same thing for Facebook now – getting the date into a format that Javascript understands is a little more complicated this time:

//facebook - again use your own username
$.each(, function(i,fb){
if (fb.type=='video' || fb.type=='link') {
if ( fb.message = '<a href="' + + '" target="_blank">' + + '</a>';
else fb.message = '<a href="' + fb.source + '" target="_blank">' + + '</a>';
} else fb.message = fb.message.replace(/http:\/\/\S+/g,'<a href="$&" target="_blank">$&</a>');
dt1 = fb.created_time.split("T");
dt2 = dt1[0].split("-");
cr = new Date(dt2[0], dt2[1]-1, dt2[2]).getTime();
tm = (new Date()).getTime();
dy = (tm - cr) / 86400000;
$("#l_tweets").append('<div>' + '' + fb.message + '' + '<br />(' + Math.round(dy) + ' days ago)</div><br />');
}); //each
} //function
); //json

The quotes either side of the fb.message counteract an Internet Explorer problem where content appears as undefined. Now on to Youtube, this is quite similar, but we are throwing images into the mix:

//youtube - as ever, replace your username
function(data) {
$.each(data.feed.entry, function(i, item) {
var published = item['published']['$t'];
var url = item['media$group']['media$content'][0]['url'];
var media_title = item['media$group']['media$title']['$t'];
var media_descr = item['media$group']['media$description']['$t'];
var thumb = item['media$group']['media$thumbnail'][0]['url'];
dt1 = published.split("T");
dt2 = dt1[0].split("-");
cr = new Date(dt2[0], dt2[1]-1, dt2[2]).getTime();
tm = (new Date()).getTime();
dy = (tm - cr) / 86400000;
//note this next part is all one line
$("#l_tweets").append('<div><a href="' + url + '"><img src="' + thumb + '" width="64" height="36"><br />' + media_title + ' (' + media_descr + ')</a><br />(' + Math.round(dy) + ' days ago)</div><br />');
}); //each
} //function
); //json

Just one more to go – Flickr. A couple of things to note here. Firstly, your user id in the JSON link is not your username; you can map your username to the id here: Secondly, all the other feeds have a method of limiting results (I have chosen 5 results each so far). For this feed, we must create our own limiter.

//flickr - see the note above about your user id
var i = 0;
$.each(data.items, function(i,item){
if (i<5) {
dt1 = item.published.split("T");
dt2 = dt1[0].split("-");
cr = new Date(dt2[0], dt2[1]-1, dt2[2]).getTime();
tm = (new Date()).getTime();
dy = (tm - cr) / 86400000;
//this is all one line again
$("#l_tweets").append('<div><a href="' + + '"><img src="' + + '" width="64" height="36"><br />' + item.title + '</a><br />(' + Math.round(dy) + ' days ago)</div><br />');
} //i
}); //each
} //function
); //json

In the websites I have implemented this on, I haven’t needed to return the feeds in date order (currently they are in order of social network). To order them, this is what I would do:
1) create an array to hold the feeds
2) instead of appending the feeds to the div, add them to the array with days first, then a delimiter, then the feed
(e.g. socnet_updates.push(Math.round(dy)+'_DELIMITER_<div>'+ct+'<br />'+Math.round(dy)+' days ago</div><br />');)
3) after the scripts have run, wait a couple of seconds for the data to be filled out (use setTimeout, counting the number of feeds returned until you have them all), then order the array by date (use a natural order algorithm)
4) finally, iterate through the array, split the strings by your delimiter and append the second part to the div

If anyone wants me to show you that code, let me know.

Roundcube: Vacation message with Virtualmin Plugin

None of the vacation/forwarding plugins work with Virtualmin so this is my workaround. Firstly, users need to be created as Mail and FTP users, since we will be using FTP to place the .forward files. Download and unzip the plugin:

cd vacation
vi config.ini

We are going to use Usermin’s own function (installing the Vacation program doesn’t work) – so change these lines:

binary = "/etc/usermin/forward/"
alias_identities = false

At this point I usually edit default.txt and change the default text. Now, I have mail usernames in Virtualmin formatted user@domain, so a few changes need to be made to accommodate this:

cd lib
vi dotforward.class.php

We are going to change some lines. Type :set number to see line numbers.
Line 42 change to:
$arrDotForward[] = $this->options['keepcopy'] = "\\".str_replace('@','-',$this->options['username']);
Line 94 change to:
$this->options['keepcopy'] = ($first_element == str_replace('@','-',$this->options['username']));

We need to tell the script the path to the mailbox, so we need to add some code. There’s a gap around line 68 which we can put the following in – note that this is very hacky and will only work with top level domains (not subdomains) where the username is the first part of the domain. If anyone has a better way to do this, do let me know (the virtualmin files containing the home directory paths are in /etc/webmin/virtual-server/domains but are readable by the root user only or we could parse /etc/passwd for the usernames…):

$user_parts = explode('@',$this->options['username']);
$domain_parts = explode('.',$user_parts[1]);
$this->options['flags'] = '/home/'.$domain_parts[0].'/homes/'.$user_parts[0].'/.vacation.msg';
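The path that hack derives can be checked from the shell with the same string splitting (the username here is a hypothetical example):

```shell
# derive the mailbox path the same way: /home/<first domain label>/homes/<user>/.vacation.msg
username="bob@example.com"
user_part=${username%%@*}      # bob
domain=${username#*@}          # example.com
domain_part=${domain%%.*}      # example
msg_path="/home/$domain_part/homes/$user_part/.vacation.msg"
echo "$msg_path"               # → /home/example/homes/bob/.vacation.msg
```

It inherits the same limitation as the PHP: it only works when the first label of the domain matches the home directory name.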

[EDIT – thanks to Iain, the code to get the correct directory from /etc/passwd follows – use this instead of the previous code block:]

$user_accounts = file_get_contents("/etc/passwd");
$matches = preg_grep("/^" . preg_quote($this->options['username'], '/') . ":/i", explode("\n", $user_accounts));
$user_info = explode(":", current($matches));
$this->options['flags'] = $user_info[5] . '/.vacation.msg';
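You can verify what the /etc/passwd lookup will return for a given account from the shell (root is used here only because it exists everywhere; substitute the mail user):

```shell
# field 6 of the passwd entry is the home directory
home=$(getent passwd root | cut -d: -f6)
echo "$home/.vacation.msg"
```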


Now just add the plugin to the plugins array:

cd ../../../config/

Change the line to:
$rcmail_config['plugins'] = array('vacation');

Update: in Roundcube 0.5 the page layout has changed and the plugin displays incorrectly. To fix this, edit plugins/vacation/skins/default/vacation.css and add in some padding:
#pagecontent {
width: 800px;

Roundcube: Change Virtualmin Password Plugin

I can never get the Virtualmin binary file compilation to work properly, so here’s my workaround to enable changing a user’s Virtualmin password in Roundcube Webmail.

Firstly, let’s go to the plugins/password directory (your path will be different):

cd ~username/public_html/webmail/plugins/password

We need to change the password driver to virtualmin (hit i for insert mode), so it reads:

$rcmail_config['password_driver'] = 'virtualmin';

(Hit Esc to exit insert mode and then ZZ to save. You’ll need these commands throughout this tutorial.)
Now we’ll use sudo instead of the binary compilation:

cd drivers
yum -y install sudo

Edit the sudoers file (run visudo) and comment out the line: Defaults requiretty
At the end, add the following line (change the username to whatever user your webmail directory runs as): username ALL=NOPASSWD: /usr/sbin/virtualmin

If you can’t work out which user it is, create and execute a PHP script in the webmail directory with the following contents: <?php exec('whoami',$output,$return_code); print_r($output); ?>

Now we need to change the exec line in the virtualmin script:

vi virtualmin.php

Change the line that starts with exec to:
exec("sudo /usr/sbin/virtualmin modify-user --domain $domain --user $username --pass $newpass", $output, $returnvalue);

Finally, add password to the plugins array to activate it:

cd ../../../config

Change plugins array to: $rcmail_config['plugins'] = array('password');

CentOS: Install ffmpeg & ffmpeg-php 0.6

The ffmpeg installed by yum cannot be used with ffmpeg-php, so we need to download and compile it:

cd ~admin/software
tar zxfv ffmpeg-0.6.tar.gz
cd ffmpeg-0.6
./configure --enable-shared
make
make install
ldconfig

Now we need to download and configure ffmpeg-php:

cd ~admin/software
tar -xjf ffmpeg-php-0.6.0.tbz2
cd ffmpeg-php-0.6.0

There’s an error in this version (0.6) we need to correct or it won’t compile, so run:

vi ffmpeg_frame.c

We need to substitute PIX_FMT_RGBA32 for PIX_FMT_RGB32, so enter this command :%s/PIX_FMT_RGBA32/PIX_FMT_RGB32 and hit return. Now prepare, compile and install (phpize and ./configure come with the php-devel package):

phpize
./configure
make
make install
echo "" > /etc/php.d/ffmpeg.ini
service httpd restart

CentOS: Install PHP 5.2 with t1lib support

The first step is to vanilla install PHP 5.2 (to handle any dependency issues) and then recompile it with the t1lib option. So enable the testing repo of CentOS 5. Change to root user first, then create the repo:

su -
vi /etc/yum.repos.d/CentOS-Testing.repo

Enter insert mode (hit i) and paste the following into the new file:

# CentOS-Testing:
# !!!! CAUTION !!!!
# This repository is a proving grounds for packages on their way to CentOSPlus and CentOS Extras.
# They may or may not replace core CentOS packages, and are not guaranteed to function properly.
# These packages build and install, but are waiting for feedback from testers as to
# functionality and stability. Packages in this repository will come and go during the
# development period, so it should not be left enabled or used on production systems without due
# consideration.
name=CentOS-5 Testing

Then update PHP and restart Apache (yum will double-check you want to go ahead):

yum update php*
service httpd restart

PHP is now updated, but the t1lib is not installed or compiled into PHP. So let’s download and install it (you’ll need make and gcc installed):

cd ~admin/software
tar zxfv t1lib-5.1.2.tar.gz
cd t1lib-5.1.2
make && make install

If it exits with a latex error, install latex:

yum -y install tetex-latex

Installing t1lib can also be accomplished if you have the rpmforge repo installed (see previous post step 6) with: yum --enablerepo=rpmforge install t1lib
If you upgrade your software in the future and get an error about then install t1lib again using this method and then service httpd restart

Then run the make commands again. T1lib is now installed. Next step is to recompile PHP. Firstly, set up a build environment (still as root) and install some software that we’ll need to compile:

mkdir -p /usr/src/redhat/{SRPMS,RPMS,SPECS,BUILD,SOURCES}
chmod 777 /usr/src/redhat/{SRPMS,RPMS,SPECS,BUILD,SOURCES}
yum -y install rpm-build re2c bison flex

Now, we need to lose our root privileges to compile the software, so run exit or logout to drop back to the admin user. Make sure the source RPM below matches the version of PHP you have just installed – use rpm -q php to check.

cd ~admin/software
rpm --install php-5.2.10-1.el5.centos.src.rpm
vi /usr/src/redhat/SPECS/php.spec

Technically, we should edit the release line to reflect the changes we are making, but that creates dependency issues, so we’ll ignore that and edit the configure lines instead. Scroll to where it says %configure, with various includes after the line. Remove the line that says --disable-rpath \ which will stop the compile working (this is PHP bug #48172) and add at the end: --with-t1lib \

Exit insert mode, save and exit (hit Esc, then ZZ). Now rebuild the RPM files:

rpmbuild -bb /usr/src/redhat/SPECS/php.spec

It’s highly likely that you will now get a list of failed dependencies. All of them need to be installed. The following is my list – yours may be different. Su to the root user and install them, then logout back to the admin user after this command:

su -
yum -y --skip-broken install bzip2-devel curl-devel db4-devel expat-devel gmp-devel aspell-devel httpd-devel libjpeg-devel libpng-devel pam-devel libstdc++-devel sqlite-devel pcre-devel readline-devel libtool gcc-c++ libc-client-devel cyrus-sasl-devel openldap-devel postgresql-devel unixODBC-devel libxml2-devel net-snmp-devel libxslt-devel libxml2-devel ncurses-devel gd-devel freetype-devel

Then run the rpmbuild command again. If you get a GD error after the T1_StrError line, try running this command as root:

su -
ldconfig /usr/local/lib

Run the rpmbuild command again (as non-root). When it finishes (will take a while), install the resultant RPM files as root user:

su -
cd /usr/src/redhat/RPMS/x86_64/
rpm -Uhv --nodeps --force *.rpm
service httpd restart

Your path to the RPMs may be different depending on your architecture.

Secure new CentOS install

Step 1: Secure SSH

Log in as root to your server and type the following commands to backup and then edit the SSH configuration:

cp /etc/ssh/ssh_config /etc/ssh/ssh_config.bak; cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
vi /etc/ssh/ssh_config

Hit the i key to enter insert mode. Then uncomment all the lines after (and including) Host * (i.e. remove the hashes) and change Protocol 2,1 to 2 only. Hit Esc to exit insert mode and type ZZ to quit saving the changes. Then type the following command:

vi /etc/ssh/sshd_config

As before, in insert mode, uncomment the Port, Protocol (and change to 2 only if not already) and ListenAddress statements. Also uncomment and change PermitRootLogin to: no. Quit and save (Esc, ZZ). Then restart the SSH service:
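Those sshd_config edits can also be scripted with sed; a sketch run against a scratch copy here (the sample lines and filename are illustrative – on a real box, back up and target /etc/ssh/sshd_config):

```shell
# build a scratch file mimicking the stock commented-out directives
cfg=sshd_config.test
printf '%s\n' '#Port 22' '#Protocol 2,1' '#ListenAddress' '#PermitRootLogin yes' > "$cfg"
# uncomment Port and ListenAddress, force protocol 2 only, disable root login
sed -i -e 's/^#Port 22/Port 22/' \
       -e 's/^#Protocol 2,1/Protocol 2/' \
       -e 's/^#ListenAddress/ListenAddress/' \
       -e 's/^#PermitRootLogin yes/PermitRootLogin no/' "$cfg"
cat "$cfg"
```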

service sshd restart

Since we have now prevented the root user from logging in remotely (as a security measure – the root user has full access to the entire system and can break things very easily), the final step is to create a user who can log in remotely. Type in:

useradd -g wheel admin
passwd admin

Next time when you log in you can switch to the root user using the following command (enter the root password at the prompt):

su -

Step 2: Install ChkRootKit (rootkit finder)

Create a directory to hold downloaded or compiled software, then install some tools we will need (these may well already be installed):

mkdir -p ~admin/software
cd ~admin/software
yum -y install gcc make wget vixie-cron perl

Download and install ChkRootKit:

tar zxfv chkrootkit.tar.gz
cd chkrootkit-*
make sense

Then add a cron entry to run the script automatically (this is still done as the root user):

crontab -e

Tell it to run every day at 3am and email you the errors – add the following line (use the same commands as when using vim above):

0 3 * * * /home/admin/software/chkrootkit-*/chkrootkit -q 2>&1 | mail -s "ChkRootKit Output from `hostname`"

You could do that bit of editing entirely on the command line by creating a temporary file and then adding that to the crontab like this:

touch crontab_temp
crontab -l > crontab_temp
echo "0 3 * * * /home/admin/software/chkrootkit-*/chkrootkit -q 2>&1 | mail -s \"ChkRootKit Output from \`hostname\`\"" >> crontab_temp
cat crontab_temp | crontab
rm -f crontab_temp

Step 3: Install Portsentry (check for people sniffing/scanning your ports and block them)

cd ~admin/software

32-bit version – use this if your OS is 32-bit – download and install the existing package:

rpm -Uhv portsentry-1.2-1.te.i386.rpm
/etc/rc.d/init.d/portsentry start
echo "/etc/rc.d/init.d/portsentry" >> /etc/rc.d/rc.local

64-bit version – use this if your OS is 64-bit – we need to compile the original program, but there is an error in one of the files we need to fix first:

tar zxfv portsentry-1.2.tar.gz
cd portsentry_beta
vi portsentry.c

The error is on line 1584 and will prevent the program from compiling. To see line numbers, type in :set number
Find line 1584 and remove the line break in the middle of that sentence. Then install:

make linux
make install

Next we need to create a script to control the service:

vi /etc/init.d/portsentry

Start insert mode and paste all this into the file (careful of linebreaks – then save and quit):


#!/bin/sh
case "$1" in
start)
echo "Starting Portsentry..."
ps ax | grep -iw '/usr/local/psionic/portsentry/portsentry -atcp' | grep -iv 'grep' > /dev/null
if [ $? != 0 ]; then
/usr/local/psionic/portsentry/portsentry -atcp
fi
ps ax | grep -iw '/usr/local/psionic/portsentry/portsentry -audp' | grep -iv 'grep' > /dev/null
if [ $? != 0 ]; then
/usr/local/psionic/portsentry/portsentry -audp
fi
echo "Portsentry is now up and running!"
;;
stop)
echo "Shutting down Portsentry..."
array=(`ps ax | grep -iw '/usr/local/psionic/portsentry/portsentry' | grep -iv 'grep' \
| awk '{print $1}' | cut -f1 -d/ | tr '\n' ' '`)
element_count=${#array[@]}
index=0
while [ "$index" -lt "$element_count" ]; do
kill -9 ${array[$index]}
let "index = $index + 1"
done
echo "Portsentry stopped!"
;;
restart)
$0 stop && sleep 3
$0 start
;;
*)
echo "Usage: $0 {start|stop|restart}"
exit 1
;;
esac
exit 0

Then we need to make that script executable, add portsentry to the startup scripts and start it up:

chmod 755 /etc/init.d/portsentry
ln -s /etc/init.d/portsentry /etc/rc2.d/S20portsentry
ln -s /etc/init.d/portsentry /etc/rc3.d/S20portsentry
ln -s /etc/init.d/portsentry /etc/rc4.d/S20portsentry
ln -s /etc/init.d/portsentry /etc/rc5.d/S20portsentry
ln -s /etc/init.d/portsentry /etc/rc0.d/K20portsentry
ln -s /etc/init.d/portsentry /etc/rc1.d/K20portsentry
ln -s /etc/init.d/portsentry /etc/rc6.d/K20portsentry
/etc/init.d/portsentry start

Step 4: Install LibSafe (prevents buffer overflow exploits)

cd ~admin/software

Download for 32-bit:


Or for 64-bit:


Then install:

rpm -Uhv libsafe-2.0-16*.rpm

Step 5: Install Hogwash (inline packet scrubber)

Download, install and configure Hogwash:

cd ~admin/software
tar zxfv devel-0.5-latest.tgz
cd distro/devel-0.5/devel-0.5
cp hogwash /sbin
mkdir /var/log/hogwash
mkdir /etc/hogwash
cd rules
cp *.rules /etc/hogwash
cd ..
cp *.config /etc/hogwash
cp /etc/hogwash/stock.config /etc/hogwash/live.config

We need to create another control script, but we can do this on the command line:

touch Hog
echo '#!/bin/sh' >> Hog # needs single quotes
echo "#chkconfig: 2345 11 89" >> Hog
echo "#description: Automates Hogwash packet filter" >> Hog
echo "/sbin/hogwash -d -c /etc/hogwash/live.config -r /etc/hogwash/live.rules -l /var/log/hogwash" >> Hog
chmod 700 Hog

Make sure it starts at boot time:

cp Hog /etc/rc.d/init.d
chkconfig --add Hog

Step 6: Install DenyHosts (blocks brute force login attempts)

cd ~admin/software

Install the RPMForge repo – for 32-bit:


Or for 64-bit:


Install, configure and make sure your own address is not blocked (substitute your IP address in the code below)

rpm -i rpmforge-release-0.5.1-1.el5.rf.*.rpm
yum check-update
yum -y install denyhosts
echo "sshd:" >> /etc/hosts.allow
perl -pi -e "s/PURGE_DENY =/PURGE_DENY = 7d/g;" /etc/denyhosts/denyhosts.cfg
chkconfig denyhosts on
service denyhosts start

Step 7: Install RootKit Hunter (yes, another one)

Download and configure RkHunter, then set up the cronjob to execute automatically (as above) and email you if there are warnings:

yum -y install rkhunter
cd ~admin/software
perl -pi -e "s/MAIL-ON-WARNING=\"\"/MAIL-ON-WARNING=\"your\\"/g;" /etc/rkhunter.conf
touch crontab_temp
crontab -l > crontab_temp
echo "0 4 * * * /usr/bin/rkhunter --cronjob 2>&1" >> crontab_temp
echo "@monthly /usr/bin/rkhunter --update" >> crontab_temp
cat crontab_temp | crontab
rm -f crontab_temp


There are a couple of other things I always do when setting up a server.

• Disable the weak ciphers in the SSH server:
sshd -T | grep ciphers | sed -e "s/\(3des-cbc\|aes128-cbc\|aes192-cbc\|aes256-cbc\|arcfour\|arcfour128\|arcfour256\|blowfish-cbc\|cast128-cbc\)\,\?//g" >> /etc/ssh/sshd_config
service sshd restart
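The heart of that pipeline is the sed substitution, which deletes each weak cipher (and its trailing comma) from the comma-separated list; a standalone sketch on a fabricated cipher list (the alternation lists only the ciphers to drop):

```shell
# fabricated sample list mixing strong and weak ciphers
sample="aes128-ctr,3des-cbc,arcfour128,aes256-ctr"
# delete each weak cipher name plus the comma that follows it
filtered=$(echo "$sample" | sed -e "s/\(3des-cbc\|arcfour128\|arcfour256\|blowfish-cbc\)\,\?//g")
echo "$filtered"   # → aes128-ctr,aes256-ctr
```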

• Disable Apache mod_status (see httpd.conf or 00-base.conf in /etc/httpd/conf/)
• Turn off TRACK|TRACE in Apache:
echo "TraceEnable Off" >> /etc/httpd/conf/httpd.conf
service httpd reload

• If Webmin is installed tweak the SSL Options and only allow the following ciphers: ALL:!ADH:!LOW:!MEDIUM:!SSLv2:!EXP:+HIGH