IP Failover (proof of concept)

Essentially the idea is that if your primary web server goes down, your backup server automatically takes over (failover). When your primary server is back online, it takes over again (failback). It sounds simple, but it can be very complicated and expensive to eliminate all the single points of failure – you could need multiple redundant routers, switches, servers, power supplies, UPSes and storage, all on separate networks.

The simple setup I am proposing uses a minimum of two servers: the primary and the backup, on separate networks.

An alternative to IP failover (where the DNS records are updated to point at a new IP) is IP takeover (where the backup server actually takes on the IP address of the failed primary), but for this to work the IP addresses need to be behind the same router, so the servers would need to be at the same web host. That’s fine if the primary server failure is a local hardware issue, but if it’s a power failure in the datacentre, or a problem with the network, a switch, a router, or any transit or peering connection, then both servers would have the same problem – so it makes more sense to make them geographically disparate, on entirely separate networks.

It should be noted that IP failover services are provided by sites like dnsmadeeasy.com and zoneedit.com – this would be much simpler to set up, but of course you pay for the privilege.

There are two options for the backup server. Either it just displays a “service currently not available” type page (simple to set up) or it is a complete mirror of the primary server. For my purposes it is a status page, but it is perfectly possible to replicate the primary server if necessary – you could run rsync (preferably through an SSH tunnel with a key pair) every few minutes from cron, something along the lines of:

rsync -avz --delete -e "ssh -i /root/rsync/mirror-rsync-key" /home/website user@server.com:/home/website

…with MySQL replication for the databases, for example. Google has multiple articles and how-tos for both scenarios.
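
To run it from cron, a crontab entry along these lines would do (the five-minute interval and paths are just illustrative):

*/5 * * * * rsync -avz --delete -e "ssh -i /root/rsync/mirror-rsync-key" /home/website user@server.com:/home/website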

A possible issue with a replicated server: if users change files via FTP or alter the database through a script while the backup server is active, you would either have to prevent them from doing so or make sure you mirror those changes back to the primary server when it comes back online – unless, of course, you are using a separate server for the database and shared storage for the web files.

This is how I propose to do it: there’s a script on the backup server that monitors the primary server (a heartbeat-type service, run as a PHP script every few minutes by cron). The script can optionally check with any other spare servers to see if they can contact the primary server (to make sure it is definitely down). If the primary server is definitely down, the backup server updates the DNS entries for the server domain(s) to point to itself (with a low TTL in case the primary comes back online).

It’s simplest if the primary server is also the primary DNS server and the backup server is the secondary DNS server: then if a user cannot connect to the primary server, they also cannot get the outdated DNS records from it (assuming the whole server is down, not just the web service – which the heartbeat script should check).

Major caveat: some ISPs may ignore the TTL of DNS records and cache the wrong results for too long, but nothing can be done about that.

Updating the DNS records could be done using the nsupdate command, but there may be issues with permissions, both to run the program and for the server to update the DNS records, so it’s simpler if the backup is running Virtualmin, which comes with a remote API that can be called from a PHP script.

This is the flowchart of what happens:

BACKUP checks PRIMARY is up every x mins and loads previous state from database:
     > PRIMARY was up and is still up – do nothing
     > PRIMARY was up and is now down:
          > check with SPARE servers:
               > cannot contact SPARES – internet probably down at BACKUP – do nothing
               > at least 1 SPARE reports PRIMARY up – network issues – do nothing
               > otherwise – update DNS and log to database
     > PRIMARY was down and is still down – do nothing
     > PRIMARY was down and is back up – update DNS and log to database
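
Something like the following minimal PHP sketch could implement that logic – this is a proposal, not tested code; the spare server URL, the flat state file (standing in for the database) and the empty failover()/failback() stubs (which would wrap the Virtualmin DNS calls shown below) are all illustrative assumptions:

<?php
//heartbeat sketch - run on BACKUP from cron every few minutes

function is_up($url) {
    //the primary counts as up if it answers an HTTP request
    return @get_headers($url) !== false;
}

function failover() {
    //here we would run the Virtualmin DNS update shown in the script below, and log to the database
}

function failback() {
    //here we would reverse the DNS change (see the comment at the end of the script below)
}

$state_file = '/root/primary_state'; //holds 'up' or 'down' between runs
$was_up = (trim(@file_get_contents($state_file)) != 'down');
$now_up = is_up('http://www.primarydomain.com/');

if ($was_up && !$now_up) {
    //primary looks down - double-check with a spare before failing over
    //(the spare runs a script that checks the primary and prints 'up' or 'down')
    $spare = @file_get_contents('http://spare.example.com/check_primary.php');
    if ($spare === false) {
        //cannot contact the spare - internet probably down at BACKUP - do nothing
    } elseif (trim($spare) == 'up') {
        //the spare can still see the primary - network issues - do nothing
    } else {
        failover();
        file_put_contents($state_file, 'down');
    }
} elseif (!$was_up && $now_up) {
    //primary is back - reverse the process
    failback();
    file_put_contents($state_file, 'up');
}
?>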

This is how the PHP script could update the DNS records:

<?php

//if the primary is down, fail over: remove the www record, then add a new one pointing at the backup's IP (1.2.3.4 in this example):

$result1 = shell_exec("wget -O - --quiet --http-user=root --http-passwd=root_pass --no-check-certificate 'https://www.backupdomain.com:10000/virtual-server/remote.cgi?program=modify-dns&domain=primarydomain.com&remove-record=www.primarydomain.com. A'");

$result2 = shell_exec("wget -O - --quiet --http-user=root --http-passwd=root_pass --no-check-certificate 'https://www.backupdomain.com:10000/virtual-server/remote.cgi?program=modify-dns&domain=primarydomain.com&ttl=60&add-record=www.primarydomain.com. A 1.2.3.4'");

//echo $result2; //should end with: Exit status: 0 if successful

// if primary back online, reverse the process (though the primary as the primary DNS server should eventually update the record on the secondary DNS server anyway)
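//for example, reversing the two calls above (5.6.7.8 standing in for the primary's usual IP - illustrative only):
//$result3 = shell_exec("wget -O - --quiet --http-user=root --http-passwd=root_pass --no-check-certificate 'https://www.backupdomain.com:10000/virtual-server/remote.cgi?program=modify-dns&domain=primarydomain.com&remove-record=www.primarydomain.com. A'");
//$result4 = shell_exec("wget -O - --quiet --http-user=root --http-passwd=root_pass --no-check-certificate 'https://www.backupdomain.com:10000/virtual-server/remote.cgi?program=modify-dns&domain=primarydomain.com&ttl=3600&add-record=www.primarydomain.com. A 5.6.7.8'");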

?>

I’ll post more actual examples when I get around to implementing this 🙂

RAID error email reporting with the 3ware 9550SXU-8L and tw_cli

This is a follow up to my previous post, dmraid error reporting by email – this time for a hardware RAID controller, the 3ware/AMCC 9550SXU-8L. The concept is exactly the same; the only thing that differs is the script that checks the RAID array status.

We will be using the command line tool tw_cli, which can be downloaded from www.3ware.com (now LSI) – go to Support, select your product, then download the command line tools zip (currently the “CLI Linux – 10.2 code set”). In this example, the tw_cli binary has been extracted to / and chmodded 755.

Follow the other post, but instead of the script given there, insert this one:

#!/bin/sh
# check raid status and email if not ok
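# STATUS grabs every line mentioning RAID; OK keeps only those that also say OK
# if the two differ, at least one unit is not reporting OK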
STATUS=`/tw_cli info c0 | grep "RAID"`
OK=`echo "$STATUS" | grep "OK"`
if [ "$STATUS" != "$OK" ]
then
/tw_cli info c0 | mail -s "RAID ERROR ON `hostname`" your@email.com
fi

This works for my setup using the 10.2 version of the command line tools. I have 2 separate RAID arrays on the card (which is why I grab every line containing the word “RAID” and then check that each of them also contains the word “OK”). If your card shows up as a different controller number or you have multiple cards, you may need to change the c0 in the command line options.

dmraid error reporting by email

dmraid is a software raid/fakeraid/onboard raid tool.

As far as I can tell, the only error reporting that dmraid does is by hooking into logwatch – which emails me a very long report I don’t often read, and I would like to know immediately if my RAID array is degraded.

This works for me on CentOS 5.6 with dmraid installed – no guarantees on other flavours/combinations. My dmraid version is dmraid-1.0.0.rc13-63.el5. I haven’t tested the output from other versions of dmraid, but it would be pretty trivial to update the script if they are different (or your path is different).

So what we are going to do is write a simple shell script that checks the array status and emails if there is a problem. Then we run the script every 15 (or whatever) minutes.

dmraid needs to be run as root, so you might as well su - for the whole of this.

To create the file just vi /raid_status.sh, hit i to insert and paste this:

#!/bin/sh
# check raid status and email if not ok
STATUS=`/sbin/dmraid -s | grep "status"`
if [ "$STATUS" != "status : ok" ]
then
/sbin/dmraid -s | mail -s "RAID ERROR ON `hostname`: $STATUS" your@email.com
fi

Hit Esc, ZZ to save, then make the file executable:

chmod 755 /raid_status.sh

Now add it to cron so that it runs regularly:

crontab -e

…and insert the line (i to insert – Esc, ZZ to save):

00,15,30,45 * * * * /raid_status.sh

Voila! dmraid with email error reporting.

PHP Recursive File Copy Function

I couldn’t find a function online that copies folders recursively in PHP and actually works, so I wrote my own:

function recursiveCopy($src, $dest) {
    if (!is_dir($src)) return; //nothing to copy if the source isn't a folder
    $dir = opendir($src);
    //compare against false so that files named "0" don't end the loop early
    while (($file = readdir($dir)) !== false) {
        if ($file != '.' && $file != '..') {
            if (!is_dir($src.'/'.$file)) copy($src.'/'.$file, $dest.'/'.$file);
            else {
                if (!is_dir($dest.'/'.$file)) mkdir($dest.'/'.$file, 0750);
                recursiveCopy($src.'/'.$file, $dest.'/'.$file);
            } //else
        } //if
    } //while
    closedir($dir);
} //function

To summarise: if the source is a folder, open it and start reading the files. If the files are not folders, copy them straight to the destination, if they are folders, create a new folder at the destination and then run the function again (within itself) for the new folder.

Usage:

recursiveCopy('/home/site/public_html/folder','/home/othersite/public_html/folder');

Javascript (JQuery): Social networking feeds – new Facebook authentication

After my previous post, Javascript (JQuery): Social networking feeds all in one place, Facebook went and added authentication to the feed retrieval. After much head-scratching, this is how to enable the Facebook feed under the new OAuth system.

You need an access token to get to the data, so what we are going to do is create a Facebook App which the user then permits to access their information and that will give us the token we need.

So first you need to create a Facebook App. This is simpler than it sounds: we don’t need an App that actually does anything or even really exists, we just need it for authentication. So, install the Developer App on Facebook and then go to that App and select Set Up New App. Enter the details of the App and be sure to give it a URL and domain – e.g. http://www.cheesefather.com as the URL and cheesefather.com as the domain. What you put in doesn’t matter that much.

The new App will have an Application id – a load of numbers. Now, this is the method to get the access token. Log in as the user you want the feed for (I am assuming you are using this to retrieve your own feed) and then go to the page:

https://www.facebook.com/dialog/oauth?client_id=YOUR_APP_ID&redirect_uri=http://www.YOUR_URL.com&scope=read_stream,offline_access

Replace your App id and URL in the example. What we are doing here is creating an App request to the user to access data, including the feed (read_stream) and to access the data when they are offline (offline_access) with a token that does not expire (ever, if I’m reading the docs correctly, even if they uninstall the App).

Once you have accepted, Facebook redirects to your URL, passing a very long code to it (you can just copy it from the address bar) – copy this code, the part after ?code=

Then we can finally request the access token. As well as the two codes we just used we also need your App Secret from your Facebook App page. Get the secret and then go to the following page:

https://graph.facebook.com/oauth/access_token?client_id=YOUR_APP_ID&redirect_uri=http://www.YOUR_URL.com&client_secret=YOUR_APP_SECRET&code=THAT_LONG_CODE&type=client_cred

Obviously replace your App id, URL, App secret and the long code with your own values. Facebook passes back an access token (check the source code if your browser isn’t displaying it).

All you need to do now is add that access token to the feed request (see the previous post for the rest of the scripts):

$.getJSON(" https://graph.facebook.com/USER_ID/posts?access_token=ACCESS_TOKEN&limit=5&callback=?",

…replacing the user id of the feed you want to retrieve.

BUT WAIT!…

You don’t want people who check your source code to have access to your Facebook account, so we need to hide that token. This is how I did it – call a PHP proxy script from the Javascript and have the PHP return the content minus the access token. So you change that line to:

$.getJSON("facebook.inc.php?callback=?",

Or whatever the name of your new PHP file is. And then the contents of that new file are:

<?php
$access_token = 'YOUR_ACCESS_TOKEN';
//tell the browser we are outputting Javascript so that JQuery interprets the results properly
header('Content-Type: text/javascript; charset=UTF-8');
//pass the visitor's user agent through so that Facebook returns the data correctly
ini_set('user_agent', $_SERVER['HTTP_USER_AGENT']);
$handle = fopen('https://graph.facebook.com/USER_ID/posts?access_token='.$access_token.'&limit=5&callback='.$_GET['callback'], 'rb');
$contents = '';
if ($handle) {
    while (!feof($handle)) {
        $contents .= fread($handle, 8192);
    }
    fclose($handle); //only close the handle if it was actually opened
}
//strip the access token from the output (it appears in links that are returned)
$contents = str_replace($access_token, '', $contents);
echo $contents;
exit();
?>

Replace your access token and user id in the above example.

What this code does is as follows: you define your access token; tell the browser that the output is Javascript (so that JQuery interprets the results properly); pass through the visitor’s browser information so that Facebook returns the data correctly; open a connection to the JSON page using your access token and the callback reference that JQuery has assigned; strip all references to your access token from the output (it appears in links that are returned); and finally print the output so that it can be interpreted by the original JQuery function. Voila! What was so simple just a week ago is now quite a bit more complicated…

Javascript (JQuery): Social networking feeds all in one place

Today we are going to get feeds from Twitter, Facebook, Youtube and Flickr all in one place for your website using JSON and the JQuery library:

Be careful on this page with wrapped lines!

Firstly we declare the JQuery library in our head section (obviously download it first and change the path to match your system):

<script src="lib/jquery-1.4.2.min.js"></script>

Now we define a div to put all the feeds in:

<div id="l_tweets"><br /><br />Social Networks Update:</div>

Now let’s write some Javascript! We’ll start with Twitter:

<script type="text/javascript">
//twitter - use your own username in the getJSON line
//we'll get the feed and work out how many days ago the content was posted
$.getJSON("http://twitter.com/statuses/user_timeline.json?screen_name=username&include_entities=true&count=5&callback=?",
function(data){
$.each(data, function(i,item){
dp = item.created_at.split(" ");
cr = Date.parse(dp[1]+' '+dp[2]+' '+dp[5]);
tm = (new Date()).getTime();
dy = (tm - cr) / 86400000;
ct = item.text;
ct = ct.replace(/http:\/\/\S+/g,'<a href="$&" target="_blank">$&</a>'); //make urls into links, do the same for hashtags etc:
ct = ct.replace(/\s(@)(\w+)/g,' @<a onclick="javascript:pageTracker._trackPageview(\'/outgoing/twitter.com/\');" href="http://twitter.com/$2" target="_blank">$2</a>');
ct = ct.replace(/\s(#)(\w+)/g,' #<a onclick="javascript:pageTracker._trackPageview(\'/outgoing/search.twitter.com/search?q=%23\');" href="http://search.twitter.com/search?q=%23$2" target="_blank">$2</a>');
//add the feed to the div
$("#l_tweets").append('<div> '+ct+'<br />'+Math.round(dy)+' days ago</div><br />');
}); //each
} //function
); //json

Note that the # and @ replace lines may wrap here, but they should be on one line.

[EDIT – this no longer works for Facebook since they changed their authentication methods, please see my new post to get the Facebook feed]

Let’s do the same thing for Facebook now – getting the date into a format that Javascript understands is a little more complicated this time:

//facebook - again use your own username
$.getJSON("http://graph.facebook.com/username/posts?limit=5&callback=?",
function(json){
$.each(json.data, function(i,fb){
if (fb.type=='video' || fb.type=='link') {
if (fb.link) fb.message = '<a href="' + fb.link + '" target="_blank">' + fb.name + '</a>';
else fb.message = '<a href="' + fb.source + '" target="_blank">' + fb.name + '</a>';
}
else fb.message = fb.message.replace(/http:\/\/\S+/g,'<a href="$&" target="_blank">$&</a>');
dt1 = fb.created_time.split("T");
dt2 = dt1[0].split("-");
cr = new Date(dt2[0], dt2[1]-1, dt2[2]).getTime(); //build the date from the parts to avoid day/month rollover issues
tm = (new Date()).getTime();
dy = (tm - cr) / 86400000;
$("#l_tweets").append('<div>' + '' + fb.message + '' + '<br />(' + Math.round(dy) + ' days ago)</div><br />');
}); //each
} //function
); //json

The quotes on either side of fb.message counteract an Internet Explorer problem where content appears as undefined. Now on to Youtube – this is quite similar, but we are throwing images into the mix:

//youtube - as ever, replace your username
$.getJSON('http://gdata.youtube.com/feeds/users/username/uploads?alt=json-in-script&max-results=5&callback=?',
function(data) {
$.each(data.feed.entry, function(i, item) {
var published = item['published']['$t'];
var url = item['media$group']['media$content'][0]['url'];
var media_title = item['media$group']['media$title']['$t'];
var media_descr = item['media$group']['media$description']['$t'];
var thumb = item['media$group']['media$thumbnail'][0]['url'];
dt1 = published.split("T");
dt2 = dt1[0].split("-");
cr = new Date(dt2[0], dt2[1]-1, dt2[2]).getTime(); //build the date from the parts to avoid day/month rollover issues
tm = (new Date()).getTime();
dy = (tm - cr) / 86400000;
//note this next part is all one line
$("#l_tweets").append('<div><a href="' + url + '"><img src="' + thumb + '" width="64" height="36"><br />' + media_title + ' (' + media_descr + ')</a><br />(' + Math.round(dy) + ' days ago)</div><br />');
}); //each
} //function
); //json

Just one more to go – Flickr. A couple of things to note here. Firstly, your user id in the JSON link is not your username – you can map your username to the id here: http://idgettr.com. Secondly, all the other feeds have a method of limiting results (I have chosen 5 results each so far); for this feed, we must create our own limiter.

//flickr - see the note above about your user id
$.getJSON("http://api.flickr.com/services/feeds/photos_public.gne?id=user_id&lang=en-us&format=json&jsoncallback=?",
function(data){
$.each(data.items, function(i,item){
if (i<5) {
dt1 = item.published.split("T");
dt2 = dt1[0].split("-");
cr = new Date(dt2[0], dt2[1]-1, dt2[2]).getTime(); //build the date from the parts to avoid day/month rollover issues
tm = (new Date()).getTime();
dy = (tm - cr) / 86400000;
//this is all one line again
$("#l_tweets").append('<div><a href="' + item.link + '"><img src="' + item.media.m + '" width="64" height="36"><br />' + item.title + '</a><br />(' + Math.round(dy) + ' days ago)</div><br />');
} //i
}); //each
} //function
); //json

In the websites I have implemented this on, I haven’t needed to return the feeds in date order (currently they are in order of social network). To order them, this is what I would do:
1) create an array to hold the feeds
2) instead of appending the feeds to the div, add them to the array with days first, then a delimiter, then the feed
(e.g. socnet_updates.push(Math.round(dy)+'_DELIMITER_<div>'+ct+'<br />'+Math.round(dy)+' days ago</div><br />');)
3) after the scripts have run, wait a couple of seconds for the data to be filled out (use setTimeout, counting the number of feeds returned until you have them all) and then order the array by date (use a natural order algorithm)
4) finally, iterate through the array, split the strings by your delimiter and append the second part to the div

If anyone wants me to show you that code, let me know.

Roundcube: Vacation message with Virtualmin Plugin

None of the vacation/forwarding plugins work with Virtualmin so this is my workaround. Firstly, users need to be created as Mail and FTP users, since we will be using FTP to place the .forward files. Download and unzip the plugin:

wget -O vacation-1.9.9.zip "http://downloads.sourceforge.net/project/rcubevacation/vacation-1.9.9.zip?use_mirror=ovh&ts=1280145947"
unzip vacation-1.9.9.zip
cd vacation
vi config.ini

We are going to use Usermin’s own function (installing the Vacation program doesn’t work) – so change these lines:

binary = "/etc/usermin/forward/autoreply.pl"
alias_identities = false

At this point I usually edit default.txt and change the default text. Now, I have mail usernames in Virtualmin formatted user@domain, so a few changes need to be made to accommodate this:

cd lib
vi dotforward.class.php

We are going to change some lines. Type :set number to see line numbers.
Line 42 change to:
$arrDotForward[] = $this->options['keepcopy'] = "\\".str_replace('@','-',$this->options['username']);
Line 94 change to:
$this->options['keepcopy'] = ($first_element == str_replace('@','-',$this->options['username']));

We need to tell the script the path to the mailbox, so we need to add some code. There’s a gap around line 68 into which we can put the following – note that this is very hacky and will only work with top-level domains (not subdomains) where the username is the first part of the domain. If anyone has a better way to do this, do let me know (the Virtualmin files containing the home directory paths are in /etc/webmin/virtual-server/domains but are readable by the root user only – or we could parse /etc/passwd for the usernames…):

$user_parts = explode('@',$this->options['username']);
$domain_parts = explode('.',$user_parts[1]);
$this->options['flags'] = '/home/'.$domain_parts[0].'/homes/'.$user_parts[0].'/.vacation.msg';

[EDIT – thanks to Iain, the code to get the correct directory from /etc/passwd follows – use this instead of the previous code block:]

$user_accounts = file_get_contents("/etc/passwd");
//preg_quote stops the . (and any other special characters) in the username being treated as regex syntax
$matches = preg_grep("/" . preg_quote($this->options['username'], '/') . "/i", explode("\n", $user_accounts));
$user_info = explode(":", current($matches));
$this->options['flags'] = $user_info[5] . '/.vacation.msg';

[/EDIT]

Now just add the plugin to the plugins array:

cd ../../../config/
vi main.inc.php

Change the line to:
$rcmail_config['plugins'] = array('vacation');

Update: in Roundcube 0.5 the page layout has changed and the plugin displays incorrectly. To fix this, edit plugins/vacation/skins/default/vacation.css and add in some padding:
#pagecontent {
width: 800px;
padding-top:70px;
}

Roundcube: Change Virtualmin Password Plugin

I can never get the Virtualmin binary file compilation to work properly, so here’s my workaround to enable changing a user’s Virtualmin password in Roundcube Webmail.

Firstly, let’s go to the plugins/password directory (your path will be different):

cd ~username/public_html/webmail/plugins/password
cp config.inc.php.dist config.inc.php
vi config.inc.php

We need to change the password driver to virtualmin (hit i for insert mode), so it reads:

$rcmail_config['password_driver'] = 'virtualmin';

(Hit Esc to exit insert mode and then ZZ to save. You’ll need these commands throughout this tutorial.)
Now we’ll use sudo instead of the binary compilation:

cd drivers
yum -y install sudo
visudo

Comment out the line: Defaults requiretty
At the end, add the following line (change the username to whatever user your webmail directory runs as): username ALL=NOPASSWD: /usr/sbin/virtualmin

If you can’t work out which user it is, create and execute a PHP script in the webmail directory with the following contents:

<?php exec('whoami',$output,$return_code); print_r($output); ?>

Now we need to change the exec line in the virtualmin script:

vi virtualmin.php

Change the line that starts with exec to (escapeshellarg protects against characters in the password that the shell would otherwise mangle):
exec("sudo /usr/sbin/virtualmin modify-user --domain " . escapeshellarg($domain) . " --user " . escapeshellarg($username) . " --pass " . escapeshellarg($newpass), $output, $returnvalue);

Finally, add password to the plugins array to activate it:

cd ../../../config
vi main.inc.php

Change plugins array to: $rcmail_config['plugins'] = array('password');

CentOS: Install ffmpeg & ffmpeg-php 0.6

The ffmpeg installed by yum cannot be used with ffmpeg-php, so we need to download and compile it:

cd ~admin/software
wget http://www.ffmpeg.org/releases/ffmpeg-0.6.tar.gz
tar zxfv ffmpeg-0.6.tar.gz
cd ffmpeg-0.6
./configure --enable-shared
make
make install

Now we need to download and configure ffmpeg-php:

cd ~admin/software
wget -O ffmpeg-php-0.6.0.tbz2 "http://downloads.sourceforge.net/project/ffmpeg-php/ffmpeg-php/0.6.0/ffmpeg-php-0.6.0.tbz2?use_mirror=puzzle&ts=1278667907"
tar -xjf ffmpeg-php-0.6.0.tbz2
cd ffmpeg-php-0.6.0
phpize
./configure

There’s an error in this version (0.6) that we need to correct or it won’t compile, so run:

vi ffmpeg_frame.c

We need to replace PIX_FMT_RGBA32 with PIX_FMT_RGB32, so enter the command :%s/PIX_FMT_RGBA32/PIX_FMT_RGB32 and hit return. Now compile and install:

make
make install
echo "extension=ffmpeg.so" > /etc/php.d/ffmpeg.ini
service httpd restart

Matrox RT.X100 project in Adobe Premiere RT.X2

If you open an old Matrox RT.X100 project using an RT.X2 installation, it opens as a standard Adobe Premiere project – without using any hardware acceleration, thus losing the entire point of the system. The answer is simple, but not documented anywhere:

Step 1: Create a new Matrox-based Project as normal

Step 2: Go to File, Import and import the old project (select Entire Project)
Now we have the project, but it still isn’t using the hardware or showing on the preview monitor

Step 3: Open old project sequence(s), select all and copy

Step 4: Open/create new sequence and paste the old sequence to the new sequence
Now the old project is using the new settings – simples!