Process email bounces with PHP

This is a quick script to process email bounces, for example from a mailing list, so that users can be flagged up or unsubscribed when they have too many failures.

The actual bounce identification will be done by Chris Fortune’s Bounce Handler, which you can download from:
http://anti-spam-man.com/php_bouncehandler/

We require 3 files from that package:
bounce_driver.class.php
bounce_responses.php
rfc1893.error.codes.php

The script fetches the bounced emails from a specified mailbox and counts how many failures there are per email address. If the count reaches your threshold value (called $delete), you run your own code to unsubscribe the address (or whatever else you need), and the bounced emails are then deleted. You can run the script as a cron job or call it from your mailing list script to tidy up subscriptions.

<?php

# define variables
$mail_box = '{mail.domain.com:143/novalidate-cert}'; //imap example
$mail_user = 'username'; //mail username
$mail_pass = 'password'; //mail password
$delete = 5; //threshold: process addresses with at least this many failures and delete their bounces

# connect to mailbox
$conn = imap_open ($mail_box, $mail_user, $mail_pass) or die(imap_last_error());
$num_msgs = imap_num_msg($conn);

# start bounce class
require_once('bounce_driver.class.php');
$bouncehandler = new Bouncehandler();

# get the failures
$email_addresses = array();
$delete_addresses = array();
for ($n = 1; $n <= $num_msgs; $n++) {
  $bounce = imap_fetchheader($conn, $n).imap_body($conn, $n); //entire message
  $multiArray = $bouncehandler->get_the_facts($bounce);
  if (!empty($multiArray[0]['action']) && !empty($multiArray[0]['status']) && !empty($multiArray[0]['recipient'])) {
    if ($multiArray[0]['action'] == 'failed') {
      $recipient = trim($multiArray[0]['recipient']);
      if (!isset($email_addresses[$recipient])) $email_addresses[$recipient] = 0; //avoid an undefined-index notice
      $email_addresses[$recipient]++; //increment number of failures
      $delete_addresses[$recipient][] = $n; //add message number to delete array
    } //if delivery failed
  } //if parsed as a bounce
} //for each message

# process the failures
foreach ($email_addresses as $address => $failures) { //$address is the email address, $failures is the number of failures
  if ($failures >= $delete) {
    /*
    do whatever you need to do here, e.g. unsubscribe the email address
    */
    # mark this address's bounce messages for deletion
    foreach ($delete_addresses[$address] as $delnum) imap_delete($conn, $delnum);
  } //if failed at least $delete times
} //foreach address

# delete messages
imap_expunge($conn);

# close
imap_close($conn);

?>
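
If you run it from cron, an entry along these lines would do the job (the script path and schedule here are only examples):

*/30 * * * * /usr/bin/php /path/to/process_bounces.php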

Monitor server cpu resources with email notification

I thought I’d write a quick script to keep an eye on which processes/users are using too many CPU cycles on my CentOS server. It checks the load over the previous 5 minutes and emails a detailed list of CPU-hungry processes if it’s over the defined limit. Run it from cron to watch those resources:

#!/bin/bash
CPU_LIMIT="10" # load average threshold - relative to core count, so a quad-core at full capacity shows 4
EMAIL="your@email.com"
# the second field of /proc/loadavg is the 5-minute load average
if [ $(echo "$(cut -d " " -f 2 /proc/loadavg) >= $CPU_LIMIT" | bc) = 1 ]; then
  ps ax --sort=-pcpu o user,pid,pcpu,pmem,vsz,rss,stat,time,comm | mail -s "CPU OVER LIMIT ON `hostname`" $EMAIL
fi
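
To run it every five minutes, a crontab entry along these lines would do (the script path is an assumption):

*/5 * * * * /root/cpu_check.sh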

That’s all folks!

Dovecot brute-force blocking with fail2ban

If you are getting brute-force attacks on your Dovecot IMAP/POP3 server, install fail2ban to block the offenders. This works on CentOS 5.7; for other distributions, see the relevant websites.

First, install fail2ban. You should already have the rpmforge repo from my previous post; enable it so the package can be installed:

# cd /etc/yum.repos.d/
# vi rpmforge.repo

Change it to enabled = 1 and save

Then it’s simple:

# yum install fail2ban

After installation I recommend disabling the repo again: edit the file and change it back to enabled = 0
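
Alternatively, you can skip the repo edits entirely and enable it just for this one transaction:

# yum --enablerepo=rpmforge install fail2ban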

Then make sure the service starts up:

# chkconfig --add fail2ban
# chkconfig fail2ban on
# service fail2ban start

Create a new filter file for your dovecot:

# vi /etc/fail2ban/filter.d/dovecot-pop3imap.conf

Paste in the following definition:

[Definition]
failregex = pam.*dovecot.*(?:authentication failure).*rhost=(?:::f{4,6}:)?(?P<host>\S*)
ignoreregex =
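
Before going live, you can check the regex against your existing log entries with fail2ban's test tool:

# fail2ban-regex /var/log/secure /etc/fail2ban/filter.d/dovecot-pop3imap.conf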

Then add the new information to the main config file:

# vi /etc/fail2ban/jail.conf

At the end, add the following:

[dovecot-pop3imap]
enabled = true
filter = dovecot-pop3imap
action = iptables-multiport[name=dovecot-pop3imap, port="pop3,pop3s,imap,imaps", protocol=tcp]
# optional mail notification
# mail[name=dovecot-pop3imap, dest=root@domain]
# see /etc/fail2ban/action.d/ or Fail2Ban doc
logpath = /var/log/secure
maxretry = 20
findtime = 1200
bantime = 1200
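
Restart fail2ban so it picks up the new jail, then confirm the jail is active:

# service fail2ban restart
# fail2ban-client status dovecot-pop3imap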

That’s it!

IP Failover (proof of concept)

Essentially the idea is that if your primary web server goes down, your backup server automatically takes over (failover). When your primary server is back online, it takes over again (failback). It sounds simple, but it can be very complicated and expensive to eliminate all the single points of failure – you could need multiple redundant routers, switches, servers, power supplies, UPS and storage all on separate networks.

The simple setup I am proposing uses a minimum of two servers; the primary and the backup, both on separate networks.

An alternative to IP failover (where the DNS records are changed) is IP takeover (where the backup server actually takes on the IP address of the failed primary). For takeover to work, though, both IP addresses need to sit behind the same router, so the servers would need to be at the same web host. That’s fine if the primary server’s failure is local hardware, but if it’s a power failure in the datacentre or a problem with the network, a switch, a router or any connection issue, transit or peering, then both servers would suffer the same problem. It therefore makes more sense to make them geographically disparate, on entirely separate networks.

It should be noted that IP failover services are provided by sites like dnsmadeeasy.com and zoneedit.com – this would be much simpler to set up, but of course you pay for the privilege.

There are two options for the backup server: either it just displays a “service currently not available” type page (simple to set up), or it is a complete mirror of the primary server. For my purposes a status page is enough, but it is perfectly possible to replicate the primary server if necessary. You could run rsync every few minutes from cron (preferably through an SSH tunnel with a key pair), along the lines of the entry shown below, with MySQL replication for the databases. Google has multiple articles and how-tos for both scenarios.
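
For example, a cron entry like this runs the rsync every five minutes (the key path and folders are from the original example; the schedule is an assumption):

*/5 * * * * rsync -avz --delete -e "ssh -i /root/rsync/mirror-rsync-key" /home/website user@server.com:/home/website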

A possible issue with a replicated server: if users change files via FTP or alter the database through a script while the backup server is active, you would either have to prevent them from doing so or mirror those changes back to the primary server when it comes back online. Unless, of course, you are using a separate server for the database and shared storage for the web files.

This is how I propose to do it: there’s a script on the backup server that monitors the primary server (a heartbeat-type service, run as a PHP script every few minutes by cron). The script can optionally check with any other spare servers to see if they can contact the primary server (to make sure it is definitely down). If the primary server is definitely down, the backup server updates the DNS entries for the server domain(s) to point to itself (with a low TTL in case the primary comes back online).

It’s simplest if the primary server is also the primary DNS server and the backup server is the secondary DNS server. That way, if a user cannot connect to the primary server, they also cannot get the stale DNS records (assuming the whole server is down, not just the web service, which the heartbeat script should check).

Major caveat: some ISPs may ignore the TTL of DNS records and cache the wrong results for too long, but nothing can be done about that.

Updating the DNS records could be done using the nsupdate command, but there may be issues with permissions both to run the program and for the server to update the DNS records, so it’s simpler if the backup is running Virtualmin, which comes with a remote API which can be called from a PHP script.
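
For reference, assuming the permission issues were sorted out, an interactive nsupdate session to switch the record might look like this (the nameserver hostname is a placeholder):

nsupdate
> server ns2.backupdomain.com
> zone primarydomain.com
> update delete www.primarydomain.com. A
> update add www.primarydomain.com. 60 A 1.2.3.4
> send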

This is the flowchart of what happens:

BACKUP checks PRIMARY is up every x mins and loads previous state from database:
     > PRIMARY was up and is still up – do nothing
     > PRIMARY was up and is now down:
          > check with SPARE servers:
               > cannot contact SPARES – internet probably down at BACKUP – do nothing
               > at least 1 SPARE reports PRIMARY up – network issues – do nothing
               > otherwise – update DNS and log to database
     > PRIMARY was down and is still down – do nothing
     > PRIMARY was down and is back up – update DNS and log to database
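
A minimal sketch of the heartbeat check itself, assuming the curl extension is available and using a flat state file in place of the database:

<?php

//heartbeat sketch - the URL and state file path are assumptions
$primary   = 'http://www.primarydomain.com/';
$statefile = '/var/run/failover.state';

function is_up($url) {
  $ch = curl_init($url);
  curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
  curl_setopt($ch, CURLOPT_TIMEOUT, 10);
  curl_setopt($ch, CURLOPT_NOBODY, true); //a HEAD request is enough for a heartbeat
  curl_exec($ch);
  $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
  curl_close($ch);
  return ($code >= 200 && $code < 400);
}

$was_up = (trim(@file_get_contents($statefile)) !== 'down'); //previous state
$is_up = is_up($primary); //current state

if ($was_up && !$is_up) {
  //primary has just gone down - check with the spare servers here before
  //updating the DNS records (see the wget calls below), then log the state
  file_put_contents($statefile, 'down');
} elseif (!$was_up && $is_up) {
  //primary is back - reverse the DNS change, then log the state
  file_put_contents($statefile, 'up');
}

?>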

This is how the PHP script could update the DNS records:

<?php

//if the primary is down, fail over the IP addresses: remove the www record, then add the new one:

$result1 = shell_exec("wget -O - --quiet --http-user=root --http-passwd=root_pass --no-check-certificate 'https://www.backupdomain.com:10000/virtual-server/remote.cgi?program=modify-dns&domain=primarydomain.com&remove-record=www.primarydomain.com. A'");

$result2 = shell_exec("wget -O - --quiet --http-user=root --http-passwd=root_pass --no-check-certificate 'https://www.backupdomain.com:10000/virtual-server/remote.cgi?program=modify-dns&domain=primarydomain.com&ttl=60&add-record=www.primarydomain.com. A 1.2.3.4'");

//echo $result2; //should end with: Exit status: 0 if successful

// if primary back online, reverse the process (though the primary as the primary DNS server should eventually update the record on the secondary DNS server anyway)

?>

I’ll post more actual examples when I get around to implementing this 🙂

RAID error email reporting with the 3ware 9550SXU-8L and tw_cli

This is a follow-up to my previous post, dmraid error reporting by email, this time for a hardware RAID controller, the 3ware/AMCC 9550SXU-8L. The concept is exactly the same; the only difference is the script that checks the RAID array status.

We will be using the command line tool tw_cli – which can be downloaded from www.3ware.com (now LSI) – go to support and select your product, then download the command line tools zip (which is currently CLI Linux – 10.2 code set). In this example, the tw_cli file has been extracted to / and chmodded 755.

Follow the other post, but instead of the dmraid script insert this:

#!/bin/sh
# check raid status and email if not ok
STATUS=`/tw_cli info c0 | grep "RAID"`
OK=`echo "$STATUS" | grep "OK"`
if [ "$STATUS" != "$OK" ]
then
/tw_cli info c0 | mail -s "RAID ERROR ON `hostname`" your@email.com
fi

This works for my setup using the 10.2 version of the command line tools. I have 2 separate RAID arrays on the card, which is why I grab all lines containing the word “RAID” and then check that they also contain the word “OK”. If your card is at a different controller ID or you have multiple cards, you may need to change the command line options.
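
If you are unsure of the controller ID, running the tool with no controller argument should list them (assuming the same file location as above):

/tw_cli info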

dmraid error reporting by email

dmraid is a software raid/fakeraid/onboard raid tool.

As far as I can tell, the only error reporting that dmraid does is by hooking into logwatch – which emails me a very long file I don’t often read, whereas I would like to know immediately if my RAID array is degraded.

This works for me on CentOS 5.6 with dmraid installed – no guarantees on other flavours/combinations. My dmraid version is dmraid-1.0.0.rc13-63.el5. I haven’t tested the output from other versions of dmraid, but it would be pretty trivial to update the script if they are different (or your path is different).

So what we are going to do is write a simple shell script that checks the array status and emails if there is a problem. Then we run the script every 15 (or whatever) minutes.

dmraid needs to be run as root, so you might as well su - for the whole of this.

To create the file just vi /raid_status.sh, hit i to insert and paste this:

#!/bin/sh
# check raid status and email if not ok
STATUS=`/sbin/dmraid -s | grep "status"`
if [ "$STATUS" != "status : ok" ]
then
/sbin/dmraid -s | mail -s "RAID ERROR ON `hostname`: $STATUS" your@email.com
fi

Hit Esc, ZZ to save, then make the file executable:

chmod 755 /raid_status.sh
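
You can test the script (and that outgoing mail works) by running things by hand first:

/raid_status.sh
echo "test" | mail -s "mail test from `hostname`" your@email.com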

Now add it to cron so that it runs regularly:

crontab -e

…and insert the line (i to insert – Esc, ZZ to save):

00,15,30,45 * * * * /raid_status.sh

Voila! dmraid with email error reporting.

PHP Recursive File Copy Function

I couldn’t find a function online that copies folders recursively in PHP and actually works, so I wrote my own:

function recursiveCopy($src, $dest) {
  if (!is_dir($src)) return false; //nothing to do if the source isn't a folder
  if (!is_dir($dest)) mkdir($dest, 0750); //make sure the destination folder exists
  $dir = opendir($src);
  while (($file = readdir($dir)) !== false) { //strict check so a file named "0" doesn't stop the loop
    if ($file != '.' && $file != '..') {
      if (!is_dir($src.'/'.$file)) copy($src.'/'.$file, $dest.'/'.$file);
      else recursiveCopy($src.'/'.$file, $dest.'/'.$file); //recurse into the subfolder
    } //if
  } //while
  closedir($dir);
  return true;
} //function

To summarise: if the source is not a folder, do nothing; otherwise make sure the destination folder exists, open the source and start reading the entries. Files are copied straight to the destination; folders cause the function to run again (within itself), creating the new folder and copying its contents.

Usage:

recursiveCopy('/home/site/public_html/folder','/home/othersite/public_html/folder');

Javascript (JQuery): Social networking feeds – new Facebook authentication

After my previous post, Javascript (JQuery): Social networking feeds all in one place, Facebook went and added authentication to the feed retrieval. After much head-scratching, this is how to enable the Facebook feed under the new OAuth system.

You need an access token to get to the data, so what we are going to do is create a Facebook App which the user then permits to access their information and that will give us the token we need.

So first you need to create a Facebook App. This is simpler than it sounds: we don’t need an App that actually does anything, we just need it for authentication. Install the Developer App on Facebook, go to that App and select Set Up New App. Enter the details of the App and be sure to give it a URL and domain – e.g. http://www.cheesefather.com as URL and cheesefather.com as domain. What you put in doesn’t matter much.

The new App will have an Application id – a load of numbers. Now, this is the method to get the access token. Log in as the user you want the feed for (I am assuming you are using this to retrieve your own feed) and then go to the page:

https://www.facebook.com/dialog/oauth?client_id=YOUR_APP_ID&redirect_uri=http://www.YOUR_URL.com&scope=read_stream,offline_access

Replace your App id and URL in the example. What we are doing here is creating an App request to the user to access data, including the feed (read_stream) and to access the data when they are offline (offline_access) with a token that does not expire (ever, if I’m reading the docs correctly, even if they uninstall the App).

Once you have accepted, the script continues to your URL, passing a very long code to it (you can just copy it from the address bar) – copy this code, the part after ?code=

Then we can finally request the access token. As well as the two codes we just used we also need your App Secret from your Facebook App page. Get the secret and then go to the following page:

https://graph.facebook.com/oauth/access_token?client_id=YOUR_APP_ID&redirect_uri=http://www.YOUR_URL.com&client_secret=YOUR_APP_SECRET&code=THAT_LONG_CODE&type=client_cred

Obviously replace your App id, URL, App secret and the long code with your variables. The script passes back to you an access token (check the source code if your browser isn’t displaying it).

All you need to do now is add that access token to the feed request (see the previous post for the rest of the scripts):

$.getJSON(" https://graph.facebook.com/USER_ID/posts?access_token=ACCESS_TOKEN&limit=5&callback=?",

…replacing the user id of the feed you want to retrieve.

BUT WAIT!…

You don’t want people who check your source code to have access to your Facebook account, so we need to hide that token. This is how I did it: call a PHP proxy script from the Javascript and have the PHP return the content minus the access token. So you change that line to:

$.getJSON("facebook.inc.php?callback=?",

Or whatever the name of your new PHP file is. And then the contents of that new file are:

<?php
$access_token = 'YOUR_ACCESS_TOKEN';
header('Content-Type: text/javascript; charset=UTF-8');
ini_set('user_agent', $_SERVER['HTTP_USER_AGENT']); //spoof browser info so Facebook replies correctly
$callback = preg_replace('/[^a-zA-Z0-9_.]/', '', $_GET['callback']); //sanitise the JSONP callback name
$handle = fopen('https://graph.facebook.com/USER_ID/posts?access_token='.$access_token.'&limit=5&callback='.$callback, 'rb');
$contents = '';
if ($handle) {
  while (!feof($handle)) {
    $contents .= fread($handle, 8192);
  }
  fclose($handle); //only close if the open succeeded
}
$contents = str_replace($access_token, '', $contents); //strip the token from any links in the output
echo $contents;
exit();
?>

Replace your access token and user id in the above example.

What this code does: it defines your access token, tells the browser it’s outputting Javascript (so that JQuery interprets the results properly), spoofs some browser information so that Facebook returns the data correctly, then opens a connection to the JSON page using your access token and the (sanitised) callback reference that JQuery assigned. It then removes all references to your access token from the output (the token appears in links that are returned) and finally prints the output so that the original JQuery function can interpret it. Voila! What was so simple just a week ago is now quite a bit more complicated…
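
One caveat: the proxy relies on allow_url_fopen being enabled for fopen() to fetch a URL. If your host disables it, the same fetch can be done with the curl extension - a sketch, slotting into the proxy script above in place of the fopen/fread loop:

<?php
//hypothetical curl-based replacement for the fopen() fetch above
$ch = curl_init('https://graph.facebook.com/USER_ID/posts?access_token='.$access_token.'&limit=5&callback='.$callback);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); //return the response instead of printing it
curl_setopt($ch, CURLOPT_USERAGENT, $_SERVER['HTTP_USER_AGENT']); //same browser spoof as before
$contents = curl_exec($ch);
curl_close($ch);
?>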