Add custom fonts to WordPress TinyMCE editor with @font-face

The list of fonts in the WordPress visual editor is quite short. There are plugins available to increase it, but I wanted to add my own custom font to the select dropdown.

There’s no plugin hook for this, so it needs a little lateral thinking. Firstly, generate your webfont @font-face in the normal way – I use http://www.fontsquirrel.com/fontface/generator or http://www.font2web.com

Then add the css to your site stylesheet as normal, e.g.

@font-face {
    font-family: 'CustomFont';
    src: url('fonts/customfont-webfont.eot');
    src: local('CustomFont'),
         url('fonts/customfont-webfont.eot?#iefix') format('embedded-opentype'),
         url('fonts/customfont-webfont.woff') format('woff'),
         url('fonts/customfont-webfont.ttf') format('truetype'),
         url('fonts/customfont-webfont.svg#webfontnTz28sxq') format('svg');
    font-weight: normal;
    font-style: normal;
}
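The font can then be used like any other family in your stylesheet, e.g. (selector hypothetical):

```css
h1.fancy {
    font-family: 'CustomFont', arial, helvetica, sans-serif;
}
```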

There is, however, a filter hook that adds a custom stylesheet to the editor (it ends up in the TinyMCE config as content_css: "stylesheet.css",). We can use it to inject our own code: close the quote marks early without naming a stylesheet (or name one if you also want the font available inside the editor), then append what we need. Add this to your theme's functions.php:

function plugin_mce_addfont($mce_css) {
    if (!empty($mce_css)) $mce_css .= ',';
    $mce_css .= '",theme_advanced_fonts:"Custom Font=CustomFont,arial,helvetica,sans-serif';
    return $mce_css;
}
add_filter('mce_css', 'plugin_mce_addfont');
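To see why this works, here is roughly what the TinyMCE init that WordPress writes out ends up looking like after the filter runs (illustrative; WordPress supplies the final closing quote itself):

```
content_css : "http://example.com/editor-style.css",theme_advanced_fonts:"Custom Font=CustomFont,arial,helvetica,sans-serif"
```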

So the first thing we do is close the double quote, then deliberately leave our own string unterminated so that WordPress's own closing quote finishes it off. This only gives you the one choice of font, however (with Arial etc. as browser fallbacks in case the webfont fails). The full list of fonts you originally had is in wp-includes/js/tinymce/themes/advanced/editor-template.js, so we tack them on the end to keep all of them:

function plugin_mce_addfont($mce_css) {
    if (!empty($mce_css)) $mce_css .= ',';
$mce_css .= '",theme_advanced_fonts:"Custom Font=CustomFont,arial,helvetica,sans-serif;Andale Mono=andale mono,times;Arial=arial,helvetica,sans-serif;Arial Black=arial black,avant garde;Book Antiqua=book antiqua,palatino;Comic Sans MS=comic sans ms,sans-serif;Courier New=courier new,courier;Georgia=georgia,palatino;Helvetica=helvetica;Impact=impact,chicago;Symbol=symbol;Tahoma=tahoma,arial,helvetica,sans-serif;Terminal=terminal,monaco;Times New Roman=times new roman,times;Trebuchet MS=trebuchet ms,geneva;Verdana=verdana,geneva;Webdings=webdings;Wingdings=wingdings,zapf dingbats';
return $mce_css;
}
add_filter('mce_css', 'plugin_mce_addfont');
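For reference, WordPress also exposes a tiny_mce_before_init filter that lets you set the font list directly on the TinyMCE config array, avoiding the quote-closing trick entirely. This sketch is my own alternative, not from the original method; the font string uses the same format as above:

```php
<?php
// Sketch: set theme_advanced_fonts directly on the TinyMCE init array
// via the tiny_mce_before_init filter, instead of splicing into the
// mce_css string. Assumes the CustomFont @font-face is already loaded
// in the editor stylesheet.
function plugin_mce_fontlist($init) {
    $init['theme_advanced_fonts'] = 'Custom Font=CustomFont,arial,helvetica,sans-serif;'
        . 'Arial=arial,helvetica,sans-serif;'
        . 'Times New Roman=times new roman,times';
    return $init;
}
// Guarded so the sketch also parses and runs outside WordPress:
if (function_exists('add_filter')) {
    add_filter('tiny_mce_before_init', 'plugin_mce_fontlist');
}
```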

Done.

Process email bounces with PHP

This is a quick script to process email bounces, for example from a mailing list so that users can be flagged up or unsubscribed when they have too many failures.

The actual bounce identification will be done by Chris Fortune’s Bounce Handler, which you can download from:
http://anti-spam-man.com/php_bouncehandler/

We require 3 files from that package:
bounce_driver.class.php
bounce_responses.php
rfc1893.error.codes.php

The script fetches the bounced emails from a specified mailbox and counts how many failed emails there are per email address. If the count reaches your threshold (called $delete), you run your own code to unsubscribe the address (or whatever you need) and the bounced emails are deleted. You can run the script as a cronjob or call it from your mailing list script to tidy up subscriptions.

<?php

# define variables
$mail_box = '{mail.domain.com:143/novalidate-cert}'; // IMAP example
$mail_user = 'username'; // mail username
$mail_pass = 'password'; // mail password
$delete = 5; // delete emails from addresses with at least this many failures

# connect to mailbox
$conn = imap_open ($mail_box, $mail_user, $mail_pass) or die(imap_last_error());
$num_msgs = imap_num_msg($conn);

# start bounce class
require_once('bounce_driver.class.php');
$bouncehandler = new Bouncehandler();

# get the failures
$email_addresses = array();
$delete_addresses = array();
for ($n = 1; $n <= $num_msgs; $n++) {
  $bounce = imap_fetchheader($conn, $n) . imap_body($conn, $n); // entire message
  $multiArray = $bouncehandler->get_the_facts($bounce);
  if (!empty($multiArray[0]['action']) && !empty($multiArray[0]['status']) && !empty($multiArray[0]['recipient'])) {
    if ($multiArray[0]['action'] == 'failed') {
      $recipient = $multiArray[0]['recipient'];
      if (!isset($email_addresses[$recipient])) $email_addresses[$recipient] = 0;
      $email_addresses[$recipient]++; // increment number of failures
      $delete_addresses[$recipient][] = $n; // note message number for deletion
    } // if delivery failed
  } // if parsed as a bounce
} // for loop

# process the failures
foreach ($email_addresses as $key => $value) { // trim($key) is the email address, $value the number of failures
  if ($value >= $delete) {
    /*
    do whatever you need to do here, e.g. unsubscribe the email address
    */
    # mark its bounced messages for deletion
    foreach ($delete_addresses[$key] as $delnum) imap_delete($conn, $delnum);
  } // if failed at least $delete times
} // foreach

# delete messages
imap_expunge($conn);

# close
imap_close($conn);

?>

Monitor server cpu resources with email notification

I thought I’d write a quick script to keep an eye on which processes/users are using too many cpu cycles on my CentOS server. This checks the usage over the previous 5 minutes and emails a detailed list of cpu-hungry processes if it’s over the defined limit. Run it from cron to keep an eye on those resources:

#!/bin/bash
CPU_LIMIT="10" # relative to the number of cores, so a quad-core box at full capacity is 4
EMAIL="your@email.com"
# the second field of /proc/loadavg is the 5-minute load average
if [ $(echo "$(cut -d ' ' -f 2 /proc/loadavg) >= $CPU_LIMIT" | bc) = 1 ]; then
    ps ax --sort=-pcpu o user,pid,pcpu,pmem,vsz,rss,stat,time,comm | mail -s "CPU OVER LIMIT ON `hostname`" $EMAIL
fi
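Saved somewhere like /root/cpu_check.sh (path hypothetical) and made executable, a crontab entry along these lines runs it every five minutes:

```
*/5 * * * * /root/cpu_check.sh
```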

That’s all folks!

Dovecot brute-force blocking with fail2ban

If you are getting brute force attacks against your Dovecot IMAP/POP3 server, install fail2ban to block the offenders. This works on CentOS 5.7. For other distributions, see the relevant websites.

Firstly, install fail2ban. You should have the rpmforge repo from my previous post. Enable it first to install fail2ban:

# cd /etc/yum.repos.d/
# vi rpmforge.repo

Change it to enabled = 1 and save

Then it’s simple:

# yum install fail2ban

After installation I recommend disabling the repo again: edit the file and set enabled = 0

Then make sure the service starts up:

# chkconfig --add fail2ban
# chkconfig fail2ban on
# service fail2ban start

Create a new filter file for your dovecot:

# vi /etc/fail2ban/filter.d/dovecot-pop3imap.conf

Paste in the following definition:

[Definition]
failregex = pam.*dovecot.*(?:authentication failure).*rhost=(?:::f{4,6}:)?(?P<host>\S*)
ignoreregex =

Then add the new information to the main config file:

# vi /etc/fail2ban/jail.conf

At the end, add the following:

[dovecot-pop3imap]
enabled = true
filter = dovecot-pop3imap
action = iptables-multiport[name=dovecot-pop3imap, port="pop3,pop3s,imap,imaps", protocol=tcp]
# optional mail notification
# mail[name=dovecot-pop3imap, dest=root@domain]
# see /etc/fail2ban/action.d/ or Fail2Ban doc
logpath = /var/log/secure
maxretry = 20
findtime = 1200
bantime = 1200
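Before restarting fail2ban, it's worth checking that the regex actually matches your log, using the bundled fail2ban-regex tool (a quick sanity check; output details vary by version):

```
# dry-run: reports how many lines in the log match the failregex
fail2ban-regex /var/log/secure /etc/fail2ban/filter.d/dovecot-pop3imap.conf
```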

That’s it!

IP Failover (proof of concept)

Essentially the idea is that if your primary web server goes down, your backup server automatically takes over (failover). When your primary server is back online, it takes over again (failback). It sounds simple, but it can be very complicated and expensive to eliminate all the single points of failure – you could need multiple redundant routers, switches, servers, power supplies, UPS and storage all on separate networks.

The simple setup I am proposing uses a minimum of two servers; the primary and the backup, both on separate networks.

An alternative to IP failover (where the IP is changed) is IP takeover (where the backup server actually takes on the IP address of the failed primary). For takeover to work, though, the IP addresses need to sit behind the same router, which means being at the same web host. That's fine if the primary server fails because of local hardware, but if the problem is a power failure in the datacentre, or trouble with the network, a switch, a router, transit or peering, then both servers share the fault. It therefore makes more sense to make them geographically disparate, on entirely separate networks.

It should be noted that IP failover services are provided by sites like dnsmadeeasy.com and zoneedit.com – this would be much simpler to set up, but of course you pay for the privilege.

There are two options for the backup server. Either it just displays a "service currently not available" type page (simple to set up) or it is a complete mirror of the primary server. For my purposes it is a status page, but it is perfectly possible to replicate the primary server if necessary. You could run rsync (preferably through an SSH tunnel with a key pair), something along the lines of

rsync -avz --delete -e "ssh -i /root/rsync/mirror-rsync-key" /home/website user@server.com:/home/website

run every few minutes by cron, with MySQL replication for the databases. Google has multiple articles and how-tos for both scenarios.

A possible issue with a replicated server: if users change files via FTP or alter the database through a script while the backup server is active, you either have to prevent them doing so or mirror those changes back to the primary server when it comes back online. Unless, of course, you use a separate database server and shared storage for the web files.

This is how I propose to do it: there’s a script on the backup server that monitors the primary server (a heartbeat-type service, run as a PHP script every few minutes by cron). The script can optionally check with any other spare servers to see if they can contact the primary server (to make sure it is definitely down). If the primary server is definitely down, the backup server updates the DNS entries for the server domain(s) to point to itself (with a low TTL in case the primary comes back online).

It’s simplest if the primary server is also the primary DNS server and the backup server is the secondary DNS server, then if a user cannot connect to the primary server, it also cannot get the incorrect DNS records (assuming the whole server is down, not just the web service, which the heartbeat script should check).

Major caveat: some ISPs may ignore the TTL of DNS records and cache the wrong results for too long, but nothing can be done about that.

Updating the DNS records could be done using the nsupdate command, but there may be issues with permissions both to run the program and for the server to update the DNS records, so it’s simpler if the backup is running Virtualmin, which comes with a remote API which can be called from a PHP script.

This is the flowchart of what happens:

BACKUP checks PRIMARY is up every x mins and loads previous state from database:
     > PRIMARY was up and is still up – do nothing
     > PRIMARY was up and is now down:
          > check with SPARE servers:
               > cannot contact SPARES – internet probably down at BACKUP – do nothing
               > at least 1 SPARE reports PRIMARY up – network issues – do nothing
               > otherwise – update DNS and log to database
     > PRIMARY was down and is still down – do nothing
     > PRIMARY was down and is back up – update DNS and log to database
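
The flowchart boils down to a small pure decision function, sketched here (function name and return values are my own, not from an implementation):

```php
<?php
// Decide what the backup server should do. $anySpareSeesPrimary is
// true/false, or null when the spares themselves are unreachable
// (which suggests the backup's own connectivity is the problem).
function failover_action(bool $wasUp, bool $isUp, ?bool $anySpareSeesPrimary): string {
    if ($wasUp && $isUp)   return 'nothing';       // still up
    if (!$wasUp && !$isUp) return 'nothing';       // still down
    if (!$wasUp && $isUp)  return 'failback-dns';  // primary recovered: restore DNS
    // primary was up and now looks down:
    if ($anySpareSeesPrimary === null) return 'nothing'; // can't contact spares
    if ($anySpareSeesPrimary === true) return 'nothing'; // network issue, not an outage
    return 'failover-dns';                         // confirmed down: point DNS at backup
}
```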

This is how the PHP script could update the DNS records:

<?php

//if primary down, failover IP addresses, remove www record then add new one:

$result1 = shell_exec("wget -O - --quiet --http-user=root --http-passwd=root_pass --no-check-certificate 'https://www.backupdomain.com:10000/virtual-server/remote.cgi?program=modify-dns&domain=primarydomain.com&remove-record=www.primarydomain.com. A'");

$result2 = shell_exec("wget -O - --quiet --http-user=root --http-passwd=root_pass --no-check-certificate 'https://www.backupdomain.com:10000/virtual-server/remote.cgi?program=modify-dns&domain=primarydomain.com&ttl=60&add-record=www.primarydomain.com. A 1.2.3.4'");

//echo $result2; //should end with: Exit status: 0 if successful

// if primary back online, reverse the process (though the primary as the primary DNS server should eventually update the record on the secondary DNS server anyway)

?>

I’ll post more actual examples when I get around to implementing this 🙂

RAID error email reporting with the 3ware 9550SXU-8L and tw_cli

This is a follow-up to my previous post dmraid error reporting by email, this time for a hardware raid controller, the 3ware/AMCC 9550SXU-8L. The concept is exactly the same; the only difference is the script that checks the raid array status.

We will be using the command line tool tw_cli – which can be downloaded from www.3ware.com (now LSI) – go to support and select your product, then download the command line tools zip (which is currently CLI Linux – 10.2 code set). In this example, the tw_cli file has been extracted to / and chmodded 755.

Follow the other post, but instead of that script insert this:

#!/bin/sh
# check raid status and email if not ok
STATUS=`/tw_cli info c0 | grep "RAID"`
OK=`echo "$STATUS" | grep "OK"`
if [ "$STATUS" != "$OK" ]
then
    /tw_cli info c0 | mail -s "RAID ERROR ON `hostname`" your@email.com
fi

This works for my setup using the 10.2 version of the command line tools. I have 2 separate raid arrays on the card, which is why the script grabs every line containing the word "RAID" and then checks that each of those lines also contains "OK". If your card is positioned differently or you have multiple cards, you may need to change the command line options.

dmraid error reporting by email

dmraid is a software raid/fakeraid/onboard raid tool.

As far as I can tell, the only error reporting that dmraid does is by hooking into logwatch, which emails me a very long report I rarely read; I would like to know immediately if my raid array is degraded.

This works for me on CentOS 5.6 with dmraid installed – no guarantees on other flavours/combinations. My dmraid version is dmraid-1.0.0.rc13-63.el5. I haven’t tested the output from other versions of dmraid, but it would be pretty trivial to update the script if they are different (or your path is different).

So what we are going to do is write a simple shell script that checks the array status and emails if there is a problem. Then we run the script every 15 (or whatever) minutes.

dmraid needs to run as root, so you might as well su - for the whole procedure.

To create the file just vi /raid_status.sh, hit i to insert and paste this:

#!/bin/sh
# check raid status and email if not ok
STATUS=`/sbin/dmraid -s | grep "status"`
if [ "$STATUS" != "status : ok" ]
then
    /sbin/dmraid -s | mail -s "RAID ERROR ON `hostname`: $STATUS" your@email.com
fi

Hit Esc, ZZ to save, then make the file executable:

chmod 755 /raid_status.sh

Now add it to cron so that it runs regularly:

crontab -e

…and insert the line (i to insert – Esc, ZZ to save):

00,15,30,45 * * * * /raid_status.sh

Voila! dmraid with email error reporting.

PHP Recursive File Copy Function

I couldn’t find a function online that copies folders recursively in PHP and actually works, so I wrote my own:

function recursiveCopy($src, $dest) {
if (!is_dir($src)) return; // nothing to copy
$dir = opendir($src);
while (($file = readdir($dir)) !== false) { // !== so a file named "0" doesn't stop the loop
if ($file != '.' && $file != '..') {
if (!is_dir($src.'/'.$file)) copy($src.'/'.$file, $dest.'/'.$file);
else {
if (!is_dir($dest.'/'.$file)) mkdir($dest.'/'.$file, 0750);
recursiveCopy($src.'/'.$file, $dest.'/'.$file);
} //else
} //if
} //while
closedir($dir);
} //function

To summarise: if the source is a folder, open it and start reading the files. If the files are not folders, copy them straight to the destination, if they are folders, create a new folder at the destination and then run the function again (within itself) for the new folder.

Usage:

recursiveCopy('/home/site/public_html/folder','/home/othersite/public_html/folder');
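
For comparison, the same job can be done with PHP's standard SPL directory iterators. This is a sketch of my own using the stock RecursiveDirectoryIterator API, not part of the original function:

```php
<?php
// Recursive copy using SPL iterators instead of manual recursion.
// SELF_FIRST yields each directory before its contents, so target
// directories exist before files are copied into them.
function splRecursiveCopy(string $src, string $dest): void {
    if (!is_dir($dest)) mkdir($dest, 0750, true);
    $iter = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($src, FilesystemIterator::SKIP_DOTS),
        RecursiveIteratorIterator::SELF_FIRST
    );
    foreach ($iter as $item) {
        $target = $dest . '/' . $iter->getSubPathname();
        if ($item->isDir()) {
            if (!is_dir($target)) mkdir($target, 0750);
        } else {
            copy($item->getPathname(), $target);
        }
    }
}
```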