VPS Benchmarks: Amazon EC2 and Lightsail, Azure, DigitalOcean, Google, Hostworld, Linode, OVH, UpCloud, VPSServer, VPS.net, Vultr

I recently needed to have a look at moving some services to a different VPS provider for redundancy so I decided to benchmark my options to compare them.

In each case I selected the plan with 16GB of RAM (Google's closest option is 15GB). The datacenter was always London (Azure only says UK South). The fastest storage option was selected in each case. The OS was CentOS 7, since that was the most widely supported (Amazon and VPS.net do not support CentOS 8).

For Amazon and DigitalOcean I tested two options each: for Amazon I was curious how Lightsail stacked up against vanilla EC2, and for DigitalOcean I wondered whether the 50% price hike for General Purpose was worth it versus the Standard VPSs I'd used in the past.

The tests were generally run a few times and averaged out. I used the latest sysbench to measure CPU speed, file I/O and MySQL transaction speed for comparison.

Provider             | Plan                  | vCPU cores | RAM (GB) | CPU Events/s | Read MiB/s | Write MiB/s | MySQL tran/s
Amazon               | EC2 t3a.xlarge        | 4          | 16       | 514.79       | 18.84      | 12.56       | 1099.59
Amazon               | Lightsail 16GB        | 4          | 16       | 337.44       | 15.55      | 10.37       | 1603.31
DigitalOcean         | General Purpose 16GB  | 4          | 16       | 434.46       | 20.85      | 13.90       | 1328.43
DigitalOcean         | Standard 16GB         | 6          | 16       | 304.37       | 10.79      | 7.19        | 1078.46
Google Cloud Compute | n1-standard-4         | 4          | 15       | 345.23       | 11.89      | 7.93        | 976.04
Kamatera             | Custom (Availability) | 4          | 16       | 413.75       | 36.96      | 24.64       | 2634.97
Linode               | Shared 16GB           | 6          | 16       | 501.91       | 43.21      | 28.80       | 1728.78
OVH Cloud            | Elite                 | 8          | 16       | 267.67       | 31.00      | 20.67       | 1729.30
UK2                  | SSD VPS V6-20         | 4          | 16       | 254.33       | 23.37      | 15.58       | 1317.89
VPSServer            | Standard 16GB         | 8          | 16       | 156.23       | 16.82      | 11.21       | 844.11
Vultr                | Cloud Compute 16GB    | 6          | 16       | 403.37       | 35.20      | 23.46       | 1691.64

For disk I/O UpCloud is the clear winner and Azure and VPS.net are VERY distant losers – and this is with Azure’s Premium SSD option. Hostworld just wins the CPU crown, but its I/O is less impressive. Of the rest I’d say Linode is a pretty solid performer. There are some distinctly average performances from some of these. I was quite surprised by some of the numbers and ran them a few extra times to make sure (VPSServer’s terrible CPU speed and Azure and VPS.net’s shocking I/O for example).


Price-wise, the top 4 in terms of speed have very similar pricing (that’s UpCloud, Linode, Kamatera and Vultr in order of read speed), then OVH and Hostworld are both significantly cheaper. Hostworld is actually the cheapest in the list with the fastest CPU but only mid-range disk I/O.

The commands used to benchmark were as follows:

For CPU Events per second:
sysbench cpu --cpu-max-prime=20000 run

For read and write performance in MiB/s (after creating 150GB of files):
sysbench fileio --file-total-size=150G --file-test-mode=rndrw --time=300 --max-requests=0 run

For MySQL transactions per second (after creating a test database with 1 million rows):
sysbench oltp_read_write --table-size=1000000 --db-driver=mysql --mysql-db=test --time=60 --max-requests=0 --threads=8 run
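For completeness, the 150GB of test files and the million-row test table mentioned above come from sysbench's prepare step, with a matching cleanup afterwards. A sketch, assuming sysbench 1.x and a MySQL database already created and named to match the --mysql-db used in the runs:

```shell
# create the 150GB of test files for the fileio run, then remove them
sysbench fileio --file-total-size=150G prepare
sysbench fileio --file-total-size=150G cleanup

# create the 1-million-row test table in the "test" database, then drop it
sysbench oltp_read_write --table-size=1000000 --db-driver=mysql --mysql-db=test prepare
sysbench oltp_read_write --db-driver=mysql --mysql-db=test cleanup
```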

Postfix ban failed logins script

Fail2ban hasn’t been working for me: I still have people running brute-force attacks on my Postfix server, so I thought I’d rig up something myself.

This consists of a bash script, run from cron every 10 minutes, that identifies IPs with multiple failures and bans them. It checks for both SMTP and POP/IMAP login failures.

#!/bin/bash
# postfix ban failed login ips
# get all failed ip addresses into files
grep "authentication failed" /var/log/maillog | grep -Eo "([0-9]{1,3}\.){3}[0-9]{1,3}" > ~admin/mail_fail_smtp
grep "auth failed" /var/log/maillog | grep -Eo "rip=([0-9]{1,3}\.){3}[0-9]{1,3}" | sed 's/rip=//' > ~admin/mail_fail_imap
# only keep IPs with over 5 fails (change the limit= part to change)
sort ~admin/mail_fail_imap | uniq -cd | awk -v limit=5 '$1 > limit {print $2}' > ~admin/mail_fail_imap_over5
sort ~admin/mail_fail_smtp | uniq -cd | awk -v limit=5 '$1 > limit {print $2}' > ~admin/mail_fail_smtp_over5
# read through files and add IP to hosts.deny if not there already
while read p; do
    if grep -q "$p" /etc/hosts.deny; then
        echo "$p already added"
    else
        echo "ALL: $p" >> /etc/hosts.deny
    fi
done < ~admin/mail_fail_smtp_over5
while read p; do
    if grep -q "$p" /etc/hosts.deny; then
        echo "$p already added"
    else
        echo "ALL: $p" >> /etc/hosts.deny
    fi
done < ~admin/mail_fail_imap_over5
# clean up
rm -f ~admin/mail_fail_smtp ~admin/mail_fail_imap
rm -f ~admin/mail_fail_smtp_over5 ~admin/mail_fail_imap_over5
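Before pointing this at a live maillog, the extraction and threshold logic can be sanity-checked with made-up data (the log line and IPs below are fabricated examples, not real output):

```shell
# check the IP extraction on a made-up maillog-style line
echo "warning: unknown[203.0.113.7]: SASL LOGIN authentication failed" \
    | grep "authentication failed" \
    | grep -Eo "([0-9]{1,3}\.){3}[0-9]{1,3}"
# prints 203.0.113.7

# check the over-5 threshold: six failures pass, two do not
{ for i in 1 2 3 4 5 6; do echo "203.0.113.7"; done
  echo "198.51.100.9"; echo "198.51.100.9"; } \
    | sort | uniq -cd | awk -v limit=5 '$1 > limit {print $2}'
# prints 203.0.113.7 only
```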

Then added to crontab:

*/10 * * * * /home/admin/postfix_ban_ips.sh > /dev/null

And just in case a localhost login fails and localhost ends up unintentionally blocked (this is quicker than filtering it out above):

echo "ALL: 127.0.0.1" >> /etc/hosts.allow

dmraid error reporting by email

dmraid is a software raid/fakeraid/onboard raid tool.

As far as I can tell, the only error reporting dmraid does is by hooking into logwatch, which emails me a very long file I don’t often read; I would like to know immediately if my RAID array is degraded.

This works for me on CentOS 5.6 with dmraid installed – no guarantees on other flavours/combinations. My dmraid version is dmraid-1.0.0.rc13-63.el5. I haven’t tested the output from other versions of dmraid, but it would be pretty trivial to update the script if they are different (or your path is different).

So what we are going to do is write a simple shell script that checks the array status and emails if there is a problem. Then we run the script every 15 (or whatever) minutes.

dmraid needs to be run as root, so you might as well su - for the whole of this.

To create the file just vi /raid_status.sh, hit i to insert and paste this:

#!/bin/sh
# check raid status and email if not ok
STATUS=`/sbin/dmraid -s | grep "status"`
if [ "$STATUS" != "status : ok" ]; then
    /sbin/dmraid -s | mail -s "RAID ERROR ON `hostname`: $STATUS" your@email.com
fi

Hit Esc, ZZ to save, then make the file executable:

chmod 755 /raid_status.sh

Now add it to cron so that it runs regularly:

crontab -e

…and insert the line (i to insert – Esc, ZZ to save):

00,15,30,45 * * * * /raid_status.sh

Voila! dmraid with email error reporting.
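If you want to dry-run the comparison logic before trusting it to cron, you can feed it sample status strings ("status : ok" is what my dmraid version reports; the degraded string below is just a stand-in, not verified output):

```shell
# simulate the check with sample status strings; the real script
# gets $STATUS from /sbin/dmraid -s instead
check_status() {
    if [ "$1" != "status : ok" ]; then
        echo "ALERT"
    else
        echo "OK"
    fi
}
check_status "status : ok"        # prints OK
check_status "status : degraded"  # prints ALERT
```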