All posts by KingJ

Creating a Basic, Always On, Mobile Compatible IPSEC VPN

For a while, I’ve wanted to create a working always-on VPN from my phone back to my internal network – partly to be able to access my network resources from anywhere, but also to provide additional security while out and about. WiFi hotspots are well known for their weaknesses, but increasingly mobile data connections are also at risk, with most mobile providers required to keep connection logs, and other instances of mobile providers tampering with data and adding tracking cookies.

For the purposes of this article, we’re going to use Android 5.0.1 as our VPN client and Ubuntu 14.04 LTS as our VPN host. However, the steps in this article should also be applicable to other versions of Android, iOS devices and other Linux based distributions. A number of VPN protocols exist, but this article will use IPSEC – a widely adopted standard. Other protocols are available, but their security and compatibility varies. The key requirements for this project are;

  • Security – the VPN implementation must not use weak cryptographic standards.
  • Compatibility – The VPN implementation must be available out of the box on the client device.
  • Always-on – The VPN implementation must be supported in an Always-on fashion, prohibiting any data from flowing until the connection is established.

When these requirements are applied to our Android client, this limits us to using either an IPSEC or L2TP over IPSEC VPN. For simplicity, we’ll use a pure IPSEC solution.

IPSEC is highly versatile and supports a huge range of encryption protocols and authentication methods. This article, though, will aim to create a simple implementation, using a pre-shared key with usernames and passwords for authentication. Later articles will build on this to add certificate-based authentication. The aim is to get a simple but reasonably secure implementation working first.

Originally, I wanted to get this working on my OpenWRT based router, but due to the complex iptables firewall and NAT arrangements, I was unable to get traffic to leave the router. Instead, an Ubuntu VM inside my network is handling the connectivity.

Network Diagram

Diagrams always make things easier to understand!

IPSEC tunnel between mobile client and server inside home network.

Arguably, this diagram is somewhat simplified. For example, the client has a public IP. On almost all mobile networks, you’ll have an RFC 1918 non-routable IP and go through some form of NAT in order to reach the internet. Historically, this presented an issue for IPSEC VPNs, which assume both endpoints have a publicly routable IP. However, more recent implementations have the ability to encapsulate IPSEC packets to make them more friendly to NAT. This is also applicable to our server, which is also in an RFC 1918 non-routable IP address space.

Router Port and Protocol Forwarding

As our server is behind a router performing NAT, our client won’t be able to connect unless we tell the router which ports to forward. Ensure you’ve assigned your server a static IP address and forward the following ports and protocols.

  • Port 500 UDP
  • Port 4500 UDP
  • IP Protocol 50 (ESP)
  • IP Protocol 51 (AH)

If you can’t forward IP protocols, it shouldn’t pose a problem – most IPSEC VPNs will instead fall back to NAT-T over UDP port 4500.

Server Configuration

  1. Install the strongswan package, along with the plugins xauth-generic, dhcp and farp, by running apt-get install strongswan strongswan-plugin-xauth-generic strongswan-plugin-farp strongswan-plugin-dhcp.

The xauth-generic plugin allows us to use xauth for authentication, the farp plugin allows our client to appear as a normal host on the remote network, and the DHCP plugin allows the client to be assigned an IP from the remote network’s DHCP server, rather than randomly assigning an IP from a set pool and hoping that it doesn’t collide!

  2. Edit /etc/ipsec.conf to read as follows;
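A minimal strongSwan ipsec.conf along these lines would match the description that follows (the connection name and exact cipher suite shown here are illustrative, not the author's original file):

```
config setup

conn android
    keyexchange=ikev1
    left=%defaultroute
    leftsubnet=
    leftfirewall=yes
    right=%any
    rightsourceip=%dhcp
    leftauth=psk
    rightauth=psk
    rightauth2=xauth
    ike=aes128-sha1-modp1024!
    esp=aes128-sha1!
    auto=add
```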

This tells the server to;

  • Use the localhost address as the “left” side of the connection.
  • Add any required firewall/iptables rules automatically.
  • Accept any address as the “right” side of the connection.
  • Assign virtual IP addresses to clients from DHCP.
  • Use the server itself as the xauth authentication source.
  • Use a pre-shared key for xauth.
  • Use IKEv1 for Key Exchange (IKEv2 is better, however Android’s out of the box IPSEC client only supports IKEv1).
  • Use AES-128 for encryption and key exchange, with SHA1 as a hash and Diffie-Hellman Group 2. This provides a good level of security – you can use higher standards if you wish, but this is already “good enough” against a current adversary. This Security Stack Exchange answer does a good job of explaining why AES128 is good enough. It would be better to use AES with the GCM cipher, but unfortunately the built-in Android VPN client does not support it.

Save the file and quit.

  3. Edit /etc/ipsec.secrets to read as follows;
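In strongSwan’s ipsec.secrets syntax, a pre-shared key plus an XAuth credential pair look like this (placeholders as described below):

```
# Pre-shared key offered to any connecting client
: PSK "presharedkey"

# XAuth username/password pair
username : XAUTH "password"
```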

Replace username with a username you want your connecting client to use, replace password with an appropriately strong password, and replace presharedkey with an even stronger password. While initially setting this up and testing, you may wish to use shorter, weaker passwords, but as soon as it is working, change to strong passwords. Without a strong password and pre-shared key, the security that a VPN offers is just an illusion.

Save the file and quit.

  4. Restart the IPSEC server in order to load the new configuration using ipsec restart.
  5. Enable IP forwarding; without this, your client will only be able to communicate with the IPSEC server. To do this, edit /etc/sysctl.conf and uncomment the net.ipv4.ip_forward=1 line. Run sysctl -p to apply the change. This will persist across reboots.
  6. Finished! Your server should now be accepting IPSEC connections and forwarding any data.

Resolving Android MTU Issues

Unfortunately, Android relies on Path MTU discovery across the tunnel to negotiate the correct MTU size with the destination server. Whilst PMTU discovery is a good idea, across the internet it can be somewhat fragile, as many firewalls are configured to drop ICMP traffic – including PMTU ICMP traffic! Android then falls back to using an MTU of 1500, which is far too big for packets going through the tunnel. Consequently, unless the MTU is changed, some communications won’t work – notably those which rely on transferring large amounts of data, such as images on Twitter and speed test applications.

Fortunately, Zeitgeist has identified an easy workaround using iptables. By using the following iptables rule, packets advertising too large a segment size can be re-written to a more sensible one;
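A rule of the following shape does the trick – note that it works by clamping the TCP MSS on forwarded SYN packets so that the negotiated segment size fits inside the tunnel (the interface name and MSS values here are illustrative; adjust to your environment):

```shell
# Rewrite the MSS of forwarded TCP SYN packets that advertise an MSS
# too large for the tunnel, clamping them down to 1360 bytes
iptables -t mangle -A FORWARD -o eth0 -p tcp \
  --tcp-flags SYN,RST SYN -m tcpmss --mss 1361:1536 \
  -j TCPMSS --set-mss 1360
```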

Of course, this will apply to any traffic traversing the eth0 interface – VPN traffic or otherwise. If your server is only being used as an IPSEC VPN server this may be acceptable; otherwise, you may wish to use the -s flag with a suitable netblock to only apply the rule to traffic from your VPN clients.

Importantly, iptables rules applied in the fashion above do not persist across restarts.  I highly suggest you use the iptables-persistent package to preserve your iptables rules across restarts.

You should also disable Path MTU discovery. Zeitgeist suggests doing this by altering the proc filesystem, but a more permanent solution can be obtained by editing the sysctl.conf file, as we did earlier to enable IPv4 forwarding.

  1. Edit /etc/sysctl.conf and add the line net.ipv4.ip_no_pmtu_disc=1. Save the file and exit.
  2. Run sysctl -p to apply the change. This will persist across reboots.

Client Configuration

As mentioned previously, these instructions are based on an Android 5.0.1 device, but should be applicable to other versions of Android and iOS with slight modifications.

  1. Enter the VPN settings via Settings > Wireless & networks > More > VPN.
  2. Add a new VPN connection by tapping the plus icon.
  3. In the Name field, enter a user-friendly name for your VPN connection.
  4. In the Type drop-down, select “IPSec Xauth PSK”.
  5. In the Server address field, enter your router’s public IP or hostname.
  6. Leave the IPSec identifier field blank.
  7. In the IPSec preshared key field, enter the preshared key from your ipsec.secrets file.
  8. Tap Save.
  9. Tap on your new VPN connection to start connecting. When prompted, enter your username and password from your ipsec.secrets file, optionally save the account information and tap connect.
  10. If it’s working, you should see the status under your VPN’s name change to “Connected” and see a key icon in your notification area.

Enabling Always On

Now that your VPN is working, the always-on mode can be enabled. This will force the VPN to be connected at all times and block any data traffic until it has been connected – much easier than having to remember to connect each time.

  1. Enter the VPN settings via Settings > Wireless & networks > More > VPN.
  2. From the menu, select “Always-on VPN”.
  3. Select the name of the VPN profile you just created, and then tap OK.

After enabling as an always-on VPN, it should attempt to connect automatically and you’ll get a notification in the notification area.

Always on VPN notification.

The “Reset” action is useful if the VPN connection goes stale, which can occasionally happen.


Troubleshooting

Although IPSEC is a nice, standardised protocol, it can be very unforgiving of configuration issues. Viewing the log live while attempting to connect can be very useful in identifying connection issues. To do this, run tail -f /var/log/syslog.

no IKE config found for…, sending NO_PROPOSAL_CHOSEN

This error indicates that none of your configuration entries matched the connection that was attempted. This is normally caused by a few things;

  • Your configuration specifies IKEv1, but you’re using an IKEv2 client or vice-versa.
  • Your left or right definitions are wrong – try temporarily changing them to %any.

no matching proposal found, sending NO_PROPOSAL_CHOSEN

Similar to the above, this error indicates that your client and server couldn’t agree on a configuration. This error, however, is more likely to occur if your client and server don’t agree on which encryption mechanisms are possible for IKE and/or ESP – for example, you’ve configured the server to only permit AES256-GCM-16, but your client can only do plain old AES256-CBC (Android, I’m looking at you). The lines above this error in the log should show which mechanisms your client supports.

no shared key found for ''[] – '(null)'[]

This error indicates that a matching pre-shared key for this combination of server and client wasn’t found in the ipsec.secrets file. Either use %any %any in ipsec.secrets or make sure the selectors match your left and right definitions.

no XAuth method found

This error indicates that although you requested xauth authentication in your IPSEC server configuration, no way of performing it was known to the server. Make sure you’ve installed the  strongswan-plugin-xauth-generic package.

Once connected over the VPN, some data connections fail.

This is likely because of the MTU issue listed earlier. Make sure you’ve got the iptables rule applied and that Path MTU discovery is disabled.


Hopefully this article was a useful reference for creating a simple but reasonably secure VPN for mobile devices. Future articles will build upon this base article to add additional functionality, such as certificate based authentication rather than usernames, passwords and pre-shared keys.

If you’ve run in to any problems, please note them in the comments so the article can be updated.


Creating a GRE Tunnel Between OpenWRT and pfSense

Following on from my previous post about building an IPsec tunnel between a Palo Alto firewall and a pfSense VM, I started trying to build a GRE tunnel between an OpenWRT router on my local network and the pfSense VM. Since GRE tunnels are unencrypted, this one needs to traverse the IPsec tunnel and not the internet! Nothing will stop you from running a GRE tunnel over the internet, but running unencrypted data over the internet is something you really do not want to do.

The main advantage of a GRE tunnel is that it has interfaces inside the tunnel. This means that you can easily route traffic over the tunnel and also run routing protocols over them. As noted in my previous post about building an IPsec tunnel, Policy Mode IPsec tunnels do not have interfaces inside the tunnel, and thus routing is much messier and routing protocols cannot be run over them. Routed IPsec Tunnels overcome this problem, but are not available in pfSense.

GRE support has been available in OpenWRT since the Barrier Breaker RC3 release (August 2014). It’s very new, and GRE tunnel configuration is not available via the UI, so it’s not for the faint of heart. Indeed, while attempting to configure this I managed to break the LAN interface of my router and had to SSH back in over the internet to fix things, eventually resorting to wiping the configuration! pfSense has supported GRE for some time, but as with the previous post I’m using 2.2 Alpha.

As always, diagrams make everything easier! As you can see in the diagram, a GRE tunnel is established between the OpenWRT router and pfSense VM. As the tunnel is between (OpenWRT) and (pfSense) the traffic must traverse the encrypted IPsec tunnel. Of course, you will need the appropriate routes in place for the two hosts to communicate, but this is left as an exercise to the reader.

GRE tunnel over IPsec.

pfSense Configuration

First things first, configuring pfSense.

  1. Create a new GRE tunnel via Interfaces > (Assign) > GRE.
    1. Set Parent interface to LAN.
    2. Set GRE remote address to the address of the OpenWRT router (
    3. Set GRE tunnel local address to the tunnel’s inner IP on the pfSense side (
    4. Set GRE tunnel remote address to the tunnel’s inner IP on the other end ( Set the netmask to 30, as the tunnel only has two IP addresses on it.
    5. Click Save.
  2. Create a new Interface for the GRE tunnel via Interfaces > (Assign) > Interface assignments. This step must be performed after creating the GRE tunnel, otherwise the Add option for a new interface will not be available.
    1. A new OPT interface will be created and the Network port should automatically select your new GRE tunnel. Manually select if not.
  3. Create a rule to permit traffic via Firewall > Rules > Your New Interface. Set the rule as you wish, a permit all rule may be the best place to start.

All done! If you start a tcpdump on the pfSense system, filtering for traffic to and from the GRE tunnel destination (, you’ll see a fair amount of ICMP traffic. This is because pfSense has automatically added the other side of the tunnel as a gateway and is monitoring its accessibility and latency via ping.

OpenWRT Configuration

These instructions assume you’re comfortable accessing and configuring OpenWRT via SSH. If not, wait for GRE tunnels to get proper support in OpenWRT’s LuCI web GUI.

  1. Add the GRE tunnel and interface by editing /etc/config/network. Use the following configuration as a template, based on the previous diagram;
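Based on the option list below, the template looks roughly like this in UCI syntax (all addresses are placeholders, since they depend on your own network layout):

```
config interface 'mygre'
        option proto 'gre'
        option ipaddr ''
        option peeraddr ''
        option mtu '1400'

config interface 'mygre_static'
        option proto 'static'
        option ifname '@mygre'
        option ipaddr ''
        option netmask ''
```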

A quick explanation of the options;

  • mygre Interface – the GRE tunnel itself.
    • ipaddr – The local IP address to bind the tunnel’s source to. Not required, but without this I found tunnel traffic was attempting to go over the internet rather than via the LAN!
    • peeraddr – The remote IP address of the tunnel.
    • mtu – Maximum packet size for the tunnel. GRE has an encapsulation overhead and then also goes over the IPsec tunnel which also has an overhead! Setting the MTU to 1400 is a nice safe value, but could be increased further.
    • proto – The protocol of the interface. GRE in this case.
  • mygre_static Interface – The local interface inside the GRE tunnel.
    • proto – The protocol of the interface. Static in this case.
    • ifname – A name for the interface. The at symbol allows this static interface to refer to the tunnel interface.
    • ipaddr – The tunnel’s inner IP address on the OpenWRT side.
    • netmask – The tunnel’s netmask. A /30 netmask is, giving the tunnel exactly two usable addresses.
  2. Install the required packages kmod-gre and gre. This can be done in the LuCI interface via System > Software.
  3. Restart the networking daemon to bring up the new tunnel using /etc/init.d/network restart. You will briefly lose connectivity.

Done! Run an ifconfig to see if the new tunnel interface has been created. You should see a new gre-mygre interface. If you also run tcpdump filtering for traffic to and from the GRE tunnel destination ( you should see the ICMP pings from the pfSense system and, hopefully, replies back from the OpenWRT router.


This forum thread on the OpenWRT forums has been particularly helpful in getting this tunnel up and running. Until GRE tunnels are documented in more depth, it serves as a very useful starting point for configuring a GRE tunnel on OpenWRT, along with the mailing list entries linked from the thread.

Good luck!

Setting up a Policy-Based IPsec VPN between a Palo Alto PA-200 and pfSense

As part of an ongoing home network project, I’m trying to set up an IPsec VPN mesh between different sites – e.g. my own home, family home and a VPS hypervisor located in an offsite datacentre. The reasoning behind this?

  1. Because I can and because it’s fun!
  2. To allow for easier resource sharing and remote diagnosis.
  3. To allow for an always-on VPN for mobile devices, both at home and away.

Being a home network, none of the equipment is particularly Enterprise grade, with the exception of the PA-200 firewall I have. This introduces a bit of a problem, as route-based IPsec VPNs tend only to be supported on Enterprise-grade equipment (support for VTIs is just starting to come in on StrongSwan but has absolutely no documentation right now). Thus, I have to use a policy-based VPN, which has limitations but will work for this specific use case.

Of course, any networking topic is greatly aided by a Visio diagram, so here’s what I’m trying to achieve;

IPsec VPN Network Diagram

Palo Alto Configuration

These instructions are based on the web interface, but should be easily adaptable to the terminal. They also assume you’re running PAN-OS 6, but it’s likely similar for other versions.

  1. Add an IKE Gateway for Phase 1 negotiation via Network > Network Profiles > IKE Gateways > Add.
    1. Set a friendly name for the remote gateway.
    2. Select the interface it will originate from. This should be your WAN or Untrust interface.
    3. Select the IP address on the interface that it will originate from.
    4. Set Peer IP Type static.
    5. Set Peer IP to the IP of the remote gateway (
    6. Select Authentication as Pre-shared Key.
    7. Set a strong, random Pre-shared Key.
    8. Set Local Identification to IP address, enter your WAN/Untrust IP (
    9. Set Peer Identification to IP address, enter your gateway’s WAN IP (
    10. Click OK.
  2. Adjust the default IKE Crypto profile via Network > Network Profiles > IKE Crypto.
    1. Set DH Group to group2 only.
    2. Set Encryption to 3des only.
    3. Set Authentication to sha1 only.
    4. Set Lifetime to 8 hours.
    5. Click OK.
  3. Adjust the default IPsec Crypto Profile via Network > Network Profiles > IPsec Crypto.
    1. Set IPsec Protocol to ESP.
    2. Set Encryption to aes256 only.
    3. Set Authentication to sha512 only.
    4. Set DH Group to group2.
    5. Set Lifetime to 1 hour.
    6. Click OK.
  4. Add a new Tunnel Interface via Network > Interfaces > Tunnel.
    1. Click Add.
    2. Set an Interface Name and optionally number.
    3. Set the Virtual Router and Security Zone to your desired values. I used the main Virtual Router and a separate VPN Tunnel Security Zone.
    4. Click OK. Do not set an IP – this is a policy based VPN remember! There are no IPs on the tunnel interface as a result.
  5. Add an IPsec Tunnel for Phase 2 negotiation via Network > IPsec Tunnels.
    1. Click Add.
    2. Set a friendly name.
    3. Select the Tunnel Interface created in Step 4.
    4. Set Type to Auto Key.
    5. Select the IKE Gateway created in Step 1.
    6. Select the IPsec Crypto Profile created/edited in Step 3.
    7. Enable Showing Advanced Options.
    8. Enable Replay Protection.
    9. On the Proxy IDs tab, add a new Proxy ID. This is an important and often overlooked step when creating a Policy-Based IPsec VPN on Enterprise devices.
    10. Set a friendly name for the Proxy ID.
    11. Set the local IP netmask that will be routed (
    12. Set the remote IP netmask that will be routed (
    13. Set Protocol to Any.
    14. Click OK Twice.
  6. Adjust your security zone rules as appropriate and add a static route to the remote subnet ( via the tunnel interface. You should know how to do this ;)
  7. Commit.

All done! Now for the pfSense side.

pfSense Configuration

These instructions are also for configuration via the web interface, but with pfSense you don’t really have much choice! They are, however, based on pfSense 2.2 Alpha, as I needed to use this version for proper support under my virtualisation infrastructure; things may be slightly different in other versions. Experienced StrongSwan users should also be able to follow these instructions and adapt them to the StrongSwan configuration.

  1. Enable IPsec via VPN > IPsec, checking the Enable IPsec option and clicking save.
  2. Add an IKE Gateway for Phase 1 negotiation via VPN > IPsec.
    1. Set Key Exchange Version to V1. Palo Alto does not yet support V2.
    2. Set Internet Protocol to V4.
    3. Set Interface to the Interface of your external Interface (WAN).
    4. Set Remote gateway to the IP of the remote gateway (
    5. Set Authentication method to Mutual PSK.
    6. Set Negotiation mode to Main (Aggressive is less secure).
    7. Set My identifier to IP address and the External IP ( This must match the Peer Identification set on the Palo Alto device.
    8. Set Peer identifier to IP address and the IP of the remote gateway ( This must match the Local Identification set on the Palo Alto device.
    9. Set the Pre-Shared Key to the same Pre-Shared Key.
    10. Set Encryption Algorithm to 3DES.
    11. Set Hash Algorithm to SHA1.
    12. Set DH key group to 2 (1024 bit).
    13. Set Lifetime to 28800 seconds.
    14. Set NAT Traversal to Disable.
    15. Enable Dead Peer Detection.
    16. Click Save.
  3. Add an IPsec Tunnel for Phase 2 negotiation via VPN > IPsec and expanding the Phase 2 entries section underneath your new Phase 1 definition.
    1. Set Mode to Tunnel IPv4.
    2. Set Local Network Type to LAN subnet ( This must match the Remote Proxy ID set on the Palo Alto device.
    3. Set the Remote Network Type to Network and enter the Address. This must match the Local Proxy ID set on the Palo Alto device.
    4. Set Protocol to ESP.
    5. Set Encryption Algorithms to AES 256 bits only. Do not set Auto.
    6. Set Hash Algorithms to SHA512 only.
    7. Set PFS key group to 2 (1024 bit) only.
    8. Set Lifetime to 3600 seconds.
    9. Click Save.
  4. Click Apply Changes.

Checking Everything is Working

Theoretically, as soon as you complete the configuration on the pfSense side, everything should start working. To verify this, try pinging the other side of the VPN tunnel, making sure to set the source IP appropriately. For example, from the Palo Alto you’d run;
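On PAN-OS, a ping with an explicit source looks like this (both addresses are placeholders – the firewall’s own LAN IP and a host on the remote LAN):

```
ping source host
```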

If that works, then all traffic should work. On the Palo Alto side, it’s really important that you set the Security Zones and Static Route over the tunnel appropriately!

Verifying Status on the Palo Alto Device

Under Network > IPsec Tunnels, check the status indicators for the IPsec tunnel. The first indicator shows phase 2 negotiation, the second indicator shows phase 1 negotiation. You want both of these to be green.

For a more detailed status, you can also run the following commands on the command line;
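The usual PAN-OS commands here are along these lines (the gateway and tunnel names are placeholders for whatever you configured in the earlier steps):

```
show vpn ike-sa gateway my-gateway
show vpn flow name my-tunnel
```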

The first command should show phase 1 and 2 negotiation, the second command should show tunnel information.

Verifying Status on pfSense

Under Status > IPsec, check the Status of the Phase 1 connection; it should be established. Expand the child SA entries section to show the Phase 2 connection. It should show the local and remote subnets.


Hopefully this has helped you get a policy-based IPsec VPN running between a Palo Alto device and pfSense. It’s a shame there’s not greater support for route-based IPsec VPNs in the Open Source world, but I will certainly be watching the addition of VTI support to StrongSwan with great interest. In the meantime, this solution does the job!

If you’re following these instructions, good luck! If you’re still stuck and can’t work out why, Palo Alto have a good page on diagnosing IPsec problems.

Invalid/Malformed SSL Certificates on OSX

OSX users, welcome back.

Recently, I enabled SSL by default for this site. If you try and browse to a non-HTTPS version of this site, you’ll be instantly redirected to the HTTPS version without loading any content. There’s no real security reason behind this; it’s just another thing I wanted to play around with, and worry not, I’ll be doing another how-to blog entry soon.

A few weeks after I enabled it though, a friend mentioned that they couldn’t connect. Indeed, while attempting to browse using Chrome, they received an alarming message stating that Chrome couldn’t connect to the real site. Normally, this sort of error would be reserved for a man-in-the-middle attempt, where someone presented an invalid certificate and pretended to be this site. This was most definitely not the case, but the “technical details” didn’t help much either – claiming that the certificate was invalid and malformed. However, when using Chrome and other browsers on my system and mobile devices, I received no such error. Online SSL validators also didn’t find any errors. The only difference was that my friend was using OSX.

Chrome’s less than helpful error.

Another OSX-using friend identified the issue, however – I was using an 8192-bit key on my certificate, and since 2006 OSX has not supported key lengths greater than 4096 bits. The reasoning behind this is that it stops cryptographic denial-of-service attacks, but even so, modern systems have no problem handling 8192-bit keys.

It is possible to change the maximum key size that OSX will accept; however, this process needs to be performed on every OSX client that’s connecting – not really a ‘fix’ for a website operator. Indeed, the only real fix that would allow OSX users to seamlessly connect to this site would be to downgrade the key size. From a security perspective, this isn’t too much of an issue: 2048-bit keys are currently considered appropriately strong, and 4096-bit keys are only used in paranoid or high-security situations. Thus, I decided to get a new certificate using a 4096-bit key, and OSX users can now connect again without issue.
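As a sketch (not necessarily the exact command used for this site), a replacement 4096-bit key and certificate signing request can be generated with OpenSSL – the file names and subject are illustrative:

```shell
# Create a new 4096-bit RSA key and a CSR to submit to the CA
openssl req -new -newkey rsa:4096 -nodes \
  -subj "/CN=example.com" \
  -keyout example.com.key -out example.com.csr
```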

My thanks go to Harvey for first alerting me to this issue and supplying the screenshot above, and to InnerLambada for letting me know the cause.

Optimising WordPress for Performance

WordPress is a very powerful yet accessible platform. It powers many blogs and websites, including this one. However, many WordPress sites can often be slow to load and navigate. For a user, this is frustrating and will often result in them leaving the site and going elsewhere.

This isn’t WordPress’s fault per se – a lot of optimisations are simply not possible out of the box because of the wide variety of hosting environments.

In this post, I’m going to explain the optimisations I’ve made to make this site load (reasonably) fast. It’s not the fastest site out there, but compared to an un-optimised version, it is several times faster. Some of the changes are more complicated than others and some will require more control over your hosting environment than you have available. However, even implementing a few of the simpler changes will still provide a performance boost.

One of the key methods for increasing performance is caching. Why have the server work several times to produce the same output to several users? Why not take the output that was generated for one user and serve it up to all of the users, so that the server only has to do the work of producing the page once?

Another key method is to decrease the delay in loading resources, including those which have been cached. It is several orders of magnitude faster to load something from memory than it is disk. Memory is cheap and plentiful these days, so it makes sense to store as much as possible in memory.

Finally, parallelisation. Whilst servers often have lots of memory these days, they also have a number of processor cores. If a number of tasks can be performed in parallel, then the overall processing time can be reduced.

Of course, these optimisation techniques are some of the most difficult things to achieve in computing. Thankfully though, many others have put a lot of hard work in to making tools and packages which achieve the previous optimisations, so with these methods in mind, here are the tools and packages that are responsible for serving up this site fast.

This post won’t delve in to the exact installation details of each as it will vary depending on your hosting environment and with time. However, a little bit of Googling will often reveal the instructions to install each in your specific situation. Hopefully this post can act as an overall guide on how to optimise performance and also why each change optimises performance.

Server Software


nginx

nginx is a web server, similar to Apache. However, unlike Apache, nginx is designed to be a high-performance, lightweight web server. That isn’t to say Apache is bad, but nginx tends to perform better and is more scalable.


PHP-FPM

PHP, which is the scripting language used by WordPress, can be run in a number of ways, primarily as CGI, FastCGI or FPM. FastCGI and FPM both spawn pools of processes to run PHP code, improving performance. An important difference, however, is that FPM allows for a single cache to be shared between all of the processes – there’s little point caching something if it’s not going to be shared, after all.

The setup of php-fpm is a bit more involved than FastCGI, but the shared cache is definitely worth it. Additionally, there are two ways for the web server to communicate with php-fpm – over a TCP socket or a Unix socket. The latter is recommended as latency (and thus overall processing time) is reduced, but can be difficult to get working in a secure fashion.
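As a sketch of the nginx side (the socket path here is illustrative and must match the listen directive in your php-fpm pool configuration, e.g. /etc/php5/fpm/pool.d/www.conf):

```nginx
# Hand .php requests to php-fpm over a Unix socket
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}
```

The corresponding pool file would contain listen = /var/run/php5-fpm.sock in place of a TCP address and port.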

Caching & Memory Storage

While nginx and php-fpm will help improve your performance in general, the real performance boost comes from caching and storing resources in memory.

OpCode Caching

PHP is an interpreted language. Thus, when a PHP script is run, the interpreter will convert the instructions to lower level code which is then run. The advantage of this is that new code can be deployed and run instantly, without the need to go through a compilation process. However, it means that the code will need to be interpreted and converted every time it is run.

However, it is possible to cache the output of the interpretation and save it for the next time the code is run, saving time subsequently. One popular way to achieve this in PHP was via the use of APC (the Alternative PHP Cache). However, with the release of PHP5.5, the ZendOpcache is now part of PHP’s standard distribution and is the recommended cache. Enabling it in a PHP5.5 installation is extremely simple and does not require any changes to existing PHP scripts – it’ll work against any PHP script out of the box as it is an OpCode cache. OpCode caches are a great way to get a quick, free performance boost without making any major changes.
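Enabling it typically amounts to a few php.ini lines; the values below are common starting points rather than tuned recommendations:

```ini
; Enable the built-in OpCode cache (bundled with PHP 5.5+)
zend_extension=opcache.so
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=4000
```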

Object Caching (Userland Caching)

One other feature of the aforementioned APC cache was its ability to cache static objects in memory, such as images and Javascript in addition to OpCodes. However, the preferred ZendOpcache only acts as an OpCode cache and thus does not provide any Object/Userland Caching.

The good news, however, is that a userland-only caching version of APC exists, called APCu. Whilst not part of the standard PHP installation, it is available through most package managers and also via PECL.

However, Object/Userland Caching will not work unless your PHP script specifically makes use of it. More on how to get WordPress to use your shiny new cache later on.
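To illustrate the pattern a script has to follow, here is a minimal, hypothetical cache-aside sketch using APCu's functions – the key name, TTL and `build_sidebar()` helper are placeholders, not anything WordPress provides:

```php
<?php
// Hypothetical cache-aside pattern: try the cache first, fall back to
// the expensive work and store the result for next time.
$key  = 'sidebar_html';          // arbitrary cache key
$html = apcu_fetch($key, $hit);  // $hit is set to true on a cache hit

if (!$hit) {
    $html = build_sidebar();       // placeholder for the expensive work
    apcu_store($key, $html, 300);  // cache the result for 5 minutes
}

echo $html;
```

This is exactly the kind of call a plugin makes on your behalf, which is why the script itself has to opt in.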

Minification

If you look at the HTML source of most sites, you'll often see a nicely laid out document, with sensible, human-readable formatting and naming. But why does your browser need to see pretty indentation? It could all be on one line and still be perfectly parsable by a browser – but then the markup wouldn't be readable to the designer who has to write the page.

It might seem like a very small optimisation to shorten names, cut out whitespace and so on, but “minified” pages and Javascript libraries can often cut down the overall size of a file by double-digit percentages – not something to be sniffed at. Smaller files mean faster transfers and more files that can be stored in your memory cache. Just what we want!
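As a trivial example, the two snippets below are identical as far as the browser is concerned, but the second is a fraction of the size on the wire:

```html
<!-- Human-readable source -->
<div class="post">
    <h1>Hello</h1>
    <p>Welcome to the blog.</p>
</div>

<!-- The same markup, minified -->
<div class="post"><h1>Hello</h1><p>Welcome to the blog.</p></div>
```

Multiply that saving across every tag, stylesheet and script on a page and the double-digit percentages quickly add up.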

As a compromise between human-readability for designers and minified files for parsing by the browser, we can use a minifier to transparently perform the process in between. However, the process isn’t free – your server will of course need to do some work to perform the minification. Thus, to see a performance boost from minification, having both an OpCode and Userland Cache is extremely important.

How do you perform the minification though? As before, more on this later.

Bringing it all together – adding support to WordPress

Several of the aforementioned features, such as Userland Caching, need the PHP script to be specifically adapted to take advantage of them. Without support in the PHP script, they won’t be utilised and no performance benefit will be realised.

WordPress by default does not provide support for many of these features as it needs to ensure maximum compatibility across a wide range of web hosting platforms. There’s nothing wrong with this and it is a good decision for the WordPress team to make.

However, a number of plugins are available which add this functionality to WordPress. The most notable is W3 Total Cache. This plugin adds a huge number of caching options to WordPress and is highly configurable to suit almost any setup, supporting a wide range of backend caches.
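For reference, W3 Total Cache hooks into WordPress via the standard `WP_CACHE` mechanism in wp-config.php. The plugin normally adds this line itself when activated, but it's worth knowing where the hook lives if you ever need to debug it:

```php
<?php
// Added to wp-config.php; tells WordPress to load
// wp-content/advanced-cache.php (which W3 Total Cache installs)
// very early in the request, before most of core.
define('WP_CACHE', true);
```

If caching mysteriously stops working after editing wp-config.php, a missing or mangled `WP_CACHE` define is a common culprit.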

If you have a low-traffic blog (such as, sadly, this one), it's important to remember that cached resources have a lifetime – if they are not accessed within a certain time, they will be purged, meaning they must be regenerated the next time someone requests them, wiping out any performance benefit you had hoped for. Increasing the cache lifetime is not a good idea – all resources should have a finite lifetime, as they can and will change! Instead, you can keep the cache primed and ready by having an automated crawler access all of the relevant resources. W3 Total Cache has an option for this and it should definitely be enabled; even on a high-traffic site it ensures that all users receive a consistently quick loading experience.

Verifying the Results

Now that you’ve implemented some or all of the above methods, how do you actually verify it’s working and that all your effort wasn’t for nothing?

Cache Statistics

Both Zend OPcache and APCu offer the ability to view statistics on cache utilisation, as well as the contents of the cache. This helps you verify that the cache is being used and that it is large enough to hold all of the content you wish to cache.

For Zend OPcache, a number of scripts exist to check the cache status. I use opcache-status, as it provides a single script that presents all of the relevant information in a clean, accessible format; other scripts are available though.

For APCu, a script offering similar functionality is distributed with the installation. Check your distribution's package for its exact location; under Ubuntu it can be found at /usr/share/doc/php-apc/apc.php.

Load Testing

The other way to test if everything is working is to simply load the website and examine the time taken to return the results. You can either use your browser’s developer tools to gather network statistics, or use a tool such as GTmetrix or Load Impact. GTmetrix is great for seeing how individual resources on your site load and what causes delay, whereas Load Impact is better suited for seeing how your site performs when lots and lots of people are accessing it at the same time.
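For a quick command-line measurement to complement those tools, curl can report its own timing breakdown. The URL below is a placeholder for your own site:

```shell
# Print a timing breakdown for a single request (URL is a placeholder).
# time_namelookup: DNS; time_connect: TCP handshake; time_total: full transfer.
curl -o /dev/null -s \
    -w 'DNS: %{time_namelookup}s  Connect: %{time_connect}s  Total: %{time_total}s\n' \
    https://example.com/
```

Running it a few times before and after enabling each cache gives a rough but honest picture of the improvement.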

Finally, if you’re minifying content, make sure you view the source of the page to ensure it actually has been minified!

Do keep in mind that if you're logged in to your blog, W3 Total Cache won't serve cached or minified pages to you. To see them, either log out or, better, open an Incognito browser window.

Conclusion

There’s a good reason why these changes are not available out of the box – they can be incompatible with a large number of environments and require extra setup, which can put off many potential users. Given time, though, the results can be impressive, with load times that are several times quicker, and more consistent, than before.

Despite the hard work needed to get some of these methods working, I hope they produce a noticeable benefit for your blog or site. Improving performance this way not only makes your site load faster but, crucially, helps it serve more simultaneous users without an increase in hardware.

Good luck and happy optimising!