Google recently announced that it is using site speed in web search ranking. While the weighting of this metric is slight (less than 1% of search queries will be affected), it is still good practice to make sure your web resources are optimised.
I was a little shocked to discover MASHe didn’t fare particularly well on speed tests. Self-hosted WordPress blogs inherently give you a lot of flexibility in how you configure your blog, letting you make endless tweaks to its appearance and functionality via plugins. The cost of this flexibility is that you can quickly turn your site into a quagmire of extra code that slows down page loading times. You are also reliant on your server configuration being correctly optimised. This post documents what I discovered and how I fixed it.
Having already signed up to Google’s Webmaster Tools I was able to check the Labs –> Site performance section and was a little shocked to see “your site takes 6.9 seconds to load (updated on Apr 2, 2010). This is slower than 83% of sites”.
My first step was to diagnose where I was losing time. The site performance results give you some pointers but these aren’t real-time and I wanted a way to make sure I was heading in the right direction. I chose to download Google’s Page Speed and Yahoo’s YSlow. Both these tools run a barrage of tests on a web page, highlighting areas where you can make improvements.
Because performance of self-hosted WordPress blogs is a known problem there are a number of plugins available to optimise performance. Previously I had been using WP Super Cache, but as the site performance data showed there was perhaps more I could be doing, so I switched to W3 Total Cache, which has some nice features. The keywords to look out for when optimising websites are page caching, server-side gzip compression, content delivery network (CDN) integration (also known as parallelizing) and minifying.
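To give a flavour of what page caching is actually doing under the hood, here is a minimal sketch (in Python rather than the PHP a real WordPress plugin would use, and with made-up function names): the expensive page-building step is skipped whenever a fresh copy already exists in the cache.

```python
import time

cache = {}   # rendered HTML keyed by URL
TTL = 3600   # seconds a cached page stays fresh (hypothetical value)

def render(url):
    # stand-in for WordPress assembling the page from the database
    return f"<html>page for {url}</html>"

def get_page(url, now=None):
    now = now if now is not None else time.time()
    entry = cache.get(url)
    if entry is not None and now - entry[1] < TTL:
        return entry[0]          # cache hit: no database/PHP work needed
    html = render(url)           # cache miss: build the page the slow way
    cache[url] = (html, now)
    return html
```

The trade-off, which I ran into below, is that a page nobody has visited recently still has to be built the slow way on its next request.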
Just to expand on a couple of these:
Server-side compression – the basic idea is that a requested webpage is compressed at the server before being sent to the user, reducing the bandwidth required. A number of third-party hosts don’t enable this feature, presumably because of the increased processing load on their servers. So despite my best efforts I was unable to use server-side compression.
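You can see why this matters with a quick back-of-the-envelope test. HTML is highly repetitive markup, so gzip typically shrinks it dramatically; this little Python sketch (using made-up sample content) compresses a page in memory, which is essentially what mod_deflate or a caching plugin does per request:

```python
import gzip

# hypothetical sample page: markup repeats a lot, which gzip exploits
html = b"<html>" + b"<p>Lorem ipsum dolor sit amet</p>" * 200 + b"</html>"

compressed = gzip.compress(html)
print(f"{len(html)} bytes uncompressed, {len(compressed)} bytes gzipped")
```

On text like this the gzipped payload is a small fraction of the original, which is bandwidth the visitor never has to download.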
CDN - because there is a limit to the number of page elements the browser can download from one hostname at a time, distributing assets across hostnames allows more items to be downloaded simultaneously. A quick fix for me was to use our host provider’s control panel to create a sub-domain http://mashe.img.rsc-ne-scotland.org.uk which mirrors my existing directory structure. W3 Total Cache then allows you to choose the types of files to serve from the different domain, practically allowing you to double the number of page elements downloaded simultaneously.
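Conceptually, the rewriting that a plugin does here is simple: each static asset is pinned to one of the available hostnames, deterministically, so the same file always gets the same URL (keeping it cacheable) while the full set of assets spreads across both hosts. A hedged sketch, using the hostnames from this post and a hypothetical `asset_url` helper:

```python
import zlib

# the main site plus the mirrored sub-domain created above
HOSTS = [
    "http://www.rsc-ne-scotland.org.uk",
    "http://mashe.img.rsc-ne-scotland.org.uk",
]

def asset_url(path):
    """Pin each asset path to one hostname via a stable hash, so browsers
    can download from both hosts in parallel without URL churn."""
    return HOSTS[zlib.crc32(path.encode()) % len(HOSTS)] + path
```

Because the hash is stable, repeat visitors hit their browser cache; because roughly half the assets land on each host, the browser’s per-hostname connection limit effectively doubles.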
The results are looking reasonably promising, with cached pages loading in 1.5-2 seconds. I’ve also gone from grade F on YSlow to grade B/C. The problem I’ve still got is that pages which haven’t been visited for a long time (so aren’t cached, or need re-caching) take 20 seconds to load. I’ll perhaps come back to this another day unless anyone has some immediate suggestions. In the meantime I need to get back to posts on enhancing teaching and learning with technology ;-)