Head of Development Olly Jackson explores the “fast and responsive” factor from our new PulseCX digital Customer Experience benchmarking tool.

A faster site makes for happier users. And happy users are much more likely to convert. 

Walmart found that every 100ms improvement in site speed generated a 1% increase in revenue.

Happy users = happy Finance Directors!

Google and other search spiders are also much happier when your site loads quickly, as they need fewer resources to crawl your content.

There are many aspects to webpage performance, but we’re going to focus here on improving “time to first byte”, often shortened to TTFB. TTFB is the amount of time from a user requesting a page on your website to the web server first responding with information. A high TTFB will make your site feel sluggish, whilst a low TTFB makes for a snappier user experience. Google recommends an ideal response time of less than 200 milliseconds.

Time To Interactive (TTI) is another important metric: it measures how long your web browser takes to draw the page and fetch additional assets before you can click, tap or scroll.
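If you want to see the TTFB figure for yourself, the browser’s Navigation Timing API exposes it directly. The snippet below is a minimal sketch, not part of PulseCX or any particular tool, that you can paste into the browser console or bundle into a page:

```typescript
// Minimal sketch: read TTFB for the current page from the Navigation Timing API.
const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];

if (nav) {
  // responseStart is the time (from the start of navigation) at which the
  // first byte of the server's response arrived - i.e. time to first byte.
  const ttfb = nav.responseStart;
  console.log(
    `TTFB: ${ttfb.toFixed(0)}ms`,
    ttfb < 200 ? "(within Google's 200ms target)" : "(worth investigating)"
  );
}
```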

Google Lighthouse is a great tool for giving quick advice on potential improvements to your website and comes bundled within the Developer Tools in Chrome. Services like Calibre can be used to automate Lighthouse checking and graph metrics over time. We use this for a number of clients where our remit includes monthly checks of website performance.
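If you would rather script your own checks than use a hosted service, Lighthouse also ships as a Node module. The sketch below assumes the lighthouse and chrome-launcher npm packages and simply logs the performance score; a real monitoring setup would store the result somewhere it can be graphed over time:

```typescript
// Rough sketch of an automated Lighthouse performance check,
// assuming the "lighthouse" and "chrome-launcher" npm packages.
import lighthouse from "lighthouse";
import * as chromeLauncher from "chrome-launcher";

async function auditPerformance(url: string): Promise<void> {
  // Launch a headless Chrome instance for Lighthouse to drive.
  const chrome = await chromeLauncher.launch({ chromeFlags: ["--headless"] });
  try {
    const result = await lighthouse(url, {
      port: chrome.port,
      onlyCategories: ["performance"],
      output: "json",
    });
    // Lighthouse scores are 0-1; log as a percentage for easier graphing.
    const score = result?.lhr.categories.performance.score ?? 0;
    console.log(`${url}: performance score ${(score * 100).toFixed(0)}`);
  } finally {
    await chrome.kill();
  }
}

auditPerformance("https://example.com");
```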

So, what can lead to a high TTFB? A multitude of things. But here are some possible causes:  

An under-powered web server leading to resource starvation. The traffic to your website might have outgrown the specification of your hosting solution. It is worth raising this with your hosting provider to see what performance metrics they are measuring. Often, though, it points back to some sort of bottleneck in the web application itself (see the next point).

Computationally expensive work happening on the web server. To generate a web page the server might have to perform any number of operations: accessing a database, resizing an image, reading data from an external API. Any of these can become a bottleneck unless architected correctly. Adding profiling to your application, or using the tooling built into your CMS or framework, is a useful starting point. A broad-brush approach is to add a caching layer, either via a platform like CloudFront, Cloudflare or Fastly, or via self-hosted software such as Varnish, as sketched below. Knowing when and how to clear that cache effectively is an interesting topic worthy of a separate article.
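Whichever caching layer you choose, it can only help if your application tells it what is safe to cache. The sketch below is purely illustrative, a hypothetical Node handler rather than any particular CMS, showing the kind of Cache-Control header that lets a CDN or Varnish serve a page without hitting your server every time:

```typescript
// Illustrative only: a hypothetical Node HTTP handler showing the kind of
// Cache-Control header that lets a CDN or Varnish cache an expensive page.
import { createServer } from "node:http";

// Placeholder for whatever expensive rendering your application does
// (database queries, API calls, templating, image work, and so on).
async function renderExpensivePage(path: string): Promise<string> {
  return `<html><body><h1>Rendered ${path}</h1></body></html>`;
}

const server = createServer(async (req, res) => {
  const html = await renderExpensivePage(req.url ?? "/");

  res.writeHead(200, {
    "Content-Type": "text/html; charset=utf-8",
    // Shared caches (CDN/Varnish) may keep this page for 5 minutes;
    // browsers revalidate after 1 minute.
    "Cache-Control": "public, max-age=60, s-maxage=300",
  });
  res.end(html);
});

server.listen(3000);
```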

A high-latency network link between the user and the server. This is usually caused by someone being on a mobile device in an area with poor connectivity, but there are also rural areas around the world where high-speed broadband is simply not available. Broadly speaking, you need to make the total download size of your web pages as small as possible, including all of the assets linked from those pages: images, fonts, CSS and JavaScript. How much of a problem this is for your audience dictates how much to invest in solving it. There are often easy wins to be had by ensuring that images uploaded to a CMS are sized and optimised correctly, as in the sketch below.
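That last win can often be automated at upload time. The sketch below assumes the sharp npm package (your CMS may well have its own image pipeline) and shows the general idea of resizing and re-encoding an image before it is published:

```typescript
// Sketch of resizing and re-encoding an uploaded image with the "sharp"
// npm package - an assumption; swap in whatever pipeline your CMS provides.
import sharp from "sharp";

async function optimiseUpload(inputPath: string, outputPath: string): Promise<void> {
  await sharp(inputPath)
    .resize({ width: 1600, withoutEnlargement: true }) // cap at a sensible display width
    .webp({ quality: 80 })                             // re-encode as WebP
    .toFile(outputPath);
}

optimiseUpload("uploads/original.jpg", "public/images/optimised.webp")
  .catch((err) => console.error("Image optimisation failed", err));
```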

Act fast!

Recently we made a client’s blog 680% faster. Within a month we had increased sales by 13 times the project cost per day, whilst also reducing platform costs by 31%.

To discuss how we could help speed up your site, or to compare your performance to your competitors as part of a PulseCX benchmark, contact Managing Partner Phillip Lockwood-Holmes on phillip@whitespacers.com.