This post is a summary of my BrightonSEO October 2024 talk on how to validate Web Performance and Web Vitals optimization efforts.
I’ve been doing web performance consultancy for many years now, and the most frustrating thing I keep encountering is that people optimize things on their websites but don’t even know if it worked! Over the course of these projects, I’ve observed that we focus much more on the technical aspects of site speed optimization than on measuring the impact.
What does Web Performance matter for?
Some companies justify site speed work for users, others for SEO, others for conversion rate optimization, accessibility, or even cost reduction. If we step back a little and zoom out, we see that every single reason mentioned here directly impacts one thing: your business.
In my opinion, doing Web Performance for just one reason isn’t the best approach. Many teams do it for SEO and rankings alone. Don’t do it for a single reason; do it for your users and your business.
Web Performance costs money
Beyond developers, project managers, product owners, and QA engineers, an experienced and seasoned Web Performance Consultant’s rate starts at $1,500. That translates to $1 for every 30 seconds. So if I say: don’t lazy-load your product details page (PDP) hero image, so its fetch isn’t delayed and your LCP doesn’t suffer, that’s $1.
Site speed’s unpredictable results
The frustration comes from the fact that teams invest a lot of money and effort but, in the end, are unable to measure the impact or produce any evidence of it. That’s really sad. Worse than that, projects can easily go wrong. Here is an example of a web migration project that was originally supposed to improve Web Vitals and user experience. You know, shiny tech and modern frameworks. Developers love upgrading the technical stack to the most recent versions and trends, mainly because framework release notes promise things like “Web Vitals improvements” or a “significant TTFB (Time To First Byte) speed-up”. Boom: this website doubled its TTFB. Sad.
This talk’s goal is to help you succeed in your website speed project and to measure the impact of your actions. So let’s make Web Performance more predictable and impact-proven.
I believe the main reasons behind these failures are that teams aren’t using the right tools, aren’t monitoring the right metrics, and don’t set clear expectations.
Is Google Lighthouse useful to validate site speed optimizations?
No. Google Lighthouse isn’t the best tool to validate your site speed efforts; check our post on “The Blind Spots of Google Lighthouse Score for Web Vitals“. In two words: Google Lighthouse is an emulation tool (a robot, running in a lab) and doesn’t represent what your users actually experience on your website and web pages.
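If you want a quick look at field data before setting up your own monitoring, Google’s Chrome UX Report (CrUX) API exposes real-user percentiles for public origins. Here is a minimal sketch; the API key and example origin are placeholders, and the metrics returned depend on the data available for that origin:

```typescript
// Query the Chrome UX Report (CrUX) API for real-user field data.
// API_KEY and the origin below are placeholders, not real values.
const API_KEY = 'YOUR_API_KEY';

async function queryCrux(origin: string): Promise<void> {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${API_KEY}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        origin,
        formFactor: 'PHONE', // field data for mobile users only
        metrics: ['largest_contentful_paint', 'cumulative_layout_shift'],
      }),
    },
  );
  const data = await res.json();
  // Each metric comes back with a histogram and a p75 value, e.g.
  // data.record.metrics.largest_contentful_paint.percentiles.p75
  console.log(JSON.stringify(data.record?.metrics, null, 2));
}

queryCrux('https://example.com');
```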
Get a RUM tool to Validate Your Site Speed Work
You need to set up the right tools, measure, know your current performance, and build a history: set a baseline. For that, you need data from real users using your product and browsing your website. This is what Real User Monitoring (RUM) software provides.
What is RUM (Real User Monitoring)?
RUM is like a weather station. A weather station is a set of sensors that gives governments and cities real-time weather conditions (rain, wind, sun) and makes it possible to monitor, alert, and predict. In the same way, a RUM tool, thanks to a snippet (the sensor) we put in a website’s source code, reports real users’ data and interactions. RUM software assesses user experiences as fast, slow, or laggy.
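To make the “sensor” idea concrete, here is a minimal sketch of what such a snippet can look like, using Google’s open-source web-vitals library and a hypothetical /rum collection endpoint (commercial RUM tools bundle all of this for you):

```typescript
// Minimal RUM "sensor": collect Web Vitals from real users and beacon them
// to a collection endpoint. "/rum" is a hypothetical endpoint on your backend.
import { onTTFB, onLCP, onCLS, onINP, type Metric } from 'web-vitals';

function sendToAnalytics(metric: Metric): void {
  const body = JSON.stringify({
    name: metric.name,     // "TTFB" | "LCP" | "CLS" | "INP"
    value: metric.value,   // milliseconds (unitless score for CLS)
    rating: metric.rating, // "good" | "needs-improvement" | "poor"
    page: location.pathname,
  });
  // sendBeacon survives page unloads; fall back to fetch with keepalive.
  if (!navigator.sendBeacon('/rum', body)) {
    fetch('/rum', { method: 'POST', body, keepalive: true });
  }
}

onTTFB(sendToAnalytics);
onLCP(sendToAnalytics);
onCLS(sendToAnalytics);
onINP(sendToAnalytics);
```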
The main goals of a RUM tool are to prioritize, monitor, and alert, but above all to validate web performance work. Real User Monitoring data has two benefits:
- A clear view of your users’ experience
- The ability to optimize things and see the impact
We can then loop over these two steps: get a clear view, optimize, and see the impact. This is why RUM data is the cream of the crop. I honestly don’t believe a serious web performance project can succeed without RUM.
Of course, the goal isn’t to chase the most complex RUM tool on the market. The simpler, the better: complex tools don’t get used or adopted by teams. Get yourself a simple-to-use RUM tool that is sufficient. At the end of the day, we are doing site speed, not NASA work.
How to benefit from RUM data?
Web Performance data is asymmetric, which is why it’s so important to split it up and break it down. Breaking down large datasets is how you extract insights and value.
A RUM tool can segment the data along multiple axes: by device (mobile, tablet, desktop, smart TV), by the quality/speed of the user’s connection, by location, by navigation type (load, reload, back-forward, cache...), by CPU and memory capabilities, and also by page type.
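As an illustration, here is a sketch of how some of these dimensions can be captured client-side. The pageType data attribute is a hypothetical convention you would set server-side, and navigator.connection isn’t supported in every browser, hence the guard:

```typescript
// Sketch: segmentation dimensions to attach to every RUM beacon.
interface RumSegments {
  device: 'mobile' | 'tablet' | 'desktop';
  connection: string;     // "4g", "3g", ... where the Network Information API exists
  navigationType: string; // "navigate" | "reload" | "back_forward" | "prerender"
  pageType?: string;      // e.g. "PDP", "PLP", "home" (hypothetical convention)
  // Location is usually resolved server-side from the request IP.
}

function collectSegments(): RumSegments {
  const nav = performance.getEntriesByType('navigation')[0] as
    | PerformanceNavigationTiming
    | undefined;
  const width = window.innerWidth; // crude device heuristic, for the sketch only
  return {
    device: width < 768 ? 'mobile' : width < 1024 ? 'tablet' : 'desktop',
    connection: (navigator as any).connection?.effectiveType ?? 'unknown',
    navigationType: nav?.type ?? 'unknown',
    pageType: document.body.dataset.pageType,
  };
}
```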
Here is an example of Web Performance data for a French and international e-commerce website. If we look at France-based users, the experience is nearly perfect. Now, if we check the experience of Lebanon-based users... Aha! A different story: TTFB (Time To First Byte) doubles, and CLS (Cumulative Layout Shift) goes from perfect to failing.
Knowing that Lebanese users are the third most important group of paying customers makes the topic even more important.
Another RUM data segmentation example, from a big French news publisher: on 4G, LCP (Largest Contentful Paint) is perfect; on 3G, the 75th percentile moves above 4 seconds!
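This is also why percentiles matter more than averages: an average blends the fast 4G majority with the slow 3G tail, while p75 keeps the tail visible. A tiny sketch with made-up LCP samples:

```typescript
// Why percentiles beat averages: the slow tail survives at p75.
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.max(0, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[idx];
}

const lcpSamples = [1200, 1300, 1400, 1500, 9000, 9500]; // ms, made-up numbers

console.log(percentile(lcpSamples, 75)); // 9000: the slow tail is visible
// The average (~3983 ms) hides how bad the slow experiences really are.
```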
Optimizations need to be atomic
Another reason teams fail to validate their web performance optimizations is the lack of clear expectations. I often hear someone say: “Let’s optimize our images,” and stop there. That’s not sufficient. “Image optimization” is very vague; we could list dozens of optimization techniques behind it. You can’t do multiple things at the same time and still know what worked.
You do need small, isolated, detailed, and atomic site speed optimizations. Atomic optimizations are easier to validate.
Here is the format of the user story (ticket or optimization) I use when doing web performance consulting.
I start with the title of the ticket, for example: Add fetchpriority=&quot;high&quot; to the hero image. If I stopped there, it wouldn’t be sufficient, so I always specify a few more things:
- Which metric are we trying to impact? In this case, LCP
- Which page type does this ticket concern? The PDP in this example
- Which devices are we optimizing for? Mobile and desktop
- Where are the users we are optimizing for located? Belgium [remember the importance of segmenting by location]
Then I add the user story description: the why and the technical how-to. And I don’t stop there. Another crucial section is “How can we measure the impact?”. Here I set clear directions on how we can measure the impact of our optimization. For example: we are going to monitor RUM data for the PDP (Product Details Page) segment, looking at LCP percentiles and distribution. We are also going to exclude Firefox data because Firefox doesn’t yet support priority hints (this way we remove noisy signals).
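For illustration, here is a sketch of the optimization itself; the img.pdp-hero selector is hypothetical, and in plain HTML this is simply &lt;img src=&quot;...&quot; fetchpriority=&quot;high&quot; loading=&quot;eager&quot;&gt;:

```typescript
// Sketch of the atomic optimization: give the PDP hero image a high
// fetch priority so the browser downloads it early (targets LCP).
// "img.pdp-hero" is a hypothetical selector for the hero image.
const hero = document.querySelector<HTMLImageElement>('img.pdp-hero');
if (hero) {
  // Priority hint; browsers without support (e.g. Firefox, at the time of
  // the talk) simply ignore the attribute, which is exactly why we exclude
  // Firefox traffic when measuring the impact.
  hero.setAttribute('fetchpriority', 'high');
  hero.loading = 'eager'; // never lazy-load the LCP candidate
}
```

In practice you would put the attribute directly in the server-rendered HTML so the browser’s preload scanner sees it before any script runs; the script form above is only to illustrate the change.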
Final notes
To succeed in your site speed and Web Vitals project, you need to be able to validate the impact. The main levers to achieve this are:
- Get yourself a RUM tool, learn how to use it, and make your team adopt it.
- Monitor the right metrics, and use percentiles, not averages.
- Always make optimizations atomic for easier validation.
You can check the slides here and watch the replay (available soon) on the BrightonSEO website.