Web Vitals monitoring pitfalls and how to avoid them
- Aymen Loukil
There are many online website speed test tools and different ways of measuring web performance and Web Vitals. Many may mislead you with inaccurate data. Others are far from what your actual users experience.
In this guide, I’ll share the common pitfalls to avoid in site speed monitoring.
Relying only on synthetic speed test tools:
Synthetic performance tools are useful for debugging issues. Developers use them to emulate navigation performance on a page. These tools, however, don’t represent the real experience of users.
You should primarily focus on your real users' data and what it tells you, and that isn't possible with synthetic tools alone.
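As a rough sketch of what field (RUM) aggregation looks like, you collect metric samples from many real sessions and report a percentile rather than a single lab run. The sample values below are made up; the 75th percentile is the one Google's CrUX dataset reports.

```javascript
// Hypothetical RUM aggregation: compute the 75th percentile of
// LCP samples (in ms) collected from real user sessions.
function p75(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank method for the 75th percentile.
  const idx = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[idx];
}

// Five real-user LCP samples; a single synthetic run would only
// ever show one value, hiding the spread.
const lcpSamples = [1800, 2100, 2600, 3900, 1500];
console.log(p75(lcpSamples)); // → 2600
```

A percentile like this summarizes the whole distribution of experiences, which is exactly what a one-off synthetic test cannot do.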
Trusting the Google Lighthouse score:
Many use Google Lighthouse score as an indicator of their site performance. It’s a big pitfall!
Lighthouse analyzes a single page under emulated conditions. It runs a number of audits and calculates a weighted score.
Moreover, Lighthouse has variability issues: do 3 consecutive runs and you won't get the same score. A Lighthouse run also covers just one page under specific conditions.
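If you do rely on a Lighthouse score at all, one common way to reduce run-to-run noise is to run it several times and keep the median score rather than a single run. A minimal sketch (the scores below are invented):

```javascript
// Take the median of several Lighthouse performance scores to
// smooth out run-to-run variability. Scores are hypothetical.
function medianScore(scores) {
  const sorted = [...scores].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}

console.log(medianScore([87, 95, 91])); // → 91
```

This tames the variability, but it doesn't change the deeper problem: the score still describes one emulated page load, not your users.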
In my experience, a good Lighthouse score won't guarantee your users a good experience. Brendan Kenny, a Google engineer, did research on the correlation between the Lighthouse score and field data. He found that almost half of all pages that scored 100 on Lighthouse didn't meet the recommended Core Web Vitals thresholds!
I also gave a talk at Brighton SEO (April 2023): What your Google Lighthouse score hides from you.
Monitoring on a single device type:
Don't neglect monitoring site speed on every device type your audience meaningfully uses.
Above all, compare site speed data across devices, for example desktop vs. mobile (it may surprise you!).
The gap between website speed across devices depends on many factors. The main factors are CPU, device memory, internet connection, and how you designed your website.
This screenshot from Speetals compares website performance on mobile and desktop side by side. It shows a huge gap in the FCP, LCP, CLS, and INP metrics between the two devices. Without monitoring and visualizing both, we can't spot it.
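As a sketch of why segmenting matters, here is how you could split RUM samples by device and compute a per-device 75th percentile. The data is hypothetical; the point is that the gap only becomes visible once the segments are separated:

```javascript
// Hypothetical RUM samples: LCP in ms, tagged with device type.
const samples = [
  { device: 'mobile', lcp: 3400 },
  { device: 'desktop', lcp: 1200 },
  { device: 'mobile', lcp: 4100 },
  { device: 'desktop', lcp: 1500 },
  { device: 'mobile', lcp: 2900 },
  { device: 'desktop', lcp: 1100 },
];

// Group samples per device and compute each group's p75.
function p75ByDevice(rows) {
  const groups = {};
  for (const { device, lcp } of rows) {
    (groups[device] ??= []).push(lcp);
  }
  const result = {};
  for (const [device, values] of Object.entries(groups)) {
    values.sort((a, b) => a - b);
    result[device] = values[Math.ceil(0.75 * values.length) - 1];
  }
  return result;
}

console.log(p75ByDevice(samples)); // → { mobile: 4100, desktop: 1500 }
```

Blended together, these samples would average out to a mediocre-but-unremarkable number; segmented, the mobile problem is obvious.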
Using improper site speed metrics:
Many well-known web performance metrics are outdated today. Stick with the ones that matter most. Here are mine:
- TTFB for server response time;
- FCP and LCP for loading;
- FID and INP for interactivity;
- CLS for layout stability
These 6 performance metrics cover most website speed issues!
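For reference, Google publishes "good" / "needs improvement" / "poor" thresholds for these metrics, which can be encoded in a small lookup. This is a sketch; values are in milliseconds except CLS, which is a unitless score:

```javascript
// Google's published Web Vitals thresholds: [good ≤, poor >].
// Times in ms; CLS is unitless.
const THRESHOLDS = {
  TTFB: [800, 1800],
  FCP: [1800, 3000],
  LCP: [2500, 4000],
  FID: [100, 300],
  INP: [200, 500],
  CLS: [0.1, 0.25],
};

// Classify a metric value against its thresholds.
function rate(metric, value) {
  const [good, poor] = THRESHOLDS[metric];
  if (value <= good) return 'good';
  if (value <= poor) return 'needs improvement';
  return 'poor';
}

console.log(rate('LCP', 2300)); // → "good"
console.log(rate('INP', 350));  // → "needs improvement"
console.log(rate('CLS', 0.3));  // → "poor"
```

Feed your field p75 values through a function like this and you get the same ratings Google's tooling would assign.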
Confusing domain-level and page-level data:
Domain (origin) data is useful to evaluate how your site is performing in an aggregated way.
Page-level data is where you dive into performance details for individual page types on your website (product page, listing page, content page, etc.).
Monitoring both is a must-do! And the most important part is the TRANSITION from the first to the second.
Let’s try this with an example:
- My domain LCP is not that good.
- Which page type contributes most to this bad LCP score?
- Prioritize it!
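That transition can be sketched as a small calculation: given hypothetical per-page-type LCP data weighted by traffic share, find the page type that drags the origin-level score down the most. The data, the weighting, and the function name are all illustrative assumptions:

```javascript
// Hypothetical breakdown: p75 LCP (ms) and share of traffic
// per page type.
const pageTypes = [
  { type: 'product', lcpP75: 4200, trafficShare: 0.5 },
  { type: 'listing', lcpP75: 2800, trafficShare: 0.3 },
  { type: 'content', lcpP75: 1900, trafficShare: 0.2 },
];

// Rank page types by (distance past the 2500 ms "good" LCP
// threshold) × (traffic share), and return the worst one.
function worstContributor(rows, goodThreshold = 2500) {
  return rows
    .map((r) => ({
      ...r,
      impact: Math.max(0, r.lcpP75 - goodThreshold) * r.trafficShare,
    }))
    .sort((a, b) => b.impact - a.impact)[0].type;
}

console.log(worstContributor(pageTypes)); // → "product"
```

Here the product pages are both slow and heavily visited, so they are the obvious optimization priority.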
Validating outcomes too slowly:
You made a web performance optimization and pushed it live. Now what?
Do you wait a month to validate the outcomes with real users? That's too late!
You need fast (daily) RUM validation to confirm wins or detect regressions per page type.
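A sketch of what such a daily check could look like: compare today's p75 against a pre-deploy baseline, with a small tolerance so normal day-to-day noise isn't flagged. The 5% tolerance and the numbers are assumptions, not a recommendation:

```javascript
// Flag a regression when today's p75 exceeds the baseline by more
// than a noise tolerance (5% here, an arbitrary choice).
function checkDaily(baselineP75, todayP75, tolerance = 0.05) {
  if (todayP75 > baselineP75 * (1 + tolerance)) return 'regression';
  if (todayP75 < baselineP75 * (1 - tolerance)) return 'win';
  return 'stable';
}

console.log(checkDaily(2600, 2100)); // → "win"
console.log(checkDaily(2600, 3000)); // → "regression"
console.log(checkDaily(2600, 2650)); // → "stable"
```

Run per metric and per page type, a check like this turns "wait a month and hope" into a next-day answer.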
The screenshot below shows how my page's LCP distribution evolves daily.
Trying to monitor all your pages:
Crawling your website and measuring the speed of every page is useless and ineffective.
Instead, choose the most popular page of each page type.
For example, all your e-commerce listing pages share (99% of the time) the same template and the same #webperf issues.
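That sampling strategy can be sketched in a few lines: from a list of pages with their page views (the URLs and numbers here are invented), keep only the most-viewed page of each page type as its representative:

```javascript
// Hypothetical analytics export: URL, page type, and page views.
const pages = [
  { url: '/p/red-shoes', type: 'product', views: 9000 },
  { url: '/p/blue-hat', type: 'product', views: 4000 },
  { url: '/c/shoes', type: 'listing', views: 7000 },
  { url: '/c/hats', type: 'listing', views: 2000 },
];

// Keep the single most-viewed page per page type; since pages of
// one type share a template, its metrics stand in for the rest.
function representativePages(rows) {
  const best = {};
  for (const row of rows) {
    if (!best[row.type] || row.views > best[row.type].views) {
      best[row.type] = row;
    }
  }
  return Object.values(best).map((r) => r.url);
}

console.log(representativePages(pages)); // → ["/p/red-shoes", "/c/shoes"]
```

Two well-chosen pages here cover the same templates, and the same #webperf issues, as crawling all four would.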
Want to avoid all the above pitfalls?
Monitor your website with Speetals, a user-centric site speed tool. Focus on your users, prioritize, optimize, validate, and repeat! You can also do a quick check with our free site speed tool.