Introducing User eXperience Rating (UXR) score


A User Experience Rating score to assess real users’ web experiences

Updated for the replacement of FID by INP (March 2024)

To assess how well a web page performs in controlled environments (in the lab), we usually rely on the Google Lighthouse score. But how do we measure and assess real user experiences? Following the progress of every individual metric is challenging, and there is no unified rating score. That’s why we are excited to introduce User eXperience Rating (UXR): a score to assess real users’ experiences.

In this post, we’ll describe UXR, how to measure it, and how Speetals can help you get started.

=> Check our UXR score calculator

What is UXR?

UXR (User eXperience Rating) is a new way to measure and assess real user experience using data from sessions recorded by a Real User Monitoring (RUM) tool. It’s a simple scoring system that measures the quality of your website users’ experiences. Inspired by Google Lighthouse, UXR combines all of your metrics into one value so you can easily see how they come together to form an overall rating at the domain and page levels.

Each metric has a weighting associated with it: the more heavily weighted a metric is, the more it contributes to your overall UXR score (a weighted average).

While the Lighthouse score is a weighted average of the metric values themselves, UXR is based on the percentage of good experiences for each metric. For example, according to CrUX (Chrome User Experience Report), 76% of a tracked website’s users experienced a Good LCP (Largest Contentful Paint). The UXR weight for LCP is applied to that 76%, so with a 30% weight, LCP contributes 0.30 × 76 = 22.8 points to the overall score.

A Good UXR score means that a website/webpage offers a good user experience.

UXR Weighting and formula:

Starting from March 12, 2024, the INP (Interaction to Next Paint) metric replaces the FID (First Input Delay) metric. To align with this, we are updating the UXR formula as follows:

  • Largest Contentful Paint (LCP): 30%
  • Interaction to Next Paint (INP): 20%
  • Cumulative Layout Shift (CLS): 20%
  • First Contentful Paint (FCP): 10%
  • Time To First Byte (TTFB): 20%
  • First Input Delay (FID): 0%
New UXR weighting system (INP replaces FID)

UXR = (Good_LCP × 30%) + (Good_CLS × 20%) + (Good_INP × 20%) + (Good_FCP × 10%) + (Good_TTFB × 20%)

To use the UXR formula, open up this Google Sheets template and make your own copy.
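If you prefer to compute the score in code rather than in a spreadsheet, here is a minimal TypeScript sketch of the same weighted sum. The function name and input shape are ours, for illustration only; each input is the percentage of experiences rated Good for that metric, as reported by your RUM tool or CrUX, and rounding to an integer is our display choice.

```typescript
// Minimal sketch of the UXR formula (March 2024 weights, INP replaces FID).
// Inputs are percentages (0-100) of experiences rated "Good" for each metric.
interface GoodShares {
  lcp: number;  // Largest Contentful Paint
  cls: number;  // Cumulative Layout Shift
  inp: number;  // Interaction to Next Paint
  fcp: number;  // First Contentful Paint
  ttfb: number; // Time To First Byte
}

function uxrScore({ lcp, cls, inp, fcp, ttfb }: GoodShares): number {
  const score =
    lcp * 0.3 + cls * 0.2 + inp * 0.2 + fcp * 0.1 + ttfb * 0.2;
  return Math.round(score); // rounding is our choice, for display
}

// Example: 76% good LCP, 80% CLS, 70% INP, 85% FCP, 60% TTFB
// => 22.8 + 16 + 14 + 8.5 + 12 = 73.3, rounded to 73 (Needs Improvement)
console.log(uxrScore({ lcp: 76, cls: 80, inp: 70, fcp: 85, ttfb: 60 })); // 73
```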

Before March 12, 2024, the UXR formula had these weightings:

  • Largest Contentful Paint (LCP): 30%
  • First Input Delay (FID): 10%
  • Cumulative Layout Shift (CLS): 20%
  • First Contentful Paint (FCP): 10%
  • Time To First Byte (TTFB): 20%
  • Interaction to Next Paint (INP) [experimental]: 10%
Previous UXR weighting system

Color code of UXR:

We follow the same color code as Lighthouse:

  • 0 to 49 (red): Poor
  • 50 to 89 (orange): Needs Improvement
  • 90 to 100 (green): Good
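Expressed as a small helper (again a sketch, with names of our own choosing), the same bands look like this:

```typescript
// Sketch of the UXR color bands above (same thresholds as Lighthouse).
type UxrRating = "Poor" | "Needs Improvement" | "Good";

function uxrRating(score: number): UxrRating {
  if (score >= 90) return "Good";              // green
  if (score >= 50) return "Needs Improvement"; // orange
  return "Poor";                               // red
}

console.log(uxrRating(61)); // "Needs Improvement" (the Vinted example below)
```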

UXR in Speetals:

Let’s take the example of the UXR score for the Vinted website with a France-based audience. The screenshot below, taken from Speetals, shows the distribution of each metric for January 2023 on desktop devices. UXR = 61/100, a 2-point improvement compared to December 2022.

We can also follow the progress of the UXR score over time by clicking on the “UXR over time” button.

With this graph, we can easily compare Mobile and Desktop UXR progress. In the example above, the Mobile UXR is poor (red) while the Desktop one needs improvement (orange).

=> Check your website’s speed score here

How is it different from the Lighthouse score?

UXR is a new score created to provide a holistic view of user experience. UXR scores are calculated from real user data collected in the field, not from lab experiments or synthetic tools.

The Lighthouse score is based on lab data (synthetic tools) and a single controlled page load, not on a set of real user sessions. It is also impossible to apply Lighthouse scoring to field data, because the two don’t share the same metrics (for example, Total Blocking Time (TBT) is not available in field data).

Differences in weighting:

  • Largest Contentful Paint (LCP): Lighthouse (v10) 25%, UXR 30%
  • First Input Delay (FID): Lighthouse N/A, UXR 0%
  • Cumulative Layout Shift (CLS): Lighthouse 25%, UXR 20%
  • First Contentful Paint (FCP): Lighthouse 10%, UXR 10%
  • Time To First Byte (TTFB): Lighthouse N/A, UXR 20%
  • Interaction to Next Paint (INP) [experimental]: Lighthouse N/A, UXR 20%
  • Speed Index (SI): Lighthouse 10%, UXR N/A
  • Total Blocking Time (TBT): Lighthouse 30%, UXR N/A
  • Time To Interactive (TTI): Lighthouse 10%, UXR N/A
Differences in Lighthouse and UXR weights

Final thoughts:

What we have discussed here is a new initiative to measure and assess real user experience. UXR aims to make Real User Monitoring data simpler to evaluate and compare over time, and to provide an all-in-one score for user experience data.

We encourage the web performance community and other tool providers to try the UXR score, review it, and give feedback.

Thanks to Dave Smart and all those who reviewed the formula and weights of the UXR score.

Aymen Loukil
Web Performance Consultant and Speetals Founder

