Speakers
- Barry Pollard, Web Performance Developer Advocate for Google
- Kerrion Burton-Evans, Principal Sales Engineer, Director of Technology for Blue Triangle
- Nick Paladino, Principal Sales Engineer, Director of Product Engineering for Blue Triangle
Overview of the Core Web Vitals
The Core Web Vitals are a set of three metrics, each measuring a different aspect of what Google believes best captures user experience.
Largest Contentful Paint (LCP) measures the time from navigation to the time when the browser renders the largest piece of content visible in the viewport. This piece of content is usually something like a banner image or an H1. The LCP event is generally a sign to the user that the page is mostly loaded and that they can start interacting with it.
First Input Delay (FID) measures the time from when a user first interacts with your site (e.g. when they click a link, tap a button, or use a custom JavaScript-powered control) to the time when the browser is actually able to respond to that interaction. This is the interactivity metric, measuring how responsive the page is. For example, FID can measure how long it takes, after a user clicks a hamburger menu on a page, for the menu to actually open.
Cumulative Layout Shift (CLS) measures the sum total of all individual layout shift scores for every unexpected layout shift that occurs during the entire lifespan of the page. It is not strictly a load metric, but it contributes greatly to user experience. A CLS event can look like the contents of an article shifting down the page as you are reading because a late-loading ad pushed them down.
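For a concrete sense of what these metrics look like in code, here is a minimal sketch that logs all three to the console using Google's open-source web-vitals JavaScript library (the v3 API, where the functions are named onLCP, onFID, and onCLS, is assumed here):

```js
// Minimal sketch using the web-vitals library (v3 API assumed).
// Each callback fires with the metric's name, value, and supporting entries.
import {onLCP, onFID, onCLS} from 'web-vitals';

onLCP((metric) => console.log('LCP:', metric.value, 'ms'));
onFID((metric) => console.log('FID:', metric.value, 'ms'));
onCLS((metric) => console.log('CLS:', metric.value)); // unitless score
```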
Guiding Principles of the CWVs
There are three main guiding principles when it comes to the CWVs; they must:
- Apply to all web pages
- Directly measure user experience
- Be measurable in the field
The intent of the Core Web Vitals is to measure user experience across the whole web. The metrics need to apply equally to all types of sites, not just e-commerce sites. This is why something like conversion rate is not considered in the CWVs.
Next, the CWVs must directly measure user experience, not the load technicalities of a website. The easiest test is whether a Core Web Vital can be explained to a non-technical person. Most people can understand what happens when a button is clicked on a website (FID), or how things move around on a page as it loads (CLS), even if they cannot follow the calculations behind those metrics.
Lastly, these metrics must be measurable in the field, with real user data. Lab tools like Google Lighthouse do not provide field measurements; they are estimates of what the user experience might be. The CWVs are measured with anonymous data from real Chrome users who have opted in to share their data. This is important because web performance is a distribution, not a single number: forty users might have a fast experience on a site while another twenty have a slow one. Users access sites from a variety of devices and networks with different performance capabilities, which a lab tool cannot fully account for.
Chrome UX Report
The Chrome UX Report, or CrUX Report, is the data set created by Google to measure the Core Web Vitals across qualifying sites. CrUX looks at the 75th percentile of users, only for sites that cross a certain traffic threshold for statistical relevance, and provides a high-level summary of performance.
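CrUX data can also be queried programmatically. As a sketch, assuming you have your own Google API key (the CRUX_API_KEY value below is a placeholder), the public CrUX API can return a site's 75th-percentile metrics:

```js
// Sketch: query the public CrUX API for an origin's p75 Core Web Vitals.
// CRUX_API_KEY is a placeholder; substitute your own Google API key.
const CRUX_API_KEY = 'YOUR_API_KEY';

async function queryCrux(origin) {
  const resp = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${CRUX_API_KEY}`,
    {
      method: 'POST',
      headers: {'Content-Type': 'application/json'},
      body: JSON.stringify({origin}),
    }
  );
  const {record} = await resp.json();
  // Each metric includes a histogram plus a 75th-percentile value.
  console.log('LCP p75 (ms):', record.metrics.largest_contentful_paint.percentiles.p75);
  console.log('CLS p75:', record.metrics.cumulative_layout_shift.percentiles.p75);
}

queryCrux('https://example.com');
```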
Because CrUX is a limited public data set, it should not be treated as a diagnostic tool for debugging the CWVs. Every user is unique: different devices, different connection speeds, different screen sizes, and so on. Even how often a user visits a page can affect their experience, as cached resources help a page load faster. A common example is cookie consent banners, which are present on almost every site nowadays. New and returning users of a site will have very different LCP experiences: new users face the often cumbersome cookie consent banner, while returning users bypass it and thus have a completely different LCP element.
Here is an example of this from web.dev. On the left, the page is loading for a returning user who has already accepted the cookie consent banner, so their LCP element is the title of the article. On the right is a new user, whose LCP element is the cookie consent banner itself, which contains much more text and takes up considerably more space than the title.
Here is another example using the Amazon homepage. On the left we see the homepage as it loads for Barry, a logged-in user: his name appears at the top, and his latest purchase is being recommended to him at the bottom of the page. On the right we see the homepage as it loads for a new user, with generic, pre-generated images and recommendations. The homepage might take longer to load for Barry because the items displayed on his page are personalized rather than pre-generated.
It's important to note that CrUX and RUM data may differ; check out this article by Barry Pollard on web.dev for more information.
Debugging Core Web Vital Issues
When debugging Core Web Vitals, there are three things to keep in mind:
- CrUX shows there is a problem.
- Lab tools show potential reasons.
- RUM can show actual reasons.
While a CrUX Report may be a good indication that an issue is present, it cannot tell you how to fix the issue and optimize the CWVs. Knowing a site's Core Web Vital scores is helpful for understanding areas for improvement, but it is not enough. Synthetic lab tools like Lighthouse can point to potential reasons for an issue affecting a CWV score, but only real user data can pinpoint the actual source of the issue.
Say you've got an LCP problem on your site. You can test the site with a tool like PageSpeed Insights or Lighthouse, or look around in Chrome DevTools, and find a potential reason for the LCP issue, like an image that is slow to load. You can go ahead and optimize your images, and maybe that will fix some of the problem. However, the RUM data for the site might reveal that the issue was not a slow-loading image at all, but a cookie consent banner. That calls for a completely different optimization than image work, and it would never have come to your attention without the RUM data.
Debugging the CWVs is not as easy as plugging your site into a tool and automatically knowing what the problem is; it requires more analysis and the implementation of different tools. Check out this web.dev article by Philip Walton for more information on capturing the kind of data necessary for debugging the CWVs.
Optimizing LCP
To continue with the LCP example, you can register a Chrome PerformanceObserver to capture LCP entries. When debugging locally, you can paste the observer code into your console and inspect the entries there. You could also send the entries back to an analytics service, like Blue Triangle, which can provide more details, like the element name, class, or the page it was on. This type of attribution data can be very helpful when debugging CWVs.
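As a sketch of that approach, the snippet below can be pasted into the DevTools console. The sendToAnalytics function and its /cwv-beacon endpoint are hypothetical stand-ins for whatever service (such as Blue Triangle) actually receives the data:

```js
// Sketch: observe LCP candidate entries and capture attribution details.
// sendToAnalytics and /cwv-beacon are hypothetical placeholders.
const sendToAnalytics = (data) =>
  navigator.sendBeacon('/cwv-beacon', JSON.stringify(data));

new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    const attribution = {
      time: entry.startTime,               // when the candidate rendered
      url: entry.url,                      // resource URL (empty for text)
      element: entry.element?.tagName,     // e.g. IMG, H1
      className: entry.element?.className, // helps locate it in the page
      page: location.pathname,
    };
    console.log('LCP candidate:', attribution);
    sendToAnalytics(attribution);
  }
}).observe({type: 'largest-contentful-paint', buffered: true});
```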
An LCP event can be broken down into four phases, and each phase must be optimized separately:
- Time to First Byte
- Resource Load Delay
- Resource Load Time
- Element Render Delay
Time to First Byte is how long it takes for the first byte of the HTML response to arrive; until the HTML starts arriving, nothing else can happen. Resource Load Delay is the next phase, covering the time between the HTML arriving and the browser starting to fetch the LCP resource. This is followed by Resource Load Time, which is how long the LCP resource then takes to download. The final phase is Element Render Delay, the time it takes for the LCP element to actually render on the page once it has downloaded.
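Here is a rough sketch of how those four phases can be computed from browser timing APIs, assuming the LCP element is an image (text LCP elements have no resource fetch, so the two middle phases collapse to zero):

```js
// Sketch: break a page's LCP into its four phases using Navigation,
// Resource, and LCP timing entries. Assumes an image LCP element.
new PerformanceObserver((entryList) => {
  const lcp = entryList.getEntries().at(-1); // final LCP candidate
  const nav = performance.getEntriesByType('navigation')[0];
  const res = lcp.url ? performance.getEntriesByName(lcp.url)[0] : null;

  const ttfb = nav.responseStart;
  const loadDelay = res ? res.requestStart - ttfb : 0;
  const loadTime = res ? res.responseEnd - res.requestStart : 0;
  const renderDelay = lcp.startTime - (res ? res.responseEnd : ttfb);

  console.table({ttfb, loadDelay, loadTime, renderDelay});
}).observe({type: 'largest-contentful-paint', buffered: true});
```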
Understanding what is going on in each phase allows resources to be properly allocated when problem solving, as you can pinpoint an issue more accurately without wasting time working on the wrong thing.
For more of Google's recommendations on optimizing Core Web Vitals, check out this article from web.dev.
VitalScope for Core Web Vitals
How do you identify there is an issue with your Core Web Vitals in the first place? As presented by Google, the CrUX Report can be a great starting point, but it is not quite specific enough.
Introducing VitalScope
The Blue Triangle Portal captures the Core Web Vitals metrics for every single page view on a site, trends those scores, and offers a myriad of filtering options. VitalScope collects all the attribution and debugging data previously discussed, for every measurement, and makes it easily accessible. This means you know not only the CWV scores, but also all the details of the CWV events behind each user experience. The aggregated data can then be easily filtered and trended.
With improved visibility into your Core Web Vitals, you can fully understand the business implications and needed investments by quickly pinpointing the root causes of what is affecting your scores.
Data Captured by VitalScope
Data Captured for LCP
VitalScope finds the LCP Element, the element on the page that must render before the LCP metric can be reported, along with its URL, HTML, and Selector. With this information you can paste the URL of the LCP element directly into a browser to see it, or find the code on the page using the HTML or Selector. The LCP element is not always an image; it can also be text, which is why the HTML and Selector are collected as well.
You can also see how each phase of LCP is impacting the total LCP time.
Data Captured for CLS
The main focus for CLS is capturing information on the largest shifting element. VitalScope reports the identity of the element causing the biggest shift on the page, along with the point in the page load when it started moving. This does not, however, tell you the cause of the shift.
As previously mentioned, VitalScope identifies the largest CLS event, but you can also see other CLS events that might be happening using the CLS Session Window data. A CLS Session Window is a grouping of all the elements that shifted on a page during a five-second window. There can be multiple CLS Session Windows during a page load, as there can be multiple CLS events. This gives further insight into whether there are still problematic shifts on a page once the largest shift has been taken care of. Here you can find the CLS Session Window Log, including the number, score, start, and source count for each CLS Session Window.
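As a sketch of how that grouping works in the browser (the windowing rules assumed here are Google's published definition: a window spans at most five seconds, and a gap of more than one second between shifts starts a new window):

```js
// Sketch: group layout-shift entries into CLS session windows.
// Assumed rules from Google's definition: a window spans at most 5s,
// and a gap of more than 1s between shifts starts a new window.
const windows = [];
let current = null;

new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    if (entry.hadRecentInput) continue; // user-initiated shifts don't count
    const t = entry.startTime;
    if (current && t - current.lastShift <= 1000 && t - current.start <= 5000) {
      current.score += entry.value;
      current.sourceCount += 1;
      current.lastShift = t;
    } else {
      current = {start: t, lastShift: t, score: entry.value, sourceCount: 1};
      windows.push(current);
    }
  }
  // CLS is the score of the worst session window seen so far.
  console.log('CLS:', Math.max(...windows.map((w) => w.score)));
}).observe({type: 'layout-shift', buffered: true});
```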
Data Captured for FID
For FID, VitalScope identifies the element the user first interacted with on the page, along with the time the page took to complete that interaction. The HTML, Selector, and FID Start for the FID event are captured. The interaction type is also captured, which will be covered in the new data filters.
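A minimal sketch of how those FID details can be observed in the browser follows; the before/after DOMContentLoaded comparison is an assumption about one useful thing to check, not a VitalScope field:

```js
// Sketch: capture the first input, its delay, its type, and whether it
// happened before DOMContentLoaded (a marker for premature interaction).
new PerformanceObserver((entryList) => {
  const entry = entryList.getEntries()[0]; // there is only one first input
  const nav = performance.getEntriesByType('navigation')[0];
  console.log({
    fid: entry.processingStart - entry.startTime, // the delay itself
    fidStart: entry.startTime,
    interactionType: entry.name,                  // e.g. pointerdown, keydown
    target: entry.target?.tagName,
    beforeDCL: entry.startTime < nav.domContentLoadedEventStart,
  });
}).observe({type: 'first-input', buffered: true});
```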
New Filters
Traffic Lights
Google provides ideal thresholds for each Core Web Vital that can be used to rank user experience as Good, Needs Improvement, or Poor, hence the name for the CWV Traffic Lights Filter.
The CWV Traffic Lights Filter tags every page with a Good, Needs Improvement, or Poor rating based on the Google thresholds for each Core Web Vital. If you are trying to find an issue with any of the CWVs, simply filter for Poor sessions to pinpoint instances of negative user experience. Alternatively, if you are trying to see how a change to your site has positively impacted your CWVs, filter for Good sessions to see your improvements. For insight into small changes that could improve those borderline sessions, filter for Needs Improvement.
Debug Details
The new Debug Details filter provides more granular data for LCP and FID. Google provides target percentages for each LCP subpart, as listed below. If an LCP score is above 2.5 seconds, Google recommends looking into each phase of LCP individually.
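Per Google's web.dev guidance, each subpart's share of total LCP time should be roughly:
- Time to First Byte: ~40%
- Resource Load Delay: <10%
- Resource Load Time: ~40%
- Element Render Delay: <10%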
Poor FID is often due to necessary JavaScript resources not having downloaded by the time the first interaction occurs. Knowing which element is mainly causing poor FID, and when that element fully loads, allows changes to be made to improve the FID score, such as loading that element later in the page to avoid premature user interaction.
The Debug Details filter captures all of the above information and more. It enables filtering based on how each LCP subpart measures up to the recommended thresholds, for example showing only failed Element Render Delays. Similarly, you can filter for FID events that occur before DOMContentLoaded or window load to get an idea of what users are actually triggering first. You can also filter FID by the type of interaction, or by whether the element loaded before or after the JavaScript resources.
Example LCP Investigation
To see an example of VitalScope in action identifying an LCP issue, view the above product training starting at 37:47.