---
Title: Why Lab and Field Data Can Be Different
Author: Philip Walton
Tags: readwise, articles
date: 2024-01-30
---

# Why Lab and Field Data Can Be Different

![rw-book-cover](https://web-dev.imgix.net/image/eqprBhZUGfb8WYnumQ9ljAxRrA72/OKW9sizk0a8UloNOFx9g.jpeg?auto=format&fit=max&w=1200&fm=auto)

URL:: https://web.dev/lab-and-field-data-differences/
Author:: Philip Walton

## AI-Generated Summary

Learn why tools that monitor Core Web Vitals metrics may report different numbers, and how to interpret those differences.

## Highlights

> The problem is that sometimes the data reported by lab tools can be quite a bit different from the data reported by field tools ([View Highlight](https://read.readwise.io/read/01gz7jr9xyzgdkbkj7cja9g2h7))

> Lab data is determined by loading a web page in a controlled environment with a predefined set of network and device conditions. These conditions are known as a *lab* environment, sometimes also referred to as a *synthetic* environment.
> Chrome tools that report lab data are generally running [Lighthouse](https://developer.chrome.com/docs/lighthouse/overview/).
> The purpose of a lab test is to control for as many factors as you can, so the results are (as much as possible) consistent and reproducible from run to run. ([View Highlight](https://read.readwise.io/read/01gz7jrvyfkyez0xpzj76h5b92))

> Field data is determined by monitoring all users who visit a page and measuring a given set of performance metrics for each one of those users' individual experiences. Because field data is based on real-user visits, it reflects the actual devices, network conditions, and geographic locations of your users.
> Field data is also commonly known as [Real User Monitoring (RUM)](https://en.wikipedia.org/wiki/Real_user_monitoring) data; the two terms are interchangeable.
([View Highlight](https://read.readwise.io/read/01gz7js4a0jw6wednkaq9gkkh9))

> Chrome tools that report *field data* generally get that data from the [Chrome User Experience Report (CrUX)](https://developer.chrome.com/docs/crux/).
> It's also common (and recommended) for site owners to [collect field data themselves](https://web.dev/vitals-field-measurement-best-practices/) because it can provide [more actionable insights](https://web.dev/vitals-ga4/) than just using CrUX alone. ([View Highlight](https://read.readwise.io/read/01gz7jsmwwww18f7vg4ya946pd))

> Field data includes a wide variety of network and device conditions as well as a myriad of different types of user behavior. It also includes any other factors that affect the user experience, such as browser optimizations like the [back/forward cache](https://web.dev/bfcache/) (bfcache), or platform optimizations like the [AMP cache](https://developers.google.com/amp/cache). ([View Highlight](https://read.readwise.io/read/01gz7jt6t8qy19cc4jafzt1rdt))

> The controlled environment of the lab is useful when debugging issues or testing features before deploying to production ([View Highlight](https://read.readwise.io/read/01gz7k6ffy0b1rtm3603b9543z))

> A lab test consists of:
> • A single device…
> • connected to a single network…
> • run from a single geographic location. ([View Highlight](https://read.readwise.io/read/01gz8vn4dtmdtrpgskac6kp11g))

> You are also generally not capturing the performance impact of real-user behavior, such as scrolling, selecting text, or tapping elements on the page. ([View Highlight](https://read.readwise.io/read/01gz8vp4hm40qqqren0x78eg32))

> For example, the following factors could all contribute to a different LCP element being determined for the same page: ([View Highlight](https://read.readwise.io/read/01gz8vptrhgf2a951g0a3qrzk1))

> Different device screen sizes result in different elements being visible within the viewport.
([View Highlight](https://read.readwise.io/read/01gz8vpxsnj82288hgkwfwtdhf))

> If the user is logged in, or if personalized content is being shown in some way, the LCP element could be very different from user to user. ([View Highlight](https://read.readwise.io/read/01gz8vq0gmr0nvzzhav8cb72by))

> Similar to the previous point, if an A/B test is running on the page it could result in very different elements being displayed. ([View Highlight](https://read.readwise.io/read/01gz8vq4hba7c82qnnfsh93mgs))

> The set of fonts installed on the user's system can affect the size of text on the page (and thus which element is the LCP element). ([View Highlight](https://read.readwise.io/read/01gz8vq5g4z5py034exsrssc72))

> The FID metric measures how responsive a page is to user interactions, *at the time when users actually chose to interact with it.* ([View Highlight](https://read.readwise.io/read/01gz8vr030grjdtrnadds3ptex))

> The second part of that sentence is critical because lab tests, even those that support scripted user behavior, cannot accurately predict when users will choose to interact with a page, and thus cannot accurately measure FID. ([View Highlight](https://read.readwise.io/read/01gz8vr3n694qgkyf36rk9bg0f))

> Lab metrics such as [Total Blocking Time (TBT)](https://web.dev/tbt/) and [Time to Interactive (TTI)](https://web.dev/tti/) are intended to help diagnose issues with FID because they quantify how much the main thread is blocked during page load. ([View Highlight](https://read.readwise.io/read/01gz8vrctgv6xxsg46dya0ew0c))

> Overall, both lab data and field data are important parts of effective performance measurement. They both have their strengths and limitations, and if you're only using one you may be missing an opportunity to improve the experience for your users. ([View Highlight](https://read.readwise.io/read/01gz8vsbzher7wbdm6y43hde0r))
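One way to see why the highlights above describe lab and field numbers diverging: a lab run yields a single value, while field (RUM) data is a distribution across many users that gets summarized — CrUX, for instance, reports the 75th percentile. A minimal sketch, with hypothetical LCP timings:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the value at rank ceil(pct/100 * n)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical LCP samples (ms) collected from real users in the field.
field_lcp = [1200, 1500, 1800, 2400, 2600, 3100, 4000, 5200]
lab_lcp = 1900  # a single controlled lab run

print(percentile(field_lcp, 75))  # 3100 — well above the lab number
```

A single lab run can easily land below (or above) the field percentile, because it samples just one point from the distribution of real-user conditions.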
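The FID highlights above hinge on *when* the user interacts: the same page produces a large or zero delay depending on whether the tap lands during a main-thread long task. A toy model of that timing dependence (the busy periods and tap times are hypothetical, and real FID is reported by the browser, not computed like this):

```python
def first_input_delay(busy_periods, input_time):
    """Delay (ms) before the main thread can start handling an input.

    busy_periods: list of (start, end) long-task intervals, in ms.
    """
    for start, end in busy_periods:
        if start <= input_time < end:
            return end - input_time  # input is queued until the task finishes
    return 0  # main thread was idle: input handled immediately

# Hypothetical main-thread long tasks during page load.
tasks = [(1000, 1300), (2000, 2600)]
print(first_input_delay(tasks, 1100))  # 200 — user tapped mid-task
print(first_input_delay(tasks, 1500))  # 0 — user tapped during idle time
```

Since a lab test cannot know which of those two tap times a real user would choose, it cannot predict which delay they would experience.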
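The TBT highlight can be made concrete: TBT sums, for each main-thread long task between First Contentful Paint and TTI, the portion of its duration beyond the 50 ms threshold. A short sketch with hypothetical task durations:

```python
def total_blocking_time(task_durations, threshold=50):
    """TBT: sum of each task's duration beyond the 50 ms long-task threshold.

    Tasks at or under the threshold contribute nothing (they are not
    long tasks), which max(0, ...) handles.
    """
    return sum(max(0, d - threshold) for d in task_durations)

# Hypothetical main-thread task durations (ms) during page load.
print(total_blocking_time([30, 120, 80]))  # 0 + 70 + 30 = 100
```

This is why TBT is a lab proxy for responsiveness: it measures how much blocked time an input *could* run into, without needing to know when the input actually arrives.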