Definition
Core Web Vitals are three page-experience metrics Google uses as part of its ranking signals. They measure how a page feels to a real user: how long the main content takes to appear, how responsive the page is to input, and how much the layout shifts while loading.
The three metrics are Largest Contentful Paint (LCP), Interaction to Next Paint (INP), and Cumulative Layout Shift (CLS). INP replaced First Input Delay (FID) in March 2024.
Largest Contentful Paint (LCP)
LCP measures the render time of the largest image or text block visible in the viewport. It answers the question "when does the page feel loaded?" A good LCP is under 2.5 seconds at the 75th percentile. Anything over 4 seconds is classified as poor.
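The thresholds above can be expressed as a simple bucketing function. This is a minimal sketch, not part of any library: `rateLcp` is a hypothetical helper that maps a measured LCP value in milliseconds onto the good / needs-improvement / poor bands.

```typescript
type Rating = "good" | "needs-improvement" | "poor";

// Hypothetical helper: bucket an LCP measurement (in ms) using the
// 2.5 s and 4 s thresholds described above.
function rateLcp(lcpMs: number): Rating {
  if (lcpMs <= 2500) return "good";
  if (lcpMs <= 4000) return "needs-improvement";
  return "poor";
}
```

Remember that the rating that matters for ranking is the one at the 75th percentile of real page loads, not a single lab run.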
Common causes of slow LCP: large unoptimised hero images, render-blocking JavaScript or CSS, slow server response time (TTFB), and fonts that block text rendering until downloaded.
Interaction to Next Paint (INP)
INP measures how long the page takes to visually respond to a user interaction — a click, tap, or key press. Unlike FID, which only measured the first interaction, INP observes every interaction during the visit and reports the slowest one (approximately the 98th percentile).
A good INP is under 200 ms. Poor INP is over 500 ms. Main-thread JavaScript blocking is the usual cause: long-running event handlers, heavy React re-renders, and synchronous third-party scripts.
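The selection rule behind "approximately the 98th percentile" can be sketched directly. Per the metric's definition, the worst interaction is reported, except that one outlier is discarded for every 50 interactions. `selectInp` below is a hypothetical illustration of that rule, taking a list of interaction latencies in milliseconds:

```typescript
// Sketch of how the reported INP is chosen from all interaction
// latencies during a visit. The worst interaction is reported,
// minus one discarded outlier per 50 interactions, which is why
// the result approximates the 98th percentile on busy pages.
function selectInp(durations: number[]): number {
  if (durations.length === 0) return 0;
  const sorted = [...durations].sort((a, b) => b - a); // worst first
  const outliersToSkip = Math.min(
    Math.floor(sorted.length / 50),
    sorted.length - 1
  );
  return sorted[outliersToSkip];
}
```

On a page with only a handful of interactions, no outliers are discarded, so a single slow handler sets the whole page's INP.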
Cumulative Layout Shift (CLS)
CLS measures how much the page layout jumps around during loading. The score is unitless: a value under 0.1 is good, over 0.25 is poor. Every visible element that moves unexpectedly contributes to the score, weighted by how much it moved and how much area it covers.
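The weighting works like this: each individual shift scores the impact fraction (the share of the viewport the moved elements occupy) multiplied by the distance fraction (how far they moved, relative to the viewport's largest dimension). The page's CLS is the largest sum of such scores within one "session window" of shifts; the windowing logic is omitted from this sketch for brevity.

```typescript
// Sketch of a single layout shift's contribution to CLS:
// impact fraction x distance fraction, both in [0, 1].
function shiftScore(impactFraction: number, distanceFraction: number): number {
  return impactFraction * distanceFraction;
}

// A shift touching half the viewport, moving elements 10% of the
// viewport height, already spends half of the 0.1 "good" budget:
const score = shiftScore(0.5, 0.1); // 0.05
```

This is why a late-loading banner that pushes the whole page down is so costly: it has a large impact fraction and a large distance fraction at once.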
Common causes: images without width/height attributes, dynamically injected ads or embeds, web fonts causing text reflow, and content loaded above existing content after render.
Field Data vs Lab Data
For ranking, Google uses field data — real measurements from Chrome users, aggregated into the Chrome User Experience Report (CrUX). Field data reflects actual user conditions: varied devices, connection speeds, and geographies.
Lab data — what Lighthouse, PageSpeed Insights, or WebPageTest produce on a simulated device — is useful for debugging, but can diverge significantly from what real users experience. Always treat CrUX data as the authoritative ranking input.
How Seoxpert Measures Core Web Vitals
The scanner pulls CrUX field data where available and supplements it with lab-measured signals from every crawled page: time to first byte, render-blocking resource count, total JavaScript payload, and image weight. Findings are grouped by root cause, so you fix the underlying problem rather than chasing the symptom.
See also: the most common performance issues and the full performance issue library.