Core Web Vitals in 2026: what actually moves the needle

A field guide to LCP, INP, and CLS in 2026 — which wins are real, which are vanity, and what to optimize when the Lighthouse score is already green.

4 min read · By IGN Solutions
  • performance
  • seo
  • core-web-vitals

Your Lighthouse score is 98. Your real-user LCP is 3.9 seconds at the 75th percentile. Those two numbers are both true, and only the second one matters. Here's what we actually optimize for in 2026, and why.

The three vitals that still matter

Google's Core Web Vitals settled down after INP replaced FID in 2024. In 2026, the three numbers that move rankings are:

  • LCP (Largest Contentful Paint) — how quickly the biggest above-the-fold element renders. Target: under 2.5s at the 75th percentile.
  • INP (Interaction to Next Paint) — how quickly the page responds to user input (click, tap, keypress). Target: under 200ms at the 75th percentile.
  • CLS (Cumulative Layout Shift) — how much the layout jumps during load. Target: under 0.1 at the 75th percentile.

The phrase "at the 75th percentile" is doing a lot of work there. It means: 75% of your real users must see a value better than the threshold. Not the median. Not your MacBook. The 75th.
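To make that concrete, here's a minimal sketch of a nearest-rank p75 calculation. The function name and the sample data are illustrative, not from any library:

```typescript
// Nearest-rank 75th percentile of a set of field samples (illustrative helper).
function p75(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  // 75% of values sit at or below this index.
  return sorted[Math.ceil(0.75 * sorted.length) - 1];
}

// 100 LCP samples in ms: 80 fast sessions, 20 slow ones.
const lcpSamples = [...Array(80).fill(2000), ...Array(20).fill(4000)];
console.log(p75(lcpSamples)); // 2000 -- passes the 2.5s threshold
```

Note how the 20 slow sessions vanish at p75. Push the slow cohort past 25% of traffic and the same page fails, which is why a fast median tells you almost nothing.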

LCP: where the real wins hide

Most LCP optimizations stop at "preload the hero image and call it a day". That's table stakes. In 2026 the real wins are:

1. Serve the LCP image as AVIF, with a matching fetchpriority=high

AVIF is 30-50% smaller than WebP at the same visual quality. Next.js's image optimizer negotiates the format per browser (via the Accept header on each <Image> request), but only if you configure images.formats to include 'image/avif' before 'image/webp'. We do this in next.config.ts:

import type { NextConfig } from 'next';

const config: NextConfig = {
  images: { formats: ['image/avif', 'image/webp'] },
};
export default config;

Without AVIF first, Chrome on a 4G phone downloads the WebP variant. That's a free 100ms of LCP you're leaving on the table.
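The "100ms" figure is easy to sanity-check. Here's a back-of-envelope version with illustrative numbers; the 200 KB hero and the ~1 MB/s effective 4G throughput are assumptions, not measurements:

```typescript
// Illustrative back-of-envelope: transfer time saved by AVIF vs WebP.
const webpBytes = 200_000;     // assumed WebP hero size
const avifBytes = 120_000;     // 40% smaller (middle of the 30-50% range)
const linkBytesPerMs = 1_000;  // ~1 MB/s effective 4G throughput
const savedMs = (webpBytes - avifBytes) / linkBytesPerMs;
console.log(savedMs); // 80 -- roughly the "free 100ms"
```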

2. Drop priority from non-LCP images

The priority prop on a Next.js <Image> component emits a <link rel="preload"> for the image, which tells the browser to fetch it ahead of everything else — including the actual LCP image, if you forgot which one it was.

Rule of thumb: exactly one image per route gets priority (or the equivalent fetchpriority="high" attribute if you're using a plain <img>). If you have two, you're competing with yourself.

3. Fix the server response, not the client render

The 75th-percentile LCP includes the request time. If your TTFB (time-to-first-byte) is 800ms, no amount of client-side optimization will get you under 2.5s total. The fix is boring:

  • Use server-rendered HTML, not client-side SPA rendering.
  • Cache the rendered HTML at the edge (CloudFront, Cloudflare).
  • Use stale-while-revalidate so every request is fast even during a rebuild.
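The caching bullets boil down to a single Cache-Control header. A sketch, with illustrative TTLs (the numbers are ours to pick, not prescribed):

```typescript
// Build a CDN-friendly Cache-Control value: the edge serves the cached HTML
// for `maxAge` seconds, then keeps serving the stale copy while it
// revalidates in the background for up to `swr` more seconds.
function swrHeader(maxAge: number, swr: number): string {
  return `public, s-maxage=${maxAge}, stale-while-revalidate=${swr}`;
}

console.log(swrHeader(60, 86_400));
// "public, s-maxage=60, stale-while-revalidate=86400"
```

A CDN that honors stale-while-revalidate answers every request from cache instantly, even mid-rebuild, and refreshes the copy in the background — which is exactly the property the bullet asks for.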

INP: the 2024 metric most sites still haven't tuned

INP is where we see the biggest gap between "Lighthouse says fine" and "real users are suffering". Lighthouse can't observe your users' interactions; in the lab it can only approximate responsiveness (via Total Blocking Time, or a scripted timespan run) on an emulated device. Real users hit INP on a mid-range Android with 50 tabs open.

The wins here are structural:

1. Ship less client JavaScript

Every kilobyte of client JS is a kilobyte the main thread has to parse and execute before it can respond to input. In Next.js App Router, that means:

  • Default to server components.
  • Use "use client" only for leaf components that genuinely need interactivity.
  • Never wrap a whole page in a client component because one button inside it is interactive. Extract the button.

2. Replace synchronous event handlers with native browser primitives

A JavaScript accordion adds INP. A <details>/<summary> element does not. A JavaScript tabs component adds INP. An anchor-link tabs component (with :target) does not.

On our own site, the Services FAQ uses native <details>. The entire page ships zero client JS for that section, and INP stays flat regardless of how many questions are open.

3. Measure INP in production, not in Lighthouse

Lighthouse is a lab metric. INP is a field metric. The only way to know your real INP is to ship a web-vitals reporter and log it. We do this from app/_components/seo/WebVitalsReporter.tsx and ingest the data via /api/vitals.
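A minimal sketch of such a reporter, assuming the open-source web-vitals package and our /api/vitals endpoint; the payload shape and function names are illustrative. The serializer is kept pure so it can be tested outside the browser:

```typescript
// Serialize a web-vitals metric into a compact beacon payload (illustrative shape).
type VitalMetric = { name: string; value: number; id: string };

function toBeaconBody(metric: VitalMetric): string {
  return JSON.stringify({
    name: metric.name,
    value: Number(metric.value.toFixed(4)), // CLS is fractional; keep precision
    id: metric.id,                          // lets the backend dedupe updates
  });
}

// In the client component, the wiring looks roughly like:
//   import { onLCP, onINP, onCLS, type Metric } from 'web-vitals';
//   const send = (m: Metric) =>
//     navigator.sendBeacon('/api/vitals', toBeaconBody(m));
//   onLCP(send); onINP(send); onCLS(send);
```

navigator.sendBeacon matters here: it survives the page being backgrounded or closed, which is precisely when the final CLS and INP values are reported.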

CLS: the one you get for free if you do the others right

CLS regressions in 2026 usually come from three places:

  • Fonts that swap layout. Use font-display: swap with size-adjust so the fallback font takes the same space as the loaded font.
  • Images without intrinsic dimensions. Always set width and height (Next.js <Image> does this automatically from static imports).
  • Client-side hydration that re-renders above-the-fold content. If the server renders one thing and the client renders another, the layout shifts. This is a hydration-mismatch bug, not a CSS bug.

If you're disciplined about the first two and you don't have any hydration mismatches, CLS basically solves itself.

The measurement pipeline that actually tells you the truth

  • Lab: Lighthouse CI on every PR. Catches regressions before they merge.
  • Field: web-vitals library reporting to /api/vitals. Tells you how real users experience the site.
  • Search Console: Core Web Vitals report. Tells you what Google sees when it assesses your site for ranking.

When the three agree, you've got the signal right. When they disagree, the field is the one that matters for users, and the Search Console report is the one that matters for rankings.

Want us to audit yours?

If your Lighthouse score is green but your Search Console "Core Web Vitals" report is flagging URLs, there's a specific gap between lab and field that we've fixed a dozen times. Send us the URL and we'll tell you what we'd change first.

Stop reading.
Start building.

If this post nudged something loose, let's talk. Tell us what you're trying to ship and we'll tell you how we'd approach it.