Description
Imagine you have a page with fast real-world FCP and LCP. A page like this one:
It doesn’t have any render-blocking resources. Its only contentful element above the fold is a text block (which uses system fonts). Its real-world loading speed metrics (FCP and LCP) are around 1.0-1.1s.
Now, run this page through Lighthouse:
Suddenly:
- the FCP of the page is yellow or red
- the LCP of the page is much higher than the FCP, even though the FCP and LCP are triggered by the same element
The cause is Lighthouse Simulated Throttling (aka Project Lantern).
Simulated Throttling
Lighthouse (and, therefore, PageSpeed Insights), by default, uses simulated throttling. The idea behind it is simple: instead of actually throttling the network while loading the page (which is slow), let’s load the page on a fast connection → look at the network graph and real FCP/LCP → simulate how the network graph would behave on a slow connection → derive slow FCP/LCP from that graph.
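As a rough sketch of that idea (this is not Lighthouse's actual code; the graph, node names, and throttling numbers below are all illustrative):

```javascript
// A node in the observed network graph: how many bytes it transferred
// and which earlier requests it depends on. (Illustrative data.)
const observedGraph = [
  { id: 'document', bytes: 20_000, dependsOn: [] },
  { id: 'font', bytes: 50_000, dependsOn: ['document'] },
  { id: 'hero-image', bytes: 120_000, dependsOn: ['document'] },
];

// Replay the graph on a simulated slow connection: each node can only
// start after its dependencies finish, and its transfer time is derived
// from the throttled round-trip time and bandwidth.
function simulate(graph, { rttMs, throughputBytesPerMs }) {
  const finishTime = new Map();
  for (const node of graph) {
    const start = Math.max(0, ...node.dependsOn.map((id) => finishTime.get(id)));
    const transferMs = rttMs + node.bytes / throughputBytesPerMs;
    finishTime.set(node.id, start + transferMs);
  }
  return finishTime;
}

// "Slow connection" parameters (made up, not Lighthouse's exact values).
const times = simulate(observedGraph, { rttMs: 150, throughputBytesPerMs: 200 });
// times.get('hero-image') → 1000 ms with these numbers. The simulated
// FCP/LCP are then derived from when their (supposed) dependencies finish.
```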
The challenge? That last step is far from perfect. For example:

- To simulate slow LCP, Lighthouse looks at all requests that happened before the real LCP – and assumes that LCP requires all of them. This causes a bunch of issues:
  - Preloaded a font? Well, now the simulated LCP will be delayed by that font's load time, even if the LCP element is actually an image.
  - Fetched a tiny, non-blocking script that just so happened to load before the real LCP? Too bad, now that script will delay the simulated LCP as well.
- To simulate slow FCP, Lighthouse does the same thing. (The code path is literally the same!) The only difference is that for FCP, nodes with a low fetch priority are ignored. This means offscreen images and non-blocking scripts won't delay FCP (yay!), but `@font-face` requests (meh) or `modulepreload`s (eh?) still will.
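The dependency selection described above can be sketched like this (simplified and hypothetical; `dependenciesFor`, the request list, and the priority values are illustrations, not the real Lantern code):

```javascript
// Illustrative request log from a fast, unthrottled page load.
const requests = [
  { url: '/app.css', endTime: 300, priority: 'VeryHigh' },
  { url: '/font.woff2', endTime: 450, priority: 'High' }, // preloaded font
  { url: '/analytics.js', endTime: 500, priority: 'Low' }, // non-blocking script
  { url: '/hero.jpg', endTime: 900, priority: 'High' },
];

// "Everything that finished before the observed paint is a dependency."
function dependenciesFor(observedPaintTime, { ignoreLowPriority }) {
  return requests
    .filter((r) => r.endTime < observedPaintTime)
    .filter((r) => !ignoreLowPriority || r.priority !== 'Low');
}

// LCP: every earlier request delays the simulated metric – including the
// font, even though the LCP element here is an image.
const lcpDeps = dependenciesFor(1000, { ignoreLowPriority: false });

// FCP: low-priority nodes are dropped (so the analytics script is out),
// but the font preload still counts.
const fcpDeps = dependenciesFor(1000, { ignoreLowPriority: true });
```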
This Punishes (Good) Preloading
(and, in general, any kind of good early-loaded assets)
Framer is a no-code website platform that deeply cares about performance. On Framer sites, we try to help the browser load critical resources ASAP. To do this, we:

- emit `<link rel="modulepreload">`s for all JS modules that the current page needs
- inline the page CSS, including `@font-face`s, straight into the HTML
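As an illustration, the resulting document head might look roughly like this (the file and font names are invented):

```html
<head>
  <style>
    /* Page CSS inlined straight into the HTML, including font declarations */
    @font-face {
      font-family: "Example Sans";
      src: url("/assets/example-sans.woff2") format("woff2");
    }
  </style>
  <!-- One modulepreload per JS module the current page needs -->
  <link rel="modulepreload" href="/assets/main.mjs" />
  <link rel="modulepreload" href="/assets/page-chunk.mjs" />
</head>
```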
These optimizations directly improve real-world performance, making the site visible and interactive sooner. These optimizations also dramatically worsen the Lighthouse score.
Demo
Above, you saw a loading trace of a demo page (URL). For this trace, `lighthouse -A` computes the following FCP and LCP:
Now, here’s the same page, modified to not have any `<link rel="modulepreload">` elements (URL):

We removed the `<link rel="modulepreload">`s, so:

- the real-world LCP is still ~the same (200-300 ms), because `<link rel="modulepreload">`s don’t affect it
- the website hydrates much later (at ~2000 ms instead of ~500 ms), making the real user experience worse

However, `lighthouse -A` now simulates a much better LCP:
This is Bad
This is bad because it creates bad incentives:
- Developers are forced to pick between doing deep research (and convincing stakeholders that Lighthouse isn’t accurate) – or making the site slower for real users just to get the score higher
- Companies (web platforms like Framer, agencies, etc.) are forced to pick between shipping fast sites – or retaining the business of customers who look at PageSpeed Insights scores
This is Serious
In Nov 2024, Google Amsterdam hosted WebPerfDays Unconference, an informal discussion event for Googlers, GDEs, and web perf specialists. At the event, the mismatch between Lighthouse and Core Web Vitals was (to my memory) one of the most-discussed points.
See also other people hit by this issue: WebPerf Slack one, two, #11460
Solutions
There are easy solutions, and there are hard ones.
- Easy solution 1: Fine-tune the simulation algorithm.

  - Ignore `modulepreload`s when computing FCP and LCP
  - Ignore fonts when computing FCP and LCP if 1) the FCP/LCP element is not text, or 2) the FCP/LCP element is text that uses a system font, or 3) the `@font-face` uses `font-display: swap` [or similar]

  At least some of those changes would have to be upstreamed to `@paulirish/trace_engine`, which is not on GitHub. This will reduce the punishment that Lighthouse gives to early requests.
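A sketch of what that filtering could look like (all names here – `shouldCountTowardsPaintMetric`, the request and element shapes – are hypothetical; the real change would live in the Lantern simulation code):

```javascript
// Decide whether a request should delay a simulated paint metric.
function shouldCountTowardsPaintMetric(request, paintElement) {
  // 1) Drop modulepreloads: they never block rendering.
  if (request.initiatorType === 'modulepreload') return false;

  // 2) Drop fonts unless they can actually delay the painted text.
  if (request.resourceType === 'Font') {
    if (paintElement.type !== 'text') return false; // e.g. LCP is an image
    if (paintElement.usesSystemFont) return false;
    if (request.fontDisplay === 'swap') return false; // text paints immediately
  }
  return true;
}

// Example: a preloaded font should not delay an image LCP.
const fontRequest = { resourceType: 'Font', fontDisplay: 'block' };
const imageLcp = { type: 'image' };
shouldCountTowardsPaintMetric(fontRequest, imageLcp); // → false under this proposal
```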
- Easy solution 2: For simulated FCP and LCP, pick `min(simulated FCP value, simulated LCP value)` when 1) their real-world values are the same, and 2) they were triggered by the same element. This will avoid an artificial mismatch between FCP and LCP when they are actually the same in the real world.
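A sketch of that rule (the function and field names are made up for illustration):

```javascript
// Collapse simulated FCP/LCP when they were one and the same paint
// of one and the same element in the real trace.
function reconcilePaintMetrics(observed, simulated) {
  const sameInRealWorld =
    observed.fcpMs === observed.lcpMs &&
    observed.fcpElement === observed.lcpElement;

  if (!sameInRealWorld) return simulated;

  const best = Math.min(simulated.fcpMs, simulated.lcpMs);
  return { ...simulated, fcpMs: best, lcpMs: best };
}

const result = reconcilePaintMetrics(
  { fcpMs: 1000, lcpMs: 1000, fcpElement: 'h1', lcpElement: 'h1' },
  { fcpMs: 2600, lcpMs: 5200 },
);
// Both simulated metrics collapse to 2600 ms.
```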
I would be happy to contribute these changes if you’re open to accepting them.
The harder solution would be to get smarter about figuring out which assets are actually render-blocking. We might have to look at 1) where a script is positioned in the document, 2) whether a script applies an anti-flicker effect, etc. This is harder and much less well-defined, but perhaps this issue could be the start of a discussion.