Why the median is a comforting lie
Open your analytics console and look at the Core Web Vitals report. If you are like most enterprise digital teams, you are looking at a dashboard that shows LCP just under 2.5s, INP under 200ms, CLS around 0.1. All within the Google "Good" thresholds. The team breathes easy.
The number you are looking at describes the median user. The median user skews heavily toward your best-served cohort: desktop, broadband, recent browser, geographically near a CDN edge. That cohort's experience is fine. The problem is that the median user is not the user generating the marginal revenue.
Pull the p75 instead. The 75th-percentile value bounds your slowest quartile of user experience. On most enterprise sites we audit, the p75 LCP sits between 4.5 and 8 seconds. That is the user on a mobile network in a tier-two market, three hops from your CDN, on a device with a constrained CPU. That user is your future customer, and the aggregate median is hiding their experience.
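The gap between the two numbers is easy to see on raw RUM samples. A minimal sketch, with illustrative values rather than measurements from any real site:

```python
import math

def percentile(values, pct):
    """Nearest-rank percentile of a list of numbers (pct in 0..100)."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))  # 1-indexed rank
    return ordered[rank - 1]

# Illustrative LCP samples in seconds: a comfortable majority plus a slow tail.
lcp_seconds = [1.8, 2.0, 2.1, 2.2, 2.3, 2.9, 4.8, 6.5, 7.2, 8.1]

print(percentile(lcp_seconds, 50))  # the median the dashboard reports
print(percentile(lcp_seconds, 75))  # the quartile the dashboard hides
```

With this sample the median sits inside the "Good" threshold while the p75 is more than double it, which is exactly the shape of the problem.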
What the p75 cohort actually does
The behavioral pattern of the p75 cohort is consistent across the brands we have measured. They land on the page. They wait. They scroll before the hero image renders. They tap a CTA that has not yet loaded its event handler. The tap goes nowhere. They wait two more seconds. They leave.
This shows up in analytics as a bounce. The marketing team reads it as a content-relevance problem and tries to fix it with messaging. The actual problem is that the user never finished loading the page they came to. No amount of better copy will fix that.
Our team has watched this pattern across about a dozen enterprise sites over the past two years. The pattern correlates strongly with geographic and device-class signals. The p75 cohort is disproportionately mobile, disproportionately in markets the brand thinks of as secondary, and disproportionately on networks that are not 5G. On every site we have measured, this cohort represents 18 to 30 percent of paid-traffic landings.
What to fix first
When we engage on a Core Web Vitals remediation, the order of work matters. Most teams want to start with image optimization and lazy-loading because those are the visible levers. They are not the first lever.
First lever: the critical render path. What blocks the browser from rendering the hero? On most enterprise sites, the answer is two or three render-blocking scripts that the analytics team installed two years ago and nobody owns anymore. Removing them yields LCP improvements of 1-2 seconds on the p75 cohort. This is a 30-minute change with sign-off, not an engineering project.
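A quick way to find those scripts is to scan the document head for script tags that carry src but neither defer nor async. A minimal sketch in Python against an illustrative document; a real audit should also account for type="module" scripts, which defer by default:

```python
from html.parser import HTMLParser

class BlockingScriptFinder(HTMLParser):
    """Collects <script src> tags in <head> with no defer/async attribute."""
    def __init__(self):
        super().__init__()
        self.in_head = False
        self.blocking = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)  # boolean attributes appear as keys with value None
        if tag == "head":
            self.in_head = True
        elif tag == "script" and self.in_head:
            if "src" in a and "defer" not in a and "async" not in a:
                self.blocking.append(a["src"])

    def handle_endtag(self, tag):
        if tag == "head":
            self.in_head = False

# Illustrative page: one legacy blocking script, one properly deferred one.
html = """
<html><head>
  <script src="/analytics-2019.js"></script>
  <script src="/app.js" defer></script>
</head><body></body></html>
"""
finder = BlockingScriptFinder()
finder.feed(html)
print(finder.blocking)
```

Running this against rendered production HTML, not the CMS templates, is what surfaces the scripts nobody owns anymore.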
Second lever: the LCP element itself. What is the largest contentful paint element? On most sites it is a hero image. The image is rendered at full desktop resolution, served from a CDN that is geographically distant from the p75 cohort, and not preloaded. Each of those three is fixable in a sprint.
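Two of those fixes, sizing and preloading, can be sketched at the markup level. Paths, widths, and filenames here are illustrative assumptions:

```
<head>
  <!-- Fetch the hero early, before the parser reaches the <img> -->
  <link rel="preload" as="image" href="/hero-768w.jpg"
        imagesrcset="/hero-480w.jpg 480w, /hero-768w.jpg 768w, /hero-1600w.jpg 1600w">
</head>
<body>
  <!-- Serve a mobile-sized rendition to mobile, not the desktop master -->
  <img src="/hero-768w.jpg"
       srcset="/hero-480w.jpg 480w, /hero-768w.jpg 768w, /hero-1600w.jpg 1600w"
       sizes="100vw" alt="Hero" fetchpriority="high">
</body>
```

The CDN-distance fix is an infrastructure change rather than a markup one: the renditions above need to be cached at edges near the p75 cohort, not only near headquarters.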
Third lever: the JavaScript bundle. Most enterprise sites ship 800KB to 1.5MB of JavaScript before first paint. The p75 cohort's device parses that JS slowly, blocking interactivity. Code-splitting and dynamic imports — done properly, not just configured — typically cut p75 INP by 30-40%.
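Code-splitting itself is a JavaScript build concern, but the underlying principle, paying the parse cost of a module only when its feature is first used, can be sketched in any language. A Python analog using a deferred import:

```python
import sys

def export_report(data):
    # Deferred import: the csv module is loaded (and paid for) only on the
    # first call, not at program startup. This is the analog of a dynamic
    # import that keeps a feature's code out of the initial bundle.
    import csv, io
    buf = io.StringIO()
    csv.writer(buf).writerows(data)
    return buf.getvalue()

print("csv loaded at startup:", "csv" in sys.modules)
out = export_report([["lcp", 6.5]])
print("csv loaded after use:", "csv" in sys.modules)
```

In a web bundle the same move means the CTA's click handler ships in the critical chunk and the reporting, chat, and personalization code arrives after interactivity, not before it.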
Image and font optimization come fourth. They are useful but they are not where the meaningful p75 wins live. Teams that start with image optimization usually ship gains that are invisible on the p75 cohort because the larger blockers were never addressed.
Platform-specific patterns we have shipped
The remediation patterns vary by platform. We will not go deep on any single platform here, but a few patterns across the engagements our team has run are worth flagging.
Sitecore. The biggest p75 win on Sitecore is usually disabling personalization rules that fire synchronously on first paint. The rules are usually justified on conversion grounds; on the p75 cohort the personalization never renders before bounce, so the cost is paid and the benefit is not collected. Audit which rules fire synchronously, move them to post-LCP execution, and the p75 numbers improve materially.
AEM. The biggest p75 win on AEM is usually around component-level CSS. AEM's component model produces page-level CSS bundles that are larger than they need to be. Aggressive CSS extraction at the component level — combined with critical-CSS inlining for the above-fold components — typically cuts render-blocking CSS by 60-70%.
Contentful (headless setups). The biggest p75 win on Contentful-fed sites is usually around image transformation policies. The default image API serves up images sized for the requesting client, which sounds correct, but the negotiation adds a CDN round-trip that the p75 cohort cannot afford. Pre-rendering a small set of image sizes and serving them from the CDN edge usually wins more than dynamic sizing does.
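The pre-rendered-sizes approach reduces to a small lookup at render time. A sketch, where the width ladder is an assumption rather than a Contentful default:

```python
# Widths we pre-render and cache at the CDN edge (illustrative ladder).
PRERENDERED_WIDTHS = [320, 480, 768, 1080, 1600]

def pick_width(required_px):
    """Smallest pre-rendered width that covers the request; largest as fallback.

    Slight oversizing is the deliberate trade: a few extra kilobytes from the
    edge beat an exact-size image that costs an extra round-trip.
    """
    for w in PRERENDERED_WIDTHS:
        if w >= required_px:
            return w
    return PRERENDERED_WIDTHS[-1]

print(pick_width(360))   # small phone -> 480w rendition
print(pick_width(1080))  # large phone / small laptop -> exact match
print(pick_width(2000))  # oversized display -> capped at the largest rendition
```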
Across all three platforms, the meta-pattern is the same: the default configuration is tuned for the median user. The remediation work is mostly about overriding defaults for the p75 cohort. None of this is novel engineering. It is operational discipline applied to platforms that ship with median-friendly defaults.
The fix is not faster CDN edges. The fix is configuration discipline.
What to measure to know if it is working
The dashboards most teams use will keep reporting the median. To know whether your p75 remediation is working, you have to instrument the p75 cohort directly. The signals that matter: p75 LCP segmented by geography and device class; p75 INP segmented by initial page weight; conversion rate of the p75 cohort relative to the median cohort.
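Segmenting raw RUM rows by cohort is a few lines of work once the samples carry geography and device class. A minimal sketch with illustrative field names and values:

```python
from collections import defaultdict
import math

# Illustrative RUM rows; a real pipeline would read these from your
# analytics warehouse, with whatever field names it actually uses.
rows = [
    {"geo": "us", "device": "desktop", "lcp": 1.9},
    {"geo": "us", "device": "desktop", "lcp": 2.2},
    {"geo": "us", "device": "desktop", "lcp": 2.4},
    {"geo": "in", "device": "mobile",  "lcp": 4.9},
    {"geo": "in", "device": "mobile",  "lcp": 6.8},
    {"geo": "in", "device": "mobile",  "lcp": 8.0},
]

def p75(values):
    """Nearest-rank 75th percentile."""
    ordered = sorted(values)
    rank = max(1, math.ceil(0.75 * len(ordered)))
    return ordered[rank - 1]

by_cohort = defaultdict(list)
for r in rows:
    by_cohort[(r["geo"], r["device"])].append(r["lcp"])

for cohort, lcps in sorted(by_cohort.items()):
    print(cohort, p75(lcps))
```

The same grouping applied to conversion events instead of LCP samples produces the conversion-rate-by-cohort comparison.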
On every engagement where we have done this remediation work, the conversion-rate-by-cohort signal is the one that opens executive eyes. The p75 cohort typically converts at 30-50% of the median cohort's rate before remediation. After remediation, that gap closes to roughly 70-80%. That is meaningful revenue — usually high single digits to low double digits as a share of paid-traffic revenue — and it is not visible in any standard dashboard.
If your team is running paid traffic to a site that has never been measured this way, the assessment work is genuinely worth doing. Our Digital Health Assessment includes a p75 cohort read across the same five pillars we score the rest of the assessment against.