The Performance Golden Rule Revisited
There was a comment on Twitter today from Rafael Gonzaga expressing disappointment in what he sees as a tendency for performance discussions to focus solely on the frontend while neglecting the server side.
In the discussion that followed, the Golden Rule of Performance (popularized by Steve Souders) was brought up:
> 80-90% of the end-user response time is spent on the frontend.
In the thread, Steve Souders suggested that someone should revisit the Golden Rule to see if it still holds for today.
I was curious, so I figured I would oblige.
Revisiting the golden rule
Way back in 2006, Tenni Theurer first wrote about the 80/20 rule as it applied to web performance. The Yahoo! team did some digging on 8 popular sites (remember MySpace?) to see what percentage of a page's load time was spent on backend work (Time to First Byte) versus frontend work (pretty much everything else).
What they found was that the vast majority of the time was spent on the frontend for all 8 sites:
| Site | Time Retrieving HTML | Time Elsewhere |
|---|---|---|
| Yahoo! | 10% | 90% |
| | 25% | 75% |
| MySpace | 9% | 91% |
| MSN | 5% | 95% |
| ebay | 5% | 95% |
| Amazon | 38% | 62% |
| YouTube | 9% | 91% |
| CNN | 15% | 85% |
When Steve Souders repeated it in 2012, he found much the same. Among 50,000 websites the HTTP Archive was monitoring at the time, 87% of the time was spent on the frontend and 13% on the backend.
I ran a few queries against the HTTP Archive to see how well the same logic applies today. In keeping with the original analysis, I'm comparing Time to First Byte (how long it takes for the first bytes of the first request to start arriving from the server) to the total load time (when onLoad fires). I broke the percentages down by page rank (based on traffic to the site); the percentages shown are the medians for each group of sites.
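As a rough sketch of that calculation (the timing values below are invented for illustration, and the real analysis runs as queries against the HTTP Archive's BigQuery dataset rather than in Python), the per-site split reduces to TTFB divided by onLoad time:

```python
from statistics import median

def backend_frontend_split(pages):
    """Given (ttfb_ms, onload_ms) pairs for a group of pages, return
    the median backend and frontend shares of total load time, as
    percentages."""
    backend_shares = [ttfb / onload * 100 for ttfb, onload in pages]
    backend = median(backend_shares)
    return backend, 100 - backend

# Hypothetical timings for three pages in one rank bucket.
pages = [(200, 2000), (150, 1500), (300, 2000)]
backend, frontend = backend_frontend_split(pages)
print(f"backend: {backend:.1f}%, frontend: {frontend:.1f}%")
```

Taking the median per rank bucket (rather than the mean) keeps a handful of pages with pathological TTFBs from skewing the group's split.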
First up, the mobile results.
| Site Rank | Backend | Frontend |
|---|---|---|
| Top 1,000 | 12.7% | 87.3% |
| 1,001 - 10,000 | 12.5% | 87.5% |
| 10,001 - 100,000 | 13.8% | 86.2% |
| 100,001 - 1,000,000 | 14.5% | 85.5% |
Next up, the desktop results.
| Site Rank | Backend | Frontend |
|---|---|---|
| Top 1,000 | 9.9% | 90.1% |
| 1,001 - 10,000 | 10.8% | 89.2% |
| 10,001 - 100,000 | 12.6% | 87.4% |
| 100,001 - 1,000,000 | 13.3% | 86.7% |
For both desktop and mobile, the 80/20 rule still holds strong. In fact, it's more like an 85/15 rule threatening to become a 90/10 rule for the top 1,000 URLs, which likely have more resources to throw at high-quality CDNs and backend infrastructure.
But… does it really matter?
So…now what?
While it's interesting to see that the rule holds up (and that, at least at the aggregate level, perf is getting even more frontend-dominant), I'm not sure how much it really matters anymore.
When the rule was first brought up, frontend performance wasn’t really a thing that people did. Performance conversations were dominated by discussions around the server-side aspects of performance. That rule itself was part of an appeal to get folks to focus on the frontend aspects as well.
Fast forward to today, and that's less of a problem. To Rafael's point, there's a lot of chatter nowadays around the frontend aspects of performance, particularly after the rise in popularity of Lighthouse and Core Web Vitals.
The split between “backend” and “frontend” performance is increasingly murky as we see popular JavaScript frameworks focusing more on server-side rendering with client-side enhancements after the fact. We also see how server configurations can enable features like early hints, or mess with HTTP/2 prioritization, or impact the level of compression applied to resources, and on and on. It isn’t immediately obvious where backend performance ends and frontend performance starts.
Nor should we be thinking about it that way.
As we increasingly center our performance conversations on metrics that relate to the user experience, the divide between backend and frontend goes past murky and starts becoming problematic.
Want to improve your Largest Contentful Paint? If you’re focusing solely on the “frontend” aspects of performance, you’re likely missing out. Measuring and optimizing your Time to First Byte is frequently a critical component of improving LCP. I’ve seen a lot of sites suffering from extremely volatile TTFB metrics that vary dramatically based on geography or whether or not there’s a cache hit or miss. Diving deeper into what that TTFB is comprised of and how to optimize it can unlock big performance wins and even the playing field for your users.
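One way to surface that kind of volatility (a minimal sketch; the sample data and segment names are invented for illustration) is to segment your TTFB samples by cache status or geography and compare a high percentile per segment:

```python
from statistics import quantiles

def ttfb_by_segment(samples):
    """Group (segment, ttfb_ms) samples and return the p75 TTFB per
    segment, so the volatile segments stand out."""
    groups = {}
    for segment, ttfb in samples:
        groups.setdefault(segment, []).append(ttfb)
    # quantiles(n=4) returns [p25, p50, p75]
    return {seg: quantiles(values, n=4)[2] for seg, values in groups.items()}

# Hypothetical samples: cache hits are fast and stable, misses are not.
samples = [("hit", 80), ("hit", 90), ("hit", 100), ("hit", 110),
           ("miss", 400), ("miss", 700), ("miss", 1100), ("miss", 1600)]
for segment, p75 in sorted(ttfb_by_segment(samples).items()):
    print(f"{segment}: p75 TTFB = {p75:.0f} ms")
```

A large gap between segments is the signal: it tells you whether the fix lives in cache hit rates, origin placement, or the origin's response time itself.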
Another great example is the new Early Hints header. That's technically a server-side optimization (probably…), but one that Shopify and Cloudflare found had a big impact on frontend, user-focused metrics like First Contentful Paint and Largest Contentful Paint.
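For reference, an Early Hints exchange looks roughly like this: while the server is still generating the real response, it sends an informational 103 response carrying preload hints so the browser can start fetching critical resources early (the resource name here is illustrative):

```http
HTTP/1.1 103 Early Hints
Link: </styles/main.css>; rel=preload; as=style

HTTP/1.1 200 OK
Content-Type: text/html
Link: </styles/main.css>; rel=preload; as=style

<!doctype html>
...
```

The server sends the hints, but the benefit shows up entirely on the frontend side of the ledger, which is exactly the point.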
Point being: if you're focusing on one or the other at this point, you're going to run up against the limits of what you can accomplish with your optimizations pretty quickly.
The Golden Rule of performance still applies, and if you’re finding yourself in a situation where your organization still treats performance as primarily a server-side consideration, then there’s definitely value in doing the comparison for your own sites to help folks start to shift their thinking.
Otherwise? The rule probably doesn’t matter much.
Take a holistic approach to performance and pay attention to how it all contributes to the user experience—that’s how you’ll find your biggest, most meaningful performance wins.