Performance Budgets That Stick


Occasionally I hear chatter about performance budgets “not working.” And, to be fair, I have seen companies that adopt a budget and then are unable to make meaningful improvements toward that goal. Over time those ineffective budgets get pushed to the sidelines, where they accumulate dust before being forgotten altogether.

It’s not really about the performance budget, though. Or rather, it’s not the idea of performance budgets that doesn’t work in those cases—it’s the execution and reinforcement around the budget.

My definition of a performance budget has evolved over the years, but here’s my current working draft:

A performance budget is a clearly defined limit on one or more performance metrics that the team agrees not to exceed, and that is used to guide design and development.

It’s a bit lengthy, I know, but I think the specifics are important. If it’s not clearly defined, if the team doesn’t all agree not to exceed the limits, and if it doesn’t get used to guide your work, then it’s a number or goal, but not a budget.

On the surface, performance budgets seem pretty simple. But in my experience working with a variety of organizations at different points in their performance journey, establishing a meaningful budget is one of the most critical components in successfully changing the way a company approaches performance on a day-to-day basis.

But as anyone who has ever set a budget on their spending will tell you, merely setting up a budget doesn’t accomplish anything. To be effective, a performance budget has to be concrete, meaningful, integrated, and enforced.


Concrete

Being concrete means that we have to pick a number and get specific about what we’re after.

Phrases like “fast as possible,” “faster than our competition,” or “lightning-quick” sound great, but they’re not concrete. They’re too subject to interpretation and leave too much wiggle room.

A performance budget needs to be a specific, clearly defined metric (or metrics, in some cases). For example: “We want our start render time to be less than 3 seconds at the 90th percentile.”

Or: “When tested over a 3G network on a mid-tier Android device, our Time to Interactive should be no longer than 4 seconds.”

You can have multiple budgets, but each of them needs to be very clearly defined. Someone who joins the team tomorrow should be able to look at them and know exactly what they mean and how to tell how well they’re stacking up.
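One lightweight way to get this level of clarity is to write the budgets down in a machine-readable form. The format below is purely hypothetical (the field names are mine, not any tool’s), but it captures the two example budgets above in a way a new team member, or a script, could read unambiguously:

```json
[
  {
    "metric": "start-render",
    "budget": 3000,
    "unit": "ms",
    "percentile": 90,
    "source": "RUM"
  },
  {
    "metric": "time-to-interactive",
    "budget": 4000,
    "unit": "ms",
    "conditions": { "network": "3G", "device": "mid-tier Android" },
    "source": "synthetic"
  }
]
```

Whatever format you choose, the point is the same: every budget states the metric, the limit, and the conditions it’s measured under.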


Meaningful

That metric (or metrics, if you choose to have multiple budgets) needs to be meaningful if it’s going to stick. You can put a performance budget on your load time, but if it doesn’t accomplish anything for your business and makes no significant change to the user experience, it won’t be long before people simply stop caring about it. We don’t make sites faster for the sake of making them faster; we make them faster because it’s better for the people using our sites and better for the business.

In the best-case scenario, you look at your real user data and find a metric that has a clear tie to your business.

First, identify some business-critical metrics that you pay attention to. Maybe that’s conversion rate or bounce rate. Whatever it is, you want to look for a performance metric that has a clear connection to it (tools like SpeedCurve and mPulse can help with this).

Let’s say bounce rate is critical for your organization, and that you find a clear connection between start render and the bounce rate on your site (it’s a fairly common connection in my experience). It probably makes sense to set a budget on your start render time and work towards that. That way you know that if you make improvements towards your budget, you create a better experience for users and improve your site’s effectiveness as well.

If you don’t have access to real user data, you do the best you can with what you’ve got (while, ideally, working on getting solid real user data in place). You can look at what similar companies have found, and consider your performance in user-focused terms, to come up with potential metrics to prioritize. Then you can do a little benchmarking of your organization and some competitors to find a target that puts you at the top of the list and gives you a competitive advantage.

The need for meaningful metrics is also why I always recommend using some sort of timing-based metric (custom or not) as your primary performance budget, rather than a weight or a Lighthouse score. Those can be great supporting budgets, but if they’re not connected to a larger goal they’re far less likely to stick over the long haul.


Integrated

Once you have a meaningful budget chosen, it needs to be integrated into your workflow from start to finish.

Put it into your acceptance criteria for any new feature or page.

Display it in your developer environment so that as developers are making changes, they’re getting immediate feedback on how well they’re adhering to the budget.

Translate the metric into a set of quantity-based budgets, which are more tangible for designers and developers during their day-to-day work. It’s an approximation, but it’s a critical step: it gives a designer constraints to play with when making decisions between different images, fonts, and features.
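The usual approximation is to divide the time budget by the throughput of the connection you’re targeting. The numbers below are illustrative assumptions, not measurements; you’d tune them to your own target devices and networks.

```javascript
// Rough translation of a timing budget into a transfer-size budget.
// timingBudgetMs: the overall budget (e.g. a 4s TTI budget).
// throughputKbps: assumed connection throughput in kilobits/second.
// overheadMs: time reserved for latency, parsing, and processing.
function byteBudget(timingBudgetMs, throughputKbps, overheadMs = 0) {
  const usableMs = timingBudgetMs - overheadMs;
  // Convert kilobits/second to bytes, scaled by the usable time.
  return Math.floor((usableMs / 1000) * ((throughputKbps * 1000) / 8));
}

// A 4s budget on a ~1600 Kbps connection, reserving ~1s for latency
// and processing, leaves roughly 600 KB for everything on the page.
const budgetBytes = byteBudget(4000, 1600, 1000); // → 600000
```

From there, the designer-facing quantity budgets (“no more than X KB of JavaScript, Y KB of images”) are a matter of dividing that total up.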

Display your budgets on dashboards throughout the organization so that everyone is continuously reminded of what they are and how you’re doing against them.

The idea is to make your budget, and how you stack up against it, as visible as possible to everyone throughout the workflow.


Enforced

Once the budget is firmly integrated into your workflow, the next step is to make it enforceable. Even the most dedicated teams are going to make mistakes. We need to put checks and balances in place to make sure the budget doesn’t get ignored.

Most monitoring tools now let you establish budgets and will alert team members via Slack, email, or similar when a budget is exceeded. That’s a fairly passive way of keeping tabs on the budget, but it can clue you in quickly when something goes wrong.

Even better is being proactive and building checks and balances into your continuous integration environment or build process.

For example, you can set custom budgets in Lighthouse (something that will get more powerful soon) and check them on every pull request. You can also test against WebPageTest automatically using its API.
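In recent versions, those Lighthouse budgets live in a budget.json file passed with the `--budget-path` flag. A minimal example might look like this (timing budgets are in milliseconds, resource sizes in kilobytes; the feature is still evolving, so check the current Lighthouse documentation for the exact fields):

```json
[
  {
    "path": "/*",
    "timings": [
      { "metric": "interactive", "budget": 4000 }
    ],
    "resourceSizes": [
      { "resourceType": "script", "budget": 300 }
    ]
  }
]
```

Lighthouse then flags any run where the page exceeds those limits, which makes it easy to fail a pull request check on the result.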

For anyone building a JavaScript-heavy application, something like bundlesize, which alerts you to budget issues in your JavaScript bundles, is an absolute must.
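bundlesize is configured directly in your package.json; by default it compares the compressed size of each matched file against its limit:

```json
{
  "bundlesize": [
    { "path": "./dist/app.*.js", "maxSize": "100 kB" },
    { "path": "./dist/vendor.*.js", "maxSize": "150 kB" }
  ]
}
```

Running `bundlesize` in CI then fails the build when a bundle exceeds its limit, and it can report the result as a status check right on the pull request. (The file paths and limits above are placeholders; you’d point them at your own build output.)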

Enforcing hard limits on your pull requests can seem intense, but those constraints can turn into a fun challenge and really change the way your entire team approaches their work.

On a recent episode of Shop Talk Show, Jason Miller talked about Preact’s upcoming release, Preact X, and their limits on total library size. When a pull request for a new feature would land that added even a few bytes to the weight of the library, people would start playing “code golf”—finding technical debt that could be optimized to keep the library size under budget. Contributors started aiming to reduce the size of the library with every pull request as they added features, a refreshing inverse of the usual situation.

Supporting Your Budget

The point is that you can’t let the performance budget stand on its own, hidden somewhere in company documentation collecting dust. You need to be proactive about making the budget a part of your everyday work.

It’s not just a number, but a hard line. It’s a unifying target that your entire team can rally around. It provides clarity and constraints that can help guide decisions throughout the entire workflow and enable teams to focus on making meaningful improvements.

And, if you make sure it’s clearly defined, meaningful to your users and business, integrated into your workflow at every available step and enforced in your build process, then a performance budget can be an indispensable part of your performance journey.

This is a polished-up take on a 5-minute presentation I gave for This.JavaScript—State of Performance on March 6, 2019.