What to Expect When You're Optimizing


One of the most common pain points I hear from companies is that they spent a bunch of time chasing an optimization, only to find that, after shipping it, they couldn't provide any evidence that it made a difference to their performance.

There are a lot of things involved in making performance more predictable, but one immediate step you can take is to do the work up front to set expectations about which metrics you expect the optimization to move, and how you'll measure the impact.

For example, I have a client who is experimenting with the new Speculation Rules API, specifically using it to prefetch product detail pages (PDPs) when someone is on a category or search page.
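As a rough sketch of what that might look like, the snippet below feature-detects Speculation Rules support and injects a rule set that tells the browser to prefetch PDP links it finds on the page. The `/product/*` URL pattern and the `moderate` eagerness setting are illustrative assumptions, not the client's actual configuration.

```js
// Minimal sketch: inject a Speculation Rules rule set to prefetch PDP links.
// The "/product/*" pattern and "moderate" eagerness are illustrative only.
if (HTMLScriptElement.supports && HTMLScriptElement.supports('speculationrules')) {
  const rules = {
    prefetch: [
      {
        where: { href_matches: '/product/*' },
        eagerness: 'moderate'
      }
    ]
  };

  const script = document.createElement('script');
  script.type = 'speculationrules';
  script.textContent = JSON.stringify(rules);
  document.head.append(script);
}
```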

In addition to the technical details for how to implement it, our ticket includes a “Measuring the Impact” section that looks something like this:

How are we going to measure it?

We will measure the impact using our RUM data. We'll include an inline JavaScript snippet to determine whether the page was prefetched, and add that as custom data so we have two very clear buckets for comparison.
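A minimal sketch of that snippet, assuming the page's Navigation Timing entry exposes `deliveryType` (Chromium reports `"navigational-prefetch"` for pages delivered via a Speculation Rules prefetch); `sendToRUM()` is a hypothetical stand-in for whatever custom-data API your RUM product provides:

```js
// Rough sketch: bucket the page view as prefetched or not for RUM comparison.
// sendToRUM() is a hypothetical stand-in for the RUM product's custom-data API.
const navEntry = performance.getEntriesByType('navigation')[0];
const wasPrefetched = navEntry && navEntry.deliveryType === 'navigational-prefetch';

sendToRUM('prefetched', wasPrefetched ? 'yes' : 'no');
```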

We'll use the RUM product's compare functionality to compare visits to PDPs where the pages were prefetched against visits where the PDPs were not prefetched.

Speculation Rules are only supported in Chromium-based browsers, so we’ll zero in on Chrome and Edge sessions only.

What do we expect to change?

We expect to see a direct impact on TTFB and that will be the primary metric we’ll look at.

Paint metrics such as First Contentful Paint and Largest Contentful Paint should also see an improvement.

It’s nothing too fancy or complex, but including this little section for every optimization accomplishes a few things.

It makes sure the entire team knows which data source will be our source of truth. Often, teams have different tools, including locally run tools, with very different results, so we want to make sure everyone is on the same page. In this case, it also helps people know how to narrow that data down—this isn't an optimization that will impact Safari traffic, for example, so to get the clearest picture of the impact, we need to exclude that from our analysis.

It makes it clear which metric(s) we expect to move. There is no shortage of performance metrics, so having a clear list of which ones should be impacted keeps everyone focused on the right data.

It ensures the team has thought about what we expect from this optimization up front. A lot of times, organizations chase optimizations because they've read about them or seen a talk about them. Making sure that any optimization includes information about which metrics to measure, and how, ensures at least some thought has been given to how that optimization may or may not impact their specific situation.

It makes it clear which metric is our primary one to look at, in this case TTFB. Harry has written before about the importance of measuring the metrics you impact, not the ones you influence. That framing of directly impacted and indirectly impacted metrics is incredibly useful for ensuring we're getting a clear picture of the impact of our optimizations. If we were to focus on Largest Contentful Paint for this optimization, for example, we would allow a lot of noise to interfere with our measurement—prefetching pages will directly impact TTFB, but there are a lot of other potential bottlenecks between that milestone and when LCP fires.

By including a section like this, we've provided a lot of clarity about the optimization and reduced the likelihood of any wins getting lost because we looked at the wrong metrics or data sources.

Over time, doing this with each optimization opportunity helps to build both the team’s knowledge and the overall confidence in understanding what might move the needle, and what might not.