Limiting JavaScript?

Yesterday there was a bit of a heated discussion around a WebKit issue that suggested putting a limit on the amount of JavaScript a website can load. In the issue, Craig Hockenberry makes the case that enforcing a limit on the amount of JavaScript would provide a sort of “meet me in the middle” solution for users currently relying on content blockers. As he puts it:

Content blockers have been a great addition to WebKit-based browsers like Safari. They prevent abuse by ad networks and many people are realizing the benefits of that with increased performance and better battery life.

But there’s a downside to this content blocking: it’s hurting many smaller sites that rely on advertising to keep the lights on…

The situation I’m envisioning is that a site can show me any advertising they want as long as they keep the overall size under a fixed amount, say one megabyte per page. If they work hard to make their site efficient, I’m happy to provide my eyeballs.

If WebKit pursues the idea further, they wouldn’t be alone.

Alex Russell has been working on a Never-Slow Mode for Chrome since October or so. Never-Slow Mode is much more refined, as you would expect given how long it has been brewing. It doesn’t merely look at JavaScript size; it also considers CSS, images, fonts and connections. And it disables some features that are harmful to performance, such as document.write and synchronous XHR.

Never-Slow Mode isn’t that far removed from two ideas that we had back in 2016 when Yoav Weiss and I met with the AMP team to discuss some standards-based alternatives to AMP. One of the ideas that came out of that discussion was Feature Policy, which lets you disable and modify specific features in the browser. Another was the idea of “Content Sizes”, which would enable first-party developers to specify limits on the size of different resource types. This was, primarily, a way for them to keep third-party resources in check. Never-Slow Mode would combine these two concepts to create a set of default policies that ensure a much more performant experience.
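As a rough illustration of how those two pieces fit together, here’s a minimal sketch of a server opting its own pages into a Feature Policy along those lines. The sync-xhr directive is a real Feature Policy feature; the size-capping directive is entirely hypothetical, a stand-in for the “Content Sizes” idea rather than any shipped syntax.

```typescript
import { createServer } from "http";

// Minimal sketch: a first party opting its own pages into restrictions.
// "sync-xhr 'none'" is a real Feature Policy directive (it disables
// synchronous XHR); "script-size" is a made-up stand-in for the
// "Content Sizes" idea, not a real directive.
const server = createServer((req, res) => {
  res.setHeader("Feature-Policy", "sync-xhr 'none'; script-size 500000");
  res.setHeader("Content-Type", "text/html");
  res.end("<!doctype html><title>demo</title><p>A page that opts in.</p>");
});

server.listen(8080);
```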

Not only would WebKit not be alone in pursuing some sort of resource limits, but they wouldn’t exactly be breaking new ground either.

Browsers feature similar limits and interventions already today. iOS imposes a memory limit (a high one, but it’s still a limit) that folks most often run into when using large, high-resolution images. And Chrome’s NOSCRIPT intervention skips right past the idea of limiting JavaScript and disables it altogether.

In other words, the idea itself isn’t as radical as maybe it appears at first blush.

Still, there are a few concerns that were raised that I think are very valid and worth putting some thought into.

Why’s Everybody Always Pickin’ on Me

One common worry I saw voiced was “if JavaScript, why not other resources too?”. It’s true; JavaScript does get picked on a lot, though not without reason. Byte for byte, JavaScript is the most significant detriment to performance on the web, so it does make sense to put some focus on reducing the amount we use.

However, the point is valid. JavaScript may be the biggest culprit more often than not, but it’s not the only one. That’s why I like the more granular approach Alex has taken with Chrome’s work. Here are the current types of caps that Never-Slow Mode would enforce, as well as the limits for each:

  • Per-image max size: 1MB
  • Total image budget: 2MB
  • Per-stylesheet max size: 100kB
  • Total stylesheet budget: 200kB
  • Per-script max size: 50kB
  • Total script budget: 500kB
  • Per-font max size: 100kB
  • Total font budget: 100kB
  • Total connection limit: 10
  • Long-task limit: 200ms

There’s a lot more going on here than simply limiting JavaScript. There are limits for individual resources, as well as their collective cost. The limit on connections, which I glossed over the first time I read the description, would be a very effective way to cut back on third-party content (the goal of Craig’s suggestion to WebKit). Finally, the limit on long tasks ensures that the main thread is not overwhelmed and the browser can respond to user input.
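For a sense of what those caps mean in practice, here’s a small sketch you could run in a browser console to audit a page you own against a couple of them, using the Resource Timing API. Nothing in it is enforced by the browser; the classification by initiatorType is rough, and the budget numbers are simply copied from the list above.

```typescript
// Rough audit of the current page against two Never-Slow-Mode-style caps.
// Runs in a browser console; it reports, it doesn't enforce anything.
const SCRIPT_BUDGET = 500 * 1024;     // total script budget: 500kB
const IMAGE_BUDGET = 2 * 1024 * 1024; // total image budget: 2MB

const entries = performance.getEntriesByType("resource") as PerformanceResourceTiming[];

// initiatorType is only a loose proxy for resource type: "script" covers
// <script src>, "img" covers <img>; images pulled in via CSS show up as "css".
const totalBytes = (type: string) =>
  entries
    .filter((entry) => entry.initiatorType === type)
    .reduce((sum, entry) => sum + entry.transferSize, 0);

const scriptBytes = totalBytes("script");
const imageBytes = totalBytes("img");

console.log(`scripts: ${(scriptBytes / 1024).toFixed(1)}kB`, scriptBytes > SCRIPT_BUDGET ? "over budget" : "within budget");
console.log(`images: ${(imageBytes / 1024).toFixed(1)}kB`, imageBytes > IMAGE_BUDGET ? "over budget" : "within budget");
```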

It does seem to me that if browsers do end up putting a limit on the amount of JavaScript, they should consider following that lead and imposing limits on other resources as well, where appropriate.

How big is too big?

Another concern is that these size limits feel arbitrary. How do we decide how much JavaScript is too much? For reference, the WebKit bug thread hasn’t gone as far as suggesting an actual size yet, though Craig did toss out a 1MB limit as an example. Chrome’s Never-Slow Mode is operating with a 500kB cap. That 500kB cap, it’s worth noting, is transfer size, not the decoded size the browser has to parse and execute. In terms of the actual code the device has to run, that’s still somewhere around 3-4MB, which is…well, it’s a lot of JavaScript.
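That gap between wire size and decoded size is easy to see for yourself. Here’s a quick sketch using the Resource Timing API; note that cross-origin scripts report zeroes for both fields unless they’re served with a Timing-Allow-Origin header.

```typescript
// transferSize is what came over the wire (usually compressed);
// decodedBodySize is what the browser actually has to parse and execute.
// Cross-origin scripts report 0 for both without Timing-Allow-Origin.
for (const entry of performance.getEntriesByType("resource") as PerformanceResourceTiming[]) {
  if (entry.initiatorType !== "script") continue;
  console.log(
    entry.name,
    `wire: ${(entry.transferSize / 1024).toFixed(1)}kB`,
    `decoded: ${(entry.decodedBodySize / 1024).toFixed(1)}kB`
  );
}
```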

That being said, the caps currently used in Never-Slow Mode are just guesses and estimates. In other words, the final limits may look very different from what we see here.

Exactly what amount to settle on is a tricky problem. The primary goal here isn’t necessarily reducing data usage (though that is a nice side effect), but rather reducing the strain on the browser’s main thread. Sizes are being used as a fuzzy proxy here, which makes sense; putting a cap on CPU usage and memory is a lot harder to pull off. Is focusing on size ideal? Probably not. But it’s not that far off base either.

The trick is going to be to find a default limit that provides a benefit to the user without breaking the web in the process. That 500kB JavaScript limit, for example, is right around the 60th percentile of sites according to HTTP Archive, which may end up being too aggressive a target. (Interestingly, when discussing this with Alex, he pointed out that the 50kB limit on individual JavaScript files broke sites far more often than the 500kB restriction. Which, I suppose, makes sense when you consider the size of many frameworks and bundles today.)

One thing that seems to be forgotten when we see browsers suggesting things like resource limits, or the selective disabling of JavaScript, is that they aren’t going to roll something out to the broader web that breaks a ton of sites. If they did, developers would riot and users would quickly move to another browser. There’s a delicate balance to be struck here. You need the limit low enough to actually accomplish something, but high enough that you don’t break everything in the process.

Even more than that, you need to be careful about when and where you apply those limits. Currently, the idea with Never-Slow Mode is to roll those restrictions out selectively, only in limited situations, according to Alex:

Current debate is on how to roll this out. I am proposing a MOAR-TLS-like approach wherein we try to limit damage by starting in high-value places (Search crawl, PWAs install criteria, Data-Saver mode) and limit to maintained sites (don’t break legacy)

In other words, they would take a very gradual approach, like they did with HTTPS Everywhere, focusing on specific situations in which to apply the restrictions and giving careful consideration to how to progressively enable a UI that keeps users informed.

Data-Saver mode (the user opt-in mode that indicates they want to use less data), to me, is so obvious a choice that it should just happen.

Progressive web app (PWA) installs are an interesting one as well. I can definitely see the case for making sure that a PWA doesn’t violate these restrictions before allowing it to be added to the homescreen and get all the juicy benefits PWAs provide.

It’s also worth noting, while we’re on PWAs, that Never-Slow Mode would not apply those restrictions to the service worker cache or web workers. In other words, Never-Slow Mode is focused on the main thread. Keep that clear and performant and you’ll be just fine.

How would browsers enable and enforce this?

Still, the risk of broken functionality will always be there, which brings us to consideration number three: how do browsers enable these limits, and how do they encourage developers to pay attention?

The surface level answer seems relatively straightforward: you give control to the users. If the user can opt into these limits, then we developers have zero right to complain about it. The user has signaled what they want, and if we are going to stubbornly ignore them, they may very well decide to go somewhere else. That’s a risk we take if we ignore these signals.

The issue of control is a bit more nuanced when you start to think about the actual implementation though.

How do we expose these controls to the user without annoying them?

How do we make sure that the value and risk is communicated clearly without overwhelming people with technical lingo?

How do we ensure developers make responding to the user’s request for a faster site a priority?

Kyle Simpson’s suggestion of a slider that lets the user choose the level of “fidelity” they prefer is an interesting one, but it would require some care to make sure the wording strikes the right balance: technically vague, yet clear enough that users understand what the impact would be. Would users really have a sense of what level of “fidelity” or “fastness” they would be willing to accept versus not?

Kyle also suggested that these sliders would ultimately send back a header with each request to the site, so that the site itself could determine what it should and should not send down to the user. That idea is a better articulation of a concern that seemed to underlie much of the negative feedback: developers are leery of browsers imposing some limit all on their own without letting sites have some say in it themselves.

And I get it, I do. I love the idea of a web that is responsible and considerate of users first and foremost. A web that would look at these user signals and make decisions that benefit the user based on those preferences. I think that’s the ideal scenario, for sure.

But I also think we have to be pragmatic here.

We already have a signal like this in some browsers: the Save-Data header. It’s more coarse than something like Kyle’s suggestion would be—it’s a very straightforward “I want to save data”—but it’s a direct signal from the user. And it’s being ignored. I couldn’t find a single example from the top 200 Alexa sites of anyone optimizing when the Save-Data header was present, despite the fact that it’s being sent more frequently than you might think.
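For what it’s worth, honoring that signal doesn’t have to be complicated. Here’s a minimal sketch of a server responding to it using Node’s built-in http module; the Save-Data header itself is real, while the lite and full bundle file names are placeholders for illustration.

```typescript
import { createServer } from "http";
import { readFileSync } from "fs";

// Save-Data is a real request header: browsers send "Save-Data: on" when the
// user has opted into a data-saving mode. The bundle file names below are
// placeholders for illustration, not anything a real site ships.
const server = createServer((req, res) => {
  const saveData = String(req.headers["save-data"] ?? "").toLowerCase() === "on";

  // Vary on Save-Data so caches keep the light and full responses separate.
  res.setHeader("Vary", "Save-Data");
  res.setHeader("Content-Type", "application/javascript");
  res.end(readFileSync(saveData ? "bundle.lite.js" : "bundle.full.js"));
});

server.listen(8080);
```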

If these requests for less data and fewer resources have any chance at all of being taken seriously by developers, there needs to be some sort of incentive in place.

That being said, I like the idea of the developer having some idea of what is happening to their site. So here’s what I’m thinking might work:

  1. The browser sets a series of restrictions that it can enforce.
    These limits need to be high enough to avoid widespread breakage while still low enough to actually protect users. (Sounds so simple, doesn’t it? Meanwhile, the folks having to implement this are banging their heads against their desks right now. Sorry about that.) These limits also need to be applied very carefully depending on the situation. The direction Never-Slow Mode is headed, in terms of both granularity and progressive rollout, makes a lot of sense to me.
  2. These restrictions could, optionally, be reduced further with user input.
    Whether it’s in the form of a coarse “put me in a never slow mode” or a more granular control, I’m not sure. If this step happens, it needs to be clearly communicated to the user what they’re getting and giving up. Right now, I’m not sure most everyday people would have a clear understanding of the trade-offs.
  3. The browser should communicate to the site when those limits apply.
    If the user does opt into a limit, or the browser is applying limits in a certain situation, communicate that through some sort of request header so developers have the ability to make optimizations on their end.
  4. The browser should communicate to the site if those limits get enforced.
    If and when the browser does have to enforce the limits a site violates, provide a way to beacon that back to the site for later analysis, perhaps similar to Content Security Policy violation reports. (A rough sketch of this and the header from the previous point follows the list.)
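To make points 3 and 4 a little more concrete, here’s a rough sketch of what they could look like from the server’s side. Everything named in it is hypothetical: there is no Resource-Limits request header and no standard report format; the shape simply borrows from client hints and Content Security Policy violation reports.

```typescript
import { createServer } from "http";

// Hypothetical sketch only: neither the "resource-limits" request header nor
// the "/limit-reports" endpoint exists anywhere today. The shape borrows from
// client hints and CSP violation reporting.
const server = createServer((req, res) => {
  if (req.method === "POST" && req.url === "/limit-reports") {
    // Point 4: the browser beacons back which limit it enforced so the site
    // can analyze the breakage later.
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      console.log("limit enforced:", body); // e.g. a JSON blob naming the resource type and budget
      res.statusCode = 204;
      res.end();
    });
    return;
  }

  // Point 3: the browser announces up front that limits apply to this request,
  // giving the site the chance to serve something lighter of its own accord.
  const limitsApply = req.headers["resource-limits"] !== undefined;
  res.setHeader("Vary", "Resource-Limits");
  res.setHeader("Content-Type", "text/html");
  res.end(limitsApply ? "<!-- trimmed-down page -->" : "<!-- full page -->");
});

server.listen(8080);
```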

I don’t see this approach as particularly troublesome as long as those defaults are handled with care. Is it applying a band-aid to a gunshot wound? Kind of, yes. There are bigger issues—lack of awareness and training, lack of top-down support, business models and more—that contribute to the current state of performance online. But those issues take time to solve. It doesn’t mean we pretend they don’t exist, but it also doesn’t mean we can’t look for ways to make the current situation a little better in the meantime.

If a limit does get enforced (it’s important to remember this is still a big if right now), then as long as it’s handled with care, I can see it being an excellent thing for the web: one that prioritizes users while still giving developers the ability to take control of the situation themselves.

Thanks to Yoav Weiss and Alex Russell for helping me fine-tune some of my thoughts on this.