Tag: Performance
-
Why I’m skeptical of rewriting JavaScript tools in “faster” languages | Read the Tea Leaves
Good food for thought here from Nolan Lawson about rewriting JS tools in “faster” languages.
The point that a rewrite is often faster simply because it’s a rewrite is a very valid one—over time we add more features/functionality to our code and it starts to have a cost not just on perf, but on maintainability as well. A rewrite lets us start with those learnings already in mind.
But my favorite point is around the accessibility of JavaScript tools built in JavaScript:
For years, we’ve had both library authors and library consumers in the JavaScript ecosystem largely using JavaScript. I think we take for granted what this enables.
I wrote about this a few years back, but having JavaScript available on the front-end, back-end, on the edge, and in build tools is a powerful way to let developers extend their reach into different parts of the tech stack, and any decision to move away from that needs to be VERY carefully considered.
-
Paint Holding - reducing the flash of white on same-origin navigations
Oldie but a goodie describing Chrome’s “Paint Holding” optimization.
-
Improving Performance with HTTP Streaming | by Victor | The Airbnb Tech Blog | May, 2023 | Medium
Our experiments showed that Early Flush produced a flat reduction in First Contentful Paint (FCP) of around 100ms on every page tested, including the Airbnb homepage.
-
Adactio: Journal—The intersectionality of web performance
Jeremy discussing why performance isn’t just about business, but actually has impact across several broad categories:
- Business
- Sustainability
- Inclusivity
Naturally, I agree.
-
longtasks/loaf-explainer.md at main · w3c/longtasks · GitHub
All of the above are part of the same issue - a task is an incomplete and inaccurate cadence to measure main-thread blocking.
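The explainer’s proposed replacement is a long-animation-frame performance entry. A minimal sketch of observing it, assuming a Chromium-based browser that has shipped the API:

```html
<script>
  // Each entry describes a whole slow rendering frame, not just one task.
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log('Long animation frame:', entry.duration, entry.blockingDuration);
    }
  });
  observer.observe({ type: 'long-animation-frame', buffered: true });
</script>
```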
-
Improve Largest Contentful Paint (LCP) by removing image transitions – Performance @ Shopify
Image transitions are a technique used to animate images as they appear on screen. Examples include scaling or fading the image into view for visual flare. However, often these transitions are implemented in a way that causes a large degradation in performance and thus user experience.
We worked with Case-Mate to fix this issue on their website, resulting in a 6 second improvement in Largest Contentful Paint (LCP)! In this case study, we’ll walk through the experience and the steps we went through to make this improvement.
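The gist of the problem, sketched in CSS (class names and timings are mine, not from the case study): an LCP candidate that starts at opacity: 0 isn’t counted as painted until the fade brings it into view.

```html
<style>
  /* The problematic pattern: the hero image starts invisible, so its
     first visible paint (and therefore LCP) waits on the whole fade. */
  .hero--fade-in {
    opacity: 0;
    animation: fade-in 1s ease-in forwards;
  }
  @keyframes fade-in {
    to { opacity: 1; }
  }

  /* The fix is essentially to not do this to the LCP image: render it
     immediately and save entrance effects for non-critical imagery. */
</style>
```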
-
The JavaScript Site Generator Review, 2023—zachleat.com
Nice little review of JavaScript site generators by Zach. Couple standouts, to me:
Of the 7 generators tested, only Astro and Eleventy default to 0kb of client-side JavaScript. More 0kb baselines, please.
Three of the generators hide `npm audit` reports by default. (Basically, the opposite of “secure by default”)
-
Web Performance Calendar » A Performance Maturity Matrix
Drawing upon each model, I tried to roughly gauge the perf maturity of organizations, but found I needed a more visual aid — a map of perf work that would help me identify where orgs are in their perf journey: what areas of performance there are and what level they are in those areas. So I drew up a “Performance Maturity Matrix” with four levels including a “null” level (i.e. lacking the traits of performance maturity) and nine areas of perf work.
-
Building a Better Web - Part 1: A faster YouTube on web
Good little case study on how YouTube optimized their First Contentful Paint and Largest Contentful Paint by applying `preload` and `fetchpriority` to their poster image.
My favorite nugget is that they tested using an actual video thumbnail for their poster image versus a solid black poster image, and the black image performed better in user studies:
Using a solid black poster image showed the best results in user studies. Users found the transition from solid black to the first frame of the video to be a less-jarring experience for autoplay videos.
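In markup terms, the technique amounts to something like this (the URL is a placeholder, not YouTube’s actual asset):

```html
<!-- Preload the poster image and flag it as high priority… -->
<link rel="preload" as="image" href="/poster-black.jpg" fetchpriority="high">

<!-- …and give the element itself the same priority hint. -->
<img src="/poster-black.jpg" alt="Video poster" fetchpriority="high">
```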
-
What happened when we disabled Google AMP at Tribune Publishing?
Given the higher page RPMs and subscriber conversion rates of a non-AMP page, pulling the plug on AMP looks like an easy win for both programmatic and consumer revenue. And most importantly, we regain full control of the user experience. And that’s perhaps the biggest upside.
It’s no shocker I’ve never been a big fan of AMP. (My first post expressing concern about the approach AMP was taking was literally the day after the initial announcement.)
So naturally, I’m pleased to see folks moving on from it.
-
Fixing Performance Regressions Before they Happen | by Netflix Technology Blog | Jan, 2022 | Netflix TechBlog
This post describes how the Netflix TVUI team implemented a robust strategy to quickly and easily detect performance anomalies before they are released — and often before they are even committed to the codebase.
-
Don’t attach tooltips to document.body - Atif Afzal
Really nice walkthrough about diagnosing and solving slow tooltips.
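The headline fix, sketched below with hypothetical markup: mount the tooltip in a small container near its trigger rather than on document.body, so style and layout recalculation stays local.

```html
<div class="toolbar">
  <button id="save-button">Save</button>
</div>
<script>
  // Append the tooltip next to its trigger instead of to document.body,
  // keeping the affected DOM subtree (and the recalc cost) small.
  const button = document.getElementById('save-button');
  const tooltip = document.createElement('div');
  tooltip.className = 'tooltip';
  tooltip.textContent = 'Save your changes';
  button.parentElement.appendChild(tooltip); // not: document.body.appendChild(tooltip)
</script>
```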
-
The metrics game
Philip takes a trip down memory lane with some fun stories from the early Yahoo! days around performance.
But most importantly, he suggests that the SEO carrot has tipped the focus of performance, and not for the better.
Sites that truly care about performance and the business impact of that performance, worked hard to make their sites faster.
This changed when Google started using speed as a ranking signal.
I made a similar point in a revamped version of my “Performance Budgets that Stick” talk the other week. If we want this burst of performance interest to stick, and to have the impact we want it to have for users, we’re going to need to make it easier for folks to connect the dots between business metrics and performance.
-
Making Reasonable Use of Computer Resources
The computers sitting on our desks are incomprehensibly fast. They can perform more operations in one second than a human could in one hundred years. We live in an era of CPUs that can perform billions of instructions per second, tens of billions if we take multi-cores into account, of memory that can transfer data to the CPU at hundreds of gigabytes per second, of disks that support streaming reads of gigabytes per second. This era of incredibly fast hardware is also the era of programs that take tens of seconds to start from an SSD or NVMe disk; of bloated web applications that take many seconds to show a simple list, even on a broadband connection; of programs that process data at a thousandth of the speed we should expect. Software is laggy and sluggish — and the situation shows little signs of improvement.
-
The performance effects of too much lazy-loading
This post summarizes how we analyzed publicly available web transparency data and ad hoc A/B testing to understand the adoption and performance characteristics of native image lazy-loading. What we found is that lazy-loading can be an amazingly effective tool for reducing unneeded image bytes, but overuse can negatively affect performance. Concretely, our analysis shows that more eagerly loading images within the initial viewport—while liberally lazy-loading the rest—can give us the best of both worlds: fewer bytes loaded and improved Core Web Vitals.
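The recommendation translates to markup roughly like this (URLs are placeholders):

```html
<!-- Load images likely to be in the first viewport eagerly… -->
<img src="/hero.jpg" alt="Hero" loading="eager">

<!-- …and liberally lazy-load everything further down the page. -->
<img src="/gallery-1.jpg" alt="Gallery photo" loading="lazy">
<img src="/gallery-2.jpg" alt="Gallery photo" loading="lazy">
```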
-
Correlating Lighthouse scores to page-level CrUX data - Analysis - HTTP Archive
Surprisingly, the correlations for each CWV are weak to medium strength and for FID it’s actually a negative correlation, meaning that as the Lighthouse score goes up, real-user FID performance actually tends to go down a bit.
This is the kind of analysis I was hoping to see after Pat added CrUX data to WebPageTest.
Most of this lines up with what I’d expect. Cumulative Layout Shift is measured very differently synthetically versus in the CrUX data (particularly before the new windowing approach) and First Input Delay has always seemed to have a very weak connection to Total Blocking Time in my experience. (First Input Delay itself has plenty of limitations, and I’m eager to see it supplanted by something a bit more useful in the future.)
I think many of us have cautioned against leaning too hard on optimizing for your Lighthouse scores, and it’s nice to have evidence as to why. Lighthouse is a great tool, but it works better as a “here’s a list of things you can try to improve” than it does as a goal in and of itself.
-
[Public] The cost of preload - Google Docs
Very good overview of the preload issue from Shubhie. This bit in particular is important to take to heart, I think:
Preload can be avoided in many cases, with alternative strategies such as inlining of critical CSS and inlining font-css.
Most of the time, preload gets used as a (not particularly efficient) band-aid for an underlying issue that is probably better solved in other ways.
Also, this is probably a personal hang-up, but whenever someone from the Chrome team shares a Google Doc full of interesting research and ideas (which is super often) I always feel a little anxiety. I just don’t trust these docs to stick around as long as a blog post.
-
Simple things are complicated: making a show password option - Technology in government
Adding a ‘show password’ option to GOV.UK Accounts seemed like a straightforward task, but the more we looked into it the more complicated and interesting it became. This is how we did it and some of the challenges we faced.
More fodder for my firm belief that the closer you look at anything, the more interesting it becomes.
-
Progress Delayed Is Progress Denied - Infrequently Noted
Apple’s iOS browser (Safari) and engine (WebKit) are uniquely under-powered. Consistent delays in delivery of important features ensure the web can never be a credible alternative to its proprietary tools and App Store.
Heckuva leading assertion from Alex, but he brings some serious data to back it up, including some pretty compelling results from the Web Platform Tests.
There’s a lot of criticism levied at Chrome and how they move through the standards process (or don’t). Some of that criticism is fair, some of it isn’t.
But it’s pretty clear, I think, that we have a mismatch of resources creating an imbalance. On the one hand, we have Google funding the heck out of their web-focused efforts. On the other hand, we have Apple that just never seems willing to invest in it much.
The result isn’t particularly healthy for the web or for anyone who uses it. Alex’s point here rings true:
It’s perverse that users and developers everywhere pay a subsidy for Apple’s under-funding of Safari/WebKit development.
-
Now THAT’S What I Call Service Worker! – A List Apart
The 95th percentile of RUM data is a great place to assess the slowest experiences. In this case, we see that the streaming Service Worker confers a 40% and 51% improvement in FCP and LCP, respectively, over no Service Worker at all. Compared to the “standard” Service Worker, we see a reduction in FCP and LCP by 19% and 43%, respectively.
-
The Almost-Complete Guide to Cumulative Layout Shift
What a fantastic article from Jess Peck on Cumulative Layout Shift. It’s approachable for folks new to the metric, detailed and in-depth, and wildly entertaining (more entertaining than a post this informative has any right to be).
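As a taste of the kind of fix the metric rewards (my example, not necessarily one from the article): giving images intrinsic dimensions lets the browser reserve space before they load, so nothing shifts.

```html
<!-- width/height give the browser an aspect ratio to reserve
     space with, so the image doesn't push content around on load. -->
<img src="/photo.jpg" width="800" height="600" alt="A photo that won't shift the layout">
```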
-
An In-Depth Guide To Measuring Core Web Vitals — Smashing Magazine
An in-depth, well researched guide to measuring Core Web Vitals from Barry (par for the course for him).
-
Maximally optimizing image loading for the web in 2021
A smart little list of optimizations for loading images efficiently. Using a hash of the image’s bytes for automatic cache busting when the image changes is particularly clever.
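The cache-busting idea in practice (my sketch; filename and header are illustrative): bake a digest of the file’s bytes into its URL and cache it forever, so a changed image automatically gets a new URL.

```html
<!-- The filename carries a content hash, so the response can be
     cached indefinitely, e.g. with:
     Cache-Control: public, max-age=31536000, immutable -->
<img src="/img/hero.3fa9c1d2.jpg" alt="Hero">
```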
-
Resize-Resilient `content-visibility` Fixes - Infrequently Noted
Alex posted a follow-up to a prior post on avoiding scrollbar jumping when using `content-visibility`.
It’s clever stuff—the combination of ResizeObservers and IntersectionObservers helping to avoid the shifting around that would occur otherwise.
This doesn’t do much though to eliminate the reservations I have about `content-visibility`. In theory, it’s a great addition, but the fact that it seems to need so much supporting scaffolding to make it usable for a relatively simple use-case makes me feel like it may be missing a few other pieces of native plumbing before it’s really ready for primetime.
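For context, the core of the feature itself is tiny; a minimal sketch (the selector and size estimate are my own placeholders):

```html
<style>
  /* Skip rendering work for off-screen sections, with an estimated
     placeholder size so scroll position doesn't thrash as wildly. */
  .long-article section {
    content-visibility: auto;
    contain-intrinsic-size: auto 500px;
  }
</style>
```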
-
These are called opportunities
Sad, but true words:
For a few months, those who will buy M1 machines will enjoy great responsiveness and blazing start-up time. Some once bloated applications will again behave like most tools should. But soon these metrics will start to degrade. Responsiveness and start-up time will progressively revert to what they used to be and old “non-M1” machines will become even slower than they used to.
For every cycle a hardware engineer saves, a software engineer will add two instructions.
-
Exploring Site Speed Optimisations With WebPageTest and Cloudflare Workers
Super clever stuff from Andy about how he’s using Cloudflare Workers and WebPageTest to test performance optimizations. I’ve done a little of this, but there’s a bunch here I haven’t played with. Going to have to change that for sure!
-
We need more inclusive web performance metrics | Filament Group, Inc.
I’ve been super keen on getting some sort of way to measure when the accessibility tree is ready ever since first chatting about it with Marcy Sutton 5 years ago or so. Scott has a great post here about why it’s important. He’s also filed issues on WebPageTest and Lighthouse to get something added. Hopefully we’ll see something soon!
-
Chromium Blog: The Science Behind Web Vitals
The folks at Chrome, talking about the business impact of hitting their Core Web Vitals thresholds:
We analyzed millions of page impressions to understand how these metrics and thresholds affect users. We found that when a site meets the above thresholds, users are 24% less likely to abandon page loads (by leaving the page before it finishes loading).
We also looked specifically at news and shopping sites, sites whose businesses depend on traffic and task completion, and found similar numbers: 22% less abandonment for news sites and 24% less abandonment for shopping sites.
-
Defining the Core Web Vitals metrics thresholds
Super interesting insight into how the folks over at Google came up with their new Core Web Vitals—including how they figured out what “good” or “poor” looks like, how they chose which percentiles to look at, and more.
-
Prioritizing users in a crisis: Building the California COVID-19 response site
We recognize, of course, that “Always accessible” is not a novel approach. Here in California accessibility is a guiding principle in the state’s digital strategy. And our work is just one part of the state’s larger commitment to ensuring that information and services are accessible.
What is novel is how our team is broadening the definition of accessibility for state government to include performance as a core component. Performance as accessibility.
Our goal is to make COVID19.CA.gov fast and easy to use on any kind of hardware or with any level of bandwidth.
-
How Netflix brings safer and faster streaming experience to the living room on crowded networks using TLS 1.3
Netflix talks about the security and performance implications of rolling out TLS 1.3. Seeing an 8.2% improvement in play delay at the 95th percentile—not too shabby!
-
Pointer Compression in V8 · V8
Detailed post about how V8 used Pointer Compression to reduce heap size by up to 43%, resulting in less CPU usage and less time spent on garbage collection.
It’s…dense. I’m going to likely have to re-read this several times to really understand all the details. Lots of interesting bits here.
-
How COVID-19 is affecting internet performance
Some staggering stats from Fastly showing how traffic and download speed have changed during the COVID-19 pandemic.
Performance has quite literally never been more important.
-
Internet Traffic Surges As People Work From Home : NPR
Visits to news sites went up as much as 60%. And people are spending more time playing online games.
A similar pattern is emerging in the U.S. Cloudflare says Internet traffic jumped 20% on Friday, after President Trump declared the pandemic a national emergency. In hard-hit Seattle, Internet use was up 40% last week compared to January.
-
Brace yourself for slower data speeds - The Economic Times
Latest data put out by the telecom regulator pegs the average monthly wireless data usage per user at 10.37 GB, which analysts say could rise by around 15% in the next two quarters if people continue to work from their homes over a prolonged period.
-
HTTP Caching Tests
Was going to build this myself, but turns out I don’t have to!
A handy suite of tests (from Mark Nottingham) to see how different browsers respond to various caching headers.
-
Understanding Performance Regions - Noteworthy - The Journal Blog
A rather clever way of looking at performance data by breaking it down into histogram “regions”.
-
Want to Improve UI Performance? Start by Understanding Your User – Shopify Engineering
Fantastic post detailing Shopify’s work optimizing their admin pages. There are some good pointers around profiling and optimizing React, as well as a lot of thoughtful insights on designing in-between states.
-
Redux: Lazy loading youtube embeds
Remy building on someone else’s (smart!) post about lazy-loading YouTube embeds.
Gosh I miss when this “someone blogs and then someone else iterates on that on their own blog” thing was more common.
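The general shape of the technique (my sketch; `VIDEO_ID` is a placeholder): render a plain thumbnail, and only create the heavy iframe when someone actually clicks.

```html
<div class="youtube-facade" data-id="VIDEO_ID">
  <img src="https://i.ytimg.com/vi/VIDEO_ID/hqdefault.jpg" alt="Video thumbnail">
</div>
<script>
  // Swap the lightweight thumbnail for the real embed on demand.
  document.querySelectorAll('.youtube-facade').forEach((facade) => {
    facade.addEventListener('click', () => {
      const iframe = document.createElement('iframe');
      iframe.src = `https://www.youtube.com/embed/${facade.dataset.id}?autoplay=1`;
      iframe.allow = 'autoplay';
      facade.replaceChildren(iframe);
    }, { once: true });
  });
</script>
```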
-
When should you be using Web Workers? — DasSur.ma
Surma argues, compellingly, for why web workers need to take a more prominent role in JS-based applications. It’s not just about the raw performance benefits, but the inclusivity that good performance brings.
Unless a globally launched framework labels itself as exclusively targeting the users of the Wealthy Western Web, it has a responsibility to help developers target every phone on The Widening Performance Gap™️ spectrum.
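For the unfamiliar, the basic mechanics are tiny (a sketch; `/worker.js` and the work itself are hypothetical):

```html
<script>
  // Main thread: hand expensive work to a worker so the UI stays responsive.
  const worker = new Worker('/worker.js');
  worker.postMessage({ task: 'expensive-calculation' });
  worker.onmessage = (event) => {
    console.log('Result computed off the main thread:', event.data);
  };
</script>
<!-- /worker.js would contain something like:
     self.onmessage = (event) => {
       const result = doExpensiveWork(event.data); // hypothetical
       self.postMessage(result);
     };
-->
```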
-
Why we focus on frontend performance - Technology in government
For government, GOV.UK is often the only place a user can get information. If the website were to perform badly, we become a single point of failure.
Great rundown of why performance is so important to GOV.UK and how the context of their visitors can vary dramatically, even within the same city.
-
AddyOsmani.com - Native image lazy-loading for the web!
In this post, we’ll look at the new loading attribute which brings native <img> and <iframe> lazy-loading to the web!
Exciting to finally see this ship! Folks have been asking for a standards-based way to support lazy-loading images for years.
Gives me hope that maybe, someday, we’ll have element queries.
-
Who has the fastest website in F1? - JakeArchibald.com
I always like seeing how other folks handle performance audits. Here, Jake walks through 10 F1 sites, auditing them primarily with WebPageTest and a smattering of Chrome DevTools.
-
AddyOsmani.com - JavaScript Loading Priorities in Chrome
Handy little reference from Addy Osmani showing how Chrome handles JavaScript scheduling.
-
Preloading Fonts and the Puzzle of Priorities - Andy Davies
I’ve been using preload with clients over the last few years but I have never been completely satisfied with the results. I’ve also seen some things I hadn’t quite expected in page load waterfalls so decided to dig deeper.
Excellent work digging deeper into preload by Mr. Davies.
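One classic gotcha in this territory, for reference (a generic sketch, not necessarily lifted from Andy’s post; the path is a placeholder): fonts are fetched in anonymous CORS mode, so a font preload must carry `crossorigin` even for same-origin files, or the preloaded response goes unused and the font is downloaded twice.

```html
<!-- The crossorigin attribute makes the preload's request mode match
     the @font-face fetch, so the two share one download. -->
<link rel="preload" as="font" type="font/woff2"
      href="/fonts/example.woff2" crossorigin>
```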
-
SpeedCurve | JavaScript Dominates Browser CPU
Ten years ago the network was the main bottleneck. Today, the main bottleneck is JavaScript.
-
What Hooks Mean for Vue | CSS-Tricks
A very approachable explanation from Sarah about what Hooks are and the problems they solve for Vue.
-
Performance Calendar » HTTP/2 Prioritization
Pat has been doing some intense research around HTTP/2 prioritization, which led to this magnificent post discussing how each browser handles priorities (not well, for the most part). It also provides a handy test page for checking how CDNs and servers are doing.
Andy has already taken that page and started tracking how CDNs are doing (again, not well for most of them).
-
What If? – CSS Wizardry – CSS Architecture, Web Performance Optimisation, and more, by Harry Roberts
While ever you build under the assumption that things will always work smoothly, you’re leaving yourself completely ill-equipped to handle the scenario that they don’t.
-
Second Meaningful Content: the Worst Performance Metric | Filament Group, Inc., Boston, MA
I rather like Scott’s term for what happens when you use client-side JavaScript for A/B testing.
This pattern leads to such a unique effect on page load that at last week’s Perf.Now() Conference, I coined a new somewhat tongue-in-cheek performance metric called “Second Meaningful Content,” which occurs long after the handy First Meaningful Content (or Paint) metric, which would otherwise mark the time at which a page starts to appear usable.
-
Performance Postmortem: Mapbox Studio
Lovely performance “postmortem” from Eli Fitch about how Mapbox got their first meaningful paint down from 4.7s to 1.9s.
Some good insights into technical optimizations, but as always, the cultural aspects are the most difficult–and the most important.
Nurturing cultural awareness and enthusiasm for building fast, snappy, responsive, tactile products is arguably the most effective performance improvement you can make, but can be the most challenging, and requires the most ongoing attention.
-
The Three Types of Performance Testing – CSS Wizardry – CSS Architecture, Web Performance Optimisation, and more, by Harry Roberts
I like Harry’s categorization for performance testing:
I try to distill the types of testing that we do into three distinct categories: Proactive, Reactive, and Passive.
I’ve been using “Active” and “Passive” myself and found that it really helps companies better understand why having both synthetic and RUM monitoring in place is important. I really like the way Harry breaks that “Active” category out further based on whether the tests are run proactively or reactively.
-
Making GOV.UK pages load faster and use less data - Technology in government
Removing the Base64 encoded font has reduced the total page weight by 16% (75 KB) per request (assuming no caching). This may not sound like a huge difference, but GOV.UK receives approximately 48 million visitors per month, so this adds up to mobile users saving approximately 800 GB per month, cumulatively. This is especially important to users on older mobile devices and expensive data plans.
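The change itself is simple to picture (my sketch; the font URL is a placeholder): instead of inlining the font’s bytes into the CSS, reference the file so it can be cached, and skipped, independently.

```html
<style>
  /* Instead of embedding the font in the stylesheet as a data: URI…
     src: url('data:font/woff2;base64,…');
     …reference the file so every page view doesn't re-ship its bytes: */
  @font-face {
    font-family: Example;
    src: url('/fonts/example.woff2') format('woff2');
  }
</style>
```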
-
Code and Dev Chat - Paul Lewis
You can take Paul out of dev-rel, but you can’t take the dev-rel out of Paul. Or something.
Really enjoying Paul’s videos. They’re entertaining, really high quality and super useful.
-
Accurately measuring layout on the web | Read the Tea Leaves
Clever stuff from Nolan about trying to measure the complete cost of a component—not just the JS execution.
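The trick, as I understand it (a sketch; `renderComponent` is a hypothetical stand-in): `requestAnimationFrame` fires before style and layout, so a task queued from inside it runs after the browser has done the rendering work you want to capture.

```html
<script>
  performance.mark('start');
  renderComponent(); // hypothetical: whatever injects the component into the DOM

  requestAnimationFrame(() => {
    // rAF runs before style/layout; a task queued here runs after them.
    setTimeout(() => {
      performance.mark('end');
      performance.measure('component-including-layout', 'start', 'end');
    }, 0);
  });
</script>
```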
-
How to Build a Low-Tech Website? : LOW←TECH MAGAZINE
This was a really fascinating post about trying to make a web site—from design down to the hardware that powers it—as energy efficient as possible. This is certainly at the extreme end of optimization, which is what made it so interesting.
They do admit that since the server is powered via solar, it’s possible they may have some downtime. I bet they could provide themselves with another layer of protection with a small service worker.
-
Building the Google Photos Web UI – Google Design – Medium
What a fantastic deep-dive! Antin walks through how Google Photos built a performant photo grid in great detail. There’s a lot of careful thinking in here and some clever solutions.
-
Speeding Up NerdWallet 🚗💨 – Francis John – Medium
What a great overview of how NerdWallet made their site significantly faster (6s faster time to visually complete as measured on a 3G network) in the last 6 months or so.
-
DasSur.ma – Layers and how to force them
Surma talks about the different ways of forcing an element onto its own layer, and—most interestingly—the side-effects of each.
-
preload with font-display: optional is an Anti-pattern—zachleat.com
As per his usual, good advice from Zach about loading fonts: this time, advising not to pair `font-display: optional` with `preload`.
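The shape of the conflict, as I read it (the font URL is a placeholder): `optional` tells the browser it’s free to skip the font entirely, while a preload forces a high-priority download of it anyway.

```html
<style>
  @font-face {
    font-family: Example;
    src: url('/fonts/example.woff2') format('woff2');
    font-display: optional; /* the browser may skip this font entirely */
  }
</style>
<!-- The advice: when using font-display: optional, don't also ship
     <link rel="preload" as="font" href="/fonts/example.woff2" crossorigin> -->
```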
-
HTTP Heuristic Caching (Missing Cache-Control and Expires Headers) Explained – Paul Calvano
Mr. Calvano with a really good explanation of HTTP Caching. Particularly interesting to me was the bit about the heuristic cache limits of different browsers—something I had never really dug into.
In the Chromium source as well as the Webkit source, we can see that the lifetimes.freshness variable is set to 10% of the time since the object was last modified (provided that a Last-Modified header was returned). The 10% since last modified heuristic is the same as the suggestion from the RFC. When we look at the Firefox source code, we can see that the same calculation is used, however Firefox uses the std::min() C++ function to select the lesser of the 10% calculation or 1 week.
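In other words, a response last modified ten days ago and served without explicit caching headers would be treated as fresh for roughly one day (10% of ten days), with Firefox additionally capping the result at one week.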
-
Maybe you don't need Rust and WASM to speed up your JS
An incredibly detailed walkthrough of optimizing the heck out of some JavaScript. Some really impressive gains here and lots of great, low-level information.
-
Using perf.html and Gecko's profiler for perf analysis (YouTube playlist)
The folks at Mozilla have been super busy making some fantastic improvements to Firefox. Among other things, their performance profiling tools have gotten pretty darn slick. Greg Tatum made a playlist of a bunch of short videos demonstrating how to use perf.html and the Gecko Profiler to inspect the performance of a site or application.
-
Device Intervention - Ethan Marcotte
As usual, Ethan makes a lot of sense in this post about how the way we build is impacted by the environment in which we build:
In our little industry, we often work on decent hardware, on reliable networks. But according to Pew Research, thirty percent of Americans don’t have broadband at home. One in ten American adults are smartphone-only internet users, while 13% of American adults don’t use the internet at all.
Meanwhile, we make mobile-friendly websites with widescreen devices, using broadband to design experiences for slow, unstable networks. In a lot of ways, we’re outliers among the people we’re designing for.
-
Implementing 'Save For Offline' with Service Workers
Una recently added a “Save Offline” button to her blog posts that gives users control over whether an article will be saved offline or not. There was some recent discussion prompted by Nicolas Hoizey about how much data is too much to save offline. Giving users control (whether on an individual post basis or in bulk) seems like one way to deal with that question.
-
Saving you bandwidth through machine learning - Google
The smart folks at Google are now using a technology called RAISR to shave up to 75% off the file size of the images they display. It uses machine learning to enable it to be much more intelligent about the upsampling methods applied to images. Clever stuff!
-
How we made diff pages three times faster - GitHub Engineering
Pretty impressive investigation and creative work done by the GitHub folks to make their diff pages, some of the slowest on the site, three times faster.
-
What we learnt from our mistakes in 2016 - The Guardian
I love companies that are as invested in giving back to the community as they are in doing quality web work. The Guardian is certainly one of them, and their latest post is a great example. They walk through several errors and mistakes they made on their site this year, why those mistakes were made, and how they fixed them.
-
A Tale of Four Caches - Performance Calendar
Great introduction to the various different types of browser caches by Yoav Weiss, told as a tale about Questy’s Journey. The illustrations by Yoav’s daughter, Yaara, are also wonderful.
-
Animation in Design Systems - 24 Ways
PermalinkSarah has written a really useful post about the different roles animation can play. Naturally, I was really happy to see her talking about the impact on perceived performance:
Animation also aids in perceived performance. Viget conducted a study where they measured user engagement with a standard loading GIF versus a custom animation. Customers were willing to wait almost twice as long for the custom loader, even though it wasn’t anything very fancy or crazy. Just by showing their users that they cared about them, they stuck around, and the bounce rates dropped.
In most cases, I’d argue, it would be even better to provide some sort of feedback to the user about what is happening in the background, but the same general idea applies: taking the time to design for when processes take too long is important.
-
Front-End Performance Checklist - Smashing Magazine
Vitaly published a lengthy checklist of performance optimizations, which I was more than happy to review. There are PDF and Apple Pages versions of the list available, if that’s your cup of tea.