Tag: Performance
-
Preloading Fonts and the Puzzle of Priorities - Andy Davies
I’ve been using preload with clients over the last few years but I have never been completely satisfied with the results. I’ve also seen some things I hadn’t quite expected in page load waterfalls so decided to dig deeper.
Excellent work digging deeper into preload by Mr. Davies.
-
SpeedCurve | JavaScript Dominates Browser CPU
Ten years ago the network was the main bottleneck. Today, the main bottleneck is JavaScript.
-
What Hooks Mean for Vue | CSS-Tricks
A very approachable explanation from Sarah about what Hooks are and the problems they solve for Vue.
-
Performance Calendar » HTTP/2 Prioritization
Pat has been doing some intense research around HTTP/2 prioritization, which led to this magnificent post discussing how each browser handles priorities (not well, for the most part). It also provides a handy test page for checking how CDNs and servers are doing.
Andy has already taken that page and started tracking how CDNs are doing (again, not well for most of them).
-
What If? – CSS Wizardry – CSS Architecture, Web Performance Optimisation, and more, by Harry Roberts
While ever you build under the assumption that things will always work smoothly, you’re leaving yourself completely ill-equipped to handle the scenario that they don’t.
-
Second Meaningful Content: the Worst Performance Metric | Filament Group, Inc., Boston, MA
I rather like Scott’s term for what happens when you use client-side JavaScript for A/B testing.
This pattern leads to such a unique effect on page load that at last week’s Perf.Now() Conference, I coined a new somewhat tongue-in-cheek performance metric called “Second Meaningful Content,” which occurs long after the handy First Meaningful Content (or Paint) metric, which would otherwise mark the time at which a page starts to appear usable.
-
Performance Postmortem: Mapbox Studio
Lovely performance “postmortem” from Eli Fitch about how Mapbox dropped their first meaningful paint from 4.7s to 1.9s.
Some good insights into technical optimizations, but as always, the cultural aspects are the most difficult–and the most important.
Nurturing cultural awareness and enthusiasm for building fast, snappy, responsive, tactile products is arguably the most effective performance improvement you can make, but can be the most challenging, and requires the most ongoing attention.
-
The Three Types of Performance Testing – CSS Wizardry – CSS Architecture, Web Performance Optimisation, and more, by Harry Roberts
I like Harry’s categorization for performance testing:
I try to distill the types of testing that we do into three distinct categories: Proactive, Reactive, and Passive.
I’ve been using “Active” and “Passive” myself and found that it really helps companies better understand why having both synthetic and RUM monitoring in place is important. I really like the way Harry breaks that “Active” category out further based on whether the tests are run proactively or reactively.
-
Making GOV.UK pages load faster and use less data - Technology in government
Removing the Base64 encoded font has reduced the total page weight by 16% (75 KB) per request (assuming no caching). This may not sound like a huge difference, but GOV.UK receives approximately 48 million visitors per month, so this adds up to mobile users saving approximately 800 GB per month, cumulatively. This is especially important to users on older mobile devices and expensive data plans.
-
Code and Dev Chat - Paul Lewis
You can take Paul out of dev-rel, but you can’t take the dev-rel out of Paul. Or something.
Really enjoying Paul’s videos. They’re entertaining, really high quality and super useful.
-
Accurately measuring layout on the web | Read the Tea Leaves
Clever stuff from Nolan about trying to measure the complete cost of a component—not just the JS execution.
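A sketch of that measurement idea (the helper name and API here are illustrative, not Nolan’s actual code): requestAnimationFrame fires before the browser does style, layout, and paint for the next frame, so a task queued from inside it runs after that rendering work has finished.

```javascript
// Illustrative helper for timing the full cost of a DOM change, not just
// the synchronous JS. requestAnimationFrame fires before the browser does
// style/layout/paint for the next frame, so a setTimeout queued from
// inside it runs after that rendering work is done.
function measureRender(mutate, done) {
  const start = performance.now();
  mutate(); // e.g. insert the component into the DOM
  requestAnimationFrame(() => {
    // We're now just before layout/paint; queue a task for just after it.
    setTimeout(() => done(performance.now() - start), 0);
  });
}
```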
-
How to Build a Low-Tech Website? : LOW←TECH MAGAZINE
This was a really fascinating post about trying to make a website—from design down to the hardware that powers it—as energy efficient as possible. This is certainly at the extreme end of optimization, which is what made it so interesting.
They do admit that since the server is solar-powered, it’s possible they may have some downtime. I bet they could provide themselves with another layer of protection with a small service worker.
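A minimal sketch of what that extra layer could look like (the cache name and offline.html URL are my assumptions, not anything from the article): pre-cache a fallback page, then serve it whenever the origin is unreachable.

```javascript
// Hypothetical service worker sketch (sw.js): serve a cached fallback page
// when the solar-powered server is unreachable. OFFLINE_CACHE and
// OFFLINE_URL are assumed names, not from the article.
const OFFLINE_CACHE = 'offline-v1';
const OFFLINE_URL = '/offline.html';

// `self` is the ServiceWorkerGlobalScope in a browser; guard so the file
// can also be parsed outside one.
if (typeof self !== 'undefined' && self.addEventListener) {
  self.addEventListener('install', (event) => {
    // Pre-cache the fallback page at install time.
    event.waitUntil(
      caches.open(OFFLINE_CACHE).then((cache) => cache.add(OFFLINE_URL))
    );
  });

  self.addEventListener('fetch', (event) => {
    // Network first; if the server is down, fall back to the cached page.
    if (event.request.mode === 'navigate') {
      event.respondWith(
        fetch(event.request).catch(() => caches.match(OFFLINE_URL))
      );
    }
  });
}
```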
-
Building the Google Photos Web UI – Google Design – Medium
What a fantastic deep-dive! Antin walks through how Google Photos built a performant photo grid in great detail. There’s a lot of careful thinking in here and some clever solutions.
-
Speeding Up NerdWallet 🚗💨 – Francis John – Medium
What a great overview of how NerdWallet made their site significantly faster (6s faster time to visually complete as measured on a 3G network) in the last 6 months or so.
-
DasSur.ma – Layers and how to force them
Surma talks about the different ways of forcing an element onto its own layer, and—most interestingly—the side effects of each.
-
preload with font-display: optional is an Anti-pattern—zachleat.com
As per usual, good advice from Zach about loading fonts: this time, advising not to pair font-display: optional with preload.
-
HTTP Heuristic Caching (Missing Cache-Control and Expires Headers) Explained – Paul Calvano
Mr. Calvano with a really good explanation of HTTP caching. Particularly interesting to me was the bit about the heuristic cache limits of different browsers—something I had never really dug into.
In the Chromium source as well as the WebKit source, we can see that the lifetimes.freshness variable is set to 10% of the time since the object was last modified (provided that a Last-Modified header was returned). The 10% since last modified heuristic is the same as the suggestion from the RFC. When we look at the Firefox source code, we can see that the same calculation is used; however, Firefox uses the std::min() C++ function to select the lesser of the 10% calculation or 1 week.
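The heuristic reads roughly like this in code (a sketch of the rule described above, not the browsers’ actual implementations):

```javascript
// Sketch of heuristic freshness: with no Cache-Control or Expires header,
// a response is considered fresh for 10% of the time since Last-Modified.
// Firefox additionally caps the result at one week; Chromium/WebKit don't.
const ONE_WEEK_MS = 7 * 24 * 60 * 60 * 1000;

function heuristicFreshness(lastModifiedMs, nowMs, { capAtWeek = false } = {}) {
  const sinceModified = Math.max(0, nowMs - lastModifiedMs);
  const tenPercent = sinceModified / 10;
  return capAtWeek ? Math.min(tenPercent, ONE_WEEK_MS) : tenPercent;
}
```

For a resource last modified 100 days ago, the uncapped heuristic yields 10 days of freshness, while the capped (Firefox-style) calculation yields 7 days.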
-
Maybe you don't need Rust and WASM to speed up your JS
An incredibly detailed walkthrough of optimizing the heck out of some JavaScript. Some really impressive gains here and lots of great, low-level information.
-
Using perf.html and Gecko's profiler for perf analysis (YouTube playlist)
The folks at Mozilla have been super busy making some fantastic improvements to Firefox. Among other things, their performance profiling tools have gotten pretty darn slick. Greg Tatum made a playlist of a bunch of short videos demonstrating how to use perf.html and the Gecko Profiler to inspect the performance of a site or application.
-
Device Intervention - Ethan Marcotte
As usual, Ethan makes a lot of sense in this post about how the way we build is impacted by the environment in which we build:
In our little industry, we often work on decent hardware, on reliable networks. But according to Pew Research, thirty percent of Americans don’t have broadband at home. One in ten American adults are smartphone-only internet users, while 13% of American adults don’t use the internet at all.
Meanwhile, we make mobile-friendly websites with widescreen devices, using broadband to design experiences for slow, unstable networks. In a lot of ways, we’re outliers among the people we’re designing for.
-
Implementing 'Save For Offline' with Service Workers
Una recently added a “Save Offline” button to her blog posts that gives users control over whether an article will be saved offline or not. There was some recent discussion, prompted by Nicolas Hoizey, about how much data is too much to save offline. Giving users control (whether on an individual post basis or in bulk) seems like one way to deal with that question.
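A page-side sketch of the opt-in idea; the button selector, cache name, and asset list are all hypothetical, and Una’s actual implementation may differ.

```javascript
// Hypothetical "Save offline" button: cache the current article's assets
// only when the reader explicitly asks for it.
const ARTICLE_CACHE = 'saved-articles-v1';

// Assumed asset list for an article; a real page would know its own assets.
function assetsFor(path) {
  return [path, path + '/index.json'];
}

// Guard so the file can be parsed outside a browser.
if (typeof document !== 'undefined') {
  const button = document.querySelector('.save-offline');
  if (button) {
    button.addEventListener('click', async () => {
      // Fetch and store the article only on explicit opt-in.
      const cache = await caches.open(ARTICLE_CACHE);
      await cache.addAll(assetsFor(location.pathname));
    });
  }
}
```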
-
Saving you bandwidth through machine learning - Google
The smart folks at Google are now using a technology called RAISR to shave up to 75% off the file size of the images they display. It uses machine learning to be much more intelligent about the upsampling methods applied to images. Clever stuff!
-
How we made diff pages three times faster - GitHub Engineering
Pretty impressive investigation and creative work done by the GitHub folks to make their diff pages, some of the slowest on the site, three times faster.
-
What we learnt from our mistakes in 2016 - The Guardian
I love companies that are as invested in giving back to the community as they are in doing quality web work. The Guardian is certainly one of them, and their latest post is a great example. They walk through several errors and mistakes they made on their site this year, why those mistakes were made, and how they fixed them.
-
A Tale of Four Caches - Performance Calendar
Great introduction to the different types of browser caches by Yoav Weiss, told as a tale about Questy’s Journey. The illustrations by Yoav’s daughter, Yaara, are also wonderful.
-
Animation in Design Systems - 24 Ways
Sarah has written a really useful post about the different roles animation can play. Naturally, I was really happy to see her talking about the impact on perceived performance:
Animation also aids in perceived performance. Viget conducted a study where they measured user engagement with a standard loading GIF versus a custom animation. Customers were willing to wait almost twice as long for the custom loader, even though it wasn’t anything very fancy or crazy. Just by showing their users that they cared about them, they stuck around, and the bounce rates dropped.
In most cases, I’d argue, it would be even better to provide some sort of feedback to the user about what is happening in the background, but the same general idea applies: taking the time to design for when processes take too long is important.
-
Front-End Performance Checklist - Smashing Magazine
Vitaly published a lengthy checklist of performance optimizations, which I was more than happy to review. PDF and Apple Pages versions of the list are available if that’s your cup of tea.