
    How To Boost Media Performance On A Budget — Smashing Magazine


    About The Author

    Akshay Ranganath is a Solution Architect at Cloudinary, responsible for bringing customers on board and helping them create workflows for media management and media …

    How do we get media performance right while staying within performance budgets? Let’s take a look at recent stats and data around performance budgets, video-playback performance issues, and some techniques and tools to address those issues.

    American scholar Mason Cooley deftly described a hard fact of life: “A budget takes the fun out of money.” Unquestionably, media enlivens websites, adding appeal, excitement, and intrigue, not to mention enticements to stay on a page and revisit it frequently. However, just as out-of-control spending bodes ill in the long run, so can unbudgeted digital media decimate site performance.

    A case in point: a page-load slowdown of a mere second could cost Amazon $1.6 billion in annual sales. Of the many factors that affect page-load speed, media is a significant one. Hence the dire need for prioritizing optimization of media. By spending your money right on that task and budgeting your media, you’ll reap significant savings and benefits in the long run.

    A web perf summit, with a slide showing evidence of the positive impact of performance, and an attendee arguing it’s all a big hoax and we create a better user experience for nothing.
    Have you been in the same situation as well? Illustration by Joel Pett, adapted by Jake Archibald.

    Performance Budgets

    “A performance budget is just what it sounds like: you set a ‘budget’ on your page and do not allow the page to exceed that. This may be a specific load time, but it is usually an easier conversation to have when you break the budget down into the number of requests or size of the page.”

    Tim Kadlec

    A performance budget as a mechanism for planning a web experience and preventing performance decay might consist of the following yardsticks:

    • Overall page weight,
    • Total number of HTTP requests,
    • Page-load time on a particular mobile network,
    • First Input Delay (FID),
    • First Contentful Paint (FCP),
    • Cumulative Layout Shift (CLS),
    • Largest Contentful Paint (LCP).
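Several of these yardsticks can be enforced automatically. For example, Lighthouse’s LightWallet feature accepts a `budget.json` file; the values below are purely illustrative, not recommendations (resource sizes are in kilobytes, timings in milliseconds):

```json
[
  {
    "path": "/*",
    "timings": [
      { "metric": "largest-contentful-paint", "budget": 2500 },
      { "metric": "first-contentful-paint", "budget": 1800 }
    ],
    "resourceSizes": [
      { "resourceType": "image", "budget": 300 },
      { "resourceType": "media", "budget": 500 },
      { "resourceType": "total", "budget": 1000 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 10 }
    ]
  }
]
```

With a file like this in place, Lighthouse flags any audit run where the page exceeds a budgeted size, count, or timing.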

    Vitaly Friedman has an excellent checklist that describes the components that affect web performance along with useful tips on optimization techniques. Becoming familiar with those components will enable you to set performance goals.

    With clearly documented performance goals, various teams can have meaningful conversations about the optimal delivery of content. For example, a budget can help them decide if a page should contain five images — or three images and one video — and still remain within the planned limits.

    Performance budget, as used in performance-monitoring tools such as SpeedCurve.

    However, having a performance budget as a standalone metric might not be of much help. That’s why we must correlate performance to organizational goals.

    Business Performance

    If you splurge bytes on nonoptimal videos and images, the rich-media experience will not be so rich anymore. An organization exists to achieve outcomes, such as enticing people to buy, educating them, motivating them, or seeking help and volunteers. Anyone with a web presence would do well to understand how various performance measures affect business metrics.

    WPOStats highlights hundreds of case studies showing how a drop in performance — from a few hundred milliseconds to several seconds — can result in a massive drop in annual sales. Drawing that kind of relationship greatly helps track the effect of performance on business and, ultimately, build a performance culture in organizations.

    Similarly, slow sites can have a dramatic impact on conversion. A tough challenge online businesses face is finding the right balance between engaging the audience and staying within the performance budget.

    It’s not surprising then that a critical component for audience engagement is optimized visual media, e.g. a captivating video that weaves a story about your product or service along with relevant, interesting, and appealing visuals.

    According to MIT neuroscientists, our brain can absorb and understand visual media in as little as 13 milliseconds, whereas text can take the average reader over 3.3 minutes to comprehend, often after re-reading it and cross-referencing other sources. No wonder, then, that microvideo content (usually just 10–20 seconds long) often delivers big engagement and conversion gains.

    Appeal Of Videos

    While shopping online, we expect to see detailed product images. Over the years, I’ve also come to prefer browsing products that are complemented by videos that show, for example, how to use or assemble the product, or that demonstrate real-life use cases.

    Apart from my personal experience, a lot of research attests to the importance of video content:

    • 96% of consumers find videos helpful when making online purchasing decisions.
    • 79% of online shoppers prefer to watch a video for information on a product rather than reading the text on a webpage.
    • The right product video can raise conversions by over 80%.

    Speaking about the delivery of videos on the web,

    “The average video weight is increasing dramatically every year, more so on mobile than on desktop. In some cases, that may be warranted since mobile devices often have high-resolution screens, but it may also be due to a lack of ability to offer different video sizes using HTML alone. Many large videos on the web are hand-placed in marketing pages and don’t have sophisticated media servers to deliver appropriate sizes, so I hope in the future we’ll see similar simple HTML features for video delivery that we see in responsive images.”

    Scott Jehl

    The same sentiment was conveyed by Conviva’s Q4 2020 State of Streaming (registration required), which noted that mobile phones saw 20% more buffering issues, a 19% higher video-start failure rate, and a 5% longer start time than other devices.

    Apart from rendering troubles, video delivery can also raise bandwidth costs, especially if you cannot deliver the browser’s optimal formats. Also, if you are not using a content delivery network (CDN), or multiple CDNs, to map users to the closest edge regions for reduced latency, the resulting suboptimal routing might slow down the start of the video.

    Similarly, unoptimized images were the leading cause of page bloat. According to the Web Almanac, the difference in image bytes sent to mobile and desktop devices is very small, which amounts to a further waste of bandwidth for devices that don’t really need all the extra bytes.

    Doubtless, going overboard with engaging yet unoptimized content hurts business goals, and that’s where the fine art of balancing comes into play.

    The Art Of Balancing Performance With Media Content

    Even though rich media can promote user engagement, we need to balance the cost of delivering it against website performance and business goals. One alternative is to host and deliver video through a third party like YouTube or Vimeo.

    Despite the bandwidth savings, however, that approach comes at a cost. As the content owner, you can’t build a fully customized branded experience or offer personalization. And, of course, you still need to host and deliver your images yourself.

    You don’t have to offload your content, though; other options are available. Consider revamping your system for optimal media delivery by doing the following:

    Understand your current usage

    Study the weight of your webpages and the size of their media assets. Web-research expert Tammy Everts recommends ensuring that pages are less than 1 MB in size for mobile and less than 2 MB for everything else.
    In addition, identify the resources that are displayed on critical pages.

    For example, can you replace a paragraph of text and the associated images with a short video? How would that decision affect your business goals? At this stage, you might need to review your Real User Monitoring (RUM) and Analytics and identify the critical pages that lead to higher conversion and engagement rates.

    Also, be sure to synthetically track Google’s Core Web Vitals (CWVs) with tools like Lighthouse, and to measure them in the field through real-user monitoring (RUM) data such as the Chrome UX Report (CrUX). Since CWVs are also a ranking signal for Google, it makes sense to monitor and optimize for those metrics: Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS).
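As a sketch of how such tracking might feed back into a budget, here is a minimal example that compares collected metric values (however you gather them, e.g. aggregated from RUM beacons or the web-vitals library) against thresholds. The function and metric names are assumptions; the thresholds loosely follow Google’s published “good” ranges:

```javascript
// Hypothetical budget thresholds, loosely following Google's published
// "good" Core Web Vitals ranges (LCP/FID in milliseconds, CLS unitless).
const budget = { lcp: 2500, fid: 100, cls: 0.1 };

// Compare collected metric values against the budget and list any overruns.
function checkBudget(metrics, limits) {
  return Object.entries(limits)
    .filter(([name, limit]) => metrics[name] > limit)
    .map(([name, limit]) => `${name} over budget: ${metrics[name]} > ${limit}`);
}

console.log(checkBudget({ lcp: 3100, fid: 80, cls: 0.05 }, budget));
// → [ 'lcp over budget: 3100 > 2500' ]
```

A check like this could run in CI against Lighthouse output, or against field percentiles pulled from your RUM provider.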

    Serve the right format

    Serve images or videos in the most appropriate format in terms of size and resolution for the viewing device or browser. You might need an image CDN for that purpose. Alternatively, create variants like WebM, AVIF, JPEG-XL, HEIC, etc. and selectively serve the right content type based on the requesting User-Agent and Accept headers.
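To illustrate the header-based approach, here is a simplified sketch of choosing an image format from the `Accept` header. The preference list and the naive substring matching are assumptions; production code should parse quality (`q=`) values properly:

```javascript
// Simplified server-side content negotiation: pick the best image format
// the browser advertises in its Accept header, best-first.
const preferredFormats = ['image/avif', 'image/webp'];

function pickImageFormat(acceptHeader) {
  for (const type of preferredFormats) {
    if (acceptHeader.includes(type)) return type;
  }
  return 'image/jpeg'; // universally supported fallback
}

// A Chromium-style Accept header advertises AVIF and WebP support:
console.log(pickImageFormat('image/avif,image/webp,image/apng,*/*;q=0.8'));
// → image/avif
```

Remember to emit `Vary: Accept` on such responses so caches keep the variants separate.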

    For one-off conversions, you can try standalone converter tools.
    A related practice is to convert animated GIFs to videos, which are typically far smaller for the same visual result. Want to try setting up a workflow to handle video publishing? See the great tips in the article Optimizing Video For Size And Quality.

    Serve the right size

    Over 41% of images on mobile devices are improperly sized. So, rather than serving a fixed width, serve images and videos sized to fit the container, with tools like lazysizes. Better yet, AI tools that detect areas of interest while cropping images could save you a load of time and effort. You could also leverage native lazy loading for images below the fold.
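As a sketch of what right-sizing plus native lazy loading can look like in markup (the file names and breakpoints here are hypothetical), the browser picks the smallest candidate that fills the container and defers loading until the image nears the viewport:

```html
<!-- Illustrative responsive image: srcset offers size candidates,
     sizes describes the layout, loading="lazy" defers off-screen fetches. -->
<img
  src="product-800.jpg"
  srcset="product-400.jpg 400w, product-800.jpg 800w, product-1600.jpg 1600w"
  sizes="(max-width: 600px) 100vw, 50vw"
  loading="lazy"
  alt="Product photo">
```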

    Add subtitles to your videos

    Almost 85% of videos are played without sound. Adding subtitles not only provides a more accessible experience but also captures audience attention and boosts engagement. However, transcribing videos can be tedious; instead, you can automate the workflow with an AI-based transcription service and then refine its output.
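In markup, subtitles attach to a video with a `<track>` element; the file names below are hypothetical, and the WebVTT captions file could be the refined output of the transcription service mentioned above:

```html
<!-- captions.vtt is a hypothetical WebVTT file produced by transcription. -->
<video controls>
  <source src="product-demo.mp4" type="video/mp4">
  <track src="captions.vtt" kind="captions" srclang="en" label="English" default>
</video>
```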

    Deliver through multiple CDNs

    CDNs can alleviate last-mile latency, shorten a video’s start time, and potentially reduce buffering issues. According to a study by Citrix, a multi-CDN strategy can reduce latency even further and offer continued availability in case of localized outages in the CDN’s edge nodes.

    Instead of juggling multiple discrete tools, you could explore a product like Cloudinary’s Media Optimizer, which effectively and efficiently optimizes media, delivering the right format and quality through multi-CDN edge nodes. In other words, Media Optimizer optimizes both quality and size, serving high visual fidelity in small files.

    Progressively render video

    Auto-playing preview videos on YouTube has been shown to increase video watch time by over 90%. Still, video autoplay has few benefits and plenty of drawbacks, so be careful about when to use it and when not to. At a minimum, give users the option to pause the video.

    A good way to stay within the page-size budget is to first serve only AI-created video previews and poster images, loading the full video only if the user clicks it. That way, you can eliminate unnecessary downloads and accelerate page loads.

    Alternatively, load a preview video at the beginning and let the player autoplay the full version. Once the preview completes, the player checks the device’s connection type with the Network Information API and, if the user has good connectivity, swaps the source from the preview to the actual video.
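A minimal sketch of that decision logic might look like this. The function name and the “only `'4g'` counts as good connectivity” threshold are assumptions to tune; in the browser, the input would come from `navigator.connection.effectiveType`:

```javascript
// The Network Information API reports an effective connection type bucket:
// 'slow-2g' | '2g' | '3g' | '4g'. Treat only '4g' as good enough to swap in
// the full video; slower buckets keep the lightweight preview.
function shouldLoadFullVideo(effectiveType) {
  return effectiveType === '4g';
}

// Hypothetical browser wiring (player and fullVideoUrl are assumed names):
// const type = navigator.connection ? navigator.connection.effectiveType : '4g';
// if (shouldLoadFullVideo(type)) { player.src = fullVideoUrl; }
console.log(shouldLoadFullVideo('4g')); // true
console.log(shouldLoadFullVideo('3g')); // false
```

Note that browser support for the Network Information API is uneven, so the fallback path (keeping the preview) should always work on its own.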

    You can check a sample page for a demo.
    Here’s a tip: since CDNs can detect network connection types more reliably, your production-quality code could leverage the CDN to detect network speed, based on which your client code could progressively load the long-form video.

    Wrapping Up

    Down the road, thanks to its remarkable ability to tell stories in a way that words can’t, visual media will continue to be a dominant element for websites and mobile apps. However, determining the right content to deliver depends on both your business strategy and site performance.

    “A performance budget doesn’t guide your decisions about what content should be displayed. Rather, it’s about how you choose to display that content. Removing important content altogether to decrease the weight of a page is not a performance strategy.”

    Tim Kadlec

    That’s sound advice to keep in mind.

    Smashing Editorial
    (vf, yk, il)



    What’s The State Of Web Performance? — Smashing Magazine


    About The Author

    Drew is a Staff Engineer specialising in Frontend at Snyk, as well as being a co-founder of Notist and the small content management system Perch. Prior to this, …

    In this episode, we’re talking about Web Performance. What does the performance landscape look like in 2021? Drew McLellan talks to expert Harry Roberts to find out.


    Show Notes

    Harry is running a Web Performance Masterclass workshop with Smashing in May 2021. At the time of publishing, big earlybird discounts are still available.



    Drew McLellan: He’s an independent Consultant Web Performance Engineer from Leeds in the UK. In his role, he helps some of the world’s largest and most respected organizations deliver faster and more reliable experiences to their customers. He’s an invited Google Developer Expert, a Cloudinary Media Developer Expert, an award-winning developer, and an international speaker. So we know he knows his stuff when it comes to web performance, but did you know he has 14 arms and seven legs? My Smashing friends, please welcome Harry Roberts. Hi Harry, how are you?

    Harry Roberts: Hey, I’m smashing thank you very much. Obviously the 14 arms, seven legs… still posing its usual problems. Impossible to buy trousers.

    Drew: And bicycles.

    Harry: Yeah. Well I have three and a half bicycles.

    Drew: So I wanted to talk to you today, not about bicycles unfortunately, although that would be fun in itself. I wanted to talk to you about web performance. It’s a subject that I’m personally really passionate about but it’s one of those areas where I worry, when I take my eye off the ball and get involved in some sort of other work and then come back to doing a bit of performance work, I worry that the knowledge I’m working with goes out of date really quick… Is web performance as fast-moving these days as I perceive?

    Harry: This is… I’m not even just saying this to be nice to you, that’s such a good question because I’ve been thinking on this quite a bit lately and I’d say there are two halves of it. One thing I would try and tell clients is that actually it doesn’t move that fast. Predominantly because, and this is the soundbite I always use, you can bet on the browser. Browsers aren’t really allowed to change fundamentally how they work, because, of course, there’s two decades of legacy they have to uphold. So, generally, if you bet on the browser and you know how those internals work, and TCP/IP, that’s never changing… So there are certain things that are fairly set in stone, which means that best practice will, by and large, always be best practice where the fundamentals are concerned.

    Harry: Where it does get more interesting is… The thing I’m seeing more and more is that we’re painting ourselves into corners when it comes to site-speed issues. So we actually create a lot of problems for ourselves. So what that means realistically is performance… it’s the moving goalpost, I suppose. The more the landscape or the topography of the web changes, and the way it’s built and the way we work, we pose ourselves new challenges. So the advent of doing a lot more work on the client poses different performance issues than we’d be solving five years ago, but those performance issues still pertain to browser internals which, by and large, haven’t changed in five years. So a lot of it depends… And I’d say there’s definitely two clear sides to it… I encourage my clients to lean on the browser, lean on the standards, because they can’t just be changed, the goalposts don’t really move. But, of course, that needs to meld with more modern and, perhaps slightly more interesting, development practices. So you keep your… Well, I was going to say “A foot in both camps” but with my seven feet, I’d have to… four and three.

    Drew: You mentioned that the fundamentals don’t change and things like TCP/IP don’t change. One of the things that did change in… I say “recent years”, this is actually probably going back a little bit now but, is HTTP in that we had this established protocol HTTP for communicating between clients and servers, and that changed and then we got H2 which is then all binary and different. And that changed a lot of the… It was for performance reasons, it was to take away some of the existing limitations, but that was a change and the way we had to optimize for that protocol changed. Is that now stable? Or is it going to change again, or…

    Harry: So one thing that I would like to be learning more about is the latter half of the question, the changing again. I need to be looking more into QUIC and H3 but it’s a bit too far around the corner to be useful to my clients. When it comes to H2, things have changed quite a lot but I genuinely think H2 is a lot of false promise and I do believe it was rushed over the line, which is remarkable considering H1… and I mean 1.1, was launched in 1997, so we had a lot of time to work on H2.

    Harry: I guess the primary benefit as web developers understand it, or perceive it, is unlimited in-flight requests now. So rather than six dispatched and/or six in-flight requests at a time, potentially unlimited, infinite. Brings really interesting problems though because… it’s quite hard to describe without visual aids but you’ve still got the same amount of bandwidth available, whether you’re running H1 or H2, the protocol doesn’t make your connection any faster. So it’s quite possible that you could flood the network by requesting 24 files at once, but you don’t have enough bandwidth for that. So you don’t actually get any faster because you can only manage, perhaps, a fraction of that at a time.

    Harry: And also what you have to think about is how the files respond. And this is another pro-tip I go through in client workshops et cetera. People will look at an H2 waterfall and they will see that instead of the traditional six dispatched requests they might see 24. Dispatching 24 requests isn’t actually that useful. What is useful is when those responses are returned. And what you’ll notice is that you might dispatch 24 requests, so the left-hand side of your waterfall looks really nice and steep, but they all return in a fairly staggered, sequential manner because you need to limit the amount of bandwidth, so you can’t fulfill all responses at the same time.

    Harry: Well, the other thing is if you were to fulfill all responses at the same time, you’d be interleaving responses. So you might get the first 10% of each file and the next 20%… 20% of a JavaScript file is useless. JavaScript isn’t usable until 100% of it has arrived. So what you’ll see is, in actual fact, the way an H2 waterfall manifests itself when you look at the responses… it looks a lot more like H1 anyway, it’s a lot more staggered. So, H2, I think it was oversold, or perhaps engineers weren’t led to believe that there are caps on how effective it could be. Because you’ll see people overly sharding their assets and they might have twenty… let’s keep the number 24. Instead of having two big JS files, you might have 24 little bundles. They’ll still return fairly sequentially. They won’t all arrive at the same time because you’ve not magic-ed yourself more bandwidth.

    Harry: And the other problem is each request has a constant amount of latency. So let’s say you’re requesting two big files and it’s a hundred millisecond roundtrip and 250 milliseconds downloading, that’s two times 350 milliseconds. If you multiply up to 24 requests, you’ve still got constant latency, which we’ve decided is 100 milliseconds, so now you’ve got 2400 milliseconds of latency and 24 times… instead of 250 milliseconds download let’s say it’s 25 milliseconds download, it’s actually taken longer because the latency stays constant and you just multiply that latency over more requests. So I’ll see clients who will have read that H2 is this magic bullet. They’ll shard… Oh! They could simplify the development process, we don’t need to do bundling or concatenation et cetera, et cetera. And ultimately it will end up slower because you’ve managed to spread your requests out, which was the promise, but your latency stays constant so you’ve actually just got n times more latency in the browser. Like I said, really hard, probably pointless trying to explain that without visuals, but it’s remarkable how H2 manifests itself compared to what people are hoping it might do.
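Harry’s arithmetic can be sketched as a deliberately simplified, fully sequential worst-case model; the function name is made up, and the numbers mirror his example:

```javascript
// Worst-case model: every request pays the same round-trip latency,
// and responses effectively arrive one after another (no extra bandwidth).
function totalSequentialCost(requests, latencyMs, downloadMsPerRequest) {
  return requests * (latencyMs + downloadMsPerRequest);
}

// Two big bundles: 2 x (100 ms latency + 250 ms download) = 700 ms
console.log(totalSequentialCost(2, 100, 250));
// 24 small bundles: 24 x (100 ms latency + 25 ms download) = 3000 ms
console.log(totalSequentialCost(24, 100, 25));
```

Real connections overlap some of this work, but the model captures why multiplying a constant latency over many small requests can erase the parallelism H2 promises.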

    Drew: Is there still benefit in that sharding process in that, okay, to get the whole lot still takes the same amount of time, but by the time you get 100% of the first one-twenty-fourth back you can start working on it and executing it before the twenty-fourth is through?

    Harry: Oh, man, another great question. So, absolutely, if things go correctly and it does manifest itself in a more H1-looking response, the idea would be file one returns first, two, three, four, and then they can execute in the order they arrive. So you can actually shorten the aggregate time by ensuring that things arrive one after another. If we have a look at a WebPageTest waterfall and notice that responses are interleaved, that’s bad news. Because, like I said, 10% of a JavaScript file is useless.

    Harry: If the server does its job properly and it sends, sends, sends, sends, sends, then it will get faster. And then you’ve got the knock-on benefit that your caching strategy can be more granular. So a really annoying example would be you update the font size on your date picker widget. In the H1 world you’ve got to cache-bust perhaps 200 kilobytes of your site-wide CSS. Whereas now, you just cache-bust datepicker.css. So we’ve got offshoot benefits like that, which are definitely, definitely very valuable.

    Drew: I guess, in the scenario where you magically did get all your requests back at once, that would obviously bog down the client potentially, wouldn’t it?

    Harry: Yeah, potentially. And then what would actually happen is the client would have to do a load of resource scheduling, so what you’d end up with is a waterfall where all your responses return at the same time, then you’d have a fairly large gap between the last response arriving and its ability to execute. So ideally, when we’re talking about JavaScript, you’d want the browser to request them all in the request order, basically the order you defined them in, the server to return them all in the correct order so then the browser can execute them in the correct order. Because, as you say, if they all returned at the same time, you’ve just got a massive amount of JavaScript to run at once but it still needs to be scheduled. So you could have a delay of up to a second between a file arriving and it becoming useful. So, actually, H1… I guess, ideally, what you’re after is H2 request scheduling, H1-style responses, so then things can be made useful as they arrive.

    Drew: So you’re basically looking for a response waterfall that looks like you could ski down it.

    Harry: Yeah, exactly.

    Drew: But you wouldn’t need a parachute.

    Harry: Yeah. And it’s a really difficult… I think saying it out loud makes it sound really trivial, but given the way H2 was sold, I find it quite… not challenging, because that makes my clients sound… dumb… but it’s quite a thing to explain to them… if you think about how H1 works, it wasn’t that bad. And if we get responses that look like that, then “Oh yeah, I can see it now”. I’ve had to teach performance engineers this before. People who do what I do. I’ve had to teach performance engineers that we don’t mind too much when requests were made, we really care about when responses become useful.

    Drew: One of the reasons things seem to move on quite quickly, especially over the last five years, is that performance is a big topic for Google. And when Google puts weight behind something like this then it gains traction. Essentially though, performance is an aspect of user experience, isn’t it?

    Harry: Oh, I mean… this is a really good podcast, I was thinking about this half an hour ago, I promise you I was thinking about this half an hour ago. Performance is applied accessibility. You’re guaranteeing or increasing the chances that someone can access your content, and I think accessibility is always just… Oh, it’s screen readers, right? It’s people without sight. The decision to build a website rather than an app… the decision is to access more of an audience. So yeah, performance is applied accessibility, which is therefore the user experience. And that user experience could come down to “Could somebody even experience your site?” full stop. Or it could be “Was that experience delightful? When I clicked a button, did it respond in a timely manner?”. So I 100% agree and I think that’s a lot of the reason why Google are putting weight on it, is because it affects the user experience and if someone’s going to be trusting search results, we want to try and give that person a site that they’re not going to hate.

    Drew: And it’s… everything that you think about, all the benefits you think about, user experience, things like increased engagement, it’s definitely true isn’t it? There’s nothing that sends the user away from a site more quickly than a sluggish experience. It’s so frustrating, isn’t it? Using a site where you know that maybe the navigation isn’t that clear and if you click through to a link and you think “Is this what I want? Is it not?” And just the cost of making that click, just waiting, and then you’ve got to click the back button and then that waiting, and it’s just… you give up.

    Harry: Yeah, and it makes sense. If you were to nip to the supermarket and you see that it’s absolutely rammed with people, you’ll do the bare minimum. You’re not going to spend a lot of money there, it’s like “Oh I just need milk”, in and out. Whereas if it’s a nice experience, you’ve got “Oh, well, while I’m here I’ll see if… Oh, yeah they’ve got this… Oh, I’ll cook this tomorrow night” or whatever. I think still, three decades into the web, even people who build for the web struggle, because it’s intangible. They struggle to really think that what would annoy you in a real store would annoy you online, and it does, and the stats show that it has.

    Drew: I think that in the very early days, I’m thinking late 90s, showing my age here, when we were building websites we very much thought about performance but we thought about performance from a point of view that the connections that people were using were so slow. We’re talking about dial-up, modems, over phone lines, 28K, 56K modems, and there was a trend at one point with styling images that every other line of the image would blank out with a solid color to give this… if you can imagine it like looking through a venetian blind at the image. And we did that because it helped with the compression. Because every other line the compression algorithm could-

    Harry: Collapse into one pointer.

    Drew: Yeah. And so you’ve significantly reduced your image size while still being able to get… And it became a design element. Everybody was doing it. I think maybe Jeffrey Zeldman was one of the first who pioneered that approach. But what we were thinking about there was primarily how quickly could we get things down the wire. Not for user experience, because we weren’t thinking about… I mean I guess it was user experience because we didn’t want people to leave our sites, essentially. But we were thinking about not optimizing things to be really fast but trying to avoid them being really slow, if that makes sense.

    Harry: Yeah, yeah.

    Drew: And then, I think as speeds like ADSL lines became more prevalent, that we stopped thinking in those terms and started just not thinking about it at all. And now we’re at the situation where we’re using mobile devices and they’ve got constrained connections and perhaps slower CPUs and we’re having to think about it again, but this time in terms of getting an advantage. As well as the user experience side of things, it can have real business benefits in terms of costs and ability to make profit. Hasn’t it?

    Harry: Yeah, tremendously. I mean, I’m not sure how to word it. Not shooting myself in the foot here but one thing I do try and stress to clients is that site-speed is going to give you a competitive advantage, but it’s only one thing that could give you some competitive advantage. If you’ve got a product no one wants to buy then it doesn’t matter how fast your site is. And equally, if someone genuinely wants the world’s fastest website, you have to delete your images, delete your CSS, delete your JavaScript, and then see how many products you sell, because I guarantee site-speed wasn’t the factor. But studies have shown that there’s huge benefits of being fast, to the order of millions. I’m working with a client as we speak. We worked out for them that if they could render a given page one second faster, or rather their Largest Contentful Paint was one second faster, it’s worth 1.8 mil a year, which is… that’s a big number.

    Drew: That would almost pay your fee.

    Harry: Hey! Yeah, almost. I did say to them “Look, after two years this’ll be all paid off. You’ll be breaking even”. I wish. But yeah, there’s the client-facing aspect… sorry, the customer-facing aspect of: if you’ve got an E-Com site, they’re going to spend more money. If you’re a publisher, they’re going to read more of an article or they will view more minutes of content, or whatever you do that is your KPI that you measure. It could be on the Smashing site, it could be they didn’t bounce, they actually clicked through a few more articles because we made it really easy and fast. And then faster sites are cheaper to run. If you’ve got your caching strategy sorted you’re going to keep people away from your servers. If you optimize your assets, anything that does have to come from your server is going to weigh a lot less. So much cheaper to run.

    Harry: The thing is, there’s a cost in getting there. I think Scott Jehl probably said one of the most… And I heard it from him first, so I’m going to assume he came up with it but the saying is “It’s easy to make a fast website but it’s difficult to make a website fast”. And that is just so succinct. Because the reason web perf might get pushed down the list of things to do is because you might be able to say to a client “If I make your site a second faster you’ll make an extra 1.8 mil a year” or it can be “If you just added Apple Pay to your checkout, you’re going to make an extra five mil.” So it’s not all about web perf and it isn’t the deciding factor, it is one part of a much bigger strategy, especially for E-Com online. But the evidence is that I’ve measured it firsthand with my retail clients, my E-Com clients. The case for it is right there, you’re absolutely right. It’s competitive advantage, it will make you more money.

    Drew: Back in the day, again, I’m harping back to a time past, but people like Steve Souders were some of the first people to really start writing and speaking about web performance. And people like Steve were basically saying “Forget the backend infrastructure, where all the gains to be had are in the browser, in the front end, that’s where everything slow happens.” Is that still the case 15 years on?

    Harry: Yeah, yeah. He reran the test between way back then and now, and the gap had actually widened, so it’s actually more costly over the wire. But there is a counter to that, which is if you’ve got really bad backend performance, if you set out of the gate slowly, there’s only so much you can claw back on the front end. I’ve got a client at the moment, their time to first byte is 1.5 seconds. We can never render faster than 1.5 seconds, therefore, so that’s going to be a cap. We can still claw time back on the front end but if you’ve got a really, really bad time to first byte, you have got backend slowdowns, there’s a limit on how much faster your front end performance efforts could get you. But absolutely.

    Harry: That is, however, changing because… Well, no it’s not changing I guess, it’s getting worse. We’re pushing more onto the client. It used to be a case of “Your server is as fast as it is but then after that we’ve got a bunch of question marks.” because I hear this all the time “All our users run on WiFi. They’ve all got desktop machines because they all work from our office.” Well, no, now they’re all working from home. You don’t get to choose. So, that’s where all the question marks come in, which is where the slowdowns happen, where you can’t really control it. After that, the fact that now we are tending to put more on the client. By that I mean, entire runtimes on the client. You’ve moved all your application logic off of a server anyway so your time to first byte should be very, very minimal. It should be a case of sending some bundles from a CDN to my… but you’ve gone from being able to spec your own servers to hoping that somebody’s not got Netflix running on the same machine they’re trying to view your website on.

    Drew: It’s a really good point about the way that we design sites and I think the traditional best practice has always been you should try and cater for all sorts of browsers, all sorts of connection speeds, all sorts of screen sizes, because you don’t know what the user is going to be expecting. And, as you said, you have these scenarios where people say “Oh no we know all our users are on their work-issued desktop machine, they’re running this browser, it’s the latest version, they’re hardwired into the LAN” but then things happen. One of the great benefits of having web apps is that we can do things like distribute our work force suddenly back all to their homes and they can keep working, but that only holds true if the quality of the engineering was such that then somebody who’s spinning up their home machine that might have IE11 on it or whatever, whether the quality of the work is there that actually means that the web fulfills its potential in being a truly accessible medium.

    Drew: As you say, there’s this trend to shift more and more stuff into the browser, and, of course, then if the browser is slow, that’s where the slowness happens. You have to wonder “Is this a good trend? Should we be doing this?” I’ve got one site that I particularly think of, noticed that is almost 100% server rendered. There’s very little JavaScript and it is lightning fast. Every time I go to it I think “Oh, this is fast, who wrote this?” And then I realize “Oh yeah, it was me”.

    Harry: That’s because you’re on localhost, no wonder it feels fast. It’s your dev site.

    Drew: Then, my day job, we’re building out our single page application and shifting stuff away from the server because the server’s the bottleneck in that case. Can you just say that it’s more performant to be in the browser? Or more performant to be on the server? Is it just a case of measuring and taking it on a case-by-case basis?

    Harry: I think you need to be very, very, very aware of your context and… genuinely I think an error is… narcissism where people think “Oh, my blog deserves to be rendered in someone’s browser. My blog with a bounce rate of 89% needs its own runtime in the browser, because I need subsequent navigations to be fast, I just want to fetch a… basically a diff of the data.” No one’s clicking onto your next article anyway, mate, don’t push a runtime down the pipe. So you need to be very aware of your context.

    Harry: And I know that… if Jeremy Keith’s listening to this, he’s going to probably put a hit out on me, but there is, I would say, a difference between a website and a web app and the definition of that is very, very murky. But if you’ve got a heavily read-and-write application, so something where you’re inputting data, manipulating data, et cetera. Basically my site is not a web app, it’s a website, it’s read only; that I would firmly put in the website camp. Something like my accountancy software, I would say, is a web app and I am prepared to suffer a bit of boot time cost, because I know I’ll be there for 20 minutes, an hour, whatever. So you need a bit of context, and again, maybe narcissism’s a bit harsh but you need to have a real “Do we need to make this newspaper a client side application?” No, you don’t. No, you don’t. People have got ad-blockers on, people don’t like commuter newspaper sites anyway. They’re probably not even going to read the article and rant about it on Facebook. Just don’t build something like that as a client rendered application, it’s not suitable.

    Harry: So I do think there is definitely a point at which moving more onto the client would help, and that’s when you’ve got less sensitivity to churn. So any e-com type, for example, I’m doing an audit at the moment for a site who… I think it’s an E-Com site but it’s 100% on the client. You disable JavaScript and you see nothing, just an empty div id=“app”. E-Com is… you’re very sensitive to any issues. If your checkout flow is even subtly wrong, I’m off somewhere else. It’s too slow, I’m off somewhere else. You don’t have the context where someone’s willing to bed in to that app for a while.

    Harry: Photoshop. I pop open Photoshop and I’m quite happy to know that it’s going to take 45 seconds of splash screen because I’m going to be in there for… basically the 45 seconds is worth the 45 minutes. And it’s so hard to define, which is why I really struggle to convince clients “Please don’t do this” because I can’t just say “How long do you think your user’s going to be there for”. And you can proxy it from… if your bounce rate’s 89% don’t optimize for a second page view. Get that bounce rate down first. I do think there’s definitely a split but what I would say is that most people fall on the wrong side of that line. Most people put stuff in the client that shouldn’t be there. CNN, for example, you cannot read a single headline on the CNN website until it has fully booted a JavaScript application. The only thing server rendered is the header and footer, which is the only thing people don’t care about.

    Harry: And I feel like that is just… I don’t know how we arrive at that point. It’s never going to be the better option. You deliver a page that is effectively useless which then has to say “Cool, I’ll go fetch what would have been a web app but we’re going to run it in the browser, then I’ll go and ask for a headline, then you can start to… oh, you’re gone.” That really, really irks me.

    Harry: And it’s no one’s fault, I think it’s the infancy of this kind of JavaScript ecosystem, the hype around it, and also, this is going to sound really harsh but… It’s basically a lot of naïve implementation. Sure, Facebook have invented React and whatever, it works for them. Nine times out of 10 you’re not working at Facebook scale, 95 times out of 100 you’re probably not the smartest Facebook engineers, and that’s really, really cruel and it sounds horrible to say, but you can only get… None of these things are fast by default. You need a very, very elegant implementation of these things to make them correct.

    Harry: I was having this discussion with my old… he was a lead engineer on the squad that I was on 10 years ago at Sky. I was talking to him the other day about this and he had to work very hard to make a client rendered app fast, whereas making a server rendered app fast, you don’t need to do anything. You just need to not make it slow again. And I feel like there’s a lot of rose tinted glasses, naivety, marketing… I sound so bleak, we need to move on before I start really losing people here.

    Drew: Do you think we have the tendency, as an industry, to focus more on developer experience than user experience sometimes?

    Harry: Not as a whole, but I think that problem crops up in a place you’d expect. If you look at the disparity… I don’t know if you’ve seen this but I’m going to presume you have, you seem to very much have your finger on the pulse, the disparity between HTTP Archive’s data about what frameworks and JavaScript libraries are used in the wild versus the State of JavaScript survey. If you follow the State of JavaScript survey it would say “Oh yes, 75% of developers are using React” whereas fewer than 5% of sites are using React. So, I feel like, en masse, I don’t think it’s a problem, but I think in the areas you’d expect it, heavy loyalty to one framework for example, developer experience is… evangelized probably ahead of the user. I don’t think developer experience should be overlooked, I mean, everything has a maintenance cost. Your car. There was a decision when it was designed that “Well, if we hide this key functionality from a mechanic, it’s going to take that mechanic a lot longer to fix it, therefore we don’t do things like that”. So there does need to be a balance of ergonomics and usability, I think that is important. I think focusing primarily on developer experience is just baffling to me. Don’t optimize for you, optimize for your customer; your customer pays you, it’s not the other way around.

    Drew: So the online echo chamber isn’t exactly representative of reality when you see everybody saying “Oh you should be using this, you should be doing that” then that’s actually only a very small percentage of people.

    Harry: Correct, and that’s a good thing, that’s reassuring. The echo chamber… it’s not healthy to have that kind of monoculture perhaps, if you want to call it that. But also, I feel like… and I’ve seen it in a lot of my own work, a lot of developers… As a consultant, I work with a lot of different companies. A lot of people are doing amazing work in WordPress. And WordPress powers 24% of the web. And I feel like it could be quite easy for a developer like that working in something like WordPress or PHP on the backend, custom code, whatever it is, to feel a bit like “Oh, I guess everyone’s using React and we aren’t” but actually, no. Everyone’s talking about React but you’re still going with the flow, you’re still with the majority. It’s quite reassuring to find the silent majority.

    Drew: The trend towards static site generators and then hosting sites entirely on a CDN, sort of JAMstack approach, I guess when we’re talking about those sorts of publishing type sites, rather than software type sites, I guess that’s a really healthy trend, would you think?

    Harry: I love that, absolutely. You remember when we used to call SSGs “flat-file”, right?

    Drew: Yeah.

    Harry: So, I built CSS Wizardry on Jekyll back when Jekyll was called a flat-file website. But now, static site generators, huge, huge fan of that. There’s no disadvantage to it really, you pay maybe a slightly larger up-front compute cost of pre-compiling the site but then your compute cost is… well, Cloudflare fronts it, right? It’s on a CDN so your application servers are largely shielded from that.

    Harry: Anything interactive that does need doing can be done on the client or, if you want to get fancy, one really nice approach, if you are feeling ambitious, is to use Edge Side Includes so you can keep your shopping cart server rendered, but at the edge. You can do stuff like that. Tremendous performance benefits there. Not appropriate for a huge swathe of sites, but, like you say, if we’re thinking publishing… an E-Com site it wouldn’t work, you need realtime stock levels, you need… search that doesn’t just… I don’t know, you just need far more functionality. But yeah, I think the Smashing site, great example, my site is an example, much smaller than Smashing but yeah, SSG, flat-file, I’m really fond of it.
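    For reference, the Edge Side Includes idea Harry describes looks something like this in the page markup. The fragment URL is hypothetical, and exact ESI support varies by CDN (Akamai, Varnish, Fastly, etc.):

    ```html
    <!-- The surrounding page is cached at the edge; only this fragment
         is fetched per-request and stitched in by the CDN. -->
    <header>
      <esi:include src="/fragments/cart-summary" />
    </header>
    ```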

    Drew: Could it work going deeper into the JAMstack approach of shifting everything into the client and building an E-Commerce site? I think the Smashing E-Commerce site is essentially using JavaScript in the client and server APIs to do the actual functionality as serverless functions or what have you.

    Harry: Yeah. I’ve got to admit, I haven’t done any stuff with serverless. But yeah, that hybrid approach works. Perhaps my E-Commerce example was a bit clunky because you could get a hybrid between statically rendering a lot of the stuff, because most things on an E-Com site don’t really change. Filtering, you can do on the client. Search, a little more difficult; stock levels do need to go back to an API somewhere, but yeah you could do a hybrid, for definite, for an E-Com site.
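    The hybrid Harry sketches — static product pages with only the volatile bits fetched live — can be illustrated roughly like this. The data shapes, field names, and API path are invented for the example:

    ```javascript
    // Hypothetical sketch of the hybrid approach: product data is baked
    // in at build time by the static site generator, and only the
    // volatile bit (stock level) is fetched at runtime and merged in.
    const staticProduct = {
      sku: "mug-01",
      title: "Coffee Mug",
      price: 12.5,
    };

    // Merge live data over the static page data; fields are illustrative.
    function hydrateWithLiveStock(product, liveStock) {
      return {
        ...product,
        inStock: liveStock.quantity > 0,
        quantity: liveStock.quantity,
      };
    }

    // In a real page this would come from something like
    // fetch("/api/stock/mug-01") — a hypothetical endpoint:
    const live = { quantity: 3 };
    const view = hydrateWithLiveStock(staticProduct, live);
    console.log(view.inStock); // → true
    ```

    The appeal of this split is that the expensive, cacheable markup never touches an application server, while the one realtime concern stays a small, fast API call.
    
    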

    Drew: Okay, so then it’s just down to monitoring all those performance metrics again, really caring about the network, about latency, about all these sorts of things, because you’re then leaning on the network a lot more to fetch all those individual bits of data. It brings a whole new set of problems.

    Harry: Yeah, I mean you kind of… I wouldn’t say “Robbing Peter to pay Paul” but you are going to have to keep an eye on other things elsewhere. I’ve not got fully to the bottom of it, before anyone tweets it at us, but an E-Commerce client of mine recently moved. I worked with them two years ago and that site was already pretty fast. It was built on… I can’t remember which E-Com platform, it was .net, hosted on IIS, server rendered, obviously, and it was really fast because of that. It was great and we just wanted to maintain, maybe find a couple of hundred milliseconds here and there, but really good. Half way through last year, they moved to client side React for key pages. PDP, PLP… product details page, product listing page, and stuff just got markedly slower, much slower. To the point they got back in touch needing help again.

    Harry: And one of the interesting things I spotted when they were putting a case for “We need to actually revert this”. I was thinking about all the… what’s slower, obviously it’s slower, how could doing more work ever be faster, blah blah blah. One of their own bullet points in the audit was: based on projections, their yearly hosting costs have gone up by a factor of 10. Because all of a sudden they’ve gone from having one application server and a database to having loads of different gateways, loads of different APIs, loads of different microservices they’re calling on. It increased the surface area of their application massively. And the basic reason for this, I’ll tell you exactly why this happened. The developer, it was a very small team, the developer who decided “I’m going to use React because it seems like fun” didn’t do any business analysis. It was never expected of them to actually put forward a case of how much is it going to cost to do, how much is it going to return, what’s the maintenance cost of this?

    Harry: And that’s a thing I come up against really frequently in my work and it’s never the developer’s fault. It’s usually because the business keeps financials away from the engineering team. If your engineers don’t know the cost or value of their work then they’re not informed to make those decisions so this guy was never to know that that was going to be the outcome. But yeah, interestingly, moving to a more microservice-y approach… And this is an outlier, and I’m not going to say that that 10 times figure is typical, it definitely seems atypical, but it’s true that there is at least one incident I’m aware of when moving to this approach, because they just had to use more providers, 10x’d their… there’s your 10x engineer, increased hosting costs by 10 times.

    Drew: I mean, it’s an important point, isn’t it? Before starting out down any particular road with architectural changes and things, about doing your research and asking the right questions. If you were going to embark on some big changes, say you’ve got a really old website and you’re going to restructure it and you want it to be really fast and you’re making all your technology choices, I mean it pays, doesn’t it, to talk to different people in the business to find out what they want to be doing. What sort of questions should you be asking other people in the business as a web developer or as a performance engineer? Who should you be talking to and what should you be asking them?

    Harry: I’ve got a really annoying answer to the “Who should you be talking to?” And the answer is everyone should be available to you. And it will depend on the kind of business, but you should be able to speak to marketing “Hey, look, we’re using this AB testing tool. How much does that cost a year and how much do you think it nets a year?” And that developer should feel comfortable. I’m not saying developers need to change their attitude, what I mean is the company should make the developers able to ask those kind of questions. How much does Optimizely cost a year? Right, well that seems like a lot, does it make that much in return? Okay, whatever, we can make a decision based on that. That’s who you should be talking to and then questions you should ask, it should be things like…

    Harry: The amount of companies I work with who won’t give their own developers access to Google Analytics. How are you meant to build a website if you don’t know who you’re building it for? So the question should be… I work a lot with E-Com clients so every developer should know things like “What is our average order value? What is our conversion rate? What is our revenue, how much do we make?” These things mean that you can at least understand that “Oh, people spend a lot of money on this website and I’m responsible for a big chunk of that and I need to take that responsibility.”

    Harry: Beyond that, other things are hard to put into context, so for me, one of the things that I, as a consultant, so this is very different to an engineer in the business, need to know is how sensitive you are to performance. So if a client gives me the average order value, monthly traffic, and their conversion rate, I can work out how much 100 milliseconds, 500 milliseconds, a second will save them a year, or return them; just based on those three numbers I can work out roughly “Well a second’s worth 1.8 mil”. It’s a lot harder for someone in the business to get all that information because as a performance engineer it’s second nature to me. But if you can work that kind of stuff out, it unlocks a load of doors. Okay, well if a second’s worth this much to us, I need to make sure that I never lose a second and if I can, gain a second back. And that will inform a lot of things going forward. A lot of these developers are kept quite siloed. “Oh well, you don’t need to know about business stuff, just shut up and type”.
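    That back-of-envelope calculation can be sketched as code. The uplift-per-100ms figure below is purely an assumption for illustration; the whole point of Harry’s approach is that the real sensitivity has to be measured per site:

    ```javascript
    // Rough sketch of the consultant's sum: given average order value,
    // monthly traffic, and conversion rate, estimate what a speed
    // improvement is worth per year. upliftPer100ms (1% relative
    // conversion uplift per 100ms saved) is an assumed figure.
    function annualValueOfSpeedup({
      averageOrderValue,      // currency units per order
      monthlyTraffic,         // sessions per month
      conversionRate,         // e.g. 0.02 for 2%
      upliftPer100ms = 0.01,  // assumed, not a universal constant
      millisecondsSaved,
    }) {
      const baselineAnnualRevenue =
        averageOrderValue * monthlyTraffic * 12 * conversionRate;
      const relativeUplift = (millisecondsSaved / 100) * upliftPer100ms;
      return baselineAnnualRevenue * relativeUplift;
    }

    // One second faster on a site doing decent volume:
    const value = annualValueOfSpeedup({
      averageOrderValue: 60,
      monthlyTraffic: 2_500_000,
      conversionRate: 0.02,
      millisecondsSaved: 1000,
    });
    console.log(value); // → 3600000 with these illustrative numbers
    ```

    Three inputs any business can supply, and suddenly “a second faster” has a currency figure attached to it.
    
    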

    Drew: I’ve heard you say, it is quite a nice soundbite, that nobody wants a faster website.

    Harry: Yeah.

    Drew: What do you mean by that?

    Harry: Well it kind of comes back to, I think I’ve mentioned it already in the podcast, that if my clients truly wanted the world’s fastest website, they would allow me to go in and delete all their JavaScript, all their CSS, all their images. Give that customer a Times New Roman stack.

    Harry: But fast for fast’s sake is… not chasing the wrong thing but you need to know what fast means to you, because, I see it all the time with clients. There’s a point at which you can stop. You might find that your customers are only so sensitive to web perf, that it might mean that getting a First Contentful Paint from four seconds to two seconds might give you a 10% increase in revenue, but getting from that two to a one, might only give you a 1% increase. It’s still twice as fast, but you get minimal gains. So what I need to do with my clients is work out “How sensitive are you? When can we take our foot off the gas?” And also, like I said, towards the top of the show… You need to have a product that people want to buy.

    Harry: If people don’t want to buy your product, it doesn’t matter how quickly you show them it, it’ll just disgust them faster, I guess. Is your checkout flow really, really, really seamless on mobile, for example. So there’s a number of factors. For me, and my clients, it’ll be working out that sweet spot, and also working out “If getting from here to here is going to make you 1.8 mil a year, I can find you that second for a fraction of that cost.” If you want me to get you an additional second on top of that, it’s going to get a lot harder. So my cost to you will probably go up, and that won’t be an extra 1.8, because it’s not linear, you don’t get 1.8 mil for every one second.

    Harry: It will tail off at some point. And clients will get to a point where… they’ll still be making gains but it might be a case of your engineering effort doubles, meaning your returns halve, you can still be in the green, hopefully it doesn’t get more expensive and you’re losing money on performance, but there’s a point where you need to slow down. And that’s usually things that I help clients find out because otherwise they will just keep chasing speed, speed, speed and get a bit blinkered.

    Drew: Yeah, it is sort of diminishing returns, isn’t it?

    Harry: That’s what I was looking for-

    Drew: Yeah.

    Harry: … diminishing returns, that’s exactly it. Yeah, exactly.

    Drew: And in terms of knowing where to focus your effort… Say you’ve got the bulk of your users, 80% of your users are getting a response within two, three seconds, and then you’ve got 20% who may be in the long-tail that might end up with responses five, ten seconds. Is it better to focus on that 80% where the work’s really hard, or is it better to focus on the 20% that’s super slow, where the work might be easier, but it’s only 20%. How do you balance those sorts of things?

    Harry: Drew, can you write all podcast questions for everyone else? This is so good. Well, a bit of a shout out to Tim Kadlec, he’s done great talks on this very topic and he calls it “The Long-Tail of Web Performance” so anyone listening who wants to look at that, Tim’s done a lot of good firsthand work there. The 80/20, let’s just take those as good example figures, by the time you’re dealing with the 80th percentile, you’re definitely in the edge cases. All your CrUX and Web Vitals data is based around the 75th percentile. I think there’s a lot of value investing in that top 20th percentile, the worst 20%. Several reasons for this.
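    As an aside, the percentile figures being thrown around here are easy to make concrete. A sketch of pulling a given percentile out of raw RUM samples, using the nearest-rank method (one of several common definitions):

    ```javascript
    // Nearest-rank percentile over a set of RUM timings (milliseconds).
    function percentile(samples, p) {
      if (samples.length === 0) throw new Error("no samples");
      const sorted = [...samples].sort((a, b) => a - b);
      const rank = Math.ceil((p / 100) * sorted.length);
      return sorted[Math.max(0, rank - 1)];
    }

    // Ten illustrative LCP samples:
    const lcp = [1200, 1400, 1500, 1700, 1900, 2100, 2600, 3400, 5200, 9800];
    console.log(percentile(lcp, 75)); // → 3400: the 75th percentile CrUX reports on
    console.log(percentile(lcp, 95)); // → 9800: the long tail
    ```

    The median here would look respectable while the 95th percentile is nearly five times worse — which is exactly the gap Harry is arguing you should care about.
    
    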

    Harry: First thing I’m going to start with is one of the most beautiful, succinct soundbites I’ve ever heard. And the guy who told me it, I can guarantee, did not mean it to be this impactful. I was 15 years old and I was studying product design, GCSE. The final project was a bar stool, so it was a good sign of things to come. And we were talking about how you design furniture. And my teacher basically said… I don’t know if I should… I’m going to say his name, Mr. Brocklesby.

    Harry: He commanded respect but he was one of the lads, we all really liked him. But he was massive in every dimension. Well over six foot tall, but just a big lad. Big, big, big, big man. And he said to us “If you were to design a doorway, would you design it for the average person?” And 15-year-old brains are going “Well yeah, if everyone’s roughly 5’9 then yeah”. He was like “Well, immediately, Harry can’t use that door.” You don’t design for the average person, you design for the extremities because you want it to be useful to the most people. If you designed a chair for the average person, Mr. Brocklesby wasn’t going to fit in it. So he taught me from a really, really young age: design to your extremities.

    Harry: And where that becomes really interesting in web perf is… If you imagine a ladder, and you pick up the ladder by the bot… Okay I’ve just realized my metaphor might… I’ll stick with it and you can laugh at me afterwards. Imagine a ladder and you lift the ladder up by the bottom rungs. And that’s your worst experiences. You pick the bottom rung in the ladder to lift it up. The whole ladder comes with it, like a rising tide floats all boats. The reason that metaphor doesn’t work is if you pick a ladder up by the top rung, it all lifts as well, it’s a ladder. And the metaphor doesn’t even work if I turn it into a rope ladder, because a rope ladder then, you lift the bottom rung and nothing happens but… my point is, if you can improve experience for your 90th percentile, it’s got to get that up for your 10th percentile, right?

    Harry: And this is why I tell clients, they’ll say to me things like “Oh well most of our users are on 4G on iPhones” so like all right, okay, and we start testing 3G on Android, like “No, no, most of our users are iPhones” okay… that means your average user’s going to have a better experience but anyone who isn’t already in the 50th percentile just gets left further behind. So set the bar pretty high for yourself by setting expectations pretty low.

    Harry: Sorry, I’ve got a really bad habit of giving really long answers to short questions. But it was a fantastic question and, to try and wrap up, 100% definitely I agree with you that you want to look at that long-tail, you want to look at that… your 80th percentile because if you take all the experiences on the site and look at the median, and you improve the median, that means you’ve made it even better for people who were already quite satisfied. 50% of people being effectively ignored is not the right approach. And yeah, it always comes back to Mr Brocklesby telling me “Don’t design for the average person because then Harry can’t use the door”. Oh, for anyone listening, I’m 193 centimeters, so I’m quite lanky, that’s what that is.

    Drew: And all those arms and legs.

    Harry: Yeah. Here’s another good one as well. My girlfriend recently discovered the accessibility settings in iOS… so everyone has their phone on silent, right? Nobody actually has a phone that actually rings, everyone’s got it on silent. She found that “Oh you know, you can set it so that when you get a message, the flash flashes. And if you tap the back of the phone twice, it’ll do a screenshot.” And these are accessibility settings, these are designed for that 95th percentile. Yet she’s like “Oh, this is really useful”.

    Harry: Same with OXO Good Grips. OXO Good Grips, the kitchen utensils. I’ve got a load of them in the kitchen. They’re designed because the founder’s wife had arthritis and he wanted to make more comfortable utensils. He designed for the 99th percentile, most people don’t have arthritis. But by designing for the 99th percentile, inadvertently, everyone else is like “Oh my God, why can’t all potato peelers be this comfortable?” And I feel like it’s really, really… it’s a feel-good anecdote that I like to wheel out in these sorts of scenarios. But yeah, if you optimize for them… Well, a rising tide floats all boats and that therefore optimizes the tail-end of people and you’re going to capture a lot of even happier customers above that.

    Drew: Do you have the OXO Good Grips manual hand whisk?

    Harry: I don’t. I don’t, is it good?

    Drew: Look into it. It’s so good.

    Harry: I do have the OXO Good Grips mandolin slicer which took the end of my finger off last week.

    Drew: Yeah, I won’t get near one of those.

    Harry: Yeah, it’s my own stupid fault.

    Drew: Another example from my own experience with catering for that long-tail is that, in the project I’m working on at the moment, that long-tail is right at the end, you’ve got people with the slowest performance, but it turns out, if you look at who those customers are, they’re the most valuable customers to the business-

    Harry: Okay.

    Drew: … because they are the biggest organizations with the most amount of data.

    Harry: Right.

    Drew: And so they’re hitting bottlenecks because they have so much data to display on a page and those pages need to be refactored a little bit to help that use case. So they’re having the slowest experience and they’re, when it comes down to it, paying the most money and making so much more of a difference than all of the people having a really fast experience because they’re free users with a tiny amount of data and it all works nice and it is quick.

    Harry: That’s a fascinating dimension, isn’t it? In fact, I had a similar… I had nowhere near the business impact as what you’ve just described, but I worked with a client a couple of years ago, and their CEO got in touch because their site was slow. Like, slow, slow, slow. Really nice guy as well, he’s just a really nice down to earth guy, but he’s minted, like proper rich. And he’s got the latest iPhone, he can afford that. He’s a multimillionaire, he spends a lot of his time flying between Australia, where he is from, and Estonia, where he is now based.

    Harry: And he’s flying first class, course he is. But it means most of his time on his nice, shiny iPhone 12 Pro Max whatever, whatever, is over airplane WiFi, which is terrible. And it was this really amazing juxtaposition where he owns the site and he uses it a lot, it’s a site that he uses. And he was pushing it… I mean easily their richest customer was their CEO. And he’s in this weirdly privileged position where he’s on a worse connection than Joe Public because he’s somewhere above Singapore on a Qantas flight getting champagne poured down his neck, and he’s struggling. And that was a really fascinating insight that… Oh yeah, because your 95th percentile can basically go in either direction.

    Drew: Yeah, it’s when you start optimizing for using a site with a glass of champagne in one hand that you think “Maybe we’re starting to lose the way a bit.”

    Harry: Yeah, exactly.

    Drew: We talked a little bit about measurement of performance, and in my own experience with performance work it’s really essential to measure everything. A, so you can identify where problems are, but B, so that when you actually start tackling something you can tell if you’re making a difference and how much of a difference you’re making. How should we be going about measuring the performance of our sites? What tools can we use and where should we start?

    Harry: Oh man, another great question. So there’s a range of answers depending on how much time, resources, inclination there is towards fixing site speed. So what I will try and do with clients is… Certain off the shelf metrics are really good. Load time, do not care about that anymore. It’s very, very, very… I mean, it’s a good proxy if your load time’s 120 seconds, I’m going to guess you don’t have a very fast website, but it’s too obscure and it’s not really customer facing. I actually think Web Vitals are a really good step in the right direction because they do measure user experience but they’re based on technical input. Largest Contentful Paint is a really nice one to visualize, but the technical stuff there is: unblock your critical path, make sure hero images arrive quickly and make sure your web font strategy is decent. There’s a technical undercurrent to these metrics. Those are really good off the shelf.

    Harry: However, if clients have got the time, it’s usually time, because you want to capture the data but you need time to actually capture the data. So what I try and do with clients is let’s go “Look, we can’t work together for the next three months because I’m fully booked. So, what we can do is really quickly set you up with a free trial of Speedcurve, set up some custom metrics” so that means that for a publisher client, a newspaper, I’d be measuring “How quickly was the headline of the article rendered? How quickly was the lead image for the article rendered?” For an E-Commerce client I want to measure, because obviously you’re measuring things like start render passively. As soon as you start using any performance monitoring software, you’re capturing your actual performance metrics for free. So your First Contentful Paint, Largest Contentful, etc. What I really want to capture is things that matter to them as a business.

    Harry: So, working with an E-Com client at the moment where we are able to correlate… The faster your start render, what is the probability of an add to cart? If you can show them a product sooner, they’re more likely to buy it. And this is a lot of effort to set up; this is kind of the stretch goal for clients who are really ambitious. But anything that you really want to measure, because like I say, you don’t really want to measure what your Largest Contentful Paint is, you want to measure your revenue and was that influenced by Largest Contentful Paint? So the stretch goal, ultimate thing, would be anything you would see as a KPI for that business. It could be, on newspapers, how far down the article did someone scroll? And does that correlate in any way to First Input Delay? Did people read more articles if CLS was lower? But then before we start doing custom, custom metrics, I honestly think web vitals is a really good place to start and it’s also been quite well normalized. It becomes a… I don’t know what the word is. Lowest common denominator, I guess, where everyone in the industry now can discuss performance on this level playing field.
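    As a rough sketch of the correlation analysis Harry describes — bucketing sessions by start-render time and comparing the add-to-cart rate per bucket — something like the following would do. The data shape, field names, and bucket size are all invented for illustration; a real analysis would pull sessions from your RUM tool.

```javascript
// Bucket sessions by start-render time (ms) and compute the
// add-to-cart conversion rate within each bucket.
// The session objects here are a hypothetical data shape.
function conversionByRenderTime(sessions, bucketSizeMs = 1000) {
  const buckets = new Map();
  for (const { startRenderMs, addedToCart } of sessions) {
    const key = Math.floor(startRenderMs / bucketSizeMs) * bucketSizeMs;
    const b = buckets.get(key) || { total: 0, converted: 0 };
    b.total += 1;
    if (addedToCart) b.converted += 1;
    buckets.set(key, b);
  }
  // Return [bucketStartMs, conversionRate] pairs, fastest bucket first.
  return [...buckets.entries()]
    .sort(([a], [b]) => a - b)
    .map(([ms, { total, converted }]) => [ms, converted / total]);
}
```

    If the rate falls off as the buckets get slower, that is the business case: faster start render correlates with more carts.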

    Harry: One problem I’ve got, and I actually need to set up a meeting with the vitals team, is I also really think Lighthouse is great, but CLS is 33% of web vitals. You’ve got LCP, FID, CLS. CLS is 33% of your vitals. Vitals is what normally goes in front of your marketing team, your analytics department, because it pops up in Search Console, it’s mentioned in the context of search results pages. Where vitals is concerned, you’ve got heavy weighting: 33%, a third of vitals, is CLS, yet it’s only 5% of your Lighthouse score. So what you’re going to get is developers who build around Lighthouse, because it can be integrated into tooling, it’s a lab metric. Vitals is field data, it’s RUM.

    Harry: So you’ve got this massive disconnect where you’ve got your marketing team saying “CLS is really bad” and developers are thinking “Well, it’s 5% of the Lighthouse score that DevTools is giving me, it’s 5% of the score that Lighthouse CLI gives us in CircleCI” or whatever you’re using, yet for the marketing team it’s 33% of what they care about. So the problem there is a bit of a disconnect, because I do think Lighthouse is very valuable, but I don’t know how they reconcile that fairly massive difference where in vitals, CLS is 33% of your score… well, not score, because you don’t really have one, and in Lighthouse it’s only 5%. And it’s things like that that still need ironing out before we can make this discussion seamless.
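    The mismatch is easy to see in the arithmetic. The weights below are the Lighthouse v6/v7 performance-score weights as I understand them (they have changed between versions, so treat them as a sketch); the point is simply that a perfect-versus-terrible CLS moves the final score by at most five points out of a hundred, while in Core Web Vitals it is one metric of three.

```javascript
// Approximate Lighthouse v6/v7 metric weights: CLS contributes only 5%
// of the overall score, while in Core Web Vitals it is one of three.
const LIGHTHOUSE_WEIGHTS = {
  FCP: 0.15, SI: 0.15, LCP: 0.25, TTI: 0.15, TBT: 0.25, CLS: 0.05,
};

// Combine per-metric scores (each 0–1) into the weighted overall score.
function lighthouseScore(metricScores) {
  return Object.entries(LIGHTHOUSE_WEIGHTS).reduce(
    (sum, [metric, weight]) => sum + weight * (metricScores[metric] ?? 0),
    0
  );
}
```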

    Harry: But, again, long answer to a short question. Vitals is really good. LCP is a good user experience metric which can be boiled down to technical solutions, same with CLS. So I think that’s a really good jump-off point. Beyond that, it’s custom metrics. What I try and get my clients to is a point where they don’t really care how fast their site is, they just care that they made more money than yesterday — and if they did, is that because the site was running faster? If they made less, is that because it was running slower? I don’t want them to chase a mystical two-second LCP, I want them to chase the optimal LCP. And if that actually turns out to be slower than what you think, then whatever, that’s fine.

    Drew: So, for the web developer who’s just interested in… they’ve not got budget to spend on tools like Speedcurve and things, they can obviously run tools like Lighthouse just within their browser, to get some good measurement… Are things like Google Analytics useful for that level?

    Harry: They are, and they can be made more useful. Analytics, for many years now, has captured rudimentary performance information. And that is going to be DNS time, TCP and TLS, time to first byte, page download time, which is a proxy… well, whatever, just page download time and load time. So fairly clunky metrics. But it’s a good jump-off point, and normally every project I start with a client, if they don’t have New Relic or Speedcurve or whatever, I’ll just say “Well, let me have a look at your analytics”, because I can at least proxy the situation from there. And it’s never going to be anywhere near as good as something like Speedcurve or New Relic or Dynatrace or whatever. You can send custom metrics really, really, really easily off to analytics. If anyone listening wants to be able to send… my site, for example. I’ve got metrics like “How quickly can you read the heading of one of my articles? At what point was the About page image rendered? At what point was the call to action that implores you to hire me — how soon is that rendered to screen?” Really trivial to capture this data and almost as trivial to send it to analytics. So if anyone wants to view source on my site, scroll down to the closing body tag and find the analytics snippet, you will see just how easy it is for me to capture custom data and send that off to analytics. And, in the analytics UI, you don’t need to do anything. Normally you’d have to set up custom reports and mine the data and make it presentable. These are first-class citizens in Google Analytics. So the moment you start capturing custom analytics, there’s a whole section of the dashboard dedicated to it. There’s no setup, no heavy lifting in GA itself, so it’s really trivial and, if clients are on a real budget or maybe I want to show them the power of custom monitoring, I don’t want to say “Oh yeah, I promise it’ll be really good, can I just have 24 grand for Speedcurve?” I can start by just saying “Look, this is rudimentary. Let’s see the possibilities here; now we can maybe convince you to upgrade to something like Speedcurve.”
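    A minimal sketch of that pattern — capture the moment a key element renders and forward it as a user-timing hit to Universal Analytics. The `ga()` stub below stands in for the real analytics snippet (the real snippet likewise just queues commands until analytics.js loads); the category and variable names are made up for illustration.

```javascript
// Stand-in for the Google Analytics command queue: the production
// snippet also just pushes arguments onto a queue until the library loads.
const gaQueue = [];
function ga(...args) { gaQueue.push(args); }

// GA user timings expect integer milliseconds, so round before sending.
function sendTiming(category, variable, ms) {
  ga('send', 'timing', category, variable, Math.round(ms));
}

// In the browser you would call this right after the element of interest,
// e.g. immediately after the article headline in the HTML:
//   <h1>…</h1><script>sendTiming('Perf', 'headlineRendered', performance.now());</script>
sendTiming('Perf', 'headlineRendered', 1234.56);
```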

    Drew: I’ve often found that my gut instinct on how fast something should be, or what impact a change should have, can be wrong. I’ll make a change and think I’m making things faster and then I measure it and actually I’ve made things slower. Is that just me being rubbish at web perf?

    Harry: Not at all. I’ve got a really pertinent example of this. Preload… a real quick intro for anyone who’s not heard of preload: loading certain assets on the web is inherently very slow, and the two primary candidates here are background images in CSS and web fonts, because before you can download a background image, you have to download the HTML, which then downloads the CSS, and then the CSS says “Oh, this div on the homepage needs this background image.” So it’s inherently very slow because you’ve got that entire chunk of CSS time in between. With preload, you can put one line in the HTML head tag that says “Hey, you don’t know it yet but, trust me, you’ll need this image really, really, really soon.” So you can put a preload in the HTML which preemptively fires off this download. By the time the CSS needs the background image, it’s like “Oh cool, we’ve already got it, that’s fast.” And this is touted as this web perf messiah… Here’s the thing, and I promise you, I tweeted this last week and I’ve been proved right twice since. People hear about preload, and the promise it gives, and it’s also very heavily pushed by Lighthouse; in theory, it makes your site faster. People get so married to the idea of preload that even when I can prove it isn’t working, they will not remove it again. Because “No, but Lighthouse said.” Now this is one of those things where the theory is sound: if you have to wait for your web font, versus downloading it earlier, you’re going to see stuff faster. The problem is, when you think of how the web actually works, any page you first hit, any brand new domain you hit, you’ve got a finite amount of bandwidth, and the browser’s very smart at spending that bandwidth correctly. It will look through your HTML really quickly and make a shopping list. The most important thing is the CSS, then it’s this jQuery, then it’s this… and the next few things are these, these, and these, at lower priority.
    As soon as you start loading your HTML with preloads, you’re telling the browser “No, no, no, this isn’t your shopping list anymore, buddy, this is mine. You need to go and get these.” That finite amount of bandwidth is still finite, but it’s now spent across more assets, so everything gets marginally slower. And I’ve had to debunk this twice in the past week, and still people are like “Yeah, but no, it’s because it’s downloading sooner.” No, it’s being requested sooner, but it’s stealing bandwidth from your CSS. You can literally see your web fonts stealing bandwidth from your CSS. So it’s one of those things where you have to, have to, have to follow the numbers. I’ve done it before on a large-scale client. If you’re listening to this, you’ve heard of this client, and I was quite insistent that “No, no, your head tags are in the wrong order because this is how it should be and you need to have them in this order because theoretically it clues in that…” Even as I was saying it to the client I knew I was setting myself up for a fall. Because of how browsers work, it has to be faster. So we deploy this change… to many millions of people, and it got slower. It got slower. And me sitting there, indignantly insisting “No but, browsers work like this” is useless because it’s not working. And we reverted it and I was like “Sorry! Still going to invoice you for that!” So it’s not you at all.
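    For reference, this is what the preload pattern looks like in markup (the file names are placeholders). The point of Harry’s warning is that every such line reorders the browser’s own priority list, so each one should be verified against real measurements rather than assumed to help:

```html
<head>
  <!-- Each preload jumps the browser's own request queue: the font and
       the hero image now compete with the CSS for the same finite bandwidth. -->
  <link rel="preload" href="/fonts/brand.woff2" as="font" type="font/woff2" crossorigin>
  <link rel="preload" href="/img/hero.jpg" as="image">
  <link rel="stylesheet" href="/css/main.css">
</head>
```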

    Drew: Follow the numbers.

    Harry: Yeah, exactly. “I actually have to charge you more, because I spent time reverting it, took me longer.” But yeah, you’re absolutely right, it’s not you, it’s one of those things where… I have done it a bunch of times on a much smaller scale, where I’ll be like “Well this theoretically must work” and it doesn’t. You’ve just got to follow what happens in the real world. Which is why that monitoring is really important.

    Drew: As the landscape changes and technology develops, Google rolls out new technologies that help us make things faster, is there a good way that we can keep up with the changes? Is there any resources that we should be looking at to keep our skills up to date when it comes to web perf?

    Harry: To quickly address the whole “Google making”… I know it’s slightly tongue-in-cheek but I’m going to focus on this. I guess, right towards the beginning: bet on the browser. Things like AMP, for example, are at best an afterthought of a solution. There’s no replacement for building a fast site, and the moment you start using things like AMP, you have to hold on to those non-standard standards, at the mercy of the AMP team changing their mind. I had a client spend a fortune licensing a font from an AMP allow-listed font provider, then at some point, AMP decided “Oh actually, no, that font provider, we’re going to block-list them now.” So I had a client who’d invested heavily in AMP and this font provider and had to choose “Well, do we undo all the AMP work or do we just waste this very big number a year on the web font?”, blah, blah, blah. So I’d be very wary of any one… I’m a Google Developer Expert, but I don’t know of any gagging order… I can be critical, and I would say… avoid things that are hailed as a one-size-fits-all solution, things like AMP.

    Harry: And to dump on someone else for a second, Cloudflare has a thing called Rocket Loader, which is AMP-esque in its endeavor. It’s designed like “Oh, just turn this on in your CDN, it’ll make your site faster.” And actually it’s just a replacement for building your site properly in the first place. So… to address that aspect of it, try and remain as independent as possible, know how browsers work — which immediately means Chrome monoculture, you’re back in Google’s lap — but know how browsers work, stick to some fundamental ideologies. When you’re building a site, look at the page. Whether that’s in Figma, or Sketch, or wherever it is, look at the design and say “Well, that is what a user wants to see first, so I’ll put nothing in the way of that. I won’t lazy load this main image because that’s daft, why would I do that?” So just think about “What would you want the user to see first?” On an E-Com site, it’s going to be that product image, probably nav at the same time, but reviews of the product, Q&A of the product — lazy load that. Tuck that behind JavaScript.
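    Sketching that prioritization for a hypothetical product page: the product image loads eagerly because it is what the user came for, while reviews and Q&A are deferred. The markup and the `data-lazy-module` hook are invented conventions for whatever loader you use.

```html
<!-- The product image is the priority: no lazy-loading, nothing in its way. -->
<img src="/img/product-hero.jpg" alt="Product photo" width="800" height="600">

<!-- Below-the-fold sections can wait: native lazy-loading for images,
     and heavier widgets tucked behind JavaScript that loads later. -->
<img src="/img/review-photo.jpg" alt="Customer photo" loading="lazy" width="400" height="300">
<section id="reviews" data-lazy-module="reviews.js"></section>
```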

    Harry: Certain fundamental ways of working that will serve you right no matter what technology you’re reading up on, which is “Prioritize what your customer prioritizes”. Doing more work on that’d be faster, so don’t put things in the way of that, but then more tactical things for people to be aware of, keep abreast of… and again, straight back to Google, but is proving to be a phenomenal resource for framework agnostic, stack agnostic insights… So if you want to learn about vitals, you want to learn about PWAs, so’s really great.

    Harry: There are actually very few performance-centric publications. Calibre’s email — I think it’s fortnightly — is just phenomenal, it’s a really good digest. Keep an eye on the web platform in general, so there’s the Performance Working Group, they’ve got a load of proposals on GitHub. Again, back to Google, but no one knows about this website and it’s phenomenal: it tells you exactly what Chrome’s working on, what the signals are from other browsers, so if you want to see what the work is on priority hints, you can go and get links to all the relevant bug trackers. Chrome Status shows you milestones for each… “This is coming out in M88, this was released in 67” or whatever; that’s a really good thing for quite technical insights.

    Harry: But I keep coming back to this thing, and I know I probably sound like “old man shouts at cloud”, but stick to the basics. Nearly every single pound, or dollar, or euro I’ve ever earned has been from teaching clients “You know the browser does this already, right?” or “You know that this couldn’t possibly be faster?” — and that sounds really righteous of me… I’ve never made a cent off of selling extra technology. Every bit of money I make is about removing, subtracting. If you find yourself adding things to make your site faster, you’re heading in the wrong direction.

    Harry: Case in point, I’m not going to name… the big advertising/search engine/browser company at all, not going to name them, and I’m not going to name the JavaScript framework, but I’m currently in discussions with a very, very big, very popular JavaScript framework about removing something that’s actively harming, or optionally removing something that would harm the performance of a massive number of websites. And they were like “Oh, we’re going to loop in…” someone from this big company, because they did some research… and it’s like “We need an option to remove this thing because you can see here, and here, and here it’s making this site slower.” And their solution was to add more, like “Oh but if you do this as well, then you can sidestep that” and it’s like “No, no, adding more to make a site faster must be the wrong solution. Surely you can see that you’re heading in the wrong direction if it takes more code to end up with a faster site.”

    Harry: Because it was fast to start with, and everything you add is what makes it slower. And the idea of adding more to make it faster, although… it might manifest itself in a faster website, it’s the wrong way to go about it. It’s a race to the bottom. Sorry, I’m getting really het up; you can tell I’ve not ranted for a while. So that’s the other thing: if you find yourself adding features to make a site faster, you’re probably heading in the wrong direction. It’s far more effective to make a site faster by removing things than it is to add them.

    Drew: You’ve put together a video course called “Everything I Have Done to Make CSS Wizardry Fast”.

    Harry: Yeah!

    Drew: It’s a bit different from traditional online video courses, isn’t it?

    Harry: It is. I’ll be honest, it’s partly… I don’t want to say laziness on my part, but I didn’t want to design a curriculum which had to be very rigid and take you from zero to hero, because the time involved in doing that is enormous, and time I didn’t know if I would have. So what I wanted to do was have ready-to-go material: just screen-cast myself talking through it. So it doesn’t start off with “Here is a browser and here’s how it works”, so you do need to be at least aware of web perf fundamentals, but it’s hacks and pro tips and real-life examples.

    Harry: And because I didn’t need to do a full curriculum, I was able to slam the price way down. So it’s not a big 10-hour course that will take you from zero to hero; it’s nip in and out as you see fit. It’s basically just looking at my site, which is an excellent playground for things that are unstable or… it’s very low risk for me to experiment there. So I’ve just done a video series. It was a ton of fun to record. Just tearing down my own site and talking about “Well, this is how this works and here’s how you could use it”.

    Drew: I think it’s really great how it’s split up into solving different problems. If I want to find out more about optimizing images or whatever, I can think “Right, what does my mate Harry have to say about this?”, dip in to the video about images and off I go. It’s really accessible in that way, you don’t have to sit through hours and hours of stuff, you can just go to the bit you want and learn what you need to learn and then get out.

    Harry: I think I tried to keep it more… The benefit of not doing a rigid curriculum is you don’t need to watch a certain video first; there’s no intro, it’s just “Go and look around and see what you find interesting”, which meant that someone suffering with LCP issues can go “Oh well, I’ve got to dive into this folder here”, or if they’re suffering with CSS problems they can go dive into that folder. Obviously I have no stats, but I imagine there’s a high abandonment rate on courses, purely because you have to trudge through three hours of intro in case you do miss something, and it’s like “Oh, do you know what, I can’t keep doing this every day” and people might just abandon a lot of courses. So my thinking was just dive in; you don’t need to have seen the preceding three hours, you can just go and find whatever you want. And feedback’s been really, really… In fact, what I’ll do is — it doesn’t exist yet, but I’ll do it straight after the call — anybody who uses the discount code SMASHING15 will get 15% off of it.

    Drew: So it’s almost like you’ve performance optimized the course itself, because you can just go straight to the bit you want and you don’t have to do all the negotiation and-

    Harry: Yeah, unintentional but I’ll take credit for that.

    Drew: So, I’ve been learning all about web performance, what have you been learning about lately, Harry?

    Harry: Technical stuff… not really. I’ve got a lot on my “to learn” list, so QUIC, H3 sort of stuff I would like to get a bit more working knowledge of that, but I wrote an E-Book during first lockdown in the UK so I learned how to make E-Books which was a ton of fun because they’re just HTML and CSS and I know my way around that so that was a ton of fun. I learnt very rudimentary video editing for the course, and what I liked about those is none of that’s conceptual work. Obviously, learning a programming language, you’ve got to wrestle concepts, whereas learning an E-Book was just workflows and… stuff I’ve never tinkered with before so it was interesting to learn but it didn’t require a change of career, so that was quite nice.

    Harry: And then, non technical stuff… I ride a lot of bikes, I fall off a lot of bikes… and because I’ve not traveled at all since last March, nearly a year now, I’ve been doing a lot more cycling and focusing a lot more on… improving that. So I’ve been doing a load of research around power outputs and functional threshold powers, I’m doing a training program at the moment, so constantly, constantly exhausted legs but I’m learning a lot about physiology around cycling. I don’t know why because I’ve got no plans of doing anything with it other than keep riding. It’s been really fascinating. I feel like I’ve been very fortunate during lockdowns, plural, but I’ve managed to stay active. A lot of people will miss out on simple things like a daily commute to the office, a good chance to stretch legs. In the UK, as you’ll know, cycling has been very much championed, so I’ve been tinkering a lot more with learning more about riding bikes from a more physiological aspect which means… don’t know, just being a nerd about something else for a change.

    Drew: Is there perhaps not all that much difference between performance optimization on the web and performance optimization in cycling, it’s all marginal gains, right?

    Harry: Yeah, exactly. And the amount of graphs I’ve been looking at on the bike… I’ve got power data from the bike, I’ll go out on a ride and come back like “Oh, if I had five more watts here but then saved 10 watts there, I could do this, this, and this the fastest ever” and… I’ve been a massive anorak about it. But yeah, you’re right. Do you know what, I think you’ve hit upon something really interesting there. I think that kind of thing is a good sport/pastime for somebody who is a bit obsessive, who does like chasing numbers. There are things on — I mean, you’ll know this, but — Strava, you’ve got your KOMs. I bagged 19 of them last year, which is, for me, a phenomenal amount. And it’s nearly all from obsessing over available data and looking at “This guy that I’m trying to beat, he was doing 700 watts at this point, so if I could get up to 1,000 and then tail off” and blah, blah, blah… it’s being obsessive. Nerdy. But you’re right, I guess it’s a similar kind of thing, isn’t it? If you can learn where you can afford to tweak things, or squeeze the last little drops out…

    Drew: And you’ve still got limited bandwidth in both cases. You’ve got limited energy and you’ve got limited network connection.

    Harry: Exactly, you can’t just magic some more bandwidth there.

    Drew: If you, the listener, would like to hear more from Harry, you can find him on Twitter, where he’s @csswizardry, or go to his website, where you’ll find some fascinating case studies of his work and find out how to hire him to help solve your performance problems. Harry’s e-book, that he mentioned, and video course we’ll link up from the show notes. Thanks for joining us today, Harry — do you have any parting words?

    Harry: I’m not one for soundbites and motivational quotes, but I heard something really, really, really insightful recently. Everyone keeps saying “Oh well, we’re all in the same boat”, and we’re not. We’re all in the same storm and some people have got better boats than others. Some people are in little dinghies, some people have got mega yachts. Oh, is that a bit dreary to end on… don’t worry about Corona, you’ll be dead soon anyway!

    Drew: Keep hold of your oars and you’ll be all right.

    Harry: Yeah. I was on a call last night with some web colleagues and we were talking about this and missing each other a lot. The web is, by default, remote, that’s the whole point of the web. But… missing a lot of human connection so, chatting to you for this hour and a bit now has been wonderful, it’s been really nice. I don’t know what my parting words really are meant to be, I should have prepared something, but I just hope everyone’s well, hope everyone’s making what they can out of lockdown and people are keeping busy.


    How We Improved SmashingMag Performance — Smashing Magazine


    About The Author

    Vitaly Friedman loves beautiful content and doesn’t like to give in easily. When he is not writing or speaking at a conference, he’s most probably running …

    In this article, we’ll take a close look at some of the changes we made on this very site — running on JAMStack with React — to optimize the web performance and improve the Core Web Vitals metrics. With some of the mistakes we’ve made, and some of the unexpected changes that helped boost all the metrics across the board.

    Every web performance story is similar, isn’t it? It always starts with the long-awaited website overhaul. A day when a project, fully polished and carefully optimized, gets launched, ranking high and soaring above performance scores in Lighthouse and WebPageTest. There is a celebration and a wholehearted sense of accomplishment prevailing in the air — beautifully reflected in retweets and comments and newsletters and Slack threads.

    Yet as time passes by, the excitement slowly fades away, and urgent adjustments, much-needed features, and new business requirements creep in. And suddenly, before you know it, the code base gets a little bit overweight and fragmented, third-party scripts have to load just a little bit earlier, and shiny new dynamic content finds its way into the DOM through the backdoors of fourth-party scripts and their uninvited guests.

    We’ve been there at Smashing as well. Not many people know it but we are a very small team of around 12 people, many of whom are working part-time and most of whom are usually wearing many different hats on a given day. While performance has been our goal for almost a decade now, we never really had a dedicated performance team.

    After the latest redesign in late 2017, it was Ilya Pukhalski on the JavaScript side of things (part-time), Michael Riethmueller on the CSS side of things (a few hours a week), and yours truly, playing mind games with critical CSS and trying to juggle a few too many things.

    Performance sources screenshot showing Lighthouse scores between 40 and 60
    This is where we started. With Lighthouse scores somewhere between 40 and 60, we decided to tackle performance (yet again) head-on. (Image source: Lighthouse Metrics) (Large preview)

    As it happened, we lost track of performance in the busyness of day-to-day routine. We were designing and building things, setting up new products, refactoring the components, and publishing articles. So by late 2020, things got a bit out of control, with yellowish-red Lighthouse scores slowly showing up across the board. We had to fix that.

    That’s Where We Were

    Some of you might know that we are running on JAMStack, with all articles and pages stored as Markdown files, Sass files compiled into CSS, JavaScript split into chunks with Webpack, and Hugo building out static pages that we then serve directly from an edge CDN. Back in 2017 we built the entire site with Preact, but then moved to React in 2019 — and use it along with a few APIs for search, comments, authentication and checkout.

    The entire site is built with progressive enhancement in mind, meaning that you, dear reader, can read every Smashing article in its entirety without the need to boot the application at all. It’s not very surprising either — in the end, a published article doesn’t change much over the years, while dynamic pieces such as Membership authentication and checkout need the application to run.

    The entire build for deploying around 2500 articles live takes around 6 mins at the moment. The build process on its own has become quite a beast over time as well, with critical CSS injects, Webpack’s code splitting, dynamic inserts of advertising and feature panels, RSS (re)generation, and eventual A/B testing on the edge.

    In early 2020, we started the big refactoring of the CSS layout components. We never used CSS-in-JS or styled-components, but instead a good ol’ component-based system of Sass modules which would be compiled into CSS. Back in 2017, the entire layout was built with Flexbox, then rebuilt with CSS Grid and CSS Custom Properties in mid-2019. However, some pages needed special treatment due to new advertising spots and new product panels. So while the layout was working, it wasn’t working very well, and it was quite difficult to maintain.

    Additionally, the header with the main navigation had to change to accommodate more items that we wanted to display dynamically. Plus, we wanted to refactor some frequently used components used across the site, and the CSS used there needed some revision as well — the newsletter box being the most notable culprit. We started off by refactoring some components with utility-first CSS, but we never got to the point that it was used consistently across the entire site.

    The larger issue was the large JavaScript bundle that — not very surprisingly — was blocking the main-thread for hundreds of milliseconds. A big JavaScript bundle might seem out of place on a magazine that merely publishes articles, but actually, there is plenty of scripting happening behind the scenes.

    We have various states of components for authenticated and unauthenticated customers. Once you are signed in, we want to show all products in the final price, and as you add a book to the cart, we want to keep a cart accessible with a tap on a button — no matter what page you are on. Advertising needs to come in quickly without causing disruptive layout shifts, and the same goes for the native product panels that highlight our products. Plus a service worker that caches all static assets and serves them for repeat views, along with cached versions of articles that a reader has already visited.
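    As a sketch of the kind of cache policy such a service worker might apply (the URL patterns here are hypothetical, not Smashing's actual rules): static assets come cache-first, already-visited articles are served from cache while revalidating, and anything dynamic — cart, auth, ads — always goes to the network. The decision logic is kept as a pure function; the service-worker wiring is shown in comments.

```javascript
// Decide how a service worker should treat a request (hypothetical rules).
function cacheStrategy(url) {
  const { pathname } = new URL(url);
  // Static assets: serve from cache, fall back to network.
  if (/\.(?:css|js|woff2|svg|png|jpg)$/.test(pathname)) return 'cache-first';
  // Articles a reader has visited: cached copy first, refresh in background.
  if (pathname.startsWith('/articles/')) return 'stale-while-revalidate';
  // Cart, auth, advertising, etc.: always fresh.
  return 'network-only';
}

// In the service worker itself, roughly:
//   self.addEventListener('fetch', (event) => {
//     if (cacheStrategy(event.request.url) === 'cache-first') {
//       event.respondWith(caches.match(event.request)
//         .then((hit) => hit || fetch(event.request)));
//     }
//   });
```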

    So all of this scripting had to happen at some point, and it was draining on the reading experience, even though the script was coming in quite late. Frankly, we were painstakingly working on the site and new components without keeping a close eye on performance (and we had a few other things to keep in mind for 2020). The turning point came unexpectedly. Harry Roberts ran his (excellent) Web Performance Masterclass as an online workshop with us, and throughout the entire workshop, he was using Smashing as an example, highlighting issues that we had and suggesting solutions to those issues alongside useful tools and guidelines.

    Throughout the workshop, I was diligently taking notes and revisiting the codebase. At the time of the workshop, our Lighthouse scores were 60–68 on the homepage, and around 40-60 on article pages — and obviously worse on mobile. Once the workshop was over, we got to work.

    Identifying The Bottlenecks

    We often tend to rely on particular scores to get an understanding of how well we perform, yet too often single scores don’t provide a full picture. As David East eloquently noted in his article, web performance isn’t a single value; it’s a distribution. Even if a web experience is heavily and thoroughly optimized all around for performance, it can’t be just fast. It might be fast to some visitors, but ultimately it will also be slower (or slow) to some others.

    The reasons for it are numerous, but the most important one is a huge difference in network conditions and device hardware across the world. More often than not we can’t really influence those things, so we have to ensure that our experience accommodates them instead.

    In essence, our job then is to increase the proportion of snappy experiences and decrease the proportion of sluggish experiences. But for that, we need to get a proper picture of what the distribution actually is. Now, analytics tools and performance monitoring tools will provide this data when needed, but we looked specifically into CrUX, the Chrome User Experience Report. CrUX generates an overview of performance distributions over time, with traffic collected from Chrome users. Much of this data relates to Core Web Vitals, which Google announced back in 2020, and which also contribute to and are exposed in Lighthouse.

    Largest Contentful Paint (LCP) statistics showing a massive performance drop between May and September 2020
    The performance distribution for Largest Contentful Paint in 2020. Between May and September, performance dropped massively. Data from CrUX. (Large preview)
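To make the distribution framing concrete, here is a small sketch of how a 75th-percentile value — the statistic Core Web Vitals thresholds are judged against — could be estimated from a sample of field measurements. The helper and the LCP numbers are ours for illustration; CrUX itself aggregates real Chrome traffic.

```javascript
// Nearest-rank percentile: the smallest value such that at least
// p% of the samples are at or below it.
function percentile(values, p) {
    const sorted = [...values].sort((a, b) => a - b);
    const rank = Math.ceil((p / 100) * sorted.length) - 1;
    return sorted[Math.max(0, rank)];
}

// LCP samples in milliseconds (made-up numbers for illustration)
const lcpSamples = [1200, 1800, 2400, 2600, 3100, 4200, 5600, 8000];
const p75 = percentile(lcpSamples, 75);
console.log(`75th percentile LCP: ${p75}ms`); // 4200ms with this sample
```

With a p75 of 4200ms, this hypothetical page would sit firmly in the “poor” LCP bucket, even though half of the visitors saw it load in under 3 seconds — which is exactly why a single lab score hides so much.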

    We noticed that across the board, our performance regressed dramatically throughout the year, with particular drops around August and September. Once we saw these charts, we could look back into some of the PRs we had pushed live back then to study what had actually happened.

    It didn’t take long to figure out that just around these times we launched a new navigation bar. That navigation bar — used on all pages — relied on JavaScript to display navigation items in a menu on tap or on click, but the JavaScript bit of it was actually bundled within the app.js bundle. To improve Time To Interactive, we decided to extract the navigation script from the bundle and serve it inline.

    Around the same time we switched from an (outdated) manually created critical CSS file to an automated system that was generating critical CSS for every template — homepage, article, product page, event, job board, and so on — and inlining critical CSS at build time. Yet we didn’t really realize how much heavier the automatically generated critical CSS was. We had to explore it in more detail.

    And also around the same time, we were adjusting the web font loading, trying to push web fonts more aggressively with resource hints such as preload. This seemed to be backfiring on our performance efforts though, as web fonts were delaying the rendering of the content, being overprioritized next to the full CSS file.

    Now, one of the common reasons for regression is the heavy cost of JavaScript, so we also looked into Webpack Bundle Analyzer and Simon Hearne’s request map to get a visual picture of our JavaScript dependencies. It looked quite healthy at the start.

    A visual mind map of JavaScript dependencies
    Nothing groundbreaking really: the request map didn’t seem to be excessive at first. (Large preview)

    A few requests were going to the CDN, the cookie consent service Cookiebot, Google Analytics, plus our internal services for serving product panels and custom advertising. There didn’t appear to be many bottlenecks — until we looked a bit more closely.

    In performance work, it’s common to look at the performance of a few critical pages — most likely the homepage and a few article/product pages. However, while there is only one homepage, there might be plenty of various product pages, so we need to pick ones that are representative of our audience.

    In fact, as we’re publishing quite a few code-heavy and design-heavy articles on SmashingMag, over the years we’ve accumulated literally thousands of articles that contained heavy GIFs, syntax-highlighted code snippets, CodePen embeds, video/audio embeds, and nested threads of never-ending comments.

    When brought together, many of them were causing nothing short of an explosion in DOM size along with excessive main thread work — slowing down the experience on thousands of pages. Not to mention that with advertising in place, some DOM elements were injected late in the page’s lifecycle causing a cascade of style recalculations and repaints — also expensive tasks that can produce long tasks.

    All of this wasn’t showing up in the map we generated for a quite lightweight article page in the chart above. So we picked the heaviest pages we had — the almighty homepage, the longest one, the one with many video embeds, and the one with many CodePen embeds — and decided to optimize them as much as we could. After all, if they are fast, then pages with a single CodePen embed should be faster, too.

    With these pages in mind, the map looked a little bit different. Note the huge thick line heading to the Vimeo player and Vimeo CDN, with 78 requests coming from a Smashing article.

    A visual mind map showing performance issues especially in articles that used plenty of video and/or video embeds
    On some article pages, the graph looked different. Especially with plenty of code or video embeds, performance was dropping quite significantly. Unfortunately, many of our articles have them. (Large preview)

    To study the impact on the main thread, we took a deep-dive into the Performance panel in DevTools. More specifically, we were looking for tasks that last longer than 50 milliseconds (highlighted with a red rectangle in the right upper corner) and tasks that contain Recalculation styles (purple bar). The first would indicate expensive JavaScript execution, while the latter would expose style invalidations caused by dynamic injections of content in the DOM and suboptimal CSS. This gave us some actionable pointers of where to start. For example, we quickly discovered that our web font loading had a significant repaint cost, while JavaScript chunks were still heavy enough to block the main thread.

    A screenshot of the performance panel in DevTools showing JavaScript chunks that were still heavy enough to block the main thread
    Studying the Performance panel in DevTools. There were a few Long tasks, taking more than 50ms and blocking the main thread. (Large preview)
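The manual digging in DevTools can also be approximated in code. The sketch below uses a hypothetical findLongTasks helper over plain entry objects; in a browser, the entries would come from a PerformanceObserver watching longtask entries, as the comment suggests.

```javascript
// Hypothetical helper: keep only Long Tasks (over 50ms) and sort
// the worst offenders first.
function findLongTasks(entries, threshold = 50) {
    return entries
        .filter(entry => entry.duration > threshold)
        .sort((a, b) => b.duration - a.duration);
}

// In the browser, entries would come from a PerformanceObserver:
// new PerformanceObserver(list => console.log(findLongTasks(list.getEntries())))
//     .observe({ type: "longtask", buffered: true });

// Sample entries, roughly matching the durations we saw
const entries = [
    { name: "app.js evaluation", duration: 580 },
    { name: "web font switch", duration: 290 },
    { name: "uc.js", duration: 70 },
    { name: "tiny task", duration: 12 }
];
console.log(findLongTasks(entries).map(e => e.name));
// → ["app.js evaluation", "web font switch", "uc.js"]
```

Note that the real longtask entries only report attribution containers, not script names — the labels above are purely illustrative.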

    As a baseline, we looked very closely at Core Web Vitals, trying to ensure that we are scoring well across all of them. We chose to focus specifically on slow mobile devices — with slow 3G, 400ms RTT and 400kbps transfer speed, just to be on the pessimistic side of things. It’s not surprising then that Lighthouse wasn’t very happy with our site either, providing solid red scores for the heaviest articles, and tirelessly complaining about unused JavaScript, CSS, offscreen images and their sizes.

    A screenshot of Lighthouse data showing opportunities and estimated savings
    Lighthouse wasn’t particularly happy about the performance of some pages either. That’s the one with plenty of video embeds. (Large preview)

    Once we had some data in front of us, we could focus on optimizing the three heaviest article pages, with a focus on critical (and non-critical) CSS, the JavaScript bundle, long tasks, web font loading, layout shifts and third-party embeds. Later we’d also revise the codebase to remove legacy code and use new modern browser features. It seemed like a lot of work ahead of us, and indeed we were quite busy for the months to come.

    Improving The Order Of Assets In The <head>

    Ironically, the very first thing we looked into wasn’t even closely related to all the tasks we’ve identified above. In the performance workshop, Harry spent a considerable amount of time explaining the order of assets in the <head> of each page, making a point that to deliver critical content quickly means being very strategic and attentive about how assets are ordered in the source code.

    Now it shouldn’t come as a big revelation that critical CSS is beneficial for web performance. However, it did come as a bit of a surprise how much difference the order of all the other assets — resource hints, web font preloading, synchronous and asynchronous scripts, full CSS and metadata — has.

    We turned the entire <head> upside down, placing critical CSS before all asynchronous scripts and all preloaded assets such as fonts, images etc. We broke down the assets that we’ll be preconnecting to or preloading by template and file type, so that critical images, syntax highlighting and video embeds will be requested early only for a certain type of articles and pages.

    In general, we’ve carefully orchestrated the order in the <head>, reduced the number of preloaded assets that were competing for bandwidth, and focused on getting critical CSS right. If you’d like to dive deeper into some of the critical considerations with the <head> order, Harry highlights them in the article on CSS and Network Performance. This change alone brought us around 3–4 Lighthouse score points across the board.
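To make the ordering tangible, here’s a simplified sketch of a <head> arranged along those lines — critical CSS first, then preconnects and preloads scoped per template, asynchronous scripts, and the full CSS last among render-blocking resources. The hosts and file names are illustrative, not our literal markup.

```html
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>…</title>

  <!-- 1. Critical CSS, inlined, so first render isn’t blocked -->
  <style>/* critical CSS for this template */</style>

  <!-- 2. Resource hints and preloads, scoped per template/file type -->
  <link rel="preconnect" href="https://res.cloudinary.com" crossorigin>
  <link rel="preload" href="/fonts/Elena.woff2" as="font" type="font/woff2" crossorigin>

  <!-- 3. Asynchronous scripts -->
  <script src="/js/app.js" defer></script>

  <!-- 4. Full CSS last among blocking resources -->
  <link rel="stylesheet" href="/css/full.css">
</head>
```

The key idea is that nothing fetched via hints competes with the inlined critical CSS for early bandwidth.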

    Moving From Automated Critical CSS Back To Manual Critical CSS

    Moving the <head> tags around was a simple part of the story though. A more difficult one was the generation and management of critical CSS files. Back in 2017, we manually handcrafted critical CSS for every template, by collecting all of the styles required to render the first 1000 pixels in height across all screen widths. This of course was a cumbersome and slightly uninspiring task, not to mention maintenance issues for taming a whole family of critical CSS files and a full CSS file.

    So we looked into options for automating this process as a part of the build routine. There wasn’t really a shortage of tools available, so we tested a few and decided to run some experiments. We managed to set them up and running quite quickly. The output seemed to be good enough for an automated process, so after a few configuration tweaks, we plugged it in and pushed it to production. That happened around July–August last year, which is nicely visualized in the spike and performance drop in the CrUX data above. We kept going back and forth with the configuration, often having trouble with simple things like adding particular styles or removing others — e.g. cookie consent prompt styles that aren’t included on a page unless the cookie script has been initialized.

    In October, we introduced some major layout changes to the site, and when looking into the critical CSS, we ran into exactly the same issues yet again — the generated outcome was quite verbose, and wasn’t quite what we wanted. So as an experiment in late October, we joined forces to revisit our critical CSS approach and study how much smaller a handcrafted critical CSS would be. We took a deep breath and spent days around the code coverage tool on key pages. We grouped CSS rules manually and removed duplicates and legacy code in both places — the critical CSS and the main CSS. It was a much-needed cleanup indeed, as many styles that were written back in 2017–2018 have become obsolete over the years.

    As a result, we ended up with three handcrafted critical CSS files, and with three more files that are currently work in progress:

    The files are inlined in the head of each template, and at the moment they are duplicated in the monolithic CSS bundle that contains everything ever used (or not really used anymore) on the site. At the moment, we are looking into breaking down the full CSS bundle into a few CSS packages, so a reader of the magazine wouldn’t download styles from the job board or book pages, but then when reaching those pages would get a quick render with critical CSS and get the rest of the CSS for that page asynchronously — only on that page.
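One common pattern for the “rest of the CSS asynchronously” part is the print-media trick: the stylesheet downloads without blocking rendering, then applies itself once loaded. The per-template file name here is hypothetical.

```html
<!-- Hypothetical per-template bundle, loaded without blocking render -->
<link rel="stylesheet" href="/css/job-board.css"
      media="print" onload="this.onload=null; this.media='all'">
<noscript><link rel="stylesheet" href="/css/job-board.css"></noscript>
```

The browser fetches print stylesheets at low priority and doesn’t block rendering on them; swapping the media attribute on load then applies the styles to the screen.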

    Admittedly, handcrafted critical CSS files weren’t much smaller in size: we reduced the size of critical CSS files by around 14%. However, they included everything we needed in the right order from start to finish, without duplicates and overriding styles. This seemed to be a step in the right direction, and it gave us a Lighthouse boost of another 3–4 points. We were making progress.

    Changing The Web Font Loading

    With font-display at our fingertips, font loading seems to be a problem of the past. Unfortunately, that isn’t quite the case for us. You, dear readers, seem to visit a number of articles on Smashing Magazine. You also frequently return to the site to read yet another article — perhaps a few hours or days later, or perhaps a week later. One of the issues that we had with font-display used across the site was that for readers who moved between articles a lot, we noticed plenty of flashes between the fallback font and the web font (which shouldn’t normally happen, as fonts would be properly cached).

    That didn’t feel like a decent user experience, so we looked into options. On Smashing, we are using two main typefaces — Mija for headings and Elena for body copy. Mija comes in two weights (Regular and Bold), while Elena comes in three weights (Regular, Italic, Bold). We dropped Elena’s Bold Italic years ago during the redesign just because we used it on just a few pages. We subset the other fonts by removing unused characters and Unicode ranges.

    Our articles are mostly set in text, so we’ve discovered that most of the time on the site the Largest Contentful Paint is either the first paragraph of text in an article or the photo of the author. That means that we need to take extra care of ensuring that the first paragraph appears quickly in a fallback font, while gracefully changing over to the web font with minimal reflows.

    Take a close look at the initial loading experience of the front page (slowed down three times):

    We had four primary goals when figuring out a solution:

    1. On the very first visit, render the text immediately with a fallback font;
    2. Match font metrics of fallback fonts and web fonts to minimize layout shifts;
    3. Load all web fonts asynchronously and apply them all at once (max. 1 reflow);
    4. On subsequent visits, render all text directly in web fonts (without any flashing or reflows).

    Initially, we tried to use font-display: swap on font-face. This seemed to be the simplest option; however, since some readers visit a number of pages, we ended up with a lot of flickering across the six fonts that we were rendering throughout the site. Also, with font-display alone, we couldn’t group requests or repaints.

    Another idea was to render everything in the fallback font on the initial visit, then request and cache all fonts asynchronously, and only on subsequent visits deliver web fonts straight from the cache. The issue with this approach was that a number of readers come from search engines, and at least some of them will only see that one page — and we didn’t want to render an article in a system font alone.

    So what’s then?

    Since 2017, we’ve been using the Two-Stage-Render approach for web font loading, which basically describes two stages of rendering: one with a minimal subset of web fonts, and the other with a complete family of font weights. Back in the day, we created minimal subsets of Mija Bold and Elena Regular, which were the most frequently used weights on the site. Both subsets include only Latin characters, punctuation, numbers, and a few special characters. These fonts (ElenaInitial.woff2 and MijaInitial.woff2) were very small — often just around 10–15 KB. We serve them in the first stage of font rendering, displaying the entire page in these two fonts.

    CLS caused by web fonts flickering
    CLS caused by web fonts flickering (the shadows under author images are moving due to font change). Generated with Layout Shift GIF Generator. (Large preview)

    We do so with a Font Loading API which gives us information about which fonts have been loaded and which weren’t yet. Behind the scenes, it happens by adding a class .wf-loaded-stage1 to the body, with styles rendering the content in those fonts:

    .wf-loaded-stage1 article,
    .wf-loaded-stage1 promo-box,
    .wf-loaded-stage1 comments {
        font-family: ElenaInitial, sans-serif;
    }

    .wf-loaded-stage1 h1,
    .wf-loaded-stage1 h2,
    .wf-loaded-stage1 .btn {
        font-family: MijaInitial, sans-serif;
    }

    Because font files are quite small, hopefully they get through the network quite quickly. Then as the reader can actually start reading an article, we load full weights of the fonts asynchronously, and add .wf-loaded-stage2 to the body:

    .wf-loaded-stage2 article,
    .wf-loaded-stage2 promo-box,
    .wf-loaded-stage2 comments {
        font-family: Elena, sans-serif;
    }

    .wf-loaded-stage2 h1,
    .wf-loaded-stage2 h2,
    .wf-loaded-stage2 .btn {
        font-family: Mija, sans-serif;
    }

    So when loading a page, readers will get a small subset web font quickly first, and then we switch over to the full font family. Now, by default, these switches between fallback fonts and web fonts happen randomly, based on whatever comes first through the network. That might feel quite disruptive once you’ve started reading an article. So instead of leaving it to the browser to decide when to switch fonts, we group repaints, reducing the reflow impact to a minimum.

    /* Loading web fonts with Font Loading API to avoid multiple repaints. With help by Irina Lipovaya. */
    /* Credit to initial work by Zach Leatherman: */

    // If the Font Loading API is supported...
    // (If not, we stick to fallback fonts)
    if ("fonts" in document) {
        // Create new FontFace objects, one for each font
        let ElenaRegular = new FontFace(
            "Elena",
            "url(/fonts/ElenaWebRegular/ElenaWebRegular.woff2) format('woff2')"
        );
        let ElenaBold = new FontFace(
            "Elena",
            "url(/fonts/ElenaWebBold/ElenaWebBold.woff2) format('woff2')",
            { weight: "700" }
        );
        let ElenaItalic = new FontFace(
            "Elena",
            "url(/fonts/ElenaWebRegularItalic/ElenaWebRegularItalic.woff2) format('woff2')",
            { style: "italic" }
        );
        let MijaBold = new FontFace(
            "Mija",
            "url(/fonts/MijaBold/Mija_Bold-webfont.woff2) format('woff2')",
            { weight: "700" }
        );

        // Load all the fonts but render them at once
        // if they have successfully loaded
        let loadedFonts = Promise.all([
            ElenaRegular.load(),
            ElenaBold.load(),
            ElenaItalic.load(),
            MijaBold.load()
        ]).then(result => {
            result.forEach(font => document.fonts.add(font));
            document.documentElement.classList.add("wf-loaded-stage2");

            // Used for repeat views
            sessionStorage.foutFontsStage2Loaded = true;
        }).catch(error => {
            throw new Error(`Error caught: ${error}`);
        });
    }

    However, what if the first small subset of fonts isn’t coming through the network quickly? We noticed that this seems to happen more often than we’d like. In that case, after a 3s timeout expires, modern browsers fall back to a system font (in our case Arial), then switch over to ElenaInitial or MijaInitial, just to switch over to full Elena or Mija respectively later. That produced just a bit too much flashing for our taste. We were thinking about removing the first-stage render only for slow networks initially (via the Network Information API), but then we decided to remove it altogether.

    So in October, we removed the subsets altogether, along with the intermediate stage. Whenever all weights of both Elena and Mija fonts are successfully downloaded by the client and ready to be applied, we initiate stage 2 and repaint everything at once. And to make reflows even less noticeable, we spent a bit of time matching fallback fonts and web fonts. That mostly meant applying slightly different font sizes and line heights for elements painted in the first visible portion of the page.

    For that, we used font-style-matcher and (ahem, ahem) a few magic numbers. That’s also the reason why we initially went with -apple-system and Arial as global fallback fonts; San Francisco (rendered via -apple-system) seemed to be a bit nicer than Arial, but if it’s not available, we chose to use Arial just because it’s widely spread across most OSes.

    In CSS, it would look like this:

    .article__summary {
        font-family: -apple-system, Arial, BlinkMacSystemFont, Roboto Slab, Droid Serif, Segoe UI, Ubuntu, Cantarell, Georgia, sans-serif;
        font-style: italic;

        /* Warning: magic numbers ahead! */
        /* San Francisco Italic and Arial Italic have larger x-height, compared to Elena */
        font-size: 0.9213em;
        line-height: 1.487em;
    }

    .wf-loaded-stage2 .article__summary {
        font-family: Elena, sans-serif;
        font-size: 1em; /* Original font-size for Elena Italic */
        line-height: 1.55em; /* Original line-height for Elena Italic */
    }

    This worked fairly well. We do display text immediately, and web fonts come in on the screen grouped, ideally causing exactly one reflow on the first view, and no reflows altogether on subsequent views.

    Once the fonts have been downloaded, we store them in a service worker’s cache. On subsequent visits we first check if the fonts are already in the cache. If they are, we retrieve them from the service worker’s cache and apply them immediately. And if not, we start all over with the fallback-web-font-switcheroo.

    This solution reduced the number of reflows to a minimum (one) on relatively fast connections, while also keeping the fonts persistently and reliably in the cache. In the future, we sincerely hope to replace magic numbers with f-mods. Perhaps Zach Leatherman would be proud.
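For the curious, here’s roughly what replacing the magic numbers with f-mods (font metric overrides) could look like — a fallback @font-face whose metrics are adjusted so Arial occupies (almost) the same space as Elena. The override percentages below are assumptions for illustration, not measured values; only the 92.13% figure mirrors the 0.9213em magic number above.

```css
/* Sketch only: override values are assumptions, not measured metrics */
@font-face {
  font-family: "Elena-fallback";
  src: local("Arial");
  size-adjust: 92.13%;     /* mirrors the 0.9213em magic number */
  ascent-override: 83%;
  descent-override: 21%;
  line-gap-override: 0%;
}

.article__summary {
  /* No per-element font-size/line-height juggling needed anymore */
  font-family: Elena, "Elena-fallback", sans-serif;
}
```

With metric overrides on the fallback face itself, the stage-2 class only needs to swap the family, and the geometry of the page barely moves.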

    Identifying And Breaking Down The Monolithic JS

    When we studied the main thread in the DevTools’ Performance panel, we knew exactly what we needed to do. There were eight Long Tasks that were taking between 70ms and 580ms, blocking the interface and making it non-responsive. In general, these were the scripts costing the most:

    • uc.js, the cookie consent prompt script (70ms)
    • style recalculations caused by the incoming full.css file (176ms) (the critical CSS doesn’t contain styles below the 1000px height across all viewports)
    • advertising scripts running on load event to manage panels, shopping cart, etc. + style recalculations (276ms)
    • web font switch, style recalculations (290ms)
    • app.js evaluation (580ms)

    We focused first on the ones that were most harmful — so to say, the longest Long Tasks.

    A screenshot taken from DevTools showing style invalidations for the Smashing Magazine front page
    At the bottom, Devtools shows style invalidations — a font switch affected 549 elements that had to be repainted. Not to mention layout shifts it was causing. (Large preview)

    The first one was occurring due to expensive layout recalculations caused by the change of fonts (from fallback font to web font), causing over 290ms of extra work (on a fast laptop and a fast connection). By removing stage one from the font loading alone, we were able to gain around 80ms back. That wasn’t good enough though, because we were way beyond the 50ms budget. So we started digging deeper.

    The main reason why recalculations happened was the huge differences between fallback fonts and web fonts. By matching the line-height and sizes for fallback fonts and web fonts, we were able to avoid many situations where a line of text would wrap onto a new line in the fallback font, but then get slightly smaller and fit on the previous line, causing a major change in the geometry of the entire page, and consequently massive layout shifts. We played with letter-spacing and word-spacing as well, but it didn’t produce good results.

    With these changes, we were able to cut another 50–80ms, but we weren’t able to reduce it below 120ms without displaying the content in a fallback font first and switching to the web font afterwards. Obviously, it should massively affect only first-time visitors, as subsequent page views would be rendered with the fonts retrieved directly from the service worker’s cache, without costly reflows due to the font switch.

    By the way, it’s quite important to note that in our case, most Long Tasks weren’t caused by massive JavaScript, but instead by layout recalculations and parsing of the CSS, which meant that we needed to do a bit of CSS cleaning, especially watching out for situations when styles are overwritten. In a way, that was good news, because we didn’t have to deal with complex JavaScript issues that much. However, it turned out not to be straightforward, as we are still cleaning up the CSS to this very day. We were able to remove two Long Tasks for good, but we still have a few outstanding ones and quite a way to go. Fortunately, most of the time we aren’t way above the magical 50ms threshold.

    The much bigger issue was the JavaScript bundle we were serving, occupying the main thread for a whopping 580ms. Most of this time was spent in booting up app.js which contains React, Redux, Lodash, and a Webpack module loader. The only way to improve performance with this massive beast was to break it down into smaller pieces. So we looked into doing just that.

    With Webpack, we split up the monolithic bundle into smaller chunks with code-splitting, at about 30KB per chunk. We did some package.json cleansing and version upgrades for all production dependencies, adjusted the browserslistrc setup to address the two latest browser versions, upgraded Webpack and Babel to the latest versions, moved to Terser for minification, and used ES2017 (+ browserslistrc) as a target for script compilation.
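A trimmed-down sketch of what the relevant Webpack settings might look like — the chunk size, Terser and the browserslist-driven target are the ones mentioned above, but the exact structure and values here are illustrative rather than our production config.

```javascript
// webpack.config.js — illustrative sketch, not our exact production config
const TerserPlugin = require("terser-webpack-plugin");

module.exports = {
  mode: "production",
  // .browserslistrc (last 2 versions) + ES2017 output keep transpilation light
  target: "browserslist",
  optimization: {
    splitChunks: {
      chunks: "all",
      maxSize: 30 * 1024 // aim for roughly 30KB per chunk
    },
    minimizer: [new TerserPlugin()]
  }
};
```

splitChunks.maxSize is a hint rather than a hard limit: Webpack splits chunks that exceed it where module boundaries allow.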

    We also used BabelEsmPlugin to generate modern versions of existing dependencies. Finally, we’ve added prefetch links to the header for all necessary script chunks and refactored the service worker, migrating to Workbox with Webpack (workbox-webpack-plugin).

    A screenshot showing JavaScript chunks affecting performance with each running no longer than 40ms on the main thread
    JavaScript chunks in action, with each running no longer than 40ms on the main thread. (Large preview)

    Remember when we switched to the new navigation back in mid-2020, just to see a huge performance penalty as a result? The reason for it was quite simple. While in the past the navigation was just static plain HTML and a bit of CSS, with the new navigation, we needed a bit of JavaScript to act on opening and closing of the menu on mobile and on desktop. That was causing rage clicks when you would click on the navigation menu and nothing would happen, and of course, had a penalty cost in Time-To-Interactive scores in Lighthouse.

    We removed the script from the bundle and extracted it as a separate script. Additionally, we did the same thing for other standalone scripts that were used rarely — for syntax highlighting, tables, video embeds and code embeds — and removed them from the main bundle; instead, we granularly load them only when needed.

    Performance stats for the smashing magazine front page showing the function call for nav.js that happened right after a monolithic app.js bundle had been executed
    Notice that the function call for nav.js is happening after a monolithic app.js bundle is executed. That’s not quite right. (Large preview)

    However, what we didn’t notice for months was that although we removed the navigation script from the bundle, it was loading after the entire app.js bundle was evaluated, which wasn’t really helping Time-To-Interactive (see image above). We fixed it by preloading nav.js and deferring it to execute in the order of appearance in the DOM, and managed to save another 100ms with that operation alone. By the end, with everything in place we were able to bring the task to around 220ms.

    A screenshot of the Long Task reduced by almost 200ms
    By prioritizing the nav.js script, we were able to reduce the Long task by almost 200ms. (Large preview)
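The fix boils down to two lines in the <head>: a preload hint so nav.js is fetched early, and defer so both scripts still execute in DOM order — nav.js before the heavy app.js bundle. The paths are illustrative.

```html
<!-- Fetch nav.js early… -->
<link rel="preload" href="/js/nav.js" as="script">

<!-- …while defer preserves DOM-order execution: nav.js runs before app.js -->
<script src="/js/nav.js" defer></script>
<script src="/js/app.js" defer></script>
```

Deferred scripts are guaranteed to execute in document order after parsing, so the small navigation script no longer waits for the monolithic bundle to be evaluated.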

    We managed to get some improvements in place, but still have quite a way to go, with further React and Webpack optimizations on our to-do list. At the moment we still have three major Long Tasks — the font switch (120ms), app.js execution (220ms), and style recalculations due to the size of the full CSS (140ms). For us, that means cleaning up and breaking up the monolithic CSS next.

    It’s worth mentioning that these results are really best-case-scenario results. On a given article page we might have a large number of code embeds and video embeds, along with other third-party scripts that would require a separate conversation.

    Dealing With 3rd-Parties

    Fortunately, our third-party scripts footprint (and the impact of their friends’ fourth-party-scripts) wasn’t huge from the start. But when these third-party scripts accumulated, they would drive performance down significantly. This goes especially for video embedding scripts, but also syntax highlighting, advertising scripts, promo panels scripts and any external iframe embeds.

    Obviously, we defer all of these scripts to start loading after the DOMContentLoaded event, but once they finally come on stage, they cause quite a bit of work on the main thread. This shows up especially on article pages, which make up the vast majority of content on the site.

    The first thing we did was to allocate proper space to all assets that are injected into the DOM after the initial page render. That meant width and height for all advertising images, and styling for code snippets. We found out that because all the scripts were deferred, new styles were invalidating existing styles, causing massive layout shifts for every code snippet that was displayed. We fixed that by adding the necessary styles to the critical CSS on the article pages.

    We’ve re-established a strategy for optimizing images (preferably AVIF or WebP — still a work in progress though). All images below the 1000px height threshold are natively lazy-loaded (with <img loading=lazy>), while the ones at the top are prioritized (<img loading=eager>). The same goes for all third-party embeds.

    We replaced some dynamic parts with their static counterparts — e.g. while a note about an article saved for offline reading was appearing dynamically after the article was added to the service worker’s cache, now it appears statically as we are, well, a bit optimistic and expect it to be happening in all modern browsers.

    As of the moment of writing, we’re preparing facades for code embeds and video embeds as well. Plus, all offscreen images get the decoding=async attribute, so the browser has free rein over when and how it decodes them, asynchronously and in parallel.

    A screenshot of the main front page of smashing magazine being highlighted by the Diagnostics CSS tool for each image that does not have a width/height attribute
    Diagnostics CSS in use: highlighting images that don’t have width/height attributes, or are served in legacy formats. (Large preview)

    To ensure that our images always include width and height attributes, we’ve also modified Harry Roberts’ snippet and Tim Kadlec’s diagnostics CSS to highlight whenever an image isn’t served properly. It’s used in development and editing but obviously not in production.
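The core of such a diagnostics stylesheet is just a handful of attribute selectors — a simplified sketch of the idea, not the actual snippet we use:

```css
/* Flag images that are missing explicit dimensions */
img:not([width]),
img:not([height]) {
  outline: 3px solid red;
}

/* …and images still served in a legacy format */
img[src$=".gif"] {
  outline: 3px solid orange;
}
```

Dropping a stylesheet like this into a development build makes offending images jump out visually while browsing the site, without any tooling beyond the browser itself.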

    One technique that we used frequently to track what exactly is happening as the page is being loaded was slow-motion loading.

    First, we’ve added a simple line of code to the diagnostics CSS, which provides a noticeable outline for all elements on the page.

    * {
      outline: 3px solid red;
    }

    A screenshot of an article published on smashing magazine with red lines on the layout to help check the stability and rendering on the page
    A quick trick to check the stability of the layout, by adding * { outline: 3px solid red; } and observing the boxes as the browser is rendering the page. (Large preview)

    Then we record a video of the page loading on a slow and on a fast connection. Afterwards, we rewatch the video, slowing down the playback and moving back and forth to identify where massive layout shifts happen.

    Here’s the recording of a page being loaded on a fast connection:

    Recording for the loading of the page with an outline applied, to observe layout shifts.

    And here’s the recording being played back in slow motion to study what happens with the layout:

    Auditing the layout shifts by rewatching a recording of the site loading in slow motion, watching out for height and width of content blocks, and layout shifts.

    By auditing the layout shifts this way, we were able to quite quickly notice what’s not quite right on the page, and where massive recalculation costs are happening. As you have probably noticed, adjusting the line-height and font-size on headings might go a long way toward avoiding large shifts.

    With these simple changes alone, we were able to boost performance score by a whopping 25 Lighthouse points for the video-heaviest article, and gain a few points for code embeds.

    Enhancing The Experience

    We’ve tried to be quite strategic in pretty much everything from loading web fonts to serving critical CSS. However, we’ve done our best to use some of the new technologies that became available last year.

    We are planning to use AVIF by default to serve images on SmashingMag, but we aren’t quite there yet: many of our images are served from Cloudinary (which already has beta support for AVIF), but many come directly from our CDN, and we don’t have logic in place yet to generate AVIFs on the fly, so that would need to be a manual process for now.

    We’re lazily rendering some of the offscreen components of the page with content-visibility: auto. For example, the footer, the comments section, as well as the panels way below the first 1000px of height, are all rendered later, after the visible portion of each page has been rendered.

    We’ve played a bit with link rel="prefetch" and even link rel="prerender" (NoPush prefetch) some parts of the page that are very likely to be used for further navigation — for example, to prefetch assets for the first articles on the front page (still in discussion).

    We also preload author images to reduce the Largest Contentful Paint, and some key assets that are used on each page, such as dancing cat images (for the navigation) and shadow used for all author images. However, all of them are preloaded only if a reader happens to be on a larger screen (>800px), although we are looking into using Network Information API instead to be more accurate.
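    As a plain-JavaScript sketch of such conditional preloading (the 800px threshold matches the text above, but the asset paths and the connection heuristic are illustrative assumptions, not our actual implementation):

```javascript
// Hypothetical decision helper: preload decorative assets only on large
// viewports and reasonably fast connections (assumed heuristic).
function shouldPreloadDecorativeAssets(viewportWidth, effectiveType) {
  const fastEnough = effectiveType === undefined || effectiveType === '4g';
  return viewportWidth > 800 && fastEnough;
}

// Browser-only part: inject <link rel="preload"> elements for key assets.
if (typeof document !== 'undefined') {
  const connection = navigator.connection || {};
  if (shouldPreloadDecorativeAssets(window.innerWidth, connection.effectiveType)) {
    // Illustrative paths, not the actual Smashing assets.
    ['/images/dancing-cat.svg', '/images/author-shadow.png'].forEach((href) => {
      const link = document.createElement('link');
      link.rel = 'preload';
      link.as = 'image';
      link.href = href;
      document.head.appendChild(link);
    });
  }
}
```

Keeping the decision in a small pure function makes it easy to swap the viewport check for a Network Information API check later.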

    We’ve also reduced the size of full CSS and all critical CSS files by removing legacy code, refactoring a number of components, and removing the text-shadow trick that we were using to achieve perfect underlines with a combination of text-decoration-skip-ink and text-decoration-thickness (finally!).

    Work To Be Done

    We’ve spent a significant amount of time working on all the minor and major changes on the site, and we’ve noticed considerable improvements on desktop and a noticeable boost on mobile. At the moment of writing, our articles are scoring on average between 90 and 100 on Lighthouse on desktop, and around 65-80 on mobile.

    Lighthouse score on desktop shows between 90 and 100
    Performance score on desktop. The homepage is already heavily optimized. (Large preview)
    Lighthouse score on mobile shows between 65 and 80
    On mobile, we hardly ever reach a Lighthouse score above 85. The main issues are still Time to Interactive and Total Blocking Time. (Large preview)

    The reason for the poorer score on mobile is clearly the poor Time to Interactive and Total Blocking Time, due to the booting of the app and the size of the full CSS file. So there is still some work to be done there.

    As for the next steps, we are currently looking into further reducing the size of the CSS, specifically breaking it down into modules, similarly to JavaScript, and loading some parts of the CSS (e.g. checkout, job board, books/eBooks) only when needed.

    We are also exploring options for further bundling experimentation on mobile to reduce the performance impact of app.js, although it seems to be non-trivial at the moment. Finally, we’ll be looking into alternatives to our cookie prompt solution, rebuilding our containers with CSS clamp(), replacing the padding-bottom ratio technique with aspect-ratio, and looking into serving as many images as possible in AVIF.

    That’s It, Folks!

    Hopefully, this little case study will be useful to you, and perhaps there are one or two techniques that you might be able to apply to your project right away. In the end, performance is a sum of all the fine little details that, when added up, make or break your customer’s experience.

    While we are very committed to getting better at performance, we also work on improving accessibility and the content of the site.

    So if you spot anything that’s not quite right or anything that we could do to further improve Smashing Magazine, please let us know in the comments to this article!

    Also, if you’d like to stay updated on articles like this one, please subscribe to our email newsletter for friendly web tips, goodies, tools and articles, and a seasonal selection of Smashing cats.


    Front-End Performance Checklist 2021 — Smashing Magazine


    Is web font delivery optimized?
    The first question that’s worth asking is whether we can get away with using UI system fonts in the first place — we just need to double check that they appear correctly on various platforms. If that’s not the case, chances are high that the web fonts we are serving include glyphs and extra features and weights that aren’t being used. We can ask our type foundry to subset web fonts, or, if we are using open-source fonts, subset them on our own with Glyphhanger or Fontsquirrel. We can even automate our entire workflow with Peter Müller’s subfont, a command line tool that statically analyses your pages in order to generate the most optimal web font subsets, and then injects them into your pages.

    WOFF2 support is great, and we can use WOFF as fallback for browsers that don’t support it — or perhaps legacy browsers could be served system fonts. There are many, many, many options for web font loading, and we can choose one of the strategies from Zach Leatherman’s “Comprehensive Guide to Font-Loading Strategies,” (code snippets also available as Web font loading recipes).

    Probably the better options to consider today are Critical FOFT with preload and “The Compromise” method. Both of them use a two-stage render for delivering web fonts in steps — first a small supersubset required to render the page fast and accurately with the web font, and then load the rest of the family async. The difference is that “The Compromise” technique loads polyfill asynchronously only if font load events are not supported, so you don’t need to load the polyfill by default. Need a quick win? Zach Leatherman has a quick 23-min tutorial and case study to get your fonts in order.

    In general, it might be a good idea to use the preload resource hint to preload fonts, but in your markup include the hints after the link to critical CSS and JavaScript. With preload, there is a puzzle of priorities, so consider injecting rel="preload" elements into the DOM just before the external blocking scripts. According to Andy Davies, “resources injected using a script are hidden from the browser until the script executes, and we can use this behaviour to delay when the browser discovers the preload hint.” Otherwise, font loading will cost you in the first render time.

    A screenshot of slide 93 showing two example of images with a title next to them saying ‘Metrics prioritization: preload one of each family’
    When everything is critical, nothing is critical. preload only one or a maximum of two fonts of each family. (Image credit: Zach Leatherman – slide 93) (Large preview)

    It’s a good idea to be selective and choose files that matter most, e.g. the ones that are critical for rendering or that would help you avoid visible and disruptive text reflows. In general, Zach advises preloading one or two fonts of each family — it also makes sense to delay some font loading if the fonts are less critical.

    It has become quite common to use local() value (which refers to a lo­cal font by name) when defining a font-family in the @font-face rule:

    /* Warning! Not a good idea! */
    @font-face {
      font-family: Open Sans;
      src: local('Open Sans Regular'),
           url('opensans.woff2') format('woff2'),
           url('opensans.woff') format('woff');
    }

    The idea is reasonable: some popular open-source fonts such as Open Sans come pre-installed with some drivers or apps, so if the font is available locally, the browser doesn’t need to download the web font and can display the local font immediately. As Bram Stein noted, “though a local font matches the name of a web font, it most likely isn’t the same font. Many web fonts differ from their “desktop” version. The text might be rendered differently, some characters may fall back to other fonts, OpenType features can be missing entirely, or the line height may be different.”

    Also, as typefaces evolve over time, the locally installed version might be very different from the web font, with characters looking very different. So, according to Bram, it’s better to never mix locally installed fonts and web fonts in @font-face rules. Google Fonts has followed suit by disabling local() in the CSS results for all users, other than Android requests for Roboto.

    Nobody likes waiting for the content to be displayed. With the font-display CSS descriptor, we can control the font loading behavior and enable content to be readable immediately (with font-display: optional) or almost immediately (with a timeout of 3s, as long as the font gets successfully downloaded — with font-display: swap). (Well, it’s a bit more complicated than that.)

    However, if we want to minimize the impact of text reflows, we could use the Font Loading API (supported in all modern browsers). Specifically, that means that for every font, we’d create a FontFace object, then try to fetch them all, and only then apply them to the page. This way, we group all repaints by loading all fonts asynchronously, and then switch from fallback fonts to the web font exactly once. (Take a look at Zach’s explanation, starting at 32:15, and the code snippet.)

    /* Load two web fonts using JavaScript */
    /* Zach Leatherman: */

    // Remove existing @font-face blocks
    // Create two
    let font = new FontFace("Noto Serif", /* ... */);
    let fontBold = new FontFace("Noto Serif", /* ... */);

    // Load two fonts
    let fonts = await Promise.all([font.load(), fontBold.load()]);

    // Group repaints and render both fonts at the same time!
    fonts.forEach(font => document.fonts.add(font));

    To initiate a very early fetch of the fonts with the Font Loading API in use, Adrian Bece suggests adding a non-breaking space &nbsp; at the top of the body, and hiding it visually with aria-visibility="hidden" and a .hidden class:

    <body class="no-js">
      <!-- ... Website content ... -->
      <div aria-visibility="hidden" class="hidden" style="font-family: '[web-font-name]'">
          <!-- There is a non-breaking space here -->
      </div>
    </body>

    This goes along with CSS that has different font families declared for different states of loading, with the change triggered by Font Loading API once the fonts have successfully loaded:

    body:not(.wf-merriweather--loaded):not(.no-js) {
      font-family: [fallback-system-font];
      /* Fallback font styles */
    }

    .wf-merriweather--loaded,
    .no-js {
      font-family: "[web-font-name]";
      /* Webfont styles */
    }

    /* Accessible hiding */
    .hidden {
      position: absolute;
      overflow: hidden;
      clip: rect(0 0 0 0);
      height: 1px;
      width: 1px;
      margin: -1px;
      padding: 0;
      border: 0;
    }

    If you’ve ever wondered why, despite all your optimizations, Lighthouse still suggests eliminating render-blocking resources (fonts), in the same article Adrian Bece provides a few techniques to make Lighthouse happy, along with Gatsby Omni Font Loader, a performant asynchronous font loading and Flash Of Unstyled Text (FOUT) handling plugin for Gatsby.

    Now, many of us might be using a CDN or a third-party host to load web fonts from. In general, it’s always better to self-host all your static assets if you can, so consider using google-webfonts-helper, a hassle-free way to self-host Google Fonts. And if it’s not possible, you can perhaps proxy the Google Font files through the page origin.

    It’s worth noting though that Google is doing quite a bit of work out of the box, so a server might need a bit of tweaking to avoid delays (thanks, Barry!).

    This is quite important, especially since Chrome v86 (released October 2020), where cross-site resources like fonts can’t be shared in the same browser cache anymore, due to the partitioned browser cache. This behavior had been a default in Safari for years.

    But if it’s not possible at all, there is a way to get to the fastest possible Google Fonts with Harry Roberts’ snippet:

    <!-- By Harry Roberts.
    - 1. Preemptively warm up the fonts’ origin.
    - 2. Initiate a high-priority, asynchronous fetch for the CSS file. Works in
    -    most modern browsers.
    - 3. Initiate a low-priority, asynchronous fetch that gets applied to the page
    -    only after it’s arrived. Works in all browsers with JavaScript enabled.
    - 4. In the unlikely event that a visitor has intentionally disabled
    -    JavaScript, fall back to the original method. The good news is that,
    -    although this is a render-blocking request, it can still make use of the
    -    preconnect which makes it marginally faster than the default.
    -->

    <!-- [1] -->
    <link rel="preconnect"
          href="https://fonts.gstatic.com"
          crossorigin />

    <!-- [2] -->
    <link rel="preload"
          as="style"
          href="$CSS&display=swap" />

    <!-- [3] -->
    <link rel="stylesheet"
          href="$CSS&display=swap"
          media="print" onload="this.media='all'" />

    <!-- [4] -->
    <noscript>
      <link rel="stylesheet"
            href="$CSS&display=swap" />
    </noscript>

    Harry’s strategy is to pre-emptively warm up the fonts’ origin first. Then we initiate a high-priority, asynchronous fetch for the CSS file. Afterwards, we initiate a low-priority, asynchronous fetch that gets applied to the page only after it’s arrived (with a print stylesheet trick). Finally, if JavaScript isn’t supported, we fall back to the original method.

    Ah, talking about Google Fonts: you can shave up to 90% of the size of Google Fonts requests by declaring only characters you need with &text. Plus, the support for font-display was added recently to Google Fonts as well, so we can use it out of the box.

    A quick word of caution though. If you use font-display: optional, it might be suboptimal to also use preload as it will trigger that web font request early (causing network congestion if you have other critical path resources that need to be fetched). Use preconnect for faster cross-origin font requests, but be cautious with preload as preloading fonts from a different origin will incur network contention. All of these techniques are covered in Zach’s Web font loading recipes.

    On the other hand, it might be a good idea to opt out of web fonts (or at least second stage render) if the user has enabled Reduce Motion in accessibility preferences or has opted in for Data Saver Mode (see Save-Data header), or when the user has a slow connectivity (via Network Information API).
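    A sketch of such an opt-out in plain JavaScript; the exact thresholds below are assumptions for illustration, not a recommendation:

```javascript
// Decide whether to load web fonts at all, based on user preferences
// and connection quality (assumed heuristics).
function shouldLoadWebFonts({ saveData, effectiveType, prefersReducedMotion }) {
  if (saveData) return false;             // Data Saver Mode is on
  if (prefersReducedMotion) return false; // Reduce Motion is enabled
  if (effectiveType === 'slow-2g' || effectiveType === '2g') return false;
  return true;
}

// Browser-only part: flag the document so the CSS can opt into web fonts.
if (typeof document !== 'undefined') {
  const connection = navigator.connection || {};
  const ok = shouldLoadWebFonts({
    saveData: connection.saveData === true,
    effectiveType: connection.effectiveType,
    prefersReducedMotion:
      window.matchMedia('(prefers-reduced-motion: reduce)').matches,
  });
  if (ok) document.documentElement.classList.add('webfonts-enabled');
}
```

The CSS would then scope its @font-face-based families to the webfonts-enabled class (a hypothetical class name).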

    We can also use the prefers-reduced-data CSS media query to not define font declarations if the user has opted into data-saving mode (there are other use-cases, too). The media query would basically expose if the Save-Data request header from the Client Hint HTTP extension is on/off to allow for usage with CSS. Currently supported only in Chrome and Edge behind a flag.

    Metrics? To measure web font loading performance, consider the All Text Visible metric (the moment when all fonts have loaded and all content is displayed in web fonts), Time to Real Italics, as well as Web Font Reflow Count after first render. Obviously, the lower these metrics are, the better the performance is.

    What about variable fonts, you might ask? It’s important to note that variable fonts might require a significant performance consideration. They give us a much broader design space for typographic choices, but that comes at the cost of a single serial request, as opposed to a number of individual file requests.

    While variable fonts drastically reduce the overall combined file size of font files, that single request might be slow, blocking the rendering of all content on a page. So subsetting and splitting the font into character sets still matter. On the good side though, with a variable font in place, we’ll get exactly one reflow by default, so no JavaScript will be required to group repaints.

    Now, what would make a bulletproof web font loading strategy then? Subset fonts and prepare them for the 2-stage-render, declare them with a font-display descriptor, use the Font Loading API to group repaints, and store fonts in a persistent service worker’s cache. On the first visit, inject the preloading of fonts just before the blocking external scripts. You could fall back to Bram Stein’s Font Face Observer if necessary. And if you’re interested in measuring the performance of font loading, Andreas Marschke explores performance tracking with the Font API and UserTiming API.
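    The Font Loading API part of that strategy might be sketched like this. The storage flag name is made up, and the font set and storage are passed in as parameters purely to keep the logic testable; in the browser you’d pass document.fonts and sessionStorage:

```javascript
// Two-stage-friendly loader: fetch all FontFace objects first, then add
// them in one go, so the fallback-to-webfont switch causes a single reflow.
async function loadWebFonts(fontFaces, fontSet, storage) {
  // On repeat visits the fonts should already sit in the service worker’s
  // cache, so the flag lets the page apply them immediately without staging.
  const repeatVisit = storage.getItem('fonts-loaded') === 'true';
  const loaded = await Promise.all(fontFaces.map((face) => face.load()));
  loaded.forEach((face) => fontSet.add(face));
  storage.setItem('fonts-loaded', 'true');
  return repeatVisit;
}

// In the browser (hypothetical font name and path):
// loadWebFonts([new FontFace('Mija', "url(/fonts/mija.woff2)")],
//              document.fonts, sessionStorage);
```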

    Finally, don’t forget to include unicode-range to break down a large font into smaller language-specific fonts, and use Monica Dinculescu’s font-style-matcher to minimize a jarring shift in layout, due to sizing discrepancies between the fallback and the web fonts.

    Alternatively, to emulate a web font with a fallback font, we can use @font-face descriptors to override font metrics (demo, enabled in Chrome 87). (Note that adjustments get complicated with complex font stacks though.)

    Does the future look bright? With progressive font enrichment, eventually we might be able to “download only the required part of the font on any given page, and for subsequent requests for that font to dynamically ‘patch’ the original download with additional sets of glyphs as required on successive page views”, as Jason Pamental explains it. Incremental Transfer Demo is already available, and it’s work in progress.


    Methods Of Improving And Optimizing Performance In React Apps — Smashing Magazine


    About The Author

    Shedrack Akintayo is a software engineer from Lagos, Nigeria, who has a love for community building, open source and creating content and tech for the next …

    Since React was introduced, it has transformed the way front-end developers build web applications, and its virtual DOM is famous for effectively rendering components. In this tutorial, we will discuss various methods of optimizing performance in React applications, and also the features of React that we can use to improve performance.

    React enables web applications to update their user interfaces (UIs) quickly, but that does not mean your medium or large React application will perform efficiently. Its performance will depend on how you use React when building it, and on your understanding of how React operates and the process through which components live through the various phases of their lifecycle. React offers a lot of performance improvements to a web app, and you can achieve these improvements through various techniques, features, and tools.

    In this tutorial, we will discuss various methods of optimizing performance in React applications, and also the features of React that we can use to improve performance.

    Where To Start Optimizing Performance In A React Application?

    We can’t begin to optimize an app without knowing exactly when and where to optimize. You might be asking, “Where do we start?”

    During the initial rendering process, React builds a DOM tree of components. So, when data changes in the DOM tree, we want React to re-render only those components that were affected by the change, skipping the other components in the tree that were not affected.

    However, React could end up re-rendering all components in the DOM tree, even though not all are affected. This will result in longer loading time, wasted time, and even wasted CPU resources. We need to prevent this from happening. So, this is where we will focus our optimization effort.

    In this situation, we could configure every component to only render or diff when necessary, to avoid wasting resources and time.

    Measuring Performance

    Never start the optimization process of your React application based on what you feel. Instead, use the measurement tools available to analyze the performance of your React app and get a detailed report of what might be slowing it down.

    Analyzing React Components With Chrome’s Performance Tab

    According to React’s documentation, while you’re still in development mode, you can use the “Performance” tab in the Chrome browser to visualize how React components mount, update, and unmount.
    For example, the image below shows Chrome’s “Performance” tab profiling and analyzing my blog in development mode.

    Performance profiler summary
    Performance profiler summary (Large preview)

    To do this, follow these steps:

    1. Disable all extensions temporarily, especially React Developer Tools, because they can mess with the result of the analysis. You can easily disable extensions by running your browser in incognito mode.
    2. Make sure the application is running in development mode. That is, the application should be running on your localhost.
    3. Open Chrome’s Developer Tools, click on the “Performance” tab, and then click the “Record” button.
    4. Perform the actions you want to profile. Don’t record more than 20 seconds, or else Chrome might hang.
    5. Stop the recording.
    6. React events will be grouped under the “User Timing” label.

    The numbers from the profiler are relative; most of the time, components will render more quickly in production. Nevertheless, this should help you to figure out when the UI is updated by mistake, as well as how deep and how often the UI updates occur.

    React Developer Tools Profiler

    According to React’s documentation, in react-dom 16.5+ and react-native 0.57+, enhanced profiling capabilities are available in developer mode using React Developer Tools Profiler. The profiler uses React’s experimental Profiler API to collate timing information about each component that’s rendered, in order to identify performance bottlenecks in a React application.

    Just download React Developer Tools for your browser, and then you can use the profiler tool that ships with it. The profiler can only be used either in development mode or in the production-profiling build of React v16.5+. The image below is the profiler summary of my blog in development mode using React Developer Tools Profiler:

    React Developer Tools Profiler flamegraph
    React Developer Tools Profiler flamegraph (Large preview)

    To achieve this, follow these steps:

    1. Download React Developer Tools.
    2. Make sure your React application is either in development mode or in the production-profiling build of React v16.5+.
    3. Open Chrome’s “Developer Tools” tab. A new tab named “Profiler” will be available, provided by React Developer Tools.
    4. Click the “Record” button, and perform the actions you want to profile. Ideally, stop recording after you have performed the actions you want to profile.
    5. A graph (known as a flamegraph) will appear with all of the event handlers and components of your React app.

    Note: See the documentation for more information.

    Memoization With React.memo()

    React v16.6.0 was released with an additional API, a higher-order component called React.memo(). According to the documentation, this exists only as a performance optimization.

    Its name, “memo” comes from memoization, which is basically a form of optimization used mainly to speed up code by storing the results of expensive function calls and returning the stored result whenever the same expensive function is called again.

    Memoization is a technique for executing a function once, usually a pure function, and then saving the result in memory. If we try to execute that function again, with the same arguments as before, it will just return the previously saved result from the first function’s execution, without executing the function again.

    Mapping the description above to the React ecosystem, the functions mentioned are React components and the arguments are props.
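    Outside of React, the idea fits in a few lines of plain JavaScript (slowSquare is a made-up stand-in for an expensive pure function):

```javascript
// A minimal memoizer: cache results keyed by the serialized arguments,
// and return the cached value when the same arguments come in again.
function memoize(fn) {
  const cache = new Map();
  return (...args) => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) {
      cache.set(key, fn(...args));
    }
    return cache.get(key);
  };
}

let calls = 0;
const slowSquare = memoize((n) => {
  calls += 1; // count how often the real computation runs
  return n * n;
});

slowSquare(4); // computed: calls is now 1
slowSquare(4); // served from the cache: calls is still 1
```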

    The default behavior of a component declared using React.memo() is that it renders only if the props in the component have changed. It does a shallow comparison of the props to check this, but an option is available to override this.

    React.memo() boosts the performance of a React app by avoiding re-rendering components whose props haven’t changed or when re-rendering is not needed.

    The code below is the basic syntax of React.memo():

    const MemoizedComponent = React.memo((props) => {
      // Component code goes in here
    });
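    The shallow comparison can be overridden by passing a comparison function as an optional second argument. The rule below (compare only the title prop) is purely illustrative:

```javascript
// Custom props comparison for React.memo(). Returning true means
// "props are equal, skip re-rendering" (note: this is the opposite of
// shouldComponentUpdate's convention).
function areEqual(prevProps, nextProps) {
  return prevProps.title === nextProps.title;
}

// Usage in a component file (sketch):
// const MemoizedPhoto = React.memo(Photo, areEqual);
```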

    When To Use React.memo()

    • Pure functional component
      You can use React.memo() if your component is functional, is given the same props, and always renders the same output. You can also use React.memo() on non-pure-functional components with React hooks.
    • The component renders often
      You can use React.memo() to wrap a component that renders often.
    • The component re-renders with same props
      Use React.memo() to wrap a component that is usually provided with the same props during re-rendering.
    • Medium to high elements
      Use it for a component that contains a medium to high number of UI elements to check props for equality.

    Note: Be careful when memoizing components that make use of props as callbacks. Be sure to use the same callback function instance between renderings. This is because the parent component could provide different instances of the callback function on every render, which will cause the memoization process to break. To fix this, make sure that the memoized component always receives the same callback instance.
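    The underlying pitfall is easy to demonstrate in plain JavaScript: two identical-looking inline functions are still two different instances, which is exactly what a shallow props comparison sees on every render:

```javascript
// Each render of a parent that passes an inline callback creates a brand
// new function object, so a memoized child sees "changed" props every time.
const onClickFirstRender = () => console.log('save');
const onClickSecondRender = () => console.log('save');

console.log(onClickFirstRender === onClickSecondRender); // false

// In React, useCallback keeps the instance stable between renders (sketch):
// const handleSave = useCallback(() => save(item.id), [item.id]);
```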

    Let’s see how we can use memoization in a real-world situation. The functional component below, called “Photo”, uses React.memo() to prevent re-rendering.

    export function Photo({ title, location }) {
      return (
        <div>
          <div>Photo title: {title}</div>
          <div>Location: {location}</div>
        </div>
      );
    }

    // memoize the component
    export const MemoizedPhoto = React.memo(Photo);

    The code above consists of a functional component that displays a div containing a photo title and the location of the subject in the photo. We are also memoizing the component by creating a new function and calling it MemoizedPhoto. Memoizing the photo component will prevent it from re-rendering as long as the title and location props are the same on subsequent renderings.

    // On first render, React calls MemoizedPhoto function.
    <MemoizedPhoto
      title="Eiffel Tower"
      location="Paris, France" />

    // On next render, React does not call MemoizedPhoto function,
    // preventing re-rendering
    <MemoizedPhoto
      title="Eiffel Tower"
      location="Paris, France" />

    Here, React calls the memoized function only once. It won’t render the component in the next call as long as the props remain the same.

    Bundling And Minification

    In React single-page applications, we can bundle and minify all our JavaScript code into a single file. This is OK, as long as our application is relatively small.

    As our React application grows, bundling and minifying all of our JavaScript code into a single file becomes problematic, difficult to understand, and tedious. It will also affect the performance and loading time of our React app because we are sending a large JavaScript file to the browser. So, we need some process to help us split the code base into various files and deliver them to the browser in intervals as needed.

    In a situation like this, we can use some form of asset bundler like Webpack, and then leverage its code-splitting functionality to split our application into multiple files.

    Code-splitting is suggested in Webpack’s documentation as a means to improve the loading time of an application. It is also suggested in React’s documentation for lazy-loading (serving only the things currently needed by the user), which can dramatically improve performance.

    Webpack suggests three general approaches to code-splitting:

    • Entry points
      Manually split code using entry configuration.
    • Duplication prevention
      Use SplitChunksPlugin to de-duplicate and split chunks.
    • Dynamic imports
      Split code via inline function calls within modules.
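    For example, the "duplication prevention" approach could look roughly like this in a webpack.config.js (the entry names and paths are hypothetical):

```javascript
// Sketch of a webpack configuration using SplitChunksPlugin. Modules
// shared between the two entry points are pulled into a common chunk,
// downloaded once, and cached by the browser.
const config = {
  entry: {
    home: './src/home.js',
    dashboard: './src/dashboard.js',
  },
  optimization: {
    splitChunks: {
      chunks: 'all', // de-duplicate code shared between entries
    },
  },
};

// In webpack.config.js this object would be exported:
// module.exports = config;
```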

    Benefits Of Code Splitting

    • Splitting code assists the browser in caching resources, especially code that doesn’t change often.
    • It also helps the browser to download resources in parallel, which reduces the overall loading time of the application.
    • It enables us to split code into chunks that will be loaded on demand or as needed by the application.
    • It keeps the initial downloading of resources on first render relatively small, thereby reducing the loading time of the app.
    Bundling and minification process
    Bundling and minification process (Large preview)

    Immutable Data Structures

    React’s documentation talks of the power of not mutating data. Any data that cannot be changed is immutable. Immutability is a concept that React programmers should understand.

    An immutable value or object cannot be changed. So, when there is an update, a new value is created in memory, leaving the old one untouched.

    We can use immutable data structures and React.PureComponent to automatically check for a complex state change. For example, if the state in your application is immutable, you can actually save all state objects in a single store with a state-management library like Redux, enabling you to easily implement undo and redo functionality.

    Don’t forget that we cannot change immutable data once it’s created.
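    In plain JavaScript, an immutable update builds a new object and array instead of mutating the old ones, which makes change detection a cheap reference check:

```javascript
// Immutable state update with spread syntax: the original object and
// array are left untouched; only new containers are created.
const state = { user: 'Ada', todos: ['write article'] };

const nextState = {
  ...state,
  todos: [...state.todos, 'publish article'], // new array, old one intact
};

console.log(state.todos.length);  // 1: the original state is unchanged
console.log(nextState !== state); // true: a reference check detects the update
```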

    Benefits Of Immutable Data Structures

    • They have no side effects.
    • Immutable data objects are easy to create, test, and use.
    • They help us to write logic that can be used to quickly check for updates in state, without having to check the data over and over again.
    • They help to prevent temporal coupling (a type of coupling in which code depends on the order of execution).

    The following libraries help to provide a set of immutable data structures:

    • immutability-helper
      Mutate a copy of data without changing the source.
    • Immutable.js
      Immutable persistent data collections for JavaScript that increase efficiency and simplicity.
    • seamless-immutable
      Immutable data structures for JavaScript that are backwards-compatible with normal JavaScript arrays and objects.
    • React-copy-write
      This gives immutable state with a mutable API.

    Other Methods Of Improving Performance

    Use A Production Build Before Deployment

    React’s documentation suggests using the minified production build when deploying your app.

    React Developer Tools’ “production build” warning
    React Developer Tools’ “production build” warning (Large preview)

    Avoid Anonymous Functions

    Because anonymous functions aren’t assigned an identifier (via const/let/var), they aren’t persistent whenever a component inevitably gets rendered again. This causes JavaScript to allocate new memory each time this component is re-rendered, instead of allocating a single piece of memory only once, like when named functions are being used.

    import React, { Component } from 'react';

    // Don’t do this.
    class Dont extends Component {
      render() {
        return <button onClick={() => console.log('Do not do this')}>Click</button>;
      }
    }

    // The better way
    class Do extends Component {
      handleClick = () => {
        console.log('This is OK');
      };

      render() {
        return <button onClick={this.handleClick}>Click</button>;
      }
    }

    The code above shows two ways to make a button perform an action on click. The first block passes an anonymous function to the onClick prop, which hurts performance; the second passes a named class method as the handler, which is the correct approach in this scenario.
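    The identity problem is easy to see outside React as well. In this plain-JavaScript sketch (render and renderStable are illustrative stand-ins, not React APIs), each call to render() represents a re-render:

    ```javascript
    // The inline arrow is a brand-new function object on every call, so
    // a memoized child comparing props with === always sees a change.
    function render() {
      return { onClick: () => console.log('clicked') };
    }

    console.log(render().onClick === render().onClick); // false

    // A handler defined once keeps a stable identity across renders.
    const handleClick = () => console.log('clicked');
    function renderStable() {
      return { onClick: handleClick };
    }

    console.log(renderStable().onClick === renderStable().onClick); // true
    ```

    This is exactly why a named class method (or a memoized callback) lets shallow prop comparison skip unnecessary re-renders.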

    Mounting And Unmounting Components Often Is Expensive

    Using conditionals or ternaries to make a component disappear (i.e. to unmount it) is not advisable, because unmounting and remounting forces the browser to repaint and reflow. This is an expensive process, because the positions and geometries of HTML elements in the document have to be recalculated. Instead, we can use CSS’ opacity and visibility properties to hide the component. This way, the component stays in the DOM but is invisible, at a much lower performance cost.
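    As a sketch, the hiding could be driven by a small helper that swaps inline styles instead of unmounting (hiddenStyle is a hypothetical name for illustration, not a React API):

    ```javascript
    // Returns inline styles that hide a component while keeping it
    // mounted, so toggling it avoids a full unmount/remount cycle.
    function hiddenStyle(visible) {
      return visible
        ? { visibility: 'visible', opacity: 1 }
        : { visibility: 'hidden', opacity: 0 };
    }

    // Usage in JSX would look something like:
    //   <Modal style={hiddenStyle(showModal)} />
    console.log(hiddenStyle(false).visibility); // 'hidden'
    ```

    Keep in mind that an element hidden with visibility still occupies layout space, unlike one that has been unmounted.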

    Virtualize Long Lists

    The documentation suggests that if you are rendering a list with a large amount of data, you should render only the small portion of it that fits in the visible viewport at a time, and then render more as the list is scrolled; hence, data is displayed only when it is in the viewport. This process is called “windowing”: only a small subset of the rows is rendered at any given time. Two popular libraries for doing this, both maintained by Brian Vaughn, are:

    • react-window
    • react-virtualized

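    Assuming a fixed row height, the core windowing calculation boils down to mapping the scroll offset to a slice of rows (visibleRange is an illustrative helper, not an API from either library):

    ```javascript
    // Compute which rows intersect the viewport; only these get rendered.
    function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows) {
      const start = Math.floor(scrollTop / rowHeight);
      const end = Math.min(
        totalRows,
        Math.ceil((scrollTop + viewportHeight) / rowHeight)
      );
      return { start, end }; // render rows[start..end)
    }

    // 10,000 rows of 30px each, a 600px viewport scrolled to 3,000px:
    console.log(visibleRange(3000, 600, 30, 10000)); // { start: 100, end: 120 }
    ```

    Only 20 of the 10,000 rows exist in the DOM at that moment; the rest are created on demand as the user scrolls.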
    There are several other ways to improve the performance of a React application; this article has covered the most important and effective ones.

    I hope you’ve enjoyed reading through this tutorial. You can learn more via the resources listed below. If you have any questions, leave them in the comments section below. I’ll be happy to answer every one of them.
