
    Are Modern Best Practices Bad For The Web? — Smashing Magazine


    We’re asking: are modern best practices bad for the web? Are modern frameworks taking us down the wrong path? Drew McLellan speaks to Lean Web expert Chris Ferdinandi to find out.




    Drew McLellan: He’s the author of the Vanilla JS Pocket Guide series, creator of the Vanilla JS Academy training program, and host of the Vanilla JS Podcast. He’s developed a tips newsletter that’s read by nearly 10,000 developers each weekday. He’s taught developers at organizations like Chobani and The Boston Globe, and his JavaScript plugins have been used by organizations like Apple and Harvard Business School. Most of all, he loves to help people learn Vanilla JavaScript. So we know he’d rather pick Vanilla JavaScript over a framework, but did you know he was once picked out in a police lineup as being the person least likely to have committed the crime? My Smashing friends, please welcome Chris Ferdinandi. Hello, Chris. How are you?

    Chris Ferdinandi: Oh, I’m smashing. Thanks for having me.

    Drew: I wanted to talk to you today about this concept of a Lean Web, which is something of a passion for you, isn’t it?

    Chris: Yes, very much so.

    Drew: Why don’t you set the scene for us? When we talk about a Lean Web, what is the problem we are trying to solve?

    Chris: Yeah, great question. Just as a caveat for all the listeners, this episode might get a little old man yells at cloud. I’m going to try to avoid that. When I look at the way we build for the web today, it feels a little bit like a bloated over-engineered mess. I’ve come to believe that a lot of what we think of as best practices today might actually be making the web worse.

    Chris: The Lean Web is an approach to web development that is focused on simplicity, on performance, and the developer experience over… I’m sorry, not the developer experience. The user experience rather, over the developer experience, and the ease of building things from a team perspective, which is what I think where we put a lot of focus today and as we’ll probably get into in our conversation.

    Chris: I’ve actually come to find that a lot of these things we think of as improving the developer experience do so for a subset of developers, but not necessarily everybody who’s working on the thing you’re building. So there’s a whole bunch of issues with that too, that we can dig into. But really, the Lean Web is about focusing on simplicity and performance for the user and really prioritizing or putting the focus on the people who use the things we make rather than us, the people who are making it.

    Drew: As the web matures as a development platform, there seems to be this ever increasing drive towards specialization.

    Chris: Yes.

    Drew: People who used to cover Full Stack, and then we split into front-end and back-end. And then that front-end split into people who do CSS or JavaScript development. And then increasingly within JavaScript, it becomes more specialized. Somebody might consider themselves and market themselves as a React developer or an Angular developer, and their entire identity and outlook is based around a particular framework that they are highly skilled in. Is this dependency on frameworks, the core of our work on the web, a bad thing?

    Chris: It’s nuanced. I used to be very strongly in the yes, always camp. I think broadly, I still feel like yes, our obsession as an industry with frameworks and tools in general really, is potentially a little bit to our detriment. I don’t think frameworks are inherently bad. I think they’re useful for a very narrow subset of use cases. And we end up using them for almost everything, including lots of situations where they’re really not necessarily the best choice for you or for the project.

    Chris: When I think about a lot of the issues that we have on the web today, the core of those issues really starts with our over-reliance on frameworks. Everything else that comes after that is, in many ways, a result of just how much we throw at the web: not just frameworks, but JavaScript in general. I say that as someone who professionally teaches people how to write and use JavaScript. That’s how I make my money. And I’m here saying that I think we use too much JavaScript, which is sometimes a little bit of an odd thing.

    Drew: In the time before the big frameworks sprouted up, we used to build user interfaces and things with jQuery or whatever. And then frameworks came along and they gave us more of this concept of a state-based UI.

    Chris: Yes.

    Drew: Now, that’s a fairly complex bit of engineering that you’re required to get in place. Does working with less JavaScript exclude using something like that, or do you have to re-implement it yourself? Are you just creating a load of boilerplate?

    Chris: A lot of it depends on what you’re doing. You can build a simple state-based UI with… I don’t know, maybe a dozen or so lines of code. But if you have a non-changing interface, I would honestly say state-based UI is not necessarily the right approach. There are probably other things you can do instead: think static site generators, some pre-rendered markup, even an old-school WordPress or PHP-driven site.
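    The “dozen or so lines” Chris describes might look something like this sketch. All the names here are illustrative, and the target element is stubbed as a plain object so the example is self-contained; in a browser it would be a real DOM node:

```javascript
// Minimal state-based UI sketch: state lives in one place, and the template
// function re-renders the markup whenever state changes.
function createApp(el, template) {
  let state = {};
  function render() {
    // Naive approach: replace all of the markup on every update.
    el.innerHTML = template(state);
  }
  return {
    setState(updates) {
      state = { ...state, ...updates };
      render();
    },
  };
}

// Usage, with a stand-in for document.querySelector("#app"):
const el = { innerHTML: "" };
const app = createApp(el, (s) => `<p>Count: ${s.count ?? 0}</p>`);
app.setState({ count: 1 });
console.log(el.innerHTML); // "<p>Count: 1</p>"
```

    Blowing away the whole markup like this is exactly the “expensive and focus-destroying” approach Chris critiques a little later in the conversation, which is where diffing comes in.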

    Chris: But where this starts to get interesting is when you get into more dynamic and interactive interfaces. Not just apps. I know people love to draw this line between websites and apps, and I think there’s this weird blend between the two of them and the line is not always clear. When you start to get into more of a “the user does stuff, something changes” situation, state-based UI becomes a little bit more important. But you still don’t need tons of code to make that happen.

    Chris: I look at something like React or Vue, which are absolutely amazing pieces of engineering. I don’t want to take away from the people who work on those. As a learning exercise, I ended up building my own mini-framework just to get a better sense of how these things work under the hood. It is really hard to account for all of the different moving pieces, so I have tremendous respect for the people who build and work on these tools. But React and Vue are both about 30 kilobytes after minifying and gzipping, and once you unpack them, they’re substantially bigger than that.

    Chris: Not all of it, but a good chunk of that weight is devoted to this thing called the virtual DOM. There are alternatives that use similar APIs or approaches. For React, you have Preact, which is 2.5 kilobytes or about 3 kilobytes instead of 30. It’s a tenth of the size. For Vue, you have Alpine JS instead, which is about 7 kilobytes. Still, substantially smaller. The big difference between those and their big brother counterparts, is that they’ve shed the virtual DOM. They may drop a method or two. But generally, it’s the same approach and the same kind of API in the way of working with code, and they’re substantially smaller.

    Chris: I think a lot of the tools we use are potentially overpowered for the things we’re trying to do. If you need a state-based UI and you need reactive data and these dynamic interfaces, that’s great. I think one of the big things I try and talk about today is not… never use tools. For me, Vanilla JS is not you’re handwriting every single line of code and every single project you do. That’s madness. I couldn’t do that, I don’t do that. But it’s being more selective about the tools we use. We always go for the multi-tool, the Swiss Army knife that has all these things in it. And sometimes, all you really need is a pair of scissors. You don’t need all the other stuff, but we have it anyways.

    Chris: Which is a really long way of answering your question. Sometimes it’s more code than you could or would want to write yourself, but it’s not nearly as much code as I think we assume it requires. When I say you don’t need a framework, I get a lot of push-back around this idea that, “Well, if you don’t use a framework, you’re basically writing your own.” Then that comes with its own problems. I think there’s a place in between using React or Vue and writing your own framework, and it’s maybe picking something that’s a little bit smaller. There are sometimes reasons where writing your own framework might actually be the right call, depending on what you’re trying to do. It’s all very fluid and subjective.

    Drew: It’s quite interesting that as a learning exercise, you implemented your own state-based framework. I remember in the past, I used to think that every time I reached for a library or something, I liked to think that I could implement it myself.

    Chris: Sure, sure.

    Drew: But reaching for the library saved me the hassle of doing that. I knew if I had to write this code myself, I knew what approach I’d take to do it. And that was true all the way up to using things like jQuery and things. These days, I think if I had to write my own version of Vue or React, I have almost no idea what’s happening now in that library, in all that code.

    Drew: For those of us who are not familiar, when you say something like Preact drops the virtual DOM and makes everything a lot smaller, what’s that virtual DOM giving us?

    Chris: To answer that question, I want to take just a slight step back. One of the nicest things that frameworks and other state-based libraries give you is DOM diffing. If you’re not really updating the UI that much, you could get by with saying, “Here’s a string of what the markup is supposed to look like,” and just dropping all of that markup into an element. When you need to change something, you do it again. That is a little expensive for browsers, because it triggers a repaint.

    Chris: But I think potentially more importantly than that, one of the issues with doing that is that if you have any sort of interactive element in there, a form field, a link that someone has focused on, that focus is lost, because even though you have a new thing that looks similar, it’s not the same literal element. When focus is lost, it can create confusion for people using screen readers. There’s just a whole bunch of problems with that.

    Chris: Any state-based UI thing worth its weight is going to implement some form of DOM diffing, where they say, “Here’s what the UI should look like. Here’s what it looks like right now in the DOM. What’s different?” And it’s going to go and change those things. It’s effectively doing the stuff you would do manually updating the UI yourself, but it’s doing it for you under the hood. So you can just say, “Here’s what I want it to look like.” And then the library or framework figures it out.

    Chris: The smaller things like Preact or Alpine, they’re actually doing that directly. They’re converting the string you provide them of what the UI should look like into HTML elements. And then they’re comparing each element to its corresponding piece in the literal DOM. As you end up with UIs that get bigger and bigger and bigger, that can have a performance implication because querying the DOM over and over again becomes expensive. If you want to get a sense for the type of interface where this becomes a problem, right-click and inspect element on the “Favorite” button on Twitter, or the “Like” button on Facebook. And take a look at the nesting of divs that gets you to that element. It’s very, very deep. It’s like a dozen or so divs, nested one inside the other just for that single tweet.

    Chris: When you start going that many layers down, it starts to really impact performance. What the virtual DOM does is, instead of checking the real DOM every time, it creates an object-based map of what the UI looks like in JavaScript. It then does the same thing for the UI you want to replace the existing one with, and it compares those two. That’s a lot more performant, in theory, than doing that comparison in the real DOM. Once it gets a list of the things it needs to change, it just runs off and makes those changes, but it only has to touch the DOM once. It’s not checking every single element, every single time. If you have interfaces like Twitter’s or Facebook’s or QuickBooks’ or something like that, a virtual DOM probably makes a lot of sense.
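    A toy version of that object-based comparison might look like the sketch below. The { tag, text, children } node shape and the patch types are invented for illustration; real virtual DOM implementations also handle attributes, keys, event listeners, and far more:

```javascript
// Compare a desired UI tree against the current one and collect only the
// differences as "patches" that would be applied to the real DOM in one pass.
function diff(next, current, patches = [], path = []) {
  if (!next) {
    patches.push({ type: "remove", path }); // element no longer exists
  } else if (!current) {
    patches.push({ type: "add", path, node: next }); // brand-new element
  } else if (next.tag !== current.tag) {
    patches.push({ type: "replace", path, node: next }); // element type changed
  } else {
    if (next.text !== current.text) {
      patches.push({ type: "text", path, text: next.text }); // text change only
    }
    const count = Math.max(
      next.children?.length ?? 0,
      current.children?.length ?? 0
    );
    for (let i = 0; i < count; i++) {
      diff(next.children?.[i], current.children?.[i], patches, [...path, i]);
    }
  }
  return patches;
}

const before = { tag: "p", text: "Count: 0", children: [] };
const after = { tag: "p", text: "Count: 1", children: [] };
console.log(diff(after, before));
// [ { type: 'text', path: [], text: 'Count: 1' } ]
```

    Only the text patch comes back, rather than a wholesale replacement of the paragraph, which is why focused elements survive the update.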

    Chris: The challenge with it is… the difference between Preact and React is 27 kilobytes before you unpack the whole thing into its actual code weight. The raw download, unpacking, and compiling time on that alone can add quite a bit of latency to the initial load of a UI. That becomes even more pronounced if your users aren’t on the latest devices from Apple. On an Android device from a couple of years ago or a feature phone, or if you have people in developing countries, the time to get going is just slower. And then on top of that, the actual interactive time is slower because of the extra abstraction.

    Chris: So it’s not just that you load it and they’re comparable in speed. Each micro-interaction that someone makes, and the changes that need to happen, can also be slightly slower just because of all that extra code in there. If you have a really, really complex UI with lots of nested elements and lots of data, then the virtual DOM’s performance gains outweigh that extra code weight. But for the typical UI of the typical apps that I see most developers using React or Vue for, the benefit you get from the virtual DOM just isn’t there, and they’d be better off without it. Even if you want to keep the same conveniences as React, use Preact. It’s a fraction of the size, it’ll work exactly the same way, and it’ll be more performant. This is the kind of thing that I tend to argue for.

    Chris: We need to be better about picking the right tool for the job. And if you go with an approach like that, then if you get to a point where a virtual DOM actually makes sense, it’s much easier to port from Preact to React than if you’d rolled your own. If you’re really worried about that situation, you get some future-proofing built in too.

    Drew: Some might make the argument that these frameworks, things like Vue and React, are so highly optimized for performance that you get so much benefit there. Surely just pairing that with a package manager and a bundler, to make sure you’re only sending down the code that you need, puts you way ahead already?

    Chris: Yeah. I don’t agree. I don’t really have much more to elaborate on that other than… I guess maybe, but not really. Even with a bundler, you still need that React core. Even with the bundling, that’s still going to be bigger than using something like Preact.

    Chris: Drew, I really appreciate you leading the question on this. Because one of the other things I do talk about in my book, The Lean Web, and my talk of the same name, is how these tools… You mentioned the bundling, for example. One of the things we do to get around the performance hit that we take from using all this JavaScript is we throw even more JavaScript at the front-end to account for it. One of the ways we do that is package managers and module bundlers.

    Chris: As you alluded to… for those of you who don’t know, these are tools that will compile all of your little individual JavaScript bits into one big file. The newer ones, and the more… I don’t want to call them thoughtful ones, will use a feature called tree shaking, where they get rid of any code that isn’t actually needed. If your dependencies include code that isn’t used for the thing you’ve actually built, they’ll drop some of that stuff out to make your bundles as small as possible. It’s actually not a terrible idea, but it results in this thing I typically call dependency hell, where you have this really delicate house of cards of dependencies on top of dependencies on top of dependencies.
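    A rough illustration of the tree-shaking idea: with ES module named exports, a bundler can statically see which functions are actually imported and drop the rest. The function names and file layout here are invented for the example:

```javascript
// utils.js (sketch) — with ES modules these would each be `export function …`
function formatDate(d) {
  // Format a Date as YYYY-MM-DD.
  return d.toISOString().slice(0, 10);
}
function debounce(fn, ms) {
  // Delay calls to fn until ms of quiet time has passed.
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

// If app.js only does `import { formatDate } from "./utils.js";`, a
// tree-shaking bundler (Rollup, or webpack in production mode) can detect
// that debounce is never referenced and omit it from the final bundle.
console.log(formatDate(new Date(Date.UTC(2020, 6, 1)))); // "2020-07-01"
```

    The catch Chris describes is everything around this: the config, the dependency graph, and the transitive packages the bundler has to understand before it can shake anything out.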

    Chris: Getting your process set up takes time. And anybody who has ever run an npm install and then discovered that a bunch of dependencies were out of date, and had to spend an hour trying to figure out which ones needed to be fixed, and oh, it’s actually a dependency of a dependency and you don’t have the ability to go fix it yourself… It’s a whole thing. Maybe it works for you, but I’d rather spend my time not messing around trying to get my dependencies together.

    Chris: I’ve started collecting tweets from people where they complain about time wasted on their workflow. One of my favorites, from Brad Frost a couple of years ago, was talking about the depressing realization that the thing you’ve been slogging through in modern JS could have taken you 10 minutes in jQuery. I’m not really a jQuery fan, but I feel that pain when it comes to working with frameworks.

    Chris: The other issue with a lot of these tools is they start to become gatekeepers. I don’t know how much you really want to dive into this or not, Drew, but one of my big push-backs is against “JS all the things” in a workflow. Especially when you start to then say, “Well, if we’re already using JS for the HTML, why not use it for CSS too?” You start to exclude a lot of really talented people from the development process. It’s just a really big loss for the project and for the community as a whole.

    Drew: Well, I certainly am… I started picking up React at the beginning of 2020, adding that to my skillset. I’ve been doing it now for nearly seven months. I’ve got to say one part I’m least confident in is setting up the tooling around getting a project started.

    Drew: It seems like there’s an awful lot of work to get something to Hello World, and there’s even more that you’ve got to know to get it to be production ready. That has to make development more difficult to get started with if this is being put forward as what you should be doing in 2020 to learn to be a web developer. Somebody coming in new to it is going to have a real problem. It’s going to be a real barrier to entry, isn’t it?

    Chris: Absolutely. The other piece here is that JavaScript developers aren’t always the only people working on a code base or contributing in a meaningful way to that code base. The more we make knowing JavaScript a requirement for working with a code base, the less likely those people are to be able to actually participate in the project.

    Chris: An example of that that I like to talk about is WordPress, who has been recently… I shouldn’t say recently at this point. It’s been a couple of years now. But they’ve been shifting their back-end stack more and more to JavaScript, away from PHP, which is not inherently a bad thing. Their new editor, Gutenberg, is built on React.

    Chris: In 2018, WordPress’s lead accessibility consultant, Rian Rietveld, whose name I almost certainly butchered… she very publicly resigned from her position and documented why in a really detailed article. The core of the problem was that she and her team were tasked with auditing this editor to make sure that it was going to be accessible. WordPress now powers about 30% of the web. Their goal is to democratize publishing, so accessibility is a really important part of that. It should be an important part of any web project, but for them in particular, it is acutely important. Her team’s whole job was to make sure that this editor would be accessible. But because neither she nor anyone on her team had React experience, and because they couldn’t find volunteers in the accessibility community with that experience, it was really difficult for her and her team to do their work.

    Chris: Historically, they could identify errors and then go in and fix them. But with the new React based workflow, they were reduced to identifying bugs and then filing tickets. They got added to a backlog along with all the other feature development requests that the JavaScript developers were working on. A lot of stuff that could have been easily fixed didn’t make it into the first release.

    Chris: In May of 2019, about a year after Rian resigned, a detailed accessibility audit was done on Gutenberg. The full report was 329 pages. The executive summary alone was 34 pages. It documented 91 accessibility-related bugs in quite a bit of detail. Many of these were really… I don’t want to call them simple low-hanging fruit, but a lot of them were basic things that Rian’s team could have fixed, which would have freed up the developers to focus on feature development. That’s ultimately what it seems like they ended up doing: spending a lot of time on feature development and pushing this stuff off till later. But that team is super important to the project, and they suddenly got locked out of the process because of a technical choice.

    Chris: Alex Russell is a developer on the Chrome team. He wrote an article a couple of years ago called “The ‘Developer Experience’ Bait-and-Switch,” where he talked about the straw man argument of frameworks: this idea that these tools let you move faster, and because of that, you can iterate faster and deliver better experiences. It’s the idea that a better developer experience means a better user experience. I think this is a very clear example of how that argument doesn’t always play out the way people believe it does. It’s a better experience for maybe some people, not for everyone.

    Chris: CSS developers, people working on design systems, their ability to create tools that others can use sometimes gets limited by these tool choices too. In my experience… I used to be better at CSS. It’s changed a lot in the last few years and I am nowhere near as good as I used to be. But the people who are really advanced CSS developers, and I do mean that in the truest sense… people who work on CSS are proper web developers working on a programming language. It is, or can be, a very specialized thing. The people who are exceptionally good at it, in my experience, are not always also very good at JavaScript, because you end up diving really deep into one thing and you slide a little bit on some other stuff. Their ability to work with these technologies gets hindered as well, which is too bad, because it used to not be the case.

    Chris: When the stack was simpler, it was easier for people from multiple disciplines to participate in the development process. I think that’s to the detriment of both the things we build and the community at large, when that’s no longer the case.

    Drew: I found this recently in a project, researching ways to deal with some of the CSS problems we were having: workflow problems, multiple people working on the project, the bundle size increasing, and the old CSS never getting removed. It was a React project, so we were looking at some of these CSS-in-JavaScript approaches to see which would be best for us to use to solve the problems that we had. What became very quickly apparent is that there’s not just one solution for this. There are dozens of different ways you could do it.

    Drew: CSS in JS is a general approach, but you might go from one project to the next and it’s completely influenced in a completely different way. Even if you’re a CSS person and you learn a particular approach on a project, those skills may not be transferrable anyway. It’s very difficult to see how somebody should investment too much time in learning that if they’re not particularly comfortable with JavaScript.

    Chris: Yeah. The other interesting thing, which I think you just got at a little bit: when I talk about this, one of the biggest pieces of push-back I get is that frameworks enforce conventions. If you’re going with Vanilla JavaScript, you’ve got this green-field, blue-sky, you-can-do-anything-you-want kind of project. With React, there’s a React way to do things.

    Chris: But one of the things I have found is that there are Reacty approaches, but there’s not a strict one right way to do things with React. It’s one of the things people love about it. It’s a little bit more flexible, you can do things a couple of different ways if you want. Same with Vue. You can use that HTML based thing where you’ve got your properties right in the HTML and Vue replaces them, but you can also use a more JSX-like templating kind of syntax if you’d prefer.

    Chris: I’ve heard multiple folks complain that when they’re learning React, one of the big problems is you Google how to do X in React and you get a dozen articles telling you a dozen different ways you could do that thing. That’s not to say frameworks don’t put some guardrails on a project. But I don’t think it’s as clearcut as, “I’ve chosen a framework. Now this is the way I build with it.” To be honest, I don’t know that that’s necessarily something I’d want either. I don’t think you’d want things that tightly confined… some people do, maybe. But if you’re touting that as a benefit, I don’t think it’s quite as pronounced as people sometimes make it out to be.

    Chris: You got into an interesting thing just there, though, where you were mentioning the CSS that is no longer needed. I think that is one of the legitimately interesting things that something like CSS-in-JS, or tying your CSS to JavaScript components in some way, can get you: if that component drops out, the CSS, in theory, also goes away with it. A lot of this to me feels like throwing engineering at people problems. Invariably, you’re still dependent on people somewhere along the line. That’s not to say never use those approaches, but they’re certainly not the only way to get at this problem.

    Chris: There are tools you can use to audit your HTML and pull out the styles that aren’t being used, even without using CSS-in-JS. You can write CSS “the old-fashioned way” and still do the linting of unused styles. There are authoring approaches to CSS that give you a CSS-in-JS-like output and keep your style sheet small without spitting out the gibberish, human-unreadable class names that you sometimes get with CSS-in-JS. Like X2354ABC, or just these nonsense words that get spit out.
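    A deliberately naive sketch of that kind of audit, assuming flat, single-class-selector rules (real tools such as PurgeCSS actually parse the CSS and markup rather than string-splitting like this):

```javascript
// Collect the class names that appear in the rendered markup.
function usedClasses(html) {
  const classes = new Set();
  for (const match of html.matchAll(/class="([^"]*)"/g)) {
    match[1].split(/\s+/).forEach((c) => c && classes.add(c));
  }
  return classes;
}

// Keep only the rules whose single class selector is actually used.
function pruneCss(css, classes) {
  return css
    .split("}")
    .filter((rule) => {
      const selector = rule.split("{")[0].trim();
      if (!selector.startsWith(".")) return true; // keep non-class rules
      return classes.has(selector.slice(1));
    })
    .map((rule) => rule.trim())
    .filter(Boolean)
    .map((rule) => rule + " }")
    .join("\n");
}

const html = '<button class="btn">Save</button>';
const css = ".btn { color: red } .unused { color: blue }";
console.log(pruneCss(css, usedClasses(html)));
// ".btn { color: red }"
```

    The point of this approach, as Chris goes on to say, is that it augments a plain style sheet: remove the audit step later and everything still works.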

    Chris: This is where I start to get really into the old-man-yells-at-cloud kind of thing. I’m really showing my developer age here. But yeah, it’s not necessarily that these things are bad, and they’re built to solve legitimate problems. I sometimes feel like there’s a little bit of an “if it’s good enough for Facebook, it’s good enough for us” kind of thing that happens with these. Well, Facebook uses this tool, so if we’re a real engineering team, department, organization, we should too. I don’t necessarily think that’s the right way to think about it, because Facebook deals with problems that you don’t, and vice-versa. The size and scale of what they work on is just different, and that’s okay.

    Drew: Exactly. I see some of these things like CSS-in-JavaScript as being almost like a polyfill. We’ve got legitimate problems; we need a way to solve them. The technology isn’t providing us a way to solve them yet. Maybe while we wait for the web platform to evolve and get around to addressing that problem, this thing that we do right now with JavaScript will see us through, and we’ll be okay. We know it’s bad. We know it’s not a great solution, but it helps us right now. And hopefully in a while we can pull it out and use the native solution.

    Chris: It’s really funny you bring this up, because literally last night I was watching a talk of Jeremy Keith’s from last year about progressive web apps. He was talking about how a couple of decades ago, he was trying to convince people to use JavaScript, which seems ridiculous now, but JavaScript was this new thing at the time. He showed people how you could do cool things like change the color of a link on hover with this new technology. It seems absurd that you would need JavaScript for that now, because that’s what CSS does for you. But things like the :hover pseudo-class just didn’t exist at the time.

    Chris: One of the things he said in the talk that I think really resonated with me is that JavaScript in many ways, paves those cow paths. It’s this very flexible and open language that we can use to create or bolt in features that don’t exist yet. And then eventually, browsers catch up and implement these features in a more native way. But it takes time. I completely understand what you’re saying about this. It’s not the perfect solution, but it’s what we have right now.

    Chris: I think for me, the biggest difference between polyfills and some of these solutions is that polyfills are designed to be ripped out. If I have a feature I want to implement that the browser doesn’t support yet, but there’s some sort of specification for it and I write a polyfill… once browsers catch up, I can rip that polyfill out and I don’t have to change anything. But when you go down the path of using some of these tools, ripping them out means rewriting your whole code base. That’s really expensive and problematic. That’s not to say never use them, but I feel really strongly that we should be giving thought to picking tools that can easily be pulled out later. If we no longer need them or the platform catches up, it shouldn’t require a huge rewrite to pull them out.
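    The “designed to be ripped out” quality comes from the classic polyfill pattern: feature-detect first, and only define the feature if the browser lacks it. For example, a simplified polyfill for Array.prototype.flat:

```javascript
// Feature-detect, then fill in the gap. Once every target browser supports
// Array.prototype.flat natively, this whole block is deleted and no calling
// code has to change. (Simplified: the real spec handles more edge cases.)
if (!Array.prototype.flat) {
  Array.prototype.flat = function (depth = 1) {
    return depth > 0
      ? this.reduce(
          (flat, item) =>
            flat.concat(Array.isArray(item) ? item.flat(depth - 1) : item),
          []
        )
      : this.slice();
  };
}

console.log([1, [2, [3]]].flat()); // [ 1, 2, [ 3 ] ]
```

    Calling code only ever uses the standard API, which is exactly why removal is free; the framework-specific APIs Chris contrasts this with offer no such native target to fall back to.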

    Chris: So getting to that “we have a whole bunch of styles we don’t use anymore” thing, that’s why I would personally favor a build-tool technology that audits your CSS against the rendered markup and pulls out the things you don’t need. Because down the road, if the platform catches up, you can pull that piece of the build tool out without having to change everything. It’s just augmenting what you already have, whereas CSS-in-JS doesn’t give you that same kind of benefit. I’m just picking on that one, but I think about a lot of these technologies more broadly.

    Chris: I do feel things like React or Vue are probably paving some cow paths that browsers will eventually catch up with and probably use similar approaches if not the same, so there may be less rewriting involved there. A lot of the ecosystem stuff that gets plugged in around that may be less so.

    Drew: I think it’s right, isn’t it, that the web platform moves slowly and carefully. Think back five years ago: we were all putting JavaScript carousels on our pages. They were everywhere. Everyone was implementing JavaScript carousels. If the web platform had jumped in and implemented a carousel solution to satisfy that need, it would now be sat there with nobody using it, because people aren’t doing that with carousels anymore. It was just a fad, a design trend. To counteract that and stop the platform itself becoming bloated and becoming a problem that needs solving, it does have to move at a much steadier pace. The upshot of that is that HTML I wrote in 1999 still works today because of that slow process.

    Drew: One of the other areas where things seem to be bloating out on the web is… I guess it’s related to the framework conversation, but it’s the concept of a single-page app. I feel like a lot of promises are made around single-page apps as to their performance, like you’re getting all this benefit because you’re not reloading the entire page framework. I feel like they don’t always make good on those performance promises. Would you agree with that?

    Chris: Yes. Although I will confess, despite having a very lengthy chapter in my book on this and talking about that a lot in talks and conversations with people, I don’t think single-page apps are always a terrible thing. But I do think this idea that you need one for performance is overstated. You can oftentimes get that same level of performance with different approaches.

    Chris: I think one of the bigger challenges with single-page apps is… let me explain them for anybody who’s unfamiliar. Traditionally, a site has separate HTML files for each page. Or, if you’re using something database-driven like WordPress, even though you don’t have actual physical HTML files for each page, WordPress creates the HTML on the fly and sends it back to the browser when a URL is requested. With a single-page app, instead of having separate HTML files for every view in your app, you have a single HTML file. That’s what makes it a single-page app. JavaScript handles everything: rendering the content, routing to different URL paths, and fetching new content if it needs to from an API or something like that.
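The “JavaScript handles the routing” part can be sketched as a tiny, framework-free URL-to-view lookup. This is an illustrative toy, not code from any real router; the route table and the matchRoute name are made up:

```javascript
// A toy version of the URL-to-view lookup a single-page app's router does.
// The routes and view names here are hypothetical.
const routes = {
  '/': 'home',
  '/about/': 'about',
  '/pets/:id/': 'pet-detail', // dynamic segment, e.g. /pets/42/
};

// Turn each route pattern into a regular expression and return the first
// view whose pattern matches the requested path.
function matchRoute(path) {
  for (const [pattern, view] of Object.entries(routes)) {
    const regex = new RegExp('^' + pattern.replace(/:[^/]+/g, '([^/]+)') + '$');
    const match = path.match(regex);
    if (match) return { view, params: match.slice(1) };
  }
  return { view: 'not-found', params: [] };
}
```

In a real single-page app, the result of this lookup would then be rendered into the page with yet more JavaScript, which is exactly the extra work being described here.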

    Chris: One of the stated benefits of these is that only the content on the page changes. You don’t have to re-download all the JS and the CSS. Oh, and you can do those fancy page transitions that designers sometimes love. In theory, this is more performant than having to reload the whole page.

    Chris: The problem with this approach from my perspective is that it also breaks a bunch of stuff that the browser just gives you for free out-of-the-box, and then you need to recreate it with more JS. You have an app that’s slow because it has a lot of JS. So you throw even more JavaScript at it to improve that performance and in doing so, you break a bunch of browser features and then have to re-implement those with even more JS too.

    Chris: For example, some things that the browser will do for free with a traditional website that you need to recreate with JavaScript when you go the single-page app. You need to intercept clicks on links and suppress them from actually firing, with your JavaScript. You then need to figure out what you actually need to show based on that URL, which is normally something that would get handled on the server or based on the file that goes with that URL path. You need to actually update the URL in the address bar without triggering a page reload. You need to listen for forward and back clicks on the browser and update content again, just like you would with clicks on links. You need to update the document title.

    Chris: You also need to shift focus in a way that announces the change in page to people who are using screen readers and other devices so that they’re not confused about where they are or what’s going on. Because they can’t see the change that’s happening, they’re hearing it announced. If you don’t actually shift focus anywhere, that announcement doesn’t happen. These are all things that the browser would do for you that get broken with single-page apps.
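Taken together, the browser behaviors listed above all become hand-written code in a single-page app. Here is a hedged sketch of that re-implementation; the getViewFor lookup is hypothetical, standing in for whatever app-specific logic decides what to show for a URL:

```html
<!-- Sketch of what a single-page app re-implements by hand.
     Names here are illustrative, not from any real framework. -->
<main id="app" tabindex="-1"></main>
<script>
  document.addEventListener('click', (event) => {
    const link = event.target.closest('a');
    if (!link) return;
    event.preventDefault();                   // 1. suppress the normal page load
    history.pushState(null, '', link.href);   // 2. update the address bar without reloading
    render(link.pathname);                    // 3. work out what to show for this URL
  });

  window.addEventListener('popstate', () => { // 4. handle back/forward button clicks
    render(location.pathname);
  });

  function render(path) {
    const view = getViewFor(path);            // hypothetical app-specific lookup
    const app = document.querySelector('#app');
    app.innerHTML = view.html;
    document.title = view.title;              // 5. update the document title
    app.focus();                              // 6. shift focus so screen readers announce the change
  }
</script>
```

A production app would also need to handle external links, error states, and scroll position, which is part of why most people reach for a library to do all of this.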

    Chris: On top of that, because you have all this extra JavaScript, this is complicated. So most people use frameworks and libraries to handle this sort of thing. Because of all this extra JavaScript to support this approach, you end up with potentially slower initial page load than you would have otherwise. Depending on the content you have, this approach I think sometimes can make sense. If you have an app that is driven by API data where you don’t necessarily know what those URL paths are going to look like ahead of time.

    Chris: Just an example here. You have an animal rescue where you have some adoptable animals, and that data comes from Petfinder, the animal adoption website. You have a bunch of animals there. Petfinder manages that, but you want to display them on your site with the Petfinder API. When your website’s being built, it doesn’t always necessarily have visibility to what pets are available in this exact moment and what kind of URL paths you’re going to need. A single-page app can help you there because it can dynamically on the fly, create these nice URLs that map with each dog or cat.

    Chris: Something like Instagram with lots of user created content, maybe that also makes sense. But for a lot of things, we do know where those URLs are going to be ahead of time. Creating an HTML file that has the content on it already is going to be just as fast as… sometimes even faster than the JavaScript based single-page app approach, especially if you use some other techniques to keep your overall CSS and JavaScript size down. I use this approach on a course portal that I have. The page loads feel instantaneous because HTML is so easy for browsers to render compared to other parts of the stack. It feels like a single-page app, but it’s not.

    Drew: Especially when you consider hosting solutions like a Jamstack approach of putting HTML files out in a CDN so it’s being served somewhere physically close to the user.

    Chris: Yep.

    Drew: Loading those pages can just be so, so quick.

    Chris: Yes. Absolutely. Absolutely. One of the other arguments I think people used to make in favor of single-page apps is offline access. If someone loads it and then their network goes down, the app is already up and all the routes are handled just with the file that’s already there. So there’s no reloading, they don’t lose any work. That was true for a long time. Now with service workers and progressive web apps, that is I think less of a compelling argument, especially since service workers can fetch full HTML files and cache them ahead of time if needed.

    Chris: You can literally have your whole app available offline before someone has even visited those pages if you want. It just happens in the background without the user having to do anything. It’s, again, one of those arguments that maybe made sense for certain use cases a few years ago but is a little less compelling now.
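The “cache pages ahead of time” idea can be sketched as a minimal service worker. The cache name and URL list here are hypothetical, and the logic is split into plain functions purely so it is readable (and testable) outside a browser:

```javascript
// Minimal service-worker sketch: precache a few HTML pages at install time,
// then answer requests cache-first with a network fallback.
// CACHE_NAME and PRECACHE_URLS are made up for illustration.
const CACHE_NAME = 'pages-v1';
const PRECACHE_URLS = ['/', '/lessons/', '/offline.html'];

// On install: open the cache and store every listed page before activating.
function handleInstall(event, cacheStorage) {
  event.waitUntil(
    cacheStorage.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE_URLS))
  );
}

// On fetch: serve the cached copy if we have one, otherwise hit the network.
function handleFetch(event, cacheStorage, network) {
  event.respondWith(
    cacheStorage.match(event.request).then((hit) => hit || network(event.request))
  );
}

// In a real sw.js file these would be wired to the service worker globals:
if (typeof self !== 'undefined' && self.addEventListener) {
  self.addEventListener('install', (event) => handleInstall(event, caches));
  self.addEventListener('fetch', (event) => handleFetch(event, caches, fetch));
}
```

With something like this registered, a visitor who loses their connection still gets the precached HTML instead of an error page.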

    Drew: It reminds me slightly of when we used to build websites in Flash.

    Chris: Yes.

    Drew: And you’d have just a rectangle embedded in an HTML page which is your Flash Player, and you’d build your entire site in that. You had to reimplement absolutely everything. There was no back button. If you wanted a back button, you had to create a back button, and then you had to create what the concept of a page was. You were writing code upon code, upon code to reimplement just as you are saying things that the browser already does for you. Does all this JavaScript that we’re putting into our pages to create this functionality… is this going to cause fragility in our front-ends?

    Chris: Yes. This is, to my mind, almost certainly one of the biggest issues with our over-reliance on JavaScript. JavaScript, just by its nature as a scripting language, is the most fragile part of the front-end stack.

    Chris: For example, if I write an HTML element that doesn’t exist, say I spell article like arcitle, and the browser runs across that, it’s going to be like, “Oh, I don’t know what this is. Whatever, I’ll just treat it like a div.” And it keeps going. If I mistype a CSS property… let’s say I forget the B in bold, so I write font-weight: old instead. The browser’s going to be like, “I don’t know what this is. Whatever, we’ll just keep going.” Your thing won’t be bold, but it will still be there.

    Chris: With JavaScript, if you mistype a variable name, or you try to use a property or call a variable that doesn’t exist, or a myriad of other things happen… your minifier messes up and pulls one line of code up to the one before it without a semicolon where it needs one… the whole app crashes. Everything from that line on stops working. Sometimes even stuff that happens before that doesn’t complete, depending on how your app is set up. You can very quickly go from an app that, with a different approach, one where you rely a lot more on HTML and CSS, would still work… it might not look exactly right, but it would still work… to one that doesn’t work at all.
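A small illustration of that fragility. The error is wrapped in try/catch here purely so it can be shown safely; in a real page without the catch, the script would simply halt at the misspelled name:

```javascript
// HTML and CSS fail gracefully; JavaScript fails hard. One mistyped name
// throws, and everything after it in the script stops running.
try {
  // 'mesage' is deliberately misspelled: ReferenceError at runtime.
  console.log(mesage);
  console.log('never reached');
} catch (error) {
  console.log(error.name); // ReferenceError
}
```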

    Chris: There’s an argument to be made that in 2020, JavaScript is an integral and important part of the web, most people don’t disable it, and most people are using devices that can actually handle modern JavaScript. That’s true, but mistyped code isn’t the only reason JavaScript doesn’t work right. Even if you have a linter, for example, and you catch bugs ahead of time, there are plenty of reasons why JavaScript can go horribly awry on you. CDNs fail.

    Chris: Back in July of last year… a year ago this month, at least, as we’re recording this… a bad deploy took down Cloudflare. Interestingly, as we’re recording this, I think a week or two ago Cloudflare had another massive outage that broke a whole bunch of things. That’s not a knock on Cloudflare; they’re an incredibly important service that powers a ton of the web. But CDNs do sometimes go down, and Cloudflare is a provider used by 10% of Fortune 1,000 companies. If your JS is served by that CDN and it breaks, the JavaScript file never loads. And if your content is dependent on that JS, your users get nothing, instead of getting something that’s just not styled the way you’d like.

    Chris: Firewalls and ad blockers get overly aggressive with what they block. I used to work at a company that had a JavaScript white list because they were extremely security conscious, they worked with some government contract stuff. They had a list of allowed JavaScript, and if your site or if your URL wasn’t part of that, no JavaScript. You have these sites. I remember going to a site where it had one of the hamburger kind of style menus on every view whether it was desktop or mobile, and I could not access any page other than the homepage because no JavaScript, no hamburger, that was it.

    Chris: Sometimes connections just time out, for one reason or another: either the file takes a while, or someone’s on a spotty or slow connection. Ian Feather, an engineer at BuzzFeed, shared that about 1% of requests for JavaScript on the site fail, which is 13 million requests a month. Or it was last year; it’s probably even more now. That’s a lot of failed JavaScript. People commuting go through tunnels and lose the internet. There are all sorts of reasons why JavaScript can fail, and when it does, it’s so catastrophic.

    Chris: And so we built this web that should be faster than ever. It’s 2020, 5G is starting to become a thing. I thought 4G was amazing. 4G is about as fast as my home wifi network. 5G is even faster, which is just bonkers. Yet somehow, we have websites that are slower and less performant than they were 5 or 10 years ago, and that makes no sense to me. It doesn’t have to be that way.

    Drew: How do we get out of this mess, Chris?

    Chris: Great question. I want to be really clear. I know I’ve hammered on this a couple times. I’m not saying all the new stuff is bad, never use it. But what I do want to encourage is a little bit more thoughtfulness about how we build for the web.

    Chris: I think the overarching theme here is that old doesn’t mean obsolete. It doesn’t mean never embrace new stuff, but don’t be so quick to jump on all the shiny new stuff just because it’s there. I know that’s one of the things that keeps this industry really exciting and makes it fun to work in; there’s always something new to learn. But when you pick these new things, do it because it’s the right tool for the job, and not just because it’s the shiny new thing.

    Chris: One of the other things we didn’t get into as much as I would have liked, but I think is really important, is that the platform has caught up in a really big way in the last few years. Embracing that as much as possible is going to result in a web experience for people that is faster, that is less fragile, and that is easier for you to build and maintain, because it requires fewer dependencies: you use what the browser gives you out-of-the-box. We used to need jQuery to select things like classes. Now browsers have native ways to do that. People like JSX because it allows you to write HTML in JavaScript in a more seamless way. But we also have template literals in Vanilla JavaScript that give you that same level of ease without the additional dependency. HTML itself can now replace a lot of things that used to require JavaScript, which is absolutely amazing.
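As a concrete illustration of the template-literal point, here is HTML-in-JS with no compiler or library; the data is made up, and in a browser the result would simply be assigned to an element’s innerHTML (selected with the built-in document.querySelector):

```javascript
// Build a chunk of HTML from data, JSX-style, using only a template literal.
// The pet data is invented for illustration.
const pets = [
  { name: 'Lucy', type: 'dog' },
  { name: 'Rex', type: 'cat' },
];

const listHTML = `
  <ul class="pets">
    ${pets.map((pet) => `<li>${pet.name} (${pet.type})</li>`).join('')}
  </ul>`;
```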

    Chris: We talked a little bit about… this is a CSS thing, but hovers over links and how that used to require JavaScript. But using things like the details and summary elements, you can create disclosure, like expand-and-collapse or accordion elements, natively with no scripting needed. You can do autocomplete inputs using just a… I shouldn’t say just, I hate that word. But using a humble input element, and then a datalist element that gets associated with it, with some options. If you’re curious about how any of this stuff works, I have a bunch of examples of the JavaScript stuff the platform gives you, and also some “used to require JavaScript and now doesn’t” kind of things that might be interesting too, if you want some code samples to go along with this.
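In markup, those two HTML features look roughly like this; the content is invented for illustration:

```html
<!-- Disclosure / accordion element with no JavaScript -->
<details>
  <summary>Shipping details</summary>
  <p>Orders ship within two business days.</p>
</details>

<!-- Autocomplete input with no JavaScript: an input wired to a datalist -->
<label for="state">State</label>
<input id="state" name="state" list="states">
<datalist id="states">
  <option value="Massachusetts"></option>
  <option value="Michigan"></option>
  <option value="Minnesota"></option>
</datalist>
```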

    Chris: On the CSS side of things, my most popular Vanilla JS plugin ever is this library that lets you animate scrolling down to anchor links. It is very big. It’s the hardest piece of code I’ve ever had to write. And it has now been completely replaced with a single line of CSS, scroll-behavior: smooth. It’s more performant. It’s easier to write. It’s easier to modify its behavior. It’s just a better overall solution.
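That single line of CSS, shown here with an optional safeguard for users who have asked for reduced motion:

```css
/* Animated scrolling to anchor links, replacing a whole JS library */
html {
  scroll-behavior: smooth;
}

/* Optional: respect users who prefer reduced motion */
@media (prefers-reduced-motion: reduce) {
  html {
    scroll-behavior: auto;
  }
}
```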

    Chris: One of the other things that I wish we did more of is leaning on multi-page apps. I feel a little bit vindicated here, because I recently saw an article from someone at Google that actually pushes for this approach now too. I thought that was pretty interesting, given the huge Angular, and then frameworks-for-all-the-things, boom that Google started a few years back. Kind of cool to see them come back around to this. Using things like static site generators, and awesome services like Netlify and CDN caching, you can create incredibly fast web experiences for people using individual HTML files for all of your different views. So, kind of leaning on some of this out-of-the-box stuff.

    Chris: In situations where that’s not realistic for you, where you do need more JavaScript or some sort of library, maybe take a look at the smaller and more modular approaches first, instead of just going for the behemoths of the industry. Instead of React, would Preact work? Instead of Angular… I mean, instead of Vue rather, would Alpine JS work? There’s also this really interesting pre-compiler out there now called Svelte, which gives you a framework-like experience and then compiles all your code into Vanilla JavaScript, so you get these really tiny bundles that have just what you need and nothing else. Instead of CSS-in-JS, could you bolt in some third-party CSS linter that will compare your HTML to your CSS and pull out the stuff that got left in there by accident? Would a different way of authoring your CSS, like object-oriented CSS by Nicole Sullivan, work instead? We didn’t really get to talk about that, but it’s a really cool thing people should check out.

    Chris: And then I think maybe the third and most important piece here, even though it’s less of a specific approach and more just a thing I wish more people kept in mind, is that the web is for everyone. A lot of the tools that we use today work for people who have good internet connections and powerful devices. But they don’t work for people who are on older devices or have spotty internet connections. This is not just people in developing areas. This is also people in the U.K., and in certain parts of the U.S., where we have absolutely abysmal internet connections. The middle of our country has very slow internet. I know there are places in parts of London where they can’t wire new broadband in for historical reasons, so you’re left with these old internet connections that are really bad. There are places like that all over the world. Last time I was in Italy, same thing. The internet there was horrible. I don’t know if it’s changed since then.

    Chris: The things we build today don’t always work for everyone, and that’s too bad. Because the vision of the web, the thing I love about it, is that it is a platform for absolutely everyone.

    Drew: If listeners want to find out more about this approach, you’ve gone into loads of detail to it in your book, The Lean Web. And that’s available online. Is it a physical book or a digital book?

    Chris: It’s a little bit of both. Well, no, it’s definitely not a physical book. You can read the whole thing for free online. If you want, there are also EPUB and PDF versions available for a really small amount of money; I forget how much now, I haven’t looked at it in a while. You can also watch a talk on this topic where I go into more details, if you want.

    Chris: But I’ve also put together a special page just for listeners of the Smashing Podcast, because I’m very creative with naming things. That includes a bunch of resources, in addition to the book, around the things that we talked about today. It links to a lot of the different techniques that we covered, and other articles I’ve written that go deeper into some of these topics and expand on my thinking a little bit. If folks want to learn more, that would probably be the best place to start.

    Drew: That’s terrific. Thank you. I’ve been learning all about the Lean Web. What have you been learning about lately, Chris?

    Chris: Yeah, a couple of things. I alluded to this a little bit earlier with watching Jeremy’s video on progressive web apps. I have been putting off learning how to actually write my own progressive web app for a couple of years because I didn’t have a specific need on anything I was working with. I recently learned from one of my students who is in South Africa, that they have been dealing with rolling blackouts because of some stuff they have going on down there. As a result, she is not able to work on some of the projects we’ve been doing together regularly, because the power goes out and she can’t access the learning portal and follow along.

    Chris: For me, building an experience that works even if someone doesn’t have internet has become a higher priority than I realized it was before, so I just started digging into that and hope to get it put together in the next few weeks. We’ll see. Jeremy Keith’s resources on this have been an absolute lifesaver though. I’m glad they exist.

    Chris: I know, Drew, you mentioned one of the reasons you like to ask this question is to show that people, no matter how seasoned they are, are always learning. Just a little related anecdote. I have been a web developer for, I think, about eight years now. I still have to Google the CSS property to use for making things italic, literally every single time I use it. For some reason, my brain defaults to text-decoration, even though that’s not the right one. I’ll try a couple of combinations of different things, and I always have one word wrong every time. I also sometimes write italics instead of italic. Yeah. If anybody out there is ever feeling like, oh, I’m never going to learn this stuff… just know that no matter how seasoned you are, there’s always some really basic thing that you Google over and over again.

    Drew: I’ve been a web developer for 22, 23 years, and I still have to Google the different properties for Flexbox, every time. Although I’ve not been using that for 23 years, obviously. But yeah, some things just… there’s probably going to be more of those as I get older.

    Chris: Yeah. Honestly, I ended up building a whole website of stuff I Google over and over again, just to have an easier copy-paste reference because that was easier than Googling.

    Drew: That’s not a bad idea.

    Chris: That’s the kind of lazy I am. I’ll build a whole website to save myself like three seconds of Googling.

    Drew: If you, the listener, would like to hear more from Chris, you can find his book, The Lean Web, on the web, and his developer Tips newsletter and more on his site. Chris is on Twitter at Chris Ferdinandi. And you can check out his podcast wherever you usually get your podcasts. Thanks for joining us today, Chris. Do you have any parting words?

    Chris: No. Thank you so much for having me, Drew. I had an absolutely smashing time. This was heaps of fun. I really appreciate the opportunity to come chat.

    Smashing Editorial


    Crowdfunding Web Platform Features With Open Prioritization — Smashing Magazine


    About The Author

    Rachel Andrew is not only Editor in Chief of Smashing Magazine, but also a web developer, writer and speaker. She is the author of a number of books, including …

    Rachel Andrew takes a look at a new effort to crowdfund the costs of implementing browser features.

    In my last post, I described some interesting CSS features — some of which are only available in one browser. Most web developers have some feature they wish was more widely available, or that was available at all. I encourage developers to use features, talk about them, and raise implementation bugs with browsers to try to get them implemented. However, what if there were a more direct way to do so? What if web developers could get together and fund the development of these features?

    This is the model that open-source consultancy Igalia is launching with their Open Prioritization experiment. The basic idea is a crowdfunding model for web platform features. If we want a feature implemented, we can put in a small amount of money to help fund that work. If the goal is reached, the feature can be implemented. This article is based on an interview with Brian Kardell, Developer Advocate at Igalia.

    What Is Open Prioritization?

    The idea of open prioritization is that the community gets to choose and help fund feature development. Igalia have selected a list of target features, all of which are implemented or currently being implemented in at least one engine. Therefore, funding a feature will help it become available cross-browser, and more usable for us as developers. The initial list is:

    • CSS lab() colors in Firefox
    • :focus-visible in WebKit/Safari
    • HTML inert in WebKit/Safari
    • Selector list arguments for :not() in Chrome
    • CSS Containment support in WebKit/Safari
    • CSS d (SVG path) support in Firefox
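To make a few of those features concrete, here is roughly what they look like in use, where supported; the selectors and values are illustrative:

```css
/* :focus-visible: show a focus ring only when the browser decides it's
   needed (e.g. keyboard navigation), not on every mouse click */
button:focus-visible {
  outline: 2px solid currentColor;
}

/* Selector list arguments for :not(): exclude several classes at once */
p:not(.lead, .footnote) {
  margin-block: 1em;
}

/* lab() colors: device-independent colors from CSS Color Level 4 */
.accent {
  color: lab(52% 40 59);
}
```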

    The website gives more explanation of each feature and all of the details of how the funding will work. Igalia are working with Open Collective to manage the pledges.

    Who Are Igalia?

    You may never have heard of Igalia, but you will have benefited from their work. Igalia work on browser engines, and have specialist knowledge of all of the engines. They had the second-highest number of commits to the Chrome and WebKit source in 2019. If you love CSS Grid Layout, then you have Igalia to thank for the implementation in Chrome and WebKit. The work to add the feature to those browsers was done by a team at Igalia, rather than by engineers working internally at the browser company.

    This is what makes this idea so compelling. It isn’t a case of raising some money and then trying to persuade someone to do the work. Igalia have a track record of doing the work. Developers need to be paid, so by crowdsourcing the money we are able to choose what is worked on next. Igalia also already have the relationships with the engines to make any suggested feature likely to be a success.

    Will Browsers Accept These Features If We Fund Them?

    The fact that Igalia already have relationships within browser engine teams, and have already discussed the selected features with them means that if funded, we should see the features in browsers. And, there are already precedents for major features being funded by third parties and developed by Igalia. The Grid Layout implementation in Chrome and WebKit was funded by Bloomberg Tech. They were frustrated by the lack of Grid Layout implementation, and it was Bloomberg Tech who provided the money to develop that feature over several years.

    Chrome and WebKit were happy to accept the implementation; there was no controversy over adding the feature. Rather, it was a matter of prioritization. The browsers had other work that was deemed a higher priority, and financial commitment and developer time were therefore directed elsewhere. The features that have been selected for this initial crowdfunding attempt are also non-controversial in terms of their implementation. If the work can be done, then the engines are likely to accept it. Interoperability — things working in the same way across browsers — is something all browser vendors care about. There is no benefit to an engine in lagging behind. We essentially just get to bypass the internal prioritization process for the feature.

    Why Don’t Browsers Just Do This Stuff?

    I asked Brian why the browser companies don’t fund these things themselves. He explained,

    “People might think, for example, ‘Apple has all of the money in the world’ but this ignores complex realities. Apple’s business is not their Web browser. In fact, the web browser itself isn’t a money-making endeavor for anyone. Browsers and standards are voluntary, they are a commons. Cost-wise, however, browsers are considerable. They are massively more complex than most of us realize. Only 3 organizations today have invested the many years and millions of dollars annually that it takes to evolve and maintain a rendering engine project. Any one of them is already making a massive and unparalleled investment in the commons.”

    Brian went on to point out the considerable investment of Mozilla into Servo, and Google into LayoutNG, projects which will improve the browser experience and also make it possible to implement new features of the platform. There is a lot that any browser could be implementing in their engine, but the way those features are internally prioritized may not always map to our needs as developers.

    It occurred to me that by funding browser implementation, we are doing the same thing that we do for other products that we use. Many of us will have developed a plugin for a needed feature in a CMS or paid a third party to provide it. The CMS developers spend their time working on the core product, ensuring that it is robust, secure, and up to date. Without the core product, adding plugins would be impossible. Third parties however can contribute parts to that platform, and in a sense that is what we can do via open prioritization. Show that a feature is worthwhile enough for us to pledge some cash to get it over the line.

    How Does This Fit With Projects Such As Web We Want?

    SmashingConf has supported the Web We Want project, where developers pitched web platform ideas to be discussed and voted for onstage at conferences. I was involved in several of these events as a host and on the panel. I wondered how open prioritization fits with these existing efforts. Brian explained that these are quite different things saying,

    “… if you asked me what could make my house better I could name a million things. Some of those aren’t even remotely practical, they would just be really neat. But if you said make a list of things you could do with a budget for what each one costs — my list will be considerably more practical and bound by realities I know exist.

    At the end of the month if you say “there is your list, and here is $100, what will you do with it?” that’s a very direct question that helps me accomplish something practical. Maybe I will paint. Maybe I will buy some new lighting. Or, maybe I will save it for some months toward something more costly.”

    The Web We Want project asks an open question, it asks what we want of the platform. Many of the wants aren’t things that already exist as a specification. To actually start to implement any of these things would mean starting right at the beginning, with an idea that needs taking right from the specification stage. There are few certainties, and they would be very hard to put a price on.

    The features selected for this first open prioritization experiment are deliberately limited in scope. They already have some implementation; they have a specification, and Igalia have already spoken to browser maintainers to check that the features are ready to work on but don’t feature in immediate priorities.

    Supporting this project means supporting a concrete chunk of development, that can happen within a reasonably short timeframe. Posting an idea to Web We Want, writing up an idea on your blog, or adding an issue describing a totally new feature on the CSSWG GitHub repo potentially gets a new idea out into the discussion. However, those ideas may have a long slow path to becoming reality. And, given the nature of standards discussions, probably won’t happen in exactly the way that you imagined. It is valuable to propose these things, but very hard to estimate time and costs to a final implementation.

    The same problem is true for the much-wanted feature of container queries, Igalia have gone so far as to mention container queries in their FAQ. Container queries are something that many people involved in the standards process and at browser vendors are looking into, however, those discussions are at an early stage. It isn’t something it would be possible to put a monetary value on at this point.

    Get Involved!

    There is more information at the Open Prioritization site, along with a detailed FAQ answering other questions that you might have. I’m excited about this because I’m always keen to help find ways for designers and developers to get involved in the web platform. It is our platform. We can wait for things to be granted to us by browser vendors, or we can actively contribute via ideas, bug reports, and, with Open Prioritization, a bit of cash, to help make it better.


    How Web Designers Can Help Restaurants Move Into Digital Experiences — Smashing Magazine


    About The Author

    Suzanne Scacca is a former WordPress implementer, trainer and agency manager who now works as a freelance copywriter. She specializes in crafting marketing, web …

    The restaurant industry has begun to undergo a major digital transformation. Those that want to survive will need a website that can handle the new way of operating, which means they can no longer afford to hold onto that cheap website they built for themselves years ago. And this spells big opportunities for web designers interested in working in the space.

    As much as I’ve always loved the experience of going out to eat and ordering in takeout, it’s very rare that I enjoy visiting a restaurant’s website. But I get it. The restaurant industry tends to run on very slim profit margins, so it’s hard to justify spending money on a professionally designed website when all they want it to do is list their hours of operation and menu.

    However, I envision all that changing in 2020 (and beyond) as restaurants are forced to expand into digital in order to survive. Unlike a website that a novice might hack together with a cheap site builder, establishing a competitive digital presence isn’t something they’re going to be able to do on their own.

    That’s why web designers should seriously start thinking about expanding into this niche.

    How Web Designers Can Help Restaurants Move into Digital

    Usually, when something serious shakes up the restaurant industry, those that want to survive will adopt newer and better technologies to adapt. So, it’s not like restaurants are strangers to digital transformation. Until now, though, the focus has mainly been on investing in technology that improves how they work in-house.

    With everything that’s happened in 2020, though, restaurants are going to need web designers’ help in doing three things that ensure their survival in an increasingly digital world:

    1. Modernize The Restaurant Website

    Whenever I write one of these posts, I spend time reviewing a few dozen websites to find the best examples to make my point. I’m not going to lie, this one was tough. While I knew I could turn to national chain restaurants to find modern-looking websites, I had a really hard time with others.

    While it’s not impossible to find an independent restaurant operator or local chain that has a great-looking website in 2020, I’d say that at least half of them are way behind the times, if they even have a website at all.

    Remember when websites were designed like this?

    A blurred-out website to demonstrate how independent restaurants still have unresponsive designs
    An outdated restaurant website in 2020, blurred out to protect its identity. (Source: Anonymous) (Large preview)

    I’ve blurred out the restaurant’s name and details to protect its identity, but you can still get a sense of how bad this design is for 2020.

    Restaurant websites can’t afford to be crappy, non-responsive placeholders anymore. They need to become impressive digital presences that set the stage for what customers will experience when interacting with restaurants as diners.

    Let’s take a look at how In-N-Out Burger has nailed modern web design. The first thing you’ll notice is it’s a responsive design. On desktop, the website fits the full width of the screen, so there’s no wasted space around the border. It looks good on a mobile device, too:

    In-N-Out Burger website on mobile
    The In-N-Out Burger mobile website is responsive and easy to read. (Source: In-N-Out Burger) (Large preview)

    Also, take notice of the images. This is a burger joint, so you should expect the website to be full of burger photos, which it is. However, there’s something interesting to note about the burgers you find on the site.

    In-N-Out Burger website photos and transitions
    The In-N-Out Burger website uses perfectly framed images and well-chosen transitions. (Source: In-N-Out Burger) (Large preview)

    When someone enters a page where there’s a burger photo, the food slides into the frame as if someone were sliding it over to a customer in the restaurant. It’s a neat little transition and many visitors to the site might not even realize what’s happening, but it makes the experience feel more lifelike and interactive.

    Transitions aren’t the only things you can do to create this sort of experience. Background videos shot inside the establishment work just as well, as they give customers the opportunity to walk through the space instead of relying on static images that only paint part of the picture.

    Another thing restaurant websites need to improve is how they’re organized.

    When people are ready to go out to eat or to dine in, don’t waste their time trying to force the restaurant’s history down their throats (which many of these sites surprisingly do). The navigation as well as the order in which CTAs appear on the home page should reflect the actions customers want to take.

    The thought process most likely goes like this:

    1. “I’m not sure what to order. Where’s the menu?” (Menu)
    2. “Do I need to make a reservation or can we just go whenever?” (Reservations)
    3. “Where is this place again?” (Locations or Contact)

    Or, these days, #2 looks more like this:

    • “Do they do takeout? I wonder if they’ll deliver it.” (Order Online)

    There are other things customers might want to do on the website, like buying gift cards or merchandise, signing up for rewards, or applying for a job.

    So, while the above tasks should be a priority in terms of what visitors see first, make sure to look at the site’s data to see what else they’re interested in doing. Then, make sure those popular actions take center stage in the navigation and site design.
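    That prioritization boils down to simple logic. As a hypothetical sketch (the item names and click counts are invented, not taken from any real restaurant’s analytics), navigation order can be derived from a mix of pinned must-have actions and measured popularity:

```typescript
// Hypothetical analytics data: clicks per call-to-action over some period.
interface NavItem {
  label: string;
  clicks: number;
}

// Order navigation items by popularity, keeping a few must-have
// actions (menu, ordering) pinned to the front regardless of clicks.
function orderNavigation(items: NavItem[], pinned: string[] = []): string[] {
  const byClicks = [...items].sort((a, b) => b.clicks - a.clicks);
  const rest = byClicks.filter((item) => !pinned.includes(item.label));
  return [...pinned, ...rest.map((item) => item.label)];
}

// Invented numbers for illustration only.
const analytics: NavItem[] = [
  { label: "Careers", clicks: 120 },
  { label: "Gift Cards", clicks: 940 },
  { label: "Order Online", clicks: 5200 },
  { label: "Menu", clicks: 8800 },
  { label: "Rewards", clicks: 310 },
];

console.log(orderNavigation(analytics, ["Menu", "Order Online"]));
```

    In practice, the numbers would come from the site’s analytics tool rather than being hard-coded, but the principle is the same: let visitor behavior, not internal politics, decide the order of the navigation.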

    2. Empower Them to Diversify Their Income

    Under normal circumstances, profitability is a problem for many restaurants. Add a crisis to the mix and it becomes downright impossible to generate any profit — that is, if they rely solely on dine-in business.

    Long before COVID-19, consumers were already showing a growing preference for digital dining solutions.

    According to Peapod, 77% of U.S. consumers said they preferred eating at home to going out. But that doesn’t necessarily translate to ordering in from a restaurant.

    • 27% preferred to order groceries online and pick them up from the store.
    • 26% planned to use grocery delivery.
    • 20% were interested in meal kits.

    Then, you have data from Technomic and the National Restaurant Association showing that about 60% of all restaurant sales in the U.S. come from off-premise dining.

    Restaurants that haven’t yet made the leap to digital dining options are going to have to do so ASAP. This isn’t just a temporary thing, either.

    Restaurants that fail to digitize going forward won’t survive.

    So, web designers are going to be needed to help them build out the following:

    • An online ordering system for their website or a link to an external service,
    • A reservation system (for when in-house dining is available).

    That’s just the bare minimum though. For instance, this is what Snooze Eatery has done:

    Snooze Eatery online delivery or pickup
    Snooze Eatery advertises delivery or pickup on its website. (Source: Snooze Eatery) (Large preview)

    The first thing visitors see on the website is the online ordering option. When they click “Place Your Order”, they’re taken to the restaurant’s proprietary ordering portal:

    Snooze Eatery online ordering portal
    Snooze Eatery’s online ordering portal. (Source: Snooze Eatery) (Large preview)

    This in and of itself is a great solution for restaurants to have available through their websites as it allows them to control the ordering process and capture more of the profits (but that’s up to your clients to decide, of course). That said, many restaurants are getting creative and going beyond traditional online ordering options.

    Below the fold on the Snooze Eatery site, visitors will find this banner:

    Snooze Eatery neighborhood provisions: family-style meal kits and food essentials
    Snooze Eatery now offers neighborhood provisions. (Source: Snooze Eatery) (Large preview)

    As I mentioned earlier, there’s a good number of people who want to be able to order food online but then prepare it for themselves at home. While that would previously have left restaurants high and dry, that’s not the case anymore as many restaurants are expanding their offering to include family-style meal kits and groceries like Snooze.

    This alone means that web designers are going to become increasingly important to restaurants. And don’t expect the work to end there. Restaurants will also need your help building other monetized offerings into their websites. For instance:

    • Gift cards;
    • Merchandise;
    • Subscription services for meal kits, alcohol deliveries and more;
    • Online memberships for cooking classes, premium recipes, etc.

    If they don’t have one yet, they’ll also probably need help creating a rewards and account management system.

    3. Fix Their Brand Images on Third-party Sites

    Although the website should be the engine that powers everything for the business online, restaurants need other sites to help with visibility, too. For example:

    • Facebook to share photos, advertise location information and collect customer reviews;
    • Instagram to share photos, restaurant updates and customer-generated content;
    • Yelp and TripAdvisor to collect customer reviews and feedback;
    • Google My Business to create a local presence in Google search and Maps as well as to collect reviews;
    • Delivery services like DoorDash to outsource delivery to;
    • Reservation sites like OpenTable to outsource reservation bookings to.

    If customers are looking for restaurants online, restaurants need to be willing and able to meet them where they are… before eventually bringing them to the website.

    Although it’s ultimately the restaurant’s responsibility to create these pages, you should provide assistance when it comes to the visual branding piece. For one, it ensures that there’s some consistency between all their platforms. Also, it enables you to fill in missing pieces that restaurateurs might not think about.

    Let’s take a look at Rhode Island staple, IGGY’S:

    IGGY’S website with picture of clamcakes and 3 options for online ordering
    IGGY’S website visitors are introduced to the restaurant with an image of its iconic clamcakes. (Source: IGGY’S) (Large preview)

    The waterfront eatery immediately gets down to business and provides visitors with 3 options for ordering online (based on which location they want to go to).

    Here’s what the online ordering portal looks like:

    IGGY’S online ordering portal
    IGGY’S restaurant’s online ordering portal. (Source: IGGY’S) (Large preview)

    Notice how good this looks. It takes what would otherwise be a text-only menu and turns it into something much more attractive and, arguably, more effective in driving up sales.

    Now, contrast that with IGGY’S online ordering through DoorDash:

    DoorDash online ordering for IGGY’s
    DoorDash customers can order online from IGGY’s restaurant. (Source: DoorDash) (Large preview)

    The items on this page rarely come with descriptions or images.

    Now, IGGY’S is a well-known restaurant around Rhode Island, so this might not be a dealbreaker for online customers. However, new customers might approach this menu with more trepidation than the one on IGGY’S own website, since it’s devoid of details.

    This is where your visual-centric approach comes in handy. By making sure each item comes with a high-resolution and mouth-watering photo (the same as the one used on the site), you can optimize this sales opportunity for them.

    It’s also important to ensure the brand elements are consistently presented. That way, if existing customers run across their favorite restaurant on DoorDash, they won’t hesitate to order because they’ll instantly recognize it.

    For example, the logo on DoorDash is nothing like the one on the website in terms of quality or looks:

    DoorDash logo and location information for IGGY’S RI
    The DoorDash logo for IGGY’S doesn’t match the one on the website. (Source: DoorDash) (Large preview)

    Be it the logo or another branded element, you want to make sure that 1) it matches the website and 2) it looks good. This goes for online ordering sites like DoorDash as well as all the other ones I mentioned earlier.

    Wrapping Up

    We’re at a point now where restaurants can no longer be reluctant or stingy about improving their digital presence. And, as a web designer, this should get you excited.

    There’s a lot you can do to help businesses in this space beyond designing basic websites. Because so much of their digital transformation involves making sales online, you’ll get to design experiences that are intuitive, modern, and mouth-watering while also creating new monetized pathways for them.

    Smashing Editorial
    (ra, yk, il)


    What Vitruvius Can Teach Us About Web Design — Smashing Magazine


    About The Author

    Frederick O’Brien is a freelance journalist who conforms to most British stereotypes. His interests include American literature, graphic design, sustainable …

    The ancients can teach us a thing or two about design — even web design. The Roman architect Vitruvius had buildings in mind when laying out his golden triad, but its principles are just as applicable to the web as they are to brick and mortar.

    There’s no escaping the ancient masters. Their shadows loom large over philosophy, literature, architecture, warfare, and… web design? Believe it or not, yes. Although Plato infamously omitted CSS Grid from the final draft of The Republic, there is nonetheless plenty the old heads can teach us about web development.

    Today’s lecture is about architecture, and how some of its core tenets apply to the worldwide web. Architectural terms are not unusual in web development, and for good reason. In many ways, web developers are digital architects. This piece will focus on Vitruvius, a Roman architect, and how his principles can and should be applied to websites.

    This will focus in particular on the Vitruvian triad, three qualities essential to any building: durability (firmitas), usefulness (utilitas), and beauty (venustas). Familiarity with these terms — and what they mean in practice — will help make your website better.


    Marcus Vitruvius Pollio was a Roman architect, civil engineer, and author who lived during the first century BC. He is remembered mainly for his writings on architecture, De architectura. Addressing the then emperor Augustus, Vitruvius outlines his thoughts on architectural theory, history, and methods.

    Drawing of Roman architect Marcus Vitruvius Pollio
    Vitruvius posing for a LinkedIn headshot. (Large preview)

    De architectura is the only treatise on architecture to survive from antiquity, and remains a design touchstone to this day. As you could probably guess, Leonardo da Vinci’s Vitruvian Man was inspired by Vitruvius’s writings about proportion.

    For those of you interested in going down an architecture rabbit hole, the full text of De architectura is available for free on Project Gutenberg. This piece will not attempt to summarise the whole book. There are a couple of reasons for this. First, there’d be an awful lot to cover. Second, I haven’t totally lost sight of the fact this is a web design magazine. We will be homing in on the Vitruvian triad, a standard for design that applies well beyond architecture.

    The ancients had a knack for reducing topics to their bare — you might say elemental — essentials. The Vitruvian triad is one such case. There are other architects worth studying, other design theories worth being familiar with, but Vitruvius offers a particularly neat ABC that applies just as well to the web as it does to temples.

    The Vitruvian Triad

    In De architectura, Vitruvius identified three qualities essential to any piece of architecture. In the centuries since, they have established themselves as his ‘golden rules.’ If you want to make Vitruvius happy — which of course you do — whenever you make a thing you should strive to make it:

    • Useful (utilitas)
    • Durable (firmitas)
    • Beautiful (venustas)

    Designing with these three things in mind will elevate your work. Having one of these qualities is nice; having two is good; and having all three together is divine. Divine seems like the best option. Let’s break down what each of the three qualities means in principle, then how they can be applied to web design.

    Useful (utilitas)

    In principle

    Buildings are designed and erected for a reason. Whatever that purpose is, it should always be on an architect’s mind. If the structure does not meet its purpose, then odds are it isn’t going to be very useful. A theatre with no stage has rather dropped the ball, for example.

    According to Vitruvius, usefulness will be assured “when the arrangement of the apartments is faultless and presents no hindrance to use, and when each class of building is assigned to its suitable and appropriate exposure.”

    You’ve heard this one before, albeit with different language. Vitruvius is the granddaddy of harping on about how form should follow function. Louis Sullivan, the ‘father of skyscrapers’, coined that particular term in 1896. Sullivan supposedly attributed the idea back to Vitruvius, although documentation of this is dubious. In any case, that’s what utilitas boils down to.

    Photograph of the old Public Library of Cincinnati
    Purpose: Library. All clear? (Large preview)

    Different types of buildings have different requirements. A building designed with these requirements as an afterthought will likely disappoint. This may sound obvious, but there are enough white elephants in this world to warrant caution. Labyrinthine shopping centres and highly conductive metal domes in playgrounds may look cool in investor presentations, but they don’t wind up being terribly useful.

    Screenshot of New York Times news article about a playground lawsuit
    Don’t be the playground designer whose playground gives children second-degree burns. Full story in The New York Times. (Large preview)

    This also means the individual parts of a structure should be logically connected. In other words, they should be simple to access and navigate. If a building is useful and easy to use that’s a very good start.

    In practice


    Utilitas also applies to web design. Every website has a purpose. That purpose may be practical, like a search engine or weather forecast, or it may be artistic, like an interactive story or graphic design portfolio. Whatever it is, it has a reason to exist, and if it is designed with that reason in mind it is more likely to be useful to anyone who visits the site.

    You would expect an encyclopedia to be easy to search and navigate, with cleanly presented and properly cited information. Wikipedia, for example, ticks all of those boxes. It is the web equivalent of an enormous library, right down to the obscure sections and staff bickering behind the scenes. It was built with usefulness front and center, and so its core design has remained consistent in the years since its founding.

    Alternatively, the purpose of a publication is to produce original content that is of value or interest to its readers. To be useful, a publication’s website would present said content in a vibrant and direct way, paying special attention to the reading experience across various devices. A website with wonderful content and bad design undermines its own usefulness.

    Homepage screenshot of The Guardian newspaper
    The Guardian is a newspaper. Its purpose is to report the news. Its award-winning website doesn’t faff around with slogans or spectacle; it is packed full of content. (Large preview)

    A clear purpose leads to clear design. If a purpose has you pulling in several different directions then the same will be true of the website. You can’t be all things to all people, and it is pointless to try. Usefulness tends to meet specific needs, not all needs.

    When it comes to usefulness you can’t afford to treat websites as something abstract. Like buildings, websites are visited and used by people, and ought to be designed with them in mind above all others. Investors, advertisers, and all the other sordid actors will have their time, but if you let them in too early a site’s usefulness will be compromised. When a publication breaks up articles across multiple pages purely to inflate traffic numbers, its usefulness is reduced. When an e-commerce platform seems more concerned with shoving you down conversion funnels than with providing honest information about its products, its usefulness is reduced. In such cases, the purpose has become secondary, and the design suffers as a result.

    Homepage screenshot of the DuckDuckGo search engine
    We recognise the hallmarks of search engine design just like we recognise theatres, libraries, or sport stadiums. Their forms are shaped around their function. (Large preview)

    Also, like buildings, websites should be easy to navigate. Ensuring the usefulness of a website requires thorough planning. Where the architect has floor plans and models, the web developer has sitemaps, wireframes, and more. These allow us to identify layout problems early and address them.

    Looking at the design through different lenses is especially important here. Does the palette account for colour blindness and cultural differences? Colours mean different things in different places, after all. Is it easy to browse using keyboards and screen readers? Not everyone navigates the web the same way you do. Surely it’s better to be useful to as many people as possible? There is no good excuse for websites not to be both accessible and inclusive.

    Durable (firmitas)

    In principle

    Firmitas boils down to the idea that things should be built to last. A fantastically useful structure that topples over after a couple of years would be widely regarded as a failure. A well-made building can last for centuries, even millennia. Ironically, none of Vitruvius’s own buildings survive, but the point still stands.

    This principle encompasses more aspects of architecture than might immediately come to mind.

    Durability will be assured when foundations are carried down to the solid ground and materials wisely and liberally selected.
    — Vitruvius

    In other words, choose your destination carefully, lay deep foundations, and use appropriate materials.

    Photograph of the Great Wall of China
    With some sections well over 2,000 years old, it’s safe to say the Great Wall of China was built to last. Photograph by Andrea Leopardi. (Large preview)

    We all instinctively understand longevity is a mark of good design. It reflects quality materials, meticulous planning, and loving maintenance. The Pantheon in Rome, or the Great Wall of China, are testaments to durable design, renowned as much for their longevity as for their majesty.

    The principle also concerns environmental factors. Are buildings designed with due thought to the strains of weather, earthquakes, erosion, etc.? If not, it may not be a building for long…

    Footage of the Tacoma Narrows Bridge shortly before its collapse
    The Tacoma Narrows Bridge had its durability put to the test after engineers cut corners on cost. Spoiler: it collapsed.

    It’s reassuring to know you can count on a structure not collapsing for a while, and in the long run, it usually winds up being cheaper. A durable building sits on strong foundations and uses materials appropriate to its purpose and its environment. Buildings that aren’t designed to last are typically glorified film sets. Before long, they are rubble.

    In practice


    Time seems to pass much faster on the web, but the principle of firmitas still applies. Given the endless turbulence of online life it makes sense to plant your flag in something sturdy. Out of the three qualities, it is the one least visible to users, but without it, everything else would fall apart.

    This starts with under-the-hood considerations. The foundations must be strong. Where will the website go? Is the content management system the right fit? Can your web hosting provider handle the expected traffic (and more) and still run smoothly? As anyone who has migrated from one CMS to another can tell you, it’s worth getting it right the first time if possible.

    A generic error message for websites with server issues
    This is what a crumbling website looks like. (Large preview)

    There is also the longevity of the web technologies you’re using. New frameworks may seem like a good idea at the time, but if a site needs to be around for years it may make sense to fall back on HTML, CSS, and JavaScript, as well as universally supported SEO markups like structured data. As in architecture, building things to last often means using established materials rather than newfangled ones.
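    Structured data itself is a good example of an established, durable material. Here is a minimal sketch using schema.org’s Restaurant type, generated with a small helper function; the restaurant details are placeholders, while the property names come from schema.org:

```typescript
// Build a schema.org Restaurant JSON-LD string from plain data.
// All restaurant details here are hypothetical placeholders.
interface RestaurantInfo {
  name: string;
  telephone: string;
  cuisine: string;
  openingHours: string[];
}

function restaurantJsonLd(info: RestaurantInfo): string {
  return JSON.stringify(
    {
      "@context": "https://schema.org",
      "@type": "Restaurant",
      name: info.name,
      telephone: info.telephone,
      servesCuisine: info.cuisine,
      openingHours: info.openingHours,
    },
    null,
    2
  );
}

const jsonLd = restaurantJsonLd({
  name: "Example Diner",
  telephone: "+1-555-0100",
  cuisine: "American",
  openingHours: ["Mo-Fr 11:00-22:00", "Sa-Su 10:00-23:00"],
});
console.log(jsonLd);
```

    The resulting JSON would be embedded in a script tag of type application/ld+json, so search engines can read it regardless of which framework, if any, renders the rest of the page.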

    Durability extends to design. Web pages have to bend and stretch and warp in ways that would make architects weep. A responsive website is a durable website. As new devices — foldables, for example — and markups come at us, websites need to be able to take them in stride. Architects don’t get to cross their arms and sulk about earthquakes, so why should web designers shy away from the hazards of the web? Great design faces up to environmental challenges; it doesn’t avoid them.

    As a site grows its users will become familiar with its design. The more that happens the more of a headache it is to make wholesale changes. If a site is designed carefully from the start then renovations are more likely than rebuilds, and the appearance remains familiar even when it is updated. In this sense, a site’s durability is helped immeasurably by clear purpose. That in itself is a kind of bedrock, helping to keep sites sturdy in times of change. Even the best sites need updates from time to time.

    Homepage screenshot of Smashing Magazine website
    Smashing Magazine’s 2017 redesign solidified behind-the-scenes elements like content management, job boards, and ecommerce while keeping the front-end character familiar. (Large preview)

    There is also the question of sustainability. Is due attention being paid to the commercial realities of the site? In other words, where is the box office? Be it paywalls, advertising, or membership systems, there’s no shame in incorporating these into the design process. They are not a site’s purpose, but they help make it durable.

    Beautiful (venustas)

    In principle

    As Vitruvius says, “the eye is always in search of beauty.” It is a perfectly legitimate quality to aim for.

    According to De architectura, beauty occurs “when the appearance of the work is pleasing and in good taste, and when its members are in due proportion according to correct principles of symmetry.”

    As well as being useful and well made, buildings ought also to be pleasing to the eye. Some may even touch the heart.

    An illustration for Vitruvius’s writings on architecture
    If you want to design a good temple, Vitruvius is useful for that, too. (Large preview)

    Vitruvius outlines several qualities that help make buildings beautiful. Symmetry and proportion were of particular interest to him (hence Da Vinci’s Vitruvian Man). Obsessively incorporating shapes into everything predates graphic design by a few millennia.

    Each element of a structure should be considered in relation to others near it, as well as to the environment in which it is being built. Vitruvius sums up this interplay with one word: eurythmy, a Greek term for harmonious rhythm. (British pop duo Eurythmics drew their name from the same term, in case you were wondering.) Vitruvius defines it in an architectural context as follows:

    Eurythmy is beauty and fitness in the adjustments of the members. This is found when the members of a work are of a height suited to their breadth, of a breadth suited to their length, and, in a word, when they all correspond symmetrically.

    Like music, buildings have rhythm; their individual pieces forming into a kind of harmony. A beautiful building might be the carved marble equivalent of a Beach Boys chorus, while an ugly one is like nails on a chalkboard.

    An example of McMansion Hell critiquing shoddy architecture
    For those curious what beautiful architecture doesn’t look like, McMansion Hell is a good place to start. (Large preview)

    As well as being well proportioned and symmetrical, individual pieces can enhance beauty in other ways. Good craftsmanship is beautiful, as is attention to detail. Materials appropriate to the structure are also beautiful — reflecting the sound judgment and good taste of the designer.

    Ornamentation is acceptable, but it must complement the core design of the structure — think column engravings, paving patterns, etc. All these little details and considerations amount to the building as a whole. When they all fall together, it’s breathtaking.

    In practice


    Beautiful websites adhere to many of the same standards as architecture. Proportion and symmetry are mainstays of attractive design. Grid systems serve the same purpose of organizing content clearly and attractively. Beyond that, there are questions of color, typography, imagery, and more, all of which feed into a website’s beauty — or lack thereof.
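    Proportion can even be computed. One common approach is a modular scale, where every type or spacing size is the previous one multiplied by a fixed ratio. This is a sketch, not a prescription; the base size and ratio below are arbitrary choices:

```typescript
// Generate a modular type scale: each step multiplies the base by the
// ratio. A 16px base and the "perfect fourth" ratio (1.333) are common
// starting points, but both values are arbitrary choices here.
function modularScale(base: number, ratio: number, steps: number): number[] {
  return Array.from({ length: steps }, (_, i) =>
    Math.round(base * Math.pow(ratio, i) * 100) / 100
  );
}

console.log(modularScale(16, 1.333, 5));
```

    Ratios like 1.25 (major third) or 1.618 (golden ratio) produce tighter or more dramatic scales; the point is that one number governs the whole hierarchy, which is eurythmy in miniature.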


    An aspect of venustas that is especially relevant to web design is how users can interact with it. As well as being nice to look at, websites have the potential to be playful, even surprising. It’s one thing to sit there and be admired; it’s another to invite visitors to become part of the beauty.

    Screenshot of Bruno Simon’s portfolio website
    Bruno Simon’s portfolio website invites visitors to drive around using their arrow keys. (Large preview)

    Google’s interactive doodles are another good — and less daunting — example of this. Covering all manner of subjects, they invite users to play games, to learn, and to be entertained. It’s nice in its own right, and aligns with Google’s purpose as a source of information.

    Example of a Google Doodle
    Ironically, this is just a GIF of an interactive thing rather than the interactive thing itself, but you can see the full doodle and details of its making here. (Large preview)

    With the web continuing its shift towards mobile-first experience, in which users can literally touch the websites they visit, it should be remembered that beauty pertains to all the senses — not just sight.

    As for the ‘environment’, with web design that is the device it is being displayed on. Unlike buildings, websites don’t have the luxury of being one shape at all times. To be beautiful they must be responsive, shifting size and proportion to complement the device. This is pleasing on its own, and done well, the shape-shifting itself becomes beautiful in its own way.

    A Balancing Act

    Vitruvius’s rules of utilitas, firmitas, and venustas have endured because they work, and they have endured as a triad because they work best together. Attaining all three is a balancing act. If they pull in different directions then the quality of whatever is being made will suffer. Beautiful but unusable is poor design, for example. On the flip side, when they work together the result can be far greater than the sum of their parts.

    As with architecture, this requires a bird’s eye view. The pieces cannot be done one at a time; they must be done with the others in mind.

    The architect, as soon as he has formed the conception, and before he begins the work, has a definite idea of the beauty, the convenience, and the propriety that will distinguish it.
    — Vitruvius

    No doubt the details will change, but the harmony should not.

    This extends to the people making a website. As with architecture, websites typically have to balance the wants of a client, an architect, and a builder — not to mention investors, financiers, statisticians, and so on. For a website to be harmonious, the people responsible for building it must be harmonious too.

    None of this is to say that the three qualities are equally important regardless of the project — only that each should be given due thought in relation to the others. The usefulness of the Eiffel Tower seems fairly trivial, as does the beauty of the Hoover Dam, and that’s fine. If a website is made to be ornamental or temporary, it doesn’t have to be more than that. The natures of utilitas, firmitas, and venustas themselves change depending on the project. Like most rules worth following, don’t be afraid to bend — or even break — them when the mood takes you.

    My Website Is A Temple

    Web developers are the architects of the Internet, and websites are their buildings. Vitruvius makes a point of saying architects are not — and indeed cannot be — experts in every field. Instead, they are jacks of all trades (my phrasing, not his). For the triad to be achieved it is better to have a good grasp of many subjects than expertise in one:

    Let him be educated, skillful with the pencil, instructed in geometry, know much history, have followed the philosophers with attention, understand music, have some knowledge of medicine, know the opinions of the jurists, and be acquainted with astronomy and the theory of the heavens.

    The relevance of some of these is obvious, others less so, but it’s all valuable to architects and web developers alike. Geometry informs proportion and layout; history puts designs in context and ensures they are understood as they are meant to be; philosophy helps us to approach projects honestly and ethically; music awakens us to the role of sound; medicine gives thought to accessibility, and potential strains on the eye, ear, or even thumb; and law looms larger now than ever. The theory of the heavens might be a stretch, but you get the idea.


    Not that theory alone will get you there. There’s no substitute for learning through doing. As the Stanford Encyclopedia of Philosophy notes, “the Vitruvian picture of architecture is rooted in experiential knowledge of making, doing, and crafting.” Or better yet, as Vitruvius himself puts it: “Knowledge is the child of practice and theory.”

    The Vitruvian triad is a worthy standard to use whether you’re building a coliseum or a portfolio website. Not everyone has the luxury of (or budget for) a team of experts, and even if we did, why deny ourselves the breadth of knowledge that strong design requires? We can build Levittown or we can build Rome, and everything in between. A useful, durable, beautiful Internet sounds like a good deal to me.

    Smashing Editorial
    (ra, yk, il)


    Building A Facial Recognition Web Application With React — Smashing Magazine


    About The Author

    Adeneye David Abiodun is a JavaScript lover and tech enthusiast, founder of @corperstechhub and currently a lecturer / IT technologist at @critm_ugep. I build …

    In this article, Adeneye David Abiodun explains how to build a facial recognition web app with React by using the Face Recognition API, as well as the Face Detection model and Predict API. The app built in this article is similar to the face detection box on a pop-up camera in a mobile phone — it’s able to detect a human face in any image fetched from the Internet.

    Please note that in order to follow this article in detail, you will need to know the fundamentals of React.

    If you are going to build a facial recognition web app, this article will introduce you to an easy way of integrating one. We will take a look at the Face Detection model and Predict API for our face recognition web app with React.

    What Is Facial Recognition And Why Is It Important?

    Facial recognition is a technology that classifies and recognizes human faces, mostly by mapping individual facial features, recording the unique ratios mathematically, and storing the data as a face print. The face detection in your mobile camera makes use of this technology.

    How Facial Recognition Technology Works

    Facial recognition is an enhanced application of biometric software that uses a deep learning algorithm to compare a live capture or digital image to the stored face print in order to verify an individual’s identity. Deep learning is a class of machine learning algorithms that uses multiple layers to progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify concepts relevant to a human, such as digits, letters, or faces.

    Facial detection is the process of identifying a human face within a scanned image; the extraction process involves obtaining facial-region measurements such as eye spacing, variation, angle, and ratio to determine whether the object is human.

    Note: The scope of this tutorial is far beyond this; you can read more on this topic in “Mobile App With Facial Recognition Feature: How To Make It Real”. In today’s article, we’ll only be building a web app that detects a human face in an image.

    A Brief Introduction To Clarifai

    In this tutorial, we will be using Clarifai, a platform for visual recognition that offers a free tier for developers. It offers a comprehensive set of tools that enable you to manage your input data, annotate inputs for training, create new models, and predict and search over your data. There are other face recognition APIs that you can use as well; their documentation will help you integrate them into your app, as almost all of them use a similar model and process for detecting a face.

    Getting Started With Clarifai API

    In this article, we are focusing on just one of the Clarifai models, called Face Detection. This particular model returns probability scores on the likelihood that the image contains human faces, along with the coordinates of where those faces appear in the form of a bounding box. This model is great for anyone building an app that monitors or detects human activity. The Predict API analyzes your images or videos and tells you what’s inside them: it returns a list of concepts with corresponding probabilities of how likely it is that these concepts are contained within the image.
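To make that concrete, a successful Face Detection response is shaped roughly like the sketch below. The field names follow the access path used later in this tutorial (via the Clarifai v2 JavaScript client); the numeric values here are illustrative samples, not real output.

```javascript
// Illustrative shape of a Face Detection response (field names per the
// Clarifai v2 client; the numbers are sample values, not real output).
const sampleResponse = {
  outputs: [
    {
      data: {
        regions: [
          {
            region_info: {
              // each value is a fraction (0–1) of the image width or height
              bounding_box: {
                top_row: 0.10079138,
                left_col: 0.29458505,
                bottom_row: 0.52811456,
                right_col: 0.6106333,
              },
            },
          },
        ],
      },
    },
  ],
};

// The path we will drill into later in the tutorial:
const box =
  sampleResponse.outputs[0].data.regions[0].region_info.bounding_box;
```

Keep this shape in mind: everything we do later with the response boils down to reading that nested bounding_box object.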

    You will get to integrate all of this with React as we continue with the tutorial; now that you have briefly learned more about the Clarifai API, you can dive deeper into it here.

    What we are building in this article is similar to the face detection box on a pop-up camera in a mobile phone. The image presented below will give more clarification:

    Sample-App. (Large preview)

    You can see a rectangular box detecting a human face. This is the kind of simple app we will be building with React.

    Setting Up The Development Environment

    The first step is to create a new directory for your project and start a new React project. You can give it any name of your choice. I will be using the npm package manager for this project, but you can use Yarn if you prefer.

    Note: Node.js is required for this tutorial. If you don’t have it, go to the Node.js official website to download and install before continuing.

    Open your terminal and create a new React project.

    We are using create-react-app, which is a comfortable environment for learning React and the best way to start building a new single-page application in React. It is a global package that we install from npm; it creates a starter project that contains webpack, Babel, and a lot of nice features.

    /* install react app globally */
    npm install -g create-react-app
    /* create the app in your new directory */
    create-react-app face-detect
    /* move into your new react directory */
    cd face-detect
    /* start development server */
    npm start

    Let me first explain the code above. We use npm install -g create-react-app to install the create-react-app package globally so you can use it in any of your projects. create-react-app face-detect creates the project environment for you since it’s available globally. After that, cd face-detect moves you into the project directory, and npm start starts the development server. Now we are ready to start building our app.

    You can open the project folder with any editor of your choice. I use Visual Studio Code: it’s a free editor with tons of plugins to make your life easier, and it is available for all major platforms. You can download it from the official website.

    At this point, you should have the following folder structure.

    ├── node_modules
    ├── public 
    ├── src
    ├── .gitignore
    ├── package-lock.json
    ├── package.json

    Note: React provides us with a single-page app template; let us get rid of what we won’t be needing. First, delete the logo.svg file in the src folder and replace the code you have in src/App.js so it looks like this:

    import React, { Component } from "react";
    import "./App.css";

    class App extends Component {
      render() {
        return <div className="App" />;
      }
    }

    export default App;

    What we did was clear the component by removing the logo and other unnecessary code that we will not be making use of. Now replace your src/App.css with the minimal CSS below:

    .App {
      text-align: center;
    }

    .center {
      display: flex;
      justify-content: center;
    }

    We’ll be using Tachyons for this project. It is a tool that allows you to create fast-loading, highly readable, and 100% responsive interfaces with as little CSS as possible.

    You can install Tachyons into this project through npm:

    # install tachyons into your project
    npm install tachyons

    After the installation has completed, let us add Tachyons to our project in the src/index.js file.

    import React from "react";
    import ReactDOM from "react-dom";
    import "./index.css";
    import App from "./App";
    import * as serviceWorker from "./serviceWorker";
    // add tachyons into your project; note that this is the only line of code you are adding here
    import "tachyons";
    ReactDOM.render(<App />, document.getElementById("root"));
    // If you want your app to work offline and load faster, you can change
    // unregister() to register() below. Note this comes with some pitfalls.
    // Learn more about service workers:

    The code above isn’t different from what you had before; all we did was add the import statement for Tachyons.

    Now let us give our interface some styling in the src/index.css file.

    body {
      margin: 0;
      font-family: "Courier New", Courier, monospace;
      -webkit-font-smoothing: antialiased;
      -moz-osx-font-smoothing: grayscale;
      background: #485563; /* fallback for old browsers */
      background: linear-gradient(
        to right,
        #29323c,
        #485563
      ); /* W3C, IE 10+/ Edge, Firefox 16+, Chrome 26+, Opera 12+, Safari 7+ */
    }

    button {
      cursor: pointer;
    }

    code {
      font-family: source-code-pro, Menlo, Monaco, Consolas, "Courier New",
        monospace;
    }

    In the code block above, I added a background color and a cursor pointer to our page. At this point we have our interface set up; let’s start creating our components in the next section.

    Building Our React Components

    In this project, we’ll have two components: a URL input box to fetch images for us from the internet — ImageSearchForm — and an image component to display our image with a face detection box — FaceDetect. Let us start building our components below:

    Create a new folder called Components inside the src directory. Create another two folders called ImageSearchForm and FaceDetect inside src/Components. After that, open the ImageSearchForm folder and create two files as follows: ImageSearchForm.js and ImageSearchForm.css.

    Then open the FaceDetect directory and create two files as follows: FaceDetect.js and FaceDetect.css.

    When you are done with all these steps, your folder structure should look like this in the src/Components directory:

    ├── src
      ├── Components 
        ├── FaceDetect
          ├── FaceDetect.css 
          ├── FaceDetect.js 
        ├── ImageSearchForm
          ├── ImageSearchForm.css 
          ├── ImageSearchForm.js

    At this point, we have our Components folder structure. Now let us import them into our App component. Open your src/App.js file and make it look like what I have below.

    import React, { Component } from "react";
    import "./App.css";
    import ImageSearchForm from "./components/ImageSearchForm/ImageSearchForm";
    // import FaceDetect from "./components/FaceDetect/FaceDetect";

    class App extends Component {
      render() {
        return (
          <div className="App">
            <ImageSearchForm />
            {/* <FaceDetect /> */}
          </div>
        );
      }
    }

    export default App;

    In the code above, we imported our components into our app and mounted them. Notice that FaceDetect is commented out: we won’t be working on it until the next section, and commenting it out avoids errors in the code.

    To start working on our ImageSearchForm file, open the ImageSearchForm.js file and let us create our component below.
    The example below is our ImageSearchForm component, which will contain an input form and a button.

    import React from "react";
    import "./ImageSearchForm.css";

    // image search form component
    const ImageSearchForm = () => {
      return (
        <div className="ma5 mto">
          <div className="center">
            <div className="form center pa4 br3 shadow-5">
              <input className="f4 pa2 w-70 center" type="text" />
              <button className="w-30 grow f4 link ph3 pv2 dib white bg-blue">
                Detect
              </button>
            </div>
          </div>
        </div>
      );
    };

    export default ImageSearchForm;

    In the component above, we have our input form to fetch the image from the web and a Detect button to perform the face detection action. I’m using Tachyons CSS here, which works like Bootstrap; all you have to do is add the class names via className. You can find more details on their website.

    To style our component, open the ImageSearchForm.css file. Now let’s style the components below:

    .form {
      width: 700px;
      background: radial-gradient(
          circle,
          transparent 20%,
          slategray 20%,
          slategray 80%,
          transparent 80%,
          transparent
        ),
        radial-gradient(
            circle,
            transparent 20%,
            slategray 20%,
            slategray 80%,
            transparent 80%,
            transparent
          )
          50px 50px,
        linear-gradient(#a8b1bb 8px, transparent 8px) 0 -4px,
        linear-gradient(90deg, #a8b1bb 8px, transparent 8px) -4px 0;
      background-color: slategray;
      background-size: 100px 100px, 100px 100px, 50px 50px, 50px 50px;
    }

    This CSS is a pattern for our form background, just to give it a beautiful design. You can generate a CSS pattern of your choice and use it in place of this one.

    Open your terminal again to run your application.

    /* To start development server again */
    npm start

    We have our ImageSearchForm component displayed in the image below.

    Image-Search-Page. (Large preview)

    Now we have our application running with our first components.

    Image Recognition API

    It’s time to create some functionality: we enter an image URL, press Detect, and an image appears with a face detection box if a face exists in the image. Before that, let’s set up our Clarifai account to be able to integrate the API into our app.

    How To Set Up A Clarifai Account

    This API makes it possible to utilize Clarifai’s machine learning services. For this tutorial, we will be making use of the tier that’s available for free to developers, with 5,000 operations per month. You can read more and sign up; after signing in, it will take you to your account dashboard. Click on my first application or create an application to get the API key that we will be using in this app as we progress.

    Note: You cannot use mine, you have to get yours.

    Clarifai-Dashboard. (Large preview)

    This is how your dashboard above should look. Your API key there provides you with access to Clarifai services. The arrow below the image points to a copy icon to copy your API key.

    If you go to the Clarifai models you will see that they use machine learning to train what are called models: they train a computer by giving it many pictures. You can also create your own model and teach it with your own images and concepts. Here, though, we will be making use of their Face Detection model.

    The Face Detection model has a Predict API we can make a call to (read more in the documentation here).
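Under the hood, the clarifai client we are about to install simply makes an HTTP call to this Predict API. As a rough sketch of what that request looks like — the endpoint shape follows Clarifai’s v2 REST documentation, and the model ID and key here are placeholders, not working values:

```javascript
// Sketch of the raw HTTP request the clarifai npm client builds for us.
// MODEL_ID and the API key are placeholders you must supply yourself.
const buildPredictRequest = (apiKey, modelId, imageUrl) => ({
  url: `https://api.clarifai.com/v2/models/${modelId}/outputs`,
  options: {
    method: "POST",
    headers: {
      Authorization: `Key ${apiKey}`, // your Clarifai API key
      "Content-Type": "application/json",
    },
    // the image is referenced by URL inside an `inputs` array
    body: JSON.stringify({
      inputs: [{ data: { image: { url: imageUrl } } }],
    }),
  },
});
```

The npm client hides all of this behind app.models.predict, which is what we will use in the rest of the tutorial.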

    So let’s install the clarifai package below.

    Open your terminal and run this code:

    /* Install the client from npm */
    npm install clarifai

    When you are done installing clarifai, we need to import the package into our app with an import statement, as we learned earlier.

    Next, we need to create functionality in our input search box to detect what the user enters. We need a state value so that our app knows what the user entered, remembers it, and updates it anytime it changes.

    You need to have your API key from Clarifai and must have also installed clarifai through npm.

    The example below shows how we import clarifai into the app and also implement our API key.

    Note that (as a user) you have to fetch a clear image URL from the web and paste it in the input field; that URL will be the state value of imageUrl below.

    import React, { Component } from "react";
    // Import Clarifai into our App
    import Clarifai from "clarifai";
    import ImageSearchForm from "./components/ImageSearchForm/ImageSearchForm";
    // Uncomment FaceDetect Component
    import FaceDetect from "./components/FaceDetect/FaceDetect";
    import "./App.css";

    // You need to add your own API key here from Clarifai.
    const app = new Clarifai.App({
      apiKey: "ADD YOUR API KEY HERE",
    });

    class App extends Component {
      // Create the state for the input and the fetched image
      constructor() {
        super();
        this.state = {
          input: "",
          imageUrl: "",
        };
      }

      // setState for our input with the onInputChange function
      onInputChange = (event) => {
        this.setState({ input: event.target.value });
      };

      // Perform a function when submitting with onSubmit
      onSubmit = () => {
        // set imageUrl state
        this.setState({ imageUrl: this.state.input });
        app.models.predict(Clarifai.FACE_DETECT_MODEL, this.state.input).then(
          function (response) {
            // response data fetched from FACE_DETECT_MODEL
            console.log(response);
            /* the section of data we need from the response;
               we are just comparing the two logs for better understanding,
               and we will delete the console.log above later */
            console.log(
              response.outputs[0].data.regions[0].region_info.bounding_box
            );
          },
          function (err) {
            // there was an error
            console.log(err);
          }
        );
      };

      render() {
        return (
          <div className="App">
            {/* update your components with their state */}
            <ImageSearchForm
              onInputChange={this.onInputChange}
              onSubmit={this.onSubmit}
            />
            {/* uncomment FaceDetect and update it with the imageUrl state */}
            <FaceDetect imageUrl={this.state.imageUrl} />
          </div>
        );
      }
    }

    export default App;

    In the code block above, we imported clarifai so that we can access the Clarifai services, and we added our API key. We use state to manage the values of input and imageUrl. We have an onSubmit function that gets called when the Detect button is clicked: it sets the state of imageUrl and fetches the image with the Clarifai FACE_DETECT_MODEL, which returns either response data or an error.

    For now, we’re logging the data we get from the API to the console; we’ll use it later when calculating the face detection box.

    At the moment, there will be an error in your terminal because we still need to update the ImageSearchForm and FaceDetect component files.

    Update the ImageSearchForm.js file with the code below:

    import React from "react";
    import "./ImageSearchForm.css";
    // update the component with its parameters
    const ImageSearchForm = ({ onInputChange, onSubmit }) => {
      return (
        <div className="ma5 mto">
          <div className="center">
            <div className="form center pa4 br3 shadow-5">
              <input
                className="f4 pa2 w-70 center"
                type="text"
                // add an onChange to monitor the input state
                onChange={onInputChange}
              />
              <button
                className="w-30 grow f4 link ph3 pv2 dib white bg-blue"
                // add an onClick function to perform the task
                onClick={onSubmit}
              >
                Detect
              </button>
            </div>
          </div>
        </div>
      );
    };
    export default ImageSearchForm;

    In the code block above, we passed onInputChange from props as a function to be called when an onChange event happens on the input field; we do the same with the onSubmit function, which we tie to the onClick event of the button.

    Now let us create the FaceDetect component that we uncommented in src/App.js above. Open the FaceDetect.js file and input the code below.

    In the example below, we create the FaceDetect component and pass it the imageUrl prop.

    import React from "react";

    // Pass imageUrl to the FaceDetect component
    const FaceDetect = ({ imageUrl }) => {
      return (
        // this div is the container holding our fetched image and the face detection box
        <div className="center ma">
          <div className="absolute mt2">
            {/* we set our image src to the URL of the fetched image */}
            <img alt="" src={imageUrl} width="500px" height="auto" />
          </div>
        </div>
      );
    };

    export default FaceDetect;

    This component displays the image we fetch as a result of the response we get from the API. This is why we are passing imageUrl down to the component as a prop, which we then set as the src of the img tag.

    Now both our ImageSearchForm and FaceDetect components are working. The Clarifai FACE_DETECT_MODEL has detected the position of the face in the image and provided us with data (but not a box), which you can check in the console.

    Image-Link-Form. (Large preview)

    Now our FaceDetect component is working and the Clarifai model fetches an image from the URL we enter in the ImageSearchForm component. To see the data response Clarifai provides for us to annotate our result, and the section of the data we will need from the response, remember that we made two console.log calls in the App.js file.

    So let’s open the console to see the response like mine below:

    Image-Link-Form[Console]. (Large preview)

    The first console.log statement, which you can see above, is the full response data from the Clarifai FACE_DETECT_MODEL, made available to us on success, while the second console.log is the data we are making use of in order to detect the face, found at response.outputs[0].data.regions[0].region_info.bounding_box. In the second console.log, the bounding_box data are:

    bottom_row: 0.52811456
    left_col: 0.29458505
    right_col: 0.6106333
    top_row: 0.10079138

    This might look confusing, so let me break it down briefly. At this point, the Clarifai FACE_DETECT_MODEL has detected the position of the face in the image and provided us with data, but not a box; it’s up to us to do a little bit of math to display the box (or do anything else we want with the data) in our application. So let me explain the data above:

    bottom_row: 0.52811456 This indicates that the bottom edge of our face detection box sits at about 53% of the image height, measured from the top.
    left_col: 0.29458505 This indicates that the left edge of the box sits at about 29% of the image width, measured from the left.
    right_col: 0.6106333 This indicates that the right edge of the box sits at about 61% of the image width, measured from the left.
    top_row: 0.10079138 This indicates that the top edge of the box sits at about 10% of the image height, measured from the top.
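To make the numbers concrete, here is the same data converted to pixel positions for a hypothetical image rendered at 500×500 pixels (the dimensions are an assumption for the example):

```javascript
// Worked example: converting the fractional bounding_box values into
// pixel positions, assuming the image is rendered at 500×500 pixels.
const boundingBox = {
  top_row: 0.10079138,
  left_col: 0.29458505,
  bottom_row: 0.52811456,
  right_col: 0.6106333,
};
const width = 500;
const height = 500;

// All four fractions are measured from the top-left corner of the image:
const leftEdge = boundingBox.left_col * width; // ≈ 147px from the left
const topEdge = boundingBox.top_row * height; // ≈ 50px from the top
const rightEdge = boundingBox.right_col * width; // ≈ 305px from the left
const bottomEdge = boundingBox.bottom_row * height; // ≈ 264px from the top
```

So the face occupies a region roughly 158px wide and 214px tall in that 500×500 rendering.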

    If you take a look at our app interface above, you will see that the model accurately detects the bounding_box of the face in the image. However, it is left to us to write a function that creates the box, including the styling to display it, based on the response data provided for us by the API. Let’s implement that in the next section.

    Creating A Face Detection Box

    This is the final section of our web app, where we get our facial recognition working fully by calculating the face location of any image fetched from the web with the Clarifai FACE_DETECT_MODEL and then displaying a facial box. Let’s open our src/App.js file and include the code below:

    In the example below, we create a calculateFaceLocation function that does a little bit of math on the response data from Clarifai, calculating the coordinates of the face relative to the image width and height so that we can give it a style to display the face box.

    import React, { Component } from "react";
    import Clarifai from "clarifai";
    import ImageSearchForm from "./components/ImageSearchForm/ImageSearchForm";
    import FaceDetect from "./components/FaceDetect/FaceDetect";
    import "./App.css";

    // You need to add your own API key here from Clarifai.
    const app = new Clarifai.App({
      apiKey: "ADD YOUR API KEY HERE",
    });

    class App extends Component {
      constructor() {
        super();
        this.state = {
          input: "",
          imageUrl: "",
          box: {}, // a new object state that holds the bounding_box values
        };
      }

      // this function calculates the face-detection location in the image
      calculateFaceLocation = (data) => {
        const clarifaiFace =
          data.outputs[0].data.regions[0].region_info.bounding_box;
        const image = document.getElementById("inputimage");
        const width = Number(image.width);
        const height = Number(image.height);
        return {
          leftCol: clarifaiFace.left_col * width,
          topRow: clarifaiFace.top_row * height,
          rightCol: width - clarifaiFace.right_col * width,
          bottomRow: height - clarifaiFace.bottom_row * height,
        };
      };

      /* this function displays the face-detection box based on the state values */
      displayFaceBox = (box) => {
        this.setState({ box: box });
      };

      onInputChange = (event) => {
        this.setState({ input: event.target.value });
      };

      onSubmit = () => {
        this.setState({ imageUrl: this.state.input });
        app.models
          .predict(Clarifai.FACE_DETECT_MODEL, this.state.input)
          .then((response) =>
            // the result of calculateFaceLocation is passed to displayFaceBox as its parameter
            this.displayFaceBox(this.calculateFaceLocation(response))
          )
          // if an error exists, console.log the error
          .catch((err) => console.log(err));
      };

      render() {
        return (
          <div className="App">
            <ImageSearchForm
              onInputChange={this.onInputChange}
              onSubmit={this.onSubmit}
            />
            {/* box state passed to the FaceDetect component */}
            <FaceDetect box={this.state.box} imageUrl={this.state.imageUrl} />
          </div>
        );
      }
    }

    export default App;

    The first thing we did here was create another state value called box, an empty object that will hold the bounding-box values we receive. The next thing we did was create a function called calculateFaceLocation, which receives the response we get from the API when we call it in the onSubmit method. Inside the calculateFaceLocation method, we assign image to the element object we get from calling document.getElementById("inputimage"), which we use to perform some calculations.

    leftCol clarifaiFace.left_col is a fraction of the width; multiplying it by the image width gives the pixel position of the left edge, which is where left_col should be.
    topRow clarifaiFace.top_row is a fraction of the height; multiplying it by the image height gives the pixel position of the top edge, which is where top_row should be.
    rightCol This subtracts (clarifaiFace.right_col × width) from the width to get the distance from the right edge, which is where right_col should be.
    bottomRow This subtracts (clarifaiFace.bottom_row × height) from the height to get the distance from the bottom edge, which is where bottom_row should be.
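The four steps above can be captured as a pure function: a DOM-free sketch of calculateFaceLocation that takes the image dimensions as plain arguments (so it can be checked without a rendered img element).

```javascript
// A DOM-free sketch of calculateFaceLocation: the same math as the
// component method, with width and height passed in as plain numbers.
const faceBoxFromFractions = (clarifaiFace, width, height) => ({
  leftCol: clarifaiFace.left_col * width, // CSS `left` inset
  topRow: clarifaiFace.top_row * height, // CSS `top` inset
  rightCol: width - clarifaiFace.right_col * width, // CSS `right` inset
  bottomRow: height - clarifaiFace.bottom_row * height, // CSS `bottom` inset
});
```

Feeding it the sample bounding_box from earlier with an assumed 500×500 image gives a box inset roughly 147px from the left, 50px from the top, 195px from the right, and 236px from the bottom.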

    In the displayFaceBox method, we update the state of box value to the data we get from calling calculateFaceLocation.

    We need to update our FaceDetect component. To do that, open the FaceDetect.js file and add the following update to it.

    import React from "react";
    // add css to style the facebox
    import "./FaceDetect.css";

    // pass the box state to the component
    const FaceDetect = ({ imageUrl, box }) => {
      return (
        <div className="center ma">
          <div className="absolute mt2">
            {/* insert an id to be able to manipulate the image in the DOM */}
            <img
              id="inputimage"
              alt=""
              src={imageUrl}
              width="500px"
              height="auto"
            />
            {/* this div displays the face-detection box based on the bounding box values;
                the inline styling makes the box visible based on the returned values */}
            <div
              className="bounding-box"
              style={{
                top: box.topRow,
                right: box.rightCol,
                bottom: box.bottomRow,
                left: box.leftCol,
              }}
            ></div>
          </div>
        </div>
      );
    };

    export default FaceDetect;

    In order to show the box around the face, we pass the box object down from the parent component into the FaceDetect component, and then use it to style a div rendered over the image.

    We imported a CSS file we have not yet created; open FaceDetect.css and add the following style:

    .bounding-box {
      position: absolute;
      box-shadow: 0 0 0 3px #fff inset;
      display: flex;
      flex-wrap: wrap;
      justify-content: center;
      cursor: pointer;
    }

    Note the style and our final output below: we set the box-shadow color to white and the display to flex.

    At this point, your final output should look like the image below, with the face detection working, a face box displayed, and a white border color.

    Final App. (Large preview)

    Let’s try another image below:

    Final App. (Large preview)


    I hope you enjoyed working through this tutorial. We’ve learned how to build a face recognition app that can be integrated into future projects with more functionality, and you’ve also learned how to use an amazing machine learning API with React. You can always read more about the Clarifai API in the references below. If you have any questions, you can leave them in the comments section and I’ll be happy to answer every single one and work you through any issues.

    The supporting repo for this article is available on GitHub.

    Resources And Further Reading

    Smashing Editorial
    (ks, ra, yk, il)
