
    What Saul Bass Can Teach Us About Web Design — Smashing Magazine

    02/12/2021

    About The Author

    Frederick O’Brien is a freelance journalist who conforms to most British stereotypes. His interests include American literature, graphic design, sustainable …

    Film credits, brand logos, posters… Saul Bass did it all, and the principles that informed his work are just as valuable now as they were 50 years ago.

    Web design exists at a lovely intersection of different disciplines. In previous articles, I’ve written about the lessons to be learned from newspapers and from ancient Roman architects. This time we’ll be looking at one of the all-time great graphic designers — Saul Bass.

    Saul Bass is a graphic design legend. He was responsible for title sequences in films like North by Northwest and Anatomy of a Murder, as well as a number of iconic posters and brand logos over the years. His work, in the words of Martin Scorsese, “distilled the poetry of the modern, industrialized world.”

    A selection of corporate logos designed by Saul Bass
    From United Airlines to AT&T, Saul Bass designed some of the most iconic logos of all time.

    We’re in a different world now, a breakneck speed digital world, but that carries with it its own poetry. Although the backdrop has changed, Saul Bass’s methods and mindset have stood the test of time, and web designers would do well to remember them.

    Before getting into the particulars of Saul Bass and his work, it’s worth outlining his approach to design in broader terms. Big characters inspire big ideas, but as is so often the case the real trick is in the details.

    Concerning his approach to title sequences, Bass said:

    “Deal with ordinary things, things that we know so well that we’ve ceased to see them; deal with them in a way that allows us to understand them anew — in a sense making the ordinary extraordinary.”

    — Saul Bass

    A similar ethos can and should be applied to web design. As we look at his work, yes, by all means envision homepage splashes, but also think about buttons and signup forms and legal disclaimers. There is just as much beauty to be found in the little things. Sometimes more.

    Saul Bass-designed poster for the feature film ‘Grand Prix’
    Nothing Saul Bass did was an afterthought. Every element had to fit perfectly with everything else, from titles to credits. (Large preview)

    That Bass is even renowned for title sequences is a testament to his creativity. Before Saul Bass entered the scene, film titles were usually dull affairs, names and static images delivered with all the pizzazz of divorce papers. Under his eye, they became pieces of art, statements on the tone and texture of what was to come. As he so brilliantly put it,

    “Design is thinking made visual.”

    — Saul Bass


    Color

    Let’s start with the most basic aspect — color. Bass once said that ‘audience involvement with a film should begin with its first frame.’ So too should visitor involvement begin upon first load. We process the colors and arrangement of a website before we have time to process its content. You’ll find no greater advocate for quality content than me, but it is hampered if not given a quality canvas to unfold on.

    For Bass, this typically translated into simple, vivid color palettes with no more than three or four colors. Not too busy, but plenty of pop. Red, white, and black is one of the golden color combinations — one Bass put to use many times. Bright colors don’t always mean ‘loud’, sometimes they mean ‘striking.’

    Posters for the films ‘Vertigo’ and ‘Advise & Consent’
    Saul Bass loved him some red, white, and black. (Large preview)
    Album artwork of ‘Tone Poems of Color’ by Frank Sinatra
    More stimulating than Sinatra’s mug, no? (Source: MoMA)

    What does this mean in terms of web design? Well, a little more than ‘use bright colors,’ I’m afraid. Study color theory, then apply it to your projects in tasteful, audacious ways. The ‘Colors’ category of Smashing Magazine is home to plenty of excellent articles on the subject, and it’s well worth the attention. The right palette can set a tone before visitors have even processed what they’re looking at.

    For an uncannily Saul Bass-esque example of color and shape in action on the web, take the website of the Holiday Center for Working Youth in Ottendorf. What better way to celebrate bold, functional architecture than through bold, functional design? It’s like a Vertigo poster in digital form.

    Screenshot of the website homepage for the Holiday Center for Working Youth in Ottendorf
    The website celebrates legacy not just through words, but through color too.

    Red, white, and black isn’t always the answer (though it is an incredibly sharp combination). The right mix depends on the story you’re trying to tell, and how you’re trying to tell it. Saul Bass knew full well that color is an incredibly powerful tool, and it’s one still often underused in the prim, white-space world of today’s web.

    Screenshot of the Lubimovka Festival website homepage
    The Lubimovka Festival for Russian-speaking playwrights uses color on its website to convey the vibrance of what it does. It takes what could have been a stuffy old image of Shakespeare and makes it dynamic and fun.

    Audience involvement with a website begins with color, so make it count.

    Typography

    Words, words, words. Design may be thinking made visual, but sometimes the best way to say something is to come right out with it in words. Bass had a typographical style almost as distinctive as his visual one. Rough, hand-drawn, and almost always all-caps, his lettering made words powerful without being overbearing.

    Collection of stills from the ‘North by Northwest’ opening title sequence
    The title sequence of North by Northwest weds typography with color to turn the mundane into the spectacular. (Source: Art of the Title)

    Fonts can tell stories too. They communicate tone of voice, formality, importance, and structure, among other things. Combined with a strong color scheme they can make copy dance where it might otherwise slouch along feeling sorry for itself.

    Screenshot of the Kotak Toys website homepage
    Russian toymaker Kotak uses typography to reflect the playful, mix-and-match nature of their stacking toys.

    Copywriter Jon Ryder showcases this beautifully on his personal website, which is the full package of strong color and bold, playful typography. As you click the prompts the copy rearranges and edits itself. It’s a brilliant idea elegantly executed. If Saul Bass were around to design portfolios, this is the kind of thing you’d expect him to come up with.

    Screenshot of copywriter Jon Ryder’s portfolio website homepage

    Art of the Title refers to Bass’s approach as ‘kinetic typography’, and I think that’s a lovely turn of phrase to keep in mind when choosing font combinations for the web. Yes, Times New Roman or Arial will do a job, but with the wealth of free fonts and CSS stylings available, why wouldn’t you want to try giving your words more life? It’s not always appropriate, but sometimes it can be just the ticket.


    Drawing

    This one is as much about the process as it is about websites themselves. Saul Bass was a big believer in drawing. Even as technologies advanced and opportunities arose to streamline the design process, he understood there is no substitute for working with your hands when trying to get ideas out of your head and into the world. To aspiring designers, he advised,

    “Learn to draw. If you don’t, you’re going to live your life getting around that and trying to compensate for that.”

    Storyboard sketches of the shower scene in the Alfred Hitchcock feature film ‘Psycho’
    The shower scene in Psycho was storyboarded by none other than Saul Bass.

    Whatever it is you’re dealing with — page layout, logos, icons — there is no faster way to get the ideas out of your head than by drawing them. In this day and age that doesn’t necessarily mean pen and paper; you can always use tablets and the like, but the underlying principle is the same. There are no presets — just you and your ideas. I’m no Saul Bass, but I’ve had a few good ideas in my time (at least two or three) and most of them happened almost by accident in the flow of drawing.

    Pencil sketch plan of a New York Times front page spread

    The value of drawing pops up in the unlikeliest of places, and I love it every time it does. Every front page of The New York Times starts as a hand-drawn pencil sketch, for example. Are there fancy computer programs that could do a similar job? Sure, and they’re used eventually, but they’re not used first. It doesn’t matter if they’re brainstorming corporate logos, revamping a website’s homepage, or preparing the front page of a newspaper — designers draw.


    An Interdisciplinary Approach

    It’s near impossible to fix one label on Saul Bass. At any given time he was a graphic designer, a filmmaker, a photographer, an architect. The list goes on and on. Having to be literate in so many areas was a necessity, but it was also a genuine passion, a constant curiosity.

    Take the title sequence of Vertigo. Its iconic spiral aesthetic dated back years, to when Bass came across spiral diagrams by the 19th-century French mathematician Jules-Antoine Lissajous. When asked to work on Vertigo, the idea clicked into place immediately. Mathematical theory found its way into an Alfred Hitchcock film poster, and who are we to argue with the results?

    A selection of Lissajous curve diagrams

    Having a specialization is obviously important in any field, but there is so much to be gained from stepping outside our lanes. Anyone with even a casual interest in web development has almost certainly found themselves needing a similarly protean approach — whether they wanted to or not.

    Screenshot of designer Tonya Baydina’s portfolio website homepage
    Sometimes websites need photography, others illustration, others geometry or video or data visualisation. You won’t know until you try. This is the portfolio website of designer Tonya Baydina.

    Engineering, design, UX, typography, copywriting, ethics, law… much like in architecture there are few fields that don’t enrich one’s understanding of web design, so don’t be afraid to immerse yourself in the unfamiliar. You just might find the perfect inspiration.

    Iterate, Collaborate

    Even the masters are students, always learning, always iterating, often collaborating. Bass of course had strong ideas about what form his projects ought to take, but it was not his way or the highway. Look no further than Stanley Kubrick’s feedback on potential posters for The Shining. The two went through hundreds of drafts together. In one letter Kubrick wrote, “beautifully done but I don’t think any of them are right.”

    A rejected poster design for the Stanley Kubrick feature film ‘The Shining’

    One can only imagine how many hours Bass slaved over those mockups, but when you look at the rejected designs it’s hard to disagree with Kubrick; beautifully done, but not quite right. I think the final result was worth the work, don’t you?

    Poster for the Stanley Kubrick film ‘The Shining’

    We live and work in a largely corporate world, but like Bass, you don’t have to let that hamstring the things you make. Hold your ground when that’s what the moment calls for, but always be on the lookout for genuine partners. They are out there. The client isn’t always right, but they’re not always wrong either. Collaboration often brings out the best in a project, and even geniuses have to work like hell to get it right.

    There are few things more valuable than feedback from people you trust. It’s hard to beat that cool, communicative flow where egos and insecurities are out of the picture and it’s all about making the thing as good as it can be.


    Beauty For Beauty’s Sake

    No-one dreams of doing corporate art, but Bass is a model example of excellence thriving in that world. Decades on, his work still holds its own and is oftentimes genuinely beautiful. He showed better than most that designing for a living didn’t mean creativity couldn’t thrive. Whether you’re making brand logos or homepages, there’s a lot to be said for creatives fighting their corner. You owe it to the work.

    Bass put it better than I ever could.

    “I want everything we do to be beautiful. I don’t give a damn whether the client understands that that’s worth anything, or that the client thinks it’s worth anything, or whether it is worth anything. It’s worth it to me. It’s the way I want to live my life. I want to make beautiful things, even if nobody cares.”

    Everything else stems from this ethos, from beauty for beauty’s sake. From color to iteration to delight in the little details, Saul Bass showed the way for graphic and web designers alike. Be audacious, be curious, and keep learning. Make beautiful things, even if nobody cares.

    Smashing Editorial
    (vf, yk, il)


    Building A Web App With React, Redux And Sanity.io — Smashing Magazine

    02/11/2021

    About The Author

    Ifeanyi Dike is a full-stack developer in Abuja, Nigeria. He’s the team lead at Sterling Digitals Limited but also open to more opportunities and …

    A headless CMS is a powerful and easy way to manage content and access it through an API. Built on React, Sanity.io is a seamless tool for flexible content management. It can be used to build simple to complex applications from the ground up.

    In this article, we’ll build a simple listing app with Sanity.io and React. Our global states will be managed with Redux and the application will be styled with styled-components.

    The fast evolution of digital platforms has placed serious limitations on traditional CMS like WordPress. These platforms are coupled, inflexible, and focused on the project rather than the product. Thankfully, several headless CMS have been developed to tackle these challenges and many more.

    Unlike a traditional CMS, a headless CMS, which can be described as Software as a Service (SaaS), can be used to develop websites, mobile apps, digital displays, and much more. It can be used on limitless platforms. If you are looking for a CMS that is platform-independent, developer-first, and offers cross-platform support, you need look no further than a headless CMS.

    A headless CMS is simply a CMS without a head. The head here refers to the frontend or the presentation layer, while the body refers to the backend or the content repository. This offers a lot of interesting benefits. For instance, it allows developers to choose any frontend they want, and to design the presentation layer however they want.

    There are lots of headless CMS out there; some of the most popular ones include Strapi, Contentful, Contentstack, Sanity, Butter CMS, Prismic, Storyblok, and Directus. These headless CMS are API-based and have their individual strong points. For instance, CMS like Sanity, Strapi, Contentful, and Storyblok are free for small projects.

    These headless CMS are based on different tech stacks as well. While Sanity.io is based on React.js, Storyblok is based on Vue.js. As a React developer, this is the major reason why I quickly took an interest in Sanity. However, being headless, each of these platforms can be plugged into any frontend, whether Angular, Vue, or React.

    Each of these headless CMS has both free and paid plans, and moving between them can represent a significant price jump. Although the paid plans offer more features, you wouldn’t want to pay all that much for a small to mid-sized project. Sanity tries to solve this problem by introducing pay-as-you-go options. With these options, you pay for what you use and avoid the price jump.

    Another reason why I chose Sanity.io is its GROQ language. For me, Sanity stands out from the crowd by offering this tool. Graph-Relational Object Queries (GROQ) reduces development time, helps you get the content you need in the form you need it, and also helps the developer to create a document with a new content model without code changes.

    Moreover, developers are not constrained to the GROQ language. You can also use GraphQL or even the traditional axios and fetch in your React app to query the backend. Like most other headless CMS, Sanity has comprehensive documentation that contains helpful tips to build on the platform.

    Note: This article requires a basic understanding of React, Redux and CSS.

    Getting Started With Sanity.io

    To use Sanity on your machine, you’ll need to install the Sanity CLI tool. While this can be installed locally in your project, it is preferable to install it globally to make it accessible to any future applications.

    To do this, enter the following commands in your terminal.

    npm install -g @sanity/cli

    The -g flag in the above command enables global installation.

    Next, we need to initialize Sanity in our application. Although this can be set up as a separate project, it is usually preferable to initialize it within your frontend app (in this case React).

    In her blog, Kapehe explained in detail how to integrate Sanity with React. It will be helpful to go through the article before continuing with this tutorial.

    Enter the following commands to initialize Sanity in your React app.

    sanity init

    The sanity command becomes available to us once we install the Sanity CLI tool. You can view a list of the available Sanity commands by typing sanity or sanity help in your terminal.

    When setting up or initializing your project, you’ll need to follow the prompts to customize it. You’ll also be required to create a dataset, and you can even choose one of Sanity’s sample datasets pre-populated with data. For this listing app, we will be using Sanity’s sample sci-fi movies dataset. This will save us from entering the data ourselves.

    To view and edit your dataset, cd to the Sanity subdirectory in your terminal and enter sanity start. This usually runs on http://localhost:3333/. You may be required to log in to access the interface (make sure you log in with the same account you used when initializing the project). A screenshot of the environment is shown below.

    Sanity server overview
    An overview of the Sanity server for the sci-fi movie dataset.

    Sanity-React Two-way Communication

    Sanity and React need to communicate with each other for a fully functional application.

    CORS Origins Setting In Sanity Manager

    We’ll first connect our React app to Sanity. To do this, log in to https://manage.sanity.io/ and locate CORS origins under API Settings in the Settings tab. Here, you’ll need to hook your frontend origin to the Sanity backend. Our React app runs on http://localhost:3000/ by default, so we need to add that to the CORS origins.

    This is shown in the figure below.

    CORS origin settings
    Setting CORS origin in Sanity.io Manager.

    Connecting Sanity To React

    Sanity associates a project ID to every project you create. This ID is needed when connecting it to your frontend application. You can find the project ID in your Sanity Manager.

    The backend communicates with React using a library known as the Sanity client. You need to install this library in your React project by entering the following command.

    npm install @sanity/client

    Create a file sanitySetup.js (the filename does not matter) in your project’s src folder and enter the following code to set up a connection between Sanity and React.

    import sanityClient from "@sanity/client"
    export default sanityClient({
        projectId: PROJECT_ID,   // your project ID, found in the Sanity Manager
        dataset: DATASET_NAME,   // the dataset name you chose during sanity init
        useCdn: true             // serve queries from Sanity's CDN cache
    });

    We passed our projectId, dataset name and a boolean useCdn to the instance of the sanity client imported from @sanity/client. This works the magic and connects our app to the backend.
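    To confirm the connection works, we can run a quick test query from anywhere in the app. This snippet is purely illustrative; the movie type comes from the sci-fi dataset we installed earlier.

    import sanityAPI from "./sanitySetup";

    // Fetch the _id and title of the first two movie documents to
    // verify that the client can reach the dataset
    sanityAPI
      .fetch(`*[_type == "movie"][0..1]{_id, title}`)
      .then((movies) => console.log(movies))
      .catch((error) => console.error(error.message));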

    Now that we’ve completed the two-way connection, let’s jump right in to build our project.

    Setting Up And Connecting Redux To Our App

    We’ll need a few dependencies to work with Redux in our React app. Open up your terminal in your React environment and enter the following bash commands.

    npm install redux react-redux redux-thunk
    

    Redux is a global state management library that can be used with most frontend frameworks and libraries, such as React. However, we need an intermediary tool, react-redux, to enable communication between our Redux store and our React application. Redux thunk will help us to return a function instead of an action object from Redux.
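    To see why that matters, here is a minimal sketch contrasting a plain action creator with a thunk (the names are illustrative):

    // A plain action creator must return an action object synchronously
    const setExample = () => ({ type: "EXAMPLE" });

    // With redux-thunk, an action creator can return a function instead.
    // Redux passes that function `dispatch`, so we can await async work
    // and dispatch several actions along the way.
    const fetchExample = () => async (dispatch) => {
      dispatch({ type: "EXAMPLE_REQUEST" });
      // ...await an API call here, then dispatch a success or fail action
    };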

    While we could write the entire Redux workflow in one file, it is often neater and better to separate our concerns. For this, we will divide our workflow into three files namely, actions, reducers, and then the store. However, we also need a separate file to store the action types, also known as constants.

    Setting Up The Store

    The store is the most important file in Redux. It organizes and packages the states and ships them to our React application.

    Here is the initial setup of our Redux store needed to connect our Redux workflow.

    import { createStore, applyMiddleware } from "redux";
    import thunk from "redux-thunk";
    import reducers from "./reducers/";
    
    export default createStore(
      reducers,
      applyMiddleware(thunk)
    );
    

    The createStore function in this file takes up to three parameters: the reducer (required), an optional initial state, and the enhancer (usually a middleware; in this case, thunk supplied through applyMiddleware). Our reducers will be stored in a reducers folder, and we’ll combine and export them in an index.js file in the reducers folder. This is the file we imported in the code above. We’ll revisit this file later.

    Introduction To Sanity’s GROQ Language

    Sanity takes querying on JSON data a step further by introducing GROQ. GROQ stands for Graph-Relational Object Queries. According to Sanity.io, GROQ is a declarative query language designed to query collections of largely schema-less JSON documents.

    Sanity even provides the GROQ Playground to help developers become familiar with the language. However, to access the playground, you need to install sanity vision.
    Run sanity install @sanity/vision on your terminal to install it.

    GROQ has a similar syntax to GraphQL but it is more condensed and easier to read. Furthermore, unlike GraphQL, GROQ can be used to query JSON data.

    For instance, to retrieve every item in our movie document, we’ll use the following GROQ syntax.

    *[_type == "movie"]

    However, if we wish to retrieve only the _ids and crewMembers in our movie document, we need to specify those fields as follows.

    *[_type == 'movie']{
        _id,
        crewMembers
    }
    

    Here, we used * to select every document, and the filter [_type == 'movie'] to keep only documents whose _type is movie. _type is an attribute present on every document in the collection. We can also return the type like we did the _id and crewMembers as follows:

    *[_type == 'movie']{                                             
        _id,
        _type,
        crewMembers
    }
    

    We’ll work more on GROQ by implementing it in our Redux actions but you can check Sanity.io’s documentation for GROQ to learn more about it. The GROQ query cheat sheet provides a lot of examples to help you master the query language.
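    As a small taste of what the cheat sheet covers, GROQ filters can combine multiple conditions. This illustrative query returns the titles and release dates of movies released after 1979:

    *[_type == 'movie' && releaseDate > '1979-01-01']{
      title,
      releaseDate
    }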

    Setting Up Constants

    We need constants to track the action types at every stage of the Redux workflow. Constants help to determine the type of action dispatched at each point in time. For instance, we can track when the API is loading, fully loaded and when an error occurs.

    We don’t necessarily need to define constants in a separate file but for simplicity and clarity, this is usually the best practice in Redux.

    By convention, constants in JavaScript are defined in uppercase. We’ll follow the best practices here to define our constants. Here is an example of a constant for denoting a movie fetch request.

    export const MOVIE_FETCH_REQUEST = "MOVIE_FETCH_REQUEST";

    Here, we created a constant MOVIE_FETCH_REQUEST that denotes an action type of MOVIE_FETCH_REQUEST. This helps us to reference the action type without retyping raw strings, which avoids typo-induced bugs. We also exported the constant to be available anywhere in our project.

    Similarly, we can create other constants for fetching action types denoting when the request succeeds or fails.
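    A sketch of movieConstants.js, reconstructed from the action types we dispatch throughout this tutorial (the MOST_POPULAR_* names are an assumption), looks like this:

    // Fetching the list of all movies
    export const MOVIES_FETCH_REQUEST = "MOVIES_FETCH_REQUEST";
    export const MOVIES_FETCH_SUCCESS = "MOVIES_FETCH_SUCCESS";
    export const MOVIES_FETCH_FAIL = "MOVIES_FETCH_FAIL";
    export const MOVIES_FETCH_RESET = "MOVIES_FETCH_RESET";

    // Fetching a single movie by its _id
    export const MOVIE_FETCH_REQUEST = "MOVIE_FETCH_REQUEST";
    export const MOVIE_FETCH_SUCCESS = "MOVIE_FETCH_SUCCESS";
    export const MOVIE_FETCH_FAIL = "MOVIE_FETCH_FAIL";

    // Fetching movies that reference a particular person
    export const MOVIES_REF_FETCH_REQUEST = "MOVIES_REF_FETCH_REQUEST";
    export const MOVIES_REF_FETCH_SUCCESS = "MOVIES_REF_FETCH_SUCCESS";
    export const MOVIES_REF_FETCH_FAIL = "MOVIES_REF_FETCH_FAIL";

    // Sorting movies
    export const MOVIES_SORT_REQUEST = "MOVIES_SORT_REQUEST";
    export const MOVIES_SORT_SUCCESS = "MOVIES_SORT_SUCCESS";
    export const MOVIES_SORT_FAIL = "MOVIES_SORT_FAIL";

    // Fetching the most popular movies (names here are assumptions)
    export const MOST_POPULAR_REQUEST = "MOST_POPULAR_REQUEST";
    export const MOST_POPULAR_SUCCESS = "MOST_POPULAR_SUCCESS";
    export const MOST_POPULAR_FAIL = "MOST_POPULAR_FAIL";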

    Here we have defined several constants for fetching a movie or list of movies, sorting and fetching the most popular movies. Notice that we set constants to determine when the request is loading, successful and failed.

    Similarly, our personConstants.js file is given below:

    export const PERSONS_FETCH_REQUEST = "PERSONS_FETCH_REQUEST";
    export const PERSONS_FETCH_SUCCESS = "PERSONS_FETCH_SUCCESS";
    export const PERSONS_FETCH_FAIL = "PERSONS_FETCH_FAIL";
    
    export const PERSON_FETCH_REQUEST = "PERSON_FETCH_REQUEST";
    export const PERSON_FETCH_SUCCESS = "PERSON_FETCH_SUCCESS";
    export const PERSON_FETCH_FAIL = "PERSON_FETCH_FAIL";
    
    export const PERSONS_COUNT = "PERSONS_COUNT";

    Like the movieConstants.js, we set a list of constants for fetching a person or persons. We also set a constant for counting persons. The constants follow the convention described for movieConstants.js and we also exported them to be accessible to other parts of our application.

    Finally, we’ll implement light and dark mode in the app and so we have another constants file globalConstants.js. Let’s take a look at it.

    export const SET_LIGHT_THEME = "SET_LIGHT_THEME";
    export const SET_DARK_THEME = "SET_DARK_THEME";

    Here we set constants to determine when light or dark mode is dispatched. SET_LIGHT_THEME determines when the user switches to the light theme and SET_DARK_THEME determines when the dark theme is selected. We also exported our constants as shown.

    Setting Up The Actions

    By convention, our actions are stored in a separate folder. Actions are grouped according to their types. For instance, our movie actions are stored in movieActions.js while our person actions are stored in personActions.js file.

    We also have globalActions.js to take care of toggling the theme from light to dark mode.

    Let’s fetch all movies in movieActions.js.

    import sanityAPI from "../../sanitySetup";
    import {
      MOVIES_FETCH_FAIL,
      MOVIES_FETCH_REQUEST,
      MOVIES_FETCH_SUCCESS  
    } from "../constants/movieConstants";
    
    const fetchAllMovies = () => async (dispatch) => {
      try {
        dispatch({
          type: MOVIES_FETCH_REQUEST
        });
        const data = await sanityAPI.fetch(
          `*[_type == 'movie']{                                            
              _id,
              "poster": poster.asset->url,
          } `
        );
        dispatch({
          type: MOVIES_FETCH_SUCCESS,
          payload: data
        });
      } catch (error) {
        dispatch({
          type: MOVIES_FETCH_FAIL,
          payload: error.message
        });
      }
    };

    Remember when we created the sanitySetup.js file to connect React to our Sanity backend? Here, we imported the setup to enable us to query our sanity backend using GROQ. We also imported a few constants exported from the movieConstants.js file in the constants folder.

    Next, we created the fetchAllMovies action function for fetching every movie in our collection. Most traditional React applications use axios or fetch to get data from the backend. While we could use either of those here, we’re using Sanity’s GROQ. To enter the GROQ mode, we call the sanityAPI.fetch() function as shown in the code above. Here, sanityAPI is the React-Sanity connection we set up earlier. This returns a Promise, so it has to be handled asynchronously. We’ve used the async-await syntax here, but we could also use the .then syntax.

    Since we are using thunk in our application, we can return a function instead of an action object. Here, we chose the concise form that returns the function in one line.

    const fetchAllMovies = () => async (dispatch) => {
      ...
    }

    Note that we can also write the function this way:

    const fetchAllMovies = () => {
      return async (dispatch)=>{
        ...
      }
    }

    In general, to fetch all movies, we first dispatched an action type that tracks when the request is still loading. We then used Sanity’s GROQ syntax to asynchronously query the movie document. We retrieved the _id and the poster url of the movie data. We then returned a payload containing the data returned from the API.

    Similarly, we can retrieve movies by their _id, sort movies, and get the most popular movies.

    We can also fetch movies that match a particular person’s reference. We did this in the fetchMoviesByRef function.

    const fetchMoviesByRef = (ref) => async (dispatch) => {
      try {
        dispatch({
          type: MOVIES_REF_FETCH_REQUEST
        });
        const data = await sanityAPI.fetch(
          `*[_type == 'movie' 
                && (castMembers[person._ref match '${ref}'] || 
                    crewMembers[person._ref match '${ref}'])            
                ]{                                             
                    _id,                              
                    "poster" : poster.asset->url,
                    title
                } `
        );
        dispatch({
          type: MOVIES_REF_FETCH_SUCCESS,
          payload: data
        });
      } catch (error) {
        dispatch({
          type: MOVIES_REF_FETCH_FAIL,
          payload: error.message
        });
      }
    };

    This function takes an argument and checks if person._ref in either the castMembers or crewMembers matches the passed argument. We return the movie’s _id, poster url, and title alongside. We also dispatch an action of type MOVIES_REF_FETCH_SUCCESS, attaching a payload of the returned data, and if an error occurs, we dispatch an action of type MOVIES_REF_FETCH_FAIL, attaching a payload of the error message, thanks to the try-catch wrapper.

    In the fetchMovieById function, we used GROQ to retrieve a movie that matches a particular id passed to the function.

    The GROQ syntax for the function is shown below.

    const data = await sanityAPI.fetch(
          `*[_type == 'movie' && _id == '${id}']{                                               
                    _id,
                    "cast" :
                        castMembers[]{
                            "ref": person._ref,
                            characterName, 
                            "name": person->name,
                            "image": person->image.asset->url
                        }
                    ,
                    "crew" :
                        crewMembers[]{
                            "ref": person._ref,
                            department, 
                            job,
                            "name": person->name,
                            "image": person->image.asset->url
                        }
                    ,                
                    "overview":   {                    
                        "text": overview[0].children[0].text
                      },
                    popularity,
                    "poster" : poster.asset->url,
                    releaseDate,                                
                    title
                }[0]`
        );

    Like the fetchAllMovies action, we started by selecting all documents of type movie, but we went further to select only the one whose _id matches the id supplied to the function. The [0] at the end of the query returns the first (and only) match as a single object rather than an array. Since we intend to display a lot of details for the movie, we specified a bunch of attributes to retrieve.

    We retrieved the movie id and also a few attributes in the castMembers array namely ref, characterName, the person’s name, and the person’s image. We also changed the alias from castMembers to cast.

    Like the castMembers, we selected a few attributes from the crewMembers array, namely ref, department, job, the person’s name and the person’s image. We also changed the alias from crewMembers to crew.

    In the same way, we selected the overview text, popularity, movie’s poster url, movie’s release date and title.
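    Wrapped in the same request/success/fail dispatch pattern as fetchAllMovies, a sketch of the full fetchMovieById function looks like this (the projection is shortened to its last few fields; the full version is shown above):

    const fetchMovieById = (id) => async (dispatch) => {
      try {
        dispatch({ type: MOVIE_FETCH_REQUEST });
        const data = await sanityAPI.fetch(
          `*[_type == 'movie' && _id == '${id}']{
              _id,
              popularity,
              "poster": poster.asset->url,
              releaseDate,
              title
          }[0]`
        );
        dispatch({ type: MOVIE_FETCH_SUCCESS, payload: data });
      } catch (error) {
        dispatch({ type: MOVIE_FETCH_FAIL, payload: error.message });
      }
    };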

    Sanity’s GROQ language also allows us to sort documents. To sort items, we pass the order function after a pipe operator.

    For instance, if we wish to sort movies by their releaseDate in ascending order, we could do the following.

    const data = await sanityAPI.fetch(
          `*[_type == 'movie']{
              ...
          } | order(releaseDate asc)`
        );
    

    We used this notation in the sortMoviesBy function to sort either in ascending or descending order.

    Let’s take a look at this function below.

    const sortMoviesBy = (item, type) => async (dispatch) => {
      try {
        dispatch({
          type: MOVIES_SORT_REQUEST
        });
        const data = await sanityAPI.fetch(
          `*[_type == 'movie']{                                
                    _id,                                               
                    "poster" : poster.asset->url,    
                    title
                    } | order( ${item} ${type})`
        );
        dispatch({
          type: MOVIES_SORT_SUCCESS,
          payload: data
        });
      } catch (error) {
        dispatch({
          type: MOVIES_SORT_FAIL,
          payload: error.message
        });
      }
    };

    We began by dispatching an action of type MOVIES_SORT_REQUEST to determine when the request is loading. We then used the GROQ syntax to sort and fetch data from the movie collection. The item to sort by is supplied in the variable item, and the mode of sorting (ascending or descending) is supplied in the variable type. Consequently, we returned the id, poster url, and title. Once the data is returned, we dispatch an action of type MOVIES_SORT_SUCCESS, and if it fails, we dispatch an action of type MOVIES_SORT_FAIL.

    A similar GROQ concept applies to the getMostPopular function. The GROQ syntax is shown below.

    const data = await sanityAPI.fetch(
          `
                *[_type == 'movie']{ 
                    _id,                              
                    "overview":   {                    
                        "text": overview[0].children[0].text
                    },                
                    "poster" : poster.asset->url,    
                    title 
                }| order(popularity desc) [0..2]`
        );

    The only difference here is that we sorted the movies by popularity in descending order and then selected only the first three. The items are returned in a zero-based index and so the first three items are items 0, 1 and 2. If we wish to retrieve the first ten items, we could pass [0..9] to the function.

    The remaining movie actions in the movieActions.js file follow the same dispatch pattern.

    Setting Up The Reducers

    Reducers are one of the most important concepts in Redux. They take the previous state and an action, and determine the next state.

    Typically, we’ll be using the switch statement to execute a condition for each action type. For instance, we can return loading when the action type denotes loading, and then the payload when it denotes success or error. A reducer is expected to take the initial state and the action as arguments.

    Our movieReducers.js file contains various reducers to match the actions defined in the movieActions.js file. However, each of the reducers has a similar syntax and structure. The only differences are the constants they call and the values they return.

    Let’s start by taking a look at the fetchAllMoviesReducer in the movieReducers.js file.

    import {
      MOVIES_FETCH_FAIL,
      MOVIES_FETCH_REQUEST,
      MOVIES_FETCH_SUCCESS,
      MOVIES_FETCH_RESET
    } from "../constants/movieConstants";
    
    const fetchAllMoviesReducer = (state = {}, action) => {
      switch (action.type) {
        case MOVIES_FETCH_REQUEST:
          return {
            loading: true
          };
        case MOVIES_FETCH_SUCCESS:
          return {
            loading: false,
            movies: action.payload
          };
        case MOVIES_FETCH_FAIL:
          return {
            loading: false,
            error: action.payload
          };
        case MOVIES_FETCH_RESET:
          return {};
        default:
          return state;
      }
    };

    Like all reducers, the fetchAllMoviesReducer takes the initial state object (state) and the action object as arguments. We used the switch statement to check the action types at each point in time. If it corresponds to MOVIES_FETCH_REQUEST, we return loading as true to enable us to show a loading indicator to the user.

    If it corresponds to MOVIES_FETCH_SUCCESS, we turn off the loading indicator and then return the action payload in a variable movies. But if it is MOVIES_FETCH_FAIL, we also turn off the loading and then return the error. We also want the option to reset our movies. This will enable us to clear the states when we need to do so.

    We have the same structure for the other reducers in movieReducers.js; they differ only in the constants they match and the values they return.

    We also followed the exact same structure for personReducers.js. For instance, the fetchAllPersonsReducer function defines the states for fetching all persons in the database.

    This is given in the code below.

    import {
      PERSONS_FETCH_FAIL,
      PERSONS_FETCH_REQUEST,
      PERSONS_FETCH_SUCCESS,
    } from "../constants/personConstants";
    
    const fetchAllPersonsReducer = (state = {}, action) => {
      switch (action.type) {
        case PERSONS_FETCH_REQUEST:
          return {
            loading: true
          };
        case PERSONS_FETCH_SUCCESS:
          return {
            loading: false,
            persons: action.payload
          };
        case PERSONS_FETCH_FAIL:
          return {
            loading: false,
            error: action.payload
          };
        default:
          return state;
      }
    };
    

    Just like the fetchAllMoviesReducer, we defined fetchAllPersonsReducer with state and action as arguments. This is the standard setup for Redux reducers. We then used the switch statement to check the action types: if it’s of type PERSONS_FETCH_REQUEST, we return loading as true; if it’s PERSONS_FETCH_SUCCESS, we switch off loading and return the payload; and if it’s PERSONS_FETCH_FAIL, we return the error.

    Combining Reducers

    Redux’s combineReducers function allows us to combine more than one reducer and pass it to the store. We’ll combine our movies and persons reducers in an index.js file within the reducers folder.

    Let’s take a look at it.

    import { combineReducers } from "redux";
    import {
      fetchAllMoviesReducer,
      fetchMovieByIdReducer,
      sortMoviesByReducer,
      getMostPopularReducer,
      fetchMoviesByRefReducer
    } from "./movieReducers";
    
    import {
      fetchAllPersonsReducer,
      fetchPersonByIdReducer,
      countPersonsReducer
    } from "./personReducers";
    
    import { toggleTheme } from "./globalReducers";
    
    export default combineReducers({
      fetchAllMoviesReducer,
      fetchMovieByIdReducer,
      fetchAllPersonsReducer,
      fetchPersonByIdReducer,
      sortMoviesByReducer,
      getMostPopularReducer,
      countPersonsReducer,
      fetchMoviesByRefReducer,
      toggleTheme
    });

    Here we imported all the reducers from the movies, persons, and global reducers files and passed them to the combineReducers function. The combineReducers function takes an object, which allows us to pass all our reducers. We can even add an alias to each reducer in the process, as the sketch below shows.
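    For instance, with hypothetical aliases, the call could look like this. Note that the alias becomes the key we would later read from state:

    export default combineReducers({
      allMovies: fetchAllMoviesReducer, // read via state.allMovies
      movie: fetchMovieByIdReducer      // read via state.movie
    });

    The rest of this tutorial keeps the unaliased reducer names, which is why the useSelector calls below read keys like state.fetchAllMoviesReducer.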

    We’ll work on the globalReducers later.

    We can now pass the reducers in the Redux store.js file. This is shown below.

    import { createStore, applyMiddleware } from "redux";
    import thunk from "redux-thunk";
    import reducers from "./reducers/index";
    
    // An empty object is enough here; each reducer supplies its own default state
    const initialState = {};
    
    export default createStore(reducers, initialState, applyMiddleware(thunk));
    

    Having set up our Redux workflow, let’s set up our React application.

    Setting Up Our React Application

    Our React application will list movies and their corresponding cast and crew members. We will be using react-router-dom for routing and styled-components for styling the app. We’ll also use Material UI for icons and some UI components.

    Enter the following bash command to install the dependencies.

    npm install react-router-dom @material-ui/core @material-ui/icons query-string


    Connecting Redux To Our React App

    React-redux ships with a Provider component that allows us to connect our application to the Redux store. To do this, we have to pass an instance of the store to the Provider. We can do this either in our index.js or App.js file.

    Here’s our index.js file.

    import React from "react";
    import ReactDOM from "react-dom";
    import "./index.css";
    import App from "./App";
    import { Provider } from "react-redux";
    import store from "./redux/store";
    ReactDOM.render(
      <Provider store={store}>
        <App />
      </Provider>,
      document.getElementById("root")
    );

    Here, we imported Provider from react-redux and store from our Redux store. Then we wrapped our entire component tree with the Provider, passing the store to it.

    Next, we need react-router-dom for routing in our React application. react-router-dom comes with BrowserRouter, Switch and Route that can be used to define our path and routes.

    We do this in our App.js file. This is shown below.

    import React from "react";
    import Header from "./components/Header";
    import Footer from "./components/Footer";
    import { BrowserRouter as Router, Switch, Route } from "react-router-dom";
    import MoviesList from "./pages/MoviesListPage";
    import PersonsList from "./pages/PersonsListPage";
    
    function App() {
    
      return (
          <Router>
            <main className="contentwrap">
              <Header />
              <Switch>
                <Route path="/persons/">
                  <PersonsList />
                </Route>
                <Route path="/" exact>
                  <MoviesList />
                </Route>
              </Switch>
            </main>
            <Footer />
          </Router>
      );
    }
    export default App;

    This is a standard setup for routing with react-router-dom. You can check it out in their documentation. We imported our components Header, Footer, PersonsList and MovieList. We then set up the react-router-dom by wrapping everything in Router and Switch.

    Since we want our pages to share the same header and footer, we rendered the <Header /> and <Footer /> components outside the Switch. We did a similar thing with the main element, since we want it to wrap the entire application.

    We passed each component to the route using Route from react-router-dom.

    Defining Our Pages And Components

    Our application is organized in a structured way. Reusable components are stored in the components folder, while pages are stored in the pages folder.

    Our pages comprise MoviesListPage.js, MoviePage.js, PersonsListPage.js and PersonPage.js. MoviesListPage.js lists all the movies in our Sanity.io backend as well as the most popular movies.

    To list all the movies, we simply dispatch the fetchAllMovies action defined in our movieActions.js file. Since we need to fetch the list as soon as the page loads, we dispatch it inside a useEffect Hook. This is shown below.

    import React, { useEffect } from "react";
    import { fetchAllMovies } from "../redux/actions/movieActions";
    import { useDispatch, useSelector } from "react-redux";
    
    const MoviesListPage = () => {
      const dispatch = useDispatch();
      useEffect(() => {    
          dispatch(fetchAllMovies());
      }, [dispatch]);
    
      const { loading, error, movies } = useSelector(
        (state) => state.fetchAllMoviesReducer
      );
      
      return (
        ...
      )
    };
    export default MoviesListPage;
    

    Thanks to the useDispatch and useSelector Hooks, we can dispatch Redux actions and select the appropriate states from the Redux store. Notice that the states loading, error and movies were defined in our reducer functions, and here we selected them using the useSelector Hook from React Redux. These states become available as soon as we dispatch the fetchAllMovies() action.

    Once we get the list of movies, we can display it in our application using the map function or however we wish, as sketched below.
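    A minimal render sketch might look like this (the markup is illustrative; fetchAllMovies only returns _id and poster, so that is all we use here):

    return (
      <MovieListContainer>
        {loading ? (
          <p>Loading movies...</p>
        ) : error ? (
          <p>{error}</p>
        ) : (
          movies &&
          movies.map((movie) => (
            <img key={movie._id} src={movie.poster} alt="movie poster" />
          ))
        )}
      </MovieListContainer>
    );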

    The complete MoviesListPage.js file builds on this pattern in a few ways.

    We started by dispatching the getMostPopular movies action (this action selects the movies with the highest popularity) in the useEffect Hook. This allows us to retrieve the most popular movies as soon as the page loads. Additionally, we allowed users to sort movies by their releaseDate and popularity, handled by dispatching the sortMoviesBy action. Furthermore, we dispatched the fetchAllMovies action depending on the query parameters.

    Also, we used the useSelector Hook to select the corresponding reducers for each of these actions. We selected the states for loading, error and movies for each of the reducers.

    After getting the movies from the reducers, we can now display them to the user. Here, we used JavaScript’s map function to do this. We first display a loader whenever one of the movie states is loading, and if there’s an error, we display the error message. Finally, once we get the movies, we display each movie’s image to the user using the map function. We wrapped the entire component in a MovieListContainer component.

    The <MovieListContainer> … </MovieListContainer> tag is a div defined using styled components. We’ll take a brief look at that soon.

    Styling Our App With Styled Components

    Styled components allow us to style our pages and components on an individual basis. They also offer some interesting features such as inheritance, theming, the passing of props, and so on.

    Although we always want to style our pages on an individual basis, sometimes global styling may be desirable. Interestingly, styled-components provide a way to do that, thanks to the createGlobalStyle function.

    To use styled-components in our application, we need to install it. Open your terminal in your React project and enter the following bash command.

    npm install styled-components

    Having installed styled-components, let’s get started with our global styles.

    Let’s create a separate folder in our src directory named styles. This will store all our styles. Let’s also create a globalStyles.js file within the styles folder. To create global style in styled-components, we need to import createGlobalStyle.

    import { createGlobalStyle } from "styled-components";

    We can then define our styles as follows:

    export const GlobalStyle = createGlobalStyle`
      ...
    `

    Styled-components makes use of tagged template literals to define styles. Within these literals, we can write traditional CSS and interpolate props.

    We also imported deviceWidth, defined in a file named definition.js. The deviceWidth object holds the breakpoint definitions for our media queries.

    import { deviceWidth } from "./definition";
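    The definition.js file itself isn’t reproduced in this tutorial, but based on how deviceWidth and colors are used in the styles, a sketch might look like this (the exact pixel values and the transparentWhite color are assumptions; the other colors are taken from the theme object defined later):

    // styles/definition.js (a sketch; breakpoint values are illustrative)
    const size = {
      mobile_lg: "425px",
      tablet_md: "650px",
      tablet: "768px",
      laptop: "1024px",
      laptop_lg: "1440px"
    };

    export const deviceWidth = {
      mobile_lg: `(max-width: ${size.mobile_lg})`,
      tablet_md: `(max-width: ${size.tablet_md})`,
      tablet: `(max-width: ${size.tablet})`,
      laptop: `(max-width: ${size.laptop})`,
      laptop_lg: `(max-width: ${size.laptop_lg})`,
      // the hamburger menu is hidden on desktop, so desktop is a min-width query
      desktop: `(min-width: ${size.tablet})`
    };

    // a few of the colors referenced by the page styles
    export const colors = {
      darkBlue: "#253858",
      goldish: "#FFC400",
      darkred: "#E85A4F",
      transparentWhite: "rgba(255, 255, 255, 0.8)" // assumption
    };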

    We set overflow-x to hidden to prevent horizontal scrolling.

    html, body{
            overflow-x: hidden;
    }

    We also defined the header style using the .header style selector.

    .header{
      z-index: 5;
      background-color: ${(props)=>props.theme.midDarkBlue}; 
      display:flex;
      align-items:center;
      padding: 0 20px;
      height:50px;
      justify-content:space-between;
      position:fixed;
      top:0;
      width:100%;
      @media ${deviceWidth.laptop_lg}
      {
        width:97%;
      }
      ...
    }

    Here, various styles such as the background color, z-index, padding, and lots of other traditional CSS properties are defined.

    We’ve used styled-components props to set the background color. This allows us to set dynamic values passed down from our components. Moreover, we referenced the theme variable to make the most of our theme toggling.

    Theming is possible here because we have wrapped our entire application with the ThemeProvider from styled-components. We’ll talk about this in a moment, but a quick preview follows below. Furthermore, we used CSS flexbox to properly style the header and set its position to fixed to make sure it remains fixed with respect to the viewport. We also defined breakpoints to make the header mobile-friendly.
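    A sketch of that wiring might look like this (AppWrapper is a hypothetical name; it reads the current theme from the toggleTheme reducer we combined earlier):

    import React from "react";
    import { useSelector } from "react-redux";
    import { ThemeProvider } from "styled-components";
    import { GlobalStyle } from "./styles/globalStyles";
    import { theme } from "./styles/definition";

    const AppWrapper = ({ children }) => {
      // Fall back to the light theme until a toggle action has been dispatched
      const currentTheme = useSelector((state) => state.toggleTheme.theme);
      return (
        <ThemeProvider theme={currentTheme || theme.light}>
          <GlobalStyle />
          {children}
        </ThemeProvider>
      );
    };

    export default AppWrapper;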

    Here is the complete code for our globalStyles.js file.

    import { createGlobalStyle } from "styled-components";
    import { deviceWidth } from "./definition";
    
    export const GlobalStyle = createGlobalStyle`
        html{
            overflow-x: hidden;
        }
        body{
            background-color: ${(props) => props.theme.lighter};        
            overflow-x: hidden;   
            min-height: 100vh;     
            display: grid;
            grid-template-rows: auto 1fr auto;
        }
        #root{        
            display: grid;
            flex-direction: column;   
        }    
        h1,h2,h3, label{
            font-family: 'Aclonica', sans-serif;        
        }
        h1, h2, h3, p, span:not(.MuiIconButton-label), 
        div:not(.PrivateRadioButtonIcon-root-8), div:not(.tryingthis){
            color: ${(props) => props.theme.bodyText}
        }
        
        p, span, div, input{
            font-family: 'Jost', sans-serif;       
        }
        
        .paginate button{
            color: ${(props) => props.theme.bodyText}
        }
        
        .header{
            z-index: 5;    
            background-color: ${(props) => props.theme.midDarkBlue};                
            display: flex;
            align-items: center;   
            padding: 0 20px;        
            height: 50px;
            justify-content: space-between;
            position: fixed;
            top: 0;
            width: 100%;
            @media ${deviceWidth.laptop_lg}{
                width: 97%;            
            }               
            
            @media ${deviceWidth.tablet}{
                width: 100%;
                justify-content: space-around;
            }
            a{
                text-decoration: none;
            }
            label{
                cursor: pointer;
                color: ${(props) => props.theme.goldish};
                font-size: 1.5rem;
            }        
            .hamburger{
                cursor: pointer;   
                color: ${(props) => props.theme.white};
                @media ${deviceWidth.desktop}{
                    display: none;
                }
                @media ${deviceWidth.tablet}{
                    display: block;                
                }
            }  
                     
        }    
        .mobileHeader{
            z-index: 5;        
            background-color: ${(props) =>
              props.theme.darkBlue};                    
            color: ${(props) => props.theme.white};
            display: grid;
            place-items: center;        
            
            width: 100%;      
            @media ${deviceWidth.tablet}{
                width: 100%;                   
            }                         
            
            height: calc(100% - 50px);                
            transition: all 0.5s ease-in-out; 
            position: fixed;        
            right: 0;
            top: 50px;
            .menuitems{
                display: flex;
                box-shadow: 0 0 5px ${(props) => props.theme.lightshadowtheme};           
                flex-direction: column;
                align-items: center;
                justify-content: space-around;                        
                height: 60%;            
                width: 40%;
                a{
                    display: flex;
                    flex-direction: column;
                    align-items:center;
                    cursor: pointer;
                    color: ${(props) => props.theme.white};
                    text-decoration: none;                
                    &:hover{
                        border-bottom: 2px solid ${(props) => props.theme.goldish};
                        .MuiSvgIcon-root{
                            color: ${(props) => props.theme.lightred}
                        }
                    }
                }
            }
        }
        
        footer{                
            min-height: 30px;        
            margin-top: auto;
            display: flex;
            flex-direction: column;
            align-items: center;
            justify-content: center;        
            font-size: 0.875rem;        
            background-color: ${(props) => props.theme.midDarkBlue};      
            color: ${(props) => props.theme.white};        
        }    
    `;
    

    Notice that we wrote almost pure CSS code within the literal, with a few exceptions: styled-components allows us to interpolate props. You can learn more about this in the documentation.

    Apart from defining global styles, we can define styles for individual pages.

    For instance, here is the style for the PersonListPage.js defined in PersonStyle.js in the styles folder.

    import styled from "styled-components";
    import { deviceWidth, colors } from "./definition";
    
    export const PersonsListContainer = styled.div`
      margin: 50px 80px;
      @media ${deviceWidth.tablet} {
        margin: 50px 10px;
      }
      a {
        text-decoration: none;
      }
      .top {
        display: flex;
        justify-content: flex-end;
        padding: 5px;
        .MuiSvgIcon-root {
          cursor: pointer;
          &:hover {
            color: ${colors.darkred};
          }
        }
      }
      .personslist {
        margin-top: 20px;
        display: grid;
        place-items: center;
        grid-template-columns: repeat(5, 1fr);
        @media ${deviceWidth.laptop} {
          grid-template-columns: repeat(4, 1fr);
        }
        @media ${deviceWidth.tablet} {
          grid-template-columns: repeat(3, 1fr);
        }
        @media ${deviceWidth.tablet_md} {
          grid-template-columns: repeat(2, 1fr);
        }
        @media ${deviceWidth.mobile_lg} {
          grid-template-columns: repeat(1, 1fr);
        }
        grid-gap: 30px;
        .person {
          width: 200px;
          position: relative;
          img {
            width: 100%;
          }
          .content {
            position: absolute;
            bottom: 0;
            left: 8px;
            border-right: 2px solid ${colors.goldish};
            border-left: 2px solid ${colors.goldish};
            border-radius: 10px;
            width: 80%;
            margin: 20px auto;
            padding: 8px 10px;
            background-color: ${colors.transparentWhite};
            color: ${colors.darkBlue};
            h2 {
              font-size: 1.2rem;
            }
          }
        }
      }
    `;
    

    We first imported styled from styled-components, and deviceWidth and colors from the definition file. We then defined PersonsListContainer as a div to hold our styles. Using media queries and the established breakpoints, we made the page mobile-friendly.

Here, we have used only the standard browser breakpoints for small, large, and very large screens. We also made the most of CSS flexbox and grid to style and display our content properly on the page.
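The definition.js file itself isn’t shown in this excerpt, so here is a minimal sketch of how deviceWidth and colors could be defined — the breakpoint values and the transparentWhite color are assumptions; the other colors mirror the theme below:

// styles/definition.js — a minimal sketch; breakpoint values are illustrative.
export const deviceWidth = {
  mobile_lg: "(max-width: 425px)",
  tablet_md: "(max-width: 600px)",
  tablet: "(max-width: 768px)",
  laptop: "(max-width: 1024px)"
};

export const colors = {
  darkBlue: "#253858",
  darkred: "#E85A4F",
  goldish: "#FFC400",
  transparentWhite: "rgba(255, 255, 255, 0.8)" // assumed value
};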

To use this style in our PersonsListPage.js file, we simply imported it and added it to our page as follows.

import React from "react";
// The import path assumes PersonStyle.js lives in the styles folder, as described above.
import { PersonsListContainer } from "../styles/PersonStyle";

const PersonsListPage = () => {
  return (
    <PersonsListContainer>
      ...
    </PersonsListContainer>
  );
};
export default PersonsListPage;
    

    The wrapper will output a div because we defined it as a div in our styles.

    Adding Themes And Wrapping It Up

    It’s always a cool feature to add themes to our application. For this, we need the following:

    • Our custom themes defined in a separate file (in our case definition.js file).
    • The logic defined in our Redux actions and reducers.
    • Calling our theme in our application and passing it through the component tree.

    Let’s check this out.

    Here is our theme object in the definition.js file.

    export const theme = {
      light: {
        dark: "#0B0C10",
        darkBlue: "#253858",
        midDarkBlue: "#42526e",
        lightBlue: "#0065ff",
        normal: "#dcdcdd",
        lighter: "#F4F5F7",
        white: "#FFFFFF",
        darkred: "#E85A4F",
        lightred: "#E98074",
        goldish: "#FFC400",
        bodyText: "#0B0C10",
        lightshadowtheme: "rgba(0, 0, 0, 0.1)"
      },
      dark: {
        dark: "white",
        darkBlue: "#06090F",
        midDarkBlue: "#161B22",
        normal: "#dcdcdd",
        lighter: "#06090F",
        white: "white",
        darkred: "#E85A4F",
        lightred: "#E98074",
        goldish: "#FFC400",
        bodyText: "white",
        lightshadowtheme: "rgba(255, 255, 255, 0.9)"
      }
    };
    

    We have added various color properties for the light and dark themes. The colors are carefully chosen to enable visibility both in light and dark mode. You can define your themes as you want. This is not a hard and fast rule.

    Next, let’s add the functionality to Redux.

We have created globalActions.js in our Redux actions folder and added the following code.

    import { SET_DARK_THEME, SET_LIGHT_THEME } from "../constants/globalConstants";
    import { theme } from "../../styles/definition";
    
    export const switchToLightTheme = () => (dispatch) => {
      dispatch({
        type: SET_LIGHT_THEME,
        payload: theme.light
      });
      localStorage.setItem("theme", JSON.stringify(theme.light));
      localStorage.setItem("light", JSON.stringify(true));
    };
    
    export const switchToDarkTheme = () => (dispatch) => {
      dispatch({
        type: SET_DARK_THEME,
        payload: theme.dark
      });
      localStorage.setItem("theme", JSON.stringify(theme.dark));
      localStorage.setItem("light", JSON.stringify(false));
    };

Here, we simply imported our defined themes and dispatched the corresponding actions, passing the theme we needed as the payload. The payload is also stored in local storage under the same keys for both light and dark themes, which enables us to persist the theme state in the browser.
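The globalConstants.js file imported above isn’t shown in the article; a minimal sketch of what it would contain — plain action-type strings:

// redux/constants/globalConstants.js — a minimal sketch.
export const SET_LIGHT_THEME = "SET_LIGHT_THEME";
export const SET_DARK_THEME = "SET_DARK_THEME";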

    We also need to define our reducer for the themes.

    import { SET_DARK_THEME, SET_LIGHT_THEME } from "../constants/globalConstants";
    
    export const toggleTheme = (state = {}, action) => {
      switch (action.type) {
        case SET_LIGHT_THEME:
          return {
            theme: action.payload,
            light: true
          };
        case SET_DARK_THEME:
          return {
            theme: action.payload,
            light: false
          };
        default:
          return state;
      }
    };

This is very similar to what we’ve been doing. We used a switch statement to check the type of the action and then returned the appropriate payload. We also returned a light flag that indicates whether the user has selected the light or dark theme. We’ll use this in our components.

We also need to add it to our root reducer and store.
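The root reducer isn’t shown in this excerpt; a minimal sketch of reducers/index.js, assuming the theme reducer lives in a globalReducers.js file (the file name is an assumption) and is combined under the toggleTheme key that the store below expects:

// redux/reducers/index.js — a minimal sketch.
import { combineReducers } from "redux";
import { toggleTheme } from "./globalReducers"; // file name is an assumption

export default combineReducers({
  toggleTheme
  // ...any other reducers (persons, etc.) would be combined here
});

With that in place, here is the complete code for our store.js.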

    import { createStore, applyMiddleware } from "redux";
    import thunk from "redux-thunk";
    import { theme as initialTheme } from "../styles/definition";
    import reducers from "./reducers/index";
    
    const theme = localStorage.getItem("theme")
      ? JSON.parse(localStorage.getItem("theme"))
      : initialTheme.light;
    
    const light = localStorage.getItem("light")
      ? JSON.parse(localStorage.getItem("light"))
      : true;
    
    const initialState = {
      toggleTheme: { light, theme }
    };
    export default createStore(reducers, initialState, applyMiddleware(thunk));

    Since we needed to persist the theme when the user refreshes, we had to get it from the local storage using localStorage.getItem() and pass it to our initial state.

    Adding The Functionality To Our React Application

Styled-components provides us with a ThemeProvider that allows us to pass themes through our application. We can modify our App.js file to add this functionality.

    Let’s take a look at it.

    import React from "react";
    import { BrowserRouter as Router, Switch, Route } from "react-router-dom";
    import { useSelector } from "react-redux";
    import { ThemeProvider } from "styled-components";
    
    function App() {
      const { theme } = useSelector((state) => state.toggleTheme);
      let Theme = theme ? theme : {};
      return (
        <ThemeProvider theme={Theme}>
          <Router>
            ...
          </Router>
        </ThemeProvider>
      );
    }
    export default App;

    By passing themes through the ThemeProvider, we can easily use the theme props in our styles.

    For instance, we can set the color to our bodyText custom color as follows.

    color: ${(props) => props.theme.bodyText};

    We can use the custom themes anywhere we need color in our application.

    For example, to define border-bottom, we do the following.

    border-bottom: 2px solid ${(props) => props.theme.goldish};
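To tie the theming together, here is a hypothetical toggle component (not from the original project; the import path is an assumption) that dispatches the two actions we defined earlier and reads the light flag from the store:

import React from "react";
import { useDispatch, useSelector } from "react-redux";
// Hypothetical import path.
import { switchToLightTheme, switchToDarkTheme } from "../redux/actions/globalActions";

const ThemeToggle = () => {
  const dispatch = useDispatch();
  // `light` is the flag our reducer returns alongside the theme.
  const { light } = useSelector((state) => state.toggleTheme);

  return (
    <button onClick={() => dispatch(light ? switchToDarkTheme() : switchToLightTheme())}>
      Switch to {light ? "dark" : "light"} mode
    </button>
  );
};

export default ThemeToggle;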

    Conclusion

We began by delving into Sanity.io, setting it up and connecting it to our React application. Then we set up Redux and used the GROQ language to query our API. We saw how to connect Redux to our React app using react-redux, and how to use styled-components and theming.

However, we’ve only scratched the surface of what is possible with these technologies. I encourage you to go through the code samples in my GitHub repo and try your hand at a completely different project using these technologies, to learn and master them.


    web design

    How To Port Your Web App To Microsoft Teams — Smashing Magazine

    02/02/2021

    About The Authors

    Tomomi Imura (@girlie_mac) is an avid open web technology advocate and a full-stack engineer, who is currently working as a Cloud Advocate at Microsoft in San …
    More about
    Tomomi & Daisy

    On your list of places where people might access your web app, “Microsoft Teams” is probably number “not-on-the-list”. But it turns out that making your application accessible where your users are already working has some profound benefits. In this article, we’ll look at how Microsoft Teams makes web apps a first-class citizen, and how it enables you to interact with those apps in completely new ways. 

    Perhaps you are using Microsoft Teams at work and want to build an app that runs inside Teams. Or maybe you’ve already published an app on another platform and want to gain more users on Teams. In this article, we’ll see how to build a new web application in Teams, and how to integrate an existing one — with just a few lines of code.

    You don’t need any prior experience to get started. We’ll use bare-minimum HTML code and toolsets to build a Teams tab (the simplest version of an app in Teams). While you’re walking through this tutorial, if you want to dive deeper, check out the on-demand videos from Learn Together: Developing Apps for Teams. It turns out that making your web application accessible where your users are already working has some benefits, including a reach of over 115 million daily active users. Let’s dive in!

    Microsoft Teams as a platform

You may be familiar with Teams as a collaborative communication tool, but as a developer, you could also view it as a development platform. In fact, Teams provides an alternative way to interact with and distribute your existing web applications. This is primarily because the tool has always been designed with the web in mind. One of the key benefits of integrating web apps into Teams is providing a more productive way for users — your colleagues and Teams users around the world — to get their work done.

    Integration through tabs, embedded web apps

    While there are many different paths to building and deploying Teams apps, one of the easiest is to integrate your existing web apps with Teams through what is called “tabs.” Tabs are basically embedded web apps created using HTML, TypeScript (or JavaScript), client-side frameworks such as React, or any server-side framework such as .NET.

Tabs allow you to surface content in your app by essentially embedding a web page in Teams using an <iframe>. The application was specifically designed with this capability in mind, so you can integrate existing web apps to create custom experiences for yourself, your team, and your app users.

One useful thing about integrating your web apps with Teams is that you can pretty much keep using the developer tools you’re likely already familiar with: Git, Node.js, npm, and Visual Studio Code. To expand your apps with additional capabilities, you can use specialized tools such as the Teams Yeoman generator command-line tool, the Teams Toolkit Visual Studio Code extension, and the Microsoft Teams JavaScript client SDK. They allow you to retrieve additional information and enhance the content you display in your Teams tab.

    Build a tab with an existing code sample

Let’s get started with the basics. (If you want to take it a step further and actually deploy your app, you can jump to the Learn Together videos to learn more.)

    To simplify the steps, let’s take a look at a code sample, so instead of the tooling outlined above, the only things you’ll need are:

    In this article, we’re going to use a web-based IDE called Glitch, which allows you to host and run this project quickly in the browser, so you don’t have to worry about the tunneling or deployment at this time. For the full-scale approach from start to finish, you can check out a comprehensive tutorial on Microsoft Docs, which includes examples of a slightly more advanced messaging extension or a bot.

Although Glitch is a great tool for tutorial purposes, it is not a scalable environment, so in reality you’ll also need a way to deploy and host your web services. In a nutshell, while you are developing, you need to set up a local development environment with localhost tunneling, such as the third-party tool ngrok, and for production, you’ll need to deploy your app to a cloud service provider, for example, Microsoft Azure Web Services.

    Also, you can use on-premises infrastructure to host your web services, but they must be publicly accessible (not behind a firewall). For this article, we will focus on how to make your web app available on Teams, so let’s go back to Glitch for now!

    First, let’s go to the sample code page and remix the project. Remixing is like forking a repo on GitHub, so it generates a copy of the project for you, letting you modify the code however you want without messing with the original.

Remix the sample code page first. We’ll use it as a starting foundation for our project. (Large preview)

    Once you have your own project repo, you’ll also automatically get your own web server URL. For example, if your generated project name is achieved-diligent-bell, your web server URL would be https://achieved-diligent-bell.glitch.me. Of course, you can customize the name if you want.

Double-check your project name in the upper-left corner. (Large preview)

With your web services up and running, you’ll need to create an app package that can be distributed and installed in Teams. The app package installed to the Teams client contains two icons and a JSON manifest file that describes the metadata for your app, the extension points your app is using, and pointers to the services powering those extension points.

    Create an app package

    Now, you will need to create an app package to make your web app available in Teams. The package includes:

    📁 your-app-package
     └── 📄 manifest.json
     └── 🖼 color.png (192x192)
     └── 🖼 outline.png (32x32)
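
App Studio (covered below) can generate the manifest for you, but to give you an idea of its shape, here is a trimmed-down, hypothetical sketch of a manifest.json for a personal tab — the IDs and URLs are placeholders, and the exact fields depend on your manifest version:

{
  "$schema": "https://developer.microsoft.com/en-us/json-schemas/teams/v1.8/MicrosoftTeams.schema.json",
  "manifestVersion": "1.8",
  "id": "00000000-0000-0000-0000-000000000000",
  "version": "1.0.0",
  "name": { "short": "My Tab App" },
  "description": {
    "short": "A bare-minimum personal tab",
    "full": "A bare-minimum personal tab embedding an existing web page."
  },
  "developer": {
    "name": "Your Name",
    "websiteUrl": "https://example.com",
    "privacyUrl": "https://example.com/privacy",
    "termsOfUseUrl": "https://example.com/tou"
  },
  "icons": { "color": "color.png", "outline": "outline.png" },
  "accentColor": "#FFFFFF",
  "staticTabs": [
    {
      "entityId": "index",
      "name": "Home",
      "contentUrl": "https://your-project-name.glitch.me/index.html",
      "scopes": ["personal"]
    }
  ],
  "validDomains": ["your-project-name.glitch.me"]
}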
    

    When creating your app package, you can choose to create it manually or use App Studio, which is a useful app inside Teams that helps developers make Teams apps (yes, meta indeed). App Studio will guide you through the configuration of the app and create your app manifest automatically.

    Once you have installed the App Studio app in your Teams client, open the app. You can launch it by clicking the three dots in the left menu bar.

    Launch the App Studio app by clicking the three dots in the left menu bar. (Large preview)

    Then, click the Manifest Editor tab from the top and select Create a new app.

    Proceed with the Manifest Editor in the top navigation and select ‘Create a new app’. (Large preview)

    You are going to need to fill out all the required fields including the app names, descriptions, etc.

    Fill in some details, such as app names and descriptions. (Large preview)

In the App URLs section, fill out your privacy and terms-of-use (TOU) web page URLs. In this example, we are just using the placeholder URL, https://example.com.

    Configure your personal tab by selecting Capabilities > Tabs from the left menu.

    Now, you can configure the capabilities of the tab. (Large preview)

    Click the Add button under Add a personal tab and enter the info. Under Content URL, enter your webpage URL (in this case, it should be https://[your-project-name].glitch.me/index.html).

    You will need to add your content URL — the one we’ve defined earlier. (Large preview)

The index.html file has a few lines of static HTML code:

    <h1>Hello world! </h1>
    <p>This is the bare-minimum setting for MS Teams Tabs.</p>
    

    Feel free to tweak the content in the index.html as you want. This is the content to be displayed in your Teams client. Finally, go to Finish > Test and distribute.

    Now you should be ready to finish, test and distribute. (Large preview)

    If you get any errors, you’ll have to go back and fix them. Otherwise, you can proceed by clicking “Install”. And voilà, now you have your own personal tab!

    Here we go: our first Tab is ready to go. (Large preview)

    Additional features with Teams SDK

This code sample only contains the bare-minimum HTML needed to show you how to configure Teams to display your web app in a tab. But of course, your web apps don’t need to be static, and you can use web frameworks such as React if you wish! (There are more deep-dive examples using React that you can dive into as well.)

Teams also has its own JavaScript SDK that provides additional functionality, such as loading a configuration popup, getting the user’s locale info, and so on.

One useful example is detecting the “theme” of a Teams client — Teams has three themes: light (default), dark, and high-contrast mode. You would think CSS should handle the theming, but remember, your web app is displayed inside of Teams’ iframe, so you need to use the SDK to handle the color change.

    You can include the JavaScript either from npm:

    npm install --save @microsoft/teams-js
    

    Or include in your HTML:

    <script src="https://statics.teams.cdn.office.net/sdk/v1.8.0/js/MicrosoftTeams.min.js"></script>
    

    Now you can detect the current theme with the getContext method. And this is how you can determine the body text color:

microsoftTeams.initialize();

microsoftTeams.getContext((context) => {
  if (context.theme !== 'default') {
    document.body.style.color = '#fff';
  }
});
    

    The theme can be changed by a user after loading, so to detect the theme change event, add the following code snippet:

microsoftTeams.registerOnThemeChangeHandler((theme) => {
  if (theme !== 'default') {
    // Dark or high-contrast theme: switch the body text to white.
    document.body.style.color = '#fff';
  } else {
    // Default (light) theme: fall back to the inherited color.
    document.body.style.color = 'inherit';
  }
});
    
    And so we’ve switched from a light mode to dark mode. (Large preview)

Hopefully, this simple tutorial has helped you take your first steps. If you’d like to continue developing for Teams, you can add more capabilities such as Teams-native UI components, search features, messaging extensions, and conversational bots, to build more interactive applications.

    For a comprehensive guide using the recommended toolsets (Visual Studio Code, Yeoman Generator, etc.), check out Teams Developer Docs where you can learn more about tabs, messaging extensions, bots, webhooks, and the other capabilities that the Teams developer platform provides.

    Next Steps

    With just a few clicks, you can integrate your apps into Teams and create new experiences for your users. And once you’ve developed apps and deployed them to Teams, you’ll have the potential of reaching a wide audience of users that use Teams daily.

    You can get started building today or learn more from Learn Together: Developing Apps for Teams with on-demand videos and demos all around building apps for Teams.


    web design

    What’s The State Of Web Performance? — Smashing Magazine

    01/26/2021

    About The Author

    Drew is a Staff Engineer specialising in Frontend at Snyk, as well as being a co-founder of Notist and the small content management system Perch. Prior to this, …
    More about
    Drew

    In this episode, we’re talking about Web Performance. What does the performance landscape look like in 2021? Drew McLellan talks to expert Harry Roberts to find out.

    In this episode, we’re talking about Web Performance. What does the performance landscape look like in 2021? I spoke with expert Harry Roberts to find out.

    Show Notes

    Harry is running a Web Performance Masterclass workshop with Smashing in May 2021. At the time of publishing, big earlybird discounts are still available.

    Weekly Update

    Transcript

Drew McLellan: He’s an independent Consultant Web Performance Engineer from Leeds in the UK. In his role, he helps some of the world’s largest and most respected organizations deliver faster and more reliable experiences to their customers. He’s an invited Google Developer Expert, a Cloudinary Media Developer Expert, an award-winning developer, and an international speaker. So we know he knows his stuff when it comes to web performance, but did you know he has 14 arms and seven legs? My Smashing friends, please welcome Harry Roberts. Hi Harry, how are you?

    Harry Roberts: Hey, I’m smashing thank you very much. Obviously the 14 arms, seven legs… still posing its usual problems. Impossible to buy trousers.

    Drew: And bicycles.

    Harry: Yeah. Well I have three and a half bicycles.

    Drew: So I wanted to talk to you today, not about bicycles unfortunately, although that would be fun in itself. I wanted to talk to you about web performance. It’s a subject that I’m personally really passionate about but it’s one of those areas where I worry, when I take my eye off the ball and get involved in some sort of other work and then come back to doing a bit of performance work, I worry that the knowledge I’m working with goes out of date really quick… Is web performance as fast-moving these days as I perceive?

    Harry: This is… I’m not even just saying this to be nice to you, that’s such a good question because I’ve been thinking on this quite a bit lately and I’d say there are two halves of it. One thing I would try and tell clients is that actually it doesn’t move that fast. Predominantly because, and this is the soundbite I always use, you can bet on the browser. Browsers aren’t really allowed to change fundamentally how they work, because, of course, there’s two decades of legacy they have to uphold. So, generally, if you bet on the browser and you know how those internals work, and TCP/IP that’s never changing… So the certain things that are fairly set in stone, which means that best practice will, by and large, always be best practice where the fundamentals are concerned.

Harry: Where it does get more interesting is… The thing I’m seeing more and more is that we’re painting ourselves into corners when it comes to site-speed issues. So we actually create a lot of problems for ourselves. So what that means realistically is performance… it’s the moving goalpost, I suppose. The more the landscape or the topography of the web changes, and the way it’s built and the way we work, we pose ourselves new challenges. So the advent of doing a lot more work on the client poses different performance issues than we’d be solving five years ago, but those performance issues still pertain to browser internals which, by and large, haven’t changed in five years. So a lot of it depends… And I’d say there’s definitely two clear sides to it… I encourage my clients to lean on the browser, lean on the standards, because they can’t just be changed, the goalposts don’t really move. But, of course, that needs to meld with more modern and, perhaps slightly more interesting, development practices. So you keep your… Well, I was going to say “A foot in both camps” but with my seven feet, I’d have to… four and three.

    Drew: You mentioned that the fundamentals don’t change and things like TCP/IP don’t change. One of the things that did change in… I say “recent years”, this is actually probably going back a little bit now but, is HTTP in that we had this established protocol HTTP for communicating between clients and servers, and that changed and then we got H2 which is then all binary and different. And that changed a lot of the… It was for performance reasons, it was to take away some of the existing limitations, but that was a change and the way we had to optimize for that protocol changed. Is that now stable? Or is it going to change again, or…

Harry: So one thing that I would like to be learning more about is the latter half of the question, the changing again. I need to be looking more into QUIC and H3 but it’s a bit too far around the corner to be useful to my clients. When it comes to H2, things have changed quite a lot but I genuinely think H2 is a lot of false promise and I do believe it was rushed over the line, which is remarkable considering H1 was launched… And I mean 1.1, that was 1997, so we had a lot of time to work on H2.

Harry: I guess the primary benefit, as web developers understand or perceive it, is unlimited in-flight requests now. So rather than six dispatched and/or six in-flight requests at a time, potentially unlimited, infinite. It brings really interesting problems though because… it’s quite hard to describe without visual aids but you’ve still got the same amount of bandwidth available, whether you’re running H1 or H2, the protocol doesn’t make your connection any faster. So it’s quite possible that you could flood the network by requesting 24 files at once, but you don’t have enough bandwidth for that. So you don’t actually get any faster because you can only manage, perhaps, a fraction of that at a time.

Harry: And also what you have to think about is how the files respond. And this is another pro-tip I go through on client workshops et cetera. People will look at an H2 waterfall and they will see that instead of the traditional six dispatched requests they might see 24. Dispatching 24 requests isn’t actually that useful. What is useful is when those responses are returned. And what you’ll notice is that you might dispatch 24 requests, so your left-hand side of your waterfall looks really nice and steep, but they all return in a fairly staggered, sequential manner because you need to limit the amount of bandwidth so you can’t fulfill all responses at the same time.

Harry: Well, the other thing is if you were to fulfill all responses at the same time, you’d be interleaving responses. So you might get the first 10% of each file and the next 20%… 20% of a JavaScript file is useless. JavaScript isn’t usable until 100% of it has arrived. So what you’ll see is, in actual fact, the way an H2 waterfall manifests itself when you look at the response… It looks a lot more like H1 anyway, it’s a lot more staggered. So, H2, I think it was oversold, or perhaps engineers weren’t led to believe that there are caps on how effective it could be. Because you’ll see people overly sharding their assets and they might have twenty… let’s keep the number 24. Instead of having two big JS files, you might have 24 little bundles. They’ll still return fairly sequentially. They won’t all arrive at the same time because you’ve not magic-ed yourself more bandwidth.

Harry: And the other problem is each request has a constant amount of latency. So let’s say you’re requesting two big files and it’s a hundred millisecond roundtrip and 250 milliseconds downloading, that’s two times 250 milliseconds. If you multiply up to 24 requests, you’ve still got constant latency, which we’ve decided is 100 milliseconds, so now you’ve got 2400 milliseconds of latency and 24 times… instead of 250 milliseconds download let’s say it’s 25 milliseconds download, it’s actually taken longer because the latency stays constant and you just multiply that latency over more requests. So I’ll see clients who will have read that H2 is this magic bullet. They’ll shard… “Oh! This’ll simplify the development process, we don’t need to do bundling or concatenation,” et cetera, et cetera. And ultimately it will end up slower because you’ve managed to spread your requests out, which was the promise, but your latency stays constant so you’ve actually just got n times more latency in the browser. Like I said, really hard, probably pointless trying to explain that without visuals, but it’s remarkable how H2 manifests itself compared to what people are hoping it might do.

    Drew: Is there still benefit in that sharding process in that, okay, to get the whole lot still takes the same amount of time but by the time you get 100% of the first one 24th back you can start working on it and you can start executing it before the 24th is through.

Harry: Oh, man, another great question. So, absolutely, if things go correctly and it does manifest itself in a more H1-looking response, the idea would be file one returns first, two, three, four, and then they can execute in the order they arrive. So you can actually shorten the aggregate time by ensuring that things don’t arrive at the same time. If we have a look at a webpage’s waterfall and you notice that responses are interleaved, that’s bad news. Because like I said, 10% of a JavaScript file is useless.

Harry: If the server does its job properly and it sends, sends, sends, sends, sends, then it will get faster. And then you’ve got knock-on benefits: your caching strategy can be more granular. So really annoying would be you update the font size on your date picker widget. In the H1 world you’ve got to cache-bust perhaps 200 kilobytes of your site-wide CSS. Whereas now, you just cache-bust datepicker.css. So we’ve got offshoot benefits like that, which are definitely, definitely very valuable.

    Drew: I guess, in the scenario where you magically did get all your requests back at once, that would obviously bog down the client potentially, wouldn’t it?

Harry: Yeah, potentially. And then what would actually happen is the client would have to do a load of resource scheduling so what you’d end up with is a waterfall where all your responses return at the same time, then you’d have a fairly large gap between the last response arriving and its ability to execute. So ideally, when we’re talking about JavaScript, you’d want the browser to request them all in the request order, basically the order you defined them in, the server to return them all in the correct order so then the browser can execute them in the correct order. Because, as you say, if they all returned at the same time, you’ve just got a massive JavaScript to run at once but it still needs to be scheduled. So you could have a delay of up to a second between a file arriving and it becoming useful. So, actually, H1… I guess, ideally, what you’re after is H2 request scheduling, H1-style responses, so then things can be made useful as they arrive.

    Drew: So you’re basically looking for a response waterfall that looks like you could ski down it.

    Harry: Yeah, exactly.

    Drew: But you wouldn’t need a parachute.

Harry: Yeah. And it’s a really difficult… I think to say it out loud it sounds really trivial, but given the way H2 was sold, I find it quite… not challenging because that makes my client sound… dumb… but it’s quite a thing to explain to them… if you think about how H1 works, it wasn’t that bad. And if we get responses that look like that and “Oh yeah, I can see it now”. I’ve had to teach performance engineers this before. People who do what I do. I’ve had to teach performance engineers that we don’t mind too much when requests were made, we really care about when responses become useful.

    Drew: One of the reasons things seem to move on quite quickly, especially over the last five years, is that performance is a big topic for Google. And when Google puts weight behind something like this then it gains traction. Essentially though, performance is an aspect of user experience, isn’t it?

Harry: Oh, I mean… this is a really good podcast, I was thinking about this half an hour ago, I promise you I was thinking about this half an hour ago. Performance is applied accessibility. You’re guaranteeing or increasing the chances that someone can access your content and I think accessibility is always just… Oh it’s screen readers, right? It’s people without sight. The decision to build a website rather than an app… the decision is to access more of an audience. So yeah, performance is applied accessibility, which is therefore the user experience. And that user experience could come down to “Could somebody even experience your site” full stop. Or it could be “Was that experience delightful? When I clicked a button, did it respond in a timely manner?”. So I 100% agree and I think that’s a lot of the reason why Google are putting weight on it, is because it affects the user experience and if someone’s going to be trusting search results, we want to try and give that person a site that they’re not going to hate.

    Drew: And it’s… everything that you think about, all the benefits you think about, user experience, things like increased engagement, it’s definitely true isn’t it? There’s nothing that sends the user away from a site more quickly than a sluggish experience. It’s so frustrating, isn’t it? Using a site where you know that maybe the navigation isn’t that clear and if you click through to a link and you think “Is this what I want? Is it not?” And just the cost of making that click, just waiting, and then you’ve got to click the back button and then that waiting, and it’s just… you give up.

    Harry: Yeah, and it makes sense. If you were to nip to the supermarket and you see that it’s absolutely rammed with people, you’ll do the bare minimum. You’re not going to spend a lot of money there, it’s like “Oh I just need milk”, in and out. Whereas if it’s a nice experience, you’ve got “Oh, well, while I’m here I’ll see if… Oh, yeah they’ve got this… Oh, I’ll cook this tomorrow night” or whatever. I think still, three decades into the web, even people who build for the web struggle, because it’s intangible. They struggle to really think that what would annoy you in a real store would annoy you online, and it does, and the stats show that it has.

    Drew: I think that in the very early days, I’m thinking late 90s, showing my age here, when we were building websites we very much thought about performance but we thought about performance from a point of view that the connections that people were using were so slow. We’re talking about dial-up, modems, over phone lines, 28K, 56K modems, and there was a trend at one point with styling images that every other line of the image would blank out with a solid color to give this… if you can imagine it like looking through a venetian blind at the image. And we did that because it helped with the compression. Because every other line the compression algorithm could-

    Harry: Collapse into one pointer.

    Drew: Yeah. And so you’ve significantly reduced your image size while still being able to get… And it became a design element. Everybody was doing it. I think maybe Jeffrey Zeldman was one of the first who pioneered that approach. But what we were thinking about there was primarily how quickly could we get things down the wire. Not for user experience, because we weren’t thinking about… I mean I guess it was user experience because we didn’t want people to leave our sites, essentially. But we were thinking about not optimizing things to be really fast but trying to avoid them being really slow, if that makes sense.

    Harry: Yeah, yeah.

    Drew: And then, I think as speeds like ADSL lines became more prevalent, that we stopped thinking in those terms and started just not thinking about it at all. And now we’re at the situation where we’re using mobile devices and they’ve got constrained connections and perhaps slower CPUs and we’re having to think about it again, but this time in terms of getting an advantage. As well as the user experience side of things, it can have real business benefits in terms of costs and ability to make profit. Hasn’t it?

Harry: Yeah, tremendously. I mean, not sure how to word it. Not shooting myself in the foot here but one thing I do try and stress to clients is that site-speed is going to give you a competitive advantage but it’s only one thing that could give you some competitive advantage. If you’ve got a product no one wants to buy then it doesn’t matter how fast your site is. And equally, if someone genuinely wants the world’s fastest website, you have to delete your images, delete your CSS, delete your JavaScript, and then see how many products you sell, because I guarantee site-speed wasn’t the factor. But studies have shown that there’s huge benefits of being fast, to the order of millions. I’m working with a client as we speak. We worked out for them that if they could render a given page one second faster, or rather their Largest Contentful Paint was one second faster, it’s worth 1.8 mil a year, which is… that’s a big number.

    Drew: That would almost pay your fee.

Harry: Hey! Yeah, almost. I did say to them “Look, after two years this’ll be all paid off. You’ll be breaking even”. I wish. But yeah, there’s the client-facing aspect… sorry, the customer-facing aspect: if you’ve got an E-Com site, they’re going to spend more money. If you’re a publisher, they’re going to read more of an article or they will view more minutes of content, or whatever you do that is your KPI that you measure. It could be on the Smashing site, it could be they didn’t bounce, they actually click through a few more articles because we made it really easy and fast. And then faster sites are cheaper to run. If you’ve got your caching strategy sorted you’re going to keep people away from your servers. If you optimize your assets, anything that does have to come from your server is going to weigh a lot less. So much cheaper to run.

    Harry: The thing is, there’s a cost in getting there. I think Scott Jehl probably said one of the most… And I heard it from him first, so I’m going to assume he came up with it but the saying is “It’s easy to make a fast website but it’s difficult to make a website fast”. And that is just so succinct. Because the reason web perf might get pushed down the list of things to do is because you might be able to say to a client “If I make your site a second faster you’ll make an extra 1.8 mil a year” or it can be “If you just added Apple Pay to your checkout, you’re going to make an extra five mil.” So it’s not all about web perf and it isn’t the deciding factor, it is one part of a much bigger strategy, especially for E-Com online. But the evidence is that I’ve measured it firsthand with my retail clients, my E-Com clients. The case for it is right there, you’re absolutely right. It’s competitive advantage, it will make you more money.

    Drew: Back in the day, again, I’m harping back to a time past, but people like Steve Souders were some of the first people to really start writing and speaking about web performance. And people like Steve were basically saying “Forget the backend infrastructure, where all the gains to be had are in the browser, in the front end, that’s where everything slow happens.” Is that still the case 15 years on?

    Harry: Yeah, yeah. He reran the test in between way back then and now, and the gap had actually widened, so it’s actually more costly over the wire. But there is a counter to that, which is if you’ve got really bad backend performance, if you set out of the gate slowly, there’s only so much you can claw back on the front end. I got a client at the moment, their time to first byte is 1.5 seconds. We can never render faster than 1.5 seconds therefore, so that’s going to be a cap. We can still claw time back on the front end but if you’ve got a really, really bad time to first byte, you have got backend slow downs, there’s a limit on how much faster your front end performance efforts could get you. But absolutely.

Harry: That is, however, changing because… Well, no it’s not changing I guess, it’s getting worse. We’re pushing more onto the client. It used to be a case of “Your server is as fast as it is but then after that we’ve got a bunch of question marks.” because I hear this all the time “All our users run on WiFi. They’ve all got desktop machines because they all work from our office.” Well, no, now they’re all working from home. You don’t get to choose. So, that’s where all the question marks come in which is where the slow downs happen, where you can’t really control it. After that, the fact that now we are tending to put more on the client. By that I mean, entire runtimes on the client. You’ve moved all your application logic off of a server anyway so your time to first byte should be very, very minimal. It should be a case of sending some bundles from a CDN to my… but you’ve gone from being able to spec your own servers to hoping that somebody’s not got Netflix running on the same machine they’re trying to view your website on.

    Drew: It’s a really good point about the way that we design sites and I think the traditional best practice has always been you should try and cater for all sorts of browsers, all sorts of connection speeds, all sorts of screen sizes, because you don’t know what the user is going to be expecting. And, as you said, you have these scenarios where people say “Oh no we know all our users are on their work-issued desktop machine, they’re running this browser, it’s the latest version, they’re hardwired into the LAN” but then things happen. One of the great benefits of having web apps is that we can do things like distribute our work force suddenly back all to their homes and they can keep working, but that only holds true if the quality of the engineering was such that then somebody who’s spinning up their home machine that might have IE11 on it or whatever, whether the quality of the work is there that actually means that the web fulfills its potential in being a truly accessible medium.

    Drew: As you say, there’s this trend to shift more and more stuff into the browser, and, of course, then if the browser is slow, that’s where the slowness happens. You have to wonder “Is this a good trend? Should we be doing this?” I’ve got one site that I particularly think of, noticed that is almost 100% server rendered. There’s very little JavaScript and it is lightning fast. Every time I go to it I think “Oh, this is fast, who wrote this?” And then I realize “Oh yeah, it was me”.

    Harry: That’s because you’re on localhost, no wonder it feels fast. It’s your dev site.

    Drew: Then, my day job, we’re building out our single page application and shifting stuff away from the server because the server’s the bottleneck in that case. Can you just say that it’s more performant to be in the browser? Or more performant to be on the server? Is it just a case of measuring and taking it on a case-by-case basis?

    Harry: I think you need to be very, very, very aware of your context and… genuinely I think an error is… narcissism where people think “Oh, my blog deserves to be rendered in someone’s browser. My blog with a bounce rate of 89% needs its own runtime in the browser, because I need subsequent navigations to be fast, I just want to fetch a… basically a diff of the data.” No one’s clicking onto your next article anyway, mate, don’t push a runtime down the pipe. So you need to be very aware of your context.

    Harry: And I know that… if Jeremy Keith’s listening to this, he’s going to probably put a hit out on me, but there is, I would say, a difference between a website and a web app and the definition of that is very, very murky. But if you’ve got a heavily read and write application, so something where you’re inputting data, manipulating data, et cetera. Basically my site is not a web app, it’s a website, it’s read only, that I would firmly put in the website camp. Something like my accountancy software is a web app, I would say is a web app and I am prepared to suffer a bit of boot time cost, because I know I’ll be there for 20 minutes, an hour, whatever. So you need a bit of context, and again, maybe narcissism’s a bit harsh but you need to have a real “Do we need to make this newspaper a client side application?” No, you don’t. No, you don’t. People have got ad-blocker on, people don’t like commuter newspaper sites anyway. They’re probably not even going to read the article and rant about it on Facebook. Just don’t build something like that as a client rendered application, it’s not suitable.

Harry: So I do think there is definitely a point at which moving more onto the client would help, and that’s when you’ve got less sensitivity to churn. So any E-Com type, for example — I’m doing an audit at the moment for a site who… I think it’s an E-Com site but it’s 100% on the client. You disable JavaScript and you see nothing, just an empty div id=“app”. E-Com is… you’re very sensitive to any issues. If your checkout flow is even subtly wrong, I’m off somewhere else. It’s too slow, I’m off somewhere else. You don’t have the context where someone’s willing to bed in to that app for a while.

Harry: Photoshop. I pop open Photoshop and I’m quite happy to know that it’s going to take 45 seconds of splash screen because I’m going to be in there for… basically the 45 seconds is worth the 45 minutes. And it’s so hard to define, which is why I really struggle to convince clients “Please don’t do this” because I can’t just say “How long do you think your user’s going to be there for”. And you can proxy it from… if your bounce rate’s 89% don’t optimize for a second page view. Get that bounce rate down first. I do think there’s definitely a split but what I would say is that most people fall on the wrong side of that line. Most people put stuff in the client that shouldn’t be there. CNN, for example, you cannot read a single headline on the CNN website until it has fully booted a JavaScript application. The only thing server-rendered is the header and footer, which is the only thing people don’t care about.

    Harry: And I feel like that is just… I don’t know how we arrive at that point. It’s never going to be the better option. You deliver a page that is effectively useless which then has to say “Cool, I’ll go fetch what would have been a web app but we’re going to run it in the browser, then I’ll go and ask for a headline, then you can start to… oh, you’re gone.” That really, really irks me.

    Harry: And it’s no one’s fault, I think it’s the infancy of this kind of JavaScript ecosystem, the hype around it, and also, this is going to sound really harsh but… It’s basically a lot of naïve implementation. Sure, Facebook have invented React and whatever, it works for them. Nine times out of 10 you’re not working at Facebook scale, 95 times out of 100 you’re probably not the smartest Facebook engineers, and that’s really, really cruel and it sounds horrible to say, but you can only get… None of these things are fast by default. You need a very, very elegant implementation of these things to make them correct.

    Harry: I was having this discussion with my old… he was a lead engineer on the squad that I was on 10 years ago at Sky. I was talking to him the other day about this and he had to work very hard to make a client rendered app fast, whereas making a server rendered app fast, you don’t need to do anything. You just need to not make it slow again. And I feel like there’s a lot of rose tinted glasses, naivety, marketing… I sound so bleak, we need to move on before I start really losing people here.

    Drew: Do you think we have the tendency, as an industry, to focus more on developer experience than user experience sometimes?

Harry: Not as a whole, but I think that problem crops up in a place you’d expect. If you look at the disparity… I don’t know if you’ve seen this but I’m going to presume you have, you seem to very much have your finger on the pulse, the disparity between HTTP Archive’s data about what frameworks and JavaScript libraries are used in the wild versus the State of JavaScript survey, if you follow the State of JavaScript survey it would say “Oh yes, 75% of developers are using React” whereas fewer than 5% of sites are using React. So, I feel like, en masse, I don’t think it’s a problem, but I think in the areas you’d expect it, heavy loyalty to one framework for example, developer experience is… evangelized probably ahead of the user. I don’t think developer experience should be overlooked, I mean, everything has a maintenance cost. Your car. There was a decision when it was designed that “Well, if we hide this key, that functionality, from a mechanic, it’s going to take that mechanic a lot longer to fix it, therefore we don’t do things like that”. So there does need to be a balance of ergonomics and usability, I think that is important. I think focusing primarily on developer experience is just baffling to me. Don’t optimize for you, optimize for your customer, your customer pays you, it’s not the other way around.

    Drew: So the online echo chamber isn’t exactly representative of reality when you see everybody saying “Oh you should be using this, you should be doing that” then that’s actually only a very small percentage of people.

    Harry: Correct, and that’s a good thing, that’s reassuring. The echo chamber… it’s not healthy to have that kind of monoculture perhaps, if you want to call it that. But also, I feel like… and I’ve seen it in a lot of my own work, a lot of developers… As a consultant, I work with a lot of different companies. A lot of people are doing amazing work in WordPress. And WordPress powers 24% of the web. And I feel like it could be quite easy for a developer like that working in something like WordPress or PHP on the backend, custom code, whatever it is, to feel a bit like “Oh, I guess everyone’s using React and we aren’t” but actually, no. Everyone’s talking about React but you’re still going with the flow, you’re still with the majority. It’s quite reassuring to find the silent majority.

    Drew: The trend towards static site generators and then hosting sites entirely on a CDN, sort of JAMstack approach, I guess when we’re talking about those sorts of publishing type sites, rather than software type sites, I guess that’s a really healthy trend, would you think?

Harry: I love that, absolutely. You remember when we used to call SSG “flat file”, right?

    Drew: Yeah.

Harry: So, I built CSS Wizardry on Jekyll back when Jekyll was called a flat-file website. But now we say static site generator… huge, huge fan of that. There’s no disadvantage to it really, you pay maybe a slightly larger up-front compute cost of pre-compiling the site but then your compute cost is… well, Cloudflare fronts it, right? It’s on a CDN so your application servers are largely shielded from that.

Harry: Anything interactive that does need doing can be done on the client or, if you want to get fancy, one really nice approach, if you are feeling ambitious, is to use Edge Side Includes so you can keep your shopping cart server-rendered, but at the edge. You can do stuff like that. Tremendous performance benefits there. Not appropriate for a huge swathe of sites, but, like you say, if we’re thinking publishing… an E-Com site it wouldn’t work, you need realtime stock levels, you need… search that doesn’t just… I don’t know, you just need far more functionality. But yeah, I think the Smashing site, great example, my site is an example, much smaller than Smashing but yeah, SSG, flat files, I’m really fond of it.

Drew: Could it work going deeper into the JAMstack approach of shifting everything into the client and building an E-Commerce site? I think the Smashing E-Commerce site is essentially using JavaScript in the client and server APIs to do the actual functionality as serverless functions or what have you.

    Harry: Yeah. I’ve got to admit, I haven’t done any stuff with serverless. But yeah, that hybrid approach works. Perhaps my E-Commerce example was a bit clunky because you could get a hybrid between statically rendering a lot of the stuff, because most things on an E-Com site don’t really change. You filter what you can do on the client. Search, a little more difficult, stock levels does need to go back to an API somewhere, but yeah you could do a hybrid for a definite, for an E-Com site.

Drew: Okay, so then it’s just down to monitoring all those performance metrics again, really caring about the network, about latency, about all these sorts of things, because you’re then leaning on the network a lot more to fetch all those individual bits of data. It poses a new set of problems.

Harry: Yeah, I mean you kind of… I wouldn’t say “Robbing Peter to pay Paul” but you are going to have to keep an eye on other things elsewhere. I’ve not got fully to the bottom of it, before anyone tweets it at us, but a recent example: an E-Commerce client. I worked with them two years ago and that site was already pretty fast. It was built on… I can’t remember which E-Com platform, it was .net, hosted on IIS, server-rendered, obviously, and it was really fast because of that. It was great and we just wanted to maintain, maybe find a couple of hundred milliseconds here and there, but really good. Half way through last year, they moved to client-side React for key pages. PDP… product details page, PLP… product listing page, and stuff just got markedly slower, much slower. To the point they got back in touch needing help again.

Harry: And one of the interesting things I spotted when they were putting a case for “We need to actually revert this”. I was thinking about all the… what’s slower, obviously it’s slower, how could doing more work ever be faster, blah blah blah. One of their own bullet points in the audit was: based on projections, their yearly hosting costs have gone up by a factor of 10. Because all of a sudden they’ve gone from having one application server and a database to having loads of different gateways, loads of different APIs, loads of different microservices they’re calling on. It increased the surface area of their application massively. And the basic reason for this, I’ll tell you exactly why this happened. The developer, it was a very small team, the developer who decided “I’m going to use React because it seems like fun” didn’t do any business analysis. It was never expected of them to actually put forward a case of how much is it going to cost to do, how much is it going to return, what’s the maintenance cost of this?

    Harry: And that’s a thing I come up against really frequently in my work and it’s never the developer’s fault. It’s usually because the business keeps financials away from the engineering team. If your engineers don’t know the cost or value of their work then they’re not informed to make those decisions so this guy was never to know that that was going to be the outcome. But yeah, interestingly, moving to a more microservice-y approach… And this is an outlier, and I’m not going to say that that 10 times figure is typical, it definitely seems atypical, but it’s true that there is at least one incident I’m aware of when moving to this approach, because they just had to use more providers. It 10x’ed their… there’s your 10 times engineer, increased hosting by 10 times.

Drew: I mean, it’s an important point, isn’t it? Before starting out down any particular road with architectural changes and things, about doing your research and asking the right questions. If you were going to embark on some big changes, say you’ve got a really old website and you’re going to restructure it and you want it to be really fast and you’re making all your technology choices, I mean it pays, doesn’t it, to talk to different people in the business to find out what they want to be doing. What sort of questions should you be asking other people in the business as a web developer or as a performance engineer? Who should you be talking to and what should you be asking them?

Harry: I’ve got a really annoying answer to the “Who should you be talking to?” And the answer is everyone should be available to you. And it will depend on the kind of business, but you should be able to speak to marketing “Hey, look, we’re using this AB testing tool. How much does that cost us a year and how much do you think it nets us a year?” And that developer should feel comfortable. I’m not saying developers need to change their attitude, what I mean is the company should make the developers able to ask those kind of questions. How much does Optimizely cost us a year? Right, well that seems like a lot, does it make that much in return? Okay, whatever, we can make a decision based on that. That’s who you should be talking to and then questions you should ask, it should be things like…

Harry: The amount of companies I work with that won’t give their own developers access to Google Analytics. How are you meant to build a website if you don’t know who you’re building it for? So the question should be… I work a lot with E-Com clients so every developer should know things like “What is our average order value? What is our conversion rate? What is our revenue, how much do we make?” These things mean that you can at least understand that “Oh, people spend a lot of money on this website and I’m responsible for a big chunk of that and I need to take that responsibility.”

Harry: Beyond that, other things are hard to put into context, so for me, one of the things that I, as a consultant, so this is very different to an engineer in the business, I need to know how sensitive you are to performance. So if a client gives me the average order value, monthly traffic, and their conversion rate, I can work out how much 100 milliseconds, 500 milliseconds, a second will save them a year, or return them, just based on those three numbers I can work out roughly “Well a second’s worth 1.8 mil”. It’s a lot harder for someone in the business to get all that background information because as a performance engineer it’s second nature to me. But if you can work that kind of stuff out, it unlocks a load of doors. Okay, well if a second’s worth this much to us, I need to make sure that I never lose a second and if I can, gain a second back. And that will inform a lot of things going forward. A lot of these developers are kept quite siloed. “Oh well, you don’t need to know about business stuff, just shut up and type”.

    Drew: I’ve heard you say, it is quite a nice soundbite, that nobody wants a faster website.

    Harry: Yeah.

    Drew: What do you mean by that?

    Harry: Well it kind of comes back to, I think I’ve mentioned it already in the podcast, that if my clients truly wanted the world’s fastest website, they would allow me to go in and delete all their JavaScript, all their CSS, all their images. Give that customer a Times New Roman stack.

    Harry: But fast for fast’s sake is… not chasing the wrong thing exactly, but you need to know what fast means to you, because I see it all the time with clients: there’s a point at which you can stop. Your customers might only be so sensitive to web perf: getting a First Contentful Paint from four seconds to two seconds might give you a 10% increase in revenue, but getting from that two to a one might only give you a further 1% increase. It’s still twice as fast, but you get minimal gains. So what I need to do with my clients is work out “How sensitive are you? When can we take our foot off the gas?” And also, like I said towards the top of the show… You need to have a product that people want to buy.

    Harry: If people don’t want to buy your product, it doesn’t matter how quickly you show them it, it’ll just disgust them faster, I guess. Is your checkout flow really, really seamless on mobile, for example? So there are a number of factors. For me and my clients, it’ll be working out that sweet spot, and also working out “If getting from here to here is going to make you 1.8 mil a year, I can find you that second for a fraction of that cost.” If you want me to get you an additional second on top of that, it’s going to get a lot harder, so my cost to you will probably go up, and that won’t return an extra 1.8, because it’s not linear; you don’t get 1.8 mil for every one second.

    Harry: It will tail off at some point. And clients will get to a point where… they’ll still be making gains, but it might be a case of your engineering effort doubling while your returns halve. You can still be in the green, hopefully it doesn’t get more expensive to the point where you’re losing money on performance, but there’s a point where you need to slow down. And that’s usually something I help clients find out, because otherwise they will just keep chasing speed, speed, speed and get a bit blinkered.

    Drew: Yeah, it is sort of diminishing returns, isn’t it?

    Harry: That’s what I was looking for-

    Drew: Yeah.

    Harry: … diminishing returns, that’s exactly it. Yeah, exactly.

    Drew: And in terms of knowing where to focus your effort… Say you’ve got the bulk of your users, 80% of your users, getting a response within two, three seconds, and then you’ve got 20% in the long-tail who might end up with responses of five, ten seconds. Is it better to focus on that 80%, where the work’s really hard, or is it better to focus on the 20% that’s super slow, where the work might be easier, but it’s only 20%? How do you balance those sorts of things?

    Harry: Drew, can you write all podcast questions for everyone else? This is so good. Well, a bit of a shout out to Tim Kadlec, he’s done great talks on this very topic and he calls it “The Long-Tail of Web Performance”, so anyone listening who wants to look at that, Tim’s done a lot of good firsthand work there. The 80/20, let’s just take those as good example figures: by the time you’re dealing with the 80th percentile, you’re definitely in the edge cases. All your CrUX and web vitals data is based around the 75th percentile. I think there’s a lot of value in investing in that top 20th percentile, the worst 20%. Several reasons for this.

    Harry: First thing I’m going to start with is one of the most beautiful, succinct soundbites I’ve ever heard. And the guy who told me it, I can guarantee, did not mean it to be this impactful. I was 15 years old and I was studying product design, GCSE. Final project, it was a bar stool, so it was a good sign of things to come. And we were talking about how you design furniture. And my teacher basically said… I don’t know if I should… I’m going to say his name, Mr. Brocklesby.

    Harry: He commanded respect but he was one of the lads, we all really liked him. But he was massive in every dimension. Well over six foot tall, but just a big lad. Big, big, big, big man. And he said to us “If you were to design a doorway, would you design it for the average person?” And 15 year old brains are going “Well yeah, if everyone’s roughly 5’9 then yeah.” He was like “Well, immediately, Harry can’t use that door.” You don’t design for the average person, you design for the extremities, because you want it to be useful to the most people. If you designed a chair for the average person, Mr. Brocklesby wasn’t going to fit in it. So he taught me from a really, really young age: design to your extremities.

    Harry: And where that becomes really interesting in web perf is… If you imagine a ladder, and you pick up the ladder by the bot… Okay, I’ve just realized my metaphor might… I’ll stick with it and you can laugh at me afterwards. Imagine a ladder and you lift the ladder up by the bottom rungs. And that’s your worst experiences. You pick the bottom rung of the ladder to lift it up. The whole ladder comes with it, like a rising tide floats all boats. The reason that metaphor doesn’t work is if you pick a ladder up by the top rung, it all lifts as well, it’s a ladder. And the metaphor doesn’t even work if I turn it into a rope ladder, because with a rope ladder, you lift the bottom rung and nothing happens but… my point is, if you can improve the experience for your 90th percentile, it’s got to drag everything above it up with it, right?

    Harry: And this is why I tell clients… They’ll say to me things like “Oh well, most of our users are on 4G on iPhones”, so all right, okay, let’s start testing on 3G on Android, and they’re like “No, no, most of our users are on iPhones.” Okay… but that just means your average user is going to have a better experience, and anyone who isn’t already in the 50th percentile gets left even further behind. So set the bar pretty high for yourself by setting expectations pretty low.

    Harry: Sorry, I’ve got a really bad habit of giving really long answers to short questions. But it was a fantastic question and, to try and wrap up, 100% definitely I agree with you that you want to look at that long-tail, you want to look at that… your 80th percentile because if you take all the experiences on the site and look at the median, and you improve the median, that means you’ve made it even better for people who were already quite satisfied. 50% of people being effectively ignored is not the right approach. And yeah, it always comes back to Mr Brocklesby telling me “Don’t design for the average person because then Harry can’t use the door”. Oh, for anyone listening, I’m 193 centimeters, so I’m quite lanky, that’s what that is.

    Drew: And all those arms and legs.

    Harry: Yeah. Here’s another good one as well. My girlfriend recently discovered the accessibility settings in iOS… so everyone has their phone on silent, right? Nobody actually has a phone that actually rings, everyone’s got it on silent. She found that “Oh you know, you can set it so that when you get a message, the flash flashes. And if you tap the back of the phone twice, it’ll do a screenshot.” And these are accessibility settings, these are designed for that 95th percentile. Yet she’s like “Oh, this is really useful”.

    Harry: Same with OXO Good Grips. OXO Good Grips, the kitchen utensils. I’ve got a load of them in the kitchen. They exist because the founder’s wife had arthritis and he wanted to make more comfortable utensils. He designed for the 99th percentile; most people don’t have arthritis. But by designing for the 99th percentile, inadvertently, everyone else is like “Oh my God, why can’t all potato peelers be this comfortable?” It’s a feel-good anecdote that I like to wheel out in these sorts of scenarios. But yeah, if you optimize for the extremities… well, a rising tide floats all boats. Optimizing for the tail-end of people means you’re going to capture a lot of even happier customers above that.

    Drew: Do you have the OXO Good Grips manual hand whisk?

    Harry: I don’t. I don’t, is it good?

    Drew: Look into it. It’s so good.

    Harry: I do have the OXO Good Grips mandolin slicer which took the end of my finger off last week.

    Drew: Yeah, I won’t get near one of those.

    Harry: Yeah, it’s my own stupid fault.

    Drew: Another example from my own experience of catering for that long-tail is that, in the project I’m working on at the moment, that long-tail is right at the end, you’ve got people with the slowest performance, but it turns out, if you look at who those customers are, they’re the most valuable customers to the business-

    Harry: Okay.

    Drew: … because they are the biggest organizations with the most amount of data.

    Harry: Right.

    Drew: And so they’re hitting bottlenecks because they have so much data to display on a page, and those pages need to be refactored a little bit to help that use case. So they’re having the slowest experience and they’re, when it comes down to it, paying the most money and making so much more of a difference than all of the people having a really fast experience, because those are free users with a tiny amount of data, and it all works nicely and it is quick.

    Harry: That’s a fascinating dimension, isn’t it? In fact, I had a similar… I had nowhere near the business impact of what you’ve just described, but I worked with a client a couple of years ago, and their CEO got in touch because their site was slow. Like, slow, slow, slow. Really nice guy as well, just a really nice, down to earth guy, but he’s minted, like proper rich. And he’s got the latest iPhone, he can afford that. He’s a multimillionaire, he spends a lot of his time flying between Australia, where he is from, and Estonia, where he is now based.

    Harry: And he’s flying first class, course he is. But it means most of his time on his nice, shiny iPhone 12 Pro Max whatever, whatever, is over airplane WiFi, which is terrible. And it was this really amazing juxtaposition where he owns the site and he uses it a lot, it’s a site that he uses. And easily their richest customer was their CEO. And he’s in this weirdly privileged position where he’s on a worse connection than Joe Public, because he’s somewhere above Singapore on a Qantas flight getting champagne poured down his neck, and he’s struggling. And that was a really fascinating insight: your 95th percentile can basically go in either direction.

    Drew: Yeah, it’s when you start optimizing for using a site with a glass of champagne in one hand that you think “Maybe we’re starting to lose the way a bit.”

    Harry: Yeah, exactly.

    Drew: We talked a little bit about measurement of performance, and in my own experience with performance work it’s really essential to measure everything. A, so you can identify where problems are, but B, so that when you actually start tackling something you can tell whether you’re making a difference and how much of a difference you’re making. How should we be going about measuring the performance of our sites? What tools can we use and where should we start?

    Harry: Oh man, another great question. So there’s a range of answers depending on how much time, resource, and inclination there is towards fixing site speed. So what I will try and do with clients is… Certain off-the-shelf metrics are really good. Load time, I do not care about that anymore. It’s very, very… I mean, it’s a good proxy: if your load time’s 120 seconds I’m going to guess you don’t have a very fast website, but it’s too obscure and it’s not really customer facing. I actually think vitals are a really good step in the right direction, because they do measure user experience, but they’re based on technical input. Largest Contentful Paint is a really nice thing to visualize, but the technical stuff there is: unblock your critical path, make sure hero images arrive quickly, and make sure your web font strategy is decent. There’s a technical undercurrent to these metrics. Those are really good off the shelf.
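
    For anyone wanting to see what capturing a vital in the field looks like, here is a minimal sketch using the standard PerformanceObserver API; it simply logs Largest Contentful Paint candidates as the page loads:

        // Log LCP candidates as they are reported in the field.
        new PerformanceObserver((list) => {
          const entries = list.getEntries();
          // The most recent entry is the current best LCP candidate.
          const lcp = entries[entries.length - 1];
          console.log('LCP candidate:', Math.round(lcp.startTime), 'ms', lcp.element);
        }).observe({ type: 'largest-contentful-paint', buffered: true });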

    Harry: However, if clients have got the time (it’s usually time, because you want to capture the data, but you need time to actually capture the data), what I try and do with clients is go “Look, we can’t work together for the next three months because I’m fully booked. So what we can do is really quickly set you up with a free trial of Speedcurve and set up some custom metrics.” That means that for a publisher client, a newspaper, I’d be measuring “How quickly was the headline of the article rendered? How quickly was the lead image for the article rendered?” For an E-Com client, I want to measure the things that matter to them, because you’re already measuring things like start render passively: as soon as you start using any performance monitoring software, you’re capturing your actual performance metrics for free, your First Contentful Paint, Largest Contentful, etc. What I really want to capture is things that matter to them as a business.

    Harry: So, working with an E-Com client at the moment, we are able to correlate: the faster your start render, what is the probability of an add to cart? If you can show someone a product sooner, they’re more likely to buy it. And this is a lot of effort to set up, this is kind of the stretch goal for clients who are really ambitious, but anything that you really want to measure, because like I say, you don’t really want to measure what your Largest Contentful Paint is, you want to measure your revenue, and was that influenced by Largest Contentful Paint? So the stretch goal, the ultimate thing, would be anything you would see as a KPI for that business. It could be, on newspapers, how far down the article did someone scroll? And does that correlate in any way to First Input Delay? Did people read more articles if CLS was lower? But before we start doing custom, custom metrics, I honestly think web vitals is a really good place to start, and it’s also been quite well normalized. It becomes a… I don’t know what the word is. Lowest common denominator, I guess, where everyone in the industry can now discuss performance on this level playing field.

    Harry: One problem I’ve got, and I actually need to set up a meeting with the vitals team, is that I also really think Lighthouse is great, but CLS is 33% of web vitals. You’ve got LCP, FID, CLS. CLS is 33% of your vitals. Vitals is what normally goes in front of your marketing team, your analytics department, because it pops up in Search Console, it’s mentioned in the context of search results pages. So where vitals is concerned, you’ve got this heavy weighting: 33%, a third of vitals, is CLS, yet it’s only 5% of your Lighthouse score. So what you’re going to get is developers who build around Lighthouse, because it can be integrated into tooling, it’s a lab metric. Vitals is field data, it’s RUM.

    Harry: So you’ve got this massive disconnect where you’ve got your marketing team saying “CLS is really bad”, and developers are thinking “Well, it’s 5% of the Lighthouse score that DevTools is giving me, it’s 5% of the score that Lighthouse CLI gives us in CircleCI”, or whatever you’re using, yet for the marketing team it’s 33% of what they care about. So the problem there is a bit of a disconnect, because I do think Lighthouse is very valuable, but I don’t know how they reconcile that fairly massive difference where in vitals CLS is 33% of your score… well, not score, because you don’t really have one, and in Lighthouse it’s only 5%. It’s things like that that still need ironing out before we can make this discussion seamless.
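
    To make the lab-versus-field distinction concrete, this is roughly what gathering CLS as field data (RUM) looks like, again via the standard PerformanceObserver API, as opposed to the one-off lab run Lighthouse performs:

        // Accumulate Cumulative Layout Shift from field data.
        let cls = 0;
        new PerformanceObserver((list) => {
          for (const entry of list.getEntries()) {
            // Shifts caused by recent user input don't count towards CLS.
            if (!entry.hadRecentInput) cls += entry.value;
          }
          console.log('CLS so far:', cls.toFixed(3));
        }).observe({ type: 'layout-shift', buffered: true });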

    Harry: But, again, long answer to a short question. Vitals is really good. LCP is a good user experience metric which can be boiled down to technical solutions, same with CLS. So I think that’s a really good jumping-off point. Beyond that, it’s custom metrics. What I try and get my clients to is a point where they don’t really care how fast their site is, they just care whether they made more money than yesterday. And if they did, is that because the site was running faster? If they made less, is that because it was running slower? I don’t want them to chase a mystical two-second LCP, I want them to chase the optimal LCP. And if that actually turns out to be slower than what you think, then whatever, that’s fine.

    Drew: So, for the web developer who’s just interested in… they’ve not got budget to spend on tools like Speedcurve and things, they can obviously run tools like Lighthouse just within their browser, to get some good measurement… Are things like Google Analytics useful for that level?

    Harry: They are, and they can be made more useful. Analytics, for many years now, has captured rudimentary performance information. And that is going to be DNS time, TCP and TLS, time to first byte, page download time (which is a proxy… well, whatever, just page download time) and load time. So, fairly clunky metrics. But it’s a good jumping-off point, and normally on every project I start with a client, if they don’t have New Relic or Speedcurve or whatever, I’ll just say “Well, let me have a look at your analytics”, because I can at least proxy the situation from there. And it’s never going to be anywhere near as good as something like Speedcurve or New Relic or Dynatrace or whatever.

    Harry: You can send custom metrics really, really, really easily off to analytics. On my site, for example, I’ve got metrics like “How quickly can you read the heading of one of my articles? At what point was the About page image rendered? At what point was the call to action that implores you to hire me rendered to screen?” It’s really trivial to capture this data and almost as trivial to send it to analytics. So if anyone wants to view source on my site, scroll down to the closing body tag and find the analytics snippet, you will see just how easy it is for me to capture custom data and send it off to analytics.

    Harry: And, in the analytics UI, you don’t need to do anything. Normally you’d have to set up custom reports and mine the data and make it presentable. But custom metrics are a first-class citizen in Google Analytics: the moment you start capturing them, there’s a whole section of the dashboard dedicated to them. There’s no setup, no heavy lifting in GA itself, so it’s really trivial. And if clients are on a real budget, or maybe I want to show them the power of custom monitoring, I don’t want to say “Oh yeah, I promise it’ll be really good, can I just have 24 grand for Speedcurve?” I can start by just saying “Look, this is rudimentary. Let’s see the possibilities here, and now maybe we can convince you to upgrade to something like Speedcurve.”
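
    The general shape of what Harry describes, capturing a custom mark and reporting it as a User Timing, might look like the sketch below. The metric names are hypothetical, and it assumes the classic analytics.js ga() snippet is already on the page; it illustrates the approach rather than reproducing Harry’s actual snippet:

        // Drop a mark as close as possible to the moment the headline renders,
        // e.g. from an inline script placed right after the element in the HTML.
        performance.mark('headline-rendered');

        // Later, once the page has loaded, send it off as a User Timing hit.
        window.addEventListener('load', () => {
          const [mark] = performance.getEntriesByName('headline-rendered');
          if (mark && typeof ga === 'function') {
            // Appears under Behavior → Site Speed → User Timings in the GA UI.
            ga('send', 'timing', 'Custom Metrics', 'Headline Render', Math.round(mark.startTime));
          }
        });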

    Drew: I’ve often found that my gut instinct on how fast something should be, or what impact a change should have, can be wrong. I’ll make a change and think I’m making things faster and then I measure it and actually I’ve made things slower. Is that just me being rubbish at web perf?

    Harry: Not at all. I’ve got a really pertinent example of this. Preload… a real quick intro for anyone who’s not heard of preload: loading certain assets on the web is inherently very slow, and the two primary candidates here are background images in CSS and web fonts, because before you can download a background image, you have to download the HTML, which then downloads the CSS, and then the CSS says “Oh, this div on the homepage needs this background image.” So it’s inherently very slow, because you’ve got that entire chunk of CSS time in between. With preload, you can put one line in the head of the HTML that says “Hey, you don’t know it yet but, trust me, you’ll need this image really, really, really soon.” So you can put a preload in the HTML which preemptively fires off the download. By the time the CSS needs the background image, it’s like “Oh cool, we’ve already got it, that’s fast.” And this is touted as a web perf messiah.

    Harry: Here’s the thing, and I promise you, I tweeted this last week and I’ve been proved right twice since. People hear about preload, and the promise it gives, and it’s also very heavily pushed by Lighthouse; in theory, it makes your site faster. People get so married to the idea of preload that even when I can prove it isn’t working, they will not remove it again. Because “No, but Lighthouse said.” Now, this is one of those things where the theory is sound: if you have to wait for your web font, versus downloading it earlier, you’re going to see stuff faster. The problem is, when you think about how the web actually works, on any page you first hit, any brand new domain you hit, you’ve got a finite amount of bandwidth, and the browser is very smart at spending that bandwidth correctly. It will look through your HTML really quickly and make a shopping list. Most important thing is the CSS, then it’s this jQuery, then it’s this… and the next few things are these, these, and these, at lower priority. As soon as you start loading your HTML with preloads, you’re telling the browser “No, no, no, this isn’t your shopping list anymore, buddy, this is mine. You need to go and get these.” That finite amount of bandwidth is still finite, but it’s now spent across more assets, so everything gets marginally slower. And I’ve had to debunk this twice in the past week, and still people are like “Yeah, but no, it’s because it’s downloading sooner.” No, it’s being requested sooner, but it’s stealing bandwidth from your CSS. You can literally see your web fonts stealing bandwidth from your CSS.

    Harry: So it’s one of those things where you have to, have to, have to follow the numbers. I’ve done it before with a large scale client. If you’re listening to this, you’ve heard of this client, and I was quite insistent that “No, no, your head tags are in the wrong order, because this is how it should be, and you need to have them in this order because theoretically…” Even as I was saying it to the client I knew I was setting myself up for a fall, because of how browsers work, it has to be faster. So I deployed the change… to many millions of people, and it got slower. It got slower. And me sitting there, indignantly insisting “No but, browsers work like this” was useless, because it wasn’t working. And we reverted it, and I was like “Sorry! Still going to invoice you for that!” So it’s not you at all.
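
    For reference, the preload pattern under discussion is a one-line hint in the document head. The file names here are placeholders, and note that every hint competes with the stylesheet below it for the same finite bandwidth, which is exactly the trap described above:

        <head>
          <!-- Tell the browser up front about assets it would otherwise discover late. -->
          <link rel="preload" href="/fonts/brand.woff2" as="font" type="font/woff2" crossorigin>
          <link rel="preload" href="/img/hero-bg.jpg" as="image">
          <!-- Each preload above now contends with this CSS for bandwidth. -->
          <link rel="stylesheet" href="/css/app.css">
        </head>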

    Drew: Follow these numbers.

    Harry: Yeah, exactly. “I actually have to charge you more, because I spent time reverting it, took me longer.” But yeah, you’re absolutely right, it’s not you, it’s one of those things where… I have done it a bunch of times on a much smaller scale, where I’ll be like “Well this theoretically must work” and it doesn’t. You’ve just got to follow what happens in the real world. Which is why that monitoring is really important.

    Drew: As the landscape changes and technology develops, and Google rolls out new technologies that help us make things faster, is there a good way that we can keep up with the changes? Are there any resources we should be looking at to keep our skills up to date when it comes to web perf?

    Harry: To quickly address the whole “Google making…” thing: I know it’s slightly tongue in cheek, but I’m going to focus on this. I guess, right towards the beginning: bet on the browser. Things like AMP, for example, are at best an afterthought of a solution. There’s no replacement for building a fast site, and the moment you start using things like AMP, you’re held to those non-standard standards, at the mercy of the AMP team changing their mind. I had a client spend a fortune licensing a font from an AMP allow-listed font provider, then at some point AMP decided “Oh actually, no, that font provider, we’re going to block-list them now.” So I had a client who had invested heavily in AMP and this font provider and had to choose “Well, do we undo all the AMP work, or do we just waste this very big number a year on the web font?”, blah, blah, blah. So I’d be very wary of anything like that. I’m a Google Developer Expert, but there’s no gagging order, I can be critical, and I would say… avoid things that are hailed as a one-size-fits-all solution, things like AMP.

    Harry: And to dump on someone else for a second, Cloudflare has a thing called Rocket Loader, which is AMP-esque in its endeavor. It’s pitched like “Oh, just turn this on in your CDN, it’ll make your site faster.” And actually it’s just a replacement for building your site properly in the first place. So… to address that aspect of it: try and remain as independent as possible, know how browsers work (which immediately raises the Chrome monoculture question, you’re back in Google’s lap), and stick to some fundamental ideologies. When you’re building a site, look at the page. Whether that’s in Figma, or Sketch, or wherever it is, look at the design and say “Well, that is what a user wants to see first, so I’ll put nothing in the way of that. I won’t lazy-load this main image, because that’s daft, why would I do that?” So just think about what you’d want the user to see first. On an E-Com site, it’s going to be that product image, probably the nav at the same time, but reviews of the product, Q&A of the product: lazy-load that. Tuck that behind JavaScript.
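
    In markup terms, the prioritization described above can be as simple as loading the hero eagerly and deferring below-the-fold imagery with native lazy-loading; the file names are hypothetical:

        <!-- The product image is what the user came for: never lazy-load it. -->
        <img src="/img/product-hero.jpg" alt="Product photo">

        <!-- Reviews and Q&A sit below the fold: defer them. -->
        <img src="/img/review-avatar.jpg" alt="Reviewer avatar" loading="lazy">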

    Harry: There are certain fundamental ways of working that will serve you well no matter what technology you’re reading up on, and that’s “Prioritize what your customer prioritizes.” Doing more work won’t make that any faster, so don’t put things in the way of it. Then, for more tactical things for people to be aware of, to keep abreast of… and again, straight back to Google, but web.dev is proving to be a phenomenal resource for framework-agnostic, stack-agnostic insights. If you want to learn about vitals, if you want to learn about PWAs, web.dev is really great.

    Harry: There are actually very few performance-centric publications. Calibre’s email, I think it’s fortnightly, is just phenomenal, it’s a really good digest. Keep an eye on the web platform in general: there’s the Performance Working Group, they’ve got a load of proposals on GitHub. And again, back to Google, but no one knows about this website and it’s phenomenal: chromestatus.com. It tells you exactly what Chrome’s working on, what the signals are from other browsers, so if you want to see what the work is on priority hints, you can go and get links to all the relevant bug trackers. Chrome Status shows you milestones for each feature: “This is coming out in M88, this was released in 67”, or whatever. That’s a really good thing for quite technical insights.

    Harry: But I keep coming back to this thing, and I know I probably sound like “old man shouts at cloud”, but stick to the basics. Nearly every single pound, or dollar, or euro I’ve ever earned has come from teaching clients “You know the browser does this already, right?” or “You know that this couldn’t possibly be faster?” And that sounds really righteous of me… I’ve never made a cent off of selling extra technology. Every bit of money I make is about removing, subtracting. If you find yourself adding things to make your site faster, you’re heading in the wrong direction.

    Harry: Case in point, I’m not going to name… the big advertising/search engine/browser company at all, not going to name them, and I’m not going to name the JavaScript framework, but I’m currently in discussions with a very, very big, very popular JavaScript framework about removing something that’s actively harming, or optionally removing something that would harm the performance of a massive number of websites. And they were like “Oh, we’re going to loop in…” someone from this big company, because they did some research… and it’s like “We need an option to remove this thing because you can see here, and here, and here it’s making this site slower.” And their solution was to add more, like “Oh but if you do this as well, then you can sidestep that” and it’s like “No, no, adding more to make a site faster must be the wrong solution. Surely you can see that you’re heading in the wrong direction if it takes more code to end up with a faster site.”

    Harry: Because it was fast to start with, and everything you add is what makes it slower. And the idea of adding more to make it faster, although it might manifest itself in a faster website, is the wrong way to go about it. It’s a race to the bottom. Sorry, I’m getting really het up, you can tell I’ve not ranted for a while. So that’s the other thing: if you find yourself adding features to make a site faster, you’re probably heading in the wrong direction. It’s far more effective to make a site faster by removing things than it is by adding them.

    Drew: You’ve put together a video course called “Everything I Have Done to Make CSS Wizardry Fast”.

    Harry: Yeah!

    Drew: It’s a bit different from traditional online video courses, isn’t it?

    Harry: It is. I’ll be honest, it’s partly… I don’t want to say laziness on my part, but I didn’t want to design a curriculum which had to be very rigid and take you from zero to hero, because the time involved in doing that is enormous, and time I didn’t know if I would have. So what I wanted to do was have ready-to-go material, just screencast myself talking through it. It doesn’t start off with “Here is a browser and here’s how it works”, so you do need to be at least aware of web perf fundamentals, but it’s hacks and pro-tips and real life examples.

    Harry: And because I didn’t need to do a full curriculum, I was able to slam the price way down. So it’s not a big 10-hour course that will take you from zero to hero, it’s nip in and out as you see fit. It’s basically just looking at my site, which is an excellent playground for things that are unstable, or… it’s very low risk for me to experiment there. So I’ve just done a video series. It was a ton of fun to record. Just tearing down my own site and talking about “Well, this is how this works, and here’s how you could use it.”

    Drew: I think it’s really great how it’s split up into solving different problems. If I want to find out more about optimizing images or whatever, I can think “Right, what does my mate Harry have to say about this?”, dip in to the video about images and off I go. It’s really accessible in that way, you don’t have to sit through hours and hours of stuff, you can just go to the bit you want and learn what you need to learn and then get out.

    Harry: I think I tried to keep it more… The benefit of not doing a rigid curriculum is you don’t need to watch a certain video first. There’s no intro, it’s just “Go and look around and see what you find interesting”, which means that someone suffering with LCP issues can go “Oh well, I’ve got to dive into this folder here”, or if they’re suffering with CSS problems they can dive into that folder. Obviously I have no stats, but I imagine there’s a high abandonment rate on courses, purely because you have to trudge through three hours of intro in case you miss something, and it’s like “Oh, do you know what, I can’t keep doing this every day”, and people just abandon a lot of courses. So my thinking was: just dive in, you don’t need to have seen the preceding three hours, you can just go and find whatever you want. And the feedback’s been really, really good. In fact, what I’ll do is, it doesn’t exist yet, but I’ll set it up straight after the call: anybody who uses the discount code SMASHING15 will get 15% off of it.

    Drew: So it’s almost like you’ve performance optimized the course itself, because you can just go straight to the bit you want and you don’t have to do all the negotiation and-

    Harry: Yeah, unintentional but I’ll take credit for that.

    Drew: So, I’ve been learning all about web performance, what have you been learning about lately, Harry?

    Harry: Technical stuff… not really. I’ve got a lot on my “to learn” list, so QUIC, H3 sort of stuff I would like to get a bit more working knowledge of. But I wrote an e-book during the first lockdown in the UK, so I learned how to make e-books, which was a ton of fun because they’re just HTML and CSS, and I know my way around that, so that was a ton of fun. I learnt very rudimentary video editing for the course, and what I liked about those things is that none of it is conceptual work. Obviously, learning a programming language, you’ve got to wrestle with concepts, whereas learning to make an e-book was just workflows and… stuff I’ve never tinkered with before, so it was interesting to learn, but it didn’t require a change of career, so that was quite nice.

    Harry: And then, non technical stuff… I ride a lot of bikes, I fall off a lot of bikes… and because I’ve not traveled at all since last March, nearly a year now, I’ve been doing a lot more cycling and focusing a lot more on… improving that. So I’ve been doing a load of research around power outputs and functional threshold powers, I’m doing a training program at the moment, so constantly, constantly exhausted legs but I’m learning a lot about physiology around cycling. I don’t know why because I’ve got no plans of doing anything with it other than keep riding. It’s been really fascinating. I feel like I’ve been very fortunate during lockdowns, plural, but I’ve managed to stay active. A lot of people will miss out on simple things like a daily commute to the office, a good chance to stretch legs. In the UK, as you’ll know, cycling has been very much championed, so I’ve been tinkering a lot more with learning more about riding bikes from a more physiological aspect which means… don’t know, just being a nerd about something else for a change.

    Drew: Is there perhaps not all that much difference between performance optimization on the web and performance optimization in cycling, it’s all marginal gains, right?

    Harry: Yeah, exactly. And the amount of graphs I’ve been looking at on the bike… I’ve got power data from the bike, so I’ll go out on a ride and come back like “Oh, if I had five more watts here but then saved 10 watts there, I could do this, this, and this the fastest ever”, and… I’ve been a massive anorak about it. But yeah, you’re right. Do you know what, I think you’ve hit upon something really interesting there. I think that kind of thing is a good sport/pastime for somebody who is a bit obsessive, who does like chasing numbers. There are things on, I mean you’ll know this but, Strava, you’ve got your KOMs. I bagged 19 of them last year, which is, for me, a phenomenal amount. And it’s nearly all from obsessing over the available data and looking at “This guy that I’m trying to beat, he was doing 700 watts at this point, so if I could get up to 1,000 and then tail off”, and blah, blah, blah… it’s being obsessive. Nerdy. But you’re right, I guess it’s a similar kind of thing, isn’t it? Learning where you can afford to tweak things, where you can squeeze the last little drops out…

    Drew: And you’ve still got limited bandwidth in both cases. You’ve got limited energy and you’ve got limited network connection.

    Harry: Exactly, you can’t just magic some more bandwidth there.

    Drew: If you, the listener, would like to hear more from Harry, you can find him on Twitter, where he’s @csswizardry, or go to his website at csswizardry.com, where you’ll find some fascinating case studies of his work and find out how to hire him to help solve your performance problems. Harry’s e-book that he mentioned, and his video course, we’ll link up from the show notes. Thanks for joining us today, Harry. Do you have any parting words?

    Harry: I’m not one for soundbites and motivation quotes but I heard something really, really, really insightful recently. Everyone keeps saying “Oh well we’re all in the same boat” and we’re not. We’re all in the same storm and some people have got better boats than others. Some people are in little dinghies, some people have got mega yachts. Oh, is that a bit dreary to end on… don’t worry about Corona, you’ll be dead soon anyway!

    Drew: Keep hold of your oars and you’ll be all right.

    Harry: Yeah. I was on a call last night with some web colleagues and we were talking about this and missing each other a lot. The web is, by default, remote, that’s the whole point of the web. But… missing a lot of human connection so, chatting to you for this hour and a bit now has been wonderful, it’s been really nice. I don’t know what my parting words really are meant to be, I should have prepared something, but I just hope everyone’s well, hope everyone’s making what they can out of lockdown and people are keeping busy.


    Should The Web Expose Hardware Capabilities? — Smashing Magazine

    01/05/2021

    About The Author

    Noam Rosenthal is an independent web platform consultant, a WebKit reviewer, and a contributor to Chromium and to several web standards. Recently Noam has …
    More about
    Noam

    I have recently been interested in the differences of opinion between the browser vendors about the future of the web — specifically in the various efforts to push web platform capabilities closer to native platforms, such as Chromium’s Project Fugu.

    The main positions can be summarized as:

    • Google (together with partners like Intel, Microsoft and Samsung) is aggressively pushing forward and innovating with a plethora of new APIs like the ones in Fugu, and ships them in Chromium;
    • Apple is pushing back with a more conservative approach, marking many of the new APIs as raising security & privacy concerns;
    • This (together with Apple’s restrictions on browser choice in iOS) has created a stance labeling Safari as the new IE, claiming that Apple is slowing down the progress of the web;
    • Mozilla seems closer to Apple than to Google on this.

    My intention in this article is to look at claims identified with Google, specifically ones in the Platform Adjacency Theory by Project Fugu leader Alex Russell, look at the evidence presented in those claims, and perhaps reach my own conclusion.

    Specifically, I intend to dive into WebUSB (a particularly controversial API from Project Fugu), check whether the security claims against it have merit, and try to see if an alternative emerges.

    The Platform Adjacency Theory

    The aforementioned theory makes the following claims:

    • Software is moving to the web because it is a better version of computing;
    • The web is a meta-platform — a platform abstracted from its operating system;
    • The success of a meta-platform is based on it accomplishing the things we expect most computers to do;
    • Declining to add adjacent capabilities to the web meta-platform on security grounds, while ignoring the same security issues in native platforms, will eventually make the web less and less relevant;
    • Apple & Mozilla are doing exactly that — declining to add adjacent computing capabilities to the web, thus “casting the web in amber”.

    I relate to the author’s passion for keeping the open web relevant, and to the concern that going too slow with enhancing the web with new features will make it irrelevant. This is augmented by my dislike of app stores and other walled gardens. But as a user I can relate to the opposite perspective — I get dizzy sometimes when I don’t know what the websites I’m browsing are capable or not capable of doing, and I find platform restrictions and auditing to be comforting.

    Meta-Platforms

    To understand the term “meta-platform”, I looked at what the theory uses that name for — Java and Flash, both products of the turn of the millennium.

    I find it confusing to compare either Java or Flash to the web. Both Java and Flash, as mentioned in the theory, were widely distributed at the time through browser plug-ins, making them more of an alternative runtime riding on top of the browser platform. Today, Java is used mainly on the server and as part of the Android platform, and the two don’t share much in common except the language.

    Today server-side Java is perhaps a meta-platform, and node.js is also a good example of a server-side meta-platform. It’s a set of APIs, a cross-platform runtime, and a package ecosystem. Indeed node.js is always adding more capabilities, previously only possible as part of a platform.

    On the client side, Qt, a C++-based cross-platform framework, does not come with a separate runtime, it’s merely a (good!) cross-platform library for UI development.

    The same applies for Rust — it’s a language and a package manager, but does not depend on pre-installed runtimes.

    The other ways to develop client-side applications are mainly platform-specific, but also include some cross-platform mobile solutions like Flutter and Xamarin.

    Capabilities vs. Time

    The main graph in the theory shows the relevance of meta-platforms on a 2D axis of capabilities vs. time:

    The Relevance Gap
    Image credit: Alex Russell

    I can see how the above graph makes sense when talking about cross-platform development frameworks mentioned above like Qt, Xamarin, Flutter and Rust, and also to server platforms like node.js and Java/Scala.

    But all of the above have a key difference from the web.

    The 3rd Dimension

    The meta-platforms mentioned earlier are indeed competing against their host OSes in the race for capabilities, but unlike the web, they are not opinionated about trust and distribution — the 3rd dimension, that in my opinion is missing in the above graph.

    Qt and Rust are good ways to create apps that are distributed via WebAssembly, downloaded and installed directly on the host OS, or administered through package managers like Cargo or Linux distributions like Ubuntu. React Native, Flutter and Xamarin are all decent ways to create apps that are distributed via app stores. node.js and Java services are usually distributed via a docker container, a virtual machine, or some other server mechanism.

    Users are mostly unaware of what was used to develop their content, but are aware to some degree of how it is distributed. Users don’t know what Xamarin and node.js are, and if their Swift App was replaced one day by a Flutter App, most users wouldn’t and ideally shouldn’t care about it.

    But users do know the web — they know that when they’re “browsing” in Chrome or Firefox, they are “online” and can access content they don’t necessarily trust. They know that downloading software and installing it is a possible hazard, and might be blocked by their IT administrator. In fact, it’s important for the web platform that users know that they’re currently “browsing the web”. That’s why, for example, switching to full-screen mode shows a clear prompt to the user, with instructions of how to get back from it.

    The web has become successful because it’s not transparent, but rather clearly separated from its host OS. If I couldn’t trust my browser to keep random websites away from reading files on my hard drive, I probably wouldn’t go to any website.

    Users also know that their computer software is “Windows” or “Mac”, whether their phones are Android or iOS-based, and whether they’re currently using an app (when on iOS or Android, and on Mac OS to some degree). The OS and the distribution model are generally known to the user — the user trusts their OS and the web to do different things, and to different degrees of trust.

    So, the web cannot be compared to cross-platform development frameworks, without taking its unique distribution model into account.

    On the other hand, web technologies are also used for cross-platform development, with frameworks like Electron and Cordova. But those are not exactly “the web”. When compared to Java or node.js, the term “the web” needs to be substituted with “web technologies”. And “web technologies” used in this way don’t necessarily need to be standards-based or work in multiple browsers. The conversation about Fugu APIs is somewhat tangential to Electron and Cordova.

    Native Apps

    When adding capabilities to the web platform, the 3rd dimension — the trust and distribution model — cannot be ignored, or taken lightly. When the author claims that “Apple and Mozilla posturing about risks from new capabilities is belied by accepted extant native platform risks”, he is putting the web and native platforms in the same dimension in regards to trust.

    Granted, native apps have their own security issues and challenges. But I don’t see how that’s an argument in favor of more web capabilities, like here. This is a fallacy — the conclusion should be fixing security issues with native apps, not relaxing security for web apps because they’re in a relevance catch-up game with OS capabilities.

    Native and web cannot be compared in terms of capabilities, without taking the 3rd dimension of trust and distribution model into account.

    App Store Limitations

    One of the criticisms about native apps in the theory is about lack of browser engine choice on iOS. This is a common thread of criticism against Apple, but there is more than one perspective to this.

    The criticism is specifically about Item 2.5.6 of Apple’s app store review guidelines:

    “Apps that browse the web must use the appropriate WebKit framework and WebKit JavaScript.”

    This might seem anti-competitive, and I do have my own reservations about how restrictive iOS is. But item 2.5.6 cannot be read without the context of the rest of the app store review guidelines, for example item 2.3.12:

    “Apps must clearly describe new features and product changes in their ‘What’s New’ text.”

    If an app could receive device access permissions, and then included its own framework that could execute code from any web site out there, those items in the app store review guidelines would become meaningless. Unlike apps, web sites don’t have to describe their features and product changes with every revision.

    This becomes an even bigger problem when browsers ship experimental features, like the ones in project Fugu, which are not yet considered a standard. Who defines what a browser is? By allowing apps to ship any web framework, the app store would essentially allow the “app” to run any unaudited code, or change the product completely, circumventing the store’s review process.

    As a user of both web sites and apps, I think both of them have space in the computing world, although I hope as much as possible could move to the web. But when considering the current state of web standards, and how the dimension of trust and sandboxing around things like Bluetooth and USB is far from being solved, I don’t see how allowing apps to freely execute content from the web would be beneficial for users.

    The Pursuit Of Appiness

    In another related blog post, the same author addresses some of this, when speaking about native apps:

    “Being ‘an app’ is merely meeting a set of arbitrary and changeable OS conventions.”

    I agree with the notion that the definition of “app” is arbitrary, and that its definition relies on whoever defines the app store policies. But today, the same is true for browsers. The claim from the post that web applications are safe by default is also somewhat arbitrary. Who draws the line in the sand of “what is a browser”? Is the Facebook app with a built-in browser “a browser”?

    The definition of an app is arbitrary, but also important. The fact that every revision of an application using low-level capabilities is audited by someone that I might trust, even if that someone is arbitrary, makes apps what they are. If that someone is the manufacturer of the hardware I’ve paid for, it makes it even less arbitrary — the company that I’ve bought my computer from is the one auditing software that gets low-level access to that computer.

    Everything Can Be A Browser

    Without drawing a line of “what’s a browser”, which is what the Apple app store essentially does, every app could ship its own web engine, lure the user to browse to any website using its in-app browser, and add whatever tracking code it wants, collapsing the 3rd dimension difference between apps and websites.

    When I use an app on iOS, I know my actions are currently exposed to two players: Apple & the identified app manufacturer. When I use a website on Safari or in a Safari WebView, my actions are exposed to Apple & to the owner of the top-level domain of the web site I’m currently viewing. When I use an in-app browser with an unidentified engine, I am exposed to Apple, the manufacturer of the app, and to the owner of the top-level domain. This can create avoidable same-origin violations, such as the owner of the app tracking all of my clicks on foreign websites.

    I agree that perhaps the line in the sand of “Only WebKit” is too harsh. What would be an alternative definition of a browser that wouldn’t create a backdoor for tracking user browsing?

    Other Criticism About Apple

    The theory claims that Apple’s declining to implement features is not limited to privacy/security concerns. It includes a link, which does indeed show a lot of features that are implemented in Chrome and not in Safari. However, when scrolling down, it also lists a sizable number of features that are implemented in Safari and not in Chrome.

    Those two browser projects have different priorities, but it’s far from the categorical statement “The game becomes clear when zooming out” and from the harsh criticism about Apple trying to cast the web in amber.

    Also, the links titled it’s hard and we don’t want to try lead to Apple’s statements that they would implement features if security/privacy concerns were met. I feel that putting these links with those titles is misleading.

    I would agree with a more balanced statement, that Google is a lot more bullish than Apple about implementing features and advancing the web.

    Permission Prompt

    Google goes to great, innovative lengths in the 3rd dimension, developing new ways to broker trust between the user, the developer and the platform, sometimes with great success, as in the case of Trusted Web Activities.

    But still, most of the work in the 3rd dimension as it relates to device APIs is focused around permission prompts and making them scarier, or around things like time-boxed permission grants and block-listed domains.

    “Scary” prompts, like the ones in this example we see from time to time, look like they are meant to discourage people from going to pages that seem potentially malicious. Because they’re so blatant, those warnings encourage developers to move to safer APIs and to renew their certificates.

    I wish that for device-access capabilities we could come up with prompts that encourage engagement and ensure that the engagement is safe, rather than discourage it and transfer the liability to the user, with no remediation available for the web developer. More on that later.

    I do agree with the argument that Mozilla & Apple should at least try to innovate in that space rather than “decline to implement”. But maybe they are? I think isLoggedIn from Apple, for example, is an interesting and relevant proposal in the 3rd dimension that future device APIs could build upon — for example, device APIs that are fingerprinting-prone can be made available when the current website already knows the identity of the user.

    WebUSB

    In the next section I will dive into WebUSB, check what it allows, and how it’s handled in the 3rd dimension — what is the trust and distribution model? Is it sufficient? What are the alternatives?

    The Premise

    The WebUSB API allows full access to the USB protocol for device-classes that are not block-listed.

    It can achieve powerful things like connecting to an Arduino board or debugging an Android phone.
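
    To make the premise concrete, this is roughly what the WebUSB flow looks like from a web page. The vendor ID, configuration, interface and endpoint numbers below are illustrative and entirely device-specific:

        // Must be called from a user gesture; triggers the device chooser prompt.
        async function connectToBoard() {
          const device = await navigator.usb.requestDevice({
            filters: [{ vendorId: 0x2341 }], // 0x2341 is Arduino's vendor ID
          });
          await device.open();
          await device.selectConfiguration(1); // configuration, interface and endpoint
          await device.claimInterface(2);      // numbers vary from device to device
          // From here, the page speaks the device's raw protocol directly.
          await device.transferOut(4, new TextEncoder().encode('hello'));
        }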

    It’s exciting to see Suz Hinton’s videos on how this API can help achieve things that were very expensive to achieve before.

    I truly wish platforms found ways to be more open and allow quick iterations on educational hardware/software projects, as an example.

    Funny Feeling

    But still, I get a funny feeling when I look at what WebUSB enables, and the existing security issues with USB in general.

    USB feels too powerful as a protocol exposed to the web, even with permission prompts.

    So I’ve researched further.

    Mozilla’s Official View

    I started by reading what David Baron had to say about why Mozilla ended up rejecting WebUSB, in Mozilla’s official standards position:

    “Because many USB devices are not designed to handle potentially-malicious interactions over the USB protocols and because those devices can have significant effects on the computer they’re connected to, we believe that the security risks of exposing USB devices to the Web are too broad to risk exposing users to them or to explain properly to end users to obtain meaningful informed consent.”

    The Current Permission Prompt

    This is what Chrome’s WebUSB permission prompt looks like at the time of publishing this post:

    Permission Prompt
    Permission Prompt. (Large preview)

    A particular domain, Foo, wants to connect to a particular device, Bar. To do what? And how can I know for sure?

    When granting access to the printer, camera, microphone, GPS, or even to a few of the more contained WebBluetooth GATT profiles like heart rate monitoring, this question is relatively clear, and focuses on the content or action rather than on the device. There is a clear understanding of what information I want from the peripheral or what action I want to perform with it, and the user-agent mediates and makes sure that this particular action is handled.

    USB Is Generic

    Unlike the devices mentioned above that are exposed via special APIs, USB is not content-specific. As mentioned in the intro of the spec, WebUSB goes further and is intentionally designed for unknown or not-yet-invented types of devices, not for well-known device classes like keyboards or external drives.

    So, unlike the cases of the printer, GPS and camera, I cannot think of a prompt that would inform the user of what granting a page permission to connect to a device with WebUSB would allow in the content realm, without a deep understanding of the particular device and auditing the code that’s accessing it.

    The Yubikey Incident And Mitigation

    A good example from not too long ago is the Yubikey incident, where Chrome’s WebUSB was used to phish data from a USB-powered authentication device.

    Since this is a security issue that is said to be resolved, I was curious to dive into Chrome’s mitigation efforts in Chrome 67, which include blocking a specific set of devices and a specific set of classes.

    Class/Device Block-List

    So Chrome’s actual defense against WebUSB exploits that happened in the wild, in addition to the currently very general permission prompt, was to block specific devices and device classes.

    This may be a straightforward solution for a new technology or experiment, but will become harder and harder to accomplish when (and if) WebUSB becomes more popular.

    I’m afraid that the people innovating on educational devices via WebUSB might reach a difficult situation. By the time they’re done prototyping, they could be facing a set of ever-changing non-standard block lists, that only update together with browser versions, based on security issues that have nothing to do with them.

    I think that standardizing this API without addressing this will end up being counterproductive to the developers relying on it. For example, someone could spend cycles developing a WebUSB application for motion detectors, only to find out later that motion detectors become a blocked class, either due to security reasons or because the OS decides to handle them, causing their entire WebUSB effort to go to waste.

    Security vs. Features

    The platform adjacency theory, in some ways, considers capabilities and security to be a zero-sum game, and that being too conservative on security & privacy concerns would cause platforms to lose their relevance.

    Let’s take Arduino as an example. Arduino communication is possible with WebUSB and is a major use case. Someone developing an Arduino device will now have to consider a new threat scenario, where a site tries to access their device using WebUSB (with some user permission). As per the spec, this device manufacturer now has to “design their devices to only accept signed firmware”. This can add burden to firmware developers, and increase development costs, while the whole purpose of the spec is to do the opposite.

    What Makes WebUSB Different From Other Peripherals

    In browsers, there is a clear distinction between user interactions and synthetic interactions (interactions instantiated by the web page).

For example, a web page can’t decide on its own to click a link or wake up the CPU/display. But external devices can: a mouse can click a link on behalf of the user, and almost any USB device can wake up the CPU, depending on the OS.

So even under the current WebUSB specification, a device can implement several interfaces (e.g. debug via ADB plus HID for pointer input), and malicious code that takes advantage of ADB could, given an exploitable firmware-flashing mechanism, turn the device into a keylogger that browses websites on behalf of the user.

    Adding that device to a blocklist would be too late for devices with firmware that was compromised using ADB or other allowed forms of flashing, and would make device manufacturers even more reliant than before on browser versions for security fixes associated with their devices.

    The problem with informed consent and USB, as mentioned before, is that USB (specifically in the extra-generic WebUSB use-cases) is not content-specific. Users know what a printer is, what a camera is, but “USB” for most users is merely a cable (or a socket) — a means to an end — very few users know that USB is a protocol and what enabling it between websites and devices means.

    One suggestion was to have a “scary” prompt, something along the lines of “Allow this web page to take over the device” (which is an improvement over the seemingly harmless “wants to connect”).

But as scary as prompts get, they cannot explain the breadth of possible things that can be done with raw access to a USB peripheral the browser doesn’t know intimately. And if they could, no user in their right mind would click “Yes”, unless it’s a device they fully trust to be bug-free and a website they truly trust to be up-to-date and not malicious.

A possible prompt like that would read “Allow this web page to potentially take over your computer”. I don’t think a prompt that scary would benefit the WebUSB community, and constant changes to these dialogs would leave users confused.

    Prototyping vs. Product

I can see a possible exception to this. If the premise of WebUSB and the other Project Fugu APIs was to support prototyping rather than product-grade devices, all-encompassing generic prompts could make sense.

    In order to make that viable, though, I think the following must happen:

1. Use language in the specs that sets expectations about this being for prototyping;
    2. Have these APIs available only after some opt-in gesture, like having the user enable them manually in the browser settings;
    3. Have “scary” permission prompts, like the ones for invalid SSL certificates.

    Not having the above makes me think that these APIs are for real products rather than for prototypes, and as such, the feedback holds.

    An Alternative Proposal

    One of the parts in the original blog post that I agree with is that it’s not enough to say “no” — major players in the web world who decline certain APIs for being harmful should also play offense and propose ways in which these capabilities that matter to users and developers can be safely exposed. I don’t represent any major player, but I’m going to give it a humble go.

    I believe that the answer to this lies in the 3rd dimension of trust and relationship, and that it’s outside the box of permission prompts and block-lists.

    Straightforward And Verified Prompt

    The main case I’m going to make is that the prompt should be about the content or action, and not about the peripheral, and that informed consent can be granted for a specific straightforward action with a specific set of verified parameters, not for a general action like “taking over” or “connecting to” a device.

    The 3D Printer Example

In the WebUSB spec, 3D printers are given as an example, so I’m going to use one here.

When developing a WebUSB application for a 3D printer, I want the browser/OS prompt to ask me something along the lines of “Allow Autodesk 3ds Max to print a model to your CreatBot 3D printer?”, to show a browser/OS dialog with some print parameters, like refinement, thickness, and output dimensions, and to include a preview of what’s going to be printed. All of these parameters should be verified by a trusted user agent, not by a drive-by web page.

    Currently, the browser doesn’t know the printer, and it can verify only some of the claims in the prompt:

    • The requesting domain has a certificate registered to Autodesk, so there is some certainty that this is Autodesk, Inc.;
    • The requested peripheral calls itself a “CreatBot 3D printer”;
    • This device, device class and domain are not found in the browser’s block-lists;
    • The user responded “Yes” or “No” to a general question they were asked.

    But in order to show a truthful prompt and dialog with the above details, the browser would also have to verify the following:

    • When permission is granted, the action performed will be printing a 3D model, and nothing but that;
    • The selected parameters (refinement/thickness/dimensions etc.) are going to be respected;
    • A verified preview of what is going to be printed was shown to the user;
    • In certain sensitive cases, an additional verification that this is in fact Autodesk, maybe with something like a revocable short-lived token.

    Without verifying the above, a website that was granted permission to “connect to” or “take over” a 3D printer can start printing huge 3D models due to a bug (or malicious code in one of its dependencies).

    Also, an imagined full-blown web 3D printing capability would do a lot more than what WebUSB can provide — for example, spooling and queuing different print requests. How would that be handled if the browser window is closed? I haven’t researched all the possible WebUSB peripheral use-cases, but I’m guessing that when looking at them from a content/action perspective, most will need more than USB access.

    Because of the above, using WebUSB for 3D printing will probably be hacky and short-lived, and developers relying on it will have to provide a “real” driver for their printer at some point. For example, if OS vendors decide to add built-in support for 3D printers, all sites using that printer with WebUSB would stop working.

    Proposal: Driver Auditing Authority

So, overarching permissions like “take over the peripheral” are problematic; we don’t have enough information to show a full-fledged parameter dialog and to verify that its results will be respected; and we don’t want to send the user on an unsafe trip to download a random executable from the web.

    But what if there was an audited piece of code, a driver, that used the WebUSB API internally and did the following:

    • Implemented the “print” command;
    • Displayed an out-of-page print dialog;
    • Connected to a particular set of USB devices;
    • Performed some of its actions when the page is in the background (e.g. in a service worker), or even when the browser is closed.

Auditing a driver like this can make sure that what it does amounts to “printing”, that it respects the parameters, and that it shows the print preview.

    I see this as being similar to certificate authorities, an important piece in the web ecosystem that is somewhat disconnected from the browser vendors.

    Driver Syndication

    The drivers don’t have to be audited by Google/Apple, though the browser/OS vendor can choose to audit drivers on its own. It can work like SSL certificate authorities — the issuer is a highly trusted organization; for example, the manufacturer of the particular peripheral or an organization that certifies many drivers, or a platform like Arduino. (I imagine organizations popping up similar to Let’s Encrypt.)

    It might be enough to say to users: “Arduino trusts that this code is going to flash your Uno with this firmware” (with a preview of the firmware).

    Caveats

    This is of course not free of potential problems:

    • The driver itself can be buggy or malicious. But at least it’s audited;
    • It’s less “webby” and generates an additional development burden;
    • It doesn’t exist today, and cannot be solved by internal innovation in browser engines.

    Other Alternatives

    Other alternatives could be to somehow standardize and improve the cross-browser Web Extensions API, and make the existing browser add-on stores like Chrome Web Store into somewhat of a driver auditing authority, mediating between user requests and peripheral access.

    Summary Of Opinion

The bold efforts of the author, Google, and their partners to keep the open web relevant by enhancing its capabilities are inspirational.

    When I get down to the details, I see Apple and Mozilla’s more conservative view of the web, and their defensive approach to new device capabilities, as carrying technical merit. Core issues with informed consent around open-ended hardware capabilities are far from being solved.

    Apple could be more forthcoming in the discussion to find new ways to enable device capabilities, but I believe this comes from a different perspective about computing, a standpoint that was part of Apple’s identity for decades, not from an anti-competitive standpoint.

In order to support things like the somewhat open-ended hardware capabilities in Project Fugu, and specifically WebUSB, the trust model of the web needs to evolve beyond permission prompts and domain/device block-lists, drawing inspiration from trust ecosystems like certificate authorities and package distributions.


    web design

    Weaving Web Accessibility With Usability — Smashing Magazine

    11/30/2020

    About The Author

    Product designer @ Wix • Coder • Running shoes addict
    More about
    Uri

In this article, Uri Paz explains how a site complying with accessibility guidelines may still present usability issues when tested with real users. Find out how weaving accessibility best practices into usability testing can help as many people as possible fully use your site.

    By formally adopting web accessibility standards, you can provide access to people with visual impairments without involving them in the product development lifecycle, but does that mean the end product is usable? In this article, I’ll briefly discuss visual impairments, as well as the connection between web accessibility standards and usability principles. I’ll also share my key takeaways from a usability test I conducted with visually impaired and blind participants.

    What Is Visual Impairment?

    The term visual impairment refers to people who can see but have a decrease in visual acuity or visual field. Visual impairment affects the ability to perform daily activities, such as reading, walking, driving, and social activities — all of which become difficult (and sometimes even impossible). There is a range of visual impairments which vary from mild to severe vision loss in one or both eyes.

    Here are a few examples:

    • Central Scotoma
      Loss of vision in the central visual field.
    Screenshot of an online stationery store with a large, black circle in the center, and the rest of the screen a bit blurred to show the impact of Central scotoma
    Funkify Disability Simulator with “Peripheral Pierre” activated. (Large preview)
    • Tunnel Vision
      Loss of vision in the peripheral visual field.
    Screenshot of an online stationery store with only a small part of the site visible, to show the impact of tunnel vision
    Funkify Disability Simulator with “Tunnel Toby” activated. (Large preview)
    • Hemianopia
      Loss of vision in half the visual field.
    Screenshot of an online stationery store with only half the screen visible, to show the impact of Hemianopia
    NoCoffee Vision Simulator with ”Side (hemianopia)” activated. (Large preview)
    • Blindness
      This term is only used for complete or near-complete loss of vision.

    Warp & Weft

    Weaving is a method of textile production in which the longitudinal warp and transverse weft come together to make a fabric. As in weaving, the creation of a user experience for people with visual impairments is based on the interweaving of two components: accessibility and usability.

    A diagram showing the structure of warp (vertical) and weft (horizontal) yarns in a weave
    (Large preview)

    Warp — Accessibility

Web accessibility means that websites, web applications, and technologies are designed and developed so that people with disabilities can use them. More specifically, so that people can perceive, understand, navigate, interact with, and contribute to the web.

    There is a range of disabilities that can impact how people access the web, including auditory, cognitive, neurological, physical, speech, and visual.

    “The power of the web is in its universality. Access by everyone regardless of disability is an essential aspect”.

    — Tim Berners-Lee, inventor of the World Wide Web

    In order to ensure the universality of the web and provide access to everyone, as Berners-Lee noted, there’s a wide range of web accessibility standards (which come with a myriad of acronyms).

    Let’s focus on these three key components:

    • Web Content Accessibility Guidelines (WCAG)
      Define how content (such as texts, images, forms) should be created so that it will be accessible through the use of sound, mouse-free navigation, compatibility with assistive technologies, and more.
    Screenshot of the web content accessibility guidelines 2.1 documentation with the main sections highlighted, including the principle, guideline and success criterion
    WCAG 2.1 has 13 guidelines that are organized under 4 principles: perceivable, operable, understandable, and robust. For each guideline, there are testable success criteria at three levels: A, AA, and AAA. (Large preview)
    • Authoring Tool Accessibility Guidelines (ATAG)
      Define how authoring tools (such as CMSs and site builders) should both be accessible themselves and help authors produce accessible content.
    • User Agent Accessibility Guidelines (UAAG)
      Define how user agents, such as browsers and media players, should make the web accessible, including in conjunction with assistive technologies.

    Compliance with web accessibility guidelines is technical and requires a high level of expertise. While you can use these guidelines to create a more accessible product, does that mean the product is also easy to use?

    While I tested visually impaired and blind participants on a product that was accessible according to the guidelines, I encountered the following cases:

    • Visually impaired participants were unable to read a large-size font because its weight was too thin.
    • Blind participants were unable to book a reservation at a restaurant because the navigation between dates was too hard to understand.
    • Visually impaired participants were unable to find the checkout because it opened in another area of the screen, outside their visual field.

    In other words, formal adoption of the web accessibility guidelines can certainly lead to compliance, but not necessarily usability. This is also recognized in W3C documentation where there is an explicit reference to the fact that usability must always be taken into account:

    “Yet when designers, developers, and project managers approach accessibility as a checklist to meet these standards, the focus is only on the technical aspects of accessibility. As a result, the human interaction aspect is often lost, and accessibility is not achieved.”

    I particularly like Bruce Lawson’s pictorial description in the introduction of the book Web Accessibility: Web Standards and Regulatory Compliance:

    “I wouldn’t want you to think that making your sites accessible is just a matter of following a recipe; to make nourishing accessibility pudding, add one part CSS, one part valid code, a pinch of semantic markup, and a cupful of WCAG guidelines. It would be nice if I could guarantee that slavishly following such a recipe would make everything lovely… but the annoying fact is that people are people, and insist on having different needs and abilities.”

    Compliance with accessibility standards is a necessary goal (and often required by law), but it can’t exist in a vacuum.

    Weft — Usability

Usability is a measure of how well a specified user in a particular environment can use a user interface to achieve a defined goal.

Usability is not an exact science that consists of formulas or black and white answers. Over the years, various usability models have been proposed for measuring the usability of software systems. One of these models was created by Jakob Nielsen, who proposed in his 1993 book Usability Engineering that usability is not a single, one-dimensional property of a user interface, but consists of five core attributes:

    1. Learnability
      How easy is it for the users to accomplish basic tasks during the first time they encounter the design?
    2. Efficiency
      How fast can users perform tasks and be productive after learning the design?
    3. Memorability
      How quickly can returning users reestablish proficiency after a period of not using the design, without having to relearn everything?
    4. Errors
      How many errors do users make, how serious are these errors, and how easily can they recover from the errors?
    5. Satisfaction
      How pleasant and satisfying is the design to use?

    To ensure a product is usable, it’s essential that these five cornerstones are dominant in the design and development process.

    What I Learned From Conducting A Usability Test With Visually Impaired And Blind Participants

A usability test is a structured interview in which participants who match a target audience perform a series of tasks. While the participants work, they verbally describe their reactions as they interact with the product. This allows the observers to understand not only what the participants are doing in the interface, but why they’re doing it.

When I conducted my first usability test with visually impaired and blind participants on a product that complies with the accessibility standards, I wasn’t able to find much information about conducting these types of sessions. So, I thought I’d share some highlights from the process. These are divided into three parts:

    1. Before The Session
    2. During The Session
    3. After The Session
    A visually impaired participant examines a magnified user-interface while a moderator watches from the side
    We had 5 sessions: 2 with visually impaired participants, and 3 with blind participants. (Large preview)

    1. Before The Session

    Defining The Test Goal

This is the starting point for a usability test. The test goal should be clear, specific, achievable, and relevant. We defined the goal by collaborating with a multidisciplinary team: designers, product managers, developers, content writers, and QA engineers. Each role brings a different perspective and expertise.

    Creating Tasks

    Since visually impaired and blind participants can take a longer time to complete tasks due to the way they navigate the site, we prioritized the tasks based on what’s most important to us, but this doesn’t mean that complex tasks need to be compromised.

Setting A Schedule

Setting up our schedule for the usability sessions required us to weigh a range of issues, especially given the complexity of our product and the physical limitations of the participants. This included:

    • Time to accompany the participant when entering and exiting the lab (we assigned a staff member to accompany each of the participants).
    • Time to configure and arrange assistive technology settings for each of the participants, depending on their abilities and if they brought their own equipment.
    • Time for the participants to comfortably navigate the interface.
    • Time to debrief with the staff after each session.

We set one hour for each session and 45 minutes between sessions, which was stressful and forced us to rush (it is better to take a full hour between sessions).

    Recruiting Participants

    The selection of participants whose background and abilities represent the target audience is a crucial component in the testing process. In our case, we were looking for visually impaired and blind candidates who have experience purchasing products online.

    Sources for finding participants can vary, such as information and technology learning centers for people with visual impairments in hospitals, colleges, and universities.

    In our case, my wife, an ophthalmologist by profession, referred me to the operator of the Information Center for the Visually Impaired and Blind at the hospital where she works. To my delight, I encountered someone who was happy to help and referred me to a group of relevant candidates.

    In order to prepare the candidates, we discussed the following:

    • The nature of the test, including that there would be people watching them and that the session would be recorded.
    • Their online shopping experience. Do they primarily purchase on a computer or on mobile? What is their favorite browser? What assistive technologies do they use? Additionally, when the test is done in a non-English-speaking country, ask about their level of proficiency with an English interface.
    • That each participant will receive an incentive (it’s important to make sure the incentive is also accessible).
    • If the candidates could bring their equipment with them.

Overall, responsiveness was high, and most candidates expressed a desire to attend.

    Setting Up The Test Position

The candidates who confirmed their participation had different ways of interacting with the web. Some consume information by customizing settings for fonts, color contrast, or screen magnification, or by listening to a screen reader, while others need a combination of several of these.

    Since most participants were not interested in bringing equipment with them (mainly due to difficulties carrying it or having a desktop computer), we had to take care of it ourselves. Once we found a staff member who understood how to configure the assistive technology, it didn’t take long to set up or adjust between sessions.

We set up various browsers and assistive technologies, including NVDA, JAWS, and ZoomText.

    Additionally, the camera and microphone should be adjusted to the needs of visually impaired participants, who need to get closer to the screen and view it at different angles.

    It’s necessary to check before starting that the lab is physically accessible as well. For example, that there are no stairs at the entrance, there’s an accessible toilet, access to public transportation, and a place for a guide dog to sit.

    Sending A Non-Disclosure Agreement (NDA)

    Like any other instance where you want to get informed consent, you can send the NDA online using an accessible PDF.

    Conducting A Dry Run Session

    A week before the usability session, we conducted a dry run with a visually impaired participant in order to avoid unexpected difficulties. For example, we saw that the screen sharing tool we were using conflicted with one of the assistive technologies. Additionally, the dry run helped us get a better feeling for the schedule. For example, the introduction of the moderator was too long, so we weren’t able to check some of the planned tasks. Also, it helped us to refine the test plan in instances where certain tasks weren’t clear, more difficult than expected, or too easy. Just as importantly, the dry run allowed the moderators to train with a “real” participant, and mentally prepare themselves for this type of usability test.

    2. During The Session

    Moderator

The moderator is key to making this type of usability test go smoothly. Jared M. Spool once wrote:

    “The best usability test moderators have a lot in common with an orchestra conductor. They keep the participant comfortable and stress-free. The moderator tries to make the participant forget they are in a foreign environment with a bunch of strangers who intensely watch everything that he/she does. They keep the information flowing to the design team, especially the tough news. And they do all this with organized flair and patience, ensuring every aspect of the user’s experience is explored.”

    Moderating With Multiple Personalities: 3 Roles For Facilitating Usability Tests

In a test with visually impaired and blind participants, the orchestra conductor should be even more sensitive. For example, during sessions where a screen reader was used (its audio makes it harder for observers to concentrate), it is important to ask participants to speak loudly and clearly, so we can understand their process and how they comprehend the tasks.

    Observers

    We invited relevant people from different departments so they would be directly exposed to participants and have a better chance to absorb the key information. After all, getting a report on the results doesn’t provide the same benefits as seeing the participants’ experience firsthand.

During the test, it’s important to pay attention and listen to the participant, even though the screen reader is distracting.

    Three people remotely observing the usability session from a conference room
    The beauty of accessibility is that it spans a wide range of roles. Here you can see a product designer, front-end developer, and analyst observing one of the sessions. In total, we had 12 observers. (Large preview)

    3. After The Session

    Writing A Report

    After the sessions, we wrote a report with our insights from the test:

Some of the insights were related to bugs that we had to fix. For example, blind participants didn’t always find a particular button in NVDA’s Elements List dialog, or sometimes didn’t receive confirmation from the screen reader after clicking the “Like” button.

    Some of the insights were related to the content. For example, some blind participants didn’t notice they were filling out the wrong form or wanted to scan an entire page quickly, but the strings in the aria-labels were too long.

Some of the insights were related to visuals. For example, visually impaired participants who use magnifying software didn’t understand how to proceed when the next action appeared in a different area of the screen. Other times they didn’t notice the modal “close” icon, although its color had high contrast.

    In the end, we found 65 issues that impact multiple departments in the company.

    Additionally, our report included happy moments from the sessions. For example, some participants noted that using an icon next to a link helps them because they don’t have to read the text. Others liked the contrast of the placeholder text, and some mentioned that the image-zoom worked very well.

    “Nothing About Us Without Us”

On July 26, 2020, the world marked the 30th anniversary of the signing of the Americans with Disabilities Act (ADA). This opened doors that had been closed for too long to people with disabilities, enabling basic daily activities like traveling by bus, going to school, attending movies, visiting museums, and more.

    All the events marking this historic signature were canceled or moved online due to the spread of the coronavirus.

    One of the online events was the Virtual Crip Camp, featuring trailblazing speakers from the disability community. In the invitation to this event, there is a green bus with the slogan “Nothing About Us Without Us”:

    A light green bus with the phrase “Nothing About Us Without Us” along the side underlined in red. Red-colored peace symbols are located on the back and front of the bus. A crutch and various black hands raised in fists and love fingers reach out of the windows. A wheelchair ramp is visible with the side door wide open. The text reads “Crip Camp: The Official Virtual Experience” in bold black letters
    The invitation for the Virtual Crip Camp. (Of course, it’s related to the rousing Netflix documentary.) (Large preview)

“Nothing About Us Without Us” conveys the idea that a decision should be made with the direct participation of those most affected by it. The slogan came into use among activists with disabilities during the 1990s and is a connecting point between various disability rights movements around the world. Its widespread use (on social networks, via the hashtag #NothingAboutUsWithoutUs) reflects the desire of people with disabilities to take part in shaping the decisions that affect their personal lives.

User-Centered Design shares the same DNA: its philosophy is that the product should fit the user, rather than making the user adapt to the product. Under the User-Centered Design approach, designers collaborate with users through a variety of techniques applied at different points in the product development lifecycle. Usability testing is one of those techniques.

The real magic of a usability test is not the reporting of data afterwards, but the change in perspective of the team members who watch the participant in real time and absorb what those participants say, think, do, and feel. As a result, they develop empathy and can better understand, reflect, and share the needs and motivations of another person.

In the case of participants with disabilities, this empathy is essential for many reasons: it engages the observers, creates motivation for change, and raises awareness of the experience of people with disabilities.

    While automated tools that offer to make websites accessible can, at best, show us how well our site meets WCAG’s guidelines, they don’t clearly reflect how usable the website is for people with disabilities. In regard to a mechanistic approach to accessibility, my colleague Neil Osman, an accessibility engineer at Wix who is visually impaired, often uses the following expression:

    “You can put lipstick on a pig, but it’s still a pig.”

Making a product usable is not just a matter of relying on a list of accessibility standards. In order to create solutions for people with disabilities, we need to be exposed to them firsthand.

    Disclaimer: The information provided here does not, and is not intended to, constitute legal advice; instead, all information, content, and materials are for general informational purposes only. The information contained herein may not constitute the most up-to-date legal or other information.


    Credits: Jeremy Hoover, Udi Gindi, Bat-El Sebbag, Nir Horesh, Neil Osman, Alon Fridman Waisbard, Shira Fogel and Zivan Krisher contributed to this article.


    web design

    What Can Web Designers Do With Their Unused Designs? — Smashing Magazine

    11/03/2020

    About The Author

    Suzanne Scacca is a former WordPress implementer, trainer and agency manager who now works as a freelance copywriter. She specializes in crafting marketing, web …
    More about
    Suzanne
    Scacca

    Do you have a hard time throwing away mockups, logos and other content you’ve created for clients? The good news is that you don’t have to see rejected or unused designs as a sign of failure or waste. You can actually repurpose them and give them new life on other projects, for other customers, and even within your own business. I’ll explain four ways to do this in this post.

    I was working on redesigning my website the other day and was having a hard time deleting sections and pages I knew were unnecessary and keeping prospects from getting to the important stuff. It’s similar to the struggle I have whenever a client (design or writing) decides not to use something I’ve created for them.

    I’m not a hoarder by any means, but when it comes to things I’ve taken a lot of time and energy to create, it’s so hard to let go.

    But that’s part of what we have to do as creators: Trim the fat.

    When it comes to client work, what do you do with that fat though? It’s not like you’re designing mockups or features that you hastily threw together and that deserve to be rejected or go unused. Sometimes the client just needs to decide which of the options they want to use. And other times it’s because they realize that a certain feature or element isn’t necessary after all. You might even come to that conclusion on their behalf, too.

    When that happens, what do you do with the unused designs? Unless the client wants to take ownership of them for a rainy day, do you just throw them away?

    If you have something really well-designed or even the inklings of something that could be great, you have a number of things you can do with your work.

    What to Do with Your Rejected Designs

    Let’s be clear about one thing before we look at your options:

    Check your contract for the ownership and copyright rules before you do anything with your unused designs. Better yet, write your own contract and make sure the copyright rules are clearly stated and work in both of your favors.

    The last thing you want is to repurpose a rejected design, only to hear from an angry client who believes they have the rights to everything you created during your time working together.

    Once that’s settled, you can pursue one of the following options:

    Option #1: Hold Onto Your Designs and Repurpose for Another Job

    When you have designs that didn’t make the cut, put them into a dedicated folder. Give it a positive name so you don’t get into the habit of viewing this work as a graveyard for “Rejects” or “Unwanted” designs. It’ll be hard to do anything with them later if you view them in a negative light.

    This is where mine live in Google Drive:

    A Google Drive folder called “(Free) to a Good Home” to store unused designs in
    Give your unused designs folder a positive name. (Image source: Google Drive) (Large preview)

    I keep this folder right next to my current client folders. That way, I almost always pass by my unused work every day, which keeps this good content top-of-mind.

    Another thing you can do to turn these unused designs into a positive is to keep your folders organized. Here’s one way to do it:

    A sample folder that shows how to keep unused designs organized in Google Drive with folders related to branding, websites and social media
    An example of how to organize your unused designs in Google Drive. (Image source: Google Drive) (Large preview)

    You could also have high-level categories for “Branding”, “Social” and “Web” (or “Apps”) and put sub-categories beneath them. It’s easier for me to see everything at a glance like the above, but either way works.

    The goal is to have your old designs on hand and well-organized so that when you start a new job and realize, “Oh, I created something before that could work for this”, it’s easy to retrieve.

    While all of your unused work should be redesigned in some way when you repurpose it for another client, the amount of redesigning you do depends on what you created and why it was rejected.

    Here are some examples of unused work you can do a more gentle redesign on (aside from the branded elements, of course):

    • Trial websites, mockups or wireframes that you pitched and weren’t accepted,
    • Web page mockups or components that wildly vary from the design picked by the client.

    For anything custom-created for a client’s brand, give the original design an overhaul and perhaps even mix-and-match a number of elements from your rejects to create something new. Essentially, what you want to do is use the good foundation of your unused work to save yourself time creating something (mostly) from-scratch for someone else.

    Option #2: Templatize and Sell Them on a Marketplace

    Another option is to templatize your unused designs.

    Unlike option #1 where you make a profit by spending less time building a new client’s website with the pieces from another, this one allows you to exponentially increase your profits by selling them over and over again. You just have to make sure your templates have mass appeal.

There are a couple of routes you can take with this.

    Templates or Themes

    If a client falls off the face of the Earth and ghosts you for good, you might have the makings of an entire website or app (if not the whole thing) sitting in your lap.

    In that case, you could turn your designs into a ready-made template or theme and sell the license to numerous users instead of just the one client. You can do this with:

    • Websites,
    • Landing pages,
    • Mobile apps,
    • Web apps.

    You could also strip down your creation and turn it into a UI kit.

Now, because you only have one set of unused designs (hopefully), you won’t be able to sell the template on your own website unless you already have an established shop where you make and sell templates.

    So, you’ll have to use a third-party marketplace to do this.

    Thankfully, there are a number of marketplaces with high volumes of traffic. So, even if they do take a big chunk of your profits, the increased visibility (and your additional marketing efforts) will help you make enough sales where it doesn’t matter.

I’d recommend using one of the following platforms:

    • ThemeForest (part of Envato Market),
    • MOJO Marketplace,
    • Creative Market,
    • Dribbble.

    With Dribbble, you get the added benefit of selling your template alongside the rest of your portfolio (I’ll explain below why it’s not a bad idea to have a portfolio here as well as your site).

    Licensable Components

    Let’s say you’re not stuck with a ghosting client (phew!). Instead, you’ve presented your client with a lot of great options and there are some things they just don’t need. Or they decide to go in a different direction with their business and now you have unused assets.

    In this case, you can turn these custom-made pieces into licensable components. This would work for:

    • Imagery (photos, illustrations, videos, audio, etc.);
    • Icon sets;
    • Fonts;
    • Textures;
    • Backgrounds;
    • Logo(s);
    • Plugins or extensions.

    Again, unless you have dozens of these to sell, your best bet is to license them through a marketplace. With the exception of MOJO Marketplace, you can license your components and graphics through the same list of sites above.

Envato Market (the umbrella marketplace that includes ThemeForest) has a number of categories through which you can license your content: plugins, video, audio, graphics, photos, and 3D files. CodeCanyon is the part of the marketplace where you can sell website extensions and plugins:

    Envato Market pages for Code, Video, Audio, Graphics, Photos and 3D Files
    Envato Market has a variety of categories designers can sell their unused work through. (Image source: CodeCanyon) (Large preview)

    If you have a component you custom designed and coded (like a pop-up, contact form or a reviews widget), this is a good place to sell it.

    Creative Market, on the other hand, is a good place to sell stock photography, graphics and fonts:

    Creative Market page for custom-designed graphics like web elements, illustrations, objects, icons, textures and patterns
    Designers can sell custom-designed graphics through Creative Market. (Image source: Creative Market) (Large preview)

    Dribbble would be a good option to license, well, everything. Though if you created branding that’s now going to go unused, this is definitely the place to use to find a buyer:

    Dribbble “Goods for Sale” page featuring logos
    The “Goods for Sale” page on Dribbble shows logos that are for sale. (Image source: Dribbble) (Large preview)

    One thing to keep in mind is this:

    It’s okay if you don’t license any of your unused components or templates right away. As your online reputation grows, the possibility of them selling will grow. It’ll just require some work on your part to market them and increase their visibility in the marketplaces.

    Option #3: Show Off the Early, Unused Versions of an Approved Design in a Case Study

    How often do you really knock it out of the park on the first try with a design?

    Unless you’ve worked with a client for some time or are super well-versed in their niche and what they do, it can be hard translating their vision into the perfect design in the first round. In some cases, you might even pitch them a few different options to better gauge what they want and get them as close as possible to perfect next time.

    Eventually, you will nail it. But what happens to all those quality designs you created in the lead up to the approved design?

    One of the things I love about the Rejected Design Instagram account is that it doesn’t hide the other designs that were created along the way. You get to see how a designer took a logo from “not crazy about it” to “love it”:

    The Rejected Design Instagram account shows off rejected and accepted logo designs
    A snippet of samples from the Rejected Design Instagram account. (Image source: Rejected Design) (Large preview)

    It’s an honest approach to showing how the design process works. And I think this is something that web designers can use when creating client case studies for their websites.

    So, rather than tell the fairy tale version of the story:

    “I met Client X. I designed this beautiful responsive website that they loved.”

    You share the nitty-gritty details about how you and your client got the happy ending:

    “I met Client X. We carefully reviewed their business plan, mission and values. After a lot of strategic planning, I created two mockups based on our discussions and research…”

    This gives you a chance to show off different versions of the work you did as well as different iterations of the same design.

    Unlike design portfolios that usually only show the winning design, your case study would show your prospects the before, the after and everything that happened in between — which includes showing off designs that didn’t work for the client.

    I think in a time where consumers are demanding more transparency from brands, using rejected designs to build out client case studies could be a really good thing for your business.

    First, you’ll get to show prospects how good of a designer you are that you could take a client’s feedback and turn a rejected or so-so design into a winner. Second, you’ll show them how your design process works, how you deal with negative feedback and how you’re able to use your creativity and skills to pivot when you’ve gone in the wrong direction.

    You won’t be able to monetize this option, not directly anyway. But you can use it to attract more prospects to your business by being more honest about how you work and showing what you’ve done for your clients in the process.

    Option #4: Put Them in Your Design Portfolio

There’s one last option for your unused designs, for when you can’t find a way to monetize them and aren’t able to show them off in a case study (either because your client doesn’t want to put their experience out there or because you’re not confident enough to show off earlier designs).

    That said, you can still benefit from this option. I’d just say it’s more for fledgling designers struggling to bulk up their portfolio with quality work samples.

    There are a couple of places to do this. I guess it depends on your confidence in the designs and what you want to happen with them.

    For example, you can place unused designs into your website portfolio to:

    • Bulk up a slim collection of work for prospects to glance over.
    • Provide inspiration and examples to show off to existing clients.
    • Show off your evolution as a designer over time, though that’ll require you to add more details to each portfolio piece like the year it was done and to explain that it was unused.

    You can also use a site like Dribbble or Behance to share your unused work (on top of your regular portfolio). There’s a lot of work that designers showcase as “unused” on these sites:

    A search for “unused” designs on the Behance website
    A search for “unused” designs on Behance. (Image source: Behance) (Large preview)

    There are a few benefits to this:

    • Obviously, if your website doesn’t have a lot of organic traffic yet, your Behance portfolio can help boost your visibility.
    • By sharing your unused work here, you’ll provide inspiration to other designers or even business owners looking for somewhere to start.
    • If you sell your unused designs here, you’re likely to discover that one person’s trash is another person’s treasure.

    Wherever you end up sharing your unused work, remember to strip out all branded elements from the designs. You can replace them with your own logo, company name, images, etc. or create some dummy ones to fill in the gaps. Then, let your prospects focus on the designs and not the company who turned down the work.

    Wrapping Up

    It can be very hard letting go of a design, feature or piece of content you created. This is one of those times in business, however, where it’s okay to let a deep personal connection to your work drive your decisions.

    In this case, you’ll use your belief in the quality of the work to bring life to it in a different way than was originally intended. And that’s a really great thing. In addition to preserving a great piece of design, you can repurpose it into something that helps you make money in new ways.


    web design

    Managing Long-Running Tasks In A React App With Web Workers — Smashing Magazine

    10/15/2020

    About The Author

    Awesome frontend developer who loves everything coding. I’m a lover of choral music and I’m working to make it more accessible to the world, one upload at a …
    More about
    Chidi

    In this tutorial, we’re going to learn how to use the Web Worker API to manage time-consuming and UI-blocking tasks in a JavaScript app by building a sample web app that leverages Web Workers. Finally, we’ll end the article by transferring everything to a React application.

    Response time is a big deal when it comes to web applications. Users demand instantaneous responses, no matter what your app may be doing. Whether it’s only displaying a person’s name or crunching numbers, web app users demand that your app responds to their command every single time. Sometimes that can be hard to achieve given the single-threaded nature of JavaScript. But in this article, we’ll learn how we can leverage the Web Worker API to deliver a better experience.

    In writing this article, I made the following assumptions:

    1. To be able to follow along, you should have at least some familiarity with JavaScript and the document API;
    2. You should also have a working knowledge of React so that you can successfully start a new React project using Create React App.

    If you need more insights into this topic, I’ve included a number of links in the “Further Resources” section to help you get up to speed.

    First, let’s get started with Web Workers.

    What Is A Web Worker?

To understand Web Workers and the problem they’re meant to solve, it is necessary to grasp how JavaScript code is executed at runtime. During runtime, JavaScript code is executed sequentially and in a turn-by-turn manner. Once a piece of code ends, the next one in line starts running, and so on. In technical terms, we say that JavaScript is single-threaded. This behavior implies that once some piece of code starts running, every piece of code that comes after it must wait for it to finish execution. Every line of code therefore “blocks” the execution of everything that comes after it.

It is desirable, then, that every piece of code finish as quickly as possible. If some piece of code takes too long to finish, our program would appear to have stopped working. In the browser, this manifests as a frozen, unresponsive page. In some extreme cases, the tab will freeze altogether.

Imagine driving on a single-lane road. If any of the drivers ahead of you happens to stop moving for any reason, then you have a traffic jam. In a language like Java, traffic could continue in other lanes; thus, Java is said to be multi-threaded. Web Workers are an attempt to bring multi-threaded behavior to JavaScript.
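You can feel this blocking behavior for yourself by pasting a snippet like the following into the browser console. This is a deliberately wasteful sketch, not code from this article’s app:

console.log('start');
const end = Date.now() + 3000;
// Busy-wait for roughly 3 seconds. While this loop runs, the page cannot
// repaint or respond to clicks, because the single thread is occupied.
while (Date.now() < end) {}
console.log('done'); // nothing else ran in the meantime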

    The screenshot below shows that the Web Worker API is supported by many browsers, so you should feel confident in using it.

    Showing browser support chart for web workers
    Web Workers browser support. (Large preview)

    Web Workers run in background threads without interfering with the UI, and they communicate with the code that created them by way of event handlers.

    An excellent definition of a Web Worker comes from MDN:

“A worker is an object created using a constructor (e.g. Worker()) that runs a named JavaScript file — this file contains the code that will run in the worker thread; workers run in another global context that is different from the current window. Thus, using the window shortcut to get the current global scope (instead of self) within a Worker will return an error.”

    A worker is created using the Worker constructor.

    const worker = new Worker('worker-file.js')

    It is possible to run most code inside a web worker, with some exceptions. For example, you can’t manipulate the DOM from inside a worker. There is no access to the document API.

    Workers and the thread that spawns them send messages to each other using the postMessage() method. Similarly, they respond to messages using the onmessage event handler. It’s important to get this difference. Sending messages is achieved using a method; receiving a message back requires an event handler. The message being received is contained in the data attribute of the event. We will see an example of this in the next section. But let me quickly mention that the sort of worker we’ve been discussing is called a “dedicated worker”. This means that the worker is only accessible to the script that called it. It is also possible to have a worker that is accessible from multiple scripts. These are called shared workers and are created using the SharedWorker constructor, as shown below.

    const sWorker = new SharedWorker('shared-worker-file.js')
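Before we move on, here is a minimal sketch of the postMessage()/onmessage handshake described above. The file name echo-worker.js is made up for this example:

// main.js
const echoWorker = new Worker('echo-worker.js');
echoWorker.postMessage({ value: 21 }); // send data to the worker
echoWorker.onmessage = (event) => {
  console.log('From worker:', event.data); // the reply arrives in event.data
};

// echo-worker.js (runs on its own thread; self is its global scope)
self.onmessage = (event) => {
  self.postMessage(event.data.value * 2); // reply to the main thread
};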

    To learn more about Workers, please see this MDN article. The purpose of this article is to get you started with using Web workers. Let’s get to it by computing the nth Fibonacci number.

    Computing The Nth Fibonacci Number

    Note: For this and the next two sections, I’m using Live Server on VSCode to run the app. You can certainly use something else.

    This is the section you’ve been waiting for. We’ll finally write some code to see Web Workers in action. Well, not so fast. We wouldn’t appreciate the job a Web Worker does unless we run into the sort of problems it solves. In this section, we’re going to see an example problem, and in the following section, we’ll see how a web worker helps us do better.

    Imagine you were building a web app that allowed users to calculate the nth Fibonacci number. In case you’re new to the term ‘Fibonacci number’, you can read more about it here, but in summary, Fibonacci numbers are a sequence of numbers such that each number is the sum of the two preceding numbers.

Mathematically, it is expressed as:

F(n) = F(n-1) + F(n-2), with F(1) = F(2) = 1

Thus the first few numbers of the sequence are:

    1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89 ...

In some sources, the sequence starts at F(0) = 0, in which case the same formula holds for n > 1:

F(n) = F(n-1) + F(n-2), with F(0) = 0 and F(1) = 1

In this article, we’ll start at F(1) = 1. One thing we can see right away from the formula is that the numbers follow a recursive pattern. The task at hand now is to write a recursive function to compute the nth Fibonacci number (FN).

    After a few tries, I believe you can easily come up with the function below.

const fib = n => {
  if (n < 2) {
    return n // base case: fib(0) = 0, fib(1) = 1
  } else {
    return fib(n - 1) + fib(n - 2)
  }
}

The function is simple: if n is less than 2, return n; otherwise, return the sum of the (n-1)th and (n-2)th FNs. With arrow functions and the ternary operator, we can turn it into a one-liner.

    const fib = n => (n < 2 ? n : fib(n-1) + fib(n-2))

This function has a time complexity of O(2^n). This simply means that as the value of n increases, the time required to compute the sum increases exponentially. This makes for a really long-running task that could potentially interfere with our UI for large values of n. Let’s see this in action.

    Note: This is by no means the best way to solve this particular problem. My choice of using this method is for the purpose of this article.
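For comparison only, here is a sketch of how memoization could make the function fast; we deliberately won’t use it, because the slow version is what makes the UI problem visible:

// Memoized sketch (not used in this article): caching computed values
// turns the exponential recursion into a linear-time computation.
const fibMemo = (n, cache = new Map()) => {
  if (n < 2) return n; // same base case as before
  if (cache.has(n)) return cache.get(n);
  const result = fibMemo(n - 1, cache) + fibMemo(n - 2, cache);
  cache.set(n, result);
  return result;
};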

    To start, create a new folder and name it whatever you like. Now inside that folder create a src/ folder. Also, create an index.html file in the root folder. Inside the src/ folder, create a file named index.js.

    Open up index.html and add the following HTML code.

    <!DOCTYPE html>
    <html>
    <head>
      <link rel="stylesheet" href="styles.css">
    </head>
    <body>
      <div class="heading-container">
    <h1>Computing the nth Fibonacci number</h1>
      </div>
      <div class="body-container">
        <p id='error' class="error"></p>
        <div class="input-div">
          <input id='number-input' class="number-input" type='number' placeholder="Enter a number" />
          <button id='submit-btn' class="btn-submit">Calculate</button>
        </div>
        <div id='results-container' class="results"></div>
      </div>
  <script src="src/index.js"></script>
    </body>
    </html>

    This part is very simple. First, we have a heading. Then we have a container with an input and a button. A user would enter a number then click on “Calculate”. We also have a container to hold the result of the calculation. Lastly, we include the src/index.js file in a script tag.

    You may delete the stylesheet link. But if you’re short on time, I have defined some CSS which you can use. Just create the styles.css file at the root folder and add the styles below:

    
body {
  margin: 0;
  padding: 0;
  box-sizing: border-box;
}

.body-container,
.heading-container {
  padding: 0 20px;
}

.heading-container {
  padding: 20px;
  color: white;
  background: #7a84dd;
}

.heading-container > h1 {
  margin: 0;
}

.body-container {
  width: 50%;
}

.input-div {
  margin-top: 15px;
  margin-bottom: 15px;
  display: flex;
  align-items: center;
}

.results {
  width: 50vw;
}

.results > p {
  font-size: 24px;
}

.result-div {
  padding: 5px 10px;
  border-radius: 5px;
  margin: 10px 0;
  background-color: #e09bb7;
}

.result-div p {
  margin: 5px;
}

span.bold {
  font-weight: bold;
}

input {
  font-size: 25px;
}

p.error {
  color: red;
}

.number-input {
  padding: 7.5px 10px;
}

.btn-submit {
  padding: 10px;
  border-radius: 5px;
  border: none;
  background: #07f;
  font-size: 24px;
  color: white;
  cursor: pointer;
  margin: 0 10px;
}

Now open up src/index.js and let’s develop it step by step. Add the code below.

    const fib = (n) => (n < 2 ? n : fib(n - 1) + fib(n - 2));
    
    const ordinal_suffix = (num) => {
      // 1st, 2nd, 3rd, 4th, etc.
      const j = num % 10;
      const k = num % 100;
      switch (true) {
        case j === 1 && k !== 11:
          return num + "st";
        case j === 2 && k !== 12:
          return num + "nd";
        case j === 3 && k !== 13:
          return num + "rd";
        default:
          return num + "th";
      }
    };
    const textCont = (n, fibNum, time) => {
      const nth = ordinal_suffix(n);
      return `
      <p id='timer'>Time: <span class='bold'>${time} ms</span></p>
  <p><span class="bold" id='nth'>${nth}</span> Fibonacci number: <span class="bold" id='sum'>${fibNum}</span></p>
      `;
    };

Here we have three functions. The first is the function we saw earlier for calculating the nth FN. The second is a utility function that attaches the appropriate ordinal suffix to an integer. The third takes some arguments and outputs markup which we will later insert into the DOM. The first argument is the number whose FN is being computed. The second argument is the computed FN. The last argument is the time the computation took.
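As a quick sanity check (not part of the app’s code), here is what ordinal_suffix() returns for a few inputs:

ordinal_suffix(1);  // "1st"
ordinal_suffix(11); // "11th" (the k !== 11 check catches the teens)
ordinal_suffix(42); // "42nd"
ordinal_suffix(43); // "43rd"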

    Still in src/index.js, add the code below just under the previous one.

    const errPar = document.getElementById("error");
    const btn = document.getElementById("submit-btn");
    const input = document.getElementById("number-input");
    const resultsContainer = document.getElementById("results-container");
    
    btn.addEventListener("click", (e) => {
      errPar.textContent = '';
      const num = window.Number(input.value);
    
      if (num < 2) {
        errPar.textContent = "Please enter a number greater than 2";
        return;
      }
    
      const startTime = new Date().getTime();
      const sum = fib(num);
      const time = new Date().getTime() - startTime;
    
      const resultDiv = document.createElement("div");
      resultDiv.innerHTML = textCont(num, sum, time);
      resultDiv.className = "result-div";
      resultsContainer.appendChild(resultDiv);
    });

    First, we use the document API to get hold of DOM nodes in our HTML file. We get references to the paragraph where we’ll display error messages, the input, the calculate button, and the container where we’ll show our results.

    Next, we attach a “click” event handler to the button. When the button gets clicked, we take whatever is inside the input element and convert it to a number. If we get anything less than 2, we display an error message and return. If we get 2 or greater, we continue. First, we record the current time. After that, we calculate the FN. When that finishes, we get a time difference that represents how long the computation took. In the remaining part of the code, we create a new div and set its inner HTML to the output of the textCont() function we defined earlier. Finally, we add a class to it (for styling) and append it to the results container. The effect of this is that each computation appears in a separate div below the previous one.

    Showing computed Fibonacci numbers up to 43
    Some Fibonacci numbers. (Large preview)

    We can see that as the number increases, the computation time also increases (exponentially). For instance, from 30 to 35, the computation time jumps from 13ms to 130ms. We can still consider those operations to be “fast”. At 40 we see a computation time of over 1 second. On my machine, this is where I start noticing the page becoming unresponsive. At this point, I can no longer interact with the page while the computation is ongoing. I can’t focus on the input or do anything else.

    Recall when we talked about JavaScript being single-threaded? Well, that thread has been “blocked” by this long-running computation, so everything else must “wait” for it to finish. It may start at a lower or higher value on your machine, but you’re bound to reach that point. Notice that it took almost 10s to compute the 44th number. If there were other things to do on your web app, the user would have to wait for fib(44) to finish before they could continue. But if you deployed a web worker to handle that calculation, your users could carry on with something else while it runs.
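
    To make the blocking concrete, here is a minimal standalone sketch (not part of our app) that freezes the main thread with a busy-wait loop. While it runs, clicks, typing, and even rendering are queued until the loop finishes:

    const blockFor = (ms) => {
      // Busy-wait: occupies the single JavaScript thread for `ms` milliseconds
      const end = Date.now() + ms;
      while (Date.now() < end) {}
    };

    console.log("start");
    blockFor(3000); // the page is frozen for 3 seconds; no UI events are handled
    console.log("end"); // queued events are only processed after this line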

    Let’s now see how web workers help us overcome this problem.

    An Example Web Worker In Action

    In this section, we’ll delegate the job of computing the nth FN to a web worker. This will help free up the main thread and keep our UI responsive while the computation is ongoing.

    Getting started with web workers is surprisingly simple. Let’s see how. Create a new file src/fib-worker.js and enter the following code.

    const fib = (n) => (n < 2 ? n : fib(n - 1) + fib(n - 2));
    
    onmessage = (e) => {
      const { num } = e.data;
      const startTime = new Date().getTime();
      const fibNum = fib(num);
      postMessage({
        fibNum,
        time: new Date().getTime() - startTime,
      });
    };

    Notice that we have moved fib, the function that calculates the nth Fibonacci number, into this file. This file will be run by our web worker.

    Recall that in the section What Is A Web Worker, we mentioned that web workers and their parent communicate using the onmessage event handler and the postMessage() method. Here we’re using the onmessage event handler to listen for messages from the parent script. Once we get a message, we destructure the number from the data property of the event. Next, we get the current time and start the computation. Once the result is ready, we use the postMessage() method to post the results back to the parent script.

    Open up src/index.js let’s make some changes.

    ...
    
    const worker = new window.Worker("src/fib-worker.js");
    
    btn.addEventListener("click", (e) => {
      errPar.textContent = "";
      const num = window.Number(input.value);
      if (num < 2) {
        errPar.textContent = "Please enter a number greater than 2";
        return;
      }
    
      worker.postMessage({ num });
      worker.onerror = (err) => err;
      worker.onmessage = (e) => {
        const { time, fibNum } = e.data;
        const resultDiv = document.createElement("div");
        resultDiv.innerHTML = textCont(num, fibNum, time);
        resultDiv.className = "result-div";
        resultsContainer.appendChild(resultDiv);
      };
    });

    The first thing to do is to create the web worker using the Worker constructor. Then inside our button’s event listener, we send a number to the worker using worker.postMessage({ num }). After that, we set a function to listen for errors in the worker. Here we simply return the error; you can certainly do more if you want, like showing it in the DOM. Next, we listen for messages from the worker. Once we get a message, we destructure time and fibNum, and continue the process of showing them in the DOM.

    Note that inside the web worker, the onmessage event is available in the worker’s scope, so we could have written it as self.onmessage and self.postMessage(). But in the parent script, we have to attach these to the worker itself.
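
    To make that distinction concrete, here is the same worker written with the explicit self reference (assuming the same fib function from earlier in the file); both versions behave identically:

    // src/fib-worker.js, using the worker's global scope explicitly
    self.onmessage = (e) => {
      const { num } = e.data;
      const startTime = new Date().getTime();
      const fibNum = fib(num); // same fib() defined at the top of this file
      self.postMessage({ fibNum, time: new Date().getTime() - startTime });
    };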

    In the screenshot below, you can see the web worker file in the Sources tab of Chrome Dev Tools. What you should notice is that the UI stays responsive no matter what number you enter. This behavior is the magic of web workers.

    View of an active web worker file
    A running web worker file. (Large preview)

    We’ve made a lot of progress with our web app. But there’s something else we can do to make it better. Our current implementation uses a single worker to handle every computation. If a new message comes while one is running, the old one gets replaced. To get around this, we can create a new worker for each call to calculate the FN. Let’s see how to do that in the next section.

    Working With Multiple Web Workers

    Currently, we’re handling every request with a single worker. Thus an incoming request will replace a previous one that is yet to finish. What we want now is to make a small change to spawn a new web worker for every request. We will kill this worker once it’s done.

    Open up src/index.js and move the line that creates the web worker inside the button’s click event handler. Now the event handler should look like below.

    btn.addEventListener("click", (e) => {
      errPar.textContent = "";
      const num = window.Number(input.value);
      
      if (num < 2) {
        errPar.textContent = "Please enter a number greater than 2";
        return;
      }
      
      const worker = new window.Worker("src/fib-worker.js"); // this line has moved inside the event handler
      worker.postMessage({ num });
      worker.onerror = (err) => err;
      worker.onmessage = (e) => {
        const { time, fibNum } = e.data;
        const resultDiv = document.createElement("div");
        resultDiv.innerHTML = textCont(num, fibNum, time);
        resultDiv.className = "result-div";
        resultsContainer.appendChild(resultDiv);
        worker.terminate() // this line terminates the worker
      };
    });

    We made two changes.

    1. We moved this line const worker = new window.Worker("src/fib-worker.js") inside the button’s click event handler.
    2. We added this line worker.terminate() to discard the worker once we’re done with it.

    So for every click of the button, we create a new worker to handle the calculation. Thus we can keep changing the input, and each result will hit the screen once its computation finishes. In the screenshot below you can see that the values for 20 and 30 appear before that of 45, even though I started 45 first. Once the function returned for 20 and 30, their results were posted and those workers terminated. When everything finishes, we shouldn’t have any workers left in the sources tab.

    showing Fibonacci numbers with terminated workers
    Illustration of Multiple independent workers. (Large preview)

    We could end this article right here, but if this were a React app, how would we bring web workers into it? That is the focus of the next section.

    Web Workers In React

    To get started, create a new React app using CRA. Copy the fib-worker.js file into the public/ folder of your React app. Putting the file here stems from the fact that React apps are single-page apps; files in public/ are served statically as-is, so the worker script can be fetched at runtime. That’s about the only thing that is specific to using the worker in a React application. Everything that follows from here is pure React.
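
    For orientation, the resulting layout would look roughly like this (assuming a standard CRA structure; the src/ files are the ones we create below):

    my-app/
    ├── public/
    │   ├── index.html
    │   └── fib-worker.js   <- served as a static asset, reachable at /fib-worker.js
    └── src/
        ├── App.js
        ├── App.css
        ├── helpers.js
        └── reducer.js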

    In the src/ folder, create a file helpers.js and export the ordinal_suffix() function from it.

    // src/helpers.js
    
    export const ordinal_suffix = (num) => {
      // 1st, 2nd, 3rd, 4th, etc.
      const j = num % 10;
      const k = num % 100;
      switch (true) {
        case j === 1 && k !== 11:
          return num + "st";
        case j === 2 && k !== 12:
          return num + "nd";
        case j === 3 && k !== 13:
          return num + "rd";
        default:
          return num + "th";
      }
    };

    Our app will require us to maintain some state, so create another file, src/reducer.js and paste in the state reducer.

    // src/reducer.js
    
    export const reducer = (state = {}, action) => {
      switch (action.type) {
        case "SET_ERROR":
          return { ...state, err: action.err };
        case "SET_NUMBER":
          return { ...state, num: action.num };
        case "SET_FIBO":
          return {
            ...state,
            computedFibs: [
              ...state.computedFibs,
              { id: action.id, nth: action.nth, loading: action.loading },
            ],
          };
        case "UPDATE_FIBO": {
          const curr = state.computedFibs.filter((c) => c.id === action.id)[0];
          const idx = state.computedFibs.indexOf(curr);
          curr.loading = false;
          curr.time = action.time;
          curr.fibNum = action.fibNum;
          state.computedFibs[idx] = curr;
          return { ...state };
        }
        default:
          return state;
      }
    };

    Let’s go over each action type one after the other.

    1. SET_ERROR: sets an error state when triggered.
    2. SET_NUMBER: sets the value in our input box to state.
    3. SET_FIBO: adds a new entry to the array of computed FNs.
    4. UPDATE_FIBO: here we look for a particular entry and replace it with a new object which has the computed FN and the time taken to compute it (sketched just below).
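
    As a rough illustration, with made-up values, a single calculation flows through the reducer as two dispatches:

    // Hypothetical sequence for one calculation (sketch):
    dispatch({ type: "SET_FIBO", id: 0, nth: "40th", loading: true });
    // ...a worker computes fib(40) in the background...
    dispatch({ type: "UPDATE_FIBO", id: 0, time: 1200, fibNum: 102334155 });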

    We shall use this reducer shortly. Before that, let’s create the component that will display the computed FNs. Create a new file src/Results.js and paste in the below code.

    // src/Results.js
    
    import React from "react";
    
    export const Results = (props) => {
      const { results } = props;
      return (
        <div id="results-container" className="results-container">
          {results.map((fb) => {
            const { id, nth, time, fibNum, loading } = fb;
            return (
              <div key={id} className="result-div">
                {loading ? (
                  <p>
                    Calculating the{" "}
                    <span className="bold" id="nth">
                      {nth}
                    </span>{" "}
                    Fibonacci number...
                  </p>
                ) : (
                  <>
                    <p id="timer">
                      Time: <span className="bold">{time} ms</span>
                    </p>
                    <p>
                      <span className="bold" id="nth">
                        {nth}
                      </span>{" "}
                      fibonnaci number:{" "}
                      <span className="bold" id="sum">
                        {fibNum}
                      </span>
                    </p>
                  </>
                )}
              </div>
            );
          })}
        </div>
      );
    };

    With this change, we start the process of converting our previous index.html file to JSX. This file has one responsibility: take an array of objects representing computed FNs and display them. The only difference from what we had before is the introduction of a loading state. So now, while the computation is running, we show a loading state to let the user know that something is happening.

    Let’s put in the final pieces by updating the code inside src/App.js. The code is rather long, so we’ll do it in two steps. Let’s add the first block of code.

    import React from "react";
    import "./App.css";
    import { ordinal_suffix } from "./helpers";
    import { reducer } from './reducer'
    import { Results } from "./Results";
    function App() {
      const [info, dispatch] = React.useReducer(reducer, {
        err: "",
        num: "",
        computedFibs: [],
      });
      const runWorker = (num, id) => {
        dispatch({ type: "SET_ERROR", err: "" });
        const worker = new window.Worker('./fib-worker.js')
        worker.postMessage({ num });
        worker.onerror = (err) => err;
        worker.onmessage = (e) => {
          const { time, fibNum } = e.data;
          dispatch({
            type: "UPDATE_FIBO",
            id,
            time,
            fibNum,
          });
          worker.terminate();
        };
      };
      return (
        <div>
          <div className="heading-container">
        <h1>Computing the nth Fibonacci number</h1>
          </div>
          <div className="body-container">
            <p id="error" className="error">
              {info.err}
            </p>
    
        {/* ... next block of code goes here ... */}
    
            <Results results={info.computedFibs} />
          </div>
        </div>
      );
    }
    export default App;

    As usual, we bring in our imports. Then we instantiate a state and updater function with the useReducer hook. We then define a function, runWorker(), that takes a number and an ID and sets about calling a web worker to compute the FN for that number.

    Note that to create the worker, we pass a relative path to the Worker constructor. At runtime, our React code gets attached to the public/index.html file, thus it can find the fib-worker.js file in the same directory. When the computation completes (triggered by worker.onmessage), the UPDATE_FIBO action gets dispatched, and the worker is terminated afterward. What we have now is not much different from what we had previously.

    In the return block of this component, we render the same HTML we had before. We also pass the computed numbers array to the <Results /> component for rendering.

    Let’s add the final block of code inside the return statement.

            <div className="input-div">
              <input
                type="number"
                value={info.num}
                className="number-input"
                placeholder="Enter a number"
                onChange={(e) =>
                  dispatch({
                    type: "SET_NUMBER",
                    num: window.Number(e.target.value),
                  })
                }
              />
              <button
                id="submit-btn"
                className="btn-submit"
                onClick={() => {
                  if (info.num < 2) {
                    dispatch({
                      type: "SET_ERROR",
                      err: "Please enter a number greater than 2",
                    });
                    return;
                  }
                  const id = info.computedFibs.length;
                  dispatch({
                    type: "SET_FIBO",
                    id,
                    loading: true,
                    nth: ordinal_suffix(info.num),
                  });
                  runWorker(info.num, id);
                }}
              >
                Calculate
              </button>
            </div>

    We set an onChange handler on the input to update the info.num state variable. On the button, we define an onClick event handler. When the button gets clicked, we check that the number is at least 2. Notice that before calling runWorker(), we first dispatch an action to add an entry to the array of computed FNs. It is this entry that will be updated once the worker finishes its job. In this way, every entry maintains its position in the list, unlike what we had before.

    Finally, copy the content of styles.css from before and replace the content of App.css.

    We now have everything in place. Now start up your React dev server and play around with some numbers. Take note of the loading state, which is a UX improvement. Also, note that the UI stays responsive even when you enter a number as high as 1000 and click “Calculate”.

    showing loading state while worker is active.
    Showing loading state and active web worker. (Large preview)

    Note the loading state and the active worker. Once the 46th value is computed the worker is killed and the loading state is replaced by the final result.

    Conclusion

    Phew! It has been a long ride, so let’s wrap it up. I encourage you to take a look at the MDN entry for web workers to learn other ways of using them.

    In this article, we learned about what web workers are and the sort of problems they’re meant to solve. We also saw how to implement them using plain JavaScript. Finally, we saw how to implement web workers in a React application.

    I encourage you to take advantage of this great API to deliver a better experience for your users.


    web design

    Developing For The Semantic Web — Smashing Magazine

    10/07/2020


    The dream of a machine-readable Internet is as old as the Internet itself, but only in recent years has it really seemed possible. As major websites take strides towards data-fying their content, now’s the perfect time to jump on the bandwagon.

    In July the Wikimedia Foundation announced Abstract Wikipedia, an attempt to markup knowledge that is language-independent. In many respects, this is the culmination of decades of buildup, during which the dream of a Semantic Web has never quite taken off, but never quite disappeared either.

    As a matter of fact the Semantic Web is growing, and as it renews its mission we all stand to gain from incorporating semantic markup into our websites, be they personal blogs or social media giants. Whether you care about sophisticated web experiences, SEO, or fending off the tyranny of web monopolies, the Semantic Web deserves our attention.

    The benefits of developing for the Semantic Web are not always immediate, or visible, but every site that does strengthens the foundations of an open, transparent, decentralized internet.

    The Semantic Web

    What exactly is the Semantic Web? It is a machine-readable web, providing through metadata “a common framework that allows data to be shared and reused across application, enterprise, and community boundaries.”

    The idea is as old as the World Wide Web itself. Older, in fact. It was a focal point of Tim Berners-Lee’s 1989 proposal. As he outlined, not only should documents form webs, but the data inside them should too:

    Diagram from Tim Berners-Lee’s World Wide Web proposal to CERN
    A diagram from Sir Tim Berners-Lee’s original proposal for the World Wide Web. (Large preview)

    The Semantic Web has trodden a rocky road in the decades since. Since the turn of the millennium, it has morphed into multiple concepts — open data, knowledge graphs — all effectively meaning the same thing: webs of data.

    As the W3C summarises, it is “an extension of the current web in which information is given well-defined meaning, better enabling computers and people to work in cooperation.”

    Aaron Swartz speaking in front of a crowd
    Aaron Swartz speaking in 2012. Photograph by Daniel J. Sieradski. (Large preview)

    The idea has had its fair share of advocates. Internet hacktivist Aaron Swartz wrote a book manuscript about the Semantic Web called A Programmable Web. In it he wrote:

    “Documents can’t really be merged and integrated and queried; they serve mostly as isolated instances to be viewed and reviewed. But data are protean, able to shift into whatever shape best suits your needs.”

    For a variety of reasons, the Semantic Web has not taken off in the same way the Web has, though it is catching up. Several markups have tried to seize the mantle over the years — RDFa, OWL, and Schema to name a few — though none have become standard in the way, say, HTML or CSS have. The barrier to entry was too high.

    However, the dream of the Semantic Web has endured, and as more and more sites incorporate it into their designs there’s all the more reason to join the party. The more sites that get on board, the stronger the Semantic Web becomes.

    Knowledge Without Borders

    Before getting into the weeds of how to design for the Semantic Web, it’s worth digging a little deeper into the why. What does it matter whether data is connected? Aren’t connected documents enough?

    There are several reasons why the Semantic Web continues to be pushed by those who care about a free and open internet. Understanding those reasons is essential to the implementation process. It shouldn’t be a case of ‘eat your vegetables, use semantic markup.’ The Semantic Web is something to believe in and be a part of.

    Benefits of the Semantic Web include:

    • Richer, more sophisticated web experiences
    • Bypassing content silos and internet monopolies
    • Improved search engine readability and rankings
    • Democratisation of information

    Most of these can be traced back to a core tenet of the Semantic Web: a universal language for data. Although the internet has already done wonders for international communication, there’s no escaping the fact some countries have it much better than others. Take languages used on the web vs. languages used in the real world, for example. The eagle-eyed among you may be able to spot a slight imbalance in the data below…

    Bar chart comparing languages spoken online and in real life
    The proportions of languages used on the web do not match up with those used in the real world. (Large preview)

    The borderless utopia of the web is not as close as it might seem to those of us inside the English-speaking bubble. Is that something to chastise anyone for? Not necessarily, but it is something to face up to. Doing so highlights the importance of markup that bridges those gaps. By enriching the data of the web, we take the strain off of its languages.

    This is the crux of the recently announced Abstract Wikipedia, which will attempt to decouple articles from the language they happen to be written in. Wikimedia Executive Director Katherine Maher writes: “Using code, volunteers will be able to translate these abstract ‘articles’ into their own languages. If successful, this could eventually allow everyone to read about any topic in Wikidata in their own language.”

    Abstract Wikipedia creator Denny Vrandečić has been a Semantic Web advocate for years, recognizing its potential to unlock untapped potential online. Breaking down national barriers is essential to that process.

    “No matter what language you publish your content in, you are going to miss out on including the vast majority of people in the world. The Web gave us this wonderful opportunity to have global reach — but by relying on a single language, or a small set of languages, we are squandering this opportunity. While the most important objective is to create good content in the first place, you invite more people to participate in the development of better content by being language-independent. It helps you lower the barriers to contribution and consumption, and it allows for many more people to benefit from that effort.”

    — Denny Vrandečić, Abstract Wikipedia creator

    A timely example of this has been data visualization during the COVID-19 pandemic. The virus has wreaked unspeakable havoc worldwide, but it has also been a shining moment for open data networks, allowing superb web apps, reporting, and more to be common across the web.

    Homepage of ncov2019.live
    The ncov2019.live dashboard was made by American high schooler Avi Schiffmann and pulls data from WHO, the CDC, and COV19. (Large preview)

    And of course, when data is transparent and easily accessible, it makes it easier to identify anomalies… or straight up deceit. Widespread public access to the kind of information above would be unthinkable even 20 years ago. Now we expect it, and smell a rat when it’s denied us. Data is powerful, and if we want to, can be wielded for good.

    Similarly, checking ourselves out of content silos — a hallmark of the modern web experience — takes power away from web monopolies like Google, Facebook, and Twitter. We’re so used to third party platforms deciphering and presenting information that we forget they’re not strictly necessary.

    “If we had shared formats, shared protocols, we might still end up with certain providers playing a large role in certain markets — think of Gmail for email — but everyone is free to move to another provider, and the market remains competitive.”

    — Denny Vrandečić, Abstract Wikipedia creator

    The Semantic Web is silo-less; it is free, open, and abstract, enabling communication between different languages and platforms that would be far more difficult otherwise.

    Data-fying Online Content

    Designing for the Semantic Web boils down to data-fying online content — looking at your content and seeing what can (and should) be abstracted. What does this mean in practical terms, beyond vaguely agreeing it’s a worthwhile thing to do? It depends:

    1. If starting a project from scratch, incorporate Semantic Web considerations into what you do. As a website takes shape, weave semantic markup into its DNA.
    2. If updating or rebuilding a project, assess what could be woven into the Semantic Web that currently isn’t, then implement.

    Both cases basically amount to data-fying content. In this section, we will go through some examples of data abstraction and how it can make content better, smarter, and more widely available.

    Abstracting Information

    Designing and developing for the Semantic Web means looking at online content with your data hat on. Most of us experience the web as a series of connecting documents or pages; what you want to do with the Semantic Web is connect information. This means assessing your content for data points then adjusting the design based on what you find.

    Semantic Web advocate James Hendler outlines this process particularly well with his DIVE ethos. (DIVE into the data, eh? Eh?). It breaks down as follows:

    • Discover
      Find datasets and/or content (including outside your own organization).
    • Integrate
      Link the relations using meaningful labels.
    • Validate
      Provide inputs to modeling and simulation systems.
    • Explore
      Develop approaches to turn data into actionable knowledge.

    Developing for the Semantic Web is largely about having that birds-eye view of the things you make, and how it potentially feeds into infinitely richer web experiences. As Hendler says, actionable knowledge is the goal.

    This really can be applied to almost any type of web content, but let’s start with a common example: recipes. Let’s say you run a cooking blog, with new recipes every Thursday. If you’re French and post a smashing soufflé recipe on your personal blog in plain text, it’s only useful to those who can read French.

    However, by implementing semantic markup the blog can be transformed into a machine-readable recipe data set. Syntax exists for cooking terms to be abstracted. Schema, for example, which can work alongside Microdata, RDFa, or JSON-LD, has markup including:

    • prepTime
    • cookTime
    • recipeYield
    • recipeIngredient
    • estimatedCost
    • nutrition, breaking down into calories and fatContent
    • suitableForDiet.

    I could go on. The full range of options, with examples, can be read at Schema.org. In adding them to the post, the format of the recipe needn’t change at all — you’re simply putting the information in terms computers can understand.
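
    As a rough sketch, with made-up recipe values, the JSON-LD flavour of such markup might look something like this:

    <script type="application/ld+json">
    {
      "@context": "http://schema.org/",
      "@type": "Recipe",
      "name": "Cheese Soufflé",
      "prepTime": "PT30M",
      "cookTime": "PT25M",
      "recipeYield": "4 servings",
      "recipeIngredient": [
        "6 large eggs",
        "100g Gruyère, grated",
        "50g butter",
        "50g plain flour",
        "400ml milk"
      ],
      "nutrition": {
        "@type": "NutritionInformation",
        "calories": "420 calories"
      },
      "suitableForDiet": "http://schema.org/VegetarianDiet"
    }
    </script>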

    Screenshot of a BBC cottage pie recipe
    By converting editorial content into data, BBC recipes massively increase their potential usefulness. (Click for large preview)

    For example, everything highlighted blue in the BBC recipe above has also been given semantic markup — from cooking time to nutritional content. You can see what’s going on under the hood by entering the recipe URL into Google’s Rich Results Test. Note the ‘Add to shopping list’ functionality, an example of connection made possible by Semantic Web implementation. Good content becomes usable data.

    Most of us have crossed paths with this kind of sophistication via search results, but the applications are much wider than that. Semantic markup of recipes makes it easier for websites to be found and used by home assistants. Listed ingredients can be ordered from the local supermarket. Recipes could be filtered in all sorts of ways — for diets, allergies, religion, cost, you name it. Or let’s say you had a limited number of ingredients in the house. With a database you could input those ingredients and see what recipes fit the bill.

    The range of possibilities really does border on the limitless. As Swartz said, data is protean. Once you have it you can use it in all sorts of weird and wonderful ways. This piece is not about those weird and wonderful ways so much as it is about making them possible. Designing for the Semantic Web makes subsequent design infinitely richer.

    Here’s a more personal example to show what I mean. A couple of friends and I run a little music webzine as a hobby. Though we publish the odd article or interview, the ‘main event’ is our weekly album reviews, in which the three of us each assign a score, choose favorite tracks, and write summaries. We’ve been going for more than five years, which means we have close to 250 reviews, which means an awful lot of potential data. We didn’t realize how much until we started redesigning the site.

    I touched upon this in a piece about baking structured data into the design process. In dissecting our reviews we realized they were chock full of information that could be given semantic markup. Artists, album names, artwork, release date, individual scores, overall scores, release type, and more. What’s more — and this is where it gets really exciting — we realized we could connect to an existing database: MusicBrainz.

    This two-way approach is the crux of the Semantic Web. When our music website relaunches it will be its own open data source with thousands of unique data points. Connecting to an existing music database will give our own data more context — and potential. Thousands of data points becomes tens of thousands of data points, maybe more.

    Chart showing how semantic markup connects on an album review
    With some simple semantic markup, seemingly innocuous web pages can become the centre of a huge information network. (Large preview)

    The graphic above only scratches the surface of how much information will be connected to reviews pages. The content is the same as it was before, only now it is plugged into a metadata ecosystem — the Giant Global Graph, as Berners-Lee once called it.

    Developing for the Semantic Web means identifying your own data, marking it up, then sussing out how it connects to other data. Because it does. It always does. And that process is how this…

    Illustration showing how semantic data connects across web pages
    (Large preview)

    … in time becomes this…

    The Linked Open Data Cloud
    The Linked Open Data Cloud, a constantly updating visualisation of the state of linked data online. (Large preview)

    The second image is The Linked Open Data Cloud, a constantly updating visualization of the web’s connected data. That red hive of connections is the sciences; the rest has some way to go. That’s where we come in.

    Useful Semantic Web Resources

    Plugging In

    The ideal of the Semantic Web is connection. Make data, share data, demand data. Be part of an information ecosystem. When you’re creating original data, great. Share it. When data already exists and you’d like to use it, pull it in.

    Open databases like Wikidata and MusicBrainz are just a handful of the data resources out there. Indeed, where databases like these exist, I’d go so far as to say the right thing to do would be to update them where they’re lacking information. Why keep it to yourself? Become a contributor, a Semantic Web advocate.

    Implementation

    As far as building Semantic Webness into your sites goes, I’m certainly not advocating manual, doc-by-doc markup. Who’s got time for that? More often than not the solution is a case of standardizing a format and templating for it.

    Templating is the big opportunity here. How many people really have time to markup all that information manually? However, if you have custom inputs, you get the best of both worlds. Content can be filled with people-friendly information and the information exists as data ready to serve whatever purpose comes to mind.

    Take, for example, a static site generator like Eleventy, which has been enjoying a bit of a love-in from the dev community lately. You write a post, run it through a template, and you’re golden. So why not incorporate semantic markup into the template itself?

    Like Eleventy, the new version of our music webzine site uses Markdown for its posts. While we have the same old text posts we always did, every review now also includes the following metadata inputs, which are then pulled into the template:

    Metadata inputs in a Markdown document
    Incorporating metadata inputs into templates allows content to be converted into data, and at most adds a couple of minutes to any given post upload. (Large preview)
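
    The field names in that screenshot are specific to our site, but as a sketch, with hypothetical field names and values, the front matter of a Markdown review post might look something like this:

    ---
    artist: Neil Young
    album: After the Gold Rush
    released: 1970-09-19
    score:
      given: 27
      possible: 30
    artworkUrl: /images/album-artwork/after-the-gold-rush-neil-young.jpg
    ---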

    Together with author details in the body of the post and some generic website info, this then translates to the following semantic markup:

    <script type="application/ld+json">
        {
      "@context": "http://schema.org/",
      "@type": "Review",
      "reviewBody": "One of the definitive albums released by, quite possibly, the greatest singer-songwriter we've ever seen. To those looking to probe Young's daunting discography: start here.",
      "datePublished": "2020-08-14",
      "author": [{
        "@type": "Person",
        "name": "André Dack"
      },
                {
        "@type": "Person",
        "name": "Frederick O'Brien"
      },
                {
        "@type": "Person",
        "name": "Marcus Lawrence"
      }],
      "itemReviewed": {
        "@type": "MusicAlbum",
        "name": "After the Gold Rush",
        "@id": "https://musicbrainz.org/release-group/b6a3952b-9977-351c-a80a-73e023143858",
        "image": "https://audioxide.com/images/album-artwork/after-the-gold-rush-neil-young.jpg",
        "albumProductionType": "http://schema.org/StudioAlbum",
        "albumReleaseType": "http://schema.org/AlbumRelease",
        "byArtist": {
            "@type": "MusicGroup",
            "name": "Neil Young",
            "@id": "https://musicbrainz.org/artist/75167b8b-44e4-407b-9d35-effe87b223cf"
        }
      },
      "reviewRating": {
        "@type": "Rating",
        "ratingValue": 27,
        "worstRating": 0,
        "bestRating": 30
      },
      "publisher": {
        "@type": "Organization",
        "name": "Audioxide",
        "description": "Independent music webzine founded in 2015. Publishes reviews, articles, interviews, and other oddities.",
        "url": "https://audioxide.com",
        "logo": "https://audioxide.com/logo-location.jpg",
        "sameAs" : [
        "https://facebook.com/audioxide",
        "https://twitter.com/audioxide",
        "https://instagram.com/audioxidecom"
      ]
        }
    }
        
        </script>

    Where before there was just text, on every single review page there will now also be machine-readable versions of what readers see when they visit the site. The words are all still there, the content has barely changed at all — it’s just been data-fyed. From rich search results to interactive review statistics pages, this massively increases what’s possible. The road ahead is wide and open. It also gives us a stake in MusicBrainz’s future. Having connected their data to our own, we want to see it do well, and will do our part to ensure it does.

    The appropriate semantic markup depends on the nature of a website, but odds are it exists. Start with the obvious inputs (date, author, content type, etc.) and work your way into the weeds of the content. The first step could be as simple as a hCard (a kind of digital ID card) for your personal website. Print out screenshots of pages and start annotating. You’ll be amazed by how much content can be data-fyed.
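
    To give a flavour of that first step, here is a minimal h-card (the microformats2 flavour of hCard), with placeholder details:

    <div class="h-card">
      <img class="u-photo" src="/images/me.jpg" alt="" />
      <a class="p-name u-url" href="https://example.com">Jane Doe</a>
      <p class="p-note">Freelance journalist and web designer.</p>
    </div>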

    Beyond Imagination

    Designing and developing for the Semantic Web is a practice that dates back to the Internet’s founding ideals. Whether you value beautiful, informative data visualization, want more sophisticated search results, wish to remove power from web monopolies, or simply believe in free and open information, the Semantic Web is your ally.

    Aaron Swartz closed his manuscript with a call to hope:

    “The Semantic Web is based on a bet, a bet that giving the world tools to easily collaborate and communicate will lead to possibilities so wonderful we can scarcely even imagine them right now.”

    Abstract Wikipedia creator Denny Vrandečić echoes those sentiments today, saying:

    “There’s a need for a web infrastructure that will facilitate interoperability between services, which requires a common set of standards for representing data, and common protocols across providers.”

    The Semantic Web has limped along long enough for it to be clear that a silver bullet language is unlikely to appear, but there are enough now peacefully coexisting for Berners-Lee’s founding dream to be a reality for most of the web. Each of us can be advocates in our own neighborhoods.

    Be Better, Demand Better

    As Tim Berners-Lee has said, the Semantic Web is a culture as much as it is a technical hurdle. In a 2009 TED Talk he summed it up nicely: make linked data, demand linked data. That’s truer now than ever. The World Wide Web is only as open and connected and good as we force it to be. Whenever you make something online ask yourself, “How can this plug into the Semantic Web?” The answers will add new dimensions to the things we create, and create unimaginably wonderful new possibilities for years to come.


    web design

    Useful Tools In Vue.js Web Development — Smashing Magazine

    10/05/2020


    There are some tools that developers just getting started with Vue, and sometimes even those with experience building with Vue, do not know exist, tools that can make development in Vue a lot easier. In this article, we’re going to look at a few of these libraries, what they do, and how to use them during development.

    When working on a new project, there are certain features that are necessary depending on how the application is supposed to be used. For example, if you’ll be storing user-specific data, you’ll need to handle authentication, which will require setting up a form that can be validated. Things such as authentication and form validation are common; there are solutions that possibly fit your use case.

    To properly utilize your development time, it is better to use what is already available instead of inventing your own.

    As a new developer, there’s the possibility that you won’t be aware of all that the Vue ecosystem provides you. This article will help with that; it will cover certain useful tools that will aid you in building better Vue applications.

    Note: There are alternatives to these libraries and this article is in no way placing these few over the others. They are just the ones I’ve worked with.

    This tutorial is aimed at beginners that either just started learning about Vue or already have basic knowledge of Vue. All code snippets used in this tutorial can be found on my GitHub.

    Vue-notification

    During user interaction, there is often a need to display a success message, error message, or random info to the user. In this section, we’re going to look at how to display messages and warnings to your user using vue-notification. This package provides an interface with a nice animation/transition for displaying errors, general information, and success messages to your user across your application and it does not require a lot of configuration to get up and running.

    Installation

    You can install vue-notification in your project using either Yarn or npm, depending on the package manager for your project.

    Yarn
    yarn add vue-notification
    
    npm
    npm install --save vue-notification
    

    After the installation is complete, the next thing would be to add it to the entry point in your app, the main.js file.

    main.js
    //several lines of existing code in the file
        import Notifications from 'vue-notification'
        Vue.use(Notifications)
      

    At this point, we only need to add the notifications component in the App.vue file before we can display notifications in our app. The reason we’re adding this component to the App.vue file is to avoid repetition, because no matter what page the user is on, components in App.vue (e.g. the header and footer components) will always be available. This takes away the pain of having to register the notification component in every file in which we need to display a notification to the user.

    App.vue
    <template>
      <div id="app">
        <div id="nav">
          <router-link to="/">Home</router-link> |
          <router-link to="/about">Notifications</router-link>
        </div>
        <notifications group="demo"/>
        <router-view />
      </div>
    </template>
    

    Here, we add one instance of this component, which accepts a group prop that will be used in grouping the different types of notifications we have. The notifications component accepts a number of props that dictate how the component behaves, and we’re going to look at a few of these.

    1. group
      This prop is used to specify the different types of notifications you might have in your app. For instance, you might decide to use different styles and behavior depending on what purpose the notification is supposed to serve, form validation, API response, etc.
    2. type
      This prop accepts a value that serves as a ‘class name’ for each notification type we have in our application and examples can include success, error, and warn. If we use any one of these as a notification type, we can easily style the component by using this class format vue-notification + '.' + type, i.e .vue-notification.warn for warn, and so on.
    3. duration
      This prop specifies how long the notification component should appear before disappearing. It accepts a number as a value in ms and also accepts a negative number (-1) if you want it to remain on your user’s screen till they click on it.
    4. position
      This prop is used in setting the position you want notifications to appear from in your app. Some of the available options are top left, top right, top center, bottom right, bottom left, and bottom center.

    We can add these props to our component in App.vue so it now looks like this;

    <template>
      <div id="app">
        <div id="nav">
          <router-link to="/">Home</router-link> |
          <router-link to="/about">Notifications</router-link>
        </div>
        <notifications
          :group="group"
          :type="type"
          :duration="duration"
          :position="position"
        />
        <router-view />
      </div>
    </template>
    <script>
      export default {
        data() {
          return {
            duration: -1,
            group: "demo",
            position: "top center",
            type: "info",
          };
        },
      };
    </script>
    <style>
      .vue-notification.info {
        border-left: 0;
        background-color: orange;
      }
      .vue-notification.success {
        border-left: 0;
        background-color: limegreen;
      }
      .vue-notification.error {
        border-left: 0;
        background-color: red;
      }
    </style>
    

    We also add styling for the different notification types that we would be using in our application. Note that other than group, we can pass each of the remaining props on the fly whenever we want to display a notification and it would still work accordingly. To display a notification in any of your Vue files, you can do the following.

    vueFile.vue
    this.$notify({
      group: "demo",
      type: "error",
      text: "This is an error notification",
    });
    

    Here, we create a notification of type error under the demo notification group. The text property accepts the message you want the notification to contain, which in this case is ‘This is an error notification’. This is what the notification would look like in your app.
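
    And since props other than group can be passed on the fly, a success notification with its own duration might look like this (a sketch reusing the same demo group, assuming duration can be overridden per call as with the other props):

    this.$notify({
      group: "demo",
      type: "success",
      duration: 5000, // overrides the default for just this notification
      text: "Your changes have been saved",
    });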

    vue-notification with type ‘error’ in action
    vue-notification in action: error notification displaying in the browser. (Large preview)

    You can find out the other available props and other ways to configure the notification on the official docs page.

    Vuelidate

    One of the most common elements used on the web are form elements (input[type='text'], input[type='email'], input[type='password'], and so on) and there is always a need to validate user input to make sure they’re sending the right data and/or using the right format in the input field. With Vuelidate, you can add validation to the forms in your Vue.js application, saving yourself time and benefitting from the work already put into this package. I had been hearing about Vuelidate for a while, but I was a little reluctant to take a look at it because I thought it would be too complex, which meant I was writing validations from scratch for most of the form fields in the apps I worked on.

    When I eventually looked at the docs, I found out it was not difficult to get started with and I could validate my form fields in no time and move on to the next thing.

    Installation

    You can install Vuelidate using any of the following package managers.

    Yarn
    yarn add vuelidate
    
    npm
    npm install vuelidate --save
    

    After installation, the next thing would be to add it to your app’s config in the main.js file so you can use it in your vue files.

    import Vuelidate from 'vuelidate'
    Vue.use(Vuelidate)
    

    Assuming you have a form that looks like this in your app;

    vuelidate.vue
    <template>
      <form @submit.prevent="login" class="form">
        <div class="input__container">
          <label for="fullName" class="input__label">Full Name</label>
          <input
            type="text"
            name="fullName"
            id="fullName"
            v-model="form.fullName"
            class="input__field"
          />
        </div>
        <div class="input__container">
          <label for="email" class="input__label">Email</label>
          <input
            type="email"
            name="email"
            id="email"
            v-model="form.email"
            class="input__field"
          />
        </div>
        <div class="input__container">
          <label for="email" class="input__label">Age</label>
          <input
            type="number"
            name="age"
            id="age"
            v-model="form.age"
            class="input__field"
          />
        </div>
        <div class="input__container">
          <label for="password" class="input__label">Password</label>
          <input
            type="password"
            name="password"
            id="password"
            v-model="form.password"
            class="input__field"
          />
        </div>
        <input type="submit" value="LOGIN" class="input__button" />
        <p class="confirmation__text" v-if="submitted">Form clicked</p>
      </form>
    </template>
    <script>
      export default {
        data() {
          return {
            submitted: false,
            form: {
              email: null,
              fullName: null,
              age: null,
              password: null,
            },
          };
        },
        methods: {
          login() {
            this.submitted = true;
          },
        },
      };
    </script>
    

    Now to validate this type of form, you first need to decide on what type of validation you need for each form field. For instance, you can decide you need the minimum length of the fullName to be 10 and the minimum age to be 18.

    Vuelidate comes with built-in validators that we only need to import to use. We can also choose to validate the password field based on a particular format, e.g. the password should contain at least a lower case letter, an upper case letter, and a special character. We can write our own little validator that does this and plug it in alongside Vuelidate’s built-in validators.

    Let’s take it step by step.

    Using Built-In Validators
    <script>
      import {
        required,
        minLength,
        minValue,
        email,
      } from "vuelidate/lib/validators";
      export default {
        validations: {
          form: {
            email: {
              email,
              required,
            },
            fullName: {
              minLength: minLength(10),
              required,
            },
            age: {
              required,
              minValue: minValue(18),
            },
          },
        },
      };
    </script>
    

    Here, we import some validators that we need to properly validate our form. We also add a validations property where we define the validation rules for each form field that we want to validate.

    At this point, if you inspect the devTools for your app, you should see something that looks like this;

    vuelidate computed property
    vuelidate computed property (Large preview)

    The $v computed property contains a number of properties that are useful in confirming the validity of our form, but we’re only going to focus on a few of them:

    1. $invalid
      To check if the form passes all validation.
    2. email
      To check that the value is a valid email address.
    3. minValue
      To check that the value of age passes the minValue check.
    4. minLength
      To verify the length of fullName.
    5. required
      To ensure all required fields are provided.

    If you enter a value for age less than the minimum age set in the validation and check $v.form.age.minValue, it would be set to false and this means the value in the input field doesn’t pass the minValue validation check.
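
    As a quick sketch of what this looks like programmatically, say inside a method of the component above:

    // Each validator name is a boolean flag on the model it guards
    console.log(this.$v.form.age.minValue); // false while age < 18
    console.log(this.$v.form.email.email);  // false while the email is malformed
    console.log(this.$v.form.$invalid);     // true while any rule fails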

    Using Custom Validators

    We also need to validate our password field to ensure it matches the required format, but Vuelidate does not have a built-in validator for this. We can write our own custom validator that does this using a RegEx. This custom validator would look like this:

    <script>
      import {
        required,
        minLength,
        minValue,
        email,
      } from "vuelidate/lib/validators";
      export default {
        validations: {
          form: {
    //existing validator rules
            password: {
              required,
              validPassword(password) {
              let regExp = /^(?=.*[0-9])(?=.*[!@#$%^&*])(?=.*[a-z])(?=.*[A-Z])[a-zA-Z0-9!@#$%^&*]{6,}$/; // digit, special character, lower and upper case, min length 6
                return regExp.test(password);
              },
            },
          },
        },
      };
    </script>
    

    Here, we create a custom validator that uses a RegEx to check that the password contains the following:

    1. At least one uppercase letter;
    2. At least one lowercase letter;
    3. At least one special character;
    4. At least one number;
    5. Must have a minimum length of 6.

    If you try to enter a password that fails any of the requirements listed above, validPassword will be set to false.

    Now that we’re sure our validations are working, we have to display the appropriate error messages so the user knows why they can’t proceed. This would look like this:

    <template>
      <form @submit.prevent="login" class="form">
        <div class="input__container">
          <label for="fullName" class="input__label">Full Name</label>
          <input
            type="text"
            name="fullName"
            id="fullName"
            v-model="form.fullName"
            class="input__field"
          />
          <p class="error__text" v-if="!$v.form.fullName.required">
            This field is required
          </p>
        </div>
        <div class="input__container">
          <label for="email" class="input__label">Email</label>
          <input
            type="email"
            name="email"
            id="email"
            v-model="form.email"
            class="input__field"
          />
          <p class="error__text" v-if="!$v.form.email.required">
            This field is required
          </p>
          <p class="error__text" v-if="!$v.form.email.email">
            This email is invalid
          </p>
        </div>
        <div class="input__container">
          <label for="email" class="input__label">Age</label>
          <input
            type="number"
            name="age"
            id="age"
            v-model="form.age"
            class="input__field"
          />
          <p class="error__text" v-if="!$v.form.age.required">
            This field is required
          </p>
        </div>
        <div class="input__container">
          <label for="password" class="input__label">Password</label>
          <input
            type="password"
            name="password"
            id="password"
            v-model="form.password"
            class="input__field"
          />
          <p class="error__text" v-if="!$v.form.password.required">
            This field is required
          </p>
          <p class="error__text" v-else-if="!$v.form.password.validPassword">
            Password should contain at least a lower case letter, an upper case
            letter, a number and a special character
          </p>
        </div>
        <input type="submit" value="LOGIN" class="input__button" />
      </form>
    </template>
    

    Here, we add a paragraph that displays a text telling the user that a field is required, that an inputted value for email is not valid, or that the password doesn’t contain the required characters. If you look at this in your browser, you will see the errors already appearing under each input field.

    error texts in the form
    Error texts for each input field (Large preview)

    This is bad for user experience, as the user is yet to interact with the form; the error texts should not be visible until, at the very least, the user tries to submit the form. To fix this, we add submitted to the condition required for the error texts to show, and switch the value of submitted to true when the user clicks the submit button.

    <template>
      <form @submit.prevent="login" class="form">
        <div class="input__container">
          <label for="fullName" class="input__label">Full Name</label>
          <input
            type="text"
            name="fullName"
            id="fullName"
            v-model="form.fullName"
            class="input__field"
          />
          <p class="error__text" v-if="submitted && !$v.form.fullName.required">
            This field is required
          </p>
        </div>
        <div class="input__container">
          <label for="email" class="input__label">Email</label>
          <input
            type="email"
            name="email"
            id="email"
            v-model="form.email"
            class="input__field"
          />
          <p class="error__text" v-if="submitted && !$v.form.email.required">
            This field is required
          </p>
          <p class="error__text" v-if="submitted && !$v.form.email.email">
            This email is invalid
          </p>
        </div>
        <div class="input__container">
          <label for="email" class="input__label">Age</label>
          <input
            type="number"
            name="age"
            id="age"
            v-model="form.age"
            class="input__field"
          />
          <p class="error__text" v-if="submitted && !$v.form.age.required">
            This field is required
          </p>
        </div>
        <div class="input__container">
          <label for="password" class="input__label">Password</label>
          <input
            type="password"
            name="password"
            id="password"
            v-model="form.password"
            class="input__field"
          />
          <p class="error__text" v-if="submitted && !$v.form.password.required">
            This field is required
          </p>
          <p
            class="error__text"
            v-else-if="submitted && !$v.form.password.validPassword"
          >
            Password should contain at least a lower case letter, an upper case
            letter, a number and a special character
          </p>
        </div>
        <input type="submit" value="LOGIN" class="input__button" />
      </form>
    </template>
    

    Now the error texts do not appear until the user clicks the submit button, which is much better for the user. Each validation error appears only when the corresponding input fails its validation.

    Finally, we only want to process the user’s input when every validation on our form has passed. One way to do this is to use the $invalid property of the form, which is available on the $v computed property:

    methods: {
      login() {
        this.submitted = true;
        // check that every field in this form has been entered correctly
        let invalidForm = this.$v.form.$invalid;
        if (!invalidForm) {
          // process the form data
        }
      },
    },
    

    Here, we’re checking that the form has been filled in completely and correctly. If $invalid returns false, the form is valid and we can process its data; if it returns true, the form still has errors the user needs to fix. We can also use this property to disable or style the submit button, depending on your preference.
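    For example, a minimal sketch of disabling the submit button while the form is invalid, reusing the button markup from the template above:

    <input
      type="submit"
      value="LOGIN"
      class="input__button"
      :disabled="$v.form.$invalid"
    />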

    Vuex-persistedstate

    During development, there are instances where you would store data like a user’s info and token in your Vuex store. But Vuex store data does not persist if your users refresh your app in the browser or enter a new route from the browser’s address bar, and the current state of your application gets lost with it. This causes the user to be redirected to the login page if the route is protected with a navigation guard, which is abnormal behavior for your app. This can be fixed with vuex-persistedstate; let’s look at how.

    Installation

    You can install this plugin using either of these two methods:

    Yarn
    yarn add vuex-persistedstate
    
    npm
    npm install --save vuex-persistedstate
    

    After the installation process is complete, the next step is to configure the plugin for use in your Vuex store:

    import Vue from 'vue'
    import Vuex from 'vuex'
    import createPersistedState from "vuex-persistedstate";
    Vue.use(Vuex)
    export default new Vuex.Store({
        state: {},
        mutations: {},
        actions: {},
        modules: {},
        plugins: [createPersistedState()]
    })
    

    At this point, the whole of our Vuex store would be kept in localStorage (by default), but vuex-persistedstate also comes with the option of using sessionStorage or cookies.

    import Vue from 'vue'
    import Vuex from 'vuex'
    import createPersistedState from "vuex-persistedstate";
    Vue.use(Vuex)
    export default new Vuex.Store({
        state: {},
        mutations: {},
        actions: {},
        modules: {},
        // changes storage to sessionStorage
        plugins: [createPersistedState({ storage: window.sessionStorage })]
    })
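
    vuex-persistedstate accepts any storage object that implements getItem, setItem, and removeItem, so cookies can be used through a cookie library. Below is a sketch using js-cookie; the library choice and the cookie options are assumptions for illustration, not part of the original setup:

    import Vue from 'vue'
    import Vuex from 'vuex'
    import createPersistedState from "vuex-persistedstate";
    import Cookies from "js-cookie"; // assumed cookie library
    Vue.use(Vuex)
    export default new Vuex.Store({
        state: {},
        mutations: {},
        actions: {},
        modules: {},
        // any object with getItem/setItem/removeItem can act as the storage
        plugins: [createPersistedState({
            storage: {
                getItem: (key) => Cookies.get(key),
                setItem: (key, value) => Cookies.set(key, value, { expires: 3, secure: true }),
                removeItem: (key) => Cookies.remove(key),
            },
        })]
    })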
    

    To confirm that our store persists after refreshing or closing the browser tab, let us update it to look like this:

    import Vue from 'vue'
    import Vuex from 'vuex'
    import createPersistedState from "vuex-persistedstate";
    Vue.use(Vuex)
    export default new Vuex.Store({
        state: {
            user: null
        },
        mutations: {
            SET_USER(state, user) {
                state.user = user
            }
        },
        actions: {
            getUser({ commit }, userInfo) {
                commit('SET_USER', userInfo)
            }
        },
        plugins: [createPersistedState()]
    })
    

    Here, we add a user state that stores the user data from the form created in the previous section, a SET_USER mutation that modifies the user state, and a getUser action that receives the user object and passes it to the SET_USER mutation. The next step is to dispatch this action after successfully validating our form:

    methods: {
      login() {
        this.submitted = true;
        let invalidForm = this.$v.form.$invalid;
        let form = this.form;
        // check that every field in this form has been entered correctly
        if (!invalidForm) {
          // process the form data
          this.$store.dispatch("getUser", form);
        }
      },
    },
    

    Now, if you fill in the form correctly, submit it, and open the localStorage section under the Application tab in your browser’s DevTools, you should see a vuex property that looks like this:

    vuex-persistedstate in localStorage
    Vuex store in localStorage (Large preview)

    At this point, if you refresh your browser or open your app in a new tab, your user state will persist across tabs and sessions (in localStorage).
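
    As a usage sketch, here is the kind of navigation guard mentioned earlier that benefits from this: after a refresh, store.state.user is rehydrated from localStorage before the guard runs, so the user is no longer bounced to the login page. The file layout, route paths, and meta flag below are assumptions for illustration:

    // router/index.js (sketch; assumes the store above is exported from ../store)
    import Vue from "vue";
    import VueRouter from "vue-router";
    import store from "../store";

    Vue.use(VueRouter);

    const router = new VueRouter({
      routes: [
        { path: "/login", component: () => import("../views/Login.vue") },
        {
          path: "/dashboard",
          component: () => import("../views/Dashboard.vue"),
          meta: { requiresAuth: true },
        },
      ],
    });

    router.beforeEach((to, from, next) => {
      // user survives a page refresh because vuex-persistedstate rehydrates it
      if (to.meta.requiresAuth && !store.state.user) {
        next("/login");
      } else {
        next();
      }
    });

    export default router;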

    Conclusion

    There are a lot of libraries that can be very useful in Vue.js web development, and sometimes it can be hard to choose which one to use or where to find them. The following links list libraries that you can use in your Vue.js application:

    1. vuejsexamples.com.
    2. madewithvuejs.com.

    When searching for a library, you will often find more than one that does the thing you’re trying to achieve. The important thing is to make sure the option you settle on works for you and is being maintained by its creator(s), so it doesn’t cause your application to break.
