    Building A Web App With React, Redux And Sanity.io — Smashing Magazine

    02/11/2021

    About The Author

    Ifeanyi Dike is a full-stack developer in Abuja, Nigeria. He’s the team lead at Sterling Digitals Limited.

    A headless CMS is a powerful and easy way to manage content and expose it through APIs. Built on React, Sanity.io is a seamless tool for flexible content management. It can be used to build anything from simple to complex applications from the ground up.

    In this article, we’ll build a simple listing app with Sanity.io and React. Our global states will be managed with Redux and the application will be styled with styled-components.

    The fast evolution of digital platforms has placed serious limitations on traditional CMSs like WordPress. These platforms are tightly coupled, inflexible, and focused on the project rather than the product. Thankfully, several headless CMSs have been developed to tackle these challenges and many more.

    Unlike a traditional CMS, a headless CMS, which can be described as Software as a Service (SaaS), can be used to develop websites, mobile apps, digital displays, and much more. It can be used on virtually limitless platforms. If you are looking for a CMS that is platform-independent, developer-first, and offers cross-platform support, you need look no further than a headless CMS.

    A headless CMS is simply a CMS without a head. The head here refers to the frontend or presentation layer, while the body refers to the backend or content repository. This offers a lot of interesting benefits. For instance, it allows developers to choose any frontend they want and to design the presentation layer however they like.

    There are lots of headless CMSs out there; some of the most popular ones include Strapi, Contentful, Contentstack, Sanity, Butter CMS, Prismic, Storyblok, and Directus. These headless CMSs are API-based and have their individual strong points. For instance, CMSs like Sanity, Strapi, Contentful, and Storyblok are free for small projects.

    These headless CMSs are built on different tech stacks as well. While Sanity.io is based on React.js, Storyblok is based on Vue.js. As a React developer, this is the major reason I quickly took an interest in Sanity. However, being headless, each of these platforms can be plugged into any frontend, whether Angular, Vue, or React.

    Each of these headless CMSs has both free and paid plans, and the jump between them can be a significant price increase. Although the paid plans offer more features, you wouldn’t want to pay all that much for a small to mid-sized project. Sanity tries to solve this problem with its pay-as-you-go options, which let you pay for what you use and avoid the price jump.

    Another reason why I chose Sanity.io is its GROQ language. For me, Sanity stands out from the crowd by offering this tool. Graph-Relational Object Queries (GROQ) reduces development time, helps you get the content you need in the form you need it, and also lets the developer create a document with a new content model without code changes.

    Moreover, developers are not constrained to the GROQ language. You can also use GraphQL or even the traditional axios and fetch in your React app to query the backend. Like most other headless CMS, Sanity has comprehensive documentation that contains helpful tips to build on the platform.
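
    For instance, here is a hedged sketch of querying the backend with the browser’s fetch API against Sanity’s HTTP query endpoint (the API version path and placeholder values are assumptions; your real project ID and dataset name come from the Sanity Manager):

    const projectId = "your-project-id"; // placeholder: your real project ID
    const dataset = "production";        // placeholder: your real dataset name
    const query = encodeURIComponent(`*[_type == "movie"]{_id, title}`);

    // Sanity exposes GROQ queries over a plain HTTP endpoint,
    // so no client library is strictly required.
    fetch(`https://${projectId}.api.sanity.io/v1/data/query/${dataset}?query=${query}`)
      .then((res) => res.json())
      .then(({ result }) => console.log(result))
      .catch((err) => console.error(err));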

    Note: This article requires a basic understanding of React, Redux and CSS.

    Getting Started With Sanity.io

    To use Sanity on your machine, you’ll need to install the Sanity CLI tool. While this can be installed locally in your project, it is preferable to install it globally to make it accessible to any future applications.

    To do this, enter the following commands in your terminal.

    npm install -g @sanity/cli

    The -g flag in the above command enables global installation.

    Next, we need to initialize Sanity in our application. Although it can be set up as a separate project, it is usually preferable to initialize it within your frontend app (in this case React).

    In her blog, Kapehe explained in detail how to integrate Sanity with React. It will be helpful to go through the article before continuing with this tutorial.

    Enter the following commands to initialize Sanity in your React app.

    sanity init

    The sanity command became available to us when we installed the Sanity CLI tool. You can view a list of the available Sanity commands by typing sanity or sanity help in your terminal.

    When setting up or initializing your project, you’ll need to follow the prompts to customize it. You’ll also be required to create a dataset, and you can even choose one of Sanity’s custom datasets pre-populated with data. For this listing app, we will be using Sanity’s custom sci-fi movies dataset. This will save us from entering the data ourselves.

    To view and edit your dataset, cd into the Sanity subdirectory in your terminal and enter sanity start. This usually runs on http://localhost:3333/. You may be required to log in to access the interface (make sure you log in with the same account you used when initializing the project). A screenshot of the environment is shown below.

    An overview of the Sanity server for the sci-fi movie dataset.

    Sanity-React Two-way Communication

    Sanity and React need to communicate with each other for a fully functional application.

    CORS Origins Setting In Sanity Manager

    We’ll first connect our React app to Sanity. To do this, log in to https://manage.sanity.io/ and locate CORS origins under API Settings in the Settings tab. Here, you’ll need to hook your frontend origin to the Sanity backend. Our React app runs on http://localhost:3000/ by default, so we need to add that to the CORS origins.

    This is shown in the figure below.

    Setting the CORS origin in the Sanity.io Manager.

    Connecting Sanity To React

    Sanity associates a project ID to every project you create. This ID is needed when connecting it to your frontend application. You can find the project ID in your Sanity Manager.

    The backend communicates with React using a library known as sanity client. You need to install this library in your React project by entering the following command.

    npm install @sanity/client

    Create a file sanitySetup.js (the filename does not matter) in your project’s src folder and enter the following code to set up a connection between Sanity and React.

    import sanityClient from "@sanity/client";

    export default sanityClient({
        projectId: PROJECT_ID,   // placeholder: your project ID from the Sanity Manager
        dataset: DATASET_NAME,   // placeholder: your dataset name, e.g. "production"
        useCdn: true             // serve responses from Sanity's edge cache
    });

    We passed our projectId, dataset name and a boolean useCdn to the instance of the sanity client imported from @sanity/client. This works the magic and connects our app to the backend.

    Now that we’ve completed the two-way connection, let’s jump right in to build our project.

    Setting Up And Connecting Redux To Our App

    We’ll need a few dependencies to work with Redux in our React app. Open up your terminal in your React environment and enter the following bash commands.

    npm install redux react-redux redux-thunk
    

    Redux is a global state management library that can be used with most frontend frameworks and libraries, such as React. However, we need an intermediary tool, react-redux, to enable communication between our Redux store and our React application. Redux Thunk will let our action creators return a function instead of a plain action object.

    While we could write the entire Redux workflow in one file, it is often neater and better to separate our concerns. For this, we will divide our workflow into three files namely, actions, reducers, and then the store. However, we also need a separate file to store the action types, also known as constants.

    Setting Up The Store

    The store is the most important file in Redux. It organizes and packages the states and ships them to our React application.

    Here is the initial setup of our Redux store needed to connect our Redux workflow.

    import { createStore, applyMiddleware } from "redux";
    import thunk from "redux-thunk";
    import reducers from "./reducers/";
    
    export default createStore(
      reducers,
      applyMiddleware(thunk)
    );
    

    The createStore function in this file takes three parameters: the reducer (required), the initial state and the enhancer (usually a middleware, in this case, thunk supplied through applyMiddleware). Our reducers will be stored in a reducers folder and we’ll combine and export them in an index.js file in the reducers folder. This is the file we imported in the code above. We’ll revisit this file later.

    Introduction To Sanity’s GROQ Language

    Sanity takes querying on JSON data a step further by introducing GROQ. GROQ stands for Graph-Relational Object Queries. According to Sanity.io, GROQ is a declarative query language designed to query collections of largely schema-less JSON documents.

    Sanity even provides the GROQ Playground to help developers become familiar with the language. However, to access the playground, you need to install Sanity Vision. Run sanity install @sanity/vision in your terminal to install it.

    GROQ has a similar syntax to GraphQL but it is more condensed and easier to read. Furthermore, unlike GraphQL, GROQ can be used to query JSON data.

    For instance, to retrieve every item in our movie document, we’ll use the following GROQ syntax.

    *[_type == "movie"]

    However, if we wish to retrieve only the _ids and crewMembers in our movie document, we need to specify those fields as follows.

    *[_type == 'movie']{                                             
        _id,
        crewMembers
    }
    

    Here, we used * to tell GROQ that we want every document of _type movie. _type is an attribute under the movie collection. We can also return the type like we did the _id and crewMembers as follows:

    *[_type == 'movie']{                                             
        _id,
        _type,
        crewMembers
    }
    

    We’ll work more on GROQ by implementing it in our Redux actions but you can check Sanity.io’s documentation for GROQ to learn more about it. The GROQ query cheat sheet provides a lot of examples to help you master the query language.

    Setting Up Constants

    We need constants to track the action types at every stage of the Redux workflow. Constants help to determine the type of action dispatched at each point in time. For instance, we can track when the API is loading, fully loaded and when an error occurs.

    We don’t necessarily need to define constants in a separate file but for simplicity and clarity, this is usually the best practice in Redux.

    By convention, constants in JavaScript are defined in uppercase. We’ll follow that best practice here to define our constants. Here is an example of a constant denoting a movie fetch request.

    export const MOVIE_FETCH_REQUEST = "MOVIE_FETCH_REQUEST";

    Here, we created a constant MOVIE_FETCH_REQUEST that denotes an action type of MOVIE_FETCH_REQUEST. This lets us reference the action type without retyping the raw string everywhere, helping us avoid typo-induced bugs. We also exported the constant to make it available anywhere in our project.

    Similarly, we can create other constants for the fetching action types, denoting when a request succeeds or fails.
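
    Here is a sketch of the complete movieConstants.js, based on the action types used throughout this article (the names of the “most popular” constants are assumptions):

    export const MOVIES_FETCH_REQUEST = "MOVIES_FETCH_REQUEST";
    export const MOVIES_FETCH_SUCCESS = "MOVIES_FETCH_SUCCESS";
    export const MOVIES_FETCH_FAIL = "MOVIES_FETCH_FAIL";
    export const MOVIES_FETCH_RESET = "MOVIES_FETCH_RESET";

    export const MOVIE_FETCH_REQUEST = "MOVIE_FETCH_REQUEST";
    export const MOVIE_FETCH_SUCCESS = "MOVIE_FETCH_SUCCESS";
    export const MOVIE_FETCH_FAIL = "MOVIE_FETCH_FAIL";

    export const MOVIES_SORT_REQUEST = "MOVIES_SORT_REQUEST";
    export const MOVIES_SORT_SUCCESS = "MOVIES_SORT_SUCCESS";
    export const MOVIES_SORT_FAIL = "MOVIES_SORT_FAIL";

    // assumed names for the "most popular movies" action types
    export const MOST_POPULAR_REQUEST = "MOST_POPULAR_REQUEST";
    export const MOST_POPULAR_SUCCESS = "MOST_POPULAR_SUCCESS";
    export const MOST_POPULAR_FAIL = "MOST_POPULAR_FAIL";

    export const MOVIES_REF_FETCH_REQUEST = "MOVIES_REF_FETCH_REQUEST";
    export const MOVIES_REF_FETCH_SUCCESS = "MOVIES_REF_FETCH_SUCCESS";
    export const MOVIES_REF_FETCH_FAIL = "MOVIES_REF_FETCH_FAIL";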

    Here we have defined several constants for fetching a movie or list of movies, sorting and fetching the most popular movies. Notice that we set constants to determine when the request is loading, successful and failed.

    Similarly, our personConstants.js file is given below:

    export const PERSONS_FETCH_REQUEST = "PERSONS_FETCH_REQUEST";
    export const PERSONS_FETCH_SUCCESS = "PERSONS_FETCH_SUCCESS";
    export const PERSONS_FETCH_FAIL = "PERSONS_FETCH_FAIL";
    
    export const PERSON_FETCH_REQUEST = "PERSON_FETCH_REQUEST";
    export const PERSON_FETCH_SUCCESS = "PERSON_FETCH_SUCCESS";
    export const PERSON_FETCH_FAIL = "PERSON_FETCH_FAIL";
    
    export const PERSONS_COUNT = "PERSONS_COUNT";

    Like the movieConstants.js, we set a list of constants for fetching a person or persons. We also set a constant for counting persons. The constants follow the convention described for movieConstants.js and we also exported them to be accessible to other parts of our application.

    Finally, we’ll implement light and dark mode in the app and so we have another constants file globalConstants.js. Let’s take a look at it.

    export const SET_LIGHT_THEME = "SET_LIGHT_THEME";
    export const SET_DARK_THEME = "SET_DARK_THEME";

    Here we set constants to determine when light or dark mode is dispatched. SET_LIGHT_THEME determines when the user switches to the light theme and SET_DARK_THEME determines when the dark theme is selected. We also exported our constants as shown.

    Setting Up The Actions

    By convention, our actions are stored in a separate folder. Actions are grouped according to their types. For instance, our movie actions are stored in movieActions.js while our person actions are stored in personActions.js file.

    We also have globalActions.js to take care of toggling the theme from light to dark mode.

    Let’s fetch all movies in movieActions.js.

    import sanityAPI from "../../sanitySetup";
    import {
      MOVIES_FETCH_FAIL,
      MOVIES_FETCH_REQUEST,
      MOVIES_FETCH_SUCCESS  
    } from "../constants/movieConstants";
    
    const fetchAllMovies = () => async (dispatch) => {
      try {
        dispatch({
          type: MOVIES_FETCH_REQUEST
        });
        const data = await sanityAPI.fetch(
          `*[_type == 'movie']{                                            
              _id,
              "poster": poster.asset->url,
          } `
        );
        dispatch({
          type: MOVIES_FETCH_SUCCESS,
          payload: data
        });
      } catch (error) {
        dispatch({
          type: MOVIES_FETCH_FAIL,
          payload: error.message
        });
      }
    };

    Remember when we created the sanitySetup.js file to connect React to our Sanity backend? Here, we imported the setup to enable us to query our sanity backend using GROQ. We also imported a few constants exported from the movieConstants.js file in the constants folder.

    Next, we created the fetchAllMovies action function for fetching every movie in our collection. Most traditional React applications use axios or fetch to fetch data from the backend. While we could use either of those here, we’re using Sanity’s GROQ. To enter the GROQ mode, we need to call the sanityAPI.fetch() function as shown in the code above. Here, sanityAPI is the React-Sanity connection we set up earlier. This returns a Promise, so it has to be handled asynchronously. We’ve used the async-await syntax here, but we could also use the .then syntax.
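
    For comparison, here is a minimal sketch of the same call written with .then inside the thunk:

    sanityAPI
      .fetch(`*[_type == 'movie']{ _id, "poster": poster.asset->url }`)
      .then((data) => dispatch({ type: MOVIES_FETCH_SUCCESS, payload: data }))
      .catch((error) => dispatch({ type: MOVIES_FETCH_FAIL, payload: error.message }));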

    Since we are using thunk in our application, we can return a function instead of an action object. Here, we chose to write the returned function inline.

    const fetchAllMovies = () => async (dispatch) => {
      ...
    }

    Note that we can also write the function this way:

    const fetchAllMovies = () => {
      return async (dispatch)=>{
        ...
      }
    }

    In general, to fetch all movies, we first dispatched an action type that tracks when the request is still loading. We then used Sanity’s GROQ syntax to asynchronously query the movie document, retrieving the _id and poster url of each movie. Finally, we dispatched a payload containing the data returned from the API.

    Similarly, we can retrieve movies by their _id, sort movies, and get the most popular movies.

    We can also fetch movies that match a particular person’s reference. We did this in the fetchMoviesByRef function.

    const fetchMoviesByRef = (ref) => async (dispatch) => {
      try {
        dispatch({
          type: MOVIES_REF_FETCH_REQUEST
        });
        const data = await sanityAPI.fetch(
          `*[_type == 'movie' 
                && (castMembers[person._ref match '${ref}'] || 
                    crewMembers[person._ref match '${ref}'])            
                ]{                                             
                    _id,                              
                    "poster" : poster.asset->url,
                    title
                } `
        );
        dispatch({
          type: MOVIES_REF_FETCH_SUCCESS,
          payload: data
        });
      } catch (error) {
        dispatch({
          type: MOVIES_REF_FETCH_FAIL,
          payload: error.message
        });
      }
    };

    This function takes an argument and checks whether person._ref in either castMembers or crewMembers matches the passed argument. We return the movie _id, poster url, and title alongside. We also dispatch an action of type MOVIES_REF_FETCH_SUCCESS, attaching a payload of the returned data; if an error occurs, we dispatch an action of type MOVIES_REF_FETCH_FAIL, attaching a payload of the error message, thanks to the try-catch wrapper.

    In the fetchMovieById function, we used GROQ to retrieve a movie that matches a particular id passed to the function.

    The GROQ syntax for the function is shown below.

    const data = await sanityAPI.fetch(
          `*[_type == 'movie' && _id == '${id}']{                                               
                    _id,
                    "cast" :
                        castMembers[]{
                            "ref": person._ref,
                            characterName, 
                            "name": person->name,
                            "image": person->image.asset->url
                        }
                    ,
                    "crew" :
                        crewMembers[]{
                            "ref": person._ref,
                            department, 
                            job,
                            "name": person->name,
                            "image": person->image.asset->url
                        }
                    ,                
                    "overview":   {                    
                        "text": overview[0].children[0].text
                      },
                    popularity,
                    "poster" : poster.asset->url,
                    releaseDate,                                
                    title
                }[0]`
        );

    Like the fetchAllMovies action, we started by selecting all documents of type movie but we went further to select only those with an id supplied to the function. Since we intend to display a lot of details for the movie, we specified a bunch of attributes to retrieve.

    We retrieved the movie id and also a few attributes in the castMembers array, namely ref, characterName, the person’s name, and the person’s image. We also changed the alias from castMembers to cast.

    Likewise, we selected a few attributes from the crewMembers array, namely ref, department, job, the person’s name and the person’s image. We also changed the alias from crewMembers to crew.

    In the same way, we selected the overview text, popularity, movie’s poster url, movie’s release date and title.

    Sanity’s GROQ language also allows us to sort a document. To sort items, we pass the order function after a pipe operator.

    For instance, if we wish to sort movies by their releaseDate in ascending order, we could do the following.

    const data = await sanityAPI.fetch(
          `*[_type == 'movie']{                                            
              ...
          } | order(releaseDate asc)`
        );
    

    We used this notation in the sortMoviesBy function to sort either in ascending or descending order.

    Let’s take a look at this function below.

    const sortMoviesBy = (item, type) => async (dispatch) => {
      try {
        dispatch({
          type: MOVIES_SORT_REQUEST
        });
        const data = await sanityAPI.fetch(
          `*[_type == 'movie']{                                
                    _id,                                               
                    "poster" : poster.asset->url,    
                    title
                    } | order( ${item} ${type})`
        );
        dispatch({
          type: MOVIES_SORT_SUCCESS,
          payload: data
        });
      } catch (error) {
        dispatch({
          type: MOVIES_SORT_FAIL,
          payload: error.message
        });
      }
    };

    We began by dispatching an action of type MOVIES_SORT_REQUEST to determine when the request is loading. We then used the GROQ syntax to sort and fetch data from the movie collection. The item to sort by is supplied in the variable item and the mode of sorting (ascending or descending) is supplied in the variable type. Consequently, we returned the id, poster url, and title. Once the data is returned, we dispatched an action of type MOVIES_SORT_SUCCESS and if it fails, we dispatch an action of type MOVIES_SORT_FAIL.

    A similar GROQ concept applies to the getMostPopular function. The GROQ syntax is shown below.

    const data = await sanityAPI.fetch(
          `
                *[_type == 'movie']{ 
                    _id,                              
                    "overview":   {                    
                        "text": overview[0].children[0].text
                    },                
                    "poster" : poster.asset->url,    
                    title 
                }| order(popularity desc) [0..2]`
        );

    The only difference here is that we sorted the movies by popularity in descending order and then selected only the first three. The items are returned with zero-based indexing, so the first three items are items 0, 1 and 2. If we wished to retrieve the first ten items, we would pass the slice [0..9] instead.
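
    For example, a one-line sketch of the query for the top ten most popular movies:

    *[_type == 'movie']{ _id, title } | order(popularity desc) [0..9]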

    The complete movieActions.js file simply combines fetchAllMovies, fetchMovieById, fetchMoviesByRef, sortMoviesBy and getMostPopular, each following the patterns shown above.

    Setting Up The Reducers

    Reducers are one of the most important concepts in Redux. They take the previous state and an action, and determine the new state.

    Typically, we’ll be using the switch statement to execute a condition for each action type. For instance, we can return loading when the action type denotes loading, and then the payload when it denotes success or error. It is expected to take in the initial state and the action as arguments.

    Our movieReducers.js file contains various reducers to match the actions defined in the movieActions.js file. However, each of the reducers has a similar syntax and structure. The only differences are the constants they call and the values they return.

    Let’s start by taking a look at the fetchAllMoviesReducer in the movieReducers.js file.

    import {
      MOVIES_FETCH_FAIL,
      MOVIES_FETCH_REQUEST,
      MOVIES_FETCH_SUCCESS,
      MOVIES_FETCH_RESET
    } from "../constants/movieConstants";
    
    const fetchAllMoviesReducer = (state = {}, action) => {
      switch (action.type) {
        case MOVIES_FETCH_REQUEST:
          return {
            loading: true
          };
        case MOVIES_FETCH_SUCCESS:
          return {
            loading: false,
            movies: action.payload
          };
        case MOVIES_FETCH_FAIL:
          return {
            loading: false,
            error: action.payload
          };
        case MOVIES_FETCH_RESET:
          return {};
        default:
          return state;
      }
    };

    Like all reducers, the fetchAllMoviesReducer takes the initial state object (state) and the action object as arguments. We used the switch statement to check the action types at each point in time. If it corresponds to MOVIES_FETCH_REQUEST, we return loading as true to enable us to show a loading indicator to the user.

    If it corresponds to MOVIES_FETCH_SUCCESS, we turn off the loading indicator and return the action payload in a variable movies. But if it is MOVIES_FETCH_FAIL, we also turn off the loading and return the error. We also want the option to reset our movies; this enables us to clear the states whenever we need to.
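
    Here is a minimal sketch of a matching reset action (the resetMovies name is an assumption, as this action creator isn’t shown above):

    import { MOVIES_FETCH_RESET } from "../constants/movieConstants";

    export const resetMovies = () => (dispatch) => {
      dispatch({ type: MOVIES_FETCH_RESET });
    };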

    We have the same structure for the other reducers. The complete movieReducers.js simply repeats this pattern for fetching a single movie, sorting movies, getting the most popular movies, and fetching movies by reference.

    We also followed the exact same structure for personReducers.js. For instance, the fetchAllPersonsReducer function defines the states for fetching all persons in the database.

    This is given in the code below.

    import {
      PERSONS_FETCH_FAIL,
      PERSONS_FETCH_REQUEST,
      PERSONS_FETCH_SUCCESS,
    } from "../constants/personConstants";
    
    const fetchAllPersonsReducer = (state = {}, action) => {
      switch (action.type) {
        case PERSONS_FETCH_REQUEST:
          return {
            loading: true
          };
        case PERSONS_FETCH_SUCCESS:
          return {
            loading: false,
            persons: action.payload
          };
        case PERSONS_FETCH_FAIL:
          return {
            loading: false,
            error: action.payload
          };
        default:
          return state;
      }
    };
    

    Just like fetchAllMoviesReducer, we defined fetchAllPersonsReducer with state and action as arguments. This is the standard setup for Redux reducers. We then used the switch statement to check the action types: if it’s of type PERSONS_FETCH_REQUEST, we return loading as true; if it’s PERSONS_FETCH_SUCCESS, we switch off loading and return the payload; and if it’s PERSONS_FETCH_FAIL, we return the error.

    Combining Reducers

    Redux’s combineReducers function allows us to combine more than one reducer and pass it to the store. We’ll combine our movies and persons reducers in an index.js file within the reducers folder.

    Let’s take a look at it.

    import { combineReducers } from "redux";
    import {
      fetchAllMoviesReducer,
      fetchMovieByIdReducer,
      sortMoviesByReducer,
      getMostPopularReducer,
      fetchMoviesByRefReducer
    } from "./movieReducers";
    
    import {
      fetchAllPersonsReducer,
      fetchPersonByIdReducer,
      countPersonsReducer
    } from "./personReducers";
    
    import { toggleTheme } from "./globalReducers";
    
    export default combineReducers({
      fetchAllMoviesReducer,
      fetchMovieByIdReducer,
      fetchAllPersonsReducer,
      fetchPersonByIdReducer,
      sortMoviesByReducer,
      getMostPopularReducer,
      countPersonsReducer,
      fetchMoviesByRefReducer,
      toggleTheme
    });

    Here we imported all the reducers from the movies, persons, and global reducers file and passed them to combineReducers function. The combineReducers function takes an object which allows us to pass all our reducers. We can even add an alias to the arguments in the process.
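
    For instance, here is a hypothetical sketch of such aliasing (the rest of this article keeps the original reducer names):

    export default combineReducers({
      allMovies: fetchAllMoviesReducer, // would be read via state.allMovies
      movie: fetchMovieByIdReducer,     // would be read via state.movie
      // ...and so on for the other reducers
    });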

    We’ll work on the globalReducers later.

    We can now pass the reducers in the Redux store.js file. This is shown below.

    import { createStore, applyMiddleware } from "redux";
    import thunk from "redux-thunk";
    import reducers from "./reducers/index";

    const initialState = {}; // we'll populate this when we add theming later

    export default createStore(reducers, initialState, applyMiddleware(thunk));
    

    Having set up our Redux workflow, let’s set up our React application.

    Setting Up Our React Application

    Our React application will list movies and their corresponding cast and crew members. We will be using react-router-dom for routing and styled-components for styling the app. We’ll also use Material UI for icons and some UI components.

    Enter the following bash command to install the dependencies.

    npm install react-router-dom @material-ui/core @material-ui/icons query-string

    Here’s what we’ll be building: a listing app that displays movies with their cast and crew, supports sorting, and offers a light and dark theme.

    Connecting Redux To Our React App

    React Redux ships with a Provider component that allows us to connect our application to the Redux store. To do this, we have to pass an instance of the store to the Provider. We can do this either in our index.js or App.js file.

    Here’s our index.js file.

    import React from "react";
    import ReactDOM from "react-dom";
    import "./index.css";
    import App from "./App";
    import { Provider } from "react-redux";
    import store from "./redux/store";
    ReactDOM.render(
      <Provider store={store}>
        <App />
      </Provider>,
      document.getElementById("root")
    );

    Here, we imported Provider from react-redux and store from our Redux store. Then we wrapped our entire components tree with the Provider, passing the store to it.

    Next, we need react-router-dom for routing in our React application. react-router-dom comes with BrowserRouter, Switch and Route that can be used to define our path and routes.

    We do this in our App.js file. This is shown below.

    import React from "react";
    import Header from "./components/Header";
    import Footer from "./components/Footer";
    import { BrowserRouter as Router, Switch, Route } from "react-router-dom";
    import MoviesList from "./pages/MoviesListPage";
    import PersonsList from "./pages/PersonsListPage";
    
    function App() {
    
      return (
          <Router>
            <main className="contentwrap">
              <Header />
              <Switch>
                <Route path="/persons/">
                  <PersonsList />
                </Route>
                <Route path="/" exact>
                  <MoviesList />
                </Route>
              </Switch>
            </main>
            <Footer />
          </Router>
      );
    }
    export default App;

    This is a standard setup for routing with react-router-dom. You can check it out in their documentation. We imported our components Header, Footer, PersonsList and MovieList. We then set up the react-router-dom by wrapping everything in Router and Switch.

    Since we want our pages to share the same header and footer, we rendered the <Header /> and <Footer /> components outside the Switch. We did a similar thing with the main element, which wraps the routed content.

    We passed each component to the route using Route from react-router-dom.

    Defining Our Pages And Components

    Our application is organized in a structured way. Reusable components are stored in the components folder while Pages are stored in the pages folder.

    Our pages comprise MoviesListPage.js, MoviePage.js, PersonsListPage.js and PersonPage.js. MoviesListPage.js lists all the movies in our Sanity.io backend as well as the most popular movies.

    To list all the movies, we simply dispatch the fetchAllMovies action defined in our movieActions.js file. Since we need to fetch the list as soon as the page loads, we have to dispatch it in a useEffect Hook. This is shown below.

    import React, { useEffect } from "react";
    import { fetchAllMovies } from "../redux/actions/movieActions";
    import { useDispatch, useSelector } from "react-redux";
    
    const MoviesListPage = () => {
      const dispatch = useDispatch();
      useEffect(() => {    
          dispatch(fetchAllMovies());
      }, [dispatch]);
    
      const { loading, error, movies } = useSelector(
        (state) => state.fetchAllMoviesReducer
      );
      
      return (
        ...
      )
    };
    export default MoviesListPage;
    

    Thanks to the useDispatch and useSelector Hooks, we can dispatch Redux actions and select the appropriate states from the Redux store. Notice that the states loading, error and movies were defined in our reducer functions, and here we selected them using the useSelector Hook from React Redux. These states become available as soon as we dispatch the fetchAllMovies() action.

    Once we get the list of movies, we can display it in our application using the map function or however we wish.

    Let’s look at how the complete MoviesListPage.js file comes together.
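
    Here is a sketch of the full component (the MovieListContainer import path, the /movie route, and the query-parameter names are assumptions):

    import React, { useEffect } from "react";
    import { useDispatch, useSelector } from "react-redux";
    import { Link, useLocation } from "react-router-dom";
    import queryString from "query-string";
    import {
      fetchAllMovies,
      getMostPopular,
      sortMoviesBy
    } from "../redux/actions/movieActions";
    import { MovieListContainer } from "../styles/MovieStyle"; // assumed path

    const MoviesListPage = () => {
      const dispatch = useDispatch();
      const location = useLocation();
      // hypothetical query parameters, e.g. /?sort=releaseDate&type=asc
      const { sort, type } = queryString.parse(location.search);

      useEffect(() => {
        dispatch(getMostPopular());
        if (sort && type) {
          dispatch(sortMoviesBy(sort, type));
        } else {
          dispatch(fetchAllMovies());
        }
      }, [dispatch, sort, type]);

      const { loading, error, movies } = useSelector(
        (state) => state.fetchAllMoviesReducer
      );
      // similar useSelector calls read state.sortMoviesByReducer
      // and state.getMostPopularReducer

      return (
        <MovieListContainer>
          {loading ? (
            <p>Loading...</p>
          ) : error ? (
            <p>{error}</p>
          ) : (
            movies &&
            movies.map((movie) => (
              <Link to={`/movie?id=${movie._id}`} key={movie._id}>
                <img src={movie.poster} alt={movie.title} />
              </Link>
            ))
          )}
        </MovieListContainer>
      );
    };
    export default MoviesListPage;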

    We started by dispatching the getMostPopular movies action (this action selects the movies with the highest popularity) in the useEffect Hook. This allows us to retrieve the most popular movies as soon as the page loads. Additionally, we allowed users to sort movies by their releaseDate and popularity. This is handled by the sortMoviesBy action dispatched in the code above. Furthermore, we dispatched the fetchAllMovies depending on the query parameters.

    Also, we used the useSelector Hook to select the corresponding reducers for each of these actions. We selected the states for loading, error and movies for each of the reducers.

    After getting the movies from the reducers, we can now display them to the user. Here, we have used JavaScript’s map function to do this. We first display a loader while each of the movie states is loading, and if there’s an error, we display the error message. Finally, once we have the movies, we display the movie posters to the user using the map function. We wrapped the entire component in a MovieListContainer component.

    The <MovieListContainer> … </MovieListContainer> tag is a div defined using styled components. We’ll take a brief look at that soon.

    Styling Our App With Styled Components

    styled-components lets us style our pages and components on an individual basis. It also offers some interesting features such as inheritance, theming, passing of props, and so on.

    Although we always want to style our pages on an individual basis, sometimes global styling may be desirable. Interestingly, styled-components provide a way to do that, thanks to the createGlobalStyle function.

    To use styled-components in our application, we need to install it. Open your terminal in your React project and enter the following bash command.

    npm install styled-components

    Having installed styled-components, let’s get started with our global styles.

    Let’s create a separate folder in our src directory named styles. This will store all our styles. Let’s also create a globalStyles.js file within the styles folder. To create global style in styled-components, we need to import createGlobalStyle.

    import { createGlobalStyle } from "styled-components";

    We can then define our styles as follows:

    export const GlobalStyle = createGlobalStyle`
      ...
    `

    Styled components make use of tagged template literals to define styles. Within these literals, we can write traditional CSS code.

    We also imported deviceWidth defined in a file named definition.js. The deviceWidth holds the definition of breakpoints for setting our media queries.

    import { deviceWidth } from "./definition";
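
    Here is a sketch of what the deviceWidth object in definition.js could look like (the exact breakpoint values are assumptions; definition.js also exports the colors and theme objects used later):

    export const deviceWidth = {
      mobile_lg: "(max-width: 425px)",
      tablet_md: "(max-width: 600px)",
      tablet: "(max-width: 768px)",
      laptop: "(max-width: 1024px)",
      laptop_lg: "(max-width: 1440px)",
      desktop: "(min-width: 769px)"
    };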

    We set overflow-x to hidden to prevent horizontal scrolling in our application.

    html, body{
            overflow-x: hidden;
    }

    We also defined the header style using the .header style selector.

    .header{
      z-index: 5;
      background-color: ${(props)=>props.theme.midDarkBlue}; 
      display:flex;
      align-items:center;
      padding: 0 20px;
      height:50px;
      justify-content:space-between;
      position:fixed;
      top:0;
      width:100%;
      @media ${deviceWidth.laptop_lg}
      {
        width:97%;
      }
      ...
    }

    Here, various styles such as the background color, z-index, padding, and lots of other traditional CSS properties are defined.

    We’ve used the styled-components props to set the background color. This allows us to set dynamic variables that can be passed from our component. Moreover, we also passed the theme’s variable to enable us to make the most of our theme toggling.

    Theming is possible here because we have wrapped our entire application with the ThemeProvider from styled-components. We’ll talk about this in a moment. Furthermore, we used the CSS flexbox to properly style our header and set the position to fixed to make sure it remains fixed with respect to the browser. We also defined the breakpoints to make the headers mobile friendly.

    Here is the complete code for our globalStyles.js file.

    import { createGlobalStyle } from "styled-components";
    import { deviceWidth } from "./definition";
    
    export const GlobalStyle = createGlobalStyle`
        html{
            overflow-x: hidden;
        }
        body{
            background-color: ${(props) => props.theme.lighter};        
            overflow-x: hidden;   
            min-height: 100vh;     
            display: grid;
            grid-template-rows: auto 1fr auto;
        }
        #root{        
            display: grid;
            flex-direction: column;   
        }    
        h1,h2,h3, label{
            font-family: 'Aclonica', sans-serif;        
        }
        h1, h2, h3, p, span:not(.MuiIconButton-label), 
        div:not(.PrivateRadioButtonIcon-root-8), div:not(.tryingthis){
            color: ${(props) => props.theme.bodyText}
        }
        
        p, span, div, input{
            font-family: 'Jost', sans-serif;       
        }
        
        .paginate button{
            color: ${(props) => props.theme.bodyText}
        }
        
        .header{
            z-index: 5;    
            background-color: ${(props) => props.theme.midDarkBlue};                
            display: flex;
            align-items: center;   
            padding: 0 20px;        
            height: 50px;
            justify-content: space-between;
            position: fixed;
            top: 0;
            width: 100%;
            @media ${deviceWidth.laptop_lg}{
                width: 97%;            
            }               
            
            @media ${deviceWidth.tablet}{
                width: 100%;
                justify-content: space-around;
            }
            a{
                text-decoration: none;
            }
            label{
                cursor: pointer;
                color: ${(props) => props.theme.goldish};
                font-size: 1.5rem;
            }        
            .hamburger{
                cursor: pointer;   
                color: ${(props) => props.theme.white};
                @media ${deviceWidth.desktop}{
                    display: none;
                }
                @media ${deviceWidth.tablet}{
                    display: block;                
                }
            }  
                     
        }    
        .mobileHeader{
            z-index: 5;        
            background-color: ${(props) =>
              props.theme.darkBlue};                    
            color: ${(props) => props.theme.white};
            display: grid;
            place-items: center;        
            
            width: 100%;      
            @media ${deviceWidth.tablet}{
                width: 100%;                   
            }                         
            
            height: calc(100% - 50px);                
            transition: all 0.5s ease-in-out; 
            position: fixed;        
            right: 0;
            top: 50px;
            .menuitems{
                display: flex;
                box-shadow: 0 0 5px ${(props) => props.theme.lightshadowtheme};           
                flex-direction: column;
                align-items: center;
                justify-content: space-around;                        
                height: 60%;            
                width: 40%;
                a{
                    display: flex;
                    flex-direction: column;
                    align-items:center;
                    cursor: pointer;
                    color: ${(props) => props.theme.white};
                    text-decoration: none;                
                    &:hover{
                        border-bottom: 2px solid ${(props) => props.theme.goldish};
                        .MuiSvgIcon-root{
                            color: ${(props) => props.theme.lightred}
                        }
                    }
                }
            }
        }
        
        footer{                
            min-height: 30px;        
            margin-top: auto;
            display: flex;
            flex-direction: column;
            align-items: center;
            justify-content: center;        
            font-size: 0.875rem;        
            background-color: ${(props) => props.theme.midDarkBlue};      
            color: ${(props) => props.theme.white};        
        }    
    `;
    

    Notice that we wrote pure CSS code within the literal, with a few exceptions: styled-components allows us to pass props, for example. You can learn more about this in the documentation.

    Apart from defining global styles, we can define styles for individual pages.

    For instance, here is the style for the PersonListPage.js defined in PersonStyle.js in the styles folder.

    import styled from "styled-components";
    import { deviceWidth, colors } from "./definition";
    
    export const PersonsListContainer = styled.div`
      margin: 50px 80px;
      @media ${deviceWidth.tablet} {
        margin: 50px 10px;
      }
      a {
        text-decoration: none;
      }
      .top {
        display: flex;
        justify-content: flex-end;
        padding: 5px;
        .MuiSvgIcon-root {
          cursor: pointer;
          &:hover {
            color: ${colors.darkred};
          }
        }
      }
      .personslist {
        margin-top: 20px;
        display: grid;
        place-items: center;
        grid-template-columns: repeat(5, 1fr);
        @media ${deviceWidth.laptop} {
          grid-template-columns: repeat(4, 1fr);
        }
        @media ${deviceWidth.tablet} {
          grid-template-columns: repeat(3, 1fr);
        }
        @media ${deviceWidth.tablet_md} {
          grid-template-columns: repeat(2, 1fr);
        }
        @media ${deviceWidth.mobile_lg} {
          grid-template-columns: repeat(1, 1fr);
        }
        grid-gap: 30px;
        .person {
          width: 200px;
          position: relative;
          img {
            width: 100%;
          }
          .content {
            position: absolute;
            bottom: 0;
            left: 8px;
            border-right: 2px solid ${colors.goldish};
            border-left: 2px solid ${colors.goldish};
            border-radius: 10px;
            width: 80%;
            margin: 20px auto;
            padding: 8px 10px;
            background-color: ${colors.transparentWhite};
            color: ${colors.darkBlue};
            h2 {
              font-size: 1.2rem;
            }
          }
        }
      }
    `;
    

    We first imported styled from styled-components and deviceWidth from the definition file. We then defined PersonsListContainer as a div to hold our styles. Using media queries and the established breakpoints, we made the page mobile-friendly by setting various breakpoints.

    Here, we have used only the standard browser breakpoints for small, large and very large screens. We also made the most of the CSS flexbox and grid to properly style and display our content on the page.

    To use this style in our PersonListPage.js file, we simply imported it and added it to our page as follows.

    import React from "react";
    
    const PersonsListPage = () => {
      return (
        <PersonsListContainer>
          ...
        </PersonsListContainer>
      );
    };
    export default PersonsListPage;
    

    The wrapper will output a div because we defined it as a div in our styles.

    Adding Themes And Wrapping It Up

    It’s always a cool feature to add themes to our application. For this, we need the following:

    • Our custom themes defined in a separate file (in our case definition.js file).
    • The logic defined in our Redux actions and reducers.
    • Calling our theme in our application and passing it through the component tree.

    Let’s check this out.

    Here is our theme object in the definition.js file.

    export const theme = {
      light: {
        dark: "#0B0C10",
        darkBlue: "#253858",
        midDarkBlue: "#42526e",
        lightBlue: "#0065ff",
        normal: "#dcdcdd",
        lighter: "#F4F5F7",
        white: "#FFFFFF",
        darkred: "#E85A4F",
        lightred: "#E98074",
        goldish: "#FFC400",
        bodyText: "#0B0C10",
        lightshadowtheme: "rgba(0, 0, 0, 0.1)"
      },
      dark: {
        dark: "white",
        darkBlue: "#06090F",
        midDarkBlue: "#161B22",
        normal: "#dcdcdd",
        lighter: "#06090F",
        white: "white",
        darkred: "#E85A4F",
        lightred: "#E98074",
        goldish: "#FFC400",
        bodyText: "white",
        lightshadowtheme: "rgba(255, 255, 255, 0.9)"
      }
    };
    

    We have added various color properties for the light and dark themes. The colors are carefully chosen to enable visibility both in light and dark mode. You can define your themes as you want. This is not a hard and fast rule.

    Next, let’s add the functionality to Redux.

    We have created globalActions.js in our Redux actions folder and added the following codes.

    import { SET_DARK_THEME, SET_LIGHT_THEME } from "../constants/globalConstants";
    import { theme } from "../../styles/definition";
    
    export const switchToLightTheme = () => (dispatch) => {
      dispatch({
        type: SET_LIGHT_THEME,
        payload: theme.light
      });
      localStorage.setItem("theme", JSON.stringify(theme.light));
      localStorage.setItem("light", JSON.stringify(true));
    };
    
    export const switchToDarkTheme = () => (dispatch) => {
      dispatch({
        type: SET_DARK_THEME,
        payload: theme.dark
      });
      localStorage.setItem("theme", JSON.stringify(theme.dark));
      localStorage.setItem("light", JSON.stringify(false));
    };

    Here, we simply imported our defined themes and dispatched the corresponding actions, passing the theme we needed as the payload. The payload results are stored in the local storage using the same keys for both light and dark themes. This enables us to persist the theme state in the browser.

    We also need to define our reducer for the themes.

    import { SET_DARK_THEME, SET_LIGHT_THEME } from "../constants/globalConstants";
    
    export const toggleTheme = (state = {}, action) => {
      switch (action.type) {
        case SET_LIGHT_THEME:
          return {
            theme: action.payload,
            light: true
          };
        case SET_DARK_THEME:
          return {
            theme: action.payload,
            light: false
          };
        default:
          return state;
      }
    };

    This is very similar to what we’ve been doing. We used the switch statement to check the type of action and then returned the appropriate payload. We also returned a state light that determines whether the light or dark theme is selected by the user. We’ll use this in our components.
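
    For instance, here is a hedged sketch of a toggle component that uses the light state (the component name, icon choices, and import paths are illustrative):

    import React from "react";
    import { useDispatch, useSelector } from "react-redux";
    import Brightness4Icon from "@material-ui/icons/Brightness4";
    import Brightness7Icon from "@material-ui/icons/Brightness7";
    import {
      switchToLightTheme,
      switchToDarkTheme
    } from "../redux/actions/globalActions";

    const ThemeToggle = () => {
      const dispatch = useDispatch();
      const { light } = useSelector((state) => state.toggleTheme);

      // show a moon in light mode, a sun in dark mode
      return light ? (
        <Brightness4Icon onClick={() => dispatch(switchToDarkTheme())} />
      ) : (
        <Brightness7Icon onClick={() => dispatch(switchToLightTheme())} />
      );
    };
    export default ThemeToggle;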

    We also need to add it to our root reducer and store. Here is the complete code for our store.js.

    import { createStore, applyMiddleware } from "redux";
    import thunk from "redux-thunk";
    import { theme as initialTheme } from "../styles/definition";
    import reducers from "./reducers/index";
    
    const theme = localStorage.getItem("theme")
      ? JSON.parse(localStorage.getItem("theme"))
      : initialTheme.light;
    
    const light = localStorage.getItem("light")
      ? JSON.parse(localStorage.getItem("light"))
      : true;
    
    const initialState = {
      toggleTheme: { light, theme }
    };
    export default createStore(reducers, initialState, applyMiddleware(thunk));

    Since we needed to persist the theme when the user refreshes, we had to get it from the local storage using localStorage.getItem() and pass it to our initial state.

    Adding The Functionality To Our React Application

    Styled components provide us with ThemeProvider that allows us to pass themes through our application. We can modify our App.js file to add this functionality.

    Let’s take a look at it.

    import React from "react";
    import { BrowserRouter as Router, Switch, Route } from "react-router-dom";
    import { useSelector } from "react-redux";
    import { ThemeProvider } from "styled-components";
    
    function App() {
      const { theme } = useSelector((state) => state.toggleTheme);
      let Theme = theme ? theme : {};
      return (
        <ThemeProvider theme={Theme}>
          <Router>
            ...
          </Router>
        </ThemeProvider>
      );
    }
    export default App;

    By passing themes through the ThemeProvider, we can easily use the theme props in our styles.

    For instance, we can set the color to our bodyText custom color as follows.

    color: ${(props) => props.theme.bodyText};

    We can use the custom themes anywhere we need color in our application.

    For example, to define border-bottom, we do the following.

    border-bottom: 2px solid ${(props) => props.theme.goldish};

    Conclusion

    We began by delving into Sanity.io, setting it up and connecting it to our React application. Then we set up Redux, used the GROQ language to query our API, connected Redux to our React app with react-redux, and styled everything with styled-components and theming.

    However, we only scratched the surface on what is possible with these technologies. I encourage you to go through the code samples in my GitHub repo and try your hands on a completely different project using these technologies to learn and master them.



    How To Port Your Web App To Microsoft Teams — Smashing Magazine

    02/02/2021

    About The Authors

    Tomomi Imura (@girlie_mac) is an avid open web technology advocate and a full-stack engineer, currently working as a Cloud Advocate at Microsoft.

    On your list of places where people might access your web app, “Microsoft Teams” is probably number “not-on-the-list”. But it turns out that making your application accessible where your users are already working has some profound benefits. In this article, we’ll look at how Microsoft Teams makes web apps a first-class citizen, and how it enables you to interact with those apps in completely new ways. 

    Perhaps you are using Microsoft Teams at work and want to build an app that runs inside Teams. Or maybe you’ve already published an app on another platform and want to gain more users on Teams. In this article, we’ll see how to build a new web application in Teams, and how to integrate an existing one — with just a few lines of code.

    You don’t need any prior experience to get started. We’ll use bare-minimum HTML code and toolsets to build a Teams tab (the simplest version of an app in Teams). While you’re walking through this tutorial, if you want to dive deeper, check out the on-demand videos from Learn Together: Developing Apps for Teams. It turns out that making your web application accessible where your users are already working has some benefits, including a reach of over 115 million daily active users. Let’s dive in!

    Microsoft Teams as a platform

    You may be familiar with Teams as a collaborative communication tool, but as a developer, you could also view it as a development platform. In fact, Teams provides an alternative way to interact with and distribute your existing web applications. This is primarily because the tool has always been designed with the web in mind. One of the key benefits of integrating web apps into Teams is providing a more productive way for users — your colleagues and Teams users around the world — to get the work done.

    Integration through tabs, embedded web apps

    While there are many different paths to building and deploying Teams apps, one of the easiest is to integrate your existing web apps with Teams through what is called “tabs.” Tabs are basically embedded web apps created using HTML, TypeScript (or JavaScript), client-side frameworks such as React, or any server-side framework such as .NET.

    Tabs allow you to surface content in your app by essentially embedding a web page in Teams using an <iframe>. The application was specifically designed with this capability in mind, so you can integrate existing web apps to create custom experiences for yourself, your team, and your app users.

    One useful thing about integrating your web apps with Teams is that you can pretty much use the developer tools you’re already familiar with: Git, Node.js, npm, and Visual Studio Code. To expand your apps with additional capabilities, you can use specialized tools such as the Teams Yeoman generator command line tool or the Teams Toolkit Visual Studio Code extension, and the Microsoft Teams JavaScript client SDK. They allow you to retrieve additional information and enhance the content you display in your Teams tab.

    Build a tab with an existing code sample

    Let’s get started with the basics. (If you want to take it a step further and actually deploy your app, you can jump to the Learn Together videos to learn more.)

    To simplify the steps, let’s take a look at a code sample. Instead of the tooling outlined above, the only things you’ll need are a web browser and a Microsoft Teams account that allows custom app uploads (a free Microsoft 365 developer account works).

    In this article, we’re going to use a web-based IDE called Glitch, which allows you to host and run this project quickly in the browser, so you don’t have to worry about the tunneling or deployment at this time. For the full-scale approach from start to finish, you can check out a comprehensive tutorial on Microsoft Docs, which includes examples of a slightly more advanced messaging extension or a bot.

    Although Glitch is a great tool for tutorial purposes, it is not a scalable environment, so in reality you’ll also need a way to deploy and host your web services. In a nutshell, while you are developing, you need to set up a local development environment with localhost tunneling, such as the third-party tool ngrok, and for production, you’ll need to deploy your app to a cloud service provider, for example, Microsoft Azure Web Services.

    Also, you can use on-premises infrastructure to host your web services, but they must be publicly accessible (not behind a firewall). For this article, we will focus on how to make your web app available on Teams, so let’s go back to Glitch for now!

    First, let’s go to the sample code page and remix the project. Remixing is like forking a repo on GitHub, so it generates a copy of the project for you, letting you modify the code however you want without messing with the original.

    Remix the sample code page first. We’ll use it as a starting foundation for our project.

    Once you have your own project repo, you’ll also automatically get your own web server URL. For example, if your generated project name is achieved-diligent-bell, your web server URL would be https://achieved-diligent-bell.glitch.me. Of course, you can customize the name if you want.

    Double-check your project name in the upper-left corner.

    With your web services up and running, you’ll need to create an app package that can be distributed and installed in Teams. The app package, which is installed in the Teams client, contains two icons and a JSON manifest file describing the metadata for your app, the extension points your app is using, and pointers to the services powering those extension points.

    Create an app package

    Now, you will need to create an app package to make your web app available in Teams. The package includes:

    📁 your-app-package
     └── 📄 manifest.json
     └── 🖼 color.png (192x192)
     └── 🖼 outline.png (32x32)
    
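    For reference, here is a trimmed sketch of what a personal tab’s manifest.json can contain (all IDs, names, and URLs below are placeholders; App Studio, covered next, generates the real file for you):

    {
      "$schema": "https://developer.microsoft.com/en-us/json-schemas/teams/v1.8/MicrosoftTeams.schema.json",
      "manifestVersion": "1.8",
      "version": "1.0.0",
      "id": "00000000-0000-0000-0000-000000000000",
      "packageName": "com.example.hellotab",
      "developer": {
        "name": "Your Name",
        "websiteUrl": "https://example.com",
        "privacyUrl": "https://example.com/privacy",
        "termsOfUseUrl": "https://example.com/tou"
      },
      "name": { "short": "Hello Tab" },
      "description": {
        "short": "A bare-minimum personal tab",
        "full": "A bare-minimum personal tab that embeds a static web page."
      },
      "icons": { "color": "color.png", "outline": "outline.png" },
      "accentColor": "#FFFFFF",
      "staticTabs": [
        {
          "entityId": "hello",
          "name": "Hello",
          "contentUrl": "https://your-project-name.glitch.me/index.html",
          "scopes": ["personal"]
        }
      ],
      "validDomains": ["your-project-name.glitch.me"]
    }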

    When creating your app package, you can choose to create it manually or use App Studio, which is a useful app inside Teams that helps developers make Teams apps (yes, meta indeed). App Studio will guide you through the configuration of the app and create your app manifest automatically.

    Once you have installed the App Studio app in your Teams client, open the app. You can launch it by clicking the three dots in the left menu bar.

    Launch the App Studio app by clicking the three dots in the left menu bar.

    Then, click the Manifest Editor tab from the top and select Create a new app.

    Proceed with the Manifest Editor in the top navigation and select “Create a new app”.

    You’ll need to fill out all the required fields, including the app names, descriptions, and so on.

    Fill in some details, such as app names and descriptions.

    In the App URLs section, fill out your privacy policy and terms-of-use page URLs. In this example, we are just using the placeholder URL, https://example.com.

    Configure your personal tab by selecting Capabilities > Tabs from the left menu.

    Now, you can configure the capabilities of the tab. (Large preview)

    Click the Add button under Add a personal tab and enter the info. Under Content URL, enter your webpage URL (in this case, it should be https://[your-project-name].glitch.me/index.html).

    You will need to add your content URL — the one we’ve defined earlier. (Large preview)

    The index.html file has a few lines of static HTML code:

    <h1>Hello world! </h1>
    <p>This is the bare-minimum setting for MS Teams Tabs.</p>
    

    Feel free to tweak the content in the index.html as you want. This is the content to be displayed in your Teams client. Finally, go to Finish > Test and distribute.

    Now you should be ready to finish, test and distribute. (Large preview)

    If you get any errors, you’ll have to go back and fix them. Otherwise, you can proceed by clicking “Install”. And voilà, now you have your own personal tab!

    Here we go: our first Tab is ready to go. (Large preview)

    Additional features with Teams SDK

    This code sample contains only the bare-minimum HTML needed to show you how to configure Teams to display your web app in Tabs. But of course, your web apps don’t need to be static, and you can use web frameworks such as React if you wish! (There are more deep-dive examples using React that you can dive into as well.)

    Teams has its own JavaScript SDK that provides additional functionality too, such as loading a configuration popup for teams, getting the user’s locale info, and so on.

    One useful example is detecting the “theme” of a Teams client — Teams has three themes: light (default), dark, and high-contrast mode. You would think CSS should handle the theming, but remember, your web app is displayed inside the Teams iframe, so you need to use the SDK to handle the color change.

    You can install the SDK from npm:

    npm install --save @microsoft/teams-js
    

    Or include it in your HTML:

    <script src="https://statics.teams.cdn.office.net/sdk/v1.8.0/js/MicrosoftTeams.min.js"></script>
    

    Now you can detect the current theme with the getContext method. And this is how you can determine the body text color:

    microsoftTeams.initialize();
    
    microsoftTeams.getContext((context) => {
      if (context.theme !== 'default') {
        document.body.style.color = '#fff';
      }
    });
    

    The theme can be changed by a user after loading, so to detect the theme change event, add the following code snippet:

    microsoftTeams.registerOnThemeChangeHandler((theme) => {
      if (theme !== 'default') {
        document.body.style.color = '#fff';
      } else {
        document.body.style.color = 'inherit';
      }
    });
    
    And so we’ve switched from a light mode to dark mode. (Large preview)

    Hopefully, this simple tutorial helped you to get started with the first steps. If you’d like to continue developing for Teams, you can add more capabilities such as adding Teams-native UI components, search features, messaging extensions, and conversational bots, to build more interactive applications.

    For a comprehensive guide using the recommended toolsets (Visual Studio Code, Yeoman Generator, etc.), check out Teams Developer Docs where you can learn more about tabs, messaging extensions, bots, webhooks, and the other capabilities that the Teams developer platform provides.

    Next Steps

    With just a few clicks, you can integrate your apps into Teams and create new experiences for your users. And once you’ve developed apps and deployed them to Teams, you’ll have the potential of reaching a wide audience of users that use Teams daily.

    You can get started building today or learn more from Learn Together: Developing Apps for Teams with on-demand videos and demos all around building apps for Teams.

    Smashing Editorial
    (vf, il)


    web design

    Building A Stocks Price Notifier App Using React, Apollo GraphQL And Hasura — Smashing Magazine

    12/21/2020

    About The Author

    Software Engineer, trying to make sense of every line of code she writes. Ankita is a JavaScript Enthusiast and adores its weird parts. She’s also an obsessed …
    More about
    Ankita
    Masand

    In this article, we’ll learn how to build an event-based application and send a web-push notification when a particular event is triggered. We’ll set up database tables, events, and scheduled triggers on the Hasura GraphQL engine and wire up the GraphQL endpoint to the front-end application to record the stock price preference of the user.

    Getting notified when an event of your choice has occurred has become far more popular than staying glued to a continuous stream of data to spot that occurrence yourself. People prefer to get relevant emails or messages when their preferred event has occurred, rather than being hooked to the screen waiting for that event to happen. Event-based terminology is also quite common in the world of software.

    How awesome would that be if you could get the updates of the price of your favorite stock on your phone?

    In this article, we’re going to build a Stocks Price Notifier application by using React, Apollo GraphQL, and Hasura GraphQL engine. We’re going to start the project from create-react-app boilerplate code and build everything from the ground up. We’ll learn how to set up the database tables and events on the Hasura console. We’ll also learn how to wire up Hasura’s events to get stock price updates using web-push notifications.

    Here’s a quick glance at what we would be building:

    Overview of Stock Price Notifier Application
    Stock Price Notifier Application

    Let’s get going!

    An Overview Of What This Project Is About

    The stocks data (including metrics such as high, low, open, close, volume) would be stored in a Hasura-backed Postgres database. The user would be able to subscribe to a particular stock based on some value, or opt to get notified every hour. The user will get a web-push notification once their subscription criteria are fulfilled.

    This looks like a lot of stuff and there would obviously be some open questions on how we’ll be building out these pieces.

    Here’s a plan on how we would accomplish this project in four steps:

    1. Fetching the stocks data using a NodeJs script
      We’ll start by fetching the stock data using a simple NodeJs script from one of the providers of stocks API — Alpha Vantage. This script will fetch the data for a particular stock at intervals of 5 minutes. The response of the API includes high, low, open, close and volume. This data will then be inserted in the Postgres database that is integrated with the Hasura back-end.
    2. Setting up The Hasura GraphQL engine
      We’ll then set-up some tables on the Postgres database to record data points. Hasura automatically generates the GraphQL schemas, queries, and mutations for these tables.
    3. Front-end using React and Apollo Client
      The next step is to integrate the GraphQL layer using the Apollo client and Apollo Provider (the GraphQL endpoint provided by Hasura). The data-points will be shown as charts on the front-end. We’ll also build the subscription options and will fire corresponding mutations on the GraphQL layer.
    4. Setting up Event/Scheduled triggers
      Hasura provides an excellent tooling around triggers. We’ll be adding event & scheduled triggers on the stocks data table. These triggers will be set if the user is interested in getting a notification when the stock prices reach a particular value (event trigger). The user can also opt for getting a notification of a particular stock every hour (scheduled trigger).

    Now that the plan is ready, let’s put it into action!

    Here’s the GitHub repository for this project. If you get lost anywhere in the code below, refer to this repository and get back to speed!

    Fetching The Stocks Data Using A NodeJs Script

    This is not as complicated as it sounds! We’ll have to write a function that fetches data from the Alpha Vantage endpoint, and this fetch call should be fired at an interval of 5 minutes (you guessed it right: we’ll have to put this function call in setInterval).

    If you’re still wondering what Alpha Vantage is and just want to get this out of your head before hopping onto the coding part, then here it is:

    Alpha Vantage Inc. is a leading provider of free APIs for realtime and historical data on stocks, forex (FX), and digital/cryptocurrencies.

    We would be using this endpoint to get the required metrics of a particular stock. This API expects an API key as one of the parameters. You can get your free API key from here. We’re now good to get onto the interesting bit — let’s start writing some code!

    Installing Dependencies

    Create a stocks-app directory and create a server directory inside it. Initialize it as a node project using npm init and then install these dependencies:

    npm i isomorphic-fetch pg nodemon --save

    These are the only three dependencies that we’d need to write this script of fetching the stock prices and storing them in the Postgres database.

    Here’s a brief explanation of these dependencies:

    • isomorphic-fetch
      It makes it easy to use fetch isomorphically (in the same form) on both the client and the server.
    • pg
      It is a non-blocking PostgreSQL client for NodeJs.
    • nodemon
      It automatically restarts the server on any file changes in the directory.

    Setting Up The Configuration

    Add a config.js file at the root level. Add the below snippet of code in that file for now:

    const config = {
      user: '<DATABASE_USER>',
      password: '<DATABASE_PASSWORD>',
      host: '<DATABASE_HOST>',
      port: '<DATABASE_PORT>',
      database: '<DATABASE_NAME>',
      ssl: '<IS_SSL>',
      apiHost: 'https://www.alphavantage.co/',
    };
    
    module.exports = config;

    The user, password, host, port, database, ssl are related to the Postgres configuration. We’ll come back to edit this while we set up the Hasura engine part!

    Initializing The Postgres Connection Pool For Querying The Database

    A connection pool is a common term in computer science and you’ll often hear this term while dealing with databases.

    While querying data in databases, you’ll have to first establish a connection to the database. This connection takes in the database credentials and gives you a hook to query any of the tables in the database.

    Note: Establishing database connections is costly and also wastes significant resources. A connection pool caches the database connections and re-uses them on succeeding queries. If all the open connections are in use, then a new connection is established and is then added to the pool.

    Now that it is clear what the connection pool is and what is it used for, let’s start by creating an instance of the pg connection pool for this application:

    Add pool.js file at the root level and create a pool instance as:

    const { Pool } = require('pg');
    const config = require('./config');
    
    const pool = new Pool({
      user: config.user,
      password: config.password,
      host: config.host,
      port: config.port,
      database: config.database,
      ssl: config.ssl,
    });
    
    module.exports = pool;

    The above lines of code create an instance of Pool with the configuration options as set in the config file. We’re yet to complete the config file but there won’t be any changes related to the configuration options.

    We’ve now set the ground and are ready to start making some API calls to the Alpha Vantage endpoint.

    Let’s get onto the interesting bit!

    Fetching The Stocks Data

    In this section, we’ll be fetching the stock data from the Alpha Vantage endpoint. Here’s the index.js file:

    const fetch = require('isomorphic-fetch');
    const getConfig = require('./config');
    const { insertStocksData } = require('./queries');
    
    const symbols = [
      'NFLX',
      'MSFT',
      'AMZN',
      'W',
      'FB'
    ];
    
    (function getStocksData () {
    
      const apiConfig = getConfig('apiHostOptions');
      const { host, timeSeriesFunction, interval, key } = apiConfig;
    
      symbols.forEach((symbol) => {
        fetch(`${host}query/?function=${timeSeriesFunction}&symbol=${symbol}&interval=${interval}&apikey=${key}`)
        .then((res) => res.json())
        .then((data) => {
          const timeSeries = data['Time Series (5min)'];
          Object.keys(timeSeries).map((key) => {
            const dataPoint = timeSeries[key];
            const payload = [
              symbol,
              dataPoint['2. high'],
              dataPoint['3. low'],
              dataPoint['1. open'],
              dataPoint['4. close'],
              dataPoint['5. volume'],
              key,
            ];
            insertStocksData(payload);
          });
        });
      })
    })()

    For the purpose of this project, we’re going to query prices only for these stocks — NFLX (Netflix), MSFT (Microsoft), AMZN (Amazon), W (Wayfair), FB (Facebook).

    Refer to this file for the config options. The IIFE getStocksData function is not doing much! It loops through these symbols and queries the Alpha Vantage endpoint ${host}query/?function=${timeSeriesFunction}&symbol=${symbol}&interval=${interval}&apikey=${key} to get the metrics for these stocks.
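
    As a side note, the 5-minute scheduling mentioned earlier isn’t in this snippet. A minimal sketch of it (not the repository’s exact wiring) would declare the function instead of invoking it immediately, and re-run it on an interval:

    // Declare the function instead of wrapping it in an IIFE...
    function getStocksData () {
      /* ...same body as above... */
    }

    // ...run it once on startup, then again every 5 minutes.
    getStocksData();
    setInterval(getStocksData, 5 * 60 * 1000);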

    The insertStocksData function puts these data points in the Postgres database. Here’s the insertStocksData function:

    const pool = require('./pool');

    const insertStocksData = async (payload) => {
      const query = 'INSERT INTO stock_data (symbol, high, low, open, close, volume, time) VALUES ($1, $2, $3, $4, $5, $6, $7)';
      pool.query(query, payload, (err, result) => {
        if (err) {
          console.log('error while inserting stock data', err);
        }
      });
    };

    This is it! We have fetched data points of the stock from the Alpha Vantage API and have written a function to put these in the stock_data table of the Postgres database. There is just one missing piece to make all this work! We have to populate the correct values in the config file. We’ll get these values after setting up the Hasura engine. Let’s get to that right away!

    Please refer to the server directory for the complete code on fetching data points from the Alpha Vantage endpoint and populating them in the Hasura Postgres database.

    If this approach of setting up connections, configuration options, and inserting data using the raw query looks a bit difficult, please don’t worry about that! We’re going to learn how to do all this the easy way with a GraphQL mutation once the Hasura engine is set up!

    Setting Up The Hasura GraphQL Engine

    It is really simple to set up the Hasura engine and get up and running with the GraphQL schemas, queries, mutations, subscriptions, event triggers, and much more!

    Click on Try Hasura and enter the project name:

    Creating a Hasura Project
    Creating a Hasura Project. (Large preview)

    I’m using the Postgres database hosted on Heroku. Create a database on Heroku and link it to this project. You should then be all set to experience the power of the query-rich Hasura console.

    Please copy the Postgres DB URL that you’ll get after creating the project. We’ll have to put this in the config file.

    Click on Launch Console and you’ll be redirected to this view:

    Hasura Console
    Hasura Console. (Large preview)

    Let’s start building the table schema that we’d need for this project.

    Creating Tables Schema On The Postgres Database

    Please go to the Data tab and click on Add Table! Let’s start creating some of the tables:

    symbol table

    This table would be used for storing the information of the symbols. For now, I’ve kept two fields here — id and company. The field id is a primary key and company is of type varchar. Let’s add some of the symbols in this table:

    symbol table
    symbol table. (Large preview)
    stock_data table

    The stock_data table stores id, symbol, time and the metrics such as high, low, open, close, volume. The NodeJs script that we wrote earlier in this section will be used to populate this particular table.

    Here’s what the table looks like:

    stock_data table
    stock_data table. (Large preview)

    Neat! Let’s get to the other table in the database schema!

    user_subscription table

    The user_subscription table stores the subscription object against the user Id. This subscription object is used for sending web-push notifications to the users. We’ll learn later in the article how to generate this subscription object.

    There are two fields in this table — id is the primary key of type uuid and subscription field is of type jsonb.

    events table

    This is the important one and is used for storing the notification event options. When a user opts in for the price updates of a particular stock, we store that event information in this table. This table contains these columns:

    • id: is a primary key with the auto-increment property.
    • symbol: is a text field.
    • user_id: is of type uuid.
    • trigger_type: is used for storing the event trigger type — time/event.
    • trigger_value: is used for storing the trigger value. For example, if a user has opted in for a price-based event trigger (they want updates when the price of the stock reaches 1000), then the trigger_value would be 1000 and the trigger_type would be event.

    These are all the tables that we’d need for this project. We also have to set up relations among these tables to have a smooth data flow and connections. Let’s do that!

    Setting up relations among tables

    The events table is used for sending web-push notifications based on the event value. So, it makes sense to connect this table with the user_subscription table to be able to send push notifications on the subscriptions stored in this table.

    events.user_id  → user_subscription.id

    The stock_data table is related to the symbols table as:

    stock_data.symbol  → symbol.id

    We also have to construct the corresponding relationships from the symbol table’s side for these same references:

    stock_data.symbol  → symbol.id
    events.symbol  → symbol.id

    We’ve now created the required tables and also established the relations among them! Let’s switch to the GRAPHIQL tab on the console to see the magic!

    Hasura has already set up the GraphQL queries based on these tables:

    GraphQL Queries/Mutations on the Hasura console
    GraphQL Queries/Mutations on the Hasura console. (Large preview)

    It is really simple to query these tables, and you can also apply any of these filters/properties (distinct_on, limit, offset, order_by, where) to get the desired data.
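
    For instance, fetching the five most recent data points for a symbol straight from the GRAPHIQL tab looks something like this (the field names follow the tables we just created):

    query {
      stock_data(
        where: { symbol: { _eq: "AMZN" } },
        order_by: { time: desc },
        limit: 5
      ) {
        high
        low
        close
        time
      }
    }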

    This all looks good but we have still not connected our server-side code to the Hasura console. Let’s complete that bit!

    Connecting The NodeJs Script To The Postgres Database

    Please put the required options in the config.js file in the server directory as:

    const config = {
      databaseOptions: {
        user: '<DATABASE_USER>',
        password: '<DATABASE_PASSWORD>',
        host: '<DATABASE_HOST>',
        port: '<DATABASE_PORT>',
        database: '<DATABASE_NAME>',
        ssl: true,
      },
      apiHostOptions: {
        host: 'https://www.alphavantage.co/',
        key: '<API_KEY>',
        timeSeriesFunction: 'TIME_SERIES_INTRADAY',
        interval: '5min'
      },
      graphqlURL: '<GRAPHQL_URL>'
    };
    
    const getConfig = (key) => {
      return config[key];
    };
    
    module.exports = getConfig;

    Please fill in these options from the database connection string that was generated when we created the Postgres database on Heroku.

    The apiHostOptions consists of the API related options such as host, key, timeSeriesFunction and interval.

    You’ll get the graphqlURL field in the GRAPHIQL tab on the Hasura console.

    The getConfig function is used for returning the requested value from the config object. We’ve already used this in index.js in the server directory.

    It’s time to run the server and populate some data in the database. I’ve added one script in package.json as:

    "scripts": {
        "start": "nodemon index.js"
    }

    Run npm start on the terminal and the data points of the symbols array in index.js should be populated in the tables.

    Refactoring The Raw Query In The NodeJs Script To GraphQL Mutation

    Now that the Hasura engine is set up, let’s see how easy it can be to call a mutation on the stock_data table.

    The function insertStocksData in queries.js uses a raw query:

    const query = 'INSERT INTO stock_data (symbol, high, low, open, close, volume, time) VALUES ($1, $2, $3, $4, $5, $6, $7)';

    Let’s refactor this query and use mutation powered by the Hasura engine. Here’s the refactored queries.js in the server directory:

    
    const { createApolloFetch } = require('apollo-fetch');
    const getConfig = require('./config');
    
    const GRAPHQL_URL = getConfig('graphqlURL');
    const fetch = createApolloFetch({
      uri: GRAPHQL_URL,
    });
    
    const insertStocksData = async (payload) => {
      const insertStockMutation = await fetch({
        query: `mutation insertStockData($objects: [stock_data_insert_input!]!) {
          insert_stock_data (objects: $objects) {
            returning {
              id
            }
          }
        }`,
        variables: {
          objects: payload,
        },
      });
      console.log('insertStockMutation', insertStockMutation);
    };
    
    module.exports = {
      insertStocksData
    }

    Please note: we have to add the graphqlURL in the config.js file.

    The apollo-fetch module returns a fetch function that can be used to query/mutate the data on the GraphQL endpoint. Easy enough, right?

    The only change we have to make in index.js is to return the stocks object in the format required by the insertStocksData function. Please check out index2.js and queries2.js for the complete code with this approach.
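
    For illustration, each data point changes from the positional array used by the raw query to an object keyed by column names, roughly like this (index2.js has the exact shape):

    // Hypothetical object shape for one data point:
    const stockObject = {
      symbol,
      time: key,
      high: dataPoint['2. high'],
      low: dataPoint['3. low'],
      open: dataPoint['1. open'],
      close: dataPoint['4. close'],
      volume: dataPoint['5. volume'],
    };
    insertStocksData([stockObject]); // the mutation expects an array of objects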

    Now that we’ve accomplished the data-side of the project, let’s move onto the front-end bit and build some interesting components!

    Note: with this approach, we don’t have to keep the database configuration options in the config file!

    Front-end Using React And Apollo Client

    The front-end project is in the same repository and is created using the create-react-app package. The service worker generated using this package supports asset caching, but it doesn’t allow further customizations to be added to the service worker file. There are already some open issues to add support for custom service worker options. There are ways to get around this problem and add support for a custom service worker.

    Let’s start by looking at the structure for the front-end project:

    Project Directory
    Project Directory. (Large preview)

    Please check the src directory! Don’t worry about the service worker related files for now; we’ll learn more about these files later in this section. The rest of the project structure looks simple: the components folder has the components (Loader, Chart); the services folder contains some of the helper functions/services used for transforming objects into the required structure; styles, as the name suggests, contains the Sass files used for styling the project; views is the main directory, and it contains the view-layer components.

    We’d need just two view components for this project — The Symbol List and the Symbol Timeseries. We’ll build the time-series using the Chart component from the highcharts library. Let’s start adding code in these files to build up the pieces on the front-end!

    Installing Dependencies

    Here’s the list of dependencies that we’ll need:

    • apollo-boost
      Apollo boost is a zero-config way to start using Apollo Client. It comes bundled with the default configuration options.
    • reactstrap and bootstrap
      The components are built using these two packages.
    • graphql and graphql-type-json
      graphql is a required dependency for using apollo-boost and graphql-type-json is used for supporting the json datatype being used in the GraphQL schema.
    • highcharts and highcharts-react-official
      These two packages will be used for building the chart.

    • node-sass
      This is added for supporting sass files for styling.

    • uuid
      This package is used for generating strong random values.

    All of these dependencies will make sense once we start using them in the project. Let’s get onto the next bit!
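
    To save a few round trips, you can install all of them in one go:

    npm i apollo-boost reactstrap bootstrap graphql graphql-type-json highcharts highcharts-react-official node-sass uuid --save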

    Setting Up Apollo Client

    Create an apolloClient.js file inside the src folder:

    import ApolloClient from 'apollo-boost';
    
    const apolloClient = new ApolloClient({
      uri: '<HASURA_CONSOLE_URL>'
    });
    
    export default apolloClient;

    The above code instantiates ApolloClient and it takes in uri in the config options. The uri is the URL of your Hasura console. You’ll get this uri field on the GRAPHIQL tab in the GraphQL Endpoint section.

    The above code looks simple but it takes care of the main part of the project! It connects the GraphQL schema built on Hasura with the current project.

    We also have to pass this apollo client object to ApolloProvider and wrap the root component inside ApolloProvider. This will enable all the nested components inside the main component to use client prop and fire queries on this client object.

    Let’s modify the index.js file as:

    const Wrapper = () => {
    /* some service worker logic - ignore for now */
      const [insertSubscription] = useMutation(subscriptionMutation);
      useEffect(() => {
        serviceWorker.register(insertSubscription);
      }, [])
      /* ignore the above snippet */
      return <App />;
    }
    
    ReactDOM.render(
      <ApolloProvider client={apolloClient}>
        <Wrapper />
      </ApolloProvider>,
      document.getElementById('root')
    );

    Please ignore the insertSubscription related code. We’ll understand that in detail later. The rest of the code should be simple to get around. The render function takes in the root component and the elementId as parameters. Notice client (ApolloClient instance) is being passed as a prop to ApolloProvider. You can check the complete index.js file here.

    Setting Up The Custom Service Worker

    A service worker is a JavaScript file that has the capability to intercept network requests. It is used for querying the cache to check if the requested asset is already present there, instead of making a trip to the server. Service workers are also used for sending web-push notifications to the subscribed devices.

    We have to send web-push notifications with the stock price updates to the subscribed users. Let’s lay the groundwork and build this service worker file!

    The insertSubscription-related snippet in the index.js file does the work of registering the service worker and putting the subscription object in the database using subscriptionMutation.

    Please refer to queries.js for all the queries and mutations being used in the project.
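
    For a rough idea of its shape, the subscriptionMutation used below might look like this sketch, which inserts the generated userId and the subscription object into the user_subscription table described earlier (the exact definition lives in queries.js):

    import { gql } from 'apollo-boost';

    // Sketch only: the field names follow the user_subscription table
    // (id of type uuid, subscription of type jsonb).
    export const subscriptionMutation = gql`
      mutation insertUserSubscription($userId: uuid, $subscription: jsonb) {
        insert_user_subscription(objects: [{ id: $userId, subscription: $subscription }]) {
          affected_rows
        }
      }
    `;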

    serviceWorker.register(insertSubscription); invokes the register function written in the serviceWorker.js file. Here it is:

    export const register = (insertSubscription) => {
      if ('serviceWorker' in navigator) {
        const swUrl = `${process.env.PUBLIC_URL}/serviceWorker.js`
        navigator.serviceWorker.register(swUrl)
          .then(() => {
            console.log('Service Worker registered');
            return navigator.serviceWorker.ready;
          })
          .then((serviceWorkerRegistration) => {
            getSubscription(serviceWorkerRegistration, insertSubscription);
            Notification.requestPermission();
          })
      }
    }

    The above function first checks if serviceWorker is supported by the browser and then registers the service worker file hosted on the URL swUrl. We’ll check this file in a moment!

    The getSubscription function does the work of getting the subscription object using the subscribe method on the pushManager object. This subscription object is then stored in the user_subscription table against a userId. Please note that the userId is being generated using the uuid function. Let’s check out the getSubscription function:

    const getSubscription = (serviceWorkerRegistration, insertSubscription) => {
      serviceWorkerRegistration.pushManager.getSubscription()
        .then ((subscription) => {
          const userId = uuidv4();
          if (!subscription) {
            const applicationServerKey = urlB64ToUint8Array('<APPLICATION_SERVER_KEY>')
            serviceWorkerRegistration.pushManager.subscribe({
              userVisibleOnly: true,
              applicationServerKey
            }).then (subscription => {
              insertSubscription({
                variables: {
                  userId,
                  subscription
                }
              });
              localStorage.setItem('serviceWorkerRegistration', JSON.stringify({
                userId,
                subscription
              }));
            })
          }
        })
    }
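
    The urlB64ToUint8Array helper used above isn’t shown in the snippet; it’s the standard routine for converting a URL-safe base64 VAPID public key into the Uint8Array that pushManager.subscribe expects. A common implementation looks like this:

    const urlB64ToUint8Array = (base64String) => {
      // Pad the string, convert URL-safe base64 to standard base64,
      // then decode the result into raw bytes.
      const padding = '='.repeat((4 - (base64String.length % 4)) % 4);
      const base64 = (base64String + padding).replace(/-/g, '+').replace(/_/g, '/');
      const rawData = window.atob(base64);
      return Uint8Array.from([...rawData].map((char) => char.charCodeAt(0)));
    };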

    You can check serviceWorker.js file for the complete code!

    Notification Popup
    Notification Popup. (Large preview)

    Notification.requestPermission() invokes this popup, which asks the user for permission to send notifications. Once the user clicks on Allow, a subscription object is generated by the push service. We’re storing that object in the localStorage as:

    Webpush Subscriptions object
    Webpush Subscriptions object. (Large preview)

    The field endpoint in the above object is used for identifying the device and the server uses this endpoint to send web push notifications to the user.

    We have done the work of initializing and registering the service worker. We also have the subscription object of the user! This all works because of the serviceWorker.js file present in the public folder. Let’s now set up the service worker to get things ready!

    This is a bit of a difficult topic, but let’s get it right! As mentioned earlier, the create-react-app utility doesn’t support customizations by default for the service worker. We can achieve a custom service worker implementation using the workbox-build module.

    We also have to make sure that the default behavior of pre-caching files is intact. We’ll modify the part where the service worker gets built in the project. And workbox-build helps in achieving exactly that! Neat stuff! Let’s keep it simple and list all that we have to do to make the custom service worker work:

    • Handle the pre-caching of assets using workboxBuild.
    • Create a service worker template for caching assets.
    • Create sw-precache-config.js file to provide custom configuration options.
    • Add the build service worker script in the build step in package.json.

    Don’t worry if all this sounds confusing! The article doesn’t focus on explaining the semantics behind each of these points; we have to focus on the implementation part for now. I’ll try to cover the reasoning behind all the work needed to make a custom service worker in another article.

    Let’s create two files sw-build.js and sw-custom.js in the src directory. Please refer to the links to these files and add the code to your project.
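
    To give you a feel for it, the core of sw-build.js based on workbox-build could look roughly like this (the file paths here are assumptions; the linked files are the authoritative versions):

    const workboxBuild = require('workbox-build');

    // Inject the list of files to pre-cache into our custom service worker
    // template, keeping the default pre-caching behavior intact.
    const buildSW = () => {
      return workboxBuild.injectManifest({
        swSrc: 'src/sw-custom.js',        // the custom service worker template
        swDest: 'build/serviceWorker.js', // the output served to the browser
        globDirectory: 'build',
        globPatterns: ['**/*.{js,css,html,png}'],
      }).then(({ count, size }) => {
        console.log(`Pre-cached ${count} files, ${size} bytes in total.`);
      });
    };

    buildSW();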

    Let’s now create sw-precache-config.js file at the root level and add the following code in that file:

    module.exports = {
      staticFileGlobs: [
        'build/static/css/**.css',
        'build/static/js/**.js',
        'build/index.html'
      ],
      swFilePath: './build/serviceWorker.js',
      stripPrefix: 'build/',
      handleFetch: false,
      runtimeCaching: [{
        urlPattern: /this\.is\.a\.regex/,
        handler: 'networkFirst'
      }]
    }

    Let’s also modify the package.json file to make room for building the custom service worker file:

    Add these statements in the scripts section:

    "build-sw": "node ./src/sw-build.js",
    "clean-cra-sw": "rm -f build/precache-manifest.*.js && rm -f build/service-worker.js",

    And modify the build script as:

    "build": "react-scripts build && npm run build-sw && npm run clean-cra-sw",

    The setup is finally done! We now have to add a custom service worker file inside the public folder:

    function showNotification (event) {
      const eventData = event.data.json();
      const { title, body } = eventData
      self.registration.showNotification(title, { body });
    }
    
    self.addEventListener('push', (event) => {
      event.waitUntil(showNotification(event));
    })

    We’ve just added one push listener to listen to push-notifications being sent by the server. The function showNotification is used for displaying web push notifications to the user.

    This is it! We’re done with all the hard work of setting up a custom service worker to handle web push notifications. We’ll see these notifications in action once we build the user interfaces!

    We’re getting closer to building the main code pieces. Let’s now start with the first view!

    Symbol List View

    The App component being used in the previous section looks like this:

    import React from 'react';
    import SymbolList from './views/symbolList';
    
    const App = () => {
      return <SymbolList />;
    };
    
    export default App;

    It is a simple component that returns the SymbolList view, and SymbolList does all the heavy lifting of displaying the symbols in a neatly tied user interface.

    Let’s look at symbolList.js inside the views folder:

    Please refer to the file here!

    The component returns the results of the renderSymbols function. And, this data is being fetched from the database using the useQuery hook as:

    const { loading, error, data } = useQuery(symbolsQuery, {variables: { userId }});

    The symbolsQuery is defined as:

    export const symbolsQuery = gql`
      query getSymbols($userId: uuid) {
        symbol {
          id
          company
          symbol_events(where: {user_id: {_eq: $userId}}) {
            id
            symbol
            trigger_type
            trigger_value
            user_id
          }
          stock_symbol_aggregate {
            aggregate {
              max {
                high
                volume
              }
              min {
                low
                volume
              }
            }
          }
        }
      }
    `;

    It takes in userId and fetches the subscribed events of that particular user to display the correct state of the notification icon (the bell icon displayed along with the title). The query also fetches the max and min values of the stock. Notice the use of aggregate in the above query. Hasura’s aggregation queries do the work behind the scenes to fetch aggregate values like count, sum, avg, max, min, etc.

    Based on the response from the above GraphQL call, here’s the list of cards that are displayed on the front-end:

    Stock Cards
    Stock Cards. (Large preview)

    The card HTML structure looks something like this:

    <div key={id}>
      <div className="card-container">
        <Card>
          <CardBody>
            <CardTitle className="card-title">
              <span className="company-name">{company}  </span>
                <Badge color="dark" pill>{id}</Badge>
                <div className={classNames({'bell': true, 'disabled': isSubscribed})} id={`subscribePopover-${id}`}>
                  <FontAwesomeIcon icon={faBell} title="Subscribe" />
                </div>
            </CardTitle>
            <div className="metrics">
              <div className="metrics-row">
                <span className="metrics-row--label">High:</span> 
                <span className="metrics-row--value">{max.high}</span>
                <span className="metrics-row--label">{' '}(Volume: </span> 
                <span className="metrics-row--value">{max.volume}</span>)
              </div>
              <div className="metrics-row">
                <span className="metrics-row--label">Low: </span>
                <span className="metrics-row--value">{min.low}</span>
                <span className="metrics-row--label">{' '}(Volume: </span>
                <span className="metrics-row--value">{min.volume}</span>)
              </div>
            </div>
            <Button className="timeseries-btn" outline onClick={() => toggleTimeseries(id)}>Timeseries</Button>{' '}
          </CardBody>
        </Card>
        <Popover
          className="popover-custom" 
          placement="bottom" 
          target={`subscribePopover-${id}`}
          isOpen={isSubscribePopoverOpen === id}
          toggle={() => setSubscribeValues(id, symbolTriggerData)}
        >
          <PopoverHeader>
            Notification Options
            <span className="popover-close">
              <FontAwesomeIcon 
                icon={faTimes} 
                onClick={() => handlePopoverToggle(null)}
              />
            </span>
          </PopoverHeader>
          {renderSubscribeOptions(id, isSubscribed, symbolTriggerData)}
        </Popover>
      </div>
      <Collapse isOpen={expandedStockId === id}>
        {
          isOpen(id) ? <StockTimeseries symbol={id}/> : null
        }
      </Collapse>
    </div>

    We’re using the Card component of ReactStrap to render these cards. The Popover component is used for displaying the subscription-based options:

    Notification Options
    Notification Options. (Large preview)

    When the user clicks on the bell icon for a particular stock, they can opt in to get notified every hour or when the price of the stock has reached the entered value. We’ll see this in action in the Events/Time Triggers section.

    Note: We’ll get to the StockTimeseries component in the next section!

    Please refer to symbolList.js for the complete code related to the stocks list component.

    Stock Timeseries View

    The StockTimeseries component uses the query stocksDataQuery:

    export const stocksDataQuery = gql`
      query getStocksData($symbol: String) {
        stock_data(order_by: {time: desc}, where: {symbol: {_eq: $symbol}}, limit: 25) {
          high
          low
          open
          close
          volume
          time
        }
      }
    `;

    The above query fetches the 25 most recent data points of the selected stock. For example, here is the chart for the Facebook stock open metric:

    Stock Prices timeline
    Stock Prices timeline. (Large preview)

    This is a straightforward component where we pass some chart options to the HighchartsReact component. Here are the chart options:

    const chartOptions = {
      title: {
        text: `${symbol} Timeseries`
      },
      subtitle: {
        text: 'Intraday (5min) open, high, low, close prices & volume'
      },
      yAxis: {
        title: {
          text: '#'
        }
      },
      xAxis: {
        title: {
          text: 'Time'
        },
        categories: getDataPoints('time')
      },
      legend: {
        layout: 'vertical',
        align: 'right',
        verticalAlign: 'middle'
      },
      series: [
        {
          name: 'high',
          data: getDataPoints('high')
        }, {
          name: 'low',
          data: getDataPoints('low')
        }, {
          name: 'open',
          data: getDataPoints('open')
        },
        {
          name: 'close',
          data: getDataPoints('close')
        },
        {
          name: 'volume',
          data: getDataPoints('volume')
        }
      ]
    }

    The X-Axis shows the time and the Y-Axis shows the metric value at that time. The function getDataPoints is used for generating a series of points for each of the series.

    const getDataPoints = (type) => {
      const values = [];
      data.stock_data.map((dataPoint) => {
        let value = dataPoint[type];
        if (type === 'time') {
          value = new Date(dataPoint['time']).toLocaleString('en-US');
        }
        values.push(value);
      });
      return values;
    }

    Simple! That’s how the Chart component is generated! Please refer to Chart.js and stockTimeseries.js files for the complete code on stock time-series.

    You should now be ready with the data and the user interfaces part of the project. Let’s now move onto the interesting part — setting up event/time triggers based on the user’s input.

    Setting Up Event/Scheduled Triggers

    In this section, we’ll learn how to set up triggers on the Hasura console and how to send web push notifications to the selected users. Let’s get started!

    Events Triggers On Hasura Console

    Let’s create an event trigger stock-value-trigger on the table stock_data, with insert as the trigger operation. The webhook will run every time there is an insert in the stock_data table.

    Event triggers setup
    Event triggers setup. (Large preview)

    We’re going to create a Glitch project for the webhook URL. Here’s a quick definition of webhooks to make them easy to understand:

    Webhooks are used for sending data from one application to another on the occurrence of a particular event. When an event is triggered, an HTTP POST call is made to the webhook URL with the event data as the payload.

    In this case, when there is an insert operation on the stock_data table, an HTTP post call will be made to the configured webhook URL (post call in the glitch project).
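
    The body of that POST call is roughly of this shape; the webhook below relies on the trigger.name and event.data.new fields (trimmed, with placeholder values):

    {
      "trigger": { "name": "stock-value-trigger" },
      "table": { "schema": "public", "name": "stock_data" },
      "event": {
        "op": "INSERT",
        "data": {
          "old": null,
          "new": {
            "symbol": "AMZN",
            "high": "...",
            "low": "...",
            "open": "...",
            "close": "2000",
            "volume": "...",
            "time": "..."
          }
        }
      }
    }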

    Glitch Project For Sending Web-push Notifications

    We have to get the webhook URL to put in the above event trigger interface. Go to glitch.com and create a new project. In this project, we’ll set up an express server with an HTTP POST listener. The HTTP POST payload will have all the details of the stock data point, including open, close, high, low, volume and time. We’ll have to fetch the list of users subscribed to this stock with a trigger value matching the close metric.

    These users will then be notified of the stock price via web-push notifications.

    That’s all we have to do to achieve the desired target of notifying users when the stock price reaches the expected value!

    Let’s break this down into smaller steps and implement them!

    Installing Dependencies

    We would need the following dependencies:

    • express: is used for creating an express server.
    • apollo-fetch: is used for creating a fetch function for getting data from the GraphQL endpoint.
    • web-push: is used for sending web push notifications.

    Please write this script in package.json to run index.js on npm start command:

    "scripts": {
      "start": "node index.js"
    }

    Setting Up Express Server

    Let’s create an index.js file as:

    const express = require('express');
    const bodyParser = require('body-parser');
    
    const app = express();
    app.use(bodyParser.json());
    
    const handleStockValueTrigger = (eventData, res) => {
      /* Code for handling this trigger */
    }
    
    app.post('/', (req, res) => {
      const { body } = req
      const eventType = body.trigger.name
      const eventData = body.event
      
      switch (eventType) {
        case 'stock-value-trigger':
          return handleStockValueTrigger(eventData, res);
      }
      
    });
    
    app.get('/', function (req, res) {
      res.send('Hello World - For Event Triggers, try a POST request?');
    });
    
    var server = app.listen(process.env.PORT, function () {
        console.log(`server listening on port ${process.env.PORT}`);
    });
    

    In the above code, we’ve created POST and GET listeners on the route /. The GET handler is a simple sanity check; we’re mainly interested in the POST call. If the eventType is stock-value-trigger, we’ll have to handle this trigger by notifying the subscribed users. Let’s add that bit and complete this function!

    Fetching Subscribed Users

    const fetch = createApolloFetch({
      uri: process.env.GRAPHQL_URL
    });
    
    const getSubscribedUsers = (symbol, triggerValue) => {
      return fetch({
        query: `query getSubscribedUsers($symbol: String, $triggerValue: numeric) {
          events(where: {symbol: {_eq: $symbol}, trigger_type: {_eq: "event"}, trigger_value: {_gte: $triggerValue}}) {
            user_id
            user_subscription {
              subscription
            }
          }
        }`,
        variables: {
          symbol,
          triggerValue
        }
      }).then(response => response.data.events)
    }
    
    
    const handleStockValueTrigger = async (eventData, res) => {
      const symbol = eventData.data.new.symbol;
      const triggerValue = eventData.data.new.close;
      const subscribedUsers = await getSubscribedUsers(symbol, triggerValue);
      const webpushPayload = {
        title: `${symbol} - Stock Update`,
        body: `The price of this stock is ${triggerValue}`
      }
      subscribedUsers.map((data) => {
        sendWebpush(data.user_subscription.subscription, JSON.stringify(webpushPayload));
      })
      res.json(eventData);
    }
    

    In the above handleStockValueTrigger function, we’re first fetching the subscribed users using the getSubscribedUsers function. We’re then sending web-push notifications to each of these users. The function sendWebpush is used for sending the notification. We’ll look at the web-push implementation in a moment.

    The function getSubscribedUsers uses the query:

    query getSubscribedUsers($symbol: String, $triggerValue: numeric) {
      events(where: {symbol: {_eq: $symbol}, trigger_type: {_eq: "event"}, trigger_value: {_gte: $triggerValue}}) {
        user_id
        user_subscription {
          subscription
        }
      }
    }

    This query takes in the stock symbol and the value, and fetches the details (user_id and user_subscription) of every user that matches these conditions:

    • symbol equal to the one being passed in the payload.
    • trigger_type is equal to event.
    • trigger_value is greater than or equal to the one being passed to this function (close in this case).

    Once we get the list of users, the only thing that remains is sending web-push notifications to them! Let’s do that right away!

    Sending Web-Push Notifications To The Subscribed Users

    We first have to get the public and the private VAPID keys to send web-push notifications. Please store these keys in the .env file and set these details in index.js as:

    const webPush = require('web-push');

    webPush.setVapidDetails(
      'mailto:<YOUR_MAIL_ID>',
      process.env.PUBLIC_VAPID_KEY,
      process.env.PRIVATE_VAPID_KEY
    );

    const sendWebpush = (subscription, webpushPayload) => {
      webPush.sendNotification(subscription, webpushPayload).catch(err => console.log('error while sending webpush', err))
    }
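
    If you don’t have a VAPID key pair yet, the web-push package ships with a small CLI that can generate one for you:

    npx web-push generate-vapid-keys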

    The sendNotification function is used for sending the web-push notification to the subscription endpoint provided as the first parameter.

    That’s all that is required to successfully send web-push notifications to the subscribed users. Here’s the complete code defined in index.js:

    const express = require('express');
    const bodyParser = require('body-parser');
    const { createApolloFetch } = require('apollo-fetch');
    const webPush = require('web-push');
    
    webPush.setVapidDetails(
      'mailto:<YOUR_MAIL_ID>',
      process.env.PUBLIC_VAPID_KEY,
      process.env.PRIVATE_VAPID_KEY
    );
    
    const app = express();
    app.use(bodyParser.json());
    
    const fetch = createApolloFetch({
      uri: process.env.GRAPHQL_URL
    });
    
    const getSubscribedUsers = (symbol, triggerValue) => {
      return fetch({
        query: `query getSubscribedUsers($symbol: String, $triggerValue: numeric) {
          events(where: {symbol: {_eq: $symbol}, trigger_type: {_eq: "event"}, trigger_value: {_gte: $triggerValue}}) {
            user_id
            user_subscription {
              subscription
            }
          }
        }`,
        variables: {
          symbol,
          triggerValue
        }
      }).then(response => response.data.events)
    }
    
    const sendWebpush = (subscription, webpushPayload) => {
      webPush.sendNotification(subscription, webpushPayload).catch(err => console.log('error while sending webpush', err))
    }
    
    const handleStockValueTrigger = async (eventData, res) => {
      const symbol = eventData.data.new.symbol;
      const triggerValue = eventData.data.new.close;
      const subscribedUsers = await getSubscribedUsers(symbol, triggerValue);
      const webpushPayload = {
        title: `${symbol} - Stock Update`,
        body: `The price of this stock is ${triggerValue}`
      }
      subscribedUsers.map((data) => {
        sendWebpush(data.user_subscription.subscription, JSON.stringify(webpushPayload));
      })
      res.json(eventData);
    }
    
    app.post('/', (req, res) => {
      const { body } = req
      const eventType = body.trigger.name
      const eventData = body.event
      
      switch (eventType) {
        case 'stock-value-trigger':
          return handleStockValueTrigger(eventData, res);
      }
      
    });
    
    app.get('/', function (req, res) {
      res.send('Hello World - For Event Triggers, try a POST request?');
    });
    
    var server = app.listen(process.env.PORT, function () {
        console.log("server listening");
    });

    Let’s test this flow by subscribing to a stock at some value and manually inserting that value in the table (for testing)!

    I subscribed to AMZN with a value of 2000 and then inserted a data point in the table with this value. Here’s how the stocks notifier app notified me right after the insertion:

    Inserting a row in stock_data table for testing
    Inserting a row in stock_data table for testing. (Large preview)

    Neat! You can also check the event invocation log here:

    Event Log
    Event Log. (Large preview)

    The webhook is doing the work as expected! We’re all set for the event triggers now!

    Scheduled/Cron Triggers

    We can achieve a time-based trigger for notifying the subscribed users every hour using a cron trigger:

    Cron/Scheduled Trigger setup
    Cron/Scheduled Trigger setup. (Large preview)

    We can use the same webhook URL and branch on the trigger name, stock_price_time_based_trigger in this case. The implementation is similar to the event-based trigger, as the sketch below shows.
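
    Assuming the cron payload carries the trigger name the same way as the event payload, that just means one more case in the existing switch; handleTimeBasedTrigger here is a hypothetical handler that would fetch every user whose trigger_type is time and notify them as above:

    switch (eventType) {
      case 'stock-value-trigger':
        return handleStockValueTrigger(eventData, res);
      case 'stock_price_time_based_trigger':
        // Hypothetical handler: query the events table for trigger_type "time"
        // and send a web-push notification to each stored subscription.
        return handleTimeBasedTrigger(eventData, res);
    }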

    Conclusion

    In this article, we built a stock price notifier application. We learned how to fetch prices using the Alpha Vantage APIs and store the data points in the Hasura-backed Postgres database. We also learned how to set up the Hasura GraphQL engine and create event-based and scheduled triggers. We built a Glitch project for sending web-push notifications to the subscribed users.

    Smashing Editorial
    (ra, yk, il)


    web design

    Designing An Attractive And Usable Data Importer For Your App — Smashing Magazine

    12/02/2020

    About The Author

    Suzanne Scacca is a former WordPress implementer, trainer and agency manager who now works as a freelance copywriter. She specializes in crafting marketing, web …
    More about
    Suzanne
    Scacca

    Even though the development of a data importer is a complex matter, you don’t want your users’ experience with it to be just as complex or complicated. The second they experience any friction or fault in data onboarding, the chances of them bailing from the software will skyrocket. So, in this post, we’re going to focus on how best to present your data importer to users.

    If you’ve ever tried to import data into an app before, you know, as a user, how varied the experience can be. In some cases, the user is overwhelmed with instructions on how to use the importer. In others, there’s no direction at all. And while that might look nicer than an importer overrun with directions and links to documentation on how to use it, a completely useless UI will also cause users frustration once the inevitable errors start getting thrown.

    So, when you’re designing an app or software that needs a data importer, how do you ensure this doesn’t happen to your end users? Do you try to custom build or find a Goldilocks solution that strikes the right balance between minimal and informative? And what should that even look like?

    Today, I want to look at four ways to ensure that the user interface design of your data importer doesn’t get in the way of a positive user experience.

    Quick note before I start: I’ll be using live data importer examples to demonstrate how to design this on your own. However, if you’d rather just use a ready-made data importer, but don’t have time to review the existing options against these good design practices, Flatfile Concierge is what you’re looking for. I’ll show some examples of it as we go along and tell you a bit more about it at the end of this post.

    UI Design Tips For Your Software’s Data Importer

    There are many challenges in data onboarding for apps and software. But if you can get the UI right — in other words, provide your end users with an attractive and usable importer — you can effectively minimize those challenges.

    Here’s what your data importer should look like if you want to make that a reality for your users:

    1. Format The Instructions For Readability

    It doesn’t matter how straightforward the data import process is. You can never assume that your end users will automatically know how to format their file(s), which file types are allowed and what sort of file size limitations there may be.

    So, the main importer page must have instructions for them. Just be careful about going overboard.

    If you leave them with a wall of text explaining what the importer is for, they’ll get annoyed with the redundant information holding them up from getting started. And if you spell out each possible step in minute detail, their eyes are going to glaze over. Worst-case scenario, they’ll start the experience feeling as though they’re being talked down to. None of these outcomes is ideal.

    To find the sweet spot, aim for the following:

    Simplify the instructions into 100 words or less.

    PayPal’s invoice importer is a good example of this:

    The PayPal bulk invoice importer provides a single paragraph with instructions on how to use the importer
    PayPal allows business users to bulk-import and send invoices. (Image source: PayPal) (Large preview)

    There’s a single paragraph on this page that tells users that files need to:

    • Be in CSV format;
    • Include fields for the email address, item name, and invoice amount;
    • Include no more than 1000 invoices.

    For anyone that misses the bit about the file format, they’ll get a reminder of it in the upload field.

    The rest of the information (the link to the file template and FAQs on how to batch invoice) is linked out to other pages, which keeps this importer page nice and short.

    When possible, I’d recommend formatting the instructions using paragraphs, bullet points, bolded headers or white space. This would be similar to how you’d structure text for readability on a web or app page.

    QuickBooks Self-Employed shows us how this might work:

    QuickBooks Self-Employed lets users import cash transactions into the software with a 3-step import process
    QuickBooks Self-Employed gives users the ability to import business revenue and expense records into the software. (Image source: QuickBooks Self-Employed) (Large preview)

    There are three steps presented and each is kept short and to the point. By adding extra space between and around them, reading the export/import instructions will seem less daunting.

    One last thing you can do is to make the “Import” button stand out so that users that use the importer more than once can quickly skip past the instructions on subsequent uses.

    Here’s how this might look if you use Flatfile as your data importer:

    An example of a data importer instructions page from Flatfile with a bright purple ‘Upload data from file’ button
    An example of a data importer instructions page from Flatfile. (Image source: Flatfile) (Large preview)

    The button stands out clear as day on this page. And for those who have used this importer before, they won’t need to read through the instructions on the right for a reminder of what kinds of file types are allowed. There’s a note right beneath the button that clarifies this.

    What’s more, the button is in the top-left corner, which is where most users’ eyes initially focus on a new page. So, the strong color of the button coupled with the priority placement will help users quickly get the import process started.

    2. Show Them All The Import Options That Are Available

    Consumers often expect companies to provide them with options. This is something we’ve seen a lot lately in e-commerce, with shoppers wanting various purchase options available (e.g. in-store pickup, curbside pickup, two-day delivery, and so on).

    If it makes sense to do so for your app, consider giving your users the same kind of flexibility and control over how they import their data. And when you do, design each option so that it’s clear — just by looking at it — what action comes next.

    For instance, this is the expense and income importer for AND.CO:

    AND.CO expenses and income importer box: upload CSV file by clicking and selecting file or dragging and dropping it into the page
    AND.CO invites users to import their expenses & income by uploading their files or dragging and dropping them into the interface. (Image source: AND.CO) (Large preview)

    The block with the dashed border tells users that they have at least one option: Drag-and-drop their CSV file into the widget to upload. While an importer design like this doesn’t always allow for click-to-upload, this one does (per the instructions).

    Flatfile uses a similar design at the top of the import page:

    Flatfile upload widget allows for drag-and-drop or click-to-upload for data import
    Flatfile enables users to import their files through drag-and-drop or click-to-upload. (Image source: Flatfile) (Large preview)

    The difference between these two examples is that Flatfile includes an upload button inside the dashed-border box so that it’s clear that both import options are available.

    There’s also a third option beneath this block:

    Flatfile data importer includes spreadsheet tool to manually enter data
    Flatfile enables users to import their data manually into this spreadsheet. (Image source: Flatfile) (Large preview)

    It’s a good idea to include a manual import option if your end users will return to the importer to add small handfuls of data and don’t want to prepare a file every time.

    One last way to present import options is through the use of third-party software logos as Asana does:

    Asana data import options: select a CSV file or import from other tools like Trello, Wrike and Airtable
    Asana allows users to upload project data with a CSV file or import it from other software. (Image source: Asana) (Large preview)

    The standard CSV file import option is available at the top of the page. Beneath that, though, are apps that their users are most likely to have stored their project data in.

    As you can see, the visual presentation of the import options is just as important as the instructions provided. So, rather than try to get creative here, just use a tried-and-true design that your end users will be familiar with and that will help them instantly identify the import option they prefer.

    3. Make Complex Imports Look Easy

    At this stage of the data import process, things can get a little hairy. Even if you have a flawless import process on the backend, the way it’s presented to your end users can be a problem if the complexities of the process start to show through.

    There are two things you can do with the UI to keep that from happening. This point will cover what you can do if the import process itself is complex.

    HubSpot is a robust piece of marketing and sales software, so it’s no surprise that its data import process takes a while. Regardless, it starts simply enough, asking users whether they’re going to import their data or pull it in from another platform:

    HubSpot data import page allows users to start the import or do a two-way sync with other software
    HubSpot users are invited to import or sync their company data. (Image source: HubSpot) (Large preview)

    Now, this design goes against what I said earlier about designing the first page. However, there’s a reason why it was a good choice here.

    Let’s say this HubSpot user decides to import their data from a CSV file. They’d select “Import” and then go to this page:

    HubSpot data importer asks users ‘What would you like to import?’: a file from computer or an opt-out list
    HubSpot asks users what kind of data they want to import. (Image source: HubSpot) (Large preview)

    If HubSpot used the typical import page design, this page would require users to pause and then get acquainted with the new interface before moving on.

    So, this is something to consider if you have a complex data onboarding process that needs to be broken up into multiple steps before the actual import begins.

    Assuming the user just wants to import a CSV, XLS or XLSX, they’ll find themselves here next:

    HubSpot data importer asks ‘How many files are you importing?’: one file or multiple files with associations
    HubSpot asks users how many files they need to import. (Image source: HubSpot) (Large preview)

    What’s nice about this approach is that it prevents users from having to go through the importer once for every file they have to upload. If there’s related data, they can select ‘Multiple files with associations’ and the importer will help them make those connections:

    HubSpot data importer asks users to ‘Select the two objects you’d like to import and associate’, like Companies and Contacts
    HubSpot asks users to select two objects to import and associate with one another. (Image source: HubSpot) (Large preview)

    This way, it’s not the users’ responsibility to merge the data in their files. Nor do they have to spend hours going through their imported records to merge related ones. This importer helps them do it.

    The next screen is similar to the “How many files are you importing?” screen. This one appears, however, when the user selects “One file”:

    HubSpot data importer asks users ‘How many objects are you importing?’: one object or multiple objects
    HubSpot asks users how many objects they’re going to import into the software. (Image source: HubSpot) (Large preview)

    This again is aimed at keeping users from importing data and then spending excessive amounts of time cleaning it up.

    Next, we have the part of the process where the user finally sees the importer. While it’s not exactly like the designs we looked at before, it’s still intuitive enough that users will know how to upload their files into it:

    The HubSpot data importer page is specific to what the end user is uploading. This example is for a Contacts file
    HubSpot invites users to upload their contacts into the data importer. (Image source: HubSpot) (Large preview)

    While I realize this is a lot of steps to get to a page that other software would show first, think about how much quicker these users are able to get inside HubSpot and start working.

    If you have a complex upload process (i.e. multiple files, object associations, etc.), consider using a similar design with each question on its own page as well as consistently presented options.

    4. Use Color To Make Data Cleanup Speedy

    The other way to simplify an otherwise complex import process is applicable to all data importers. In particular, this tip pertains to the final steps in the data onboarding process:

    • Data validation
    • Data sanitization

    Now, having a data importer that can actually do some of this work is going to be a huge help. However, it’s ultimately up to your end users to review what they’ve imported and to approve it before they allow it inside the software.

    To help them not be so overwhelmed by all the data and everything they need to address, use color to guide them through it.

    For this example, we’re going to look at ClickUp. And if it looks familiar to you, that’s because it should. It was built using Flatfile’s data importer.

    Let’s start with the first part of the data validation process:

    ClickUp data importer asks users ‘Does this row contain column names?’ for better data processing
    The ClickUp data importer asks end users to confirm if column names are in the top row. (Image source: ClickUp) (Large preview)

    This page is straightforward enough. It shows the user a snippet from their imported data and asks them if the row pointed to contains column names.

    But look at the green “Yes” button. While this is a design tactic we use for web and app interfaces (i.e. make the desired call-to-action a positive and eye-catching color), there’s another reason this is here.

    Assuming the column names are there and ClickUp can easily interpret the data, this is what the user sees next:

    ClickUp data importer helps users validate data with auto-matched columns and green ‘Confirm mapping’ buttons
    The ClickUp data importer uses the color green to style the ‘Confirm mapping’ buttons. (Image source: ClickUp) (Large preview)

    This is the data importer’s attempt at making light work of data validation. On the left are all the identified columns from the file.

    On the right is information about how the columns were matched to ClickUp’s fields. There are also three possible data validation options:

    1. Confirm mapping (in green);
    2. Ignore this column (in a grey ghost button);
    3. Include as a custom field (in another ghost button).

    The green button here matches what we saw on the last screen. So, users have already been conditioned to view this green button as an affirmative, which will help them quickly go through all the results and confirm the fields that were correctly matched.

    Green and grey aren’t the only colors that should appear in your data importer.

    If errors should arise (which isn’t a bad thing), your users should have a chance to fix them before the data gets uploaded. Depending on where in the app the errors appear, you might want to design them differently.

    For instance, ClickUp uses an orange warning symbol to call out issues with values during validation:

    ClickUp data importer orange exclamation point warning symbols for values not present in the software
    The ClickUp data importer assigns orange warning symbols for values that don’t exist in the software. (Image source: ClickUp) (Large preview)

    This allows ClickUp to tell users, “Yes, the column names match, but your values don’t line up with what we use.”

    ClickUp then uses a red highlighter during data sanitization to point out errors with fields:

    ClickUp data importer highlights required rows with missing or incorrect data in red
    The ClickUp data importer highlights required rows with missing or incorrect data in red. (Image source: ClickUp) (Large preview)

    This is the final step before upload, so this is ClickUp’s last attempt at getting its users to perfect their data import. In this case, ClickUp highlights a field in red if it’s marked as required but contains no data.

    The color alone should call attention to the fields. However, what if the user had imported a file with hundreds or thousands of rows and doesn’t see the red at first glance? Giving them a way to zero in on these red lines would be super valuable.

    And ClickUp’s “Only show rows with problems” toggle does this:

    ClickUp data importer toggle ‘Only show rows with problems’ reveals only required fields containing errors
    The ClickUp data importer lets users only show rows that have problems. (Image source: ClickUp) (Large preview)

    Let’s face it: Unless your data importer tells your users when and where there’s a problem with their data, they’re probably not going to give it a second glance. That is, not until they’re in the software and wondering why their records are all messed up.

    Of course, they’ll blame it on the importer and the software, not on their own negligence. So, providing these colorful markers throughout the process will be a huge help.

    Wrapping Up

    As I mentioned before, if you’re not confident that you can pull off the tricky balancing act between building a friction- and error-free data importer while designing it to be attractive, intuitive and helpful, then why bother?

    As we’ve already seen, Flatfile Concierge is a ready-made data importer solution that’s not only built to handle a wide range of data import scenarios, but it looks great, too. By letting it power your data import process, you can devote more time to building products and your clients can dedicate more time to providing their users with better customer service and support.


    web design

    Managing Long-Running Tasks In A React App With Web Workers — Smashing Magazine

    10/15/2020

    About The Author

    Awesome frontend developer who loves everything coding. I’m a lover of choral music and I’m working to make it more accessible to the world, one upload at a …
    More about
    Chidi

    In this tutorial, we’re going to learn how to use the Web Worker API to manage time-consuming and UI-blocking tasks in a JavaScript app by building a sample web app that leverages Web Workers. Finally, we’ll end the article by transferring everything to a React application.

    Response time is a big deal when it comes to web applications. Users demand instantaneous responses, no matter what your app may be doing. Whether it’s merely displaying a person’s name or crunching numbers, web app users expect your app to respond to their commands every single time. Sometimes that can be hard to achieve, given the single-threaded nature of JavaScript. But in this article, we’ll learn how we can leverage the Web Worker API to deliver a better experience.

    In writing this article, I made the following assumptions:

    1. To be able to follow along, you should have at least some familiarity with JavaScript and the document API;
    2. You should also have a working knowledge of React so that you can successfully start a new React project using Create React App.

    If you need more insights into this topic, I’ve included a number of links in the “Further Resources” section to help you get up to speed.

    First, let’s get started with Web Workers.

    What Is A Web Worker?

    To understand Web Workers and the problem they’re meant to solve, it is necessary to grasp how JavaScript code is executed at runtime. During runtime, JavaScript code is executed sequentially, in a turn-by-turn manner. Once a piece of code ends, the next one in line starts running, and so on. In technical terms, we say that JavaScript is single-threaded. This behavior implies that once some piece of code starts running, all the code that comes after it must wait for that code to finish executing. Thus, every line of code “blocks” the execution of everything else that comes after it. It is therefore desirable that every piece of code finish as quickly as possible. If some piece of code takes too long to finish, our program appears to have stopped working. On the browser, this manifests as a frozen, unresponsive page. In some extreme cases, the tab will freeze altogether.

    Imagine driving on a single-lane road. If any of the drivers ahead of you happens to stop moving for any reason, then you have a traffic jam. With a language like Java, traffic could continue in other lanes. Thus, Java is said to be multi-threaded. Web Workers are an attempt to bring multi-threaded behavior to JavaScript.
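    A quick way to feel this blocking behavior for yourself is to run a snippet like the one below (purely illustrative) in your browser’s console:

    // This loop hogs the main thread for about five seconds.
    // While it runs, clicks, scrolling and rendering are all frozen.
    const start = Date.now();
    while (Date.now() - start < 5000) {
      // Busy-wait: nothing else can run until this finishes.
    }
    console.log('Done - the page was unresponsive the whole time');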

    The screenshot below shows that the Web Worker API is supported by many browsers, so you should feel confident in using it.

    Showing browser support chart for web workers
    Web Workers browser support. (Large preview)
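    If you still need to guard against older environments, you can feature-detect before creating a worker. Here is a minimal sketch:

    // Only spawn a worker where the API exists;
    // otherwise fall back to the main thread.
    if (window.Worker) {
      const worker = new Worker('worker-file.js');
      // ... hand the heavy work to the worker
    } else {
      // ... compute synchronously as a fallback
    }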

    Web Workers run in background threads without interfering with the UI, and they communicate with the code that created them by way of event handlers.

    An excellent definition of a Web Worker comes from MDN:

    “A worker is an object created using a constructor (e.g. Worker()) that runs a named JavaScript file — this file contains the code that will run in the worker thread; workers run in another global context that is different from the current window. Thus, using the window shortcut to get the current global scope (instead of self) within a Worker will return an error.”

    A worker is created using the Worker constructor.

    const worker = new Worker('worker-file.js')

    It is possible to run most code inside a web worker, with some exceptions. For example, you can’t manipulate the DOM from inside a worker. There is no access to the document API.

    Workers and the thread that spawns them send messages to each other using the postMessage() method. Similarly, they respond to messages using the onmessage event handler. It’s important to note this difference: sending a message is achieved using a method, while receiving a message back requires an event handler. The message being received is contained in the data attribute of the event. We will see an example of this in the next section. But let me quickly mention that the sort of worker we’ve been discussing is called a “dedicated worker”. This means that the worker is only accessible to the script that called it. It is also possible to have a worker that is accessible from multiple scripts. These are called shared workers, and they are created using the SharedWorker constructor, as shown below.

    const sWorker = new SharedWorker('shared-worker-file.js')
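    To make the message flow concrete before we get to the full example, here is a minimal sketch of a round trip between a parent script and a dedicated worker (the file name echo-worker.js is hypothetical):

    // main.js - the parent script
    const echoWorker = new Worker('echo-worker.js');
    echoWorker.postMessage('ping'); // send with a method
    echoWorker.onmessage = (e) => {
      console.log(e.data); // receive with an event handler; logs "pong"
    };

    // echo-worker.js - the worker script
    onmessage = (e) => {
      if (e.data === 'ping') {
        postMessage('pong'); // reply back to the parent
      }
    };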

    To learn more about workers, please see this MDN article. The purpose of this article is to get you started with using Web Workers. Let’s get to it by computing the nth Fibonacci number.

    Computing The Nth Fibonacci Number

    Note: For this and the next two sections, I’m using Live Server on VSCode to run the app. You can certainly use something else.

    This is the section you’ve been waiting for. We’ll finally write some code to see Web Workers in action. Well, not so fast. We wouldn’t appreciate the job a Web Worker does unless we run into the sort of problems it solves. In this section, we’re going to see an example problem, and in the following section, we’ll see how a web worker helps us do better.

    Imagine you were building a web app that allowed users to calculate the nth Fibonacci number. In case you’re new to the term ‘Fibonacci number’, you can read more about it here, but in summary, Fibonacci numbers are a sequence of numbers such that each number is the sum of the two preceding numbers.

    Mathematically, it is expressed as:

    Fn = Fn-1 + Fn-2 (for n > 2), with F1 = F2 = 1

    Thus, the first few numbers of the sequence are:

    1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89 ...

    In some sources, the sequence starts at F0 = 0, in which case the formula below holds for n > 1:

    Fn = Fn-1 + Fn-2, with F0 = 0 and F1 = 1

    In this article, we’ll start at F1 = 1. One thing we can see right away from the formula is that the numbers follow a recursive pattern. The task at hand now is to write a recursive function to compute the nth Fibonacci number (FN).

    After a few tries, I believe you can easily come up with the function below.

    const fib = n => {
      if (n < 2) {
        return n // or 1
      } else {
        return fib(n - 1) + fib(n - 2)
      }
    }

    The function is simple. If n is less than 2, return n (or 1); otherwise, return the sum of the (n-1)th and (n-2)th FNs. With an arrow function and the ternary operator, we can come up with a one-liner.

    const fib = n => (n < 2 ? n : fib(n-1) + fib(n-2))

    This function has a time complexity of O(2^n). This simply means that as the value of n increases, the time required to compute the sum increases exponentially. For large values of n, this makes for a really long-running task that could interfere with our UI.

    Note: This is by no means the best way to solve this particular problem. My choice of using this method is for the purpose of this article.
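    As an aside, a memoized version of fib runs in linear time and would return even the 44th FN in milliseconds. Here’s a quick sketch, shown only for comparison:

    // Cache previously computed values so each n is only computed once.
    const memo = {};
    const fibFast = (n) => {
      if (n < 2) return n;
      if (memo[n] === undefined) {
        memo[n] = fibFast(n - 1) + fibFast(n - 2);
      }
      return memo[n];
    };

    console.log(fibFast(44)); // 701408733, almost instantly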

    To start, create a new folder and name it whatever you like. Now inside that folder create a src/ folder. Also, create an index.html file in the root folder. Inside the src/ folder, create a file named index.js.

    Open up index.html and add the following HTML code.

    <!DOCTYPE html>
    <html>
    <head>
      <link rel="stylesheet" href="styles.css">
    </head>
    <body>
      <div class="heading-container">
        <h1>Computing the nth Fibonacci number</h1>
      </div>
      <div class="body-container">
        <p id='error' class="error"></p>
        <div class="input-div">
          <input id='number-input' class="number-input" type='number' placeholder="Enter a number" />
          <button id='submit-btn' class="btn-submit">Calculate</button>
        </div>
        <div id='results-container' class="results"></div>
      </div>
      <script src="src/index.js"></script>
    </body>
    </html>

    This part is very simple. First, we have a heading. Then we have a container with an input and a button. A user would enter a number then click on “Calculate”. We also have a container to hold the result of the calculation. Lastly, we include the src/index.js file in a script tag.

    You may delete the stylesheet link if you prefer to write your own styles. But if you’re short on time, I have defined some CSS that you can use. Just create a styles.css file in the root folder and add the styles below:

    
    body {
        margin: 0;
        padding: 0;
        box-sizing: border-box;
      }
      
      .body-container,
      .heading-container {
        padding: 0 20px;
      }
      
      .heading-container {
        padding: 20px;
        color: white;
        background: #7a84dd;
      }
      
      .heading-container > h1 {
        margin: 0;
      }
      
      .body-container {
        width: 50%
      }
      
      .input-div {
        margin-top: 15px;
        margin-bottom: 15px;
        display: flex;
        align-items: center;
      }
      
      .results {
        width: 50vw;
      }
      
      .results>p {
        font-size: 24px;
      }
      
      .result-div {
        padding: 5px 10px;
        border-radius: 5px;
        margin: 10px 0;
        background-color: #e09bb7;
      }
      
      .result-div p {
        margin: 5px;
      }
      
      span.bold {
        font-weight: bold;
      }
      
      input {
        font-size: 25px;
      }
      
      p.error {
        color: red;
      }
      
      .number-input {
        padding: 7.5px 10px;
      }
      
      .btn-submit {
        padding: 10px;
        border-radius: 5px;
        border: none;
        background: #07f;
        font-size: 24px;
        color: white;
        cursor: pointer;
        margin: 0 10px;
      }

    Now open up src/index.js and let’s develop it step by step. Add the code below.

    const fib = (n) => (n < 2 ? n : fib(n - 1) + fib(n - 2));
    
    const ordinal_suffix = (num) => {
      // 1st, 2nd, 3rd, 4th, etc.
      const j = num % 10;
      const k = num % 100;
      switch (true) {
        case j === 1 && k !== 11:
          return num + "st";
        case j === 2 && k !== 12:
          return num + "nd";
        case j === 3 && k !== 13:
          return num + "rd";
        default:
          return num + "th";
      }
    };
    const textCont = (n, fibNum, time) => {
      const nth = ordinal_suffix(n);
      return `
      <p id='timer'>Time: <span class='bold'>${time} ms</span></p>
      <p><span class="bold" id='nth'>${nth}</span> Fibonacci number: <span class="bold" id='sum'>${fibNum}</span></p>
      `;
    };

    Here we have three functions. The first one is the function we saw earlier for calculating the nth FN. The second is a utility function that attaches the appropriate ordinal suffix to an integer. The third takes some arguments and outputs markup that we will later insert into the DOM. The first argument is the number whose FN is being computed. The second argument is the computed FN. The last argument is the time the computation takes.

    Still in src/index.js, add the code below just under the previous block.

    const errPar = document.getElementById("error");
    const btn = document.getElementById("submit-btn");
    const input = document.getElementById("number-input");
    const resultsContainer = document.getElementById("results-container");
    
    btn.addEventListener("click", (e) => {
      errPar.textContent = '';
      const num = window.Number(input.value);
    
      if (num < 2) {
        errPar.textContent = "Please enter a number greater than 1";
        return;
      }
    
      const startTime = new Date().getTime();
      const sum = fib(num);
      const time = new Date().getTime() - startTime;
    
      const resultDiv = document.createElement("div");
      resultDiv.innerHTML = textCont(num, sum, time);
      resultDiv.className = "result-div";
      resultsContainer.appendChild(resultDiv);
    });

    First, we use the document API to get hold of the DOM nodes in our HTML file. We get a reference to the paragraph where we’ll display error messages, the input, the calculate button, and the container where we’ll show our results.

    Next, we attach a “click” event handler to the button. When the button gets clicked, we take whatever is inside the input element and convert it to a number. If we get anything less than 2, we display an error message and return; otherwise, we continue. First, we record the current time. After that, we calculate the FN. When that finishes, we get a time difference that represents how long the computation took. In the remaining part of the code, we create a new div. We then set its inner HTML to be the output of the textCont() function we defined earlier. Finally, we add a class to it (for styling) and append it to the results container. The effect of this is that each computation will appear in a separate div below the previous one.

    Showing computed Fibonacci numbers up to 43
    Some Fibonacci numbers. (Large preview)

    We can see that as the number increases, the computation time also increases (exponentially). For instance, from 30 to 35, we had the computation time jump from 13ms to 130ms. We can still consider those operations to be “fast”. At 40, we see a computation time of over 1 second. On my machine, this is where I start noticing the page become unresponsive. At this point, I can no longer interact with the page while the computation is ongoing. I can’t focus on the input or do anything else.

    Recall when we talked about JavaScript being single-threaded? Well, that thread has been “blocked” by this long-running computation, so everything else must “wait” for it to finish. It may start at a lower or higher value on your machine, but you’re bound to reach that point. Notice that it took almost 10s to compute the 44th FN. If there were other things to do in your web app, the user would have to wait for Fib(44) to finish before they could continue. But if you deployed a web worker to handle that calculation, your users could carry on with something else while it runs.

    Let’s now see how web workers help us overcome this problem.

    An Example Web Worker In Action

    In this section, we’ll delegate the job of computing the nth FN to a web worker. This will help free up the main thread and keep our UI responsive while the computation is ongoing.

    Getting started with web workers is surprisingly simple. Let’s see how. Create a new file, src/fib-worker.js, and enter the following code.

    const fib = (n) => (n < 2 ? n : fib(n - 1) + fib(n - 2));
    
    onmessage = (e) => {
      const { num } = e.data;
      const startTime = new Date().getTime();
      const fibNum = fib(num);
      postMessage({
        fibNum,
        time: new Date().getTime() - startTime,
      });
    };

    Notice that we have moved fib, the function that calculates the nth Fibonacci number, inside this file. This file will be run by our web worker.

    Recall that in the section “What Is A Web Worker?” we mentioned that web workers and their parent communicate using the onmessage event handler and the postMessage() method. Here we’re using the onmessage event handler to listen for messages from the parent script. Once we get a message, we destructure the number from the data attribute of the event. Next, we get the current time and start the computation. Once the result is ready, we use the postMessage() method to post the results back to the parent script.

    Open up src/index.js let’s make some changes.

    ...
    
    const worker = new window.Worker("src/fib-worker.js");
    
    btn.addEventListener("click", (e) => {
      errPar.textContent = "";
      const num = window.Number(input.value);
      if (num < 2) {
        errPar.textContent = "Please enter a number greater than 1";
        return;
      }
    
      worker.postMessage({ num });
      worker.onerror = (err) => err;
      worker.onmessage = (e) => {
        const { time, fibNum } = e.data;
        const resultDiv = document.createElement("div");
        resultDiv.innerHTML = textCont(num, fibNum, time);
        resultDiv.className = "result-div";
        resultsContainer.appendChild(resultDiv);
      };
    });

    The first thing to do is to create the web worker using the Worker constructor. Then, inside our button’s event listener, we send a number to the worker using worker.postMessage({ num }). After that, we set a function to listen for errors in the worker. Here we simply return the error. You can certainly do more if you want, like showing it in the DOM. Next, we listen for messages from the worker. Once we get a message, we destructure time and fibNum and continue the process of showing them in the DOM.

    Note that inside the web worker, the onmessage event is available in the worker’s scope, so we could have written it as self.onmessage and self.postMessage(). But in the parent script, we have to attach these to the worker itself.
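    For instance, the worker file could equivalently be written with the worker’s global scope made explicit (fib defined as before):

    // src/fib-worker.js - same behavior, using the worker's self global
    self.onmessage = (e) => {
      const { num } = e.data;
      const startTime = new Date().getTime();
      const fibNum = fib(num);
      self.postMessage({ fibNum, time: new Date().getTime() - startTime });
    };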

    In the screenshot below, you can see the web worker file in the Sources tab of Chrome DevTools. What you should notice is that the UI stays responsive no matter what number you enter. This behavior is the magic of web workers.

    View of an active web worker file
    A running web worker file. (Large preview)

    We’ve made a lot of progress with our web app. But there’s something else we can do to make it better. Our current implementation uses a single worker to handle every computation. If a new message comes in while one is running, the old one gets replaced. To get around this, we can create a new worker for each call to calculate the FN. Let’s see how to do that in the next section.

    Working With Multiple Web Workers

    Currently, we’re handling every request with a single worker, so an incoming request will replace a previous one that is yet to finish. What we want now is to make a small change so that we spawn a new web worker for every request. We will kill each worker once it’s done.

    Open up src/index.js and move the line that creates the web worker inside the button’s click event handler. Now the event handler should look like the one below.

    btn.addEventListener("click", (e) => {
      errPar.textContent = "";
      const num = window.Number(input.value);
      
      if (num < 2) {
        errPar.textContent = "Please enter a number greater than 1";
        return;
      }
      
      const worker = new window.Worker("src/fib-worker.js"); // this line has moved inside the event handler
      worker.postMessage({ num });
      worker.onerror = (err) => err;
      worker.onmessage = (e) => {
        const { time, fibNum } = e.data;
        const resultDiv = document.createElement("div");
        resultDiv.innerHTML = textCont(num, fibNum, time);
        resultDiv.className = "result-div";
        resultsContainer.appendChild(resultDiv);
        worker.terminate() // this line terminates the worker
      };
    });

    We made two changes.

    1. We moved this line const worker = new window.Worker("src/fib-worker.js") inside the button’s click event handler.
    2. We added this line worker.terminate() to discard the worker once we’re done with it.

    So for every click of the button, we create a new worker to handle the calculation. Thus we can keep changing the input, and each result will hit the screen once its computation finishes. In the screenshot below, you can see that the values for 20 and 30 appear before that of 45, even though I started 45 first. Once the function returned for 20 and 30, their results were posted and their workers terminated. When everything finishes, we shouldn’t have any workers left on the Sources tab.

    showing Fibonacci numbers with terminated workers
    Illustration of Multiple independent workers. (Large preview)

    We could end this article right here, but if this were a React app, how would we bring web workers into it? That is the focus of the next section.

    Web Workers In React

    To get started, create a new React app using Create React App (CRA). Copy the fib-worker.js file into the public/ folder of your React app. Putting the file here stems from the fact that a CRA app is a single-page app, and files in public/ are served as static assets, which lets the browser fetch the worker script at runtime. That’s about the only thing that is specific to using the worker in a React application. Everything that follows from here is pure React.

    In src/ folder create a file helpers.js and export the ordinal_suffix() function from it.

    // src/helpers.js
    
    export const ordinal_suffix = (num) => {
      // 1st, 2nd, 3rd, 4th, etc.
      const j = num % 10;
      const k = num % 100;
      switch (true) {
        case j === 1 && k !== 11:
          return num + "st";
        case j === 2 && k !== 12:
          return num + "nd";
        case j === 3 && k !== 13:
          return num + "rd";
        default:
          return num + "th";
      }
    };

    Our app will require us to maintain some state, so create another file, src/reducer.js, and paste in the state reducer below.

    // src/reducer.js
    
    export const reducer = (state = {}, action) => {
      switch (action.type) {
        case "SET_ERROR":
          return { ...state, err: action.err };
        case "SET_NUMBER":
          return { ...state, num: action.num };
        case "SET_FIBO":
          return {
            ...state,
            computedFibs: [
              ...state.computedFibs,
              { id: action.id, nth: action.nth, loading: action.loading },
            ],
          };
        case "UPDATE_FIBO": {
          const curr = state.computedFibs.filter((c) => c.id === action.id)[0];
          const idx = state.computedFibs.indexOf(curr);
          curr.loading = false;
          curr.time = action.time;
          curr.fibNum = action.fibNum;
          state.computedFibs[idx] = curr;
          return { ...state };
        }
        default:
          return state;
      }
    };

    Let’s go over each action type one after the other.

    1. SET_ERROR: sets an error state when triggered.
    2. SET_NUMBER: sets the value in our input box to state.
    3. SET_FIBO: adds a new entry to the array of computed FNs.
    4. UPDATE_FIBO: here we look for a particular entry and replace it with a new object containing the computed FN and the time taken to compute it.

    We shall use this reducer shortly. Before that, let’s create the component that will display the computed FNs. Create a new file src/Results.js and paste in the below code.

    // src/Results.js
    
    import React from "react";
    
    export const Results = (props) => {
      const { results } = props;
      return (
        <div id="results-container" className="results-container">
          {results.map((fb) => {
            const { id, nth, time, fibNum, loading } = fb;
            return (
              <div key={id} className="result-div">
                {loading ? (
                  <p>
                    Calculating the{" "}
                    <span className="bold" id="nth">
                      {nth}
                    </span>{" "}
                    Fibonacci number...
                  </p>
                ) : (
                  <>
                    <p id="timer">
                      Time: <span className="bold">{time} ms</span>
                    </p>
                    <p>
                      <span className="bold" id="nth">
                        {nth}
                      </span>{" "}
                  Fibonacci number:{" "}
                      <span className="bold" id="sum">
                        {fibNum}
                      </span>
                    </p>
                  </>
                )}
              </div>
            );
          })}
        </div>
      );
    };

    With this change, we start the process of converting our previous index.html file to JSX. This file has one responsibility: take an array of objects representing computed FNs and display them. The only difference from what we had before is the introduction of a loading state. So now, while a computation is running, we show the loading state to let the user know that something is happening.

    Let’s put in the final pieces by updating the code inside src/App.js. The code is rather long, so we’ll do it in two steps. Let’s add the first block of code.

    import React from "react";
    import "./App.css";
    import { ordinal_suffix } from "./helpers";
    import { reducer } from './reducer'
    import { Results } from "./Results";
    function App() {
      const [info, dispatch] = React.useReducer(reducer, {
        err: "",
        num: "",
        computedFibs: [],
      });
      const runWorker = (num, id) => {
        dispatch({ type: "SET_ERROR", err: "" });
        const worker = new window.Worker('./fib-worker.js')
        worker.postMessage({ num });
        worker.onerror = (err) => err;
        worker.onmessage = (e) => {
          const { time, fibNum } = e.data;
          dispatch({
            type: "UPDATE_FIBO",
            id,
            time,
            fibNum,
          });
          worker.terminate();
        };
      };
      return (
        <div>
          <div className="heading-container">
            <h1>Computing the nth Fibonacci number</h1>
          </div>
          <div className="body-container">
            <p id="error" className="error">
              {info.err}
            </p>
    
            // ... next block of code goes here ... //
    
            <Results results={info.computedFibs} />
          </div>
        </div>
      );
    }
    export default App;

    As usual, we bring in our imports. Then we instantiate a state and updater function with the useReducer hook. We then define a function, runWorker(), that takes a number and an ID and sets about calling a web worker to compute the FN for that number.

    Note that to create the worker, we pass a relative path to the Worker constructor. At runtime, our React code gets attached to the public/index.html file, thus it can find the fib-worker.js file in the same directory. When the computation completes (signaled by worker.onmessage), the UPDATE_FIBO action gets dispatched, and the worker is terminated afterward. What we have now is not much different from what we had previously.

    In the return block of this component, we render the same HTML we had before. We also pass the computed numbers array to the <Results /> component for rendering.

    Let’s add the final block of code inside the return statement.

            <div className="input-div">
              <input
                type="number"
                value={info.num}
                className="number-input"
                placeholder="Enter a number"
                onChange={(e) =>
                  dispatch({
                    type: "SET_NUMBER",
                    num: window.Number(e.target.value),
                  })
                }
              />
              <button
                id="submit-btn"
                className="btn-submit"
                onClick={() => {
                  if (info.num < 2) {
                    dispatch({
                      type: "SET_ERROR",
                      err: "Please enter a number greater than 2",
                    });
                    return;
                  }
                  const id = info.computedFibs.length;
                  dispatch({
                    type: "SET_FIBO",
                    id,
                    loading: true,
                    nth: ordinal_suffix(info.num),
                  });
                  runWorker(info.num, id);
                }}
              >
                Calculate
              </button>
            </div>

    We set an onChange handler on the input to update the info.num state variable. On the button, we define an onClick event handler. When the button gets clicked, we check if the number is greater than 2. Notice that before calling runWorker(), we first dispatch an action to add an entry to the array of computed FNs. It is this entry that will be updated once the worker finishes its job. In this way, every entry maintains its position in the list, unlike what we had before.

    Finally, copy the content of styles.css from before and replace the content of App.css.

    We now have everything in place. Now start up your react server and play around with some numbers. Take note of the loading state, which is a UX improvement. Also, note that the UI stays responsive even when you enter a number as high as 1000 and click “Calculate”.

    showing loading state while worker is active.
    Showing loading state and active web worker. (Large preview)

    Note the loading state and the active worker. Once the 46th value is computed, the worker is killed and the loading state is replaced by the final result.

    Conclusion

    Phew! It has been a long ride, so let’s wrap it up. I encourage you to take a look at the MDN entry for web workers (see resources list below) to learn other ways of using web workers.

    In this article, we learned about what web workers are and the sort of problems they’re meant to solve. We also saw how to implement them using plain JavaScript. Finally, we saw how to implement web workers in a React application.

    I encourage you to take advantage of this great API to deliver a better experience for your users.

    Further Resources

    • “Using Web Workers”, MDN Web Docs


    web design

    Setting Up An API Using Flask, Google’s Cloud SQL And App Engine — Smashing Magazine

    08/19/2020

    About The Author

    Wole Oyekanmi is a data scientist who is working on applying machine learning to consumer finance. When he’s not working, he enjoys drumming up new startup …
    More about
    Wole

    Flask makes it possible for developers to build an API for whatever use case they might have. In this tutorial, we’ll learn how to set up Google Cloud, Cloud SQL, and App Engine to build a Flask API. (Cloud SQL is a fully managed platform-as-a-service (PaaS) database engine, and App Engine is a fully managed PaaS for hosting applications.)

    A few Python frameworks can be used to create APIs, two of which are Flask and Django. Frameworks come with functionality that makes it easy for developers to implement the features that users need to interact with their applications. The complexity of a web application could be a deciding factor when you’re choosing which framework to work with.

    Django

    Django is a robust framework that has a predefined structure with built-in functionality. The downside of its robustness, however, is that it could make the framework too complex for certain projects. It’s best suited to complex web applications that need to leverage the advanced functionality of Django.

    Flask

    Flask, on the other hand, is a lightweight framework for building APIs. Getting started with it is easy, and packages are available to make it more robust as you go. This article will focus on defining view functions and the controller, on connecting to a database on Google Cloud, and on deploying to App Engine.

    For the purpose of learning, we’ll build a Flask API with a few endpoints to manage a collection of our favorite songs. The endpoints will be for GET and POST requests: fetching and creating resources. Alongside that, we will be using the suite of services on the Google Cloud platform. We’ll set up Google’s Cloud SQL for our database and launch our app by deploying to App Engine. This tutorial is aimed at beginners who are taking their first stab at using Google Cloud for their app.

    Setting Up A Flask Project

    This tutorial assumes you have Python 3.x installed. If you don’t, head over to the official website to download and install it.

    To check whether Python is installed, launch your command-line interface (CLI) and run the command below:

    python -V
    

    Our first step is to create the directory where our project will live. We will call it flask-app:

    mkdir flask-app && cd flask-app
    

    The first thing to do when starting a Python project is to create a virtual environment. A virtual environment isolates your project’s Python dependencies from the rest of your system. This means that this project can have its own dependencies, different from other projects on your machine. venv is a module that ships with Python 3.

    Let’s create a virtual environment in our flask-app directory:

    python3 -m venv env
    

    This command creates an env folder in our directory. The name (in this case, env) is an alias for the virtual environment, and it can be anything you like.

    Now that we’ve created the virtual environment, we have to tell our project to use it. To activate our virtual environment, use the following command:

    source env/bin/activate
    

    You will see that your CLI prompt now has env at the beginning, indicating that our environment is active.

    It shows the env prompt to indicate that an environment is active
    (env) appears before the prompt (Large preview)

    Now, let’s install our Flask package:

    pip install flask
    

    Create a directory named api in our current directory. We’re creating this directory so that we have a folder where our app’s other folders will reside.

    mkdir api && cd api
    

    Next, create a main.py file, which will serve as the entry point to our app:

    touch main.py
    

    Open main.py, and enter the following code:

    #main.py
    from flask import Flask
    
    app = Flask(__name__)
    
    @app.route('/')
    def home():
      return 'Hello World'
    
    if __name__ == '__main__':
      app.run()
    

    Let’s understand what we’ve done here. We first imported the Flask class from the flask package. Then, we created an instance of the class and assigned it to app. Next, we created our first endpoint, which points to our app’s root. In summary, this is a view function that is invoked when the / route is requested, and it returns Hello World.

    Let’s run the app:

    python main.py
    

    This starts our local server and serves our app on http://127.0.0.1:5000/. Input the URL in your browser, and you will see the Hello World response printed on your screen.

    And voilà! Our app is up and running. The next task is to make it functional.

    To call our endpoints, we will be using Postman, which is a service that helps developers test endpoints. You can download it from the official website.

    Let’s make main.py return some data:

    #main.py
    from flask import Flask, jsonify
    app = Flask(__name__)
    songs = [
        {
            "title": "Rockstar",
            "artist": "Dababy",
            "genre": "rap",
        },
        {
            "title": "Say So",
            "artist": "Doja Cat",
            "genre": "Hiphop",
        },
        {
            "title": "Panini",
            "artist": "Lil Nas X",
            "genre": "Hiphop"
        }
    ]
    @app.route('/songs')
    def home():
        return jsonify(songs)
    
    if __name__ == '__main__':
      app.run()
    

    Here, we included a list of songs, each with the song’s title, artist, and genre. We then changed the root / route to /songs. This route returns the array of songs that we specified. In order to return our list as JSON, we passed it through jsonify. Now, rather than seeing a simple Hello World, we see a list of songs when we access the http://127.0.0.1:5000/songs endpoint.

    This image shows the response from a get request
    A get response from Postman (Large preview)

    You may have noticed that after every change, we had to restart our server. To enable auto-reloading when the code changes, let’s enable the debug option. To do this, change app.run to this:

    app.run(debug=True)
    

    Next, let’s add a song to our array using a POST request. First, import the request object so that we can process incoming requests from our users. We’ll later use the request object in the view function to get the user’s input as JSON.

    #main.py
    from flask import Flask, jsonify, request
    
    app = Flask(__name__)
    songs = [
        {
            "title": "Rockstar",
            "artist": "Dababy",
            "genre": "rap",
        },
        {
            "title": "Say So",
            "artist": "Doja Cat",
            "genre": "Hiphop",
        },
        {
            "title": "Panini",
            "artist": "Lil Nas X",
            "genre": "Hiphop"
        }
    ]
    @app.route('/songs')
    def home():
        return jsonify(songs)
    
    @app.route('/songs', methods=['POST'])
    def add_songs():
        song = request.get_json()
        songs.append(song)
        return jsonify(songs)
    
    if __name__ == '__main__':
      app.run(debug=True)
    

    Our add_songs view function takes a user-submitted song and appends it to our existing list of songs.

    This image demonstrates a post request using Postman
    Post request from Postman (Large preview)

    So far, we have returned our data from a Python list. This is just experimental: because the list lives in memory, any newly added data would be lost if we restarted the server. That is not feasible for a real application, so we will require a live database to store and retrieve the data. In comes Cloud SQL.

    Why Use A Cloud SQL Instance?

    According to the official website:

    “Google Cloud SQL is a fully-managed database service that makes it easy to set-up, maintain, manage and administer your relational MySQL and PostgreSQL databases in the cloud. Hosted on Google Cloud Platform, Cloud SQL provides a database infrastructure for applications running anywhere.”

    This means that we can outsource the management of a database’s infrastructure entirely to Google, at flexible pricing.

    Difference Between Cloud SQL And A Self-Managed Compute Engine

    On Google Cloud, we could spin up a virtual machine on Google’s Compute Engine infrastructure and install our own SQL instance on it. This means we would be responsible for vertical scalability, replication, and a host of other configuration tasks. With Cloud SQL, we get a lot of configuration out of the box, so we can spend more time on the code and less time setting up.

    Before we begin:

    1. Sign up for Google Cloud. Google offers $300 in free credit to new users.
    2. Create a project. This is pretty straightforward and can be done right from the console.

    Create A Cloud SQL Instance

    After signing up for Google Cloud, in the left panel, scroll to the “SQL” tab and click on it.

    This image shows a sub-section of GCP services
    Snapshot of GCP services (Large preview)
    This image shows the three database engines in offering for Cloud SQL
    Cloud SQL’s console page (Large preview)

    First, we are required to choose an SQL engine. We’ll go with MySQL for this article.

    This image show the page for creating a Cloud SQL instance
    Creating a new Cloud SQL instance (Large preview)

    Next, we’ll create an instance. By default, our instance will be created in the US, and the zone will be automatically selected for us.

    Set the root password and give the instance a name, and then click the “Create” button. You can further configure the instance by clicking the “Show configuration options” dropdown. The settings allow you to configure the instance’s size, storage capacity, security, availability, backups, and more. For this article, we will go with the default settings. Not to worry: these variables can be changed later.

    It might take a few minutes for the process to finish. You’ll know the instance is ready when you see a green checkmark. Click on your instance’s name to go to the details page.

    Now that we’re up and running, we will do a few things:

    1. Create a database.
    2. Create a new user.
    3. Whitelist our IP address.

    Create A Database

    Navigate to the “Database” tab to create a database.

    This image shows the creation of a new database on Cloud SQL
    Creating a new database on Cloud SQL (Large preview)

    Create A New User

    This image shows the creation of a new user on Cloud SQL
    Creating a new user on Cloud SQL (Large preview)

    In the “Host name” section, set it to allow “% (any host)”.

    Whitelist IP Address

    You can connect to your database instance in one of two ways. A private IP address requires a virtual private cloud (VPC). If you go for this option, Google Cloud will create a Google-managed VPC and place your instance in it. For this article, we will use the public IP address, which is the default. It is public in the sense that only people whose IP addresses have been whitelisted can access the database.

    To whitelist your IP address, type my ip in a Google search to get your IP. Then, go to the “Connections” tab and “Add Network”.

    This image shows the page for IP whitelisting
    Whitelist your IP address (Large preview)

    Connect To The Instance

    Next, navigate to the “Overview” panel and connect using the cloud shell.

    This image shows the Cloud SQL dashboard
    Cloud SQL dashboard (Large preview)

    The command to connect to our Cloud SQL instance will be pre-typed in the console.

    You may use either the root user or the user created earlier. In the command below, we’re saying: connect to the flask-demo instance as the user USERNAME. You will be prompted to input the user’s password.

    gcloud sql connect flask-demo --user=USERNAME
    

    If you get an error saying that you don’t have a project ID, you can get your project’s ID by running this:

    gcloud projects list
    

    Take the project ID output by the command above, and input it into the command below, replacing PROJECT_ID with it.

    gcloud config set project PROJECT_ID
    

    Then, run the gcloud sql connect command, and we will be connected.

    Run this command to see the active databases:

    > show databases;
    
    This image shows the shell output for when we run show databases in the cloud shell
    Shell output for “show databases” (Large preview)

    My database is named db_demo, and I’ll run the command below to use it. You might see some other databases, such as information_schema and performance_schema. These exist to store table metadata.

    > use db_demo;
    

    Next, create a table that mirrors the list in our Flask app. Type the code below in a text editor and paste it into your cloud shell:

    create table songs(
    song_id INT NOT NULL AUTO_INCREMENT,
    title VARCHAR(255),
    artist VARCHAR(255),
    genre VARCHAR(255),
    PRIMARY KEY(song_id)
    );
    

    This code is an SQL command that creates a table named songs, with four columns (song_id, title, artist, and genre). We’ve also defined song_id as the table’s primary key, set to auto-increment from 1.

    Now, run show tables; to confirm that the table has been created.

    This image shows the shell output for when we run show tables in the cloud shell
    Shell output for “show tables” (Large preview)

    And just like that, we have created a database and our songs table.

    Our next task is to set up Google App Engine so that we can deploy our app.

    Google App Engine

    App Engine is a fully managed platform for developing and hosting web applications at scale. An advantage of deploying to App Engine is that it enables an app to scale automatically to meet incoming traffic.

    The App Engine website says:

    “With zero server management and zero configuration deployments, developers can focus only on building great applications without the management overhead.”

    Set Up App Engine

    There are a few ways to set up App Engine: through the UI of Google Cloud Console or through the Google Cloud SDK. We will use the SDK for this section. It enables us to deploy, manage, and monitor our Google Cloud instance from our local machine.

    Install Google Cloud SDK

    Follow the instructions to download and install the SDK for Mac or Windows. The guide will also show you how to initialize the SDK in your CLI and how to pick a Google Cloud project.

    Now that the SDK has been installed, we’re going to update our Python script with our database’s credentials and deploy to App Engine.

    Local Setup

    In our local environment, we are going to update the setup to suit our new architecture, which includes Cloud SQL and App Engine.

    First, add an app.yaml file to our root folder. This is a configuration file that App Engine requires to host and run our app. It tells App Engine of our runtime and other variables that might be required. For our app, we will need to add our database’s credentials as environment variables, so that App Engine is aware of our database’s instance.

    In the app.yaml file, add the snippet below. The runtime tells App Engine which Python version to run, and the database variables come from the Cloud SQL setup earlier: replace the placeholder values with the username, password, database name, and connection name that you used when setting up Cloud SQL.
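
    If you don’t have the connection name handy, you can look it up with the gcloud CLI (assuming your instance is named flask-demo, as earlier); it usually takes the form PROJECT_ID:REGION:INSTANCE_NAME.

    gcloud sql instances describe flask-demo --format="value(connectionName)"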

    #app.yaml
    runtime: python37
    
    env_variables:
      CLOUD_SQL_USERNAME: YOUR-DB-USERNAME
      CLOUD_SQL_PASSWORD: YOUR-DB-PASSWORD
      CLOUD_SQL_DATABASE_NAME: YOUR-DB-NAME
      CLOUD_SQL_CONNECTION_NAME: YOUR-CONN-NAME
    

    Now, we are going to install PyMySQL. This is a Python MySQL package that connects and performs queries on a MySQL database. Install the PyMySQL package by running this line in your CLI:

    pip install pymysql
    

    At this point, we are ready to use PyMySQL to connect to our Cloud SQL database from the app. This will enable us to run SELECT and INSERT queries against our database.

    Initialize Database Connector

    First, create a db.py file in our root folder, and add the code below:

    #db.py
    import os
    import pymysql
    from flask import jsonify
    
    db_user = os.environ.get('CLOUD_SQL_USERNAME')
    db_password = os.environ.get('CLOUD_SQL_PASSWORD')
    db_name = os.environ.get('CLOUD_SQL_DATABASE_NAME')
    db_connection_name = os.environ.get('CLOUD_SQL_CONNECTION_NAME')
    
    
    def open_connection():
        unix_socket = '/cloudsql/{}'.format(db_connection_name)
        conn = None  # avoids an UnboundLocalError when no connection is made
        try:
            # GAE_ENV is set to 'standard' when running on App Engine standard.
            if os.environ.get('GAE_ENV') == 'standard':
                conn = pymysql.connect(user=db_user, password=db_password,
                                       unix_socket=unix_socket, db=db_name,
                                       cursorclass=pymysql.cursors.DictCursor)
        except pymysql.MySQLError as e:
            print(e)
    
        return conn
    
    
    def get_songs():
        conn = open_connection()
        with conn.cursor() as cursor:
            result = cursor.execute('SELECT * FROM songs;')
            songs = cursor.fetchall()
            if result > 0:
                got_songs = jsonify(songs)
            else:
                got_songs = 'No Songs in DB'
        conn.close()
        return got_songs
    
    def add_songs(song):
        conn = open_connection()
        with conn.cursor() as cursor:
            cursor.execute('INSERT INTO songs (title, artist, genre) VALUES(%s, %s, %s)', (song["title"], song["artist"], song["genre"]))
        conn.commit()
        conn.close()
    

    We did a few things here.

    First, we retrieved our database credentials with the os.environ.get method. App Engine makes the environment variables defined in app.yaml available to the app at runtime.

    Secondly, we created an open_connection function. It connects to our MySQL database with the credentials.

    Thirdly, we added two functions: get_songs and add_songs. The first initiates a connection to the database by calling the open_connection function. It then queries the songs table for every row and, if empty, returns “No Songs in DB”. The add_songs function inserts a new record into the songs table.
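
    One caveat: open_connection only creates a connection when GAE_ENV is 'standard', i.e. when the app runs on App Engine. If you also want to run the app locally against the instance’s public IP (which we whitelisted earlier), a variant along the lines of the minimal sketch below could work. This is my own assumption, not part of the original setup: DB_HOST is a hypothetical extra environment variable, and you would need to export all of these variables in your local shell, because app.yaml only applies on App Engine.

    #db_local.py (illustrative sketch, not part of the original article)
    import os
    import pymysql
    
    def open_connection_with_fallback():
        params = dict(
            user=os.environ.get('CLOUD_SQL_USERNAME'),
            password=os.environ.get('CLOUD_SQL_PASSWORD'),
            db=os.environ.get('CLOUD_SQL_DATABASE_NAME'),
            cursorclass=pymysql.cursors.DictCursor,
        )
        if os.environ.get('GAE_ENV') == 'standard':
            # On App Engine standard, connect over the Cloud SQL unix socket.
            socket = '/cloudsql/{}'.format(os.environ.get('CLOUD_SQL_CONNECTION_NAME'))
            return pymysql.connect(unix_socket=socket, **params)
        # Anywhere else (e.g. local development), connect over TCP.
        return pymysql.connect(host=os.environ.get('DB_HOST', '127.0.0.1'), **params)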

    Finally, we return to where we started, our main.py file. Now, instead of getting our songs from an object, as we did earlier, we call the add_songs function to insert a record, and we call the get_songs function to retrieve the records from the database.

    Let’s refactor main.py:

    #main.py
    from flask import Flask, jsonify, request
    from db import get_songs, add_songs
    
    app = Flask(__name__)
    
    @app.route('/', methods=['POST', 'GET'])
    def songs():
        if request.method == 'POST':
            if not request.is_json:
                return jsonify({"msg": "Missing JSON in request"}), 400  
    
            add_songs(request.get_json())
            return 'Song Added'
    
        return get_songs()    
    
    if __name__ == '__main__':
        app.run()
    

    We imported the get_songs and add_songs functions and called them in our songs() view function. If we are making a post request, we call the add_songs function, and if we are making a get request, we call the get_songs function.
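
    If you’d like to smoke-test this locally before deploying, both branches can be exercised with curl instead of Postman. This assumes python main.py is running and that a database connection is available locally (for instance via a fallback like the sketch earlier); http://127.0.0.1:5000/ is Flask’s default development address:

    curl -H "Content-Type: application/json" \
         -d '{"title": "Panini", "artist": "Lil Nas X", "genre": "Hiphop"}' \
         http://127.0.0.1:5000/
    
    curl http://127.0.0.1:5000/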

    And our app is done.

    Next up is adding a requirements.txt file. This file contains a list of packages necessary to run the app. App Engine checks this file and installs the listed packages.

    pip freeze | grep -E "Flask|PyMySQL" > requirements.txt
    

    This line filters pip freeze’s output for the two packages that we are using for the app (Flask and PyMySQL) and writes them, with their versions, to a new requirements.txt file. (The -E flag enables the | alternation in the grep pattern.)
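
    The resulting file should look something like this. The exact version numbers depend on what pip installed, so treat these as placeholders:

    Flask==1.1.2
    PyMySQL==0.9.3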

    At this point, we have added three new files: db.py, app.yaml, and requirements.txt.

    Deploy To Google App Engine

    Run the following command to deploy your app:

    gcloud app deploy
    

    If all goes well, your console will output something like this:

    [Image: CLI output for App Engine deployment]

    Your app is now running on App Engine. To see it in the browser, run gcloud app browse in your CLI.
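
    If the app deploys but requests fail, the App Engine request logs are the first place to look. Tailing them is a standard gcloud command (“default” is the name App Engine gives the service unless you specify another):

    gcloud app logs tail -s default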

    We can launch Postman to test our post and get requests.

    [Image: Demonstrating a post request]
    [Image: Demonstrating a get request]

    Our app is now hosted on Google’s infrastructure, and we can tweak the configuration to get all of the benefits of a serverless architecture. Going forward, you can build on this article to make your serverless application more robust.
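
    For example, automatic scaling can be capped or tuned directly in app.yaml. The snippet below is only an illustration; the max_instances value is an arbitrary assumption, not something this app requires:

    #app.yaml (illustrative scaling settings)
    automatic_scaling:
      max_instances: 2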

    Conclusion

    Using a platform-as-a-service (PaaS) infrastructure like App Engine and Cloud SQL abstracts away the infrastructure layer and enables us to build more quickly. As developers, we do not have to worry about configuration, backing up and restoring, the operating system, auto-scaling, firewalls, migrating traffic, and so on. However, if you need control over the underlying configuration, then it might be better to use a custom-built service.


    Smashing Editorial
    (ks, ra, al, il)




    4 Lessons Web App Designers Can Learn From Google — Smashing Magazine

    08/12/2020

    About The Author

    Suzanne Scacca is a former WordPress implementer, trainer and agency manager who now works as a freelance copywriter. She specializes in crafting marketing, web …
    More about Suzanne Scacca

    There’s a reason why Google dominates market share for things like search engines, web browsers, email clients and cloud storage services. It knows exactly what consumers want and it has designed simple, intuitive, and useful solutions for them. If there’s one company whose product features you should be mirroring, it’s Google.

    Whenever I’m curious about what more we could be doing to improve our users’ experiences, the first place I look to is Google. More specifically, I go to the Google Developers site or Think with Google to pull the latest consumer data.

    But I was thinking today, “Why don’t we just copy what Google does?”

    After all, Google has to walk the walk. If not, how would it ever convince anyone to adhere to its SEO and UX recommendations and guidelines?

    The only thing is, Google’s sites and apps aren’t very attractive. They’re practical and intuitive, that’s for sure. But designs worth emulating? Eh.

    That doesn’t really matter though. The basic principles for building a good web app exist across each of its platforms. So, if we’re looking for a definitive answer on what will provide SaaS users with the best experience, I think we need to start by dissecting Google’s platforms.

    What Google Teaches Us About Good Web App Design

    What we want to focus on are the components that make Google’s products so easy to use time and time again. By replicating these features within your own app, you’ll effectively reduce (if not altogether remove) the friction your users would otherwise encounter.

    1. Make the First Thing They See Their Top Priority

    When users enter your dashboard, the last thing you want is for them to be overwhelmed. Their immediate impression whenever they enter your app or return to the dashboard should be:

    “I’m exactly where I need to be.”

    Not:

    “What the heck is going on here? Where do I find X?”

    Now, depending on the purpose of your app, there are usually one or two things your users are going to be most concerned with.

    Let’s say you have an app like Google Translate that has a clear utilitarian purpose. There’s absolutely no excuse for cluttering the main page. They’ve come here to do one thing:

    [Image: Google Translate users don’t have to hunt around for the translator tool. (Source: Google Translate)]

    So, don’t waste their time. Place the tool front and center and let all other pages, settings or notices appear as secondary features of the app.

    Something else this example teaches us is how you should configure your tool for users. Google could easily just leave this open-ended, but it defaults to:

    Default Language → English

    Google’s data likely shows that this is the most popular way users use this app.

    Although you can’t see it in the desktop app, you can see it on mobile. The formula goes like this:

    Default Language → Recent Language

    I suspect that, for first-time users, Google will set the translation to the user’s native language (as indicated in their Google user settings).

    If you have the data available, use it to configure defaults that reduce the number of steps your users have to take, too.

    Not every web app provides users with a hands-on tool for solving a problem. In some cases, apps enable users to streamline and automate complex processes, which means their primary concern is going to be how well those processes are performing.

    For that, we can look at a product like Google Search Console, which connects users to data on how their sites perform in Google search as well as insights into problems that might be holding them back.

    It’s no surprise then that the first thing they see upon entering it is this:

    [Image: The Google Search Console overview page shows users stats on Performance and Coverage. (Source: Google Search Console)]

    Performance (the number of clicks in Google search) and Coverage (number of pages indexed without error) are above the fold. Below it is another chart that displays recommended enhancements to improve core web vitals, mobile usability and sitelinks searchbox visibility.

    Bottom line: The Overview page isn’t littered with charts depicting every data point collected by Google Search Console. Instead, it displays only the top priorities so users can get a bird’s-eye view of what’s going on and not get lost in data they don’t need at that time.

    2. Create a Useful and Simple Navigation Wherever Relevant

    This one seems like a no-brainer, but I’ll show you why I bring it up.

    Zoom is a great video conferencing app. There’s no arguing that. However, when users want to schedule a meeting from their browser, this is what they see:

    [Image: The Zoom web app complicates things with multiple menus. (Source: Zoom)]

    The “Join Meeting” and “Host Meeting” options are fine as they both eventually push the user into the desktop app. However, the “Schedule Meeting” in-browser experience isn’t great because it leaves the website navigation bars in place, which only serves as a distraction from the app’s sidebar on the left.

    Once your users have created a login and have access to your app, they don’t need to see your site anymore. Ditch the website navigation and let them be immersed in the app.

    Or do as Google Hangouts does. Lay your app out the way users expect an app to be laid out:

    • Primary navigation along the left side,
    • A hamburger menu button and/or More (…) button containing the secondary navigation,
    • Wide open space for users to play in the app.
    [Image: A look inside Google Hangouts and its distraction-free interface and navigation. (Source: Google Hangouts)]

    But Google Hangouts doesn’t do away with the google.com website completely. For users that want to quickly navigate to one of Google’s other products, they can use the grid-shaped icon in the top-right corner. So, if you feel it’s necessary for your users to be able to visit your website once again, you can build it into the app that way.

    This example also demonstrates how important it is to keep your navigation as simple as possible.

    Google Hangouts’ primary navigation uses symbols to represent each of the app’s tabs/options:

    [Image: Google Hangouts uses icons to represent the tabs of its primary navigation. (Source: Google Hangouts)]

    While I think it’s okay for Google Hangouts to get away with this icon-only menu design, be careful with this approach. Unless the icons are universally understood (like the hamburger menu, search magnifying glass, or the plus sign), you can’t risk introducing icons that create more confusion.

    As NNG (the Nielsen Norman Group) points out, there’s a difference between an icon being recognizable and its meaning being indisputable.

    So, one way you can get around this is to make the outward appearance of the menu icon-only. But upon hover, the labels appear so that users have additional context for what each means.

    As for any secondary navigation you might need — including a Settings navigation — you can write out the labels, since the menu only appears upon user activation.

    [Image: The Google Hangouts secondary navigation uses an icon and label for each tab. (Source: Google Hangouts)]

    Although some of the icons would be easy enough to identify, not all of them would instantly be recognizable (like “Invites” and “Hangouts Dialer”). If even one tab in your secondary navigation is rarely seen across other apps, spell them all out.

    One last thing: The divider lines in this menu are a great choice. Rather than jam 10 tabs/options into this navigation bar together, they’re logically grouped, making it easier for users to find what they’re looking for.

    3. Provide Users with Predictive Search Functionality

    Every app should have a search bar. It might be there to help users sift through content, to find the contact they’re looking for from a long list, or to ask a question about something in the app.

    The more complex your app is, the more critical a role internal search is going to play. But if you want to improve your users’ search experience even more, you’ll want to power yours with predictive search functionality.

    Even though I’m sure you have a support line, maybe a chatbot, and perhaps an FAQ or knowledge base to help users find what they need, a smart search bar can connect them to what they’re really looking for (even if they don’t know how to articulate it).

    Google has this search functionality baked into most of its products.

    You’re familiar with autocomplete within the Google search engine itself. But here are some other use cases for smart search capabilities.

    Google Drive connects users to documents (of all types — Docs, Sheets, Slides and more) as well as collaborators that match the search query.

    [Image: An example search for ‘speed’ within Google Drive. (Source: Google Drive)]

    Users can, of course, be taken to a full search results page. However, the search bar itself predicts which content is the most relevant for the query. In this case, these are the most recent pieces of content I’ve written that include the term “speed” in the title.

    Google Maps is a neat use case as it pulls data from a variety of connected (Google) sources to try and predict what its users are looking for.

    [Image: Google Maps pulls from a variety of sources to predict where users want to travel to. (Source: Google Maps)]

    In this example, I typed in “Alicia”. Now, Google Maps knows me pretty well, so the first result is actually the address of one of my contacts. The remaining results are for addresses or businesses within a 45-mile radius containing the word “Alicia”.

    It doesn’t just pull from there though. This is one of those cases where the more enjoyable you make the in-app experience, the more your users will engage with it — which means more data.

    For example, this is what I see when I search for “Three”:

    [Image: Google Maps will provide ‘Favorite’ locations in search results when relevant. (Source: Google Maps)]

    The very first thing it pulls up is a restaurant called Three Sisters (which is a fantastic restaurant in the city of Providence, by the way). If you look just above the center of the map where the red heart is, that’s the restaurant. This means that I’ve added it to my Favorite places and Google Maps actually calls it out as such in my search results.

    Imagine how much more your users would love your app if it wasn’t always a struggle to get to the content, data or page they were looking for. Or to perform a desired action. When you give your users the ability to personalize their experience like this, use the information they’ve given you to improve their search experience, too.

    4. Enable Users to Change the Design and Layout of the App

    As a designer, you can do your best to design a great experience for your users. But let’s face it:

    You’re never going to please everyone.

    Unlike a website, though, which is pretty much what-you-see-is-what-you-get, SaaS users have the ability to change the design and layout of what they’re interacting with — if you let them. And you should.

    There are many different ways this might apply to the app you’ve built.

    Google Calendar, for example, has a ton of customization options available.

    [Image: Google Calendar allows users to customize the look and view of their calendars. (Source: Google Calendar)]

    On the far left is a list of “My calendars”. Users can click which calendars and associated events they want to see within the app.

    In the bottom-right corner is an arrowhead. This enables users to hide the Google apps side panel and give them more room to focus on upcoming events and appointments.

    In the top-right, users have two places where they can customize their calendar:

    • The Settings bar allows them to adjust the color and density of the calendar.
    • The “Month” dropdown allows them to adjust how much of the calendar is seen at once.

    These customizations would all be useful for any sort of project management, planning or appointment scheduling app.

    For other apps, I’d recommend looking at Gmail. It’s chock full of customizations that you could adapt for your app.

    Previously, if users clicked the Settings widget, it would move them out of the app and into the dedicated settings panel. To be honest, it was annoying, especially if you just wanted to make a small tweak.

    [Image: Gmail’s Settings reveals a list of design and layout customization options. (Source: Gmail)]

    Now, the Settings button opens this panel within Gmail. It enables users to adjust things like:

    • Line spacing,
    • Background theme,
    • Inbox sorting priorities,
    • Reading pane layout,
    • Conversation view on/off.

    This is a recent update to Gmail’s settings, which probably means these are the design customizations its users rely on most.

    For any customizations users want to make that they can’t find in this new panel, they can click “See all settings” and customize the in-app design and layout (among other things) even further.

    Other customizations you might find value in enabling in your app are:

    • Keyboard control,
    • Dark mode,
    • Color-blind mode,
    • Text resizing,
    • List/grid view toggling,
    • Widget and banner hiding,
    • Columns displayed.

    Not only do these design and layout controls enable users to create an interface they enjoy looking at and that works better for their purposes, they can also help with accessibility.

    Wrapping Up

    There’s a reason why Google dominates market share with many of its products. It gets the user experience. Of course, this is due largely to the fact that it has access to more user data than most companies.

    And while we should be designing solutions for our specific audiences, there’s no denying that Google’s products can help us set a really strong base for any audience — if we just pay attention to the trends across its platforms.


    Smashing Editorial
    (ra, yk, il)


    web design

    4 Lessons Web App Designers Can Learn From Google — Smashing Magazine

    08/12/2020

    About The Author

    Suzanne Scacca is a former WordPress implementer, trainer and agency manager who now works as a freelance copywriter. She specializes in crafting marketing, web …
    More about
    Suzanne
    Scacca

    There’s a reason why Google dominates market share for things like search engines, web browsers, email clients and cloud storage services. It knows exactly what consumers want and it has designed simple, intuitive, and useful solutions for them. If there’s one company whose product features you should be mirroring, it’s Google.

    Whenever I’m curious about what more we could be doing to improve our users’ experiences, the first place I look to is Google. More specifically, I go to the Google Developers site or Think with Google to pull the latest consumer data.

    But I was thinking today, “Why don’t we just copy what Google does?”

    After all, Google has to walk the walk. If not, how would it ever convince anyone to adhere to its SEO and UX recommendations and guidelines?

    The only thing is, Google’s sites and apps aren’t very attractive. They’re practical and intuitive, that’s for sure. But designs worth emulating? Eh.

    That doesn’t really matter though. The basic principles for building a good web app exist across each of its platforms. So, if we’re looking for a definitive answer on what will provide SaaS users with the best experience, I think we need to start by dissecting Google’s platforms.

    What Google Teaches Us About Good Web App Design

    What we want to focus on are the components that make Google’s products so easy to use time and time again. By replicating these features within your own app, you’ll effectively reduce (if not altogether remove) the friction your users would otherwise encounter.

    1. Make the First Thing They See Their Top Priority

    When users enter your dashboard, the last thing you want is for them to be overwhelmed. Their immediate impression whenever they enter your app or return to the dashboard should be:

    “I’m exactly where I need to be.”

    Not:

    “What the heck is going on here? Where do I find X?”

    Now, depending on the purpose of your app, there are usually one or two things your users are going to be most concerned with.

    Let’s say you have an app like Google Translate that has a clear utilitarian purpose. There’s absolutely no excuse for cluttering the main page. They’ve come here to do one thing:

    Google Translate translator tool
    Google Translate users don’t have to hunt around for the translator tool. (Source: Google Translate) (Large preview)

    So, don’t waste their time. Place the tool front and center and let all other pages, settings or notices appear as secondary features of the app.

    Something else this example teaches us is how you should configure your tool for users. Google could easily just leave this open-ended, but it defaults to:

    Default Language —> English

    Google’s data likely shows that this is the most popular way users use this app.

    Although you can’t see it in the desktop app, you can see it on mobile. The formula goes like this:

    Default Language —> Recent Language

    I suspect that, for first-time users, Google will set the translation to the user’s native language (as indicated in their Google user settings).

    If you have the data available, use it to configure defaults that reduce the number of steps your users have to take, too.

    Not every web app provides users with a hands-on tool for solving a problem. In some cases, apps enable users to streamline and automate complex processes, which means their primary concern is going to be how well those processes are performing.

    For that, we can look at a product like Google Search Console, which connects users to data on how their sites perform in Google search as well as insights into problems that might be holding them back.

    It’s no surprise then that the first thing they see upon entering it is this:

    Google Search Console overview - Performance and Coverage stats
    The Google Search Console overview page shows users stats on Performance and Coverage. (Source: Google Search Console) (Large preview)

    Performance (the number of clicks in Google search) and Coverage (number of pages indexed without error) are above the fold. Below it is another chart that displays recommended enhancements to improve core web vitals, mobile usability and sitelinks searchbox visibility.

    Bottom line: The Overview page isn’t littered with charts depicting every data point collected by Google Search Console. Instead, it displays only the top priorities so users can get a bird’s-eye view of what’s going on and not get lost in data they don’t need at that time.

    2. Create a Useful and Simple Navigation Wherever Relevant

    This one seems like a no-brainer, but I’ll show you why I bring it up.

    Zoom is a great video conferencing app. There’s no arguing that. However, when users want to schedule a meeting from their browser, this is what they see:

    Zoom in-browser web app with multiple menus to choose from
    The Zoom web app complicates things with multiple menus. (Source: Zoom) (Large preview)

    The “Join Meeting” and “Host Meeting” options are fine as they both eventually push the user into the desktop app. However, the “Schedule Meeting” in-browser experience isn’t great because it leaves the website navigation bars in place, which only serves as a distraction from the app’s sidebar on the left.

    Once your users have created a login and have access to your app, they don’t need to see your site anymore. Ditch the website navigation and let them be submersed in the app.

    Or do as Google Hangouts does. Lay your app out the way users expect an app to be laid out:

    • Primary navigation along the left side,
    • Hamburger menu button and/or More (…) button contain the secondary navigation,
    • Wide open space for users to play in the app.
    Google Hangouts distraction-free interface and simple navigation
    A look inside Google Hangouts and its distraction-free interface and navigation. (Source: Google Hangouts) (Large preview)

    But Google Hangouts doesn’t do away with the google.com website completely. For users that want to quickly navigate to one of Google’s other products, they can use the grid-shaped icon in the top-right corner. So, if you feel it’s necessary for your users to be able to visit your website once again, you can build it into the app that way.

    This example also demonstrates how important it is to keep your navigation as simple as possible.

    Google Hangouts’ primary navigation uses symbols to represent each of the app’s tabs/options:

    Google Hangouts primary navigation design - icons only
    Google Hangouts uses icons to represent the tabs of its primary navigation. (Source: Google Hangouts) (Large preview)

    While I think it’s okay for Google Hangouts to get away with this icon-only menu design, be careful with this approach. Unless the icons are universally understood (like the hamburger menu, search magnifying glass, or the plus sign), you can’t risk introducing icons that create more confusion.

    As NNG points out, there’s a difference between an icon being recognizable and its meaning being indisputable.

    So, one way you can get around this is to make the outward appearance of the menu icon-only. But upon hover, the labels appear so that users have additional context for what each means.

    As for any secondary navigation you might need — including a Settings navigation — you can write out the labels since it will only appear upon user activation.

    Google Hangouts secondary navigation design - icons and labels
    The Google Hangouts secondary navigation uses an icon and label for each tab. (Source: Google Hangouts) (Large preview)

    Although some of the icons would be easy enough to identify, not all of them would instantly be recognizable (like “Invites” and “Hangouts Dialer”). If even one tab in your secondary navigation is rarely seen across other apps, spell them all out.

    One last thing: The divider lines in this menu are a great choice. Rather than jam 10 tabs/options into this navigation bar together, they’re logically grouped, making it easier for users to find what they’re looking for.

    3. Provide Users with Predictive Search Functionality

    Every app should have a search bar. It might be there to help users sift through content, to find the contact they’re looking for from a long list, or to ask a question about something in the app.

    The more complex your app is, the more critical a role internal search is going to play. But if you want to improve your users’ search experience even more, you’ll want to power yours with predictive search functionality.

    Even though I’m sure you have a Support line, maybe a chatbot and perhaps an FAQs or Knowledgebase to help users find what they need, a smart search bar can connect them to what they’re really looking for (even if they don’t know how to articulate it).

    Google has this search functionality baked into most of its products.

    You’re familiar with autocomplete within the Google search engine itself. But here are some other use cases for smart search capabilities.

    Google Drive connects users to documents (of all types — Docs, Sheets, Slides and more) as well as collaborators that match the search query.

    Google Drive search for 'speed'
    An example search for ‘speed’ within Google Drive. (Source: Google Drive) (Large preview)

    Users can, of course, be taken to a full search results page. However, the search bar itself predicts which content is the most relevant for the query. In this case, these are the most recent pieces of content I’ve written that include the term “speed” in the title.

    Google Maps is a neat use case as it pulls data from a variety of connected (Google) sources to try and predict what its users are looking for.

    Google Maps predictive search example 'Alicia'
    Google Maps pulls from a variety of sources to predict where users want to travel to. (Source: Google Maps) (Large preview)

    In this example, I typed in “Alicia”. Now, Google Maps knows me pretty well, so the first result is actually the address of one of my contacts. The remaining results are for addresses or businesses within a 45-mile radius containing the word “Alicia”.

    It doesn’t just pull from there though. This is one of those cases where the more enjoyable you make the in-app experience, the more your users will engage with it — which means more data.

    For example, this is what I see when I search for “Three”:

    Google Maps displays a 'Favorite' location when a user searches for 'three'
    Google Maps will provide ‘Favorite’ locations in search results when relevant. (Source: Google Maps) (Large preview)

    The very first thing it pulls up is a restaurant called Three Sisters (which is a fantastic restaurant in the city of Providence, by the way). If you look just above the center of the map where the red heart is, that’s the restaurant. This means that I’ve added it to my Favorite places and Google Maps actually calls it out as such in my search results.

    Imagine how much more your users would love your app if it wasn’t always a struggle to get to the content, data or page they were looking for. Or to perform a desired action. When you give your users the ability to personalize their experience like this, use the information they’ve given you to improve their search experience, too.

    4. Enable Users to Change the Design and Layout of the App

    As a designer, you can do your best to design a great experience for your users. But let’s face it:

    You’re never going to please everyone.

    Unlike a website, though, which is pretty much what-you-see-is-what-you-get, SaaS users have the ability to change the design and layout of what they’re interacting with — if you let them. And you should.

    There are many different ways this might apply to the app you’ve built.

    Google Calendar, for example, has a ton of customization options available.

    Google Calendar - view customizations
    Google Calendar allows users to customize the look and view of their calendars. (Source: Google Calendar) (Large preview)

    On the far left is a list of “My calendars”. Users can click which calendars and associated events they want to see within the app.

    In the bottom-right corner is an arrowhead. This enables users to hide the Google apps side panel and give them more room to focus on upcoming events and appointments.

    In the top-right, users have two places where they can customize their calendar:

    • The Settings bar allows them to adjust the color and density of the calendar.
    • The “Month” dropdown allows them to adjust how much of the calendar is seen at once.

    These customizations would all be useful for any sort of project management, planning or appointment scheduling app.

    For other apps, I’d recommend looking at Gmail. It’s chock full of customizations that you could adapt for your app.

    Previously, if users clicked the Settings widget, it would move them out of the app and into the dedicated settings panel. To be honest, it was annoying, especially if you just wanted to make a small tweak.

    Gmail Settings panel - design and layout customization options
    Gmail’s Settings reveals a list of design and layout customization options. (Source: Gmail) (Large preview)

    Now, the Settings button opens this panel within Gmail. It enables users to adjust things like:

    • Line spacing,
    • Background theme,
    • Inbox sorting priorities,
    • Reading pane layout,
    • Conversation view on/off.

    This is a recent update to Gmail’s settings, which probably means these are the design customizations users reach for most often.

    For any customizations users want to make that they can’t find in this new panel, they can click “See all settings” and customize the in-app design and layout (among other things) even further.

    Other customizations you might find value in enabling in your app are:

    • Keyboard control,
    • Dark mode,
    • Color-blind mode,
    • Text resizing,
    • List/grid view toggling,
    • Widget and banner hiding,
    • Columns displayed.

    Not only do these design and layout controls enable users to create an interface they enjoy looking at and that works better for their purposes, they can also help with accessibility.
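    If you’re wondering how much work these controls are to wire up, the answer is: not much. Here’s a rough React sketch that persists each preference to localStorage so a user’s choices survive between sessions. The usePreference hook, the storage keys and the theme/density values are illustrative names, not any particular app’s API.

    ```jsx
    import { useEffect, useState } from 'react';

    // A minimal preferences hook: reads the saved value from localStorage
    // (if any) and writes every change back, so the user's choice persists
    // across sessions. The key names and defaults here are illustrative.
    function usePreference(key, defaultValue) {
      const [value, setValue] = useState(() => {
        const saved = window.localStorage.getItem(key);
        return saved !== null ? JSON.parse(saved) : defaultValue;
      });

      useEffect(() => {
        window.localStorage.setItem(key, JSON.stringify(value));
      }, [key, value]);

      return [value, setValue];
    }

    function LayoutSettings() {
      const [theme, setTheme] = usePreference('theme', 'light');
      const [density, setDensity] = usePreference('density', 'comfortable');

      return (
        <div className={`app app--${theme} app--${density}`}>
          <button onClick={() => setTheme(theme === 'light' ? 'dark' : 'light')}>
            Toggle dark mode
          </button>
          <select value={density} onChange={(e) => setDensity(e.target.value)}>
            <option value="comfortable">Comfortable</option>
            <option value="compact">Compact</option>
          </select>
        </div>
      );
    }

    export default LayoutSettings;
    ```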

    Wrapping Up

    There’s a reason why Google dominates market share with many of its products. It gets the user experience. Of course, this is due largely to the fact that it has access to more user data than most companies.

    And while we should be designing solutions for our specific audiences, there’s no denying that Google’s products can help us set a really strong base for any audience — if we just pay attention to the trends across its platforms.


    web design

    Is Redesigning Your Mobile App A Bad Idea? — Smashing Magazine

    07/14/2020

    About The Author

    Suzanne Scacca is a former WordPress implementer, trainer and agency manager who now works as a freelance copywriter. She specializes in crafting marketing, web …
    More about
    Suzanne
    Scacca

    The Scrabble GO, Instacart and YouTube mobile apps have recently undergone disruptive redesigns. Were they worth it in the end? Judging by their users’ reactions, the answer to that is “No”. But that doesn’t mean that redesigns or design tweaks are a bad idea after launch. Let’s take a look at the mistakes made and the lessons we can extract from them.

    I’m all for updating and upgrading mobile apps. I think if you’re not constantly looking at ways to improve the user experience, it’s just too easy to fall behind.

    That said, a redesign should be done for the right reasons.

    If it’s an existing app that’s already popular with users, any changes made to the design or content should be done in very small, incremental, strategic chunks through A/B testing.

    If your app is experiencing serious issues with user acquisition or retention, then a redesign is probably necessary. Just be careful. You could end up making things even worse than they were before.
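    For the A/B testing itself, one common approach is to bucket users deterministically, so the same person always sees the same variant while you measure the response. Here’s a rough JavaScript sketch, assuming a stable user ID; the hash function and the 10% rollout figure are illustrative, not a standard.

    ```js
    // Deterministic A/B bucketing: hash a stable user ID into [0, 100)
    // and roll the new design out to a small slice of users first.
    function hashToPercent(userId) {
      let hash = 0;
      for (let i = 0; i < userId.length; i++) {
        hash = (hash * 31 + userId.charCodeAt(i)) >>> 0; // unsigned 32-bit
      }
      return hash % 100;
    }

    function getVariant(userId, rolloutPercent = 10) {
      // The same user always lands in the same bucket, so their
      // experience stays consistent between sessions.
      return hashToPercent(userId) < rolloutPercent ? 'redesign' : 'control';
    }

    // Usage: render the new design only for the test bucket.
    const variant = getVariant('user-42');
    console.log(variant); // 'redesign' or 'control'
    ```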

    Let’s take a look at some recent redesign fails and review the lessons we can all learn from them.

    Lesson #1: Never Mess With A Classic Interface (Scrabble GO)

    Scrabble is one of the most profitable board games of all time, so it’s no surprise that EA decided to turn it into a mobile app. And it was well-received.

    However, that all changed in early 2020 when the app was sold to Scopely and redesigned into an ugly, confusing and overwhelming mess of its former self.

    Let me introduce you to Scrabble GO as it stands today.

    The splash screen that introduces gamers to the app looks nice. Considering how classically simple and beautiful the board game is, this is a good sign. Until this happens:

    Scrabble GO home screen
    The Scrabble GO home screen is distraction overload. (Source: Scrabble GO) (Large preview)

    I don’t even know where to start with this, but I’m going to try:

    • The colors are way over-the-top and there are too many.
    • Since “Start New Game” is the primary action users want to take, it should be the only button in that color, but “Level 5” and “Level 6” distract from it.
    • The interface is so cluttered that it’s hard to focus on any particular part of it.
    • There’s no sense of control or priority within the design.
    • The navigation has gated pages! And I’m not sure what that icon on the left is supposed to be… gems and rewards? Why then is there a gem counter in the top banner?

    Beyond the UI of the homescreen, the UI and UX within the game board have been altered, too.

    Take, for instance, this plea from @lageerdes on Twitter:

    Twitter user @lageerdes tells Scrabble GO she wants the old app back
    Twitter user @lageerdes asks Scrabble GO why the old functionality is gone. (Source: Twitter) (Large preview)

    It took Scrabble GO over a week to tell @lageerdes something that could’ve easily been spelled out in a game FAQ or Settings page. These aren’t the only classic features that the new app has either complicated or done away with.

    Now, Scopely took note of the negative comments from users and promised to revamp the app accordingly (an encouraging sign). But rather than revert to the old and much-loved design, it just added a new mode:

    Scrabble GO settings with 'Mode Settings'
    Scrabble GO added new ‘Mode Settings’ to appease users. (Source: Scrabble GO) (Large preview)

    You’d think that the mode switcher would be more prominently displayed — like in the menu bar. Instead, it’s buried under the “Profile Settings” tab and there’s no indication anywhere in the app that the classic mode even exists.

    Sadly, classic mode isn’t much of an improvement (classic is on the right):

    Scrabble GO home screen vs. the newly designed classic home screen
    The new Scrabble GO home screen versus the newly designed classic mode home screen. (Source: Scrabble GO) (Large preview)

    The colors are toned down and some of the elements in the top half have been cut out or minimized, but none of it addresses the users’ actual issues with the app or the gameplay.

    Worse, many users are reporting the app crashes on them, as this complaint from Twitter user @monicamhere demonstrates:

    Twitter user @monicamhere complains that Scrabble GO app crashes
    Twitter user @monicamhere complains to Scrabble GO about the app crashing. (Source: Twitter) (Large preview)

    I suspect this is happening because the developers jammed a second, overloaded mode into the app rather than simply refining the existing one based on user feedback.

    So, what’s the lesson here?

    • For starters, don’t mess with a classic.
      The old mobile app closely resembled the physical board game, and that resemblance was a huge part of its appeal. When you throw out an old design for something (seemingly) more trendy, you run the risk of alienating once-loyal users.
    • Also, if it ain’t broke, don’t fix it.
      Previously, the app was very easy to use and came with all the features and functionality users were familiar with from the board game. Now, they’re left with a non-intuitive and distracting mess.
    • If your users are telling you to ditch the redesign, listen to them.
      Who are you building this app for? Yourself or the users who are going to play with it and put money into your pocket?

    Listen to what your users have to say. It’s valuable feedback that could make a world of difference in the user experience.

    Lesson #2: Never Mislead Users At Checkout (Instacart)

    This is an interesting case because the people who objected to this particular Instacart UI update weren’t its primary users.

    Here’s why the change was an issue:

    Users go onto the Instacart website or mobile app and do their grocery shopping from the local store of their choice. It’s a pretty neat concept:

    Instacart mobile app - shopping with Wegmans
    Instacart users can do virtual shopping with grocery stores like Wegmans. (Source: Instacart) (Large preview)

    Users quickly search for items and add them to their virtual shopping cart. In many cases, they have the option to either do curbside pickup or have the groceries delivered to their front doorstep. Either way, a dedicated “shopper” picks out the items and bags them up.

    When the user is done shopping, they get a chance to review their cart and make final changes before checking out.

    On the checkout page, users get to pick when they want their order fulfilled. Beneath this section, they find a high-level summary of their charges:

    Instacart checkout tab with summary of charges
    Instacart checkout sums up the total costs of a user’s order. (Source: Instacart) (Large preview)

    At first glance, this all appears pretty straightforward.

    • The cost of their cart is $174.40, which they already knew.
    • There’s a service fee of $9.99.
    • Sales tax is $4.11.
    • And the total is $197.22.

    But before all that is a section called “Delivery Tip”. This is where Instacart’s shoppers take issue.

    They argued that this is a dark pattern. And it is. Let me explain:

    The first thing that’s wrong is that the Delivery Tip isn’t included with the rest of the line items, even though it’s clearly part of the calculation: $174.40 + $9.99 + $4.11 only comes to $188.50, so the $197.22 total quietly includes the $8.72 (5%) tip. If it’s part of the total, it should be present down there and not separated out in its own section.

    The second thing that’s wrong is that the tip is automatically set at 5% or $2.00. This was the shoppers’ biggest grievance at the time. Because the “(5.0%)” label wasn’t shown on the delivery tip line back in 2018, users might’ve seen the dollar amount, thought “That seems reasonable enough” and left it at that. Spell out the percentage, though, and users may be inclined to leave more money.

    For users who take the time to read through their charges and realize that they can leave a larger tip, this is what the tip update page looks like for small orders:

    Instacart delivery tip customization
    Instacart enables users to change the way they tip the delivery person. (Source: Instacart) (Large preview)

    It’s oddly organized as the pre-selected amount sits at the very bottom of the page. And then there’s a random $6 tip included as if the app creators didn’t want to calculate what 20% would be.

    That’s not how the tip is presented to users with larger orders though:

    Instacart users can customize delivery tip on big orders
    Instacart enables users to customize the tip they leave the delivery person, from 5% to 20% or they can customize the amount. (Source: Instacart) (Large preview)

    It’s a strange choice to present users with a different tip page layout. It’s also strange that this one includes an open field to input a custom tip (under “Other amount”) when it’s not available on smaller orders.

    If Instacart wants to avoid angering its shoppers and users, there needs to be more transparency about what’s going on and they need to fix the checkout page.

    Dark patterns have no place in app design and especially not at checkout.

    If you’re building an app that provides users with delivery, pickup or personal shopper services (which is becoming increasingly common), I’d recommend designing your checkout page like Grubhub’s:

    Grubhub checkout page with tips
    The Grubhub checkout page recaps the user’s order and provides tip amounts in percentages. (Source: Grubhub) (Large preview)

    Users not only get a chance to see their items at the time of checkout, but the tip line is not deceptively designed or hidden. It sticks right there to the bottom of the page.

    What’s more, tips are displayed as percentage amounts instead of random dollar values. For U.S. consumers who are used to tipping 20% for good service, this is a much better way to ensure they leave a worthwhile tip for service workers, rather than assuming the pre-filled dollar amount is okay.

    And if they want to leave more or less, they can use the “Custom” option to input their own value.
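    Percentage-based tip options like these take very little code to build. Here’s a sketch that derives the dollar amounts from the order subtotal rather than offering arbitrary flat sums; the default percentages are illustrative:

    ```js
    // Build tip options from the order subtotal, Grubhub-style:
    // show the percentage and the resulting dollar amount together.
    function buildTipOptions(subtotal, percents = [10, 15, 20, 25]) {
      return percents.map((percent) => ({
        percent,
        amount: Math.round(subtotal * percent) / 100, // rounded to the cent
        label: `${percent}% ($${(subtotal * percent / 100).toFixed(2)})`,
      }));
    }

    // For the $174.40 Instacart order above:
    buildTipOptions(174.40).forEach((opt) => console.log(opt.label));
    // 10% ($17.44), 15% ($26.16), 20% ($34.88), 25% ($43.60)
    ```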

    Lesson #3: Never Waver In Your Decision To Roll Back (YouTube)

    When the majority of your users speak up and say, “I really don’t like this new feature/update/design”, commit to whatever choice you make.

    If you agree that the new feature sucks, then roll it back. And keep it that way.

    If you don’t agree, then tweak it or just give it time until users get back on your side.

    Just don’t flip-flop.

    Here’s what happened when YouTube switched things up on its users… and then switched them again:

    In 2019, YouTube tested hiding its comments section beneath this icon:

    The Verge and XDA Developers - YouTube comments test
    The Verge and XDA Developers report on a new placement of YouTube comments in 2019. (Source: Verge) (Large preview)

    Before this test, comments appeared at the very bottom of the app, beneath the “Up next” video recommendations. With this update, however, they were moved behind this new button. Users would only see comments if they clicked it.

    The response to the redesign clearly wasn’t positive, as YouTube ended up rolling back the update.

    In 2020, YouTube decided to play around with the comments section again. Unlike the 2019 update, though, YouTube has committed to this one (so far).

    Here’s where the comments appear now:

    YouTube comments section design in 2020
    The YouTube comments redesign puts the comments above the ‘Up next’ section. (Source: YouTube) (Large preview)

    They’re sandwiched between the “Subscribe” bar and the “Up next” section.

    If YouTube users go looking for the comments section in the old spot, they’re going to find this message now:

    YouTube notice: ‘Comments have moved’
    A notice appears when YouTube users go looking for comments in the old location. (Source: YouTube) (Large preview)

    This is a nice touch. Think about how many times you’ve had to redesign something in an app or on a website, but had no way of letting regular users know about it. Not only does this tell them there’s been a change, but “Go To Comments” takes them there.

    With this tooltip, YouTube doesn’t assume that users will zero in on the new section right away. It shows them where it is:

    YouTube new comments section tooltip
    YouTube users see a tooltip that shows them where the new comments section is. (Source: YouTube) (Large preview)

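    A dismissible “this moved” notice like YouTube’s is simple to build yourself. Here’s a minimal React sketch: it shows the message until the user acknowledges it once, remembers that in localStorage, and offers a “Go To Comments” action. The storage key and prop names are made up for illustration.

    ```jsx
    import { useState } from 'react';

    // Illustrative storage key: remembers that the user has seen the notice.
    const NOTICE_KEY = 'seen-comments-moved-notice';

    function CommentsMovedNotice({ onGoToComments }) {
      const [seen, setSeen] = useState(
        () => window.localStorage.getItem(NOTICE_KEY) === 'true'
      );

      // Once acknowledged, never show the notice again.
      if (seen) return null;

      const dismiss = () => {
        window.localStorage.setItem(NOTICE_KEY, 'true');
        setSeen(true);
      };

      return (
        <div role="status">
          <p>Comments have moved. Find them right below the video details.</p>
          <button onClick={() => { dismiss(); onGoToComments(); }}>
            Go To Comments
          </button>
          <button onClick={dismiss}>Dismiss</button>
        </div>
      );
    }

    export default CommentsMovedNotice;
    ```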
    I actually think this is a good redesign. YouTube might be a place for some users to mindlessly watch video after video, but it’s a social media platform as well. Hiding the comments section under a button or tucking it away at the bottom of the page doesn’t exactly encourage socialization.

    That said, users aren’t responding well to this change either, as Digital Information World reports. From what I can tell, the backlash is due to Google/YouTube disrupting the familiarity users have with the app’s layout. There’s really nothing here that suggests friction or disruption in their experience. It’s not even like the new section gets in the way or impedes users from binge-watching videos.

    This is a tricky one because I don’t believe that YouTube should roll this update back.

    There must be something in YouTube’s data that’s telling it that the bottom of the app is a bad place for comments, which is why it’s taking another stab at a redesign. It might be low engagement rates or people expressing annoyance at having to scroll so much to find them.

    As such, I think this is one case where a mobile app developer is right not to listen to its users. And, in order to restore their trust and satisfaction, YouTube will need to hold firm to its decision this time.

    Is A Mobile App Redesign The Best Idea For You?

    Honestly, it’s impossible to please everyone. However, your goal should be to please, at the very least, most of your users.

    So, if you’re planning to redesign your app, I’d suggest taking the safe approach and A/B testing it first to see what kind of feedback you get.

    That way, you’ll only push out data-backed updates that improve the overall user experience. And you won’t have to deal with rolling back the app or the negative press you get from media outlets, social media comments, or app store reviews.
