
    Building User Trust In UX Design — Smashing Magazine

    02/26/2021


    Trust is at the heart of any product’s long-term strategy. There are many ways to earn it, and even more ways to lose it. In this article, we’ll go through how you, as a product designer, can make sure your product nurtures and retains trust throughout every touchpoint. To do that, we’ll be borrowing some of the tricks marketers and product people have up their sleeves.

    Building trust is one of the central goals of user experience design. And yet trust is a concept that’s very hard to define in a precise manner. We all know it when we feel it but often fall short of putting it in words. Being able to turn the elusive and intangible into actionable and concrete steps, however, is exactly what makes UX so crucial in the modern business ecosystem.

    Although a product experience that is useful and coherent is what fundamentally builds a sense of security and satisfaction, there’s a lot more nuance that goes into building it. That’s what this article is about. We’ll take a deeper dive into users’ trust and how we can use UX to build a lasting relationship with your clientele.

    Instilling trust goes beyond the bare visuals of a product. Ideally, a UX designer’s work starts well before the first lines are drawn and continues long after the designs are deployed.

    Being present throughout gives us a comprehensive view of the whole customer lifecycle, and it encourages us to borrow tools and approaches from marketers, product managers, and developers. Being well-rounded in product development activities is another theme we’ll advocate for throughout the piece: by dabbling in non-design activities, we gain an in-depth understanding of all the areas where trust is vital.

    Think About The Customer Journey

    A central competency of UX design is a good understanding of your users’ needs, preferences, and emotions. Therefore, over time, we designers need to develop a wide array of skills to improve our understanding of our users and their interaction with our products.

    One such skill entails using qualitative data and detailed analytics, which is vital in allowing us to outline a user persona’s most important qualities. Analytics can be used to create hypotheses and validate or discard them. As a result, you’ll be able to create experiences that foster customer loyalty and a sustained sense of trust.

    Let’s look into the stages of a customer journey and explore how UX designers can bring value to the table. You might also notice that the way we suggest structuring the customer journey map is marketing-oriented. That orientation speaks to the purpose of this article: to give designers a broader perspective.

    Below, we can see one such example of a customer journey that’s structured around the so-called “funnel” marketers and sales-people use:

    [Image: Example of a customer journey. Designed by Adam Fard UX Studio.]

    Below is the classic visualization of a sales/marketing funnel. You may have come across different wordings for the stages, but this doesn’t change their essence. The reason this visualization is shaped like a funnel is simple: only a small portion of the people who come across your product will end up becoming paying customers. We’ve also combined intent and action into one stage, since in the context of building trust through good UX they’re fairly similar.

    [Image: The sales/marketing funnel. Illustration by Adam Fard UX Studio.]


    Now we need to apply this funnel thinking to a customer journey. That’s exactly what we did with the customer journey map (CJM) below. This map was created for one of our projects a while ago and was tweaked significantly to respect the client’s privacy. By focusing on the whole funnel, we were able to go beyond the product UI and audit the whole UX, from the users’ very first interaction with the product in question.

    Now that we’ve talked briefly about how we can map users’ journey to pinpoint trust-sensitive areas, let’s move on to the first stage of the funnel: Awareness.

    Awareness

    Awareness is the stage where we should analyze how customers learn about a product or service. When devising a strategy for this step, we need to start from our users’ problems and their most common pain points. Being user-centric enables us to think about the best ways to approach potential customers while they are trying to tackle a certain pain point. The goal here is to have a reserved and more educational tone.

    [Image: The awareness stage of the funnel. Illustration by Adam Fard UX Studio.]

    Sounding too corporate or salesy can have an adverse effect on a person who isn’t familiar with the product. The way we should approach the awareness stage depends on whether your product has launched or not.

    In order to map a journey that is representative of real users we need real data. The ways of collecting this data will depend on whether the product in question is launched or not. Let’s go through both of these scenarios separately.

    [Image: Two scenarios of collecting the data. Illustration by Adam Fard UX Studio.]

    The Approach For Launched Products

    A product or service that has already hit the market can learn a lot about the people it attracts. Both qualitative and quantitative methods can provide us with a wealth of valuable insight.

    There are plenty of tools and techniques on the market that will help you get to know your users better. Here are the three that are used most often:

    • Google Analytics;
    • FullStory (or an equivalent analytics tool);
    • User interviews.

    Let’s break down the three in more detail.

    Google Analytics

    Google Analytics is a popular tool that is predominantly used by marketers, but it has gradually been adopted by UX specialists as well. It’s an excellent way to learn about the types of audiences you need to design for and to create hypotheses about their preferences. More importantly, Google Analytics gives you insight into how people find you and, conversely, how they do not find you.

    A launched product can dive into a variety of metrics to better understand its clientele. Here are a few of them:

    • Top Sources Of Traffic
      This allows you to understand which channels are the most successful at driving awareness. Are you active enough on these channels? Can anything be improved in terms of your online presence?

    Here’s how Google Analytics presents data on where your users come from:

    [Image: Google Analytics’ data on traffic sources.]
    • User Demographics
      This provides you with data on your audience’s age, gender, lifestyle, and interests. That’s one of the ways you can validate a UX persona you’ve created with data rather than your assumptions;

    Here’s how Google Analytics visualizes the data on the users’ location:

    [Image: A screenshot taken from Google Analytics.]
    • Keyword Insights
      You can use two approaches here. The first involves Google Search Console. It shows you the keywords your audience uses to locate your page, provides a wealth of insight into user pain points, and can inform your keyword strategy.

    The second approach is gauging the data from SEO tools like Ahrefs or SEMrush to see how people phrase their search queries when they face a problem your product solves.

    Once you have an understanding of the keywords that your potential customers use, put them into Google. What do you find there? A competitor product? An aggregator website like Capterra or Clutch? Perhaps nothing that suits the query? Answers to these questions will be invaluable in informing your decisions about optimizing the very first stages of your customer journey.

    Here’s how Google Search Console shows the keywords users searched for before ending up on your website:

    [Image: A screenshot taken from Google Search Console.]
    FullStory And Its Equivalents

    There is now a great variety of UX analytics tools. They help translate complex data into actionable insights on how to improve your online presence. The tool that we use, and see other designers use most often, is FullStory. Such tools are a great solution when you’re looking to reduce UI friction, find ways to enhance funnel completion, and so forth.

    By using such tools, businesses can learn a lot about user behavior and how they can calibrate products to their needs. Do your users read the product description you write? Do they skim it? What part of the page seems to grab their attention? Does that validate or refute your initial assumptions?

    [Image: The FullStory tool. Source: fullstory.com]
    User Interviews

    Interviewing your user base has a broad spectrum of benefits when it comes to understanding their motivations, values, and experiences. There are many kinds of interviews, e.g. structured, unstructured, ones that feature closed or leading questions, and so on. They all have their benefits and can be tailored specifically to your service or user base to extract maximum insight.

    For the purposes of creating a customer journey map that visualizes real data, consider asking questions like:

    “How would you go about looking for an X service or product?”

    “What information is/was the most important while making a purchasing decision?”

    “What are some of the red flags for you when searching for our service/product?”

    [Image: A user interview. Source: shutterstock.com]

    Approach For Products Pending Launch

    There’s plenty of valuable insight that can be gathered without having a launched product. Designs that instill trust from day one are bound to maximize an organization’s success in the long run.

    Here are the tools and techniques you should use:

    • Keyword and online research;
    • User interviews;
    • Competitor research.

    Let’s go through each of those.

    Keyword And Online Research

    One of the most straightforward ways to establish whether a product fits its market is keyword research. Looking for keywords is usually associated with SEM and SEO practices, but there’s more to it: this kind of research will also reveal a lot about the most prominent needs on the market.

    There are a few methods of keyword research that can be used to establish market fitness:

    • Mining For Questions And Answers
      Think about websites like Quora or Reddit. Are people asking about how to solve a problem your product solves? What are the ways they currently go about solving it?
    [Image: A screenshot from a Reddit thread.]
    • Competitor Reviews And Descriptions
      Is there a trend in why competitors get bad reviews? Conversely, is there something that helps them get better reviews? Is there a gap in their features?
    • Social Listening
      Go through Twitter, Facebook, and LinkedIn hashtags and groups. See if there are communities built around the problem you solve or the demographic you target. If so, see what these people talk about, and ask them questions.
    • Keyword Research Tools
      This research method helps you learn two things. The first one is whether people have a need for your product or service. By seeing the number of queries in a given period of time you can draw conclusions about the viability of your product. The second valuable insight is seeing how people describe the problem you’re solving. Knowing how people talk about their pains, in turn, will help you speak the same language with your customers.
    User Interviews

    To some, conducting user interviews before a product launch may seem pointless, but that’s far from true.

    Understanding who your potential customers are and learning about their needs and preferences is a valuable vehicle for building trust.

    Here are a few important things you can learn from potential users:

    • Whether or not they like your design.
      The visual side of a product is a vital link in building trust. Of course, for someone to like your design implies that you already have some designs complete.
    • Whether or not they find your product idea useful.
      This information will allow you to analyze how well your product fits the market.
    • The features that they’d like to see in your product.
      This will help you quickly adapt to the needs of your customers.
    • Whether or not they find it easy to use your product.
      This data will inform your product’s usability, which also implies having some designs complete. A prototype would be ideal for early usability testing.

    Thorough and well-planned user interviews are instrumental in making intelligent business decisions. They provide you with invaluable insight rooted in feedback directly from your potential users.

    Competitor Research

    Understanding your competitors’ products is vital when it comes to market differentiation. It enables us to learn what customers are lacking and fill in those gaps.

    Here are a few things that’ll help you conduct simple competitor research with trust in mind:

    • Choose the right competitors to research.
      By the way, these don’t have to be digital products. For example, a simple notepad is a competitor to productivity apps, as they solve the same problem: staying on top of your tasks and staying productive. How does that help with trust and creating a CJM? It allows you to empathize and put yourself in the shoes of your users. It also helps you craft authentic and relatable messaging that resonates with people.
    • Ensure that your analysis is consistent.
      It’s important to have a clear understanding of which aspects you’re going to analyze. Come up with analysis criteria, so that your notes are structured and easy to draw conclusions from.
      Considering different options is almost always a part of a customer’s journey. You have to make it easy to understand how you’re better than the alternatives.
    • Establish the best sources for your data.
      The best source is users: either yours or someone else’s. Period. But a few Google searches certainly won’t do any harm.
    • Define the best ways to incorporate your findings into your product at its inception.

    Studying your competition will provide you with a wealth of quantitative and qualitative data that will guide your business decisions. As a result, you’ll create a product that fits your users’ needs and instills trust and satisfaction.

    Consideration & Acquisition

    Users that have made it to the consideration stage are interested in your product but aren’t prepared to become paying customers. At this point, they’re evaluating the options offered by your competition and assessing whether they’ll get the value they’re looking for.

    [Image: The consideration and acquisition stages of the funnel. Designed by Adam Fard UX Studio.]

    There is a wide array of things businesses can do to motivate users to transition into a paying relationship through building trust. Here are a few of them:

    Explain How Your Algorithms Work

    If your product revolves around AI/ML algorithms, explaining how they work is an important part of the customer experience.

    We’re typically very sensitive about our data, so there’s no reason to think that users will blindly trust a product’s AI. It’s our responsibility to counteract that distrust by explaining how the algorithm works and what kind of data it will use.

    Here are a few great ways to outline the AI’s functionality while also encouraging users to make their own informed decisions:

    • Calibrate Trust
      AI systems are based on stats and numbers, which means that they can’t replace rational human thought. Emphasize that your algorithm is skilled at giving suggestions, but users should make their own choices.
    • Display Confidence Levels
      An essential aspect of the scientific approach is that there are no facts — there is only evidence. Make sure to communicate how confident your algorithm is that something is true.
    • Explain Algorithm Outputs
      The results of an analysis must be accompanied by a clear explanation thereof.

    Good UX & UI

    A well-executed UI is at the crux of user trust. Satisfying visuals, consistency, and ethical design will make your product appear trustworthy. Lacking the above will dissuade people from purchasing your product or services.

    Here’s an older design example. Would you willingly use such a service, especially when the competitors’ designs aren’t stuck in 2003?

    [Image: How Gmail looked in 2003. (Source: Vala Afshar)]

    No offense to Gmail’s former self, by the way. There’s a reason it doesn’t look like that anymore, though.

    The same could also be said about your product’s UX. Confusing user flows, poor feature discoverability, and other usability issues are a surefire way to scare away a good chunk of new users. A good remedy to such pitfalls is making sure your design adheres to the usability heuristics. If you’re dealing with a legacy design, conducting a heuristic evaluation would also serve you well.

    Also, stuff like fake buttons, dark patterns, and a wonky interface are guaranteed to seriously hinder your growth.

    [Image: An example of a website that clearly employs dark patterns. (Source: pdfblog.com)]

    Testimonials & Reviews

    Customer reviews are essential when it comes to building trust. There’s a significant body of research indicating that positive feedback can boost your sales and conversions.

    You don’t have to take our word for it. Here’s what researchers at the Spiegel Research Center have to say about the importance of reviews:

    Based on data from the high-end gift retailer, we found that as products begin displaying reviews, conversion rates escalate rapidly. The purchase likelihood for a product with five reviews is 270% greater than the purchase likelihood of a product with no reviews.

    [Image: A screenshot of reviews, taken from Clutch.]

    Plus, studies have shown that people use testimonials to assess how trustworthy a product is.

    It’s also worth noting that people who have negative experiences are a lot more likely to write a review than those who had a good one. That’s why you should be creative in asking people to leave reviews. Here’s how Upwork approaches soliciting feedback.

    [Image: A screenshot of reviews, taken from Upwork.]

    Notice that Upwork lets you see the review a customer left for you only after you’ve left one yourself. It’s fascinating how they leverage curiosity to encourage users to leave feedback.

    Over 90 percent of internet users read online reviews, and almost 85 percent of them trust them as much as a recommendation from a friend. Reviews are an important part of a trustworthy online presence.

    That being said, it’s important not to create fake reviews that glorify your product. Please don’t buy reviews or mislead users in any other way. People can generally sense when praise is excessive and disingenuous. Furthermore, users appreciate a few negative reviews as well.

    A study conducted by Northwestern University and PowerReviews concluded the following:

    “As it turns out, perfect reviews aren’t the best for businesses, either. Our research with Northwestern University found that purchase probability peaks when a product’s average star rating is between 4.2 – 4.5, because a perfect 5-star rating is perceived by consumers as too good to be true.”

    Badges

    Trust badges are icons that inform your users about the security of your product/service. Badges are especially important if your site has a payment page.

    [Image: Badges like these help instill trust. (Source: Marianne Wright)]

    Providing your credit card information on a website is a sign of trust. Therefore, it’s essential that we not only abide by security standards but also convey the fact that we do.

    Badges are also invaluable when it comes to showcasing important partnerships or awards. For example, B2B companies often display awards from websites like Clutch or GoodFirms.

    [Image: Examples of different badges.]

    Good Spelling And Grammar

    Poorly written copy is a simple way to ruin your online credibility. A few typos will certainly dissuade some people from using your product by undermining their trust in it.

    Think of it this way: how can you trust a service that can’t even get its text right? Would you trust its online security? Would you be willing to provide your card information to it?

    The pitfall of poor grammar and spelling might seem obvious, but oftentimes UX copy is written in a rush. And we designers are prone to glossing over the copy without giving it much consideration.

    You’d be surprised how many error notifications and other system messages are written in a hurry never to be reviewed again.

    Blunders like the one in the screenshot below, in our experience, happen way too often:

    [Image: Notice how the error message uses jargon. (Source: Alex Birkett)]

    Retention

    Considering that a customer has made it to the retention stage, it’s fair to say that you’ve earned their trust. However, this trust needs to be maintained to ensure that they’ll continue using your product. Moreover, whenever there are people involved, screw-ups are bound to happen. That means you need a plan for fixing mistakes and winning the trust back.

    [Image: The retention stage of the funnel. Illustration by adamfard.com]

    Here are a few things you can do to elevate user experience and maintain a high trust level:

    Emails

    Effective email communication is paramount to customer retention. A whitepaper by Emarsys indicates that about 45% of the businesses they surveyed use emails to retain their customers.

    As a communication medium, email is among the most expressive. It can convey emotions through text and media while also addressing customers’ needs.

    A user-centric approach to email marketing is bound to keep your customers happy, informed, and engaged. That implies not spamming and providing actual value or entertainment. Preferably, both.

    [Image: Forever 21 doing damage control to retain their customers’ loyalty. (Source: Iuliia Nesterenko)]

    Notifications

    Consistent and well-thought-out push notifications are also a great way to keep your customers intrigued.

    First off, it’s always a good idea to welcome your users. They’ve just made an important step — they’ve bought your product or purchased a membership. It’s a simple and elegant way of thanking your customer for their choice.

    Secondly, consider notifying them about exclusive offers. Sharing information on special deals allows you to provide them with extra value for merely being a customer of yours.

    Finally, consider personalizing your notifications. Using users’ names or recent activity to notify them about relevant things will also skyrocket engagement. However, it’s worth mentioning that being explicit about having users’ information too often, or using sensitive data to personalize notifications, can come across as creepy.

    [Image: A screenshot of a Starbucks app notification.]

    Whether the notification above is creepy is for you to decide 🙂

    In-product Perks

    There is a variety of bonuses you can offer to build trust in the retention stage. They nudge your customers to use your product actively. These are especially potent in making up for any screw-ups.

    Here are a few popular ones you can look into:

    • Closed beta access to new features;
    • Seasonal discounts;
    • Loyalty programs;
    • Discounts on renewals.
    [Image: Notice how Kate Spade nudges the users towards the purchase.]

    Conclusion

    Phew, this article has been quite a journey, and we’ve almost reached the end. To help you consolidate everything, let’s recap its contents.

    Creating a successful product is all about building trust. Luckily, there are many ways to improve a product’s trustworthiness through UX. However, it’s essential to apply these practices consistently: customers seek to interact with brands that can deliver a great experience throughout all interactions and touchpoints.

    One of the ways to account for each touchpoint is to reconcile two journey-mapping techniques: the marketing and sales funnel and the customer journey map. The funnel allows us to go beyond the in-app experience, which designers are often reluctant to do, while the customer journey map provides empathy, structure, and depth of analysis.

    Listing all of the ways to boost trustworthiness for each funnel stage would take another couple of pages, so a simple piece of advice will have to do: empathy is the key to getting into your users’ shoes and tackling their trust concerns. For a more concrete list of guidelines, scroll up and skim through the headers. That should jog your memory.

    The bottom line is that we encourage you, dear reader, to shortlist the stages your users go through before actually becoming your users. Is there anything that might undermine your product’s trustworthiness? Is there anything you could improve to nudge a soon-to-be user in the right direction? Giving definitive answers to these questions and addressing them is a surefire way to a better-designed product.


    Building A Discord Bot Using Discord.js — Smashing Magazine

    02/25/2021


    An introduction to building a Discord bot using the Discord.js module. The bot will share random jokes, assign or revoke user roles, and post tweets of a specific account to a Discord channel.

    Team communication platforms are getting more popular by the day as more and more people work from home. Slack and Discord are two of the most popular team communication platforms. While Discord is focused on gamers, some functionality, such as the ability to add up to 50 members to a voice call room, makes it an excellent alternative to Slack. One of the most significant advantages of using such a platform is that many tasks can be automated using bots.

    In this article, we’ll build a bot from scratch using JavaScript and with help from Discord.js. We’ll cover the process from building the bot up to deploying it to the cloud. Before building our bot, let’s jot down the functionality that our bot will have:

    • Share random jokes from an array of jokes.
    • Add and remove user roles by selecting emoji.
    • Share tweets from a particular account to a particular channel.

    Because the Discord.js module is based on Node.js, I’ll assume that you are somewhat familiar with Node.js and npm. Familiarity with JavaScript is a must for this article.

    Now that we know the prerequisites and our goal, let’s start. And if you want to clone and explore the code right away, you can find it in the GitHub repository.

    Steps To Follow

    We will be building the bot by following a few steps.

    First, we’ll build a Discord server. A Discord server is like a group in which you can assign various topics to various channels, very similar to a Slack server. A major difference between Slack and Discord is that Slack requires different login credentials to access different servers, whereas in Discord you can access all of the servers that you are part of with a single authentication.

    The reason we need to create a server is that, without admin privileges for a server, we won’t be able to add a bot to the server. Once our server is created, we will add the bot to the server and get the access token from Discord’s developer portal. This token allows us to communicate with the Discord API. Discord provides an official open API for us to interact with. The API can be used for anything from serving requests for bots to integrating OAuth. The API supports everything from a single-server bot all the way up to a bot that can be integrated on hundreds of servers. It is very powerful and can be implemented in a lot of ways.

    The Discord.js library will help us to communicate with the Discord API using the access token. All of the functions will be based on the Discord API. Then, we can start coding our bot. We will start by writing small bits of code that will introduce us to the Discord API and the Discord.js library. We will then understand the concept of partials in Discord.js. Once we understand partials, we’ll add what’s known as a “reaction role” system to the bot. With that done, we will also know how to communicate with Twitter using an npm package called twit. This npm package will help us to integrate the Twitter tweet-forwarding functionality. Finally, we will deploy it to the cloud using Heroku.

    Now that we know how we are going to build our bot, let’s start working on it.

    Building A Discord Server

    The first thing we have to do is create a Discord server. Without a server with admin privileges, we won’t be able to integrate the bot.

    Building a Discord server is easy, and Discord now provides templates, which make it even easier. Follow the steps below, and your Discord server will be ready. First, we’ll choose how we are going to access the Discord portal. We can use either the web version or the app. Both work the same way. We’ll use the web version for this tutorial.

    If you’re reading this article, I’ll assume that you already have a Discord account. If not, just create an account as you would on any other website. Click the “Login” button in the top right, and log in if you have an account, or click the “Register” button. Fill out the simple form, complete the Captcha, and you will have successfully created an account. After opening the Discord app or website, click the plus icon on the left side, where the server list is. When you click it, you’ll be prompted to choose a template or to create your own.

    [Image: Creating a server from a template or from scratch in Discord.]

    We’ll choose the “Create My Own” option. Let’s skip the next question. We’ll call our Discord server “Smashing Example”. You may also provide a photo for your server. Clicking the “Create” button will create your server.

    Registering the Bot With Discord

    Before coding the bot, we need to get a token provided by Discord. This token will establish a connection from our code to Discord. To get the token, we have to register our bot with our server. To register the bot, we have to visit Discord’s developer portal. If you are building a Discord app for the first time, you’ll find an empty list there. To register our app, click on the “New Application” link in the top-right corner. Give your application a name, and click the “Create” button. We’ll name our app “Smashing App”.

    [Image: Adding a new app to the Discord Developer Portal.]

    The new menu gives us some options. On the right side is an option labelled “Bot”. Click it, and select “Add Bot”. Click the confirmation, change the name of the bot if you want, save the changes, and copy the token received from this page. Our bot is now registered with Discord. We can start adding functionality and coding the bot.

    Building The Bot

    What Is Discord.js?

    Discord.js defines itself like so:

    Discord.js is a powerful node.js module that allows you to interact with the Discord API very easily. It takes a much more object-oriented approach than most other JS Discord libraries, making your bot’s code significantly tidier and easier to comprehend.

    So, Discord.js makes interaction with the Discord API much easier. It has 100% coverage of the official Discord API.

    Initializing The Bot

    Open your favorite text editor, and create a folder in which all of your files will be saved. Open the command-line interface (CLI), cd into the folder, and initialize the folder with npm: npm init -y.

    We will need two packages to start building the bot. The first is dotenv, and the second, obviously, is the Discord.js Node.js module. If you are familiar with Node.js, then you’ll be familiar with the dotenv package. It loads the environment variables from a file named .env to process.env.

    Install these two using npm i dotenv discord.js.

    Once the installation is complete, create two files in your root folder. Name one of the files .env. Name the other main file whatever you want. I’ll name it app.js. The folder structure will look like this:

    │    .env
    │    app.js
    │    package-lock.json
    │    package.json
    └─── node_modules
    

    We’ll store tokens and other sensitive information in the .env file, and store the code that produces the results in the app.js file.

    Open the .env file, and create a new variable. Let’s name the variable BOT_TOKEN for this example. Paste your token in this file. The .env file will look similar to this now:

    BOT_TOKEN=ODAxNzE1NTA2Njc1NDQ5ODY3.YAktvw.xxxxxxxxxxxxxxxxxxxxxxxx
    

    We can start working on the app.js file. The first thing to do is to require the modules that we installed.

    const Discord = require('discord.js');
    require('dotenv').config();
    

    The dotenv module is initialized using the config() method. We can pass parameters to the config() method, but because this is a very simple use of the dotenv module, we don’t need any special options.
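
    If your setup ever needs it, though, config() accepts an options object; for example, a custom file location can be passed via its documented path option. The file path below is purely hypothetical:

    // Only needed if your variables live somewhere other than ./.env
    require('dotenv').config({ path: './config/.env' });
    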

    To start using the Discord.js module, we have to instantiate a client using its constructor. This is shown in the documentation:

    const client = new Discord.Client();
    

    The Discord.js module provides a method named client.on. The client.on method listens for various events. The Discord.js library is event-based, meaning that every time an event is emitted from Discord, the functionality attached to that event will be invoked.

    The first event we will listen for is the ready event. It fires when the connection with the Discord API is ready. We can pass in a callback that will be executed once a connection is established between the Discord API and our app. Let’s put a console.log statement in that callback, so that we know whether a connection has been established. The client.on method with the ready event looks like this:

    client.on('ready', () => {
      console.log('Bot is ready');
    });
    

    But this won’t establish a connection with the API yet, because we haven’t logged the bot in. To enable this, the Discord.js module provides a login method. By calling the login method on the client and passing it the token, we can log the app into Discord.

    client.login(process.env.BOT_TOKEN)
    

    If you start the app now — with node app.js or, if you are using nodemon, then with nodemon app.js — you will be able to see the console message that you defined. Our bot has successfully logged in with the Discord server now. We can start experimenting with some functionality.

    Let’s start by getting some message content depending on the code.

    The message Event

    The message event fires whenever a message is received. Using the reply method, we can program the bot to reply according to the user’s message.

    client.on('message', (msg) => {
      if (msg.content === 'Hello') msg.reply('Hi');
    });
    

    This example code will reply with a “Hi” whenever a “Hello” message is received. But in order to make this work, we have to connect the bot with a server.

    Connecting The Bot With A Discord Server

    Up to this point, the bot is not connected with any server. To connect with our server (Smashing Example), visit Discord’s developer portal. Click on the name of the app that we created earlier in this tutorial (in our case, “Smashing App”). Select the app, and click on the “OAuth2” option in the menu. You’ll find a group named “Scopes”. Check the “bot” checkbox, and copy the URL that is generated.

    [Image: Connecting the bot with the Discord server: OAuth for the bot.]

    Visit this URL in a new tab, choose your server, and click on “Authorize”. Complete the Captcha, and our bot will now be connected with the server that we chose.

    If you visit the Discord server now, you will see that a notification has already been sent by Discord, and the bot is now also showing up in the members’ list on the right side.

    Adding Functionality to the Bot

    Now that our bot is connected with the server, if you send a “Hello” to the server, the bot will reply with a “Hi”. This is just an introduction to the Discord API. The real fun is about to start.

    To familiarize ourselves a bit more with the Discord.js module, let’s add functionality that sends a joke whenever a particular command is received. This is similar to what we have just done.

    Adding A Random Joke Function To The Bot

    To make this part clearer and easier to understand, we aren’t going to use any APIs. The jokes that our bot will return will be a simple array. A random number will be generated each time within the range of the array, and that specific location of the array will be accessed to return a joke.

    If you have ever used functionality provided by a bot in Discord, you might have noticed that a special character distinguishes commands from normal messages. I am going to use a ? in front of our commands to make them look different from normal messages. So, our joke command will be ?joke.

    We will create an array named jokes in our app.js file. The way we will get a random joke from the array is by using this formula:

    jokes[Math.floor(Math.random() * jokes.length)]
    

    The Math.random() * jokes.length formula will generate a random number within the range of the array. The Math.floor method will floor the number that is generated.

    If you console.log() Math.floor(Math.random() * jokes.length) a few times, you’ll get a better understanding. Finally, passing that number into jokes[] gives us a random joke from the jokes array.
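
    As a quick sanity check, here’s a minimal snippet you can run on its own with Node (the three-element array is just for illustration):

    const jokes = ['joke A', 'joke B', 'joke C'];
    for (let i = 0; i < 5; i++) {
      // Always prints 0, 1, or 2, i.e. a valid index into the array
      console.log(Math.floor(Math.random() * jokes.length));
    }
    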

    You might have noticed that our first code was used to reply to our message. But we don’t want to get a reply here. Rather, we want to get a joke as a message, without tagging anyone. For this, the Discord.js module has a method named channel.send(). Using this method, we can send messages to the channel where the command was called. So, the complete code up to this point looks like this:

    const Discord = require('discord.js');
    require('dotenv').config();
    
    const client = new Discord.Client();
    
    client.login(process.env.BOT_TOKEN);
    
    client.on('ready', () => console.log('The Bot is ready!'));
    
    // Adding jokes function
    
    // Jokes from dcslsoftware.com/20-one-liners-only-software-developers-understand/
    // www.journaldev.com/240/my-25-favorite-programming-quotes-that-are-funny-too
    const jokes = [
      'I went to a street where the houses were numbered 8k, 16k, 32k, 64k, 128k, 256k and 512k. It was a trip down Memory Lane.',
      '“Debugging” is like being the detective in a crime drama where you are also the murderer.',
      'The best thing about a Boolean is that even if you are wrong, you are only off by a bit.',
      'A programmer puts two glasses on his bedside table before going to sleep. A full one, in case he gets thirsty, and an empty one, in case he doesn’t.',
      'If you listen to a UNIX shell, can you hear the C?',
      'Why do Java programmers have to wear glasses? Because they don’t C#.',
      'What sits on your shoulder and says “Pieces of 7! Pieces of 7!”? A Parroty Error.',
      'When Apple employees die, does their life HTML5 in front of their eyes?',
      'Without requirements or design, programming is the art of adding bugs to an empty text file.',
      'Before software can be reusable it first has to be usable.',
      'The best method for accelerating a computer is the one that boosts it by 9.8 m/s2.',
      'I think Microsoft named .Net so it wouldn’t show up in a Unix directory listing.',
      'There are two ways to write error-free programs; only the third one works.',
    ];
    
    client.on('message', (msg) => {
      if (msg.content === '?joke') {
        msg.channel.send(jokes[Math.floor(Math.random() * jokes.length)]);
      }
    });
    

    I have removed the “Hello”/“Hi” part of the code because that is of no use to us anymore.

    Now that you have a basic understanding of the Discord.js module, let’s go deeper. The module can do a lot more — for example, adding roles to a person, banning them, or kicking them out. For now, we will build a simple reaction-role system.

    Building A Reaction-Role System

    Whenever a user responds with a special emoji in a particular message or channel, a role tied to that emoji will be given to the user. The implementation will be very simple. But before building this reaction-role system, we have to understand partials.

    Partials

    Partials are a Discord.js concept. Discord.js usually caches all messages, which means that it stores them in a collection. When a cached message receives some event, like a reply or a reaction, an event is emitted. But messages sent before the bot started are uncached, so reacting to them will not emit any event unless we fetch them before we use them. Version 12 of the Discord.js library introduces the concept of partials: if we want to capture such uncached events, we have to opt in to partials. The library has five types of partials:

    1. USER
    2. CHANNEL
    3. GUILD_MEMBER
    4. MESSAGE
    5. REACTION

    In our case, we will need only three types of partials:

    • USER, the person who reacts;
    • MESSAGE, the message being reacted to;
    • REACTION, the reaction given by the user to the message.

    The documentation has more about partials.

    The Discord.js library provides a very easy way to use partials. We just need to add a single line of code, passing an object in the Discord.Client() constructor. The new constructor looks like this:

    const client = new Discord.Client({
      partials: ['MESSAGE', 'REACTION', 'CHANNEL'],
    });
    

    Creating Roles On The Discord Server

    To enable the reaction-role system, we have to create some roles. The first role we are going to create is the bot role. To create a role, go to “Server Settings”:

    [Image: The server settings option, used to create roles.]

    In the server settings, go to the “Roles” option, and click on the small plus icon (+) beside where it says “Roles”.

    [Image: Adding roles in Discord.]

    First, let’s create the bot role, and make sure to check the “Manage Roles” option in the role options menu. Once the bot role is created, you can add some more roles. I’ve added js, c++, and python roles. You don’t have to give them any special ability, but it’s an option.

    Here, remember one thing: Discord roles work based on priority. A role can manage the roles below it, but not the ones above it. We want our bot role to manage the js, c++, and python roles, so make sure that the bot role is above them. Simply drag and drop to change the order of the roles in the “Roles” menu of your server settings.

    When you are done creating roles, assign the bot role to the bot. To give a role, click on the bot’s name in the members’ list on the server’s right side, and then click on the small plus icon (+). It’ll show you all of the available roles. Select the “bot” role here, and you will be done.

    [Image: Assigning roles manually.]

    Activating Developer Mode in Discord

    The roles we have created cannot be used by their names in our code. In Discord, everything from messages to roles has its own ID. If you click on the “more” indicator in any message, you’ll see an option named “Copy ID”. This option is available for everything in Discord, including roles.

    [Image: The “Copy ID” option in Discord.]

    Most likely, you won’t find this option by default. You’ll have to activate an option called “Developer Mode”. To activate it, head to the Discord settings (not your server settings), right next to your name in the bottom left. Then go to the “Appearance” option under “App Settings”, and activate “Developer Mode” from here. Now you’ll be able to copy IDs.

    messageReactionAdd and messageReactionRemove

    The event emitted when a message receives a reaction is messageReactionAdd. And whenever a reaction is removed, the messageReactionRemove event is emitted.

    Let’s continue building the system. As I said, first we need to listen for the messageReactionAdd event. Both the messageReactionAdd and messageReactionRemove events take two parameters in their callback function. The first parameter is reaction, and the second is user. These are pretty self-explanatory.

    Coding the Reaction-Role Functionality

    First, we’ll create a message that describes which emoji will give which role, something like what I’ve done here:

    [Image: The reaction-role message on the server.]

    You might be wondering how we are going to use those emoji in our code. The default emoji are Unicode characters, and we will have to copy the Unicode version. If you follow the syntax :emojiName: and hit “Enter”, you will get the emoji with that name. For example, my emoji for the JavaScript role is the fox; so, if I type :fox: and hit “Enter” in Discord, I’ll get a fox emoji. Similarly, I would use :tiger: and :snake: to get those emoji. Keep these in your Discord setup; we will need them later.

    [Image: Getting Unicode emoji.]

    Here is the starting code. This part of the code simply checks for some edge cases. Once we understand these cases, we’ll implement the logic of the reaction-role system.

    // Adding reaction-role function
    client.on('messageReactionAdd', async (reaction, user) => {
      if (reaction.message.partial) await reaction.message.fetch();
      if (reaction.partial) await reaction.fetch();
      if (user.bot) return;
      if (!reaction.message.guild) return;
    });
    

    We are passing in an asynchronous function. In the callback, the first thing we do is check whether the message is a partial. If it is, we fetch it, which stores it in the cache. Similarly, we check whether the reaction itself is a partial and do the same thing. Then, we check whether the user who reacted is a bot, because we don’t want to assign roles to bots reacting to our messages. Finally, we check whether the message is on a server at all. Discord.js uses guild as an alternative name for a server. If the message is not on a server, we stop the function.

    Our bot will only assign the roles if the message is in the roles channel. If you right-click on the roles channel, you’ll see a “Copy ID” option. Copy the ID and follow along.

    if (reaction.message.channel.id == '802209416685944862') {
      if (reaction.emoji.name === '🦊') {
        await reaction.message.guild.members.cache
          .get(user.id)
          .roles.add('802208163776167977');
      }
      if (reaction.emoji.name === '🐯') {
        await reaction.message.guild.members.cache
          .get(user.id)
          .roles.add('802208242696192040');
      }
      if (reaction.emoji.name === '🐍') {
        await reaction.message.guild.members.cache
          .get(user.id)
          .roles.add('802208314766524526');
      }
    } else return;
    

    Above is the rest of the code in the callback. We use the reaction.message.channel.id property to get the ID of the channel and compare it with the roles channel ID that we just copied. If it matches, we check the emoji and compare it with the reactions. reaction.emoji.name returns the emoji that was used to react, and we compare it with our Unicode version of the emoji. If they match, we look the member up via the reaction.message.guild.members.cache property.

    The cache is a collection that stores the data. These collections are JavaScript Maps with additional utilities. One of the utilities they provide is the get method: to get anything by its ID, we can simply pass the ID into this method. So, we pass user.id into the get method to get the user. Finally, the roles.add method adds the role to the user. In the roles.add method, we pass the role ID. You can find the role ID in your server settings’ “Roles” option; right-clicking on a role will give you the option to copy its ID. And we are done adding the reaction-role system to our bot!

    We can add functionality for a role to be removed when a user removes their reaction from the message. This is exactly the same as our code above, the only difference being that we are listening for the messageReactionRemove event and using the roles.remove method. So, the complete code for adding and removing roles would be like this:

    // Adding reaction-role function
    client.on('messageReactionAdd', async (reaction, user) => {
      if (reaction.message.partial) await reaction.message.fetch();
      if (reaction.partial) await reaction.fetch();
      if (user.bot) return;
      if (!reaction.message.guild) return;
      if (reaction.message.channel.id == '802209416685944862') {
        if (reaction.emoji.name === '🦊') {
          await reaction.message.guild.members.cache
            .get(user.id)
            .roles.add('802208163776167977');
        }
        if (reaction.emoji.name === '🐯') {
          await reaction.message.guild.members.cache
            .get(user.id)
            .roles.add('802208242696192040');
        }
        if (reaction.emoji.name === '🐍') {
          await reaction.message.guild.members.cache
            .get(user.id)
            .roles.add('802208314766524526');
        }
      } else return;
    });
    
    // Removing reaction roles
    client.on('messageReactionRemove', async (reaction, user) => {
      if (reaction.message.partial) await reaction.message.fetch();
      if (reaction.partial) await reaction.fetch();
      if (user.bot) return;
      if (!reaction.message.guild) return;
      if (reaction.message.channel.id == '802209416685944862') {
        if (reaction.emoji.name === '🦊') {
          await reaction.message.guild.members.cache
            .get(user.id)
            .roles.remove('802208163776167977');
        }
        if (reaction.emoji.name === '🐯') {
          await reaction.message.guild.members.cache
            .get(user.id)
            .roles.remove('802208242696192040');
        }
        if (reaction.emoji.name === '🐍') {
          await reaction.message.guild.members.cache
            .get(user.id)
            .roles.remove('802208314766524526');
        }
      } else return;
    });
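
    As a side note, the two handlers above repeat the same emoji-to-role mapping. If you’d rather avoid that duplication, a lookup map plus one shared helper achieves the same behavior. This is only a sketch using the same channel and role IDs as above; the emojiToRole and handleReaction names are ours, not part of Discord.js:

    const ROLES_CHANNEL_ID = '802209416685944862';
    
    // Map each reaction emoji to the ID of the role it grants
    const emojiToRole = {
      '🦊': '802208163776167977', // js
      '🐯': '802208242696192040', // c++
      '🐍': '802208314766524526', // python
    };
    
    // Shared logic for both the add and remove events
    async function handleReaction(reaction, user, action) {
      if (reaction.message.partial) await reaction.message.fetch();
      if (reaction.partial) await reaction.fetch();
      if (user.bot || !reaction.message.guild) return;
      if (reaction.message.channel.id !== ROLES_CHANNEL_ID) return;
    
      const roleId = emojiToRole[reaction.emoji.name];
      if (!roleId) return; // Not one of our role emoji
    
      const member = reaction.message.guild.members.cache.get(user.id);
      await member.roles[action](roleId); // action is 'add' or 'remove'
    }
    
    client.on('messageReactionAdd', (r, u) => handleReaction(r, u, 'add'));
    client.on('messageReactionRemove', (r, u) => handleReaction(r, u, 'remove'));
    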
    

    Adding Twitter Forwarding Function

    The next function we are going to add to our bot is going to be a bit more challenging. We want to focus on a particular Twitter account, so that any time the Twitter account posts a tweet, it will be forwarded to our Discord channel.

    Before starting to code, we will have to get the required tokens from the Twitter developer portal. Visit the portal and create a new app by clicking the “Create App” button in the “Overview” option. Give your app a name, copy all of the tokens, and paste them in the .env file of your code, with the proper names. Then click on “App Settings”, and enable the three-legged OAuth feature. Add the URLs below as callback URLs for testing purposes:

    http://127.0.0.1/
    https://localhost/
    

    If you own a website, add the address to the website URL and click “Save”. Head over to the “Keys and Tokens” tab, and generate the access keys and tokens. Copy and save them in your .env file. Our work with the Twitter developer portal is done. We can go back to our text editor to continue coding the bot. To achieve the functionality we want, we have to add another npm package named twit. It is a Twitter API client for Node.js. It supports both REST and streaming API.
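
    For reference, given the variable names used in the constructor below, the new entries in your .env file would look something like this (values redacted):

    API_TOKEN=xxxxxxxxxxxxxxxx
    API_SECRET=xxxxxxxxxxxxxxxx
    ACCESS_KEY=xxxxxxxxxxxxxxxx
    ACCESS_SECRET=xxxxxxxxxxxxxxxx
    BEARER_TOKEN=xxxxxxxxxxxxxxxx
    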

    First, install the twit package using npm install twit, and require it in your main file:

    const Twit = require('twit');
    

    We have to create a twit instance using the Twit constructor. Pass in an object in the Twit constructor with all of the tokens that we got from Twitter:

    const T = new Twit({
      consumer_key: process.env.API_TOKEN,
      consumer_secret: process.env.API_SECRET,
      access_token: process.env.ACCESS_KEY,
      access_token_secret: process.env.ACCESS_SECRET,
      bearer_token: process.env.BEARER_TOKEN,
      timeout_ms: 60 * 1000,
    });
    

    A timeout is also specified here. We want all of the forwards to be in a specific channel. I have created a separate channel called “Twitter forwards”, where all of the tweets will be forwarded. I have already explained how you can create a channel. Create your own channel and copy the ID.

    // Destination Channel Twitter Forwards
    const dest = '803285069715865601';
    

    Now we have to create a stream. A streaming API allows access to a stream of data over the network: the data is broken into smaller chunks and then transmitted. Here is our code to stream the data:

    // Create a stream to follow tweets
    const stream = T.stream('statuses/filter', {
      follow: '32771325', // @Stupidcounter
    });
    

    In the follow key, I am specifying @Stupidcounter because it tweets every minute, which is great for our testing purposes. You can provide any Twitter handle’s ID to get its tweets. TweeterID will give you the ID of any handle. Finally, use the stream.on method to get the data and stream it to the desired channel.

    stream.on('tweet', (tweet) => {
      const twitterMessage = `Read the latest tweet by ${tweet.user.name} (@${tweet.user.screen_name}) here: https://twitter.com/${tweet.user.screen_name}/status/${tweet.id_str}`;
      client.channels.cache.get(dest).send(twitterMessage);
      return;
    });
    

    We are listening for the tweet event and, whenever that occurs, passing the tweet to a callback function. We’ll build a custom message; in our case, the message will be:

    Read the latest tweet by The Count (@Stupidcounter) here: https://twitter.com/Stupidcounter/status/1353949542346084353
    

    Again, we are using the client.channels.cache.get method to get the desired channel and the .send method to send our message. Now, run your bot and wait for a minute. The Twitter message will be sent to the server.

    [Image: The bot forwards the tweet to Discord.]

    So, here is the complete Twitter forwarding code:

    // Adding Twitter forward function
    const Twit = require('twit');
    const T = new Twit({
      consumer_key: process.env.API_TOKEN,
      consumer_secret: process.env.API_SECRET,
      access_token: process.env.ACCESS_KEY,
      access_token_secret: process.env.ACCESS_SECRET,
      bearer_token: process.env.BEARER_TOKEN,
      timeout_ms: 60 * 1000,
    });
    
    // Destination channel Twitter forwards
    const dest = '803285069715865601';
    // Create a stream to follow tweets
    const stream = T.stream('statuses/filter', {
      follow: '32771325', // @Stupidcounter
    });
    
    stream.on('tweet', (tweet) => {
      const twitterMessage = `Read the latest tweet by ${tweet.user.name} (@${tweet.user.screen_name}) here: https://twitter.com/${tweet.user.screen_name}/status/${tweet.id_str}`;
      client.channels.cache.get(dest).send(twitterMessage);
    });
    

    All of the functions that we want to add are done. The only thing left now is to deploy it to the cloud. We’ll use Heroku for that.

    Deploying The Bot To Heroku

    First, create a new file named Procfile in the root directory of your bot code’s folder. The Procfile contains the command to be executed when the program starts. In it, we will add worker: node app.js, which tells Heroku which file to run at startup and to run it as a worker process.
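
    The whole Procfile is this single line:

    worker: node app.js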

    After adding the file, let’s initialize a git repository and push our code to GitHub (how to do so is beyond the scope of this article). One thing I would suggest is to add the node_modules folder and the .env file to your .gitignore file, so that your repository stays small and your sensitive information never gets shared.
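
    A minimal .gitignore for this project would be:

    node_modules/
    .env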

    Once you’ve successfully pushed all of your code to GitHub, visit the Heroku website. Log in, or create an account if you don’t have one already. Click on the “New” button to create a new app, and name it as you wish. Choose the “Deployment Method” as GitHub.

    Choose GitHub as the deployment method.

    Search for your repository, and click on “Connect” once you find it. Enable automatic deployment from the “Deploy” menu, so that each time you push changes to the code, it will get deployed to Heroku automatically.

    Now, we have to add the configuration variables to Heroku, which is very easy. Go to the “Settings” option, below your app’s name, and click on “Reveal Config Vars”.

    Config Vars on Heroku.

    Here, add the configuration variables as key-value pairs; these are the same keys and values you stored in your local .env file. Once you are done, go to the “Deploy” tab again, and click on “Deploy Branch” under “Manual Deploy”.

    The last thing to consider is that Heroku expects a web process to bind to a port within 60 seconds of starting; because our bot is not a web server, it will crash with a boot-timeout error if it runs as a web dyno. To prevent this from happening, we have to change the process type of the app. In Heroku, if you go to the “Resources” tab of your app, you’ll see that, under “Free Dynos”, web npm start is enabled. We have to turn this off and enable worker node app.js. So, click on the edit button beside the web npm start entry, turn it off, and enable the worker node app.js option. Confirm the change, restart all of your dynos, and we are done!

    Conclusion

    I hope you’ve enjoyed reading this article. I tried to cover all of the basics that you need to understand to develop a complicated bot. Discord.js’ documentation is a great place to learn more, and it has great explanations. You will also find all of the code in the GitHub repository.


    web design

    Building Your Own Personal Learning Curriculum — Smashing Magazine

    02/19/2021

    About The Author

    Kirsty is an ex-journalist, ex-bid-manager who’s now on her third career as a developer. She specializes in making mobile apps using React Native.
    More about
    Kirsty

    As developers, we’re constantly learning new languages and frameworks. But how can you structure this learning to ensure maximum benefit while still progressing? Here’s how you can devise your own curriculum to keep moving in the right direction.

    After completing a bootcamp in March 2019, I was overwhelmed by the sheer number of frameworks, libraries, languages, and courses I could choose from to continue learning independently and, hopefully, score myself one of those elusive junior developer jobs. Almost everyone I spoke with had a different opinion on what was important and worth pursuing, but most of them agreed that learning ‘the fundamentals’ was important, while never really specifying what those were.

    Even after getting my first developer job last summer it quickly became apparent that I had to do regular extra-curricular learning to meet the demands of this new role. I flitted between Udemy courses for a while, and while I did learn, I often found myself going through the motions of copying the instructor without developing problem-solving skills on my own. It took me making my own small, scratch pad side-project to really grasp the new material. So now I knew what my learning style was: initial exposure to ideas in a course or at work, create a sketch of a project to solidify concepts, and then use this new learning in my daily work if I could.

    Most of us have a solid (if hard-earned) sense of how we learn best in the short-term, but how does that translate to the structure of long-term extracurricular learning? I’m going to tell you how I worked this problem out for myself. These strategies will work across the spectrum of experience, whether you’re brand new to software development or a seasoned engineer.

    Your Own Personal Curriculum

    This is my method of putting together a learning curriculum. As someone with limited time and a tendency to be taken by the breeze of impulsivity at the expense of sustained, focused working, I found this method the most compatible with my brain and general rhythms. Your mileage, of course, may vary.

    Use ‘Dream Job’ Role Specifications To Set Goals

    I found this step really useful for drowning out all the ambient noise and getting myself to focus on things that will be practically useful to my career in the next five or so years. (As more of a front-end developer, I often found myself making goo-goo eyes at learning Rust. While fascinating, it’s not exactly a priority.)

    You may not want to work at a huge tech company yourself, but it is worth looking at what they prioritize when hiring as, for better or worse, the big companies tend to set the tone for the industry at large. I have a small shopping list of non-evil companies I’d like to end up at one day and they all broadly share the same priorities: semantic HTML/CSS, excellent vanilla JS skills, accessibility, and a popular framework. One day I am going to learn Rust, but, for now, working on these skills are my top priorities.

    It’s worth looking at job boards and studying the requirements companies list when hiring developers.

    I tend to favor Indeed, Guardian Jobs and LinkedIn for getting a broad sweep of jobs that are on the market, but equally useful is Twitter (just search [company name you are interested in] and ‘jobs’), and keep a periodic eye on the ‘Careers’ page of your favorite few companies every couple of months. Jessica Rose, who tweets as @jesslynnrose, frequently writes long Twitter threads of job vacancies at ‘non-evil companies’ that sometimes aren’t very well publicized elsewhere.

    Once you’ve gathered a few job specs, try to spot the commonalities between them and make note of them. We’ll use them for the next step.

    Identify Opportunities To Develop The Skills You Want

    Remember that list I mentioned? Split it into two columns. Column one: things you can work on in your day job. Column two: things you need to look at in your own time.

    At-Work Learning

    The things on your list that are covered by your day-job are the things you need to worry about the least. No matter what, your skills in these areas will improve with time. Some of you will be working at enormous organizations staffed by many developers with various levels of seniority and specialisms, and I advise you to milk that for all it’s worth, to put it bluntly.

    If accessibility is a knowledge gap of yours that you’d like to improve on, try to swallow any nerves and approach someone at your workplace who has those skills for a chat/a Zoom coffee. Try to pair with them, with the understanding that you can ‘pair’ on things that aren’t coding problems. Chat to them about where they find their information, which Twitter accounts, blogs, and podcasts they keep up with, and how they remain up-to-date with new developments themselves.

    At-Home Learning

    As someone with two prior careers, neither of which were computer science-related, who entered the industry via a nine-week bootcamp a year ago, I have a rudimentary understanding of computer science, and lots of you are likely in the same situation.

    I have found Frontend Masters to be invaluable when it comes to really well-designed courses on computer science principles and more specific learning. Personally, I’ve found Will Sentance’s courses on Frontend Masters to be valuable for understanding the how and why when it comes to vanilla JavaScript. Equally, Brian Holt’s ‘Four Semesters of Computer Science in 5 Hours’ courses expose students to the sort of concepts that can arise in tech interviews.

    There is a monthly subscription fee for Frontend Masters, and it is well worth it, but there are plenty of wonderful free resources out there. I really recommend that anyone who hasn’t already done so enrolls in CS50. The course, run by Harvard University, is a wonderful free resource that will expose you to C, Python, JavaScript, and modules on ethics and basic data structures. The lectures are enormous fun, and you can do as much or as little of the course as you like, with no time constraints.

    CS50 is a wonderful free resource teaching a thorough introduction to computer science and the art of programming.

    Equally, FreeCodeCamp has well earned its status as a key starting point for both self-taught developers and those wishing to build on their existing skills, and I encourage you to seek out courses relevant to your interests on Udemy. (I’d suggest not buying a Udemy course that isn’t on a heavy discount. Their sales come around once every few weeks, and there are always discount codes floating around.)

    A particular favorite of mine is The Complete Node.js Developer Course by Andrew Mead, and I adore Colt Steele’s courses (there’s a particularly good one on algorithms and data structures that will help you if you ever find yourself on the more algorithm-heavy side of the tech interviewing spectrum). Smashing Magazine also runs frequent online workshops on a range of subjects that will help you to improve your skills.

    As you might know, the skills that get people jobs in tech can often diverge from the skills people need to use on the job. These courses will teach you computer science fundamentals while keeping you nimble for interviews, and help you to fill any potentially crushing silences with snappy summations of different data structures, and their pros and cons. The point is not to do all of these courses, but to identify and combine the ones that fit with the job specs you’ve targeted.

    I’ve found Twitter to be incredibly helpful for finding people to chat to about code problems. Last spring I shouted into the void about an issue I was having with Android Studio and was surprised to be on a video call with an Android developer less than ten minutes later, and, not only that, he seemed pleased to help!

    Don’t underestimate the kindness of the developer community and don’t be shy about calling on it for help when you need it, and do your best to put yourself into situations where you can talk to people from a similar world to you within the wider context of the tech industry. The #CodeNewbie hashtag is a handy thing to use if you’re ever in need of help.

    Set Targets And Timetables

    Now it’s time to tie your self-directed learning goals to some targets. Try not to set the bar too high. If it’s unrealistic for you to complete a Udemy course in one week, don’t push yourself so hard that you either meet the target at the expense of other important things in your life, or fail to meet it and make yourself feel like a failure. The idea is to keep yourself on track, applying gentle pressure to stay motivated, but not so much that you feel overwhelmed and lose all motivation.

    As a morning person, I feel best able to concentrate on study in the hours before my day-job starts at 9.30 am. With this in mind, and using the wonderful time-tracking tool, Toggl, I spend 7 am to 9 am two mornings per week on code study. Using Toggl was extremely important to me because, no matter how much I do, I often feel as if it isn’t enough. But with Toggl’s help, I could see for certain that I was doing a minimum of four hours per week (with extra in the evenings and weekends if I felt like it) and I felt better able to step away from my laptop and rest when my time was up, safe in the knowledge that I’d racked up an acceptable amount of hours by my own standards.

    Make Progress Measurable

    Think about your average weekly schedule and try to block off some time in the day when you stand the best chance of securing unbroken focus. Some of you will be carers or otherwise extremely busy, and it’s probably going to be better for you to take your time as and when you can get it.

    If it helps you to see an example, my personal targets are the following:

    • Complete a Node.js Udemy course by the end of February.
    • Do 30 minutes of Execute Program before work every day.

    Try to err on the side of caution for your first set of targets. You can always turn up the pressure if you want to, but it’s better to do so once you’ve succeeded at a few; make sure you have the spare mental and physical space to really concentrate on what you need to do to stay on track.

    Find A Mentor, Or A Buddy, Or Both!

    Mentorship is something that most developers would recommend to improve skills, but, from my personal experience, finding someone with the time to guide you is a challenge, especially now. There are resources such as Coding Coach that may help you, and I know a few developers who found their dream mentor at meet-ups and on social media, but finding the perfect match is easier said than done.

    I spoke to Falina Lothamer, an Instructional Designer at Thinkful — a Massive Open Online Course (or MOOC, for short) — to get an idea of how professionals approach independent learning. She was very clear that finding and working with a mentor is key to progressing your skills as a developer.

    “If you need to have something laid out for you, having that mentor to say: ‘Here’s where I think you should focus’, showing you what they’re doing at their job, and sharing their opinion on what the future of your area of tech is is going to help a lot. I think there are a lot of people in the industry who are willing to fill that mentor role and do for others what someone has done for them.”

    After I expressed some of my frustrations at having hit a brick wall with a number of Udemy courses (retaining information without necessarily having the confidence in what I’ve learned to apply it in other areas, or on other projects), Falina was clear that being accountable to another person, ideally a mentor but equally another developer with a similar amount of professional experience to you, is essential.

    “As a developer, you need to look for opportunities to demonstrate what you know, and how you’re learning. Having someone else to talk to about the challenges you’re facing, and having space to talk it over with someone and to realize ‘this thing that I’m trying to do is complicated, I’m not a terrible developer’, having that validation can be huge.”

    For those who don’t manage to find a senior developer to take them under their wing, I recommend taking Falina’s advice and making yourself accountable to someone else in the industry at a similar level to you. Developers banding together and sharing stories will reinforce that this job is hard, and that they are not the only engineers struggling to get by at work some days. This work can be very emotionally taxing, and having a buddy to struggle along with will be invaluable on those days when nothing seems to be going well.

    I’d recommend signing up to Interview Cake, Execute Program, or a relevant Udemy course for your skill level and specialism, and completing the same exercises as your buddy at roughly the same time. Discuss what you found easier, and where you fell down, and maintain contact with one another throughout. While you certainly can do these things alone, fostering a sense of community will help you to stay on task, and make it more likely that you stick at it.

    A Case For Scratch Pad Applications

    If you’ve got the time and energy to pour into a large side-project on top of work, more power to you, but I find the pressure to do so somewhat burdensome. Instead, I am a fan of the scratch pad project, primarily because I really benefit from following lots of new ideas at once, and I quickly lose interest in personal projects when there’s no time pressure to drive me along.

    If your side-project makes an API call, displays the information in a semi-appealing way, and you’ve learned something from the process, and building the project out into a larger application doesn’t fit with your neurology, your caring schedule, or your tastes, then give yourself a break. You wouldn’t sneer at an artist for sketching, and you certainly shouldn’t feel bad if your side-projects are half-formed mutants as long as you’re getting something out of the process. My GitHub repositories are elegies to good ideas gone by, and I’ve made my peace with it.

    Roundup

    Given the state of the world right now, the last thing I want to be is another voice demanding productivity in lockdown. That’s not what this is about. These are simply steps that worked for me when I needed to learn over time, without burning out or placing undue pressure on myself. If they work for you, wonderful. If not, no worries. We all have our own pace.

    Steps

    1. Use job specs to identify key skills.
    2. Split those skills between at-work learning and in-your-own-time learning.
    3. Set clear, measurable, realistic goals, and step them up only when you’ve found your rhythm.
    4. Find a mentor or buddy so you’re accountable for those goals.
    5. Relax! Messy learning is better than no learning.

    Good luck!


    web design

    Building A Web App With React, Redux And Sanity.io — Smashing Magazine

    02/11/2021

    About The Author

    Ifeanyi Dike is a full-stack developer in Abuja, Nigeria. He’s the team lead at Sterling Digitals Limited but also open to more opportunities and …
    More about
    Ifeanyi

    A headless CMS is a powerful and straightforward way to manage content and expose it through an API. Built on React, Sanity.io is a seamless tool for flexible content management, and it can be used to build anything from simple to complex applications from the ground up.

    In this article, we’ll build a simple listing app with Sanity.io and React. Our global states will be managed with Redux and the application will be styled with styled-components.

    The fast evolution of digital platforms has placed serious limitations on traditional CMSs like WordPress. These platforms are coupled and inflexible, and they focus on the project rather than the product. Thankfully, several headless CMSs have been developed to tackle these challenges and many more.

    Unlike a traditional CMS, a headless CMS, which can be described as Software as a Service (SaaS), can be used to develop websites, mobile apps, digital displays, and much more, on a practically limitless range of platforms. If you are looking for a CMS that is platform-independent, developer-first, and offers cross-platform support, you need look no further than a headless CMS.

    A headless CMS is simply a CMS without a head. The head here refers to the frontend or the presentation layer, while the body refers to the backend or the content repository. This offers a lot of interesting benefits. For instance, it allows developers to choose any frontend they prefer, and you can design the presentation layer however you want.

    There are lots of headless CMSs out there; some of the most popular ones include Strapi, Contentful, Contentstack, Sanity, Butter CMS, Prismic, Storyblok, and Directus. These headless CMSs are API-based, and each has its own strong points. For instance, Sanity, Strapi, Contentful, and Storyblok are free for small projects.

    These headless CMSs are based on different tech stacks as well. While Sanity.io is based on React.js, Storyblok is based on Vue.js. As a React developer, this is the major reason why I quickly took an interest in Sanity. However, being headless, each of these platforms can be plugged into any frontend, whether Angular, Vue, or React.

    Each of these headless CMSs has both free and paid plans, with a significant price jump between them. Although the paid plans offer more features, you wouldn’t want to pay all that much for a small to mid-sized project. Sanity tries to solve this problem by introducing pay-as-you-go options: you pay for what you use and avoid the price jump.

    Another reason why I chose Sanity.io is its GROQ language. For me, Sanity stands out from the crowd by offering this tool. GROQ (Graph-Relational Object Queries) reduces development time, helps you get the content you need in the form you need it, and also lets you create a document with a new content model without code changes.

    Moreover, developers are not constrained to the GROQ language. You can also use GraphQL or even the traditional axios and fetch in your React app to query the backend. Like most other headless CMS, Sanity has comprehensive documentation that contains helpful tips to build on the platform.

    Note: This article requires a basic understanding of React, Redux and CSS.

    Getting Started With Sanity.io

    To use Sanity on your machine, you’ll need to install the Sanity CLI tool. While this can be installed locally in your project, it is preferable to install it globally to make it accessible to any future applications.

    To do this, enter the following commands in your terminal.

    npm install -g @sanity/cli

    The -g flag in the above command enables global installation.

    Next, we need to initialize Sanity in our application. Although this can be installed as a separate project, it is usually preferable to install it within your frontend app (in this case React).

    In her blog, Kapehe explained in detail how to integrate Sanity with React. It will be helpful to go through the article before continuing with this tutorial.

    Enter the following commands to initialize Sanity in your React app.

    sanity init

    The sanity command became available to us when we installed the Sanity CLI tool. You can view a list of the available Sanity commands by typing sanity or sanity help in your terminal.

    When setting up or initializing your project, you’ll need to follow the prompts to customize it. You’ll also be required to create a dataset and you can even choose their custom dataset populated with data. For this listing app, we will be using Sanity’s custom sci-fi movies dataset. This will save us from entering the data ourselves.

    To view and edit your dataset, cd to the Sanity subdirectory in your terminal and enter sanity start. This usually runs on http://localhost:3333/. You may be required to log in to access the interface (make sure you log in with the same account you used when initializing the project). A screenshot of the environment is shown below.

    An overview of the Sanity server for the sci-fi movie dataset.

    Sanity-React Two-way Communication

    Sanity and React need to communicate with each other for a fully functional application.

    CORS Origins Setting In Sanity Manager

    We’ll first connect our React app to Sanity. To do this, login to https://manage.sanity.io/ and locate CORS origins under API Settings in the Settings tab. Here, you’ll need to hook your frontend origin to the Sanity backend. Our React app runs on http://localhost:3000/ by default, so we need to add that to the CORS.

    This is shown in the figure below.

    Setting the CORS origin in the Sanity.io Manager.

    Connecting Sanity To React

    Sanity associates a project ID to every project you create. This ID is needed when connecting it to your frontend application. You can find the project ID in your Sanity Manager.

    The backend communicates with React using a library known as sanity client. You need to install this library in your Sanity project by entering the following commands.

    npm install @sanity/client

    Create a file sanitySetup.js (the filename does not matter) in your project’s src folder, and enter the following code to set up a connection between Sanity and React:

    import sanityClient from "@sanity/client"
    export default sanityClient({
        projectId: PROJECT_ID,   // replace with your own project ID from the Sanity Manager
        dataset: DATASET_NAME,   // replace with the name of the dataset you created
        useCdn: true             // serve responses from Sanity's CDN cache
    });

    We passed our projectId, dataset name and a boolean useCdn to the instance of the sanity client imported from @sanity/client. This works the magic and connects our app to the backend.

    Now that we’ve completed the two-way connection, let’s jump right in to build our project.

    Setting Up And Connecting Redux To Our App

    We’ll need a few dependencies to work with Redux in our React app. Open up your terminal in your React environment and enter the following bash commands.

    npm install redux react-redux redux-thunk
    

    Redux is a global state management library that can be used with most frontend frameworks and libraries, such as React. However, we need an intermediary tool, react-redux, to enable communication between our Redux store and our React application. Redux Thunk lets an action creator return a function, which can perform asynchronous work, instead of a plain action object.

    While we could write the entire Redux workflow in one file, it is often neater and better to separate our concerns. For this, we will divide our workflow into three files namely, actions, reducers, and then the store. However, we also need a separate file to store the action types, also known as constants.

    Setting Up The Store

    The store is the most important file in Redux. It organizes and packages the states and ships them to our React application.

    Here is the initial setup of our Redux store needed to connect our Redux workflow.

    import { createStore, applyMiddleware } from "redux";
    import thunk from "redux-thunk";
    import reducers from "./reducers/";
    
    export default createStore(
      reducers,
      applyMiddleware(thunk)
    );
    

    The createStore function in this file takes up to three parameters: the reducer (required), an optional initial state, and an enhancer (usually middleware; in this case, thunk supplied through applyMiddleware). Our reducers will be stored in a reducers folder, and we’ll combine and export them in an index.js file in the reducers folder. This is the file we imported in the code above. We’ll revisit this file later.

    Introduction To Sanity’s GROQ Language

    Sanity takes querying on JSON data a step further by introducing GROQ. GROQ stands for Graph-Relational Object Queries. According to Sanity.io, GROQ is a declarative query language designed to query collections of largely schema-less JSON documents.

    Sanity even provides the GROQ Playground to help developers become familiar with the language. However, to access the playground, you need to install sanity vision.
    Run sanity install @sanity/vision on your terminal to install it.

    GROQ has a similar syntax to GraphQL but it is more condensed and easier to read. Furthermore, unlike GraphQL, GROQ can be used to query JSON data.

    For instance, to retrieve every item in our movie document, we’ll use the following GROQ syntax.

    *[_type == "movie"]

    However, if we wish to retrieve only the _ids and crewMembers in our movie document, we need to specify those fields as follows.

    *[_type == 'movie']{                                             
        _id,
        crewMembers
    }
    

    Here, * tells GROQ to select every document, and the filter [_type == 'movie'] keeps only documents whose _type is movie. _type is an attribute present on every Sanity document. We can also return the type, like we did the _id and crewMembers, as follows:

    *[_type == 'movie']{                                             
        _id,
        _type,
        crewMembers
    }
    

    We’ll work more on GROQ by implementing it in our Redux actions but you can check Sanity.io’s documentation for GROQ to learn more about it. The GROQ query cheat sheet provides a lot of examples to help you master the query language.
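
    For instance, here is a quick sketch that combines a filter with a projection, using the popularity and title fields that appear later in this article:

    *[_type == 'movie' && popularity > 15]{
        title,
        popularity
    }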

    Setting Up Constants

    We need constants to track the action types at every stage of the Redux workflow. Constants help to determine the type of action dispatched at each point in time. For instance, we can track when an API request is loading, has succeeded, or has failed.

    We don’t necessarily need to define constants in a separate file but for simplicity and clarity, this is usually the best practice in Redux.

    By convention, constants in JavaScript are defined in uppercase. We’ll follow that best practice here to define our constants. Here is an example of a constant denoting a movie fetch request.

    export const MOVIE_FETCH_REQUEST = "MOVIE_FETCH_REQUEST";

    Here, we created a constant MOVIE_FETCH_REQUEST that denotes an action type of MOVIE_FETCH_REQUEST. This helps us to easily call this action type without using strings and avoid bugs. We also exported the constant to be available anywhere in our project.

    Similarly, we can create other constants for the action types denoting when a request succeeds or fails. The complete code for movieConstants.js is given below.
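
    Based on the action types imported and dispatched throughout the rest of this article, movieConstants.js looks roughly like the following sketch (the names of the most-popular constants are an assumption, following the same convention):

    export const MOVIES_FETCH_REQUEST = "MOVIES_FETCH_REQUEST";
    export const MOVIES_FETCH_SUCCESS = "MOVIES_FETCH_SUCCESS";
    export const MOVIES_FETCH_FAIL = "MOVIES_FETCH_FAIL";
    export const MOVIES_FETCH_RESET = "MOVIES_FETCH_RESET";
    
    export const MOVIE_FETCH_REQUEST = "MOVIE_FETCH_REQUEST";
    export const MOVIE_FETCH_SUCCESS = "MOVIE_FETCH_SUCCESS";
    export const MOVIE_FETCH_FAIL = "MOVIE_FETCH_FAIL";
    
    export const MOVIES_REF_FETCH_REQUEST = "MOVIES_REF_FETCH_REQUEST";
    export const MOVIES_REF_FETCH_SUCCESS = "MOVIES_REF_FETCH_SUCCESS";
    export const MOVIES_REF_FETCH_FAIL = "MOVIES_REF_FETCH_FAIL";
    
    export const MOVIES_SORT_REQUEST = "MOVIES_SORT_REQUEST";
    export const MOVIES_SORT_SUCCESS = "MOVIES_SORT_SUCCESS";
    export const MOVIES_SORT_FAIL = "MOVIES_SORT_FAIL";
    
    // assumed names for the most-popular movies actions
    export const MOVIES_MOST_POPULAR_REQUEST = "MOVIES_MOST_POPULAR_REQUEST";
    export const MOVIES_MOST_POPULAR_SUCCESS = "MOVIES_MOST_POPULAR_SUCCESS";
    export const MOVIES_MOST_POPULAR_FAIL = "MOVIES_MOST_POPULAR_FAIL";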

    Here we have defined several constants for fetching a movie or list of movies, sorting and fetching the most popular movies. Notice that we set constants to determine when the request is loading, successful and failed.

    Similarly, our personConstants.js file is given below:

    export const PERSONS_FETCH_REQUEST = "PERSONS_FETCH_REQUEST";
    export const PERSONS_FETCH_SUCCESS = "PERSONS_FETCH_SUCCESS";
    export const PERSONS_FETCH_FAIL = "PERSONS_FETCH_FAIL";
    
    export const PERSON_FETCH_REQUEST = "PERSON_FETCH_REQUEST";
    export const PERSON_FETCH_SUCCESS = "PERSON_FETCH_SUCCESS";
    export const PERSON_FETCH_FAIL = "PERSON_FETCH_FAIL";
    
    export const PERSONS_COUNT = "PERSONS_COUNT";

    Like the movieConstants.js, we set a list of constants for fetching a person or persons. We also set a constant for counting persons. The constants follow the convention described for movieConstants.js and we also exported them to be accessible to other parts of our application.

    Finally, we’ll implement light and dark mode in the app and so we have another constants file globalConstants.js. Let’s take a look at it.

    export const SET_LIGHT_THEME = "SET_LIGHT_THEME";
    export const SET_DARK_THEME = "SET_DARK_THEME";

    Here we set constants to determine when light or dark mode is dispatched. SET_LIGHT_THEME determines when the user switches to the light theme and SET_DARK_THEME determines when the dark theme is selected. We also exported our constants as shown.

    Setting Up The Actions

    By convention, our actions are stored in a separate folder. Actions are grouped according to their types. For instance, our movie actions are stored in movieActions.js while our person actions are stored in personActions.js file.

    We also have globalActions.js to take care of toggling the theme from light to dark mode.

    Let’s fetch all movies in movieActions.js.

    import sanityAPI from "../../sanitySetup";
    import {
      MOVIES_FETCH_FAIL,
      MOVIES_FETCH_REQUEST,
      MOVIES_FETCH_SUCCESS  
    } from "../constants/movieConstants";
    
    const fetchAllMovies = () => async (dispatch) => {
      try {
        dispatch({
          type: MOVIES_FETCH_REQUEST
        });
        const data = await sanityAPI.fetch(
          `*[_type == 'movie']{                                            
              _id,
              "poster": poster.asset->url,
          } `
        );
        dispatch({
          type: MOVIES_FETCH_SUCCESS,
          payload: data
        });
      } catch (error) {
        dispatch({
          type: MOVIES_FETCH_FAIL,
          payload: error.message
        });
      }
    };

    Remember when we created the sanitySetup.js file to connect React to our Sanity backend? Here, we imported the setup to enable us to query our sanity backend using GROQ. We also imported a few constants exported from the movieConstants.js file in the constants folder.

    Next, we created the fetchAllMovies action function for fetching every movie in our collection. Most traditional React applications use axios or fetch to get data from the backend, and while we could use either of those here, we’re using Sanity’s GROQ. To run a GROQ query, we call the sanityAPI.fetch() function as shown in the code above, where sanityAPI is the React-Sanity connection we set up earlier. fetch() returns a Promise, so it has to be handled asynchronously. We’ve used the async-await syntax here, but we could also use the .then syntax.
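
    For comparison, the same query written with .then would look roughly like this sketch:

    sanityAPI
      .fetch(`*[_type == 'movie']{ _id, "poster": poster.asset->url }`)
      .then((data) => dispatch({ type: MOVIES_FETCH_SUCCESS, payload: data }))
      .catch((error) => dispatch({ type: MOVIES_FETCH_FAIL, payload: error.message }));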

    Since we are using thunk in our application, our action creator can return a function instead of an action object. Here, we chose to write the returned function inline:

    const fetchAllMovies = () => async (dispatch) => {
      ...
    }

    Note that we can also write the function this way:

    const fetchAllMovies = () => {
      return async (dispatch)=>{
        ...
      }
    }

    In general, to fetch all movies, we first dispatched an action type that tracks when the request is still loading. We then used Sanity’s GROQ syntax to asynchronously query the movie document, retrieving the _id and the poster url of each movie. Finally, we dispatched a payload containing the data returned from the API.

    Similarly, we can retrieve movies by their _id, sort movies, and get the most popular movies.

    We can also fetch movies that match a particular person’s reference. We did this in the fetchMoviesByRef function.

    const fetchMoviesByRef = (ref) => async (dispatch) => {
      try {
        dispatch({
          type: MOVIES_REF_FETCH_REQUEST
        });
        const data = await sanityAPI.fetch(
          `*[_type == 'movie' 
                && (castMembers[person._ref match '${ref}'] || 
                    crewMembers[person._ref match '${ref}'])            
                ]{                                             
                    _id,                              
                    "poster" : poster.asset->url,
                    title
                } `
        );
        dispatch({
          type: MOVIES_REF_FETCH_SUCCESS,
          payload: data
        });
      } catch (error) {
        dispatch({
          type: MOVIES_REF_FETCH_FAIL,
          payload: error.message
        });
      }
    };

    This function takes an argument and checks whether person._ref in either the castMembers or crewMembers matches the passed argument. We return the movie’s _id, poster url, and title. We also dispatch an action of type MOVIES_REF_FETCH_SUCCESS, attaching a payload of the returned data; if an error occurs, we dispatch an action of type MOVIES_REF_FETCH_FAIL, attaching a payload of the error message, thanks to the try-catch wrapper.

    In the fetchMovieById function, we used GROQ to retrieve a movie that matches a particular id passed to the function.

    The GROQ syntax for the function is shown below.

    const data = await sanityAPI.fetch(
          `*[_type == 'movie' && _id == '${id}']{                                               
                    _id,
                    "cast" :
                        castMembers[]{
                            "ref": person._ref,
                            characterName, 
                            "name": person->name,
                            "image": person->image.asset->url
                        }
                    ,
                    "crew" :
                        crewMembers[]{
                            "ref": person._ref,
                            department, 
                            job,
                            "name": person->name,
                            "image": person->image.asset->url
                        }
                    ,                
                    "overview":   {                    
                        "text": overview[0].children[0].text
                      },
                    popularity,
                    "poster" : poster.asset->url,
                    releaseDate,                                
                    title
                }[0]`
        );

    Like the fetchAllMovies action, we started by selecting all documents of type movie but we went further to select only those with an id supplied to the function. Since we intend to display a lot of details for the movie, we specified a bunch of attributes to retrieve.

    We retrieved the movie id and also a few attributes in the castMembers array, namely ref, characterName, the person’s name, and the person’s image. We also changed the alias from castMembers to cast.

    Like the castMembers, we selected a few attributes from the crewMembers array, namely ref, department, job, the person’s name, and the person’s image. We also changed the alias from crewMembers to crew.

    In the same way, we selected the overview text, popularity, movie’s poster url, movie’s release date and title.

    Sanity’s GROQ language also allows us to sort documents. To sort items, we pass an order() clause after a pipe operator.

    For instance, if we wish to sort movies by their releaseDate in ascending order, we could do the following.

    const data = await sanityAPI.fetch(
          `*[_type == 'movie']{                                            
              ...
          } | order(releaseDate asc)`
        );
    

    We used this notation in the sortMoviesBy function to sort in either ascending or descending order.

    Let’s take a look at this function below.

    const sortMoviesBy = (item, type) => async (dispatch) => {
      try {
        dispatch({
          type: MOVIES_SORT_REQUEST
        });
        const data = await sanityAPI.fetch(
          `*[_type == 'movie']{                                
                    _id,                                               
                    "poster" : poster.asset->url,    
                    title
                    } | order( ${item} ${type})`
        );
        dispatch({
          type: MOVIES_SORT_SUCCESS,
          payload: data
        });
      } catch (error) {
        dispatch({
          type: MOVIES_SORT_FAIL,
          payload: error.message
        });
      }
    };

    We began by dispatching an action of type MOVIES_SORT_REQUEST to determine when the request is loading. We then used the GROQ syntax to sort and fetch data from the movie collection. The item to sort by is supplied in the variable item and the mode of sorting (ascending or descending) is supplied in the variable type. Consequently, we returned the id, poster url, and title. Once the data is returned, we dispatched an action of type MOVIES_SORT_SUCCESS and if it fails, we dispatch an action of type MOVIES_SORT_FAIL.

    A similar GROQ concept applies to the getMostPopular function. The GROQ syntax is shown below.

    const data = await sanityAPI.fetch(
          `
                *[_type == 'movie']{ 
                    _id,                              
                    "overview":   {                    
                        "text": overview[0].children[0].text
                    },                
                    "poster" : poster.asset->url,    
                    title 
                }| order(popularity desc) [0..2]`
        );

    The only difference here is that we sorted the movies by popularity in descending order and then selected only the first three. The items are returned in a zero-based index and so the first three items are items 0, 1 and 2. If we wish to retrieve the first ten items, we could pass [0..9] to the function.

    This wraps up the movie actions in the movieActions.js file; the remaining actions follow the same patterns shown above.

    Setting Up The Reducers

    Reducers are one of the most important concepts in Redux. They take the previous state and an action, and determine the next state.

    Typically, we use a switch statement to handle each action type. For instance, we can return loading when the action type denotes loading, and the payload when it denotes success or failure. A reducer is expected to take in the initial state and the action as arguments.

    Our movieReducers.js file contains various reducers to match the actions defined in the movieActions.js file. However, each of the reducers has a similar syntax and structure. The only differences are the constants they call and the values they return.

    Let’s start by taking a look at the fetchAllMoviesReducer in the movieReducers.js file.

    import {
      MOVIES_FETCH_FAIL,
      MOVIES_FETCH_REQUEST,
      MOVIES_FETCH_SUCCESS,
      MOVIES_FETCH_RESET
    } from "../constants/movieConstants";
    
    const fetchAllMoviesReducer = (state = {}, action) => {
      switch (action.type) {
        case MOVIES_FETCH_REQUEST:
          return {
            loading: true
          };
        case MOVIES_FETCH_SUCCESS:
          return {
            loading: false,
            movies: action.payload
          };
        case MOVIES_FETCH_FAIL:
          return {
            loading: false,
            error: action.payload
          };
        case MOVIES_FETCH_RESET:
          return {};
        default:
          return state;
      }
    };

    Like all reducers, the fetchAllMoviesReducer takes the initial state object (state) and the action object as arguments. We used the switch statement to check the action types at each point in time. If it corresponds to MOVIES_FETCH_REQUEST, we return loading as true to enable us to show a loading indicator to the user.

    If it corresponds to MOVIES_FETCH_SUCCESS, we turn off the loading indicator and then return the action payload in a variable movies. But if it is MOVIES_FETCH_FAIL, we also turn off the loading and then return the error. We also want the option to reset our movies. This will enable us to clear the states when we need to do so.

    We have the same structure for the other reducers in movieReducers.js.
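
    For instance, a sortMoviesByReducer following the pattern above would be a sketch like this, using the sort constants from movieConstants.js:

    const sortMoviesByReducer = (state = {}, action) => {
      switch (action.type) {
        case MOVIES_SORT_REQUEST:
          return { loading: true };
        case MOVIES_SORT_SUCCESS:
          return { loading: false, movies: action.payload };
        case MOVIES_SORT_FAIL:
          return { loading: false, error: action.payload };
        default:
          return state;
      }
    };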

    We also followed the exact same structure for personReducers.js. For instance, the fetchAllPersonsReducer function defines the states for fetching all persons in the database.

    This is given in the code below.

    import {
      PERSONS_FETCH_FAIL,
      PERSONS_FETCH_REQUEST,
      PERSONS_FETCH_SUCCESS,
    } from "../constants/personConstants";
    
    const fetchAllPersonsReducer = (state = {}, action) => {
      switch (action.type) {
        case PERSONS_FETCH_REQUEST:
          return {
            loading: true
          };
        case PERSONS_FETCH_SUCCESS:
          return {
            loading: false,
            persons: action.payload
          };
        case PERSONS_FETCH_FAIL:
          return {
            loading: false,
            error: action.payload
          };
        default:
          return state;
      }
    };
    

    Just like the fetchAllMoviesReducer, we defined fetchAllPersonsReducer with state and action as arguments. These are standard setup for Redux reducers. We then used the switch statement to check the action types and if it’s of type PERSONS_FETCH_REQUEST, we return loading as true. If it’s PERSONS_FETCH_SUCCESS, we switch off loading and return the payload, and if it’s PERSONS_FETCH_FAIL, we return the error.

    Combining Reducers

    Redux’s combineReducers function allows us to combine more than one reducer and pass it to the store. We’ll combine our movies and persons reducers in an index.js file within the reducers folder.

    Let’s take a look at it.

    import { combineReducers } from "redux";
    import {
      fetchAllMoviesReducer,
      fetchMovieByIdReducer,
      sortMoviesByReducer,
      getMostPopularReducer,
      fetchMoviesByRefReducer
    } from "./movieReducers";
    
    import {
      fetchAllPersonsReducer,
      fetchPersonByIdReducer,
      countPersonsReducer
    } from "./personReducers";
    
    import { toggleTheme } from "./globalReducers";
    
    export default combineReducers({
      fetchAllMoviesReducer,
      fetchMovieByIdReducer,
      fetchAllPersonsReducer,
      fetchPersonByIdReducer,
      sortMoviesByReducer,
      getMostPopularReducer,
      countPersonsReducer,
      fetchMoviesByRefReducer,
      toggleTheme
    });

    Here we imported all the reducers from the movies, persons, and global reducers files and passed them to the combineReducers function. The combineReducers function takes an object whose keys become the names of the corresponding slices of state, which also means we can alias a reducer by giving it a different key.
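
    For example, if we wanted a shorter key (a hypothetical alias; this article keeps the reducer names as the keys):

    const rootReducer = combineReducers({
      allMovies: fetchAllMoviesReducer // components would then select state.allMovies
    });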

    We’ll work on the globalReducers later.

    We can now pass the reducers in the Redux store.js file. This is shown below.

    import { createStore, applyMiddleware } from "redux";
    import thunk from "redux-thunk";
    import reducers from "./reducers/index";
    
    // an empty object, since we don't need any preloaded state
    const initialState = {};
    
    export default createStore(reducers, initialState, applyMiddleware(thunk));
    

    Having set up our Redux workflow, let’s set up our react application.

    Setting Up Our React Application

    Our React application will list movies and their corresponding cast and crew members. We will be using react-router-dom for routing and styled-components for styling the app. We’ll also use Material UI for icons and some UI components, and query-string for parsing URL query parameters.

    Enter the following bash command to install the dependencies.

    npm install react-router-dom @material-ui/core @material-ui/icons query-string

    Here’s what we’ll be building.

    Connecting Redux To Our React App

    React Redux ships with a Provider component that allows us to connect our application to the Redux store. To do this, we have to pass an instance of the store to the Provider. We can do this either in our index.js or App.js file.

    Here’s our index.js file.

    import React from "react";
    import ReactDOM from "react-dom";
    import "./index.css";
    import App from "./App";
    import { Provider } from "react-redux";
    import store from "./redux/store";
    ReactDOM.render(
      <Provider store={store}>
        <App />
      </Provider>,
      document.getElementById("root")
    );

    Here, we imported Provider from react-redux and store from our Redux store. Then we wrapped our entire components tree with the Provider, passing the store to it.

    Next, we need react-router-dom for routing in our React application. react-router-dom comes with BrowserRouter, Switch and Route that can be used to define our path and routes.

    We do this in our App.js file. This is shown below.

    import React from "react";
    import Header from "./components/Header";
    import Footer from "./components/Footer";
    import { BrowserRouter as Router, Switch, Route } from "react-router-dom";
    import MoviesList from "./pages/MoviesListPage";
    import PersonsList from "./pages/PersonsListPage";
    
    function App() {
    
      return (
          <Router>
            <main className="contentwrap">
              <Header />
              <Switch>
                <Route path="/persons/">
                  <PersonsList />
                </Route>
                <Route path="/" exact>
                  <MoviesList />
                </Route>
              </Switch>
            </main>
            <Footer />
          </Router>
      );
    }
    export default App;

    This is a standard setup for routing with react-router-dom. You can check it out in their documentation. We imported our components Header, Footer, PersonsList and MovieList. We then set up the react-router-dom by wrapping everything in Router and Switch.

    Since we want our pages to share the same header and footer, we had to pass the <Header /> and <Footer /> component before wrapping the structure with Switch. We also did a similar thing with the main element since we want it to wrap the entire application.

    We passed each component to the route using Route from react-router-dom.

    Defining Our Pages And Components

    Our application is organized in a structured way. Reusable components are stored in the components folder while Pages are stored in the pages folder.

    Our pages comprise MoviesListPage.js, MoviePage.js, PersonsListPage.js, and PersonPage.js. MoviesListPage.js lists all the movies in our Sanity.io backend as well as the most popular ones.

    To list all the movies, we simply dispatch the fetchAllMovies action defined in our movieActions.js file. Since we need to fetch the list as soon as the page loads, we dispatch it inside a useEffect Hook. This is shown below.

    import React, { useEffect } from "react";
    import { fetchAllMovies } from "../redux/actions/movieActions";
    import { useDispatch, useSelector } from "react-redux";
    
    const MoviesListPage = () => {
      const dispatch = useDispatch();
      useEffect(() => {    
          dispatch(fetchAllMovies());
      }, [dispatch]);
    
      const { loading, error, movies } = useSelector(
        (state) => state.fetchAllMoviesReducer
      );
      
      return (
        ...
      )
    };
    export default MoviesListPage;
    

    Thanks to the useDispatch and useSelector Hooks, we can dispatch Redux actions and select the appropriate slices of state from the Redux store. Notice that the states loading, error, and movies were defined in our reducer functions, and here we selected them using the useSelector Hook from React Redux. These states become available as soon as we dispatch the fetchAllMovies() action.

    Once we get the list of movies, we can display it in our application using the map function or however we wish.
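
    As a sketch, assuming the MovieListContainer styled component introduced below and the fields returned by our query, the render could look like this:

    return (
      <MovieListContainer>
        {loading && <p>Loading...</p>}
        {error && <p>{error}</p>}
        {movies &&
          movies.map((movie) => (
            <img key={movie._id} src={movie.poster} alt="Movie poster" />
          ))}
      </MovieListContainer>
    );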

    Here is the complete code for the MoviesListPage.js file.

    We started by dispatching the getMostPopular movies action (this action selects the movies with the highest popularity) in the useEffect Hook. This allows us to retrieve the most popular movies as soon as the page loads. Additionally, we allowed users to sort movies by their releaseDate and popularity. This is handled by the sortMoviesBy action dispatched in the code above. Furthermore, we dispatched the fetchAllMovies depending on the query parameters.

    Also, we used the useSelector Hook to select the corresponding reducers for each of these actions. We selected the states for loading, error and movies for each of the reducers.

    After getting the movies from the reducers, we can now display them to the user. Here, we have used the ES6 map function to do this. We first displayed a loader whenever each of the movie states is loading and if there’s an error, we display the error message. Finally, if we get a movie, we display the movie image to the user using the map function. We wrapped the entire component in a MovieListContainer component.

    The <MovieListContainer> … </MovieListContainer> tag is a div defined using styled components. We’ll take a brief look at that soon.

    Styling Our App With Styled Components

    Styled components allow us to style our pages and components on an individual basis. They also offer some interesting features such as inheritance, theming, passing of props, and more.

    Although we always want to style our pages on an individual basis, sometimes global styling may be desirable. Interestingly, styled-components provide a way to do that, thanks to the createGlobalStyle function.

    To use styled-components in our application, we need to install it. Open your terminal in your react project and enter the following bash command.

    npm install styled-components

    Having installed styled-components, let’s get started with our global styles.

    Let’s create a separate folder in our src directory named styles. This will store all our styles. Let’s also create a globalStyles.js file within the styles folder. To create global style in styled-components, we need to import createGlobalStyle.

    import { createGlobalStyle } from "styled-components";

    We can then define our styles as follows:

    export const GlobalStyle = createGlobalStyle`
      ...
    `

    Styled-components uses tagged template literals to define styles. Within the literal, we can write our traditional CSS code.

    We also imported deviceWidth defined in a file named definition.js. The deviceWidth holds the definition of breakpoints for setting our media queries.

    import { deviceWidth } from "./definition";

    We set overflow-x to hidden to prevent unwanted horizontal scrolling in our application.

    html, body{
            overflow-x: hidden;
    }

    We also defined the header style using the .header style selector.

    .header{
      z-index: 5;
      background-color: ${(props)=>props.theme.midDarkBlue}; 
      display:flex;
      align-items:center;
      padding: 0 20px;
      height:50px;
      justify-content:space-between;
      position:fixed;
      top:0;
      width:100%;
      @media ${deviceWidth.laptop_lg}
      {
        width:97%;
      }
      ...
    }

    Here, various styles such as the background color, z-index, padding, and lots of other traditional CSS properties are defined.

    We’ve used the styled-components props to set the background color. This allows us to set dynamic variables that can be passed from our component. Moreover, we also passed the theme’s variable to enable us to make the most of our theme toggling.

    Theming is possible here because we have wrapped our entire application with the ThemeProvider from styled-components. We’ll talk about this in a moment. Furthermore, we used the CSS flexbox to properly style our header and set the position to fixed to make sure it remains fixed with respect to the browser. We also defined the breakpoints to make the headers mobile friendly.
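
    For reference, the wrapping itself is a sketch like the following, with a hypothetical theme object defining the color names referenced in the styles:

    import { ThemeProvider } from "styled-components";
    
    // hypothetical values; the real theme defines every color used in the styles
    const theme = {
      midDarkBlue: "#102a43",
      darkBlue: "#0b1f33",
      goldish: "#f0b429",
      white: "#ffffff",
      bodyText: "#222222",
      lighter: "#f7f7f7"
    };
    
    <ThemeProvider theme={theme}>
      <App />
    </ThemeProvider>;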

    Here is the complete code for our globalStyles.js file.

    import { createGlobalStyle } from "styled-components";
    import { deviceWidth } from "./definition";
    
    export const GlobalStyle = createGlobalStyle`
        html{
            overflow-x: hidden;
        }
        body{
            background-color: ${(props) => props.theme.lighter};        
            overflow-x: hidden;   
            min-height: 100vh;     
            display: grid;
            grid-template-rows: auto 1fr auto;
        }
        #root{        
            display: grid;
            flex-direction: column;   
        }    
        h1,h2,h3, label{
            font-family: 'Aclonica', sans-serif;        
        }
        h1, h2, h3, p, span:not(.MuiIconButton-label), 
        div:not(.PrivateRadioButtonIcon-root-8), div:not(.tryingthis){
            color: ${(props) => props.theme.bodyText}
        }
        
        p, span, div, input{
            font-family: 'Jost', sans-serif;       
        }
        
        .paginate button{
            color: ${(props) => props.theme.bodyText}
        }
        
        .header{
            z-index: 5;    
            background-color: ${(props) => props.theme.midDarkBlue};                
            display: flex;
            align-items: center;   
            padding: 0 20px;        
            height: 50px;
            justify-content: space-between;
            position: fixed;
            top: 0;
            width: 100%;
            @media ${deviceWidth.laptop_lg}{
                width: 97%;            
            }               
            
            @media ${deviceWidth.tablet}{
                width: 100%;
                justify-content: space-around;
            }
            a{
                text-decoration: none;
            }
            label{
                cursor: pointer;
                color: ${(props) => props.theme.goldish};
                font-size: 1.5rem;
            }        
            .hamburger{
                cursor: pointer;   
                color: ${(props) => props.theme.white};
                @media ${deviceWidth.desktop}{
                    display: none;
                }
                @media ${deviceWidth.tablet}{
                    display: block;                
                }
            }  
                     
        }    
        .mobileHeader{
            z-index: 5;        
            background-color: ${(props) =>
              props.theme.darkBlue};                    
            color: ${(props) => props.theme.white};
            display: grid;
            place-items: center;        
            
            width: 100%;      
            @media ${deviceWidth.tablet}{
                width: 100%;                   
            }                         
            
            height: calc(100% - 50px);                
            transition: all 0.5s ease-in-out; 
            position: fixed;        
            right: 0;
            top: 50px;
            .menuitems{
                display: flex;
                box-shadow: 0 0 5px ${(props) => props.theme.lightshadowtheme};           
                flex-direction: column;
                align-items: center;
                justify-content: space-around;                        
                height: 60%;            
                width: 40%;
                a{
                    display: flex;
                    flex-direction: column;
                    align-items:center;
                    cursor: pointer;
                    color: ${(props) => props.theme.white};
                    text-decoration: none;                
                    &:hover{
                        border-bottom: 2px solid ${(props) => props.theme.goldish};
                        .MuiSvgIcon-root{
                            color: ${(props) => props.theme.lightred}
                        }
                    }
                }
            }
        }
        
        footer{                
            min-height: 30px;        
            margin-top: auto;
            display: flex;
            flex-direction: column;
            align-items: center;
            justify-content: center;        
            font-size: 0.875rem;        
            background-color: ${(props) => props.theme.midDarkBlue};      
            color: ${(props) => props.theme.white};        
        }    
    `;
    

Notice that we wrote plain CSS within the template literal, with a few exceptions: styled-components allows us to interpolate props, as with the theme colors above. You can learn more about this in the documentation.
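
As a quick illustration, here’s a minimal sketch (not part of the project’s code) of how an arbitrary prop can drive a style:

    import styled from "styled-components";

    // A minimal sketch: any prop passed to the styled component is available
    // inside the template literal via an interpolation function.
    const Banner = styled.div`
      background-color: ${(props) => (props.highlighted ? "#FFC400" : "transparent")};
    `;

    // Usage: <Banner highlighted>New episodes!</Banner>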

    Apart from defining global styles, we can define styles for individual pages.

    For instance, here is the style for the PersonListPage.js defined in PersonStyle.js in the styles folder.

    import styled from "styled-components";
    import { deviceWidth, colors } from "./definition";
    
    export const PersonsListContainer = styled.div`
      margin: 50px 80px;
      @media ${deviceWidth.tablet} {
        margin: 50px 10px;
      }
      a {
        text-decoration: none;
      }
      .top {
        display: flex;
        justify-content: flex-end;
        padding: 5px;
        .MuiSvgIcon-root {
          cursor: pointer;
          &:hover {
            color: ${colors.darkred};
          }
        }
      }
      .personslist {
        margin-top: 20px;
        display: grid;
        place-items: center;
        grid-template-columns: repeat(5, 1fr);
        @media ${deviceWidth.laptop} {
          grid-template-columns: repeat(4, 1fr);
        }
        @media ${deviceWidth.tablet} {
          grid-template-columns: repeat(3, 1fr);
        }
        @media ${deviceWidth.tablet_md} {
          grid-template-columns: repeat(2, 1fr);
        }
        @media ${deviceWidth.mobile_lg} {
          grid-template-columns: repeat(1, 1fr);
        }
        grid-gap: 30px;
        .person {
          width: 200px;
          position: relative;
          img {
            width: 100%;
          }
          .content {
            position: absolute;
            bottom: 0;
            left: 8px;
            border-right: 2px solid ${colors.goldish};
            border-left: 2px solid ${colors.goldish};
            border-radius: 10px;
            width: 80%;
            margin: 20px auto;
            padding: 8px 10px;
            background-color: ${colors.transparentWhite};
            color: ${colors.darkBlue};
            h2 {
              font-size: 1.2rem;
            }
          }
        }
      }
    `;
    

We first imported styled from styled-components, along with deviceWidth and colors from the definition file. We then defined PersonsListContainer as a styled div to hold our styles. Using media queries against the established breakpoints, we made the page mobile-friendly.

Here, we used only the standard browser breakpoints for small, large, and very large screens. We also used CSS flexbox and grid to style and lay out the content on the page.

    To use this style in our PersonListPage.js file, we simply imported it and added it to our page as follows.

import React from "react";
// The import path assumes PersonStyle.js lives in the styles folder, as described above.
import { PersonsListContainer } from "../styles/PersonStyle";
    
    const PersonsListPage = () => {
      return (
        <PersonsListContainer>
          ...
        </PersonsListContainer>
      );
    };
    export default PersonsListPage;
    

    The wrapper will output a div because we defined it as a div in our styles.

    Adding Themes And Wrapping It Up

    It’s always a cool feature to add themes to our application. For this, we need the following:

    • Our custom themes defined in a separate file (in our case definition.js file).
    • The logic defined in our Redux actions and reducers.
    • Calling our theme in our application and passing it through the component tree.

    Let’s check this out.

    Here is our theme object in the definition.js file.

    export const theme = {
      light: {
        dark: "#0B0C10",
        darkBlue: "#253858",
        midDarkBlue: "#42526e",
        lightBlue: "#0065ff",
        normal: "#dcdcdd",
        lighter: "#F4F5F7",
        white: "#FFFFFF",
        darkred: "#E85A4F",
        lightred: "#E98074",
        goldish: "#FFC400",
        bodyText: "#0B0C10",
        lightshadowtheme: "rgba(0, 0, 0, 0.1)"
      },
      dark: {
        dark: "white",
        darkBlue: "#06090F",
        midDarkBlue: "#161B22",
        normal: "#dcdcdd",
        lighter: "#06090F",
        white: "white",
        darkred: "#E85A4F",
        lightred: "#E98074",
        goldish: "#FFC400",
        bodyText: "white",
        lightshadowtheme: "rgba(255, 255, 255, 0.9)"
      }
    };
    

We have added various color properties for the light and dark themes. The colors are carefully chosen to ensure good visibility in both light and dark mode. You can define your themes however you want; there are no hard and fast rules here.

    Next, let’s add the functionality to Redux.

We have created globalActions.js in our Redux actions folder and added the following code.

    import { SET_DARK_THEME, SET_LIGHT_THEME } from "../constants/globalConstants";
    import { theme } from "../../styles/definition";
    
    export const switchToLightTheme = () => (dispatch) => {
      dispatch({
        type: SET_LIGHT_THEME,
        payload: theme.light
      });
      localStorage.setItem("theme", JSON.stringify(theme.light));
      localStorage.setItem("light", JSON.stringify(true));
    };
    
    export const switchToDarkTheme = () => (dispatch) => {
      dispatch({
        type: SET_DARK_THEME,
        payload: theme.dark
      });
      localStorage.setItem("theme", JSON.stringify(theme.dark));
      localStorage.setItem("light", JSON.stringify(false));
    };

Here, we imported our defined themes and dispatched the corresponding actions, passing the required theme as the payload. The payload is also stored in localStorage under the same keys for both light and dark themes, which lets us persist the selected theme in the browser.

    We also need to define our reducer for the themes.

    import { SET_DARK_THEME, SET_LIGHT_THEME } from "../constants/globalConstants";
    
    export const toggleTheme = (state = {}, action) => {
      switch (action.type) {
        case SET_LIGHT_THEME:
          return {
            theme: action.payload,
            light: true
          };
        case SET_DARK_THEME:
          return {
            theme: action.payload,
            light: false
          };
        default:
          return state;
      }
    };

This is very similar to what we’ve been doing: we used a switch statement to check the action type and then returned the appropriate payload. We also returned a light flag that indicates whether the user selected the light or the dark theme. We’ll use this in our components.

    We also need to add it to our root reducer and store. Here is the complete code for our store.js.

    import { createStore, applyMiddleware } from "redux";
    import thunk from "redux-thunk";
    import { theme as initialTheme } from "../styles/definition";
    import reducers from "./reducers/index";
    
    const theme = localStorage.getItem("theme")
      ? JSON.parse(localStorage.getItem("theme"))
      : initialTheme.light;
    
    const light = localStorage.getItem("light")
      ? JSON.parse(localStorage.getItem("light"))
      : true;
    
    const initialState = {
      toggleTheme: { light, theme }
    };
    export default createStore(reducers, initialState, applyMiddleware(thunk));

    Since we needed to persist the theme when the user refreshes, we had to get it from the local storage using localStorage.getItem() and pass it to our initial state.

    Adding The Functionality To Our React Application

Styled-components provides a ThemeProvider component that allows us to pass the theme through our application. We can modify our App.js file to add this functionality.

    Let’s take a look at it.

    import React from "react";
    import { BrowserRouter as Router, Switch, Route } from "react-router-dom";
    import { useSelector } from "react-redux";
    import { ThemeProvider } from "styled-components";
    
    function App() {
      const { theme } = useSelector((state) => state.toggleTheme);
      let Theme = theme ? theme : {};
      return (
        <ThemeProvider theme={Theme}>
          <Router>
            ...
          </Router>
        </ThemeProvider>
      );
    }
    export default App;

    By passing themes through the ThemeProvider, we can easily use the theme props in our styles.

    For instance, we can set the color to our bodyText custom color as follows.

    color: ${(props) => props.theme.bodyText};

    We can use the custom themes anywhere we need color in our application.

    For example, to define border-bottom, we do the following.

    border-bottom: 2px solid ${(props) => props.theme.goldish};
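
To complete the picture, the theme can be switched from any component by dispatching the actions we defined earlier. Here’s a minimal sketch of a toggle button (the component name and import paths are assumptions, not the project’s exact code):

    import React from "react";
    import { useDispatch, useSelector } from "react-redux";
    // Paths are assumed; adjust them to your folder structure.
    import { switchToDarkTheme, switchToLightTheme } from "../redux/actions/globalActions";

    const ThemeToggle = () => {
      const dispatch = useDispatch();
      // The light flag comes from the toggleTheme reducer we defined above.
      const { light } = useSelector((state) => state.toggleTheme);
      return (
        <button onClick={() => dispatch(light ? switchToDarkTheme() : switchToLightTheme())}>
          {light ? "Dark mode" : "Light mode"}
        </button>
      );
    };

    export default ThemeToggle;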

    Conclusion

We began by delving into Sanity.io, setting it up and connecting it to our React application. Then we set up Redux and used the GROQ language to query our API. We saw how to connect Redux to our React app using react-redux, and how to use styled-components and theming.

However, we’ve only scratched the surface of what is possible with these technologies. I encourage you to go through the code samples in my GitHub repo and try your hand at a completely different project using these technologies in order to learn and master them.



    Building A Stocks Price Notifier App Using React, Apollo GraphQL And Hasura — Smashing Magazine

    12/21/2020

    About The Author

    Software Engineer, trying to make sense of every line of code she writes. Ankita is a JavaScript Enthusiast and adores its weird parts. She’s also an obsessed …
    More about
    Ankita
    Masand

    In this article, we’ll learn how to build an event-based application and send a web-push notification when a particular event is triggered. We’ll set up database tables, events, and scheduled triggers on the Hasura GraphQL engine and wire up the GraphQL endpoint to the front-end application to record the stock price preference of the user.

Getting notified when an event of your choice occurs has become popular, compared to staying glued to a continuous stream of data to spot that occurrence yourself. People prefer to get relevant emails or messages when their preferred event happens, rather than being hooked to the screen waiting for it. Event-based terminology is also quite common in the world of software.

How awesome would it be if you could get updates on the price of your favorite stock right on your phone?

In this article, we’re going to build a Stocks Price Notifier application using React, Apollo GraphQL, and the Hasura GraphQL engine. We’re going to start the project from create-react-app boilerplate code and build everything from the ground up. We’ll learn how to set up the database tables and events on the Hasura console. We’ll also learn how to wire up Hasura’s events to get stock price updates via web-push notifications.

    Here’s a quick glance at what we would be building:

    Overview of Stock Price Notifier Application
    Stock Price Notifier Application

    Let’s get going!

    An Overview Of What This Project Is About

The stocks data (including metrics such as high, low, open, close, and volume) will be stored in a Hasura-backed Postgres database. Users can subscribe to a particular stock based on some value, or opt to be notified every hour. They’ll get a web-push notification once their subscription criteria are fulfilled.

This looks like a lot of stuff, and there are obviously some open questions about how we’ll build out these pieces.

    Here’s a plan on how we would accomplish this project in four steps:

1. Fetching the stocks data using a NodeJs script
  We’ll start by fetching the stock data using a simple NodeJs script from one of the providers of stocks APIs — Alpha Vantage. This script will fetch the data for a particular stock at intervals of 5 minutes. The response of the API includes high, low, open, close, and volume. This data will then be inserted in the Postgres database that is integrated with the Hasura back-end.
2. Setting up the Hasura GraphQL engine
  We’ll then set up some tables on the Postgres database to record data points. Hasura automatically generates the GraphQL schemas, queries, and mutations for these tables.
3. Front-end using React and Apollo Client
  The next step is to integrate the GraphQL layer using the Apollo client and Apollo Provider (the GraphQL endpoint provided by Hasura). The data points will be shown as charts on the front-end. We’ll also build the subscription options and fire corresponding mutations on the GraphQL layer.
4. Setting up event/scheduled triggers
  Hasura provides excellent tooling around triggers. We’ll be adding event and scheduled triggers on the stocks data table. An event trigger fires if the user wants a notification when the stock price reaches a particular value; a scheduled trigger lets the user opt for a notification about a particular stock every hour.

    Now that the plan is ready, let’s put it into action!

Here’s the GitHub repository for this project. If you get lost anywhere in the code below, refer to this repository and get back up to speed!

    Fetching The Stocks Data Using A NodeJs Script

This is not as complicated as it sounds! We’ll have to write a function that fetches data using the Alpha Vantage endpoint, and this fetch call should be fired at an interval of 5 minutes (you guessed it right: we’ll have to put this function call in setInterval).

    If you’re still wondering what Alpha Vantage is and just want to get this out of your head before hopping onto the coding part, then here it is:

    Alpha Vantage Inc. is a leading provider of free APIs for realtime and historical data on stocks, forex (FX), and digital/cryptocurrencies.

    We would be using this endpoint to get the required metrics of a particular stock. This API expects an API key as one of the parameters. You can get your free API key from here. We’re now good to get onto the interesting bit — let’s start writing some code!

    Installing Dependencies

    Create a stocks-app directory and create a server directory inside it. Initialize it as a node project using npm init and then install these dependencies:

    npm i isomorphic-fetch pg nodemon --save

    These are the only three dependencies that we’d need to write this script of fetching the stock prices and storing them in the Postgres database.

    Here’s a brief explanation of these dependencies:

    • isomorphic-fetch
      It makes it easy to use fetch isomorphically (in the same form) on both the client and the server.
    • pg
      It is a non-blocking PostgreSQL client for NodeJs.
    • nodemon
      It automatically restarts the server on any file changes in the directory.

Setting Up The Configuration

    Add a config.js file at the root level. Add the below snippet of code in that file for now:

    const config = {
      user: '<DATABASE_USER>',
      password: '<DATABASE_PASSWORD>',
      host: '<DATABASE_HOST>',
      port: '<DATABASE_PORT>',
      database: '<DATABASE_NAME>',
      ssl: '<IS_SSL>',
      apiHost: 'https://www.alphavantage.co/',
    };
    
    module.exports = config;

The user, password, host, port, database, and ssl fields relate to the Postgres configuration. We’ll come back and edit these when we set up the Hasura engine!

    Initializing The Postgres Connection Pool For Querying The Database

    A connection pool is a common term in computer science and you’ll often hear this term while dealing with databases.

    While querying data in databases, you’ll have to first establish a connection to the database. This connection takes in the database credentials and gives you a hook to query any of the tables in the database.

Note: Establishing database connections is costly and consumes significant resources. A connection pool caches the database connections and re-uses them for succeeding queries. If all the open connections are in use, a new connection is established and then added to the pool.

Now that it’s clear what a connection pool is and what it is used for, let’s start by creating an instance of the pg connection pool for this application:

    Add pool.js file at the root level and create a pool instance as:

    const { Pool } = require('pg');
    const config = require('./config');
    
    const pool = new Pool({
      user: config.user,
      password: config.password,
      host: config.host,
      port: config.port,
      database: config.database,
      ssl: config.ssl,
    });
    
    module.exports = pool;

The above lines of code create an instance of Pool with the configuration options set in the config file. We’re yet to complete the config file, but the configuration options themselves won’t change.

    We’ve now set the ground and are ready to start making some API calls to the Alpha Vantage endpoint.

    Let’s get onto the interesting bit!

    Fetching The Stocks Data

    In this section, we’ll be fetching the stock data from the Alpha Vantage endpoint. Here’s the index.js file:

    const fetch = require('isomorphic-fetch');
    const getConfig = require('./config');
    const { insertStocksData } = require('./queries');
    
    const symbols = [
      'NFLX',
      'MSFT',
      'AMZN',
      'W',
      'FB'
    ];
    
    (function getStocksData () {
    
      const apiConfig = getConfig('apiHostOptions');
      const { host, timeSeriesFunction, interval, key } = apiConfig;
    
      symbols.forEach((symbol) => {
        fetch(`${host}query/?function=${timeSeriesFunction}&symbol=${symbol}&interval=${interval}&apikey=${key}`)
        .then((res) => res.json())
        .then((data) => {
          const timeSeries = data['Time Series (5min)'];
          Object.keys(timeSeries).map((key) => {
            const dataPoint = timeSeries[key];
            const payload = [
              symbol,
              dataPoint['2. high'],
              dataPoint['3. low'],
              dataPoint['1. open'],
              dataPoint['4. close'],
              dataPoint['5. volume'],
              key,
            ];
            insertStocksData(payload);
          });
        });
      })
    })()

    For the purpose of this project, we’re going to query prices only for these stocks — NFLX (Netflix), MSFT (Microsoft), AMZN (Amazon), W (Wayfair), FB (Facebook).

Refer to this file for the config options. The IIFE getStocksData function is not doing much: it loops through these symbols and queries the Alpha Vantage endpoint ${host}query/?function=${timeSeriesFunction}&symbol=${symbol}&interval=${interval}&apikey=${key} to get the metrics for these stocks.
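
Note that the snippet above runs only once. To poll the endpoint every 5 minutes, as planned earlier, you could declare getStocksData as a named function instead of an IIFE and schedule it — a minimal sketch:

    // Run once on startup, then poll Alpha Vantage every 5 minutes.
    getStocksData();
    setInterval(getStocksData, 5 * 60 * 1000);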

    The insertStocksData function puts these data points in the Postgres database. Here’s the insertStocksData function:

const pool = require('./pool');

const insertStocksData = async (payload) => {
  const query = 'INSERT INTO stock_data (symbol, high, low, open, close, volume, time) VALUES ($1, $2, $3, $4, $5, $6, $7)';
  pool.query(query, payload, (err, result) => {
    // Log any error that occurred while inserting the data point.
    if (err) {
      console.log('error while inserting stock data', err);
    }
  });
};

This is it! We have fetched the stock data points from the Alpha Vantage API and written a function to put them in the stock_data table of the Postgres database. There is just one missing piece to make all of this work: we have to populate the correct values in the config file. We’ll get these values after setting up the Hasura engine. Let’s get to that right away!

    Please refer to the server directory for the complete code on fetching data points from Alpha Vantage endpoint and populating that to the Hasura Postgres database.

If this approach of setting up connections and configuration options and inserting data using a raw query looks a bit difficult, please don’t worry! We’re going to learn how to do all of this the easy way with a GraphQL mutation once the Hasura engine is set up!

    Setting Up The Hasura GraphQL Engine

    It is really simple to set up the Hasura engine and get up and running with the GraphQL schemas, queries, mutations, subscriptions, event triggers, and much more!

    Click on Try Hasura and enter the project name:

    Creating a Hasura Project
    Creating a Hasura Project. (Large preview)

    I’m using the Postgres database hosted on Heroku. Create a database on Heroku and link it to this project. You should then be all set to experience the power of query-rich Hasura console.

    Please copy the Postgres DB URL that you’ll get after creating the project. We’ll have to put this in the config file.

    Click on Launch Console and you’ll be redirected to this view:

    Hasura Console
    Hasura Console. (Large preview)

    Let’s start building the table schema that we’d need for this project.

    Creating Tables Schema On The Postgres Database

    Please go to the Data tab and click on Add Table! Let’s start creating some of the tables:

    symbol table

    This table would be used for storing the information of the symbols. For now, I’ve kept two fields here — id and company. The field id is a primary key and company is of type varchar. Let’s add some of the symbols in this table:

    symbol table
    symbol table. (Large preview)
    stock_data table

    The stock_data table stores id, symbol, time and the metrics such as high, low, open, close, volume. The NodeJs script that we wrote earlier in this section will be used to populate this particular table.

Here’s how the table looks:

    stock_data table
    stock_data table. (Large preview)

    Neat! Let’s get to the other table in the database schema!

    user_subscription table

    The user_subscription table stores the subscription object against the user Id. This subscription object is used for sending web-push notifications to the users. We’ll learn later in the article how to generate this subscription object.

    There are two fields in this table — id is the primary key of type uuid and subscription field is of type jsonb.

    events table

This is the important one; it is used for storing the notification event options. When a user opts in for the price updates of a particular stock, we store that event information in this table. This table contains these columns:

    • id: is a primary key with the auto-increment property.
    • symbol: is a text field.
    • user_id: is of type uuid.
    • trigger_type: is used for storing the event trigger type — time/event.
    • trigger_value: is used for storing the trigger value. For example, if a user has opted in for price-based event trigger — he wants updates if the price of the stock has reached 1000, then the trigger_value would be 1000 and the trigger_type would be event.

    These are all the tables that we’d need for this project. We also have to set up relations among these tables to have a smooth data flow and connections. Let’s do that!

    Setting up relations among tables

    The events table is used for sending web-push notifications based on the event value. So, it makes sense to connect this table with the user_subscription table to be able to send push notifications on the subscriptions stored in this table.

    events.user_id  → user_subscription.id

    The stock_data table is related to the symbols table as:

    stock_data.symbol  → symbol.id

    We also have to construct some relations on the symbol table as:

    stock_data.symbol  → symbol.id
    events.symbol  → symbol.id

    We’ve now created the required tables and also established the relations among them! Let’s switch to the GRAPHIQL tab on the console to see the magic!

    Hasura has already set up the GraphQL queries based on these tables:

    GraphQL Queries/Mutations on the Hasura console
    GraphQL Queries/Mutations on the Hasura console. (Large preview)

Querying these tables is plain simple, and you can apply any of these filters/properties (distinct_on, limit, offset, order_by, where) to get the desired data.
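
For instance, here’s a sketch of a query you could run in the GRAPHIQL tab to fetch the five most recent data points for a given symbol:

    query {
      stock_data(where: {symbol: {_eq: "AMZN"}}, order_by: {time: desc}, limit: 5) {
        high
        low
        open
        close
        volume
        time
      }
    }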

    This all looks good but we have still not connected our server-side code to the Hasura console. Let’s complete that bit!

    Connecting The NodeJs Script To The Postgres Database

    Please put the required options in the config.js file in the server directory as:

    const config = {
      databaseOptions: {
        user: '<DATABASE_USER>',
        password: '<DATABASE_PASSWORD>',
        host: '<DATABASE_HOST>',
        port: '<DATABASE_PORT>',
        database: '<DATABASE_NAME>',
        ssl: true,
      },
      apiHostOptions: {
        host: 'https://www.alphavantage.co/',
        key: '<API_KEY>',
        timeSeriesFunction: 'TIME_SERIES_INTRADAY',
        interval: '5min'
      },
      graphqlURL: '<GRAPHQL_URL>'
    };
    
    const getConfig = (key) => {
      return config[key];
    };
    
    module.exports = getConfig;

Please fill in these options from the database connection string that was generated when we created the Postgres database on Heroku.

The apiHostOptions consists of the API-related options such as host, key, timeSeriesFunction, and interval.

    You’ll get the graphqlURL field in the GRAPHIQL tab on the Hasura console.

    The getConfig function is used for returning the requested value from the config object. We’ve already used this in index.js in the server directory.

    It’s time to run the server and populate some data in the database. I’ve added one script in package.json as:

    "scripts": {
        "start": "nodemon index.js"
    }

    Run npm start on the terminal and the data points of the symbols array in index.js should be populated in the tables.

    Refactoring The Raw Query In The NodeJs Script To GraphQL Mutation

Now that the Hasura engine is set up, let’s see how easy it is to call a mutation on the stock_data table.

    The function insertStocksData in queries.js uses a raw query:

    const query = 'INSERT INTO stock_data (symbol, high, low, open, close, volume, time) VALUES ($1, $2, $3, $4, $5, $6, $7)';

    Let’s refactor this query and use mutation powered by the Hasura engine. Here’s the refactored queries.js in the server directory:

    
    const { createApolloFetch } = require('apollo-fetch');
    const getConfig = require('./config');
    
    const GRAPHQL_URL = getConfig('graphqlURL');
    const fetch = createApolloFetch({
      uri: GRAPHQL_URL,
    });
    
    const insertStocksData = async (payload) => {
      const insertStockMutation = await fetch({
        query: `mutation insertStockData($objects: [stock_data_insert_input!]!) {
          insert_stock_data (objects: $objects) {
            returning {
              id
            }
          }
        }`,
        variables: {
          objects: payload,
        },
      });
      console.log('insertStockMutation', insertStockMutation);
    };
    
    module.exports = {
      insertStocksData
    }

Please note: we have to add the graphqlURL field in the config.js file.

The apollo-fetch module returns a fetch function that can be used to query/mutate the data on the GraphQL endpoint. Easy enough, right?

The only change we have to make in index.js is to return the stocks object in the format required by the insertStocksData function. Please check out index2.js and queries2.js for the complete code with this approach.
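
In other words, instead of the positional array we built for the raw query, each data point becomes an object keyed by column name, and the objects are collected into an array. A sketch of the reshaped payload (the field mapping follows the Alpha Vantage response keys used earlier):

    // One reshaped data point; collect these into an array and pass
    // that array as the payload to insertStocksData.
    const stockObject = {
      symbol,
      time: key,
      high: dataPoint['2. high'],
      low: dataPoint['3. low'],
      open: dataPoint['1. open'],
      close: dataPoint['4. close'],
      volume: dataPoint['5. volume'],
    };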

    Now that we’ve accomplished the data-side of the project, let’s move onto the front-end bit and build some interesting components!

    Note: We don’t have to keep the database configuration options with this approach!

    Front-end Using React And Apollo Client

The front-end project is in the same repository and is created using the create-react-app package. The service worker generated by this package supports asset caching, but it doesn’t allow further customizations to be added to the service worker file. There are already some open issues about adding support for custom service worker options. That said, there are ways to work around this problem and add support for a custom service worker.

    Let’s start by looking at the structure for the front-end project:

    Project Directory
    Project Directory. (Large preview)

Please check the src directory! Don’t worry about the service worker related files for now; we’ll learn more about them later in this section. The rest of the project structure is simple: the components folder holds the components (Loader, Chart); the services folder contains helper functions/services used for transforming objects into the required structure; styles, as the name suggests, contains the sass files used for styling the project; and views is the main directory, containing the view-layer components.

We need just two view components for this project — the symbol list and the symbol time-series. We’ll build the time-series using the Chart component from the highcharts library. Let’s start adding code to these files to build up the pieces on the front-end!

    Installing Dependencies

    Here’s the list of dependencies that we’ll need:

    • apollo-boost
      Apollo boost is a zero-config way to start using Apollo Client. It comes bundled with the default configuration options.
    • reactstrap and bootstrap
      The components are built using these two packages.
    • graphql and graphql-type-json
      graphql is a required dependency for using apollo-boost and graphql-type-json is used for supporting the json datatype being used in the GraphQL schema.
    • highcharts and highcharts-react-official
These two packages will be used for building the chart.

    • node-sass
      This is added for supporting sass files for styling.

    • uuid
This package is used for generating strong random identifiers (we use it for the user ID).

    All of these dependencies will make sense once we start using them in the project. Let’s get onto the next bit!

    Setting Up Apollo Client

Create an apolloClient.js file inside the src folder as:

    import ApolloClient from 'apollo-boost';
    
    const apolloClient = new ApolloClient({
      uri: '<HASURA_CONSOLE_URL>'
    });
    
    export default apolloClient;

    The above code instantiates ApolloClient and it takes in uri in the config options. The uri is the URL of your Hasura console. You’ll get this uri field on the GRAPHIQL tab in the GraphQL Endpoint section.

    The above code looks simple but it takes care of the main part of the project! It connects the GraphQL schema built on Hasura with the current project.

    We also have to pass this apollo client object to ApolloProvider and wrap the root component inside ApolloProvider. This will enable all the nested components inside the main component to use client prop and fire queries on this client object.

    Let’s modify the index.js file as:

    const Wrapper = () => {
    /* some service worker logic - ignore for now */
      const [insertSubscription] = useMutation(subscriptionMutation);
      useEffect(() => {
        serviceWorker.register(insertSubscription);
      }, [])
      /* ignore the above snippet */
      return <App />;
    }
    
    ReactDOM.render(
      <ApolloProvider client={apolloClient}>
        <Wrapper />
      </ApolloProvider>,
      document.getElementById('root')
    );

Please ignore the insertSubscription-related code for now; we’ll look at it in detail later. The rest of the code should be straightforward: the render function takes in the root component and the elementId as parameters. Notice that client (the ApolloClient instance) is being passed as a prop to ApolloProvider. You can check the complete index.js file here.

    Setting Up The Custom Service Worker

A service worker is a JavaScript file that has the capability to intercept network requests. It is used for querying the cache to check whether a requested asset is already present, instead of making a trip to the server. Service workers are also used for sending web-push notifications to subscribed devices.

We have to send web-push notifications with stock price updates to the subscribed users. Let’s set the ground and build this service worker file!

The insertSubscription-related snippet in the index.js file does the work of registering the service worker and putting the subscription object in the database using subscriptionMutation.

    Please refer queries.js for all the queries and mutations being used in the project.

    serviceWorker.register(insertSubscription); invokes the register function written in the serviceWorker.js file. Here it is:

    export const register = (insertSubscription) => {
      if ('serviceWorker' in navigator) {
        const swUrl = `${process.env.PUBLIC_URL}/serviceWorker.js`
        navigator.serviceWorker.register(swUrl)
          .then(() => {
            console.log('Service Worker registered');
            return navigator.serviceWorker.ready;
          })
          .then((serviceWorkerRegistration) => {
            getSubscription(serviceWorkerRegistration, insertSubscription);
            Notification.requestPermission();
          })
      }
    }

The above function first checks whether serviceWorker is supported by the browser and then registers the service worker file hosted at the URL swUrl. We’ll check this file in a moment!

The getSubscription function does the work of getting the subscription object, using the subscribe method on the pushManager object. This subscription object is then stored in the user_subscription table against a userId. Note that the userId is generated using the uuidv4 function. Let’s check out the getSubscription function:

    const getSubscription = (serviceWorkerRegistration, insertSubscription) => {
      serviceWorkerRegistration.pushManager.getSubscription()
        .then ((subscription) => {
          const userId = uuidv4();
          if (!subscription) {
            const applicationServerKey = urlB64ToUint8Array('<APPLICATION_SERVER_KEY>')
            serviceWorkerRegistration.pushManager.subscribe({
              userVisibleOnly: true,
              applicationServerKey
            }).then (subscription => {
              insertSubscription({
                variables: {
                  userId,
                  subscription
                }
              });
              localStorage.setItem('serviceWorkerRegistration', JSON.stringify({
                userId,
                subscription
              }));
            })
          }
        })
    }
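
The urlB64ToUint8Array helper isn’t shown in the snippet above. Its widely used implementation converts the base64url-encoded VAPID public key into the Uint8Array format that pushManager.subscribe expects:

    // Converts a URL-safe base64 string (the VAPID public key) into a Uint8Array.
    const urlB64ToUint8Array = (base64String) => {
      const padding = '='.repeat((4 - (base64String.length % 4)) % 4);
      const base64 = (base64String + padding).replace(/-/g, '+').replace(/_/g, '/');
      const rawData = window.atob(base64);
      return Uint8Array.from([...rawData].map((char) => char.charCodeAt(0)));
    };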

    You can check serviceWorker.js file for the complete code!

    Notification Popup
    Notification Popup. (Large preview)

Notification.requestPermission() invokes this popup, which asks the user for permission to send notifications. Once the user clicks Allow, the push service generates a subscription object. We’re storing that object in localStorage as:

    Webpush Subscriptions object
    Webpush Subscriptions object. (Large preview)

    The field endpoint in the above object is used for identifying the device and the server uses this endpoint to send web push notifications to the user.

We have done the work of initializing and registering the service worker, and we also have the user’s subscription object. All of this works because of the serviceWorker.js file present in the public folder. Let’s now set up the service worker to get things ready!

This is a slightly tricky topic, but let’s get it right! As mentioned earlier, the create-react-app utility doesn’t support customizations for the service worker by default. We can achieve a custom service worker implementation using the workbox-build module.

We also have to make sure that the default behavior of pre-caching files stays intact. We’ll modify the part where the service worker gets built in the project, and workbox-build helps us achieve exactly that. Neat stuff! Let’s keep it simple and list down everything we have to do to make the custom service worker work:

    • Handle the pre-caching of assets using workboxBuild.
    • Create a service worker template for caching assets.
    • Create sw-precache-config.js file to provide custom configuration options.
    • Add the build service worker script in the build step in package.json.

Don’t worry if all this sounds confusing! This article doesn’t focus on explaining the semantics behind each of these points; we have to focus on the implementation part for now. I’ll try to cover the reasoning behind all this work for a custom service worker in another article.

    Let’s create two files sw-build.js and sw-custom.js in the src directory. Please refer to the links to these files and add the code to your project.
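
In case those links are unavailable, here’s a rough sketch of what sw-build.js could look like, using workbox-build’s injectManifest API (the file names and glob patterns here are assumptions, not necessarily the article’s exact code):

    // src/sw-build.js — inject the precache manifest into our custom
    // service worker template at build time.
    const workboxBuild = require('workbox-build');

    const buildSW = () => {
      return workboxBuild
        .injectManifest({
          swSrc: 'src/sw-custom.js',        // our service worker template
          swDest: 'build/serviceWorker.js', // output served from build/
          globDirectory: 'build',
          globPatterns: ['**/*.{js,css,html,png}'],
        })
        .then(({ count, size }) => {
          console.log(`Precached ${count} files, totaling ${size} bytes.`);
        });
    };

    buildSW();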

Let’s now create the sw-precache-config.js file at the root level and add the following code to that file:

    module.exports = {
      staticFileGlobs: [
        'build/static/css/**.css',
        'build/static/js/**.js',
        'build/index.html'
      ],
      swFilePath: './build/serviceWorker.js',
      stripPrefix: 'build/',
      handleFetch: false,
      runtimeCaching: [{
        urlPattern: /this\.is\.a\.regex/,
        handler: 'networkFirst'
      }]
    }

    Let’s also modify the package.json file to make room for building the custom service worker file:

    Add these statements in the scripts section:

    "build-sw": "node ./src/sw-build.js",
    "clean-cra-sw": "rm -f build/precache-manifest.*.js && rm -f build/service-worker.js",

    And modify the build script as:

    "build": "react-scripts build && npm run build-sw && npm run clean-cra-sw",

    The setup is finally done! We now have to add a custom service worker file inside the public folder:

    function showNotification (event) {
      const eventData = event.data.json();
      const { title, body } = eventData
      self.registration.showNotification(title, { body });
    }
    
    self.addEventListener('push', (event) => {
      event.waitUntil(showNotification(event));
    })

We’ve just added a push listener to listen for push notifications sent by the server. The showNotification function is used for displaying web-push notifications to the user.

    This is it! We’re done with all the hard work of setting up a custom service worker to handle web push notifications. We’ll see these notifications in action once we build the user interfaces!

    We’re getting closer to building the main code pieces. Let’s now start with the first view!

    Symbol List View

    The App component being used in the previous section looks like this:

    import React from 'react';
    import SymbolList from './views/symbolList';
    
    const App = () => {
      return <SymbolList />;
    };
    
    export default App;

It is a simple component that returns the SymbolList view, and SymbolList does all the heavy lifting of displaying the symbols in a neatly tied user interface.

    Let’s look at symbolList.js inside the views folder:

    Please refer to the file here!

    The component returns the results of the renderSymbols function. And, this data is being fetched from the database using the useQuery hook as:

    const { loading, error, data } = useQuery(symbolsQuery, {variables: { userId }});

    The symbolsQuery is defined as:

    export const symbolsQuery = gql`
      query getSymbols($userId: uuid) {
        symbol {
          id
          company
          symbol_events(where: {user_id: {_eq: $userId}}) {
            id
            symbol
            trigger_type
            trigger_value
            user_id
          }
          stock_symbol_aggregate {
            aggregate {
              max {
                high
                volume
              }
              min {
                low
                volume
              }
            }
          }
        }
      }
    `;

It takes in the userId and fetches the subscribed events of that particular user in order to display the correct state of the notification icon (the bell icon that is displayed alongside the title). The query also fetches the max and min values of the stock. Notice the use of aggregate in the above query: Hasura’s aggregation queries do the work behind the scenes to fetch aggregate values like count, sum, avg, max, and min.

    Based on the response from the above GraphQL call, here’s the list of cards that are displayed on the front-end:

    Stock Cards
    Stock Cards. (Large preview)

    The card HTML structure looks something like this:

    <div key={id}>
      <div className="card-container">
        <Card>
          <CardBody>
            <CardTitle className="card-title">
              <span className="company-name">{company}  </span>
                <Badge color="dark" pill>{id}</Badge>
                <div className={classNames({'bell': true, 'disabled': isSubscribed})} id={`subscribePopover-${id}`}>
                  <FontAwesomeIcon icon={faBell} title="Subscribe" />
                </div>
            </CardTitle>
            <div className="metrics">
              <div className="metrics-row">
                <span className="metrics-row--label">High:</span> 
                <span className="metrics-row--value">{max.high}</span>
                <span className="metrics-row--label">{' '}(Volume: </span> 
                <span className="metrics-row--value">{max.volume}</span>)
              </div>
              <div className="metrics-row">
                <span className="metrics-row--label">Low: </span>
                <span className="metrics-row--value">{min.low}</span>
                <span className="metrics-row--label">{' '}(Volume: </span>
                <span className="metrics-row--value">{min.volume}</span>)
              </div>
            </div>
            <Button className="timeseries-btn" outline onClick={() => toggleTimeseries(id)}>Timeseries</Button>{' '}
          </CardBody>
        </Card>
        <Popover
          className="popover-custom" 
          placement="bottom" 
          target={`subscribePopover-${id}`}
          isOpen={isSubscribePopoverOpen === id}
          toggle={() => setSubscribeValues(id, symbolTriggerData)}
        >
          <PopoverHeader>
            Notification Options
            <span className="popover-close">
              <FontAwesomeIcon 
                icon={faTimes} 
                onClick={() => handlePopoverToggle(null)}
              />
            </span>
          </PopoverHeader>
          {renderSubscribeOptions(id, isSubscribed, symbolTriggerData)}
        </Popover>
      </div>
      <Collapse isOpen={expandedStockId === id}>
        {
          isOpen(id) ? <StockTimeseries symbol={id}/> : null
        }
      </Collapse>
    </div>

    We’re using the Card component of ReactStrap to render these cards. The Popover component is used for displaying the subscription-based options:

    Notification Options
    Notification Options. (Large preview)

When users click on the bell icon for a particular stock, they can opt in to be notified every hour or when the price of the stock reaches the entered value. We’ll see this in action in the Events/Time Triggers section.

    Note: We’ll get to the StockTimeseries component in the next section!

    Please refer to symbolList.js for the complete code related to the stocks list component.

    Stock Timeseries View

    The StockTimeseries component uses the query stocksDataQuery:

    export const stocksDataQuery = gql`
      query getStocksData($symbol: String) {
        stock_data(order_by: {time: desc}, where: {symbol: {_eq: $symbol}}, limit: 25) {
          high
          low
          open
          close
          volume
          time
        }
      }
    `;

The above query fetches the 25 most recent data points of the selected stock. For example, here is the chart for the open metric of the Facebook stock:

    Stock Prices timeline
    Stock Prices timeline. (Large preview)

This is a straightforward component in which we pass some chart options to the HighchartsReact component. Here are the chart options:

    const chartOptions = {
      title: {
        text: `${symbol} Timeseries`
      },
      subtitle: {
        text: 'Intraday (5min) open, high, low, close prices & volume'
      },
      yAxis: {
        title: {
          text: '#'
        }
      },
      xAxis: {
        title: {
          text: 'Time'
        },
        categories: getDataPoints('time')
      },
      legend: {
        layout: 'vertical',
        align: 'right',
        verticalAlign: 'middle'
      },
      series: [
        {
          name: 'high',
          data: getDataPoints('high')
        }, {
          name: 'low',
          data: getDataPoints('low')
        }, {
          name: 'open',
          data: getDataPoints('open')
        },
        {
          name: 'close',
          data: getDataPoints('close')
        },
        {
          name: 'volume',
          data: getDataPoints('volume')
        }
      ]
    }

    The X-Axis shows the time and the Y-Axis shows the metric value at that time. The function getDataPoints is used for generating a series of points for each of the series.

const getDataPoints = (type) => {
  const values = [];
  // Walk over the fetched rows and collect the requested metric.
  data.stock_data.forEach((dataPoint) => {
    let value = dataPoint[type];
    if (type === 'time') {
      value = new Date(dataPoint['time']).toLocaleString('en-US');
    }
    values.push(value);
  });
  return values;
}

    Simple! That’s how the Chart component is generated! Please refer to Chart.js and stockTimeseries.js files for the complete code on stock time-series.

You should now be ready with the data and user-interface parts of the project. Let’s now move on to the interesting part: setting up event/time triggers based on the user’s input.

    Setting Up Event/Scheduled Triggers

    In this section, we’ll learn how to set up triggers on the Hasura console and how to send web push notifications to the selected users. Let’s get started!

    Events Triggers On Hasura Console

    Let’s create an event trigger stock_value on the table stock_data and insert as the trigger operation. The webhook will run every time there is an insert in the stock_data table.

    Event triggers setup
    Event triggers setup. (Large preview)

We’re going to create a glitch project for the webhook URL. Let me put down a bit about webhooks to make them easier to understand:

    Webhooks are used for sending data from one application to another on the occurrence of a particular event. When an event is triggered, an HTTP POST call is made to the webhook URL with the event data as the payload.

In this case, when there is an insert operation on the stock_data table, an HTTP POST call will be made to the configured webhook URL (the POST endpoint in the glitch project).
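
The handler we’ll write shortly reads body.trigger.name and body.event.data.new from that call, so the POST body looks roughly like this (a sketch with illustrative values):

    // Abridged shape of the webhook payload sent by Hasura on insert.
    const samplePayload = {
      trigger: { name: 'stock-value-trigger' },
      table: { schema: 'public', name: 'stock_data' },
      event: {
        op: 'INSERT',
        data: {
          old: null,
          new: { symbol: 'AMZN', high: 2010.5, low: 1995.2, open: 2000.1, close: 2000.0, volume: 12345, time: '2020-12-21 12:00:00' }
        }
      }
    };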

    Glitch Project For Sending Web-push Notifications

We have to get the webhook URL to put in the above event trigger interface. Go to glitch.com and create a new project. In this project, we’ll set up an express server with an HTTP POST listener. The HTTP POST payload will have all the details of the stock data point, including open, close, high, low, volume, and time. We’ll have to fetch the list of users subscribed to this stock whose trigger value is matched by the close metric.

    These users will then be notified of the stock price via web-push notifications.

    That’s all we’ve to do to achieve the desired target of notifying users when the stock price reaches the expected value!

    Let’s break this down into smaller steps and implement them!

    Installing Dependencies

    We would need the following dependencies:

    • express: is used for creating an express server.
    • apollo-fetch: is used for creating a fetch function for getting data from the GraphQL endpoint.
    • web-push: is used for sending web push notifications.

    Please write this script in package.json to run index.js on npm start command:

    "scripts": {
      "start": "node index.js"
    }

Setting Up The Express Server

    Let’s create an index.js file as:

    const express = require('express');
    const bodyParser = require('body-parser');
    
    const app = express();
    app.use(bodyParser.json());
    
    const handleStockValueTrigger = (eventData, res) => {
      /* Code for handling this trigger */
    }
    
    app.post('/', (req, res) => {
      const { body } = req
      const eventType = body.trigger.name
      const eventData = body.event
      
      switch (eventType) {
        case 'stock-value-trigger':
          return handleStockValueTrigger(eventData, res);
      }
      
    });
    
    app.get('/', function (req, res) {
      res.send('Hello World - For Event Triggers, try a POST request?');
    });
    
    var server = app.listen(process.env.PORT, function () {
        console.log(`server listening on port ${process.env.PORT}`);
    });
    

In the above code, we’ve created POST and GET listeners on the route /. The GET listener is simple to get around! We’re mainly interested in the POST call: if the eventType is stock-value-trigger, we’ll have to handle this trigger by notifying the subscribed users. Let’s add that bit and complete this function!

Fetching Subscribed Users

    const fetch = createApolloFetch({
      uri: process.env.GRAPHQL_URL
    });
    
    const getSubscribedUsers = (symbol, triggerValue) => {
      return fetch({
        query: `query getSubscribedUsers($symbol: String, $triggerValue: numeric) {
          events(where: {symbol: {_eq: $symbol}, trigger_type: {_eq: "event"}, trigger_value: {_gte: $triggerValue}}) {
            user_id
            user_subscription {
              subscription
            }
          }
        }`,
        variables: {
          symbol,
          triggerValue
        }
      }).then(response => response.data.events)
    }
    
    
    const handleStockValueTrigger = async (eventData, res) => {
      const symbol = eventData.data.new.symbol;
      const triggerValue = eventData.data.new.close;
      const subscribedUsers = await getSubscribedUsers(symbol, triggerValue);
      const webpushPayload = {
        title: `${symbol} - Stock Update`,
        body: `The price of this stock is ${triggerValue}`
      }
      subscribedUsers.map((data) => {
        sendWebpush(data.user_subscription.subscription, JSON.stringify(webpushPayload));
      })
      res.json(eventData.toString());
    }
    

In the above handleStockValueTrigger function, we first fetch the subscribed users using the getSubscribedUsers function. We then send web-push notifications to each of these users. The sendWebpush function is used for sending the notification; we’ll look at the web-push implementation in a moment.

    The function getSubscribedUsers uses the query:

    query getSubscribedUsers($symbol: String, $triggerValue: numeric) {
      events(where: {symbol: {_eq: $symbol}, trigger_type: {_eq: "event"}, trigger_value: {_gte: $triggerValue}}) {
        user_id
        user_subscription {
          subscription
        }
      }
    }

This query takes in the stock symbol and the value, and fetches the user details (including user_id and user_subscription) that match these conditions:

    • symbol equal to the one being passed in the payload.
    • trigger_type is equal to event.
    • trigger_value is greater than or equal to the one being passed to this function (close in this case).

    Once we get the list of users, the only thing that remains is sending web-push notifications to them! Let’s do that right away!

    Sending Web-Push Notifications To The Subscribed Users

We first have to get the public and private VAPID keys in order to send web-push notifications. Please store these keys in the .env file and set them in index.js as:

    webPush.setVapidDetails(
      'mailto:<YOUR_MAIL_ID>',
      process.env.PUBLIC_VAPID_KEY,
      process.env.PRIVATE_VAPID_KEY
    );
    
    const sendWebpush = (subscription, webpushPayload) => {
      webPush.sendNotification(subscription, webpushPayload).catch(err => console.log('error while sending webpush', err))
    }

    The sendNotification function is used for sending the web-push on the subscription endpoint provided as the first parameter.
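
If you don’t have a VAPID key pair yet, the web-push package can generate one for you from the command line:

    npx web-push generate-vapid-keys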

That’s all that is required to successfully send web-push notifications to the subscribed users. Here’s the complete code defined in index.js:

    const express = require('express');
    const bodyParser = require('body-parser');
    const { createApolloFetch } = require('apollo-fetch');
    const webPush = require('web-push');
    
    webPush.setVapidDetails(
      'mailto:<YOUR_MAIL_ID>',
      process.env.PUBLIC_VAPID_KEY,
      process.env.PRIVATE_VAPID_KEY
    );
    
    const app = express();
    app.use(bodyParser.json());
    
    const fetch = createApolloFetch({
      uri: process.env.GRAPHQL_URL
    });
    
    const getSubscribedUsers = (symbol, triggerValue) => {
      return fetch({
        query: `query getSubscribedUsers($symbol: String, $triggerValue: numeric) {
          events(where: {symbol: {_eq: $symbol}, trigger_type: {_eq: "event"}, trigger_value: {_gte: $triggerValue}}) {
            user_id
            user_subscription {
              subscription
            }
          }
        }`,
        variables: {
          symbol,
          triggerValue
        }
      }).then(response => response.data.events)
    }
    
    const sendWebpush = (subscription, webpushPayload) => {
      webPush.sendNotification(subscription, webpushPayload).catch(err => console.log('error while sending webpush', err))
    }
    
    const handleStockValueTrigger = async (eventData, res) => {
      const symbol = eventData.data.new.symbol;
      const triggerValue = eventData.data.new.close;
      const subscribedUsers = await getSubscribedUsers(symbol, triggerValue);
      const webpushPayload = {
        title: `${symbol} - Stock Update`,
        body: `The price of this stock is ${triggerValue}`
      }
      subscribedUsers.map((data) => {
        sendWebpush(data.user_subscription.subscription, JSON.stringify(webpushPayload));
      })
      res.json(eventData.toString());
    }
    
    app.post('/', (req, res) => {
      const { body } = req
      const eventType = body.trigger.name
      const eventData = body.event
      
      switch (eventType) {
        case 'stock-value-trigger':
          return handleStockValueTrigger(eventData, res);
      }
      
    });
    
    app.get('/', function (req, res) {
      res.send('Hello World - For Event Triggers, try a POST request?');
    });
    
    app.listen(process.env.PORT, function () {
      console.log("server listening");
    });

    Let’s test this flow by subscribing to a stock with some trigger value and manually inserting a matching data point into the table (for testing)!

    I subscribed to AMZN with a trigger value of 2000 and then inserted a data point with that value into the table. Here’s how the stocks notifier app notified me right after the insertion:
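    For reference, the test row can also be inserted through one of Hasura’s auto-generated mutations. The rough sketch below assumes the stock_data table only requires the symbol and close columns; adjust it to whatever columns your schema marks as non-nullable:

    mutation insertTestDataPoint {
      insert_stock_data_one(object: { symbol: "AMZN", close: 2000 }) {
        id
      }
    }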

    Inserting a row in the stock_data table for testing.

    Neat! You can also check the event invocation log here:

    Event Log.

    The webhook is doing the work as expected! We’re all set for the event triggers now!

    Scheduled/Cron Triggers

    We can achieve a time-based trigger that notifies the subscribed users every hour by setting up a Cron event trigger as follows:

    Cron/Scheduled Trigger setup.

    We can reuse the same webhook URL and handle the subscribed users based on the trigger’s name, stock_price_time_based_trigger in this case. The implementation is similar to that of the event-based trigger; a minimal sketch follows.
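    Assuming the cron trigger’s name reaches the same switch statement in the webhook, it could branch like the sketch below; getAllSubscribedUsers is a hypothetical helper that would fetch every stored subscription rather than filtering by a trigger value:

    const handleTimeBasedTrigger = async (res) => {
      // hypothetical helper: fetch every user with a stored subscription
      const users = await getAllSubscribedUsers();
      users.map((data) =>
        sendWebpush(data.user_subscription.subscription, JSON.stringify({
          title: 'Hourly stock update',
          body: 'Check the latest prices of the stocks you follow.'
        }))
      );
      res.json({ status: 'success' });
    }

    switch (eventType) {
      case 'stock-value-trigger':
        return handleStockValueTrigger(eventData, res);
      case 'stock_price_time_based_trigger':
        return handleTimeBasedTrigger(res);
      default:
        return res.status(400).json({ error: `Unhandled trigger: ${eventType}` });
    }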

    Conclusion

    In this article, we built a stock price notifier application. We learned how to fetch prices using the Alpha Vantage APIs and store the data points in the Hasura-backed Postgres database. We also learned how to set up the Hasura GraphQL engine and create event-based and scheduled triggers, and we built a Glitch project for sending web-push notifications to the subscribed users.


    web design

    Building A Conversational N.L.P Enabled Chatbot Using Google’s Dialogflow — Smashing Magazine

    12/08/2020

    About The Author

    Nwani Victory works as a Frontend Engineer at Liferithms.inc from Lagos, Nigeria. After office hours, he doubles as a Cloud Engineer seeking ways to make Cloud …
    More about
    Nwani

    The 2019 Capgemini Research Institute report, published after a survey on the use of chat assistants, showed a drastic 76% increase in customer satisfaction at organizations where chat assistants were built and incorporated into their services. But how does Dialogflow, a product from Google’s ecosystem, aid developers in building chat assistants and contribute to this trend?

    Ever since ELIZA (the first Natural Language Processing computer program, brought to life by Joseph Weizenbaum in 1964) was created to process user inputs and engage in further discussions based on previous sentences, Natural Language Processing has increasingly been used to extract key data from human interactions. One key application of Natural Language Processing has been the creation of conversational chat and voice assistants, which are used in mobile and web applications to act as customer care agents attending to the virtual needs of customers.

    In 2019, the Capgemini Research Institute released a report after conducting a survey on the impact which chat assistants had on users after being incorporated by organizations within their services. The key findings from this survey showed that many customers were highly satisfied with the level of engagement they got from these chat assistants and that the number of users who were embracing the use of these assistants was fast growing!

    To quickly build a chat assistant, developers and organizations leverage SaaS products running on the cloud, such as Dialogflow from Google, Watson Assistant from IBM, Azure Bot Service from Microsoft, and Lex from Amazon, to design the chat flow and then integrate the natural-language-processing-enabled chatbots these services offer into their own services.

    This article will be beneficial to developers interested in building conversational chat assistants using Dialogflow, as it focuses on Dialogflow itself as a service and on how chat assistants can be built using the Dialogflow console.

    Note: Although the custom webhooks built within this article are well explained, a fair understanding of the JavaScript language is required as the webhooks were written using JavaScript.

    Dialogflow

    Dialogflow is a platform that simplifies the process of creating and designing a natural language processing conversational chat assistant which can accept voice or text data when being used either from the Dialogflow console or from an integrated web application.

    To understand how Dialogflow simplifies the creation of a conversational chat assistant, we will use it to build a customer care agent for a food delivery service and see how the built chat assistant can be used to handle food orders and other requests of the service users.

    Before we begin building, we need to understand some of the key terminologies used on Dialogflow. One of Dialogflow’s aims is to abstract away the complexities of building a Natural Language Processing application and provide a console where users can visually create, design, and train an AI-powered chatbot.

    Dialogflow Terminologies

    Here is a list of the Dialogflow terminologies we will consider in this article in the following order:

    • Agent
      An agent on Dialogflow represents the chatbot created by a user to interact with other end-users and perform data processing operations on the information it receives. Other components come together to form an agent and each time one of these components is updated, the agent is immediately re-trained for the changes to take effect.

      Users who want to create a full-fledged conversational chatbot within the quickest time possible can select one of the prebuilt agents, which can be likened to templates containing the basic intents and responses needed for a conversational assistant.

      Note: A conversational assistant on Dialogflow will from now on be referred to as an “agent”, while anyone other than the author of the assistant who interacts with it will be referred to as an “end-user”.

    • Intent
      Similar to its literal meaning, the intent is the user’s end goal in each sentence when interacting with an agent. For a single agent, multiple intents can be created to handle each sentence within a conversation and they are connected together using Contexts.

      From the intent, an agent is able to understand the end-goal of a sentence. For example, an agent created to process food orders from customers would use its created intents to recognize a customer’s end-goal of placing an order for a meal or of getting recommendations on the available meals from a menu.

    • Entity
      Entities are a means by which Dialogflow processes and extracts specific data from an end-user’s input. An example of this is a Car entity added to an intent. Names of vehicles would be extracted from each sentence input as the Car entity.

      By default, an agent has some System entities which are predefined upon its creation. Dialogflow also gives the option to define custom entities and add the values recognizable within each entity.

    • Training Phrase
      Training phrases are a major way in which an agent is able to recognize the intent of an end-user interacting with it. Having a large number of training phrases within an intent increases the accuracy of the agent in recognizing that intent; in fact, Dialogflow’s documentation on training phrases recommends that “at least 10-20” training phrases be added to a created intent.

      To make training phrases more reusable, Dialogflow gives the ability to annotate specific words within a training phrase. When a word within a phrase is annotated, Dialogflow recognizes it as a placeholder for values that will be provided in an end-user’s input.

    • Context
      Contexts are string names used to control the flow of a conversation with an agent. On each intent, we can add multiple input contexts and also multiple output contexts. When the end-user makes a sentence that is recognized by an intent, the output contexts become active, and one of them is used to match the next intent.

      To understand contexts better, we can picture a context as the security entry and exit doors of a building, and the intent as the building itself. The input context is the entry door: it only admits visitors that have been listed on the intent. The exit door is what connects the visitors to another building, which is the next intent.

    • Knowledge base
      A knowledge base represents a large pool of information where an agent can fetch data when responding to an intent. This could be a document in any format such as txt, pdf, csv among other supported document types. In machine learning, a knowledge base could be referred to as a training dataset.

      An example scenario where an agent might refer to a knowledge base would be where an agent is being used to find out more details about a service or business. In this scenario, an agent can refer to the service’s Frequently Asked Questions as its knowledge base.

    • Fulfillment
      Dialogflow’s Fulfillment enables an agent to give a more dynamic response to a recognized intent rather than a static created response. This could be by calling a defined service to perform an action such as creating or retrieving data from a database.

      An intent’s fulfillment is achieved through the use of a webhook. Once enabled, a matched intent makes an API request to the webhook configured for the Dialogflow agent.

    Now that we have an understanding of the terminologies used with Dialogflow, we can move on to using the Dialogflow console to create and train our first agent for a hypothetical food service.

    Using The Dialogflow Console

    Note: Using the Dialogflow console requires that a Google account and a project on the Google Cloud Platform are created. If unavailable, a user would be prompted to sign in and create a project on first use.

    The Dialogflow console is where the agent is created, designed, and trained before integrating with other services. Dialogflow also provides REST API endpoints for users who do not want to make use of the console when building with Dialogflow.

    While we go through the console, we will gradually build out the agent which would act as a customer care agent for a food delivery service having the ability to list available meals, accept a new order and give information about a requested meal.

    The agent we’ll be building will have the conversation flow shown in the flow chart diagram below where a user can purchase a meal or get the list of available meals and then purchase one of the meals shown.

    A diagram of the conversation flow of the proposed agent to be built.

    Creating A New Agent

    Within every newly created project, Dialogflow prompts the first-time user to create an agent, which takes the following fields:

    • A name to identify the agent.
    • A language which the agent would respond in. If none is provided, the default of English is used.
    • A project on the Google Cloud to associate the agent with.

    Immediately after we click on the create button after adding the values of the fields above, a new agent is saved and the intents tab is shown, with the Default Fallback intent and the Default Welcome intent as the only two available intents, which are created by default with every agent on Dialogflow.

    The intents tab with the two default created intents.

    Exploring the Default Fallback intent, we can see it has no training phrases but has sentences such as “Sorry, could you say that again?”, “What was that?”, and “Say that one more time?” as responses to indicate that the agent was not able to recognize a sentence made by an end-user. During all conversations with the agent, these responses are only used when the agent cannot recognize a sentence typed or spoken by a user.

    The Default Fallback intent page with the responses listed out.

    While the sentences above are sufficient for indicating that the agent did not understand the last typed sentence, we would like to aid end-users by giving them a little more information hinting at what the agent can recognize. To do this, we replace all the listed sentences above with the following ones and click the Save button for the agent to be retrained.

    I didn't get that. I am Zara and I can assist you in purchasing or learning more about the meals from Dialogflow-food-delivery service. What would you like me to do?
    
    I missed what you said. I'm Zara here and I can assist you in purchasing or learning more about the meals from Dialogflow-food-delivery service. What would you like me to do?
    
    Sorry, I didn't get that. Can you rephrase it?  I'm Zara by the way and I can assist you in purchasing or learning more about the meals from Dialogflow-food-delivery service.
    
    Hey, I missed that. I'm Zara and I can assist you in purchasing or learning more about the meals from Dialogflow-food-delivery service.  What would you like me to do?

    Each of the four sentences above tells the end-user that the agent could not recognize the last sentence, and also gives a piece of information about what the agent can do, thus hinting at what to type next in order to continue the conversation.

    Moving next to the Default Welcome Intent, the first section on the intent page is the Context section, and expanding it we can see both the input and output contexts are blank. From the conversation flow of the agent shown previously, we want an end-user to either place a meal order or request a list of all available meals. This requires the following two new output contexts; each would become active when this intent is matched:

    • awaiting_order_request
      This would be used to match the intent handling order requests when an end-user wants to place an order for a meal.

    • awaiting_info_request
      This would be used to match the intent that retrieves data of all the meals when an end-user wants to know the available meals.

    After the context section is the intent’s Events section, and we can see it has the Welcome event type added to its list of events, indicating that this intent will be used first when the agent is loaded.

    Coming next are the Training Phrases for the intent. Due to being created by default, it already has 16 phrases that an end-user would likely type or say when they interact with the agent for the first time.

    The Default Welcome intent page with the default training phrases listed.

    When an end-user types or makes a sentence similar to those listed in the training phrases above, the agent would respond using a picked response from the Responses list section shown below:

    A list of generated responses within the Default Welcome intent.

    Each of the responses above is automatically generated for every agent on Dialogflow. Although they are grammatically correct, we would not use them for our food agent. Being a default intent that welcomes an end-user to our agent, its response should say which organization the agent belongs to and also list its functionalities in a single sentence.

    We would delete all the responses above and replace them with the ones below to better help inform an end-user on what to do next with the agent.

    1.  Hello there, I am Zara and I am here to assist you to purchase or learn about the meals from the Dialogflow-food-delivery service. What would you like me to do?    
    
    2. Hi, I am Zara and I can assist you in purchasing or learning more about the meals from the Dialogflow-food-delivery service. What would you like me to do?

    From the two responses above, we can see that each tells an end-user the name of the bot and the two things the agent can do, and lastly pokes the end-user to take further action. Taking further action from this intent means we need to connect the Default Welcome Intent to another one. This is possible on Dialogflow using contexts.

    When we add and save those two phrases above, Dialogflow immediately re-trains the agent so it can respond using any one of them.

    Next, we move on to create two more intents to handle the functionalities we added in the two responses above: one to purchase a food item, and a second to get more information about meals from our food service.

    Creating list-meals intent:

    Clicking the + ( add ) icon from the left navigation menu would navigate to the page for creating new intents and we name this intent list-available-meals.

    From there we add an output context with the name awaiting_order_request. This output context would be used to link this intent to the next one, where the end-user orders a meal, as we expect them to place an order for a meal after getting the list of available meals.

    Moving on to the Training Phrases section of the intent page, we add the following phrases an end-user might provide in order to find out which meals are available:

    Hey, I would like to know the meals available.
    What items are on your menu?
    Are there any available meals?
    I would like to know more about the meals you offer.

    Next, we add just the single fallback response below to the Responses section:

    Hi there, the list of our meals is currently unavailable. Please check back in a few minutes as the items on the list are regularly updated.

    The response above indicates that the meals list is unavailable or that an error has occurred somewhere. This is because it is a fallback response and would only be used when an error occurs in fetching the meals. The main response would come as a fulfillment, using the webhooks option which we will set up next.

    The last section on this intent page is the Fulfillment section; it is used to provide the agent with data from an externally deployed API or source, to be used as a response. To use it, we enable the Webhook call option in the Fulfillment section and set up the fulfillment for this agent from the fulfillment tab.

    Managing Fulfillment:

    From the Fulfillment tab on the console, a developer has the option of using a webhook which gives the ability to use any deployed API through its endpoint or use the Inline Code editor to create a serverless application to be deployed as a cloud function on the Google Cloud. If you would like to know more about serverless applications, this article provides an excellent guide on getting started with serverless applications.

    The fulfillment tab for a created agent on Dialogflow.

    Each time an end-user interacts with the agent and the intent is matched, a POST request would be made to the endpoint. Among the various object fields in the request body, only one is of concern to us, i.e. the queryResult object as shown below:

    {
      "queryResult": {
        "queryText": "End-user expression",
        "parameters": {
          "param-name": "param-value"
        }
      }
    }

    While there are other fields in the queryResult object, such as the active contexts, the parameters object is the most important to us, as it holds the parameter extracted from the user’s text. This parameter would be the meal a user is requesting, and we would use it to query the food delivery service’s database.
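    As an illustration, in an Express-style webhook handler that parameter can be read straight off the request body; a small sketch (the food parameter name is the one we define later in the request-meal intent):

    app.post('/webhook', (req, res) => {
      // Dialogflow sends the matched intent's parameters inside queryResult
      const { parameters } = req.body.queryResult;
      const requestedMeal = parameters.food;
      // ...query the Meals collection using requestedMeal...
    });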

    When we are done setting up the fulfillment, our agent would have the following structure and flow of data to it:

    The diagram showing the flow for the food delivery agent.

    From the diagram above, we can observe that the cloud function acts as a middleman in the entire structure. The Dialogflow agent sends the parameter extracted from an end user’s text to the cloud function in a request payload and the cloud function, in turn, queries the database for the document using the received name and sends back the queried data in a response payload to the agent.

    To start implementing the design above, we begin by creating the cloud function locally on a development machine and connecting it to our Dialogflow agent using the custom webhook option. After it has been tested, we can switch to using the inline editor in the fulfillment tab to create and deploy a cloud function instead. We begin this process by running the following commands from the command line:

    # Create a new project and ( && ) move into it.
    mkdir dialogflow-food-agent-server && cd dialogflow-food-agent-server
    
    # Create a new Node project
    yarn init -y
    
    # Install needed packages
    yarn add mongodb @google-cloud/functions-framework dotenv

    After installing the needed packages, we modify the generated package.json file to include two new objects which enable us to run a cloud function locally using the Functions Framework.

    // package.json
    {
      "main": "index.js",
      "scripts": {
        "start": "functions-framework --target=foodFunction --port=8000"
      }
    }
    

    The start command in the scripts above tells the Functions Framework to run the foodFunction in the index.js file and makes it listen and serve connections on port 8000 of our localhost.

    Next is the content of the index.js file which holds the function; we’ll make use of the code below since it connects to a MongoDB database and queries the data using the parameter passed in by the Dialogflow agent.

    require("dotenv").config();
    
    exports.foodFunction = async (req, res) => {
      const { MongoClient } = require("mongodb");
      const CONNECTION_URI = process.env.MONGODB_URI;
    
      // initate a connection to the deployed mongodb cluster
      const client = new MongoClient(CONNECTION_URI, {
        useNewUrlParser: true,
      });
    
      client.connect((err) => {
        if (err) {
          res
            .status(500)
            .send({ status: "MONGODB CONNECTION REFUSED", error: err });
        }
        const collection = client.db(process.env.DATABASE_NAME).collection("Meals");
        const result = [];
        const data = collection.find({});
        const meals = [
          {
            text: {
              text: [
                `We currently have the following 20 meals on our menu list. Which would you like to request for?`,
              ],
            },
          },
        ];
        result.push(
          data.forEach((item) => {
            const { name, description, price, image_uri } = item;
            const card = {
              card: {
                title: `${name} at $${price}`,
                subtitle: description,
                imageUri: image_uri,
              },
            };
            meals.push(card);
          })
        );
    
        Promise.all(result)
          .then((_) => {
            const response = {
              fulfillmentMessages: meals,
            };
            res.status(200).json(response);
          })
          .catch((e) => res.status(400).send({ error: e }));
        client.close();
      });
    };
    

    From the code snippet above we can see that our cloud function is pulling data from a MongoDB database, but let’s gradually step through the operations involved in pulling and returning this data.

    • First, the cloud function initiates a connection to a MongoDB Atlas cluster, then it opens the collection storing the meal category documents within the database being used for the food-service on the cluster.

    • Next, we run a find method on the collection, which returns a cursor that we further iterate on to get all the MongoDB documents within the collection containing the data.

    • We model the data returned from MongoDB into Dialogflow’s Rich response message object structure which displays each of the meal items to the end-user as a card with an image, title, and a description.

    • Finally, we send back the entire data to the agent after the iteration in a JSON body and end the function’s execution with a 200 status code.

    Note: The Dialogflow agent waits for a response for a frame of 5 seconds after a request has been sent. This waiting period is when the loading indicator is shown on the console; if it elapses without a response from the webhook, the agent defaults to using one of the responses added on the intent page and returns a DEADLINE EXCEEDED error. This limitation is worth taking note of when designing the operations to be executed from a webhook. The API error retries section within the Dialogflow best practices contains steps on how to implement a retry system.
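    One defensive pattern worth sketching here (an illustration, not Dialogflow’s official retry mechanism) is to race slow operations against a deadline slightly under the 5-second limit, so the webhook can still answer with a fallback in time:

    // reject if the wrapped promise takes longer than `ms` milliseconds
    const withDeadline = (promise, ms = 4500) =>
      Promise.race([
        promise,
        new Promise((_, reject) =>
          setTimeout(() => reject(new Error('webhook deadline exceeded')), ms)
        ),
      ]);

    // usage sketch: await withDeadline(collection.findOne({ name: food }))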

    Now, the last thing needed is a .env file created in the project directory with the following fields to store the environment variables used in the index.js.

    #.env
    MONGODB_URI = "MONGODB CONNECTION STRING"
    DATABASE_NAME = ""

    At this point, we can start the function locally by running yarn start from the command line in the project’s directory. For now, we still cannot make use of the running function, as Dialogflow only supports secure connections with an SSL certificate, and this is where Ngrok comes into the picture.

    Using Ngrok, we can create a tunnel to expose the localhost port running the cloud function to the internet, with an SSL certificate attached to the secured connection, using the command below from a new terminal:

    ngrok http -bind-tls=true 8000

    This would start the tunnel and generate a forwarding URL which would be used as an endpoint to the function running on a local machine.

    Note: The extra -bind-tls=true argument is what instructs Ngrok to create a secured tunnel rather than the unsecured connection which it creates by default.

    Now, we can copy the URL string opposite the forwarding text in the terminal, paste it into the URL input field found in the Webhook section, and save it.

    To test all that has been done so far, we would make a sentence to the Dialogflow agent requesting the list of meals available using the Input field at the top right section in the Dialogflow console and watch how it waits for and uses a response sent from the running function.

    A test of the created list-meals intent and its returned data result.

    Starting from the terminal placed at the center of the image above, we can see the series of POST requests made to the function running locally, and on the right-hand side, the data response from the function formatted into cards.

    If for any reason a webhook request becomes unsuccessful, Dialogflow would resolve the error by using one of the listed responses. However, we can find out why the request failed by using the Diagnostic Info tool updated in each conversation. Within it are the Raw API response, Fulfillment request, Fulfillment response, and Fulfillment status tabs containing JSON formatted data. Selecting the Fulfillment response tab we can see the response from the webhook which is the cloud function running on our local machine.

    The Diagnostics info modal with the Fulfillment response tab active, showing the webhook response in JSON format.

    At this point, we expect a user to continue the conversation with an order of one of the listed meals. We create the last intent for this demo next to handle meal orders.

    Creating Request-meal Intent:

    Following the same steps used while creating the first intent, we create a new intent using the console and name it request-meal, and we add an input context of awaiting_order_request to connect this intent to either the Default Welcome intent or the list-available-meals intent.

    Within the training phrases section, we make use of the following phrases:

    Hi there, I'm famished, can I get some food?
    
    Yo, I want to place an order for some food. 
    
    I need to get some food now.
    
    Dude, I would like to purchase $40 worth of food.
    
    Hey, can I get 2 plates of food?

    Reading through the phrases above, we can observe that they all indicate one thing: the user wants food. In all of the phrases listed above, the name or type of food is not specified; rather, it is simply referred to as food. This is because we want the food to be a dynamic value; if we were to list all the food names, we would certainly need a very large list of training phrases. This also applies to the amount and price of the food being ordered: they would be annotated, and the agent would be able to recognize them as placeholders for the actual values within an input.

    To make a value within a phrase dynamic, Dialogflow provides entities. Entities represent common types of data, and in this intent we use entities to match the food type, the price amount, and the quantity within an end-user’s sentence.

    From the training phrases above, Dialogflow would recognize $40 as @sys.unit-currency, which is under the amounts-with-units category of the system entities list, and 2 as @number, under the number category of the system entities list. However, food is not a recognized system entity. In a case such as this, Dialogflow gives developers the option to create a custom entity to be used.
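    To make that extraction concrete, here is a rough sketch of what the parameters object inside queryResult could contain once these entities are matched in a sentence like “Hey, can I get 2 plates of Pancakes?” (the exact field names depend on the parameter names chosen on the intent page, so treat these as assumptions):

    {
      "parameters": {
        "food": "Pancakes",
        "number": 2
      }
    }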

    Managing Entities

    Double-clicking on food pops up the entities dropdown menu. At the bottom of the items in the dropdown, we find the Create new entity button; clicking it navigates to the Entities tab on the Dialogflow console, where we can manage all entities for the agent.

    Once at the Entities tab, we name this new entity food. Then, from the options dropdown located in the top navigation bar beside the Save button, we can switch the entities input to a raw edit mode. Doing this enables us to add several entity values in either a JSON or CSV format, rather than having to add the entity values one after the other.

    After the edit mode has been changed, we would copy the sample JSON data below into the editor box.

    // foods.json
    
    [
        {
            "value": "Fries",
            "synonyms": [
                "Fries",
                "Fried",
                "Fried food"
            ]
        },
     {
            "value": "Shredded Beef",
            "synonyms": [
                "Shredded Beef",
                "Beef",
                "Shredded Meat"
            ]
        },
        {
            "value": "Shredded Chicken",
            "synonyms": [
                "Shredded Chicken",
                "Chicken",
                "Pieced Chicken"
            ]
        },
    
        {
            "value": "Sweet Sour Sauce",
            "synonyms": [
                "Sweet Sour Sauce",
                "Sweet Sour",
                "Sauce"
            ]
        },
        {
            "value": "Spring Onion",
            "synonyms": [
                "Spring Onion",
                "Onion",
                "Spring"
            ]
        },
        {
            "value": "Toast",
            "synonyms": [
                "Toast",
                "Toast Bread",
                "Toast Meal"
            ]
        },
        {
            "value": "Sandwich",
            "synonyms": [
                "Sandwich",
                "Sandwich Bread",
                "Sandwich Meal"
            ]
        },
        {
            "value": "Eggs Sausage Wrap",
            "synonyms": [
                "Eggs Sausage Wrap",
                "Eggs Sausage",
                "Sausage Wrap",
                "Eggs"
            ]
        },
        {
            "value": "Pancakes",
            "synonyms": [
                "Pancakes",
                "Eggs Pancakes",
                "Sausage Pancakes"
            ]
        },
        {
            "value": "Cashew Nuts",
            "synonyms": [
                "Cashew Nuts",
                "Nuts",
                "Sausage Cashew"
            ]
        },
        {
            "value": "Sweet Veggies",
            "synonyms": [
                "Sweet Veggies",
                "Veggies",
                "Sweet Vegetables"
            ]
        },
        {
            "value": "Chicken Salad",
            "synonyms": [
                "Chicken Salad",
                "Salad",
                "Sweet Chicken Salad"
            ]
        },
        {
            "value": "Crunchy Chicken",
            "synonyms": [
                "Crunchy Chicken",
                "Chicken",
                "Crunchy Chickens"
            ]
        },
        {
            "value": "Apple Red Kidney Beans",
            "synonyms": [
                "Apple Red Kidney Beans",
                "Sweet Apple Red Kidney Beans",
                "Apple Beans Combination"
            ]
        }
    ]

    From the JSON-formatted data above, we have 14 meal examples. Each object in the array has a “value” key, which is the name of the meal, and a “synonyms” key containing an array of names very similar to the object’s value.

    After pasting the JSON data above, we also check the Fuzzy Matching checkbox, as it enables the agent to recognize the annotated value in an intent even when it is incomplete or slightly misspelled in the end-user’s text.

    JSON data values added to the newly created food entity in raw editor mode.

    After saving the entity values above, the agent is immediately re-trained using the new values added here, and once the training is completed, we can test it by typing a sentence into the input field in the right section.

    Responses within this intent would be fetched from our previously created function using the intent’s fulfillment webhook; however, we add the following response to serve as a fallback whenever the webhook is not executed successfully.

    I currently can't find your requested meal. Would you like to place an order for another meal?

    We would also modify the code of the existing cloud function to fetch a single requested meal, as it now handles requests from two intents.

    require("dotenv").config();
    
    exports.foodFunction = async (req, res) => {
      const { MongoClient } = require("mongodb");
      const CONNECTION_URI = process.env.MONGODB_URI;
    
      const client = new MongoClient(CONNECTION_URI, {
        useNewUrlParser: true,
      });
    
      // initate a connection to the deployed mongodb cluster
      client.connect((err) => {
        if (err) {
          res
            .status(500)
            .send({ status: "MONGODB CONNECTION REFUSED", error: err });
        }
    
        const collection = client.db(process.env.DATABASE_NAME).collection("Meals");
        const { displayName } = req.body.queryResult.intent;
        const result = [];
    
        switch (displayName) {
          case "list-available-meals":
            const data = collection.find({});
            const meals = [
              {
                text: {
                  text: [
                    `We currently have the following 20 meals on our menu list. Which would you like to request for?`,
                  ],
                },
              },
            ];
            result.push(
              data.forEach((item) => {
                const {
                  name,
                  description,
                  price,
                  availableUnits,
                  image_uri,
                } = item;
                const card = {
                  card: {
                    title: `${name} at $${price}`,
                    subtitle: description,
                    imageUri: image_uri,
                  },
                };
                meals.push(card);
              })
            );
            return Promise.all(result)
              .then((_) => {
                const response = {
                  fulfillmentMessages: meals,
                };
                res.status(200).json(response);
              })
              .catch((e) => res.status(400).send({ error: e }));
    
          case "request-meal":
            const { food } = req.body.queryResult.parameters;
    
            collection.findOne({ name: food }, (err, data) => {
              if (err) {
                res.status(400).send({ error: err });
              }
              const { name, price, description, image_uri } = data;
              const singleCard = [
                {
                  text: {
                    text: [`The ${name} is currently priced at $${price}.`],
                  },
                },
                {
                  card: {
                    title: `${name} at $${price}`,
                    subtitle: description,
                    imageUri: image_uri,
                    buttons: [
                      {
                        text: "Pay For Meal",
                        postback: "htts://google.com",
                      },
                    ],
                  },
                },
              ];
              res.status(200).json(singleCard);
    
          default:
            break;
        }
    
        client.close();
      });
    };
    

    From the code above, we can see the following new use cases that the function has now been modified to handle:

    • Multiple intents
      The cloud function now uses a switch statement, with the intent’s name used for the cases. In each request payload made to a webhook, Dialogflow includes details about the intent making the request; this is where the intent name is pulled from to match the cases within the switch statement.
    • Fetch a single meal
      The Meals collection is now queried using the value extracted as a parameter from the user’s input.
    • A call-to-action button is now added to the card, which a user can click to pay for the requested meal; clicking it opens a tab in the browser. In a functioning chat assistant, this button’s postback URL should point to a checkout page, probably using a configured third-party service such as Stripe Checkout.

    To test this function again, we restart it from the terminal by running yarn start, so that the new changes in the index.js file take effect.

    Note: You don’t have to restart the terminal running the Ngrok tunnel for the new changes to take place. Ngrok would still forward requests to the updated function when the webhook is called.

    Making a test sentence to the agent from the Dialogflow console to order a specific meal, we can see the request-meal case within the cloud function being used, and a single card returned as a response to be displayed.

    A meal card from testing the request-meal intent using the Dialogflow console emulator.

    At this point, we can be assured that the cloud function works as expected. We can now move forward to deploy the local function to the Google Cloud Functions using the following command;

    gcloud functions deploy "foodFunction" --runtime nodejs10 --trigger-http --entry-point=foodFunction --set-env-vars=[MONGODB_URI="MONGODB_CONNECTION_URL", DATABASE_NAME="DATABASE_NAME"] --allow-unauthenticated

    The command above deploys the function to the Google Cloud with the flags explained below attached to it, and logs the generated URL endpoint of the deployed cloud function to the terminal. We’ll use that endpoint to sanity-check the deployment right after the list below.

    • NAME
      This is the name given to a cloud function when deploying it, and it is required. In our use case, the name of the cloud function when deployed would be foodFunction.

    • trigger-http
      This selects HTTP as the function’s trigger type. Cloud functions with an HTTP trigger would be invoked using their generated URL endpoint. The generated URLs are secured and use the https protocol.

    • entry-point
      This is the specific exported module to be deployed from the file where the functions are written.

    • set-env-vars
      These are the environment variables available to the cloud function at runtime. In our cloud function, we only access our MONGODB_URI and DATABASE_NAME values from the environment variables.

      The MongoDB connection string is obtained from a created MongoDB cluster on Atlas. If you need some help creating a cluster, the MongoDB Getting Started section provides great help.

    • allow-unauthenticated
      This allows the function to be invoked outside the Google Cloud, through the Internet, using its generated endpoint, without checking whether the caller is authenticated.
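    Once deployed, we can sanity-check the endpoint by sending it a request shaped like the ones Dialogflow makes; the URL below is a placeholder for the endpoint printed to the terminal after deployment:

    curl -X POST "https://REGION-PROJECT_ID.cloudfunctions.net/foodFunction" \
      -H "Content-Type: application/json" \
      -d '{"queryResult": {"intent": {"displayName": "request-meal"}, "parameters": {"food": "Pancakes"}}}'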

    Dialogflow Integrations

    Dialogflow gives developers the ability to integrate a built agent into several conversational platforms, including social media platforms such as Facebook Messenger, Slack, and Telegram. Aside from the two integrations we use for our built agent below, the Dialogflow documentation lists the available types of integrations and the platforms within each integration type.

    Integrating With Google Actions

    Being a product from Google’s ecosystem, agents on Dialogflow integrate seamlessly with Google Assistant in very few steps. From the Integrations tab, Google Assistant is displayed as the primary integration option for a Dialogflow agent. Clicking the Google Assistant option opens the Assistant modal, from which we click on the test app option. From there, the Actions console opens with the agent from Dialogflow launched in a test mode, ready for testing using either the voice or text input option.

    Using the Google Assistant integration to test the Dialogflow agent from the Actions console in test mode.

    Integrating a Dialogflow agent with the Google Assistant is a huge way to make the agent accessible to millions of Google users from their smartphones, watches, laptops, and several other connected devices. To publish the agent to the Google Assistant, the developer docs provide a detailed explanation of the deployment process.

    Integrating With A Web Demo

    The Web Demo, which is located in the Text-based section of the Integrations tab in the Dialogflow console, allows for the use of the built agent in a web application via an iframe window. Selecting the Web Demo option generates a URL to a page with a chat window that simulates a real-world chat application.

    Note: Dialogflow’s web demo only supports text responses and does not support the display of rich messages and images. This is worth noting when using a webhook that responds with data in the rich response format.

    Conclusion

    From several surveys, we can see the effect chat assistants have on customer satisfaction when incorporated by organizations into their services. These positive metrics are expected to grow in the coming years, placing greater importance on the use of chat assistants.

    In this article, we learned about Dialogflow and how it provides a platform for organizations and developers to build Natural Language Processing conversational chat assistants for use in their services. We also learned its terminologies and how they apply by building a demo chat assistant using the Dialogflow console.

    If a chat assistant is being built to be used at a production level, it is highly recommended that the developer(s) go through the Dialogflow best practices section of the documentation as it contains standard design guidelines and solutions to common pitfalls encountered while building a chat assistant.

    The source code to the JavaScript webhook built within this article has been pushed to GitHub and can be accessed from this repository.


    web design

    Building Serverless Frontend Applications Using Google Cloud Platform — Smashing Magazine

    11/06/2020

    About The Author

    Nwani Victory works as a Frontend Engineer at Liferithms.inc from Lagos, Nigeria. After office hours, he doubles as a Cloud Engineer seeking ways to make Cloud …
    More about
    Nwani

    The use of serverless applications by developers to handle the business logic of their applications is on the rise, but how does the Google Cloud, a major service provider within the public cloud, allow developers to manage serverless applications? In this article, you will learn what serverless applications are, how they are used on the Google Cloud, and the scenarios in which they can be used in a front-end application.

    Recently, the development paradigm of applications has begun to shift from manually having to deploy, scale and update the resources used within an application to relying on third-party cloud service providers to do most of the management of these resources.

    As a developer or an organization that wants to build a market-fit application within the quickest time possible, your main focus might be on delivering your core application service to your users while you spend a smaller amount of time on configuring, deploying, and stress testing your application. If this is your use case, handling the business logic of your application in a serverless manner might be your best option. But how?

    This article is beneficial to front-end engineers who want to build certain functionalities within their application or back-end engineers who want to extract and handle a certain functionality from an existing back-end service using a serverless application deployed to the Google Cloud Platform.

    Note: To benefit from what will be covered here, you need to have experience working with React. No prior experience in serverless applications is required.

    Before we begin, let’s understand what serverless applications really are and how the serverless architecture can be used when building an application within the context of a frontend engineer.

    Serverless Applications

    Serverless applications are applications broken down into tiny reusable event-driven functions, hosted and managed by third-party cloud service providers within the public cloud on behalf of the application author. These are triggered by certain events and are executed on demand. Although the “less” suffix attached to the serverless word indicates the absence of a server, this is not 100% the case. These applications still run on servers and other hardware resources, but in this case, those resources are not provisioned by the developer but rather by a third-party cloud service provider. So they are server-less to the application author but still run on servers and are accessible over the public internet.

    An example use case of a serverless application would be sending emails to potential users who visit your landing page and subscribe to receiving product launch emails. At this stage, you probably don’t have a back-end service running and would not want to sacrifice the time and resources needed to create, deploy, and manage one, all because you need to send emails. Here, you can write a single file that uses an email client, deploy it to any cloud provider that supports serverless applications, and let that provider manage the application on your behalf while you connect it to your landing page.

    While there are a ton of reasons why you might consider leveraging serverless applications, or Functions As A Service (FaaS) as they are called, for your frontend application, here are some very notable ones:

    • Application auto scaling
      Serverless applications are scaled horizontally, and this “scaling out” is done automatically by the cloud provider based on the number of invocations, so the developer doesn’t have to manually add or remove resources when the application is under heavy load.
    • Cost effectiveness
      Being event-driven, serverless applications run only when needed, and this reflects on the charges, as they are billed based on the number of times they are invoked.
    • Flexibility
      Serverless applications are built to be highly reusable, which means they are not bound to a single project or application. A particular functionality can be extracted into a serverless application, deployed, and used across multiple projects or applications. Serverless applications can also be written in the preferred language of the application author, although some cloud providers only support a smaller number of languages.

    When making use of serverless applications, every developer has a vast array of cloud providers within the public cloud to choose from. Within the context of this article, we will focus on serverless applications on the Google Cloud Platform: how they are created, managed, and deployed, and how they integrate with other products on the Google Cloud. To do this, we will add new functionalities to this existing React application while working through the process of:

    • Organizing application workflows using the Google Cloud.
    • Storing and retrieving users’ data on the cloud.
    • Creating and managing cron jobs on the Google Cloud.
    • Deploying Cloud Functions to the Google Cloud.

    Note: Serverless applications are not bound to React only, as long as your preferred front-end framework or library can make an HTTP request, it can use a serverless application.

    Google Cloud Functions

    The Google Cloud allows developers to create serverless applications using Cloud Functions and runs them using the Functions Framework. As they are called, cloud functions are reusable, event-driven functions deployed to the Google Cloud to listen for one of the six available event triggers and then perform the operation they were written to execute.

    Cloud functions, which are short-lived (with a default execution timeout of 60 seconds and a maximum of 9 minutes), can be written using JavaScript, Python, Golang, and Java and executed using the corresponding runtime. In JavaScript, they can only be executed using certain available versions of the Node runtime, and they are written in the form of CommonJS modules using plain JavaScript, as they are exported as the primary function to be run on the Google Cloud.

    An example of a cloud function is the one below, which is an empty boilerplate for a function to handle a user’s data.

    // index.js
    
    exports.firestoreFunction = function (req, res) {
      return res.status(200).send({ data: `Hello ${req.query.name}` });
    }

    Above, we have a module which exports a function. When executed, it receives the request and response arguments, similar to an HTTP route handler.

    Note: A cloud function responds to every HTTP method when a request is made. This is worth noting when expecting data in the request argument, as the data attached when making a request to execute a cloud function would be present in the request body for POST requests, but in the query parameters for GET requests.
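    As a small sketch building on the boilerplate above, a function can read its input regardless of the HTTP method used:

    exports.firestoreFunction = function (req, res) {
      // POST payloads arrive in the request body; GET payloads in the query string
      const name = req.method === 'POST' ? req.body.name : req.query.name;
      return res.status(200).send({ data: `Hello ${name}` });
    };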

    Cloud functions can be executed locally during development by installing the @google-cloud/functions-framework package within the same folder where the written function is placed, or by doing a global installation to use it for multiple functions by running npm i -g @google-cloud/functions-framework from your command line. Once installed, it should be added to a package.json script with the name of the exported module, similar to the one below:

    
    "scripts": {                                                                
         "start": "functions-framework --target=firestoreFunction --port=8000",       
      }

    Above we have a single command within our scripts in the package.json file which runs the functions-framework and also specifies the firestoreFunction as the target function to be run locally on port 8000.

    We can test this function’s endpoint by making a GET request to port 8000 on localhost using curl. Pasting the command below in a terminal will do that and return a response.

    curl "http://localhost:8000/?name=Smashing%20Magazine%20Author"

    When executed, the request above makes a request with the GET HTTP method and responds with a 200 status code and an object containing the name added in the query.

    Deploying A Cloud Function

    Out of the available deployment methods, one quick way to deploy a cloud function from a local machine is to use the Cloud SDK after installing it. Running the command below from the terminal, after authenticating the gcloud SDK with your project on the Google Cloud, deploys a locally created function to the Cloud Functions service.

    gcloud functions deploy "demo-function" --runtime nodejs10 --trigger-http --entry-point=demo --timeout=60 --set-env-vars=[name="Developer"] --allow-unauthenticated

    Using the flags explained below, the command above deploys an HTTP-triggered function to the Google Cloud with the name “demo-function”.

    • NAME
      This is the name given to a cloud function when deploying it and is required.
    • region
      This is the region where the cloud function is to be deployed to. By default, it is deployed to us-central1.
    • trigger-http
      This selects HTTP as the function’s trigger type.
    • allow-unauthenticated
      This allows the function to be invoked outside the Google Cloud through the Internet using its generated endpoint without checking if the caller is authenticated.
    • source
      Local path from the terminal to the file which contains the function to be deployed.
    • entry-point
      This is the specific exported module to be deployed from the file where the functions are written.
    • runtime
      This is the language runtime to be used for the function among this list of accepted runtime.
    • timeout
      This is the maximum time a function can run before timing out. It is 60 seconds by default and can be set to a maximum of 9 minutes.

    Note: Making a function allow unauthenticated requests means that anybody with your function’s endpoint can make requests without your granting it. To mitigate this, we can make sure the endpoint stays private by using it through environment variables, or by requiring authorization headers on each request.
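    For instance, here is a minimal sketch of the second option, where FUNCTION_SECRET is a hypothetical environment variable shared only with trusted callers:

    exports.demo = function (req, res) {
      // reject callers that don't present the shared secret
      if (req.headers.authorization !== `Bearer ${process.env.FUNCTION_SECRET}`) {
        return res.status(401).send({ error: 'Unauthorized' });
      }
      return res.status(200).send({ data: `Hello ${req.query.name}` });
    };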

    Now that our demo-function has been deployed and we have the endpoint, we can test it as if it were being used in a real-world application, using a global installation of autocannon. Running autocannon -d=5 -c=300 CLOUD_FUNCTION_URL from an open terminal would generate 300 concurrent requests to the cloud function over a duration of 5 seconds. This is more than enough to start the cloud function and also generate some metrics that we can explore on the function’s dashboard.

    Note: A function’s endpoint is printed out in the terminal after deployment. If that is not the case, run gcloud functions describe FUNCTION_NAME from the terminal to get the details about the deployed function, including the endpoint.

    Using the metrics tab on the dashboard, we can see a visual representation of the last requests, consisting of how many invocations were made, how long they lasted, the memory footprint of the function, and how many instances were spun up to handle the requests made.

    Cloud function dashboard showing a chart of gathered metrics from all recent requests made.

    A closer look at the Active Instances chart within the image above shows the horizontal scaling capacity of the Cloud Functions, as we can see that 209 instances were spun up within a few seconds to handle the requests made using autocannon.

    Cloud Function Logs

    Every function deployed to the Google Cloud has a log, and each time the function is executed, a new entry is made into that log. From the Log tab on the function’s dashboard, we can see a list of all the log entries from a cloud function.

    Below are the log entries from our deployed demo-function created as a result of the requests we made using autocannon.

    The cloud function log showing the logs from the function’s execution times.
    Cloud function log tab showing all execution logs. (Large preview)

    Each of the log entries above shows exactly when a function was executed, how long the execution took and what status code it ended with. If there are any errors resulting from a function, details of the error, including the line where it occurred, are shown in the logs here.

    The Logs Explorer on the Google Cloud can be used to see more comprehensive details about the logs from a cloud function.

    Cloud Functions With Front-end Applications

    Cloud functions are very useful and powerful for frontend engineers. A frontend engineer without any knowledge of managing back-end applications can extract a piece of functionality into a cloud function, deploy it to the Google Cloud, and use it in a frontend application by making HTTP requests to the cloud function through its endpoint.

    To show how cloud functions can be used in a frontend application, we will add more features to this React application. The application already has basic routing set up between the authentication and home pages. We will expand it to use the React Context API to manage our application state, as the created cloud functions will be used within the application’s reducers.

    To get started, we create our application’s context using the createContext API and also create a reducer for handling the actions within our application.

    // state/index.js
    import { createContext } from "react";

    export const UserReducer = (state, action) => {
      switch (action.type) {
        case "CREATE-USER":
          break;
        case "UPLOAD-USER-IMAGE":
          break;
        case "FETCH-DATA":
          break;
        case "LOGOUT":
          break;
        default:
          console.log(`${action.type} is not recognized`);
      }
    };

    export const userState = { user: null, isLoggedIn: false };

    export const UserContext = createContext(userState);

    Above, we started by creating a UserReducer function which contains a switch statement, allowing it to perform an operation based on the type of action dispatched into it. The switch statement has four cases, and these are the actions we will be handling. For now they don’t do anything yet, but as we begin integrating with our cloud functions, we will incrementally implement the actions to be performed in them.

    We also created and exported our application’s context using the React createContext API and gave it a default value of the userState object, which contains a user value that will be updated from null to the user’s data after authentication, and an isLoggedIn boolean value indicating whether the user is logged in.

    Now we can proceed to consume our context. But before we do that, we need to wrap our entire application tree with the Provider attached to the UserContext, for the child components to be able to subscribe to the value changes of our context.

    // index.js 
    import React from "react";
    import ReactDOM from "react-dom";
    import "./index.css";
    import App from "./app";
    import { UserContext, userState } from "./state/";
    // Service worker helper generated by create-react-app.
    import * as serviceWorker from "./serviceWorker";
    
    ReactDOM.render(
      <React.StrictMode>
        <UserContext.Provider value={userState}>
          <App />
        </UserContext.Provider>
      </React.StrictMode>,
      document.getElementById("root")
    );
    
    serviceWorker.unregister();
    

    We wrapped our entire application with the UserContext provider at the root component and passed our previously created userState default value through the value prop.

    Now that we have our application state fully set up, we can move on to creating the user’s data model using the Google Cloud Firestore through a cloud function.

    Handling Application Data

    A user’s data within this application consists of a unique id, an email, a password and the URL to an image. Using a cloud function, this data will be stored on the cloud using the Cloud Firestore service offered on the Google Cloud Platform.

    The Google Cloud Firestore, a flexible NoSQL database, was carved out from the Firebase Realtime Database with new enhanced features that allow for richer and faster queries alongside offline data support. Data within the Firestore service is organized into collections.

    The Firestore can be visually accessed through the Google Cloud Console. To launch it, open the left navigation pane, scroll down to the Database section and click on Firestore. This shows the list of existing collections, or prompts you to create a new collection when none exists. We would create a users collection to be used by our application.

    Similar to other services on the Google Cloud Platform, Cloud Firestore also has a JavaScript client library built to be used in a node environment (an error is thrown if it is used in the browser). To work around this, we use the Cloud Firestore from within a cloud function, using the @google-cloud/firestore package.

    Using The Cloud Firestore With A Cloud Function

    To get started, we would rename the demo-function we created earlier to firestoreFunction and then expand it to connect with our users collection on the Firestore, and to register and log in users.

    require("dotenv").config();
    const { Firestore } = require("@google-cloud/firestore");
    const { SecretManagerServiceClient } = require("@google-cloud/secret-manager");
    
    const client = new SecretManagerServiceClient();
            
    exports.firestoreFunction = function (req, res) {
        return {
            const { email, password, type } = req.body;
            const firestore = new Firestore();
            const document = firestore.collection("users");
            console.log(document) // prints details of the collection to the function logs
            if (!type) {
                res.status(422).send("An action type was not specified");
            }
    
            switch (type) {
                case "CREATE-USER":
                    break
                case "LOGIN-USER":
                    break;
                default:
                    res.status(422).send(`${type} is not a valid function action`)
            }
    };
    

    To handle more operations involving the Firestore, we have added a switch statement with two cases to handle the authentication needs of our application. The switch statement evaluates a type expression which we add to the request body when making a request to this function from our application, and whenever this type data is not present in the request body, the request is identified as a bad one and a 422 status code, alongside a message indicating the missing type, is sent as a response.

    We establish a connection with the Firestore using the Application Default Credentials (ADC) within the Cloud Firestore client library. On the next line, we assign the result of the collection method, called with the name of our collection, to another variable. We will be using this to perform further operations on the collection of the contained documents.

    Note: Client libraries for services on the Google Cloud connect to their respective service using a created service account key passed in when initializing the constructor. When the service account key is not present, they default to using the Application Default Credentials, which in turn connect using the IAM roles assigned to the cloud function.
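
    For illustration, here is the difference between the two approaches; the key file path below is a placeholder:

    const { Firestore } = require("@google-cloud/firestore");

    // Connect using an explicit service account key file (placeholder path).
    const firestoreWithKey = new Firestore({
      keyFilename: "./service-account.json",
    });

    // Connect using the Application Default Credentials, e.g. the IAM role
    // attached to the cloud function at runtime.
    const firestoreWithADC = new Firestore();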

    After editing the source code of a function that was deployed locally using the gcloud SDK, we can re-run the previous command from a terminal to update and redeploy the cloud function.

    Now that a connection has been established, we can implement the CREATE-USER case to create a new user using data from the request body and then move on to the LOGIN-USER which finds an existing user and sends back a cookie.

    
    require("dotenv").config();
    const { Firestore } = require("@google-cloud/firestore");
    const path = require("path");
    const { v4 : uuid } = require("uuid")
    const cors = require("cors")({ origin: true });
    
    const client = new SecretManagerServiceClient();
    
    exports.firestoreFunction = function (req, res) {
        return cors(req, res, () => {
            const { email, password, type } = req.body;
            const firestore = new Firestore();
            const document = firestore.collection("users");
            if (!type) {
                res.status(422).send("An action type was not specified");
            }
    
            switch (type) {
                case "CREATE-USER":
                  if (!email || !password) {
                    res.status(422).send("email and password fields missing");
                  }
                
                const id = uuid()
                return bcrypt.genSalt(10, (err, salt) => {
                  bcrypt.hash(password, salt, (err, hash) => {
                    document.doc(id)
                      .set({
                        id : id
                        email: email,
                        password: hash,
                        img_uri : null
                       })
                      .then((response) => res.status(200).send(response))
                      .catch((e) =>
                          res.status(501).send({ error : e })
                        );
                      });
                    });               
    
               case "LOGIN":
                  break;
              default:
                res.status(400).send(`${type} is not a valid function action`)
            }
        });
    };
    

    We generated a UUID using the uuid package and used it both as the ID of the document about to be saved, by passing it into the doc method, and as the user’s id field. By default, a random ID is generated for every inserted document, but in this case we will update the document when handling the image upload, and the UUID is what will be used to get the particular document to be updated. Rather than store the user’s password in plain text, we salt and hash it first using bcryptjs, then store the resulting hash as the user’s password.

    To integrate the firestoreFunction cloud function into the app, we use it from the CREATE-USER case within the user reducer.

    After clicking the Create Account button, an action is dispatched to the reducer with a CREATE-USER type to make a POST request containing the typed email and password to the firestoreFunction function’s endpoint.
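
    For illustration, a hypothetical submit handler in the signup component (assuming the reducer is wired up with React’s useReducer hook) could dispatch that action like this:

    // A hypothetical submit handler in the signup component.
    const handleSubmit = (event) => {
      event.preventDefault();
      dispatch({
        type: "CREATE-USER",
        userEmail: email,       // value of the email input field
        userPassword: password, // value of the password input field
      });
    };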

    import { createContext } from "react";
    import { navigate } from "@reach/router";
    import Axios from "axios";
    
    export const userState = {
      user : null, 
      isLoggedIn: false,
    };
    
    export const UserReducer = (state, action) => {
      switch (action.type) {
        case "CREATE_USER":
          const FIRESTORE_FUNCTION = process.env.REACT_APP_FIRESTORE_FUNCTION;
          const { userEmail, userPassword } = action;
    
          const data = {
            type: "CREATE-USER",
            email: userEmail,
            password: userPassword,
          };
    
          Axios.post(`${FIRESTORE_FUNCTION}`, data)
            .then((res) => {
              navigate("/home");
              return { ...state, isLoggedIn: true };
            })
        .catch((e) => console.log(`couldn't create user. error : ${e}`));
          break;
        case "LOGIN-USER":
          break;
        case "UPLOAD-USER-IMAGE":
          break;
        case "FETCH-DATA" :
          break
        case "LOGOUT":
          navigate("/login");
          return { ...state, isLoggedIn: false };
        default:
          break;
      }
    };
    
    export const UserContext = createContext(userState);
    

    Above, we made use of Axios to make the request to the firestoreFunction, and after this request has been resolved we mark the user as logged in within the application state and route them to the home page as a logged-in user.

    Next, we move on to implementing the login functionality within our firestoreFunction function, to enable an existing user to log in to their account using their saved credentials.

    
    require("dotenv").config();
    const { Firestore } = require("@google-cloud/firestore");
    const path = require("path");
    const cors = require("cors")({ origin: true });
    
    const client = new SecretManagerServiceClient()                                                                         
    exports.firestoreFunction = function (req, res) {
        return cors(req, res, () => {
            const { email, password, type } = req.body;
            const firestore = new Firestore();
            const document = firestore.collection("users");
            if (!type) {
                res.status(422).send("An action type was not specified");
            }
    
            switch (type) {
                case "CREATE":
                    // ... CREATE - USER LOGIC
                    break
                case "LOGIN":
                    break;
                default:
                    res.status(500).send({ error : `${type} is not a valid action` })
            }
        });
    };
    
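    The LOGIN-USER case is left open above; a minimal sketch of what it could contain, assuming the same bcryptjs hashing used when creating the user, might look like the fragment below (it slots into the switch statement above, not the article’s exact implementation):

    case "LOGIN-USER":
        return document
            .where("email", "==", email)
            .get()
            .then((snapshot) => {
                if (snapshot.empty) {
                    return res.status(404).send({ error: "user not found" });
                }
                const user = snapshot.docs[0].data();
                // Compare the submitted password against the stored hash.
                bcrypt.compare(password, user.password, (err, match) => {
                    if (!match) {
                        return res.status(401).send({ error: "invalid credentials" });
                    }
                    res.status(200).send(user);
                });
            })
            .catch((e) => res.status(500).send({ error: e }));
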

    At this point, a new user can successfully create an account and get routed to the home page. This process demonstrates how we use the Cloud Firestore to perform basic saving and mutation of data in a serverless application.

    Handling File Storage

    Storing and retrieving a user’s files is an often-needed feature in an application. In an application connected to a node.js backend, Multer is often used as a middleware to handle the multipart/form-data that an uploaded file comes in. But in the absence of a node.js backend, we can use an online file storage service such as the Google Cloud Storage to store files.

    The Google Cloud Storage is a globally available file storage service used to store any amount of data as objects in buckets. It is flexible enough to handle the storage of static assets for both small and large-sized applications.

    To use the Cloud Storage service within an application, we could make use of the available Storage API endpoints or the official node Storage client library. However, the node Storage client library does not work within a browser window, so we make use of it from within a Cloud Function instead.

    An example of this is the Cloud Function below, which connects to and uploads a file into a created Cloud Bucket.

    const cors = require("cors")({ origin: true });
    const { Storage } = require("@google-cloud/storage");
    const StorageClient = new Storage();

    exports.Uploader = (req, res) => {
      return cors(req, res, () => {
        const { file } = req.body;
        // Write the file's contents into the bucket; save() returns a promise.
        // This assumes the request body carries the contents as `file.data`.
        StorageClient.bucket("TEST_BUCKET")
          .file(file.name)
          .save(file.data)
          .then((response) => {
            console.log(response);
            res.status(200).send(response);
          })
          .catch((e) => res.status(422).send({ error: e }));
      });
    };
    

    From the cloud function above, we are performing the following two main operations:

    • First, we create a connection to the Cloud Storage through the Storage constructor, which uses the Application Default Credentials (ADC) feature on the Google Cloud to authenticate with the Cloud Storage.

    • Second, we upload the file included in the request body to our TEST_BUCKET by calling the .file method with the file’s name and saving its contents. Since this is an asynchronous operation, we use a promise to know when this action has been resolved, and then send a 200 response back, thus ending the life-cycle of the invocation.

    Now, we can expand the Uploader Cloud Function above to handle the upload of a user’s profile image. The cloud function will receive a user’s profile image, store it within our application’s cloud bucket, and then update the user’s img_uri data within our users’ collection in the Firestore service.

    require("dotenv").config();
    const { Firestore } = require("@google-cloud/firestore");
    const cors = require("cors")({ origin: true });
    const { Storage } = require("@google-cloud/storage");
    
    const StorageClient = new Storage();
    const BucketName = process.env.STORAGE_BUCKET
    
    exports.Uploader = (req, res) => {
      return Cors(req, res, () => {
        const { file , userId } = req.body;
        const firestore = new Firestore();
        const document = firestore.collection("users");
    
        StorageClient.bucket(BucketName)
          .file(file.name)
          .on("finish", () => {
            StorageClient.bucket(BucketName)
              .file(file.name)
              .makePublic()
              .then(() => {
                  const img_uri = `https://storage.googleapis.com/${Bucket}/${file.path}`;
                    document
                     .doc(userId)
                     .update({
                          img_uri,
                      })
                      .then((updateResult) => res.status(200).send(updateResult))
                      .catch((e) => res.status(500).send(e));
                      })
              .catch((e) => console.log(e));
          });
      });
    };

    Now we have expanded the Uploader function above to perform the following extra operations:

    • First, it makes a new connection to the Firestore service to get our users collection, by initializing the Firestore constructor, which uses the Application Default Credentials (ADC) to authenticate with the Firestore.

    • After uploading the file added in the request body, we make it public so that it is accessible via a public URL, by calling the makePublic method on the uploaded file. Under the Cloud Storage’s default Access Control, a file cannot be accessed over the internet without being made public, and we need the image to be accessible when the application loads.

    Note: Making a file public means anyone using your application can copy the file link and have unrestricted access to the file. One way to prevent this is by using a Signed URL to grant temporary access to a file within your bucket instead of making it fully public.
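
    As a sketch of that alternative, using the same client as above, a temporary link that expires after 15 minutes could be generated like this instead of calling makePublic:

    StorageClient.bucket(BucketName)
      .file(file.name)
      .getSignedUrl({
        action: "read",
        expires: Date.now() + 15 * 60 * 1000, // valid for 15 minutes
      })
      .then(([signedUrl]) => console.log(signedUrl));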

    • Next, we update the user’s existing data to include the URL of the file uploaded. We locate the particular user’s document using the userId included in the request body, then we set the img_uri field to contain the URL of the newly uploaded image.

    The Uploader cloud function above can be used within any application that has registered users in the Firestore service. All that is needed is to make a POST request to the endpoint, putting the user’s ID and an image in the request body.

    An example of this within the application is the UPLOAD-USER-IMAGE case, which makes a POST request to the function and puts the image link returned from the request in the application state.

    // state/index.js
    import Axios from "axios";

    const UPLOAD_FUNCTION = process.env.REACT_APP_UPLOAD_FUNCTION;

    export const UserReducer = (state, action) => {
      switch (action.type) {
        case "CREATE-USER":
          // ... CREATE-USER LOGIC ...
          break;

        case "UPLOAD-USER-IMAGE":
          const { file, id } = action;
          return Axios.post(
            UPLOAD_FUNCTION,
            { file, userId: id }, // the Uploader function expects a userId field
            {
              headers: {
                "Content-Type": "image/png",
              },
            }
          )
            .then((response) => {})
            .catch((e) => console.log(e));

        default:
          return console.log(`${action.type} case not recognized`);
      }
    };
    

    From the switch case above, we make a POST request using Axios to the UPLOAD_FUNCTION, passing in the added file to be included in the request body, and we also add an image Content-Type to the request header.

    After a successful upload, the response returned from the cloud function contains the user’s data document, which has been updated to contain a valid URL of the image uploaded to the Google Cloud Storage. We can then update the user’s state to contain the new data, and this also updates the user’s profile image src element in the profile component.

    A user’s profile page which with an update profile image
    A user’s profile page which has just been updated to show the newly updated profile image. (Large preview)

    Handling Cron Jobs

    Repetitive automated tasks, such as sending emails to users or performing an internal action at a specific time, are an often-included feature of applications. In a regular node.js application, such tasks could be handled as cron jobs using node-cron or node-schedule; when building serverless applications on the Google Cloud Platform, the Cloud Scheduler is the service designed to perform cron operations, as the sketch below contrasts.
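
    For comparison, a minimal sketch of such a recurring task in a regular node.js application using node-cron could look like this:

    const cron = require("node-cron");

    // Runs every day at 9:00 AM, for as long as the node process stays alive.
    cron.schedule("0 9 * * *", () => {
      console.log("sending scheduled emails...");
    });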

    Note: Although the Cloud Scheduler works similarly to the Unix cron utility in creating jobs that are executed in the future, it is important to note that the Cloud Scheduler does not execute a command the way the cron utility does. Rather, it performs an operation on a specified target.

    As the name implies, the Cloud Scheduler allows users to schedule an operation to be performed at a future time. Each operation is called a job, and jobs can be visually created, updated, and even destroyed from the Scheduler section of the Cloud Console. Aside from a name and description field, jobs on the Cloud Scheduler consist of the following:

    • Frequency: This is used to schedule the execution of the cron job. Schedules are specified using the unix-cron format, which is originally used when creating background jobs on the cron table in a Linux environment. The unix-cron format consists of a string with five fields, each representing a point in time. Below we can see the five fields and the values they accept:

       *   *   *   *   *
       |   |   |   |   |
       |   |   |   |   +----- day of week (0 - 6)
       |   |   |   +--------- month (1 - 12)
       |   |   +------------- day of month (1 - 31)
       |   +----------------- hour (0 - 23)
       +--------------------- minute (0 - 59)

    The Crontab generator tool comes in handy when trying to generate a frequency value for a job. If you find it difficult to put the time values together, it has a visual drop-down where you can select the values that make up a schedule, then copy the generated value and use it as the frequency. A few example expressions are shown below.
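
    For instance, here are a few common unix-cron expressions and what they schedule:

    # every day at 9:00 AM
    0 9 * * *

    # every 30 minutes
    */30 * * * *

    # at midnight on the first day of every month
    0 0 1 * *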

    • Timezone: The timezone in which the cron job is executed. Due to the time difference between timezones, cron jobs executed with different specified timezones will have different execution times.

    • Target: This is what is used in the execution of the specified job. A target could be an HTTP type, where the job makes a request to a URL at the specified time; a Pub/Sub topic, which the job can publish messages to or pull messages from; or, lastly, an App Engine application.

    The Cloud Scheduler combines perfectly well with HTTP-triggered Cloud Functions. When a job within the Cloud Scheduler is created with its target set to HTTP, this job can be used to execute a cloud function. All that needs to be done is to specify the endpoint of the cloud function, specify the HTTP verb of the request, then add whatever data needs to be passed to the function in the displayed body field, as shown in the sample below:

    Fields required for creating a cron job using the cloud console
    Fields required for creating a cron job using the cloud console. (Large preview)

    The cron job in the image above would run at 9 AM every day, making a POST request to the sample endpoint of a cloud function.

    A more realistic use case of a cron job is sending scheduled emails to users at a given interval, using an external mailing service such as Mailgun. To see this in action, we will create a new cloud function which sends an HTML email to a specified email address, using the nodemailer JavaScript package to connect to Mailgun.

    // index.js
    require("dotenv").config();
    const nodemailer = require("nodemailer");

    exports.Emailer = (req, res) => {
      const sender = process.env.SENDER;
      const { receiver, type } = req.body;

      // SMTP transport configured with the credentials displayed on the
      // Mailgun dashboard.
      const transport = nodemailer.createTransport({
        host: process.env.HOST,
        port: process.env.PORT,
        secure: false,
        auth: {
          user: process.env.SMTP_USERNAME,
          pass: process.env.SMTP_PASSWORD,
        },
      });

      if (!receiver) {
        res.status(400).send({ error: `Empty email address` });
      }

      transport.verify(function (error, success) {
        if (error) {
          res
            .status(401)
            .send({ error: `failed to connect with smtp. check credentials` });
        }
      });

      switch (type) {
        case "statistics":
          return transport.sendMail(
            {
              from: sender,
              to: receiver,
              subject: "Your usage statistics of demo app",
              html: { path: "./welcome.html" },
            },
            (error, info) => {
              if (error) {
                return res.status(401).send({ error: error });
              }
              transport.close();
              res.status(200).send({ data: info });
            }
          );

        default:
          res.status(500).send({
            error: "An available email template type has not been matched.",
          });
      }
    };

    Using the cloud function above, we can send an email to any user’s email address specified as the receiver value in the request body. It performs the sending of emails through the following steps:

    • It creates an SMTP transport for sending messages by passing in the host, user and pass (which stands for password) values, all displayed on the user’s Mailgun dashboard when a new account is created.
    • Next, it verifies if the SMTP transport has the credentials needed in order to establish a connection. If there’s an error in establishing the connection, it ends the function’s invocation and sends back a 401 unauthenticated status code.
    • Next, it calls the sendMail method to send the email containing the HTML file as the email’s body to the receiver’s email address specified in the to field.

    Note: We use a switch statement in the cloud function above to make it more reusable for sending different emails to different recipients. This way, we can send different emails based on the type field included in the request body when calling this cloud function.

    Now that there is a function that can send an email to a user, we are left with creating the cron job to invoke this cloud function. This time, the cron jobs are created dynamically each time a new user is created, using the official Google Cloud client library for the Cloud Scheduler from within the initial firestoreFunction.

    We expand the CREATE-USER case to create a job which sends an email to the newly created user at a one-day interval.

    
    require("dotenv").config();cloc
    const { Firestore } = require("@google-cloud/firestore");
    const scheduler = require("@google-cloud/scheduler") 
    const cors = require("cors")({ origin: true });
    
    const EMAILER = proccess.env.EMAILER_ENDPOINT
    const parent = ScheduleClient.locationPath(
     process.env.PROJECT_ID,
     process.env.LOCATION_ID
    );
    
    exports.firestoreFunction = function (req, res) {
        return cors(req, res, () => {
            const { email, password, type } = req.body;
            const firestore = new Firestore();
            const document = firestore.collection("users");
            const client = new Scheduler.CloudSchedulerClient()
    
            if (!type) {
                res.status(422).send({ error : "An action type was not specified"});
            }
    
            switch (type) {
              case "CREATE-USER":
    
            const job = {
              httpTarget: {
                uri: process.env.EMAIL_FUNCTION_ENDPOINT,
                httpMethod: "POST",
                body: {
                  email: email,
                },
              },
              schedule: "*/30 */6 */5 10 4",
              timezone: "Africa/Lagos",
              }
                  if (!email || !password) {
                       res.status(422).send("email and password fields missing");
                    }
                return bcrypt.genSalt(10, (err, salt) => {
                  bcrypt.hash(password, salt, (err, hash) => {
                    document
                      .add({
                        email: email,
                        password: hash,
                       })
                      .then((response) => {
                          client.createJob({
                              parent : parent,
                              job : job
                          }).then(() => res.status(200).send(response))
                          .catch(e => console.log(`unable to create job : ${e}`) )
                      })
                      .catch((e) =>
                          res.status(501).send(`error inserting data : ${e}`)
                        );
                      });
                    });               
                default:
                    res.status(422).send(`${type} is not a valid function action`)
            }
        });
    };
    

    From the snippet above, we can see the following:

    • A connection to the Cloud Scheduler is made through the CloudSchedulerClient constructor, using the Application Default Credentials (ADC).
    • We create an object consisting of the following details which make up the cron job to be created:
      • uri
        The endpoint of our email cloud function which a request would be made to.
      • body
        This is the data containing the email address of the user, encoded as bytes, to be included when the request is made.
      • schedule
        The unix cron format representing the time when this cron job is to be performed.
    • After the promise from inserting the user’s data document is resolved, we create the cron job by calling the createJob method and passing in the job object and the parent.
    • The function’s execution is ended with a 200 status code after the promise from the createJob operation has been resolved.

    After the job is created, we would see it listed on the scheduler page.

    List of all scheduled cron jobs including the last created job.
    List of all scheduled cron jobs including the last created job. (Large preview)

    From the image above we can see the time scheduled for this job to be executed. We can decide to manually run this job or wait for it to be executed at the scheduled time.

    Conclusion

    Within this article, we have had a good look into serverless applications and the benefits of using them. We also took an extensive look at how developers can manage their serverless applications on the Google Cloud using Cloud Functions, and at how the Google Cloud Platform supports serverless applications.

    In the years to come, we will certainly see a large number of developers adopt serverless applications when building applications. If you are using cloud functions in a production environment, it is recommended that you read this article from a Google Cloud advocate on “6 Strategies For Scaling Your Serverless Applications”.

    The source code of the created cloud functions is available within this GitHub repository, and the frontend application used is within this GitHub repository. The frontend application has been deployed using Netlify and can be tested live here.


    web design

    Building A Component Library With React And Emotion — Smashing Magazine

    09/04/2020

    About The Author

    Front-end engineer passionate about performance and bleeding-edge technologies.
    More about
    Ademola

    A component library helps to keep a design consistent across multiple projects. It ensures consistency because any changes made will propagate across the projects that make use of it. In this tutorial, we’ll learn how to build a component library, using Emotion in React to resolve inconsistencies.

    According to Clearleft, a component library is:

    “A collection of components, organised in a meaningful manner, and often (but not necessarily) providing some way to browse and preview those components and their associated assets.”

    — “On Building Component Libraries,” Clearleft

    We’ll learn how to build a component library by making one that comprises four components:

    1. Button
      A wrapper around the default HTML button
    2. Box
      A container (HTML div) with custom properties
    3. Columns
      A container whose children are spaced evenly across the x-axis
    4. Stack
      A container whose children are spaced evenly across the y-axis

    These components could then be used in whatever application we are working on. We’ll build the component library using React and Emotion.

    At the end of this piece, you should be able to create a component library that fits whatever use case you have in mind. This knowledge will come handy when you’re working with a team that needs to make use of reusable components.

    First, let’s get started by establishing what the Emotion library is. The documentation explains:

    “Emotion is a library designed for writing CSS styles with JavaScript. It provides powerful and predictable style composition in addition to a great developer experience with features such as source maps, labels, and testing utilities.”

    — “Introduction,” Emotion Docs

    In essence, Emotion is a CSS-in-JavaScript library, and an interesting thing about CSS-in-JavaScript libraries is that they enable you to collocate components with styles. Being able to tie them up together in a scope ensures that some component styles don’t interfere with others, which is crucial to our component library.

    Emotion exposes two APIs for React:

    • @emotion/core
    • @emotion/styled

    Before we dive into how these APIs work, note that they both support the styling of components with template strings and objects.

    The core API is actually like the regular style property we currently use today when building apps with React, with the addition of vendor prefixing, nested selectors, media queries, and more.

    Using the object approach with the core API would typically look like this:

    /** @jsx jsx */
    import { jsx } from '@emotion/core'
    
    let Box = props => {
      return (
        <div
          css={{
            backgroundColor: 'grey'
          }}
          {...props}
        />
      )
    }
    

    This is a rather contrived example that shows how we could style a Box component with Emotion. It’s like swapping out the style property for a css property, and then we’re good to go.

    Now, let’s see how we could use the template string approach with the same core API:

    /** @jsx jsx */
    import { jsx, css } from '@emotion/core'
    
    let Box = props => {
      return (
        <div
          css={css`
            background-color: grey
          `}
          {...props}
        />
      )
    }
    

    All we did was wrap the template string with the css tag function, and Emotion handles the rest.

    The styled API, which is built on the core API, takes a slightly different approach to styling components. This API is called with a particular HTML element or React component, and that element is called with an object or a template string that contains the styles for that element.

    Let’s see how we could use the object approach with the styled API:

    import styled from '@emotion/styled'
    
    const Box = styled.div({
            backgroundColor: 'grey'
    });
    

    Here is one way to use the styled API, which is an alternative to using the core API. The rendered outputs are the same.

    Now, let’s see how we could use the template string approach using the styled API:

    import styled from '@emotion/styled'
    
    const Box = styled.div`
            background-color: grey
    `
    

    This achieves the same thing as the object approach, only with a template string this time.

    We could use either the core API or the styled API when building components or an application. I prefer the styled approach for a component library for a couple of reasons:

    • It achieves a lot with few keystrokes.
    • It takes in an as prop, which helps with dynamically changing the HTML element from the call site. Let’s say we default to a paragraph element, and we need a header element because of semantics; we can pass the header element as a value to the as property, as in the sketch below.
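
    As a small sketch of the as prop in action (the Text component here is hypothetical):

    import styled from '@emotion/styled'

    const Text = styled.p`
      font-size: 1.125rem;
    `

    // Renders an h2 element instead of a p, keeping the same styles:
    // <Text as="h2">A semantic heading</Text>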

    Getting Started

    To get started, let’s clone the setup scripts on GitHub, which we can do on the command line:

    git clone git@github.com:smashingmagazine/component-library.git

    This command copies the code in that repository into a component-library folder. It contains the code required to set up a component library, including Rollup to help bundle the library.

    We currently have a components folder with an index.js file, which does nothing. We’ll be creating new folders under the components folder for each component we build in our library. Each component’s folder will expose the following files:

    • Component.js
      This is the component we’re building.
    • index.js
      This exports the component from Component.js and makes referencing components from a different location easier.
    • Component.story.js
      This essentially renders our component in its multiple states using Storybook.

    It also ships with a utils folder, which defines certain properties that would be used in our components. The folder contains several files:

    • helpers.js
      This contains helper functions that we are going to be using across our application.
    • units.js
      This defines spacing and font-size units, which we will use later.
    • theme.js
      This defines our component library’s palette, shadows, typography, and shape.

    Let’s look at what we’ve defined in the units.js file:

    export const spacing = {
      none: 0,
      xxsmall: '4px',
      xsmall: '8px',
      small: '12px',
      medium: '20px',
      gutter: '24px',
      large: '32px',
      xlarge: '48px',
      xxlarge: '96px',
    };
    
    export const fontSizes = {
      xsmall: '0.79rem',
      small: '0.889rem',
      medium: '1rem',
      large: '1.125rem',
      xlarge: '1.266rem',
      xxlarge: '1.424rem',
    };
    

    This defines the spacing and fontSizes rules. The spacing rule was inspired by the Braid design system, which is based on multiples of four. The fontSizes are derived from the major second (1.125) type scale, which is a good scale for product websites. If you’re curious to learn more about type scale, “Exploring Responsive Type Scales” explains the value of knowing the scales appropriate for different websites.
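
    For instance, each step up the scale multiplies the previous size by 1.125: 1rem × 1.125 = 1.125rem (large), and 1.125rem × 1.125 ≈ 1.266rem (xlarge); stepping down divides by the same ratio, so 1rem ÷ 1.125 ≈ 0.889rem (small).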

    Next, let’s go through the theme.js file!

    import { spacing } from './units';
    
    const white = '#fff';
    const black = '#111';
    
    const palette = {
      common: {
        black,
        white,
      },
      primary: {
        main: '#0070F3',
        light: '#146DD6',
        contrastText: white,
      },
      error: {
        main: '#A51C30',
        light: '#A7333F',
        contrastText: white,
      },
      grey: {
        100: '#EAEAEA',
        200: '#C9C5C5',
        300: '#888',
        400: '#666',
      },
    };
    
    const shadows = {
      0: 'none',
      1: '0px 5px 10px rgba(0, 0, 0, 0.12)',
      2: '0px 8px 30px rgba(0, 0, 0, 0.24)',
    };
    
    const typography = {
      fontFamily:
        "Inter, -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Ubuntu, 'Helvetica Neue', sans-serif",
    };
    
    const shape = {
      borderRadius: spacing['xxsmall'],
    };
    
    export const theme = {
      palette,
      shadows,
      typography,
      shape,
    };
    

    In the theme file, we’ve defined our palette, which is essentially the colors we’re going to be using across all components in our library. We also have a shadows object, where we define our box-shadow values. There’s also the typography object, which currently just defines our fontFamily. Finally, shape is used for properties such as border-radius. This theme’s structure is inspired by Material-UI.

    Next, our helpers.js file!

    export const isObjectEmpty = (obj) => {
      return Object.keys(obj).length === 0;
    };
    

    Here, we only expose the isObjectEmpty function, which takes in an object and returns true if the object is empty. It returns false if it has any values. We’re going to make use of this function later.

    Now that we’ve gone through all of the files in the utils folder, it’s about time to start building our components!

    Buttons

    Buttons are one of the most used components on the web. They’re used everywhere and can take different forms, shapes, sizes, and more.

    Here are the buttons we’re going to build in Figma.

    An illustration that shows what the Button component looks like
    Button component design from Figma (Large preview)

    These subtle variations are going to be applied as properties to our button. We would like the buttons in our component library to accept properties such as variant, size, enableElevation (i.e. box-shadow), and color.

    Starting with the button component, let’s create a Button folder, where we will define everything related to buttons, as discussed earlier.

    Let’s create our button component:

    import styled from '@emotion/styled';
    import isPropValid from '@emotion/is-prop-valid';
    
    const StyledButton = () => {};
    
    const IGNORED_PROPS = ['color'];
    
    const buttonConfig = {
      shouldForwardProp: (prop) =>
        isPropValid(prop) && !IGNORED_PROPS.includes(prop),
    };
    
    export const Button = styled('button', buttonConfig)(StyledButton);
    

    Here, we’ve started off by setting up our button component with a buttonConfig. The buttonConfig contains shouldForwardProp, which is used to control the properties that should be forwarded to the DOM, because properties such as color show up on the rendered element by default.

    Next, let’s define our button sizes, which we’re going to use in the button component!

    const buttonSizeProps = {
      small: {
        fontSize: fontSizes['xsmall'],
        padding: `${spacing['xsmall']} ${spacing['small']}`,
      },
      medium: {
        fontSize: fontSizes['small'],
        padding: `${spacing['small']} ${spacing['medium']}`,
      },
      large: {
        fontSize: fontSizes['medium'],
        padding: `${spacing['medium']} ${spacing['large']}`,
      },
    };
    

    buttonSizeProps is a map of our size values (small, medium, and large), and it returns fontSize and padding values based on the sizes. For a small button, we’d need a small font with small padding. The same goes for the medium and large sizes to scale them appropriately.

    Next, let’s define a function that provides valid CSS properties based on the passed variant:

    const getPropsByVariant = ({ variant, color, theme }) => {
    
      const colorInPalette = theme.palette[color];
    
      const variants = {
        outline: colorInPalette
          ? outlineVariantPropsByPalette
          : defaultOutlineVariantProps,
        solid: colorInPalette
          ? solidVariantPropsByPalette
          : defaultSolidVariantProps,
      };
    
      return variants[variant] || variants.solid;
    };
    

    Here, the getPropsByVariant function takes in variant, color, and theme properties and returns the properties of the specified variant; if no variant is specified, it defaults to solid. colorInPalette retrieves the palette assigned to the specified color if found, and undefined if not found in our theme object.

    In each variant, we check whether a palette actually exists for the color specified; if we don’t, then we use colors from the common and grey objects of our theme, which we will apply in defaultOutlineVariantProps and defaultSolidVariantProps.

    Next, let’s define our variant properties!

    const defaultSolidVariantProps = {
      main: {
        border: `1px solid ${theme.palette.grey[100]}`,
        backgroundColor: theme.palette.grey[100],
        color: theme.palette.common.black,
      },
      hover: {
        border: `1px solid ${theme.palette.grey[200]}`,
        backgroundColor: theme.palette.grey[200],
      },
    };
    
    const defaultOutlineVariantProps = {
      main: {
        border: `1px solid ${theme.palette.common.black}`,
        backgroundColor: theme.palette.common.white,
        color: theme.palette.common.black,
      },
      hover: {
        border: `1px solid ${theme.palette.common.black}`,
        backgroundColor: theme.palette.common.white,
        color: theme.palette.common.black,
      },
    };
    
    const solidVariantPropsByPalette = colorInPalette && {
      main: {
        border: `1px solid ${colorInPalette.main}`,
        backgroundColor: colorInPalette.main,
        color: colorInPalette.contrastText,
      },
      hover: {
        border: `1px solid ${colorInPalette.light}`,
        backgroundColor: colorInPalette.light,
      },
    };
    
    const outlineVariantPropsByPalette = colorInPalette && {
      main: {
        border: `1px solid ${colorInPalette.main}`,
        backgroundColor: theme.palette.common.white,
        color: colorInPalette.main,
      },
      hover: {
        border: `1px solid ${colorInPalette.light}`,
        backgroundColor: theme.palette.common.white,
        color: colorInPalette.light,
      },
    };
    

    Here, we define the properties that are going to be applied to our button based on the selected variants. And, as discussed earlier, defaultSolidVariantProps and defaultOutlineVariantProps use colors from our common and grey objects as fallbacks for when the specified color isn’t in our palette, or when no color is specified at all.

    Meanwhile, the solidVariantPropsByPalette and outlineVariantPropsByPalette objects use the color from our palette as specified by the button. They both have main and hover properties that differentiate the button’s default and hover styles, respectively.

    The button design we’ve used accounts for two variants, which we can check out in our component library design.

    Next, let’s create our StyledButton function, which combines all we’ve done so far.

    const StyledButton = ({
      color,
      size,
      variant,
      enableElevation,
      disabled,
      theme,
    }) => {
      if (isObjectEmpty(theme)) {
        theme = defaultTheme;
      }
    
      const fontSizeBySize = buttonSizeProps[size]?.fontSize;
      const paddingBySize = buttonSizeProps[size]?.padding;
      const propsByVariant = getPropsByVariant({ variant, theme, color });
    
      return {
        fontWeight: 500,
        cursor: 'pointer',
        opacity: disabled && 0.7,
        transition: 'all 0.3s linear',
        padding: buttonSizeProps.medium.padding,
        fontSize: buttonSizeProps.medium.fontSize,
        borderRadius: theme.shape.borderRadius,
        fontFamily: theme.typography.fontFamily,
        boxShadow: enableElevation && theme.shadows[1],
        ...(propsByVariant && propsByVariant.main),
        ...(paddingBySize && { padding: paddingBySize }),
        ...(fontSizeBySize && { fontSize: fontSizeBySize }),
        '&:hover': !disabled && {
          boxShadow: enableElevation && theme.shadows[2],
          ...(propsByVariant && propsByVariant.hover),
        },
      };
    };
    

    In the StyledButton function, we assign defaultTheme to the theme if the theme object is empty, which makes it optional for consumers of our library to use Emotion’s ThemeProvider. We assigned fontSize and padding based on the buttonSizeProps object. We defined several default button properties, such as fontWeight and cursor, which aren’t tied to any property, and we also derived color, backgroundColor, and border values based on the result of propsByVariant.
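
    As a sketch of that optional theming, a consumer could wrap their app in Emotion’s ThemeProvider (from the emotion-theming package in Emotion 10) and pass a custom theme; without it, the Button simply falls back to defaultTheme. The import paths below assume the folder structure described earlier:

    import React from 'react'
    import { ThemeProvider } from 'emotion-theming'
    import { theme } from './utils'
    import { Button } from './components/Button'

    // Every Button below resolves its colors from the provided theme.
    const App = () => (
      <ThemeProvider theme={theme}>
        <Button color="primary">Themed Button</Button>
      </ThemeProvider>
    )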

    Now that we’ve created our Button component, let’s see how we can use it:

    <Button
        variant="solid"
        color="primary"
        size="small"
        enableElevation
        disabled
    >
    Small Solid Elevated Button
    </Button>
    

    We can check what that looks like on CodeSandbox.

    That’s how to use the Button component. We define the following properties:

    • We define a variant with a solid value. We could have specified outline instead. If the variant prop isn’t provided, we would also default to solid.
    • We define color, with a value of primary. We also support error as a color value or a color from a theme object. If the color property isn’t specified, we would fall back to our default color state.
    • We define size, with a value of small. It could be medium (the default) or large.
    • We define enableElevation because we want some box-shadow on our button. We could have chosen not to use it.
    • Finally, we define disabled because we want our button to be disabled. The additional thing we do to a disabled button is reduce its opacity.

    The button doesn’t need to take any property; it defaults to a solid, medium-sized button, as shown below.
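
    For example, this renders the default button:

    <Button>Default Button</Button>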

    Box Component

    A box component is a container that can hold any component or HTML element. It accepts but is not limited to properties such as padding, margin, display, and width. It can also be used as a base component for some of the other components we’ll get into later.

    Here’s what it looks like on Figma:

    An illustration that shows what the Box component looks like
    Box component design from Figma (Large preview)

    Before diving into the code, let’s not forget to create a new folder for this component.

    Now, let’s create our Box component:

    
    import styled from '@emotion/styled';
    import isPropValid from '@emotion/is-prop-valid';
    import { spacing, isObjectEmpty, theme as defaultTheme } from '../../utils';
    
    const StyledBox = ({
      paddingX,
      paddingY,
      marginX,
      marginY,
      width,
      display,
      theme,
      ...props
    }) => {
    
      if (isObjectEmpty(theme)) {
        theme = defaultTheme;
      }
    
      const padding = spacing[props.padding];
      let paddingTop = spacing[props.paddingTop];
      let paddingRight = spacing[props.paddingRight];
      let paddingBottom = spacing[props.paddingBottom];
      let paddingLeft = spacing[props.paddingLeft];
      if (paddingX) {
        paddingLeft = spacing[paddingX];
        paddingRight = spacing[paddingX];
      }
      if (paddingY) {
        paddingTop = spacing[paddingY];
        paddingBottom = spacing[paddingY];
      }
      let margin = spacing[props.margin];
      let marginTop = spacing[props.marginTop];
      let marginRight = spacing[props.marginRight];
      let marginBottom = spacing[props.marginBottom];
      let marginLeft = spacing[props.marginLeft];
      if (marginX) {
        marginLeft = spacing[marginX];
        marginRight = spacing[marginX];
      }
      if (marginY) {
        marginTop = spacing[marginY];
        marginBottom = spacing[marginY];
      }
      return {
        padding,
        paddingTop,
        paddingRight,
        paddingBottom,
        paddingLeft,
        margin,
        marginTop,
        marginRight,
        marginBottom,
        marginLeft,
        width,
        display,
        fontFamily: theme.typography.fontFamily,
      };
    };
    
    const IGNORED_PROPS = ['display', 'width'];
    
    const boxConfig = {
      shouldForwardProp: (prop) =>
        isPropValid(prop) && !IGNORED_PROPS.includes(prop),
    };
    
    export const Box = styled('div', boxConfig)(StyledBox);
    

    The spacing rule we defined earlier is being applied to both padding and margin, as we can see in the Box component. We receive contextual values for padding and margin, and we look up their actual values from the spacing object.

    We accept paddingX and paddingY props to update padding across the horizontal and vertical axis, respectively. We do the same for marginX and marginY as well.

    Also, we don’t want the display and width props to get forwarded to the DOM because we only need them in CSS. So, we add them to our list of props to ignore, and pass that on to our config.

    Here’s how we could use the Box component:

    <Box
      padding="small"
      paddingTop="medium"
      paddingBottom="medium"
    >
      Simple Box Component
    </Box>
    

    We can see what this looks like on CodeSandbox.

    In this Box component, we’ve assigned small as a value to our padding property, and medium to the paddingTop and paddingBottom properties. When rendered, the Box component will have its padding-left and padding-right properties set to 12px each, and its padding-top and padding-bottom properties set to 20px. We could have replaced paddingTop and paddingBottom with paddingY and gotten the same result.

    Columns Component

    The Columns component is a variation of our Box component, with a display type of flex and with children spaced evenly across the x-axis.

    Here is a representation of the Columns component in Figma:

    An illustration that shows what the Button component looks like
    Columns component design from Figma (Large preview)

    Let’s build our Columns component!

    import React from 'react';
    import { Box } from '../Box';
    
    export const Columns = ({ children, space, ...props }) => {
      return (
        <Box display="flex" {...props}>
          {React.Children.map(children, (child, index) => {
            if (child.type !== Box) {
              console.warn(
                'Each child in a Columns component should be a Box component'
              );
            }
    
            if (index > 0) {
              return React.cloneElement(child, {
                marginLeft: space,
                width: '100%',
              });
            }
    
            return React.cloneElement(child, { width: '100%' });
          })}
        </Box>
      );
    };
    

    We’re using React.Children to map over the Columns component’s children. And we’re adding marginLeft and width properties to each of the children, except the first child, which doesn’t need a marginLeft property because it’s the leftmost child in the column. We expect each child to be a Box element to ensure that the necessary styles are applied to it.

    Here’s how we could use the Columns component:

    <Columns space="small">
      <Box> Item 1</Box>
      <Box> Item 2</Box>
      <Box> Item 3</Box>
    </Columns>
    

    We can see what that looks like on CodeSandbox.

    The Columns children here are spaced evenly across the x-axis by 12 pixels because that’s what the value of small resolves to, as we’ve defined earlier. Because the Columns component is literally a Box component, it can take in other Box component properties, and we can customize it as much as we want.

    Stack Component

    This is also a variation of our Box component that takes the full width of the parent element and whose children are spaced evenly across the y-axis.

    Here is a representation of the Stack component in Figma:

    An illustration that shows what the Stack component looks like
    Stack component design from Figma (Large preview)

    Let’s build our Stack component:

    import React from 'react';
    import { Box } from '../Box';
    import { Columns } from '../Columns';
    
    const StackChildrenTypes = [Box, Columns];
    const UnsupportedChildTypeWarning =
      'Each child in a Stack component should be one of the types: Box, Columns';
    
    export const Stack = ({ children, space, ...props }) => {
      return (
        <Box {...props}>
          {React.Children.map(children, (child, index) => {
            if (!StackChildrenTypes.includes(child.type)) {
              console.warn(UnsupportedChildTypeWarning);
            }
    
            if (index > 0) {
              return React.cloneElement(child, { marginTop: space });
            }
    
            return child;
          })}
        </Box>
      );
    };
    

    Here, we map over each child with React.Children and apply a marginTop property to it with the value of the space argument. The first child needs to keep its original position, so we skip adding a marginTop property to it. We also expect each child to be a Box or Columns component so that we can apply the necessary properties to it.

    Here’s how we could use the Stack component:

    <Stack space="small">
      <Box marginTop="medium"> Item 1</Box>
      <Box> Item 2</Box>
      <Box> Item 3</Box>
    </Stack>
    

    We can see what that looks like on CodeSandbox.

    Here, the Box elements are spaced evenly with the small unit, and the first Box takes a separate marginTop property. This shows that you can customize components however you wish.
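    And because StackChildrenTypes includes Columns, the two primitives compose. A minimal sketch:

    <Stack space="medium">
      <Columns space="small">
        <Box> Left</Box>
        <Box> Right</Box>
      </Columns>
      <Box> Below the columns</Box>
    </Stack>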

    Conclusion

    We’ve gone through the basics of using Emotion to create components in React with the APIs it provides. This is just one of many ways to go about building a component library. There are fewer nuances when you build a library for a single brand, because you might not have to take theming and some other concerns into consideration. But if you plan to release the library to the public one day, then you’ll have to deal with requests for those missing pieces, so consider that possibility and make the library a little flexible ahead of time.

    If you have any questions, feel free to drop them as comments.

    The repository for this article is on GitHub, and the button designs we’ve used are on Figma.


    web design

    Building React Apps With Storybook — Smashing Magazine

    09/01/2020

    About The Author

    A software developer, technical writer and problem solving enthusiast based in Lagos, Nigeria. When he’s not writing or solving problems on LeetCode, he is …
    More about
    Abdulazeez

    In this article, you will learn how to build and test React components in isolation using Storybook. You will also learn how to use the knobs addon to modify data directly from the Storybook explorer.

    Storybook is a UI explorer that eases the task of testing components during development. In this article, you will learn what Storybook is about and how to use it to build and test React components by building a simple application. We’ll start with a basic example that shows how to work with Storybook, then we’ll go ahead and create a story for a Table component which will hold students’ data.

    Storybook is widely used in building live playgrounds and documenting component libraries, as you have the power to change prop values and check loading states, amongst other defined functionalities.

    You should have basic knowledge of React and the use of NPM before proceeding with this article, as we’ll be building a handful of React components.

    Storybook Stories

    A story is an exported function that renders a given visual state of a component based on the defined test cases. These stories are saved under the extension .stories.js. Here is an example story:

    import React from 'react';
    import Sample from './x';
    
    export default {
        title: 'Sample story',
        component: Sample   
    }
    
    export function Story(){
        return (
            <Sample data="sample data" />
        )
    }

    The good part about Storybook is that it’s not much different from how you typically write React components, as you can see from the example above. The difference here is that alongside the Story component, we are also exporting an object which holds the values of our story title and the component the story is meant for.

    Starting Out

    Let’s start with building the basic example mentioned above. This example will get us familiar with creating stories and with what the stories interface looks like.
    You’ll start by creating the React application and installing Storybook in it.

    From your terminal, run the command below:

    # Scaffold a new application.
    npx create-react-app table-component
    
    # Navigate into the newly created folder.
    cd table-component
    
    # Initialise storybook.
    npx -p @storybook/cli sb init

    After that, check that the installation was successful by running the following commands:

    In one terminal:

    yarn start

    and in the other:

    yarn storybook

    You will be greeted by two different screens: the React application and the storybook explorer.

    With Storybook installed in our application, you’ll go on to remove the default stories located in the src/stories folder.

    Building A Hello world story

    In this section, you’ll write your first story (not the one for the table component yet). This story will explain the concepts of how a story works. Interestingly, you do not need to have the React app running to work on a story.

    Since React stories are isolated React functions, you have to define a component for the story first. In the src folder, create a components folder and a file Hello.js inside it, with the content below:

    import React from 'react';
    
    export default function Hello({name}) {
      return (
        <p>Hello {name}!, this is a simple hello world component</p>
      )
    }

    This is a component that accepts a name prop and renders the value of name alongside some text. Next, you’ll write the story for the component in the src/stories folder, in a file named Hello.stories.js:

    First, you import React and the Hello component:

    import React from 'react';
    import Hello from '../components/Hello.js';

    Next, you create a default export which is an object containing the story title and component:

    export default {
      title: 'Hello Story',
      component: Hello
    }

    Next, you create your first story:

    export function HelloJoe() {
      return (
        <Hello name="Jo Doe" />
      )
    }

    In the code block above, HelloJoe() is the name of the story, and the body of the function houses the data to be rendered in Storybook. In this story, we are rendering the Hello component with the name “Jo Doe”.

    This is similar to how you would typically render the Hello component if you wanted to make use of it in another component. You can see that we’re passing a value for the name prop which needs to be rendered in the Hello component.
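    For comparison, here is what that ordinary, non-story usage might look like inside another component (the Greeting component here is a hypothetical example, not part of the app we’re building):

    import React from 'react';
    import Hello from './Hello';
    
    // A hypothetical component that renders Hello the usual way.
    export default function Greeting() {
        return <Hello name="Jo Doe" />;
    }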

    Your storybook explorer should look like this:

    this image shows 'Hello Jo Doe!, this is a simple web component.'
    Hello story. (Large preview)

    The Hello Joe story is listed under the story title and already rendered. Each story has to be exported to be listed in the storybook.

    If you create more stories with the title Hello Story, they will be listed under that title, and clicking on each story renders its own output. Let’s create another story:

    export function TestUser() {
        return (
            <Hello name="Test User" />
        )
    }

    Your storybook explorer should contain two stories:

    this image shows 'Hello Test User!, this is a simple web component'
    Test user story. (Large preview)

    Some components render data conditionally based on the props value passed to them. You will create a component that renders data conditionally and test the conditional rendering in storybook:

    In the Hello component file, create a new component:

    // Export the component so the story file can import it.
    export function IsLoading({condition}) {
        if (condition) {
            return (
                <p> Currently Loading </p>
            )
        }
    
        return (
            <p> Here's your content </p>
        )
    }

    To test the behaviour of your new component, you will have to create a new story for it. In the previous story file, Hello.stories.js, create two new stories:

    import Hello, { IsLoading } from '../components/Hello';
    
    export function NotLoading() {
        return (
            <IsLoading condition={false} />
        )
    }
    
    export function Loading() {
        return (
            <IsLoading condition={true} />
        )
    }

    As expected, the first story’s render differs from the second’s. Your storybook explorer should look like this:

    this image shows 'Here’s your content'
    Not loading story. (Large preview)
    this image shows 'Currently loading, please hold on'
    Loading story. (Large preview)

    You have learnt the basics of creating stories and using them. In the next section, you will build, style and test the main component for this article.

    Building A Table Component

    In this section, you will build a table component, after which you will write a story to test it.

    The table component example will serve as a medium for displaying students’ data. The table component will have two headings: Name and Registered Course.

    this image shows the table story you’ll be building
    What you will be building. (Large preview)

    First, create a new file Table.js to house the component in the src/component folder. Define the table component inside the newly created file:

    import React from 'react';
    
    // Export the component so the story file can import it later.
    export default function Table({data}) {
        return null; // We'll replace this with the table markup next.
    }

    The Table component takes a prop value data. This prop value is an array of objects containing the data of students in a particular class to be rendered. Let’s write the table body:

    Now replace null in the return statement with the following piece of code (wrapped in return parentheses):

    <table>
        <thead>
            <tr>
                <th>Name</th>   
                <th>Registered Course</th>
            </tr>
        </thead>            
        <tbody>
        {data}
        </tbody>
    </table>

    The code above creates a table with two headings, Name and Registered Course. In the table body, the students’ data is rendered. Since objects aren’t valid React children, you will have to create a helper component to render the individual entries.

    Just after the Table component, define the helper component. Let’s call it RenderTableData:

    function RenderTableData({data}){
        return (
            <>
                {data.map((student, index) => (
                    // A unique key prop keeps React's list rendering predictable.
                    <tr key={index}>
                        <td>{student.name}</td>
                        <td>{student.course}</td>
                    </tr>
                ))}
            </>
        )
    }

    In the RenderTableData component above, the data prop (an array of objects) is mapped over, and each student is rendered as a table row. With the helper component written, update the Table component body from:

    {data}

    to

    {data 
    ? 
        <RenderTableData data={data} />
    :
        <tr>
            <td>No student data available</td>
            <td>No student data available</td>
        </tr>
    }

    The new block of code renders the students’ data with the help of the helper component if any data is present; otherwise, it renders “No student data available”.

    Before moving on to writing a story to test the component, let’s style the table component. Create a stylesheet file in the components folder (we’ll import it into Table.js right after):

    body{
        font-weight: bold;
    }
    table {
        border-collapse: collapse;
        width: 100%;
    }
    table, th, td {
        border: 1px solid rgb(0, 0, 0);
        text-align: left;
    }
    tr:nth-child(even){
        background-color: rgb(151, 162, 211);
        color: black;
    }
    th {
        background-color: rgba(158, 191, 235, 0.925);
        color: white;
    }
    th, td {
        padding: 15px;
    }
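
    For these styles to apply, the stylesheet has to be imported into the component file. Assuming you named it Table.css, the top of Table.js would include:

    import './Table.css';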

    With the styling done, let’s create two stories to test the behaviour of the table component. The first story will have data passed to be rendered and the second won’t.

    You can also style the story differently.

    In your stories folder, create a new file, Table.stories.js. Begin by importing React and the table component, and defining the story:

    import React from 'react';
    import Table from '../components/Table';
    
    export default {
        title: 'Table component',
        component: Table
    }

    With the story defined, create dummy data for the first story:

    const data = [
        {name: 'Abdulazeez Abdulazeez', course: 'Water Resources and Environmental Engineering'},
        {name: 'Albert Einstein', course: 'Physics'},
        {name: 'John Doe', course: 'Estate Management'},
        {name: 'Sigismund Freud', course: 'Neurology'},
        {name: 'Leonhard Euler', course: 'Mathematics'},
        {name: 'Ben Carson', course: 'Neurosurgery'}
    ]

    Next, you’ll write the first story named ShowStudentsData:

    export function ShowStudentsData() {
        return (
            <Table data={data} />
        )
    }

    Next, head to the storybook explorer tab to check the story. Your explorer should look like this:

    this image shows the students data story with some data
    Students data story. (Large preview)

    You have tested the component with data and it renders perfectly. The next story will be to check the behaviour if there’s no data passed.

    Just after the first story, write the second story, EmptyData:

    export function EmptyData(){
        return (
            <Table />
        )
    }

    The story above is expected to render “No student data available”. Head to the storybook explorer to confirm that it renders the accurate message. Your storybook explorer should look like this:

    this image shows the table component with empty data passed
    Empty data story. (Large preview)

    In this section, you have written a table component and a story to test the behaviour. In the next section, you’ll be looking at how to edit data in real time in the storybook explorer using the knobs addon.

    Addons

    Addons in Storybook are extra features that you can optionally implement; these extra features might be necessary for your stories. Storybook provides some core addons, but you can also install and even build addons to fit your use case, such as decorator addons.

    A decorator is a way to wrap a story in extra “rendering” functionality. Many addons define decorators in order to augment your stories with extra rendering or gather details about how your story is rendered.
    Storybook docs

    Adding Knobs Addon To Our Table Story

    The knobs addon is a decorator addon and one of the most used in Storybook. It enables you to change the values (or props) of components without modifying the story function or the component itself.

    In this section, you will be adding the knobs addon to our application. The knobs addon eases the stress of having to update the data in your stories manually by setting up a new panel in the storybook explorer where you can easily change the data passed. Without knobs, you’ll have to go back to manually modifying your data.

    Doing this would be inefficient, and it would defeat the purpose of Storybook, especially in cases where those who have access to the stories do not have access to modify the data in the code.

    The knobs addon doesn’t come installed with storybook, so you will have to install it as an independent package:

    yarn add -D @storybook/addon-knobs

    Once the addon has been installed, register it under the addons array in your stories configuration located in .storybook/main.js.

    module.exports = {
        stories: ['../src/**/*.stories.js'],
        addons: [
            '@storybook/preset-create-react-app',
            '@storybook/addon-actions',
            '@storybook/addon-links',
            '@storybook/addon-knobs' // Add the knobs addon.
        ],
    };

    With the addon registered, you can now go ahead and implement the knobs addon in your table story. Since the student data is an array of objects, you will be using the object knob type.

    Import the decorator and the object functions after the previous imports:

    import { withKnobs, object } from '@storybook/addon-knobs';

    Just after the component field in the default export, add another field:

    decorators: [withKnobs]

    That is, your story definition object should look like this:

    export default {
        title: 'Table component',
        component: Table,
        decorators: [withKnobs]
    }

    The next step is to modify our Table component in the ShowStudentsData story to allow the use of the object knob:

    before:

    <Table data={data}/>

    after:

    <Table data={object('data', data)}/>

    The first parameter of the object function is the label to be displayed in the knobs panel. It can be anything; in this case, you’ll call it data.
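
    Putting it together, the ShowStudentsData story now reads:

    export function ShowStudentsData() {
        return (
            <Table data={object('data', data)} />
        )
    }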

    In your storybook explorer, the knobs bar is now visible:

    this image shows the knobs addon bar where data can be modified in the explorer
    Knobs addon bar. (Large preview)

    You can now add new data, edit existing ones and delete the data without changing the values in the story file directly.
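
    The addon also ships knob types for other kinds of props. For example, the text, boolean and number knobs from the same package work the same way (a label first, then a default value); used inside a story, each returns the current value set in the knobs panel:

    import { text, boolean, number } from '@storybook/addon-knobs';
    
    // Each call returns whatever is currently set in the knobs panel.
    const name = text('Name', 'Jo Doe');
    const loading = boolean('Loading', false);
    const count = number('Count', 5);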

    Conclusion

    In this article, you learned what storybook is all about and built a table component to complement the explanations. Now, you should be able to write and test components on the go using storybook.

    Also, the code used in this article can be found in this GitHub repository.


    web design

    Building Desktop Apps With Electron And Vue — Smashing Magazine

    07/21/2020

    About The Author

    Front-end developer based in Lagos, Nigeria. He enjoys converting designs into code and building things for the web.
    More about
    Timi

    Electron is an open-source software framework developed and maintained by GitHub. It allows for the development of desktop GUI applications using web technologies. In this tutorial, Timi Omoyeni explains what you need to keep in mind when building a desktop application with Vue.js using the Vue CLI Plugin Electron Builder.

    JavaScript used to be known as the language for building websites and web applications, especially with some of its frameworks such as React, Vue, and Angular. But over time (as early as 2009), it became possible for JavaScript to run outside the browser with the emergence of Node.js, an open-source, cross-platform JavaScript runtime environment that executes JavaScript code outside a web browser. This has led to the ability to use JavaScript for a whole lot more than just web applications, one example of which is building desktop applications using Electron.js.

    Electron enables you to create desktop applications with pure JavaScript by providing a runtime with rich native (operating system) APIs. You can see it as a variant of the Node.js runtime that is focused on desktop applications instead of web servers.

    In this tutorial, we’re going to learn how to build desktop applications using Electron, and we’re also going to learn how to use Vue.js to build Electron applications.

    Note: Basic knowledge of Vue.js and the Vue CLI is required to follow this tutorial. All of the code used in this tutorial can be found on my GitHub. Feel free to clone and play around with it!

    What Are Desktop Applications?

    Desktop applications are applications that run stand-alone on desktop or laptop computers. They are applications that perform specific tasks and are installed solely for that purpose.

    An example of a desktop application is Microsoft Word, which is used for creating and typing documents. Other examples of common desktop applications are web browsers, Visual Studio Code, and Adobe Photoshop. Desktop applications are different from web applications in that you have to install a desktop application in order to access and make use of it, and it sometimes doesn’t need internet access to work. Web apps, on the other hand, can be accessed by simply visiting the URL such an app is hosted on, and always need internet access before you can access them.

    Examples of frameworks used in building desktop apps include:

    1. Java
      Java is a general-purpose programming language that is class-based, object-oriented, and designed to have as few implementation dependencies as possible. It is intended to let application developers write once, run anywhere (WORA), meaning that compiled Java code can run on all platforms that support Java without the need for recompilation.
    2. JavaFX
      According to their official documentation, it is an open-source, next-generation client application platform for desktop, mobile, and embedded systems built on Java.
    3. C#
      C# is a general-purpose, multi-paradigm programming language encompassing strong typing, lexical scoping, and imperative, declarative, functional, generic, object-oriented, and component-oriented programming disciplines.
    4. .NET
      .NET is a free, cross-platform, open-source developer platform for building many different types of applications. With .NET, you can use multiple languages, editors, and libraries to build for web, mobile, desktop, gaming, and IoT.

    What Is Electron?

    Electron is an open-source framework for building desktop applications. It was formerly known as ‘Atom Shell’ and is developed and maintained by GitHub. It lets you write cross-platform desktop applications using HTML, CSS, and JavaScript, which means that you can build desktop applications for Windows, macOS, and other platforms using one code base. It is based on Node.js and Chromium. Examples of applications built with Electron include the popular Atom editor, Visual Studio Code, WordPress for desktop, and Slack.

    Installation

    You can install Electron in your project using NPM:

    npm install electron --save-dev

    You can also install it globally, if you’re going to be working with Electron apps a lot, using this command:

    npm install electron -g

    Building Vuejs Apps For Desktop With Electron

    If you’re familiar with building web applications using Vue.js, you can put those same skills to work building desktop applications. All you need for this is the Vue CLI Plugin Electron Builder.

    The Vue CLI Plugin Electron Builder

    This tool allows you to build Vue apps for the desktop with Electron; in other words, it makes your Vue application work as an Electron app. Your Vue application, which is possibly a web application, can therefore be extended to work in desktop environments without the need to build a separate desktop application in another framework. This gives Vue developers the option and power to go beyond the web. Going forward, you can work on that idea you have and give users a desktop application option — one that can run on Windows, macOS, and Linux.

    To see this in action, we’re going to be building a News app using the News API. The application will provide breaking news headlines, and allow you to search for articles from news sources and blogs all over the web with their API. All you need to get started is your personal API key, which can be obtained here.

    We’ll be building a simple app that offers the following:

    1. A page that displays top and breaking headlines from a selected country with the option to choose a country using their /top-headlines endpoint. News API provides news from a list of countries that they support, find the list here.
    2. News from a selected category using a combination of their /everything endpoint and a query parameter q with which we’ll specify our category.

    After getting your API key, we can create our application using the Vue CLI. Ensure you have the Vue CLI installed on your system; if you do not, install it using this command:

    npm install -g @vue/cli
    # OR
    yarn global add @vue/cli

    Once this is done, create your News app using the CLI:

    vue create news-app

    We’ll fetch the data from the News API by using Axios for this tutorial, but you can use any alternative you’re more comfortable with. You can install Axios by using any of the following commands:

    //NPM
    npm install axios
    // YARN
    yarn add axios

    The next step would be to set up an Axios instance for global config in our application. We’re going to create a plugins folder in the src folder, and create an axios.js file inside it. After creating the file, add the following lines of code:

    import axios from "axios";
    let baseURL = `https://newsapi.org/v2`;
    let apiKey = process.env.VUE_APP_APIKEY;
    const instance = axios.create({
        baseURL: baseURL,
        timeout: 30000,
        headers: {
            "X-Api-Key": apiKey,
        },
    });
    export default instance;

    Here, we define our baseURL and our apiKey (which we got from News API), reading the key from the VUE_APP_APIKEY environment variable, and pass them to a new instance of Axios. The instance accepts the baseURL and apiKey together with a timeout property. News API requires you to add your API key when making a request to their API and offers 3 ways to attach it to your request; here, we’re adding it to the X-Api-Key header property, after which we export the instance. Once this is done, we can use this config for all our Axios requests in our app.
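
    For that environment variable to be picked up, it has to live in a .env file at the project root and carry the VUE_APP_ prefix that the Vue CLI exposes to client-side code. A minimal sketch (the key value is a placeholder):

    # .env (at the project root; restart the dev server after editing)
    VUE_APP_APIKEY=your-news-api-key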

    When this is done, you can add the Vue CLI Plugin Electron Builder with the CLI using this command:

    vue add electron-builder

    You’ll be prompted to select your preferred Electron version, I selected version 9.0.0 because it is the latest version of Electron (at the time of writing).

    When this is done, you can now serve your application using this command:

    # With Yarn (strongly recommended):
    yarn electron:serve
    
    # Or with NPM:
    npm run electron:serve

    This will take some time to compile and serve your app. When that is done, your application will pop open on your system, and it should look like this:

    default open state of your electron app
    Auto-open state of your electron app. (Large preview)

    If you close the devtools of your app, it should look like this:

    landing page of your app
    Landing page of your app. (Large preview)

    This electron plugin is super helpful and easy to use because every part of the development of this app works the same way as a Vue app. This means you can have one codebase for both your web application and desktop app. Our app will have three parts:

    1. A landing page that renders top news from a country chosen at random.
    2. A page for rendering top news from the user’s country of choice.
    3. A page that renders top news from a category of the user’s selection.

    For this, we’re going to need a header component for all our nav links. Let’s create a file in the components folder, name it Header.vue (note the capital H, to match the import we’ll write shortly), and add the following lines of code to it:

    <template>
        <header class="header">
            <div class="logo">
                <div class="logo__container">
                    <img src="http://www.smashingmagazine.com/assets/logo.png" alt="News app logo" class="logo__image" />
                </div>
                <h1>News App</h1>
            </div>
            <nav class="nav">
                <h4 class="nav__link">
                    <router-link to="/home">Home</router-link>
                </h4>
                <h4 class="nav__link">
                    <router-link to="/top-news">Top News</router-link>
                </h4>
                <h4 class="nav__link">
                    <router-link to="/categories">News By Category</router-link>
                </h4>
            </nav>
        </header>
    </template>
    <script>
        export default {
            name: "app-header",
        };
    </script>
    <style>
        .header {
            display: flex;
            flex-wrap: wrap;
            justify-content: space-between;
        }
        .logo {
            display: flex;
            flex-wrap: nowrap;
            justify-content: space-between;
            align-items: center;
            height: 50px;
        }
        .logo__container {
            width: 50px;
            height: 50px;
        }
        .logo__image {
            max-width: 100%;
            max-height: 100%;
        }
        .nav {
            display: flex;
            flex-wrap: wrap;
            width: 350px;
            justify-content: space-between;
        }
    </style>

    Here, we create a header component that contains our app name and logo (the image can be found on my GitHub), together with a nav section that contains links to the other parts of our application. The next thing is to import this component into our layout page, App.vue, so we can see our header on every page.

    <template>
        <div id="app">
            <app-header />
            <router-view />
        </div>
    </template>
    <script>
        import appHeader from "@/components/Header.vue";
        export default {
            name: "layout",
            components: {
                appHeader,
            },
        };
    </script>
    <style>
        @import url("https://fonts.googleapis.com/css2?family=Abel&family=Staatliches&display=swap");
        html,
        #app {
            min-height: 100vh;
        }
        #app {
            font-family: "Abel", sans-serif;
            -webkit-font-smoothing: antialiased;
            -moz-osx-font-smoothing: grayscale;
            text-align: center;
            color: #2c3e50;
            background-color: #fff;
        }
        #app h1 {
            font-family: "Staatliches", cursive;
        }
        a {
            font-weight: bold;
            color: #2c3e50;
            text-decoration: none;
        }
        a:hover {
            text-decoration: underline;
        }
        a.router-link-exact-active {
            color: #42b983;
        }
    </style>

    Here, we replace the default content in the template section with our newly created header component after we have imported and declared it in the script section. Finally, we add some styling for the whole app in the style section.

    Now if we try to view our app, it should look like this:

    empty landing page
    Empty landing page. (Large preview)
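
    Note that the nav links above assume routes for /home, /top-news, and /categories. If you didn’t scaffold your project with the Vue Router preset, a minimal router setup could look like the sketch below; the TopNews and Categories view names are assumptions based on the nav links, so adjust them to your project:

    // src/router/index.js (a sketch; adjust view names to your project)
    import Vue from "vue";
    import VueRouter from "vue-router";
    import Home from "../views/Home.vue";
    
    Vue.use(VueRouter);
    
    const routes = [
        { path: "/home", component: Home },
        // Assumed view components, lazy-loaded:
        { path: "/top-news", component: () => import("../views/TopNews.vue") },
        { path: "/categories", component: () => import("../views/Categories.vue") },
    ];
    
    export default new VueRouter({ routes });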

    The next step would be to add content to our Home.vue file. This page will host the first section of our app: top news from a country selected at random. Update your Home.vue file with the following lines of code:

    <template>
      <section class="home">
        <h1>Welcome to News App</h1>
        <h4>Displaying Top News from {{ countryInfo.name }}</h4>
        <div class="articles__div" v-if="articles">
          <news-card
            v-for="(article, index) in articles"
            :key="index"
            :article="article"
          ></news-card>
        </div>
      </section>
    </template>
    <script>
      import { mapActions, mapState } from "vuex";
      import NewsCard from "../components/NewsCard";
      export default {
        data() {
          return {
            articles: "",
            countryInfo: "",
          };
        },
        components: {
          NewsCard,
        },
        mounted() {
          this.fetchTopNews();
        },
        computed: {
          ...mapState(["countries"]),
        },
        methods: {
          ...mapActions(["getTopNews"]),
          async fetchTopNews() {
            let countriesLength = this.countries.length;
            let countryIndex = Math.floor(
              Math.random() * (countriesLength - 1) + 1
            );
            this.countryInfo = this.countries[countryIndex];
            let { data } = await this.getTopNews(
              this.countries[countryIndex].value
            );
            this.articles = data.articles;
          },
        },
      };
    </script>
    <style>
      .articles__div {
        display: flex;
        flex-wrap: wrap;
        justify-content: center;
      }
    </style>

    In the script section of this file, we import mapState and mapActions from Vuex, which we’ll be using later on in this file. We also import a NewsCard component (we’ll create this next) that renders all the news headlines on this page. We then make use of the fetchTopNews method to fetch the latest news from a country selected at random from the array of countries in our store. This country is passed to our getTopNews action and appended to the baseURL as a query parameter, like so: baseURL/top-headlines?country=${randomCountry}. Once the data comes back, we loop through it and pass each article to the article prop of our NewsCard component in the template section. We also have a paragraph that indicates which country the top news is from.

    The next thing would be to set up our NewsCard component that will display this news. Create a new file inside your components folder, name it NewsCard.vue, and add the following lines of code to it:

    <template>
      <section class="news">
        <div class="news__section">
          <h1 class="news__title">
            <a class="article__link" :href="article.url" target="_blank">
              {{ article.title }}
            </a>
          </h1>
          <h3 class="news__author" v-if="article.author">{{ article.author }}</h3>
          <!-- <p class="article__paragraph">{{ article.description }}</p> -->
          <h5 class="article__published">{{ new Date(article.publishedAt) }}</h5>
        </div>
        <div class="image__container">
          <img
            class="news__img"
            src="http://www.smashingmagazine.com/assets/logo.png"
            :data-src="article.urlToImage"
            :alt="article.title"
          />
        </div>
      </section>
    </template>
    <script>
      export default {
        name: "news-card",
        props: {
          article: Object,
        },
        mounted() {
          this.lazyLoadImages();
        },
        methods: {
          lazyLoadImages() {
            const images = document.querySelectorAll(".news__img");
            const options = {
              // If the image gets within 50px in the Y axis, start the download.
              root: null, // Page as root
              rootMargin: "0px",
              threshold: 0.1,
            };
            const fetchImage = (url) => {
              return new Promise((resolve, reject) => {
                const image = new Image();
                image.src = url;
                image.onload = resolve;
                image.onerror = reject;
              });
            };
            const loadImage = (image) => {
              const src = image.dataset.src;
              fetchImage(src).then(() => {
                image.src = src;
              });
            };
            const handleIntersection = (entries) => {
              entries.forEach((entry) => {
                if (entry.intersectionRatio > 0) {
                  loadImage(entry.target);
                }
              });
            };
            // The observer for the images on the page
            const observer = new IntersectionObserver(handleIntersection, options);
            images.forEach((img) => {
              observer.observe(img);
            });
          },
        },
      };
    </script>
    <style>
      .news {
        width: 100%;
        display: flex;
        flex-direction: row;
        align-items: flex-start;
        max-width: 550px;
        box-shadow: 2px 1px 7px 1px #eee;
        padding: 20px 5px;
        box-sizing: border-box;
        margin: 15px 5px;
        border-radius: 4px;
      }
      .news__section {
        width: 100%;
        max-width: 350px;
        margin-right: 5px;
      }
      .news__title {
        font-size: 15px;
        text-align: left;
        margin-top: 0;
      }
      .news__author {
        font-size: 14px;
        text-align: left;
        font-weight: normal;
      }
      .article__published {
        text-align: left;
      }
      .image__container {
        width: 100%;
        max-width: 180px;
        max-height: 180px;
      }
      .news__img {
        transition: max-width 300ms cubic-bezier(0.4, 0, 1, 1),
          max-height 300ms cubic-bezier(0.4, 0, 1, 1);
        max-width: 150px;
        max-height: 150px;
      }
      .news__img:hover {
        max-width: 180px;
        max-height: 180px;
      }
      .article__link {
        text-decoration: none;
        color: inherit;
      }
    </style>

    Here, we display data passed into this component using the article object prop. We also have a method that lazy loads the images attached to each article. This method loops through the number of article images we have and lazy loads them when they become visible. Finally, we have styles targeted at this component in the style section.

    The next thing will be to set up our store so we can start getting the latest news. Add the following lines of code to your store’s index.js file:

    import Vue from "vue";
    import Vuex from "vuex";
    import axios from "../plugins/axios";
    Vue.use(Vuex);
    const store = new Vuex.Store({
        state: {
            countries: [{
                    name: "United States of America",
                    value: "us",
                },
                {
                    name: "Nigeria",
                    value: "ng",
                },
                {
                    name: "Argentina",
                    value: "ar",
                },
                {
                    name: "Canada",
                    value: "ca",
                },
                {
                    name: "South Africa",
                    value: "za",
                },
            ],
            categories: [
                "entertainment",
                "general",
                "health",
                "science",
                "business",
                "sports",
                "technology",
            ],
        },
        mutations: {},
        actions: {
            async getTopNews(context, country) {
                let res = await axios({
                    url: `/top-headlines?country=${country}`,
                    method: "GET",
                });
                return res;
            },
        },
    });
    export default store;

    We are adding two properties to our store. The first is countries, which contains an array of country objects. We also have the categories property; this contains an array of the available categories on the News API. Users will want the freedom to view the top news from specific countries and categories; this data is also needed in more than one part of the app, which is why we’re making use of the store. In the actions section of our store, we have a getTopNews method that fetches the top news of a country (the country is passed from the component that dispatched this action).

    At this point, if we open our app, we should see our landing page that looks like this:

    Updated landing page
    Updated landing page. (Large preview)

    The background.js file

    This file is the entry point for Electron into your app. It controls all the Desktop app-like settings for this app. The default state of this file can be found on my GitHub.

    In this file, we have some predefined configurations set for the app such as the default height and width for your app. Let’s take a look at some of the things you can do in this file.

    Enabling the Vuejs devtools

    By default, you have access to the dev tools in Electron, but the Vue.js devtools extension is not enabled after installation. This is a result of an existing bug on Windows 10, so if you open your background.js file, you will find some commented-out code with comments that state why it’s commented out:

    // Install Vue Devtools
    // Devtools extensions are broken in Electron 6.0.0 and greater
    // See https://github.com/nklayman/vue-cli-plugin-electron-builder/issues/378 for more info
    // Electron will not launch with Devtools extensions installed on Windows 10 with dark mode
    // If you are not using Windows 10 dark mode, you may uncomment these lines
    // In addition, if the linked issue is closed, you can upgrade electron and uncomment these lines
    // try {
    //   await installVueDevtools()
    // } catch (e) {
    //   console.error('Vue Devtools failed to install:', e.toString())
    // }

    So if you’re not affected by this bug, you can uncomment the try/catch block, then search for installVueDevtools in this same file (line 5) and uncomment it as well. Once this is done, your app will automatically restart, and when you check your dev tools, you should see the Vue.js devtools.

    Vuejs in devtools
    Vuejs in devtools. (Large preview)
    Selecting A Custom Icon For Your App

    By default, the Electron icon is set as the icon for your app, and most of the time you’d probably like to set your own custom icon. To do this, move your icon into your public folder and rename it to icon.png. The next thing to do is add the required dependency, electron-icon-builder.

    You can install it using any of the following commands:

    // With Yarn:
    yarn add --dev electron-icon-builder
    // or with NPM:
    npm install --save-dev electron-icon-builder

    Once this is done, you can run the icon generation command. It will convert your icon into Electron format and print the following in your console when it’s done.
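
    The exact invocation isn’t shown here; assuming the flags documented in the electron-icon-builder README, it would look something like this (adjust the paths to your project):

    # A sketch based on the electron-icon-builder README.
    npx electron-icon-builder --input=./public/icon.png --output=build --flatten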

    generated info in terminal
    Generated info in terminal. (Large preview)

    The next thing is to set the icon option in the background.js file. This option goes inside the options of BrowserWindow, which is imported from Electron. To do this, update BrowserWindow to look like this:

    // Add this to the top of your file
    /* global __static */
    // import path
    import path from 'path'
    
    // Replace
    win = new BrowserWindow({ width: 800, height: 600 })
    // With
    win = new BrowserWindow({
      width: 800,
      height: 600,
      icon: path.join(__static, 'icon.png')
    })

    Now if we run yarn run electron:build and view our app, we should see the updated icon being used as the app icon, though it doesn’t change in development. This issue describes a manual fix for it on macOS.

    Setting Title For Your App

    You will notice that the title of your app is set to the app name (news-app in this case), and we’ll need to change it. To do that, we have to add a title property to the BrowserWindow options in our background.js file, like this:

    win = new BrowserWindow({
            width: 600,
            height: 500,
            title: "News App",
            icon: path.join(__static, "icon.png"),
            webPreferences: {
                // Use pluginOptions.nodeIntegration, leave this alone
                // See nklayman.github.io/vue-cli-plugin-electron-builder/guide/security.html#node-integration for more info
                nodeIntegration: process.env.ELECTRON_NODE_INTEGRATION,
            },
        });

    Here, we’re setting the title of our app to ‘News App’. But if your index.html file has a title set, or if your title doesn’t change, try adding this code to your file:

    win.on("page-title-updated", (event) => event.preventDefault());

    We’re listening for an event that gets fired when our title is updated from BrowserWindow. When this event is fired, we’re telling Electron not to update the title with the one found in the index.html file.

    Another thing that might be worth changing is productName. This controls the name that appears when you hover over your app’s icon, and what your computer recognizes the app as. Right now, the name of our app is Electron. To change this name in production, create a vue.config.js file and add the following lines of code to it:

    module.exports = {
        pluginOptions: {
            electronBuilder: {
                builderOptions: {
                    productName: "News App",
                },
            },
        },
    };

    Here, we define productName to be ‘News App’ so that when we run the build command for our app, the name changes from ‘Electron’ to ‘News App’.

    Multi Platform Build

    By default, when you run the build command, the app that gets created depends on the platform it is being run on. This means that if you run the build command on Linux, the app that gets created will be a Linux desktop app. The same also applies to the other platforms (macOS and Windows). But Electron comes with the option to specify a platform (or two) that you want to build for. The available options are:

    1. mac
    2. win
    3. linux

    So in order to build the Windows version of your app, run the following command:

    // NPM
    npm run electron:build -- --win nsis
    // YARN
    yarn electron:build --win nsis

    Conclusion

    The completed application can be found on my GitHub. The official Electron documentation provides information and a guide that helps you customize your desktop app whichever way you want. Some of the things I tried out but aren’t included in this tutorial are:

    1. Customizing your dock on macOS — https://www.electronjs.org/docs/tutorial/macos-dock.
    2. Setting resizeable, maximizable, and many more — https://github.com/electron/electron/blob/master/docs/api/browser-window.md#new-browserwindowoptions.

    So if you’re looking to do much more with your Electron application, their official docs are a good place to start.

    References
    
    1. Node.js: https://en.wikipedia.org/wiki/Node.js
    2. Java (programming language): https://en.wikipedia.org/wiki/Java_(programming_language)
    3. Electron (software framework)
    4. JavaFX 14
    5. electronjs
    6. Electron Documentation
    7. Vue CLI Plugin Electron Builder
    8. “Lazy Loading Images for Performance Using Intersection Observer” by Chris Nwamba
    9. axios
    10. “Getting Started With Axios In Nuxt” by Timi Omoyeni: https://www.smashingmagazine.com/2020/05/getting-started-axios-nuxt/