
    UI Design Testing Tools I Use All The Time — Smashing Magazine

    03/05/2021

    About The Author

    Paul is a leader in conversion rate optimisation and user experience design thinking. He has over 25 years experience working with clients such as Doctors …

    Our lives as UI designers have never been easier, with a host of amazing tools at our disposal. In this article, Paul Boag explores some of the useful tools that he keeps close at hand in his work.

    When I started in web design 27 years ago, testing with users was time-consuming and expensive, but a new generation of tools has changed all of that. Most of us have heard of some of the more popular tools such as Userzoom or Hotjar, but in this post, I want to explore some of the hidden gems I use to test the interfaces I am involved in creating.

    Please note that I’m by no means affiliated with any tools mentioned here — I just use them all the time. Hopefully, they prove useful to you as well.

    We begin at the very start of a project with user research.

    Run Surveys With Survicate

    User research is vital, especially when it comes to identifying problems with an existing website. As a result, I almost always survey users early on in a redesign process.

    Although both Usability Hub and Maze allow me to create surveys, the functionality they offer is relatively limited for my taste, and it is a bit difficult to embed surveys on your website. That is a shame because exit-intent surveys can be a powerful way to gain insights into why users are abandoning a website or failing to act.
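    For context, an exit-intent trigger is usually just a listener that fires when the cursor leaves the viewport through the top edge. A minimal sketch (with a hypothetical showExitSurvey() placeholder standing in for whatever survey widget you embed) might look like this:

    let surveyShown = false;

    // Hypothetical placeholder for your survey widget (Survicate, Qualaroo, etc.).
    const showExitSurvey = () => console.log('Show exit-intent survey');

    document.addEventListener('mouseout', (event) => {
      // relatedTarget is null when the pointer leaves the document entirely;
      // clientY <= 0 means it left through the top (towards the URL bar).
      const leavingViewport = !event.relatedTarget && event.clientY <= 0;

      if (leavingViewport && !surveyShown) {
        surveyShown = true;
        showExitSurvey();
      }
    });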

    One tool I have found useful for running user research surveys is Qualaroo, which does an excellent job. Unfortunately, it can prove a bit expensive in some situations, so if you are looking for an alternative, you might want to check out Survicate instead.

    Survicate
    Survicate is ideal for surveying users to gain insights into their actions. (Large preview)

    Survicate offers both website surveys and the ability to send a survey via email or social media. It even allows you to add a permanent feedback button on your site if you want.

    Testing Visuals with Usability Hub

    When it comes to testing, most of the testing I carry out is nearer the start of a project than the end. That is when it is easiest to make changes, so I often test with nothing more than a design concept. At that stage, I don’t even have a clickable prototype, so I use Usability Hub.

    Usability Hub
    Usability Hub is one of the useful tools for testing design concepts. (Large preview)

    Usability Hub is a simple app that supports quick tests such as first-click tests, five-second tests, preference tests, and short design surveys.

    It is an excellent way of addressing stakeholder concerns about a design concept. I can get results from testing within an hour, and Usability Hub will even handle participant recruitment if I want.

    “Spell Check” Your Designs with Attention Insights

    Once I start to design an interface, there is a lot to consider from messaging and aesthetics to visual hierarchy and calls to action. As a result, it can be all too easy to end up with the wrong focus on a page. If I am not careful, I lead the user’s attention in the wrong direction, and they miss critical content or calls to action.

    Although the best way of checking this is usability testing, sometimes I want a quick sanity check that I am heading in the right direction. Attention Insights takes thousands of hours of eye-tracking studies and uses that data to predict where users might look on a design.

    Attention Insights
    Attention Insights uses thousands of hours of eye-tracking studies to predict where somebody will look when viewing your design. (Large preview)

    Although it is not as accurate as a real eye-tracking study, it acts a little like a spelling or grammar checker does for your copy: it can flag potential issues and help you make a judgment call.

    Modernize Your Card Sorting with UXOps

    When it comes time to work on a website’s information architecture, I almost always turn to UXOps.

    UXOps
    UXOps is a friendly tool for running remote card sorting exercises. (Large preview)

    Like the more well-established OptimalSort, UXOps allows you to run card sorting exercises online to ensure your site reflects users’ mental models. If I am being perfectly honest, I prefer UXOps because it is a bit more affordable and focuses on a single task. I have also found it a very easy tool for participants to understand, and for me to interpret the data afterwards.

    Remote and Unfacilitated Testing with Lookback

    When it comes to usability testing, we are all likely to explore remote testing these days. It is actually more convenient than in-person testing, not to mention that it has allowed me to continue testing throughout the pandemic! Although this can be done using an app like Zoom, I personally prefer a tool called Lookback.

    Lookback
    Lookback streamlines usability testing, especially when carried out remotely and unfacilitated. (Large preview)

    I love Lookback because it has been optimized for usability testing with features such as note-taking, in-app editing of video, and automatically recording the user’s screen and webcam. However, where Lookback really shines is that it allows unfacilitated usability testing.

    Unfacilitated testing is a real boon when your time is tight, and you want to test with lots of people. With Lookback, I send participants a link, and the app will guide them through the process without the need for me to moderate the sessions.

    Quantify Your Testing Using Maze

    I like to test with more users the nearer a site gets to going live. While qualitative testing is great early on, I am more interested in understanding how the site will operate at scale as we near launch. Unfortunately, analyzing a large number of unfacilitated test sessions can prove time-consuming. That is where Maze can prove invaluable.

    Maze
    Maze can aggregate data from your usability sessions into quantitative data. (Large preview)

    Maze has a wealth of tools that are useful for all kinds of usability testing. However, its real strength lies in its ability to aggregate data. This means that instead of having to watch each session, you can get quantitative data such as:

    • The number of users who gave up.
    • Whether users took the most direct route or an indirect one to complete a task.
    • Heat maps of any misclicks users made.
    • How long it took people to complete the task.

    Combined with its overall flexibility, Maze is an excellent all-round choice at a manageable price, no matter your budget.

    Find Test Participants with Testing Time

    As I am sure you know, one of the biggest pains with usability testing is recruitment. Although apps like Maze, Usability Hub and Lookback all offer the option to recruit participants for you, they come with some limits regarding the people you reach.

    When I need to recruit a particular type of person, I tend to use a service like Testing Time, if I cannot recruit people myself. That is because Testing Time allows me a lot more control over the type of person I get.

    Testing Time
    Testing Time provides you with all you need to find, manage and pay testing participants. (Large preview)

    Testing Time does not just help me with recruitment. It also provides tools for screening potential candidates, managing their tests, and paying them afterwards.

    Gather Data with Microsoft Clarity

    Once my new design is finally launched, my attention shifts to monitoring and improving those designs. I do this by watching how site visitors are behaving and identifying any issues they are encountering. The two tools I use to identify and diagnose problems with a site are heat map monitoring and session recorders.

    The most well-known tool in this field is Hotjar, although Fullstory has superior tools in many ways. If you are looking for a slightly more affordable alternative, Microsoft has released a free competitor called Clarity, which gives you the ability to watch individual sessions, see scroll heat maps, and view visualizations of where people are clicking on pages.

    Microsoft Clarity
    Microsoft Clarity provides heat maps on user behaviour and session recording for free. (Large preview)

    Visualize Your Research with Evolt

    Of course, I rarely get to make arbitrary decisions about the direction of a site. There are almost always other stakeholders to win over. To do that I need to communicate the research and testing I have undertaken, and that is where Evolt comes in. Evolt helps me visualize my research, but it doesn’t stop there.

    Evolt
    Evolt, a little helpful tool to visualize your research. (Large preview)

    It is actually the ideal tool for working on user personas, journey maps and even moodboards with your stakeholders. Miro can be great for these kinds of tasks as well, and it’s often used for the same purpose, but in my personal experience Evolt seems to be optimized specifically for designers.

    No Excuse

    With so many great tools available, there really shouldn’t be any excuse for not testing with users these days. It is fast, easy and cheap. But we don’t even need to limit ourselves to testing. These tools also make user research and visualization easier than ever before, making them ideal all the way from discovery through prototype to post-launch optimization.

    But these are just the tools I make use of. There’s no doubt that you use tools that are not included in the list. If so, please post them in the comments below — I’d love to hear your stories, and the tools that you find useful in your work!



    Creating An Outside Focus And Click Handler React Component — Smashing Magazine

    03/03/2021

    About The Author

    Arihant Verma is a Software Engineer based in India. He likes to read open source code and help others understand it. He’s a rich text editors fanatic. His …

    In this article, we’ll look at how to create an outside focus and click handler with React. You’ll learn how to recreate an open-source React component (react-foco) from scratch in doing so. To get the most out of this article, you’ll need a basic understanding of JavaScript classes, DOM event delegation and React. By the end of the article, you’ll know how you can use JavaScript class instance properties and event delegation to create a React component that helps you detect a click or focus outside of any React component.

    Oftentimes we need to detect when a click has happened outside of an element or when the focus has shifted outside of it. Some of the evident examples for this use case are fly-out menus, dropdowns, tooltips and popovers. Let’s start the process of making this detection functionality.

    The DOM Way To Detect Outside Click

    If you were asked to write code to detect if a click happened inside a DOM node or outside of it, what would you do? Chances are you’d use the Node.contains DOM API. Here’s how MDN explains it:

    The Node.contains() method returns a Boolean value indicating whether a node is a descendant of a given node, i.e. the node itself, one of its direct children (childNodes), one of the children’s direct children, and so on.

    Let’s quickly test it out. Let’s make an element we want to detect outside click for. I’ve conveniently given it a click-text class.

    <section>
      <div class="click-text">
        click inside and outside me
      </div>
    </section>
    const concernedElement = document.querySelector(".click-text");
    
    document.addEventListener("mousedown", (event) => {
      if (concernedElement.contains(event.target)) {
        console.log("Clicked Inside");
      } else {
        console.log("Clicked Outside / Elsewhere");
      }
    });

    We did the following things:

    1. Selected the HTML element with the class click-text.
    2. Put a mousedown event listener on document and set an event handler callback function.
    3. In the callback function, we check whether our concerned element — the one for which we have to detect outside clicks — contains the element (including itself) that triggered the mousedown event (event.target).

    If the element that triggered the mousedown event is either our concerned element or any element inside it, it means we have clicked inside our concerned element.

    Let’s click inside and outside of the element in the Codesandbox below, and check the console.

    Wrapping DOM Hierarchy Based Detection Logic In A React Component

    Great! So far we saw how to use DOM’s Node.contains API to detect click outside of an element. We can wrap that logic in a React component. We could name our new React component OutsideClickHandler. Our OutsideClickHandler component will work like this:

    <OutsideClickHandler
      onOutsideClick={() => {
        console.log("I am called whenever click happens outside of 'AnyOtherReactComponent' component")
      }}
    >
      <AnyOtherReactComponent />
    </OutsideClickHandler>

    OutsideClickHandler takes in two props:

    1. children
      It could be any valid React children. In the example above we are passing AnyOtherReactComponent component as OutsideClickHandler’s child.

    2. onOutsideClick
      This function will be called if a click happens anywhere outside of AnyOtherReactComponent component.

    Sounds good so far? Let’s actually start building our OutsideClickHandler component.

    import React from 'react';
    
    class OutsideClickHandler extends React.Component {
      render() {
        return this.props.children;
      }
    }

    Just a basic React component. So far, we are not doing much with it. We’re just returning the children as they are passed to our OutsideClickHandler component. Let’s wrap the children with a div element and attach a React ref to it.

    import React, { createRef } from 'react';
    
    class OutsideClickHandler extends React.Component {
      wrapperRef = createRef();
    
      render() {    
        return (
          <div ref={this.wrapperRef}>
            {this.props.children}
          </div>
        )
      }  
    }

    We’ll use this ref to get access to the DOM node object associated with the div element. Using that, we’ll recreate the outside detection logic we made above.

    Let’s attach a mousedown event listener to document inside the componentDidMount React lifecycle method, and clean it up inside the componentWillUnmount lifecycle method.

    class OutsideClickHandler extends React.Component {
      componentDidMount() {
        document
          .addEventListener('mousedown', this.handleClickOutside);
      }
    
      componentWillUnmount(){
        document
          .removeEventListener('mousedown', this.handleClickOutside);
      }
    
      handleClickOutside = (event) => {
        // Here, we'll write the same outside click
        // detection logic as we used before.
      }
    }

    Now, let’s write the detection code inside handleClickOutside handler function.

    class OutsideClickHandler extends React.Component {
      componentDidMount() {
        document
          .addEventListener('mousedown', this.handleClickOutside);
      }
    
      componentWillUnmount(){
        document
          .removeEventListener('mousedown', this.handleClickOutside);
      }
    
      handleClickOutside = (event) => {
        if (
          this.wrapperRef.current &&
          !this.wrapperRef.current.contains(event.target)
        ) {
          this.props.onOutsideClick();
        }
      }
    }

    The logic inside handleClickOutside method says the following:

    If the DOM node that was clicked (event.target) was neither our container div (this.wrapperRef.current) nor was it any node inside of it (!this.wrapperRef.current.contains(event.target)), we call the onOutsideClick prop.

    This should work in the same way as the outside click detection had worked before. Let’s try clicking outside of the grey text element in the codesandbox below, and observe the console:

    The Problem With DOM Hierarchy Based Outside Click Detection Logic

    But there’s one problem. Our React component doesn’t work if any of its children are rendered in a React portal.

    But what are React portals?

    “Portals provide a first-class way to render children into a DOM node that exists outside the DOM hierarchy of the parent component.”

    React docs for portals
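    As a quick, simplified sketch (not code from the article), a portal is created with ReactDOM.createPortal, which renders children into an arbitrary DOM node. Here, tooltip-root is a hypothetical element that lives outside the app’s root:

    import ReactDOM from 'react-dom';

    function Tooltip({ children }) {
      // Renders children into #tooltip-root, a DOM node outside the
      // parent component's DOM hierarchy (assumed to exist in the HTML).
      return ReactDOM.createPortal(
        children,
        document.getElementById('tooltip-root')
      );
    }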

    Image showing that React children rendered in React portal do not follow top down DOM hierarchy.
    React children rendered in React portal do not follow top down DOM hierarchy. (Large preview)

    In the image above, you can see that though the Tooltip React component is a child of the Container React component, if we inspect the DOM we find that the Tooltip DOM node actually resides in a completely separate DOM structure, i.e. it’s not inside the Container DOM node.

    The problem is that, in our outside detection logic so far, we are assuming that the children of OutsideClickHandler will be its direct descendants in the DOM tree, which is not the case for React portals. If the children of our component render in a React portal — which is to say they render in a separate DOM node, outside the hierarchy of the container div in which our OutsideClickHandler component renders its children — then the Node.contains logic fails.

    How would it fail, though? If you try to click on the children of our OutsideClickHandler component — the ones rendered in a separate DOM node using React portals — our component will register an outside click, which it shouldn’t. See for yourself:

    GIF image showing that if a React child rendered in a React portal is clicked, OutsideClickHandler, which uses Node.contains, wrongly registers it as an outside click.
    Using Node.contains to detect outside clicks of a React component gives the wrong result for children rendered in a React portal. (Large preview)

    Try it out:

    Even though the popover that opens on clicking the button is a child of the OutsideClickHandler component, the handler fails to detect that the popover isn’t outside of it, and closes it when the popover is clicked.

    Using Class Instance Property And Event Delegation To Detect Outside Click

    So what could be the solution? We surely can’t rely on the DOM to tell us if the click is happening outside. We’ll have to do something in JavaScript by rewriting our OutsideClickHandler implementation.

    Let’s start with a blank slate. So at this moment OutsideClickHandler is an empty React class.

    The crux of correctly detecting outside click is:

    1. To not rely on DOM structure.
    2. To store the ‘clicked’ state somewhere in the JavaScript code.

    This is where event delegation will come to our aid. Let’s take the same button and popover example we saw in the GIF above.

    Our OutsideClickHandler component has two children: a button, and a popover that gets rendered in a portal outside of the DOM hierarchy of OutsideClickHandler when the button is clicked, like so:

    Diagram showing the hierarchy of document, the OutsideClickHandler React component and its children rendered in a React portal.
    DOM hierarchy of document, the OutsideClickHandler React component and its children rendered in a React portal. (Large preview)
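    To make that structure concrete, the setup could be sketched roughly like this, using the onClickOutside prop that the rewritten component will accept (the component names and the portal-root element are illustrative, not the article’s exact demo code):

    import React, { useState } from 'react';
    import ReactDOM from 'react-dom';

    // OutsideClickHandler is the component we are about to build below.
    function ButtonWithPopover() {
      const [isOpen, setIsOpen] = useState(false);

      return (
        <OutsideClickHandler onClickOutside={() => setIsOpen(false)}>
          <button onClick={() => setIsOpen(true)}>Open popover</button>

          {isOpen &&
            ReactDOM.createPortal(
              // A child of OutsideClickHandler in the React tree,
              // but a sibling DOM tree in the document.
              <div className="popover">I render in a portal</div>,
              document.getElementById('portal-root')
            )}
        </OutsideClickHandler>
      );
    }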

    When either of our children is clicked, we set a variable clickCaptured to true. If anything outside of them is clicked, the value of clickCaptured will remain false.

    We will store clickCaptured’s value in:

    1. A class instance property, if you are using a class react component.
    2. A ref, if you are using a functional React component.

    We aren’t using React state to store clickCaptured’s value because we aren’t rendering anything based on it. The purpose of clickCaptured is ephemeral, and it ends as soon as we’ve detected whether the click happened inside or outside.

    Let’s see in the image below the logic for setting clickCaptured:

    Diagram showing setting of clickCaptured to true variable when children of OutsideClickHandler component are clicked.
    When any of the children of OutsideClickHandler component are clicked we set clickCaptured to true. (Large preview)

    Whenever a click happens anywhere, it bubbles up in React by default. It’ll reach the document eventually.

    Diagram showing the value of the clickCaptured variable when the mousedown event bubbles up to document, for both inside and outside click cases.
    Value of the clickCaptured variable when the mousedown event bubbles up to document, for both inside and outside click cases. (Large preview)

    When the click reaches document, there are two things that might have happened:

    1. clickCaptured will be true if the children were clicked.
    2. clickCaptured will be false if anything outside of them was clicked.

    In the document’s event listener we will do two things now:

    1. If clickCaptured is false, we fire the outside click handler that the user of OutsideClickHandler might have given us through a prop.
    2. We reset clickCaptured to false, so that we are ready for another click detection.

    Diagram showing the detection of whether a click happened inside or outside of a React component by checking clickCaptured’s value when the mousedown event reaches document.
    Detecting whether a click happened inside or outside of a React component by checking clickCaptured’s value when the mousedown event reaches document. (Large preview)

    Let’s translate this into code.

    import React from 'react'
    
    class OutsideClickHandler extends React.Component {
      clickCaptured = false;
      
      render() {
        if ( typeof this.props.children === 'function' ) {
          return this.props.children(this.getProps())
        }
    
        return this.renderComponent()
      }
    }

    We have done the following things here:

    1. Set the initial value of the clickCaptured instance property to false.
    2. In the render method, we check whether the children prop is a function. If it is, we call it and pass it all the props we want to give it by calling the getProps class method. We haven’t implemented getProps just yet.
    3. If the children prop is not a function, we call the renderComponent method. Let’s implement this method now.

    class OutsideClickHandler extends React.Component {
      renderComponent() {
        return React.createElement(
          this.props.component || 'span',
          this.getProps(),
          this.props.children
        )
      }
    }

    Since we aren’t using JSX, we are directly using React’s createElement API to wrap our children in either this.props.component or a span. this.props.component can be a React component or any HTML element tag name, like ‘div’, ‘section’, and so on. We pass all the props that we want to give to our newly created element by calling the getProps class method as the second argument.

    Let’s write the getProps method now:

    class OutsideClickHandler extends React.Component {
      getProps() {
        return {
          onMouseDown: this.innerClick,
          onTouchStart: this.innerClick
        };
      }
    }

    Our newly created React element will have the following props passed down to it: onMouseDown and onTouchStart for touch devices. The value of both is the innerClick class method.

    class OutsideClickHandler extends React.Component {
      innerClick = () => {
        this.clickCaptured = true;
      }
    }

    If our new React component or anything inside of it — which could be a React portal — is clicked, we set the clickCaptured class instance property to true. Now, let’s add the mousedown and touchstart events to the document, so that we can capture the event that is bubbling up from below.

    class OutsideClickHandler extends React.Component {
      componentDidMount(){
        document.addEventListener('mousedown', this.documentClick);
        document.addEventListener('touchstart', this.documentClick);
      }
    
      componentWillUnmount(){
        document.removeEventListener('mousedown', this.documentClick);
        document.removeEventListener('touchstart', this.documentClick);
      }
    
      documentClick = (event) => {
        if (!this.clickCaptured && this.props.onClickOutside) {
          this.props.onClickOutside(event);
        }
        this.clickCaptured = false;
      };
    }

    In the document mousedown and touchstart event handlers, we are checking if clickCaptured is falsy.

    1. clickCaptured will only be true if the children of our React component have been clicked.
    2. If anything else has been clicked, clickCaptured will be false, and we’ll know that an outside click has happened.

    If clickCaptured is falsy, we’ll call the onClickOutside method passed down in a prop to our OutsideClickHandler component.

    That’s it! Let’s confirm that if we click inside the popover it doesn’t get closed now, as it was before:

    GIF Image showing that if a React child rendered in React portal is clicked, OutsideClickHandler component, which uses event delegation, correctly registers it as inside click, and not outside click.
    Using event delegation logic correctly detects outside click, even if children are rendered in a React portal. (Large preview)

    Let’s try it out:

    Wonderful!
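    As an aside, the same idea carries over to a functional component: as noted earlier, the clickCaptured flag would live in a ref rather than an instance property. A rough hook-based sketch (my own approximation, not the react-foco source) could look like this:

    import React, { useEffect, useRef } from 'react';

    function OutsideClickHandler({ onClickOutside, children }) {
      // A ref plays the role of the class instance property:
      // mutating it does not trigger a re-render.
      const clickCaptured = useRef(false);

      useEffect(() => {
        const documentClick = (event) => {
          if (!clickCaptured.current && onClickOutside) {
            onClickOutside(event);
          }
          clickCaptured.current = false;
        };

        document.addEventListener('mousedown', documentClick);
        document.addEventListener('touchstart', documentClick);

        return () => {
          document.removeEventListener('mousedown', documentClick);
          document.removeEventListener('touchstart', documentClick);
        };
      }, [onClickOutside]);

      const innerClick = () => {
        clickCaptured.current = true;
      };

      return (
        <span onMouseDown={innerClick} onTouchStart={innerClick}>
          {children}
        </span>
      );
    }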

    Outside Focus Detection

    Now let’s take it a step further. Let’s also add the functionality to detect when focus has shifted outside of a React component. It’s going to be a very similar implementation to the one we used for click detection. Let’s write the code.

    class OutsideClickHandler extends React.Component {
      // 1. Add a flag for focus, mirroring clickCaptured.
      focusCaptured = false;

      innerFocus = () => {
        this.focusCaptured = true;
      }

      componentDidMount(){
        document.addEventListener('mousedown', this.documentClick);
        document.addEventListener('touchstart', this.documentClick);
        document.addEventListener('focusin', this.documentFocus);
      }

      componentWillUnmount(){
        document.removeEventListener('mousedown', this.documentClick);
        document.removeEventListener('touchstart', this.documentClick);
        document.removeEventListener('focusin', this.documentFocus);
      }

      documentFocus = (event) => {
        if (!this.focusCaptured && this.props.onFocusOutside) {
          this.props.onFocusOutside(event);
        }
        this.focusCaptured = false;
      };

      // 2. Pass onFocus down alongside the mouse and touch handlers.
      getProps() {
        return {
          onMouseDown: this.innerClick,
          onTouchStart: this.innerClick,
          onFocus: this.innerFocus
        };
      }
    }

    Everything is added in mostly the same fashion, except for one thing. You might have noticed that though we are adding an onFocus React event handler to our children, we are setting a focusin event listener on the document. Why not a focus event, you say? Because, 🥁🥁🥁, starting from v17, React maps the onFocus React event to the focusin native event internally.

    In case you are using v16 or earlier, instead of adding a focusin event handler to the document, you’ll have to add a focus event listener in the capture phase. So that’ll be:

    document.addEventListener('focus', this.documentFocus, true);

    Why in capture phase you might ask? Because as weird as it is, focus event doesn’t bubble up.

    Since I’m using v17 in all my examples, I’m going to go ahead and use the former. Let’s see what we have here:

    GIF image showing correct detection of outside click and focus by the React Foco component, which uses event delegation detection logic.
    The React Foco component correctly detecting outside click and focus by using event delegation detection logic. (Large preview)

    Let’s try it out ourselves: try clicking inside and outside of the pink background. Also use the Tab and Shift + Tab keys (in Chrome, Firefox and Edge) or Opt/Alt + Tab and Opt/Alt + Shift + Tab (in Safari) to toggle focus between the inner and outer buttons, and see how the focus status changes.

    Conclusion

    In this article, we learned that the most straightforward way to detect a click outside of a DOM node in JavaScript is by using the Node.contains DOM API. I explained why the same method doesn’t work for detecting clicks outside of a React component when that component has children which render in a React portal. We also saw how to use a class instance property alongside event delegation to correctly detect whether a click happened outside of a React component, and how to extend the same detection technique to outside focus detection with the focusin event caveat.

    1. React Foco GitHub repository
    2. MDN documentation for the Node.contains DOM API
    3. React docs for portals
    4. React createElement API
    5. React GitHub pull request for mapping the onFocus and onBlur methods to the focusin and focusout native events internally
    6. Delegating Focus and Blur events


    The State Of Mobile And Why Mobile Web Testing Matters — Smashing Magazine

    03/02/2021

    About The Author

    Kelvin is an independent software maker currently building Sailscasts — a platform to learn server-side JavaScript. He is also a technical writer and …

    With mobile traffic accounting for over 50% of web traffic these days, leaving your mobile performance unoptimized isn’t really an option. In this article, we’ll discuss the complexity and challenges of mobile, and how mobile testing tools can help us with just that.

    Things have changed quite a bit over the last decade since we first started exploring what we could do on a tiny, shiny mobile screen. These days, with mobile traffic accounting for over 50% of web traffic, it’s fair to assume that the very first encounter of your prospective customers with your brand will happen on a mobile device.

    Depending on the nature of your product, the share of your mobile traffic will vary significantly, but you will certainly have some mobile traffic — and being prepared for it can make or break the deal. This requires your website or application to be heavily optimized for mobile. This optimization is quite complex in nature though. Obviously, our experiences will be responsive — and we’ve learned how to do so well over the years — but it also has to be accessible and fast.

    This goes way beyond basic optimizations such as color contrast and server response times. In the fragmented mobile landscape, our experiences have to be adjusted for low data mode, low memory, battery and CPU, reduced motion, dark and light mode and so many other conditions.

    Leaving these conditions out of the equation means abandoning prospective customers for good, and so we seek compromises to deliver a great experience within tight deadlines. And to ensure the quality of a product, we always need to test — on a number of devices, and in a number of conditions.

    State Of Mobile 2021

    While many of us, designers and developers, are likely to have a relatively new mobile phone in our pockets, the vast majority of our customers aren’t quite like us. That might come as a bit of a surprise. After all, when we look at our analytics, we will hardly find any customers browsing our sites or apps with a mid-range device on a flaky 3G connection.

    The gotcha here is that, if your mobile experience isn’t optimized for various devices and network conditions, these customers will never appear in your analytics — just because your website or app will be barely usable on their devices, and so they are unlikely to return.

    In the US and the UK, Comscore’s Global State of Mobile 2020 report found in August 2020 that mobile usage accounted for 79% and 81% of total digital minutes respectively. There was also a 65% increase in video consumption on mobile devices in 2020. While a vast majority of that time is spent in just a few mobile apps, social media platforms provide a gateway to the web and your services — especially in education.

    Globally, time spent on mobile continues to rise around the world.
    Globally, time spent on mobile continues to rise around the world, according to ComScore Global State of Mobile 2020 report. (Large preview)
    Some app categories skew toward mobile-only usage, while others (education, for example) see more desktop usage.
    Some app categories skew toward mobile-only usage, while others (education, for example) see more desktop usage. ComScore Global State of Mobile 2020 report. (Large preview)

    On the other hand, while devices do get better over time in terms of their capabilities and battery life, older devices don’t really get abandoned or disappear into the void. It’s not uncommon to see customers using devices that are 5-6 years old, as these devices often get passed through the generations, serving as slightly older but “good enough” devices for simple, day-to-day tasks. In fact, an average consumer upgrades their phone every 2 years, and in the US the phone replacement cycle is 33 months.

    What’s a representative device to test on in 2021? An Android device that’s a couple of years old and costs around $200.
    What’s a representative device to test on in 2021? According to Tim Kadlec (video), it’s an Android device that’s a couple of years old and costs around $200. (Large preview)

    Globally in 2020, 84.8% of all shipped mobile phones were Android devices, according to the International Data Corporation (IDC). The average bestselling phones around the world cost just under $200. A representative device, then, is an Android device that is at least 24 months old, costing $200 or less, running on slow 3G with a 400ms RTT and 400kbps transfer, just to be slightly more pessimistic.
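    If you want to approximate that representative device in automated checks, one rough option is to throttle the network and CPU through the Chrome DevTools Protocol. The sketch below uses Puppeteer; the exact numbers simply mirror the figures quoted above, and the URL is a placeholder:

    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();

      // Approximate a slow 3G connection: ~400ms RTT, ~400kbps both ways.
      const client = await page.target().createCDPSession();
      await client.send('Network.emulateNetworkConditions', {
        offline: false,
        latency: 400,                       // round-trip time in ms
        downloadThroughput: 400 * 1000 / 8, // 400 kbps in bytes per second
        uploadThroughput: 400 * 1000 / 8,
      });

      // Roughly mimic a low-end CPU with a 4x slowdown.
      await client.send('Emulation.setCPUThrottlingRate', { rate: 4 });

      await page.goto('https://example.com'); // replace with your own URL
      // ...run your measurements or assertions here.

      await browser.close();
    })();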

    This might be very different for your company, of course, but that’s a close enough approximation of a majority of customers out there. In fact, it might be a good idea to look into current Amazon Best Sellers for your target market.

    When building a new site or app, always check current Amazon Best Sellers for your target market first
    When building a new site or app, always check current Amazon Best Sellers for your target market first. (Large preview)

    Mobile is a spectrum, and quite an entrenched one. While the mobile landscape is already very fragmented, the gap between the experiences on various devices will widen much further with the growing adoption of 5G.

    According to Ericsson Mobility Visualizer, we should be expecting a 15× increase in mobile 5G subscribers, from 212 million in 2020, to 3.3 billion by 2026.

    We should be expecting a 15× increase in mobile 5G subscribers, from 212 million in 2020, to 3.3 billion by 2026.
    According to Ericsson, we should be expecting a 15× increase in mobile 5G subscribers, from 212 million in 2020, to 3.3 billion by 2026.

    If you’d like to dive deeper into the performance of Android and iOS devices, you can check Geekbench Android Benchmarks for Android smartphones and tablets, and iOS Benchmarks for iPhones and iPads.

    It goes without saying that testing thoroughly on a variety of devices — rather than just on a shiny new Android or iOS device — is critical for understanding and improving the experience of your prospective customers, and how well your website or app performs at scale.

    Making A Case For Business

    While it might sound valuable to test on mobile devices, it might not be convincing enough to drive the management and entire organization towards mobile testing. However, there are quite a few high-profile case studies exploring the impact of mobile optimization on key business metrics.

    WPO stats collects literally hundreds of them — case studies and experiments demonstrating the impact of web performance optimization (WPO) across verticals and goals.

    Driving Business Metrics

    One of the famous examples is Flipkart, India’s largest e-commerce website. For a while, Flipkart adopted an app-only strategy and temporarily shut down its mobile website altogether. The company found it more and more difficult to provide a user experience that was as fast and engaging as that of their mobile app.

    A few years ago, they decided to unify their web presence and native app into a mobile-optimized progressive web app, resulting in a 70% increase in conversion rate. They discovered that customers were spending three times more time on the mobile website, and that the re-engagement rate increased by 40%.

    Improving Search Engine Visibility

    It’s not big news that search engines have been considering mobile friendliness as a part of search engine ranking. With Core Web Vitals, Google has been pushing the experience factors on mobile further to the forefront.

    In his article on Core Web Vitals and SEO, Simon Hearne explains that Google’s index update on the 31st of May 2021 will result in a positive ranking signal for page experience, in mobile search only, for groups of similar URLs that meet all three Core Web Vitals targets. The impact of the signal is expected to be small, similar to the HTTPS ranking boost.

    Lighthouse CI is quite remarkable: a suite of tools to continuously run, save, retrieve, and assert against Lighthouse results
    A performance benchmark Lighthouse is well-known. Its CI counterpart not so much. Lighthouse CI is quite remarkable: a suite of tools to continuously run, save, retrieve, and assert against Lighthouse results. (Large preview)
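    If you want to keep an eye on these metrics in a build pipeline, a minimal Lighthouse CI configuration might look something like the sketch below (the URL and budget numbers are placeholders, not recommendations):

    // lighthouserc.js (a minimal sketch)
    module.exports = {
      ci: {
        collect: {
          url: ['https://example.com/'], // placeholder URL
          numberOfRuns: 3,
        },
        assert: {
          assertions: {
            'categories:performance': ['error', { minScore: 0.9 }],
            'largest-contentful-paint': ['warn', { maxNumericValue: 2500 }],
          },
        },
        upload: {
          target: 'temporary-public-storage',
        },
      },
    };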

    One thing is certain though: your websites will rank better if they are better optimized for mobile, both in terms of speed and mobile-friendliness — it goes for accessibility as well.

    Improving Accessibility

    Building accessible pages and applications isn’t easy. The challenges start with tiny hit targets, poor contrast and small font sizes, but it quickly gets much more complicated when we deal with complex single-page applications. To ensure that we cater well for our customers in various situations — with permanent, temporary and situational disabilities — we need to test for accessibility.

    That means considering keyboard navigation, whether navigation landmarks are properly assigned, how updates are announced by a screen reader, and whether we avoid any inaccessible libraries or third-party scripts. And then, for every component we build, we need to ensure that it stays accessible over time.
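    Part of that ongoing checking can be automated. As a rough sketch, assuming you already drive a browser with Puppeteer, the axe-core engine can be run against every page or component state via @axe-core/puppeteer:

    const puppeteer = require('puppeteer');
    const { AxePuppeteer } = require('@axe-core/puppeteer');

    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.goto('https://example.com'); // replace with your own URL

      // Run axe-core's accessibility checks against the loaded page.
      const results = await new AxePuppeteer(page).analyze();
      console.log(`${results.violations.length} accessibility violations found`);

      await browser.close();
    })();

    Automated checks like this only catch a subset of issues, so keyboard and screen reader testing still needs to happen by hand.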

    It shouldn’t be surprising that if a website isn’t accessible to a customer, they are unlikely to access your product either. The earlier you invest in accessibility testing, the more you’ll save down the road on expensive consultancy, expensive third-party services, or expensive lawyers.

    Mobile Web Testing

    So, with all the challenges in the mobile space, how, then, do we test on mobile? Fortunately, there is no shortage of mobile testing tools out there. However, most of the time the focus of mobile testing is on consistency and functionality; for a more thorough mobile test, we need to go a layer deeper into some not-so-obvious specifics of testing.

    Screen Sizes

    Screen sizes are one of the many things that are always changing in the realm of mobile devices. Year after year new screen sizes and pixel densities appear with new device releases. This poses a problem in testing websites and apps on these devices, making debugging more difficult and time-consuming.

    OS Version Fragmentation

    With iOS having a high adoption rate for its latest OS releases (a rate of 57% for its latest release, iOS 14), and the plethora of versions still being used by Android devices going as far back as Ice Cream Sandwich, one must make sure to account for this fragmentation when doing mobile testing.

    Browser Fragmentation

    With Chrome and Safari having a global mobile usage share of 62.63% and 24.55% respectively, one might be tempted to focus on just these browsers when performing mobile tests. However, depending on the region of the world, you may also need to test in other, lesser-known browsers or proxy browsers, such as Opera Mini. Even though their percentage usage might be small, it can still translate into hundreds of thousands of users globally.

    Performing Mobile Web Testing

    To perform mobile web testing, one option is to set up a device lab, and run tests locally. In times of remote work, it’s quite challenging as you usually need a number of devices at your disposal. Acquiring these devices doesn’t have to be expensive, and experiencing the loading on your own is extremely valuable. However, if we want to check how consistent the experience is, or conduct automated tests, it’s probably not going to be enough.

    In such cases, a good starting point is Responsively, a free open-source tool with mirrored interactions, customizable layout, 30+ built-in device profiles, hot reloading and screenshot tools.

    Also, you might want to look into dedicated developer-focused browsers for mobile testing as well.

    Sizzy supports sync scrolling, clicking and navigation across devices, as well as takes screenshots of all devices at once, with and without a device frame. Plus, it includes a Universal Inspect Element to inspect all devices at once.

    Blisk supports over 50 devices out of the box, along with sync scrolling. You can test touch support and preview devices side by side, working with the same piece of code across all opened devices. Hot-reloading is supported too, as are video recording and screenshots.

    Another little helpful tool is LT Browser, a web application allowing you to perform mobile view debugging on 45+ devices — on mobile, tablet and desktop. (Full disclosure and reminder: LambdaTest is a friendly sponsor of this article).

    Testing the Smashing Magazine website on different devices
    (Large preview)

    Once you have downloaded the browser and registered, you can build, test, and debug your website, as well as take screenshots and videos of bugs, assign them to specific devices, run performance profiling and observe multiple devices side by side. By default, the free version provides 30 minutes per day.

    If you need something slightly more advanced, LambdaTest allows you to run a cross-browser test on 2000+ devices on different operating systems. Also, BrowserStack provides an option to automate testing as well as testing for low battery, abrupt power off, and interruptions such as calls or SMS.

    Conclusion

    In this article, we have looked into the state of mobile in 2021. We’ve seen the growing usage of mobile devices as the primary means to access the web, and we’ve looked into some challenges related to that. We’ve also looked into some specific challenges around mobile testing, and how some tools can help us find and fix bugs on mobile.

    Keep in mind that your website is the face of your business, and more and more users are going to access it via their mobile phones. It’s important to make sure that your users can access the services you provide on your website, and that they have an experience on their devices that is as accessible and fast as it is on the desktop version. This will ensure that the benefits of brand visibility get the attention they deserve.



    Cookie Consent For Designers And Developers — Smashing Magazine

    03/01/2021

    As digital practitioners, GDPR has impacted every facet of our professional and personal lives. Whether you’re addicted to Instagram, message your family on WhatsApp, buy products from Etsy or Google information, no one has escaped the rules that were introduced in 2018.

    Last week, I gave you an update on everything that’s happened with GDPR since 2018. (TL;DR: A lot has changed.) In this article, we’ll look at cookie consent: specifically, the paradox where marketers are heavily reliant on Google Analytics cookie data but need to comply with regulations.

    We’ll take a look at two developments that have impacted cookies, plus a third on the horizon. Then I’ll walk you through the risk-based approach that we’ve taken — for the moment, at least. And come back next time for a deep dive into first-party ad tracking as we start to see moves away from third-party cookies.

    Big Development #1: The EU tightens up on cookie consent

    In May 2020, the EU updated its GDPR guidance to clarify several points, including two key points for cookie consent:

    • Cookie walls do not offer users a genuine choice, because if you reject cookies you’re blocked from accessing content. The guidance confirms that cookie walls should not be used.
    • Scrolling or swiping through web content does not equate to implied consent. The EU reiterates that consent must be explicit.

    What does this mean for our industry?

    Well, the EU is tightening up on cookie consent — perhaps the most noticeable (and annoying!) aspect of GDPR. Critics say that cookie notices are a cumbersome block for users, and don’t do anything to protect user privacy. The EU is trying to change this, by promoting simple, meaningful, equitable options for cookie consent.

    But that restricts what we can do with cookies, and it hints ahead to when the Privacy and Electronic Communications Regulation (PECR) may come into force. More on that shortly.

    Big Development #2: Google and Apple crack down on third-party tracking; get hit by anti-trust complaints

    As the big digital players figure out how to comply with GDPR — and how to turn privacy legislation to their advantage — some have already come under fire.

    Google is being investigated by the UK’s competition watchdog, the Competition and Markets Authority (CMA), for its ‘Privacy Sandbox’ initiative, following complaints from adtech companies and publishers.

    The Internet giant, which is also facing an antitrust investigation in Italy for display advertising, and in the US for its search advertising services, is looking to remove third-party cookies from Chrome. (Firefox and Safari already block these cookies by default.)

    The complainants say that this change will further concentrate advertising revenue in Google’s hands. Google’s response? The advertising industry needs to make ‘major changes’ as it shifts to a ‘web without third-party cookies’.

    Google’s not alone. In October 2020, four French digital advertising lobbies filed an antitrust suit against Apple’s forthcoming iOS privacy change, a feature it’s called App Tracking Transparency (ATT).

    ATT, coming in an early-spring 2021 release of iOS 14, shifts app users from an opt-out to an opt-in ad-tracking model. With ATT, every app must get your permission to share your Identifier for Advertisers (IDFA), which enables third-party ad tracking across multiple sites and channels.

    The complainants say that by restricting apps’ ad revenue, developers may have to boost app subscriptions and in-app purchases or switch to Apple’s targeted ad platform — all of which will funnel ad spend away from them and towards Cupertino.

    Critics including Facebook have slammed the change, saying it’ll hit small businesses who rely on microtargeted ads. Apple has defended the move and praised the EU’s defence of citizens’ data privacy.

    To sum up:

    • Implied consent doesn’t equal consent under GDPR, according to the EU.
    • We should also avoid cookie walls
    • Google and Apple are moving against third-party cookies — which some say exploits their dominant market position.

    So what does that mean for us, as designers and developers? First, let’s take a look at why this is important.

    Here’s What Designers Should Know About Cookies

    • GDPR is critical for you because you’ll design the points at which cookies are placed, what data is collected, and how it’s processed.
    • A functionality audit means you can map your cookie activity in the data and compliance layers on your service blueprint.
    • It can help to do a cookie audit and gap analysis, i.e. is the existing cookie pattern compliant? What content does it need around it?
    • Follow Privacy by Design best practices. Don’t try to reinvent the wheel — if you’ve created a compliant cookie banner, use your proven design pattern.
    • Work with your compliance and development teams to ensure designs meet GDPR and can be implemented. Only ask for the data you need.
    • If you need to compromise, take a risk-based approach. There’s a walk-through of one that we did further down.
    • Be aware that your content team may need to update your privacy policy as GDPR and your use of cookies evolve.

    Here’s What Developers Should Know About Cookies

    • Make sure you’re involved upfront about cookie consent and tracking, so what’s decided can be implemented.
    • If you’re doing a product or website redesign, a cookie audit using Chrome Dev Tools can show you what tracking cookies are being used. Tools like Ghostery or Cookiebot give you more detail.
    • You should implement the standard cookie opt-in/out as per GDPR guidance. (Note that while GDPR is standard, its enforcement varies across EU countries. There’s more on this further down.) You may stand to lose Google Analytics data, and you might also come under pressure to implement things that could be considered dark patterns. There’s more on this later, with a walk-through of what we did and a look at the risk.

    So that’s where we are today. Oh, and there’s one more thing to be aware of: a piece of further legislation that might be coming our way. I like to call it Schrodinger’s Law.

    Schrodinger’s Law: The ePrivacy Regulation

    You may have heard of GDPR’s twin sister, the ePrivacy Regulation, who’s lurking on the legislative horizon. If you haven’t, here’s an introduction.

    As I said above, cookie consent — the notice that pops up when you visit a website — is regulated by the GDPR. However, cookies themselves fall under a different piece of legislation, the ePrivacy Directive of 2002, commonly known as the Cookie Law. Like GDPR, it aims to protect customer privacy.

    The ePrivacy Directive is due to be replaced by more stringent legislation, the ePrivacy Regulation. (If you’re interested in the difference between EU directives and regulations, EU directives set out the goals for legislation but delegate the implementation of those goals to member states’ legislatures. EU regulations mandate both the goals and the implementation at an EU-wide level.)

    The draft ePrivacy Regulation goes beyond cookies and ad tracking. It applies to all electronic communications, including messaging apps, spam mail, IoT data transfer and more.

    The draft ePrivacy Regulation was first presented by the EU in 2017. However, it has to be agreed by both the European Parliament and the Council of the European Union. (The Council consists of government representatives of each EU member state.)

    This is where it gets messy. Since 2017, the European Parliament and the Council haven’t been able to agree on the scope and detail of the ePrivacy Regulation.

    That’s because some countries — widely thought to include the Nordic states of Finland and Denmark — want to strengthen the current ePrivacy Directive. They want users, for example, to be able to set acceptance and rejection of tracking cookies in their browsers, not on every site they visit.

    But other countries, notably Austria and believed also to include those with sizeable digital marketing and advertising sectors, say this is bad for business. It’s thought the 27 EU member states are split down the middle on this issue — and they’re all being heavily lobbied by the tech industry.

    So the draft regulation has been ricocheting back and forth between the European Commission and its Working Party on Telecommunications and Information Society as they try to agree its scope. In November 2020, the Working Party rejected the redrafted legislation once again.

    What happens next? There are two possibilities. Either a compromise will be reached, in which case the legislation will be agreed. Because it takes time for legislation to be implemented, the soonest the ePrivacy Regulation could become law is 2025.

    Alternatively, the legislation cannot be agreed and is withdrawn by the European Commission. But the EU has staked so much on it. It will be extremely reluctant to take that step.

    That’s why I call it Schrodinger’s Law. It’s hard for us to know how to plan for any cookie-related developments because we simply don’t know what’s happening.

    So what should I do about cookies right now?

    Different EU countries are currently implementing the ePrivacy Directive differently. Over in the UK, the ICO (the UK’s data protection authority) is taking a tough stance. It’s requiring strict consent for analytics cookies, for example, and has spoken out against cookie walls.

    Until — and if — we get consistency from a new ePrivacy Regulation, if you’re based in an EU country, start by following the advice from your national Data Protection Authority. Then watch this space for developments around the ePrivacy Regulation.

    If you’re based outside the EU, make sure you’re giving EU citizens the options required under the GDPR and the ePrivacy Directive.

    However, when it comes down to the detail, there are times when I recommend taking a risk-based approach. That’s what we’ve done at Cyber-Duck — and here’s why.

    Here’s our original cookie notice. You see these everywhere. They’re pretty meaningless — users just hit accept and continue on their way.

    Screengrab of cookie consent banner. It says ‘Learn how we use cookies to manage your experience and change your settings.’
    It didn’t matter if the user had accepted cookies or not — Google Tag Manager (GTM) fired when they landed as cookies were enabled by default, meaning we would get our analytics data. (Image source: Cyber-Duck) (Large preview)

    But we wanted to be compliant, so we replaced it with this notice. You’ll see that tracking cookies are turned off by default — in line with ICO guidance. We knew there was a risk we would lose analytics data as GTM would no longer fire on first load.
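    For context, a consent-gated setup like this typically injects the GTM snippet only once the visitor opts in. A simplified sketch of the idea (the cookie name and container ID are placeholders, not our production code) looks something like this:

    // Inject Google Tag Manager only after the visitor has opted in.
    function hasAnalyticsConsent() {
      // 'cookie_consent=analytics' is a placeholder for however your
      // banner records the visitor's choice.
      return document.cookie.split('; ').includes('cookie_consent=analytics');
    }

    function loadGTM(containerId) {
      window.dataLayer = window.dataLayer || [];
      window.dataLayer.push({ 'gtm.start': Date.now(), event: 'gtm.js' });

      const script = document.createElement('script');
      script.async = true;
      script.src = 'https://www.googletagmanager.com/gtm.js?id=' + containerId;
      document.head.appendChild(script);
    }

    if (hasAnalyticsConsent()) {
      loadGTM('GTM-XXXXXXX'); // placeholder container ID
    }
    // Otherwise, call loadGTM() from the banner's "accept" handler.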

    Let’s see what happened.

    Screengrab of new cookie consent notice showing marketing and analytics cookies turned off by default
    Our new cookie banner followed ICO guidelines, but… (Image source: Cyber-Duck) (Large preview)

    Problem solved? Actually, no. It just created another problem. The impact was far more significant than we expected:

    Google Analytics screengrab showing tracked traffic fall when the new cookie consent was implemented
    The new cookie consent caused our tracked traffic to collapse. (Image credits: Cyber-Duck) (Large preview)

    Look at the collapse in the blue line when we implemented the new cookie notice. We released the new cookie consent on 17 December and went straight from plenty of tracked traffic to almost zero. (The orange line shows the previous year’s traffic, for comparison.)

    In both the before-and-after scenarios, the default option was by far the most popular. Most users just naturally click on “accept” or “confirm”. That’s tricky, because we now know so little about the people visiting our site that we can’t give them the best information tailored to their needs.

    We needed a solution. Analytics and marketing data ultimately drive business decisions. I’m sure we all know how important data is. In this case, it was like putting money in a bank account and not knowing how much we’d spent or saved!

    Some of the solutions that were proposed included design alternatives (would removing the toggle, or having two buttons with a visual nudge towards “accept”, help?), or simply enabling analytics cookies by default.

    For now, we’ve implemented a compromise position. Marketing and analytics cookies are on by default, with one clear switch to toggle them off:

    Screengrab showing iterated cookie notice with marketing and analytics cookies switched on by default
    Then we iterated again. (Image credits: Cyber-Duck) (Large preview)

    And here’s what that’s done to our stats:

    Google Analytics screengrab showing tracked traffic partially recover from 15 January
    This iteration brought back a chunk of attributable traffic. (Image credits: Cyber-Duck) (Large preview)

    The new cookie banner was relaunched on 15 January. You can see our website traffic starts to pick back up again. However, we’re not getting the full data we were getting before as Google Tag Manager doesn’t fire unless a user chooses cookies.
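
    Mechanically, a consent-gated setup like this usually boils down to only injecting the GTM loader once an analytics-consent flag exists. Here is a simplified sketch rather than our exact implementation; the cookie name and container ID are placeholders:

    // Simplified sketch: load Google Tag Manager only after the visitor
    // has consented to analytics cookies. The cookie name and container ID
    // are placeholders.
    function loadGTM(containerId) {
      window.dataLayer = window.dataLayer || [];
      window.dataLayer.push({ 'gtm.start': new Date().getTime(), event: 'gtm.js' });
      const script = document.createElement('script');
      script.async = true;
      script.src = 'https://www.googletagmanager.com/gtm.js?id=' + containerId;
      document.head.appendChild(script);
    }

    if (document.cookie.includes('analytics_consent=true')) {
      loadGTM('GTM-XXXXXXX');
    }

    Nothing loads until that condition is true, which is exactly why only users who actively accept show up in the data.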

    The good news is, we are getting some data back again! But the story doesn’t end here. After we had turned cookie tracking back on by default, the attribution model got messed up. It wasn’t attributing to the correct channel in Google Analytics.

    Here’s what we mean:

    Scenario 1: (Correct Attribution)

    1. User lands on our website via a paid ad (PPC) or from the search result (organic)
    2. User accepts cookies straight away.
    3. The channel source is attributed correctly, e.g. to PPC.

    Scenario 2: (Incorrect Attribution)

    1. User lands on our website via a paid ad (PPC) or from the search result (organic)
    2. User visits a few other pages on our website without responding to the cookie banner prompt (banner appears on every page until it gets a response)
    3. User finally accepts cookie banner after browsing a few pages.
    4. Attribution comes through as direct — although they originally came from a search engine.

    How does that work? When a user browses other pages on the site, nothing is tracked until they respond to the cookie prompt. Tracking only kicks in at that point. So to Google, it looks as though the user has just landed on that page — and they are attributed to Direct traffic.

    Back to the drawing board.

    Note: I’m sure by now you’re starting to see a pattern here. This entire experience is new for us and there’s not a lot of documentation around, so it’s been a real learning curve.

    Now, how could we solve this attribution issue and stop users from navigating around the site until they’ve selected their cookie preference?

    A cookie wall is one option we considered, but that would potentially push us further away from being compliant, according to the ICO. (Though you might like to try browsing their site incognito and see if they stick to their own guidance…)

    Screengrab showing compromise cookie consent notice with tracking switched on by default
    In the end, we had to settle on a compromise. (Image credits: Cyber-Duck) (Large preview)

    But that’s what we’ve chosen to go with. The journey ends here for now, as we’re still gathering data. In the future, we want to explore other tools and the potential impact of moving away from Google Analytics.

    So what’s everyone else doing?

    Well, McDonald’s UK offers straightforward on/off buttons:

    Screengrab of McDonald’s cookie consent offering three options: reject all, accept cookies and cookie settings
    McDonald’s UK gives straightforward cookie choices. (Image credits: McDonald’s UK) (Large preview)

    Coca-Cola’s British site nudges you to accept by making the ‘reject’ option harder to find:

    Screengrab of Coca-Cola’s cookie consent notice with ‘accept all cookies’ highlighted
    Coca-Cola’s UK site nudges you to accept cookies. (Image credits: Coca Cola UK) (Large preview)

    Whereas Sanrio just has an option to agree to ad tracking:

    Screengrab of Sanrio’s cookie consent showing ‘Ok’ confirmation button
    Sanrio just gives the option to agree to cookies. (Image credit: Sanrio.com) (Large preview)

    Hello Kitty, hello cookies.

    Die Zeit offers free access if you accept tracking cookies — but for an untracked, ad-free experience you’ll have to pay:

    Screengrab of Zeit’s cookie consent
    Die Zeit offers free access with cookies — but for an untracked experience, you have to subscribe. (Image credit: Die Zeit) (Large preview)

    And here’s one of my favourite dark patterns. This restaurant site only has the ‘Necessary’ cookies selected. But it nudges you to the ‘Allow all cookies’ big red button — and when you click that, the analytical and ad cookie boxes are automatically checked and set. Give it a go here!

    Screengrab of Pinchos cookie consent
    Pinchos’ cookie consent is a good example of a dark pattern. (Image credit: Pinchos.se) (Large preview)

    Even the EU isn’t consistent on its own sites.

    The European Parliament’s cookie consent offers two clear options:

    Screengrab of the European Parliament’s cookie consent
    The European Parliament’s cookie notice gives two clear options. (Image credit: European Parliament) (Large preview)

    The CJEU’s site isn’t so clear:

    Screengrab of the CJEU’s cookie consent
    The CJEU’s cookie consent offers three choices: necessary cookies, accept all and more information. (Image credit: EU Court of Justice) (Large preview)

    While Europol’s site comes with two pre-checked boxes:

    Screengrab of Europol’s cookie consent showing mandatory and tracking cookies checked
    Europol’s cookie consent has analytics cookies automatically checked. (Image credit: Europol) (Large preview)

    And if you look at the site for the German presidency of the Council of the European Union (July–December 2020), at first it seems as if there are no cookies at all:

    Screengrab of Germany’s EU2020 site showing no cookies and no cookie consent notice
    Cookies? What cookies? (Image credit: eu2020.de) (Large preview)

    When you land on the site, there are no cookie banners or prompts. A closer look, with cookie extension tools, shows that no cookies are being placed either.

    So are they capturing any analytics data? The answer is yes.

    Screengrab of Matomo code from eu2020.de
    The eu2020.de site tracks users using Piwik, now Matomo. No cookies here! (Large preview)

    We found this little snippet in their code, which shows they are using ‘Piwik’. Piwik is now known as Matomo, one of a clutch of new tools that help with cookie compliance along with Fathom (server-side tracking) and HelloConsent (cookie management).
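
    For reference, cookieless tracking with Matomo’s JavaScript tracker comes down to a single extra call in the standard snippet. Here is a minimal sketch; the tracker URL and site ID are placeholders, not values taken from eu2020.de:

    var _paq = window._paq = window._paq || [];
    // Ask Matomo not to set any tracking cookies before the first hit is sent.
    _paq.push(['disableCookies']);
    _paq.push(['trackPageView']);
    _paq.push(['enableLinkTracking']);
    (function () {
      var u = 'https://analytics.example.com/'; // placeholder tracker URL
      _paq.push(['setTrackerUrl', u + 'matomo.php']);
      _paq.push(['setSiteId', '1']); // placeholder site ID
      var d = document, g = d.createElement('script'), s = d.getElementsByTagName('script')[0];
      g.async = true;
      g.src = u + 'matomo.js';
      s.parentNode.insertBefore(g, s);
    })();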

    So alternatives and solutions are emerging. We’ll take a closer look at that next time — with new alternatives to third-party cookies that will help you take control of your data and get the insight you need to deliver optimum experiences to your customers. Stay tuned!


    web design

    Fresh Inspiration For March And A Smashing Winner (2021 Wallpapers Edition) — Smashing Magazine

    02/28/2021

    Our wallpapers post this month is a special one: There’s not only a new collection of wallpapers created by creative folks from across the globe waiting for you, but we’ll also award a smashing prize to the best design.

    More than ten years ago, we embarked on our monthly wallpapers adventure to provide you with new and inspiring wallpaper calendars each month anew. This month, the challenge came with a little twist: As we announced in the February wallpapers post, we’ll give away a smashing prize to the best March design today.

    Artists and designers from across the globe tickled their creativity and designed unique and inspiring wallpapers for this occasion. As usual, you’ll find all artworks compiled below — together with some timeless March favorites from our archives that are just too good to be forgotten. But first, let’s take a look at the winning design, shall we? Drumrolls, please…

    Submit your wallpaper

    Did you know that you could get featured in one of our upcoming wallpapers posts, too? We are always looking for creative talent, so if you have an idea for a wallpaper for April, please don’t hesitate to submit it. We’d love to see what you’ll come up with. Join in! →

    And The Winner Is…

    Botanica

    With Botanica, Vlad Gerasimov from Russia designed a wallpaper that beautifully plays with color, shapes, and texture to give botanical illustrations a modern twist:

    “It’s been almost a year since I published a wallpaper. 2020 has been tough! Anyway, this is something I made just to get back in shape. And, I’ve been trying a new drawing app, Pixelmator Pro — excellent so far. Hope you like the picture!”

    Botanica

    Congratulations, dear Vlad, and thank you for sharing your artwork with us! You won a ticket for one of Vitaly’s upcoming online workshops (Designing The Perfect Navigation, New Adventures In Front-End, or Smart Interface Design Patterns) — we’ll get in touch with you shortly to sort out the details.

    More Submissions

    A huge thank-you also goes out to everyone who took on this little creativity challenge and submitted their wallpaper designs this month. We sincerely appreciate it! So without further ado, here they are. Enjoy!

    March For Equality

    “This March, we shine the spotlight on International Women’s Day, reflecting on the achieved and highlighting the necessity for a more equal and understanding world. These turbulent times that we are in require us to stand together unitedly and IWD aims to do that.” — Designed by PopArt Studio from Serbia.

    March For Equality

    St. Patrick’s Day

    “On the 17th March, raise a glass and toast St. Patrick on St. Patrick’s Day, the Patron Saint of Ireland.” — Designed by Ever Increasing Circles from the United Kingdom.

    St. Patrick’s Day

    Spring In Moscow

    “If you think of Moscow, you think of winter, snow… but why not spring?” — Designed by Veronica Valenzuela Jimenez from Spain.

    Spring In Moscow

    Smells Like Spring Spirit

    “We’re looking forward to springtime and cultivating a better future!” — Designed by Milica Aleksic from the United States.

    Smells Like Spring Spirit

    Colorful March

    “I used bold colors because it makes people smile and I integrated a handmade touch to humanize my wallpaper.” — Designed by Guylaine Couture from Canada.

    Colorful March

    Earth Hour Day

    “I think this is an important date, and this year there’s going to be more activity on social media during this hour. Climate change affects us more and more, so every important day that reminds us that we only have one planet and that we should do everything to help it is important.” — Designed by Pedro Gonçalves from Portugal.

    Earth Hour Day

    BatPig

    “BatPig isn’t as fast as Batman. That’s why there are so many messages on the Bat-Signal.” — Designed by Ricardo Gimenes from Sweden.

    BatPig

    Stay Home

    “The character is the Dungeon Master from the old TV series ‘Dungeons & Dragons’. The show focused on a group of six friends who were transported into the titular realm and followed their adventures as they tried to find a way home with the help of their guide, the Dungeon Master. He is happy because these days everybody says ‘Stay at home’.” — Designed by Ricardo Gimenes from Sweden.

    Stay Home

    Learn To Fight Alone

    “I believe it’s very important that your family/friends lift you up in moments of success and in moments of doubts. The bond you create with them is very special. However you can’t rely on them to help reach your goals. You have to fight alone and this is how you become stronger every day.” — Designed by Hitesh Puri from India, Delhi.

    Learn To Fight Alone

    Tacos To The Moon And Back

    Designed by Ricardo Gimenes from Sweden.

    Tacos To The Moon And Back

    Spring Awakens A New Hope

    “With March comes spring, nature awakens, and along with it comes new hope that soon we will leave this difficult period behind and replace masks with smiles.” — Designed by LibraFire from Serbia.

    Spring Awakens A New Hope

    Sail The Night Sky

    Designed by Hannah Joy Patterson from South Carolina, USA.

    Sail The Night Sky

    Oldies But Goodies

    Birds singing, flowers blooming, the great unknown, and, well, pizza — a lot of different things have inspired the community to design a March wallpaper in all those years that we’ve been running our monthly series. Below you’ll find some almost-forgotten favorites from the past. (Please note that these wallpapers don’t come with a calendar.)

    Bunny O’Hare

    “When I think of March, I immediately think of St. Patrick’s Day and my Irish heritage… and then my head fills with pub music! I had fun putting a twist on this month’s calendar starring my pet rabbit. Erin go Braugh.” — Designed by Heather Ozee from the United States.

    Bunny O’Hare

    Wake Up!

    “Early spring in March is for me the time when the snow melts, everything isn’t very colorful. This is what I wanted to show. Everything comes to life slowly, as this bear. Flowers are banal, so instead of a purple crocus we have a purple bird-harbinger.” — Designed by Marek Kedzierski from Poland.

    Wake Up!

    Ballet

    “A day, even a whole month, isn’t enough to show how much a woman should be appreciated. Dear ladies, any day or month are yours if you decide so.” — Designed by Ana Masnikosa from Belgrade, Serbia.

    Ballet

    Queen Bee

    “Spring is coming! Birds are singing, flowers are blooming, bees are flying… Enjoy this month!” — Designed by Melissa Bogemans from Belgium.

    Queen Bee

    Questions

    “Doodles are slowly becoming my trademark, so I just had to use them to express this phrase I’m fond of recently. A bit enigmatic, philosophical. Inspiring, isn’t it?” — Designed by Marta Paderewska from Poland.

    Questions

    The Unknown

    “I made a connection, between the dark side and the unknown lighted and catchy area.” — Designed by Valentin Keleti from Romania.

    The Unknown

    Spring Bird

    Designed by Nathalie Ouederni from France.

    Spring Bird

    Spring Is Inevitable!

    “Spring is round the corner. And very soon plants will grow on some other planets too. Let’s be happy about a new cycle of life.” — Designed by Igor Izhik from Canada.

    Spring Is Inevitable!

    Happy Birthday Dr. Seuss!

    “March the 2nd marks the birthday of the most creative and extraordinary author ever, Dr. Seuss! I have included an inspirational quote about learning to encourage everyone to continue learning new things every day.” — Designed by Safia Begum from the United Kingdom.

    Happy Birthday Dr. Seuss!

    Awakening

    “I am the kind of person who prefers the cold but I do love spring since it’s the magical time when flowers and trees come back to life and fill the landscape with beautiful colors.” — Designed by Maria Keller from Mexico.

    Awakening

    Marching Forward!

    “If all you want is a little orange dinosaur MARCHing (okay, I think you get the pun) across your monitor, this wallpaper was made just for you! This little guy is my design buddy at the office and sits by (and sometimes on top of) my monitor. This is what happens when you have designer’s block and a DSLR.” — Designed by Paul Bupe Jr from Statesboro, GA.

    MARCHing forward!

    Let’s Spring!

    “After some freezing months, it’s time to enjoy the sun and flowers. It’s party time, colours are coming, so let’s spring!” — Designed by Colorsfera from Spain.

    Let's spring!

    Pizza Time!

    “Who needs an excuse to look at pizza all month?” — Designed by James Mitchell from the United Kingdom.

    Pizza Time!

    Sakura

    Designed by Evacomics from Singapore.

    Sakura

    Keep Running Up That Hill

    “Keep working towards those New Year’s resolutions! Be it getting a promotion, learning a skill or getting fit, whatever it is — keep running!” — Designed by Andy Patrick from Canada.

    Smashing Wallpaper - march 12

    Never Leave Home Without Your Umbrella

    “I thought it would be cute to feature holographic umbrellas in a rain-like pattern. I love patterned wallpapers, but I wanted to do something a little fun and fresh with a pattern that reminded me of rain and light bouncing off an umbrella!” — Designed by Bailey Zaputil from the United States.

    Never Leave Home Without Your Umbrella

    Colorful

    “In some parts of the world there is the beauty of nature. This is one of the best beaches in the world: Ponta Negra in the North East of Brazil.” — Designed by Elisa de Castro Guerra from France.

    Colorful

    Daydream

    “A daydream is a visionary fantasy, especially one of happy, pleasant thoughts, hopes or ambitions, imagined as coming to pass, and experienced while awake.” — Designed by Bruna Suligoj from Croatia.

    Smashing Wallpaper - march 11


    web design

    Building User Trust In UX Design — Smashing Magazine

    02/26/2021

    About The Author

    Adam is a senior lead UX/UI designer with more than 8 years of experience. Adam’s passion for design steadily grew into establishing his own agency, that …
    More about
    Adam

    Trust is at the heart of a long-term strategy of any product. There are many ways to earn it, and even more ways to lose it. In this article, we’ll go through how you, as a product designer, can make sure your product nurtures and retains trust throughout every touchpoint. To do that, we’ll be borrowing some of the tricks marketers and product people have up their sleeves.

    Building trust is one of the central goals of user experience design. And yet trust is a concept that’s very hard to define in a precise manner. We all know it when we feel it but often fall short of putting it in words. Being able to turn the elusive and intangible into actionable and concrete steps, however, is exactly what makes UX so crucial in the modern business ecosystem.

    Although a product experience that is useful and coherent is what fundamentally builds a sense of security and satisfaction, there’s a lot more nuance that goes into building it. That’s what this article is about. We’ll take a deeper dive into users’ trust and how we can use UX to build a lasting relationship with your clientele.

    Instilling trust goes beyond the bare visuals of a product. Ideally, a UX designer’s work starts well before the first lines are drawn and continues long after designs are deployed.

    Being more present allows us to achieve a comprehensive view of the whole customer lifecycle, which also encourages us to borrow tools and approaches from marketers, product managers and developers. Being well-rounded in product development activities is yet another aspect that we’ll advocate for throughout the piece. As a result of dabbling in non-design activities, we can gather an in-depth understanding of all areas where trust is vital.

    Think About The Customer Journey

    A central competency of UX design is a good understanding of your users’ needs, preferences, and emotions. Therefore, over time we designers need to develop a wide array of skills to improve our understanding of our users and their interaction with our products.

    One such way entails using qualitative data and detailed analytics, which are vital in allowing us to outline a user persona’s most important qualities. Analytics can be used to create hypotheses and validate or discard them. As a result, you’ll be able to create experiences that will foster customer loyalty and a sustained sense of trust.

    Let’s look into the stages of a customer journey and explore how UX designers can bring value to the table. You might also notice that the way we suggest structuring the customer journey map is marketing-oriented. That orientation speaks to the purpose of this article: to give designers a broader perspective.

    Below, we can see one such example of a customer journey that’s structured around the so-called “funnel” marketers and sales-people use:

    example of a customer journey
    Designed by Adam Fard UX Studio (Large preview)

    Below is the classic visualization of a sales/marketing funnel. You may have come across different wordings for the stages but this doesn’t change their essence. The reason this visualization is shaped like a funnel is simple: only a small portion of people who come across your product will end up becoming a paying customer. We’ve also combined the intent and action into one stage, since in the context of building trust through good UX they’re fairly similar.

    sales funnel
    Illustration by Adam Fard UX Studio. (Large preview)


    Now we need to apply this funnel thinking to a customer journey. That’s exactly what we did with the customer journey map (CJM) below. This map was created for one of our projects a while ago, and was tweaked significantly to respect the client’s privacy. By focusing on the whole funnel, we were able to go beyond the product UI, and audit the whole UX from the very first users’ interaction with the product in question.

    Now that we’ve talked briefly about how we can map users’ journey to pinpoint trust-sensitive areas, let’s move on to the first stage of the funnel: Awareness.

    Awareness

    Awareness is the stage where we should analyze how customers learn about a product or service. When devising a strategy for this step, we need to start from our users’ problems and their most common pain points. Being user-centric enables us to think about the best ways to approach potential customers while they are trying to tackle a certain pain point. The goal here is to have a reserved and more educational tone.

    sales funnel
    Illustration by Adam Fard UX Studio. (Large preview)

    Sounding too corporate or salesy can have an adverse effect on a person that isn’t familiar with the product. The way we should approach the awareness stage depends on whether your product is launched or not.

    In order to map a journey that is representative of real users we need real data. The ways of collecting this data will depend on whether the product in question is launched or not. Let’s go through both of these scenarios separately.

    two scenarios of collecting the data
    Illustration by Adam Fard UX Studio. (Large preview)

    The Approach For Launched Products

    A team whose product or service has already hit the market can learn a lot about the people it attracts. Both qualitative and quantitative methods can provide us with a wealth of valuable insight.

    There are plenty of tools and techniques on the market that will help you get to know your users better. Here are the ones that are used the most often:

    • Google Analytics;
    • FullStory and its equivalents;
    • User interviews.

    Let’s break down the three in more detail.

    Google Analytics

    Google Analytics is a popular tool that is predominantly used by marketers, but it has gradually been adopted by UX specialists as well. It’s an excellent way to learn about the types of audiences you need to design for and create hypotheses about their preferences. More importantly, Google Analytics gives us insights on how people find you. Conversely, it allows you to learn how people do not find you.

    With a launched product, you can dive into a variety of metrics to better understand your clientele. Here are a few of them:

    • Top Sources Of Traffic
      This allows you to understand what are the most successful channels that drive awareness. Are you active enough on these channels? Can anything be improved in terms of your online presence?

    Here’s how Google Analytics present data on where your users come from:

    Google Analytics' data
    (Large preview)
    • User Demographics
      This provides you with data on your audience’s age, gender, lifestyle, and interests. That’s one of the ways you can validate a UX persona you’ve created with data rather than your assumptions;

    Here’s how Google Analytics visualizes the data on the users’ location:

    Google Analytics' visualization of data
    A screenshot taken from Google Analytics. (Large preview)
    • Keyword Insights
      You can use two approaches here. The first one involves Google Search Console. It shows you the keywords your audience uses to locate your page, provides you with a wealth of insight into user pain points, and can inform your keyword strategy.

    The second approach is to use data from SEO tools like Ahrefs or SEMrush to see how people phrase their search queries when they face a problem your product solves.

    Once you have an understanding of the keywords that your potential customers use, put them in Google. What do you find there? A competitor product? An aggregator website like Capterra or Clutch? Perhaps nothing that suits the query? Answers to these questions will be invaluable in informing your decisions about optimizing the very first stages of your customer journey.

    Here’s how Google Search Console shows the keywords users search for before ending up on your website:

    Google Analytics' data
    A screenshot taken from Google Analytics’ Search Console. (Large preview)
    FullStory And Its Equivalents

    There is now a great variety of UX tools when it comes to analytics engines. They help translate complex data into actionable insights on how to improve your online presence. The tool that we use, and see other designers use very often, is FullStory. Such tools are a great solution when you’re looking to reduce UI friction, find ways to enhance funnel completion, and so forth.

    By using such tools, businesses can learn a lot about user behavior and how they can calibrate products to their needs. Do your users read the product description you write? Do they skim it? What part of the page seems to grab their attention? Does that validate or refute your initial assumptions?

    FullStory tool
    Image source: fullstory.com (Large preview)
    User Interviews

    Interviewing your user base has a broad spectrum of benefits when it comes to understanding their motivations, values, and experiences. There are many kinds of interviews, e.g. structured, unstructured, ones that feature closed or leading questions, and so on. They all have their benefits and can be tailored specifically to your service or user base to extract maximum insight.

    For the purposes of creating a customer journey map that visualizes real data, consider asking questions like:

    “How would you go about looking for an X service or product?”

    “What information is/was the most important while making a purchasing decision?”

    “What are some of the red flags for you when searching for our service/product?”

    pic of a user interview
    Image source: shutterstock.com (Large preview)

    Approach For Products Pending Launch

    There’s plenty of valuable insight that can be gathered without having a launched product. Designs that instill trust from day one are bound to maximize an organization’s success in the long run.

    Here are the tools and techniques you should use:

    • Keyword and online research;
    • User interviews;
    • Competitor research.

    Let’s go through each of those.

    Keyword And Online Research

    One of the most straightforward ways to establish whether a product is fit for its market is keyword research. Looking for keywords is often associated with SEM and SEO practices, but there’s more to it: this kind of research will also reveal a lot about the most prominent needs on the market.

    There are a few methods of keyword research that can be used to establish market fit:

    • Mining For Questions And Answers
      Think about websites like Quora or Reddit. Are people asking about how to solve a problem your product solves? What are the ways they currently go about solving it?
    screenshot from a Reddit thread
    A screenshot from a Reddit thread. (Large preview)
    • Competitor Reviews And Descriptions
      Is there a trend on why competitors get bad reviews? Conversely, is there something that helps them get better reviews? Is there a gap in their features?
    • Social Listening
      Go through Twitter, Facebook, and LinkedIn hashtags and groups. See if there are communities that are built around the problem you solve or the demographic you target. If so, see what these people talk about, and ask them questions.
    • Keyword Research Tools
      This research method helps you learn two things. The first one is whether people have a need for your product or service. By seeing the number of queries in a given period of time you can draw conclusions about the viability of your product. The second valuable insight is seeing how people describe the problem you’re solving. Knowing how people talk about their pains, in turn, will help you speak the same language with your customers.
    User Interviews

    To some, conducting user interviews before product launch may seem pointless, but that’s far from the truth.

    Understanding who your potential customers are and learning about their needs and preferences is a valuable vehicle for building trust.

    Here are a few important things you can learn from potential users:

    • Whether or not they like your design.
      The visual side of a product is a vital part of building trust. For someone to like your design, of course, implies that you already have some designs completed.
    • Whether or not they find your product idea useful.
      This information will allow you to analyze how fit your product is for the market.
    • The features that they’d like to see in your product.
      This will help you quickly adapt to the needs of your customers.
    • Whether or not they find it easy to use your product.
      This data will inform your product’s usability, which too implies having some designs complete. A prototype would be ideal for early usability testing.

    Thorough and well-planned user interviews are instrumental in making intelligent business decisions. They provide you with invaluable insight rooted in feedback directly from your potential users.

    Competitor Research

    Understanding your competitors’ products is vital when it comes to market differentiation. It enables us to learn what customers are lacking and fill in those gaps.

    Here are a few things that’ll help you conduct a simple competitor research with trust in mind:

    • Choose the right competitors to research.
      By the way, these don’t have to be digital products. For example, a simple notepad is a competitor to productivity apps, as they solve the same problem: staying on top of your tasks and staying productive. How does that help with trust and creating a CJM? It allows you to empathize and put yourself in the shoes of your users. Also, it helps you craft authentic and relatable messaging that resonates with people.
    • Ensure that your analysis is consistent.
      It’s important to have a clear understanding of which aspects you’re going to analyze. Come up with analysis criteria, so that your notes are structured and easy to draw conclusions from.
      Considering different options is almost always a part of a customer’s journey. You have to make it easy to understand how you’re better than the alternatives.
    • Establish the best sources for your data.
      The best source is users: either yours or someone else’s. Period. But a few Google searches would certainly do no harm.
    • Define the best ways to incorporate your findings into your product at its inception.

    Studying your competition will provide you with a wealth of quantitative and qualitative data that will guide your business decisions. As a result, you’ll create a product that fits your users’ needs and instills trust and satisfaction.

    Consideration & Acquisition

    Users that have made it to the consideration stage are interested in your product but aren’t prepared to become paying customers. At this point, they’re evaluating the options offered by your competition and assessing whether they’ll get the value they’re looking for.

    sales funnel
    Designed by Adam Fard UX Studio. (Large preview)

    There is a wide array of things businesses can do to motivate users to transition into a paying relationship through building trust. Here are a few of them:

    Explain How Your Algorithms Work

    If your product revolves around AI/ML algorithms to enhance the customer experience, it’s important to explain how they work.

    We’re typically very sensitive about our data. Accordingly, there’s no reason to think that users will blindly trust a product’s AI. It’s our responsibility to counteract the distrust by explaining how it works and what kind of data it will use.

    Here are a few great ways you can outline the AI’s functionality while also encouraging users to make their own informed decisions:

    • Calibrate Trust
      AI systems are based on stats and numbers, which means that they can’t replace rational human thought. Emphasize that your algorithm is skilled at giving suggestions, but users should make their own choices.
    • Display Confidence Levels
      An essential aspect of the scientific approach is that there are no facts — there is only evidence. Make sure to communicate how confident your algorithm is that something is true (see the sketch after this list).
    • Explain Algorithm Outputs
      The results of an analysis must be accompanied by a clear explanation thereof.
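
    To make the “Display Confidence Levels” point above a little more concrete, here is a small hypothetical sketch; the thresholds, wording and function name are invented for illustration and not taken from any particular product:

    // Hypothetical helper: turn a model's raw score into hedged UI copy
    // instead of presenting the suggestion as a fact.
    function confidenceLabel(score) {
      if (score >= 0.9) return 'Very likely a match';
      if (score >= 0.7) return 'Likely a match';
      return 'Possibly a match (please double-check)';
    }

    console.log(confidenceLabel(0.82)); // "Likely a match"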

    Good UX & UI

    A well-executed UI is at the crux of user trust. Satisfying visuals, consistency, and ethical design will make your product appear trustworthy. Lacking the above will dissuade people from purchasing your product or services.

    Here’s an older design example. Would you willingly use such a service, especially when the competitors’ design isn’t stuck in 2003?

    screenshot of how Gmail looked in 2003
    Here’s how Gmail looked in 2003. (Source: Vala Afshar) (Large preview)

    No offense to Gmail’s former self, by the way. There’s a reason it doesn’t look like that anymore though.

    The same could also be said about your product’s UX. Confusing user flows, poor feature discoverability, and other usability issues are a surefire way to scare away a good chunk of new users. A good remedy to such pitfalls is making sure your design adheres to the usability heuristics. If you’re dealing with legacy design, conducting a heuristic evaluation would also serve you well.

    Also, stuff like fake buttons, dark patterns, and a wonky interface are guaranteed to seriously hinder your growth.

    an example of a website that employs dark patterns
    An example of a website that clearly employs dark patterns. (Source: pdfblog.com) (Large preview)

    Testimonials & Reviews

    Customer reviews are essential when it comes to building trust. There’s a significant body of research indicating that positive feedback can boost your sales and conversions.

    You don’t have to take our word for it. Here’s what researchers at the Spiegel Research Center have to say about the importance of reviews:

    Based on data from the high-end gift retailer, we found that as products begin displaying reviews, conversion rates escalate rapidly. The purchase likelihood for a product with five reviews is 270% greater than the purchase likelihood of a product with no reviews.

    A screenshot taken from Clutch with reviews
    A screenshot taken from Clutch. (Large preview)

    Plus, studies have shown that people use testimonials to assess how trustworthy a product is.

    It’s also worth noting that people who have negative experiences are a lot more likely to write a review than the ones who had a good one. That’s why you should be creative in asking people to leave reviews. Here’s how Upwork approaches soliciting feedback.

    A screenshot taken from Upwork with reviews
    A screenshot taken from Upwork. (Large preview)

    Notice that Upwork allows you to see what review a customer left only after you’ve left one. It’s fascinating how they leverage curiosity to encourage users to leave feedback.

    Over 90 percent of internet users read online reviews, and almost 85 percent of them trust them as much as a recommendation from a friend. Reviews are an important part of a trustworthy online presence.

    That being said, it’s important not to create fake reviews that glorify your product. Please don’t buy reviews or mislead users in any other way. People can generally sense when praise is excessive and disingenuous. Furthermore, users appreciate a few negative reviews as well.

    A study conducted by Northwestern University and PowerReviews concluded the following:

    “As it turns out, perfect reviews aren’t the best for businesses, either. Our research with Northwestern University found that purchase probability peaks when a product’s average star rating is between 4.2 – 4.5, because a perfect 5-star rating is perceived by consumers as too good to be true.”

    Badges

    Trust badges are icons that inform your users about the security of your product/service. Badges are especially important if your site has a payment page.

    different types of badges
    Badges like these help instill trust. (Source: Marianne Wright) (Large preview)

    Providing your credit card information on a website is a sign of trust. Therefore it’s essential that we not only abide by security standards but also convey the fact that we do.

    Badges are also invaluable when it comes to showcasing important partnerships or awards. For example, B2B companies often display awards from websites like Clutch or GoodFirms.

    examples of different badges
    (Large preview)

    Good Spelling And Grammar

    Poorly written copy is a simple way to ruin your online credibility. A few typos will certainly dissuade some people from using your product by eroding their trust in it.

    Think of it this way: How can you trust a service that can’t even get its text right? Would you trust their online security? Would you be willing to provide your card information to them?

    The pitfall of poor grammar and spelling might seem obvious, but oftentimes the UX copy is written in a rush. And we designers are prone to glossing over the copy without giving it too much consideration.

    You’d be surprised how many error notifications and other system messages are written in a hurry never to be reviewed again.

    Blunders like on the screenshot below, in our experience, happen way too often:

    example of error notifications
    Notice how the error message uses jargon. (Source: Alex Birkett) (Large preview)

    Retention

    Considering that a customer has made it to the retention stage, it’s fair to say that you’ve earned their trust. However, it’s essential to mention that this trust needs to be retained, to ensure that they’ll continue using your product. Moreover, whenever there are people involved, screw-ups are bound to happen. That means that you need to have a plan for fixing mistakes and getting the trust back.

    sales funnel
    Illustration by adamfard.com (Large preview)

    Here are a few things you can do to elevate user experience and maintain a high trust level:

    Emails

    Effective email communication is paramount to customer retention. A whitepaper by Emarsys indicates that about 45% of the businesses they surveyed use email to retain their customers.

    As a communication medium, email is among the most expressive. It can convey emotions through text and media while also addressing customers’ needs.

    A user-centric approach to email marketing is bound to keep your customers happy, informed, and engaged. That implies not spamming and providing actual value or entertainment. Preferably, both.

    Forever 21 mailing
    Look at how Forever 21 does damage control to retain their customers’ loyalty. (Source: Iuliia Nesterenko) (Large preview)

    Notifications

    Consistent and well-thought-out push notifications are also a great way to keep your customers intrigued.

    First off, it’s always a good idea to welcome your users. They’ve just made an important step — they’ve bought your product or purchased a membership. It’s a simple and elegant way of thanking your customer for their choice.

    Secondly, consider notifying them about exclusive offers. Sharing information on special deals allows you to provide them with extra value for merely being a customer of yours.

    Finally, consider personalizing your notifications. Using a user’s name or recent activity to notify them about relevant things will also skyrocket their engagement. However, it’s worth mentioning that being explicit about having users’ information too often, or using sensitive data to personalize notifications, can come across as creepy.

    A screenshot of a Starbucks app notification.
    A screenshot of a Starbucks app notification. (Large preview)

    Whether the notification above is creepy is for you to decide 🙂

    In-product Perks

    There are a variety of bonuses you can offer to build trust in the retention stage. They nudge your customers to use your product actively. These are especially potent in making up for any screw-ups.

    Here are a few popular ones you can look into:

    • Closed beta access to new features;
    • Seasonal discounts;
    • Loyalty programs;
    • Discounts on renewals.
    an example of Kate Spade’s notification
    Notice how Kate Spade nudges the users towards the purchase. (Large preview)

    Conclusion

    Phew, reading this article must have been quite a journey. We’ve almost reached the end. In order to help you consolidate everything in this article, let us try to recap its contents.

    Creating a successful product is all about building trust. Luckily, there are so many ways to improve a product’s trustworthiness through UX. However, it’s essential to make these practices consistent. Customers seek to interact with brands that can deliver great experience throughout all interactions and touchpoints.

    One of the ways to account for each touchpoint is reconciling two journey-mapping techniques: the marketing and sales funnel and the customer journey map. The funnel allows us to go beyond the in-app experience, which designers are often reluctant to do, while a customer journey map provides empathy, structure, and depth of analysis.

    Listing all of the ways to boost trustworthiness for each funnel stage would take another couple of pages, so one simple piece of advice will have to do: empathy is the key to getting into your users’ shoes and tackling their trust concerns. For a more concrete list of guidelines, scroll up and skim through the headers. That should jog your memory.

    The bottom line is that we encourage you, dear reader, to shortlist the stages your users go through before actually becoming your users. Is there anything that might undermine your product’s trustworthiness? Is there anything you could improve to nudge a soon-to-be user in the right direction? Giving definitive answers to these questions and addressing them is a surefire way to a better-designed product.


    web design

    Building A Discord Bot Using Discord.js — Smashing Magazine

    02/25/2021

    About The Author

    Subha is a freelance web developer and a learner who is always passionate about learning and experimenting with new things. He loves to write about his new …
    More about
    Subha

    An introduction to building a Discord bot using the Discord.js module. The bot will share random jokes, assign or revoke user roles, and post tweets of a specific account to a Discord channel.

    Team communication platforms are getting more popular day by day, as more and more people work from home. Slack and Discord are two of the most popular team communication platforms. While Discord is focused on gamers, some functionality, such as the ability to add up to 50 members in a voice call room, makes it an excellent alternative to Slack. One of the most significant advantages of using such a platform is that many tasks can be automated using bots.

    In this article, we’ll build a bot from scratch using JavaScript and with help from Discord.js. We’ll cover the process from building the bot up to deploying it to the cloud. Before building our bot, let’s jot down the functionality that our bot will have:

    • Share random jokes from an array of jokes.
    • Add and remove user roles by selecting emoji.
    • Share tweets from a particular account to a particular channel.

    Because the Discord.js module is based on Node.js, I’ll assume that you are somewhat familiar with Node.js and npm. Familiarity with JavaScript is a must for this article.

    Now that we know the prerequisites and our goal, let’s start. And if you want to clone and explore the code right away, you can with the GitHub repository.

    Steps To Follow

    We will be building the bot by following a few steps.

    First, we’ll build a Discord server. A Discord server is like a group in which you can assign various topics to various channels, very similar to a Slack server. A major difference between Slack and Discord is that Slack requires different login credentials to access different servers, whereas in Discord you can access all of the servers that you are part of with a single authentication.

    The reason we need to create a server is that, without admin privileges for a server, we won’t be able to add a bot to the server. Once our server is created, we will add the bot to the server and get the access token from Discord’s developer portal. This token allows us to communicate with the Discord API. Discord provides an official open API for us to interact with. The API can be used for anything from serving requests for bots to integrating OAuth. The API supports everything from a single-server bot all the way up to a bot that can be integrated on hundreds of servers. It is very powerful and can be implemented in a lot of ways.

    The Discord.js library will help us to communicate with the Discord API using the access token. All of the functions will be based on the Discord API. Then, we can start coding our bot. We will start by writing small bits of code that will introduce us to the Discord API and the Discord.js library. We will then understand the concept of partials in Discord.js. Once we understand partials, we’ll add what’s known as a “reaction role” system to the bot. With that done, we will also know how to communicate with Twitter using an npm package called twit. This npm package will help us to integrate the Twitter tweet-forwarding functionality. Finally, we will deploy it to the cloud using Heroku.

    Now that we know how we are going to build our bot, let’s start working on it.

    Building A Discord Server

    The first thing we have to do is create a Discord server. Without a server with admin privileges, we won’t be able to integrate the bot.

    Building a Discord server is easy, and Discord now provides templates, which make it even easier. Follow the steps below, and your Discord server will be ready. First, we’ll choose how we are going to access the Discord portal. We can use either the web version or the app. Both work the same way. We’ll use the web version for this tutorial.

    If you’re reading this article, I’ll assume that you already have a Discord account. If not, just create an account as you would on any other website. Click the “Login” button in the top right, and log in if you have an account, or click the “Register” button. Fill out the simple form, complete the Captcha, and you will have successfully created an account. After opening the Discord app or website, click the plus icon on the left side, where the server list is. When you click it, you’ll be prompted to choose a template or to create your own.

    Creating a server from a template or from scratch in Discord
    Creating a server in Discord (Large preview)

    We’ll choose the “Create My Own” option. Let’s skip the next question. We’ll call our Discord server “Smashing Example”. You may also provide a photo for your server. Clicking the “Create” button will create your server.

    Registering the Bot With Discord

    Before coding the bot, we need to get a token provided by Discord. This token will establish a connection from our code to Discord. To get the token, we have to register our bot with our server. To register the bot, we have to visit Discord’s developer portal. If you are building a Discord app for the first time, you’ll find an empty list there. To register our app, click on the “New Application” link in the top-right corner. Give your application a name, and click the “Create” button. We’ll name our app “Smashing App”.

    Adding a new app to the Discord Developer Portal

    The new menu gives us some options. On the right side is an option labelled “Bot”. Click it, and select “Add Bot”. Click the confirmation, change the name of the bot if you want, save the changes, and copy the token received from this page. Our bot is now registered with Discord. We can start adding functionality and coding the bot.

    Building The Bot

    What Is Discord.js?

    Discord.js defines itself like so:

    Discord.js is a powerful node.js module that allows you to interact with the Discord API very easily. It takes a much more object-oriented approach than most other JS Discord libraries, making your bot’s code significantly tidier and easier to comprehend.

    So, Discord.js makes interaction with the Discord API much easier. It has 100% coverage with the official Discord API.

    Initializing The Bot

    Open your favorite text editor, and create a folder in which all of your files will be saved. Open the command-line interface (CLI), cd into the folder, and initialize the folder with npm: npm init -y.

    We will need two packages to start building the bot. The first is dotenv, and the second, obviously, is the Discord.js Node.js module. If you are familiar with Node.js, then you’ll be familiar with the dotenv package. It loads the environment variables from a file named .env to process.env.

    Install these two using npm i dotenv discord.js.

    Once the installation is complete, create two files in your root folder. Name one of the files .env. Name the other main file whatever you want. I’ll name it app.js. The folder structure will look like this:

    │    .env
    │    app.js
    │    package-lock.json
    │    package.json
    └─── node_modules
    

    We’ll store tokens and other sensitive information in the .env file, and store the code that produces the results in the app.js file.

    Open the .env file, and create a new variable. Let’s name the variable BOT_TOKEN for this example. Paste your token in this file. The .env file will look similar to this now:

    BOT_TOKEN=ODAxNzE1NTA2Njc1NDQ5ODY3.YAktvw.xxxxxxxxxxxxxxxxxxxxxxxx
    

    We can start working on the app.js file. The first thing to do is to require the modules that we installed.

    const Discord = require('discord.js');
    require('dotenv').config();
    

    The dotenv module is initialized using the config() method. We can pass in parameters to the config() method. But because this is a very simple use of the dotenv module, we don’t need any special function from it.
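
    For instance, if your .env file lived somewhere other than the project root, you could point dotenv at it; the path below is only an example:

    // Only needed if the .env file is not in the project root.
    require('dotenv').config({ path: './config/.env' });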

    To start using the Discord.js module, we have to initialize a constructor. This is shown in the documentation:

    const client = new Discord.Client();
    

    The Discord.js module provides a method named client.on. The client.on method listens for various events. The Discord.js library is event-based, meaning that every time an event is emitted from Discord, the functionality attached to that event will be invoked.

    The first event we will listen for is the ready event. This method will fire up when the connection with the Discord API is ready. In this method, we can pass in functions that will be executed when a connection is established between the Discord API and our app. Let’s pass a console.log statement in this method, so that we can know whether a connection is established. The client.on method with the ready event will look like this:

    client.on('ready', () => {
      console.log('Bot is ready');
    });
    

    But, this won’t establish a connection with the API because we haven’t logged into the bot with the Discord server. To enable this, the Discord.js module provides a login method. By using the login method available on the client and passing the token in this method, we can log into the app with the Discord server.

    client.login(process.env.BOT_TOKEN)
    

    If you start the app now — with node app.js or, if you are using nodemon, then with nodemon app.js — you will be able to see the console message that you defined. Our bot has successfully logged in with the Discord server now. We can start experimenting with some functionality.

    Let’s start by getting some message content depending on the code.

    The message Event

    The message event listens for incoming messages. Using the reply method, we can program the bot to reply according to the user’s message.

    client.on('message', (msg) => {
      if (msg.content === 'Hello') msg.reply('Hi');
    });
    

    This example code will reply with a “Hi” whenever a “Hello” message is received. But in order to make this work, we have to connect the bot with a server.

    Connecting The Bot With A Discord Server

    Up to this point, the bot is not connected with any server. To connect with our server (Smashing Example), visit Discord’s developer portal. Click on the name of the app that we created earlier in this tutorial (in our case, “Smashing App”). Select the app, and click on the “OAuth2” option in the menu. You’ll find a group named “Scopes”. Check the “bot” checkbox, and copy the URL that is generated.

    Connecting the bot with the Discord server
    OAuth for bot (Large preview)

    Visit this URL in a new tab, choose your server, and click on “Authorize”. Complete the Captcha, and our bot will now be connected with the server that we chose.

    If you visit the Discord server now, you will see that a notification has already been sent by Discord, and the bot is now also showing up in the members’ list on the right side.

    Adding Functionality to the Bot

    Now that our bot is connected with the server, if you send a “Hello” to the server, the bot will reply with a “Hi”. This is just an introduction to the Discord API. The real fun is about to start.

    To familiarize ourselves a bit more with the Discord.js module, let’s add functionality that sends a joke whenever a particular command is received. This is similar to what we have just done.

    Adding A Random Joke Function To The Bot

    To make this part clearer and easier to understand, we aren’t going to use any APIs. The jokes that our bot will return will be a simple array. A random number will be generated each time within the range of the array, and that specific location of the array will be accessed to return a joke.

    In case you have ever used functionality provided by a bot in Discord, you might have noticed that some special character distinguishes normal messages from special commands. I am going to use a ? in front of our commands to make them look different than normal messages. So, our joke command will be ?joke.

    We will create an array named jokes in our app.js file. The way we will get a random joke from the array is by using this formula:

    jokes[Math.floor(Math.random() * jokes.length)]
    

    The Math.random() * jokes.length formula will generate a random decimal number between 0 and the length of the array. The Math.floor method will then round that number down to the nearest whole number, giving us a valid array index.

    If you console.log() Math.floor(Math.random() * jokes.length) a few times, you’ll get a better understanding. Finally, using that number as the index, jokes[] will give us a random joke from the jokes array.
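
    For example, running the expression a handful of times makes its behaviour obvious:

    // Each run prints a whole number between 0 and jokes.length - 1,
    // i.e. a valid index into the jokes array.
    for (let i = 0; i < 5; i++) {
      console.log(Math.floor(Math.random() * jokes.length));
    }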

    You might have noticed that our first code was used to reply to our message. But we don’t want to get a reply here. Rather, we want to get a joke as a message, without tagging anyone. For this, the Discord.js module has a method named channel.send(). Using this method, we can send messages to the channel where the command was called. So, the complete code up to this point looks like this:

    const Discord = require('discord.js');
    require('dotenv').config();
    
    const client = new Discord.Client();
    
    client.login(process.env.BOT_TOKEN);
    
    client.on('ready', () => console.log('The Bot is ready!'));
    
    // Adding jokes function
    
    // Jokes from dcslsoftware.com/20-one-liners-only-software-developers-understand/
    // www.journaldev.com/240/my-25-favorite-programming-quotes-that-are-funny-too
    const jokes = [
      'I went to a street where the houses were numbered 8k, 16k, 32k, 64k, 128k, 256k and 512k. It was a trip down Memory Lane.',
      '“Debugging” is like being the detective in a crime drama where you are also the murderer.',
      'The best thing about a Boolean is that even if you are wrong, you are only off by a bit.',
      'A programmer puts two glasses on his bedside table before going to sleep. A full one, in case he gets thirsty, and an empty one, in case he doesn’t.',
      'If you listen to a UNIX shell, can you hear the C?',
      'Why do Java programmers have to wear glasses? Because they don’t C#.',
      'What sits on your shoulder and says “Pieces of 7! Pieces of 7!”? A Parroty Error.',
      'When Apple employees die, does their life HTML5 in front of their eyes?',
      'Without requirements or design, programming is the art of adding bugs to an empty text file.',
      'Before software can be reusable it first has to be usable.',
      'The best method for accelerating a computer is the one that boosts it by 9.8 m/s2.',
      'I think Microsoft named .Net so it wouldn’t show up in a Unix directory listing.',
      'There are two ways to write error-free programs; only the third one works.',
    ];
    
    client.on('message', (msg) => {
      if (msg.content === '?joke') {
        msg.channel.send(jokes[Math.floor(Math.random() * jokes.length)]);
      }
    });
    

    I have removed the “Hello”/“Hi” part of the code because that is of no use to us anymore.

    Now that you have a basic understanding of the Discord.js module, let’s go deeper. The module can do a lot more — for example, adding roles to users, or banning or kicking them. For now, we will build a simple reaction-role system.

    Building A Reaction-Role System

    Whenever a user reacts to a particular message in a particular channel with a special emoji, the role tied to that emoji will be given to the user. The implementation will be very simple. But before building this reaction-role system, we have to understand partials.

    Partials

    Partial is a Discord.js concept. Discord.js usually caches all messages, which means that it stores them in a collection. When something happens to a cached message, like receiving a reply or a reaction, an event is emitted. But messages sent before the bot started are not cached, so reacting to them will not emit any event unless we fetch them before we use them. Version 12 of the Discord.js library introduced the concept of partials: if we want to capture such uncached events, we have to opt in to partials. The library has five types of partials:

    1. USER
    2. CHANNEL
    3. GUILD_MEMBER
    4. MESSAGE
    5. REACTION

    In our case, we will need only three types of partials, matching the constructor code below:

    • MESSAGE, the message being reacted to;
    • CHANNEL, the channel that message was sent in;
    • REACTION, the reaction given by the user to the message.

    The documentation has more about partials.

    The Discord.js library provides a very easy way to use partials. We just need to add a single line of code, passing an object in the Discord.Client() constructor. The new constructor looks like this:

    const client = new Discord.Client({
      partials: ['MESSAGE', 'REACTION', 'CHANNEL'],
    });
    

    Creating Roles On The Discord Server

    To enable the reaction-role system, we have to create some roles. The first role we are going to create is the bot role. To create a role, go to “Server Settings”:

    Open server settings to create roles
    Server settings option (Large preview)

    In the server settings, go to the “Roles” option, and click on the small plus icon (+) beside where it says “Roles”.

    Creating roles in Discord
    Adding roles (Large preview)

    First, let’s create the bot role, and make sure to check the “Manage Roles” option in the role options menu. Once the bot role is created, you can add some more roles. I’ve added js, c++, and python roles. You don’t have to give them any special ability, but it’s an option.

    Here, remember one thing: Discord roles work based on priority. A role can manage any roles below it, but it can’t manage the roles above it. We want our bot role to manage the js, c++, and python roles, so make sure that the bot role sits above the other roles. Simply drag and drop to change the order of the roles in the “Roles” menu of your server settings.

    When you are done creating roles, assign the bot role to the bot. To give a role, click on the bot’s name in the members’ list on the server’s right side, and then click on the small plus icon (+). It’ll show you all of the available roles. Select the “bot” role here, and you will be done.

    Assigning roles manually
    Assigning roles (Large preview)

    Activating Developer Mode in Discord

    The roles we have created cannot be used by their names in our code. In Discord, everything from messages to roles has its own ID. If you click on the “more” indicator in any message, you’ll see an option named “Copy ID”. This option is available for everything in Discord, including roles.

    Copy ID option in Discord
    Copy ID in Discord (Large preview)

    Most likely, you won’t find this option by default. You’ll have to activate an option called “Developer Mode”. To activate it, head to the Discord settings (not your server settings), right next to your name in the bottom left. Then go to the “Appearance” option under “App Settings”, and activate “Developer Mode” from here. Now you’ll be able to copy IDs.

    messageReactionAdd and messageReactionRemove

    Whenever a reaction is added to a message, the messageReactionAdd event is emitted. And whenever a reaction is removed, the messageReactionRemove event is emitted.

    Let’s continue building the system. As I said, first we need to listen for the messageReactionAdd event. Both the messageReactionAdd and messageReactionRemove events take two parameters in their callback function. The first parameter is reaction, and the second is user. These are pretty self-explanatory.

    Coding the Reaction-Role Functionality

    First, we’ll create a message that describes which emoji will give which role, something like what I’ve done here:

    The reaction-role message on server
    Reaction-role message (Large preview)

    You might be thinking, how are we going to use those emoji in our code? The default emoji are Unicode characters, and we will have to copy the Unicode version. If you type the syntax :emojiName: and hit “Enter”, you will get the emoji with that name. For example, my emoji for the JavaScript role is the fox; so, if I type :fox: and hit “Enter” in Discord, I’ll get a fox emoji. Similarly, I would use :tiger: and :snake: to get those emoji. Keep them handy in your Discord setup; we will need them later.

    Getting Unicode emoji
    Getting Unicode emoji (Large preview)
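
    If you’d rather not copy the Unicode characters by hand, an optional trick (not part of the original setup, just a quick aid) is to temporarily log the emoji name of incoming reactions while the bot is running:

    // Temporary helper: react to any recent message the bot can see,
    // and the emoji's name (its Unicode character) is printed to the console.
    client.on('messageReactionAdd', (reaction) => {
      console.log(reaction.emoji.name);
    });

    React to a message, copy the printed character from your terminal, and then remove the helper. We will compare against these characters in the code below.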

    Here is the starting code. This part of the code simply checks for some edge cases. Once we understand these cases, we’ll implement the logic of the reaction-role system.

    // Adding reaction-role function
    client.on('messageReactionAdd', async (reaction, user) => {
      if (reaction.message.partial) await reaction.message.fetch();
      if (reaction.partial) await reaction.fetch();
      if (user.bot) return;
      if (!reaction.message.guild) return;
    });
    

    We are passing in an asynchronous function. In the callback, the first thing we do is check whether the message is a partial. If it is, we fetch it, which caches it (Discord.js stores fetched data in a Map-like collection). Similarly, we check whether the reaction itself is a partial and fetch it if so. Then, we check whether the user who reacted is a bot, because we don’t want to assign roles to bots reacting to our messages. Finally, we check whether the message was sent on a server. Discord.js uses guild as an alternative name for a server. If the message is not on a server, we stop the function.

    Our bot will only assign the roles if the message is in the roles channel. If you right-click on the roles channel, you’ll see a “Copy ID” option. Copy the ID and follow along.

    if (reaction.message.channel.id == '802209416685944862') {
      if (reaction.emoji.name === '🦊') {
        await reaction.message.guild.members.cache
          .get(user.id)
          .roles.add('802208163776167977');
      }
      if (reaction.emoji.name === '🐯') {
        await reaction.message.guild.members.cache
          .get(user.id)
          .roles.add('802208242696192040');
      }
      if (reaction.emoji.name === '🐍') {
        await reaction.message.guild.members.cache
          .get(user.id)
          .roles.add('802208314766524526');
      }
    } else return;
    

    Above is the rest of the code in the callback. We use the reaction.message.channel.id property to get the ID of the channel and compare it with the roles channel ID that we just copied. If it matches, we check which emoji was used for the reaction: reaction.emoji.name returns that emoji, and we compare it with our Unicode version. If they match, we look the member up through the reaction.message.guild.members.cache property and await the role assignment.

    The cache is a collection that stores the data. These collections are a JavaScript Map with additional utilities. One of those utilities is the get method: to retrieve anything by ID, we simply pass the ID into it. So, we pass user.id into get to retrieve the member. Finally, the roles.add method adds the role to that member, and we pass it the role ID. You can find the role ID in your server settings’ “Roles” option; right-clicking on a role gives you the option to copy its ID. And we are done adding the reaction-role system to our bot!

    We can add functionality for a role to be removed when a user removes their reaction from the message. This is exactly the same as our code above, the only difference being that we are listening for the messageReactionRemove event and using the roles.remove method. So, the complete code for adding and removing roles would be like this:

    // Adding reaction-role function
    client.on('messageReactionAdd', async (reaction, user) => {
      if (reaction.message.partial) await reaction.message.fetch();
      if (reaction.partial) await reaction.fetch();
      if (user.bot) return;
      if (!reaction.message.guild) return;
      if (reaction.message.channel.id == '802209416685944862') {
        if (reaction.emoji.name === '🦊') {
          await reaction.message.guild.members.cache
            .get(user.id)
            .roles.add('802208163776167977');
        }
        if (reaction.emoji.name === '🐯') {
          await reaction.message.guild.members.cache
            .get(user.id)
            .roles.add('802208242696192040');
        }
        if (reaction.emoji.name === '🐍') {
          await reaction.message.guild.members.cache
            .get(user.id)
            .roles.add('802208314766524526');
        }
      } else return;
    });
    
    // Removing reaction roles
    client.on('messageReactionRemove', async (reaction, user) => {
      if (reaction.message.partial) await reaction.message.fetch();
      if (reaction.partial) await reaction.fetch();
      if (user.bot) return;
      if (!reaction.message.guild) return;
      if (reaction.message.channel.id == '802209416685944862') {
        if (reaction.emoji.name === '🦊') {
          await reaction.message.guild.members.cache
            .get(user.id)
            .roles.remove('802208163776167977');
        }
        if (reaction.emoji.name === '🐯') {
          await reaction.message.guild.members.cache
            .get(user.id)
            .roles.remove('802208242696192040');
        }
        if (reaction.emoji.name === '🐍') {
          await reaction.message.guild.members.cache
            .get(user.id)
            .roles.remove('802208314766524526');
        }
      } else return;
    });
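
    As an optional tidy-up, the three near-identical if blocks can be collapsed into a small lookup table. This is only a sketch of the same logic, not a new feature; the channel, emoji, and role IDs are the ones from my server, so swap in your own:

    // Optional refactor: map each emoji to the ID of the role it grants.
    const emojiToRole = {
      '🦊': '802208163776167977', // js
      '🐯': '802208242696192040', // c++
      '🐍': '802208314766524526', // python
    };

    client.on('messageReactionAdd', async (reaction, user) => {
      if (reaction.message.partial) await reaction.message.fetch();
      if (reaction.partial) await reaction.fetch();
      if (user.bot) return;
      if (!reaction.message.guild) return;
      if (reaction.message.channel.id !== '802209416685944862') return;

      const roleId = emojiToRole[reaction.emoji.name];
      if (!roleId) return;
      await reaction.message.guild.members.cache.get(user.id).roles.add(roleId);
    });

    The same table works for messageReactionRemove; just call roles.remove(roleId) instead of roles.add(roleId).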
    

    Adding Twitter Forwarding Function

    The next function we are going to add to our bot is going to be a bit more challenging. We want to focus on a particular Twitter account, so that any time the Twitter account posts a tweet, it will be forwarded to our Discord channel.

    Before starting to code, we will have to get the required tokens from the Twitter developer portal. Visit the portal and create a new app by clicking the “Create App” button in the “Overview” option. Give your app a name, copy all of the tokens, and paste them in the .env file of your code, with the proper names. Then click on “App Settings”, and enable the three-legged OAuth feature. Add the URLs below as callback URLs for testing purposes:

    http://127.0.0.1/
    https://localhost/
    

    If you own a website, add its address as the website URL and click “Save”. Head over to the “Keys and Tokens” tab, and generate the access keys and tokens. Copy and save them in your .env file. Our work in the Twitter developer portal is done, and we can go back to our text editor to continue coding the bot. To achieve the functionality we want, we have to add another npm package named twit. It is a Twitter API client for Node.js that supports both the REST and streaming APIs.

    First, install the twit package using npm install twit, and require it in your main file:

    const Twit = require('twit');
    

    We have to create a twit instance using the Twit constructor. Pass in an object in the Twit constructor with all of the tokens that we got from Twitter:

    const T = new Twit({
      consumer_key: process.env.API_TOKEN,
      consumer_secret: process.env.API_SECRET,
      access_token: process.env.ACCESS_KEY,
      access_token_secret: process.env.ACCESS_SECRET,
      bearer_token: process.env.BEARER_TOKEN,
      timeout_ms: 60 * 1000,
    });
    

    A timeout is also specified here. We want all of the forwards to be in a specific channel. I have created a separate channel called “Twitter forwards”, where all of the tweets will be forwarded. I have already explained how you can create a channel. Create your own channel and copy the ID.

    // Destination Channel Twitter Forwards
    const dest = '803285069715865601';
    

    Now we have to create a stream. A stream API allows access to a stream of data over the network. The data is broken into smaller chunks, and then it is transmitted. Here is our code to stream the data:

    // Create a stream to follow tweets
    const stream = T.stream('statuses/filter', {
      follow: '32771325', // @Stupidcounter
    });
    

    In the follow key, I am specifying @Stupidcounter because it tweets every minute, which is great for our testing purposes. You can provide any Twitter handle’s ID to get its tweets. TweeterID will give you the ID of any handle. Finally, use the stream.on method to get the data and stream it to the desired channel.

    stream.on('tweet', (tweet) => {
      const twitterMessage = `Read the latest tweet by ${tweet.user.name} (@${tweet.user.screen_name}) here: https://twitter.com/${tweet.user.screen_name}/status/${tweet.id_str}`;
      client.channels.cache.get(dest).send(twitterMessage);
      return;
    });
    

    We are listening for the tweet event and, whenever that occurs, passing the tweet to a callback function. We’ll build a custom message; in our case, the message will be:

    Read the latest tweet by The Count (@Stupidcounter) here: https://twitter.com/Stupidcounter/status/1353949542346084353
    

    Again, we are using the client.channels.cache.get method to get the desired channel and the .send method to send our message. Now, run your bot and wait for a minute. The Twitter message will be sent to the server.

    The bot sends the tweet to Discord
    Tweets forwarded to Discord (Large preview)

    So, here is the complete Twitter forwarding code:

    // Adding Twitter forward function
    const Twit = require('twit');
    const T = new Twit({
      consumer_key: process.env.API_TOKEN,
      consumer_secret: process.env.API_SECRET,
      access_token: process.env.ACCESS_KEY,
      access_token_secret: process.env.ACCESS_SECRET,
      bearer_token: process.env.BEARER_TOKEN,
      timeout_ms: 60 * 1000,
    });
    
    // Destination channel Twitter forwards
    const dest = '803285069715865601';
    // Create a stream to follow tweets
    const stream = T.stream('statuses/filter', {
      follow: '32771325', // @Stupidcounter
    });
    
    stream.on('tweet', (tweet) => {
      const twitterMessage = `Read the latest tweet by ${tweet.user.name} (@${tweet.user.screen_name}) here: https://twitter.com/${tweet.user.screen_name}/status/${tweet.id_str}`;
      client.channels.cache.get(dest).send(twitterMessage);
      return;
    });
    

    All of the functions that we want to add are done. The only thing left now is to deploy it to the cloud. We’ll use Heroku for that.

    Deploying The Bot To Heroku

    First, create a new file in the root directory of your bot code’s folder. Name it Procfile. This Procfile will contain the commands to be executed when the program starts. In the file, we will add worker: node app.js, which will inform Heroku about which file to run at startup.
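
    For reference, the entire Procfile is that single line (assuming your entry point is named app.js, as it is in this tutorial):

    worker: node app.js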

    After adding the file, let’s initiate a git repository, and push our code to GitHub (how to do so is beyond the scope of this article). One thing I would suggest is to add the node_modules folder and the .env file to the .gitignore file, so that your package size remains small and sensitive information does not get shared outside.
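
    A minimal .gitignore for this project can be as short as this (a sketch; extend it if you have other local files to exclude):

    # Keep dependencies and secrets out of the repository
    node_modules/
    .env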

    Once you’ve successfully pushed all of your code to GitHub, visit the Heroku website. Log in, or create an account if you don’t have one already. Click on the “New” button to create a new app, and name it as you wish. Choose the “Deployment Method” as GitHub.

    Choose GitHub as deployment method
    Choose GitHub as the deployment method (Large preview)

    Search for your app, and click on connect once you find it. Enable automatic deployment from the “Deploy” menu, so that each time you push changes to the code, the code will get deployed automatically to Heroku.

    Now, we have to add the configuration variables to Heroku, which is very easy. Go to the “Settings” option, below your app’s name, and click on “Reveal Config Vars”.

    Revealing and adding configuration variables to Heroku
    Config Vars on Heroku (Large preview)

    Here, add the configuration variables as key-value pairs, which are the same values that you have in your local .env file. Once you are done, go to the “Deploy” tab again, and click on “Deploy Branch” under “Manual Deploy”.

    The last thing to consider is that Heroku expects a web process to bind to a port within 60 seconds of starting; because our bot doesn’t serve any web traffic, the default web dyno would crash with a boot timeout error and stop the bot from executing. To prevent this, we have to change the process type of the app. In Heroku, if you go to the “Resources” tab of your app, you’ll see that, under “Free Dynos”, web npm start is enabled. We have to turn this off and enable worker node app.js instead. So, click on the edit button beside web npm start, turn it off, and enable the worker node app.js option. Confirm the change, restart all of your dynos, and we are done!
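
    If you prefer the terminal over the dashboard, the same switch can also be made with the Heroku CLI, assuming it is installed and you are logged in (your-app-name below is a placeholder):

    # Stop the web process and run the bot as a worker instead
    heroku ps:scale web=0 worker=1 --app your-app-name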

    Conclusion

    I hope you’ve enjoyed reading this article. I tried to cover all of the basics that you need to understand in developing a complicated bot, and Discord.js’ documentation is a great place to learn more. You will also find all of the code in the GitHub repository.

    Smashing Editorial
    (vf, il, al)


    web design

    Material Design Text Fields Are Badly Designed — Smashing Magazine

    02/24/2021

    About The Author

    Adam Silver is an interaction designer focused on design systems and inclusive design. He loves to help organizations deliver products and services so that …
    More about
    Adam

    Where to put the label in a web form? In the early days, we talked about left-aligned labels versus top-aligned labels. These days we talk about floating labels. Let’s explore why they aren’t a very good idea, and what to use instead.

    I’ve been designing forms for over 20 years now, and I’ve tested many of them for large organizations like Boots, Just Eat and Gov.uk. One topic that comes up a lot with forms is: where to put the label. In the early days, we talked about left-aligned labels versus top-aligned labels.

    These days the focus is more on placeholders that replace labels, and on float labels. The latter start off inside the input. When the user starts typing, the label ‘floats’ up to make space for the answer:

    Material Design text fields use the float label pattern
    Material Design text fields use the float label pattern. (Large preview)

    Some people assume float labels are best because Google’s Material Design uses them. But in this case, Google is wrong.

    Instead, I recommend using conventional text fields which have:

    • The label outside the input (to tell the user what to type),
    • A distinct border all the way around (to make it obvious where the answer goes).
    A conventional text field

    In this article, I’ll explain why I always recommend conventional text fields and why Google is wrong about using float labels for Material Design.

    Float Labels Are Better Than A Common Alternative But They’re Still Problematic

    Float labels were designed to address some problems with a commonly used alternative: placeholder labels. That’s where the label is placed inside the input but disappears when the user starts typing:

    Placeholder label text field.

    Having seen lots of people interacting with forms first-hand through my work, I know that placeholder labels are problematic.

    This is because, for example, they have poor contrast, can be mistaken for an actual answer, and disappear when the user starts typing.

    Float labels don’t solve two of these problems: poor contrast and the chance of the label being mistaken for an actual answer. And while they attempt to address the problem of the label disappearing, in doing so, float labels introduce lots of other problems, too.

    For example, the size of the label has to be tiny in order to fit inside the box, which can make it hard to read. And long labels cannot be used as they’ll get cropped by the input:

    Long labels get cut off with Material Design text fields. (Large preview)

    Conventional Text Fields Are Better Than Both Placeholder Labels And Float Labels

    Conventional text fields don’t have the above problems because it’s clear where the answer goes and they have a legible, readily available label. The labels can be of any length and hint text, should it be needed, is easy to accommodate as well.

    Conventional text fields can easily contain long label text.

    I’ve watched hundreds of people interact with forms and seen many of them struggle. But not once was that down to the use of a conventional text field. They take up a bit more vertical space. But saving space at the cost of clarity, ease of use and accessibility is a bad tradeoff to make.

    Google’s Test Didn’t Include Conventional Text Fields

    Google’s article, “The Evolution of Material Design’s Text Fields”, shows that only two variants were tested, both of which used float labels.

    The 2 variants of text fields that Google tested: float labels with underlines and a white transparent background (left) and float labels with grey backgrounds (right). (Large preview)

    Crucially, the test didn’t include conventional text fields, which means Google hasn’t actually compared the usability of its float label design against conventional text fields. And having read Google’s responses to the comments on the article, it seems that usability was not their top priority.

    Google Inadvertently Prioritized Aesthetics Over Usability

    I looked into why Material Design uses float labels and discovered comments from Michael Gilbert, a designer who worked on them.

    The comments indicate that they tried to balance aesthetics and usability.

    Matt Ericsson commented:

    This seems to imply that there was more of an emphasis on form over function […] or even a desire to simply differentiate Material components from tried and true (boring) input boxes. […] was there research conducted on the original inputs that validated that they met a goal that was not being met by box inputs? Is there something that stood out as valuable going with a simple underline?

    Google’s response:

    The design decisions behind the original text field predate my time on the team, but I think the goal was likely similar [to this research]: Balance usability with style. I believe at the time we were leaning towards minimalism, emphasizing color and animation to highlight usability.

    Denis Lesak commented:

    […] this is one of those moments where I wonder why all of this research was necessary as I have long thought that the old design was flawed for all the reasons you mentioned.

    Google’s response:

    […] the goal of the research here wasn’t to simply determine that one version was better than another […]. This study was instead focused on identifying the characteristics of the design that led to the most usable, most beautiful experiences.

    Even though Google aimed for balance, in the end they inadvertently sacrificed usability for ‘minimalism’ and ‘a beautiful experience’.

    But aesthetics and usability are not in competition with each other. Something can look good without causing problems for users. In fact, these qualities go hand in hand.

    An example form using conventional text fields that look good and function well too. (Large preview)

    Conclusion

    Float labels are certainly less problematic than placeholder labels. But conventional text fields are better than float labels because they look like form fields and the label is easy to read and available at all times.

    Aesthetics are important, but putting the label inside the box does not make it look beautiful. What it does do, however, is make it demonstrably harder to use.

    Smashing Editor’s note

    At the moment of writing, here at Smashing Magazine we are actually using the floating label pattern that Adam heavily criticizes in this article. From our usability tests we can confirm that floating labels aren’t a particularly great idea, and we are looking into adjusting the design — by moving to conventional text fields — soon.

    Acknowledgments

    Thanks to Caroline Jarrett and Amy Hupe for helping me write this. And thanks to Maximilian Franzke, Olivier Van Biervliet, Dan Vidrasan, Fabien Marry for their feedback on an earlier draft of this article.

    Smashing Editorial
    (vf, yk, il)


    web design

    Create Responsive Image Effects With CSS Gradients And aspect-ratio — Smashing Magazine

    02/23/2021

    About The Author

    Stephanie Eckles is a front-end focused SWE at Microsoft. She’s also the author of ModernCSS.dev which provides modern solutions to old CSS problems as in-depth …
    More about
    Stephanie

    A classic problem in CSS is maintaining the aspect ratio of images across related components, such as cards. The newly supported aspect-ratio property in combination with object-fit provides a remedy to this headache of the past! Let’s learn to use these properties, in addition to creating a responsive gradient image effect for extra flair.

    To prepare for our future image effects, we’re going to set up a card component that has a large image at the top followed by a headline and description. The common problem with this setup is that we may not always have perfect control over what the image is, and more importantly to our layout, what its dimensions are. And while this can be resolved by cropping ahead of time, we can still encounter issues due to responsively sized containers. A consequence is uneven positions of the card content which really stands out when you present a row of cards.

    Another previous solution besides cropping may have been to swap from an inline img to a blank div that only existed to present the image via background-image. I’ve implemented this solution many times myself in the past. One advantage this has is using an older trick for aspect ratio which uses a zero-height element and sets a padding-bottom value. Setting a padding value as a percent results in a final computed value that is relative to the element’s width. You may have also used this idea to maintain a 16:9 ratio for video embeds, in which case the padding value is found with the formula: 9/16 = 0.5625 * 100% = 56.25%. But we’re going to explore two modern CSS properties that don’t involve extra math, give us more flexibility, and also allow keeping the semantics provided by using a real img instead of an empty div.
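
    For context, a minimal sketch of that older padding hack might look like this (the .video-embed class name and the iframe child are hypothetical, purely for illustration):

    /* Old-school aspect ratio box: padding-bottom is relative to the
       element's width, so 56.25% produces a 16:9 area. */
    .video-embed {
      position: relative;
      height: 0;
      padding-bottom: 56.25%; /* 9 / 16 * 100% */
    }

    .video-embed > iframe {
      position: absolute;
      top: 0;
      left: 0;
      width: 100%;
      height: 100%;
    }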

    First, let’s define the HTML semantics, including use of an unordered list as the cards’ container:

    <ul class="card-wrapper">
      <li class="card">
        <img src="http://www.smashingmagazine.com/" alt="http://www.smashingmagazine.com/">
        <h3>A Super Wonderful Headline</h3>
        <p>Lorem ipsum sit dolor amit</p>
      </li>
      <!-- additional cards -->
    </ul>
    

    Next, we’ll create a minimal set of baseline styles for the .card component. We’ll set some basic visual styles for the card itself, a quick update to the expected h3 headline, then essential styles to begin to style the card image.

    .card {
      background-color: #fff;
      border-radius: 0.5rem;
      box-shadow: 0.05rem 0.1rem 0.3rem -0.03rem rgba(0, 0, 0, 0.45);
      padding-bottom: 1rem;
    }
    
    .card > :last-child {
      margin-bottom: 0;
    }
    
    .card h3 {
      margin-top: 1rem;
      font-size: 1.25rem;
    }
    
    img {
      border-radius: 0.5rem 0.5rem 0 0;
      width: 100%;
    }
    
    img ~ * {
      margin-left: 1rem;
      margin-right: 1rem;
    }
    

    The last rule uses the general sibling combinator to add a horizontal margin to any element that follows the img since we want the image itself to be flush with the sides of the card.

    And our progress so far leads us to the following card appearance:

    One card with the baseline styles previously described applied and including an image from Unsplash of a dessert on a small plate next to a hot beverage in a mug. (Large preview)

    Finally, we’ll create the .card-wrapper styles for a quick responsive layout using CSS grid. This will also remove the default list styles.

    .card-wrapper {
      list-style: none;
      padding: 0;
      margin: 0;
      display: grid;
      grid-template-columns: repeat(auto-fit, minmax(30ch, 1fr));
      grid-gap: 1.5rem;
    }
    

    Note: If this grid technique is unfamiliar to you, review the explanation in my tutorial about modern solutions for the 12-column grid.

    With this applied and with all cards containing an image with a valid source path, our .card-wrapper styles give us the following layout:

    Three cards are shown in a row due to the card wrapper layout styles applied. Each card has a unique image that has different natural aspect ratios, with the last card having a vertically oriented image that is more than twice the height of the other card images. (Large preview)

    As demonstrated in the preview image, these baseline styles aren’t enough to properly contain the images given their varying natural dimensions. We’re in need of a method to constrain these images uniformly and consistently.

    Enable Uniform Image Sizes with object-fit

    As noted earlier, you may previously have made an update in this scenario to change the images to be added via background-image instead and used background-size: cover to handle nicely resizing the image. Or you may have tried to enforce cropping ahead of time (still a worthy goal since any image size reduction will improve performance!).

    Now, we have the property object-fit available which enables an img tag to act as the container for the image. And, it comes with a cover value as well that results in a similar effect as the background image solution, but with the bonus of retaining the semantics of an inline image. Let’s apply it and see how it works.

    We do need to pair it with a height dimension for extra guidance on how we want the image container to behave (recall we had already added width: 100%). And we’re going to use the max() function to select either 10rem or 30vh depending on which is larger in a given context, which prevents the image height from shrinking too much on smaller viewports or when the user has set a large zoom.

    img {
      /* ...existing styles */
      object-fit: cover;
      height: max(10rem, 30vh);
    }
    

    Bonus Accessibility Tip: You should always test your layouts with 200% and 400% zoom on desktop. While there isn’t currently a zoom media query, functions like max() can help resolve layout issues. Another context where this technique is useful is spacing between elements (see the aside below).
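
    As an aside, here is a hypothetical example of that spacing idea (the .stack class is illustrative and not part of the card styles in this tutorial):

    /* Vertical rhythm that never drops below 1rem but grows with the viewport. */
    .stack > * + * {
      margin-top: max(1rem, 3vh);
    }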

    With this update, we’ve definitely improved things, and the visual result is as if we’d use the older background image technique:

    The three-card images now appear to have a uniform height and the image contents are centered within the image as if it was a container. (Large preview)

    Responsively Consistent Image Sizing With aspect-ratio

    When using object-fit by itself, one downside is that we still need to set some dimension hints.

    An upcoming property (currently available in Chromium browsers) called aspect-ratio will enhance our ability to consistently size images.

    Using this property, we can define a ratio to resize the image instead of setting explicit dimensions. We’ll continue to use it in combination with object-fit to ensure these dimensions only affect the image as a container, otherwise, the image could appear distorted.

    Here is our full updated image rule:

    img {
      border-radius: 0.5rem 0.5rem 0 0;
      width: 100%;
      object-fit: cover;
      aspect-ratio: 4/3;
    }
    

    We’re going to start with an image ratio of 4:3 for our card context, but you could choose any ratio. For example, 1:1 for a square, or 16:9 for standard video embeds.

    Here are the updated cards, although it will probably be difficult to notice the visual difference in this particular instance since the aspect ratio happens to closely match the appearance we achieved by setting the height for object-fit alone.

    The three-card images have identical width and height dimensions, which are slightly different than the previous object-fit solution. (Large preview)

    Setting an aspect-ratio results in the ratio being maintained as the elements grow or shrink, whereas when only setting object-fit and height the image ratio will constantly be in flux as the container dimensions change.

    Adding Responsive Effects With CSS Gradients And Functions

    OK, now that we know how to setup consistently sized images, let’s have some fun with them by adding a gradient effect!

    Our goal with this effect is to make it appear as though the image is fading into the card content. You may be tempted to wrap the image in its own container to add the gradient, but thanks to the work we’ve already done on the image sizing, we can work out how to safely do it on the main .card.

    The first step is to define a gradient. We’re going to use a CSS custom property to add in the gradient colors to enable easily swapping the gradient effect, starting with a blue to pink. The last color in the gradient will always be white to maintain the transition into the card content background and create the “feathered” edge.

    .card {
      --card-gradient: #5E9AD9, #E271AD;
    
      background-image: linear-gradient(
        var(--card-gradient),
        white max(9.5rem, 27vh)
      );
      /* ...existing styles */
    }
    

    But wait — is that a CSS max() function? In a gradient? Yes, it’s possible, and it’s the magic that makes this gradient effective responsively!

    However, if I were to add a screenshot, we wouldn’t actually see the gradient having any effect on the image yet. For that, we need to bring in the mix-blend-mode property, and in this scenario we’ll use the overlay value:

    img {
      /* ...existing styles */
      mix-blend-mode: overlay;
    }
    

    The mix-blend-mode property is similar to applying the layer blending styles available in photo manipulation software like Photoshop. And the overlay value will have the effect of allowing the medium tones in the image to blend with the gradient behind it, leading to the following result:

    Each card image has a gradient blending effect that starts with a light blue at the top, that blends to a reddish pink, and then ends by feathering into a white prior to the rest of the card text content. (Large preview)

    Now, at this point, we are relying on the aspect-ratio value alone to resize the image. And if we resize the container and cause the card layout to reflow, the changing image height causes inconsistencies in where the gradient fades to white.

    So we’ll add a max-height property as well that also uses the max() function and contains values slightly greater than the ones in the gradient. The resulting behavior is that the gradient will (almost always) correctly line up with the bottom of the image.

    img {
      /* ...existing styles */
      max-height: max(10rem, 30vh);
    }
    

    It’s important to note that adding a max-height alters the aspect-ratio behavior. Instead of always using the exact ratio, it will be used only when there’s enough allotted space given the new extra constraint of the max-height.

    However, aspect-ratio will still continue to ensure the images resize consistently as was the benefit over only object-fit. Try commenting out aspect-ratio in the final CodePen demo to see the difference it’s making across container sizes.

    Since our original goal was to enable consistently responsive image dimensions, we’ve still hit the mark. For your own use case, you may need to fiddle with the ratio and height values to achieve your desired effect.

    Alternate: mix-blend-mode And Adding A Filter

    Using overlay as the mix-blend-mode value was the best choice for the fade-to-white effect we were looking for, but let’s try an alternate option for a more dramatic effect.

    We’re going to update our solution to add a CSS custom property for the mix-blend-mode value and also update the color values for the gradient:

    .card {
      --card-gradient: tomato, orange;
      --card-blend-mode: multiply;
    }
    
    img {
      /* ...existing styles */
      mix-blend-mode: var(--card-blend-mode);
    }
    

    The multiply value has a darkening effect on mid-tones, but keeps white and black as is, resulting in the following appearance:

    Each card image has a strong orange tint from the new gradient that goes from a red-orange to pure orange. White areas are still white and black areas are still black. (Large preview)

    While we’ve lost the fade and now have a hard edge on the bottom of the image, the white part of our gradient is still important to ensure that the gradient ends prior to the card content.

    One additional modification we can add is the use of filter and, in particular, use the grayscale() function to remove the image colors and therefore have the gradient be the only source of image coloring.

    img {
      /* ...existing styles */
      filter: grayscale(100);
    }
    

    Using the value of grayscale(100) results in complete removal of the image’s natural colors and transforming it into black and white. Here’s the update for comparison with the previous screenshot of its effect when using our orange gradient with multiply:

    Now each card image still has the orange gradient but all other color is removed and replaced by shades of gray. (Large preview)

    Use aspect-ratio As A Progressive Enhancement

    As previously mentioned, currently aspect-ratio is only supported in the latest version of Chromium browsers (Chrome and Edge). However, all browsers support object-fit and that along with our height constraints results in a less-ideal but still acceptable result, seen here for Safari:

    The card image height is capped, but each card has a slightly different realized height. (Large preview)

    Without aspect-ratio functioning, the result here is that ultimately the image height is capped but the natural dimensions of each image still lead to some variance between card image heights. You may want to instead change to adding a max-height or make use of the max() function again to help make a max-height more responsive across varying card sizes.

    Extending The Gradient Effects

    Since we defined the gradient color stops as a CSS custom property, we have ready access to change them under different contexts. For example, we might change the gradient to more strongly feature one of the colors if the card is hovered or has one of its children in focus.

    First, we’ll update each card h3 to contain a link, such as:

    <h3><a href="http://www.smashingmagazine.com/">A Super Wonderful Headline</a></h3>
    

    Then, we can use one of our newest available selectors — :focus-within — to alter the card gradient when the link is in focus. For extra coverage of possible interactions, we’ll couple this with :hover. And, we’ll reuse our max() idea to assign a single color to take over coverage of the image portion of the card. The downside to this particular effect is that gradient stops and color changes aren’t reliably animatable — but they will be soon thanks to CSS Houdini.

    To update the color and add the new color stop, we just need to re-assign the value of --card-gradient within this new rule:

    .card:focus-within,
    .card:hover {
      --card-gradient: #24a9d5 max(8.5rem, 20vh);
    }
    

    Our max() values are less than the ones originally used for the white stop, in order to maintain the feathered edge. If we used the same values, the new color would meet the white and create a hard, straight-edged separation.

    In creating this demo, I originally tried an effect that used transform with scale for a zoom-in effect. But I discovered that due to mix-blend-mode being applied, the browser would not consistently repaint the image which caused an unpleasant flickering. There will always be trade-offs in requesting the browser perform CSS-only effects and animations, and while it’s very cool what we can do, it’s always best to check the performance impact of your effects.

    Have Fun Experimenting!

    Modern CSS has given us some awesome tools for updating our web design toolkits, with aspect-ratio being the latest addition. So go forth, and experiment with object-fit, aspect-ratio, and adding functions like max() into your gradients for some fun responsive effects! Just be sure to double-check things cross-browser (for now!) and across varying viewports and container sizes.

    Here is the CodePen including the features and effects we reviewed today:

    See the Pen [Responsive Image Effects with CSS Gradients and aspect-ratio](https://codepen.io/smashingmag/pen/WNoERXo) by Stephanie Eckles.


    Looking for more? Make sure you check out our CSS Guide here on Smashing →

    Smashing Editorial
    (vf, il)


    web design

    Key Updates And What They Mean — Smashing Magazine

    02/22/2021

    As digital practitioners, we have all felt GDPR’s impact on every facet of our professional and personal lives. Whether you’re addicted to Instagram, message your family on WhatsApp, buy products from Etsy or Google information, no one has escaped the rules that were introduced in 2018.

    The EU’s directives have impacted virtually every digital professional as products and services are designed with GDPR in mind, regardless of whether you’re a web design company in Wisconsin or a marketer in Malta. The far-reaching implications of GDPR don’t just impact how data should be processed, how products should be built and how data is transferred securely within and between organisations. It defines international data transfer agreements like that between Europe and America.

    Kevin Kelly, one of the world’s brightest digital futurists, claims that ‘Technology is as great a force as nature’. What he means is that user data and information technology are driving one of the most profound periods of change in human history since the invention of language. Just look at what is happening as governments and the tech multinationals grapple to control the Internet.

    Last week alone, as the Australian government moved to force platform owners to pay publishers for the content that’s shared on their platforms, Facebook decided to block news for Australian users, prompting a huge uproar from the Australian government.

    And that’s in addition to previous controversies (the organisation of the U.S. Capitol riot, the Cambridge Analytica scandal) at the intersection where government and technology meet.

    In this article, we’ll look at how GDPR has evolved since 2018. We’ll run through some updates from the EU, some key developments, and where GDPR is likely to evolve. We’ll explore what that means for us, as designers and developers. And we’ll look at what that means for companies both inside and outside the EU.

    In the next article, we’ll focus on cookie consent and the paradox where marketers are heavily reliant on Google Analytics cookie data but need to comply with regulations. And then we’ll take a deep dive into first-party ad tracking as we start to see moves away from third-party cookies.

    • Part 1: GDPR, Key Updates And What They Mean
    • Part 2: GDPR, Cookie Consent and 3rd Parties (next week)
      Subscribe to our newsletter so you don’t miss it.

    A Quick Recap Of GDPR

    Let’s start by reminding ourselves what GDPR is. The GDPR became law within the EU on 25 May 2018. It’s based on 7 key principles:

    1. Lawfulness, fairness and transparency
      You must process data so that people understand what, how, and why you’re processing their data.
    2. Purpose limitation
      You should only collect data for clear, specified, and legitimate purposes. You can’t then process it in ways that are incompatible with your original purposes.
    3. Data minimization
      You should only collect the data you need.
    4. Accuracy
      Your data must be accurate and kept up to date. Inaccurate data should be erased or corrected.
    5. Storage limitation
      If data can be linked to individuals, you can only keep it for as long as you need to carry out the purposes you specified. (Caveats for scientific, statistical, or historical research use.)
    6. Integrity and confidentiality (i.e. security)
      You must ensure the personal data you hold is processed securely. You must protect it from unauthorized or unlawful processing and against accidental loss, destruction, or damage.
    7. Accountability
      You are now responsible for the data you hold and should be able to demonstrate your compliance with the GDPR.
    Diagram showing the seven principles of GDPR: lawfulness, integrity, storage and purpose limitations, data minimisation and accuracy, and accountability - overlaid with transparency, privacy and controls
    GDPR’s principles are based on transparency, privacy and user control. (Image credit: Cyber-Duck) (Large preview)

    Some Definitions

    • CJEU
      Court of Justice of the European Union. This court’s decisions clarify EU laws like GDPR.
    • DPAs
      National Data Protection Authorities. Each EU country has one. GDPR is enforced, and fines are issued, at the national level by these bodies. The UK equivalent is the Information Commissioner’s Office (ICO). In the United States, GDPR-style data privacy is largely legislated by each state.
    • European Commission
      The executive branch of the European Union (essentially the EU’s civil service). The European Commission drafts legislation including the GDPR.
    • GDPR
      The 2018 General Data Protection Regulation.

    Key Updates From The EU

    GDPR hasn’t stood still since May 2018. Here’s a quick run-through of what’s happened since it came into effect.

    How Have The EU And Its Member States Implemented GDPR?

    The European Commission reports that GDPR is almost fully implemented across the EU, though some countries — it namechecks Slovenia — have dragged their feet. However, the depth of implementation varies. The EU also says its member countries are, in its opinion, using their new powers fairly.

    However, it has also expressed concern that some divergence and fragmentation are creeping in. GDPR can only work effectively across the EU’s single market if member states are aligned. If the laws diverge, it muddies the water.

    How Does The EU Want GDPR To Develop?

    We know the EU wants it to be easier for individuals to exercise their rights under GDPR. That means cross-border collaboration and class-action lawsuits. It wants to see data portability for consumers beyond banking and telecoms.

    It also wants to make it easier for small and medium-sized enterprises (SMEs) to comply with GDPR. That’s likely to come in the form of extra support and tools such as more standard contractual clauses — essentially templated legalese that SMEs can copy/paste into contracts — as the EU isn’t keen to bend the rules for them.

    Big Development #1: The Unexpectedly Broad Definition Of ‘Joint Controller’

    Right, here’s the first big change since GDPR became law. In two test cases involving Facebook, the Court of Justice of the European Union has defined a far broader interpretation of ‘joint controller’ than expected.

    A joint controller situation arises when two or more controllers both have responsibility for meeting the terms of the GDPR. (Here’s a good explainer from the ICO on joint controllers.) Essentially:

    • When you process customer data, you decide with your fellow joint controller(s) who will manage each step so you’re compliant with the GDPR.
    • However, you all have full responsibility to ensure the entire process is compliant. Each of you is fully accountable to the data protection authority in the country handling any complaints.
    • An individual can raise a complaint against each and all joint controllers.
    • You are all responsible for any damage caused — unless you can prove you have no connection to the event that’s caused the damage.
    • An individual can seek compensation from any joint controller. You may be able to reclaim some of that compensation from your fellow controllers.

    In the first Facebook case, the CJEU confirmed that a company that ran a Facebook fan page counted as a joint controller alongside Facebook. In the second, the CJEU also confirmed that a company that embedded a Facebook Like button onto its website held joint controller status with the social network.

    These cases sent shockwaves through the privacy community, as essentially it makes social publishers, website operators, and fan page moderators responsible for user data alongside platforms like Facebook.

    However, the CJEU also clarified that shared responsibility does not mean equal responsibility. In both cases, responsibility sat primarily with Facebook — only Facebook had access to the data and only Facebook could delete it. So the impact of this decision may be less severe than it sounds at first — but it’s still critically important.

    And that might be why some sites — such as the website for Germany’s 2020 presidency of the EU — block embedded social content by default, until you’ve specifically opted in:

    Screengrab of eu2020.de showing social feed content blocked until third-party tracking is switched on
    Some sites are starting to block embedded social feeds from appearing on their sites by default, offering users the choice to opt-in with tracking. (Large preview)

    Big Development #2: Bye Bye Privacy Shield, Hello CPRA

    The second big change was more predictable: Privacy Shield, the mechanism that made it easier for American businesses to process European customer data, has been struck down by the courts.

    Here’s why.

    The EU wants to protect its citizens’ personal data. However, it also wants to encourage international trade, plus cross-border collaboration in areas like security.

    The EU sees itself — quite rightly — as a pioneer in data protection. So it’s using its political muscle to encourage countries who want to trade with the bloc to match its data privacy standards.

    Enter the United States. European and American philosophies around data privacy are diametrically opposed. (In essence, the European view is that personal data is private unless you give explicit permission. The American view is that your data is public unless you expressly request that it’s kept private.) But as the world’s two biggest consumer markets, they need to trade. So the EU and the US developed Privacy Shield.

    Privacy Shield was designed to enable US companies to process EU citizens’ data, as long as those companies signed up to its higher privacy standards.

    But under US law, the US government could still monitor that data. This was challenged in a case brought by Austrian privacy advocate Max Schrems. The CJEU sided with him: Privacy Shield was struck down and the 5,300 American SMEs who used Privacy Shield were given no choice but to adopt the EU’s prescribed Standard Contractual Clauses.

    Obviously, it’s in everyone’s interests for Privacy Shield to be replaced — and it will be. But experts say that its replacement is likely to be struck down again in due course because European and American approaches to privacy are essentially incompatible.

    Meanwhile, in California, 2018’s GDPR-inspired California Consumer Privacy Act (CCPA) was strengthened in November 2020 when the California Privacy Rights Act (CPRA) was passed.

    The California Consumer Privacy Act (CCPA)

    The CCPA, which came into effect in January 2020, gives California citizens the right to opt out of their data being sold. They can also ask for any data that’s been collected to be disclosed and they can ask for that data to be deleted.
    Unlike GDPR, the CCPA only applies to commercial companies:

    • Who process the data of more than 50,000 California residents a year, OR
    • Who generate gross revenue of more than $25m a year, OR
    • Who make more than half of their annual revenue from selling California residents’ personal data
    The California Privacy Rights Act (CPRA)

    The CPRA, which comes into force in January 2023, goes beyond the CCPA. Its key points include:

    • It raises the threshold to companies that process the data of 100,000 or more California residents a year
    • It gives more protection to Californians’ sensitive data, such as their race, religion, sexual orientation, and health data and government ID
    • It triples the fines for breaches of minors’ data
    • It gives Californians the right to request their data is corrected
    • It obliges companies to help with CPRA investigations
    • And it establishes a California Privacy Protection Agency to enforce the CPRA
    Graphic summarising the CPRA
    California is tightening its privacy legislation with the CPRA, coming in 2023. (Large preview)

    Further pushes towards privacy laws are happening in other states, and together these may reinforce the need for federal privacy measures under the new Biden administration.

    Big Development #3: The EU Clarifies Its Cookie Consent Guidance

    In May 2020, the EU updated its GDPR guidance to clarify several points, including two key points for cookie consent:

    • Cookie walls do not offer users a genuine choice, because rejecting cookies blocks you from accessing content. The guidance confirms that cookie walls should not be used.
    • Scrolling or swiping through web content does not equate to implied consent. The EU reiterates that consent must be explicit.

    I’ll be going deeper into this in the second article next week.

    Cyber-Duck cookie notice with ad tracking turned on by default
    The EU has updated its guidance on cookie consent. (Large preview)

    Big Development #4: Google And Apple Start To Shift From Third-Party Tracking

    As the big digital players figure out how to meet GDPR — and how to turn privacy legislation to their advantage — some have already come under fire.

    Both Google and Apple are facing antitrust lawsuits, following complaints from adtech companies and publishers.

    In both cases, the complainants say the big tech companies are exploiting their dominant market position.

    Again, more on this next time.

    Big Development #5: Big GDPR Fines Coming This Way

    Of course, many organizations jumped to comply with GDPR because they feared the fines that regulators could apply. Those fines have started rolling in:

    The French data regulator has slapped Google with a €50m fine for “lack of transparency, inadequate information and lack of valid consent regarding ads personalization”, saying users were “not sufficiently informed” about how and why Google collected their data.

    Its UK equivalent, the ICO, has fined US hotel conglomerate Marriott International Inc. £18.4m for failing to keep 339 million guest records secure. The 2014 cyber-attack on Starwood Hotels and Resorts Worldwide, Inc., which Marriott acquired in 2016, wasn’t discovered until 2018.

    The ICO has also fined British Airways a record £20m for a 2018 data breach that exposed 400,000 customers’ personal and credit card data.

    Then there’s my personal favorite, a shocking breach of employee trust by H&M that led to a €35m penalty.

    So that’s where we stand today.

    What Does This Mean For You?

    For us as designers and developers, GDPR has — and will continue to have — a big impact on the products we design and build, and on the way we design for data.

    Here’s What We, As Designers, Should Know

    • GDPR is critical for you because you’ll design the points at which users share their data, what data is collected, and how it’s processed.
    • Follow Privacy by Design best practices. Don’t try to reinvent the wheel — if you’ve created a compliant cookie banner, use your proven design pattern.
    • Work with your compliance and development teams to ensure designs meet GDPR and can be implemented. Only ask for the data you need.
    • Finally, ask your users what data they’re comfortable sharing and how they’d like you to use it. If they find it creepy, revisit your approach.

    Here’s What We, As Developers, Should Know

    • GDPR is critical for you because you enable data processing, sharing and integrations.
    • As a general rule with GDPR, take a need-to-access approach. Start by implementing everything with no access, then only give your team access to data as and when it’s necessary (e.g. giving developers access to the Google Analytics console). Audit and document as you go.
    • Follow privacy by design and security by design principles. Robust, secure templates for implementing infrastructure are key.
    • Make sure you’re involved upfront in technical conversations (e.g. about cookie consent and tracking), so that what’s decided can actually be implemented. See the sketch after this list.
    • Process mapping shows where data is being shared with different parts of the business.
    • Automation offers secure data handling that cuts human error. It also helps prevent the wrong people from accessing data.
    • GDPR checklists and, of course, runbooks will help you manage your process. Again, audit and document as you go.
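
    As a quick illustration of the cookie consent point above, here is a minimal sketch in TypeScript of loading a non-essential analytics script only after the user has explicitly opted in. The element ID, storage keys and script URL are placeholders invented for this example, not the API of any particular consent library.

        // Minimal sketch: only load a non-essential (analytics) script after an
        // explicit, recorded opt-in. Strictly necessary cookies are exempt.
        type ConsentCategory = 'necessary' | 'analytics' | 'marketing';

        const ANALYTICS_SRC = 'https://www.example.com/analytics.js'; // placeholder URL

        function hasConsent(category: ConsentCategory): boolean {
          if (category === 'necessary') return true; // strictly necessary: no consent needed
          // The default is 'no': consent must be an explicit, stored opt-in.
          return localStorage.getItem(`consent-${category}`) === 'granted';
        }

        function recordConsent(category: ConsentCategory, granted: boolean): void {
          // Persist the user's explicit choice (a real system would also log it server-side for auditing).
          localStorage.setItem(`consent-${category}`, granted ? 'granted' : 'denied');
        }

        function loadAnalyticsIfConsented(): void {
          if (!hasConsent('analytics')) return; // no consent, no tracking script
          const script = document.createElement('script');
          script.src = ANALYTICS_SRC;
          script.async = true;
          document.head.appendChild(script);
        }

        // Consent is tied to an explicit action (a button click); scrolling or
        // continuing to browse must never be treated as consent.
        document.getElementById('accept-analytics')?.addEventListener('click', () => {
          recordConsent('analytics', true);
          loadAnalyticsIfConsented();
        });

    The important design choice is the default: nothing non-essential runs until consent has been recorded, which is privacy by design in its simplest form.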

    Now let’s see how GDPR is going to evolve in the near future. We’ll focus on three areas.

    Three Areas Where GDPR Is Swiftly Evolving

    1. How The EU Is Implementing GDPR

    First up, let’s see how GDPR will be further embedded in the legislative landscape.

    The EU wants to keep its member states aligned, because that will make cross-border suits and international collaboration easier. So it has reinforced that countries should neither deviate from, nor overstep, the GDPR. Some member states, as I said, are paying lip service to the regulation. Others want to exceed GDPR’s standards.

    In return for their alignment, the EU will enforce compliance, work to enable class action and cheaper cross-border suits, and also promote privacy and consistent standards outside the EU. In addition to extra support and tools for SMEs, we may also see certification for security and data protection by design.

    Finally, this could raise some eyebrows in Silicon Valley: the EU has hinted that it might consider bans on data processing to encourage compliance. €50m fines aren’t the end of the world for Google and friends. But time out on the naughty step — and the resulting bad PR — is a very different thing.

    2. How GDPR Works With Innovation

    GDPR was designed to be technology-neutral and to support, not hinder, innovation. That’s certainly been tested over the past 12 months, and the EU points to the rapid rollout of COVID-19 apps as proof that its legislation works.

    We can expect to see codes of conduct for sensitive categories of data (health and scientific research). These will be welcomed.

    However, the EU is watching innovators closely. It has expressed concern about data privacy in video, IoT devices and blockchain, and it is particularly concerned about facial (and presumably voice) recognition and developments in AI.

    Most notably, the Commission is deeply concerned about what it calls “multinational technology companies”, “large digital platforms” and “online advertising and micro-targeting”. Yes, once again it’s looking at you, Facebook, Amazon, Google and friends.

    3. How The EU Is Promoting GDPR Standards Beyond The EU

    Our digital economy is global, so GDPR’s impact ripples beyond the EU’s borders — and not just in terms of compliance. The EU is setting the bar for data protection legislation worldwide. Beyond California’s CCPA, see Brazil’s LGPD, plus developments in Canada, Australia, India and a clutch of American states.

    Of course, it’s in the EU’s interests if other countries and trading blocs match their standards. So it’s promoting GDPR via several avenues:

    • Through “mutual adequacy decisions” with Japan and shortly South Korea
    • Embedded into bilateral trade agreements e.g. with New Zealand, Australia, UK
    • Through fora like the OECD, ASEAN, the G7 and the G20
    • Through its Data Protection Academy for EU and international regulators

    It is particularly keen to empower innovation through trusted data flows and to enable international cooperation between law enforcement authorities and private operators.

    The EU is leading the world in data protection. Where it goes, others will follow. So even if you’re not designing/developing for an EU audience, you need to be aware of what’s happening.

    What Does All Of This Mean For Companies In The EU?

    Companies who operate in the EU need to comply with GDPR or risk being fined. Those fines can be pretty hefty, as we’ve seen. So you need to be able to demonstrate that you’re adhering to GDPR’s 7 principles and to specific guidance from your national Data Protection Authority.

    However, that’s not as straightforward as it sounds, and you may choose to evaluate your risk in some cases. I’ll take you through an example of that next time.

    What Does This Mean For Companies Based Outside The EU?

    The implications for companies based outside the EU are exactly the same as those for companies based in the EU, if they process personal data from the EU. That’s because GDPR applies to the personal data of people based in the EU. If you want to process it, e.g. to sell to customers in the EU, you have to abide by the rules. Otherwise, you risk being fined, like Facebook and Google.

    Here’s how that’s enforced: If you have a presence in the EU, as many multinationals do, and you don’t pay a GDPR fine, your EU assets may be seized. If you don’t have a presence, you’re obliged under GDPR to appoint a representative in the EU. Any fines will be levied through that representative. Alternatively, you may face a complex and expensive international lawsuit.

    And here’s where it gets complex for everyone:

    If your customer base includes people in the EU and citizens of other places with privacy laws, such as the State of California, you have to comply both with the California Consumer Privacy Act (CCPA) and with GDPR. These bodies of legislation generally align — but they don’t match exactly.

    Take cookies, for example. Under GDPR, you must get active consent from a user before you place a cookie on their device, bar those strictly needed for your site to function.

    However, under the CCPA, you must disclose what data you’re collecting, and enable your customer to deny you permission to sell their data. But they don’t have to actively agree you can collect it.
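
    To make that difference concrete, here is a small sketch, again in TypeScript, of one consent model with two legal defaults. The Region type, and the assumption that it is derived somewhere else, are purely illustrative; the defaults simply mirror the opt-in versus opt-out distinction described above and are not legal advice.

        // One consent state, two defaults: GDPR regions start with everything
        // off (opt-in), while a CCPA-style regime starts with collection allowed
        // but must honour an opt-out of sale.
        type Region = 'EU' | 'California' | 'Other';

        interface ConsentState {
          analyticsCookies: boolean; // may we set non-essential cookies?
          sellData: boolean;         // may we sell or share personal data?
        }

        function defaultConsent(region: Region): ConsentState {
          switch (region) {
            case 'EU':
              // GDPR: nothing non-essential until the user actively agrees.
              return { analyticsCookies: false, sellData: false };
            case 'California':
              // CCPA: disclose and honour "Do Not Sell"; collection may proceed
              // until the user opts out.
              return { analyticsCookies: true, sellData: true };
            default:
              // A conservative choice: treat unknown regions like the EU.
              return { analyticsCookies: false, sellData: false };
          }
        }

        // Explicit user actions then update the state:
        const optOutOfSale = (s: ConsentState): ConsentState => ({ ...s, sellData: false });
        const gdprOptIn = (s: ConsentState): ConsentState => ({ ...s, analyticsCookies: true });

    One pragmatic simplification is to apply the stricter regime by default wherever you are unsure which rules apply.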

    That’s why the EU is pushing for international standards to simplify global compliance.

    N.B. If you’re in the United States and eagerly awaiting the replacement to Privacy Shield, you might like to take a leaf from Microsoft’s book instead — they and others have stated they’ll comply with GDPR rather than depend on any bilateral mechanisms to enable data processing.

    What Lessons Can Web Designers And Developers Learn From GDPR?

    Privacy regulation is here to stay and it affects all our priorities and workflows. Here are six lessons to remember as you work with customer data:

    1. We had to sprint to comply with GDPR. Now it’s a marathon.
      We know that GDPR will continue to evolve alongside the technology it aims to regulate. That means the demands on us won’t remain the same. Not only that, but GDPR has inspired similar — but not identical — legislation around the world. These legal requirements are set to keep evolving.
    2. Compliance builds competitive advantage.
      While the first major GDPR fines have been eye-watering, it’s actually the negative publicity that many say is most damaging. Who benefits from a large data leak? The company’s competitors. On the other hand, if you embed GDPR compliance as you strengthen your design and development processes, you’ll be better able to adapt as the regulations evolve.
    3. GDPR compliance and better COVID-19 outcomes are linked by user-centred design.
      We know that companies who’d begun their digital transformation were better able to adapt to the COVID-19 crisis. User-centred design supports GDPR, too. It has the process and customer focus you need to build products that align with the idea that customer data is precious and must be protected. That will make it easier to evolve your products in line with future legislation.
    4. You can build compliance into your digital products.
      Privacy by design is here to stay. If you already use service design, you can include customer information as a data layer in your service blueprints; if you don’t, now’s a great time to start (see the sketch after this list). Mapping where data is collected, processed and stored highlights weak points where potential breaches may occur. Automated compliance tools will help lessen the burden on companies and have the potential to make data processing more secure.
    5. GDPR supports innovation — if you do it right.
      Some warn that GDPR is suffocating innovation by restricting data flows and especially by deterring companies from innovating with data. Others point to opportunities to innovate with blockchain, IoT and AI in a way that’s secure and where data is protected. The truth? Yes, of course, you can innovate and be GDPR compliant. But ethics in AI is vital: you must respect your customers and their data.
    6. Keep an eye on your third-party partners.
      This goes back to the joint controllers decision above. Companies now share responsibility for customer data with any third parties who process it and that processing must be documented. You can expect third-party checks, monitoring and contractual obligations to be a priority for companies from now on.
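
    As a companion to lesson 4, here is one way such a data layer could be expressed alongside a service blueprint, sketched in TypeScript. The field names are illustrative assumptions rather than a standard schema; the point is simply that collection, purpose, lawful basis, storage, retention and third-party processors are documented per touchpoint.

        // Illustrative sketch of a "data layer" entry for a service blueprint.
        type LawfulBasis = 'consent' | 'contract' | 'legitimate-interest' | 'legal-obligation';

        interface DataLayerEntry {
          touchpoint: string;             // e.g. "checkout form"
          dataCollected: string[];        // e.g. ["email", "shipping address"]
          purpose: string;                // why it is collected
          lawfulBasis: LawfulBasis;       // the GDPR basis for processing
          storedIn: string;               // where it ends up
          retentionDays: number;          // how long it is kept
          thirdPartyProcessors: string[]; // who else touches it (see lesson 6: document them)
        }

        const checkout: DataLayerEntry = {
          touchpoint: 'checkout form',
          dataCollected: ['email', 'shipping address'],
          purpose: 'fulfil the order and send delivery updates',
          lawfulBasis: 'contract',
          storedIn: 'orders database (EU region)',
          retentionDays: 365,
          thirdPartyProcessors: ['payment provider', 'courier API'],
        };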

    Here’s How GDPR Could Develop

    Phew. That’s a lot to take in. But looking ahead, here’s where I’m betting we’ll see change.

    1. GDPR will continue to evolve, with clarity coming from test cases and potentially further legislation including the ePrivacy Regulation.
    2. The EU will continue to promote international adoption of data privacy law. We’ll see more countries embrace data protection, often baked into trade and security agreements.
    3. If we’re lucky, we may start to see international convergence of data privacy legislation — especially if the US implements data privacy at the federal level.
    4. But we’ll also see more clashes between the EU and the US, because of their opposite approaches to privacy.
    5. As ‘data is the new oil’, we could see more situations where users receive free products and services in exchange for the data they give away through cookies.
    6. Businesses will shift away from third-party cookies and towards server-side tracking and automation, in order to stay compliant.
    7. Businesses will adopt Privacy by Design (PbD) and service design tools and processes to help them stay compliant with multiple sets of privacy laws.
    8. And finally — and this one’s a definite — we’ll see more and bigger privacy lawsuits. Who’ll emerge as the winners — big tech or privacy advocates? That I don’t know, but we can be certain of one thing: privacy lawyers will make a lot of money.

    A Final Word On Trust

    The theme underpinning both the European Commission’s communications and the commentary from industry experts is trust. Digital agencies like ours now need to provide evidence of data security and GDPR compliance — even down to staff training policies for data protection. That’s new. The EU’s priority is to support safe, secure data flows and innovation, both within the EU and outside. Standards compliance is their solution for this. And we, as designers and developers, have a crucial role to play.

    • Part 1: GDPR, Key Updates And What They Mean
    • Part 2: GDPR, Cookie Consent and 3rd Parties (next week)
      Subscribe to our newsletter so you don’t miss it.
