Browsing Tag: Testing

    Getting Rid Of A Living Nightmare In Testing — Smashing Magazine

    04/07/2021

    Unreliable tests are a living nightmare for anyone who writes automated tests or pays attention to the results — flaky tests have cost folks sleepless nights. In this article, Ramona Schwering shares her experiences to help you get out of this hell, or to avoid getting into it in the first place.

    There is a fable that I think about a lot these days. The fable was told to me as a child. It’s called “The Boy Who Cried Wolf” by Aesop. It is about a boy who tends the sheep of his village. He gets bored and pretends that a wolf is attacking the flock, calling out to the villagers for help — only for them to disappointedly realize that it is a false alarm and leave the boy alone. Then, when a wolf actually appears and the boy calls for help, the villagers believe it is another false alarm and do not come to the rescue, and the sheep end up getting eaten by the wolf.

    The moral of the story is best summarized by the author himself:

    “A liar will not be believed, even when he speaks the truth.”

    This moral can be applied to testing: Aesop’s story is a nice allegory for a pattern I stumbled upon in my own work — flaky tests that fail to provide any value.

    Front-End Testing: Why Even Bother?

    Most of my days are spent on front-end testing. So it shouldn’t surprise you that the code examples in this article will be mostly from the front-end tests that I’ve come across in my work. However, in most cases, they can be easily translated to other languages and applied to other frameworks. So, I hope the article will be useful to you — whatever expertise you might have.

    It’s worth recalling what front-end testing means. In its essence, front-end testing is a set of practices for testing the UI of a web application, including its functionality.

    Starting out as a quality-assurance engineer, I know the pain of endless manual testing from a checklist right before a release. So, in addition to the goal of ensuring that an application remains error-free during successive updates, I strived to relieve the testing workload caused by routine tasks that you don’t actually need a human for. Now, as a developer, I find the topic still relevant, especially as I try to directly help users and coworkers alike. And there is one issue with testing in particular that has given us nightmares.

    The Science Of Flaky Tests

    A flaky test is one that fails to produce the same result each time it runs against the same code. The build will fail only occasionally: one time it will pass, another time fail, the next time pass again, without any changes to the code having been made.

    When I recall my testing nightmares, one case in particular comes to mind. It happened in a UI test. We built a custom-styled combo box (i.e. a selectable list with an input field):

    A custom selector in a project I worked on every day.

    With this combo box, you could search for a product and select one or more of the results. For many days, this test went fine, but at some point, things changed: in roughly one out of ten builds in our continuous integration (CI) system, the test for searching and selecting a product in this combo box failed.

    The screenshot of the failure shows the results list not being filtered, even though the search itself was executed successfully:

    Flaky test in action: why did it fail only sometimes and not always?

    A flaky test like this can block the continuous deployment pipeline, making feature delivery slower than it needs to be. Moreover, a flaky test is problematic because it is not deterministic anymore — making it useless. After all, you wouldn’t trust one any more than you would trust a liar.

    In addition, flaky tests are expensive to repair, often requiring hours or even days to debug. And even though end-to-end tests are more prone to flakiness, I’ve experienced it in all kinds of tests: unit tests, functional tests, end-to-end tests, and everything in between.

    Another significant problem is the attitude that flaky tests instill in developers. When I started working in test automation, I often heard developers say this in response to a failed test:

    “Ahh, that build. Nevermind, just kick it off again. It will eventually pass, somewhen.”

    This is a huge red flag for me. It shows me that the error in the build won’t be taken seriously. There is an assumption that a flaky test is not a real bug, but is “just” flaky, without needing to be taken care of or even debugged. The test will pass again later anyway, right? Nope! If such a commit is merged, in the worst case we will have introduced a new bug into the product.

    The Causes

    So, flaky tests are problematic. What should we do about them? Well, if we know the problem, we can design a counter-strategy.

    In my everyday work, I most often encounter causes within the tests themselves: they might be suboptimally written, hold wrong assumptions, or contain bad practices. However, that’s not all. Flaky tests can be an indication of something far worse.

    In the following sections, we’ll go over the most common ones I have come across.

    1. Test-Side Causes

    In an ideal world, the initial state of your application should be pristine and 100% predictable. In reality, you never know whether the ID you’ve used in your test will always be the same.

    Let’s inspect two examples of this mistake on my part. Mistake number one was using an ID in my test fixtures:

    {
       "id": "f1d2554b0ce847cd82f3ac9bd1c0dfca",
       "name": "Variant product"
    }
    

    Mistake number two was searching for a unique selector to use in a UI test and thinking, “Ok, this ID seems unique. I’ll use it.”

    <!-- This is a text field I took from a project I worked on -->
    <input type="text" id="sw-field--f1d2554b0ce847cd82f3ac9bd1c0dfca" />
    

    However, if I ran the test on another installation or, later, on several builds in CI, those tests might fail: our application generates the IDs anew, changing them between builds. So, the first possible cause of flakiness is hardcoded IDs.

    The second cause can arise from randomly (or otherwise) generated demo data. Sure, you might be thinking that this “flaw” is justified — after all, the data generation is random — but think about debugging this data. It can be very difficult to see whether a bug is in the tests themselves or in the demo data.
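
    If generated demo data is unavoidable, seeding the generator at least makes it reproducible. Below is a minimal sketch, assuming the @faker-js/faker package is used for data generation:

    // A sketch: the same seed produces the same "random" data on every run,
    // so a failing test can be reproduced and debugged.
    const { faker } = require('@faker-js/faker');

    faker.seed(1337);

    const demoCustomer = {
        name: faker.person.fullName(),
        email: faker.internet.email(),
    };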

    Next up is a test-side cause that I’ve struggled with numerous times: tests with cross-dependencies. Some tests may not be able to run independently or in a random order, which is problematic. In addition, previous tests could interfere with subsequent ones. These scenarios can cause flaky tests by introducing side effects.

    However, don’t forget that tests are about challenging assumptions. What happens if your assumptions are flawed to begin with? I’ve experienced this often, my favorite example being flawed assumptions about time.

    One example is the use of inaccurate waiting times, especially in UI tests — above all, fixed waiting times. The following line is taken from a Nightwatch.js test:

    // Please never do that unless you have a very good reason!
    // Waits for 1 second
    browser.pause(1000);
    

    Another wrong assumption relates to time itself. I once discovered that a flaky PHPUnit test was failing only in our nightly builds. After some debugging, I found that the time shift between yesterday and today was the culprit. Another good example is failures because of time zones.
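
    In JavaScript unit tests, one way to guard against such time-related flakiness is to pin the clock. A sketch using Jest’s “modern” fake timers (available since Jest 26):

    beforeEach(() => {
        // Freeze "now" at a fixed, predictable point in time
        jest.useFakeTimers('modern');
        jest.setSystemTime(new Date('2021-06-15T12:00:00Z'));
    });

    afterEach(() => {
        // Don't leak the fake clock into other tests
        jest.useRealTimers();
    });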

    False assumptions do not stop there. We can also have wrong assumptions about the order of data. Imagine a grid or list containing multiple entries with information, such as a list of currencies:

    A custom list component used in our project.

    We want to work with the information of the first entry, the “Czech koruna” currency. Can you be sure that your application will always place this piece of data as the first entry every time your test is executed? Could it be that the “Euro” or another currency will be the first entry on some occasions?

    Don’t assume that your data will come in the order you need it. Similar to hardcoded IDs, the order can change between builds, depending on the design of the application.

    2. Environment-Side Causes

    The next category of causes relates to everything outside of your tests: specifically, the environment in which the tests are executed, including CI- and Docker-related dependencies — all of those things you can barely influence, at least in your role as tester.

    A common environment-side cause is resource leaks: Often this would be an application under load, causing varying loading times or unexpected behavior. Large tests can easily cause leaks, eating up a lot of memory. Another common issue is the lack of cleanup.

    Incompatibility between dependencies gives me nightmares in particular. One nightmare occurred when I was working with Nightwatch.js for UI testing. Nightwatch.js talks to the browser over WebDriver, which requires a driver version matching the installed Chrome. When Chrome sprinted ahead with an update, the versions of Chrome, the WebDriver-based tooling, and Nightwatch.js itself no longer worked together, which caused our builds to fail from time to time.

    Speaking of dependencies: an honorable mention goes to all kinds of npm issues, such as missing permissions or npm being down. I’ve experienced all of these while keeping an eye on CI.

    When it comes to errors in UI tests due to environmental problems, keep in mind that you need the whole application stack in order for them to run. The more things that are involved, the more potential for error. End-to-end UI tests are, therefore, the most difficult tests to stabilize in web development, because they cover a large amount of code.

    3. Product-Side Causes

    Last but not least, we really have to be careful about this third area — an area with actual bugs. I’m talking about product-side causes of flakiness. One of the most well-known examples is race conditions in an application. When this happens, the bug needs to be fixed in the product, not in the test! Trying to fix the test or the environment will be of no use in this case.

    Ways To Fight Flakiness

    We have identified three causes of flakiness, and we can build our counter-strategy on them. Of course, you will already have gained a lot by simply keeping the three causes in mind when you encounter a flaky test: you’ll know what to look for and how to improve the test. However, there are some additional strategies that help us design, write, and debug tests, and we will look at them together in the following sections.

    Focus On Your Team

    Your team is arguably the most important factor. As a first step, admit that you have a problem with flaky tests. Getting the whole team’s commitment is crucial! Then, as a team, you need to decide how to deal with flaky tests.

    Over the years I’ve worked in technology, I have come across four strategies that teams use to counter flakiness:

    1. Do nothing and accept the flaky test result.
      Of course, this strategy is not a solution at all. The test will yield no value because you cannot trust it anymore — even if you accept the flakiness. So we can skip this one pretty quickly.
    2. Retry the test until it passes.
      This strategy was common at the start of my career, and it led to the response I quoted earlier: retry the test until it passes. This doesn’t require debugging, but it is lazy. In addition to hiding the symptoms of the problem, retrying will slow down your test suite even more, which makes this solution not viable. However, there are some exceptions to this rule, which I’ll explain later.
    3. Delete and forget about the test.
      This one is self-explanatory: Simply delete the flaky test, so that it doesn’t disturb your test suite anymore. Sure, it will save you money because you won’t need to debug and fix the test anymore. But it comes at the expense of losing a bit of test coverage and losing potential bug fixes. The test exists for a reason! Don’t shoot the messenger by deleting the test.
    4. Quarantine and fix.
      I had the most success with this strategy. In this case, we would skip the test temporarily and have the test suite constantly remind us that a test has been skipped (a minimal sketch of this follows below). To make sure the fix doesn’t get overlooked, we would schedule a ticket for the next sprint. Bot reminders also work well. Once the issue causing the flakiness has been fixed, we integrate (i.e. unskip) the test again. Unfortunately, we lose coverage temporarily, but it comes back with the fix, so this doesn’t take long.
    Skipped tests, taken from a report from our CI.
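
    In a Mocha-style suite (which Cypress and Jest both understand), quarantining can be as simple as an it.skip with a loud reminder — the ticket number below is hypothetical:

    // Quarantined: flaky — see ticket FLAKY-123 (hypothetical) for debugging notes.
    // The reporter lists this test as "skipped" on every run, so it stays visible.
    it.skip('searches and selects a product in the combo box', () => {
        // ...the original test steps remain untouched, ready to be re-enabled...
    });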

    These strategies help us deal with test problems at the workflow level, and I’m not the only one who has encountered them: in his article, Sam Saffron comes to a similar conclusion. But in our day-to-day work, they help us only to a limited extent. So, how do we proceed when such a task comes our way?

    Keep Tests Isolated

    When planning your test cases and structure, always keep your tests isolated from one another, so that they can be run independently or in random order. The most important step is to restore a clean installation between tests. In addition, test only the workflow that you actually want to test, and create mock data for that test alone. A bonus of this approach is improved test performance. If you follow these points, no side effects from other tests or leftover data will get in the way.

    The example below is taken from the UI tests of an e-commerce platform, and it deals with the customer’s login in the shop’s storefront. (The test is written in JavaScript, using the Cypress framework.)

    // File: customer-login.spec.js
    let customer = {};
    
    beforeEach(() => {
        // Reset the application to a clean state
        cy.setInitialState()
          .then(() => {
            // Create the test data for this test specifically
            return cy.setFixture('customer');
          });
    });
    

    The first step is resetting the application to a clean installation. It’s done in the beforeEach lifecycle hook to make sure that the reset is executed before every test. Afterwards, the test data is created specifically for the test — for this test case, a customer is created via a custom command. Subsequently, we can start with the one workflow we want to test: the customer’s login.
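
    Note that setInitialState and setFixture are project-specific custom commands, not part of Cypress itself. Defined in Cypress’ support file, they might look roughly like this — a sketch in which the reset endpoint and fixture handling are assumptions:

    // File: cypress/support/commands.js (sketch)
    Cypress.Commands.add('setInitialState', () => {
        // Reset the application to a clean installation,
        // e.g. via a dedicated (hypothetical) test-only API route
        return cy.request('POST', '/api/_test/reset');
    });

    Cypress.Commands.add('setFixture', (name) => {
        // Load a fixture file and create the corresponding test data
        return cy.fixture(name).then((data) => {
            return cy.request('POST', '/api/_test/fixtures', data);
        });
    });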

    Further Optimize The Test Structure

    We can make some other small tweaks to make our test structure more stable. The first is quite simple: start with smaller tests. As mentioned before, the more you do in a test, the more can go wrong. Keep tests as simple as possible, and avoid putting a lot of logic in each one.

    When it comes to not assuming an order of data (for example, when dealing with the order of entries in a list in UI testing), we can design a test to function independently of any order. To bring back the example of the grid with information in it, we wouldn’t use pseudo-classes or other CSS selectors that depend strongly on order. Instead of an nth-child(3) selector, we could query by text or by other attributes for which order does not matter — for example, an assertion like, “Find me the element with this one text string in this table.”
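
    In Cypress, the difference might look like this (the row selector is a made-up example):

    // Brittle: assumes the currency is always the third row
    cy.get('.currency-list__row:nth-child(3)').click();

    // Robust: find the row by its text, regardless of where it appears
    cy.contains('.currency-list__row', 'Czech koruna').click();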

    Wait! Test Retries Are Sometimes OK?

    Retrying tests is a controversial topic, and rightfully so. I still consider it an anti-pattern if the test is blindly retried until it is successful. However, there’s an important exception: when you can’t control errors, retrying can be a last resort (for example, to exclude errors from external dependencies). In this case, we cannot influence the source of the error. However, be extra careful: don’t become blind to flakiness when retrying a test, and use notifications to remind you whenever a test is being retried.

    The following example is one I used in our CI with GitLab. Other environments might have different syntax for achieving retries, but this should give you a taste:

    test:
        script: rspec
        retry:
            max: 2
            when: runner_system_failure
    

    In this example, we are configuring how many retries should be attempted if the job fails. What’s interesting is the possibility of retrying only if there is an error in the runner system (for example, if the job setup failed). Here, we are choosing to retry our job only if something in the Docker setup fails.

    Note that this will retry the whole job when triggered. If you wish to retry only the faulty test, then you’ll need to look for a feature in your test framework to support this. Below is an example from Cypress, which has supported retrying of a single test since version 5:

    {
        "retries": {
            // Configure retry attempts for `cypress run`
            "runMode": 2,
            // Configure retry attempts for `cypress open`
            "openMode": 2
        }
    }
    

    You can activate test retries in Cypress’ configuration file, cypress.json. There, you can define the retry attempts separately for headless mode (cypress run) and for the interactive test runner (cypress open).
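
    If only one known-flaky test should be retried while everything else keeps failing fast, Cypress (5+) also accepts the same option per test. A sketch:

    // Retry only this one test, rather than the whole suite
    it('searches and selects a product in the combo box', {
        retries: { runMode: 2, openMode: 0 }
    }, () => {
        // test steps...
    });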

    Using Dynamic Waiting Times

    This point is important for all kinds of tests, but especially UI testing. I can’t stress this enough: don’t ever use fixed waiting times — at least not without a very good reason. If you do, consider the possible outcomes. In the best case, you will choose waiting times that are too long, making the test suite slower than it needs to be. In the worst case, you won’t wait long enough, so the test won’t proceed because the application is not ready yet, causing the test to fail in a flaky manner. In my experience, this is the most common cause of flaky tests.

    Instead, use dynamic waiting times. There are many ways to do so, but Cypress handles them particularly well.

    All Cypress commands have implicit waiting built in: they already check whether the element that the command is being applied to exists in the DOM for a specified time — this is part of Cypress’ retry-ability. However, they only check for existence, and nothing more. So I recommend going a step further — waiting for changes that a real user would also see, such as changes in the UI itself or in an animation.

    A fixed waiting time, found in Cypress’ test log.

    The following example uses an explicit dynamic wait on the element with the selector .offcanvas. The test will only proceed once the element becomes visible, within a timeout that you can configure:

    // Wait for changes in the UI (until the element is visible)
    cy.get('.offcanvas').should('be.visible');
    

    Another neat possibility for dynamic waiting in Cypress is its network features: we can wait for requests to occur and inspect their responses. I use this kind of waiting especially often. In the example below, we define the request to wait for, use the wait command to await the response, and assert its status code:

    // File: checkout-info.spec.js
    
    // Define request to wait for
    cy.intercept({
        url: '/widgets/customer/info',
        method: 'GET'
    }).as('checkoutAvailable');
    
    // Imagine other test steps here...
    
    // Assert the response’s status code of the request
    cy.wait('@checkoutAvailable').its('response.statusCode')
      .should('equal', 200);
    

    This way, we’re able to wait exactly as long as our application needs, making the tests more stable and less prone to flakiness due to resource leaks or other environmental issues.

    Debugging Flaky Tests

    We now know how to prevent flaky tests by design. But what if you’re already dealing with a flaky test? How can you get rid of it?

    When I was debugging, putting the flawed test in a loop helped me a lot in uncovering flakiness. For example, if you run a test 50 times, and it passes every time, then you can be more certain that the test is stable — maybe your fix worked. If not, you can at least get more insight into the flaky test.

    // Use the built-in Lodash to repeat the test 100 times
    Cypress._.times(100, (k) => {
        it(`typing hello ${k + 1} / 100`, () => {
            // Write your test steps in here
        })
    })
    

    Getting more insight into a flaky test is especially tough in CI. For help, check whether your testing framework can provide more information about your build. When it comes to front-end testing, you can usually make use of console.log in your tests:

    it('should be a Vue.JS component', () => {
        // Mock component by a method defined before
        const wrapper = createWrapper();
    
    
        // Print out the component’s html
        console.log(wrapper.html());
    
        expect(wrapper.isVueInstance()).toBe(true);
    })
    

    This example is taken from a Jest unit test in which I use a console.log to get the output of the HTML of the component being tested. If you use this logging possibility in Cypress’ test runner, you can even inspect the output in your developer tools of choice. In addition, when it comes to Cypress in CI, you can inspect this output in your CI’s log by using a plugin.

    Always look at the features of your test framework to get support with logging. In UI testing, most frameworks provide screenshot features — at least on a failure, a screenshot will be taken automatically. Some frameworks even provide video recording, which can be a huge help in getting insight into what is happening in your test.
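
    In Cypress, for instance, both features come down to configuration in cypress.json — shown here with the defaults made explicit:

    {
        "screenshotOnRunFailure": true,
        "video": true,
        "screenshotsFolder": "cypress/screenshots",
        "videosFolder": "cypress/videos"
    }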

    Fight Flakiness Nightmares!

    It’s important to continually hunt for flaky tests, whether by preventing them in the first place or by debugging and fixing them as soon as they occur. We need to take them seriously, because they can hint at problems in your application.

    Spotting The Red Flags

    Preventing flaky tests in the first place is best, of course. To quickly recap, here are some red flags:

    • The test is large and contains a lot of logic.
    • The test covers a lot of code (for example, in UI tests).
    • The test makes use of fixed waiting times.
    • The test depends on previous tests.
    • The test asserts on data that is not 100% predictable, such as IDs, times, or demo data — especially randomly generated data.

    If you keep the pointers and strategies from this article in mind, you can prevent flaky tests before they happen. And if they do come, you will know how to debug and fix them.

    These steps have really helped me regain confidence in our test suite. It seems to be stable at the moment. There could be issues in the future — nothing is 100% perfect. But with this knowledge and these strategies, I’m confident in my ability to deal with them and to fight those flaky-test nightmares.

    I hope I was able to relieve at least some of your pain and concerns about flakiness!


    UI Design Testing Tools I Use All The Time — Smashing Magazine

    03/05/2021

    Our lives as UI designers have never been easier, with a host of amazing tools at our disposal. In this article, Paul Boag explores some of the useful tools that he keeps close at hand in his work.

    When I started in web design 27 years ago, testing with users was time-consuming and expensive, but a new generation of tools has changed all of that. Most of us have heard of some of the more popular tools such as Userzoom or Hotjar, but in this post, I want to explore some of the hidden gems I use to test the interfaces I am involved in creating.

    Please note that I’m by no means affiliated with any tools mentioned here — I just use them all the time. Hopefully, they prove useful to you as well.

    We begin at the very start of a project with user research.

    Run Surveys With Survicate

    User research is vital, especially when it comes to identifying problems with an existing website. As a result, I almost always survey users early on in a redesign process.

    Although both Usability Hub and Maze allow me to create surveys, the functionality they offer is relatively limited for my taste, and it is a bit difficult to embed surveys on your website. That is a shame because exit-intent surveys can be a powerful way to gain insights into why users are abandoning a website or failing to act.

    One solution for running user-research surveys that I’ve found useful is Qualaroo, which does an excellent job. Unfortunately, it can prove a bit expensive in some situations, so if you are looking for an alternative, you might want to check out Survicate instead.

    Survicate is ideal for surveying users to gain insights into their actions.

    Survicate offers both website surveys and the ability to send a survey via email or social media. It even allows you to add a permanent feedback button on your site if you want.

    Testing Visuals With Usability Hub

    When it comes to testing, most of the testing I carry out is nearer the start of a project than the end. That is when it is easiest to make changes, so I often test with nothing more than a design concept. At that stage, I don’t even have a clickable prototype, so I use Usability Hub.

    Usability Hub is one of the useful tools for testing design concepts.

    Usability Hub is a simple app that supports quick tests such as five-second tests, first-click tests and preference tests.

    It is an excellent way of addressing stakeholder concerns about a design concept. I can get results from testing within an hour, and Usability Hub will even handle participant recruitment if I want.

    “Spell Check” Your Designs With Attention Insights

    Once I start to design an interface, there is a lot to consider from messaging and aesthetics to visual hierarchy and calls to action. As a result, it can be all too easy to end up with the wrong focus on a page. If I am not careful, I lead the user’s attention in the wrong direction, and they miss critical content or calls to action.

    Although the best way of checking this is usability testing, sometimes I want a quick sanity check that I am heading in the right direction. Attention Insights takes thousands of hours of eye-tracking studies and uses that data to predict where users might look on a design.

    Attention Insights uses thousands of hours of eye-tracking studies to predict where somebody will look when viewing your design.

    Although not as accurate as a real eye-tracking study, it can act a bit like a spelling or grammar checker does for your copy: it flags potential issues and helps you make a judgment call.

    Modernize Your Card Sorting With UXOps

    When it comes time to work on a website’s information architecture, I almost always turn to UXOps.

    UXOps is a friendly tool for running remote card-sorting exercises.

    Like the more well-established OptimalSort, UXOps allows you to run card-sorting exercises online to ensure that your site reflects users’ mental models. If I am being perfectly honest, I prefer UXOps because it is a bit more affordable and focuses on a single task. I’ve also found it a very easy tool for participants to understand, and for me to interpret the data afterwards.

    Remote And Unfacilitated Testing With Lookback

    When it comes to usability testing, we are all likely to explore remote testing these days. It is often more convenient than in-person testing — not to mention that it has allowed me to continue testing throughout the pandemic! Although this can be done using an app like Zoom, I personally prefer a tool called Lookback.

    Lookback streamlines usability testing, especially when carried out remotely and unfacilitated.

    I love Lookback because it has been optimized for usability testing with features such as note-taking, in-app editing of video, and automatically recording the user’s screen and webcam. However, where Lookback really shines is that it allows unfacilitated usability testing.

    Unfacilitated testing is a real boon when your time is tight, and you want to test with lots of people. With Lookback, I send participants a link, and the app will guide them through the process without the need for me to moderate the sessions.

    Quantify Your Testing Using Maze

    I like to test with more users the nearer a site gets to going live. While qualitative testing is great early on, I am more interested in understanding how the site will operate at scale as we near launch. Unfortunately, analyzing a large number of unfacilitated test sessions can prove time-consuming. That is where Maze can prove invaluable.

    Maze can aggregate data from your usability sessions into quantitative data.

    Maze has a wealth of tools that are useful for all kinds of usability testing. However, its real strength lies in its ability to aggregate data. This means that, instead of having to watch each session, you can get quantitative data such as:

    • The number of users who gave up.
    • Those who went either via the most direct or indirect route to complete a task.
    • Heat maps of any misclicks users made.
    • How long it took people to complete the task.

    Combined with its overall flexibility, Maze is an excellent all-round choice at a manageable price, no matter your budget.

    Find Test Participants With Testing Time

    As I am sure you know, one of the biggest pains with usability testing is recruitment. Although apps like Maze, Usability Hub and Lookback all offer the option to recruit participants for you, they come with some limits regarding the kinds of people you can reach.

    When I need to recruit a particular type of person, I tend to use a service like Testing Time, if I cannot recruit people myself. That is because Testing Time allows me a lot more control over the type of person I get.

    Testing Time provides you with all you need to find, manage and pay testing participants.

    Testing Time does not just help me with recruitment. It also provides tools for screening potential candidates, managing their tests, and paying them afterwards.

    Gather Data With Microsoft Clarity

    Once my new design is finally launched, my attention shifts to monitoring and improving those designs. I do this by watching how site visitors are behaving and identifying any issues they are encountering. The two tools I use to identify and diagnose problems with a site are heat map monitoring and session recorders.

    The most well-known tool in this field is Hotjar, although Fullstory has superior tools in many ways. If you are looking for a slightly more affordable alternative, Microsoft has released a free competitor called Clarity which gives you the ability to watch individual sessions, see scroll heatmaps and see visualizations of where people are clicking on pages.

    Microsoft Clarity provides heat maps of user behaviour and session recordings for free.

    Visualize Your Research With Evolt

    Of course, I rarely get to make arbitrary decisions about the direction of a site. There are almost always other stakeholders to win over. To do that I need to communicate the research and testing I have undertaken, and that is where Evolt comes in. Evolt helps me visualize my research, but it doesn’t stop there.

    Evolt, a helpful little tool to visualize your research.

    It is actually the ideal tool for working on user personas, journey maps and even moodboards with your stakeholders. Miro can be great for these kinds of tasks as well, and it’s often used for the same purpose, but in my personal experience Evolt seems to be optimized specifically for designers.

    No Excuse

    With so many great tools available, there really shouldn’t be any excuse for not testing with users these days. It is fast, easy and cheap. But we don’t even need to limit ourselves to testing. These tools also make user research and visualization easier than ever before, making them ideal all the way from discovery through prototype to post-launch optimization.

    But these are just the tools I make use of. There’s no doubt that you use tools that are not included in the list. If so, please post them in the comments below — I’d love to hear your stories, and the tools that you find useful in your work!


    The State Of Mobile And Why Mobile Web Testing Matters — Smashing Magazine

    03/02/2021

    With mobile traffic accounting for over 50% of web traffic these days, leaving your mobile performance unoptimized isn’t really an option. In this article, we’ll discuss the complexity and challenges of mobile, and how mobile testing tools can help us with just that.

    Things have changed quite a bit over the last decade when we just started exploring what we could do on a tiny, shiny mobile screen. These days, with mobile traffic accounting for over 50% of web traffic, it’s fair to assume that the very first encounter of your prospect customers with your brand will happen on a mobile device.

    Depending on the nature of your product, the share of your mobile traffic will vary significantly, but you will certainly have some mobile traffic — and being prepared for it can make or break the deal. This requires your website or application to be heavily optimized for mobile. That optimization is quite complex in nature, though: obviously, our experiences have to be responsive — and we’ve learned how to do that well over the years — but they also have to be accessible and fast.

    This goes way beyond basic optimizations such as color contrast and server response times. In the fragmented mobile landscape, our experiences have to be adjusted for low data mode, low memory, battery and CPU, reduced motion, dark and light mode and so many other conditions.

    Leaving these conditions out of the equation means abandoning prospect customers for good, and so we seek compromises to deliver a great experience within tight deadlines. And to ensure the quality of a product, we always need to test — on a number of devices, and in a number of conditions.

    State Of Mobile 2021

    While many of us, designers and developers, are likely to have a relatively new mobile phone in our pockets, the vast majority of our customers aren’t quite like us. That might come a little bit unexpected. After all, when we look at our analytics, we will hardly find any customers browsing our sites or apps with a mid-range device on a flaky 3G connection.

    The gotcha here is that, if your mobile experience isn’t optimized for various devices and network conditions, these customers will never appear in your analytics — just because your website or app will be barely usable on their devices, and so they are unlikely to return.

    In the US and the UK, Comscore’s Global State of Mobile 2020 report found in August 2020 that mobile usage accounted for 79% and 81% of total digital minutes, respectively. It also found a 65% increase in video consumption on mobile devices in 2020. While the vast majority of that time is spent in just a few mobile apps, social media platforms provide a gateway to the web and your services — especially in education.

    Globally, time spent on mobile continues to rise, according to Comscore’s Global State of Mobile 2020 report.
    Some app categories skew toward mobile-only usage, while others (education, for example) see more desktop usage (Comscore Global State of Mobile 2020).

    On the other hand, while devices do get better over time in terms of capabilities and battery life, older devices don’t simply get abandoned or disappear into the void. It’s not uncommon to see customers using devices that are five to six years old, as these devices often get passed through the generations, serving as slightly older but “good enough” devices for simple, day-to-day tasks. In fact, an average consumer upgrades their phone every two years, and in the US the phone-replacement cycle is 33 months.

    What’s a representative device to test on in 2021? According to Tim Kadlec (video), an Android device that’s a couple of years old and costs around $200.

    Globally in 2020, 84.8% of all shipped mobile phones were Android devices, according to the International Data Corporation (IDC). Average bestselling phones around the world cost just under $200. A representative device, then, is an Android device that is at least 24 months old, costing $200 or less, running on slow 3G with a 400ms RTT and 400kbps transfer speed — just to be slightly more pessimistic.

    This might be very different for your company, of course, but that’s a close enough approximation of a majority of customers out there. In fact, it might be a good idea to look into current Amazon Best Sellers for your target market.

    When building a new site or app, always check the current Amazon Best Sellers for your target market first.

    Mobile is a spectrum, and quite an entrenched one. While the mobile landscape is very fragmented already, the gap between the experience on various devices will widen much further with the growing adoption of 5G.

    According to Ericsson Mobility Visualizer, we should be expecting a 15× increase in mobile 5G subscribers, from 212 million in 2020, to 3.3 billion by 2026.

    According to Ericsson, we should be expecting a 15× increase in mobile 5G subscribers, from 212 million in 2020 to 3.3 billion by 2026.

    If you’d like to dive deeper into the performance of Android and iOS devices, you can check Geekbench Android Benchmarks for Android smartphones and tablets, and iOS Benchmarks for iPhones and iPads.

    It goes without saying that testing thoroughly on a variety of devices — rather than just on a shiny new Android or iOS device — is critical for understanding and improving the experience of your prospective customers, and for knowing how well your website or app performs at scale.

    Making A Case For Business

    While testing on mobile devices might sound valuable to you, it might not be convincing enough to get management and the entire organization behind mobile testing. Fortunately, there are quite a few high-profile case studies exploring the impact of mobile optimization on key business metrics.

    WPO stats collects literally hundreds of them — case studies and experiments demonstrating the impact of web performance optimization (WPO) across verticals and goals.

    Driving Business Metrics

    One of the famous examples is Flipkart, India’s largest e-commerce website. For a while, Flipkart adopted an app-only strategy and temporarily shut down its mobile website altogether. The company found it more and more difficult to provide a user experience that was as fast and engaging as that of their mobile app.

    A few years ago, they decided to unify their web presence and native app into a mobile-optimized progressive web app, resulting in a 70% increase in conversion rate. They also discovered that customers were spending three times more time on the mobile website, and that the re-engagement rate increased by 40%.

    Improving Search Engine Visibility

    It’s not big news that search engines have been considering mobile friendliness as a part of search engine ranking. With Core Web Vitals, Google has been pushing the experience factors on mobile further to the forefront.

    In his article on Core Web Vitals and SEO, Simon Hearne notes that Google’s index update on May 31, 2021 will result in a positive ranking signal for page experience in mobile search only, for groups of similar URLs which meet all three Core Web Vitals targets. The impact of the signal is expected to be small, similar to the HTTPS ranking boost.

    A performance benchmark, Lighthouse is well-known; its CI counterpart, not so much. Lighthouse CI is quite remarkable: a suite of tools to continuously run, save, retrieve, and assert against Lighthouse results.
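
    As a sketch, a minimal lighthouserc.json for Lighthouse CI might look like this — the URL, run count, and score threshold are placeholders, and Lighthouse emulates a mid-range mobile device by default:

    {
        "ci": {
            "collect": {
                "url": ["http://localhost:8080/"],
                "numberOfRuns": 3
            },
            "assert": {
                "assertions": {
                    "categories:performance": ["error", { "minScore": 0.9 }]
                }
            },
            "upload": {
                "target": "temporary-public-storage"
            }
        }
    }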

    One thing is certain though: your websites will rank better if they are better optimized for mobile, both in terms of speed and mobile-friendliness — it goes for accessibility as well.

    Improving Accessibility

    Building accessible pages and applications isn’t easy. The challenges start with tiny hit targets, poor contrast and small font sizes, but it quickly gets much more complicated when we deal with complex single-page applications. To ensure that we cater well to our customers in various situations — with permanent, temporary and situational disabilities — we need to test for accessibility.

    That means considering keyboard navigation, checking whether navigation landmarks are properly assigned, how updates are announced by a screen reader, and whether we avoid any inaccessible libraries or third-party scripts. And then, for every component we build, we need to ensure that it stays accessible over time.
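
    Part of this can be automated in the unit-test suite, so components can’t regress silently. A sketch using the jest-axe package (an assumption — it is not part of Jest itself; automated checks catch only a subset of accessibility issues):

    const { axe, toHaveNoViolations } = require('jest-axe');

    expect.extend(toHaveNoViolations);

    it('renders an accessible checkout button', async () => {
        // In a real test, this would be a rendered component's HTML
        const html = '<button>Check Out</button>';
        expect(await axe(html)).toHaveNoViolations();
    });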

    It shouldn’t be surprising that if a website isn’t accessible to a customer, they are unlikely to access your product either. The earlier you invest in accessibility testing, the more you’ll save down the road on expensive consultancy, expensive third-party services, or expensive lawyers.

    Mobile Web Testing

    So, with all the challenges in the mobile space, how do we test on mobile? Fortunately, there is no shortage of mobile testing tools out there. Most of the time, though, mobile testing focuses on consistency and functionality. For a more thorough mobile test, we need to go a layer deeper, into some not-so-obvious specifics of testing.

    Screen Sizes

    Screen sizes are one of the many things that are always changing in the realm of mobile devices. Year after year new screen sizes and pixel densities appear with new device releases. This poses a problem in testing websites and apps on these devices, making debugging more difficult and time-consuming.

    OS Version Fragmentation

    With iOS having a high adoption rate for its latest OS releases (57% on the latest iOS 14), and with a plethora of Android versions still in use, going as far back as Ice Cream Sandwich, one must make sure to account for this fragmentation when doing mobile testing.

    Browser Fragmentation

    With Chrome and Safari having a global mobile usage share of 62.63% and 24.55% respectively, one might be tempted to focus on just these browsers when performing mobile tests. However, depending on the region of the world, you may also need to test in lesser-known browsers or proxy browsers, such as Opera Mini. Even though their percentage usage might seem small, it can still translate into hundreds of thousands of users globally.

    Performing Mobile Web Testing

    To perform mobile web testing, one option is to set up a device lab and run tests locally. In times of remote work, this is quite challenging, as you usually need a number of devices at your disposal. Acquiring these devices doesn’t have to be expensive, and experiencing the loading behavior yourself is extremely valuable. However, if we want to check how consistent the experience is, or conduct automated tests, it’s probably not going to be enough.

    In such cases, a good starting point is Responsively, a free open-source tool with mirrored interactions, customizable layout, 30+ built-in device profiles, hot reloading and screenshot tools.

    Also, you might want to look into dedicated developer-focused browsers for mobile testing as well.

    Sizzy supports sync scrolling, clicking and navigation across devices, as well as takes screenshots of all devices at once, with and without a device frame. Plus, it includes a Universal Inspect Element to inspect all devices at once.

    Blisk supports over 50 devices out of the box, along with synced scrolling. You can test touch support and preview devices side by side, working with the same piece of code across all opened devices. Hot reloading is supported too, along with video recording and screenshots.

    Another little helpful tool is LT Browser, a web application allowing you to perform mobile view debugging on 45+ devices — on mobile, tablet and desktop. (Full disclosure and reminder: LambdaTest is a friendly sponsor of this article).

    Testing the Smashing Magazine website on different devices.

    Once you have downloaded the browser and registered, you can build, test and debug your website, take screenshots and videos of bugs, assign them to specific devices, run performance profiling, and observe multiple devices side by side. By default, the free version provides 30 minutes of usage per day.

    If you need something slightly more advanced, LambdaTest allows you to run a cross-browser test on 2000+ devices on different operating systems. Also, BrowserStack provides an option to automate testing as well as testing for low battery, abrupt power off, and interruptions such as calls or SMS.

    Conclusion

    In this article, we have looked into the state of mobile in 2021. We’ve seen the growing usage of mobile devices as the primary means to access the web, and we’ve looked into some challenges related to that. We’ve also looked into some specific challenges around mobile testing, and how some tools can help us find and fix bugs on mobile.

    Keep in mind that your website is the face of your business, and that more and more users are going to access it via their mobile phones. It’s important to make sure that users can access the services you provide on your website, and that the experience on their devices is as accessible and fast as on desktop. This will ensure that your brand’s visibility gets the attention it deserves.


    Testing Vue Applications With The Vue Testing Library — Smashing Magazine

    11/24/2020

    The Vue Testing Library can help you test your applications by mirroring the way that a user would interact with them. Here’s everything you need to know if you want to get started right away.

    In this article, we will look at testing Vue applications using the Vue Testing Library — a lightweight library that emphasizes testing your front-end application from the user’s perspective.

    The following assumptions are made throughout this article:

    • The reader is familiar with Vue.
    • The reader is familiar with testing application UI.

    Conventionally, in Vue userland, when you want to test your application, you reach out for @vue/test-utils — the official testing library for Vue. @vue/test-utils provides APIs to test instances of rendered Vue components. Like so:

    // example.spec.js
    import { shallowMount } from '@vue/test-utils'
    import HelloWorld from '@/components/HelloWorld.vue'
    
    describe('HelloWorld.vue', () => {
      it('renders props.msg when passed', () => {
        const msg = 'new message'
        const wrapper = shallowMount(HelloWorld, {
          propsData: { msg }
        })
        expect(wrapper.text()).toMatch(msg)
      })
    })

    You can see we are mounting an instance of the Vue component using the shallowMount function provided by @vue/test-utils.

    The problem with the above approach to testing Vue applications is that the end user will be interacting with the DOM and has no knowledge of how Vue renders the UI. Instead, they will find UI elements by text content, by the label of an input element, and by other visual cues on the page.

    A better approach is to write tests for your Vue applications in a way that mirrors how an actual user will interact with them — for example, looking for a button to increment the quantity of a product on a checkout page. This is exactly what Vue Testing Library enables.

    What Is Vue Testing Library?

    Vue Testing Library is a lightweight testing library for Vue that provides lightweight utility functions on top of @vue/test-utils. It was created with a simple guiding principle:

    The more your tests resemble the way your software is used, the more confidence they can give you.
    testing-library.com

    Why Use Vue Testing Library

    • You want to write tests that don’t focus on implementation details — that is, on how the solution is implemented rather than on whether it produces the desired output.

    • You want to write tests that focus on the actual DOM nodes and not rendered Vue components.

    • You want to write tests that query the DOM in the same way a user would.

    How Vue Testing Library Works

    Vue Testing Library functions by providing utilities for querying the DOM in the same way a user would interact with the DOM. These utilities allow you to find elements by their label text, find links and buttons from their text content and assert that your Vue application is fully accessible.

    For cases where it doesn’t make sense or is not practical to find elements by their text content or label, Vue Testing Library provides a recommended escape hatch: finding these elements via the data-testid attribute.

    The data-testid attribute is added to the HTML element that you plan on querying for in your test. For example:

    <button data-testid="checkoutButton">Check Out</button>
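
    In a test, such an element can then be queried with the matching getByTestId helper. A small fragment (the component name is a placeholder; the render function it relies on is introduced in the next section):

    // Query the button via its test ID — the escape hatch when no
    // user-facing text or label is available
    const { getByTestId } = render(MyComponent);
    getByTestId('checkoutButton'); // throws a descriptive error if nothing matches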

    Getting Started With Vue Testing Library

    Now that you have seen why you should use Vue Testing Library and how it works, let’s proceed by setting it up in a brand new Vue CLI generated Vue project.

    First, we will generate a new Vue application by running the below command in the terminal (assuming you have Vue CLI installed on your machine):

    vue create vue-testing-library-demo

    To run our tests, we will be using Jest — a test runner developed by Facebook. Vue CLI has a plugin that easily sets up Jest. Let’s add that plugin:

    vue add unit-jest

    You will notice the plugin added a new script in package.json:

     "test:unit": "vue-cli-service test:unit",

    This would be used to run the tests. The plugin also added a new tests folder at the project root and, inside it, a unit folder with an example test file called example.spec.js. Based on the Jest configuration, when we run npm run test:unit, Jest will look for files in the tests directory and run the test files. Let’s run the example test file:

    npm run test:unit

    This should run the example.spec.js test file in tests/unit directory. Let’s look at the content of this file:

    import { shallowMount } from '@vue/test-utils'
    import HelloWorld from '@/components/HelloWorld.vue'
    
    describe('HelloWorld.vue', () => {
      it('renders props.msg when passed', () => {
        const msg = 'new message'
        const wrapper = shallowMount(HelloWorld, {
          propsData: { msg }
        })
        expect(wrapper.text()).toMatch(msg)
      })
    })

    By default, installing Jest with the Vue CLI plugin will also install @vue/test-utils, hence the above test file uses the shallowMount function from @vue/test-utils. A quick way to get familiar with Vue Testing Library is to modify this same test file to use it instead of @vue/test-utils.

    We do this by first uninstalling @vue/test-utils, as we won’t be needing it:

    npm uninstall @vue/test-utils --save-dev

    Then we install Vue Testing Library as a development dependency:

    npm install @testing-library/vue --save-dev

    Then we proceed to modify tests/unit/example.spec.js to this:

    import { render } from '@testing-library/vue'
    import HelloWorld from '@/components/HelloWorld.vue'
    
    describe('HelloWorld.vue', () => {
      it('renders props.msg when passed', () => {
        const msg = 'new message'
        const { getByText } = render(HelloWorld, {
          props: { msg }
        })
        getByText(msg)
      })
    })
    

    Run the test again and it should still pass. Let’s look at what we did:

    • We use the render function exposed by Vue Testing Library to render the HelloWorld components. render is the only way of rendering components in Vue Testing Library. When you call render, you pass in the Vue component and an optional options object.

    • We then use the options object to pass in the msg props needed by the HelloWorld component. render will return an object with helper methods to query the DOM and one of those methods is getByText.

    • We then use getByText to check that an element with the text content of ‘new message’ exists in the DOM. (All getBy* queries throw a descriptive error when no match is found, which makes the call itself an implicit assertion.)

    By now you might have noticed the shift from thinking about testing the rendered Vue component to thinking about what the user sees in the DOM. This shift will allow you to test your applications from the user’s perspective, as opposed to focusing on implementation details.

    Our Demo App

    Now that we have established how testing is done in Vue using Vue Testing Library, we will proceed to test our demo application. But first, we will flesh out the UI for the app. Our demo app is a simple checkout page for a product. We will be testing whether the user can increment the quantity of the product before checkout, whether they can see the product name and price, and so on. Let’s get started.

    First, create a new Vue component called CheckOut.vue in the components/ directory, and add the snippet below to it:

    <template>
        <div class="checkout">
            <h1>{{ product.name }} - <span data-testid="productPrice">${{ product.price }}</span></h1>
            <div class="quantity-wrapper">
                <div>
                    <label for="quantity">Quantity</label>
                    <input type="number" id="quantity" v-model="quantity" name="quantity" class="quantity-input" />
                </div>
                <div>
                    <button @click="incrementQuantity" class="quantity-btn">+</button>
                    <button @click="decrementQuantity" class="quantity-btn">-</button>
                </div>
            </div>
            <p>final price - $<span data-testid="finalPrice">{{ finalPrice }}</span></p>
            <button @click="checkout" class="checkout-btn">Checkout</button>
        </div>
    </template>
    <script>
    export default {
        data() {
            return {
                quantity: 1,
            }
        },
        props: {
            product: {
                required: true
            }
        },
        computed: {
            finalPrice() {
                return this.product.price * this.quantity
            }
        },
        methods: {
            incrementQuantity() {
                this.quantity++;
            },
            decrementQuantity() {
                if (this.quantity == 1) return;
                this.quantity--;
            },
            checkout() {
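                // Checkout logic would go here; it is intentionally left empty, as these tests don't cover it.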
    
            }
        }
    }
    </script>
    
    <style scoped>
    .quantity-wrapper {
        margin: 2em auto;
        width: 50%;
        display: flex;
        justify-content: center;
    }
    
    .quantity-wrapper div {
        margin-right: 2em;
    }
    .quantity-input {
        margin-left: 0.5em;
    }
    .quantity-wrapper button {
        margin-right: 1em;
    }
    button {
        cursor: pointer;
    }
    </style>
    

    Then modify App.vue to:

    <template>
      <div id="app">
        <check-out :product="product" />
      </div>
    </template>
    
    <script>
    import CheckOut from './components/CheckOut.vue'
    
    export default {
      name: 'App',
      data() {
        return {
          product: {
            name: 'Shure Mic SM7B',
            price: 200,
          }
        }
      },
      components: {
        CheckOut
      }
    }
    </script>
    
    <style>
    #app {
      font-family: Avenir, Helvetica, Arial, sans-serif;
      -webkit-font-smoothing: antialiased;
      -moz-osx-font-smoothing: grayscale;
      text-align: center;
      color: #2c3e50;
      margin-top: 60px;
    }
    </style>
    

    For our test cases, we will be testing the following scenarios:

    1. Can the user see the product name?
    2. Can the user see the product price?
    3. Can the user increment product quantity?
    4. Can the user decrement product quantity?
    5. Can the user see the updated total price in real-time as the quantity changes?

    Our UI is pretty minimalistic, as the emphasis is on testing with Vue Testing Library. Let’s proceed to test the CheckOut component. Create a new test file in tests/unit/ called checkout.spec.js.

    We will then proceed to scaffold the test file:

    import { render, fireEvent } from '@testing-library/vue'
    import CheckOut from '@/components/CheckOut.vue'
    
    const product = {
        name: 'Korg Kronos',
        price: 1200
    }
    describe('Checkout.vue', () => {
      // tests goes here
    })
    

    Our very first test case will check that the product name is rendered. We do so like this:

    it('renders product name', () => {
        const { getByText } = render(CheckOut, {
            props: { product }
        })

        getByText(product.name)
    })

    Then we will check if the product price is rendered:

    it('renders product price', () => {
        const { getByText } = render(CheckOut, {
            props: { product }
        })

        getByText("$" + product.price)
    })

    Going forward with testing the CheckOut component, we will test whether the initial quantity the user sees is 1, using the getByDisplayValue helper method:

    it('renders initial quantity as 1', () => {
        const { getByDisplayValue } = render(CheckOut, {
            props: { product }
        })

        getByDisplayValue(1)
    })

    Next up, we will check that when the user clicks the button to increment the product quantity, the quantity is incremented. We will do this by firing the click event using the fireEvent utility from Vue Testing Library. Here is the complete implementation:

    it('increments product quantity', async () => {
        const { getByDisplayValue, getByText } = render(CheckOut, {
            props: { product }
        })

        const incrementQuantityButton = getByText('+')
        await fireEvent.click(incrementQuantityButton)
        getByDisplayValue(2)
    })

    We will do the same for the decrement behavior: first when the quantity is 1, in which case we don’t decrement it, and then when the quantity is 2. Let’s write both test cases.

    it('does not decrement quantity when quantity is 1', async () => {
        const { getByDisplayValue, getByText } = render(CheckOut, {
            props: { product }
        })

        const decrementQuantityButton = getByText('-')
        await fireEvent.click(decrementQuantityButton)
        getByDisplayValue(1)
    })

    it('decrements quantity when quantity is greater than 1', async () => {
        const { getByDisplayValue, getByText } = render(CheckOut, {
            props: { product }
        })

        const incrementQuantityButton = getByText('+')
        const decrementQuantityButton = getByText('-')
        await fireEvent.click(incrementQuantityButton)
        await fireEvent.click(decrementQuantityButton)
        getByDisplayValue(1)
    })

    Lastly, we will test that the final price is calculated correctly and displayed to the user when the increment and decrement quantity buttons are clicked.

    it('displays correct final price when increment button is clicked', async () => {
        const { getByText } = render(CheckOut, {
            props: { product }
        })

        const incrementQuantityButton = getByText('+')
        await fireEvent.click(incrementQuantityButton)
        getByText(product.price * 2)
    })

    it('displays correct final price when decrement button is clicked', async () => {
        const { getByText } = render(CheckOut, {
            props: { product }
        })

        const incrementQuantityButton = getByText('+')
        const decrementQuantityButton = getByText('-')
        await fireEvent.click(incrementQuantityButton)
        await fireEvent.click(decrementQuantityButton)
        getByText(product.price)
    })

    Throughout our test cases, you will notice that we focused on writing our tests from the perspective of what the user will see and interact with. Writing tests this way ensures that we are testing what matters to the users of the application.
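
    As one more illustrative sketch (not part of the original suite, and reusing the imports already at the top of checkout.spec.js), the same principle applies to typing: since the quantity input is bound with v-model, we can simulate the user typing a quantity with fireEvent.update and assert on the price they would see:

    it('updates final price when the user types a quantity', async () => {
        const { getByDisplayValue, getByText } = render(CheckOut, {
            props: { product }
        })

        // The input initially displays "1"; grab it by its displayed value.
        const quantityInput = getByDisplayValue('1')
        // fireEvent.update is Vue Testing Library's helper for v-model-bound inputs.
        await fireEvent.update(quantityInput, '3')
        getByText(String(product.price * 3))
    })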

    Conclusion

    This article introduced an alternative library and approach for testing Vue applications, called Vue Testing Library. We saw how to set it up and how to write tests for Vue components with it.

    Resources

    You can find the demo project on GitHub.


    web design

    Supercharge Testing React Applications With Wallaby.js — Smashing Magazine

    10/16/2020

    About The Author

    Kelvin is an independent software maker currently building Sailscasts — a platform to learn server-side JavaScript. He is also a technical writer and …
    More about
    Kelvin

    Ever had to switch your focus from your editor and to your terminal to see the results of your tests? This article will introduce you to Wallaby.js — a JavaScript productivity tool that supercharges your IDE by allowing you to get real-time feedback on your JavaScript tests in your code editor even before saving the file. You will also learn how to use Wallaby.js for testing React applications.

    Note: In order to be able to follow along, you’ll need to be familiar with JavaScript testing and have a working knowledge of building React applications.

    One thing you will discover very quickly when you start writing tests for an application is that you want to run your tests constantly when you are coding. Having to switch between your code editor and terminal window (or in the case of VS Code, the integrated terminal) adds an overhead and reduces your productivity as you build your application. In an ideal world, you would have instant feedback on your tests right in your editor as you are writing your code. Enter Wallaby.js.

    What Is Wallaby.js?

    Wallaby.js is an intelligent test runner for JavaScript that continuously runs your tests. It reports code coverage and other results directly to your code editor immediately as you change your code (even without saving the file). The tool is available as an editor extension for VS Code, IntelliJ Editors (such as WebStorm and IntelliJ IDEA), Atom, Sublime Text, and Visual Studio.

    A screenshot of Wallaby.js, an intelligent test runner for JavaScript that continuously runs your tests

    Why Wallaby.js?

    As stated earlier, Wallaby.js aims to improve your productivity in your day-to-day JavaScript development. Depending on your development workflow, Wallaby can save you hours of time each week by reducing context switching. Wallaby also provides code coverage reporting, error reporting, and other time-saving features such as time-travel debugging and test stories.

    Getting Started With Wallaby.js In VS Code

    Let’s see how we can get the benefits of Wallaby.js using VS Code.

    Note: If you are not using VS Code, you can find instructions here on how to set up Wallaby.js for other editors.

    Install The Wallaby.js VS Code Extension

    To get started we will install the Wallaby.js VS Code extension.

    After the extension is installed, the Wallaby.js core runtime will be automatically downloaded and installed.

    Wallaby License

    Wallaby provides an open-source license for open-source projects seeking to use Wallaby.js. Visit here to obtain one. You may use the open-source license with the demo repo for this article.

    You can also get a fully functional 15-day trial license by visiting here.

    If you want to use Wallaby.js on a non-open-source project beyond the 15-day trial license period, you may obtain a license key from the Wallaby website.

    Add License Key To VS Code

    After obtaining a license key, head over to VS Code and search for “Wallaby.js: Manage License Key” in the command palette. Click on the command and you will be presented with an input box to enter your license key; hit Enter, and you will receive a notification that Wallaby.js has been successfully activated.

    Wallaby.js And React

    Now that we have Wallaby.js set up in our VS Code editor, let’s supercharge testing a React application with Wallaby.js.

    For our React app, we will add a simple upvote/downvote feature and we will write some tests for our new feature to see how Wallaby.js plays out in the mix.

    Creating The React App

    Note: You can clone the demo repo if you like, or you can follow along below.

    We will create our React app using the create-react-app CLI tool.

    npx create-react-app wallaby-js-demo
    

    Then open the newly scaffolded React project in VS Code.

    Open src/App.js and start Wallaby.js by running “Wallaby.js: Start” in the VS Code command palette (alternatively, you can use the shortcut Ctrl + Shift + R R on a Windows or Linux machine, or Cmd + Shift + R R on a Mac).

    A screenshot of creating the React app by using the create-react-app CLI tool

    When Wallaby.js starts you should see its test coverage indicators to the left of your editor similar to the screenshot below:

    A screenshot of the App.js file showing test coverage indicators when starting Wallaby.js

    Wallaby.js provides 5 different colored indicators in the left margin of your code editor:

    1. Gray: means that the line of code is not executed by any of your tests.
    2. Yellow: means that some of the code on a given line was executed but other parts were not.
    3. Green: means that all of the code on a line was executed by your tests.
    4. Pink: means that the line of code is on the execution path of a failing test.
    5. Red: means that the line of code is the source of an error or failed expectation, or in the stack of an error.

    If you look at the status bar, you will see Wallaby.js metrics for this file: it shows that we have 100% test coverage for src/App.js and a single passing test with no failing tests. How does Wallaby.js know this? When we started Wallaby.js, it detected that src/App.js has a test file, src/App.test.js. It then runs those tests in the background for us, conveniently gives us feedback using its color indicators, and summarizes the test metrics in the status bar.

    When you open src/App.test.js, you will see similar feedback from Wallaby.js:

    A screenshot of the code inside the App.test.js file

    Currently, all tests are passing, so we get all green indicators. Let’s see how Wallaby.js handles failing tests. In src/App.test.js, let’s make the test fail by changing the expectation of the test like so:

    // src/App.test.js
    expect(linkElement).not.toBeInTheDocument();
    

    The screenshot below shows how your editor would now look with src/App.test.js open:

    A screenshot of the App.test.js file opened in a editor showing failing tests

    You will see the indicators change to red and pink for the failing tests. Also notice we didn’t have to save the file for Wallaby.js to detect we made a change.

    You will also notice the line in your editor in src/App.test.js that outputs the error of the test. This is thanks to Wallaby.js’ advanced logging. Using advanced logging, you can also report and explore runtime values beside your code using console.log, a special comment format //?, and the VS Code command Wallaby.js: Show Value.
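
    As a small illustrative sketch (any code executed by a Wallaby run will do; the sum function here is just an assumption for the example), the two logging styles look like this:

    // Values logged with console.log appear inline, next to the line that logs them.
    const sum = (a, b) => a + b;
    console.log(sum(2, 3));

    // The special //? comment reports the runtime value of the expression on its line.
    sum(2, 3); //?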

    Now let’s see the Wallaby.js workflow for fixing failing tests. Click on the Wallaby.js test indicator (“✗ 1 ✓ 0”) in the status bar to open the Wallaby.js output window.

    A screenshot of the App.test.js file open in an editor with the Wallaby.js Tests indicator tab open

    In the Wallaby.js output window, right next to the failing test, you should see a “Debug Test” link. Pressing Ctrl and clicking on that link will fire up the Wallaby.js time travel debugger. When we do that, the Wallaby.js Tools window will open to the side of your editor, and you should see the Wallaby.js debugger section as well as the Value explorer and Test file coverage sections.

    If you want to see the runtime value of a variable or expression, select the value in your editor and Wallaby.js will display it for you.

    A screenshot of the App.test.js file opened in an editor showing the runtime value selected

    Also, notice the “Open Test Story” link in the output window. Wallaby.js’ test story allows you to see all your tests and the code they are testing in a single view in your editor.

    Let’s see this in action. Press Ctrl and click on the link — you should be able to see the Wallaby.js test story open up in your editor. Wallaby’s Test Story Viewer provides a unique and efficient way of inspecting what code your test is executing in a single logical view.

    A screenshot of what can be seen in the Test Story tab

    Another thing we will explore before fixing our failing test is the Wallaby.js app. Notice the link in the Wallaby.js output window: “Launch Coverage & Test Explorer”. Clicking on the link will launch the Wallaby.js app, which gives you a compact bird’s-eye view of all tests in your project.

    Next, click on the link and start up the Wallaby.js app in your default browser via http://localhost:51245/. Wallaby.js will quickly detect that we have our demo project open in our editor which will then automatically load it into the app.

    Here is how the app should now look:

    A screenshot of the Wallaby.js demo app project previewed in the browser

    You should be able to see the test’s metrics on the top part of the Wallaby.js app. By default, the Tests tab in the app is opened up. By clicking on the Files tab, you should be able to see the files in your project as well as their test coverage reports.

    A screenshot of a browser tab showing the demo preview of the Wallaby.js app and where the Files tab can be found

    Back on the Tests tab, click on the test, and you should see the Wallaby.js error-reporting feature to the right:

    A screenshot showing how the app reports errors

    Now that we’ve covered all that, go back to the editor and fix the failing test to make Wallaby.js happy by reverting the line we changed earlier to this:

    expect(linkElement).toBeInTheDocument();
    

    The Wallaby.js output window should now look like the screenshot below, and your test coverage indicators should all be passing again.

    A screenshot of the App.test.js file opened in an editor showing all tests passed in the Output tab

    Implementing Our Feature

    We’ve explored Wallaby.js in the default app created for us by create-react-app. Let’s implement our upvote/downvote feature and write tests for that.

    Our application UI should contain two buttons, one for upvoting and the other for downvoting, plus a single counter that is incremented or decremented depending on which button the user clicks. Let’s modify src/App.js to look like this.

    // src/App.js
    import React, { useState } from 'react';
    import logo from './logo.svg';
    import './App.css';
    
    function App() {
      const [vote, setVote] = useState(0);
    
      function upVote() {
        setVote(vote + 1);
      }
    
      function downVote() {
        // Note the error, we will fix this later...
        setVote(vote - 2);
      }
      return (
        <div className='App'>
          <header className='App-header'>
            <img src={logo} className='App-logo' alt='logo' />
            <p className='vote' title='vote count'>
              {vote}
            </p>
            <section className='votes'>
              <button title='upVote' onClick={upVote}>
                <span role='img' aria-label='Up vote'>
                  👍🏿
                </span>
              </button>
              <button title='downVote' onClick={downVote}>
                <span role='img' aria-label='Down vote'>
                  👎🏿
                </span>
              </button>
            </section>
          </header>
        </div>
      );
    }
    
    export default App;
    

    We will also style the UI just a bit. Add the following rules to src/index.css:

    .votes {
      display: flex;
      justify-content: space-between;
    }
    p.vote {
      font-size: 4rem;
    }
    button {
      padding: 2rem 2rem;
      font-size: 2rem;
      border: 1px solid #fff;
      margin-left: 1rem;
      border-radius: 100%;
      transition: all 300ms;
      cursor: pointer;
    }
    
    button:focus,
    button:hover {
      outline: none;
      filter: brightness(40%);
    }
    

    If you look at src/App.js, you will notice some gray indicators from Wallaby.js, hinting that some part of our code isn’t tested yet. Also, you will notice our initial test in src/App.test.js is failing, and the Wallaby.js status bar indicator shows that our test coverage has dropped.

    A screenshot of how the initial test is shown to have failed inside the App.test.js file

    These visual clues by Wallaby.js are convenient for test-driven development (TDD) since we get instant feedback on the state of our application regarding tests.

    Testing Our App Code

    Let’s modify src/App.test.js to check that the app renders correctly.

    Note: We will be using React Testing Library for our tests, which comes out of the box when you run create-react-app. See the docs for a usage guide.

    We are going to need a couple of extra functions from @testing-library/react. Update your @testing-library/react import to:

    import { render, fireEvent, cleanup } from '@testing-library/react';
    

    Then let’s replace the single test in src/App.test.js with:

    test('App renders correctly', () => { render(<App />); });
    

    Immediately, you will see the indicators go green on the line in src/App.test.js where we test the render of the app, and also across the code in src/App.js that the test executes.

    A screenshot of the App.test.js file opened in an editor showing green indicators

    Next, we will test that the initial value of the vote state is zero (0).

    it('Vote count starts at 0', () => {
      const { getByTitle } = render(<App />);
      const voteElement = getByTitle('vote count');
      expect(voteElement).toHaveTextContent(/^0$/);
    });
    

    Next, we will test if clicking the upvote 👍🏿 button increments the vote:

    it('Vote increments by 1 when upVote button is pressed', () => {
      const { getByTitle } = render(<App />);
      const upVoteButtonElement = getByTitle('upVote');
      const voteElement = getByTitle('vote count');
      fireEvent.click(upVoteButtonElement);
      expect(voteElement).toHaveTextContent(/^1$/);
    });
    

    We will also test for the downvote 👎🏿 interaction like so:

    it('Vote decrements by 1 when downVote button is pressed', () => {
      const { getByTitle } = render(<App />);
      const downVoteButtonElement = getByTitle('downVote');
      const voteElement = getByTitle('vote count');
      fireEvent.click(downVoteButtonElement);
      expect(voteElement).toHaveTextContent(/^-1$/);
    });
    

    Oops, this test is failing. Let’s work out why. Above the test, click the View story code lens link or the Debug Test link in the Wallaby.js output window, and use the debugger to step through to the downVote function. We have a bug: we should have decremented the vote count by 1, but instead we are decrementing it by 2. Let’s fix our bug and decrement by 1.

    // src/App.js
    function downVote() {
        setVote(vote - 1);
    }
    

    Watch now how Wallaby’s indicators go green, letting us know that all of our tests are passing.

    Our src/App.test.js should look like this:

    import React from 'react';
    import { render, fireEvent, cleanup } from '@testing-library/react';
    import App from './App';
    
    test('App renders correctly', () => {
      render(<App />);
    });
    
    it('Vote count starts at 0', () => {
      const { getByTitle } = render(<App />);
      const voteElement = getByTitle('vote count');
      expect(voteElement).toHaveTextContent(/^0$/);
    });
    
    it('Vote count increments by 1 when upVote button is pressed', () => {
      const { getByTitle } = render(<App />);
      const upVoteButtonElement = getByTitle('upVote');
      const voteElement = getByTitle('vote count');
      fireEvent.click(upVoteButtonElement);
      expect(voteElement).toHaveTextContent(/^1$/);
    });
    
    it('Vote count decrements by 1 when downVote button is pressed', () => {
      const { getByTitle } = render(<App />);
      const downVoteButtonElement = getByTitle('downVote');
      const voteElement = getByTitle('vote count');
      fireEvent.click(downVoteButtonElement);
      expect(voteElement).toHaveTextContent(/^-1$/);
    });
    
    afterEach(cleanup);
    

    After writing these tests, Wallaby.js shows us that the missing code paths that we initially identified before writing any tests have now been executed. We can also see that our coverage has increased. Again, you will notice how writing your tests with instant feedback from Wallaby.js allows you to see what’s going on with your tests right in your editor, which in turn improves your productivity.

    Final outcome of the Wallaby.js demo opened in a browser tab

    Conclusion

    From this article, you have seen how Wallaby.js improves your developer experience when testing JavaScript applications. We have investigated some key features of Wallaby.js, set it up in VS Code, and then tested a React application with Wallaby.js.


    web design

    Unit Testing In React Native Applications — Smashing Magazine

    09/29/2020

    About The Author

    Fortune Ikechi is a Frontend Engineer based in Rivers State Nigeria. He is a student of the University of Port-Harcourt. He is passionate about community and …
    More about
    Fortune

    Unit testing has become an integral part of the software development process. It is the level of testing at which the components of the software are tested. In this tutorial, you’ll learn how to test units of a React Native application.

    React Native is one of the most widely used frameworks for building mobile applications. This tutorial is targeted at developers who want to get started testing React Native applications that they build. We’ll make use of the Jest testing framework and Enzyme.

    In this article, we’ll learn the core principles of testing, explore various libraries for testing an application, and see how to test units (or components) of a React Native application. By working with a React Native application, we’ll solidify our knowledge of testing.

    Note: Basic knowledge of JavaScript and React Native would be of great benefit as you work through this tutorial.

    What Is Unit Testing?

    Unit testing is the level of testing at which individual components of the software are tested. We do it to ensure that each component works as expected. A component is the smallest testable part of the software.

    To illustrate, let’s create a Button component and simulate a unit test:

    import React from 'react';
    import { StyleSheet, Text, TouchableOpacity } from 'react-native';

    // A simple color map so the component can be themed via a `color` prop.
    const colors = {
        primary: 'dodgerblue',
        secondary: 'red',
    };

    function AppButton({ onPress, color = 'secondary' }) {
        return (
          <TouchableOpacity
              style={[styles.button,
                  { backgroundColor: colors[color] }]}
                     onPress={onPress} >
              <Text style={styles.text}>Register</Text>
          </TouchableOpacity>
        );
    }
    const styles = StyleSheet.create({
        button: {
            borderRadius: 25,
            justifyContent: 'center',
            alignItems: 'center',
        },
        text: {
            color: '#fff'
        }
    })
    export default AppButton;

    This Button component has text and an onPress function. Let’s test this component to see what unit testing is about.

    First, let’s create a test file named Button.test.js:

    import React from 'react';
    import renderer from 'react-test-renderer';
    import AppButton from './AppButton';

    it('renders correctly across screens', () => {
      const tree = renderer.create(<AppButton />).toJSON();
      expect(tree).toMatchSnapshot();
    });
    

    Here, we are testing to see whether our Button component renders as it should on all screens of the application. This is what unit testing is all about: testing components of an application to make sure they work as they should.

    Unit Testing In React Native Applications

    A React Native application can be tested with a variety of tools, some of which are the following:

    • WebDriver
      This open-source testing tool for Node.js apps is also used to test React Native applications.
    • Nightmare
      This automates test operations in the browser. According to the documentation, “the goal is to expose a few simple methods that mimic user actions (like goto, type and click), with an API that feels synchronous for each block of scripting, rather than deeply nested callbacks.”
    • Jest
      This is one of the most popular testing libraries out there and the one we’ll be focusing on today. Like React, it is maintained by Facebook and was made to provide a “zero config” setup for maximum performance.
    • Mocha
      Mocha is a popular library for testing React and React Native applications. It has become a testing tool of choice for developers because of how easy it is to set up and use and how fast it is.
    • Jasmine
      According to its documentation, Jasmine is a behavior-driven development framework for testing JavaScript code.

    Introduction To Jest And Enzyme

    According to its documentation, “Jest is a delightful JavaScript testing framework with a focus on simplicity”. It works with zero configuration. Upon installation (using a package manager such as npm or Yarn), Jest is ready to use, with no other installations needed.

    Enzyme is a JavaScript testing utility for React that can also be used with React Native. (If you’re working with React rather than React Native, a guide is available.) We’ll use Enzyme to test units of our application’s output. With it, we can simulate the application’s runtime.

    Let’s get started by setting up our project. We’ll be using the Done With It app on GitHub, a React Native marketplace application. Start by cloning it, navigate into the folder, and install the packages by running the following for npm…

    npm install
    

    … or this for Yarn:

    yarn install
    

    This command will install all of the packages in our application. Once that’s done, we’ll test our application’s UI consistency using snapshots, covered below.

    Snapshots And Jest Configuration

    In this section, we’ll test for user touches and the UI of the app’s components by testing snapshots using Jest.

    Before doing that, we need to install Jest and its dependencies. To install Jest for Expo React Native, run the following command:

    yarn add jest-expo --dev
    

    This installs jest-expo in our application’s directory. Next, we need to update our package.json file to have a test script:

    "scripts": {
        "test" "jest"
    },
    "jest": {
        "preset": "jest-expo"
    }
    

    By adding this, we register a test script that runs Jest and tell Jest to use the jest-expo preset in our application.

    Next is adding other packages to our application that will help Jest run a comprehensive test. For npm, run this…

    npm i react-test-renderer --save-dev
    

    … and for Yarn, this:

    yarn add react-test-renderer --dev
    

    We still have a little configuration to do in our package.json file. According to Expo’s documentation, we need to add a transformIgnorePatterns configuration. By default, Jest does not transform anything inside node_modules, but many React Native and Expo packages ship untranspiled code; the negative-lookahead pattern below tells Jest to transform those listed packages as well.

    "jest": {
      "preset": "jest-expo",
      "transformIgnorePatterns": [
        "node_modules/(?!(jest-)?react-native|react-clone-referenced-element|@react-native-community|expo(nent)?|@expo(nent)?/.*|react-navigation|@react-navigation/.*|@unimodules/.*|unimodules|sentry-expo|native-base|@sentry/.*)"
      ]
    }
    

    Now, let’s create a new file, named App.test.js, to write our first test. We will test whether our App has one child element in its tree:

    import React from "react";
    import renderer from "react-test-renderer";
    import App from "./App.js"
    describe("<App />", () => {
        it('has 1 child', () => {
            const tree = renderer.create(<App />).toJSON();
            expect(tree.children.length).toBe(1);
        });
    });
    

    Now, run yarn test or its npm equivalent. If App.js has a single child element, our test should pass, which will be confirmed in the command-line interface.

    In the code above, we’ve imported React and react-test-renderer, which renders our tests for Expo. We’ve converted the <App /> component tree to JSON, and then asked Jest to see whether the returned number of child components in JSON equals what we expect.
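
    For intuition, the serialized tree is a plain object. Its exact contents depend on your App component; the shape below is illustrative only, not output from the demo app:

    // Rough shape of renderer.create(<App />).toJSON():
    // {
    //     type: 'View',
    //     props: { /* the props passed to the root element */ },
    //     children: [
    //         { type: 'Text', props: {}, children: ['Welcome'] }
    //     ]
    // }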

    More Snapshot Tests

    As David Adeneye states:

    “A snapshot test makes sure that the user interface (UI) of a web application does not change unexpectedly. It captures the code of a component at a moment in time, so that we can compare the component in one state with any other possible state it might take.”

    This is done especially when a project involves global styles that are used across a lot of components. Let’s write a snapshot test for App.js to test its UI consistency:

    it('renders correctly across screens', () => {
      const tree = renderer.create(<App />).toJSON();
      expect(tree).toMatchSnapshot();
    });
    

    Add this to the tests we’ve already written, and then run yarn test (or its npm equivalent). If our test passes, we should see this:

      PASS  src/App.test.js
      √ has 1 child (16ms)
      √ renders correctly (16ms)
    
    Test Suites: 1 passed, 1 total
    Tests:       2 passed, 2 total
    Snapshots:   1 total
    Time:        24s
    

    This tells us that our tests passed and the time they took. Your result will look similar if the tests passed.

    Let’s move on to mocking some functions in Jest.

    Mocking API Calls

    According to Jest’s documentation:

    Mock functions allow you to test the links between code by erasing the actual implementation of a function, capturing calls to the function (and the parameters passed in those calls), capturing instances of constructor functions when instantiated with `new`, and allowing test-time configuration of return values.

    Simply put, a mock is a copy of an object or function without the real workings of that function. It imitates that function.

    Mocks help us test apps in so many ways, but the main benefit is that they reduce our need for dependencies.

    Mocks can usually be performed in one of two ways. One is to create a mock function that is injected into the code to be tested. The other is to write a mock function that overrides the package or dependency that is attached to the component.
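
    As a quick sketch of the first approach, assuming the AppButton component from earlier is exported from a local ./AppButton module (the path is hypothetical), we can inject a jest.fn() as the onPress prop and assert that it gets called:

    import React from 'react';
    import renderer from 'react-test-renderer';
    import { TouchableOpacity } from 'react-native';
    import AppButton from './AppButton'; // hypothetical path to the component above

    it('calls onPress when the button is pressed', () => {
        const onPress = jest.fn(); // the mock we inject into the component
        const tree = renderer.create(<AppButton onPress={onPress} />);
        // Find the touchable and invoke its onPress handler to simulate a press.
        tree.root.findByType(TouchableOpacity).props.onPress();
        expect(onPress).toHaveBeenCalledTimes(1);
    });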

    Most organizations and developers prefer to write manual mocks that imitate functionality and use fake data to test some components.

    React Native includes fetch in the global object. To avoid making real API calls in our unit tests, we mock them. Below is a way to mock most, if not all, of our API calls in React Native, without the need for extra dependencies:

    global.fetch = jest.fn();

    // Mocking an API success response once
    fetch.mockResponseIsSuccess = (body) => {
      fetch.mockImplementationOnce(
        () => Promise.resolve({ json: () => Promise.resolve(JSON.parse(body)) })
      );
    };

    // Mocking an API failure response once
    fetch.mockResponseIsFailure = (error) => {
      fetch.mockImplementationOnce(
        () => Promise.reject(error)
      );
    };
    

    Here, we’ve written two helpers. The first mocks a single successful fetch call: it returns a promise that resolves to a response object whose json() method resolves to the parsed body. The second is the mock response for a failed fetch transaction: it returns a promise that rejects with an error.
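
    A short usage sketch for these helpers (the helper names come from the snippet above; they are not part of Jest itself):

    it('resolves with the mocked body instead of hitting the network', async () => {
        fetch.mockResponseIsSuccess(JSON.stringify({ name: 'Pizza', quantity: 5 }));
        const response = await fetch('https://example.com/api/products/1');
        const data = await response.json();
        // No real request was made; the data comes from our mock.
        expect(data).toEqual({ name: 'Pizza', quantity: 5 });
    });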

    Below is the product component of our application, containing a hard-coded product object and rendering its information.

    import React from 'react';
    import { Text } from 'react-native';

    const Product = () => {
        const product = {
            name: 'Pizza',
            quantity: 5,
            price: '$50'
        }
        return (
            <>
                <Text>Name: {product.name}</Text>
                <Text>Quantity: {product.quantity}</Text>
                <Text>Price: {product.price}</Text>
            </>
        );
    }
    export default Product;
    

    Let’s imagine we are trying to test all of our product’s components. Directly accessing our database is not a feasible solution. This is where mocks come into play. In the code below, we mock the product data and use Jest to describe the tests on the component.

    describe("", () => {
      it("accepts products props", () => {
        const wrapper = mount(<Customer product={product} />);
        expect(wrapper.props().product).toEqual(product);
      });
      it("contains products quantity", () => {
        expect(value).toBe(3);
      });
    });
    

    We are using describe from Jest to group the tests we want to run. In the first test, we check whether the product object we pass in is equal to the props the component receives.

    In the second test, we check the mocked product’s quantity. In doing so, we don’t have to test against real data, and we also get to prevent bugs in our code.

    Mocking External API Requests

    Until now, we’ve been mocking API calls made with fetch. Now let’s mock an external API call made with Axios. To test an external call to an API, we have to mock our requests and also manage the responses we get. We are going to use axios-mock-adapter to mock Axios. First, we need to install axios-mock-adapter by running the command below:

    yarn add axios-mock-adapter
    

    The next thing to do is create our mocks:

    import MockAdapter from 'axios-mock-adapter';
    import Faker from 'faker';
    import ApiClient from '../constants/api-client';
    import userDetails from 'jest/mockResponseObjects/user-objects';

    let mockApi = new MockAdapter(ApiClient.getAxiosInstance());
    let validAuthentication = {
        name: Faker.internet.email(),
        password: Faker.internet.password()
    };

    mockApi.onPost('requests').reply((config) => {
        if (config.data === validAuthentication) {
            return [200, userDetails];
        }
        return [400, 'Incorrect username and password'];
    });
    

    Here, we wrap the Axios instance exposed by our ApiClient in a MockAdapter and define a valid set of user credentials. We are using a package named faker.js to generate fake user data, such as an email address and password.

    The mock behaves as we expect the API to. If the request is successful, we’ll get a response with a status code of 200 for OK. And we’ll get a status code of 400 for a bad request to the server, which would be sent with JSON with the message “Incorrect username and password”.

    Now that our mock is ready, let’s write a test for an external API request. As before, we’ll be using snapshots.

    it('successful sign in with correct credentials', async () => {
      await store.dispatch(authenticateUser('ikechifortune@gmail.com', 'password'));
      expect(store.getActions()).toMatchSnapshot();
    });

    it('unsuccessful sign in with wrong credentials', async () => {
      await store.dispatch(authenticateUser('ikechifortune@gmail.com', 'wrong credential'))
        .catch((error) => {
          expect(error).toMatchSnapshot();
        });
    });

    Here, we’re testing for a successful sign-in with the correct credentials, using async/await to wait for the dispatch. The authenticateUser action from our app makes the request, and the snapshot assertion makes sure the dispatched actions match our earlier snapshots. Next, we test for an unsuccessful sign-in with wrong credentials, such as a wrong email address or password, and we expect an error as the response.

    Now, run yarn test or npm test. I’m sure all of your tests will pass.

    Let’s see how to test components in a state-management library, Redux.

    Testing Redux Actions And Reducers Using Snapshots

    There’s no denying that Redux is one of the most widely used state managers for React applications. Most of the functionality in Redux involves a dispatch, which is a function of the Redux store that is used to trigger a change in the state of an application. Testing Redux can be tricky because Redux’s actions grow quickly in size and complexity. With Jest snapshots, this becomes easier. Most testing with Redux comes down to two things:

    • To test actions, we create redux-mock-store and dispatch the actions.
    • To test reducers, we import the reducer and pass a state and action object to it.

    Below is a Redux test with snapshots. We will be testing the actions dispatched by authenticating the user at SIGN-IN and seeing how the LOGOUT action is handled by the user reducer.

    import configureMockStore from 'redux-mock-store';
    // Assumed path: the action lives wherever your project defines it.
    import { authenticateUser } from '../actions/authenticate';
    import { LOGOUT } from '../actions/logout';
    import user from '../reducers/user';
    import { testUser } from 'jest/mock-objects';

    const mockStore = configureMockStore();

    describe('Testing the sign in authentication', () => {
        const store = mockStore();

        it('user attempts with correct password and succeeds', async () => {
            await store.dispatch(authenticateUser('example@gmail.com', 'password'));
            expect(store.getActions()).toMatchSnapshot();
        });
    });

    describe('Testing reducers after user LOGS OUT', () => {
        it('user is returned back to initial app state', () => {
            expect(user(testUser, { type: LOGOUT })).toMatchSnapshot();
        });
    });
    

    In the first test, we are describing the sign-in authentication and creating a mock store with redux-mock-store. We also import a mock user object named testUser from our mock objects. Next, we test for when the user successfully signs into the application using an email address and password, and assert that the dispatched actions match our stored snapshot. So, the snapshot ensures that the actions produced by signing in match every time the test is run.

    In the second test, we are testing for when the user logs out. Once our reducer snapshot confirms that a user has logged out, it returns to the initial state of the application.

    Next, we test by running yarn test. If the tests have passed, we should see the following result:

      PASS  src/redux/actions.test.js
      √ user attempts with correct password and succeeds (23ms)
      √ user is returned back to initial app state (19ms)
    
    Test Suites: 1 passed, 1 total
    Tests:       2 passed, 2 total
    Snapshots:   2 total
    Time:        31s
    

    Conclusion

    With Jest, testing React Native applications has never been easier, especially with snapshots, which ensure that the UI remains consistent regardless of the global styles. Also, Jest allows us to mock certain API calls and modules in our application. We can take this further by testing components of a React Native application.


    web design

    How To Automate API Testing With Postman — Smashing Magazine

    09/07/2020

    About The Author

    Kelvin Omereshone is the CTO at Quru Lab. Kelvin was formerly a Front-end engineer at myPadi.ng. He’s the creator of Nuxtjs Africa community and very passionate …
    More about
    Kelvin

    In this article, we will learn how to write automated tests on web APIs with Postman. In order to follow along with this tutorial, you’ll need at least a fair amount of familiarity with Postman.

    One of my favorite features in Postman is the ability to write automated tests for my APIs. So if you are like me and you use Postman and you are tired of manually testing your APIs, this article will show how to harness the test automation feature provided by Postman.

    In case you don’t know what Postman is or you are entirely new to it, I recommend checking out the Postman getting started documentation page and then coming back to this article to learn how to automate testing your API with Postman.

    APIs, or web APIs, pretty much drive most user-facing digital products. With that said, being able to test these APIs easily and efficiently, whether you are a back-end or front-end developer, will allow you to move quickly in your development lifecycle.

    Postman allows you to manually test your APIs in both its desktop and web-based applications. However, it also has the ability for you to automate these tests by writing JavaScript assertions on your API endpoints.

    Why You Should Automate API Tests

    Testing in software development is used to ascertain the quality of any piece of software. If you are building APIs as a backend for a single frontend application or you are building APIs to be consumed by several services and clients, it’s important that the APIs work as expected.

    Setting up automated API tests to test the different endpoints in your API will help catch bugs as quickly as possible.

    It will also allow you to move quickly and add new features because you can simply run the test cases to see if you break anything along the way.

    Steps To Automating API Tests

    When writing API tests in Postman, I normally take a four-step approach:

    1. Manually testing the API;
    2. Understand the response returned by the API;
    3. Write the automated test;
    4. Repeat for each endpoint on the API.

    For this article, I have a Node.js web service powered by Sails.js that exposes the following endpoints:

    • / — the home of the API.
    • /user/signup — signs up a new user.
    • /user/signin — signs in an existing user.
    • /listing/new — creates a new listing (a listing holds the details of a property owned by the user) for an existing user.

    I have created and organized the endpoints for the demo service we will be using in this article in a Postman collection, so you can quickly import the collection into Postman and follow along.

    So let’s follow my four steps to automating API tests in Postman.

    1. Test The API Manually

    I will open Postman and switch over to a workspace I created called demo, which has the postman-test-demo-service collection. You will also have access to the collection if you imported it from above. So my Postman would look like this:

    Postman with the `postman-test-demo-service` collection

    Our first test is to test the home endpoint (/) of the API. So I open the request in the sidebar called home. You can see it’s a GET request, and by simply pressing Enter, I send a GET request to the web service to see what it responds with. The image below shows that response:

    response to a GET request

    2. Understand The Response Returned By The API

    If you are following along (and also from the screenshot above), you will see that the response came back with a status code of 200 OK and a JSON body with a message property whose value is “You have reached postman test demo web service”.

    Knowing this is the expected response of the / endpoint on our service, we can proceed to step 3 — writing the actual automated test.

    3. Write The Automated Test

    Postman comes out of the box with a powerful runtime based on Node.js, which gives its users the ability to write scripts in JavaScript.

    In Postman, you add scripts to be executed during two events in the Postman workflow:

    • Before you make a request.
      These scripts are called pre-request scripts, and you can write them under the Pre-request Script tab.
    • After you’ve received a response from the request you made.
      These scripts are called test scripts, and this set of scripts is our focus in this article. You write test scripts under the Tests tab in a Postman request.

    The image below shows the Tests tab opened in Postman:

    The image shows the Tests tab opened in Postman

    If you look to your right in the already opened request’s Tests tab, you will notice a list of snippets available to quickly get you started writing tests. Most of the time, these snippets are sufficient for quite a number of test scenarios. So I will select the snippet titled Status code: Code is 200. This will generate the code below in the Tests editor:

    pm.test("Status code is 200", function () {
        pm.response.to.have.status(200);
    });

    Here is how Postman looks after clicking on that test snippet:

    How Postman looks after clicking on a test snippet

    If you’ve written any tests in JavaScript using one of the testing frameworks out there, like Jest, then the snippet above will seem familiar. But let me explain: every Postman test suite or scenario begins with the test() function, which is exposed on the pm (short for Postman) global object provided by Postman for you. The test method takes two arguments: the first is the test description, which in our test suite above reads “Status code is 200”; the second is a callback function. It’s in this function that you make your assertions or verifications of the response for the particular request being tested.

    You will notice we have a single assertion right now, but you can have as many as you want. However, I like keeping assertions in separate tests most of the time, as sketched below.

    Our assertion above simply asks Postman if the response returned has a status code of 200. You can see how it reads like English. This is intentional, in order to allow anyone to write these tests with ease.
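
    For illustration, and purely as a sketch using the same pm assertions this article relies on, here is how grouped versus separate assertions compare:

    // Several assertions in one test: the first failure fails the whole test…
    pm.test("Response is OK and JSON", function () {
        pm.response.to.have.status(200);
        pm.response.to.be.json;
    });

    // …whereas separate tests report each check independently.
    pm.test("Status code is 200", function () {
        pm.response.to.have.status(200);
    });
    pm.test("Response has a JSON body", function () {
        pm.response.to.be.json;
    });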

    Running Our Test

    To run our test, we will send a request to the endpoint again. Only this time around, when Postman receives the response from that request, it will run your tests. Below is an image showing the passing test in Postman (you can access test results on the Test Results tab of the response section in Postman):

    an image showing the passing test in Postman

    So our test passed! However, it’s crucial that we make our test scenario fail first, and then make it pass; this will help make sure that the scenario you are testing is not affected by any external factor, and that the test passes for the reason you are expecting it to pass — not something else. A good test should be predictable and the end result should be known beforehand.

    To make our test fail, I will make a typo in the URL we are currently sending the GET request to. This will lead to a 404 Not Found status code, which will make our test fail. Let’s do this. In the address bar, which currently holds our baseUrl variable, I will append /a to it (it could be anything random, actually). Making the request again, our test will fail, as seen below:

    fail test

    Removing the string /a will make the test pass again.

    We have written an automated test for the home route of our demo web service. At the moment, we have a test case checking the status of the response. Let’s write another test case checking that the response body contains a message property, as we saw in the response, and that its value is ‘You have reached postman test demo web service’. Add the below code snippet to the Tests editor:

    pm.test("Contains a message property", function() {
        let jsonData = pm.response.json();
        pm.expect(jsonData.message).to.eql("You have reached postman test demo web service");
    })

    Your Postman window should look like this:

    postman window

    In the snippet above, we create a test case and get the JavaScript object equivalent of the response body (which is originally JSON) by calling json() on it. Then we use the expect assertion method to check whether the message property has a value of “You have reached postman test demo web service.”
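
    For illustration only, Postman’s pm API supports plenty of other assertions in the same style, such as headers and response time (the 500 ms threshold below is an arbitrary assumption, not a value from this article’s service):

    pm.test("Content-Type header is present", function () {
        pm.response.to.have.header("Content-Type");
    });

    pm.test("Response time is acceptable", function () {
        // responseTime is in milliseconds; pick a threshold that suits your API.
        pm.expect(pm.response.responseTime).to.be.below(500);
    });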

    4. Repeat!

    I believe that from the first iteration of our four steps to writing API tests above, you’ve seen the flow. We will now repeat this flow to test all requests in the demo web service. Next up is the signup request. Let’s test on!

    Signup Request

    The signup request is a POST request expecting the fullName, emailAddress, and password of a user. In Postman, you can add these parameters in many ways; however, we will opt for the x-www-form-urlencoded method in the Body tab of the request section. The image below gives an example of the parameters:

    an example of the parameters

    Here is the response with the above request:

    {
        "message": "An account has been created for kelvinomereshone@gmail.com successfully",
        "data": {
            "createdAt": 1596634791698,
            "updatedAt": 1596634791698,
            "id": "9fa2e648-1db5-4ea9-89a1-3be5bc73cb34",
            "emailAddress": "kelvinomereshone@gmail.com",
            "fullName": "Kelvin Omereshone"
        },
        "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJrZWx2aW5vbWVyZXNob25lQGdtYWlsLmNvbSIsImlzcyI6Ik15UGFkaSBCYWNrZW5kIiwiaWF0IjoxNTk2NjM0NzkxfQ.otCcXSmhP4mNWAHnrYvvzHkgU8yX8yRE5rcVtmGJ68k"
    }

    From the above response body, you will notice that a token property is returned with the response body. So we will write a test case asserting that a JSON response body was returned and that it contains the token property. We will also check the status code, which should be 201 Created. So open the Tests tab and add the following snippets:

    pm.test("Status code is 201", function () {
        pm.response.to.have.status(201);
    });
    
    
    pm.test("Response has a JSON body", function () {
        pm.response.to.be.json;
    });
    
    pm.test("Response has a token property", function () {
        var jsonData = pm.response.json();
        pm.expect(jsonData.token).to.be.a('string');
    });
    

    What each test case does should be obvious enough from its description. From top to bottom, we check that the response has a 201 Created status code, assert that the response body is JSON, and lastly assert that the token property has a value of type string. Let’s run our tests.

    Note: Make sure you change at least the email address of the new user as the Web service won’t allow for duplicate emails.

    Our tests should pass, and when you check the Test Results tab of the Response section, you should see 3 passing tests, as shown below:

    3 passing tests

    Let’s move onward to testing the signin endpoint…

    Signin Request

    The signin request’s response body is similar to the signup request’s. You can verify that by hitting the endpoint with the user credentials (emailAddress and password) that you signed up with already. After you have done that, add the following test cases to the Tests editor:

    pm.test("Status code is 200", function () {
        pm.response.to.have.status(200);
    });
    
    
    pm.test("Response has a JSON body", function () {
        pm.response.to.be.json;
    });
    
    pm.test("Response has a token property", function () {
        var jsonData = pm.response.json();
        pm.expect(jsonData.token).to.be.a('string');
    });
    
    
    
    pm.test("Response has a data property", function () {
        var jsonData = pm.response.json();
        pm.expect(jsonData.data).to.be.a('object');
    });

    Make the request to signin with valid user credentials, and your tests should pass; Postman should look like so:

    Postman with a passed test

    Finally, we will test the listing/new endpoint of our demo API.

    Listing/New Request

    This test will be a little different. According to the requirements of our fictitious API, only logged-in users can create listings. So we need a way to authenticate the request.

    Recall that a JWT token was returned when signing in. We can use that token as the authorization header for the create-listing request.
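
    As an aside, and purely a convenience sketch rather than part of this article’s flow, you could avoid copying the token by hand: a test script on the signin request can store it in an environment variable, which later requests can reference as {{authToken}} in their Authorization settings. The manual approach follows below.

    // In the signin request's Tests tab: stash the token for later requests.
    pm.test("Save auth token for subsequent requests", function () {
        var jsonData = pm.response.json();
        pm.environment.set("authToken", jsonData.token);
    });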

    To do this in Postman, simply copy the token you got from signing in, go over to the Authorization tab of the request section in Postman, and select Bearer Token from the Type dropdown. You can then paste the token in the box to your right, labeled Token. The new request’s Authorization tab should look like this:

    the new request Authorization tab

    You can then go on and add the parameters in the Body tab of the request. You will notice the field names are already there, with sample values that you can choose to edit or not. Let’s make a request first, before writing our test. A successful response will look like so:

    {
        "message": "New listing created successfully",
        "data": {
            "createdAt": 1596637153470,
            "updatedAt": 1596637153470,
            "id": "41d922ce-7326-43eb-93c8-31658c59e45d",
            "name": "Glorious Lounge",
            "type": "Hotel",
            "address": "No 1. Something street",
            "rent": "$100k per year",
            "lister": "9fa2e648-1db5-4ea9-89a1-3be5bc73cb34"
        }
    }

    We can see that we get a JSON response body back. We can test for that and also make sure data is not empty. Add the following test cases to the Tests tab:

    pm.test("Status code is 200", function () {
        pm.response.to.have.status(200);
    });
    
    
    pm.test("Response has a JSON body", function () {
        pm.response.to.be.json;
    });
    
    pm.test("Response has a message property", function () {
        var jsonData = pm.response.json();
        pm.expect(jsonData.message).to.be.a('string');
    });
    
    
    
    pm.test("Response has a data property", function () {
        var jsonData = pm.response.json();
        pm.expect(jsonData.data).to.be.a('object');
    });

    With that added, make another request, and the tests should all pass, as shown below:

    Passed tests

    Conclusion

    This article aimed to show you how to use Postman to write automated tests for your APIs, allowing you to bridge the gap between development and quality assurance and to minimize the surface area for bugs in your API.


    How To Test Your React Apps With The React Testing Library — Smashing Magazine

    07/03/2020


    Testing gives confidence in written code. In the context of this article, “testing” means “automated testing”. Without automated testing, it is significantly harder to ensure the quality of a web application of significant complexity, and skipping it tends to let more bugs reach production. In this article, we’re going to show how React developers can quickly start testing their app with the React Testing Library (RTL).

    Today, we’ll briefly discuss why it’s important to write automated tests for any software project, and shed light on some of the common types of automated testing. We’ll build a to-do list app by following the Test-Driven Development (TDD) approach. I’ll show you how to write both unit and functional tests, and in the process, explain what code mocks are by mocking a few libraries. I’ll be using a combination of RTL and Jest — both of which come pre-installed in any new project created with Create-React-App (CRA).

    To follow along, you need to know how to set up and navigate a new React project and how to work with the yarn package manager (or npm). Familiarity with Axios and React-Router is also required.


    Why You Should Test Your Code

    Before shipping your software to end-users, you first have to confirm that it is working as expected. In other words, the app should satisfy its project specifications.

    Just as it is important to test our project as a whole before shipping it to end-users, it’s also essential to keep testing our code during the lifetime of a project. This is necessary for a number of reasons. We may make updates to our application or refactor some parts of our code. A third-party library may undergo a breaking change. Even the browser that is running our web application may undergo breaking changes. In some cases, something stops working for no apparent reason — things could go wrong unexpectedly. Thus, it is necessary to test our code regularly for the lifetime of a project.

    Broadly speaking, there are manual and automated software tests. In a manual test, a real user performs actions on our application to verify that it works correctly. This kind of test is less reliable when repeated several times, because it’s easy for the tester to miss some details between test runs.

    In an automated test, however, a test script is executed by a machine. With a test script, we can be sure that whatever details we set in the script will remain unchanged on every test run.

    This kind of test gives us the benefits of being predictable and fast, such that we can quickly find and fix bugs in our code.

    Having seen the necessity of testing our code, the next logical question is, what sort of automated tests should we write for our code? Let’s quickly go over a few of them.

    Types Of Automated Testing

    There are many different types of automated software testing. Some of the most common ones are unit tests, integration tests, functional tests, end-to-end tests, acceptance tests, performance tests, and smoke tests.

    1. Unit test
      In this kind of test, the goal is to verify that each unit of our application, considered in isolation, is working correctly. An example would be testing that a particular function returns an expected value, given some known inputs; see the sketch just after this list. We’ll see several examples in this article.
    2. Smoke test
      This kind of test is done to check that the system is up and running. For example, in a React app, we could just render our main app component and call it a day. If it renders correctly, we can be fairly certain that our app would render in the browser.
    3. Integration test
      This sort of test is carried out to verify that two or more modules can work well together. For example, you might run a test to verify that your server and database are actually communicating correctly.
    4. Functional test
      A functional test exists to verify that the system meets its functional specification. We’ll see an example later.
    5. End-to-end test
      This kind of test involves testing the application the same way it would be used in the real world. You can use a tool like Cypress for E2E tests.
    6. Acceptance test
      This is usually done by the business owner to verify that the system meets specifications.
    7. Performance test
      This sort of testing is carried out to see how the system performs under significant load. In frontend development, this is usually about how fast the app loads on the browser.
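
    To make the first item concrete, here is a minimal Jest unit test against a hypothetical sum helper (not part of any app in this article): known inputs go in, and we assert on the expected output.

    // A hypothetical helper under test.
    const sum = (a, b) => a + b;
    
    test("sum returns the sum of its arguments", () => {
      expect(sum(2, 3)).toBe(5);
    });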

    There’s more here if you’re interested.

    Why Use React Testing Library?

    When it comes to testing React applications, there are a few testing options available, of which the most common ones I know of are Enzyme and React Testing Library (RTL).

    RTL is part of the @testing-library family of packages. Its philosophy is very simple: your users don’t care whether you use Redux or context for state management. They care neither about the simplicity of hooks nor about the distinction between class and functional components. They just want your app to work in a certain way. It is, therefore, no surprise that the testing library’s primary guiding principle is:

    “The more your tests resemble the way your software is used, the more confidence they can give you.”

    So, whatever you do, have the end-user in mind and test your app just as they would use it.

    Choosing RTL gives you a number of advantages. First, it’s much easier to get started with it. Every new React project bootstrapped with CRA comes with RTL and Jest configured. The React docs also recommend it as the testing library of choice. Lastly, the guiding principle makes a lot of sense — functionality over implementation details.

    With that out of the way, let’s get started with building a to-do list app, following the TDD approach.

    Project Setup

    Open a terminal and copy and run the below command.

    # start new react project and start the server
    npx create-react-app start-rtl && cd start-rtl && yarn start

    This should create a new React project and start the server on http://localhost:3000. With the project running, open a separate terminal, run yarn test and then press a. This runs all tests in the project in watch mode. Running the test in watch mode means that the test will automatically re-run when it detects a change in either the test file or the file that is being tested. On the test terminal, you should see something like the picture below:

    Initial test passing.

    You should see a lot of green, which indicates that the test we ran passed with flying colors.

    As I mentioned earlier, CRA sets up RTL and Jest for every new React project. It also includes a sample test. This sample test is what we just executed.

    When you run the yarn test command, react-scripts calls upon Jest to execute the test. Jest is a JavaScript testing framework that’s used in running tests. You won’t find it listed in package.json but you can do a search inside yarn.lock to find it. You can also see it in node_modules/.

    Jest is incredible in the range of functionality it provides: tools for assertions, mocking, spying, and more. I strongly encourage you to take at least a quick tour of the documentation; there is more to learn there than I can cover in this short piece. We’ll be using Jest a lot in the coming sections.

    Open package.json and let’s see what we have there. The section of interest is dependencies.

      "dependencies": {
        "@testing-library/jest-dom": "^4.2.4",
        "@testing-library/react": "^9.3.2",
        "@testing-library/user-event": "^7.1.2",
        ...
      },

    We have the following packages installed specifically for testing purpose:

    1. @testing-library/jest-dom: provides custom DOM element matchers for Jest.
    2. @testing-library/react: provides the APIs for testing React apps.
    3. @testing-library/user-event: provides advanced simulation of browser interactions.

    Open up App.test.js and let’s take a look at its content.

    import React from 'react';
    import { render } from '@testing-library/react';
    import App from './App';
    
    test('renders learn react link', () => {
      const { getByText } = render(<App />);
      const linkElement = getByText(/learn react/i);
      expect(linkElement).toBeInTheDocument();
    });

    The render method of RTL renders the <App /> component and returns an object which is de-structured for the getByText query. This query finds elements in the DOM by their display text. Queries are the tools for finding elements in the DOM. The complete list of queries can be found here. All of the queries from the testing library are exported by RTL, in addition to the render, cleanup, and act methods. You can read more about these in the API section.

    The text is matched with the regular expression /learn react/i. The i flag makes the regular expression case-insensitive. We expect to find the text Learn React in the document.

    All of this mimics the behavior a user would experience in the browser when interacting with our app.
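
    As an aside, getByText is only one query in this family. Here is a quick sketch of two more, run against a hypothetical <Profile /> component that is not part of this tutorial: getByRole finds elements by their ARIA role, and queryByText returns null instead of throwing, which is handy for asserting that something is absent.

    const { getByRole, queryByText } = render(<Profile />);
    
    // getByRole throws if nothing with the role is found.
    expect(getByRole("heading")).toBeInTheDocument();
    
    // queryByText returns null on no match, so we can assert absence.
    expect(queryByText(/log out/i)).toBeNull();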

    Let’s start making the changes required by our app. Open App.js and replace the content with the below code.

    import React from "react";
    import "./App.css";
    function App() {
      return (
        <div className="App">
          <header className="App-header">
            <h2>Getting started with React testing library</h2>
          </header>
        </div>
      );
    }
    export default App;

    If you still have the test running, you should see the test fail. Perhaps you can guess why that is the case, but we’ll return to it a bit later. Right now I want to refactor the test block.

    Replace the test block in src/App.test.js with the code below:

    // use the describe/it pattern
    describe("<App />", () => {
      it("Renders <App /> component correctly", () => {
        const { getByText } = render(<App />);
        expect(getByText(/Getting started with React testing library/i)).toBeInTheDocument();
      });
    });

    This refactor makes no material difference to how our test runs. I prefer the describe and it pattern because it allows me to structure my test file into logical blocks of related tests. The test should re-run, and this time it will pass. In case you haven’t guessed it, the fix for the failing test was to replace the learn react text with Getting started with React testing library.

    In case you don’t have time to write your own styles, you can copy the ones below into App.css.

    .App {
      min-height: 100vh;
      text-align: center;
    }
    .App-header {
      height: 10vh;
      display: flex;
      background-color: #282c34;
      flex-direction: column;
      align-items: center;
      justify-content: center;
      font-size: calc(10px + 2vmin);
      color: white;
    }
    .App-body {
      width: 60%;
      margin: 20px auto;
    }
    ul {
      padding: 0;
      display: flex;
      list-style-type: decimal;
      flex-direction: column;
    }
    li {
      font-size: large;
      text-align: left;
      padding: 0.5rem 0;
    }
    li a {
      text-transform: capitalize;
      text-decoration: none;
    }
    .todo-title {
      text-transform: capitalize;
    }
    .completed {
      color: green;
    }
    .not-completed {
      color: red;
    }

    You should already see the page title move up after adding this CSS.

    I consider this a good point to commit my changes and push to GitHub. The corresponding branch is 01-setup.

    Let’s continue with our project setup. We know we’re going to need some navigation in our app so we need React-Router. We’ll also be making API calls with Axios. Let’s install both.

    # install react-router-dom and axios
    yarn add react-router-dom axios

    Most React apps you’ll build will have to maintain state. There are a lot of libraries available for managing state, but for this tutorial, I’ll be using React’s context API and the useContext hook. So let’s set up our app’s context.

    Create a new file src/AppContext.js and enter the below content.

    import React from "react";
    export const AppContext = React.createContext({});
    
    export const AppProvider = ({ children }) => {
      const reducer = (state, action) => {
        switch (action.type) {
          case "LOAD_TODOLIST":
            return { ...state, todoList: action.todoList };
          case "LOAD_SINGLE_TODO":
            return { ...state, activeToDoItem: action.todo };
          default:
            return state;
        }
      };
      const [appData, appDispatch] = React.useReducer(reducer, {
        todoList: [],
        activeToDoItem: { id: 0 },
      });
      return (
        <AppContext.Provider value={{ appData, appDispatch }}>
          {children}
        </AppContext.Provider>
      );
    };

    Here we create a new context with React.createContext({}), whose initial value is an empty object. We then define an AppProvider component that accepts a children prop and wraps those children in AppContext.Provider, thus making the { appData, appDispatch } object available to all children anywhere in the render tree.

    Our reducer function defines two action types.

    1. LOAD_TODOLIST which is used to update the todoList array.
    2. LOAD_SINGLE_TODO which is used to update activeToDoItem.

    appData and appDispatch are both returned from the useReducer hook. appData gives us access to the values in the state while appDispatch gives us a function which we can use to update the app’s state.
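
    To make the wiring concrete, here is a sketch of a consumer: a hypothetical TodoCount component that we won’t actually build, reading state through appData and updating it through appDispatch.

    import React from "react";
    import { AppContext } from "./AppContext";
    
    // Hypothetical consumer: shows the todo count and can clear the list.
    export const TodoCount = () => {
      const { appData, appDispatch } = React.useContext(AppContext);
      return (
        <button
          onClick={() => appDispatch({ type: "LOAD_TODOLIST", todoList: [] })}
        >
          {appData.todoList.length} todos (click to clear)
        </button>
      );
    };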

    Now open index.js, import the AppProvider component and wrap the <App /> component with <AppProvider />. Your final code should look like what I have below.

    import { AppProvider } from "./AppContext";
    
    ReactDOM.render(
      <React.StrictMode>
        <AppProvider>
          <App />
        </AppProvider>
      </React.StrictMode>,
      document.getElementById("root")
    );

    Wrapping <App /> inside <AppProvider /> makes AppContext available to every child component in our app.

    Remember that with RTL, the aim is to test our app the same way a real user would interact with it. This implies that we also want our tests to interact with our app state. For that reason, we also need to make our <AppProvider /> available to our components during tests. Let’s see how to make that happen.

    The render method provided by RTL is sufficient for simple components that don’t need to maintain state or use navigation, but most apps require at least one of the two. For this reason, RTL provides a wrapper option: with it, we can wrap the UI rendered by the test renderer in any component we like, thus creating a custom render. Let’s create one for our tests.

    Create a new file src/custom-render.js and paste the following code.

    import React from "react";
    import { render } from "@testing-library/react";
    import { MemoryRouter } from "react-router-dom";
    
    import { AppProvider } from "./AppContext";
    
    const Wrapper = ({ children }) => {
      return (
        <AppProvider>
          <MemoryRouter>{children}</MemoryRouter>
        </AppProvider>
      );
    };
    
    const customRender = (ui, options) =>
      render(ui, { wrapper: Wrapper, ...options });
    
    // re-export everything
    export * from "@testing-library/react";
    
    // override render method
    export { customRender as render };

    Here we define a <Wrapper /> component that accepts some children. It then wraps those children inside <AppProvider /> and <MemoryRouter />. MemoryRouter is:

    A <Router> that keeps the history of your “URL” in memory (does not read or write to the address bar). Useful in tests and non-browser environments like React Native.

    We then create our render function, providing it the Wrapper we just defined through its wrapper option. The effect of this is that any component we pass to the render function is rendered inside <Wrapper />, thus having access to navigation and our app’s state.

    The next step is to export everything from @testing-library/react. Lastly, we export our custom render function as render, thus overriding the default render.

    Note that even if you were using Redux for state management the same pattern still applies.
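
    For illustration, here is a sketch of what that wrapper might look like in a Redux app, assuming react-redux is installed and a ./store module exists; none of this belongs to our project.

    import React from "react";
    import { Provider } from "react-redux";
    import { MemoryRouter } from "react-router-dom";
    import { store } from "./store"; // assumed to exist in a Redux app
    
    // Same idea: wrap the UI under test with the providers it needs.
    const ReduxWrapper = ({ children }) => (
      <Provider store={store}>
        <MemoryRouter>{children}</MemoryRouter>
      </Provider>
    );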

    Let’s now make sure our new render function works. Import it into src/App.test.js and use it to render the <App /> component.

    Open App.test.js and replace the import line. This

    import { render } from '@testing-library/react';

    should become

    import { render } from './custom-render';

    Does the test still pass? Good job.

    There’s one small change I want to make before wrapping up this section. It gets tiring very quickly to have to write const { getByText } and other queries every time. So, I’m going to be using the screen object from the DOM testing library henceforth.

    Import the screen object from our custom render file and replace the describe block with the code below.

    import { render, screen } from "./custom-render";
    
    describe("<App />", () => {
      it("Renders <App /> component correctly", () => {
        render(<App />);
        expect(
          screen.getByText(/Getting started with React testing library/i)
        ).toBeInTheDocument();
      });
    });

    We’re now accessing the getByText query from the screen object. Does your test still pass? I’m sure it does. Let’s continue.

    If your tests don’t pass you may want to compare your code with mine. The corresponding branch at this point is 02-setup-store-and-render.

    Testing And Building The To-Do List Index Page

    In this section, we’ll pull to-do items from http://jsonplaceholder.typicode.com/. Our component specification is very simple. When a user visits our app homepage,

    1. show a loading indicator that says Fetching todos while waiting for the response from the API;
    2. display the titles of the first 15 to-do items on the screen once the API call returns with a 200 status. Each item’s title should be a link that leads to the to-do details page.

    Following a test-driven approach, we’ll write our test before implementing the component logic. Before doing that we’ll need to have the component in question. So go ahead and create a file src/TodoList.js and enter the following content:

    import React from "react";
    import "./App.css";
    export const TodoList = () => {
      return (
        <div>
        </div>
      );
    };

    Since we know the component specification, we can test it in isolation before incorporating it into our main app. I believe it’s up to the developer at this point to decide how to handle this. One reason to test a component in isolation is so that you don’t accidentally break any existing test and then have to fight fires in two locations. With that out of the way, let’s write the test.

    Create a new file src/TodoList.test.js and enter the below code:

    import React from "react";
    import axios from "axios";
    import { render, screen, waitForElementToBeRemoved } from "./custom-render";
    import { TodoList } from "./TodoList";
    import { todos } from "./makeTodos";
    
    describe("<App />", () => {
      it("Renders <TodoList /> component", async () => {
        render(<TodoList />);
        await waitForElementToBeRemoved(() => screen.getByText(/Fetching todos/i));
    
        expect(axios.get).toHaveBeenCalledTimes(1);
        todos.slice(0, 15).forEach((td) => {
          expect(screen.getByText(td.title)).toBeInTheDocument();
        });
      });
    });

    Inside our test block, we render the <TodoList /> component and use the waitForElementToBeRemoved function to wait for the Fetching todos text to disappear from the screen. Once this happens we know that our API call has returned. We also check that an Axios get call was fired once. Finally, we check that each to-do title is displayed on the screen. Note that the it block receives an async function. This is necessary for us to be able to use await inside the function.

    Each to-do item returned by the API has the following structure.

    {
      id: 0,
      userId: 0,
      title: 'Some title',
      completed: true,
    }

    We want to return an array of these when we

    import { todos } from "./makeTodos"

    The only condition is that each id should be unique.

    Create a new file src/makeTodos.js and enter the below content. This is the source of todos we’ll use in our tests.

    const makeTodos = (n) => {
      // returns n number of todo items
      // default is 15
      const num = n || 15;
      const todos = [];
      for (let i = 0; i < num; i++) {
        todos.push({
          id: i,
          userId: i,
          title: `Todo item ${i}`,
          completed: [true, false][Math.floor(Math.random() * 2)],
        });
      }
      return todos;
    };
    
    export const todos = makeTodos(200);

    This function simply generates a list of n to-do items (15 by default). The completed value is set by randomly choosing between true and false.

    Unit tests are supposed to be fast. They should run within a few seconds. Fail fast! This is one of the reasons why letting our tests make actual API calls is impractical. To avoid this we mock such unpredictable API calls. Mocking simply means replacing a function with a fake version, thus allowing us to customize the behavior. In our case, we want to mock the get method of Axios to return whatever we want it to. Jest already provides mocking functionality out of the box.
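
    As a quick standalone illustration of what Jest gives us out of the box (a sketch, not part of our test suite): a jest.fn() records its calls and lets us script its behavior.

    // A jest.fn() records calls and can fake async results.
    const fetchUser = jest.fn().mockResolvedValue({ id: 1, name: "Ada" });
    
    test("jest.fn records calls and fakes results", async () => {
      const user = await fetchUser(1);
      expect(user.name).toBe("Ada");
      expect(fetchUser).toHaveBeenCalledWith(1);
    });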

    Let’s now mock Axios so it returns this list of to-dos when we make the API call in our test. Create a file src/__mocks__/axios.js and enter the below content:

    import { todos } from "../makeTodos";
    
    export default {
      get: jest.fn().mockImplementation((url) => {
        switch (url) {
          case "https://jsonplaceholder.typicode.com/todos":
            return Promise.resolve({ data: todos });
          default:
            throw new Error(`UNMATCHED URL: ${url}`);
        }
      }),
    };

    When the tests run, Jest automatically finds this __mocks__ folder and, instead of using the actual Axios from node_modules/, uses this one. At this point, we’re only mocking the get method using Jest’s mockImplementation method. Similarly, we can mock other Axios members like post, patch, interceptors, and defaults. Right now they’re all undefined, and any attempt to access axios.post, for example, would result in an error.

    Note that we can customize what to return based on the URL the Axios call receives. Also, Axios calls return a promise which resolves to the actual data we want, so we return a promise with the data we want.
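
    If the app later made POST requests too, the mock could grow a post method alongside get. A sketch under that assumption (the echoed payload and fake id are invented for illustration):

    import { todos } from "../makeTodos";
    
    export default {
      get: jest.fn().mockImplementation((url) => {
        switch (url) {
          case "https://jsonplaceholder.typicode.com/todos":
            return Promise.resolve({ data: todos });
          default:
            throw new Error(`UNMATCHED URL: ${url}`);
        }
      }),
      // Hypothetical: resolve with the request body plus a fake id.
      post: jest.fn().mockImplementation((url, body) =>
        Promise.resolve({ data: { id: 201, ...body } })
      ),
    };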

    At this point, we have one passing test and one failing test. Let’s implement the component logic.

    Open src/TodoList.js and let’s build out the implementation piece by piece. Start by replacing its contents with the code below.

    import React from "react";
    import axios from "axios";
    import { Link } from "react-router-dom";
    import "./App.css";
    import { AppContext } from "./AppContext";
    
    export const TodoList = () => {
      const [loading, setLoading] = React.useState(true);
      const { appData, appDispatch } = React.useContext(AppContext);
    
      React.useEffect(() => {
        axios.get("https://jsonplaceholder.typicode.com/todos").then((resp) => {
          const { data } = resp;
          appDispatch({ type: "LOAD_TODOLIST", todoList: data });
          setLoading(false);
        });
      }, [appDispatch, setLoading]);
    
      return (
        <div>
          {/* next code block goes here */}
        </div>
      );
    };

    We import AppContext and de-structure appData and appDispatch from the return value of React.useContext. We then make the API call inside a useEffect block. Once the API call returns, we set the to-do list in state by firing the LOAD_TODOLIST action. Finally, we set the loading state to false to reveal our to-dos.

    Now enter the final piece of code.

    {loading ? (
      <p>Fetching todos</p>
    ) : (
      <ul>
        {appData.todoList.slice(0, 15).map((item) => {
          const { id, title } = item;
          return (
            <li key={id}>
              <Link to={`/item/${id}`} data-testid={id}>
                {title}
              </Link>
            </li>
          );
        })}
      </ul>
    )}
    

    We slice appData.todoList to get the first 15 items. We then map over those and render each one in a <Link /> tag so we can click on it and see the details. Note the data-testid attribute on each Link. This should be a unique ID that will aid us in finding individual DOM elements. In a case where we have similar text on the screen, we should never have the same ID for any two elements. We’ll see how to use this a bit later.

    My tests now pass. Do yours? Great.

    Let’s now incorporate this component into our render tree. Open up App.js and let’s do that.

    First things first: add some imports.

    import { BrowserRouter, Route } from "react-router-dom";
    import { TodoList } from "./TodoList";

    We need BrowserRouter for navigation and Route for rendering each component in each navigation location.

    Now add the below code after the <header /> element.

    <div className="App-body">
      <BrowserRouter>
        <Route exact path="/" component={TodoList} />
      </BrowserRouter>
    </div>

    This simply tells the browser to render the <TodoList /> component when we’re at the root location, /. Once this is done, our tests still pass, but you should see some error messages in your console warning you that an update to a component was not wrapped in act(...). You should also see that the <TodoList /> component seems to be the culprit here.

    Terminal showing act warnings.

    Since we’re sure that our TodoList component by itself is okay, we have to look at the App component, inside which the <TodoList /> component is rendered.

    This warning may seem complex at first but it is telling us that something is happening in our component that we’re not accounting for in our test. The fix is to wait for the loading indicator to be removed from the screen before we proceed.

    Open up App.test.js and update the code to look like so:

    import React from "react";
    import { render, screen, waitForElementToBeRemoved } from "./custom-render";
    import App from "./App";
    describe("<App />", () => {
      it("Renders <App /> component correctly", async () => {
        render(<App />);
        expect(
          screen.getByText(/Getting started with React testing library/i)
        ).toBeInTheDocument();
        await waitForElementToBeRemoved(() => screen.getByText(/Fetching todos/i));
      });
    });

    We’ve made two changes. First, we changed the function in the it block to an async function, a necessary step to allow us to use await in the function body. Secondly, we wait for the Fetching todos text to be removed from the screen. And voilà! The warning is gone. Phew! I strongly advise that you bookmark this post by Kent C. Dodds for more on this act warning. You’re gonna need it.

    Now open the page in your browser and you should see the list of to-dos. You can click on an item if you like, but it won’t show you anything because our router doesn’t yet recognize that URL.

    For comparison, the branch of my repo at this point is 03-todolist.

    Let’s now add the to-do details page.

    Testing And Building The Single To-Do Page

    To display a single to-do item we’ll follow a similar approach. The component specification is simple. When a user navigates to a to-do page:

    1. display a loading indicator that says Fetching todo item id where id represents the to-do’s id, while the API call to https://jsonplaceholder.typicode.com/todos/item_id runs.
    2. When the API call returns, show the following information:
      • Todo item title
      • Added by: userId
      • This item has been completed if the to-do has been completed or
      • This item is yet to be completed if the to-do has not been completed.

    Let’s start with the component. Create a file src/TodoItem.js and add the following content.

    import React from "react";
    import { useParams } from "react-router-dom";
    
    import "./App.css";
    
    export const TodoItem = () => {
      const { id } = useParams()
      return (
        <div className="single-todo-item">
        </div>
      );
    };

    The only thing new to us in this file is the const { id } = useParams() line. This is a hook from react-router-dom that lets us read URL parameters. This id is going to be used in fetching a to-do item from the API.

    This situation is a bit different because we’re going to be reading the id from the location URL. We know that when a user clicks a to-do link, the id will show up in the URL which we can then grab using the useParams() hook. But here we’re testing the component in isolation which means that there’s nothing to click, even if we wanted to. To get around this we’ll have to mock react-router-dom, but only some parts of it. Yes. It’s possible to mock only what we need to. Let’s see how it’s done.

    Create a new mock file src/__mocks__/react-router-dom.js. Now paste in the following code:

    module.exports = {
      ...jest.requireActual("react-router-dom"),
      useParams: jest.fn(),
    };

    By now you should have noticed that when mocking a module we have to use the exact module name as the mock file name.

    Here, we use the module.exports syntax because react-router-dom has mostly named exports. (I haven’t come across any default export since I’ve been working with it. If there are any, kindly share with me in the comments). This is unlike Axios where everything is bundled as methods in one default export.

    We first spread the actual react-router-dom, then replace the useParams hook with a Jest mock function. Since this is a Jest function, we can modify it anytime we want. Keep in mind that we’re only mocking the part we need to, because if we mocked everything, we’d lose the implementation of MemoryRouter, which our custom render function uses.

    Let’s start testing!

    Now create src/TodoItem.test.js and enter the below content:

    import React from "react";
    import axios from "axios";
    import { render, screen, waitForElementToBeRemoved } from "./custom-render";
    import { useParams, MemoryRouter } from "react-router-dom";
    import { TodoItem } from "./TodoItem";
    
    describe("<TodoItem />", () => {
      it("can tell mocked from unmocked functions", () => {
        expect(jest.isMockFunction(useParams)).toBe(true);
        expect(jest.isMockFunction(MemoryRouter)).toBe(false);
      });
    });

    Just like before, we have all our imports. The describe block then follows. Our first case is only there as a demonstration that we’re only mocking what we need to. Jest’s isMockFunction can tell whether a function is mocked or not. Both expectations pass, confirming the fact that we have a mock where we want it.

    Add the below test case for when a to-do item has been completed.

      it("Renders <TodoItem /> correctly for a completed item", async () => {
        useParams.mockReturnValue({ id: 1 });
        render(<TodoItem />);
    
        await waitForElementToBeRemoved(() =>
          screen.getByText(/Fetching todo item 1/i)
        );
    
        expect(axios.get).toHaveBeenCalledTimes(1);
        expect(screen.getByText(/todo item 1/)).toBeInTheDocument();
        expect(screen.getByText(/Added by: 1/)).toBeInTheDocument();
        expect(
          screen.getByText(/This item has been completed/)
        ).toBeInTheDocument();
      });

    The very first thing we do is mock the return value of useParams. We want it to return an object with an id property whose value is 1. When this is parsed in the component, we end up with the URL https://jsonplaceholder.typicode.com/todos/1. Keep in mind that we have to add a case for this URL in our Axios mock, or it will throw an error. We will do that in just a moment.

    We now know for sure that calling useParams() will return the object { id: 1 } which makes this test case predictable.

    As with previous tests, we wait for the loading indicator, Fetching todo item 1 to be removed from the screen before making our expectations. We expect to see the to-do title, the id of the user who added it, and a message indicating the status.

    Open src/__mocks__/axios.js and add the following case to the switch block.

          case "https://jsonplaceholder.typicode.com/todos/1":
            return Promise.resolve({
              data: { id: 1, title: "todo item 1", userId: 1, completed: true },
            });

    When this URL is matched, a promise with a completed to-do is returned. Of course, this test case fails since we’re yet to implement the component logic. Go ahead and add a test case for when the to-do item has not been completed.

      it("Renders <TodoItem /> correctly for an uncompleted item", async () => {
        useParams.mockReturnValue({ id: 2 });
        render(<TodoItem />);
        await waitForElementToBeRemoved(() =>
          screen.getByText(/Fetching todo item 2/i)
        );
        expect(axios.get).toHaveBeenCalledTimes(2);
        expect(screen.getByText(/todo item 2/)).toBeInTheDocument();
        expect(screen.getByText(/Added by: 2/)).toBeInTheDocument();
        expect(
          screen.getByText(/This item is yet to be completed/)
        ).toBeInTheDocument();
      });

    This is the same as the previous case. The only difference is the ID of the to-do, the userId, and the completion status. When we enter the component, we’ll need to make an API call to the URL https://jsonplaceholder.typicode.com/todos/2. Go ahead and add a matching case statement to the switch block of our Axios mock.

    case "https://jsonplaceholder.typicode.com/todos/2":
      return Promise.resolve({
        data: { id: 2, title: "todo item 2", userId: 2, completed: false },
      });
    

    When the URL is matched, a promise with an uncompleted to-do is returned.

    Both test cases are failing. Now let’s add the component implementation to make them pass.

    Open src/TodoItem.js and update the code to the following:

    import React from "react";
    import axios from "axios";
    import { useParams } from "react-router-dom";
    import "./App.css";
    import { AppContext } from "./AppContext";
    
    export const TodoItem = () => {
      const { id } = useParams();
      const [loading, setLoading] = React.useState(true);
      const {
        appData: { activeToDoItem },
        appDispatch,
      } = React.useContext(AppContext);
    
      const { title, completed, userId } = activeToDoItem;
      React.useEffect(() => {
        axios
          .get(`https://jsonplaceholder.typicode.com/todos/${id}`)
          .then((resp) => {
            const { data } = resp;
            appDispatch({ type: "LOAD_SINGLE_TODO", todo: data });
            setLoading(false);
          });
      }, [id, appDispatch]);
      return (
        <div className="single-todo-item">
          {/* next code block goes here */}
        </div>
      );
    };
    

    As with the <TodoList /> component, we import AppContext. We read activeToDoItem from it, then we read the to-do’s title, userId, and completion status. After that, we make the API call inside a useEffect block. When the API call returns, we set the to-do in state by firing the LOAD_SINGLE_TODO action. Finally, we set our loading state to false to reveal the to-do details.

    Let’s add the final piece of code inside the return div:

    {loading ? (
      <p>Fetching todo item {id}</p>
    ) : (
      <div>
        <h2 className="todo-title">{title}</h2>
        <h4>Added by: {userId}</h4>
        {completed ? (
          <p className="completed">This item has been completed</p>
        ) : (
          <p className="not-completed">This item is yet to be completed</p>
        )}
      </div>
    )}

    Once this is done all tests should now pass. Yay! We have another winner.

    Our component tests now pass. But we still haven’t added it to our main app. Let’s do that.

    Open src/App.js and add the import line:

    import { TodoItem } from './TodoItem'

    Add the TodoItem route above the TodoList route. Be sure to preserve the order shown below.

    {/* preserve this order */}
    <Route path="/item/:id" component={TodoItem} />
    <Route exact path="/" component={TodoList} />

    Open your project in your browser and click on a to-do. Does it take you to the to-do page? Of course, it does. Good job.

    In case you’re having any problem, you can check out my code at this point from the 04-test-todo branch.

    Phew! This has been a marathon. But bear with me; there’s one last point I’d like us to touch on. Let’s quickly write a test case for when a user visits our app and then clicks on a to-do link. This is a functional test that mimics how our app should work. In practice, this is all the testing we need for this app: it ticks every box in our app specification.

    Open App.test.js and add a new test case. The code is a bit long so we’ll add it in two steps.

    import userEvent from "@testing-library/user-event";
    import { todos } from "./makeTodos";
    
    jest.mock("react-router-dom", () => ({
      ...jest.requireActual("react-router-dom"),
    }));
    
    describe("<App />"
      ...
      // previous test case
      ...
    
      it("Renders todos, and I can click to view a todo item", async () => {
        render(<App />);
        await waitForElementToBeRemoved(() => screen.getByText(/Fetching todos/i));
        todos.slice(0, 15).forEach((td) => {
          expect(screen.getByText(td.title)).toBeInTheDocument();
        });
        // click on a todo item and test the result
        const { id, title, completed, userId } = todos[0];
        axios.get.mockImplementationOnce(() =>
          Promise.resolve({
            data: { id, title, userId, completed },
          })
        );
        userEvent.click(screen.getByTestId(String(id)));
        await waitForElementToBeRemoved(() =>
          screen.getByText(`Fetching todo item ${String(id)}`)
        );
    
        // next code block goes here
      });
    });

    We have two imports, of which userEvent is new to us. According to the docs:

    “user-event is a companion library for the React Testing Library that provides a more advanced simulation of browser interactions than the built-in fireEvent method.”

    Yes. There is a fireEvent method for simulating user events. But userEvent is what you want to be using henceforth.

    Before we start the testing process, we need to restore the original useParams hook. This is necessary since we want to test actual behavior, so we should mock as little as possible. Jest provides the requireActual method, which returns the original react-router-dom module.

    Note that we must do this before we enter the describe block; otherwise, Jest will ignore it. The documentation states that requireActual:

    “…returns the actual module instead of a mock, bypassing all checks on whether the module should receive a mock implementation or not.”

    Once this is done, Jest bypasses every other check and ignores the mocked version of the react-router-dom.

    As usual, we render the <App /> component and wait for the Fetching todos loading indicator to disappear from the screen. We then check for the presence of the first 15 to-do items on the page.

    Once we’re satisfied with that, we grab the first item in our to-do list. To prevent any chance of a URL collision with our global Axios mock, we override the global mock with Jest’s mockImplementationOnce. This mocked value is valid for one call to the Axios get method. We then grab a link by its data-testid attribute and fire a user click event on that link. Then we wait for the loading indicator for the single to-do page to disappear from the screen.

    Now finish the test by adding the below expectations in the position indicated.

    expect(screen.getByText(title)).toBeInTheDocument();
    expect(screen.getByText(`Added by: ${userId}`)).toBeInTheDocument();
    
    switch (completed) {
      case true:
        expect(
          screen.getByText(/This item has been completed/)
        ).toBeInTheDocument();
        break;
      case false:
        expect(
          screen.getByText(/This item is yet to be completed/)
        ).toBeInTheDocument();
        break;
      default:
        throw new Error("No match");
    }

    We expect to see the to-do title and the user who added it. Finally, since we can’t be sure about the to-do status, we create a switch block to handle both cases. If a match is not found we throw an error.

    You should have 6 passing tests and a functional app at this point. In case you’re having trouble, the corresponding branch in my repo is 05-test-user-action.

    Conclusion

    Phew! That was some marathon. If you made it to this point, congratulations. You now have almost all you need to write tests for your React apps. I strongly advise that you read CRA’s testing docs and RTL’s documentation. Overall both are relatively short and direct.

    I strongly encourage you to start writing tests for your React apps, no matter how small. Even if it’s just smoke tests to make sure your components render. You can incrementally add more test cases over time.


    Practical Guide To Testing React Applications With Jest — Smashing Magazine

    06/24/2020


    Building a well-functioning application requires good testing; otherwise, knowing whether your application works as expected would be a matter of guesswork and luck. Jest is one of the best tools available for testing React applications. In this article, you will learn everything you need to create a solid test for your React components and application.

    In this article, I’m going to introduce you to a React testing tool named Jest, along with the popular library Enzyme, which is designed to test React components. I’ll introduce you to Jest testing techniques, including: running tests, testing React components, snapshot testing, and mocking.
    
    If you are new to testing and wondering how to get started, you will find this tutorial helpful because we will start with an introduction to testing. By the end, you’ll be up and running, testing React applications using Jest and Enzyme. You should be familiar with React in order to follow this tutorial.

    A Brief Introduction To Testing

    Testing is a line-by-line review of how your code will execute. A suite of tests for an application comprises various bits of code that verify whether the application is executing successfully and without error. Testing also comes in handy when code is updated: after updating a piece of code, you can run a test to ensure that the update does not break functionality already in the application.

    Why Test?

    It’s good to understand why we’re doing something before doing it. So, why test, and what is its purpose?

    1. The first purpose of testing is to prevent regression. Regression is the reappearance of a bug that had previously been fixed. It makes a feature stop functioning as intended after a certain event occurs.
    2. Testing ensures the functionality of complex components and modular applications.
    3. Testing is required for the effective performance of a software application or product.

    Testing makes an app more robust and less prone to error. It’s a way to verify that your code does what you want it to do and that your app works as intended for your users.

    Let’s go over the types of testing and what they do.

    Unit Test

    In this type of test, individual units or components of the software are tested. A unit might be an individual function, method, procedure, module, or object. A unit test isolates a section of code and verifies its correctness, in order to validate that each unit of the software’s code performs as expected.

    In unit testing, individual procedures or functions are tested to guarantee that they are operating properly, and all components are tested individually. For instance, testing a function or whether a statement or loop in a program is functioning properly would fall under the scope of unit testing.

    Component Test

    Component testing verifies the functionality of an individual part of an application. Tests are performed on each component in isolation from other components. Generally, React applications are made up of several components, so component testing deals with testing these components individually.

    For example, consider a website that has different web pages with many components. Every component will have its own subcomponents. Testing each module without considering integration with other components is referred to as component testing.

    Testing like this in React requires more sophisticated tools. So, we would need Jest and sometimes more sophisticated tools, like Enzyme, which we will discuss briefly later.

    Snapshot Test

    A snapshot test makes sure that the user interface (UI) of a web application does not change unexpectedly. It captures the code of a component at a moment in time, so that we can compare the component in one state with any other possible state it might take.

    We will learn about snapshot testing in a later section.

    Advantages and Disadvantages of Testing

    Testing is great and should be done, but it has advantages and disadvantages.

    Advantages

    1. It prevents unexpected regression.
    2. It allows the developer to focus on the current task, rather than worrying about the past.
    3. It allows modular construction of an application that would otherwise be too complex to build.
    4. It reduces the need for manual verification.

    Disadvantages

    1. You need to write more code, as well as debug and maintain.
    2. Non-critical test failures might cause the app to be rejected by the continuous integration pipeline.

    Introduction to Jest

    Jest is a delightful JavaScript testing framework with a focus on simplicity. It can be installed with npm or Yarn. Jest fits into a broader category of utilities known as test runners. It works great for React applications, but it also works great outside of React applications.

    Enzyme is a library that is used to test React applications. It’s designed to test components, and it makes it possible to write assertions that simulate actions that confirm whether the UI is working correctly.

    Jest and Enzyme complement each other so well, so in this article we will be using both.

    Process Of Running A Test With Jest

    In this section, we will be installing Jest and writing tests. If you are new to React, then I recommend using Create React App, because it is ready for use and ships with Jest.

    npm init react-app my-app
    

    We need to install Enzyme and enzyme-adapter-react-16, along with react-test-renderer (the number should be based on the version of React you’re using).

    npm install --save-dev enzyme enzyme-adapter-react-16 react-test-renderer
    

    Now that we have created our project with both Jest and Enzyme, we need to create a setupTests.js file in the src folder of the project. The file should look like this:

    import { configure } from "enzyme";
    import Adapter from "enzyme-adapter-react-16";
    configure({ adapter: new Adapter() });
    

    This imports Enzyme and sets up the adapter to run our tests.

    Before continuing, let’s learn some basics. Some key things are used a lot in this article, and you’ll need to understand them.

    • it or test
      You would pass a function to this method, and the test runner would execute that function as a block of tests.
    • describe
      This optional method is for grouping any number of it or test statements.
    • expect
      This is the condition that the test needs to pass. It compares the received parameter to the matcher, and it gives you access to a number of matchers that let you validate different things (a minimal example follows this list). You can read more about it in the documentation.
    • mount
      This method renders the full DOM, including the child components of the parent component, in which we are running the tests.
    • shallow
      This renders only the individual components that we are testing. It does not render child components. This enables us to test components in isolation.
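
    As promised, here is a tiny taste of a few common matchers (standalone lines, not tied to our app):

    expect(2 + 2).toBe(4);              // strict equality for primitives
    expect({ a: 1 }).toEqual({ a: 1 }); // deep (structural) equality
    expect([1, 2, 3]).toContain(2);     // membership in an array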

    Creating A Test File

    How does Jest know what’s a test file and what isn’t? The first rule is that any file found in a directory named __tests__ is considered a test. If you put a JavaScript file in one of these folders, Jest will try to run it when you call Jest, for better or for worse. The second rule is that Jest will recognize any file with the suffix .spec.js or .test.js. It will search the names of all folders and all files in your entire repository.

    Let’s create our first test, for a React mini-application created for this tutorial. You can clone it on GitHub. Run npm install to install all of the packages, and then npm start to launch the app. Check the README.md file for more information.

    Let’s open App.test.js to write our first test. First, check whether our app component renders correctly and whether we have specified an output:

    it("renders without crashing", () => {
      shallow(<App />);
    });
    
    it("renders Account header", () => {
      const wrapper = shallow(<App />);
      const welcome = <h1>Display Active Users Account Details</h1>;
      expect(wrapper.contains(welcome)).toEqual(true);
    });
    

    In the test above, the first test, with shallow, checks to see whether our app component renders correctly without crashing. Remember that the shallow method renders only a single component, without child components.

    The second test checks whether we have specified an h1 tag output of “Display Active Users Account Details” in our app component, using the Jest matcher toEqual.

    Now, run the test:

    npm run test 
    /* OR */
    npm test
    

    The output in your terminal should look like this:

      PASS  src/App.test.js
      √ renders without crashing (34ms)
      √ renders Account header (13ms)
    
    Test Suites: 1 passed, 1 total
    Tests:       2 passed, 2 total
    Snapshots:   0 total
    Time:        11.239s, estimated 16s
    Ran all test suites related to changed files.
    
    Watch Usage: Press w to show more.
    

    As you can see, our test passed. It shows we have one test suite named App.test.js, with two successful tests when Jest ran. We’ll talk about snapshot testing later, and you will also get to see an example of a failed test.

    Skipping Or Isolating A Test

    Skipping or isolating a test means that when Jest runs, a specific marked test is not run.

    it.skip("renders without crashing", () => {
      shallow(<App />);
    });
    
    it("renders Account header", () => {
      const wrapper = shallow(<App />);
      const header = <h1>Display Active Users Account Details</h1>;
      expect(wrapper.contains(header)).toEqual(true);
    });
    

    Our first test will be skipped because we’ve used the skip method to isolate it. So, it will not run or make any changes to our test when Jest runs; only the second one will run. You can also use it.only() to run a single test and skip the rest, as sketched below.
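
    A minimal sketch of it.only, reusing the header test from above:

    it.only("renders Account header", () => {
      const wrapper = shallow(<App />);
      const header = <h1>Display Active Users Account Details</h1>;
      expect(wrapper.contains(header)).toEqual(true);
    });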

    It’s a bit frustrating to make changes to a test file and then have to manually run npm test again. Jest has a nice feature called watch mode, which watches for file changes and runs tests accordingly. To run Jest in watch mode, you can run npm test -- --watch or jest --watch. I would also recommend leaving Jest running in the terminal window for the rest of this tutorial.

    Mocking Function

    A mock is a convincing duplicate of an object or module without any real inner workings. It might have a tiny bit of functionality, but compared to the real thing, it’s a mock. It can be created automatically by Jest or manually.

    Why should we mock? Mocking reduces the number of dependencies — that is, the number of related files that have to be loaded and parsed when a test is run. So, using a lot of mocks makes tests execute more quickly.

    Mock functions are also known as “spies”, because they let you spy on the behavior of a function that is called directly by some other code, rather than only testing the output.

    There are two ways to mock a function: either by creating a mock function to use it in test code, or by writing a manual mock to override a module dependency.

    Manual mocks are used to stub out functionality with mock data. For example, instead of accessing a remote resource, like a website or a database, you might want to create a manual mock that allows you to use fake data.

    We will use a mock function in the next section.

    Testing React Components

    This section combines all of the knowledge we have gained so far to understand how to test React components. Testing involves making sure that the output of a component hasn’t unexpectedly changed to something else. Constructing components in the right way is by far the most effective way to ensure successful testing.

    One thing we can do is to test component props: specifically, whether props from one component are being passed to another. Jest and the Enzyme API allow us to create a mock function to simulate whether props are being passed between components.

    We have to pass the user-account props from the main App component to the Account component. We need to give user-account details to Account in order to render the active account of users. This is where mocking comes in handy, enabling us to test our components with fake data.

    Let’s create a mock for the user props:

    const user = {
      name: "Adeneye David",
      email: "david@gmail.com",
      username: "Dave",
    };
    

    We have created mock user data in our test file to wrap around the component. Let’s say we are testing against a large database of users: accessing the database directly from our test file is not advisable. Instead, we create mock data, which enables us to test our component with fake values.

    describe("", () => {
      it("accepts user account props", () => {
        const wrapper = mount(<Account user={user} />);
        expect(wrapper.props().user).toEqual(user);
      });
      it("contains users account email", () => {
        const wrapper = mount(<Account user={user} />);
        const value = wrapper.find("p").text();
        expect(value).toEqual("david@gmail.com");
      });
    });
    

    We have two tests above, and we use a describe layer, which takes the component being tested. By specifying the props and values that we expect to be passed by the test, we are able to proceed.

    In the first test, we check whether the props that we passed to the mounted component equal the mock props that we created above.

For the second test, we pass the user props to the mounted Account component. Then, we check whether the text of the <p> element matches the email in our mock data. When you run the test suite, you’ll see that both tests pass.

    We can also test the state of our component. Let’s check whether the state of the error message is equal to null:

    it("renders correctly with no error message", () => {
      const wrapper = mount();
      expect(wrapper.state("error")).toEqual(null);
    });
    

In this test, we check whether the error state of our component is equal to null, using the toEqual() matcher. If an error message is set in our app, the test will fail when run.

    In the next section, we will go through how to test React components with snapshot testing, another amazing technique.

    Snapshot Testing

Snapshot testing captures the rendered output of a component at a moment in time, in order to compare it to a reference snapshot file stored alongside the test. It is used to keep track of changes in an app’s UI.

The snapshot itself is saved in a .snap file, which contains a serialized record of what the component’s rendered output looked like when the snapshot was made. During a test, Jest compares the contents of this file to the component’s current output. If they match, the test passes; if they don’t, the test fails.
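For illustration, a saved snapshot for a component that renders a loading heading might look something like this hypothetical excerpt of a .snap file:

// __snapshots__/App.test.js.snap (illustrative)
exports[`renders correctly 1`] = `
<div>
  <h3>
    Loading...
  </h3>
</div>
`;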

    To convert an Enzyme wrapper to a format that is compatible with Jest snapshot testing, we have to install enzyme-to-json:

    npm install --save-dev enzyme-to-json
    
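Optionally, instead of converting every wrapper by hand, enzyme-to-json also ships a Jest snapshot serializer that you can register once in your Jest configuration (shown here as a package.json excerpt); with it, wrappers can be passed to expect() directly. We’ll stick with the explicit toJson() call in this tutorial:

{
  "jest": {
    "snapshotSerializers": ["enzyme-to-json/serializer"]
  }
}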

Let’s create our snapshot test. When we run it the first time, a snapshot of the component’s rendered output will be generated and saved in a new __snapshots__ folder in the src directory.

    it("renders correctly", () => {
      const tree = shallow(<App />);
      expect(toJson(tree)).toMatchSnapshot();
    });
    

When the test above runs, the component’s current output will be compared to the existing snapshot.

    Now, let’s run the test:

    npm run test
    

When the test suite runs, a new snapshot will be generated and saved to the __snapshots__ folder. On subsequent runs, Jest will check whether the component still matches that snapshot.

As explained in the previous section, the shallow method from the Enzyme package renders a single component and nothing else; it doesn’t render child components, which gives us a nice way to isolate code and get better information when debugging. The mount method, in contrast, renders the full DOM, including the child components of the component under test.
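Here is a small, self-contained sketch of the difference; Parent and Child are hypothetical components defined only for this example:

import React from "react";
import { shallow, mount } from "enzyme";

const Child = () => <p>Hello from Child</p>;
const Parent = () => (
  <div>
    <Child />
  </div>
);

it("shallow rendering stops at child boundaries", () => {
  const wrapper = shallow(<Parent />);
  expect(wrapper.find(Child).exists()).toBe(true); // Child appears as an element…
  expect(wrapper.find("p").exists()).toBe(false); // …but its <p> is never rendered
});

it("mount renders the full DOM tree", () => {
  const wrapper = mount(<Parent />);
  expect(wrapper.find("p").text()).toEqual("Hello from Child");
});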

We can also update our snapshot. Let’s make a change to our component that makes the test fail, because the component will no longer correspond to the snapshot file. To do this, change the <h3> tag in the component from <h3>Loading...</h3> to <h3>Fetching Users...</h3>. When the test runs, this is what we’ll get in the terminal:

 FAIL  src/App.test.js (30.696s)
  × renders correctly (44ms)

  ● renders correctly

    expect(received).toMatchSnapshot()

    Snapshot name: `renders correctly 1`

    - Snapshot
    + Received

    - <h3>Loading...</h3>
    + <h3>Fetching Users...</h3>

       7 | it("renders correctly", () => {
       8 |   const wrapper = shallow(<App />);
    >  9 |   expect(toJson(wrapper)).toMatchSnapshot();
         |                           ^
      10 | });
      11 |
      12 | /* it("renders without crashing", () => {

      at Object.<anonymous> (src/App.test.js:9:27)

 › 1 snapshot failed.

Snapshot Summary
 › 1 snapshot failed from 1 test suite. Inspect your code changes or press `u` to update them.

Test Suites: 1 failed, 1 total
Tests:       1 failed, 1 total
Snapshots:   1 failed, 1 total
Time:        92.274s
Ran all test suites related to changed files.

Watch Usage: Press w to show more.

If we want our test to pass, we can either revert the change or update the snapshot file. In the command line, Jest provides instructions on how to update the snapshot: first, press w to show more options, and then press u to update it.

    › Press u to update failing snapshots.
    

    When we press u to update the snapshot, the test will pass.
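If you’re not in watch mode, you can also update snapshots directly from the command line with Jest’s --updateSnapshot flag (or its -u shorthand):

npx jest --updateSnapshot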

    Conclusion

    I hope you’ve enjoyed working through this tutorial. We’ve learned some Jest testing techniques using the Enzyme testing library. I’ve also introduced you to the process of running a test, testing React components, mocking, and snapshot testing. If you have any questions, you can leave them in the comments section below, and I’ll be happy to answer every one and work through any issues with you.


    web design

    Using Mirage JS And Cypress For UI Testing (Part 4) — Smashing Magazine

    06/17/2020

    About The Author

    Kelvin Omereshone is the CTO at Quru Lab. Kelvin was formerly a Front-end engineer at myPadi.ng. He’s the creator of Nuxtjs Africa community and very passionate …
    More about
    Kelvin

In this final part of the Mirage JS Deep Dive series, we will put everything we’ve learned in the previous parts into practice by learning how to perform UI tests with Mirage JS.

    One of my favorite quotes about software testing is from the Flutter documentation. It says:

    “How can you ensure that your app continues to work as you add more features or change existing functionality? By writing tests.”

    On that note, this last part of the Mirage JS Deep Dive series will focus on using Mirage to test your JavaScript front-end application.

    Note: This article assumes a Cypress environment. Cypress is a testing framework for UI testing. You can, however, transfer the knowledge here to whatever UI testing environment or framework you use.

    Read Previous Parts Of The Series:

    • Part 1: Understanding Mirage JS Models And Associations
    • Part 2: Understanding Factories, Fixtures And Serializers
    • Part 3: Understanding Timing, Response And Passthrough

    UI Tests Primer

A UI (user interface) test is a form of acceptance testing done to verify the user flows of your front-end application. The emphasis of this kind of software test is on the end user: the actual person who will be interacting with your web application on a variety of devices, ranging from desktops and laptops to mobile devices. These users interface with your application using input devices such as a keyboard, mouse, or touch screen. UI tests, therefore, are written to mimic user interaction with your application as closely as possible.

    Let’s take an e-commerce website for example. A typical UI test scenario would be:

    • The user can view the list of products when visiting the homepage.

    Other UI test scenarios might be:

    • The user can see the name of a product on the product’s detail page.
    • The user can click on the “add to cart” button.
    • The user can checkout.

    You get the idea, right?

When writing UI tests, you will mostly be relying on your back-end’s state: did it return the products, or did it return an error? The role Mirage plays is to make those server states available for you to tweak as needed. So, instead of making an actual request to your production server, your UI tests make requests to the Mirage mock server.

    For the remaining part of this article, we will be performing UI tests on a fictitious e-commerce web application UI. So let’s get started.

    Our First UI Test

As stated earlier, this article assumes a Cypress environment. Cypress makes UI testing on the web fast and easy: you can simulate clicks, navigate around, and programmatically visit routes in your application. See the Cypress docs for more.

So, assuming Cypress and Mirage are available to us, let’s start off by defining a proxy function for our API requests. We can do so in the cypress/support/index.js file of our Cypress setup. Just paste in the following code:

    // cypress/support/index.js
    Cypress.on("window:before:load", (win) => {
      win.handleFromCypress = function (request) {
        return fetch(request.url, {
          method: request.method,
          headers: request.requestHeaders,
          body: request.requestBody,
        }).then((res) => {
          let content =
            res.headers.map["content-type"] === "application/json"
              ? res.json()
              : res.text()
          return new Promise((resolve) => {
            content.then((body) => resolve([res.status, res.headers, body]))
          })
        })
      }
    })
    

Then, in our app’s bootstrapping file (main.js for Vue, index.js for React), we’ll use Mirage to proxy the app’s API requests to the handleFromCypress function, but only when Cypress is running. Here is the code for that:

    import { Server, Response } from "miragejs"
    
    if (window.Cypress) {
      new Server({
        environment: "test",
        routes() {
          let methods = ["get", "put", "patch", "post", "delete"]
          methods.forEach((method) => {
            this[method]("/*", async (schema, request) => {
              let [status, headers, body] = await window.handleFromCypress(request)
              return new Response(status, headers, body)
            })
          })
        },
      })
    }
    

    With that setup, anytime Cypress is running, your app knows to use Mirage as the mock server for all API requests.

    Let’s continue writing some UI tests. We’ll begin by testing our homepage to see if it has 5 products displayed. To do this in Cypress, we need to create a homepage.test.js file in the tests folder in the root of your project directory. Next, we’ll tell Cypress to do the following:

• Visit the homepage, i.e. the / route;
• Then assert that it contains li elements with the class product, and that there are 5 of them.

    Here is the code:

    // homepage.test.js
    it('shows the products', () => {
      cy.visit('/');
    
      cy.get('li.product').should('have.length', 5);
    });
    

You might have guessed that this test would fail, because we don’t have a production server returning 5 products to our front-end application. So what do we do? We mock out the server with Mirage! If we bring in Mirage, it can intercept all network calls in our tests. Let’s do that below: we’ll start the Mirage server before each test in the beforeEach function and shut it down in the afterEach function. The beforeEach and afterEach functions are both provided by Cypress, and they exist so that you can run code before and after each test in your suite, hence the names. Let’s see the code for this:

    // homepage.test.js
    import { Server } from "miragejs"
    
    let server
    
    beforeEach(() => {
      server = new Server()
    })
    
    afterEach(() => {
      server.shutdown()
    })
    
    it("shows the products", function () {
      cy.visit("/")
    
      cy.get("li.product").should("have.length", 5)
    })
    

Okay, we are getting somewhere: we’ve imported Server from Mirage, and we start it and shut it down in the beforeEach and afterEach functions, respectively. Let’s go about mocking our product resource.

    
    // homepage.test.js
    import { Server, Model } from 'miragejs';
    
    let server;
    
    beforeEach(() => {
      server = new Server({
        models: {
          product: Model,
        },
    
        routes() {
          this.namespace = 'api';
    
          this.get('products', ({ products }, request) => {
            return products.all();
          });
        },
      });
    });
    
    afterEach(() => {
      server.shutdown();
    });
    
    it('shows the products', function() {
      cy.visit('/');
    
      cy.get('li.product').should('have.length', 5);
    });
    

    Note: You can always take a peek at the previous parts of this series if you don’t understand the Mirage bits of the above code snippet.

    • Part 1: Understanding Mirage JS Models And Associations
    • Part 2: Understanding Factories, Fixtures And Serializers
    • Part 3: Understanding Timing, Response And Passthrough

Okay, we have started fleshing out our Server instance by creating the product model and the route handler for the /api/products route. However, if we run our test, it will fail because we don’t have any products in the Mirage database yet.

Let’s populate the Mirage database with some products. We could use the create() method on our server instance, but creating 5 products by hand seems pretty tedious. There should be a better way.
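For illustration only, the by-hand approach would mean seeding every record individually inside beforeEach, something like this:

// Tedious: create every product record by hand
server.create('product', { name: 'Product 0' });
server.create('product', { name: 'Product 1' });
server.create('product', { name: 'Product 2' });
server.create('product', { name: 'Product 3' });
server.create('product', { name: 'Product 4' });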

    Ah yes, there is. Let’s utilize factories (as explained in the second part of this series). We’ll need to create our product factory like so:

    // homepage.test.js
    import { Server, Model, Factory } from 'miragejs';
    
    let server;
    
    beforeEach(() => {
      server = new Server({
        models: {
          product: Model,
        },
         factories: {
          product: Factory.extend({
            name(i) {
                return `Product ${i}`
            }
          })
        },
    
        routes() {
          this.namespace = 'api';
    
          this.get('products', ({ products }, request) => {
            return products.all();
          });
        },
      });
    });
    
    afterEach(() => {
      server.shutdown();
    });
    
    it('shows the products', function() {
      cy.visit('/');
    
      cy.get('li.product').should('have.length', 5);
    });
    

    Then, finally, we’ll use createList() to quickly create the 5 products that our test needs to pass.

    Let’s do this:

    // homepage.test.js
    import { Server, Model, Factory } from 'miragejs';
    
    let server;
    
    beforeEach(() => {
      server = new Server({
        models: {
          product: Model,
        },
         factories: {
          product: Factory.extend({
            name(i) {
                return `Product ${i}`
            }
          })
        },
    
        routes() {
          this.namespace = 'api';
    
          this.get('products', ({ products }, request) => {
            return products.all();
          });
        },
      });
    });
    
    afterEach(() => {
      server.shutdown();
    });
    
    it('shows the products', function() {
      server.createList("product", 5)
      cy.visit('/');
    
      cy.get('li.product').should('have.length', 5);
    });
    

    So when we run our test, it passes!

Note: After each test, Mirage’s server is shut down and reset, so none of this state will leak across tests.

Avoiding Multiple Mirage Servers

If you’ve been following along with this series, you’ll recall that when we used Mirage in development to intercept our network requests, we had a server.js file in the root of our app where we set up Mirage. In the spirit of DRY (don’t repeat yourself), it would be good to utilize that server instance instead of maintaining two separate Mirage instances for development and testing. To do this (in case you don’t have a server.js file already), just create one in your project’s src directory.

Note: Your structure will differ if you are using a JavaScript framework, but the general idea is to set up the server.js file in the src root of your project.

    So with this new structure, we’ll export a function in server.js that is responsible for creating our Mirage server instance. Let’s do that:

    // src/server.js
    
    export function makeServer() { /* Mirage code goes here */}
    

    Let’s complete the implementation of the makeServer function by removing the Mirage JS server we created in homepage.test.js and adding it to the makeServer function body:

    import { Server, Model, Factory } from 'miragejs';
    
    export function makeServer() {
      let server = new Server({
        models: {
          product: Model,
        },
        factories: {
          product: Factory.extend({
            name(i) {
              return `Product ${i}`;
            },
          }),
        },
        routes() {
          this.namespace = 'api';
    
          this.get('/products', ({ products }) => {
            return products.all();
          });
        },
        seeds(server) {
          server.createList('product', 5);
        },
      });
      return server;
    }
    

Now all you have to do is import makeServer in your test. Using a single Mirage server instance is cleaner; this way, you don’t have to maintain two server instances for the development and test environments.

    After importing the makeServer function, our test should now look like this:

    import { makeServer } from '/path/to/server';
    
    let server;
    
    beforeEach(() => {
      server = makeServer();
    });
    
    afterEach(() => {
      server.shutdown();
    });
    
    it('shows the products', function() {
      server.createList('product', 5);
    
      cy.visit('/');
    
      cy.get('li.product').should('have.length', 5);
    });
    

So we now have a central Mirage server that serves us in both development and testing. You can also use the makeServer function to start Mirage in development (see the first part of this series).

Your Mirage code should not find its way into production. Therefore, depending on your build setup, you would need to start Mirage only during development mode.
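A minimal sketch of that guard, assuming your bundler exposes NODE_ENV (adjust the entry file and the condition to your build setup):

// src/main.js (illustrative entry file)
import { makeServer } from './server';

if (process.env.NODE_ENV === 'development') {
  makeServer();
}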

Note: Read my article on how to set up API Mocking with Mirage and Vue.js to see how I did that in Vue, so that you can replicate it in whatever front-end framework you use.

    Testing Environment

Mirage has two environments: development (the default) and test. In development mode, the Mirage server has a default response time of 400ms (which you can customize; see the third article of this series), logs all server responses to the console, and loads the development seeds.

    However, in the test environment, we have:

• 0ms of response delay, to keep our tests fast;
• All Mirage logs are suppressed, so as not to pollute your CI logs;
• The seeds() function is ignored, so that your seed data can be used solely for development and won’t leak into your tests. This helps keep your tests deterministic.

Let’s update our makeServer function so that we get the benefits of the test environment. To do that, we’ll make it accept an object with an environment option (we’ll default it to development and override it in our test). Our server.js should now look like this:

    // src/server.js
    import { Server, Model, Factory } from 'miragejs';
    
    export function makeServer({ environment = 'development' } = {}) {
      let server = new Server({
        environment,
    
        models: {
          product: Model,
        },
        factories: {
          product: Factory.extend({
            name(i) {
              return `Product ${i}`;
            },
          }),
        },
    
        routes() {
          this.namespace = 'api';
    
          this.get('/products', ({ products }) => {
            return products.all();
          });
        },
        seeds(server) {
          server.createList('product', 5);
        },
      });
      return server;
    }
    

Also note that we are passing the environment option to the Mirage server instance using the ES6 property shorthand. Now, with this in place, let’s update our test to override the environment value with test. Our test now looks like this:

    import { makeServer } from '/path/to/server';
    
    let server;
    
    beforeEach(() => {
      server = makeServer({ environment: 'test' });
    });
    
    afterEach(() => {
      server.shutdown();
    });
    
    it('shows the products', function() {
      server.createList('product', 5);
    
      cy.visit('/');
    
      cy.get('li.product').should('have.length', 5);
    });
    

    AAA Testing

Mirage encourages a testing standard called the triple-A, or AAA, testing approach, which stands for Arrange, Act, and Assert. You can already see this structure in the test above:

    it("shows all the products", function () {
      // ARRANGE
      server.createList("product", 5)
    
      // ACT
      cy.visit("/")
    
      // ASSERT
      cy.get("li.product").should("have.length", 5)
    })
    

You might occasionally need to break this pattern, but nine times out of ten it should work just fine for your tests.

    Let’s Test Errors

So far, we’ve tested that our homepage shows 5 products. But what if the server is down, or something goes wrong while fetching the products? We don’t need to wait for the server to actually fail in order to work on how our UI would look in such a case. We can simply simulate that scenario with Mirage.

Let’s return a 500 (server error) when the user is on the homepage. As we saw in a previous article, we use the Response class to customize Mirage’s responses. Let’s import it and write our test.

// homepage.test.js
import { Response } from "miragejs"
    
    it('shows an error when fetching products fails', function() {
      server.get('/products', () => {
        return new Response(
          500,
          {},
          { error: "Can’t fetch products at this time" }
        );
      });
    
      cy.visit('/');
    
      cy.get('div.error').should('contain', "Can’t fetch products at this time");
    });
    

What a world of flexibility! We simply override the response Mirage would return in order to test how our UI displays when fetching products fails. Our overall homepage.test.js file now looks like this:

    // homepage.test.js
    import { Response } from 'miragejs';
    import { makeServer } from 'path/to/server';
    
    let server;
    
    beforeEach(() => {
      server = makeServer({ environment: 'test' });
    });
    
    afterEach(() => {
      server.shutdown();
    });
    
    it('shows the products', function() {
      server.createList('product', 5);
    
      cy.visit('/');
    
      cy.get('li.product').should('have.length', 5);
    });
    
    it('shows an error when fetching products fails', function() {
      server.get('/products', () => {
        return new Response(
          500,
          {},
          { error: "Can’t fetch products at this time" }
        );
      });
    
      cy.visit('/');
    
      cy.get('div.error').should('contain', "Can’t fetch products at this time");
    });
    

Note that the modification we made to the /api/products handler lives only in our test. That means the handler still works as we previously defined it in development mode.

    So when we run our tests, both should pass.

Note: It’s worth noting that the elements we are querying for in Cypress should exist in your front-end UI; Cypress doesn’t create HTML elements for you.
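For instance, for cy.get('li.product') to match anything, your app must render markup along these lines; here is a hypothetical component sketch:

import React from 'react';

// Hypothetical list component; Cypress can only find elements
// that your UI actually renders.
export function ProductList({ products }) {
  return (
    <ul>
      {products.map((product) => (
        <li className="product" key={product.id}>
          {product.name}
        </li>
      ))}
    </ul>
  );
}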

    Testing The Product Detail Page

    Finally, let’s test the UI of the product detail page. So this is what we are testing for:

    • User can see the product name on the product detail page

Let’s get to it. First, we’ll create a new test for this user flow.

    Here is the test:

    it("shows the product’s name on the detail route", function() {
      let product = this.server.create('product', {
        name: 'Korg Piano',
      });
    
      cy.visit(`/${product.id}`);
    
      cy.get('h1').should('contain', 'Korg Piano');
    });
    

Your homepage.test.js file should now look like this:

    // homepage.test.js
    import { Response } from 'miragejs';
import { makeServer } from 'path/to/server';
    
    let server;
    
    beforeEach(() => {
      server = makeServer({ environment: 'test' });
    });
    
    afterEach(() => {
      server.shutdown();
    });
    
    it('shows the products', function() {
      server.createList('product', 5);
    
      cy.visit('/');
    
      cy.get('li.product').should('have.length', 5);
    });
    
    it('shows an error when fetching products fails', function() {
      server.get('/products', () => {
        return new Response(
          500,
          {},
          { error: "Can’t fetch products at this time" }
        );
      });
    
      cy.visit('/');
    
      cy.get('div.error').should('contain', "Can’t fetch products at this time");
    });
    
    it("shows the product’s name on the detail route", function() {
      let product = server.create('product', {
        name: 'Korg Piano',
      });
    
      cy.visit(`/${product.id}`);
    
      cy.get('h1').should('contain', 'Korg Piano');
    });
    

    When you run your tests, all three should pass.

    Wrapping Up

It’s been fun showing you the inner workings of Mirage JS in this series. I hope you now feel better equipped to improve your front-end development experience by using Mirage to mock out your back-end server. I also hope you’ll use the knowledge from this article to write more acceptance/UI/end-to-end tests for your front-end applications.

    • Part 1: Understanding Mirage JS Models And Associations
    • Part 2: Understanding Factories, Fixtures And Serializers
    • Part 3: Understanding Timing, Response And Passthrough
    • Part 4: Using Mirage JS And Cypress For UI Testing