Automated testing has significantly improved efficiency, reduced human error, and accelerated time to market for many software companies. This is particularly evident in visual regression testing, which aims to identify unintended visual changes, a critical factor in maintaining a good user experience. However, simply setting up automation is not enough; implementing it efficiently is just as important.
A popular tool in the realm of visual testing is Cypress, one of the few tools that lets developers write automated tests directly in JavaScript. This guide provides a comprehensive overview of the best practices to consider when you're automating your visual testing with Cypress, as well as how those tests should be implemented.
Why Choose Cypress Tests for Automated Testing? The JavaScript-based nature of Cypress allows for frictionless integration with modern stacks, which are often built on a JavaScript framework such as React or Angular.
Cypress's intuitive interface and real-time feedback mechanisms significantly simplify writing and debugging tests for both developers and QA engineers, streamlining the testing process and making Cypress easier to pick up than most automated testing tools.
Reducing Flakiness One of the most critical aspects of any testing tool is its reliability. Cypress addresses this by offering built-in retry-ability features that aim to reduce test flakiness. By automatically retrying failed assertions, Cypress produces more reliable test outcomes, thereby increasing the credibility of your testing suite. That said, while retries are often a great way of reducing a tool's flake rate, they don't eliminate flakiness entirely.
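Conceptually, retry-ability is a loop that re-runs an assertion until it passes or a timeout elapses. The plain-JavaScript sketch below illustrates the idea; it is not Cypress's actual implementation, and the `retryUntil` helper name is our own:

```javascript
// Illustrative sketch of assertion retry-ability (not Cypress internals).
// retryUntil re-runs `assertion` every `interval` ms until it stops
// throwing or until `timeout` ms have elapsed.
async function retryUntil(assertion, { timeout = 4000, interval = 50 } = {}) {
  const deadline = Date.now() + timeout;
  let lastError;
  while (Date.now() < deadline) {
    try {
      return assertion(); // success: return the assertion's value
    } catch (err) {
      lastError = err; // failure: remember the error and retry
      await new Promise((resolve) => setTimeout(resolve, interval));
    }
  }
  throw lastError; // timed out: surface the last failure
}

// Example: a value that "settles" asynchronously, as UI text often does.
let text = 'Loading...';
setTimeout(() => { text = 'Hello World'; }, 100);

retryUntil(() => {
  if (text !== 'Hello World') throw new Error(`got "${text}"`);
  return text;
}).then((value) => console.log(value)); // eventually logs "Hello World"
```

This is why a Cypress assertion on an element that has not yet rendered usually passes on a later retry instead of failing outright.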
Comparison Table: Cypress vs. Other Tools To fully grasp the features of Cypress, it's worth a quick assessment of how it compares to other tools. The main Cypress competitors are:
- Meticulous: Known for its focus on visual regression testing and minimal maintenance requirements.
- Test IO: Utilizes a crowdtesting model, providing a diverse range of real-world testing scenarios.
- Sauce Labs: Offers emulation-based device testing along with real-time feedback mechanisms.
- BrowserStack: Specializes in real-world device testing, offering a comprehensive solution for cross-platform compatibility.
| Criteria | Cypress | Meticulous | Test IO | Sauce Labs | BrowserStack |
| --- | --- | --- | --- | --- | --- |
| Test Automation | Yes (JavaScript) | Yes (Visual Regression) | No (Crowdtesting) | Yes (Emulation-based) | Yes (Real-world devices) |
| Language Support | JavaScript | Multiple | Multiple | Multiple | Multiple |
| Ease of Use* | High | Moderate | Low | Moderate | High |
| Flake Rate* | Moderate | Low | High | Moderate | Low |
* Subjective assessment based on reviews.
Cypress Testing Best Practices Following best practices isn’t just a matter of pleasing managers; it directly affects the efficiency and reliability of automated testing processes. Over time, this will help you reduce maintenance time, which is crucial in agile development environments where changes are frequent. Also, poorly designed tests can lead to misleading results, which can cause two common problems:
- A bug slipping through the cracks and into production.
- Having to spend time debugging a false positive, thus wasting engineering hours on something that wasn't faulty in the first place.

Moreover, the adoption of best practices enhances team collaboration, making it easier to understand, modify, or extend test cases. This, in turn, fosters a more cohesive and productive development environment.
Cypress provides fairly comprehensive documentation on best practices for writing tests, so this post will focus more on how you can best utilize the different features offered by the tool.
Utilizing the Cypress Test Runner, Command Log, and Debugging Features Cypress offers a robust Test Runner that serves as the core of its testing experience, allowing for real-time execution of tests, providing immediate feedback via a test report, and accelerating the debugging process. One of its most powerful features is the Command Log, which provides a detailed, chronological record of all test actions, assertions, and network requests.
This allows developers to see the Application or Component Under Test (AUT/CUT) and explore its DOM in real time. It’s not just a display; it’s a fully interactive application with developer tools for inspecting elements just like you would in a normal browser. This interactive element is often very helpful for debugging and understanding the behavior of the application under different test conditions.
Command Log: Your Debugging Companion
The Command Log is displayed on the left-hand side of the Cypress Test Runner, serving as a visual representation of your test suite. Clicking on any of the tests, which are neatly nested, reveals every Cypress command executed within that test's block. This includes commands executed in relevant before, beforeEach, afterEach, and after hooks. The Command Log also offers a feature known as "Time Traveling," which allows developers to hover over each command to see the exact state of the DOM at that moment.
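The reason the Command Log groups hook commands with each test is the Mocha-style hook ordering that Cypress inherits. This standalone sketch (our own minimal emulation, not the real test runner) shows the order in which the four hooks fire around two tests:

```javascript
// Minimal emulation of Mocha-style hook ordering (illustrative only).
const log = [];
const suite = {
  before: () => log.push('before'),         // runs once, before all tests
  beforeEach: () => log.push('beforeEach'), // runs before every test
  afterEach: () => log.push('afterEach'),   // runs after every test
  after: () => log.push('after'),           // runs once, after all tests
  tests: [() => log.push('test 1'), () => log.push('test 2')],
};

suite.before();
for (const test of suite.tests) {
  suite.beforeEach();
  test();
  suite.afterEach();
}
suite.after();

console.log(log.join(' -> '));
// before -> beforeEach -> test 1 -> afterEach -> beforeEach -> test 2 -> afterEach -> after
```

When a command in the log looks unfamiliar, checking whether it came from a hook rather than the test body is often the first debugging step.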
Best Practices for Utilizing Test Runner and Command Log
- Real-Time Feedback: Make the most of the real-time execution feature for quicker identification and resolution of issues.
- Command Log Interpretation: Learn to read the Command Log effectively. It provides a wealth of information that can help you debug tests faster.
- Time Travel Feature: Use the Time Travel feature judiciously to understand the state changes in your application, for more effective debugging.

Scenarios Benefiting from Real-Time Execution and Command Log The real-time execution and Command Log features are particularly useful in debugging complex user flows that involve multiple steps or conditions. They are also invaluable in dynamic web applications where elements are frequently updated or changed. The detailed logging and real-time feedback simplify the process of updating and maintaining test cases, making these features especially beneficial in agile development environments.
Understanding and effectively utilizing Cypress's Test Runner and Command Log can significantly enhance your debugging capabilities. These features not only provide real-time feedback but also offer detailed insights into the execution flow and state changes in your application, making them indispensable tools in your testing arsenal.
Leveraging Cypress's Automatic Waiting Feature While it is possible to use a traditional .wait() statement in Cypress, there are a number of ways to avoid it. One way is to use the .should() statement, which allows you to pass a callback function that will run after a given command. For instance, take a look at this example usage pulled from Cypress's official documentation:
cy.get('p').should(($p) => {
  // should have found 3 elements
  expect($p).to.have.length(3)

  // make sure the first contains some text content
  expect($p.first()).to.contain('Hello World')

  // use jquery's map to grab all of their classes
  // jquery's map returns a new jquery object
  const classes = $p.map((i, el) => {
    return Cypress.$(el).attr('class')
  })

  // call classes.get() to make this a plain array
  expect(classes.get()).to.deep.eq([
    'text-primary',
    'text-danger',
    'text-default',
  ])
})
In this example, Cypress makes sure to execute the code within the .should() callback function only after the .get('p') function has successfully executed and returned a value. Another example from Cypress’s documentation on the .wait() function shows how it’s not always necessary to input a specific time interval to wait; instead, you can tell Cypress tests to wait for a given alias to respond:
// Wait for the alias 'getAccount' to respond
// without changing or stubbing its response
cy.intercept('/accounts/*').as('getAccount')
cy.visit('/accounts/123')
cy.wait('@getAccount').then((interception) => {
  // we can now access the low level interception
  // that contains the request body,
  // response body, status, etc
})
Although there are some great ways of performing automatic waits within Cypress, it’s important to understand and be aware of the balance between time spent on optimizations and time spent on getting things done. If a simple .wait(2000) is going to increase the total run time of your tests by between one and two seconds but will work perfectly fine in 100% of cases, then you should likely just use the .wait(2000) statement. Automatic waits are great either when they’re easy to implement or when the required waiting time varies enough that a static .wait() statement will cause flakiness.
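The difference between a fixed sleep and a condition-based wait can be illustrated outside Cypress as well. In this sketch (the `waitFor` helper name is our own, not a Cypress API), the wait resolves as soon as the condition holds instead of always burning the full interval:

```javascript
// waitFor polls `condition` every `interval` ms and resolves as soon as it
// returns true, instead of always sleeping a fixed amount like wait(2000).
function waitFor(condition, { timeout = 2000, interval = 25 } = {}) {
  const deadline = Date.now() + timeout;
  return new Promise((resolve, reject) => {
    const poll = () => {
      if (condition()) return resolve();
      if (Date.now() > deadline) return reject(new Error('waitFor timed out'));
      setTimeout(poll, interval);
    };
    poll();
  });
}

// The "response" arrives after 100 ms; waitFor resolves shortly after that,
// rather than after a fixed 2000 ms sleep.
let responded = false;
setTimeout(() => { responded = true; }, 100);

const start = Date.now();
waitFor(() => responded).then(() => {
  console.log(`waited ~${Date.now() - start} ms instead of a fixed 2000 ms`);
});
```

When the condition usually holds almost immediately, the saved time compounds across a large suite; when it always takes about the same time, a plain static wait is simpler and just as reliable.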
Using Spies, Stubs, and Clocks for Controlling Behavior As in many other testing tools, spies, stubs, and clocks are popular and powerful ways of controlling application behavior during testing. With one simple line, you can define the behavior of a function to be exactly what you'd like it to be:
// force obj.method() to return "foo"
cy.stub(obj, 'method').returns('foo')
// force obj.method() when called with "bar" argument to return "foo"
cy.stub(obj, 'method').withArgs('bar').returns('foo')
This is often useful when the part of the web application you're testing relies on a function or method that is outside your test suite's scope and therefore needs to be mocked. However, it's important to exercise caution when using these tools, as manual customization of application behavior can lead to a number of unwanted results, like:
- Increased development time due to maintenance, as mocks often have to be updated in order to reflect new behavior in the function being mocked.
- Incorrect test results due to the misconfiguration of mocks.
- Unrealistic test results due to manual configuration, such as the timing of a UI element loading.

All in all, spies, stubs, and clocks are powerful ways of getting your tests to behave as you want them to, but it's crucial to research and understand whether you can utilize real-world traffic instead.
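To make the .returns() and .withArgs() semantics concrete, here is a tiny hand-rolled stub in plain JavaScript. This is purely illustrative: Cypress's cy.stub() is backed by Sinon and is far more capable, and the `makeStub` helper is our own invention:

```javascript
// A miniature stub mimicking the .returns() / .withArgs() behavior shown above.
function makeStub() {
  const argMatchers = [];
  let defaultValue;
  const stub = (...args) => {
    // Check argument-specific behaviors first, then fall back to the default.
    for (const { expected, value } of argMatchers) {
      if (JSON.stringify(args) === JSON.stringify(expected)) return value;
    }
    return defaultValue;
  };
  stub.returns = (value) => { defaultValue = value; return stub; };
  stub.withArgs = (...expected) => ({
    returns: (value) => { argMatchers.push({ expected, value }); return stub; },
  });
  return stub;
}

const obj = { method: () => 'real result' };
obj.method = makeStub();
obj.method.returns('foo');                 // obj.method() -> 'foo'
obj.method.withArgs('bar').returns('baz'); // obj.method('bar') -> 'baz'

console.log(obj.method());      // foo
console.log(obj.method('bar')); // baz
```

The maintenance cost mentioned above falls out directly: every .returns() value encodes an assumption about the real function, and each assumption must be revisited whenever that function changes.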
Implementing Network Traffic Control for Edge Case Testing Edge cases are testing scenarios that occur at the extreme ends of input boundaries and that may not always be present in real-world traffic captured from production. An edge case is a great example of a scenario where the use of stubs may in fact be necessary. While edge cases can happen at any point of an application, it’s very common to experience them when you’re testing interactions between multiple APIs; in this situation, it’s necessary to mock the edge case scenarios.
While it is possible to use the .stub() method described in the previous section, Cypress testing does offer the .intercept() function as a specific solution to handling network requests:
cy.intercept(
  {
    method: 'GET', // Route all GET requests
    url: '/users/*', // that have a URL that matches '/users/*'
  },
  [] // and force the response to be: []
).as('getUsers') // and assign an alias
That being said, as with the .wait() function, this should be treated as an okay-but-non-preferred solution in specific use cases — for instance, when there’s a need for edge case testing. As stated in the previous section, adding manual mocks of any kind will inevitably increase the maintenance and reduce the reliability of testing suites, which is why you should always try to rely on real-world data where possible.
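To see what route matching and forced responses look like under the hood, here is a standalone sketch of an interceptor that matches glob patterns like '/users/*' and layers a single edge-case response over a broad stub. This is illustrative only, not Cypress internals; the function names and routes are our own:

```javascript
// Sketch of glob-style route matching with forced responses (illustrative).
function globToRegex(pattern) {
  // Escape regex metacharacters except '*', then turn '*' into '.*'.
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, '\\$&');
  return new RegExp('^' + escaped.replace(/\*/g, '.*') + '$');
}

// The more specific route is listed first so it wins over the catch-all,
// mirroring how you would layer a broad stub and one edge case.
const interceptors = [
  { method: 'GET', pattern: globToRegex('/users/999'), response: { status: 500, body: { error: 'internal error' } } },
  { method: 'GET', pattern: globToRegex('/users/*'), response: { status: 200, body: [] } },
];

function handleRequest(method, url) {
  const match = interceptors.find((i) => i.method === method && i.pattern.test(url));
  return match ? match.response : { status: 404, body: null };
}

console.log(handleRequest('GET', '/users/123').status); // 200: catch-all stub
console.log(handleRequest('GET', '/users/999').status); // 500: forced edge case
```

Keeping edge-case routes narrow, as with '/users/999' here, limits how much of the suite depends on hand-written responses.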
Ensuring Consistent Results with Cypress's Unique Architecture One of the most unique features of Cypress is its JavaScript-based architecture, which is also one of its most “dangerous” features. The possibility of writing tests in the same language you’re writing your application in allows for a lot of freedom, but without caution this can quickly become a drawback.
For instance, a popular feature of JavaScript is async/await, which lets you run multiple asynchronous operations concurrently within a single-threaded runtime. But this can also easily result in a number of race conditions. Typically, this isn't too much of a worry, as there are a number of ways to work around it, but when it comes to testing, especially visual testing, this is an easy way to introduce flakiness into your test suite.
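Here is a concrete illustration of such a race: two async operations write to the same piece of UI state, and the final value depends entirely on which timer fires last. This is a standalone sketch with arbitrary delays, but it is exactly the kind of nondeterminism that makes a screenshot-based assertion flaky:

```javascript
// Two async operations race to set the same UI state. Which one "wins"
// depends on timing, so a screenshot taken afterwards is nondeterministic.
let banner = '';

function showFromCache(delay) {
  return new Promise((resolve) =>
    setTimeout(() => { banner = 'cached'; resolve(); }, delay));
}
function showFromNetwork(delay) {
  return new Promise((resolve) =>
    setTimeout(() => { banner = 'fresh'; resolve(); }, delay));
}

async function render(cacheDelay, networkDelay) {
  // Both updates run concurrently; the slower one overwrites the faster one.
  await Promise.all([showFromCache(cacheDelay), showFromNetwork(networkDelay)]);
  return banner;
}

(async () => {
  console.log(await render(50, 10)); // "cached": the slower cache write landed last
  console.log(await render(10, 50)); // "fresh": the slower network write landed last
})();
```

In production the delays come from caches and networks rather than hard-coded timers, so the winner can change from run to run, and so can the screenshot.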
All in all, Cypress's unique architecture allows for amazing flexibility and freedom in the creation of test suites; however, it's still crucial to keep your test suites as simple as possible in order to improve reliability and maintainability.
Enter Meticulous: Reducing Flakiness in Visual Regression Testing While Cypress's JavaScript-based approach will appeal to a large group of developers, it's important to remember that there are important trade-offs. For instance, the concern of flaky tests has been raised multiple times in this post, as misusing Cypress's features can result in unreliable tests and false positives.
Another up-and-coming tool on the market has gone in the opposite direction from Cypress, opting for a solution relying on no code at all. To use Meticulous, you simply install a recorder script on your website; this script will then capture all user interactions. And then, once you’ve developed a new visual feature, fixed a bug, or for some other reason made changes to the front-end, Meticulous will reenact a user’s behavior and capture screenshots of your application’s interface.
These screenshots will be compared with the original ones recorded during normal user interactions, and by comparing the screenshots pixel by pixel, Meticulous can accurately determine and report any visual differences. Additionally, the recorded user traffic will also contain the responses of any third-party service being used (like a backend API), which will then automatically be mocked, essentially presenting you with a no-code, no-configuration, no-maintenance solution that almost eliminates the possibility of flakes.
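The core of a pixel-by-pixel comparison can be sketched very simply: walk two pixel buffers of the same size and count mismatches. Real visual-diff tools add anti-aliasing tolerance, perceptual color metrics, and diff highlighting; this toy version (our own, not Meticulous's algorithm) shows only the basic idea:

```javascript
// Compare two RGBA pixel buffers of equal size and report how many pixels
// differ beyond a per-channel tolerance (a toy version of a visual diff).
function diffPixels(bufA, bufB, tolerance = 0) {
  if (bufA.length !== bufB.length) throw new Error('screenshot size mismatch');
  let differing = 0;
  for (let i = 0; i < bufA.length; i += 4) { // 4 channels per pixel (RGBA)
    const changed = [0, 1, 2, 3].some(
      (c) => Math.abs(bufA[i + c] - bufB[i + c]) > tolerance
    );
    if (changed) differing += 1;
  }
  return differing;
}

// Two "screenshots" of 2 pixels each; the second pixel's red channel changed.
const baseline = Uint8Array.from([255, 0, 0, 255, 0, 255, 0, 255]);
const current = Uint8Array.from([255, 0, 0, 255, 9, 255, 0, 255]);

console.log(diffPixels(baseline, current));     // 1: one pixel differs
console.log(diffPixels(baseline, current, 10)); // 0: within tolerance
```

The tolerance parameter hints at why naive pixel equality is too strict in practice: font rendering and anti-aliasing shift channel values slightly between runs.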
Final Thoughts From its unique JavaScript-based architecture that ensures consistent and reliable test results to its robust features like real-time execution and debugging, automatic waiting, and network traffic control, Cypress stands out as a tool designed to enhance the efficiency and reliability of software testing processes.
However, it’s crucial to avoid becoming too reliant on the efficacy of these features, as there are still a number of pitfalls you can run into, resulting in unreliable and/or flaky tests. Remember, any test you write yourself is subject to human error. By this logic, the most reliable tests will come from utilizing native features and automation as much as possible.
While this is true for all code, it’s especially important when it comes to testing, as your tests are supposed to catch human errors.
Meticulous Meticulous is a tool that software engineers can use to catch visual regressions in web applications without writing or maintaining UI tests.
Inject the Meticulous snippet onto production or staging and dev environments. This snippet records user sessions by collecting clickstream and network data. When you post a pull request, Meticulous selects a subset of relevant recorded sessions and simulates them against your application’s front-end. Meticulous then takes screenshots at key points and detects any visual differences. In a few seconds, it posts those differences in a comment for you to inspect. Meticulous automatically updates the baseline images after you merge your PR. This eliminates the setup and maintenance burden of UI testing.
Meticulous isolates the front-end code by mocking out all network calls, using the previously recorded network responses. This means Meticulous never causes side effects, and you don’t need a staging environment.
Learn more here.