As covered in Visual Regressions and How to Prevent Them, visual regression tests are a fast, automated way to preserve your application's health.
While SaaS solutions provide advanced visual regression and project management features, this article will focus on how to implement simple yet effective visual regression tests using Jest and Cypress.
We will set up visual regression tests on an example repository, with the following component:
import React from "react";
import styled from "styled-components";
// The fallback component ships with the example repository; its path is assumed here.
import AvatarFallback from "./AvatarFallback";

const Avatar = ({ src, className }) => {
  const [fallback, setFallback] = React.useState(false);
  const onFallback = React.useCallback(() => setFallback(true), []);

  return (
    <div className={className}>
      {fallback ? <AvatarFallback /> : <img src={src} onError={onFallback} />}
    </div>
  );
};

export default styled(Avatar)`
  margin: 10px;

  & img {
    border-radius: 50%;
  }
`;
Our <Avatar /> component displays the given image with some styling, and falls back to a custom component if the image fails to load.
We will set up both snapshot and screen testing for this repository with integration at the Pull Request level.
Set up Snapshot Tests
Most snapshot test setups rely on Jest (a JavaScript testing framework), which provides a set of utilities to perform snapshot comparison.
Given a test scenario mounting a component, Jest will take an initial snapshot of the rendered DOM tree. Future runs of the same tests will trigger snapshot comparisons that fail if a difference is found.
Installation and Configuration
We first need to install the required dependencies:
yarn add jest @testing-library/jest-dom @testing-library/react @testing-library/user-event react-test-renderer
Then, we need to add some entries in our package.json:
/* … */
"scripts": {
  "test-snapshot": "react-scripts test",
  /* … */
},
"jest": {
  "testMatch": [
    "**/?(*.)+(snapshot).(spec|test).[jt]s?(x)"
  ]
},
/* … */
The jest entry contains the Jest configuration.
Since the example project is built with Create React App, we extend the existing Jest configuration by specifying the test file matching pattern with testMatch.
In our setup, we can run tests by calling react-scripts test. Without CRA, simply running the jest command as follows will run the tests:
/* … */
"scripts": {
  "test-snapshot": "jest",
  /* … */
},
/* … */
Writing and Running our Tests
We can now write our <Avatar /> snapshot test in a new src/Avatar.snapshot.spec.jsx file (named so that it matches the testMatch pattern configured above):
import React from "react";
import renderer from "react-test-renderer";
import Avatar from "./Avatar";

describe("<Avatar />", () => {
  describe("with a valid image", () => {
    it("renders correctly", () => {
      const tree = renderer
        .create(
          <Avatar src="https://i.picsum.photos/id/623/200/200.jpg?hmac=xquTjHIYmAPV3XEGlIUaV_KWyEofkbortxrK79jJhWA" />
        )
        .toJSON();
      expect(tree).toMatchSnapshot();
    });
  });

  describe("with an invalid image", () => {
    it("renders correctly", () => {
      const tree = renderer.create(<Avatar src={"<not_and_url>"} />).toJSON();
      expect(tree).toMatchSnapshot();
    });
  });
});
The create() function from react-test-renderer renders the given component, and toJSON() serializes the generated DOM tree as JSON.
The serialized result is then passed to Jest's toMatchSnapshot() expectation matcher, which creates a snapshot file on the first run and raises a failure if a diff is detected against an existing snapshot.
We can now run our test with the yarn test-snapshot command:
A src/__snapshots__/Avatar.snapshot.spec.jsx.snap file is created.
This snapshot file should be committed since it will be used as the reference snapshot for subsequent test runs (locally and on CI).
Set up the CI Workflow
Our test setup is working locally. However, to ensure that no visual regression is introduced in our component, we must enforce those tests when new Pull Requests are created on GitHub.
Let's create a new GitHub Action to enforce such a policy by creating a workflow file. GitHub Actions workflow files are a broad subject to cover; however, let's have an overview of the main parts of the .github/workflows/snapshot-test.yml file (a sketch follows the overview):
- We define a strategy.matrix to run our tests on multiple versions of Node.js (12.x and 17.x)
- We configure caches for yarn dependencies and Jest
- Finally, we run the tests
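To make these parts concrete, here is a minimal sketch of what such a workflow file could look like. The action versions and cache configuration below are illustrative assumptions rather than the repository's exact file (the real file also caches Jest's cache directory):

name: Snapshot Tests
on: pull_request
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [12.x, 17.x]
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: ${{ matrix.node-version }}
          cache: "yarn" # caches yarn dependencies between runs
      - run: yarn install --frozen-lockfile
      # GitHub Actions sets CI=true, so react-scripts test runs once instead of in watch mode
      - run: yarn test-snapshot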
Testing the Workflow
The following Pull Request showcases how snapshot tests are integrated into the CI workflow.
This Pull Request introduces a breaking change by inverting the condition on the <Avatar> fallback mechanism.
Immediately, the snapshot tests catch the visual regression by reporting the DOM differences introduced by this new change:
Note: If the snapshot changes were intentional, simply running the test locally with the --updateSnapshot argument will update the snapshots with the new changes: yarn test-snapshot --updateSnapshot.
Once the updated snapshots are committed, the CI tests will pass.
Snapshot Test Limitations
Looking closely at our snapshot tests reveals two main limitations.
First, we can see in our generated snapshot file (on master) that the "with an invalid image renders correctly" test case is not generating the proper snapshot:
exports[`<Avatar /> with an invalid image renders correctly 1`] = `
<div
  className="sc-bdvvtL oSqWo"
>
  <img
    onError={[Function]}
    src="<not_and_url>"
  />
</div>
`;
This first limitation is generally linked to the difficulty of testing image-loading scenarios in Jest tests.
To properly test a "failing to load image" scenario with Jest, we would need to mock many parts of the code to trigger such a scenario. On the other hand, screen tests, which run headless browser instances, can exercise such features directly.
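As an illustration, here is a minimal sketch of one such workaround, using @testing-library/react (installed earlier) to fire the error event on the <img> manually; the test name and import path are assumptions:

// jsdom never actually loads images, so we simulate the failure
// by firing the error event on the rendered <img> ourselves.
import React from "react";
import { render, fireEvent } from "@testing-library/react";
import Avatar from "./Avatar";

it("falls back when the image fails to load", () => {
  const { container } = render(<Avatar src="https://example.com/missing.jpg" />);
  // Trigger the onError handler as a real browser would on a broken URL.
  fireEvent.error(container.querySelector("img"));
  // The <img> has been replaced by the fallback component.
  expect(container.querySelector("img")).toBeNull();
});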
Finally, any breaking change at the styling level raises cryptic snapshot changes, as shown in this Pull Request:
The resulting snapshot test error is not very helpful for understanding which change introduced the issue, or what the final visual regression looks like for end-users.
While some Jest plugins such as jest-styled-components help deal with this issue, screen testing is more appropriate for assessing visual regressions on UI-kit components.
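For completeness, a sketch of the jest-styled-components approach: importing the package registers a global snapshot serializer, so existing tests need no other change and snapshots start embedding the actual CSS rules, which makes styling diffs readable:

// In the test file (or in a global Jest setup file):
import "jest-styled-components";

// Snapshots now include the generated CSS, for example:
//   .c0 { margin: 10px; }
//   .c0 img { border-radius: 50%; }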
Set up Screen Tests
As seen earlier, we need to confirm that our <Avatar> UI-kit component has no visual regression of any kind, from styling issues to advanced DOM interactions (an image failing to load).
Because they rely on actual screenshots of the components' rendering, screen tests are the best way to avoid visual regressions of our component.
Screen tests share the same core logic as snapshot tests, but compare screenshots instead of DOM trees.
Screen tests require Cypress (a JavaScript end-to-end testing framework) to start a headless browser instance and take screenshots of the rendered components.
Installation and Configuration
The following libraries are required to perform screenshot tests on components:
- Cypress Component Testing: a feature of Cypress that allows running tests of components without having to set up a whole application (as usually required for E2E testing)
- cypress-image-snapshot: a Cypress plugin that brings the screenshot comparison capabilities
Let's install all the required dependencies:
yarn add @cypress/react @cypress/webpack-dev-server cypress cypress-image-snapshot
Then, we create a cypress.json file to specify the test file pattern and location, and to disable video recording (Cypress records videos by default):
{
  "component": {
    "componentFolder": "src",
    "testFiles": "**/*.visual.spec.{js,jsx,ts,tsx}"
  },
  "video": false
}
Again, we need to update our package.json to add a new script entry:
/* … */
"scripts": {
  "test-visual": "cypress run-ct --reporter cypress-image-snapshot/reporter",
  /* … */
},
/* … */
Finally, we need to properly configure the cypress-image-snapshot plugin by creating two files: a Cypress plugins file and a support file.
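The repository's exact files are not reproduced here, but a minimal sketch could look as follows, assuming a Create React App project (the react-scripts dev-server adapter is the one shipped with @cypress/react):

// cypress/plugins/index.js
const injectDevServer = require("@cypress/react/plugins/react-scripts");
const {
  addMatchImageSnapshotPlugin,
} = require("cypress-image-snapshot/plugin");

module.exports = (on, config) => {
  // Start a dev server so components can be mounted in isolation
  injectDevServer(on, config);
  // Register the screenshot-comparison tasks
  addMatchImageSnapshotPlugin(on, config);
  return config;
};

// cypress/support/index.js
import { addMatchImageSnapshotCommand } from "cypress-image-snapshot/command";

// Adds the cy.matchImageSnapshot() command used in the tests below
addMatchImageSnapshotCommand();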
Writing and Running our Tests
We can now define our <Avatar /> screen tests in a new src/Avatar.visual.spec.jsx file (matching the testFiles pattern configured above):
/* eslint-disable no-undef */
import React from "react";
import { mount } from "@cypress/react";
import Avatar from "./Avatar";

describe("<Avatar />", () => {
  describe("with a valid image", () => {
    it("renders the proper UI", () => {
      mount(
        <Avatar src="https://i.picsum.photos/id/623/200/200.jpg?hmac=xquTjHIYmAPV3XEGlIUaV_KWyEofkbortxrK79jJhWA" />
      );
      cy.matchImageSnapshot();
    });
  });

  describe("with an invalid image", () => {
    it("renders the proper UI", () => {
      mount(<Avatar src={"<not_and_url>"} />);
      cy.matchImageSnapshot();
    });
  });
});
The above tests are similar to the snapshot tests seen previously.
The only difference is that we mount the component and call an action on cy (the Cypress runner): cy.matchImageSnapshot().
Similar to Jest's toMatchSnapshot() matcher, cy.matchImageSnapshot() takes a screenshot of the rendered component and raises an error if any difference is found with the existing one.
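Note that cy.matchImageSnapshot() also accepts options to tune comparison strictness, either per call or globally via addMatchImageSnapshotCommand(); the values below are illustrative assumptions:

cy.matchImageSnapshot({
  failureThreshold: 0.03, // fail only if more than 3% of the pixels differ
  failureThresholdType: "percent",
  customDiffConfig: { threshold: 0.1 }, // per-pixel color sensitivity
});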
We can now run our screen test with the yarn test-visual command:
Our tests generate one screenshot file per test case (one per cy.matchImageSnapshot() call) that must be committed.
Set up the CI Workflow
Again, we need to integrate those tests in the development workflow by creating the corresponding GitHub Actions.
The new .github/workflows/screen-tests.yml file contains the following parts:
- The container parameter makes the job run in a Docker image provided by Cypress that includes all required operating-system dependencies and some pre-installed browsers
- The "Store screenshots" action ensures that the generated screenshots will be downloadable
A minimal sketch of this workflow follows.
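As before, the image tag, action versions, and artifact path below are illustrative assumptions rather than the repository's exact file:

name: Screen Tests
on: pull_request
jobs:
  test:
    runs-on: ubuntu-latest
    # Cypress-maintained image with browsers and OS dependencies pre-installed
    container: cypress/browsers:node16.5.0-chrome94-ff93
    steps:
      - uses: actions/checkout@v2
      - run: yarn install --frozen-lockfile
      - run: yarn test-visual
      - name: Store screenshots
        if: failure()
        uses: actions/upload-artifact@v2
        with:
          name: screenshots
          path: cypress/snapshots # where cypress-image-snapshot writes images (assumption)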
Testing the Workflow
By creating a PR that, again, introduces a visual regression by inverting the fallback logic, we can see the following failing screen test results:
Then, by following the test suite link, we can download the generated screenshots in the Artifacts section:
The downloaded .zip file contains all original screenshots along with the diff outputs:
We can see that our "with a valid image renders the proper UI" scenario is no longer behaving as expected.
Note: As with snapshot tests, running yarn test-visual --env updateSnapshots=true locally will update the screenshots with the new changes. Once the updated screenshots are committed, the CI tests will pass.
Conclusion
Snapshot tests are the fastest way to catch any regression related to:
- Presence of a given element on a page (example: the sorting button of an actionable list)
- Displayed text in a given context (i18n regression testing)
- Rendering of components after simple interactions (example: filtering a list)
However, snapshot tests are not the best match for catching visual regressions related to pure CSS issues such as overflows, or advanced interactions like image loading.
Although slower to run, screen tests, by performing diffs of screenshots, help catch advanced visual regressions that cannot be spotted at the DOM level.
Finally, keep in mind that it might be worth looking at SaaS solutions if your project relies on a growing number of visual regression tests (more than the critical path) or if non-technical people get involved in the project.
Meticulous
Meticulous is a tool for software engineers to catch visual regressions in web applications without writing or maintaining UI tests.
Inject the Meticulous snippet onto production, staging, or dev environments. This snippet records user sessions by collecting clickstream and network data. When you post a pull request, Meticulous selects a subset of recorded sessions which are relevant and simulates these against the frontend of your application. Meticulous takes screenshots at key points and detects any visual differences. It posts those diffs in a comment for you to inspect in a few seconds. Meticulous automatically updates the baseline images after you merge your PR. This eliminates the setup and maintenance burden of UI testing.
Meticulous isolates the frontend code by mocking out all network calls, using the previously recorded network responses. This means Meticulous never causes side effects and you don’t need a staging environment.
Learn more here.