Introduction
Jest works fine. If your test suite is fast and your config is simple, you don't need Vitest. But if you're using Vite for your build already, running tests through a separate tool with its own config and its own module resolution is just... unnecessary friction.
Vitest shares Vite's config, plugins, and transformation pipeline. TypeScript, JSX, CSS modules -- all handled. No duplicate setup. And if you are not on Vite, the Jest-compatible API with native ESM support and near-instant watch mode is still worth it on its own.
Installation is one command:
npm install -D vitest

Then add a test script to your package.json:
{
  "scripts": {
    "test": "vitest",
    "test:run": "vitest run",
    "test:coverage": "vitest run --coverage"
  }
}

No babel.config.js. No jest.config.js with a wall of transform overrides. No fighting with ts-jest. If you have spent an afternoon trying to get Jest to transform an ESM-only node_modules package, you already know why people switch.
Why Vitest Over Jest
Vitest uses esbuild for TypeScript and JSX transforms. Two to ten times faster than Babel. That speed difference changes whether developers actually run tests. Fast tests get run. Slow tests get skipped. A test nobody runs catches nothing.
ESM just works. No --experimental-vm-modules flag, no transformIgnorePatterns hacks. You import things the way your application code imports them because Vite handles module resolution the same way in both contexts.
And the config sharing. If your project has a vite.config.ts, Vitest reads from it -- path aliases, plugins, resolve settings. No maintaining two configs that drift apart.
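As a sketch of what that looks like in practice -- assuming a project that already has a vite.config.ts -- you add a test block to the existing config rather than maintaining a second file. The triple-slash reference pulls in the types for the test option:

```typescript
/// <reference types="vitest/config" />
import { defineConfig } from 'vite';

export default defineConfig({
  // ...your existing plugins, aliases, and resolve settings stay here...
  test: {
    environment: 'node', // or 'jsdom' for code that touches the DOM
  },
});
```

The specific options shown are illustrative; the point is that tests and builds read the same plugins and aliases from one file.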
Watch mode is on by default. Only re-runs the tests affected by your file changes. The feedback loop is tight enough that you write the test first, because results come back in under a second. The API is Jest-compatible: describe, it, expect, vi.fn(), vi.mock(). Replace jest with vi in your imports and most existing tests pass unchanged. No ts-jest, no @types/jest.
Writing Your First Tests
A small utility module and its tests:
export function add(a: number, b: number): number {
  return a + b;
}

export function divide(a: number, b: number): number {
  if (b === 0) {
    throw new Error('Cannot divide by zero');
  }
  return a / b;
}

export function clamp(value: number, min: number, max: number): number {
  return Math.min(Math.max(value, min), max);
}

export function average(numbers: number[]): number {
  if (numbers.length === 0) {
    throw new Error('Cannot calculate average of empty array');
  }
  const sum = numbers.reduce((acc, n) => acc + n, 0);
  return sum / numbers.length;
}

Co-locate tests next to their source. math.test.ts right beside math.ts:
import { describe, it, expect } from 'vitest';
import { add, divide, clamp, average } from './math';

describe('add', () => {
  it('adds two positive numbers', () => {
    expect(add(2, 3)).toBe(5);
  });

  it('handles negative numbers', () => {
    expect(add(-1, -2)).toBe(-3);
  });

  it('returns the other number when adding zero', () => {
    expect(add(5, 0)).toBe(5);
  });
});

describe('divide', () => {
  it('divides two numbers correctly', () => {
    expect(divide(10, 2)).toBe(5);
  });

  it('throws when dividing by zero', () => {
    expect(() => divide(10, 0)).toThrow('Cannot divide by zero');
  });
});

describe('clamp', () => {
  it('returns the value when within range', () => {
    expect(clamp(5, 0, 10)).toBe(5);
  });

  it('clamps to minimum when value is too low', () => {
    expect(clamp(-5, 0, 10)).toBe(0);
  });

  it('clamps to maximum when value is too high', () => {
    expect(clamp(15, 0, 10)).toBe(10);
  });
});

describe('average', () => {
  it('calculates the average of an array', () => {
    expect(average([2, 4, 6])).toBe(4);
  });

  it('handles a single element', () => {
    expect(average([7])).toBe(7);
  });

  it('throws for empty array', () => {
    expect(() => average([])).toThrow('Cannot calculate average of empty array');
  });
});

Run npm test. Fraction of a second.
The test names read like sentences -- "divide throws when dividing by zero." Good. When something fails six months from now, the name should tell you what broke without reading the test body. Zero divisors, boundary values in clamp, empty arrays in average -- that is where unit tests earn their keep. The happy path usually works because you already tried it in the browser.
Assertions and Matchers
toBe uses Object.is. Strict equality. Fine for primitives. For objects and arrays, you need toEqual -- deep comparison. This distinction trips people up constantly:
import { describe, it, expect } from 'vitest';

describe('Matcher examples', () => {
  // Equality
  it('compares primitives with toBe', () => {
    expect(1 + 1).toBe(2);
    expect('hello').toBe('hello');
    expect(true).toBe(true);
  });

  // Deep equality for objects
  it('compares objects deeply with toEqual', () => {
    const user = { name: 'Marcus', role: 'developer' };
    expect(user).toEqual({ name: 'Marcus', role: 'developer' });
  });

  // Partial matching with toMatchObject
  it('matches a subset of properties', () => {
    const response = {
      status: 200,
      data: { id: 1, name: 'Vitest', version: '2.0' },
      headers: { 'content-type': 'application/json' },
    };
    expect(response).toMatchObject({
      status: 200,
      data: { name: 'Vitest' },
    });
  });

  // Truthiness
  it('checks truthiness and falsiness', () => {
    expect('non-empty').toBeTruthy();
    expect('').toBeFalsy();
    expect(null).toBeNull();
    expect(undefined).toBeUndefined();
    expect('value').toBeDefined();
  });

  // Numbers
  it('handles floating point with toBeCloseTo', () => {
    expect(0.1 + 0.2).toBeCloseTo(0.3);
    expect(10).toBeGreaterThan(5);
    expect(3).toBeLessThanOrEqual(3);
  });

  // Strings
  it('matches strings with toMatch', () => {
    expect('hello world').toContain('world');
    expect('user@example.com').toMatch(/^[\w.]+@[\w.]+\.\w+$/);
  });

  // Arrays
  it('checks array contents', () => {
    const fruits = ['apple', 'banana', 'cherry'];
    expect(fruits).toContain('banana');
    expect(fruits).toHaveLength(3);
    expect(fruits).not.toContain('grape');
  });
});

toMatchObject lets you assert on a subset of properties. Tests do not break every time someone adds a field to an API response. Check what matters, ignore the rest.
toBeCloseTo exists because 0.1 + 0.2 is 0.30000000000000004. IEEE 754. If you have watched expect(0.1 + 0.2).toBe(0.3) fail and stared at your screen for thirty seconds before googling it -- that is what this matcher fixes. Every matcher can be negated with .not.
Mocking: Functions, Modules, and Timers
This is where most people waste time. Not because mocking is conceptually hard, but because the mental model is wrong. They think mocking is about faking things. It is not. It is about drawing a boundary around the code you are testing and controlling everything on the other side of that boundary.
Your function hits an API? The test should not make a real HTTP request. Reads from a database? No running database required to pass. The mock replaces the dependency with something you control, so the only thing that can make the test fail is the code you actually wrote.
Function Mocks with vi.fn()
vi.fn() creates a mock function that records every call -- arguments, return values, call count. The key concept: you set up expectations about how your code interacts with the mock, not about the mock itself. Here is a notification service that depends on an email sender:
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { NotificationService } from './notification-service';

describe('NotificationService', () => {
  let mockSendEmail: ReturnType<typeof vi.fn>;
  let service: NotificationService;

  beforeEach(() => {
    // Create a fresh mock for each test
    mockSendEmail = vi.fn().mockResolvedValue({ success: true });
    service = new NotificationService({ sendEmail: mockSendEmail });
  });

  it('sends a welcome email to new users', async () => {
    await service.welcomeUser('marcus@example.com', 'Marcus');

    expect(mockSendEmail).toHaveBeenCalledOnce();
    expect(mockSendEmail).toHaveBeenCalledWith({
      to: 'marcus@example.com',
      subject: 'Welcome, Marcus!',
      template: 'welcome',
    });
  });

  it('retries on failure', async () => {
    mockSendEmail
      .mockRejectedValueOnce(new Error('Network error'))
      .mockResolvedValueOnce({ success: true });

    await service.welcomeUser('marcus@example.com', 'Marcus');

    expect(mockSendEmail).toHaveBeenCalledTimes(2);
  });

  it('throws after max retries exhausted', async () => {
    mockSendEmail.mockRejectedValue(new Error('Network error'));

    await expect(
      service.welcomeUser('marcus@example.com', 'Marcus')
    ).rejects.toThrow('Network error');
    expect(mockSendEmail).toHaveBeenCalledTimes(3);
  });
});

Test behavior, not implementation. We do not care how NotificationService sends the email internally. We care that it calls the sender with the right arguments, retries on failure, and gives up after max attempts. If someone refactors the internals, these tests still pass. Teams that assert on internal method call order, private variable values, implementation minutiae -- those tests break on every refactor and provide zero confidence. Worse than useless, because they give the illusion of coverage while testing nothing meaningful.
The mockResolvedValue / mockRejectedValueOnce chaining is where vi.fn() really shines. You can script an exact sequence: first call fails, second succeeds. Or: first three calls return different values. This is how you test retry logic, exponential backoff, circuit breakers -- any pattern where the behavior depends on the history of previous calls. And beforeEach gives you a fresh mock for each test so nothing leaks between them.
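For orientation, the service under test could look something like the sketch below. The article never shows notification-service.ts, so this is an assumption -- but a three-attempt retry loop is consistent with what the tests above assert:

```typescript
// Hypothetical notification-service.ts -- a sketch consistent with the
// tests above, not the article's actual implementation.
type SendEmail = (msg: {
  to: string;
  subject: string;
  template: string;
}) => Promise<{ success: boolean }>;

export class NotificationService {
  constructor(
    private deps: { sendEmail: SendEmail },
    private maxAttempts = 3 // assumed retry limit; the tests expect 3 calls
  ) {}

  async welcomeUser(to: string, name: string): Promise<void> {
    let lastError: unknown;
    for (let attempt = 1; attempt <= this.maxAttempts; attempt++) {
      try {
        await this.deps.sendEmail({
          to,
          subject: `Welcome, ${name}!`,
          template: 'welcome',
        });
        return; // success -- stop retrying
      } catch (err) {
        lastError = err; // remember the failure, try again
      }
    }
    throw lastError; // all attempts exhausted
  }
}
```

The dependency arrives through the constructor, which is exactly what makes the vi.fn() injection in the tests possible.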
Module Mocks with vi.mock()
Here is the thing that confuses people about vi.mock(): it gets hoisted. You write it in the middle of the file, but it runs before any imports. This means when your code-under-test does import { logger } from './logger', it is already receiving the mocked version. The factory function you pass to vi.mock() defines the shape of the replacement module. Every export the original had, your factory should return a mock version of. Think of it as intercepting the module resolution at import time and swapping in your fake.
import { describe, it, expect, vi } from 'vitest';
import { createUser } from './user-service';
import { logger } from './logger';

// Mock the entire logger module
vi.mock('./logger', () => ({
  logger: {
    info: vi.fn(),
    error: vi.fn(),
    warn: vi.fn(),
  },
}));

describe('createUser', () => {
  it('logs a success message when user is created', async () => {
    const user = await createUser({ name: 'Marcus', email: 'marcus@example.com' });

    expect(user.name).toBe('Marcus');
    expect(logger.info).toHaveBeenCalledWith(
      'User created',
      expect.objectContaining({ email: 'marcus@example.com' })
    );
  });
});

Same behavior as Jest's jest.mock(). But the gotcha that bites people: if your factory function tries to reference a variable defined above it in the file, that variable does not exist yet when the factory runs. The hoisting means the factory executes before your const declarations. If you need to set up complex mocks that depend on other values, use vi.hoisted() to declare those values in the hoisted scope.
Timer Mocks
vi.useFakeTimers(). Now you control time. Advance with vi.advanceTimersByTime(), flush everything with vi.runAllTimers(). Debounced functions, polling, retry delays -- all testable without actually waiting. Call vi.useRealTimers() in your afterEach or you will get bizarre failures in unrelated tests that depend on real timers. This has bitten me more than once.
Testing Async Code
Most real code is async. API calls, database queries, file operations. Mark your test function async, use await, and Vitest waits for the promise to settle before marking the test complete.
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { ApiClient } from './api-client';

// Mock the global fetch
const mockFetch = vi.fn();
vi.stubGlobal('fetch', mockFetch);

describe('ApiClient', () => {
  let client: ApiClient;

  beforeEach(() => {
    client = new ApiClient('https://api.example.com');
    mockFetch.mockReset();
  });

  it('fetches and transforms user data', async () => {
    mockFetch.mockResolvedValue({
      ok: true,
      json: async () => ({
        id: 1,
        first_name: 'Marcus',
        last_name: 'Rodriguez',
        created_at: '2026-01-15T10:00:00Z',
      }),
    });

    const user = await client.getUser(1);

    expect(mockFetch).toHaveBeenCalledWith(
      'https://api.example.com/users/1',
      expect.objectContaining({ method: 'GET' })
    );
    expect(user).toEqual({
      id: 1,
      fullName: 'Marcus Rodriguez',
      createdAt: expect.any(Date),
    });
  });

  it('throws ApiError on non-OK response', async () => {
    mockFetch.mockResolvedValue({
      ok: false,
      status: 404,
      statusText: 'Not Found',
    });

    await expect(client.getUser(999)).rejects.toThrow('Not Found');
  });

  it('throws on network failure', async () => {
    mockFetch.mockRejectedValue(new TypeError('Failed to fetch'));

    await expect(client.getUser(1)).rejects.toThrow('Failed to fetch');
  });
});

vi.stubGlobal mocks the global fetch. Cleaner than assigning to globalThis.fetch directly -- Vitest auto-restores it when the test file finishes.
The .rejects trap. If you forget to await the assertion, the test can pass even when the promise resolves instead of rejecting -- the assertion settles after the test has already finished, so its failure is swallowed. Silent false positive. Always await async assertions.
expect.any(Date) is an asymmetric matcher. "I expect a Date instance, any Date will do." Useful when your code parses an ISO string into a Date and the exact millisecond is not worth pinning down.
Snapshot Testing
Use sparingly. Snapshots capture output once, flag any change on subsequent runs. Vitest records the expected output so you do not have to write it by hand. Sounds great. Frequently misused.
They work for output that is deterministic but tedious to assert manually: serialized objects, generated HTML, config files. Email templates are a reasonable case:
import { describe, it, expect } from 'vitest';
import { generateWelcomeEmail } from './email-templates';

describe('generateWelcomeEmail', () => {
  it('produces the correct HTML for a new user', () => {
    const html = generateWelcomeEmail({
      name: 'Marcus',
      plan: 'pro',
      trialDays: 14,
    });

    // File-based snapshot (saved to __snapshots__ directory)
    expect(html).toMatchSnapshot();
  });

  it('shows trial info only for trial plans', () => {
    const html = generateWelcomeEmail({
      name: 'Marcus',
      plan: 'free',
      trialDays: 0,
    });

    // Inline snapshot (stored right in the test file)
    expect(html).toMatchInlineSnapshot();
  });
});

toMatchSnapshot() saves to a __snapshots__ directory. toMatchInlineSnapshot() writes it directly into the test file -- Vitest fills it in on the first run. Inline for small outputs, file-based for large ones.
But here is the problem with snapshots in practice. Teams snapshot entire React component trees. Every CSS tweak triggers dozens of snapshot updates. Nobody reads the diff before pressing u. At that point the tests are checking nothing. They are ceremony. If you are pressing u without reading the diff, delete the test. It is not protecting you from anything. Snapshots belong on serialized data output -- email templates, generated configs, API response transformations. For everything else, write targeted assertions that say what you actually care about.
Update all snapshots with vitest run --update, or press u in watch mode. Review snapshot diffs in pull requests the same way you review code changes.
Code Coverage and CI Integration
100% coverage does not mean zero bugs. But low coverage reliably signals gaps. And coverage trending downward means the team is shipping without testing.
Two providers: v8 (faster, occasionally less accurate) and istanbul (industry standard, more precise). v8 is fine for most projects:
npm install -D @vitest/coverage-v8

Then configure coverage in your vite.config.ts or vitest.config.ts:
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',
      reporter: ['text', 'json-summary', 'html'],
      include: ['src/**/*.{ts,tsx}'],
      exclude: [
        'src/**/*.test.{ts,tsx}',
        'src/**/*.d.ts',
        'src/types/**',
      ],
      thresholds: {
        lines: 80,
        branches: 75,
        functions: 80,
        statements: 80,
      },
    },
  },
});

The thresholds block is the interesting part. If coverage drops below those percentages, the test run fails. Safety net against regression.
A minimal GitHub Actions workflow runs the same coverage command on every push:

name: Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: npm
      - run: npm ci
      - run: npm run test:coverage
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: coverage-report
          path: coverage/

npm ci installs from the lockfile for reproducible builds. Coverage below thresholds means the job fails and the PR gets a red check. The report uploads as a build artifact.
Start with modest thresholds. 60%. Maybe 70%. Ratchet up over time. Setting them too high on day one leads to bad tests written just to hit a number, and a well-tested module at 80% coverage is more valuable than a meaninglessly tested one at 100%. The html reporter generates an interactive view where you click into any file and see exactly which lines are covered -- fastest way to spot gaps after writing tests.
What I Would Do Differently
Build a shared test-utils/mocks.ts early with standard mock factories for your most common dependencies. Gets teams writing tests faster than any documentation ever will.
Run your tests in watch mode during development and in CI on every push. Everything else is optional.