# Testing with Mock Data in CI/CD Pipelines: Complete Guide
Slow, flaky, expensive tests are killing your CI/CD pipeline. Real API calls in tests cause:
- ⏱️ 5-10x slower test runs (waiting for network requests)
- 💸 Higher costs (API usage fees, longer CI minutes)
- 🎲 Flaky tests (network issues, rate limits, data pollution)
- 🐛 Harder debugging (inconsistent test data)
The solution? Mock data in your CI/CD pipeline.
## Why Mock Data in CI/CD?

### Problem: Real APIs in Tests

```yaml
# ❌ Bad: tests call real APIs
- name: Run E2E tests
  run: npm test
  env:
    API_URL: https://api.production.com # Real API!
    API_KEY: ${{ secrets.API_KEY }}
```
Issues:
- Tests are slow (network latency)
- Tests fail when API is down
- Rate limits block CI runs
- Test data pollutes production
- Costs money (API calls + CI minutes)
### Solution: Mocked APIs

```yaml
# ✅ Good: tests use mocks
- name: Run E2E tests
  run: npm test
  env:
    API_MODE: faker # Unlimited free mocks!
```
Benefits:
- ⚡ 10x faster tests
- 💰 Zero API costs
- 🎯 Deterministic results
- 🔒 No data pollution
- ✅ Works offline
## Setup Guide by CI Platform

### GitHub Actions

```yaml
# .github/workflows/test.yml
name: Test Suite

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run unit tests
        run: npm run test:unit

      - name: Run E2E tests with mocks
        run: npm run test:e2e
        env:
          API_MODE: faker # Use free unlimited mocks
          NODE_ENV: test

      - name: Upload test results
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: test-results
          path: test-results/
```
### GitLab CI

```yaml
# .gitlab-ci.yml
stages:
  - test

unit-tests:
  stage: test
  image: node:18
  cache:
    paths:
      - node_modules/
  script:
    - npm ci
    - npm run test:unit
  coverage: '/Statements\s*:\s*(\d+\.\d+)%/'

e2e-tests:
  stage: test
  image: node:18
  variables:
    API_MODE: faker # Mock APIs
    NODE_ENV: test
  cache:
    paths:
      - node_modules/
  script:
    - npm ci
    - npm run test:e2e
  artifacts:
    when: always
    paths:
      - test-results/
    reports:
      junit: test-results/junit.xml
```
### Jenkins

```groovy
// Jenkinsfile
pipeline {
    agent any

    environment {
        API_MODE = 'faker' // Use mocks
        NODE_ENV = 'test'
    }

    stages {
        stage('Install') {
            steps {
                sh 'npm ci'
            }
        }
        stage('Unit Tests') {
            steps {
                sh 'npm run test:unit'
            }
        }
        stage('E2E Tests') {
            steps {
                sh 'npm run test:e2e'
            }
        }
    }

    post {
        always {
            junit 'test-results/*.xml'
            publishHTML([
                reportDir: 'coverage',
                reportFiles: 'index.html',
                reportName: 'Coverage Report'
            ])
        }
    }
}
```
### CircleCI

```yaml
# .circleci/config.yml
version: 2.1

jobs:
  test:
    docker:
      - image: cimg/node:18.0
    environment:
      API_MODE: faker
      NODE_ENV: test
    steps:
      - checkout
      - restore_cache:
          keys:
            - deps-{{ checksum "package-lock.json" }}
      - run: npm ci
      - save_cache:
          key: deps-{{ checksum "package-lock.json" }}
          paths:
            - node_modules
      - run: npm run test:unit
      - run: npm run test:e2e
      - store_test_results:
          path: test-results
      - store_artifacts:
          path: coverage

workflows:
  test-workflow:
    jobs:
      - test
```
## Complete Test Setup

### Project Structure

```
my-app/
├── src/
│   ├── api/           # API definitions
│   ├── components/
│   └── ...
├── tests/
│   ├── unit/          # Unit tests
│   ├── integration/   # Integration tests
│   └── e2e/           # End-to-end tests
├── package.json
└── .github/workflows/test.yml
```
### API Setup (Symulate)

```typescript
// src/api/users.ts
import { defineEndpoint, m, type Infer } from '@symulate/sdk'

const UserSchema = m.object({
  id: m.uuid(),
  name: m.person.fullName(),
  email: m.email(),
  role: m.string() // Will generate 'admin', 'user', or 'guest' based on context
})

// Infer the TypeScript type from the schema
type User = Infer<typeof UserSchema>

export const getUsers = defineEndpoint<User[]>({
  path: '/api/users',
  method: 'GET',
  schema: UserSchema,
  mock: {
    count: 10,
    instruction: 'Generate users with realistic roles (admin, user, or guest)'
  }
  // Mode is controlled by configureSymulate, not per-endpoint.
  // In CI: set generateMode: 'faker' (free, unlimited)
  // In dev: set generateMode: 'ai' for realistic data
})
```
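Mode switching lives in one central config rather than on each endpoint. A minimal sketch of that wiring, based on the `configureSymulate` and `generateMode` names mentioned above (check the Symulate docs for the exact option names):

```typescript
// src/symulate.config.ts (hypothetical wiring; verify option names against the SDK docs)
import { configureSymulate } from '@symulate/sdk'

configureSymulate({
  // CI sets API_MODE=faker for free, unlimited, deterministic mocks.
  // Local dev can use 'ai' for more realistic data.
  generateMode: process.env.API_MODE === 'faker' ? 'faker' : 'ai',
})
```

Import this once at app startup so every endpoint picks the mode from the environment.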
### Test Files

**Unit Test (Jest):**

```tsx
// tests/unit/UserList.test.tsx
import { render, screen, waitFor } from '@testing-library/react'
import { UserList } from '@/components/UserList'

// Mock the API module (in CI the mock serves faker data)
jest.mock('@/api/users')

describe('UserList', () => {
  it('renders users', async () => {
    render(<UserList />)

    // Wait for data to load
    await waitFor(() => {
      expect(screen.getByText(/users/i)).toBeInTheDocument()
    })

    // Check that users are displayed
    const users = screen.getAllByRole('listitem')
    expect(users.length).toBeGreaterThan(0)
  })

  it('handles loading state', () => {
    render(<UserList />)
    expect(screen.getByText(/loading/i)).toBeInTheDocument()
  })
})
```
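Note that `jest.mock('@/api/users')` with no factory creates an automock whose functions return `undefined`, so in practice you pair it with a manual mock or a module factory. A minimal sketch (all names here are hypothetical) of a deterministic mock matching the `User` shape above:

```typescript
// Deterministic mock users: no network, stable across runs. Names are illustrative.
interface User {
  id: string
  name: string
  email: string
  role: 'admin' | 'user' | 'guest'
}

const ROLES = ['admin', 'user', 'guest'] as const

export function getMockedUsers(count = 10): User[] {
  return Array.from({ length: count }, (_, i) => ({
    id: `uuid-${i + 1}`,
    name: `Test User ${i + 1}`,
    email: `user${i + 1}@example.com`,
    role: ROLES[i % ROLES.length], // cycles admin, user, guest
  }))
}

// Wire it up in the test file with a module factory:
// jest.mock('@/api/users', () => ({ getUsers: async () => getMockedUsers() }))
```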
**E2E Test (Playwright):**

```typescript
// tests/e2e/users.spec.ts
import { test, expect } from '@playwright/test'

test.describe('User Management', () => {
  test('displays user list', async ({ page }) => {
    await page.goto('/users')

    // Wait for users to load (from the mocked API)
    await page.waitForSelector('[data-testid="user-item"]')

    // Check that users are displayed
    const users = await page.$$('[data-testid="user-item"]')
    expect(users.length).toBeGreaterThan(0)
  })

  test('filters users by role', async ({ page }) => {
    await page.goto('/users')
    await page.waitForSelector('[data-testid="user-item"]')

    // Click the admin filter
    await page.click('[data-testid="filter-admin"]')

    // Locator assertions auto-retry, so no arbitrary timeout is needed
    const adminUsers = page.locator('[data-testid="user-item"][data-role="admin"]')
    await expect(adminUsers).not.toHaveCount(0)
  })

  test('searches users', async ({ page }) => {
    await page.goto('/users')
    await page.waitForSelector('[data-testid="user-item"]')

    // Search for a user
    await page.fill('[data-testid="search-input"]', 'John')
    await page.keyboard.press('Enter')

    // Check the search results (auto-retrying assertion)
    const results = page.locator('[data-testid="user-item"]')
    await expect(results).not.toHaveCount(0)
  })
})
```
### package.json Scripts

```json
{
  "scripts": {
    "test": "npm run test:unit && npm run test:e2e",
    "test:unit": "jest",
    "test:e2e": "playwright test",
    "test:watch": "jest --watch",
    "test:coverage": "jest --coverage"
  }
}
```
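The `test:e2e` script assumes Playwright can reach a running app that inherits the CI environment. A hypothetical `playwright.config.ts` (the port and dev command are placeholders) that boots the app so `API_MODE=faker` set in CI actually reaches it:

```typescript
// playwright.config.ts (hypothetical sketch; adjust the port and command to your app)
import { defineConfig } from '@playwright/test'

export default defineConfig({
  testDir: './tests/e2e',
  use: { baseURL: 'http://localhost:3000' },
  webServer: {
    command: 'npm run dev', // the spawned server inherits API_MODE from the CI env
    url: 'http://localhost:3000',
    reuseExistingServer: !process.env.CI,
  },
})
```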
## Advanced Patterns

### 1. Parallel Test Execution

Speed up tests by running them in parallel.

**GitHub Actions:**

```yaml
jobs:
  test:
    strategy:
      matrix:
        shard: [1, 2, 3, 4] # Run 4 shards in parallel
    steps:
      - name: Run tests
        run: npm run test -- --shard=${{ matrix.shard }}/4
        env:
          API_MODE: faker
```
Result: up to 4x faster wall-clock test runs.
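Sharding works because the test files partition cleanly: each shard gets a disjoint subset and together they cover every file. A toy sketch of the idea (not any runner's actual algorithm; real runners often balance shards by past timing):

```typescript
// Toy model of sharding: file i goes to shard (i mod shardCount) + 1,
// so shards are disjoint and together cover every file.
function filesForShard(files: string[], shard: number, shardCount: number): string[] {
  return files.filter((_, i) => i % shardCount === shard - 1) // shards are 1-indexed
}

const files = ['a.spec.ts', 'b.spec.ts', 'c.spec.ts', 'd.spec.ts', 'e.spec.ts']
console.log(filesForShard(files, 1, 4)) // ['a.spec.ts', 'e.spec.ts']
console.log(filesForShard(files, 2, 4)) // ['b.spec.ts']
```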
### 2. Test Different Scenarios

Mock different API responses:

```typescript
// tests/e2e/error-handling.spec.ts
import { test, expect } from '@playwright/test'

test.describe('Error Handling', () => {
  test('handles API errors', async ({ page }) => {
    // Configure the mock to return an error
    await page.route('/api/users', route => {
      route.fulfill({
        status: 500,
        body: JSON.stringify({ error: 'Server Error' })
      })
    })

    await page.goto('/users')

    // Check that the error message is displayed
    await expect(page.locator('[data-testid="error-message"]'))
      .toContainText('Server Error')
  })

  test('handles empty results', async ({ page }) => {
    // Configure the mock to return an empty array
    await page.route('/api/users', route => {
      route.fulfill({
        status: 200,
        body: JSON.stringify([])
      })
    })

    await page.goto('/users')

    // Check that the empty state is displayed
    await expect(page.locator('[data-testid="empty-state"]'))
      .toBeVisible()
  })
})
```
### 3. Snapshot Testing

Test UI with consistent mock data:

```tsx
// tests/unit/UserCard.snapshot.test.tsx
import { render } from '@testing-library/react'
import { UserCard } from '@/components/UserCard'

describe('UserCard snapshots', () => {
  it('matches snapshot', () => {
    // Mock data is consistent in CI, so snapshots stay stable
    const mockUser = {
      id: 'uuid-1',
      name: 'John Doe',
      email: 'john@example.com',
      role: 'admin'
    }

    const { container } = render(<UserCard user={mockUser} />)
    expect(container).toMatchSnapshot()
  })
})
```
### 4. Integration Tests

Test multiple components together:

```tsx
// tests/integration/user-flow.test.tsx
import { render, screen, fireEvent, waitFor } from '@testing-library/react'
import { App } from '@/App'

describe('User Management Flow', () => {
  it('completes the full user flow', async () => {
    render(<App />)

    // 1. Navigate to users
    fireEvent.click(screen.getByText(/users/i))

    // 2. Wait for users to load
    await waitFor(() => {
      expect(screen.getByText(/john doe/i)).toBeInTheDocument()
    })

    // 3. Click on a user
    fireEvent.click(screen.getByText(/john doe/i))

    // 4. Check that the user details loaded
    await waitFor(() => {
      expect(screen.getByText(/user details/i)).toBeInTheDocument()
    })

    // 5. Edit the user
    fireEvent.click(screen.getByText(/edit/i))
    fireEvent.change(screen.getByLabelText(/name/i), {
      target: { value: 'Jane Doe' }
    })
    fireEvent.click(screen.getByText(/save/i))

    // 6. Check that the update succeeded
    await waitFor(() => {
      expect(screen.getByText(/saved successfully/i)).toBeInTheDocument()
    })
  })
})
```
## Performance Comparison

### Real API vs Mocked API

Test suite: 100 E2E tests.
| Metric | Real API | Mocked API | Improvement |
|---|---|---|---|
| Duration | 25 minutes | 2.5 minutes | 10x faster |
| Cost | $5/run | $0.50/run | 90% cheaper |
| Flakiness | 15% fail rate | 0.5% fail rate | 30x more reliable |
| Parallelization | Limited | Unlimited | ∞ |
**ROI calculation** (assuming 20 test runs/day over 22 working days, i.e. 440 runs/month):

Before (real API):
- Duration: 440 × 25 min ≈ 183 hours/month
- Cost: 440 × $5 = $2,200/month
- Failed runs: 66 (15%)

After (mocked API):
- Duration: 440 × 2.5 min ≈ 18 hours/month
- Cost: 440 × $0.50 = $220/month
- Failed runs: 2 (0.5%)

Savings: $1,980/month plus roughly 165 hours of developer time.
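The arithmetic above can be reproduced directly; a small script using the same scenario numbers:

```typescript
// ROI model with the assumptions above: 20 runs/day × 22 working days per month
interface Scenario {
  minutesPerRun: number
  costPerRun: number
  failRate: number
}

const RUNS_PER_MONTH = 20 * 22 // 440

function monthly(s: Scenario) {
  return {
    hours: Math.round((RUNS_PER_MONTH * s.minutesPerRun) / 60),
    cost: RUNS_PER_MONTH * s.costPerRun,
    failedRuns: Math.round(RUNS_PER_MONTH * s.failRate),
  }
}

const realApi = monthly({ minutesPerRun: 25, costPerRun: 5, failRate: 0.15 })
const mocked = monthly({ minutesPerRun: 2.5, costPerRun: 0.5, failRate: 0.005 })

console.log(realApi) // { hours: 183, cost: 2200, failedRuns: 66 }
console.log(mocked) // { hours: 18, cost: 220, failedRuns: 2 }
console.log(`Savings: $${realApi.cost - mocked.cost}/month and ${realApi.hours - mocked.hours} hours`)
```

Plug in your own run counts and per-run costs to estimate the payoff for your pipeline.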
## Best Practices

### 1. Use Environment Variables

```typescript
// config.ts
export const config = {
  apiMode: process.env.API_MODE || 'production',
  apiUrl: process.env.API_URL || 'https://api.example.com'
}

// In CI:   API_MODE=faker      (free mocks)
// In dev:  API_MODE=ai         (realistic mocks)
// In prod: API_MODE=production (real API)
```
### 2. Seed Consistent Data

```typescript
// tests/setup.ts
import { faker } from '@faker-js/faker'

beforeAll(() => {
  // Seed faker so every run generates identical test data
  faker.seed(12345)
})
```
### 3. Test the Real API Separately

```yaml
# Separate workflow for integration tests against the real API
name: Integration Tests

on:
  schedule:
    - cron: '0 2 * * *' # Run daily at 2am

jobs:
  integration:
    runs-on: ubuntu-latest
    steps:
      - name: Run integration tests
        run: npm run test:integration
        env:
          API_MODE: production # Use the real API
          API_URL: ${{ secrets.API_URL }}
          API_KEY: ${{ secrets.API_KEY }}
```
### 4. Monitor Test Performance

```yaml
- name: Run tests
  run: npm test

- name: Report test duration
  run: |
    echo "Test duration: $(cat test-results/duration.txt)s"
    if [ "$(cat test-results/duration.txt)" -gt 300 ]; then
      echo "⚠️ Tests took longer than 5 minutes!"
      exit 1
    fi
```
## Common Issues and Solutions

### Issue 1: Tests Still Slow

**Symptoms:** Mocked tests still take too long.

**Causes:**
- Too many tests in one file
- Heavy DOM operations
- Waiting on fixed timeouts

**Solutions:**

```typescript
// ❌ Bad: tests run sequentially
test('test 1', async () => { /* ... */ })
test('test 2', async () => { /* ... */ })

// ✅ Good: tests run concurrently
test.concurrent('test 1', async () => { /* ... */ })
test.concurrent('test 2', async () => { /* ... */ })
```
### Issue 2: Flaky Tests

**Symptoms:** Tests pass or fail randomly.

**Causes:**
- Race conditions
- Timing issues
- Inconsistent mock data

**Solutions:**

```typescript
// ❌ Bad: arbitrary timeout
await page.waitForTimeout(1000)

// ✅ Good: wait for a specific condition
await page.waitForSelector('[data-testid="loaded"]')
```
### Issue 3: High Memory Usage

**Symptoms:** CI runs out of memory.

**Causes:**
- Too many parallel tests
- Memory leaks in tests
- Large mock datasets

**Solutions:**

```yaml
# Limit parallelism
- name: Run tests
  run: npm test -- --maxWorkers=2

# Or increase Node's heap size
- name: Run tests
  run: NODE_OPTIONS=--max_old_space_size=4096 npm test
```
## Monitoring and Metrics

### Track Test Metrics

```yaml
- name: Generate test report
  run: |
    echo "## Test Results" >> $GITHUB_STEP_SUMMARY
    echo "- Total tests: $(cat test-results/total.txt)" >> $GITHUB_STEP_SUMMARY
    echo "- Passed: $(cat test-results/passed.txt)" >> $GITHUB_STEP_SUMMARY
    echo "- Failed: $(cat test-results/failed.txt)" >> $GITHUB_STEP_SUMMARY
    echo "- Duration: $(cat test-results/duration.txt)s" >> $GITHUB_STEP_SUMMARY
    echo "- Coverage: $(cat test-results/coverage.txt)%" >> $GITHUB_STEP_SUMMARY
```
### Alert on Degradation

```yaml
- name: Check test performance
  run: |
    DURATION=$(cat test-results/duration.txt)
    if [ "$DURATION" -gt 300 ]; then
      curl -X POST ${{ secrets.SLACK_WEBHOOK }} \
        -H 'Content-Type: application/json' \
        -d "{\"text\":\"⚠️ Tests are slow: ${DURATION}s\"}"
    fi
```
## Migration Guide

### From Real APIs to Mocks

**Step 1: Add a mock mode to your API client**

```typescript
// Before
export async function getUsers() {
  const response = await fetch('https://api.example.com/users')
  return response.json()
}

// After
export async function getUsers() {
  if (process.env.API_MODE === 'faker') {
    return getMockedUsers() // Use mocks
  }
  const response = await fetch('https://api.example.com/users')
  return response.json()
}
```
**Step 2: Update your CI configuration**

```yaml
# Add the API_MODE environment variable
env:
  API_MODE: faker
```
**Step 3: Run tests and fix failures**

```bash
# Run locally first
API_MODE=faker npm test
```

Fix any test that expected real data, then push the change to CI.
**Step 4: Monitor and optimize**

Compare before/after metrics, then adjust parallelism, timeouts, and sharding as needed.
## Conclusion
Using mock data in CI/CD pipelines:
- ⚡ Makes tests 10x faster
- 💰 Reduces costs by 90%
- 🎯 Virtually eliminates flakiness
- 🔒 Prevents data pollution
- ✅ Enables unlimited parallelization
Next steps:
- Add `API_MODE=faker` to your CI config
- Update API clients to support mock mode
- Measure the improvement
- Optimize parallelization
Ready to speed up your tests? Try Symulate with unlimited free Faker mode for CI/CD.