Unexpected Coverage Changes
When working with code coverage, you might occasionally see unexpected changes in coverage metrics. This guide helps you understand why these changes occur and how to investigate them.
Common Scenarios
Coverage Changes in Untouched Files
One of the most common surprises is seeing coverage changes in files you haven’t modified. There are several reasons this can happen:
Indirect Code Paths
Changes to one file can affect the execution paths in other files. For example:
- Function Call Removal: If File A stops calling a function in File B, File B’s coverage might decrease
- Conditional Logic Changes: Changing a condition in one file might prevent code in another file from executing
- Error Handling Changes: Changes to how errors are thrown or caught can affect which code paths are exercised
Example: if a change to a calling file removes the only call into UserRepository.js, coverage in UserRepository.js drops even though UserRepository.js itself wasn’t modified.
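A minimal sketch of how this can happen; the `UserService` and `UserRepository` names and methods here are hypothetical:

```javascript
// userRepository.js -- stands in for "File B"
class UserRepository {
  findById(id) {
    // If no caller invokes this method, every line in it shows up
    // as uncovered, even though this file was never edited.
    if (id == null) throw new Error("id is required");
    return { id, name: `user-${id}` };
  }
}

// userService.js -- stands in for "File A"
class UserService {
  constructor(repo) {
    this.repo = repo;
  }
  getUser(id) {
    // BEFORE: return this.repo.findById(id);  <- exercised UserRepository
    // AFTER: the call was replaced with a cached lookup, so
    // UserRepository.findById is no longer executed by the tests.
    return { id, name: "cached-user" };
  }
}

module.exports = { UserRepository, UserService };
```

The diff touches only the service file, but the coverage report shows the repository file losing coverage.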
Coverage Decreases Despite Adding Tests
Sometimes adding tests can actually decrease coverage:
Test Execution Order
If tests depend on each other or on global state, changes to one test can affect how other tests run:
- Setup Changes: Modifications to test setup/teardown code might affect which code paths later tests exercise
- Test Dependencies: If tests aren’t properly isolated, changes to one test can break others
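A contrived sketch of this coupling, using a module-level cache shared across tests (names are made up):

```javascript
// A module-level cache shared across tests -- the kind of hidden
// coupling that makes coverage depend on test order.
const cache = new Map();

function getConfig(key) {
  if (cache.has(key)) {
    return cache.get(key); // fast path: the loader below never runs
  }
  // Slow path: this branch is only covered when the cache is cold.
  const value = { key, loadedAt: Date.now() };
  cache.set(key, value);
  return value;
}

// "Test 1" warms the cache...
const first = getConfig("db");
// ...so "Test 2" only ever exercises the fast path. If Test 1 is
// removed or reordered, the covered branches of getConfig change
// even though getConfig itself was not modified.
const second = getConfig("db");

module.exports = { getConfig, first, second };
```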
Coverage Tool Configuration
Changes to the coverage tool configuration can affect the reported metrics:
- Threshold Changes: Adjusting minimum thresholds might exclude/include certain files
- Path Inclusion/Exclusion: Changes to which files are included in coverage calculations
- Coverage Criteria Changes: Switching between line, statement, branch, or function coverage
Large Coverage Changes from Small Code Changes
A small code change sometimes results in a significant coverage change:
Critical Path Modifications
Some code paths are gateway paths that control access to large portions of code:
- Auth/Permission Checks: Changes to authentication logic might prevent large sections of code from running
- Feature Flags: Toggling feature flags can enable/disable entire features
- Dependency Injection: Changing what’s injected can alter large execution paths
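A minimal sketch of a gateway path; the feature flag and discount logic are invented for illustration:

```javascript
// checkout.js -- the feature flag acts as a gateway: when it is off,
// none of the discount logic below executes, so its coverage drops
// to zero with a one-line change.
const flags = { newDiscountEngine: true };

function applyDiscount(order) {
  if (!flags.newDiscountEngine) {
    return order.total; // flag off: everything below goes uncovered
  }
  // All of the "new engine" logic lives behind the flag;
  // flipping it uncovers (or covers) every line at once.
  let total = order.total;
  if (order.coupon === "SAVE10") total *= 0.9;
  if (total > 100) total -= 5;
  return Math.round(total * 100) / 100;
}

module.exports = { applyDiscount, flags };
```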
Test Data Changes
Changes to test data can dramatically affect which code paths are exercised:
- Edge Cases: Adding/removing edge cases in test data
- Mock Responses: Changes to mocked API responses
- Test Environment Variables: Different environment settings for tests
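As an illustration, a mocked response’s shape can single-handedly decide which branch runs (the payload shape here is invented):

```javascript
// The handler has three paths; which ones the tests cover depends
// entirely on what the mocked API response looks like.
function renderUser(response) {
  if (!response.ok) {
    return "error";       // only covered if some mock sets ok: false
  }
  if (!response.body.avatarUrl) {
    return "placeholder"; // only covered by a mock without avatarUrl
  }
  return `img:${response.body.avatarUrl}`;
}

module.exports = { renderUser };
```

Dropping the `ok: false` mock from the suite silently uncovers the error branch, even though neither the handler nor its tests’ assertions changed.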
Investigation Approaches
When you encounter unexpected coverage changes, here’s how to investigate:
1. Examine Coverage Reports in Detail
Start by comparing detailed coverage reports:
- Line-by-Line Comparison: Look at exactly which lines gained/lost coverage
- Branch Coverage Analysis: Check if conditional branches are being exercised differently
- Function Call Coverage: Verify if functions are being called the same number of times
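One way to script the line-by-line comparison, assuming an Istanbul-style `coverage-summary.json` layout (the two summary objects are inlined here for illustration):

```javascript
// Diff two Istanbul-style coverage summaries file by file,
// sorted so the biggest drops come first.
function diffCoverage(before, after) {
  const changes = [];
  for (const file of Object.keys(after)) {
    const prev = before[file]?.lines.pct ?? 0;
    const next = after[file].lines.pct;
    if (prev !== next) changes.push({ file, prev, next, delta: next - prev });
  }
  return changes.sort((a, b) => a.delta - b.delta);
}

const before = {
  "src/auth.js": { lines: { pct: 92 } },
  "src/db.js": { lines: { pct: 80 } },
};
const after = {
  "src/auth.js": { lines: { pct: 71 } }, // dropped, though auth.js is untouched
  "src/db.js": { lines: { pct: 80 } },
};

module.exports = { diffCoverage, before, after };
```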
2. Analyze Execution Paths
Understanding the flow of execution can help identify indirect effects:
- Call Stack Analysis: Look at the call stack for affected code
- Dependency Graphing: Map out how components depend on each other
- Control Flow Tracking: Follow the execution path through the application
3. Review Test Execution
The way tests run can significantly impact coverage:
- Test Order Effects: Try running tests in different orders
- Isolated Test Runs: Run specific tests in isolation
- Test Environment Comparison: Compare coverage when tests run in different environments
4. Check for Infrastructure Issues
Sometimes the issue isn’t with the code or tests:
- CI/CD Configuration: Check if CI settings have changed
- Coverage Tool Updates: Verify if coverage tools have been updated
- Platform Differences: Look for differences between local and CI environments
Real-World Examples
Case Study 1: The Missing Import
A team saw coverage drop by 15% after a seemingly harmless change. Investigation revealed:
- A utility function import was removed from a test file
- This utility had side effects that initialized several components
- Without this initialization, tests for those components started failing silently
- The coverage tool reported those components as untested
Solution: The team made the component initialization explicit in the test setup.
Case Study 2: The Database Timeout
After changing a database query, a team noticed coverage decreases in unrelated authentication code:
- The new query sometimes took longer to execute
- This triggered timeout handling code in subsequent tests
- The timeout caused authentication tests to take a different path
- Several authentication code paths stopped being exercised
Solution: The team increased the timeout threshold in tests and optimized the database query.
Case Study 3: The Browser Cache
A front-end team observed inconsistent coverage results between CI runs:
- Coverage for certain UI components would vary by 5-10%
- Investigation showed browser cache settings differed between environments
- Cached responses caused some UI rendering code to be skipped
- This led to unpredictable coverage of UI component code
Solution: The team standardized cache settings across test environments and added tests with explicit cache controls.
Troubleshooting Checklist
When investigating unexpected coverage changes, work through this checklist:
- Compare coverage reports:
  - Which specific lines gained/lost coverage?
  - Are there patterns to where coverage changed?
- Review the changes:
  - Do they modify how other code is called?
  - Do they change conditional logic?
  - Do they affect error handling?
- Check test execution:
  - Are all tests still passing?
  - Has test execution order changed?
  - Are there timing or race condition issues?
- Verify environment consistency:
  - Are tests running in the same environment?
  - Have dependencies been updated?
  - Has the coverage tool configuration changed?
- Validate data flows:
  - Has test data changed?
  - Are mock objects behaving differently?
  - Have environment variables changed?
Best Practices to Prevent Surprises
Test Isolation
Write tests that don’t depend on each other:
- Avoid shared state between tests
- Reset the environment between tests
- Use dependency injection for better control
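A sketch of an explicit reset between tests, written framework-agnostically with a hand-rolled setup hook:

```javascript
// A module with state that leaks between tests unless reset.
const session = { user: null };

function login(name) { session.user = name; }
function whoAmI() { return session.user ?? "anonymous"; }

// Framework-agnostic "beforeEach": every test calls this first,
// so no test depends on what a previous test left behind.
function resetSession() { session.user = null; }

// Test A
resetSession();
login("alice");
const seenByA = whoAmI();

// Test B starts from a clean slate regardless of Test A.
resetSession();
const seenByB = whoAmI();

module.exports = { seenByA, seenByB };
```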
Consistent Test Data
Ensure test data is consistent and explicitly defined:
- Use fixtures or factories for test data
- Version control your test data
- Be explicit about edge cases
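A simple factory keeps test data explicit and consistent; the field names here are illustrative:

```javascript
// A factory with sensible defaults; each test overrides only the
// fields it cares about, so edge cases are visible at the call site.
function makeUser(overrides = {}) {
  return {
    id: 1,
    name: "Test User",
    email: "test@example.com",
    roles: ["viewer"],
    ...overrides,
  };
}

const admin = makeUser({ roles: ["admin"] }); // edge case: elevated role
const noEmail = makeUser({ email: "" });      // edge case: missing email

module.exports = { makeUser, admin, noEmail };
```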
Regular Coverage Baseline Updates
Keep your coverage expectations up to date:
- Regularly update coverage baselines
- Document expected coverage changes with PRs
- Use coverage gates with reasonable thresholds
Comprehensive Test Suites
Design tests to exercise multiple paths:
- Test happy paths and failure modes
- Include edge cases and boundary conditions
- Test integration points thoroughly
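For example, exercising the happy path, a boundary condition, and a failure mode of a single function (the function itself is invented for illustration):

```javascript
function parseAmount(input) {
  const value = Number(input);
  if (Number.isNaN(value)) {
    throw new Error(`not a number: ${input}`);            // failure mode
  }
  if (value < 0) {
    throw new RangeError("amount must be non-negative");  // boundary
  }
  return value;                                           // happy path
}

// Exercising all three paths covers every branch above.
const ok = parseAmount("42");
let nanError = null;
try { parseAmount("abc"); } catch (e) { nanError = e.message; }
let rangeError = null;
try { parseAmount("-1"); } catch (e) { rangeError = e.message; }

module.exports = { ok, nanError, rangeError };
```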
See Also
- Coverage Metrics - Understanding the different coverage metrics
- Coverage Comparisons - How to compare coverage between branches
- Troubleshooting - General coverage troubleshooting tips