# Test Coverage Analysis Prompt for Ruby Projects
Use the cov-loupe MCP server tool to analyze the test coverage data generated by SimpleCov for this Ruby project. Produce a detailed coverage analysis report and write it to a markdown file named `test_coverage_analysis.md`.
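As context for interpreting the data: SimpleCov is typically enabled at the top of the test helper, before application code loads. A minimal sketch, in which the group names and paths are illustrative assumptions rather than details of this project:

```ruby
# spec/spec_helper.rb: a minimal SimpleCov setup sketch.
# Group names and paths below are assumptions, not project facts.
require "simplecov"

SimpleCov.start do
  enable_coverage :branch        # line + branch coverage (SimpleCov >= 0.18)
  add_filter "/spec/"            # exclude the tests themselves
  add_group "Models",   "app/models"
  add_group "Services", "app/services"
end
```

The component groups defined here are what make the per-component breakdown in section 2 possible.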
Before diving into the sections below, identify the project type by examining the Gemfile, directory structure, and file-naming conventions (e.g., Rails app, CLI tool, service library, microservice). Adapt the component categories in sections 2 and 6 to match what is actually present — the Rails-specific names used as examples throughout are illustrative, not prescriptive.
The report should include:
## 1. Executive Summary
- Overall coverage percentage and trend direction
- High-level assessment of testing health across the application
- Key strengths and critical gaps at a glance
## 2. Coverage by Application Component
Detailed analysis of coverage across the application layers that are present in this project.
For a Rails app these are typically: Models, Controllers, Views, Mailers, Jobs, Services/POROs, Concerns, Lib files, Configuration.
For a non-web Ruby app these are typically: Core Domain Logic, Services/Commands, Data Access Layer, External Integrations, Utilities/Helpers, CLI Components, Configuration, Lib files, Concerns/Mixins.
Adapt this list to the actual project — drop categories that don't exist, add any that do.
## 3. Well-Tested Areas
- What's working well and why
- Testing patterns and practices to replicate
- Examples of strong test coverage with specific file references
## 4. Poorly-Tested Areas
Organize by:
- Complexity to Test: Simple/Moderate/Complex (considering dependencies, I/O operations, external services, etc.)
- Risk Level: High/Medium/Low (impact on data integrity, system reliability, business logic correctness)
- Coverage Gap Size: Percentage of uncovered lines and number of untested methods
## 5. Priority Issues Table
Create a markdown table with these columns:
- File/Module Path
- Component Type (Model/Service/Domain/Integration/etc.)
- Current Coverage %
- Uncovered Lines Count
- Risk Level (High/Medium/Low)
- Complexity to Fix (High/Medium/Low)
- Priority Score (1-10, with 10 being highest priority)
- Recommended Action (specific next steps)
Sort by Priority Score descending.
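One way to keep the Priority Score reproducible is to derive it from the other columns. The weights below are an assumption chosen for illustration, not a standard formula:

```ruby
# Hypothetical scoring sketch: derives a 1-10 priority from risk level,
# fix complexity, and gap size. The weights are assumptions.
RISK_WEIGHT   = { "High" => 3, "Medium" => 2, "Low" => 1 }.freeze
EFFORT_WEIGHT = { "High" => 1, "Medium" => 2, "Low" => 3 }.freeze # easy fixes rank higher

def priority_score(risk:, complexity:, uncovered_pct:)
  base = RISK_WEIGHT.fetch(risk) * EFFORT_WEIGHT.fetch(complexity) # 1..9
  bump = uncovered_pct >= 50 ? 1 : 0                               # large gaps get a nudge
  [base + bump, 10].min
end

priority_score(risk: "High", complexity: "Low", uncovered_pct: 80) # => 10
```

Any formula works as long as it is stated in the report, so readers can see why one file outranks another.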
## 6. Framework-Specific Testing Analysis
Cover the topics that apply to this project. For a Rails app:
- Model Testing: Validations, associations, scopes, callbacks, custom methods
- Controller Testing: Happy paths vs. error handling, authentication/authorization
- Integration Testing: Request specs, feature specs, system tests
- API Testing: JSON/XML responses, API versioning
- Database Operations: Migrations, seeds, complex queries
- Background Jobs: Execution, retry logic, error handling
- Security: Authentication, authorization, input sanitization, CSRF protection
- Missing Test Types: Unit/integration/system/feature test gaps
For a non-web Ruby app, focus instead on:
- Domain Logic Testing: Business rules, validations, calculations, state transitions
- Service Layer Testing: Orchestration logic, error handling, transaction boundaries
- Data Access Testing: Queries, data transformations, persistence operations
- Integration Testing: Interactions between components and external systems
- API Client Testing: HTTP requests, response parsing, error handling
- File I/O Operations: Reading/writing, parsing, serialization
- Concurrency/Threading: Thread-safe operations and race conditions (if applicable)
- Error Handling: Exception handling, retry logic, fallback mechanisms
Include the security and missing-test-types bullets regardless of project type.
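For the error-handling and retry-logic bullets above, a framework-free sketch of the kind of test to look for; `fetch_with_retry` is a hypothetical method for illustration, not assumed to exist in the project:

```ruby
# Sketch: exercising retry logic by counting how often a flaky block runs.
# `fetch_with_retry` is a hypothetical stand-in for the code under test.
def fetch_with_retry(attempts: 3)
  tries = 0
  begin
    tries += 1
    yield
  rescue IOError
    retry if tries < attempts
    raise
  end
end

calls = 0
result = fetch_with_retry do
  calls += 1
  raise IOError, "flaky" if calls < 3   # fail twice, then succeed
  :ok
end
raise "expected :ok"     unless result == :ok
raise "expected 3 calls" unless calls == 3
```

A good suite covers both branches: the eventual success shown here, and the case where all attempts are exhausted and the exception propagates.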
## 7. Common Testing Anti-Patterns Detected
Report only those actually observed:
- Unnecessary verbosity: identical or near-identical code fragments that should be deduplicated using input arrays, extracted methods, shared examples, etc.
- Over-reliance on controller tests vs. request specs (Rails)
- Over-mocking leading to brittle tests
- Missing edge-case coverage in core domain / model logic
- Untested error handling and failure scenarios
- Missing tests for background jobs and mailers (Rails)
- Missing tests for external service integrations
- Lack of integration tests for critical user flows / workflows
- Untested authorization/permission logic
- Untested concurrency or threading logic (if applicable)
- Insufficient testing of data transformation pipelines
- Missing tests for CLI argument parsing and validation (if applicable)
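The verbosity bullet deserves a concrete picture: repeated near-identical assertions can collapse into a data-driven loop (shown framework-free here; in RSpec the same idea becomes shared examples or a case table). `valid_email?` is a hypothetical stand-in, not a project method:

```ruby
# Instead of one copy-pasted assertion per input, drive the cases from data.
# `valid_email?` is a hypothetical stand-in for the method under test.
def valid_email?(str)
  str.match?(/\A[^@\s]+@[^@\s]+\z/)
end

CASES = {
  "a@b.com" => true,
  ""        => false,
  "no-at"   => false,
  "two@@x"  => false,
}.freeze

CASES.each do |input, expected|
  actual = valid_email?(input)
  raise "valid_email?(#{input.inspect}): got #{actual}" unless actual == expected
end
```

Adding an edge case then means adding one hash entry instead of another copy-pasted block.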
## 8. Actionable Testing Roadmap
Prioritized list of specific issues to resolve, organized by effort level:
- Quick Wins (< 2 hours): Simple tests with high impact
- Sprint 1 (2-8 hours): Medium complexity, high priority
- Sprint 2 (1-2 days): Complex but critical coverage gaps
- Long-term (> 2 days): Large refactoring or architectural test improvements
Also flag testing infrastructure issues — inadequate fixtures, missing helpers, or code patterns that make tests brittle — and place them in the Long-term bucket.
Include specific file paths, method names, and recommended testing approaches for each item.
## 9. Metrics Dashboard
Key numbers:
- Overall coverage percentage
- Number of files at 0% coverage
- Number of files at 100% coverage
- Number of files above 90% coverage
- Number of files below 50% coverage
- Coverage by component type (adapted to the project)
- Trend direction (if historical data is available)
- Ratio of lines of code to lines of test code
- Average cyclomatic complexity of untested methods (if available)
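The per-file counts can be derived directly from SimpleCov's `coverage/.resultset.json`. A hedged sketch, assuming the post-0.18 format where each file maps to a `"lines"` array, with a fallback for the older bare-array layout:

```ruby
require "json"

# Sketch: summarize SimpleCov's coverage/.resultset.json into dashboard
# counts. Assumes the modern format ({"lines" => [...]}), falling back
# to the older format where the line array is stored directly.
def coverage_summary(resultset_path)
  data  = JSON.parse(File.read(resultset_path))
  files = data.values.flat_map { |suite| suite.fetch("coverage", {}).to_a }

  percentages = files.map do |path, info|
    lines    = info.is_a?(Hash) ? info["lines"] : info
    relevant = lines.compact                 # nil marks non-executable lines
    covered  = relevant.count { |hits| hits > 0 }
    [path, relevant.empty? ? 100.0 : 100.0 * covered / relevant.size]
  end

  {
    zero:    percentages.count { |_, pct| pct == 0 },
    full:    percentages.count { |_, pct| pct == 100 },
    over90:  percentages.count { |_, pct| pct >= 90 },
    under50: percentages.count { |_, pct| pct < 50 },
  }
end
```

Computing these numbers from the raw resultset, rather than eyeballing the HTML report, makes the dashboard reproducible run over run.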
## 10. Risk Assessment
- Critical Paths: Identify core user journeys / business workflows and their test coverage
- High-Risk Untested Code: Payment processing, authentication, data deletion, financial calculations, etc.
- Security Implications: Areas where lack of tests could lead to vulnerabilities
- Technical Debt: Old code with no tests that should be prioritized for coverage
- External Dependencies: Integration points with poor test coverage
- Data Integrity Risks: Operations that modify or delete data without adequate test coverage
Include specific file paths, line numbers, and code examples where relevant. Focus on actionable insights rather than just reporting numbers. Provide concrete recommendations for improving the test suite architecture and coverage strategy.