Expand the category that best describes what your team is experiencing, then follow the sub-questions to find the most relevant symptom pages.
We have problems with our tests
Tests pass sometimes and fail sometimes without code changes
Your tests are non-deterministic. This is often caused by environment differences or test architecture that depends on external systems.
- Tests Randomly Pass or Fail - Pipeline fails, rerun passes, nobody investigates
- Tests Pass in One Environment but Fail in Another - Works locally, fails in CI, or the reverse
We have good coverage numbers but bugs still reach production
Coverage measures which lines execute, not whether the tests verify correct behavior. High coverage with low defect detection points to a test design problem.
- High Coverage but Tests Miss Defects - Tests assert implementation details instead of behavior
Refactoring is risky because it breaks tests
When tests are coupled to implementation details rather than behavior, any internal change causes test failures even when the behavior is correct.
- Refactoring Breaks Tests - Internal changes break tests that should not care about implementation
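A sketch of the difference between an implementation-coupled test and a behavior-focused one, using a hypothetical `OrderService` and `unittest.mock`:

```python
from unittest.mock import MagicMock

class OrderService:
    def __init__(self, repo):
        self.repo = repo

    def total(self, order_id: int) -> float:
        # Implementation detail: fetch items, then sum in Python.
        items = self.repo.get_items(order_id)
        return sum(item["price"] for item in items)

# Implementation-coupled: asserts *how* the result is produced.
# Refactoring total() to a single SQL SUM query would break this test
# even though the observable behavior is identical.
def test_total_calls_get_items():
    repo = MagicMock()
    repo.get_items.return_value = [{"price": 5.0}]
    OrderService(repo).total(1)
    repo.get_items.assert_called_once_with(1)

# Behavior-focused: asserts only the observable result, so it survives
# internal refactoring.
def test_total_sums_item_prices():
    repo = MagicMock()
    repo.get_items.return_value = [{"price": 5.0}, {"price": 2.5}]
    assert OrderService(repo).total(1) == 7.5
```

When a refactor breaks only tests of the first kind, the tests are the problem, not the refactor.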
The test suite takes too long to run
Slow tests delay feedback and encourage developers to skip running them locally.
- Test Suite Is Too Slow to Run - Tests take so long that developers avoid running them
- Test Environments Take Too Long to Reset Between Runs - Environment and database reset time prevents running the full suite on every change
- Pipelines Take Too Long - The overall pipeline is slow, not just tests
AI-generated code is causing quality or security problems
When developers use AI to generate code without verifying it against acceptance criteria, functional bugs and security vulnerabilities ship. AI-generated code can also pile up structural problems faster than the team can absorb them.
- AI-Generated Code Ships Without Developer Understanding - Developers commit AI output without verifying it against acceptance criteria
- AI Is Generating Technical Debt Faster Than the Team Can Absorb It - The codebase accumulates duplication and inconsistent patterns from AI-generated code
Deploying and releasing is painful
The team avoids or dreads deployments
When deployments frequently cause incidents, the team learns to treat them as high-risk events.
- The Team Is Afraid to Deploy - Deployments cause anxiety because they frequently fail
- Releases Are Infrequent and Painful - The team batches changes into large, risky releases
We need to coordinate multiple services or teams to deploy
Deployment coordination signals architectural coupling or process constraints.
- Multiple Services Must Be Deployed Together - Services cannot be deployed independently
- Merge Freezes Before Deployments - The team stops merging to stabilize before a release
We need a stabilization period before each release
If you need dedicated time to “harden” before releasing, the normal development process is not producing releasable code.
- Hardening Sprints Are Needed Before Every Release - Extra time required to make code production-ready
- Staging Passes but Production Fails - Staging environment does not catch production problems
Work is slow and things pile up
Lots of things are in progress but few are finishing
High work-in-progress means the team is spread thin. Nothing gets the focus needed to finish.
- Everything Started, Nothing Finished - The board shows many items in progress, few reaching done
- Work Items Take Days or Weeks to Complete - Individual items take far longer than estimated
Merging and integrating code is difficult
When integration is deferred, branches diverge and merging becomes painful.
- Merging Is Painful and Time-Consuming - Merges require significant effort to resolve conflicts
- Pull Requests Sit for Days Waiting for Review - Code waits in the review queue instead of flowing forward
Feedback on changes takes too long
Slow feedback loops mean developers context-switch away and problems grow before they are caught.
- Feedback Takes Hours Instead of Minutes - Developers wait hours or days to learn if a change works
- Pipelines Take Too Long - The pipeline itself is the bottleneck
AI tools are not making us faster
AI coding assistants should reduce implementation time, but the overhead of prompting, reviewing, and correcting AI output sometimes exceeds the time it would take to write the code directly.
- AI Tooling Slows You Down Instead of Speeding You Up - The prompt-review-fix cycle takes longer than coding it yourself
Production problems and team health
Customers find problems before we do
If your monitoring does not catch issues before users report them, you have an observability gap.
- Production Issues Discovered by Customers - Users report bugs the team did not know existed
- Production Problems Are Discovered Hours or Days Late - Incidents go unnoticed until impact accumulates
Code behaves differently in different environments
Environment inconsistency makes it impossible to reproduce problems reliably.
- It Works on My Machine - Code works locally but fails elsewhere
- Tests Pass in One Environment but Fail in Another - Environment differences cause test failures
The team is exhausted from process overhead
When the delivery process creates friction at every step, the team burns out.
- Team Burnout and Unsustainable Pace - Process overhead is wearing the team down
Organizational and process problems
Changes require approval chains or committees before deploying
When manual approval gates exist between a green pipeline and production, they add delay without reducing risk.
- Every Change Requires a Ticket and Approval Chain - Bureaucratic gates that add delay without reducing risk
- Work Requires Sign-Off from Teams Not Involved in Delivery - Cross-team approvals that create queues
Another team controls our pipeline or infrastructure
When the team cannot change its own delivery process, improvement stalls.
- Teams Cannot Change Their Own Pipeline - Pipeline changes require another team
- Work Stalls Waiting for the Platform Team - Infrastructure requests create queues
Knowledge is concentrated in a few people
When only certain people can deploy, debug, or explain the architecture, the team is fragile.
- Releases Depend on One Person - One person is the bottleneck for every release
- Delivery Slows Every Time the Team Rotates - New team members take weeks to become productive
- Bugs in Familiar Areas Take Disproportionately Long to Fix - Code the team nominally owns still takes too long to understand and change because the knowledge sits with one person
Leadership does not see delivery improvement as a priority
Without organizational support, technical improvements stall at the first policy conflict.
- Leadership Sees CD as a Technical Nice-to-Have - No executive sponsorship for delivery improvement
- Features Must Wait for a Separate QA Team - Organizational structure creates handoffs