Find Your Symptom

Answer a few questions to narrow down which dysfunction symptoms match your situation.

    Expand the category that best describes what your team is experiencing, then follow the sub-questions to find the most relevant symptom pages.

    We have problems with our tests
    Tests pass sometimes and fail sometimes without code changes

    Your tests are non-deterministic. This is often caused by environment differences or test architecture that depends on external systems.
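
    A minimal sketch of the pattern in Python with pytest (our choice of stack here, purely illustrative): the first test depends on the wall clock, so it passes or fails depending on when it runs; the second makes time an explicit input, so it is deterministic.

    ```python
    from datetime import datetime, timezone

    def greeting(now: datetime) -> str:
        return "Good morning" if now.hour < 12 else "Good afternoon"

    # Non-deterministic: the outcome depends on when the suite happens to run.
    def test_greeting_flaky():
        assert greeting(datetime.now()) == "Good morning"

    # Deterministic: time is a pinned, explicit input.
    def test_greeting_deterministic():
        nine_am = datetime(2024, 1, 1, 9, 0, tzinfo=timezone.utc)
        assert greeting(nine_am) == "Good morning"
    ```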

    We have good coverage numbers but bugs still reach production

    Coverage measures which lines execute, not whether the tests verify correct behavior. High coverage with low defect detection points to a test design problem.
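
    One way this shows up, sketched in Python: a test that executes every line of the function under test (100% line coverage) but asserts nothing, so a real bug survives.

    ```python
    def apply_discount(price: float, percent: float) -> float:
        # Bug: the discount is added instead of subtracted.
        return price + price * (percent / 100)

    # Executes every line, so the coverage report shows 100%,
    # but with no assertion the bug ships anyway.
    def test_apply_discount_covered_but_useless():
        apply_discount(100.0, 10.0)

    # Same coverage, but this test verifies behavior and fails on the bug.
    def test_apply_discount_verifies_behavior():
        assert apply_discount(100.0, 10.0) == 90.0
    ```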

    Refactoring is risky because it breaks tests

    When tests are coupled to implementation details rather than behavior, internal changes cause test failures even when the observable behavior is unchanged.
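
    A sketch of the difference in Python (class and method names are hypothetical): the first test asserts how the answer was computed, so a harmless internal refactor breaks it; the second asserts only what callers can observe.

    ```python
    from unittest.mock import MagicMock

    class PriceService:
        def __init__(self, repo):
            self._repo = repo

        def total(self, item_ids) -> int:
            return sum(self._repo.price_of(i) for i in item_ids)

    # Coupled to implementation: refactoring total() to cache repeated
    # lookups changes the call count but not the result, failing this test.
    def test_total_coupled_to_implementation():
        repo = MagicMock()
        repo.price_of.return_value = 5
        PriceService(repo).total(["a", "a"])
        assert repo.price_of.call_count == 2

    # Coupled to behavior: survives internal refactors that keep the
    # total correct.
    def test_total_coupled_to_behavior():
        repo = MagicMock()
        repo.price_of.return_value = 5
        assert PriceService(repo).total(["a", "a"]) == 10
    ```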

    The test suite takes too long to run

    Slow tests delay feedback and encourage developers to skip running them locally.
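
    One interim mitigation while the underlying slowness is addressed, sketched with pytest (the slow marker is a convention you would register in your own pytest.ini): tag the expensive tests so the fast subset stays cheap enough to run on every change.

    ```python
    import pytest

    def parse_amount(text: str) -> float:
        return float(text.replace("$", "").replace(",", ""))

    # Fast: milliseconds, cheap to run on every change.
    def test_parse_amount():
        assert parse_amount("$1,200.50") == 1200.50

    # Expensive: tagged so developers can exclude it locally with
    #   pytest -m "not slow"
    # while CI still runs the full suite.
    @pytest.mark.slow
    def test_full_checkout_flow():
        ...  # placeholder for an end-to-end test
    ```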

    AI-generated code is causing quality or security problems

    When developers use AI to generate code without verifying it against acceptance criteria, functional bugs and security vulnerabilities ship. AI-assisted development can also accumulate structural problems in the codebase faster than the team can address them.

    Deploying and releasing is painful
    The team avoids or dreads deployments

    When deployments frequently cause incidents, the team learns to treat them as high-risk events.

    We need to coordinate multiple services or teams to deploy

    Deployment coordination signals architectural coupling or process constraints.

    We need a stabilization period before each release

    If you need dedicated time to “harden” before releasing, the normal development process is not producing releasable code.

    Work is slow and things pile up
    Lots of things are in progress but few are finishing

    High work-in-progress means the team is spread thin. Nothing gets the focus needed to finish.

    Merging and integrating code is difficult

    When integration is deferred, branches diverge and merging becomes painful.

    Feedback on changes takes too long

    Slow feedback loops mean developers context-switch away and problems grow before they are caught.

    AI tools are not making us faster

    AI coding assistants should reduce implementation time, but the overhead of prompting, reviewing, and correcting AI output sometimes exceeds the time to write the code directly.

    Production problems and team health
    Customers find problems before we do

    If your monitoring does not catch issues before users report them, you have an observability gap.

    Code behaves differently in different environments

    Environment inconsistency makes it impossible to reproduce problems reliably.
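
    One common shape of the problem, sketched in Python: code whose result silently depends on the host filesystem, so two environments can disagree without any code change.

    ```python
    import os

    # Environment-dependent: os.listdir() returns entries in arbitrary,
    # filesystem-specific order, so "the first config file" can differ
    # between a laptop, the CI image, and production.
    def first_config(dirpath: str) -> str:
        return os.path.join(dirpath, os.listdir(dirpath)[0])

    # Deterministic: sorting removes the hidden dependency, so every
    # environment resolves the same file.
    def first_config_pinned(dirpath: str) -> str:
        return os.path.join(dirpath, sorted(os.listdir(dirpath))[0])
    ```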

    The team is exhausted from process overhead

    When the delivery process creates friction at every step, the team burns out.

    Organizational and process problems
    Changes require approval chains or committees before deploying

    When manual approval gates exist between a green pipeline and production, they add delay without reducing risk.

    Another team controls our pipeline or infrastructure

    When the team cannot change its own delivery process, improvement stalls.

    Knowledge is concentrated in a few people

    When only certain people can deploy, debug, or explain the architecture, the team is fragile.

    Leadership does not see delivery improvement as a priority

    Without organizational support, technical improvements stall at the first policy conflict.