API Changes Break Consumers Without Warning
Breaking API changes reach all consumers simultaneously. Teams are afraid to evolve APIs because they do not know who depends on them.
These symptoms indicate problems with your deployment and release process. When deploying is painful, teams deploy less often, which increases batch size and risk. Start with the symptom that matches what your team experiences: each symptom page describes what you are seeing, identifies the most likely root causes (anti-patterns), and provides diagnostic questions to narrow down which cause applies to your situation. Follow the anti-pattern link to find concrete fix steps.
Related anti-pattern categories: Pipeline Anti-Patterns, Architecture Anti-Patterns
Related guides: Pipeline Architecture, Rollback, Small Batches
Build outputs are discarded and rebuilt for each environment. Production is not running the artifact that was tested.
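The usual remedy is to build once and promote the same artifact through every environment, verifying its identity at each stage. A minimal sketch, assuming a single-file artifact and illustrative stage names:

```python
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a build artifact."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def promote(artifact: Path, recorded_digest: str, stage: str) -> None:
    """Refuse to deploy if the artifact differs from the one that was tested."""
    actual = digest(artifact)
    if actual != recorded_digest:
        raise RuntimeError(
            f"{stage}: {artifact.name} does not match the tested build"
        )
    print(f"{stage}: deploying verified artifact {artifact.name}")

# Build once, record the digest, then reuse the same file everywhere.
artifact = Path("app.tar.gz")
artifact.write_bytes(b"built-once-binary")
recorded = digest(artifact)
for stage in ("test", "staging", "production"):
    promote(artifact, recorded, stage)
```

With this check in place, production is provably running the exact bytes that passed the pipeline.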
Change management overhead is identical for a one-line fix and a major rewrite. The process creates a queue that delays all changes equally.
Changes cannot go to production until multiple services are deployed in a specific order during a coordinated release window.
Changes cannot ship without approval from architecture review boards, legal, compliance, or other teams that are not part of the delivery process and have their own schedules.
Schema changes require downtime, lock tables, or leave the database in an unknown state when they fail mid-run.
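The common alternative is an expand/contract migration, where every phase is backward compatible so a failure mid-run leaves a known-good state. A sketch using SQLite in memory (table and column names are illustrative):

```python
import sqlite3

# Expand/contract: each phase keeps both old and new code paths working,
# so the migration needs no downtime and can stop safely at any point.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('Ada Lovelace')")

# Phase 1 (expand): add new columns alongside the old one.
conn.execute("ALTER TABLE users ADD COLUMN first_name TEXT")
conn.execute("ALTER TABLE users ADD COLUMN last_name TEXT")

# Phase 2 (migrate): backfill while both schemas coexist; rerunnable
# because the WHERE clause skips rows that are already migrated.
conn.execute("""
    UPDATE users
    SET first_name = substr(name, 1, instr(name, ' ') - 1),
        last_name  = substr(name, instr(name, ' ') + 1)
    WHERE name IS NOT NULL AND first_name IS NULL
""")

# Phase 3 (contract): drop the old column only in a later release,
# once no deployed code reads it.
row = conn.execute("SELECT first_name, last_name FROM users").fetchone()
print(row)  # ('Ada', 'Lovelace')
```

Because no phase removes anything the running code still depends on, each one can ship independently without locking out readers.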
There is no way to deploy code without activating it for users. All deployments are full releases with no controlled rollout.
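Separating deploy from release usually means shipping code dark and activating it later by configuration. A minimal feature-flag sketch; the flag store, names, and percentage logic here are illustrative, not a specific library:

```python
# Code is deployed for everyone but released only to a slice of users,
# so activation is a config change rather than a new deployment.
FLAGS = {"new_checkout": {"enabled": True, "rollout_percent": 10}}

def is_enabled(flag: str, user_id: int) -> bool:
    """Deterministic percentage rollout: a user always gets the same answer."""
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    return user_id % 100 < cfg["rollout_percent"]

def checkout(user_id: int) -> str:
    if is_enabled("new_checkout", user_id):
        return "new checkout flow"   # deployed code, released to 10% of users
    return "old checkout flow"       # everyone else keeps the proven path

print(checkout(user_id=5))    # new checkout flow (5 % 100 < 10)
print(checkout(user_id=42))   # old checkout flow
```

Raising `rollout_percent` to 100 completes the release; setting `enabled` to `False` is an instant kill switch, with no redeployment in either case.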
Production deployments cause anxiety because they frequently fail. The team delays deployments, which increases batch size, which increases risk.
The team dedicates one or more sprints after “feature complete” to stabilize code before it can be released.
Deployments happen monthly, quarterly, or even less often. Each release is a large, risky event that requires war rooms and weekend work.
Developers announce merge freezes because the integration process is fragile. Deploying requires coordination in chat.
The team cannot prove what version is running in production, who deployed it, or what tests it passed.
If a deployment breaks production, the only option is a forward fix under pressure. Rolling back has never been practiced or tested.
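One way to make rollback routine is to keep the previous release on disk and repoint a `current` symlink, so rolling back is the same rehearsed command as deploying. A sketch assuming a POSIX filesystem (paths are illustrative):

```python
import tempfile
from pathlib import Path

# Keep every release directory; "deploying" just moves the current pointer.
root = Path(tempfile.mkdtemp())
for version in ("v1", "v2"):
    (root / version).mkdir()

def activate(version: str) -> None:
    """Atomically point 'current' at a release directory."""
    tmp = root / "current.tmp"
    if tmp.is_symlink():
        tmp.unlink()
    tmp.symlink_to(root / version)
    # rename() replaces an existing link atomically on POSIX filesystems,
    # so traffic never sees a half-switched state.
    tmp.rename(root / "current")

activate("v2")   # deploy
activate("v1")   # rollback: same operation, previous version
print((root / "current").resolve().name)  # v1
```

Because rollback is identical to deployment, it can be exercised in every environment instead of being attempted for the first time during an incident.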
Adding a build step, updating a deployment config, or changing an environment variable requires filing a ticket with a platform or DevOps team and waiting.
Something that worked before the release is broken after it. The team spends time after every release chasing down what changed and why.
A single person coordinates and executes all production releases. Deployments stop when that person is unavailable.
Changes queue for weeks waiting for central security review. Security slows delivery rather than enabling it.
No criteria exist for what a service needs before going live. New services deploy to production with no observability in place.
Deployments pass every pre-production check but break when they reach production.
Services holding in-memory state drop connections, lose sessions, or cause cache invalidation spikes on every redeployment.
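The usual fix is to move session state out of the process into a shared store, so restarting or redeploying the service loses nothing. A sketch using SQLite as a stand-in for an external store such as Redis or a database (names and schema are illustrative):

```python
import json
import sqlite3

# Sessions live outside the application process; any replica, old or new,
# can read them. An in-memory SQLite db stands in for the shared store here.
store = sqlite3.connect(":memory:")
store.execute("CREATE TABLE sessions (id TEXT PRIMARY KEY, data TEXT)")

def save_session(session_id: str, data: dict) -> None:
    store.execute(
        "INSERT OR REPLACE INTO sessions VALUES (?, ?)",
        (session_id, json.dumps(data)),
    )

def load_session(session_id: str) -> dict:
    row = store.execute(
        "SELECT data FROM sessions WHERE id = ?", (session_id,)
    ).fetchone()
    return json.loads(row[0]) if row else {}

save_session("abc123", {"user": "ada", "cart": ["book"]})
# A freshly started process after a redeploy sees the same session:
print(load_session("abc123"))  # {'user': 'ada', 'cart': ['book']}
```

With state externalized, instances become interchangeable, which is what makes rolling deployments and horizontal scaling safe.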
Work is complete from the development team’s perspective but cannot ship until a separate QA team tests and approves it. QA has its own queue and schedule.