Branching and Integration

Anti-patterns in branching, merging, and integration that prevent continuous integration and delivery.

These anti-patterns affect how code flows from a developer’s machine to the shared trunk. They create painful merges, delayed integration, and broken builds that prevent the steady stream of small, verified changes that continuous delivery requires.

1 - Long-Lived Feature Branches

Branches that live for weeks or months, turning merging into a project in itself. The longer the branch, the bigger the risk.

Category: Branching & Integration | Quality Impact: Critical

What This Looks Like

A developer creates a branch to build a feature. The feature is bigger than expected. Days pass, then weeks. Other developers are doing the same thing on their own branches. Trunk moves forward while each branch diverges further from it. Nobody integrates until the feature is “done” - and by then, the branch is hundreds or thousands of lines different from where it started.

When the merge finally happens, it is an event. The developer sets aside half a day - sometimes more - to resolve conflicts, re-test, and fix the subtle breakages that come from combining weeks of divergent work. Other developers delay their merges to avoid the chaos. The team’s Slack channel lights up with “don’t merge right now, I’m resolving conflicts.” Every merge creates a window where trunk is unstable.

Common variations:

  • The “feature branch” that is really a project. A branch named feature/new-checkout that lasts three months. Multiple developers commit to it. It has its own bug fixes and its own merge conflicts. It is a parallel fork of the product.
  • The “I’ll merge when it’s ready” branch. The developer views the branch as a private workspace. Merging to trunk is the last step, not a daily practice. The branch falls further behind each day but the developer does not notice until merge day.
  • The per-sprint branch. Each sprint gets a branch. All sprint work goes there. The branch is merged at sprint end and a new one is created. Integration happens every two weeks instead of every day.
  • The release isolation branch. A branch is created weeks before a release to “stabilize” it. Bug fixes must be applied to both the release branch and trunk. Developers maintain two streams of work simultaneously.
  • The “too risky to merge” branch. The branch has diverged so far that nobody wants to attempt the merge. It sits for weeks while the team debates how to proceed. Sometimes it is abandoned entirely and the work is restarted.

The telltale sign: if merging a branch requires scheduling a block of time, notifying the team, or hoping nothing goes wrong - branches are living too long.

Why This Is a Problem

Long-lived feature branches appear safe. Each developer works in isolation, free from interference. But that isolation is precisely the problem. It delays integration, hides conflicts, and creates compounding risk that makes every aspect of delivery harder.

It reduces quality

When a branch lives for weeks, code review becomes a formidable task. The reviewer faces hundreds of changed lines across dozens of files. Meaningful review is nearly impossible at that scale - studies consistently show that review effectiveness drops sharply after 200-400 lines of change. Reviewers skim, approve, and hope for the best. Subtle bugs, design problems, and missed edge cases survive because nobody can hold the full changeset in their head.

The isolation also means developers make decisions in a vacuum. Two developers on separate branches may solve the same problem differently, introduce duplicate abstractions, or make contradictory assumptions about shared code. These conflicts are invisible until merge time, when they surface as bugs rather than design discussions.

With short-lived branches or trunk-based development, changes are small enough for genuine review. A 50-line change gets careful attention. Design disagreements surface within hours, not weeks. The team maintains a shared understanding of how the codebase is evolving because they see every change as it happens.

It increases rework

Long-lived branches guarantee merge conflicts. Two developers editing the same file on different branches will not discover the collision until one of them merges. The second developer must then reconcile their changes against an unfamiliar modification, often without understanding the intent behind it. This manual reconciliation is rework in its purest form - effort spent making code work together that would have been unnecessary if the developers had integrated daily.

The rework compounds. A developer who rebases a three-week branch against trunk may introduce bugs during conflict resolution. Those bugs require debugging. The debugging reveals an assumption that was valid three weeks ago but is no longer true because trunk has changed. Now the developer must rethink and partially rewrite their approach. What should have been a day of work becomes a week.

When developers integrate daily, conflicts are small - typically a few lines. They are resolved in minutes with full context because both changes are fresh. The cost of integration stays constant rather than growing exponentially with branch age.

It makes delivery timelines unpredictable

A two-day feature on a long-lived branch takes two days to build and an unknown number of days to merge. The merge might take an hour. It might take two days. It might surface a design conflict that requires reworking the feature. Nobody knows until they try. This makes it impossible to predict when work will actually be done.

The queuing effect makes it worse. When several branches need to merge, they form a queue. The first merge changes trunk, which means the second branch needs to rebase against the new trunk before merging. If the second merge is large, it changes trunk again, and the third branch must rebase. Each merge invalidates the work done to prepare the next one. Teams that “schedule” their merges are admitting that integration is so costly it needs coordination.

Project managers learn they cannot trust estimates. “The feature is code-complete” does not mean it is done - it means the merge has not started yet. Stakeholders lose confidence in the team’s ability to deliver on time because “done” and “deployed” are separated by an unpredictable gap.

With continuous integration, there is no merge queue. Each developer integrates small changes throughout the day. The time from “code-complete” to “integrated and tested” is minutes, not days. Delivery dates become predictable because the integration cost is near zero.

It hides risk until the worst possible moment

Long-lived branches create an illusion of progress. The team has five features “in development,” each on its own branch. The features appear to be independent and on track. But the risk is hidden: none of these features have been proven to work together. The branches may contain conflicting changes, incompatible assumptions, or integration bugs that only surface when combined.

All of that hidden risk materializes at merge time - the moment closest to the planned release date, when the team has the least time to deal with it. A merge conflict discovered three weeks before release is an inconvenience. A merge conflict discovered the day before release is a crisis. Long-lived branches systematically push risk discovery to the latest possible point.

Continuous integration surfaces risk immediately. If two changes conflict, the team discovers it within hours, while both changes are small and the authors still have full context. Risk is distributed evenly across the development cycle instead of concentrated at the end.

Impact on continuous delivery

Continuous delivery requires that trunk is always in a deployable state and that any commit can be released at any time. Long-lived feature branches make both impossible. Trunk cannot be deployable if large, poorly validated merges land periodically and destabilize it. You cannot release any commit if the latest commit is a 2,000-line merge that has not been fully tested.

Long-lived branches also prevent continuous integration - the practice of integrating every developer’s work into trunk at least once per day. Without continuous integration, there is no continuous delivery. The pipeline cannot provide fast feedback on changes that exist only on private branches. The team cannot practice deploying small changes because there are no small changes - only large merges separated by days or weeks of silence.

Every other CD practice - automated testing, pipeline automation, small batches, fast feedback - is undermined when the branching model prevents frequent integration.

How to Fix It

Step 1: Measure your current branch lifetimes

Before changing anything, understand the baseline. For every open branch:

  1. Record when it was created and when (or if) it was last merged.
  2. Calculate the age in days.
  3. Note the number of changed files and lines.

Most teams are shocked by their own numbers. A branch they think of as “a few days old” is often two or three weeks old. Making the data visible creates urgency.
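The audit above is easy to script. A minimal sketch in Python, assuming the branch names and last-commit timestamps have already been collected (for example from `git for-each-ref --format='%(refname:short) %(committerdate:unix)' refs/heads`); the one-day limit is the target suggested below:

```python
from datetime import datetime, timezone

def branch_age_report(branches, now=None, limit_days=1):
    """Given (branch name, Unix timestamp of last commit) pairs, return
    (name, age in days, exceeds limit) rows, oldest branch first."""
    if now is None:
        now = datetime.now(timezone.utc).timestamp()
    report = [(name, round((now - ts) / 86400, 1), (now - ts) / 86400 > limit_days)
              for name, ts in branches]
    return sorted(report, key=lambda row: -row[1])
```

Regenerated daily and posted where the team can see it, a report like this is usually enough to make the data visible.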

Set a target: no branch older than one day. This will feel aggressive. That is the point.

Step 2: Set a branch lifetime limit and make it visible

Agree as a team on a maximum branch lifetime. Start with two days if one day feels too aggressive. The important thing is to pick a number and enforce it.

Make the limit visible:

  • Add a dashboard or report that shows branch age for every open branch.
  • Flag any branch that exceeds the limit in the daily standup.
  • If your CI tool supports it, add a check that warns when a branch exceeds 24 hours.

The limit creates a forcing function. Developers must either integrate quickly or break their work into smaller pieces. Both outcomes are desirable.
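The CI check mentioned above can be a small script. A hedged sketch (the 24-hour limit is the team agreement, not a universal rule; the divergence timestamp would come from your CI system or from `git log -1 --format=%ct` on the merge-base with trunk):

```python
MAX_AGE_HOURS = 24  # team-agreed branch lifetime limit; adjust to your working agreement

def check_branch_age(branched_at_ts, now_ts, max_age_hours=MAX_AGE_HOURS):
    """Return (ok, age_hours) for a branch that diverged from trunk at
    branched_at_ts (Unix seconds). A CI wrapper would warn or fail the
    check (exit nonzero) when ok is False."""
    age_hours = (now_ts - branched_at_ts) / 3600
    return age_hours <= max_age_hours, age_hours
```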

Step 3: Break large features into small, integrable changes (Weeks 2-3)

The most common objection is “my feature is too big to merge in a day.” This is true when the feature is designed as a monolithic unit. The fix is decomposition:

  • Branch by abstraction. Introduce a new code path alongside the old one. Merge the new code path in small increments. Switch over when ready.
  • Feature flags. Hide incomplete work behind a toggle so it can be merged to trunk without being visible to users.
  • Keystone interface pattern. Build all the back-end work first, merge it incrementally, and add the UI entry point last. The feature is invisible until the keystone is placed.
  • Vertical slices. Deliver the feature as a series of thin, user-visible increments instead of building all layers at once.

Each technique lets developers merge daily without exposing incomplete functionality. The feature grows incrementally on trunk rather than in isolation on a branch.
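As one concrete illustration, a feature flag can be as simple as a guarded code path. A minimal sketch, assuming an in-process flag store (real teams often use a flag service or configuration system; the checkout functions here are hypothetical):

```python
FLAGS = {"new_checkout": False}  # merged to trunk, but off for users

def legacy_checkout(cart):
    return {"total": sum(cart), "flow": "legacy"}

def new_checkout(cart):
    # Incomplete work lands here in small daily merges; the flag
    # keeps it invisible until the team is ready to switch over.
    return {"total": sum(cart), "flow": "new"}

def checkout(cart):
    handler = new_checkout if FLAGS["new_checkout"] else legacy_checkout
    return handler(cart)
```

Branch by abstraction has the same shape: a seam (`checkout` here) routes between the old and new implementations, so the switch-over is a one-line change instead of a merge event.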

Step 4: Adopt short-lived branches with daily integration (Weeks 3-4)

Change the team’s workflow:

  1. Create a branch from trunk.
  2. Make a small, focused change.
  3. Get a quick review (the change is small, so review takes minutes).
  4. Merge to trunk. Delete the branch.
  5. Repeat.

Each branch lives for hours, not days. If a branch cannot be merged by end of day, it is too large. The developer should either merge what they have (using one of the decomposition techniques above) or discard the branch and start smaller tomorrow.

Pair this with the team’s code review practice. Small changes enable fast reviews, and fast reviews enable short-lived branches. The two practices reinforce each other.

Step 5: Address the objections (Weeks 3-4)

| Objection | Response |
| --- | --- |
| “My feature takes three weeks - I can’t merge in a day” | The feature takes three weeks. The branch does not have to. Use branch by abstraction, feature flags, or vertical slicing to merge daily while the feature grows incrementally on trunk. |
| “Merging incomplete code to trunk is dangerous” | Incomplete code behind a feature flag or without a UI entry point is not dangerous - it is invisible. The danger is a three-week branch that lands as a single untested merge. |
| “I need my branch to keep my work separate from other changes” | That separation is the problem. You want to discover conflicts early, when they are small and cheap to fix. A branch that hides conflicts for three weeks is not protecting you - it is accumulating risk. |
| “We tried short-lived branches and it was chaos” | Short-lived branches require supporting practices: feature flags, good decomposition, fast CI, and a culture of small changes. Without those supports, it will feel chaotic. The fix is to build the supports, not to retreat to long-lived branches. |
| “Code review takes too long for daily merges” | Small changes take minutes to review, not hours. If reviews are slow, that is a review process problem, not a branching problem. See PRs Waiting for Review. |

Step 6: Continuously tighten the limit

Once the team is comfortable with two-day branches, reduce the limit to one day. Then push toward integrating multiple times per day. Each reduction surfaces new problems - features that are hard to decompose, tests that are slow, reviews that are bottlenecked - and each problem is worth solving because it blocks the flow of work.

The goal is continuous integration: every developer integrates to trunk at least once per day. At that point, “branches” are just short-lived workspaces that exist for hours, and merging is a non-event.

Measuring Progress

| Metric | What to look for |
| --- | --- |
| Average branch lifetime | Should decrease to under one day |
| Maximum branch lifetime | No branch should exceed two days |
| Integration frequency | Should increase toward at least daily per developer |
| Merge conflict frequency | Should decrease as branches get shorter |
| Merge duration | Should decrease from hours to minutes |
| Development cycle time | Should decrease as integration overhead drops |
| Lines changed per merge | Should decrease as changes get smaller |

Team Discussion

Use these questions in a retrospective to explore how this anti-pattern affects your team:

  • What is the average age of open branches in our repository right now?
  • When was our last painful merge? What made it painful - time, conflicts, or broken tests?
  • If every branch had to merge within two days, what would we need to change about how we slice work?

2 - Integration Deferred

The build has been red for weeks and nobody cares. “CI” means a build server exists, not that anyone actually integrates continuously.

Category: Branching & Integration | Quality Impact: Critical

What This Looks Like

The team has a build server. It runs after every push. There is a dashboard somewhere that shows build status. But the build has been red for three weeks and nobody has mentioned it. Developers push code, glance at the result if they remember, and move on. When someone finally investigates, the failure is in a test that broke weeks ago and nobody can remember which commit caused it.

The word “continuous” has lost its meaning. Developers do not integrate their work into trunk daily - they work on branches for days or weeks and merge when the feature feels done. The build server runs, but nobody treats a red build as something that must be fixed immediately. There is no shared agreement that trunk should always be green. “CI” is a tool in the infrastructure, not a practice the team follows.

Common variations:

  • The build server with no standards. A CI server runs on every push, but there are no rules about what happens when it fails. Some developers fix their failures. Others do not. The build flickers between green and red all day, and nobody trusts the signal.
  • The nightly build. The build runs once per day, overnight. Developers find out the next morning whether yesterday’s work broke something. By then they have moved on to new work and lost context on what they changed.
  • The “CI” that is just compilation. The build server compiles the code and nothing else. No tests run. No static analysis. The build is green as long as the code compiles, which tells the team almost nothing about whether the software works.
  • The manually triggered build. The build server exists, but it does not run on push. After pushing code, the developer must log into the CI server and manually start the build and tests. When developers are busy or forget, their changes sit untested. When multiple pushes happen between triggers, a failure could belong to any of them. The feedback loop depends entirely on developer discipline rather than automation.
  • The branch-only build. CI runs on feature branches but not on trunk. Each branch builds in isolation, but nobody knows whether the branches work together until merge day. Trunk is not continuously validated.
  • The ignored dashboard. The CI dashboard exists but is not displayed anywhere the team can see it. Nobody checks it unless they are personally waiting for a result. Failures accumulate silently.

The telltale sign: if you can ask “how long has the build been red?” and nobody knows the answer, continuous integration is not happening.

Why This Is a Problem

Continuous integration is not a tool - it is a practice. The practice requires that every developer integrates to a shared trunk at least once per day and that the team treats a broken build as the highest-priority problem. Without the practice, the build server is just infrastructure generating notifications that nobody reads.

It reduces quality

When the build is allowed to stay red, the team loses its only automated signal that something is wrong. A passing build is supposed to mean “the software works as tested.” A failing build is supposed to mean “stop and fix this before doing anything else.” When failures are ignored, that signal becomes meaningless. Developers learn that a red build is background noise, not an alarm.

Once the build signal is untrusted, defects accumulate. A developer introduces a bug on Monday. The build fails, but it was already red from an unrelated failure, so nobody notices. Another developer introduces a different bug on Tuesday. By Friday, trunk has multiple interacting defects and nobody knows when they were introduced or by whom. Debugging becomes archaeology.

When the team practices continuous integration, a red build is rare and immediately actionable. The developer who broke it knows exactly which change caused the failure because they committed minutes ago. The fix is fast because the context is fresh. Defects are caught individually, not in tangled clusters.

It increases rework

Without continuous integration, developers work in isolation for days or weeks. Each developer assumes their code works because it passes on their machine or their branch. But they are building on assumptions about shared code that may already be outdated. When they finally integrate, they discover that someone else changed an API they depend on, renamed a class they import, or modified behavior they rely on.

The rework cascade is predictable. Developer A changes a shared interface on Monday. Developer B builds three days of work on the old interface. On Thursday, developer B tries to integrate and discovers the conflict. Now they must rewrite three days of code to match the new interface. If they had integrated on Monday, the conflict would have been a five-minute fix.

Teams that integrate continuously discover conflicts within hours, not days. The rework is measured in minutes because the conflicting changes are small and the developers still have full context on both sides. The total cost of integration stays low and constant instead of spiking unpredictably.

It makes delivery timelines unpredictable

A team without continuous integration cannot answer the question “is the software releasable right now?” Trunk may or may not compile. Tests may or may not pass. The last successful build may have been a week ago. Between then and now, dozens of changes have landed without anyone verifying that they work together.

This creates a stabilization period before every release. The team stops feature work, fixes the build, runs the test suite, and triages failures. This stabilization takes an unpredictable amount of time - sometimes a day, sometimes a week - because nobody knows how many problems have accumulated since the last known-good state.

With continuous integration, trunk is always in a known state. If the build is green, the team can release. If the build is red, the team knows exactly which commit broke it and how long ago. There is no stabilization period because the code is continuously stabilized. Release readiness is a fact that can be checked at any moment, not a state that must be achieved through a dedicated effort.

It masks the true cost of integration problems

When the build is permanently broken or rarely checked, the team cannot see the patterns that would tell them where their process is failing. Is the build slow? Nobody notices because nobody waits for it. Are certain tests flaky? Nobody notices because failures are expected. Do certain parts of the codebase cause more breakage than others? Nobody notices because nobody correlates failures to changes.

These hidden problems compound. The build gets slower because nobody is motivated to speed it up. Flaky tests multiply because nobody quarantines them. Brittle areas of the codebase stay brittle because the feedback that would highlight them is lost in the noise.

When the team practices CI and treats a red build as an emergency, every friction point becomes visible. A slow build annoys the whole team daily, creating pressure to optimize it. A flaky test blocks everyone, creating pressure to fix or remove it. The practice surfaces the problems. Without the practice, the problems are invisible and grow unchecked.

Impact on continuous delivery

Continuous integration is the foundation that every other CD practice is built on. Without it, the pipeline cannot give fast, reliable feedback on every change. Automated testing is pointless if nobody acts on the results. Deployment automation is pointless if the artifact being deployed has not been validated. Small batches are pointless if the batches are never verified to work together.

A team that does not practice CI cannot practice CD. The two are not independent capabilities that can be adopted in any order. CI is the prerequisite. Every hour that the build stays red is an hour during which the team has no automated confidence that the software works. Continuous delivery requires that confidence to exist at all times.

How to Fix It

Step 1: Fix the build and agree it stays green

Before anything else, get trunk to green. This is the team’s first and most important commitment.

  1. Assign the broken build as the highest-priority work item. Stop feature work if necessary.
  2. Triage every failure: fix it, quarantine it to a non-blocking suite, or delete the test if it provides no value.
  3. Once the build is green, make the team agreement explicit: a red build is the team’s top priority. Whoever broke it fixes it. If they cannot fix it within 15 minutes, they revert their change and try again with a smaller commit.

Write this agreement down. Put it in the team’s working agreements document. If you do not have one, start one now. The agreement is simple: we do not commit on top of a red build, and we do not leave a red build for someone else to fix.

Step 2: Make the build visible

The build status must be impossible to ignore:

  • Display the build dashboard on a large monitor visible to the whole team.
  • Configure notifications so that a broken build alerts the team immediately - in the team chat channel, not in individual email inboxes.
  • If the build breaks, the notification should identify the commit and the committer.

Visibility creates accountability. When the whole team can see that the build broke at 2:15 PM and who broke it, social pressure keeps people attentive. When failures are buried in email notifications, they are easily ignored.
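A sketch of what such a notification might contain (the message wording is ours; the posting mechanism, such as a chat webhook, is assumed and not shown):

```python
def build_failure_alert(commit_sha, committer, failed_stage):
    """Format a team-chat alert for a broken build. The CI server would
    post this to the shared channel, not to individual inboxes."""
    return (f"Build RED at stage '{failed_stage}': commit {commit_sha[:8]} "
            f"by {committer}. Fix or revert is now the top priority.")
```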

Step 3: Require integration at least once per day

The “continuous” in continuous integration means at least daily, and ideally multiple times per day. Set the expectation:

  • Every developer integrates their work to trunk at least once per day.
  • If a developer has been working on a branch for more than a day without integrating, that is a problem to discuss at standup.
  • Track integration frequency per developer per day. Make it visible alongside the build dashboard.

This will expose problems. Some developers will say their work is not ready to integrate. That is a decomposition problem - the work is too large. Some will say they cannot integrate because the build is too slow. That is a pipeline problem. Each problem is worth solving. See Long-Lived Feature Branches for techniques to break large work into daily integrations.
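Integration frequency can be computed from trunk history. A minimal sketch, assuming commit metadata has already been parsed (for example from `git log --first-parent --format='%ae %ct' main`; the trunk branch name is an assumption):

```python
from collections import defaultdict
from datetime import datetime, timezone

def integrations_per_day(commits):
    """Given (author email, Unix timestamp) pairs for trunk commits,
    count integrations per (author, UTC calendar day)."""
    counts = defaultdict(int)
    for author, ts in commits:
        day = datetime.fromtimestamp(ts, tz=timezone.utc).date().isoformat()
        counts[(author, day)] += 1
    return dict(counts)
```

Any (author, day) pair with a count of zero or a developer missing for a day is the conversation starter for standup.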

Step 4: Make the build fast enough to provide useful feedback (Weeks 2-3)

A build that takes 45 minutes is a build that developers will not wait for. Target under 10 minutes for the primary feedback loop:

  • Identify the slowest stages and optimize or parallelize them.
  • Move slow integration tests to a secondary pipeline that runs after the fast suite passes.
  • Add build caching so that unchanged dependencies are not recompiled on every run.
  • Run tests in parallel if they are not already.

The goal is a fast feedback loop: the developer pushes, waits a few minutes, and knows whether their change works with everything else. If they have to wait 30 minutes, they will context-switch, and the feedback loop breaks.

Step 5: Address the objections (Weeks 3-4)

| Objection | Response |
| --- | --- |
| “The build is too slow to fix every red immediately” | Then the build is too slow, and that is a separate problem to solve. A slow build is not a reason to ignore failures - it is a reason to invest in making the build faster. |
| “Some tests are flaky - we can’t treat every failure as real” | Quarantine flaky tests into a non-blocking suite. The blocking suite must be deterministic. If a test in the blocking suite fails, it is real until proven otherwise. |
| “We can’t integrate daily - our features take weeks” | The features take weeks. The integrations do not have to. Use branch by abstraction, feature flags, or vertical slicing to integrate partial work daily. |
| “Fixing someone else’s broken build is not my job” | It is the whole team’s job. A red build blocks everyone. If the person who broke it is unavailable, someone else should revert or fix it. The team owns the build, not the individual. |
| “We have CI - the build server runs on every push” | A build server is not CI. CI is the practice of integrating frequently and keeping the build green. If the build has been red for a week, you have a build server, not continuous integration. |

Step 6: Build the habit

Continuous integration is a daily discipline, not a one-time setup. Reinforce the habit:

  • Review integration frequency in retrospectives. If it is dropping, ask why.
  • Celebrate streaks of consecutive green builds. Make it a point of team pride.
  • When a developer reverts a broken commit quickly, recognize it as the right behavior - not as a failure.
  • Periodically audit the build: is it still fast? Are new flaky tests creeping in? Is the test coverage meaningful?

The goal is a team culture where a red build feels wrong - like an alarm that demands immediate attention. When that instinct is in place, CI is no longer a process being followed. It is how the team works.

Measuring Progress

| Metric | What to look for |
| --- | --- |
| Build pass rate | Percentage of builds that pass on first run - should be above 95% |
| Time to fix a broken build | Should be under 15 minutes, with revert as the fallback |
| Integration frequency | At least one integration per developer per day |
| Build duration | Should be under 10 minutes for the primary feedback loop |
| Longest period with a red build | Should be measured in minutes, not hours or days |
| Development cycle time | Should decrease as integration overhead drops and stabilization periods disappear |

3 - Cherry-Pick Releases

Hand-selecting specific commits for release instead of deploying trunk, indicating trunk is never trusted to be deployable.

Category: Branching & Integration | Quality Impact: High

What This Looks Like

When a release is approaching, the team does not simply deploy trunk. Instead, someone - usually a release engineer or a senior developer - reviews the commits that have landed since the last release and selects which ones should go out. Some commits are approved. Others are held back because the feature is not ready, the ticket was not signed off, or there is uncertainty about whether the code is safe. The selected commits are cherry-picked onto a release branch and tested there before deployment.

The decision meeting runs long. People argue about which commits are safe to include. The release engineer must understand the implications of including Commit A without Commit B, on which it may depend. Sometimes a cherry-pick causes a conflict because the selected commits assumed an ordering that is now violated. The release branch needs its own fixes. By the time the release is ready, the release branch has diverged from trunk, and the next release cycle starts with the same conversation.

Common variations:

  • The inclusion whitelist. Only commits explicitly tagged or approved for the release are included. Everything else is held back by default. The tagging process is a separate workflow that developers forget, creating releases with missing changes that were expected to be included.
  • The exclusion blacklist. Trunk is the starting point, but specific commits are removed because they are “not ready.” Removing a commit that has dependencies is often impossible cleanly, requiring manual reversal.
  • The feature-complete gate. Commits are held back until the product manager approves the feature as complete. Trunk accumulates undeployable partial work. The gate is the symptom; the incomplete work being merged to trunk is the root cause.
  • The hotfix bypass. A critical bug is fixed on the release branch but the cherry-pick back to trunk is forgotten. The next release reintroduces the bug because trunk never had the fix.

The telltale sign: the team has a meeting or a process to decide which commits go into a release. If you have to decide, trunk is not deployable.

Why This Is a Problem

Cherry-pick releases are a workaround for a more fundamental problem: trunk is not trusted to be in a deployable state at all times. The cherry-pick process does not solve that problem - it works around it while making it more expensive and harder to fix.

It reduces quality

Cherry-picking changes the context in which code is tested. Trunk has commits in the order they were written, with all their dependencies. A cherry-picked release branch has a subset of those commits in a different order, possibly with conflicts and manual resolutions layered on top. The release branch is a different artifact than trunk: tests that pass on trunk may not pass, or may not be sufficient, for the release branch. The result is bugs that never existed on trunk appearing on the release branch, because the cherry-picked combination of commits was never tested as a coherent system. That is a class of defect the team creates by doing the cherry-pick.

The problem intensifies when the cherry-picked set creates implicit dependencies. Commit A changed a shared utility function that Commit C also uses. Commit B was excluded. Without Commit B, the utility function behaves differently than it does on trunk. The release branch has a combination of code that never existed as a coherent state during development.
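A toy illustration of that failure mode (the price-formatting example is ours, not from any real codebase): suppose Commit A changes a shared helper to take cents instead of dollars, Commit B updates the existing callers, and Commit C is new code written against the cents convention. Cherry-picking A and C while excluding B ships a combination that never existed on trunk:

```python
# After Commit A: the shared helper now expects an amount in cents.
def format_price(cents):
    return f"${cents / 100:.2f}"

# Trunk (A + B + C): callers updated by Commit B pass cents.
assert format_price(1999) == "$19.99"

# Cherry-picked release (A + C, B excluded): an old caller still passes
# dollars, producing a bug that never existed on trunk.
assert format_price(19.99) == "$0.20"
```

Every test of the helper in isolation passes on both branches; only the untested combination is broken.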

When trunk is always deployable, the release is simply a promotion of a tested, coherent state. Every commit on trunk was tested in the context of all previous commits. There are no cherry-pick combinations to reason about.

It increases rework

Each cherry-pick is a manual operation. When commits have conflicts, the conflict must be resolved manually. When the release branch needs a fix, the fix must often be applied to both the release branch and trunk, a process known as backporting. Backporting is frequently forgotten, which means the same bug reappears in the next release.

The rework is not just the cherry-pick operations themselves. It includes the review cycles: the meeting to decide which commits are included, the re-testing of the release branch as a distinct artifact, the investigation of bugs that appear only on the release branch, and the backport work. All of that effort is overhead that produces no new functionality.

When trunk is always deployable, the release process is promotion and verification - testing a state that already exists and was already tested. There is no branch-specific rework because there is no branch.

It makes delivery timelines unpredictable

The cherry-pick decision process cannot be time-boxed reliably. The release engineering team does not know in advance how many commits will need review, how many conflicts will arise, or how much the release branch will diverge from trunk. The release date slips not because development is late but because the release process itself takes longer than expected.

Product managers and stakeholders experience this as “the release is ready, so why isn’t it deployed?” The code is complete. The features are tested. But the team is still in the cherry-pick and release-branch-testing phase, which can add days to what appears complete from the outside.

The process also creates a queuing effect. When the release branch diverges far enough from trunk, the divergence blocks new development on trunk because developers are unsure whether their changes will conflict with the release branch activity. Work pauses while the release is sorted out. The pause is unplanned and difficult to budget in advance.

It signals a broken relationship with trunk

Each release cycle spent cherry-picking is a cycle not spent fixing the underlying problem. The process contains the damage while the root cause grows more expensive to address.

Cherry-pick releases are a symptom, not a root cause. The reason the team cherry-picks is that trunk is not trusted. Trunk is not trusted because incomplete features are merged before they are safe to deploy, because the automated test suite does not provide sufficient confidence, or because the team has no mechanism for hiding partially complete work from users. The cherry-pick process is a compensating control that addresses the symptom while the root cause persists.

The cherry-pick process grows more expensive as more code is held back from trunk. Eventually the team has a de-facto release branch strategy indistinguishable from the anti-patterns described in Release Branches with Extensive Backporting.

Impact on continuous delivery

CD requires that every commit to trunk is potentially releasable. Cherry-pick releases prove the opposite: most commits are not releasable, and it takes a manual curation process to assemble a releasable set. That is the inverse of CD.

The cherry-pick process also makes deployment frequency a discrete, expensive event rather than a routine operation. CD requires that deployment is cheap enough to do many times per day. If the deployment process includes a review meeting, a branch creation, a targeted test cycle, and a backport operation, it is not cheap. Teams with cherry-pick releases are typically limited to weekly or monthly releases, which means bugs take weeks to reach users and business value is delayed proportionally.

How to Fix It

Eliminating cherry-pick releases requires making trunk trustworthy. The practices that do this - feature flags, comprehensive automated testing, small batches, trunk-based development - are the same practices that underpin continuous delivery.

Step 1: Understand why commits are currently being held back

Do not start by changing the branching workflow. Start by understanding the reasons commits are excluded from releases.

  1. For the last three to five releases, list every commit that was held back and why.
  2. Group the reasons: incomplete features, unreviewed changes, failed tests, stakeholder hold, uncertain dependencies, other.
  3. The distribution tells you where to focus. If most holds are “incomplete feature,” the fix is feature flags. If most holds are “failed tests,” the fix is test reliability. If most holds are “stakeholder approval needed,” the fix is shifting the approval gate earlier.

Document the findings. Share them with the team and get agreement on which root cause to address first.
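The raw list of held-back commits can come straight from git history: everything reachable from trunk but not from the last release branch was, by definition, held back. A minimal sketch in a throwaway repository (branch and commit messages are illustrative):

```shell
#!/bin/sh
# Throwaway repository; names are illustrative.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q && git checkout -qb trunk
git config user.email dev@example.com && git config user.name dev

for msg in "base" "fix: rounding error" "feat: incomplete checkout" "fix: request timeout"; do
  echo "$msg" >> log.txt && git add log.txt && git commit -qm "$msg"
done
git branch -q release trunk~2        # the last release was cut two commits ago

# Everything on trunk that the release never shipped:
git log --oneline --no-merges release..trunk
git rev-list --count release..trunk  # prints 2
```

On a real repository, run the same range query against each of the last few release branches or tags, then annotate each commit with the reason it was excluded.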

Step 2: Introduce feature flags for incomplete work (Weeks 2-4)

The most common reason commits are held back is that the feature is not ready for users. Feature flags decouple deployment from release. Incomplete work can merge to trunk and be deployed to production while remaining invisible to users.

  1. Choose a simple feature flag mechanism. A configuration file read at startup is sufficient to start.
  2. For the next feature that would have been held back from a release, wrap the user-facing entry point in a flag.
  3. Merge to trunk and deploy. Verify that the feature is invisible when the flag is off.
  4. When the feature is ready, flip the flag. No deployment required.

Once the team sees that incomplete features do not require cherry-picking, the pull toward feature flags grows naturally. Each held-back commit is a candidate for the flag treatment.
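The "configuration file read at startup" mechanism can be as small as a key=value file. A minimal sketch, with a hypothetical flag name; real systems would read the same file from application code:

```shell
#!/bin/sh
# flags.env: one flag per line, read once at startup.
# FEATURE_NEW_CHECKOUT is a hypothetical flag name.
cat > flags.env <<'EOF'
FEATURE_NEW_CHECKOUT=false
EOF

. ./flags.env   # load flags at startup

if [ "${FEATURE_NEW_CHECKOUT:-false}" = "true" ]; then
  echo "rendering new checkout"
else
  echo "rendering old checkout"   # incomplete work ships but stays invisible
fi
# prints: rendering old checkout
```

Flipping the value in `flags.env` and restarting releases the feature with no deployment, which is exactly the decoupling step 2 describes.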

Step 3: Strengthen the automated test suite (Weeks 2-5)

Commits are also held back because of uncertainty about their safety. That uncertainty is a signal that the automated test suite is not providing sufficient confidence.

  1. Identify the test gaps that correspond to the uncertainty. If the team is unsure whether a change affects the payment flow, are there tests for the payment flow?
  2. Add tests for the high-risk paths that are currently unverified.
  3. Set a requirement: if you cannot write a test that proves your change is safe, the change is not ready to merge.

The goal is a suite that makes the team confident enough in every green build to deploy it. That confidence is what makes trunk deployable.

Step 4: Move stakeholder approval before merge

If commits are held back because product managers have not signed off, the approval gate is in the wrong place. Move it to before trunk integration.

  1. Product review happens on a branch, before merge.
  2. Once approved, the branch is merged to trunk.
  3. Trunk is always in an approved state.

This is a workflow change, not a technical change. It requires that product managers review work in progress rather than waiting for a release candidate. Most find this easier, not harder, because they can give feedback while the developer is still working rather than after everything is frozen.

Step 5: Deploy trunk directly on a fixed cadence (Weeks 4-6)

Once the holds are addressed - features flagged, tests strengthened, approvals moved earlier - run an experiment: deploy trunk directly without a cherry-pick step.

  1. Pick a low-stakes deployment window.
  2. Deploy trunk as-is. Do not cherry-pick anything.
  3. Monitor the deployment. If issues arise, diagnose their source. Are they from previously-held commits? From test gaps? From incomplete feature flag coverage?

Each deployment that succeeds without cherry-picking builds confidence. Each issue is a specific thing to fix, not a reason to revert to cherry-picking.

Step 6: Retire the cherry-pick process

Once trunk deployments have been reliable for several cycles, formalize the change. Remove the cherry-pick step from the deployment runbook. Make “deploy trunk” the documented and expected process.

Common objections:

  • "We have commits on trunk that are not ready to go out" - Those commits should be behind feature flags. If they are not, that is the problem to fix. Every commit that merges to trunk should be deployable.
  • "Product has to approve features before they go live" - Approval should happen before the feature is activated - either before merge (flip the flag after approval) or by controlling the flag in production. Holding a deployment hostage to approval couples your release cadence to a process that can be decoupled.
  • "What if a cherry-picked commit breaks the release branch?" - It will. Repeatedly. That is the cost of the process you are describing. The alternative is to make trunk deployable so you never need the release branch.
  • "Our release process requires auditing which commits went out" - Deploy trunk and record the commit hash. The audit trail is a git log, not a cherry-pick selection record.
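The audit point is concrete: if every deploy is trunk at a recorded commit, "what went out" becomes a log query. A minimal sketch in a throwaway repository, with illustrative tag names:

```shell
#!/bin/sh
# Throwaway repository; tag names are illustrative.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q && git checkout -qb trunk
git config user.email dev@example.com && git config user.name dev

echo one > f.txt && git add f.txt && git commit -qm "first change"
git tag -a deploy-001 -m "deployed trunk"      # recorded by the pipeline
echo two > f.txt && git commit -qam "second change"
echo three > f.txt && git commit -qam "third change"
git tag -a deploy-002 -m "deployed trunk"

# "What went out in deploy 002?" is a log query, not a curation record:
git log --oneline deploy-001..deploy-002
```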

Measuring Progress

  • Commits held back per release: should decrease toward zero
  • Release frequency: should increase as deployment becomes a lower-ceremony operation
  • Release branch divergence from trunk: should decrease and eventually disappear
  • Lead time: should decrease as commits reach production without waiting for a curation cycle
  • Change fail rate: should remain stable or improve as trunk becomes reliably deployable
  • Deployment process duration: should decrease as manual cherry-pick steps are removed

4 - Release Branches with Extensive Backporting

Maintaining multiple release branches and manually backporting fixes creates exponential overhead as branches multiply.

Category: Branching & Integration | Quality Impact: High

What This Looks Like

The team has branches named release/2.1, release/2.2, and release/2.3, each representing a version in active use. When a developer fixes a bug on trunk, the fix needs to go into all three release branches because customers are running all three versions. The developer fixes the bug once, then applies the same fix three times via cherry-pick, one branch at a time. Each cherry-pick requires a separate review, a separate CI run, and a separate deployment.

If the bug fix applies cleanly, the process takes an afternoon. If any of the release branches has diverged enough that the cherry-pick conflicts, the developer must manually resolve the conflict in a version of the code they are not familiar with. When the conflict is non-trivial, the fix on the older branch may need to be reimplemented from scratch because the surrounding code is different enough that the original approach does not apply.
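The fan-out is mechanical. A minimal sketch in a throwaway repository (branch names are illustrative), assuming the picks apply cleanly; in practice each one can conflict:

```shell
#!/bin/sh
# Throwaway repository; names are illustrative.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q && git checkout -qb trunk
git config user.email dev@example.com && git config user.name dev

echo "base" > app.txt && git add app.txt && git commit -qm "base"
for v in 2.1 2.2 2.3; do git branch -q "release/$v"; done

echo "base with null check" > app.txt
git commit -qam "fix: null check"            # the fix, written once
fix=$(git rev-parse trunk)

# ...but applied once per maintained branch,
# each pick needing its own review and CI run:
for b in release/2.1 release/2.2 release/2.3; do
  git switch -q "$b"
  git cherry-pick "$fix"
done
```

One logical change has become four commits on four branches; every additional supported version adds another pick, another review, and another deployment.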

Common variations:

  • The customer-pinned version. A major enterprise customer is on version 2.1 and cannot upgrade due to internal approval processes. Every security fix must be backported to 2.1 until the customer eventually migrates - which takes years. One customer extends your maintenance obligations indefinitely.
  • The parallel feature tracks. Separate release branches carry different feature sets for different customer segments. A fix to a shared component must go into every feature track. The team has effectively built multiple products that share a codebase but diverge continuously.
  • The release-then-hotfix cycle. A release branch is created for stabilization, bugs are found during stabilization, fixes are applied to the release branch, those fixes are then backported to trunk. Then the next release branch is created, and the cycle repeats.
  • The version cemetery. Branches for old versions are never officially retired. The team has vague commitments to “support” old versions. Backporting requests arrive sporadically. Developers fix bugs in version branches they have never worked in, without understanding the full context of why the code looked the way it did.

The telltale sign: when a developer fixes a bug, the first question is “which branches does this need to go into?” - and the answer is usually more than one.

Why This Is a Problem

Release branches with backporting look like a reasonable support strategy. Customers want stability in the version they have deployed. But the branch strategy trades customer stability for developer instability: the team can never move cleanly forward because they are always partially living in the past.

It reduces quality

A fix that works on trunk introduces a new bug on the release branch because the surrounding code is different enough that the original approach no longer applies. That regression appears in a version the team tests less rigorously, and is reported by a customer weeks later.

Backporting a fix to a different codebase version is not the same as applying the fix in context. The release branch may have a different version of the code surrounding the bug. The fix that correctly handles the problem on trunk may be incorrect, incomplete, or inapplicable on the release branch. The developer doing the backport must evaluate the fix in a context they did not write and may not fully understand.

This creates a category of bugs unique to backporting: fixes that work on trunk but introduce new problems on the release branch. By the time a customer reports the regression, the developer who did the backport has moved on and may not even remember the original fix.

When a team runs a single releasable trunk, every fix is applied once, in context, by the developer who understands the change. The quality of the fix is limited only by that developer’s understanding, not by the combinatorial complexity of applying it across multiple code states.

It increases rework

The rework in a backporting workflow is structural. Every fix done once on trunk becomes multiple units of work: one cherry-pick per maintained release branch, each with its own review and CI run. Three branches means three times the work. Five branches means five times the work. The rework is not optional - it is built into the process.

Conflict resolution compounds the rework. A backport that conflicts requires the developer to understand the conflict, decide how to resolve it, and verify the resolution is correct. Each of these steps can be as expensive as the original fix. A one-hour bug fix can become three hours of backporting work, much of it spent reworking the fix in unfamiliar code.

Backport tracking is also rework. Someone must maintain the record of which fixes have been applied to which branches. When the record is incomplete - which it always is - bugs that were fixed on trunk reappear in release branches, requiring diagnosis to confirm they were fixed and investigation to understand why the fix did not propagate.

It makes delivery timelines unpredictable

When a critical security vulnerability is disclosed, the team must patch all supported release branches simultaneously. The time required is a multiple of the number of branches times the complexity of each backport. That time cannot be estimated in advance because conflicts are unpredictable. A patch that takes two hours to develop can take two days to backport if release branches have diverged significantly.

For planned features and improvements, the release branch strategy introduces a ceiling on development velocity. The team can only move as fast as they can service all their active branches. As branches accumulate, the overhead per feature grows until the team is spending more time backporting than developing. At that point, the team is maintaining the past rather than building the future.

Planning also becomes unreliable because backport work is interrupt-driven. A customer escalation against an old version stops forward work. The interrupt is not predictable in advance, so sprint commitments cannot account for it.

It creates maintenance debt that compounds over time

New developers join and find release branches full of code that looks nothing like trunk, written by people who have left, with no tests and no documentation. That is not a warning sign of future problems - it is the current state of teams with five active release branches.

Each additional release branch increases the maintenance surface. Two branches is twice the maintenance of one. Five branches is five times the maintenance. As branches age, the code on them diverges further from trunk, making future backports increasingly difficult. The team can never retire a branch safely because they do not know who is using it or what they would break.

Over time, the team accumulates branches they cannot merge back to trunk - the divergence is too large - and cannot delete without risking customer impact. The branches become frozen artifacts that must be preserved indefinitely.

Impact on continuous delivery

CD requires a single path to production through trunk. Release branches with backporting create multiple parallel paths, each with its own test results, its own deployments, and its own risks. The pipeline cannot provide a single authoritative signal about system health because there are multiple systems, each evolving independently.

The backporting overhead also limits how fast the team can respond to production issues. When a bug is found in production, the fix must pass through multiple branch-specific pipelines before all affected versions are patched. In CD, a fix from commit to production can take minutes. In a multi-branch environment, the same fix might not reach all affected versions for days, because each branch has its own queue of testing and deployment.

How to Fix It

Eliminating release branches requires changing how versioning and customer support commitments are handled. The technical changes are straightforward. The harder changes are organizational: how the team handles customer upgrade requests, how compatibility is maintained, and how support commitments are scoped.

Step 1: Inventory all active release branches and their consumers

Before retiring any branch, understand who depends on it.

  1. List every active release branch and when it was created.
  2. For each branch, identify what customers or systems are running that version.
  3. Identify the date of the last backport to each branch.
  4. Assess how far each branch has diverged from trunk.

This inventory usually reveals that some branches have no known active consumers and can be retired immediately. Others have consumers who could upgrade but have not been prompted to. Only a small number typically have consumers with genuine constraints on upgrading.
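git can answer the age and divergence questions directly. A minimal sketch in a throwaway repository (branch names are illustrative); on a real repository, point the same commands at the remote's `release/*` branches:

```shell
#!/bin/sh
# Throwaway repository; names are illustrative.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q && git checkout -qb trunk
git config user.email dev@example.com && git config user.name dev

echo a > f.txt && git add f.txt && git commit -qm "base"
git branch -q release/2.1
echo b > f.txt && git commit -qam "newer work on trunk"

# When was each release branch last touched?
git for-each-ref --format='%(refname:short)  %(committerdate:short)' 'refs/heads/release/*'
# How far behind trunk is it? (commits on trunk the branch lacks)
git rev-list --count release/2.1..trunk   # prints 1
```

Consumers cannot be read out of git; that part of the inventory comes from deployment records and customer success, which is why the step is partly organizational.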

Step 2: Define and communicate a version support policy

The underlying driver of branch proliferation is the absence of a clear policy on how long versions are supported. Without a policy, support obligations are open-ended.

  1. Define a maximum support window. Common choices are N-1 (only the previous major version is supported alongside the current), a fixed time window (12 or 18 months), or a fixed number of minor releases.
  2. Communicate the policy to customers. Give them a migration timeline.
  3. Apply the policy retroactively: branches outside the support window are retired, with notice.

This is a business decision, not a technical one. Engineering leadership needs to align with product and customer success teams. But without a policy, the technical remediation of the branching problem cannot proceed.

Step 3: Invest in backward compatibility to reduce upgrade friction (Weeks 2-6)

Many customers stay on old versions because upgrades are painful. If every upgrade requires configuration changes, API updates, and re-testing, customers defer upgrades indefinitely. Reducing upgrade friction reduces the business pressure to maintain old versions.

  1. Identify the most common upgrade blockers from customer escalations.
  2. Add backward compatibility layers: deprecated API endpoints that still work, configuration migration tools, clear upgrade guides.
  3. For breaking changes, use API versioning rather than code branching. The API maintains the old contract while the implementation moves forward.

The goal is that upgrading from N-1 to N is low-risk and well-supported. Customers who can upgrade easily will, which reduces the population on old versions.

Step 4: Replace backporting with forward-only fixes on supported versions (Weeks 4-8)

For versions within the support window, stop cherry-picking from trunk. Instead, fix on the oldest supported version and merge forward.

  1. When a bug is reported against version 2.1, fix it on the release/2.1 branch.
  2. Merge the fix forward: 2.1 to 2.2 to 2.3 to trunk.
  3. Forward merges are less likely to conflict than backports because the forward merge builds on the older fix rather than trying to apply a trunk-context fix to older code.

This is still more work than a single fix on trunk, but it eliminates the class of bugs caused by backporting a trunk-context fix to incompatible older code.
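The forward-merge direction can be sketched in a throwaway repository (branch names are illustrative). Here the merges are clean; even when a forward merge conflicts, the resolution happens in newer code the team currently works in:

```shell
#!/bin/sh
# Throwaway repository; names are illustrative.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q && git checkout -qb trunk
git config user.email dev@example.com && git config user.name dev

echo "base" > app.txt && git add app.txt && git commit -qm "base"
git branch -q release/2.1 && git branch -q release/2.2

# 1. Fix on the oldest supported version:
git switch -q release/2.1
echo "base with fix" > app.txt && git commit -qam "fix: reported against 2.1"

# 2. Merge forward, oldest to newest, ending at trunk:
git switch -q release/2.2 && git merge -q --no-edit release/2.1
git switch -q trunk        && git merge -q --no-edit release/2.2
```

After the forward merges, every supported branch and trunk contain the fix, and the fix history shows one commit flowing forward rather than N independent cherry-picks.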

Step 5: Reduce to one supported release branch alongside trunk (Weeks 6-12)

Work toward a state where only the most recent release branch is maintained, with all others retired.

  1. Accelerate customer migrations for all versions outside the N-1 policy.
  2. Retire branches as their consumer count reaches zero.
  3. For the last remaining release branch, evaluate whether it can be eliminated by using feature flags on trunk to manage staged rollouts instead of a separate branch.

Once the team is running trunk and at most one release branch, the maintenance overhead drops dramatically. Backporting one version is manageable. Backporting five is not.

Step 6: Move to trunk-only with feature flags and staged rollouts (Ongoing)

The end state is trunk-only. Customers on “the current version” get staged access to new features through flags. There is one codebase to maintain, one pipeline to run, and one set of tests to pass.

Common objections:

  • "Enterprise customers need version stability" - Stability comes from reliable software and good testing, not from freezing the codebase. A customer on a fixed version still gets bugs and security vulnerabilities - they just do not get the fixes either. Feature flags provide stability for individual features without freezing the entire release.
  • "We are contractually obligated to support version N" - A defined support window does not mean unlimited support. Work with legal and sales to scope support commitments to a finite window. Open-ended support obligations grow into maintenance traps.
  • "Merging branches forward creates conflicts too" - Forward merges are lower-risk than backports because the merge direction follows the chronological development. The conflicts that exist reflect genuine code evolution. Invest the effort in forward merges and retire branches on schedule rather than maintaining an ever-growing backward-facing merge burden.
  • "Customers won't upgrade even if we ask them to" - Some will not. That is why the support policy must have teeth. After the policy window, the supported upgrade path is to the current version. Continued support for unsupported versions is a separate, charged engagement, not a default obligation.

Measuring Progress

  • Number of active release branches: should decrease toward one and eventually zero
  • Backport operations per sprint: should decrease as branches are retired
  • Development cycle time: should decrease as the backport overhead is removed from the development workflow
  • Mean time to repair: should decrease as fixes no longer need to propagate through multiple branches
  • Bug regression rate on release branches: should decrease as backporting with conflict resolution is eliminated
  • Integration frequency: should increase as work consolidates on trunk