Phase 0: Assess
Understand where you are today. Map your delivery process, measure what matters, and identify the constraints holding you back.
Key question: “How far are we from CD?”
Before changing anything, you need to understand your current state. This phase helps you
create a clear picture of your delivery process, establish baseline metrics, and identify
the constraints that will guide your improvement roadmap.
Team activity: The pages in this phase work well as facilitated team exercises. Run the Current State Checklist as a retrospective to align on where your delivery process stands today before measuring baselines.
What You’ll Do
- Map your value stream - Visualize the flow from idea to production
- Establish baseline metrics - Measure your current DORA metrics: deployment frequency, lead time for changes, change failure rate, and mean time to restore. Track these throughout the migration - they are your evidence of progress and your case for continued investment.
- Identify constraints - Find the bottlenecks limiting your flow
- Complete the current-state checklist - Self-assess against MinimumCD practices
Why This Phase Matters
Teams that skip assessment often invest in the wrong improvements. A team with a 3-week manual
testing cycle doesn’t need better deployment automation first - they need testing fundamentals.
Understanding your constraints ensures you invest effort where it will have the biggest impact.
When You’re Ready to Move On
You’re ready for Phase 1: Foundations when you can answer the key question with data - you know where your time goes, what is slowing you down, and which practices to establish first.
Next: Phase 1 - Foundations - establish the technical and team practices that make CD possible.
1 - Value Stream Mapping
Visualize your delivery process end-to-end to identify waste and constraints before starting your CD migration.
Phase 0 - Assess | Scope: Team
Before you change anything about how your team delivers software, you need to see how it works
today. Value Stream Mapping (VSM) is the single most effective tool for making your delivery
process visible. It reveals the waiting, the rework, and the handoffs that you have learned to
live with but that are silently destroying your flow.
In the context of a CD migration, a value stream map is not an academic exercise. It is the
foundation for every decision you will make in the phases ahead. It tells you where your time
goes, where quality breaks down, and which constraint to attack first.
What Is a Value Stream Map?
A value stream map is a visual representation of every step required to deliver a change from
request to production. For each step, you capture:
- Process time - the time someone is actively working on that step
- Wait time - the time the work sits idle between steps (in a queue, awaiting approval, blocked on an environment)
- Percent Complete and Accurate (%C/A) - the percentage of work arriving at this step that is usable without rework
The ratio of process time to total time (process time + wait time) is your flow efficiency.
Most teams are shocked to discover that their flow efficiency is below 15%, meaning that for
every hour of actual work, there are nearly six hours of waiting.
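A quick worked example (the numbers are illustrative, not from any real team):

```python
# Flow efficiency = process time / (process time + wait time).
# Illustrative figures: 5 hours of hands-on work in a stream with
# 29 hours of queueing between steps.
process_hours = 5.0
wait_hours = 29.0

flow_efficiency = process_hours / (process_hours + wait_hours) * 100
print(f"Flow efficiency: {flow_efficiency:.1f}%")  # 14.7% - just under the 15% mark
```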
Prerequisites
Before running a value stream mapping session, make sure you have:
- An established, repeatable process. You are mapping what actually happens, not what should
happen. If every change follows a different path, start by agreeing on the current “most common”
path.
- All stakeholders in the room. You need representatives from every group involved in delivery:
developers, testers, operations, security, product, change management. Each person knows the
wait times and rework loops in their part of the stream that others cannot see.
- A shared understanding of wait time vs. process time. Wait time is when work sits idle. Process
time is when someone is actively working. A code review that takes “two days” but involves 30
minutes of actual review has 30 minutes of process time and, assuming 8-hour workdays, roughly
15.5 hours of wait time.
Choose Your Mapping Approach
Value stream maps can be built from two directions. Most organizations benefit from starting
bottom-up and then combining into a top-down view, but the right choice depends on where your
delivery pain is concentrated.
Bottom-Up: Map at the Team Level First
Each delivery team maps its own process independently - from the moment a developer is ready to
push a change to the moment that change is running in production. This is the approach described
in Document Your Current Process, elevated to a
formal value stream map with measured process times, wait times, and %C/A.
When to use bottom-up:
- You have multiple teams that each own their own deployment process (or think they do).
- Teams have different pain points and different levels of CD maturity.
- You want each team to own its improvement work rather than waiting for an organizational
initiative.
How it works:
- Each team maps its own value stream using the session format described below.
- Teams identify and fix their own constraints. Many constraints are local - flaky tests,
manual deployment steps, slow code review - and do not require cross-team coordination.
- After teams have mapped and improved their own streams, combine the maps to reveal
cross-team dependencies. Lay the team-level maps side by side and draw the connections:
shared environments, shared libraries, shared approval processes, upstream/downstream
dependencies.
The combined view often reveals constraints that no single team can see: a shared staging
environment that serializes deployments across five teams, a security review team that is
the bottleneck for every release, or a shared library with a release cycle that blocks
downstream teams for weeks.
Advantages: Fast to start, builds team ownership, surfaces team-specific friction that
a high-level map would miss. Teams see results quickly, which builds momentum for the
harder cross-team work.
Top-Down: Map Across Dependent Teams
Start with the full flow from a customer request (or business initiative) entering the system
to the delivered outcome in production, mapping across every team the work touches. This
produces a single map that shows the end-to-end flow including all inter-team handoffs,
shared queues, and organizational boundaries.
When to use top-down:
- Delivery pain is concentrated at the boundaries between teams, not within them.
- A single change routinely touches multiple teams (front-end, back-end, platform,
data, etc.) and the coordination overhead dominates cycle time.
- Leadership needs a full picture of organizational delivery performance to prioritize
investment.
How it works:
- Identify a representative value stream - a type of work that flows through the teams
you want to map. For example: “a customer-facing feature that requires API changes,
a front-end update, and a database migration.”
- Get representatives from every team in the room. Each person maps their team’s portion
of the flow, including the handoff to the next team.
- Connect the segments. The gaps between teams - where work queues, waits for
prioritization, or gets lost in a ticket system - are usually the largest sources of
delay.
Advantages: Reveals organizational constraints that team-level maps cannot see.
Shows the true end-to-end lead time including inter-team wait times. Essential for
changes that require coordinated delivery across multiple teams.
Combining Both Approaches
The most effective strategy for large organizations:
- Start bottom-up. Have each team document its current process
and then run its own value stream mapping session. Fix team-level quick wins immediately.
- Combine into a top-down view. Once team-level maps exist, connect them to see the
full organizational flow. The team-level detail makes the top-down map more accurate
because each segment was mapped by the people who actually do the work.
- Fix constraints at the right level. Team-level constraints (flaky tests, manual
deploys) are fixed by the team. Cross-team constraints (shared environments, approval
bottlenecks, dependency coordination) are fixed at the organizational level.
This layered approach prevents two common failure modes: mapping at too high a level (which
misses team-specific friction) and mapping only at the team level (which misses the
organizational constraints that dominate end-to-end lead time).
How to Run the Session
Step 1: Start From Delivery, Work Backward
Begin at the right side of your map - the moment a change reaches production. Then work backward
through every step until you reach the point where a request enters the system. This prevents teams
from getting bogged down in the early stages and never reaching the deployment process, which is
often where the largest delays hide.
Typical steps you will uncover include:
- Request intake and prioritization
- Story refinement and estimation
- Development (coding)
- Code review
- Build and unit tests
- Integration testing
- Manual QA / regression testing
- Security review
- Staging deployment
- User acceptance testing (UAT)
- Change advisory board (CAB) approval
- Production deployment
- Production verification
Step 2: Capture Process Time and Wait Time for Each Step
For each step on the map, record the process time and the wait time. Use averages if exact numbers
are not available, but prefer real data from your issue tracker, CI system, or deployment logs
when you can get it.
Migration Tip
Pay close attention to these migration-critical delays:
- Handoffs that block flow - Every time work passes from one team or role to another (dev to QA,
QA to ops, ops to security), there is a queue. Count the handoffs. Each one is a candidate for
elimination or automation.
- Manual gates - CAB approvals, manual regression testing, sign-off meetings. These often add
days of wait time for minutes of actual value.
- Environment provisioning delays - If developers wait hours or days for a test environment,
that is a constraint you will need to address in Phase 2.
- Rework loops - Any step where work frequently bounces back to a previous step. Track the
percentage of times this happens. These loops are destroying your cycle time.
Step 3: Calculate %C/A at Each Step
Percent Complete and Accurate measures the quality of the handoff. Ask each person: “What
percentage of the work you receive from the previous step is usable without needing clarification,
correction, or rework?”
A low %C/A at a step means the upstream step is producing defective output. This is critical
information for your migration plan because it tells you where quality needs to be built in
rather than inspected after the fact.
Step 4: Identify Constraints (Kaizen Bursts)
Mark the steps with the largest wait times and the lowest %C/A with a “kaizen burst” - a starburst
symbol indicating an improvement opportunity. These are your constraints. They will become the
focus of your migration roadmap.
Common constraints teams discover during their first value stream map:
| Constraint | Typical Impact | Migration Phase to Address |
|---|---|---|
| Long-lived feature branches | Days of integration delay, merge conflicts | Phase 1 (Trunk-Based Development) |
| Manual regression testing | Days to weeks of wait time | Phase 1 (Testing Fundamentals) |
| Environment provisioning | Hours to days of wait time | Phase 2 (Production-Like Environments) |
| CAB / change approval boards | Days of wait time per deployment | Phase 2 (Pipeline Architecture) |
| Manual deployment process | Hours of process time, high error rate | Phase 2 (Single Path to Production) |
| Large batch releases | Weeks of accumulation, high failure rate | Phase 3 (Small Batches) |
Reading the Results
Once your map is complete, calculate these summary numbers:
- Total lead time = sum of all process times + all wait times
- Total process time = sum of just the process times
- Flow efficiency = total process time / total lead time * 100
- Number of handoffs = count of transitions between different teams or roles
- Rework percentage = percentage of changes that loop back to a previous step
These numbers become part of your baseline metrics and feed directly into
your work to identify constraints.
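If you prefer to capture the map digitally, here is a minimal sketch of the arithmetic. The step
names, durations, and %C/A values below are placeholders, not recommendations - substitute the
figures from your own session:

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    process_hours: float  # time someone is actively working
    wait_hours: float     # time the work sits idle at or before this step
    pct_ca: float         # Percent Complete and Accurate (0-100)
    owner: str            # team or role that performs the step

# Placeholder data - replace with the numbers from your mapping session.
steps = [
    Step("Development",       16.0,  8.0, 90, "dev"),
    Step("Code review",        0.5, 15.5, 85, "dev"),
    Step("Manual regression",  8.0, 72.0, 60, "qa"),
    Step("CAB approval",       0.5, 40.0, 95, "change-mgmt"),
    Step("Production deploy",  2.0, 16.0, 90, "ops"),
]

total_process = sum(s.process_hours for s in steps)
total_lead = total_process + sum(s.wait_hours for s in steps)
flow_efficiency = total_process / total_lead * 100
handoffs = sum(1 for a, b in zip(steps, steps[1:]) if a.owner != b.owner)

print(f"Total lead time:    {total_lead:.1f} hours")
print(f"Total process time: {total_process:.1f} hours")
print(f"Flow efficiency:    {flow_efficiency:.1f}%")
print(f"Handoffs:           {handoffs}")

# Rank by wait time - the top entries are your candidate constraints.
for s in sorted(steps, key=lambda s: s.wait_hours, reverse=True):
    print(f"  {s.name:<18} wait={s.wait_hours:>5.1f}h  %C/A={s.pct_ca:.0f}%")
```

Sorting by wait time at the end gives you a head start on the constraint analysis later in this
phase.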
What Good Looks Like
You are not aiming for a perfect value stream map. You are aiming for a shared, honest picture of
reality that the whole team agrees on. The map should be:
- Visible - posted on a wall or in a shared digital tool where the team sees it daily
- Honest - reflecting what actually happens, including the workarounds and shortcuts
- Actionable - with constraints clearly marked so the team knows where to focus
You will revisit and update this map as you progress through each migration phase. It is a living
document, not a one-time exercise.
Next Step
With your value stream map in hand, proceed to Baseline Metrics to
quantify your current delivery performance.
2 - Baseline Metrics
Capture baseline CI and DORA metrics before making any changes so you have an honest starting point and can measure progress.
Phase 0 - Assess | Scope: Team
You cannot improve what you have not measured. Before making any changes to your delivery process,
capture two types of baseline measurements: CI health metrics and DORA outcome metrics.
- CI health metrics are leading indicators. They reflect current team behaviors and move
immediately when those behaviors change. Use them to drive improvement experiments throughout
the migration.
- DORA metrics are lagging outcome metrics. They reflect the cumulative effect of many upstream
behaviors and move slowly. Capture them now as your honest “before” picture for reporting
progress to leadership.
Without baselines, you cannot prove improvement, you cannot detect regression, and you will
default to fixing what is visible rather than what is actually the constraint.
CI Health Metrics
These three metrics tell you whether your team’s integration practices are healthy. They surface
problems immediately and are your primary signal during the migration.
Integration Frequency
What it measures: How often developers commit and integrate to trunk per day.
How to capture it: Count commits merged to trunk over the last 10 working days. Divide by the
number of active developers and by 10.
| Frequency | What It Suggests |
|---|---|
| 2 or more per developer per day | Small batches, fast feedback |
| 1 per developer per day | Reasonable starting point |
| Less than 1 per developer per day | Long-lived branches or large work items |
Record your number: ______ average commits to trunk per developer per day.
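One low-tech way to get the count - a sketch that assumes your trunk branch is named `main` and
treats unique commit author emails as a proxy for active developers:

```python
import subprocess

# 10 working days is roughly 14 calendar days.
WORKING_DAYS = 10

# Author email of every commit reaching trunk in the window.
authors = subprocess.run(
    ["git", "log", "main", "--since=14 days ago", "--format=%ae"],
    capture_output=True, text=True, check=True,
).stdout.split()

commits = len(authors)
developers = max(len(set(authors)), 1)  # unique emails approximate active devs

print(f"{commits} commits / {developers} devs / {WORKING_DAYS} days = "
      f"{commits / developers / WORKING_DAYS:.2f} per dev per day")
```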
Build Success Rate
What it measures: The percentage of CI builds that pass on the first attempt.
How to capture it: Pull the last 30 days of CI build history from your pipeline tool. Divide
passing builds by total builds.
| Success Rate | What It Suggests |
|---|---|
| 90% or higher | Reliable pipeline; developers integrate with confidence |
| 70-90% | Flaky tests or inconsistent local validation before pushing |
| Below 70% | Broken build is normalized; integration discipline is low |
Record your number: ______ % of CI builds that pass on first attempt.
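The tally itself is simple once you have the data. A sketch, assuming you have exported the last
30 days of builds to a CSV with a `result` column - adapt the file name and values to whatever
your CI tool actually produces:

```python
import csv

# Assumed export format: one row per build with a "result" column
# containing "passed" or "failed".
with open("builds.csv", newline="") as f:
    results = [row["result"] for row in csv.DictReader(f)]

passed = sum(1 for r in results if r == "passed")
print(f"Build success rate: {passed / len(results) * 100:.0f}% "
      f"({passed} of {len(results)} builds)")
```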
Time to Fix a Broken Build
What it measures: The elapsed time from a build breaking on trunk to the next green build.
How to capture it: Identify build failures on trunk over the last 30 days. For each failure,
record the time from first red build to next green build. Take the median.
| Time to Fix | What It Suggests |
|---|---|
| Less than 10 minutes | Team treats broken builds as stop-the-line |
| 10-60 minutes | Manual but fast response |
| More than 1 hour | Broken build is not treated as urgent |
Record your number: ______ median time to fix a broken build.
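A sketch of the red-to-green calculation over a chronological list of trunk build events; the
timestamps below are placeholders:

```python
from datetime import datetime
from statistics import median

# Chronological (timestamp, result) build events on trunk - placeholder
# data; pull the real history from your CI tool.
events = [
    ("2024-05-01T09:00", "passed"),
    ("2024-05-01T11:30", "failed"),   # trunk breaks...
    ("2024-05-01T12:05", "passed"),   # ...fixed 35 minutes later
    ("2024-05-02T15:00", "failed"),
    ("2024-05-02T16:10", "passed"),
]

fixes = []
broken_at = None
for ts, result in events:
    t = datetime.fromisoformat(ts)
    if result == "failed" and broken_at is None:
        broken_at = t                # first red build
    elif result == "passed" and broken_at is not None:
        fixes.append(t - broken_at)  # time from first red to next green
        broken_at = None

print(f"Median time to fix: {median(fixes)}")  # 0:52:30 for this sample
```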
DORA Metrics
The DORA research program (now part of Google Cloud) identified four metrics that predict
software delivery performance and organizational outcomes. These are lagging indicators -
they confirm that improvement work is compounding into better delivery outcomes.
Do not use these as improvement targets. See
DORA Metrics as Delivery Improvement Goals.
Deployment Frequency
What it measures: How often your team deploys to production.
How to capture it: Count the number of production deployments in the last 30 days. Check
your pipeline system, deployment logs, or change management records.
| Frequency | What It Suggests |
|---|---|
| Multiple times per day | You may already be practicing continuous delivery |
| Once per week | Regular cadence but likely batch changes |
| Once per month or less | Large batches, high risk per deployment, likely manual process |
Record your number: ______ deployments in the last 30 days.
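A minimal sketch of the count, with placeholder timestamps standing in for your real deployment
log:

```python
from datetime import datetime, timedelta

# Placeholder deployment timestamps - pull the real list from your
# pipeline tool, deployment log, or release tags.
deploys = [
    datetime(2024, 5, 2, 14, 0),
    datetime(2024, 5, 16, 11, 30),
    datetime(2024, 5, 28, 16, 45),
]

cutoff = datetime.now() - timedelta(days=30)
recent = [d for d in deploys if d >= cutoff]
print(f"{len(recent)} production deployments in the last 30 days")
```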
Lead Time for Changes
What it measures: The elapsed time from when code is committed to trunk to when it is
running in production.
How to capture it: Pick your last 5-10 production deployments. For each one, find the merge
timestamp of the oldest change included and subtract it from the deployment timestamp. Take
the median.
| Lead Time | What It Suggests |
|---|---|
| Less than 1 hour | Fast flow, small batches, good automation |
| 1 day to 1 week | Reasonable with room for improvement |
| 1 week to 1 month | Significant queuing or manual gates |
| More than 1 month | Major constraints in testing, approval, or deployment |
Record your number: ______ median lead time for changes.
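A sketch of the median calculation, using placeholder merge and deployment timestamps:

```python
from datetime import datetime
from statistics import median

# One pair per recent deployment: (merge time of the oldest change
# included, deployment time). Placeholder values - take merge times
# from git and deploy times from your pipeline.
deployments = [
    (datetime(2024, 5, 1, 10),  datetime(2024, 5, 3, 16)),
    (datetime(2024, 5, 6, 9),   datetime(2024, 5, 10, 14)),
    (datetime(2024, 5, 13, 11), datetime(2024, 5, 17, 15)),
]

lead_times = [deployed - merged for merged, deployed in deployments]
print(f"Median lead time for changes: {median(lead_times)}")
```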
Change Failure Rate
What it measures: The percentage of deployments to production that result in a degraded
service requiring remediation (rollback, hotfix, or patch).
How to capture it: Look at your last 20-30 production deployments. Count how many caused an
incident, required a rollback, or needed an immediate hotfix. Divide by total deployments.
| Failure Rate | What It Suggests |
|---|---|
| 0-15% | Strong quality practices and small change sets |
| 16-30% | Typical for teams with some automation |
| Above 30% | Systemic quality problems |
Record your number: ______ % of deployments that required remediation.
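The arithmetic, shown with an illustrative sample of 20 deployments:

```python
# One entry per recent production deployment: True if it required
# remediation (rollback, hotfix, or incident). Placeholder data.
needed_remediation = [
    False, False, True, False, False, False, True, False, False, False,
    False, True, False, False, False, False, False, False, False, False,
]

rate = sum(needed_remediation) / len(needed_remediation) * 100
print(f"Change failure rate: {rate:.0f}%")  # 15% for this sample
```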
Mean Time to Restore (MTTR)
What it measures: How long it takes to restore service after a production failure caused by
a deployment.
How to capture it: Look at your production incidents from the last 3-6 months. For each
incident caused by a deployment, record the time from detection to resolution. Take the median.
| MTTR | What It Suggests |
|---|---|
| Less than 1 hour | Good incident response, likely automated rollback |
| 1-4 hours | Manual but practiced recovery process |
| 4-24 hours | Significant manual intervention required |
| More than 1 day | Serious gaps in observability or rollback capability |
Record your number: ______ median time to restore service.
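The same median pattern applies, shown here with placeholder detection and resolution times:

```python
from datetime import datetime
from statistics import median

# (detected, resolved) pairs for deployment-caused incidents over the
# last 3-6 months. Placeholder values - use your incident records.
incidents = [
    (datetime(2024, 3, 4, 10, 0),  datetime(2024, 3, 4, 11, 30)),
    (datetime(2024, 4, 12, 14, 0), datetime(2024, 4, 12, 18, 45)),
    (datetime(2024, 5, 20, 9, 15), datetime(2024, 5, 20, 10, 0)),
]

durations = [resolved - detected for detected, resolved in incidents]
print(f"Median time to restore: {median(durations)}")  # 1:30:00 here
```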
What Your Baselines Tell You
Your numbers point toward specific constraints: long lead time with little process time suggests
queues, a high change failure rate suggests quality practices, and low deployment frequency
suggests the deployment process or organizational policy. Use these signals alongside your value
stream map to identify your top constraints.
Goodhart's Law
“When a measure becomes a target, it ceases to be a good measure.”
These metrics are diagnostic tools, not performance targets. Use them within the team, for the
team. Never use them to rank individuals or compare teams.
Next Step
With your baselines recorded, proceed to Identify Constraints to
determine which bottleneck to address first.
3 - Identify Constraints
Use your value stream map and baseline metrics to find the bottlenecks that limit your delivery flow.
Phase 0 - Assess | Scope: Team + Org
Your value stream map shows you where time goes. Your
baseline metrics tell you how fast and how safely you deliver. Now you
need to answer the most important question in your migration: What is the one thing most
limiting your delivery flow right now?
This is not a question you answer by committee vote or gut feeling. It is a question you answer
with the data you have already collected.
The Theory of Constraints
Eliyahu Goldratt’s Theory of Constraints offers a simple and powerful insight: every system has
exactly one constraint that limits its overall throughput. Improving anything other than that
constraint does not improve the system.
Consider a delivery process where code review takes 30 minutes but the queue to get a review
takes 2 days, and manual regression testing takes 5 days after that. If you invest three months
building a faster build pipeline that saves 10 minutes per build, you have improved something
that is not the constraint. The 5-day regression testing cycle still dominates your lead time.
You have made a non-bottleneck more efficient, which changes nothing about how fast you deliver.
The implication for your CD migration is direct: you must find and address constraints in order
of impact. Fix the biggest one first. Then find the next one. Then fix that. This is how you
make sustained, measurable progress rather than spreading effort across improvements that do not
move the needle.
What your team controls
Your team can apply constraint analysis to everything within your delivery boundary without
needing external approval:
- Running the value stream mapping exercise and gathering baseline metrics
- Identifying testing bottlenecks, code review delays, and environment availability issues
- Resolving integration and merge conflicts through trunk-based development
- Addressing work decomposition and WIP limit problems
What requires broader change
Some constraints are organizational, not technical. Your team can identify them, but resolving
them requires engaging outside your boundary:
- Deployment gates: CAB meetings, multi-team sign-offs, and approval queues are policy
decisions. Removing or automating them requires organizational consensus.
- Manual handoffs: When work must pass through a separate test team, security review, or
operations team, the constraint is in the process structure, not the pipeline. Resolving it
means changing how those teams engage, not just how your team works.
- Change windows: Release schedules and deployment blackout periods are set by the
organization, not the team. Challenge them with data, not just intent.
Use the constraint analysis in this page to build a prioritized case for those conversations.
Common Constraint Categories
Software delivery constraints tend to cluster into a few recurring categories. As you review your
value stream map, look for these patterns.
Testing Bottlenecks
Symptoms: Large wait time between “code complete” and “verified.” Manual regression test
cycles measured in days or weeks. Low %C/A at the testing step, indicating frequent rework.
High change failure rate in your baseline metrics despite significant testing effort.
What is happening: Testing is being done as a phase after development rather than as a
continuous activity during development. Manual test suites have grown to cover every scenario
ever encountered, and running them takes longer with every release. The test environment is
shared and frequently broken.
Migration path: Phase 1 - Testing Fundamentals
Deployment Gates
Symptoms: Wait times of days or weeks between “tested” and “deployed.” Change Advisory Board
(CAB) meetings that happen weekly or biweekly. Multiple sign-offs required from people who are
not involved in the actual change.
What is happening: The organization has substituted process for confidence. Because
deployments have historically been risky (large batches, manual processes, poor rollback), layers
of approval have been added. These approvals add delay but rarely catch issues that automated
testing would not. They exist because the deployment process is not trustworthy, and they
persist because removing them feels dangerous.
Migration path: Phase 2 - Pipeline Architecture and
building the automated quality evidence that makes manual approvals unnecessary.
Environment Provisioning
Symptoms: Developers waiting hours or days for a test or staging environment. “Works on my
machine” failures when code reaches a shared environment. Environments that drift from production
configuration over time.
What is happening: Environments are manually provisioned, shared across teams, and treated as
pets rather than cattle. There is no automated way to create a production-like environment on
demand. Teams queue for shared environments, and environment configuration has diverged from
production.
Migration path: Phase 2 - Production-Like Environments
Code Review Delays
Symptoms: Pull requests sitting open for more than a day. Review queues with 5 or more
pending reviews. Developers context-switching because they are blocked waiting for review.
What is happening: Code review is being treated as an asynchronous handoff rather than a
collaborative activity. Reviews happen when the reviewer “gets to it” rather than as a
near-immediate response. Large pull requests make review daunting, which increases queue time
further.
Migration path: Phase 1 - Code Review and
Trunk-Based Development to reduce branch lifetime
and review size.
Manual Handoffs
Symptoms: Multiple steps in your value stream map where work transitions from one team to
another. Tickets being reassigned across teams. “Throwing it over the wall” language in how people
describe the process.
What is happening: Delivery is organized as a sequence of specialist stages (dev, test, ops,
security) rather than as a cross-functional flow. Each handoff introduces a queue, a context
loss, and a communication overhead. The more handoffs, the longer the lead time and the more
likely that information is lost.
Migration path: This is an organizational constraint, not a technical one. It is addressed
gradually through cross-functional team formation and by automating the specialist activities
into the pipeline so that handoffs become automated checks rather than manual transfers.
Using Your Value Stream Map to Find the Constraint
Pull out your value stream map and follow this process:
Step 1: Rank Steps by Wait Time
List every step in your value stream and sort them by wait time, longest first. Your biggest
constraint is almost certainly in the top three. Wait time is more important than process time
because wait time is pure waste - nothing is happening, no value is being created.
Step 2: Look for Rework Loops
Identify steps where work frequently loops back. A testing step with a 40% rework rate means
that 40% of changes go through the development-to-test cycle at least twice. Accounting for
repeated loops, the effective time at that step is multiplied by roughly 1/(1 - rework rate),
or about 1.7x at a 40% rework rate.
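A small model of this effect - assuming each pass through the step has the same independent
chance of bouncing back:

```python
# Expected passes through a step when a fraction r of work loops back
# (geometric series): 1 / (1 - r).
def effective_multiplier(rework_rate: float) -> float:
    return 1 / (1 - rework_rate)

base_wait_hours = 72  # measured wait time for one pass through testing
r = 0.40              # 40% of changes bounce back for rework

print(f"Effective passes: {effective_multiplier(r):.2f}")  # 1.67
print(f"Effective wait:   {base_wait_hours * effective_multiplier(r):.0f} hours")  # 120
```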
Step 3: Count Handoffs
Each handoff between teams or roles is a queue point. If your value stream has 8 handoffs, you
have 8 places where work waits. Look for handoffs that could be eliminated by automation or
by reorganizing work within the team.
Step 4: Cross-Reference with Metrics
Check your findings against your baseline metrics:
- High lead time with low process time = the constraint is in the queues (wait time), not in
the work itself
- High change failure rate = the constraint is in quality practices, not in speed
- Low deployment frequency with everything else reasonable = the constraint is in the
deployment process itself or in organizational policy
Prioritizing: Fix the Biggest One First
One Constraint at a Time
Resist the temptation to tackle multiple constraints simultaneously. The Theory of Constraints
is clear: improving a non-bottleneck does not improve the system. Identify the single biggest
constraint, focus your migration effort there, and only move to the next constraint when the
first one is no longer the bottleneck.
This does not mean the entire team works on one thing. It means your improvement initiatives
are sequenced to address constraints in order of impact.
Once you have identified your top constraint, map it to a migration phase using the constraint
table from your value stream mapping exercise.
The Next Constraint
Fixing your first constraint will improve your flow. It will also reveal the next constraint.
This is expected and healthy. A delivery process is a chain, and strengthening the weakest link
means a different link becomes the weakest.
This is why the migration is organized in phases. Phase 1 addresses the foundational constraints
that nearly every team has (integration practices, testing, small work). Phase 2 addresses
pipeline constraints. Phase 3 optimizes flow. You will cycle through constraint identification
and resolution throughout your migration.
Plan to revisit your value stream map and metrics after addressing each major constraint. Your
map from today will be outdated within weeks of starting your migration - and that is a sign of
progress.
Next Step
Complete the Current State Checklist to assess your team against
specific MinimumCD practices and confirm your migration starting point.
Related Content
- Work Items Take Too Long - a flow symptom often traced back to the constraints this guide helps identify
- Too Much WIP - a symptom that constraint analysis frequently uncovers
- Unbounded WIP - an anti-pattern that shows up as a queue constraint in your value stream
- CAB Gates - an organizational anti-pattern that commonly surfaces as a deployment gate constraint
- Monolithic Work Items - an anti-pattern that increases lead time by inflating batch size
- Value Stream Mapping - the prerequisite exercise that produces the data this guide analyzes
4 - Current State Checklist
Self-assess your team against MinimumCD practices to understand your starting point and determine where to begin your migration.
Phase 0 - Assess | Scope: Team
This checklist translates the practices defined by MinimumCD.org into
concrete yes-or-no questions you can answer about your team today. It is not a test to pass. It is
a diagnostic tool that shows you which practices are already in place and which ones your migration
needs to establish.
Work through each category with your team. Be honest - checking a box you have not earned gives
you a migration plan that skips steps you actually need.
How to Use This Checklist
For each item, mark it with an [x] if your team consistently does this today - not occasionally,
not aspirationally, but as a default practice. If you do it sometimes but not reliably, leave it
unchecked.
Trunk-Based Development
Why it matters: Long-lived branches are the single biggest source of integration risk. Every
hour a branch lives is an hour where it diverges from what everyone else is doing. Trunk-based
development eliminates integration as a separate, painful event and makes it a continuous,
trivial activity. Without this practice, continuous integration is impossible, and without
continuous integration, continuous delivery is impossible.
Continuous Integration
Why it matters: Continuous integration means that the team always knows whether the codebase
is in a working state. If builds are not automated, if tests do not run on every commit, or if
broken builds are tolerated, then the team is flying blind. Every change is a gamble that
something else has not broken in the meantime.
Pipeline Practices
Why it matters: A pipeline is the mechanism that turns code changes into production
deployments. If the pipeline is inconsistent, manual, or bypassable, then you do not have a
reliable path to production. You have a collection of scripts and hopes. Deterministic, automated
pipelines are what make deployment a non-event rather than a high-risk ceremony.
Deployment
Why it matters: If your test environment does not look like production, your tests are lying
to you. If configuration is baked into your artifact, you are rebuilding for each environment,
which means the thing you tested is not the thing you deploy. If you cannot roll back quickly,
every deployment is a high-stakes bet. These practices ensure that what you test is what you
ship, and that shipping is safe.
Quality
Why it matters: Quality that depends on manual inspection does not scale and does not speed
up. As your deployment frequency increases through the migration, manual quality gates become
the bottleneck. The goal is to build quality in through automation so that a green build means
a deployable build. This is the foundation of continuous delivery: if it passes the pipeline,
it is ready for production.
Scoring Guide
Count the number of items you checked across all categories.
| Score | Your Starting Point | Recommended Phase |
|---|---|---|
| 0-5 | You are early in your journey. Most foundational practices are not yet in place. | Start at the beginning of Phase 1 - Foundations. Focus on trunk-based development and basic test automation first. |
| 6-12 | You have some practices in place but significant gaps remain. This is the most common starting point. | Start with Phase 1 - Foundations but focus on the categories where you had the fewest checks. Your constraint analysis will tell you which gap to close first. |
| 13-18 | Your foundations are solid. The gaps are likely in pipeline automation and deployment practices. | You may be able to move quickly through Phase 1 and focus your effort on Phase 2 - Pipeline. Validate with your value stream map that your remaining constraints match. |
| 19-22 | You are well-practiced in most areas. Your migration is about closing specific gaps and optimizing flow. | Review your unchecked items - they point to specific topics in Phase 3 - Optimize or Phase 4 - Deliver on Demand. |
| 23-25 | You are already practicing most of what MinimumCD defines. Your focus should be on consistency and delivering on demand. | Jump to Phase 4 - Deliver on Demand and focus on the capability to deploy any change when needed. |
A Score Is Not a Grade
This checklist exists to help your team find its starting point, not to judge your team’s
competence. A score of 5 does not mean your team is failing - it means your team has a clear
picture of what to work on. A score of 22 does not mean you are done - it means your remaining
gaps are specific and targeted.
The only wrong answer is a dishonest one.
Putting It All Together
You now have four pieces of information from Phase 0:
- A value stream map showing your end-to-end delivery process with wait times and rework loops
- Baseline metrics for deployment frequency, lead time, change failure rate, and MTTR
- An identified top constraint telling you where to focus first
- This checklist confirming which practices are in place and which are missing
Together, these give you a clear, data-informed starting point for your migration. You know where
you are, you know what is slowing you down, and you know which practices to establish first.
Next Step
You are ready to begin Phase 1 - Foundations. Start with the practice area
that addresses your top constraint.