Replacing Manual Validations with Automation

The repeating mechanical cycle at the heart of every brownfield CD migration: identify a manual validation, automate it, prove the automation works, and remove the manual step.

The Brownfield CD overview covers the migration phases, principles, and common challenges. This page covers the core mechanical process - the specific, repeating cycle of replacing manual validations with automation that drives every phase forward.

The Replacement Cycle

Every brownfield CD migration follows the same four-step cycle, repeated until no manual validations remain between commit and production:

  1. Identify a manual validation in the delivery process.
  2. Automate the check so it runs in the pipeline without human intervention.
  3. Validate that the automation catches the same problems the manual step caught.
  4. Remove the manual step from the process.

Then pick the next manual validation and repeat.

Two rules make this cycle work:

  • Do not skip “validate.” Run the manual and automated checks in parallel long enough to prove the automation catches what the manual step caught. Without this evidence, the team will not trust the automation, and the manual step will creep back.
  • Do not skip “remove.” Keeping both the manual and automated checks adds the cost of automation without removing the cost of the manual step. The goal is replacement, not duplication. Once the automated check is proven, retire the manual step explicitly.

Inventory Your Manual Validations

Before you can replace manual validations, you need to know what they are. A value stream map is the fastest way to find them. Walk the path from commit to production and mark every point where a human has to inspect, approve, verify, or execute something before the change can move forward.

Common manual validations and where they typically live:

| Manual Validation | Where It Lives | What It Catches |
| --- | --- | --- |
| Manual regression testing | QA team runs test cases before release | Functional regressions in existing features |
| Code style review | PR review checklist | Formatting, naming, structural consistency |
| Security review | Security team sign-off before deploy | Vulnerable dependencies, injection risks, auth gaps |
| Environment configuration | Ops team configures target environment | Missing env vars, wrong connection strings, incorrect feature flags |
| Smoke testing | Someone clicks through the app after deploy | Deployment-specific failures, broken integrations |
| Change advisory board | CAB meeting approves production changes | Risk assessment, change coordination, rollback planning |
| Database migration review | DBA reviews and runs migration scripts | Schema conflicts, data loss, performance regressions |

Your inventory will include items not on this list. That is expected. The list above covers the most common ones, but every team has process-specific manual steps that accumulated over time.

Prioritize by Effort and Friction

Not all manual validations are equal. Some cause significant delay on every release. Others are quick and infrequent. Prioritize by mapping each validation on two axes:

Friction (vertical axis - how much pain the manual step causes):

  • How often does it run? (every commit, every release, quarterly)
  • How long does it take? (minutes, hours, days)
  • How often does it produce errors? (rarely, sometimes, frequently)

High-frequency, long-duration, error-prone validations cause the most friction.

Effort to automate (horizontal axis - how hard the automation is to build):

  • Is the codebase ready? (clean interfaces vs. tightly coupled)
  • Do tools exist? (linters, test frameworks, scanning tools)
  • Is the validation well-defined? (clear pass/fail vs. subjective judgment)

Start with high-friction, low-effort validations. These give you the fastest return and build momentum for harder automations later. This is the same constraint-based thinking described in Identify Constraints - fix the biggest bottleneck first.

|  | Low Effort | High Effort |
| --- | --- | --- |
| High Friction | Start here - fastest return | Plan these - high value but need investment |
| Low Friction | Do these opportunistically | Defer - low return for high cost |
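
One way to make this prioritization concrete is to score each inventoried validation and sort by the ratio of friction to automation effort. The sketch below is illustrative only; the field names, sample numbers, and weighting are assumptions, not a prescribed formula.

```python
from dataclasses import dataclass

@dataclass
class ManualValidation:
    name: str
    runs_per_month: int      # frequency: how often the manual check runs
    hours_per_run: float     # duration: how long each run takes
    error_rate: float        # 0.0-1.0: how often the step misses problems or needs rework
    automation_effort: int   # 1 (tooling exists, clear pass/fail) to 5 (needs refactoring first)

def friction(v: ManualValidation) -> float:
    # Hours of manual effort per month, inflated by how error-prone the step is.
    return v.runs_per_month * v.hours_per_run * (1 + v.error_rate)

inventory = [
    ManualValidation("Manual regression testing", 2, 24.0, 0.20, 3),
    ManualValidation("Post-deploy smoke testing", 4, 2.0, 0.10, 2),
    ManualValidation("Security review sign-off", 1, 8.0, 0.05, 4),
]

# Highest friction per unit of effort sorts to the top: automate those first.
for v in sorted(inventory, key=lambda v: friction(v) / v.automation_effort, reverse=True):
    print(f"{v.name}: friction={friction(v):.1f} hours/month, effort={v.automation_effort}")
```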

Walkthrough: Replacing Manual Regression Testing

A concrete example of the full cycle applied to a common brownfield problem.

Starting state

The QA team runs 200 manual test cases before every release. The full regression suite takes three days. Releases happen every two weeks, so the team spends roughly 20% of every sprint on manual regression testing.

Step 1: Identify

The value stream map shows the 3-day manual regression cycle as the single largest wait time between “code complete” and “deployed.” This is the constraint.

Step 2: Automate (start small)

Do not attempt to automate all 200 test cases at once. Rank the test cases by two criteria:

  • Failure frequency: Which tests actually catch bugs? (In most suites, a small number of tests catch the majority of real regressions.)
  • Business criticality: Which tests cover the highest-risk functionality?

Pick the top 20 test cases by these criteria. Write automated tests for those 20 first. This is enough to start the validation step.
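
One way to do that ranking is to weight each test case's historical defect count by the criticality of what it covers, as in the sketch below. The case names, defect counts, and weights are invented for illustration; use whatever failure and risk data your team actually tracks.

```python
# Rank manual test cases by how often they catch real bugs and how critical
# the covered functionality is, then take the top 20 as the first automation batch.
test_cases = [
    # (test case, defects caught in the last 12 months, business criticality 1-5)
    ("TC-042 checkout total calculation", 7, 5),
    ("TC-108 login with expired password", 4, 5),
    ("TC-233 report footer alignment", 0, 1),
    # ...the rest of the 200-case suite
]

def score(case: tuple) -> int:
    _, defects_caught, criticality = case
    return defects_caught * criticality

first_batch = sorted(test_cases, key=score, reverse=True)[:20]
for name, defects, criticality in first_batch:
    print(f"{name}: defects={defects}, criticality={criticality}")
```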

Step 3: Validate (parallel run)

Run the 20 automated tests alongside the full manual regression suite for two or three release cycles. Compare results:

  • Did the automated tests catch the same failures the manual tests caught?
  • Did the automated tests miss anything the manual tests caught?
  • Did the automated tests catch anything the manual tests missed?

Track these results explicitly. They are the evidence the team needs to trust the automation.
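
A lightweight way to track that comparison is to record which defects each suite flagged per release and diff the two sets, as in the sketch below. The ticket IDs are placeholders.

```python
# One release cycle of the parallel run: what each suite flagged.
manual_failures = {"BUG-311", "BUG-317", "BUG-320"}
automated_failures = {"BUG-311", "BUG-317", "BUG-325"}

caught_by_both = manual_failures & automated_failures
missed_by_automation = manual_failures - automated_failures      # must trend toward empty
found_only_by_automation = automated_failures - manual_failures  # a bonus, not a problem

print(f"Caught by both:           {sorted(caught_by_both)}")
print(f"Missed by automation:     {sorted(missed_by_automation)}")
print(f"Found only by automation: {sorted(found_only_by_automation)}")
```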

Step 4: Remove

Once the automated tests have proven equivalent for those 20 test cases across multiple cycles, remove those 20 test cases from the manual regression suite. The manual suite is now 180 test cases - taking roughly 2.7 days instead of 3.

Repeat

Pick the next 20 highest-value test cases. Automate them. Validate with parallel runs. Remove the manual cases. The manual suite shrinks with each cycle:

| Cycle | Manual Test Cases | Manual Duration | Automated Tests |
| --- | --- | --- | --- |
| Start | 200 | 3.0 days | 0 |
| 1 | 180 | 2.7 days | 20 |
| 2 | 160 | 2.4 days | 40 |
| 3 | 140 | 2.1 days | 60 |
| 4 | 120 | 1.8 days | 80 |
| 5 | 100 | 1.5 days | 100 |

Each cycle also gets faster because the team builds skill and the test infrastructure matures. For more on structuring automated tests effectively, see Testing Fundamentals and Functional Testing.
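
For a sense of what one automated replacement might look like, here is a minimal pytest-style sketch standing in for the hypothetical manual case "TC-042 checkout total calculation" from the ranking above. The module, function names, and expected values are assumptions, not a real application.

```python
# test_checkout_totals.py - hypothetical automated replacement for the manual
# regression case "TC-042 checkout total calculation".
import pytest

from shop.cart import Cart  # hypothetical module under test

def test_total_applies_discount_then_tax():
    cart = Cart()
    cart.add_item("widget", unit_price=10.00, quantity=3)
    cart.apply_discount_code("SAVE10")  # 10% off, per the manual test script
    # (3 * 10.00) * 0.9 * 1.08 = 29.16
    assert cart.total(tax_rate=0.08) == pytest.approx(29.16)

def test_empty_cart_total_is_zero():
    assert Cart().total(tax_rate=0.08) == 0
```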

When Refactoring Is a Prerequisite

Sometimes you cannot automate a validation because the code is not structured for it. In these cases, refactoring is a prerequisite step within the replacement cycle - not a separate initiative.

| Code-Level Blocker | Why It Prevents Automation | Refactoring Approach |
| --- | --- | --- |
| Tight coupling between modules | Cannot test one module without setting up the entire system | Extract interfaces at module boundaries so modules can be tested in isolation |
| Hardcoded configuration | Cannot run the same code in test and production environments | Extract configuration into environment variables or config files |
| No clear entry points | Cannot call business logic without going through the UI | Extract business logic into callable functions or services |
| Shared mutable state | Test results depend on execution order and are not repeatable | Isolate state by passing dependencies explicitly instead of using globals |
| Scattered database access | Cannot test logic without a running database and specific data | Consolidate data access behind a repository layer that can be substituted in tests |
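
To make one row of that table concrete, here is a minimal before-and-after sketch of the hardcoded-configuration blocker. The connection string, environment variable name, and query helper are illustrative.

```python
import os

def query(conn_string: str, sql: str) -> list:
    ...  # stand-in for the real database driver call

# Before: the connection string is hardcoded, so the same code cannot be
# pointed at a disposable test database.
def open_orders_before() -> list:
    return query("Server=prod-db01;Database=orders", "SELECT * FROM orders WHERE status = 'open'")

# After: configuration comes from the environment, so the pipeline can run
# identical code against whatever database the current stage provides.
def open_orders_after() -> list:
    return query(os.environ["ORDERS_DB_CONNECTION"], "SELECT * FROM orders WHERE status = 'open'")
```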

The key discipline: refactor only the minimum needed for the specific validation you are automating. Do not expand the refactoring scope beyond what the current cycle requires. This keeps the refactoring small, low-risk, and tied to a concrete outcome.

For more on decoupling strategies, see Architecture Decoupling.

The Compounding Effect

Each completed replacement cycle frees time that was previously spent on manual validation. That freed time becomes available for the next automation cycle. The pace of migration accelerates as you progress:

| Cycle | Manual Time per Release | Time Available for Automation | Cumulative Automated Checks |
| --- | --- | --- | --- |
| Start | 5 days | Limited (squeezed between feature work) | 0 |
| After 2 cycles | 4 days | 1 day freed | 2 validations automated |
| After 4 cycles | 3 days | 2 days freed | 4 validations automated |
| After 6 cycles | 2 days | 3 days freed | 6 validations automated |
| After 8 cycles | 1 day | 4 days freed | 8 validations automated |

Early cycles are the hardest because you have the least available time. This is why starting with the highest-friction, lowest-effort validation matters - it frees the most time for the least investment.

The same compounding dynamic applies to small batches - smaller changes are easier to validate, which makes each cycle faster, which enables even smaller changes.

Small Steps in Everything

The replacement cycle embodies the same small-batch discipline that CD itself requires. The principle applies at every level of the migration:

  • Automate one validation at a time. Do not try to build the entire pipeline in one sprint.
  • Refactor one module at a time. Do not launch a “tech debt initiative” to restructure the whole codebase before you can automate anything.
  • Remove one manual check at a time. Do not announce “we are eliminating manual QA” and try to do it all at once.

The risks of big-step migration:

  • The work stalls because the scope is too large to complete alongside feature delivery.
  • ROI is distant because nothing is automated until everything is automated.
  • Feature delivery suffers because the team is consumed by a transformation project instead of delivering value.

This connects directly to the brownfield migration principle: do not stop delivering features. The replacement cycle is designed to produce value at every iteration, not only at the end.

For more on decomposing work into small steps, see Work Decomposition.

Measuring Progress

Track these metrics to gauge migration progress. Start collecting them from baseline before you begin replacing validations.

| Metric | What It Tells You | Target Direction |
| --- | --- | --- |
| Manual validations remaining | How many manual steps still exist between commit and production | Down to zero |
| Time spent on manual validation per release | How much calendar time manual checks consume each release cycle | Decreasing each quarter |
| Pipeline coverage % | What percentage of validations are automated in the pipeline | Increasing toward 100% |
| Deployment frequency | How often you deploy to production | Increasing |
| Lead time for changes | Time from commit to production | Decreasing |

If manual validations remaining is decreasing but deployment frequency is not increasing, you may be automating low-friction validations that are not on the critical path. Revisit your prioritization and focus on the validations that are actually blocking faster delivery.
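
One way to keep "manual validations remaining" and "pipeline coverage %" honest is to derive them from the same inventory you built at the start rather than estimating them. The sketch below assumes each inventoried validation is marked once its automated replacement is proven and the manual step is removed; the names and statuses are illustrative.

```python
# Each entry: (manual validation, has it been replaced by a proven automated check?)
inventory = [
    ("Manual regression testing", True),
    ("Code style review", True),
    ("Security review sign-off", False),
    ("Environment configuration", False),
    ("Post-deploy smoke testing", True),
]

automated = sum(1 for _, replaced in inventory if replaced)
remaining = len(inventory) - automated
coverage_pct = 100 * automated / len(inventory)

print(f"Manual validations remaining: {remaining}")
print(f"Pipeline coverage: {coverage_pct:.0f}%")
```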