Pipeline Reference Architecture

Pipeline reference architectures for single-team, multi-team, and distributed service delivery, with quality gates sequenced by defect detection priority.

This section defines the quality gates and three pipeline patterns that apply them. The gates are derived from the Systemic Defect Fixes catalog and sequenced so the cheapest, fastest checks run first.

Gates marked Required in the Pre-Feature column must be in place and passing before any new feature work begins. They form the baseline safety net that every commit runs through. Adding features without these gates means defects accumulate faster than the team can detect them.

Gates prefixed with "AI" are AI-enhanced - the AI shifts detection earlier or catches issues that rule-based tools miss. See the Systemic Defect Fixes catalog for details.

Quality Gates in Priority Sequence

The gate sequence follows a single principle: fail fast, fail cheap. Gates that catch the most common defects with the least execution time run first. Each gate listed below maps to one or more defect sources from the catalog.

Pre-commit Gates

These run on the developer’s machine before code leaves the workstation. They provide sub-second to sub-minute feedback.

| Gate | Defect Sources Addressed | Catalog Section | Pre-Feature |
| --- | --- | --- | --- |
| Linting and formatting | Code style consistency, preventable review noise | Process & Deployment | Required |
| Static type checking | Null/missing data assumptions, type mismatches | Data & State | Required |
| Secret scanning | Secrets committed to source control | Security & Compliance | Required |
| SAST (injection patterns) | Injection vulnerabilities, taint analysis | Security & Compliance | Required |
| Race condition detection | Race conditions (thread sanitizers, where language supports it) | Integration & Boundaries | |
| Accessibility linting | Missing alt text, ARIA violations, contrast failures | Product & Discovery | |
| Unit tests | Logic errors, unintended side effects, edge cases | Change & Complexity | Required |
| Timeout enforcement checks | Missing timeout and deadline enforcement | Performance & Resilience | |
| AI semantic code review | Logic errors, missing edge cases, subtle injection vectors beyond pattern matching | Process & Deployment, Security & Compliance | |

CI Stage 1: Build and Fast Tests < 5 min

These run on every commit to trunk.

| Gate | Defect Sources Addressed | Catalog Section | Pre-Feature |
| --- | --- | --- | --- |
| All pre-commit gates | Re-run in CI to catch anything bypassed locally | See Pre-commit Gates | Required |
| Compilation / build | Build reproducibility, dependency resolution | Dependency & Infrastructure | Required |
| Dependency vulnerability scan (SCA) | Known vulnerabilities in dependencies | Security & Compliance | Required |
| License compliance scan | License compliance violations | Security & Compliance | |
| Code complexity and duplication scoring | Accumulated technical debt | Change & Complexity | |
| AI change impact analysis | Semantic blast radius of changes; unintended side effects beyond syntactic dependencies | Change & Complexity | |
| AI vulnerability reachability analysis | Correlate CVEs with actual code usage paths to prioritize exploitable risks over theoretical ones | Security & Compliance | |
| Stage duration warning | Warn if Stage 1 exceeds 10 minutes; slow fast-feedback loops mask defects and delay trunk integration | Process & Deployment | |

CD Stage 1: Integration and Contract Tests < 10 min

These validate boundaries between components.

| Gate | Defect Sources Addressed | Catalog Section | Pre-Feature |
| --- | --- | --- | --- |
| Contract tests | Interface mismatches, wrong assumptions about upstream/downstream | Integration & Boundaries | Required |
| Schema migration validation | Schema migration and backward compatibility failures | Data & State | Required |
| Infrastructure-as-code drift detection | Configuration drift, environment differences | Dependency & Infrastructure | |
| Environment parity checks | Test environments not reflecting production | Testing & Observability Gaps | |
| AI boundary coverage analysis | Integration boundaries missing contract tests; semantic service relationship mapping | Testing & Observability Gaps | |
| AI behavioral assumption detection | Undocumented assumptions at service boundaries that contract tests don’t cover | Integration & Boundaries | |
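At their core, contract tests are structural checks on what a consumer relies on. The sketch below uses a hypothetical order contract; real setups typically use a framework such as Pact, but the underlying idea is the same:

```python
# Consumer-side contract check sketch. The contract fields are hypothetical
# examples, not from the catalog.

CONTRACT = {           # fields the consumer relies on, with expected types
    "order_id": str,
    "status": str,
    "total_cents": int,
}

def violations(response: dict) -> list[str]:
    """Compare a provider response against the consumer's expectations."""
    problems = []
    for field, expected_type in CONTRACT.items():
        if field not in response:
            problems.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            problems.append(f"{field}: expected {expected_type.__name__}, "
                            f"got {type(response[field]).__name__}")
    return problems

# A provider change that renames total_cents to total breaks the contract:
print(violations({"order_id": "A1", "status": "shipped", "total": 1499}))
# prints ['missing field: total_cents']
```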

CD Stage 2: Broader Automated Verification < 15 min

These run in parallel where possible.

| Gate | Defect Sources Addressed | Catalog Section | Pre-Feature |
| --- | --- | --- | --- |
| Mutation testing | Untested edge cases and error paths, weak assertions | Testing & Observability Gaps | |
| Performance benchmarks | Performance regressions | Performance & Resilience | |
| Resource leak detection | Resource leaks (memory, connections) | Performance & Resilience | |
| Security integration tests | Authentication and authorization gaps | Security & Compliance | |
| Compliance-as-code policy checks | Regulatory requirement gaps, missing audit trails | Security & Compliance | |
| SBOM generation | License compliance, dependency transparency | Security & Compliance | |
| Automated WCAG compliance scan | Full-page rendered accessibility checks with browser automation | Product & Discovery | |
| AI edge case test generation | Untested boundaries and error conditions identified from code path analysis | Testing & Observability Gaps | |
| AI authorization path analysis | Missing authorization checks and privilege escalation patterns in code paths | Security & Compliance | |
| AI resilience review | Single points of failure and missing fallback paths in architecture | Performance & Resilience | |
| AI regulatory mapping | Map regulatory requirements to implementation artifacts; flag uncovered controls | Security & Compliance | |
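Mutation testing deserves a concrete illustration, since it tests the tests. The toy example below flips one operator by hand; real tools such as mutmut or PIT automate this across many mutation operators:

```python
# Mutation testing in miniature: mutate the code under test and check
# whether the test suite notices. A surviving mutant means a weak assertion.

def apply_discount(price, rate):
    return price - price * rate   # original

def apply_discount_mutant(price, rate):
    return price + price * rate   # mutated: '-' flipped to '+'

def weak_test(fn) -> bool:
    # Weak assertion: a zero rate cannot distinguish '-' from '+'.
    return fn(100, 0) == 100

def strong_test(fn) -> bool:
    return fn(100, 0.25) == 75

print(weak_test(apply_discount), weak_test(apply_discount_mutant))
# prints True True  -> mutant survives: the weak suite has a gap
print(strong_test(apply_discount), strong_test(apply_discount_mutant))
# prints True False -> mutant killed: the strong suite catches the change
```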

Acceptance Tests < 20 min

These validate user-facing behavior in a production-like environment.

| Gate | Defect Sources Addressed | Catalog Section | Pre-Feature |
| --- | --- | --- | --- |
| Functional acceptance tests | Implementation does not match acceptance criteria | Product & Discovery | |
| Load and capacity tests | Unknown capacity limits, slow response times | Performance & Resilience | |
| Chaos and resilience tests | Network partition handling, missing graceful degradation | Performance & Resilience | |
| Cache invalidation verification | Cache invalidation errors | Data & State | |
| Feature interaction tests | Unanticipated feature interactions | Change & Complexity | |
| AI intent alignment review | Acceptance criteria vs. user behavior data misalignment; specs that meet the letter but miss the intent | Product & Discovery | |
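Cache invalidation verification can be written as an ordinary acceptance test: write through the API, then assert the read path serves the new value. The in-memory store and cache below are stand-ins for real infrastructure:

```python
# Cache invalidation check sketch: a stale read after a write is the defect.

class Store:
    def __init__(self):
        self._db = {}
        self._cache = {}

    def read(self, key):
        if key not in self._cache:          # cache-aside read path
            self._cache[key] = self._db.get(key)
        return self._cache[key]

    def write(self, key, value, invalidate=True):
        self._db[key] = value
        if invalidate:
            self._cache.pop(key, None)      # the step under test

def cache_invalidation_ok(invalidate: bool) -> bool:
    store = Store()
    store.write("price", 100)
    assert store.read("price") == 100       # warm the cache
    store.write("price", 80, invalidate=invalidate)
    return store.read("price") == 80        # stale 100 here means a defect

print(cache_invalidation_ok(True), cache_invalidation_ok(False))
# prints True False: dropping the invalidation step makes the gate fail
```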

Production Verification

These run during and after deployment. They are not optional - they close the feedback loop.

| Gate | Defect Sources Addressed | Catalog Section | Pre-Feature |
| --- | --- | --- | --- |
| Health checks with auto-rollback | Inadequate rollback capability | Process & Deployment | |
| Canary or progressive deployment | Batching too many changes per release | Process & Deployment | |
| Real user monitoring and SLO checks | Slow user-facing response times, product-market misalignment | Performance & Resilience | |
| Structured audit logging verification | Missing audit trails | Security & Compliance | |
| AI change risk scoring | Automated risk assessment from change diff, deployment history, and blast radius analysis | Process & Deployment | |
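A health check with auto-rollback reduces to a probe loop with a failure budget. In the sketch below the probe and rollback action are injected so the logic is testable; in production they would hit a real health endpoint and your deployment tool:

```python
# Post-deploy verification sketch: probe health N times, roll back once the
# failure budget is exhausted. Thresholds here are illustrative defaults.
import time
from typing import Callable

def verify_deploy(healthy: Callable[[], bool],
                  rollback: Callable[[], None],
                  checks: int = 5,
                  max_failures: int = 2,
                  interval: float = 0.0) -> bool:
    """Probe health `checks` times; roll back after `max_failures` failures."""
    failures = 0
    for _ in range(checks):
        if not healthy():
            failures += 1
            if failures >= max_failures:
                rollback()        # automated recovery closes the feedback loop
                return False
        time.sleep(interval)
    return True

events = []
assert verify_deploy(lambda: True, lambda: events.append("rollback")) is True
assert verify_deploy(lambda: False, lambda: events.append("rollback")) is False
print(events)  # prints ['rollback']: the failing deploy triggered one rollback
```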

Pre-Feature Baseline

The gates marked Required in the Pre-Feature column of the tables above form the minimum baseline: all of them must be in place and passing before any new feature work begins.

Pipeline Patterns

These three patterns apply the quality gates above to progressively more complex team and deployment topologies. Most organizations start with Pattern 1 and evolve toward Pattern 3 as team count and deployment independence requirements grow.

  1. Single Team, Single Deployable - one team owns one modular monolith with a linear pipeline
  2. Multiple Teams, Single Deployable - multiple teams own sub-domain modules within a shared modular monolith, each with its own sub-pipeline feeding a thin integration pipeline
  3. Independent Teams, Independent Deployables - each team owns an independently deployable service with its own full pipeline and API contract verification

Mapping to the Defect Sources Catalog

Each quality gate above is derived from the Systemic Defect Fixes catalog. The catalog organizes defects by origin - product and discovery, integration, knowledge, change and complexity, testing gaps, process, data, dependencies, security, and performance. The pipeline gates are the automated enforcement points for the systemic prevention strategies described in the catalog.

Gates prefixed with "AI" correspond to catalog entries where AI shifts detection earlier than current rule-based automation. For expert agent patterns that implement these gates in an agentic CD context, see ACD Pipeline Enforcement.

When adding or removing gates, consult the catalog to ensure that no defect category loses its detection point. A gate that seems redundant may be the only automated check for a specific defect source.

Further Reading

For a deeper treatment of pipeline design, stage sequencing, and deployment strategies, see Dave Farley’s Continuous Delivery Pipelines, which covers pipeline architecture patterns in detail.


Single Team, Single Deployable

A linear pipeline pattern for a single team owning a modular monolith.

Multiple Teams, Single Deployable

A sub-pipeline pattern for multiple teams contributing domain modules to a shared modular monolith.

Independent Teams, Independent Deployables

A fully independent pipeline pattern for teams deploying their own services in any order, with API contract verification replacing integration testing.
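One way to picture contract verification replacing integration testing: before deploying, the provider replays each consumer's recorded expectations against its current behavior. Consumer names and contracts below are hypothetical:

```python
# Provider-side contract verification sketch: a build is safe to deploy in
# any order only if it still satisfies every consumer's recorded contract.

CONSUMER_CONTRACTS = {
    "web-frontend": {"endpoint": "get_order", "fields": {"order_id", "status"}},
    "billing":      {"endpoint": "get_order", "fields": {"order_id", "total_cents"}},
}

def get_order_fields() -> set[str]:
    # Fields the provider's current implementation actually returns.
    return {"order_id", "status", "total_cents"}

def broken_consumers() -> list[str]:
    """Return the consumers whose contracts the current build would break."""
    provided = get_order_fields()
    return [consumer for consumer, contract in CONSUMER_CONTRACTS.items()
            if not contract["fields"] <= provided]

print(broken_consumers())
# prints []: every consumer contract is satisfied, so this service can
# deploy independently of its consumers' release schedules
```

Dropping `total_cents` from the provider's response would surface `billing` in the result and block the deploy, without any cross-team integration environment.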