Pipeline Reference Architecture

Pipeline reference architectures for single-team, multi-team, and distributed service delivery, with quality gates sequenced by defect detection priority.

This section defines quality gates sequenced by defect detection priority and three pipeline patterns that apply them. Quality gates are derived from the Systemic Defect Fixes catalog and sequenced so the cheapest, fastest checks run first.

Gates marked with [Pre-Feature] must be in place and passing before any new feature work begins. They form the baseline safety net that every commit runs through. Adding features without these gates means defects accumulate faster than the team can detect them.

Gates prefixed with "AI" in the tables below are AI-enhanced - the AI shifts detection earlier or catches issues that rule-based tools miss. See the Systemic Defect Fixes catalog for details.

Quality Gates in Priority Sequence

The gate sequence follows a single principle: fail fast, fail cheap. Gates that catch the most common defects with the least execution time run first. Each gate listed below maps to one or more defect sources from the catalog.

Pre-commit Gates

These run on the developer’s machine before code leaves the workstation. They provide sub-second to sub-minute feedback.

| Gate | Defect Sources Addressed | Catalog Section | Pre-Feature |
| --- | --- | --- | --- |
| Linting and formatting | Code style consistency, preventable review noise | Process & Deployment | Required |
| Static type checking | Null/missing data assumptions, type mismatches | Data & State | Required |
| Secret scanning | Secrets committed to source control | Security & Compliance | Required |
| SAST (injection patterns) | Injection vulnerabilities, taint analysis | Security & Compliance | Required |
| Race condition detection | Race conditions (thread sanitizers, where language supports it) | Integration & Boundaries | |
| Accessibility linting | Missing alt text, ARIA violations, contrast failures | Product & Discovery | |
| Unit tests | Logic errors, unintended side effects, edge cases | Change & Complexity | Required |
| Timeout enforcement checks | Missing timeout and deadline enforcement | Performance & Resilience | |
| AI semantic code review | Logic errors, missing edge cases, subtle injection vectors beyond pattern matching | Process & Deployment, Security & Compliance | |
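
To make the fail-fast sequencing concrete, here is a minimal sketch of a local pre-commit runner. The specific tool commands (ruff, mypy, gitleaks, pytest) are illustrative assumptions - substitute whatever your stack uses - and the ordering puts the cheapest gates first.

```python
#!/usr/bin/env python3
"""Minimal pre-commit runner sketch: cheapest gates first, stop at first failure."""
import subprocess
import sys
import time

# Commands are assumptions for illustration; swap in your own tooling.
GATES = [
    ("lint/format", ["ruff", "check", "."]),
    ("type check", ["mypy", "src"]),
    ("secret scan", ["gitleaks", "protect", "--staged"]),
    ("unit tests", ["pytest", "-q", "tests/unit"]),
]

def main() -> int:
    for name, cmd in GATES:
        start = time.monotonic()
        code = subprocess.run(cmd).returncode
        elapsed = time.monotonic() - start
        if code != 0:
            print(f"FAIL {name} ({elapsed:.1f}s) - commit blocked")
            return code  # fail fast: later, slower gates never run
        print(f"PASS {name} ({elapsed:.1f}s)")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```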

CI Stage 1: Build and Fast Tests (< 5 min)

These run on every commit to trunk.

| Gate | Defect Sources Addressed | Catalog Section | Pre-Feature |
| --- | --- | --- | --- |
| All pre-commit gates | Re-run in CI to catch anything bypassed locally | See Pre-commit Gates | Required |
| Compilation / build | Build reproducibility, dependency resolution | Dependency & Infrastructure | Required |
| Dependency vulnerability scan (SCA) | Known vulnerabilities in dependencies | Security & Compliance | Required |
| License compliance scan | License compliance violations | Security & Compliance | |
| Code complexity and duplication scoring | Accumulated technical debt | Change & Complexity | |
| AI change impact analysis | Semantic blast radius of changes; unintended side effects beyond syntactic dependencies | Change & Complexity | |
| AI vulnerability reachability analysis | Correlate CVEs with actual code usage paths to prioritize exploitable risks over theoretical ones | Security & Compliance | |
| Stage duration warning | Warn if Stage 1 exceeds 10 minutes; slow fast-feedback loops mask defects and delay trunk integration | Process & Deployment | |
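
The stage duration warning needs nothing more than a timer around the stage. A minimal sketch, assuming the stage steps are shell commands and a GitHub-Actions-style `::warning::` annotation (adapt the output format to your CI system):

```python
import subprocess
import sys
import time

WARN_AFTER_S = 10 * 60  # warn when Stage 1 exceeds 10 minutes

start = time.monotonic()
# Placeholder stage steps; the actual commands are an assumption.
for cmd in (["make", "build"], ["make", "fast-tests"]):
    subprocess.run(cmd, check=True)
elapsed = time.monotonic() - start

if elapsed > WARN_AFTER_S:
    # Warn rather than fail: a slow fast-feedback loop is itself a defect to fix.
    print(f"::warning::CI Stage 1 took {elapsed / 60:.1f} min (target < 5, warn at 10)",
          file=sys.stderr)
```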

CD Stage 1: Integration and Contract Tests (< 10 min)

These validate boundaries between components.

| Gate | Defect Sources Addressed | Catalog Section | Pre-Feature |
| --- | --- | --- | --- |
| Contract tests | Interface mismatches, wrong assumptions about upstream/downstream | Integration & Boundaries | Required |
| Schema migration validation | Schema migration and backward compatibility failures | Data & State | Required |
| Infrastructure-as-code drift detection | Configuration drift, environment differences | Dependency & Infrastructure | |
| Environment parity checks | Test environments not reflecting production | Testing & Observability Gaps | |
| AI boundary coverage analysis | Integration boundaries missing contract tests; semantic service relationship mapping | Testing & Observability Gaps | |
| AI behavioral assumption detection | Undocumented assumptions at service boundaries that contract tests don’t cover | Integration & Boundaries | |
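
For flavor, the sketch below shows the consumer side of a contract test: it asserts only the fields this consumer depends on, not the provider’s full response. The endpoint and field names are hypothetical.

```python
import json
import urllib.request

# Hypothetical provider endpoint and the field subset this consumer relies on.
PROVIDER_URL = "http://localhost:8080/api/v1/orders/42"
REQUIRED_FIELDS = {"id": str, "status": str, "total_cents": int}

def test_order_contract():
    with urllib.request.urlopen(PROVIDER_URL) as resp:
        assert resp.status == 200
        body = json.loads(resp.read())
    for field, expected_type in REQUIRED_FIELDS.items():
        assert field in body, f"contract broken: missing field {field!r}"
        assert isinstance(body[field], expected_type), (
            f"contract broken: {field!r} is not {expected_type.__name__}"
        )
```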

CD Stage 2: Broader Automated Verification (< 15 min)

These run in parallel where possible.

| Gate | Defect Sources Addressed | Catalog Section | Pre-Feature |
| --- | --- | --- | --- |
| Mutation testing | Untested edge cases and error paths, weak assertions | Testing & Observability Gaps | |
| Performance benchmarks | Performance regressions | Performance & Resilience | |
| Resource leak detection | Resource leaks (memory, connections) | Performance & Resilience | |
| Security integration tests | Authentication and authorization gaps | Security & Compliance | |
| Compliance-as-code policy checks | Regulatory requirement gaps, missing audit trails | Security & Compliance | |
| SBOM generation | License compliance, dependency transparency | Security & Compliance | |
| Automated WCAG compliance scan | Full-page rendered accessibility checks with browser automation | Product & Discovery | |
| AI edge case test generation | Untested boundaries and error conditions identified from code path analysis | Testing & Observability Gaps | |
| AI authorization path analysis | Missing authorization checks and privilege escalation patterns in code paths | Security & Compliance | |
| AI resilience review | Single points of failure and missing fallback paths in architecture | Performance & Resilience | |
| AI regulatory mapping | Map regulatory requirements to implementation artifacts; flag uncovered controls | Security & Compliance | |

Acceptance Tests (< 20 min)

These validate user-facing behavior in a production-like environment.

| Gate | Defect Sources Addressed | Catalog Section | Pre-Feature |
| --- | --- | --- | --- |
| Functional acceptance tests | Implementation does not match acceptance criteria | Product & Discovery | |
| Load and capacity tests | Unknown capacity limits, slow response times | Performance & Resilience | |
| Chaos and resilience tests | Network partition handling, missing graceful degradation | Performance & Resilience | |
| Cache invalidation verification | Cache invalidation errors | Data & State | |
| Feature interaction tests | Unanticipated feature interactions | Change & Complexity | |
| AI intent alignment review | Acceptance criteria vs. user behavior data misalignment; specs that meet the letter but miss the intent | Product & Discovery | |

Production Verification

These run during and after deployment. They are not optional - they close the feedback loop.

| Gate | Defect Sources Addressed | Catalog Section | Pre-Feature |
| --- | --- | --- | --- |
| Health checks with auto-rollback | Inadequate rollback capability | Process & Deployment | |
| Canary or progressive deployment | Batching too many changes per release | Process & Deployment | |
| Real user monitoring and SLO checks | Slow user-facing response times, product-market misalignment | Performance & Resilience | |
| Structured audit logging verification | Missing audit trails | Security & Compliance | |
| AI change risk scoring | Automated risk assessment from change diff, deployment history, and blast radius analysis | Process & Deployment | |
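
As a sketch of how health checks with auto-rollback fit together during a canary, consider the loop below. The health endpoint, observation window, and rollback command are all assumptions for illustration.

```python
import json
import subprocess
import time
import urllib.request

HEALTH_URL = "http://canary.internal/healthz"  # hypothetical endpoint
CHECK_INTERVAL_S = 30
CHECKS = 10  # observe the canary for ~5 minutes before promoting

def healthy() -> bool:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200 and json.loads(resp.read()).get("ok") is True
    except OSError:
        return False

for _ in range(CHECKS):
    if not healthy():
        # Any failed check rolls back immediately; the command is an assumption.
        subprocess.run(["./deploy", "rollback"], check=True)
        raise SystemExit("canary unhealthy - rolled back")
    time.sleep(CHECK_INTERVAL_S)

print("canary healthy - promoting to full rollout")
```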

Pre-Feature Baseline

The gates marked Required in the tables above form the pre-feature baseline: they must be in place and passing before any new feature work begins. The remaining gates can be added incrementally once the baseline is green.

Pipeline Patterns

These three patterns apply the quality gates above to progressively more complex team and deployment topologies. Most organizations start with Pattern 1 and evolve toward Pattern 3 as team count and deployment independence requirements grow.

  1. Single Team, Single Deployable - one team owns one modular monolith with a linear pipeline
  2. Multiple Teams, Single Deployable - multiple teams own sub-domain modules within a shared modular monolith, each with its own sub-pipeline feeding a thin integration pipeline
  3. Independent Teams, Independent Deployables - each team owns an independently deployable service with its own full pipeline and API contract verification

Mapping to the Defect Sources Catalog

Each quality gate above is derived from the Systemic Defect Fixes catalog. The catalog organizes defects by origin - product and discovery, integration, knowledge, change and complexity, testing gaps, process, data, dependencies, security, and performance. The pipeline gates are the automated enforcement points for the systemic prevention strategies described in the catalog.

The AI-prefixed gates correspond to catalog entries where AI shifts detection earlier than current rule-based automation. For expert agent patterns that implement these gates in an agentic CD context, see ACD Pipeline Enforcement.

When adding or removing gates, consult the catalog to ensure that no defect category loses its detection point. A gate that seems redundant may be the only automated check for a specific defect source.

Further Reading

For a deeper treatment of pipeline design, stage sequencing, and deployment strategies, see Dave Farley’s Continuous Delivery Pipelines, which covers pipeline architecture patterns in detail.

1 - Single Team, Single Deployable

A linear pipeline pattern for a single team owning a modular monolith.

This architecture suits a team of up to 8-10 people owning a modular monolith - a single deployable application with well-defined internal module boundaries. The codebase is organized by domain, not by technical layer. Each module encapsulates its own data, logic, and interfaces, communicating with other modules through explicit internal APIs. The application deploys as one unit, but its internal structure makes it possible to reason about, test, and change one module without understanding the entire codebase. The pipeline is linear with parallel stages where dependencies allow.
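
As a rough illustration of what an explicit internal module API can look like in code (module layout and names are hypothetical, not prescribed):

```python
# modules/payments/api.py - the only surface other modules may import.
from dataclasses import dataclass

@dataclass(frozen=True)
class ChargeResult:
    charge_id: str
    succeeded: bool

def charge(order_id: str, amount_cents: int) -> ChargeResult:
    """Public entry point; persistence and gateway details stay private."""
    return _gateway_charge(order_id, amount_cents)

def _gateway_charge(order_id: str, amount_cents: int) -> ChargeResult:
    # Private helper: other modules must not import underscore-prefixed names.
    return ChargeResult(charge_id=f"ch_{order_id}", succeeded=True)
```

An import linter or architecture test can then enforce that modules only import each other’s api surfaces.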

Legend: Pre-Feature Gate, CI Stage, Parallel Verification, Acceptance, Production

```mermaid
graph TD
    classDef prefeature fill:#0d7a32,stroke:#0a6128,color:#fff
    classDef ci fill:#224968,stroke:#1a3a54,color:#fff
    classDef parallel fill:#30648e,stroke:#224968,color:#fff
    classDef accept fill:#6c757d,stroke:#565e64,color:#fff
    classDef prod fill:#a63123,stroke:#8a2518,color:#fff

    A["Pre-commit Gates<br/><small>Lint, Types, Secrets, SAST</small>"]:::prefeature
    B["Build + Unit Tests"]:::prefeature
    C["Contract + Schema Tests"]:::prefeature
    D["Security Scans"]:::parallel
    E["Performance Benchmarks"]:::parallel
    F["Acceptance Tests<br/><small>Production-Like Env</small>"]:::accept
    G["Create Immutable Artifact"]:::ci
    H["Deploy Canary / Progressive"]:::prod
    I["Health Checks + SLO Monitors<br/>Auto-Rollback"]:::prod

    A -->|"commit to trunk"| B
    B --> C
    C --> D & E
    D --> F
    E --> F
    F --> G
    G --> H
    H --> I
```

Key Characteristics

  • One pipeline, one artifact: The entire application builds and deploys as a single immutable artifact. There is no fan-out or fan-in.
  • Linear with parallel branches: Security scans and performance benchmarks run in parallel because neither depends on the other. Everything else is sequential.
  • Trunk-based development: All developers commit to trunk at least daily. The pipeline runs on every commit.
  • Total target time: Under 15 minutes from commit to production-ready artifact. Acceptance tests may extend this to 20 minutes for complex applications.
  • Ownership: The team owns the pipeline definition, which lives in the same repository as the application code.

When This Architecture Breaks Down

This architecture stops working when:

  • The system becomes too large for a single team to manage.
  • Build times keep growing even after optimization, eroding the team’s ability to respond quickly.
  • Different parts of the application need different deployment cadences.

When these symptoms appear, consider splitting into the multi-team architecture or decomposing the application into independently deployable services with their own pipelines.

2 - Multiple Teams, Single Deployable

A sub-pipeline pattern for multiple teams contributing domain modules to a shared modular monolith.

This architecture suits organizations where multiple teams contribute to a single deployable modular monolith - a common pattern for large applications, mobile apps, or platforms where the final artifact must be assembled from team contributions.

The modular monolith structure is what makes multi-team ownership possible. Each team owns a specific module representing a bounded sub-domain of the application. Team A might own checkout and payments, Team B owns inventory and fulfillment, Team C owns user accounts and authentication. Modules communicate through explicit internal APIs, not by reaching into each other’s database tables or calling private methods. Each team’s sub-pipeline validates only their module. A shared integration pipeline assembles and verifies the combined result.

This ownership model is critical. Without clear module boundaries, teams step on each other’s code, sub-pipelines trigger on unrelated changes, and merge conflicts replace pipeline contention as the bottleneck. The module split must follow the application’s domain boundaries, not its technical layers. A team that owns “the database layer” or “the API controllers” will always be coupled to every other team. A team that owns “payments” can change its database, API, and UI independently. If the codebase is not yet structured as a modular monolith, restructure it before adopting this architecture - otherwise the sub-pipelines will constantly interfere with each other.

```mermaid
graph TD
    classDef prefeature fill:#0d7a32,stroke:#0a6128,color:#fff
    classDef team fill:#224968,stroke:#1a3a54,color:#fff
    classDef integration fill:#30648e,stroke:#224968,color:#fff
    classDef prod fill:#a63123,stroke:#8a2518,color:#fff

    subgraph teamA ["Payments Sub-Domain (Team A)"]
        A1["Pre-commit Gates"]:::prefeature
        A2["Build + Unit Tests"]:::prefeature
        A3["Contract Tests"]:::prefeature
        A4["Security + Perf"]:::team
        A1 --> A2 --> A3 --> A4
    end

    subgraph teamB ["Inventory Sub-Domain (Team B)"]
        B1["Pre-commit Gates"]:::prefeature
        B2["Build + Unit Tests"]:::prefeature
        B3["Contract Tests"]:::prefeature
        B4["Security + Perf"]:::team
        B1 --> B2 --> B3 --> B4
    end

    subgraph teamC ["Accounts Sub-Domain (Team C)"]
        C1["Pre-commit Gates"]:::prefeature
        C2["Build + Unit Tests"]:::prefeature
        C3["Contract Tests"]:::prefeature
        C4["Security + Perf"]:::team
        C1 --> C2 --> C3 --> C4
    end

    subgraph integ ["Integration Pipeline"]
        I1["Assemble Combined Artifact"]:::integration
        I2["Integration Contract Tests"]:::integration
        I3["Acceptance Tests<br/><small>Production-Like Env</small>"]:::integration
        I4["Create Immutable Artifact"]:::integration
        I1 --> I2 --> I3 --> I4
    end

    A4 --> I1
    B4 --> I1
    C4 --> I1

    I4 --> D1["Deploy Canary / Progressive"]:::prod
    D1 --> D2["Health Checks + SLO Monitors<br/>Auto-Rollback"]:::prod
```

Key Characteristics

  • Module ownership by domain: Each team owns a bounded module of the application’s functionality. Ownership is defined by domain, not by technical layer. The team is responsible for all code, tests, and pipeline configuration within their module.
  • Team-owned sub-pipelines: Each team runs their own pre-commit, build, unit test, contract test, and security gates independently. A team’s sub-pipeline validates only their module and is their fast feedback loop.
  • Contract tests at both levels: Teams run contract tests in their sub-pipeline to catch boundary issues at the module edges. The integration pipeline runs cross-module contract tests to verify the assembled result.
  • Integration pipeline is thin: The integration pipeline does not re-run each team’s tests. It validates only what cannot be validated in isolation - cross-module integration, the assembled artifact, and end-to-end acceptance tests.
  • Sub-pipeline target time: Under 10 minutes. This is the team’s primary feedback loop and must stay fast.
  • Integration pipeline target time: Under 15 minutes. If it grows beyond this, the integration test suite needs decomposition or the application needs architectural changes to enable independent deployment.
  • Trunk-based development with path filters: All teams commit to the same trunk. Sub-pipelines trigger based on path filters aligned to module boundaries, so a change to the payments module does not trigger the inventory sub-pipeline.
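
One way to implement path-filter triggering outside of any specific CI product is to map changed paths to module roots. A minimal sketch, assuming a modules/ directory layout that mirrors the sub-domains above:

```python
import subprocess

# Module roots aligned to team-owned sub-domains; the layout is an assumption.
MODULE_PATHS = {
    "payments": "modules/payments/",
    "inventory": "modules/inventory/",
    "accounts": "modules/accounts/",
}

def changed_files(base: str = "origin/main") -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--name-only", base, "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def sub_pipelines_to_trigger() -> set[str]:
    return {
        module
        for path in changed_files()
        for module, prefix in MODULE_PATHS.items()
        if path.startswith(prefix)
    }

if __name__ == "__main__":
    print("trigger:", sorted(sub_pipelines_to_trigger()) or "nothing")
```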

Preventing the Integration Pipeline from Becoming a Bottleneck

The integration pipeline is a shared resource and the most likely bottleneck in this architecture. To keep it fast:

  1. Move tests left into sub-pipelines: Every test that can run in a sub-pipeline should run there. The integration pipeline should only contain tests that require the full assembled artifact.
  2. Use contract tests aggressively: Contract tests in sub-pipelines catch most integration issues without needing the full system. The integration pipeline’s contract tests are a verification layer, not the primary detection point.
  3. Run the integration pipeline on every commit to trunk: Do not batch. Batching creates large changesets that are harder to debug when they fail.
  4. Parallelize acceptance tests: Group acceptance tests by feature area and run groups in parallel.
  5. Monitor integration pipeline duration: Set an alert if it exceeds 15 minutes. Treat this the same as a failing test - fix it immediately.

When to Move Away from This Architecture

This architecture is a pragmatic pattern for organizations that cannot yet decompose their monolith into independently deployable services. The long-term goal is loose coupling - independent services with independent pipelines that do not need a shared integration step.

Signs you are ready to decompose:

  • Contract tests catch virtually all integration issues in sub-pipelines
  • The integration pipeline adds little value beyond what sub-pipelines already verify
  • Teams are blocked by integration pipeline queuing more than once per week
  • Different parts of the application need different deployment cadences

3 - Independent Teams, Independent Deployables

A fully independent pipeline pattern for teams deploying their own services in any order, with API contract verification replacing integration testing.

This is the target architecture for continuous delivery at scale. Each team owns an independently deployable service with its own pipeline, its own release cadence, and its own path to production. No team waits for another team to deploy. No integration pipeline serializes their work. The only shared infrastructure is the API contract layer that defines how services communicate.

This architecture demands disciplined API management. Without it, independent deployment is an illusion - teams deploy whenever they want, but they break each other constantly.

```mermaid
graph TD
    classDef prefeature fill:#0d7a32,stroke:#0a6128,color:#fff
    classDef team fill:#224968,stroke:#1a3a54,color:#fff
    classDef contract fill:#30648e,stroke:#224968,color:#fff
    classDef prod fill:#a63123,stroke:#8a2518,color:#fff
    classDef api fill:#6c757d,stroke:#565e64,color:#fff

    subgraph svcA ["Service A Pipeline (Team A)"]
        A1["Pre-commit Gates"]:::prefeature
        A2["Build + Unit Tests"]:::prefeature
        A3["Contract<br/>Verification"]:::prefeature
        A4["Security + Perf"]:::team
        A5["Acceptance Tests"]:::team
        A6["Create Immutable Artifact"]:::team
        A1 --> A2 --> A3 --> A4 --> A5 --> A6
    end

    subgraph svcB ["Service B Pipeline (Team B)"]
        B1["Pre-commit Gates"]:::prefeature
        B2["Build + Unit Tests"]:::prefeature
        B3["Contract<br/>Verification"]:::prefeature
        B4["Security + Perf"]:::team
        B5["Acceptance Tests"]:::team
        B6["Create Immutable Artifact"]:::team
        B1 --> B2 --> B3 --> B4 --> B5 --> B6
    end

    subgraph svcC ["Service C Pipeline (Team C)"]
        C1["Pre-commit Gates"]:::prefeature
        C2["Build + Unit Tests"]:::prefeature
        C3["Contract<br/>Verification"]:::prefeature
        C4["Security + Perf"]:::team
        C5["Acceptance Tests"]:::team
        C6["Create Immutable Artifact"]:::team
        C1 --> C2 --> C3 --> C4 --> C5 --> C6
    end

    subgraph apis ["API Schema Registry"]
        R1["Published API Schemas<br/><small>OpenAPI, AsyncAPI, Protobuf</small>"]:::api
        R2["Backward Compatibility<br/>Checks"]:::api
        R3["Consumer Pacts<br/><small>where available</small>"]:::api
        R1 --- R2 --- R3
    end

    A3 <-..->|"verify"| R3
    B3 <-..->|"verify"| R3
    C3 <-..->|"verify"| R3

    A6 --> A7["Deploy + Canary"]:::prod
    A7 --> A8["Health + SLOs"]:::prod

    B6 --> B7["Deploy + Canary"]:::prod
    B7 --> B8["Health + SLOs"]:::prod

    C6 --> C7["Deploy + Canary"]:::prod
    C7 --> C8["Health + SLOs"]:::prod
```

Legend: Pre-Feature Gate, Team Pipeline, API Schema Registry, Production

Key Characteristics

  • Fully independent deployment: Each team deploys on its own schedule. Team A can deploy ten times a day while Team C deploys once a week. No coordination is required.
  • No shared integration pipeline: There is no fan-in step. Each pipeline goes straight from artifact creation to production. This eliminates the integration bottleneck entirely.
  • Contract tests replace integration tests: Instead of testing all services together, each team verifies its API contracts independently. The level of contract verification depends on how much coordination is possible between teams (see contract verification approaches below).
  • Each team owns its full pipeline: From pre-commit to production monitoring. No shared pipeline definitions, no central platform team gating deployments.

Why API Management Is Critical

Independent deployment only works when teams can change their service without breaking others. This requires a shared understanding of API boundaries that is enforced automatically, not through meetings or documents that drift.

Without API management, independent pipelines create independent failures. Teams deploy incompatible changes, discover the breakage in production, and revert to coordinated releases to stop the bleeding. This is worse than the multi-team architecture because it creates the illusion of independence while delivering the reliability of chaos.

What API Management Requires

  1. Published API schemas: Every service publishes its API contract (OpenAPI, AsyncAPI, Protobuf, or equivalent) as a versioned artifact. The schema is the source of truth for what the service provides.

  2. Contract verification (see approaches below): At minimum, providers verify backward compatibility against their own published schema. Where cross-team coordination is feasible, consumer-driven contracts add stronger guarantees.

  3. Backward compatibility enforcement: Every API change is checked for backward compatibility against the published schema. Breaking changes require a new API version using the expand-then-contract pattern (sketched in code after this list):

    • Deploy the new version alongside the old
    • Migrate consumers to the new version
    • Remove the old version only after all consumers have migrated
  4. Schema registry: A central registry (Confluent Schema Registry, a simple artifact repository, or a Pact Broker where consumer-driven contracts are used) stores published schemas. Pipelines pull from this registry to run compatibility checks. The registry is shared infrastructure, but it does not gate deployments - it provides data that each team’s pipeline uses to make its own go/no-go decision.

  5. API versioning strategy: Teams agree on a versioning convention (URL path versioning, header versioning, or semantic versioning for message schemas) and enforce it through pipeline gates. The convention must be simple enough that every team follows it without deliberation.
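
To make the expand-then-contract pattern from item 3 concrete, the sketch below keeps two response shapes registered side by side during consumer migration. Field names and the routing scheme are hypothetical.

```python
import json

# Expand phase: old and new versions served side by side.
def handle_orders_v1(order: dict) -> str:
    # Old shape: float dollar total (hypothetical legacy field).
    return json.dumps({"id": order["id"], "total": order["total_cents"] / 100})

def handle_orders_v2(order: dict) -> str:
    # New shape: explicit currency plus integer minor units.
    return json.dumps({
        "id": order["id"],
        "total": {"currency": "USD", "minor_units": order["total_cents"]},
    })

ROUTES = {
    "/v1/orders": handle_orders_v1,  # contract phase: remove only after v1 traffic hits zero
    "/v2/orders": handle_orders_v2,
}
```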

Contract Verification Approaches

Not all teams can coordinate on shared contract tooling. The right approach depends on the relationship between provider and consumer teams. These approaches are listed from least to most coordination required. Use the strongest approach your context supports.

| Approach | How It Works | Coordination Required | Best When |
| --- | --- | --- | --- |
| Provider schema compatibility | Provider’s pipeline checks every change for backward compatibility against its own published schema (e.g., OpenAPI diff). No consumer involvement needed. | None between teams | Teams are in different organizations, or consumers are external/unknown |
| Provider-maintained consumer tests | Provider team writes tests that exercise known consumer usage patterns based on API analytics, documentation, or past breakage. | Minimal - provider observes consumers | Provider can see consumer traffic patterns but cannot require consumer participation |
| Consumer-driven contracts | Consumers publish pacts describing the subset of the provider API they depend on. Provider runs these pacts in its pipeline. See Contract Tests. | High - shared tooling, broker, and agreement to maintain pacts | Teams are in the same organization with shared tooling and willingness to maintain pacts |

Most organizations use a mix. Internal teams with shared tooling can adopt consumer-driven contracts. Teams consuming third-party or cross-organization APIs use provider schema compatibility checks and provider-maintained consumer tests.

The critical requirement is not which approach you use but that every provider pipeline verifies backward compatibility before deployment. The minimum viable contract verification is an automated schema diff against the published API - if the diff contains a breaking change, the pipeline fails.
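
A hand-rolled flavor of that minimum viable check, assuming OpenAPI-style JSON specs; dedicated diff tools cover far more breaking-change classes than this sketch does:

```python
import json
import sys

def breaking_changes(old: dict, new: dict) -> list[str]:
    """Flag two common breaking changes: removed paths and removed operations."""
    problems = []
    old_paths, new_paths = old.get("paths", {}), new.get("paths", {})
    for path, ops in old_paths.items():
        if path not in new_paths:
            problems.append(f"removed path: {path}")
            continue
        for method in ops:
            if method not in new_paths[path]:
                problems.append(f"removed operation: {method.upper()} {path}")
    # Real tools also catch removed response fields, narrowed types,
    # and newly required parameters.
    return problems

if __name__ == "__main__":
    with open(sys.argv[1]) as f_old, open(sys.argv[2]) as f_new:
        issues = breaking_changes(json.load(f_old), json.load(f_new))
    for issue in issues:
        print("BREAKING:", issue)
    sys.exit(1 if issues else 0)  # a breaking diff fails the pipeline
```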

Additional Quality Gates for Distributed Architectures

| Gate | Defect Sources Addressed | Catalog Section |
| --- | --- | --- |
| Provider schema backward compatibility | Interface mismatches from provider changes | Integration & Boundaries |
| Consumer-driven contract verification (where feasible) | Wrong assumptions about upstream/downstream | Integration & Boundaries |
| API schema backward compatibility check | Schema migration and backward compatibility failures | Data & State |
| Cross-service timeout propagation check | Missing timeout and deadline enforcement across boundaries | Performance & Resilience |
| Circuit breaker and fallback verification | Network partitions and partial failures handled wrong | Dependency & Infrastructure |
| Distributed tracing validation | Missing observability across service boundaries | Testing & Observability Gaps |
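
For the cross-service timeout propagation gate, the property being checked is that each hop forwards its remaining time budget instead of using a fixed per-hop timeout. A sketch of that calling convention, with a hypothetical header name:

```python
import time
import urllib.request

TIMEOUT_HEADER = "X-Timeout-Ms"  # hypothetical header convention

def call_downstream(url: str, deadline: float) -> bytes:
    """deadline is a time.monotonic() timestamp local to this process."""
    remaining = deadline - time.monotonic()
    if remaining <= 0:
        raise TimeoutError("deadline exhausted - skip the downstream call")
    # Forward the *remaining* budget so the callee can budget its own work.
    req = urllib.request.Request(
        url, headers={TIMEOUT_HEADER: str(int(remaining * 1000))}
    )
    with urllib.request.urlopen(req, timeout=remaining) as resp:
        return resp.read()
```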

When This Architecture Works

This architecture is the goal for organizations with:

  • Multiple teams that need different deployment cadences
  • Services with well-defined, stable API boundaries
  • Teams mature enough to own their full delivery pipeline
  • Investment in contract testing tooling and API governance

When This Architecture Fails

  • Shared database schemas: Multiple services can share a database engine without problems. The failure mode is shared schemas - when Service A and Service B both read from and write to the same tables, a schema migration by one service can break the other’s queries. Each service must own its own schema. If two services need the same data, expose it through an API or event, not through direct table access.
  • Synchronous dependency chains: If Service A calls Service B which calls Service C in the request path, a deployment of C can break A through B. Circuit breakers and fallbacks are required at every boundary (a minimal sketch follows this list), and contract tests must cover failure modes, not just success paths.
  • No contract verification discipline: If teams skip backward compatibility checks or let contract test failures slide, breakage shifts from the pipeline to production. The architecture degrades into uncoordinated deployments with production as the integration environment. At minimum, every provider must run automated schema compatibility checks - even without consumer-driven contracts.
  • Missing observability: When services deploy independently, debugging production issues requires distributed tracing, correlated logging, and SLO monitoring across service boundaries. Without this, independent deployment means independent troubleshooting with no way to trace cause and effect.
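
A minimal circuit breaker, as referenced in the synchronous dependency chain item above. Thresholds and the fallback mechanism are illustrative only:

```python
import time

class CircuitBreaker:
    """Open after N consecutive failures; allow a trial call after a cooldown."""

    def __init__(self, max_failures: int = 5, reset_after_s: float = 30.0):
        self.max_failures = max_failures
        self.reset_after_s = reset_after_s
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, fallback=None):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after_s:
                return fallback  # open: shed load, answer from the fallback
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            return fallback
        self.failures = 0  # success closes the circuit again
        return result
```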

Relationship to the Other Architectures

Architecture 3 is where Architecture 2 teams evolve to. The progression is:

  1. Single team, single deployable - one team, one pipeline, one artifact
  2. Multiple teams, single deployable - multiple teams, sub-pipelines, shared integration step
  3. Independent teams, independent deployables - multiple teams, fully independent pipelines, contract-based integration

The move from 2 to 3 happens incrementally. Extract one service at a time. Give it its own pipeline. Establish contract tests between it and the monolith. When the contract tests are reliable, stop running the extracted service’s code through the integration pipeline. Repeat until the integration pipeline is empty.