Eight common component patterns and how to test each fully. Each page covers what to verify, positive and negative cases, double validation, pipeline placement, and a small code example.
Each page in this subsection covers one component pattern. The structure is the same on every page so you can scan-compare:
What needs coverage - the layers of testing the pattern typically benefits from.
Positive test cases - common success behaviors worth testing.
Negative test cases - common failure modes that produce production incidents.
Test double validation - how the doubles in pipeline tests stay honest.
Pipeline placement - where each test type tends to run.
Example - a short code sample illustrating one of the harder cases for that pattern.
These are recommended starting points, not exhaustive lists or required gates. Real components have details these pages don’t capture; ignore items that don’t apply, and add items the pattern doesn’t mention but your component clearly needs. The goal is to prompt the conversation, not to constrain it.
API provider, API consumer, scheduled job, and user interface are covered in depth. Event consumer, event producer, CLI/library, and stateful service are deliberately briefer sketches: the same six principles apply, the same checklist still prompts useful questions, and the test double validation model is the same. Use the briefer sketches as a starting point and expand the depth in your own runbooks for the patterns your services actually use.
The patterns
API provider - a backend service exposing an HTTP/gRPC/GraphQL API and owning its own data.
API consumer - the above, plus outbound calls to other services. The most failure-prone pattern.
Scheduled job - a service triggered on a cron, queue, or external scheduler.
User interface - a UI that renders data and accepts user interaction.
Event consumer - a service that consumes messages from a broker.
Event producer - a service that produces messages to a broker.
CLI tool or library - a binary or package consumed by other developers.
Stateful service - a service whose behavior centers on long-lived persisted state.
1 - API Provider
Layered diagram of an API provider showing four architectural layers stacked top to bottom. The first three are inside the component boundary: HTTP and API surface (covered by component tests and provider contract tests), domain logic (covered by solitary unit, sociable unit, and component tests), and persistence adapter (covered by sociable unit, adapter integration, and component tests). Below the dashed component boundary, the external database is doubled in component tests (in-memory or testcontainer) and used real in adapter integration tests against the production engine.
Positive test cases
Common cases to consider, not an exhaustive list. Drop items that don’t apply and add ones the pattern doesn’t mention but your component needs.
Documented endpoints: return the expected shape and status for valid input.
Auth: succeeds for valid credentials and tokens.
Pagination, filtering, sorting: all return the documented results.
Idempotency: repeating an idempotent operation leaves state unchanged; non-idempotent operations create exactly one record.
Success-path side effects: events emitted and audit log entries happen on the success path.
Negative test cases
Common cases to consider, not an exhaustive list. Drop items that don’t apply and add ones the pattern doesn’t mention but your component needs.
Malformed body: bad JSON, missing required fields, wrong types, extra fields handled per the documented policy (reject vs. ignore).
Out-of-range values: negatives where positives are expected, oversize strings, unicode edge cases.
Auth failures: missing token, expired token, valid token with insufficient scope, valid token for a different tenant.
Authorization boundaries: user A cannot read or modify user B’s resources.
Resource not found: referenced IDs don’t exist, return 404 not 500.
Concurrency: two writes to the same resource at once, optimistic-lock conflict handled with the documented status code.
Persistence failure: DB unavailable, deadlock, constraint violation. The error envelope is correct and no partial state is committed.
Rate limiting and request size limits: both are enforced as documented.
Idempotency under retry: same idempotency key within the window returns the original result, not a duplicate write.
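The idempotency-under-retry case can be sketched concretely. A minimal in-memory registry (IdempotencyRegistry is an illustrative name, not from the text; real services usually back this with a database table) returns the original result for a repeated key inside the window instead of executing the write again:

```javascript
// Hedged sketch: an idempotency-key registry for a POST handler.
class IdempotencyRegistry {
  constructor(windowMs) {
    this.windowMs = windowMs;
    this.entries = new Map(); // key -> { result, storedAt }
  }
  // Returns the original result for a repeated key inside the window,
  // otherwise executes the operation once and records its result.
  execute(key, now, operation) {
    const hit = this.entries.get(key);
    if (hit && now - hit.storedAt < this.windowMs) return hit.result;
    const result = operation();
    this.entries.set(key, { result, storedAt: now });
    return result;
  }
}

let writes = 0;
const registry = new IdempotencyRegistry(60_000);
const createOrder = () => ({ orderId: `ord-${++writes}` });

const first = registry.execute("key-1", 0, createOrder);
const retry = registry.execute("key-1", 1_000, createOrder); // same key, inside window

console.log(retry.orderId === first.orderId, writes); // true 1 - one write, same result
```

The test pins both halves: the repeated key returns the original body, and the write count stays at one.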
Test double validation
Doubles in this pattern are mostly around persistence. Two layers keep them honest:
Adapter integration tests run against a real instance of your production database engine (the same major version, same extensions). If component tests use an in-memory SQLite shim while production runs Postgres, the shim is the lie. The adapter integration test exercises every query and migration against a Postgres testcontainer in CI.
Provider-side contract tests verify the API still satisfies every published consumer expectation. See Consumer and Provider Perspectives. Provider verification is where you discover that a “harmless” field rename broke a consumer before that consumer deploys.
Pipeline placement
Unit + sociable unit tests: pre-commit and CI Stage 1.
Adapter integration tests against testcontainers: CI Stage 1 if fast, Stage 2 otherwise.
Component tests: CI Stage 1.
Provider-side contract verification: CD Stage 1 (Contract and Boundary Validation).
Example: component test
A flow-oriented component test for an order-placement endpoint. The full app is assembled with an in-memory order repository and an in-memory event bus. The test drives the assembled component through its HTTP handlers and asserts on observable outcomes (status, persisted state, emitted event):
import request from "supertest";
import { buildApp } from "./app.js";
import { InMemoryOrderRepo } from "./test/in-memory-order-repo.js";
import { InMemoryEventBus } from "./test/in-memory-event-bus.js";

test("places order with valid payment creates order and emits OrderPlaced", async () => {
  const orderRepo = new InMemoryOrderRepo();
  const events = new InMemoryEventBus();
  const app = buildApp({ orderRepo, events });

  const res = await request(app)
    .post("/orders")
    .set("Authorization", "Bearer tok_valid")
    .send({ items: [{ sku: "A1", qty: 2 }], paymentToken: "pm_ok" });

  expect(res.status).toBe(201);
  expect(orderRepo.findById(res.body.id)).toBeDefined();
  expect(events.published).toContainEqual(
    expect.objectContaining({ type: "OrderPlaced", orderId: res.body.id })
  );
});
The test asserts on what a real caller can observe, not on private methods or call sequences inside the controller.
2 - API Consumer
An API provider that also consumes one or more upstream APIs. The most failure-prone pattern in distributed systems and the one that gets the most testing attention.
Same as API provider, plus outbound HTTP/gRPC calls to services the team does not own (or does own but deploys independently).
Layered diagram of an API consumer with seven architectural layers. The first five (HTTP and API surface, domain logic and orchestration, resilience policy, outbound HTTP client, persistence adapter) are inside the component boundary. Below the dashed boundary, the external database and the external downstream service are drawn with dashed borders. Component tests cover every internal layer including resilience, with both database and downstream service doubled. Adapter integration tests pin the outbound and persistence protocols against real containers. Consumer contract tests pin the outbound boundary. Out-of-band integration tests exercise the real downstream service to confirm doubles still match reality.
Positive test cases
Common cases to consider, not an exhaustive list. Drop items that don’t apply and add ones the pattern doesn’t mention but your component needs.
Outbound call: constructs the right URL, headers, body, auth, and timeout.
Success response: parsed correctly, including optional fields and unknown fields per Postel’s Law.
Multi-call composition: multiple downstream calls in sequence or parallel produce the documented composite response.
Caching: returns the cached value within TTL and refreshes after.
Trace context: propagates downstream.
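The caching case above is often the fiddliest to pin down. A minimal sketch of a TTL cache with an injected clock (TtlCache and the clock shape are illustrative, not from the text) shows the three states a test should cover: first load, hit within TTL, refresh after TTL:

```javascript
// Hedged sketch: a single-slot TTL cache with an injected clock.
class TtlCache {
  constructor(ttlMs, clock) {
    this.ttlMs = ttlMs;
    this.clock = clock;
    this.slot = null; // { value, at } or null
  }
  get(loader) {
    const now = this.clock.now();
    if (this.slot && now - this.slot.at < this.ttlMs) return this.slot.value; // within TTL
    this.slot = { value: loader(), at: now }; // empty or expired: refresh
    return this.slot.value;
  }
}

let t = 0, loads = 0;
const cache = new TtlCache(1000, { now: () => t });
const load = () => `v${++loads}`;

console.log(cache.get(load)); // v1 - first load
t = 500;
console.log(cache.get(load)); // v1 - cached within TTL, loader not called
t = 1500;
console.log(cache.get(load)); // v2 - refreshed after TTL
```

Injecting the clock is what makes the "refreshes after TTL" branch deterministic instead of sleep-based.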
Negative test cases
Common cases to consider, not an exhaustive list. The bulk of the negative testing happens here, and it’s where most production incidents originate. Drive each failure mode through a client double that simulates it.
Timeout (downstream exceeds configured deadline): the deadline is enforced; the upstream caller gets the documented response (e.g., 504); no partial state is committed. Use a client double that delays past the deadline.
Connection refused: the retry policy executes the documented count and backoff, then falls back or returns an error. Use a client double that rejects the connection.
5xx responses (500, 502, 503): retry only on retryable codes. Use a client double that returns 5xx.
4xx responses (400, 401, 403, 404, 409, 422, 429): each maps to documented behavior; 4xx generally not retried; 429 respects Retry-After. Use a client double that returns each code.
Slow response within timeout: performance-budget assertions hold if the service has SLO commitments. Use a client double that delays within the deadline.
Malformed response body: the response is rejected, not silently coerced. Use a client double that returns a truncated or wrong-type body.
Schema drift (extra or missing fields): extra fields tolerated; missing required fields detected with a clear error. Use a client double that returns a drifted body.
Wrong status code (200 with error body, 500 with success body): the client trusts the status code, not the body. Use a client double that returns mismatched status and body.
Circuit open: the circuit opens under sustained failure; fast-fails subsequent calls; recovers on a half-open probe. Use a client double that sustains failures.
Partial multi-call failure: compensation, rollback, or documented partial-success behavior. First client double succeeds, second fails.
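The circuit-open case is worth a concrete shape. A minimal synchronous circuit breaker (CircuitBreaker is an illustrative sketch, not a library API) shows the three behaviors the test pins: opening under sustained failure, fast-failing while open, and recovering on a half-open probe:

```javascript
// Hedged sketch of a minimal circuit breaker with an injected "now" for determinism.
class CircuitBreaker {
  constructor({ failureThreshold, cooldownMs }) {
    this.failureThreshold = failureThreshold;
    this.cooldownMs = cooldownMs;
    this.failures = 0;
    this.state = "closed";
    this.openedAt = 0;
  }
  call(fn, now) {
    if (this.state === "open") {
      if (now - this.openedAt < this.cooldownMs) throw new Error("circuit-open"); // fast-fail
      this.state = "half-open"; // cooldown elapsed: allow one probe
    }
    try {
      const result = fn();
      this.failures = 0;
      this.state = "closed"; // probe (or normal call) succeeded
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.state === "half-open" || this.failures >= this.failureThreshold) {
        this.state = "open";
        this.openedAt = now;
      }
      throw err;
    }
  }
}

const breaker = new CircuitBreaker({ failureThreshold: 3, cooldownMs: 1000 });
const failing = () => { throw new Error("503"); };

for (let i = 0; i < 3; i++) { try { breaker.call(failing, i); } catch {} }
console.log(breaker.state); // open - sustained failure tripped the breaker
try { breaker.call(failing, 10); } catch (e) { console.log(e.message); } // circuit-open - fast-fail
console.log(breaker.call(() => "ok", 2000)); // ok - half-open probe closed the circuit
```

Passing "now" explicitly keeps the cooldown test deterministic; production wiring would supply the real clock.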
Test double validation
This is where the “doubles need tests” rule lives or dies. Four layers:
Consumer-side contract tests run in the pipeline on every commit using doubles. They pin the request the consumer sends and the response shape the consumer depends on. Contract artifacts are published to a broker. Fast, deterministic, blocks the build.
Adapter integration tests exercise the outbound HTTP client against the real dependency in a controlled state - typically a testcontainer running an in-house service the team owns. They verify the adapter code correctly speaks the protocol: serialization, deserialization, header handling, timeout behavior, error mapping. The test asserts the adapter’s correctness, not the dependency’s behavior: if the test asks for a user, it validates that the response parses into a valid User, not which user was returned. For third-party dependencies the team can’t run in a controlled state, run these tests out-of-band on a schedule. WireMock loaded with provider-supplied fixtures is a useful complement but functions more like a contract test against recorded shapes than an integration test against the live protocol.
Provider-side contract verification runs in the provider’s pipeline. The provider executes every consumer’s published contract against the real provider implementation. Breaking changes are caught at the source before the provider deploys.
Post-deploy integration check runs periodically against the real downstream in a non-production environment. Same fixtures used in contract tests. Catches drift in fields the contract didn’t pin, version skew, environment differences. Failures trigger review, not a build break. See Out-of-Pipeline Verification.
For third-party APIs you do not control, there is no provider verification step. The post-deploy check against the live (or sandbox) API is the only mechanism keeping doubles honest. Run it more often than for in-house dependencies. Daily at minimum.
The anti-pattern to avoid: stubbing the third-party SDK directly. Always wrap third-party clients in a thin adapter the team owns, then double the adapter. This is called out explicitly as Mocking what you don’t own and is the single most common source of “but it worked in tests” incidents.
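The thin-adapter shape can be sketched in a few lines. PaymentGateway and FakeSdk are illustrative names, not a real vendor API; the point is that tests double the adapter the team owns, never the SDK itself:

```javascript
// Hedged sketch: a thin adapter owned by the team, wrapping a hypothetical
// third-party payments SDK. FakeSdk stands in for the vendor client here.
class PaymentGateway {
  constructor(sdkClient) { this.sdk = sdkClient; }
  charge({ amountCents, token }) {
    // The vendor's request/response shapes are translated at this one seam.
    const r = this.sdk.createCharge({ amount: amountCents, source: token });
    if (r.status !== "succeeded") throw new Error("PAYMENT_DECLINED");
    return { chargeId: r.id };
  }
}

// Stand-in for the vendor SDK, used only to exercise the adapter in this sketch.
const FakeSdk = {
  createCharge: ({ amount }) =>
    amount > 0 ? { id: "ch_1", status: "succeeded" } : { id: "ch_2", status: "failed" }
};

const gateway = new PaymentGateway(FakeSdk);
console.log(gateway.charge({ amountCents: 4250, token: "tok" }).chargeId); // ch_1
```

In component tests, the double implements PaymentGateway's small interface (one charge method with domain shapes), so the doubled surface is exactly the surface the out-of-band checks validate against the real vendor.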
Pipeline placement
Consumer-side contract tests: pre-commit and CI Stage 1.
Adapter integration tests for the outbound HTTP client against an in-house dependency the team controls (a testcontainer running the team’s own service in a known state): CI Stage 1 or Stage 2.
Adapter integration tests against a third-party API or a service owned by another team: out-of-band on a schedule, never in-band. The risk of a flaky external service blocking deploys outweighs any in-band coverage benefit, and adapter tests with WireMock fixtures already cover the team’s adapter code.
Resilience component tests with fault injection: CI Stage 1.
Post-deploy integration checks against real downstreams: out of pipeline, on a schedule.
Example: fault injection at the client double
A negative-path test for downstream timeout. The payment client double simulates a slow response; the test asserts the deadline is enforced and the upstream caller gets the documented error envelope:
test("returns 504 when payment service exceeds deadline", async () => {
  const slowPayments = {
    charge: () =>
      new Promise((_, reject) => {
        setTimeout(() => reject(new TimeoutError("payments")), 50);
      })
  };
  const orderRepo = new InMemoryOrderRepo();
  const app = buildApp({ orderRepo, payments: slowPayments, deadlineMs: 30 });

  const res = await request(app)
    .post("/orders")
    .set("Authorization", "Bearer tok_valid")
    .send({ items: [{ sku: "A1", qty: 1 }], paymentToken: "pm_ok" });

  expect(res.status).toBe(504);
  expect(res.body.error.code).toBe("UPSTREAM_TIMEOUT");
  expect(orderRepo.all()).toHaveLength(0);
});
The test verifies three things at once: the documented status code, the structured error body the API contract promises, and that no partial state was committed.
3 - Scheduled Job
A service triggered on a cron, queue, or external scheduler. Reads from data sources, writes reports or updates state.
A job that runs on a cron, queue, or external scheduler. Reads from data sources, writes reports or updates state. Often has no inbound API surface. The entrypoint is the scheduler.
This pattern has two test design challenges that the API provider and API consumer patterns don’t have: time and data volume.
Layered diagram of a scheduled job with six architectural layers. The first four (pure transformation logic, job orchestration, source and sink gateways, process startup) are inside the component boundary. Below the dashed boundary, the external source and sink and the external scheduler and system clock are drawn with dashed borders. Solitary unit tests cover pure transformation. Component tests cover orchestration with the clock and gateways doubled. Adapter integration tests pin source and sink protocols against real containers. Deployed-binary tests cover process startup on the actual artifact the scheduler will invoke. Out-of-band integration uses the real scheduler and clock on a schedule.
Process startup matters more here than for an API service, because scheduled jobs typically have non-trivial startup behavior (config loading, secret resolution, lock acquisition) that a component test with the SUT in-memory can bypass. The right shape is many component tests for behavior, plus one or two tests that invoke the actual deployed binary the scheduler will invoke.
Positive test cases
Common cases to consider, not an exhaustive list. Drop items that don’t apply and add ones the pattern doesn’t mention but your component needs.
End-to-end run: with representative input, produces the expected output (report file, database update, message published).
Idempotency: running the job twice for the same logical period produces the same result, not duplicates.
Checkpointing: a job that processes a stream resumes from the last checkpoint, not from scratch.
Time windows: “yesterday’s data” computes correctly for various reference times, especially around DST, month boundaries, and year boundaries.
Empty input: zero records produces a valid empty report, not an error.
Output format: the report or message conforms to the documented schema.
Negative test cases
Common cases to consider, not an exhaustive list. Drop items that don’t apply and add ones the pattern doesn’t mention but your component needs.
Source unavailable: DB down, source API returning 5xx. Verify the job fails cleanly with a documented exit code/status, doesn’t write partial output, and is safely re-runnable.
Sink unavailable: destination DB or message broker rejects writes. Verify no source state changes (e.g., “marked as processed”) happen if the sink fails.
Partial-write failure: half the batch writes successfully, then the connection drops. Verify the next run reprocesses the failed half without duplicating the successful half. This is where idempotency keys, transactional outboxes, or compensating reads earn their keep.
Slow job: the job exceeds its expected runtime. Verify that it raises an alertable signal, that it does not silently overlap with the next scheduled run, and that the lock prevents concurrent execution.
Malformed source data: null where non-null was expected, wrong type, encoding issues. Verify the bad record is logged with enough context to investigate, and the job decides per its policy: skip, dead-letter, or fail the whole run. The choice is design; the test pins it.
Time-zone bugs: the job runs at 02:30 UTC for a “daily” report. What does it do on the day clocks shift? Test it. Use the injected clock so the test deterministically simulates the boundary.
Concurrent run: the previous run hadn’t finished when the next was triggered. Verify the lock prevents overlap or, if overlap is acceptable, that the work is partitioned correctly.
Crash mid-run: kill -9 in the middle of processing. Verify on restart the job resumes from a consistent state.
Schema drift on source: a new field appears or a field changes type. Verify per the contract policy.
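The concurrent-run case usually comes down to a lock. A minimal in-process sketch (RunLock is an illustrative name; real jobs typically take a database row lock or a distributed lease) shows the behavior the test pins:

```javascript
// Hedged sketch: a single-flight run lock preventing overlapping job runs.
class RunLock {
  constructor() { this.holder = null; }
  tryAcquire(runId) {
    if (this.holder !== null) return false; // previous run still in progress
    this.holder = runId;
    return true;
  }
  release(runId) {
    if (this.holder === runId) this.holder = null; // only the holder may release
  }
}

const lock = new RunLock();
console.log(lock.tryAcquire("run-1")); // true  - first run takes the lock
console.log(lock.tryAcquire("run-2")); // false - overlapping trigger is refused
lock.release("run-1");
console.log(lock.tryAcquire("run-2")); // true  - next run proceeds after release
```

The component test drives two triggers through the assembled job and asserts the second is refused (or queued, per the documented policy) rather than running concurrently.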
Test double validation
Three classes of doubles need validation, each through a different mechanism:
The injected clock. Every in-band test that depends on “now” uses an injected clock. Validate it with one out-of-band check that runs against the real system clock, exercises a known time-window calculation, and confirms the production wiring of the clock dependency is correct. This catches the “tests use UTC, prod uses container local time” class of bug.
Source and sink gateways. Same model as the API consumer pattern. Adapter integration tests in the pipeline exercise each gateway against a real source/sink container or WireMock. Contract tests pin the shape. Post-deploy integration checks confirm the doubles still match the real systems on a schedule.
The scheduler trigger. The doubled trigger in component tests must match what the real scheduler invokes. Verify with a post-deploy integration check that runs the real scheduler against a deployed instance in a non-prod environment and confirms the entrypoint is found, the cron expression fires at the expected times, environment variables and secrets resolve, and the concurrency policy holds. This is the test that catches “passed in CI, didn’t run in prod because the cron expression had a typo.”
Pipeline placement
Unit and component tests: CI Stage 1.
Adapter integration tests for the source and sink adapters: CI Stage 1 or Stage 2.
Contract tests for each source and sink: CI Stage 1.
Component tests of the deployed binary (small set): CI Stage 1 or Stage 2.
Real-clock and real-scheduler integration check: out of pipeline, scheduled, against a non-prod environment.
Post-deploy: a synthetic invocation of the job in production that verifies it ran, processed records, and met its SLO.
Example: time-window logic with an injected clock
A test that pins the daily-report window calculation around a DST boundary. The clock is injected so the test deterministically simulates the moment of interest; source and sink are in-memory fakes set up in the test module, seeded with data for 2026-03-08 and 2026-03-09.
test("daily report run after DST spring forward uses correct window", () => {
  const fixedClock = { now: () => new Date("2026-03-09T07:30:00Z") };
  const job = new ReportJob({ clock: fixedClock, source, sink });

  job.run();

  const emitted = sink.lastReport();
  expect(emitted.windowStart).toEqual(new Date("2026-03-08T05:00:00Z"));
  expect(emitted.windowEnd).toEqual(new Date("2026-03-09T05:00:00Z"));
  expect(emitted.recordsProcessed).toBe(source.recordsForDay("2026-03-08"));
});
A separate out-of-band check runs the deployed binary against the real system clock once, to verify the production wiring of the clock dependency matches the doubled clock used here.
4 - User Interface
A UI that renders data and accepts user interaction. Talks to one or more backend APIs.
Layered diagram of a user interface with five architectural layers. The first four (pure rendering, component composition, feature behavior in the rendered DOM, backend HTTP client) are inside the component boundary. Below the dashed boundary, the external backend API is drawn with a dashed border. Solitary unit tests cover pure rendering. Sociable unit tests cover composition. Component tests driven by Playwright cover feature behavior with the backend doubled at the network layer. Consumer contract tests pin each backend boundary. End-to-end tests run post-deploy against the real backend.
UI component tests run in a real browser engine (Chromium, Firefox, WebKit) driven by Playwright, with the team’s existing unit-testing framework (Vitest, Jest, or whatever is already in the project) as the runner. In-memory renderer shortcuts like JSDOM are rejected: they trade accuracy for speed and produce false greens around layout, focus, event timing, Intersection Observer, and animations - exactly the surface where UI bugs live. Playwright’s headless Chromium starts in milliseconds and runs the suite fast enough to use as the default. Backends are stubbed at the network layer with page.route so the same fixtures drive component tests today and end-to-end smoke tests later.
Positive test cases
Common cases to consider, not an exhaustive list. Drop items that don’t apply and add ones the pattern doesn’t mention but your component needs.
Critical flows: a user can complete each documented critical flow via keyboard and via mouse.
Forms: accept valid input, submit, and show success.
Loading states: render while the backend is in flight.
Empty, populated, and overflow states: all render correctly.
Internationalization: the UI renders with longer translations and right-to-left scripts.
Responsive layouts: render at the documented breakpoints.
Negative test cases
Common cases to consider, not an exhaustive list. Drop items that don’t apply and add ones the pattern doesn’t mention but your component needs.
Backend errors: for every API call the UI makes, what does the user see for 4xx, 5xx, network failure, timeout? Test each. The most common UI bug is “spins forever on error.”
Form validation: required fields, format errors, length limits, cross-field rules. Each shows a specific, actionable message that’s announced to screen readers.
Authentication expiry: token expires mid-session. Verify the user is sent through the documented re-auth flow, not silently dropped.
Permission denied: the user navigates to a page they cannot access. Verify the documented response (redirect, “not authorized,” etc.).
Stale data: a list rendered, then a delete on another tab, then the user clicks the deleted item. Verify the documented refresh or error behavior.
Slow network: every interaction has a documented behavior at 3G speeds. Verify with throttled fixtures.
Concurrent edit: two users editing the same record. Verify the optimistic-lock UX behaves as documented.
Browser back button: the back button is a public interface. Test it.
Accessibility violations: automated WCAG scan in component tests catches missing labels, contrast failures, ARIA misuse on every commit. Don’t defer to quarterly audits.
Test double validation
Backend doubles in component tests must match the real backends. Same mechanism as the API consumer pattern: the UI is a consumer, every backend it talks to is a provider. Consumer-driven contracts run on every commit; provider verification runs in the backend’s pipeline. Post-deploy E2E smoke tests against the real backend close the loop on drift the contract didn’t pin.
Because UI component tests run in a real browser engine, there is no renderer-level double to validate. The browser is the production renderer, just headless. The remaining gap is between the stubbed backend and the real backend, which the out-of-band E2E suite covers. Out-of-band failures trigger review, not a build break.
Pipeline placement
Component tests in headless browser (including a11y assertions): CI Stage 1.
Visual regression: CI Stage 1 if fast, CI Stage 2 if slow.
Consumer-side contract tests for each backend: CI Stage 1.
E2E happy-path smoke tests against real backends: post-deploy, in a production-like environment, blocking the rollout but not the build.
Real user monitoring + synthetic transactions: continuously in production.
Example: UI component test for an error path
A flow-oriented test for the checkout error path. Playwright drives a headless browser; the backend is stubbed at the network layer with page.route; the team's existing unit-testing framework (Vitest, JUnit, xUnit) runs the test. The assertion: the user sees a documented error message and the spinner does not get stuck. The same test is shown in two variants: first C# with xUnit, then JavaScript with Vitest.
[Fact]
public async Task Shows_error_and_clears_spinner_when_checkout_fails_with_500()
{
    using var playwright = await Playwright.CreateAsync();
    await using var browser = await playwright.Chromium.LaunchAsync();
    var page = await browser.NewPageAsync();

    await page.RouteAsync("**/api/checkout", route => route.FulfillAsync(new()
    {
        Status = 500,
        ContentType = "application/json",
        Body = "{\"error\":{\"code\":\"INTERNAL\"}}"
    }));

    await page.GotoAsync("http://localhost:3000/checkout");
    await page.GetByRole(AriaRole.Button, new() { Name = "Place order" }).ClickAsync();

    await Expect(page.GetByRole(AriaRole.Alert)).ToContainTextAsync("Something went wrong, please try again");
    await Expect(page.GetByRole(AriaRole.Status)).Not.ToBeVisibleAsync();
}
import { test, expect, beforeAll, afterAll } from "vitest";
import { chromium } from "playwright";

let browser;
beforeAll(async () => { browser = await chromium.launch(); });
afterAll(async () => { await browser.close(); });

test("shows error and clears spinner when checkout fails with 500", async () => {
  const page = await browser.newPage();
  await page.route("**/api/checkout", route =>
    route.fulfill({
      status: 500,
      contentType: "application/json",
      body: JSON.stringify({ error: { code: "INTERNAL" } }),
    }));

  await page.goto("http://localhost:3000/checkout");
  await page.getByRole("button", { name: /place order/i }).click();

  await expect(page.getByRole("alert")).toContainText(/something went wrong, please try again/i);
  await expect(page.getByRole("status")).not.toBeVisible();
});
The test exercises the rendered DOM the way a real user would. Intercepting at the network layer with page.route keeps the same fixtures reusable when the component test gets promoted to an end-to-end smoke test against the real backend.
5 - Event Consumer
A service that consumes messages from a broker (Kafka, SQS, RabbitMQ, Pub/Sub). Brief sketch.
A consumer of messages from Kafka, SQS, RabbitMQ, Pub/Sub, or similar. Reads messages, processes them, often updates state and produces downstream messages. The “public interface” is the topic or queue and the schema of messages on it.
This pattern has problems the API provider and API consumer patterns don’t have: ordering, replay, poison messages, dead-letter queues, and delivery semantics (at-most-once, at-least-once, exactly-once-with-effort).
Layered diagram of an event consumer with six architectural layers. The first five (message handler logic, idempotency and ordering, dead-letter and poison-message handling, backpressure, broker client) are inside the component boundary. Below the dashed boundary, the external broker and schema registry are drawn with a dashed border. Solitary unit tests cover handler logic. Component tests cover idempotency, dead-letter handling, ordering, and backpressure with the broker doubled. Adapter integration tests pin the broker protocol against a real broker container. Broker contract tests pin the topic, schema, and headers. Out-of-band synthetic publish confirms the doubles still match the real broker.
Positive test cases
Common cases to consider, not an exhaustive list. Drop items that don’t apply and add ones the pattern doesn’t mention but your component needs.
Well-formed message: produces the expected state change and the documented downstream events.
Batch processing: processes per documented policy.
Replay from offset: reproduces the same end state.
Documented schema versions: are accepted.
Negative test cases
Common cases to consider, not an exhaustive list. Drop items that don’t apply and add ones the pattern doesn’t mention but your component needs.
Malformed message: routes to the DLQ with a correlation ID; the consumer survives.
Duplicate delivery: absorbed by idempotency.
Out-of-order delivery: follows the documented behavior.
Mid-batch downstream failure: the offset is left uncommitted.
Schema-version skew: handled per the documented policy.
Slow downstream: applies backpressure rather than OOM.
Consumer-group rebalance during processing: no in-flight messages are stranded.
Test double validation
The broker double in component tests is validated by adapter integration tests against a real broker container the team controls (Kafka in Docker, ElasticMQ for SQS, Redpanda in Docker). The test exercises the broker client adapter against that controlled instance and asserts the adapter speaks the protocol correctly - it does not assert anything about which messages the broker returns or in what order; that is the broker's behavior, not the adapter's. The schema-registry double is validated by contract tests pinning each version, plus a post-deploy check against the real registry. A post-deploy synthetic publishes a known message to the real topic in a non-prod environment.
Pipeline placement
Handler unit tests and component tests run in CI Stage 1; adapter integration tests against a team-controlled broker container in CI Stage 1 or Stage 2; adapter integration tests against a managed broker the team can’t pin to a known state run out-of-band on a schedule, alongside the post-deploy synthetic.
Example: idempotency under duplicate delivery
Money.usd takes minor units (cents); 4250 represents $42.50.
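The example code itself is not reproduced here; the following is a minimal sketch of the idea. SettlementHandler and the Money helper are illustrative names, and the idempotency store is an in-memory set keyed by message ID:

```javascript
// Hedged sketch: a consumer handler that absorbs duplicate delivery via an
// idempotency store. Money.usd takes minor units (cents).
const Money = { usd: (cents) => ({ currency: "USD", cents }) };

class SettlementHandler {
  constructor() {
    this.processed = new Set();  // idempotency store: messageIds already handled
    this.balances = new Map();   // accountId -> cents credited
  }
  handle(message) {
    if (this.processed.has(message.messageId)) return "duplicate-ignored";
    const prior = this.balances.get(message.accountId) ?? 0;
    this.balances.set(message.accountId, prior + message.amount.cents);
    this.processed.add(message.messageId);
    return "applied";
  }
}

const handler = new SettlementHandler();
const msg = { messageId: "m-1", accountId: "acct-9", amount: Money.usd(4250) }; // $42.50

handler.handle(msg); // first delivery applies the credit
handler.handle(msg); // at-least-once redelivery of the same messageId is absorbed

console.log(handler.balances.get("acct-9")); // 4250, not 8500
```

The test pins the delivery semantic the broker actually provides: under at-least-once delivery, the second handle call must change nothing.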
6 - Event Producer
A service that produces messages to a broker. Often paired with the event consumer pattern in the same service. Brief sketch.
The producer side, often paired with the Event consumer pattern in the same service. After a state change, the service publishes a message that downstream consumers depend on.
The hard problems differ from the consumer side: atomicity with persistence (did the DB row commit and the message publish?), exactly-once semantics that require an outbox or two-phase commit, and downstream consumer dependence on schema, routing key, and headers.
Layered diagram of an event producer with five architectural layers. The first three (domain emit decision, outbox or transactional emit, broker client) are inside the component boundary. Below the dashed boundary, the external broker and the database used by the outbox are drawn with dashed borders. Solitary unit tests cover the emit decision logic. Component tests cover outbox atomicity, retry on broker unavailable, and trace propagation, run with a real database and a doubled broker. Adapter integration pins the broker protocol against a real broker container. Provider contract verification runs against every consumer's published expectations. Out-of-band synthetic state change confirms the message arrives in the real broker.
Positive test cases
Common cases to consider, not an exhaustive list. Drop items that don’t apply and add ones the pattern doesn’t mention but your component needs.
State change: produces the correct message on the correct topic with the correct routing key, headers, and schema version.
Outbox drain: pending messages emit in commit order.
Redelivery: publish retries do not reorder messages.
Negative test cases
Common cases to consider, not an exhaustive list. Drop items that don’t apply and add ones the pattern doesn’t mention but your component needs.
DB commits but broker fails: the message stays in the outbox and emits on the next drain. No event lost.
Broker accepts but DB rolls back: nothing is emitted. No phantom events.
Broker unavailable for an extended period: the outbox accumulates with bounded growth and alerts at a threshold.
Breaking schema change: fails provider-side contract verification before shipping.
Test double validation
The broker double in component tests is validated against a real broker container the team controls in adapter integration tests. The test asserts the adapter publishes with the right routing key, headers, and serialization - it does not assert which messages downstream consumers happen to read or in what order; those are downstream concerns. Provider-side contract verification runs in this service’s pipeline against every consumer’s published expectations.
Pipeline placement
Outbox component tests and routing tests run in CI Stage 1; adapter integration tests against a team-controlled broker container in CI Stage 1 or Stage 2; adapter integration tests against a managed broker the team can’t pin run out-of-band on a schedule. Provider-side contract verification runs in CD Stage 1; a post-deploy synthetic state change verifies the message arrives with the expected shape.
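Example: outbox atomicity when the broker fails
A sketch of the first negative case, with an in-memory list standing in for the real outbox table and a hypothetical flaky broker double; in production the row insert and the outbox insert share one database transaction.

```python
class BrokerDown(Exception):
    pass


class FlakyBroker:
    """Test double: refuses publishes until told to recover."""

    def __init__(self) -> None:
        self.up = False
        self.published: list[dict] = []

    def publish(self, message: dict) -> None:
        if not self.up:
            raise BrokerDown()
        self.published.append(message)


class Service:
    """State change and outbox insert commit together; a drainer publishes later."""

    def __init__(self, broker: FlakyBroker) -> None:
        self.rows: list[dict] = []    # stands in for the real DB table
        self.outbox: list[dict] = []  # same transaction as self.rows in production
        self.broker = broker

    def change_state(self, payload: dict) -> None:
        # One transaction: the row and the outbox entry commit or roll back together.
        self.rows.append(payload)
        self.outbox.append({"topic": "orders", "payload": payload})

    def drain_outbox(self) -> None:
        remaining = []
        for msg in self.outbox:
            try:
                self.broker.publish(msg)
            except BrokerDown:
                remaining.append(msg)  # keep for the next drain; nothing is lost
        self.outbox = remaining


broker = FlakyBroker()
svc = Service(broker)
svc.change_state({"order_id": 1})
svc.drain_outbox()  # broker down: message stays queued
assert len(svc.outbox) == 1 and broker.published == []
broker.up = True
svc.drain_outbox()  # next drain emits it
assert svc.outbox == [] and len(broker.published) == 1
```

The mirror-image case (broker accepts but DB rolls back) falls out of the same structure: since the publish only ever reads committed outbox rows, a rolled-back transaction leaves nothing for the drainer to emit.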
7 - CLI Tool or Library
A binary or package consumed by other developers. The public interface is the CLI invocation surface or the library’s exported API. Brief sketch.
A binary (CLI) or package (library) consumed by other developers. The “public interface” is the CLI invocation surface (argv, stdin, stdout, stderr, exit code) or the library’s exported API.
The pattern is different because the consumer is a developer or another program, not a user clicking a button. Cross-platform behavior, semantic versioning, and backward compatibility matter more than they do for a service.
Layered diagram of a CLI tool or library with five architectural layers. The first four (pure logic and parsing, CLI invocation surface or library API, file system and subprocess adapter, documented README examples) are inside the component boundary. Below the dashed boundary, the real OS, file system, and subprocess are drawn with a dashed border. Solitary unit tests cover pure logic and parsing. Component tests cover invocation through the entrypoint. Adapter integration tests cover the file system and subprocess against the real OS in a temp directory. The API surface diff catches removal or rename of any public symbol. Doctests verify README examples run against the real binary or library. The cross-OS CI matrix runs the suite on every supported OS to catch platform-specific bugs.
Positive test cases
Common cases to consider, not an exhaustive list. Drop items that don’t apply and add ones the pattern doesn’t mention but your component needs.
Valid arguments: produce documented stdout output, no stderr, and exit code 0.
Pipe-friendly mode: produces machine-readable output (JSON/NDJSON) when stdout is not a TTY.
Library API: returns documented values for valid input.
Negative test cases
Common cases to consider, not an exhaustive list. Drop items that don’t apply and add ones the pattern doesn’t mention but your component needs.
Bad arguments: exit with the documented non-zero code and structured stderr.
Help text: reachable via --help.
Large input: does not OOM.
Interrupt (Ctrl-C, SIGTERM): runs cleanup and flushes or rolls back partial output.
Invalid arguments to the library: throws the documented error type.
Public symbol removed or renamed: the API-surface test fails the build.
Test double validation
File system doubles validated by integration tests against the real FS in a temp directory. Subprocess doubles validated by tests that actually spawn the subprocess on each supported OS. Doctests validate README examples against the real binary or library on every build.
Pipeline placement
Unit and component tests run in CI Stage 1 on every supported OS; API surface diff and doctests in CI Stage 1; cross-platform integration tests in CI Stage 2 if slow.
8 - Stateful Service
A service that maintains long-lived in-memory state: caches, in-memory aggregates, leader-elected coordinators, websocket gateways, real-time engines. Brief sketch.
A service that maintains long-lived in-memory state: caches, in-memory aggregates, leader-elected coordinators, websocket gateways, real-time engines, sticky-session servers.
The hard problems are concurrency, recovery, and unbounded growth. Stateful services fail in ways stateless services do not.
Layered diagram of a stateful service with six architectural layers. The first five (state machine logic, persistence and recovery, single-node concurrency, replication and leader election, memory bounds and long-run behavior) are inside the component boundary. Below the dashed boundary, the persistence engine is drawn with a dashed border. Solitary unit tests cover state transitions. Component tests cover persistence, recovery, and single-node concurrency. Cluster tests exercise replication and leader election against a multi-node testcontainer setup. Out-of-band soak and chaos tests catch unbounded growth, slow leaks, and replication-lag drift against a deployed instance.
Positive test cases
Common cases to consider, not an exhaustive list. Drop items that don’t apply and add ones the pattern doesn’t mention but your component needs.
State transitions: follow the documented machine.
Restart: state rebuilds and behavior matches pre-restart.
Replication lag under expected load: stays within budget.
Negative test cases
Common cases to consider, not an exhaustive list. Drop items that don’t apply and add ones the pattern doesn’t mention but your component needs.
Crash mid-write: consistent state on restart. No torn writes.
Network partition: minority replicas step down with documented reconciliation on heal.
Slow replication: applies backpressure rather than silent divergence.
Memory pressure: evicts oldest entries per policy without OOM.
Idle long-running connections: close cleanly with documented reconnect behavior.
Concurrent state mutations: serialize without lost updates.
Test double validation
Persistence doubles validated by adapter integration tests against the real production engine. Consensus library doubles validated by cluster tests against a multi-node testcontainer setup. Soak tests run out of pipeline against a deployed instance to catch slow leaks and unbounded growth.
Pipeline placement
State machine unit tests, recovery component tests, and single-node concurrency tests run in CI Stage 1; cluster tests with real consensus library in CI Stage 2; soak and chaos tests out of pipeline.
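Example: concurrent mutations without lost updates
A sketch of the lost-update case on a single node, with a counter standing in for the service's in-memory state; the lock is the serialization mechanism under test.

```python
import threading


class Counter:
    """Minimal stateful service: a lock serializes read-modify-write mutations."""

    def __init__(self) -> None:
        self._value = 0
        self._lock = threading.Lock()

    def increment(self) -> None:
        with self._lock:  # without this, interleaved read-modify-writes lose updates
            current = self._value
            self._value = current + 1

    @property
    def value(self) -> int:
        return self._value


def test_concurrent_increments_lose_nothing() -> None:
    counter = Counter()

    def worker() -> None:
        for _ in range(1000):
            counter.increment()

    threads = [threading.Thread(target=worker) for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    assert counter.value == 8000  # every update applied; none lost
```

Note the concurrency-test caveat: a passing run proves little on its own (races are probabilistic), so the value of this test is that it fails loudly and repeatably once the lock is removed.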