Pipeline Reference Architecture
Pipeline reference architectures for single-team, multi-team, and distributed service delivery, with quality gates sequenced by defect detection priority.
This section defines quality gates sequenced by defect detection priority and three
pipeline patterns that apply them. Quality gates are derived from the
Systemic Defect Fixes catalog and sequenced so the cheapest, fastest
checks run first.
Gates marked with [Pre-Feature] must be in place and passing before any new feature
work begins. They form the baseline safety net that every commit runs through. Adding
features without these gates means defects accumulate faster than the team can detect them.
Quality Gates in Priority Sequence
The gate sequence follows a single principle: fail fast, fail cheap. Gates that catch
the most common defects with the least execution time run first. Each gate listed below
maps to one or more defect sources from the catalog.
Pre-commit Gates
These run on the developer’s machine before code leaves the workstation. They provide
sub-second to sub-minute feedback.
| Gate | Defect Sources Addressed | Catalog Section | Pre-Feature |
|------|--------------------------|-----------------|-------------|
| Linting and formatting | Code style consistency, preventable review noise | Process & Deployment | Required |
| Static type checking | Null/missing data assumptions, type mismatches | Data & State | Required |
| Secret scanning | Secrets committed to source control | Security & Compliance | Required |
| SAST (injection patterns) | Injection vulnerabilities, taint analysis | Security & Compliance | Required |
| Race condition detection | Race conditions (thread sanitizers, where the language supports them) | Integration & Boundaries | |
| Accessibility linting | Missing alt text, ARIA violations, contrast failures | Product & Discovery | |
| Timeout enforcement checks | Missing timeout and deadline enforcement | Performance & Resilience | |
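Wired into a git hook, these gates can run as a single fail-fast script. A minimal sketch in Python, assuming illustrative tools (ruff, mypy, gitleaks, semgrep) stand in for your stack's linter, type checker, secret scanner, and SAST:

```python
#!/usr/bin/env python3
"""Pre-commit hook sketch: run the cheapest gates first, stop at the first
failure. Tool choices below are assumptions -- substitute your own stack."""
import subprocess
import sys

# Ordered cheapest-first so the hook fails fast and fails cheap.
GATES = [
    ("lint + format", ["ruff", "check", "."]),
    ("static types", ["mypy", "src/"]),
    ("secret scan", ["gitleaks", "protect", "--staged"]),
    ("SAST (injection)", ["semgrep", "--config", "p/owasp-top-ten", "--error"]),
]

for name, cmd in GATES:
    result = subprocess.run(cmd)
    if result.returncode != 0:
        print(f"pre-commit gate failed: {name}", file=sys.stderr)
        sys.exit(result.returncode)  # block the commit
```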
CI Stage 1: Build and Fast Tests (< 5 min)
These run on every commit to trunk.
CI Stage 2: Integration and Contract Tests (< 10 min)
These validate boundaries between components.
| Gate | Defect Sources Addressed | Catalog Section | Pre-Feature |
|------|--------------------------|-----------------|-------------|
| Contract tests | Interface mismatches, wrong assumptions about upstream/downstream | Integration & Boundaries | Required |
| Schema migration validation | Schema migration and backward compatibility failures | Data & State | Required |
| Infrastructure-as-code drift detection | Configuration drift, environment differences | Dependency & Infrastructure | |
| Environment parity checks | Test environments not reflecting production | Testing & Observability Gaps | |
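A schema migration gate can be as simple as scanning pending migrations for statements that break the previous release. A minimal sketch, assuming a migrations/pending directory of SQL files (adapt the layout and patterns to Flyway, Alembic, or whatever migration tool you use):

```python
"""Sketch of a schema-migration gate: reject statements that break readers
or writers still running the previous release."""
import pathlib
import re
import sys

# Statements that are backward-incompatible with the running release.
BREAKING = [
    re.compile(r"\bDROP\s+(TABLE|COLUMN)\b", re.I),
    re.compile(r"\bRENAME\s+(TO|COLUMN)\b", re.I),
    re.compile(r"\bALTER\s+COLUMN\b.*\bNOT\s+NULL\b", re.I),
]

failures = []
for migration in sorted(pathlib.Path("migrations/pending").glob("*.sql")):
    sql = migration.read_text()
    for pattern in BREAKING:
        if pattern.search(sql):
            failures.append(f"{migration.name}: matches {pattern.pattern}")

if failures:
    print("Backward-incompatible migration; use expand-then-contract instead:")
    print("\n".join(failures))
    sys.exit(1)
```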
CI Stage 3: Broader Automated Verification (< 15 min)
These run in parallel where possible.
Acceptance Tests (< 20 min)
These validate user-facing behavior in a production-like environment.
| Gate | Defect Sources Addressed | Catalog Section | Pre-Feature |
|------|--------------------------|-----------------|-------------|
| Functional acceptance tests | Building the wrong thing, meets spec but misses intent | Product & Discovery | |
| Load and capacity tests | Unknown capacity limits, slow response times | Performance & Resilience | |
| Chaos and resilience tests | Network partition handling, missing graceful degradation | Performance & Resilience | |
| Cache invalidation verification | Cache invalidation errors | Data & State | |
| Feature interaction tests | Unanticipated feature interactions | Change & Complexity | |
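As an illustration of the cache invalidation gate, a sketch of an acceptance test that writes through the API and asserts the next read is not stale (the base URL and /products endpoint are hypothetical):

```python
"""Cache-invalidation acceptance test sketch: a write through the public
API must invalidate any cached copy before the next read."""
import requests

BASE_URL = "https://staging.example.com"  # hypothetical production-like env

def test_update_invalidates_cached_read():
    # Prime the cache with the current value.
    before = requests.get(f"{BASE_URL}/products/42").json()

    # Mutate through the public API, which must invalidate the cache.
    update = requests.put(
        f"{BASE_URL}/products/42", json={"price": before["price"] + 1}
    )
    update.raise_for_status()

    # A read immediately afterwards must reflect the write.
    after = requests.get(f"{BASE_URL}/products/42").json()
    assert after["price"] == before["price"] + 1, "stale cache served after write"
```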
Production Verification
These run during and after deployment. They are not optional - they close the feedback loop.
| Gate | Defect Sources Addressed | Catalog Section | Pre-Feature |
|------|--------------------------|-----------------|-------------|
| Health checks with auto-rollback | Inadequate rollback capability | Process & Deployment | |
| Canary or progressive deployment | Batching too many changes per release | Process & Deployment | |
| Real user monitoring and SLO checks | Slow user-facing response times, product-market misalignment | Performance & Resilience | |
| Structured audit logging verification | Missing audit trails | Security & Compliance | |
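A sketch of the health-check-with-auto-rollback gate: poll an SLO metric during the canary window and roll back on breach. The fetch_error_rate() function and the deployctl rollback command are placeholders for your monitoring and deployment tooling:

```python
"""Post-deploy verification loop sketch: watch SLO metrics during a canary
window and roll back automatically on breach."""
import subprocess
import time

ERROR_RATE_SLO = 0.01        # at most 1% of requests may fail
CANARY_WINDOW_SECONDS = 600
POLL_INTERVAL_SECONDS = 30

def fetch_error_rate() -> float:
    """Placeholder: query your metrics backend for the canary's error rate."""
    raise NotImplementedError

deadline = time.monotonic() + CANARY_WINDOW_SECONDS
while time.monotonic() < deadline:
    if fetch_error_rate() > ERROR_RATE_SLO:
        # Breach: roll back immediately rather than paging a human first.
        subprocess.run(["deployctl", "rollback", "--to=previous"], check=True)
        raise SystemExit("canary breached SLO; rolled back")
    time.sleep(POLL_INTERVAL_SECONDS)

print("canary healthy; promoting to full rollout")
```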
Pre-Feature Baseline
These gates must be active before starting feature work.
Without these gates passing on every commit to trunk, defects accumulate faster than the
team can detect them. If any are missing, add them before writing new features. The
Foundations phase covers how to establish
this baseline.
- Linting and formatting
- Static type checking
- Secret scanning
- SAST for injection patterns
- Compilation / build
- Unit tests
- Dependency vulnerability scan
- Contract tests at every integration boundary
- Schema migration validation
Pipeline Patterns
These three patterns apply the quality gates above to progressively more complex team
and deployment topologies. Most organizations start with Pattern 1 and evolve toward
Pattern 3 as team count and deployment independence requirements grow.
- Single Team, Single Deployable - one team owns one
modular monolith with a linear pipeline
- Multiple Teams, Single Deployable - multiple teams own
sub-domain modules within a shared modular monolith, each with its own sub-pipeline
feeding a thin integration pipeline
- Independent Teams, Independent Deployables - each team
owns an independently deployable service with its own full pipeline and API contract
verification
Mapping to the Defect Sources Catalog
Each quality gate above is derived from the Systemic Defect Fixes
catalog. The catalog organizes defects by origin - product and discovery, integration,
knowledge, change and complexity, testing gaps, process, data, dependencies, security, and
performance. The pipeline gates are the automated enforcement points for the systemic
prevention strategies described in the catalog.
When adding or removing gates, consult the catalog to ensure that no defect category loses
its detection point. A gate that seems redundant may be the only automated check for a
specific defect source.
Further Reading
For a deeper treatment of pipeline design, stage sequencing, and deployment strategies, see
Dave Farley’s
Continuous Delivery Pipelines, which covers pipeline
architecture patterns in detail.
1 - Single Team, Single Deployable
A linear pipeline pattern for a single team owning a modular monolith.
This architecture suits a team of up to 8-10 people owning a
modular monolith - a single deployable
application with well-defined internal module boundaries. The codebase is organized by
domain, not by technical layer. Each module encapsulates its own data, logic, and
interfaces, communicating with other modules through explicit internal APIs. The
application deploys as one unit, but its internal structure makes it possible to reason
about, test, and change one module without understanding the entire codebase. The pipeline
is linear with parallel stages where dependencies allow.
Legend: Pre-Feature Gate, CI Stage, Parallel Verification, Acceptance, Production
```mermaid
graph TD
    classDef prefeature fill:#0d7a32,stroke:#0a6128,color:#fff
    classDef ci fill:#224968,stroke:#1a3a54,color:#fff
    classDef parallel fill:#30648e,stroke:#224968,color:#fff
    classDef accept fill:#6c757d,stroke:#565e64,color:#fff
    classDef prod fill:#a63123,stroke:#8a2518,color:#fff
    A["Pre-commit Gates<br/><small>Lint, Types, Secrets, SAST</small>"]:::prefeature
    B["Build + Unit Tests"]:::prefeature
    C["Contract + Schema Tests"]:::prefeature
    D["Security Scans"]:::parallel
    E["Performance Benchmarks"]:::parallel
    F["Acceptance Tests<br/><small>Production-Like Env</small>"]:::accept
    G["Create Immutable Artifact"]:::ci
    H["Deploy Canary / Progressive"]:::prod
    I["Health Checks + SLO Monitors<br/>Auto-Rollback"]:::prod
    A -->|"commit to trunk"| B
    B --> C
    C --> D & E
    D --> F
    E --> F
    F --> G
    G --> H
    H --> I
```
Key Characteristics
- One pipeline, one artifact: The entire application builds and deploys as a single
immutable artifact. There is no fan-out or fan-in.
- Linear with parallel branches: Security scans and performance benchmarks run in
parallel because neither depends on the other. Everything else is sequential.
- Trunk-based development: All developers commit to trunk at least daily. The pipeline
runs on every commit.
- Total target time: Under 15 minutes from commit to production-ready artifact.
Acceptance tests may extend this to 20 minutes for complex applications.
- Ownership: The team owns the pipeline definition, which lives in the same repository
as the application code.
When This Architecture Breaks Down
This architecture stops working when:
- The system becomes too large for a single team to manage
- Build and test times exceed their targets even after optimization, slowing feedback
- Different parts of the application need different deployment cadences
When these symptoms appear, consider splitting into the
multi-team architecture or decomposing the application into
independently deployable services with their
own pipelines.
2 - Multiple Teams, Single Deployable
A sub-pipeline pattern for multiple teams contributing domain modules to a shared modular monolith.
This architecture suits organizations where multiple teams contribute to a single
deployable modular monolith - a common
pattern for large applications, mobile apps, or platforms where the final artifact must
be assembled from team contributions.
The modular monolith structure is what makes multi-team ownership possible. Each team
owns a specific module representing a bounded sub-domain of the application. Team A
might own checkout and payments, Team B owns inventory and fulfillment, Team C owns
user accounts and authentication. Modules communicate through explicit internal APIs,
not by reaching into each other’s database tables or calling private methods. Each
team’s sub-pipeline validates only their module. A shared integration pipeline assembles
and verifies the combined result.
This ownership model is critical. Without clear module boundaries, teams step on each
other’s code, sub-pipelines trigger on unrelated changes, and merge conflicts replace
pipeline contention as the bottleneck. The module split must follow the application’s
domain boundaries, not its technical layers. A team that owns “the database layer” or
“the API controllers” will always be coupled to every other team. A team that owns
“payments” can change its database, API, and UI independently. If the codebase is not
yet structured as a modular monolith, restructure it before adopting this architecture -
otherwise the sub-pipelines will constantly interfere with each other.
```mermaid
graph TD
    classDef prefeature fill:#0d7a32,stroke:#0a6128,color:#fff
    classDef team fill:#224968,stroke:#1a3a54,color:#fff
    classDef integration fill:#30648e,stroke:#224968,color:#fff
    classDef prod fill:#a63123,stroke:#8a2518,color:#fff
    subgraph teamA ["Payments Sub-Domain (Team A)"]
        A1["Pre-commit Gates"]:::prefeature
        A2["Build + Unit Tests"]:::prefeature
        A3["Contract Tests"]:::prefeature
        A4["Security + Perf"]:::team
        A1 --> A2 --> A3 --> A4
    end
    subgraph teamB ["Inventory Sub-Domain (Team B)"]
        B1["Pre-commit Gates"]:::prefeature
        B2["Build + Unit Tests"]:::prefeature
        B3["Contract Tests"]:::prefeature
        B4["Security + Perf"]:::team
        B1 --> B2 --> B3 --> B4
    end
    subgraph teamC ["Accounts Sub-Domain (Team C)"]
        C1["Pre-commit Gates"]:::prefeature
        C2["Build + Unit Tests"]:::prefeature
        C3["Contract Tests"]:::prefeature
        C4["Security + Perf"]:::team
        C1 --> C2 --> C3 --> C4
    end
    subgraph integ ["Integration Pipeline"]
        I1["Assemble Combined Artifact"]:::integration
        I2["Integration Contract Tests"]:::integration
        I3["Acceptance Tests<br/><small>Production-Like Env</small>"]:::integration
        I4["Create Immutable Artifact"]:::integration
        I1 --> I2 --> I3 --> I4
    end
    A4 --> I1
    B4 --> I1
    C4 --> I1
    I4 --> D1["Deploy Canary / Progressive"]:::prod
    D1 --> D2["Health Checks + SLO Monitors<br/>Auto-Rollback"]:::prod
```
Key Characteristics
- Module ownership by domain: Each team owns a bounded module of the application’s
functionality. Ownership is defined by domain, not by technical layer. The team is
responsible for all code, tests, and pipeline configuration within their module.
- Team-owned sub-pipelines: Each team runs their own pre-commit, build, unit test,
contract test, and security gates independently. A team’s sub-pipeline validates only
their module and is their fast feedback loop.
- Contract tests at both levels: Teams run contract tests in their sub-pipeline to
catch boundary issues at the module edges. The integration pipeline runs cross-module
contract tests to verify the assembled result.
- Integration pipeline is thin: The integration pipeline does not re-run each team’s
tests. It validates only what cannot be validated in isolation - cross-module
integration, the assembled artifact, and end-to-end acceptance tests.
- Sub-pipeline target time: Under 10 minutes. This is the team’s primary feedback loop
and must stay fast.
- Integration pipeline target time: Under 15 minutes. If it grows beyond this, the
integration test suite needs decomposition or the application needs architectural changes
to enable independent deployment.
- Trunk-based development with path filters: All teams commit to the same trunk.
Sub-pipelines trigger based on path filters aligned to module boundaries, so a
change to the payments module does not trigger the inventory sub-pipeline.
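As a sketch of that path-filter rule, the routing decision reduces to matching changed files against module path prefixes (module names and paths are illustrative):

```python
"""Path-filter routing sketch: map changed files to the sub-pipelines
that must run."""

# Module boundaries expressed as path prefixes, aligned to domain ownership.
MODULE_PATHS = {
    "payments": "modules/payments/",
    "inventory": "modules/inventory/",
    "accounts": "modules/accounts/",
}

def sub_pipelines_to_trigger(changed_files: list[str]) -> set[str]:
    """Return the set of sub-pipelines affected by a commit."""
    triggered = set()
    for path in changed_files:
        for module, prefix in MODULE_PATHS.items():
            if path.startswith(prefix):
                triggered.add(module)
    return triggered

# A payments-only commit triggers only the payments sub-pipeline.
assert sub_pipelines_to_trigger(["modules/payments/api.py"]) == {"payments"}
```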
Preventing the Integration Pipeline from Becoming a Bottleneck
The integration pipeline is a shared resource and the most likely bottleneck in this
architecture. To keep it fast:
- Move tests left into sub-pipelines: Every test that can run in a sub-pipeline should
run there. The integration pipeline should only contain tests that require the full
assembled artifact.
- Use contract tests aggressively: Contract tests in sub-pipelines catch most
integration issues without needing the full system. The integration pipeline’s contract
tests are a verification layer, not the primary detection point.
- Run the integration pipeline on every commit to trunk: Do not batch. Batching
creates large changesets that are harder to debug when they fail.
- Parallelize acceptance tests: Group acceptance tests by feature area and run the groups
in parallel (see the sketch after this list).
- Monitor integration pipeline duration: Set an alert if it exceeds 15 minutes. Treat
this the same as a failing test - fix it immediately.
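A sketch of that parallel grouping, assuming acceptance tests are tagged with pytest markers per feature area (the marker names are illustrative):

```python
"""Run acceptance-test groups in parallel, one pytest invocation per
feature area."""
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

FEATURE_GROUPS = ["checkout", "inventory", "accounts"]

def run_group(group: str) -> int:
    # Each group is an independent pytest invocation filtered by marker.
    return subprocess.run(["pytest", "-m", group, "tests/acceptance"]).returncode

with ThreadPoolExecutor(max_workers=len(FEATURE_GROUPS)) as pool:
    codes = list(pool.map(run_group, FEATURE_GROUPS))

# The stage fails if any group fails.
sys.exit(max(codes))
```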
When to Move Away from This Architecture
This architecture is a pragmatic pattern for organizations that cannot yet decompose their
monolith into independently deployable services. The long-term goal is
loose coupling -
independent services with independent pipelines that do not need a shared integration step.
Signs you are ready to decompose:
- Contract tests catch virtually all integration issues in sub-pipelines
- The integration pipeline adds little value beyond what sub-pipelines already verify
- Teams are blocked by integration pipeline queuing more than once per week
- Different parts of the application need different deployment cadences
3 - Independent Teams, Independent Deployables
A fully independent pipeline pattern for teams deploying their own services in any order, with API contract verification replacing integration testing.
This is the target architecture for continuous delivery at scale. Each team owns an
independently deployable service with its own pipeline, its own release cadence, and
its own path to production. No team waits for another team to deploy. No integration
pipeline serializes their work. The only shared infrastructure is the API contract
layer that defines how services communicate.
This architecture demands disciplined API management. Without it, independent deployment
is an illusion - teams deploy whenever they want, but they break each other constantly.
```mermaid
graph TD
    classDef prefeature fill:#0d7a32,stroke:#0a6128,color:#fff
    classDef team fill:#224968,stroke:#1a3a54,color:#fff
    classDef contract fill:#30648e,stroke:#224968,color:#fff
    classDef prod fill:#a63123,stroke:#8a2518,color:#fff
    classDef api fill:#6c757d,stroke:#565e64,color:#fff
    subgraph svcA ["Service A Pipeline (Team A)"]
        A1["Pre-commit Gates"]:::prefeature
        A2["Build + Unit Tests"]:::prefeature
        A3["Contract<br/>Verification"]:::prefeature
        A4["Security + Perf"]:::team
        A5["Acceptance Tests"]:::team
        A6["Create Immutable Artifact"]:::team
        A1 --> A2 --> A3 --> A4 --> A5 --> A6
    end
    subgraph svcB ["Service B Pipeline (Team B)"]
        B1["Pre-commit Gates"]:::prefeature
        B2["Build + Unit Tests"]:::prefeature
        B3["Contract<br/>Verification"]:::prefeature
        B4["Security + Perf"]:::team
        B5["Acceptance Tests"]:::team
        B6["Create Immutable Artifact"]:::team
        B1 --> B2 --> B3 --> B4 --> B5 --> B6
    end
    subgraph svcC ["Service C Pipeline (Team C)"]
        C1["Pre-commit Gates"]:::prefeature
        C2["Build + Unit Tests"]:::prefeature
        C3["Contract<br/>Verification"]:::prefeature
        C4["Security + Perf"]:::team
        C5["Acceptance Tests"]:::team
        C6["Create Immutable Artifact"]:::team
        C1 --> C2 --> C3 --> C4 --> C5 --> C6
    end
    subgraph apis ["API Schema Registry"]
        R1["Published API Schemas<br/><small>OpenAPI, AsyncAPI, Protobuf</small>"]:::api
        R2["Backward Compatibility<br/>Checks"]:::api
        R3["Consumer Pacts<br/><small>where available</small>"]:::api
        R1 --- R2 --- R3
    end
    A3 <-..->|"verify"| R3
    B3 <-..->|"verify"| R3
    C3 <-..->|"verify"| R3
    A6 --> A7["Deploy + Canary"]:::prod
    A7 --> A8["Health + SLOs"]:::prod
    B6 --> B7["Deploy + Canary"]:::prod
    B7 --> B8["Health + SLOs"]:::prod
    C6 --> C7["Deploy + Canary"]:::prod
    C7 --> C8["Health + SLOs"]:::prod
```
Legend: Pre-Feature Gate, Team Pipeline, API Schema Registry, Production
Key Characteristics
- Fully independent deployment: Each team deploys on its own schedule. Team A can
deploy ten times a day while Team C deploys once a week. No coordination is required.
- No shared integration pipeline: There is no fan-in step. Each pipeline goes
straight from artifact creation to production. This eliminates the integration bottleneck
entirely.
- Contract tests replace integration tests: Instead of testing all services together,
each team verifies its API contracts independently. The level of contract verification
depends on how much coordination is possible between teams (see
contract verification approaches below).
- Each team owns its full pipeline: From pre-commit to production monitoring. No
shared pipeline definitions, no central platform team gating deployments.
Why API Management Is Critical
Independent deployment only works when teams can change their service without breaking
others. This requires a shared understanding of API boundaries that is enforced
automatically, not through meetings or documents that drift.
Without API management, independent pipelines create independent failures. Teams
deploy incompatible changes, discover the breakage in production, and revert to
coordinated releases to stop the bleeding. This is worse than the multi-team architecture
because it creates the illusion of independence while delivering the reliability of chaos.
What API Management Requires
- Published API schemas: Every service publishes its API contract (OpenAPI, AsyncAPI,
Protobuf, or equivalent) as a versioned artifact. The schema is the source of truth for
what the service provides.
- Contract verification (see approaches below):
At minimum, providers verify backward compatibility against their own published schema.
Where cross-team coordination is feasible, consumer-driven contracts add stronger
guarantees.
- Backward compatibility enforcement: Every API change is checked for backward
compatibility against the published schema. Breaking changes require a new API version
using the expand-then-contract pattern (sketched after this list):
  - Deploy the new version alongside the old
  - Migrate consumers to the new version
  - Remove the old version only after all consumers have migrated
- Schema registry: A central registry (Confluent Schema Registry, a simple artifact
repository, or a Pact Broker where consumer-driven contracts are used) stores published
schemas. Pipelines pull from this registry to run compatibility checks. The registry is
shared infrastructure, but it does not gate deployments - it provides data that each
team’s pipeline uses to make its own go/no-go decision.
- API versioning strategy: Teams agree on a versioning convention (URL path versioning,
header versioning, or semantic versioning for message schemas) and enforce it through
pipeline gates. The convention must be simple enough that every team follows it without
deliberation.
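As an illustration of expand-then-contract with URL path versioning, a sketch using FastAPI (the routes and field names are hypothetical):

```python
"""Expand-then-contract sketch: the new API version ships alongside the
old one, and the old version is removed only after consumers migrate."""
from fastapi import FastAPI

app = FastAPI()

# Expand: keep serving the old shape unchanged.
@app.get("/v1/orders/{order_id}")
def get_order_v1(order_id: int):
    # Old contract: a single flat address string.
    return {"id": order_id, "address": "1 Main St, Springfield"}

# New version deployed alongside; consumers migrate at their own pace.
@app.get("/v2/orders/{order_id}")
def get_order_v2(order_id: int):
    # New contract: a structured address.
    return {"id": order_id,
            "address": {"street": "1 Main St", "city": "Springfield"}}

# Contract: delete the /v1 route only after telemetry shows zero consumers.
```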
Contract Verification Approaches
Not all teams can coordinate on shared contract tooling. The right approach depends on
the relationship between provider and consumer teams. These approaches are listed from
least to most coordination required. Use the strongest approach your context supports.
| Approach | How It Works | Coordination Required | Best When |
|----------|--------------|-----------------------|-----------|
| Provider schema compatibility | Provider’s pipeline checks every change for backward compatibility against its own published schema (e.g., OpenAPI diff). No consumer involvement needed. | None between teams | Teams are in different organizations, or consumers are external/unknown |
| Provider-maintained consumer tests | Provider team writes tests that exercise known consumer usage patterns based on API analytics, documentation, or past breakage. | Minimal - provider observes consumers | Provider can see consumer traffic patterns but cannot require consumer participation |
| Consumer-driven contracts | Consumers publish pacts describing the subset of the provider API they depend on. Provider runs these pacts in its pipeline. See Contract Tests. | High - shared tooling, broker, and agreement to maintain pacts | Teams are in the same organization with shared tooling and willingness to maintain pacts |
Most organizations use a mix. Internal teams with shared tooling can adopt consumer-driven
contracts. Teams consuming third-party or cross-organization APIs use provider schema
compatibility checks and provider-maintained consumer tests.
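Where consumer-driven contracts are feasible, a consumer-side test might look like this sketch using pact-python (the service names, endpoint, and response data are illustrative):

```python
"""Consumer-driven contract sketch: the consumer declares the subset of the
provider API it depends on, producing a pact the provider verifies in its
own pipeline."""
import atexit
import requests
from pact import Consumer, Provider

pact = Consumer("checkout").has_pact_with(Provider("inventory"), port=1234)
pact.start_service()
atexit.register(pact.stop_service)

def test_stock_lookup_contract():
    (pact
     .given("sku 42 is in stock")
     .upon_receiving("a stock lookup for sku 42")
     .with_request("GET", "/stock/42")
     .will_respond_with(200, body={"sku": "42", "available": 3}))

    with pact:  # verifies the declared interaction actually happened
        response = requests.get(f"{pact.uri}/stock/42")

    assert response.json()["available"] == 3
```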
The critical requirement is not which approach you use but that every provider pipeline
verifies backward compatibility before deployment. The minimum viable contract
verification is an automated schema diff against the published API - if the diff contains
a breaking change, the pipeline fails.
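That minimum can be a short script. A sketch that flags removed paths and operations between the published and candidate OpenAPI documents (the file paths are assumptions; purpose-built diff tools such as oasdiff cover far more breaking-change classes):

```python
"""Minimal schema-diff gate sketch for OpenAPI documents."""
import json

def breaking_changes(old: dict, new: dict) -> list[str]:
    problems = []
    old_paths, new_paths = old.get("paths", {}), new.get("paths", {})

    # Removing a path or an operation breaks existing consumers.
    for path, ops in old_paths.items():
        if path not in new_paths:
            problems.append(f"removed path: {path}")
            continue
        for method in ops:
            if method not in new_paths[path]:
                problems.append(f"removed operation: {method.upper()} {path}")
    return problems

with open("schema/published.json") as f_old, open("schema/candidate.json") as f_new:
    problems = breaking_changes(json.load(f_old), json.load(f_new))

if problems:
    raise SystemExit("breaking API change:\n" + "\n".join(problems))
```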
Additional Quality Gates for Distributed Architectures
| Gate | Defect Sources Addressed | Catalog Section |
|------|--------------------------|-----------------|
| Provider schema backward compatibility | Interface mismatches from provider changes | Integration & Boundaries |
| Consumer-driven contract verification (where feasible) | Wrong assumptions about upstream/downstream | Integration & Boundaries |
| API schema backward compatibility check | Schema migration and backward compatibility failures | Data & State |
| Cross-service timeout propagation check | Missing timeout and deadline enforcement across boundaries | Performance & Resilience |
| Circuit breaker and fallback verification | Network partitions and partial failures handled wrong | Dependency & Infrastructure |
| Distributed tracing validation | Missing observability across service boundaries | Testing & Observability Gaps |
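As a sketch of what the cross-service timeout propagation gate verifies: each hop enforces the remaining time budget locally and forwards it downstream, so a slow upstream cannot silently consume the caller’s whole deadline (the header name is a convention, not a standard):

```python
"""Deadline propagation sketch: forward the remaining time budget instead
of using a fixed local timeout at every hop."""
import time
import requests

DEADLINE_HEADER = "X-Request-Deadline-Ms"  # illustrative convention

def call_downstream(url: str, budget_ms: float):
    """Call the next service, enforcing and forwarding the remaining budget."""
    if budget_ms <= 0:
        raise TimeoutError("deadline exhausted; fail fast instead of queueing")
    start = time.monotonic()
    response = requests.get(
        url,
        timeout=budget_ms / 1000,                        # enforce locally
        headers={DEADLINE_HEADER: str(int(budget_ms))},  # propagate downstream
    )
    spent_ms = (time.monotonic() - start) * 1000
    return response, budget_ms - spent_ms  # remaining budget for the next hop
```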
When This Architecture Works
This architecture is the goal for organizations with:
- Multiple teams that need different deployment cadences
- Services with well-defined, stable API boundaries
- Teams mature enough to own their full delivery pipeline
- Investment in contract testing tooling and API governance
When This Architecture Fails
- Shared database schemas: Multiple services can share a database engine without
problems. The failure mode is shared schemas - when Service A and Service B both read
from and write to the same tables, a schema migration by one service can break the
other’s queries. Each service must own its own schema. If two services need the same
data, expose it through an API or event, not through direct table access.
- Synchronous dependency chains: If Service A calls Service B which calls Service C
in the request path, a deployment of C can break A through B. Circuit breakers and
fallbacks are required at every boundary (see the sketch after this list), and contract
tests must cover failure modes, not just success paths.
- No contract verification discipline: If teams skip backward compatibility checks
or let contract test failures slide, breakage shifts from the pipeline to production.
The architecture degrades into uncoordinated deployments with production as the
integration environment. At minimum, every provider must run automated schema
compatibility checks - even without consumer-driven contracts.
- Missing observability: When services deploy independently, debugging production
issues requires distributed tracing, correlated logging, and SLO monitoring across
service boundaries. Without this, independent deployment means independent
troubleshooting with no way to trace cause and effect.
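For the synchronous-chain failure mode above, a minimal circuit-breaker sketch: after repeated failures the breaker opens and calls fail fast to a fallback, instead of letting Service C’s outage cascade through B into A (the thresholds are illustrative):

```python
"""Minimal circuit breaker for a synchronous dependency."""
import time

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 5, reset_after_s: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_after_s = reset_after_s
        self.failures = 0
        self.opened_at = None  # timestamp when the breaker tripped

    def call(self, fn, fallback):
        # Open state: fail fast until the cool-down elapses.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after_s:
                return fallback()
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            return fallback()
        self.failures = 0  # success closes the breaker
        return result
```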
Relationship to the Other Architectures
Architecture 3 is the end state that Architecture 2 teams evolve toward. The progression is:
- Single team, single deployable - one team, one pipeline, one artifact
- Multiple teams, single deployable - multiple teams, sub-pipelines, shared
integration step
- Independent teams, independent deployables - multiple teams, fully independent
pipelines, contract-based integration
The move from 2 to 3 happens incrementally. Extract one service at a time. Give it
its own pipeline. Establish contract tests between it and the monolith. When the contract
tests are reliable, stop running the extracted service’s code through the integration
pipeline. Repeat until the integration pipeline is empty.