CD Practices
Concise definitions of the core continuous delivery practices from MinimumCD.
These pages define the minimum practices required for continuous delivery. Each page covers
what the practice is, why it matters, and what the minimum criteria are. For migration
guidance and tactical how-to content, follow the links to the corresponding phase pages.
Core Practices
1 - Continuous Integration
Integrate work to trunk at least daily with automated testing to maintain a releasable codebase.
Definition
Continuous Integration (CI) is the activity of each developer integrating work to the trunk of version control at least daily and verifying that the work is, to the best of our knowledge, releasable.
CI is not just about tooling - it is fundamentally about team workflow and working agreements.
Minimum Activities Required
- Trunk-based development - all work integrates to trunk
- Work integrates to trunk at a minimum daily (each developer, every day)
- Work has automated testing before merge to trunk (see the sketch after this list)
- Work is tested with other work automatically on merge
- All feature work stops when the build is red
- New work does not break delivered work
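To make the pre-merge and on-merge checks above concrete, here is a minimal gate sketch. It assumes a Python project tested with pytest and a trunk branch named main; the script and commands are illustrative, not a prescribed tool, and in practice this logic lives in your CI server's pre-merge job.

```python
# premerge_gate.py - a hypothetical pre-merge gate: test the work combined with trunk
import subprocess
import sys


def gate() -> int:
    # Bring in the latest trunk so the tests exercise the merged result,
    # not the branch in isolation ("work is tested with other work").
    subprocess.run(["git", "fetch", "origin", "main"], check=True)
    subprocess.run(["git", "merge", "--no-edit", "origin/main"], check=True)

    # The merge is only allowed if the full automated suite passes.
    return subprocess.run(["pytest", "-q"]).returncode


if __name__ == "__main__":
    sys.exit(gate())
```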
Why This Matters
Without CI, Teams Experience
- Integration hell: Weeks or months of painful merge conflicts
- Late defect detection: Bugs found after they are expensive to fix
- Reduced collaboration: Developers work in isolation, losing context
- Deployment fear: Large batches of untested changes create risk
- Slower delivery: Time wasted on merge conflicts and rework
- Quality erosion: Without rapid feedback, technical debt accumulates
With CI, Teams Achieve
- Rapid feedback: Know within minutes if changes broke something
- Smaller changes: Daily integration forces better work breakdown
- Better collaboration: Team shares ownership of the codebase
- Lower risk: Small, tested changes are easier to diagnose and fix
- Faster delivery: No integration delays blocking deployment
- Higher quality: Continuous testing catches issues early
What Is Improved
Teamwork
CI requires strong teamwork to function correctly. Key improvements:
- Pull workflow: Team picks next important work instead of working from assignments
- Code review cadence: Quick reviews (< 4 hours) keep work flowing
- Pair programming: Real-time collaboration eliminates review delays
- Shared ownership: Everyone maintains the codebase together
- Team goals over individual tasks: Focus shifts from “my work” to “our progress”
Work Breakdown
CI forces better work decomposition:
- Definition of Ready: Every story has testable acceptance criteria before work starts
- Small batches: If the team can complete work in < 2 days, it is refined enough
- Vertical slicing: Each change delivers a thin, tested slice of functionality
- Incremental delivery: Features built incrementally, each step integrated daily
Testing
CI requires a shift in testing approach:
- From writing tests after code is “complete” to writing tests before/during coding (TDD/BDD) - see the sketch after this list
- From testing implementation details to testing behavior and outcomes
- From manual testing before deployment to automated testing on every commit
- From separate QA phase to quality built into development
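To make the first shift concrete, here is a minimal test-first sketch. The pricing module and calculate_discount function are hypothetical; the point is that the tests describe the expected behavior, and fail, before the implementation exists.

```python
# test_discount.py - written before the implementation exists (all names are illustrative)
import pytest

from pricing import calculate_discount  # hypothetical module under construction


def test_no_discount_below_threshold():
    # Behavior, not implementation details: orders under 100 get no discount
    assert calculate_discount(order_total=99.99) == 0


def test_ten_percent_discount_at_threshold():
    assert calculate_discount(order_total=100.00) == pytest.approx(10.00)


def test_negative_totals_are_rejected():
    with pytest.raises(ValueError):
        calculate_discount(order_total=-5)
```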
Migration Guidance
For detailed guidance on adopting CI practices during your CD migration, see:
Additional Resources
2 - Trunk-Based Development
All changes integrate into a single shared trunk with no intermediate branches.
“Trunk-based development has been shown to be a predictor of high performance in software development and delivery. It is characterized by fewer than three active branches in a code repository; branches and forks having very short lifetimes (e.g., less than a day) before being merged; and application teams rarely or never having ‘code lock’ periods when no one can check in code or do pull requests due to merging conflicts, code freezes, or stabilization phases.”
- Accelerate by Nicole Forsgren Ph.D., Jez Humble & Gene Kim
Definition
Trunk-based development (TBD) is a team workflow where changes are integrated into the trunk with no intermediate integration (develop, test, etc.) branch. The two common workflows are making changes directly to the trunk or using very short-lived branches that branch from the trunk and integrate back into the trunk.
Release branches are an intermediate step that some choose on their path to continuous delivery while improving their quality processes in the pipeline. True CD releases from the trunk.
Minimum Activities Required
- All changes integrate into the trunk
- If branches from the trunk are used:
- They originate from the trunk
- They re-integrate to the trunk
- They are short-lived and removed after the merge
What Is Improved
- Smaller changes: TBD emphasizes small, frequent changes that are easier for the team to review and less prone to disruptive merge conflicts. Conflicts become rare and trivial.
- We must test: TBD requires us to implement tests as part of the development process.
- Better teamwork: We need to work more closely as a team. This has many positive impacts, not least that we stay focused on finishing the team’s highest priority work.
- Better work definition: Small changes require us to decompose the work into a level of detail that helps uncover things that lack clarity or do not make sense. This provides much earlier feedback on potential quality issues.
- Replaces process with engineering: Instead of controlling the release of features with branches and process, we control it with engineering techniques known as evolutionary coding methods (see the sketch after this list). These techniques bring additional stability benefits that process alone cannot provide.
- Reduces risk: Long-lived branches carry two common risks. First, the change will not integrate cleanly and the merge conflicts result in broken or lost features. Second, the branch will be abandoned, usually because of the first reason.
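As one example of an evolutionary coding method, here is a minimal feature-flag sketch: unfinished work integrates to the trunk daily but stays dark until the flag is turned on. All names are illustrative.

```python
# checkout.py - unfinished work lives on the trunk behind a flag (illustrative names)

FEATURE_FLAGS = {
    # Default off: the new tax engine can be merged every day without being released
    "new_tax_engine": False,
}


def calculate_tax(order: dict) -> float:
    if FEATURE_FLAGS["new_tax_engine"]:
        return _calculate_tax_v2(order)   # in-progress work, integrated but not exposed
    return _calculate_tax_v1(order)       # current, released behavior


def _calculate_tax_v1(order: dict) -> float:
    return round(order["subtotal"] * 0.20, 2)


def _calculate_tax_v2(order: dict) -> float:
    # Under construction; safe to merge because the flag keeps it unreachable in production
    raise NotImplementedError("new tax engine is not finished yet")
```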
Migration Guidance
For detailed guidance on adopting TBD during your CD migration, see:
Additional Resources
3 - Single Path to Production
All deployments flow through one automated pipeline - no exceptions.
Definition
The deployment pipeline is the single, standardized path for all changes to reach any environment - development, testing, staging, or production. No manual deployments, no side channels, no “quick fixes” bypassing the pipeline. If it is not deployed through the pipeline, it does not get deployed.
Key Principles
- Single path: All deployments flow through the same pipeline
- No exceptions: Even hotfixes and rollbacks go through the pipeline
- Automated: Deployment is triggered automatically after pipeline validation
- Auditable: Every deployment is tracked and traceable
- Consistent: The same process deploys to all environments
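A minimal sketch of what “no exceptions” can look like in code: one deploy entry point that refuses anything the pipeline did not produce and validate. The provenance fields are hypothetical; a real pipeline would use its own artifact metadata.

```python
# deploy_gate.py - one entry point for every deployment, to every environment (illustrative)

class DeploymentRejected(Exception):
    pass


def deploy(artifact: dict, environment: str) -> None:
    # Hypothetical provenance metadata stamped on the artifact by the pipeline
    if not artifact.get("pipeline_run_id"):
        raise DeploymentRejected("artifact was not produced by the pipeline")
    if not artifact.get("quality_gates_passed"):
        raise DeploymentRejected("artifact has not passed the pipeline's quality gates")

    # Hotfixes and rollbacks use this same function - there is no side channel.
    print(f"deploying {artifact['version']} to {environment} "
          f"(pipeline run {artifact['pipeline_run_id']})")
```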
What Is Improved
- Reliability: Every deployment is validated the same way
- Traceability: Clear audit trail from commit to production
- Consistency: Environments stay in sync
- Speed: Automated deployments are faster than manual
- Safety: Quality gates are never bypassed
- Confidence: Teams trust that production matches what was tested
- Recovery: Rollbacks are as reliable as forward deployments
Migration Guidance
For detailed guidance on establishing a single path to production, see:
Additional Resources
4 - Deterministic Pipeline
The same inputs to the pipeline always produce the same outputs.
Definition
A deterministic pipeline produces consistent, repeatable results. Given the same inputs (code, configuration, dependencies), the pipeline will always produce the same outputs and reach the same pass/fail verdict. The pipeline’s decision on whether a change is releasable is definitive - if it passes, deploy it; if it fails, fix it.
Key Principles
- Repeatable: Running the pipeline twice with identical inputs produces identical results
- Authoritative: The pipeline is the final arbiter of quality, not humans
- Immutable: No manual changes to artifacts or environments between pipeline stages
- Trustworthy: Teams trust the pipeline’s verdict without second-guessing
What Makes a Pipeline Deterministic
- Version control everything: Source code, IaC, pipeline definitions, test data, dependency lockfiles, tool versions
- Lock dependency versions: Always use lockfiles. Never rely on “latest” or version ranges.
- Eliminate environmental variance: Containerize builds, pin image tags, install exact tool versions
- Remove human intervention: No manual approvals in the critical path, no manual environment setup
- Fix flaky tests immediately: Quarantine, fix, or delete. Never allow a “just re-run it” culture.
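One way to keep yourself honest about the practices above is to build the same commit twice and compare the outputs. A minimal sketch, assuming the build produces a single artifact file:

```python
# repro_check.py - same inputs should produce the same artifact (illustrative)
import hashlib
from pathlib import Path


def artifact_digest(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()


def assert_reproducible(first_build: str, second_build: str) -> None:
    # Two builds of the same commit, with locked dependencies and pinned tools,
    # should be byte-for-byte identical. A mismatch means something non-deterministic
    # (timestamps, "latest" tags, unpinned tools) has crept into the pipeline.
    if artifact_digest(first_build) != artifact_digest(second_build):
        raise AssertionError("identical inputs produced different artifacts")
```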
What Is Improved
- Quality increases: Real issues are never dismissed as “flaky tests”
- Speed increases: No time wasted on test reruns or manual verification
- Trust increases: Teams rely on the pipeline instead of adding manual gates
- Debugging improves: Failures are reproducible, making root cause analysis easier
- Delivery improves: Faster, more reliable path from commit to production
Migration Guidance
For detailed guidance on building a deterministic pipeline, see:
- Deterministic Pipeline - Phase 2 pipeline practice with anti-pattern/good-pattern examples and getting started steps
Additional Resources
5 - Definition of Deployable
Automated criteria that determine when a change is ready for production.
Definition
The “definition of deployable” is your organization’s agreed-upon set of non-negotiable quality criteria that every artifact must pass before it can be deployed to any environment. This definition should be automated, enforced by the pipeline, and treated as the authoritative verdict on whether a change is ready for deployment.
Key Principles
- Pipeline is definitive: If the pipeline passes, the artifact is deployable - no exceptions
- Automated validation: All criteria are checked automatically, not manually
- Consistent across environments: The same standards apply whether deploying to test or production
- Fails fast: The pipeline rejects artifacts that do not meet the standard immediately
What Should Be in Your Definition
Your definition of deployable should include automated checks for:
- Security: SAST scans, dependency vulnerability scans, secret detection
- Functionality: Unit tests, integration tests, end-to-end tests, regression tests
- Compliance: Audit trails, policy as code, change documentation
- Performance: Response time thresholds, load test baselines, resource utilization
- Reliability: Health check validation, graceful degradation tests, rollback verification
- Code quality: Linting, static analysis, complexity metrics
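A minimal sketch of how these criteria become a single automated verdict: each pipeline stage reports pass/fail, and the artifact is deployable only if every check passed. The check names below are illustrative.

```python
# deployable.py - aggregate automated checks into one pass/fail verdict (illustrative)

def is_deployable(check_results: dict[str, bool]) -> bool:
    failures = [name for name, passed in check_results.items() if not passed]
    for name in failures:
        print(f"not deployable: {name} failed")
    return not failures


# Example verdict built from hypothetical stage outputs
print(is_deployable({
    "unit_and_integration_tests": True,
    "dependency_vulnerability_scan": True,
    "performance_p95_under_300ms": False,   # one failure rejects the artifact immediately
}))
```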
What Is Improved
- Removes bottlenecks: No waiting for manual approval meetings
- Increases quality: Automated checks catch more issues than manual reviews
- Reduces cycle time: Deployable artifacts are identified in minutes, not days
- Improves collaboration: Shared understanding of quality standards
- Enables continuous delivery: Trust in the pipeline makes frequent deployments safe
Migration Guidance
For detailed guidance on defining what “deployable” means for your organization, see:
- Deployable Definition - Phase 2 pipeline practice with progressive quality gates, context-specific definitions, and getting started steps
Additional Resources
6 - Immutable Artifacts
Build once, deploy everywhere. The artifact is never modified after creation.
Definition
Central to CD is that the pipeline validates the artifact itself: it is built once and deployed to all environments. A common anti-pattern is building a separate artifact for each environment. The pipeline should generate immutable, versioned artifacts.
- Immutable Pipeline: Failures should be addressed by changes in version control so that two executions with the same configuration always yield the same results. Never go to the failure point, make adjustments in the environment, and restart from that point.
- Immutable Artifacts: Some package management systems allow the creation of release candidate versions. For example, it is common to find -SNAPSHOT versions in Java. However, this means the artifact’s behavior can change without modifying the version. Version numbers are cheap. If we are to have an immutable pipeline, it must produce an immutable artifact. Never use or produce -SNAPSHOT versions.
Immutability provides the confidence to know that the results from the pipeline are real and repeatable.
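A minimal sketch of both ideas, with illustrative names and values: every build gets a unique version instead of a reusable -SNAPSHOT, and promotion verifies that the artifact has not changed since the pipeline built it.

```python
# artifact_identity.py - unique versions and unchanged artifacts (illustrative scheme)
from datetime import datetime, timezone


def release_version(base_version: str, commit_sha: str) -> str:
    # Version numbers are cheap: mint a new one for every build, never reuse a -SNAPSHOT
    timestamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    return f"{base_version}+{timestamp}.{commit_sha[:8]}"


def verify_unmodified(tested_digest: str, promoted_digest: str) -> None:
    # The artifact promoted to production must be byte-for-byte what was validated
    if tested_digest != promoted_digest:
        raise RuntimeError("artifact changed after the pipeline built it")
```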
What Is Improved
- Everything must be version controlled: source code, environment configurations, application configurations, and even test data. This reduces variability and improves the quality process.
- Confidence in testing: The artifact validated in pre-production is byte-for-byte identical to what runs in production.
- Faster rollback: Previous artifacts are unchanged in the artifact repository, ready to be redeployed.
- Audit trail: Every artifact is traceable to a specific commit and pipeline run.
Migration Guidance
For detailed guidance on implementing immutable artifacts, see:
- Immutable Artifacts - Phase 2 pipeline practice with anti-patterns, good patterns, and getting started steps
Additional Resources
7 - Production-Like Environments
Test in environments that mirror production to catch environment-specific issues early.
Definition
It is crucial to leverage pre-production environments in your CI/CD to run all of your tests (unit, integration, UAT, manual QA, E2E) early and often. Test environments increase interaction with new features and surface bugs earlier - both important prerequisites for reliable software.
Types of Pre-Production Environments
Most organizations employ both static and short-lived environments and utilize them for case-specific stages of the SDLC:
- Staging environment: The last environment that teams run automated tests against prior to deployment, particularly for testing interaction between all new features after a merge. Its infrastructure reflects production as closely as possible.
- Ephemeral environments: Full-stack, on-demand environments spun up on every code change. Each ephemeral environment is leveraged in your pipeline to run E2E, unit, and integration tests on every code change. These environments are defined in version control, created and destroyed automatically on demand. They are short-lived by definition but should closely resemble production. They replace long-lived “static” environments and the maintenance required to keep those stable.
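A minimal sketch of the ephemeral pattern, with provisioning and teardown delegated to placeholder commands; substitute whatever IaC tooling defines your environments in version control.

```python
# ephemeral_env.py - one short-lived, production-like environment per change (illustrative)
import subprocess


def test_in_ephemeral_environment(change_id: str, run_tests) -> None:
    env_name = f"pr-{change_id}"
    # "make provision" / "make destroy" are placeholders for your IaC tooling
    subprocess.run(["make", "provision", f"ENV={env_name}"], check=True)
    try:
        run_tests(env_name)   # run E2E/integration tests against the fresh environment
    finally:
        # The environment is always destroyed, even when the tests fail
        subprocess.run(["make", "destroy", f"ENV={env_name}"], check=True)
```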
What Is Improved
- Infrastructure is kept consistent: Test environments deliver results that reflect real-world performance. Fewer unexpected bugs reach production, since production-like data and dependencies let you run your entire test suite earlier.
- Test against latest changes: These environments rebuild upon code changes with no manual intervention.
- Test before merge: Attaching an ephemeral environment to every PR enables E2E testing in your CI before code changes get deployed to staging.
Migration Guidance
For detailed guidance on implementing production-like environments, see:
Additional Resources
8 - Rollback
Fast, automated recovery from any deployment.
Definition
Rollback on-demand means the ability to quickly and safely revert to a previous working version of your application at any time, without requiring special approval, manual intervention, or complex procedures. It should be as simple and reliable as deploying forward.
Key Principles
- Fast: Rollback completes in minutes, not hours. Target < 5 minutes.
- Automated: No manual steps or special procedures. Single command or click.
- Safe: Rollback is validated just like forward deployment.
- Simple: Any team member can execute it without specialized knowledge.
- Tested: Rollback mechanism is regularly tested, not just used in emergencies.
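Because artifacts are immutable and there is a single path to production, rollback can be as simple as deploying a previous version through the same mechanism. A minimal sketch with illustrative names:

```python
# rollback.py - rolling back is deploying a previous immutable artifact (illustrative)

def rollback(service: str, deploy_history: list[str], deploy) -> str:
    """deploy_history is newest-first, e.g. ["1.4.2", "1.4.1", ...];
    `deploy` is the same function used for forward deployments."""
    if len(deploy_history) < 2:
        raise RuntimeError("no previous version to roll back to")
    previous_version = deploy_history[1]
    deploy(service, previous_version)   # same validated path as a forward deployment
    return previous_version
```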
What Is Improved
- Mean Time To Recovery (MTTR): Drops from hours to minutes
- Deployment frequency: Increases due to reduced risk
- Team confidence: Higher willingness to deploy
- Customer satisfaction: Faster incident resolution
- On-call burden: Reduced stress for on-call engineers
Migration Guidance
For detailed guidance on implementing rollback capability, see:
- Rollback - Phase 2 pipeline practice with blue-green, canary, feature flag, and database-safe rollback patterns
Additional Resources
9 - Application Configuration
Separate what varies between environments from what does not.
Definition
Application configuration defines the internal behavior of your application and is bundled with the artifact. It does not vary between environments. This is distinct from environment configuration (secrets, URLs, credentials) which varies by deployment.
We embrace The Twelve-Factor App config definitions:
- Application Configuration: Internal to the app, does NOT vary by environment (feature flags, business rules, UI themes, default settings)
- Environment Configuration: Varies by deployment (database URLs, API keys, service endpoints, credentials)
Key Principles
Application configuration should be:
- Version controlled with the source code
- Deployed as part of the immutable artifact
- Testable in the CI pipeline
- Unchangeable after the artifact is built
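A minimal sketch of the split, with illustrative keys: application configuration ships inside the artifact, while environment configuration is injected at deploy time.

```python
# config.py - Twelve-Factor-style configuration split (illustrative keys)
import os

# Application configuration: bundled with the artifact, identical in every
# environment, version controlled and tested by the pipeline.
APP_CONFIG = {
    "default_page_size": 50,
    "feature_new_checkout": False,
    "ui_theme": "light",
}

# Environment configuration: varies by deployment and is injected at runtime,
# never baked into the artifact.
ENV_CONFIG = {
    "database_url": os.environ.get("DATABASE_URL"),
    "payment_api_key": os.environ.get("PAYMENT_API_KEY"),
}
```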
What Is Improved
- Immutability: The artifact tested in staging is identical to what runs in production
- Traceability: You can trace any behavior back to a specific commit
- Testability: Application behavior can be validated in the pipeline before deployment
- Reliability: No configuration drift between environments caused by manual changes
- Faster rollback: Rolling back an artifact rolls back all application configuration changes
Migration Guidance
For detailed guidance on managing application configuration, see:
Additional Resources