Phase 0: Assess

Understand where you are today. Map your delivery process, measure what matters, and identify the constraints holding you back.

Key question: “How far are we from CD?”

Before changing anything, you need to understand your current state. This phase helps you create a clear picture of your delivery process, establish baseline metrics, and identify the constraints that will guide your improvement roadmap.

What You’ll Do

  1. Map your value stream - Visualize the flow from idea to production
  2. Establish baseline metrics - Measure your current delivery performance
  3. Identify constraints - Find the bottlenecks limiting your flow
  4. Complete the current-state checklist - Self-assess against MinimumCD practices

Why This Phase Matters

Teams that skip assessment often invest in the wrong improvements. A team with a 3-week manual testing cycle doesn’t need better deployment automation first - they need testing fundamentals. Understanding your constraints ensures you invest effort where it will have the biggest impact.

When You’re Ready to Move On

You’re ready for Phase 1: Foundations when you can answer:

  • What does our value stream look like end-to-end?
  • What are our current lead time, deployment frequency, and change failure rate?
  • What are the top 3 constraints limiting our delivery flow?
  • Which MinimumCD practices are we missing?

1 - Value Stream Mapping

Visualize your delivery process end-to-end to identify waste and constraints before starting your CD migration.

Phase 0 - Assess | Adapted from Dojo Consortium

Before you change anything about how your team delivers software, you need to see how it works today. Value Stream Mapping (VSM) is the single most effective tool for making your delivery process visible. It reveals the waiting, the rework, and the handoffs that you have learned to live with but that are silently destroying your flow.

In the context of a CD migration, a value stream map is not an academic exercise. It is the foundation for every decision you will make in the phases ahead. It tells you where your time goes, where quality breaks down, and which constraint to attack first.

What Is a Value Stream Map?

A value stream map is a visual representation of every step required to deliver a change from request to production. For each step, you capture:

  • Process time - the time someone is actively working on that step
  • Wait time - the time the work sits idle between steps (in a queue, awaiting approval, blocked on an environment)
  • Percent Complete and Accurate (%C/A) - the percentage of work arriving at this step that is usable without rework

The ratio of process time to total time (process time + wait time) is your flow efficiency. Most teams are shocked to discover that their flow efficiency is below 15%: for every hour of actual work, a change spends nearly six hours - often far more - waiting. A stream with 6 hours of process time and 90 hours of wait time, for example, has a flow efficiency of 6 / 96, or about 6%.

Prerequisites

Before running a value stream mapping session, make sure you have:

  • An established, repeatable process. You are mapping what actually happens, not what should happen. If every change follows a different path, start by agreeing on the current “most common” path.
  • All stakeholders in the room. You need representatives from every group involved in delivery: developers, testers, operations, security, product, change management. Each person knows the wait times and rework loops in their part of the stream that others cannot see.
  • A shared understanding of wait time vs. process time. Wait time is when work sits idle. Process time is when someone is actively working. A code review that takes “two days” but involves 30 minutes of actual review has 30 minutes of process time and roughly 15.5 working hours of wait time (two 8-hour days minus the half hour of review).

Choose Your Mapping Approach

Value stream maps can be built from two directions. Most organizations benefit from starting bottom-up and then combining into a top-down view, but the right choice depends on where your delivery pain is concentrated.

Bottom-Up: Map at the Team Level First

Each delivery team maps its own process independently - from the moment a developer is ready to push a change to the moment that change is running in production. This is the approach described in Document Your Current Process, elevated to a formal value stream map with measured process times, wait times, and %C/A.

When to use bottom-up:

  • You have multiple teams that each own their own deployment process (or think they do).
  • Teams have different pain points and different levels of CD maturity.
  • You want each team to own its improvement work rather than waiting for an organizational initiative.

How it works:

  1. Each team maps its own value stream using the session format described below.
  2. Teams identify and fix their own constraints. Many constraints are local - flaky tests, manual deployment steps, slow code review - and do not require cross-team coordination.
  3. After teams have mapped and improved their own streams, combine the maps to reveal cross-team dependencies. Lay the team-level maps side by side and draw the connections: shared environments, shared libraries, shared approval processes, upstream/downstream dependencies.

The combined view often reveals constraints that no single team can see: a shared staging environment that serializes deployments across five teams, a security review team that is the bottleneck for every release, or a shared library with a release cycle that blocks downstream teams for weeks.

Advantages: Fast to start, builds team ownership, surfaces team-specific friction that a high-level map would miss. Teams see results quickly, which builds momentum for the harder cross-team work.

Top-Down: Map Across Dependent Teams

Start with the full flow from a customer request (or business initiative) entering the system to the delivered outcome in production, mapping across every team the work touches. This produces a single map that shows the end-to-end flow including all inter-team handoffs, shared queues, and organizational boundaries.

When to use top-down:

  • Delivery pain is concentrated at the boundaries between teams, not within them.
  • A single change routinely touches multiple teams (front-end, back-end, platform, data, etc.) and the coordination overhead dominates cycle time.
  • Leadership needs a full picture of organizational delivery performance to prioritize investment.

How it works:

  1. Identify a representative value stream - a type of work that flows through the teams you want to map. For example: “a customer-facing feature that requires API changes, a front-end update, and a database migration.”
  2. Get representatives from every team in the room. Each person maps their team’s portion of the flow, including the handoff to the next team.
  3. Connect the segments. The gaps between teams - where work queues, waits for prioritization, or gets lost in a ticket system - are usually the largest sources of delay.

Advantages: Reveals organizational constraints that team-level maps cannot see. Shows the true end-to-end lead time including inter-team wait times. Essential for changes that require coordinated delivery across multiple teams.

Combining Both Approaches

The most effective strategy for large organizations:

  1. Start bottom-up. Have each team document its current process and then run its own value stream mapping session. Fix team-level quick wins immediately.
  2. Combine into a top-down view. Once team-level maps exist, connect them to see the full organizational flow. The team-level detail makes the top-down map more accurate because each segment was mapped by the people who actually do the work.
  3. Fix constraints at the right level. Team-level constraints (flaky tests, manual deploys) are fixed by the team. Cross-team constraints (shared environments, approval bottlenecks, dependency coordination) are fixed at the organizational level.

This layered approach prevents two common failure modes: mapping at too high a level (which misses team-specific friction) and mapping only at the team level (which misses the organizational constraints that dominate end-to-end lead time).

How to Run the Session

Step 1: Start From Delivery, Work Backward

Begin at the right side of your map - the moment a change reaches production. Then work backward through every step until you reach the point where a request enters the system. This prevents teams from getting bogged down in the early stages and never reaching the deployment process, which is often where the largest delays hide.

Typical steps you will uncover include:

  • Request intake and prioritization
  • Story refinement and estimation
  • Development (coding)
  • Code review
  • Build and unit tests
  • Integration testing
  • Manual QA / regression testing
  • Security review
  • Staging deployment
  • User acceptance testing (UAT)
  • Change advisory board (CAB) approval
  • Production deployment
  • Production verification

Step 2: Capture Process Time and Wait Time for Each Step

For each step on the map, record the process time and the wait time. Use averages if exact numbers are not available, but prefer real data from your issue tracker, CI system, or deployment logs when you can get it.
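
If your issue tracker or CI system can export timestamps, a small script saves arguing over estimates. The sketch below is a minimal example, assuming a hand-made change_steps.csv with one row per step of a single change and illustrative column names (step, work_started, work_finished); adapt it to whatever your tools actually export.

```python
import csv
from datetime import datetime

# Hypothetical export: one row per step of a single change, in the order the work flowed.
# Columns: step, work_started, work_finished (ISO 8601 timestamps).
def load_steps(path):
    with open(path) as f:
        rows = list(csv.DictReader(f))
    for row in rows:
        row["work_started"] = datetime.fromisoformat(row["work_started"])
        row["work_finished"] = datetime.fromisoformat(row["work_finished"])
    return rows

def process_and_wait_times(steps):
    """Process time = active work inside a step; wait time = idle gap since the previous step finished."""
    results = []
    for i, step in enumerate(steps):
        process = step["work_finished"] - step["work_started"]
        wait = step["work_started"] - steps[i - 1]["work_finished"] if i > 0 else None
        results.append((step["step"], process, wait))
    return results

if __name__ == "__main__":
    for name, process, wait in process_and_wait_times(load_steps("change_steps.csv")):
        print(f"{name}: process={process}, wait={wait if wait is not None else 'n/a'}")
```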

Step 3: Calculate %C/A at Each Step

Percent Complete and Accurate measures the quality of the handoff. Ask each person: “What percentage of the work you receive from the previous step is usable without needing clarification, correction, or rework?” If 4 of the last 10 changes arriving at a step needed clarification or rework, the %C/A for that handoff is 60%.

A low %C/A at a step means the upstream step is producing defective output. This is critical information for your migration plan because it tells you where quality needs to be built in rather than inspected after the fact.

Step 4: Identify Constraints (Kaizen Bursts)

Mark the steps with the largest wait times and the lowest %C/A with a “kaizen burst” - a starburst symbol indicating an improvement opportunity. These are your constraints. They will become the focus of your migration roadmap.

Common constraints teams discover during their first value stream map:

Constraint | Typical Impact | Migration Phase to Address
Long-lived feature branches | Days of integration delay, merge conflicts | Phase 1 (Trunk-Based Development)
Manual regression testing | Days to weeks of wait time | Phase 1 (Testing Fundamentals)
Environment provisioning | Hours to days of wait time | Phase 2 (Production-Like Environments)
CAB / change approval boards | Days of wait time per deployment | Phase 2 (Pipeline Architecture)
Manual deployment process | Hours of process time, high error rate | Phase 2 (Single Path to Production)
Large batch releases | Weeks of accumulation, high failure rate | Phase 3 (Small Batches)

Reading the Results

Once your map is complete, calculate these summary numbers:

  • Total lead time = sum of all process times + all wait times
  • Total process time = sum of just the process times
  • Flow efficiency = total process time / total lead time * 100
  • Number of handoffs = count of transitions between different teams or roles
  • Rework percentage = percentage of changes that loop back to a previous step

These numbers become part of your baseline metrics and feed directly into your work to identify constraints.
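
If the map lives in a spreadsheet rather than on a wall, the summary arithmetic is a few lines of Python. The step names, durations, and team assignments below are illustrative only.

```python
from datetime import timedelta

# Illustrative per-step numbers from a value stream map:
# (step name, process time, wait time before the step, team that performs it)
steps = [
    ("Code review",        timedelta(minutes=30), timedelta(hours=16), "dev"),
    ("Build + unit tests", timedelta(minutes=15), timedelta(0),        "dev"),
    ("Manual regression",  timedelta(hours=8),    timedelta(days=3),   "qa"),
    ("CAB approval",       timedelta(minutes=20), timedelta(days=5),   "change mgmt"),
    ("Production deploy",  timedelta(hours=1),    timedelta(hours=4),  "ops"),
]

total_process = sum((p for _, p, _, _ in steps), timedelta())
total_wait = sum((w for _, _, w, _ in steps), timedelta())
total_lead_time = total_process + total_wait
flow_efficiency = total_process / total_lead_time * 100
# A handoff is a transition between steps owned by different teams or roles.
handoffs = sum(1 for a, b in zip(steps, steps[1:]) if a[3] != b[3])

print(f"Total lead time:    {total_lead_time}")
print(f"Total process time: {total_process}")
print(f"Flow efficiency:    {flow_efficiency:.1f}%")
print(f"Handoffs:           {handoffs}")
```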

What Good Looks Like

You are not aiming for a perfect value stream map. You are aiming for a shared, honest picture of reality that the whole team agrees on. The map should be:

  • Visible - posted on a wall or in a shared digital tool where the team sees it daily
  • Honest - reflecting what actually happens, including the workarounds and shortcuts
  • Actionable - with constraints clearly marked so the team knows where to focus

You will revisit and update this map as you progress through each migration phase. It is a living document, not a one-time exercise.

Next Step

With your value stream map in hand, proceed to Baseline Metrics to quantify your current delivery performance.


This content is adapted from the Dojo Consortium, licensed under CC BY 4.0.

2 - Baseline Metrics

Establish baseline measurements for your current delivery performance before making any changes.

Phase 0 - Assess | Adapted from Dojo Consortium

You cannot improve what you have not measured. Before making any changes to your delivery process, you need to capture baseline measurements of your current performance. These baselines serve two purposes: they help you identify where to focus your migration effort, and they give you an honest “before” picture so you can demonstrate progress as you improve.

This is not about building a sophisticated metrics dashboard. It is about getting four numbers written down so you have a starting point.

Why Measure Before Changing

Teams that skip baseline measurement fall into predictable traps:

  • They cannot prove improvement. Six months into a migration, leadership asks “What has gotten better?” Without a baseline, the answer is a shrug and a feeling.
  • They optimize the wrong thing. Without data, teams default to fixing what is most visible or most annoying rather than what is the actual constraint.
  • They cannot detect regression. A change that feels like an improvement may actually make things worse in ways that are not immediately obvious.

Baselines do not need to be precise to the minute. A rough but honest measurement is vastly more useful than no measurement at all.

The Four Essential Metrics

The DORA research program (now part of Google Cloud) identified four key metrics that predict software delivery performance and organizational outcomes. These are the metrics you should baseline first.

1. Deployment Frequency

What it measures: How often your team deploys to production.

How to capture it: Count the number of production deployments in the last 30 days. Check your deployment logs, CI/CD system, or change management records. If deployments are rare enough that you remember each one, count from memory.

What it tells you:

Frequency | What It Suggests
Multiple times per day | You may already be practicing continuous delivery
Once per week | You have a regular cadence but likely batch changes
Once per month or less | Large batches, high risk per deployment, likely manual process
Varies wildly | No consistent process; deployments are event-driven

Record your number: ______ deployments in the last 30 days.

2. Lead Time for Changes

What it measures: The elapsed time from when code is committed to when it is running in production.

How to capture it: Pick your last 5-10 production deployments. For each one, find the commit timestamp of the oldest change included in that deployment and subtract it from the deployment timestamp. Take the median.

If your team uses feature branches, the clock starts at the first commit on the branch, not when the branch is merged. This captures the true elapsed time the change spent in the system.
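
A minimal sketch of this calculation, assuming each production deployment is marked with a git tag (the release-* pattern is illustrative - substitute however you actually mark releases) and using the tagged commit’s timestamp as a proxy for the deployment time:

```python
import statistics
import subprocess
from datetime import datetime

def git(*args):
    return subprocess.run(["git", *args], capture_output=True, text=True, check=True).stdout.strip()

# Assumes each production deployment is tagged with a name matching release-* (illustrative pattern).
tags = git("tag", "--list", "release-*", "--sort=creatordate").splitlines()

lead_times = []
for prev, tag in zip(tags, tags[1:]):
    # Oldest commit included in this deployment = first commit after the previous release tag.
    commit_dates = git("log", f"{prev}..{tag}", "--format=%cI", "--reverse").splitlines()
    if not commit_dates:
        continue
    oldest_commit = datetime.fromisoformat(commit_dates[0])
    # Uses the tagged commit's timestamp as a proxy for the actual deployment time.
    deployed_at = datetime.fromisoformat(git("log", "-1", "--format=%cI", tag))
    lead_times.append(deployed_at - oldest_commit)

if lead_times:
    print("Median lead time:", statistics.median(lead_times))
```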

What it tells you:

Lead Time | What It Suggests
Less than 1 hour | Fast flow, likely small batches and good automation
1 day to 1 week | Reasonable but with room for improvement
1 week to 1 month | Significant queuing, likely large batches or manual gates
More than 1 month | Major constraints in testing, approval, or deployment

Record your number: ______ median lead time for changes.

3. Change Failure Rate

What it measures: The percentage of deployments to production that result in a degraded service requiring remediation (rollback, hotfix, patch, or incident).

How to capture it: Look at your last 20-30 production deployments. Count how many caused an incident, required a rollback, or needed an immediate hotfix. Divide by the total number of deployments.

What it tells you:

Failure Rate | What It Suggests
0-5% | Strong quality practices and small change sets
5-15% | Typical for teams with some automation
15-30% | Quality gaps, likely insufficient testing or large batches
Above 30% | Systemic quality problems; changes are frequently broken

Record your number: ______ % of deployments that required remediation.

4. Mean Time to Restore (MTTR)

What it measures: How long it takes to restore service after a production failure caused by a deployment.

How to capture it: Look at your production incidents from the last 3-6 months. For each incident caused by a deployment, note the time from detection to resolution. Take the median. If you have not had any deployment-caused incidents, note that - it either means your quality is excellent or your deployment frequency is so low that you have insufficient data.

What it tells you:

MTTR | What It Suggests
Less than 1 hour | Good incident response, likely automated rollback
1-4 hours | Manual but practiced recovery process
4-24 hours | Significant manual intervention required
More than 1 day | Serious gaps in observability or rollback capability

Record your number: ______ median time to restore service.

Capturing Your Baselines

You do not need specialized tooling to capture these four numbers. Here is a practical approach:

  1. Check your CI/CD system. Most CI/CD tools (Jenkins, GitHub Actions, GitLab CI, Azure DevOps) have deployment history. Export the last 30-90 days of deployment records.
  2. Check your incident tracker. Pull incidents from the last 3-6 months and filter for deployment-caused issues.
  3. Check your version control. Git log data combined with deployment timestamps gives you lead time.
  4. Ask the team. If data is scarce, have a conversation with the team. Experienced team members can provide reasonable estimates for all four metrics.

Record these numbers somewhere the whole team can see them. A wiki page, a whiteboard, a shared document - the format does not matter. What matters is that they are written down and dated.
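
If you can export or hand-assemble a list of production deployments, three of the four numbers fall out of simple counts. The sketch below assumes a hypothetical deployments.csv with illustrative column names (deployed_at, caused_incident, minutes_to_restore); lead time is covered by the git-based sketch in the previous section.

```python
import csv
import statistics
from datetime import datetime, timedelta, timezone

# Hypothetical hand-made export: one row per production deployment.
# Columns: deployed_at (ISO 8601 with UTC offset), caused_incident (yes/no),
# minutes_to_restore (blank when there was no incident).
with open("deployments.csv") as f:
    deployments = list(csv.DictReader(f))

now = datetime.now(timezone.utc)
recent = [d for d in deployments
          if now - datetime.fromisoformat(d["deployed_at"]) <= timedelta(days=30)]
failures = [d for d in deployments if d["caused_incident"].strip().lower() == "yes"]
restore_minutes = [int(d["minutes_to_restore"]) for d in failures if d["minutes_to_restore"]]

print(f"Deployment frequency:   {len(recent)} deployments in the last 30 days")
print(f"Change failure rate:    {len(failures) / len(deployments) * 100:.0f}% of {len(deployments)} deployments")
if restore_minutes:
    print(f"Median time to restore: {statistics.median(restore_minutes)} minutes")
```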

What Your Baselines Tell You About Where to Focus

Your baseline metrics point toward specific constraints:

Signal | Likely Constraint | Where to Look
Low deployment frequency + high lead time | Large batches, manual process | Value Stream Map for queue times
High change failure rate | Insufficient testing, poor quality practices | Testing Fundamentals
High MTTR | No rollback capability, poor observability | Rollback
High lead time + low change failure rate | Excessive manual gates adding delay but not value | Identify Constraints

Use these signals alongside your value stream map to identify your top constraints.

A Warning About Metrics

These baselines exist to guide improvement, not to judge people. The moment a delivery metric becomes a target or a performance score, teams optimize the number rather than the flow it was meant to measure. Use the baselines to find constraints, to check whether a change actually helped, and to show progress over time - and avoid comparing raw numbers between teams with different contexts.

Next Step

With your baselines recorded, proceed to Identify Constraints to determine which bottleneck to address first.


This content is adapted from the Dojo Consortium, licensed under CC BY 4.0.

3 - Identify Constraints

Use your value stream map and baseline metrics to find the bottlenecks that limit your delivery flow.

Phase 0 - Assess

Your value stream map shows you where time goes. Your baseline metrics tell you how fast and how safely you deliver. Now you need to answer the most important question in your migration: What is the one thing most limiting your delivery flow right now?

This is not a question you answer by committee vote or gut feeling. It is a question you answer with the data you have already collected.

The Theory of Constraints

Eliyahu Goldratt’s Theory of Constraints offers a simple and powerful insight: every system has exactly one constraint that limits its overall throughput. Improving anything other than that constraint does not improve the system.

Consider a delivery process where code review takes 30 minutes but the queue to get a review takes 2 days, and manual regression testing takes 5 days after that. If you invest three months building a faster build pipeline that saves 10 minutes per build, you have improved something that is not the constraint. The 5-day regression testing cycle still dominates your lead time. You have made a non-bottleneck more efficient, which changes nothing about how fast you deliver.

The implication for your CD migration is direct: you must find and address constraints in order of impact. Fix the biggest one first. Then find the next one. Then fix that. This is how you make sustained, measurable progress rather than spreading effort across improvements that do not move the needle.

Common Constraint Categories

Software delivery constraints tend to cluster into a few recurring categories. As you review your value stream map, look for these patterns.

Testing Bottlenecks

Symptoms: Large wait time between “code complete” and “verified.” Manual regression test cycles measured in days or weeks. Low %C/A at the testing step, indicating frequent rework. High change failure rate in your baseline metrics despite significant testing effort.

What is happening: Testing is being done as a phase after development rather than as a continuous activity during development. Manual test suites have grown to cover every scenario ever encountered, and running them takes longer with every release. The test environment is shared and frequently broken.

Migration path: Phase 1 - Testing Fundamentals

Deployment Gates

Symptoms: Wait times of days or weeks between “tested” and “deployed.” Change Advisory Board (CAB) meetings that happen weekly or biweekly. Multiple sign-offs required from people who are not involved in the actual change.

What is happening: The organization has substituted process for confidence. Because deployments have historically been risky (large batches, manual processes, poor rollback), layers of approval have been added. These approvals add delay but rarely catch issues that automated testing would not. They exist because the deployment process is not trustworthy, and they persist because removing them feels dangerous.

Migration path: Phase 2 - Pipeline Architecture and building the automated quality evidence that makes manual approvals unnecessary.

Environment Provisioning

Symptoms: Developers waiting hours or days for a test or staging environment. “Works on my machine” failures when code reaches a shared environment. Environments that drift from production configuration over time.

What is happening: Environments are manually provisioned, shared across teams, and treated as pets rather than cattle. There is no automated way to create a production-like environment on demand. Teams queue for shared environments, and environment configuration has diverged from production.

Migration path: Phase 2 - Production-Like Environments

Code Review Delays

Symptoms: Pull requests sitting open for more than a day. Review queues with 5 or more pending reviews. Developers context-switching because they are blocked waiting for review.

What is happening: Code review is being treated as an asynchronous handoff rather than a collaborative activity. Reviews happen when the reviewer “gets to it” rather than as a near-immediate response. Large pull requests make review daunting, which increases queue time further.

Migration path: Phase 1 - Code Review and Trunk-Based Development to reduce branch lifetime and review size.

Manual Handoffs

Symptoms: Multiple steps in your value stream map where work transitions from one team to another. Tickets being reassigned across teams. “Throwing it over the wall” language in how people describe the process.

What is happening: Delivery is organized as a sequence of specialist stages (dev, test, ops, security) rather than as a cross-functional flow. Each handoff introduces a queue, a context loss, and a communication overhead. The more handoffs, the longer the lead time and the more likely that information is lost.

Migration path: This is an organizational constraint, not a technical one. It is addressed gradually through cross-functional team formation and by automating the specialist activities into the pipeline so that handoffs become automated checks rather than manual transfers.

Using Your Value Stream Map to Find the Constraint

Pull out your value stream map and follow this process:

Step 1: Rank Steps by Wait Time

List every step in your value stream and sort them by wait time, longest first. Your biggest constraint is almost certainly in the top three. Wait time is more important than process time because wait time is pure waste - nothing is happening, no value is being created.

Step 2: Look for Rework Loops

Identify steps where work frequently loops back. A testing step with a 40% rework rate means that 4 in 10 changes go through the development-to-test cycle at least twice. On average, the time spent at that step grows by roughly 40% once you account for rework - and the reworked changes pay the full wait time twice.
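
Steps 1 and 2 can be combined into a single ranking by inflating each step’s measured wait time by its rework rate. A minimal sketch with illustrative numbers:

```python
from datetime import timedelta

# Illustrative numbers: (step, measured wait time, fraction of work that loops back to it)
steps = [
    ("Code review queue",   timedelta(days=2), 0.10),
    ("Manual regression",   timedelta(days=5), 0.40),
    ("CAB approval",        timedelta(days=7), 0.05),
    ("Staging environment", timedelta(days=1), 0.00),
]

# Effective wait ~ measured wait * (1 + rework rate), since reworked changes queue again.
ranked = sorted(steps, key=lambda s: s[1] * (1 + s[2]), reverse=True)

for name, wait, rework in ranked:
    print(f"{name}: wait {wait}, rework {rework:.0%}, effective wait {wait * (1 + rework)}")
```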

Step 3: Count Handoffs

Each handoff between teams or roles is a queue point. If your value stream has 8 handoffs, you have 8 places where work waits. Look for handoffs that could be eliminated by automation or by reorganizing work within the team.

Step 4: Cross-Reference with Metrics

Check your findings against your baseline metrics:

  • High lead time with low process time = the constraint is in the queues (wait time), not in the work itself
  • High change failure rate = the constraint is in quality practices, not in speed
  • Low deployment frequency with everything else reasonable = the constraint is in the deployment process itself or in organizational policy

Prioritizing: Fix the Biggest One First

Once you have identified your top constraint, map it to a migration phase:

If Your Top Constraint Is… | Start With…
Integration and merge conflicts | Phase 1 - Trunk-Based Development
Manual testing cycles | Phase 1 - Testing Fundamentals
Large work items that take weeks | Phase 1 - Work Decomposition
Code review bottlenecks | Phase 1 - Code Review
Manual or inconsistent deployments | Phase 2 - Single Path to Production
Environment availability | Phase 2 - Production-Like Environments
Change approval processes | Phase 2 - Pipeline Architecture
Large batch sizes | Phase 3 - Small Batches

The Next Constraint

Fixing your first constraint will improve your flow. It will also reveal the next constraint. This is expected and healthy. A delivery process is a chain, and strengthening the weakest link means a different link becomes the weakest.

This is why the migration is organized in phases. Phase 1 addresses the foundational constraints that nearly every team has (integration practices, testing, small work). Phase 2 addresses pipeline constraints. Phase 3 optimizes flow. You will cycle through constraint identification and resolution throughout your migration.

Plan to revisit your value stream map and metrics after addressing each major constraint. Your map from today will be outdated within weeks of starting your migration - and that is a sign of progress.

Next Step

Complete the Current State Checklist to assess your team against specific MinimumCD practices and confirm your migration starting point.

4 - Current State Checklist

Self-assess your team against MinimumCD practices to understand your starting point and determine where to begin your migration.

Phase 0 - Assess

This checklist translates the practices defined by MinimumCD.org into concrete yes-or-no questions you can answer about your team today. It is not a test to pass. It is a diagnostic tool that shows you which practices are already in place and which ones your migration needs to establish.

Work through each category with your team. Be honest - checking a box you have not earned gives you a migration plan that skips steps you actually need.

How to Use This Checklist

For each item, mark it with an [x] if your team consistently does this today - not occasionally, not aspirationally, but as a default practice. If you do it sometimes but not reliably, leave it unchecked.


Trunk-Based Development

  • All developers integrate their work to the trunk (main branch) at least once every 24 hours
  • No branch lives longer than 24 hours before being integrated
  • The team does not use code freeze periods to stabilize for release
  • There are fewer than 3 active branches at any given time
  • Merge conflicts are rare and small when they occur

Why it matters: Long-lived branches are the single biggest source of integration risk. Every hour a branch lives is an hour where it diverges from what everyone else is doing. Trunk-based development eliminates integration as a separate, painful event and makes it a continuous, trivial activity. Without this practice, continuous integration is impossible, and without continuous integration, continuous delivery is impossible.


Continuous Integration

  • Every commit to trunk triggers an automated build
  • The automated build includes running the full unit test suite
  • All tests must pass before any change is merged to trunk
  • A broken build is treated as the team’s top priority to fix (not left broken while other work continues)
  • The build and test cycle completes in less than 10 minutes

Why it matters: Continuous integration means that the team always knows whether the codebase is in a working state. If builds are not automated, if tests do not run on every commit, or if broken builds are tolerated, then the team is flying blind. Every change is a gamble that something else has not broken in the meantime.


Pipeline Practices

  • There is a single, defined path that every change follows to reach production (no side doors, no manual deployments, no exceptions)
  • The pipeline is deterministic: given the same input commit, it produces the same output every time
  • Build artifacts are created once and promoted through environments (not rebuilt for each environment)
  • The pipeline runs automatically on every commit to trunk without manual triggering
  • Pipeline failures provide clear, actionable feedback that developers can act on within minutes

Why it matters: A pipeline is the mechanism that turns code changes into production deployments. If the pipeline is inconsistent, manual, or bypassable, then you do not have a reliable path to production. You have a collection of scripts and hopes. Deterministic, automated pipelines are what make deployment a non-event rather than a high-risk ceremony.


Deployment

  • The team has at least one environment that closely mirrors production configuration (OS, middleware, networking, data shape)
  • Application configuration is externalized from the build artifact (config files, environment variables, or a config service - not baked into the binary)
  • The team can roll back a production deployment within minutes, not hours
  • Deployments to production do not require downtime
  • The deployment process is the same for every environment (dev, staging, production) with only configuration differences

Why it matters: If your test environment does not look like production, your tests are lying to you. If configuration is baked into your artifact, you are rebuilding for each environment, which means the thing you tested is not the thing you deploy. If you cannot roll back quickly, every deployment is a high-stakes bet. These practices ensure that what you test is what you ship, and that shipping is safe.


Quality

  • The team has automated tests at multiple levels (unit, integration, and at least some end-to-end)
  • A build that passes all automated checks is considered deployable without additional manual verification
  • There are no manual quality gates between a green build and production (no manual QA sign-off, no manual regression testing required)
  • Defects found in production are addressed by adding automated tests that would have caught them, not by adding manual inspection steps
  • The team monitors production health and can detect deployment-caused issues within minutes

Why it matters: Quality that depends on manual inspection does not scale and does not speed up. As your deployment frequency increases through the migration, manual quality gates become the bottleneck. The goal is to build quality in through automation so that a green build means a deployable build. This is the foundation of continuous delivery: if it passes the pipeline, it is ready for production.


Scoring Guide

Count the number of items you checked across all categories.

  • 0-5 - You are early in your journey. Most foundational practices are not yet in place. Start at the beginning of Phase 1 - Foundations, focusing on trunk-based development and basic test automation first.
  • 6-12 - You have some practices in place but significant gaps remain. This is the most common starting point. Start with Phase 1 - Foundations, focusing on the categories where you had the fewest checks. Your constraint analysis will tell you which gap to close first.
  • 13-18 - Your foundations are solid. The gaps are likely in pipeline automation and deployment practices. You may be able to move quickly through Phase 1 and focus your effort on Phase 2 - Pipeline. Validate with your value stream map that your remaining constraints match.
  • 19-22 - You are well-practiced in most areas. Your migration is about closing specific gaps and optimizing flow. Review your unchecked items - they point to specific topics in Phase 3 - Optimize or Phase 4 - Deliver on Demand.
  • 23-25 - You are already practicing most of what MinimumCD defines. Your focus should be on consistency and delivering on demand. Jump to Phase 4 - Deliver on Demand and focus on the capability to deploy any change when needed.

Putting It All Together

You now have four pieces of information from Phase 0:

  1. A value stream map showing your end-to-end delivery process with wait times and rework loops
  2. Baseline metrics for deployment frequency, lead time, change failure rate, and MTTR
  3. An identified top constraint telling you where to focus first
  4. This checklist confirming which practices are in place and which are missing

Together, these give you a clear, data-informed starting point for your migration. You know where you are, you know what is slowing you down, and you know which practices to establish first.

Next Step

You are ready to begin Phase 1 - Foundations. Start with the practice area that addresses your top constraint.