Integration and Feedback Problems
Symptoms related to work-in-progress, integration pain, review bottlenecks, and feedback speed.
These symptoms indicate problems with how work flows through your team. When integration is
deferred, feedback is slow, or work piles up, the team stays busy without finishing things.
Each page describes what you are seeing and links to the anti-patterns most likely causing it.
How to use this section
Start with the symptom that matches what your team experiences. Each symptom page explains what
you are seeing, identifies the most likely root causes (anti-patterns), and provides diagnostic
questions to narrow down which cause applies to your situation. Follow the anti-pattern link to
find concrete fix steps.
Related anti-pattern categories: Team Workflow Anti-Patterns,
Branching and Integration Anti-Patterns
Related guides: Trunk-Based Development,
Work Decomposition,
Limiting WIP
1 - Everything Started, Nothing Finished
The board shows many items in progress but few reaching done. The team is busy but not delivering.
What you are seeing
Open the team’s board on any given day. Count the items in progress. Count the team members. If
the first number is significantly higher than the second, the team has a WIP problem. Every
developer is working on a different story. Eight items in progress, zero done. Nothing gets the
focused attention needed to finish.
At the end of the sprint, there is a scramble to close anything. Stories that were “almost done”
for days finally get pushed through. Cycle time is long and unpredictable. The team is busy all
the time but finishes very little.
Common causes
Push-Based Work Assignment
When managers assign work to individuals rather than letting the team pull from a prioritized
backlog, each person ends up with their own queue of assigned items. WIP grows because work is
distributed across individuals rather than flowing through the team. Nobody swarms on blocked
items because everyone is busy with “their” assigned work.
Read more: Push-Based Work Assignment
Horizontal Slicing
When work is split by technical layer (“build the database schema,” “build the API,” “build the
UI”), each layer must be completed before anything is deployable. Multiple developers work on
different layers of the same feature simultaneously, all “in progress,” none independently done.
WIP is high because the decomposition prevents any single item from reaching completion quickly.
Read more: Horizontal Slicing
Unbounded WIP
When the team has no explicit constraint on how many items can be in progress simultaneously,
there is nothing to prevent WIP from growing. Developers start new work whenever they are
blocked, waiting for review, or between tasks. Without a limit, the natural tendency is to stay
busy by starting things rather than finishing them.
Read more: Unbounded WIP
How to narrow it down
- Does each developer have their own assigned backlog of work? If yes, the assignment model
prevents swarming and drives individual queues. Start with
Push-Based Work Assignment.
- Are work items split by technical layer rather than by user-visible behavior? If yes,
items cannot be completed independently. Start with
Horizontal Slicing.
- Is there any explicit limit on how many items can be in progress at once? If no, the team
has no mechanism to stop starting and start finishing. Start with
Unbounded WIP.
2 - Feedback Takes Hours Instead of Minutes
The time from making a change to knowing whether it works is measured in hours, not minutes. Developers batch changes to avoid waiting.
What you are seeing
A developer makes a change and wants to know if it works. They push to CI and wait 45 minutes for
the pipeline. Or they open a PR and wait two days for a review. Or they deploy to staging and wait
for a manual QA pass that happens next week. By the time feedback arrives, the developer has moved
on to something else.
The slow feedback changes developer behavior. They batch multiple changes into a single commit to
avoid waiting multiple times. They skip local verification and push larger, less certain changes.
They start new work before the previous change is validated, juggling multiple incomplete tasks.
When feedback finally arrives and something is wrong, the developer must context-switch back. The
mental model from the original change has faded. Debugging takes longer because the developer is
working from memory rather than from active context. If multiple changes were batched, the
developer must untangle which one caused the failure.
Common causes
Inverted Test Pyramid
When most tests are slow E2E tests, the test feedback loop is measured in tens of minutes rather
than seconds. Unit tests provide feedback in seconds. E2E tests take minutes or hours. A team with
a fast unit test suite can verify a change in under a minute. A team whose testing relies on E2E
tests cannot get feedback faster than those tests can run.
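To make the gap concrete, compare a unit test with an end-to-end test that verifies the same rule. This is a minimal sketch assuming pytest and Playwright; the function, selectors, and URL are hypothetical, not taken from any particular project.

```python
def apply_discount(price_cents: int, percent: int) -> int:
    """Pure business logic: the kind of code unit tests cover in microseconds."""
    return price_cents - (price_cents * percent) // 100


def test_apply_discount_unit():
    # No browser, no network, no database: feedback in well under a second.
    assert apply_discount(10_000, 15) == 8_500


def test_apply_discount_e2e():
    # The same rule verified end to end: launch a browser, load a page, submit
    # a form, read the rendered total. Tens of seconds per test, multiplied by
    # hundreds of tests, is where the hours go.
    from playwright.sync_api import sync_playwright  # assumes Playwright is installed

    with sync_playwright() as p:
        page = p.chromium.launch().new_page()
        page.goto("https://staging.example.test/checkout")  # hypothetical URL
        page.fill("#discount-code", "SAVE15")
        page.click("#apply")
        assert page.inner_text("#total") == "$85.00"
```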
Read more: Inverted Test Pyramid
Integration Deferred
When the team does not integrate frequently (at least daily), the feedback loop for integration
problems is as long as the branch lifetime. A developer working on a two-week branch does not
discover integration conflicts until they merge. Daily integration catches conflicts within hours.
Continuous integration catches them within minutes.
Read more: Integration Deferred
Manual Testing Only
When there are no automated tests, the only feedback comes from manual verification. A developer
makes a change and must either test it manually themselves (slow) or wait for someone else to test
it (slower). Automated tests provide feedback in the pipeline without requiring human effort or
scheduling.
Read more: Manual Testing Only
Long-Lived Feature Branches
When pull requests wait days for review, the code review feedback loop dominates total cycle time.
A developer finishes a change in two hours, then waits two days for review. The review feedback
loop is 24 times longer than the development time. Long-lived branches produce large PRs, and
large PRs take longer to review. Fast feedback requires fast reviews, which requires small PRs,
which requires short-lived branches.
Read more: Long-Lived Feature Branches
Manual Regression Testing Gates
When every change must pass through a manual QA gate, the feedback loop includes human scheduling.
The QA team has a queue. The change waits in line. When the tester gets to it, days have passed.
Automated testing in the pipeline replaces this queue with instant feedback.
Read more: Manual Regression Testing Gates
How to narrow it down
- How fast can the developer verify a change locally? If the local test suite takes more than
a few minutes, the test strategy is the bottleneck. Start with
Inverted Test Pyramid.
- How frequently does the team integrate to main? If developers work on branches for days
before integrating, the integration feedback loop is the bottleneck. Start with
Integration Deferred.
- Are there automated tests at all? If the only feedback is manual testing, the lack of
automation is the bottleneck. Start with
Manual Testing Only.
- How long do PRs wait for review? If review turnaround is measured in days, the review
process is the bottleneck. Start with
Long-Lived Feature Branches.
- Is there a manual QA gate in the pipeline? If changes wait in a QA queue, the manual gate
is the bottleneck. Start with
Manual Regression Testing Gates.
3 - Merging Is Painful and Time-Consuming
Integration is a dreaded, multi-day event. Teams delay merging because it is painful, which makes the next merge even worse.
What you are seeing
A developer has been working on a feature branch for two weeks. They open a pull request and
discover dozens of conflicts across multiple files. Other developers have changed the same areas
of the codebase. Resolving the conflicts takes a full day. Some conflicts are straightforward
(two people edited adjacent lines), but others are semantic (two people changed the same
function’s behavior in different ways). The developer must understand both changes to merge
correctly.
After resolving conflicts, the tests fail. The merged code compiles but does not work because the
two changes are logically incompatible. The developer spends another half-day debugging the
interaction. By the time the branch is merged, the developer has spent more time integrating than
they spent building the feature.
The team knows merging is painful, so they delay it. The delay makes the next merge worse because
more code has diverged. The cycle repeats until someone declares a “merge day” and the team spends
an entire day resolving accumulated drift.
Common causes
Long-Lived Feature Branches
When branches live for weeks or months, they accumulate divergence from the main line. The longer
the branch lives, the more changes happen on main that the branch does not include. At merge time,
all of that divergence must be reconciled at once. A branch that is one day old has almost no
conflicts. A branch that is two weeks old may have dozens.
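Branch lifetime does not have to be a matter of impression; it can be estimated from the merge commits already in the repository's history. The following is a diagnostic sketch, assuming merges land on main and Python 3.9+ is available; it is not a tool referenced elsewhere in this guide.

```python
#!/usr/bin/env python3
"""Rough estimate of how long branches lived before merging, based on the
most recent merge commits reachable from main."""
import statistics
import subprocess


def git(*args: str) -> list[str]:
    result = subprocess.run(["git", *args], capture_output=True, text=True, check=True)
    return result.stdout.split()


# Merge commits in the first-parent history of main, i.e. merged branches.
merges = git("log", "--merges", "--first-parent", "--format=%H", "main")[:50]

lifetimes_days = []
for merge in merges:
    merged_at = int(git("show", "-s", "--format=%ct", merge)[0])
    # Author timestamps of the commits that existed only on the merged branch.
    branch_times = [int(t) for t in git("log", "--format=%at", f"{merge}^1..{merge}^2")]
    if branch_times:
        lifetimes_days.append((merged_at - min(branch_times)) / 86400)

if lifetimes_days:
    print(f"branches sampled: {len(lifetimes_days)}")
    print(f"median lifetime:  {statistics.median(lifetimes_days):.1f} days")
    print(f"longest lifetime: {max(lifetimes_days):.1f} days")
```

If the median comes back above a day or two, the merge pain described here is a predictable consequence of branch lifetime, not bad luck.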
Read more: Long-Lived Feature Branches
Integration Deferred
When the team does not practice continuous integration (integrating to main at least daily), each
developer’s work diverges independently. The build may be green on each branch but broken when
branches combine. CI means integrating continuously, not running a build server. Without frequent
integration, merge pain is inevitable.
Read more: Integration Deferred
Monolithic Work Items
When work items are too large to complete in a day or two, developers must stay on a branch for
the duration. A story that takes a week forces a week-long branch. Breaking work into smaller
increments that can be integrated daily eliminates the divergence window that causes painful
merges.
Read more: Monolithic Work Items
How to narrow it down
- How long do branches typically live before merging? If branches live longer than two days,
the branch lifetime is the primary driver of merge pain. Start with
Long-Lived Feature Branches.
- Does the team integrate to main at least once per day? If developers work in isolation for
days before integrating, they are not practicing continuous integration regardless of whether a
CI server exists. Start with
Integration Deferred.
- How large are the typical work items? If stories take a week or more, the work
decomposition forces long branches. Start with
Monolithic Work Items.
4 - Pull Requests Sit for Days Waiting for Review
Pull requests queue up and wait. Authors have moved on by the time feedback arrives.
What you are seeing
A developer opens a pull request and waits. Hours pass. A day passes. They ping someone in chat.
Eventually, comments arrive, but the author has moved on to something else and has to reload
context to respond. Another round of comments. Another wait. The PR finally merges two or three
days after it was opened.
The team has five or more open PRs at any time. Some are days old. Developers start new work
while they wait, which creates more PRs, which creates more review load, which slows reviews
further.
Common causes
Long-Lived Feature Branches
When developers work on branches for days, the resulting PRs are large. Large PRs take longer to
review because reviewers need more time to understand the scope of the change. A 300-line PR is
daunting. A 50-line PR can be reviewed in 10 minutes. The branch lifetime drives the PR size, which drives the
review delay.
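If you want the actual number rather than a guess, the size of merged changes can be read out of git history. This is a diagnostic sketch, assuming merges land on main; treat the output as a rough signal, not a precise metric.

```python
#!/usr/bin/env python3
"""Median lines changed per merged branch, from the most recent merges on main."""
import subprocess


def git(*args: str) -> str:
    return subprocess.run(["git", *args], capture_output=True, text=True, check=True).stdout


merges = git("log", "--merges", "--first-parent", "--format=%H", "main").split()[:50]

sizes = []
for merge in merges:
    total = 0
    # --numstat prints "<added>\t<deleted>\t<path>" for each file in the merged diff.
    for line in git("diff", "--numstat", f"{merge}^1", merge).splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added.isdigit() and deleted.isdigit():  # binary files show "-"
            total += int(added) + int(deleted)
    sizes.append(total)

if sizes:
    sizes.sort()
    print(f"merges sampled: {len(sizes)}")
    print(f"median lines changed per merge: {sizes[len(sizes) // 2]}")
```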
Read more: Long-Lived Feature Branches
Knowledge Silos
When only specific individuals can review certain areas of the codebase, those individuals become
bottlenecks. Their review queue grows while other team members who could review are not
considered qualified. The constraint is not review capacity in general but review capacity for
specific code areas concentrated in too few people.
Read more: Knowledge Silos
Push-Based Work Assignment
When work is assigned to individuals, reviewing someone else’s code feels like a distraction
from “my work.” Every developer has their own assigned stories to protect. Helping a teammate
finish their work by reviewing their PR competes with the developer’s own assignments. The
incentive structure deprioritizes collaboration.
Read more: Push-Based Work Assignment
How to narrow it down
- Are PRs larger than 200 lines on average? If yes, reviews are slow because each change
takes too long to read and understand. Start with
Long-Lived Feature Branches
and the work decomposition that feeds them.
- Are reviews waiting on specific individuals? If most PRs are assigned to or waiting on
one or two people, the team has a knowledge bottleneck. Start with
Knowledge Silos.
- Do developers treat review as lower priority than their own coding work? If yes, the
team’s norms do not treat review as a first-class activity. Start with
Push-Based Work Assignment and
establish a team working agreement that reviews happen before starting new work.
5 - Pipelines Take Too Long
CI/CD pipelines take 30 minutes or more. Developers stop waiting and lose the feedback loop.
What you are seeing
A developer pushes a commit and waits. Thirty minutes pass. An hour. The pipeline is still
running. The developer context-switches to another task, and by the time the pipeline finishes
(or fails), they have moved on mentally. If the build fails, they must reload context, figure out
what went wrong, fix it, push again, and wait another 30 minutes.
Developers stop running the full test suite locally because it takes too long. They push and hope.
Some developers batch multiple changes into a single push to avoid waiting multiple times, which
makes failures harder to diagnose. Others skip the pipeline entirely for small changes and merge
with only local verification.
The pipeline was supposed to provide fast feedback. Instead, it provides slow feedback that
developers work around rather than rely on.
Common causes
Inverted Test Pyramid
When most of the test suite consists of end-to-end or integration tests rather than unit tests,
the pipeline is dominated by slow, resource-intensive test execution. E2E tests launch browsers,
spin up services, and wait for network responses. A test suite with thousands of unit tests (that
run in seconds) and a small number of targeted E2E tests is fast. A suite with hundreds of E2E
tests and few unit tests is slow by construction.
Read more: Inverted Test Pyramid
Snowflake Environments
When pipeline environments are not standardized or reproducible, builds include extra time for
environment setup, dependency installation, and configuration. Caching is unreliable because the
environment state is unpredictable. A pipeline that spends 15 minutes downloading dependencies
because there is no reliable cache layer is slow for infrastructure reasons, not test reasons.
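A common way to make caching dependable is to key the dependency cache on a hash of the lockfile, which only pays off when the rest of the environment is reproducible. Here is a minimal sketch; the lockfile name is an assumption, substitute whatever pins your dependencies.

```python
"""Derive a dependency-cache key from the lockfile contents. When the same
lockfile always produces the same environment, the pipeline can restore a
cache under this key instead of reinstalling dependencies on every run."""
import hashlib
from pathlib import Path

lockfile = Path("requirements.txt")  # assumption: adjust to your lockfile
key = "deps-" + hashlib.sha256(lockfile.read_bytes()).hexdigest()[:16]
print(key)  # hand this to the CI cache step as its restore/save key
```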
Read more: Snowflake Environments
Tightly Coupled Monolith
When the codebase has no clear module boundaries, every change triggers a full rebuild and a full
test run. The pipeline cannot selectively build or test only the affected components because the
dependency graph is tangled. A change to one module might affect any other module, so the pipeline
must verify everything.
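For contrast, this is roughly what selective testing looks like when module boundaries do exist. The sketch assumes a hypothetical layout of src/&lt;module&gt;/ with matching tests/&lt;module&gt;/ directories and pytest; a tangled dependency graph is exactly what makes a mapping like this impossible.

```python
#!/usr/bin/env python3
"""Run only the test suites for modules touched by the current change.
Assumes a src/<module>/ and tests/<module>/ layout."""
import subprocess
import sys

# Files changed relative to the merge base with main.
changed = subprocess.run(
    ["git", "diff", "--name-only", "origin/main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

# Map each changed source file to its top-level module.
modules = sorted(
    {path.split("/")[1] for path in changed if path.startswith("src/") and path.count("/") >= 2}
)

if not modules:
    print("no source modules touched; nothing to test")
    sys.exit(0)

# Run only the affected modules' tests instead of the whole suite.
sys.exit(subprocess.call(["pytest", *(f"tests/{m}" for m in modules)]))
```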
Read more: Tightly Coupled Monolith
Manual Regression Testing Gates
When the pipeline includes a manual testing phase, the wall-clock time from push to green
includes human wait time. A pipeline that takes 10 minutes to build and test but then waits two
days for manual sign-off is not a 10-minute pipeline. It is a two-day pipeline with a 10-minute
automated prefix.
Read more: Manual Regression Testing Gates
How to narrow it down
- What percentage of pipeline time is spent running tests? If test execution dominates and
most tests are E2E or integration tests, the test strategy is the bottleneck. Start with
Inverted Test Pyramid.
- How much time is spent on environment setup and dependency installation? If the pipeline
spends significant time on infrastructure before any tests run, the build environment is the
bottleneck. Start with
Snowflake Environments.
- Can the pipeline build and test only the changed components? If every change triggers a
full rebuild, the architecture prevents selective testing. Start with
Tightly Coupled Monolith.
- Does the pipeline include any manual steps? If a human must approve or act before the
pipeline completes, the human is the bottleneck. Start with
Manual Regression Testing Gates.
6 - Work Items Take Days or Weeks to Complete
Stories regularly take more than a week from start to done. Developers go days without integrating.
What you are seeing
A developer picks up a work item on Monday. By Wednesday, they are still working on it. By
Friday, it is “almost done.” The following Monday, they are fixing edge cases. The item finally
moves to review mid-week as a 300-line pull request that the reviewer does not have time to look
at carefully.
Cycle time is measured in weeks, not days. The team commits to work at the start of the sprint
and scrambles at the end. Estimates are off by a factor of two because large items hide unknowns
that only surface mid-implementation.
Common causes
Horizontal Slicing
When work is split by technical layer rather than by user-visible behavior, each item spans an
entire layer and takes days to complete. “Build the database schema,” “build the API,” “build the
UI” are each multi-day items. Nothing is deployable until all layers are done. Vertical slicing
(cutting thin slices through all layers to deliver complete functionality) produces items that
can be finished in one to two days.
Read more: Horizontal Slicing
Monolithic Work Items
When the team takes requirements as they arrive without breaking them into smaller pieces, work
items are as large as the feature they describe. A ticket titled “Add user profile page” hides
a login form, avatar upload, email verification, notification preferences, and password reset.
Without a decomposition practice during refinement, items arrive at planning already too large
to flow.
Read more: Monolithic Work Items
Long-Lived Feature Branches
When developers work on branches for days or weeks, the branch and the work item are the same
size: large. The branching model reinforces large items because there is no integration pressure
to finish quickly. Trunk-based development creates natural pressure to keep items small enough to
integrate daily.
Read more: Long-Lived Feature Branches
How to narrow it down
- Are work items split by technical layer? If the board shows items like “backend for
feature X” and “frontend for feature X,” the decomposition is horizontal. Start with
Horizontal Slicing.
- Do items arrive at planning without being broken down? If items go from “product owner
describes a feature” to “developer starts coding” without a decomposition step, start with
Monolithic Work Items.
- Do developers work on branches for more than a day? If yes, the branching model allows
and encourages large items. Start with
Long-Lived Feature Branches.