These anti-patterns affect how code flows from a developer’s machine to the shared trunk. They create painful merges, delayed integration, and broken builds that prevent the steady stream of small, verified changes that continuous delivery requires.
Branching and Integration
1 - Long-Lived Feature Branches
Category: Branching & Integration | Quality Impact: Critical
What This Looks Like
A developer creates a branch to build a feature. The feature is bigger than expected. Days pass, then weeks. Other developers are doing the same thing on their own branches. Trunk moves forward while each branch diverges further from it. Nobody integrates until the feature is “done” - and by then, the branch is hundreds or thousands of lines different from where it started.
When the merge finally happens, it is an event. The developer sets aside half a day - sometimes more - to resolve conflicts, re-test, and fix the subtle breakages that come from combining weeks of divergent work. Other developers delay their merges to avoid the chaos. The team’s Slack channel lights up with “don’t merge right now, I’m resolving conflicts.” Every merge creates a window where trunk is unstable.
Common variations:
- The “feature branch” that is really a project. A branch named feature/new-checkout that lasts three months. Multiple developers commit to it. It has its own bug fixes and its own merge conflicts. It is a parallel fork of the product.
- The “I’ll merge when it’s ready” branch. The developer views the branch as a private workspace. Merging to trunk is the last step, not a daily practice. The branch falls further behind each day, but the developer does not notice until merge day.
- The per-sprint branch. Each sprint gets a branch. All sprint work goes there. The branch is merged at sprint end and a new one is created. Integration happens every two weeks instead of every day.
- The release isolation branch. A branch is created weeks before a release to “stabilize” it. Bug fixes must be applied to both the release branch and trunk. Developers maintain two streams of work simultaneously.
- The “too risky to merge” branch. The branch has diverged so far that nobody wants to attempt the merge. It sits for weeks while the team debates how to proceed. Sometimes it is abandoned entirely and the work is restarted.
The telltale sign: if merging a branch requires scheduling a block of time, notifying the team, or hoping nothing goes wrong - branches are living too long.
Why This Is a Problem
Long-lived feature branches appear safe. Each developer works in isolation, free from interference. But that isolation is precisely the problem. It delays integration, hides conflicts, and creates compounding risk that makes every aspect of delivery harder.
It reduces quality
When a branch lives for weeks, code review becomes a formidable task. The reviewer faces hundreds of changed lines across dozens of files. Meaningful review is nearly impossible at that scale - studies consistently show that review effectiveness drops sharply after 200-400 lines of change. Reviewers skim, approve, and hope for the best. Subtle bugs, design problems, and missed edge cases survive because nobody can hold the full changeset in their head.
The isolation also means developers make decisions in a vacuum. Two developers on separate branches may solve the same problem differently, introduce duplicate abstractions, or make contradictory assumptions about shared code. These conflicts are invisible until merge time, when they surface as bugs rather than design discussions.
With short-lived branches or trunk-based development, changes are small enough for genuine review. A 50-line change gets careful attention. Design disagreements surface within hours, not weeks. The team maintains a shared understanding of how the codebase is evolving because they see every change as it happens.
It increases rework
Long-lived branches guarantee merge conflicts. Two developers editing the same file on different branches will not discover the collision until one of them merges. The second developer must then reconcile their changes against an unfamiliar modification, often without understanding the intent behind it. This manual reconciliation is rework in its purest form - effort spent making code work together that would have been unnecessary if the developers had integrated daily.
The rework compounds. A developer who rebases a three-week branch against trunk may introduce bugs during conflict resolution. Those bugs require debugging. The debugging reveals an assumption that was valid three weeks ago but is no longer true because trunk has changed. Now the developer must rethink and partially rewrite their approach. What should have been a day of work becomes a week.
When developers integrate daily, conflicts are small - typically a few lines. They are resolved in minutes with full context because both changes are fresh. The cost of integration stays constant rather than growing exponentially with branch age.
It makes delivery timelines unpredictable
A two-day feature on a long-lived branch takes two days to build and an unknown number of days to merge. The merge might take an hour. It might take two days. It might surface a design conflict that requires reworking the feature. Nobody knows until they try. This makes it impossible to predict when work will actually be done.
The queuing effect makes it worse. When several branches need to merge, they form a queue. The first merge changes trunk, which means the second branch needs to rebase against the new trunk before merging. If the second merge is large, it changes trunk again, and the third branch must rebase. Each merge invalidates the work done to prepare the next one. Teams that “schedule” their merges are admitting that integration is so costly it needs coordination.
Project managers learn they cannot trust estimates. “The feature is code-complete” does not mean it is done - it means the merge has not started yet. Stakeholders lose confidence in the team’s ability to deliver on time because “done” and “deployed” are separated by an unpredictable gap.
With continuous integration, there is no merge queue. Each developer integrates small changes throughout the day. The time from “code-complete” to “integrated and tested” is minutes, not days. Delivery dates become predictable because the integration cost is near zero.
It hides risk until the worst possible moment
Long-lived branches create an illusion of progress. The team has five features “in development,” each on its own branch. The features appear to be independent and on track. But the risk is hidden: none of these features have been proven to work together. The branches may contain conflicting changes, incompatible assumptions, or integration bugs that only surface when combined.
All of that hidden risk materializes at merge time - the moment closest to the planned release date, when the team has the least time to deal with it. A merge conflict discovered three weeks before release is an inconvenience. A merge conflict discovered the day before release is a crisis. Long-lived branches systematically push risk discovery to the latest possible point.
Continuous integration surfaces risk immediately. If two changes conflict, the team discovers it within hours, while both changes are small and the authors still have full context. Risk is distributed evenly across the development cycle instead of concentrated at the end.
Impact on continuous delivery
Continuous delivery requires that trunk is always in a deployable state and that any commit can be released at any time. Long-lived feature branches make both impossible. Trunk cannot be deployable if large, poorly validated merges land periodically and destabilize it. You cannot release any commit if the latest commit is a 2,000-line merge that has not been fully tested.
Long-lived branches also prevent continuous integration - the practice of integrating every developer’s work into trunk at least once per day. Without continuous integration, there is no continuous delivery. The pipeline cannot provide fast feedback on changes that exist only on private branches. The team cannot practice deploying small changes because there are no small changes - only large merges separated by days or weeks of silence.
Every other CD practice - automated testing, pipeline automation, small batches, fast feedback - is undermined when the branching model prevents frequent integration.
How to Fix It
Step 1: Measure your current branch lifetimes (Week 1)
Before changing anything, understand the baseline. For every open branch:
- Record when it was created and when (or if) it was last merged.
- Calculate the age in days.
- Note the number of changed files and lines.
Most teams are shocked by their own numbers. A branch they think of as “a few days old” is often two or three weeks old. Making the data visible creates urgency.
Set a target: no branch older than one day. This will feel aggressive. That is the point.
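As a rough sketch of how this baseline could be gathered, the script below reads branch age and divergence straight from git metadata. It assumes the remote is named origin and trunk is main; adjust both to match your repository.

```python
# branch_age_report.py - a rough sketch for baselining branch age.
# Assumes the remote is "origin" and trunk is "main"; adjust to your repository.
import subprocess
from datetime import datetime, timezone

def git(*args: str) -> str:
    return subprocess.run(["git", *args], capture_output=True, text=True, check=True).stdout

def main() -> None:
    now = datetime.now(timezone.utc)
    refs = git("for-each-ref", "--format=%(refname:short)", "refs/remotes/origin").splitlines()
    for ref in refs:
        if ref in ("origin", "origin/HEAD", "origin/main"):
            continue
        # The oldest commit not yet on trunk approximates when the branch started diverging.
        commits = git("log", "--reverse", "--format=%cI", f"origin/main..{ref}").splitlines()
        if not commits:
            continue  # fully merged into trunk
        age_days = (now - datetime.fromisoformat(commits[0])).days
        # Size of the divergence: files and lines changed relative to the merge base.
        stat = git("diff", "--shortstat", f"origin/main...{ref}").strip() or "no diff"
        print(f"{ref}: {age_days} day(s) old; {stat}")

if __name__ == "__main__":
    main()
```

Running it once a week and pasting the output into the team channel is usually enough to make the baseline impossible to ignore.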
Step 2: Set a branch lifetime limit and make it visible (Week 2)
Agree as a team on a maximum branch lifetime. Start with two days if one day feels too aggressive. The important thing is to pick a number and enforce it.
Make the limit visible:
- Add a dashboard or report that shows branch age for every open branch.
- Flag any branch that exceeds the limit in the daily standup.
- If your CI tool supports it, add a check that warns when a branch exceeds 24 hours.
The limit creates a forcing function. Developers must either integrate quickly or break their work into smaller pieces. Both outcomes are desirable.
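If your CI tool can run a script on each branch build, the age check can be as small as the sketch below. It assumes trunk is available as origin/main in the checkout; the 24-hour limit and the decision to fail rather than merely warn are the team's call.

```python
# check_branch_age.py - warn (or fail) when the branch under build exceeds the limit.
# A sketch for a CI step; assumes origin/main is present in the checkout.
import subprocess
import sys
from datetime import datetime, timezone

MAX_AGE_HOURS = 24  # the team's agreed limit

def main() -> None:
    # The oldest commit not on trunk approximates the branch's age.
    commits = subprocess.run(
        ["git", "log", "--reverse", "--format=%cI", "origin/main..HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    if not commits:
        return  # nothing unmerged, nothing to flag
    age = datetime.now(timezone.utc) - datetime.fromisoformat(commits[0])
    hours = age.total_seconds() / 3600
    if hours > MAX_AGE_HOURS:
        print(f"WARNING: branch is {hours:.0f}h old (limit {MAX_AGE_HOURS}h). "
              "Integrate or split the work.")
        sys.exit(1)  # make the check blocking, or drop this line to keep it a warning

if __name__ == "__main__":
    main()
```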
Step 3: Break large features into small, integrable changes (Weeks 2-3)
The most common objection is “my feature is too big to merge in a day.” This is true when the feature is designed as a monolithic unit. The fix is decomposition:
- Branch by abstraction. Introduce a new code path alongside the old one. Merge the new code path in small increments. Switch over when ready.
- Feature flags. Hide incomplete work behind a toggle so it can be merged to trunk without being visible to users.
- Keystone interface pattern. Build all the back-end work first, merge it incrementally, and add the UI entry point last. The feature is invisible until the keystone is placed.
- Vertical slices. Deliver the feature as a series of thin, user-visible increments instead of building all layers at once.
Each technique lets developers merge daily without exposing incomplete functionality. The feature grows incrementally on trunk rather than in isolation on a branch.
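As an illustration of the feature-flag technique, here is a minimal sketch. Reading the flag from an environment variable and every name in it are placeholders; most teams use a configuration service or a flag-management tool instead.

```python
# A minimal feature-flag sketch: the new checkout flow can be merged to trunk
# long before it is finished, because it is unreachable unless the flag is on.
# The environment-variable flag source and all names here are illustrative only.
import os

def flag_enabled(name: str) -> bool:
    return os.environ.get(f"FLAG_{name.upper()}", "false").lower() == "true"

def checkout(cart):
    if flag_enabled("new_checkout"):
        return new_checkout(cart)   # incomplete code, merged daily, invisible to users
    return legacy_checkout(cart)    # existing behaviour, unchanged

def new_checkout(cart):
    raise NotImplementedError("still being built, one small merge at a time")

def legacy_checkout(cart):
    ...  # existing implementation
```

The same shape works for branch by abstraction: the conditional becomes the abstraction seam, and the old path is deleted once the new one is complete.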
Step 4: Adopt short-lived branches with daily integration (Weeks 3-4)
Change the team’s workflow:
- Create a branch from trunk.
- Make a small, focused change.
- Get a quick review (the change is small, so review takes minutes).
- Merge to trunk. Delete the branch.
- Repeat.
Each branch lives for hours, not days. If a branch cannot be merged by end of day, it is too large. The developer should either merge what they have (using one of the decomposition techniques above) or discard the branch and start smaller tomorrow.
Pair this with the team’s code review practice. Small changes enable fast reviews, and fast reviews enable short-lived branches. The two practices reinforce each other.
Step 5: Address the objections (Weeks 3-4)
| Objection | Response |
|---|---|
| “My feature takes three weeks - I can’t merge in a day” | The feature takes three weeks. The branch does not have to. Use branch by abstraction, feature flags, or vertical slicing to merge daily while the feature grows incrementally on trunk. |
| “Merging incomplete code to trunk is dangerous” | Incomplete code behind a feature flag or without a UI entry point is not dangerous - it is invisible. The danger is a three-week branch that lands as a single untested merge. |
| “I need my branch to keep my work separate from other changes” | That separation is the problem. You want to discover conflicts early, when they are small and cheap to fix. A branch that hides conflicts for three weeks is not protecting you - it is accumulating risk. |
| “We tried short-lived branches and it was chaos” | Short-lived branches require supporting practices: feature flags, good decomposition, fast CI, and a culture of small changes. Without those supports, it will feel chaotic. The fix is to build the supports, not to retreat to long-lived branches. |
| “Code review takes too long for daily merges” | Small changes take minutes to review, not hours. If reviews are slow, that is a review process problem, not a branching problem. See PR Review Bottlenecks. |
Step 6: Continuously tighten the limit (Week 5+)
Once the team is comfortable with two-day branches, reduce the limit to one day. Then push toward integrating multiple times per day. Each reduction surfaces new problems - features that are hard to decompose, tests that are slow, reviews that are bottlenecked - and each problem is worth solving because it blocks the flow of work.
The goal is continuous integration: every developer integrates to trunk at least once per day. At that point, “branches” are just short-lived workspaces that exist for hours, and merging is a non-event.
Measuring Progress
| Metric | What to look for |
|---|---|
| Average branch lifetime | Should decrease to under one day |
| Maximum branch lifetime | No branch should exceed two days |
| Integration frequency | Should increase toward at least daily per developer |
| Merge conflict frequency | Should decrease as branches get shorter |
| Merge duration | Should decrease from hours to minutes |
| Development cycle time | Should decrease as integration overhead drops |
| Lines changed per merge | Should decrease as changes get smaller |
Related Content
- Trunk-Based Development - The branching model that eliminates long-lived branches
- Code Review - Small changes enable fast reviews, which enable short-lived branches
- Small Batches - The principle behind breaking large features into daily integrations
- Work Decomposition - Techniques for breaking features into small, mergeable increments
- PR Review Bottlenecks - Slow reviews are a common reason branches live too long
2 - No Continuous Integration
Category: Branching & Integration | Quality Impact: Critical
What This Looks Like
The team has a build server. It runs after every push. There is a dashboard somewhere that shows build status. But the build has been red for three weeks and nobody has mentioned it. Developers push code, glance at the result if they remember, and move on. When someone finally investigates, the failure is in a test that broke weeks ago and nobody can remember which commit caused it.
The word “continuous” has lost its meaning. Developers do not integrate their work into trunk daily - they work on branches for days or weeks and merge when the feature feels done. The build server runs, but nobody treats a red build as something that must be fixed immediately. There is no shared agreement that trunk should always be green. “CI” is a tool in the infrastructure, not a practice the team follows.
Common variations:
- The build server with no standards. A CI server runs on every push, but there are no rules about what happens when it fails. Some developers fix their failures. Others do not. The build flickers between green and red all day, and nobody trusts the signal.
- The nightly build. The build runs once per day, overnight. Developers find out the next morning whether yesterday’s work broke something. By then they have moved on to new work and lost context on what they changed.
- The “CI” that is just compilation. The build server compiles the code and nothing else. No tests run. No static analysis. The build is green as long as the code compiles, which tells the team almost nothing about whether the software works.
- The manually triggered build. The build server exists, but it does not run on push. After pushing code, the developer must log into the CI server and manually start the build and tests. When developers are busy or forget, their changes sit untested. When multiple pushes happen between triggers, a failure could belong to any of them. The feedback loop depends entirely on developer discipline rather than automation.
- The branch-only build. CI runs on feature branches but not on trunk. Each branch builds in isolation, but nobody knows whether the branches work together until merge day. Trunk is not continuously validated.
- The ignored dashboard. The CI dashboard exists but is not displayed anywhere the team can see it. Nobody checks it unless they are personally waiting for a result. Failures accumulate silently.
The telltale sign: if you can ask “how long has the build been red?” and nobody knows the answer, continuous integration is not happening.
Why This Is a Problem
Continuous integration is not a tool - it is a practice. The practice requires that every developer integrates to a shared trunk at least once per day and that the team treats a broken build as the highest-priority problem. Without the practice, the build server is just infrastructure generating notifications that nobody reads.
It reduces quality
When the build is allowed to stay red, the team loses its only automated signal that something is wrong. A passing build is supposed to mean “the software works as tested.” A failing build is supposed to mean “stop and fix this before doing anything else.” When failures are ignored, that signal becomes meaningless. Developers learn that a red build is background noise, not an alarm.
Once the build signal is untrusted, defects accumulate. A developer introduces a bug on Monday. The build fails, but it was already red from an unrelated failure, so nobody notices. Another developer introduces a different bug on Tuesday. By Friday, trunk has multiple interacting defects and nobody knows when they were introduced or by whom. Debugging becomes archaeology.
When the team practices continuous integration, a red build is rare and immediately actionable. The developer who broke it knows exactly which change caused the failure because they committed minutes ago. The fix is fast because the context is fresh. Defects are caught individually, not in tangled clusters.
It increases rework
Without continuous integration, developers work in isolation for days or weeks. Each developer assumes their code works because it passes on their machine or their branch. But they are building on assumptions about shared code that may already be outdated. When they finally integrate, they discover that someone else changed an API they depend on, renamed a class they import, or modified behavior they rely on.
The rework cascade is predictable. Developer A changes a shared interface on Monday. Developer B builds three days of work on the old interface. On Thursday, developer B tries to integrate and discovers the conflict. Now they must rewrite three days of code to match the new interface. If they had integrated on Monday, the conflict would have been a five-minute fix.
Teams that integrate continuously discover conflicts within hours, not days. The rework is measured in minutes because the conflicting changes are small and the developers still have full context on both sides. The total cost of integration stays low and constant instead of spiking unpredictably.
It makes delivery timelines unpredictable
A team without continuous integration cannot answer the question “is the software releasable right now?” Trunk may or may not compile. Tests may or may not pass. The last successful build may have been a week ago. Between then and now, dozens of changes have landed without anyone verifying that they work together.
This creates a stabilization period before every release. The team stops feature work, fixes the build, runs the test suite, and triages failures. This stabilization takes an unpredictable amount of time - sometimes a day, sometimes a week - because nobody knows how many problems have accumulated since the last known-good state.
With continuous integration, trunk is always in a known state. If the build is green, the team can release. If the build is red, the team knows exactly which commit broke it and how long ago. There is no stabilization period because the code is continuously stabilized. Release readiness is a fact that can be checked at any moment, not a state that must be achieved through a dedicated effort.
It masks the true cost of integration problems
When the build is permanently broken or rarely checked, the team cannot see the patterns that would tell them where their process is failing. Is the build slow? Nobody notices because nobody waits for it. Are certain tests flaky? Nobody notices because failures are expected. Do certain parts of the codebase cause more breakage than others? Nobody notices because nobody correlates failures to changes.
These hidden problems compound. The build gets slower because nobody is motivated to speed it up. Flaky tests multiply because nobody quarantines them. Brittle areas of the codebase stay brittle because the feedback that would highlight them is lost in the noise.
When the team practices CI and treats a red build as an emergency, every friction point becomes visible. A slow build annoys the whole team daily, creating pressure to optimize it. A flaky test blocks everyone, creating pressure to fix or remove it. The practice surfaces the problems. Without the practice, the problems are invisible and grow unchecked.
Impact on continuous delivery
Continuous integration is the foundation that every other CD practice is built on. Without it, the pipeline cannot give fast, reliable feedback on every change. Automated testing is pointless if nobody acts on the results. Deployment automation is pointless if the artifact being deployed has not been validated. Small batches are pointless if the batches are never verified to work together.
A team that does not practice CI cannot practice CD. The two are not independent capabilities that can be adopted in any order. CI is the prerequisite. Every hour that the build stays red is an hour during which the team has no automated confidence that the software works. Continuous delivery requires that confidence to exist at all times.
How to Fix It
Step 1: Fix the build and agree it stays green (Week 1)
Before anything else, get trunk to green. This is the team’s first and most important commitment.
- Assign the broken build as the highest-priority work item. Stop feature work if necessary.
- Triage every failure: fix it, quarantine it to a non-blocking suite, or delete the test if it provides no value.
- Once the build is green, make the team agreement explicit: a red build is the team’s top priority. Whoever broke it fixes it. If they cannot fix it within 15 minutes, they revert their change and try again with a smaller commit.
Write this agreement down. Put it in the team’s working agreements document. If you do not have one, start one now. The agreement is simple: we do not commit on top of a red build, and we do not leave a red build for someone else to fix.
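The triage step above mentions quarantining flaky tests into a non-blocking suite. One common way to keep the blocking suite deterministic is a test marker; the sketch below assumes pytest, and the marker name and job split are examples, not prescriptions.

```python
# Sketch: quarantine known-flaky tests behind a marker so the blocking suite
# stays deterministic. Assumes pytest; the marker name is the team's choice.
#
# The marker would be registered in pytest.ini (or pyproject.toml):
#   [pytest]
#   markers =
#       flaky_quarantine: known-flaky tests excluded from the blocking suite

import pytest

@pytest.mark.flaky_quarantine
def test_search_reindex_timing():
    # Fails intermittently under load; quarantined until the race is fixed.
    ...

def test_search_returns_results():
    # Deterministic test: stays in the blocking suite.
    ...

# Blocking CI job:      pytest -m "not flaky_quarantine"
# Non-blocking CI job:  pytest -m "flaky_quarantine"
```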
Step 2: Make the build visible (Week 1)
The build status must be impossible to ignore:
- Display the build dashboard on a large monitor visible to the whole team.
- Configure notifications so that a broken build alerts the team immediately - in the team chat channel, not in individual email inboxes.
- If the build breaks, the notification should identify the commit and the committer.
Visibility creates accountability. When the whole team can see that the build broke at 2:15 PM and who broke it, social pressure keeps people attentive. When failures are buried in email notifications, they are easily ignored.
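As a sketch of what such a notification could look like, the script below posts the commit and committer to a chat webhook when the build breaks. The environment variable names and the webhook mechanism are assumptions; substitute whatever your CI tool and chat system actually provide.

```python
# notify_build_break.py - post a build failure to the team channel with the
# commit and the committer. The environment variable names and webhook URL are
# illustrative; use the variables your CI tool and chat system actually expose.
import json
import os
import urllib.request

def main() -> None:
    commit = os.environ.get("CI_COMMIT_SHA", "unknown")[:10]
    author = os.environ.get("CI_COMMIT_AUTHOR", "unknown")
    message = (f"Build broken on trunk by {author} (commit {commit}). "
               "Fix or revert within 15 minutes.")
    req = urllib.request.Request(
        os.environ["TEAM_CHAT_WEBHOOK_URL"],  # incoming-webhook URL for the team channel
        data=json.dumps({"text": message}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    main()
```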
Step 3: Require integration at least once per day (Week 2)
The “continuous” in continuous integration means at least daily, and ideally multiple times per day. Set the expectation:
- Every developer integrates their work to trunk at least once per day.
- If a developer has been working on a branch for more than a day without integrating, that is a problem to discuss at standup.
- Track integration frequency per developer per day. Make it visible alongside the build dashboard.
This will expose problems. Some developers will say their work is not ready to integrate. That is a decomposition problem - the work is too large. Some will say they cannot integrate because the build is too slow. That is a pipeline problem. Each problem is worth solving. See Long-Lived Feature Branches for techniques to break large work into daily integrations.
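Integration frequency can be read directly from trunk's history. The sketch below counts non-merge commits per author per day on origin/main over the last two weeks; if your team squash-merges, each squashed commit counts as one integration.

```python
# integration_frequency.py - count integrations to trunk per developer per day,
# using non-merge commits that landed on origin/main. A rough sketch; adjust the
# branch name and window to match your repository.
import subprocess
from collections import Counter

def main() -> None:
    log = subprocess.run(
        ["git", "log", "--no-merges", "--since=14.days", "--format=%as %an", "origin/main"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    counts = Counter(tuple(line.split(" ", 1)) for line in log)  # (date, author) -> commits
    for (date, author), n in sorted(counts.items()):
        print(f"{date}  {author}: {n} integration(s)")

if __name__ == "__main__":
    main()
```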
Step 4: Make the build fast enough to provide useful feedback (Weeks 2-3)
A build that takes 45 minutes is a build that developers will not wait for. Target under 10 minutes for the primary feedback loop:
- Identify the slowest stages and optimize or parallelize them.
- Move slow integration tests to a secondary pipeline that runs after the fast suite passes.
- Add build caching so that unchanged dependencies are not recompiled on every run.
- Run tests in parallel if they are not already.
The goal is a fast feedback loop: the developer pushes, waits a few minutes, and knows whether their change works with everything else. If they have to wait 30 minutes, they will context-switch, and the feedback loop breaks.
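If the build is driven by a script, finding the slowest stages can start with a simple timer around each one. The stage names and commands below are placeholders for your real build steps.

```python
# build_stage_timing.py - time each pipeline stage to see where the minutes go.
# The stage names and commands are placeholders; wire in your real build steps.
import subprocess
import time

STAGES = [
    ("compile", ["./gradlew", "assemble"]),       # placeholder command
    ("unit tests", ["./gradlew", "test"]),        # placeholder command
    ("static analysis", ["./gradlew", "check"]),  # placeholder command
]

def main() -> None:
    for name, cmd in STAGES:
        start = time.monotonic()
        subprocess.run(cmd, check=True)
        print(f"{name}: {time.monotonic() - start:.1f}s")

if __name__ == "__main__":
    main()
```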
Step 5: Address the objections (Weeks 3-4)
| Objection | Response |
|---|---|
| “The build is too slow to fix every red immediately” | Then the build is too slow, and that is a separate problem to solve. A slow build is not a reason to ignore failures - it is a reason to invest in making the build faster. |
| “Some tests are flaky - we can’t treat every failure as real” | Quarantine flaky tests into a non-blocking suite. The blocking suite must be deterministic. If a test in the blocking suite fails, it is real until proven otherwise. |
| “We can’t integrate daily - our features take weeks” | The features take weeks. The integrations do not have to. Use branch by abstraction, feature flags, or vertical slicing to integrate partial work daily. |
| “Fixing someone else’s broken build is not my job” | It is the whole team’s job. A red build blocks everyone. If the person who broke it is unavailable, someone else should revert or fix it. The team owns the build, not the individual. |
| “We have CI - the build server runs on every push” | A build server is not CI. CI is the practice of integrating frequently and keeping the build green. If the build has been red for a week, you have a build server, not continuous integration. |
Step 6: Build the habit (Week 4+)
Continuous integration is a daily discipline, not a one-time setup. Reinforce the habit:
- Review integration frequency in retrospectives. If it is dropping, ask why.
- Celebrate streaks of consecutive green builds. Make it a point of team pride.
- When a developer reverts a broken commit quickly, recognize it as the right behavior - not as a failure.
- Periodically audit the build: is it still fast? Are new flaky tests creeping in? Is the test coverage meaningful?
The goal is a team culture where a red build feels wrong - like an alarm that demands immediate attention. When that instinct is in place, CI is no longer a process being followed. It is how the team works.
Measuring Progress
| Metric | What to look for |
|---|---|
| Build pass rate | Percentage of builds that pass on first run - should be above 95% |
| Time to fix a broken build | Should be under 15 minutes, with revert as the fallback |
| Integration frequency | At least one integration per developer per day |
| Build duration | Should be under 10 minutes for the primary feedback loop |
| Longest period with a red build | Should be measured in minutes, not hours or days |
| Development cycle time | Should decrease as integration overhead drops and stabilization periods disappear |
Related Content
- Trunk-Based Development - CI requires integrating to a shared trunk, not just building branches
- Build Automation - The pipeline infrastructure that CI depends on
- Testing Fundamentals - Fast, reliable tests are essential for a CI build that teams trust
- Long-Lived Feature Branches - Long branches prevent daily integration and are both a cause and symptom of missing CI
- Working Agreements - The team agreement to keep the build green must be explicit