Tightly Coupled Monolith
10 minute read
Category: Architecture | Quality Impact: High
What This Looks Like
A developer changes a function in the order processing module. The test suite fails in the reporting module, the notification service, and a batch job that nobody knew existed. The developer did not touch any of those systems. They changed one function in one file, and three unrelated features broke.
The team has learned to be cautious. Before making any change, developers trace every caller, every import, and every database query that might be affected. A change that should take an hour takes a day because most of the time is spent figuring out what might break. Even after that analysis, surprises are common.
Common variations:
- The web of shared state. Multiple modules read and write the same database tables directly. A schema change in one module breaks queries in five others. Nobody owns the tables because everybody uses them.
- The god object. A single class or module that everything depends on. It handles authentication, logging, database access, and business logic. Changing it is terrifying because the entire application runs through it.
- Transitive dependency chains. Module A depends on Module B, which depends on Module C. A change to Module C breaks Module A through a chain that nobody can trace without a debugger. The dependency graph is a tangle, not a tree.
- Shared libraries with hidden contracts. Internal libraries used by multiple modules with no versioning or API stability guarantees. Updating the library for one consumer breaks another. Teams stop updating shared libraries because the risk is too high.
- Everything deploys together. The application is a single deployable unit. Even if modules are logically separated in the source code, they compile and ship as one artifact. A one-line change to the login page requires deploying the entire system.
The telltale sign: developers regularly say “I don’t know what this change will affect” and mean it. Changes routinely break features that seem unrelated.
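To make the shared-state variation above concrete, here is a minimal TypeScript sketch with hypothetical module, table, and column names (assuming a node-postgres-style client). The two modules never import each other, yet they are coupled through the table they both query.

```typescript
// Two modules coupled through a shared table. The module, table, and column
// names are hypothetical; `Db` stands in for a generic SQL client.
type Db = { query(text: string, params?: unknown[]): Promise<{ rows: any[] }> };

// orders module: writes the table directly.
export async function recordOrder(db: Db, id: string, customerId: string, totalCents: number) {
  await db.query(
    "INSERT INTO orders (id, customer_id, total_cents) VALUES ($1, $2, $3)",
    [id, customerId, totalCents],
  );
}

// reporting module: reads the same table directly.
export async function revenueByCustomer(db: Db) {
  const result = await db.query(
    "SELECT customer_id, SUM(total_cents) AS revenue FROM orders GROUP BY customer_id",
  );
  return result.rows;
}

// Renaming total_cents as part of an orders change silently breaks the
// reporting query, and nothing in the orders module's code points here.
```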
Why This Is a Problem
Tight coupling turns every change into a gamble. The cost of a change is not proportional to its size but to the number of hidden dependencies it touches. Small changes carry large risk, which slows everything down.
It reduces quality
When every change can break anything, developers cannot reason about the impact of their work. A well-bounded module lets a developer think locally: “I changed the discount calculation, so discount-related behavior might be affected.” A tightly coupled system offers no such guarantee. The discount calculation might share a database table with the shipping module, which triggers a notification workflow, which updates a dashboard.
This unpredictable blast radius makes code review less effective. Reviewers can verify that the code in the diff is correct, but they cannot verify that it is safe. The breakage happens in code that is not in the diff - code that neither the author nor the reviewer thought to check.
In a system with clear module boundaries, the blast radius of a change is bounded by the module’s interface. If the interface does not change, nothing outside the module can break. Developers and reviewers can focus on the module itself and trust the boundary.
It increases rework
Tight coupling causes rework in two ways. First, unexpected breakage from seemingly safe changes sends developers back to fix things they did not intend to touch. A one-line change that breaks the notification system means the developer now needs to understand and fix the notification system before their original change can ship.
Second, developers working in different parts of the codebase step on each other. Two developers changing different modules unknowingly modify the same shared state. Both changes work individually but conflict when merged. The merge succeeds at the code level but fails at runtime because the shared state cannot satisfy both changes simultaneously. These bugs are expensive to find because the failure only manifests when both changes are present.
Systems with clear boundaries minimize this interference. Each module owns its data and exposes it through explicit interfaces. Two developers working in different modules cannot create a hidden conflict because there is no shared mutable state to conflict on.
It makes delivery timelines unpredictable
In a coupled system, the time to deliver a change includes the time to understand the impact, make the change, fix the unexpected breakage, and retest everything that might be affected. The first and third steps are unpredictable because no one knows the full dependency graph.
A developer estimates a task at two days. On day one, the change is made and tests are passing. On day two, a failing test in another module reveals a hidden dependency. Fixing the dependency takes two more days. The task that was estimated at two days takes four. This happens often enough that the team stops trusting estimates, and stakeholders stop trusting timelines.
The testing cost is also unpredictable. In a modular system, changing Module A means running Module A’s tests. In a coupled system, changing anything might mean running everything. If the full test suite takes 30 minutes, every small change requires a 30-minute feedback cycle because there is no way to scope the impact.
It prevents independent team ownership
When the codebase is a tangle of dependencies, no team can own a module cleanly. Every change in one team’s area risks breaking another team’s area. Teams develop informal coordination rituals: “Let us know before you change the order table.” “Don’t touch the shared utils module without talking to Platform first.”
These coordination costs scale quadratically with the number of teams: n teams need n(n-1)/2 communication channels. Two teams need one channel. Five teams need ten. Ten teams need forty-five. The result is that adding developers makes the system slower to change, not faster.
In a system with well-defined module boundaries, each team owns their modules and their data. They deploy independently. They do not need to coordinate on internal changes because the boundaries prevent cross-module breakage. Communication focuses on interface changes, which are infrequent and explicit.
Impact on continuous delivery
Continuous delivery requires that any change can flow from commit to production safely and quickly. Tight coupling breaks this in multiple ways:
- Blast radius prevents small, safe changes. If a one-line change can break unrelated features, no change is small from a risk perspective. The team compensates by batching changes and testing extensively, which is the opposite of continuous.
- Testing scope is unbounded. Without module boundaries, there is no way to scope testing to the changed area. Every change requires running the full suite, which slows the pipeline and reduces deployment frequency.
- Independent deployment is impossible. If everything must deploy together, deployment coordination is required. Teams queue up behind each other. Deployment frequency is limited by the slowest team.
- Rollback is risky. Rolling back one change might break something else if other changes were deployed simultaneously. The tangle works in both directions.
A team with a tightly coupled monolith can still get to CD, but they must invest in decoupling first. Without boundaries, the feedback loops are too slow and the blast radius is too large for continuous deployment to be safe.
How to Fix It
Decoupling a monolith is a long-term effort. The goal is not to rewrite the system or extract microservices on day one. The goal is to create boundaries that limit blast radius and enable independent change. Start where the pain is greatest.
Step 1: Map the dependency hotspots (Week 1)
Identify the areas of the codebase where coupling causes the most pain:
- Use version control history to find the files that change together most frequently. Files that always change as a group are likely coupled.
- List the modules or components that are most often involved in unexpected test failures after changes to other areas.
- Identify shared database tables - tables that are read or written by more than one module.
- Draw the dependency graph. Tools like dependency-cruiser (JavaScript), jdepend (Java), or similar can automate this. Look for cycles and high fan-in nodes.
Rank the hotspots by pain: which coupling causes the most unexpected breakage, the most coordination overhead, or the most test failures?
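The co-change analysis from the first bullet does not need special tooling. A minimal sketch, assuming Node.js and git are available and the script runs from the repository root:

```typescript
// Count how often pairs of files appear in the same commit over recent history.
import { execSync } from "node:child_process";

// __COMMIT__ acts as a sentinel so the output splits cleanly per commit.
const log = execSync(
  'git log --since="6 months ago" --name-only --pretty=format:__COMMIT__',
  { encoding: "utf8", maxBuffer: 64 * 1024 * 1024 },
);

const pairCounts = new Map<string, number>();
for (const block of log.split("__COMMIT__")) {
  // Unique, sorted file list for this commit (merge commits may list none).
  // Very large commits (formatting sweeps, mass renames) are worth filtering out.
  const files = [...new Set(block.split("\n").map((f) => f.trim()).filter(Boolean))].sort();
  for (let i = 0; i < files.length; i++) {
    for (let j = i + 1; j < files.length; j++) {
      const key = `${files[i]} <-> ${files[j]}`;
      pairCounts.set(key, (pairCounts.get(key) ?? 0) + 1);
    }
  }
}

// Pairs that change together most often are the first coupling suspects.
const top = [...pairCounts.entries()].sort((a, b) => b[1] - a[1]).slice(0, 20);
for (const [pair, count] of top) console.log(`${count}\t${pair}`);
```

Pairs that span what should be separate modules are the hotspots worth ranking.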
Step 2: Define module boundaries on paper (Week 2)
Before changing any code, define where boundaries should be:
- Group related functionality into candidate modules based on business domain, not technical layer. “Orders,” “Payments,” and “Notifications” are better boundaries than “Database,” “API,” and “UI.”
- For each boundary, define what the public interface would be: what data crosses the boundary and in what format?
- Identify shared state that would need to be split or accessed through interfaces.
This is a design exercise, not an implementation. The output is a diagram showing target module boundaries with their interfaces.
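One lightweight way to capture that output for a single candidate boundary is to write the would-be public interface down as code, even though nothing implements it yet. The names below are hypothetical:

```typescript
// Candidate public interface for an "Orders" boundary (design artifact only).
// It states what data crosses the boundary and in what shape - the orders
// tables themselves stay private to the module.

export interface OrderSummary {
  orderId: string;
  customerId: string;
  status: "pending" | "paid" | "shipped" | "cancelled";
  totalCents: number;
}

export interface OrdersApi {
  getOrder(orderId: string): Promise<OrderSummary | null>;
  listOrdersForCustomer(customerId: string): Promise<OrderSummary[]>;
}

// An event other modules could consume instead of reading orders tables directly.
export interface OrderPaidEvent {
  type: "order.paid";
  orderId: string;
  paidAtIso: string;
}
```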
Step 3: Enforce one boundary (Weeks 3-6)
Pick the boundary with the best ratio of pain reduced to effort required and enforce it in code:
- Create an explicit interface (API, function contract, or event) for cross-module communication. All external callers must use the interface.
- Move shared database access behind the interface. If the payments module needs order data, it calls the orders module’s interface rather than querying the orders table directly.
- Add a build-time or lint-time check that enforces the boundary. Fail the build if code outside the module imports internal code directly.
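The check in the last bullet can start as a small script in the build. This is a sketch only, assuming a hypothetical src/orders/internal layout; tools like dependency-cruiser or ESLint's no-restricted-imports rule do the same job more robustly.

```typescript
// Fail the build if any file outside src/orders imports from src/orders/internal.
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

// Recursively collect file paths under a directory.
function walk(dir: string): string[] {
  return readdirSync(dir, { withFileTypes: true }).flatMap((entry) =>
    entry.isDirectory() ? walk(join(dir, entry.name)) : [join(dir, entry.name)],
  );
}

const ordersDir = join("src", "orders");
const violations = walk("src")
  .filter((file) => /\.tsx?$/.test(file) && !file.startsWith(ordersDir))
  .filter((file) => /from\s+["'][^"']*orders\/internal/.test(readFileSync(file, "utf8")));

if (violations.length > 0) {
  console.error("Boundary violation: imports of orders internals from outside the module:");
  for (const file of violations) console.error(`  ${file}`);
  process.exit(1);
}
```

Run it in the pipeline before the tests so a new violation fails fast and cannot be merged.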
This is the hardest step because it requires changing existing call sites. Use the Strangler Fig approach: create the new interface alongside the old coupling, migrate callers one at a time, and remove the old path when all callers have migrated.
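A sketch of what that migration looks like at a single call site, using hypothetical payments code and the interface shape from step 2:

```typescript
// Hypothetical shapes for the two paths: the shared database client the old
// code uses, and the orders public interface.
type Db = { query(text: string, params?: unknown[]): Promise<{ rows: any[] }> };
type OrdersApi = { getOrder(orderId: string): Promise<{ totalCents: number } | null> };

// Before: payments reads the orders table directly and is coupled to its schema.
export async function getOrderTotalLegacy(db: Db, orderId: string): Promise<number> {
  const result = await db.query("SELECT total_cents FROM orders WHERE id = $1", [orderId]);
  return result.rows[0].total_cents;
}

// After: payments asks the orders module through its interface; the orders
// schema can now change without touching payments.
export async function getOrderTotal(orders: OrdersApi, orderId: string): Promise<number> {
  const order = await orders.getOrder(orderId);
  if (order === null) throw new Error(`Unknown order ${orderId}`);
  return order.totalCents;
}

// Strangler Fig in practice: both paths coexist, call sites move to
// getOrderTotal one at a time, and getOrderTotalLegacy is deleted once the
// last caller has migrated.
```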
Step 4: Scope testing to module boundaries (Week 4+)
Once a boundary exists, use it to scope testing:
- Write tests for the module’s public interface (contract tests and functional tests).
- Changes within the module only need to run the module’s own tests plus the interface tests. If the interface tests pass, nothing outside the module can break.
- Reserve the full integration suite for deployment validation, not developer feedback.
This immediately reduces pipeline duration for changes inside the bounded module. Developers get faster feedback. The pipeline is no longer “run everything for every change.”
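A sketch of an interface-level test using Node's built-in test runner; the OrdersApi shape and the stand-in implementation are hypothetical, and a real suite would import the module's actual public entry point instead.

```typescript
// orders.contract.test.ts - exercises only the public interface, so it keeps
// passing (or fails meaningfully) no matter how the module's internals change.
import { test } from "node:test";
import assert from "node:assert/strict";

type OrderSummary = { orderId: string; status: string; totalCents: number };
type OrdersApi = { getOrder(orderId: string): Promise<OrderSummary | null> };

// Stand-in implementation for the sketch.
const ordersApi: OrdersApi = {
  async getOrder(orderId) {
    return orderId === "o-1"
      ? { orderId: "o-1", status: "paid", totalCents: 4200 }
      : null;
  },
};

test("getOrder returns a summary for a known order", async () => {
  const order = await ordersApi.getOrder("o-1");
  assert.ok(order);
  assert.equal(order.status, "paid");
  assert.equal(order.totalCents, 4200);
});

test("getOrder returns null for an unknown order", async () => {
  assert.equal(await ordersApi.getOrder("missing"), null);
});
```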
Step 5: Repeat for the next boundary (Ongoing)
Each new boundary reduces blast radius, improves test scoping, and enables more independent ownership. Prioritize by pain:
| Signal | What it tells you |
|---|---|
| Files that always change together across modules | Coupling that forces coordinated changes |
| Unexpected test failures after unrelated changes | Hidden dependencies through shared state |
| Multiple teams needing to coordinate on changes | Ownership boundaries that do not match code boundaries |
| Long pipeline duration from running all tests | No way to scope testing because boundaries do not exist |
Over months, the system evolves from a tangle into a set of modules with defined interfaces. This is not a rewrite. It is incremental boundary enforcement applied where it matters most.
| Objection | Response |
|---|---|
| “We should just rewrite it as microservices” | A rewrite takes months or years and delivers zero value until it is finished. Enforcing boundaries in the existing codebase delivers value with each boundary and does not require a big-bang migration. |
| “We don’t have time to refactor” | You are already paying the cost of coupling in unexpected breakage, slow testing, and coordination overhead. Each boundary you enforce reduces that ongoing cost. |
| “The coupling is too deep to untangle” | Start with the easiest boundary, not the hardest. Even one well-enforced boundary reduces blast radius and proves the approach works. |
| “Module boundaries will slow us down” | Boundaries add a small cost to cross-module changes and remove a large cost from within-module changes. Since most changes are within a module, the net effect is faster delivery. |
Measuring Progress
| Metric | What to look for |
|---|---|
| Unexpected cross-module test failures | Should decrease as boundaries are enforced |
| Change fail rate | Should decrease as blast radius shrinks |
| Build duration | Should decrease as testing can be scoped to affected modules |
| Development cycle time | Should decrease as developers spend less time tracing dependencies |
| Cross-team coordination requests per sprint | Should decrease as module ownership becomes clearer |
| Files changed per commit | Should decrease as changes become more localized |
Related Content
- Architecture Decoupling - Strategies for creating module boundaries
- Small Batches - Decoupling enables smaller, safer changes
- Testing Fundamentals - Scoping tests to module boundaries
- Identify Constraints - Finding the coupling that hurts most
- Value Stream Mapping - Making coordination overhead visible