Separation of duties as separate teams

A compliance requirement for separation of duties is implemented as organizational walls - developers cannot deploy - instead of automated controls.

Category: Organizational & Cultural | Quality Impact: High

What This Looks Like

The compliance framework requires separation of duties (SoD): the person who writes code should not be the only person who can authorize deploying that code. This is a sensible control - it prevents a single individual from both introducing and concealing fraud or a critical error. The organization implements it by making a rule: developers cannot deploy to production. A separate team - operations, release management, or a dedicated deployment team - must perform the final step.

This implementation satisfies the letter of the SoD requirement but creates an organizational wall with significant operational costs. Developers write code. Deployers deploy code. The information that would help deployers make good decisions - what changed, what could go wrong, what the rollback plan is - is in the developers’ heads but must be extracted into documentation that deployers can act on without developer involvement.

The wall is justified as a control, but it functions as a bottleneck. The deployment team has finite capacity. Changes queue up waiting for deployment slots. Emergency fixes require escalation procedures. The organization is slower, not safer.

More critically, this implementation of SoD does not actually prevent the fraud it is meant to prevent. A developer who intends to introduce a fraudulent change can still write the code and write a misleading change description that leads the deployer to approve it. The deployer who runs an opaque deployment script is not in a position to independently verify what the script does. The control appears to be in place but provides limited actual assurance.

Common variations:

  • Tiered deployment approval. Developers can deploy to test and staging but not to production. Production requires a different team regardless of whether the change is risky or trivial.
  • Release manager sign-off. A release manager must approve every production deployment, but approval is based on a checklist rather than independent technical verification.
  • CAB as SoD proxy. The change advisory board is positioned as the SoD control, with the theory that a committee reviewing a deployment constitutes separation. In practice, CAB reviewers rarely have the technical depth to independently verify what they are approving.

The telltale sign: the deployment team’s primary value-add is running a checklist, not performing independent technical verification of the change being deployed.

Why This Is a Problem

A developer’s urgent hotfix sits in the deployment queue for two days while the deployment team works through a backlog. In the meantime, the bug is live in production. SoD implemented as an organizational wall creates a compliance control that is expensive to operate, slow to execute, and provides weaker assurance than the automated alternative.

It reduces quality

When the people who deploy code are different from the people who wrote it, the deployers cannot provide meaningful technical review. They can verify that the change was peer-reviewed, that tests passed, that documentation exists - process controls, not technical controls. A developer intent on introducing a subtle bug or a back door can satisfy all process controls while still achieving their goal. The organizational separation does not prevent this; it just ensures a second person was involved in a way they could not independently verify.

Automated controls provide stronger assurance. A pipeline that enforces peer review in source control, runs security scanning, requires tests to pass, and captures an immutable audit log of every action is a technical control that is much harder to circumvent than a human approval based on documentation. The audit evidence is generated by the system, not assembled after the fact. The controls are applied consistently to every change, not just the ones that reach the deployment team’s queue.
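The immutable audit log described above can be sketched as a hash-chained, append-only record store. This is an illustrative minimal implementation (the class and field names are hypothetical, not from any specific tool), not a substitute for a managed audit service:

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only deployment audit log. Each record embeds the hash of the
    previous record, so any after-the-fact edit breaks the chain and is
    detectable on verification."""

    def __init__(self):
        self.records = []

    def append(self, author, reviewer, commit, action):
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        body = {
            "author": author,        # who wrote the change
            "reviewer": reviewer,    # who approved it (must differ from author)
            "commit": commit,        # what was deployed
            "action": action,        # e.g. "approve", "deploy"
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev_hash,  # chains this record to the previous one
        }
        # The separation-of-duties check itself is enforced by the system.
        if body["reviewer"] == body["author"]:
            raise ValueError("SoD violation: reviewer must not be the author")
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.records.append({**body, "hash": digest})
        return digest

    def verify(self):
        """Recompute every hash; returns False if any record was altered."""
        prev = "0" * 64
        for rec in self.records:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```

The point of the chain is that the evidence is generated by the system as the deployment happens; tampering with any record afterwards invalidates every subsequent hash.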

The quality of deployments also suffers when deployers do not have the context that developers have. Deployers executing a runbook they did not write will miss the edge cases the developer would have recognized. Incidents happen at deployment time that a developer performing the deployment would have caught.

It increases rework

The handoff from development to the deployment team is a mandatory information transfer with inherent information loss. The deployment team asks questions; developers answer them. Documentation is incomplete; the deployment is delayed while it is filled in. The deployment encounters an unexpected state in production; the deployment team cannot proceed without developer involvement, but the developer is now focused on new work.

Every friction point in the handoff generates coordination overhead. The developer who thought they were done must re-engage with a change they mentally closed. The deployment team member who encountered the problem must interrupt the developer, explain what they found, and wait for a response. Neither party is doing what they should be doing.

This overhead is invisible in estimates because handoff friction is unpredictable. Some deployments go smoothly. Others require three back-and-forth exchanges over two days. Planning treats all deployments as though they will be smooth; execution reveals they are not.

It makes delivery timelines unpredictable

The deployment team is a shared resource serving multiple development teams. Its capacity is fixed; demand is variable. When multiple teams converge on the deployment window, waits grow. A change that is technically ready to deploy waits not because anything is wrong with it but because the deployment team is busy.

This creates a perverse incentive: teams learn to submit deployment requests before their changes are fully ready, to claim a slot before the queue fills up. Partially ready changes sit in the queue, consuming mental bandwidth from both teams, until they are either deployed or pulled back.

The queue is also subject to priority manipulation. A team with management attention can escalate their deployment past the queue. Teams without that access wait their turn. Delivery predictability depends partly on organizational politics rather than technical readiness.

Impact on continuous delivery

CD requires that any validated change be deployable on demand by the team that owns it. A mandatory handoff to a separate team is a structural block on this requirement. You can have automated pipelines, excellent test coverage, and fast build times, and still be unable to deliver on demand because the deployment team’s schedule does not align with yours.

SoD as a compliance requirement does not change this constraint - it just frames the constraint as non-negotiable. The path forward is demonstrating that automated controls satisfy SoD requirements more effectively than organizational separation does, and negotiating with compliance to accept the automated implementation.

Most SoD frameworks in regulated industries - SOX ITGC, PCI DSS, HIPAA Security Rule - specify the control objective (no single individual controls the entire change lifecycle without oversight) rather than the mechanism (a separate team must deploy). The mechanism is an organizational choice, not a regulatory mandate.

How to Fix It

Step 1: Clarify the actual SoD requirement (Week 1)

  1. Obtain the specific SoD requirement from your compliance framework and read it exactly as written - not as interpreted by the organization.
  2. Identify what the requirement actually mandates: peer review, second authorization, audit trail, or something else. Most SoD requirements can be satisfied by peer review in source control plus an immutable audit log.
  3. Consult your compliance officer or external auditor with a specific question: “If a developer’s change requires at least one other person’s approval before deployment and an automated audit log captures the complete deployment history, does this satisfy separation of duties?” Document the response.
  4. Research how other regulated organizations in your industry have implemented SoD in automated pipelines. Many published case studies describe how financial services, healthcare, and government organizations satisfy SoD with pipeline controls.
  5. Prepare a one-page summary of findings for the compliance conversation: what the regulation requires, what the current implementation provides, and what the automated alternative would provide.

Expect pushback and address it directly:

  • Objection: “Our auditors specifically require a separate team.” Response: Ask the auditors to cite the requirement. Auditors often have flexibility in how they accept controls; they want to see the control objective met. Present the automated alternative with a regulatory mapping.
  • Objection: “We’ve been operating this way for years without an audit finding.” Response: Absence of an audit finding does not mean the current control is optimal. The question is whether a better control is available.

Step 2: Design automated SoD controls (Weeks 2-6)

  1. Require peer review of every change in source control before it can be merged. The reviewer must not be the author. This satisfies the “separate individual” requirement for authorization.
  2. Enforce branch protection rules that prevent the author from merging their own change, even if they have admin rights. The separation is enforced by tooling, not by policy.
  3. Configure the pipeline to capture the identity of the reviewer and the reviewer’s explicit approval as part of the immutable deployment record. The record must be write-once and include timestamps.
  4. Add automated gates that the reviewer cannot bypass: tests must pass, security scans must clear, required reviewers must approve. The reviewer is verifying that the gates passed, not making independent technical judgment about code they may not fully understand.
  5. Implement deployment authorization in the pipeline: the deployment step is only available after all gates pass and the required approvals are recorded. No manual intervention is needed.
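As a concrete sketch of steps 1 and 2, the settings below are expressed as the payload accepted by GitHub's branch protection REST API (`PUT /repos/{owner}/{repo}/branches/{branch}/protection`). The field names follow GitHub's documented schema; the helper function itself is hypothetical, and GitLab or Bitbucket have equivalent settings:

```python
def sod_branch_protection(required_checks):
    """Build a branch protection payload that enforces SoD by tooling:
    the author cannot merge their own change, and the gates cannot be
    bypassed even by admins."""
    return {
        # At least one reviewer other than the author must approve.
        "required_pull_request_reviews": {
            "required_approving_review_count": 1,
            # New pushes invalidate earlier approvals, so the approval
            # always covers the code actually being merged.
            "dismiss_stale_reviews": True,
        },
        # Applies to admins too: separation enforced by tooling, not policy.
        "enforce_admins": True,
        # Gates the reviewer cannot bypass: tests and scans must pass.
        "required_status_checks": {
            "strict": True,  # branch must be up to date before merging
            "contexts": required_checks,
        },
        "restrictions": None,  # no team-based push restrictions needed
    }

payload = sod_branch_protection(["ci/tests", "ci/security-scan"])
```

The design choice worth noting is `enforce_admins`: if the rule can be switched off by the person it constrains, it is a policy, not a control.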

Expect pushback and address it directly:

  • Objection: “Peer review is not the same as a separate team making the deployment.” Response: Peer review that gates deployment provides the authorization separation SoD requires. The SoD objective is preventing a single individual from unilaterally making a change. Peer review achieves this.
  • Objection: “What if reviewers collude?” Response: Collusion is a risk in any SoD implementation. The automated approach reduces collusion risk by making the audit trail immutable and by separating review from deployment - the reviewer approves the code, the pipeline deploys it. Neither has unilateral control.

Step 3: Transition the deployment team to a higher-value role (Weeks 6-12)

  1. Pilot the automated SoD controls with one team or one service. Run the automated pipeline alongside the current deployment team process for one quarter, demonstrating that the controls are equivalent or better.
  2. Work with the compliance team to formally accept the automated controls as the SoD mechanism, retiring the deployment team’s approval role for that service.
  3. Expand to additional services as the compliance team gains confidence in the automated controls.
  4. Redirect the deployment team’s effort toward platform engineering, reliability work, and developer experience - activities that add more value than running deployment runbooks.
  5. Update your compliance documentation to describe the automated controls as the SoD mechanism, including the specific tooling, the approval record format, and the audit log retention policy.
  6. Conduct a walkthrough with your auditors showing the audit trail for a sample deployment. Walk them through each field: who reviewed, what approved, what deployed, when, and where the record is stored.
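The auditor walkthrough in step 6 amounts to showing that every required field is present in the sample record. A minimal completeness check, assuming an illustrative record shape (the field names here are hypothetical, not a standard):

```python
# Fields the auditor needs to see, mapped to the question each one answers.
REQUIRED_FIELDS = {
    "reviewer": "who reviewed the change",
    "approval_id": "what approved it (the recorded approval)",
    "artifact": "what was deployed",
    "deployed_at": "when it was deployed",
    "log_location": "where the immutable record is stored",
}

def walkthrough(record):
    """Fail loudly if the audit record is incomplete; otherwise return a
    line-per-field summary suitable for walking an auditor through."""
    missing = [f for f in REQUIRED_FIELDS if f not in record]
    if missing:
        raise KeyError(f"audit record incomplete, missing: {missing}")
    return "\n".join(
        f"{question}: {record[field]}"
        for field, question in REQUIRED_FIELDS.items()
    )
```

Running this check in the pipeline itself, rather than only at audit time, ensures no deployment produces an incomplete record in the first place.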

Expect pushback and address it directly:

  • Objection: “The deployment team will resist losing their role.” Response: The work they are freed from is low-value. The work available to them - platform engineering, SRE, developer experience - is higher-value and more interesting. Frame this as growth, not elimination.
  • Objection: “Compliance will take too long to approve the change.” Response: Start with a non-production service in scope for compliance. Build the track record while the formal approval process runs.

Measuring Progress

  • Lead time: significant reduction as the deployment queue wait is eliminated.
  • Release frequency: increase beyond the deployment team’s capacity ceiling.
  • Change fail rate: should remain flat or improve, as automated gates are more consistent than manual review.
  • Development cycle time: reduction in time changes spend waiting for deployment authorization.
  • Work in progress: reduction as the deployment bottleneck clears.
  • Build duration: monitor the automated approval gates for speed; they should add minimal time to the pipeline.
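The queue-wait component of lead time can be measured directly from deployment records. A minimal sketch, assuming each change carries a “ready” timestamp (all gates passed) and a “deployed” timestamp; the record keys are illustrative:

```python
from datetime import datetime
from statistics import median

def queue_wait_hours(changes):
    """Median hours each change spent between being ready to deploy
    (all gates passed) and actually being deployed. This is the wait the
    deployment-team handoff adds; it should trend toward zero once the
    automated controls replace the queue."""
    waits = []
    for change in changes:
        ready = datetime.fromisoformat(change["ready_at"])
        deployed = datetime.fromisoformat(change["deployed_at"])
        waits.append((deployed - ready).total_seconds() / 3600)
    return median(waits)
```

The median is used rather than the mean so a single escalated, jump-the-queue deployment does not mask how long typical changes wait.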