Compliance interpreted as manual approval
Category: Organizational & Cultural | Quality Impact: High
What This Looks Like
The change advisory board (CAB) convenes every Tuesday at 2 PM. Every deployment request - whether a one-line config fix or a multi-service architectural overhaul - is presented to a room of reviewers who read a summary, ask a handful of questions, and vote to approve or defer. The review is documented in a spreadsheet. The spreadsheet is the audit trail. This process exists because someone decided, years ago, that the regulations require it.
The regulation in question - SOX, HIPAA, PCI DSS, GDPR, FedRAMP, or any number of industry or sector frameworks - almost certainly does not require it. Regulations require controls. They require evidence that changes are reviewed and that the people who write code are not the same people who authorize deployment. They do not mandate that the review happen in a Tuesday meeting, that it be performed manually by a human, or that every change receive the same level of scrutiny regardless of its risk profile.
The gap between what regulations actually say and how organizations implement them is filled by conservative interpretation, institutional inertia, and the organizational incentive to make compliance visible through ceremony rather than effective through automation. The result is a process that consumes significant time, provides limited actual risk reduction, and is frequently bypassed in emergencies - which means the audit trail for the highest-risk changes is often the weakest.
Common variations:
- Change freeze windows. No deployments during quarterly close, peak business periods, or extended blackout windows - often longer than regulations require and sometimes longer than the quarter itself.
- Manual evidence collection. Compliance evidence is assembled by hand from screenshots, email approvals, and meeting notes rather than automatically captured by the pipeline.
- Risk-blind approval. Every change goes through the same review regardless of whether it is a high-risk schema migration or a typo fix in a marketing page. The process cannot distinguish between them.
The telltale sign: the compliance team cannot tell you which specific regulatory requirement mandates the current manual approval process, only that “that’s how we’ve always done it.”
Why This Is a Problem
Manual compliance controls feel safe because they are visible. Auditors can see the spreadsheet, the meeting minutes, the approval signatures. What they cannot see - and what the controls do not measure - is whether the reviews are effective, whether the documentation matches reality, or whether the process is generating the risk reduction it claims to provide.
It reduces quality
Manual approval processes that treat all changes equally cannot allocate attention to risk. A CAB reviewer who must approve 47 changes in a 90-minute meeting cannot give meaningful scrutiny to any of them. The review becomes a checkbox exercise: read the title, ask one predictable question (“is this backward compatible?”), approve. Changes that genuinely warrant careful review receive the same rubber stamp as trivial ones.
The documentation that feeds manual review is typically optimistic and incomplete. Engineers writing change requests describe the happy path. Reviewers who are not familiar with the system cannot identify what is missing. The audit evidence records that a human approved the change; it does not record whether the human understood the change or identified the risks it carried.
Automated controls, by contrast, can enforce specific, verifiable criteria on every change. A pipeline that requires two reviewers to approve a pull request, runs security scanning, checks for configuration drift, and creates an immutable audit log of what ran when provides more genuine risk reduction than a CAB, delivers it faster, and produces evidence that actually demonstrates the controls worked.
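To make this concrete, here is a minimal sketch of such a gate, written as a standalone check rather than any particular CI tool's API. The data structures and field names are illustrative assumptions; in a real pipeline the approver list and scan findings would come from your source control system and scanner.

```python
# Minimal sketch of a pipeline gate that enforces verifiable change controls
# and emits a tamper-evident audit record. Inputs are illustrative; a real
# pipeline would populate them from the source control and scanner APIs.
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ChangeRequest:
    commit_sha: str
    author: str
    approvers: list      # reviewers who approved the pull request
    scan_findings: list  # high-severity security findings, if any

def gate(change: ChangeRequest) -> dict:
    """Apply the controls a CAB claims to provide, as explicit checks."""
    checks = {
        # Segregation of duties: the author cannot count as an approver.
        "two_independent_approvals":
            len([a for a in change.approvers if a != change.author]) >= 2,
        # Block deployment on any high-severity security finding.
        "no_high_severity_findings": not change.scan_findings,
    }
    record = {
        "commit": change.commit_sha,
        "author": change.author,
        "checks": checks,
        "approved": all(checks.values()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Hash the record so tampering is detectable in the audit trail.
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

if __name__ == "__main__":
    print(json.dumps(gate(ChangeRequest("9f2c1ab", "alice",
                                        ["bob", "carol"], [])), indent=2))
```

Every decision the gate makes is recorded together with the inputs it saw, which is precisely the evidence a spreadsheet-based CAB cannot reliably produce.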
It increases rework
When changes are batched for weekly approval, the review meeting becomes the synchronization point for everything that was developed since the last meeting. Engineers who need a fix deployed before Tuesday must either wait or escalate for emergency approval. Emergency approvals, which bypass the normal process, become a significant portion of all deployments - the change data for many CAB-heavy organizations shows 20 to 40 percent of changes going through the emergency path.
This batching amplifies rework. A bug discovered the day after Tuesday's CAB waits seven days for the next approval window before its fix can reach production. If the bug is in an environment that feeds downstream testing, testing is blocked for the entire week. Changes pile up waiting for the next approval window, and each additional change increases the complexity of the deployment event and the risk of something going wrong.
The rework caused by late-discovered defects in batched changes is often not attributed to the approval delay. It is attributed to “the complexity of the release,” which then justifies even more process and oversight, which creates more batching.
It makes delivery timelines unpredictable
A weekly CAB meeting sets a cadence that delivery cannot beat: nothing reaches production faster than the next approval window. A feature that takes two days to develop and one day to verify takes eight days to deploy because it must wait for that window. If the CAB defers the change - asks for more documentation, wants a rollback plan, has concerns about the release window - the wait extends to two weeks.
This latency is invisible in development metrics. Story points are earned when development completes. The time sitting in the approval queue does not appear in velocity charts. Delivery looks faster than it is, which means planning is wrong and stakeholder expectations are wrong.
The unpredictability compounds as changes interact. Two teams each waiting for CAB approval may find that their changes conflict in ways neither team anticipated when writing the change request a week ago. The merge happens the night before the deployment window, in a hurry, without the testing that would have caught the problem.
Impact on continuous delivery
Continuous delivery (CD) is defined by the ability to release any validated change on demand. A weekly approval gate creates a hard ceiling on release frequency: you can release at most once per week, and only changes that were submitted to the CAB before Tuesday at 2 PM. This ceiling is irreconcilable with CD.
More fundamentally, CD requires that the pipeline be the control - that approval, verification, and audit evidence are products of the automated process, not of a human ceremony that precedes it. The pipeline that runs security scans, enforces review requirements, captures immutable audit logs, and deploys only validated artifacts is a stronger control than a CAB, and it generates better evidence for auditors.
The path to CD in regulated environments requires reframing compliance with the compliance team: the question is not “how do we get exempted from the controls?” but “how do we implement controls that are more effective and auditable than the current manual process?”
How to Fix It
Step 1: Read the actual regulatory requirements (Week 1)
Most manual approval processes are not required by the regulation they claim to implement. Verify this before attempting to change anything.
- Obtain the text of the relevant regulation (SOX ITGC guidance, HIPAA Security Rule, PCI DSS v4.0, etc.) and identify the specific control requirements.
- Map your current manual process to the specific requirements: which step satisfies which control? (A sketch of this mapping appears after this list.)
- Identify requirements that mandate human involvement versus requirements that mandate evidence that a control occurred (these are often not the same).
- Request a meeting with your compliance officer or external auditor to review your findings. Many compliance officers are receptive to automated controls because automated evidence is more reliable for audit purposes.
- Document the specific regulatory language and the compliance team’s interpretation as the baseline for redesigning your controls.
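The mapping itself is worth keeping as a version-controlled artifact, so it can be reviewed with the compliance team and updated as controls change. A minimal sketch follows; the control names and citations are placeholders standing in for the language from your own regulatory review.

```python
# Illustrative control map: each regulatory control is tied to the pipeline
# mechanism that implements it and the evidence that mechanism produces.
# Control names and citations are placeholders, not quotes from a regulation.
CONTROL_MAP = [
    {
        "control": "Changes are reviewed before release",
        "source": "e.g., your SOX ITGC change-management guidance",
        "mechanism": "Pull request approval enforced by branch protection",
        "evidence": "Approval record exported from source control",
    },
    {
        "control": "Authors cannot authorize their own deployments",
        "source": "e.g., your segregation-of-duties requirement",
        "mechanism": "Pipeline gate rejects author-only approvals",
        "evidence": "Gate decision log with author and approver identities",
    },
]

def unmapped(control_map):
    """List controls that still lack an automated mechanism or evidence."""
    return [c["control"] for c in control_map
            if not c.get("mechanism") or not c.get("evidence")]
```

Requirements that end up with no automatable mechanism are exactly the ones to raise in the meeting with your compliance officer.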
Expect pushback and address it directly:
| Objection | Response |
|---|---|
| “Our auditors said we need a CAB.” | Ask your auditors to cite the specific requirement. Most will describe the evidence they need, not the mechanism. Automated pipeline controls with immutable audit logs satisfy most regulatory evidence requirements. |
| “We can’t risk an audit finding.” | The risk of an audit finding from automation is lower than you think if the controls are well-designed. Add automated security scanning to the pipeline first. Then bring the audit log evidence to your compliance officer and ask them to review it against the specific regulatory requirements. |
Step 2: Design automated controls that satisfy regulatory requirements (Weeks 2-6)
- Identify the specific controls the regulation requires (e.g., segregation of duties, change documentation, rollback capability) and implement each as a pipeline stage.
- Require code review by at least one person who did not write the change, enforced by the source control system, not by a meeting.
- Implement automated security scanning in the pipeline and configure it to block deployment of changes with high-severity findings.
- Generate deployment records automatically from the pipeline: who approved the pull request, what tests ran, what artifact was deployed, to which environment, at what time. This is the audit evidence.
- Create a risk-tiering system: low-risk changes (non-production-data services, documentation, internal tools) go through the standard pipeline; high-risk changes (schema migrations, authentication changes, PII-handling code) require additional automated checks and a second human review.
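A minimal sketch of such a tiering rule set follows; the path patterns, tier names, and check names are assumptions for illustration, to be derived from your own systems and incident history.

```python
# Risk tiering sketch: classify a change by the paths it touches, then
# derive the checks it must pass. Patterns and check names are illustrative.
from fnmatch import fnmatch

RULES = [  # ordered highest-risk first; first match wins
    ("high", ["db/migrations/*", "auth/*", "services/pii/*"]),
    ("low",  ["docs/*", "marketing/*", "tools/internal/*"]),
]

def risk_tier(changed_paths):
    """Return the highest tier matched by any changed file."""
    for tier, patterns in RULES:
        if any(fnmatch(path, pat)
               for path in changed_paths for pat in patterns):
            return tier
    return "standard"  # everything else takes the normal pipeline

def required_checks(tier):
    base = ["tests", "security_scan", "one_reviewer"]
    if tier == "high":
        # High-risk changes add automated checks and a second human review.
        return base + ["rollback_plan_present", "second_reviewer"]
    return base
```

For example, `risk_tier(["db/migrations/2024_add_column.sql"])` returns `"high"`, and the pipeline attaches the extra checks automatically instead of routing the change to a meeting.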
Expect pushback and address it directly:
| Objection | Response |
|---|---|
| “Automated evidence might not satisfy auditors.” | Engage your auditors in the design process. Show them what the pipeline audit log captures. Most auditors prefer machine-generated evidence to manually assembled spreadsheets because it is harder to falsify. |
| “We need a human to review every change.” | For what purpose? If the purpose is catching errors, automated testing catches more errors than a human reading a change summary. If the purpose is authorization evidence, a pull request approval recorded in your source control system is a more reliable record than a meeting vote. |
Step 3: Transition the CAB to a risk advisory function (Weeks 6-12)
- Propose to the compliance team that the CAB shift from approving individual changes to reviewing pipeline controls quarterly. The quarterly review should verify that automated controls are functioning, access is appropriately restricted, and audit logs are complete.
- Implement a risk-based exception process: changes to high-risk systems or during high-risk periods can still require human review, but the review is focused and the criteria are explicit.
- Define the metrics that demonstrate control effectiveness: change fail rate, security finding rate, rollback frequency. Report these to the compliance team and auditors as evidence that the controls are working (a sketch of this report appears after this list).
- Archive the CAB meeting minutes alongside the automated audit logs to maintain continuity of audit evidence during the transition.
- Run the automated controls in parallel with the CAB process for one quarter before fully transitioning, so the compliance team can verify that the automated evidence is equivalent or better.
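A sketch of what that quarterly report might compute, assuming the pipeline's own audit records are the input; the field names here are illustrative assumptions, not a fixed schema.

```python
# Control-effectiveness report sketch, computed from the pipeline's own
# records of attempted and completed deployments. Field names are
# illustrative assumptions.
def effectiveness_report(records):
    total = len(records)
    if total == 0:
        return {}
    return {
        "deployments": total,
        "change_fail_rate":
            sum(1 for r in records if r.get("caused_incident")) / total,
        "rollback_frequency":
            sum(1 for r in records if r.get("rolled_back")) / total,
        # Attempts the security scan stopped before they reached production.
        "deploys_blocked_by_scan":
            sum(1 for r in records if r.get("blocked_by_scan")),
    }
```

Because the report is computed rather than assembled by hand, it can be regenerated on demand for any audit period.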
Expect pushback and address it directly:
| Objection | Response |
|---|---|
| “The compliance team owns this process and won’t change it.” | Compliance teams are often more flexible than they appear when approached with evidence rather than requests. Show them the automated control design, the audit evidence format, and a regulatory mapping. Make their job easier, not harder. |
Measuring Progress
| Metric | What to look for |
|---|---|
| Lead time | Reduction in time from ready-to-deploy to deployed, as approval wait time decreases |
| Release frequency | Increase beyond the once-per-week ceiling imposed by the weekly CAB |
| Change fail rate | Should stay flat or improve as automated controls catch more issues than manual review |
| Development cycle time | Decrease as changes no longer batch up waiting for approval windows |
| Build duration | Watch for slowdown as automated compliance checks are added to the pipeline |
| Work in progress | Reduction in changes waiting for approval |
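Most of these metrics fall out of timestamps the pipeline already records. A sketch of the lead-time measurement, with illustrative field names and ISO-8601 timestamps assumed:

```python
# Lead-time sketch: median hours from "ready to deploy" to "deployed",
# computed from pipeline records. Field names are illustrative.
from datetime import datetime
from statistics import median

def median_deploy_wait_hours(changes):
    waits = [
        (datetime.fromisoformat(c["deployed_at"])
         - datetime.fromisoformat(c["ready_at"])).total_seconds() / 3600
        for c in changes
    ]
    return median(waits) if waits else None

# Example: a change ready on a Wednesday, deployed after the next Tuesday CAB.
print(median_deploy_wait_hours([
    {"ready_at": "2024-05-01T10:00:00", "deployed_at": "2024-05-07T15:00:00"},
]))  # -> 149.0
```

Tracked through the transition, this number should fall sharply as the weekly ceiling disappears.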
Related Content
- Separation of duties as separate teams - closely related pattern where compliance requirements are implemented as organizational walls
- Single path to production - automated pipeline controls are the mechanism for replacing manual approval gates
- Pipeline architecture - design the pipeline to capture the evidence compliance requires
- Value stream mapping - visualize how much of your lead time is consumed by approval waits
- Security scanning not in the pipeline - automated security controls are part of the compliance evidence story