Why DevOps Engineers Hate Compliance
Author
Matt Anderson
DevOps engineers don't hate governance; they resent compliance because, as practiced today, it is structurally misaligned with modern security and dynamic cloud operations. Current "compliance automation" only reinforces these flaws. Ultimately, compliance is a knowledge problem, not a tooling problem, and it demands a rebuild rooted in operational reality and real-world security.
When Compliance Is Simply Wrong
Compliance has become an unavoidable component of operating modern systems, yet it remains one of the most universally resented. This resentment is not rooted in laziness, immaturity, or an unwillingness to engage with governance. It is rooted in the undeniable fact that compliance, as practiced in most organizations today, is structurally misaligned with the realities of contemporary engineering. DevOps teams, platform engineers, and operations leaders do not object to oversight or accountability; they object to participating in a system that demands adherence to outdated practices, contradicts established security research, and creates operational drag without improving the safety or resilience of the systems they are responsible for.
The conflict begins with a basic but critical observation: much of what compliance requires is simply wrong. There are countless examples, but none are more emblematic than the continued insistence on password rotation. SOC 2 and several other frameworks still mandate rotation policies despite overwhelming evidence from NIST, Microsoft, CISA, and years of academic research demonstrating that forced rotation weakens security. Users select predictable patterns. Administrators struggle to enforce complexity. Attackers exploit the churn. Yet organizations are asked to implement it anyway because the control language has not been updated.
This is not guidance - it is an artifact of institutional inertia. Engineers, who live at the intersection of operational reality and security practice, are then placed in the absurd position of knowingly implementing measures that increase risk solely to satisfy an auditor’s expectations.
There’s an inevitable question that comes up when engineers run into these contradictions: how do you reconcile them? You don’t fix them with automation, and you certainly don’t resolve them by knowingly implementing practices that harm your security posture. You resolve them through interpretation. In every engineering team I’ve managed, I’ve worked with exceptionally bright and adaptable people, yet almost none realized they had a voice in how controls were implemented. Compliance was treated as immutable text, not guidance requiring judgment. That gap exists largely because few leaders are willing to challenge outdated interpretations.

The responsible approach is straightforward: implement what actually strengthens security, document how it meets the intent of the control, acknowledge the contradiction in your risk assessment, articulate the compensating safeguards (if you really want those bonus points), and push your auditor to evaluate substance over ceremony. A team might reasonably state: “We do not rotate passwords because phishing-resistant MFA, passwordless authentication, and hardware- or device-bound credentials provide far stronger protection than any rotation policy. Current federal and industry guidance supports this: phishing-resistant MFA is now considered the baseline, while traditional password authentication is explicitly classified as not phishing-resistant (CISA’s Implementing Phishing-Resistant MFA). Microsoft and NIST likewise identify passwordless and FIDO2-style credentials as higher-assurance methods that substantially reduce credential theft. Under this model, mandatory password rotation would weaken security, while phishing-resistant authentication combined with periodic credential auditing forms a materially stronger control.”
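One lightweight way to keep such deviations auditable is to record each one as structured data alongside the control it interprets. The sketch below is illustrative only - the record shape and the control ID are hypothetical, not a prescribed format from any framework:

```python
from dataclasses import dataclass, field

@dataclass
class ControlDeviation:
    """A documented deviation from a control's literal language."""
    control_id: str            # the framework control being interpreted
    literal_requirement: str   # what the control text says
    implemented_control: str   # what is actually implemented
    rationale: str             # why the implementation meets the control's intent
    supporting_guidance: list[str] = field(default_factory=list)

# Hypothetical example mirroring the password-rotation argument above.
rotation_deviation = ControlDeviation(
    control_id="AC-PW-01",  # hypothetical internal control identifier
    literal_requirement="Rotate user passwords on a fixed schedule.",
    implemented_control=(
        "Phishing-resistant MFA and device-bound credentials, "
        "with periodic credential auditing instead of forced rotation."
    ),
    rationale=(
        "Current NIST and CISA guidance identifies forced rotation as "
        "counterproductive and phishing-resistant MFA as the stronger baseline."
    ),
    supporting_guidance=[
        "NIST SP 800-63B",
        "CISA: Implementing Phishing-Resistant MFA",
    ],
)
```

A record like this gives the auditor exactly the substance-over-ceremony narrative described above: the literal text, the actual implementation, and the reasoning that connects them.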
Auditors increasingly accept implementation approaches that diverge from the literal control language when those decisions are grounded in real security reasoning. Modern audit practice - especially in IT and security domains - has shifted toward risk-based interpretation rather than rigid checklist enforcement (ISACA Journal, “Are Organizations Actually Performing Risk-Based Audits?”, 2020). If you’re backing your decision with reputable research, modern guidance, and a defensible risk narrative - and your auditor still refuses to accept it - the problem isn’t your implementation. It’s your auditor.
Framework language may be fossilized, but interpretation isn’t, and this is exactly why knowledge - not automation, not dashboards, not compliance cosplay - is what actually makes compliance credible.
The Static Worldview of Compliance Frameworks
The deeper problem is structural: most compliance frameworks are built on a static worldview that no longer resembles how modern systems operate. They were created for an era of fixed servers, quarterly releases, long-lived IAM credentials, and infrastructure that changed slowly enough for paperwork to keep up. DevOps tooling, cloud infrastructure, and modern software delivery techniques operate under precisely the opposite assumptions. Infrastructure is ephemeral; deployments are continuous; identity boundaries shift dynamically; observability replaces paperwork as the primary mechanism of organizational awareness.
A system that may be provisioned, scaled, mutated, and destroyed within hours cannot be evaluated meaningfully by controls written for systems that lived for years. Yet compliance assessments still demand year-long evidence trails for resources that did not exist at the beginning of the audit period, creating an impossible and fundamentally dishonest burden on engineering teams.
Beyond the conceptual mismatch, compliance interrupts engineering in ways that are uniquely counterproductive. DevOps teams already operate in an interrupt-driven reality: on-call pages, emergency patches, hotfixes pushed during outages, and architectural changes made in the middle of cascading failures. That’s normal. What isn’t normal is being asked to pretend that production systems follow slow, linear workflows written for a world without incidents.
No one pauses a sev-1 incident to open a pull request so the change has a clean audit trail. No one rotates a deeply embedded IAM (Identity and Access Management) role on a quarterly schedule because a framework mandates it - not when that role is welded into ancient CI jobs, legacy services, or third-party integrations that will detonate if touched. And the idea of maintaining a perfectly accurate “server inventory” becomes comical in a world where half your infrastructure is ephemeral - containers that live for seconds, functions that appear and disappear, nodes that autoscalers replace before the compliance team even knows they existed. These are hard truths even in the world of colocation and bare metal today (just ask us, we’ve built + operated a bare metal provisioning business).
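The inventory problem is easy to see with a toy model. The numbers below are invented for illustration - a year-long audit window, a modest daily churn of short-lived resources, and a single point-in-time inventory snapshot - but the shape of the result holds for any fleet where resource lifetimes are much shorter than the audit period:

```python
import random

def simulate_inventory_coverage(days=365, daily_churn=40, lifetime_hours=6, seed=7):
    """Toy model: short-lived resources vs. a single point-in-time inventory.

    Each day, `daily_churn` resources are created, each living roughly
    `lifetime_hours` (exponentially distributed). A snapshot taken at the end
    of the period sees only the resources alive at that instant.
    """
    random.seed(seed)
    total_ever = 0
    alive_at_snapshot = 0
    snapshot_time = days * 24.0  # hours since the start of the audit period
    for day in range(days):
        for _ in range(daily_churn):
            born = day * 24 + random.uniform(0, 24)
            died = born + random.expovariate(1 / lifetime_hours)
            total_ever += 1
            if born <= snapshot_time <= died:
                alive_at_snapshot += 1
    return total_ever, alive_at_snapshot

total, seen = simulate_inventory_coverage()
print(f"resources over the audit period: {total}; captured by the snapshot: {seen}")
```

With these assumptions, the snapshot captures well under one percent of the resources that existed during the audit period - which is exactly why a "perfectly accurate server inventory" is a fiction for ephemeral infrastructure.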
These aren’t small frictions, either; they are collisions between compliance fantasies and operational reality. And they always show up at the worst possible time, right when engineering needs speed, clarity, and autonomy. Compliance doesn’t just interrupt engineering; it interrupts engineering exactly when the cost of interruption is highest.
Automation Theater and Its Discontents
When “compliance automation” platforms first emerged, they were marketed as a solution to this drag. In reality, these platforms often automate the least meaningful components of compliance while reinforcing the structural flaws that created the problems in the first place. Automatically generated policies, evidence scraping, Jira ticket creation, and templated attestations offer the illusion of modernization without challenging the underlying assumptions of the frameworks. These tools streamline the production of paperwork, not the improvement of systems. They operationalize the same outdated worldview with better UX.
It is automation in the shallowest sense - a faster way to conform to the wrong requirements. Engineers can smell this b.s. from a mile away.

Perhaps the most overhyped concept in this ecosystem is “compliance as code.” While appealing in theory, it collapses upon contact with the nature of compliance itself. Controls are not boolean. They require interpretation, context, and judgment. Some controls are self-contradictory. Others conflict directly with security best practices. Still others are aspirational statements disguised as requirements.
Attempting to encode these into deterministic workflows or YAML manifests creates a dangerous illusion of certainty where none exists. It is not that automation has no place in compliance (as I’d assuredly not have started a software company in the space if I believed that to be the case); it is that automation cannot substitute for understanding. When organizations attempt to reduce compliance to machine-validated expressions, they lose the nuance required to interpret controls responsibly and honestly.
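The gap between a boolean check and an actual control judgment is easy to demonstrate. The sketch below is a deliberately simplified contrast - the policy fields and function names are hypothetical, and the second function is not a claim that judgment can be automated, only an illustration of how much context a literal pass/fail check discards:

```python
def rotation_enabled(policy: dict) -> bool:
    """Naive boolean check: does the account rotate passwords at all?"""
    return policy.get("password_max_age_days", 0) > 0

def evaluate_credential_control(policy: dict) -> dict:
    """Context-aware evaluation: returns a judgment plus its rationale,
    not a single pass/fail bit. Field names are hypothetical."""
    if policy.get("phishing_resistant_mfa"):
        return {
            "meets_intent": True,
            "rationale": "Phishing-resistant MFA exceeds the protection "
                         "rotation was meant to provide.",
        }
    if rotation_enabled(policy):
        return {
            "meets_intent": True,
            "rationale": "Literal rotation is in place; consider migrating "
                         "to phishing-resistant MFA.",
        }
    return {"meets_intent": False, "rationale": "No compensating control."}

# A modern posture: no rotation, but phishing-resistant MFA everywhere.
modern = {"password_max_age_days": 0, "phishing_resistant_mfa": True}
print(rotation_enabled(modern))             # the boolean check flags this as failing
print(evaluate_credential_control(modern))  # the judgment recognizes the stronger control
```

The boolean check marks the stronger posture non-compliant; the context-aware evaluation does not. Encoding only the former into a pipeline is how "compliance as code" manufactures false certainty.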
The fundamental issue is that compliance remains a knowledge problem, not a tooling problem. Engineers are rarely given a clear explanation of why a control exists, what risk it addresses, how it applies within cloud-native architectures, or which parts of the control language reflect outdated thinking. They are told to follow the framework, adopt the template, gather the evidence, and move on.
This absence of context turns compliance into a burdensome obligation rather than an integrated discipline. Engineers do not reject governance - they reject rules that are never justified, risks that are never explained, and contradictions that are left unresolved.
I’ll cover some solutions - and what I believe engineers actually need - in a follow-on piece; I realize I’ve spent most of this post tearing the system apart without offering a rehabilitation plan. Stay tuned.
Rebuilding Compliance Into Something Credible
I’ve never been the engineering leader who excels at corporate theater - anyone who knows me will be laughing at this point - but I have always been fiercely protective of the infrastructure under my care. Not out of ego or because I romanticized the work, but because the systems we operated genuinely mattered. Customers depended on our uptime; internal teams depended on the stability of the platforms we maintained; and every failure had a real-world blast radius that landed on people, not abstractions. When you’re responsible for distributed systems, identity boundaries, and the brittle edges of production, you learn quickly that you can’t implement controls you know are structurally unsound. You have to understand the tradeoffs, justify the deviations, and push back when a requirement contradicts how real systems behave.
That instinct - to protect the environment first and negotiate the compliance language second - is what eventually pulled me deeper into the compliance domain. I wanted the systems I supported to be resilient and secure, but I also understood that resilience alone wouldn’t close enterprise deals or satisfy customer due diligence. In that sense, compliance became an extension of the same responsibility: safeguarding the infrastructure, the people who depended on it, and the business outcomes tied to it.
Rebuilding compliance into something credible requires discarding the long-standing contradictions that undermine it and replacing them with clarity, transparency, and respect for the engineering disciplines that keep modern companies afloat. That philosophy is also what guides the work we’re doing at Openlane - not to recreate the same automated checklists under a new banner, but to design tools and knowledge that treat engineers as decision-makers rather than data entry points. Compliance can only support DevOps when it aligns with real security practice and operational reality. That evolution is possible, and it’s already underway; the goal now is to push it forward with systems that enhance understanding instead of obscuring it.
