The Difference Between Compliance Audit and Engineering Assurance
Compliance audit asks: does this engineering organisation meet the requirements of an external standard? ISO 27001. SOC 2. PCI DSS. GDPR. The answers are documented, a certificate is issued, and the process repeats annually. Whether the organisation is actually secure, reliable, and effective is a related but separate question.
Engineering assurance asks: is this engineering organisation healthy? Are the practices we claim to follow the practices we actually follow? Are the risks we say are managed actually managed? Do we have genuine confidence in the quality and reliability of our systems?
Both questions matter. External compliance requirements are real and cannot be ignored. But compliance audit without engineering assurance produces organisations that pass audits and still suffer significant failures - because the audit measured the documents and the assurance measured the reality, and the two were not the same.
This is not cynicism about compliance. It is recognition that compliance requirements are necessarily general and backward-looking, while engineering health is specific and present-tense. An organisation that only manages to external compliance requirements will meet the minimum required and no more.
What Good Engineering Health Looks Like
Engineering health is multidimensional. A useful assessment examines each dimension separately:
Delivery Health
Is the engineering organisation consistently delivering on its commitments? Are deployment frequency and lead time trending in the right direction? Is the change failure rate stable or improving? Do teams have clear visibility of their work and the blockers to completing it?
Delivery health is measured by flow metrics and DORA metrics, tracked over time. An engineering organisation in good delivery health is predictable, improving, and not regularly surprising stakeholders with delays.
Technical Health
Is the codebase in a state that allows confident change? Is test coverage adequate to provide confidence that changes work as intended? Is technical debt at a manageable level and trending down rather than up? Are dependencies current and known vulnerabilities addressed?
Technical health is harder to measure than delivery health because it requires direct assessment of the codebase and infrastructure rather than process metrics. Code quality metrics - complexity, coverage, duplication - provide signals but require engineering judgement to interpret.
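One such signal can be computed directly from source. The sketch below approximates cyclomatic complexity per function by counting branch points in the syntax tree; the node set chosen is an assumption, and real tools draw the line differently:

```python
import ast

# Node types treated as branch points in this approximation.
BRANCH_NODES = (ast.If, ast.IfExp, ast.For, ast.While,
                ast.ExceptHandler, ast.BoolOp)

def approximate_complexity(source: str) -> dict:
    """Rough cyclomatic complexity per function: 1 + branch points."""
    tree = ast.parse(source)
    scores = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            branches = sum(isinstance(child, BRANCH_NODES)
                           for child in ast.walk(node))
            scores[node.name] = 1 + branches
    return scores
```

The point is not the exact number but the trend: functions whose scores rise release after release are where confident change is being lost, and that interpretation still requires the engineering judgement the text calls for.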
Operational Health
Are production systems reliable? Is monitoring adequate to detect problems quickly? Is on-call load reasonable and distributed fairly? Are incidents responded to quickly and learned from systematically?
Operational health is measured by availability metrics, incident frequency and severity, mean time to detect and restore, and on-call load per engineer.
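Mean time to detect and restore fall out of an incident log with three timestamps per incident. The log structure here is hypothetical:

```python
from datetime import datetime

# Hypothetical incident log: (occurred_at, detected_at, restored_at).
incidents = [
    (datetime(2024, 3, 2, 10, 0), datetime(2024, 3, 2, 10, 20),
     datetime(2024, 3, 2, 11, 0)),
    (datetime(2024, 3, 9, 14, 0), datetime(2024, 3, 9, 14, 10),
     datetime(2024, 3, 9, 16, 0)),
]

def mean_minutes(pairs):
    """Mean elapsed minutes across a list of (start, end) pairs."""
    deltas = [(end - start).total_seconds() / 60 for start, end in pairs]
    return sum(deltas) / len(deltas)

# Mean time to detect: occurrence -> detection.
mttd = mean_minutes([(occ, det) for occ, det, _ in incidents])
# Mean time to restore: occurrence -> restoration.
mttr = mean_minutes([(occ, res) for occ, _, res in incidents])
```

A large gap between MTTD and MTTR points at slow response; a large MTTD on its own points at the monitoring adequacy question asked above.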
Security Health
Are known vulnerabilities being addressed in a timely way? Are security practices - dependency scanning, access control, secrets management, penetration testing - consistently applied? Is the organisation prepared to respond to a security incident?
Security health is measured through vulnerability backlog age and resolution rates, security training completion, penetration testing findings and closure rates, and access review cadence.
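Backlog age and SLA breaches can be derived mechanically from the open-vulnerability list. The identifiers and SLA thresholds below are illustrative assumptions:

```python
from datetime import date

# Hypothetical SLA: maximum open age in days, by severity.
SLA_DAYS = {"critical": 7, "high": 30, "low": 90}

# Hypothetical open vulnerabilities: (id, severity, opened_on).
backlog = [
    ("VULN-101", "critical", date(2024, 3, 25)),
    ("VULN-087", "high", date(2024, 1, 10)),
    ("VULN-054", "low", date(2023, 11, 2)),
]

def backlog_report(backlog, today, sla_days=SLA_DAYS):
    """Age each open vulnerability and flag those past their severity SLA."""
    report = []
    for vid, severity, opened in backlog:
        age = (today - opened).days
        report.append((vid, age, age > sla_days[severity]))
    return report
```

Run continuously, this turns "timely" from an opinion into a count of SLA breaches that can be trended.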
People and Culture Health
Do engineers feel psychologically safe to raise concerns and admit mistakes? Are teams appropriately staffed and is attrition within acceptable limits? Is there sufficient seniority and experience distributed across teams?
People health is the hardest dimension to measure but one of the most predictive of future performance. Engagement surveys, attrition rates, and time-to-hire for open roles provide signals.
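The quantitative part of that signal is simple arithmetic; a minimal sketch, with the twelve-month convention as an assumption:

```python
def annual_attrition_rate(leavers: int, average_headcount: float) -> float:
    """Attrition over a twelve-month window as a percentage of
    average headcount for the same window."""
    return 100.0 * leavers / average_headcount
```

The number is only a prompt for the qualitative question that matters: who left, and why.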
How to Self-Assess Without Gaming the Assessment
Self-assessment creates obvious incentive problems. Teams asked to assess themselves tend to rate themselves more favourably than an external observer would. The assessment becomes an exercise in confirmation rather than discovery.
Designing self-assessment to be genuine requires addressing those incentives directly.
Separate Assessment from Evaluation
When self-assessment scores are used to evaluate teams or managers, gaming becomes rational. The assessment should be used to identify where help is needed, not to score teams against each other or against targets.
Communicate this clearly and repeatedly. "We are doing this to understand where to focus improvement investment, not to rank teams" needs to be followed by consistent behaviour that matches the stated purpose.
Use Evidence Rather Than Opinions
Assessments based on "rate your team from 1-5 on test coverage" will produce inflated scores. Assessments based on "what percentage of your code is covered by automated tests, as measured by your CI pipeline?" will produce accurate scores.
Where possible, make the evidence for the assessment automatic. Connect your assessment to the actual data from your pipelines, monitoring, and tracking systems rather than relying on team self-report.
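For example, line coverage can be read from the report a CI pipeline already produces rather than asked for. This sketch assumes a Cobertura-format XML report (the format emitted by, for instance, coverage.py's `coverage xml`):

```python
import xml.etree.ElementTree as ET

def measured_coverage(cobertura_xml: str) -> float:
    """Line coverage percentage from a Cobertura-format XML report.

    The root <coverage> element carries a line-rate attribute in [0, 1].
    """
    root = ET.fromstring(cobertura_xml)
    return 100.0 * float(root.get("line-rate"))
```

The same principle applies to every dimension: wherever a pipeline or monitoring system already emits a number, the assessment should read that number, not a recollection of it.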
Include Uncomfortable Questions
A self-assessment that never produces uncomfortable answers is not a genuine assessment. Include questions that are likely to surface real issues: "When did you last have a production incident caused by inadequate monitoring?" "What is your oldest open security vulnerability and how long has it been open?" "How many engineers left the team in the last twelve months and why?"
Triangulate
Cross-reference self-assessment against other data sources. If a team rates their delivery health highly but their DORA metrics show low deployment frequency and high lead time, there is a discrepancy worth investigating. Triangulation protects against both gaming and genuine self-misperception.
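Mechanically, triangulation is a join between self-reported scores and measured values. The dimension names and thresholds below are hypothetical:

```python
def triangulate(self_scores: dict, measured: dict, thresholds: dict) -> list:
    """Flag dimensions where a high self-assessment (4-5 on a 1-5 scale)
    coexists with a measured value below the healthy threshold."""
    flags = []
    for dim, score in self_scores.items():
        if score >= 4 and measured.get(dim, 0) < thresholds.get(dim, 0):
            flags.append(dim)
    return flags
```

A flagged dimension is not proof of gaming; it is exactly the "discrepancy worth investigating" the text describes, and the conversation that follows distinguishes gaming from honest misperception.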
External Audit Preparation
External audit - whether regulatory, customer-driven, or voluntary certification - is an unavoidable reality for many engineering organisations. The difference between audit preparation that is stressful and disruptive and audit preparation that is calm and efficient is whether assurance is continuous or episodic.
Continuous Evidence Collection
If your assurance practice is collecting evidence continuously rather than in response to an audit notification, audit preparation becomes assembly rather than discovery. You know what evidence you have. You know where the gaps are. You have been addressing those gaps as part of normal operations.
Organisations that scramble at audit time are organisations whose assurance practice exists only for auditors. The evidence collected for the audit may not reflect reality because it was created for the audit rather than as a product of genuine practice.
Documentation That Reflects Reality
Audit requirements typically include documentation of policies, procedures, and controls. Documentation that accurately reflects what the organisation actually does is easy to audit. Documentation created to satisfy requirements that does not reflect practice creates problems - either the auditor finds discrepancies or, worse, they do not find them but the gap between paper and practice remains.
Write documentation that describes what you actually do. If what you do does not meet requirements, change the practice rather than writing documentation that misrepresents it.
Relationship With External Auditors
Treat external auditors as professional peers rather than adversaries. The auditor's job is to assess whether your practices are sound, not to catch you out. Being transparent about known gaps - along with the plan to address them - typically produces better outcomes than presenting a picture of perfection that falls apart under scrutiny.
The Relationship Between Assurance and Psychological Safety
Assurance practice that is experienced as surveillance will not produce accurate information. If engineers believe that revealing problems leads to negative consequences, they will not reveal problems. The assurance process will collect comfortable fiction rather than useful reality.
Psychological safety - the belief that you can raise concerns, admit mistakes, and surface problems without personal risk - is a prerequisite for effective assurance. This is not separate from assurance; it is foundational to it.
Engineering leaders who respond to problems surfaced in assurance reviews with curiosity and support will receive more and better information than those who respond with disappointment or blame. Over time, the culture that results from consistently helpful responses to honest assessment produces organisations that genuinely improve.
Building Continuous Assurance
The alternative to annual audit snapshots is continuous assurance - ongoing, automated, and embedded in normal engineering operations.
Continuous assurance has several components:
Automated compliance checks in CI/CD pipelines. Security scanning, dependency checking, code quality gates, and compliance policy checks run on every code change. Issues are caught at source rather than discovered months later.
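A policy check of this kind can be as small as a script the pipeline runs against the reports earlier stages produce. The report fields and thresholds here are hypothetical; real values belong in versioned configuration:

```python
# Hypothetical policy thresholds, checked on every pipeline run.
POLICY = {"max_critical_vulns": 0, "min_line_coverage": 80.0}

def check_policies(report: dict) -> list:
    """Return policy violations for one pipeline run.

    `report` is assumed to merge scanner and coverage output upstream.
    """
    violations = []
    if report["critical_vulns"] > POLICY["max_critical_vulns"]:
        violations.append("critical vulnerabilities present")
    if report["line_coverage"] < POLICY["min_line_coverage"]:
        violations.append("line coverage below threshold")
    return violations

# In the pipeline, exit non-zero when violations exist so the stage fails:
#   sys.exit(1 if check_policies(report) else 0)
```

Because the check runs on every change, a violation surfaces as a failed build at the moment it is introduced rather than as an audit finding months later.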
Real-time dashboards. Key assurance metrics - vulnerability backlog, test coverage trends, incident rates, deployment frequency - are visible continuously rather than produced for quarterly review.
Regular structured reviews. Monthly or quarterly reviews of the health dimensions described above, using the continuous data rather than producing evidence for the review.
Action tracking. Improvement actions identified through assurance are tracked in the same way as engineering work, with owners and due dates, and reported on at the same cadence as technical and delivery metrics.
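Tracking actions "in the same way as engineering work" implies the same overdue reporting. A minimal sketch, with a hypothetical action shape:

```python
from datetime import date

# Hypothetical improvement actions: (title, owner, due, done).
actions = [
    ("Raise coverage on billing service", "asha", date(2024, 3, 15), False),
    ("Close stale admin accounts", "lee", date(2024, 4, 20), False),
    ("Add latency alerting", "sam", date(2024, 2, 1), True),
]

def overdue(actions, today):
    """Open actions past their due date, reported at the same cadence
    as delivery and technical metrics."""
    return [(title, owner) for title, owner, due, done in actions
            if not done and due < today]
```

An assurance finding without an owner and a due date is an observation; with them, and with an overdue report someone reads, it becomes work.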
Continuous assurance does not eliminate the need for periodic external audit. It makes those audits easier to prepare for, more accurate when conducted, and more meaningful in their findings. More importantly, it means the engineering organisation is genuinely better rather than periodically compliant.