Chapter 7: The Three Simple Rules: Data Traffic Light, Human Wrapper, Prompt Dividend
The Practical Framework That Makes AI Governance Implementable This Week
Published: 01 March 2026
Reading time: 15-19 minutes
Key framework introduced: Data Traffic Light · Human Wrapper · Prompt Dividend
You have spent six chapters understanding the problem. The cascade risk. The procurement gap. The regulatory environment closing in. The informal governance that works brilliantly until it doesn't.
Here is where that changes.
This chapter delivers the framework. Not a 47-page AI policy document. Not a governance committee with quarterly review cycles. Not a compliance checklist so comprehensive it sits unread in a shared folder. Three rules. Three disciplines. The minimum viable governance that separates agencies that can answer a client's AI question from those that cannot.
Before introducing them, it is worth understanding why the number three matters.
Why Frameworks Fail — and What That Means for Yours
The Trustmarque AI Governance Index 2025, drawing on a structured survey of 507 UK IT decision-makers, found that 93% of organisations are using AI in some form. Only 7% have fully embedded governance. More than half — 54% — have minimal or no formal governance at all.
This is not a knowledge gap. The same report found that only 6% of leaders said there was no awareness of AI governance within their organisation. Awareness exists. Execution does not. Trustmarque's conclusion was blunt: the blockers are less about intention, and more about ownership, resources, and clarity on next steps.
That last phrase is the operative one. Clarity on next steps.
The cognitive science explanation is straightforward. George Miller's foundational 1956 research put the capacity of working memory at roughly seven items; Nelson Cowan's 2001 refinement revised the estimate down to approximately four meaningful chunks held at once. More than that, and things start to fall away. The research literature on policy compliance shows what happens when organisations design frameworks that exceed this capacity: people simplify informally, creating the gap between what the policy says and what the team actually does.
Converging evidence across security policy research, organisational compliance literature, and SME implementation studies consistently shows that framework complexity is a primary cause of governance failure — not in the boardroom, but at the operational level where the work actually happens. When a framework demands more of working memory than people have available, they either comply selectively or don't comply at all.
Three rules fit in working memory. They can be recalled in a client meeting. They can be explained to a new team member in ten minutes. They can be checked in the moment a piece of work is about to go out.
Most governance frameworks are designed for the organisation chart. The Three Simple Rules are designed for the human brain.
An Important Honest Framing
The Three Simple Rules are not a mandated framework. No regulator requires them by name. What regulators require — purpose-limited data use, meaningful human review, documented processes — these rules operationalise. They are the practical layer between regulatory principle and agency workflow.
The Data Traffic Light has institutional precedent in how the ICO, the UK government's security classification system, and cybersecurity practitioners already communicate risk. The Human Wrapper formalises what 84% of UK AI-using businesses are already doing informally, according to DSIT's January 2026 AI Adoption Research. The Prompt Dividend names something the industry is working out but has not yet named — FCLTGlobal calls it the 'AI efficiency dividend'; the PRCA lists prompting SOPs as governance tools.
The value here is the operationalisation. Your team hasn't had time to read the ICO's internal policy, the Stanford hallucination studies, the FCLTGlobal efficiency research, and synthesise them into something a fifteen-person creative agency can implement by Friday. This chapter does that work.
RULE 2: The Human Wrapper
We start here because this is where most agencies already live — and because the evidence for why it matters is the most visceral in this entire framework.
The Problem You Have Probably Already Encountered
A survey of 565 digital marketers, conducted by NP Digital in February 2026, found that 47.1% encounter AI inaccuracies several times per week. More troubling: 36.5% reported having published AI-generated content publicly that contained fabricated information. Only 23% said they felt comfortable using AI output without human review.
These are not edge cases. They are the operating condition of an industry that has adopted AI tools faster than it has developed the discipline to review them systematically.
The most vivid illustration of what happens when that discipline is absent came in July 2025. Deloitte Australia delivered a government report, commissioned for approximately AU$442,000, that contained fabricated academic references, non-existent researchers, and invented quotations. The tool used was Azure OpenAI GPT-4o. The failure was not the tool — it was the absence of systematic human review. QA processes that should have caught the errors did not. Deloitte confirmed the incident.
This was not a junior content team using a free AI tool without oversight. This was a Big Four professional services firm with substantial QA infrastructure. The Deloitte incident is significant precisely because it cannot be dismissed as someone else's problem.
The academic literature provides the underlying explanation. Researchers at Stanford's RegLab found hallucination rates of 58–88% on legal queries, depending on task type and model, in a 2024 paper published in the Journal of Legal Analysis. For professional services work — where claims must be accurate, references must exist, and facts must be verifiable — these are not tolerable error rates without systematic human review.
There is a specific psychological failure mode the ICO has named: automation bias. This is the documented tendency to defer to automated outputs without adequate scrutiny — to trust the tool because it produces confident, well-formatted answers. Rubber-stamping, in the ICO's language, does not constitute meaningful human review. The ICO's internal AI use policy, published in August 2025, establishes this explicitly.
The Good News — and the Important Distinction
The DSIT AI Adoption Research, published in January 2026 and drawing on a survey of 3,500 UK businesses, found that 84% of UK businesses that use AI apply at least some human oversight. Sixty-seven percent apply significant oversight.
Most agencies are already reviewing AI outputs before they go to clients. The Human Wrapper does not introduce a new step. It formalises the one you are already taking, so you can prove it happened.
That distinction matters more than it might seem. The difference between 'we review AI outputs' and 'we have documented human review of AI outputs' is the difference between a verbal assurance and an auditable process. Enterprise clients, regulated-sector procurement teams, and — increasingly — legal processes care about the latter, not the former.
The Data (Use and Access) Act 2025, which received Royal Assent in June 2025, establishes that meaningful human review carries legal weight in the context of automated decisions. 'Meaningful' is the operative word. Meaningful review is documented review. Review that can be shown to have happened, by a named individual, against specific criteria, with recorded outcomes.
What the Human Wrapper Actually Requires
The Human Wrapper is three things documented, not three additional steps:
  • Who reviewed the AI output
  • What they checked it against
  • What they changed before it was used
It does not require a separate review document for every piece of AI-assisted work. It requires that this information is captured somewhere — in a project management comment, in a shared note, in a brief annotation on a document — in a way that creates a retrievable record.
For a fifteen-person agency, implementation looks like this: every piece of work that involved AI assistance carries a line in whatever system the agency already uses — the project management tool, the job bag, the email thread — that reads: 'AI-assisted draft. Reviewed by [name] against [source/brief]. Changes: [what was adjusted].' Three fields. Two minutes. A retrievable record.
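If an agency wants that same record in machine-readable form, perhaps to search it later or to hand it to a procurement team, a minimal sketch might look like the following. The field names, the JSON-lines log file, and the helper function are illustrative assumptions, not a prescribed schema.

```python
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class HumanWrapperRecord:
    """The three Human Wrapper fields, plus a date stamp."""
    reviewed_by: str       # who reviewed the AI output
    checked_against: str   # what it was checked against (brief, sources, data)
    changes: str           # what was changed before the output was used
    review_date: str = ""  # filled in automatically if left blank

def log_review(record: HumanWrapperRecord, path: str = "ai_review_log.jsonl") -> None:
    """Append one review record to a JSON-lines file: a retrievable audit trail."""
    if not record.review_date:
        record.review_date = date.today().isoformat()
    with open(path, "a", encoding="utf-8") as handle:
        handle.write(json.dumps(asdict(record)) + "\n")

log_review(HumanWrapperRecord(
    reviewed_by="J. Smith",
    checked_against="Client brief v3 and the cited sources",
    changes="Corrected two statistics; removed an unverifiable claim",
))
```

A spreadsheet row or a project-management comment carrying the same three fields serves the same purpose; the medium matters far less than the retrievability.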
The Human Wrapper is not a compliance burden. It is what turns 'we review AI outputs' into an answer you can give to a client, a procurement team, or a regulator. Most agencies are already doing the review. The Wrapper makes it provable.
The Regulatory Convergence
The ICO's internal AI use policy, issued in August 2025, requires decision logging and verification steps as conditions of AI use in its own operations. The UK government's Procurement Policy Note 017, published February 2025, requires suppliers to disclose AI use in procurement. These are not future requirements — they are current conditions of doing business with public sector and regulated clients.
The Human Wrapper is the operational layer that makes compliance with these requirements possible. Without it, an agency cannot demonstrate that human review occurred. With it, the demonstration is a retrievable record.
RULE 1: The Data Traffic Light
The Human Wrapper catches errors after AI produces them. The Data Traffic Light prevents the worst from being possible in the first place.
Rule 2 operates after AI has already processed your inputs. Rule 1 operates before — at the moment a team member reaches for an AI tool and is about to type something into it. That moment is the single most important intervention point in your AI workflow, because what goes into an AI tool cannot be recalled.
The Problem Most Agencies Have Not Solved
Every agency that has used AI tools has team members who make individual judgements about what is appropriate to enter. Those judgements are usually sound. They are also unverifiable, inconsistent, and entirely dependent on the individual's understanding of what the tool does with the data.
Most team members using free-tier AI tools do not know — and have not been told — that queries submitted to consumer AI services are often used in model training, are visible to the service provider, and may persist beyond the session. The UK National Cyber Security Centre warned explicitly in February 2024 that queries submitted to AI tools may be visible to model owners, used in training data, or accessed by third parties depending on the service's terms.
For an agency handling client briefs, campaign strategies, unreleased product information, or patient data in healthcare communications, this is not an abstract risk. The data classification that has not happened is the classification that creates the breach.
There is a more subtle risk alongside the obvious one. Client data entered into an AI tool does not just expose that client's information — it can shape outputs in ways that cross-contaminate client work. An AI tool that has processed Competitor A's strategic positioning may surface elements of that in outputs generated for Competitor B. The tool does not know it is doing this. The operator does not see it happening. The Prompt Dividend section addresses how to manage this systematically. The Data Traffic Light is what prevents the most dangerous inputs from entering the system in the first place.
Institutional Precedent — You Are Not Inventing This
The traffic light framework for data classification is not a new idea. It is how serious institutions have communicated risk for decades.
The UK Government Security Classification Policy applies directly to AI tools. Government guidance explicitly requires that the classification of any information being processed by an AI tool must be considered before use, and that tools must be used only for information at or below the tool's approved classification level.
The Traffic Light Protocol (TLP) v2.0 — used by cybersecurity practitioners, national threat intelligence agencies, and information-sharing communities — classifies information sharing using four colour-coded designations that have become standard operating procedure across the industry. Red, amber, and green designations are already embedded in how professional information-security practice works.
The ICO's internal AI use policy, published August 2025, classifies AI inputs and outputs by security level and requires that classification to be considered before processing. The ICO governs data protection for the UK — its internal approach to its own AI use is a reliable signal of what it will expect from organisations it regulates.
Universities across the UK have adopted Red-Amber-Green frameworks for AI data guidance in academic contexts — student data, research data, and administrative data each sit in different classification tiers with different rules for AI tool use.
The Data Traffic Light is not a novel intervention. It is the established institutional approach to data risk, translated into the operational language of an agency team.
The Three Zones — Applied to Agency Work
Red Zone: Never in Any AI Tool
Red zone data should never enter any AI tool — not enterprise, not consumer, not the tool your agency has paid for and configured. The classification is absolute.
For UK agencies, red zone data includes:
  • Client data that is personally identifiable and protected under UK GDPR — patient identifiers in healthcare communications, contact databases, anything covered by a Data Processing Agreement
  • Commercially sensitive information under NDA — unreleased product details, M&A information, proprietary strategy documents
  • Authentication credentials, API keys, and access tokens — anything that could grant system access if exposed
  • Unreleased financial information — earnings data, pricing strategy, contract terms that are market-sensitive
The practical test for red zone classification: if this data appeared in a data breach notification, would it create a regulatory, legal, or reputational consequence? If yes, it is red.
Amber Zone: Enterprise Tools with a Data Processing Agreement Only
Amber zone data can be processed through AI tools, but only tools that have been approved by the agency and are covered by a Data Processing Agreement (DPA). Consumer-tier tools — free or freemium services that train on user inputs, have no DPA, and provide no data handling guarantees — are not appropriate for amber zone data.
For most UK agencies, amber zone data includes:
  • Client briefs and campaign strategies — not under NDA but competitively sensitive
  • Internal agency documents — financial data, HR information, operational processes
  • Aggregated client performance data — analytics, campaign results, research findings where the client expects confidentiality
  • Draft work product — early-stage creative that has not been through client review
The practical test for amber: this information should be treated as confidential, but it can be processed safely in an enterprise tool with appropriate data handling controls.
Green Zone: Any Approved Tool
Green zone data is public information, published research, and content that contains no client-specific or commercially sensitive material. Team members can use any approved tool for green zone inputs without restriction. For most UK agencies, green zone data includes:
  • Publicly available market research and industry reports
  • Published competitor content — website copy, advertising, press releases
  • Generic writing assistance — structure, grammar, formatting for content that contains no client specifics
  • Internal learning — summarising a published article, generating questions for a training session on published material
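Taken together, the three zones reduce to a short decision rule: classify the data first, then check the tool. A minimal sketch of that rule follows, assuming hypothetical category labels and a simple approved-tool flag; real classifications depend on your client contracts and DPAs.

```python
RED, AMBER, GREEN = "red", "amber", "green"

# Illustrative mapping from data categories to zones. The labels are
# hypothetical examples, not an exhaustive or authoritative classification.
ZONE_BY_CATEGORY = {
    "personal_data_under_gdpr": RED,
    "nda_material": RED,
    "credentials_or_api_keys": RED,
    "client_brief": AMBER,
    "internal_finance_or_hr": AMBER,
    "draft_work_product": AMBER,
    "published_research": GREEN,
    "public_competitor_content": GREEN,
}

def may_enter_tool(category: str, tool_has_dpa: bool) -> bool:
    """The three-second check: red never, amber only with a DPA, green freely."""
    zone = ZONE_BY_CATEGORY.get(category, RED)  # unknown data defaults to red
    if zone == RED:
        return False
    if zone == AMBER:
        return tool_has_dpa
    return True

assert may_enter_tool("client_brief", tool_has_dpa=True)        # amber + DPA: yes
assert not may_enter_tool("client_brief", tool_has_dpa=False)   # amber, consumer tool: no
assert not may_enter_tool("nda_material", tool_has_dpa=True)    # red: never
```

Defaulting unknown categories to red mirrors the practical test above: if you cannot classify a piece of data confidently, treat it as though its exposure would carry consequences.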
The Unspoken Rule Problem
Every experienced agency has an 'unspoken rule' about data handling. Everyone knows — roughly — what is appropriate to share and what is not. A senior copywriter instinctively knows not to paste a client's full brief into a consumer AI tool. A junior account manager might not. A new starter almost certainly does not.
The unspoken rule is not a governance system. It is tribal knowledge that exists in the heads of experienced team members and gets transmitted inconsistently to new ones. It works — until the team member who carries it moves on, or until a new tool creates a situation no one has considered before, or until the person who 'should have known better' has a deadline and reaches for the fastest available tool.
The Data Traffic Light converts the unspoken rule into an explicit, teachable, verifiable classification. It takes three seconds to apply. Red, amber, or green. The decision is made before the data enters the tool.
The Data Traffic Light answers the question your team is already asking informally: 'Can I put this in the AI?' It turns an individual judgement call into an organisational standard, and it creates the upstream control that the Human Wrapper's downstream review depends on.
RULE 3: The Prompt Dividend
The first two rules protect you. The third one pays you.
The Data Traffic Light prevents inappropriate data from entering AI tools. The Human Wrapper ensures outputs are reviewed, corrected, and documented before use. Both are protective disciplines — they reduce risk, create auditability, and make your AI governance demonstrable.
The Prompt Dividend does something different. It turns the AI work your team is already doing into organisational knowledge rather than leaving it in one person's chat history.
The McKinsey Gap — Why Most Agencies See No Return
McKinsey's State of AI 2025 report found that 8 in 10 organisations report no significant bottom-line gains from AI adoption. Only about 6% of organisations, the group McKinsey classes as high performers, capture real value. What distinguishes them is not which tools they use or how much they spend: they fundamentally redesign their workflows around AI capability rather than using AI as a better version of what they did before.
For agencies, that distinction is precise. Using ChatGPT to write a first draft faster than you would have written it yourself is productivity improvement. It is also entirely personal — it lives in one individual's interaction history, produces no organisational learning, and evaporates when that person is unavailable, changes role, or leaves.
Research by Dell'Acqua et al. — a study of 758 BCG consultants published as Harvard Business School Working Paper 24-013 — found that consultants working with AI completed 12.2% more tasks, worked 25.1% faster, and produced outputs rated more than 40% higher in quality than colleagues working without it. Consultants who also received structured guidance on how to prompt the tool performed better still. The performance gain was not simply from access to the tool; it compounded with guidance that could be shared, refined, and built upon.
That is the gap the Prompt Dividend closes.
What Systematic Prompt Capture Looks Like in Practice
West Monroe, a US management consulting firm, built an internal AI resource called 'Nigel' that contains 278 curated prompts used across client engagements. The library has been used more than 12,000 times. Each prompt is searchable and ranked. When a consultant joins the firm, they do not start from zero — they inherit the accumulated AI learning of the organisation.
A library of 278 prompts used more than 12,000 times did not happen by accident. It happened because someone decided that what worked individually should become available collectively. The Prompt Dividend is that decision, made explicit.
The PRCA Green Paper on AI governance, published in 2025, lists prompting SOPs (standard operating procedures) as governance guardrails — operational documentation of how AI tools are used, not just whether they are permitted. FCLTGlobal, the long-term investment research organisation, has identified what it calls the 'AI efficiency dividend': the compounding value created when AI-generated efficiency gains are reinvested rather than merely captured as short-term cost savings. Reinvestment — of time, of knowledge, of process learning — generates more than two times the return of simple cost extraction, according to FCLTGlobal's research.
The Prompt Dividend is how agencies capture that reinvestment.
The Story of What Happens Without It
In 2023, I introduced ChatGPT to our lead medical copywriter at the pharmaceutical agency. Brilliant writer. Deep therapeutic area knowledge. Essential to client delivery.
Her immediate response was fear. If she documented her process, showed how she worked, would AI replace her? So she held back for a year.
When we revisited it in 2024, something had shifted. She understood that AI was not replacing her judgement — it was scaling her capability. Research compilation, first-draft structure, reference formatting: the AI handled these. Her verification, her therapeutic expertise, her editorial taste: these remained hers.
Work that took three days compressed to three hours. Same quality. More capacity. She controlled the AI; the AI amplified her.
But here is what never happened. We never built the prompt library. Never documented the workflows. Never captured the organisational learning. When the business wound down in 2025, that expertise left with her.
Fear delayed adoption by a year. Trust eventually enabled it. Governance never caught up.
What she had developed — in fourteen months of productive AI-assisted work — was organisational knowledge. The specific prompts that worked for pharmaceutical regulatory content. The review workflow that caught the errors a medical copywriter's trained eye needed to correct. The sequencing that produced usable first drafts rather than plausible-sounding confabulations. None of it was captured. All of it walked out the door.
The Prompt Dividend is the mechanism that would have prevented that loss.
What the Prompt Dividend Requires
Prompt capture does not require sophisticated technology. It requires a shared location — a document, a folder, a page in a project management system — where prompts that produced good results are recorded with enough context to be reused (a minimal sketch of one entry follows the list):
  • What the prompt was
  • What task it was used for
  • Which tool produced the best results
  • What modifications improved the output
  • Any data classification considerations (connecting back to the Traffic Light)
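As a sketch, one library entry could be captured as a small structured record. The field names below mirror the five capture points and are illustrative, not a required format.

```python
from dataclasses import dataclass, field

@dataclass
class PromptLibraryEntry:
    """One captured prompt, with the context the next user needs to reuse it."""
    prompt: str          # the prompt text itself
    task: str            # what it was used for
    best_tool: str       # which tool produced the best results
    modifications: str   # what tweaks improved the output
    data_zone: str       # Traffic Light classification of the inputs
    tags: list[str] = field(default_factory=list)  # optional, for searchability

entry = PromptLibraryEntry(
    prompt="Summarise the attached published report in 200 words for a B2B audience...",
    task="First-draft summary of public market research",
    best_tool="Enterprise LLM with a DPA in place",
    modifications="Specifying word count and audience roughly halved editing time",
    data_zone="green",   # published research only: no client specifics
)
```

A shared spreadsheet with the same five columns works just as well. The asset is the habit of recording, not the technology that holds the records.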
The capture habit is the governance step. The library that grows from consistent capture is the organisational asset. A ten-person agency that captures prompts consistently for six months has something no competitor can replicate quickly — institutional AI knowledge that makes the team faster and the output more consistent.
For healthcare communications agencies specifically, this has an additional dimension. Prompt libraries for regulated content — content that must meet ABPI Code requirements, that must pass medical, legal, and regulatory review — are a quality management asset. The prompts that consistently produce content closer to final approval are prompts worth documenting, standardising, and protecting.
The Prompt Dividend is the author's term for a practice the industry is developing but hasn't yet named consistently. FCLTGlobal calls it the 'AI efficiency dividend.' The PRCA describes it as prompting SOPs. What they describe is the same principle: AI efficiency captured as organisational knowledge compounds. Left in individual chat histories, it evaporates.
The Classify → Review → Capture Cycle
The three rules are not independent disciplines. They form a complete workflow that mirrors how AI is actually used in agency work.
Before You Prompt: Data Traffic Light
A team member opens an AI tool. Before entering anything, a three-second check: is this red, amber, or green? Red: close the tab and use a different approach. Amber: is this an enterprise tool with a DPA? Green: proceed.
This is not an interruption to workflow. It is the moment of conscious decision-making that prevents the unintended data exposure that most agencies currently manage through tribal knowledge and optimism.
After AI Responds: Human Wrapper
The AI has produced output. Before it goes anywhere — to a client, into a brief, into a document — someone reviews it. Not rubber-stamp review. Substantive checking against the source material, the brief, the facts. What the AI got right. What it invented. What needs to change.
That review is documented. Who did it, what they checked, what they changed. Three fields in whatever system the agency already uses.
After You Deliver: Prompt Dividend
The work is done. Before moving to the next task, a moment of capture: did this prompt produce something worth keeping? Was there a sequencing, a framing, a set of constraints that produced better output? If yes, it goes into the shared library. If not, the session closes and the work moves on.
The capture habit does not add significant time to delivery. It adds significant value to the next person who does a similar task.
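For readers who find the earlier sketches useful, the whole cycle collapses into one small function. Everything here is illustrative: the category labels, the field names, and the stand-in for the AI call itself.

```python
from typing import Optional

def traffic_light(category: str) -> str:
    """Before you prompt: illustrative three-zone classification (not exhaustive)."""
    red = {"personal_data", "nda_material", "credentials"}
    amber = {"client_brief", "internal_docs", "draft_work"}
    if category in red:
        return "red"
    return "amber" if category in amber else "green"

def classify_review_capture(category: str, tool_has_dpa: bool) -> Optional[dict]:
    # 1. Classify. Red never enters a tool; amber needs an enterprise tool with a DPA.
    zone = traffic_light(category)
    if zone == "red" or (zone == "amber" and not tool_has_dpa):
        return None  # close the tab and use a different approach

    draft = "...AI output stand-in..."  # 2. The AI responds.

    # 3. Review (Human Wrapper): who checked it, against what, and what changed.
    record = {
        "zone": zone,
        "output_excerpt": draft[:40],
        "reviewed_by": "A. Reviewer",
        "checked_against": "the brief and source material",
        "changes": "corrected one invented reference",
    }

    # 4. Capture (Prompt Dividend): keep the prompt only if it earned its place.
    record["library_entry"] = {"prompt": "...", "task": category}
    return record

print(classify_review_capture("client_brief", tool_has_dpa=True))
```

The point of the sketch is the ordering: classification gates the input, review gates the output, capture closes the loop.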
The Framework as Evidence
These three rules, consistently applied and documented, create something UK agencies increasingly need: evidence that AI governance exists and functions.
The Trustmarque 2025 research found that only 5% of organisations use external partners for ongoing governance oversight, despite the availability of expert services. Most are managing it in-house, often without clear ownership. The Three Simple Rules give that in-house management a structure — not a bureaucracy, but a documented, demonstrable process.
When an enterprise client asks — as they increasingly will — 'how does your team use AI?', these three rules are the answer. Not a policy document. An operational reality you can describe, demonstrate, and evidence.
Implementation This Week
The transformation from 'governance is overwhelming' to 'this is simple enough to implement this week' is not metaphorical. Here is what implementation actually looks like for an agency starting from scratch:
  • Day 1: Classify your current AI tools. Which are enterprise tools with DPAs? Which are consumer-tier? This determines your amber zone boundary.
  • Day 2: Draft a one-page Data Traffic Light document — three zones, the types of data that sit in each, the tools approved for each zone. Circulate for team input.
  • Day 3: Add a 'Human Wrapper' line to your existing job management system — the three fields, as standard fields on AI-assisted work. This does not require a new system. It requires three new fields in the one you have.
  • Day 4: Create a shared document — a Google Doc, a Notion page, a Teams folder — titled 'AI Prompt Library.' Add the first entry yourself. The prompt that produced the best first draft this week. The context. The tool.
  • Day 5: Hold a twenty-minute team session. Explain the three rules. Walk through the Traffic Light document. Show the Prompt Library. Answer questions. The governance is now operational.
This is not a framework that requires a consultant, a governance committee, or a compliance budget. It requires a week of focused work and a commitment to the three disciplines going forward.
What You Have Now
You have the framework. Three rules that cover the three moments that matter in AI-assisted agency work: before you prompt, after AI responds, and after you deliver.
You do not have a governance programme. You have a foundation. The Three Simple Rules are the minimum viable governance — the version of AI oversight that a fifteen-person agency can implement, sustain, and explain to a client this month. They are also the foundation on which Chapter 8's four-week implementation programme builds.
The agencies that implement this week will be ahead of the 54% of organisations the Trustmarque research found operating with minimal or no formal governance. They will be able to answer the question that enterprise procurement now asks. They will have documented evidence of human review, data classification, and knowledge capture.
The gap between knowing governance matters and having governance that functions is the gap most agencies are currently sitting in. The Three Simple Rules close it without requiring the organisation chart overhaul, the 47-page policy document, or the dedicated AI governance committee.
Brains lead. Bots follow. The Three Simple Rules are the discipline that makes that principle operational.
Chapter 8 is what this looks like implemented — week by week, for an agency starting where most agencies are: tools already in use, team already ahead of the policy, no existing governance infrastructure.
Key Takeaways
  • Most governance frameworks fail at the operational level, not the strategic one: The Trustmarque AI Governance Index 2025 found that only 7% of UK organisations have fully embedded governance — not because the other 93% don't understand the need, but because their frameworks exceed what working memory can carry into a meeting, a deadline, or a client call. Three rules fit. Forty-seven-page policy documents do not.
  • The Human Wrapper formalises what most agencies are already doing informally: DSIT research found that 84% of UK AI-using businesses apply at least some human oversight. The gap is not whether review happens — it is whether it is documented. 'We review AI outputs' is a verbal assurance. 'Reviewed by [name] against [brief], changes noted' is an auditable record. Only the latter satisfies enterprise procurement, the ICO, and the Data (Use and Access) Act 2025.
  • The Data Traffic Light converts tribal knowledge into an organisational standard: Every experienced agency has an unspoken rule about what data is appropriate to share with AI tools. That rule lives in the heads of senior team members and gets transmitted inconsistently to new ones. The Traffic Light makes it explicit, teachable, and verifiable — in three seconds, before the data enters the tool.
  • The Prompt Dividend is the difference between personal productivity and organisational capability: McKinsey found that 8 in 10 organisations report no significant AI gains. The 6% who do capture real value fundamentally redesign their workflows. The distinction for agencies is precise: AI efficiency that lives in one person's chat history evaporates when they go on holiday, change role, or leave. Prompt capture turns individual learning into shared organisational knowledge.
  • These three rules are not a regulatory mandate — they operationalise what regulation already requires: No regulator names the Three Simple Rules. The ICO, the Data (Use and Access) Act 2025, and Procurement Policy Note 017 require purpose-limited data use, meaningful human review, and documented processes. The Three Simple Rules are the practical layer that makes those requirements operational for a fifteen-person agency without a compliance team.
  • Implementation does not require a consultant, a committee, or a budget: Day one: classify your AI tools. Day two: draft the Traffic Light document. Day three: add three fields to your existing job management system. Day four: create a shared prompt library. Day five: brief the team. The governance is operational. The framework is designed to be implemented with internal resources this week.
What's Next
Next Chapter: Chapter 8, Building Your Governance Foundation in Four Weeks, publishes 02 March 2026
You now have the framework. Chapter 8 is what implementation looks like — week by week, task by task, for an agency starting from where most agencies are: AI tools already in use, team already ahead of the policy, no existing governance infrastructure. The Four-Week Governance Sprint turns the Three Simple Rules from a framework you understand into a system your agency can demonstrate.

Implement This Now
The Three Simple Rules are designed for immediate implementation. Five working days. No consultant, no budget, no governance committee required.
Download the Free 5-Day Implementation Guide — it walks you through each day, with templates for the Data Traffic Light, Human Wrapper fields, and Prompt Library ready to use from Day 1.
If you'd rather understand your agency's full AI readiness picture before implementing, the AI Readiness Assessment maps exactly where you stand and what to prioritise first.
Book an AI Readiness Assessment (£500) — 90-minute assessment of your current AI usage, readiness gaps, and priority actions.

Disclaimer
This chapter provides general information about AI governance practices for UK professional services agencies. It is not legal, regulatory, or professional advice.
Regulatory requirements vary by sector, client base, and operational context. The examples and frameworks presented here reflect common patterns across agency operations but may not address sector-specific obligations (e.g., healthcare communications agencies subject to ABPI Code, legal marketing subject to SRA regulations, financial services agencies under FCA oversight).
For compliance questions specific to your agency's regulatory environment, consult qualified legal counsel familiar with UK GDPR, ICO guidance, and your sector's requirements.
Research methodology: All statistics, case studies, and regulatory references are documented with sources. Where examples are used without specific attribution, they represent composite patterns observed across multiple agencies rather than individual client situations.
Commercial disclosure: Brains Before Bots offers Shadow AI governance services to UK agencies (AI Readiness Assessments, Done-With-You AI Workflow Builds, and Fractional AI Leadership retainers). This book is designed to provide standalone value whether or not you engage our services. The frameworks are implementable with internal resources.

Questions or feedback? Email hello@brainsb4bots.com
© 2026 Brains Before Bots. All rights reserved.