Chapter 9: The AI Readiness Assessment: Before You Build Anything
The Two-Week Diagnostic That Tells You Exactly Where You Stand
Published: 15 March 2026
Reading time: 13-16 minutes
Key framework introduced: The AI Readiness Assessment · Two-Week Diagnostic
The vendor application process had a particular weight to it. I knew going in what was at stake.
A pharmaceutical multinational putting us through due diligence wasn't a formality. It was a decision that would determine the shape of the next three to five years. Year one would be slow — a testing year, a watching year. But if we performed (and at XEIOH, we always did), years two, three, and four grew. Steadily. Predictably. The kind of revenue growth that lets you plan, hire properly, and stop wondering about cash flow in September.
So I read the contract carefully. Every clause.
The one that stopped me, more than once, was the outsourcing clause. The precise language varied by client. The effect was the same: if we used anyone outside our core team to deliver work on their account, we were obligated to disclose it. In some cases, we had to seek approval. In a sector where freelancers are how smaller agencies stay viable — pulling in medical writers, disease area specialists, and regulatory consultants for three to six months when the work demanded it, letting them go when it didn't — this wasn't a theoretical concern. It was a practical one with a real answer required.
Where I could, I negotiated the clause out. Where I couldn't, we tightened the paperwork. Freelancers signed back-to-back commitments — the same confidentiality, the same data handling obligations, the same standards we'd committed to the client. NDAs. Approved supplier confirmations. The administrative overhead was real. But it meant that when the question came — and in pharmaceutical, the question always came eventually — we had an answer. A specific one. Documented.
If I were managing those same client relationships today, the clause I'd be watching isn't the outsourcing clause. It's the AI disclosure clause. And the back-to-back requirement wouldn't just be NDAs. It would include the AI requirements the client had placed on us: which tools are approved, what data classification rules apply, what the training data controls are, and confirmation that the freelancer has received the basic training to understand them.
The structural problem hasn't changed. Third parties — freelancers, contractors, specialist suppliers — enter agency workflows carrying their own tools, their own habits, and their own accounts. A medical writer working on a pharmaceutical account isn't logging into the agency's managed AI environment. They're using whatever they use. Probably a personal account. Possibly one without a Data Processing Agreement in place. Almost certainly one that hasn't been considered in anyone's AI governance documentation.
I couldn't govern what I hadn't mapped. And I couldn't map what I hadn't asked.
That's what this chapter is about. Not governance as an abstract ambition. Governance as something you can only build properly once you have a clear picture of what you're actually building on — who's doing the work, which tools they're using, and what happens to client data when they do.
GovernFirst isn't a philosophy. It's a sequence. And the Assessment is where the sequence begins.
The Visibility Problem
Here is what most agency principals believe about their team's AI usage: they use the tools we know about, broadly in the ways we've discussed, and not with anything sensitive that would create a problem.
Here is what the research consistently finds: that belief is wrong in most agencies, by a margin that matters.
The Microsoft UK and Censuswide survey (October 2025, n=2,003 UK employees) found that 71% of employees use unapproved AI tools at work. More than half — 51% — do so every week. These aren't outliers. This is the operating reality of the British professional workforce right now. The agency principal who believes their team is using AI only through approved channels is, statistically, almost certainly wrong.
The self-assessment problem runs deeper than individual tools. It's structural. The Skillcast and YouGov UK Corporate Compliance Survey (n=4,000 UK respondents) found that 85% of UK managers claim data protection is fully embedded in all their business processes. Only 38% were confident they could meet the ICO's 72-hour breach reporting requirement. That's a 47-point gap between what managers believe is true and what they can actually verify. Same survey. Same respondents. Completely different answers depending on whether the question asked for a claim or for evidence.
This pattern — confident belief, unverifiable in practice — appears across GDPR compliance, ISO 27001 implementation, cyber security readiness, and change management. It isn't a personal failing. It is a documented property of how organisations assess their own risk. People know what they intend. They don't always know what's actually happening.
And yet most agency AI governance conversations start with the question: "What tools does your team use?" The principal answers. The list gets documented. The governance policy gets written around the list. Job done.
Except the list is almost certainly incomplete. And the policy governs the tools the agency knows about — not the ones it doesn't.
The freelancer dimension compounds this significantly. Think about how most agencies actually staff up under pressure. A core team — five, eight, twelve people — and a rotating cast of specialists who come in when the work demands it. Medical writers on retainer during busy regulatory periods. Graphic designers and animators brought in for a campaign, then gone. Illustrators commissioned for two or three jobs a year — not enough volume to justify a staff role, but essential when the brief requires them. Disease area specialists consulted when the agency moves into an unfamiliar therapy area and needs clinical credibility fast. Each of them carries their own tool stack. Their own AI habits. Their own accounts. The agency's approved tool list doesn't extend to them by default. The governance policy doesn't reach them unless someone has specifically designed it to.
What AI are they using on your client work? Almost certainly not your agency's managed environment. Probably a personal account. Often one that hasn't been considered in anyone's AI governance documentation — yours or theirs.
This isn't a criticism of how agencies operate. It's a structural reality of how UK agency work gets done. Freelancing is embedded in how the sector functions. The AI governance frameworks being built right now are almost universally designed for fixed teams in managed environments. The actual workforce is more fluid than that — and the gap between the two is where ungoverned AI usage lives.
The procurement environment is making the visibility gap consequential in a very specific way.
The CIPR State of the Profession survey (September 2024, n=2,016) found that 63% of in-house marketing and communications teams say they ask their agencies about AI use. Only 24% of agencies report being asked. That 39-point gap is not a misunderstanding about frequency. It's a qualification gap. Clients are evaluating agencies on AI governance — formally or informally — and many of the agencies being evaluated don't know the assessment is happening.
PPN 017, effective February 2025, made AI disclosure questions standard in central government procurement. The framework covers how AI was used in bid preparation, how it features in service delivery, and what controls exist over client data as training material. Government contracts now require agencies to answer these questions. ISBA's Generative AI Supplemental Agreement (published April 2024, advisory) gives enterprise marketing clients a ready-made framework for requesting AI disclosure from agencies — specifying timing, purpose, type, and form of AI use before key milestones. The direction of travel is clear. The pace varies by sector. The destination doesn't.
The agency that hasn't mapped its AI usage before RFP season is answering these questions from memory. From belief. That's the 47-point gap made commercially visible.
What The Assessment Actually Does
The AI Readiness Assessment is a two-week structured diagnostic. It has a specific scope, a specific process, and four concrete outputs. It's not a conversation about strategy. It's not a training session on AI best practice. It's an evidence-gathering exercise that produces a clear picture of where the agency actually stands — not where it believes it stands.
Week 1 is Discovery.
Discovery means mapping every AI tool in active use across the agency, regardless of whether it appears in an approved list. This is done through structured staff conversations — typically role-based, covering the tools people use, how often, for what type of work, and what information they include. It also involves reviewing existing documentation: policies, contracts, data processing agreements, client terms. And it involves mapping the accounts and platforms accessible through agency devices and networks, including any that operate outside managed infrastructure.
Most agencies expect Week 1 to confirm what they already know. Discovery almost always surfaces additional tools. Tools adopted informally for specific tasks. Tools being used by individuals who found them helpful before anyone thought to ask whether they should. Tools that have been in active use for months without coming up at a leadership level. This isn't unusual and it isn't a failure of management. It's the natural consequence of how AI adoption has worked: bottom-up, driven by individual productivity, preceding any governance structure.
The freelancer question typically surfaces here too — and it's almost always a gap nobody had mapped. Which AI tools are the medical writers using? What about the animator brought in for the last campaign, or the disease area specialist consulted during the therapy area expansion? Are they using personal subscriptions? What are the terms of service on those accounts regarding data retention and model training? Has any client data passed through a platform without a Data Processing Agreement in place? These questions aren't asked to create alarm. They're asked because the answers determine what the governance framework actually needs to cover — and in most agencies, they haven't been gathered before.
Discovery also surfaces the data flows. Not just which tools — what information has moved through them. This is where the most significant findings tend to emerge. The tool inventory is manageable. The data flow picture — which client information, in what form, through which platforms, under what terms — is where the actual exposure becomes visible, and where the distance between current practice and defensible practice becomes clear.
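For readers who want to see the shape of what Discovery produces, here is a minimal sketch of a per-tool record in Python. Treat it as illustrative: the field names are my assumptions for the sketch, not the assessment's actual template.

```python
from dataclasses import dataclass

@dataclass
class ToolRecord:
    """One row of the Week 1 discovery inventory (illustrative fields only)."""
    name: str                   # e.g. "ChatGPT"
    account_type: str           # "personal", "team", or "enterprise"
    used_by: list[str]          # roles, including freelance roles
    client_accounts: list[str]  # which client work it has touched
    data_categories: list[str]  # e.g. "client confidential", "public"
    dpa_in_place: bool          # is a Data Processing Agreement signed?
    trains_on_inputs: bool      # per the current terms of service
    on_approved_list: bool      # was leadership aware of it before Discovery?

# A record of the kind Discovery conversations routinely surface:
example = ToolRecord(
    name="ChatGPT",
    account_type="personal",
    used_by=["freelance medical writer"],
    client_accounts=["regulated pharma account"],
    data_categories=["client confidential"],
    dpa_in_place=False,
    trains_on_inputs=True,
    on_approved_list=False,
)
```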
Week 2 is Mapping and Recommendations.
Mapping takes the discovery findings and organises them into a structured picture of current state. Each tool gets assessed against a consistent set of criteria: what category of data does it process, does the agency have a Data Processing Agreement in place, what do the terms of service say about data retention and model training, is there a business or enterprise tier available, and how does current usage compare with what those terms permit? This isn't a compliance checklist. It's a capability inventory — a documented record of what the agency has, how it's being used, and the gap between current practice and practice that could be explained and defended.
Recommendations are specific, sequenced, and proportionate to what discovery actually found. Not generic AI governance advice from a template. Specific to the tools in use, the workflows where they appear, and the client relationships that create the highest obligations. Priority is assigned by exposure — what creates the most immediate risk, what can be addressed quickly, what requires structural change over time.
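To make "priority assigned by exposure" concrete, here is a minimal sketch of how records like the one above might be triaged. The weights and thresholds are illustrative assumptions, not a published scoring rubric.

```python
def exposure_priority(tool: ToolRecord) -> str:
    """Triage a mapped tool by exposure (illustrative weights and thresholds)."""
    score = 0
    if "client confidential" in tool.data_categories:
        score += 3  # sensitive client data has already moved through it
    if not tool.dpa_in_place:
        score += 2  # no contractual cover for how that data is handled
    if tool.trains_on_inputs:
        score += 2  # inputs may persist as training material
    if tool.account_type == "personal":
        score += 1  # outside managed infrastructure
    if not tool.on_approved_list:
        score += 1  # outside existing governance entirely
    if score >= 6:
        return "address immediately"
    if score >= 3:
        return "address this quarter"
    return "monitor"

print(exposure_priority(example))  # the record above scores 9: "address immediately"
```

The point of the sketch isn't the arithmetic. It's that priority becomes a property of documented evidence rather than a judgment call made from memory.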
The four outputs are:
A tool inventory. A complete record of every AI platform in active use — who uses it, in what context, under what terms, and whether it extends to freelancers and contractors or stops at the permanent team.
A workflow map. A picture of how AI currently sits within the agency's production process — where it enters the work, where it creates value, and where usage is running ahead of governance.
A gap analysis. A specific account of the distance between current practice and defensible practice, framed as a clear set of priorities — not a list of failures.
A governance recommendations report. The actionable sequence: what to implement first, what takes longer, and what the agency needs to document before it can answer client procurement questions with confidence. For agencies with freelancers and contractors, this includes the back-to-back requirement — the AI obligations the agency carries to clients, reflected in the agreements and basic training provided to third parties working on client accounts.
This sequencing matters. McKinsey's Global Transformation Survey (2021, n=1,034) found that 25% of transformation value is lost at the diagnostic and target-setting phase — before any implementation begins. Organisations that skip proper diagnostic work don't save time. They create rework. They build governance structures on unmeasured foundations, then have to rebuild when those foundations don't hold up under client scrutiny or regulatory question. The Prosci 12th Edition research (2023, n=2,668) confirms the performance gap: projects with excellent structured methodology meet their objectives at 88%. Projects with poor methodology: 13%. A 7x gap. Not about talent or intention. About whether you measured before you built.
One immediate, practical pressure point. From 27 April 2026, any Cyber Essentials assessment using the updated Requirements for IT Infrastructure v3.3 must treat all cloud services that store or process organisational data as in scope. The requirement is unambiguous: "Cloud services cannot be excluded from scope." Account-based AI tools that process client or organisational data fall within that definition. Agencies that haven't identified and included those tools risk failing the assessment or having existing certificates revoked if the services are omitted or found non-compliant. This is not a projected future pressure. It is a concrete, dated switchover — and the inventory work required to meet it is exactly what the Assessment produces.
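Against an inventory in the form sketched earlier, the Cyber Essentials scope question becomes a filter rather than a guess. Again a sketch, assuming the illustrative fields above and treating every account-based AI tool as a cloud service:

```python
def cyber_essentials_scope(inventory: list[ToolRecord]) -> list[ToolRecord]:
    """Flag tools the v3.3 scope rules would pull into assessment:
    cloud services that store or process organisational data.
    Illustrative check only; assumes every tool here is cloud-accessed."""
    return [
        t for t in inventory
        if any(category != "public" for category in t.data_categories)
    ]

in_scope = cyber_essentials_scope([example])  # -> [example]
```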
The Assessment gives you the map. The Done-With-You AI Workflow Build is the journey — the four-week implementation that takes the map's findings and builds the governance structure the agency needs into its actual working practices.
What Readiness Confidence Looks Like
There is a simple test for whether an agency has genuine AI readiness. Not a questionnaire, not a certification, not a policy document sitting on a server somewhere.
One question.
When a client asks how your team uses AI in its work — do you have a real answer?
A real answer is specific. It names the tools and describes the controls. It explains what happens to client data — how AI-generated work is reviewed before it reaches the client, whether the platforms in use have Data Processing Agreements, what the terms say about model training. It covers freelancers and contractors too, not just the permanent team. An MD who has that answer doesn't pause to construct it. They don't hope the client doesn't follow up.
Most agencies can't give that answer today. The Trustmarque AI Governance Index (July 2025, n=507 UK IT decision-makers) found that 93% of UK organisations use AI in some form. Only 7% have fully embedded governance. 54% have minimal or none. The gap between adoption and governance is the norm. Not the exception.
The Assessment closes the gap between what an agency believes about its AI usage and what it can actually verify and articulate. After the assessment, the MD has specific answers — not because they've done a course or read a guide, but because someone has looked, systematically, and documented what they found.
What does that confidence enable, commercially?
Enterprise client conversations that weren't previously possible. Enterprise and regulated clients — pharmaceutical companies, financial services firms, large healthcare organisations — are adding AI questions to agency briefings and procurement processes. The agency that can answer with specifics isn't just compliant. It's differentiated. Most of its competitors are answering the same questions from belief rather than evidence. That gap is a commercial advantage available to any agency willing to do the diagnostic work first.
Procurement responses that don't stall at the AI disclosure section. Where PPN 017 applies, the questions are already there. Where ISBA's Generative AI Supplemental Agreement has been adopted, enterprise marketing clients now have a ready-made framework for requesting detailed AI disclosure — timing, purpose, types, controls. The agency with documented governance answers from existing records. That readiness signals something beyond the answer itself. It signals that the governance is operational, not performative.
Team confidence that doesn't depend on individual judgment calls. Without clear AI governance, team members make their own decisions — which data to include in a prompt, which tools are appropriate for which tasks, when AI-generated output needs human review. Those decisions are well-intentioned. They're inconsistent. They're undocumented. And they extend to every freelancer on every client account without the agency ever having made a deliberate choice about it.
Now — the objection that comes up every time.
"We already know what tools our team uses. We don't need an assessment to tell us what we can see."
This is almost always sincere. And it's incomplete — which is precisely the point.
Knowing which tools are visible is different from knowing which data has moved through them. It's different from knowing whether any client information is currently sitting in a consumer AI platform. It's different from being able to produce a documented answer to that question in 72 hours if a client asked. The Skillcast and YouGov data shows a 47-point gap between what UK managers claim about their data protection practices and what they can actually verify. That gap doesn't exist because the managers are careless. It exists because self-assessed readiness and verifiable readiness are structurally different things.
The ICO's enforcement record reinforces this. The Capita case is instructive. The ICO's Article 83 calculation produced a combined starting point of just over £58m. The final agreed penalty was £14m. But the penalty notice explicitly counts it as an aggravating factor that vulnerabilities had been identified by penetration testing on multiple occasions and never remediated. Surfacing a problem does not create liability. Surfacing a problem and not acting creates liability. The assessment is the act of surfacing.
The question most agency principals ask after the Assessment isn't whether it was worth doing. It's why they waited.
What Clarity Actually Looks Like
I understood what made the difference between XEIOH and Zonke only by looking backwards. XEIOH's governance structures had been inherited from pharmaceutical client and vendor requirements — built to satisfy the outsourcing clause, to pass the vendor application, to keep the clients. Not designed for resilience. Not designed for crisis. When the crisis came anyway, the structures held. What I recognised in hindsight was that they held because they were real — operationalised, documented, practised. Not because I'd been clever. Because the diagnostic work had been forced on us by the clients who knew what they needed to see.
That's the diagnostic insight made visible in reverse. You can't rely on a governance framework you haven't properly mapped. You can't build a framework that holds up until you know what you're actually building on.
The AI Readiness Assessment gives you what self-assessment cannot: specificity. Not a general sense that the agency is broadly in order. A documented record of exactly which tools are in use, exactly what data has moved through them, exactly which freelancers and contractors are working outside the governance frame, and exactly what needs to happen to close the gaps. That specificity is what allows an MD to answer the client AI question with confidence — not because they've prepared talking points, but because they know the actual answer.
The Done-With-You AI Workflow Build is the four-week implementation that follows. It takes the assessment's findings and builds governance into the existing rhythm of the agency — not layered on top of the work, built into it. Chapter 10 shows how that happens without slowing anything down.
Most agencies book the Assessment to get clarity before the Build. The sequence exists because clarity and implementation are different phases. The assessment tells you where you are. Chapter 10 shows you how to build from there.
What You Have Now
You've just read the case for doing the diagnostic before building the framework. One question tends to sit with agency principals after this chapter.
Do you know what's actually happening inside your agency's AI usage right now?
Not what you've been told. Not what you've noticed. What's actually happening — across the permanent team, and across every freelancer, contractor, illustrator, and specialist who has touched client work in the last twelve months.
That's the gap the assessment is designed to close. Most agencies that go through it find more than they expected. That's not a warning. It's the point. The tools were already there. The data flows were already there. The gap between believed practice and actual practice was already there. None of it was visible until someone looked.
The foundation you build in the Done-With-You AI Workflow Build is only as solid as the surface you build it on. This chapter is about making sure that surface is what you think it is.
A few things worth carrying forward.
Self-assessment has a structural ceiling. The 47-point gap between what UK managers claim about their compliance and what they can actually verify (Skillcast/YouGov) isn't a personal failing — it's a documented property of how organisations assess their own risk. AI governance is not exempt from this pattern. The diagnostic corrects for it.
The freelancer gap is real and almost universally unmapped. Most AI governance frameworks are designed for fixed teams in managed environments. The actual agency workforce — medical writers, graphic designers, animators, disease area specialists, brought in for weeks or months then gone — operates outside that assumption. The Assessment surfaces this. The back-to-back commitment structure closes it.
Procurement is already testing for what most agencies cannot see. 63% of in-house teams say they ask about agency AI use (CIPR, 2024). Only 24% of agencies report being asked. That gap is not a misunderstanding. It is a qualification gap — and it's widening as PPN 017, ISBA's Generative AI Supplemental Agreement, and the April 2026 Cyber Essentials scope changes land in sequence.
The diagnostic insight from XEIOH applies here too. I only understood what made XEIOH's governance structures resilient by looking backwards. The lesson wasn't strategic foresight — it was that the structures were real, documented, and tested, because clients had insisted on it. The Assessment insists on the same thing, for the same reason.
Key Takeaways
  • 71% of UK employees use unapproved AI tools at work — and most principals don't know which ones. The Microsoft UK and Censuswide survey (October 2025, n=2,003) found that 51% do so every week. The agency principal who believes their team uses AI only through approved channels is, statistically, almost certainly wrong. The assessment is the mechanism that replaces belief with evidence.
  • Self-assessment is structurally unreliable — the data proves it. Skillcast and YouGov found an 85%/38% gap: 85% of UK managers claim data protection is fully embedded in all their business processes; only 38% are confident they can meet the ICO's 72-hour breach reporting requirement. A 47-point gap on a question with a right answer. AI governance self-assessment produces the same structural distortion.
  • The freelancer and contractor dimension is the blind spot most governance frameworks miss. Medical writers, graphic designers, animators, illustrators, disease area specialists — they cycle through agency work carrying their own tools, their own accounts, and their own habits. The agency's governance policy doesn't reach them unless someone has specifically designed it to. The assessment maps this. The back-to-back commitment structure addresses it.
  • Value is lost at the diagnostic phase, not the build phase. McKinsey's Global Transformation Survey (2021, n=1,034) found that 25% of transformation value is lost at diagnostic and target-setting — before any implementation begins. Prosci's 12th Edition (2023, n=2,668) confirms the multiplier: 88% of projects with excellent methodology meet their objectives versus 13% with poor methodology. Skipping the readiness assessment doesn't accelerate AI governance. It compounds the rework cost.
  • Procurement is already scoring agencies on what most can't answer. The CIPR State of the Profession survey (September 2024, n=2,016) found a 39-point perception gap: 63% of in-house teams say they ask agencies about AI use; only 24% of agencies report being asked. PPN 017 (February 2025) made AI disclosure standard in central government procurement. The ISBA Generative AI Supplemental Agreement (April 2024, advisory) gives enterprise clients a framework for requesting it. The April 2026 Cyber Essentials scope change brings cloud-accessed AI tools into certification scope. The direction of travel is clear.
  • The question most principals ask after the assessment isn't whether it was worth doing. It's why they waited. The assessment costs less than a day of senior leadership time. It produces a documented picture of actual AI usage, actual data flows, actual gaps — and a specific sequence for closing them. That specificity is what makes the governance framework that follows defensible, not performative.
What's Next
Next Chapter: Chapter 10: Building Governance Into The Work — How to Implement Without Slowing Down publishes 22 March 2026
The assessment tells you where you are. Chapter 10 shows you how to build from there — governance embedded into the existing rhythm of the agency, not layered on top of it. The Done-With-You AI Workflow Build structure, week by week.

Implement This Now
The AI Readiness Assessment is designed to run in two weeks, starting this Monday.
Download the AI Readiness Checklist — the self-assessment tool used in client audits. Diagnose your gaps in 10 minutes.
If you'd rather run the two-week diagnostic with a structured guide alongside you than go it alone, the facilitated AI Readiness Assessment includes external facilitation, a staff interview framework, the gap analysis, and the governance recommendations report.
Book an AI Readiness Assessment (£500) — The two-week diagnostic, delivered with you. Tool inventory, workflow mapping, gap analysis, and a governance recommendations report that tells you exactly what to build and in what sequence. Everything in this chapter, applied to your agency.

Disclaimer
This chapter provides general information about AI governance practices for UK professional services agencies. It is not legal, regulatory, or professional advice.
Regulatory requirements vary by sector, client base, and operational context. The examples and frameworks presented here reflect common patterns across agency operations but may not address sector-specific obligations (e.g., healthcare communications agencies subject to ABPI Code, legal marketing subject to SRA regulations, financial services agencies under FCA oversight).
For compliance questions specific to your agency's regulatory environment, consult qualified legal counsel familiar with UK GDPR, ICO guidance, and your sector's requirements.
Research methodology: All statistics, case studies, and regulatory references are documented with sources. Where examples are used without specific attribution, they represent composite patterns observed across multiple agencies rather than individual client situations.
Commercial disclosure: Brains Before Bots offers Shadow AI governance services to UK agencies (AI Readiness Assessments, Done-With-You AI Workflow Builds, and Fractional AI Leadership retainers). This book is designed to provide standalone value whether or not you engage our services. The frameworks are implementable with internal resources.

Questions or feedback? Email hello@brainsb4bots.com
© 2026 Brains Before Bots. All rights reserved.