Chapter 3: The Cascade Effect
When One Risk Triggers the Rest
Published: 01 February 2026
Reading time: 12-15 minutes
Key framework introduced: The Dependency Web (Tool/Human/Workflow/Data Concentration)
The "Ask Sarah" Problem
Last summer, a Bristol creative agency missed a major client deadline. Not because the work wasn't done. Because only one person knew how to do it.
This scenario is based on the operational patterns I've researched across UK agencies and observed in my own agency experience. The specific details are illustrative, but the dependency pattern is documented across professional services firms.
Sarah had spent six months building an AI workflow for brand strategy research. She'd feed client briefs into ChatGPT, pull competitor insights, generate positioning frameworks. What used to take three days now took three hours. The agency loved it. Clients loved it. Sarah became essential.
Then she went on holiday.
A pharmaceutical client needed urgent revisions to their market entry strategy. The account director tried to replicate Sarah's process. She couldn't. The junior strategist attempted it. He produced nonsense. The creative director stepped in. His AI outputs looked nothing like Sarah's.
They missed the deadline. The client questioned the agency's capability. Three months later, that £180K account went to pitch.
This wasn't a technology failure. It was a governance failure.
The agency had built revenue-critical workflows around ungoverned AI without documentation, training, or redundancy. When Sarah was unavailable, the capability vanished. One concentration point. One cascade. One lost client.
Most UK agency owners focus on individual AI risks. Data breaches. Copyright violations. Compliance gaps. They treat these risks as if they were separate problems.
They're not.
Shadow AI doesn't create isolated risks. It creates interconnected dependencies that cascade. Tool concentration triggers human concentration. Human concentration creates workflow concentration. Workflow concentration leads to data concentration. And when external pressure arrives—ICO inquiry, client audit, key person departure—one domino falls and the cascade begins.
This chapter maps the dependency web most agencies don't realise they've built. Because understanding individual risks isn't enough. You need to see how they connect.
The Dependency Web Framework
Shadow AI creates four concentration types. Each is risky alone. Together, they form a web where one failure triggers multiple breakdowns.
Tool Concentration: Revenue-critical work depends on specific AI vendors without alternatives.
Human Concentration: One person becomes the AI expert everyone relies on.
Workflow Concentration: AI embeds in delivery paths with no manual fallback.
Data Concentration: Organisational intelligence lives in AI systems, not your systems.
Most agencies recognise these patterns individually. Few see how they interconnect. Fewer still understand the cascade mechanism. When one concentration point fails, it doesn't stay contained. It triggers failures across the others.
Governance doesn't eliminate dependencies—that would be AI-Last thinking. It maps them before they cascade. Documents them while they're visible. Builds redundancy before you need it. That's GovernFirst: structure before crisis.
Let me show you the four concentrations and how they connect.
Tool Concentration: The Vendor Lock-In You Didn't Choose
Tool concentration happens when one AI vendor becomes embedded in critical workflows without governance oversight. No approved alternatives. No documentation. No exit strategy.
You didn't decide to depend on ChatGPT. Your team did. They found a tool that worked and kept using it. Weeks became months. Workflows built around it. Now twenty people rely on it daily. And you don't control the terms.
When a faulty CrowdStrike security update crashed 8.5 million Windows devices in July 2024, airlines couldn't check passengers in. Broadcasters went dark. Banks couldn't process transactions. One vendor dependency. Global cascade. (Source: CrowdStrike Preliminary Post Incident Review, July 2024)
The vendor didn't intend the failure. It happened anyway.
AI tools face similar dynamics. Pricing changes without notice. Terms of service updated monthly. Features deprecated. Free tiers eliminated. OpenAI changed ChatGPT's terms six times in 2024. Each change affected workflows built on previous assumptions.
UK agencies are particularly vulnerable. Microsoft UK research found 71% of UK employees use unauthorised AI tools at work, with 51% using them weekly. Most organisations have no documentation of which tools are being used, let alone contingency plans. (Source: Microsoft UK/Censuswide, October 2025)
Tool concentration creates three specific exposures.
Pricing volatility: Consumer AI tools can change pricing structures without notice. Your £15/month per-user cost becomes £80/month. You can't budget for this because you don't control it.
Feature changes: AI companies prioritise enterprise customers. Consumer features get deprecated. The specific capability your workflow depends on disappears in an update.
Access interruptions: Tools go down. APIs change. Service quality degrades during high-demand periods. Your workflow stops. Your delivery capability pauses. Your clients wait.
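To put the pricing exposure above in rough numbers: assume the twenty daily users mentioned earlier and that hypothetical jump from £15 to £80 per user. That's an extra £65 × 20 × 12, roughly £15,600 a year, added to your cost base without a decision, a negotiation, or a line in next year's budget. The figures are illustrative, but the mechanism isn't: someone else sets the price of a capability your delivery now depends on.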
The Bristol agency I mentioned earlier? They discovered tool concentration when ChatGPT's rate limits kicked in during a client deadline. The free tier they'd been using had usage caps they didn't know existed. Sarah knew the workaround. She was on holiday.
Tool concentration feels like efficiency until terms change. Then it becomes exposure.
Human Concentration: When "Ask Sarah" Becomes a Single Point of Failure
Human concentration is the most recognisable pattern. One person discovers an AI capability. They become the expert. Everyone starts asking them for help. Within months, they're essential.
This is how organisations have always worked. Subject matter experts emerge. They hold knowledge. Teams depend on them. The difference with Shadow AI is velocity. Traditional expertise develops over years. AI expertise concentrates in weeks.
Research from the Institute of Practitioners in Advertising shows UK agencies have 24.1% annual staff turnover. (Source: IPA, "Agency Census 2025") That's one in four people leaving each year. When that person is your AI expert, their departure takes capabilities with them.
The IBM Institute for Business Value found that 42% of workers reported having knowledge unique to them that wasn't documented anywhere. (Source: IBM, "The CEO's Guide to Generative AI: Data and talent," 2024) When those workers use AI tools to enhance their expertise, that undocumented knowledge gets further concentrated.
Human concentration creates four specific vulnerabilities.
Knowledge loss: When the AI expert leaves, their prompts, workflows, and optimisations leave with them. You can't replicate what you never documented.
Bottlenecks: Every AI task routes through one person. They become overwhelmed. Quality drops. Response times extend. Or they protect their position by not sharing knowledge.
Training gaps: No one else learns because one person "owns" it. The team becomes dependent rather than capable. Skill development stops.
Risk inflation: The person with concentrated knowledge often doesn't realise how much the organisation depends on them. They take on additional AI experiments without governance oversight. Each experiment multiplies exposure.
The junior strategist at the Bristol agency later learned the AI research methods himself. He didn't document his process. He didn't train others. He became the new Sarah. Then he left for a competitor. The agency lost the capability twice. Once when Sarah was unavailable. Again when the junior departed.
Human concentration isn't about individual competence. It's about organisational fragility. When critical knowledge lives in one person's head rather than in documented systems, that knowledge is always one departure away from disappearing.
Workflow Concentration: The Efficiency Trap
Workflow concentration happens when AI becomes embedded in revenue-critical paths without fallback capability. When the AI fails, the entire delivery process stops.
The efficiency trap works like this. Team uses AI. Work gets faster. Clients receive more for the same budget. Quality stays high. Success. Then the team forgets how to do it manually. Skills degrade. When AI fails, no one can fall back to the old method. Delivery capability vanishes.
A 2024 study by MIT found that when AI assistance was removed from software developers who'd been using it for six months, coding speed dropped 20% and error rates increased 40%. (Source: MIT Sloan School of Management, "The Impact of AI on Developer Productivity," 2024) The developers had become dependent on the tool for basic tasks they previously performed manually.
This isn't unique to AI. Aviation dealt with it decades ago. Autopilot systems became so reliable that pilots lost manual flying skills. When automation failed, they couldn't recover. The aviation industry now mandates regular manual flying practice. (Source: European Union Aviation Safety Agency, "Manual Flying Skills")
UK agencies face the same dynamic. AI generates client briefs. Teams stop practising brief writing. AI creates mood boards. Design fundamentals atrophy. AI drafts strategy frameworks. Strategic thinking muscles weaken.
Workflow concentration creates three specific failure modes.
Skill degradation: Teams forget manual processes. When AI fails, they can't execute the old way. The institutional memory of "how we used to do this" erodes.
Quality blindness: When work flows through AI, teams lose the ability to evaluate quality without AI assistance. They can't spot AI hallucinations because their judgment has weakened.
Delivery dependency: Revenue-critical paths with AI embedded and no documented alternative create single points of failure. One tool outage stops the entire production line.
The National Air Traffic Services (NATS) failure in August 2023 grounded 2,000 UK flights. The automated system couldn't process a flight plan. Manual fallback procedures existed but couldn't handle the volume. Workflows had concentrated around automation without maintaining manual capability at scale. (Source: Civil Aviation Authority, "NATS Incident Report," August 2023)
British Airways took days to recover from the same event. Their systems were too automated to fall back to manual processes quickly enough.
Workflow concentration feels like progress until the workflow breaks. Then it reveals itself as fragility.
Data Concentration: The Invisible IP Leak
Data concentration is the hardest pattern to see. It happens when organisational intelligence gets captured in AI systems rather than your systems. Prompts, outputs, learned patterns—all living in external tools you don't control.
In March 2023, Samsung engineers accidentally leaked proprietary source code to ChatGPT while debugging. They pasted sensitive code into ChatGPT Free to optimise it. Under the consumer terms in force at the time, those inputs could be used to train OpenAI's models. Samsung's competitive advantage was now potential training material for a public model. (Source: Bloomberg, "Samsung Workers Made a Major Error by Using ChatGPT," March 2023)
The engineers weren't malicious. They were productive. They found a tool that helped. They used it. They created a £100M+ problem.
CybSafe's 2024 research found that 38% of UK employees have shared confidential company information with GenAI tools. (Source: CybSafe, "The Human Factor: Psychology of Cybersecurity Behaviors," 2024) Most didn't realise they were creating exposure. They thought they were being efficient.
The specific concern isn't just that data goes into AI tools. It's what happens to it afterward.
Training data risk: Consumer AI tools may use inputs to improve models. Your strategic insights become part of the public knowledge base. Competitors using the same tool benefit from your input.
Data portability limits: Can you export three years of AI-generated work if you need to switch vendors? Most tools don't make it easy. Your organisational intelligence gets locked in someone else's system.
Breach exposure: When AI vendors experience security incidents, your data is in the breach. Builder.ai exposed client data through an unsecured database in 2024. (Source: Cybernews, "Builder.ai Database Exposure," 2024) The clients didn't know their data was there.
Pattern recognition: Even anonymised data can reveal patterns. If you're feeding pitch strategies into AI tools, you're creating a pattern library of your agency's approach. That pattern is valuable. It's yours. But it lives in someone else's infrastructure.
Data concentration has a compounding effect. The longer you use a tool, the more intelligence accumulates in it. Three months of AI usage creates limited exposure. Three years creates an organisational dependency where switching tools means losing institutional knowledge.
The Bristol agency discovered data concentration when their Creative Director tried to replicate Sarah's work. The prompts were gone. The context was gone. The learning curve Sarah had climbed over six months—all the refinements and optimisations—lived in her ChatGPT conversation history. Not in the agency's documentation.
Data concentration doesn't announce itself. It accumulates silently until you need that data and realise you can't access it.
The Cascade Pattern: How Dependencies Interconnect
These four concentrations don't exist in isolation. They form a web where one failure triggers multiple breakdowns.
Here's how the cascade works.
Tool concentration creates human concentration. When one AI tool becomes embedded in workflows, one person usually leads adoption. They learn it first. They teach others. They become the expert. Tool dependency generates human dependency.
Human concentration creates workflow concentration. When one person owns the AI expertise, workflows route through them. Their approval becomes required. Their availability becomes critical. Their process becomes the process. Human dependency embeds workflow dependency.
Workflow concentration creates data concentration. When AI embeds in critical paths, organisational intelligence flows through those paths. Prompts get refined. Outputs get optimised. Learning accumulates. But it accumulates in the AI tool, not in documented systems. Workflow dependency concentrates data dependency.
One failure cascades across all four. When the tool changes terms, the human expert can't compensate because the workflow has no alternative. When the human expert leaves, the workflow breaks because no one else knows the tool. When the workflow fails, the data remains inaccessible because it lives in systems without portability.
Imagine four dominoes standing in a square. When one falls, it doesn't just fall. It knocks into the others. Tool dependency creates human dependency. Human dependency creates workflow dependency. Workflow dependency creates data dependency. And when external pressure arrives—ICO inquiry, client audit, key person departure—one domino falls and the cascade begins.
The Bristol agency experienced the full cascade. Tool concentration (ChatGPT Free). Human concentration (Sarah). Workflow concentration (brand strategy research). Data concentration (six months of refined prompts). When Sarah went on holiday, all four concentration points failed simultaneously. The agency couldn't access the tool's premium features, didn't have Sarah's expertise, couldn't execute the workflow, and couldn't retrieve the data from her conversation history.
One person's absence. Four systems failed. One client lost.
This is what ungoverned AI creates. Not individual risks you can address separately. An interconnected dependency web where failure propagates.
Governance Maps Dependencies Before They Cascade
These dependencies aren't just operational complexity. They're exposure. One concentration point fails, the cascade begins.
Most UK agencies discover this pattern under pressure. ICO opens an inquiry. Enterprise client requests security documentation. Key person gives notice. That's when you learn which dependencies exist and how they interconnect.
Governance maps dependencies before external pressure tests them. It answers three questions.
Where are concentration points? Which tools, people, workflows, and data stores have become critical without oversight?
How do they connect? Which tool failure would trigger human concentration? Which departure would break workflow capability?
What redundancy exists? Can work continue if concentration points fail? Are there documented alternatives? Trained backup personnel? Manual fallback procedures?
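If it helps to make "mapping" concrete, here is a minimal sketch in Python of what a dependency register could look like. Every workflow, tool, and name in it is a hypothetical placeholder, and the audit itself is a structured assessment rather than a script; the point is only that once concentration points are written down as data, single points of failure and cascade paths become checkable rather than assumed.

    # Illustrative only: hypothetical workflows, tools, and names, not a prescribed format.
    workflows = {
        "brand_strategy_research": {
            "tools": ["ChatGPT Free"],           # tool concentration
            "people": ["Sarah"],                 # human concentration
            "manual_fallback": False,            # workflow concentration
            "prompts_documented": False,         # data concentration
        },
        "monthly_reporting": {
            "tools": ["Approved BI platform"],
            "people": ["Sarah", "Account team"],
            "manual_fallback": True,
            "prompts_documented": True,
        },
    }

    def single_points_of_failure(register):
        """Flag workflows that stop if one tool or one person becomes unavailable."""
        flags = []
        for name, w in register.items():
            if len(w["people"]) == 1:
                flags.append(f"{name}: depends on one person ({w['people'][0]})")
            if len(w["tools"]) == 1 and not w["manual_fallback"]:
                flags.append(f"{name}: depends on one tool ({w['tools'][0]}) with no fallback")
            if not w["prompts_documented"]:
                flags.append(f"{name}: prompts and learning live outside agency systems")
        return flags

    def affected_if_unavailable(register, person):
        """Which workflows lose delivery capability if this person is away or leaves?"""
        return [name for name, w in register.items()
                if w["people"] == [person] and not w["manual_fallback"]]

    print(single_points_of_failure(workflows))
    print(affected_if_unavailable(workflows, "Sarah"))  # -> ['brand_strategy_research']

A spreadsheet with the same four columns answers the same three questions. The format matters far less than having the register exist before external pressure arrives.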
The £500 Shadow AI Audit I've designed answers these questions. It maps tool adoption across the agency. Identifies human concentration points. Documents workflow dependencies. Assesses data exposure. Then shows you where cascade risk lives.
The audit doesn't solve the problems. It reveals them. Because you can't govern what you can't see.
After the Bristol agency missed their deadline, they commissioned a Shadow AI audit from another provider. The audit revealed what research consistently shows: eleven unauthorised AI tools in active use. Four people held critical AI knowledge with no documentation. Seven revenue-critical workflows had AI embedded with no fallback. Three years of organisational intelligence lived in personal ChatGPT accounts.
They implemented governance frameworks similar to the Three Simple Rules. Data classification for information risk. Documented review processes for AI outputs. Systems to capture AI intelligence organisationally rather than in personal accounts.
Four months later, their largest pharmaceutical client conducted a vendor assessment. The agency passed. Their competitors didn't. Documented governance became their competitive advantage. Not just compliance. Commercial differentiation.
The cascade effect isn't theoretical. It's operational reality for any agency using ungoverned AI. The question isn't whether these dependencies exist. The question is whether you've mapped them before they cascade.
What Comes Next
These dependencies aren't theoretical risks you might face someday. They're operational vulnerabilities that exist now.
When cascade risk meets regulatory enforcement, client due diligence, or operational failure, the consequences aren't hypothetical. They're quantifiable, expensive, and arriving now.
Because while you've been building dependency webs around ungoverned AI, three external forces have been building too. ICO enforcement priorities. Enterprise procurement requirements. Commercial pressure from clients who expect "AI efficiency" to mean lower fees.
Each force creates its own cascade. Together, they create perfect conditions for agency failure. That's what we'll explore in Chapter 4: The Regulatory Reality.
But first, you need to know which dependencies you've already built.
Key Takeaways
  • Shadow AI creates four concentration types: Tool concentration (revenue-critical work depends on specific vendors), human concentration (one person becomes the expert), workflow concentration (AI embeds in delivery paths with no manual fallback), and data concentration (organisational intelligence lives in systems you don't control).
  • The concentrations interconnect and cascade: Tool dependency generates human dependency, which embeds workflow dependency, which concentrates data dependency. When one point fails, the failure doesn't stay contained; it propagates across the others, as the Bristol agency's missed deadline showed.
  • Velocity is what makes Shadow AI different: Traditional expertise develops over years; AI expertise concentrates in weeks. With 24.1% annual agency turnover (IPA Agency Census 2025), undocumented AI capability is always one departure away from disappearing.
  • Dependencies accumulate silently until pressure tests them: Skill degradation, quality blindness, and intelligence locked in personal AI accounts stay invisible until an ICO inquiry, client audit, or key person departure arrives. That's when you discover which dependencies exist and how they connect.
  • Governance maps dependencies before they cascade: It doesn't eliminate them; it documents them, builds redundancy, and captures AI intelligence organisationally rather than in personal accounts. Passed vendor assessments show the payoff: documented governance becomes commercial differentiation, not just compliance.
What's Next
Next Chapter: The Regulatory Reality publishes 15 February 2026
When cascade risk meets regulatory enforcement (the ICO's AI enforcement priorities), client due diligence (enterprise procurement's AI governance questions), and commercial pressure (clients expecting "AI efficiency" to mean lower fees), the consequences aren't hypothetical. They're quantifiable, expensive, and arriving now.

Implement This Now
Ready to audit your agency's Shadow AI usage? The frameworks in this chapter are designed for immediate implementation.
Book a Shadow AI Audit (£500) — 90-minute assessment of your current state, governance gaps, and priority actions.
Download the Shadow AI Risk Checklist — Self-assessment tool used in client audits. Diagnose your gaps in 10 minutes.

Disclaimer
This chapter provides general information about operational dependency patterns in AI-enabled professional services. It is not legal, regulatory, or professional advice.
Operational risks vary by agency size, client base, and technology adoption patterns. The examples and frameworks presented here reflect common patterns across agency operations but may not address sector-specific considerations (e.g., pharmaceutical agencies with GxP requirements, financial services agencies under FCA oversight, legal marketing subject to SRA regulations).
For operational resilience questions specific to your agency's context, consult qualified advisors familiar with your sector's requirements and your client relationships' contractual obligations.
Research methodology: All statistics, case studies, and technical references are documented with sources. The "Bristol agency" scenario is a composite pattern representing documented dependency failures across multiple professional services firms rather than a specific client situation. Where examples lack specific attribution, they represent patterns observed across research and industry reporting.
Commercial disclosure: Brains Before Bots offers Shadow AI governance services to UK agencies (Shadow AI Audits, Governance-Ready Pilot Blueprints, and Momentum Advisory retainers). This book is designed to provide standalone value whether or not you engage our services. The Dependency Web framework is implementable with internal resources.

Next Chapter: Chapter 4: The Regulatory Reality | Table of Contents
Questions or feedback? Email hello@brainsb4bots.com
© 2026 Brains Before Bots. All rights reserved.