Thursday, March 5, 2026

The risks of shadow AI in M&A transactions


In technology-focused M&A transactions, material value increasingly resides in data and AI models rather than in traditional source code. This shift creates new challenges for buyers conducting due diligence investigations. Shadow AI – tools and models developed and deployed outside formal governance – poses particular risks that can affect valuation, legal compliance and post-closing integration. For buyers, identifying and managing these hidden risks is essential to protecting value and avoiding unforeseen liabilities.

“Shadow AI” refers to AI models, scripts or datasets created by employees outside formal approval and governance processes. These tools are often undocumented and poorly controlled. While the use of Shadow AI may start with experiments or simple tasks, it creates hidden technical and regulatory risks that can affect valuation, risk allocation and post-closing obligations.

Shadow AI typically arises when teams, or individual employees, bypass internal checks and procurement processes for AI tools. Privacy reviews, data rights assessments and security controls are often overlooked. Common risks include training models on personal data without a lawful basis, scraping data in breach of website terms, using open-source components with restrictive licences, or exposing confidential information to third-party AI tools. Once these tools feed into live systems or client deliverables, they can contaminate outputs and create uncertainty around intellectual property (IP) ownership.

A policy checklist alone is not sufficient to assess Shadow AI risks; forensic checks are also needed. When evaluating technology companies, buyers should consider measures such as:

  • scanning code repositories for unapproved models, notebooks and pipelines;
  • running licence and dependency checks to identify restrictive or non-compliant components;
  • reconstructing data lineage using system logs, cloud usage records, storage inventories and model registries to understand what data was used for training, and whether ownership rights in that data are sound;
  • testing prompts and outputs for signs of copied content, bias or safety issues;
  • interviewing engineers and applied scientists, not just management; and
  • reviewing access controls, API keys and use of third-party AI tools to identify unapproved services.
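The first of these checks – scanning code repositories for unapproved models, notebooks and pipelines – can be sketched in a few lines of Python. The file extensions and the "approved directory" convention below are illustrative assumptions for this example, not a standard; a real diligence exercise would combine such scans with licence tooling, log analysis and interviews.

```python
import os

# Extensions that commonly indicate ML artifacts or experiment notebooks.
# This is an illustrative, not exhaustive, list.
MODEL_EXTENSIONS = {".pkl", ".pt", ".pth", ".onnx", ".h5", ".safetensors", ".ipynb"}

def find_unapproved_artifacts(repo_root, approved_dirs=()):
    """Walk a repository tree and flag model files and notebooks that do
    not sit under a directory covered by formal governance.

    approved_dirs: directory names (relative to repo_root) that are
    assumed to be governed; anything elsewhere is flagged for review.
    """
    findings = []
    approved_prefixes = [os.path.join(repo_root, d) for d in approved_dirs]
    for dirpath, _dirnames, filenames in os.walk(repo_root):
        for name in filenames:
            _, ext = os.path.splitext(name)
            if ext.lower() not in MODEL_EXTENSIONS:
                continue
            path = os.path.join(dirpath, name)
            # Simplistic prefix check; a production scan would normalise
            # paths and consult the target's actual approval records.
            if not any(path.startswith(p) for p in approved_prefixes):
                findings.append(path)
    return sorted(findings)
```

A hit list from such a scan is only a starting point: each flagged file still needs a human review of its provenance, training data and licence terms.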

Where Shadow AI is detected, buyers typically require focused clean-up plans and should negotiate specific legal protections. Buyers should insist on targeted indemnities covering IP infringement related to AI use, privacy breaches and violations of AI platform terms, and should tighten warranties around data provenance, model governance and the use of third-party AI tools. These protections can give the buyer a walk-away right if undisclosed issues surface later in the M&A process.

Warranty and indemnity insurance is not a cure-all. AI-related risks are frequently excluded and, where cover is available, premiums tend to reflect how mature and credible the target’s data and AI governance is.

Shadow AI typically affects a transaction in the following ways:

  • the price is reduced to reflect retraining costs and data contamination risks. A portion of the consideration is deferred, linked to completing data re-licensing, fixing edge-case model performance, and closing privacy gaps;
  • specific indemnities are included to cover IP claims linked to legacy scraping and code issues, backed by escrow; and
  • closing conditions include obtaining outstanding data consents, completing privacy impact assessments, and decommissioning the Shadow AI notebooks, with retraining done in a controlled environment.

Sellers should not wait for due diligence investigations to address Shadow AI. Conducting internal audits, documenting data provenance, and implementing clear AI governance policies before going to market can reduce price reductions, limit indemnity exposure, and accelerate deal timelines. Proactive remediation signals credibility and strengthens the seller’s negotiating position.

Consider a multinational company with thousands of employees across various departments. What happens if 10% of these employees begin using an AI tool that integrates into everyday systems to improve spelling and grammar in emails, documents, slideshows and messaging?

This scenario gives rise to several issues which directly reflect the Shadow AI risks discussed above:

  1. Lack of visibility: Management does not know this tool is being used, or by whom, creating the governance gaps that arise when employees bypass internal checks.
  2. Data exposure: There is no oversight of what information is being input into the tool, risking the exposure of confidential or personal data to third-party AI services.
  3. Unclear licensing terms: The AI tool’s terms and conditions may grant the provider a broad licence to use, modify and disseminate any data input, including using that data to train the provider’s own AI models.
  4. Delayed detection: These risks are unlikely to surface until something goes wrong – for example, when a competitor gains access to strategic information or a claim is brought because confidential customer data has been inadvertently used.

For buyers in an M&A transaction, identifying and pricing such risks becomes critical. The due diligence measures outlined above – reviewing access controls, API keys and third-party AI usage – are designed to uncover precisely these scenarios before they affect deal value or create post-closing liabilities.

In conclusion, the data a business owns is increasingly valuable, and Shadow AI is a growing commercial, technical and regulatory concern that must be accounted for in M&A transactions. The due diligence measures outlined above can identify Shadow AI risks and mitigate the damage they may cause, allowing business value to be accurately assessed and Shadow AI risks to be priced into any transaction.

Tayyibah Suliman is a Director and Head: Technology and Communications, and Sadia Rizvi and Izabella Balkovic are Associates | Cliffe Dekker Hofmeyr

This article first appeared in DealMakers, SA’s quarterly M&A publication.

