A boardroom imperative in South Africa’s digital revolution
Global investment in artificial intelligence (AI)1 has reached an inflection point. The global AI market surged to approximately US$260bn in 2025 and is on track to exceed US$1,200bn by 2030, more than a fourfold increase.2 AI is now widely recognised as the next great general-purpose technology, and is arguably the fastest-spreading technology in human history. In less than three years, more than 1.2 billion people have used AI tools, a pace of adoption that eclipses that of the internet, the personal computer and the smartphone.3
South Africa is firmly part of this acceleration. AI adoption among South African businesses has risen sharply, from 45% of survey respondents in 2024 to approximately 67% by the end of 2025, as cited in a recent report.4 AI is no longer merely an emerging technology; it already plays a key role in digital transformation in South Africa, and businesses that approach AI with robust protections and innovative strategies not only mitigate risks, but also position themselves to thrive as industry leaders in an ever-evolving market.
Notwithstanding the obvious potential of AI, there remain significant barriers to adoption, including data privacy concerns, implementation costs, lack of skilled talent, regulatory compliance, ethical concerns, reputational risk, and a general resistance to change. Management and boards must provide leadership on AI by driving innovation and growth, and providing strategic direction and oversight.
The boardroom disconnect
Unfortunately, the boardroom response has lagged this reality. According to Deloitte, 31% of directors report that AI is still not on their board agenda, although this is an improvement on the 45% recorded in the previous survey.5 More concerning is that only 15% of South African businesses have formal AI governance policies, creating fertile ground for “shadow AI” – the unsupervised use of AI tools by employees, with significant legal, ethical and reputational risks.6
The importance of AI cannot be overstated – it drives productivity, innovation and competitive advantage like few technologies before it. Yet the risks are equally profound, encompassing heightened cybersecurity threats, data-privacy breaches, inaccurate or misleading outputs from AI “hallucinations”, embedded biases that can perpetuate inequality, and the gradual erosion of human skills through over-reliance on automation. In the past year alone, ransomware attacks rose globally by over a third, with generative AI emerging as a powerful enabler of threats, particularly through deepfakes – the second most common cybersecurity incident after malware.7
For boards, therefore, the question is no longer whether AI will impact their organisations, but whether they are governing its use and impact effectively and responsibly.
Navigating the regulatory maze
Although South Africa has not yet adopted comprehensive AI legislation or a dedicated AI regulatory framework, the use of AI is already subject to regulation through a range of existing legal and governance instruments. These include the Companies Act, the Protection of Personal Information Act (POPIA), the Consumer Protection Act, and the Electronic Communications and Transactions Act.
In addition, the recently introduced King V Report on Corporate Governance for South Africa, 2025 (King V) explicitly addresses the governance of AI. King V underscores the responsibility of directors to ensure clear accountability for AI-related decisions, to implement human oversight mechanisms proportionate to the level of risk, and to consider periodic independent assurance of AI systems. Its alignment with POPIA and the emerging National Artificial Intelligence Policy Framework signals that AI governance has become a core board responsibility. Within this evolving regulatory ecosystem, directors’ fiduciary duties and statutory duties under the Companies Act – to act in the best interests of the company and with reasonable care, skill and diligence – now extend squarely to the oversight of AI.
A director’s toolkit: 10-point checklist for AI governance
There is no one-size-fits-all approach to board governance of AI, as the appropriate approach depends largely on factors such as the organisation’s size, industry, and the scope of generative AI usage in its operations. However, the checklist below provides a good starting point for boards to consider:
- Establish clear accountability. Designate ownership of AI at both board and management level. Avoid fragmented responsibility by assigning oversight to a specific committee, supported by cross-functional representation from legal, IT, risk, compliance and business units.
- Decide on AI strategy. Boards should interrogate management’s view on AI’s relevance and provide strategic leadership. AI adoption should align with the organisation’s business model, resources and long-term objectives. The board should monitor changes in the AI landscape and emerging thinking, to provide strategic direction.
- Develop and enforce a comprehensive AI policy. Organisations should develop AI usage policies, taking into account the risks posed by shadow AI within their operations. Such policies should, inter alia, identify approved AI tools aligned to business needs, set clear data-sharing rules, and require user training on responsible AI use. In doing so, they enable the classification of AI risks, the implementation of appropriate controls, and the proactive management of AI-related exposure.
- Assess competitor risk. Consider the risk of existing or emerging competitors leveraging AI and how this could impact the company.
- Build an AI inventory. Boards cannot govern what they cannot see. Require management to identify all AI systems in use, including shadow AI. Each system should be documented with its purpose, data sources, risk level and degree of human oversight.
- Define risk appetite. The board should explicitly consider how much AI-related risk the organisation is willing to accept, and how trade-offs between innovation and control are managed.
- Demand transparency and explainability. Directors should be able to understand, at a high level, how material AI systems make decisions. Systems that cannot be explained should trigger enhanced scrutiny or additional safeguards.
- Invest in board and workforce education and identify skilled talent. Ongoing education – through briefings, external experts or advisory panels – is essential for informed oversight. Recruit skilled talent where necessary.
- Implement ongoing monitoring and assurance. AI governance is not a once-off exercise. Regular audits, crisis simulations, bias testing and performance monitoring should be embedded, with escalation triggers where systems deviate from expected outcomes.
- Embed culture, ethics and disclosure. “Responsible AI”8 depends on culture. The board should ensure ethical principles such as fairness, accountability and transparency are reflected in policies, training and external disclosures, giving stakeholders confidence in the organisation’s approach.
In conclusion, AI governance has moved decisively from a technical concern to a central boardroom responsibility. Globally, only around one-third of boards report that they feel adequately prepared to oversee AI risks, highlighting a material governance capability gap. In this context, directors who fail to engage meaningfully with AI face not only regulatory exposure, but also strategic irrelevance. King V makes it clear that effective AI oversight is a key component of a board’s fiduciary duty. Boards that approach AI with discipline, ethical intent and informed curiosity will be best positioned to harness its value and sustain stakeholder trust.
Henning de Kock is CEO and Johann Piek an Executive | PSG Capital

1 In this article, the term “AI” is used in its broadest sense to refer to all forms of artificial intelligence, including, but not limited to, generative AI. It encompasses technologies such as machine learning, natural language processing, computer vision, and other related AI systems.
2 https://www.statista.com/chart/35510/ai-market-growth-forecasts-by-segment/
3 https://www.microsoft.com/en-us/research/wp-content/uploads/2025/10/Microsoft-AI-Diffusion-Report.pdf
4 Based on survey respondents, including official use within respondent businesses (i.e. formally adopted), unofficial use or both. https://www.worldwideworx.com/wp-content/uploads/2025/10/The-SA-Gen-AI-Roadmap-2025.pdf
5 https://www.deloitte.com/global/en/issues/trust/progress-on-ai-in-the-boardroom-but-room-to-accelerate.html
6 https://www.researchgate.net/publication/395822472_South_Africa’s_AI_Trajectory_Navigating_the_Divide_Between_National_Ambition_and_Market_Reality
7 https://www.ey.com/content/dam/ey-unified-site/ey-com/en-us/campaigns/board-matters/documents/ey-cbm-cyber-and-ai-oversight-disclosures-2025-3.pdf
8 Responsible AI encompasses organisational responsibilities and practices that ensure positive, accountable and ethical AI development and operation.
This article first appeared in DealMakers, SA’s quarterly M&A publication.
www.dealmakerssouthafrica.com

