In October 2024, the Department of Communications and Digital Technologies published South Africa's first formal national AI policy. Not legislation, not a finished product — a strategic foundation and an opening position in a longer, more contested conversation about how this country governs one of the most consequential technologies of our time.
The framework opens with a claim that deserves to be taken seriously: AI is a general-purpose technology, comparable in scope to electricity or the internet. That framing is not hype. It signals that the government understands AI cannot be treated as just another sector to regulate. It cuts across healthcare, agriculture, education, finance, and public administration, and the policy choices South Africa makes now will shape those sectors for decades.[1]
For a country still managing deep unemployment, entrenched inequality, and uneven digital access, the risks of a poorly governed AI landscape are concrete. Without a coherent policy, South Africa could end up importing AI systems designed elsewhere, trained on foreign data, and reflecting foreign assumptions about who matters and who does not. This framework is, at minimum, an acknowledgment that those risks are real and that the country needs its own position.
One of the more intellectually honest features of the document is its use of the Futures Triangle, a foresight methodology that organises the problem around three forces: the Push of the Present, the Pull of the Future, and the Weight of the Past.[2]
The push factors are immediate and quantifiable: rapid global AI advancement, economic pressure to stay competitive, growing demand for AI-driven public services, and the momentum of international AI governance standards. The pull factors are aspirational: economic transformation, social equity, sustainable development, and continental leadership on AI ethics. The weight of the past is where the framework earns its credibility. It names the digital divide, the legacy of historical inequities, bureaucratic inertia, and outdated regulatory frameworks as genuine structural obstacles to equitable AI adoption.[1]
By putting these tensions on the table rather than papering over them, the document sets up a more grounded and locally relevant policy conversation than many comparable frameworks from other jurisdictions.
The substantive core of the framework is its twelve strategic pillars, each with a stated aim and a set of proposed actions.[1]
What stands out across the pillars is that ethics is not appended to the economic agenda as a safeguard. It is given its own pillars, plural, covering fairness, transparency, explainability, human oversight, and professional conduct. That structural choice matters because it makes ethics harder to sideline during implementation.
The insistence on human-centred AI is one of the document's strongest commitments. The framework is explicit that AI should augment human decision-making rather than replace it, and that human oversight must be preserved, especially in high-stakes applications. This is a meaningful constraint, particularly as generative AI systems are increasingly being proposed for use in government services, legal processes, and healthcare delivery.[1]
The emphasis on explainable AI is also well-placed. For AI systems operating in consequential domains, the ability to understand and contest how a decision was reached is not just a technical nicety. It is a prerequisite for accountability. South Africa's Constitution places a high premium on the right to just administrative action, and explainability is the technical dimension of that right in an AI-mediated world.
The focus on inclusive datasets is particularly relevant here. A country with twelve official languages and enormous socioeconomic diversity cannot responsibly deploy AI systems built predominantly on Western, English-language data. The framework's call for diverse, representative training data addresses this directly, even if the mechanisms for enforcing it remain to be developed.[1]
The framework is candid about being a starting point, not a solution. It identifies what needs to happen across twelve pillars but defers the how to future policy instruments, sectoral strategies, and a possible South African AI Act. That deference is appropriate at this stage, but it also means the critical questions are still open: Which body has enforcement authority over AI ethics? How are violations investigated and penalised? What standards govern AI procurement in the public sector?[1]
These are not minor implementation details. They are the difference between a policy that shapes behaviour and one that sits on a shelf. The framework's value will ultimately be determined by how seriously those questions are tackled in the next phase of the policy lifecycle.
The public comment period closed in November 2024, and the process now moves toward formal policy adoption, sectoral strategies, and regulatory development. For legal practitioners, civil society organisations, researchers, and technologists, this transition period is the most important window for substantive engagement. The framework positions stakeholder participation not as a procedural formality but as a core condition for the policy's legitimacy and effectiveness.[1]
South Africa has a genuine opportunity to develop an AI governance framework that is both rigorous and contextually grounded. That would mean taking seriously not just the economic potential of the technology but also the structural conditions that could cause its benefits to be distributed as unevenly as so many other technological transitions before it.
Have a perspective on this piece? Reach out — the best writing comes from good conversation.