Kenya is adding a new layer to an already crowded digital regulatory system, without clearly resolving who controls what.
- A proposed law, the Artificial Intelligence Bill, 2026 (Senate Bill No. 4), would establish a powerful new Office of the Artificial Intelligence Commissioner, inserting another authority into a landscape already occupied by agencies enforcing the Data Protection Act and cybercrime laws.
- To balance enforcement with innovation, the bill proposes the creation of regulatory sandboxes. These are controlled environments where companies can test AI systems under the supervision of the Commissioner.
- The bill, introduced by Nominated Senator Karen Nyamu, forms part of a broader policy push following the publication of Kenya’s National AI Strategy (2025–2030), which positions the country as a regional hub for AI development.
The proposed law aims to set ground rules for how AI is built and used in Kenya, focusing on safety and transparency, promoting innovation, and introducing oversight and accountability for companies deploying the technology.
At the center of the proposal is the new Commissioner, an independent office appointed through a process involving the Public Service Commission, the president and Parliament. The office would be responsible for enforcing the laws governing AI, conducting research on trends, performing conformity audits on AI systems, overseeing risk assessments, managing regulatory sandboxes, and handling complaints related to harms such as algorithmic bias or discrimination.
The scope of those powers places the office alongside existing regulators including the Office of the Data Protection Commissioner (ODPC) and agencies enforcing the Computer Misuse and Cybercrimes Act, without clearly delineating boundaries between them. Companies may face overlapping obligations related to data handling, system accountability, and digital content, depending on how different regulators interpret their mandates.
The bill adopts a risk-based framework, closely modeled on the European Union’s AI Act, that classifies AI systems into four categories: unacceptable, high, limited and minimal risk. Systems deemed to pose “unacceptable risk” would be prohibited outright, while high-risk systems, such as those used in healthcare, finance, education and security, would face the most stringent requirements.
Developers and deployers of high-risk systems would be required to conduct human-rights impact assessments, maintain detailed operational logs for at least five years and ensure transparency in how their systems function.
But in Europe, even after years of drafting these provisions, regulators have faced persistent criticism over vague definitions of what constitutes an “AI system” or “harm,” leaving companies uncertain about compliance boundaries. Kenya’s proposal mirrors that structure without offering detailed interpretive guidance, raising the possibility of similar ambiguity at an earlier stage of market development.
The bill also concentrates enforcement authority without clearly mapping institutional boundaries or technical standards. In the EU, regulators have had to supplement the law with extensive guidelines, expert panels, and staged implementation timelines to clarify obligations and avoid overreach. Even then, companies have warned that compliance remains complex and unclear.
Kenya’s framework, by contrast, introduces criminal penalties and broad supervisory powers upfront, without the same level of supporting infrastructure or clarity, potentially creating a regime where obligations are expansive, but interpretation is left to the regulator.
The bill also requires companies to obtain clear consent from users before deploying synthetic media, including deepfakes, and to notify users whenever they are interacting with an AI system. Companies deploying AI systems that could affect employment would need to conduct workforce impact assessments and implement reskilling or mitigation programs.
The proposed law also requires human oversight mechanisms, ensuring that critical decisions made by AI systems can be reviewed or overridden by qualified individuals. It introduces criminal penalties for non-compliance, including fines of up to KSh 5 million and prison terms of up to two years.
In the case of corporate violations, directors and senior officers could be held personally liable unless they can demonstrate that they exercised due diligence to prevent the offense.
The bill also establishes an advisory committee comprising representatives from government, industry and civil society to provide guidance on emerging risks and technological developments.
Kenya’s push to regulate AI comes as the country emerges as one of East Africa’s most active adopters of the technology. According to recent data, roughly 8% of Kenyans report using AI tools, a figure that surpasses neighbors like Uganda and Rwanda but trails continental leaders such as South Africa, where adoption exceeds 21%, and Egypt, at 13%.
Generative AI engagement, including platforms like ChatGPT, is particularly strong: more than 40% of Kenyan internet users aged 16 and older report using such tools, far higher than in South Africa, Egypt, or Nigeria.
However, the proposed legislation could raise concerns for local AI firms that are still nascent, similar to the concerns raised when the virtual assets law was being prepared. Compliance requirements such as impact assessments, audit trails, documentation and ongoing oversight introduce fixed costs that may be easier for large, well-resourced AI companies to absorb than for early-stage startups.
At the same time, the bill’s enforcement model relies on technical expertise that may be in short supply. Effective oversight of AI systems requires specialized skills in algorithmic auditing, data science, and systems engineering. These are capabilities that regulators globally are still developing, and for a developing economy like Kenya, building them may prove a steep climb.