Kenyan lawmakers have drawn a clear line in the country’s intensifying debate over social media, favouring heavier regulation over an outright ban.
- The position was adopted by the National Assembly in its report on a petition that urged Kenya to impose a nationwide ban on TikTok, citing concerns over explicit content, exploitation of minors, threats to public morals, and national security.
- Lawmakers said that an outright ban would infringe on fundamental rights and stifle a digital economy that has become central to youth employment, online commerce, and political discourse.
- Between July and September 2025, TikTok removed more than 580,000 videos from Kenya for violating its rules: 99.7% of those videos were taken down proactively, before users could report them, and 94.6% were removed within 24 hours of being posted.
The crackdown extended beyond short videos. During the same three-month period, roughly 90,000 LIVE sessions in Kenya, about 1% of all streams, were interrupted for breaching content standards.
While acknowledging several safety concerns, the MPs agreed that banning TikTok would only lead to the emergence of a new platform likely to suffer the same fate. Moreover, telcos and brands that rely heavily on advertising on TikTok could lose significant economic opportunities.
The MPs also urged TikTok to expedite the rollout of monetisation features in Kenya that would allow more young people to earn a living from creating content on the platform. Currently, Kenyan creators can only earn money through gifts from followers or, for those with larger audiences, brand endorsements.
Globally, the platform removed more than 204 million videos in the quarter under review, representing about 0.7% of all uploads. Of those removals, 99.3% were proactive and 94.8% were executed within 24 hours. TikTok said 91% of violative content worldwide is now detected and removed by automated technologies, the highest rate it has recorded.
The company also reported removing more than 118 million fake accounts globally during the quarter and over 22 million accounts suspected to belong to users under the age of 13, part of an effort to curb spam, fake engagement and underage access.
What Parliament Wants
The National Assembly has divided the work of regulating and monitoring TikTok among different ministries and regulatory bodies. The Ministry of Interior and National Administration and the Ministry of Information, Communication and the Digital Economy have been tasked with reporting back within four months on stronger age-verification mechanisms, data localisation requirements, and digital literacy programmes focused on privacy and responsible use.
The Office of the Data Protection Commissioner (ODPC) has been directed to engage social media companies to assess compliance with the Data Protection Act, 2019, ensuring that user data is processed in line with local law.
Parliament also urged the Ministry of Information, Communication and the Digital Economy to closely monitor TikTok’s content-moderation systems to ensure they reflect Kenyan values and languages, and are supported by adequate human moderators and psychosocial safeguards.
Members of the Public Petitions Committee were told by human content moderators that TikTok's primary AI moderation system could not detect more than 100 local dialects and therefore could not remove offensive vernacular content.
What Other Countries Are Doing
The Kenyan debate is unfolding against a widening global crackdown on children’s access to social media. Australia in December became the first country to ban social media for users under 16, forcing platforms including TikTok, Alphabet Inc.’s YouTube, and Meta Platforms Inc.’s Instagram and Facebook to block minors or face fines of up to A$49.5 million.
In France, the National Assembly has approved legislation to bar children under 15, pending further votes, while Spain and Greece are weighing similar limits. Britain is considering an Australia-style prohibition, and Malaysia plans to impose a ban on users under 16 from 2026.
In the U.S., federal law bars companies from collecting data on children under 13 without parental consent, while several states have attempted broader restrictions, often running into free-speech challenges. Washington also forced TikTok's parent company to partially divest its US subsidiary, bringing in US-owned firms and separating its operations from the company's global footprint.