Popular short video platform TikTok has undertaken one of its most aggressive digital cleanup efforts in Kenya by removing more than 450,000 videos and banning 43,000 accounts during the first quarter of 2025.
- The sharp rise in removals reflects the platform’s pivot to AI tools in content moderation as it faces heightened scrutiny from regulators in one of its fastest-growing African markets.
- Between January and March this year, 92.1% of flagged videos were deleted before they were viewed, and 94.3% within 24 hours of being posted, according to the company’s latest Community Guidelines Enforcement Report.
- While video removals surged, account bans declined from more than 60,000 in previous quarters to 43,000 in early 2025, suggesting that TikTok is now favoring content removal over mass account purges.
“Ongoing advancements in AI and other moderation technologies can also benefit the overall well-being of content moderators by requiring them to review less content. It also provides moderators with better tools to do this critical moderation work,” TikTok stated in the report.
TikTok’s shift is part of a larger strategy to secure its foothold in Kenya, a country of over 15 million social media users where debates over youth safety, viral challenges, and misinformation have intensified. Local policymakers have floated measures such as mandatory social media ID checks, raising the stakes for platforms criticized for failing to protect minors or curb harmful content online.
Globally, TikTok said its moderation technology removed over 87% of violating videos automatically in the first quarter, with 99% of all harmful content caught before any user reports. This reflects a growing reliance on pre-emptive algorithmic policing, a trend likely to define social media operations in regions where human moderation alone cannot keep pace with surging content volumes.
The company’s focus on livestreams has also intensified, with 19 million LIVE sessions stopped worldwide in the same quarter, a 50% increase from the previous period. Livestreaming poses an elevated risk for violent, self-harm, or politically sensitive content, and TikTok’s data indicates its AI tools are being deployed to neutralize threats in real time.
A BBC investigation early this year uncovered a disturbing trend in Kenya, where minors as young as 15 were found livestreaming sexual content on TikTok for virtual gifts despite the platform’s 18+ restrictions. Teenagers reportedly use coded language and provocative dancing to solicit rewards, with some bypassing age limits by purchasing accounts or growing them to the 1,000-follower threshold required for livestreaming.
In March, the Communications Authority of Kenya (CA) launched a formal inquiry into the exposé and demanded that TikTok’s owner, ByteDance, explain how underage exploitation evaded its AI-driven moderation. It warned the company to remove all illicit content or face sanctions. Over the past year, the government has pressured the app to strengthen local operations and child-safety measures rather than banning it outright, as Senegal and Somalia have done.
TikTok maintained a solid third place in Kenya’s social media landscape in 2024, according to the CA. It holds a 14% usage share, behind WhatsApp and Facebook at 20% each, but has grown more popular than YouTube and Instagram, which slipped to 12% and 8% respectively.
The platform recently partnered with Childline Kenya to provide in-app access to helplines for users reporting content related to suicide, self-harm, or harassment. A separate collaboration with Mental360 also aims to create localized, evidence-based mental health content and reduce stigma around online harassment and emotional distress. TikTok has also appointed local mental health ambassadors to guide users toward credible resources.