TikTok has implemented extensive measures to enhance platform safety in Kenya, resulting in the removal of over 60,000 accounts and 360,000 videos.
The social media giant detailed these actions in its recently released Q2 2024 Community Guidelines Enforcement Report, which it presents as evidence of its commitment to platform integrity. The videos removed in Kenya represent 0.3 per cent of all videos uploaded in the country during the period. Notably, enforcement was largely proactive: 99.1 per cent of violative content was identified and removed before users reported it.
The platform's response was also swift, with 95 per cent of violative content removed within 24 hours of detection. Enforcement extended beyond video removal to the termination of 60,465 accounts for Community Guidelines violations; a significant portion of these, 57,262 accounts, were suspended on suspicion of belonging to underage users. Removed content spanned a broad range of violations, including material promoting disordered eating, dangerous activities, nudity, graphic imagery, gambling, and substance abuse.
To support these moderation efforts, TikTok maintains a workforce of over 40,000 trust and safety professionals. These specialists work alongside automated detection systems to enforce the platform's Community Guidelines, terms of service, and advertising policies. The company says these technologies enable it to identify and remove potentially harmful content before it reaches viewers. The impact of TikTok's moderation efforts extends far beyond Kenya's borders.
On a global scale, the platform removed approximately 178 million videos in June 2024 alone, with automation handling 144 million of these removals. This automation has significantly reduced the burden on human moderators and limited their exposure to harmful content. The platform reports a global proactive detection rate of 98.2 per cent.