TikTok’s parent company ByteDance has confirmed widespread reports that it will lay off nearly 500 employees in Malaysia, most of whom were focused on monitoring and evaluating content uploaded to the app. The team worked within TikTok’s Trust and Safety Regional Operations department, which moderates user-generated videos and enforces the platform’s “community guidelines” around inappropriate or illegal material.
The change is intended to shift more content-review responsibilities from human moderators to automated systems built on machine learning and artificial intelligence. While AI-driven moderation can scale to massive upload volumes more efficiently, there are reasonable concerns about whether algorithms alone can grasp nuanced cultural context and apply often-subjective community standards accurately and without bias.
The cuts come amid heightened government pressure in Malaysia for social media services to better protect users from online abuse, radicalization risks, and the spread of misinformation. New regulations taking effect in January 2025 will require platforms to hold annual operating licenses and to demonstrate robust measures for monitoring, flagging, removing, or disabling access to unlawful or harmful material in a timely fashion.
With over 200 million TikTok accounts in Southeast Asia, the company plans to invest $2 billion globally in “trust and safety” during 2024, including augmenting AI tools so they can preemptively identify and automatically remove around 80% of rule-violating videos before any users flag them. Earlier this year, Malaysia submitted more government takedown requests to TikTok than any other nation, covering over 6,000 pieces of content; TikTok complied with approximately 90% of those requests.