
TikTok Shifts Moderation to AI, Putting UK Content Moderators' Jobs at Risk

TikTok's expanding use of AI moderation is driving job cuts in its UK trust and safety team, raising concerns about user safety under the new Online Safety Act.


TikTok, the popular social media platform, has announced a significant restructuring of its UK trust and safety division, resulting in hundreds of job cuts. This decision comes at a critical time, as the UK is implementing the Online Safety Act, which imposes stringent compliance demands and heavy penalties.

The Communication Workers Union (CWU) has voiced concern that the job cuts could undermine employee organizing efforts. It argues that human moderators are uniquely equipped to catch subtleties that algorithms miss, and has criticized the move as prioritizing corporate profit over the safety of workers and the public.

TikTok's reliance on automation is growing, with more than 85% of harmful content now removed automatically by AI. The long-term test will be whether TikTok can convince stakeholders that AI is capable of protecting over 30 million UK users—and, by extension, of preserving TikTok's license to operate in one of its most important markets.

The UK's Information Commissioner's Office has launched a major investigation into TikTok's data practices. Any misstep in moderation risks becoming part of a larger narrative about TikTok's ability—or inability—to safeguard democratic and social norms.

The restructuring aligns with a broader industry trend in which tech giants, including Meta, X, and Snap, have been shrinking their human moderation teams in favor of automated systems. As part of its global strategy, TikTok is consolidating moderation work into regional hubs, though it has not specified which.

Geopolitical scrutiny of TikTok, owing to its Chinese parent company ByteDance, amplifies Western anxieties over data governance and content manipulation. Workers fear that users, especially minors, will face greater risks if AI becomes the first and last line of defense.

The UK's Online Safety Act, which will be enforced from July 2025, requires platforms to implement robust age checks and actively remove harmful material, with fines of up to 10% of global turnover for breaches. The greater question is whether automation alone can meet the rising bar of safety, cultural sensitivity, and accountability demanded by regulators and users alike.

Despite the job cuts, TikTok insists that affected UK employees can apply for other roles within the company and will be given priority if qualified. By accelerating its reliance on AI at the expense of human teams, TikTok risks alienating both employees and policymakers.

The job cuts are part of a global trend that has also affected teams in Berlin, the Netherlands, and Malaysia. The greater challenge for TikTok will be balancing its need for efficiency against the demands of safety and accountability as it navigates the Online Safety Act and the evolving expectations of its users.
