YouTube AI Manipulation Under Fire: Unapproved Alterations Made by Artificial Intelligence Without User Awareness

Digital platforms' covert alteration of videos by artificial intelligence, without the approval of creators or viewers, has ignited debate about platform transparency, trust in algorithms, and the risks of undetectable modifications that quietly reshape reality.

Unauthorized AI-driven video edits on YouTube cause uproar among users

In the digital age, the use of Artificial Intelligence (AI) has become pervasive, raising concerns about transparency and user trust. One platform at the centre of this debate is YouTube, which has been using AI tools to automatically enhance parts of uploaded videos without the knowledge of either creators or viewers.

YouTube's practice of modifying videos with AI was confirmed by spokesperson Rene Ritchie, who described it as an experiment, though the company has not said who is responsible for it. This secretive approach to AI is a systemic problem across the industry: in 2021, TikTok was found to be applying a built-in "beauty filter" without creators' knowledge, and in 2018 Apple faced accusations over its Smart HDR effect, which smoothed users' skin without disclosure.

The scale of YouTube's automatic intervention is significant: millions of users may be watching altered videos without knowing it. Such a lack of transparency erodes trust, as people begin to question the authenticity of the content they consume.

Experts emphasize that digital literacy is the only real counterbalance to the threat of AI-generated content: verifying sources, seeking independent confirmation, and avoiding passive consumption. Nor is the threat limited to the entertainment industry. In 2024, Australia's Nine News broadcast an AI-altered photo of MP Georgie Purcell that added details not present in the original image.

Manipulating content without disclosure, even by removing noise or improving clarity and colour accuracy, raises questions about users' control over their own work. The practice itself is not new: magazines and other media have retouched photos for decades. Cognitive bias exacerbates the threat, as people tend to believe content that aligns with their views, regardless of its authenticity.

Disclosing AI use does not always reduce the persuasiveness of false information, but it does decrease users' willingness to share it. Digital platforms, including YouTube, thus face a dilemma: concealing AI use is risky, while admitting it can damage their reputation.

Previously, we wrote about cultural bias in artificial intelligence, and it is clear that this issue is far from resolved. As technology advances, it becomes increasingly difficult to distinguish original content from AI-generated material. Even advanced detectors lag behind, making it crucial for platforms to prioritise transparency and user trust.
