Bills focusing on AI deepfakes and identity fraud prevention in Denmark
In the digital age, the proliferation of deepfake technology poses a significant threat to individual rights and financial security. The World Economic Forum's Global Coalition for Digital Safety is at the forefront of efforts to counter harmful online content, including deepfakes.
The rise in deepfake attacks has prompted efforts to protect digital identities. In Denmark, proposed changes to the law aim to safeguard individuals' control over their appearance and voice against the growing threat of AI-generated deepfakes.
Deepfakes are not limited to altering existing content; they can also generate entirely new material, depicting individuals saying or doing things they never did. The technology has been used in a range of scams, as in the fraud attempt targeting Ferrari, in which scammers used an AI-generated imitation of CEO Benedetto Vigna's voice.
The UK engineering firm Arup lost $25 million to a deepfake scam. Similarly, a BBC journalist demonstrated the risks of voice cloning by bypassing her bank's voice identification system with a synthetic version of her own voice.
The US has taken a proactive approach with the Take It Down Act, which requires platforms to remove non-consensual intimate deepfakes within 48 hours of a victim's request and imposes federal criminal penalties for their distribution. In 2023, US actors went on strike in part to secure control over how AI uses their likenesses.
The European Union's Digital Services Act (DSA), which became fully applicable in 2024, aims to curb illegal and harmful activity online, including the spread of disinformation. Denmark, for its part, is proposing to amend its copyright law to address the growing threat of AI-generated deepfakes.
Denmark's proposed amendment aims to safeguard individuals' control over their identities and extends the right to compensation for unauthorized use of a person's image to 50 years beyond the artist's death. The European Union's AI Act likewise requires clear labeling of AI-generated content, including deepfakes.
The technology's impact is far-reaching: deepfakes of former US President Joe Biden and Ukrainian President Volodymyr Zelenskyy have been used to lend credibility to false information. Deepfake attacks are reportedly doubling every six months, underscoring the escalating nature of the threat.
Resemble.ai's Q2 2025 deepfake security report documented 487 publicly disclosed deepfake attacks, a significant increase over previous periods. According to the company's findings, direct financial losses from deepfake scams have reached nearly $350 million, and 41% of deepfake targets are public figures while 34% are private individuals, predominantly women and children.
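To make the scale of that trend concrete, the short Python sketch below projects quarterly attack counts if the doubling-every-six-months pattern were to continue from the 487 incidents documented in Q2 2025. The smooth exponential growth, the two-year horizon, and the rounding are illustrative assumptions for this sketch, not figures from the Resemble.ai report.

```python
# Illustrative projection only: assumes the reported "doubling every six months"
# trend continues smoothly from the 487 publicly disclosed attacks in Q2 2025.
# The growth model and horizon are assumptions, not findings from the report.

BASE_QUARTER = "Q2 2025"
BASE_ATTACKS = 487             # publicly disclosed attacks in the base quarter
DOUBLING_PERIOD_QUARTERS = 2   # "doubling every six months" = every two quarters

def projected_attacks(quarters_ahead: int) -> int:
    """Project the attack count under a constant doubling period."""
    growth_factor = 2 ** (quarters_ahead / DOUBLING_PERIOD_QUARTERS)
    return round(BASE_ATTACKS * growth_factor)

if __name__ == "__main__":
    for quarters in range(0, 9, 2):   # project two years beyond the base quarter
        print(f"{quarters} quarters after {BASE_QUARTER}: "
              f"~{projected_attacks(quarters)} attacks")
```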
However, it's not all doom and gloom. Employees have thwarted deepfake attempts by asking questions that only the real CEO could answer, demonstrating the value of human vigilance against AI-driven threats. As the fight against deepfakes continues, it's clear that a collaborative effort spanning technology, law, and individual awareness is crucial to maintaining the integrity of our digital identities.