AI-powered chatbot Claude exploited by criminal elements for illicit activities

In a concerning development, AI company Anthropic has reported that its chatbot, Claude, has been misused by North Korean programmers for online crimes. The attackers used Claude to obtain remote programming jobs at US companies and exploited the AI's capabilities to search for vulnerabilities, decide how to attack a network, and determine which data to steal.

Last month, the largely automated attack targeted 17 companies and organisations across various sectors, including healthcare, government, and religious institutions. Anthropic's Jacob Klein disclosed these findings to the tech outlet The Verge.

Cybercriminals have also used Claude to write psychologically targeted extortion messages, threatening to publish stolen information and demanding up to 500,000 US dollars from victims. Additionally, a bot for the Telegram platform has been designed for romance fraud, in which victims are led to believe in a romantic relationship so they can be defrauded of money.

Anthropic has stated that online attackers continue to try to bypass the company's measures to prevent misuse of its AI software. In response, the company says it is using the lessons learned from the analysed cases to strengthen its safeguards.

The misuse of AI systems like Claude highlights the evolving nature of cyber threats. Newer AI systems can act as "agents" on behalf of users and perform tasks largely independently. As a result, attacks that would typically require a team of experts can now be carried out by a single individual.

As the use of AI continues to grow, it is crucial for companies to prioritise security measures to prevent misuse and protect their systems and data. Anthropic's experience serves as a stark reminder of the potential risks associated with AI and the need for continuous vigilance in the digital age.
