AI Hacks Take Center Stage: Attackers and Defenders Escalate the Cybersecurity Arms Race
Artificial intelligence (AI) is making its mark on the cybersecurity industry, but its impact is not without controversy.
AI is being employed by a wide range of actors, from hackers and cybercriminals to spies, researchers, and corporate defenders. Google's vice president of security engineering, Heather Adkins, has noted that while AI is being used extensively, it hasn't discovered anything novel; it is merely doing what it already knows how to do.
One area where AI's utility is questionable is open source development. AI-generated, largely irrelevant bug reports have been flooding these projects, wasting significant time for maintainers like Daniel Stenberg, lead developer of the curl project, which is used in more than 20 billion devices. Stenberg has voiced concern about how much of his time now goes to weeding out these AI-generated submissions.
On the other hand, AI is proving to be a valuable tool for cybersecurity firms. CrowdStrike, for instance, uses AI in tools such as Falcon Adversary Intelligence, Charlotte AI Detection Triage, and CrowdStrike Signal. These tools detect, prioritize, and respond to cyber threats in real time, tailor intelligence to each customer's environment, and automate alert triage, improving both the speed and accuracy of security operations.
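CrowdStrike does not publish the internals of these products, but the basic idea behind automated alert triage can be sketched in a few lines: score each incoming alert on signals such as severity, asset criticality, and threat-intelligence matches, then surface only the highest-scoring alerts to human analysts. The Python sketch below is purely illustrative; the Alert fields, weights, and threshold are hypothetical and are not CrowdStrike's API or model.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    # Hypothetical fields for illustration; real products expose far richer telemetry.
    name: str
    severity: int           # 1 (low) to 10 (critical), as reported by the sensor
    asset_criticality: int  # 1 to 5, how important the affected host is
    intel_match: bool       # True if the indicator overlaps known-adversary intel

def triage_score(alert: Alert) -> float:
    """Combine simple signals into a single priority score (higher = more urgent)."""
    score = float(alert.severity * alert.asset_criticality)
    if alert.intel_match:
        score *= 1.5  # overlap with known adversaries bumps the priority
    return score

def triage(alerts: list[Alert], threshold: float = 25.0) -> list[Alert]:
    """Return only the alerts worth an analyst's attention, most urgent first."""
    urgent = [a for a in alerts if triage_score(a) >= threshold]
    return sorted(urgent, key=triage_score, reverse=True)

if __name__ == "__main__":
    queue = [
        Alert("suspicious PowerShell execution", severity=7, asset_criticality=4, intel_match=True),
        Alert("failed login burst", severity=3, asset_criticality=2, intel_match=False),
    ]
    for a in triage(queue):
        print(f"{a.name}: score {triage_score(a):.1f}")
```

In a commercial product a trained model replaces these hand-tuned weights, but the workflow is the same: score, filter, and rank alerts before a human ever sees them.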
Moreover, CrowdStrike integrates its AI governance capabilities with ChatGPT Enterprise to monitor and control AI agents across organizations, aiming to offer a more comprehensive cybersecurity solution.
AI has also been used in social engineering attacks, such as the North Korean tech worker scheme, where generative AI was employed to create resumes, social media accounts, and other materials to trick Western tech companies into hiring North Korean operatives.
The use of AI in cybersecurity is a subject of debate, with some viewing it as simply another side effect of the broader interest in AI. In 2025, about 20% of all security report submissions were AI-generated. It also remains unclear how valuable the intelligence gathered by AI hunting for sensitive files on Russia's behalf has actually been.
AI's ability to follow language instructions, translate plain language into computer code, and identify and summarize documents, as demonstrated by large language models (LLMs) such as ChatGPT, is well established. However, its ability to conduct sophisticated cybersecurity research on its own is still limited.
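As an illustration of that well-established capability, the minimal sketch below asks a chat model to summarize a document. It assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY environment variable; the model name and file path are placeholders rather than anything referenced in this article.

```python
# Minimal sketch: asking a large language model to summarize a document.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable;
# the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()

def summarize(document: str) -> str:
    """Return a short plain-language summary of the given text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Summarize the user's document in three sentences."},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("incident_report.txt", encoding="utf-8") as f:
        print(summarize(f.read()))
```

Routine tasks like this are where today's models shine; chaining them autonomously into end-to-end vulnerability research is where they still fall short.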
As AI continues to evolve, it is crucial to strike a balance between its potential benefits and the risks it poses. The cybersecurity industry views AI as a digital version of "Rock 'Em Sock 'Em Robots," pitting offensive- and defensive-minded AI against each other.
CrowdStrike is now offering assistance to individuals who believe they have been hacked. Google has also been discovering vulnerabilities with AI; in early July 2025, only about 5% of submissions were genuine vulnerabilities, a significant decrease compared to previous years.
Russian hackers have started embedding AI in malware used against Ukraine to automatically search for sensitive files. This development underscores the need for vigilance and continuous innovation in the cybersecurity field. As AI continues to shape the landscape of cybersecurity, it is essential to navigate this new frontier with caution and foresight.