AI-driven cyber threats are reshaping the attack landscape, from malware to deepfakes
In a concerning development, researchers at HP have reported that hackers have started using Artificial Intelligence (AI) to create Remote Access Trojans (RATs), marking a significant shift in the cybersecurity landscape.
The development has raised concern among analysts, who worry that AI will make certain attacks cheaper to mount and therefore more profitable, and that the sheer volume of attacks AI can generate could fuel a surge in cybercrime.
Hacker groups, from advanced persistent threat actors such as the Russian GRU-linked Fancy Bear (Unit 26165) and Sandworm (Unit 74455) to cybercriminal crews such as ShinyHunters and Scattered Spider, are actively leveraging AI to build new and improved attack methods. Throughout 2024 and 2025, these groups have used AI to enhance phishing, malware, and zero-day exploit campaigns, automating their attacks and making them more sophisticated.
The use of AI in hacking operations is a cat-and-mouse game: it lets attackers push malicious utilities out faster than defensive measures can keep pace. Hackers are using AI not only to create new malware but also to flood code repositories such as GitHub, where malicious packages can be uploaded and disseminated faster than the platforms can take them down.
According to cybersecurity experts, however, AI is not introducing novel attack techniques so much as enhancing existing ones, making them more efficient and harder to detect. Its integration into traditional phishing campaigns is a growing threat, enabling more targeted and convincing lures.
At the same time, AI is a double-edged sword: the same capabilities attackers use to improve social engineering and automate their operations can be used by defenders to protect systems and identify threats more effectively. Generative AI code assistants, for instance, deliver huge productivity gains, and those gains accrue to attackers and defenders alike.
AI-generated deepfake video, audio, and other media are a further concern. While data from the MITRE ATT&CK framework shows that only one or two brand-new attack techniques are catalogued each year, deepfake attacks built on existing techniques are already widespread: 21% of organisations have experienced a deepfake video attack, 28% a deepfake audio attack, and 19% a deepfake media attack that bypassed biometric protections. So far, though, only 5% of these attacks have resulted in the theft of money or intellectual property.
In conclusion, while AI is not yet creating entirely new attack techniques, it is being used to automate and refine existing ones, and that is cause for concern because it lets attackers move much faster. As the integration of AI into cybercrime operations continues to evolve, organisations must stay vigilant and adapt their cybersecurity measures accordingly.