AI Supply Chain Vulnerability Exposed: Security Researchers Demonstrate Attacks on Google and Microsoft Services


In the rapidly evolving world of artificial intelligence, a new threat to the AI supply chain has emerged. Security researchers from Palo Alto Networks have identified an attack method called "Model Namespace Reuse" that poses risks to AI platforms and open-source projects.

The attack exploits a weakness in how model names are managed on platforms such as Hugging Face. If an attacker re-registers the name of a deleted organization and uploads malicious models under the original model names, unsuspecting users can end up loading compromised models without noticing.
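A minimal sketch of why this works, using Hugging Face's `transformers` library and a hypothetical organization name: the model is resolved purely by its string identifier, so code written against a since-deleted namespace will load whatever is published there today.

```python
from transformers import AutoModel, AutoTokenizer

# The model is identified only by the string "org/name"; nothing in
# this call verifies *who* currently controls that namespace.
# "acme-research" is a hypothetical organization used for illustration.
MODEL_ID = "acme-research/sentiment-base"

# If "acme-research" later deletes its account and an attacker
# re-registers the name and uploads a model called "sentiment-base",
# this unchanged code silently downloads the attacker's model instead.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)
```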

Google, Microsoft, and numerous open-source projects are not immune to this threat. On Microsoft's Azure AI Foundry, the researchers obtained permissions equivalent to those of the Azure endpoint, potentially gaining a foothold in a user's infrastructure. Similarly, on Google's Vertex AI, they were able to open a reverse shell via an embedded model and gain access to the endpoint environment.
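The report does not publish the payloads, but the general mechanism is well known: several common model serialization formats, notably Python's pickle, execute code during loading. A deliberately harmless illustration of that behavior:

```python
import pickle

class MaliciousPayload:
    # pickle invokes __reduce__ while serializing; the callable it
    # returns is then executed during deserialization, so merely
    # *loading* the file runs attacker-chosen code.
    def __reduce__(self):
        # A real payload would spawn a reverse shell here; this
        # harmless stand-in only prints a message.
        return (print, ("arbitrary code executed on model load",))

blob = pickle.dumps(MaliciousPayload())
pickle.loads(blob)  # prints the message: code ran at load time
```

This is why loading a model from an untrusted namespace is dangerous even before any weights are used; safer formats such as safetensors reduce, but do not eliminate, the broader supply chain risk.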

The underlying problem is that model namespaces are reused not only in deployments but also in model cards, documentation, default parameters, and example notebooks. This makes it easy for an attacker to create an account under the name of the original developer and serve a malicious model at the familiar path.
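Because these references are plain strings scattered across code and notebooks, they can at least be enumerated mechanically. A rough sketch of such a review, assuming a heuristic regex that will over-match and requires manual triage of the hits:

```python
import re
from pathlib import Path

# Matches quoted strings that look like Hugging Face-style "org/model"
# identifiers. This pattern is a heuristic and will flag false
# positives (e.g., file paths); treat hits as candidates for review.
MODEL_REF = re.compile(r'["\']([\w.-]+/[\w.-]+)["\']')

def find_model_references(root: str) -> None:
    # Scanning only .py files here; notebooks and docs can be added.
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            for match in MODEL_REF.finditer(line):
                print(f"{path}:{lineno}: {match.group(1)}")

find_model_references(".")
```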

Researchers found thousands of vulnerable open-source repositories, including well-known and highly rated projects. To minimize the risk of such supply chain attacks, it is crucial to proactively audit the codebase, verify the authenticity of every model in use, and pin model versions to prevent unexpected behavior or the silent introduction of malicious models.
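With Hugging Face's `transformers`, for example, pinning can be expressed through the `revision` parameter of `from_pretrained`; the organization and commit hash below are placeholders:

```python
from transformers import AutoModel

# Pinning to a full commit SHA ensures that a re-registered namespace
# cannot serve different weights under the same name: the load fails
# unless that exact commit exists in the repository.
model = AutoModel.from_pretrained(
    "acme-research/sentiment-base",  # hypothetical model ID
    revision="0123456789abcdef0123456789abcdef01234567",  # placeholder SHA
)
```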

The researchers recommend pinning models to specific commits, storing copies in trusted locations, and systematically reviewing code for risky model references. Even developers who rely on the curated catalogs of large cloud AI services can inadvertently pull in malicious models without ever interacting directly with platforms like Hugging Face. In sensitive or production environments, it is recommended to clone the model repository to a location under your own control.
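One way to do this with the `huggingface_hub` client is to snapshot a specific commit into a directory you control and load from that local copy from then on; the model ID, commit hash, and path below are illustrative:

```python
from huggingface_hub import snapshot_download
from transformers import AutoModel

# Fetch one specific commit and materialize it in a directory you
# control (e.g., a vendored folder or internal artifact storage).
local_path = snapshot_download(
    repo_id="acme-research/sentiment-base",  # hypothetical model ID
    revision="0123456789abcdef0123456789abcdef01234567",  # placeholder SHA
    local_dir="./vendored_models/sentiment-base",
)

# All subsequent loads point at the local copy, so a later hijack of
# the "acme-research" namespace cannot change what this code runs.
model = AutoModel.from_pretrained(local_path)
```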

While all platforms make efforts to secure their model registries, no system is entirely immune to namespace hijacking or supply chain vulnerabilities. The security firm SySS GmbH has likewise uncovered manipulation risks in AI supply chains, highlighting vulnerabilities affecting npm packages, cloud environments, and major projects such as Ethereum's ETHcode extension and the DuckDB SQL database system.

In conclusion, the Model Namespace Reuse attack underscores a critical and often overlooked aspect of security: verifying the authenticity of the models used on GCP, Azure, and in numerous open-source projects. As AI continues to permeate our lives, it is essential to stay vigilant and take proactive measures to secure our AI supply chains.
