Skip to content

NIST solicits opinions on additional security measures for AI systems through control overlays

Federal authorities intend to create advice for businesses on the diverse applications of artificial intelligence.

AI Security: NIST Requests Public Feedback on Control Layers for Protecting AI Systems
AI Security: NIST Requests Public Feedback on Control Layers for Protecting AI Systems

NIST solicits opinions on additional security measures for AI systems through control overlays

In a world where artificial intelligence (AI) is rapidly becoming an integral part of corporate environments, new cybersecurity risks have emerged. Researchers from Zenity Labs, also known as "hai.lab", have demonstrated how malicious actors could potentially gain control of top AI agents and weaponize them for attacks.

These AI agents, capable of autonomous actions, have been identified as a target for data theft or corruption by malicious actors. Hackers could manipulate critical workflows using controlled AI agents, a demonstration of which was showcased at the Black Hat conference in Las Vegas.

The National Institute of Standards and Technology (NIST) has acknowledged these concerns and has released a concept paper about creating control overlays for securing AI systems. Based on the SP 800-53 framework, these control overlays are designed to help ensure AI technology's integrity and confidentiality in various test cases.

Large language models (LLMs) are not immune to these threats. They too can be used to launch autonomous cyberattacks, as demonstrated by researchers at Carnegie Mellon in July. The use of AI agents in corporate environments presents new cybersecurity risks that modern AI systems, according to the NIST paper, introduce different security challenges than traditional software.

To address these challenges, NIST is seeking public feedback on a plan to develop guidance for secure AI system implementation. A Slack channel has been created to collect community feedback on the development of the control overlays.

The project is currently based on five use cases, with the ultimate goal of ensuring the secure implementation of AI technology in corporate environments. The advances and potential use cases for AI technologies bring new opportunities, but they also bring new cybersecurity risks that need to be addressed. As AI continues to evolve, so too must our efforts to secure it.

Read also: