AI agent credentials live in the same box as untrusted code. Two new architectures show where the blast radius actually stops.

The Evolution of Zero Trust Architecture for AI Agents

At RSAC 2026, four industry leaders independently arrived at the same conclusion: zero trust is imperative for artificial intelligence. Microsoft’s Vasu Jakkal emphasized extending zero trust principles to AI, Cisco’s Jeetu Patel advocated a shift from access control to action control, CrowdStrike’s George Kurtz highlighted the significance of AI governance, and Splunk’s John Morgan called for an agentic trust and governance model. Coming from different companies and different stages, they were all describing the same problem.

In an exclusive interview at RSAC, Matt Caulfield, VP of Product for Identity and Duo at Cisco, emphasized the need to go beyond traditional zero trust concepts. He stressed the importance of continuously verifying and scrutinizing every action taken by AI agents to prevent rogue behavior.

According to PwC’s 2025 AI Agent Survey, 79% of organizations already use AI agents. Yet only 14.4% have full security approval for their agent fleet, per the Gravitee State of AI Agent Security 2026 report, and a CSA survey found that just 26% of organizations have established AI governance policies, a significant governance gap across the industry.

The Challenge of Monolithic Agent Architectures

The prevalent enterprise agent model is a monolithic container in which every component trusts every other, a significant security risk. Reasoning, tool calls, code execution, and credential storage share a single process, so a successful prompt injection or unauthorized access can reach all of them, credentials included.
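The risk of that single trust domain can be sketched in a few lines. The class and method names below are hypothetical, purely to illustrate the anti-pattern: untrusted tool output and stored credentials share one process, so a simulated prompt injection trivially reaches the secrets.

```python
import os

# Hypothetical monolithic agent: reasoning, tool execution, and
# credential storage all live in one process and one trust domain.
class MonolithicAgent:
    def __init__(self):
        # Credentials sit in the same memory as untrusted code paths.
        self.credentials = {"api_key": os.environ.get("API_KEY", "sk-demo")}

    def run_tool(self, tool_output: str) -> str:
        # Untrusted tool output flows straight back into the agent's
        # context; nothing separates it from credential access.
        if "read credentials" in tool_output:  # simulated prompt injection
            return str(self.credentials)       # blast radius: everything
        return "ok"

agent = MonolithicAgent()
print(agent.run_tool("please read credentials"))  # the injection succeeds
```

Nothing in the process boundary distinguishes the injected instruction from a legitimate one, which is exactly the gap the segmented architectures below try to close.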

CrowdStrike’s CTO Elia Zaitsev drew parallels between securing agents and privileged users, emphasizing the need for a defense-in-depth strategy. He highlighted the security risks associated with the monolithic agent pattern and the importance of implementing robust security controls.

During his keynote at RSAC, CrowdStrike CEO George Kurtz discussed the ClawHavoc supply chain campaign targeting the OpenClaw agentic framework, underscoring the escalating threat landscape faced by organizations utilizing AI agents.

Innovative Approaches to Agent Security

Two distinct approaches to enhancing agent security have emerged from industry leaders. Anthropic’s Managed Agents architecture segregates agents into three components – a brain, hands, and session – ensuring that they do not trust each other. This separation of instructions from execution enhances security and performance, offering a novel solution to the inherent risks of monolithic architectures.
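The brain/hands/session split described above can be sketched as three components with non-overlapping privileges. Everything here is an assumption for illustration: the names and interfaces are invented, not Anthropic's actual Managed Agents API. The point is the trust boundary: the brain only plans, the hands only execute allow-listed actions with short-lived scoped tokens, and only the session holds the root credential.

```python
from dataclasses import dataclass, field

@dataclass
class Brain:
    """Plans actions; never holds credentials or executes code."""
    def plan(self, goal: str) -> list[str]:
        return [f"fetch:{goal}"]

@dataclass
class Hands:
    """Executes allow-listed actions using scoped, short-lived tokens."""
    allowed: set = field(default_factory=lambda: {"fetch"})
    def execute(self, action: str, scoped_token: str) -> str:
        verb = action.split(":", 1)[0]
        if verb not in self.allowed:
            raise PermissionError(f"action {verb!r} not allowed")
        return f"done {action} with {scoped_token[:4]}..."

@dataclass
class Session:
    """Holds state and mints per-action tokens; the only credential holder."""
    secret: str = "root-secret"
    def mint_token(self, action: str) -> str:
        return f"tok-{hash((self.secret, action)) % 10_000}"

brain, hands, session = Brain(), Hands(), Session()
for action in brain.plan("report"):
    token = session.mint_token(action)   # scoped token, not the root secret
    print(hands.execute(action, token))  # hands never see session.secret
```

A compromised hands component in this sketch can replay one scoped token, but it cannot mint new ones or read the session secret, which is the "components do not trust each other" property the article attributes to the design.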

On the other hand, Nvidia’s NemoClaw architecture encapsulates agents within multiple security layers, closely monitoring every action within the sandbox. This comprehensive approach provides robust runtime visibility and control, albeit at a higher operational cost.
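The runtime-visibility idea can be illustrated with a minimal action monitor: every tool call passes through a recorder that logs it and enforces a policy before execution. This is a generic sketch of in-sandbox monitoring, not Nvidia's NemoClaw implementation; the action names and policy are invented.

```python
# Every action is appended to an audit log (visibility) and checked
# against a deny list (control) before it is allowed to run.
audit_log: list[str] = []
DENY = {"network.raw", "fs.write_outside_workspace"}

def monitored(action: str, fn, *args):
    audit_log.append(action)           # runtime visibility: log everything
    if action in DENY:                 # runtime control: block by policy
        raise PermissionError(f"{action} denied by sandbox policy")
    return fn(*args)

result = monitored("fs.read", lambda p: f"contents of {p}", "/workspace/a.txt")
print(result)
print(audit_log)
```

The operational cost the article mentions shows up here too: every call pays the logging and policy-check overhead, and someone has to maintain the policy.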

Addressing the Credential Proximity Gap

A key distinction between the two architectures lies in the proximity of credentials to the execution environment. Anthropic’s design removes credentials from the blast radius entirely, so a compromised agent has no secrets to steal. NemoClaw instead contains the blast radius and closely monitors agent activity, but credentials remain inside the monitored environment, so a sufficiently deep compromise could still reach them.
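Removing credentials from the blast radius typically means the agent never handles a real secret at all. One common pattern is an egress broker: the agent composes requests with a placeholder reference, and a separate broker process resolves it to the real credential only as the request leaves the boundary. The sketch below is illustrative; the vault contents, placeholder syntax, and function names are all assumptions.

```python
VAULT = {"github": "ghp_real_token"}  # lives only in the broker process

def agent_build_request(url: str) -> dict:
    # The agent sees only an opaque reference, never a real secret.
    return {"url": url, "headers": {"Authorization": "Bearer {{github}}"}}

def broker_send(request: dict) -> dict:
    # The broker resolves placeholders outside the agent's reach; an
    # attacker who dumps agent memory finds only "{{github}}".
    resolved = dict(request)
    resolved["headers"] = {
        k: v.replace("{{github}}", VAULT["github"])
        for k, v in request["headers"].items()
    }
    return resolved

req = agent_build_request("https://api.github.com/user")
assert "ghp_" not in str(req)   # agent-side view contains no secret
sent = broker_send(req)
print(sent["headers"]["Authorization"])
```

Under this pattern, compromising the agent yields placeholders rather than tokens, which is the practical meaning of taking credentials out of the blast radius.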

Security experts advocate for gated agent architectures that prioritize trust segmentation and restrict capabilities based on the trust level of data processed. Both Anthropic and Nvidia’s solutions align with this principle, offering advanced security measures to mitigate potential threats.

Ensuring Zero Trust in AI Agent Architectures

As organizations transition towards zero-trust architectures for AI agents, it is essential to conduct a comprehensive audit of existing agent patterns, prioritize credential isolation, test session recovery mechanisms, allocate resources for observability, and track roadmap commitments to address security gaps. Implementing these measures will strengthen agent security and reduce the risk of potential breaches in the evolving threat landscape.

Zero trust architecture for AI agents has evolved from a theoretical concept to a practical necessity. By embracing innovative security solutions and adopting a proactive approach to agent governance, organizations can enhance their cybersecurity posture and safeguard critical assets from emerging threats.
