Security Architecture · November 15, 2025 · 10 min read

Zero-Trust for Agents: What It Actually Means

Zero-trust is about execution boundaries, not encryption.

Zero-trust has become a security buzzword applied to everything. For AI agents, though, it has a specific architectural meaning: no component trusts any other component with sensitive data. Understanding what that implies requires examining where trust boundaries must be enforced.

Sanitization vs Encryption

The naive security approach for AI agents is encryption: encrypt screenshots before sending them to the cloud, decrypt them for processing. This misses the threat model.

Encryption protects data in transit and at rest. But the cloud still processes the decrypted data. The security boundary is shifted, not eliminated. If the cloud is compromised, or legally compelled, or operated by adversarial actors, encrypted transmission provides no protection.

Sanitization is different. Sanitized data is transformed — sensitive information removed or replaced before transmission. The cloud never sees the original data, encrypted or not.
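To make the contrast concrete, here is a minimal Python sketch of sanitization. The types and names (RawField, SanitizedField, sanitize) are hypothetical, not a real API; the point is that the transformation is lossy by design, so the cloud receives structure, never content.

```python
from dataclasses import dataclass

@dataclass
class RawField:
    label: str   # e.g. "Patient Name" -- stays on the device
    value: str   # e.g. "Jane Doe"     -- stays on the device

@dataclass
class SanitizedField:
    index: int       # positional reference only
    populated: bool  # whether the field holds a value -- no content

def sanitize(fields: list[RawField]) -> list[SanitizedField]:
    """Strip all content; keep only the structure a planner needs."""
    return [
        SanitizedField(index=i, populated=bool(f.value))
        for i, f in enumerate(fields)
    ]

raw = [RawField("Patient Name", "Jane Doe"), RawField("Diagnosis", "")]
print(sanitize(raw))
# [SanitizedField(index=0, populated=True), SanitizedField(index=1, populated=False)]
```

Unlike encryption, there is nothing to decrypt on the other side: the original values were never transmitted in any form.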

Zero-trust for agents means the cloud planning layer operates only on sanitized abstractions, never on raw sensitive data.

Abstract Intent vs Raw Data

Consider an agent automating a healthcare workflow. A zero-trust architecture never sends patient names, diagnoses, or record numbers to the cloud.

Instead, it sends abstract representations: "Form with 7 fields, field 3 is populated, button labeled SUBMIT in bottom right." The cloud planner can reason about this abstraction without ever seeing PHI.

The planning output is equally abstract: "Click button in bottom right, wait for screen change, verify field count." No patient data in the instructions.
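A sketch of what crosses the wire in each direction might look like the following. The schema is hypothetical (a real system would version and validate it), but it shows the key property: field counts, generic labels, and regions travel; patient data never does.

```python
# What the local vision engine sends up (structure only, no values):
screen_abstraction = {
    "kind": "form",
    "field_count": 7,
    "populated_fields": [3],  # indices only, never contents
    "buttons": [{"label": "SUBMIT", "region": "bottom-right"}],
}

# What the cloud planner sends back (equally content-free):
abstract_plan = [
    {"action": "click", "target": {"role": "button", "region": "bottom-right"}},
    {"action": "wait_for", "condition": "screen_change"},
    {"action": "verify", "check": {"field_count": 7}},
]
```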

This separation — raw data local, abstractions only to cloud — is the core zero-trust principle for agents.

Why Rehydration Must Be Local

Abstract instructions must be "rehydrated" into concrete actions. "Click the Submit button" must become specific coordinates on the actual screen.

This rehydration must happen locally, using the local visual context that was never transmitted. The local execution engine maps abstract instructions to concrete actions using only on-device data.
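A minimal sketch of that mapping, assuming a hypothetical element map produced by the on-device vision engine (Element and rehydrate are illustrative names, not a real interface):

```python
from dataclasses import dataclass

@dataclass
class Element:
    role: str
    region: str
    x: int  # pixel coordinates: local-only knowledge
    y: int

def rehydrate(step: dict, elements: list[Element]) -> tuple[int, int]:
    """Resolve an abstract click target to concrete screen coordinates."""
    target = step["target"]
    for el in elements:
        if el.role == target["role"] and el.region == target["region"]:
            return (el.x, el.y)
    raise LookupError(f"No local element matches {target}")

# The element map never left the device; the step came back from the cloud.
local_elements = [Element(role="button", region="bottom-right", x=1180, y=690)]
step = {"action": "click", "target": {"role": "button", "region": "bottom-right"}}
print(rehydrate(step, local_elements))  # (1180, 690)
```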

If rehydration happened in the cloud, the cloud would need access to the raw screen — defeating the entire purpose of sanitization.

The execution boundary is physical: sensitive data stays on the device. Planning can be remote, but execution must be local.

The Trust Boundaries

In a zero-trust agent architecture, trust boundaries are explicit:

The Local Vision Engine is trusted with raw screen data. It produces sanitized abstractions.

The Cloud Planner is trusted only with sanitized abstractions. It produces abstract action plans.

The Local Execution Engine is trusted to rehydrate abstract plans into concrete actions using local visual context.

No component exceeds its trust boundary. The cloud never touches raw data. Local components never depend on cloud security.
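One way to make these boundaries hard to violate is to encode them in component signatures, so the planner cannot even accept raw data. A hedged sketch, with hypothetical type names:

```python
class RawScreen: ...             # raw pixels/text -- never serialized
class SanitizedAbstraction: ...  # structure only  -- safe to transmit
class AbstractPlan: ...          # content-free steps -- safe to receive

def local_vision(screen: RawScreen) -> SanitizedAbstraction:
    """Trusted with raw screen data; emits only sanitized abstractions."""
    return SanitizedAbstraction()

def cloud_planner(abstraction: SanitizedAbstraction) -> AbstractPlan:
    """RawScreen does not exist on this side of the boundary."""
    return AbstractPlan()

def local_executor(plan: AbstractPlan, screen: RawScreen) -> None:
    """Rehydrates the plan against local context; nothing leaves the device."""

def run(screen: RawScreen) -> None:
    plan = cloud_planner(local_vision(screen))  # only abstractions cross
    local_executor(plan, screen)                # raw data stays local
```

The design choice is that the boundary becomes a type rather than a policy: cloud-side code has no way to construct or receive a RawScreen, so a violation surfaces under type checking (e.g. mypy) instead of in a security audit.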

This is zero-trust: not "encrypted trust" but "verified, limited trust at each boundary."

Key Takeaway

Zero-trust for AI agents means enforcing execution boundaries where sensitive data never leaves the device. Cloud components receive only sanitized abstractions. Local components handle all sensitive processing. This isn't encryption — it's architectural separation of concerns.

Topics covered

Zero-Trust Model · Data Sanitization · Execution Boundaries
