Overview
Paul Kinlan, a Google developer advocate, explores how web browsers can serve as secure sandboxes for AI coding agents. His research demonstrates that browsers already contain the security infrastructure needed to safely run untrusted code - the same systems that protect users from malicious websites can be repurposed for AI agent safety.
The Breakdown
- Browsers have spent 30 years developing robust sandboxing capabilities - they’re designed to safely execute hostile, untrusted code from anywhere on the web the instant a user clicks a URL
- Three core sandbox requirements can be met with existing browser APIs: File System Access API for file operations, CSP headers with iframe sandbox for network isolation, and WebAssembly in Web Workers for safe code execution
- The Co-do demo proves the concept by creating a Claude Cowork-like experience entirely in the browser - users can chat with AI agents that manipulate local files without needing multi-GB local containers
- Advanced techniques like double-iframe sandboxing allow granular control over network access, with the outer frame handling API calls while the inner frame remains completely isolated
- The webkitdirectory input attribute enables browser-based directory access across all major browsers - providing read-only access to entire folder structures without additional security risks
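The file-operations piece can be sketched with the File System Access API: the page gets a directory handle only after the user explicitly picks a folder, and everything the agent reads or writes stays inside that scope. This is an illustrative fragment, not code from the Co-do demo; the button id and the `notes.txt` filename are made up for the example.

```html
<!-- File System Access API sketch: a user gesture grants the page a
     directory handle, and all reads/writes are confined to it. -->
<button id="open">Open project folder</button>
<script>
  document.getElementById("open").addEventListener("click", async () => {
    const dir = await window.showDirectoryPicker();      // user grants the scope
    const handle = await dir.getFileHandle("notes.txt", { create: true });
    const writable = await handle.createWritable();
    await writable.write("written by the agent, inside the granted folder");
    await writable.close();                              // flush to disk
  });
</script>
```

Note that `showDirectoryPicker()` must be called from a user gesture (hence the button), which is itself part of the security model: the agent cannot silently reach the disk.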
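The network-isolation and double-iframe ideas can be sketched as follows: the inner frame runs agent-generated code under a restrictive `sandbox` (and, in browsers that support CSP Embedded Enforcement, a `csp` attribute), while the outer frame holds the credentials and relays the API calls the sandbox forbids. This is a hedged sketch, not the article's actual code; `agent.html` and the `/api/model` endpoint are hypothetical, and the `csp` iframe attribute currently has limited browser support.

```html
<!-- Outer frame: holds the API key and proxies model calls.
     Inner frame: runs untrusted agent output with no network
     and no same-origin access. -->
<iframe
  src="agent.html"
  sandbox="allow-scripts"
  csp="default-src 'none'; script-src 'unsafe-inline'">
</iframe>
<script>
  // The outer frame makes the network requests the inner frame cannot.
  window.addEventListener("message", async (event) => {
    const reply = await fetch("/api/model", {
      method: "POST",
      body: JSON.stringify(event.data),
    });
    // A sandboxed frame without allow-same-origin has an opaque origin,
    // so "*" is the only usable targetOrigin when replying.
    event.source.postMessage(await reply.json(), "*");
  });
</script>
```

The design point is that the trust boundary is `postMessage`: the inner frame can only ask, never fetch.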
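The webkitdirectory approach is the lighter-weight, read-only counterpart: the user picks a folder through a standard file input, and the page receives File objects for its contents with their relative paths, with no write access at all. An illustrative fragment (the input id is made up):

```html
<!-- webkitdirectory: the user picks a folder and the page gets
     read-only File objects for everything inside it. -->
<input type="file" id="dir-picker" webkitdirectory>
<script>
  document.getElementById("dir-picker").addEventListener("change", (event) => {
    for (const file of event.target.files) {
      // webkitRelativePath preserves the folder structure.
      console.log(file.webkitRelativePath, file.size);
    }
  });
</script>
```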
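The safe-execution piece rests on WebAssembly's default-deny model: a module has no ambient access to the DOM, the network, or the file system, and can only call imports you hand it. A minimal illustration (the Worker wrapper from the article is omitted here; instantiation is the same either way):

```javascript
// Hand-assembled WebAssembly module exporting add(a, b) -> a + b.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // local.get 0/1, i32.add, end
]);

// Synchronous instantiation with an empty import object: the module
// can compute, but it cannot see or touch anything outside itself.
const module = new WebAssembly.Module(wasmBytes);
const instance = new WebAssembly.Instance(module, {});

console.log(instance.exports.add(2, 3)); // 5
```

Running this inside a Web Worker, as the article suggests, adds a second layer: even the JavaScript that hosts the module is off the main thread and has no DOM access.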