Separating Control and Data Planes for Safe Agentic Browsing
Yashaswi Pupneja
Abstract
Recent security research has demonstrated that agentic large language models (LLMs) embedded in browsers are vulnerable to prompt injection attacks through seemingly benign web content. In one documented case, a hidden Reddit spoiler tag successfully exfiltrated two-factor authentication tokens from a browser-based AI agent. This represents a fundamental architectural vulnerability - current agentic systems conflate the control plane (what actions the agent can authorize) with the data plane (what content the agent processes), allowing untrusted web content to directly steer agent behavior.
Successful Page Load