In December 2025 Forrester gave the category a name. By May 2026 every hyperscaler, identity vendor, and AI security startup is claiming a piece of it. The Agent Control Plane is now the most contested layer in the enterprise AI stack, and most of the people buying it cannot yet articulate what it is.
This post is for the CISO, the Head of GRC, and the platform engineering lead who keep hearing the term in vendor meetings and want a working definition that survives contact with the actual problem. We will walk through what an Agent Control Plane is, why it became necessary, what it must do to be useful, and where KonaSense fits in that picture.
The definition that matters
An Agent Control Plane is the governance, security, and observability layer that sits across a heterogeneous estate of AI agents and the humans operating alongside them. It is vendor-agnostic by design. It does not build agents. It does not orchestrate them into workflows. It governs them, sees them, and constrains them.
The term is a deliberate borrowing from networking and Kubernetes. In those systems the control plane is the layer that decides how traffic or workloads are routed, secured, and observed, separate from the data plane that actually moves the bytes or runs the workloads. The same split is now being applied to AI agents. The agents themselves are the data plane. They reason, call tools, write files, send messages, query databases. The control plane is the layer above them that decides which of those actions are allowed, captures what happened, and gives the enterprise a single place to govern the estate.
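To make the split concrete, here is a minimal sketch in Python, with hypothetical names throughout: the agent proposes an action on the data plane, and a separate control-plane layer decides whether it proceeds and keeps the record.

```python
from dataclasses import dataclass

# Hypothetical types for illustration only. The agent (data plane) proposes
# an action; the control plane decides whether it may proceed and records it.
@dataclass
class Action:
    actor: str   # the human or agent identity proposing the action
    tool: str    # e.g. "file_transfer", "shell_exec"
    target: str  # e.g. a file path or URL

class ControlPlane:
    def __init__(self):
        self.audit_log = []

    def authorize(self, action: Action) -> bool:
        # The policy decision lives here, separate from the agent itself.
        allowed = not action.target.startswith("/secrets")
        self.audit_log.append((action, "allow" if allowed else "block"))
        return allowed

cp = ControlPlane()
if cp.authorize(Action("agent:claude-code", "file_transfer", "/secrets/keys.pem")):
    print("action proceeds on the data plane")
else:
    print("action blocked by the control plane")
```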
Forrester formalized this as the third functional plane in an enterprise agentic architecture, alongside the build plane (where agents are constructed) and the orchestration plane (where agents are composed into business workflows). The control plane is what makes the other two safe to operate at enterprise scale.
| Plane | Question it answers | Typical category |
|---|---|---|
| Build | How do we build, deploy, and scale agentic AI systems? | AI platforms, agent frameworks, model access, evaluation pipelines |
| Orchestration | How do we compose agents into business workflows? | Adaptive process orchestration, workflow engines |
| Control | How do we govern, observe, and constrain a heterogeneous agent estate? | Agent Control Plane |
Why the category had to exist
A year ago, most enterprises had one or two AI tools in production. Today a typical knowledge-work organization has dozens. A typical engineering organization has more. The CFO uses one. Legal uses another. Marketing has three. The developers have Claude Code, Cursor, Copilot, and a Gemini CLI. The security team has its own. None of them share a session model, an audit trail, an identity, or a policy framework. Each one was procured separately, signed off separately, and instrumented separately, if at all.
This is the same situation enterprises faced with SaaS in 2014, with containers in 2016, and with cloud workloads before that. A useful new abstraction proliferated faster than the governance tooling around it. The pattern that resolved each of those was the same: a control layer emerged that was vendor-agnostic, cross-cutting, and operationally separate from the thing being controlled. CASB for SaaS. Kubernetes for containers. CSPM for cloud. The Agent Control Plane is the equivalent for AI agents.
The forcing function is not just sprawl. Three things make this category urgent in 2026: regulatory deadlines like the EU AI Act's August 2 obligations, the blast radius of coding agents that write to disk and run shell commands, and the collapse of the boundary between human and agent activity on the same corporate surfaces.
What an Agent Control Plane must do
The category is young enough that vendors disagree on the boundaries. Strip the marketing away, though, and the functional requirements collapse to four things. A control plane that does not deliver on all four is not a control plane. It is a feature.
| Requirement | What it means in practice |
|---|---|
| Discovery and inventory | Continuous visibility into every agent operating in the environment, including the ones IT did not approve. |
| Policy enforcement at the action | Block, redact, or coach in real time, at the surface where the prompt and the tool call live. |
| Tamper-evident audit trail | Cryptographically signed, vendor-neutral, queryable record of what the human asked, what the agent did, and what policy decided. |
| Humans and agents in one plane | Govern the action on the corporate surface, regardless of whether a human or an AI took it. |
1. Discover and inventory the agent estate
You cannot govern what you cannot see. The control plane must continuously discover which agents are operating in the environment, what tools they are wired to, which humans they are acting on behalf of, and what data they are touching. This includes the agents the security team approved, the agents the developers installed without asking, and the browser extensions someone enabled on their personal Chrome profile. If the inventory only covers agents the IT team registered, it is wrong.
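As a rough illustration of what an inventory entry has to capture, here is a sketch with hypothetical field names; note the `it_approved` flag, which defaults to false precisely because discovery cannot assume registration.

```python
from dataclasses import dataclass, field

# Hypothetical inventory record; field names are illustrative, not a schema.
@dataclass
class AgentInventoryEntry:
    agent_id: str                  # e.g. "cursor@laptop-4411"
    discovered_via: str            # "endpoint-sensor", not "IT registry"
    acting_for: str                # the human principal, e.g. "jdoe@corp"
    tools: list[str] = field(default_factory=list)         # wired tool surface
    data_touched: list[str] = field(default_factory=list)  # observed, not declared
    it_approved: bool = False      # shadow agents stay False until reviewed

entry = AgentInventoryEntry(
    agent_id="cursor@laptop-4411",
    discovered_via="endpoint-sensor",
    acting_for="jdoe@corp",
    tools=["shell_exec", "git_push", "http_request"],
    data_touched=["~/repos/billing-service"],
)
print(entry)
```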
2. Enforce policy at the moment of action
Logging is not control. A control plane has to be in the path of consequential actions, with the ability to block, redact, or coach in real time. The decision point has to be where the prompt and the tool call live, not three layers downstream at a network egress or a SaaS API. By the time a sensitive file has been uploaded to a third-party model, an after-the-fact alert is a forensic record, not a control.
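A minimal sketch of what "in the path" means, with hypothetical tool names and deliberately naive detection rules: the hook runs before the tool call executes, and the verdict can be block, redact, or coach rather than an after-the-fact alert.

```python
import re
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    REDACT = "redact"
    COACH = "coach"  # let it through, but warn the user inline

# Toy pattern for illustration; a real engine would do far better than regex.
SECRET_PATTERN = re.compile(r"(?i)(api[_-]?key|password)\s*[:=]\s*\S+")

def before_tool_call(tool: str, payload: str) -> tuple[Verdict, str]:
    """Hypothetical hook invoked before the tool call executes."""
    if tool == "http_upload" and SECRET_PATTERN.search(payload):
        # Redact in flight instead of alerting after the bytes have left.
        return Verdict.REDACT, SECRET_PATTERN.sub("[REDACTED]", payload)
    if tool == "shell_exec" and "curl" in payload and "| sh" in payload:
        return Verdict.BLOCK, payload
    return Verdict.ALLOW, payload

verdict, payload = before_tool_call("http_upload", "api_key: sk-123 plus notes")
print(verdict.value, "->", payload)  # redact -> [REDACTED] plus notes
```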
3. Produce a tamper-evident audit trail
The audit trail has to be cryptographically verifiable, vendor-neutral, and queryable. It has to record what the human asked, what the agent decided, which tools the agent called, what data the agent saw, and what the policy layer did about it. This is the evidence base for SOC 2, ISO 27001, the EU AI Act, and the inevitable internal post-mortem when something goes wrong. If the audit trail lives inside the agent vendor's own SaaS, it is not vendor-neutral and it is not your evidence.
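For a feel of what "cryptographically verifiable" means at the point of capture, here is a minimal sketch using the open-source Python `cryptography` package and ECDSA (the signature scheme named in the suite descriptions below); the event shape and field names are illustrative, not an actual schema.

```python
import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Illustrative event shape, loosely OCSF-flavored; not a real schema.
event = {
    "actor": "agent:claude-code@laptop-4411",
    "on_behalf_of": "jdoe@corp",
    "tool": "file_transfer",
    "target": "/home/jdoe/finance/q3.xlsx",
    "policy_decision": "block",
    "time": "2026-05-04T09:12:31Z",
}

# Sign the canonicalized event where it is captured. Verification can happen
# later, anywhere, without trusting the vendor that happens to store it.
private_key = ec.generate_private_key(ec.SECP256R1())
canonical = json.dumps(event, sort_keys=True, separators=(",", ":")).encode()
signature = private_key.sign(canonical, ec.ECDSA(hashes.SHA256()))

# Any holder of the public key can prove the record was not altered;
# verify() raises InvalidSignature if a single byte changed.
private_key.public_key().verify(signature, canonical, ec.ECDSA(hashes.SHA256()))
print("audit record verified")
```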
4. Cover humans and AI workers in the same plane
This is the requirement most early entrants in the category get wrong. Modern agentic workflows are not pure agent-to-agent. They are humans and agents collaborating through the same surfaces: the browser, the IDE, the terminal, the chat client. A control plane that only governs autonomous agents and ignores the human who pasted the credentials, or the developer who approved the tool install, has a structural blind spot. The unit of governance is the action taken on a corporate surface by a human-or-agent identity, not just the agent.
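A sketch of what "one plane" implies for the data model, with hypothetical types: the principal can be a human, or an agent acting for a human, and the same policy path evaluates both halves of the composite.

```python
from dataclasses import dataclass
from typing import Literal, Optional

# Hypothetical unified identity: the unit of governance is the action taken
# on a corporate surface, whoever took it.
@dataclass
class Principal:
    kind: Literal["human", "agent"]
    id: str
    on_behalf_of: Optional[str] = None  # set when an agent acts for a human

def govern(principal: Principal, surface: str, action: str) -> str:
    # One policy path for both kinds of principal; no blind spot by actor type.
    if action == "paste_credentials":
        return f"coach {principal.kind} {principal.id} on {surface}"
    return "allow"

print(govern(Principal("human", "jdoe@corp"), "browser", "paste_credentials"))
print(govern(Principal("agent", "claude-code", "jdoe@corp"), "terminal", "paste_credentials"))
```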
Where the existing stack falls short
The first instinct for many CISOs is to ask whether existing tools cover this. The honest answer is that they cover pieces of it, and none of them cover the composite.
| Layer | What it sees | Where it goes blind |
|---|---|---|
| Identity providers | Issue identities to agents and gate access at login. | What the agent does after the token is issued. The prompt the user typed. |
| DLP and CASB | Network egress and SaaS API traffic. | Actions on the local machine. Anything that does not traverse a recognized SaaS path. (See Part 2 for a worked example.) |
| SIEMs | Aggregate and search events from upstream sensors. | Do not block, redact, or coach in real time. Do not generate the high-fidelity event in the first place. |
| Cloud-identity governance suites | Cloud-native agents inside a single vendor's IAM, posture, and DLP stack. | Actions on a developer's laptop, in a Cursor session, in a browser extension, in a CLI tool. |
The composite signal that matters, "this human-or-agent identity took this action against this data on this surface, and here is the policy decision and the evidence," is not natively produced by any of the layers above. It has to be produced where the action lives.
Where KonaSense fits
KonaSense is an Agent Control Plane. We instrument the surfaces where the prompt and the tool call actually live: the browser, the IDE, the CLI, and the agentic pipeline. We govern humans and AI coworkers in the same plane, because in 2026 they share the same surfaces and the same blast radius.
The product is organized into three suites that map directly to the four requirements above:
- KonaSense Observability produces the inventory and the audit trail. Every event is OCSF-normalized, ECDSA-signed at the agent, and routed to your SIEM, your S3 bucket, or our managed lakehouse, depending on the deployment model you choose. Zero-retention configurable per tenant. The audit trail is yours and is portable.
- KonaSense Agent Security and Governance is the policy enforcement layer. PrismSense, our policy engine, makes block, redact, and coach decisions in the path of the action, before the bytes leave the surface. Policies are written against actions, not against tool names, so they survive the next file-transfer utility your developers install (a sketch of what an action-level policy can look like follows the table below).
- KonaSense Red is the offensive research arm. We continuously test the coding agents and browser-resident AI tools our customers depend on and publish coordinated disclosures so the category gets safer over time, not just our customers.
| ACP requirement | KonaSense suite |
|---|---|
| Discovery and inventory | Observability |
| Policy enforcement at the action | Agent Security and Governance |
| Tamper-evident audit trail | Observability |
| Humans and agents in one plane | Observability + Agent Security and Governance |
| Continuous category hardening | KonaSense Red |
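To make "policies written against actions, not tool names" concrete, here is a deliberately simplified sketch; PrismSense's actual policy language is not shown here, and every name below is hypothetical. The policy matches a semantic action class, so two different tools triggering the same class get the same decision.

```python
# Hypothetical action-level policy, sketched as plain data.
SENSITIVE_DIRS = ("/home/jdoe/finance", "/etc/secrets")

policy = {
    "id": "no-sensitive-egress",
    "applies_to": "data_egress",  # a semantic action class, not a tool name
    "condition": lambda a: a["source"].startswith(SENSITIVE_DIRS),
    "verdict": "block",
}

def evaluate(action: dict) -> str:
    if action["class"] == policy["applies_to"] and policy["condition"](action):
        return policy["verdict"]
    return "allow"

# Two different tools, the same action class, the same decision. The next
# file-transfer utility the developers install is covered without a rewrite.
print(evaluate({"class": "data_egress", "tool": "wormhole", "source": "/etc/secrets/keys"}))
print(evaluate({"class": "data_egress", "tool": "scp", "source": "/etc/secrets/keys"}))
```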
Three things distinguish where KonaSense sits on the map.
We put the sensor where the intent is, not where the bytes are. A coding agent that decides to exfiltrate a file is a different signal at the prompt-and-shell level than at the network edge. The composite event "agent invoked a file transfer against a file in a sensitive directory" is a single high-signal log line at the surface. By the time the same event reaches the firewall, it is two unrelated network connections to unknown hosts. We instrument the surface.
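Illustrative only, with made-up values: the same action rendered as the single composite event a surface sensor can produce, next to the two context-free flow records a network edge would see.

```python
# At the surface: one composite, attributable, high-signal event.
surface_event = {
    "actor": "agent:claude-code", "on_behalf_of": "jdoe@corp",
    "action": "file_transfer", "source": "/home/jdoe/finance/q3.xlsx",
    "dest": "relay.example.net", "policy_decision": "block",
}

# At the network edge: two connections with no actor, no file, no intent.
edge_events = [
    {"src": "10.0.4.17", "dst": "203.0.113.9", "port": 443, "bytes": 912},
    {"src": "10.0.4.17", "dst": "203.0.113.9", "port": 443, "bytes": 48201},
]
print(surface_event["policy_decision"], "vs", len(edge_events), "anonymous flows")
```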
We treat the human and the agent as the same problem. The pattern that matters in 2026 is the human-plus-agent composite: the developer who told Claude Code to install `wormhole`, the analyst who pasted a customer list into a browser-resident model, the marketer who let an agent draft an email to the wrong audience. KonaSense governs the surface, which means it sees both halves.
We are coding-agent-native. Coding agents are the highest-blast-radius AI tools in the enterprise today. They write to disk, push to git, run shell commands, install software, and open network connections. KonaSense ships first-class instrumentation for Claude Code, Cursor, Copilot, and the broader IDE and CLI surface. This is not an afterthought bolted onto a SaaS-layer governance product.
What to do next, by role
If you are a CISO or Head of Security, the immediate question is whether you have a defensible answer to the EU AI Act's August 2 deadline and to your board's next question about agent risk. The control plane is the artifact that produces that answer. The land-and-expand path most enterprises take is to start with observability, build the inventory and the audit trail, and add enforcement once the policy questions are clear.
If you are a Head of GRC or Compliance, the control plane is what turns a policy document into evidence. Without it, your AI policy is a PDF. With it, you can demonstrate that the policy actually fired against actual actions, with cryptographically signed records to back it up.
If you are a platform engineering or developer experience lead, the control plane is what lets you say yes to coding agents at scale. Without it, every new agent is a new line item on the risk register. With it, the agents become governed surfaces with a known security posture.
The category in one sentence
Most enterprises will end up running multiple agents from multiple vendors across multiple surfaces. The Agent Control Plane is the layer that makes that estate governable, observable, and safe. KonaSense is the control plane that lives where the action lives, on the surfaces where humans and AI coworkers actually do the work.
Want to see what your own agent estate looks like? Book a 30-minute walkthrough and we will show you the events your current stack is missing.