By: Ted Sweetser, VP, Strategic Partnerships
Artificial intelligence has moved beyond conversation. What began as a chat interface is rapidly evolving into systems capable of autonomous action. As organizations adopt agentic AI and frameworks such as the Model Context Protocol (MCP), they must reassess both opportunity and risk.
At the IAB’s Annual Leadership Meeting earlier this year, Zack Kass, formerly of OpenAI, reflected on a central paradox of ChatGPT’s success. The conversational interface dramatically accelerated adoption, yet it also narrowed public perception of AI’s capabilities. By presenting AI primarily through a chat box, the industry unintentionally equated artificial intelligence with conversation. The interface became the product.
Behind that interface, however, were APIs—structured, programmable systems that powered the functionality users experienced. APIs have long enabled scalable and reliable integrations, but they lack intuitive accessibility. The chat interface lowered the barrier to entry, but it constrained expectations about what AI could do. That constraint is now dissolving.
The next phase of AI deployment is defined not by dialogue, but by action. If the first wave of enterprise AI consisted largely of chatbots layered over internal knowledge bases, the second wave is increasingly agentic. Agents are autonomous, LLM-powered systems capable of executing multi-step tasks across software environments. Rather than simply retrieving information, they can update records, initiate transactions, schedule workflows, and interact with multiple systems on a user’s behalf. This shift — from responding to acting — fundamentally changes the risk profile.
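To make the shift concrete, the sketch below captures the basic agentic pattern in plain Python: a planning step (standing in for an LLM call) repeatedly selects a tool to execute until the task completes. Every name here is illustrative rather than any particular framework's API.

```python
# Illustrative agent loop: plan a step, execute it via a tool, repeat.
# plan_next_step stands in for an LLM call; the tools are stubs.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    tool: str            # which tool the model chose
    args: dict           # arguments the model supplied
    done: bool = False   # whether the overall task is complete

def update_record(record_id: str, fields: dict) -> str:
    return f"updated {record_id} with {fields}"

def schedule_workflow(name: str, cron: str) -> str:
    return f"scheduled {name} at {cron}"

TOOLS: dict[str, Callable] = {
    "update_record": update_record,
    "schedule_workflow": schedule_workflow,
}

def plan_next_step(goal: str, history: list[str]) -> Step:
    """Stand-in for the LLM: returns the next tool invocation."""
    if not history:
        return Step("update_record", {"record_id": "crm-42", "fields": {"stage": "won"}})
    return Step("schedule_workflow", {"name": "renewal", "cron": "0 9 * * 1"}, done=True)

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):  # bounded loop: one simple guardrail
        step = plan_next_step(goal, history)
        history.append(TOOLS[step.tool](**step.args))  # act, not just answer
        if step.done:
            break
    return history

print(run_agent("close the deal and set up renewal reminders"))
```

Note that every call in the loop touches external state; that is precisely what distinguishes this risk profile from a chatbot's.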
To enable such functionality, agents require structured methods of interacting with external tools and data systems. This is where protocols such as MCP come in. Developed by Anthropic and released in late 2024, MCP provides a framework through which agents discover, access, and operate tools across applications. Alongside related standards such as Agent2Agent (A2A) and the Agent Communication Protocol (ACP), MCP is shaping the interoperability layer for agentic ecosystems. As adoption accelerates, these standards are becoming foundational infrastructure.
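To ground this, the sketch below shows how an MCP server exposes a capability to agents, assuming the decorator-based FastMCP helper from the official Python MCP SDK; the server name and tool are hypothetical.

```python
# Sketch of an MCP server exposing one narrowly scoped tool to agents.
# Assumes the FastMCP helper from the Python MCP SDK
# (https://github.com/modelcontextprotocol/python-sdk); the tool itself
# is a hypothetical stub.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("campaign-tools")

@mcp.tool()
def get_campaign_status(campaign_id: str) -> str:
    """Return the current status of an ad campaign (illustrative stub)."""
    # A real deployment would query the ad platform's backend here.
    return f"Campaign {campaign_id}: active"

if __name__ == "__main__":
    # Serve over stdio; an MCP-capable agent can now discover and invoke
    # get_campaign_status through the protocol, and nothing else.
    mcp.run()
```

The key point is that the agent sees only the tools the server chooses to register, which is exactly the surface where the constraints discussed below are applied.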
Traditional APIs are deterministic. Their functions, permissions, and access scopes are explicitly defined at the time of implementation. This structure supports “privacy by design”: the aperture is fixed, and behavior is constrained.
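To illustrate that fixed aperture, the sketch below enumerates every endpoint and its required scopes at implementation time, so nothing outside the declared map can be invoked. The endpoint and scope names are hypothetical.

```python
# Illustrative fixed-aperture API: capabilities and scopes are declared
# up front, so behavior is constrained by construction.
ENDPOINT_SCOPES: dict[str, set[str]] = {
    "read_report":   {"reports:read"},
    "export_report": {"reports:read", "reports:export"},
}

def call(endpoint: str, granted_scopes: set[str]) -> str:
    required = ENDPOINT_SCOPES.get(endpoint)
    if required is None:
        raise KeyError(f"unknown endpoint: {endpoint}")  # nothing exists outside the map
    if not required <= granted_scopes:
        raise PermissionError(f"{endpoint} requires scopes {required}")
    return f"{endpoint}: ok"

print(call("read_report", {"reports:read"}))   # allowed
# call("export_report", {"reports:read"})      # raises PermissionError
```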
Agentic systems operating through MCP introduce greater flexibility. Agents can interpret intent, adapt to novel instructions, and dynamically orchestrate tasks. That flexibility enables powerful new use cases — but it also expands the potential attack surface.
To function fully, an agent may require broad system access. As Meredith Whittaker, President of Signal, observed in discussing hypothetical AI assistants, a truly autonomous agent would need near-root visibility across multiple systems to coordinate actions seamlessly. Such access raises significant privacy, security, and governance questions — particularly when sensitive or regulated data is involved.
Recent real-world examples underscore the challenge. In one instance, an AI assistant granted permission to delete temporary files inadvertently deleted years of personal data. While reversible in that case, similar errors in an enterprise context — affecting customer databases, analytics environments, or proprietary systems — would carry far greater consequences. When agents can perform CRUD operations (create, read, update, delete), oversight must be commensurate with capability.
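One way to make oversight commensurate with capability is to classify each operation by its CRUD verb and pause destructive ones for human approval. The sketch below is an illustrative policy, not a specific product's mechanism.

```python
# Illustrative guardrail: destructive CRUD verbs require explicit human
# confirmation before they execute.
from typing import Callable

DESTRUCTIVE = {"update", "delete"}

def execute(verb: str, target: str, confirm: Callable[[str], bool]) -> str:
    if verb in DESTRUCTIVE and not confirm(f"Allow agent to {verb} {target}?"):
        return f"blocked: {verb} {target} (no human approval)"
    return f"executed: {verb} {target}"

deny_all = lambda prompt: False  # stand-in for a human reviewer

print(execute("read", "analytics/events", confirm=deny_all))   # executed
print(execute("delete", "customers.db", confirm=deny_all))     # blocked
```

Under such a policy, a request like "delete temporary files" pauses for review instead of silently removing years of data.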
The risk is not only accidental misuse. Experimental deployments have demonstrated how agents can be manipulated through social engineering or poorly constrained instructions, leading to unintended actions. As agents gain access to systems such as CRMs, purchasing platforms, or campaign tools, the implications of insufficient guardrails multiply.
MCP-based implementations do provide mechanisms for constraint. Two primitives are central: prompts, which shape how an agent is instructed, and tools, which define exactly what it is permitted to execute.
For example, rather than granting direct database visibility into user credentials, an agent can be restricted to a tool that returns only a binary authorization result—success or failure—without revealing underlying records. By limiting what an agent can see and execute, organizations can reduce risk while preserving utility. The lesson is clear: autonomy must be paired with deliberate constraint.
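A minimal sketch of that pattern, with an illustrative storage scheme: the agent-facing tool verifies a credential and returns only True or False, and no record, hash, or salt ever crosses the tool boundary.

```python
# Illustrative authorization tool: the agent receives a binary answer,
# never the underlying credential records.
import hashlib
import hmac
import os

_SALT = os.urandom(16)  # server-side secret state the agent cannot see
_CREDENTIALS = {
    "alice": hashlib.pbkdf2_hmac("sha256", b"s3cret", _SALT, 100_000),
}

def check_authorization(user: str, password: str) -> bool:
    """Tool surface exposed to the agent: success or failure, nothing more."""
    stored = _CREDENTIALS.get(user)
    if stored is None:
        return False
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), _SALT, 100_000)
    return hmac.compare_digest(stored, candidate)  # constant-time comparison

print(check_authorization("alice", "s3cret"))  # True
print(check_authorization("alice", "wrong"))   # False
```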
The strategic question is not whether to adopt agentic AI, but how. Organizations should resist the impulse to treat AI as a monolithic layer embedded across every critical system. Instead, implementation should be modular: structured as discrete, narrowly scoped deployments aligned to defined objectives and sprint cycles.
In advertising technology, it may be neither necessary nor advisable for a single agent to control both impression-level bidding and broader media strategy design. Separating strategic planning from tactical execution—while enabling structured information flow between layers—preserves flexibility without centralizing excessive authority. This approach aligns with emerging industry models for agentic ad buying systems: layered, purpose-built components rather than fully autonomous end-to-end control.
The first wave of generative AI centered on conversation. The second wave centers on execution.
Agentic systems and MCP frameworks offer substantial opportunity: operational efficiency, automation at scale, and new forms of adaptive decision-making. They also introduce expanded governance requirements, heightened privacy considerations, and greater exposure to error or misuse.
The defining challenge of this phase is balance. Organizations must determine how much autonomy to grant, how much data to expose, and how tightly to constrain execution. Thoughtful architecture, disciplined scoping, and embedded guardrails will determine whether agentic AI becomes a scalable asset or a systemic vulnerability.
AI is no longer just a chat box. It is infrastructure. How that infrastructure is designed will shape both its promise and its risk.
About the NAI
Founded in 2000, the NAI is a non-profit organization and the leading self-regulatory association dedicated to responsible data collection and use for digital advertising. The NAI works closely with its members and other key stakeholders to promote policies and voluntary practices for responsible data-driven advertising across digital media. We champion strong industry self-regulation, through which industry efforts play a complementary role in maximizing compliance with privacy requirements in the U.S. while reducing the burden of enforcement. To learn more, visit www.thenai.org.