The Role of Protocols for AI-to-AI Interactions

Why Non-Human Identities Can’t Be Ignored


As AI systems increasingly act on behalf of users and interact with other machines, specialized protocols like MCP (Model Context Protocol), A2A (Agent2Agent), and OAuth are critical. These protocols define how AI systems authenticate, authorize, and securely communicate with each other and with APIs. They enable safe automation, enforce access controls, and ensure trust between Non-Human Identities (NHIs), making them essential for secure and scalable AI deployments.

1. Model Context Protocol (MCP)

Created by Anthropic in November 2024, MCP is a protocol that standardizes how AI systems connect with external data and tools. At the time of writing, over 5,000 MCP servers have been released, but deploying MCP servers to production requires significant developer tooling and security effort.
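MCP is built on JSON-RPC 2.0: a client lists a server’s tools and invokes one with a `tools/call` request. The sketch below builds such a request message; the tool name `get_weather` and its arguments are hypothetical, chosen only to illustrate the message shape.

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool exposed by some MCP server: a weather lookup.
msg = make_tool_call(1, "get_weather", {"city": "Berlin"})
print(msg)
```

In a real deployment this message would travel over one of MCP’s transports (stdio or HTTP) after an `initialize` handshake; the point here is only the standardized envelope that lets any client talk to any server.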


2. Agent2Agent Protocol (A2A)

Developed by Google and now handed over to The Linux Foundation, A2A is a protocol that facilitates multi-agent communication regardless of how the agents were built. It provides a framework for AI agents to discover each other’s capabilities, collaborate and manage tasks, and make UX decisions on how to show content to the user.
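The capability discovery described above happens through an Agent Card: a JSON document an A2A agent publishes so other agents can learn what it offers before delegating tasks to it. The minimal card below is a hedged sketch; the agent name, URL, and skill are invented for illustration.

```python
import json

# A minimal, hypothetical Agent Card: the JSON document an A2A agent
# publishes so that other agents can discover its skills.
agent_card = {
    "name": "invoice-agent",                      # hypothetical agent
    "description": "Extracts totals from invoices",
    "url": "https://agents.example.com/invoice",  # placeholder endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": False},
    "skills": [
        {
            "id": "extract-total",
            "name": "Extract invoice total",
            "description": "Returns the grand total from an invoice document",
        }
    ],
}

print(json.dumps(agent_card, indent=2))
```

A peer agent fetches this card during discovery, inspects the `skills` array, and decides whether (and how) to hand off a task, regardless of what framework either agent was built with.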

3. Open Authorization (OAuth)

OAuth is a long-standing protocol that has manifold applications on the Internet – it’s also well-placed to handle authorization for agentic AI systems. Some OAuth flows and specs that are relevant for agentic AI include:

  • OAuth 2.1, which is the recommended authorization method in the MCP spec.

  • Proof Key for Code Exchange (PKCE), which protects against CSRF attacks and authorization code injection, and is well suited to systems where client secrets cannot be securely stored.

  • Dynamic Client Registration (DCR), which allows third-party clients to automatically register with an authorization server and is ideal for scalable AI systems.

  • Protected Resource Metadata (PRM), which standardizes how resource servers advertise access requirements using OAuth.
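Of these, PKCE is the most mechanical to illustrate. Per RFC 7636, the client generates a random `code_verifier`, derives a `code_challenge` by SHA-256 hashing and base64url-encoding it (the S256 method), sends the challenge with the authorization request, and later proves possession by presenting the verifier when exchanging the code for a token:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> a 43-character base64url verifier,
    # within the spec's required 43-128 character range.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

verifier, challenge = make_pkce_pair()
print(len(verifier), challenge)
```

Because only the hash travels in the authorization request, an attacker who intercepts the authorization code cannot redeem it without the original verifier, which is exactly why PKCE suits agents and other clients that cannot hold a long-lived secret.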
