
Agent-to-agent—when machines negotiate with machines

Dec 27, 2025 · 5 minute read

The agentic web’s next phase isn’t just agents serving humans. It’s agents transacting with each other.

Beyond human-in-the-loop

The current conversation about AI agents focuses on human delegation: you tell an agent what you want, and it executes on your behalf. But this is just the first step. The more profound shift comes when agents interact with other agents—negotiating, coordinating, and transacting without humans involved in each exchange.

Your personal agent needs to schedule a meeting with a client. Instead of emailing a human assistant, it communicates directly with the client’s scheduling agent. The two agents compare calendars, propose times, negotiate based on preferences and priorities, and confirm a slot—all in seconds, all without human intervention on either side.
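
As a rough sketch of what that exchange could look like, the toy Python below has two hypothetical scheduling agents alternate proposals until one side accepts. The SchedulingAgent class, the slot strings, and the simple accept/reject rule are assumptions for illustration, not a real protocol; actual scheduling agents would negotiate over far richer preferences and constraints.

```python
from dataclasses import dataclass


@dataclass
class SchedulingAgent:
    """Toy agent holding its owner's free slots, ordered by preference."""
    name: str
    free_slots: list[str]  # e.g. "Wed 14:00"

    def propose(self, rejected: set[str]) -> str | None:
        # Offer the most preferred slot that hasn't been rejected yet.
        for slot in self.free_slots:
            if slot not in rejected:
                return slot
        return None

    def accepts(self, slot: str) -> bool:
        return slot in self.free_slots


def negotiate(a: SchedulingAgent, b: SchedulingAgent, max_rounds: int = 10) -> str | None:
    """Alternate proposals between the two agents until one is accepted."""
    rejected: set[str] = set()
    proposer, responder = a, b
    for _ in range(max_rounds):
        slot = proposer.propose(rejected)
        if slot is None:
            return None          # proposer has nothing left to offer
        if responder.accepts(slot):
            return slot          # both calendars contain this slot
        rejected.add(slot)
        proposer, responder = responder, proposer
    return None


assistant = SchedulingAgent("assistant", ["Tue 10:00", "Wed 14:00", "Thu 09:00"])
client = SchedulingAgent("client", ["Wed 14:00", "Thu 09:00"])
print(negotiate(assistant, client))  # -> "Wed 14:00"
```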

This isn’t science fiction. It’s the logical next step of the infrastructure being built today.

The coordination challenge

Agent-to-agent interaction introduces coordination problems that human-to-human interaction solves through social norms, reputation, and shared context.

When two humans negotiate, they draw on cultural expectations about fairness, recognize each other’s social signals, and adjust based on relationship history. They have theory of mind—mental models of what the other person wants and how they’ll react.

Agents have none of this. They optimize for their programmed objectives, potentially exploiting any advantage the interaction structure allows. Without careful design, agent-to-agent interactions could devolve into adversarial dynamics—each agent trying to extract maximum value from the exchange, with no social fabric to moderate the competition.

Consider two agents negotiating a price. One represents a buyer seeking the lowest price; the other represents a seller seeking the highest. Without constraints, they might deadlock, or one might exploit information asymmetries, or they might engage in rapid-fire bargaining that destabilizes markets. The dynamics are genuinely uncertain.
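
To make the deadlock case concrete, here is a minimal, assumed alternating-concessions sketch: each agent gives ground by a fixed step per round but never crosses its private limit, so when the buyer's ceiling sits below the seller's floor, no deal is possible. The function name, step size, and midpoint settlement rule are illustrative choices, not a description of how production trading agents behave.

```python
def haggle(buyer_limit: float, seller_limit: float,
           buyer_open: float, seller_open: float,
           step: float = 5.0, max_rounds: int = 50) -> float | None:
    """Alternating concessions between a buyer agent and a seller agent.

    Each side concedes `step` per round but never crosses its private limit.
    Returns the agreed price, or None if the agents deadlock.
    """
    bid, ask = buyer_open, seller_open
    for _ in range(max_rounds):
        if bid >= ask:                       # offers have crossed: settle at the midpoint
            return round((bid + ask) / 2, 2)
        bid = min(bid + step, buyer_limit)   # buyer concedes upward, capped at its ceiling
        ask = max(ask - step, seller_limit)  # seller concedes downward, capped at its floor
        if bid == buyer_limit and ask == seller_limit and bid < ask:
            return None                      # limits don't overlap: deadlock
    return None


print(haggle(buyer_limit=100, seller_limit=80, buyer_open=60, seller_open=140))  # -> 100.0 (deal)
print(haggle(buyer_limit=70, seller_limit=90, buyer_open=50, seller_open=120))   # -> None (deadlock)
```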

Emerging protocols

Recognizing these challenges, the industry is developing protocols specifically for agent-to-agent communication.

Google has introduced Agent2Agent (A2A), a protocol designed to enable agents built on different frameworks to communicate and collaborate. A2A focuses on interoperability—ensuring that an agent built with one technology stack can productively interact with an agent built on another.

The Model Context Protocol (MCP) is evolving in this direction too. While initially focused on connecting agents to tools and data, the protocol’s designers envision agents using MCP to interact with each other—each agent exposing capabilities that other agents can invoke.

These protocols address the technical layer: how do agents exchange messages, invoke each other’s capabilities, and coordinate actions? But they don’t fully solve the strategic layer: how do agents negotiate fairly, build trust, and avoid adversarial dynamics?
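
As a rough illustration of that technical layer, the sketch below has a toy agent expose named capabilities and answer JSON-RPC-style requests from another agent. The ToyAgent class, the capability names, and the message envelope are hypothetical; they are not the actual A2A or MCP wire formats, which each define their own schemas.

```python
import json


class ToyAgent:
    """A hypothetical agent exposing capabilities other agents can invoke by name."""

    def __init__(self) -> None:
        self.capabilities = {
            "get_availability": lambda params: ["Tue 10:00", "Wed 14:00"],
            "confirm_slot": lambda params: {"confirmed": params["slot"]},
        }

    def handle(self, raw_request: str) -> str:
        """Parse a JSON-RPC-style request, invoke the capability, return a response."""
        request = json.loads(raw_request)
        method = request.get("method")
        if method not in self.capabilities:
            return json.dumps({"id": request.get("id"), "error": "unknown capability"})
        result = self.capabilities[method](request.get("params", {}))
        return json.dumps({"id": request.get("id"), "result": result})


agent = ToyAgent()
print(agent.handle(json.dumps({"id": 1, "method": "get_availability", "params": {}})))
print(agent.handle(json.dumps({"id": 2, "method": "confirm_slot",
                               "params": {"slot": "Wed 14:00"}})))
```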

Trust between machines

Human trust is built through repeated interaction, reputation, and social enforcement. I trust you because you’ve been reliable before, because others vouch for you, and because cheating me would damage your reputation.

Agent trust needs different mechanisms. Cryptographic verification can prove an agent is what it claims to be. Audit trails can document past behavior. Smart contracts can encode agreements that execute automatically, removing the need for trust in compliance.
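
As a sketch of the first of those mechanisms, the snippet below signs and verifies an offer with an Ed25519 keypair via the widely used Python cryptography package. The offer payload and the key handling are illustrative assumptions; real agent identity frameworks layer credential issuance, key distribution, and revocation on top of primitives like this.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The seller's agent holds a private key; in practice the matching public key
# would be published through some identity or registry mechanism.
seller_key = Ed25519PrivateKey.generate()
seller_public = seller_key.public_key()

# The seller signs an offer so the buyer's agent can check who actually sent it.
offer = b'{"item": "sku-123", "price": 90.0}'
signature = seller_key.sign(offer)

# The buyer's agent verifies the signature before acting on the offer.
try:
    seller_public.verify(signature, offer)
    print("offer authenticated")
except InvalidSignature:
    print("rejecting offer: signature check failed")
```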

We’re seeing early versions of this infrastructure. Agent identity frameworks like Incode’s Agentic Identity and Amazon’s AgentCore Identity provide verifiable credentials for agents. Blockchain-based approaches encode agent agreements as self-executing contracts. Reputation systems that track agent behavior across interactions are being prototyped.

But these mechanisms are immature. The infrastructure for trusted agent-to-agent interaction is being built in real time, with significant gaps and uncertainties.

Economic implications

Agent-to-agent transactions could dramatically increase market efficiency. When buyers and sellers are both represented by agents, transactions can happen faster, with better price discovery and lower friction. Markets that currently require human brokers or intermediaries might disintermediate entirely.

But efficiency isn’t the only outcome. Agent-to-agent dynamics might also produce instability. High-frequency trading offers a cautionary parallel: when algorithms trade with algorithms at machine speed, markets can flash-crash faster than humans can intervene. Similar dynamics could emerge in other agent-to-agent markets.

There’s also the question of collusion. If your agent and my agent are both optimizing for similar objectives, they might find mutually beneficial arrangements that disadvantage others—implicit coordination without explicit conspiracy. Detecting and preventing such dynamics is an unsolved problem.

The human role evolves

As agents handle more interactions, the human role shifts from participant to governor. You’re not negotiating deals yourself; you’re setting the parameters within which your agent negotiates. You’re not managing each transaction; you’re designing the policies that guide agent behavior.

This requires a different skill set. Understanding negotiation tactics matters less than understanding how to specify objectives and constraints clearly. Managing relationships matters less than designing systems that manage relationships on your behalf.

It also raises accountability questions. When your agent makes a deal you didn’t anticipate, who’s responsible? The current legal framework assumes human decision-makers. Agent-to-agent commerce may require new frameworks for assigning liability when autonomous systems interact.

Preparing for multi-agent futures

For organizations thinking ahead:

Design for agent interoperability. If you’re building agents, build them to communicate with other agents—not just with humans and services. Implement emerging standards like A2A alongside MCP.

Think about agent policies. What rules should govern your agents’ interactions with other agents? What are they authorized to agree to? What limits should constrain their negotiations? These policies need explicit design (a minimal sketch follows this list).

Consider adversarial dynamics. Assume other agents will optimize against yours. Design for robustness—agents that perform well even when counterparties aren’t cooperative.

Maintain human oversight. Even as agent-to-agent interactions increase, preserve the ability for humans to review, override, and adjust. The goal is delegation, not abdication.

Watch the regulatory landscape. Agent-to-agent commerce raises novel legal questions that regulators will eventually address. Staying informed about emerging frameworks helps you adapt before compliance becomes mandatory.
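
As a minimal sketch of the agent-policy and human-oversight points above, the snippet below encodes human-set limits as data and checks every proposed agreement against them, escalating to a person above a threshold. The field names, the Proposal shape, and the thresholds are assumptions for illustration, not any standard.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class NegotiationPolicy:
    """Hard limits a human sets once; the agent checks every deal against them."""
    max_spend: float
    allowed_counterparties: frozenset[str]
    human_approval_above: float


@dataclass(frozen=True)
class Proposal:
    counterparty: str
    price: float


def evaluate(policy: NegotiationPolicy, proposal: Proposal) -> str:
    """Return 'accept', 'escalate', or a rejection reason for a proposed deal."""
    if proposal.counterparty not in policy.allowed_counterparties:
        return "reject: counterparty not on allow-list"
    if proposal.price > policy.max_spend:
        return "reject: exceeds spending limit"
    if proposal.price > policy.human_approval_above:
        return "escalate: needs human approval"
    return "accept"


policy = NegotiationPolicy(
    max_spend=1_000.0,
    allowed_counterparties=frozenset({"vendor-a", "vendor-b"}),
    human_approval_above=250.0,
)
print(evaluate(policy, Proposal("vendor-a", 120.0)))  # -> accept
print(evaluate(policy, Proposal("vendor-a", 600.0)))  # -> escalate: needs human approval
print(evaluate(policy, Proposal("vendor-x", 50.0)))   # -> reject: counterparty not on allow-list
```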

The agent-to-agent future isn’t fully here, but its foundations are being laid. The organizations that understand multi-agent dynamics—and design for them—will operate more effectively in the economy that’s emerging.
