If you’re experimenting with LLM-powered agents, you’ve likely felt the pain of brittle function-calling glue code and one-off webhooks. Two 2024-25 standards aim to end that pain:
| Layer | Purpose | Spec |
| --- | --- | --- |
| Model Context Protocol (MCP) | Structured tool + context ingestion for a single model/agent | JSON-RPC 2.0 client/server (Anthropic, OSS) |
| Agent-to-Agent (A2A) | Typed task + artifact exchange between multiple, heterogeneous agents | Agent Card + JSON messages via HTTP/SSE (Linux Foundation) |
Think of MCP as USB-C for LLMs—one plug for any data/tool. Think of A2A as TCP/IP for agents—a routable envelope so specialized agents can find each other and collaborate. Together they let you scale from a single “chat-with-tools” bot to an ecosystem of composable AI workers.
Primitives
Resources (documents, vectors), Tools (JSON-schema APIs), Prompts (templated instructions), and Sampling options are advertised by an MCP server; an MCP client (often an LLM wrapper) selects and invokes them using JSON-RPC 2.0 requests over HTTP or stdio.
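To make that concrete, here is a minimal sketch of the wire format, assuming a hypothetical MCP server exposed over plain HTTP. The `tools/list` and `tools/call` method names come from the MCP spec; the endpoint URL, tool name, and arguments are illustrative:

```python
import json
import urllib.request

MCP_ENDPOINT = "http://localhost:8080/mcp"  # hypothetical server URL

def rpc(method: str, params: dict, req_id: int) -> dict:
    """POST a JSON-RPC 2.0 request and return the parsed response."""
    payload = json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    }).encode()
    req = urllib.request.Request(
        MCP_ENDPOINT, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Discover what the server advertises, then invoke one tool.
tools = rpc("tools/list", {}, req_id=1)
result = rpc("tools/call",
             {"name": "quote_shipping",          # illustrative tool name
              "arguments": {"weight_kg": 2.5}},  # must match its JSON schema
             req_id=2)
```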
Lifecycle
Initialize → Operate → Shutdown phases ensure version negotiation, capability exchange, and clean disconnects.
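In wire terms, the Initialize phase is one request plus one notification. A sketch of both payloads, using one published protocol revision string and placeholder client details:

```python
# First request of the Initialize phase: the client proposes a protocol
# version and declares its capabilities; the server replies with its own.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",   # one published spec revision
        "capabilities": {"sampling": {}},  # what this client supports
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# Once the server responds, the client confirms and the Operate phase
# begins. Notifications carry no "id" field, so no reply is expected.
initialized_notification = {
    "jsonrpc": "2.0",
    "method": "notifications/initialized",
}
```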
Security
Threats like tool poisoning or credential leakage are countered with signed manifests, SBOMs, OAuth 2.1 + mTLS, and schema validation.
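Schema validation is the cheapest of those defenses to adopt. A minimal sketch using the `jsonschema` library and an illustrative tool schema; the other layers (manifest signing, OAuth 2.1, mTLS) sit outside the request handler:

```python
from jsonschema import validate, ValidationError  # pip install jsonschema

# Schema the server advertised for the (illustrative) quote_shipping tool.
QUOTE_SHIPPING_SCHEMA = {
    "type": "object",
    "properties": {
        "weight_kg": {"type": "number", "minimum": 0},
        "destination": {"type": "string", "maxLength": 64},
    },
    "required": ["weight_kg", "destination"],
    "additionalProperties": False,  # reject unexpected / injected fields
}

def safe_call(arguments: dict) -> None:
    """Gate every incoming tool call on its declared schema."""
    try:
        validate(instance=arguments, schema=QUOTE_SHIPPING_SCHEMA)
    except ValidationError as err:
        raise PermissionError(f"Rejected tool call: {err.message}")
    # ...only now hand the arguments to the real tool implementation.
```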
Actors & Artifacts
User → Client Agent ↔ Remote Agent. The Remote Agent publishes an Agent Card at /.well-known/agent.json announcing its skills and auth modes. Tasks flow as JSON objects; results return as Artifacts. SSE or push notifications stream progress.
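Here is a sketch of that round trip against a hypothetical remote agent. The `tasks/send` method and message shape follow the published A2A spec, but the URL, task ID, and payload are illustrative:

```python
import json
import urllib.request

# Discovery: the Remote Agent's card lives at a well-known path.
base = "https://agent.example.com"  # hypothetical remote agent
with urllib.request.urlopen(f"{base}/.well-known/agent.json") as resp:
    card = json.loads(resp.read())  # skills, auth modes, endpoint URL

# Task submission as a JSON-RPC 2.0 request.
task = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks/send",
    "params": {
        "id": "task-001",
        "message": {
            "role": "user",
            "parts": [{"type": "text",
                       "text": "Quote shipping for 2.5 kg to Oslo"}],
        },
    },
}
req = urllib.request.Request(
    card["url"], data=json.dumps(task).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())  # task status plus any Artifacts
```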
Lifecycle & Discovery
Create (publish card) → Operate (process tasks) → Update → Terminate; all messages are JSON-RPC 2.0 compliant.
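A minimal card published in the Create phase might look like this. The field names follow the A2A Agent Card schema; every value is a placeholder:

```python
# Served at /.well-known/agent.json; a hypothetical shipping agent.
AGENT_CARD = {
    "name": "shipping-agent",
    "description": "Quotes and books parcel shipping.",
    "url": "https://agent.example.com/a2a",  # JSON-RPC endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "skills": [
        {
            "id": "quote_shipping",
            "name": "Quote shipping",
            "description": "Returns a price quote for a parcel.",
        }
    ],
}
```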
Typical Skills
Format conversions, data lookups, or domain logic (e.g., quote_shipping, vectorize_artwork)—each with typed I/O so any language stack can consume them.
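For example, a hypothetical quote_shipping skill could pin its typed I/O like this. The JSON schema published in the Agent Card maps one-to-one onto these records, so a Go or TypeScript consumer can generate equivalent types from the same schema:

```python
from dataclasses import dataclass, asdict

# Typed I/O for the illustrative quote_shipping skill.
@dataclass
class QuoteRequest:
    weight_kg: float
    destination: str

@dataclass
class QuoteResponse:
    carrier: str
    price_usd: float

def quote_shipping(req: QuoteRequest) -> QuoteResponse:
    # Placeholder domain logic; a real agent would call a rate API.
    return QuoteResponse(carrier="ExampleExpress",
                         price_usd=4.0 + 1.5 * req.weight_kg)

print(asdict(quote_shipping(QuoteRequest(weight_kg=2.5, destination="Oslo"))))
```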