This blog post is based on a podcast episode with MCP creators David Soria Parra and Justin Spahr-Summers:
https://open.spotify.com/episode/1qYgg8spVvBtHJKMyuvVOu
TL;DR
Born from two Anthropic engineers’ frustration at copy‑pasting code between Claude Desktop and their IDE, MCP has snowballed into the de‑facto plumbing layer for AI tools. It defines a tiny, JSON‑RPC–based client–server protocol plus three high‑level primitives—tools, resources and prompts—that let any LLM talk to any data source or action surface. Since its first public spec on 25 Nov 2024, the protocol has added a scalable Streamable HTTP transport, gained SDKs in six languages, and been adopted by Cursor, Zed, Replit, Codeium, Sourcegraph—and even by OpenAI. With an ecosystem of hundreds of open‑source servers (from GitHub and Postgres connectors to “memory” and “sequential‑thinking” agents) MCP is rapidly becoming the connective tissue of AI engineering. Yet questions around authentication, supply‑chain trust and governance remain open as the community debates next steps.
Getting Started with the Model Context Protocol (MCP)
The Model Context Protocol (MCP) is transforming how AI models interact with external data sources and tools. Launched by Anthropic in November 2024, MCP has quickly gained adoption across the AI ecosystem as the standard way for language models to access context beyond their training data.
What is MCP?
MCP is an open protocol that standardizes how applications provide context to Large Language Models (LLMs). Think of MCP like USB-C for AI applications—just as USB-C provides a standardized way to connect devices to various peripherals, MCP offers a standardized interface for AI models to connect with different data sources and tools.
Justin Spahr-Summers, MCP co-creator at Anthropic, explained during a podcast interview: "Model Context Protocol, or MCP for short, is basically something we've designed to help AI applications extend themselves or integrate with an ecosystem of plugins."
David Soria Parra, the other co-creator, emphasized: "The interesting bit here that I want to highlight is it's AI applications and not models themselves that this is focused on. That's a common misconception."
Why Use MCP?
MCP solves what developers call the "M×N problem"—where M different AI applications need to connect to N different data sources or tools, potentially requiring M×N different integration efforts.
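To make the arithmetic concrete, take illustrative numbers (not from the spec): with 5 AI applications and 8 data sources, point-to-point integration needs 5 × 8 = 40 bespoke adapters, while a shared protocol needs only 5 + 8 = 13 implementations:

```python
# Illustrative numbers for the M×N problem; M apps, N data sources.
M, N = 5, 8

# Without a shared protocol: every app needs a bespoke adapter per source.
pairwise_integrations = M * N   # 40

# With MCP: each app implements one client, each source one server.
mcp_implementations = M + N     # 13

print(pairwise_integrations, mcp_implementations)
```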
Key benefits include:
- Standardized Integration: Once an application supports MCP, it can connect to any MCP server without additional work
- LLM-Vendor Flexibility: Easily switch between different AI models and vendors
- Data Security: Keep your data within your infrastructure while interacting with AI
- Growing Ecosystem: Access a wide range of pre-built integrations for your LLM
How MCP Works
MCP follows a client-server architecture:
- MCP Hosts/Clients: Applications like Claude Desktop, Cursor, or Windsurf that connect to data sources via MCP (the host application embeds an MCP client for each server connection)
- MCP Servers: Lightweight programs that expose specific capabilities (like GitHub access, database queries, file system operations)
- MCP Protocol: The standardized communication layer between clients and servers
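Under the hood, clients and servers exchange JSON-RPC 2.0 messages. A minimal sketch of the `initialize` request a client sends when connecting (the field values here are illustrative; consult the spec for the exact schema):

```python
import json

# Illustrative initialize request; the exact fields are defined by the MCP spec.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# JSON-RPC messages travel over a transport (stdio or Streamable HTTP).
wire_message = json.dumps(initialize_request)
print(wire_message)
```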
The protocol defines several primitives that enable rich interactions:
- Tools: Functions that LLMs can invoke to perform actions or retrieve information
- Resources: Structured data or content that provides additional context
- Prompts: Pre-defined templates or instructions for specific interactions
Core Concepts
| Primitive | Who controls it? | What it represents | Typical UI affordance | Example |
|---|---|---|---|---|
| Tools | Model-chosen | Executable actions | Auto-invoked function calls | `create_branch`, `run_sql` |
| Resources | App- or user-chosen | Read-only context blobs | Attachment picker, sidebar | Database schema, PDF, email thread |
| Prompts | User-chosen | Template messages | Slash/@ commands | `/summarise selection` |
Popular MCP Servers
Many pre-built MCP servers are available for common use cases:
- File System: Access local files and directories
- GitHub: Interact with repositories, PRs, and issues
- Database (PostgreSQL, SQLite): Query databases directly
- Google Drive: Access documents stored in Drive
- Docker: Manage containers through MCP
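Pre-built servers are typically wired into a host through a config file. For example, Claude Desktop reads an `mcpServers` map from its `claude_desktop_config.json`; the server names and package identifiers below follow the official `modelcontextprotocol/servers` repo, but check each server's README for its current arguments:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    }
  }
}
```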
MCP in Action
The true power of MCP becomes evident when combining multiple servers. For example, a PR review workflow might:
- Use a GitHub MCP server to fetch PR details
- Analyze code changes with Claude
- Save the review to Notion using a Notion MCP server
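The workflow above can be sketched as plain orchestration code. The server and tool names below are placeholders standing in for real MCP client calls, and the LLM analysis step is faked:

```python
# Placeholder stub standing in for a real MCP client's tools/call request.
def call_tool(server, tool, **kwargs):
    # In a real host this would send a JSON-RPC request to the named
    # MCP server; here we simulate results for illustration.
    fake_results = {
        ("github", "get_pull_request"): {"title": "Fix login bug", "diff": "- old\n+ new"},
        ("notion", "create_page"): {"status": "created"},
    }
    return fake_results[(server, tool)]

def review_pr(pr_number):
    # 1. Fetch PR details from a GitHub MCP server.
    pr = call_tool("github", "get_pull_request", number=pr_number)
    # 2. Analyze the changes (an LLM call would go here).
    review = f"Review of '{pr['title']}': changes look reasonable."
    # 3. Save the review via a Notion MCP server.
    saved = call_tool("notion", "create_page", content=review)
    return saved["status"]

print(review_pr(42))
```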
Users are finding creative applications beyond basic integrations. As one developer mentioned in the Anthropic podcast: "The coolest [application] was an MCP server that can control a 3D printer. People are feeling this power of Claude connecting to the outside world in a really tangible way."
The Future of MCP
MCP continues to evolve with new features and improvements:
- MCP Registry: A centralized discovery system for MCP servers
- Remote Servers: First-class support for cloud-hosted MCP servers over the Streamable HTTP transport
- OAuth Integration: Standardized authentication mechanisms
- Improved Composability: Making servers more modular and combinable
The protocol maintains backward compatibility while adding new capabilities through a capability negotiation system.
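Capability negotiation works by each side advertising what it supports during initialization, and a client simply ignoring features the server did not declare. A toy illustration (the field names are modeled on the spec's capabilities object, but treat this as a sketch, not the real schema):

```python
# Illustrative capabilities a server might declare during initialization.
server_capabilities = {"tools": {}, "resources": {"subscribe": True}}

def supports(capabilities, feature):
    """A client only uses features the other side declared."""
    return feature in capabilities

# New features can be added without breaking old clients: an older
# server just never declares them, and the client falls back gracefully.
assert supports(server_capabilities, "tools")
assert not supports(server_capabilities, "prompts")
print("negotiated features:", sorted(server_capabilities))
```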
Design trade‑offs & open questions
| Issue | Current state | Ongoing proposals |
|---|---|---|
| Security / trust | Any binary can call itself an MCP server; no central signing | Registry-level reputation and manifest checks; per-server permission prompts |
| Governance | Spec lives in a GitHub org with multi-company maintainers (Microsoft, JetBrains, Shopify), but Anthropic steers the roadmap | Possible neutral foundation after the ecosystem stabilises |
| Performance limits | Claude handles "hundreds" of tools per session before confusion rises, depending on name overlap (per the creators on the Latent Space podcast) | Hierarchical proxy servers; selective tool filtering by the client or a fast LLM pre-selector |
| Interop with OpenAPI | Bridges exist that auto-wrap OpenAPI specs as MCP servers | Spec team discourages conflating low-level REST shapes with AI-oriented primitives |
Resources
- Official Documentation: modelcontextprotocol.io
- GitHub Repositories: https://github.com/modelcontextprotocol/servers
- Example Project (Quickstart): https://modelcontextprotocol.io/quickstart/server
As David Soria Parra said, "I think the best part is just like, pick the language of your choice, pick the SDK for it... and just go build a tool of the thing that matters to you personally."
MCP is streamlining how AI applications interact with the world, making powerful integrations accessible to developers of all levels. Whether you're building a complex agent system or simply want to enhance your AI interactions with external data, MCP provides a standardized path forward.