Can we look into MCP for real?
A deep dive under the hood of MCP, the most discussed protocol of 2025
What is Model Context Protocol (MCP)?
MCP is the hottest thing this year in the AI sphere. I keep seeing badly written articles about it, so I decided to try to put together some meaningful information.
I did what every sensible person would do at this point: I started asking questions to Claude. From those questions I created this article, to bring some clarity to the matter.
Model Context Protocol (MCP) is an open standard protocol introduced by Anthropic in November 2024 that standardizes how AI assistants communicate with external tools and data sources. Think of it as a universal adapter, like USB-C for hardware, that allows any AI application to connect with any tool or data source without custom integration code.
Before MCP, connecting an AI to 10 different tools (GitHub, Slack, databases, APIs) meant writing 10 different integrations. And if you wanted 5 different AI applications to use those same 10 tools, you’d need 50 separate integrations. MCP solves this N×M problem by providing one standardized way for AIs to talk to tools.
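The arithmetic behind that claim is worth making concrete. A quick sketch (the numbers are the article's example; the function names are mine):

```javascript
// Without MCP: every AI application needs a custom integration per tool.
function integrationsWithout(aiApps, tools) {
  return aiApps * tools; // the N×M problem
}

// With MCP: each AI app implements one client, each tool one server.
function integrationsWith(aiApps, tools) {
  return aiApps + tools; // N + M implementations, reused everywhere
}

console.log(integrationsWithout(5, 10)); // 50 separate integrations
console.log(integrationsWith(5, 10));    // 15 implementations
```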
What Problem Does MCP Solve?
The AI integration landscape before MCP looked like this:
The Fragmentation Problem:
Every AI application needed custom code to integrate with each tool
Developers wrote the same integration logic repeatedly for different AI platforms
Tools had to provide separate SDKs and APIs for each AI application
No standard way to describe what a tool could do or how to use it
Switching tools or AI providers meant rewriting everything
Real World Impact:
A company using Claude, ChatGPT, and a custom AI agent would need three separate integrations for each tool (like GitHub, Slack, or their database)
Adding a new data source required updating every AI application individually
Tool developers couldn’t build once and support all AI platforms
Security, permissions, and error handling were inconsistent across integrations
MCP eliminates this by creating a single protocol that both AI applications and tools can implement once and use everywhere.
How MCP Solves the Problem
Think of the system like a puppet master controlling multiple puppets. Here’s how the strings connect:
The Puppets (Third-Party Tools):
Each puppet is a tool like GitHub, Slack, PostgreSQL, or Google Drive
The puppet needs attachment points where strings can connect
These attachment points are the MCP Server implementation
The Puppet Master (AI + MCP):
The puppet master is the combination of the AI model and its MCP Client
The strings are the JSON-RPC messages flowing over a transport layer
The puppet master needs to know what strings connect to which parts of each puppet
The Strings (The Protocol): This is where we get into the HOW. The strings aren’t magical, they’re made of three distinct layers:
Transport Layer (The Physical String)
How the messages physically travel
Local puppets: STDIO (standard input/output streams)
Remote puppets: HTTP with Server-Sent Events (SSE)
Message Format Layer (What’s Written on the String)
All messages use JSON-RPC 2.0 format
Provides structure for requests, responses, and notifications
Ensures every message can be understood by both sides
Protocol Layer (The Commands on the String)
Defines specific MCP methods like tools/list, tools/call, and resources/read
Describes capabilities, permissions, and data formats
Handles initialization, discovery, and execution
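All three layers come together in one envelope: every MCP method travels as a JSON-RPC 2.0 message. A minimal sketch of building that envelope (the helper names are mine, not from any SDK):

```javascript
// Build a JSON-RPC 2.0 request for an MCP method such as "tools/list".
function jsonRpcRequest(id, method, params) {
  const msg = { jsonrpc: "2.0", id, method };
  if (params !== undefined) msg.params = params;
  return JSON.stringify(msg);
}

// Notifications carry no id and expect no response.
function jsonRpcNotification(method, params) {
  const msg = { jsonrpc: "2.0", method };
  if (params !== undefined) msg.params = params;
  return JSON.stringify(msg);
}

console.log(jsonRpcRequest(2, "tools/list"));
// {"jsonrpc":"2.0","id":2,"method":"tools/list"}
```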
The Technical Deep Dive: How the Strings Actually Work
Now let’s explain exactly how the puppet master pulls the strings.
Phase 1: Attaching the Strings (Initialization)
When you want to use an MCP server, the AI application (MCP Client) first needs to establish a connection. Here’s the exact sequence:
Step 1: Transport Connection
For a local tool using STDIO:
AI Application spawns the MCP server as a child process
Server process: node github_mcp_server.js
Communication channels: stdin/stdout pipes connect AI ↔ Server
For a remote tool using HTTP:
AI Application connects to: https://api.example.com/mcp
Opens persistent connection for Server-Sent Events
Communication channels: HTTP POST for requests, SSE stream for responses
Step 2: The Handshake (Initialize Request)
The AI sends the first message over the connected transport:
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {
      "roots": { "listChanged": true },
      "sampling": {}
    },
    "clientInfo": {
      "name": "Claude Desktop",
      "version": "1.0.0"
    }
  }
}
This message says: “Hello, I’m Claude Desktop version 1.0.0, I speak MCP protocol version 2024-11-05, and here’s what I can do (my capabilities).”
Step 3: The Response (Initialize Response)
The MCP Server responds:
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2024-11-05",
    "capabilities": {
      "tools": {},
      "resources": {},
      "prompts": {}
    },
    "serverInfo": {
      "name": "GitHub MCP Server",
      "version": "1.2.0"
    }
  }
}
This says: “Hello, I’m the GitHub MCP Server version 1.2.0, I also speak protocol version 2024-11-05, and I can provide tools, resources, and prompts.”
Step 4: Initialization Complete
The AI sends a notification (no response needed):
{
  "jsonrpc": "2.0",
  "method": "notifications/initialized"
}
The strings are now attached. The connection is live.
Phase 2: Discovering the Puppet’s Joints (Capability Discovery)
This is one of the most powerful aspects of MCP. Instead of hardcoding what each tool can do, the AI dynamically discovers capabilities at runtime. This means new tools can be added without changing the AI’s code, and tools can update their capabilities without breaking existing integrations.
Now the puppet master needs to learn what this puppet can do. This is where the magic starts to become clear.
Step 1: Asking “What Tools Do You Have?”
The AI sends:
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/list"
}
Step 2: The Server Lists Its Tools
The GitHub MCP Server responds:
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "tools": [
      {
        "name": "create_issue",
        "description": "Creates a new GitHub issue in a repository",
        "inputSchema": {
          "type": "object",
          "properties": {
            "repository": {
              "type": "string",
              "description": "Repository name in format 'owner/repo'"
            },
            "title": {
              "type": "string",
              "description": "Issue title"
            },
            "body": {
              "type": "string",
              "description": "Issue description"
            }
          },
          "required": ["repository", "title"]
        }
      },
      {
        "name": "list_issues",
        "description": "Lists issues from a GitHub repository",
        "inputSchema": {
          "type": "object",
          "properties": {
            "repository": { "type": "string" },
            "state": {
              "type": "string",
              "enum": ["open", "closed", "all"]
            }
          },
          "required": ["repository"]
        }
      }
    ]
  }
}
This response tells the AI: “I have two tools. The first one is called create_issue and it needs a repository name and title (and optionally a body). The second one is list_issues and it needs a repository and an optional state.”
The AI now knows exactly what strings it can pull and what information each string pull requires.
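On the way back in, the server checks every string pull against these same schemas. A hand-rolled sketch of that validation (real servers typically use a full JSON Schema validator; this only covers required fields, primitive types, and enums):

```javascript
// Minimal argument validation against a tool's inputSchema.
// Covers required fields, primitive types, and enums only.
function validateArguments(schema, args) {
  const errors = [];
  for (const field of schema.required || []) {
    if (!(field in args)) errors.push(`missing required field: ${field}`);
  }
  for (const [key, value] of Object.entries(args)) {
    const prop = (schema.properties || {})[key];
    if (!prop) continue;
    if (prop.type && typeof value !== prop.type) {
      errors.push(`field ${key} should be of type ${prop.type}`);
    }
    if (prop.enum && !prop.enum.includes(value)) {
      errors.push(`field ${key} must be one of: ${prop.enum.join(", ")}`);
    }
  }
  return errors;
}

// The list_issues schema from the server's tools/list response
const listIssuesSchema = {
  type: "object",
  properties: {
    repository: { type: "string" },
    state: { type: "string", enum: ["open", "closed", "all"] },
  },
  required: ["repository"],
};

console.log(validateArguments(listIssuesSchema, { repository: "owner/repo", state: "open" })); // []
```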
Step 3: Discovering Resources
Similarly, the AI can ask about available data:
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "resources/list"
}
Response:
{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "resources": [
      {
        "uri": "github://repo/user/myproject",
        "name": "My Project Repository",
        "description": "Main project repository information",
        "mimeType": "application/json"
      }
    ]
  }
}
Phase 3: Pulling the Strings (Executing Actions)
This is where the puppet master actually makes the puppet move. Let’s trace a complete action from user request to execution.
User Says: “Create a GitHub issue titled ‘Fix login bug’ in the auth-service repository”
Step 1: AI Decides to Use a Tool
The AI model (Claude) processes the request and determines it needs to use the create_issue tool. It constructs the appropriate parameters and sends:
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "create_issue",
    "arguments": {
      "repository": "mycompany/auth-service",
      "title": "Fix login bug",
      "body": "Users are experiencing login failures. Need to investigate authentication flow."
    }
  }
}
This message travels over the transport (STDIO or HTTP) as a JSON string.
Step 2: MCP Server Receives and Routes
The GitHub MCP Server:
Receives the JSON-RPC message on its transport layer
Parses the JSON and validates the structure
Checks that the requested tool (create_issue) exists
Validates the arguments against the tool’s input schema
Routes to the actual tool implementation code
Step 3: MCP Server Translates to Third-Party API
Here’s the crucial part that’s often glossed over. The MCP Server now translates this standardized MCP request into the specific API call the third-party service expects:
// Inside the GitHub MCP Server implementation
async function handleCreateIssue(args) {
  // Translate MCP request to GitHub API format
  const githubApiRequest = {
    title: args.title,
    body: args.body,
    labels: []
  };

  // Make actual HTTP call to GitHub's REST API
  const response = await fetch(
    `https://api.github.com/repos/${args.repository}/issues`,
    {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${process.env.GITHUB_TOKEN}`,
        "Content-Type": "application/json",
        "Accept": "application/vnd.github.v3+json"
      },
      body: JSON.stringify(githubApiRequest)
    }
  );
  const issue = await response.json();

  // Translate GitHub response back to MCP format
  return {
    issueNumber: issue.number,
    url: issue.html_url,
    state: issue.state
  };
}
This is THE critical translation layer. The MCP Server is a wrapper that:
Takes standardized MCP requests
Translates them to service-specific API calls
Handles authentication (using stored tokens/credentials)
Makes the actual API call to GitHub
Translates the response back to MCP format
Step 4: MCP Server Responds
After GitHub’s API returns success, the MCP Server sends back:
{
  "jsonrpc": "2.0",
  "id": 4,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Successfully created issue #127 in mycompany/auth-service"
      }
    ],
    "isError": false
  }
}
Step 5: AI Incorporates Result
The AI receives this response and uses it to formulate the final answer to the user:
“I’ve created issue #127 in the auth-service repository with the title ‘Fix login bug’. You can view it at: https://github.com/mycompany/auth-service/issues/127”
The Complete Data Flow Diagram
User Request
↓
[AI Model - Claude]
↓ (decides which tool to call)
[MCP Client in Host App]
↓ (formats JSON-RPC message)
[Transport Layer - STDIO or HTTP]
↓ (sends JSON string)
[MCP Server - GitHub Wrapper]
↓ (receives & parses JSON-RPC)
↓ (validates request)
↓ (translates to GitHub API format)
[GitHub REST API]
↓ (processes request)
↓ (returns GitHub response)
[MCP Server]
↓ (translates response to MCP format)
↓ (formats JSON-RPC response)
[Transport Layer]
↓ (sends JSON string back)
[MCP Client]
↓ (parses response)
[AI Model]
↓ (incorporates result into context)
User Response
A Complete Working Example: Weather Tool with Claude
Let’s build a simple but complete MCP server and show exactly how Claude interfaces with it.
Building the MCP Server
1. Server Implementation (weather_server.js)
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

// Create the MCP server instance
const server = new Server(
  {
    name: "weather-server",
    version: "1.0.0",
  },
  {
    capabilities: {
      tools: {},
    },
  }
);

// Define the weather tool (the TypeScript SDK registers handlers
// by request schema rather than by raw method-name string)
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: "get_weather",
        description: "Gets current weather for a city",
        inputSchema: {
          type: "object",
          properties: {
            city: {
              type: "string",
              description: "City name",
            },
            units: {
              type: "string",
              enum: ["celsius", "fahrenheit"],
              description: "Temperature units",
            },
          },
          required: ["city"],
        },
      },
    ],
  };
});

// Handle tool execution
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "get_weather") {
    const { city, units = "celsius" } = request.params.arguments;

    // Simulate a weather API call
    // In a real implementation, this would call OpenWeatherMap, etc.
    const weatherData = await fetchWeatherFromAPI(city);
    const temp = units === "celsius" ? weatherData.temp_c : weatherData.temp_f;

    return {
      content: [
        {
          type: "text",
          text: `Weather in ${city}: ${temp}°${units === "celsius" ? "C" : "F"}, ${weatherData.condition}`,
        },
      ],
    };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

// Simulated API call
async function fetchWeatherFromAPI(city) {
  // In a real implementation, this makes an HTTP request to a weather service
  // For the demo, we return mock data
  return {
    temp_c: 22,
    temp_f: 72,
    condition: "Partly cloudy",
  };
}

// Start the server with STDIO transport
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("Weather MCP server running on stdio");
}

main().catch(console.error);
2. Connecting to Claude Desktop
Edit Claude Desktop configuration file:
On macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
On Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "weather": {
      "command": "node",
      "args": ["/path/to/weather_server.js"]
    }
  }
}
How Claude Interfaces with the MCP Server
Step-by-Step Execution:
1. Claude Desktop Startup
When Claude Desktop launches:
// Inside Claude Desktop's MCP Client code (illustrative)
const weatherClient = new MCPClient();
const transport = new StdioClientTransport({
  command: "node",
  args: ["/path/to/weather_server.js"]
});

// Spawns: node /path/to/weather_server.js
// Creates stdin/stdout pipes for communication
await weatherClient.connect(transport);
2. Initialization Handshake
Claude sends over stdin:
{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"Claude Desktop","version":"1.0.0"}}}
Server responds over stdout:
{"jsonrpc":"2.0","id":1,"result":{"protocolVersion":"2024-11-05","capabilities":{"tools":{}},"serverInfo":{"name":"weather-server","version":"1.0.0"}}}
3. Tool Discovery
Claude sends:
{"jsonrpc":"2.0","id":2,"method":"tools/list"}
Server responds:
{"jsonrpc":"2.0","id":2,"result":{"tools":[{"name":"get_weather","description":"Gets current weather for a city","inputSchema":{"type":"object","properties":{"city":{"type":"string","description":"City name"},"units":{"type":"string","enum":["celsius","fahrenheit"],"description":"Temperature units"}},"required":["city"]}}]}}
Claude now knows: “I have access to a tool called get_weather that needs a city and optional units.”
4. User Interaction
User types: “What’s the weather in Paris?”
Claude’s internal processing:
Analyzes the user query
Recognizes this requires weather information
Checks available tools
Finds that the get_weather tool matches the need
Extracts parameters: city = "Paris"
Decides to call the tool
5. Tool Execution
Claude sends over stdin:
{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"get_weather","arguments":{"city":"Paris","units":"celsius"}}}
This JSON string travels through the stdin pipe to the server process.
6. Server Processing
The weather_server.js:
Receives the JSON string from stdin
Parses it: JSON.parse(message)
Recognizes method: "tools/call"
Routes to the tools/call handler
Extracts the tool name: get_weather
Validates arguments against the schema
Executes the tool function
Calls fetchWeatherFromAPI("Paris")
Formats the response
7. Server Response
Server sends over stdout:
{"jsonrpc":"2.0","id":3,"result":{"content":[{"type":"text","text":"Weather in Paris: 22°C, Partly cloudy"}]}}
8. Claude’s Final Response
Claude receives the response, incorporates it into its context, and responds to the user:
“The weather in Paris is currently 22°C and partly cloudy.”
How MCP Communicates with Third-Party Services
The critical piece that makes MCP work is that the MCP Server acts as a translation layer. Here’s the exact flow:
Claude (MCP Client)
↓
Sends standardized MCP request:
{method: "tools/call", params: {name: "get_weather", arguments: {city: "Paris"}}}
↓
MCP Server receives this
↓
MCP Server translates to third-party API format:
fetch('https://api.openweathermap.org/data/2.5/weather?q=Paris&appid=KEY')
↓
Third-party API (OpenWeatherMap) returns:
{temp: 295.15, weather: [{description: "partly cloudy"}]}
↓
MCP Server translates response back to MCP format:
{content: [{type: "text", text: "Weather in Paris: 22°C, Partly cloudy"}]}
↓
Claude receives standardized MCP response
The Power of This Design:
Claude never needs to know about OpenWeatherMap’s API
Doesn’t need their API key format
Doesn’t need to know their endpoints
Doesn’t need to parse their response format
The MCP Server handles all service-specific details
Stores the API key securely
Knows the exact endpoint URLs
Handles authentication headers
Translates data formats both ways
Manages rate limiting
Handles errors gracefully
Switching weather providers is easy
Update only the MCP Server code
Claude’s interface stays identical
Change from OpenWeatherMap to WeatherAPI
Claude still calls get_weather the same way
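A sketch of what that provider swap looks like inside the server. The payload shapes below are simplified stand-ins for the two providers, not their real response schemas; the point is that both normalize into the same tool result:

```javascript
// Each provider has its own response shape...
function parseOpenWeatherMap(json) {
  // OpenWeatherMap-style payload: temperature in Kelvin
  return {
    temp_c: Math.round(json.main.temp - 273.15),
    condition: json.weather[0].description,
  };
}

function parseWeatherApi(json) {
  // WeatherAPI-style payload: temperature already in Celsius
  return {
    temp_c: Math.round(json.current.temp_c),
    condition: json.current.condition.text,
  };
}

// ...but the MCP tool result is built from the normalized shape either way,
// so the get_weather contract the AI sees never changes.
function toToolResult(city, weather) {
  return {
    content: [
      { type: "text", text: `Weather in ${city}: ${weather.temp_c}°C, ${weather.condition}` },
    ],
  };
}
```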
The Transport Layer: How the Strings Actually Connect
STDIO (Local Integrations)
For tools running on the same machine as the AI:
How it works:
// Claude spawns a child process
const serverProcess = spawn("node", ["weather_server.js"]);

// Sets up communication pipes
serverProcess.stdin.write(jsonRpcRequest);     // Send request
serverProcess.stdout.on("data", (data) => {   // Receive response
  const response = JSON.parse(data);
  handleResponse(response);
});
Why STDIO?
Simple: uses standard input/output streams
Fast: no network overhead
Secure: no network exposure
Works perfectly for local tools
Data Flow:
[Claude Process] [Weather Server Process]
stdin ←→ stdout stdin ←→ stdout
↓ ↑
Writes JSON to server’s stdin |
↓ |
Server reads from its stdin |
↓ |
Server processes request |
↓ |
Server writes JSON to its stdout -------
↓
Claude reads from server’s stdout
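In practice the messages on those pipes are newline-delimited JSON, and a single chunk read from the pipe may carry a partial message or several at once. A sketch of the line buffering a client needs (assuming one JSON message per line):

```javascript
// Buffer stdout chunks and deliver one parsed JSON-RPC message per line.
function makeLineBuffer(onMessage) {
  let pending = "";
  return function feed(chunk) {
    pending += chunk;
    let newline;
    while ((newline = pending.indexOf("\n")) !== -1) {
      const line = pending.slice(0, newline).trim();
      pending = pending.slice(newline + 1);
      if (line) onMessage(JSON.parse(line));
    }
  };
}

const received = [];
const feed = makeLineBuffer((msg) => received.push(msg));
feed('{"jsonrpc":"2.0","id"');        // partial message: stays buffered
feed(':1,"method":"tools/list"}\n');  // newline completes it: delivered
```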
HTTP with Server-Sent Events (Remote Integrations)
For tools running on remote servers:
How it works:
Request Phase (HTTP POST):
// Claude makes HTTP POST request
const response = await fetch(’https://api.example.com/mcp’, {
method: ‘POST’,
headers: {
‘Content-Type’: ‘application/json’,
‘Mcp-Session-Id’: sessionId
},
body: JSON.stringify({
jsonrpc: “2.0”,
id: 3,
method: “tools/call”,
params: {name: “get_weather”, arguments: {city: “Paris”}}
})
});
Response Phase (Server-Sent Events):
// Server can stream responses back
res.setHeader("Content-Type", "text/event-stream");
res.write('data: {"jsonrpc":"2.0","id":3,"result":{...}}\n\n');
Why HTTP + SSE?
Works across networks
SSE allows server-initiated messages
Handles long-running operations
Supports progress updates
Can maintain session state
Data Flow:
[Claude - Browser/Desktop] [MCP Server - Cloud]
↓
HTTP POST request to /mcp
(contains JSON-RPC message)
↓
[Server receives, processes]
↓
HTTP Response or SSE stream
(contains JSON-RPC response)
↓
Claude processes response
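On the client side, the SSE stream is plain text made of "data:" lines. A minimal sketch of extracting JSON-RPC messages from such a chunk (real clients also handle event names, multi-line data, and reconnection):

```javascript
// Extract JSON payloads from the "data:" lines of an SSE chunk.
function parseSseChunk(chunk) {
  const messages = [];
  for (const line of chunk.split("\n")) {
    if (line.startsWith("data: ")) {
      messages.push(JSON.parse(line.slice("data: ".length)));
    }
  }
  return messages;
}

const stream = 'data: {"jsonrpc":"2.0","id":3,"result":{"ok":true}}\n\n';
console.log(parseSseChunk(stream)[0].id); // 3
```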
Session Management
For stateful interactions:
First Request:
Claude → Server: {"jsonrpc":"2.0","id":1,"method":"initialize",...}
Server → Claude: HTTP 200, Header: "Mcp-Session-Id: abc123"
Subsequent Requests:
Claude → Server: Header: "Mcp-Session-Id: abc123", {"jsonrpc":"2.0",...}
Server: Recognizes session, maintains context
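A sketch of what a server might keep behind that header. The structure here is illustrative; the protocol only defines the Mcp-Session-Id header itself, not how the server stores its context:

```javascript
// Server-side session table keyed by the Mcp-Session-Id value.
function makeSessionStore() {
  const sessions = new Map();
  let counter = 0;
  return {
    create() {
      const id = `session-${++counter}`;
      sessions.set(id, { createdAt: Date.now(), context: {} });
      return id;
    },
    get(id) {
      return sessions.get(id); // undefined for unknown sessions
    },
  };
}

const store = makeSessionStore();
const sessionId = store.create();
console.log(store.get(sessionId) !== undefined); // true
```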
Security and Authentication
An often overlooked aspect of “how the puppet master controls the puppet” is security. Here’s how it works:
Authentication Flow
For Remote Servers:
OAuth 2.1 Flow (most common):
User → Claude: “Connect to my Notion workspace”
↓
Claude → Notion MCP Server: Initiates OAuth
↓
Notion MCP Server → Notion: Redirects user to Notion login
↓
User authenticates with Notion directly
↓
Notion → MCP Server: Returns authorization code
↓
MCP Server → Notion: Exchanges code for access token
↓
MCP Server stores token securely
↓
Future tool calls include: Authorization: Bearer {access_token}
API Key Authentication:
// In MCP Server environment variables
process.env.GITHUB_TOKEN = "ghp_abc123xyz...";

// When making API calls
fetch("https://api.github.com/repos/...", {
  headers: {
    "Authorization": `Bearer ${process.env.GITHUB_TOKEN}`
  }
});
Critical Security Point:
Claude NEVER sees the API keys or access tokens
MCP Server handles all authentication
Claude sends standardized requests
MCP Server adds authentication before calling third-party API
Permission and Approval System
MCP includes a built-in approval mechanism:
// Server marks tool as requiring approval
{
  name: "delete_repository",
  description: "Permanently deletes a GitHub repository",
  dangerous: true,
  requiresApproval: true
}
// When Claude tries to call this:
Claude → Server: {method: "tools/call", params: {name: "delete_repository"}}
↓
Server → Claude: {requiresApproval: true, toolName: "delete_repository"}
↓
Claude → User: "The AI wants to delete a repository. Approve?"
↓
User clicks "Approve"
↓
Claude → Server: {method: "tools/call", approved: true, ...}
↓
Server executes the action
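A sketch of that gate as a host application might implement it. The requiresApproval flag mirrors the example above; the protocol leaves the actual approval UI and policy to the client:

```javascript
// Hold flagged tool calls until the user explicitly approves them.
function callToolWithApproval(tool, args, askUser, execute) {
  if (tool.requiresApproval) {
    const approved = askUser(`The AI wants to run "${tool.name}". Approve?`);
    if (!approved) {
      return {
        content: [{ type: "text", text: `User declined ${tool.name}` }],
        isError: true,
      };
    }
  }
  return execute(tool.name, args);
}

const tool = { name: "delete_repository", requiresApproval: true };
const result = callToolWithApproval(tool, {}, () => false, () => ({ isError: false }));
console.log(result.isError); // true: the user declined, nothing executed
```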
Error Handling: When Strings Break
Here’s how MCP handles errors at each layer:
Transport Layer Errors
// Connection fails
Server process crashes
↓
Claude detects broken pipe on stdin/stdout
↓
Claude shows user: "Weather server disconnected"
↓
Attempts reconnection or marks tool unavailable
Protocol Layer Errors
// Invalid JSON-RPC message
Request: {"method": "tools/call"} // Missing required 'id'
↓
Response: {
  "jsonrpc": "2.0",
  "id": null,
  "error": {
    "code": -32600,
    "message": "Invalid Request: missing id field"
  }
}
Tool Execution Errors
// Tool fails during execution
Request: {method: "tools/call", params: {name: "get_weather", arguments: {city: "InvalidCity"}}}
↓
Response: {
  "jsonrpc": "2.0",
  "id": 5,
  "result": {
    "content": [{
      "type": "text",
      "text": "Error: City 'InvalidCity' not found in weather database"
    }],
    "isError": true
  }
}
Claude then tells the user: “I couldn’t find weather data for that city. Please check the spelling.”
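The pattern behind that last case is worth spelling out: the JSON-RPC call itself succeeds, and the failure travels in-band as isError: true, where the model can read it. A sketch of how a server handler might wrap tool execution this way:

```javascript
// Wrap a tool handler so failures become in-band MCP error results
// instead of protocol-level errors.
function runTool(handler, args) {
  try {
    return { content: [{ type: "text", text: handler(args) }], isError: false };
  } catch (err) {
    return { content: [{ type: "text", text: `Error: ${err.message}` }], isError: true };
  }
}

// Hypothetical tool function for the weather example above
const lookupWeather = (args) => {
  if (args.city === "InvalidCity") {
    throw new Error(`City '${args.city}' not found in weather database`);
  }
  return `Weather in ${args.city}: 22°C`;
};

console.log(runTool(lookupWeather, { city: "InvalidCity" }).isError); // true
```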
Understanding the Connection
The “magic” of MCP isn’t magic at all. It’s a carefully designed protocol with three clear layers:
1. Transport Layer: The physical connection (STDIO pipes or HTTP)
2. Message Layer: JSON-RPC 2.0 format for structured communication
3. Protocol Layer: Standardized MCP methods and data formats
The MCP Server is the crucial translator that:
Speaks standardized MCP to the AI
Speaks service-specific APIs to the tool
Handles authentication and security
Translates data formats in both directions
This separation of concerns is why MCP succeeds where custom integrations failed. The AI doesn’t need to know about every tool’s quirks, and tools don’t need to understand AI-specific formats. The MCP Server bridges the gap with a standard protocol that both sides can implement once and reuse everywhere.
The puppet master (AI + MCP Client) pulls strings (sends JSON-RPC messages over transport), which connect to the puppet’s joints (MCP Server translates to tool APIs), making the puppet move (tool executes action and returns result). Each layer has a specific job, and together they create the seamless integration that makes AI agents powerful and practical.
Articles I enjoyed this week
How to Stay Relevant as an Engineering Leader While Empowering Others - Gregor Ojstersek, Djordje Mladenovic