Key Takeaways
- MCP is an open protocol that standardizes how AI models connect to external tools and data sources
- MCP servers expose tools, resources, and prompts that Claude can use in conversations
- You can build an MCP server in Python or TypeScript in under 100 lines of code
- MCP enables Claude to read files, query databases, call APIs, and take actions in your systems
- Claude Desktop, Cursor, and Claude.ai support MCP out of the box
Anthropic's Model Context Protocol (MCP) solves a core problem in AI assistant development: how do you give a language model standardized access to external tools, data sources, and capabilities? Before MCP, every team built custom integrations from scratch. MCP defines a standard protocol so any compliant tool can work with any compliant AI host. This tutorial shows you what MCP is, how it works, and how to build your own MCP server.
What MCP Is and Why It Matters
MCP (Model Context Protocol) is an open standard from Anthropic that defines how AI models communicate with external systems — files, databases, APIs, and other tools. Think of it as USB for AI: before USB, every peripheral needed a custom connector. USB created a standard connector that any device could use. MCP does the same for AI tool integration.

Before MCP: every AI application team built bespoke integrations for every tool they wanted the AI to use. After MCP: tool authors build one MCP server, and any MCP-compatible AI host (Claude Desktop, Claude.ai, Cursor, etc.) can use it immediately.

The protocol defines three primitives: Tools (functions the AI can call), Resources (data the AI can read), and Prompts (reusable prompt templates).
MCP Architecture: Hosts, Clients, and Servers
MCP has three components:
- MCP Host: the AI application that uses Claude — Claude Desktop, Claude.ai, Cursor, or a custom app. The host manages connections to MCP servers.
- MCP Client: built into the host; communicates with servers via the protocol.
- MCP Server: a lightweight process (your code) that exposes tools, resources, and prompts.

The host and server communicate over stdio (standard input/output, for local servers) or HTTP with Server-Sent Events (for remote servers). The server runs as a separate process — the host spawns it when needed. This architecture means your MCP server can be written in any language, run locally or remotely, and does not need to know anything about Claude internals — it just needs to follow the protocol.
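On the wire, the client and server exchange JSON-RPC 2.0 messages, newline-delimited when running over stdio. The sketch below shows a `tools/list` round trip as plain Python dictionaries; the method name comes from the MCP specification, but the exact payload is illustrative and omits fields the real protocol includes:

```python
import json

# Sketch of the JSON-RPC 2.0 framing MCP uses. The tool descriptor
# mirrors the get_weather example built later in this tutorial.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "get_weather",
            "description": "Get current weather for a city",
            "inputSchema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }]
    },
}

# Over stdio, each message is serialized to a single line of JSON.
wire = json.dumps(request)
print(wire)
print(json.loads(wire)["method"])  # tools/list
```

The SDKs handle this framing for you — you only implement the handlers — but seeing the wire format makes it clear why any language can participate.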
Building Your First MCP Server in Python
Install the Python MCP SDK: pip install mcp. A minimal server that exposes one tool:
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent
import asyncio

app = Server('my-first-server')

@app.list_tools()
async def list_tools():
    return [
        Tool(
            name='get_weather',
            description='Get current weather for a city',
            inputSchema={
                'type': 'object',
                'properties': {
                    'city': {'type': 'string', 'description': 'City name'}
                },
                'required': ['city']
            }
        )
    ]

@app.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == 'get_weather':
        city = arguments['city']
        # Your actual weather API call goes here
        return [TextContent(type='text', text=f'Weather in {city}: 72F, sunny')]
    raise ValueError(f'Unknown tool: {name}')

async def main():
    # stdio_server() yields the read/write streams; app.run drives the protocol
    async with stdio_server() as (read_stream, write_stream):
        await app.run(read_stream, write_stream, app.create_initialization_options())

if __name__ == '__main__':
    asyncio.run(main())
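The inputSchema above is plain JSON Schema, which the host uses to validate arguments before invoking the tool. As a rough illustration of that check, here is a hand-rolled sketch covering only required keys and string types — not a real JSON Schema validator:

```python
# The schema mirrors the get_weather tool's inputSchema above.
schema = {
    'type': 'object',
    'properties': {'city': {'type': 'string', 'description': 'City name'}},
    'required': ['city'],
}

def check_args(schema: dict, arguments: dict) -> bool:
    """Check required keys exist and string properties are strings."""
    for key in schema.get('required', []):
        if key not in arguments:
            return False
    for key, spec in schema.get('properties', {}).items():
        if key in arguments and spec.get('type') == 'string':
            if not isinstance(arguments[key], str):
                return False
    return True

print(check_args(schema, {'city': 'Denver'}))  # True
print(check_args(schema, {}))                  # False: 'city' is required
```

In practice the model also reads the schema and descriptions to decide when and how to call the tool, so clear descriptions matter as much as correct types.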
MCP Resources and Prompts: Beyond Just Tools
Resources expose data that Claude can read — file contents, database records, API responses. Unlike tools (which are called with arguments), resources are identified by URIs and can be listed and read. Example: a filesystem MCP server exposes file:///path/to/file.txt as a resource. Claude can list available resources and read them for context.

Prompts are reusable templates that appear in the host UI. A database MCP server might expose a 'query_database' prompt that pre-fills the system prompt with schema information and example queries. Users can trigger these prompts from the host's prompt library.

Together, tools + resources + prompts give the AI a full interface to your system — not just the ability to call functions, but to browse data and use pre-configured workflows.
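As a conceptual sketch in plain Python (no MCP SDK — the URIs, schema text, and prompt wording are made up for illustration), resources behave like a URI-keyed store the host can list and read, and prompts behave like named templates the host fills in:

```python
# Plain-Python analogy of what an MCP server provides beyond tools.
resources = {
    'file:///notes/todo.txt': 'buy milk\nship the MCP server',
}

prompts = {
    'query_database': (
        'You are querying a PostgreSQL database.\n'
        'Schema:\n{schema}\n'
        'Write a SQL query to answer: {question}'
    ),
}

def list_resources() -> list[str]:
    """The host calls this to discover what it can read."""
    return sorted(resources)

def read_resource(uri: str) -> str:
    """The host fetches a resource by URI to add it to Claude's context."""
    return resources[uri]

def get_prompt(name: str, **arguments: str) -> str:
    """The host fills a template when the user picks it from the prompt library."""
    return prompts[name].format(**arguments)

print(list_resources())
print(get_prompt('query_database', schema='users(id, name)', question='How many users?'))
```

In the real SDKs these are handler functions you register on the server, analogous to the tool handlers shown earlier, and the protocol handles discovery and transport.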
Practical MCP Use Cases Being Built Right Now
The MCP ecosystem is growing rapidly. Real MCP servers built by the community:
- GitHub MCP server — lets Claude create issues, review PRs, and search repositories.
- PostgreSQL MCP server — lets Claude query databases, describe schemas, and run SQL.
- Filesystem MCP server — lets Claude read, write, and search local files.
- Slack MCP server — lets Claude read channel history and send messages.
- Puppeteer MCP server — lets Claude control a browser, take screenshots, and scrape pages.

Building your own: internal company tools (Jira, Salesforce, proprietary APIs) are prime candidates. Any API your team uses manually is a candidate for an MCP server — once it exists, Claude can use it autonomously.
Connecting MCP Servers to Claude Desktop
Claude Desktop has built-in MCP support. Configuration is in a JSON config file. On macOS: ~/Library/Application Support/Claude/claude_desktop_config.json. On Windows: %APPDATA%\Claude\claude_desktop_config.json. Example config to add a local Python MCP server:
{
  "mcpServers": {
    "my-server": {
      "command": "python",
      "args": ["/path/to/your/server.py"],
      "env": {
        "API_KEY": "your-api-key"
      }
    }
  }
}

Restart Claude Desktop after editing the config. The server appears in Claude Desktop's interface — you will see its tools available in the conversation. Test by asking Claude to use the tool: 'What is the weather in Denver?' Claude will call your get_weather tool and show the result in the conversation.
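A malformed config file is a common failure mode: if the JSON does not parse, servers can silently fail to load. One quick way to catch mistakes is to parse the config yourself before restarting — the helper below is an illustrative sketch, not an official tool, and the path shown is the macOS location:

```python
import json
from pathlib import Path

def check_config(text: str) -> list[str]:
    """Parse the config and return the server names it declares."""
    parsed = json.loads(text)  # raises json.JSONDecodeError on a typo
    return sorted(parsed.get('mcpServers', {}))

sample = '{"mcpServers": {"my-server": {"command": "python", "args": ["server.py"]}}}'
print(check_config(sample))  # ['my-server']

# Check your real config if it exists (macOS path; adjust for Windows).
config_path = Path.home() / 'Library' / 'Application Support' / 'Claude' / 'claude_desktop_config.json'
if config_path.exists():
    print(check_config(config_path.read_text()))
```

If parsing raises an error, fix the JSON before restarting Claude Desktop — the line and column in the exception message point at the typo.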
Frequently Asked Questions
- What programming languages can I use to build MCP servers?
- Anthropic provides official SDKs for Python and TypeScript. The protocol itself is language-agnostic — you can implement it in any language by following the specification, but Python and TypeScript have the most complete SDK support and community examples.
- Is MCP open source?
- Yes. The Model Context Protocol specification, SDKs, and reference implementations are open source on GitHub under the anthropics organization. The protocol is designed to be used by any AI model provider, not just Anthropic products.
- Can I use MCP with models other than Claude?
- The protocol is designed to be model-agnostic. As of 2026, Claude (in Claude Desktop and Claude.ai) has the most complete MCP support. Other AI providers and open-source implementations are beginning to support MCP as it gains adoption.
- What is the difference between MCP tools and function calling?
- Function calling is a model feature where you define functions in an API request and the model returns structured calls to those functions. MCP is a protocol layer on top of this — it standardizes how tools are discovered, described, and called across multiple models and host applications. MCP servers can be reused across any MCP-compatible host without modification.
Ready to Level Up Your Skills?
AI integration, Claude APIs, and agentic systems are core curriculum at our bootcamp. Build real AI-powered tools in 3 days. Next cohorts October 2026 in 5 cities. Only $1,490.
View Bootcamp Details