# MCP Server Setup
Connect AI coding agents to external tools via Model Context Protocol.
## 1. What is MCP
Model Context Protocol is an open standard for connecting AI applications to external data sources and tools. Think of it as USB-C for AI: one protocol that lets any compliant host talk to any compliant server, regardless of vendor.
### Three primitives
| Primitive | Controlled by | Purpose |
|---|---|---|
| Tools | Model | Executable functions the AI can call (search, create file, run query) |
| Resources | Application | Read-only data the host app exposes to the model (file contents, DB rows) |
| Prompts | User | Reusable prompt templates with arguments (slash commands, workflows) |
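Concretely, a tool is advertised to the model as a name, a human-readable description, and a JSON Schema for its arguments. A sketch of one entry from a `tools/list` response (the `search_files` tool and its schema are hypothetical, but the `name`/`description`/`inputSchema` shape follows the MCP specification):

```json
{
  "name": "search_files",
  "description": "Search for files matching a glob pattern",
  "inputSchema": {
    "type": "object",
    "properties": {
      "pattern": { "type": "string", "description": "Glob pattern to match" }
    },
    "required": ["pattern"]
  }
}
```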
### Client-server architecture
The AI application (the host) creates one MCP client per server. Each client maintains an independent connection, keeping servers isolated from each other. The host orchestrates which servers to consult based on the task.
MCP is supported by Claude Code, ChatGPT, VS Code, Cursor, Windsurf, and many other AI development tools.
## 2. Adding Servers to Claude Code
### Remote server (HTTP transport)

```shell
claude mcp add --transport http github-remote https://api.githubcopilot.com/mcp/
```
### Local server (stdio transport)

```shell
claude mcp add --transport stdio filesystem npx -- \
  -y @modelcontextprotocol/server-filesystem /home/user/projects
```
### Three config scopes
| Scope | Flag | Location | Shared |
|---|---|---|---|
| Local | --scope local (default) | .claude/settings.local.json | No (gitignored) |
| Project | --scope project | .mcp.json in repo root | Yes (version-controlled) |
| User | --scope user | ~/.claude/settings.json | All projects |
### Shared project config: `.mcp.json`

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "."],
      "env": {
        "API_TOKEN": "${API_TOKEN}",
        "LOG_LEVEL": "${LOG_LEVEL:-info}"
      }
    }
  }
}
```
Environment variables expand at runtime: `${VAR}` for required values, `${VAR:-default}` for optional ones with fallbacks.
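The expansion rule can be sketched in a few lines of Python (an illustrative re-implementation, not Claude Code's actual expander, which may differ in edge cases):

```python
import os
import re

# Matches ${VAR} and ${VAR:-default}; group 2 is None when no default is given.
_PATTERN = re.compile(r"\$\{(\w+)(?::-([^}]*))?\}")

def expand(value, env=None):
    """Expand ${VAR} / ${VAR:-default} references against an environment mapping."""
    env = os.environ if env is None else env

    def repl(match):
        name, default = match.group(1), match.group(2)
        if name in env:
            return env[name]
        if default is not None:  # ${VAR:-default} form: fall back silently
            return default
        raise KeyError(f"required environment variable {name} is not set")

    return _PATTERN.sub(repl, value)
```

With an empty environment, `expand("${LOG_LEVEL:-info}", {})` falls back to `"info"`, while `expand("${API_TOKEN}", {})` raises because no default was provided.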
### Import from Claude Desktop

```shell
claude mcp add-from-claude-desktop
```
Pulls all servers from your Claude Desktop config into Claude Code. Useful when migrating an existing setup.
## 3. Transports: stdio vs HTTP
MCP defines two transport mechanisms. The choice depends on whether your server runs locally or remotely.
### stdio
The client launches the server as a child process and communicates over stdin/stdout using JSON-RPC. No network stack involved. This is the simplest transport and the default for local tools.
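A minimal sketch of the framing — newline-delimited JSON-RPC over the child's pipes — using a stand-in echo script rather than a real MCP server (the `SERVER` script is purely illustrative; a real server would also run the `initialize` handshake and dispatch on `method`):

```python
import json
import subprocess
import sys

# Stand-in "server": answers each JSON-RPC request line on stdout.
SERVER = r'''
import json, sys
for line in sys.stdin:
    req = json.loads(line)
    resp = {"jsonrpc": "2.0", "id": req["id"], "result": {"echo": req["method"]}}
    print(json.dumps(resp), flush=True)
'''

def call(proc, method, req_id):
    """Send one request line over stdin and read one response line from stdout."""
    req = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": {}}
    proc.stdin.write(json.dumps(req) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

# The client spawns the server as a child process and owns its lifetime.
proc = subprocess.Popen(
    [sys.executable, "-c", SERVER],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
resp = call(proc, "tools/list", req_id=1)
proc.stdin.close()  # closing stdin ends the server's read loop
proc.wait()
```

Note that the server's lifetime is bound to the client: closing the pipes shuts it down, which is exactly the session model the comparison table below describes.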
### Streamable HTTP
The server runs as an independent HTTP process. Supports OAuth 2.0 for authentication, session management via the `Mcp-Session-Id` header, and `Origin` validation to prevent unauthorized browser-based requests. Best for shared or remote servers.
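A post-initialization request in an established session might look roughly like this (the endpoint path, host, token, and session id are all illustrative, not values from any real server):

```http
POST /mcp HTTP/1.1
Host: api.example.com
Content-Type: application/json
Accept: application/json, text/event-stream
Authorization: Bearer <oauth-access-token>
Mcp-Session-Id: 1868a90c-illustrative-session-id
Origin: https://client.example.com

{"jsonrpc": "2.0", "id": 2, "method": "tools/list", "params": {}}
```

The server issues the session id during initialization, the client echoes it on every subsequent request, and the server checks `Origin` before processing.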
| Property | stdio | Streamable HTTP |
|---|---|---|
| Startup | Client spawns process | Server runs independently |
| Network | None (pipes) | HTTP/HTTPS |
| Auth | Inherits process env | OAuth 2.0 |
| Sessions | Lifetime of process | Mcp-Session-Id header |
| Best for | Local CLI tools, file ops | Remote APIs, shared infra |
## 4. Useful Servers
### Official reference servers
| Server | What it does |
|---|---|
| Filesystem | File read/write/search with configurable access controls |
| Git | Repository operations: status, diff, log, commit |
| Memory | Knowledge graph persistence across sessions |
| Fetch | Web content retrieval and conversion to markdown |
| Sequential Thinking | Structured multi-step problem solving |
| Time | Timezone conversions and current time queries |
### Archived (community-maintained)
Several servers originally in the official repo have been archived and are now community-maintained: GitHub, PostgreSQL, Sentry, Slack, and SQLite. They still work but receive updates from the community rather than Anthropic.
## 5. Tool Search and Limits
### Deferred tool loading
Tool Search is enabled by default. MCP tool definitions are not loaded into the context window upfront. Instead, Claude discovers tools on demand by matching your request against server descriptions. This keeps context lean even when dozens of servers are connected.
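The idea can be illustrated with a toy matcher — this is emphatically not Claude Code's actual discovery algorithm, just a sketch of the principle that only short descriptions stay resident and full tool definitions are loaded for matches (the tool names and descriptions below are made up):

```python
# Resident index: short descriptions only, not full tool definitions.
DESCRIPTIONS = {
    "filesystem.search_files": "search for files matching a glob pattern",
    "git.commit": "create a git commit from staged changes",
    "fetch.get_url": "retrieve a web page and convert it to markdown",
}

def discover(query, descriptions):
    """Return tool names whose descriptions share a word with the query."""
    words = set(query.lower().split())
    return [name for name, desc in descriptions.items()
            if words & set(desc.lower().split())]

# Only the matching tool's full definition would then be loaded into context.
matches = discover("search the repo for files", DESCRIPTIONS)
```

Whatever the real matching looks like, the payoff is the same: context cost scales with the tools actually used, not with the number of servers connected.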
### Output limits

MCP tool responses generate a warning at 10K tokens. The default maximum is 25K tokens, after which output is truncated. You can raise or lower the maximum with an environment variable:

```shell
export MAX_MCP_OUTPUT_TOKENS=50000
```
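The two thresholds can be summarized in a small sketch (illustrative only — the constants mirror the documented defaults, not Claude Code's implementation):

```python
import os

WARN_TOKENS = 10_000  # documented warning threshold
# Default cap, overridable the same way Claude Code reads it via env var.
MAX_TOKENS = int(os.environ.get("MAX_MCP_OUTPUT_TOKENS", 25_000))

def classify_output(token_count, max_tokens=MAX_TOKENS):
    """Classify a tool response by size: ok, warning, or truncated."""
    if token_count > max_tokens:
        return "truncated"
    if token_count > WARN_TOKENS:
        return "warning"
    return "ok"
```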
### Enterprise: managed MCP

Organizations can define a `managed-mcp.json` configuration that pre-provisions servers for all users. Admins can enforce allowlist or denylist policies to control which servers are available within the organization.