# Designing MCP Servers That Protect Secrets

If you're building an MCP server, the decisions you make about credential handling directly affect the security of every developer who installs it. Secrets that leak through your tool descriptions, outputs, or configuration patterns end up in LLM context windows, and from there, they can [propagate to logs, conversation history, and third-party systems](/docs/ai-agents/key-leakage-vectors-in-agent-workflows).

## Read Keys from the Environment at Startup

Your MCP server should read credentials from environment variables when it starts, never from tool call parameters or protocol messages. This keeps secrets out of the LLM's context entirely.

```typescript
// Good: Read credentials at startup
const apiKey = process.env.SERVICE_API_KEY;
if (!apiKey) {
  console.error("SERVICE_API_KEY environment variable is required");
  process.exit(1);
}

// The tool's argument shape; in practice this comes from your tool's input schema
interface ToolCallParams {
  query: string;
}

// Use the key internally -- never pass it back through tool results
async function handleToolCall(params: ToolCallParams) {
  const response = await fetch("https://api.example.com/data", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  return sanitizeResponse(await response.json());
}
```

```typescript
// Bad: Accepting credentials as tool parameters
async function handleToolCall(params: {
  apiKey: string;
  query: string;
}) {
  // The API key is now in the LLM's context as a tool parameter
  const response = await fetch("https://api.example.com/data", {
    headers: { Authorization: `Bearer ${params.apiKey}` },
  });
  return response.json();
}
```

## Never Include Keys in Tool Descriptions

Tool descriptions are sent to the LLM as part of its context on every request. Anything in a tool description is visible to the model and may be logged, cached, or forwarded.

```json
// Bad: Secret embedded in tool description
{
  "name": "query_database",
  "description": "Query the production database at db.internal:5432. Authenticate with password: hunter2"
}
```

```json
// Good: No credentials in description
{
  "name": "query_database",
  "description": "Execute a read-only SQL query against the configured database. Connection details are managed by the server."
}
```

This also applies to error messages. If your server encounters an authentication failure, return a generic error; don't echo the credentials back in the error response.
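As a sketch, an authentication-failure handler might look like this (the result shape follows MCP's tool-result convention; the helper name and log message are illustrative):

```typescript
// Good: log details server-side, return a generic tool result to the model
function authErrorResult() {
  // The real failure details stay in the server's own log, never in the
  // tool result that flows back into the LLM's context.
  console.error("Upstream authentication failed; check SERVICE_API_KEY");
  return {
    isError: true,
    content: [
      {
        type: "text",
        text: "Authentication with the upstream service failed. Check the server's credential configuration.",
      },
    ],
  };
}
```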

## Sanitize All Tool Outputs

Tool results are returned to the LLM and become part of the conversation. If your tool queries a database or calls an API, filter out sensitive fields before returning the result.

```typescript
function sanitizeResponse(data: Record<string, any>[]) {
  // Substring match so variants like "apiKey" or "access_token" are caught,
  // not just exact field names
  const sensitivePatterns = [
    "api_key",
    "apikey",
    "secret",
    "password",
    "token",
    "credential",
    "connection_string",
    "private_key",
  ];

  return data.map((row) => {
    const sanitized = { ...row };
    for (const field of Object.keys(sanitized)) {
      const lower = field.toLowerCase();
      if (sensitivePatterns.some((pattern) => lower.includes(pattern))) {
        sanitized[field] = "[REDACTED]";
      }
    }
    return sanitized;
  });
}
```

This isn't just about your own secrets. If your MCP server queries a database that stores other users' API keys, those keys will flow into the LLM context unless you actively filter them.
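The sanitizer above only inspects top-level keys, but API responses are often nested. A recursive variant can redact at any depth; this is a sketch, and the substring list should be tuned to your own data:

```typescript
// Redact any key containing a sensitive word, at any nesting depth.
// Keys are normalized (lowercased, non-letters stripped) before matching,
// so "apiKey", "api_key", and "API-KEY" all match "apikey".
const SENSITIVE = ["apikey", "secret", "password", "token", "credential", "privatekey", "connectionstring"];

function deepSanitize(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(deepSanitize);
  if (value !== null && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [key, v] of Object.entries(value as Record<string, unknown>)) {
      const normalized = key.toLowerCase().replace(/[^a-z]/g, "");
      out[key] = SENSITIVE.some((s) => normalized.includes(s))
        ? "[REDACTED]"
        : deepSanitize(v);
    }
    return out;
  }
  return value;
}
```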

## Implement OAuth 2.1 for Remote Servers

If your MCP server is accessible over HTTP (as opposed to stdio), the MCP specification's authorization model is built on [OAuth 2.1](/docs/ai-agents/oauth-vs-api-keys-for-agents), and HTTP-based transports should conform to it. Supporting it eliminates the need for static API keys entirely.

Your server should:

1. **Expose an authorization endpoint** that clients use to initiate the OAuth flow.
2. **Require PKCE** (Proof Key for Code Exchange) to prevent authorization code interception.
3. **Issue scoped access tokens** so clients only get the permissions they need.
4. **Enforce token expiration** with short-lived access tokens and rotating refresh tokens.
5. **Support dynamic client registration** so new MCP clients can register without manual setup.

```
Authorization flow:
Client -> Authorization Server: Authorization request + PKCE
User -> Authorization Server: Grants permission
Authorization Server -> Client: Authorization code
Client -> Authorization Server: Code + PKCE verifier -> Access token
Client -> MCP Server: Request + Access token
```

## Validate the Origin Header

MCP servers that listen on `localhost` for communication with local clients are vulnerable to DNS rebinding attacks. A malicious website can re-point a domain it controls to `127.0.0.1`, allowing scripts on that page to send requests to your local MCP server from the victim's browser.

Always validate the `Origin` header on incoming requests and reject any request from an unexpected origin:

```typescript
// Origins that may talk to this server from a browser context
const allowedOrigins = ["http://localhost:3000", "http://127.0.0.1:3000"];

app.use((req, res, next) => {
  const origin = req.headers.origin;
  // Requests without an Origin header (e.g. non-browser clients) pass through;
  // browser requests from an unlisted origin are rejected.
  if (origin && !allowedOrigins.includes(origin)) {
    return res.status(403).json({ error: "Forbidden" });
  }
  next();
});
```

## Treat Tool Annotations as Untrusted

The MCP specification includes tool annotations (metadata like `readOnlyHint` and `destructiveHint`) that describe a tool's behavior. These annotations are provided by the server and **must not be trusted as a security boundary**. A compromised or malicious server can lie about its annotations.

Do not make authorization decisions based on annotations alone. A tool that claims `readOnlyHint: true` might still modify data. Hosts and clients should enforce their own access controls independent of what the server declares.
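From the client side, the rule looks something like this sketch, where `confirmWithUser` stands in for whatever approval UI the host provides:

```typescript
interface ToolAnnotations {
  readOnlyHint?: boolean;
  destructiveHint?: boolean;
}

// Hints may inform the UI (e.g. the wording of the prompt) but never skip it:
// a malicious server can claim readOnlyHint: true for a destructive tool.
async function shouldExecuteTool(
  toolName: string,
  annotations: ToolAnnotations,
  confirmWithUser: (message: string) => Promise<boolean>,
): Promise<boolean> {
  const warning = annotations.destructiveHint
    ? `"${toolName}" declares itself destructive.`
    : `"${toolName}" claims to be safe, but this is unverified.`;
  return confirmWithUser(`${warning} Allow this call?`);
}
```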

## Checklist for MCP Server Authors

- [ ] Read all credentials from environment variables at startup
- [ ] Never accept credentials as tool call parameters
- [ ] Strip credentials, connection strings, and tokens from tool descriptions
- [ ] Sanitize all tool outputs to remove sensitive fields
- [ ] Return generic error messages that don't echo credentials
- [ ] Implement OAuth 2.1 with PKCE for HTTP-based servers
- [ ] Validate Origin headers to prevent DNS rebinding
- [ ] Do not rely on tool annotations as security controls
- [ ] Document the environment variables your server requires so users can [configure secrets managers](/docs/ai-agents/securing-keys-in-mcp-configs)
