Every day, developers paste API keys directly into Claude, Cursor, and other AI assistants. It works โ until it doesn't. Here's why this pattern is dangerous and what the MCP ecosystem needs instead.
The Problem: Secrets in the Context Window
The most common way to give an AI agent access to an API looks like this:
{
"mcpServers": {
"github": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {
"GITHUB_TOKEN": "ghp_abc123realTokenHere"
}
}
}
}
That token is now:
- Sitting in a plaintext config file on your filesystem
- Visible to the LLM if it reads its own configuration
- Passed to every tool call with no scope restrictions
- Impossible to audit โ you can't see which API calls the agent actually made
- At risk of leakage via prompt injection, log files, or model training data
This isn't hypothetical. Researchers have demonstrated prompt injection attacks that exfiltrate environment variables. And if your agent has a DELETE capable token when it only needs GET, one hallucinated tool call can cause real damage.
Why Traditional Secret Managers Don't Fit
You might think: "Just use HashiCorp Vault or AWS Secrets Manager." But these tools were designed for services, not agents. The mismatch shows up in three ways:
1. No concept of request-level policy
Traditional secret managers hand you the key and walk away. They don't know (or care) what HTTP request you're about to make with it. But for AI agents, the request is where risk lives. An agent with a Stripe key should be able to list charges but not create refunds.
2. No audit trail tied to agent intent
When an AI agent makes 50 API calls in a session, you want to know: what was the agent trying to do? Traditional secret managers log "secret accessed" โ not "agent called GET /v1/charges because user asked to check payment status."
3. The credential never needs to be exposed
This is the key insight. The agent doesn't need the API key โ it needs the ability to make API calls. There's a fundamental difference. If the agent never sees the credential, it can't leak it. Period.
The Enforcement-Based Approach
The solution is to move from trust-based security ("here's the key, please be careful") to enforcement-based security ("tell me what you want to do, and I'll decide whether to allow it").
This is exactly what Janee does. Instead of injecting secrets into the agent's environment, you configure Janee as an MCP server that proxies API requests:
# Store credentials securely (encrypted at rest)
janee add stripe --key sk_live_xxx
# Define what the agent can do
# In ~/.janee/config.yaml:
capabilities:
stripe_readonly:
service: stripe
rules:
allow: ["GET /v1/charges*", "GET /v1/customers*"]
deny: ["POST *", "DELETE *", "PUT *"]
Now the agent interacts through MCP tools:
# Agent sees these tools:
janee_list_services โ ["stripe", "github"]
janee_execute โ proxy a request through Janee
# Agent calls:
janee_execute({
service: "stripe",
method: "GET",
path: "/v1/charges?limit=5"
})
The agent never sees sk_live_xxx. Janee injects it server-side, checks the request against the policy, and returns the response. If the agent tries POST /v1/refunds, it gets a clear denial โ not an error from Stripe.
What This Enables
Once you separate credential storage from credential usage, interesting things become possible:
- Least-privilege by default: Each agent session gets exactly the permissions it needs
- Full audit trail: Every proxied request is logged with timestamp, path, method, and response status
- Prompt injection resilience: Even if an attacker hijacks the agent's reasoning, they can't access APIs outside the allowed policy
- Credential rotation without agent changes: Update the key in Janee, agents keep working
- Multi-agent isolation: Different agents get different capability sets from the same credentials
The MCP Ecosystem Needs This
The Model Context Protocol is enabling a new generation of AI-native tools. But the security model hasn't caught up. Most MCP servers today require raw API keys passed via environment variables. The protocol itself has no built-in concept of credential management or request-level authorization.
This isn't a criticism of MCP โ it's intentionally minimal by design. But it means the ecosystem needs complementary tools that handle the security layer. Just like web applications need authentication middleware even though HTTP doesn't mandate it.
The question isn't whether AI agents will have API access. It's whether that access will be auditable, scoped, and recoverable when things go wrong.
Getting Started
Janee is open source and installs in 30 seconds:
npm install -g @true-and-useful/janee
janee init
janee add stripe # interactive credential setup
janee serve # start MCP server
Add it to your Claude Desktop config:
{
"mcpServers": {
"janee": {
"command": "janee",
"args": ["serve"]
}
}
}
No API keys in config. No tokens in the context window. Just scoped, auditable access.
Try Janee
Give your AI agents secure API access โ without exposing your secrets.
View on GitHub โ