When a security team asks "where could this API key leak from?", they usually think about a few obvious places: the codebase, the CI/CD environment, the deployed service. For an AI agent deployment with long-lived credentials, the real answer is considerably longer.

We've mapped every place a credential stored in an AI agent's context can leak. The list is longer than most teams expect.

Layer 1: The LLM provider

System prompt storage

If your LLM provider caches system prompts for performance, your credentials may be stored in their infrastructure. Provider data handling policies vary, and not all providers give you visibility into what's cached where.
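To make the scenario concrete, here is a minimal sketch of the anti-pattern this whole section assumes: a long-lived key embedded directly in the system prompt and shipped to the provider on every call. The endpoint, model name, and key format below are placeholders, not any specific provider's API.

```python
import requests

# Hypothetical example: a long-lived credential embedded directly in the
# system prompt. Every call ships the secret to the provider, where it can
# be cached, logged, or retained under policies you don't control.
SYSTEM_PROMPT = """You are a deployment assistant.
Use this API key for the internal billing service: sk_live_0000example0000
"""

payload = {
    "model": "example-model",  # placeholder model name
    "messages": [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Generate this month's invoice report."},
    ],
}

# Placeholder endpoint and auth header; the point is that the credential
# travels inside the request body on every single call.
requests.post(
    "https://llm.example.com/v1/chat/completions",
    json=payload,
    headers={"Authorization": "Bearer <provider-api-key>"},
    timeout=30,
)
```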

Inference logs

Many LLM API providers log inference requests for trust and safety monitoring, abuse detection, and compliance purposes. Whether those logs include your system prompt — and who has access to them — depends on your provider contract and tier.

Fine-tuning and training data

If you use conversation data for fine-tuning or send conversations to the provider's training pipeline (even inadvertently, if a data-sharing opt-out is missed or misconfigured), credentials in those conversations become part of a training corpus that could surface in future model outputs.

Layer 2: Your infrastructure

Application logs

Application logging is the most common source of credential exposure. Logging frameworks that capture function arguments, HTTP request bodies, or full context dumps will capture credentials in system prompts. This is especially dangerous because application logs are often shipped to aggregation services (Datadog, Splunk, CloudWatch) with broad access.
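Here is a sketch of how easily this happens with Python's standard logging module, plus one possible mitigation: a redacting filter applied before records leave the process. The key format and regex are illustrative; a real deployment needs patterns matched to its own credential formats.

```python
import logging
import re

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("agent")

# Hypothetical request payload; the system prompt carries a long-lived key.
payload = {
    "messages": [
        {"role": "system", "content": "Billing API key: sk_live_0000example0000"},
        {"role": "user", "content": "Reconcile yesterday's charges."},
    ]
}

# A well-intentioned debug line like this copies the credential into your
# log pipeline (and from there into Datadog, Splunk, or CloudWatch).
logger.debug("LLM request payload: %s", payload)

# Mitigation sketch: a logging.Filter that masks credential-shaped strings
# before the record is emitted. The pattern is an illustration only.
SECRET_PATTERN = re.compile(r"sk_live_[A-Za-z0-9]+")

class RedactSecrets(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = SECRET_PATTERN.sub("[REDACTED]", record.getMessage())
        record.args = ()  # message is already fully rendered
        return True

logger.addFilter(RedactSecrets())
logger.debug("LLM request payload: %s", payload)  # now logged with [REDACTED]
```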

Error tracking systems

When an agent throws an exception, error tracking systems like Sentry capture the full exception context — including local variables, stack frames, and in many configurations, the HTTP request body that triggered the error. If your agent context includes credentials at the time of the exception, those credentials end up in Sentry.
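If you use Sentry's Python SDK, a before_send hook is one place to scrub credential-shaped strings before events leave the process. This is a best-effort sketch: the event fields touched here (the request body and frame-local variables) are the usual carriers, but the key pattern and DSN are placeholders.

```python
import re
import sentry_sdk

SECRET_PATTERN = re.compile(r"sk_live_[A-Za-z0-9]+")  # illustrative pattern

def scrub_event(event, hint):
    """Best-effort scrub of credential-shaped strings before the event
    is sent to Sentry. Sketch only; adapt to your own key formats."""
    request = event.get("request") or {}
    if isinstance(request.get("data"), str):
        request["data"] = SECRET_PATTERN.sub("[REDACTED]", request["data"])
    # Local variables captured in stack frames can also hold the credential.
    for exc in (event.get("exception") or {}).get("values", []):
        for frame in (exc.get("stacktrace") or {}).get("frames", []):
            frame["vars"] = {
                k: SECRET_PATTERN.sub("[REDACTED]", str(v))
                for k, v in (frame.get("vars") or {}).items()
            }
    return event

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    before_send=scrub_event,
    send_default_pii=False,  # avoid attaching request bodies and user data
)
```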

Distributed tracing

OpenTelemetry, Jaeger, and similar distributed tracing systems record span attributes. If your traces include LLM request/response payloads, they include any credentials in those payloads as well. This is especially easy to miss because tracing is often added by infrastructure teams, not the engineers implementing the agent.
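A small sketch using the OpenTelemetry Python API: the commented-out line is the attribute that leaks, and the safer alternative records only metadata about the call. The attribute names (llm.prompt, llm.model) are illustrative, not a standard convention.

```python
from opentelemetry import trace

tracer = trace.get_tracer("agent.llm")

def call_llm(payload: dict) -> dict:
    with tracer.start_as_current_span("llm.request") as span:
        # Anti-pattern: attaching the full prompt as a span attribute copies
        # any embedded credential into the tracing backend.
        # span.set_attribute("llm.prompt", str(payload["messages"]))
        #
        # Safer: record only non-sensitive metadata about the call.
        span.set_attribute("llm.model", payload.get("model", "unknown"))
        span.set_attribute("llm.message_count", len(payload.get("messages", [])))
        return {"status": "ok"}  # placeholder for the real provider call
```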

Container registries and build artifacts

If credentials are baked into Docker images (via environment variables at build time, not just run time), they persist in every layer of the image and in every pull from the registry. Container image layers are immutable — you can't redact a credential from a published image without rebuilding and republishing.

Layer 3: The agent framework

Conversation history and memory

Many agent frameworks maintain conversation history for context continuity. If a credential appears in the conversation at any point — even in a tool response that the agent includes in its context — it can persist in the memory store.

Vector databases used for agent memory present a particular risk: if embeddings are generated from content that includes credentials, semantic search queries can surface that content long after the original credential has been rotated.
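One mitigation is a scrub-before-write hook on anything headed for conversation history or an embedding pipeline. A hypothetical sketch, with an illustrative regex for credential-shaped strings:

```python
import re

# Illustrative patterns only; match these to your own credential formats.
SECRET_PATTERN = re.compile(r"(sk_live|AKIA|ghp)_?[A-Za-z0-9]{8,}")

def sanitize_for_memory(text: str) -> str:
    """Hypothetical pre-write hook: strip credential-shaped strings before a
    message is appended to conversation history or embedded for recall."""
    return SECRET_PATTERN.sub("[REDACTED]", text)

# A tool response that happens to echo a credential...
tool_response = "Deploy succeeded. Service token: sk_live_0000example0000"

# ...gets scrubbed before it is persisted or embedded, so semantic search
# over the memory store cannot surface the raw secret later.
conversation_history = []
conversation_history.append(
    {"role": "tool", "content": sanitize_for_memory(tool_response)}
)
```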

Tool call logging

Agent frameworks typically log tool calls for debugging. If a tool call includes credentials as parameters (rather than resolving them internally), those credentials appear in tool call logs.
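The difference is visible in the tool signature itself. A hedged Python sketch, with a hypothetical billing tool and environment variable name:

```python
import os

# Anti-pattern: the credential is a tool parameter, so it appears verbatim
# in the framework's tool-call log alongside every invocation.
def charge_customer_leaky(customer_id: str, api_key: str) -> None:
    ...

# Safer shape: the tool resolves its own credential internally (from a
# secrets manager or the environment), so logged parameters never contain it.
def charge_customer(customer_id: str) -> None:
    api_key = os.environ["BILLING_API_KEY"]  # hypothetical variable name
    ...  # call the billing API with api_key; only customer_id gets logged
```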

Checkpoints and state persistence

Long-running agents often checkpoint their state to resume after failures. If agent state includes credentials, those credentials are serialized to wherever state is persisted — databases, object storage, message queues.
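One defensive pattern is to strip credential fields before state is serialized and re-acquire them from the secret store on resume. A hypothetical sketch:

```python
import json

SENSITIVE_KEYS = {"api_key", "access_token", "credentials"}  # illustrative list

def checkpoint(state: dict, path: str) -> None:
    """Hypothetical checkpoint hook: drop credential fields before agent
    state is written to durable storage (database, object store, queue)."""
    safe_state = {k: v for k, v in state.items() if k not in SENSITIVE_KEYS}
    with open(path, "w") as f:
        json.dump(safe_state, f)

# On resume, the agent re-fetches credentials rather than reading them
# back out of the checkpoint.
checkpoint(
    {"step": 7, "plan": ["fetch", "summarize"], "api_key": "sk_live_0000example0000"},
    "agent.ckpt.json",
)
```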

Layer 4: Human access pathways

Conversation export and review tools

Internal dashboards that let engineers review agent conversations for debugging or quality review expose credentials to every engineer with dashboard access. This is often a large group — and credentials in conversations make the blast radius of a compromised engineer account much larger.

Incident response artifacts

When an incident occurs, engineers often capture conversation exports, log dumps, or heap dumps for analysis. These artifacts frequently end up in shared Slack channels, incident trackers, or email threads — none of which are typically secured to the level of production credentials.

Counting the exposure surface

A typical production AI agent deployment with long-lived credentials in its system prompt might have credentials accessible from:

  1. The LLM provider's inference logs
  2. Your application log aggregation service
  3. Your error tracking service (Sentry, Rollbar)
  4. Your distributed tracing system
  5. Your agent framework's conversation history store
  6. Your vector database (if agent memory is used)
  7. Any checkpoint or state persistence store
  8. Internal debugging dashboards
  9. Any incident artifacts from production issues

That's nine distinct exposure surfaces for a single credential. Each one has its own access controls, its own retention policies, and its own breach risk.

The JIT answer to surface area

Just-in-time credentials don't eliminate these surfaces — your logs still capture what your agent does. But they radically reduce what a breach of any surface costs.

A token that expires in five minutes is worthless to an attacker who finds it in a log entry six minutes later. The nine exposure surfaces above now contain only tokens that are permanently invalid. The blast radius of a credential discovered in any of them is zero.
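What "expires in five minutes" looks like mechanically, as a minimal sketch rather than any particular vendor's implementation: a broker signs a scope plus an expiry timestamp, and verification rejects anything past its window.

```python
import hashlib
import hmac
import secrets
import time

SIGNING_KEY = secrets.token_bytes(32)  # held by the credential broker, not the agent
TTL_SECONDS = 300  # five minutes

def mint_token(scope: str) -> str:
    """Hypothetical broker: issue a scoped token that self-expires."""
    expires_at = int(time.time()) + TTL_SECONDS
    body = f"{scope}|{expires_at}"
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}|{sig}"

def verify_token(token: str) -> bool:
    scope, expires_at, sig = token.rsplit("|", 2)
    body = f"{scope}|{expires_at}"
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    # A token found in a log after its window has closed fails this check.
    return int(expires_at) > time.time()
```

A credential scraped from any of the nine surfaces above is only useful to an attacker if it still passes that expiry check at the moment they try to use it.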

The practical implication: with JIT credentials, you can audit your exposure surfaces at your own pace. With long-lived credentials, every day that passes without auditing increases the risk that a valid credential is sitting in a surface you haven't secured yet.