Manage Providers and Credentials#
AI agents typically need credentials to access external services: an API key for the AI model provider, a token for GitHub or GitLab, and so on. OpenShell manages these credentials as first-class entities called providers.
Create and manage providers that supply credentials to sandboxes.
Create a Provider#
Providers can be created from local environment variables or with explicit credential values.
From Local Credentials#
The fastest way to create a provider is to let the CLI discover credentials from your shell environment:
$ openshell provider create --name my-claude --type claude --from-existing
This reads ANTHROPIC_API_KEY (or CLAUDE_API_KEY) from your current environment
and stores the value in the provider.
With Explicit Credentials#
Supply a credential value directly:
$ openshell provider create --name my-api --type generic --credential API_KEY=sk-abc123
Bare Key Form#
Pass a key name without a value to read the value from the environment variable of that name:
$ openshell provider create --name my-api --type generic --credential API_KEY
This looks up the current value of $API_KEY in your shell and stores it.
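Putting the two steps together (the key value below is a hypothetical example; any variable name works with --credential):

```shell
# Export the credential into the current shell, then let the CLI read it
export API_KEY="sk-abc123"
openshell provider create --name my-api --type generic --credential API_KEY
```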
Manage Providers#
List, inspect, update, and delete providers from the active cluster.
List all providers:
$ openshell provider list
Inspect a provider:
$ openshell provider get my-claude
Update a provider’s credentials:
$ openshell provider update my-claude --type claude --from-existing
Delete a provider:
$ openshell provider delete my-claude
Attach Providers to Sandboxes#
Pass one or more --provider flags when creating a sandbox:
$ openshell sandbox create --provider my-claude --provider my-github -- claude
Each --provider flag attaches one provider. The sandbox receives all
credentials from every attached provider at runtime.
Warning
Providers cannot be added to a running sandbox. If you need to attach an additional provider, delete the sandbox and recreate it with all required providers specified.
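For example, to add a GitHub provider to a sandbox that was created with only a Claude provider (this sketch assumes a sandbox delete subcommand and a sandbox named my-sandbox):

```shell
# Providers cannot be attached after creation, so tear the sandbox down first...
openshell sandbox delete my-sandbox
# ...then recreate it with the full set of providers
openshell sandbox create --provider my-claude --provider my-github -- claude
```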
Auto-Discovery Shortcut#
When the trailing command in openshell sandbox create is a recognized tool name (claude, codex, or opencode), the CLI auto-creates the required
provider from your local credentials if one does not already exist. You do not
need to create the provider separately:
$ openshell sandbox create -- claude
This detects claude as a known tool, finds your ANTHROPIC_API_KEY, creates
a provider, attaches it to the sandbox, and launches Claude Code.
How Credential Injection Works#
The agent process inside the sandbox never sees real credential values. At startup, the proxy replaces each credential with an opaque placeholder token in the agent’s environment. When the agent sends an HTTP request containing a placeholder, the proxy resolves it to the real credential before forwarding upstream.
This resolution requires the proxy to see plaintext HTTP. Endpoints must use protocol: rest in the policy (which auto-terminates TLS) or explicit tls: terminate. Endpoints without TLS termination pass traffic through as an opaque stream, and credential placeholders are forwarded unresolved.
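A minimal policy fragment illustrating both options (only the protocol and tls fields come from the text above; the surrounding endpoint layout and hostnames are a sketch, see the Policy Schema Reference for the real structure):

```yaml
endpoints:
  - host: api.anthropic.com
    protocol: rest      # auto-terminates TLS; proxy sees plaintext and resolves placeholders
  - host: internal.example.com
    tls: terminate      # explicit TLS termination; placeholders are resolved
  - host: passthrough.example.com
    # no TLS termination: traffic passes through as an opaque stream,
    # and credential placeholders are forwarded unresolved
```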
Supported injection locations#
The proxy resolves credential placeholders in the following parts of an HTTP request:
| Location | How the agent uses it | Example |
|---|---|---|
| Header value | Agent reads the credential from its environment and sends it as a header value. | `Authorization: Bearer <placeholder>` |
| Header value (Basic auth) | Agent base64-encodes the credential into a Basic auth header value. | `Authorization: Basic <base64 placeholder>` |
| Query parameter value | Agent places the placeholder in a URL query parameter. | `?key=<placeholder>` |
| URL path segment | Agent builds a URL with the placeholder in the path. Supports concatenated patterns. | `/bot<placeholder>/sendMessage` |
The proxy does not modify request bodies, cookies, or response content.
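For the header-value case, the agent-side request construction typically looks like this (the header scheme and placeholder format shown are illustrative assumptions):

```shell
# Hypothetical opaque placeholder as seen inside the sandbox
API_KEY="osp-placeholder-1234"
# The agent sends the placeholder in a standard bearer header;
# the proxy swaps in the real key before forwarding upstream.
AUTH_HEADER="Authorization: Bearer ${API_KEY}"
echo "$AUTH_HEADER"
```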
Fail-closed behavior#
If the proxy detects a credential placeholder in a request but cannot resolve it, it rejects the request with HTTP 500 instead of forwarding the raw placeholder to the upstream server. This prevents accidental credential leakage in server logs or error responses.
Example: Telegram Bot API (path-based credential)#
Create a provider with the Telegram bot token:
$ openshell provider create --name telegram --type generic --credential TELEGRAM_BOT_TOKEN=123456:ABC-DEF
The agent reads TELEGRAM_BOT_TOKEN from its environment and builds a request like POST /bot<placeholder>/sendMessage. The proxy resolves the placeholder in the URL path and forwards POST /bot123456:ABC-DEF/sendMessage to the upstream.
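From the agent's point of view, building the request is plain string interpolation; the token value below is the sample from the command above:

```shell
TELEGRAM_BOT_TOKEN="123456:ABC-DEF"   # inside the sandbox this is an opaque placeholder
# The credential lands in a URL path segment
URL="https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/sendMessage"
echo "POST $URL"
```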
Example: Google API (query parameter credential)#
$ openshell provider create --name google --type generic --credential YOUTUBE_API_KEY=AIzaSy-secret
The agent sends GET /youtube/v3/search?part=snippet&key=<placeholder>. The proxy resolves the placeholder in the query parameter value and percent-encodes the result before forwarding.
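The agent-side construction for the query-parameter case, using the sample key from the command above (the host shown is the standard Google APIs endpoint; the document only specifies the path):

```shell
YOUTUBE_API_KEY="AIzaSy-secret"       # inside the sandbox this is an opaque placeholder
# The credential lands in a query parameter value
URL="https://www.googleapis.com/youtube/v3/search?part=snippet&key=${YOUTUBE_API_KEY}"
echo "GET $URL"
```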
Supported Provider Types#
The following provider types are supported.
| Type | Environment Variables Injected | Typical Use |
|---|---|---|
| `claude` | `ANTHROPIC_API_KEY`, `CLAUDE_API_KEY` | Claude Code, Anthropic API |
| `codex` | | OpenAI Codex |
| `generic` | User-defined | Any service with custom credentials |
| `github` | | GitHub API |
| `gitlab` | | GitLab API |
| `nvidia` | | NVIDIA API Catalog |
| `openai` | `OPENAI_API_KEY`, `OPENAI_BASE_URL` | Any OpenAI-compatible endpoint. Set `OPENAI_BASE_URL` with `--config`. |
| `opencode` | | opencode tool |
Tip
Use the generic type for any service not listed above. You define the
environment variable names and values yourself with --credential.
Supported Inference Providers#
The following providers have been tested with inference.local. Any provider that exposes an OpenAI-compatible API works with the openai type. Set --config OPENAI_BASE_URL to the provider’s base URL and --credential OPENAI_API_KEY to your API key.
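For example, to point the openai type at an OpenAI-compatible inference server (the base URL and key below are placeholders; substitute your provider's values):

```shell
openshell provider create --name my-inference --type openai \
  --config OPENAI_BASE_URL=https://inference.example.com/v1 \
  --credential OPENAI_API_KEY=sk-example
```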
Tested providers: NVIDIA API Catalog, Anthropic, Baseten, Bitdeer AI, Deepinfra, Groq, Ollama (local), and LM Studio (local).
Refer to your provider’s documentation for the correct base URL, available models, and API key setup. To configure inference routing, refer to Configure Inference Routing.
Next Steps#
Explore related topics:
To control what the agent can access, refer to Customize Sandbox Policies.
To use a pre-built environment, refer to the Community Sandboxes catalog.
To view the complete field reference for the policy YAML, refer to the Policy Schema Reference.