# Configuration

## Credential resolution order

Redshank merges credentials from all sources, with earlier sources taking priority:
1. Process environment variables (e.g. `ANTHROPIC_API_KEY`)
2. `.env` file in the current working directory
3. `<workspace>/.redshank/credentials.json`
4. `~/.redshank/credentials.json` (user-level fallback)
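The merge semantics above can be sketched as a simple dictionary fold — a minimal illustration, not Redshank's actual (Rust) implementation; the helper name and sample values are invented:

```python
def merge_credentials(*sources: dict) -> dict:
    """Merge credential dicts so that EARLIER sources take priority.

    Iterating from lowest priority to highest lets each higher-priority
    source overwrite what the lower-priority ones contributed.
    """
    merged: dict = {}
    for source in reversed(sources):
        merged.update(source)
    return merged


# Hypothetical contents of each source, highest priority first:
env = {"anthropic_api_key": "sk-ant-from-env"}
dotenv = {"anthropic_api_key": "sk-ant-from-dotenv", "openai_api_key": "sk-from-dotenv"}
workspace = {"openai_api_key": "sk-from-workspace", "exa_api_key": "exa-from-workspace"}
user = {"exa_api_key": "exa-from-user"}

creds = merge_credentials(env, dotenv, workspace, user)
# env wins for anthropic, .env wins for openai, workspace file wins for exa
```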
Credential files are written with mode 600 (owner read/write only). Keys never appear in log output at any level.
## Environment variables

Each credential is resolved with the following priority (first match wins):
1. `REDSHANK_<KEY>` — app-namespaced; use when running multiple different agents side by side
2. `OPENPLANTER_<KEY>` — legacy backward compatibility with the OpenPlanter predecessor
3. `<KEY>` — bare/global env var; sufficient for most users
| Variable | Purpose |
|---|---|
| `ANTHROPIC_API_KEY` | Anthropic Claude |
| `OPENAI_API_KEY` | OpenAI |
| `OPENROUTER_API_KEY` | OpenRouter |
| `CEREBRAS_API_KEY` | Cerebras |
| `OLLAMA_BASE_URL` | Local Ollama instance URL |
| `EXA_API_KEY` | Exa neural search |
| `VOYAGE_API_KEY` | Voyage AI embeddings |
| `HIBP_API_KEY` | Have I Been Pwned breach data |
| `GITHUB_TOKEN` | GitHub API (profile fetcher) |
| `FEC_API_KEY` | FEC campaign finance API |
| `OPENCORPORATES_API_KEY` | OpenCorporates (optional; free tier without a key) |
Copy `.env.example` from the repo root and fill in the keys you need:

```sh
cp .env.example .env
chmod 600 .env
```
## credentials.json

For persistent storage, copy `credentials.example.json` to `.redshank/credentials.json`:
```sh
mkdir -p .redshank
cp credentials.example.json .redshank/credentials.json
chmod 600 .redshank/credentials.json
```
The file's keys map directly to the environment variable names, lowercased to snake_case:
```json
{
  "anthropic_api_key": "sk-ant-...",
  "openai_api_key": "sk-...",
  "openrouter_api_key": "sk-or-...",
  "cerebras_api_key": "...",
  "ollama_base_url": "http://localhost:11434",
  "exa_api_key": "...",
  "voyage_api_key": "...",
  "hibp_api_key": "...",
  "github_token": "ghp_...",
  "fec_api_key": "...",
  "opencorporates_api_key": "..."
}
```
All fields are optional. Unknown keys are silently ignored.
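The "unknown keys are silently ignored" behavior can be sketched like this — an illustrative loader, not Redshank's actual parser; the key set mirrors the table above:

```python
import json

KNOWN_KEYS = {
    "anthropic_api_key", "openai_api_key", "openrouter_api_key",
    "cerebras_api_key", "ollama_base_url", "exa_api_key",
    "voyage_api_key", "hibp_api_key", "github_token",
    "fec_api_key", "opencorporates_api_key",
}


def load_credentials(text: str) -> dict:
    """Parse a credentials.json payload, dropping unrecognised keys silently."""
    raw = json.loads(text)
    return {k: v for k, v in raw.items() if k in KNOWN_KEYS}


creds = load_credentials('{"openai_api_key": "sk-...", "typo_key": "x"}')
# -> {"openai_api_key": "sk-..."}; "typo_key" is silently dropped
```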
## settings.json

Persistent settings live in `<workspace>/.redshank/settings.json`. Copy `settings.example.json` to get started:

```sh
cp settings.example.json .redshank/settings.json
```
```json
{
  "default_model": "claude-sonnet-4-20250514",
  "default_reasoning_effort": "medium",
  "default_model_anthropic": "claude-sonnet-4-20250514",
  "default_model_openai": "gpt-4o",
  "default_model_openrouter": "anthropic/claude-sonnet-4",
  "default_model_cerebras": "llama-3.3-70b",
  "default_model_ollama": "llama3.2"
}
```
`default_reasoning_effort` accepts `low`, `medium`, or `high`. Per-provider model names override the global `default_model` fallback for that provider only.
## Stygian fallback probe

When the stygian feature is enabled (built with `--features redshank-fetchers/stygian`),
the CLI probes the stygian-mcp server at TUI startup using `detect_stygian_availability`.
The probe uses hardcoded defaults:
| Field | Default |
|---|---|
| `endpoint_url` | `http://127.0.0.1:8787/health` |
| `timeout_ms` | 1500 |
| `retries` | 1 |
Configuration of the probe endpoint via `settings.json` is planned for a future release.
The current stygian availability is reflected in the TUI footer: ▲ (green) = available,
▼ (red) = down, ? (gray) = probe not yet run.
See Stygian Fallback for setup instructions, troubleshooting, and licensing-boundary rationale.