# Providers
SwarmClaw supports 15 built-in LLM providers plus custom endpoints. Each agent can use a different provider.
## CLI Providers
CLI providers spawn a local binary to handle chat. They manage their own tools natively — no need to configure tools in SwarmClaw.
| Provider | Binary | Notes |
|---|---|---|
| Claude Code CLI | claude | Spawns with --print --output-format stream-json. Includes auth preflight plus clearer timeout/exit diagnostics. Requires Claude Code installed. |
| OpenAI Codex CLI | codex | Spawns with --full-auto --skip-git-repo-check. Includes login preflight and streamed CLI error events. Requires Codex CLI installed. |
| OpenCode CLI | opencode | Spawns with -p flag. Multi-model support. Requires OpenCode installed. |
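As a rough sketch (hypothetical helper names, not SwarmClaw's actual internals), a CLI provider builds an argv with the documented flags and parses each stdout line as a standalone JSON event. Passing the prompt as a positional argument is an assumption here:

```python
import json

def build_claude_argv(prompt: str) -> list[str]:
    # Flags from the table above; 'claude' must be on PATH in a real run.
    return ["claude", "--print", "--output-format", "stream-json", prompt]

def parse_stream_json(lines):
    """Treat each non-empty stdout line as one standalone JSON event (NDJSON)."""
    events = []
    for line in lines:
        line = line.strip()
        if line:
            events.append(json.loads(line))
    return events
```

In a real spawn, the argv would be handed to a subprocess and `parse_stream_json` fed the child's stdout line by line.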
## API Providers
API providers make direct HTTP calls. All require an API key.
| Provider | Endpoint | Models |
|---|---|---|
| Anthropic | api.anthropic.com | Claude Sonnet 4.6, Opus 4.6, Haiku 4.5 |
| OpenAI | api.openai.com | GPT-4o, GPT-4.1, o3, o4-mini |
| Google Gemini | generativelanguage.googleapis.com | Gemini 2.5 Pro, Flash, Flash Lite |
| DeepSeek | api.deepseek.com | DeepSeek Chat, Reasoner |
| Groq | api.groq.com | Llama 3.3 70B, DeepSeek R1, Qwen QWQ |
| Together AI | api.together.xyz | Llama 4 Maverick, DeepSeek R1, Qwen 2.5 |
| Mistral AI | api.mistral.ai | Mistral Large, Small, Magistral, Devstral |
| xAI (Grok) | api.x.ai | Grok 3, Grok 3 Fast, Grok 3 Mini |
| Fireworks AI | api.fireworks.ai | DeepSeek R1, Llama 3.3 70B, Qwen 3 |
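For the OpenAI-compatible providers in the table, a request is a POST to the provider's chat-completions path with a bearer token. A minimal sketch that only builds the request pieces (Anthropic and Gemini use their own header and path schemes, so this is not universal):

```python
import json

def chat_request(base_url: str, api_key: str, model: str, messages):
    """Assemble URL, headers, and body for an OpenAI-style chat call.
    Sending is left to any HTTP client."""
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages})
    return url, headers, body
```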
## Local & Remote
| Provider | Type | Notes |
|---|---|---|
| Ollama | Local/Cloud | Connects to localhost:11434. No API key needed. 50+ models including Qwen, Llama, DeepSeek, GLM. |
| OpenClaw | Gateway Profiles | Use named gateway profiles, default routing, and per-agent overrides via the bundled CLI. See OpenClaw Gateway Setup. |
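A local Ollama call needs no key: POST to the server's `/api/chat` endpoint on port 11434. A standard-library sketch that constructs the request without sending it (the model name is a placeholder):

```python
import json
import urllib.request

def ollama_chat_request(model: str, messages, host: str = "http://localhost:11434"):
    # No API key needed for a local Ollama server.
    body = json.dumps({"model": model, "messages": messages, "stream": False}).encode()
    return urllib.request.Request(
        host + "/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```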
## OpenClaw
OpenClaw is an open-source autonomous AI agent that runs on your own devices — with shell access, browser control, scheduling, and multi-channel messaging. SwarmClaw includes the openclaw CLI as a bundled dependency, so no separate install is needed.
In the agent editor, OpenClaw is enabled through the OpenClaw Gateway toggle rather than the normal provider/model dropdown. During onboarding, you can also build OpenClaw-backed starter agents directly from the setup wizard.
### Gateway Profiles
OpenClaw gateways can now be managed centrally from Providers:
- Create named gateway profiles with endpoint, credential, notes, and tags
- Use Smart Deploy to launch a local OpenClaw runtime, generate a preconfigured remote bundle, or push the official Docker bundle over SSH before saving the profile
- Mark one profile as the default for new OpenClaw-backed agents
- Run discovery and WebSocket-first health checks from the control plane, with optional HTTP `/v1` compatibility status shown separately
- Import/export gateway configs from the editor and clone saved gateways from the Providers screen
- Review external agent runtimes that register and heartbeat into SwarmClaw
Agents can inherit the default profile, select another saved profile, or use a direct gateway URL/token override when needed.
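That resolution order (direct override first, then a selected profile, then the default) can be sketched as follows; the dictionary shapes are illustrative, not SwarmClaw's actual schema:

```python
def resolve_gateway(agent: dict, profiles: dict) -> dict:
    """Pick the gateway an agent should use: a direct URL/token override
    wins, then a named profile, then whichever profile is marked default."""
    if agent.get("gateway_url"):  # direct override
        return {"url": agent["gateway_url"], "token": agent.get("gateway_token")}
    name = agent.get("gateway_profile")
    if name and name in profiles:
        return profiles[name]
    for profile in profiles.values():
        if profile.get("default"):
            return profile
    raise LookupError("no gateway profile configured")
```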
Smart Deploy supports official local bring-up, VPS bundles for major providers, hosted repo-backed templates, safer exposure presets, and SSH-managed remote lifecycle controls. See OpenClaw Setup for the full flow.
### Fleet Visibility
The Providers screen also acts as the swarm overview for OpenClaw-backed runtimes:
- Gateway cards show deploy method, route hints, node/device counts, verification state, and runtime counts
- External runtime cards show lifecycle state, gateway assignment, version, tags, last seen, and health notes
- Operators can activate, drain, cordon, or restart registered runtimes directly from the runtime cards
This makes it easier to manage a swarm of local and remote OpenClaw workers from one control plane.
### Connecting an Agent to OpenClaw
- Optional: create or select a named gateway profile in Providers
- Create or edit an agent in SwarmClaw
- Toggle OpenClaw Gateway ON
- Select the saved profile or enter a direct gateway URL (e.g. `http://192.168.1.50:18789` or `https://my-vps:18789`)
- Add a gateway token if the remote gateway requires authentication
- Click Connect — if the device needs approval, use Approve in Dashboard to open the gateway, then Retry Connection
### Swarm of OpenClaws
Each agent can target its own gateway profile or direct endpoint. This means you can:
- Run specialized OpenClaw agents on different machines (one for code, one for research, one for ops)
- Mix local and remote gateways in the same dashboard
- Assign tasks to specific remote agents via the task board
- Use orchestrators to coordinate work across multiple OpenClaw instances
- Bridge remote OpenClaw agents to chat platforms via connectors
This is the core SwarmClaw use case: a single control plane for your swarm of autonomous agents.
## Custom Providers
Add any OpenAI-compatible endpoint as a custom provider. This works with:
- OpenRouter — `https://openrouter.ai/api/v1`
- Local vLLM — `http://localhost:8000/v1`
- Any other OpenAI-compatible API
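A custom provider boils down to a base URL plus model names and an optional credential. A sketch with illustrative field and model names (not SwarmClaw's actual schema); every OpenAI-compatible base URL shares the same chat-completions path:

```python
# Hypothetical shape of a custom provider record (field names are illustrative).
custom_provider = {
    "name": "openrouter",
    "base_url": "https://openrouter.ai/api/v1",  # any OpenAI-compatible base
    "models": ["some/model-a", "some/model-b"],  # placeholder model names
    "credential": "openrouter-key",              # linked credential, if required
}

def chat_completions_url(provider: dict) -> str:
    """All OpenAI-compatible providers expose the same chat endpoint path."""
    return provider["base_url"].rstrip("/") + "/chat/completions"
```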
### Adding a Custom Provider
- Navigate to Providers in the sidebar
- Click New Provider
- Enter: name, base URL, and model names
- Link a credential (API key) if required
- Save
The custom provider appears in agent and chat dropdowns immediately.
## Model Selection
When a provider is configured, SwarmClaw can populate the model dropdown from the provider’s available model list. For OpenAI, for example, the model picker can automatically offer the currently available OpenAI models.
You are not limited to the fetched list:
- If OpenAI releases a newer model before the list is refreshed, you can still type it in manually
- If you use a custom or internal model name, you can add that manually as well
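In other words, the dropdown is the union of the fetched list and whatever you type in. A minimal sketch of that merge (the helper name is hypothetical):

```python
def model_options(fetched: list[str], manual: list[str]) -> list[str]:
    """Fetched models first, then any manually entered names not already present."""
    seen = set(fetched)
    extras = [m for m in manual if m not in seen]
    return fetched + extras
```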
## Credentials
API keys are encrypted with AES-256-GCM before storage. Add credentials from the provider setup or from the credential manager in Settings.
## Model Failover
Agents can be configured with fallback credentials to handle provider failures gracefully. When a request fails with a 401 (unauthorized), 429 (rate limited), or 500 (server error), SwarmClaw automatically retries with the next credential in the fallback chain.
### How It Works
- The agent sends a request using its primary credential
- If the provider returns a 401, 429, or 500 error, the request is retried with the next fallback credential
- Retries continue through the fallback list until one succeeds or all are exhausted
- Failed credentials are temporarily marked as degraded to avoid repeated failures
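The retry loop above can be sketched as follows; the names and shapes are illustrative, not SwarmClaw's actual implementation:

```python
class ProviderError(Exception):
    def __init__(self, status: int):
        super().__init__(f"provider returned {status}")
        self.status = status

RETRYABLE = {401, 429, 500}  # unauthorized, rate limited, server error

def send_with_failover(send, credentials, degraded: set):
    """Try each credential in priority order; a retryable status marks the
    failed credential degraded and advances to the next one in the chain."""
    last = None
    for cred in credentials:
        if cred in degraded:
            continue  # skip credentials already marked degraded
        try:
            return send(cred)
        except ProviderError as err:
            if err.status not in RETRYABLE:
                raise  # non-retryable errors propagate immediately
            degraded.add(cred)
            last = err
    raise last or LookupError("no usable credentials")
```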
### Configuration
Assign fallback credentials in the agent editor under Fallback Credentials. You can add multiple credentials from the same or different providers. The order determines retry priority.
See Failover for full details.
## Provider Limitations
- CLI providers (Claude Code, Codex, OpenCode) cannot be used as orchestrator engines (no LangGraph support). They handle tools natively through their own CLI.
- In the Agent editor, CLI providers and OpenClaw agents do not show local Tools/Platform toggles, because capabilities are managed by the provider runtime itself.
- OpenClaw runs tools on the remote gateway, not locally — SwarmClaw streams the results via the bundled CLI.
- Ollama requires the Ollama server running locally (or a cloud Ollama URL).
- Custom providers must be OpenAI-compatible (chat completions endpoint).