Claw-Family AI Agent Integration

Connect OpenClaw, ZeroClaw, Nanobot, and other Claw-family AI agents to Swan Inference
Swan Inference provides an OpenAI-compatible API that works with all Claw-family AI agent tools. This guide covers how to connect each tool to Swan Inference for decentralized AI inference.
Prerequisites:

- A Swan Inference API key (keys start with sk-swan-)
- The Claw tool of your choice installed
All Claw-family tools support custom OpenAI-compatible endpoints. The core configuration is the same across all tools:
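As a sketch, most OpenAI-compatible tools can be pointed at Swan Inference through the common OPENAI_* environment variables (the variable names below are the usual convention, not something every Claw tool is guaranteed to read; check each tool's documentation for its exact setting names):

```shell
# Point an OpenAI-compatible tool at Swan Inference.
# OPENAI_BASE_URL / OPENAI_API_KEY are the common convention;
# your tool may use different variable or config names.
export OPENAI_BASE_URL="https://inference.swanchain.io/v1"   # note the /v1 suffix
export OPENAI_API_KEY="sk-swan-YOUR-KEY"                     # placeholder key
```

Tools that honor these variables will then send their chat-completion requests to Swan Inference instead of the default OpenAI endpoint.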
Popular models available on Swan Inference include Qwen/Qwen2.5-7B-Instruct; query the /v1/models endpoint (see Troubleshooting below) for the full, current list.
OpenClaw (TypeScript, 250K+ stars)
The most popular AI assistant in the Claw family. Supports 25+ messaging platforms.
Edit your OpenClaw configuration to add Swan Inference as a provider:
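A hypothetical provider entry is shown below; the key names are placeholders for illustration, as the exact schema depends on your OpenClaw version:

```json
{
  "providers": {
    "swan-inference": {
      "baseUrl": "https://inference.swanchain.io/v1",
      "apiKey": "sk-swan-YOUR-KEY",
      "model": "Qwen/Qwen2.5-7B-Instruct"
    }
  }
}
```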
ZeroClaw (Rust, 28K+ stars)
Ultra-lightweight (3.4MB binary, <10ms startup). Best for quick testing.
PicoClaw (Go, 26K+ stars)
Lightweight Go binary (<8MB). Designed for edge and ARM devices.
Nanobot (Python, 32K+ stars)
Python-native, installable via pip. Best for Python developers.
NanoClaw (TypeScript, 25K+ stars)
Container-per-session isolation. Good for multi-tenant deployments.
IronClaw (Rust, 10K+ stars)
Security-focused with TEE and encrypted vault. Built by NEAR AI.
NemoClaw (TypeScript + Python, 15K+ stars)
NVIDIA's reference stack with kernel sandbox and privacy router.
Configure the inference provider in your NemoClaw deployment:
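As an illustrative sketch (the field names are assumptions, not NemoClaw's documented schema), the deployment points its inference provider at the Swan endpoint:

```json
{
  "inference": {
    "provider": "openai-compatible",
    "baseUrl": "https://inference.swanchain.io/v1",
    "apiKey": "sk-swan-YOUR-KEY"
  }
}
```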
Moltworker (TypeScript, Cloudflare Workers)
Runs on Cloudflare's edge network (330+ cities).
Alternatively, route requests through Cloudflare AI Gateway, pointing the gateway at the Swan Inference endpoint.
NullClaw (Zig, 6.7K+ stars)
Ultra-minimal (678KB binary, 1MB RAM). For IoT and embedded devices.
Or in config:
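For example (field names are illustrative; NullClaw's actual config format may differ):

```json
{
  "endpoint": "https://inference.swanchain.io/v1",
  "api_key": "sk-swan-YOUR-KEY",
  "model": "Qwen/Qwen2.5-7B-Instruct"
}
```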
Playground (No API Key)
All tools can also use Swan Inference's public playground for testing without an API key.
The playground is rate-limited (5 requests/hour per IP) with restricted models, but requires no signup.
Troubleshooting
Connection refused / timeout

- Verify the base URL includes /v1: https://inference.swanchain.io/v1
- Check that your API key starts with sk-swan-
Model not found

- List available models: curl https://inference.swanchain.io/v1/models -H "Authorization: Bearer sk-swan-YOUR-KEY"
- Model IDs are case-sensitive (e.g., Qwen/Qwen2.5-7B-Instruct, not qwen2.5-7b)
Rate limited (429)

- The default rate limit is 200 requests/min for LLM models
- Check the X-RateLimit-Remaining header in responses
- Consider upgrading to a Pro subscription ($6/month) for higher limits
Streaming not working

- Ensure your tool is configured for streaming (stream: true)
- Swan Inference supports SSE streaming on /v1/chat/completions
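For reference, a streaming chat request body in the standard OpenAI-compatible shape (using the model ID from the example above) looks like:

```json
{
  "model": "Qwen/Qwen2.5-7B-Instruct",
  "messages": [{ "role": "user", "content": "Hello from a Claw agent" }],
  "stream": true
}
```

POST this to https://inference.swanchain.io/v1/chat/completions with an Authorization: Bearer sk-swan-... header; the response is delivered as a stream of SSE chunks.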