Claw-Family AI Agent Integration

Connect OpenClaw, ZeroClaw, Nanobot, and other Claw-family AI agents to Swan Inference

Swan Inference provides an OpenAI-compatible API that works with all Claw-family AI agent tools. This guide covers how to connect each tool to Swan Inference for decentralized AI inference.

Prerequisites

  1. A Swan Inference API key (sk-swan-*) — sign up to obtain one

  2. The Claw tool of your choice installed

Quick Start

All Claw-family tools support custom OpenAI-compatible endpoints. The core configuration is the same across all tools:

| Setting  | Value                               |
| -------- | ----------------------------------- |
| Base URL | `https://inference.swanchain.io/v1` |
| API Key  | `sk-swan-YOUR-API-KEY`              |

Popular models available on Swan Inference:

| Model                               | Category | Use Case                    |
| ----------------------------------- | -------- | --------------------------- |
| `deepseek-r1-distill-llama-70b`     | LLM      | Reasoning, code generation  |
| `Qwen/Qwen2.5-7B-Instruct`          | LLM      | General chat, fast responses |
| `meta-llama/Llama-3.3-70B-Instruct` | LLM      | General purpose             |
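Because the endpoint is OpenAI-compatible, a plain curl call is enough to verify connectivity before wiring up any tool (substitute a real key):

```shell
curl https://inference.swanchain.io/v1/chat/completions \
  -H "Authorization: Bearer sk-swan-YOUR-API-KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen/Qwen2.5-7B-Instruct",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}]
  }'
```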


Tool-Specific Setup

OpenClaw (TypeScript, 250K+ stars)

The most popular AI assistant in the Claw family. Supports 25+ messaging platforms.

Edit your OpenClaw configuration to add Swan Inference as a provider:
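OpenClaw's exact provider schema is not reproduced here; as a sketch, assuming it accepts OpenAI-compatible provider entries with base-URL and API-key fields, the addition would look roughly like:

```json
{
  "providers": {
    "swan-inference": {
      "type": "openai-compatible",
      "baseUrl": "https://inference.swanchain.io/v1",
      "apiKey": "sk-swan-YOUR-API-KEY",
      "defaultModel": "deepseek-r1-distill-llama-70b"
    }
  }
}
```

All key names above are illustrative — check OpenClaw's provider documentation for the exact schema.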


ZeroClaw (Rust, 28K+ stars)

Ultra-lightweight (3.4MB binary, <10ms startup). Best for quick testing.
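A minimal sketch for pointing ZeroClaw at Swan Inference, assuming it honors the standard OpenAI-style environment variables (many OpenAI-compatible CLIs do; verify against ZeroClaw's docs):

```shell
# Assumption: ZeroClaw reads OPENAI_BASE_URL / OPENAI_API_KEY.
export OPENAI_BASE_URL="https://inference.swanchain.io/v1"
export OPENAI_API_KEY="sk-swan-YOUR-API-KEY"
# Then start a session, e.g. (command and flag names illustrative):
#   zeroclaw chat --model Qwen/Qwen2.5-7B-Instruct
```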


PicoClaw (Go, 26K+ stars)

Lightweight Go binary (<8MB). Designed for edge and ARM devices.
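PicoClaw's config format is not shown in this guide; a hypothetical YAML sketch (every key name is illustrative) would be:

```yaml
# Hypothetical PicoClaw provider config; key names are illustrative.
provider: openai-compatible
base_url: https://inference.swanchain.io/v1
api_key: sk-swan-YOUR-API-KEY
model: Qwen/Qwen2.5-7B-Instruct   # a small model suits edge/ARM targets
```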


Nanobot (Python, 32K+ stars)

Python-native, installable via pip. Best for Python developers.
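Since Nanobot is pip-installable, setup can be sketched as environment configuration, assuming it honors OpenAI-style variables (the PyPI package name is inferred from the tool name and may differ):

```shell
# Package name assumed from the tool name; may differ on PyPI:
#   pip install nanobot
export OPENAI_BASE_URL="https://inference.swanchain.io/v1"
export OPENAI_API_KEY="sk-swan-YOUR-API-KEY"
```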


NanoClaw (TypeScript, 25K+ stars)

Container-per-session isolation. Good for multi-tenant deployments.
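With container-per-session isolation, provider credentials are most naturally injected into each session container as environment variables; a sketch (image name and variable names are all illustrative):

```shell
# Hypothetical: pass provider settings into each session container.
# The image name and variable names are illustrative; check NanoClaw's docs.
docker run --rm \
  -e PROVIDER_BASE_URL="https://inference.swanchain.io/v1" \
  -e PROVIDER_API_KEY="sk-swan-YOUR-API-KEY" \
  nanoclaw/session:latest
```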


IronClaw (Rust, 10K+ stars)

Security-focused with TEE and encrypted vault. Built by NEAR AI.
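A hypothetical TOML sketch (IronClaw is Rust-based, so TOML is a plausible but unconfirmed config format; all section and key names are illustrative). Given the encrypted vault, the API key would ideally live there rather than in plaintext:

```toml
# Hypothetical IronClaw config; section and key names are illustrative.
[provider]
kind = "openai-compatible"
base_url = "https://inference.swanchain.io/v1"
# Prefer storing the key in IronClaw's encrypted vault; plaintext
# shown only for illustration:
api_key = "sk-swan-YOUR-API-KEY"
```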


NemoClaw (TypeScript + Python, 15K+ stars)

NVIDIA's reference stack with kernel sandbox and privacy router.

Configure the inference provider in your NemoClaw deployment:
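Since NemoClaw is in alpha, treat this as a rough YAML sketch only (all key names are illustrative):

```yaml
# Hypothetical NemoClaw inference-provider block; names illustrative.
inference:
  provider: openai-compatible
  base_url: https://inference.swanchain.io/v1
  api_key: ${SWAN_API_KEY}    # e.g. injected from the environment
  model: meta-llama/Llama-3.3-70B-Instruct
```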

Note: NemoClaw is in early preview (alpha). Configuration may change.


Moltworker (TypeScript, Cloudflare Workers)

Runs on Cloudflare's edge network (330+ cities).
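On Cloudflare Workers, the API key belongs in a Worker secret rather than in code; a sketch using standard wrangler tooling (the secret and variable names Moltworker expects are assumptions):

```shell
# Store the key as a Worker secret (secret name is illustrative):
npx wrangler secret put SWAN_API_KEY
# Set the base URL as a plain variable in wrangler.toml, e.g.:
#   [vars]
#   SWAN_BASE_URL = "https://inference.swanchain.io/v1"
```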

Or set via Cloudflare AI Gateway:
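Cloudflare AI Gateway can sit in front of an OpenAI-compatible endpoint; with a gateway configured to forward to Swan Inference, the base URL takes roughly this shape (account and gateway IDs are placeholders, and the trailing path depends on your gateway setup):

```
https://gateway.ai.cloudflare.com/v1/<ACCOUNT_ID>/<GATEWAY_NAME>/...
```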


NullClaw (Zig, 6.7K+ stars)

Ultra-minimal (678KB binary, 1MB RAM). For IoT and embedded devices.
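A hypothetical command-line sketch (the flag names are illustrative):

```shell
# Flag names are illustrative; check NullClaw's --help.
nullclaw --api-base "https://inference.swanchain.io/v1" \
         --api-key  "sk-swan-YOUR-API-KEY" \
         --model    "Qwen/Qwen2.5-7B-Instruct"
```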

Or in config:
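The same settings as a file-based sketch (the format and key names are assumptions):

```toml
# Hypothetical NullClaw config; key names are illustrative.
api_base = "https://inference.swanchain.io/v1"
api_key  = "sk-swan-YOUR-API-KEY"
model    = "Qwen/Qwen2.5-7B-Instruct"
```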

Note: For resource-constrained devices, use a smaller model like Qwen/Qwen2.5-7B-Instruct for faster responses.


Choosing the Right Tool

| Scenario            | Recommended Tool | Why                                |
| ------------------- | ---------------- | ---------------------------------- |
| Quick local testing | ZeroClaw         | 3.4MB, boots in <10ms              |
| Python project      | Nanobot          | pip install, native Python API     |
| Multi-platform bot  | OpenClaw         | 25+ messaging platforms            |
| Edge / ARM device   | PicoClaw         | <8MB Go binary, runs anywhere      |
| IoT / embedded      | NullClaw         | 678KB, runs on $5 hardware         |
| Security-critical   | IronClaw         | TEE, encrypted vault, WASM sandbox |
| Multi-tenant SaaS   | NanoClaw         | Container-per-session isolation    |
| Enterprise / NVIDIA | NemoClaw         | Kernel sandbox, policy enforcement |
| Global edge deploy  | Moltworker       | Cloudflare Workers, 330+ cities    |

Playground (No API Key)

All tools can also use Swan Inference's public playground to test without an API key.

The playground is rate-limited (5 requests/hour per IP) with restricted models, but requires no signup.

Troubleshooting

Connection refused / timeout

  • Verify the base URL includes /v1: https://inference.swanchain.io/v1

  • Check your API key starts with sk-swan-

Model not found

  • List available models: curl https://inference.swanchain.io/v1/models -H "Authorization: Bearer sk-swan-YOUR-KEY"

  • Model IDs are case-sensitive (e.g., Qwen/Qwen2.5-7B-Instruct, not qwen2.5-7b)

Rate limited (429)

  • Default rate limit is 200 requests/min for LLM models

  • Check X-RateLimit-Remaining header in responses

  • Consider upgrading to Pro subscription ($6/month) for higher limits

Streaming not working

  • Ensure your tool is configured for streaming (stream: true)

  • Swan Inference supports SSE streaming on /v1/chat/completions
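To confirm streaming independently of any tool, request SSE directly with curl (`-N` disables output buffering so events print as they arrive):

```shell
curl -N https://inference.swanchain.io/v1/chat/completions \
  -H "Authorization: Bearer sk-swan-YOUR-API-KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen/Qwen2.5-7B-Instruct",
    "stream": true,
    "messages": [{"role": "user", "content": "Count to five."}]
  }'
```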

Learn More
