OpenClaw Use Cases and Best Practices
Document compiled: Feb 24, 2026
Based on web research and official docs. Covers: use cases, production best practices and security, Skills and ClawHub, on-prem/private deployment, LLM routing (local and remote), and multi-agent / agent-to-agent calls.
1. Use Cases
1.1 Social and information management
| Scenario | Description |
|---|---|
| X/Twitter account analysis | Analyze posting style, engagement, follower profile |
| Daily Reddit Digest | Auto-summarize favorite subreddits into a daily digest |
| Multi-channel personal assistant | Unified task routing and reminders across Telegram, Slack, email, calendar |
1.2 Creative and content production
| Scenario | Description |
|---|---|
| YouTube content pipeline | Automate topic selection, asset organization, progress tracking |
| Mini-App Builder | Overnight generation of mini-app prototypes |
| Social media content ops | Manage X and LinkedIn, auto-trends, drafting and publishing |
1.3 Office productivity
| Scenario | Description |
|---|---|
| Email management | Filter important mail, summarize newsletters, categorize and follow up |
| Personal CRM | Build relationship and reminder management from email and calendar |
| Feishu group management | Auto-welcome, broadcast, and moderate group Q&A |
1.4 Health and life management
| Scenario | Description |
|---|---|
| Health symptom tracker | Track metrics, medication and follow-up reminders |
| Personal and family scheduling | Shared family calendar and task allocation |
1.5 Research and knowledge management
| Scenario | Description |
|---|---|
| Personal knowledge base (RAG) | Searchable knowledge base combined with chat |
| AI earnings tracker | Auto-collect earnings reports and generate summaries |
1.6 Development and operations
| Scenario | Description |
|---|---|
| Remote code debugging | Debug and complete code from phone/tablet (e.g. commute) |
| Browser automation | Deploy and test web apps, fill forms, screenshot, export PDF |
| “Big tasks in the chat” | Site rebuilds, car negotiation, bug fixes via conversation |
1.7 Summary of advantages
- Non-technical users: No coding; complex tasks via natural language or voice
- Cross-platform: Feishu, Telegram, email, calendar as a single entry point
- Long-term memory: Learns habits over time and improves efficiency
- Access control: Roles and approval for sensitive actions
2. Production Best Practices and Security
2.1 Security landscape (2026)
- Public research shows many instances exposed on the internet with default configuration; production deployments must be hardened.
- OpenClaw has real capabilities (credentials, network access, files, Shell) and must be secured to production standards.
2.2 Network security
| Item | Recommendation |
|---|---|
| Bind address | Bind gateway to 127.0.0.1, not 0.0.0.0; accept only local or reverse-proxy traffic |
| mDNS | Disable mDNS discovery to avoid being discovered on the LAN |
| Control UI | Disable or put behind a reverse proxy with authentication |
| Reverse proxy | Use nginx / Caddy / Traefik for TLS termination and HTTPS |
| Gateway auth | Enable 256-bit token auth; all requests must carry a valid token |
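Any CSPRNG can produce a suitable 256-bit token; a minimal Python sketch (how the token is then supplied to the gateway is configuration-specific and not shown here):

```python
import secrets

# 32 random bytes = 256 bits, hex-encoded as a 64-character string
token = secrets.token_hex(32)
print(token)
```

`secrets` (not `random`) is the right module here because it draws from the OS CSPRNG.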
2.3 Credentials and secrets
| Item | Recommendation |
|---|---|
| Storage | No plaintext; use env vars or a secrets manager (e.g. AWS Secrets Manager, HashiCorp Vault) |
| Rotation | Rotate API keys and DB passwords every ~90 days |
| Permissions | Least privilege; only grant necessary API keys |
| Monitoring | Usage and billing alerts; investigate anomalies |
2.4 Agent and Skills configuration
| Item | Recommendation |
|---|---|
| System prompt | Clear scope, prohibitions, escalation, output format and tone |
| Skills | Minimal set; enable only what the workflow needs |
| Memory | Clear partitioning, periodic cleanup; version important memory |
2.5 Audit and operations
| Item | Recommendation |
|---|---|
| Audit logs | Log conversations, external API calls, Shell, file ops, tool calls |
| Log format | Structured for search and tracing |
| Retention | At least 90 days; 365 for enterprise |
| Egress | Domain allowlist (e.g. Squid) to limit outbound access |
| High-risk actions | File delete, email, Shell, etc. can require human approval |
| Containers | Run in Docker etc.; non-root user |
2.6 Deployment options
- Managed / one-click hardening: e.g. Clawctl for fast hardening (~60s).
- Self-hosted: Implement the checklist above; ~2–40+ hours depending on control needs.
3. Skills Overview
3.1 Capability layers
| Type | Description |
|---|---|
| Built-in tools | Ship with OpenClaw (~8 base + 17 advanced: file, Shell, web search/browse, memory, vision); no extra install |
| Skills | AgentSkills-compliant extensions (SKILL.md + dir); install from ClawHub or local |
| Plugins (MCP) | MCP-based integrations to external systems |
3.2 Load order and priority
- Workspace skills: `./skills` (current workspace, highest)
- Managed/Local: `~/.openclaw/skills`
- Bundled: shipped with the install (lowest)
- Extra dirs: `skills.load.extraDirs` (lowest)
Same name: workspace overrides local, local overrides bundled.
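An extra skill directory would be declared under `skills.load.extraDirs`; a config sketch (the directory path is illustrative):

```json
{
  "skills": {
    "load": {
      "extraDirs": ["/opt/openclaw/shared-skills"]
    }
  }
}
```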
3.3 ClawHub and installation
- ClawHub: Official skill marketplace, clawhub.com; discover, install, update, backup skills.
- Ways to install:
  - Recommended: OpenClaw built-in Skills UI
  - CLI: `clawhub install <skill-name>`; list eligible skills with `openclaw skills list --eligible`
  - Chat: ask OpenClaw to “install XXX Skill”
- Commands: `clawhub sync --all`, `clawhub update --all`, `clawhub search "keyword"`
- Default install path: workspace `./skills`; takes effect in the next session.
3.4 Core Skills (getting started)
| # | Skill | Role |
|---|---|---|
| 1 | Tavily Search/Brave Search | Real-time search; works past the model’s knowledge cutoff |
| 2 | Agent Browser | Web automation: open, fill forms, screenshot, export PDF |
| 3 | Shell | Run terminal commands and scripts |
Then add channel skills (Telegram, Feishu, Slack, Discord, WhatsApp, etc.) as needed.
3.5 Top 10–20 (by tier)
Tier 1: ClawHub, Agent Browser, Brave/Tavily, Shell, Cron/Wake.
Tier 2: Telegram, Feishu, Slack, Discord, WhatsApp.
Tier 3: Image, weather, stocks, news, email, calendar, notes, RAG, smart home, etc.
3.6 Best practices
- Least privilege: Enable only skills needed for the workflow.
- Read before install: Treat third-party skills as untrusted; check code and permissions.
- Secrets: Use `skills.entries.<name>.env` / `apiKey`; never put secrets in prompts or logs.
- Sandbox: Use sandboxing for untrusted input or high-risk tools.
- Prefer built-in: Use built-in tools when they suffice.
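The secrets rule above might look like this in config; the skill name and the env-var reference style are assumptions, so check the Configuration Reference for the exact shape:

```json
{
  "skills": {
    "entries": {
      "tavily-search": {
        "apiKey": "${TAVILY_API_KEY}"
      }
    }
  }
}
```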
4. On-Prem and Private Deployment (No External LLM)
You can run 100% on-prem: no external LLM; all inference stays inside the network.
4.1 Why no external API is needed
- OpenClaw uses Model Providers; point the providers at on-prem inference for fully local/private inference.
- Any OpenAI-compatible service with a `/v1` endpoint can be a provider.
- With no cloud providers configured and outbound traffic blocked, no requests leave the network.
4.2 Common on-prem options
| Option | Description | Use case |
|---|---|---|
| Ollama | Local LLM runtime; one command to pull and serve; default http://127.0.0.1:11434/v1 | Dev/test, small team |
| LM Studio + MiniMax M2.1 | Official local stack; Responses API; large context | High quality and control |
| vLLM / LiteLLM | High-throughput or unified proxy; OpenAI-style /v1 | GPU cluster, model gateway |
| Custom gateway | Any proxy implementing OpenAI-compatible API | Strict control |
LiteLLM is a gateway (no model); vLLM and Ollama are inference engines. Combine as needed (e.g. LiteLLM + vLLM for production).
4.3 Config (on-prem only)
- Only on-prem providers in `models.providers` (e.g. `ollama`, `lmstudio`, `local`).
- No cloud providers: remove or comment out OpenAI, Anthropic, etc.
- Default model: set `agents.defaults.model.primary` to a local model (e.g. `ollama/llama3.2:latest`).
- No cloud fallbacks if you must never call out.
- Network: Firewall so OpenClaw can only reach internal inference endpoints.
4.4 Example: Ollama only
```json
{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://127.0.0.1:11434/v1",
        "apiKey": "ollama-local",
        "api": "openai-responses",
        "models": [{ "id": "llama3.2:latest" }]
      }
    },
    "defaults": {
      "provider": "ollama",
      "model": "llama3.2:latest"
    }
  }
}
```
Use `/v1` in `baseUrl`; for multi-machine setups, use the internal inference host (e.g. `http://192.168.1.100:11434/v1`). Ollama recommends at least a 64k-token context for local models.
4.5 Example: Internal LM Studio / custom v1
Set baseUrl to your internal service (e.g. http://192.168.1.100:1234/v1) and define the model in models with api: "openai-responses".
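Following the pattern of the Ollama example in 4.4, an internal LM Studio provider might be sketched like this (the provider name and model id are illustrative):

```json
{
  "models": {
    "providers": {
      "lmstudio": {
        "baseUrl": "http://192.168.1.100:1234/v1",
        "apiKey": "lmstudio-local",
        "api": "openai-responses",
        "models": [{ "id": "minimax-m2.1" }]
      }
    }
  }
}
```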
4.6 Hardware and security
- Hardware: One well-equipped machine (e.g. Mac Studio M3 Ultra 512GB) can run large context and moderate concurrency; two for HA or load sharing. For ~80 concurrent users see 4.6.1 (vLLM + A100 etc.).
- Security: Local models have no cloud content filter; use system prompt, compaction, and reduced agent permissions. See Section 2.
- Audit: Same as Section 2: audit logs, human approval for sensitive ops, egress filtering, containers, non-root.
4.6.1 ~80 concurrent users, local model
- Prefer vLLM (or LiteLLM in front of vLLM) for production; single-node Ollama is not ideal for 80+ concurrent.
- 7B/13B: 2× A100 40GB or 1–2× A100 80GB with vLLM.
- 30B–70B: 2× A100 80GB (NVLink) or 4× A100; for 50+ concurrent consider 2× H100 80GB or 4× A100 80GB.
- Single 24GB card (e.g. RTX 4090): light load or <10 users only.
4.6.2 Recommended local models (OpenClaw / Agent)
Community consensus: <14B not recommended; 14B–32B minimum, 32B+ more stable for tools.
| Model | Notes | VRAM | Use |
|---|---|---|---|
| Qwen3-Coder 32B | Most stable tool use; Ollama: qwen3-coder:32b | 24–32GB | First choice for agent/code |
| Qwen3 32B | Strong general + long context; qwen3:32b | 24–32GB | General + tools |
| Qwen3 72B | Near cloud quality | 48GB+ | When hardware allows |
| GLM-4.7-Flash | 30B; precise tool use | 24–32GB | Alternative to Qwen3-Coder 32B |
| DeepSeek-R1 32B | Strong reasoning and code | 24–32GB | Complex reasoning |
| Llama 3.3 70B | Multi-purpose and tools | 48GB+ | Multi-GPU or 80GB |
| GPT-OSS 20B/120B | Agent-oriented, clean tool format | 24GB / 40GB+ | Agent-focused |
Qwen3.5 (Feb 2026): MoE 397B-A17B is on HuggingFace; full precision is very large; wait for MoE/quant support in vLLM/Ollama. For now prefer Qwen3 32B / Qwen3-Coder 32B or GLM-4.7 locally.
By VRAM: 8–16GB → qwen3-coder:14b; 24–32GB → qwen3-coder:32b or glm-4.7-flash; 40GB+ → qwen3:72b, gpt-oss:120b, Llama 3.3 70B. Use temperature 0–0.2 and 32k+ context where possible.
4.7 References
- Local Models (docs; also available in Chinese)
- Ollama (OpenClaw docs); Ollama model library
- Configuration Reference
- vLLM, LiteLLM
- Qwen, Qwen3.5
5. LLM Routing: Local vs Remote by Scenario
OpenClaw has several model-selection mechanisms; not all support “choose local vs external by scenario”.
5.1 Primary + fallbacks — not by scenario
The same agent can have a primary model plus fallbacks; fallbacks are used only when the primary fails (timeout, rate limit, auth error).
- Does not: Choose local vs remote by task type or sensitivity; only failover.
- Config: `agents.defaults.model.primary`, `agents.defaults.model.fallbacks`; requires multiple providers and `models.mode: "merge"`.
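A minimal sketch of the failover setup using those keys (model ids are illustrative):

```json
{
  "models": { "mode": "merge" },
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/llama3.2:latest",
        "fallbacks": ["openai/gpt-4o"]
      }
    }
  }
}
```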
5.2 Multi-agent + bindings — by entry only
Bindings route by channel / accountId / peer to different agents (each with its own default model).
- Does: Fix “this channel always uses this model” (e.g. WhatsApp → Ollama, Telegram → GPT).
- Does not: Decide per message by content or task type.
- Config: `agents.list[].model`; `bindings` with `channel` / `accountId` / `peer`.
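A sketch of entry-based routing with two agents (the exact binding field names are an assumption; see the Configuration Reference):

```json
{
  "agents": {
    "list": [
      { "id": "home", "model": "ollama/llama3.2:latest" },
      { "id": "work", "model": "openai/gpt-4o" }
    ]
  },
  "bindings": [
    { "channel": "whatsapp", "agent": "home" },
    { "channel": "telegram", "agent": "work" }
  ]
}
```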
5.3 True “by scenario” (content-aware) choice
| Option | Description |
|---|---|
| Plugin `before_prompt_build` → `modelOverride` | Community PR: return `modelOverride` from the hook to pick a model per request; you implement the logic. See plugins, Hooks. |
| Webhook with `model` | Your system calls e.g. POST /hooks/agent with `model` in the body; the caller does the scenario logic. |
| External router | Layer in front of OpenClaw decides local vs cloud and forwards to the right model/agent. |
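The external-router option can be as simple as a keyword rule sitting in front of OpenClaw; a hedged sketch (the keyword list, function name, and model ids are illustrative, not OpenClaw APIs):

```python
# Content-aware routing sketch: sensitive text stays on the local model,
# everything else may go to a cloud model. How the chosen id is forwarded
# (webhook body, agent binding, etc.) depends on your deployment.
SENSITIVE_KEYWORDS = ("password", "patient", "contract", "salary")

def pick_model(message: str) -> str:
    """Return a model id: local for sensitive content, cloud otherwise."""
    text = message.lower()
    if any(word in text for word in SENSITIVE_KEYWORDS):
        return "ollama/llama3.2:latest"  # stays inside the network
    return "openai/gpt-4o"               # external cloud model

print(pick_model("Please summarize this contract"))
```

Real routers usually go beyond keywords (classifiers, data-loss-prevention scans), but the interface is the same: decide a model id per request, then forward.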
5.4 Summary
| Method | “By scenario” (local vs external)? | Notes |
|---|---|---|
| Primary + fallbacks | No | Failover only |
| Multi-agent + bindings | By entry only | By channel/account/peer |
| Plugin modelOverride | Yes (when available) | Per-request override |
| Webhook / external router | Yes | Caller or router decides |
6. Multi-Agent and Agent-to-Agent Calls
Supported. OpenClaw supports multiple agents, each with its own LLM, and agents can call each other (send messages, spawn sub-tasks).
6.1 Multiple agents, each with its own LLM
- Multi-agent: Configure in `agents.list`; each agent has `id`, `workspace`, `agentDir`, and sessions under `~/.openclaw/agents/<agentId>/sessions`.
- Per-agent model: `agents.list[].model` (e.g. `ollama/llama3.2`, `openai/gpt-4o`); otherwise inherits `agents.defaults.model`.
- Routing: `bindings` by channel/accountId/peer (Section 5).
6.2 Two ways to call between agents
| Method | Tool | Description |
|---|---|---|
| Send to session | sessions_send | Agent A sends a message to one of B’s sessions; B runs once; supports ping-pong and announce. |
| Spawn sub-task | sessions_spawn + agentId | A spawns a task with agentId: "B"; runs in B’s context (B’s workspace and model); result via announce. B must allow A (see below). |
6.3 Enabling agent-to-agent
- Allow sending: `tools.agentToAgent.enabled: true`, `allow: ["home", "work"]` (both sides in the list, or visible to each other).
- Allow spawn: on the target agent, set `subagents.allowAgents: ["main"]` or `["*"]`.
- Sub-task model: pass `model` in `sessions_spawn` or set `subagents.model` on the agent.
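Putting the settings together in one sketch (where `subagents` sits relative to the agent entry is an assumption; model id is illustrative):

```json
{
  "tools": {
    "agentToAgent": {
      "enabled": true,
      "allow": ["home", "work"]
    }
  },
  "agents": {
    "list": [
      {
        "id": "work",
        "subagents": { "allowAgents": ["main"], "model": "ollama/qwen3-coder:32b" }
      }
    ]
  }
}
```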
6.4 Related
- Sub-agents: run in `agent::subagent:<runId>`; results return via announce; nesting depth is controlled by `maxSpawnDepth`.
- Session tools: `sessions_list`, `sessions_history`, `sessions_send`, `sessions_spawn`; session visibility and `tools.agentToAgent` apply.
7. Reference Links
Use cases and examples are drawn from community and user practice; security and best practices follow the 2026 production checklist; Skills counts and the ClawHub URL are per the official docs and the current site.