
OpenClaw Use Cases and Best Practices

Document compiled: Feb 24, 2026
Compiled from web search results and official docs. Covers: use cases; production best practices and security; Skills and ClawHub; on-prem/private deployment; LLM routing (local and remote); and multi-agent / agent-to-agent calls.


1. Use Cases

1.1 Social and information management

  • X/Twitter account analysis: Analyze posting style, engagement, and follower profile
  • Daily Reddit digest: Auto-summarize favorite subreddits into a daily digest
  • Multi-channel personal assistant: Unified task routing and reminders across Telegram, Slack, email, and calendar

1.2 Creative and content production

  • YouTube content pipeline: Automate topic selection, asset organization, and progress tracking
  • Mini-app builder: Overnight generation of mini-app prototypes
  • Social media content ops: Manage X and LinkedIn; auto-trends, drafting, and publishing

1.3 Office productivity

  • Email management: Filter important mail, summarize newsletters, categorize and follow up
  • Personal CRM: Build relationship and reminder management from email and calendar
  • Feishu group management: Auto-welcome, broadcast, and moderate group Q&A

1.4 Health and life management

  • Health symptom tracker: Track metrics, medication, and follow-up reminders
  • Personal and family scheduling: Shared family calendar and task allocation

1.5 Research and knowledge management

  • Personal knowledge base (RAG): Searchable knowledge base combined with chat
  • AI earnings tracker: Auto-collect earnings reports and generate summaries

1.6 Development and operations

  • Remote code debugging: Debug and complete code from a phone or tablet (e.g. while commuting)
  • Browser automation: Deploy and test web apps, fill forms, take screenshots, export PDFs
  • “Big tasks in the chat”: Site rebuilds, car negotiation, bug fixes via conversation

1.7 Summary of advantages

  • Non-technical users: No coding; complex tasks via natural language or voice
  • Cross-platform: Feishu, Telegram, email, calendar as a single entry point
  • Long-term memory: Learns habits over time and improves efficiency
  • Access control: Roles and approval for sensitive actions

2. Production Best Practices and Security

2.1 Security landscape (2026)

  • Public research shows many instances exposed on the internet with default config; production must be hardened.
  • OpenClaw has real capabilities (credentials, network access, files, Shell) and must be secured to production standards.

2.2 Network security

  • Bind address: Bind the gateway to 127.0.0.1, not 0.0.0.0; accept only local or reverse-proxy traffic
  • mDNS: Disable mDNS discovery so the instance is not discoverable on the LAN
  • Control UI: Disable it, or put it behind a reverse proxy with authentication
  • Reverse proxy: Use nginx / Caddy / Traefik for HTTPS with TLS termination
  • Gateway auth: Enable 256-bit token auth; all requests must carry a valid token
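As a sketch, the items above might live together in the gateway config. The key names below (gateway.bind, gateway.mdns, gateway.controlUi, gateway.auth.token) are illustrative assumptions, not confirmed OpenClaw schema; check the gateway security docs for the real keys:

```json
{
  "gateway": {
    "bind": "127.0.0.1",
    "mdns": false,
    "controlUi": false,
    "auth": { "token": "${OPENCLAW_GATEWAY_TOKEN}" }
  }
}
```

Keep the token itself in an environment variable or secrets manager (see 2.3), never written into the config file.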

2.3 Credentials and secrets

  • Storage: No plaintext secrets; use environment variables or a secrets manager (e.g. AWS Secrets Manager, HashiCorp Vault)
  • Rotation: Rotate API keys and database passwords roughly every 90 days
  • Permissions: Least privilege; grant only the API keys that are actually needed
  • Monitoring: Set usage and billing alerts; investigate anomalies

2.4 Agent and Skills configuration

  • System prompt: Define clear scope, prohibitions, escalation paths, output format, and tone
  • Skills: Keep a minimal set; enable only what the workflow needs
  • Memory: Partition clearly, clean up periodically, and version important memory

2.5 Audit and operations

  • Audit logs: Log conversations, external API calls, Shell commands, file operations, and tool calls
  • Log format: Use structured logs for search and tracing
  • Retention: At least 90 days; 365 days for enterprise
  • Egress: Use a domain allowlist (e.g. Squid) to limit outbound access
  • High-risk actions: File deletion, email sending, Shell, etc. can require human approval
  • Containers: Run in Docker or similar, as a non-root user

2.6 Deployment options

  • Managed / one-click hardening: e.g. Clawctl for fast hardening (~60s).
  • Self-hosted: Implement the checklist above; ~2–40+ hours depending on control needs.

3. Skills Overview

3.1 Capability layers

  • Built-in tools: Ship with OpenClaw (~8 base + 17 advanced: file, Shell, web search/browse, memory, vision); no extra install
  • Skills: AgentSkills-compliant extensions (SKILL.md + directory); installed from ClawHub or locally
  • Plugins (MCP): MCP-based integrations to external systems

3.2 Load order and priority

  1. Workspace skills: ./skills in the current workspace (highest priority)
  2. Managed/local skills: ~/.openclaw/skills
  3. Bundled skills: shipped with the install
  4. Extra dirs: skills.load.extraDirs (lowest priority)

Same name: workspace overrides local, and local overrides bundled.
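Assuming the skills.load.extraDirs key listed above, a minimal sketch of adding a shared skills directory (the path is hypothetical):

```json
{
  "skills": {
    "load": {
      "extraDirs": ["/opt/openclaw/shared-skills"]
    }
  }
}
```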

3.3 ClawHub and installation

  • ClawHub: Official skill marketplace, clawhub.com; discover, install, update, backup skills.
  • Ways to install:
    • Recommended: OpenClaw built-in Skills UI
    • CLI: clawhub install <skill-name>; list eligible skills with openclaw skills list --eligible
    • Chat: Ask OpenClaw to “install XXX Skill”
  • Commands: clawhub sync --all, clawhub update --all, clawhub search "keyword"

Default install path: workspace ./skills; takes effect next session.

3.4 Core Skills (getting started)

  1. Tavily Search / Brave Search: Real-time web search; works around the model's knowledge cutoff
  2. Agent Browser: Web automation: open pages, fill forms, take screenshots, export PDFs
  3. Shell: Run terminal commands and scripts

Then add channel skills (Telegram, Feishu, Slack, Discord, WhatsApp, etc.) as needed.

3.5 Top 10–20 (by tier)

Tier 1: ClawHub, Agent Browser, Brave/Tavily, Shell, Cron/Wake.
Tier 2: Telegram, Feishu, Slack, Discord, WhatsApp.
Tier 3: Image, weather, stocks, news, email, calendar, notes, RAG, smart home, etc.

3.6 Best practices

  • Least privilege: Enable only skills needed for the workflow.
  • Read before install: Treat third-party skills as untrusted; check code and permissions.
  • Secrets: Use skills.entries.<name>.env / apiKey; never in prompt or logs.
  • Sandbox: Use sandboxing for untrusted input or high-risk tools.
  • Prefer built-in: Use built-in tools when they suffice.
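A sketch of the secrets guidance above, assuming a skill named tavily-search (the skill name and variable are illustrative); the ${...} value would be resolved from the environment rather than stored in plaintext:

```json
{
  "skills": {
    "entries": {
      "tavily-search": {
        "env": { "TAVILY_API_KEY": "${TAVILY_API_KEY}" }
      }
    }
  }
}
```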

4. On-Prem and Private Deployment (No External LLM)

You can run 100% on-prem: no external LLM; all inference stays inside the network.

4.1 Why no external API is needed

  • OpenClaw talks to Model Providers; point a provider at on-prem inference and everything stays local/private.
  • Any OpenAI-compatible service with a /v1 endpoint can be a provider.
  • With no cloud providers and network blocking, no requests leave the network.

4.2 Common on-prem options

  • Ollama: Local LLM runtime; one command to pull and serve; default http://127.0.0.1:11434/v1. Use case: dev/test, small teams
  • LM Studio + MiniMax M2.1: Official local stack; Responses API; large context. Use case: high quality and control
  • vLLM / LiteLLM: High-throughput inference or a unified proxy with an OpenAI-style /v1. Use case: GPU clusters, model gateway
  • Custom gateway: Any proxy implementing an OpenAI-compatible API. Use case: strict control

LiteLLM is a gateway (it serves no models itself); vLLM and Ollama are inference engines. Combine them as needed (e.g. LiteLLM in front of vLLM for production).

4.3 Config (on-prem only)

  • Only on-prem providers in models.providers (e.g. ollama, lmstudio, local).
  • No cloud providers; remove or comment out OpenAI, Anthropic, etc.
  • Default model: agents.defaults.model.primary set to local (e.g. ollama/llama3.2:latest).
  • No cloud fallbacks if you must never call out.
  • Network: Firewall so OpenClaw can only reach internal inference endpoints.

4.4 Example: Ollama only

{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://127.0.0.1:11434/v1",
        "apiKey": "ollama-local",
        "api": "openai-responses",
        "models": [{ "id": "llama3.2:latest" }]
      }
    },
    "defaults": {
      "provider": "ollama",
      "model": "llama3.2:latest"
    }
  }
}

Keep /v1 in the baseUrl; for multi-machine setups, use the internal inference host (e.g. http://192.168.1.100:11434/v1). Ollama recommends at least a 64k-token context window for local models.

4.5 Example: Internal LM Studio / custom v1

Set baseUrl to your internal service (e.g. http://192.168.1.100:1234/v1) and define the model in models with api: "openai-responses".
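A minimal sketch mirroring the Ollama example in 4.4, pointed at an internal LM Studio host; the model id is an assumption, so substitute whatever your server actually reports:

```json
{
  "models": {
    "providers": {
      "lmstudio": {
        "baseUrl": "http://192.168.1.100:1234/v1",
        "apiKey": "lmstudio-local",
        "api": "openai-responses",
        "models": [{ "id": "minimax-m2.1" }]
      }
    },
    "defaults": {
      "provider": "lmstudio",
      "model": "minimax-m2.1"
    }
  }
}
```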

4.6 Hardware and security

  • Hardware: One well-equipped machine (e.g. Mac Studio M3 Ultra 512GB) can run large context and moderate concurrency; two for HA or load sharing. For ~80 concurrent users see 4.6.1 (vLLM + A100 etc.).
  • Security: Local models have no cloud content filter; use system prompt, compaction, and reduced agent permissions. See Section 2.
  • Audit: Same as Section 2: audit logs, human approval for sensitive ops, egress filtering, containers, non-root.

4.6.1 ~80 concurrent users, local model

  • Prefer vLLM (or LiteLLM in front of vLLM) for production; single-node Ollama is not ideal for 80+ concurrent.
  • 7B/13B: 2× A100 40GB or 1–2× A100 80GB with vLLM.
  • 30B–70B: 2× A100 80GB (NVLink) or 4× A100; for 50+ concurrent consider 2× H100 80GB or 4× A100 80GB.
  • Single 24GB card (e.g. RTX 4090): light load or <10 users only.

Community consensus: models under 14B are not recommended; treat 14B–32B as the minimum, with 32B+ noticeably more stable for tool use.

  • Qwen3-Coder 32B: Most stable tool use; Ollama tag qwen3-coder:32b; 24–32GB VRAM. First choice for agent/code work
  • Qwen3 32B: Strong general ability and long context; qwen3:32b; 24–32GB. General use + tools
  • Qwen3 72B: Near cloud quality; 48GB+. When hardware allows
  • GLM-4.7-Flash: 30B; precise tool use; 24–32GB. Alternative to Qwen3-Coder 32B
  • DeepSeek-R1 32B: Strong reasoning and code; 24–32GB. Complex reasoning
  • Llama 3.3 70B: Multi-purpose and tools; 48GB+. Multi-GPU or an 80GB card
  • GPT-OSS 20B/120B: Agent-oriented, clean tool-call format; 24GB / 40GB+. Agent-focused

Qwen3.5 (Feb 2026): the 397B-A17B MoE release is on HuggingFace, but full precision is very large; wait for MoE/quantization support in vLLM/Ollama. For now, prefer Qwen3 32B / Qwen3-Coder 32B or GLM-4.7 locally.

By VRAM: 8–16GB → qwen3-coder:14b; 24–32GB → qwen3-coder:32b or glm-4.7-flash; 40GB+ → qwen3:72b, gpt-oss:120b, Llama 3.3 70B. Use temperature 0–0.2 and 32k+ context where possible.
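For the vLLM setups described above, the provider block has the same shape as the Ollama example in 4.4, pointed at vLLM's OpenAI-compatible endpoint (vLLM serves on port 8000 by default); the host and model id below are illustrative:

```json
{
  "models": {
    "providers": {
      "vllm": {
        "baseUrl": "http://192.168.1.200:8000/v1",
        "apiKey": "vllm-local",
        "api": "openai-responses",
        "models": [{ "id": "qwen3-coder-32b" }]
      }
    }
  }
}
```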

4.7 References

See the consolidated link list under 6.5 References at the end of this document.

5. LLM Routing: Local vs Remote by Scenario

OpenClaw has several model-selection mechanisms; not all support “choose local vs external by scenario”.

5.1 Primary + fallbacks — not by scenario

The same agent can have a primary model plus fallbacks; fallbacks are used only when the primary fails (timeout, rate limit, auth error).

  • Does not: Choose local vs remote by task type or sensitivity; only failover.
  • Config: agents.defaults.model.primary, agents.defaults.model.fallbacks; need multiple providers and models.mode: "merge".
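Putting those keys together, a sketch of primary + fallbacks across two providers (the nesting is inferred from the key paths named above; the model ids follow examples used elsewhere in this document):

```json
{
  "models": { "mode": "merge" },
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/llama3.2:latest",
        "fallbacks": ["openai/gpt-4o"]
      }
    }
  }
}
```

With this config, openai/gpt-4o is tried only when the local model times out, rate-limits, or fails auth; nothing here routes by content.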

5.2 Multi-agent + bindings — by entry only

Bindings route by channel / accountId / peer to different agents (each with its own default model).

  • Does: Fix “this channel always uses this model” (e.g. WhatsApp → Ollama, Telegram → GPT).
  • Does not: Decide per message by content or task type.
  • Config: agents.list[].model, bindings with channel / accountId / peer.
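A sketch of entry-based routing: two agents, each with its own model, bound by channel. The exact binding fields are an assumption based on the channel/accountId/peer options named above:

```json
{
  "agents": {
    "list": [
      { "id": "home", "model": "ollama/llama3.2" },
      { "id": "work", "model": "openai/gpt-4o" }
    ]
  },
  "bindings": [
    { "channel": "whatsapp", "agentId": "home" },
    { "channel": "telegram", "agentId": "work" }
  ]
}
```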

5.3 True “by scenario” (content-aware) choice

  • Plugin before_prompt_build → modelOverride: Community PR; return modelOverride from the hook to pick a model per request. You implement the logic. See the plugins and Hooks docs.
  • Webhook with model: Your system calls e.g. POST /hooks/agent with model in the body; the caller does the scenario logic.
  • External router: A layer in front of OpenClaw decides local vs cloud and forwards to the right model/agent.
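For the webhook option, a sketch of a request body to POST /hooks/agent where the caller has already decided that a sensitive task should stay on the local model; every field except model is an assumption:

```json
{
  "agentId": "home",
  "message": "Summarize the attached internal report",
  "model": "ollama/llama3.2:latest"
}
```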

5.4 Summary

  • Primary + fallbacks: not by scenario; failover only
  • Multi-agent + bindings: by entry only (channel/account/peer)
  • Plugin modelOverride: yes (when available); per-request override
  • Webhook / external router: yes; the caller or router decides

5.5 References

See the consolidated link list under 6.5 References at the end of this document.

6. Multi-Agent and Agent-to-Agent Calls

Supported. OpenClaw supports multiple agents, each with its own LLM, and agents can call each other (send messages, spawn sub-tasks).

6.1 Multiple agents, each with its own LLM

  • Multi-agent: Configure in agents.list; each has id, workspace, agentDir, sessions under ~/.openclaw/agents/<agentId>/sessions.
  • Per-agent model: agents.list[].model (e.g. ollama/llama3.2, openai/gpt-4o); else inherits agents.defaults.model.
  • Routing: bindings by channel/accountId/peer (Section 5).

6.2 Two ways to call between agents

  • Send to session (sessions_send): Agent A sends a message to one of B’s sessions; B runs once; supports ping-pong and announce.
  • Spawn sub-task (sessions_spawn + agentId): A spawns a task with agentId: "B"; it runs in B’s context (B’s workspace and model); the result comes back via announce. B must allow A (see below).

6.3 Enabling agent-to-agent

  • Allow sending: tools.agentToAgent.enabled: true, allow: ["home", "work"] (both sides in list or visible).
  • Allow spawn: On target agent set subagents.allowAgents: ["main"] or ["*"].
  • Sub-task model: Pass model in sessions_spawn or set subagents.model on the agent.
  • Sub-agents: Run in agent::subagent:<runId>; result via announce; nesting controlled by maxSpawnDepth.
  • Session tools: sessions_list, sessions_history, sessions_send, sessions_spawn; visibility and tools.agentToAgent apply.
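Combining the keys above into one sketch: the main agent may spawn sub-tasks on work, and home and work may message each other. The placement of the subagents block under agents.list[] is an assumption:

```json
{
  "tools": {
    "agentToAgent": {
      "enabled": true,
      "allow": ["home", "work"]
    }
  },
  "agents": {
    "list": [
      {
        "id": "work",
        "subagents": {
          "allowAgents": ["main"],
          "model": "ollama/llama3.2"
        }
      }
    ]
  }
}
```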

6.5 References


  • Skills: https://docs.openclaw.ai/tools/skills
  • ClawHub: https://clawhub.com
  • Skills config: https://docs.openclaw.ai/tools/skills-config
  • Security & sandboxing: https://docs.openclaw.ai/gateway/security, https://docs.openclaw.ai/gateway/sandboxing
  • Local/on-prem models: https://docs.openclaw.ai/gateway/local-models, https://docs.openclaw.ai/zh-CN/gateway/local-models
  • LLM routing / multi-agent: https://docs.openclaw.ai/concepts/model-failover, https://docs.openclaw.ai/concepts/multi-agent, https://docs.openclaw.ai/concepts/model-providers
  • Agent-to-agent / sub-agents: https://docs.openclaw.ai/concepts/multi-agent, https://docs.openclaw.ai/tools/subagents, https://docs.openclaw.ai/concepts/session-tool
  • Ollama: https://docs.ollama.com/integrations/openclaw
  • Feishu: https://docs.openclaw.ai/zh-CN/channels/feishu
  • Community skills: https://openclawskills.org, https://openclawskills.dev

Use cases and examples are drawn from community and user practice; security guidance aligns with the 2026 production checklist; Skills counts and the ClawHub URL follow the official docs and current site.