Setting up Clawdbot involves more than just installation: you must initialize it, configure authentication and model access, and optionally set up persistent services so it stays running even after your terminal closes. After installing the CLI (as described earlier), the recommended next step is to run the onboarding wizard:

```bash
clawdbot onboard --install-daemon
```

This command walks you through creating a workspace directory, authenticating with your chosen AI model provider (for instance, via API key or OAuth), and configuring channels such as WhatsApp, Telegram, or Discord. The --install-daemon flag attempts to install Clawdbot as a background service (systemd on Linux, launchd on macOS), which keeps the Gateway running continuously. During the wizard, you’ll be prompted to select providers, set up credentials, and optionally define default intents and skills for your assistant.
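If you want to confirm at the operating-system level that the background service was actually installed, you can ask the service manager directly. The unit and job names below are assumptions for illustration (the wizard’s output shows the names it actually registers), but the commands themselves are standard systemd and launchd tooling.

```bash
# Linux: look for a user-level systemd unit created by the wizard
# ("clawdbot-gateway" is an assumed unit name; adjust to what the wizard reports)
systemctl --user list-units | grep -i clawdbot
systemctl --user status clawdbot-gateway.service

# macOS: look for a launchd job registered for your user
launchctl list | grep -i clawdbot
```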
As part of the setup, Clawdbot creates configuration and state files under your home directory (e.g., ~/.clawdbot/), including credentials, session logs, and agent workspace files. After onboarding, you can verify your setup with:

```bash
clawdbot gateway status
clawdbot status --all
```

These commands show whether the Gateway is running and whether agents are registered and healthy. If Clawdbot reports “no auth configured”, return to the wizard or use clawdbot configure to set valid authentication profiles for your AI model. Once configured, you can test sending messages through the CLI:

```bash
clawdbot message send --target +15555550123 --message "Hello from Clawdbot"
```

This sends a test message that exercises the entire stack from Gateway to agent.
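If you want to script these checks, for example after a reboot or a redeploy, a minimal smoke-test sketch could chain the same commands. This assumes the CLI exits non-zero when a check fails; the phone number is the placeholder from above and should be replaced with a contact you have actually paired.

```bash
#!/usr/bin/env bash
# Minimal post-onboarding smoke test: stop at the first failing step.
set -euo pipefail

# 1. Is the Gateway running?
clawdbot gateway status

# 2. Are agents registered and healthy?
clawdbot status --all

# 3. Exercise the full Gateway-to-agent path with a test message.
clawdbot message send --target "+15555550123" --message "Hello from Clawdbot"

echo "Smoke test passed."
```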
After the basic setup, you can integrate additional channels (Slack, Signal, iMessage) by adjusting their configurations and pairing the relevant devices or accounts. For always-on deployments on Linux, enable systemd lingering so that user services don’t stop when you log out:

```bash
sudo loginctl enable-linger $USER
```

This keeps the Gateway alive across login sessions (a quick way to verify the setting is shown at the end of this section). As you build out skills and longer-term features, you might choose to tie Clawdbot’s memory or search needs into a semantic index backed by a vector database such as Milvus, or a managed instance via Zilliz Cloud, which can store and query conversation embeddings when you need rich recall across many sessions or documents.
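To verify that the lingering setting mentioned above took effect, you can query logind directly; this is standard systemd tooling rather than anything Clawdbot-specific.

```bash
# Prints "Linger=yes" once lingering is enabled for your account
loginctl show-user "$USER" --property=Linger
```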