How to Self-Host OpenClaw, Your Own Personal AI Agent
OpenClaw is an open-source, self-hosted AI agent that lets you chat with your own personal assistant through Telegram, Discord, Slack, WhatsApp, and more. This guide walks you through running it on your own machine or Linux server so your data stays yours.
Imagine having a personal AI assistant that actually does things rather than just answering questions. One you can message through Telegram, Discord, or Slack and ask to check your calendar, draft an email, browse the web, or run a script on your server.
And one where your conversations and data never leave infrastructure you control.
OpenClaw, previously known as Moltbot and originally Clawdbot, is an open-source AI agent that runs on your own machine or server and connects to messaging apps you already use. Instead of being another chat interface, it acts more like an assistant capable of executing tasks, maintaining memory, and interacting with external tools.
This guide explains what OpenClaw is, how it works, and how to get it running on your own system.
What makes OpenClaw different from a regular chatbot?
Most AI tools are reactive. You open an app, ask a question, and get a response.
OpenClaw is designed to go further.
First, it can be proactive. You can configure reminders, scheduled briefings, and automated notifications.
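As a mental model, a scheduled briefing is shaped like a cron entry. OpenClaw manages its own schedules internally; the entry and script path below are purely illustrative:

```shell
# Illustrative only: a daily 08:00 trigger expressed as a cron entry.
# The script path is hypothetical; OpenClaw schedules tasks itself.
CRON_ENTRY="0 8 * * * /usr/local/bin/send-briefing.sh"
echo "$CRON_ENTRY"
```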
Second, it can execute tasks. Depending on your configuration, OpenClaw can:
run shell commands
interact with APIs
browse websites
manage files
connect with external services
Third, it maintains persistent memory across conversations. Instead of resetting context each session, it builds context over time.
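To see why persistence matters, consider a minimal sketch of memory as an append-only log with keyword recall. This is not OpenClaw's actual memory implementation, only the underlying idea:

```shell
# Hypothetical sketch: persistent memory as an append-only log.
# OpenClaw's real memory system is more sophisticated than this.
MEMORY_FILE="/tmp/agent-memory.log"
remember() { echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) $*" >> "$MEMORY_FILE"; }
recall()   { grep -i "$1" "$MEMORY_FILE"; }

remember "user prefers morning briefings at 8am"
recall briefings   # matching entries survive across sessions in the file
```

Because the log lives on disk rather than in a chat session, the assistant can consult it in later conversations.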
OpenClaw itself does not include a model. Instead it connects to whichever backend you prefer, whether a hosted provider accessed with an API key or a local model served through Ollama.
If you choose the Docker route, the install pulls a prebuilt container image from GitHub Container Registry.
Docker sandboxing and agent isolation
OpenClaw can also use Docker for sandboxing.
When enabled, agent tools can run inside isolated containers rather than directly on your host system. This allows stricter control over filesystem access, networking, and permissions.
Sandboxing is particularly useful if your agents execute shell commands or modify files.
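The flags below are standard Docker isolation options and illustrate the kind of restrictions sandboxing applies; OpenClaw's exact invocation may differ:

```shell
# Standard Docker isolation flags, shown for illustration:
# no network, read-only root filesystem, scratch space in /tmp only,
# and all Linux capabilities dropped.
SANDBOX_FLAGS="--network none --read-only --tmpfs /tmp --cap-drop ALL"
# docker run --rm $SANDBOX_FLAGS alpine:3 echo "hello from the sandbox"
echo "$SANDBOX_FLAGS"
```

A tool running under such flags can compute and write to its scratch space, but cannot reach the network or modify the host filesystem.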
Connecting your first messaging platform
Telegram
Telegram is often the easiest integration to start with.
In Telegram, search for @BotFather and send:
/newbot
Follow the prompts and copy the generated bot token.
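Before wiring the token into OpenClaw, you can sanity-check it against the Telegram Bot API's getMe endpoint (a real endpoint; the token below is a placeholder):

```shell
# Build the Bot API URL from your token (placeholder value shown).
TOKEN="123456:ABC-placeholder"
API_URL="https://api.telegram.org/bot${TOKEN}/getMe"
echo "$API_URL"
# curl -s "$API_URL"   # a valid token returns JSON with "ok":true
```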
Add the token in the OpenClaw dashboard under Telegram integration settings.
Slack
For Slack, create an app at api.slack.com/apps, assign the required scopes, install it to your workspace, and enter the credentials in OpenClaw.
Configuring your AI model
Inside the OpenClaw dashboard, choose your AI provider.
If using hosted models, paste your API key and select a model.
If running locally, configure OpenClaw to connect to Ollama:
http://localhost:11434
Example model:
llama3
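Before pointing OpenClaw at Ollama, it helps to confirm the server is reachable and the model is pulled. These are standard Ollama commands and its default endpoint:

```shell
# Ollama's default local endpoint (matches the URL above).
OLLAMA_HOST="http://localhost:11434"
# ollama pull llama3                 # download the model if you don't have it yet
# curl -s "${OLLAMA_HOST}/api/tags"  # JSON list of locally installed models
echo "${OLLAMA_HOST}/api/tags"
```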
Conclusion
OpenClaw allows you to run a personal AI agent on infrastructure you control while connecting it to messaging platforms you already use.
For most users, the standard install flow is the simplest way to get started. Docker is available if you want containerization or sandboxing, but it's optional.
That flexibility makes OpenClaw easy to run whether you want it on your own machine, a home lab server, or a public VPS.
For larger always-on deployments, xTom provides enterprise-grade dedicated servers and colocation services. For smaller deployments, V.PS offers scalable NVMe-powered KVM VPS hosting suitable for running OpenClaw.