
Getting Started with OpenClaw: Server Setup and First Conversation

#openclaw #setup #installation #vps #daemon #node #tutorial

OpenClaw is a self-hosted AI agent daemon that runs continuously on a dedicated server, giving your AI assistant persistent file system access, background task execution, and always-on availability. Unlike chat UIs that spin up per-session, OpenClaw stays alive, monitors directories, and responds to messages through channels like Telegram. In this tutorial you will go from zero to your first conversation with a running OpenClaw daemon — step by step, copy-paste ready.

If you are new to the concept of AI agents and want to understand what makes OpenClaw different from a chatbot, read What Is an AI Agent? first. For a deeper product overview, see What Is OpenClaw?.


What Is OpenClaw and Why a Dedicated Server?

OpenClaw is built on one core idea: your AI agent should have a home. Traditional LLM chat interfaces are stateless — every message starts fresh, with no memory of your file system, no background jobs, and no way to reach you proactively. OpenClaw solves this by running as a long-lived daemon process that maintains persistent context and can take actions on the host machine.

That design choice is powerful, but it comes with a meaningful security trade-off: OpenClaw needs real file system access. The daemon can read files, write outputs, execute scripts, and — depending on your configuration — make network requests. On your primary development machine, that scope is a risk. You almost certainly have SSH keys, credential files, cloud provider configs, and personal documents all living alongside your projects.

Running OpenClaw on a dedicated server (either a cloud VPS or a repurposed local machine) keeps the blast radius small. If the agent makes a mistake, it makes it in a contained environment. If an LLM provider experiences a prompt injection through a third-party data source, it cannot reach your laptop’s keychain.

Beyond security, a dedicated server gives you:

  • Always-on availability — the daemon runs 24/7 without keeping your laptop awake
  • Stable resource allocation — LLM calls, file operations, and background tasks don’t compete with your IDE and browser
  • A clean working environment — the agent’s workspace is its own, making audits and rollbacks straightforward
  • Remote access — connect via Telegram, SSH, or a web UI from any device

This tutorial covers both cloud VPS and local machine deployments. Cloud is the recommended path for most people; local machine is a fine alternative if you have spare hardware.


Prerequisites

Before you begin, make sure the following are in place.

Node.js 22 or higher (Node.js 24 recommended)

OpenClaw requires at least Node.js 22.16 and performs best on Node.js 24, the current LTS as of 2026. Run this to check what you have:

node --version

If the output is below v22.16.0, install the latest LTS from nodejs.org or use a version manager:

# Using nvm (Linux / macOS)
nvm install 24
nvm use 24
nvm alias default 24

# Using fnm (cross-platform, faster)
fnm install 24
fnm use 24
fnm default 24
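If you script your server setup, the minimum-version gate can be automated instead of eyeballed. The helper below is not part of OpenClaw — it is a generic shell sketch that compares dotted version strings using GNU sort's -V (version sort) flag:

```shell
# version_ge CURRENT MINIMUM — succeeds when CURRENT >= MINIMUM.
# Relies on sort -V, which orders "22.4.1" before "22.16.0" numerically.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

version_ge "24.1.0" "22.16.0" && echo "ok"        # prints ok
version_ge "22.4.1" "22.16.0" || echo "too old"   # prints too old

# In practice, feed it the live Node version:
#   version_ge "$(node --version | tr -d v)" "22.16.0" || exit 1
```

Note that a plain string comparison would get this wrong (`22.4.1` sorts after `22.16.0` lexically), which is why -V matters here.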

npm or pnpm

npm ships with Node.js. pnpm is optional but recommended for faster global installs:

# Install pnpm if you prefer it
npm install -g pnpm

A server to run on

You need a machine that will host the daemon. Options are covered in detail in Step 1. At minimum you need:

  • 1 vCPU, 1 GB RAM (2 GB+ recommended for smoother LLM streaming)
  • Ubuntu 22.04 / 24.04 (recommended) or macOS 14+
  • Root or sudo access for daemon registration
  • An outbound internet connection for LLM API calls

An API key from at least one LLM provider

You will need a key from Anthropic, OpenAI, or OpenRouter before Step 4, where you configure your provider. If you do not have one yet, sign up as you go — the onboarding wizard in Step 5 will check that the key is set.


Step 1: Choose Your Deployment Environment

Cloud VPS (Recommended)

A cloud VPS is the cleanest option. You get a fresh Ubuntu machine isolated from your personal files, and you can destroy and recreate it at any time without risk.

Hostinger VPS starts at around $4–6/month for a 1 vCPU / 2 GB RAM instance, which is sufficient for OpenClaw running a single agent. Use the KVM2 or KVM4 plan if you plan to run multiple agents or enable heavier background tasks. Hostinger’s one-click Ubuntu 24.04 images make provisioning fast.

DigitalOcean Droplets start at $6/month for 1 vCPU / 1 GB RAM. The $12/month plan (2 vCPU / 2 GB) is more comfortable for streaming LLM responses. DigitalOcean has excellent documentation and a predictable dashboard, making it a good choice if you are new to VPS management.

Once your VPS is running, SSH in and update the system before proceeding:

# Update package lists and upgrade installed packages
sudo apt update && sudo apt upgrade -y

# Install curl and git if not already present
sudo apt install -y curl git

Then install Node.js 24 via NodeSource:

curl -fsSL https://deb.nodesource.com/setup_24.x | sudo -E bash -
sudo apt install -y nodejs
node --version   # should print v24.x.x

Local Machine (Mac Mini / Old Laptop) — Alternative

If you have a spare Mac Mini, old laptop, or home server, you can skip cloud costs entirely. This works well if the machine runs continuously and you trust your home network.

For macOS, install Node.js via Homebrew:

# Install Homebrew if not present
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Install Node.js 24
brew install node@24
echo 'export PATH="/opt/homebrew/opt/node@24/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc
node --version

For Ubuntu on a local machine, use the same NodeSource commands shown in the Cloud VPS section above.

Key consideration for local machines: You still want to run OpenClaw in a dedicated user account, not your primary login. This limits the agent’s file system scope to that user’s home directory. Create a dedicated user before proceeding:

# Linux
sudo adduser openclaw-runner
sudo su - openclaw-runner

# macOS — use System Settings > Users & Groups to create a new Standard user,
# then switch to it in a terminal session

Step 2: Install OpenClaw

With Node.js 24 in place, installing OpenClaw is a single command. Both npm and pnpm work; choose one:

# Using npm (universal)
npm install -g openclaw@latest

# Using pnpm (faster, recommended if pnpm is installed)
pnpm add -g openclaw@latest

The global install places the openclaw binary on your PATH. Verify the installation:

openclaw --version

You should see output like:

openclaw/x.y.z node/v24.x.x linux-x64

If the command is not found after installation, your global npm bin directory may not be on PATH. Fix it:

# Find where npm puts global binaries
npm config get prefix

# Add the bin subdirectory to PATH (example for npm prefix /usr/local)
export PATH="/usr/local/bin:$PATH"
echo 'export PATH="/usr/local/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc

For pnpm users, run pnpm setup, which configures the global bin directory (PNPM_HOME) and adds it to your shell profile automatically:

pnpm setup
source ~/.bashrc   # or ~/.zshrc on macOS

Once openclaw --version returns successfully, you are ready for the next step.

Installing a specific version

If you need to pin a particular OpenClaw release — for example, to match a version documented in a team runbook — you can install by version tag instead of latest:

# Install a specific version (replace x.y.z with the release you need)
npm install -g openclaw@x.y.z

# List available versions (requires npm v7+)
npm view openclaw versions --json

For most users @latest is correct. Only pin a version if you have a specific reason to avoid upgrades, such as a breaking change in a newer release affecting your existing automation scripts.

Checking for updates

Once installed, OpenClaw can notify you when a newer version is available. The daemon logs a reminder on startup if a newer release is detected. To update manually at any time:

# Update to the latest release
npm update -g openclaw

# Or with pnpm
pnpm update -g openclaw

Always check the OpenClaw changelog before updating a production daemon to understand what changed between versions.


Step 3: Register the Background Daemon

OpenClaw is designed to run as a background service that starts automatically on boot. The onboard --install-daemon command registers this service for you using the platform’s native init system.

openclaw onboard --install-daemon

On Linux (systemd) — OpenClaw generates a unit file at /etc/systemd/system/openclaw.service (or ~/.config/systemd/user/openclaw.service for user-level installs) and enables it. The daemon will start automatically on the next boot.

On macOS (launchd) — OpenClaw creates a plist at ~/Library/LaunchAgents/com.openclaw.daemon.plist and loads it via launchctl. The service starts at login.

After the daemon is registered, verify that the service unit was created (Linux):

# Check the generated systemd unit
systemctl status openclaw

Expected output before the first start:

● openclaw.service - OpenClaw AI Agent Daemon
     Loaded: loaded (/etc/systemd/system/openclaw.service; enabled; ...)
     Active: inactive (dead)

On macOS:

launchctl list | grep openclaw

You should see com.openclaw.daemon listed. The daemon is registered but not yet configured — it will not start until you complete onboarding in Step 5.


Step 4: Configure Your LLM Provider

OpenClaw connects to an LLM provider to process your messages and execute agent tasks. You need to choose a provider and set your API key before running onboarding.

Provider Comparison

Provider     Best For                       Model Examples                           Notes
OpenRouter   Flexibility, model switching   Claude Sonnet 4.5, DeepSeek R1, GPT-4o   Routes to multiple providers through one key; recommended for most users
Anthropic    Claude-only setups             Claude Sonnet 4.5, Claude 3 Opus         Direct access, best for Claude-specific features
OpenAI       GPT models                     GPT-4o, GPT-4 Turbo                      Direct access to OpenAI's model family

OpenRouter is the recommended starting point. A single OpenRouter API key lets you switch between Claude, DeepSeek, GPT-4o, and other models at any time without changing your OpenClaw configuration. This flexibility is valuable as you explore which model best suits your workflows.

For raw coding and workflow reasoning tasks, Claude Sonnet 4.5 (via Anthropic or OpenRouter) and DeepSeek R1 are strong choices. Claude Sonnet 4.5 produces high-quality, instruction-following output and handles multi-step agent tasks reliably.

Setting Your API Key

OpenClaw reads API keys from environment variables. Set the appropriate variable for your chosen provider:

# For OpenRouter (recommended)
export OPENROUTER_API_KEY="sk-or-v1-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

# For Anthropic direct
export ANTHROPIC_API_KEY="sk-ant-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

# For OpenAI direct
export OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

To make the key persist across sessions, add it to your shell profile:

# For bash users
echo 'export OPENROUTER_API_KEY="sk-or-v1-your-key-here"' >> ~/.bashrc
source ~/.bashrc

# For zsh users (macOS default)
echo 'export OPENROUTER_API_KEY="sk-or-v1-your-key-here"' >> ~/.zshrc
source ~/.zshrc

After completing onboarding, the selected provider and model preferences are written to ~/.openclaw/openclaw.json. You can edit this file directly to update models or switch providers later.


Step 5: Run Manual Onboarding

With the daemon registered and your API key set, run the onboarding wizard. OpenClaw supports two onboarding modes: guided (automated) and manual. Manual onboarding is recommended because it gives you explicit control over each configuration decision, which is important for a security-sensitive daemon.

openclaw onboard

The wizard walks you through the following steps:

1. Provider selection

? Select your LLM provider:
  ❯ OpenRouter (recommended — access multiple models with one key)
    Anthropic
    OpenAI
    Other

Select your provider. The wizard will detect the corresponding environment variable and confirm it is set.

2. Model selection

? Select your primary model:
  ❯ anthropic/claude-sonnet-4-5  (recommended for agents)
    deepseek/deepseek-r1          (strong for coding & reasoning)
    openai/gpt-4o
    openai/gpt-4-turbo
    [ Enter a model ID manually ]

If you selected OpenRouter, you will see models from multiple providers. anthropic/claude-sonnet-4-5 is a solid default for general agent work. For heavy coding tasks, deepseek/deepseek-r1 is worth trying.

3. Workspace directory

? Set the agent workspace directory:
  (default: ~/openclaw-workspace)

This is the root directory the daemon has read/write access to. Press Enter to accept the default, or specify a custom path. Keep this separate from your project directories unless you intentionally want the agent to access them.

4. Daemon start confirmation

? Start the daemon now?  (Y/n)

Press Y. The wizard starts the daemon using systemd or launchd and confirms it is running.

5. Configuration summary

The wizard writes your settings to ~/.openclaw/openclaw.json and prints a summary:

OpenClaw configured successfully.
  Provider:   openrouter
  Model:      anthropic/claude-sonnet-4-5
  Workspace:  /home/openclaw-runner/openclaw-workspace
  Daemon:     running (PID 12345)

Run 'openclaw chat' to start your first conversation.

The ~/.openclaw/openclaw.json file looks like this after onboarding:

{
  "provider": "openrouter",
  "model": "anthropic/claude-sonnet-4-5",
  "workspace": "/home/openclaw-runner/openclaw-workspace",
  "daemon": {
    "autoStart": true,
    "logLevel": "info"
  },
  "channels": {
    "telegram": {
      "enabled": false
    }
  }
}

You can edit this file at any time to change the model, switch providers, or enable additional channels. The daemon picks up configuration changes on its next restart.
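For quick edits, jq keeps the JSON valid where a stray hand-edit might not. Here is a sketch of a model switch, shown against a scratch copy so nothing live is touched; for the real file, point CONFIG at ~/.openclaw/openclaw.json and restart the daemon afterwards:

```shell
# Demo on a scratch copy mirroring the sample config above.
CONFIG=/tmp/openclaw-config-demo.json
cat > "$CONFIG" <<'EOF'
{ "provider": "openrouter", "model": "anthropic/claude-sonnet-4-5" }
EOF

# Write through a temp file, so a failed edit never leaves a
# half-written config behind.
jq '.model = "deepseek/deepseek-r1"' "$CONFIG" > "$CONFIG.tmp" && mv "$CONFIG.tmp" "$CONFIG"

jq -r '.model' "$CONFIG"   # prints deepseek/deepseek-r1
```

The temp-file-then-move pattern is the important part: it means the daemon never reads a partially written config if it happens to restart mid-edit.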


Step 6: Your First Conversation

With the daemon running, open a terminal session and start the interactive chat interface:

openclaw chat

You will see a prompt indicating the daemon is connected:

OpenClaw v1.x.x — Connected to daemon (PID 12345)
Model: anthropic/claude-sonnet-4-5 via OpenRouter
Workspace: /home/openclaw-runner/openclaw-workspace
Type your message and press Enter. Ctrl+C to exit.

> 

Type a message to test that the LLM connection is working:

> Hello! What can you help me with?

A successful response looks like:

Hello! I'm your OpenClaw agent. I can help you with:

- Writing and editing files in your workspace
- Running scripts and commands
- Answering technical questions
- Executing multi-step research or coding tasks

What would you like to work on?

Try a simple file operation to confirm the daemon has workspace access:

> Create a file called hello.txt in my workspace with the text "OpenClaw is running."

Then verify from a separate terminal:

cat ~/openclaw-workspace/hello.txt
# Output: OpenClaw is running.

If the file was created, your daemon is fully operational — the LLM is connected, the workspace is accessible, and the agent can take actions on the file system.

To exit the chat interface:

# Press Ctrl+C, or type:
exit

The daemon continues running in the background after you exit the chat session.

Useful daemon management commands:

# Check daemon status
openclaw status

# View daemon logs (last 50 lines)
openclaw logs --tail 50

# Restart the daemon (picks up config changes)
openclaw restart

# Stop the daemon
openclaw stop

# Start the daemon
openclaw start

Frequently Asked Questions

Why do I need a dedicated server instead of my main computer?

OpenClaw runs as a daemon with real file system access. It can read, write, and execute within its configured workspace — and depending on your setup, make network requests. On your primary development machine, that scope is a significant risk: SSH keys, .env files, cloud credentials, and sensitive personal files all live alongside your projects.

A dedicated server (or a dedicated user account on a local machine) creates a hard boundary. The agent can only reach what is in its workspace. Even if an LLM provider returns unexpected output, or a third-party data source contains an adversarial prompt, the damage is contained to the dedicated environment. Running on a VPS also means the daemon stays online 24/7 without keeping your laptop awake, which matters once you connect communication channels like Telegram.

Which LLM provider should I start with?

Start with OpenRouter if you are exploring or undecided. A single OpenRouter API key gives you access to Claude, GPT-4o, DeepSeek, and many other models without changing your OpenClaw configuration — you just update the model field in ~/.openclaw/openclaw.json. This makes it easy to experiment and switch.

If you already know you want Claude and want the most direct connection with the fewest intermediaries, use Anthropic direct. For GPT-centric workflows, OpenAI direct is the right call. Either way, Claude Sonnet 4.5 (anthropic/claude-sonnet-4-5) is the recommended starting model for general agent tasks — it follows complex instructions reliably and handles multi-step file operations well.

How do I know the daemon is running correctly?

Run openclaw status from the terminal. A healthy daemon prints:

Daemon: running
PID:    12345
Uptime: 2h 14m
Model:  anthropic/claude-sonnet-4-5 via OpenRouter

If the daemon is not running, openclaw start will attempt to bring it up and print any startup errors. For deeper diagnostics, check the logs:

# View the last 100 log lines
openclaw logs --tail 100

# On Linux, you can also check systemd directly
journalctl -u openclaw -n 100 --no-pager

Common startup failures and their fixes:

  • API key not found — the environment variable is not set in the shell the daemon runs under. Add it to /etc/environment (system-wide on Linux) or the daemon’s service environment rather than your interactive shell profile.
  • Port conflict — another process is using OpenClaw’s default port. Check openclaw.json for the port setting and change it, then restart.
  • Node.js version mismatch — the daemon was started with a different Node.js version than expected. Run openclaw --version to confirm Node.js 22.16+ is in use.
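For the first failure mode, a systemd drop-in is the cleanest way to hand the key to the daemon without touching shell profiles. This is a sketch assuming a system-level openclaw.service unit; the unit name and the variable you set depend on your install and provider:

```shell
# Create a drop-in that injects the key into the daemon's environment.
sudo mkdir -p /etc/systemd/system/openclaw.service.d
sudo tee /etc/systemd/system/openclaw.service.d/override.conf >/dev/null <<'EOF'
[Service]
Environment="OPENROUTER_API_KEY=sk-or-v1-your-key-here"
EOF

# Reload unit definitions and restart the daemon so it sees the variable.
sudo systemctl daemon-reload
sudo systemctl restart openclaw
```

A drop-in survives OpenClaw upgrades that regenerate the main unit file, which is why it is preferable to editing openclaw.service directly.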

Next Steps

You now have a working OpenClaw daemon: installed, registered as a background service, connected to an LLM provider, and verified with a live conversation. This is the foundation everything else builds on.

Connect Telegram for Remote Access

The next logical step is connecting a Telegram bot so you can message your agent from your phone or desktop Telegram client — without SSHing into the server every time. The Telegram channel integration takes about ten minutes to configure and turns OpenClaw into a genuinely always-available assistant you can reach from anywhere.

Telegram integration requires two things: a bot token from BotFather and a chat ID from your own Telegram account. Once you have those, add them to ~/.openclaw/openclaw.json under the channels.telegram key, set enabled to true, and restart the daemon. Detailed step-by-step instructions are covered in the OpenClaw Telegram Integration guide.
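The edit itself is small. Below is a hedged jq sketch against a scratch file — the botToken and chatId key names are assumptions on my part, so confirm the exact schema in the Telegram guide before applying it to the real config:

```shell
# Scratch copy mirroring the channels section of openclaw.json.
cat > /tmp/openclaw-telegram-demo.json <<'EOF'
{ "channels": { "telegram": { "enabled": false } } }
EOF

# Flip the channel on and merge in the two credentials in one pass.
jq '.channels.telegram += {enabled: true, botToken: "123456:ABC-token", chatId: "987654321"}' \
  /tmp/openclaw-telegram-demo.json > /tmp/openclaw-telegram-demo.json.tmp \
  && mv /tmp/openclaw-telegram-demo.json.tmp /tmp/openclaw-telegram-demo.json

jq -r '.channels.telegram.enabled' /tmp/openclaw-telegram-demo.json   # prints true
```

The += merge preserves any other keys already under channels.telegram, whereas a plain = would replace the whole object.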

Expand the Workspace

By default the workspace is a flat directory at ~/openclaw-workspace. As you add tasks, consider organizing it into subdirectories the agent can navigate:

mkdir -p ~/openclaw-workspace/{projects,outputs,scripts,logs}

You can instruct the agent to use these subdirectories naturally: “Save the output to the outputs folder.” OpenClaw will resolve paths relative to the workspace root without you needing to specify absolute paths each time.

Review Security Settings

Before leaving the daemon running unattended, spend a few minutes reviewing the security posture of your setup:

  • Check which user the daemon runs as. Run openclaw status and confirm the process owner is the dedicated openclaw-runner account (or equivalent), not root.
  • Review workspace boundaries. The workspace key in openclaw.json defines the root. The agent should not be given paths outside this directory in normal operation.
  • Rotate API keys periodically. LLM API keys should be treated like passwords. Set a reminder to rotate them every 90 days and update ~/.bashrc (or /etc/environment) accordingly.
  • Monitor logs weekly. openclaw logs --tail 200 gives a quick picture of what the agent has been doing. Unusual activity — large file writes, repeated errors, high API usage — is worth investigating.
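The weekly log check can start as a one-liner. The sketch below runs against a sample file because the timestamp/level format shown is an assumption, not OpenClaw's documented log format; in practice, pipe openclaw logs --tail 200 into the same grep:

```shell
# Sample log standing in for real daemon output.
cat > /tmp/openclaw-sample.log <<'EOF'
2026-01-10T09:00:01 info  daemon started
2026-01-10T09:05:12 error provider timeout after 30s
2026-01-10T09:05:13 info  retrying request
EOF

# Count error-level lines; anything above your usual baseline is worth a look.
grep -ci 'error' /tmp/openclaw-sample.log   # prints 1
```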

Explore What Your Agent Can Do

Before moving on to the Telegram guide, spend some time in the chat interface. Try asking the agent to:

  • Write a short Python script that lists all .txt files in the workspace
  • Summarize the contents of a file you create manually
  • Plan out a directory structure for a new project

Getting comfortable with the chat commands now will make every subsequent integration — Telegram, scheduled tasks, multi-step pipelines — feel natural from the start.

For background on what makes OpenClaw distinct from other agent runtimes, revisit What Is OpenClaw? after your first week of real usage. The architectural choices will make more sense once you have experienced them firsthand.
