OpenClaw brings a privacy-first, always-on AI assistant to your local machine — but you have to get it running first. This guide walks through every installation method available in v2026.4.5: the one-liner curl script for macOS, Linux, and WSL2 users; the npm global install for developers who prefer Node-native tooling; and the PowerShell method for Windows users running native terminals. You will also configure your API keys, choose a model, connect your first messaging platform, and learn how to diagnose the most common setup issues.
If you are new to OpenClaw and want to understand the architecture before installing, read What Is OpenClaw? first. It covers the gateway model, capability modules, and privacy guarantees that make OpenClaw different from cloud AI assistants.
Prerequisites
Before running any install command, make sure you have the following in place.
Node.js 18 or Later
All three install methods ultimately depend on Node.js. OpenClaw’s install scripts detect whether Node.js is present and offer to install it automatically if it is missing — but doing this yourself ahead of time gives you more control over which version and package manager you use.
# Check your current Node.js version
node --version
If the output shows v18.x.x or higher, you are ready. If Node is not installed or the version is below 18, install it from nodejs.org or via your system package manager:
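If you are scripting your setup, the version check itself can be automated. A minimal sketch in POSIX shell, assuming `node --version` output of the form `vMAJOR.MINOR.PATCH` (the `version_ok` helper is illustrative, not part of OpenClaw):

```shell
# Extract the major version from a `node --version` string and compare against 18
version_ok() {
  major=$(printf '%s' "$1" | sed 's/^v\([0-9][0-9]*\).*/\1/')
  [ "$major" -ge 18 ]
}

# Falls back to "v0" when node is not installed at all
version_ok "$(node --version 2>/dev/null || echo v0)" \
  && echo "Node.js is new enough" \
  || echo "Node.js missing or older than 18; install it first"
```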
# macOS — using Homebrew
brew install node
# Ubuntu / Debian (repo packages may predate Node 18; if so, use nvm per Common Issues below)
sudo apt-get install -y nodejs npm
# Windows — using winget
winget install OpenJS.NodeJS.LTS
An AI Model API Key (or a Local Model Server)
OpenClaw is model-agnostic. During first-run configuration you will point it at one of the following backends:
| Backend | Requirement |
|---|---|
| OpenAI (GPT-4o, GPT-4.1) | OPENAI_API_KEY from platform.openai.com |
| Anthropic (Claude 3.5+) | ANTHROPIC_API_KEY from console.anthropic.com |
| Google Gemini | GEMINI_API_KEY from aistudio.google.com |
| Ollama (local) | Ollama installed and running — no external key needed |
Have at least one API key ready before proceeding. You will enter it during the openclaw onboard step.
Administrator / sudo Access
The install script writes a binary to /usr/local/bin (macOS / Linux) or %AppData%\openclaw (Windows) and registers the background daemon with your system’s service manager. Both operations require elevated permissions.
Method 1: curl Install (macOS / Linux / WSL2)
The curl method is the fastest path to a working installation on any Unix-like system. The install script handles binary download, PATH configuration, and Node.js detection in a single command.
Step 1: Run the Install Script
Open a terminal and run:
curl -fsSL https://openclaw.ai/install.sh | bash
The flags break down as follows:
- -f — fail silently on HTTP errors rather than saving an error page as a file
- -s — suppress progress output
- -S — re-enable error messages even with -s
- -L — follow redirects (the URL may redirect to a CDN)
The script will print its progress as it runs. A successful install ends with output similar to:
[openclaw] Detected: macOS arm64
[openclaw] Node.js 22.1.0 found — skipping Node install
[openclaw] Downloading openclaw v2026.4.5 ...
[openclaw] Installing to /usr/local/bin/openclaw
[openclaw] Done. Run `openclaw onboard` to complete setup.
Step 2: Verify the Binary
openclaw --version
# Expected output: openclaw v2026.4.5
If the command is not found, your shell’s PATH may not have updated. Either restart your terminal or source your profile:
# bash
source ~/.bashrc
# zsh (default on macOS)
source ~/.zshrc
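If sourcing the profile does not help, confirm that the install directory is actually a component of your PATH. A small sketch (the `on_path` helper name is mine, not an OpenClaw command):

```shell
# Return success if directory $1 appears as a component of PATH-style string $2
on_path() {
  case ":$2:" in
    *":$1:"*) return 0 ;;
    *)        return 1 ;;
  esac
}

on_path /usr/local/bin "$PATH" \
  || echo 'Add to your shell profile: export PATH="/usr/local/bin:$PATH"'
```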
Step 3: Run the Onboarding Wizard
openclaw onboard --install-daemon
The --install-daemon flag tells OpenClaw to register the background gateway process with your system’s service manager (launchd on macOS, systemd on Linux). Without the daemon, OpenClaw only runs while your terminal is open.
The onboarding wizard prompts you for:
- Your AI backend (OpenAI, Anthropic, Gemini, or Ollama)
- Your API key for the chosen backend
- Your preferred default model
- Privacy telemetry preference (opt-in, defaults to off)
Method 2: npm Install
The npm method is ideal for developers who already manage global Node.js tools with npm or pnpm, and who want a reproducible install they can script across machines.
Step 1: Install the Package Globally
npm install -g openclaw@latest
This installs the openclaw binary and all JavaScript dependencies to your global node_modules. The @latest tag ensures you get v2026.4.5 rather than a cached older version.
Verify the install:
openclaw --version
# openclaw v2026.4.5
If npm warns about permissions on macOS or Linux, do not use sudo npm install -g. Instead, fix your npm prefix configuration:
# Fix npm global prefix to avoid sudo
mkdir -p ~/.npm-global
npm config set prefix '~/.npm-global'
# Add to your shell profile
echo 'export PATH=~/.npm-global/bin:$PATH' >> ~/.zshrc
source ~/.zshrc
# Now install without sudo
npm install -g openclaw@latest
Step 2: Run Onboarding and Install the Daemon
Unlike the curl method, the npm install does not automatically register the background daemon. You must explicitly request it during onboarding:
openclaw onboard --install-daemon
On macOS, this creates a launchd plist at ~/Library/LaunchAgents/ai.openclaw.daemon.plist and loads it immediately. On Linux, it writes a systemd user service to ~/.config/systemd/user/openclaw.service and enables it. You can verify the daemon is running after onboarding completes:
# macOS
launchctl list | grep openclaw
# Linux (systemd)
systemctl --user status openclaw
Step 3: Confirm Daemon Connectivity
openclaw status
Expected output:
OpenClaw v2026.4.5
Daemon: running (pid 12345)
Gateway: active — 0 platforms connected
Model: gpt-4o-mini (OpenAI)
A daemon status of stopped means the background process did not start. See the Common Issues section below.
Method 3: Windows PowerShell
On Windows, the recommended install path is the official PowerShell script. This method works in native PowerShell 5.1 and PowerShell 7+ (pwsh). It does not require WSL2.
Step 1: Allow Script Execution
By default, Windows restricts running downloaded scripts. Run this in an Administrator PowerShell window:
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
RemoteSigned allows locally created scripts to run without signing while requiring a digital signature for scripts downloaded from the internet. This is a reasonable balance between security and usability.
Step 2: Run the Install Script
In the same Administrator PowerShell window:
iwr -useb https://openclaw.ai/install.ps1 | iex
The flags:
- iwr — Invoke-WebRequest, PowerShell’s equivalent of curl
- -useb — short for -UseBasicParsing, avoids requiring Internet Explorer’s rendering engine
- iex — Invoke-Expression, executes the downloaded script string
The script installs the binary to %AppData%\openclaw\bin\openclaw.exe and adds that directory to your user PATH automatically. It also checks for Node.js and offers to install it via winget if missing.
Successful output:
[openclaw] Detected: Windows x64
[openclaw] Node.js 22.1.0 found — OK
[openclaw] Downloading openclaw v2026.4.5 ...
[openclaw] Installed to C:\Users\YourName\AppData\Roaming\openclaw\bin
[openclaw] PATH updated. Restart PowerShell and run `openclaw onboard`.
Step 3: Restart PowerShell and Onboard
Close the Administrator window, open a regular (non-elevated) PowerShell window, and run:
openclaw --version
# openclaw v2026.4.5
openclaw onboard --install-daemon
On Windows, the daemon is registered as a Windows Service via the sc command. You can manage it through services.msc or PowerShell:
# Check daemon status
Get-Service -Name OpenClawDaemon
# Start / stop manually if needed
Start-Service -Name OpenClawDaemon
Stop-Service -Name OpenClawDaemon
Note: Windows Defender SmartScreen may warn about the installer on first run. This is expected for newly signed executables. Click “More info → Run anyway” if prompted, or verify the script hash against the official checksum published at openclaw.ai/checksums.
First-Run Configuration
Whether you used curl, npm, or PowerShell, the openclaw onboard wizard leads you through the same configuration sequence. This section documents each prompt in detail.
1. Choose Your AI Backend
? Select your primary AI backend:
> OpenAI
Anthropic
Google Gemini
Ollama (local)
Your choice here sets the default model adapter. You can add additional adapters later by editing ~/.openclaw/openclaw.config.json directly.
2. Enter Your API Key
? OpenAI API key: sk-****
OpenClaw writes the key to ~/.openclaw/openclaw.config.json with file permissions set to 600 (owner read/write only) on Unix systems. On Windows, the config directory is protected by your user account’s ACL.
Security tip: If you prefer not to store the key in a config file, you can set an environment variable instead. OpenClaw checks for OPENAI_API_KEY, ANTHROPIC_API_KEY, and GEMINI_API_KEY at startup and prefers them over the config file value.
# Add to ~/.zshrc or ~/.bashrc
export OPENAI_API_KEY="sk-your-key-here"
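To confirm the variable is actually visible in your shell without echoing the secret itself, something like this works (the `check_key` helper is illustrative, not an OpenClaw command):

```shell
# Report whether an environment variable is set, showing only its length
check_key() {
  eval "val=\${$1:-}"
  if [ -n "$val" ]; then
    echo "$1 is set (${#val} chars)"
  else
    echo "$1 is NOT set"
  fi
}

check_key OPENAI_API_KEY
check_key ANTHROPIC_API_KEY
```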
3. Choose a Default Model
? Select your default model:
> gpt-4o-mini (fast, cost-efficient)
gpt-4o (strongest reasoning)
gpt-4.1 (latest, long context)
gpt-4o-mini is the recommended starting point for most users. It handles the vast majority of personal assistant tasks at a fraction of the cost of GPT-4o. You can override the model on a per-platform basis in the gateway configuration.
4. Privacy Settings
? Send anonymous usage statistics to help improve OpenClaw? (y/N)
The default is No. No data is sent unless you explicitly opt in. This setting can be changed at any time in openclaw.config.json under the telemetry.enabled key.
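Assuming the key path stays as described, the relevant fragment of openclaw.config.json would look roughly like this:

```json
{
  "telemetry": {
    "enabled": false
  }
}
```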
5. Config File Location
After onboarding, your configuration lives at:
~/.openclaw/
├── openclaw.config.json ← main config (API keys, model, privacy)
├── platforms/ ← per-platform connector configs
│ └── telegram.json
│ └── slack.json
└── data/
└── openclaw.db ← local SQLite session store
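Before hand-editing anything under ~/.openclaw, it is worth snapshotting the directory. The sketch below builds a temporary stand-in directory so the commands run anywhere; in practice you would point tar at "$HOME/.openclaw" instead:

```shell
# Build a stand-in config dir (swap in "$HOME/.openclaw" for real use)
cfg_parent=$(mktemp -d)
mkdir -p "$cfg_parent/openclaw"
echo '{}' > "$cfg_parent/openclaw/openclaw.config.json"

# Archive the directory, then list the archive to confirm the config made it in
backup=/tmp/openclaw-config-backup.tar.gz
tar -czf "$backup" -C "$cfg_parent" openclaw
tar -tzf "$backup"
```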
Connecting Your First Platform
With OpenClaw installed and configured, the next step is connecting a messaging platform so the gateway has somewhere to send and receive messages. Telegram is the recommended starting point because its bot token setup is self-contained and does not require OAuth app registration.
Connecting Telegram
Step 1: Create a Bot with BotFather
Open Telegram and search for @BotFather (the official Telegram bot management account). Start a conversation and send:
/newbot
BotFather will ask for:
- A display name for your bot (e.g., My OpenClaw Assistant)
- A username ending in bot (e.g., my_openclaw_bot)
After both are set, BotFather replies with your bot token:
Done! Congratulations on your new bot. You will find it at t.me/my_openclaw_bot.
Use this token to access the HTTP API:
7123456789:AAFabcdefghijklmnopqrstuvwxyz01234567
Copy the whole token exactly as BotFather sends it: the numeric bot ID, the colon, and the long string after it.
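A quick shape check catches copy-paste mistakes (a truncated ID, a missing colon) before you hand the token to OpenClaw. The helper name is mine; the pattern simply mirrors the digits-colon-secret shape shown above:

```shell
# Accept strings shaped like a Telegram bot token: digits, a colon, then 30+ token chars
looks_like_bot_token() {
  printf '%s' "$1" | grep -Eq '^[0-9]{6,}:[A-Za-z0-9_-]{30,}$'
}

looks_like_bot_token "7123456789:AAFabcdefghijklmnopqrstuvwxyz01234567" \
  && echo "token shape OK" \
  || echo "that does not look like a bot token"
```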
Step 2: Add the Token to OpenClaw
openclaw platform add telegram --token "7123456789:AAFabcdefghijklmnopqrstuvwxyz01234567"
OpenClaw writes a connector config to ~/.openclaw/platforms/telegram.json and registers the bot with Telegram’s webhook system.
Step 3: Verify the Connection
openclaw status
OpenClaw v2026.4.5
Daemon: running (pid 12345)
Gateway: active — 1 platform connected
✓ Telegram (@my_openclaw_bot)
Model: gpt-4o-mini (OpenAI)
Now open Telegram, find your bot at t.me/my_openclaw_bot, send it a message, and watch OpenClaw respond.
Connecting Slack (Overview)
Slack requires more setup because it uses OAuth rather than simple bot tokens. The high-level steps are:
- Go to api.slack.com/apps and click Create New App → From Scratch
- Enable the Bot Token Scopes: chat:write, channels:read, im:history, im:write
- Install the app to your workspace and copy the Bot User OAuth Token (starts with xoxb-)
- Enable Event Subscriptions and point the Request URL to your OpenClaw local webhook URL (use a tunnel like ngrok for non-server installs)
# After completing Slack app setup
openclaw platform add slack --token "xoxb-your-slack-bot-token"
For teams evaluating whether OpenClaw’s automation capabilities suit their workflow, it is also worth exploring n8n — a visual workflow automation tool that complements OpenClaw well for complex multi-step integrations involving external APIs.
Common Issues
Node.js Version Error
Symptom:
Error: OpenClaw requires Node.js >= 18. Found: v16.20.2
Fix: Upgrade Node.js. The cleanest approach is using nvm (Node Version Manager):
# Install nvm (macOS / Linux)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.zshrc
# Install and use Node 22 LTS
nvm install 22
nvm use 22
nvm alias default 22
# Confirm
node --version
# v22.x.x
# Reinstall openclaw
npm install -g openclaw@latest
On Windows, use nvm-windows (github.com/coreybutler/nvm-windows) or install Node 22 LTS directly from nodejs.org.
Daemon Not Running
Symptom:
openclaw status
# Daemon: stopped
Diagnosis and Fix:
# macOS — check launchd logs
log show --predicate 'subsystem == "ai.openclaw"' --last 1h
# Linux — check systemd journal
journalctl --user -u openclaw --since "1 hour ago"
# Windows — check the Application event log (Get-EventLog exists in Windows PowerShell 5.1 only)
Get-EventLog -LogName Application -Source OpenClawDaemon -Newest 20
# In PowerShell 7+, use Get-WinEvent instead
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; ProviderName = 'OpenClawDaemon' } -MaxEvents 20
The most common cause is a port conflict. OpenClaw’s daemon defaults to port 39201 for its local WebSocket API. If another process occupies that port, the daemon fails to start:
# Find what is using port 39201
# macOS / Linux
lsof -i :39201
# Windows PowerShell
netstat -ano | Select-String ":39201"
To change the daemon port, edit ~/.openclaw/openclaw.config.json:
{
"daemon": {
"port": 39202
}
}
Then restart the daemon:
# macOS
launchctl unload ~/Library/LaunchAgents/ai.openclaw.daemon.plist
launchctl load ~/Library/LaunchAgents/ai.openclaw.daemon.plist
# Linux
systemctl --user restart openclaw
# Windows PowerShell
Restart-Service -Name OpenClawDaemon
API Key Authentication Failure
Symptom:
[openclaw] Error: 401 Unauthorized — check your API key configuration
Fix:
# Verify the stored key
openclaw config get llm.apiKey
# Update with a new key
openclaw config set llm.apiKey "sk-your-correct-key-here"
# Or remove from config and use environment variable instead
openclaw config unset llm.apiKey
export OPENAI_API_KEY="sk-your-correct-key-here"
# Restart daemon to pick up changes
openclaw daemon restart
Platform Connector Not Receiving Messages
Symptom: OpenClaw status shows the platform as connected, but messages sent to the bot receive no response.
Fix for Telegram: Make sure the bot is not blocked and that you are sending messages directly to the bot (not a group unless you have enabled group mode):
# Check gateway event log
openclaw logs --platform telegram --tail 50
Fix for Slack: Slack event subscriptions require a publicly reachable webhook URL. On a local machine, use ngrok to expose the OpenClaw gateway:
ngrok http 39201
# Copy the https URL (e.g., https://abc123.ngrok-free.app)
# Update Slack's Event Subscriptions Request URL to:
# https://abc123.ngrok-free.app/webhooks/slack
Frequently Asked Questions
What are the system requirements for OpenClaw?
OpenClaw’s own process footprint is minimal — the daemon and gateway together use roughly 80–150MB of RAM at idle. The practical system requirements are determined almost entirely by your choice of model backend:
- Cloud API backend (OpenAI, Anthropic, Gemini): Any machine with 2GB RAM, a working internet connection, and Node.js 18+. A $5/month VPS is sufficient.
- Local model backend (Ollama): Depends on the model. A 7B-parameter quantized model (Q4) requires approximately 8GB RAM or 6GB VRAM. A 13B model needs 16GB RAM or 12GB VRAM. OpenClaw itself imposes no GPU requirement — that is entirely a function of the model you choose to run locally.
No GPU is required to run OpenClaw itself. See the GPU question below for more detail.
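The 8GB figure for a 7B Q4 model is mostly headroom; the weight file itself is smaller. A back-of-envelope sketch (Q4 stores roughly 0.5 bytes per parameter; the split between weights and overhead is my estimate, not an OpenClaw specification):

```shell
# Q4 quantization is ~0.5 bytes per parameter; x10 scaling keeps this integer-only
params_billion=7
weights_gb_x10=$(( params_billion * 5 ))   # 7 * 0.5 GB = 3.5GB, scaled by 10

echo "~$(( weights_gb_x10 / 10 )).$(( weights_gb_x10 % 10 ))GB of weights for a ${params_billion}B Q4 model"
echo "the remainder of the ~8GB covers the KV cache, runtime, and the OS itself"
```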
How do I update OpenClaw to the latest version?
The update command depends on how you originally installed OpenClaw:
# npm installs
npm update -g openclaw
# Or pin the exact version
npm install -g openclaw@2026.4.5
# curl installs: re-run the install script to fetch the latest binary
curl -fsSL https://openclaw.ai/install.sh | bash
# Verify after update
openclaw --version
After updating, restart the daemon so the new binary takes effect:
# macOS
launchctl unload ~/Library/LaunchAgents/ai.openclaw.daemon.plist && launchctl load ~/Library/LaunchAgents/ai.openclaw.daemon.plist
# Linux
systemctl --user restart openclaw
# Windows
Restart-Service -Name OpenClawDaemon
Subscribe to the OpenClaw GitHub releases feed to be notified when new versions drop. Breaking changes are documented in CHANGELOG.md at the root of the repository.
Can I run OpenClaw without a GPU?
Yes, absolutely. OpenClaw itself is a Node.js orchestration process — it does not perform any model inference directly and has no GPU dependency at all. Whether you need a GPU depends entirely on how you configure your model backend:
- Cloud API (OpenAI, Anthropic, Gemini): Zero GPU required. All inference runs on the provider’s hardware. Your local machine only handles orchestration, platform gateway communication, and local file I/O. A Raspberry Pi 4 with 8GB RAM can run OpenClaw in this configuration.
- Ollama local models: GPU is optional but strongly recommended for a responsive experience. Without a GPU, Ollama runs inference on the CPU. A 7B quantized model on a modern CPU (Apple M-series, AMD Ryzen 9) produces acceptable latency (2–8 seconds per response). Older CPUs may be impractically slow for interactive use. A GPU with 8GB+ VRAM dramatically improves inference speed.
For users who want local privacy without a GPU, the recommended setup is a modern Apple Silicon Mac (M2 or later) with Ollama — the M-series unified memory architecture provides GPU-class inference speeds without a discrete GPU.
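You can sanity-check the CPU latency claim with simple arithmetic. Both numbers below are illustrative assumptions (reply lengths and CPU throughput vary widely):

```shell
# response latency in seconds = reply length in tokens / generation speed in tokens per second
reply_tokens=200    # a medium-length assistant reply (assumption)
tokens_per_sec=25   # ballpark for a 7B Q4 model on a fast modern CPU (assumption)

echo "~$(( reply_tokens / tokens_per_sec ))s for a ${reply_tokens}-token reply at ${tokens_per_sec} tok/s"
```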
Next Steps
With OpenClaw installed, the daemon running, and at least one platform connected, you are ready to put the tool to work.
- What Is OpenClaw? — If you skipped the overview article, read it now to understand the gateway architecture and capability modules that power everything you just installed.
- Building AI Workflows with n8n — For complex multi-step automation that involves branching logic, conditional triggers, and third-party API integrations, n8n complements OpenClaw’s AI layer with a visual workflow builder.
- Explore the Canvas interface — Open a browser and navigate to http://localhost:39201 while the daemon is running. The Canvas provides a real-time view of every tool call, platform event, and model response — an invaluable tool for debugging and understanding how OpenClaw processes your requests.
- Add more platforms — Run openclaw platform list to see all available connectors, then openclaw platform add [name] --help for setup instructions specific to each one.