
OpenClaw Integrations and Community: Connecting Your Digital Life

#openclaw #integrations #community #messaging #open-source #privacy
One of the most compelling reasons to choose OpenClaw as your personal AI assistant is not the local inference engine or the privacy guarantees alone — it is the sheer breadth of platforms you can connect it to. Out of the box, OpenClaw ships with connectors for more than 50 messaging, voice, and developer platforms. Whether you want your AI to respond in a WhatsApp chat, triage GitHub issues, or answer a voice query on your iPhone, the gateway layer makes it possible without surrendering your data to a cloud middleman.

This guide walks through every major integration category, explains how the privacy model holds up even when you route messages through cloud platforms, and introduces the open-source community where new connectors are built and maintained. If you are brand new to OpenClaw, start with What Is OpenClaw? before reading on — this article assumes you already have a running installation.


The OpenClaw Platform Ecosystem

OpenClaw’s integration story begins with a design decision made early in the project: the gateway process must be entirely separate from the AI runtime. The runtime (which loads your model and executes reasoning) has no direct knowledge of WhatsApp, Slack, or any other platform. It only understands structured events: a message arrives, the runtime thinks, a response is dispatched. The gateway is responsible for translating the messy reality of platform APIs into that clean event model — and for translating the runtime’s clean responses back into platform-native actions.

This architecture has two practical benefits for you as a user:

  1. Adding a new platform never touches the AI layer. You install a connector, authenticate it, and the gateway handles the rest. Your automation logic remains unchanged.
  2. The gateway is the only component that needs outbound internet access. The runtime can operate in a fully air-gapped environment if you are using a local model. This makes it straightforward to audit exactly what data leaves your machine and why.
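
To make the "clean event model" concrete, here is a small sketch in Python (for illustration only — the field names and function are assumptions, not OpenClaw's actual schema). The gateway's job is to reduce every platform payload to a shape like this before the runtime ever sees it:

```python
from dataclasses import dataclass

@dataclass
class InboundEvent:
    platform: str   # which connector produced the event, e.g. "telegram"
    chat_id: str    # normalized to a string regardless of platform
    text: str       # the message body the runtime will reason over

def normalize_telegram(update: dict) -> InboundEvent:
    # Translate a raw Telegram-style webhook update into the clean event
    # model; the runtime never sees the platform-specific payload shape.
    msg = update["message"]
    return InboundEvent(
        platform="telegram",
        chat_id=str(msg["chat"]["id"]),
        text=msg.get("text", ""),
    )

event = normalize_telegram(
    {"message": {"chat": {"id": 123456789}, "text": "summarize my inbox"}}
)
print(event.chat_id)  # "123456789"
```

Adding a new platform means writing one more `normalize_*`-style translation at the gateway edge; the runtime's side of the contract never changes.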

The full connector catalog lives at openclaw/openclaw on GitHub under packages/connectors/. Each connector is its own package with its own version, changelog, and maintainer attribution, so you can evaluate the health of a specific integration independently of the core project.


Messaging Integrations

Messaging connectors are the most frequently used category. They enable OpenClaw to participate in the conversations you are already having — monitoring incoming messages, replying on your behalf (with your configured rules), and triggering workflows from natural-language instructions you send from your phone.

The table below covers the major messaging platforms, their setup complexity, and important implementation notes:

| Platform | Category | Setup Complexity | Notes |
| --- | --- | --- | --- |
| WhatsApp | Consumer Messaging | Medium | Requires a Meta Business account and phone number verification via the official WhatsApp Cloud API. Personal account integration is not supported by Meta’s ToS. |
| Telegram | Consumer Messaging | Low | Create a bot via BotFather, paste the token into openclaw.config.json. Full inline keyboard and callback support included. |
| Slack | Team Messaging | Medium | Requires a Slack app with bot and event subscription scopes. The connector ships with a manifest you can import directly from the Slack app creation UI. |
| Discord | Community Messaging | Medium | Bot token setup via the Discord Developer Portal. Includes Discord-specific primitives: reactions, thread creation, role-based permission gating. |
| iMessage | Consumer Messaging | High | macOS 13+ only. Uses an AppleScript bridge that must run as the same macOS user account as your Messages app. Requires Full Disk Access permission for the bridge binary. |
| Signal | Secure Messaging | High | Relies on the third-party signal-cli tool. Requires device linking via QR code and periodic re-authentication. Not officially supported by Signal; use at your own discretion. |
| SMS | Carrier Messaging | Medium | Routes through Twilio or Vonage. Requires a paid account with either provider and a provisioned phone number. |
| Mattermost | Team Messaging | Low | Self-hosted Mattermost installations only. Uses the Mattermost Bot Account API. Community-maintained connector. |
| Rocket.Chat | Team Messaging | Low | Uses the Rocket.Chat REST API. Community-maintained connector. |

Setting Up a Messaging Connector

Every messaging connector follows the same three-step pattern:

Step 1 — Install the connector package (if it is not bundled by default):

openclaw connector install @openclaw/connector-telegram

Step 2 — Add credentials to openclaw.config.json:

{
  "connectors": {
    "telegram": {
      "enabled": true,
      "botToken": "YOUR_BOT_TOKEN_HERE",
      "allowedChatIds": ["123456789"]
    }
  }
}

The allowedChatIds field is an important security boundary. Only Telegram chats in this list will trigger the OpenClaw runtime. Without this filter, anyone who discovers your bot's handle can message it and issue commands to your local AI.
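
The filter amounts to a simple allow-list check at the gateway. A sketch of the idea in Python (illustrative, not OpenClaw's actual code):

```python
# Illustrative allow-list gate: only chats listed in allowedChatIds may
# reach the runtime; messages from any other chat are dropped.
ALLOWED_CHAT_IDS = {"123456789"}

def should_dispatch(chat_id: str) -> bool:
    return chat_id in ALLOWED_CHAT_IDS

assert should_dispatch("123456789")      # configured chat: dispatched
assert not should_dispatch("987654321")  # unknown chat: dropped
```

The key property is that the check happens before the runtime is invoked at all, so an unauthorized message costs nothing beyond the gateway comparison.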

Step 3 — Restart the gateway process:

openclaw gateway restart

The gateway will attempt to authenticate and register any necessary webhooks. Check the gateway log at ~/.openclaw/logs/gateway.log to confirm successful initialization.


Voice Integrations

OpenClaw’s voice capability delegates speech recognition and synthesis to the host platform’s native engine rather than running its own neural voice model. This keeps the local resource footprint small but means voice quality is tied to your operating system’s built-in TTS/STT rather than a purpose-trained voice AI.

macOS — Siri Shortcuts Integration

On macOS, the recommended voice path uses Siri Shortcuts:

  1. Open the Shortcuts app and create a new shortcut.
  2. Add the “Run Shell Script” action.
  3. Set the script to:
    openclaw query --input "$1" --output text
  4. Add the “Speak Text” action after the shell script action, piping the output.
  5. Assign a voice trigger phrase such as “Hey Siri, ask my agent.”

This flow lets you speak a natural-language query, have it processed by the OpenClaw runtime locally, and hear the response through macOS’s built-in speech synthesis — all without a network request if you are using a local model.

For users on macOS 14 or later, the Apple Silicon Neural Engine accelerates both local model inference and the built-in speech recognition, making this combination particularly fluid on M-series Macs.

iOS — OpenClaw Companion App

The OpenClaw companion app for iOS (available from the project’s GitHub Releases page as a TestFlight build) connects to a running OpenClaw instance on your local network via the gateway’s WebSocket API.

To set it up:

  1. Install the companion app via TestFlight (invite link in the GitHub Releases section).
  2. Open the app and enter your gateway address: http://[YOUR-MAC-IP]:7823.
  3. Tap the microphone icon to send a voice query; responses appear as text and are optionally spoken back via iOS speech synthesis.

Requirement: Your Mac running OpenClaw and your iPhone must be on the same local network, or you must expose the gateway port via a VPN or SSH tunnel for remote access.

Android — Companion App with Google Assistant Hand-off

The Android companion app mirrors the iOS experience and additionally supports Google Assistant hand-off: you can configure a Google Assistant routine that says “Hey Google, open my agent” and launches the OpenClaw app directly into voice input mode.

Setup steps:

  1. Download the APK from the GitHub Releases page and sideload it (or use the beta Google Play listing if available in your region).
  2. Configure the gateway IP address in the app settings.
  3. Optional: Create a Google Assistant routine in the Google Home app that opens the OpenClaw app.

Developer Integrations

Beyond messaging and voice, OpenClaw ships first-party connectors for the developer tools you likely already use:

| Platform | What OpenClaw Can Do |
| --- | --- |
| GitHub | Summarize new issues, generate PR review notes, post issue comments, trigger workflow dispatches |
| Linear | Create issues from natural language, query issue status, update assignments |
| Notion | Create pages, query databases, append blocks to existing pages |
| Google Calendar | Read upcoming events, create new events, respond to scheduling queries |
| Jira | Create tickets, transition status, assign issues via the Atlassian REST API |
| RSS/Atom Feeds | Monitor any public feed, trigger a workflow on new entries, summarize new items |
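
The feed-monitoring behavior in the last row follows a common poll-and-diff pattern: fetch the feed, compare entry IDs against what you have already seen, and surface only the new items. A sketch of the general idea (not OpenClaw's implementation), using only the standard library:

```python
import xml.etree.ElementTree as ET

def new_entries(feed_xml: str, seen: set) -> list:
    # Diff a feed against previously seen entry IDs; return new titles
    # and record their IDs so the next poll skips them.
    root = ET.fromstring(feed_xml)
    fresh = []
    for item in root.iter("item"):
        guid = item.findtext("guid") or item.findtext("link")
        if guid and guid not in seen:
            seen.add(guid)
            fresh.append(item.findtext("title"))
    return fresh

SAMPLE = """<rss><channel>
  <item><guid>1</guid><title>Q3 report published</title></item>
  <item><guid>2</guid><title>New release shipped</title></item>
</channel></rss>"""

seen = {"1"}
print(new_entries(SAMPLE, seen))  # ['New release shipped']
```

Each new entry would then be handed to the gateway as a workflow trigger, just like an inbound message.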

Webhooks and Custom Connectors

If the platform you need is not in the catalog, OpenClaw exposes a generic inbound webhook endpoint that any service can POST to:

# The gateway starts an HTTP listener on port 7824 by default
curl -X POST http://localhost:7824/webhook/custom \
  -H "Authorization: Bearer YOUR_WEBHOOK_SECRET" \
  -H "Content-Type: application/json" \
  -d '{"text": "Summarize the latest quarterly report and send to Slack #finance"}'

You configure the webhook secret in openclaw.config.json:

{
  "webhook": {
    "enabled": true,
    "port": 7824,
    "secret": "YOUR_WEBHOOK_SECRET"
  }
}

For full programmatic control, the gateway also exposes a local REST API at port 7823. Every action you can take through a messaging connector — send a message, trigger a workflow, query session state — is also available as an API call. This makes it straightforward to integrate OpenClaw into your own scripts, CI pipelines, or custom frontends.
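
Because every connector action is mirrored by a REST call, a thin client wrapper is easy to build into your own scripts. A sketch of the shape such a wrapper might take — note that the /api/message path and JSON field names below are placeholders for illustration, not documented gateway routes:

```python
import json
import urllib.request

class GatewayClient:
    # Minimal wrapper around the gateway's local REST API. The endpoint
    # path and payload fields are illustrative assumptions.
    def __init__(self, base_url: str = "http://localhost:7823"):
        self.base_url = base_url.rstrip("/")

    def _post(self, path: str, payload: dict) -> urllib.request.Request:
        # Construct the request; urllib.request.urlopen(...) would send it.
        return urllib.request.Request(
            f"{self.base_url}{path}",
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )

    def send_message(self, connector: str, chat_id: str, text: str):
        return self._post(
            "/api/message",
            {"connector": connector, "chatId": chat_id, "text": text},
        )

client = GatewayClient()
req = client.send_message("telegram", "123456789", "build finished")
print(req.get_full_url())  # http://localhost:7823/api/message
```

A CI pipeline could use a wrapper like this to notify you through any configured connector when a job completes.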

The community has published several connector SDK templates that stub out the boilerplate for building a new connector. These live under packages/connector-template/ in the repository and are the recommended starting point for new platform integrations.


Community and Contribution

OpenClaw is genuinely community-driven. The core team at openclaw/openclaw maintains the runtime, gateway, and first-party connectors, but a large share of the 50+ platform catalog has been contributed by individual developers from the community.

GitHub Repository Structure

The monorepo is organized as follows:

openclaw/openclaw
├── packages/
│   ├── core/              # Runtime and session management
│   ├── gateway/           # Gateway process and WebSocket API
│   ├── connectors/        # All platform connectors (first-party)
│   ├── connector-template/# SDK template for new connectors
│   └── canvas/            # Local web UI
├── apps/
│   ├── ios/               # iOS companion app
│   └── android/           # Android companion app
└── docs/                  # Documentation source

How to Contribute

The project uses a fork-and-PR workflow. Here is the standard contribution path:

  1. Fork the repository on GitHub.
  2. Clone your fork and run npm install from the root (the repo uses npm workspaces).
  3. Create a branch named after your change: feature/add-mattermost-connector or fix/telegram-reconnect-race.
  4. Write your code. For new connectors, copy packages/connector-template/ as your starting point.
  5. Add tests. The project uses Vitest for unit tests. New connectors are expected to include at minimum a test for the message parsing logic.
  6. Open a PR with the provided template. The core team triages PRs weekly.

Before submitting, check the Contribution Guidelines at CONTRIBUTING.md in the repository root. They cover code style (ESLint + Prettier), commit message format (Conventional Commits), and the connector review checklist that all new platform connectors must pass.

Community Channels

  • GitHub Discussions — the primary channel for feature requests, architecture questions, and long-form design conversations. Organized into categories: General, Connectors, Models, Canvas, and Showcase.
  • Discord Server — real-time chat for quick questions and troubleshooting. Link in the repository README.
  • Weekly Office Hours — the core team hosts a one-hour open video call every Thursday (link posted in the Discord #announcements channel). This is the best place to ask about roadmap timing or get a connector PR reviewed live.

Plugin Registry

Community connectors that are not (yet) merged into the main repository are distributed through the OpenClaw Plugin Registry — a curated list maintained as a GitHub repository at openclaw/community-connectors. Installing a community plugin is identical to installing a first-party connector:

openclaw connector install @community/connector-microsoft-teams
openclaw connector install @community/connector-zoom

The registry entry for each connector includes its maintenance status (active / maintained / archived), the last verified API compatibility date, and a link to the connector’s own GitHub repository. Always check this information before building critical workflows on a community connector.


Privacy Model Across Integrations

Routing your messages through WhatsApp or Slack raises an obvious question: if those platforms are cloud-based, does OpenClaw’s local-first privacy guarantee still hold? The answer requires understanding exactly where data processing happens in the integration chain.

The Data Flow

When a WhatsApp message arrives and triggers OpenClaw, the sequence is:

  1. Meta’s servers receive the message from your contact’s device and POST it to your configured webhook URL via the WhatsApp Cloud API.
  2. Your gateway receives the webhook POST at its public-facing URL (typically a tunnel endpoint you have set up) and passes the message payload to the local runtime.
  3. Your local runtime processes the message entirely on your hardware. It calls your local model (or a configured API), executes any tool calls, and produces a response. No data leaves your machine during this step.
  4. Your gateway sends the response back to Meta’s API, which delivers it to your contact.

The implication is that WhatsApp (Meta) sees the content of the messages — that has not changed, and cannot change without WhatsApp offering end-to-end encrypted API access. What OpenClaw protects is the AI reasoning process itself: your message is not handed to OpenAI, Anthropic, or any other AI vendor as part of normal operation. The intelligence applied to your conversation stays on your machine.

For Signal, the privacy profile is stronger: Signal’s protocol encrypts messages end-to-end, and the signal-cli bridge decrypts them locally before they ever reach the gateway. In this case, even the transport-level content is protected from platform observation.

What Leaves Your Machine

To be precise: the only data that exits your machine during normal OpenClaw operation is:

| Outbound Traffic | Destination | When |
| --- | --- | --- |
| Message payload (response text) | Platform API (WhatsApp, Slack, etc.) | When OpenClaw sends a reply |
| Inference request | Cloud API (if configured) | Only if you use a cloud model adapter |
| OAuth token refresh | Platform OAuth server | Periodically, per platform |
| Webhook registration | Platform API | Once, at gateway startup |
If you use a fully local model via Ollama or LM Studio, the second row disappears entirely. Every byte of AI reasoning stays on your hardware.


Frequently Asked Questions

Can I add support for a platform that is not listed?

Yes — and the project actively encourages it. The connector SDK template at packages/connector-template/ stubs out the complete interface a connector must implement: event parsing, credential validation, message dispatch, and the health-check endpoint the gateway uses to verify the connector is alive. A minimal connector for a platform with a straightforward REST API can typically be built in a weekend. Once your connector passes the review checklist in CONTRIBUTING.md, the core team will consider merging it into the first-party connector catalog. In the meantime, you can publish it to the community connector registry immediately.
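
As a rough sketch of what that interface covers, here is a toy connector in Python. The method names are guesses for illustration only — the template in packages/connector-template/ defines the real contract:

```python
class MinimalConnector:
    # Illustrative connector skeleton; method names are assumptions,
    # not the template's actual contract.
    name = "example"

    def validate_credentials(self, config: dict) -> bool:
        # Credential validation: reject broken configs before going live.
        return bool(config.get("token"))

    def parse_event(self, raw: dict) -> dict:
        # Event parsing: raw platform payload -> gateway event shape.
        return {"chat_id": str(raw["from"]), "text": raw.get("body", "")}

    def dispatch(self, chat_id: str, text: str) -> dict:
        # Message dispatch: a real connector would call the platform API.
        return {"to": chat_id, "sent": text}

    def health_check(self) -> bool:
        # The gateway polls this to confirm the connector is alive.
        return True

conn = MinimalConnector()
event = conn.parse_event({"from": 42, "body": "hello"})
print(conn.dispatch(event["chat_id"], event["text"]))
```

The four responsibilities map directly onto the review checklist: if each method is small, pure, and tested, the connector is usually straightforward to review.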

Can I use OpenClaw for enterprise messaging like Microsoft Teams or Zoom?

Community connectors for Microsoft Teams and Zoom exist in the plugin registry, but they are not part of the first-party connector catalog as of v2026.4.5. Enterprise platforms present additional complexity: OAuth flows often require admin tenant approval, API rate limits are more aggressive, and compliance requirements (such as Teams’ requirement that bot messages be logged to your organization’s compliance store) interact with OpenClaw’s local-first model in non-obvious ways. The Teams connector works for personal and small-team accounts but has not been validated for large enterprise deployments. If your organization has a strict IT governance policy, consult your IT team before deploying any bot integration on a corporate Teams tenant. The project’s GitHub Discussions board has an ongoing thread tracking enterprise connector maturity.

How does OpenClaw keep my data private when using cloud-based platforms like WhatsApp?

OpenClaw’s privacy guarantee covers the AI reasoning layer, not the transport layer. When you send a message via WhatsApp, Meta’s servers see the message content — that is a property of how WhatsApp works, and OpenClaw cannot change it. What OpenClaw changes is what happens next: instead of routing your message to an external AI API (which would give OpenAI or Anthropic access to your conversation), your local runtime processes it on your own hardware. Your AI reasoning, your context history, your tool execution logs — none of that leaves your machine. For maximum transport-layer privacy, use the Signal integration, which combines OpenClaw’s local AI processing with Signal’s end-to-end encryption, so neither the platform operator nor any AI vendor has access to the content of your conversations.


Next Steps

You now have a complete picture of OpenClaw’s integration landscape: which platforms are available, how to set them up, how the privacy model holds across cloud-based messaging, and how to contribute new connectors to the community.

If you want to go deeper on extending OpenClaw’s capabilities beyond platform connectors, the OpenClaw Skills and Nodes article covers the drag-and-drop automation graph system that lets you build multi-step workflows without writing code.

For a broader perspective on how other open-source agent projects handle community extensions, the AutoGPT Plugins and Community guide is a useful comparison — AutoGPT’s block-based marketplace model contrasts instructively with OpenClaw’s connector-per-platform approach.

If you are thinking about how to wire OpenClaw into a custom application you are building, the CrewAI Custom Tools and Integrations article covers patterns for building tool adapters that may translate directly to the OpenClaw connector SDK model.
