OpenClaw Tutorial: Install, Configure, and Automate with n8n on a Remote Server

If you’ve been following the AI agent space, you’ve probably heard of OpenClaw by now. It’s been all over developer communities — people praising it for privacy, others flagging security risks. I had my own doubts, so I just tried it myself.

This OpenClaw tutorial covers three things end to end: installing OpenClaw on a remote server, configuring it with a model provider and a Telegram channel, and building your first automation with n8n.

📺 Prefer to watch? Full walkthrough on YouTube


What Is OpenClaw and Why Run It on a Server?

OpenClaw is an open-source AI agent framework that runs on your own hardware. Unlike typical chatbots, it doesn’t just answer questions — it executes tasks. You define what it can access, and it works within exactly those boundaries.

Running it on a remote server means your OpenClaw AI agent is always on, reachable from any device, and not dependent on your laptop being awake.


Architecture Overview

Here’s how the stack fits together: you message the agent over Telegram, the OpenClaw Gateway on the remote server handles the request, and skills call out through an ngrok tunnel to a local n8n instance, which is what actually touches your data.

Telegram → OpenClaw Gateway (remote server) → ngrok tunnel → n8n (local) → your data

If OpenClaw and n8n are on the same machine or network, skip ngrok.

Why n8n in the middle? I don’t want to give the agent direct access to my data. The n8n layer means I control exactly what OpenClaw can see and do: a clean boundary between the agent and my actual systems.

New to n8n or ngrok? Drop a comment and I’ll cover those separately.


Step 1: Install Node.js (v22)

OpenClaw requires Node.js. I’m running v22.

# Using nvm
nvm install 22
nvm use 22
node --version
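If you script your server setup, it can help to fail fast when the wrong Node major is installed. This is a minimal sketch of my own, not part of the OpenClaw installer; the function name is made up:

```shell
check_node_major() {
  # $1 = a `node --version` string (e.g. "v22.11.0"), $2 = required major
  local major="${1#v}"     # strip the leading "v"
  major="${major%%.*}"     # keep only the major component
  [ "$major" -eq "$2" ]
}

# Demo with a literal string; on the server, pass "$(node --version)" instead.
check_node_major "v22.11.0" 22 && echo "version string accepted"
```

Drop the check into a provisioning script before the installer step so a wrong Node version stops the run early.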

Step 2: Run the OpenClaw Installer

curl -fsSL https://openclaw.ai/install.sh | bash

You’ll hit a security warning during setup. Read it carefully — OpenClaw has broad access capabilities by design, so understanding the security model matters before you proceed.

Choose Quickstart when prompted.


Step 3: Choose a Model Provider

When the installer asks for a model provider, I went with MiniMax — one of the more affordable options. It runs around $10/month with a 2.7 usage limit every 5 hours, which is fine for personal use.

Select the Global version and paste in your API key.

You can swap the model later once your setup is working. Get something running first.


Step 4: Connect a Channel — Telegram

Channels are how you talk to OpenClaw. It comes with a browser dashboard, but connecting it to a messaging app you already use is where it gets genuinely useful.

We’re using Telegram — the fastest channel to get working.

Create a Telegram Bot

Open Telegram and search for @BotFather. Run /newbot and follow the prompts.

/newbot

BotFather gives you a bot token. Copy it.

Back in the OpenClaw installer, paste the token when prompted. Skip the search provider and skills for now — we’ll add those manually so you understand what’s happening under the hood.

Let the installer finish.


Step 5: Verify the OpenClaw Gateway

The Gateway is the engine of OpenClaw. It runs in the background and handles everything — incoming messages, routing, responses. No Gateway, no agent.

openclaw gateway status

If it’s running, you’re ready to pair.
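Since no Gateway means no agent, you may want a crude watchdog. This is a hypothetical sketch of mine, not an official OpenClaw feature; it just wraps the two Gateway commands this tutorial already uses:

```shell
ensure_running() {
  # $1 = status command, $2 = restart command
  if ! $1 >/dev/null 2>&1; then
    echo "status check failed, restarting" >&2
    $2
  fi
}

# On the server, run from cron every few minutes:
#   ensure_running "openclaw gateway status" "openclaw gateway restart"
```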


Step 6: Pair with Telegram

Open your Telegram bot and send it a message. It’ll respond with a pairing prompt — follow the command it shows you to confirm your identity.

openclaw pairing approve telegram <CODE>

One-time step. Once it’s done, your OpenClaw AI agent is live on Telegram.


Step 7: Build Your First OpenClaw Automation with n8n

Your agent is running. But right now it only knows what it shipped with. Skills are how you wire it into your own tools and data.

A skill is a simple instruction file. It tells OpenClaw: “when the user asks about X, call this endpoint.” Write it once, and the agent knows how to use it. This is the core of the OpenClaw + n8n automation pattern.

Create the Skill

cd ~/.openclaw/workspace
mkdir -p skills/personal-calendar
touch skills/personal-calendar/SKILL.md

Write the SKILL.md

# Personal Calendar

Use this skill when the user asks about their schedule or upcoming events.

## Endpoint

GET https://your-ngrok-url.ngrok.io/calendar

## Response

Returns an array of calendar events as JSON.

Register and Restart

openclaw skills list         # Verify it appears
openclaw gateway restart     # Pick up the new skill

Set Up the n8n Webhook

On the n8n side, create a webhook workflow that returns your data as JSON. Since n8n is local, expose it with ngrok:

ngrok http 5678

Paste the ngrok URL into your SKILL.md.
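Before testing end to end, it's worth sanity-checking the payload shape your workflow returns. The field names below are illustrative assumptions; match whatever your n8n workflow actually emits:

```shell
# A sample of what the /calendar webhook is assumed to return.
cat <<'EOF' > /tmp/calendar-sample.json
[
  {"title": "Dentist appointment", "start": "2025-06-03T14:00:00Z"},
  {"title": "Sprint review",       "start": "2025-06-04T10:00:00Z"}
]
EOF

# Confirm it is well-formed JSON (this is what OpenClaw will be parsing):
python3 -m json.tool /tmp/calendar-sample.json
```

Once the tunnel is up, you can run the same check against the live endpoint: `curl -s https://your-ngrok-url.ngrok.io/calendar | python3 -m json.tool`.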

Test It in Telegram

/new

Ask your agent about your calendar. You’ll see the request hit n8n, and the agent will respond with the data the workflow returned.


Step 8: Add an AI Agent Inside the Skill

Let’s go a level deeper. Instead of returning raw data, put an AI agent inside the n8n workflow to process it and return a recommendation.

The scenario: a small business buying and selling Flesh and Blood cards. We have a month of sales history in a JSON file and want the agent to recommend what to restock and what to offload.

The n8n workflow reads the file, passes it through a Mistral agent node, and returns a structured recommendation back to OpenClaw.

Same steps as before to wire it up — create the skill, write the SKILL.md, restart the Gateway, start a new Telegram session with /new, and ask.
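For reference, the second skill's SKILL.md might look like the following. The endpoint path and wording are my assumptions; the shape mirrors the calendar skill above:

```markdown
# Card Inventory Advisor

Use this skill when the user asks what to restock or what to offload.

## Endpoint

GET https://your-ngrok-url.ngrok.io/card-recommendations

## Response

Returns a structured recommendation (restock list, offload list, reasoning) as JSON.
```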


Key Terms

Gateway — the core runtime. Always running, handles all message routing.

Channel — the interface you use to talk to OpenClaw (Telegram, Discord, Slack, etc.).

Skill — an instruction file that teaches the agent how to call an external tool or data source.

OpenClaw automation — pairing skills with n8n (or any webhook) to give the agent controlled access to your actual workflows and data.


Closing Thoughts

OpenClaw isn’t magic. It’s a structured way to give an AI agent access to your tools and data without handing over the keys to everything. You define exactly what it can do through skills. It does those things well.

That said — be deliberate about what you give it access to. The n8n boundary pattern we used here is worth keeping. The agent calls n8n, n8n decides what to return. You stay in control.

Next up: connecting OpenClaw to Discord and locking it down to specific channels with allowlists.

Questions? Drop them in the comments.