What We’re Building
A chatbot that remembers previous messages in a conversation — not just a single Q&A. Each message considers the full conversation history, making responses contextually aware.
This is one of the most common n8n use cases: a chatbot backend that your website, Slack, or any frontend can call via webhook.
Architecture
POST /webhook/chat
↓
AI Agent Node
├── LLM: OpenAI Chat Model (gpt-4o-mini)
├── Memory: Window Buffer Memory (last 10 messages)
└── Tools: (optional — add later)
↓
Respond to Webhook
Step-by-Step Build
Step 1: Webhook Trigger
- Create a new workflow in n8n
- Add a Webhook node
- Set:
  - HTTP Method: POST
  - Path: chat
  - Response Mode: Using Respond to Webhook Node
- Note the webhook URL for testing
Step 2: Set Session ID
Every conversation needs a session ID so the chatbot maintains separate memory per user. Add a Set node:
sessionId = {{ $json.body.session_id || $json.headers['x-session-id'] || 'default' }}
userMessage = {{ $json.body.message }}
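The sessionId expression above is just a JavaScript fallback chain. As a plain-JS sketch (the `req` object here is a hypothetical stand-in for the incoming webhook item), it behaves like this:

```javascript
// Sketch of the session-id fallback the Set node performs:
// body value wins, then the header, then the literal 'default'.
function resolveSessionId(req) {
  return req.body.session_id || req.headers['x-session-id'] || 'default';
}

resolveSessionId({ body: { session_id: 'user-001' }, headers: {} }); // → 'user-001'
resolveSessionId({ body: {}, headers: { 'x-session-id': 'abc' } });  // → 'abc'
resolveSessionId({ body: {}, headers: {} });                         // → 'default'
```

The 'default' fallback keeps the workflow from failing when a caller omits the session id, at the cost of mixing all anonymous callers into one shared conversation.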
Step 3: AI Agent Node
Add an AI Agent node after the Set node:
- Connect the Set node's output to the AI Agent node
- Inside the AI Agent node:
  - Chat Model: Add “OpenAI Chat Model” sub-node
    - Model: gpt-4o-mini
    - Add credential: your OpenAI API key
  - Memory: Add “Window Buffer Memory” sub-node
    - Session Key: {{ $('Set').item.json.sessionId }}
    - Context Window Length: 10 (remember the last 10 messages)
  - System Prompt (in the AI Agent’s “System Message” field): You are a helpful assistant. Be concise but thorough. Today's date is {{ $now.toFormat('yyyy-MM-dd') }}.
  - User Message: {{ $('Set').item.json.userMessage }}
Step 4: Respond to Webhook
Add a Respond to Webhook node and set its response body to:
{
"response": "{{ $('AI Agent').item.json.output }}",
"session_id": "{{ $('Set').item.json.sessionId }}"
}
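One caveat: interpolating the model's output directly into a quoted JSON string breaks if the output contains quotes or newlines. If you hit that, a safer pattern is to build the body with JSON.stringify, sketched here in plain JavaScript (inside n8n you could do the same in an expression or a Code node):

```javascript
// JSON.stringify escapes quotes and newlines in the model output,
// so the resulting body is always valid JSON.
function buildResponseBody(output, sessionId) {
  return JSON.stringify({ response: output, session_id: sessionId });
}

const body = buildResponseBody('She said "hi"\nand left.', 'user-001');
JSON.parse(body).response; // round-trips cleanly despite the quotes and newline
```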
Step 5: Test
Activate the workflow and test with curl:
# First message
curl -X POST http://localhost:5678/webhook/chat \
-H "Content-Type: application/json" \
-d '{"message": "My name is Alex and I prefer Python.", "session_id": "user-001"}'
# Follow-up (should remember "Alex" and "Python")
curl -X POST http://localhost:5678/webhook/chat \
-H "Content-Type: application/json" \
-d '{"message": "What programming language did I mention?", "session_id": "user-001"}'
The second response should reference “Python” — the chatbot remembered.
Adding a System Prompt with Context
Customize the chatbot for a specific purpose:
Customer Support Bot:
You are a friendly customer support agent for Acme Corp.
Our products: TaskFlow Pro ($29/mo), DataPulse ($49/mo).
Support hours: Mon-Fri 9am-6pm PST.
If you can't resolve an issue, tell the customer to email [email protected].
Keep responses under 3 sentences unless more detail is requested.
Code Review Bot:
You are an expert code reviewer. When shown code:
1. Point out bugs or potential issues
2. Suggest improvements for readability
3. Note security concerns if any
4. Be specific with line references when possible
Be direct and technical. Skip pleasantries.
Adding Tools to Your Chatbot
Make the chatbot more capable by adding tools inside the AI Agent node:
HTTP Request Tool — lets the chatbot call APIs:
- Inside AI Agent, click “Add Tool” → “HTTP Request Tool”
- Configure the endpoint (e.g., your product catalog API)
- The chatbot will call it when the user asks about products
Code Tool — lets the chatbot run JavaScript for calculations:
- Add Tool → “Code Tool”
- The chatbot can now calculate things like “What’s 15% tip on $85.50?”
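For illustration, the kind of throwaway computation the Code Tool ends up running for that tip question looks like this (a sketch, not what the agent literally generates):

```javascript
// Tip calculation of the sort an agent delegates to the Code Tool.
function tip(amount, percent) {
  return amount * (percent / 100);
}

tip(85.5, 15); // 15% of $85.50, ≈ 12.83 after rounding to cents
```

Delegating arithmetic to a tool avoids the LLM doing math "in its head", where it is unreliable.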
Wikipedia Tool — for factual lookups:
- Add Tool → “Wikipedia”
- The chatbot can look up facts it’s uncertain about
Multi-Language Support
For chatbots serving international users, add a language detection step:
- Before the AI Agent, add a Code node (mode: Run Once for Each Item):
const message = $input.item.json.userMessage;
// Pass a language instruction through to the agent
return { json: { userMessage: message, languageHint: "Respond in the same language as the user" } };
- Include {{ $json.languageHint }} in the system prompt.
Persisting Memory Across Restarts
The default Window Buffer Memory is in-memory and lost when n8n restarts. For persistent memory:
- Replace “Window Buffer Memory” with Postgres Chat Memory or Redis Chat Memory
- Configure your database connection
- Memory now persists across restarts, keyed by session ID
For n8n Cloud, use the built-in Simple Memory (persisted storage).
Frequently Asked Questions
How do I connect this to my website’s chat widget?
Your frontend makes POST requests to the webhook URL. In JavaScript:
async function sendMessage(message, sessionId) {
const response = await fetch('https://your-n8n.com/webhook/chat', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ message, session_id: sessionId }),
});
const data = await response.json();
return data.response;
}
Use a persistent sessionId (stored in localStorage) so the user’s conversation history is maintained.
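A minimal sketch of that persistence (the 'chat_session_id' key is arbitrary, and the random-suffix id is a stand-in; a production app might prefer crypto.randomUUID()):

```javascript
// Reuse one session id per browser so the chatbot's memory survives page reloads.
function getSessionId() {
  const storage = typeof localStorage !== 'undefined' ? localStorage : null;
  let id = storage && storage.getItem('chat_session_id');
  if (!id) {
    // Simple random id; swap in crypto.randomUUID() where available
    id = 's-' + Math.random().toString(36).slice(2);
    if (storage) storage.setItem('chat_session_id', id);
  }
  return id;
}
```

Usage with the function above: `sendMessage('Hello', getSessionId());`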
What is the “Context Window Length” in Window Buffer Memory?
It’s how many recent messages to include. A value of 10 covers the last 5 exchanges (5 user messages + 5 AI replies). Higher values give better context but consume more tokens per request. For most chatbots, 8–16 is a good balance.
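Conceptually, the window buffer just keeps the tail of the message list. A sketch of that behavior:

```javascript
// Window buffer in miniature: keep only the most recent N messages.
function windowed(history, contextWindowLength) {
  return history.slice(-contextWindowLength);
}

const history = Array.from({ length: 14 }, (_, i) => `msg-${i + 1}`);
windowed(history, 10); // the oldest four messages are dropped
```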
Can I use Claude instead of OpenAI?
Yes. Replace “OpenAI Chat Model” with “Anthropic Chat Model” and select a Claude model. All other nodes stay identical.
How do I handle errors (API timeouts, etc.)?
Add an Error Trigger workflow that catches failures and sends an alert to Slack. In the main workflow, add an If node after the AI Agent to check $('AI Agent').item.error and return a friendly error message.
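On the frontend side, a simple guard against a hung webhook is a timeout wrapper. A sketch using Promise.race (the 15-second figure is arbitrary):

```javascript
// Reject if the webhook call takes longer than `ms` milliseconds.
function withTimeout(promise, ms) {
  return Promise.race([
    promise,
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error('timeout')), ms)
    ),
  ]);
}

// e.g. withTimeout(sendMessage(text, sessionId), 15000)
//        .catch(() => showFriendlyError());
```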
Next Steps
- n8n RAG Pipeline with Vector Database — Add document retrieval to your chatbot
- n8n Webhook and API Automation — Connect your chatbot to more services
- LangChain Memory Management — Code-based alternative for more control