82 tutorials
Give your agents a persistent memory. Explore advanced techniques for state management and integrating external memory sources in AgentScope.
A complete project-based guide to building an autonomous web researcher agent using AgentScope, including tool use and information synthesis.
Unlock the true potential of your agents. Learn to create and register custom tools in AgentScope to perform specialized tasks and interact with any API.
Take your AgentScope project live. A guide to packaging, containerizing with Docker, and deploying your multi-agent application for production use.
Your first step into AgentScope. A beginner-friendly tutorial on installation and building a simple, functional multi-agent application from scratch.
Add crucial human oversight to your AI systems. Learn to integrate user feedback and approval steps within your AgentScope agent interactions.
Master agent collaboration. This guide covers designing and implementing complex, multi-step workflows using AgentScope's powerful pipeline features.
Learn the fundamental building blocks of AgentScope. We break down agents, models, and communication paradigms for new developers.
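The core idea behind agents exchanging messages can be previewed without any framework. The sketch below is a toy, framework-free illustration of the message-passing paradigm (the `Msg` and `EchoAgent` names are made up for this example, not AgentScope's API):

```python
from dataclasses import dataclass

@dataclass
class Msg:
    """A minimal message envelope: who sent it and what it says."""
    sender: str
    content: str

class EchoAgent:
    """Toy agent: replies by acknowledging the incoming message."""
    def __init__(self, name: str):
        self.name = name

    def reply(self, msg: Msg) -> Msg:
        return Msg(sender=self.name, content=f"{self.name} received: {msg.content}")

# Two agents exchange one round of messages.
alice, bob = EchoAgent("alice"), EchoAgent("bob")
out = bob.reply(Msg(sender="alice", content="hello"))
print(out.content)  # bob received: hello
```

In a real framework, `reply` would call a language model instead of formatting a string, but the agent/message structure is the same.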
Learn to containerize CrewAI agents and expose them as a scalable web service using FastAPI and Docker for production-ready AI applications.
Learn how AutoGen agents write, execute, and debug code autonomously. Covers LocalCommandLineCodeExecutor, Docker sandbox, and iterative code refinement.
Build AutoGen workflows where humans review and approve AI actions before execution. Covers UserProxyAgent, approval gates, and interrupt-based oversight.
Build custom AI agents with AutoGPT Forge. Learn the agent protocol, implement custom abilities, and create specialized agents using the Forge SDK.
Extend AutoGPT with community plugins and integrations. Covers the plugin system, popular extensions, the AutoGPT marketplace, and how to contribute.
Explore real-world AutoGPT use cases: market research, content pipelines, code review, competitor analysis, and more. Includes prompts and configuration tips.
Build custom CrewAI tools to connect agents to databases, APIs, and external services. Covers @tool decorator, StructuredTool, BaseTool, and crewai-tools.
Build complex AI workflows with CrewAI Flows. Covers @start, @listen, @router decorators, conditional branching, and combining multiple crews in one pipeline.
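The decorator-driven flow idea can be sketched in plain Python. This is not CrewAI's implementation, just a toy `Flow` class (all names here are invented) showing how an entry step and its listeners chain into a linear pipeline:

```python
class Flow:
    """Toy event pipeline: start() marks the entry step, listen() chains steps."""
    def __init__(self):
        self.start_fn = None
        self.listeners = {}  # upstream step name -> downstream function

    def start(self, fn):
        self.start_fn = fn
        return fn

    def listen(self, upstream):
        def register(fn):
            self.listeners[upstream.__name__] = fn
            return fn
        return register

    def run(self, payload):
        result = self.start_fn(payload)
        step = self.start_fn.__name__
        while step in self.listeners:       # walk the chain of listeners
            fn = self.listeners[step]
            result = fn(result)
            step = fn.__name__
        return result

flow = Flow()

@flow.start
def fetch(topic):
    return f"notes on {topic}"

@flow.listen(fetch)
def summarize(notes):
    return notes.upper()

print(flow.run("agents"))  # NOTES ON AGENTS
```

Routers and conditional branches extend this same pattern: the next step is chosen from the previous step's return value rather than a fixed chain.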
Master CrewAI memory systems and knowledge bases. Covers short-term, long-term, entity memory, and knowledge sources for agents that learn across runs.
Learn how to build multi-agent workflows with CrewAI using sequential and hierarchical processes. Includes role definition, task delegation, and crew execution.
Deep dive into gstack's gear system: Founder, Engineering Manager, and QA Engineer personas. Understand prompt design and customization.
Maximize gstack with power-user tips, common pitfalls, community resources, and guidance on building your own Claude Code skill collections.
Use gstack for structured planning, code review, QA testing, shipping, and browser automation. Real workflow examples with each persona and command.
Install gstack into your Claude Code skills directory in under a minute. Covers cloning, directory structure, verifying commands, and first use.
gstack is a collection of 13 Claude Code slash commands by Garry Tan. Switch between Founder, Engineer, and QA personas for structured workflows.
Build LangChain agents that use tools to search the web, run code, and call APIs. Covers the @tool decorator, create_react_agent, and AgentExecutor.
Master LangChain memory management to build chatbots that remember conversation history. Covers in-memory, Redis, and LangGraph checkpointer approaches.
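The simplest form of conversation memory, a fixed-size window over recent turns, can be sketched with the standard library alone (the `ConversationBuffer` class is illustrative, not a LangChain type):

```python
from collections import deque

class ConversationBuffer:
    """Keeps only the most recent max_turns exchanges, like a windowed chat memory."""
    def __init__(self, max_turns: int = 3):
        self.turns = deque(maxlen=max_turns)  # oldest turns evicted automatically

    def add(self, user: str, ai: str):
        self.turns.append((user, ai))

    def as_prompt(self) -> str:
        return "\n".join(f"Human: {u}\nAI: {a}" for u, a in self.turns)

buf = ConversationBuffer(max_turns=2)
buf.add("hi", "hello!")
buf.add("what's 2+2?", "4")
buf.add("thanks", "you're welcome")  # first turn falls out of the window
print(buf.as_prompt())
```

Redis-backed memory and LangGraph checkpointers generalize this: the history lives outside the process so it survives restarts, but the prompt-assembly step looks the same.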
Use LangChain structured output to extract typed data from LLMs. Covers with_structured_output, Pydantic models, JSON mode, and data extraction pipelines.
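At its core, structured output means validating a model's JSON reply against a typed schema. A dependency-free sketch of that step (using a dataclass as a stand-in for a Pydantic model, with a hard-coded string in place of a real LLM reply):

```python
import json
from dataclasses import dataclass

@dataclass
class Person:
    """The target schema the model's reply must conform to."""
    name: str
    age: int

def parse_person(raw: str) -> Person:
    # json.loads raises on malformed JSON; the casts catch wrong field types.
    data = json.loads(raw)
    return Person(name=str(data["name"]), age=int(data["age"]))

reply = '{"name": "Ada", "age": 36}'  # stand-in for an LLM reply in JSON mode
person = parse_person(reply)
print(person)
```

Libraries add the important extras on top of this: generating the schema into the prompt, retrying on validation failure, and coercing edge cases.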
Deploy Letta agents in production with Docker, configure the server for scale, and integrate with web applications via the REST API and Python SDK.
Deep dive into Letta's three-tier memory: core memory, archival memory, and recall memory. Build agents that remember across unlimited conversations.
Build collaborative multi-agent systems with Letta. Connect agents via message passing, share memory blocks, and orchestrate agent networks for complex tasks.
Add tools and external integrations to Letta agents. Build custom functions, connect to APIs and databases, and give your agents persistent tool-use memory.
Learn how to build a RAG pipeline with LlamaIndex from scratch. Covers installation, SimpleDirectoryReader, VectorStoreIndex, and your first query engine.
Improve LlamaIndex RAG quality with hybrid search, reranking, HyDE, sub-questions, and recursive retrieval. Practical techniques for production RAG pipelines.
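The intuition behind hybrid search, blending a lexical match score with a dense-vector similarity, fits in a few lines. This toy sketch uses two-dimensional fake embeddings and a crude substring check as the keyword signal (real systems use BM25 and learned embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

docs = {
    "d1": {"text": "vector databases for retrieval", "emb": [0.9, 0.1]},
    "d2": {"text": "cooking pasta at home",          "emb": [0.1, 0.9]},
}
query_text, query_emb = "retrieval", [1.0, 0.0]

def hybrid_score(doc, alpha=0.5):
    keyword = 1.0 if query_text in doc["text"] else 0.0  # toy lexical signal
    dense = cosine(query_emb, doc["emb"])                # toy semantic signal
    return alpha * keyword + (1 - alpha) * dense         # weighted blend

ranked = sorted(docs, key=lambda d: hybrid_score(docs[d]), reverse=True)
print(ranked)  # ['d1', 'd2']
```

Reranking is a second pass over this list with a more expensive model; HyDE and sub-questions instead change what gets embedded as the query.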
Build LlamaIndex agents that use tools to query your data, call APIs, and execute code. Covers FunctionCallingAgent, ReActAgent, and query engine tools.
Master LlamaIndex document ingestion: load PDFs, Word docs, web pages, and databases into your RAG pipeline with SimpleDirectoryReader and LlamaParse.
Build production AI pipelines with LlamaIndex Workflows. Covers @step decorator, events, async execution, and multi-step RAG orchestration.
Step-by-step guide to install MetaGPT on Windows, macOS, and Linux. Configure OpenAI or local LLMs and run your first software company simulation.
Build custom MetaGPT roles and actions to create specialized AI teams. Define agent behavior, communication protocols, and multi-role workflows for your domain.
Use MetaGPT's Data Interpreter to analyze datasets, generate visualizations, and solve complex data problems through autonomous code execution.
Explore real-world MetaGPT use cases: code generation, documentation, data analysis, content creation, and research automation with working examples.
MetaGPT simulates a software company with AI agents as Product Manager, Engineer, and QA Engineer. Learn how it works, how to install it, and when to use it.
Learn how to build AI-powered automation workflows with n8n. Covers installation, AI Agent node, HTTP requests, and a complete email-to-summary workflow.
Build a stateful AI chatbot in n8n with conversation memory. Step-by-step guide covering the AI Agent node, memory buffer, and webhook trigger.
Build a RAG pipeline in n8n: ingest documents into a vector store, retrieve relevant chunks, and generate grounded answers with no code required.
Deploy n8n in production with Docker, PostgreSQL, and proper configuration. Covers environment setup, database persistence, reverse proxy, and monitoring.
Connect your n8n AI workflows to external services using webhooks and HTTP requests. Build real-world integrations with Slack, GitHub, Notion, and REST APIs.
Set up OpenClaw on a VPS or local machine. Install the daemon, configure your LLM provider, and have your first AI assistant conversation.
Install OpenClaw via curl, npm, or PowerShell. Covers macOS, Windows, and Linux setup, first-run configuration, and connecting your first platform.
Automate recurring tasks with OpenClaw Cron. Build a morning briefing, email triage system, and scheduled reports with Zapier MCP integration.
Write a custom SKILL.md file, understand the 6-level load priority, and publish your skill to ClawHub for the community to use.
Connect OpenClaw to WhatsApp, Telegram, Slack, Discord, iMessage, and more. Explore the community, contribution guide, and platform ecosystem.
Build a multi-agent OpenClaw system with specialized roles. Distribute agents across devices, isolate workspaces, and manage progressive trust.
Set up multi-channel routing in OpenClaw. Route Telegram to a coding agent, Slack to a work agent, and keep contexts isolated per workspace.
Give OpenClaw persistent memory with Mem9.ai. Configure soul.md for agent personality and decision.md for permanent knowledge across sessions.
Harden your OpenClaw setup with Docker sandboxing, VirusTotal skill scanning, firewall rules, and prompt injection mitigation strategies.
Build custom OpenClaw Skills and wire them with Nodes for complex workflows. Covers the skill API, Canvas node editor, and session management.
Connect OpenClaw to Telegram step by step. Create a bot with BotFather, configure pairing security, and chat with your AI from your phone.
Explore real-world OpenClaw use cases: unified messaging, voice assistants, scheduled tasks with Cron, browser automation, and Canvas workflows.
OpenClaw is a privacy-first personal AI assistant that runs locally and connects to 50+ messaging platforms. Learn what it does and why it matters.
Step-by-step guide to installing OpenDevin (OpenHands) locally using Docker. Covers prerequisites, LLM configuration, and running your first task.
Configure OpenHands (OpenDevin) for production: choose the right agent, configure LLMs, customize the runtime sandbox, and tune performance settings.
Explore practical OpenHands (OpenDevin) use cases: automated debugging, code refactoring, feature development, research tasks, and DevOps automation.
Integrate OpenHands (OpenDevin) with GitHub and CI pipelines. Automate code review, PR creation, issue resolution, and continuous integration workflows.
OpenDevin (now OpenHands) is an open-source AI software engineer that writes code, runs commands, and browses the web autonomously. Learn what it does.
Install OpenJarvis with pip, configure config.toml, connect an inference engine like Ollama, and run your first local AI agent task.
Deep dive into OpenJarvis Engine and Learning modules. Configure Ollama, vLLM, and SGLang backends. Build persistent knowledge with RAG and memory.
Extend OpenJarvis with custom tools, explore the CLI and Python SDK patterns, and learn how the community contributes to the local-first ecosystem.
Explore OpenJarvis use cases: private document Q&A, local code assistance, energy-efficient batch tasks, RAG knowledge bases, and offline agents.
OpenJarvis is a local-first AI agent framework with five modular components. Run agents with Ollama, vLLM, or cloud APIs. Learn the architecture.
Install the Paperclip server and React dashboard. Create your first company, define agent roles, and run a task through the heartbeat protocol.
Browse ClipHub for agent templates, install community plugins, and learn how Paperclip integrates external agents like Claude, Codex, and Cursor.
Master Paperclip governance: budget controls, role permissions, multi-company isolation, and the full REST API for programmatic agent management.
Explore Paperclip use cases: software dev teams, content ops, research squads, customer support orgs, and cross-agent coordination with budgets.
Paperclip orchestrates AI agent teams like a company: roles, org charts, budgets, and governance. Learn how this Company OS manages agent workflows.
Unlock AutoGen's full potential by integrating custom Python functions and APIs as tools, enabling agents to perform complex, real-world tasks.
Learn the fundamentals of Microsoft's AutoGen framework. A step-by-step tutorial to build a simple multi-agent system from scratch.
Leverage AutoGen's GroupChat to orchestrate complex workflows between multiple specialized AI agents for advanced problem-solving.
Install AutoGPT with Docker or local Python. Covers configuration, API key setup, and running your first autonomous agent task.
AutoGPT is an open-source autonomous agent that breaks goals into tasks and executes them without human input. Learn what it is and when to use it.
Build multi-agent workflows with CrewAI. Covers installation, creating role-based agents, assigning tasks, and running your first crew.
Learn how to build your first AI agent with LangChain. This beginner guide covers installation, core concepts, and a working hello-world example.
Build a production-ready RAG pipeline using LangChain and Pinecone. Learn to embed documents, store vectors, and retrieve context for accurate LLM responses.
Kickstart your AI development journey. Learn to install Letta and create your first autonomous agent with a step-by-step guide.