Intermediate CrewAI Tutorial

CrewAI Multi-Agent Workflows: Sequential and Hierarchical Crews

#crewai #multi-agent #workflows #sequential #hierarchical #python

Why Multi-Agent Workflows?

A single AI agent works well for focused tasks — writing a paragraph, answering a question, calling an API. But complex, real-world jobs require specialization and coordination: one agent to research, another to analyze, a third to write, and a manager to oversee quality.

CrewAI multi-agent workflows let you define a team of specialized agents (a “crew”), assign tasks, and choose how they collaborate. This mirrors how effective human teams operate — each member has a role, a goal, and a set of tools.

This guide covers the two core process types in CrewAI: sequential and hierarchical.

Prerequisites

pip install crewai crewai-tools python-dotenv
export OPENAI_API_KEY="sk-your-key"
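Since python-dotenv is installed above, you can keep the key in a .env file instead of exporting it in every shell session; load_dotenv() will pick it up at startup. A minimal .env (values are placeholders):

```shell
# .env -- loaded by load_dotenv(); never commit this file
OPENAI_API_KEY=sk-your-key
SERPER_API_KEY=your-serper-key   # only needed if you use SerperDevTool
```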

Core Building Blocks

CrewAI has four concepts:

  • Agent: a role-specific LLM with a goal, backstory, and optional tools
  • Task: a unit of work assigned to an agent, with an expected output
  • Crew: a collection of agents and tasks with a defined process
  • Process: how tasks are executed, either sequential or hierarchical

Defining Agents

Each agent has a role, goal, and backstory. These shape how the LLM behaves:

from crewai import Agent

researcher = Agent(
    role="Senior Research Analyst",
    goal="Uncover cutting-edge developments and gather accurate information on topics.",
    backstory=(
        "You are an experienced research analyst with a talent for finding reliable "
        "sources. You distill complex information into clear, factual summaries."
    ),
    verbose=True,
    allow_delegation=False,  # won't pass tasks to others
)

writer = Agent(
    role="Technical Content Writer",
    goal="Write clear, engaging technical articles based on provided research.",
    backstory=(
        "You are a skilled writer who specializes in developer content. "
        "You turn research notes into polished articles with practical examples."
    ),
    verbose=True,
    allow_delegation=False,
)

editor = Agent(
    role="Editor",
    goal="Review and improve articles for clarity, accuracy, and completeness.",
    backstory=(
        "You are a meticulous editor who checks for factual accuracy, "
        "logical flow, and actionable takeaways."
    ),
    verbose=True,
)

Defining Tasks

Each task has a description and expected_output, and is assigned to a specific agent:

from crewai import Task

research_task = Task(
    description=(
        "Research the latest developments in {topic}. "
        "Identify the top 5 key developments, notable tools or libraries, "
        "and any controversies or open questions in the community."
    ),
    expected_output=(
        "A structured research report with: executive summary, 5 key developments, "
        "tool comparisons, and source list."
    ),
    agent=researcher,
)

writing_task = Task(
    description=(
        "Using the research report, write a comprehensive technical article about {topic}. "
        "Target audience: intermediate developers. Include code examples where relevant. "
        "Minimum 800 words."
    ),
    expected_output=(
        "A complete article in Markdown with: introduction, 3-4 sections, "
        "at least one code example, and a conclusion."
    ),
    agent=writer,
)

editing_task = Task(
    description=(
        "Review the article for factual accuracy, clarity, and developer relevance. "
        "Suggest specific improvements and produce the final polished version."
    ),
    expected_output=(
        "The final edited article in Markdown, ready for publication."
    ),
    agent=editor,
)

Sequential Process

In a sequential process, tasks execute in the order they are defined. The output of each task is automatically passed as context to the next:

from crewai import Crew, Process

crew = Crew(
    agents=[researcher, writer, editor],
    tasks=[research_task, writing_task, editing_task],
    process=Process.sequential,
    verbose=True,
)

result = crew.kickoff(inputs={"topic": "AI Agent frameworks in 2025"})

# Access the final output
print(result.raw)

# Access individual task outputs
for task_output in result.tasks_output:
    print(f"Task: {task_output.description[:50]}...")
    print(f"Output: {task_output.raw[:200]}...")
    print("---")
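kickoff() fills the {topic} placeholders in task descriptions and agent goals from the inputs dict, much like Python's str.format. A standalone sketch of that substitution (plain Python, no CrewAI required):

```python
def interpolate(template: str, inputs: dict) -> str:
    """Replace each {key} placeholder with the matching value from inputs."""
    result = template
    for key, value in inputs.items():
        result = result.replace("{" + key + "}", str(value))
    return result

description = "Research the latest developments in {topic}."
print(interpolate(description, {"topic": "AI Agent frameworks in 2025"}))
# → Research the latest developments in AI Agent frameworks in 2025.
```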

Sequential process is the right choice when:

  • Each step depends on the previous step’s output
  • You want a clear, auditable pipeline
  • The workflow is linear (research → write → edit)

Hierarchical Process

In a hierarchical process, a manager agent automatically coordinates the crew. The manager decides which agent to delegate tasks to and in what order:

from crewai import Crew, Process

hierarchical_crew = Crew(
    agents=[researcher, writer, editor],
    tasks=[research_task, writing_task, editing_task],
    process=Process.hierarchical,
    manager_llm="gpt-4o",  # manager uses a capable model
    verbose=True,
    planning=True,         # manager plans before executing
)

result = hierarchical_crew.kickoff(inputs={"topic": "Vector databases for RAG"})
print(result.raw)

Hierarchical process is the right choice when:

  • You want the crew to adapt dynamically (manager re-assigns if an agent fails)
  • Tasks are loosely coupled and can be parallelized
  • You need human-in-the-loop oversight at the manager level

Adding Tools to Agents

Agents become much more powerful with tools. CrewAI ships with many built-in tools:

from crewai import Agent
from crewai_tools import SerperDevTool, WebsiteSearchTool, FileReadTool

# Web search tool
search_tool = SerperDevTool()

# Web content tool
web_tool = WebsiteSearchTool()

researcher_with_tools = Agent(
    role="Senior Research Analyst",
    goal="Research topics using real-time web data.",
    backstory="You are a detail-oriented analyst who verifies facts with live sources.",
    tools=[search_tool, web_tool],
    verbose=True,
)

You can also define custom tools with CrewAI's own @tool decorator (LangChain tools can often be adapted as well, depending on your CrewAI version):

from crewai.tools import tool
from crewai import Agent

@tool
def query_database(sql: str) -> str:
    """Execute a read-only SQL query and return results."""
    # Placeholder: a real implementation would connect to the database
    # and execute the validated, read-only query here.
    return "id=1, name=Alice, score=95"

analyst = Agent(
    role="Data Analyst",
    goal="Answer data questions by querying the internal database.",
    backstory="Expert in SQL and data analysis.",
    tools=[query_database],
)

Enabling Memory and Caching

For long-running crews or repeated runs, enable memory and caching:

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.sequential,
    memory=True,          # agents remember across runs
    cache=True,           # cache tool results
    max_rpm=100,          # rate limit API calls
    verbose=True,
)

With memory=True, each agent maintains short-term memory (within a run) and the crew maintains long-term memory (across runs) using a local vector store.
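Memory is backed by an embedding model; some CrewAI versions let you choose it via an embedder argument on Crew. The provider and config keys below are assumptions; check the docs for the options your version supports:

```python
# Hypothetical embedder configuration for crew memory; the exact keys
# depend on your CrewAI version.
embedder_config = {
    "provider": "openai",
    "config": {"model": "text-embedding-3-small"},
}

# Would be used as: Crew(..., memory=True, embedder=embedder_config)
print(embedder_config["provider"])  # → openai
```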

Complete End-to-End Example

Here is a minimal, fully runnable crew that researches a topic and writes a summary:

import os
from dotenv import load_dotenv
from crewai import Agent, Task, Crew, Process

load_dotenv()

# Agents
researcher = Agent(
    role="Researcher",
    goal="Find key facts about {topic}.",
    backstory="Expert at distilling complex topics into clear bullet points.",
    verbose=True,
)

writer = Agent(
    role="Writer",
    goal="Write a concise summary of the research findings.",
    backstory="Technical writer who makes research accessible to developers.",
    verbose=True,
)

# Tasks
research = Task(
    description="List the 5 most important facts about {topic}.",
    expected_output="A numbered list of 5 clear, accurate facts.",
    agent=researcher,
)

summary = Task(
    description="Write a 3-paragraph summary based on the research.",
    expected_output="A 3-paragraph summary in plain English.",
    agent=writer,
)

# Crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research, summary],
    process=Process.sequential,
    verbose=True,
)

result = crew.kickoff(inputs={"topic": "LangChain framework"})
print(result.raw)

Frequently Asked Questions

When should I use sequential vs hierarchical process?

Use sequential when your workflow has a clear, fixed order and each step depends on the previous one (e.g., research → write → review). Use hierarchical when you want flexibility — the manager can re-order, skip, or re-delegate tasks based on results. Hierarchical adds latency and cost (manager LLM calls) but handles complex, dynamic workflows better.

How do I pass data between tasks?

In sequential mode, the output of each task is automatically added to the context of the next task. You can also explicitly reference a task’s output:

writing_task = Task(
    description="Write an article based on the research.",
    expected_output="A complete article.",
    agent=writer,
    context=[research_task],  # explicitly use research_task output
)

Can agents work in parallel?

Partially. Open-source CrewAI supports marking individual tasks with async_execution=True so they run concurrently, with a later task collecting their results via context. For fully dynamic parallel orchestration you still need the hierarchical process, a framework like AutoGen, or custom async patterns.

How do I control which LLM each agent uses?

Pass an llm parameter to each Agent (depending on your CrewAI version, this can be a LangChain chat model, a model-name string, or a crewai LLM object):

from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

cheap_llm = ChatOpenAI(model="gpt-4o-mini")
powerful_llm = ChatAnthropic(model="claude-opus-4-20250514")

researcher = Agent(role="Researcher", goal="...", backstory="...", llm=cheap_llm)
writer = Agent(role="Writer", goal="...", backstory="...", llm=powerful_llm)

This lets you save cost on simple tasks while using capable models where quality matters.

What is allow_delegation?

When allow_delegation=True, an agent can ask other agents in the crew for help. This is useful in hierarchical setups but can cause unexpected delegation chains in sequential mode. Set it to False for agents that should always complete their own tasks independently.
