
Understanding AgentScope Core Concepts: A Beginner's Guide

#AgentScope #Architecture #AIConcepts #Beginner

If you are just getting started with AgentScope, you are in the right place. AgentScope is a multi-agent framework built by Alibaba that makes it straightforward to compose, coordinate, and deploy AI agents in Python. Version 1.0 introduced a major overhaul: the entire framework is now fully asynchronous, which makes it better suited to real workloads where multiple agents need to act concurrently. This guide walks you through the foundational concepts you need to know before writing your first multi-agent application.


What is AgentScope?

AgentScope is an open-source Python framework designed for building applications where multiple AI agents collaborate to complete tasks. Instead of wiring together raw API calls and managing conversation state by hand, AgentScope gives you composable building blocks — Agents, Models, Memory, Toolkits, and a message bus — that work together out of the box.

The framework targets developers who want to go beyond single-model chat and build systems where agents reason, delegate, use tools, and pass results to one another. If you have read ReAct: Reasoning and Acting — The Paper Behind Agent Frameworks, AgentScope’s ReActAgent is a production-ready implementation of exactly that pattern.

Important: If you have seen older AgentScope tutorials, be aware that v1.0 was a breaking rewrite. Classes like DialogAgent and DictDialogAgent are deprecated. Model configuration files are gone. Everything is now configured in code and runs asynchronously. The examples in this guide reflect the current v1.0 API.


Core Concepts: Agents, Models, and Memory

AgentScope is built around five primitives. Understanding what each one does — and how they relate — is the fastest way to become productive.

Agent

An Agent is the core actor in the framework. It receives a message, reasons about it, optionally calls tools, and returns a response. The two most important agent classes are:

  • ReActAgent — implements the Reason + Act loop. It can call tools, observe the result, reason again, and repeat until it has an answer. This is the workhorse for any agent that needs to do more than pure conversation.
  • UserAgent — a proxy that represents a human in the loop. When the conversation reaches a UserAgent, it pauses and waits for console input. Useful during development and for supervised workflows.

Every agent is initialized with three dependencies: a Model, a Memory, and optionally a Toolkit.
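To make the Reason + Act loop concrete, here is a minimal, framework-free sketch of the pattern that a ReAct-style agent follows. This is plain Python with a stubbed "reasoner" standing in for the LLM; the tool table, decision dictionary, and stopping rule are all illustrative assumptions, not AgentScope APIs:

```python
# Minimal ReAct-style loop: reason, optionally act with a tool, observe, repeat.
# Illustration of the pattern only -- not AgentScope's implementation.

def react_loop(question: str, tools: dict, reasoner, max_steps: int = 5) -> str:
    observations = []
    for _ in range(max_steps):
        # "Reason": the model decides on a final answer or a tool call
        decision = reasoner(question, observations)
        if decision["type"] == "answer":
            return decision["content"]
        # "Act": call the chosen tool and record the observation
        tool = tools[decision["tool"]]
        observations.append(tool(decision["input"]))
    return "No answer within step budget."

# A stub reasoner standing in for the LLM: call a tool once, then answer.
def fake_reasoner(question, observations):
    if observations:
        return {"type": "answer", "content": f"Result: {observations[-1]}"}
    return {"type": "tool", "tool": "calculator", "input": "2 + 2"}

tools = {"calculator": lambda expr: eval(expr)}  # toy tool; never eval untrusted input
print(react_loop("What is 2 + 2?", tools, fake_reasoner))  # prints Result: 4
```

The real ReActAgent does the same dance, except the reasoner is an LLM call and the tool observations are fed back into the model's context automatically.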

Model

A Model wraps a large language model provider and handles all API calls. In the current API, you instantiate the model class directly in code — there are no external config files.

For example, to use Alibaba’s DashScope service you instantiate DashScopeChatModel; to switch providers, you swap in the corresponding model class. The model object is responsible for formatting prompts, making HTTP requests, and returning responses that the agent can act on.

If you are new to LLMs in general, What Is a Large Language Model (LLM)? is a solid foundation before diving deeper.

Memory

Memory gives an agent access to context — the conversation history and any state accumulated during a session.

  • InMemoryMemory — keeps the conversation history in process memory. Fast, zero setup, and lost when the process ends. The right choice for prototypes and short tasks.
  • LongTermMemoryBase — the abstract base for persistent memory backends. Useful when you need agents to remember facts across restarts or sessions.

At the beginner level you will almost always start with InMemoryMemory and graduate to a persistent backend when your use case requires it.
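Conceptually, a session memory is just an append-only record of messages that can be replayed as context. A tiny async sketch of that idea (plain Python, not AgentScope's actual classes; the `add`/`get_memory` names mirror the general shape of the API but should be verified against your installed version):

```python
import asyncio

# A toy in-memory store mirroring the idea behind InMemoryMemory:
# append messages during a session, replay them as context. Illustrative only.
class ToyMemory:
    def __init__(self):
        self._messages = []

    async def add(self, msg: dict) -> None:
        self._messages.append(msg)

    async def get_memory(self) -> list:
        return list(self._messages)  # copy so callers can't mutate history

async def demo() -> int:
    memory = ToyMemory()
    await memory.add({"role": "user", "content": "Hi"})
    await memory.add({"role": "assistant", "content": "Hello!"})
    history = await memory.get_memory()
    return len(history)

count = asyncio.run(demo())
print(count)  # prints 2
```

A persistent backend would implement the same interface but write to disk or a database instead of a Python list.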

Toolkit

A Toolkit is a named collection of Python functions that an agent is allowed to call. You create a Toolkit instance, register functions onto it with toolkit.register_tool_function(fn), and pass the toolkit to your agent at construction time. The ReActAgent will automatically format the tool signatures for the LLM and parse the model’s tool-call responses.

A common built-in tool is execute_python_code, which lets an agent write and run Python snippets — a pattern also explored in AutoGen Code Execution: Build Agents That Write and Run Code.
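To see what "formatting tool signatures for the LLM" means in practice, here is a toy registry built on `inspect` (plain Python, illustrative only; AgentScope's real Toolkit additionally handles JSON schemas and tool-call response parsing for you):

```python
import inspect

# A toy registry showing what a Toolkit does conceptually: keep named functions
# and expose their signatures so an LLM can be told how to call them.
class ToyToolkit:
    def __init__(self):
        self._tools = {}

    def register_tool_function(self, fn):
        self._tools[fn.__name__] = fn

    def describe(self) -> list[str]:
        # One line per tool: name, signature, and docstring for the LLM prompt
        return [f"{name}{inspect.signature(fn)}: {inspect.getdoc(fn) or ''}"
                for name, fn in self._tools.items()]

    def call(self, name: str, **kwargs):
        return self._tools[name](**kwargs)

def add_numbers(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

toolkit = ToyToolkit()
toolkit.register_tool_function(add_numbers)
print(toolkit.describe()[0])  # prints add_numbers(a: int, b: int) -> int: Add two integers.
print(toolkit.call("add_numbers", a=2, b=3))  # prints 5
```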

MsgHub

MsgHub is the message bus for multi-agent scenarios. It maintains a list of participating agents, routes messages between them, and provides the infrastructure for orchestration pipelines. You will see it used as an async context manager in the examples below.


How Agents Communicate

In a single-agent setup, communication is a simple request/response loop between the agent and the user. In a multi-agent setup, AgentScope uses pipelines inside a MsgHub to define the flow of messages.

The most basic pipeline is sequential_pipeline, which passes a message from agent to agent in order — agent 1 responds, its output becomes agent 2’s input, and so on. The MsgHub keeps all participants aware of the shared conversation context.

This design is intentionally explicit: you choose the communication pattern rather than having the framework decide for you. That means you can build round-robin debates, hierarchical delegation chains, or any other topology you need, all within the same primitives.
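The sequential flow described above can be sketched in a few lines of framework-free Python, where each "agent" is just an async callable and each output feeds the next input (illustrative only; in AgentScope this role is played by sequential_pipeline inside a MsgHub):

```python
import asyncio

# A toy sequential pipeline: agent 1's output becomes agent 2's input, and so on.
async def toy_sequential_pipeline(agents, message):
    for agent in agents:
        message = await agent(message)
    return message

async def upper_agent(msg: str) -> str:
    return msg.upper()

async def exclaim_agent(msg: str) -> str:
    return msg + "!"

result = asyncio.run(toy_sequential_pipeline([upper_agent, exclaim_agent], "hello"))
print(result)  # prints HELLO!
```

Other topologies (round-robin, hierarchical delegation) are just different loops over the same agent callables.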


Building Your First Multi-Agent Application

The example below creates a simple two-agent loop: a ReActAgent that has a code-execution tool, and a UserAgent that acts as the human driver. Everything runs asynchronously using Python’s asyncio.

Step 1: Install AgentScope

pip install agentscope

AgentScope requires Python 3.10 or higher. Check your version first:

python --version

Step 2: Set your API key

AgentScope delegates model calls to the provider you choose. For the DashScope example below, export your key as an environment variable:

export DASHSCOPE_API_KEY="your-api-key-here"

Step 3: Build the application

import asyncio
import os
from agentscope.agent import ReActAgent, UserAgent
from agentscope.formatter import DashScopeChatFormatter
from agentscope.memory import InMemoryMemory
from agentscope.message import Msg
from agentscope.model import DashScopeChatModel
from agentscope.tool import Toolkit, execute_python_code

async def main():
    # --- 1. Configure the model ---
    model = DashScopeChatModel(
        model_name="qwen-max",
        api_key=os.environ["DASHSCOPE_API_KEY"],
    )

    # --- 2. Set up memory ---
    memory = InMemoryMemory()

    # --- 3. Register tools in a Toolkit ---
    toolkit = Toolkit()
    toolkit.register_tool_function(execute_python_code)

    # --- 4. Create the AI agent ---
    agent = ReActAgent(
        name="CodingAssistant",
        model=model,
        formatter=DashScopeChatFormatter(),  # converts messages into DashScope's prompt format
        memory=memory,
        toolkit=toolkit,
        sys_prompt=(
            "You are a helpful coding assistant. "
            "When asked to compute something, write and run Python code."
        ),
    )

    # --- 5. Create the user proxy ---
    user = UserAgent(name="User")

    # --- 6. Start the conversation loop ---
    message = Msg(
        name="User",
        content="Write Python code to calculate the first 10 Fibonacci numbers and print them.",
        role="user",
    )
    print(f"User: {message.get_text_content()}\n")

    while True:
        # Agent processes the message
        response = await agent(message)
        print(f"CodingAssistant: {response.get_text_content()}\n")

        # Hand off to the user (blocks for console input)
        message = await user(response)

        text = (message.get_text_content() or "").strip().lower()
        if text in ("exit", "quit", "bye"):
            print("Ending session.")
            break

if __name__ == "__main__":
    asyncio.run(main())

When you run this script, the ReActAgent will receive the prompt, decide it should write and execute Python code, invoke execute_python_code, observe the output, and return a final answer. The UserAgent then waits for your next input, continuing the loop.

Step 4: Extend to multi-agent with MsgHub

Once you are comfortable with single-agent flows, adding a second agent is straightforward:

import asyncio
import os
from agentscope.agent import ReActAgent
from agentscope.formatter import DashScopeChatFormatter
from agentscope.memory import InMemoryMemory
from agentscope.message import Msg
from agentscope.model import DashScopeChatModel
from agentscope.pipeline import MsgHub, sequential_pipeline

async def main():
    model = DashScopeChatModel(
        model_name="qwen-max",
        api_key=os.environ["DASHSCOPE_API_KEY"],
    )

    # Two agents with distinct roles
    planner = ReActAgent(
        name="Planner",
        model=model,
        formatter=DashScopeChatFormatter(),
        memory=InMemoryMemory(),
        sys_prompt="You break user requests into a numbered step-by-step plan.",
    )

    executor = ReActAgent(
        name="Executor",
        model=model,
        formatter=DashScopeChatFormatter(),
        memory=InMemoryMemory(),
        sys_prompt="You receive a plan and execute each step, reporting results.",
    )

    # MsgHub broadcasts each reply to every participant
    async with MsgHub(participants=[planner, executor]):
        initial_task = Msg(
            name="User",
            content="Build a Python function that reads a CSV file and returns summary statistics.",
            role="user",
        )

        # sequential_pipeline: planner → executor
        await sequential_pipeline([planner, executor], initial_task)

if __name__ == "__main__":
    asyncio.run(main())

The planner receives the task, produces a step-by-step plan, and its output flows automatically into the executor, which tries to carry out each step. The MsgHub ensures both agents share the same conversation context throughout.


Frequently Asked Questions

What Python version does AgentScope require?

AgentScope requires Python 3.10 or higher. The v1.0 rewrite relies on modern asyncio features and type-hint syntax that are not available in older versions. If you are on 3.9 or below, upgrade before installing.

My old AgentScope code stopped working after upgrading to v1.0. Why?

Version 1.0 was a ground-up rewrite. DialogAgent, DictDialogAgent, and the prompt-based ReAct agent were removed. Model configuration via external YAML or JSON files is also gone — models are now instantiated as Python objects directly in code. If you have pre-v1.0 code, you will need to rewrite the agent initialization and model configuration sections.

Can I use a model provider other than DashScope?

Yes. DashScope (DashScopeChatModel) is used in the examples because it is Alibaba’s own service and ships with first-class support in AgentScope. However, the framework exposes a common Model interface, and other providers can be integrated by using or subclassing the appropriate model wrapper for your LLM of choice.
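The "swap the class" idea can be illustrated without AgentScope at all: write the calling code against one small chat-model interface, and any wrapper implementing it drops in. The classes below are toys, not AgentScope's wrappers; real provider classes such as DashScopeChatModel follow the same spirit:

```python
import asyncio
from typing import Protocol

# One small interface the calling code depends on; provider classes are swappable.
class ChatModel(Protocol):
    async def __call__(self, messages: list[dict]) -> str: ...

class FakeDashScope:
    async def __call__(self, messages: list[dict]) -> str:
        return f"[dashscope] {messages[-1]['content']}"

class FakeOpenAI:
    async def __call__(self, messages: list[dict]) -> str:
        return f"[openai] {messages[-1]['content']}"

async def ask(model: ChatModel, prompt: str) -> str:
    # The calling code never names a concrete provider class.
    return await model([{"role": "user", "content": prompt}])

print(asyncio.run(ask(FakeDashScope(), "hi")))  # prints [dashscope] hi
print(asyncio.run(ask(FakeOpenAI(), "hi")))     # prints [openai] hi
```

Check your installed AgentScope version's model module for the exact list of supported provider wrappers.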

What happened to the RAG and Distribution modules?

Both modules were temporarily removed in v1.0 during the refactor. The maintainers have indicated they plan to reintroduce them in a future release with updated designs. If your use case depends on RAG or distributed agent execution, check the official release notes to see whether these modules have been restored before starting a new project.

How is AgentScope different from AutoGen or MetaGPT?

All three frameworks target multi-agent orchestration, but with different emphases. AgentScope focuses on asynchronous, production-ready pipelines with a clean separation between Model, Memory, and Toolkit. AutoGen emphasizes conversational back-and-forth and code-execution sandboxing (see AutoGen Code Execution: Build Agents That Write and Run Code). MetaGPT takes a software-team metaphor with specialized roles like Engineer and Product Manager. The best choice depends on whether your workload is more pipeline-oriented, conversational, or role-structured.
