## System Requirements
Before installing MetaGPT, verify these requirements:
| Requirement | Minimum | Recommended |
|---|---|---|
| Python | 3.9+ | 3.11+ |
| RAM | 4GB | 8GB+ |
| Storage | 2GB | 5GB |
| OS | Windows 10, macOS 12, Ubuntu 20.04 | Latest versions |
| LLM API | OpenAI API key | GPT-4o recommended |
MetaGPT requires Python 3.9 or higher. Check your version:

```bash
python --version
# or
python3 --version
```
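The requirements table above can also be checked programmatically. A minimal, stdlib-only preflight sketch (the `check_requirements` helper is illustrative, not part of MetaGPT):

```python
# preflight.py -- sanity-check the Python version and free disk space
# before installing MetaGPT (illustrative helper, not part of MetaGPT)
import shutil
import sys

def check_requirements(min_python=(3, 9), min_free_gb=2):
    """Return a list of problems; an empty list means the basics look OK."""
    problems = []
    if sys.version_info < min_python:
        problems.append(
            f"Python {sys.version_info.major}.{sys.version_info.minor} found; "
            f"{min_python[0]}.{min_python[1]}+ required"
        )
    free_gb = shutil.disk_usage(".").free / 1024**3
    if free_gb < min_free_gb:
        problems.append(f"Only {free_gb:.1f} GB free; {min_free_gb} GB required")
    return problems

if __name__ == "__main__":
    issues = check_requirements()
    print("OK" if not issues else "\n".join(issues))
```

RAM is deliberately omitted because checking it portably requires a third-party package such as `psutil`.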
## Installation

### Method 1: pip (Recommended)

```bash
pip install metagpt
```

For the latest development version:

```bash
pip install git+https://github.com/geekan/MetaGPT.git
```

### Method 2: From Source

```bash
git clone https://github.com/geekan/MetaGPT.git
cd MetaGPT
pip install -e .
```

### Method 3: conda Environment (Recommended for Isolation)

```bash
# Create a clean environment
conda create -n metagpt python=3.11 -y
conda activate metagpt

# Install MetaGPT
pip install metagpt
```
## Configuration

MetaGPT requires a configuration file. Initialize it:

```bash
metagpt --init-config
```

This creates `~/.metagpt/config2.yaml`. Edit it:

```yaml
# ~/.metagpt/config2.yaml
llm:
  api_type: "openai"
  model: "gpt-4o-mini"  # or gpt-4o for best results
  base_url: "https://api.openai.com/v1"
  api_key: "sk-..."  # your OpenAI API key
  # Optional: proxy settings
  # proxy: "http://127.0.0.1:7890"

# Storage for generated artifacts
workspace:
  path: "./workspace"
```
**Security tip:** Use environment variables instead of hardcoding the API key:

```yaml
llm:
  api_key: "${OPENAI_API_KEY}"
```

Then set the environment variable:

```bash
export OPENAI_API_KEY="sk-..."

# On Windows (cmd):
set OPENAI_API_KEY=sk-...
```
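Before launching MetaGPT it can be worth confirming that the variable is actually visible to Python; a small illustrative check (the `key_status` helper is not part of MetaGPT):

```python
# check_key.py -- confirm the API key is visible to the process
# (illustrative helper; never print the full key, only a masked prefix)
import os

def key_status(name="OPENAI_API_KEY"):
    """Report whether the key is set, without revealing the full value."""
    value = os.environ.get(name)
    if value:
        return f"{name} is set ({value[:6]}...)"  # masked prefix only
    return f"{name} is NOT set; export it before running metagpt"

print(key_status())
```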
## Verify Installation

Run a quick test to confirm everything works:

```bash
python -c "import metagpt; print(metagpt.__version__)"
```

Or run a minimal example:

```python
# test_install.py
import asyncio

from metagpt.roles import ProductManager

async def test():
    pm = ProductManager()
    print("MetaGPT loaded. ProductManager role initialized.")
    print("Installation successful!")

asyncio.run(test())
```

```bash
python test_install.py
```
## Your First Software Project

Once installed, run MetaGPT on a real task:

```bash
# Using the CLI
metagpt "Write a Python script that downloads the top 10 trending GitHub repos"

# A more complex project
metagpt "Build a REST API for a todo list with CRUD operations using FastAPI"
```

Or via the Python API:

```python
import asyncio

from metagpt.software_company import generate_repo

async def main():
    repo = await generate_repo(
        idea="Build a command-line calculator with history",
        investment=3.0,  # simulation "budget": roughly caps total LLM spend
        n_round=5,       # max rounds of collaboration
    )
    print(f"Project generated at: {repo.workdir}")

asyncio.run(main())
```

MetaGPT prints the simulated collaboration between the roles (PM → Architect → Engineers → QA).
## Configuring Different LLM Providers

### Claude (Anthropic)

```yaml
llm:
  api_type: "anthropic"
  model: "claude-sonnet-4-6"
  api_key: "${ANTHROPIC_API_KEY}"
```

### Azure OpenAI

```yaml
llm:
  api_type: "azure"
  model: "gpt-4o"
  base_url: "https://YOUR_RESOURCE.openai.azure.com/"
  api_key: "${AZURE_OPENAI_API_KEY}"
  api_version: "2024-05-01-preview"
```

### Ollama (Local Models)

```yaml
llm:
  api_type: "ollama"
  model: "llama3.2"
  base_url: "http://localhost:11434/api"
```

First start Ollama:

```bash
ollama pull llama3.2
ollama serve
```
**Note:** Local models produce significantly lower-quality output on complex multi-role workflows. GPT-4o or Claude is strongly recommended for MetaGPT.
### Multiple LLMs (Role-Specific)

Configure different models for different roles to optimize cost:

```yaml
# ~/.metagpt/config2.yaml
llm:
  api_type: "openai"
  model: "gpt-4o-mini"  # default (cheaper model for most roles)
  api_key: "${OPENAI_API_KEY}"

# Override for specific roles
roles:
  architect:
    llm:
      model: "gpt-4o"  # stronger model only for architecture
  product_manager:
    llm:
      model: "gpt-4o"  # stronger model for requirements
```
## Optional Dependencies

### Mermaid (Diagrams)

MetaGPT can generate architecture diagrams using Mermaid:

```bash
# macOS
brew install mermaid-js/mermaid/mmdc

# npm
npm install -g @mermaid-js/mermaid-cli
```

Enable it in the config:

```yaml
mermaid:
  engine: "nodejs"  # or "pyppeteer" for headless Chrome
```

### Browser Automation (for data collection roles)

```bash
pip install "metagpt[playwright]"
playwright install
```
## Workspace and Output

MetaGPT generates files in a `workspace/` directory by default. Each project gets its own subdirectory:

```text
workspace/
└── my_project_20260408/
    ├── docs/
    │   ├── prd.md            ← Product Requirements Document
    │   └── system_design.md  ← Architecture design
    ├── src/
    │   └── *.py              ← Generated source files
    └── tests/
        └── test_*.py         ← Generated test files
```
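Given this layout, a short stdlib sketch can gather the generated artifacts for inspection (the `collect_artifacts` helper is illustrative, not part of MetaGPT's API):

```python
# list_artifacts.py -- collect generated files from a MetaGPT project directory
# (illustrative helper based on the layout above; not part of MetaGPT)
from pathlib import Path

def collect_artifacts(project_dir):
    """Map each artifact category to the filenames found under it."""
    root = Path(project_dir)
    return {
        "docs": sorted(p.name for p in root.glob("docs/*.md")),
        "src": sorted(p.name for p in root.glob("src/*.py")),
        "tests": sorted(p.name for p in root.glob("tests/test_*.py")),
    }
```

For example, `collect_artifacts("workspace/my_project_20260408")["docs"]` would list the PRD and design documents.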
Change the workspace path:

```yaml
workspace:
  path: "/home/user/metagpt-projects"
```
## Frequently Asked Questions

### How much does MetaGPT cost per run?

A typical project with 5 rounds costs $0.05–$0.30 using gpt-4o-mini. Complex projects with many files can cost $1–$3 with gpt-4o. Use the `investment` parameter to cap costs; a lower value means fewer iterations.
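Run cost is essentially token volume times per-token price, so a back-of-envelope estimate is easy to sketch. The default prices below are placeholders roughly in gpt-4o-mini's range; always check your provider's current price sheet:

```python
def estimate_cost(input_tokens, output_tokens,
                  price_in_per_m=0.15, price_out_per_m=0.60):
    """Rough run cost in USD from token counts and per-million-token prices.

    The default prices are illustrative placeholders, not authoritative.
    """
    return (input_tokens / 1_000_000) * price_in_per_m \
         + (output_tokens / 1_000_000) * price_out_per_m

# e.g. a 5-round run consuming ~200k input and ~50k output tokens
print(f"${estimate_cost(200_000, 50_000):.2f}")  # $0.06
```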
### Can I run MetaGPT without internet?

Only with a local LLM (Ollama). All other providers require internet access to reach the LLM API endpoint.

### What if installation fails on Windows?

Common fixes:

```bash
# Upgrade pip first
python -m pip install --upgrade pip

# Install with no binary (works around some packaging issues)
pip install metagpt --no-binary metagpt

# If tiktoken fails to build:
pip install tiktoken --force-reinstall
```

### How do I update MetaGPT?

```bash
pip install --upgrade metagpt
```

Check the current version: `python -c "import metagpt; print(metagpt.__version__)"`

### Where is the generated code saved?

By default in `./workspace/[project_name]/`, relative to where you run the command. Change it with the `workspace.path` config option or the `--workspace` CLI flag.
## Next Steps
- MetaGPT Data Interpreter — Use MetaGPT’s powerful data analysis agent
- MetaGPT Custom Roles and Actions — Build custom agents for your use case
- What Is MetaGPT — Understand MetaGPT’s architecture