
Letta Tool Use: Equip Agents with External Capabilities

#letta #tools #function-calling #integrations #external-apis #memgpt

Tools in Letta

Letta agents are stateful tool users — they remember which tools they’ve used and the results over time. When an agent calls a tool multiple times, it builds up context about what works and what doesn’t, stored in its archival memory.

This is fundamentally different from stateless function calling: a Letta agent can reason “I tried searching for X last week and found Y, so let me try a different approach this time.”

How Letta Tools Work

Under the hood, Letta tool calling uses function calling from the underlying LLM (GPT-4o, Claude, etc.). Letta adds:

  1. Tool registration — tools are persisted server-side and attached to agents
  2. Memory integration — tool results can be stored in core or archival memory
  3. Cross-session persistence — the agent’s tool usage history survives restarts

Creating Your First Tool

from letta import create_client

client = create_client()

# Define a plain Python function
def get_current_weather(location: str, unit: str = "celsius") -> str:
    """
    Get the current weather for a location.

    Args:
        location: City name or coordinates (e.g., "San Francisco, CA")
        unit: Temperature unit, "celsius" or "fahrenheit"

    Returns:
        Weather description as a string
    """
    import httpx
    # wttr.in takes the location in the URL path; the m/u query flags
    # select metric or USCS units
    unit_flag = "m" if unit == "celsius" else "u"
    response = httpx.get(
        f"https://wttr.in/{location}?format=3&{unit_flag}",  # format=3: compact one-line text
        timeout=10,
    )
    return response.text.strip()

# Register the tool with Letta
weather_tool = client.create_tool(get_current_weather)
print(f"Tool created: {weather_tool.id}")

Key requirements: Letta generates the tool's JSON schema automatically from the function signature and docstring, so always:

  • Use type annotations on all parameters
  • Write a clear docstring with Args: and Returns: sections
  • Keep function names descriptive (used as tool name)
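
To see why those requirements matter, here is an illustrative sketch of the kind of JSON schema a tool framework derives from `get_current_weather` — the exact field names Letta emits internally may differ, but the shape follows the OpenAI function-calling convention:

```python
# Illustrative schema sketch -- type annotations become "type" fields,
# docstring Args become parameter descriptions, and parameters with
# defaults become optional.
schema = {
    "name": "get_current_weather",
    "description": "Get the current weather for a location.",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": 'City name or coordinates (e.g., "San Francisco, CA")',
            },
            "unit": {
                "type": "string",
                "description": 'Temperature unit, "celsius" or "fahrenheit"',
            },
        },
        "required": ["location"],  # unit has a default, so it is optional
    },
}
```

If a parameter lacks a type annotation or the docstring omits it, there is nothing to fill these fields from, which is why schema generation fails or degrades.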

Attaching Tools to an Agent

from letta.schemas.memory import ChatMemory
from letta.schemas.llm_config import LLMConfig
from letta.schemas.embedding_config import EmbeddingConfig

llm_config = LLMConfig(
    model="gpt-4o-mini",
    model_endpoint_type="openai",
    model_endpoint="https://api.openai.com/v1",
    context_window=128000,
)

embed_config = EmbeddingConfig(
    embedding_model="text-embedding-3-small",
    embedding_endpoint_type="openai",
    embedding_endpoint="https://api.openai.com/v1",
    embedding_dim=1536,
)

# Create agent with tool
agent = client.create_agent(
    name="weather_assistant",
    system="You are a helpful assistant. Use tools to get current information.",
    memory=ChatMemory(
        human="User is asking about weather.",
        persona="I am a helpful assistant with weather lookup capabilities.",
    ),
    tools=["get_current_weather"],  # by function name
    llm_config=llm_config,
    embedding_config=embed_config,
)

# Or attach to existing agent
client.add_tool_to_agent(
    agent_id=agent.id,
    tool_id=weather_tool.id,
)

Database Integration Tool

Connect agents to your data:

import sqlite3
from typing import Optional

def query_customer_database(
    customer_id: Optional[str] = None,
    email: Optional[str] = None,
) -> str:
    """
    Query the customer database by ID or email.

    Args:
        customer_id: Customer's unique identifier (e.g., "C001")
        email: Customer's email address

    Returns:
        Customer details as formatted string, or error message if not found
    """
    if not customer_id and not email:
        return "Error: provide either customer_id or email"

    conn = sqlite3.connect("customers.db")
    try:
        cursor = conn.cursor()
        if customer_id:
            cursor.execute(
                "SELECT id, name, email, plan, created_at FROM customers WHERE id = ?",
                (customer_id,),
            )
        else:
            cursor.execute(
                "SELECT id, name, email, plan, created_at FROM customers WHERE email = ?",
                (email,),
            )
        row = cursor.fetchone()
    finally:
        conn.close()  # always release the connection, even on error

    if not row:
        return f"No customer found for {'ID: ' + customer_id if customer_id else 'email: ' + email}"

    return f"Customer: {row[1]}\nEmail: {row[2]}\nPlan: {row[3]}\nJoined: {row[4]}"

db_tool = client.create_tool(query_customer_database)

Web Search Tool

def search_web(query: str, num_results: int = 5) -> str:
    """
    Search the web for current information.

    Args:
        query: The search query
        num_results: Number of results to return (1-10)

    Returns:
        Search results as formatted text with titles and snippets
    """
    import httpx
    response = httpx.get(
        "https://api.duckduckgo.com/",
        params={
            "q": query,
            "format": "json",
            "no_html": "1",
            "skip_disambig": "1",
        },
        timeout=10,
    )
    data = response.json()

    results = []
    # Abstract (top result)
    if data.get("Abstract"):
        results.append(f"Summary: {data['Abstract']}")

    # Related topics
    for topic in data.get("RelatedTopics", [])[:num_results]:
        if "Text" in topic:
            results.append(f"- {topic['Text'][:200]}")

    return "\n".join(results) if results else "No results found."

search_tool = client.create_tool(search_web)

Code Execution Tool

Give agents the ability to run Python:

import subprocess
import sys
import tempfile
import os

def execute_python_code(code: str) -> str:
    """
    Execute Python code and return the output.
    Use for calculations, data analysis, and code testing.

    Args:
        code: Valid Python code to execute

    Returns:
        stdout output, or error message if execution fails
    """
    # Write the code to a temp file and run it in a separate process.
    # NOTE: this is isolation, not sandboxing -- the code runs with the
    # server's privileges. Use a real sandbox (container, seccomp, etc.)
    # before exposing this to untrusted input.
    with tempfile.NamedTemporaryFile(
        mode="w", suffix=".py", delete=False
    ) as f:
        f.write(code)
        tmpfile = f.name

    try:
        result = subprocess.run(
            [sys.executable, tmpfile],  # same interpreter as the server
            capture_output=True,
            text=True,
            timeout=10,  # 10 second timeout
        )
        output = result.stdout if result.returncode == 0 else result.stderr
        return output[:2000]  # cap output size to keep tool results small
    except subprocess.TimeoutExpired:
        return "Error: Code execution timed out (>10s)"
    except Exception as e:
        return f"Error: {str(e)}"
    finally:
        os.unlink(tmpfile)

code_tool = client.create_tool(execute_python_code)

Using Letta’s Built-In Tools

Letta ships with several pre-built tools:

# List all available built-in tools
tools = client.list_tools()
for tool in tools:
    print(f"{tool.name}: {tool.description[:80]}")

# Common built-in tools:
# - archival_memory_search: search agent's long-term memory
# - archival_memory_insert: save to long-term memory
# - core_memory_append: append to core memory
# - core_memory_replace: update core memory
# - send_message: send message to user (primary response)

Built-in memory tools are always attached — they’re how agents manage their own memory. You cannot remove them.

Building an Agent with Multiple Tools

# Create all tools
tools_to_create = [
    get_current_weather,
    search_web,
    execute_python_code,
    query_customer_database,
]

created_tools = [client.create_tool(fn) for fn in tools_to_create]
tool_names = [t.name for t in created_tools]

# Create a powerful agent with all tools
power_agent = client.create_agent(
    name="power_assistant",
    system="""You are an advanced AI assistant with multiple capabilities:
- Weather lookup: check current conditions anywhere
- Web search: find current information
- Code execution: run Python for calculations and analysis
- Customer lookup: query the customer database

Use the most appropriate tool for each request.
Store important findings in your archival memory for future reference.""",
    memory=ChatMemory(
        human="User needs varied assistance.",
        persona="I am a capable assistant with tools for weather, search, code, and customer data.",
    ),
    tools=tool_names,
    llm_config=llm_config,
    embedding_config=embed_config,
)

# Test it
response = client.send_message(
    agent_id=power_agent.id,
    message="What's the weather in Tokyo? Also, calculate the compound interest on $10,000 at 5% for 10 years.",
    role="user",
)
print(response.messages[-1].text)
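
The compound-interest half of that request is easy to verify locally; the agent's code-execution tool should produce something equivalent to:

```python
# A = P * (1 + r) ** n, with annual compounding
principal, rate, years = 10_000, 0.05, 10
amount = principal * (1 + rate) ** years
print(f"${amount:,.2f}")  # -> $16,288.95
```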

Tool Use Inspection

View what tools an agent has called:

# Get agent's message history (includes tool calls)
messages = client.get_messages(agent_id=power_agent.id, limit=20)
for msg in messages:
    if msg.role == "tool":
        print(f"Tool call: {msg.tool_call_id}")
        print(f"Result: {msg.text[:200]}")

# Check what the agent stored from tool results
archive = client.get_archival_memory(
    agent_id=power_agent.id,
    query="weather Tokyo",
    limit=5,
)
for passage in archive:
    print(passage.text)

Frequently Asked Questions

Can I update a tool without recreating the agent?

Yes. Update the tool via client.update_tool() and the agent will use the new version on next invocation. No need to recreate the agent.

How do I handle tools that require credentials?

Don’t put credentials in the tool function itself. Use environment variables:

import os

def call_my_api(query: str) -> str:
    """Call my API with the query."""
    api_key = os.getenv("MY_API_KEY")
    # ...

The key is set in the server’s environment, not in the tool code.
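
A slightly fuller version of that pattern (the env-var name and endpoint are placeholders, not a real API) fails loudly when the key is missing, which gives the agent an actionable error instead of a cryptic HTTP 401:

```python
import os

def call_my_api(query: str) -> str:
    """Call my API with the query.

    Args:
        query: The query string to send.

    Returns:
        The API response body, or an error message.
    """
    api_key = os.getenv("MY_API_KEY")
    if not api_key:
        # Returned to the agent as the tool result, so it can inform the user
        return "Error: MY_API_KEY is not set in the server environment"
    import httpx
    response = httpx.get(
        "https://api.example.com/search",  # placeholder endpoint
        params={"q": query},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    return response.text[:2000]
```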

What happens when a tool raises an exception?

Letta catches exceptions and returns the error message to the agent. The agent can then decide to retry, try a different approach, or inform the user. Add informative error messages in your functions.
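
For example, instead of letting a bare exception bubble up, a tool can catch it and return a message the agent can act on (a sketch; the function name and config file are illustrative):

```python
def read_config_value(key: str) -> str:
    """Read a value from the service config file.

    Args:
        key: The config key to look up.

    Returns:
        The value, or an actionable error message.
    """
    import json
    try:
        with open("service_config.json") as f:
            config = json.load(f)
    except FileNotFoundError:
        return "Error: service_config.json not found; check the working directory"
    except json.JSONDecodeError as e:
        return f"Error: service_config.json is not valid JSON ({e})"
    if key not in config:
        return f"Error: unknown key {key!r}; available keys: {sorted(config)}"
    return str(config[key])
```

Each error string tells the agent what went wrong and what to try next, which is far more useful to it than a raw traceback.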

Can I pass complex objects to tools?

Tool parameters must be JSON-serializable types: str, int, float, bool, list, dict. For complex inputs, serialize to JSON string first and parse inside the function.
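
For instance, a structured filter object can travel as a JSON string parameter (a sketch; the echo at the end is a stand-in for your real query logic):

```python
import json

def filter_records(filters_json: str) -> str:
    """Filter records using a JSON-encoded filter object.

    Args:
        filters_json: JSON string, e.g. '{"plan": "pro", "min_age_days": 30}'

    Returns:
        A summary of the parsed filters, or an error message.
    """
    try:
        filters = json.loads(filters_json)
    except json.JSONDecodeError as e:
        return f"Error: filters_json is not valid JSON ({e})"
    # Real code would query a datastore here; we just echo what was parsed.
    return ", ".join(f"{k}={v}" for k, v in filters.items())

print(filter_records('{"plan": "pro", "min_age_days": 30}'))  # -> plan=pro, min_age_days=30
```

Documenting the expected JSON shape in the docstring is important: it is the only place the LLM can learn what to put in the string.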

How many tools can an agent have?

There’s no hard limit, but LLM context windows constrain how many tool schemas fit. With GPT-4o’s 128K context, 20–30 tools work reliably. For more tools, use tool selection logic or multiple specialized agents.
