Intermediate LangChain Tutorial

LangChain Agents and Tools: Build Agents That Take Action

#langchain #agents #tools #react #tool-calling #python #openai

Why Agents Instead of Simple Chains?

A chain follows a fixed path: input → prompt → LLM → output. An agent is different — it decides dynamically which steps to take, which tools to call, and when it has enough information to answer.

LangChain agents are the backbone of any AI system that needs to interact with the real world: searching the web, querying a database, running Python code, or calling an external API. The agent receives a question, reasons about what tools it needs, calls them in sequence, and synthesizes a final answer.

This guide shows you how to build LangChain agents with custom tools from scratch.

Prerequisites

Install the required packages:

pip install langchain langchain-openai langchain-community python-dotenv

Optional — for web search:

pip install langchain-tavily

Set your API keys:

export OPENAI_API_KEY="sk-your-key"
export TAVILY_API_KEY="tvly-your-key"  # optional

Defining Tools with @tool

The simplest way to give an agent a capability is the @tool decorator. Write any Python function and decorate it — the docstring becomes the tool’s description, which the LLM uses to decide when to call it:

from langchain.tools import tool

@tool
def get_weather(location: str) -> str:
    """Get the current weather for a city. Input should be a city name like 'Seoul' or 'New York'."""
    # In production, call a real weather API here
    return f"Weather in {location}: 22°C, partly cloudy"

@tool
def calculate(expression: str) -> str:
    """Evaluate a mathematical expression. Input should be a valid Python math expression like '2 + 2' or '100 * 0.15'."""
    try:
        result = eval(expression, {"__builtins__": {}}, {})  # demo only — eval is not safe for untrusted input
        return str(result)
    except Exception as e:
        return f"Error: {e}"

The docstring is critical — write it as instructions to the LLM, not as developer documentation.

Building a ReAct Agent

The ReAct pattern (Reasoning + Acting) is the most common agent architecture. The agent alternates between reasoning about what to do next and acting (calling a tool), until it reaches a final answer.

import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain.agents import create_react_agent, AgentExecutor
from langchain import hub
from langchain.tools import tool

load_dotenv()

# 1. Define tools
@tool
def get_weather(location: str) -> str:
    """Get the current weather for a city."""
    return f"Weather in {location}: 22°C, partly cloudy"

@tool
def calculate(expression: str) -> str:
    """Evaluate a math expression like '2 + 2' or '50 * 1.1'."""
    try:
        return str(eval(expression, {"__builtins__": {}}, {}))
    except Exception as e:
        return f"Error: {e}"

tools = [get_weather, calculate]

# 2. Load the ReAct prompt template
prompt = hub.pull("hwchase17/react")

# 3. Create the agent
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
agent = create_react_agent(llm, tools, prompt)

# 4. Wrap in an executor (handles the loop of reasoning + acting)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,   # prints each thought/action step
    max_iterations=5,
)

# 5. Run it
result = agent_executor.invoke({
    "input": "What's the weather in Tokyo, and what is 22 multiplied by 1.15?"
})
print(result["output"])

With verbose=True you can watch the agent’s reasoning in real time:

Thought: I need to get the weather in Tokyo and calculate 22 * 1.15
Action: get_weather
Action Input: Tokyo
Observation: Weather in Tokyo: 22°C, partly cloudy
Thought: Now I need to calculate 22 * 1.15
Action: calculate
Action Input: 22 * 1.15
Observation: 25.3
Thought: I have both pieces of information
Final Answer: The weather in Tokyo is 22°C and partly cloudy. 22 × 1.15 = 25.3

Modern Tool Calling: bind_tools

Newer OpenAI and Anthropic models support native function/tool calling. This is more reliable than ReAct because the model returns structured JSON instead of free-form text:

from langchain_openai import ChatOpenAI
from langchain.tools import tool

@tool
def get_weather(location: str) -> str:
    """Get the current weather for a city."""
    return f"Weather in {location}: 22°C, partly cloudy"

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# Bind tools directly to the model
model_with_tools = llm.bind_tools([get_weather])

response = model_with_tools.invoke("What's the weather in Paris?")
print(response.tool_calls)
# [{'name': 'get_weather', 'args': {'location': 'Paris'}, 'id': '...'}]

For production agents, prefer create_tool_calling_agent from langchain.agents (paired with AgentExecutor), or LangGraph's prebuilt create_react_agent — both use bind_tools under the hood, so the tool-call loop is handled for you.

Adding Web Search with Tavily

For agents that need live web information, Tavily is the recommended search provider:

from langchain_tavily import TavilySearch

search_tool = TavilySearch(max_results=3)
# (The older TavilySearchResults from langchain_community also works,
# but requires the tavily-python package instead of langchain-tavily.)

tools = [search_tool, calculate]  # llm, prompt, and calculate reused from the ReAct section
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

result = agent_executor.invoke({
    "input": "What are the latest developments in the LangChain framework?"
})
print(result["output"])

Handling Errors Gracefully

Agents can call tools with invalid inputs. Use handle_parsing_errors to prevent crashes:

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    handle_parsing_errors=True,  # recover from malformed tool calls
    max_iterations=8,            # prevent infinite loops
    max_execution_time=30,       # timeout in seconds
)

Tool Input Validation with Pydantic

For tools with multiple parameters, use a Pydantic model for type safety:

from langchain.tools import StructuredTool
from pydantic import BaseModel, Field

class WeatherInput(BaseModel):
    location: str = Field(description="City name, e.g. 'Seoul'")
    unit: str = Field(default="celsius", description="Temperature unit: 'celsius' or 'fahrenheit'")

def get_weather_structured(location: str, unit: str = "celsius") -> str:
    """Get weather with unit preference."""
    temp = 22 if unit == "celsius" else 72
    return f"Weather in {location}: {temp}°{unit[0].upper()}, partly cloudy"

weather_tool = StructuredTool.from_function(
    func=get_weather_structured,
    name="get_weather",
    description="Get current weather for a city with preferred temperature unit.",
    args_schema=WeatherInput,
)

Frequently Asked Questions

What is the difference between ReAct and tool-calling agents?

ReAct is a prompt-based approach: the agent outputs Thought → Action → Observation as plain text, parsed by LangChain. Tool-calling (function calling) is a native model feature where the model returns structured JSON for tool invocations. Tool-calling is more reliable, especially for complex multi-tool workflows. Use create_tool_calling_agent (or LangGraph's prebuilt create_react_agent) with a modern OpenAI/Anthropic model and LangChain will use native tool calling automatically.

How do I add memory to an agent so it remembers past conversations?

Wrap the AgentExecutor in a RunnableWithMessageHistory. Note that the agent's prompt must contain a chat_history placeholder — for ReAct, pull the hub prompt hwchase17/react-chat instead of hwchase17/react:

from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_community.chat_message_histories import ChatMessageHistory

store = {}
def get_history(session_id: str):
    if session_id not in store:
        store[session_id] = ChatMessageHistory()
    return store[session_id]

agent_with_memory = RunnableWithMessageHistory(
    agent_executor,
    get_history,
    input_messages_key="input",
    history_messages_key="chat_history",
)

Can I build agents without an OpenAI API key?

Yes. Swap ChatOpenAI for ChatAnthropic, ChatGoogleGenerativeAI, or ChatOllama (for local models). The agent logic stays identical — only the LLM instantiation changes.

How many tools should an agent have?

Keep it under 10. Too many tools confuse the model and increase latency. Group related operations into single tools with multiple parameters rather than creating many single-operation tools.

What causes an agent to loop infinitely?

Usually a tool returns an error the agent can’t recover from, or the goal is ambiguous. Set max_iterations=8 and max_execution_time=60 as safety limits. Analyze verbose output to identify where the loop starts and improve the tool’s error messages or docstring.
