
Introduction to LangChain: Build Your First AI Agent

#langchain #ai-agents #python #llm #openai

What Is LangChain?

LangChain is an open-source framework that makes it straightforward to build applications powered by large language models (LLMs). Instead of writing low-level API calls to OpenAI or Anthropic directly, LangChain gives you composable building blocks — chains, agents, tools, memory — that snap together like LEGO bricks.

Since its release in late 2022, LangChain has become one of the most-starred AI frameworks on GitHub, used by companies from early-stage startups to Fortune 500 enterprises. The core idea is simple: LLMs are more powerful when they can take actions in the real world, not just generate text.

Installing LangChain

You need Python 3.9+ and an OpenAI API key (or a key for any other supported LLM provider).

pip install langchain langchain-openai python-dotenv

Create a .env file in your project root:

OPENAI_API_KEY=sk-your-key-here

Core Concepts

Before writing code, you need to understand three building blocks:

1. LLM / Chat Model — The brain. Wraps an API call to a model like gpt-4o and returns the model's response as a message object.

2. Prompt Template — A reusable template with variables. Keeps prompts clean and testable.

3. Chain — Connects a prompt template to an LLM. Input goes in, output comes out.

These three concepts handle 80% of real-world use cases.

Building Your First Chain

Here is a complete, runnable example:

import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

load_dotenv()

# 1. Define the model
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# 2. Define the prompt
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that explains concepts clearly."),
    ("human", "{question}"),
])

# 3. Build the chain using the pipe operator
chain = prompt | llm | StrOutputParser()

# 4. Run it
response = chain.invoke({"question": "What is a vector database?"})
print(response)

Save this as main.py and run it:

python main.py

You should see a clear explanation of vector databases printed to your terminal. That’s a working LangChain chain.

What Just Happened?

The | pipe operator comes from the LangChain Expression Language (LCEL). Each component transforms the data and passes it to the next:

  1. prompt takes {"question": "..."} → produces a list of chat messages
  2. llm takes the messages → calls the OpenAI API → returns an AIMessage
  3. StrOutputParser() takes the AIMessage → extracts the .content string

This composable pattern is the foundation of every LangChain application.
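To make the mechanics concrete, here is a toy re-implementation of the pipe pattern in plain Python. This is not LangChain's actual code — just a sketch of the idea that each stage is a callable whose output feeds the next, and that | builds the composition:

```python
class Step:
    """A minimal pipeline stage: wraps a function and supports the | operator."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Compose: run self first, then feed the result into other
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Stand-ins for prompt | llm | parser
prompt = Step(lambda d: [("system", "You are helpful."), ("human", d["question"])])
llm = Step(lambda msgs: {"content": f"Echo: {msgs[-1][1]}"})   # fake model
parser = Step(lambda msg: msg["content"])

chain = prompt | llm | parser
print(chain.invoke({"question": "What is a vector database?"}))
# → Echo: What is a vector database?
```

Real LCEL components do much more (streaming, batching, async), but the composition model is the same: invoke the first stage, pipe its output into the next.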

Frequently Asked Questions

Is LangChain only for Python?

No — LangChain has an official JavaScript/TypeScript version called LangChain.js. Both versions share the same concepts (chains, agents, tools) and are maintained by the same team. The Python version is more mature and has more integrations.

Does LangChain only work with OpenAI?

No. LangChain supports dozens of model providers including Anthropic Claude, Google Gemini, Mistral, Cohere, Ollama (local models), and more. You swap the ChatOpenAI import for the provider of your choice — the rest of the chain stays identical.
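For example, switching from OpenAI to Anthropic is typically a one-line change. This sketch assumes the langchain-anthropic package is installed and ANTHROPIC_API_KEY is set; model names change over time, so check the provider's documentation for current ones:

```python
from langchain_anthropic import ChatAnthropic

# Same role as ChatOpenAI — only the import and model name change
llm = ChatAnthropic(model="claude-3-5-sonnet-latest", temperature=0)

# The rest of the chain from earlier stays identical:
# chain = prompt | llm | StrOutputParser()
```

Because every chat model implements the same interface, the prompt template and output parser don't need to know which provider sits in the middle.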

How is LangChain different from calling the OpenAI API directly?

Direct API calls work fine for simple completions. LangChain adds value when you need: memory across conversations, tool use (agents that can search the web or query a database), retrieval-augmented generation (RAG), structured output parsing, and multi-step pipelines. For a single Q&A call, direct API is simpler. For anything more complex, LangChain saves significant boilerplate.

Next Steps

Now that you have a working chain, natural next steps include adding conversation memory, giving the model tools to call, and building a retrieval-augmented generation (RAG) pipeline.
