Intermediate LlamaIndex Tutorial

LlamaIndex Advanced Retrieval: Improve RAG Answer Quality

#llamaindex #rag #retrieval #hybrid-search #reranking #query-engine #advanced

Why Basic RAG Falls Short

A basic VectorStoreIndex with default settings works well for demos, but production RAG needs more. Users ask ambiguous questions, use terminology that differs from your documents, or pose questions that require synthesizing multiple sources.

This guide covers five proven retrieval techniques that dramatically improve answer quality.

Technique 1: Hybrid Search (Vector + Keyword)

Pure vector search misses exact keyword matches. Hybrid search combines semantic similarity and BM25 keyword matching:

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.retrievers.bm25 import BM25Retriever  # pip install llama-index-retrievers-bm25
from llama_index.core.retrievers import QueryFusionRetriever

documents = SimpleDirectoryReader("data/").load_data()
index = VectorStoreIndex.from_documents(documents)

# Vector retriever
vector_retriever = index.as_retriever(similarity_top_k=5)

# BM25 (keyword) retriever — needs nodes from the index
nodes = list(index.docstore.docs.values())
bm25_retriever = BM25Retriever.from_defaults(nodes=nodes, similarity_top_k=5)

# Combine them with Reciprocal Rank Fusion
hybrid_retriever = QueryFusionRetriever(
    retrievers=[vector_retriever, bm25_retriever],
    similarity_top_k=5,
    num_queries=1,       # don't generate additional queries
    mode="reciprocal_rerank",
)

results = hybrid_retriever.retrieve("LlamaIndex VectorStoreIndex API")
for node in results:
    print(f"Score: {node.score:.3f} | {node.text[:100]}...")

Hybrid search consistently outperforms pure vector search for technical queries (API names, error codes, version numbers).
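
For intuition, the "reciprocal_rerank" mode implements Reciprocal Rank Fusion (RRF): each node's fused score is the sum of 1 / (k + rank) over every retriever that returned it. Here is a standalone sketch of the math (not LlamaIndex code; k = 60 is the conventional smoothing constant):

def reciprocal_rank_fusion(ranked_lists: list[list[str]], k: int = 60) -> list[tuple[str, float]]:
    """Fuse several ranked lists of document IDs into one ranking."""
    scores: dict[str, float] = {}
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

vector_hits = ["doc_a", "doc_b", "doc_c"]  # ranked by embedding similarity
bm25_hits = ["doc_c", "doc_a", "doc_d"]    # ranked by keyword match
print(reciprocal_rank_fusion([vector_hits, bm25_hits]))
# doc_a and doc_c score highest because both retrievers surfaced them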

Technique 2: Reranking

Retrieve more candidates, then rerank by relevance. The initial vector search casts a wide net; the reranker picks the best ones:

from llama_index.core import VectorStoreIndex
from llama_index.core.postprocessor import SentenceTransformerRerank
from llama_index.core.query_engine import RetrieverQueryEngine

index = VectorStoreIndex.from_documents(documents)
retriever = index.as_retriever(similarity_top_k=20)  # retrieve more

# Rerank to top 5 using a cross-encoder model
reranker = SentenceTransformerRerank(
    model="cross-encoder/ms-marco-MiniLM-L-2-v2",
    top_n=5,
)

query_engine = RetrieverQueryEngine(
    retriever=retriever,
    node_postprocessors=[reranker],
)

response = query_engine.query("How do I configure the embedding model?")
print(response)

Install the reranker:

pip install sentence-transformers

For API-based reranking (no local model), swap in Cohere's reranker; it plugs into node_postprocessors exactly like the local one:

# pip install llama-index-postprocessor-cohere-rerank
from llama_index.postprocessor.cohere_rerank import CohereRerank

reranker = CohereRerank(api_key="your-cohere-key", top_n=5)

Technique 3: HyDE (Hypothetical Document Embeddings)

Instead of embedding the question directly, HyDE has the LLM generate a hypothetical answer and embeds that. A full answer's embedding sits closer to the embeddings of actual document chunks than a short question's does:

from llama_index.core.indices.query.query_transform import HyDEQueryTransform
from llama_index.core.query_engine import TransformQueryEngine
from llama_index.llms.openai import OpenAI

# Wrap query engine with HyDE
base_query_engine = index.as_query_engine(similarity_top_k=5)

hyde = HyDEQueryTransform(
    llm=OpenAI(model="gpt-4o-mini"),
    include_original=True,  # also keep the original query
)

hyde_query_engine = TransformQueryEngine(base_query_engine, query_transform=hyde)

# The question gets transformed to a hypothetical answer before retrieval
response = hyde_query_engine.query(
    "What are the best practices for chunking large documents?"
)
print(response)

HyDE is especially effective for questions where the answer uses different vocabulary than the question.
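
To see what you are actually retrieving against, you can run the transform on its own: it returns a QueryBundle whose embedding strings hold the generated hypothetical answer. A quick debugging sketch, assuming the hyde transform from above:

query_bundle = hyde.run("What are the best practices for chunking large documents?")
print(query_bundle.embedding_strs[0])  # the LLM-generated hypothetical answer
# With include_original=True, the original question is kept as a second embedding string.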

Technique 4: Sub-Question Decomposition

Complex questions that span multiple documents benefit from decomposition:

from llama_index.core.tools import QueryEngineTool
from llama_index.core.query_engine import SubQuestionQueryEngine
from llama_index.core import VectorStoreIndex

# Create separate indexes for different document sets
# (assumes langchain_docs / llamaindex_docs were loaded earlier, e.g. via SimpleDirectoryReader)
langchain_index = VectorStoreIndex.from_documents(langchain_docs)
llamaindex_index = VectorStoreIndex.from_documents(llamaindex_docs)

# Wrap them as tools
tools = [
    QueryEngineTool.from_defaults(
        query_engine=langchain_index.as_query_engine(),
        name="langchain_docs",
        description="Documentation for LangChain framework",
    ),
    QueryEngineTool.from_defaults(
        query_engine=llamaindex_index.as_query_engine(),
        name="llamaindex_docs",
        description="Documentation for LlamaIndex framework",
    ),
]

# Sub-question engine decomposes queries automatically
sub_question_engine = SubQuestionQueryEngine.from_defaults(
    query_engine_tools=tools,
    verbose=True,
)

response = sub_question_engine.query(
    "Compare how LangChain and LlamaIndex handle document chunking"
)
# Generates sub-questions: "How does LangChain handle chunking?" + "How does LlamaIndex handle chunking?"
# Then synthesizes a comparison
print(response)

Technique 5: Recursive Retrieval (Small-to-Big)

Retrieve small, precise chunks, but merge them back into their larger parent chunk so the LLM sees fuller context:

from llama_index.core.node_parser import HierarchicalNodeParser, get_leaf_nodes
from llama_index.core import VectorStoreIndex
from llama_index.core.retrievers import AutoMergingRetriever
from llama_index.core.storage import StorageContext

# Create hierarchical nodes: parent (large) → child (small)
parser = HierarchicalNodeParser.from_defaults(
    chunk_sizes=[2048, 512, 128],  # parent → mid → leaf
)

nodes = parser.get_nodes_from_documents(documents)

# Index only the leaf (small) nodes for precise retrieval
storage_context = StorageContext.from_defaults()
storage_context.docstore.add_documents(nodes)

leaf_nodes = get_leaf_nodes(nodes)  # only true leaves (mid-level nodes also have parents)
index = VectorStoreIndex(leaf_nodes, storage_context=storage_context)

# AutoMerging: retrieve leaves, then return parent if enough leaves match
base_retriever = index.as_retriever(similarity_top_k=12)
auto_merging_retriever = AutoMergingRetriever(
    base_retriever,
    storage_context,
    verbose=True,
)

When you retrieve small chunks that are siblings (from the same parent), AutoMergingRetriever returns the parent instead — giving the LLM more complete context.
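
To answer questions with it, wrap the retriever in a query engine just like the earlier examples (a minimal sketch; the question is a placeholder):

from llama_index.core.query_engine import RetrieverQueryEngine

query_engine = RetrieverQueryEngine.from_args(auto_merging_retriever)
response = query_engine.query("What are the best practices for chunking large documents?")
print(response)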

Combining Techniques: Production Pipeline

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.core.postprocessor import SentenceTransformerRerank
from llama_index.core.query_engine import RetrieverQueryEngine
from llama_index.retrievers.bm25 import BM25Retriever
from llama_index.core.retrievers import QueryFusionRetriever

# Load and index
documents = SimpleDirectoryReader("data/").load_data()
index = VectorStoreIndex.from_documents(documents)

# Hybrid retriever
vector_ret = index.as_retriever(similarity_top_k=20)
nodes = list(index.docstore.docs.values())
bm25_ret = BM25Retriever.from_defaults(nodes=nodes, similarity_top_k=20)

hybrid = QueryFusionRetriever(
    retrievers=[vector_ret, bm25_ret],
    similarity_top_k=20,
    num_queries=1,
    mode="reciprocal_rerank",
)

# Reranker on top
reranker = SentenceTransformerRerank(
    model="cross-encoder/ms-marco-MiniLM-L-2-v2",
    top_n=5,
)

# Final query engine
query_engine = RetrieverQueryEngine(
    retriever=hybrid,
    node_postprocessors=[reranker],
)

response = query_engine.query("What are the installation requirements?")
print(response)
print("\nSource nodes:")
for node in response.source_nodes:
    print(f"  Score {node.score:.3f}: {node.text[:80]}...")

Frequently Asked Questions

Which technique gives the biggest improvement?

In order of typical impact:

  1. Reranking — most consistent improvement, easy to add
  2. Hybrid search — crucial for technical queries with exact terms
  3. Recursive retrieval — biggest win for long documents
  4. Sub-question decomposition — essential for multi-document questions
  5. HyDE — helps with questions phrased very differently from documents

Start with reranking — it’s the most reliable improvement across all use cases.

Does reranking significantly increase latency?

With a local cross-encoder (SentenceTransformerRerank), expect roughly 50–200ms of extra latency; with an API reranker (Cohere, etc.), 100–300ms. For most applications that is acceptable, and the quality gain is worth it. If you see repeated queries, cache the reranked results.
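
A minimal in-process cache sketch, assuming exact-match repeated questions and the query_engine built in the production pipeline above:

from functools import lru_cache

@lru_cache(maxsize=256)
def answer(question: str) -> str:
    # Keys are raw question strings, so only exact repeats hit the cache.
    return str(query_engine.query(question))

answer("What are the installation requirements?")  # computed
answer("What are the installation requirements?")  # served from cache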

What’s the best similarity_top_k value?

For basic retrieval: 3–5. For reranking pipelines: retrieve 15–30, rerank to 3–5. More candidates = better chances the reranker finds the right ones.

Do these techniques work with any vector database?

The core retrieval logic is database-agnostic: the BM25 half of hybrid search runs locally over nodes in your docstore, and reranking is a pure post-processing step. One caveat: some vector-store-backed indexes don't populate the local docstore by default, so make sure the nodes are available locally before building the BM25 retriever. Otherwise everything works the same whether your index is in-memory, Pinecone, Weaviate, or Chroma.
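
As a sketch, here is the same indexing step backed by Chroma instead of the in-memory store (assumes pip install llama-index-vector-stores-chroma chromadb); everything downstream (retrievers, fusion, reranking) is unchanged:

import chromadb
from llama_index.core import VectorStoreIndex, StorageContext
from llama_index.vector_stores.chroma import ChromaVectorStore

# Persistent local Chroma collection backing the vector index
client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection("docs")
vector_store = ChromaVectorStore(chroma_collection=collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
# The retriever, fusion, and reranker code from the pipeline above works as-is.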
