In this article, I focus on a particular step of the RAG pipeline: the document retrieval step. This step is essential for any RAG system’s performance, because without fetching the most relevant documents, it’s difficult for an LLM to correctly answer the user’s questions. I’ll cover the standard approach to fetching the most relevant documents, some techniques to improve it, and the benefits you’ll see from better document retrieval in your RAG pipeline.
As in my last article on Enriching LLM Context with Metadata, I’ll state my main goal for this article up front:
My goal for this article is to highlight how you can fetch and filter the most relevant documents for your AI search.

Why is optimal document retrieval important?
It’s important to understand why the document fetching step is so critical to any RAG pipeline. To see this, it helps to have a general outline of the flow in a RAG pipeline (a minimal code sketch follows the list):
- The user enters their query
- The query is embedded, and you calculate the embedding similarity between the query and each individual document (or document chunk)
- The most relevant documents are fetched based on embedding similarity
- The most relevant documents (or chunks) are fed into an LLM, which is prompted to answer the user’s question given the provided chunks
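The sketch below shows one minimal way to implement steps 2–4, assuming the sentence-transformers package for embeddings; answer_with_llm is a hypothetical stand-in for whatever LLM client you use.

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def retrieve_top_k(query: str, chunks: list[str], k: int = 10) -> list[str]:
    # Embed the query and every chunk, then rank chunks by cosine similarity
    query_vec = model.encode([query], normalize_embeddings=True)[0]
    chunk_vecs = model.encode(chunks, normalize_embeddings=True)
    scores = chunk_vecs @ query_vec  # cosine similarity, since vectors are normalized
    top_idx = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top_idx]

def answer(query: str, chunks: list[str]) -> str:
    # Feed the top chunks to the LLM as context for the user's question
    context = "\n\n".join(retrieve_top_k(query, chunks))
    prompt = f"Answer the question using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
    return answer_with_llm(prompt)  # hypothetical LLM call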

Now, there are several aspects of the pipeline that matter, such as:
- Which embedding model you use
- Which LLM you use
- How many documents (or chunks) you fetch
However, I’d argue that no aspect is more important than the selection of documents. Without the right documents, it doesn’t matter how good your LLM is or how many chunks you fetch; the answer will most likely still be wrong.
The pipeline will probably cope with a slightly worse embedding model or a slightly older LLM. However, if you don’t fetch the right documents, your RAG pipeline will fail.
Traditional approaches
I’ll first walk through some traditional approaches that are used today, primarily embedding similarity and keyword search.
Embedding similarity
Using embedding similarity to fetch the most relevant documents is the go-to approach today. It’s a solid approach that works well in most use cases. RAG with embedding-similarity document retrieval works exactly as described above.
Keyword search
Keyword search is also commonly used to fetch relevant documents. Traditional approaches, such as TF-IDF or BM25, are still used with success today. However, keyword search also has its weaknesses. For example, it only fetches documents based on exact matches, which causes problems when an exact match isn’t possible.
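As a quick illustration, here’s a minimal keyword-search sketch using the rank_bm25 package (one of several open-source BM25 implementations); the toy corpus and whitespace tokenization are deliberately naive.

from rank_bm25 import BM25Okapi

corpus = [
    "The lease agreement starts on the first of March.",
    "Payment is due on the first business day of each month.",
    "The tenant must give three months notice before moving out.",
]
tokenized_corpus = [doc.lower().split() for doc in corpus]
bm25 = BM25Okapi(tokenized_corpus)

query_tokens = "when does the lease start".split()
scores = bm25.get_scores(query_tokens)                 # one BM25 score per document
top_docs = bm25.get_top_n(query_tokens, corpus, n=2)   # the two best-matching documents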
Thus, I want to discuss some other techniques you can use to improve your document retrieval step.
Techniques to fetch more relevant documents
In this section, I’ll cover some more advanced techniques for fetching the most relevant documents. I’ll divide the section into two parts. The first covers optimizing document retrieval for recall, which means fetching as many of the relevant documents as possible from the corpus of available documents. The second discusses how to optimize for precision, which means ensuring that the documents you fetch are actually correct and relevant to the user query.
Recall: Fetch more of the relevant documents
I’ll discuss the following techniques:
- Contextual retrieval
- Fetching more chunks
- Reranking
Contextual retrieval

Contextual retrieval is a technique introduced by Anthropic in September 2024. Their article covers two topics: adding context to document chunks, and combining keyword search (BM25) with semantic search to fetch relevant documents.
To add context, they take each document chunk and prompt an LLM, given the chunk and the entire document, to rewrite the chunk so that it includes both the information from the chunk and the relevant context from the full document.
For example, suppose a document is divided into two chunks, where the first chunk contains important metadata such as an address, date, location, and time, and the second chunk contains details about a lease agreement. The LLM might rewrite the second chunk to include both the lease agreement details and the most relevant parts of the first chunk, which in this case are the address, location, and date.
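Below is a rough sketch of that contextualization step; the prompt wording is my own, and llm_client is a hypothetical wrapper around your LLM of choice, not Anthropic’s exact implementation.

def contextualize_chunk(chunk: str, full_document: str) -> str:
    # Ask the LLM to rewrite the chunk so it stands on its own
    prompt = f"""
<document>
{full_document}
</document>

<chunk>
{chunk}
</chunk>

Rewrite the chunk so it can be understood on its own, adding any context from
the document (names, dates, addresses, what the document is about) that the
chunk is missing. Return only the rewritten chunk.
"""
    return llm_client.generate(prompt)  # hypothetical LLM client

# The contextualized chunks are what you embed and index, instead of the raw chunks
contextualized_chunks = [contextualize_chunk(c, document_text) for c in chunks]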
Anthropic also discusses combining semantic search and keyword search in their article, essentially fetching documents with both techniques and using a prioritized approach to merge the documents retrieved from each one.
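One common way to merge keyword and semantic results is reciprocal rank fusion (RRF). Anthropic describe a prioritized merge rather than RRF specifically, so treat this as an illustrative alternative; bm25_top_ids and embedding_top_ids are assumed to be ranked document ID lists from the two retrieval methods.

def reciprocal_rank_fusion(ranked_lists: list[list[str]], k: int = 60) -> list[str]:
    # Each document earns 1 / (k + rank) from every list it appears in
    scores: dict[str, float] = {}
    for ranked in ranked_lists:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

fused_ids = reciprocal_rank_fusion([bm25_top_ids, embedding_top_ids])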
Fetching more chunks
A simpler way to fetch more of the relevant documents is to simply fetch more chunks. The more chunks you fetch, the higher your chance of retrieving the relevant ones. However, this has two main downsides:
- You’ll likely get more irrelevant chunks as well (hurting precision)
- You’ll increase the number of tokens you feed to your LLM, which can negatively impact the LLM’s output quality
Reranking for recall
Reranking is also a powerful technique that can be used to increase both precision and recall when fetching documents relevant to a user query. When fetching documents based on semantic similarity, you assign a similarity score to every chunk and typically keep only the top K most similar chunks (K is usually a number between 10 and 20, but it varies across applications). A reranker should therefore try to place the relevant documents within the top K most relevant documents, while keeping irrelevant documents out of that list. I think Qwen Reranker is a good model; however, there are many other rerankers out there.
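Here’s a minimal reranking sketch using a cross-encoder from the sentence-transformers library; the model name is simply a commonly used public checkpoint, and user_query and candidate_chunks stand for your query and the over-fetched chunks. You could swap in Qwen Reranker or any other reranker you prefer.

from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def rerank(query: str, chunks: list[str], top_k: int = 10) -> list[str]:
    # Score every (query, chunk) pair jointly, then keep the top_k highest-scoring chunks
    scores = reranker.predict([(query, chunk) for chunk in chunks])
    ranked = sorted(zip(chunks, scores), key=lambda pair: pair[1], reverse=True)
    return [chunk for chunk, _ in ranked[:top_k]]

# Typical usage: over-fetch with embeddings (say 50 chunks), then rerank down to 10
top_chunks = rerank(user_query, candidate_chunks, top_k=10)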
Precision: Filter away irrelevant documents
I’ll discuss the following techniques:
- Reranking
- LLM verification
Reranking for precision
As discussed in the previous section on recall, rerankers can also be used to improve precision. Rerankers increase recall by pulling relevant documents into the top K list of most similar documents. Conversely, they improve precision by keeping irrelevant documents out of that top K list.
LLM verification
Using an LLM to judge chunk (or document) relevance is also a powerful technique for filtering away irrelevant chunks. You can simply create a function like the one below (llm_client is a placeholder for your LLM client of choice):

import json

def is_relevant_chunk(chunk_text: str, user_query: str) -> bool:
    """
    Ask an LLM whether the chunk text is relevant to the user query
    """
    prompt = f"""
Given the provided user query and chunk text, determine whether the chunk text
is relevant for answering the user query.
Return a JSON response of the form {{"relevant": true}} or {{"relevant": false}}.

User query: {user_query}
Chunk text: {chunk_text}
"""
    response = llm_client.generate(prompt)  # hypothetical LLM client
    return json.loads(response)["relevant"]
You then feed each chunk (or document) through this function, and keep only the chunks or documents that the LLM judges as relevant.
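In its simplest form, that filtering pass is just a list comprehension over the retrieved chunks, reusing the hypothetical function above:

relevant_chunks = [chunk for chunk in retrieved_chunks if is_relevant_chunk(chunk, user_query)]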
This technique has two main downsides:
- LLM cost
- LLM response time
You’ll be sending a lot of LLM API calls, which will inevitably incur a significant cost. Additionally, sending that many queries takes time, which adds latency to your RAG pipeline. You should balance this against users’ need for fast responses.
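Since the relevance checks are independent of each other, one simple way to limit the latency hit is to run them concurrently. Below is a sketch using a thread pool and the hypothetical is_relevant_chunk function from above.

from concurrent.futures import ThreadPoolExecutor

def filter_relevant(chunks: list[str], user_query: str, max_workers: int = 8) -> list[str]:
    # Run the LLM relevance checks in parallel threads to cut wall-clock latency
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        flags = list(pool.map(lambda chunk: is_relevant_chunk(chunk, user_query), chunks))
    return [chunk for chunk, keep in zip(chunks, flags) if keep]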
Benefits of improving document retrieval
There are numerous benefits to improving the document retrieval step of your RAG pipeline. Some examples are:
- Better LLM question-answering performance
- Fewer hallucinations
- More user queries answered correctly
- Essentially, it makes the LLM’s job easier
Overall, the ability of your question-answering model will improve in terms of the number of successfully answered user queries. This is the metric I recommend scoring your RAG system on, and you can read more about LLM system evaluations in my article on Evaluating 5 Million Documents with Automated Evals.
Fewer hallucinations are also an incredibly important benefit. Hallucinations are one of the most significant issues we face with LLMs. They’re so detrimental because they lower users’ trust in the question-answering system, which makes them less likely to keep using your application. Ensuring the LLM both receives the relevant documents (recall) and is fed as few irrelevant documents as possible (precision) goes a long way toward minimizing the number of hallucinations the RAG system produces.
Fewer irrelevant documents (better precision) also help you avoid context bloat (too much noise in the context) and even context poisoning (incorrect information contained in the documents).
Summary
In this article, I’ve discussed how you can improve the document retrieval step of your RAG pipeline. I started by explaining why I believe document retrieval is the most critical part of the RAG pipeline and why you should spend time optimizing this step. I then described how traditional RAG pipelines fetch relevant documents through semantic search and keyword search. Finally, I covered techniques you can use to improve both the precision and recall of retrieved documents, such as contextual retrieval and LLM chunk verification.
👉 Find me on socials:
🧑💻 Get in touch
✍️ Medium