If you've ever tried building a RAG (Retrieval-Augmented Generation) application, you're probably aware of the challenges posed by tables and images. This article explores how to handle these formats using Vision Language Models, specifically with the ColPali model.
But first, what exactly is RAG, and why do tables and images make it so difficult?
RAG and parsing
Imagine you're faced with a question like:
What is our company's policy for handling refunds?
A foundational LLM (Large Language Model) probably won't be able to answer this, as such information is company-specific and typically not included in the model's training data.
That's why a common approach is to connect the LLM to a knowledge base, such as a SharePoint folder containing various internal documents. This allows the model to retrieve and incorporate relevant context, enabling it to answer questions that require specialized knowledge. This technique is known as Retrieval-Augmented Generation (RAG), and it typically involves working with documents like PDFs.
However, extracting the right information from a large and diverse knowledge base requires extensive document preprocessing. Common steps include (a minimal sketch follows this list):
- Parsing: Breaking documents down into texts and images, often assisted by Optical Character Recognition (OCR) tools like Tesseract. Tables are most often converted into text
- Structure Preservation: Maintaining the structure of the document, including headings and paragraphs, by converting the extracted text into a format that retains context, such as Markdown
- Chunking: Splitting or merging text passages, so that the contexts can be fed into the context window without the passages coming across as disjointed
- Enriching: Providing additional metadata, e.g. extracting keywords or adding summaries to the chunks to ease discovery. Optionally, also captioning images with descriptive text via a multimodal LLM to make them searchable
- Embedding: Embedding the texts (and potentially the images too, with a multimodal embedding model), and storing them in a vector DB
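To make these steps concrete, here is a minimal sketch of such a pipeline. It assumes pypdf for parsing; the chunk size is illustrative, the file name is made up, and the embed function is a placeholder for whatever embedding model you would actually call:

```python
# A minimal sketch of a traditional RAG preprocessing pipeline.
# Assumes: pip install pypdf. The embed() function is a placeholder
# for a real embedding model; chunk size and overlap are illustrative.
from pypdf import PdfReader


def parse_pdf(path: str) -> str:
    """Parsing: extract raw text page by page (tables get flattened here)."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)


def chunk_text(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    """Chunking: fixed-size windows with overlap to keep passages coherent."""
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]


def embed(chunks: list[str]) -> list[list[float]]:
    """Embedding: placeholder -- swap in your embedding model of choice."""
    raise NotImplementedError


chunks = chunk_text(parse_pdf("internal_policy.pdf"))
# vectors = embed(chunks)  # then store chunks + vectors in a vector DB
```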
As you can imagine, the process is highly complicated, involves a lot of experimentation, and is very brittle. Worse yet, even with our best efforts, the parsing might not actually work after all.
Why parsing often falls short
Tables and images are common in PDFs. The image below shows how they are typically parsed for LLM consumption:

- Texts are chunked
- Tables are turned into text; whatever they contain is copied without preserving table boundaries
- Images are fed into a multimodal LLM to generate text summaries, or alternatively, the original image is fed into a multimodal embedding model without needing to generate a text summary
However, there are two inherent issues with this traditional approach.
#1. Complex tables can't simply be interpreted as text
Taking this table as an example, we as humans would interpret the implication of a temperature change of >2˚C to 2.5˚C on Health as: a rise of 2.3˚C by 2080 puts up to 270 million at risk from malaria

However, if we turn this table into text, it would look like this: Temperature change Within EC target (<2˚C) >2˚C to 2.5˚C >3˚C Health Globally it is estimated that A rise of 2.3˚C by 2080 puts A rise of 3.3˚C by 2080 a mean temperature rise up to 270 million at risk from would put up to 330...
The result is a jumbled block of text with no discernible meaning. Even for a human reader, it's impossible to extract any meaningful insight from it. When this kind of text is fed into a Large Language Model (LLM), it likewise fails to produce an accurate interpretation.
#2. Disassociation between texts and pictures
The description of an image is often contained in the surrounding text, and the two are inseparable from one another. Taking the below as an example, we know the chart represents the "Modelled Costs of Climate Change with Different Pure Rate of Time Preference and declining discount rate schemes (no equity weighting)"

However, once this is parsed, the image description (parsed text) will be disassociated from the image (parsed chart). So we can expect that, during RAG, the image wouldn't be retrieved as input when we ask a question like "what is the cost of climate change?"

So even if we attempt to engineer solutions that preserve as much information as possible during parsing, they often fall short when confronted with real-world scenarios.
Given how critical parsing is in RAG applications, does this mean RAG agents are destined to fail when working with complex documents? Absolutely not. With ColPali, we have a more refined and effective approach to handling them.
What is ColPali?
The core premise of ColPali is simple: humans read PDFs as pages, not "chunks", so it makes sense to treat PDFs as such. Instead of going through the messy process of parsing, we simply turn the PDF pages into images and use those as context for the LLM to produce an answer.

Now, the idea of embedding images using multimodal models isn't new; it's a common technique. So what makes ColPali stand out? The key lies in its inspiration from ColBERT, a model that embeds inputs into multi-vectors, enabling more precise and efficient search.
Before diving into ColPali's capabilities, let me briefly digress to explain what ColBERT is all about.
ColBERT: Granular, context-aware embedding for texts
ColBERT is a text embedding and reranking technique that leverages multi-vectors to enhance search accuracy for texts.
Let's consider this case: given the question "is Paul vegan?", we need to identify which text chunk contains the relevant information.

Ideally, we should identify Text Chunk A as the most relevant one. But if we use a single-vector embedding model (text-ada-002), it will return Text Chunk B instead.
The reason lies in how single-vector bi-encoders like text-ada-002 operate. They attempt to compress an entire sentence into a single vector, without encoding individual words in a context-aware manner. In contrast, ColBERT embeds each word with contextual awareness, resulting in a richer, multi-vector representation that captures more nuanced information.

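The mechanics behind this are easy to show in code. Below is a small sketch of the late-interaction (MaxSim) scoring that ColBERT introduced, with random tensors standing in for real token embeddings: each query token is matched against its best document token, and those maxima are summed into the chunk's score.

```python
# Sketch of ColBERT-style late interaction (MaxSim) with dummy embeddings.
import torch


def maxsim(query_emb: torch.Tensor, doc_emb: torch.Tensor) -> torch.Tensor:
    """query_emb: (query_tokens, dim); doc_emb: (doc_tokens, dim).
    For each query token, keep its best-matching doc token, then sum."""
    sim = query_emb @ doc_emb.T          # pairwise token similarities
    return sim.max(dim=1).values.sum()   # max over doc tokens, sum over query


query = torch.randn(5, 128)     # e.g. "is Paul vegan?" as 5 token vectors
chunk_a = torch.randn(80, 128)  # multi-vector embedding of Text Chunk A
chunk_b = torch.randn(60, 128)  # multi-vector embedding of Text Chunk B

# The chunk with the higher MaxSim score is ranked as more relevant.
print(maxsim(query, chunk_a).item(), maxsim(query, chunk_b).item())
```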
ColPali: ColBERT's brother for handling document-like images
ColPali follows a similar philosophy but applies it to document-like images. Just as ColBERT breaks down text and embeds each word individually, ColPali divides an image into patches and generates embeddings for each patch. This approach preserves more of the image's contextual detail, enabling more accurate and meaningful interpretation.

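In code, embedding and scoring with ColPali looks roughly like the sketch below, which follows the documented usage of the colpali-engine library with the vidore/colqwen2-v0.1 checkpoint used later in this article. The page image file names are made up, and you should double-check the API against the version you install:

```python
# Sketch: embed page images and a query with colpali-engine, then score.
# Assumes a CUDA GPU and `pip install colpali-engine`.
import torch
from PIL import Image
from colpali_engine.models import ColQwen2, ColQwen2Processor

model = ColQwen2.from_pretrained(
    "vidore/colqwen2-v0.1", torch_dtype=torch.bfloat16, device_map="cuda"
).eval()
processor = ColQwen2Processor.from_pretrained("vidore/colqwen2-v0.1")

pages = [Image.open("page_1.png"), Image.open("page_2.png")]
queries = ["what is the cost of climate change?"]

with torch.no_grad():
    page_emb = model(**processor.process_images(pages).to(model.device))
    query_emb = model(**processor.process_queries(queries).to(model.device))

# Late-interaction (MaxSim) scores: one score per (query, page) pair.
scores = processor.score_multi_vector(query_emb, page_emb)
print(scores)  # the highest-scoring page becomes the LLM's context
```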
Apart from higher retrieval accuracy, the benefits of ColPali include:
- Explainability: ColPali enables word-level comparison between the query and individual image patches of a document (see the sketch after this list). This allows us to clearly understand and justify why a particular document is deemed more relevant.
- Reduced Development Effort & Greater Robustness: By eliminating the need for complex preprocessing pipelines, such as chunking, OCR, and layout parsing, ColPali significantly reduces development time and minimizes potential points of failure.
- Performance Gains: Embedding and retrieval processes are faster, resulting in improved overall system responsiveness.
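As a rough illustration of the explainability point, the same multi-vector embeddings let you ask which image patch each query token matched most strongly. A minimal sketch, reusing query_emb and page_emb from the snippet above (note that the sequence positions also include special tokens, which a real visualization would filter out):

```python
# Sketch: per-query-token attribution over image patches.
sim = query_emb[0].float() @ page_emb[0].float().T  # (query_tokens, patches)
best_patch = sim.argmax(dim=1)  # most similar patch for each query token
print(best_patch)  # map patch indices back to page regions to visualize
```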
Now that you know what ColPali is, let's dive into the code and see if it can solve the challenges we mentioned earlier!
Illustration
My code can be found on my GitHub. A few words about it:
- Instance: Running the code requires a machine with an A100 GPU
- Embedding model: ColPali has a number of variants; I'm using vidore/colqwen2-v0.1 for demonstration. You can refer to the leaderboard here and swap to other models
- Agent:
 – LLM: I'm using OpenRouter to consume the LLM, and the agent is powered by GPT-4o. You can substitute it with any multimodal LLM that can take images as input
 – Orchestration: LangGraph is used to develop the agent
- Library used for converting PDF into images: pdf2image, which is a wrapper for poppler. So please make sure you also have poppler installed (a minimal end-to-end sketch follows this list)
- Sample data: "The Impacts and Costs of Climate Change" written by Paul Watkiss et al., which is publicly available here
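To show how the pieces fit together, here is a minimal sketch of the generation side: convert the PDF into page images with pdf2image, then send the retrieved page to GPT-4o through OpenRouter. The retrieval step is assumed to come from the ColPali scoring shown earlier, and the file name and question are illustrative; the full LangGraph agent lives in the repository.

```python
# Sketch: PDF -> page images -> answer from a multimodal LLM via OpenRouter.
# Assumes `pip install pdf2image openai`, poppler installed system-wide,
# and an OPENROUTER_API_KEY environment variable.
import base64
import io
import os

from openai import OpenAI
from pdf2image import convert_from_path

pages = convert_from_path("impacts_and_costs_of_climate_change.pdf", dpi=150)
top_page = pages[0]  # in the real pipeline, this page is chosen by ColPali

buf = io.BytesIO()
top_page.save(buf, format="PNG")
image_b64 = base64.b64encode(buf.getvalue()).decode()

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)
response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "What is the modelled cost of climate change?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```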
Results
The code is pretty straightforward, so I'll jump to the conclusion instead: how well can ColPali deal with the problems we saw in the "Why parsing often falls short" section?
#1. Complex tables can't simply be interpreted as text
Question: What is the health impact if temperature rises above pre-industrial levels by 3 degrees?
Answer: If the temperature rises above pre-industrial levels by 3 degrees Celsius, the health impact would be significant. Specifically, it is estimated that by the 2080s, up to 330 million people would be at risk from malaria. This is a substantial increase compared to the 1920s, where the number at risk was between 50 to 100 million.
This is correctly referenced from this table in the PDF!

#2. Disassociation between texts and images
Question: What is the modelled cost of climate change, according to the Greenbook declining model?
Answer: According to the Greenbook declining model, the modelled cost of climate change is 7.2 Euro/tCO2
The correct answer should be 7.4 Euro/tCO2, but we can see it's close to correct!

Conclusion
Traditional RAG pipelines struggle with non-textual content. ColPali treats each PDF page as an image, allowing it to process visual layouts, tables, charts, and embedded graphics: formats that standard text parsers often distort or ignore.
ColPali brings vision-language intelligence to RAG, making it far more capable of handling the messy, multimodal reality of enterprise documents.