Vision language models (VLMs) are highly effective machine-learning models that can process both visual and textual information. With the recent release of Qwen 3 VL, I want to take a deep dive into how you can utilize these powerful VLMs to process documents.
Why you need to use VLMs
To highlight why some tasks require VLMs, I want to start off with an example task, where we need to interpret both the text and the visual information of the text.
Imagine you look at the image below. The checkboxes represent whether a document should be included in a report or not, and you now need to determine which documents to include.

For a human, this is a simple task; clearly, documents 1 and 3 should be included, while document 2 should be excluded. However, if you tried to solve this problem with a pure LLM, you would encounter issues.
To run a pure LLM, you would first have to OCR the image, where the OCR output would look something like the text below if you use Google's Tesseract, for example, which extracts the text line by line.
Document 1 Document 2 Document 3 X X
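For reference, producing output like this takes only a few lines; here is a minimal sketch, assuming the tesseract binary and the pytesseract package are installed (the file name is a hypothetical example):

import pytesseract
from PIL import Image

# Run OCR on the checkbox image; Tesseract returns plain text line by
# line, so the spatial link between each X and its document label is lost
image = Image.open("checkbox_report.png")
print(pytesseract.image_to_string(image))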
As you might have already discovered, the LLM will have issues deciding which documents to include, because it's impossible to know which documents the Xs belong to. This is just one of many scenarios where VLMs are extremely effective at solving a problem.
The main point here is that understanding which documents have a checked X requires both visual and textual information. You need to understand the text and the visual position of the text in the image. I summarize this in the quote below:
VLMs are required when the meaning of text depends on its visual position
Application areas
There are a plethora of areas you can apply VLMs to. In this section, I'll cover some different areas where VLMs have proven useful, and where I've also successfully applied VLMs myself.
Agentic use cases
Agents are all the rage these days, and VLMs also play a role here. I'll highlight two main areas where VLMs can be used in an agentic context, though there are of course many other such areas.
Computer use
Computer use is an interesting use case for VLMs. By computer use, I mean a VLM looking at a frame from your computer and deciding which action to take next. One example of this is OpenAI's Operator. This could, for example, mean looking at a frame of the article you're reading right now, and scrolling down to read more of it.
VLMs are useful for computer use because LLMs alone aren't enough to figure out which actions to take. When operating a computer, you often have to interpret the visual position of buttons and information, which, as I described in the beginning, is one of the prime use cases for VLMs.
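To make the loop concrete, here is an illustrative sketch (not how Operator works internally): capture the screen, ask the VLM for the next action, and parse its reply. ask_vlm is a hypothetical helper that sends a prompt plus an image to a VLM; a possible implementation is sketched in the question answering section below.

import json
from PIL import ImageGrab  # requires the Pillow package

def next_action() -> dict:
    # Capture the current screen and hand it to the VLM
    ImageGrab.grab().save("frame.png")
    reply = ask_vlm(  # hypothetical helper
        "You are operating a computer. Given this screenshot, reply with "
        'the next action as JSON, for example {"action": "click", "x": 120, "y": 340} '
        'or {"action": "scroll", "amount": -500}.',
        "frame.png",
    )
    return json.loads(reply)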
Debugging
Debugging code is also a super useful agentic application area for VLMs. Imagine that you're developing a web application and discover a bug.
One option is to start logging to the console, copy the logs, describe to Cursor what you did, and prompt Cursor to fix it. This is naturally time-consuming, as it requires a lot of manual steps from the user.
Another option is thus to utilize VLMs to solve the problem more effectively. Ideally, you describe how to reproduce the issue, and a VLM can go into your application, recreate the flow, check out the issue, and thus debug what is going wrong. There are applications being built for areas like this, though most haven't come far in development from what I've seen.
Question answering
Using VLMs for visual question answering is one of the classic ways of applying VLMs. Question answering is the use case I described earlier in this article about figuring out which checkbox belongs to which document. You feed the VLM a user question and an image (or multiple images) to process. The VLM will then provide an answer in text format. You can see how this process works in the figure below.

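As a minimal sketch of this flow, assuming an OpenAI-compatible endpoint (the base URL and model name below are placeholders, for example a self-hosted server running vLLM):

import base64
from openai import OpenAI

# Placeholder endpoint and model name for an OpenAI-compatible VLM server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def ask_vlm(question: str, image_path: str) -> str:
    # Encode the image as base64 so it can be sent inline with the question
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="qwen3-vl",  # placeholder model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

answer = ask_vlm("Which documents are checked for inclusion?", "checkboxes.png")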
You should, however, weigh the trade-offs of using VLMs vs. LLMs. Naturally, when a task requires textual and visual information, you need to utilize VLMs to get a proper result. However, VLMs are also usually far more expensive to run, as they have to process more tokens. This is because images contain a lot of information, which in turn leads to many input tokens to process.
Furthermore, if the VLM is to process text, you also need high-resolution images, allowing the VLM to interpret the pixels making up the letters. With lower resolutions, the VLM struggles to read the text in the images, and you'll receive low-quality results.
Classification

Another interesting application area for VLMs is classification. By classification, I refer to the scenario where you have a predetermined set of categories and need to determine which category an image belongs to.
You can utilize VLMs for classification with the same approach as when using LLMs. You create a structured prompt containing all the relevant information, including the possible output categories. Furthermore, you ideally cover the different edge cases, for example, scenarios where two categories are both very likely, and the VLM has to decide between the two.
You can, for example, have a prompt such as:
def get_prompt():
    return """
    ## General instructions
    You need to determine which category a given document belongs to.
    The available categories are "legal", "technical", "financial".
    ## Edge case handling
    - In the scenario where you have a legal document covering financial information, the document belongs to the financial category
    - ...
    ## Return format
    Respond only with the corresponding category, and no other text
    """
Information extraction
You can also effectively utilize VLMs for information extraction, and there are a lot of information extraction tasks requiring visual information. You create a prompt similar to the classification prompt I created above, and typically prompt the VLM to respond in a structured format, such as a JSON object.
When performing information extraction, you need to consider how many data points you want to extract. For example, if you need to extract 20 different data points from a document, you probably don't want to extract all of them at once. This is because the model will likely struggle to accurately extract that much information in a single pass.
Instead, you should consider splitting up the task, for example, extracting 10 data points in each of two separate requests, simplifying the task for the model. On the other side of the argument, you'll often find that some data points are related to each other, meaning they should be extracted in the same request. Furthermore, sending multiple requests increases the inference cost.

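A minimal sketch of what such an extraction prompt and its parsing could look like; the invoice fields are hypothetical examples, not a schema from a specific project:

import json

def get_extraction_prompt():
    return """
    ## General instructions
    Extract the following data points from the document image.
    ## Return format
    Respond only with a JSON object of this shape, and no other text:
    {"invoice_number": string, "invoice_date": string, "total_amount": number}
    """

def parse_extraction(raw: str) -> dict:
    # The model is prompted to return bare JSON; production code would
    # add retries and schema validation on top of this
    return json.loads(raw)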
When VLMs are problematic
VLMs are amazing models that can perform tasks that were impossible to solve with AI just a few years ago. However, they also have their limitations, which I'll cover in this section.
Cost of running VLMs
The first limitation is the cost of running VLMs, which I've also briefly discussed earlier in this article. VLMs process images, which consist of a lot of pixels. These pixels represent a lot of information, which is encoded into tokens that the VLM can process. The issue is that since images contain so much information, you need to create a lot of tokens per image, which in turn increases the cost of running VLMs.
Furthermore, you often need high-resolution images, since the VLM is required to read text in images, leading to even more tokens to process. VLMs are thus expensive to run, both over an API and in compute costs if you decide to self-host the VLM.
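To get a feel for the numbers, here is a rough back-of-the-envelope sketch, assuming roughly one token per 28x28-pixel patch, which is approximately what the Qwen VL family documents; other models tokenize images differently:

def estimate_image_tokens(width: int, height: int, patch: int = 28) -> int:
    # Ceiling division: count the patch grid covering the image
    cols = -(-width // patch)
    rows = -(-height // patch)
    return cols * rows

# An A4 page scanned at ~150 DPI (1240 x 1754 pixels) comes out at
# roughly 2,800 tokens, versus a few hundred tokens for the same page
# as plain extracted text
print(estimate_image_tokens(1240, 1754))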
Cannot process long documents
The amount of tokens contained in images also limits the number of pages a VLM can process at once. VLMs are limited by their context windows, just like traditional LLMs. This is a problem if you want to process long documents containing hundreds of pages. Naturally, you can split the document into chunks, but you can then encounter problems where the VLM doesn't have access to all the contents of the document in one go.
For example, if you have a 100-page document, you can first process pages 1-50, and then process pages 51-100. However, if some information on page 53 needs context from page 1 (for example, the title or date of the document), this will lead to issues.
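One common mitigation, sketched below under my own assumptions rather than any particular cookbook recipe, is to carry shared context, such as the first page, which often holds the title and date, into every chunk:

def make_chunks(pages: list, chunk_size: int = 50) -> list:
    # Prepend the first page to every later chunk so chunk-local
    # questions (e.g. "what is the document date?") can still see it
    chunks = []
    for start in range(0, len(pages), chunk_size):
        chunk = pages[start:start + chunk_size]
        if start > 0:
            chunk = [pages[0]] + chunk
        chunks.append(chunk)
    return chunks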
To learn how to deal with this problem, I've been reading Qwen 3's cookbook, which has a page on how to utilize Qwen 3 for ultra-long documents. I'll be sure to test this out and discuss how well it works in a future article.
Conclusion
In this article, I've discussed vision language models and how you can apply them to different problem areas. I first described how to integrate VLMs into agentic systems, for example as a computer-use agent or to debug web applications. Continuing, I covered areas such as question answering, classification, and information extraction. Finally, I also covered some limitations of VLMs, discussing the computational cost of running VLMs and how they struggle with long documents.
👉 Find me on socials:
🧑‍💻 Get in touch
✍️ Medium