Why do we still struggle with documents in 2025?
Work in any data-driven organisation and you'll encounter a jumble of PDFs, Word files, PowerPoints, half-scanned images, handwritten notes, and the occasional surprise CSV lurking in a SharePoint folder. Business and data analysts waste hours converting, splitting, and cajoling these formats into something their Python pipelines will accept. Even the latest generative-AI stacks can choke when the underlying text is wrapped inside graphics or scattered across irregular table grids.
Docling was born to solve exactly that pain. Released as an open-source project by IBM Research Zurich and now hosted under the LF AI & Data Foundation, the library abstracts parsing, layout understanding, OCR, table reconstruction, multimodal export, and even audio transcription behind one fairly simple API and CLI command.
Although Docling supports processing HTML, MS Office formats, image formats, and others, we'll mostly be looking at using it to process PDF files.
As a data scientist or ML engineer, why should I care about Docling?
Often, the real bottleneck isn't building the model; it's feeding it. We spend a large share of our time on data wrangling, and nothing kills productivity faster than being handed a crucial dataset locked inside a 100-page PDF. That is precisely the problem Docling solves, acting as a bridge from the world of unstructured documents to the structured sanity of Markdown, JSON, or a Pandas DataFrame.
But its power extends beyond simple data extraction, straight into the realm of modern, AI-assisted development. Imagine pointing Docling at an HTML page of API specs; it effortlessly translates that complex web layout into clean, structured Markdown, the perfect context to feed straight into AI coding assistants like Cursor, ChatGPT, or Claude.
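Here's a minimal sketch of that workflow (the URL is purely a placeholder; `DocumentConverter.convert()` accepts remote URLs as well as local paths):

```python
from docling.document_converter import DocumentConverter

# Fetch an HTML page and flatten it into LLM-friendly Markdown
converter = DocumentConverter()
result = converter.convert("https://example.com/api-spec.html")  # placeholder URL
print(result.document.export_to_markdown())
```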
Where Docling came from
The project originated inside IBM's Deep Search group, which was developing retrieval-augmented generation (RAG) pipelines for long patent PDFs. They open-sourced the core under an MIT license in late 2024 and have been shipping weekly releases ever since. A vibrant community quickly formed around its unified DoclingDocument model, a Pydantic object that keeps text, images, tables, formulas, and layout metadata together so downstream tools like LangChain, LlamaIndex, or Haystack don't have to guess a page's reading order.
Today, Docling integrates vision-language models (VLMs), such as SmolDocling, for figure captioning. It also supports Tesseract, EasyOCR, and RapidOCR for text extraction and ships recipes for chunking, serialisation, and vector-store ingestion. In other words: you point it at a folder, and you get Markdown, HTML, CSV, PNGs, JSON, or just a ready-to-embed Python object, with no extra scaffolding code required.
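The chunking recipe, for instance, is only a few lines. Here's a minimal sketch (the input file name is a placeholder) using the `HybridChunker` that ships with the library:

```python
from docling.chunking import HybridChunker
from docling.document_converter import DocumentConverter

# Parse a document, then split it into embedding-ready chunks
doc = DocumentConverter().convert("report.pdf").document  # placeholder file name
chunker = HybridChunker()
for chunk in chunker.chunk(dl_doc=doc):
    print(chunk.text[:80])  # each chunk carries layout-aware context
```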
What we'll do
To showcase Docling, we'll first install it and then work through three different examples that demonstrate its versatility and usefulness as a document parser and processor. Please note that Docling is quite computationally intensive, so it will help if you have access to a GPU on your system.
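If you do have a GPU, newer Docling releases let you request it explicitly through the pipeline's accelerator options. The snippet below is a minimal sketch rather than required setup (the exact module path for `AcceleratorOptions` can vary between versions):

```python
from docling.datamodel.base_models import InputFormat
from docling.datamodel.pipeline_options import (
    AcceleratorDevice,
    AcceleratorOptions,
    PdfPipelineOptions,
)
from docling.document_converter import DocumentConverter, PdfFormatOption

# Ask the PDF pipeline to run its models on CUDA with eight worker threads
pipeline_options = PdfPipelineOptions()
pipeline_options.accelerator_options = AcceleratorOptions(
    num_threads=8, device=AcceleratorDevice.CUDA
)

converter = DocumentConverter(
    format_options={InputFormat.PDF: PdfFormatOption(pipeline_options=pipeline_options)}
)
```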
However, before we start coding in earnest, we need to set up a development environment.
Setting up a development environment
I've started using the UV package manager for this, but feel free to use whichever tools you're most comfortable with. Note also that I'll be working under WSL2 Ubuntu for Windows and running my code in a Jupyter Notebook.
Note that, even using UV, the commands below took a few minutes to complete on my system, as it's a fairly hefty set of library installs.
```bash
$ uv init docling
Initialized project `docling` at `/home/tom/docling`
$ cd docling
$ uv venv
Using CPython 3.11.10 interpreter at: /home/tom/miniconda3/bin/python
Creating virtual environment at: .venv
Activate with: source .venv/bin/activate
$ source .venv/bin/activate
(docling) $ uv pip install docling pandas jupyter
```
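Before firing up a notebook, note that the install also puts a `docling` command on your PATH, so you can sanity-check everything straight from the shell. A minimal sketch (the file and output directory are illustrative):

```bash
(docling) $ docling tesla_q10_sept_23.pdf --to md --output ./out
```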
Now type in the command:
```bash
(docling) $ jupyter notebook
```
And you should see a notebook open in your browser. If that doesn't happen automatically, you'll likely see a screenful of information after running the jupyter notebook command. Near the bottom, you will find a URL to copy and paste into your browser to launch the Jupyter Notebook.
Your URL will be different from mine, but it should look something like this:
http://127.0.0.1:8888/tree?token=3b9f7bd07b6966b41b68e2350721b2d0b6f388d248cc69d
Example 1: Convert any PDF or DOCX to Markdown or JSON
The simplest use case is also the one you'll use a large share of the time: turning a document's text into Markdown.
For most of our examples, our input PDF will be one I've used several times before for different tests. It's a copy of Tesla's 10-Q SEC filing from September 2023. It's roughly fifty pages long and consists primarily of financial information related to Tesla. The full document is publicly available on the Securities & Exchange Commission (SEC) website and can be viewed/downloaded using this link.
Here is an image of the first page of that document for your reference.

Let's review the Docling code we need to convert the PDF into Markdown. It sets up the file path for the input PDF, runs it through a DocumentConverter, and then exports the parsed result into Markdown format so that the content can be more easily read, edited, or analysed.
```python
from pathlib import Path

from docling.document_converter import DocumentConverter

inpath = "/mnt/d//tesla"
infile = "tesla_q10_sept_23.pdf"

data_folder = Path(inpath)
doc_path = data_folder / infile

# Parse the PDF into Docling's unified document model
converter = DocumentConverter()
result = converter.convert(doc_path)  # returns a ConversionResult

# Export the parsed document to Markdown
markdown_text = result.document.export_to_markdown()
print(markdown_text)
```
This is the output we get from running the above code (just the first page).
## UNITED STATES SECURITIES AND EXCHANGE COMMISSION
Washington, D.C. 20549 FORM 10-Q
(Mark One)
- x QUARTERLY REPORT PURSUANT TO SECTION 13 OR 15(d) OF THE SECURITIES EXCHANGE ACT OF 1934
For the quarterly period ended September 30, 2023
OR
- o TRANSITION REPORT PURSUANT TO SECTION 13 OR 15(d) OF THE SECURITIES EXCHANGE ACT OF 1934
For the transition period from _________ to _________
Commission File Number: 001-34756
## Tesla, Inc.
(Exact name of registrant as specified in its charter)
Delaware
(State or other jurisdiction of incorporation or organization)
1 Tesla Road Austin, Texas
(Address of principal executive offices)
## (512) 516-8177
(Registrant's telephone number, including area code)
## Securities registered pursuant to Section 12(b) of the Act:
| Title of each class | Trading Symbol(s) | Name of each exchange on which registered |
|---------------------|-------------------|--------------------------------------------|
| Common stock        | TSLA              | The Nasdaq Global Select Market            |
Indicate by check mark whether the registrant (1) has filed all reports required to be filed by Section 13 or 15(d) of the Securities Exchange Act of 1934 ('Exchange Act') during the preceding 12 months (or for such shorter period that the registrant was required to file such reports), and (2) has been subject to such filing requirements for the past 90 days. Yes x No o
Indicate by check mark whether the registrant has submitted electronically every Interactive Data File required to be submitted pursuant to Rule 405 of Regulation S-T (§232.405 of this chapter) during the preceding 12 months (or for such shorter period that the registrant was required to submit such files). Yes x No o
Indicate by check mark whether the registrant is a large accelerated filer, an accelerated filer, a non-accelerated filer, a smaller reporting company, or an emerging growth company. See the definitions of 'large accelerated filer,' 'accelerated filer,' 'smaller reporting company' and 'emerging growth company' in Rule 12b-2 of the Exchange Act:
Large accelerated filer
x
Accelerated filer
Non-accelerated filer
o
Smaller reporting company
Emerging growth company
o
If an emerging growth company, indicate by check mark if the registrant has elected not to use the extended transition period for complying with any new or revised financial accounting standards provided pursuant to Section 13(a) of the Exchange Act. o
Indicate by check mark whether the registrant is a shell company (as defined in Rule 12b-2 of the Exchange Act). Yes o No x
As of October 16, 2023, there were 3,178,921,391 shares of the registrant's common stock outstanding.
With the rise of AI code editors and the use of LLMs in general, this technique has become significantly more valuable and relevant. The efficacy of LLMs and code editors can be greatly enhanced by providing them with appropriate context. Often this will entail supplying them with a textual representation of a particular tool or framework's documentation, API, and coding examples.
Converting the PDF output to JSON format is also straightforward. Just add these two lines of code. You may run into limits on the size of the JSON output, so adjust the print statement accordingly.
```python
json_blob = result.document.model_dump_json(indent=2)
print(json_blob[:10000], "…")  # print only the first 10,000 characters
```
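Rather than printing a truncated blob, you can also write the full, lossless JSON to disk for later re-use (the output file name is illustrative):

```python
from pathlib import Path

# Persist the complete DoclingDocument serialisation
Path("tesla_q10_sept_23.json").write_text(json_blob, encoding="utf-8")
```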
Example 2: Extract complex tables from a PDF
Many PDFs store tables as isolated text chunks or, worse, as flattened images. Docling's table-structure model reassembles rows, columns, and spanning cells, giving you either a Pandas DataFrame or a ready-to-save CSV. Our test input PDF has many tables. Look, for example, at page 11 of the PDF, where we can see the table below.

Let's see if we can extract that data. The code is slightly more complex than in our first example, but it's doing more work. The PDF is converted again using Docling's DocumentConverter, producing a structured document representation. Then, for each table detected, it converts the table into a Pandas DataFrame and also retrieves the table's page number from the document's provenance metadata. If the table comes from page 11, it prints it out in Markdown format and then breaks out of the loop (so only the first matching table is shown).
```python
import pandas as pd
from docling.document_converter import DocumentConverter
from time import time
from pathlib import Path

inpath = "/mnt/d//tesla"
infile = "tesla_q10_sept_23.pdf"

data_folder = Path(inpath)
input_doc_path = data_folder / infile

doc_converter = DocumentConverter()

start_time = time()
conv_res = doc_converter.convert(input_doc_path)

# Export the table found on page 11
for table_ix, table in enumerate(conv_res.document.tables):
    # The provenance metadata records which page each table came from
    page_number = table.prov[0].page_no if table.prov else "Unknown"
    if page_number == 11:
        table_df: pd.DataFrame = table.export_to_dataframe()
        print(f"## Table {table_ix} (Page {page_number})")
        print(table_df.to_markdown())
        break  # only the first matching table is shown

end_time = time() - start_time
print(f"Document converted and tables exported in {end_time:.2f} seconds.")
```
And the output is not too shabby.
## Table 10 (Page 11)
|    |                                        | Three Months Ended September 30,.2023 | Three Months Ended September 30,.2022 | Nine Months Ended September 30,.2023 | Nine Months Ended September 30,.2022 |
|---:|:---------------------------------------|:--------------------------------------|:--------------------------------------|:-------------------------------------|:-------------------------------------|
|  0 | Automotive sales                       | $ 18,582                              | $ 17,785                              | $ 57,879                             | $ 46,969                             |
|  1 | Automotive regulatory credits          | 554                                   | 286                                   | 1,357                                | 1,309                                |
|  2 | Energy generation and storage sales    | 1,416                                 | 966                                   | 4,188                                | 2,186                                |
|  3 | Services and other                     | 2,166                                 | 1,645                                 | 6,153                                | 4,390                                |
|  4 | Total revenues from sales and services | 22,718                                | 20,682                                | 69,577                               | 54,854                               |
|  5 | Automotive leasing                     | 489                                   | 621                                   | 1,620                                | 1,877                                |
|  6 | Energy generation and storage leasing  | 143                                   | 151                                   | 409                                  | 413                                  |
|  7 | Total revenues                         | $ 23,350                              | $ 21,454                              | $ 71,606                             | $ 57,144                             |
Document converted and tables exported in 33.43 seconds.
To retrieve ALL the tables from a PDF, you would need to omit the `if page_number == 11:` check (and the `break`) from my code, as in the sketch below.
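For instance, this variation saves every detected table as its own CSV file (the output directory name is illustrative):

```python
from pathlib import Path

# Reusing conv_res from above: write every detected table to its own CSV file
out_dir = Path("tables_out")
out_dir.mkdir(exist_ok=True)
for table_ix, table in enumerate(conv_res.document.tables):
    table.export_to_dataframe().to_csv(out_dir / f"table_{table_ix}.csv", index=False)
```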
One thing I've noticed with Docling is that it's not fast. As shown above, it took almost 34 seconds to extract that single table from a 50-page PDF.
Example 3: Perform OCR on an image
For this example, I scanned a random page from the Tesla 10-Q PDF and saved it as a PNG file. Let's see how Docling copes with reading that image and converting what it finds into Markdown. Here is my scanned image.

And here is our code. We use Tesseract as our OCR engine (others are available), configured through Docling's pipeline options.
```python
from pathlib import Path
import time

import pandas as pd

from docling.datamodel.base_models import InputFormat
from docling.datamodel.pipeline_options import PdfPipelineOptions, TesseractCliOcrOptions
from docling.document_converter import DocumentConverter, ImageFormatOption

def main():
    inpath = "/mnt/d//tesla"
    infile = "10q-image.png"
    input_doc_path = Path(inpath) / infile

    # Configure the pipeline: force full-page OCR with Tesseract and
    # rebuild table structure with cell matching
    pipeline_options = PdfPipelineOptions()
    pipeline_options.do_table_structure = True
    pipeline_options.table_structure_options.do_cell_matching = True
    pipeline_options.ocr_options = TesseractCliOcrOptions(force_full_page_ocr=True)

    converter = DocumentConverter(
        format_options={InputFormat.IMAGE: ImageFormatOption(pipeline_options=pipeline_options)}
    )

    start_time = time.time()
    conv_res = converter.convert(input_doc_path).document

    # Print all tables as Markdown
    for table_ix, table in enumerate(conv_res.tables):
        table_df: pd.DataFrame = table.export_to_dataframe(doc=conv_res)
        page_number = table.prov[0].page_no if table.prov else "Unknown"
        print(f"\n--- Table {table_ix + 1} (Page {page_number}) ---")
        print(table_df.to_markdown(index=False))

    # Print the full document text as Markdown
    print("\n--- Full Document (Markdown) ---")
    print(conv_res.export_to_markdown())

    elapsed = time.time() - start_time
    print(f"\nProcessing completed in {elapsed:.2f} seconds")

if __name__ == "__main__":
    main()
```
Here is our output.
--- Table 1 (Page 1) ---
|                          |   Three Months Ended September J0,. | Three Months Ended September J0,.2022 | Nine Months Ended September J0,.2023 | Nine Months Ended September J0,.2022 |
|:-------------------------|------------------------------------:|:---------------------------------------|:-------------------------------------|:-------------------------------------|
| Cost ol revenves         |                                 181 | 150                                     | 554                                  | 424                                  |
| Research an0 developrent |                                 189 | 124                                     | 491                                  | 389                                  |
|                          |                                  95 |                                         | 2B3                                  | 328                                  |
| Total                    |                                 465 | 362                                     | 1,328                                | 1,141                                |
--- Full Document (Markdown) ---
## Note 8 Equity Incentive Plans
## Other Pertormance-Based Grants
("RSUs") und stock optlons unrecognized stock-based compensatian
## Summary Stock-Based Compensation Information
|                          | Three Months Ended September J0, | Three Months Ended September J0, | Nine Months Ended September J0, | Nine Months Ended September J0, |
|--------------------------|----------------------------------|----------------------------------|---------------------------------|---------------------------------|
|                          |                                  | 2022                             | 2023                            | 2022                            |
| Cost ol revenves         | 181                              | 150                              | 554                             | 424                             |
| Research an0 developrent | 189                              | 124                              | 491                             | 389                             |
|                          | 95                               |                                  | 2B3                             | 328                             |
| Total                    | 465                              | 362                              | 1,328                           | 1,141                           |
## Note 9 Commitments and Contingencies
## Operating Lease Arrangements In Buffalo, New York and Shanghai, China
## Legal Proceedings
Between september 1 which 2021 pald has
Processing completed in 7.64 seconds
If you compare this output with the original image, the results are disappointing. Some of the text in the image was simply missed or garbled. This is where a product like AWS Textract comes into its own, as it excels at extracting text from a wide range of sources.
However, Docling does provide various OCR options, so if you get poor results from one engine, you can always switch to another.
I attempted the same task using EasyOCR, but the results weren't significantly different from those obtained with Tesseract. If you'd like to try it out, here is the code.
```python
from pathlib import Path
import time

import pandas as pd

from docling.datamodel.base_models import InputFormat
from docling.datamodel.pipeline_options import EasyOcrOptions, PdfPipelineOptions
from docling.document_converter import DocumentConverter, ImageFormatOption

def main():
    inpath = "/mnt/d//tesla"
    infile = "10q-image.png"
    input_doc_path = Path(inpath) / infile

    # Configure the image pipeline to use EasyOCR instead of Tesseract
    pipeline_options = PdfPipelineOptions()
    pipeline_options.do_table_structure = True
    pipeline_options.table_structure_options.do_cell_matching = True
    pipeline_options.ocr_options = EasyOcrOptions(force_full_page_ocr=True)

    converter = DocumentConverter(
        format_options={InputFormat.IMAGE: ImageFormatOption(pipeline_options=pipeline_options)}
    )

    start_time = time.time()
    conv_res = converter.convert(input_doc_path).document

    # Print all tables as Markdown
    for table_ix, table in enumerate(conv_res.tables):
        table_df: pd.DataFrame = table.export_to_dataframe(doc=conv_res)
        page_number = table.prov[0].page_no if table.prov else "Unknown"
        print(f"\n--- Table {table_ix + 1} (Page {page_number}) ---")
        print(table_df.to_markdown(index=False))

    # Print the full document text as Markdown
    print("\n--- Full Document (Markdown) ---")
    print(conv_res.export_to_markdown())

    elapsed = time.time() - start_time
    print(f"\nProcessing completed in {elapsed:.2f} seconds")

if __name__ == "__main__":
    main()
```
Summary
The generative-AI boom re-ignited an old truth: garbage in, garbage out. LLMs can hallucinate less only when they ingest semantically and spatially coherent input. Docling provides that coherence (most of the time) across the many source formats your stakeholders can present, and does so locally and reproducibly.
Docling has its uses beyond the AI world, though. Consider the vast number of documents stored in places such as bank vaults, solicitors' offices, and insurance companies worldwide. If these are to be digitised, Docling could provide some of the solutions.
Its biggest weakness is probably optical character recognition of text within images. I tried both Tesseract and EasyOCR, and the results from both were disappointing. You'll probably need to use a commercial product like AWS Textract if you want to reliably reproduce text from these kinds of sources.
It can also be slow. I have a fairly high-spec desktop PC with a GPU, and it still took its time on most of the tasks I set it. Nonetheless, if your input documents are mostly PDFs, Docling could be a valuable addition to your text-processing toolbox.
I've only scratched the surface of what Docling is capable of, and I encourage you to visit its homepage, which can be reached via the following link, to learn more.