LangExtract is an open-source library from developers at Google that makes it simple to turn messy, unstructured text into clean, structured data by leveraging LLMs. Users provide a few few-shot examples along with a custom schema and get results based on them. It works with both proprietary and local LLMs (via Ollama).
A significant amount of data in healthcare is unstructured, making it an ideal area where a tool like this can be useful. Clinical notes are long and full of abbreviations and inconsistencies. Important details such as drug names, dosages, and especially adverse drug reactions (ADRs) get buried in the text. So, for this article, I wanted to see if LangExtract could handle adverse drug reaction (ADR) detection in clinical notes. More importantly, is it effective? Let's find out. Note that while LangExtract is an open-source project from developers at Google, it isn't an officially supported Google product.
Just a quick note: I'm only showing how LangExtract works. I'm not a doctor, and this isn't medical advice.
▶️ Here's a detailed Kaggle notebook to follow along.
Why ADR Extraction Matters
An Adverse Drug Reaction (ADR) is a harmful, unintended response caused by taking a medication. These can range from mild side effects like nausea or dizziness to severe outcomes that may require medical attention.

Detecting them quickly is critical for patient safety and pharmacovigilance. The challenge is that in clinical notes, ADRs are buried alongside past conditions, lab results, and other context, which makes detecting them tricky. Using LLMs to detect ADRs is an ongoing area of research, and some recent works have shown that LLMs are good at raising red flags but are not yet reliable. So ADR extraction is a good stress test for LangExtract: the goal here is to see whether the library can spot the adverse reactions among other entities in clinical notes, such as medications, dosages, and severity.
How LangExtract Works
Before we jump into usage, let's break down LangExtract's workflow. It's a simple three-step process:
- Define your extraction task by writing a clear prompt that specifies exactly what you want to extract.
- Provide a few high-quality examples to guide the model towards the format and level of detail you expect.
- Submit your input text, choose the model, and let LangExtract process it. You can then review the results, visualize them, or pass them directly into your downstream pipeline.
The official GitHub repository of the tool has detailed examples spanning multiple domains, from entity extraction in Shakespeare's Romeo & Juliet to medication identification in clinical notes and structuring of radiology reports. Do check them out.
Installation
First, we need to install the LangExtract library. It's always a good idea to do this inside a virtual environment to keep your project dependencies isolated.
pip install langextract
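To quickly confirm the installation, you can print the installed version. This is an optional sanity check using Python's standard library; it assumes the package was installed under the name langextract as shown above.
from importlib.metadata import version

# Prints the installed LangExtract version if the install succeeded
print(version("langextract"))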
Identifying Adverse Drug Reactions in Clinical Notes with LangExtract & Gemini
Now let's get to our use case. For this walkthrough, I'll use Google's Gemini 2.5 Flash model. You could also use Gemini Pro for more complex reasoning tasks. You'll first need to set your API key:
export LANGEXTRACT_API_KEY="your-api-key-here"
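If you're working in a notebook rather than a shell, one way to make the key available to your code is to read it from the environment into a Python variable. This is just a small sketch; it assumes the environment variable was set as shown above, and it defines the LANGEXTRACT_API_KEY variable used when calling lx.extract later.
import os

# Read the API key from the environment variable set earlier
LANGEXTRACT_API_KEY = os.environ.get("LANGEXTRACT_API_KEY")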
Step 1: Define the Extraction Task
Let's create our prompt for extracting medications, dosages, adverse reactions, and actions taken. We can also ask for severity where mentioned.
import textwrap
import langextract as lx

prompt = textwrap.dedent("""
    Extract medication, dosage, adverse reaction, and action taken from the text.
    For each adverse reaction, include its severity as an attribute if mentioned.
    Use exact text spans from the original text. Do not paraphrase.
    Return entities in the order they appear.""")

Next, let's provide an example to guide the model towards the correct format. Putting it together, here is the prompt (this time also covering the patient's condition) along with the example:
# 1) Define the prompt
prompt = textwrap.dedent("""
    Extract condition, medication, dosage, adverse reaction, and action taken from the text.
    For each adverse reaction, include its severity as an attribute if mentioned.
    Use exact text spans from the original text. Do not paraphrase.
    Return entities in the order they appear.""")
# 2) Example
examples = [
    lx.data.ExampleData(
        text=(
            "After taking ibuprofen 400 mg for a headache, "
            "the patient developed mild stomach pain. "
            "They stopped taking the medicine."
        ),
        extractions=[
            lx.data.Extraction(
                extraction_class="condition",
                extraction_text="headache"
            ),
            lx.data.Extraction(
                extraction_class="medication",
                extraction_text="ibuprofen"
            ),
            lx.data.Extraction(
                extraction_class="dosage",
                extraction_text="400 mg"
            ),
            lx.data.Extraction(
                extraction_class="adverse_reaction",
                extraction_text="mild stomach pain",
                attributes={"severity": "mild"}
            ),
            lx.data.Extraction(
                extraction_class="action_taken",
                extraction_text="They stopped taking the medicine"
            )
        ]
    )
]
Step 2: Provide the Input and Run the Extraction
For the input, I'm using a real clinical sentence from the ADE Corpus v2 dataset on Hugging Face.
input_text = (
    "A 27-year-old man who had a history of bronchial asthma, "
    "eosinophilic enteritis, and eosinophilic pneumonia presented with "
    "fever, skin eruptions, cervical lymphadenopathy, hepatosplenomegaly, "
    "atypical lymphocytosis, and eosinophilia two weeks after receiving "
    "trimethoprim (TMP)-sulfamethoxazole (SMX) therapy."
)
Next, let's run LangExtract with the Gemini 2.5 Flash model.
result = lx.extract(
    text_or_documents=input_text,
    prompt_description=prompt,
    examples=examples,
    model_id="gemini-2.5-flash",
    api_key=LANGEXTRACT_API_KEY  # the key read from the environment earlier
)
Step 3: View the Results
You can display the extracted entities along with their character positions:
print(f"Enter: {input_text}n")
print("Extracted entities:")
for entity in consequence.extractions:
position_info = ""
if entity.char_interval:
begin, finish = entity.char_interval.start_pos, entity.char_interval.end_pos
position_info = f" (pos: {begin}-{finish})"
print(f"• {entity.extraction_class.capitalize()}: {entity.extraction_text}{position_info}")

LangExtract correctly identifies the adverse drug reaction without confusing it with the patient's pre-existing conditions, which is a key challenge in this type of task.
If you want to visualize the results, you first save them to a .jsonl file. You can then pass that .jsonl file to the visualization function, which generates an interactive HTML view for you.
from IPython.display import display  # available by default in notebooks

lx.io.save_annotated_documents(
    [result],
    output_name="adr_extraction.jsonl",
    output_dir="."
)

# Generate the visualization from the saved .jsonl file
html_content = lx.visualize("adr_extraction.jsonl")

# Display the HTML content directly in the notebook
display(html_content)
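If you're running outside a notebook, you can also write the visualization to a standalone HTML file and open it in a browser. This is a small sketch that assumes html_content is either an IPython HTML object (with a .data attribute) or a plain string.
# Save the visualization to a standalone HTML file
with open("adr_visualization.html", "w") as f:
    f.write(html_content.data if hasattr(html_content, "data") else str(html_content))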

Working with Longer Clinical Notes
Real clinical notes are often much longer than the example shown above. For instance, here is an actual note from the ADE-Corpus-V2 dataset, released under the MIT License. You can access it on Hugging Face or Zenodo.

To process longer texts with LangExtract, you keep the same workflow but add three parameters:
- extraction_passes runs multiple passes over the text to catch more details and improve recall.
- max_workers controls parallel processing so larger documents can be handled faster.
- max_char_buffer splits the text into smaller chunks, which helps the model stay accurate even when the input is very long.
result = lx.extract(
    text_or_documents=input_text,
    prompt_description=prompt,
    examples=examples,
    model_id="gemini-2.5-flash",
    extraction_passes=3,    # multiple passes to improve recall
    max_workers=20,         # process chunks in parallel
    max_char_buffer=1000    # split the text into smaller chunks
)
Here is the output. For brevity, I'm only showing a portion of it here.

If you want, you can also pass a document's URL directly to the text_or_documents parameter.
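For example, a call like the one below should work. This is only a sketch: the URL is a placeholder, not a real document, and everything else reuses the prompt and examples defined earlier.
# Hypothetical example: let LangExtract fetch and process a document from a URL
result = lx.extract(
    text_or_documents="https://example.com/clinical_note.txt",  # placeholder URL
    prompt_description=prompt,
    examples=examples,
    model_id="gemini-2.5-flash"
)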
Using LangExtract with Local Models via Ollama
LangExtract isn't restricted to proprietary APIs. You can also run it with local models through Ollama. This is especially useful when working with privacy-sensitive clinical data that can't leave your secure environment. You can set up Ollama locally, pull your preferred model, and point LangExtract to it. Full instructions are available in the official docs.
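As a rough sketch, the call looks much like before, except you point model_id at a locally pulled Ollama model and pass the local server URL. The model name and parameters such as model_url, fence_output, and use_schema_constraints follow the pattern in the project's Ollama example and may differ slightly between versions, so treat this as an illustration rather than a definitive recipe.
# Assumes Ollama is running locally and the model has been pulled, e.g. via `ollama pull gemma2:2b`
result = lx.extract(
    text_or_documents=input_text,
    prompt_description=prompt,
    examples=examples,
    model_id="gemma2:2b",                  # example local model served by Ollama
    model_url="http://localhost:11434",    # default Ollama endpoint
    fence_output=False,                    # local models typically return raw text, not fenced output
    use_schema_constraints=False
)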
Conclusion
If you're building an information retrieval system or any application involving metadata extraction, LangExtract can save you a significant amount of preprocessing effort. In my ADR experiments, LangExtract performed well, correctly identifying medications, dosages, and reactions. What I noticed is that the output depends directly on the quality of the few-shot examples provided by the user, which means that while LLMs do the heavy lifting, humans still remain an important part of the loop. The results were encouraging, but since clinical data is high-risk, broader and more rigorous testing across diverse datasets is still needed before moving toward production use.
















