that “groups are remarkably intelligent, and are often smarter than the smartest people in them.” He was writing about decision-making, but the same principle applies to classification: get enough people to describe the same phenomenon and a taxonomy begins to emerge, even if no two people phrase it the same way. The challenge is extracting that signal from the noise.
I had several thousand rows of free-text data and needed to do exactly that. Each row was a short natural-language annotation explaining why an automated security finding was irrelevant, which functions to use for a fix, or what coding practices to follow. One person wrote “this is test code, not deployed anywhere.” Another wrote “non-production environment, safe to ignore.” A third wrote “only runs in the CI/CD pipeline during integration tests.” All three meant the same thing, but no two shared more than a word or two.
The taxonomy was in there. I just needed the right tool to extract it. Traditional clustering and keyword matching couldn’t handle the paraphrase variation, so I tried something I hadn’t seen discussed much: using a locally hosted LLM as a zero-shot classifier. This blog post explores how it performed, how it works, and some tips for using and deploying these systems yourself.
Why traditional clustering struggles with short free-text
Standard unsupervised clustering works by finding mathematical proximity in some feature space. For long documents, this is usually fine. Enough signal exists in word frequencies or embedding vectors to form coherent groups. But short, semantically dense text breaks these assumptions in a few specific ways.
Embedding similarity conflates different meanings. “This key is only used in development” and “This API key is hardcoded for convenience” produce similar embeddings because the vocabulary overlaps. But one is about a non-production environment and the other is about an intentional security tradeoff. K-means or DBSCAN can’t distinguish them because the vectors are too close.
Topic models surface words, not concepts. Latent Dirichlet Allocation (LDA) and its variants find word co-occurrence patterns. When your corpus consists of one-sentence annotations, the word co-occurrence signal is too sparse to form meaningful topics. You get clusters defined by “test” or “code” or “security” rather than coherent themes.
Regex and keyword matching can’t handle paraphrase variation. You could write rules to catch “test code” and “non-production,” but you’d miss “only used during CI,” “never deployed,” “development-only fixture,” and dozens of other phrasings that all express the same underlying idea.
The common thread: these methods operate on surface features (tokens, vectors, patterns) rather than semantic meaning. For classification tasks where meaning matters more than vocabulary, you need something that understands language.
LLMs as zero-shot classifiers
The key insight is simple: instead of asking an algorithm to discover clusters, define your candidate categories based on domain knowledge and ask a language model to classify each entry.
This works because LLMs process semantic meaning, not just token patterns. “This key is only used in development” and “Non-production environment, safe to ignore” contain almost no overlapping words, but a language model understands they express the same idea. This isn’t just intuition. Chae and Davidson (2025) compared 10 models across zero-shot, few-shot, and fine-tuned training regimes and found that large LLMs in zero-shot mode performed competitively with fine-tuned BERT on stance detection tasks. Wang et al. (2023) found that LLMs outperformed state-of-the-art classification methods on three of four benchmark datasets using zero-shot prompting alone, with no labeled training data required.
The setup has three components:
- Candidate categories. A list of mutually exclusive categories defined from domain knowledge. In my case, I started with about 10 expected themes (test code, input validation, framework protections, non-production environments, and so on) and expanded to 20 candidates after reviewing a sample.
- A classification prompt. Structured to return a category label and a brief reason. Low temperature (0.1) for consistency. Short max output (100 tokens), since we only need a label, not an essay.
- A local LLM. I used Ollama to run models locally. No API costs, no data leaving my machine, and fast enough for thousands of classifications.
Here’s the core of the classification prompt:
CLASSIFICATION_PROMPT = """
Classify this text into one of these themes:
{themes}
Text:
"{content}"
Answer with ONLY the theme number and name, and a brief reason.
Format: THEME_NUMBER. THEME_NAME | Reason
Classification:
"""
And the Ollama call:
import ollama

response = ollama.generate(
    model="gemma2",
    prompt=prompt,
    options={
        "temperature": 0.1,  # Low temperature for consistent classification
        "num_predict": 100,  # Short response; we just need a label
    },
)
Two things to note. First, the temperature setting matters. At 0.7 or higher, the same input can produce different classifications across runs. At 0.1, the model is nearly deterministic, which keeps the classifications clean. Second, limiting num_predict keeps the model from generating explanations you don’t need, which speeds up throughput considerably.
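The response still comes back as free text in the THEME_NUMBER. THEME_NAME | Reason format, so the pipeline needs a small parsing step. Here’s a minimal sketch; the parse_classification helper and its fallback behavior are my own illustration, not part of the Ollama library:

import re

def parse_classification(response_text):
    """Parse 'THEME_NUMBER. THEME_NAME | Reason' into its parts."""
    match = re.match(r"\s*(\d+)\.\s*([^|]+?)\s*\|\s*(.+)", response_text.strip())
    if match is None:
        return None  # Model drifted from the format; set aside for manual review
    number, name, reason = match.groups()
    return int(number), name.strip(), reason.strip()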
Building the pipeline
The full pipeline has three steps: preprocess, classify, analyze.
Preprocessing strips content that adds tokens without adding classification signal. URLs, boilerplate phrases (“For more information, see…”), and formatting artifacts all get removed. Common phrases get normalized (“false positive” becomes “FP,” “production” becomes “prod”) to reduce token variation. Deduplication by content hash removes exact repeats. This step reduced my token budget by roughly 30% and made classification more consistent.
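A minimal sketch of that step might look like the following; the URL regex and the normalization map are illustrative assumptions, not the exact production rules:

import hashlib
import re

NORMALIZATIONS = {"false positive": "FP", "production": "prod"}  # Illustrative subset

def preprocess(entries):
    seen, cleaned = set(), []
    for text in entries:
        text = re.sub(r"https?://\S+", "", text)  # Strip URLs
        text = re.sub(r"For more information, see.*", "", text)  # Drop boilerplate
        for phrase, short in NORMALIZATIONS.items():
            text = text.replace(phrase, short)  # Normalize common phrases
        text = text.strip()
        digest = hashlib.sha256(text.lower().encode()).hexdigest()
        if text and digest not in seen:  # Dedupe by content hash
            seen.add(digest)
            cleaned.append(text)
    return cleaned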
Classification runs each entry through the LLM with the candidate categories. For ~7,000 entries, this took about 45 minutes on a MacBook Pro using Gemma 2 (9B parameters). I also tested Llama 3.2 (3B), which was faster but slightly less precise on edge cases where two categories were close. Gemma 2 handled ambiguous entries with noticeably better judgment.
One practical concern: long runs can fail partway through. The pipeline saves a checkpoint every 100 classifications, so you can resume from where you left off.
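Here’s a sketch of that loop, assuming the CLASSIFICATION_PROMPT template from above; the classify_all helper, checkpoint file name, and JSON format are my own choices:

import json
from pathlib import Path

import ollama

CHECKPOINT = Path("classifications.json")

def classify_all(entries, themes):
    # Resume from the checkpoint if a previous run died partway through
    results = json.loads(CHECKPOINT.read_text()) if CHECKPOINT.exists() else {}
    for i, entry in enumerate(entries):
        if entry in results:
            continue  # Already classified in an earlier run
        response = ollama.generate(
            model="gemma2",
            prompt=CLASSIFICATION_PROMPT.format(themes=themes, content=entry),
            options={"temperature": 0.1, "num_predict": 100},
        )
        results[entry] = response["response"].strip()
        if (i + 1) % 100 == 0:  # Checkpoint every 100 classifications
            CHECKPOINT.write_text(json.dumps(results))
    CHECKPOINT.write_text(json.dumps(results))
    return results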
Analysis aggregates the results and generates a distribution chart. Here’s what the output looked like:

[Figure: bar chart of the theme distribution across all classified entries]
The chart tells a clear story. Over a quarter of all entries described code that only runs in non-production environments. Another 21.9% described cases where a security framework already handles the risk. These two categories alone account for roughly half the dataset, which is the kind of insight that’s hard to extract from unstructured text any other way.
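The aggregation itself can be as simple as counting the parsed labels and plotting them. A minimal sketch, using matplotlib as an illustrative choice:

from collections import Counter

import matplotlib.pyplot as plt

def plot_distribution(labels):
    counts = Counter(labels)  # Theme name -> number of entries
    names, values = zip(*counts.most_common())
    plt.barh(names, values)
    plt.xlabel("Number of entries")
    plt.title("Theme distribution")
    plt.gca().invert_yaxis()  # Most common theme on top
    plt.tight_layout()
    plt.savefig("distribution.png")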
When this approach is not the right fit
This technique works best in a specific niche: medium-scale datasets (hundreds to tens of thousands of entries), semantically complex text, and situations where you have enough domain knowledge to define candidate categories but no labeled training data.
It’s not the right tool when:
- your categories are keyword-defined (just use regex),
- you have labeled training data (train a supervised classifier; it will be faster and cheaper),
- you need sub-second latency at scale (use embeddings and a nearest-neighbor lookup),
- or you genuinely don’t know what categories exist. In that case, run exploratory topic modeling first to develop intuition, then switch to LLM classification once you can define categories.
The other constraint is throughput. Even on a fast machine, each classification takes a sizable fraction of a second, so 7,000 entries takes close to an hour. For datasets above 100,000 entries, you’ll want an API-hosted model or a batching strategy.
Other applications worth trying
The pipeline generalizes to any problem where you have unstructured text and need structured categories.
Customer feedback. NPS responses, support tickets, and survey open-ends all suffer from the same problem: varied phrasing for a finite set of underlying themes. “Your app crashes every time I open settings” and “Settings page is broken on iOS” belong to the same category, but keyword matching won’t catch that.
Bug report triage. Free-text bug descriptions can be auto-categorized by component, root cause, or severity. This is especially useful when the person filing the bug doesn’t know which component is responsible.
Code intent classification. This is one I haven’t tried yet but find compelling: classifying code snippets, Semgrep rules, or configuration rules by purpose (authentication, data access, error handling, logging). The same technique applies. Define the categories, write a classification prompt, and run the corpus through a local model.
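As a hypothetical starting point, the theme list for that use case might look something like this; these categories are my guess at such a taxonomy, not tested ones:

# Candidate themes for classifying code or rules by intent (illustrative only)
CODE_INTENT_THEMES = """
1. Authentication - login flows, session management, credential handling
2. Data access - queries, ORM usage, permission checks
3. Error handling - exception paths, fallback behavior
4. Logging - audit trails, telemetry, debug output
"""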
Getting started
The pipeline is simple: define your categories, write a classification prompt, and run your data through a local model.
The hardest part isn’t the code. It’s defining categories that are mutually exclusive and collectively exhaustive. My advice: start with a sample of 100 entries, classify them manually, notice which categories you keep reaching for, and use those as your candidate list. Then let the LLM scale the pattern.
I used this technique as part of a larger analysis of how security teams remediate vulnerabilities. The classification results helped surface which types of security context are most common across organizations, and the chart above is one of the outputs from that work. If you’re interested in the security angle, the full report is available at that link.