Large Language Models (LLMs) are great at generating text, but getting structured output like JSON usually requires clever prompting and hoping the LLM understands. Fortunately, JSON mode is becoming more common in LLM frameworks and services. This lets you define the exact output schema you want.
This post gets into constrained generation using JSON mode. We'll use a complex, nested, and realistic JSON schema example to guide LLM frameworks/APIs like Llama.cpp or Gemini API to generate structured data, specifically tourist location information. This builds on a previous post about constrained generation using Guidance, but focuses on the more widely adopted JSON mode.
While more limited than Guidance, JSON mode's broader support makes it more accessible, especially with cloud-based LLM providers.
During a personal project, I discovered that while JSON mode was straightforward with Llama.cpp, getting it to work with Gemini API required some additional steps. This post shares those solutions to help you use JSON mode effectively.
Our example schema represents a TouristLocation. It is a non-trivial structure with nested objects, lists, enums, and various data types like strings and numbers.
Here's a simplified version:
{
  "name": "string",
  "location_long_lat": ["number", "number"],
  "climate_type": {"type": "string", "enum": ["tropical", "desert", "temperate", "continental", "polar"]},
  "activity_types": ["string"],
  "attraction_list": [
    {
      "name": "string",
      "description": "string"
    }
  ],
  "tags": ["string"],
  "description": "string",
  "most_notably_known_for": "string",
  "location_type": {"type": "string", "enum": ["city", "country", "establishment", "landmark", "national park", "island", "region", "continent"]},
  "parents": ["string"]
}
You can write this type of schema by hand, or you can generate it using the Pydantic library. Here is how you can do it on a simplified example:
from typing import List

from pydantic import BaseModel, Field


class TouristLocation(BaseModel):
    """Model for a tourist location"""

    high_season_months: List[int] = Field(
        [], description="List of months (1-12) when the location is most visited"
    )
    tags: List[str] = Field(
        ...,
        description="List of tags describing the location (e.g. accessible, sustainable, sunny, cheap, expensive)",
        min_length=1,
    )
    description: str = Field(..., description="Text description of the location")


# Example usage and schema output
location = TouristLocation(
    high_season_months=[6, 7, 8],
    tags=["beach", "sunny", "family-friendly"],
    description="A beautiful beach with white sand and clear blue water.",
)

schema = location.model_json_schema()
print(schema)
This code defines a simplified version of the TouristLocation data class using Pydantic. It has three fields:
high_season_months: A list of integers representing the months of the year (1-12) when the location is most visited. Defaults to an empty list.
tags: A list of strings describing the location with tags like "accessible", "sustainable", etc. This field is required (...) and must have at least one element (min_length=1).
description: A string field containing a text description of the location. This field is also required.
The code then creates an instance of the TouristLocation class and uses model_json_schema() to get the JSON Schema representation of the model. This schema defines the structure and types of the data expected for this class.
model_json_schema() returns:
{'description': 'Model for a tourist location',
 'properties': {'description': {'description': 'Text description of the '
                                               'location',
                                'title': 'Description',
                                'type': 'string'},
                'high_season_months': {'default': [],
                                       'description': 'List of months (1-12) '
                                                      'when the location is '
                                                      'most visited',
                                       'items': {'type': 'integer'},
                                       'title': 'High Season Months',
                                       'type': 'array'},
                'tags': {'description': 'List of tags describing the location '
                                        '(e.g. accessible, sustainable, sunny, '
                                        'cheap, expensive)',
                         'items': {'type': 'string'},
                         'minItems': 1,
                         'title': 'Tags',
                         'type': 'array'}},
 'required': ['tags', 'description'],
 'title': 'TouristLocation',
 'type': 'object'}
Now that we have our schema, let's see how we can enforce it. First in Llama.cpp with its Python wrapper, and second using Gemini's API.
Llama.cpp is a C++ library for running Llama models locally. It's beginner-friendly and has an active community. We will be using it through its Python wrapper.
Here's how to generate TouristLocation data with it:
# Imports and stuff

# Model init:
checkpoint = "lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF"

model = Llama.from_pretrained(
    repo_id=checkpoint,
    n_gpu_layers=-1,
    filename="*Q4_K_M.gguf",
    verbose=False,
    n_ctx=12_000,
)

messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant that outputs in JSON."
        f"Follow this schema {TouristLocation.model_json_schema()}",
    },
    {"role": "user", "content": "Generate information about Hawaii, US."},
    {"role": "assistant", "content": f"{location.model_dump_json()}"},
    {"role": "user", "content": "Generate information about Casablanca"},
]

response_format = {
    "type": "json_object",
    "schema": TouristLocation.model_json_schema(),
}

start = time.time()

outputs = model.create_chat_completion(
    messages=messages, max_tokens=1200, response_format=response_format
)

print(outputs["choices"][0]["message"]["content"])
print(f"Time: {time.time() - start}")
The code first imports the necessary libraries and initializes the LLM model. Then, it defines a list of messages for a conversation with the model, including a system message instructing the model to output in JSON format according to a specific schema, user requests for information about Hawaii and Casablanca, and an assistant response using the required schema.
Llama.cpp uses context-free grammars under the hood to constrain the structure and generate valid JSON output for a new city.
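As a toy illustration of this idea (my own sketch, not Llama.cpp's actual grammar engine), constrained decoding works by masking candidate tokens that would break the structure; here the "grammar" is reduced to a simple brace-balance check:

```python
def allowed_next(prefix, candidates):
    # Toy constraint: keep only candidate tokens that would not close
    # more braces than have been opened so far.
    keep = []
    for tok in candidates:
        text = prefix + tok
        if text.count("}") <= text.count("{"):
            keep.append(tok)
    return keep


print(allowed_next('{"name": "x"', ["}", "}}", ","]))  # ['}', ',']
```

A real grammar engine applies this kind of mask at every decoding step against the full JSON grammar, so the model can only ever sample tokens that keep the output parseable.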
In the output, we get the following generated string:
{'activity_types': ['shopping', 'food and wine', 'cultural'],
 'attraction_list': [{'description': 'One of the largest mosques in the world '
                                     'and a symbol of Moroccan architecture',
                      'name': 'Hassan II Mosque'},
                     {'description': 'A historic walled city with narrow '
                                     'streets and traditional shops',
                      'name': 'Old Medina'},
                     {'description': 'A historic square with a beautiful '
                                     'fountain and surrounding buildings',
                      'name': 'Mohammed V Square'},
                     {'description': 'A beautiful Catholic cathedral built in '
                                     'the early 20th century',
                      'name': 'Casablanca Cathedral'},
                     {'description': 'A scenic waterfront promenade with '
                                     'beautiful views of the city and the sea',
                      'name': 'Corniche'}],
 'climate_type': 'temperate',
 'description': 'A large and bustling city with a rich history and culture',
 'location_type': 'city',
 'most_notably_known_for': 'Its historical architecture and cultural '
                           'significance',
 'name': 'Casablanca',
 'parents': ['Morocco', 'Africa'],
 'tags': ['city', 'cultural', 'historical', 'expensive']}
This can then be parsed into an instance of our Pydantic class.
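For example, assuming Pydantic v2 and the simplified model from earlier, parsing the generated string back is a single call (the raw string below is a hand-written stand-in for actual model output):

```python
from typing import List

from pydantic import BaseModel, Field


class TouristLocation(BaseModel):
    """Simplified model, as defined earlier in the post."""

    high_season_months: List[int] = Field(
        [], description="List of months (1-12) when the location is most visited"
    )
    tags: List[str] = Field(..., min_length=1, description="Tags for the location")
    description: str = Field(..., description="Text description of the location")


# Hand-written stand-in for the LLM's JSON output:
raw = '{"tags": ["city", "cultural"], "description": "A large and bustling city."}'
loc = TouristLocation.model_validate_json(raw)
print(loc.tags)  # ['city', 'cultural']
```

If the model ever emits JSON that violates the schema, `model_validate_json` raises a `ValidationError`, so this is also a convenient final safety net.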
Gemini API, Google's managed LLM service, claims limited JSON mode support for Gemini 1.5 Flash in its documentation. However, it can be made to work with a few adjustments.
Here are the general instructions to get it to work:
schema = TouristLocation.model_json_schema()
schema = replace_value_in_dict(schema.copy(), schema.copy())
del schema["$defs"]
delete_keys_recursive(schema, key_to_delete="title")
delete_keys_recursive(schema, key_to_delete="location_long_lat")
delete_keys_recursive(schema, key_to_delete="default")
delete_keys_recursive(schema, key_to_delete="minItems")

print(schema)

messages = [
    ContentDict(
        role="user",
        parts=[
            "You are a helpful assistant that outputs in JSON."
            f"Follow this schema {TouristLocation.model_json_schema()}"
        ],
    ),
    ContentDict(role="user", parts=["Generate information about Hawaii, US."]),
    ContentDict(role="model", parts=[f"{location.model_dump_json()}"]),
    ContentDict(role="user", parts=["Generate information about Casablanca"]),
]
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Using `response_mime_type` with `response_schema` requires a Gemini 1.5 Pro model
model = genai.GenerativeModel(
    "gemini-1.5-flash",
    # Set the `response_mime_type` to output JSON
    # Pass the schema object to the `response_schema` field
    generation_config={
        "response_mime_type": "application/json",
        "response_schema": schema,
    },
)
response = model.generate_content(messages)

print(response.text)
Here's how to overcome Gemini's limitations:
- Replace $ref with Full Definitions: Gemini stumbles on schema references ($ref). These are used when you have a nested object definition. Replace them with the complete definition from your schema.
def replace_value_in_dict(item, original_schema):
    # Source: https://github.com/pydantic/pydantic/issues/889
    if isinstance(item, list):
        return [replace_value_in_dict(i, original_schema) for i in item]
    elif isinstance(item, dict):
        if list(item.keys()) == ["$ref"]:
            definitions = item["$ref"][2:].split("/")
            res = original_schema.copy()
            for definition in definitions:
                res = res[definition]
            return res
        else:
            return {
                key: replace_value_in_dict(i, original_schema)
                for key, i in item.items()
            }
    else:
        return item
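To sanity-check the helper, here is a quick run on a toy schema containing a $ref (the function is copied from above so the snippet is self-contained; the toy schema is made up for illustration):

```python
def replace_value_in_dict(item, original_schema):
    # Inline `$ref` pointers by looking them up in the full schema
    # (copied from the helper above).
    if isinstance(item, list):
        return [replace_value_in_dict(i, original_schema) for i in item]
    elif isinstance(item, dict):
        if list(item.keys()) == ["$ref"]:
            definitions = item["$ref"][2:].split("/")
            res = original_schema.copy()
            for definition in definitions:
                res = res[definition]
            return res
        else:
            return {
                key: replace_value_in_dict(i, original_schema)
                for key, i in item.items()
            }
    else:
        return item


# Made-up schema fragment with a single nested definition:
toy = {
    "$defs": {"Attraction": {"type": "object",
                             "properties": {"name": {"type": "string"}}}},
    "properties": {"attraction": {"$ref": "#/$defs/Attraction"}},
}
inlined = replace_value_in_dict(toy.copy(), toy.copy())
print(inlined["properties"]["attraction"]["type"])  # object
```

After this pass, every "$ref" pointer has been replaced by the full definition it pointed to, which is why the top-level "$defs" block can then be deleted.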
- Remove Unsupported Keys: Gemini doesn't yet handle keys like "title", "AnyOf", or "minItems". Remove these from your schema. This results in a less readable and less restrictive schema, but we don't have another choice if we insist on using Gemini.
def delete_keys_recursive(d, key_to_delete):
    if isinstance(d, dict):
        # Delete the key if it exists
        if key_to_delete in d:
            del d[key_to_delete]
        # Recursively process all values in the dictionary
        for k, v in d.items():
            delete_keys_recursive(v, key_to_delete)
    elif isinstance(d, list):
        # Recursively process all items in the list
        for item in d:
            delete_keys_recursive(item, key_to_delete)
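Again, a quick self-contained sanity check on a made-up schema fragment (function copied from above):

```python
def delete_keys_recursive(d, key_to_delete):
    # Recursively drop every occurrence of `key_to_delete`
    # (copied from the helper above).
    if isinstance(d, dict):
        if key_to_delete in d:
            del d[key_to_delete]
        for k, v in d.items():
            delete_keys_recursive(v, key_to_delete)
    elif isinstance(d, list):
        for item in d:
            delete_keys_recursive(item, key_to_delete)


# Made-up fragment of a Pydantic-generated schema:
toy_schema = {
    "title": "TouristLocation",
    "properties": {"tags": {"title": "Tags", "type": "array"}},
}
delete_keys_recursive(toy_schema, key_to_delete="title")
print(toy_schema)  # {'properties': {'tags': {'type': 'array'}}}
```

Note that the function mutates the schema in place rather than returning a cleaned copy, so pass it a copy if you need to keep the original.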
- One-Shot or Few-Shot Prompting for Enums: Gemini sometimes struggles with enums, outputting all possible values instead of a single selection. The values are also separated by "|" in a single string, making them invalid according to our schema. Use one-shot prompting, providing a correctly formatted example, to guide it towards the desired behavior.
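If an enum still comes back as several values joined by "|", a small post-processing fallback can salvage the response before validation. This helper is my own addition, not something from the Gemini docs:

```python
ALLOWED_LOCATION_TYPES = {"city", "country", "establishment", "landmark",
                          "national park", "island", "region", "continent"}


def coerce_enum(value, allowed):
    # If the model emitted several options joined by "|", keep the first
    # candidate that is a valid member of the enum.
    for candidate in value.split("|"):
        candidate = candidate.strip()
        if candidate in allowed:
            return candidate
    raise ValueError(f"No valid enum value in {value!r}")


print(coerce_enum("city|country|landmark", ALLOWED_LOCATION_TYPES))  # city
```

Run this on the offending field before handing the JSON to Pydantic, so validation sees a single clean enum value.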
By applying these transformations and providing clear examples, you can successfully generate structured JSON output with the Gemini API.
JSON mode allows you to get structured data directly from your LLMs, making them more useful for practical applications. While frameworks like Llama.cpp offer straightforward implementations, you might encounter issues with cloud services like Gemini API.
Hopefully, this blog gave you a better practical understanding of how JSON mode works and how you can use it even with Gemini's API, which only has partial support so far.
Now that I was able to get Gemini to somewhat work with JSON mode, I can complete the implementation of my LLM workflow, where having data structured in a specific way is necessary.
You can find the main code of this post here: https://gist.github.com/CVxTz/8eace07d9bd2c5123a89bf790b5cc39e