An LLM can handle basic routing. Semantic search can handle private knowledge better. Which one would you choose?
A single prompt can't handle everything, and a single data source may not be right for all of your data.
Here's something you often see in production but rarely in demos:
You need more than one data source to retrieve information. Multiple vector stores, a graph DB, or even a SQL database. And you need different prompts to handle different tasks, too.
If that's the case, we have a problem. Given unstructured, often ambiguous, and poorly formatted user input, how do we decide which database to retrieve data from?
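One way to frame this decision is as a classification step that runs before retrieval. Here's a minimal sketch of such a router. The route names and keyword rules are hypothetical placeholders; in a real system, the classification would typically be an LLM call or an embedding-similarity lookup rather than keyword matching:

```python
# A toy router standing in for an LLM-based one. The source names
# (graph_db, sql_db, vector_store) and the keyword heuristics are
# illustrative assumptions, not a production routing scheme.

def route_query(user_input: str) -> str:
    """Pick a data source for a raw, possibly messy user query."""
    text = user_input.lower()
    # Relationship/path questions suit a graph database.
    if any(k in text for k in ("route", "schedule", "between", "path")):
        return "graph_db"
    # Aggregation questions suit a SQL database.
    if any(k in text for k in ("how many", "average", "count", "total")):
        return "sql_db"
    # Default: semantic retrieval over unstructured documents.
    return "vector_store"

print(route_query("Plan a schedule between five cities"))  # graph_db
print(route_query("How many tours ran last month?"))       # sql_db
```

The point of the sketch is the shape, not the rules: one cheap classification step up front lets each downstream source receive the queries it is actually good at answering.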
If, for some reason, you still think this is easy, here's an example.
Suppose you have a tour-guiding chatbot, and a traveler asks for an optimal travel schedule between five places. Letting the LLM answer directly invites hallucination, as LLMs aren't good at location-based calculations.
Instead, if you store this information in a graph database, the LLM can generate a query to fetch the shortest travel path between the points…
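To make the shortest-path idea concrete, here's a self-contained sketch using Dijkstra's algorithm over a hypothetical travel-time graph. In production, these edges would live in the graph database and the LLM would emit a path query (e.g., Cypher's `shortestPath`) rather than computing locally; the place names and weights below are invented for illustration:

```python
import heapq

# Hypothetical travel times in minutes between places. In a real
# deployment this adjacency data would be stored in the graph DB.
GRAPH = {
    "hotel":  {"museum": 10, "park": 25},
    "museum": {"hotel": 10, "park": 12, "harbor": 20},
    "park":   {"hotel": 25, "museum": 12, "harbor": 8},
    "harbor": {"museum": 20, "park": 8},
}

def shortest_path(start: str, goal: str):
    """Dijkstra's algorithm: minimal total travel time from start to goal."""
    heap = [(0, start, [start])]  # (cost so far, current node, path taken)
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in GRAPH[node].items():
            if neighbor not in seen:
                heapq.heappush(heap, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

cost, path = shortest_path("hotel", "harbor")
print(cost, path)
```

This is exactly the kind of deterministic computation an LLM should delegate rather than attempt in its own output: the model's job is to translate the traveler's request into a query, and the database's job is to answer it correctly.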