or so, it has been impossible to deny that there has been a rise in the level of hype around AI, especially with the rise of generative AI and agentic AI. As a data scientist working in a consulting firm, I have noticed a considerable growth in the number of enquiries about how we can leverage these new technologies to make processes more efficient or automated. And while this interest might flatter us data scientists, it sometimes seems as if people expect magic from AI models, as if they could solve every problem with nothing more than a prompt. On the other hand, while I personally believe generative and agentic AI has changed (and will continue to change) how we work and live, when we redesign business processes we must consider its limitations and challenges and see where it proves to be a good tool (we wouldn't use a fork, for example, to cut food).
Since I'm a nerd and understand how LLMs work, I wanted to test their performance in a logic game like the Spanish version of Wordle against a logic I had built in a couple of hours some years ago (more details on that can be found here). Specifically, I had the following questions:
- Will my algorithm be better than the LLM models?
- How will reasoning capabilities in LLM models affect their performance?
Building an LLM-based solution
To get a solution from the LLM model, I built three main prompts. The first one was targeted at getting an initial guess:
Let's suppose I'm playing WORDLE, but in Spanish. It's a game where you have to guess a 5-letter word, and only 5 letters, in 6 attempts. Also, a letter can be repeated in the final word.
First, let's review the rules of the game: Every day the game chooses a five-letter word that players try to guess within six attempts. After the player enters the word they think it is, each letter is marked in green, yellow, or gray: green means the letter is correct and in the correct position; yellow means the letter is in the hidden word but not in the correct position; while gray means the letter is not in the hidden word.
But if you place a letter twice and one shows up green and the other yellow, it means the letter appears twice: once in the green position, and once in another position that is not the yellow one.
Example: If the hidden word is "PIZZA", and your first attempt is "PANEL", the response would look like this: the "P" would be green, the "A" yellow, and the "N", "E", and "L" gray.
Since for now we don't know anything about the target word, give me a good starting word, one that you think will provide useful information to help us figure out the final word.
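In the experiment itself the games were played through the chat interfaces, but as a rough illustration of how this first prompt could be sent programmatically, here is a minimal sketch using the OpenAI Python SDK. The model name, the abridged prompt text, and the reliance on the OPENAI_API_KEY environment variable are assumptions for the example, not the exact setup used here.

```python
# Minimal sketch (not the setup used in this article): sending the first prompt
# programmatically via the OpenAI Python SDK to get an opening guess.
# Assumes the OPENAI_API_KEY environment variable is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

FIRST_PROMPT = """Let's suppose I'm playing WORDLE, but in Spanish. It's a game where you
have to guess a 5-letter word, and only 5 letters, in 6 attempts. ...
Since for now we don't know anything about the target word, give me a good starting word."""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": FIRST_PROMPT}],
)

first_guess = response.choices[0].message.content.strip().upper()
print(first_guess)
```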
Then, a second prompt would be used to lay out all the word rules (the prompt here isn't shown in full due to space, but the full version also had example games and example reasonings):
Now, the idea is that we review the game strategy. I'll be giving you the game results. The idea is that, given this result, you suggest a new 5-letter word. Remember also that there are only 6 total attempts. I'll give you the result in the following format:
LETTER -> COLOR
For example, if the hidden word is PIZZA, and the attempt is PANEL, I'll give the result in this format:
P -> GREEN (it's the first letter of the final word)
A -> YELLOW (it's in the word, but not in the second position; instead it's in the last one)
N -> GRAY (it's not in the word)
E -> GRAY (it's not in the word)
L -> GRAY (it's not in the word)
Let's remember the rules. If a letter is green, it means it's in the position where it was placed. If it's yellow, it means the letter is in the word, but not in that position. If it's gray, it means it's not in the word.
If you place a letter twice and one shows green and the other gray, it means the letter only appears once in the word. But if you place a letter twice and one shows green and the other yellow, it means the letter appears twice: once in the green position, and another time in a different position (not the yellow one).
All the information I give you must be used to build your suggestion. At the end of the day, we want to "turn" all the letters green, since that means we guessed the word.
Your final answer must only contain the word suggestion, not your reasoning.
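For reference, the coloring rules that this prompt spells out can be written down as a small function. The sketch below is a generic implementation of standard Wordle feedback (it is not the algorithm I built for the comparison), producing the `LETTER -> COLOR` lines in the format the prompt expects.

```python
# Generic sketch of the Wordle feedback rules described in the prompt
# (not the tailored algorithm from this article). Repeated letters are handled
# with the usual two-pass approach: greens first, then yellows while copies remain.
from collections import Counter

def wordle_feedback(hidden: str, guess: str) -> list[str]:
    hidden, guess = hidden.upper(), guess.upper()
    colors = ["GRAY"] * len(guess)

    # First pass: exact matches turn green; unmatched hidden letters stay available.
    remaining = Counter()
    for i, (h, g) in enumerate(zip(hidden, guess)):
        if h == g:
            colors[i] = "GREEN"
        else:
            remaining[h] += 1

    # Second pass: misplaced letters turn yellow only while copies remain.
    for i, g in enumerate(guess):
        if colors[i] == "GRAY" and remaining[g] > 0:
            colors[i] = "YELLOW"
            remaining[g] -= 1

    return [f"{g} -> {c}" for g, c in zip(guess, colors)]

print("\n".join(wordle_feedback("PIZZA", "PANEL")))
# P -> GREEN, A -> YELLOW, N/E/L -> GRAY, matching the example above.
```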
The final prompt was used to get a new suggestion after having the result of our attempt:
Here's the result. Remember that the word must have 5 letters, that you must use the rules and all the information of the game, and that the goal is to "turn" all the letters green, with no more than 6 attempts to guess the word. Take your time to think through your answer; I don't need a fast response. Don't give me your reasoning, only your final result.
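Putting the three prompts together, a whole game amounts to a simple loop: send the first prompt, score the guess, then alternate feedback and the follow-up prompt until the word is found or the six attempts run out. The sketch below is a hypothetical programmatic version of that loop, reusing the illustrative `client`, `FIRST_PROMPT`, and `wordle_feedback` pieces from the earlier snippets; `RULES_PROMPT` and `FOLLOW_UP` are abridged stand-ins for the second and third prompts quoted above.

```python
# Hypothetical game loop tying the three prompts together (a sketch, not the
# chat-based procedure actually used). Reuses client, FIRST_PROMPT and
# wordle_feedback from the previous snippets.
RULES_PROMPT = "Now, the idea is that we review the game strategy. ..."   # second prompt, abridged
FOLLOW_UP = "Here's the result. Remember that the word must have 5 letters. ..."  # third prompt, abridged

def play_game(hidden_word: str, max_attempts: int = 6) -> bool:
    messages = [{"role": "user", "content": FIRST_PROMPT}]
    for attempt in range(max_attempts):
        reply = client.chat.completions.create(model="gpt-4o", messages=messages)
        guess = reply.choices[0].message.content.strip().upper()  # assumes the reply is just the word
        feedback = wordle_feedback(hidden_word, guess)
        if all(line.endswith("GREEN") for line in feedback):
            return True  # all letters green: the word was guessed
        # Keep the whole conversation so the model sees every previous result.
        messages.append({"role": "assistant", "content": guess})
        next_prompt = RULES_PROMPT if attempt == 0 else FOLLOW_UP
        messages.append({"role": "user", "content": next_prompt + "\n" + "\n".join(feedback)})
    return False
```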
Something important here is that I never tried to guide the LLMs or point out errors or mistakes in their logic. I wanted a pure LLM-based result and didn't want to bias the solution in any shape or form.
Initial experiments
The truth is that my initial hypothesis was that, while I expected my algorithm to be better than the LLMs, I thought the generative AI-based solution was going to do a pretty good job without much help. But after some days, I noticed some "funny" behaviors, like the one below (where the answer was obvious):

The answer was quite obvious: it only had to swap two letters. However, ChatGPT answered with the same guess as before.
After seeing these kinds of errors, I started to ask about this at the end of games, and the LLMs mostly acknowledged their mistakes, but didn't provide a clear explanation for their answers:

While these are just two examples, this kind of behavior was common when generating the pure LLM solution, showcasing some potential limitations in the reasoning of base models.
Results Analysis
Taking all this information into account, I ran an experiment for 30 days. For 15 days I compared my algorithm against 3 base LLM models:
- ChatGPT's 4o/5 model (after OpenAI launched the GPT-5 model, I couldn't toggle between models on the free tier of ChatGPT)
- Gemini's 2.5 Flash model
- Meta's Llama 4 model
Here, I compared two main metrics: the percentage of wins and a points-system metric (any green letter in the final guess awarded 3 points, yellow letters awarded 1 point, and gray letters awarded 0 points):

As can be seen, my algorithm (while specific to this use case, it only took me a day or so to build) is the only approach that wins every day. Analyzing the LLM models, Gemini gives the worst performance, while ChatGPT and Meta's Llama show similar numbers. However, as can be seen in the figure on the right, there is great variability in the performance of each model, and consistency is not something these alternatives show for this particular use case.
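As a concrete reading of the points metric defined above, a game could be scored from the colors of the final guess and aggregated into a win rate across days. This is a minimal sketch under those stated rules; the function names are illustrative and not taken from the experiment code.

```python
# Illustrative scoring of the metrics described above: 3 points per green letter,
# 1 per yellow, 0 per gray in the final guess, plus the share of games won.
POINTS = {"GREEN": 3, "YELLOW": 1, "GRAY": 0}

def game_points(final_colors: list[str]) -> int:
    return sum(POINTS[c] for c in final_colors)

def win_rate(games: list[list[str]]) -> float:
    wins = sum(all(c == "GREEN" for c in g) for g in games)
    return wins / len(games)

# A won game scores 15 points; a near miss with four greens and one yellow scores 13.
print(game_points(["GREEN"] * 5))                                   # 15
print(game_points(["GREEN", "GREEN", "YELLOW", "GREEN", "GREEN"]))  # 13
```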
However, these results wouldn't be complete if we didn't analyze a reasoning LLM model against my algorithm (and against a base LLM model). So, for the following 15 days I also compared the following models:
- ChatGPT's 4o/5 model using the reasoning capability
- Gemini's 2.5 Flash model (same model as before)
- Meta's Llama 4 model (same model as before)
Some important comments here: initially, I planned to use Grok as well, but after Grok 4 was launched, the reasoning toggle for Grok 3 disappeared, which made comparisons difficult. On the other hand, I tried to use Gemini 2.5 Pro, but in contrast with ChatGPT's reasoning option, it isn't a toggle but a different model, which only allowed me to send 5 prompts per day, not enough to complete a full game. With this in mind, here are the results for the following 15 days:

The reasoning capability behind LLMs gives a huge boost to performance in this task, which requires understanding which letter can be used in each position, which ones have been evaluated, remembering all results, and understanding all combinations. Not only are the average results better, but performance is also more consistent: in the two games that weren't won, only one letter was missed. In spite of this improvement, the specific algorithm I built is still slightly better in terms of performance, but as I mentioned earlier, it was built for this specific task. Something interesting is that for these 15 games, the base LLM models (Gemini 2.5 Flash and Llama 4) didn't win once, and their performance was worse than in the previous set, which makes me wonder whether the earlier wins were lucky or not.
Final Remarks
The intention of this exercise has been to test the performance of LLMs against a purpose-built algorithm on a task that requires applying logic rules to generate a winning result. We have seen that base models don't perform well, but that the reasoning capabilities of LLM solutions provide an important boost, producing performance similar to the results of the tailored algorithm I had built. One important thing to keep in mind is that while this improvement is real, for real-world applications and production systems we also have to consider response time (reasoning LLM models take more time to generate an answer than base models or, in this case, the logic I built) and cost (according to the Azure OpenAI pricing page, as of August 30, 2025, the price of 1M input tokens for the general-purpose GPT-4o-mini model is around $0.15, while for the o4-mini reasoning model the cost of 1M input tokens is $1.10). While I firmly believe that LLMs and generative AI will continue to change the way we work, we can't treat them as a Swiss Army knife that solves everything, without considering their limitations and without evaluating easy-to-build tailored solutions.
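To give a rough sense of what that price gap means per game: assuming, purely for illustration, that a full six-attempt game sends on the order of 3,000 input tokens, the input-token cost would work out roughly as in the sketch below. The token count is a guess, and output tokens, which reasoning models consume far more heavily, are not included.

```python
# Back-of-the-envelope input-token cost per game, using the prices quoted above.
# The ~3,000 tokens per game is an assumption for illustration only; output tokens
# (much heavier for reasoning models) are ignored.
PRICE_PER_1M_INPUT = {"gpt-4o-mini": 0.15, "o4-mini": 1.10}  # USD, per the cited Azure OpenAI page
TOKENS_PER_GAME = 3_000  # assumed

for model, price in PRICE_PER_1M_INPUT.items():
    cost = TOKENS_PER_GAME / 1_000_000 * price
    print(f"{model}: ~${cost:.5f} input cost per game")
# The roughly 7x ratio matters more here than the absolute (tiny) per-game numbers.
```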