Scaling has perhaps been the most important word in relation to Large Language Models (LLMs) since the release of ChatGPT. ChatGPT became so successful largely because of the scaled pre-training OpenAI did, which made it a powerful language model.
Following that, frontier LLM labs started scaling post-training, with supervised fine-tuning and RLHF, where models got increasingly better at following instructions and performing complex tasks.
And just when we thought LLMs were about to plateau, we started doing inference-time scaling with the release of reasoning models, where spending thinking tokens gave huge improvements to the quality of outputs.

I now argue we should continue this scaling with a new scaling paradigm: usage-based scaling, where you scale how much you're using LLMs:
- Run more coding agents in parallel
- Always have a deep research running on a topic of interest
- Run information-fetching workflows
If you're not firing off an agent before going to lunch, or before going to sleep, you're wasting time.
In this article, I'll discuss why scaling LLM usage can lead to increased productivity, especially when working as a programmer. Furthermore, I'll discuss specific techniques you can use to scale your LLM usage, both personally and at the companies you work for. I'll keep the article high-level, aiming to inspire how you can maximally utilize AI to your advantage.
Why you should scale LLM usage
We have already seen scaling be incredibly powerful with:
- pre-training
- post-training
- inference-time scaling
The reason for this is that it turns out the more computing power you spend on something, the better output quality you'll achieve. This, of course, assumes you're able to spend the compute effectively. For example, for pre-training, being able to scale compute relies on:
- Large enough models (enough weights to train)
- Enough data to train on
If you scale compute without these two components, you won't see improvements. However, if you do scale all three, you get amazing results, like the frontier LLMs we're seeing now, for example, with the release of Gemini 3.
I thus argue you should look to scale your own LLM usage as much as possible. This could, for example, be firing off multiple agents to code in parallel, or starting a Gemini deep research on a topic you're interested in.
Of course, the usage must still be of value. There's no point in starting a coding agent on some obscure task you have no need for. Rather, you should start a coding agent on:
- A Linear issue you never felt you had time to sit down and do yourself
- A quick feature that was requested in the last sales call
- Some UI improvements, which today's coding agents handle easily

In a world with an abundance of resources, we should look to maximize our use of them
My main point here is that the threshold to perform tasks has decreased significantly since the release of LLMs. Previously, when you got a bug report, you had to sit down for two hours in deep focus, thinking about how to solve that bug.
However, today that's no longer the case. Instead, you can go into Cursor, paste in the bug report, and ask Claude Sonnet 4.5 to attempt to fix it. You can then come back 10 minutes later, verify whether the problem is fixed, and create the pull request.
How many tokens can you spend while still doing something useful with them?
How to scale LLM usage
I've talked about why you should scale LLM usage by running more coding agents, deep research agents, and other AI agents. However, it can be hard to imagine exactly which LLM tasks you should fire off. Thus, in this section, I'll discuss specific agents you can fire off to scale your LLM usage.
Parallel coding agents
Parallel coding agents are one of the simplest ways to scale LLM usage for any programmer. Instead of only working on one problem at a time, you start two or more agents at the same time, either using Cursor agents, Claude Code, or another agentic coding tool. Git worktrees typically make this very easy to do, as in the sketch below.
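To make this concrete, here's a minimal sketch of the worktree-plus-agents setup, assuming the Claude Code CLI and its headless `-p` (print) mode; the repo path, branch names, and task prompts are hypothetical:

```python
# Minimal sketch (hypothetical paths/tasks): run several Claude Code agents
# in parallel, each in its own git worktree so they can't clobber each other.
import subprocess
from pathlib import Path

REPO = Path("~/work/my-repo").expanduser()  # hypothetical repo path

TASKS = {  # hypothetical branch -> prompt pairs
    "fix-login-bug": "Investigate and fix the login redirect bug from LIN-142.",
    "ui-polish": "Improve spacing and empty states on the settings page.",
}

procs = []
for branch, prompt in TASKS.items():
    worktree = REPO.parent / f"{REPO.name}-{branch}"
    # Create an isolated worktree on a fresh branch for this agent.
    subprocess.run(
        ["git", "-C", str(REPO), "worktree", "add", str(worktree), "-b", branch],
        check=True,
    )
    # Launch a headless Claude Code agent in that directory; don't block.
    # (Depending on your settings, you may need extra permission flags.)
    procs.append(subprocess.Popen(["claude", "-p", prompt], cwd=worktree))

for p in procs:  # come back after lunch and collect the results
    p.wait()
```

Because each agent works on its own branch in its own directory, their changes never collide, and you can review each result as a separate pull request.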
For example, I typically have one main task or project that I'm working on, where I'm sitting in Cursor and programming. However, sometimes a bug report comes in, and I automatically route it to Claude Code to have it search for why the problem is happening and fix it if possible. Sometimes this works out of the box; sometimes I have to help it a bit.
Still, the cost of starting this bug-fixing agent is super low (I can literally just copy the Linear issue into Cursor, which can read the issue using the Linear MCP). Similarly, I also have a script automatically researching relevant prospects, which runs in the background; a sketch of what such a script can look like follows.
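My script is specific to my setup, but roughly, a background research loop can look like this minimal sketch. It assumes the Anthropic Python SDK with an API key in the environment; the prospect names, model id, and output file are made up for illustration, and a real version would likely add web-search tools rather than rely on the model's own knowledge:

```python
# Hypothetical sketch of a background prospect-research loop, using the
# Anthropic Python SDK (expects ANTHROPIC_API_KEY in the environment).
import json

import anthropic

client = anthropic.Anthropic()

PROSPECTS = ["Acme Corp", "Globex", "Initech"]  # hypothetical prospects

results = {}
for name in PROSPECTS:
    response = client.messages.create(
        model="claude-sonnet-4-5",  # check the model id your account exposes
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                f"Summarize what {name} does, its likely pain points, and "
                "whether it fits an ICP of mid-size SaaS companies."
            ),
        }],
    )
    results[name] = response.content[0].text

# Dump the findings so they're waiting for you when you get back.
with open("prospect_research.json", "w") as f:
    json.dump(results, f, indent=2)
```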
Deep research
Deep research is a feature you can use with any of the frontier model providers, like Google Gemini, OpenAI ChatGPT, and Anthropic's Claude. I prefer Gemini 3 deep research, though there are many other solid deep research tools out there.
Whenever I'm interested in learning more about a topic, finding information, or anything similar, I fire off a deep research agent with Gemini.
For example, I was interested in finding some prospects given a specific ICP (ideal customer profile). I quickly pasted the ICP information into Gemini, gave it some contextual information, and had it start researching, so that it could run while I was working on my main programming project.
After 20 minutes, I had a brief report from Gemini, which turned out to contain loads of useful information.
Creating workflows with n8n
Another way to scale LLM usage is to create workflows with n8n or any similar workflow-building tool. With n8n, you can build specific workflows that, for example, read Slack messages and perform some action based on them.
You could, for instance, have a workflow that reads a bug report channel on Slack and automatically starts a Claude Code agent for a given bug report, as in the sketch below. Or you could create another workflow that aggregates information from a number of different sources and presents it to you in an easily readable format. There are essentially limitless opportunities with workflow-building tools.
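n8n workflows are built visually, so rather than n8n JSON, here's the equivalent logic as a minimal Python sketch, assuming Slack's Bolt SDK in socket mode and the Claude Code CLI; the "bug:" trigger convention, tokens, and repo path are hypothetical:

```python
# Hypothetical sketch: a Slack bot (Bolt SDK, socket mode) that fires off a
# headless Claude Code agent whenever a message containing "bug:" appears.
import os
import subprocess

from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])


@app.message("bug:")  # matches messages containing "bug:"
def handle_bug_report(message, say):
    report = message["text"]
    # Start a coding agent on the repo for this bug report; don't block.
    subprocess.Popen(
        ["claude", "-p", f"Investigate and, if possible, fix this bug: {report}"],
        cwd="/path/to/repo",  # hypothetical repo location
    )
    say("Started a coding agent on this bug report.")


if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```

The same shape works for the aggregation idea: swap the trigger for a schedule and the agent launch for a summarization call.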
More
There are many other techniques you can use to scale your LLM usage. I've only listed the first few that come to mind from my own work with LLMs. I recommend always keeping in mind what you can automate using AI, and how you can leverage it to become more effective. How to scale LLM usage will vary widely across companies, job titles, and many other factors.
Conclusion
In this article, I've discussed how to scale your LLM usage to become a more effective engineer. I argue that we've seen scaling work incredibly well in the past, and it's highly likely we'll see increasingly powerful results by scaling our own usage of LLMs. This could be firing off more coding agents in parallel, or running deep research agents while eating lunch. Essentially, I believe that by increasing our LLM usage, we can become increasingly productive.
👉 My free eBook and Webinar:
📚 Get my free Vision Language Models ebook
💻 My webinar on Vision Language Models
👉 Find me on socials:
📩 Subscribe to my newsletter
🧑‍💻 Get in touch
✍️ Medium