newsaiworld

From Data to Stories: Code Agents for KPI Narratives

by Admin
May 29, 2025
in Artificial Intelligence


We often need to investigate what's happening with KPIs: whether we're reacting to anomalies on our dashboards or just doing a routine numbers update. Based on my years of experience as a KPI analyst, I would estimate that more than 80% of these tasks are fairly standard and can be solved simply by following a straightforward checklist.

Here's a high-level plan for investigating a KPI change (you can find more details in the article "Anomaly Root Cause Analysis 101"):

  • Estimate the top-line change in the metric to understand the magnitude of the shift.
  • Check data quality to make sure that the numbers are accurate and reliable.
  • Gather context about internal and external events that might have influenced the change.
  • Slice and dice the metric to identify which segments are contributing to the shift.
  • Consolidate your findings in an executive summary that includes hypotheses and estimates of their impact on the main KPI.

Since we have a clear plan to execute, such tasks can potentially be automated using AI agents. The code agents we recently discussed could be a good fit here, as their ability to write and execute code helps them analyse data efficiently, with minimal back-and-forth. So, let's try building such an agent using the HuggingFace smolagents framework.

While working on our task, we will discuss more advanced features of the smolagents framework:

  • Techniques for tweaking all sorts of prompts to ensure the desired behaviour.
  • Building a multi-agent system that can explain KPI changes and link them to root causes.
  • Adding reflection to the flow with supplementary planning steps.

MVP for explaining KPI changes

As usual, we will take an iterative approach and start with a simple MVP, focusing on the slicing-and-dicing step of the analysis. We will analyse the changes of a simple metric (revenue) split by one dimension (country). We will use the dataset from my previous article, "Making sense of KPI changes".

Let's load the data first.

import pandas as pd

raw_df = pd.read_csv('absolute_metrics_example.csv', sep='\t')
df = (
    raw_df.groupby('country')[['revenue_before', 'revenue_after_scenario_2']].sum()
    .sort_values('revenue_before', ascending=False)
    .rename(columns={'revenue_after_scenario_2': 'after',
                     'revenue_before': 'before'})
)
Image by author

Next, let's initialise the model. I've chosen OpenAI's GPT-4o-mini as my preferred option for simple tasks. However, the smolagents framework supports all kinds of models, so you can use whichever model you prefer. Then, we just need to create an agent and give it the task and the dataset.

from smolagents import CodeAgent, LiteLLMModel

model = LiteLLMModel(model_id="openai/gpt-4o-mini",
  api_key=config['OPENAI_API_KEY'])

agent = CodeAgent(
    model=model, tools=[], max_steps=10,
    additional_authorized_imports=["pandas", "numpy", "matplotlib.*",
      "plotly.*"], verbosity_level=1
)

task = """
Here is a dataframe showing revenue by segment, comparing values
before and after.
Could you please help me understand the changes? Specifically:
1. Estimate how the total revenue and the revenue for each segment
have changed, both in absolute terms and as a percentage.
2. Calculate the contribution of each segment to the total
change in revenue.

Please round all floating-point numbers in the output
to two decimal places.
"""

agent.run(
    task,
    additional_args={"data": df},
)

The agent returned quite a plausible result. We got detailed statistics on the metric changes in each segment and their impact on the top-line KPI.

{'total_before': 1731985.21, 'total_after':
1599065.55, 'total_change': -132919.66, 'segment_changes':
{'absolute_change': {'other': 4233.09, 'UK': -4376.25, 'France':
-132847.57, 'Germany': -690.99, 'Italy': 979.15, 'Spain':
-217.09}, 'percentage_change': {'other': 0.67, 'UK': -0.91,
'France': -55.19, 'Germany': -0.43, 'Italy': 0.81, 'Spain':
-0.23}, 'contribution_to_change': {'other': -3.18, 'UK': 3.29,
'France': 99.95, 'Germany': 0.52, 'Italy': -0.74, 'Spain': 0.16}}}

Let's take a look at the code generated by the agent. It's fine, but there's one potential issue. The LLM recreated the dataframe based on the input data instead of referencing it directly. This approach is not ideal (especially when working with big datasets), as it can lead to errors and higher token usage.

import pandas as pd

# Creating the DataFrame from the provided data
data = {
    'before': [632767.39, 481409.27, 240704.63, 160469.75,
               120352.31, 96281.86],
    'after': [637000.48, 477033.02, 107857.06, 159778.76,
              121331.46, 96064.77]
}
index = ['other', 'UK', 'France', 'Germany', 'Italy', 'Spain']
df = pd.DataFrame(data, index=index)

# Calculating total revenue before and after
total_before = df['before'].sum()
total_after = df['after'].sum()

# Calculating absolute and percentage change for each segment
df['absolute_change'] = df['after'] - df['before']
df['percentage_change'] = (df['absolute_change'] /
                           df['before']) * 100

# Calculating total revenue change
total_change = total_after - total_before

# Calculating the contribution of each segment to the total change
df['contribution_to_change'] = (df['absolute_change'] /
                                total_change) * 100

# Rounding results
df = df.round(2)

# Printing the calculated results
print("Total revenue before:", total_before)
print("Total revenue after:", total_after)
print("Total change in revenue:", total_change)
print(df)

It's worth fixing this problem before moving on to building a more complex system.

Tweaking prompts

Since the LLM is just following the instructions given to it, we will address this issue by tweaking the prompt.

Initially, I tried to make the task prompt more explicit, clearly instructing the LLM to use the provided variable.

task = """Here is a dataframe showing revenue by segment, comparing
values before and after. The data is stored in the df variable.
Please, use it and don't try to parse the data yourself.

Could you please help me understand the changes?
Specifically:
1. Estimate how the total revenue and the revenue for each segment
have changed, both in absolute terms and as a percentage.
2. Calculate the contribution of each segment to the total change in revenue.

Please round all floating-point numbers in the output to two decimal places.
"""

It didn't work. So, the next step is to examine the system prompt and see why it works this way.

print(agent.prompt_templates['system_prompt'])

#...
# Here are the rules you should always follow to solve your task:
# 1. Always provide a 'Thought:' sequence, and a 'Code:\n```py' sequence ending with '```' sequence, else you will fail.
# 2. Use only variables that you have defined!
# 3. Always use the right arguments for the tools. DO NOT pass the arguments as a dict as in 'answer = wiki({'query': "What is the place where James Bond lives?"})', but use the arguments directly as in 'answer = wiki(query="What is the place where James Bond lives?")'.
# 4. Take care to not chain too many sequential tool calls in the same code block, especially when the output format is unpredictable. For instance, a call to search has an unpredictable return format, so do not have another tool call that depends on its output in the same block: rather output results with print() to use them in the next block.
# 5. Call a tool only when needed, and never re-do a tool call that you previously did with the exact same parameters.
# 6. Don't name any new variable with the same name as a tool: for instance don't name a variable 'final_answer'.
# 7. Never create any notional variables in our code, as having these in your logs will derail you from the true variables.
# 8. You can use imports in your code, but only from the following list of modules: ['collections', 'datetime', 'itertools', 'math', 'numpy', 'pandas', 'queue', 'random', 're', 'stat', 'statistics', 'time', 'unicodedata']
# 9. The state persists between code executions: so if in one step you've created variables or imported modules, these will all persist.
# 10. Don't give up! You're in charge of solving the task, not providing directions to solve it.
# Now Begin!

At the end of the prompt, we have the instruction "# 2. Use only variables that you have defined!". This might be interpreted as a strict rule not to use any other variables. So, I changed it to "# 2. Use only variables that you have defined or ones provided in additional arguments! Never try to copy and parse additional arguments."

modified_system_prompt = agent.prompt_templates['system_prompt'] \
    .replace(
        '2. Use only variables that you have defined!',
        '2. Use only variables that you have defined or ones provided in additional arguments! Never try to copy and parse additional arguments.'
    )
agent.prompt_templates['system_prompt'] = modified_system_prompt

This change alone didn't help either. Then, I examined the task message.

╭─────────────────────────── New run ────────────────────────────╮
│                                                                │
│ Here is a pandas dataframe showing revenue by segment,         │
│ comparing values before and after.                             │
│ Could you please help me understand the changes?               │
│ Specifically:                                                  │
│ 1. Estimate how the total revenue and the revenue for each     │
│ segment have changed, both in absolute terms and as a          │
│ percentage.                                                    │
│ 2. Calculate the contribution of each segment to the total     │
│ change in revenue.                                             │
│                                                                │
│ Please round all floating-point numbers in the output to two   │
│ decimal places.                                                │
│                                                                │
│ You have been provided with these additional arguments, that   │
│ you can access using the keys as variables in your python      │
│ code:                                                          │
│ {'df':             before      after                           │
│ country                                                        │
│ other    632767.39  637000.48                                  │
│ UK       481409.27  477033.02                                  │
│ France   240704.63  107857.06                                  │
│ Germany  160469.75  159778.76                                  │
│ Italy    120352.31  121331.46                                  │
│ Spain     96281.86   96064.77}.                                │
│                                                                │
╰─ LiteLLMModel - openai/gpt-4o-mini ────────────────────────────╯

It includes an instruction related to the usage of additional arguments: "You have been provided with these additional arguments, that you can access using the keys as variables in your python code". We can try to make it more specific and clear. Unfortunately, this parameter is not exposed externally, so I had to locate it in the source code. To find the path of a Python package, we can use the following code.

import smolagents 
print(smolagents.__path__)

Then, I found the agents.py file and modified this line to include a more specific instruction.

self.task += f"""
You have been provided with these additional arguments available as variables
with names {",".join(additional_args.keys())}. You can access them directly.
Here is what they contain (just for informational purposes):
{str(additional_args)}."""

It was a bit of hacking, but that's often what happens with LLM frameworks. Don't forget to reload the package afterwards, and we're good to go. Let's check whether it works now.
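One way to pick up the edited source without restarting the session is importlib.reload. A minimal sketch of the mechanism, using the standard-library json module purely as a stand-in (reloading works the same for any imported module, smolagents included):

```python
# Sketch of the reload step: after editing a package's source on disk,
# importlib.reload re-executes the module without restarting the session.
# json is only a stand-in here; in our case the edited package is smolagents.
import importlib
import json

reloaded = importlib.reload(json)
print(reloaded is json)  # prints True: reload updates the module in place
```

Note that objects created before the reload (such as an already-constructed agent) keep referencing the old code, so re-create the agent after reloading.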

task = """
Here is a pandas dataframe showing revenue by segment, comparing values
before and after.

Your task will be to understand the changes to the revenue (after vs before)
in different segments and provide an executive summary.
Please, follow these steps:
1. Estimate how the total revenue and the revenue for each segment
have changed, both in absolute terms and as a percentage.
2. Calculate the contribution of each segment to the total change
in revenue.

Round all floating-point numbers in the output to two decimal places.
"""
agent.logger.level = 1 # Lower the verbosity level
agent.run(
    task,
    additional_args={"df": df},
)

Hooray! The problem has been fixed. The agent no longer copies the input variables and references the df variable directly instead. Here's the newly generated code.

import pandas as pd

# Calculate total revenue before and after
total_before = df['before'].sum()
total_after = df['after'].sum()
total_change = total_after - total_before
percentage_change_total = (total_change / total_before * 100) \
    if total_before != 0 else 0

# Round values
total_before = round(total_before, 2)
total_after = round(total_after, 2)
total_change = round(total_change, 2)
percentage_change_total = round(percentage_change_total, 2)

# Display results
print(f"Total Revenue Before: {total_before}")
print(f"Total Revenue After: {total_after}")
print(f"Total Change: {total_change}")
print(f"Percentage Change: {percentage_change_total}%")

Now, we're ready to move on to building the actual agent that will solve our task.

AI agent for KPI narratives

Finally, it's time to work on the AI agent that will help us explain KPI changes and create an executive summary.

Our agent will follow this plan for the root cause analysis:

  • Estimate the top-line KPI change.
  • Slice and dice the metric to understand which segments are driving the shift.
  • Look up events in the change log to see whether they can explain the metric changes.
  • Consolidate all the findings in a comprehensive executive summary.

After lots of experimentation and several tweaks, I've arrived at a promising result. Here are the key adjustments I made (we will discuss them in detail later):

  • I leveraged a multi-agent setup by adding another team member: the change log agent, which can access the change log and assist in explaining KPI changes.
  • I experimented with more powerful models like GPT-4o and GPT-4.1-mini, since GPT-4o-mini wasn't sufficient. Using stronger models not only improved the results, but also significantly reduced the number of steps: with GPT-4.1-mini I got the final result after just six steps, compared to 14–16 steps with GPT-4o-mini. This suggests that investing in more expensive models can be worthwhile for agentic workflows.
  • I provided the agent with a dedicated tool to analyse KPI changes for simple metrics. The tool performs all the calculations, while the LLM simply interprets the results. I discussed the approach to KPI change analysis in detail in my previous article.
  • I reformulated the prompt into a very clear step-by-step guide to help the agent stay on track.
  • I added planning steps that encourage the LLM agent to think through its approach first and revisit the plan every three iterations.
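The actual analysis tool comes from my previous article; as a rough illustration only, here is a simplified sketch of what a tool like calculate_simple_growth_metrics could compute. The field definitions below are my reading of the output fields referenced later in the prompt (difference, difference_rate, impact, segment_share_before, impact_norm), not the original implementation:

```python
import pandas as pd

def calculate_simple_growth_metrics(df: pd.DataFrame) -> pd.DataFrame:
    """Simplified sketch of the change-analysis tool (not the original).

    Expects 'before' and 'after' columns indexed by segment.
    """
    res = df[['before', 'after']].copy()
    total_diff = res['after'].sum() - res['before'].sum()
    res['difference'] = res['after'] - res['before']
    res['difference_rate'] = res['difference'] / res['before'] * 100
    # share of the total KPI change explained by this segment
    res['impact'] = res['difference'] / total_diff * 100
    res['segment_share_before'] = res['before'] / res['before'].sum() * 100
    # impact normed by the segment's share: |impact_norm| well above 1 means
    # the segment moved more than its size alone would suggest
    res['impact_norm'] = res['impact'] / res['segment_share_before']
    return res.round(2)
```

With this reading, impact sums to 100% across segments, and the 1.25 threshold used in the prompt below flags segments whose contribution is disproportionate to their size.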

After all the adjustments, I got the following summary from the agent, which is pretty good.

Executive Summary:
Between April 2025 and May 2025, total revenue declined sharply by
approximately 36.03%, falling from 1,731,985.21 to 1,107,924.43, a
drop of -624,060.78 in absolute terms.
This decline was primarily driven by significant revenue
reductions in the 'new' customer segments across multiple
countries, with declines of approximately 70% in these segments.

The most impacted segments include:
- other_new: before=233,958.42, after=72,666.89,
abs_change=-161,291.53, rel_change=-68.94%, share_before=13.51%,
impact=25.85, impact_norm=1.91
- UK_new: before=128,324.22, after=34,838.87,
abs_change=-93,485.35, rel_change=-72.85%, share_before=7.41%,
impact=14.98, impact_norm=2.02
- France_new: before=57,901.91, after=17,443.06,
abs_change=-40,458.85, rel_change=-69.87%, share_before=3.34%,
impact=6.48, impact_norm=1.94
- Germany_new: before=48,105.83, after=13,678.94,
abs_change=-34,426.89, rel_change=-71.56%, share_before=2.78%,
impact=5.52, impact_norm=1.99
- Italy_new: before=36,941.57, after=11,615.29,
abs_change=-25,326.28, rel_change=-68.56%, share_before=2.13%,
impact=4.06, impact_norm=1.91
- Spain_new: before=32,394.10, after=7,758.90,
abs_change=-24,635.20, rel_change=-76.05%, share_before=1.87%,
impact=3.95, impact_norm=2.11

Based on the analysis from the change log, the main reasons for this
trend are:
1. The introduction of new onboarding controls implemented on May
8, 2025, which reduced new customer acquisition by about 70% to
prevent fraud.
2. A postal service strike in the UK starting April 5, 2025,
causing order delivery delays and increased cancellations
impacting the UK new segment.
3. An increase in VAT by 2% in Spain as of April 22, 2025,
affecting new customer pricing and causing higher cart
abandonment.

These factors combined explain the outsized negative impacts
observed in the new customer segments and the overall revenue decline.

The LLM agent also generated a number of illustrative charts (they were part of our trend explanation tool). For example, this one shows the impacts across the combination of country and maturity.

Image by author

The results look really exciting. Now let's dive deeper into the actual implementation to understand how it works under the hood.

Multi-AI agent setup

We'll start with our change log agent. This agent will query the change log and try to identify potential root causes for the metric changes we observe. Since this agent doesn't need to do complex operations, we implement it as a ToolCallingAgent. Because this agent will be called by another agent, we need to define its name and description attributes.

@tool
def get_change_log(month: str) -> str:
    """
    Returns the change log (list of internal and external events that might have affected our KPIs) for the given month

    Args:
        month: month in the format %Y-%m-01, for example, 2025-04-01
    """
    return events_df[events_df.month == month].drop('month', axis=1).to_dict('records')

model = LiteLLMModel(model_id="openai/gpt-4.1-mini", api_key=config['OPENAI_API_KEY'])
change_log_agent = ToolCallingAgent(
    tools=[get_change_log],
    model=model,
    max_steps=10,
    name="change_log_agent",
    description="Helps you find the relevant information in the change log that can explain changes in metrics. Provide the agent with all the context to receive the information",
)
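The get_change_log tool assumes an events_df dataframe loaded beforehand. Its construction isn't shown here, but a hypothetical shape consistent with the tool's filtering (and with the events surfaced in the executive summary above) could be:

```python
import pandas as pd

# Hypothetical change log; the real events_df comes from the author's dataset.
events_df = pd.DataFrame([
    {'month': '2025-05-01',
     'event': 'New onboarding controls launched on May 8 to prevent fraud, '
              'reducing new customer acquisition by ~70%'},
    {'month': '2025-04-01',
     'event': 'Postal service strike in the UK starting April 5, causing '
              'delivery delays and increased cancellations'},
    {'month': '2025-04-01',
     'event': 'VAT increased by 2% in Spain as of April 22, affecting '
              'new customer pricing'},
])

# The tool filters by month and drops the month column:
records = events_df[events_df.month == '2025-04-01'] \
    .drop('month', axis=1).to_dict('records')
print(records)
```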

Since the manager agent will be calling this agent, we won't have any control over the query it receives. Therefore, I decided to modify the system prompt to include additional context.

change_log_system_prompt = '''
You are the master of the change log and you help others explain
the changes to metrics. When you receive a request, look up the list of events
that occurred in the given month, then filter the relevant information based
on the provided context and return it. Prioritise the most probable factors
affecting the KPI and limit your answer to them only.
'''

modified_system_prompt = change_log_agent.prompt_templates['system_prompt'] \
  + '\n\n\n' + change_log_system_prompt

change_log_agent.prompt_templates['system_prompt'] = modified_system_prompt

To enable the primary agent to delegate tasks to the change log agent, we simply need to specify it in the managed_agents field.

agent = CodeAgent(
    model=model,
    tools=[calculate_simple_growth_metrics],
    max_steps=20,
    additional_authorized_imports=["pandas", "numpy", "matplotlib.*", "plotly.*"],
    verbosity_level = 2,
    planning_interval = 3,
    managed_agents = [change_log_agent]
)

Let's see how it works. First, we can take a look at the new system prompt for the primary agent. It now includes information about the team members and instructions on how to ask them for help.

You can also give tasks to team members.
Calling a team member works the same as calling a tool: simply,
the only argument you can give in the call is 'task'.
Given that this team member is a real human, you should be very verbose
in your task, it should be a long string providing information
as detailed as necessary.
Here is a list of the team members that you can call:
```python
def change_log_agent("Your query goes here.") -> str:
    """Helps you find the relevant information in the change log that
    can explain changes in metrics. Provide the agent with all the context
    to receive the information"""
```

The execution log shows that the primary agent successfully delegated the task to the second agent and received the following response.

<-- Primary agent calling the change log agent -->

 ─ Executing parsed code: ───────────────────────────────────────
  # Query change_log_agent with the detailed task description prepared
  context_for_change_log = (
      "We analyzed changes in revenue from April 2025 to May
  2025. We found large decreases "
      "mainly in the 'new' maturity segments across countries:
  Spain_new, UK_new, Germany_new, France_new, Italy_new, and
  other_new. "
      "The revenue fell by around 70% in these segments, which
  have an outsized negative impact on total revenue change. "
      "We would like to know the 1-3 most probable reasons for this
  significant drop in revenue in the 'new' customer segments
  during this period."
  )

  explanation = change_log_agent(task=context_for_change_log)
  print("Change log agent explanation:")
  print(explanation)
 ────────────────────────────────────────────────────────────────

<-- Change log agent execution start -->
╭──────────────────── New run - change_log_agent ─────────────────────╮
│                                                                     │
│ You're a helpful agent named 'change_log_agent'.                    │
│ You have been submitted this task by your manager.                  │
│ ---                                                                 │
│ Task:                                                               │
│ We analyzed changes in revenue from April 2025 to May 2025.         │
│ We found large decreases mainly in the 'new' maturity segments      │
│ across countries: Spain_new, UK_new, Germany_new, France_new,       │
│ Italy_new, and other_new. The revenue fell by around 70% in these   │
│ segments, which have an outsized negative impact on total revenue   │
│ change. We would like to know the 1-3 most probable reasons for     │
│ this significant drop in revenue in the 'new' customer segments     │
│ during this period.                                                 │
│ ---                                                                 │
│ You're helping your manager solve a wider task: so make sure to     │
│ not provide a one-line answer, but give as much information as      │
│ possible to give them a clear understanding of the answer.          │
│                                                                     │
│ Your final_answer WILL HAVE to contain these parts:                 │
│ ### 1. Task outcome (short version):                                │
│ ### 2. Task outcome (extremely detailed version):                   │
│ ### 3. Additional context (if relevant):                            │
│                                                                     │
│ Put all these in your final_answer tool, everything that you do     │
│ not pass as an argument to final_answer will be lost.               │
│ And even if your task resolution is not successful, please return   │
│ as much context as possible, so that your manager can act upon      │
│ this feedback.                                                      │
│                                                                     │
╰─ LiteLLMModel - openai/gpt-4.1-mini ────────────────────────────────╯

Using the smolagents framework, we can easily set up a simple multi-agent system, where a manager agent coordinates and delegates tasks to team members with specific skills.

Iterating on the prompt

We started with a very high-level prompt outlining the goal and a vague direction, but unfortunately, it didn't work consistently. LLMs are not yet smart enough to figure out the approach on their own. So, I created a detailed step-by-step prompt describing the whole plan and including a detailed specification of the growth narrative tool we're using.

process = """
Here's a pandas dataframe displaying the income by section, evaluating values 
earlier than (April 2025) and after (Could 2025). 

You are a senior and skilled knowledge analyst. Your process might be to grasp 
the modifications to the income (after vs earlier than) in several segments 
and supply govt abstract.

## Observe the plan:
1. Begin by udentifying the record of dimensions (columns in dataframe that 
should not "earlier than" and "after")
2. There may be a number of dimensions within the dataframe. Begin high-level 
by taking a look at every dimension in isolation, mix all outcomes 
collectively into the record of segments analysed (do not forget to save lots of 
the dimension used for every section). 
Use the supplied instruments to analyse the modifications of metrics: {tools_description}. 
3. Analyse the outcomes from earlier step and preserve solely segments 
which have outsized impression on the KPI change (absolute of impact_norm 
is above 1.25). 
4. Examine what dimensions are current within the record of serious section, 
if there are a number of ones - execute the device on their combos 
and add to the analysed segments. If after including an extra dimension, 
all subsegments present shut different_rate and impact_norm values, 
then we will exclude this cut up (regardless that impact_norm is above 1.25), 
because it would not clarify something. 
5. Summarise the numerous modifications you recognized. 
6. Attempt to clarify what's going on with metrics by getting data 
from the change_log_agent. Please, present the agent the complete context 
(what segments have outsized impression, what's the relative change and 
what's the interval we're taking a look at). 
Summarise the knowledge from the changelog and point out 
solely 1-3 probably the most possible causes of the KPI change 
(ranging from probably the most impactful one).
7. Put collectively 3-5 sentences commentary what occurred high-level 
and why (primarily based on the information acquired from the change log). 
Then comply with it up with extra detailed abstract: 
- Prime-line complete worth of metric earlier than and after in human-readable format, 
absolute and relative change 
- Record of segments that meaningfully influenced the metric positively 
or negatively with the next numbers: values earlier than and after, 
absoltue and relative change, share of section earlier than, impression 
and normed impression. Order the segments by absolute worth 
of absolute change because it represents the ability of impression. 

## Instructions on the calculate_simple_growth_metrics tool:
By default, you should use the tool on the whole dataset, not a segment, 
since it gives you the full information about the changes.

Here is the guidance on how to interpret the output of the tool:
- difference - the absolute difference between the after and before values
- difference_rate - the relative difference (if it is close for 
  all segments then the dimension is not informative)
- impact - the share of the KPI difference explained by this segment 
- segment_share_before - the share of the segment before
- impact_norm - impact normed on the segment's share; we are interested 
  in very high or very low numbers since they show an outsized impact; 
  rule of thumb - impact_norm between -1.25 and 1.25 is not informative 

If you're using the tool on a subset of the dataframe, keep in mind 
that the results are not applicable to the full dataset, so avoid doing this 
unless you explicitly want to look at a subset (i.e. a change in France). 
If you decided to use the tool on a specific segment 
and share these results in the executive summary, explicitly outline 
that we are diving deeper into this specific segment.
""".format(tools_description = tools_description)
agent.run(
    process,
    additional_args={"df": df},
)
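For reference, here is a minimal sketch of what a tool like `calculate_simple_growth_metrics` could look like, reconstructed from the output fields described above (the real implementation is not shown in this section, so the exact signature and details are assumptions):

```python
# Hypothetical sketch of the growth-metrics tool. It produces the fields
# described in the prompt: difference, difference_rate, impact,
# segment_share_before and impact_norm.
def calculate_simple_growth_metrics(rows, dimension):
    """rows: list of dicts with keys [dimension, 'before', 'after']."""
    total_before = sum(r["before"] for r in rows)
    total_after = sum(r["after"] for r in rows)
    total_diff = total_after - total_before

    # aggregate 'before' and 'after' values by segment
    segments = {}
    for r in rows:
        seg = segments.setdefault(r[dimension], {"before": 0.0, "after": 0.0})
        seg["before"] += r["before"]
        seg["after"] += r["after"]

    results = {}
    for name, seg in segments.items():
        diff = seg["after"] - seg["before"]
        share_before = seg["before"] / total_before
        impact = diff / total_diff if total_diff else 0.0
        results[name] = {
            "difference": diff,
            "difference_rate": diff / seg["before"] if seg["before"] else float("nan"),
            "impact": impact,
            "segment_share_before": share_before,
            # impact normalised by the segment's share: values far outside
            # [-1.25, 1.25] flag segments with outsized influence
            "impact_norm": impact / share_before if share_before else float("nan"),
        }
    return results
```

The key idea is the normalisation: a segment holding 50% of revenue is expected to explain roughly 50% of the change, so `impact_norm` near 1 is unremarkable, while values far from 1 point at segments behaving differently from the rest.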

Explaining everything in such detail was quite a daunting task, but it's necessary if we want consistent results.
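The filtering rule from steps 3-4 of the prompt is mechanical enough to sketch in a few lines (the 1.25 threshold comes from the prompt above; the helper name is made up for illustration):

```python
# Hypothetical helper implementing step 3 of the checklist:
# keep only segments whose normalised impact is outsized.
THRESHOLD = 1.25

def significant_segments(metrics_by_segment):
    """metrics_by_segment: dict mapping segment name -> metrics dict
    that contains an 'impact_norm' value."""
    return {
        name: m
        for name, m in metrics_by_segment.items()
        if abs(m["impact_norm"]) > THRESHOLD
    }
```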

Planning steps

The smolagents framework lets you add planning steps to your agentic flow. This encourages the agent to start with a plan and update it after the specified number of steps. From my experience, this reflection is very helpful for maintaining focus on the problem and adjusting actions to stay aligned with the initial plan and goal. I definitely recommend using it in cases where complex reasoning is required.

Setting it up is as easy as specifying planning_interval = 3 for the code agent.

agent = CodeAgent(
    model=model,
    tools=[calculate_simple_growth_metrics],
    max_steps=20,
    additional_authorized_imports=["pandas", "numpy", "matplotlib.*", "plotly.*"],
    verbosity_level=2,
    planning_interval=3,
    managed_agents=[change_log_agent]
)

That’s it. Then the agent produces reflections, starting with thinking through the initial plan.

────────────────────────── Initial plan ──────────────────────────
Here are the facts I know and the plan of action that I will 
follow to solve the task:
```
## 1. Facts survey

### 1.1. Facts given in the task
- We have a pandas dataframe `df` showing revenue by segment, for 
two time points: before (April 2025) and after (May 2025).
- The dataframe columns include:
  - Dimensions: `country`, `maturity`, `country_maturity`, 
`country_maturity_combined`
  - Metrics: `before` (revenue in April 2025), `after` (revenue in
May 2025)
- The task is to understand the changes in revenue (after vs 
before) across different segments.
- Key instructions and tools provided:
  - Identify all dimensions except before/after for segmentation.
  - Analyze each dimension independently using 
`calculate_simple_growth_metrics`.
  - Filter segments with an outsized impact on the KPI change (absolute 
normed impact > 1.25).
  - Examine combinations of dimensions if multiple dimensions have
significant segments.
  - Summarize significant changes and engage `change_log_agent` 
for contextual reasons.
  - Provide a final executive summary including top-line changes 
and segment-level detailed impacts.
- The dataset snippet shows segments combining countries (`France`, 
`UK`, `Germany`, `Italy`, `Spain`, `other`) and maturity status 
(`new`, `existing`).
- The combined segments are uniquely identified in columns 
`country_maturity` and `country_maturity_combined`.

### 1.2. Facts to look up
- Definitions or descriptions of the segments if unclear (e.g., 
what defines `new` vs `existing` maturity).
  - Likely not mandatory to proceed, but could be requested from 
business documentation or the change log.
- Additional details from the change log (accessible via 
`change_log_agent`) that could provide probable causes for revenue
changes.
- Confirmation on handling combined dimension splits - how exactly
`country_maturity_combined` is formed and should be interpreted in
combined dimension analysis.
- Data dictionary or description of metrics if any additional KPI 
besides revenue is relevant (unlikely given the data).
- Dates confirm the period of analysis: April 2025 (before) and May 
2025 (after). No need to look these up since given.

### 1.3. Facts to derive
- Identify all dimension columns available for segmentation:
  - By excluding 'before' and 'after', likely candidates are 
`country`, `maturity`, `country_maturity`, and 
`country_maturity_combined`.
- For each dimension, calculate change metrics using the given 
tool:
  - Absolute and relative difference in revenue per segment.
  - Impact, segment share before, and normed impact for each 
segment.
- Identify which segments have an outsized impact on the KPI change 
(|impact_norm| > 1.25).
- If multiple dimensions have significant segments, combine 
dimensions (e.g., country + maturity) and reanalyze.
- Determine whether combined dimension splits provide meaningful 
differentiation or not, based on difference rate and impact_norm 
consistency.
- Summarize the direction and magnitude of KPI changes at the top-line 
level (aggregate revenue before and after).
- Identify the top segments driving positive and negative changes 
based on ordered absolute absolute_change.
- Gather contextual insights from the change log agent regarding 
probable causes tied to significant segments and the May 2025 vs 
April 2025 period.

## 2. Plan

1. Identify all dimension columns present in the dataframe by 
listing columns and excluding 'before' and 'after'.
2. For each dimension identified (`country`, `maturity`, 
`country_maturity`, `country_maturity_combined`):
   - Use `calculate_simple_growth_metrics` on the full dataframe 
grouped by that dimension.
   - Extract segments with calculated metrics including 
impact_norm.
3. Aggregate results from all single-dimension analyses and filter
segments where |impact_norm| > 1.25.
4. Determine which dimensions these significant segments belong 
to.
5. If more than one dimension is represented in these significant 
segments, analyze the combined dimension formed by those 
dimensions (for example, the combination of `country` and `maturity`, 
or use existing combined dimension columns).
6. Repeat the metric calculation using 
`calculate_simple_growth_metrics` on the combined dimension.
7. Examine whether the combined dimension splits create meaningful 
differentiation - if all subsegments show close difference_rate 
and impact_norm, exclude the split.
8. Prepare a summary of significant changes:
   - Top-line KPIs before and after (absolute and relative 
changes).
   - List of impactful segments sorted by absolute absolute_change
that influenced overall revenue.
9. Provide the list of segments with details (values before, 
after, absolute and relative change, share before, impact, 
impact_norm).
10. Using this summarized information, query `change_log_agent` 
with full context:
    - Include significant segments, their relative changes, and 
periods (April to May 2025).
11. Process the agent's response to identify the 1-3 main probable 
causes of the KPI changes.
12. Draft the executive summary commentary:
    - High-level overview of what happened and why, based on log 
information.
    - Detailed summary including top-line changes and 
segment-level metric impacts.
13. Deliver the final answer using the `final_answer` tool, containing 
the above executive summary and data-driven insights.
```

Then, after every three steps, the agent revisits and updates the plan. 

────────────────────────── Updated plan ──────────────────────────
I still need to solve the task I was given:
```

Here is a pandas dataframe showing the revenue by segment, 
comparing values before (April 2025) and after (May 2025). 

You are a senior and experienced data analyst. Your task will be 
to understand the changes to the revenue (after vs before) in 
different segments 
and provide an executive summary.

<... repeating the full initial task ...>
```

Here are the facts I know and my new/updated plan of action to 
solve the task:
```
## 1. Updated facts survey

### 1.1. Facts given in the task
- We have a pandas dataframe with revenue by segment, showing 
values "before" (April 2025) and "after" (May 2025).
- Columns in the dataframe include several dimensions and the 
"before" and "after" revenue values.
- The goal is to understand revenue changes by segment and provide
an executive summary.
- Guidance and rules about how to analyze and interpret results 
from the `calculate_simple_growth_metrics` tool are provided.
- The dataframe contains columns: country, maturity, 
country_maturity, country_maturity_combined, before, after.

### 1.2. Facts that we have learned
- The dimensions to analyze are: country, maturity, 
country_maturity, and country_maturity_combined.
- Analyzed revenue changes by dimension.
- Only the "new" maturity segment has a significant impact 
(impact_norm=1.96 > 1.25), with a large negative revenue change (~
-70.6%).
- In the combined dimension "country_maturity," the "new" segments 
across countries (Spain_new, UK_new, Germany_new, France_new, 
Italy_new, other_new) all have outsized negative impacts, with 
impact_norm values all above 1.9.
- The mature/existing segments in these countries have smaller 
normed impacts, below 1.25.
- Country-level and maturity-level dimensions alone are 
less revealing than the combined country+maturity 
dimension, which highlights the new segments as strongly impactful.
- Total revenue dropped significantly from before to after, mostly
driven by new segments shrinking drastically.

### 1.3. Facts still to look up
- Whether splitting the data by additional dimensions beyond 
country and maturity (e.g., country_maturity_combined) explains 
further heterogeneous impacts, or whether the pattern is uniform.
- Explanation/context from the change log about what caused the major 
drop, predominantly in the new segments across all countries.
- Confirming whether any country within the new segment behaved 
differently or mitigated losses.

### 1.4. Facts still to derive
- A concise executive summary describing the top-level revenue 
change and identifying which segments explain the declines.
- An explanation involving the change log agent, with a summary of 
probable causes for these outsized reductions in revenue in the 
new segments across countries for April-May 2025.

## 2. Plan

### 2.1. Verify whether adding the additional dimension 
'country_maturity_combined' splits the impactful "new" segments 
into subsegments with significantly different impacts, or whether the 
change rates and normed impacts are relatively homogeneous. If 
homogeneous, we do not gain deeper insight and can disregard 
further splitting.

### 2.2. Summarize all significant segments identified with 
outsized impact_norm ≥ 1.25, including their before and after 
values, absolute and relative changes, segment shares before, 
impact, and normalized impact, ordered by the absolute value of the 
change.

### 2.3. Query the change_log_agent with the full context: 
the significant segments are the new country_maturity segments with 
large negative changes (~ -70%), the timeframe is April 2025 to May 2025,
and request the top 1-3 most probable causes for the KPI revenue drop 
in these segments.

### 2.4. Based on the change log agent's response, synthesize a 
3-5 sentence high-level commentary explaining what happened 
broadly and why.

### 2.5. Draft a detailed executive summary including:
- Total revenue before and after in human-readable format, with 
absolute and relative change.
- A list of significant segments driving these changes, in order 
of absolute impact, with detailed numbers (before, after, absolute
and relative change, segment share before, impact, normed impact).

### 2.6. Use the `final_answer` tool to provide the finalized 
executive summary report.
```

I really like how the agent is encouraged to reiterate on the initial task and stay focused on the main problem. Regular reflection like this is helpful in real life as well, as teams often get bogged down in the process and lose sight of the why behind what they're doing. It's quite cool to see managerial best practices being integrated into agentic frameworks.

That’s it! We’ve built a code agent capable of analysing KPI changes for simple metrics and explored all the key nuances of the process.

You can find the complete code and execution logs on GitHub.

Summary

We’ve experimented a lot with code agents and are now ready to draw conclusions. For our experiments, we used the HuggingFace smolagents framework for code agents, a very handy toolset that provides: 

  • easy integration with different LLMs (from local models via Ollama to public providers like Anthropic or OpenAI),
  • excellent logging that makes it easy to understand the whole thought process of the agent and debug issues,
  • the ability to build complex systems leveraging multi-agent setups or planning features without much effort.

While smolagents is currently my favourite agentic framework, it has its limitations: 

  • It can lack flexibility at times. For example, I had to modify the prompt directly in the source code to get the behaviour I wanted.
  • It only supports a hierarchical multi-agent setup (where one manager can delegate tasks to other agents), but doesn’t cover sequential workflows or consensual decision-making processes.
  • There’s no support for long-term memory out of the box, meaning you’re starting from scratch with every task.

Thank you a lot for reading this article. I hope it was insightful for you.

Reference

This article is inspired by the “Building Code Agents with Hugging Face smolagents” short course by DeepLearning.AI.
