Efficient Metric Collection in PyTorch: Avoiding the Performance Pitfalls of TorchMetrics

February 7, 2025

Metric collection is an essential part of every machine learning project, enabling us to track model performance and monitor training progress. Ideally, metrics should be collected and computed without introducing any additional overhead to the training process. However, just like other components of the training loop, inefficient metric computation can introduce unnecessary overhead, increase training-step times, and inflate training costs.

This post is the seventh in our series on performance profiling and optimization in PyTorch. The series has aimed to emphasize the critical role of performance analysis and optimization in machine learning development. Each post has focused on different stages of the training pipeline, demonstrating practical tools and techniques for analyzing and boosting resource utilization and runtime efficiency.

In this installment, we focus on metric collection. We will demonstrate how a naïve implementation of metric collection can negatively impact runtime performance and explore tools and techniques for its analysis and optimization.

To implement our metric collection, we will use TorchMetrics, a popular library designed to simplify and standardize metric computation in PyTorch. Our goals will be to:

  1. Demonstrate the runtime overhead caused by a naïve implementation of metric collection.
  2. Use PyTorch Profiler to pinpoint performance bottlenecks introduced by metric computation.
  3. Demonstrate optimization techniques to reduce metric collection overhead.

To facilitate our discussion, we will define a toy PyTorch model and assess how metric collection can impact its runtime performance. We will run our experiments on an NVIDIA A40 GPU, with a PyTorch 2.5.1 Docker image and TorchMetrics 1.6.1.

It is important to note that metric collection behavior can vary greatly depending on the hardware, runtime environment, and model architecture. The code snippets provided in this post are intended for demonstration purposes only. Please do not interpret our mention of any tool or technique as an endorsement of its use.

Toy ResNet Model

In the code block below, we define a simple image classification model with a ResNet-18 backbone.

import time
import torch
import torchvision

device = "cuda"

model = torchvision.models.resnet18().to(device)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters())

We define a synthetic dataset which we will use to train our toy model.

from torch.utils.data import Dataset, DataLoader

# A dataset with random images and labels
class FakeDataset(Dataset):
    def __len__(self):
        return 100000000

    def __getitem__(self, index):
        rand_image = torch.randn([3, 224, 224], dtype=torch.float32)
        label = torch.tensor(data=index % 1000, dtype=torch.int64)
        return rand_image, label

train_set = FakeDataset()

batch_size = 128
num_workers = 12

train_loader = DataLoader(
    dataset=train_set,
    batch_size=batch_size,
    num_workers=num_workers,
    pin_memory=True
)

We define a collection of standard metrics from TorchMetrics, along with a control flag to enable or disable metric calculation.

from torchmetrics import (
    MeanMetric,
    Accuracy,
    Precision,
    Recall,
    F1Score,
)

# toggle to enable/disable metric collection
capture_metrics = False

if capture_metrics:
    metrics = {
        "avg_loss": MeanMetric(),
        "accuracy": Accuracy(task="multiclass", num_classes=1000),
        "precision": Precision(task="multiclass", num_classes=1000),
        "recall": Recall(task="multiclass", num_classes=1000),
        "f1_score": F1Score(task="multiclass", num_classes=1000),
    }

    # Move all metrics to the device
    metrics = {name: metric.to(device) for name, metric in metrics.items()}

Next, we define a PyTorch Profiler instance, along with a control flag that allows us to enable or disable profiling. For a detailed tutorial on using PyTorch Profiler, please refer to the first post in this series.

from torch import profiler

# toggle to enable/disable profiling
enable_profiler = True

if enable_profiler:
    prof = profiler.profile(
        schedule=profiler.schedule(wait=10, warmup=2, active=3, repeat=1),
        on_trace_ready=profiler.tensorboard_trace_handler("./logs/"),
        profile_memory=True,
        with_stack=True
    )
    prof.start()

Finally, we define a standard training step:

model.train()

t0 = time.perf_counter()
total_time = 0
count = 0

for idx, (data, target) in enumerate(train_loader):
    data = data.to(device, non_blocking=True)
    target = target.to(device, non_blocking=True)
    optimizer.zero_grad()
    output = model(data)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()

    if capture_metrics:
        # update metrics
        metrics["avg_loss"].update(loss)
        for name, metric in metrics.items():
            if name != "avg_loss":
                metric.update(output, target)

        if (idx + 1) % 100 == 0:
            # compute metrics
            metric_results = {
                name: metric.compute().item()
                for name, metric in metrics.items()
            }
            # print metrics
            print(f"Step {idx + 1}: {metric_results}")
            # reset metrics
            for metric in metrics.values():
                metric.reset()

    elif (idx + 1) % 100 == 0:
        # print last loss value
        print(f"Step {idx + 1}: Loss = {loss.item():.4f}")

    batch_time = time.perf_counter() - t0
    t0 = time.perf_counter()
    if idx > 10:  # skip first steps
        total_time += batch_time
        count += 1

    if enable_profiler:
        prof.step()

    if idx > 200:
        break

if enable_profiler:
    prof.stop()

avg_time = total_time/count
print(f'Average step time: {avg_time}')
print(f'Throughput: {batch_size/avg_time:.2f} images/sec')

Metric Collection Overhead

To measure the impact of metric collection on training step time, we ran our training script both with and without metric calculation. The results are summarized in the following table.

The Overhead of Naive Metric Collection (by Author)

Our naïve metric collection resulted in a nearly 10% drop in runtime performance! While metric collection is essential for machine learning development, it usually involves relatively simple mathematical operations and hardly warrants such a significant overhead. What is going on?

Identifying Performance Issues with PyTorch Profiler

To better understand the source of the performance degradation, we reran the training script with the PyTorch Profiler enabled. The resultant trace is shown below:

Trace of Metric Collection Experiment (by Author)

The trace reveals recurring "cudaStreamSynchronize" operations that coincide with noticeable drops in GPU utilization. These kinds of "CPU-GPU sync" events were discussed in detail in part two of our series. In a typical training step, the CPU and GPU work in parallel: the CPU manages tasks like data transfers to the GPU and kernel loading, while the GPU executes the model on the input data and updates its weights. Ideally, we would like to minimize the points of synchronization between the CPU and GPU in order to maximize performance. Here, however, we can see that the metric collection has triggered a sync event by performing a CPU-to-GPU data copy. This requires the CPU to suspend its processing until the GPU catches up, which, in turn, causes the GPU to wait for the CPU to resume loading the subsequent kernel operations. The bottom line is that these synchronization points lead to inefficient utilization of both the CPU and GPU. Our metric collection implementation adds eight such synchronization events to each training step.
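For readers who want to see this behavior in isolation, the following minimal sketch (our own, not part of the training script above) profiles a loop that repeatedly copies a Python scalar onto the GPU, which is the kind of host-to-device copy the trace attributes the sync events to. Depending on the PyTorch version and hardware, the resulting profile will typically show a cudaStreamSynchronize accompanying each copy:

import torch
from torch import profiler

# profile repeated host-to-device copies of a Python scalar, analogous to what
# MeanMetric.update does with its default weight value
with profiler.profile(
    activities=[profiler.ProfilerActivity.CPU, profiler.ProfilerActivity.CUDA]
) as prof:
    total = torch.zeros(1, device="cuda")
    for _ in range(10):
        w = torch.as_tensor(1.0, dtype=torch.float32, device="cuda")  # H2D copy
        total += w

# inspect the captured events for cudaStreamSynchronize entries
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=20))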

A closer examination of the trace shows that the sync events are coming from the update call of the MeanMetric TorchMetric. For the experienced profiling expert, this may be sufficient to identify the root cause, but we will go a step further and use the torch.profiler.record_function utility to identify the exact offending line of code.

Profiling with record_function

To pinpoint the exact source of the sync event, we extended the MeanMetric class and overrode the update method using record_function context blocks. This approach allows us to profile the individual operations within the method and identify performance bottlenecks.

class ProfileMeanMetric(MeanMetric):
    def update(self, value, weight=1.0):
        # broadcast weight to value shape
        with profiler.record_function("process value"):
            if not isinstance(value, torch.Tensor):
                value = torch.as_tensor(value, dtype=self.dtype,
                                        device=self.device)
        with profiler.record_function("process weight"):
            if weight is not None and not isinstance(weight, torch.Tensor):
                weight = torch.as_tensor(weight, dtype=self.dtype,
                                         device=self.device)
        with profiler.record_function("broadcast weight"):
            weight = torch.broadcast_to(weight, value.shape)
        with profiler.record_function("cast_and_nan_check"):
            value, weight = self._cast_and_nan_check_input(value, weight)

        if value.numel() == 0:
            return

        with profiler.record_function("update value"):
            self.mean_value += (value * weight).sum()
        with profiler.record_function("update weight"):
            self.weight += weight.sum()

We then updated our avg_loss metric to use the newly created ProfileMeanMetric and reran the training script.

Trace of Metric Collection with record_function (by Author)

The updated trace reveals that the sync event originates from the following line:

weight = torch.as_tensor(weight, dtype=self.dtype, device=self.device)

This operation converts the default scalar value weight=1.0 into a PyTorch tensor and places it on the GPU. The sync event occurs because this action triggers a CPU-to-GPU data copy, which requires the CPU to wait for the GPU to process the copied value.

Optimization 1: Specify the Weight Value

Now that we have found the source of the problem, we can overcome it easily by specifying a weight value in our update call. This prevents the runtime from converting the default scalar weight=1.0 into a tensor on the GPU, avoiding the sync event:

# update metrics
if capture_metrics:
    metrics["avg_loss"].update(loss, weight=torch.ones_like(loss))

Rerunning the script after applying this change reveals that we have succeeded in eliminating the initial sync event… only to have uncovered a new one, this time coming from the _cast_and_nan_check_input function:

Trace of Metric Collection following Optimization 1 (by Author)

Profiling with record_function (Part 2)

To explore our new sync event, we extended our custom metric with additional profiling probes and reran our script.

class ProfileMeanMetric(MeanMetric):
    def update(self, value, weight=1.0):
        # broadcast weight to value shape
        with profiler.record_function("process value"):
            if not isinstance(value, torch.Tensor):
                value = torch.as_tensor(value, dtype=self.dtype,
                                        device=self.device)
        with profiler.record_function("process weight"):
            if weight is not None and not isinstance(weight, torch.Tensor):
                weight = torch.as_tensor(weight, dtype=self.dtype,
                                         device=self.device)
        with profiler.record_function("broadcast weight"):
            weight = torch.broadcast_to(weight, value.shape)
        with profiler.record_function("cast_and_nan_check"):
            value, weight = self._cast_and_nan_check_input(value, weight)

        if value.numel() == 0:
            return

        with profiler.record_function("update value"):
            self.mean_value += (value * weight).sum()
        with profiler.record_function("update weight"):
            self.weight += weight.sum()

    def _cast_and_nan_check_input(self, x, weight=None):
        """Convert input ``x`` to a tensor and check for NaNs."""
        with profiler.record_function("process x"):
            if not isinstance(x, torch.Tensor):
                x = torch.as_tensor(x, dtype=self.dtype,
                                    device=self.device)
        with profiler.record_function("process weight"):
            if weight is not None and not isinstance(weight, torch.Tensor):
                weight = torch.as_tensor(weight, dtype=self.dtype,
                                         device=self.device)
            nans = torch.isnan(x)
            if weight is not None:
                nans_weight = torch.isnan(weight)
            else:
                nans_weight = torch.zeros_like(nans).bool()
                weight = torch.ones_like(x)

        with profiler.record_function("any nans"):
            anynans = nans.any() or nans_weight.any()

        with profiler.record_function("process nans"):
            if anynans:
                if self.nan_strategy == "error":
                    raise RuntimeError("Encountered `nan` values in tensor")
                if self.nan_strategy in ("ignore", "warn"):
                    if self.nan_strategy == "warn":
                        print("Encountered `nan` values in tensor."
                              " Will be removed.")
                    x = x[~(nans | nans_weight)]
                    weight = weight[~(nans | nans_weight)]
                else:
                    if not isinstance(self.nan_strategy, float):
                        raise ValueError(f"`nan_strategy` shall be float"
                                         f" but you pass {self.nan_strategy}")
                    x[nans | nans_weight] = self.nan_strategy
                    weight[nans | nans_weight] = self.nan_strategy

        with profiler.record_function("return value"):
            retval = x.to(self.dtype), weight.to(self.dtype)
        return retval

The resultant trace is captured below:

Trace of Metric Collection with record_function, Part 2 (by Author)

The trace points directly to the offending line:

anynans = nans.any() or nans_weight.any()

This operation checks for NaN values in the input tensors, but it introduces a costly CPU-GPU synchronization event because the operation involves copying data from the GPU to the CPU.
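To see why this line forces a synchronization, note that Python's `or` operator must evaluate the truth value of its left operand, which requires copying the result of nans.any() from the GPU back to the host. The snippet below is our own illustration of the mechanism, not the fix applied in this post; a formulation such as torch.logical_or would merely defer the copy until the boolean is actually inspected on the CPU:

nans = torch.isnan(torch.randn(1000, device="cuda"))
nans_weight = torch.isnan(torch.randn(1000, device="cuda"))

# Python's `or` needs a host-side bool for the left operand, so the result of
# nans.any() is copied from the GPU to the CPU (a device-to-host sync):
anynans = nans.any() or nans_weight.any()

# Keeping the result on the GPU defers the copy; the sync is only paid when
# the boolean is eventually read on the host (e.g., in an `if` statement):
anynans_gpu = torch.logical_or(nans.any(), nans_weight.any())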

Upon closer inspection of the TorchMetrics BaseAggregator class, we find several options for handling NaN value updates, all of which pass through the offending line of code. However, for our use case, calculating the average loss metric, this check is unnecessary and does not justify the runtime performance penalty.
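For reference, the NaN handling behavior is controlled by the nan_strategy constructor argument of the aggregation metrics; a short sketch of the available options (all of them route through the check above):

m_warn   = MeanMetric(nan_strategy="warn")    # default: warn and drop NaN values
m_error  = MeanMetric(nan_strategy="error")   # raise a RuntimeError on NaN values
m_ignore = MeanMetric(nan_strategy="ignore")  # silently drop NaN values
m_fill   = MeanMetric(nan_strategy=0.0)       # replace NaN values with a constant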

Optimization 2: Disable NaN Value Checks

To eliminate the overhead, we propose disabling the NaN value checks by overriding the _cast_and_nan_check_input function. Instead of a static override, we implement a dynamic solution that can be applied flexibly to any descendant of the BaseAggregator class.

from torchmetrics.aggregation import BaseAggregator

def suppress_nan_check(MetricClass):
    assert issubclass(MetricClass, BaseAggregator), MetricClass
    class DisableNanCheck(MetricClass):
        def _cast_and_nan_check_input(self, x, weight=None):
            if not isinstance(x, torch.Tensor):
                x = torch.as_tensor(x, dtype=self.dtype,
                                    device=self.device)
            if weight is not None and not isinstance(weight, torch.Tensor):
                weight = torch.as_tensor(weight, dtype=self.dtype,
                                         device=self.device)
            if weight is None:
                weight = torch.ones_like(x)
            return x.to(self.dtype), weight.to(self.dtype)
    return DisableNanCheck

NoNanMeanMetric = suppress_nan_check(MeanMetric)

metrics["avg_loss"] = NoNanMeanMetric().to(device)

Post-Optimization Results: Success

After implementing the two optimizations, specifying the weight value and disabling the NaN checks, we find that the step time performance and the GPU utilization match those of our baseline experiment. In addition, the resultant PyTorch Profiler trace shows that all of the added "cudaStreamSynchronize" events that were associated with the metric collection have been eliminated. With a few small changes, we have reduced the cost of training by ~10% without any changes to the behavior of the metric collection.
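For clarity, here is a consolidated sketch of how the two fixes fit together; it assumes the suppress_nan_check utility and the training loop defined earlier in this post:

# apply both optimizations to the average-loss metric
NoNanMeanMetric = suppress_nan_check(MeanMetric)
metrics["avg_loss"] = NoNanMeanMetric().to(device)

# inside the training step: pass an explicit weight tensor so that update()
# never needs to materialize the default scalar weight=1.0 on the GPU
metrics["avg_loss"].update(loss, weight=torch.ones_like(loss))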

In the next section, we will explore an additional metric collection optimization.

Example 2: Optimizing Metric Device Placement

In the previous section, the metric values resided on the GPU, making it logical to store and compute the metrics on the GPU. However, in scenarios where the values we wish to aggregate reside on the CPU, it may be preferable to store the metrics on the CPU to avoid unnecessary device transfers.

In the code block below, we modify our script to calculate the average step time using a MeanMetric on the CPU. This change has no impact on the runtime performance of our training step:

avg_time = NoNanMeanMetric()
t0 = time.perf_counter()

for idx, (data, target) in enumerate(train_loader):
    # move data to device
    data = data.to(device, non_blocking=True)
    target = target.to(device, non_blocking=True)

    optimizer.zero_grad()
    output = model(data)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()

    if capture_metrics:
        metrics["avg_loss"].update(loss)
        for name, metric in metrics.items():
            if name != "avg_loss":
                metric.update(output, target)

        if (idx + 1) % 100 == 0:
            # compute metrics
            metric_results = {
                name: metric.compute().item()
                for name, metric in metrics.items()
            }
            # print metrics
            print(f"Step {idx + 1}: {metric_results}")
            # reset metrics
            for metric in metrics.values():
                metric.reset()

    elif (idx + 1) % 100 == 0:
        # print last loss value
        print(f"Step {idx + 1}: Loss = {loss.item():.4f}")

    batch_time = time.perf_counter() - t0
    t0 = time.perf_counter()
    if idx > 10:  # skip first steps
        avg_time.update(batch_time)

    if enable_profiler:
        prof.step()

    if idx > 200:
        break

if enable_profiler:
    prof.stop()

avg_time = avg_time.compute().item()
print(f'Average step time: {avg_time}')
print(f'Throughput: {batch_size/avg_time:.2f} images/sec')

The problem arises when we attempt to extend our script to support distributed training. To demonstrate the problem, we modified our model definition to use DistributedDataParallel (DDP):

# toggle to enable/disable ddp
use_ddp = True

if use_ddp:
    import os
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("nccl", rank=0, world_size=1)
    torch.cuda.set_device(0)
    model = DDP(torchvision.models.resnet18().to(device))
else:
    model = torchvision.models.resnet18().to(device)

# insert training loop

# append to the end of the script:
if use_ddp:
    # destroy the process group
    dist.destroy_process_group()

The DDP modification results in the following error:

RuntimeError: No backend type associated with device type cpu

By default, metrics in distributed training are programmed to synchronize across all devices in use. However, the synchronization backend used by DDP does not support metrics stored on the CPU.
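Under the hood, TorchMetrics synchronizes metric states using torch.distributed collectives, and the NCCL backend we initialized only operates on CUDA tensors. A tiny sketch of our own reproducing the same failure outside of TorchMetrics, assuming the process group initialized above:

cpu_state = torch.zeros(1)          # metric state residing on the CPU
try:
    dist.all_reduce(cpu_state)      # NCCL collectives cannot operate on CPU tensors
except RuntimeError as err:
    print(err)                      # the same "No backend type associated with device type cpu"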

One way to solve this is to disable the cross-device metric synchronization:

avg_time = NoNanMeanMetric(sync_on_compute=False)

In our case, where we are measuring the average time, this solution is acceptable. However, in some cases the metric synchronization is essential, and we may have no choice but to move the metric onto the GPU:

avg_time = NoNanMeanMetric().to(device)

Unfortunately, this situation gives rise to a new CPU-GPU sync event coming from the update function.

Trace of avg_time Metric Collection (by Author)

This sync event should hardly come as a surprise. After all, we are updating a GPU metric with a value residing on the CPU, which should necessitate a memory copy. However, in the case of a scalar metric, this data transfer can be avoided entirely with a simple optimization.

Optimization 3: Perform Metric Updates with Tensors instead of Scalars

The solution is straightforward: instead of updating the metric with a float value, we convert it to a Tensor before calling update.

batch_time = torch.as_tensor(batch_time)
avg_time.update(batch_time, torch.ones_like(batch_time))

This minor change bypasses the problematic line of code, eliminates the sync event, and restores the step time to the baseline performance.

At first glance, this result may seem surprising: we would expect that updating a GPU metric with a CPU tensor should still require a memory copy. However, PyTorch optimizes operations on scalar tensors by using a dedicated kernel that performs the addition without an explicit data transfer. This avoids the expensive synchronization event that would otherwise occur.
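A small sketch of our own illustrating the distinction PyTorch makes here: a zero-dimensional CPU tensor is treated like a Python scalar in a binary operation with a CUDA tensor, whereas a one-dimensional CPU tensor is not (the exact error message may vary between PyTorch versions):

gpu_total = torch.zeros((), device="cuda")

cpu_scalar = torch.as_tensor(0.123)       # zero-dimensional CPU tensor
gpu_total += cpu_scalar                   # allowed: handled like a Python scalar

cpu_vector = torch.full((1,), 0.123)      # one-dimensional CPU tensor
try:
    gpu_total += cpu_vector               # cross-device operation on a non-scalar
except RuntimeError as err:
    print(err)                            # expected: a device-mismatch error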

Summary

In this post, we explored how a naïve approach to TorchMetrics can introduce CPU-GPU synchronization events and significantly degrade PyTorch training performance. Using PyTorch Profiler, we identified the lines of code responsible for these sync events and applied targeted optimizations to eliminate them:

  • Explicitly specify a weight tensor when calling the MeanMetric.update function instead of relying on the default value.
  • Disable NaN checks in the base Aggregator class or replace them with a more efficient alternative.
  • Carefully manage the device placement of each metric to minimize unnecessary transfers.
  • Disable cross-device metric synchronization when it is not required.
  • When the metric resides on a GPU, convert floating-point scalars to tensors before passing them to the update function to avoid implicit synchronization.

We have created a dedicated pull request on the TorchMetrics GitHub page covering some of the optimizations discussed in this post. Please feel free to contribute your own improvements and optimizations!

