Breaking the Bottleneck: GPU-Optimised Video Processing for Deep Learning

By Admin | February 25, 2025


Deep Learning (DL) applications often require processing video data for tasks such as object detection, classification, and segmentation. However, conventional video processing pipelines are typically inefficient for deep learning inference, leading to performance bottlenecks. In this post, we will leverage PyTorch and FFmpeg with NVIDIA hardware acceleration to achieve this optimisation.

The inefficiency comes from how video frames are usually decoded and transferred between the CPU and the GPU. The standard workflow that we find in the majority of tutorials follows this structure:

  1. Decode frames on the CPU: Video files are first decoded into raw frames using CPU-based decoding tools (e.g., OpenCV, or FFmpeg without GPU support).
  2. Transfer to the GPU: These frames are then transferred from CPU to GPU memory to perform deep learning inference using frameworks like TensorFlow, PyTorch, ONNX Runtime, etc.
  3. Inference on the GPU: Once the frames are in GPU memory, the model performs inference.
  4. Transfer back to the CPU (if needed): Some post-processing steps may require data to be moved back to the CPU.

This CPU-GPU transfer process introduces a significant performance bottleneck, especially when processing high-resolution videos at high frame rates. The unnecessary memory copies and context switches slow down the overall inference speed, limiting real-time processing capabilities.

For instance, the following snippet shows the typical video processing pipeline that you come across when you are starting to learn deep learning:
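The original snippet did not survive extraction, so here is a minimal sketch of what such a pipeline usually looks like, assuming OpenCV for decoding, a placeholder torchvision ResNet-50 as the model, and a hypothetical "video.mp4" input; the numbered comments map to the four steps above.

```python
import cv2
import torch
import torchvision

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Stand-in model; any detection/classification/segmentation model is used the same way.
model = torchvision.models.resnet50(weights=None).eval().to(device)

cap = cv2.VideoCapture("video.mp4")               # hypothetical input file
with torch.no_grad():
    while True:
        ok, frame = cap.read()                    # 1. decode the frame on the CPU
        if not ok:
            break
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        tensor = (
            torch.from_numpy(frame).permute(2, 0, 1).float().div(255.0).unsqueeze(0)
        )
        tensor = tensor.to(device)                # 2. copy CPU -> GPU for every frame
        scores = model(tensor)                    # 3. inference on the GPU
        pred = scores.argmax(dim=1).cpu()         # 4. copy results back to the CPU
cap.release()
```

Every frame pays for a host-to-device copy in step 2, which is exactly the overhead the rest of the post removes.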

The Solution: GPU-Based Video Decoding and Inference

A more efficient approach is to keep the entire pipeline on the GPU, from video decoding to inference, eliminating redundant CPU-GPU transfers. This can be achieved using FFmpeg with NVIDIA GPU hardware acceleration.

Key Optimisations

  1. GPU-accelerated video decoding: Instead of using CPU-based decoding, we leverage FFmpeg with NVIDIA GPU acceleration (NVDEC) to decode video frames directly on the GPU.
  2. Zero-copy frame processing: The decoded frames stay in GPU memory, avoiding unnecessary memory transfers.
  3. GPU-optimised inference: Once the frames are decoded, we perform inference directly with any model on the same GPU, significantly reducing latency.

Hands on!

Prerequisites

In order to achieve the aforementioned improvements, we will be using the following dependencies: FFmpeg compiled with NVIDIA hardware acceleration (NVDEC), PyTorch, torchaudio, and torchvision.

Installation

To get a deeper insight into how FFmpeg is installed with NVIDIA GPU acceleration, follow these instructions.

Tested with:

  • System: Ubuntu 22.04
  • NVIDIA Driver Version: 550.120
  • CUDA Version: 12.4
  • Torch: 2.4.0
  • Torchaudio: 2.4.0
  • Torchvision: 0.19.0

1. Install the NV-Codecs

2. Clone and configure FFmpeg

3. Validate whether the installation was successful with torchaudio.utils
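A quick way to run this check, assuming the torchaudio 2.x ffmpeg_utils API, is to list the video decoders exposed by the FFmpeg libraries that torchaudio picked up and look for the NVDEC ("cuvid") entries:

```python
from torchaudio.utils import ffmpeg_utils

# Versions of the FFmpeg libraries torchaudio linked against.
print("FFmpeg versions:", ffmpeg_utils.get_versions())

# NVDEC decoders show up as "*_cuvid" entries when the build succeeded.
decoders = ffmpeg_utils.get_video_decoders()
print("h264_cuvid available:", "h264_cuvid" in decoders)
print("hevc_cuvid available:", "hevc_cuvid" in decoders)
```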

Time to code an optimised pipeline!
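The post's original code is not reproduced here; the following is a minimal sketch of a GPU-resident pipeline built on torchaudio's StreamReader, assuming an H.264 input (hence the h264_cuvid decoder), a hypothetical "video.mp4" file, and the same placeholder ResNet-50 as before.

```python
import torch
import torchvision
from torchaudio.io import StreamReader

device = torch.device("cuda:0")
model = torchvision.models.resnet50(weights=None).eval().to(device)

reader = StreamReader("video.mp4")                # hypothetical input file
reader.add_video_stream(
    frames_per_chunk=16,      # batch of frames returned per iteration
    decoder="h264_cuvid",     # decode on the GPU with NVDEC
    hw_accel="cuda:0",        # keep the decoded frames in GPU memory
)

with torch.no_grad():
    for (chunk,) in reader.stream():
        # chunk is a uint8 tensor already on cuda:0 with shape
        # (frames, channels, height, width). The pixel format depends on the
        # build (often YUV rather than RGB with NVDEC), so a real pipeline may
        # need a colour-space conversion at this point.
        frames = chunk.float().div(255.0)
        scores = model(frames)                    # inference on the same GPU, no CPU round-trip
```

Because decoding, pre-processing, and inference all stay on the same device, the per-frame CPU-GPU copies from the typical pipeline disappear.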

Benchmarking

To benchmark whether this makes any difference, we will be using this video from Pexels by Pawel Perzanowski. Since most videos there are really short, I have stacked the same video several times to provide results for different video lengths. The original video is 32 seconds long, which gives us a total of 960 frames. The new modified videos have 5520 and 9300 frames respectively.
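The post does not show its measurement code; a simple end-to-end timing wrapper along these lines (an assumption, with a hypothetical pipeline_fn standing in for either workflow) is enough to reproduce the comparison:

```python
import time
import torch

def benchmark(pipeline_fn, video_path: str) -> float:
    """Run a full decode + inference pipeline once and return wall-clock seconds."""
    start = time.perf_counter()
    pipeline_fn(video_path)       # either the typical or the optimised pipeline
    torch.cuda.synchronize()      # wait for queued GPU work before stopping the clock
    return time.perf_counter() - start
```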

Original video

  • typical workflow: 28.51s
  • optimised workflow: 24.2s

Okay… it doesn’t seem like a real improvement, right? Let’s test it with longer videos.

Modified video v1 (5520 frames)

  • typical workflow: 118.72s
  • optimised workflow: 100.23s

Modified video v2 (9300 frames)

  • typical workflow: 292.26s
  • optimised workflow: 240.85s

As the video duration increases, the benefits of the optimisation become more evident. In the longest test case, we achieve an 18% speedup, demonstrating a significant reduction in processing time. These performance gains are particularly important when handling large video datasets or in real-time video analysis tasks, where small efficiency improvements accumulate into substantial time savings.

Conclusion

In today’s post, we have explored two video processing pipelines: the typical one, where frames are copied from the CPU to the GPU, introducing noticeable bottlenecks, and an optimised one, in which frames are decoded on the GPU and passed directly to inference, saving a considerable amount of time as the videos’ duration increases.
