What It Really Takes to Run Code on a 200M€ Supercomputer

By Admin | April 17, 2026 | Artificial Intelligence
When you walk across the campus of the Polytechnic University of Catalonia in Barcelona, you might come across the Torre Girona chapel, set in an attractive park. Built in the nineteenth century, it features a large cross, high arches, and stained glass. But inside the main hall, encased in an enormous illuminated glass box, sits a different kind of architecture.

This is the historic home of MareNostrum. While the original 2004 racks remain on display in the chapel as a museum piece, the latest iteration, MareNostrum 5, one of the fifteen most powerful supercomputers in the world, spans a dedicated, heavily cooled facility right next door.

Most data scientists are used to spinning up a heavy EC2 instance on AWS or using distributed frameworks like Spark or Ray. High-Performance Computing (HPC) at the supercomputer level is a different beast entirely. It operates on different architectural rules, different schedulers, and a scale that is difficult to fathom until you use it.

I recently had the chance to use MareNostrum 5 to generate large amounts of synthetic data for a machine learning surrogate model. What follows is a look under the hood of a 200M€ machine: what it is, why its architecture looks the way it does, and how you actually interact with it.

The Architecture: Why You Should Care About the Wiring

The mental model that causes the most confusion when approaching HPC is this: you are not renting time on a single, impossibly powerful computer. You are submitting work to be distributed across thousands of independent computers that happen to share an extremely fast network.

Why should a data scientist care about the physical networking? Because if you have ever tried to train a large neural network across multiple AWS instances and watched your expensive GPUs idle while waiting for a data batch to transfer, you know that in distributed computing, the network is the computer.

To prevent bottlenecks, MareNostrum 5 uses an InfiniBand NDR200 fabric organized in a fat-tree topology. In a standard office network, as multiple computers try to talk across the same main switch, bandwidth gets congested. A fat-tree topology solves this by increasing the bandwidth of the links as you move up the network hierarchy, literally making the "branches" thicker near the "trunk." This ensures non-blocking bandwidth: any of the 8,000 nodes can talk to any other node at the same minimal latency.

Fat-tree architecture, by HoriZZon~commonswiki via Wikimedia Commons (CC BY-SA 4.0)

The machine itself represents a joint investment from the EuroHPC Joint Undertaking, Spain, Portugal, and Turkey, split into two main computational partitions:

General Purpose Partition (GPP):

It is designed for highly parallel CPU tasks. It contains 6,408 nodes, each packing 112 Intel Sapphire Rapids cores, with a combined peak performance of 45.9 PFlops. This is the partition you will be using most often for "classic" computing tasks.

Accelerated Partition (ACC):

This one is more specialized, designed with AI training, molecular dynamics, and similar workloads in mind. It contains 1,120 nodes, each with four NVIDIA H100 SXM GPUs. Considering that a single H100 retails for roughly $25,000, the GPU cost alone exceeds $110 million.
The GPUs give it a much higher peak performance than the GPP, reaching up to 260 PFlops.

There is also a special type of node called the Login Nodes. These act as the front door to the supercomputer. When you SSH into MareNostrum, this is where you land. Login nodes are strictly for lightweight tasks: transferring files, compiling code, and submitting job scripts to the scheduler. They are not for computing.

Photograph by Planet Volumes on Unsplash

Quantum Infrastructure: Classical nodes are no longer the only hardware inside the glass box. As of recently, MareNostrum 5 has been physically and logically integrated with Spain's first quantum computers. This includes a digital gate-based quantum system and the newly acquired MareNostrum-Ona, a state-of-the-art quantum annealer based on superconducting qubits. Rather than replacing the classical supercomputer, these quantum processing units (QPUs) act as highly specialized accelerators.

When the supercomputer encounters fiercely complex optimization problems or quantum chemistry simulations that would choke even the H100 GPUs, it can offload those specific calculations to the quantum hardware, creating a massive hybrid classical-quantum computing powerhouse.

Airgaps, Quotas, and the Reality of HPC

Understanding the hardware is only half the battle. The operational rules of a supercomputer are entirely different from those of a commercial cloud provider. MareNostrum 5 is a shared public resource, which means the environment is heavily restricted to ensure security and fair play.

The airgap on MareNostrum 5, by author using Inkscape

The Airgap: One of the biggest shocks for data scientists transitioning to HPC is the network restriction. You can access the supercomputer from the outside world via SSH, but the compute nodes absolutely cannot access the outside world. There is no outbound internet connection. You cannot pip install a missing library, wget a dataset, or connect to an external HuggingFace repository as you see fit. Everything your script needs must be pre-downloaded, compiled, and sitting in your storage directory before you submit your job.

In practice, this is less of a problem than it seems, since the MareNostrum administrators provide most of the libraries and software you could need via a module system.
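As a rough sketch, working with the module system looks something like this (the OpenFOAM module name matches the one used in the job script later in this article; the rest of the workflow is a typical Environment Modules/Lmod session, not MareNostrum-specific documentation):

```shell
# List the software stack the administrators have pre-installed
module avail

# Start from a clean environment, then load a specific build
module purge
module load OpenFOAM/11-foss-2023a

# Show which modules (and their dependencies) are now active
module list
```

Loading a module simply adjusts your `PATH` and related environment variables, so the same script works identically on any node that submits or runs your job.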

Moving Data: Because of this strict boundary, data ingress and egress happen via scp or rsync through the login nodes. You push your raw datasets in over SSH, wait for the compute nodes to chew through the simulations, and pull the processed tensors back out to your local machine. One surprising side of this restriction is that, since the actual computation can be so incredibly fast, the bottleneck becomes extracting the finished results to your local machine for postprocessing and visualization.
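The round trip looks roughly like this (the hostname and paths here are hypothetical placeholders, not the real MareNostrum endpoints):

```shell
# Push raw inputs up through a login node
rsync -avz --progress ./meshes/ user@login.hpc.example:~/project/meshes/

# ...submit jobs, wait for the compute nodes to finish...

# Pull processed results back down for local postprocessing
rsync -avz user@login.hpc.example:~/project/results/ ./results/

# scp also works for grabbing a single file
scp user@login.hpc.example:~/project/logs/sim_12345.out .
```

rsync is usually preferable to scp for large directories because it only transfers files that changed, which matters when you are iterating on a 50-case sweep.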

Limits and Quotas: You cannot simply launch a thousand jobs and monopolize the machine. Your project is assigned a specific CPU-hour budget. Additionally, there are hard limits on how many concurrent jobs a single user can have running or queuing at any given time.

You must also specify a strict wall-time limit for every single job you submit. Supercomputers don't tolerate loitering: if you request two hours of compute time and your script needs two hours and one second, the scheduler will ruthlessly kill your process mid-calculation to make room for the next researcher.

Logging in the Dark: Since you submit these jobs to a scheduler and walk away, there is no live terminal output to stare at. Instead, all standard output (stdout) and standard error (stderr) are automatically redirected into log files (e.g., sim_12345.out and sim_12345.err). When your job completes, or if it crashes overnight, you have to comb through these generated text files to verify the results or debug your code. You do, however, have tools to monitor the status of your submitted jobs, such as squeue or the classic tail -f on the log files.
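A typical monitoring session might look like this (job ID and filenames are illustrative; `sacct` output depends on how the site has configured SLURM accounting):

```shell
# Where are my jobs in the queue? (ST column: R = running, PD = pending)
squeue -u $USER

# Follow a running job's log output as it is written
tail -f logs/sim_12345.out

# After completion, check exit state and resource usage
sacct -j 12345 --format=JobID,State,Elapsed,MaxRSS
```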

Understanding the SLURM Workload Manager

When you finally get your research allocation approved and log into MareNostrum 5 via SSH, your reward is… a completely standard Linux terminal prompt.

After months of writing proposals for access to a 200M€ machine, it is, frankly, a bit underwhelming. There are no flashing lights, no holographic progress bars, nothing to signal just how powerful the engine behind the wheel is.

Initial terminal view after login, by author

Because thousands of researchers are using the machine concurrently, you cannot just execute a heavy Python or C++ script directly in the terminal. If you do, it will run on the login node, quickly grinding it to a halt for everyone else and earning you an incredibly polite but rather firm and angry email from the system administrators.

SLURM schema on MareNostrum 5, by author using Inkscape

Instead, HPC relies on a workload manager called SLURM. You write a bash script detailing exactly what hardware you need, what software environments to load, and what code to execute. SLURM puts your job in a queue, finds the hardware when it becomes available, executes your code, and releases the nodes.

SLURM stands for Simple Linux Utility for Resource Management, and it is free and open-source software that handles job scheduling on many computer clusters and supercomputers.

Before building a complex pipeline, you need to understand how to communicate with the scheduler. This is done using #SBATCH directives placed at the top of your submission script. These directives act as your shopping list for resources:

  • --nodes: The number of distinct physical machines you need.
  • --ntasks: The total number of separate MPI processes (tasks) you want to spawn. SLURM handles distributing these tasks across your requested nodes.
  • --time: The strict wall-clock time limit for your job. If your script runs even one second over this limit, SLURM kills the job.
  • --account: The specific project ID that will be billed for your CPU-hours.
  • --qos: The "Quality of Service" or specific queue you are targeting. For instance, using a debug queue grants faster access but limits you to short runtimes for testing.

A Practical Example: Orchestrating an OpenFOAM Sweep

To ground this in reality, here is how I actually used the machine. I was building an ML surrogate model to predict aerodynamic downforce, which required ground-truth data from 50 high-fidelity computational fluid dynamics (CFD) simulations across 50 different 3D meshes.

Example flow around one of the 3D meshes, by author using ParaView

Here is the actual SLURM job script for a single OpenFOAM CFD case on the General Purpose Partition:

#!/bin/bash
#SBATCH --job-name=cfd_sweep
#SBATCH --output=logs/sim_%j.out
#SBATCH --error=logs/sim_%j.err
#SBATCH --qos=gp_debug
#SBATCH --time=00:30:00
#SBATCH --nodes=1
#SBATCH --ntasks=6
#SBATCH --account=nct_293

module purge
module load OpenFOAM/11-foss-2023a
source $FOAM_BASH

# MPI launchers handle core mapping automatically
srun --mpi=pmix surfaceFeatureExtract
srun --mpi=pmix blockMesh
srun --mpi=pmix decomposePar -force
srun --mpi=pmix snappyHexMesh -parallel -overwrite
srun --mpi=pmix potentialFoam -parallel
srun --mpi=pmix simpleFoam -parallel
srun --mpi=pmix reconstructPar

Rather than manually submitting this 50 times and flooding the scheduler, I used SLURM dependencies to chain each job behind the previous one. This creates a clean, automated data pipeline:

#!/bin/bash
PREV_JOB_ID=""

for CASE_DIR in cases/case_*; do
  cd $CASE_DIR

  if [ -z "$PREV_JOB_ID" ]; then
    OUT=$(sbatch run_all.sh)
  else
    OUT=$(sbatch --dependency=afterany:$PREV_JOB_ID run_all.sh)
  fi

  # sbatch prints "Submitted batch job <ID>"; grab the fourth field
  PREV_JOB_ID=$(echo $OUT | awk '{print $4}')
  cd ../..
done

This orchestrator drops a chain of 50 jobs into the queue in seconds. I walked away, and by the next morning, my 50 aerodynamic evaluations had been processed, logged, and ready to be formatted into tensors for ML training.

Example underside pressure on one of the 3D meshes, by author using ParaView

Parallelism Limits: Amdahl's Law

A common question from newcomers is: if you have 112 cores per node, why did you only request 6 tasks (ntasks=6) for your CFD simulation?

The answer is Amdahl's Law. Every program has a serial fraction that cannot be parallelized. The law states that the theoretical speedup of executing a program across multiple processors is strictly limited by the fraction of the code that must be executed serially. It is a very intuitive law and, mathematically, it is expressed as:

\[
S = \frac{1}{(1 - p) + \frac{p}{N}}
\]

where S is the overall speedup, p is the proportion of the code that can be parallelized, 1−p is the strictly serial fraction, and N is the number of processing cores.

Because of that (1−p) term in the denominator, you face an insurmountable ceiling. If just 5% of your program is fundamentally sequential, the maximum theoretical speedup you can achieve, even if you use every single core in MareNostrum 5, is 20x.
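You can check that ceiling directly from the formula above. A quick sketch for p = 0.95:

```shell
# Amdahl's Law: S = 1 / ((1-p) + p/N), here with 95% of the code parallel
for N in 1 10 100 1000 100000; do
  awk -v p=0.95 -v n="$N" \
    'BEGIN { printf "N=%-6d speedup=%.2fx\n", n, 1/((1-p) + p/n) }'
done
# N=1      speedup=1.00x
# N=10     speedup=6.90x
# N=100    speedup=16.81x
# N=1000   speedup=19.63x
# N=100000 speedup=20.00x   (asymptotically capped at 1/(1-p) = 20x)
```

Note how the last two orders of magnitude of hardware buy almost nothing: going from 1,000 to 100,000 cores improves the speedup by less than 2%.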

Additionally, dividing a job across too many cores increases the communication overhead over that InfiniBand network we discussed earlier. If the cores spend more time passing boundary conditions to one another than doing actual math, adding more hardware slows the program down.

Runtime as resources increase, for different problem sizes N, by author using matplotlib

As shown in this figure, when simulating a small system (N=100), runtime actually increases beyond 16 threads. Only at large scales (N=10k+) does the hardware become fully productive. Writing code for a supercomputer is an exercise in managing this compute-to-communication ratio.

Getting Access to the Prompt

Despite the staggering cost of the hardware, access to MareNostrum 5 is free for researchers, as compute time is treated as a publicly funded scientific resource.

If you are affiliated with a Spanish institution, you can apply through the Spanish Supercomputing Network (RES). For researchers across the rest of Europe, the EuroHPC Joint Undertaking runs regular access calls. Their "Development Access" track is specifically designed for projects porting code or benchmarking ML models, making it highly accessible for data scientists.

When you sit at your desk watching that completely unremarkable SSH prompt, it is easy to forget what you are actually using. What that blinking cursor doesn't show is the 8,000 nodes it connects to, the fat-tree fabric routing messages between them at 200 Gb/s, or the scheduler coordinating hundreds of concurrent jobs from researchers across six countries.

The "single powerful computer" picture persists in our heads because it is simpler. But the distributed reality is what makes modern computing possible, and it is far more accessible than most people realize.

