newsaiworld

Ray: Distributed Computing For All, Part 2

by Admin
January 27, 2026
in Artificial Intelligence


This is the second instalment in my two-part series on the Ray library, a Python framework created by AnyScale for distributed and parallel computing. Part 1 covered how to parallelise CPU-intensive Python jobs on your local PC by distributing the workload across all available cores, resulting in marked improvements in runtime. I'll leave a link to Part 1 at the end of this article.

This part deals with a similar theme, except we take distributing Python workloads to the next level by using Ray to parallelise them across multi-server clusters in the cloud.


If you've come to this without having read Part 1, the TL;DR of Ray is that it's an open-source distributed computing framework designed to make it easy to scale Python programs from a laptop to a cluster with minimal code changes. That alone should hopefully be enough to pique your curiosity. In my own test, on my desktop PC, I took a straightforward, relatively simple Python program that finds prime numbers and reduced its runtime by a factor of 10 by adding just four lines of code.

Where can you run Ray clusters?

Ray clusters can be set up on the following:

  • AWS and GCP, although unofficial integrations exist for other providers too, such as Azure
  • AnyScale, a fully managed platform developed by the creators of Ray
  • Kubernetes, via the officially supported KubeRay project

Prerequisites

To follow along with my process, you'll need a few things set up beforehand. I'll be using AWS for my demo, as I have an existing account there; however, I expect the setup for other cloud providers and platforms to be very similar. You should have:

  • Credentials set up to run cloud CLI commands for your chosen provider.
  • A default VPC and at least one public subnet associated with it that has a publicly reachable IP address.
  • An SSH key pair file (.pem) that you can download to your local system so that Ray (and you) can connect to the nodes in your cluster.
  • Enough quota to satisfy the requested number of nodes and vCPUs in whichever cluster you set up.

If you want to do some local testing of your Ray code before deploying it to a cluster, you'll also need to install the Ray library. We can do that using pip.

$ pip install ray

I'll be running everything from a WSL2 Ubuntu shell on my Windows desktop.

To verify that Ray has been installed correctly, you should be able to use its command-line interface. In a terminal window, type in the following command.

$ ray --help

Usage: ray [OPTIONS] COMMAND [ARGS]...

Options:
  --logging-level TEXT   The logging level threshold, choices=['debug',
                         'info', 'warning', 'error', 'critical'],
                         default='info'
  --logging-format TEXT  The logging format.
                         default='%(asctime)s\t%(levelname)s
                         %(filename)s:%(lineno)s -- %(message)s'
  --version              Show the version and exit.
  --help                 Show this message and exit.

Commands:
  attach            Create or attach to an SSH session to a Ray cluster.
  check-open-ports  Check open ports in the local Ray cluster.
  cluster-dump      Get log data from one or more nodes.
...
...
...

If you don't see this, something has gone wrong, and you should double-check the output of your installation command.

Assuming everything is OK, we're good to go.

One last important point, though. Creating resources, such as compute clusters, on a cloud provider like AWS will incur costs, so it's important that you bear this in mind. The good news is that Ray has a built-in command that will tear down any infrastructure you create, but to be safe, you should double-check that no unused and potentially costly services get left "switched on" by mistake.

Our example Python code

The first step is to modify our existing Ray code from Part 1 to run on a cluster. Here is the original code for your reference. Recall that we are trying to count the number of prime numbers within a specified numeric range.

import math
import time

# -----------------------------------------
# Change No. 1
# -----------------------------------------
import ray
ray.init()

def is_prime(n: int) -> bool:
    if n < 2: return False
    if n == 2: return True
    if n % 2 == 0: return False
    r = int(math.isqrt(n)) + 1
    for i in range(3, r, 2):
        if n % i == 0:
            return False
    return True

# -----------------------------------------
# Change No. 2
# -----------------------------------------
@ray.remote(num_cpus=1)  # pure-Python loop -> 1 CPU per task
def count_primes(a: int, b: int) -> int:
    c = 0
    for n in range(a, b):
        if is_prime(n):
            c += 1
    return c

if __name__ == "__main__":
    A, B = 10_000_000, 20_000_000
    total_cpus = int(ray.cluster_resources().get("CPU", 1))

    # Start "chunky"; we can sweep this later
    chunks = max(4, total_cpus * 2)
    step = (B - A) // chunks

    print(f"nodes={len(ray.nodes())}, CPUs~{total_cpus}, chunks={chunks}")
    t0 = time.time()
    refs = []
    for i in range(chunks):
        s = A + i * step
        e = s + step if i < chunks - 1 else B
        # -----------------------------------------
        # Change No. 3
        # -----------------------------------------
        refs.append(count_primes.remote(s, e))

    # -----------------------------------------
    # Change No. 4
    # -----------------------------------------
    total = sum(ray.get(refs))

    print(f"total={total}, time={time.time() - t0:.2f}s")

What changes are needed to run it on a cluster? The answer is that only one minor change is required.

Change 

ray.init() 

to

ray.init(address="auto")

That's one of the beauties of Ray. The same code runs almost unmodified on your local PC, and anywhere else you care to run it, including large, multi-server cloud clusters.
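If you'd rather not edit the script each time you move it between environments, a tiny helper can choose the init arguments at runtime. This is my own sketch, not a Ray convention; `ON_RAY_CLUSTER` is a made-up environment variable name that you would set yourself on the cluster nodes or in your job submission.

```python
import os

def ray_init_kwargs(env=None) -> dict:
    """Return kwargs for ray.init(): empty for a local run, address='auto'
    when joining an existing cluster. ON_RAY_CLUSTER is a hypothetical
    environment variable used only for this sketch."""
    env = os.environ if env is None else env
    return {"address": "auto"} if env.get("ON_RAY_CLUSTER") else {}

# Usage (in place of the bare ray.init() call):
#   ray.init(**ray_init_kwargs())
```

The point is that the decision lives in the environment, not the code, so the same script file can be shipped unchanged to the cluster.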

Setting up our cluster

In the cloud, a Ray cluster consists of a head node and a number of worker nodes. In AWS, all these nodes are simply EC2 instances. Ray clusters can be fixed-size or autoscale up and down based on the resources requested by applications running on the cluster. The head node is started first, and the worker nodes are configured with the head node's address to form the cluster. If auto-scaling is enabled, worker nodes automatically scale up or down based on the application's load and will scale down after a user-specified idle interval (5 minutes by default).

Ray uses YAML files to set up clusters. A YAML file is just a plain-text file with a simple, indentation-based syntax commonly used for system configuration.

Here is the YAML file I'll be using to set up my cluster. I found that the closest EC2 instance to my desktop PC, in terms of CPU core count and performance, was a c7g.8xlarge. For simplicity, I'm having the head node be the same server type as all the workers, but you can mix and match different EC2 types if desired.

cluster_name: ray_test

provider:
  type: aws
  region: eu-west-1
  availability_zone: eu-west-1a

auth:
  # For Amazon Linux AMIs the SSH user is 'ec2-user'.
  # If you switch to an Ubuntu AMI, change this to 'ubuntu'.
  ssh_user: ec2-user
  ssh_private_key: ~/.ssh/ray-autoscaler_eu-west-1.pem

max_workers: 10
idle_timeout_minutes: 10

head_node_type: head_node

available_node_types:
  head_node:
    node_config:
      InstanceType: c7g.8xlarge
      ImageId: ami-06687e45b21b1fca9
      KeyName: ray-autoscaler_eu-west-1

  worker_node:
    min_workers: 5
    max_workers: 5
    node_config:
      InstanceType: c7g.8xlarge
      ImageId: ami-06687e45b21b1fca9
      KeyName: ray-autoscaler_eu-west-1
      InstanceMarketOptions:
        MarketType: spot

# =========================
# Setup commands (run on head + workers)
# =========================
setup_commands:
  - |
    set -euo pipefail

    have_cmd() { command -v "$1" >/dev/null 2>&1; }
    have_pip_py() {
      python3 -c 'import importlib.util, sys; sys.exit(0 if importlib.util.find_spec("pip") else 1)'
    }

    # 1) Ensure Python 3 is present
    if ! have_cmd python3; then
      if have_cmd dnf; then
        sudo dnf install -y python3
      elif have_cmd yum; then
        sudo yum install -y python3
      elif have_cmd apt-get; then
        sudo apt-get update -y
        sudo apt-get install -y python3
      else
        echo "No supported package manager found to install python3." >&2
        exit 1
      fi
    fi

    # 2) Ensure pip exists
    if ! have_pip_py; then
      python3 -m ensurepip --upgrade >/dev/null 2>&1 || true
    fi
    if ! have_pip_py; then
      if have_cmd dnf; then
        sudo dnf install -y python3-pip || true
      elif have_cmd yum; then
        sudo yum install -y python3-pip || true
      elif have_cmd apt-get; then
        sudo apt-get update -y || true
        sudo apt-get install -y python3-pip || true
      fi
    fi
    if ! have_pip_py; then
      curl -fsS https://bootstrap.pypa.io/get-pip.py -o /tmp/get-pip.py
      python3 /tmp/get-pip.py
    fi

    # 3) Upgrade packaging tools and install Ray
    python3 -m pip install -U pip setuptools wheel
    python3 -m pip install -U "ray[default]"

Here is a brief explanation of each important YAML section.

cluster_name: Assigns a name to the cluster, allowing Ray to track and manage
it separately from others.

provider:  Specifies which cloud to use (AWS here), together with the region and
availability zone for launching instances.

auth:  Defines how Ray connects to instances over SSH - the user name and the
private key used for authentication.

max_workers:  Sets the maximum number of worker nodes Ray can scale up to when
more compute is required.

idle_timeout_minutes:  Tells Ray how long to wait before automatically terminating
idle worker nodes.

available_node_types:  Describes the different node types (head and workers), their
instance sizes, AMI images, and scaling limits.

head_node_type:  Identifies which of the node types acts as the cluster's controller
(the head node).

setup_commands:  Lists shell commands that run once on each node when it is first
created, typically to install software or set up the environment.
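A malformed YAML file usually only surfaces as a failed `ray up`, several minutes (and a few billable instance-launches) later. As a hypothetical convenience, not part of Ray itself, you can sanity-check the basic structure first. The sketch below uses a plain dict standing in for the parsed file; on a real file you would load it with PyYAML's `yaml.safe_load`.

```python
# Hypothetical helper (not part of Ray): basic structural checks on a
# cluster config dict, such as one produced by yaml.safe_load(open(path)).
REQUIRED_KEYS = {"cluster_name", "provider", "auth",
                 "head_node_type", "available_node_types"}

def check_cluster_config(cfg: dict) -> list[str]:
    """Return a list of human-readable problems; empty means nothing obvious."""
    problems = [f"missing top-level key: {k}"
                for k in sorted(REQUIRED_KEYS - cfg.keys())]
    head = cfg.get("head_node_type")
    node_types = cfg.get("available_node_types", {})
    if head is not None and head not in node_types:
        problems.append(
            f"head_node_type '{head}' is not defined under available_node_types")
    return problems

# A trimmed-down version of the config from this article:
cfg = {
    "cluster_name": "ray_test",
    "provider": {"type": "aws", "region": "eu-west-1"},
    "auth": {"ssh_user": "ec2-user"},
    "head_node_type": "head_node",
    "available_node_types": {"head_node": {}, "worker_node": {}},
}
print(check_cluster_config(cfg))  # [] -> nothing obviously wrong
```

This catches only the coarsest mistakes (a missing section, or a head_node_type that points at nothing); Ray itself validates the full schema when you run `ray up`.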

To begin the cluster creation, use this ray command from the terminal.

$ ray up -y ray_test.yaml

Ray will do its thing, creating all the necessary infrastructure, and after a few minutes, you should see something like this in your terminal window.

...
...
...
Next steps
  To add another node to this Ray cluster, run
    ray start --address='10.0.9.248:6379'

  To connect to this Ray cluster:
    import ray
    ray.init()

  To submit a Ray job using the Ray Jobs CLI:
    RAY_ADDRESS='http://10.0.9.248:8265' ray job submit --working-dir . -- python my_script.py

  See https://docs.ray.io/en/latest/cluster/running-applications/job-submission/index.html
  for more information on submitting Ray jobs to the Ray cluster.

  To terminate the Ray runtime, run
    ray stop

  To view the status of the cluster, use
    ray status

  To monitor and debug Ray, view the dashboard at
    10.0.9.248:8265

  If connection to the dashboard fails, check your firewall settings and network configuration.
Shared connection to 108.130.38.255 closed.
  New status: up-to-date

Useful commands:
  To terminate the cluster:
    ray down /mnt/c/Users/thoma/ray_test.yaml

  To retrieve the IP address of the cluster head:
    ray get-head-ip /mnt/c/Users/thoma/ray_test.yaml

  To port-forward the cluster's Ray Dashboard to the local machine:
    ray dashboard /mnt/c/Users/thoma/ray_test.yaml

  To submit a job to the cluster, port-forward the Ray Dashboard in another terminal and run:
    ray job submit --address http://localhost: --working-dir . -- python my_script.py

  To connect to a terminal on the cluster head for debugging:
    ray attach /mnt/c/Users/thoma/ray_test.yaml

  To monitor autoscaling:
    ray exec /mnt/c/Users/thoma/ray_test.yaml 'tail -n 100 -f /tmp/ray/session_latest/logs/monitor*'

Running a Ray job on a cluster

At this stage, the cluster has been built, and we're ready to submit our Ray job to it. To give the cluster something more substantial to work with, I increased the range for the prime search in my code from 10,000,000–20,000,000 to 10,000,000–60,000,000. On my local desktop, Ray ran this in 18 seconds.
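For reference, the chunking loop in the script splits the search range into near-equal contiguous pieces, with the final chunk absorbing any remainder from the integer division. The same logic as a standalone sketch:

```python
def split_range(a: int, b: int, chunks: int) -> list[tuple[int, int]]:
    """Split [a, b) into `chunks` contiguous sub-ranges, mirroring how the
    main loop in the script computes each task's (s, e) bounds."""
    step = (b - a) // chunks
    out = []
    for i in range(chunks):
        s = a + i * step
        e = s + step if i < chunks - 1 else b  # last chunk absorbs the remainder
        out.append((s, e))
    return out

# e.g. 384 chunks over the enlarged 10M-60M search range
parts = split_range(10_000_000, 60_000_000, 384)
```

Because the sub-ranges tile [a, b) exactly, no candidate number is counted twice or missed, which is what makes summing the per-task results safe.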

I waited a short while for all the cluster nodes to initialise fully, then ran the code on the cluster with this command.

$  ray exec ray_test.yaml 'python3 ~/ray_test.py'

Here is my output.

(base) tom@tpr-desktop:/mnt/c/Users/thoma$ ray exec ray_test2.yaml 'python3 ~/primes_ray.py'
2025-11-01 13:44:22,983 INFO util.py:389 -- setting max workers for head node type to 0
Loaded cached provider configuration
If you experience issues with the cloud provider, try re-running the command with --no-config-cache.
Fetched IP: 52.213.155.130
Warning: Permanently added '52.213.155.130' (ED25519) to the list of known hosts.
2025-11-01 13:44:26,469 INFO worker.py:1832 -- Connecting to existing Ray cluster at address: 10.0.5.86:6379...
2025-11-01 13:44:26,477 INFO worker.py:2003 -- Connected to Ray cluster. View the dashboard at http://10.0.5.86:8265
nodes=6, CPUs~192, chunks=384
(autoscaler +2s) Tip: use `ray status` to view detailed cluster status. To disable these messages, set RAY_SCHEDULER_EVENTS=0.
(autoscaler +2s) No available node types can fulfill resource request {'CPU': 1.0}*160. Add suitable node types to this cluster to resolve this issue.
total=2897536, time=5.71s
Shared connection to 52.213.155.130 closed.

As you can see, the time taken to run on the cluster was just over 5 seconds. So, 5 worker nodes ran the same job in less than a third of the time it took on my local PC. Not too shabby.
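A quick back-of-the-envelope check of that claim, using the two timings quoted above (roughly 18 s locally versus 5.71 s on the cluster):

```python
# Rough speedup from the two quoted timings: ~18 s on the local desktop
# versus 5.71 s on the 6-node cluster.
local_s, cluster_s = 18.0, 5.71
speedup = local_s / cluster_s
print(f"speedup ~ {speedup:.1f}x")  # prints "speedup ~ 3.2x"
```

A 3.2x gain from 5 extra nodes is well short of linear scaling; per-task scheduling overhead and the fixed cost of connecting to the cluster eat into the headline number on a job this short.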

When you're finished with your cluster, please run the following Ray command to tear it down.

$ ray down -y ray_test.yaml

As I mentioned before, you should always double-check your account to ensure this command has worked as expected.

Summary

This article, the second in a two-part series, demonstrates how to run CPU-intensive Python code on cloud-based clusters using the Ray library. By spreading the workload across all available vCPUs, Ray delivers fast performance and short runtimes.

I described and showed how to create a cluster using a YAML file and how to use the Ray command-line interface to submit code for execution on the cluster.

Using AWS as an example platform, I took Ray Python code that had been running on my local PC and ran it, almost unchanged, on a 6-node EC2 cluster. This showed a significant performance improvement (roughly 3x) over the non-cluster run time.

Finally, I showed how to use the ray command-line tool to tear down the AWS cluster infrastructure Ray had created.

If you haven't already read the first article in this series, click on the link below to check it out.

Please note that, apart from being an occasional user of their services, I have no affiliation with AnyScale, AWS, or any other organisation mentioned in this article.

© 2024 Newsaiworld.com. All rights reserved.
