Single Node
If you are working on a single node via an OOD, ARE or Gadi PBS job, you can simply call ray.init()
within your Jupyter notebook or Python script to start a Ray runtime on the current host.
You can view the resources available to Ray via the function ray.cluster_resources().
import ray
ray.init()
print(ray.cluster_resources())
The above commands will print a message like the one below, indicating that 16 CPU cores, about 29 GB of memory and 1 node are available to the current local Ray cluster.
{'memory': 28808935835.0, 'object_store_memory': 14404467916.0, 'CPU': 16.0, 'node:10.0.128.152': 1.0}
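The memory figures in this dictionary are reported in bytes, and each "node:<IP>" entry represents one cluster node. As a quick illustration (plain Python applied to the sample output above; not part of the original page), you can convert the values into a more readable summary:

```python
# Sample output of ray.cluster_resources() from the single-node example above
resources = {
    'memory': 28808935835.0,
    'object_store_memory': 14404467916.0,
    'CPU': 16.0,
    'node:10.0.128.152': 1.0,
}

# Memory figures are reported in bytes; convert to GB for readability
memory_gb = resources['memory'] / 1e9

# Each key starting with 'node:' corresponds to one node in the cluster
num_nodes = sum(1 for key in resources if key.startswith('node:'))

print(f"{resources['CPU']:.0f} CPU cores, {memory_gb:.0f} GB memory, {num_nodes} node(s)")
```

This reproduces the "16 CPU core, 29GB memory and 1 node" summary stated above.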
Multiple Nodes
If you need a larger scale of Ray cluster across multiple nodes, you can start a pre-defined Ray cluster and then connect it in the Jupyter notebook or python script.
An easy way to set up the pre-defined Ray cluster is to utilise dk92 module "gadi_jupyterlab".
Launching a predefined Ray cluster
Gadi
In your PBS job script, load the NCI-data-analysis/2022.06 module together with the gadi_jupyterlab module, then run the script "jupyter.ini.sh -R" to set up the pre-defined Ray cluster. By default it starts one Ray worker on each CPU core of all compute nodes available in the job. You can also specify the number of Ray workers per node via the "-p" flag. For example, in a job requesting 96 cores (2 nodes) in the "normal" queue, you can set up a pre-defined Ray cluster with 12 Ray workers per node (24 Ray workers in total) via the following command
jupyter.ini.sh -R -p 12
An example of a full job script requesting 96 Ray workers (one worker per core, i.e. without the "-p" flag) is given below
#!/bin/bash
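The job script in the source is truncated after the shebang line. A sketch of what such a script might look like is given below; the queue, walltime, memory, storage flags and project code are illustrative placeholders, and the module versions follow those mentioned above.

```shell
#!/bin/bash
#PBS -q normal
#PBS -P <your_project>          # replace with your NCI project code
#PBS -l ncpus=96                # 2 nodes in the "normal" queue
#PBS -l mem=380GB               # illustrative value; adjust to your needs
#PBS -l walltime=01:00:00
#PBS -l storage=gdata/dk92+gdata/<your_project>
#PBS -l wd

# Load the data-analysis environment and the JupyterLab helper module
module use /g/data/dk92/apps/Modules/modulefiles
module load NCI-data-analysis/2022.06 gadi_jupyterlab/22.06

# Start the pre-defined Ray cluster: one Ray worker per CPU core (96 in total)
jupyter.ini.sh -R

# Run the Python script that connects to the cluster
python3 script.py
```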
In "script.py", you connect to the pre-defined Ray cluster by calling ray.init() with the address argument set to "auto".
import ray
ray.init(address="auto")
print(ray.cluster_resources())
The above script will print a message like the following
{'object_store_memory': 114296048025.0, 'CPU': 96.0, 'memory': 256690778727.0, 'node:10.6.48.66': 1.0, 'node:10.6.48.67': 1.0}
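To confirm that work is actually spread across both nodes, you could run a small check like the one below. This sketch is not part of the original page; it uses Ray's @ray.remote tasks together with Python's standard socket module to report which hosts execute the tasks.

```python
import socket
import ray

# Connect to the pre-defined multi-node Ray cluster
ray.init(address="auto")

@ray.remote
def get_hostname():
    # Runs on whichever Ray worker picks up the task
    return socket.gethostname()

# Launch many small tasks and collect the distinct hosts that ran them
hosts = set(ray.get([get_hostname.remote() for _ in range(100)]))
print(hosts)  # with a 2-node cluster, both compute nodes should appear
```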
ARE
First of all, you need to request multiple nodes in an ARE JupyterLab session and specify the proper storage projects.
Then click the "Advanced options" button, put "/g/data/dk92/apps/Modules/modulefiles" in the "Module directories" field, and load both the NCI-data-analysis/2022.06 and gadi-jupyterlab/22.06 modules in the "Modules" field. In the "Pre-script" field, fill in the command "jupyterlab.ini.sh -R" to set up the pre-defined Ray cluster.
Click the "Open JupyterLab" button to open the JupyterLab session as soon as it becomes available.
In the Jupyter notebook, use the same lines as on Gadi to connect to the pre-defined Ray cluster and print the resource information.
import ray
ray.init(address="auto")
print(ray.cluster_resources())
You will see that 96 CPU cores and two nodes are used by the cluster, as expected.
Monitoring Ray status
You can easily monitor the Ray status via the command "ray status". Open a terminal in either a JupyterLab session or a Gadi PBS interactive job and type in the following command
$ watch ray status
The Ray status will keep updating every 2 seconds
Every 2.0s: ray status gadi-cpu-clx-1146.gadi.nci.org.au: Mon May 23 15:27:27 2022