
For example, in a PBS job requesting 96 cores in the normal queue (i.e. 2 worker nodes), you could set up a Ray cluster with different configurations:

$ jupyter.ini.sh -R         # set up a Ray cluster with 48 Ray workers per node,
                            # 96 total Ray workers, 1 thread per Ray worker.
$ jupyter.ini.sh -R -p 12   # set up a Ray cluster with 12 Ray workers per node,
                            # 24 total Ray workers, 1 thread per Ray worker.

Note: a Ray worker always contains exactly 1 thread, so the "-t" flag is invalid when setting up a Ray cluster.

You can also specify the "-G" flag together with "-R" when running jupyter.ini.sh to set up a Ray cluster that uses Gadi GPU devices. By default, the number of Ray workers equals the number of GPU devices requested in the PBS job, and each worker has 1 thread.

$ jupyter.ini.sh -R -G      # set up a Ray cluster utilising GPU devices.
                            # The Ray cluster contains both CPU and GPU
                            # resources allocated within the PBS job.

Note: You can also append the "-J" flag to the above commands to set up a JupyterLab session.

Once the cluster is running, you can connect to it from a Python script as below:

import ray 
ray.init(address='auto')