...
Note that to run Dask on Gadi's GPU nodes you should use RAPIDS; the cases above work for CPUs only.
You can start a JupyterLab session by clicking the "JupyterLab" button after logging in to the ARE website.
Local Dask Cluster
In the JupyterLab launch form, you can request resources on a single node as below.
...
Click the "Open JupyterLab" button to enter a JupyterLab session on a single node. You can use the resources of the local node hosting the JupyterLab session to start a local Dask cluster on the fly.
After importing the necessary Dask modules, the essential lines needed in the Jupyter notebook are:

```python
from dask.distributed import Client, LocalCluster

cluster = LocalCluster()
client = Client(cluster)
```
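With no arguments, `LocalCluster` auto-detects the available cores and memory. As a minimal sketch, you can also size the cluster explicitly; the figures below (2 workers, 1 thread each, 3 GB per worker) are illustrative assumptions and should be chosen to match the resources you requested in the JupyterLab launch form.

```python
from dask.distributed import Client, LocalCluster

# Illustrative sizing: 2 single-threaded workers, ~3 GB memory each.
# Adjust these to match the resources of your JupyterLab session.
cluster = LocalCluster(n_workers=2, threads_per_worker=1, memory_limit="3GB")
client = Client(cluster)

# The scheduler reports the workers it has started.
print(len(client.scheduler_info()["workers"]))
```

Explicit sizing avoids oversubscribing the node when other processes share the same JupyterLab session.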
Once it is set up, you can check the configuration with the Jupyter command `print(client)`, as shown below.
```python
print(client)
# <Client: 'tcp://127.0.0.1:41179' processes=2 threads=2, memory=5.62 GiB>
```
This output shows a local Dask cluster (reachable via the node-local loopback interface 127.0.0.1) running on 2 CPU cores, matching the resources requested when setting up the JupyterLab session in this example (2 CPU cores and 6 GB of memory).
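As a quick check that the cluster is actually doing the work, you can run a small parallel computation through the client. The sketch below is a self-contained example with arbitrary array sizes, not part of the setup above:

```python
import dask.array as da
from dask.distributed import Client, LocalCluster

cluster = LocalCluster()
client = Client(cluster)

# A lazy 4000x4000 array split into 16 chunks; nothing is computed yet.
x = da.random.random((4000, 4000), chunks=(1000, 1000))

# .compute() executes the task graph on the local cluster's workers.
mean = x.mean().compute()
print(mean)  # close to 0.5 for uniformly distributed random data
```

While the computation runs, the Dask dashboard linked from `client` shows the tasks being distributed across the workers.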
If you need a Dask cluster spanning multiple compute nodes, please refer to the pre-defined Dask cluster.
...