I am trying to set the resources for workers as per the docs here, but on a setup that uses Dask Gateway. Specifically, I'd like to be able to follow the answer to this question, but using Dask Gateway.
I haven't been able to find a reference to worker resources in the ClusterConfig options, and I tried the following (as per this answer), which doesn't seem to work:
def set_resources(dask_worker):
    dask_worker.set_resources(task_limit=1)
    return dask_worker.available_resources, dask_worker.total_resources

client.run(set_resources)
# output from a 1 worker cluster
> {'tls://255.0.91.211:39302': ({}, {})}
# checking info known by scheduler
cluster.scheduler_info
> {'type': 'Scheduler',
   'id': 'Scheduler-410438c9-6b3a-494d-974a-52d9e9fss121',
   'address': 'tls://255.0.44.161:8786',
   'services': {'dashboard': 8787, 'gateway': 8788},
   'started': 1632434883.9022279,
   'workers': {'tls://255.0.92.232:39305': {'type': 'Worker',
     'id': 'dask-worker-f95c163cf41647c6a6d85da9efa9919b-wvnf6',
     'host': '255.0.91.211',
     'resources': {},  #### still an empty dict
     'local_directory': '/home/jovyan/dask-worker-space/worker-ir8tpkz_',
     'name': 'dask-worker-f95c157cf41647c6a6d85da9efa9919b-wvnf6',
     'nthreads': 4,
     'memory_limit': 6952476672,
     'services': {'dashboard': 8787},
     'nanny': 'tls://255.0.92.232:40499'}}}
How can this be done, either when the cluster is created via the config.yaml of the Dask Gateway helm chart (ideally as a field in the cluster options that a user can change!), or after the workers are already up and running?
I've found that a way to specify this, at least on Kubernetes, is through KubeClusterConfig.worker_extra_container_config. This is my yaml snippet for a working configuration (specifically, this is in my config for the daskhub helm deploy):
dask-gateway:
  gateway:
    backend:
      worker:
        extraContainerConfig:
          env:
            - name: DASK_DISTRIBUTED__WORKER__RESOURCES__TASKSLOTS
              value: "1"
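With this in place, the resource should be visible from the client once the workers start. A minimal check (names and output shape are illustrative; client is assumed to be connected to the running Gateway cluster):
def get_resources(dask_worker):
    return dask_worker.available_resources, dask_worker.total_resources

client.run(get_resources)
# should now report a non-empty resource dict per worker instead of ({}, {}),
# e.g. > {'tls://255.0.92.232:39305': ({'taskslots': 1.0}, {'taskslots': 1.0})}
# (note the lowercased key: dask's config loader lowercases env-var names)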
An option to set worker resources isn't exposed in the cluster options, and isn't explicitly exposed in KubeClusterConfig. The specific format for the environment variable is described here. Resource environment variables need to be set before the dask worker process starts; I found it doesn't work when set via KubeClusterConfig.environment.
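For reference, dask builds its configuration from DASK_-prefixed environment variables, with double underscores mapping to nested keys, so the variable above is equivalent to a distributed.worker.resources entry in the worker's dask config. A self-contained sketch of that mapping (simulating the variable locally; in the real deployment the helm chart injects it before the worker process starts):
import os
import dask

# illustrative only: set the same variable the chart injects, then re-read config
os.environ["DASK_DISTRIBUTED__WORKER__RESOURCES__TASKSLOTS"] = "1"
dask.config.refresh()  # re-collects config, picking up DASK_* env vars

print(dask.config.get("distributed.worker.resources", {}))
# prints something like {'taskslots': 1}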
Using this, I am able to run multithreaded numpy (np.dot) with MKL in a dask worker container that has been given 4 cores: I see 400% CPU usage and only one task assigned to each worker.
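For completeness, the one-task-per-worker behavior only applies to tasks that request the resource at submission time. A sketch of how I submit such work, assuming client is the Gateway cluster's client and big_dot is a stand-in for the real workload (the resource name must match what the workers registered; verify it with the check above):
import numpy as np

def big_dot(n):
    a = np.random.default_rng(n).random((n, n))
    return np.dot(a, a).sum()  # MKL multithreads the dot inside a single task

RESOURCE = "taskslots"  # as registered on the workers; see the check above

# each task claims the worker's single slot, so at most one np.dot runs per
# worker at a time while MKL spreads it across that worker's 4 cores
futures = client.map(big_dot, range(4000, 4008), resources={RESOURCE: 1})
results = client.gather(futures)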