Best practices in setting number of dask workers

I am a bit confused by the different terms used in dask and dask.distributed when setting up workers on a cluster.

The terms I came across are: thread, process, processor, node, worker, scheduler.

My question is how to set the number of each, and whether there is a strict or recommended relationship between any of them. For example:

  • Should I have 1 worker per node, with n processes for the n cores on the node?
  • Are threads and processes the same concept? In dask-mpi I have to set nthreads, but they show up as processes in the client.

Any other suggestions?
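
For reference, this is roughly how I set nthreads with dask-mpi (a minimal sketch; the exact launch command depends on the MPI setup):

    # Launched with e.g. `mpirun -np 6 python script.py`.
    # With dask-mpi, rank 0 becomes the scheduler, rank 1 runs this
    # client code, and the remaining ranks become workers.
    from dask_mpi import initialize
    from dask.distributed import Client

    initialize(nthreads=4)  # the nthreads setting referred to above

    client = Client()  # connects to the scheduler started by initialize()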

asked Jun 29 '18 by kristofarkas

1 Answer

By "node" people typically mean a physical or virtual machine. That node can run several programs or processes at once (much like how my computer can run a web browser and text editor at once). Each process can parallelize within itself with many threads. Processes have isolated memory environments, meaning that sharing data within a process is free, while sharing data between processes is expensive.

Typically things work best on larger nodes (say, 36 cores) if you cut them up into a few processes, each of which has several threads. You want the number of processes times the number of threads to equal the number of cores. So, for example, you might do any of the following on a 36-core machine (see the sketch after this list):

  • Four processes with nine threads each
  • Twelve processes with three threads each
  • One process with thirty-six threads
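
For a single large machine, here is a minimal sketch of the first option using dask.distributed's LocalCluster (the other options just change the two numbers):

    # Four worker processes with nine threads each (4 * 9 = 36 cores).
    # Alternatives: n_workers=12, threads_per_worker=3
    #           or  n_workers=1,  threads_per_worker=36
    from dask.distributed import Client, LocalCluster

    cluster = LocalCluster(n_workers=4, threads_per_worker=9)
    client = Client(cluster)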

Typically one decides between these choices based on the workload. The difference comes down to Python's Global Interpreter Lock (GIL), which limits parallelism for pure-Python code.

If you are working mostly with NumPy, Pandas, Scikit-Learn, or other numerical programming libraries in Python, then you don't need to worry about the GIL (these libraries release it during computation), and you probably want to prefer a few processes with many threads each. This helps because all of the data lives in the same process, so it can move freely between your cores. However, if you're doing mostly pure-Python programming, like dealing with text data, dictionaries/lists/sets, and doing most of your computation in tight Python for loops, then you'll want to prefer many processes with few threads each. This incurs extra communication costs, but lets you bypass the GIL.

In short, if you're using mostly numpy/pandas-style data, try to get at least eight threads or so in a process. Otherwise, maybe go for only two threads in a process.
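
As a rough sketch of that rule of thumb (the numbers are illustrative, assuming the 36-core machine from above):

    from dask.distributed import Client

    # Mostly numpy/pandas (GIL-releasing): few processes, many threads.
    client = Client(n_workers=4, threads_per_worker=9)

    # Mostly pure Python (GIL-bound): many processes, few threads.
    # client = Client(n_workers=18, threads_per_worker=2)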

answered Sep 18 '22 by MRocklin