 

How does the "number of workers" parameter in PyTorch dataloader actually work?

  1. If num_workers is 2, does that mean it will put 2 batches in RAM and send 1 of them to the GPU, or does it put 3 batches in RAM and then send 1 of them to the GPU?
  2. What actually happens when the number of workers is higher than the number of CPU cores? I tried it and it worked fine, but how does it work? (I thought the maximum number of workers I could choose was the number of cores.)
  3. If I set num_workers to 3 and during training there are no batches in memory for the GPU, does the main process wait for its workers to read the batches, or does it read a single batch itself (without waiting for the workers)?
floyd asked Jan 01 '19

People also ask

What is number of workers in DataLoader PyTorch?

num_workers tells the DataLoader instance how many sub-processes to use for data loading. If num_workers is zero (the default), the GPU has to wait for the CPU to load data. Theoretically, the greater the num_workers, the more efficiently the CPU loads data and the less the GPU has to wait.


How does PyTorch data loader work?

Data loader. Combines a dataset and a sampler, and provides an iterable over the given dataset. The DataLoader supports both map-style and iterable-style datasets with single- or multi-process loading, customizing loading order and optional automatic batching (collation) and memory pinning.

What is batch size in DataLoader PyTorch?

Batch size is the number of samples processed before the model is updated. The DataLoader yields the dataset in chunks of batch_size samples; it need not equal the number of samples in the training data.


1 Answer

  1. When num_workers > 0, only those worker processes retrieve data; the main process doesn't. So with num_workers=2 you have at most 2 workers simultaneously putting data into RAM, not 3.
  2. A CPU can usually run on the order of 100 processes without trouble, and these worker processes aren't special in any way, so having more workers than CPU cores is fine. But is it efficient? That depends on how busy your CPU cores are with other tasks, the speed of the CPU, the speed of your hard disk, etc. In short, it's complicated, so setting num_workers to the number of cores is a good rule of thumb, nothing more.
  3. No. Remember that the DataLoader doesn't just return whatever happens to be available in RAM right now; it uses a batch_sampler to decide which batch to return next. Each batch is assigned to a worker, and the main process will wait until the desired batch has been retrieved by its assigned worker.
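Point 1 can be verified directly. Below is a minimal sketch (assuming PyTorch is installed; the Dataset class and its name are illustrative) in which each sample records which process loaded it, using `torch.utils.data.get_worker_info()`:

```python
import torch
from torch.utils.data import Dataset, DataLoader, get_worker_info

class WhoLoadedMe(Dataset):
    """Each sample records the id of the worker that loaded it (-1 = main process)."""
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        info = get_worker_info()  # None when loading happens in the main process
        worker_id = -1 if info is None else info.id
        return torch.tensor([idx, worker_id])

if __name__ == "__main__":
    # num_workers=0: the main process loads every sample, so every id is -1.
    for batch in DataLoader(WhoLoadedMe(), batch_size=4, num_workers=0):
        print(batch[:, 1].tolist())

    # num_workers=2: every sample is loaded by worker 0 or worker 1,
    # never by the main process.
    for batch in DataLoader(WhoLoadedMe(), batch_size=4, num_workers=2):
        print(batch[:, 1].tolist())
```

With num_workers=2 the printed worker ids are only 0s and 1s, confirming that the main process itself never calls `__getitem__`.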

Lastly, to clarify: it isn't the DataLoader's job to send anything directly to the GPU; you explicitly call cuda() (or .to(device)) for that.

EDIT: Don't call cuda() inside the Dataset's __getitem__() method; see @psarka's comment for the reasoning.

Shihab Shahriar Khan answered Sep 20 '22