Caffe: how to choose the maximum available batch size that can fit in memory?

I have experienced some problems due to small GPU memory (1 GB). For now I choose batch_size by trial and error, and it seems that even when the memory size printed in the log by the line Memory required for data: is less than 1 GB, training can still fail.

So my questions are:

  1. How can I automatically choose the maximum available batch size that fits in GPU memory?
  2. Is it always better to have a bigger batch_size?
  3. How do I calculate the peak memory needed for training, and for the forward pass when deploying the network?

UPDATE: I also checked the code, but I'm not sure what top_vecs_ is.

asked Oct 31 '22 by mrgloom

1 Answer

Even if the memory size printed in the log by the line Memory required for data is less than your total GPU memory, training can still fail, because other programs are using some of your GPU memory. Under Linux you can use the nvidia-smi command to check the stats. For me, the Ubuntu graphical environment uses 97 MB.
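For example, here is one way to read the free memory from Python (the nvidia-smi query flags are standard, though the exact output can vary between driver versions):

    import subprocess

    # Ask nvidia-smi for used/total memory in MiB, one line per GPU.
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"])
    used, total = map(int, out.decode().splitlines()[0].split(","))
    print("GPU 0: %d MiB used of %d MiB total, %d MiB free"
          % (used, total, total - used))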

  1. There is no way to tell Caffe to do this automatically, though you can script the search yourself (see the sketch after this list).
  2. Yes, for training: a bigger batch processes more data per pass, and the network will converge in fewer epochs, because each SGD iteration produces a result closer to full gradient descent (GD). For deployment it's not that critical.
  3. This can give you a general understanding of how to calculate it: http://cs231n.github.io/convolutional-networks/ (a rough worked example follows below).
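Regarding point 1, you can automate the trial-and-error externally. A minimal sketch, assuming the caffe binary is on your PATH, a solver.prototxt with a small max_iter, and a net template train_val.template in which the data layer's batch_size is left as a $BATCH placeholder (the file names and placeholder are hypothetical). Caffe aborts the whole process on a CUDA out-of-memory error, so the subprocess exit code tells you whether the batch fit:

    import subprocess

    def write_prototxt(batch_size):
        # Hypothetical template-filling helper: substitute the batch size
        # into a copy of the net definition.
        with open("train_val.template") as f:
            proto = f.read().replace("$BATCH", str(batch_size))
        with open("train_val.prototxt", "w") as f:
            f.write(proto)

    def fits(batch_size):
        # Run a few training iterations in a child process; a CUDA OOM
        # makes Caffe crash, giving a nonzero exit code.
        write_prototxt(batch_size)
        ret = subprocess.call(["caffe", "train", "-solver", "solver.prototxt"])
        return ret == 0

    # Double the batch size until training fails, then binary-search
    # the boundary between the last size that fit and the first that didn't.
    lo, hi = 1, 2
    while fits(hi):
        lo, hi = hi, hi * 2
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if fits(mid):
            lo = mid
        else:
            hi = mid
    print("largest batch size that fits:", lo)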
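And for point 3, a back-of-envelope sketch in the spirit of those notes. The multipliers are rough assumptions (activations counted twice because they are kept for the backward pass; parameters counted three times for weight + gradient + SGD momentum history), and the layer shapes are hypothetical:

    # Peak-memory estimate for 32-bit float training.
    FLOAT_BYTES = 4

    def estimate_bytes(batch, activation_shapes, param_counts):
        # Activations are stored for the backward pass: count them twice.
        acts = sum(batch * c * h * w for (c, h, w) in activation_shapes)
        # Each parameter needs weight + gradient + momentum history.
        params = sum(param_counts)
        return FLOAT_BYTES * (2 * acts + 3 * params)

    # Example: a single 3x3 conv layer, 3 -> 64 channels, 224x224 input.
    activations = [(3, 224, 224), (64, 224, 224)]  # data blob, conv output
    params = [64 * 3 * 3 * 3 + 64]                 # weights + biases
    print("%.1f MB" % (estimate_bytes(32, activations, params) / 2.0 ** 20))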
answered Nov 03 '22 by taarraas