I have a shared machine with 64 cores on which I run a big pipeline of Keras functions. The problem is that Keras automatically uses all available cores, and on a shared machine I can't allow that.
I use Python, and I want to run 67 neural networks in a for loop while using only half of the available cores.
I can't find any way of limiting the number of cores in Keras... Do you have any clue?
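As an aside, "half of the available cores" can be computed portably with the standard library rather than hard-coded; this is a small sketch (the variable names are illustrative, not from the question):

```python
import os

# Total logical cores visible to Python (64 on the machine described above).
total_cores = os.cpu_count()

# Use half of them for the thread pools, but never fewer than one.
n_threads = max(1, total_cores // 2)
print(n_threads)
```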
As @Yu-Yang suggested, I used these lines before each call to fit:

from keras import backend as K

# Cap both TensorFlow thread pools at 32 threads (half of the 64 cores).
K.set_session(K.tf.Session(config=K.tf.ConfigProto(
    intra_op_parallelism_threads=32,
    inter_op_parallelism_threads=32)))
Check the CPU usage with htop to confirm that only half of the cores are busy.
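Put together, the per-fit reset described above might look like the sketch below. This assumes TF 1.x with standalone Keras, matching the API used in this answer; build_model, X, and y are hypothetical placeholders, not part of the original:

```python
from keras import backend as K

# Hypothetical loop over the 67 networks mentioned in the question.
for i in range(67):
    # Re-create the session with capped thread pools before each fit.
    K.set_session(K.tf.Session(config=K.tf.ConfigProto(
        intra_op_parallelism_threads=32,
        inter_op_parallelism_threads=32)))
    model = build_model()  # placeholder for your model-construction code
    model.fit(X, y)
```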
As mentioned in this solution (https://stackoverflow.com/a/54832345/5568660), if you are using TensorFlow or tensorflow-gpu directly, you can build the same config yourself and feed it to the session:

import tensorflow as tf

config = tf.ConfigProto(intra_op_parallelism_threads=32,
                        inter_op_parallelism_threads=32,
                        allow_soft_placement=True)
session = tf.Session(config=config)
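Note that ConfigProto and Session were removed from the top-level API in TensorFlow 2. If you are on TF 2.x, the equivalent thread limits are set once at startup, before any op creates the runtime (a sketch, assuming TF 2.x is installed):

```python
import tensorflow as tf

# TF 2.x replacement for the ConfigProto thread settings above.
# Must be called before TensorFlow initializes its thread pools.
tf.config.threading.set_intra_op_parallelism_threads(32)
tf.config.threading.set_inter_op_parallelism_threads(32)
```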