It is currently possible to specify which CPU or GPU to use with the tf.device(...) function for specific ops, but is there any way to specify a particular core of a CPU?
TensorFlow supports running computations on a variety of device types, including CPUs and GPUs. Running TensorFlow on multicore CPUs can be an attractive option, e.g. where a workflow is dominated by I/O and faster computational hardware has little impact on runtime, or simply where no GPUs are available.
TensorFlow operations, also known as Ops, are nodes that perform computations on or with Tensor objects. After computation they return zero or more tensors, which other Ops later in the graph can consume.
There's no API for pinning ops to a particular core at present, though this would make a good feature request. You could approximate this functionality by creating multiple CPU devices, each with a single-threaded threadpool, but this isn't guaranteed to maintain the locality of a core-pinning solution:
with tf.device("/cpu:4"):
    # ...

with tf.device("/cpu:7"):
    # ...

with tf.device("/cpu:0"):
    # ...

# Register 8 virtual CPU devices ("/cpu:0" through "/cpu:7"), each backed by
# a single-threaded threadpool, so each device runs its ops on one thread.
config = tf.ConfigProto(device_count={"CPU": 8},
                        inter_op_parallelism_threads=1,
                        intra_op_parallelism_threads=1)
sess = tf.Session(config=config)
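Separately from TensorFlow's device placement, on Linux you can constrain which physical cores the whole process is allowed to run on using the standard-library os.sched_setaffinity. Combined with single-threaded device threadpools as above, this gets closer to real core pinning, though the OS still chooses which allowed core each thread lands on. A minimal sketch (Linux-only; the core IDs 0 and 1 are arbitrary examples, not anything TensorFlow-specific):

```python
import os

# Linux-only: restrict the current process (pid 0 = this process) to
# cores 0 and 1. Threads spawned afterwards inherit this affinity mask,
# so TensorFlow's threadpools will be scheduled only on those cores.
os.sched_setaffinity(0, {0, 1})

# Inspect the mask the scheduler will now honour for this process.
allowed = sorted(os.sched_getaffinity(0))
print(allowed)
```

Note that this must run before the session's threadpools are created, and it applies to the whole process rather than to individual ops.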