Specify either CPU or GPU for multiple models in TensorFlow Java

I am using the TensorFlow Java API (1.8.0) to load multiple models (in different sessions). The models are loaded from .pb files using the SavedModelBundle.load(...) method; those .pb files were produced by saving Keras models.

Let's say that I want to load 3 models A, B, and C. To do that, I implemented a Java Model class:

public class Model implements Closeable {

    private String inputName;
    private String outputName;
    private Session session;
    private int inputSize;

    public Model(String modelDir, String input_name, String output_name, int inputSize) {
        SavedModelBundle bundle = SavedModelBundle.load(modelDir, "serve");
        this.inputName = input_name;
        this.outputName = output_name;
        this.inputSize = inputSize;
        this.session = bundle.session();
    }

    public void close() {
        session.close();
    }

    public Tensor predict(Tensor t) {
        return session.runner().feed(inputName, t).fetch(outputName).run().get(0);
    }
}

With this class I can then easily instantiate 3 Model objects corresponding to my A, B and C models and make predictions with all three in the same Java program. I also noticed that if a GPU is available, all 3 models are loaded onto it.
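For illustration, instantiating and using the three models might look like the sketch below. The model directories, tensor names, and input sizes are placeholders, not values from the question:

```java
// Hypothetical usage of the Model class above.
// Paths, tensor names, and sizes are made-up placeholders.
try (Model a = new Model("/models/A", "input", "output", 224);
     Model b = new Model("/models/B", "input", "output", 224);
     Model c = new Model("/models/C", "input", "output", 224)) {

    Tensor input = makeInputTensor(); // placeholder: build your input Tensor here

    // Each prediction runs in that model's own Session.
    try (Tensor resultA = a.predict(input);
         Tensor resultB = b.predict(input);
         Tensor resultC = c.predict(input)) {
        // ... consume the results ...
    }
}
```

Since Model implements Closeable, try-with-resources ensures each Session is released when the block exits.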

However, I would like only model A to run on the GPU and force the other two to run on the CPU.

Reading the documentation and diving into the source code, I didn't find a way to do this. I tried defining a new ConfigProto that sets the visible devices to none and instantiating a new Session with the graph, but it didn't work (see the code below).

    public Model(String modelDir, String input_name, String output_name, int inputSize) {
      SavedModelBundle bundle = SavedModelBundle.load(modelDir, "serve");
      this.inputName = input_name;
      this.outputName = output_name;
      this.inputSize = inputSize;
      ConfigProto configProto = ConfigProto.newBuilder().setAllowSoftPlacement(false).setGpuOptions(GPUOptions.newBuilder().setVisibleDeviceList("").build()).build();
      this.session = new Session(bundle.graph(),configProto.toByteArray());
}

When I load the model, it still uses the available GPU. Do you have any solution to this problem?

Thank you for your answer.

Alex asked Jun 12 '18 14:06

People also ask

Can TensorFlow run on multiple GPU?

TensorFlow provides strong support for distributing deep learning across multiple GPUs. TensorFlow is an open source platform that you can use to develop and train machine learning and deep learning models. TensorFlow operations can leverage both CPUs and GPUs.

Does TensorFlow use GPU by default?

If a GPU is available, TensorFlow runs operations on it by default. You can control how TensorFlow uses CPUs and GPUs by logging operation placement, or by instructing TensorFlow to run certain operations in a specific "device context"—a CPU or a specific GPU, if there are multiple GPUs on the machine.

How do I use GPU in TensorFlow training?

If you would like to run TensorFlow on multiple GPUs, it is possible to construct a model in a multi-tower fashion and assign each tower to a different GPU.


2 Answers

The answers given above did not work for me. Using putDeviceCount("GPU", 0) makes TF use the CPU; this works in version 1.15.0. You can load the same model on both CPU and GPU, and if the GPU throws Resource exhausted: OOM when allocating tensor, use the CPU model to do the prediction.

ConfigProto configProtoCpu = ConfigProto.newBuilder()
        .setAllowSoftPlacement(true)
        .putDeviceCount("GPU", 0) // pretend no GPUs exist for this session
        .build();
SavedModelBundle modelCpu = SavedModelBundle.loader(modelPath).withTags("serve")
        .withConfigProto(configProtoCpu.toByteArray()).load();

ConfigProto configProtoGpu = ConfigProto.newBuilder()
        .setAllowSoftPlacement(true)
        .setGpuOptions(GPUOptions.newBuilder().setAllowGrowth(true).build())
        .build();
SavedModelBundle modelgpu = SavedModelBundle.loader(modelPath).withTags("serve")
        .withConfigProto(configProtoGpu.toByteArray()).load();
subbu answered Oct 27 '22 17:10


According to this issue, the new source code fixes this problem. Unfortunately, you will have to build from source following these instructions.

Then you can test:

ConfigProto configProto = ConfigProto.newBuilder()
        .setAllowSoftPlacement(true) // allow fewer GPUs than configured
        .setGpuOptions(GPUOptions.newBuilder().setPerProcessGpuMemoryFraction(0.01).build())
        .build();
SavedModelBundle bundle = SavedModelBundle.loader(modelDir).withTags("serve")
        .withConfigProto(configProto.toByteArray()).load();
Remzouz answered Oct 27 '22 18:10