 

TensorFlow Java Multi-GPU inference

I have a server with multiple GPUs and want to make full use of them during model inference inside a Java app. By default, TensorFlow grabs memory on all available GPUs but runs computation only on the first one.

I can think of three options to overcome this issue:

  1. Restrict device visibility at the process level using the CUDA_VISIBLE_DEVICES environment variable.

    That would require me to run several instances of the Java app and distribute traffic among them (a launcher sketch follows). Not a tempting idea.
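
    As a rough sketch of what that would look like (the PredictorServer entry point, jar name, and --port flag are hypothetical), one JVM could be spawned per GPU, each seeing exactly one device:

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    public class PerGpuLauncher {

        public static void main(String[] args) throws IOException {
            int numGpus = 4; // assumption: match your hardware

            List<Process> workers = new ArrayList<>();
            for (int gpu = 0; gpu < numGpus; gpu++) {
                ProcessBuilder pb = new ProcessBuilder(
                        "java", "-cp", "app.jar", "com.example.PredictorServer", // hypothetical entry point
                        "--port", String.valueOf(9000 + gpu));                   // hypothetical flag
                // Each worker process sees only its own GPU.
                pb.environment().put("CUDA_VISIBLE_DEVICES", String.valueOf(gpu));
                pb.inheritIO();
                workers.add(pb.start());
            }
        }
    }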

  2. Launch several sessions inside a single application and try to assign one device to each of them via ConfigProto:

    public class DistributedPredictor {

        private Predictor[] nested;
        private int[] counters;

        // ...

        public DistributedPredictor(String modelPath, int numDevices, int numThreadsPerDevice) throws IOException {
            nested = new Predictor[numDevices];
            counters = new int[numDevices];

            for (int i = 0; i < nested.length; i++) {
                nested[i] = new Predictor(modelPath, i, numDevices, numThreadsPerDevice);
            }
        }

        public Prediction predict(Data data) {
            int i = acquirePredictorIndex();
            Prediction result = nested[i].predict(data);
            releasePredictorIndex(i);
            return result;
        }

        // Pick the least-loaded predictor (device) and mark it busy.
        private synchronized int acquirePredictorIndex() {
            int i = argmin(counters);
            counters[i] += 1;
            return i;
        }

        private synchronized void releasePredictorIndex(int i) {
            counters[i] -= 1;
        }

        // Index of the smallest element, i.e. the least-loaded device.
        private static int argmin(int[] xs) {
            int best = 0;
            for (int i = 1; i < xs.length; i++) {
                if (xs[i] < xs[best]) {
                    best = i;
                }
            }
            return best;
        }
    }
    
    
    public class Predictor {

        private Session session;

        public Predictor(String modelPath, int deviceIdx, int numDevices, int numThreadsPerDevice) throws IOException {

            // Try to pin this session to a single device and avoid grabbing
            // all GPU memory up front.
            GPUOptions gpuOptions = GPUOptions.newBuilder()
                    .setVisibleDeviceList("" + deviceIdx)
                    .setAllowGrowth(true)
                    .build();

            ConfigProto config = ConfigProto.newBuilder()
                    .setGpuOptions(gpuOptions)
                    .setInterOpParallelismThreads(numDevices * numThreadsPerDevice)
                    .build();

            byte[] graphDef = Files.readAllBytes(Paths.get(modelPath));
            Graph graph = new Graph();
            graph.importGraphDef(graphDef);

            this.session = new Session(graph, config.toByteArray());
        }

        public Prediction predict(Data data) {
            // ...
        }
    }
    

    This approach seems to work fine at a glance. However, sessions occasionally ignore the setVisibleDeviceList option, all land on the first device, and cause an out-of-memory crash.

  3. Build the model in a multi-tower fashion in Python using tf.device() specifications. On the Java side, give different Predictors different towers inside a shared session.

    Feels cumbersome and idiomatically wrong to me (a rough sketch of the Java side follows).
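
    For illustration only, here's a minimal sketch of what the Java side might look like, assuming the Python export named each tower's endpoints tower_<i>/input and tower_<i>/output (hypothetical names):

    // One shared Graph/Session; each logical predictor targets its own tower.
    Graph graph = new Graph();
    graph.importGraphDef(Files.readAllBytes(Paths.get(modelPath)));
    Session shared = new Session(graph);

    // Run inference on the tower pinned to device `deviceIdx`.
    Tensor result = shared.runner()
            .feed(String.format("tower_%d/input", deviceIdx), inputTensor)
            .fetch(String.format("tower_%d/output", deviceIdx))
            .run()
            .get(0);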

UPDATE: As @ash proposed, there's yet another option:

  4. Assign an appropriate device to each operation of the existing graph by modifying its definition (graphDef).

    To get it done, one can adapt the code from option 2:

    public class Predictor {

        private Session session;

        public Predictor(String modelPath, int deviceIdx, int numDevices, int numThreadsPerDevice) throws IOException {

            byte[] graphDef = Files.readAllBytes(Paths.get(modelPath));
            graphDef = setGraphDefDevice(graphDef, deviceIdx);

            Graph graph = new Graph();
            graph.importGraphDef(graphDef);

            // Soft placement lets ops without a GPU kernel fall back to the CPU.
            ConfigProto config = ConfigProto.newBuilder()
                    .setAllowSoftPlacement(true)
                    .build();

            this.session = new Session(graph, config.toByteArray());
        }

        // Rewrite the serialized GraphDef so every node is pinned to one device.
        private static byte[] setGraphDefDevice(byte[] graphDef, int deviceIdx) throws InvalidProtocolBufferException {
            String deviceString = String.format("/gpu:%d", deviceIdx);

            GraphDef.Builder builder = GraphDef.parseFrom(graphDef).toBuilder();
            for (int i = 0; i < builder.getNodeCount(); i++) {
                builder.getNodeBuilder(i).setDevice(deviceString);
            }
            return builder.build().toByteArray();
        }

        public Prediction predict(Data data) {
            // ...
        }
    }
    

    Just like the other approaches mentioned, this one doesn't free me from manually distributing data among devices. But at least it works stably and is comparatively easy to implement. Overall, this looks like an (almost) normal technique; a short usage sketch follows.
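
    For completeness, a minimal usage sketch tying this Predictor together with the DistributedPredictor from option 2 (the model path and pool sizes are placeholders):

    // One Predictor per GPU; DistributedPredictor handles least-loaded dispatch.
    DistributedPredictor predictor =
            new DistributedPredictor("/path/to/frozen_graph.pb", /*numDevices=*/4, /*numThreadsPerDevice=*/2);

    Prediction prediction = predictor.predict(data);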

Is there an elegant way to do such a basic thing with the TensorFlow Java API? Any ideas would be appreciated.

Asked Dec 13 '17 by Alexander Lutsenko




2 Answers

In Python it can be done as follows:

def get_frozen_graph(graph_file):
    """Read Frozen Graph file from disk."""
    with tf.gfile.GFile(graph_file, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    return graph_def

# Pin the first model's operations to GPU 1.
trt_graph1 = get_frozen_graph('/home/ved/ved_1/frozen_inference_graph.pb')

with tf.device('/gpu:1'):
    [tf_input_l1, tf_scores_l1, tf_boxes_l1, tf_classes_l1,
     tf_num_detections_l1, tf_masks_l1] = tf.import_graph_def(
        trt_graph1,
        return_elements=['image_tensor:0', 'detection_scores:0',
                         'detection_boxes:0', 'detection_classes:0',
                         'num_detections:0', 'detection_masks:0'])

tf_sess1 = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))

# Pin the second model's operations to GPU 0.
trt_graph2 = get_frozen_graph('/home/ved/ved_2/frozen_inference_graph.pb')

with tf.device('/gpu:0'):
    [tf_input_l2, tf_scores_l2, tf_boxes_l2, tf_classes_l2,
     tf_num_detections_l2] = tf.import_graph_def(
        trt_graph2,
        return_elements=['image_tensor:0', 'detection_scores:0',
                         'detection_boxes:0', 'detection_classes:0',
                         'num_detections:0'])

tf_sess2 = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
Answered Oct 17 '22 by Vedanshu


In short: There is a workaround, where you end up with one session per GPU.

Details:

The general flow is that the TensorFlow runtime respects the devices specified for operations in the graph. If no device is specified for an operation, then it "places" it based on some heuristics. Those heuristics currently result in "place operation on GPU:0 if GPUs are available and there is a GPU kernel for the operation" (Placer::Run in case you're interested).

What you're asking for is, I think, a reasonable feature request for TensorFlow: the ability to treat devices in the serialized graph as "virtual" ones to be mapped to a set of "physical" devices at run time, or alternatively a way to set a "default device". This feature does not currently exist; adding such an option to ConfigProto is something you may want to file a feature request for.

I can suggest a workaround in the interim. First, some commentary on your proposed solutions.

  1. Your first idea will surely work but, as you pointed out, is cumbersome.

  2. Setting visible_device_list in the ConfigProto doesn't quite work out, since that is actually a per-process setting and is ignored after the first session is created in the process. This is certainly not documented as well as it should be (and it is somewhat unfortunate that it appears in the per-session configuration). However, this explains why your suggestion here doesn't work and why you still see a single GPU being used.

  3. This could work.

Another option is to end up with different graphs (with operations explicitly placed on different GPUs), resulting in one session per GPU. Something like this can be used to edit the graph and explicitly assign a device to each operation:

// Parse the serialized GraphDef, pin every node to the given device, and re-serialize.
public static byte[] modifyGraphDef(byte[] graphDef, String device) throws Exception {
  GraphDef.Builder builder = GraphDef.parseFrom(graphDef).toBuilder();
  for (int i = 0; i < builder.getNodeCount(); ++i) {
    builder.getNodeBuilder(i).setDevice(device);
  }
  return builder.build().toByteArray();
}

After which you could create a Graph and Session per GPU using something like:

final int NUM_GPUS = 8;
// setAllowSoftPlacement: Just in case our device modifications were too aggressive
// (e.g., setting a GPU device on an operation that only has CPU kernels)
// setLogDevicePlacement: So we can see what happens.
byte[] config =
    ConfigProto.newBuilder()
        .setLogDevicePlacement(true)
        .setAllowSoftPlacement(true)
        .build()
        .toByteArray();
Graph graphs[] = new Graph[NUM_GPUS];
Session sessions[] = new Session[NUM_GPUS];
for (int i = 0; i < NUM_GPUS; ++i) {
  graphs[i] = new Graph();
  graphs[i].importGraphDef(modifyGraphDef(graphDef, String.format("/gpu:%d", i)));
  sessions[i] = new Session(graphs[i], config);    
}

Then use sessions[i] to execute the graph on GPU #i.
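
For example, a minimal round-robin dispatch on top of those sessions (the tensor names "input" and "output" are placeholders for whatever your graph actually uses):

// Requires java.util.concurrent.atomic.AtomicInteger.
final AtomicInteger next = new AtomicInteger();

Tensor predict(Tensor input) {
  int gpu = next.getAndIncrement() % NUM_GPUS;  // rotate across GPUs
  return sessions[gpu].runner()
      .feed("input", input)   // placeholder tensor name
      .fetch("output")        // placeholder tensor name
      .run()
      .get(0);
}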

Hope that helps.

Answered Oct 17 '22 by ash