
Keras error when predicting on multithreading

I'm trying to create four threads (each one with its own graph and model) that will run concurrently and issue predictions in the same way.

My thread code is something like:

        thread_locker.acquire()
        thread_graph = Graph()
        with thread_graph.as_default():
            thread_session = Session()
            with thread_session.as_default():
                # Model training: load an existing model if present,
                # otherwise build and compile a new one
                if not once_flag_raised:
                    try:
                        model = load_model('ten_step_forward_' + timeframe + '.h5')
                    except OSError:
                        input_layer = Input(shape=(X_train.shape[1], 17,))

                        lstm = Bidirectional(
                            LSTM(250),
                            merge_mode='concat')(input_layer)

                        pred = Dense(10)(lstm)
                        model = Model(inputs=input_layer, outputs=pred)
                        model.compile(optimizer='adam', loss='mean_squared_error')
                    once_flag_raised = True

                model.fit(X_train, y_train, epochs=10, batch_size=128)
                thread_locker.acquire()
                nn_info_dict['model'] = model
                nn_info_dict['sc'] = sc
                model.save('ten_step_forward_' + timeframe + '.h5')
                thread_locker.release()
        thread_locker.release()

        (....)
            # Prediction
            thread_locker.acquire()
            thread_graph = Graph()
            with thread_graph.as_default():
                thread_session = Session()
                with thread_session.as_default():
                    pred_data = model.predict(X_pred)
            thread_locker.release()

on each thread.
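For context, here is a minimal sketch of how the four workers might be launched. The names `thread_worker` and `timeframes` are hypothetical (not from the original code), and `thread_locker` is assumed to be a shared re-entrant lock, since the snippet above acquires it a second time while already holding it:

    import threading

    # Re-entrant lock: the worker acquires it again around the save step
    # while it is already held from the start of training.
    thread_locker = threading.RLock()

    def thread_worker(timeframe):
        # Body as in the snippet above: build a per-thread Graph/Session,
        # load or train the model, then run predictions.
        ...

    timeframes = ['tf_1', 'tf_2', 'tf_3', 'tf_4']  # hypothetical placeholder values
    workers = [threading.Thread(target=thread_worker, args=(tf,)) for tf in timeframes]
    for w in workers:
        w.start()
    for w in workers:
        w.join()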

I keep getting the following error ((number of threads - 1) times) when execution reaches the prediction part of the code:

ValueError: Tensor Tensor("dense_1/BiasAdd:0", shape=(?, 10), dtype=float32) is not an element of this graph.

My understanding is that one of the threads "claims" the TensorFlow backend along with its default Graph and Session.

Is there any way to work around that?

asked Nov 30 '25 by Panos Filianos

1 Answer

I have figured out what I was doing wrong. My thinking was right, but I shouldn't have recreated the Graph and Session in the prediction part. The bottom part of the code should simply be:

    thread_locker.acquire()
    with thread_graph.as_default():
        with thread_session.as_default():
            pred_data = model.predict(X_pred)
    thread_locker.release()
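Putting it together, here is a rough sketch of the corrected per-thread flow (the helper names are hypothetical, and it assumes the Keras-on-TensorFlow-1.x API used above): the Graph and Session are created once per thread, and the same objects are reused for prediction so the model's tensors stay in the graph they were built in.

    from tensorflow import Graph, Session

    def run_in_thread(build_or_load_model, X_train, y_train, X_pred):
        # Create the per-thread Graph and Session exactly once.
        thread_graph = Graph()
        with thread_graph.as_default():
            thread_session = Session()
            with thread_session.as_default():
                model = build_or_load_model()  # load_model(...) or the Input/LSTM/Dense stack
                model.fit(X_train, y_train, epochs=10, batch_size=128)

        # ... later, in the prediction part of the same thread ...
        thread_locker.acquire()
        with thread_graph.as_default():        # reuse the existing graph, do not create a new one
            with thread_session.as_default():
                pred_data = model.predict(X_pred)
        thread_locker.release()
        return pred_data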
answered Dec 02 '25 by Panos Filianos

