I'm trying to serve my model using Docker + TensorFlow Serving. However, due to restrictions with serving a model that uses an iterator (via make_initializable_iterator()), I had to split up my model.
I'm using gRPC to interface with my model on Docker. The problem is that my serialized prediction is about 10 MB, well over the default limit of about 4.1 MB. The error I'm getting is:

"grpc_message":"Received message larger than max (9830491 vs. 4194304)"
Is there a way to write my predictions to disk instead of transmitting them in the gRPC response? The output is a 32-channel tensor, so I'm unable to encode it as a PNG before saving it to disk using tf.io.write_file.
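To illustrate (the shape and path here are just placeholders), this is the kind of approach that fails, because encode_png only accepts 1-, 2-, 3-, or 4-channel integer images:

import tensorflow as tf

pred = tf.zeros([256, 256, 32])  # stand-in for the real 32-channel prediction
# Fails: encode_png supports only 1, 2, 3, or 4 channels (uint8/uint16)
png_bytes = tf.image.encode_png(tf.cast(pred * 65535.0, tf.uint16))
tf.io.write_file('/tmp/prediction.png', png_bytes)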
Thanks!
The code to set the maximum message size to unlimited in a gRPC client using C++ is shown below:
#include <grpcpp/grpcpp.h>

grpc::ChannelArguments ch_args;
ch_args.SetMaxReceiveMessageSize(-1);  // -1 removes the receive-size limit entirely

std::shared_ptr<grpc::Channel> ch = grpc::CreateCustomChannel(
    "localhost:6060", grpc::InsecureChannelCredentials(), ch_args);
The default maximum message length in gRPC is 4 MB, but you can raise it on both the gRPC client and server in Python, as shown below. This lets you send and receive large messages without streaming:
import grpc

MAX_MESSAGE_LENGTH = 20 * 1024 * 1024  # 20 MB; use -1 for unlimited
channel = grpc.insecure_channel('localhost:6060',
    options=[('grpc.max_send_message_length', MAX_MESSAGE_LENGTH),
             ('grpc.max_receive_message_length', MAX_MESSAGE_LENGTH)])
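Once the channel accepts larger messages, the TensorFlow Serving prediction stub can be used as usual. A minimal sketch follows; the model name, signature, and tensor names ('my_model', 'serving_default', 'input', 'output') are assumptions and must match your exported model:

import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'my_model'                  # assumed model name
request.model_spec.signature_name = 'serving_default'
request.inputs['input'].CopyFrom(                     # assumed input tensor name
    tf.make_tensor_proto(np.zeros((1, 256, 256, 3), dtype=np.float32)))

response = stub.Predict(request, 30.0)                # 30 s deadline
output = tf.make_ndarray(response.outputs['output'])  # assumed output tensor name

If the server side is a plain Python gRPC server rather than the stock tensorflow_model_server binary, pass the same options list to grpc.server(...).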
In Go, equivalent options are available; see:
https://godoc.org/google.golang.org/grpc#MaxMsgSize
https://godoc.org/google.golang.org/grpc#WithMaxMsgSize