Say you have microservices A, B, and C, which all currently communicate over HTTP. Service A sends a request to service B and receives a response. The data in that response must then be sent to service C for some processing before finally being returned to service A, which can then display the results on the web page.
I know that latency is an inherent issue with implementing a microservice architecture, and I was wondering what are some common ways of reducing this latency?
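For instance, I assume something like caching responses in service A would help for repeated requests. A toy sketch of what I mean (the service functions and names here are hypothetical stand-ins for real HTTP calls, not actual code from my system):

```python
import functools

# Hypothetical stand-ins for real HTTP calls to services B and C;
# the counters let us observe how many network hops actually happen.
CALL_COUNTS = {"B": 0, "C": 0}

def call_service_b(query):
    CALL_COUNTS["B"] += 1
    return f"b-data-for-{query}"

def call_service_c(b_data):
    CALL_COUNTS["C"] += 1
    return f"processed-{b_data}"

@functools.lru_cache(maxsize=128)
def fetch_result(query):
    # Service A's chain: A -> B -> C. Caching the final result means
    # repeated identical requests cost zero network hops.
    return call_service_c(call_service_b(query))

fetch_result("users")
fetch_result("users")  # served from cache: no calls to B or C
```

Is this kind of caching one of the standard approaches, or are there better ones?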
Also, I have been doing some reading on how Apache Thrift and RPC's can help with this. Can anyone elaborate on that as well?
The goal of an RPC framework like Apache Thrift is to let one service call another as if it were invoking a local function, with the framework handling the serialization and transport behind the scenes.
In other words, this allows you to send your data as a very compactly encoded packet over the wire, while most of the effort required to achieve this is provided by the framework.
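To make the "compact packet" point concrete, here is a rough illustration using Python's `struct` module rather than Thrift's actual wire format: a fixed binary layout carries no field names or punctuation, so it is much smaller than the equivalent JSON.

```python
import json
import struct

# A hypothetical record a service might return.
record = {"user_id": 42, "score": 3.5, "active": True}

json_bytes = json.dumps(record).encode("utf-8")

# Fixed binary layout: 4-byte int, 8-byte double, 1-byte bool.
# (Thrift's binary/compact protocols are more sophisticated, but the
# idea -- no field names or punctuation on the wire -- is similar.)
packed = struct.pack("<id?", record["user_id"], record["score"], record["active"])

print(len(json_bytes), len(packed))  # the binary form is several times smaller
```

The generated Thrift stubs do this kind of encoding (and the matching decoding) for you, for every field of every struct you declare in the IDL.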
Apache Thrift provides you with a pluggable transport/protocol stack that can quickly be adapted by plugging in different transports (e.g. sockets, HTTP, in-memory buffers) and serialization protocols (e.g. binary, compact, JSON).
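The idea can be sketched with a toy version of such a stack. This is an illustration of the pattern only, not Thrift's real API (Thrift's own classes are along the lines of TSocket/TBufferedTransport for transports and TBinaryProtocol/TCompactProtocol/TJSONProtocol for protocols):

```python
import io
import json
import struct

class MemoryTransport:
    """Toy transport: writes to an in-memory buffer instead of a socket."""
    def __init__(self):
        self.buf = io.BytesIO()
    def write(self, data: bytes):
        self.buf.write(data)
    def getvalue(self) -> bytes:
        return self.buf.getvalue()

class JsonProtocol:
    """Human-readable but verbose encoding."""
    def __init__(self, transport):
        self.transport = transport
    def write_i32(self, value: int):
        self.transport.write(json.dumps(value).encode("utf-8"))

class BinaryProtocol:
    """Compact fixed-width encoding."""
    def __init__(self, transport):
        self.transport = transport
    def write_i32(self, value: int):
        self.transport.write(struct.pack("<i", value))

# The calling code is identical; only the plugged-in protocol differs.
for proto_cls in (JsonProtocol, BinaryProtocol):
    transport = MemoryTransport()
    proto = proto_cls(transport)
    proto.write_i32(123456)
    print(proto_cls.__name__, len(transport.getvalue()))
```

Because the protocol and transport are independent layers, you can switch a service from, say, JSON over HTTP to compact binary over raw sockets without touching the application code.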
Additionally, depending on the target language, you get some infrastructure for the server-side end, such as the TNonblockingServer or TThreadPoolServer implementations.
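The server-side point can be illustrated with a minimal thread-pool handler. This is a sketch of the general pattern behind something like a thread-pool server, not Thrift code; `handle_request` is a hypothetical stand-in for deserializing a request, running the handler, and serializing the response:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id: int) -> str:
    # Stand-in for: read request off the transport, decode it with the
    # protocol, invoke the service handler, encode and send the reply.
    return f"response-{request_id}"

# A thread-pool server processes many in-flight requests concurrently,
# so one slow call does not block all the others.
with ThreadPoolExecutor(max_workers=4) as pool:
    responses = list(pool.map(handle_request, range(8)))

print(responses)
```

Getting this machinery prebuilt matters for latency under load: without concurrency on the server side, requests queue up behind each other and per-request latency grows with traffic.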
So, coming back to your initial question: such a framework can help make communication easier and more efficient, but it cannot magically remove latency that arises in other parts of the network stack.