Communication Between Microservices

Say you have microservices A, B, and C, which all currently communicate over HTTP. Service A sends a request to service B, which returns a response. The data in that response must then be sent to service C for some processing before finally being returned to service A, which can then display the results on the web page.

I know that latency is an inherent issue when implementing a microservice architecture, and I was wondering: what are some common ways of reducing this latency?

Also, I have been doing some reading on how Apache Thrift and RPCs can help with this. Can anyone elaborate on that as well?

asked Feb 27 '16 by ray smith

1 Answer

Also, I have been doing some reading on how Apache Thrift and RPCs can help with this. Can anyone elaborate on that as well?

The goal of an RPC framework like Apache Thrift is

  • to significantly reduce the manual programming overhead
  • to provide efficient serialization and transport mechanisms
  • across all kinds of programming languages and platforms

In other words, this allows you to send your data over the wire as a very compact and, if desired, compressed packet, while most of the effort required to achieve this is handled by the framework.
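
To make that concrete, here is a minimal client-side sketch in Java. The OrderService, OrderRequest and OrderResponse names are hypothetical, standing in for whatever interface service A would call on service B; the OrderService.Client class would be generated by the Thrift compiler from such an IDL definition, so the marshalling and connection handling come from generated code rather than hand-written application code.

```java
// Hypothetical IDL (order.thrift), compiled with `thrift --gen java order.thrift`:
//   struct OrderRequest  { 1: string orderId }
//   struct OrderResponse { 1: string orderId, 2: double total }
//   service OrderService { OrderResponse getOrder(1: OrderRequest request) }

import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class OrderClient {
    public static void main(String[] args) throws Exception {
        // Plain socket transport to service B (host and port are placeholders).
        TTransport transport = new TSocket("service-b.local", 9090);
        transport.open();

        // Binary protocol: the compact wire encoding is handled by Thrift.
        TBinaryProtocol protocol = new TBinaryProtocol(transport);

        // OrderService.Client is generated code; the call below looks like a
        // local method call but is serialized and sent over the wire.
        OrderService.Client client = new OrderService.Client(protocol);
        OrderResponse response = client.getOrder(new OrderRequest("42"));
        System.out.println("total = " + response.total);

        transport.close();
    }
}
```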

Apache Thrift provides you with a pluggable transport/protocol stack that can quickly be adapted by plugging in different

  • transports (Sockets, HTTP, pipes, streams, ...)
  • protocols (binary, compact, JSON, ...)
  • layers (framed, multiplex, gzip, ...)
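
As a rough sketch of what that pluggability looks like in Java (reusing the hypothetical OrderService from above): switching from a bare socket with the binary protocol to a framed transport with the compact protocol only changes the plumbing, not the generated service code. Note that TFramedTransport has moved between packages across Thrift releases, so the exact import may differ in your version.

```java
import org.apache.thrift.protocol.TCompactProtocol;
import org.apache.thrift.transport.TFramedTransport;  // package location varies by Thrift version
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class FramedCompactClient {
    public static void main(String[] args) throws Exception {
        // Framed transport layered on top of a raw socket...
        TTransport transport = new TFramedTransport(new TSocket("service-b.local", 9090));
        transport.open();

        // ...paired with the compact protocol instead of the binary one.
        TCompactProtocol protocol = new TCompactProtocol(transport);

        // The generated client class is exactly the same as before.
        OrderService.Client client = new OrderService.Client(protocol);
        // ... make calls as usual ...
        transport.close();
    }
}
```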

Additionally, depending on the target language, you get some ready-made server-side infrastructure, such as TNonblockingServer or TThreadPoolServer implementations.
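
Here is a minimal server-side sketch in Java, again using the hypothetical OrderService; OrderHandler is an assumed class implementing the generated OrderService.Iface interface with the actual business logic.

```java
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.server.TServer;
import org.apache.thrift.server.TThreadPoolServer;
import org.apache.thrift.transport.TServerSocket;

public class OrderServer {
    public static void main(String[] args) throws Exception {
        // OrderHandler (hypothetical) implements OrderService.Iface and holds
        // the business logic; the Processor is generated by the Thrift compiler.
        OrderService.Processor<OrderHandler> processor =
                new OrderService.Processor<>(new OrderHandler());

        // Listen on the same port the clients above connect to.
        TServerSocket serverTransport = new TServerSocket(9090);

        TThreadPoolServer.Args serverArgs = new TThreadPoolServer.Args(serverTransport)
                .processor(processor)
                .protocolFactory(new TBinaryProtocol.Factory());

        // Serves each incoming connection on a worker thread from a pool.
        TServer server = new TThreadPoolServer(serverArgs);
        server.serve();
    }
}
```

A TNonblockingServer could be dropped in instead (it expects framed transports on the client side), which is one of the knobs available for tuning throughput and latency on the server end.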

So coming back to your initial question, such a framework can help to make communication easier and more efficient. But it cannot magically remove latency from other parts of the OSI stack.

answered Sep 29 '22 by JensG