Communication between REST Microservices: Latency

The problem I'm trying to solve is the latency of microservice-to-microservice communication on the backend. Scenario: the client makes a request to service A, which calls service B, which in turn calls service C before the response travels back through B and A to the client.

Request: Client -> A -> B -> C
Response: C -> B -> A -> Client

The microservices expose a REST interface accessed over HTTP, and each new HTTP connection opened between services to submit a request adds overhead. I'm looking for ways to reduce this overhead without bringing another transport mechanism into the mix (i.e. sticking to HTTP and REST as much as possible). Some answers suggest Apache Thrift, but I'd like to avoid that. Another possible solution is a message queue, which I'd also like to avoid (to keep operational complexity down).

Does anyone have experience with microservice communication using HTTP connection pooling or HTTP/2? The system is deployed on AWS, where service groups are fronted by an ELB.

asked Apr 17 '16 by Tiger Johnson



1 Answer

HTTP/1.0's working mode was to open a new connection for each request and close it after each response.

Using HTTP/1.0 from remote clients, and from clients inside the microservices (e.g. those in A that call B, and those in B that call C), should be avoided because the cost of opening a connection for each request can account for most of the latency.

HTTP/1.1's working mode is to open a connection and leave it open until either peer explicitly requests to close it. This allows the connection to be reused for multiple requests, and it's a big win because it reduces latency, uses fewer resources, and is in general more efficient.

Fortunately, nowadays both remote clients (e.g. browsers) and clients inside microservices support HTTP/1.1 well, or even HTTP/2.

Browsers certainly have connection pooling, and any decent HTTP client that you may use inside your microservices has connection pooling too.

Remote clients and microservice clients should therefore be using at least HTTP/1.1 with connection pooling.
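For example, here is a minimal sketch of a pooled HTTP/1.1 client using Jetty's HttpClient (the Jetty 9.4 API is assumed; the service URL and pool size are placeholders):

```java
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;

public class ServiceBCaller {
    public static void main(String[] args) throws Exception {
        // One long-lived client per calling service: it keeps a pool of
        // persistent HTTP/1.1 connections per destination and reuses them.
        HttpClient httpClient = new HttpClient();
        httpClient.setMaxConnectionsPerDestination(64); // tune to your load
        httpClient.start();

        // Each call reuses an idle pooled connection instead of paying
        // the TCP (and possibly TLS) setup cost again.
        ContentResponse response = httpClient.GET("http://service-b.internal:8080/api/items");
        System.out.println(response.getStatus());

        httpClient.stop();
    }
}
```

The important point is that the client is created and started once and kept around, not created per request.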

Regarding HTTP/2: while I am a big promoter of HTTP/2 for browser-to-server usage, for REST microservice calls inside data centers I would benchmark the parameters you are interested in for both HTTP/1.1 and HTTP/2, and then see how they fare. I expect HTTP/2 to be on par with HTTP/1.1 in most cases, if not slightly better.

The way I would do it with HTTP/2 (disclaimer: I'm a Jetty committer) would be to offload TLS from remote clients using HAProxy, and then use clear-text HTTP/2 between microservices A, B and C using Jetty's HttpClient with the HTTP/2 transport.
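A sketch of that clear-text HTTP/2 (h2c) call between services, again assuming the Jetty 9.4 API and made-up internal host names:

```java
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;

public class H2cCaller {
    public static void main(String[] args) throws Exception {
        // Wrap the low-level HTTP/2 client in the high-level HttpClient via
        // the HTTP/2 transport; no SslContextFactory is configured because
        // this is clear-text h2c inside the data center (TLS is offloaded
        // at the edge).
        HTTP2Client http2Client = new HTTP2Client();
        HttpClient httpClient = new HttpClient(new HttpClientTransportOverHTTP2(http2Client), null);
        httpClient.start();

        // Requests to the same service are multiplexed as streams over a
        // single HTTP/2 connection, avoiding per-request connection setup.
        ContentResponse response = httpClient.GET("http://service-c.internal:8080/api/items");
        System.out.println(response.getStatus());

        httpClient.stop();
    }
}
```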

I'm not sure whether AWS ELB supports HTTP/2 at the time of this writing, but if it does not, please drop a message to Amazon asking them to support it (many others already did). As I said, alternatively you can use HAProxy.
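If you go the HAProxy route, TLS offloading at the edge could look roughly like the sketch below; the certificate path, names and backend addresses are hypothetical:

```
# Hypothetical edge configuration: HAProxy terminates TLS from remote
# clients and forwards plain HTTP to the service A instances inside the VPC.
frontend public_https
    bind *:443 ssl crt /etc/haproxy/certs/example.pem
    mode http
    default_backend service_a

backend service_a
    mode http
    server a1 10.0.1.10:8080 check
    server a2 10.0.1.11:8080 check
```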

For communication between microservices you can use HTTP/2 regardless of the protocol used by the remote clients. By using Jetty's HttpClient, you can very easily switch between the HTTP/1.1 and HTTP/2 transports, which gives you maximum flexibility.
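To illustrate how little changes between the two, here is a hypothetical helper (Jetty 9.4 API assumed) where only the transport differs while the calling code stays the same:

```java
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;

public final class ServiceClients {

    // Same high-level HttpClient API either way; only the transport
    // underneath changes, so request/response code is untouched.
    public static HttpClient newClient(boolean useHttp2) throws Exception {
        HttpClient client = useHttp2
                ? new HttpClient(new HttpClientTransportOverHTTP2(new HTTP2Client()), null)
                : new HttpClient(); // default HTTP/1.1 transport with connection pooling
        client.start();
        return client;
    }
}
```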

answered Oct 27 '22 by sbordet