The goal is to introduce a transport and application layer protocol that offers better latency and network throughput. Currently, the application uses REST over HTTP/1.1 and we experience high latency. I need to resolve this latency problem, and I am open to using either gRPC (HTTP/2) or REST over HTTP/2.
HTTP/2:
I am aware of the advantages HTTP/2 brings (multiplexing over a single TCP connection, header compression, binary framing, and server push). Question No. 1: If I use REST with HTTP/2, I am sure I will get a significant performance improvement compared to REST with HTTP/1.1, but how does this compare with gRPC (HTTP/2)?
I am also aware that gRPC uses Protocol Buffers, a compact binary serialization format for transmitting structured data on the wire. Protocol Buffers also help with a language-agnostic approach. I agree with that, and I could achieve a similar contract-first, language-agnostic setup in REST using GraphQL. But my concern is serialization: Question No. 2: Given that HTTP/2 already uses binary framing, does using Protocol Buffers give an added advantage on top of HTTP/2?
Question No. 3: In terms of streaming and bidirectional use cases, how does gRPC (HTTP/2) compare with REST over HTTP/2?
There are many blogs and videos on the internet that compare gRPC (HTTP/2) with REST over HTTP/1.1. As stated earlier, I would like to know the differences and benefits of gRPC (HTTP/2) compared with REST over HTTP/2.
gRPC uses HTTP/2 to support highly performant and scalable APIs and makes use of binary data rather than plain text, which makes the communication more compact and more efficient. gRPC also makes better use of HTTP/2 than REST typically does; for example, gRPC makes it possible to turn message compression on or off.
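As a rough illustration, here is how compression can be configured per channel with the Python `grpcio` package; the address is a placeholder, and any stub you would attach comes from your own generated code:

```python
# Sketch: configuring compression on a gRPC channel with the Python
# `grpcio` package. The channel-level `compression` argument is part of
# grpcio; "localhost:50051" and the service behind it are assumptions.
import grpc

# Disable compression for latency-sensitive, small messages, where
# compressing each payload costs more CPU time than it saves on the wire.
channel = grpc.insecure_channel(
    "localhost:50051",
    compression=grpc.Compression.NoCompression,
)

# Or opt in to gzip when payloads are large and bandwidth-bound.
gzip_channel = grpc.insecure_channel(
    "localhost:50051",
    compression=grpc.Compression.Gzip,
)
```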
gRPC relies heavily on HTTP/2 features, and no browser provides the level of control over web requests required to support a gRPC client. For example, browsers do not allow a caller to require that HTTP/2 be used, nor do they provide access to the underlying HTTP/2 frames.
HTTP/2 is much faster and more reliable than HTTP/1.1. HTTP/1.1 handles one request at a time per TCP connection, while HTTP/2 avoids much of that delay by multiplexing many concurrent requests over a single connection. HTTP is a latency-sensitive protocol in the sense that the less round-trip delay there is, the faster a page loads.
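A plain REST client can also benefit from HTTP/2 multiplexing if the client library supports it. A minimal sketch with the Python `httpx` client (installed as `pip install "httpx[http2]"`; the base URL and endpoints are made up):

```python
# Sketch: reusing one HTTP/2 connection for several REST calls with httpx.
import httpx

with httpx.Client(http2=True, base_url="https://api.example.com") as client:
    # Both requests can travel over the same TCP+TLS connection instead of
    # opening one connection per request as a default HTTP/1.1 client would.
    users = client.get("/users")
    orders = client.get("/orders")
    print(users.http_version, orders.http_version)  # e.g. "HTTP/2"
```

For genuinely concurrent requests you would pair `httpx.AsyncClient` with `asyncio.gather`, but even the synchronous client reuses a single connection.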
Inter-process communication: gRPC calls between a client and service are usually sent over TCP sockets. TCP is great for communicating across a network, but inter-process communication (IPC), for example over Unix domain sockets, is more efficient when the client and service are on the same machine.
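A minimal sketch of what that looks like with Python `grpcio`, assuming a made-up socket path and your own generated servicer (Unix domain sockets require Linux or macOS):

```python
# Sketch: serving and calling gRPC over a Unix domain socket instead of TCP
# when client and server share a machine. The path and servicer are assumptions.
from concurrent import futures
import grpc

SOCKET = "unix:///tmp/my_service.sock"

server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
# Register your protoc-generated servicer here, e.g.
# my_service_pb2_grpc.add_MyServiceServicer_to_server(MyService(), server)
server.add_insecure_port(SOCKET)
server.start()

# The client dials the same `unix:` target, bypassing the TCP/IP stack.
channel = grpc.insecure_channel(SOCKET)
```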
gRPC is not faster than REST over HTTP/2 by default, but it gives you the tools to make it faster. There are some things that would be difficult or impossible to do with REST.
As nfirvine said, you will see most of your performance improvement just from using Protobuf. While you could use proto with REST, it is very nicely integrated with gRPC. Technically, you could use JSON with gRPC, but most people don't want to pay the performance cost after getting used to protos.
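To make the serialization difference concrete, here is a rough size comparison; `user_pb2` and its `User` message are hypothetical protoc-generated code, not part of any real project:

```python
# Sketch: comparing the wire size of the same record as JSON vs Protobuf.
# Assumes `user_pb2` was generated by protoc from a hypothetical user.proto:
#   message User { string name = 1; int32 id = 2; }
import json
import user_pb2  # hypothetical protoc-generated module

record = {"name": "Ada Lovelace", "id": 42}
json_bytes = json.dumps(record).encode("utf-8")

msg = user_pb2.User(name="Ada Lovelace", id=42)
proto_bytes = msg.SerializeToString()

# Protobuf omits field names and JSON punctuation from the wire format,
# so the binary encoding is typically noticeably smaller.
print(len(json_bytes), len(proto_bytes))
```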