Plugging custom transport into gRPC

Tags: c++, grpc

Let's assume we develop a custom low-level transport for gRPC. How can we “plug it” into the gRPC C++ API so that we can use it for a Channel?

asked Nov 05 '17 by user3612643


1 Answer

I'm working on a doc that will soon appear at https://github.com/grpc/grpc/, but here's a preview:

gRPC transports plug in below the core API (one level below the C++ API). You can write your transport in C or C++; currently, all the transports are nominally written in C++, though they are idiomatically C (a conceptual sketch of the plug-in surface appears after the list below). The existing transports are:

  • HTTP/2
  • Cronet
  • In-process

Among these, the in-process transport is likely the easiest to understand, though it is arguably also the least similar to a "real" sockets-based transport.
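
Whichever wire mechanism you build, the core sees a transport as per-connection state plus a table of function pointers that it calls to create streams and to hand batches of stream ops to the transport (the existing transports each register such a vtable; the relevant declarations live under src/core/lib/transport/ in the gRPC tree). The sketch below is purely conceptual: the type and field names are illustrative assumptions, not the actual gRPC declarations, which change between versions.

    // Conceptual sketch only -- NOT the real grpc_transport_vtable.
    // Names and signatures are illustrative; check your gRPC version's core
    // sources (src/core/lib/transport/) for the real declarations.
    #include <cstddef>

    struct my_transport;        // hypothetical per-connection transport state
    struct my_stream;           // hypothetical per-RPC (per-stream) state
    struct my_stream_op_batch;  // stands in for grpc_transport_stream_op_batch

    struct my_transport_vtable {
      // How much memory the core should allocate for each stream object.
      std::size_t sizeof_stream;
      // Human-readable name, e.g. "my_custom_transport".
      const char* name;
      // Called once per RPC to initialize per-stream state.
      int (*init_stream)(my_transport* t, my_stream* s, const void* server_data);
      // The heart of a transport: carry out a batch of stream ops
      // (send/recv metadata, messages, status) for one RPC.
      void (*perform_stream_op)(my_transport* t, my_stream* s,
                                my_stream_op_batch* batch);
      // Transport-wide ops: goaway, pings, connectivity watching, etc.
      void (*perform_op)(my_transport* t, void* transport_op);
      // Tear down per-stream and per-transport state.
      void (*destroy_stream)(my_transport* t, my_stream* s);
      void (*destroy_transport)(my_transport* t);
    };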

In the gRPC core implementation, a fundamental struct is grpc_transport_stream_op_batch, which represents a collection of stream operations sent to a transport. The ops in a batch can include:

  • send_initial_metadata
    • Client: initiate an RPC
    • Server: supply response headers
  • recv_initial_metadata
    • Client: get response headers
    • Server: accept an RPC
  • send_message (zero or more): send a data buffer
  • recv_message (zero or more): receive a data buffer
  • send_trailing_metadata
    • Client: half-close indicating that no more messages will be coming
    • Server: full-close providing final status for the RPC
  • recv_trailing_metadata: get final status for the RPC
    • Server extra: This op shouldn't actually be considered complete until the server has also sent trailing metadata to provide the other side with final status
  • cancel_stream: Attempt to cancel an RPC
  • collect_stats: Get stats

One or more of these ops are grouped into a batch. Applications can start all of a call's ops in a single batch, or they can split them up into multiple batches. Results of each batch are returned asynchronously via a completion queue.
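
At the core surface this batching is visible directly: the application (or a wrapped-language binding) fills in an array of grpc_op structs and hands them to grpc_call_start_batch in one shot. The sketch below shows a client starting all six ops of a unary call as a single batch; it assumes the call came from grpc_channel_create_call and the request payload is already a grpc_byte_buffer. The grpc_op member layout follows recent core releases and has changed over time, so treat the exact field names as assumptions to verify against your version's grpc_types.h.

    #include <grpc/grpc.h>
    #include <cstring>

    // Sketch (not authoritative): start every op of a unary client call as one
    // batch. The out-parameters are filled in asynchronously; the batch is done
    // when `tag` pops out of the call's completion queue. The metadata arrays
    // must already have been grpc_metadata_array_init()'d by the caller.
    grpc_call_error start_unary_batch(grpc_call* call,
                                      grpc_byte_buffer* request_payload,
                                      grpc_metadata_array* initial_md_recv,
                                      grpc_metadata_array* trailing_md_recv,
                                      grpc_byte_buffer** response_payload_recv,
                                      grpc_status_code* status,
                                      grpc_slice* status_details,
                                      void* tag) {
      grpc_op ops[6];
      std::memset(ops, 0, sizeof(ops));

      ops[0].op = GRPC_OP_SEND_INITIAL_METADATA;   // client: initiate the RPC
      ops[0].data.send_initial_metadata.count = 0;

      ops[1].op = GRPC_OP_SEND_MESSAGE;            // request payload
      ops[1].data.send_message.send_message = request_payload;

      ops[2].op = GRPC_OP_SEND_CLOSE_FROM_CLIENT;  // half-close: no more messages

      ops[3].op = GRPC_OP_RECV_INITIAL_METADATA;   // response headers
      ops[3].data.recv_initial_metadata.recv_initial_metadata = initial_md_recv;

      ops[4].op = GRPC_OP_RECV_MESSAGE;            // response payload
      ops[4].data.recv_message.recv_message = response_payload_recv;

      ops[5].op = GRPC_OP_RECV_STATUS_ON_CLIENT;   // trailing metadata + final status
      ops[5].data.recv_status_on_client.trailing_metadata = trailing_md_recv;
      ops[5].data.recv_status_on_client.status = status;
      ops[5].data.recv_status_on_client.status_details = status_details;

      // One tag for the whole batch; grpc_completion_queue_next() returns it
      // once the transport has completed every op above.
      return grpc_call_start_batch(call, ops, 6, tag, nullptr);
    }

The application could equally split these ops across several grpc_call_start_batch calls, each with its own tag; the transport must cope with either pattern.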

Internally, we use callbacks to indicate completion. The surface layer creates a callback when starting a new batch and sends it down the filter stack along with the batch. The transport must invoke this callback when the batch is complete, and then the surface layer returns an event to the application via the completion queue. Each batch can have up to 3 callbacks (sketched conceptually after this list):

  • recv_initial_metadata_ready (called by the transport when the recv_initial_metadata op is complete)
  • recv_message_ready (called by the transport when the recv_message op is complete)
  • on_complete (called by the transport when the entire batch is complete)
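
Putting those three callbacks together, a transport's stream-op handler conceptually looks like the sketch below. This is not the real internal API: in the actual core the callbacks are grpc_closure objects hanging off grpc_transport_stream_op_batch and are scheduled through the core's execution machinery, with names that vary by version, and in a real transport they fire asynchronously as data becomes available rather than inline. Everything here is a simplified stand-in.

    #include <functional>

    // Hypothetical, simplified stand-in for grpc_transport_stream_op_batch.
    struct batch_sketch {
      bool recv_initial_metadata = false;  // does this batch ask for headers?
      bool recv_message = false;           // does this batch ask for a message?
      std::function<void()> recv_initial_metadata_ready;  // headers have arrived
      std::function<void()> recv_message_ready;           // one message has arrived
      std::function<void()> on_complete;                  // every op in the batch done
    };

    // Conceptual perform_stream_op: act on each requested op, fire the matching
    // callback, then complete the batch as a whole so the surface layer can post
    // an event to the completion queue.
    void my_perform_stream_op(batch_sketch* batch) {
      if (batch->recv_initial_metadata && batch->recv_initial_metadata_ready) {
        // ...once the peer's headers are available:
        batch->recv_initial_metadata_ready();
      }
      if (batch->recv_message && batch->recv_message_ready) {
        // ...once the next data buffer is available:
        batch->recv_message_ready();
      }
      // ...carry out any send_* ops in the batch...
      // Only after every op has been handled may the batch be completed.
      if (batch->on_complete) batch->on_complete();
    }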

The transport's job is to sequence and interpret the various possible interleavings of the basic stream ops. A sample timeline of batches:

  1. Client send_initial_metadata: Initiate an RPC with a path (method) and authority
  2. Server recv_initial_metadata: Accept an RPC
  3. Client send_message: Supply the input proto for the RPC
  4. Server recv_message: Get the input proto from the RPC
  5. Client send_trailing_metadata: This is a half-close indicating that the client will not be sending any more messages
  6. Server recv_trailing_metadata: The server sees this from the client and knows that it will not get any more messages. This won't complete yet though, as described above.
  7. Server send_initial_metadata, send_message, send_trailing_metadata: A batch can contain multiple ops, and this batch provides the RPC response headers, response content, and status (a code sketch of this batch follows the list). Note that sending the trailing metadata will also complete the server's receive of trailing metadata.
  8. Client recv_initial_metadata: The number of ops in a batch on one side has no relation to the number of ops in a batch on the other side. In this case, the client is just collecting the response headers.
  9. Client recv_message, recv_trailing_metadata: Get the data response and status
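
Here is what step 7 might look like at the core surface: the server's headers, response payload, and final status all go out in one grpc_call_start_batch on the server-side grpc_call (obtained earlier via grpc_server_request_call). As with the client sketch above, the grpc_op member names follow recent core releases and should be verified against your version.

    #include <grpc/grpc.h>
    #include <cstring>

    // Sketch (not authoritative): the server's combined response batch.
    grpc_call_error send_unary_response(grpc_call* server_call,
                                        grpc_byte_buffer* response_payload,
                                        grpc_slice* status_details,
                                        void* tag) {
      grpc_op ops[3];
      std::memset(ops, 0, sizeof(ops));

      ops[0].op = GRPC_OP_SEND_INITIAL_METADATA;    // response headers
      ops[0].data.send_initial_metadata.count = 0;

      ops[1].op = GRPC_OP_SEND_MESSAGE;             // response payload
      ops[1].data.send_message.send_message = response_payload;

      ops[2].op = GRPC_OP_SEND_STATUS_FROM_SERVER;  // trailing metadata + status
      ops[2].data.send_status_from_server.trailing_metadata_count = 0;
      ops[2].data.send_status_from_server.status = GRPC_STATUS_OK;
      ops[2].data.send_status_from_server.status_details = status_details;

      // Completing this batch is also what lets the client's
      // recv_trailing_metadata (and the server's own pending trailing-metadata
      // receive, per step 6) finish.
      return grpc_call_start_batch(server_call, ops, 3, tag, nullptr);
    }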

In addition to these basic stream ops, the transport must handle cancellation of a stream at any time and pass its effect to the other side. It must also perform operations such as pings and statistics collection that are used to shape transport-level characteristics like flow control (see, for example, their use in the HTTP/2 transport).

answered Sep 21 '22 by vjpai