Generic Receive Offload (GRO) is a software technique in Linux to aggregate multiple incoming packets belonging to the same stream. The linked article claims that CPU utilization is reduced because, instead of each packet traversing the network stack individually, a single aggregated packet traverses the network stack.
However, if one looks at the GRO source code, it feels like a network stack in itself. For example, an incoming TCP/IPv4 packet needs to pass through:
eth_gro_receive
inet_gro_receive
tcp_gro_receive
Each function performs decapsulation and inspects the respective frame/network/transport headers, just as one would expect from the "regular" network stack.
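For intuition, here is a minimal user-space sketch (toy code, not kernel code; all struct and function names are made up) of what those layered gro_receive checks do: each layer looks only at its own header to decide whether the new packet can be merged into a GRO session that is already being held.

    /*
     * Toy user-space model of the layered GRO receive checks, mirroring
     * eth_gro_receive -> inet_gro_receive -> tcp_gro_receive.
     * NOT kernel code; all names below are invented for illustration.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct pkt {
        uint8_t  dst_mac[6], src_mac[6];   /* "Ethernet" header */
        uint32_t saddr, daddr;             /* "IPv4" header     */
        uint16_t sport, dport;             /* "TCP" header      */
        uint32_t seq;
        uint32_t payload_len;
    };

    /* eth layer: same MAC pair (EtherType match implied) */
    static bool eth_matches(const struct pkt *held, const struct pkt *p)
    {
        return !memcmp(held->dst_mac, p->dst_mac, 6) &&
               !memcmp(held->src_mac, p->src_mac, 6);
    }

    /* inet layer: same IPv4 source and destination addresses */
    static bool inet_matches(const struct pkt *held, const struct pkt *p)
    {
        return held->saddr == p->saddr && held->daddr == p->daddr;
    }

    /* tcp layer: same ports, and the new segment continues the byte stream */
    static bool tcp_matches(const struct pkt *held, const struct pkt *p)
    {
        return held->sport == p->sport && held->dport == p->dport &&
               p->seq == held->seq + held->payload_len;
    }

    /* Try to merge p into the held GRO session; true on success. */
    static bool gro_receive(struct pkt *held, const struct pkt *p)
    {
        if (!eth_matches(held, p) || !inet_matches(held, p) || !tcp_matches(held, p))
            return false;
        held->payload_len += p->payload_len;   /* aggregate the payload */
        return true;
    }

    int main(void)
    {
        struct pkt held = { .saddr = 1, .daddr = 2, .sport = 80, .dport = 1234,
                            .seq = 1000, .payload_len = 1448 };
        struct pkt next = held;
        next.seq = held.seq + held.payload_len;   /* in-order continuation */

        printf("merged: %s, aggregated payload: %u bytes\n",
               gro_receive(&held, &next) ? "yes" : "no", held.payload_len);
        return 0;
    }

The real kernel callbacks do considerably more (checksum handling, flag checks, flush rules), but the shape is the same: each layer performs cheap header comparisons against the packets already held on the GRO list.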
Assuming the machine does not perform firewall/NAT or other obviously expensive per-packet processing, what is so slow in the "regular" network stack that the "GRO network stack" can accelerate?
Generic Receive Offload (GRO) is a software technique for increasing inbound throughput of high-bandwidth network connections by reducing CPU overhead.
Generic Receive Offload (GRO) is a widely used software-based offloading technique to reduce per-packet processing overheads. By reassembling small packets into larger ones, GRO enables applications to process fewer, larger packets, thus reducing the number of packets to be processed.
Short answer: GRO is done very early in the receive flow, so it reduces the number of per-packet operations in the rest of the stack by a factor of roughly (GRO session size / MTU).
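As a rough, illustrative calculation (the 64 KB figure is the usual upper bound on an aggregated SKB; the real ratio depends on the MSS, traffic timing, and how often the session is flushed):

    65,536 B (max aggregated SKB) / 1,500 B (MTU) ≈ 43

so functions like ip_rcv() and tcp_v4_rcv() run roughly once instead of ~43 times for the same amount of received data.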
More details: the most common GRO entry point is napi_gro_receive(). It is called in 93 places (in kernel 4.14), i.e. by almost all networking drivers. By using GRO at the NAPI level, the driver aggregates packets into a large SKB very early, right in its receive completion handler. This means that every subsequent function in the receive stack has far less work to do.
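To make this concrete, here is a schematic sketch of a driver's NAPI poll handler feeding frames into GRO. It is not taken from any particular driver: my_napi_poll() and my_rx_ring_next_skb() are made-up names, while napi_gro_receive() and napi_complete_done() are the actual kernel APIs.

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Hypothetical driver helper: pull the next completed frame off the RX ring. */
    static struct sk_buff *my_rx_ring_next_skb(struct napi_struct *napi);

    static int my_napi_poll(struct napi_struct *napi, int budget)
    {
            int done = 0;

            while (done < budget) {
                    struct sk_buff *skb = my_rx_ring_next_skb(napi);
                    if (!skb)
                            break;

                    /*
                     * Instead of handing each frame to the stack with
                     * netif_receive_skb(), give it to GRO. Frames belonging to
                     * the same TCP flow are merged into one large SKB here,
                     * before the rest of the stack ever sees them.
                     */
                    napi_gro_receive(napi, skb);
                    done++;
            }

            if (done < budget)
                    napi_complete_done(napi, done);   /* also flushes pending GRO sessions */

            return done;
    }

Everything that runs after this point in the stack sees the aggregated SKBs, not the individual wire frames.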
Here is a nice visual representation of the RX flow for a Mellanox ConnectX-4Lx NIC (sorry this is what I have access to):
As you can see, GRO aggregation sits at the very bottom of the call stack, and you can also see how much work is done after it. Imagine how much overhead you would have if each of these functions had to run once per MTU-sized packet.
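In text form, that receive path looks roughly like the following (a simplified sketch based on upstream kernel and mlx5 driver function names; the exact chain varies by kernel version):

    mlx5e_napi_poll                      driver NAPI poll loop
      mlx5e_poll_rx_cq                   walk the RX completion queue
        napi_gro_receive                 <-- GRO aggregation happens here
          netif_receive_skb_internal     runs once per *aggregated* SKB
            __netif_receive_skb_core
              ip_rcv -> ip_local_deliver
                tcp_v4_rcv
                  tcp_rcv_established
                    (data queued to the socket, application woken up)

Everything after napi_gro_receive() executes once per aggregated SKB rather than once per MTU-sized frame.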
Hope this helps.