The Linux kernel has an option to enable the TCP receive copy offload feature (CONFIG_NET_DMA). I used iperf (with a TCP window size of 250 KBytes and a buffer length of 2 MBytes) and oprofile to measure performance in three cases: with NET_DMA disabled, with NET_DMA enabled, and with NET_DMA enabled plus sk_rcvlowat set to 200 KBytes. The results are as follows:
- With NET_DMA disabled: the bandwidth reaches 930 Mbps, and __copy_tofrom_user consumes 36.1% of CPU time.
- With NET_DMA enabled: the bandwidth is 40 Mbps lower than the above case (890 Mbps), and __copy_tofrom_user consumes 33.5% of CPU time.
- With NET_DMA enabled and sk_rcvlowat = 200 KB: the bandwidth is 874 Mbps, and __copy_tofrom_user consumes 25.1% of CPU time.
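For reference, here is a minimal sketch of how a receiver can raise sk_rcvlowat from user space via SO_RCVLOWAT (illustrative only; the 200 KB value matches the third test case, and iperf itself has no flag for this):

#include <stdio.h>
#include <sys/socket.h>

/* Raise the socket's receive low-water mark so that recv() (and hence
 * tcp_recvmsg()) waits for more data before returning. */
static int set_rcvlowat(int sockfd)
{
    int lowat = 200 * 1024;    /* 200 KB, as in the third case */

    if (setsockopt(sockfd, SOL_SOCKET, SO_RCVLOWAT,
                   &lowat, sizeof(lowat)) < 0) {
        perror("setsockopt(SO_RCVLOWAT)");
        return -1;
    }
    return 0;
}

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0 || set_rcvlowat(fd) < 0)
        return 1;
    /* ... connect/accept and recv() as usual ... */
    return 0;
}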
I also inspected the function tcp_recvmsg() (in net/ipv4/tcp.c, kernel version 2.6.32.2). This is how I understand NET_DMA to work:
// at the start of tcp_recvmsg()
target = sock_rcvlowat(sk, flags & MSG_WAITALL, len);

#ifdef CONFIG_NET_DMA
    tp->ucopy.dma_chan = NULL;
    preempt_disable();
    skb = skb_peek_tail(&sk->sk_receive_queue);
    {
        int available = 0;

        if (skb)
            available = TCP_SKB_CB(skb)->seq + skb->len - (*seq);
        if ((available < target) &&
            (len > sysctl_tcp_dma_copybreak) && !(flags & MSG_PEEK) &&
            !sysctl_tcp_low_latency &&
            dma_find_channel(DMA_MEMCPY)) {
            preempt_enable_no_resched();
            tp->ucopy.pinned_list =
                    dma_pin_iovec_pages(msg->msg_iov, len);
        } else {
            preempt_enable_no_resched();
        }
    }
#endif
- len: the buffer length, which can be specified with the -l option in iperf.
- target: the minimum number of bytes tcp_recvmsg() should return. If sk->sk_rcvlowat is not set, target is usually 1, and DMA transfers rarely take place when target = 1.
- available: the number of unread bytes in the receive queue, computed from the tail skb (TCP_SKB_CB(skb)->seq + skb->len - *seq).
I think the condition (available < target) is crucial in determining whether tcp_recvmsg() should use DMA. From the comments in the I/OAT patch file, this condition holds when a context switch is about to put the process to sleep to wait for more data.
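To make that concrete, here is a small user-space model of the gate (illustrative only, not kernel code; the 4096 default for sysctl_tcp_dma_copybreak is my assumption from reading the 2.6.32 source):

#include <stdbool.h>
#include <stdio.h>

/* User-space model of the NET_DMA gate at the top of tcp_recvmsg(). */
static bool would_pin(int available, int target, int len,
                      int dma_copybreak, bool msg_peek,
                      bool low_latency, bool have_dma_chan)
{
    return available < target && len > dma_copybreak &&
           !msg_peek && !low_latency && have_dma_chan;
}

int main(void)
{
    int len = 2 * 1024 * 1024;    /* iperf -l 2M */

    /* default: target = 1, one 1448-byte segment already queued -> no pin */
    printf("target=1:     pin=%d\n",
           would_pin(1448, 1, len, 4096, false, false, true));
    /* sk_rcvlowat = 200 KB: the same segment leaves available < target */
    printf("target=200KB: pin=%d\n",
           would_pin(1448, 200 * 1024, len, 4096, false, false, true));
    return 0;
}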
// in the while loop of tcp_recvmsg()
if (copied >= target) {
    /* Do not sleep, just process backlog. */
    release_sock(sk);
    lock_sock(sk);
} else
    sk_wait_data(sk, &timeo);
While the process sleeps, arriving packets are copied directly into the userspace buffer by tcp_dma_try_early_copy() in tcp_rcv_established() (in net/ipv4/tcp_input.c). This may be where NET_DMA gains its efficiency: the process has gone to sleep, yet the hardware keeps moving data into its buffer.
// in /net/ipv4/tcp_input.c:tcp_dma_try_early_copy()
if ((tp->ucopy.len == 0) ||
    (tcp_flag_word(tcp_hdr(skb)) & TCP_FLAG_PSH) ||
    (atomic_read(&sk->sk_rmem_alloc) > (sk->sk_rcvbuf >> 1))) {
    tp->ucopy.wakeup = 1;
    sk->sk_data_ready(sk, 0);
}
The DMA processing in tcp_dma_try_early_copy() stops and wakes the sleeping process when the user buffer is exhausted (tp->ucopy.len == 0), when a segment carries the PSH flag, or when the total size of allocated skbs exceeds half of sk_rcvbuf (I found that sk_rcvbuf is set to the TCP window size given to iperf).
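To see how that plays out with my settings, here is a tiny user-space model of the stop test (illustrative only; the 250 KB sk_rcvbuf figure just mirrors the iperf window size I observed):

#include <stdbool.h>
#include <stdio.h>

/* User-space model of the stop test quoted above from
 * tcp_dma_try_early_copy() -- not kernel code. */
static bool stop_early_copy(size_t ucopy_len, bool psh_seen,
                            size_t rmem_alloc, size_t rcvbuf)
{
    return ucopy_len == 0 || psh_seen || rmem_alloc > (rcvbuf >> 1);
}

int main(void)
{
    size_t rcvbuf = 250 * 1024;    /* assumed: iperf -w 250K */

    /* user buffer left, no PSH, queue below rcvbuf/2 -> keep copying */
    printf("%d\n", stop_early_copy(1 << 20, false, 100 * 1024, rcvbuf));
    /* queued skbs exceed rcvbuf/2 -> wake the receiver */
    printf("%d\n", stop_early_copy(1 << 20, false, 130 * 1024, rcvbuf));
    return 0;
}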
This is the first time I have worked with the TCP/IP stack in Linux, so I am not sure my conclusions above are correct. Please correct me if I am wrong. My questions are:
Q1: Why is bandwidth in the NET_DMA-enabled cases always lower than in the case without NET_DMA?
Q2: Is there a good set of values (TCP window size, buffer length, sk_rcvlowat) to boost performance in the NET_DMA-enabled cases?
Q3: Each DMA transfer is only about 1448 bytes. Is that too small to be worth DMAing?
Any suggestions are appreciated. Thanks in advance.
My guess is that with small packets (1448 bytes is considered small nowadays), the latency overhead of setting up and waiting for the I/OAT interrupt is higher than the cost of simply copying the memory, especially when memory and CPU access are fast. Modern servers can push 5 GB/s with memcpy.
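You can sanity-check that figure on your own box with a crude single-threaded probe (rough numbers only; cache footprint, NUMA placement, and compiler all matter):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void)
{
    size_t len = 64 * 1024 * 1024;    /* 64 MB working set */
    int iters = 32;
    char *src = malloc(len), *dst = malloc(len);

    if (!src || !dst)
        return 1;
    memset(src, 0xA5, len);    /* fault the pages in first */
    memset(dst, 0x5A, len);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iters; i++)
        memcpy(dst, src, len);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("memcpy: %.2f GB/s\n", (double)len * iters / sec / 1e9);
    free(src);
    free(dst);
    return 0;
}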
For the 10 Gbit/s Ethernet case, it would be worthwhile to use as high an MTU as possible, and certainly larger buffer sizes. I think the original receive-offload tests only started showing performance gains once individual packets were around PAGE_SIZE.