I am looking for advice on how to get efficient and high performance asynchronous IO working for my application that runs on Ubuntu Linux 14.04.
My app processes transactions and creates a file on disk/flash. As the app progresses through transactions, additional blocks are created that must be appended to the file on disk/flash. The app also needs to frequently read blocks of this file while processing new transactions. Each transaction might need to read a different block from this file in addition to creating a new block that has to be appended to it. There is an incoming queue of transactions, and the app can continue to process transactions from the queue to build a deep enough pipeline of IO ops to hide the latency of read accesses or write completions on disk or flash. For a read of a block (which was put in the write queue by a previous transaction) that has not yet been written to disk/flash, the app will stall until the corresponding write completes.
I have an important performance objective: the app should incur the lowest possible latency to issue each IO operation. My app takes approximately 10 microseconds to process each transaction and be ready to issue a write to, or a read from, the file on disk/flash. The additional latency to issue an asynchronous read or write should be as small as possible, so that when only a file write is needed the app can complete each transaction at a rate as close to 10 usecs per transaction as possible.
We are experimenting with an implementation that uses io_submit to issue write and read requests. I would appreciate any suggestions or feedback on the best approach for our requirement. Is io_submit going to give us the best performance to meet our objective? What should I expect for the latency of each write io_submit and the latency of each read io_submit?
Using our experimental code (running on a 2.3 GHz Haswell Macbook Pro, Ubuntu Linux 14.04), we are measuring about 50 usecs for a write io_submit when extending the output file. This is too long and we aren't even close to our performance requirements. Any guidance to help me launch a write request with the least latency will be greatly appreciated.
Linux AIO (sometimes known as KAIO or libaio) is something of a black art where experienced practitioners know the pitfalls but for some reason it's taboo to tell someone about gotchas they don't already know. From scratching around on the web and experience, I've come up with a few examples where Linux's asynchronous I/O submission via io_submit() may become (silently) synchronous, thereby turning it into a blocking (i.e. no longer fast) call:

1. You're submitting buffered (i.e. non-direct) I/O. You're at the mercy of the kernel's caching, and your submit can go synchronous, e.g. when what you're asking to read isn't in the page cache, or when writeback has to occur before a new write can be accepted.
2. You asked for direct I/O but the filesystem decides to ignore the O_DIRECT "hint" (e.g. how you submitted the I/O didn't meet O_DIRECT alignment constraints, filesystem or particular filesystem's configuration doesn't support O_DIRECT) and it chooses to silently perform buffered I/O instead, resulting in the case above.
3. You're doing direct I/O but the filesystem has to perform a synchronous operation (such as reading or updating metadata via journalling) to fulfil your I/O — a common example is an "allocating write" that extends the file. Even on filesystems that try to provide good AIO behaviour, sending certain operations in parallel means io_submit() again will turn into a blocking call while the other operation completes. The Seastar framework contains a small lookup table of filesystem specific cases.
4. You're submitting too much outstanding I/O. There are maximum request queue sizes (see the /sys/block/[disk]/queue/nr_requests documentation and the un(der)documented /sys/block/[disk]/device/queue_depth) within the kernel. Making I/O requests back up and exceed the size of the kernel queues leads to blocking.
   - If you submit I/Os that are "too big" (e.g. bigger than /sys/block/[disk]/queue/max_sectors_kb, but the true limit may be something smaller like 512 KiB), they will be split up within the block layer and go on to chew up more than one request.
   - The system-global maximum of simultaneous AIO requests (see the /proc/sys/fs/aio-max-nr documentation) can also have an impact, but the result will be seen in io_setup() rather than io_submit().
5. The file being submitted against has a mutex/lock (e.g. i_rwsem) that is in use.

The list above is not exhaustive.
With >= 4.14 kernels, the RWF_NOWAIT flag can be used to make some of the blocking scenarios above noisy. For example, when using buffering and trying to read data not yet in the page cache, the RWF_NOWAIT flag will cause submission to fail with EAGAIN when blocking would otherwise occur. Obviously you still a) need a 4.14 (or later) kernel that supports this flag and b) have to be aware of the cases it doesn't cover. I notice there are patches that have been accepted or are being proposed to return EAGAIN in more scenarios that would otherwise block, but at the time of writing (2019) RWF_NOWAIT is not supported for buffered filesystem writes.
If your kernel is >= 5.1, you could try using io_uring, which does far better at not blocking on submission (it's an entirely different interface and was new in 2019).
Some filesystem-specific examples of io_submit() blocking/slowness situations:

- An allocating write can fail with ENOSPC due to lack of large amounts of contiguous free space.
- A filesystem may silently ignore O_DIRECT rather than failing the open() call.
- Buffered I/O is silently used when O_DIRECT is requested on compressed files.
- ZFS on Linux went from outright rejecting O_DIRECT to "accepting" it by falling back to buffered I/O (see point 3 in the commit message). There's further discussion from the lead up to the commit in the ZFS on Linux "Direct IO" GitHub issue. In the "NVMe Read Performance Issues with ZFS (submit_bio to io_schedule)" issue someone suggests they are getting closer to submitting a change that enables a proper zerocopy O_DIRECT. If such a change were accepted, it would end up in some future version of ZoL greater than 0.8.2.
- O_DIRECT allocating writes are a known source of io_submit() delays. There's also an LWN article talking about an earlier version of the no-wait AIO patch set and some of the cases it doesn't cover (but note that buffered reads were covered by it in the end).
Hopefully this post helps someone (and if it does help you, could you upvote it? Thanks!).