Consider an application that is CPU bound, but also has high-performance I/O requirements.
I'm comparing Linux file I/O to Windows, and I can't see how epoll will help a Linux program at all. The kernel will tell me that the file descriptor is "ready for reading," but I still have to call blocking read() to get my data, and if I want to read megabytes, it's pretty clear that that will block.
On Windows, I can create a file handle with OVERLAPPED set, and then use non-blocking I/O, and get notified when the I/O completes, and use the data from that completion function. I need to spend no application-level wall-clock time waiting for data, which means I can precisely tune my number of threads to my number of cores, and get 100% efficient CPU utilization.
If I have to emulate asynchronous I/O on Linux, then I have to allocate some number of threads to do this, and those threads will spend a little bit of time doing CPU things, and a lot of time blocking for I/O, plus there will be overhead in the messaging to/from those threads. Thus, I will either over-subscribe or under-utilize my CPU cores.
I looked at mmap() + madvise() (WILLNEED) as a "poor man's async I/O" but it still doesn't get all the way there, because I can't get a notification when it's done -- I have to "guess" and if I guess "wrong" I will end up blocking on memory access, waiting for data to come from disk.
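For reference, the "poor man's async I/O" pattern being described looks roughly like this (a minimal sketch; "data.bin" and the page-sized stride are placeholders):

    /* Poor man's async I/O: map the file, hint the kernel to start readahead,
     * do other work, then touch the pages and hope they have arrived. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.bin", O_RDONLY);          /* placeholder file */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Ask the kernel to start bringing the pages in... */
        madvise(p, st.st_size, MADV_WILLNEED);

        /* ...do CPU work here, guessing how long the readahead needs... */

        /* No completion notification: if the guess was wrong, this faults
         * and blocks waiting for the disk. */
        unsigned long sum = 0;
        for (off_t i = 0; i < st.st_size; i += 4096)
            sum += (unsigned char)p[i];

        printf("touched %ld bytes, sum %lu\n", (long)st.st_size, sum);
        munmap(p, st.st_size);
        close(fd);
        return 0;
    }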
Linux seems to have the starts of async I/O in io_submit, and it seems to also have a user-space POSIX aio implementation, but it's been that way for a while, and I know of nobody who would vouch for these systems for critical, high-performance applications.
The Windows model works roughly like this:

1. Issue an asynchronous operation.
2. Tie the asynchronous operation to a particular I/O completion port.
3. Wait for operations to complete on that port.
4. When the I/O is complete, a thread waiting on the port unblocks and receives a reference to the completed operation.

Steps 1/2 are typically done as a single thing. Steps 3/4 are typically done with a pool of worker threads, not (necessarily) the same thread that issues the I/O. This model is somewhat similar to the model provided by boost::asio, except boost::asio doesn't actually give you asynchronous block-based (disk) I/O.
The difference to epoll in Linux is that in step 4, no I/O has yet happened -- it hoists step 1 to come after step 4, which is "backwards" if you know exactly what you need already.
Having programmed a large number of embedded, desktop, and server operating systems, I can say that this model of asynchronous I/O is very natural for certain kinds of programs. It is also very high-throughput and low-overhead. I think this is one of the remaining real shortcomings of the Linux I/O model, at the API level.
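To make the four steps above concrete, here is a minimal C sketch of that pattern using the Win32 calls mentioned earlier (error handling trimmed; "data.bin" is a placeholder file name):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Steps 1/2: open with OVERLAPPED semantics and tie the handle to a
         * completion port (typically one port shared by a pool of workers). */
        HANDLE file = CreateFileA("data.bin", GENERIC_READ, FILE_SHARE_READ, NULL,
                                  OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
        HANDLE port = CreateIoCompletionPort(file, NULL, /* completion key */ 1, 0);

        static char buf[1 << 20];
        OVERLAPPED ov = {0};                          /* read from offset 0 */
        ReadFile(file, buf, sizeof buf, NULL, &ov);   /* returns immediately;
                                                         GetLastError() == ERROR_IO_PENDING */

        /* ...the calling thread is free to do CPU work here... */

        /* Steps 3/4: a worker thread blocks on the port and wakes only once
         * the data is already in buf. */
        DWORD bytes;
        ULONG_PTR key;
        OVERLAPPED *done;
        GetQueuedCompletionStatus(port, &bytes, &key, &done, INFINITE);
        printf("read %lu bytes\n", (unsigned long)bytes);

        CloseHandle(port);
        CloseHandle(file);
        return 0;
    }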
Linux asynchronous I/O is a relatively recent addition to the Linux kernel. It's a standard feature of the 2.6 kernel, but you can find patches for 2.4. The basic idea behind AIO is to allow a process to initiate a number of I/O operations without having to block or wait for any to complete.
“Asynchronous” essentially means that when a User Mode process invokes a library function to read or write a file, the function terminates as soon as the read or write operation has been enqueued, possibly even before the actual I/O data transfer takes place.
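glibc's POSIX AIO interface (the aio_* functions, which as discussed elsewhere in this thread are emulated in user space with threads) exposes exactly those enqueue-and-return semantics. A minimal sketch, polling for completion ("data.bin" is a placeholder; link with -lrt on older glibc):

    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.bin", O_RDONLY);      /* placeholder file */
        if (fd < 0) { perror("open"); return 1; }

        static char buf[64 * 1024];
        struct aiocb cb;
        memset(&cb, 0, sizeof cb);
        cb.aio_fildes = fd;
        cb.aio_buf    = buf;
        cb.aio_nbytes = sizeof buf;
        cb.aio_offset = 0;

        /* Returns as soon as the request is enqueued, before any data moves. */
        if (aio_read(&cb) < 0) { perror("aio_read"); return 1; }

        /* ...do CPU work, then poll (or use a SIGEV_THREAD notification)... */
        while (aio_error(&cb) == EINPROGRESS)
            usleep(1000);

        ssize_t n = aio_return(&cb);
        printf("read %zd bytes\n", n);
        close(fd);
        return 0;
    }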
For network socket I/O, when it is "ready", it doesn't block. That's what O_NONBLOCK and "ready" mean. For disk I/O, we have POSIX aio, Linux aio, sendfile and friends.
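As an illustration of that readiness model, here is a minimal sketch where a socketpair stands in for a real network connection:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* A socketpair stands in for a connected network socket. */
        int sv[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
        write(sv[1], "hello", 5);

        /* Readiness model: mark the socket non-blocking... */
        fcntl(sv[0], F_SETFL, fcntl(sv[0], F_GETFL) | O_NONBLOCK);

        /* ...wait until the kernel says it is readable... */
        int ep = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = sv[0] };
        epoll_ctl(ep, EPOLL_CTL_ADD, sv[0], &ev);

        struct epoll_event out;
        epoll_wait(ep, &out, 1, -1);

        /* ...then read() completes without blocking (drain until EAGAIN). */
        char buf[128];
        ssize_t n;
        while ((n = read(sv[0], buf, sizeof buf)) > 0)
            printf("got %zd bytes\n", n);
        if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK)
            perror("read");

        close(ep); close(sv[0]); close(sv[1]);
        return 0;
    }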
The block I/O layer is the kernel subsystem in charge of managing input/output operations performed on block devices. The need for a dedicated kernel component to manage such operations stems from the additional complexity of block devices compared with, for example, character devices.
(2020) If you're using a 5.1 or above Linux kernel you can use the io_uring interface for file-like I/O and obtain excellent asynchronous operation.

Compared to the existing libaio/KAIO interface, io_uring has the following advantages:

- Easier to use (especially when using the liburing helper library)
- Works with network sockets too (recvmsg()/sendmsg() are supported from >=5.3, see messages mentioning the word "support" in io_uring.c's git history)
- Supports operations beyond read/write (e.g. fsync (>=5.1), fallocate (>=5.6), splice (>=5.7) and more)

Compared to glibc's POSIX AIO, io_uring has the following advantages:

- It is backed by the kernel rather than by a userspace thread pool (glibc's POSIX AIO is emulated in user space with threads, as noted elsewhere in this thread)
- Can it deliver the completion-driven behaviour the question asks for? io_uring most certainly can!

The Efficient IO with io_uring document goes into far more detail as to io_uring's benefits and usage. The What's new with io_uring document describes new features added to io_uring since its inception, while The rapid growth of io_uring LWN article describes which features were available in each of the 5.1 - 5.5 kernels with a forward glance to what was going to be in 5.6 (also see LWN's list of io_uring articles). There's also a "Faster IO through io_uring" videoed presentation (slides) from late 2019 by io_uring author Jens Axboe. Finally, the Lord of the io_uring tutorial gives an introduction to io_uring usage.

The io_uring community can be reached via the io_uring mailing list, and the io_uring mailing list archives show daily traffic at the start of 2021.
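To show the shape of the API, here is a minimal liburing sketch of the submit-then-reap flow (a sketch only, assuming a >=5.1 kernel with liburing installed; "data.bin" is a placeholder and the program links with -luring):

    #include <fcntl.h>
    #include <liburing.h>
    #include <stdio.h>
    #include <sys/uio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.bin", O_RDONLY);      /* placeholder file */
        if (fd < 0) { perror("open"); return 1; }

        struct io_uring ring;
        io_uring_queue_init(8, &ring, 0);         /* 8-entry submission queue */

        /* Describe the read in a submission queue entry and submit it. */
        static char buf[64 * 1024];
        struct iovec iov = { .iov_base = buf, .iov_len = sizeof buf };
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_readv(sqe, fd, &iov, 1, 0 /* offset */);
        io_uring_submit(&ring);

        /* ...CPU work can happen here while the kernel does the I/O... */

        /* Reap the completion: the data is already in buf at this point. */
        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);
        printf("read returned %d\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);

        io_uring_queue_exit(&ring);
        close(fd);
        return 0;
    }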
Re "support partial I/O in the sense of recv()
vs read()
": a patch went into the 5.3 kernel that will automatically retry io_uring
short reads and a further commit went into the 5.4 kernel that tweaks the behaviour to only automatically take care of short reads when working with "regular" files on requests that haven't set the REQ_F_NOWAIT
flag (it looks like you can request REQ_F_NOWAIT
via IOCB_NOWAIT
or by opening the file with O_NONBLOCK
). Thus you can get recv()
style- "short" I/O behaviour from io_uring
too.
Though the interface is young (its first incarnation arrived in May 2019), some open-source software is already using io_uring "in the wild":
- fio has benchmark results comparing its io_uring ioengine to the libaio ioengine on an Optane device.
- RocksDB added an io_uring backend for MultiRead in Dec 2019, which was part of its 6.7.3 release. Jens states io_uring helped to dramatically cut latency.
- libev gained an io_uring backend in Dec 2019. While some of the author's original points were addressed in newer kernels, at the time of writing (mid 2021) libev's author has some choice words about io_uring's maturity and is taking a wait-and-see approach before implementing further improvements.
- QEMU gained an io_uring backend, with one benchmark showing the io_uring backend outperforming the threads and aio backends on one workload of random 16K blocks.
- Samba merged an io_uring VFS backend in Feb 2020, which was part of the Samba 4.12 release. In the "Linux io_uring VFS backend." Samba mailing list thread, Stefan Metzmacher (the commit author) says the io_uring module was able to push roughly 19% more throughput (compared to some unspecified backend) in a synthetic test. You can also read the "Async VFS Future" PDF presentation by Stefan for some of the motivation behind the changes.
- Several libraries aim to make io_uring more accessible to pure Rust. rio is one library talked about a bit, and the author says they achieved higher throughput compared to using sync calls wrapped in threads. The author gave a presentation about his database and library at FOSDEM 2020 which included a section extolling the virtues of io_uring.
- glommio is a Rust asynchronous framework built around io_uring. The author (Glauber Costa) published a document called Modern storage is plenty fast. It is the APIs that are bad, showing that with careful tuning glommio could get over 2.5 times the performance over regular (non-io_uring) syscalls when performing sequential I/O on an Optane device.
- PostgreSQL developer Andres Freund has been experimenting heavily with io_uring and driving io_uring improvements (e.g. the workaround to reduce filesystem inode contention). There is a presentation "Asynchronous IO for PostgreSQL" (be aware the video is broken until the 5 minute mark) (PDF) motivating the need for PostgreSQL changes and demonstrating some experimental results. He has expressed hope of getting his optional io_uring support into PostgreSQL 14 and seems acutely aware of what does and doesn't work even down to the kernel level. In December 2020, Andres further discusses his PostgreSQL io_uring work in the "Blocking I/O, async I/O and io_uring" pgsql-hackers mailing list thread and mentions the work in progress can be seen over in https://github.com/anarazel/postgres/tree/aio .
- One project has io_uring support which needs a 5.9 kernel.
- Another project has io_uring support pending, but its progress into the project has been slow.
- A third project added io_uring support for eventing (but not syscalls) in April 2020, and its "Linux: full io_uring I/O" issue outlines plans to integrate it further.
On the distribution side, several current distributions ship kernels new enough that the io_uring syscalls can be used, though most of them don't pre-package the liburing helper library (you can build it for yourself), while at least one also packages liburing so io_uring is usable out of the box. RHEL 8 does not ship a kernel that supports io_uring (a previous version of this answer mistakenly said it did); according to the Add io_uring support Red Hat knowledge base article (contents is behind a subscriber paywall), backporting of io_uring to the default RHEL 8 kernel is in progress.

Hopefully io_uring will usher in a better asynchronous file-like I/O story for Linux.
(To add a thin veneer of credibility to this answer, at some point in the past Jens Axboe (Linux kernel block layer maintainer and inventor of io_uring) thought this answer might be worth upvoting :-)
The real answer, which was indirectly pointed to by Peter Teoh, is based on io_setup() and io_submit(). Specifically, the "aio_" functions indicated by Peter are part of the glibc user-level emulation based on threads, which is not an efficient implementation. The real answer is in:
io_setup(2), io_submit(2), io_getevents(2), io_cancel(2), io_destroy(2)
Note that the aio(7) man page (http://man7.org/linux/man-pages/man7/aio.7.html), dated 2012-08, says that this implementation has not yet matured to the point where it can replace the glibc user-space emulation:

"this implementation hasn't yet matured to the point where the POSIX AIO implementation can be completely reimplemented using the kernel system calls."
So, according to the latest kernel documentation I can find, Linux does not yet have a mature, kernel-based asynchronous I/O model. And, if I assume that the documented model is actually mature, it still doesn't support partial I/O in the sense of recv() vs read().
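For completeness, here is a minimal sketch of that kernel interface. glibc does not wrap these system calls, so a program either links against libaio or, as here, invokes them through syscall(2) ("data.bin" is a placeholder; real users typically open with O_DIRECT and aligned buffers, since without O_DIRECT io_submit() can still block for buffered file I/O):

    #include <fcntl.h>
    #include <linux/aio_abi.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <time.h>
    #include <unistd.h>

    /* glibc does not wrap these; call them directly. */
    static long io_setup(unsigned nr, aio_context_t *ctx)
    { return syscall(SYS_io_setup, nr, ctx); }
    static long io_submit(aio_context_t ctx, long n, struct iocb **iocbs)
    { return syscall(SYS_io_submit, ctx, n, iocbs); }
    static long io_getevents(aio_context_t ctx, long min_nr, long nr,
                             struct io_event *ev, struct timespec *ts)
    { return syscall(SYS_io_getevents, ctx, min_nr, nr, ev, ts); }
    static long io_destroy(aio_context_t ctx)
    { return syscall(SYS_io_destroy, ctx); }

    int main(void)
    {
        int fd = open("data.bin", O_RDONLY);          /* placeholder file */
        if (fd < 0) { perror("open"); return 1; }

        aio_context_t ctx = 0;
        if (io_setup(8, &ctx) < 0) { perror("io_setup"); return 1; }

        static char buf[64 * 1024];
        struct iocb cb;
        memset(&cb, 0, sizeof cb);
        cb.aio_lio_opcode = IOCB_CMD_PREAD;
        cb.aio_fildes     = fd;
        cb.aio_buf        = (unsigned long)buf;
        cb.aio_nbytes     = sizeof buf;
        cb.aio_offset     = 0;

        struct iocb *list[1] = { &cb };
        if (io_submit(ctx, 1, list) != 1) { perror("io_submit"); return 1; }

        /* ...do CPU work, then reap the completion... */
        struct io_event ev;
        io_getevents(ctx, 1, 1, &ev, NULL);
        printf("read returned %lld\n", (long long)ev.res);

        io_destroy(ctx);
        close(fd);
        return 0;
    }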