Installation. The Linux NVMe driver has been included in the mainline kernel since version 3.3, so no separate driver installation is needed. NVMe devices appear under /dev/nvme*. Additional userspace NVMe tools are available in the nvme-cli package (or nvme-cli-git from the AUR).
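To confirm that the kernel has detected the device, a quick check like the following can help. This is a minimal sketch: the /dev/nvme* glob is standard, and the `nvme list` subcommand comes from the nvme-cli package mentioned above.

```shell
# Check for NVMe block devices; print a message if none are present.
ls /dev/nvme* 2>/dev/null || echo "no NVMe devices found"

# If nvme-cli is installed, list controllers and namespaces in detail.
command -v nvme >/dev/null 2>&1 && nvme list || true
```

On a machine with one NVMe drive you would typically see /dev/nvme0 (the controller) and /dev/nvme0n1 (its first namespace).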
The Completely Fair Queueing (CFQ) I/O scheduler is the current default scheduler in the Linux kernel.
Noop. The noop scheduler is the simplest of the schedulers: rather than reordering or prioritizing specific I/O operations, it simply places all I/O requests into a FIFO (first in, first out) queue.
You can change the scheduler from noop to cfq, deadline, or any other scheduler available on your system, and the change takes effect without a reboot. Once switched, the new I/O scheduler is active immediately, and (depending on your workload) you may see a performance improvement.
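The switch described above is done through sysfs. The sketch below wraps it in a small function; the device name and target scheduler are placeholders you would replace with your own, and SYSFS_ROOT defaults to /sys but can be overridden (e.g. for testing without root).

```shell
#!/bin/sh
# Sketch: switch a block device's I/O scheduler at runtime (no reboot needed).
# Usage: set_scheduler <device> <scheduler>, e.g. set_scheduler sda deadline

set_scheduler() {
    dev="$1"; sched="$2"
    file="${SYSFS_ROOT:-/sys}/block/$dev/queue/scheduler"
    if [ ! -w "$file" ]; then
        echo "cannot write $file (need root, or device missing)" >&2
        return 1
    fi
    echo "$sched" > "$file"   # takes effect immediately
    cat "$file"               # the kernel brackets the active scheduler
}
```

Reading the scheduler file afterwards shows all available schedulers, with the active one in square brackets, e.g. `noop [deadline] cfq`.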
We are writing highly concurrent software in C++ for a few hosts, each equipped with a single Seagate ST9500620NS as the system drive and an Intel P3700 NVMe Gen3 PCIe SSD card for data. While digging around the system (two E5-2620 v2 @ 2.10 GHz CPUs, 32 GB RAM, running CentOS 7.0) to better understand it for tuning our software, I was surprised to spot the following:
[root@sc2u0n0 ~]# cat /sys/block/nvme0n1/queue/scheduler
none
This contradicts everything I had learned about selecting the correct Linux I/O scheduler, such as the official documentation on kernel.org.
I understand that NVMe is the new kid on the block, so for now I won't touch the existing scheduler setting. But the "none" set by the installer strikes me as odd. If anyone has hints on where I can find more information, or can share their own findings, I would be grateful. I have spent many hours googling without finding anything concrete so far.
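One way to investigate is to compare the scheduler line for every block device on the system, as in this sketch. The device names shown in the comments (sda, nvme0n1) are examples from the setup described above; SYSFS_ROOT defaults to /sys and is overridable only for testing.

```shell
#!/bin/sh
# Sketch: print the scheduler line for each block device, so a SATA drive
# (e.g. sda) can be compared side by side with an NVMe drive (e.g. nvme0n1).

show_schedulers() {
    root="${SYSFS_ROOT:-/sys}"
    for f in "$root"/block/*/queue/scheduler; do
        [ -e "$f" ] || continue
        dev="${f#"$root"/block/}"
        printf '%s: %s\n' "${dev%/queue/scheduler}" "$(cat "$f")"
    done
}
show_schedulers
```

On a machine like the one described, this would typically show a bracketed scheduler for the SATA drive and `none` for the NVMe namespace.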