NVME(4)                    Device Drivers Manual                    NVME(4)

NAME
     nvme - Non-Volatile Memory Host Controller Interface

SYNOPSIS
     nvme* at pci? dev ? function ?

DESCRIPTION
     The nvme driver provides support for NVMe (NVM Express) storage
     controllers conforming to the Non-Volatile Memory Host Controller
     Interface specification.  Controllers complying with specification
     versions 1.1 and 1.2 are known to work.  Other versions should also
     work for normal operation, with the exception of some pass-through
     commands.

     The driver supports the following features:

     -   controller and namespace configuration and management using
         nvmectl(8)

     -   highly parallel I/O using per-CPU I/O queues

     -   PCI MSI/MSI-X attachment, with INTx for legacy systems

     On systems supporting MSI/MSI-X, the nvme driver uses per-CPU I/O
     queue pairs for lockless and highly parallelized I/O, and the
     interrupt handlers are scheduled on distinct CPUs.  The driver
     allocates as many interrupt vectors as are available, up to the
     number of CPUs plus one.  MSI supports up to 32 interrupt vectors,
     while MSI-X supports up to 2048.

     Each I/O queue pair has a separate circular command buffer.  The
     NVMe specification allows up to 64K commands per queue; the driver
     currently allocates 1024 entries per queue, or the controller
     maximum, whichever is smaller.

     Command submission always happens on the current CPU, while the
     command completion interrupt is handled on the CPU corresponding to
     the I/O queue ID - the first I/O queue on CPU0, the second on CPU1,
     and so on.  Admin queue command completion is handled on CPU0 by
     default.  To keep lock contention to a minimum, it is recommended
     to keep this assignment, even though the interrupt handlers can be
     reassigned using intrctl(8).

     On systems without MSI, the driver uses a single hardware interrupt
     handler for both admin and regular I/O commands.  Command
     submission happens on the current CPU, and the command completion
     interrupt is handled on CPU0 by default.  This leads to some lock
     contention, especially on command ccbs.
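     The attachment line shown in the SYNOPSIS is a kernel configuration
     directive.  As a sketch, a kernel configuration using this driver
     typically also attaches the ld(4) logical disk driver to each NVMe
     namespace; the "ld* at nvme? nsid ?" line below is an assumption
     based on common NetBSD kernel configurations and should be checked
     against the GENERIC configuration for your release:

```
# Kernel configuration fragment (sketch, not a complete kernel config).
nvme*   at pci? dev ? function ?   # NVMe storage controller on the PCI bus
ld*     at nvme? nsid ?            # assumed: logical disk per NVMe namespace
```

     With such a configuration, each namespace of an attached controller
     appears as an ld(4) disk device.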
     The driver offloads command completion processing to a soft
     interrupt in order to increase total system I/O capacity and
     throughput.

FILES
     /dev/nvme*    nvme device special files used by nvmectl(8).

SEE ALSO
     intro(4), ld(4), pci(4), intrctl(8), MAKEDEV(8), nvmectl(8)

     NVM Express, Inc., NVM Express - scalable, efficient, and industry
     standard, http://nvmexpress.org/, 2016-06-12.

     NVM Express, Inc., NVM Express Revision 1.2.1,
     http://www.nvmexpress.org/wp-content/uploads/NVM_Express_1_2_1_Gold_20160603.pdf,
     2016-06-05.

HISTORY
     The nvme driver first appeared in OpenBSD 6.0 and in NetBSD 8.0.

AUTHORS
     The nvme driver was written by David Gwynne <email@example.com> for
     OpenBSD and ported to NetBSD by NONAKA Kimihiro <nonaka@NetBSD.org>.
     Jaromir Dolecek <jdolecek@NetBSD.org> contributed to making the
     driver MPSAFE.

NOTES
     At least some Intel NVMe adapter cards are known to require a PCIe
     generation 3 slot; such cards do not even probe when plugged into
     an older generation slot.

     The driver has also been tested and confirmed to work with
     emulated NVMe devices under QEMU 2.8.0 and Oracle VirtualBox
     5.1.20.

NetBSD 8.0                     April 27, 2017                     NetBSD 8.0