firesyde424
Contributor
- Joined
- Mar 5, 2019
- Messages
- 155
As part of the buildout of a high-performance AMD EPYC-based TrueNAS Core system, I ran into an issue where only 16 of the 24 NVMe drives were being detected by the OS. The server's specs are as follows:
- Dell PowerEdge R7525
- 2 x AMD EPYC 7H12 CPUs, 128 cores/256 threads total @ 2.6 GHz
- 1TB DDR4 Registered ECC RAM @ 3200 MT/s
- 24 x 30.72TB Micron 9400 Pro U.3 NVMe drives
- 2 x Chelsio T62100-LP-CR dual-port 100GbE network adapters
- TrueNAS Core 13.5
The tunable is "hw.nvme.num_io_queues=64". We played with different values and discovered that anything above 4 allowed all of the drives to be detected. A quick Google search of that tunable doesn't actually turn up much in the way of useful information.
Does anyone know what this tunable does, and why the default setting in TrueNAS Core wouldn't allow more than 16 NVMe drives to be detected?
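For anyone else who hits this: my working theory (not confirmed, just pieced together from the FreeBSD nvme(4) driver docs) is that each NVMe I/O queue pair a controller sets up consumes an MSI-X interrupt vector, and with 24 controllers on a 256-thread box the default per-CPU queue allocation can exhaust the available vectors, so controllers that come up later fail to attach. Capping the queue count per controller via the loader tunable would free up vectors. A sketch of the relevant loader.conf line (the value 64 is what worked for us; treat anything else here as an assumption to verify against nvme(4)):

```
# /boot/loader.conf
# Cap the number of I/O queue pairs the nvme(4) driver requests per
# controller, instead of the default of one queue per CPU. Fewer queues
# per controller means fewer MSI-X vectors consumed across 24 drives.
hw.nvme.num_io_queues=64
```

If this theory is right, the per-controller throughput trade-off should be negligible at 64 queues; the symptom to watch for is drives silently missing from `nvmecontrol devlist` at boot.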
***Edit***
I think I just figured out why the tunable was removed on upgrade. The tunable is actually set in the bootloader, and I'm guessing the update either modifies or entirely replaces the /boot/loader.conf file?
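Assuming that's what is happening, the fix on TrueNAS Core is to stop editing /boot/loader.conf by hand and instead register the setting through the web UI, which re-applies managed tunables after an update. Something along these lines (field names per the Core UI; the value is the one from this thread):

```
# TrueNAS Core web UI: System → Tunables → Add
Variable: hw.nvme.num_io_queues
Value:    64
Type:     loader
```

A "loader" type tunable gets written into the loader configuration the middleware controls, so it should survive upgrades in a way a hand-edited loader.conf does not.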