This is a question concerning disk I/O performance for a ZFS pool.
I have a server; its main hardware components are:
- 4U Supermicro chassis, 36 drive bays (24 front, 12 rear), front and rear SAS expanders
- 36 Seagate Exos 12TB SAS drives
- AVAGO Invader SAS controller, AVAGO MegaRAID SAS FreeBSD mrsas driver version 07.709.04.00-fbsd, in JBOD mode
- Supermicro X11-series motherboard, dual 12-core Intel Xeon Silver (Scalable) processors, 192 GB DDR4 ECC RAM
This server was previously running Windows Server 2016, with the drive array managed by the MegaRAID card (RAID 60). I have now repurposed it as a NAS running TrueNAS CORE 13.0-U1.
I switched the SAS RAID card to JBOD mode so the drives are passed directly to TrueNAS, and configured a single storage pool made up of three RAIDZ2 vdevs of 12 drives each.
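For context, a rough command-line equivalent of that layout is below; the pool name and device names are placeholders, since TrueNAS actually builds the pool through its web UI:

```sh
# Sketch of the pool layout only -- "tank" and da0..da35 are placeholders;
# TrueNAS creates the real pool through its web UI using partition labels.
zpool create tank \
  raidz2 da0  da1  da2  da3  da4  da5  da6  da7  da8  da9  da10 da11 \
  raidz2 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 da22 da23 \
  raidz2 da24 da25 da26 da27 da28 da29 da30 da31 da32 da33 da34 da35
```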
Running fio I get around 2 GiB/s write speed to the pool. The disk I/O graphs show this as roughly 70 MiB/s per drive, and 70 MiB/s times 30 data drives (36 drives minus the 6 parity drives' worth of bandwidth across the three RAIDZ2 vdevs) comes to about 2 GiB/s, so the numbers seem to add up.
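For completeness, the kind of fio invocation I'm describing looks roughly like this (the dataset path, block size, per-job size and job count are representative, not my exact parameters):

```sh
# Representative sequential-write test; the path, block size, per-job size
# and job count are illustrative, not the exact parameters I used.
# Total data written (4 x 64 GiB) exceeds RAM so the ARC can't absorb it.
fio --name=seqwrite --directory=/mnt/tank/bench \
    --rw=write --bs=1M --size=64G --numjobs=4 \
    --ioengine=posixaio --group_reporting
```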
If I benchmark an individual drive I get about 250 MiB/s at the outer tracks, 200 MiB/s in the middle and 120 MiB/s at the inner tracks. Taking 200 MiB/s as a representative per-drive rate, 30 drives would give roughly 6 GiB/s. Is that the performance I should be seeing?
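For the per-drive figures: FreeBSD's diskinfo transfer test reports throughput at the outside, middle and inside of the platter, which is one way to reproduce this kind of measurement (the device name is a placeholder):

```sh
# diskinfo -t runs sequential transfer-rate tests at the outside, middle
# and inside zones of the disk; da0 stands in for whichever drive is tested.
diskinfo -t /dev/da0
```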
If 6 GiB/s is a realistic target, do you have any recommendations for getting closer to it?
Regards, Sean
P.S. This is my first post here; if I am missing some important data needed to advise on this, please let me know.