Slow read on dual vdev mirror vs RAIDZ2

ClimbingKId

Cadet
Joined
Aug 25, 2021
Messages
6
A long-standing TrueNAS user here, running 2x 10TB WD Gold drives in a mirror for over two years. The system is TrueNAS CORE 12.0-U8.1, running under ESXi 7 with an LSI 3008 HBA passed through, and 32GB of RAM. It had been running flawlessly and fast across a 10Gb network.

I recently bought two more drives to double the available space. Having taken suitable backups onto a backup TrueNAS machine, I set about building a new pool with the four pre-tested drives arranged as two mirror vdevs (striped mirrors), expecting this to give the best read performance with adequate redundancy.

However, read performance was terrible, and slower than my previous single-vdev mirror. After much head scratching I looked at zpool iostat and saw the pool reading only 101M, with none of the drives hitting more than 25M each.

[Screenshot: zpool iostat during a read, ~101M pool total, ~25M per drive]
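(For reference, per-vdev and per-disk throughput like the numbers above can be watched live with zpool iostat in verbose mode; `tank` below is just a placeholder pool name.)

```
# Show per-vdev and per-disk bandwidth, refreshed every second.
# "tank" is a placeholder; substitute the actual pool name.
zpool iostat -v tank 1
```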


Writes, on the other hand, were very fast, hitting 681M, with each drive at 160M+.
[Screenshot: zpool iostat during a write, ~681M pool total, ~160M+ per drive]


So I destroyed the mirror pool and set up a RAIDZ2 array instead, this time getting much faster reads of about 230-250M.
[Screenshot: zpool iostat during a read from the RAIDZ2 pool, ~230-250M]


I tried the setup a few times but never really got satisfactory read speeds from the dual-vdev mirror. I know the system is capable of more, given the bandwidth on writes and the RAIDZ2 read performance, but file copies across the network show only around 100M/sec, versus RAIDZ2 doubling or even tripling that.

Cached reads hit 10Gb speeds across the network, and the only thing that changed was the addition of two more WD Gold drives. Even my old single-mirror-vdev pool was faster.
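(For reference, one way to take the network out of the picture is a local sequential read of a large, uncached file on the NAS itself; the path below is just a placeholder.)

```
# Read a large file straight off the pool, bypassing SMB/NFS.
# /mnt/tank/bigfile.bin is a placeholder path; use any multi-GB file
# that is not already sitting in ARC (e.g. freshly copied or after a reboot).
dd if=/mnt/tank/bigfile.bin of=/dev/null bs=1M status=progress
```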

Any thoughts or suggestions?

Thanks

CC
 

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
For single-threaded reads from a mirror of HDDs to be faster than from a single drive, it requires deep enough read-ahead and sequential requests of a few megabytes to each disk. The ZFS mirror code tries to do that, but results may vary. To utilize the bandwidth of multiple vdevs you also need deep enough speculative prefetch. You may check the value of `sysctl vfs.zfs.zfetch.max_distance` and potentially increase it to 32 or even 64MB. The TrueNAS autotuner, if enabled, increases it from the default 8MB to 32MB, and newer ZFS versions will increase it to 64MB in combination with some other changes.
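Checking and raising it from the shell looks roughly like this (the value is in bytes; 32MB is shown only as an example):

```
# Show the current speculative prefetch distance (bytes).
sysctl vfs.zfs.zfetch.max_distance

# Raise it to 32MB for testing. This does not survive a reboot;
# to make it persistent, add a sysctl tunable under System -> Tunables in the UI.
sysctl vfs.zfs.zfetch.max_distance=33554432
```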
 

ClimbingKId

Cadet
Joined
Aug 25, 2021
Messages
6
mav@ Thank you. I have not seen this in any of the threads I trawled, so thank you so much. I will give this a try and report back. It may take me a week or so before I can back up and destroy the pool to test.

CC
 