I recently installed TrueNAS 12.0-U8 inside bhyve on a FreeBSD host, with the HBA passed through directly to the VM so that TrueNAS can access the disks natively. The host server has two Xeon E5-2620 CPUs with 12 cores total, and I've given the VM 4 of those cores and 16 GB RAM (the same as my previous bare-metal system, which had a Celeron CPU).
Attached to the HBA are nine 10 TB disks in RAIDZ3 (I understand that's not very space-efficient, but I want maximum redundancy, as I may not have easy or regular access to the server). After all of the overhead, the pool has 50.24 TB of usable space, and I've copied 6.54 TB of data to it.
A scrub of this 6.54 TB of data takes 3 days, with the scrub issuing at less than 40 MB/s, which is slower than the data was originally written.
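For reference, the average rate implied by those numbers (assuming "TB" here means 10^12 bytes; the figure shifts slightly if it's actually TiB) works out to roughly 25 MB/s:

```python
# Rough average scrub throughput implied by the numbers above.
# Assumption: 6.54 TB = 6.54e12 bytes, scrub duration = 3 full days.
data_bytes = 6.54e12           # total data scrubbed
duration_s = 3 * 24 * 3600     # 3 days in seconds
rate_mb_s = data_bytes / duration_s / 1e6
print(f"{rate_mb_s:.1f} MB/s")  # -> 25.2 MB/s
```

So even the "less than 40 MB/s" instantaneous figure is above the long-run average, which is well under what nine spinning disks should manage.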
According to top -SH, the zfskern{dsl_scan_iss} kernel thread is pegging a core at 100%, which I presume is the limiting factor on the scrub. This thread doesn't even appear when a scrub isn't running.
Could this be related to running TrueNAS inside a VM? Is there a tunable somewhere I could adjust to speed it up? I couldn't find much online about the zfskern{dsl_scan_iss} thread, so I'm asking here!
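In case it helps anyone suggest something, these are the scan-related sysctls I've been looking at. The names below are as I understand them from OpenZFS 2.0 on FreeBSD; I'm not certain all of them exist under these exact names on TrueNAS 12, or that changing them is advisable, so this is just what I've been inspecting read-only:

```shell
# Inspect (not change) the scan/scrub-related ZFS sysctls:
sysctl vfs.zfs.scan_legacy        # 1 = legacy block-order scan, 0 = newer sorted scan
sysctl vfs.zfs.scan_vdev_limit    # max in-flight scan I/O bytes per top-level vdev
sysctl vfs.zfs.scrub_min_time_ms  # minimum time per txg spent issuing scrub I/O

# Watch scrub progress and per-thread CPU while the scrub runs:
zpool status -v
top -SH
```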
Thank you in advance for any ideas.