@Ulysse_31 I have seen the first page of this thread and I suggest reading the following resource.
This resource was originally created by user @Davvo on the TrueNAS Community Forums Archive: https://www.truenas.com/community/resources/zfs-storage-pool-layout.201/download
Hi Davvo ^^
Now that my account is not a "rookie" account anymore ^^' I'll be able to update that first post and add, as an UPDATE, some additional information that was discussed afterwards ^^
But let me summarize it here, and use this occasion to give even more information:
This server is a drop-in replacement for an already existing ZFS "backup" replication server: its config / shape / potential bottlenecks follow our type of data. We set up the first "basic shape & config" of this server role back in ... 2014 ... By "basic shape" I'm talking about the choice of using a PowerEdge server with a PowerVault SAS disk bay in raidz2; at that time, the OS was FreeBSD 9, and we wrote our first tool scripts to do the ZFS syncs.
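For context, those sync scripts were essentially built around incremental zfs send / zfs receive, something along the lines of this minimal sketch (dataset, snapshot and host names here are purely illustrative, not our actual ones):

```sh
#!/bin/sh
# Minimal incremental replication sketch (illustrative names only).
# Assumes the previous snapshot @prev already exists on both sides.
SRC_DS="tank/data"        # source dataset (hypothetical name)
DST_HOST="backup-host"    # replication target (hypothetical name)
DST_DS="backup/data"      # destination dataset (hypothetical name)

SNAP="sync-$(date +%Y%m%d)"
zfs snapshot "${SRC_DS}@${SNAP}"

# Send only the delta since the last replicated snapshot.
zfs send -i "${SRC_DS}@prev" "${SRC_DS}@${SNAP}" | \
    ssh "${DST_HOST}" "zfs receive -F ${DST_DS}"
```

Nothing fancy: snapshot, send the increment, receive it on the other side.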
In 2017, since the production node was running Solaris 11, and to avoid ending up with too-distant ZFS versions, we decided to move the "ZFS replication server" to Solaris 11 as well.
So we built it again with the same structure and shape: a PowerEdge server and a PowerVault drive bay, this time with 12x 4 TB SAS drives. Since this version worked just fine in tandem with the production server, we decided to extend its lifetime: when it reached 90% pool usage in 2019, we added a new PowerVault drive bay in daisy chain and added a new raidz2 vdev to the pool (sketched below). And we doubled down in 2021 with a third bay and a third vdev, again when it reached 90% pool usage.
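For anyone wondering, "adding a new raidz2 vdev" here just means a plain zpool add of the new bay's 12 disks as one raidz2 group, roughly like this (the pool name and device names are placeholders, not our real multipath/SAS ids):

```sh
# Grow the existing pool with one more raidz2 vdev made of the new bay's 12 disks.
# "tank" and the disk names are placeholders for illustration.
zpool add tank raidz2 \
    c1t12d0 c1t13d0 c1t14d0 c1t15d0 c1t16d0 c1t17d0 \
    c1t18d0 c1t19d0 c1t20d0 c1t21d0 c1t22d0 c1t23d0
```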
With both hosts, whether the FreeBSD one or the Solaris one, we never had bottleneck issues during scrubs; those two servers did their job very well.
We recently moved the production server to a TrueNAS system, an M40-HA, on which we replicated the same data structure / data usage that was on the old production server ... so the type of data is still exactly the same.
That is why we decided to give a try to a "ZFS replication server" based on TrueNAS. So again, we kept the same setup & hardware profile: a PowerEdge server, a PowerVault SAS drive bay, and 12x 12 TB drives this time.
Right now, on its latest iteration (the Solaris 11 version), which currently holds 6 years of data retention and 111 TB of data (85% pool usage), scrubs are running at 489 MB/s.
I can totally understand that "depending on the data type and desired IO load, we need to select the pool & hardware setup accordingly"; that of course makes total sense.
But we are talking here about a hardware setup and profile that has been tested against our usage for quite some time now, and we NEVER had issues ^^" so it is well sized for our usage.
I would like to add an extra side note: we have been using ZFS for ... a while now ... I touched my first ZFS filesystem on Solaris 8 (was it in ... 2008? ^^' ) while it was still owned by Sun ... and here, in the company I work for, since 2014. We use ZFS in various other contexts; we have had the good and the bad (the good of snapshotting and replicating ... the bad of dedup :p ... stuff like that ...). We use ZFS under Linux with ZoL ... we use it in a Proxmox cluster environment ... we even use ZFS on OpenIndiana for an archival system ... and ... in all those years of usage, AND within our specific usage, we never had a bottleneck like that during a scrub ...
But let me add an UPDATE note to the first post to avoid further misleading conclusions ^^'