Which drive configuration is better?

dwchan69

Contributor
Joined
Nov 20, 2013
Messages
141
I know this is going to be a subjective question, but in general, I am planning a new TrueNAS server to function as a file server (home stuff) and iSCSI server for a couple dozen VMs (home lab, a dozen VMs to start). Most of the VMs would be infrastructure-type services for my home lab (e.g. AD, Certificate Services, lightweight SQL, etc.). Of course there would be some write IOPS requirements, but nothing super heavy, as my VDI desktop pools are hosted on another vSAN test bench.
I am planning to purchase 8 x 8TB Seagate IronWolf drives. So I am wondering if it is better to have

1 pool with a single vdev (RAIDZ2 - 6 data drives, 2 parity)
or
1 pool with two vdevs (each vdev as RAIDZ1 - 3 data drives, 1 parity)
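For reference, the two layouts would be built roughly like this. Device names (da0..da7) and the pool name "tank" are hypothetical; on TrueNAS you would normally do this through the GUI, but these are the underlying zpool commands:

```shell
# Option 1: one pool, single 8-disk RAIDZ2 vdev
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

# Option 2: one pool, two 4-disk RAIDZ1 vdevs
zpool create tank raidz1 da0 da1 da2 da3 raidz1 da4 da5 da6 da7
```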
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
This rather crude diagram represents what I think pretty clearly:
[attached diagram: 1607599619470.png]


If you lose 2 disks, the chance of pool loss is 0% with RAIDZ2, but about 43% with the 2x RAIDZ1 setup (once one disk has failed, 3 of the remaining 7 disks sit in the same vdev), so that's worth some thought.
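That figure is easy to check: after the first of the 8 disks fails, the 2x RAIDZ1 pool is lost only if the second failure hits one of the 3 surviving disks in the same 4-disk vdev, out of 7 survivors total:

```shell
# Chance a second failure lands in the same 4-disk RAIDZ1 vdev as the
# first: 3 of the 7 surviving disks share that vdev.
awk 'BEGIN { printf "3/7 = %.1f%%\n", 100 * 3 / 7 }'
# prints: 3/7 = 42.9%
```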

Generally if you want IOPS, you need to use mirrors, so you'll want to consider that instead if you care about VM performance.
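With 8 disks, a mirror layout would be four 2-way mirror vdevs, giving four vdevs' worth of IOPS instead of one or two. A sketch with hypothetical device names (again, the GUI is the normal route on TrueNAS):

```shell
# One pool, four 2-way mirror vdevs (4x the vdev count of a single RAIDZ2)
zpool create tank mirror da0 da1 mirror da2 da3 \
                  mirror da4 da5 mirror da6 da7
```

The trade-off is capacity: mirrors give you 4 disks' worth of usable space versus 6 for RAIDZ2 or RAIDZ1.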
 

dwchan69

Contributor
Joined
Nov 20, 2013
Messages
141
Been doing some more homework, and one other option for me is to just use a simple mirror for each vdev and expand accordingly. Questions I have are:
1. Do all the vdevs in the same pool have to match in size?
2. As I add more vdevs to the pool, does the data on the existing vdevs migrate over to the new vdev to balance the load? If not, can you force it, like a scheduled job?
3. In a mirror vdev configuration, let's say 2 x 8TB: can I update the vdev size by swapping out one of the 8TB drives with a 16TB, let it rebuild, then swap out the other 8TB and let that rebuild?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
In a mirror vdev configuration, let's say 2 x 8TB: can I update the vdev size by swapping out one of the 8TB drives with a 16TB, let it rebuild, then swap out the other 8TB and let that rebuild?
Yes, it works like that.
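The sequence looks roughly like this (device and pool names are hypothetical; each replace triggers a resilver that must finish before you pull the next disk):

```shell
# Let the vdev grow automatically once both disks are larger
zpool set autoexpand=on tank

zpool replace tank ada1 ada3   # swap first 8TB for a 16TB, wait for resilver
zpool replace tank ada2 ada4   # then swap the second 8TB, wait again
# After both resilvers complete, the mirror vdev expands to 16TB.
```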

As I add more vdevs to the pool, does the data on the existing vdevs migrate over to the new vdev to balance the load? If not, can you force it, like a scheduled job?
No/sort of. Data will be written to the pool according to ZFS's allocation strategy, which prefers emptier vdevs, so spreading won't happen for already existing data unless you do something to re-write that data (you could just move it to a new dataset in the same pool and it would be re-written).
The only way to have a truly balanced pool is to start with it empty and then fill it without changing the pool layout.
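A minimal sketch of the re-write trick, with hypothetical dataset names; because datasets are separate filesystems, a move between them copies the blocks, letting ZFS spread them across all current vdevs:

```shell
# Create a fresh dataset in the same pool and move data into it;
# the cross-dataset move re-writes every block.
zfs create tank/rebalanced
mv /mnt/tank/old/* /mnt/tank/rebalanced/
```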

Do all the vdevs in the same pool have to match in size?
No, vdev sizes can be different, but if you want a balanced pool (see the point above), you should make them equal in size.
 