One significant difference between ZFS and a hardware RAID controller, and the source of the legend, which I forgot to mention earlier: ZFS only copies the data, not the entire drive. ZFS skips empty space, whereas hardware RAID exercises the entire drive, not just the space occupied by data.
Yeah, I know how ZFS tracks what is data and what is empty space, unlike both hardware and software RAID; it's one of the reasons I like it so much. (I had pfSense with a GEOM mirror set up, and it constantly sent out resync alerts for every percentage point, detecting some unknown desync occurrence. Bleah. I switched it to ZFS and it's so much better, even though ZFS management isn't in the GUI yet and I had to build my own alerting.)
Also, each of the donor disks in the RAIDZ pool only needs to read 256GB of data instead of a full 1TB, so the resilver is less stressful for the pool.
OK, I see what you mean now. So in such a scenario, the parity calculation and other overhead on a reasonably modern system should be virtually nil, meaning read speed isn't the limiter, and the write speed of the new disk becomes the only bottleneck?
Does that apply in reverse, where writing 256GB across 4 targets realistically gets you an aggregate write speed (technically 6 disks if RAIDZ2)?
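For the "write speed of the new disk is the only bottleneck" scenario, here's a quick back-of-envelope sketch. The 150 MB/s sustained write speed is my assumption, not a number from this thread:

```shell
# Back-of-envelope resilver estimate: 4x1TB raidz pool that's ~25% full,
# so each disk holds roughly 256GB of actual data.
DATA_PER_DISK_GB=256
WRITE_MBPS=150   # assumed sustained write speed of the replacement disk

# Reads from the surviving disks run in parallel, so the replacement
# disk's write speed is the limiting factor.
SECS=$(( DATA_PER_DISK_GB * 1024 / WRITE_MBPS ))
echo "~$(( SECS / 60 )) minutes to resilver ${DATA_PER_DISK_GB}GB"
# prints: ~29 minutes to resilver 256GB
```

Compare that against copying a full 1TB of raw disk, which a hardware RAID rebuild would do regardless of how empty the array is; that's roughly 4x longer at the same write speed.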
I have seen it argued that mirrors give the best performance (I believe one of the proponents was a ZFS dev), but I have also seen it argued that striped RAIDZ gives the best performance.
If you were making a pool out of 4 disks, would you choose 2 mirrors or RAIDZ2? The storage lost should be the same, so the only differences would be the quirks of how they fail (survive the loss of any 2 disks vs. 1 disk per mirror vdev) plus admin ease.
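For what it's worth, the two layouts compare like this. This is a sketch, not run here; `tank` and `da0`..`da3` are placeholder pool/device names:

```shell
# Same 4 disks, ~2 disks of usable space either way.

# Striped mirrors: two 2-way mirror vdevs. Fast resilver (straight copy
# from the surviving half), but losing both disks in one pair kills the pool.
zpool create tank mirror da0 da1 mirror da2 da3

# raidz2: one vdev that survives ANY two disk failures, at the cost of
# parity math and a longer, whole-vdev resilver.
zpool create tank raidz2 da0 da1 da2 da3
```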
I do hope RAIDZ expansion gets released, because that would greatly reduce the management advantage mirrors have, assuming it works correctly, of course.