Sawtaytoes
Patron
- Joined
- Jul 9, 2022
- Messages
- 221
New 60-drive HDD array (same drives) with just 1 dRAID vdev:
1 x draid2:4d:60c:2s
Code:
# iodepth=1
READ:  bw=1616MiB/s (1694MB/s), 1616MiB/s-1616MiB/s (1694MB/s-1694MB/s), io=15.8GiB (16.9GB), run=10003-10003msec
WRITE: bw=1703MiB/s (1786MB/s), 1703MiB/s-1703MiB/s (1786MB/s-1786MB/s), io=16.6GiB (17.9GB), run=10003-10003msec

# no iodepth
READ:  bw=1627MiB/s (1706MB/s), 1627MiB/s-1627MiB/s (1706MB/s-1706MB/s), io=15.9GiB (17.1GB), run=10004-10004msec
WRITE: bw=1716MiB/s (1799MB/s), 1716MiB/s-1716MiB/s (1799MB/s-1799MB/s), io=16.8GiB (18.0GB), run=10004-10004msec

# time cp -a
Copied 20 GiB at 1489.90 MB/s.
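For reference, a sequential-read job like the one above could be described with an fio job file along these lines. This is a sketch only: the actual job parameters, ioengine, and mount path used for these runs aren't stated in the post, so everything here (the `[seqread]` job name, `/mnt/tank/test` directory, block size) is a guess chosen to match the ~10-second, ~16 GiB runs shown.

```ini
[seqread]
rw=read
bs=1M
size=20G
iodepth=1
runtime=10
time_based
ioengine=libaio
directory=/mnt/tank/test
```

Swapping `rw=read` for `rw=write` would give the corresponding WRITE numbers.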
The only other difference here is 2 hot spares total rather than 1 for each of the 4 vdevs. Not sure that matters much, though, since the spare capacity is just "gap" space and sits completely unused.
To me, this is definitive proof that a single dRAID vdev is not usable for large pools if bandwidth or IOPS matters even a little. If all you care about is storing data and never reading it back, you should be fine.
I'm wondering if it'd perform better with SSDs, but I can't test that until I can transfer my data off. It seems like dRAID is only good if you break it up into separate vdevs, just like RAID-Z :(. It still has tons of advantages over RAID-Z, but 1 vdev isn't gonna cut it!
EDIT: Oh shoot! I just realized I used 4 data drives in this set and 5 in the other one. But 4 data drives should be much faster, right?
This is 9 redundancy groups, whereas before, with 4 vdevs, I had only 8 redundancy groups total, spread out 2 per dRAID vdev.
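The group counts above can be sketched with shell arithmetic: a rough per-vdev group count is (children - spares) / (data + parity). The new layout is the post's draid2:4d:60c:2s; the old per-vdev layout of draid2:5d:15c:1s is inferred from the post ("5 data", "1 [spare] for each of the 4 vdevs", 2 groups per vdev), not stated explicitly as a single line.

```shell
# New layout: 1 x draid2:4d:60c:2s
# (60 children - 2 spares) / (4 data + 2 parity) groups
echo $(( (60 - 2) / (4 + 2) ))          # 9 groups in one wide vdev

# Old layout (inferred): 4 x draid2:5d:15c:1s
# 4 vdevs x (15 children - 1 spare) / (5 data + 2 parity)
echo $(( 4 * ( (15 - 1) / (5 + 2) ) ))  # 8 groups, 2 per vdev
```

So the single wide vdev actually has more redundancy groups (9 vs. 8), yet still benchmarks slower, which is what makes the result surprising.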
Last edited: