Expanding storage responsibly?


CJRoss

Contributor
Joined
Aug 7, 2017
Messages
139
I built a system with a z3 configuration and was very disappointed in the speed of access to the disks. I would stay away from it; you can absolutely have 10 to 12 disks in a z2 with no problem. You just need to monitor the disks for faults. I have my NAS configured to email me a daily report. There are scripts for that on this board if you have a look around.
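For anyone searching later, the heart of that kind of daily report is just a cron job that mails the pool status. A minimal sketch, with the pool name "tank" and the e-mail address as placeholders (the scripts posted on this board do much more):

Code:
#!/bin/sh
# Minimal daily pool report, meant to be run from cron.
# "tank" and the e-mail address are placeholders; adjust for your system.
{
  echo "=== zpool status ==="
  zpool status tank
  echo
  echo "=== zpool list ==="
  zpool list tank
} | mail -s "NAS daily report: $(hostname)" you@example.com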

What disappointed you about the z3? Were you running VMs on it?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
What disappointed you about the z3? Were you running VMs on it?
Your results may not be the same as mine, but I will tell you what I tried and you can decide for yourself. I admit up front that what I did may not have been the best way to go about it, but this is what I did at the time.
I used a Supermicro server board and put in one of the dual-core Pentium processors that supports ECC memory; it ran at 3.4 GHz, if I recall correctly. I would give the model number, but this was a good two years (or more) ago and I don't remember. If I find the info, I will come back and add it.
I used 4 of the SATA ports on the system board plus the 8 SAS ports on an HBA card to connect 12 drives, and I set them all up as one big RAID-z3. I chose z3 because I wanted extra fault tolerance: I had purchased all used hard drives from eBay to build this storage pool, and I was concerned that used drives would be more prone to failure. That turned out to be true.
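For context, a 12-wide z3 like that is a single vdev, created roughly like this (device names are placeholders):

Code:
# One 12-disk RAID-z3 vdev: three disks' worth of parity, and
# every read and write funnels through this single vdev.
# Device names (da0..da11) are placeholders.
zpool create tank raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11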
The thing that really stunned me was how slow the read and write performance was. I had more than the required amount of memory, and the CPU was actually one that had been suggested at the time as good for a low-power system. I was just using this as a backup target, and it was so slow that I found it almost unusable. I struggled to make it work better by putting in a better SAS HBA and a faster Xeon CPU with more cores, but the performance was still miserably slow.
I ultimately built a whole new system with new hard drives and I still use 12 drives in my pool, but now I split it into two vdevs (6 drives each) at RAID-z2 and the performance is much better. I lose the capacity of 4 drives to parity instead of 3, but ZFS stripes data across vdevs, so having two vdevs instead of just one makes the throughput noticeably faster. At work, I have a system with four vdevs that is about twice as fast as my home system with two vdevs, and the two-vdev system is more than twice as fast as the old one-vdev system.
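For comparison, the two-vdev layout is made by listing two raidz2 groups in one create command, something like this (device names again placeholders):

Code:
# Two 6-disk RAID-z2 vdevs in one pool. ZFS stripes across
# vdevs, so random I/O gets two vdevs' worth of IOPS.
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11

The capacity math: the single 12-wide z3 keeps 9 of 12 drives for data (75%), while two 6-wide z2 vdevs keep 8 of 12 (about 67%), so the extra speed costs one drive's worth of space.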
So, all that said, I think one of the problems was that I had too many drives in a single vdev, and I also blame the z3 for the extra parity it computes and writes. Compared to z2, there is the equivalent of one more whole drive holding nothing but parity data; that is what lets the pool survive 3 drive failures instead of just 2, but it also means more parity work on every write.
I only have what I have read and my own experience to go by, but I have been working at developing an understanding of ZFS since 2011. Still, I am not an expert and often other members of this board disagree with what I say.
I think it is a pretty widely accepted fact of ZFS that if you want speed, you go with a bunch of smaller vdevs instead of a single big one, since ZFS stripes across vdevs and IOPS scale roughly with the number of vdevs.
You have to do your own research and make your own decision, but I feel like I made a bad choice, and I thought I might help others avoid a similarly poor one.
I think I am rambling now. Let me know if you have a question.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
I ultimately built a whole new system with new hard drives and I still use 12 drives in my pool, but now I split it into two vdevs (6 drives each) at RAID-z2 and the performance is much better.

This is why we recommend striped mirrors for block storage. If one needs extra redundancy, one can go with 3-way mirrors. Yes, one might lose 50% or more of the raw capacity to redundancy, but having more vdevs provides more IOPS.
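A striped-mirror pool built from the same 12 disks would look something like this (device names are placeholders):

Code:
# Six 2-way mirror vdevs from 12 disks: 50% of raw capacity,
# but six vdevs' worth of IOPS. For 3-way mirrors, list three
# disks per "mirror" group instead of two.
zpool create tank \
    mirror da0 da1 \
    mirror da2 da3 \
    mirror da4 da5 \
    mirror da6 da7 \
    mirror da8 da9 \
    mirror da10 da11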
 