Thoughts on mixing drive types in a pool?


IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
I think I'm probably going to go with 10x4TB drives in a raidz2 configuration. WD Reds are qualified for 24x7 NAS operation, so they seem a good bet, but:

1. WD say don't use more than 5 of them together (vibration?)

2. Maybe it's better /not/ to use drives from just one vendor. I notice that Backblaze use drives from WD, Hitachi, and Seagate (http://blog.backblaze.com/2013/02/20/180tb-of-good-vibrations-storage-pod-3-0/), though they don't say if they use them in the same storage pod.

Any thoughts?

 

warri

Guru
Joined
Jun 6, 2011
Messages
1,193
1. I think this topic also came up in the WD Red vs. Green topic over in the Hardware forums. All I know is that there are people running more than 5 Reds without a problem.

2. Some people argue that you should mix vendors or production batches to minimize the risk of all drives failing at the same time. Others don't share this opinion, though. I did mix my drives, but I haven't seen any scientific proof yet that backs me up on that. You'll probably be fine with drives from the same vendor if you give them a good test in the beginning, before putting them into production.
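By "a good test" I mean something like a full surface write/read pass plus SMART self-tests before the pool ever sees data. Here's a rough sketch of the idea in Python, just wrapping the usual tools; it assumes smartmontools and badblocks are available, and the device names are placeholders (the badblocks write pass erases the drives):

```python
#!/usr/bin/env python3
"""Rough burn-in sketch: destructive badblocks pass plus SMART long test.
Assumes smartmontools and badblocks are installed; device names below are
placeholders. The -w pass ERASES the drives, so do this before pool creation."""
import subprocess

DRIVES = ["/dev/ada0", "/dev/ada1"]  # placeholder device names

for dev in DRIVES:
    # Full destructive write/read pass over the surface (-w writes test
    # patterns, -s shows progress). Expect many hours per 4TB drive.
    subprocess.run(["badblocks", "-ws", dev], check=True)
    # Kick off an extended SMART self-test; it runs on the drive itself in
    # the background, so check back on it a few hours later.
    subprocess.run(["smartctl", "-t", "long", dev], check=True)

# Later: read the SMART attributes and self-test log yourself. Any
# reallocated or pending sectors at this point and I'd RMA the drive.
for dev in DRIVES:
    subprocess.run(["smartctl", "-a", dev])
```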
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
warri said:
1. I think this topic also came up in the WD Red vs. Green topic over in the Hardware forums. All I know is that there are people running more than 5 Reds without a problem.

2. Some people argue that you should mix vendors or production batches to minimize the risk of all drives failing at the same time. Others don't share this opinion, though. I did mix my drives, but I haven't seen any scientific proof yet that backs me up on that. You'll probably be fine with drives from the same vendor if you give them a good test in the beginning, before putting them into production.


Thanks for the pointer. 15 pages left me with no real conclusions :) Other than that wdidle'd Greens (with the head-parking timer adjusted via wdidle3) are cheaper and maybe just as good. I didn't see anyone suggest that mixing vendors/types might be a bad idea, so I'm tempted.

Not sure how to do a good load test before putting them into production. In fact I started a thread here asking specifically that question a little while ago.
 

ZFS Noob

Contributor
Joined
Nov 27, 2013
Messages
129
My guess on the "up to 5 drive NAS" specification is that WD assumes if you've got larger needs then you're less price-sensitive, and therefore should be choosing a more expensive product.
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
ZFS Noob said:
My guess on the "up to 5 drive NAS" specification is that WD assumes if you've got larger needs then you're less price-sensitive, and therefore should be choosing a more expensive product.


Aye, that's possible, but I don't want to risk $1800 of my own money on that assumption.

btw, the price difference between the Red and Green 4TB is now only $10 (Amazon). And I note Backblaze use WD Reds, which I think decides the Red v Green argument for me.

But, to mix vendors. Aye, there's the rub.
 

ZFS Noob

Contributor
Joined
Nov 27, 2013
Messages
129
If'n it were me, and I were building a 16-drive FreeNAS system tomorrow, I'd consider:
  • WD Reds
  • Seagate Constellation
And not worry about it. If you're paranoid you could buy half of each and set up mirrors across the different vendors, so if you end up with a bad batch from one vendor a whole mirror never goes down. I've never done that (I tend to have arrays of identical drives), but it makes some sense, especially if ZFS doesn't care about small differences between drives...
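To make the pairing concrete, here's a rough sketch of the layout I mean: each mirror vdev gets one drive from each vendor. The device names are placeholders, and the script just assembles the zpool create command so you can eyeball the layout before running anything for real:

```python
#!/usr/bin/env python3
"""Sketch of vendor-mixed mirror vdevs: each mirror pairs one drive from
vendor A with one from vendor B, so a bad batch from a single vendor never
takes out both halves of a mirror. Device names are placeholders."""
import subprocess

vendor_a = ["/dev/ada0", "/dev/ada1", "/dev/ada2", "/dev/ada3"]  # e.g. WD Reds
vendor_b = ["/dev/ada4", "/dev/ada5", "/dev/ada6", "/dev/ada7"]  # e.g. Seagates

cmd = ["zpool", "create", "tank"]
for a, b in zip(vendor_a, vendor_b):
    cmd += ["mirror", a, b]  # one two-way mirror vdev per cross-vendor pair

print(" ".join(cmd))                # review the layout first...
# subprocess.run(cmd, check=True)   # ...then uncomment to actually create it
```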
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
Why mirrors? Why not a mixed raidz2 pool?

Backblaze seem to be liking the Hitachi 5k4000 HDS5C4040ALE630, so I'm thinking about 5 of those and 5 WD Reds.
 

ZFS Noob

Contributor
Joined
Nov 27, 2013
Messages
129
IanWorthington said:
Why mirrors? Why not a mixed raidz2 pool?
Because mirrors are faster, mirrors don't take days or weeks to rebuild when a drive fails, and my capacity requirements aren't such that I need more density per server than I can get with the 50% capacity loss of RAID1.

:)

Actually, I should add to that. My usage isn't streaming movies, or anything like that. I've got a bunch of virtual machines, and some backup servers that I'm supporting. I care much more about IOPS than I do about throughput, and slowness is a harder (and more likely) problem to solve as I grow than loss of capacity. Plus, disk usage growth can be plotted pretty well, but web site popularity can triple in a matter of an hour (ask me how I know...)
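Back-of-the-envelope numbers for a 10x4TB box, using the usual rules of thumb (usable space ignoring ZFS overhead, ~150 random IOPS per spinning disk, random IOPS scaling roughly with the number of vdevs -- rough assumptions, not measurements):

```python
# Rough comparison of 10 x 4TB as one raidz2 vdev vs five 2-way mirrors.
# Rules of thumb only: ignores ZFS overhead, assumes ~150 random IOPS per disk
# and that random IOPS scale roughly with the number of vdevs.
DRIVES, SIZE_TB, IOPS_PER_DISK = 10, 4, 150

raidz2_capacity = (DRIVES - 2) * SIZE_TB       # 32 TB usable, 2 drives of parity
raidz2_iops = 1 * IOPS_PER_DISK                # one vdev -> ~150 random IOPS

mirror_capacity = (DRIVES // 2) * SIZE_TB      # 20 TB usable, half the raw space
mirror_iops = (DRIVES // 2) * IOPS_PER_DISK    # five vdevs -> ~750 random IOPS

print(f"raidz2 : {raidz2_capacity} TB usable, ~{raidz2_iops} random IOPS")
print(f"mirrors: {mirror_capacity} TB usable, ~{mirror_iops} random IOPS")
```

Which is why, for VM and backup workloads, I'm happy to trade the capacity.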
 