Performance question using 11x 8TB Red Drives

Status
Not open for further replies.

tenjuna

Dabbler
Joined
May 5, 2016
Messages
24
I am in the midst of buying 11x 8TB WD Red hard drives to upgrade my 3TB and 4TB drives. I am currently running 12 drives total, spread across the SATA and SAS controllers on my SuperMicro X10SL7-F.

My question is whether anyone can make an educated guess as to whether I would be taking any significant performance hit by running those 11 drives across the two built-in controllers. I have to assume there is some kind of hit, but if it's negligible that would be acceptable to me.

The only other scenario I can see here is to get another SAS controller, either one that matches the one I have or one with 16 ports, but my experience with those is minimal.

Any suggestions are appreciated.

Thanks

Cain
 

tenjuna

Dabbler
Joined
May 5, 2016
Messages
24
I guess the other thing I need to consider here is how I am getting all of my data from the current 12 drives to a new set of 11. I have an 8 port Areca 1210 SATA controller I could probably use, but it's going to be a tight fit getting everything to work for transfer.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I am in the midst of buying 11x 8TB WD Red hard drives to upgrade my 3TB and 4TB drives. I am currently running 12 drives total, spread across the SATA and SAS controllers on my SuperMicro X10SL7-F.

My question is whether anyone can make an educated guess as to whether I would be taking any significant performance hit by running those 11 drives across the two built-in controllers. I have to assume there is some kind of hit, but if it's negligible that would be acceptable to me.

The only other scenario I can see here is to get another SAS controller, either one that matches the one I have or one with 16 ports, but my experience with those is minimal.

Any suggestions are appreciated.

Thanks

Cain
Four of the six motherboard SATA ports are 3Gb/s and will therefore be slower... but nevertheless you won't suffer any significant performance hit by having the drives on the two controllers.

You can replace your old disks with the new disks, one-at-a-time, and the pool size will automatically expand once all of the disks are replaced. See "Replacing Disks to Grow a ZFS Pool" in the documentation.
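From the command line the cycle looks roughly like this - a sketch only, with 'tank' and the ada* device names as placeholders (in FreeNAS you would normally drive this from the GUI):

    # Let the pool grow once every member disk has been upsized
    zpool set autoexpand=on tank

    # Swap one old disk for one new disk, then wait out the resilver
    zpool replace tank ada1 ada7
    zpool status tank

    # Repeat for each remaining disk; the extra capacity appears
    # automatically after the last resilver completes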

How do you have your pool laid out?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
whether I would be taking any significant performance hit by running those 11 drives across the two built-in controllers?
On the contrary: instead of a single PCI-e 2.0 x4 link back to the CPU (and thus RAM) - which is shared with networking, graphics, USB and low-speed I/O - you now have a significant number of drives on a separate PCI-e 3.0 x8 link to the CPU.
Four of the six motherboard SATA ports are 3Gb/s and will therefore be slower
Right.

Realistically, it's not going to be a noticeable improvement, but it won't be a degradation either.
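As a rough back-of-envelope check, assuming ~180 MB/s sequential per drive (an optimistic figure, purely for illustration):

    11 drives x ~180 MB/s          ≈ 2.0 GB/s aggregate
    PCI-e 2.0 x4 chipset uplink    ≈ 2.0 GB/s (shared)
    PCI-e 3.0 x8 to the SAS chip   ≈ 7.9 GB/s

A full sequential load could saturate the shared chipset link by itself, while the SAS controller's link has plenty of headroom.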
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Also, you'll want to test your new drives before you use them. Detailed instructions are available in the "Hard Drive Burn-In Testing" thread.
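A minimal sketch of that process (device names are examples only; badblocks in write mode is destructive, so only run it on disks with no data on them):

    # Extended SMART self-test
    smartctl -t long /dev/ada1

    # Destructive write/read pattern test; -b 4096 keeps the block
    # count within badblocks' 32-bit limit on 8TB drives
    badblocks -b 4096 -ws /dev/ada1

    # Then check for reallocated or pending sectors
    smartctl -a /dev/ada1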
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I guess the other thing I need to consider here is how I am getting all of my data from the current 12 drives to a new set of 11. I have an 8 port Areca 1210 SATA controller I could probably use, but it's going to be a tight fit getting everything to work for transfer.

Back up the data to the 8TB drives - 3 or 4 of them, say - if you can mount a single 8TB drive either in the system (offline a drive, dangle one outside the case, etc.) or in your non-NAS box.

Then build an 11-way Z3 with the remaining 8TB and 4TB drives (see the sketch after these steps).

Restore the data.

Replace the 4TB drives.
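Creating that interim 11-way Z3 would look something like this (device names are placeholders; note that a RAIDZ vdev is sized by its smallest member, so the mixed vdev behaves like 11x4TB until the 4TB drives are later replaced):

    zpool create tank raidz3 \
        da0 da1 da2 da3 da4 da5 da6 ada0 ada1 ada2 ada3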

Alternatively, buy one more drive, then replace the existing drives one at a time. 2x6x8TB Z3 is just as big as 11x8TB Z3, but has twice the IOPS and will resilver twice as fast.
 
Last edited:

tenjuna

Dabbler
Joined
May 5, 2016
Messages
24
Hey Stux, I do like your first idea; I think that will be the easiest way to get this done without a bunch of hardware acrobatics and multiple multi-hour file transfers. I failed to mention my current zpool status, sorry: I am currently running 6x3TB and 6x4TB, each as Z2 in separate pools. So that would preclude me from your second idea.

Though that does raise a separate question: should I be building my new zpool that way? I was just going to knock out an 11-drive Z3 pool and call it a day, but should I be doing a 2x6 Z3 instead? If so, I am not sure how you are equating the two, since wouldn't you be using up 6 of the drives for parity in your scenario (rather than the 3 in mine)? The point of this is to (finally) merge all of my data into a single pool, and I do not want to have to upgrade again after this (at least not for several years). I am more concerned about space than speed, but if there are other factors to consider here I am open to suggestions.

Thanks for the help everyone, this was useful information all the way around.
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Though that does raise a separate question: should I be building my new zpool that way? I was just going to knock out an 11-drive Z3 pool and call it a day, but should I be doing a 2x6 Z3 instead? If so, I am not sure how you are equating the two, since wouldn't you be using up 6 of the drives for parity in your scenario (rather than the 3 in mine)?

I think I must've meant

2x6x8TB z2

Only thing that makes sense.

Still twice the IOPS of a single Z3.
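The arithmetic, in raw capacity before overhead:

    2 x 6-wide Z2 (12 drives): 2 x (6 - 2) x 8 TB = 64 TB
    1 x 11-wide Z3 (11 drives):    (11 - 3) x 8 TB = 64 TB

Same usable space, but the pool stripes across two vdevs, hence roughly double the IOPS.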

Here's another approach.

Replace the 3TB drives one at a time with the 8TB drives. Then replicate your 4TB pool to your 3TB (now 8TB) pool. Remove the 4TB drives, add additional 8TB drives, and add a new vdev to your original pool.

Or another approach: pull your 4TB drives. Make a 6x8TB Z2 pool. Replicate the 3TB pool to it. Pull the 3TB drives, put the 4TB drives back in, and replicate the 4TB pool too. Then pull the 4TB drives and replace them with the other 8TB drives... then add those drives to your pool as a second vdev.

The above will be the fastest method: no resilvering, and only 3 whole drive-set swaps.
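In ZFS terms each 'replicate' step is a recursive snapshot plus send/receive, and the last step is a vdev addition. A sketch, with 'oldpool' and 'tank' as placeholder names:

    # Replicate one pool onto the new one
    zfs snapshot -r oldpool@migrate
    zfs send -R oldpool@migrate | zfs recv -F tank

    # Finally, grow the pool with a second 6-disk Z2 vdev
    # (example device names)
    zpool add tank raidz2 da6 da7 da8 da9 da10 da11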


3 & 4TB disks are still quite decent. Are you sure you don't want to use a 24 bay chassis ;)

4x6 in Z2 is a really good layout. It allows you to spread your vdevs across 3 controllers (each controller supports 8 direct drives), so no vdev has more than 2 disks on any one controller - which means even if you lose a controller, each vdev loses at most 2 disks and stays within Z2's tolerance, so you don't lose your vdevs.
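A sketch of what that spread looks like at creation time - the cXdY names are made up purely to show which controller each disk hangs off:

    # Two disks per controller in each 6-disk Z2 vdev; the other
    # two vdevs follow the same pattern
    zpool create tank \
        raidz2 c0d0 c0d1 c1d0 c1d1 c2d0 c2d1 \
        raidz2 c0d2 c0d3 c1d2 c1d3 c2d2 c2d3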
 
Last edited:

tenjuna

Dabbler
Joined
May 5, 2016
Messages
24
Lol, I am already far above my limit for server size, going to a 24 bay chassis would be problematic. To be honest when Plex Cloud got announced yesterday it made me seriously reconsider the direction of this entire project. What started as a minor hobby project 2 years ago has turned into something a bit more...expensive. At this point though I am pretty much committed, just for the money I spent this year alone.

I had not considered using the Areca controller permanently, but I do like where your head is at on this. I can see the benefit of spreading things out a little, and 4 parity drives have to be better than 3. Plus I get the added performance boost on top of it.
 

tenjuna

Dabbler
Joined
May 5, 2016
Messages
24
I just saw your edit with your alternate approach, and I think that's the way to go... so again, thanks.
 