Speccing out new FreeNAS box... Intel SP?

Status
Not open for further replies.

Evan Richardson

Explorer
Joined
Dec 11, 2015
Messages
76
I'm down to the last few TB of space in my 48 TB box (32 TB usable), so I'm starting to plan the build for a larger box.

I'm interested in the new Scalable Platform/Precious Metals series from Intel, but I can't find anything relevant in terms of benchmarks. I currently run an E3-1230 v5, which is a 3.4 GHz 4-core/8-thread chip. I'm looking at either the Bronze 3106 or maybe the Silver 4108, and while their benchmarks look good, I can't find anything comparing them to an E3.

My use case is mostly idle, serving media via Plex over NFS, so the requirements are low, but I care about rebuild speed, and I'm not sure how core count and clock speed relate to that. Would the Bronze 3106 (https://ark.intel.com/products/123540/Intel-Xeon-Bronze-3106-Processor-11M-Cache-1_70-GHz) work, or should I go with the Silver 4108 (https://ark.intel.com/products/123544/Intel-Xeon-Silver-4108-Processor-11M-Cache-1_80-GHz)?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Why do you need a new platform? Your current one takes up to 64GB of RAM, which goes an even longer way with compressed ARC than it did before.
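For what it's worth, one way to gauge how much compressed ARC is saving on FreeBSD/FreeNAS is to compare the compressed and logical ARC sizes (sysctl names from memory; verify them on your build):

sysctl kstat.zfs.misc.arcstats.compressed_size      # bytes the cached data actually occupies in RAM
sysctl kstat.zfs.misc.arcstats.uncompressed_size    # what the same data would occupy uncompressed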
 

Huib

Explorer
Joined
Oct 11, 2016
Messages
96
but I care about Rebuild speed

I'm not 100% sure, but if I'm wrong I WILL be corrected here. If rebuild speed is your most important issue, then sets of mirrors would serve you best, as you only copy the surviving mirror disk over one to one. However, that comes at a gigantic price in usable space, and a two-way mirror has no redundancy left during the resilver.

Or use smaller drives, so there is less data to rebuild when you lose a drive. That will also make resilvering quicker.

But why is rebuild speed such an issue if the array is mostly idle? You can continue to use it during the resilver; it will just be slower. And if you use two or more redundant drives in a RAIDZ pool, you should not have a big risk of data loss while resilvering the dead drive...
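To make the trade-off concrete, here is a hedged sketch of the two layouts being compared, with hypothetical pool and device names:

zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5    # striped mirrors: simple, fast resilvers, 50% usable space
zpool create tank raidz2 da0 da1 da2 da3 da4 da5                  # RAIDZ2: better space efficiency, redundancy survives one more failure during a resilver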

Just my 2 cents
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Rebuild speeds are not a problem with any sane configuration. And they're basically never CPU-bound; they're IOPS-bound.
 

Huib

Explorer
Joined
Oct 11, 2016
Messages
96
Thanks Eric (Told you I would get corrected)
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
You weren't wrong. Mirrors are faster than RAIDZ at rebuilds (though RAIDZ is getting improvements in that area); it's just that the difference shouldn't be a problem for the average user.
 

Huib

Explorer
Joined
Oct 11, 2016
Messages
96
Thanks for the kind words. Still, the biggest question for me is why rebuild speed is an issue on a mostly idle pool...

Evan, can you elaborate on that?
 

Evan Richardson

Explorer
Joined
Dec 11, 2015
Messages
76
Why do you need a new platform? Your current one takes up to 64GB of RAM, which goes an even longer way with compressed ARC than it did before.

Good question. The chassis I have now is a 12-bay, and I plan on going to a 36-bay for future expansion. I could move the entire platform over to the new chassis, and having 24 extra slots would let me bring up the new pools and copy the data over, but I would personally rather not touch the existing system: I'd rather migrate everything to a new platform and then retire the existing hardware. Call it paranoia.

Thanks for the kind words. Still, the biggest question for me is why rebuild speed is an issue on a mostly idle pool...

Evan, can you elaborate on that?

Sure. The pool sits mostly idle, but in the new build I plan on adding two hot spares just in case. The new build will have either 8 or 10 TB drives, which will take a massive amount of time to rebuild as it is. My thinking was that since a ZFS rebuild is driven by the CPU rather than a RAID card's SoC, a more powerful CPU would help rebuild speeds. Seeing as I'm going to be using WD Red 5400 rpm drives, I suppose the drives would be the limiting factor here instead of the CPU :) While SSDs would be the fastest, I just don't want this thing to take weeks to rebuild because of a slow CPU, that's all. I do have a plan, though: if I can detect a failing drive soon enough, make a clone of it and then use that to resilver. We used to do that at my old job to cut down on rebuild times on 512 TB pools.

Thanks.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
...
I do have a plan, though: if I can detect a failing drive soon enough, make a clone of it and then use that to resilver. We used to do that at my old job to cut down on rebuild times on 512 TB pools.

Thanks.
One of the things I like about ZFS is the ability to replace a disk in place:

zpool replace POOL SRC_DISK NEW_DISK

This eliminates some of the classic RAID-1/RAID-5 dual-disk failures that bit me in the past (both distant and recent): having to replace one disk, only to find out that a second disk has unknown bad blocks.

Replacing in place with ZFS lets a failing (but not yet failed) disk act as a source for whatever of its data is still good. Even if a second disk fails during the resilver, there is less chance of losing data. That said, RAID-Z2 or RAID-Z3 (or triple mirrors) may not need this precaution. Except now we have 8 TB-12 TB disks!
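A minimal sketch of that in-place replacement (pool and device names here are hypothetical): the failing disk stays attached and is read from wherever its data is still good, with the pool's other redundancy covering the rest.

zpool status tank                  # identify the failing disk, say da3
zpool replace tank da3 da9         # resilver onto the new disk da9, in place
zpool status tank                  # watch progress; da3 is detached once the resilver completes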
 

Huib

Explorer
Joined
Oct 11, 2016
Messages
96
I really think you are overthinking it... an 8 or 10 TB drive will take some time to resilver, but it's I/O-bound... and if it's not in an enterprise environment it should not be a problem, especially with hot spares. I would move over what you have to the new case and be done with it. A faster CPU will not make the drive you need to resilver to any faster...

FYI, it will not take weeks... any decent CPU is faster than the CPU on a RAID card... have some faith in ZFS. It's an enterprise solution, after all ;-)

The source data will not be the problem. The limit will be the drive you write to and whether it can keep up.
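If you want to watch that during a resilver, something like this (hypothetical pool name) shows where the bottleneck is:

zpool status tank          # resilver progress and an estimated completion time
zpool iostat -v tank 5     # per-vdev and per-disk bandwidth, refreshed every 5 seconds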
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
The new scalable Xeon platform makes sense for truly high-end, all-flash PCIe storage.

Not so sure it makes sense for spinning rust.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I do have a plan, though: if I can detect a failing drive soon enough, make a clone of it and then use that to resilver. We used to do that at my old job to cut down on rebuild times on 512 TB pools.

Bad idea. FreeNAS identifies drives in such a way that I don't think that will work, and I wouldn't try it. It's a waste of time, too.

The limiting factor for a resilver is how quickly data can be written to the disk. For example, I have a 320 TB pool at work using 6 TB drives, and it took about 36 hours to resilver a drive simply because of how much data had to be written to it; the system was in no way overloaded by the process.

The specs say the sustained host-to-drive transfer speed on the 6 TB drive is supposed to be 227 MB/s, but the observed speed is a lot less; I figure the sustained write speed was under 100 MB/s. So if you need to write 4 TB of data onto a 6 TB drive, it is going to take a while no matter what you do, and the 8 TB drive is only rated at 205 MB/s. Theoretically, at around 200 MB/s you would be writing about 0.012 TB every minute, or 0.72 TB an hour, but it will NOT go that fast in practice; it will take time.
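A rough back-of-the-envelope version of that estimate (the numbers are illustrative, not measured):

# resilver time ~ data to rewrite / sustained write speed
echo $(( 4 * 1000 * 1000 / 100 / 3600 )) hours    # 4 TB at ~100 MB/s: about 11 hours, best case
echo $(( 4 * 1000 * 1000 / 205 / 3600 )) hours    # the same 4 TB at the rated 205 MB/s: about 5 hours
# Real resilvers run slower than either figure because the writes are not purely sequential.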

Honestly, if you are concerned about rebuild time, you should go with smaller drives. I resilver a 2 TB drive at home in about 4 hours, and I will probably never use anything larger than a 4 TB drive, or at least not for a long time.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
FreeNAS identifies drives in such a way that I don't think that will work
It works; it's just unnecessary and the wrong approach to drive failures.
 