Hardware and configuration suggestions needed

Status
Not open for further replies.

mumer8637

Cadet
Joined
Jan 8, 2018
Messages
6
I am planning to deploy a NAS with FreeNAS soon.
The specs will be (provisional):
Core i3 8100
Gigabyte Z370M D3H
Corsair Vengeance LPX DDR4 16GB (8GBx2) or 32GB (16GBx2)
Corsair VS450 450W
120GB M.2 SSD
Gigabyte Horus P3 Case

The NAS is for a home environment, so I am not going with ECC memory or an 80 Plus Titanium PSU.

As for HDD's, I currently have the following in mind:
WD RED 8TB x5 in RaidZ1
WD RED 4TB x2 in Mirror
WD Purple 4TB x2 in Mirror

The plan is simple: a 40TB RaidZ1 (32TB usable) array for multimedia. The collection (legal) is quite important, hence the one drive of redundancy, but it is not mission critical, so I don't want to spend money on another drive to go RaidZ2. The 4TBx2 RED array will be for important stuff and the Purple array for surveillance.
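For reference, the usable-capacity math works out roughly as follows; a minimal Python sketch, assuming usable space is simply raw capacity minus parity drives (it ignores ZFS metadata/padding overhead and the TB-vs-TiB difference, so real numbers land a bit lower):

def raidz_usable_tb(drive_tb, drive_count, parity):
    # Rough usable space: (drives - parity) * per-drive size.
    if drive_count <= parity:
        raise ValueError("need more drives than parity disks")
    return (drive_count - parity) * drive_tb

print(raidz_usable_tb(8, 5, parity=1))  # 5x8TB RaidZ1 -> ~32 TB
print(raidz_usable_tb(4, 2, parity=1))  # 2x4TB mirror -> ~4 TB
print(raidz_usable_tb(8, 6, parity=2))  # 6x8TB RaidZ2 -> still ~32 TB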

I have a few queries.

First, I will be the primary user. Occasionally, there will be 2-3 users hitting the NAS (worst-case scenario), maybe 10 days out of 365. Can I get away with 16GB of RAM, or should I go with 32GB given the size of the RaidZ1 pool?

Next, I intend to set up Plex servers using Jails. There will be 2 separate Plex servers (I have my reasons). I am not familiar with Jails currently. The i3 8100 has 4 cores. Will these 2 Jails and FreeNAS be able to use all 4 cores, or will I have to dedicate cores to each Jail individually? In short, if one Plex is idle (it will be most of the time), will the second Plex in a different Jail be able to use all 4 cores of my i3?

Lastly, for my RAIDZ1 array, if one of the drives fails and the array goes into degraded mode, how much is the performance reduced (roughly)? And roughly how long will the rebuild take, assuming I am using 16TB of my 32TB usable array?
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
The purpose of spending all the time and effort making FreeNAS work is to ensure your data is safe, correct? So, why wouldn't you go with ECC memory?
There's a huge userbase of Supermicro gear on here... and a comparatively small userbase of, well, everything else. There's a reason for that. Plus, IPMI is wonderful.
Your power supply is undersized for 9 drives. Review: https://forums.freenas.org/index.php?threads/proper-power-supply-sizing-guidance.38811/
RAIDZ1 is dead for large drives. There are threads on here discussing why. But, again, if you want your data safe... RAIDZ2 is the way to go.
FreeNAS itself needs 8GB of RAM, then whatever your jails consume, and then having some space free for ARC is nice. I'd suggest 32GB in this build.
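To put rough numbers on that, here is a back-of-envelope sketch in Python; the 2GB-per-Plex-jail figure is an assumption, not a measurement:

BASE_GB = 8        # FreeNAS/ZFS baseline
PLEX_JAIL_GB = 2   # assumed per Plex jail
JAILS = 2

def arc_headroom(total_gb):
    # Whatever is left after the OS baseline and the jails can go to ARC.
    return total_gb - BASE_GB - JAILS * PLEX_JAIL_GB

for total in (16, 32):
    print(total, "GB installed ->", arc_headroom(total), "GB of headroom for ARC")
# 16GB leaves only ~4GB of slack; 32GB leaves ~20GB, hence the recommendation.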
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Lastly, for my RAIDZ1 array, if one of the drives fails and the array goes into degraded mode, how much is the performance reduced (roughly)? And roughly how long will the rebuild take, assuming I am using 16TB of my 32TB usable array?
I have a server at work with 6TB drives. It takes about 3 days to resilver a drive. So, you better hope you don't have a second failure during the heavy workload of the resilver. That is the reason RAID-z1 is dead. Anyone that cares about the data they are storing is using RAID-z2 or better.
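As a rough model (illustrative only: the per-disk data figure and the effective rate below are assumptions backed out of that 3-day report, and real resilver speed varies wildly with fragmentation and concurrent load):

def effective_rate_mb_s(data_tb, days):
    # Effective throughput implied by resilvering data_tb over the given days.
    return data_tb * 1e12 / (days * 86400) / 1e6

def resilver_days(data_tb, rate_mb_s):
    return data_tb * 1e12 / (rate_mb_s * 1e6) / 86400

rate = effective_rate_mb_s(3.0, 3.0)               # assume ~3TB on a half-full 6TB disk in 3 days -> ~12 MB/s
print(round(resilver_days(4.0, rate), 1), "days")  # ~4TB per disk (16TB data + parity over 5x8TB RaidZ1) -> ~4 days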
The 4TBx2 RED array will be for important stuff
Why not just get another 8TB drive and make the one pool a RAID-z2 and put the 'important' stuff in the same pool where there will now be two drives of redundancy. You can segregate the data with datasets if you are afraid they will fight with each other, but I just use directories.
 

mumer8637

Cadet
Joined
Jan 8, 2018
Messages
6
I have a server at work with 6TB drives. It takes about 3 days to resilver a drive. So, you better hope you don't have a second failure during the heavy workload of the resilver. That is the reason RAID-z1 is dead. Anyone that cares about the data they are storing is using RAID-z2 or better.
Thanks. That helps.

Why not just get another 8TB drive and make the one pool a RAID-z2 and put the 'important' stuff in the same pool where there will now be two drives of redundancy. You can segregate the data with datasets if you are afraid they will fight with each other, but I just use directories.
Because I had plans to maybe throw in another drive for redundancy (i.e., 3x4TB), but I guess I will now consider RAIDZ2 instead anyway.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
May I ask how many 6TB drives do you have and at what Raid level? RaidZ2?
Yes, I use RAID-z2 for everything, except virtualization. That server has 60 drives broken into 6 vdevs of 10 drives each.
The reported capacity is 249TB and there is 119TB allocated, so I guess that means each drive would be a little less than half full.

PS. Those are WD Red Pro drives
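The fill estimate checks out; a one-line sanity check in Python:

allocated_tb, reported_tb = 119, 249
print(round(allocated_tb / reported_tb * 100), "%")  # ~48% allocated, i.e. a bit under half full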
 

mumer8637

Cadet
Joined
Jan 8, 2018
Messages
6
The purpose of spending all the time and effort making FreeNAS work is to ensure your data is safe, correct? So, why wouldn't you go with ECC memory?
There's a huge userbase of Supermicro gear on here... and a comparatively small userbase of, well, everything else. There's a reason for that. Plus, IPMI is wonderful.
Your power supply is undersized for 9 drives. Review: https://forums.freenas.org/index.php?threads/proper-power-supply-sizing-guidance.38811/
RAIDZ1 is dead for large drives. There are threads on here discussing why. But, again, if you want your data safe... RAIDZ2 is the way to go.
FreeNAS itself needs 8GB of RAM, then whatever your jails consume, and then having some space free for ARC is nice. I'd suggest 32GB in this build.
I wouldn't go with ECC memory primarily because of power consumption. Getting ECC memory limits my CPU to something like a Xeon E3-1200 v4/v5, which idles at around 60W. The better-performing i3 8100 idles at only about 20W. There are more reasons too, but idle power consumption alone is a deal breaker for me. Plus, ZFS works just fine without ECC memory in my limited experience (at least so far).
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
Getting ECC memory limits my CPU to something like a Xeon E3-1200 v4/v5, which idles at around 60W. The better-performing i3 8100 idles at only about 20W.
The i3 should also support ECC. Also, where are you getting those idle numbers? Those Xeons have a TDP of 80W, and should idle far below that.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Plus, ZFS works just fine without ECC memory in my limited experience (at least so far)
You wouldn't know if not having ECC was a problem, because there is nothing to notify you that there is a problem until the problem is so bad that your data is corrupted, or your system crashes, or both. The advantage of ECC is that it corrects single-bit errors when they happen AND you find out about it. That gives you the opportunity to correct whatever the problem is.
I have been professionally maintaining computers in one capacity or another since 1991, and things have changed a lot over the years, but the equipment still is not perfect. Even in the data center where I work now, I have had to replace defective memory in 4 servers in the past year. It doesn't happen often, but it is not a good idea to skip a precaution, like wearing a seat belt, just because you haven't had a crash yet.
Gigabyte Z370M D3H
Is there some reason you are using a gaming system board with 'crossfire' support to build a NAS? It doesn't look like a wise choice if low power is your plan.
You should take a look at the guides because the latest hardware is not usually the best choice for compatibility reasons:
FreeNAS® Quick Hardware Guide
https://forums.freenas.org/index.php?resources/freenas®-quick-hardware-guide.7/

Hardware Recommendations Guide (Rev 1e) 2017-05-06
https://forums.freenas.org/index.php?resources/hardware-recommendations-guide.12/
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
I wouldn't go with ECC memory primarily because of power consumption. Getting ECC memory limits my CPU to something like a Xeon E3-1200 v4/v5, which idles at around 60W. The better-performing i3 8100 idles at only about 20W. There are more reasons too, but idle power consumption alone is a deal breaker for me. Plus, ZFS works just fine without ECC memory in my limited experience (at least so far).
Ignoring the fact that that idle number is wrong (I've got a dual E5-2670 2U rackmount box sitting here that idles at 80W... it's *highly* unlikely that a 4-core E3v4/v5 chip is going to get anywhere close to that), you're forgetting that the major power consumer in any FN box is the drives.

Also, your limited experience is... well, limited. Why would all of the major server manufacturers require ECC memory for almost every server if there weren't a good reason? Memory errors do happen... just because you haven't seen it happen doesn't mean much. If you're spending the money and time for FN, you're obviously pretty concerned about the integrity of your data... so why cheap out on one of the core components of the system - one that stands a very real chance of causing data integrity issues?
 

mumer8637

Cadet
Joined
Jan 8, 2018
Messages
6
The i3 should also support ECC. Also, where are you getting those idle numbers? Those Xeons have a TDP of 80W, and should idle far below that.
You wouldn't know if not having ECC was a problem, because there is nothing to notify you that there is a problem until the problem is so bad that your data is corrupted, or your system crashes, or both. The advantage of ECC is that it corrects single-bit errors when they happen AND you find out about it. That gives you the opportunity to correct whatever the problem is.
I have been professionally maintaining computers in one capacity or another since 1991, and things have changed a lot over the years, but the equipment still is not perfect. Even in the data center where I work now, I have had to replace defective memory in 4 servers in the past year. It doesn't happen often, but it is not a good idea to skip a precaution, like wearing a seat belt, just because you haven't had a crash yet.

Is there some reason you are using a gaming system board with 'crossfire' support to build a NAS? It doesn't look like a wise choice if low power is your plan.
You should take a look at the guides because the latest hardware is not usually the best choice for compatibility reasons:
FreeNAS® Quick Hardware Guide
https://forums.freenas.org/index.php?resources/freenas®-quick-hardware-guide.7/

Hardware Recommendations Guide (Rev 1e) 2017-05-06
https://forums.freenas.org/index.php?resources/hardware-recommendations-guide.12/
Ignoring the fact that that idle number is wrong (I've got a dual E5-2670 2U rackmount box sitting here that idles at 80W... it's *highly* unlikely that a 4-core E3v4/v5 chip is going to get anywhere close to that), you're forgetting that the major power consumer in any FN box is the drives.

Also, your limited experience is... well, limited. Why would all of the major server manufacturers require ECC memory for almost every server if there weren't a good reason? Memory errors do happen... just because you haven't seen it happen doesn't mean much. If you're spending the money and time for FN, you're obviously pretty concerned about the integrity of your data... so why cheap out on one of the core components of the system - one that stands a very real chance of causing data integrity issues?
Thanks for all the input. The first post was just some hardware I found easily available.
I didn't have any idea about ZFS and FreeNAS.

The list has changed by now.
You guys have convinced me to go with ECC RAM so thank you I guess. :p

So this is what I'll do now.
Supermicro MB (whatever I can get my hands on easily)
Xeon E3 1220 v5/v6 processor (anything over v2 is power efficient, but v5/v6 idles at less than 30W whereas v3/v4 idles at 30-40W)
32GB DDR4 ECC memory (16GBx2), so I have another 2 slots to upgrade to 64GB in the future
M.2 SATA SSD (or 2.5" if I can't find a board with M.2 support)
As for the PSU, I won't go with a server one. I'll just find a good 650W 80 Plus Gold PSU (or better).

The PSU choice is because I want this to be a quiet system, and I will make a custom 3U rack case with no more than 19" depth so it can fit my network cabinet. :)
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
Xeon E3 1220 v5/v6 processor (anything over v2 is power efficient, but v5/v6 idles at less than 30W whereas v3/v4 idles at 30-40W)
Again, I'd like to look at the source for that idle power amount. If it's this article (my first search result), it's important to realize that is not CPU idle power, but the whole system.

I have not researched this, but if an idle processor consumes more than 10% TDP, I'd want an explanation from the vendor. Modern processors should actually be very good for idle.
 

mumer8637

Cadet
Joined
Jan 8, 2018
Messages
6
Again, I'd like to look at the source for that idle power amount. If it's this article (my first search result), it's important to realize that is not CPU idle power, but the whole system.

I have not researched this, but if an idle processor consumes more than 10% TDP, I'd want an explanation from the vendor. Modern processors should actually be very good for idle.
That is correct. It is actually what the whole system is pulling from the wall which is what I am interested in. Of course, one big factor is that at those numbers, even a good PSU isn't super efficient (~80%).
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
Right, you have to size the power supply to be at the higher efficiency at typical load, usually at 50% of the rated output. But current power supplies do not lose all that much efficiency below that.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
There's a huge userbase of Supermicro gear on here... and a comparatively small userbase of, well, everything else. There's a reason for that.

I'm just curious if *you* know what that reason is. Because if you roll back in time to the earliest days, you'll find all sorts of AMD E350 and other cray-cray builds.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That is correct. It is actually what the whole system is pulling from the wall which is what I am interested in. Of course, one big factor is that at those numbers, even a good PSU isn't super efficient (~80%).

Perhaps you aren't aware of it, but good PSU's go up as far as 96% efficient and have a pretty great curve.

https://en.wikipedia.org/wiki/80_Plus

80Plus Gold PSU's such as the Seasonic Focus 650FX run around 90% efficiency across the curve from roughly 25-75% load. I found a Platinum graph here:

[Image: ax1200ieff400.png — Platinum PSU efficiency curve]


Of course, if you buy a cheap PSU, it isn't going to be certified by 80Plus, so you start to see some inefficiency drag at the low end of the load scale.

The mistake many people make is failing to understand that the PSU itself consumes a certain amount of power, and this throws the low end a little bit, but the fact of the matter is that if you've got a 650W gold PSU at 10% loading and it happens to be only 80% efficient at that rating, you're still talking only 81 watts at the wall. If you manage to find a better PSU with 90% efficiency at 10% loading, that's 72 watts, or 9 watts saved, but the price differential in the PSU's is probably pretty crazy and you'll never make up that savings.
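That arithmetic, as a quick Python sketch (wall draw is just the DC load divided by the PSU's efficiency at that load point):

def wall_watts(dc_load_w, efficiency):
    # Power drawn from the wall for a given DC load at a given PSU efficiency.
    return dc_load_w / efficiency

dc_load = 0.10 * 650                     # 10% load on a 650W PSU = 65W on the DC side
print(round(wall_watts(dc_load, 0.80)))  # ~81W at the wall at 80% efficiency
print(round(wall_watts(dc_load, 0.90)))  # ~72W at the wall at 90% efficiency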

Since you're powering a CPU, RAM, mainboard, network, and a bunch of hard drives, you're probably not going to get yourself down to that 10% utilization unless you have a crazy oversized power supply anyways.

https://forums.freenas.org/index.php?threads/proper-power-supply-sizing-guidance.38811

So.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
I'm just curious if *you* know what that reason is. Because if you roll back in time to the earliest days, you'll find all sorts of AMD E350 and other cray-cray builds.
I would assume it's because there is a plethora of secondhand Supermicro gear available cheap, it's easy to get a board that offers IPMI for easy remote management, the gear performs well and is quite stable (as one would expect from a server platform), etc. Those are just some of the reasons I have 8 Supermicro boxes in the rack in my closet. In reality, the reasons don't much matter... if you're running a product that offers community-only support, you're going to have better luck using hardware that's common. Just searching the forums, there are 6 posts with "Z370" in them... 3 of which I made, since I happen to have a GB Z370 board in service, though not for FN. There are thousands of posts concerning various Supermicro boards.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I would assume it's because there is a plethora of secondhand Supermicro gear available cheap

While that is a contributing factor, there was a lot of secondhand Supermicro gear back in 2011 as well, yet Supermicro was not popular here at the time.

Early on, there were a bunch of what we would now consider batshit-crazy builds. A lot of users came in trying to recycle PC's or, worse, buying new hardware like the AMD APU's, trying to get by on a few GB of RAM, which actually worked fine without ZFS. However, a lot of them would try to go for ZFS on a 4GB platform. Now, I don't really care about any of that; I don't have an issue with repurposing hardware, but I started to notice some pretty bad trends. Among them was the fact that we were seeing a lot of reports of pool loss from these small-RAM systems. And I kept track of this, somewhere in my head, and eventually noticed that only people with 6GB or less of RAM were having issues.

So one day I got frustrated and edited the FreeNAS manual, rewriting the hardware requirements and setting a minimum of 8GB for ZFS. Dru approved of this, and after a while the APU builds became second-class citizens, and then slowly vanished.

People then complained that it was sooooo..... hard..... to find good hardware. We're not a Supermicro shop here, and I don't sell FreeNAS, but there's a lot of experience with a range of gear, including Intel, HP, Dell, Tyan, ASUS, AsRock, etc. and the Supermicro stuff has the unusual upside of being more like "server LEGO's". Try to find a single-CPU Dell or HP 24-drive server with just an HBA (not RAID). Doesn't exist. Supermicro lets you build "unusual" stuff easily. Plus you can put their boards in a standard deskside or other ATX chassis.

In response, I really began pushing Supermicro stuff, especially low end low cost Xeon X9 in the forums, especially eBay deals, and eventually got around to writing https://forums.freenas.org/index.php?threads/so-you-want-some-hardware-suggestions.12276/ ... and that was probably the tipping point. It's interesting to read that again today, looking back on it, because I had to actively advocate for and justify going with server-grade stuff, because so many people had "alternative ideas."

I'm pretty sure the early push I gave towards Supermicro gear was the major driver towards Supermicro being the winner here. So you *could* blame me. And Supermicro should pay me a commission, hah.

Now, in all honesty, there was also a large component of it in the form of this guy "noobsauce80", who came here with (I think) a Norco case, a Gigabyte board, and an Areca RAID controller... and had various problems with all of it. He did lots of investigation and learning, took good advice to heart, eventually went with newer more compatible hardware, and then became a hardware evangelist for things like server-grade hardware and ECC memory. A tireless proponent for doing things The Right Way. Tell me if you know who that is - he has a different handle now.

I'll definitely give him as much credit as he'd like for scaring away the craptacular-spectacle non-server-grade builds, non-ECC users, etc. This was also very important in building a positive feedback loop of good builds.

Of course, every user reporting success and pleasure with their fine setup is incredibly important as well, as this amplifies the positive feedback loop.

Anyways, it was not always the way it is today. It used to suck, with people who just didn't want to hear that their old PC or laptop(!) wasn't going to be a good choice for a fileserver. I remember so many questions about crappy network performance, or SATA issues, only to find out that they had some broken-arse SIS or Realtek ethernet, and these people don't WANT to hear "it's never going to work well." They don't WANT to hear "you shouldn't attach a stack of USB drives" or "that external eSATA port multiplier chassis is a terrible idea." It got very tiring trying to help people for whom the only real help was "buy better hardware."

Repetition fatigue is a thing, and at some point we kinda picked up a reputation for being "not friendly", though usually when I've looked into that, it seems to be along the lines of "those FreeNAS forum a-holes hated on my hardware and wouldn't tell me how to make my crappy mainboard work right for FreeNAS." This has some ups and downs. The upside is that today's users on the forums tend to be the people with workable setups, which has turned into a nice positive feedback loop that reinforces good practices. But it did come at the price of turning off people who were desperate to use their crappy, problematic systems, wouldn't listen to advice, and came away with a bad taste.

I'm not sure how that ever could have worked out, because people just don't like hearing that their junk is never going to work well for the task. But we (especially moderators) got blamed for it anyways, both by the users and also by iXsystems. A heavy price was paid to get to this point. So I hope you appreciate how good we have it today.

So to get back to the statement:

There's a huge userbase of Supermicro gear on here... and a comparatively small userbase of, well, everything else. There's a reason for that.

Was the reason what you thought? ;-)

I was just amused and terribly pleased by your comment, because yes, there's a reason for that. On one hand, it's fair to say that the reason is that Supermicro is an awesome platform; on the other hand, things could have gone very differently, and we could have ended up with a lot of consumer cheapskate builds on random problematic hardware had there been fewer people here willing to debate the finer points of ECC memory, PSU sizing, system cooling, reliable HBAs and network adapters, and all the other fiddly aspects that contribute to engineering excellence.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Now, in all honesty, there was also a large component of it in the form of this guy "noobsauce80", who came here with (I think) a Norco case, a Gigabyte board, and an Areca RAID controller... and had various problems with all of it. He did lots of investigation and learning, took good advice to heart, eventually went with newer more compatible hardware, and then became a hardware evangelist for things like server-grade hardware and ECC memory. A tireless proponent for doing things The Right Way. Tell me if you know who that is - he has a different handle now.
I was only a lurker back then. I spent the first few years just reading and learning before I even created an account. I don't see nearly as many posts from Cyberjock now as I once did.
Anyways, it was not always like the way it is today. It used to suck, with people who just didn't want to hear that their old PC or laptop(!) wasn't going to be a good choice for a fileserver.
Some of the people who want to build crazy systems just don't post here. There are other places they go to get their information, and they don't report problems here. It cuts the FreeNAS development team out of some potentially useful feedback.
 