Hardware for my new server

Helios

Dabbler
Joined
Nov 3, 2015
Messages
31
Hi.

I'm picking the hardware for my new server and have some questions regarding performance. The server will work as a home media server with encryption, perhaps with a few light VMs running on top. For the disks, I've settled on WD's Caviar Greens after reading some posts here by cyberjock.

At first I was going to go with a Celeron because I figured that'd be enough for a file server, but after noticing it doesn't have hardware AES, I decided to opt instead for an i3. Would there be much point in going with an i5, beyond the extra cores for VMs?

If later on I decide to add a SATA controller to add more disks, what should I look for to know if FreeNAS will support it?
Also, I understand I should use it with hardware RAID disabled. I've never used a disk controller card, but I'm assuming I'll get some kind of option during boot, right?
Finally, how safe is it to fully saturate the molex connectors from the PSU with SATA adapters and disks?

And the last question: how do these layouts compare in terms of performance and reliability? In all cases I'll be using raidz1.
* 6 x 2 TB
* 5 x 3 TB
* 4 x 4 TB

Thanks.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
The i5 does not support ECC and is therefore not recommended. Encryption isn't recommended either, unless you have a specific legal requirement for it--there are too many ways to lose your data. RAIDZ1 isn't recommended for drives larger than 1 TB. If you need ports for more than the 6 drives a typical recommended motherboard will support, the most common recommendation is to use a SAS HBA like the LSI 9211-8i or the IBM M1015 flashed to the proper firmware.
 

Helios

Dabbler
Joined
Nov 3, 2015
Messages
31
Thanks for your reply.

The i5 does not support ECC
I'm not going to use ECC anyway. The only ECC memory I can get hold of costs four times as much per GB as non-ECC memory, and that's not even including the other hardware to support it. It's just not worth it.

Encryption isn't recommended either, unless you have a specific legal requirement for it--there are too many ways to lose your data.
RAIDZ1 isn't recommended for drives larger than 1 TB.
I don't mean to be rude, but please don't answer questions I didn't ask. I'm well aware of the potential issues with the data format I've decided on and I'm capable of weighing the risks against the benefits for myself.
This thread is only about hardware.

the most common recommendation is to use a SAS HBA like the LSI 9211-8i or the IBM M1015
Holy smokes, those are expensive! I may as well just build another computer for that kind of cash. Surely FreeNAS can handle cheaper cards, right?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
IBM M1015s are under $100 on eBay; the LSI-branded cards may be slightly more. ECC RAM is ordinarily 10-15% more expensive than otherwise-equivalent non-ECC. The hardware recommendations here are premised on the belief that you care about your data--otherwise you'd presumably be using a more lightweight NAS OS than FreeNAS, which is admittedly pretty resource-hungry. You're certainly under no obligation to follow them--it's your system, and you can build it however you choose. But the recommendations are what they are.
 

Helios

Dabbler
Joined
Nov 3, 2015
Messages
31
Ordering things from outside the country is a major hassle where I live. I'm not going to gamble on paying for hardware today only to maybe receive it in two months, and maybe having to pay a 50% import tax (yes, 50% on top of item + shipping; it's that bad). That's how local importers get away with charging frankly outrageous prices for obsolete hardware.

I find FreeNAS to be sufficiently lightweight, especially compared to other storage OSs of similar capabilities. I chose it because ZFS has checksumming and data scrubbing built in. This, combined with single parity, offers a more than sufficient assurance of data integrity (error correction, or at least error detection) for my application, even in the event of random memory errors, though not in the event of non-random memory errors. ECC itself only protects against random errors, and it may or may not protect against specific non-random errors. Therefore, to me, ECC adds little to no value.
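
To illustrate the property I'm relying on, here's a toy sketch of detect-and-correct with per-block checksums and one XOR parity block. It's a deliberately simplified model, not how raidz actually lays out stripes:

```python
import hashlib

# Toy model: data blocks plus one XOR parity block, with a checksum
# kept per block. A simplified stand-in for ZFS checksums + raidz1.
data = [b"block-0", b"block-1", b"block-2"]
checksums = [hashlib.sha256(b).digest() for b in data]

def xor_parity(blocks):
    # XOR all blocks together, padding shorter ones with zero bytes.
    size = max(len(b) for b in blocks)
    out = bytearray(size)
    for b in blocks:
        for i, byte in enumerate(b.ljust(size, b"\x00")):
            out[i] ^= byte
    return bytes(out)

parity = xor_parity(data)

data[1] = b"block-X"  # simulate silent corruption of one block

# Detection: the stored checksum no longer matches the block.
bad = [i for i, b in enumerate(data)
       if hashlib.sha256(b).digest() != checksums[i]]
assert bad == [1]

# Correction: XOR the surviving blocks with the parity block
# to rebuild the corrupted one.
survivors = [b for i, b in enumerate(data) if i != bad[0]]
print(xor_parity(survivors + [parity]))  # b'block-1'
```

A flipped bit on disk is caught by the checksum and rebuilt from parity; what this can't save me from is memory corrupting the data before the checksum is ever computed, which is the non-random case I mentioned.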
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
I don't mean to be rude, but please don't answer questions I didn't ask. I'm well aware of the potential issues with the data format I've decided on and I'm capable of weighing the risks against the benefits for myself.
This thread is only about hardware.
No offense, but I find your response to be rude.

You post a question regarding hardware that you want to use to run FreeNAS. A well-respected contributor responded "in kind" and provided insightful information based solely on the information you provided.

Their mentioning of encryption is not "out of the blue", especially since you noted:
At first I was going to go with a Celeron because I figured that'd be enough for a file server, but after noticing it doesn't have hardware AES, I decided to opt instead for an i3.
So please clarify: why else would anyone require a CPU with AES if not for encryption?

In all cases I'll be using raidz1.
Again, it appears to me that the response was based on your mentioning that you want to run RAIDZ1.

You seem somewhat knowledgeable, so I am sure you are capable enough to read any of the many articles/posts here that provide the same warnings about how you are wanting to configure your system.

For the disks, I've settled on WD's Caviar Greens after reading some posts here by cyberjock.
Perhaps you should go back and read @cyberjock 's "Hardware recommendations (read this first)":
If you post a build in the forum that doesn't follow these recommendations, expect them to be reiterated to you all over again. Quite literally, the people looking at builds and commenting have standards that are relatively in line with this thread. So if you have a build on paper and it doesn't pass this post, you can kind of guess what kind of responses you are going to get.
 

maglin

Patron
Joined
Jun 20, 2015
Messages
299
Helios, you need to read up on scrubbing risks with non-ECC RAM. If that doesn't scare you into ECC, nothing will. Based on your responses to recommended changes and what you want to do, I would have to say ZFS is not for you. I would recommend you build a Linux server or unRAID. Either of those will fill your needs with the hardware you want to run.


Sent from my iPhone using Tapatalk
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Just read these and make your own decision since you don't even want our thoughts.
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
Finally, how safe is it to fully saturate the molex connectors from the PSU with SATA adapters and disks?
Assuming you use quality-made adapters and you only run a single SATA plug off each molex plug, it's safe. If you start trying to power multiple drives off one plug (as with Y-connectors), you are at risk!
 
Joined
Dec 2, 2015
Messages
730
Helios, you need to read up on scrubbing risks with non-ECC RAM. If that doesn't scare you into ECC, nothing will. Based on your responses to recommended changes and what you want to do, I would have to say ZFS is not for you. I would recommend you build a Linux server or unRAID. Either of those will fill your needs with the hardware you want to run.
In particular, read the first post in this thread by Cyberjock. It explains how an error in the RAM could cause data corruption during a scrub. Non-ECC RAM is really only a good choice if the data has little value.
 

Helios

Dabbler
Joined
Nov 3, 2015
Messages
31
No offense, but I find your response to be rude.
I apologize if my phrasing has offended anyone.

You post a question regarding hardware that you want to use to run FreeNAS. A well-respected contributor responded "in kind" and provided insightful information based solely on the information you provided.
I of course appreciate all responses. That said, I think my questions were quite to the point, and I posted them in the hardware section. Commenting on my choice of data format is not helpful, even if the poster meant well.

You seem somewhat knowledgeable, so I am sure you are capable enough to read any of the many articles/posts here that provide the same warnings about how you are wanting to configure your system.
Yup. Like I said, I'm well aware of the potential issues. That's why my questions were regarding the hardware configuration, not the software configuration.

I've read that post already, thanks. My choice of hardware is partly informed by it. But, for the reasons I've already explained, some of that hardware is simply not obtainable for a home user in my country.

Helios, you need to read up on scrubbing risks with non-ECC RAM. If that doesn't scare you into ECC, nothing will. Based on your responses to recommended changes and what you want to do, I would have to say ZFS is not for you. I would recommend you build a Linux server or unRAID. Either of those will fill your needs with the hardware you want to run.
I'm already aware of the "scrub of death" scenario. I am confident that risk(ZFS+scrub+non-ECC memory) <= risk(ext4+non-ECC memory).
Understand that if I could buy ECC memory for a 10% increase in price, I wouldn't give it a second thought. But we're talking about a 300% increase, NOT including the motherboard or the CPU. That's simply unacceptable to protect against what I feel is a remote risk on a home media server.
So, please, enough about ECC.

Just read these and make your own decision since you don't even want our thoughts.
I never said I didn't want opinions. Quite the contrary, actually. But just because I want opinions doesn't mean I will agree with them.

Assuming you use quality-made adapters and you only run a single SATA plug off each molex plug, it's safe. If you start trying to power multiple drives off one plug (as with Y-connectors), you are at risk!
Thanks, that's good to know!
 

maglin

Patron
Joined
Jun 20, 2015
Messages
299
Helios, to answer your questions in order:

Don't use Green/Blue drives. In my personal experience they are just cheap drives, and some good used HGST drives are only $35 each for 2 TB drives. Also, don't use any 3 TB Seagate drives.

Get the Celeron and forget about using FreeNAS encryption. Too many problems, and you can use other encryption software to encrypt only the data that needs encrypting. If it's a lot of data, then get the i3, as AES-NI will still help.

SATA cards are hit and miss. IBM M1015s are cheap and can attach 8 drives if not using an expander.

In the BIOS, set the SATA mode to AHCI. You don't have a controller to worry about yet. Never set up a RAID array to serve to FreeNAS.

The PSU question is a loaded one. Single rail: no problem. Multi-rail: you should balance the load between all the rails.

Lastly, 6 x 2 TB drives, only due to faster resilvers. But I would use RAIDZ2 and go 5 x 3 TB for 9 TB usable. That is only 1 TB less than the RAIDZ1 6 x 2.

I still think another OS would serve you better than ZFS. Hardware is good until it isn't. With almost every other file system, bad RAM can't destroy your entire array. If you run without scrubs to negate that possibility, then you are removing one of the key features of ZFS. I'll stop beating this horse that died months, if not years, ago. Also, ensure you are using an Intel NIC.


Sent from my iPhone using Tapatalk
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
I of course appreciate all responses. That said, I think my questions were quite to the point, and I posted them in the hardware section. Commenting on my choice of data format is not helpful, even if the poster meant well.
Patient walks into the doctor's office:

Patient: Hey Doc, I have a problem... My fingers are all numb...
Doctor: Hmm... I think I see the issue... how about we first address the fact your upper arm is severed so badly that only a few tendons are holding it together?
Patient: Doc! Did I ask you about my upper arm??!! I said my fingers are all numb!
Doctor: Umm, yes... but you see the reason your fingers are numb is because...
Patient: Hey! Are you deaf?!?!? I came in here and asked you a *simple* question about my fingers...Why can't you answer that????

Yeah, kinda feels like that... The difference is that I (we) are here assisting others out of my (our) own desire and free time, not receiving a cent for it. We do it because we want to assist those willing to be assisted. Don't want my (our) advice? So be it; your data, your choice... However, it is my (our) choice as well to provide the correct advice to those willing to listen.

I personally have weighed, measured, and found this discussion and your design/attitude wanting. Therefore, I will disengage and leave you to your own devices.
 

Helios

Dabbler
Joined
Nov 3, 2015
Messages
31
Don't use Green/Blue drives. In my personal experience they are just cheap drives, and some good used HGST drives are only $35 each for 2 TB drives. Also, don't use any 3 TB Seagate drives.
I've already looked for HGST drives, but all I can get locally are laptop drives.
My understanding is that Reds are basically Greens with the wdidle patch applied and with TLER, which would keep ZFS from dropping the drive on an error.
And yeah, I know about the 3TB Seagates. We had two of those at work in a RAID0 for the VM server for a couple years and last month one of them crapped out, trashing everything.

Get the Celeron and forget about using FreeNAS encryption. Too many problems, and you can use other encryption software to encrypt only the data that needs encrypting. If it's a lot of data, then get the i3, as AES-NI will still help.
I keep looking around on the forums and the documentation, but I still don't understand what the danger is. Everyone just says "don't do it" without citing any specific reasons; all I read is about people losing the keys. If that's the case, it would seem I'm no worse off than right now, with LVM+dm-crypt+passphrase stored in KeePassX. Unless there's some specific issue (e.g. "the disks might be left in an inconsistent state during a resilver"), I honestly don't see what the problem is.
You're saying that the i3 isn't worth it even for AES-NI? It's a bit more expensive than the Celeron, yes, but it's not a huge deal.

SATA cards are hit and miss.
How do you mean? In terms of quality? In terms of FreeNAS supporting it?

The PSU question is a loaded one. Single rail: no problem. Multi-rail: you should balance the load between all the rails.
So, let me see if I understood you correctly. Suppose I have a PSU with two SATA power cables with three connectors each, and two molex cables with two connectors each.
If I want to connect four disks it'd be better to put one on each cable rather than two on one SATA cable and two on the other.
If I want to add four more disks I should use one more SATA connector on each SATA cable, and one more molex connector on each molex cable.
Is that correct?

Patient walks into the doctor's office:
Your analogy would be valid if there were a cause-and-effect relationship between my hardware configuration and my software configuration, but there isn't. Whether or not I encrypt my pool or use raidz1 or raidz2 has literally no bearing on the hardware I will pick, beyond the minimum number of disks I will buy.
A more valid analogy would be a patient who complains about numb fingers, and the doctor begins to examine his leg because he entered the office limping.

The difference is that I (we) are here assisting others out of my (our) own desire and free time, not receiving a cent for it.
You realize you're being offended on someone else's behalf, right? I'm not going to deny you that right, but I do think it's a bit absurd.

I personally have weighed, measured, and found this discussion and your design/attitude wanting.
I'm sorry if you find me distastefully curt; some people, both online and offline, do. I don't like wasting my or other people's time. If I think the conversation is not heading anywhere productive I prefer to kill it quickly.
 

Helios

Dabbler
Joined
Nov 3, 2015
Messages
31
After running some numbers, I've found that the most economically efficient configuration is 6 x 3 TB in raidz1, which gives me a theoretical effective capacity of 13.6 TiB. In raidz2 this goes down to 10.9 TiB, which is still more than twice my current used space. I've found one benchmark suggesting that raidz2 writes at roughly 4x the speed of a single disk, and another where the WD Green averages 94 MB/s. Roughly 375 MB/s easily saturates my gigabit LAN.
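
For anyone who wants to check my arithmetic, here's the back-of-the-envelope I used (raw capacities; ZFS metadata and slop overhead ignored):

```python
TB, TiB = 1e12, 2**40  # drives are sold in TB; filesystems report TiB

def usable_tib(disks, size_tb, parity):
    # RAIDZ usable space = (disks - parity) * size, before overhead.
    return (disks - parity) * size_tb * TB / TiB

print(f"6 x 3 TB raidz1: {usable_tib(6, 3, 1):.1f} TiB")  # 13.6
print(f"6 x 3 TB raidz2: {usable_tib(6, 3, 2):.1f} TiB")  # 10.9

# Write throughput vs. the LAN: 4x a single WD Green's ~94 MB/s,
# against gigabit wire speed (~125 MB/s before protocol overhead).
print(f"raidz2 write ~{4 * 94} MB/s vs LAN ~{1e9 / 8 / 1e6:.0f} MB/s")
```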

Therefore, at this time I see no reason not to use raidz2.

EDIT: Alright, I found it. Section 8.1.10.1 of the manual.
> If the following additional steps are not performed before the next reboot, you may lose access to the pool permanently.
Yeah, that sucks. I'm not going to use GELI encryption now. I'll just set up a VM to manage the sensitive data. Not terribly elegant, but this does make it easier to back up, at least.
 
Last edited:

Mr_N

Patron
Joined
Aug 31, 2013
Messages
289
Don't use FreeNAS if you're not planning on using ECC RAM, simple as that... there are plenty of other options out there to use instead.
 

maglin

Patron
Joined
Jun 20, 2015
Messages
299
So, let me see if I understood you correctly. Suppose I have a PSU with two SATA power cables with three connectors each, and two molex cables with two connectors each.
If I want to connect four disks it'd be better to put one on each cable rather than two on one SATA cable and two on the other.
If I want to add four more disks I should use one more SATA connector on each SATA cable, and one more molex connector on each molex cable.
Is that correct?
No. Put 2 on each SATA cable. The nice thing about most PSU manufacturers is that they include enough SATA connectors to handle the load if all the SATA power cables are used. It's good to split up the load if it's a multi-rail PSU, for efficiency purposes. You don't need to bother with the molex cables for your drives.

And on SATA cards: I was talking about FreeNAS having driver support for them.
 

Helios

Dabbler
Joined
Nov 3, 2015
Messages
31
No. Put 2 on each SATA cable. The nice thing about most PSU manufacturers is that they include enough SATA connectors to handle the load if all the SATA power cables are used. It's good to split up the load if it's a multi-rail PSU, for efficiency purposes. You don't need to bother with the molex cables for your drives.
So, in that same example with 8 disks, that would be three disks on each SATA cable and one on each molex cable, and NOT two disks on one molex and none on the other.
Basically, first fill out all SATA connectors, and then distribute the load among the molex cables.
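
To make the bookkeeping concrete, here's a rough sketch of the per-cable spin-up load (the ~2 A at 12 V per drive is my own ballpark assumption; the real figure is on the drive's datasheet):

```python
SPINUP_AMPS = 2.0  # assumed worst-case 12 V draw per drive at spin-up

# The 8-disk example: three drives on each SATA cable, one on each molex.
layout = {"sata-1": 3, "sata-2": 3, "molex-1": 1, "molex-2": 1}

for cable, drives in layout.items():
    amps = drives * SPINUP_AMPS
    print(f"{cable}: {drives} drive(s) -> ~{amps:.0f} A (~{amps * 12:.0f} W) at spin-up")
```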

I have one more question: when I was testing the new HDD layout at work, I found that hardware RAID5 has horrible write performance. RAID5 over three disks gave write performance lower than that of a single disk; three Caviar Blacks could only manage writes at 75 MiB/s, which is absolutely laughable. How can ZFS give higher performance than hardware RAID while adding more parity?
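
For reference, here's the read-modify-write arithmetic usually blamed for parity RAID's write penalty, as a quick sketch (the ~100 IOPS per 7200 rpm drive is my own ballpark figure; controllers that can batch full stripes do better):

```python
# Classic parity-RAID small-write penalty: each random write costs
# read old data + read old parity + write new data + write new parity,
# i.e. 4 disk I/Os per logical write.
def raid5_write_iops(disks, iops_per_disk, penalty=4):
    return disks * iops_per_disk / penalty

print(raid5_write_iops(3, 100))  # ~75 effective write IOPS across 3 drives
```

Whether and how ZFS sidesteps that penalty is exactly what I'm asking.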
 

maglin

Patron
Joined
Jun 20, 2015
Messages
299
So, in that same example with 8 disks, that would be three disks on each SATA cable and one on each molex cable, and NOT two disks on one molex and none on the other.
Basically, first fill out all SATA connectors, and then distribute the load among the molex cables.
That is what I would do if I were using an ATX power supply.

RAID5/6 write performance is not what most power users want. You probably want RAID10 performance, but that costs money, as half your storage space is used for mirroring. If you are looking to FreeNAS and ZFS for read and write performance, you will still only be as fast as your NIC; you'd have to go with a 10G NIC to get over that hump. That being said, my striped/mirrored pool of 4 x 1 TB drives gets something like 190 MB/s write and 320 MB/s read, if I remember correctly. But it doesn't do anything for me, as my NIC is only 1G. It does help all my jails stay happy.

On that subject, I am getting a Dell C2100 with an H700 RAID card that I plan to run all striped mirrors on, to be a monster of an ESXi host. I care more about the read than the write performance; read speed is usually what most RAID strategies excel at anyway. I can't tell you why or if ZFS has better performance than a dedicated RAID controller. I'm not using FreeNAS for HDD performance; I'm using it for data integrity and reliable storage. The other things it does are just icing on my cake. :)
 

Helios

Dabbler
Joined
Nov 3, 2015
Messages
31
RAID5/6 write performance is not what most power users want.
Agreed. I was just curious. Beyond the initial copy, write throughput will mostly be bottlenecked by my Internet connection, so write performance is really not much of an issue. I'm just a bit concerned because I know from first-hand experience that Greens perform quite poorly.
I'll probably pick the disks up tomorrow, so I'll see if I can get a test environment going and post some numbers.

On that subject, I am getting a Dell C2100 with an H700 RAID card that I plan to run all striped mirrors on, to be a monster of an ESXi host.
After I'm done moving my data to the new server, I'll see if I can trade my 4 TB Red for two 1 TB Blacks, and RAID0 those with my own two 1 TB Blacks to get a massively fast VM drive going.
 