Suggestions for FreeNAS build

macx979

Dabbler
Joined
Sep 25, 2019
Messages
41
Hi,

This is my first post. I recently decided to replace my current storage server, and in the process I came across FreeNAS, which sounds like a reasonable way to go.

Currently I have an HP MicroServer Gen8 running with an E3-1265L, an HP Smart Array P420 RAID controller and 4x Seagate 8TB IronWolf drives in RAID5.
Basically I really like this box, but it's definitely too noisy to run at my workspace. I also have a 19" rack in this room, so I thought about building a 19" 4U storage server based on FreeNAS. 4U mainly because it gives me more flexibility in terms of coolers and fans to reduce the noise to a minimum.

The best feature of this box is the integration of RAID into the Integrated Lights-Out (iLO) management interface. In case an HDD dies, it notifies me via email and also indicates the failure via a red LED on the front of the case. So I only need to replace the HDD and the rebuild starts. Apart from the LED, I understood that FreeNAS works the same, right?


In terms of hardware, some is already lying around or could be re-purposed from the HP Gen8:

Supermicro X9SCM-F
Xeon E3-1230 v2
32GB ECC RAM
4x Seagate 8TB drives
HP Smart Array P420 and HP Smart Array P411
32GB USB stick as boot media

So my goal is to have a 19" storage server which is as silent as possible. Most likely I will never exceed the usable 24TB of storage, but being somewhat future-proof may be a smart decision. The Supermicro board has two 1GbE NICs, but adding a 10GbE Intel card would also be possible if needed.

Regarding the case, I came across these two:
Inter-Tech 4U 4408
Inter-Tech 4U 4410
If you know alternatives, your suggestions are very welcome.

So, does this setup work, or are there any flaws I don't see yet? Also, the board has 4 SATA2 and 2 SATA3 ports. Are these usable, or should I get an HBA controller? Or does it even make sense to use one of the HP Smart Array controllers in HBA mode (they are on the compatibility list for FreeNAS)?

thx and Best
MacX
 
Joined
Oct 18, 2018
Messages
969
The best feature of this box is the integration of RAID into the Integrated Lights-Out (iLO) management interface. In case an HDD dies, it notifies me via email and also indicates the failure via a red LED on the front of the case. So I only need to replace the HDD and the rebuild starts. Apart from the LED, I understood that FreeNAS works the same, right?
If you set up regular short and long SMART tests and scrubs, you'll get email notifications about those as well, which will often give you a heads-up about drives having issues before they fail. The red light won't come on, but you can replace the drive early to prevent the risk of data loss.

supermicro x9scm-f
I use this board in my backup system; it's a totally fine choice if you don't need more than 32GB of memory.

HP Smart Array P420
You should only use this card if you can flash it to IT mode. Any kind of hardware between your OS and your drives which obscures direct access to the drives can be an issue: ZFS and many of the monitoring tools require such access to work reliably. Without it, they may work for a time and then fail in a catastrophic way. Some folks have tried using cards in "JBOD" or "passthrough" mode; I can't say for sure whether this is sufficient for every card. As far as I can tell, though, the IT-mode flash is the most reliable method. If this card cannot be flashed, you can pick up a used HBA off eBay for ~$40-60, depending on which one you buy.

It looks like these 4U chassis have only 8 and 10 drive bays respectively? Is that so? For 4U that is quite low, though I suspect it has to do with noise reduction. Chassis with more drives often put fans behind the drives to keep them cool, and thus often use static-pressure fans to push air through the tight spaces. I suspect you may have to trade higher drive temps and lower drive density if you are really focused on noise.

There was a video tutorial on how to make a higher-density chassis like the 3U or 4U Supermicro chassis quiet, but I can't find the link offhand. The person who did the tutorial used it for FreeNAS, though I cannot vouch for the quality of the results.

Edit: I found the link.
 
Last edited:

macx979

Dabbler
Joined
Sep 25, 2019
Messages
41
Thanks for your answers.

If this card cannot be so flashed you can pick up a used HBA off ebay for ~40-60 depending on which one you buy.
Or are the 4 SATA2 and 2 SATA3 ports already sufficient for my build? I am pretty sure that I will never exceed 6 HDDs of 8TB each, so at least the number of ports seems sufficient. But how about the performance, especially on the SATA2 ports?

It looks like these 4u chassis have only 8 and 10 drive bays respectively? Is that so? In 4U that is quite low; though I suspect it has to do with noise reduction.
That's right. Since I don't need more than 6 HDDs max, I'd rather choose a case with fewer trays in order to lower the noise.

One more thing: is having the boot system on a USB stick a common way to go? Does using an SSD instead have any advantages? A disadvantage would be that I lose a SATA port.
If I use a USB stick, is it possible to simply keep a backup copy of that drive and, in case of a failure, replace it with the backup?
 
Joined
Oct 18, 2018
Messages
969
Are the 4 SATA2 and 2 SATA3 ports already sufficient for my build? I am pretty sure that I will never exceed 6 HDDs of 8TB each, so at least the number of ports seems sufficient. But how about the performance, especially on the SATA2 ports?
SATA2 is 3Gbps, which is far more than a single spinning disk will muster, so that is fine.
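A rough back-of-the-envelope check (the ~250 MB/s HDD figure is my assumption for a fast modern 7200 rpm drive, not a measured value for the IronWolf):

```python
# SATA2 line rate vs. typical HDD sequential throughput (rough sanity check).
# The HDD figure below is an assumption, not a spec-sheet number.

SATA2_LINE_RATE_GBPS = 3.0     # gigabits per second on the wire
ENCODING_EFFICIENCY = 8 / 10   # SATA uses 8b/10b encoding
HDD_SEQ_MBPS = 250             # assumed max sequential speed, MB/s

sata2_payload_mbps = SATA2_LINE_RATE_GBPS * 1000 * ENCODING_EFFICIENCY / 8
print(f"SATA2 usable bandwidth: ~{sata2_payload_mbps:.0f} MB/s")
print(f"Headroom over a single HDD: ~{sata2_payload_mbps - HDD_SEQ_MBPS:.0f} MB/s")
```

So even after encoding overhead, SATA2 leaves comfortable headroom for one spinning disk per port; SATA2 only becomes a bottleneck with SSDs.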


Is having the boot system on a USB stick a common way to go? Does using an SSD instead have any advantages? A disadvantage would be that I lose a SATA port.
Not everyone will feel as strongly about this as I do, but I definitely recommend going with SSDs. Cheap, small SSDs are perfect boot devices because they are much more reliable than USB sticks. Whatever you choose, I recommend you mirror two boot devices, especially if you use USB, and keep frequent backups of your system config. If two SSDs leave you short on SATA ports, a cheap HBA easily solves that.

With mirrored boot devices, if one fails you've got the other. And with the config backup, even if both die you can easily restore.
 

macx979

Dabbler
Joined
Sep 25, 2019
Messages
41
Alright,

so I'd rather go for two small SSDs to mirror the OS. I guess 32GB each!?
Plus an HBA controller.

Regarding backups, I really like the FreeNAS snapshot and replication features. I am wondering if I can add another two disks of a different size (5TB each) than the 4x Seagate drives and use them as the backup location for the replication. I don't necessarily want to have a second FreeNAS server running, as is sometimes suggested.

So the setup would be
2x 32GB SSD for the OS, mirrored
4x 8TB Seagate as RAIDZ1

I understand that only one HDD can fail, but I am running RAID5 at the moment, which is AFAIK similar in terms of redundancy and failure protection. Furthermore, I will keep backups of the replicated data, so it's a trade-off between uptime and the cost of hard drives.

And 2x 5TB HDDs for the replication backups, with all 8 disks running on the same HBA controller.
Though I still haven't fully understood whether having a copy of the replication on disks in the same machine is actually possible. Is it?
 
Joined
Oct 18, 2018
Messages
969
Regarding backups, I really like the FreeNAS snapshot and replication features. I am wondering if I can add another two disks of a different size (5TB each) than the 4x Seagate drives and use them as the backup location for the replication. I don't necessarily want to have a second FreeNAS server running, as is sometimes suggested.
Do keep in mind that if you keep your backup in the same case, your backup is vulnerable to the same types of catastrophic events that could bring down your main system. One option is to rotate the backup drives offsite, so you have your main array plus one backup onsite and one backup offsite.

I understand that only one HDD can fail, but I am running RAID5 at the moment, which is AFAIK similar in terms of redundancy and failure protection. Furthermore, I will keep backups of the replicated data, so it's a trade-off between uptime and the cost of hard drives.
This is reasonable, I think. I would just make sure your backups are reliable and made frequently.

And 2x 5TB HDDs for the replication backups, with all 8 disks running on the same HBA controller.
Though I still haven't fully understood whether having a copy of the replication on disks in the same machine is actually possible. Is it?
How will those drives be arranged? I don't think a stripe of two drives is a good backup solution; I would suggest you consider a mirror, RAIDZ1, RAIDZ2 or RAIDZ3 for your backups, ideally one offsite and one onsite. And yes, you can certainly back up to the same machine; I'm sure a search of the forums will reveal something useful and relevant.
 

macx979

Dabbler
Joined
Sep 25, 2019
Messages
41
Do keep in mind that if you keep your backup in the same case, your backup is vulnerable to the same types of catastrophic events that could bring down your main system. One option is to rotate the backup drives offsite, so you have your main array plus one backup onsite and one backup offsite.

I totally agree. I upload my backups to dedicated cloud storage via FTP, and will continue to do so in the future.

How will those drives be arrayed? I don't think a stripe of two drives is a good backup solution. I would suggest you consider mirror, RAIDZ1, RAIDZ2 or RAIDZ3 for your backups

At the moment these two disks are external USB 3.0 disks connected to the HP MicroServer, and I simply push my backup to both disks simultaneously, which is a simple form of mirroring. :smile:
For this build I'd like to avoid external USB drives and just use these two drives internally on the HBA controller.

I'm testing this setup in a virtual machine at the moment, using 4+2 virtual disks. However, I am struggling with adding shares, which is the basis of a file server. Seems like I'm not the only one. I'll keep testing.
 

macx979

Dabbler
Joined
Sep 25, 2019
Messages
41
EDIT: the two 5TB drives should be mirrored in FreeNAS
 

macx979

Dabbler
Joined
Sep 25, 2019
Messages
41
One more question regarding the hardware requirements.
Since the recommendation is roughly 1GB of RAM per 1TB of storage, does that also count the backup HDDs I am planning to use, i.e. both 5TB drives I am going to mirror?
The currently planned setup is 32TB (4x8TB Seagate drives) + 10TB of backup drives (2x5TB).
Alternatively, I could still connect the backup drives externally via USB rather than to the HBA controller.

Furthermore, if I want to add two additional 8TB drives, the setup would look like this:
48TB (6x8TB Seagate drives) + 10TB of backup drives (2x5TB)

Is this definitely a no-go given the 32GB RAM limit of this mainboard?
 

CraigD

Patron
Joined
Mar 8, 2016
Messages
343
32GB of RAM will be fine for a storage server with a couple of users

Please follow the 3-2-1 backup rule

Having a backup server at a disaster recovery site is not unheard of

Have Fun
 

macx979

Dabbler
Joined
Sep 25, 2019
Messages
41
Please follow the 3-2-1 backup rule
Thanks. I already do this in my current setup and will definitely do it for the FreeNAS setup as well.

Since I now understand that planning the FreeNAS setup in advance is crucial, both for later pool expansion and for the capacity ratio, I have boiled it down to a couple of scenarios and would like to know your thoughts.

I already own 4x8TB HDDs and therefore this is my starting point for these scenarios. A 10Gbit network card will be bought.

Use cases are:
1. simple shares for all kinds of data: documents, media, etc.
2. an iSCSI volume for an ESXi server
3. regular snapshots of only the important data, ~3TB
4. replication of the snapshots to an additional mirrored 2x5TB (these HDDs also exist)

1. 4x8TB HDD as RAIDZ1 (1 vdev)
Pros: a) usable space of ~20TiB
Cons: a) only 1 drive may fail
b) expanding the pool has to be done with 4 additional 8TB drives

2. 4x8TB HDD as RAIDZ2 (1 vdev)
Pros: a) two drives may fail
Cons: a) usable space of only 13TiB - bad ratio
b) expanding the pool has to be done with 4 additional 8TB drives

3. 6x8TB HDD as RAIDZ1 (1 vdev)
Pros: a) usable space of ~33TiB - much better ratio
Cons: a) only 1 drive may fail
b) I have to buy 2x8TB now
c) expanding the pool has to be done with 6 additional 8TB drives

4. 6x8TB HDD as RAIDZ2 (1 vdev)
Pros: a) usable space of ~28TiB - still a much better ratio than scenarios 1 and 2
b) 2 drives may fail
Cons: a) expanding the pool has to be done with 6 additional 8TB drives

5. 4x8TB HDD as mirrors (2 vdevs)
Pros: a) the pool can be expanded with any 2 additional drives of any size (same size for both, of course) in a new vdev
b) 2 drives (in different vdevs) may fail
c) performance is supposed to be a little better than RAIDZ, especially for iSCSI
d) much faster resilvering
Cons: a) usable space of only ~14TiB

When I look at the figures above, I believe there are only two scenarios which are meaningful for my situation: 3 and 5.
For 3, the biggest drawback is cost: the initial cost of 2 more drives, as well as the cost of expanding the pool.
For 5, the drawback is of course the usable disk space. However, the big advantage from my point of view is the possibility to expand the pool easily, step by step. E.g. I could expand it in a year or so with 2x10TB drives, depending on which size is the best bang for the buck.
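To sanity-check the figures above, here's a rough estimate that only accounts for parity/mirror overhead and ignores ZFS metadata and slop space, so the numbers come out somewhat higher than FreeNAS's own estimates:

```python
# Rough usable-capacity estimate per scenario, ignoring ZFS overhead.
# Drive sizes are decimal TB; output is converted to TiB.

TB = 1e12
TIB = 2**40

def raidz_usable(n_disks, disk_tb, parity):
    """Parity-only estimate for a single RAIDZ vdev."""
    return (n_disks - parity) * disk_tb * TB / TIB

def mirror_usable(n_pairs, disk_tb):
    """Striped two-way mirrors: one disk's worth of space per pair."""
    return n_pairs * disk_tb * TB / TIB

scenarios = {
    "1. 4x8TB RAIDZ1": raidz_usable(4, 8, parity=1),
    "2. 4x8TB RAIDZ2": raidz_usable(4, 8, parity=2),
    "3. 6x8TB RAIDZ1": raidz_usable(6, 8, parity=1),
    "4. 6x8TB RAIDZ2": raidz_usable(6, 8, parity=2),
    "5. 4x8TB mirrors": mirror_usable(2, 8),
}
for name, tib in scenarios.items():
    print(f"{name}: ~{tib:.1f} TiB")
```

Note that scenarios 2 and 5 give identical raw capacity; the difference is purely in failure tolerance, expansion and performance.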

Am I making the right assumptions, or is there any issue I don't see? Which scenario would you recommend?

Best
Macx
 

anmnz

Patron
Joined
Feb 17, 2018
Messages
286
If you want FreeNAS to provide the storage for VMs on ESXi (your use case 2) then mirrors are the way to go. More vdevs = more IOPS, and VMs love IOPS.
 

macx979

Dabbler
Joined
Sep 25, 2019
Messages
41
Alright, so it seems like mirrors are the way to go.
I set up a VM environment to test my configuration. I added 4 virtual disks with a size of 8GB each to simulate my 8TB drives.

So I started by creating a pool with 2x8GB drives. The estimated data capacity shown is 6GiB, which is less than 50% but still OK.
I then extended the pool with the other two 8GB drives, and the first thing which confuses me is the message:
Extending the pool adds new vdevs in a stripe with the existing vdevs. It is important to only use new vdevs of the same size and type as those already in the pool. This operation cannot be reversed. Continue?
Since I am using mirrors and not stripes, does this message apply to me? The third and fourth disks are of the same size; however, further down the road I'd like to add bigger disks.

The second thing I noticed is that the total estimated data capacity is calculated as 11.5GiB, which is still less than 50%; raw storage so far is 32GB. Online ZFS calculators tell me something like 14GiB.

Lastly, I added two 10GB disks and ended up with 19GiB of estimated data capacity. Even worse, the overview of my pools shows only ~18GiB.

Based on these numbers, the ratio of usable storage to raw storage is roughly 30%. Is this correct, or is testing this in a VM not comparable to bare metal? I could live with 50% usable disk space, but 30% doesn't make any sense to me.

P.S.: I know the difference between GB and GiB, but that's not the issue here.
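One guess, purely an assumption on my part: FreeNAS by default reserves a 2GiB swap partition on each data disk (the default is adjustable under System -> Advanced), which is negligible on real 8TB drives but huge on 8GB test disks. The arithmetic would roughly match what I'm seeing:

```python
# Hypothesis: a 2 GiB swap partition is carved off each data disk by
# default, which matters a lot with tiny 8 GB test disks.

SWAP_GIB = 2

def mirror_pair_usable(disk_gib):
    # Two-way mirror: capacity of one disk, minus its swap partition.
    return disk_gib - SWAP_GIB

pairs = [8, 8, 10]  # the three mirror vdevs added in the VM test
total = sum(mirror_pair_usable(d) for d in pairs)
print(f"Expected usable capacity: ~{total} GiB")  # 6 + 6 + 8 = 20
```

That would land close to the 19GiB estimate (the rest being metadata reserve), and would also mean the ~30% ratio is an artifact of the tiny VM disks rather than something that would happen on real 8TB drives.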
 

zeebee

Explorer
Joined
Sep 18, 2019
Messages
50
I then extended the pool with the other two 8GB drives, and the first thing which confuses me is the message:
Extending the pool adds new vdevs in a stripe with the existing vdevs. It is important to only use new vdevs of the same size and type as those already in the pool. This operation cannot be reversed. Continue?

I'm still learning about FreeNAS (building my first system soon), but this message doesn't seem to make sense based on what I've read. From my understanding you can create multiple vdevs, each vdev having its own disk size and type, e.g. vdev 1 is 6x1TB RAIDZ2, vdev 2 is 2x8TB mirror.

Here's the only other reference I can find to that message, and it seems to have caused confusion there too:

https://www.ixsystems.com/community/threads/dual-sff-8088-dual-sff-8087-pcie.76911/#post-535321

Would love to hear an explanation from an expert as to what this message is trying to say.
 

blueether

Patron
Joined
Aug 6, 2018
Messages
259
I think the message is about how the vdevs will perform together - very different vdevs will behave differently.
 

anmnz

Patron
Joined
Feb 17, 2018
Messages
286
I think the message is about how the vdevs will perform together - very different vdevs will behave differently.
Yes, the basic issue is that if you mix different kinds of vdevs, you will end up with the worst performance and resilience characteristics of each.

Imagine having a super-fast pool of mirrored SSDs and adding a RAIDZ vdev of spinning disks: the speed of the pool will be massively, massively dragged down. Or having a big, safe, resilient pool using RAIDZ3 and adding a single-disk vdev to it: now a single disk failure can take out your pool.

Those are extreme examples but that's the basic idea. You want all vdevs in a pool to be of the same kind. If they are different kinds then the pool's behaviour will be dominated by the worst aspects of each kind of vdev.

For the same reason you don't want to mix very different drive types in a pool. If you've got a bunch of hard drives and a bunch of SSDs you likely want to make 2 pools. (And you don't want manufacturers to start shipping SMR drives without telling you!)

The mention of "size" is IMO confusing. I think what they are talking about is more the number of disks in a vdev, not the storage capacity of the disks. All the disks in a vdev want to be the same size (otherwise the additional capacity of the larger ones is wasted), but there is nothing wrong with, for example, adding a 3-way mirror of 10TB drives to an existing pool made up of 3-way mirrors of 2TB drives, or adding a 6x8TB RAIDZ2 vdev to an existing pool made up of one or more 6x4TB RAIDZ2 vdevs.
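To put toy numbers on that (a sketch, not real ZFS accounting): pool capacity is simply the sum of the vdev capacities, so same-type vdevs of different sizes combine without waste, whereas mixing disk sizes inside one vdev wastes the difference:

```python
# Pool capacity is the sum of vdev capacities; within a vdev,
# every disk is treated as the size of the smallest member.

def mirror_vdev_capacity(disk_sizes_tb):
    # An n-way mirror stores one copy; usable space = smallest disk.
    return min(disk_sizes_tb)

# Fine: 3-way mirrors of different sizes in one pool.
pool = [mirror_vdev_capacity([2, 2, 2]),
        mirror_vdev_capacity([2, 2, 2]),
        mirror_vdev_capacity([10, 10, 10])]
print(f"Pool capacity: {sum(pool)} TB")  # 2 + 2 + 10 = 14 TB

# Wasteful: mixing disk sizes inside a single vdev.
print(f"Mixed vdev: {mirror_vdev_capacity([2, 10])} TB usable")
```

In the mixed-vdev case the 10TB disk contributes only 2TB until the smaller disk is replaced, which is why same-size disks per vdev is the rule while per-vdev sizes across the pool are free to differ.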
 
Last edited:

macx979

Dabbler
Joined
Sep 25, 2019
Messages
41
I guess we're talking about different things here.

1. Extending a pool with a different vdev type is AFAIK not really possible, e.g. adding a 2x8TB mirror to a 4x8TB RAIDZ2 is not supported. Meaning you can't actually extend an existing RAIDZ2 pool with a mirror vdev; you can only extend a given type with the same type.
2. Having several pools using different types does work, e.g. having a 4x8TB RAIDZ2 pool and creating another 2x8TB mirror pool doesn't seem to be an issue.

But having a 2x8TB mirror and extending it with 2x10TB shouldn't be an issue, even though that weird message comes up when doing so. You can still ignore the message and extend the pool, while you cannot use a different vdev type for this particular pool.

This leads to my conclusion that using mirrors of two HDDs each is the best way to preserve extensibility: you can start with two HDDs and keep extending the pool with any additional pair of drives of any size. With RAIDZ*, you are fixed to a minimum of 4 HDDs and need to extend by the same number of disks. Even worse, if you go for 4 HDDs, the usable disk space is worse than with mirroring, which means you should go for 6 HDDs with RAIDZ2 to get a reasonable ratio. But then, if you plan to upgrade later, the minimum is another 6 disks to add.
 

anmnz

Patron
Joined
Feb 17, 2018
Messages
286
1. Extending a pool with a different vdev type is AFAIK not really possible, e.g. adding a 2x8TB mirror to a 4x8TB RAIDZ2 is not supported
It is 100% possible and supported, it's just usually not a good idea (I hope I explained the reasons).

There have been many cases discussed on the forums of people accidentally adding a single-drive vdev to a pool made up of RAIDZ vdevs, or of mirrors. Mixing vdev types definitely can be done.

This leads to my conclusion that using mirrors of two HDDs each is the best way to preserve extensibility: you can start with two HDDs and keep extending the pool with any additional pair of drives of any size.

Absolutely right, no argument there. Easier incremental expansion is definitely an advantage of using mirrors.
 

macx979

Dabbler
Joined
Sep 25, 2019
Messages
41
It is 100% possible and supported, it's just usually not a good idea (I hope I explained the reasons).
I am not saying you are wrong, but I tried exactly this config in a VM.

Then I get this error message:
Adding data vdevs of different types is not supported. First vdev is a raidz2, new vdev is mirror.
 

zeebee

Explorer
Joined
Sep 18, 2019
Messages
50
@anmnz Thanks for the detailed explanation. I've got 5x4TB HDDs which I've been planning to make into a single RAIDZ2 vdev. I think it will give us around 10TB, which should be plenty for a while (we currently use less than 2TB). I'm not super concerned about performance, as it's mainly a photo/media store, and I like the idea of being able to lose two disks to failure without losing my pool. That's my thinking, but should I be considering two mirrored pairs instead? I'd have to pick up another disk, and the calculators make it seem like I'd end up with around the same 10TB of storage, with only a single disk able to die in each vdev.

Adding data vdevs of different types is not supported. First vdev is a raidz2, new vdev is mirror.
@macx979 That's very interesting, sounds like they're helping us out on it being a bad idea.
 