Am I over (or under) thinking this setup?

Cyknight

Dabbler
Joined
May 2, 2015
Messages
19
Hello everyone,

This post will be a bit long, but I'm trying to get all the information in, so hopefully you'll bear with me.

First, a short background.
I've been running FreeNAS servers for years, going back to the early 9.x days, for my own home use, adding servers as the need arose.
A little while ago a power surge wrecked two of my three running servers, and I decided to take the opportunity to consolidate everything after all these years.
I'm certainly not claiming to be an expert in FreeNAS, though. I've re-read the latest recommendations and browsed the forums, but that still leaves me with a few questions.


Second, let me make the intended use clear.
The server as I intend to run it will have two distinct functions.
1) One pool will be used as an additional (and, more importantly, local) backup location for two portions of data that reside on my main office desktop. One is pictures (on a mirrored drive set and backed up to the cloud); the other is personal data (also on a mirrored drive, with weekly backups brought to a remote vault). The intent here is not so much a true safety feature as a convenience factor should anything go amiss with my main computer (i.e. retaining access from laptops should the need arise).
2) The second pool will be a storage location for movies (and some music) for media players. No, not Plex; the server will not be doing any encoding, it's just a place for the media players to load the data from. Every last one of these is from my own discs, so no, redundancy is not required, as the data is stored on the discs (which get stored off-site again once they are loaded up). While it would be a pain to reload everything (again), no data would be lost.
The obvious point of the above list is to underline that I do NOT need cries of 'RAID is not an alternative to backup'. I'm well aware of that, and it's not the intent of the setup.
Heck, half the intent is just the fun of building it in the first place!


Alright, let's get to the hardware setup I have, shall we?
This is indeed a DIY setup, repurposing existing older gear, including existing hard drives.

Motherboard: TYAN S5397AG2NRF Tempest i5400PW
CPU: Dual Xeon X5260
RAM: 16x 4GB ECC DDR2 for a total of 64 GB
HBAs, all running their respective P20 IT firmware
1: LSI 9201-16e
2: LSI 9200-16e
3: LSI 9200-8i
Additional cards:
1: USB 2.0 PCI card
2: SIL 3112 running base firmware (non-RAID) - yes, not recommended; stay with me, I'll get to it.
The FreeNAS OS will run on an SSD directly connected to the motherboard.
All other drives are standard HDD (NAS) drives.

Yes, it's older hardware. Yes, I know it'll be a power hog. Again, all these components already exist, so there's zero upfront cost.

The thought is that the first pool (for the data) will run on the 9200-8i with 8 drives and one from the motherboard, for a total of 9 drives (mainly 1TB, with a stray 2TB).

The second pool (for the movie storage) will run on the 9200-16e and 9201-16e, totaling 30 drives (a mix of 2, 3 and 4TB).

The idea behind the mixed drives is the following. As drives fail (or I find a good deal here and there), they will be replaced with 4TB or larger drives. Should a drive fail in the data pool, I'll replace one of the smaller drives in the movie pool with a 4TB and move the freed-up 2TB or 3TB drive to the data pool, eventually increasing its capacity as future need arises.
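For what it's worth, that swap boils down to two resilvers. A minimal sketch, assuming hypothetical pool names ('movies' and 'data') and hypothetical device names (da5, da20, da2):

Code:
# Hypothetical pool/device names for illustration only.
# 1) Swap the new 4TB drive (da20) in for a 2TB drive (da5) in the movie pool:
zpool replace movies da5 da20
zpool status movies            # wait for the resilver to finish before pulling da5
# 2) Reuse the freed 2TB drive to replace the failed disk (da2) in the data pool:
zpool replace data da2 da5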

It'll be running the latest stable release once it goes 'live' (I might see if 12 is worth it, if it's out by then, or stick with 11 for now).

Just to touch on the SIL card - it will NOT have any drives permanently connected. It all goes to eSATA ports (with an extra two that connect to the motherboard SATA ports) so that I have the ability to easily add a few temporary drives and create a temporary pool to transfer files over in case I need to (i.e. there's a risky resilver due to more than one drive failure on the 'data' pool).
The USB card is there as an alternate/additional quick temporary connection.
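A rough sketch of that escape hatch, with hypothetical names throughout (a throwaway striped pool 'rescue' on two eSATA drives, evacuating the 'data' pool via a snapshot):

Code:
# Hypothetical pool/device names for illustration only.
zpool create rescue da30 da31                   # temporary stripe, no redundancy
zfs snapshot -r data@evac
zfs send -R data@evac | zfs recv -F rescue/data-copy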


So here is where I'd love to get some thoughts and advice:

For the first pool, I'm looking at 9 drives in a single RAIDZ3 vdev. That'll leave me with plenty of storage. My one question, however: would it make more sense to use 8 drives, and combine one of the 1TB drives with a 10th drive (500GB) in a mirror instead, to run as a jail/plugin pool? The only plugin I'm thinking of at this point is ClamAV; I have limited to no use for any of the others that I'm aware of. This is one of the items where I suspect I may be overthinking the situation. If I do want ClamAV, should I just create a jail location on the first pool?
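For concreteness, a minimal sketch of that split layout, with hypothetical pool and device names:

Code:
# Hypothetical pool/device names for illustration only.
# 8x 1TB in a single RAIDZ3 vdev for the data pool:
zpool create data raidz3 da0 da1 da2 da3 da4 da5 da6 da7
# 1TB + 500GB mirror for jails (usable space is limited to the smaller, 500GB, drive):
zpool create jails mirror da8 da9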

For the second pool, I'm thinking 3 vdevs of 10 drives in RAIDZ2. This is the other item I'm not sure about.
I am operating under the assumption that 11 is still the recommended maximum drives per vdev, but I'm not certain that's still accurate (the latest info I've been able to find is pre-11, if not pre-Corral). So does this still hold true? Would it be better to go all 30 drives in a single RAIDZ3 vdev if the 11-drive recommendation is no longer valid? Or something else entirely?
Assuming the 11-drive max is still in place, is RAIDZ2 good enough, or should I move to Z3 on the 3 vdevs? That would put storage use close to 75% right from the start, and I don't see a major concern for data loss. Again, this is not intended as a backup, just adding redundancy as a minor measure to help prevent me from having to redo things.
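For reference, the 3x10 RAIDZ2 layout I'm describing would look something like this (hypothetical device names again):

Code:
# Hypothetical device names; three 10-wide RAIDZ2 vdevs in one pool.
zpool create movies \
  raidz2 da0  da1  da2  da3  da4  da5  da6  da7  da8  da9  \
  raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19 \
  raidz2 da20 da21 da22 da23 da24 da25 da26 da27 da28 da29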

Last direct question: based on the usage scenario (no high read/write demand), am I correct in assuming that a SLOG and L2ARC are not useful?


If there's anything else I'm missing, don't hesitate to call me out on it!

Thanks for reading this long tale if nothing else!
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
The thought is that the first pool (for the data) will run on the 9200-8i with 8 drives and one from the motherboard, for a total of 9 drives (mainly 1TB, with a stray 2TB).

The second pool (for the movie storage) will run on the 9200-16e and 9201-16e, totaling 30 drives (a mix of 2, 3 and 4TB).
You do know that a 9200-8i supports 256 drives, right? Why do you need multiple HBAs?
 

joeinaz

Contributor
Joined
Mar 17, 2016
Messages
188
Four questions:

1. What case are you using?
2. Any thought to using larger disks?
3. Have you looked at SAS expanders as an option?
4. Could virtualization be a possible solution to help with a multiserver environment?

There is so much you could do with slightly newer hardware and larger disks...
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
Some thoughts:
9 drives for RAIDZ3 is good (8 is fine too).
Separate pool for jails is not a bad idea.
30 drives in RAIDZ3 will potentially have low IOPS (not really a problem for you it seems) and take a long time to resilver or scrub (perhaps more of a problem/risk)... hence the usual recommendation to stay at 12 or fewer in a VDEV.
L2ARC and SLOG probably won't help in your case.

Look into a SAS expander rather than additional HBAs, as suggested by @garm. But if you already own the HBAs and have the PCIe slots, it's up to you... just be careful about ventilation: the cards can get hot, and that will be exacerbated by three of them sitting close together in an area of the case that is typically not considered when planning for airflow.
 

Cyknight

Dabbler
Joined
May 2, 2015
Messages
19
You do know that a 9200-8i supports 256 drives, right? Why do you need multiple HBAs?
It's not so much that I need them as that I have them (there used to be three separate smaller servers), versus not having port multipliers. It's also easy enough to run the 16e units with direct breakout cables going to the drive case.

Four questions:

1. What case are you using?

Two cases, actually, both custom rackmount cases: a 4U and a 6U repurposed for the setup.
One case (the 4U) holds the motherboard and cards, and will hold the 8 or 9 1TB drives, the SSD and, if I go that route, the jail drives.
The second case (the 6U) is the 'drive' case holding the 30 drives.
Both will be mounted in a dedicated four-post server rack with a UPS and the network switch that the server and the media players connect to (the media players are in their own rack setup).

2. Any thought to using larger disks?
3. Have you looked at SAS expanders as an option?

This comes back to my already having all the stated equipment. Have I considered it? Yes, but I've got a LOT of material and would prefer to get this going at little added cost (and with a trip or two fewer to the eco station), as its sole true reason to exist is the movie storage function. With the movie-watching experience moving more and more to streaming, it's hard for me to justify an actual monetary investment at this time, not knowing how things will pan out in another 1-2 years. If it still makes sense to keep a movie storage device up by then, I can look into getting a brand-new setup and spend the bigger bucks.
I'm happy to donate my own time to this, though, as it's a fun project in itself - full disclosure: I'm doing it in no small part for the sake of doing it (art for art's sake ;)).

4. Could virtualization be a possible solution to help with a multiserver environment?
If I still had multiple servers, sure - but as mentioned, the old servers were mostly fried, so they're no longer serviceable. I suppose I could put in even more effort and start spending money on replacements for what is broken, to go back to the jumbled setup I had, but I'm not sure how that would make sense.
If I'm spending more money, I'd rather get a completely different setup with a brand-new current-gen board and CPU, new RAM and new very-large-capacity drives, in a smaller footprint.

There is so much you could do with slightly newer hardware and larger disks...

Agreed. But I don't really need to do more :)

but if you already own the HBAs and have the PCIe slots...

That's exactly the point: I have these, I've tested them, and they're in perfectly working condition.

be careful about ventilation: the cards can get hot, and that will be exacerbated by three of them sitting close together in an area of the case that is typically not considered when planning for airflow.
Aware of this. The customized case has a push/pull fan cover over the PCIe slots containing these three cards, and temperature monitoring is in place as well. 'Dry' running the case so far has shown stable air temperatures inside the covered area of 25 degrees Celsius or less at low to medium fan speeds.

9 drives for RAIDZ3 is good (8 is fine too).
The question is more about whether I'm better off with the separate jail pool (8 or 9 drives in RAIDZ3 is more than enough storage either way).
Separate pool for jails is not a bad idea.
That's more or less what I'm thinking as well, so I may just go down that road and lower the one pool to 8 drives.
30 drives in RAIDZ3 will potentially have low IOPS (not really a problem for you it seems) and take a long time to resilver or scrub (perhaps more of a problem/risk)... hence the usual recommendation to stay at 12 or fewer in a VDEV.
So that is indeed still the 'common' recommendation. Thank you for confirming.
Which brings me back to 3 vdevs of 10 drives each; but then the question becomes whether RAIDZ2 across those 3 vdevs isn't already enough redundancy, as 2 drives per vdev can fail before data is lost, and theoretically up to 6 drives across the pool can fail without any loss of data.

This is probably the biggest part where I'm wondering if I'm overthinking it. On the one hand, more redundancy is never a bad thing; then again, would I be overdoing it if I go 3 vdevs in RAIDZ3, especially as none of this data is critical at all? The intent of the redundancy is to maintain uptime, not data safety.
3x RAIDZ3 would push the currently available storage to the low side, with ~5.5TB of storage 'lost', at which point I'd be getting pretty close to the 80% threshold in short order (within a year). Based on the rate I've been adding files over the last 2-3 years, I estimate that the extra 5.5TB would probably hold me over for one more year after that. At that point I'd be 18+ months in, and I may well have replaced the last 2TB drive with a larger one, bringing the lowest capacity to 3TB and increasing size by 50%.
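As a back-of-the-envelope check on that ~5.5TB figure (assuming each 10-wide vdev is sized by its smallest member, here a 2TB drive):

Code:
# RAIDZ capacity per vdev is roughly (drives - parity) x smallest member size.
# 10-wide RAIDZ2: (10 - 2) x 2TB = 16TB raw per vdev
# 10-wide RAIDZ3: (10 - 3) x 2TB = 14TB raw per vdev
# Across 3 vdevs, Z3 gives up 3 x 2TB = 6TB raw (~5.5TB after filesystem overhead).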

L2ARC and SLOG probably won't help in your case.

Didn't think so either; thank you for confirming.
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
That's more or less what I'm thinking as well, so may just go down that road and lower the the one pool to 8 drives
I'm 'scared' of fragmentation, so I keep databases, web services and other 'volatile' or frequently changing data on a mirrored set of 256 GB SSDs. My storage pool is at the moment a dual mirror set of 4 and 2 TB disks (what I had on the shelf when I built the server, to be replaced with 10 TB disks at some point...), and I treat that pool as write-once: data there should never be deleted (but it happens...).
 

Cyknight

Dabbler
Joined
May 2, 2015
Messages
19
Agreed, but again, this particular build is NOT my database, nor the primary backup of my database; it is in and of itself already a backup of a backup (of a backup of a backup?), as per my earlier details.
 

Cyknight

Dabbler
Joined
May 2, 2015
Messages
19
Thanks all. Unless someone comes back with a comment (and a reason) to do otherwise in the next 24h or so, I think I've decided to stick with the 8x 1TB in RAIDZ3, drop in two 500GB drives as a mirror to run jails, and go with 3 vdevs of 10 2TB+ drives in RAIDZ2 for the main portion.
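Once it's built, a quick sanity check of the layout with standard commands:

Code:
zpool status -v                      # confirm vdev layout and that all disks are healthy
zpool list                           # raw size and allocation per pool
zfs list -o name,used,avail,refer    # usable space per dataset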
 