BUILD 8-16 bay desktop NAS enclosures?

Status
Not open for further replies.

winstontj

Explorer
Joined
Apr 8, 2012
Messages
56
I see a few proprietary boxes out there (QNAP and Drobo, for example) and a handful of decent generic 8-bay enclosures, but it's hard to find much information on anything that accepts 10, 12, or 16 drives.

I know there are a bunch of build threads around, and I have seen most people use large towers to get 8+ drives into a home/SMB NAS. Is there anything like the U-NAS NSC-800 but with 10+ drive bays, or is the NSC-800 the only option in that form factor?

The only other issue I am weighing is a RAID controller vs. a 10GbE NIC. I am looking for a mini-ITX chassis. There are a few decent motherboards with many SATA ports, but it would be ideal to be able to use SAS with a RAID card. It would also be nice to put a dual 10GbE NIC in the box. Are there any good threads or articles on that topic?

Thanks
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
For 10 disks the Lian-Li PC-Q26 with the ASRock E3C224D4I-14S is the best option. It comes with an LSI 2308 SAS controller onboard in addition to the 6 Intel SATA ports, and on top of that you have one PCIe 3.0 x8 slot free, which can support even 56Gb/s InfiniBand speeds, so 10GbE isn't an issue there.
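For what it's worth, the rough bandwidth math behind that slot claim (back-of-envelope only):

# PCIe 3.0 runs 8 GT/s per lane with 128b/130b line coding.
lanes = 8
gt_per_sec = 8.0                      # gigatransfers per second, per lane
encoding = 128.0 / 130.0              # usable fraction after 128b/130b coding
usable_gbit = lanes * gt_per_sec * encoding
print(round(usable_gbit, 1), "Gbit/s")    # ~63.0 Gbit/s for an x8 slot
# Comfortably above a 56Gb/s InfiniBand port or a dual-port 10GbE NIC.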

For 12+ disks I'd rather look into rackmount chassis; there are Rosewill models around which can take a full ATX mobo and a standard ATX PSU, together with quiet 120mm fans. Unscrew the handles, put it on its side, and there you have a relatively compact solution for that number of disks.
 

winstontj

Explorer
Joined
Apr 8, 2012
Messages
56
Does 10GbE exist on a mini-ITX board? I've also read about some issues with that U-NAS NSC-800 chassis, but nothing specific.
 

DJ9

Contributor
Joined
Sep 20, 2013
Messages
183
As far as the case goes, it's pretty much wide open; it comes down to what you want and the money you'd like to spend. I'm thinking of building a new system with a Caselabs case, but they are kinda expensive. http://www.caselabs-store.com
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You can always buy a Supermicro rack server used from ebay for a good price. That's what I have. ;)
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
10GbE exists on a mITX board if you plug a 10GbE network card into it. "Standard" ITX boards don't have 10GbE or SAS onboard. The E3C224D4I-14S seems to be your only option if you want to combine 10GbE and >6 disks in a compact footprint.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I expect the C2750D4I could work too, with the usual Marvell warning tossed in for good measure.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Possibly. It's possible to make anything crawl, which is why I usually laugh when people say they want to be able to "saturate" their gigE. For what workload? Reading large, sequentially written files from an unfragmented pool is probably the only workload for which the vast majority of FreeNAS users will manage a consistent 110MB/sec+ on gigE...
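For reference, the rough ceiling math (assuming standard 1500-byte frames and typical TCP/IP overhead):

# Gigabit Ethernet line rate, converted to usable TCP payload.
line_rate_mb = 1_000_000_000 / 8 / 1e6     # 125.0 MB/s raw
payload_fraction = 0.94                    # ~6% lost to Ethernet/IP/TCP headers and framing
print(round(line_rate_mb * payload_fraction, 1), "MB/s")   # ~117.5 MB/s ceiling
# So a consistent 110MB/sec+ really is about as "saturated" as a single gigE link gets.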
 

winstontj

Explorer
Joined
Apr 8, 2012
Messages
56
I have a few other things going on that influence this project, so I thought it might be worthwhile to mention the bigger picture. I really appreciate the help, feedback, and suggestions; giving the whole picture (or more of it) seems better, so that if I shoot down a suggestion there is a reason behind it.

We just purchased 10GbE SFP infrastructure and it's turning out to be a blessing and a curse. We were trying to avoid it, but we may need some 10GbE RJ45 hardware after all. We are currently using a lot of bonded/teamed/LAG'd gigabit connections and it has gotten to be too much: too much to manage, too much hardware, too much cabling, etc. The switches needed to be larger, and therefore (none of this is in a racked/datacenter location) the power consumption and, more importantly, the noise level became unacceptable. Additionally, there are times when we push data across subnets or into and out of sandbox environments, and therefore through the firewall, which has gigabit interfaces and multiple VLANs.

We easily saturate 3.5-4Gb/s on a regular basis and our max is about 6-7Gb/s. I'd expect that to go up slightly with the new NAS boxes, but not by too much, due to hardware limitations on the machines pulling or processing the data.
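To put those peaks in link terms (rough arithmetic):

peak_gbps = 7.0
print(peak_gbps / 1.0, "fully loaded gigE links")    # 7.0
print(peak_gbps / 10.0, "of one 10GbE port")         # 0.7
# LAG hashing is per-flow and rarely balances perfectly, so hitting these peaks over
# bonded gigabit takes even more members (and cabling) than the raw division suggests.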

I'm going to be building 5-6 of these things and keeping a cold spare on hand, so 6-7 total. Power consumption and footprint are the primary concerns: the smaller and the lower the power consumption, the better. I also need to upgrade a few pfSense firewalls, so if possible I would like to use the same hardware across the board so that I have to keep minimal spares on hand. I don't mind paying a few dollars extra here or there for it all to be the same hardware.

The strongest contender right now is the U-NAS NSC-800 chassis with the Supermicro A1SRi-2758F motherboard. The quad NICs plus dedicated IPMI are very appealing, and it can support 64GB of ECC DDR3. If this ends up being the one, I will team the quad NICs and run a dedicated 12Gb/s SAS controller from the PCIe slot. It'll cost a bit more since we'll have to buy a few 10GbE RJ45 switches. Additionally (and this is a very bad idea) we could always use USB 3.0 >> Ethernet adapters to bond up to something like 6-8Gb/s (assuming FreeNAS has support and drivers for this)... a bad idea, but perhaps the only choice. Alternatively, we could dump the NAS Ethernet into some old gigabit RJ45 switches we have with 10GbE SFP uplinks. Not pretty, requires more hardware, still lots of cables, etc., but it might work.

I took a look at the C2750D4I and the E3C224D4I-14S. I'm not wild about either.

The E3C224D4I-14S only supports 32GB RAM, only has two onboard NICs, and its SAS controller is only 6Gb/s. I don't know that I could justify paying up for SAS drives only to be limited to 6Gb/s when we are at SATA 6Gb/s speeds currently. I could put a dual 10GbE NIC in there, which solves the connectivity problem, but having never tested the system, I would guess that 10GbE would only eliminate the need for the gigabit bonding shenanigans I mentioned for the Supermicro motherboard. I doubt we would need 10GbE, as the hardware probably could not make use of it.

I had initially considered the C2750D4I motherboard and had planned on posting to the forum to ask how FreeNAS handles that chipset's individual SATA controllers without a dedicated HW RAID controller. Running through three different SATA controllers seems less than ideal, to say the least. Additionally, it's all 6Gb/s SATA speeds.

To come full circle, it seems that the Supermicro motherboard is the way to go in the mini-ITX form factor. That still leaves me with questions about a chassis the size/footprint of the U-NAS NSC-800, or a Drobo, or a QNAP, that fits more than eight drives. The Supermicro A1SRi-2758F motherboard has both SATA2 and SATA3 ports. With the NSC-800 I'll probably try to fit 2x 2.5" drives where the single 2.5" boot drive would normally be placed.

If I could find an equivalent of a 10-bay or 12-bay NSC-800, I'd start weighing the difference between filling the extra bays with 3.5" drives to make the primary array larger or keeping the 8x 3.5" primary array and adding a second array of inexpensive 2.5" drives. I'd probably fill a 10-bay chassis with all 3.5" drives, while a 12-bay chassis would get 10x 3.5" drives plus a second array of 4x 2.5" drives.

Not sure if what I'm looking for exists but hopefully it makes a little more sense. The chassis needs to stay as small as possible. A large tower isn't an option.

Thanks for the continued help and suggestions.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
There is no USB-to-Ethernet driver support in FreeBSD. Too unreliable, performance is poor, etc.

If you plan to go with >32GB of RAM, the Avoton Atoms are out of the question. The 16GB DIMMs are outrageously expensive, and the much more powerful E5 Xeon builds are actually cheaper with the same RAM (and come with lots and lots of other benefits too).

FreeNAS is designed around doing what is best. That's not always the cheapest route (but it is often not outrageously expensive if you understand what you are doing). So if you think you are going to do things that "may not be ideal", you may find that FreeNAS won't do what you want. But if you use industry best practices and do what is right, you'll find FreeNAS very capable of doing what you want.

pfSense and FreeNAS are both based on FreeBSD, so it should be trivially easy to make sure all of the hardware matches. Just keep in mind that pfSense is built from FreeBSD 10 and FreeNAS is currently on 9.3, so some things (like hardware support) are different. I use pfSense at home and could easily have chosen matching hardware, but the reality is that I wouldn't want a FreeNAS box's power consumption in a pfSense box. I built my pfSense box out of an Atom, and it is horribly overpowered for the job (and uses just 8W fully loaded). The P4 system I used for a week to learn the ropes of pfSense used almost 100W and performed the same. Buying the pfSense box will literally have paid for itself in about 3 years (which will be this summer for me!).
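The payback math, roughly, with an assumed electricity price (the $/kWh figure is an assumption):

# Power savings of the 8W Atom pfSense box vs. the ~100W P4 it replaced.
watts_saved = 100 - 8
kwh_per_year = watts_saved * 24 * 365 / 1000      # ~806 kWh/year
price_per_kwh = 0.12                              # assumed average rate, USD/kWh
print(round(kwh_per_year * price_per_kwh), "USD saved per year")   # ~97 USD/year
# At roughly $100/year saved, a ~$300 low-power build (ballpark figure) pays for
# itself in about 3 years.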
 

winstontj

Explorer
Joined
Apr 8, 2012
Messages
56
cyberjock, thanks. I knew there would be a few who would roll their eyes at the "solutions" I schemed up.

The goal for these NAS devices is to replace six (6) Dell T7400 workstations currently in production running 2008r2 Core (no GUI) as iSCSI targets and NFS fileservers. They are giant, use a ton of power and are all out of warranty.

On the other hand... they are free (owned/paid for) and I have a graveyard of these things -- probably 10 of them collecting dust in varying states, plus several Rubbermaid bins full of parts for them. They all have 64GB of DDR2 ECC RAM and each has dual Xeon X5482 CPUs (two quad-core CPUs @ 3.2GHz, no hyperthreading). That's a ton of power consumption, but I imagine running FreeNAS on them would be a great experience.

Each box has a 64GB OS SSD and a SATA2 LSI 8708EM2 RAID card with BBU driving eight (8) Seagate ST2000DM001 HDDs. The HDDs are retail grade, old, and out of warranty (and have started to fail at an alarming rate).

We initially looked at two options:

1. Purchasing SATA3 6TB (enterprise) HDDs and a decent RAID card (LSI 9260-8i). This came in at about $5k per box, or 11-13 cents per GB, and yielded 33.5TB in RAID6 and 39TB in RAID5 (rough usable-space math sketched after this list). The worry was that, with the failure rates we experienced on the old systems, we would want to run RAID6 simply so we wouldn't lose the array if a drive failed during a rebuild. This also raised the concern that the same number of spindles as our last arrays, combined with RAID6 (we are running single parity currently), might crush performance to the point that we'd have spent a whole bunch of money and the systems wouldn't be much faster or more reliable than before.

2. More spindles. A lot more, plus a bunch of adapters. The T7400 tower can house 4x 5.25" >> 4x 2.5" adapter kits and 4x 3.5" >> 2x 2.5" adapter kits, for a total of 24 2.5" HDDs in each machine. With the LSI 9260-24i4e controller and single or double parity it was almost twice the price at $9k per box, yielding 20/21 cents per GB and 41/43TB for RAID 5/6. The assumption was that, outside the limitations of the PCIe bus, performance would be much better, the system would be more reliable, rebuilds from failed drives would be faster, we could use a hot spare (or two), and we could take advantage of LSI's SSD cache software.
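Here is the rough usable-space math behind option 1 (approximate; the exact figures depend on how usable space is counted):

# 8x 6TB drives: usable space after one or two parity drives, in binary TiB.
drives, size_tb = 8, 6
def usable_tib(data_drives):
    return data_drives * size_tb * 1e12 / 2**40   # decimal TB -> binary TiB
print(round(usable_tib(drives - 2), 1), "TiB in RAID6")   # ~32.7 TiB
print(round(usable_tib(drives - 1), 1), "TiB in RAID5")   # ~38.2 TiB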

Both of these options would also allow us to run 10GbE, whether RJ45 or SFP, but neither of them solved for the size of the machines or the amount of power they consume. And our target was bumping from ~10TB up to ~20TB, not ~40TB. Not that we couldn't use 40TB at each location, but at that price point it starts to become much more cost effective to pay for CALs and VPNs and centralize the data (which presents its own challenges and is not an option in our situation).

I'm looking for ~20TB that's very low power at idle, with a small footprint so it can be hidden away in a remote/home office or telco closet, and with fast random read/write performance and the ability to deliver that data quickly to the workstations accessing the NAS.

As I look at it in terms of budget, the drives need to be purchased regardless. The difference to me is:

Cost of new chassis & parts + the savings in power consumption (including cooling in the summer) + savings from less down time and having everything under contract/warranty + the headache of bandwidth and connectivity issues

vs.

Savings in reusing existing hardware + ease of integration into existing infrastructure + headache of maintenance and downtime associated with running legacy hardware in production + higher energy consumption (including more cooling in the summer) + annoying large size of the boxes
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Well, if you put aside the cost of the LSI controllers and the disks, your hardware isn't exactly what we recommend.

From experience, CPUs with FSBs have their own bottleneck problems. The CPUs can definitely handle the workload you'd want to throw at them, but you are very likely to hit performance bottlenecks because you have a 1600MHz FSB. I can't tell you exactly where that limit will be (there are too many factors to throw a number out there), but I'd expect ideal throughput between 250 and 500MB/sec for the pool itself when handled purely as local storage. Your NIC may create an additional bottleneck if the data has to pass through the FSB (which it almost certainly will... I'm not aware of any exception to this).
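For reference, the theoretical ceiling of that bus (back-of-envelope):

# A 1600MT/s front-side bus is 64 bits wide.
transfers_per_sec = 1600e6
bus_width_bytes = 8
print(transfers_per_sec * bus_width_bytes / 1e9, "GB/s theoretical")   # 12.8 GB/s
# All memory traffic *and* all I/O (HBA, NIC) share that one path, which is part of
# why the practical pool estimate above is so much lower than the raw disk speeds.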

This is an example of where you have to weigh the pros and cons of reusing old hardware. That CPU is from Q4'07, so it's 8 years old. It's going to run hot, it's not going to be gentle on idle power, and it may not be able to perform because of the FSB.

Generally, I don't recommend using anything older than the Nehalem generation (Sandy Bridge is really where idle power took a serious drop). There's no reason why you can't buy the LSI controllers and disks now, put the machine into use, and, if the CPU and board can't do the work you need, do an upgrade later. At that point you just replace the motherboard, CPU, and RAM, and that's all. The boot device won't have to be wiped; you just plug it in and boot it up. It'll use everything just like you want, with one exception: the network cards will likely change, so you'll need to reconfigure those, but everything else will "just work".
 

winstontj

Explorer
Joined
Apr 8, 2012
Messages
56
I agree 100% on the hardware and the FSB bottleneck. That is why I started poking around the forums looking for a solution: I only need 20-25TB, not 40TB; I need more throughput; and I want a smaller machine that consumes a lot less power.

If I were to run that U-NAS NSC-800 box with single or double parity it would be more than enough space, but what about performance? Any rough estimates on how the NAS would perform?

4TB drives: 22.3 - 26.0TB usable, ~$1,900*, 8.5 - 7.3 cents/GB
5TB drives: 27.9 - 32.5TB usable, ~$2,400*, 8.6 - 7.4 cents/GB
6TB drives: 33.5 - 39.1TB usable, ~$3,500*, 10.4 - 8.9 cents/GB
* Rough guess at cost per set of nine (9) drives, assuming we were buying 70-75 drives.
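The cents/GB column is just the per-set price divided by usable capacity; quick check of the 5TB row:

price_per_set = 2400
for usable_tb in (27.9, 32.5):
    print(round(price_per_set / (usable_tb * 1000) * 100, 1), "cents/GB")
# Prints 8.6 and 7.4, matching the table above.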

The 5TB drives seem to be the sweet spot given those numbers.

ASRock C2750D4I $409
64GB RAM $681
Total: $1090
plus a $500 10GbE NIC - $1,600 total

Supermicro A1SRi-2758F $330
32GB RAM $389
Total: $719
plus $1300 for RAID controller and 2-3 SSDs - $2,000 total

For an all-in cost of ~$3,500 per NAS, the 5TB drives plus the ASRock motherboard are looking good, but the onboard SATA controllers make me nervous. Are there any better options? Is it foolish to spend that kind of money on a NAS?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
64GB of RAM with the Avotons is silly. It's cheaper to go with a full-fledged E5 (which will have similar idle power and will be MUCH more powerful).

RAIDZ1 is pretty dangerous. It's almost guaranteed that on a rebuild you'll have a URE, which can be a nightmare for you.

Other than that it seems you understand the basics. It's just a matter of deciding what is appropriate and spending the money. ;)
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
RAIDZ1 is pretty dangerous. It's almost guaranteed that on a rebuild you'll have a URE, which can be a nightmare for you.
"Almost guaranteed" sounds entirely too strong--perhaps I've just led a charmed life, but I've rebuilt RAIDZ1 arrays using 2-3 TB disks probably a half-dozen times without incident. I understand the point of the article you link, but it seems to overstate the danger compared to my own small experience in the real world.

If it's "almost guaranteed" that a rebuild will fail, then RAIDZ1 is only marginally, at best, better than stripes. Why then does FreeNAS 9.3 recommend RAIDZ1 for media (in the setup wizard), and state it provides "medium reliability", compared to RAID 0 providing "no reliability"?
 

winstontj

Explorer
Joined
Apr 8, 2012
Messages
56
I haven't done much research on ZFS RAIDZ(n) parity, so I probably should not be commenting on the subject (yet). I am also unsure what is more ideal when using FreeNAS, a traditional HW RAID controller, or allowing FreeNAS to manage the drives and parity using ZFS + RAIDZ(n)... I would love clarification on this if someone feels so inclined. Additionally, I see that ZFS can make use of SSDs for the OS and as array cache; I am unsure whether this is available in FreeNAS, generally part of ZFS regardless of OS, or something else.

For clarification, when I had referred to single or double parity previously I was referring to traditional RAID 5 and RAID 6.

I had previously written off the ASRock E3C224D4I-14S motherboard due to its SAS speeds, thinking that if I were to use SAS it would be with proper SAS HDDs and I would want 12Gb/s speeds. Now, knowing that isn't currently possible in this form factor, I am back to SATA3, which bumps the E3C224D4I-14S right back to the front of the list. Many others have used the E3C224D4I-14S in the U-NAS NSC-800 chassis in various configurations (mostly ESXi or home media servers, from what I have found). It allows for fourteen (14) drives:

Eight (8) via 2x LSI SAS2 8087 >> 4x SATA3
Four (4) via 1x Intel C224 Mini SAS >> 4x SATA3
Two (2) SATA2 on-motherboard ports
AND it has an internal USB port

If I could get some feedback and perhaps some guesstimates on performance, I would very much appreciate it. It would probably send me on my way to start ordering parts and leave you guys alone for a while.

Also a little confused about the need for graphics on CPU vs. graphics on motherboard, and IPMI support for FreeNAS. I like the Xeon E3-1245V3 and the Xeon E3-1240V3 but am unsure whether on-CPU graphics is worth anything in this situation. My gut says spend the $20 extra for on-CPU graphics, disable the mobo graphics (since the motherboard is going to be controlling all the drives, the fewer system resources relying on the mobo the better), and try to get IPMI working, if possible. Any clarification would be very helpful.

NSC-800 Chassis
ASRock E3C224D4I-14S motherboard
Xeon E3-1245V3 (or Xeon E3-1240V3)
32GB DDR3 ECC RAM
8x 5TB SATA3 HDDs with 128MB cache, in RAIDZ2 (which I believe is the RAID6 equivalent??)
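Rough usable-space math for that layout, assuming RAIDZ2 costs two drives of parity like RAID6 (my understanding; real-world numbers will be a bit lower after ZFS metadata and free-space reserve):

drives, size_tb, parity = 8, 5, 2
data_tb = (drives - parity) * size_tb              # 30 TB of data drives
print(round(data_tb * 1e12 / 2**40, 1), "TiB")     # ~27.3 TiB before overhead
# Comfortably above the ~20-25TB target mentioned earlier.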

4x 128-256GB SSDs for RAIDZ(n) array cache --thoughts/feedback on size and performance gains (or not) ???? If RAIDZ(n) cache is not an option I'll probably try to shove 4x SSDs (Vdev stripe) into the chassis.

OPERATING SYSTEM/BOOT DEVICE SOLUTION:
This is where my knowledge of FreeNAS gets very weak. The machines will run 24/7 and slow boot time is acceptable. Is OS caching supported in FreeNAS? Is it needed? Traditionally FreeNAS used the entire boot device for the OS; has that changed? Can mirrored boot drives be hacked together and/or created manually via gmirror or similar in FreeNAS?

Within the constraints of the following, what is the best/fastest/most reliable way to get the thing to boot???
2x USB2.0 ports
2x USB3.0 ports
2x SATA2 ports
and
2x USB thumb drives ranging from 8-16GB
2x 32GB Supermicro SATA DOMs (already own them)
2x 64GB Supermicro SATA DOMs (already own them)
2x SSDs (willing to purchase any appropriate size)

I think that's it for now. Thank you all for the help and responses. It is much appreciated.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I am also unsure what is more ideal when using FreeNAS, a traditional HW RAID controller, or allowing FreeNAS to manage the drives and parity using ZFS + RAIDZ(n)...
You should not, under any circumstances, use a hardware RAID controller with FreeNAS unless it can present itself as an HBA, and give direct and complete access to the disks it controls.

Also a little confused about the need for graphics on CPU vs. graphics on motherboard, and IPMI support for FreeNAS.
If your motherboard supports IPMI, it has built-in rudimentary graphics. Since FreeNAS uses no graphics at all (only text), the capability built into the motherboard is entirely adequate. The extra money for the CPU graphics will be wasted.

4x 128-256GB SSDs for RAIDZ(n) array cache --thoughts/feedback on size and performance gains (or not) ???? If RAIDZ(n) cache is not an option I'll probably try to shove 4x SSDs (Vdev stripe) into the chassis.
There are two possible types of ZFS cache devices (L2ARC for reads and a SLOG for sync writes), but it's pretty uncommon to need, or make good use of, either of them.

Traditionally FreeNAS used the entire boot device for the OS; has that changed? Can mirrored boot drives be hacked together and/or created manually via gmirror or similar in FreeNAS?
FreeNAS still uses the entire boot device for the OS. Mirrored boot devices are supported in 9.3; it's a simple matter of selecting both of your USB sticks when you run the installer (or adding the second one through the web GUI).

Within the constraints of the following, what is the best/fastest/most reliable way to get the thing to boot???
With what you list, I'd use the two 32 GB SATA DOMs. They're overkill for size (16 GB is plenty), but that won't hurt anything. They should be considerably faster than a USB stick, especially since FreeNAS doesn't support USB 3.0.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
"Almost guaranteed" sounds entirely too strong--perhaps I've just led a charmed life, but I've rebuilt RAIDZ1 arrays using 2-3 TB disks probably a half-dozen times without incident. I understand the point of the article you link, but it seems to overstate the danger compared to my own small experience in the real world.

It has to do with the math. You are correct, kind of. The HD manufacturers guarantee a certain rate of UREs as a "worst case". Your drive is almost certainly not on that line (hopefully, anyway), so the fact that you are better than the worst case gives you some extra engineering leeway. Not something I'd *ever* consider relying on (and plenty of threads in this forum can attest to that). So you are right, but the math is all you can go by. As with most engineering, things are slightly overengineered and then deliberately underspecced to account for things going wrong. That doesn't mean a section of pipe rated for 80 PSI should be pushed to 100 PSI just because you know the metal is thicker than engineered and can handle it. Likewise, you shouldn't push the limits of the hard drive just because you are pretty sure you can. You should expect it to perform as rated and nothing above that.
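Here's the kind of math I mean, using the worst-case spec (assuming a consumer-class 1-in-10^14 URE rating; enterprise drives are typically rated 1-in-10^15):

ure_rate = 1e-14                       # unrecoverable read errors per bit, datasheet worst case
surviving_drives, size_tb = 7, 5       # worst case: read all 7 surviving drives of an 8x5TB RAIDZ1
bits_read = surviving_drives * size_tb * 1e12 * 8
p_hit = 1 - (1 - ure_rate) ** bits_read
print(round(p_hit, 2))                 # ~0.94 chance of at least one URE, at spec
# Real drives usually do better than the spec, which matches danb35's experience --
# but the spec is all you can design to.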

As for hardware RAID and ZFS, that's a very protracted debate. In particular, ZFS advocates will argue that ZFS is better because it's your volume manager AND file system, while hardware RAID hides the disks behind a layer of abstraction. ZFS is also copy-on-write, so it brings unique challenges to the table.

I've personally had problems with hardware RAID and NTFS and switched to ZFS after losing a bunch of data. That's why I went to ZFS over hardware RAID.

Big picture: go with hardware RAID or ZFS, but do NOT try to mix the two. That's actually worse than choosing just one. Also realize that if you are going to use hardware RAID you are implicitly excluding FreeNAS, as the only supported file system *is* ZFS.

The remainder of your questions (and danb35's response) indicate you need to do much more research before jumping into ZFS with both feet. Do not throw tons of hardware at it "just because you have it"; it is entirely possible to add hardware that will make things slower. We see it all the time in the forums. You have to "right-size" the hardware for your task. There is no quick solution. You just have to read up on ZFS and figure out what applies to your needs and wants.
 