20-24 Bay Build, No Redundancy / RAID. Cheapest Way?

Status
Not open for further replies.

bbddpp

Explorer
Joined
Dec 8, 2012
Messages
91
I've seen some build threads here that look similar, but none have mentioned their RAID strategy. If there's already a good thread covering a similar situation, a link would be fine.

My current FreeNAS box is a Mini ITX with 6 SATA drives in the case, plus a 4-bay external eSATA enclosure. I've lost 2 drives in 2 months without warning, so something is not right here. It's time to build something with more power and better cooling.

My build is a media server for a single home, capable of serving two 1080p streams simultaneously (rare, but possible) and also running media download plugins such as Sonarr, CouchPotato, and NZBGet 24/7.

STORAGE: My starting point will be 12 HDDs: 5 of them 5TB in size, 3 of them 3TB, and 3 of them 2TB, plus a small SATA SSD to hold FreeNAS.

CASE: Eyeballing a Supermicro 4U 24-bay refurb on eBay from one of the usual sources, or a Norco RPC-4220 on Newegg, which I've heard is quieter but of worse build quality. The plus here is that I think (?) the Norco backplane supports the larger 5TB+ drives out of the box.

CPU/MoBo/RAM: I actually have no idea what I need here. I don't need to go nuts since this is just for my home, but I also don't want the thing to be taxed when simultaneous stuff like streaming and unpacking a download is happening. If there's a refurb Supermicro or similar on eBay that already has all this onboard, that would probably be my preference over buying new components to put inside a Supermicro case.

RAID / SAS / Etc: I have been using PCs and networking them for 20+ years, but this is where I am lost despite my best research. I know that I need at least a SAS2 backplane for my larger drives (I think?), but I'm still a little lost on RAID cards for my situation. I do not plan to do any sort of RAID; I just want each drive to run independently, not associated with the others. It would be nice to have the option to convert to RAID for data security down the road, but since these are all basically movie backups, it's not critical data. I'd rather have the storage space than lose a bunch of it to RAID. Knowing this, what hardware do I actually need to run up to 20 drives independently? That's probably the biggest gap in my understanding.

NOISE: The server will be in an unfinished part of my basement, so noise isn't a huge concern. If I can get a good deal on an eBay refurb 4U or similar and eventually swap the PSUs for something quieter down the road, that's fine.

Sorry this got a little long. Any insight or links from someone who's been through this (or is just smarter than I am) would be really appreciated!

B.
 
Joined
Feb 2, 2016
Messages
574
I do not plan to do any sort of RAID

FreeNAS is not for you. FreeNAS is all about data security and data security requires redundancy which requires some version of RAID and will take away from your storage space.

Find whichever operating system you like the most, find whichever HBA (not RAID) controller(s) are the least expensive and supported by your operating system, toss everything in a box and go from there.

Supermicros are loud. Norco has a good reputation for being quieter.

Cheers,
Matt
 

bbddpp

Explorer
Thanks Matt.

I agree with you on that. FreeNAS seemed like a good solution, with all the plugins relating to media downloads built into the OS, but I suppose I can find versions of those that run on a standard desktop/server OS and run it as just another machine. I appreciate the response.

If I could indulge another moment of your time (and thanks for the case insight; that matches what I was reading): if I bought something like the Norco 20-bay case, would I then just need an HBA controller card (or cards, depending on the number of drives, I assume), plus a motherboard/RAM/CPU, and I'm good to go?

So essentially, I am building a PC with all the usual components, plus HBA controller(s)?
 

bbddpp

Explorer
You kind of have me second-guessing, and I'm thinking I might also just do a RAID5 with 5 x 5TB drives. If I decide to go that route, same sort of questions: in addition to the Supermicro or Norco case, any recommendations on the additional hardware I'll need to do the RAID and support the server's purpose (media server / downloader), RAM included?

Thanks all. Appreciate the forum!
 
It looks like that case has 'five internal SFF-8087 Mini SAS connectors' feeding the drives (one cable for every four drives). A typical motherboard with built-in HBA generally only has two ports which would support eight drives. You'll likely need two or three HBAs or an expander to support 20 drives in that case. I'd prefer HBAs. If you're starting with 12 drives, you can buy one HBA now and then put off purchasing the additional units until you fill the additional bays.
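That cable arithmetic is easy to sketch. Here's a rough Python version; the four-drives-per-connector and two-ports-per-card figures come from the post above, and the function name is just mine:

```python
from math import ceil

def hbas_needed(drives, drives_per_cable=4, ports_per_hba=2):
    """Rough HBA count for a backplane wired one mini-SAS
    (SFF-8087) cable per four drive bays, with no expander."""
    cables = ceil(drives / drives_per_cable)  # mini-SAS runs required
    return ceil(cables / ports_per_hba)       # two-port cards, e.g. an -8i model

# 20 bays -> 5 cables -> 3 two-port HBAs
# (or 2 HBAs if 2 of the cables hang off motherboard ports)
print(hbas_needed(20))
print(hbas_needed(12))
```

With 12 drives to start, one HBA plus the motherboard's ports covers the initial build, which is why deferring the extra cards works.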

Transcoding two HD streams isn't too difficult so just about any modern CPU will be fine. I'm not sure how much memory your applications need so you'll have to figure that out yourself; 16GB seems safe. So, yes, typical hardware.

The difference in price between the 20-slot and 24-slot Norco is $100. Cases last a really long time. So, while I know you only have 12 drives now, if you think you might expand beyond 20 drives in the next five to eight years, you might want to go with the larger case now.

Cheers,
Matt
 
I might also just do a RAID5 with 5 x 5TB drives

Most consumer-grade, relatively inexpensive motherboards will support RAID5 with four to six drives. That specific board has ten SATA ports so you've got ports for the RAID5 array, a boot drive and a few ports to hang unRAIDed drives.
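Since RAID5 spends one drive's worth of capacity on parity no matter how wide the array is, the 5 x 5TB idea is quick to size up. A minimal sketch (the function name is mine, and this ignores filesystem overhead):

```python
def raid5_usable_tb(drives, size_tb):
    """Usable space of a RAID5 array of equal-sized drives:
    one drive's capacity is consumed by parity."""
    return (drives - 1) * size_tb

# 5 x 5TB in RAID5 leaves 4 drives' worth of space
print(raid5_usable_tb(5, 5))  # 20
```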

If you're looking at that few drives, you can save a couple hundred bucks by going with a regular tower case instead of a rack-mount case. You'll also be able to get by with a smaller power supply (less money) and have lower operating costs (fewer watts for the extra drives).

Cheers,
Matt
 

bbddpp

Explorer
Matt, you are fantastic, thanks. I appreciate the replies. I'll do the math and carve a path forward. After getting tired of running out of space year after year, I thought I just needed more bays. Maybe I really just need to ditch the smaller drives, get a bunch of bigger ones, and be done with it. A 2TB drive taking up a precious SATA port that could hold a 5TB or even 6TB drive makes a huge difference, and at least buys me a couple more years at the smaller form factor / power consumption.

That said, it's also tempting to just build the monster case and be able to throw in as many different-sized drives as I want, with the capability of 100+ TB if I wanted it, for no other reason than it would be awesome.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
You want a backplane that has expanders built in, so that it just takes one cable from the HBA to the backplane to connect all the drives.

Look at the C2100 that @Mirfster loves. 12 bays and 2 RU. And if you need expansion in the future, you can attach JBOD enclosures (I have a 4U, 47 drive supermicro SC847) off of that, again with only one cable.

[edit: corrected model number to C2100]
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Build the monster case ;)
 


bbddpp

Explorer
I was looking at that build as well. Very nice if I want a standalone all-in-one system and never expand beyond 12 bays. I think. Or maybe you're saying I can start with one of those DFW systems as a base, then buy an empty C2100 or similar as a 12-bay expansion.

Think I am now down to 3 choices:

1) New Norco 20/24-bay case on Newegg, with a new 10-port SATA mobo, RAM, power supplies, 1-2 HBAs, and a CPU. Positive: plenty of bays. Negative: easily the most expensive.

2) DFW-style C2100 pre-built all-in-one system. Positive: cheaper. Negatives: only 12 bays, and I'm not quite sure what I would need to buy to add 12 more. A list of everything required to add an additional enclosure to that system (case, and what else?) would help me here.

3) Build a standalone system using all the items in #1, minus the Norco case, in a regular 6-10 bay ATX case, then buy an expansion card of some sort and an empty C2100 case for another 12 drives. This is a meld of #1 and #2; I'm just not sure of everything I would need to buy to make it happen.

As far as RAID goes, right now I have around 35 TB of data and around 40 TB of drive space across various drives and sizes. I will admit I am totally new to RAID (hardware and software). My research ended a while back when I realized I would need either double the drive space for my files (in my case, around 70 TB), or, to use less than double with other RAID solutions, I would need to scrap all the drives I have and start fresh with a matching set of drives, or at least drive sizes. This may be closed-minded thinking on my part, however. If there's a way to slap a new empty 5TB drive into my current system and turn my other disks into a RAID, I'm all for it. I think, given my current situation, I'd need to have started this way from day one (I don't have anywhere to temporarily move 35 TB of data in order to start fresh with a new RAID using my existing setup of a bunch of single ext4-formatted drives).

Would I like to have redundancy so I don't lose data when a drive crashes? You bet. But I'm also a little scared lately; I've been losing so many drives individually to failure, I'm worried that a drive would fail and I'd lose everything, not just what's on that one drive. Guess I'm just a little gun-shy.

I'll come back to this post and add a list of items and links, and maybe someone can help me fill in the blanks on what I'm missing for the system to work as I'd like. I don't think my brain has fully grasped HBAs and SAS2 backplanes yet (the other thing I keep reading is that I need to be careful about which HBA or backplane I use, due to my larger 5TB drives).

I'm learning, I'm just trying to learn too fast I think.
 

bbddpp

Explorer
Alright, a little more homework here. The objective: access to a total of 20-24 3.5" bays, support for LARGE drives (5TB or more), as cheap as possible, with RAID as an option or just a bunch of individual drives.

All options leave out hard drives, since I already have a bunch of those. In all cases, I think I am missing the hardware to actually multiply the drives (in #1) or bridge the cases (in #2 and #3).

OPTION 1

- NORCO RPC-4220 4U Rackmount Server Chassis w/ 20 Hot-Swappable SATA/SAS 6G Drive Bays (Mini SAS Connector) ($340 Newegg)
- ASRock Z97 Extreme6 LGA 1150 Intel Z97 HDMI SATA 6Gb/s USB 3.0 ATX Intel Motherboard ($99 AR Newegg)
- Intel Core i3-4170 Haswell Dual-Core 3.7 GHz LGA 1150 54W BX80646I34170 Desktop Processor Intel HD Graphics 4400 ($120 Newegg)
- Crucial 32GB (2 x 16GB) 240-Pin DDR3 SDRAM ECC Registered DDR3 1600 (PC3 12800) Server Memory Model CT2K16G3ERSLD4160B ($207 Newegg)
- CORSAIR CX series CX750 750W 80 PLUS BRONZE Haswell Ready ATX12V & EPS12V Power Supply ($50 Newegg)
- WHAT ELSE???
TOTAL = $816 + ???

OPTION 2

- DELL POWEREDGE C2100 2X XEON E5630 2.53GHZ QC / 32GB / 9211-8I / 2X 750W / TRAYS FreeNAS Edition ($400 DFW)
- Additional bare bones empty C2100 Case, No Proc, RAM, HD ($130 ebay)
- WHAT ELSE???
TOTAL = $530 + ???

OPTION 3

- Phanteks Enthoo Pro Series PH-ES614P_TG Titanium Green Steel / Plastic ATX Full Tower Computer Case ($92 Newegg)
- ASRock Z97 Extreme6 LGA 1150 Intel Z97 HDMI SATA 6Gb/s USB 3.0 ATX Intel Motherboard ($99 AR Newegg)
- Intel Core i3-4170 Haswell Dual-Core 3.7 GHz LGA 1150 54W BX80646I34170 Desktop Processor Intel HD Graphics 4400 ($120 Newegg)
- Crucial 32GB (2 x 16GB) 240-Pin DDR3 SDRAM ECC Registered DDR3 1600 (PC3 12800) Server Memory Model CT2K16G3ERSLD4160B ($207 Newegg)
- CORSAIR CX series CX750 750W 80 PLUS BRONZE Haswell Ready ATX12V & EPS12V Power Supply ($50 Newegg)
- Additional bare bones empty C2100 Case, No Proc, RAM, HD ($130 ebay)
- WHAT ELSE???
TOTAL = $698 + ???
 

depasseg

FreeNAS Replicant
If you don't want to go with option 2, then look at the mrrackables page on eBay for other choices. You do not want to deal with cabling each drive individually; you want a SAS expander backplane.
If you want a ton of expansion, you would add something like this JBOD to whatever server can fit a PCI HBA in it (you could even just attach this to your existing system, assuming you had a spare PCI slot to put the HBA in. Then you could move all your existing disks into this enclosure.)
http://www.ebay.com/itm/4U-Supermic...810339?hash=item211a65f563:g:HqoAAOSwaB5Xn3mo

You can't just buy an empty chassis and expect it to work; you will need something to control power and fans. The SC847 and other "JBOD enclosures" include this function on a small control board.

I've been losing so many drives individually to failure, I'm worried that a drive would fail and I'd lose everything, not just what's on that one drive.
This is a very valid concern, and wise of you to notice the potential for failure. I would say that RAID is absolutely the way to go (and of course, we suggest using ZFS, not HW RAID), and I understand the data migration challenge. If you are happy with the performance of your current CPU, I would suggest adding the SC847 JBOD and spending money on more drives to create a backup pool. This would give you a transition area for the migration, and also provide a long-term data backup target.
 

bbddpp

Explorer
First off, let me just say THANK YOU for sticking with me. You have no obligation to help but you did, and I really appreciate your taking the time, sincerely.

I have been thinking about it, and given the praise I see for the C2100, it does sound like a nice option, and I am liking the price of option 2. If I go that route, I think I just need to know what else to ask DFW for if I basically want them to include a second empty C2100, as well as any hardware necessary for connecting the main unit to that second unit (basically a C2100 JBOD). I also like the form factor of this unit, the fact that it's less noisy and ready out of the box (good for my first time with something like this), and that it supports large drives out of the box. Getting to 24 bays across 2 C2100s is a good idea for me, and I may as well do it all at once and be ready for it. Maybe I can save on shipping by ordering them together, too. Can you help answer that question so I know what to ask for? It may still be cheaper (and, it sounds, quieter) than one SC847 configured similarly.

Regarding the SC847: it's pricey, but if that plus a PCI HBA (any recommendations?) would be all I needed, it's another option worth looking at. I'm a little worried that my current server is a little light on power, though. Here's what I've got currently.

Motherboard: ASRock FM2A88X-ITX+ FM2+ / FM2 AMD A88X (Bolton D4) SATA 6Gb/s USB 3.0 HDMI Mini ITX AMD Motherboard
CPU: AMD A6-5400K Dual-Core APU - 1MB L2 Cache, 3.6GHz, Socket FM2, Dual Graphics Ready, DirectX 11, Fan, Unlocked, AD540KOKHJBOX
PSU: Corsair CX430
RAM: A paltry 4 GB


So, while this started as an exercise purely in expansion, you can see why I branched out to "maybe I need to rebuild this from the board up" thinking. Unless you think the above is enough to drive a FreeNAS connection to an SC847. Something tells me I'm a little light on horsepower there.

Regarding RAID: making the jump to ZFS pools, if I wanted to do it right with around 30 TB of data, how much actual space would I need? And if a drive failed, I'd literally just pull the bad drive, put in a new one, and have zero data loss?
 

depasseg

FreeNAS Replicant
First off, let me just say THANK YOU for sticking with me. You have no obligation to help but you did, and I really appreciate your taking the time, sincerely.
No worries! That's why we all volunteer. We've all been through the same learning curve and feel like this is a good way to contribute back to the community.
I think I just need to know what else I need to ask DFW for if I basically want them to include an second empty C2100
Ask them if there is an option to allow it to function as a JBOD enclosure, or if it needs to have a motherboard/CPU etc. If it's the latter, tell them you want a 2U / 12-disk JBOD enclosure. I'm not sure about shipping charges or noise. I will point out that my SC847 got much quieter once I pulled the fan connections from the backplane and connected them to the control board (a 2-minute fix).
but if that plus a PCI HBA (any recommendations?) would be all I needed
I believe it comes with an LSI 9200-8 HBA and cabling.
you can see why I sort of branched out to thinking "maybe I need to just rebuild this from the board up" type of thinking
Agreed. Yes, you need to upgrade to a proper motherboard and RAM. Absolutely.
Regarding RAID - Making the jump, using ZFS pools, if I wanted to do it right with around 30 TB of data, how much actual space would I need? If a drive failed, I'd literally then just pull the bad drive, put in a new drive, and have 0 data loss?
So this is the challenge we are all faced with, and I would suggest looking through cyberjock's newbie guide. The short story is that RAIDZ1 can survive a single drive failure, RAIDZ2 two simultaneous drive failures, and RAIDZ3 three simultaneous drive failures. With larger drives and wider vdevs (more than 3 disks), the stress of a rebuild when replacing a single disk is more likely to cause a second drive to fail. In that case, if you were running RAIDZ1, all your data would be gone. Now, if you have backups, this isn't as critical.

If I were you and unable to afford a backup pool, I would run something like 7 x 8TB drives in RAIDZ2, or 9 x 6TB drives in RAIDZ2, to get your 30TB of usable space. The other point I forgot to mention is that when a pool gets more than 80% full, performance really starts to suffer. So the quantities I gave bring you to an 80% space number of 30TB (technically you'd actually have 36-37TB usable).

And yes, the beauty of RAID is that if you've designed and implemented it properly, you just pull out a failed drive, insert a new one, and everything stays up and running with minimal effort. Down the road, when your storage needs grow, you can add another vdev (like 5 x 20TB disks in RAIDZ2) and that usable space just adds onto your existing pool.
@Bidule0hm has a great capacity calculator, so you can run some what-if scenarios. Using it, I actually figured out it was cheaper in the near term to buy 20 used 2TB drives off eBay for my transition area and backup pool. So I have my primary pool with 12 x 4TB drives in RAIDZ2, and then in my SC847 I have my backup pool with 6 x 4TB in RAIDZ2 + 10 x 2TB in RAIDZ2 + 10 x 2TB in RAIDZ2, and I still have 21 drive slots open. :)
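A back-of-the-envelope version of that sizing, for playing with what-if numbers (the 80% fill guideline is from the post above; the function names are mine, and real ZFS usable space comes in somewhat lower once metadata overhead and TB-vs-TiB are accounted for):

```python
def raidz_usable_tb(drives, size_tb, parity=2):
    """Raw usable space of one RAIDZ vdev: only the data drives
    count (parity is 1, 2, or 3 for RAIDZ1/2/3)."""
    return (drives - parity) * size_tb

def comfortable_tb(raw_tb, fill_limit=0.8):
    """Plan around ~80% full; performance suffers beyond that."""
    return raw_tb * fill_limit

for n, size in [(7, 8), (9, 6)]:
    raw = raidz_usable_tb(n, size, parity=2)
    print(f"{n} x {size}TB RAIDZ2: {raw}TB raw, ~{comfortable_tb(raw):.0f}TB at 80% fill")
```

Both of the suggested layouts land at or above the 30TB target once the 80% cushion is applied.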
 

bbddpp

Explorer
This is fun. I think I just learned more about RAID reading your paragraph than I ever knew before.

Ok, so I think I am really close! It sounds like you are thinking that the C2100 from DFW plus an external JBOD case would be a good fit. Plus that configuration gives me plenty of bays and support for larger drives.

So, let's say I order this:

- DELL POWEREDGE C2100 2X XEON E5630 2.53GHZ QC / 32GB / 9211-8I / 2X 750W / TRAYS FreeNAS Edition ($400 DFW)

and this:
- 4U SUPERMICRO 846E1-R900B BAREBONE SERVER CHASSIS WITH 24x TRAYS ($200 mrrackables on ebay)

I'm up to $600 flat. Now, I see that the 4U comes with a SAS backplane that doesn't necessarily support large drives, but that's okay; I can use my smaller drives in there. If I'm understanding you correctly, since the C2100 comes with an LSI 9200-8 HBA, I can connect the 4U directly to the C2100 without any additional hardware or cabling?

That gets me in the door for $600 with a pretty capable system, and a little more robust I think than Option 1 (plus I'm not sure that at Option 1 above I even properly listed everything I needed).

Have I got it??? I know a Xeon E5630 is probably not as robust as an i5 or i7, but it's probably still a step up from what I'm running now. Plus I can part out and sell my old server to make a few bucks back, maybe.
 

depasseg

FreeNAS Replicant
Are you buying additional drives? If not, then there isn't a need to buy the JBOD right now, since you only have 12 drives and the C2100 holds 12.
But yes, the rest sounds accurate, except possibly the 9211-8 (I assume it's the "i" model, which means internal ports). You need external ports, so you could ask DFW for an HBA that includes both, or you could buy an adapter with external ports.
Also, I would highly suggest that you not buy the E1 version, and instead get the E16 (it gives you 6G SAS, which isn't limited to smaller drives). My guess is you will have this chassis for a long time, and the E1 would be a huge limitation down the road.
 

bbddpp

Explorer
Good call. I'll see if DFW would be willing to swap the 9211-8i for something comparable that included internal and external ports. If they can't or it's price-prohibitive, got a cheap adapter you'd recommend that I could put right into the C2100 to add the external ports to allow me to connect it to the second JBOD?

I have been collecting data and adding new drives frequently, so I will probably be ready for more than 12 drives in the next few months; I figure I may as well just get the second case now so it's ready to load up.

Do you have an eBay link to the E16 version of the Supermicro case (any that holds at least another 12 3.5" drives)? I can't seem to find any on eBay, so I must be searching wrong. Or should I buy the one I linked and just swap the backplane for an E16? Would that work?
 

bbddpp

Explorer
UPDATE: I went with the C2100 FreeNAS box. I didn't get a lot of help via phone (the guy seemed a bit annoyed and in a hurry), and no answer via chat or email, about any sort of swaps or additional equipment to extend this to a second JBOD tower. But at $400 it is a solid start and definitely gets me where I need to be: a capable FreeNAS box that can hold up to 12 large drives and a boot SSD. Hard to beat that at $400.

That said, I'd still love to know a good budget choice for my JBOD case, and what I'd need to connect that JBOD case to this C2100. I just don't think I need a 48-bay case like yours. :)
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
The plus here is that I think (?) the Norco backplane supports the larger 5TB+ drives out of the box.
A little late, I guess, but the only Supermicro chassis for which this is an issue are the E1 models with SAS1 backplanes. Anything with a -TQ backplane (individual SATA ports for each bay), a -A backplane (one mini-SAS port per four bays), or a SAS2 or SAS3 expander backplane should work just fine with any drive capacity. My chassis has the SAS2 expander backplane, and I'm running 6TB disks without issues.
 