BUILD SOHO/Media NAS Build - Review/Suggestions Appreciated

Status
Not open for further replies.

AgileLogic

Dabbler
Joined
Oct 20, 2015
Messages
20
If you can figure out a way to do it, e.g. from a client computer running Arq Backup, Amazon Glacier and Google Nearline are only $0.01/GB per month. Backblaze B2 is slated to be $0.005/GB per month.

I think my biggest challenge for cloud backup will be bandwidth. What Comcast/xFinity gives me at 24 Mbps isn't bad, but they seem to have limits on how much I can use per month. It might be really expensive to get the phone company to run some bigger pipe to the new house, but since I'm building it right now I can at least look into it.
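To put rough numbers on it, here's a quick back-of-the-envelope sketch. The 4 TB backup size and 300 GB/month cap are placeholders, not my actual figures -- plug in your own plan's numbers:

```python
def upload_days(data_gb, uplink_mbps, efficiency=0.8):
    """Days to push data_gb up a link sustaining uplink_mbps,
    derated by `efficiency` for protocol overhead and contention."""
    seconds = (data_gb * 8 * 1000) / (uplink_mbps * efficiency)  # GB -> megabits
    return seconds / 86400

def months_under_cap(data_gb, cap_gb_per_month):
    """Whole months needed if a monthly transfer cap is the binding limit."""
    return -(-data_gb // cap_gb_per_month)  # ceiling division

# Placeholder numbers: 4 TB initial backup, a steady 5 Mbps (CrashPlan-like),
# and a hypothetical 300 GB/month cap -- check your actual Comcast plan.
print(round(upload_days(4000, 5, efficiency=1.0)))  # ~74 days of continuous upload
print(months_under_cap(4000, 300))                  # 14 months if the cap binds
```

Either way you slice it, the initial seed upload is the painful part; incrementals after that should fit easily.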
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
If you are willing to drop $10k on a solution, I'd look into the 4U 72-drive Supermicro (6048R-E1CR72L). Or if that's too pricey, work your way down to an acceptably priced server, and then plan to buy a JBOD enclosure to put additional drives in (like my SC847). I wouldn't worry about splitting up the load into 2 systems; I would split them into 2 pools on the same system: have a large backup pool and a smaller high-speed media pool. After years of futzing around, that's what I did, and I've found it's working great. I'd like to figure out a better plan for offsite backups, but for now I've been using CrashPlan.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I think my biggest challenge for cloud backup will be bandwidth. What Comcast/xFinity gives me at 24 Mbps isn't bad, but they seem to have limits on how much I can use per month. It might be really expensive to get the phone company to run some bigger pipe to the new house, but since I'm building it right now I can at least look into it.
For CrashPlan it probably won't matter, since the upload never seems to come close to maxing out my 25 Mbps connection. CP only uses closer to 5 Mbps.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
I would think the backup NAS would need less CPU.
Agreed.
I might want to use deduplication on it since the backup sets seem perfect for that
Maybe, but probably not. The costs would probably outweigh the benefits. Search the forums for discussions of deduplication and how rarely it's a win.
And size the zpool for the anticipated size of the backups, with perhaps zpools for different sets of backups.
Could be different pools, or just different datasets within a pool. You'd be trading flexibility for simplicity.
The media NAS might want more CPU cores for transcoding, and wouldn't need L2ARC or SLOG. The zpools would be sized around the media.
I would think the shared business files might be best stored on the media NAS.
Makes sense.
is there a better way to carve this up
For sure, there are lots of alternatives. "Better" is always contextual. At least now you see two boxes as an option.
 

AgileLogic

Dabbler
Joined
Oct 20, 2015
Messages
20
I might want to use deduplication on it since the backup sets seem perfect for that, so I'd need lots of RAM and maybe an L2ARC?
Maybe, but probably not. The costs would probably outweigh the benefits. Search the forums for discussions of deduplication and how rarely it's a win.
Yeah, I had read some of the deduplication tutorials that evangelize the benefits. And sure, since I have a lot of backup sets taken weekly from the same machine, where a large part of the set hasn't changed, it really might save some storage.

But reading more forum postings, the costs and especially the risks are clearer. Since this is a backup pool, I'm pretty convinced I don't want to risk a catastrophic meltdown if I hit the tipping point where RAM becomes insufficient and the system can't boot. Yikes. That, and since I don't know this stuff, I'm in over my head on deduplication anyway, which is a recipe for problems. I'll spend the saved RAM funds on more disk if I need it.
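For anyone who lands here later: the rule-of-thumb math that scared me off is easy to run yourself. The ~320 bytes per dedup-table entry and the average block size below are the commonly cited ballpark assumptions, not exact figures, so treat this as an estimate only:

```python
def ddt_ram_gb(pool_tb, avg_block_kb=64, bytes_per_entry=320):
    """Rough RAM needed to hold the ZFS dedup table (DDT) in core:
    one entry per unique block.  ~320 bytes/entry and the average
    block size are ballpark assumptions; real pools vary."""
    blocks = (pool_tb * 1024 ** 3) / avg_block_kb  # TB -> KB -> block count
    return blocks * bytes_per_entry / 1024 ** 3    # bytes -> GB

# A hypothetical 20 TB backup pool of large files (128K blocks) vs. a
# less favorable 64K average block size:
print(round(ddt_ram_gb(20, avg_block_kb=128)))  # ~50 GB
print(round(ddt_ram_gb(20, avg_block_kb=64)))   # ~100 GB
```

Tens of GB of RAM just for the DDT, before ARC does anything useful -- which is why "more disk instead" wins.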

Thanks.
 

AgileLogic

Dabbler
Joined
Oct 20, 2015
Messages
20
Getting close, many thanks for the help so far!

Trying to narrow down the chassis selection...

You all have persuaded me to consider Supermicro chassis. I get that the overall airflow and cooling are great, since they're designed for that, but I'm concerned about how loud they will be with the small-diameter, high-RPM fans they use. In the new house, the NAS will be in a separate, specifically-designed server closet, but for the next year or so it will be out in the office, so noise is a consideration. I keep reading comments like:
"[...] in the office was akin to sitting next to a jet waiting for takeoff. After a few days of it sitting in the office we were all threatening OSHA complaints due to the noise! Seriously, it was that loud. It is not well suited for home or office use unless you can isolate it."

Supermicro sure has a lot of variations of their chassis! It took a while to sort them all out. Below are some questions, but also some narrative in case it helps someone else down the road.

I'm looking at rackmount and 4U chassis only. So all the part numbers will start with '84'.

The current series of chassis are the 846, 847 and 848 (the third digit is the series). I haven't found a good explanation of what changed across series (too much to read through all the press releases). Some differences: the 847 series supports 36 drives, the 848 appears to have 4 fans on the drives instead of 3, and the power supply wattage goes up with each series. If anyone knows of other meaningful differences across series, please let me know.

I think I want the SAS2/SATA backplane/expander (one SFF-8087 in, with 6 drives per channel), 'E16' on the part number. They also have a few 'E26' chassis with two SFF-8087 ins -- I've read conflicting descriptions: these are either 3 drives per channel, or support for redundant HBAs. Either way, I don't think I need this. Also, I want a SAS2, not SAS3, backplane/expander, correct? (I read in a thread somewhere not to mix a SAS2 HBA with a SAS3 backplane.)

There's a chassis with a SAS2/SATA backplane but no expander, so individual connectors for each drive -- 'A' instead of 'E16'. That seems a waste if I'm going with these chassis. And one with no backplane at all, the 'TS', if I wanted to buy it separately.

After the dash, the 'R' is for redundant PSUs, which they all seem to have, then the PSU size -- '900' for 900W, '1K28' for 1280W; not sure why '2K02' is for 2000W. I see some vendors list PSUs at slightly different wattages, for example 1200 vs. 1280 -- are they substituting PSUs, or just not listing the part number correctly, or ???

After the PSU comes the expansion slot config. Full-height, full-length standard IO slots add no letter code. The 847s have 'LP' for low-profile. There's also 'U' for UIO and 'W' for WIO -- I think these are unique to some Supermicro mainboards and I won't need them.

The 'B' at the end indicates black color.

So I think I'm looking for chassis of the form 84nE16-RppppB, where n could be '6', '7' or '8' and pppp should probably be 1200 or more; less wattage might work if I end up with fewer drives.
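To keep the decoding straight while I shop, I threw together a little script that applies the scheme above. To be clear, this mirrors my own reading of the part numbers, not any official Supermicro spec, so treat unrecognized codes with suspicion:

```python
import re

# Part-number scheme as worked out above: 84 + series + optional 'B'
# generation + backplane code + '-R' + PSU wattage code + options.
PATTERN = re.compile(
    r"^84(?P<series>[678])(?P<gen>B?)"      # 846/847/848, optional 'B' generation
    r"(?P<backplane>E16|E26|A|TQ|TS)?"      # expander / individual / no backplane
    r"-R(?P<psu>\w+?)"                      # redundant PSUs, wattage code
    r"(?P<slots>LP)?(?P<color>B)?$"         # low-profile slots, black
)

PSU_WATTS = {"900": 900, "920": 920, "1200": 1200, "1400": 1400,
             "1K28": 1280, "1K62": 1620, "2K02": 2000}

def decode(part):
    m = PATTERN.match(part)
    if not m:
        return None
    fields = m.groupdict()
    fields["watts"] = PSU_WATTS.get(fields["psu"])
    return fields

print(decode("846BE16-R1K28B"))
print(decode("847E16-R1400LPB"))
```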

848 Series

The 848 seems the most current (series '8'). Supermicro only lists 1620W, but there are 1800W units on eBay. Otherwise they seem to be 24-bay chassis similar to the 846 series?

The models here would be 848E16-R1K62B and that 848A-R1800B listed on eBay.

I don't see any used 848s on eBay. New ones are expensive -- $2,300, $2,700. So these are probably out for me.

847 Series

You guys suggested either building two NAS boxes, each tailored to its usage, or one NAS with two (or more) pools, each tailored. If I go with one NAS, the 847 seems the choice since it supports up to 36 drives.

The 847 puts the extra 12 drives in the lower rear under the mainboard position, and all 847s have low-profile expansion slots. Some 847s combine full-height and low-profile slots, but put them sideways -- I'm guessing that requires risers/cables from the slots to the cards? I think I'd avoid this if possible.

The extra 12 drives also seem to have their own backplane/expander that needs an SFF-8087 from the HBA, but the HBAs have 2 outs anyway.

The models here would be 847E16-R1400LPB (1400W Gold), 847E16-R1K28LPB (1280W Platinum). There is also the 847E26-R1400LPB (1400W Gold), with that dual SFF-8087 backplane.

On eBay, I see used 847 servers offered with X8DTN mainboards and E55xx CPUs (some dual) for $600 and up, and new ones for $1,000 up. There's one for $950 that's almost a complete FreeNAS system (without drives). Maybe an X8DTN+ and E5520/E5550/E5560 CPU are actually OK for the systems I'm building? I was targeting something more up-to-date with the X10SRL-F and E5-1650.

A new 847 chassis without the SAS2 backplane/expander seems to start at $1,100. A new 847E16-R1400LPB starts at $1,585. Still a bit pricey.

846 Series

This seems to be a chassis series used in a lot of forum members' builds.

Some have 'B' after the series number, indicating "generation B [...] with extra features". Anyone know what the extra features are? I presume I'd want them?

The models here would be 846BE16-R900B (900W Platinum), 846BE16-R1K28B (1280W Platinum), and the 'A' and 'TQ' are options if I want to buy parts.

On eBay, there are barebones 846s (no PSU, no expander/backplane, no rack rails) for as low as $135. With a backplane/expander, 1200W PSU and rack rails, they start at $500. A brand-new 846E16-1200B (the listing says 1200W Gold where Supermicro says 1280W Platinum?) starts at $1,280. A new 846BE16-R920B starts at $1,201 (note the 'B' model), and the 846BE16-R1K28B at $1,365; I don't see any used 'B' models on eBay. The used early models (not 'B') can be very cost-effective; new ones and 'B' models get pricey.


So, after all that, yeah, the Supermicro seems to be a really good chassis for a FreeNAS box. Two used 846E16-R1K28B, or a used 847E16-R1400LPB if I decide on one system and the low-profile IO can work out.

But I'm concerned about noise. If I can get a used one at a reasonable price, yeah, they are probably cost-effective compared to the Caselabs enclosure.

I'm still not convinced the Caselabs enclosure won't provide good ventilation with the quiet fans, and I think it can be fitted with filters to control the dust intake. I need to check that out further.


Anything in all that seem off base? Thanks!
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Whoo, wall of words!

Check my signature block for the system I'm using. I've been pretty happy with it so far. It is a 36-bay system with dual power supplies... one other thing to keep in mind is the efficiency of the power supplies. Some of the variants are Bronze, some Gold, some Platinum. I have the Platinum, for whatever difference that makes on a system that routinely eats more than 500 watts :)

Avoid the X8 products and the associated FSB CPUs, as they are associated with poor performance and high power consumption. You want to stay with X9 or newer. I got very lucky to find mine for $600 with the motherboard (no processors)... that's cheap in general. Ensure that the backplane (or backplanes, in the case of the 36-bay systems) are SAS2 backplanes... the SAS1 backplanes will only let you use drives up to ~2.2TB.

As far as noise, they are fairly noisy. In my case, three of the 7 fans were connected to the backplane, where they always ran at 100%. I moved the fans over to the motherboard fan connectors, allowing them to spin up and down with the system temperature, and things have gone much better. Even running some intensive drive activity, I saw a minimal change in drive temperature. I wrote more in a previous post if you want to dig it up. My system would be just fine in a closet... I wouldn't want it in an office with me long-term, but I have quite sensitive hearing. I did add active CPU heatsinks/fans to keep the processors a bit cooler.

My experience with tower cases that hold large numbers of drives is that they tend to exhibit bad hot-spotting. They'll be hot one place, cool another. The rack form factor, where the case is a wind tunnel with all the air entering at the front and exiting at the rear, really seems to do a better job with overall cooling.
 

Fuganater

Patron
Joined
Sep 28, 2015
Messages
477
The 846 uses 1 SFF-8087 cable for all 24 drives. Not sure where you saw 1 cable for 6 drives.

It is loud, but not that loud. Currently my rack is in my small office, and yeah, it is loud, but not crazy loud. I close the door and can barely hear it in the living room. (All of my fans are hooked up to the mobo.)

'BPN-SAS2-846EL1' is the backplane you want.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
The 846 uses 1 SFF-8087 cable for all 24 drives. Not sure where you saw 1 cable for 6 drives.

It is loud, but not that loud. Currently my rack is in my small office, and yeah, it is loud, but not crazy loud. I close the door and can barely hear it in the living room. (All of my fans are hooked up to the mobo.)

'BPN-SAS2-846EL1' is the backplane you want.

Be careful with this. That's exactly the backplane you want... the 36-bay variants have a BPN-SAS2-826EL1 in the back. The EL1 indicates an expander backplane. However, not all backplanes are expanders - some of the ones I've seen on eBay aren't. In this case, you'll need 6 cables - 1 cable for every 4 drives. Unless you're doing something ludicrous like 100% enterprise SSDs where you might actually exceed the bandwidth of one 4-port cable (which is improbable), you definitely want an expander backplane.

The lesson: check your part numbers very carefully.
 

AgileLogic

Dabbler
Joined
Oct 20, 2015
Messages
20
Whoo, wall of words!
Yeah, sorry, I just wanted to capture all my research in case someone wanted to look it up later, like I've been doing.

Avoid the X8 products and the associated FSB CPUs, as they are associated with poor performance and high power consumption. You want to stay with X9 or newer.
Great tip, thanks, I didn't pick up on the FSB part.

My system would be just fine in a closet... I wouldn't want it in an office with me long-term, but I have quite sensitive hearing.
Currently my rack is in my small office and yeah it is loud, but not crazy loud. I close the door and can barely hear it in the living room.
That's not making me feel better about putting one of these in my office. :confused:

My experience with tower cases that hold large numbers of drives is that they tend to exhibit bad hot-spotting. They'll be hot one place, cool another. The rack form factor, where the case is a wind tunnel with all the air entering at the front and exiting at the rear, really seems to do a better job with overall cooling.
Yep, definitely many (most?) tower cases do not move air through them; they just push or pull some air through a few openings. But the Caselabs cases allow for airflow through them, if I install the right fans and mounts in the right places and keep the cabling neat.

For reference, I'm now considering the Caselabs Magnum STH10 (much less of a beast than the Magnum TX10, but can still hold up to 22 drives). Still haven't given up on the Supermicro 846 yet...

Each of the Flex-Bays (holding 4 drives each) has a fan that pushes air over those 4 drives, and there would be 5 Flex-Bays in the STH10. The Noctua fans I listed earlier move 63.27 cfm per fan, so that's 316.35 cfm over the 20 drives. The Noctua fans are only 17.8 dBA. The 3 Supermicro 80mm middle fans pull 217.5 cfm over the 24 bays. The Supermicro fans are 53.5 dBA. 50 dBA is like light traffic, a dishwasher in the next room, or a conversation near you, so I can understand why you guys think it might be a bit much for an office.

The STH10 can (should) be configured to flow air from bottom to top. The top mounts 4 x 120mm fans, pulling 253.1 cfm out the top. Same 17.8 dBA Noctua fans. The Supermicro 80mm rear fans pull 119.2 cfm out the back. They are 47.0 dBA fans.

The STH10 can also be configured with two middle dividers with 3 x 120mm fans each, so they would move 189.8 cfm through the middle, plus a rear exhaust fan for 63.27 cfm out the back. I would configure the STH10 with solid sides to create an air tunnel, bottom to top.

Yes, I know airflow isn't quite that simple, but I should be able to move at least 200 cfm through that case, probably more, with fans that are about as loud as rustling leaves.
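One more wrinkle on the noise math: dBA figures from multiple fans don't add linearly, they add on a log scale, so a pile of quiet fans stays quiet. A quick sketch, using the spec-sheet free-air figures above (which ignore mounting resonance, static-pressure load, and pitch):

```python
import math

def combined_dba(levels):
    """Sound pressure levels add on a log scale:
    total = 10*log10(sum(10^(L/10))).  n identical sources come out
    10*log10(n) dB above one source, so five 17.8 dBA fans are
    ~24.8 dBA together -- not 89."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels))

# All 16 Noctua fans in the STH10 layout above (5 flex-bay + 4 top +
# 6 divider + 1 rear), vs. the three 53.5 dBA Supermicro midplane fans:
print(round(combined_dba([17.8] * 16), 1))  # ~29.8 dBA
print(round(combined_dba([53.5] * 3), 1))   # ~58.3 dBA
```

So even sixteen of the quiet fans together land under 30 dBA on paper -- still in "library" territory compared to the Supermicro fans.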

There was an earlier comment about dust in the Caselabs enclosures. DEMCiflex makes filters that fit every opening in the STH10. They are an extra purchase, but should prevent almost all of the dust intrusion.

The 846 uses 1 SFF-8087 cable for all 24 drives. Not sure where you saw 1 cable for 6 drives.
I actually said 6 drives per channel. One SFF-8087 connection has 4 channels for 24 drives total. We're on the same page. :)

Thanks!
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
I just posted some SPL measurements: basically my study is 36 dB, the 847 is 51 dB at boot, and it settles down to 43 dB when PWM kicks in. I wouldn't want it in the office with me all the time... that's why the new house has a dedicated server room :) Also keep in mind you have heat to deal with... you may not realize it, but one of these beasts eats 500-600 watts when running -- around 2,000 BTU/hr. You will notice some localized heating effects in most situations.
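For reference, the watts-to-BTU conversion is just a constant (1 W sustained = 3.412 BTU/hr):

```python
BTU_PER_WATT = 3.412  # 1 W dissipated continuously ~= 3.412 BTU/hr

def btu_per_hour(watts):
    """Heat load a continuously running box dumps into the room."""
    return watts * BTU_PER_WATT

print(round(btu_per_hour(600)))  # ~2047 BTU/hr, i.e. the ~2,000 figure above
```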

I'm not saying that you can't move enough air in a large tower case... the problem I've always found is ensuring all of the air moves in and out - that you don't create localized "pockets" of warmer air that are being inefficiently exhausted. There are simply many more flow paths, versus a rack mount enclosure where there is one very simple flow path - front to back. Tower cases also tend to be much larger in volume, so you have to account for that. Finally, the real reason the Supermicro fans are noisy is their high static pressure... they can efficiently force air through and around everything in their way, with a static pressure of more than 1" of water... the Noctua fans you're looking at, while great, are typically half that or less.

As far as the dust filters go, any decent dust filter will cut your airflow and static pressure roughly in half.

Of course, unless you're willing and able to sit down and do all the thermodynamics calculations, the proof is in the doing... order everything from someone with a good return policy, set it up, and see what happens. Graph the temperature of all the drives over time. If you're moving enough air, they should stay close to ambient temperature and exhibit little variation drive-to-drive... if you're having airflow issues, you'll find some drives substantially hotter than others. That will be your clue to go reevaluate. As you mentioned, cable management (really, the management of any obstruction) will be critical.
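If you want to do the drive-temperature logging without buying anything, a small script around smartctl will do it. This is only a sketch: it assumes SMART attribute 194 (Temperature_Celsius) or 190 (Airflow_Temperature_Cel) is reported, and your device names will differ:

```python
import re
import subprocess

def parse_temp(smart_output):
    """Pick the temperature out of `smartctl -A` text.  Looks for
    attribute 194 (Temperature_Celsius) or 190 (Airflow_Temperature_Cel);
    the raw value is the last field, sometimes with a (Min/Max ...) tail."""
    for line in smart_output.splitlines():
        if "Temperature_Celsius" in line or "Airflow_Temperature_Cel" in line:
            m = re.search(r"(\d+)(?:\s*\(.*\))?\s*$", line)
            if m:
                return int(m.group(1))
    return None

def drive_temp(dev):
    """Query one drive; dev is e.g. '/dev/da0' (yours will differ)."""
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    return parse_temp(out)

# Log every bay once a minute from cron or a loop, then graph the CSV:
#   for each da0, da1, ...: print(timestamp, dev, drive_temp(dev))
```

Run it over a day of heavy I/O and the outliers (the hot-spotted drives) jump right out of the graph.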
 