Help with a large (around 30TB) array


RChadwick

Dabbler
Joined
Jun 12, 2012
Messages
19
I need a nice chunk of RAID storage (20-30TB). I originally bought a rackmount case that holds 16 drives, a Supermicro Xeon board, and a RAID controller. I later found out:

1) The brand-new 3Ware card I bought only supports drives up to 2TB. I'm never getting another 3Ware card again.
2) There are relatively cheap enclosures without a motherboard, like a Dell PowerVault MD1000 or an SGI Rackable SE3016, that connect to an external computer.

I think I really like the idea of #2. The enclosures seem cheap, and I have a few spare rackmount PowerEdge servers available. These enclosures usually have two connectors on them. What are they? What kind of card do I need in the server? Assuming I get a SAS enclosure, will it work with SATA drives? I'm not in IT, and these things are new to me.

Also, do these things work well with FreeNAS?

Thanks!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
3ware isn't the only company that had a 2TB limit; virtually every vendor had that problem back in the day. The real problem is that you bought an old card and didn't follow the forum's recommendations for what hardware to pick. Read the stickies when you get a chance; they'll help you with some of the more basic problems people run into.

Also, be careful with external enclosures. They can work well or they can not work at all.
 

RChadwick

Dabbler
Joined
Jun 12, 2012
Messages
19
Thanks for the beatdown. Very helpful.

I was originally going to just run Windows on this box. I didn't even know about FreeNAS when I purchased the hardware. No stickies.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Sorry, it wasn't meant as a "beatdown". It's just so common for people to hear about FreeNAS yesterday, try to use it this morning, and then be shocked to find it's not as simple to use as Windows.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Sorry, it wasn't meant as a "beatdown". It's just so common for people to hear about FreeNAS yesterday, try to use it this morning, and then be shocked to find it's not as simple to use as Windows.

well, in all fairness, windows fails at the 2TB limit for those controllers too, and the manufacturers do a real poor job of communicating the limits to the average consumer.
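for the curious, the usual culprit behind that wall is 32-bit sector addressing in the controller firmware (my explanation, not anything from the 3ware docs). the arithmetic, as a quick python sketch:

```python
# Rough illustration of the classic "2TB" controller limit:
# a 32-bit LBA field with legacy 512-byte sectors tops out at 2 TiB.
max_sectors = 2**32                 # largest sector count a 32-bit field can address
sector_size = 512                   # bytes per legacy sector
limit = max_sectors * sector_size   # total addressable bytes

print(f"{limit / 10**12:.2f} TB (decimal)")  # ~2.20 TB, the advertised "2TB" wall
print(f"{limit / 2**40:.0f} TiB (binary)")   # exactly 2 TiB
```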
 

RChadwick

Dabbler
Joined
Jun 12, 2012
Messages
19
Thanks for the replies. I haven't even booted FreeNAS yet, so I'm not sure how difficult it is to use, but I'm assuming it's not a dealbreaker. My main concern is hardware. Most of the hardware discussions I've found here deal with small, cheap, consumer setups (under 5 drives), low-power hardware, and which motherboard is faster by how many milliseconds. I'm just looking for a lot of cheap SATA storage.

I got a 3Ware card (bought brand new from the company two years ago; I expected better support) that fit the motherboard in the rackmount enclosure I got on eBay, because I thought this was the normal/standard/only way to do things. Then I saw these external enclosures. I'm all for tinkering, and have been building PCs for over 20 years, but I see more reliability in off-the-shelf enterprise gear. If I can plug a cable from a $200 enclosure into a $100 1U PowerEdge server and get 16 hot-swap bays, that seems like a sweet deal to me. Being able to upgrade the computer with little more than a cable change sounds wonderful. Right now, my non-64-bit Xeon motherboard will be a pain to change, and because of a likely move to PCI-E I'll probably need to change that PITA 3Ware card as well. I'm wondering why I'm not reading more about these external enclosures on here.
Speed is a non-issue for me. I just need a lot of temporary space.
So, does anyone have experience with these external enclosures? Any problems or issues? What is the cable/interface called that connects the computer to the enclosure?

Thanks again.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Most people don't use external enclosures because they can be accidentally unplugged. The same goes for USB, FireWire, and eSATA. It's totally doable, but why would you? Several people have lost large amounts of data because they accidentally unplugged their drives.

Also, most people purpose-build their FreeNAS server for their exact needs. If they want ten drives, they get a motherboard with ten SATA ports, or get controllers to supplement the motherboard.

If you aren't using a 64-bit CPU you are going to be limited to 4GB of RAM, and ZFS is pretty much not going to work well, if at all. The manual says that if you don't have a 64-bit CPU and/or enough RAM, use UFS until you can afford better hardware. So your plan to "need a lot of temporary space" isn't looking too good.

I can understand your desire to reuse old (spare?) hardware, but it's old enough that it's not likely to be very useful for FreeNAS. If you can't afford newer hardware, you are likely to invest significant time learning how to get FreeNAS to work for you, only to find out it's so starved for resources that it's unstable or too slow for your needs anyway. An unstable system may cause you to lose your data, which isn't a favorable outcome. The rule of thumb for ZFS is 1GB of RAM per TB of storage. That isn't a hard-and-fast requirement, but it gives you an idea of how much RAM you may need for decent performance and reliability. Since you mentioned 30TB in the original post, you have no chance of a functional zpool with only 4GB of RAM.
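If it helps, here's that rule of thumb as a quick Python sketch (the 1GB-per-TB figure is the community guideline above; the base amount set aside for the OS is my own placeholder assumption):

```python
# ZFS RAM sizing per the community "1GB of RAM per TB of storage" rule of thumb.
# Illustrative only; actual needs depend on workload, dedup, ARC pressure, etc.
def suggested_ram_gb(pool_tb, base_gb=4):
    # base_gb is a placeholder for what the OS itself needs (an assumption here).
    return base_gb + pool_tb * 1  # 1GB of RAM per TB of pool

for pool_tb in (6, 16, 30):
    print(f"{pool_tb}TB pool -> ~{suggested_ram_gb(pool_tb)}GB RAM suggested")
# A 30TB pool lands around 34GB -- nowhere near what a 4GB 32-bit box can offer.
```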

Quite a few people have built relatively inexpensive systems with a G2020 CPU and a Supermicro motherboard with 16GB of ECC RAM. That sounds like a good place for you to start.
 

Nindustries

Patron
Joined
Jun 12, 2013
Messages
269
What is the cable/interface called that connects the computer to the enclosure?

I can't say I've ever used an external enclosure, but I've seen quite a few come up in generic NAS builds. The cable they almost always use is eSATA.
The speed depends: either 3GB/s or 2GB/s.
I looked into Synology external enclosures a bit, but the smallest is €512, which isn't cheap. I'd rather build a bigger NAS, but hey, that's me.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
What is the cable/interface called that connects the computer to the enclosure?

the weakest link? the root of all evil? unnecessary risk?

oh yeah: "not recommended." knew i'd eventually nail it.
 

RChadwick

Dabbler
Joined
Jun 12, 2012
Messages
19
Thanks again for the responses. Since this will be rackmounted, I'm totally unconcerned about a cable getting unplugged by accident. One of the external enclosures I found (the cheapest, at $200 including caddies and shipping) has the connectors in front. Even in front, I'd be OK with that risk, but cables in back could only be unplugged on purpose (the Dell PowerVault MD1000 has them in back: http://www.ebay.com/itm/Dell-PowerV...troller-Modules-2x-Power-Supply-/151083857675 ). It even looks like these can be daisy-chained for more storage if needed. If the only complaint with these things is unplugging the cable, I think I'll go with the external enclosure.

I looked at the Synology units. Man, are they expensive. I don't want to build a unit for my 'exact' needs, as I expect my needs will change. I don't need more than 6TB at the moment, but getting a motherboard and enclosure that can only handle 3 SATA drives isn't going to help me in the future when I need 30TB. As it is, the enclosure I did build is already obsolete before I got a chance to use it. It'll either collect a lot of dust, wind up in the garbage, or maybe end up on eBay. A total waste of my time and money.

I didn't know a single eSATA cable could handle more than one drive.

Thanks again for the help.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
An eSATA cable can, via port multipliers.

Using port multipliers (which is how you turn one eSATA port into several) has been a major pain for many users. They just don't work that well, and they're often built from bargain-bin hardware that doesn't work reliably.

The speeds are NOT 3GB/sec (gigabytes per second). They are 3Gb/sec (gigabits per second). One of those is almost 10x faster than the other. You are going to have some serious performance issues if you think you can create a zpool and push it through a single eSATA link. Someone else did that and could never complete a zfs scrub because the server would become unresponsive.
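If the bits-versus-bytes difference isn't obvious, here's the arithmetic as a quick sketch (theoretical ceilings only; real-world eSATA throughput is lower still):

```python
# Gigabits vs gigabytes: the misreading that bites people.
# eSATA's "3Gb/s" is gigaBITS per second, and SATA's 8b/10b encoding
# spends 10 line bits per data byte, so the usable ceiling is ~300MB/s.
line_rate_gbit = 3.0                      # advertised eSATA line rate, Gb/s
misread_MBps = 3.0 * 1000                 # what "3GB/s" would mean: 3000 MB/s
actual_MBps = line_rate_gbit * 1000 / 10  # 8b/10b -> ~300 MB/s for the whole link

print(f"misread as 3GB/s : {misread_MBps:.0f} MB/s")
print(f"actual ceiling   : {actual_MBps:.0f} MB/s, shared by every drive behind the multiplier")
```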

Good luck!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Since this will be rackmounted, I'm totally unconcerned about a cable getting unplugged by accident. [...] cables in back could only be unplugged on purpose [...] If the only complaint with these things is unplugging the cable, I think I'll go with the external enclosure.

Cables in back can be accidentally messed up. It happens, even in racks (maybe that should be "especially in racks"). You also have other unnecessary points of failure: extra power supplies, which need additional power cords, extra controller logic in the data path, and so on. You have to consider how well ZFS will take it if its pool, or worse, part of its pool, suddenly goes offline.

Now, really, at a certain size you absolutely have to go that route. Once you have 50 or more drives, your only realistic option is multiple chassis, and at that point external cabling is basically mandatory.

But contemporary chassis such as the Supermicro 846 and the Norco 4224 allow up to 24 drives in a 4U footprint. And for only 30TB of space, you really only need a 12-drive enclosure (11 4TB drives in RAIDZ3 plus a spare gets you almost exactly 30TB of usable space...)
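the back-of-the-envelope math, if you want to check it (ignores ZFS metadata and slop overhead; the decimal-TB vs binary-TiB gap is where the "almost" comes from):

```python
# Usable space for 11x 4TB drives in RAIDZ3 (3 drives' worth of parity).
drives, parity, drive_tb = 11, 3, 4
data_drives = drives - parity        # 8 drives hold data
raw_tb = data_drives * drive_tb      # 32 TB decimal, as drives are sold
tib = raw_tb * 10**12 / 2**40        # ~29.1 TiB, as the OS will report it

print(f"{raw_tb} TB decimal ~= {tib:.1f} TiB usable, before ZFS overhead")
```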

If you don't mind eBay, there are often $200-$400 deals on the 846, including this $400 brand new job that almost has my wallet pulling out of my pocket.

You are welcome to choose whatever hardware you would like though. I encourage simpler designs where possible, because it is easier to understand what is going on and what the potential failure modes are.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
And now you're starting to see why I said things like:

1. Read the stickies. We created them because people keep making the same mistakes and filling the forum with the exact same questions day in and day out.

2. Be careful with external enclosures. They can work well or they can not work at all. If in doubt, see #1.

There's a lot more to planning a very large zpool than meets the eye, and if you make an error it can be very expensive to recover from. Imagine if you had bought all your hardware and THEN realized you could only do 50MB/sec or so to/from your server. You'd probably be more than a little pissed off. The cost of fixing a bottleneck caused by eSATA (or USB), or of replacing hardware that isn't compatible or powerful enough, is quite high. Hence the stickies.

The truth is, the vast majority of us home and small-business users could use the exact same hardware and we'd all be completely happy with it. We'd have 100MB/sec network speeds to/from our zpool, high reliability, and low power usage. But everyone seems convinced they can come up with a plan that is faster, more reliable, and more energy conscious. The truth is that jgreco's hardware recommendation thread is based on more experience (and failures) from forum users than you and I could ever come up with on our own.
 