200TB+ system

Status
Not open for further replies.

starfish

Dabbler
Joined
Feb 24, 2013
Messages
10
Hello,

This is my first post to the forum. If this question has been answered elsewhere, sorry for the duplicate post.

We currently have 200TB of data spread across a few XFS filesystems. We provide offsite backup for customers with SAN volumes averaging ~80TB. We have filled up our current servers and are looking for a solution that can scale. The current multi-server approach doesn't scale well, which is why ZFS looks appealing.

We have hardware ready to go: multiple SuperMicro 36x3TB-bay servers with multiple 45x3TB expansion JBODs, plus LSI 9266-8i RAID cards with CacheVault.

Since this is for offsite backup, I was considering RAIDZ2 vdevs of 8-10 disks each. Deduplication looks nice, but at 2-5GB of RAM per TB of deduplicated data (for 200TB, that's on the order of 400GB-1TB of RAM just for dedup tables), it is something I will revisit at a later time.
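For reference, here's roughly the layout I have in mind from the command line; the pool name and device names below are just placeholders, not our actual hardware:

Code:
# one pool built from 9-disk RAIDZ2 vdevs (names are examples)
zpool create backup \
    raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 \
    raidz2 da9 da10 da11 da12 da13 da14 da15 da16 da17
zpool status backup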

The ZFS storage will be exported via NFS to our backup gateway servers (which mount both the customer SANs and the ZFS storage).
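On a gateway, I'd expect that to be a plain NFS mount along these lines (the hostname and paths are made up):

Code:
# on a backup gateway: mount the FreeNAS NFS export (names are examples)
mount -t nfs freenas01:/mnt/backup /mnt/zfs-backup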

The ZFS storage will be backed up to either another ZFS system or to traditional XFS servers.
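If we go ZFS-to-ZFS, I assume replication would be snapshot-based, something like this (host and dataset names are placeholders):

Code:
# initial full send, then incrementals (all names are examples)
zfs snapshot backup/customerA@base
zfs send backup/customerA@base | ssh replica zfs receive tank/customerA
zfs snapshot backup/customerA@daily1
zfs send -i @base backup/customerA@daily1 | ssh replica zfs receive tank/customerA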

With this requirement, is FreeNAS a good option? I explored Nexenta, but the solution is cost prohibitive.

Any advice is appreciated.

Thank you!
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875
Hi starfish,

It sounds like you have everything you need already, so why not load FreeNAS up and give it a whirl?

The only thing I would consider changing is the RAID card: swap it for a proper HBA. I don't know how your JBODs connect, but if you are using any internal-to-external converters, it might be best to get rid of them and use a card with an external port.

If you have the slot to support it you might want to look at something like this: http://www.lsi.com/products/storagecomponents/Pages/LSISAS9202-16e.aspx

You can find them on eBay.

Honestly, it might well be worth contacting iXsystems and seeing if you can't buy an hour or two of consulting time for this project.

-Will
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I will tell you this... anytime you compare "ready made" to building it yourself, the ready-made option will look "cost prohibitive".

I will tell you that ZFS/FreeNAS will do everything you want. If you build it yourself you can save some money, but you won't have a support contract or anything, so you will need to hire, train, or contract someone who can do the detailed, messy stuff for you.

The noob guide I wrote says this, but I'll say it again as a warning: if you think you know what you are doing but actually don't, you can believe you have a reliable, trustworthy setup and then watch it all come crashing down without any chance of recovery. This is not a problem with FreeNAS so much as a matter of knowing your stuff.
 

starfish

Dabbler
Joined
Feb 24, 2013
Messages
10
Thank you for the advice and the quick replies! I reached out to iXsystems via their contact form and will discuss the config and consulting time with them. Ideally, if the primary and backup systems are set up correctly from the beginning, the platform should be solid.

The Nexenta software is over $30k just to support 200TB raw. I like the idea of a supported system, but can't justify the $30k.

I'm looking forward to taking the step from XFS to ZFS!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah, it's a jungle out there. Pretty much I see companies falling into one of three categories when it comes to massive redundant storage (and often any large-scale project):

1. Pay for a contract.
2. Pay for an employee to be properly trained to design, build and maintain the systems you need.
3. Pay for it with lost data/large downtime.

In any case, you are paying. The question is, how much is each worth to your business model? Some companies could go under very quickly if customers lost their data.

Just look at Microsoft's recent blunder with their Azure service. They let their SSL cert expire. That's absolutely pathetic, and there is no excuse for that kind of incompetence from a multi-billion-dollar company. Which category did they fall into? I'd easily say category 3. Getting a new cert is stupid easy and can be done in less than 2 hours (usually less than 30 minutes). Surely there are people there who are trained to know better, but they weren't being paid to maintain the system (or at least not enough to make it worth their time). So it expired.
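For what it's worth, even checking expiry is a one-liner (the hostname is just an example):

Code:
# print the certificate expiry date for a host (hostname is an example)
echo | openssl s_client -connect example.com:443 2>/dev/null | openssl x509 -noout -enddate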

Good luck! I know you need it. The answers to your questions often range from "I don't know" to "wow.. that's expensive!" for anyone working on a large project. :P
 

starfish

Dabbler
Joined
Feb 24, 2013
Messages
10
So, I'm going to hire a consultant to set up the system and provide some ongoing support.

I did a basic install of FreeNAS to a USB stick. The server booted from it fine and saw 4 x 2TB drives (connected through an Areca RAID card). A test ZFS filesystem was created and everything worked! The next step was to expose all 45 x 3TB drives from the SuperMicro JBOD chassis to the server. I did this from the LSI BIOS, and rebooted to this error:

Code:
GEOM: da0: using the secondary instead -- recovery strongly advised.
Trying to mount root from ufs:/dev/ufs/FreeNASs1a
ROOT MOUNT ERROR:
if you have invalid mount options, reboot, and first try the following from
the loader prompt:

    set vfs.root.mountfrom.options=rw

.
.
.


After removing the SAS cables connecting the 45 drives, the system seems happy once again. I tried installing to a 2TB drive because the USB drive wouldn't boot after connecting the 45 drives. I confirmed the USB key is set as the primary boot device in the BIOS as well.

I'm sure the consultant will have a solution, but I wanted to bounce this off the community as well.
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
Maybe this will help. There's probably some delay needed while the controller waits for all those disks to power up and initialize.

http://forums.freebsd.org/showthread.php?t=19715

EDIT: You don't need to edit loader.conf; the suggested change can be made from the GUI under System -> Tunables.
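If you want to see the loader.conf form, the tunable that thread discusses is kern.cam.boot_delay (value in milliseconds; 10000 here is just an example):

Code:
# wait up to 10 seconds for the bus to settle before mounting root
kern.cam.boot_delay="10000"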

EDIT-2: I recall AHCI causing problems for some motherboards/controllers, and I think it might have been set not to load by default. You could also try adding this to loader.conf using the GUI method:

Code:
ahci_load="YES"
 

starfish

Dabbler
Joined
Feb 24, 2013
Messages
10
Thank you for the response. I forgot to mention that the JBOD drives appeared fine when I exported only the first four. After seeing that work, I exported all 45 and then had the panic when booting.

The line for boot_delay already existed and was set to 30000. I added the ahci_load line and will test when I get down to the datacenter later today.

The first few lines of the modified loader file are below:

Code:
#
# Boot loader file for FreeNAS.  This relies on a hacked beastie.4th.
#
autoboot_delay="2"
loader_logo="freenas"
#Fix booting from USB device bug
kern.cam.boot_delay="30000"
ahci_load="YES"

# FUSE (NTFS, etc) support
fuse_load="YES"
# GEOM support


Thanks again for the support!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Thank you for the response. I forgot to mention that the JBOD drives appeared fine when I exported only the first four. After seeing that work, I exported all 45 and then had the panic when booting.

How much RAM do you have?

Also, you used the word "export", which somewhat implies that you have partitions/zpools and data (or maybe just an empty storage location). If you do have partitions that are corrupt or not compatible with FreeBSD, that may be the reason for the kernel panic. There have also been known cases where drives previously used in a hardware RAID array cause kernel panics because the metadata the RAID controller wrote plays games with the OS. The only solution at that point is to wipe the disks (usually just wiping the first and last few GB is fine).
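If it comes to wiping, something along these lines from the shell will do it; the device name and the 4GB figure are examples, and obviously this destroys whatever is on that disk:

Code:
# DESTRUCTIVE: zero old RAID metadata at the start and end of da5
dd if=/dev/zero of=/dev/da5 bs=1m count=4096                    # first 4GB
SIZE_MB=$(( $(diskinfo da5 | awk '{print $3}') / 1048576 ))     # mediasize in MB
dd if=/dev/zero of=/dev/da5 bs=1m oseek=$(( SIZE_MB - 4096 ))   # last 4GB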

I'm 90% sure ahci_load="YES" is part of the default FreeNAS install. As far as I know, without AHCI you have no ability to use hard-disk native command queuing or hot-plugging. For servers, that also typically results in a performance hit that is noticeable under heavy loads. I'd think it would be crazy not to have that as a default, especially since the manual explicitly recommends AHCI and many people use the hot-plug feature.
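Easy enough to check from the console, though (I'm going from memory on the default, so verify on your build):

Code:
# list loaded kernel modules and grep for AHCI (-v also shows compiled-in code)
kldstat -v | grep -i ahci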
 

starfish

Dabbler
Joined
Feb 24, 2013
Messages
10
Sorry for the delay..

The fix for booting was to install FreeNAS to an internal drive. For some reason, when the LSI card with the 45 drives is connected, the motherboard won't let me boot off USB.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
That's really really weird!
 

starfish

Dabbler
Joined
Feb 24, 2013
Messages
10
I was pulling my hair out trying to see why it wouldn't boot from USB!

This server will be replaced with a new one built on an X9DRD-7LN4F motherboard with a lot more RAM. I'll keep the thread updated with any relevant info.

Thank you again to the community for helping me get up and running!

:)
 

peterpkcg

Dabbler
Joined
Apr 21, 2012
Messages
22
It'll be interesting to hear how FreeNAS handles this. I'm sure there are other large setups out there as well, but I just stumbled onto this thread.
So keep us updated!
 

starfish

Dabbler
Joined
Feb 24, 2013
Messages
10
Hi peterpkcg,

Ideally, FreeNAS can handle this as well as other ZFS servers do. I recently received a demo of over 400TB of SGI storage (SGI NAS), which runs Nexenta for ZFS on the back-end. The solution is very dense, with 72 x 3.5" drives in a 4RU server chassis, or 81 x 3.5" drives in the 4RU JBOD chassis.

To keep things interesting, SuperMicro recently announced a 72 x 3.5"-bay 4RU JBOD chassis of its own. It's nice to have options between enterprise and mid-tier.
 