BUILD FreeNAS build for Enterprise DR backup


DeanB

Dabbler
Joined
Sep 28, 2015
Messages
14
I'm a long-time reader of the forums and have only recently built a FreeNAS box. I've been very pleased with its feature set and performance, and I'm hoping to get some feedback on an enterprise-level build.

Usage
Deployment as an offsite disaster recovery storage server to store replicated file server backups from multiple remote offices.
- Replicated backups will be sent from 7+ remote offices using StorageCraft ShadowProtect
- Transfer protocol will be FTP
- DR Site will have 100Mb connection total bandwidth with individual remote offices having 10-20Mb connection to DR
- Current backup data size is 35TB and growing
- Daily transfer size is roughly 1-10GB, more often on the low end
- All data will be verified once a week through SMB by ShadowProtect MD5 checksum running on a Windows server

Test Build
1 x Fractal Design R5
1 x Supermicro X10SL7-F
1 x Intel Core i3-4370
1 x Supermicro 16GB SataDOM
4 x 8GB Crucial DDR3 ECC
1 x Seasonic SS-520FL2
8 x WD Green 1.5TB
Following Cyberjock's hardware guide, I built a test system to validate the solution and familiarize myself with the interface. Having used it for only a month or so, I'm already very happy with the results and am now researching a full-scale build.

Proposed Initial Build
1 x Supermicro SuperChassis 846XE2C-R1K23B
2 x Intel Xeon E5-2620v3
1 x Supermicro X10DRX
8 x 32GB MEM-DR432L-SL01-ER21
2 x Supermicro SataDOM 32GB (RAID1)
24 x HGST 8TB SAS (3x8 drive RAIDZ2)
2 x LSI 9207-8i
1 x APC UPS (model TBD)

Future Expansion Options
Add HBAs as required
Add Memory as required
Add Supermicro 4U JBOD chassis (45, 72, or 90 bays) as required
Add HDDs as required

Comments
- Primary concern is long term data reliability and integrity
- Performance requirements for the system are low as throughput is limited by the inbound connection speed
- I have contacted iXsystems and have received quotes for both FreeNAS and TrueNAS builds. The reason for a custom build is greater expandability (i.e. the X10DRX with 10 PCI-E 3.0 x4 slots) and lower cost
- No deduplication or encryption will be used
- The server can be shut down at any time for maintenance

Questions
- In the event of a bad drive, does FreeNAS have the capability to indicate the failed bay, or is the only solution to write down the serial number of each drive and which bay it is in?
- Is an SSD ZIL and L2ARC necessary for this usage scenario?
- The (2) 9207-8i cards will be connected to the SAS backplane in a redundant configuration. If one card fails, will FreeNAS notify about this?
- Are the pair of E5-2620v3 (2.4GHz/3.2GHz turbo) fast enough?


Any enterprise-level experience and feedback is much appreciated! Thank you!!
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
- DR Site will have 100Mb connection total bandwidth with individual remote offices having 10-20Mb connection to DR
- Current backup data size is 35TB and growing
- Daily transfer size is roughly 1-10GB, more often on the low end
You might need more bandwidth, but I'll defer to your analysis of the situation.

- All data will be verified once a week through SMB by ShadowProtect MD5 checksum running on a Windows server
Unnecessary. As long as the pool is healthy and regularly scrubbed, data is guaranteed to be identical to its original state, bit-for-bit. An initial validation might make sense, to ensure the data was stored correctly.
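If you want something to watch, a scrub can be kicked off and checked from the shell; "tank" here is just a placeholder for your pool's name:

zpool scrub tank
zpool status -v tank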

Also, sharing via multiple protocols can be really messy and is something best avoided like the plague. Read-only accesses, if possible, should be ok, though.

Test Build
1 x Fractal Design R5
1 x Supermicro X10SL7-F
1 x Intel Core i3-4370
1 x Supermicro 16GB SataDOM
4 x 8GB Crucial DDR3 ECC
1 x Seasonic SS-520FL2
8 x WD Green 1.5TB
Following Cyberjock's hardware guide, I built a test system to validate the solution and familiarize myself with the interface. Having used it for only a month or so, I'm already very happy with the results and am now researching a full-scale build.
Sure beats the "test systems" we tend to see around here and their Pentium 4s. :p

8 x 32GB MEM-DR432L-SL01-ER21
256GB of RAM... I think you can dial that down a bit, at least initially. 128GB would be a saner starting point (even 64GB should work, but 128GB is safer). Of course, more RAM can't hurt.

2 x Supermicro SataDOM 32GB (RAID1)
Sure, but we call them mirrors if ZFS is involved. RAID1 gives the impression of HW RAID or FakeRAID.
You can also consider larger, real SSDs you could offload the .system dataset to. No need for fancy stuff, though; even consumer-grade SSDs would do. Those DOMs can be quite expensive...

2 x LSI 9207-8i
What topology were you thinking of? A single one of them is plenty to deal with 24 drives. You could leave the second one for future, external expansion, when the time comes.

- Primary concern is long term data reliability and integrity
You're in the right place. :D

- Performance requirements for the system are low as throughput is limited by the inbound connection speed
- I have contacted iXsystems and have received quotes for both FreeNAS and TrueNAS builds. The reason for a custom build is greater expandability (i.e. the X10DRX with 10 PCI-E 3.0 x4 slots) and lower cost
iX is the go-to solution if you need good performance. Given what you told us of your needs, there shouldn't be a problem.

- No deduplication or encryption will be used
Good, but feel free to use default compression. It's basically free.

- In the event of a bad drive, does FreeNAS have the capability to indicate the failed bay, or is the only solution to write down the serial number of each drive and which bay it is in?
This is something you'd have to hack together. It should be easy to do with an LSI SAS controller and Supermicro backplane (I think sas2ircu can handle this).
Alternatively, dd'ing something from the defective disk (or all but the defective disk) to the bit bucket is a quick way to identify them.
If you can shut down the server to be sure, it's best to do so.
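As a rough sketch of the sas2ircu route (the controller number and enclosure:slot values here are just examples you'd substitute with your own):

sas2ircu list
sas2ircu 0 display
sas2ircu 0 locate 2:5 ON

The locate command should light the slot's locate LED on backplanes that support it; turn it back off with OFF when you're done.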

- Is an SSD ZIL and L2ARC necessary for this usage scenario?
SLOG? No, not unless you want to force these writes to be sync (even then, at the bandwidth we're talking about...), which I'm not even sure is possible with FTP.
L2ARC? You wouldn't gain anything unless you regularly accessed a subset of files. For a backup storage server, I don't see the benefit.
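If you want to double-check that nothing is forcing sync writes on the pool, the property is easy to inspect ("tank" again being a placeholder pool name):

zfs get sync tank

The default is standard, which leaves it up to the application; as noted above, FTP likely won't ask for sync writes anyway.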

- The (2) 9207-8i cards will be connected to the SAS backplane in a redundant configuration. If one card fails, will FreeNAS notify about this?
Well, this answers the above question. I have no idea. Few people around here, if any, have custom servers with redundant HBAs. I'm not sure FreeNAS will even play nicely or if it's a TrueNAS feature.

- Are the pair of E5-2620v3 (2.4GHz/3.2GHz turbo) fast enough?
Almost certainly. The biggest use for the second CPU would be more RAM, not more computing power.
 

DeanB

Dabbler
Joined
Sep 28, 2015
Messages
14
Thanks for the detailed info, Eric!

- Initial transfer of the 35TB will be done manually through hard drives.
- Link speeds will remain as they are; upgrading these enterprise-grade WAN links is very expensive, and the current links should be sufficient for the daily transfer amounts.
- The ShadowProtect software has built-in functionality for the remote replication process, which uses FTP, so that can't be changed.
- While the main benefit of ZFS is data integrity, the ShadowProtect Image Manager software automatically verifies backups, which I think can't hurt. At a minimum, I would run the verify tasks initially just to be sure, and can disable them if I feel they're unnecessary.
- SMB would be used because Image Manager would connect to the backups through a UNC path.
- Could you elaborate on why having multiple protocols is messy?
- Didn't realize FreeNAS would mirror the OS in software and that I shouldn't use HW/SW RAID, thanks!
- (2) SSDs is a good idea, thanks again!
- How do I tell the size of the system dataset? I tried du -hs .system on my test system but it said No such file or directory.
- Default compression will be used.
- Will look into sas2ircu. By dd'ing to the bit bucket, is this essentially writing a dummy file to the pool so that I can tell which HDD bay doesn't flash?
- Ok, won't use SLOG or L2ARC.
- The dual HBAs were spec'd because the iXsystems FreeNAS quote contained the same setup so I am assuming it should work. If we do go ahead with this build, I'll test this by pulling one of the HBAs!
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
- Could you elaborate on why having multiple protocols is messy?
There is no compatibility layer to allow several protocols and their incompatible locking mechanisms to work together in FreeNAS. Simultaneous accesses could corrupt files.

- How do I tell the size of the system dataset? I tried du -hs .system on my test system but it said No such file or directory.
Smallish. For all ZFS storage info, use the ZFS commands themselves. For instance, zfs list will list the datasets and sub-datasets. The GUI also shows this data.
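For example, assuming the system dataset lives on a pool called tank (substitute your own pool name):

zfs list -r -o name,used,avail tank/.system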

- Will look into sas2ircu. By dd'ing to the bit bucket, is this essentially writing a dummy file to the pool so that I can tell which HDD bay doesn't flash?
No, you'd need to access the drive directly, in the general case. Read from the drive to /dev/null.
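A minimal sketch, assuming the suspect drive shows up as da3 (substitute the real device name):

dd if=/dev/da3 of=/dev/null bs=1m

That keeps the drive's activity LED lit so you can spot the bay; Ctrl+C stops it once you've seen enough.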

- The dual HBAs were spec'd because the iXsystems FreeNAS quote contained the same setup so I am assuming it should work. If we do go ahead with this build, I'll test this by pulling one of the HBAs!
I'd research advanced SAS topologies before implementing this, though.
 

DeanB

Dabbler
Joined
Sep 28, 2015
Messages
14
Good to know about the multi-protocol compatibility issue. Will do some use-case testing.

I looked into the chassis manual, and on the models with a dual-port expander backplane there is failover support for multiple HBAs; it indicates this needs to be configured in the Linux MPIO software. I'm not sure if 'software' is referring to something that needs to be installed or if it's referring to the MPIO firmware on the HBA?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
Good to know about the multi-protocol compatibility issue. Will do some use-case testing.

I looked into the chassis manual, and on the models with a dual-port expander backplane there is failover support for multiple HBAs; it indicates this needs to be configured in the Linux MPIO software. I'm not sure if 'software' is referring to something that needs to be installed or if it's referring to the MPIO firmware on the HBA?
I honestly don't know. I have 0 knowledge of SAS topologies other than simple trees with a single controller as the root.
 
Joined
Jul 3, 2015
Messages
926
gmultipath is what you're after. Cable up your HBAs to your JBOD(s); on startup, all your disks will receive two da numbers, and gmultipath will join and label them (disk1, for example). One connection will be passive and the other active. Should one fail, the other will automatically kick in and you'll receive an email from the box telling you what's just happened. If you wish to be more specific about the label names and which disks are called what, slot the disks in one at a time after your system has fully booted and use the gmultipath label command.

Example:

gmultipath label -v FRED /dev/da0 /dev/da2

Do the multipath labelling before you add your disks to the pool.

I use meaningful names when labelling the disks, corresponding to each drive's physical location; for example, J1S5 means JBOD1, Slot 5. This helps with identifying disks during replacements.
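Once everything is labelled, gmultipath status will show each label with its providers and their active/passive state, which is a quick way to confirm the failover paths before you build the pool:

gmultipath status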
 

DeanB

Dabbler
Joined
Sep 28, 2015
Messages
14
Thanks Johnny!

So I glean that the HBA failover is all done automatically and no user setting or config is necessary to enable it?

From what you describe, once the disks are added, would I see 3 labels per disk (i.e. da0, da2, disk1)? So I just need to make sure to select the gmultipath disk# names and add them to the VDEV?
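So conceptually something like this, with hypothetical label and pool names (I'd do it through the GUI in practice):

zpool create tank raidz2 multipath/disk1 multipath/disk2 multipath/disk3 multipath/disk4 multipath/disk5 multipath/disk6 multipath/disk7 multipath/disk8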
 
Joined
Jul 3, 2015
Messages
926
You got it
 

DeanB

Dabbler
Joined
Sep 28, 2015
Messages
14
After some more research I've decided to change the chassis and motherboard to:
SuperChassis SC846BE2C-R1K28B
X10DRi-LN4+

In speaking with Supermicro about this build, they were concerned about using a SAS2 HBA (Avago 9207-8i) with the SAS3 backplane (BPN-SAS3-846EL). Does anyone have experience with this combo, and is it a legitimate concern?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
In speaking with Supermicro about this build, they were concerned about using a SAS2 HBA (Avago 9207-8i) with the SAS3 backplane (BPN-SAS3-846EL). Does anyone have experience with this combo, and is it a legitimate concern?
I would avoid it. Some users tried that and had random issues (missing drives and weird performance issues when handling multiple streams). I can say that a SAS3 HBA with a SAS3 backplane has been fine for me, though.
 

DeanB

Dabbler
Joined
Sep 28, 2015
Messages
14
Thanks depasseg, but from everything I've read, the 12Gb/s SAS3 HBAs are not yet fully supported. Most of this is based on reading Cyberjock's hardware guide, as well as not yet seeing any definitive posts that they are fully working.

I will scour the forums some more for info regarding the LSI 3008 based controllers.

For the freenas1 system in your sig, how long has that system been operational? Anything I should be aware of if I go the same route with the 3008?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
It's been running close to a year. No issues. I'm happy with it.

There aren't enough users to give a solid determination of "supported", but I can say that my setup has been working fine (a 3008 with a SAS3 backplane). I had a missing-drive issue when I attached a SAS2 enclosure (the SC846) to the SAS3 external ports, so I got a SAS2-based LSI card and it's been working fine. Every issue I've seen on this forum has been with a SAS2-to-SAS3 configuration. I haven't seen any SAS3-to-SAS3 issues reported.
 

DeanB

Dabbler
Joined
Sep 28, 2015
Messages
14
After further emails with Supermicro, I've decided to switch the chassis again, this time to an SC846BE26-R1K28B, which has a SAS2 backplane. However, the tech mentioned that the hard drives should be SAS2 as well to ensure compatibility.

Unfortunately, I do want to stick with the SAS3 HGST 8TB models due to size/reliability/power consumption. Is using a SAS3 hard drive in a SAS2 backplane/HBA cause for any real concern?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Is that a dual-ported drive? If not, you might want to save some cash and go with the BE16 model.

I haven't heard of many SAS users on here. I know folks are using a multitude of SATA drives with that backplane (well, the single-ported variant, the BE16) without issue.
 