BUILD Backup storage build

Status
Not open for further replies.

mketek82

Cadet
Joined
Sep 24, 2014
Messages
3
Hello,

We have been running FreeNAS at the office on old commodity hardware over the years and it has run very well. The purpose of this build is to provide more storage capacity, both in disk space now and in drive slots available over time. There will be some redundancy via snapshots and ZFS send, so we can work around failures if need be.

Our nightly backups average 30-50GB, consisting of VMware CBT backups and Symantec BE2014 jobs servicing Windows file servers. All traffic uses NFS. With that in mind, our old 12TB FreeNAS system is starting to age and can no longer meet our retention needs.

Parts

  • SuperChassis 846BE26-R920B 24-bay chassis, dual power supply
  • MBD-X9DR7-TF+-O SuperMicro LGA2011 dual socket board
  • 2x SNK-P0050AP4 SuperMicro heatsink for CPU
  • 2x Intel Xeon E5-2609 v2 2.5GHz quad core; not high performance, but gives us memory expandability
  • 8x Hynix 16GB ECC PC3-12800 1600MHz DDR3 SDRAM, 128GB total RAM with room to expand later
  • IBM M1015 to connect to the chassis backplane
  • 24x WD40EFRX WD Red 4TB SATA 6Gb/s
  • SATADOM DESMV-16GD07SC1DC for the FreeNAS OS
  • Seagate ST240FP0021 600 Pro series 240GB MLC for ZIL
  • Intel DC S3700 Series SSDSC2BA200G301 200GB for SLOG
Configuration
I'm not certain about the volume configuration at the moment and recognize there are a couple of ways to do it. Right now the storage on our old commodity setup is becoming limited (4x 3TB in RAIDZ1, ugh, I know!) and failure prone, so having 24 disks at our disposal will be a relief. As stated before, snapshots will be replicated to the old commodity server once it's out of production and can be reconfigured.
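In practice that replication is just periodic snapshots piped over SSH. A rough sketch of what it looks like from the shell - "tank/backups" and "oldnas" are placeholder names for our pool and the old server, and FreeNAS can also drive this with its built-in replication tasks:

    # take a snapshot on the new box
    zfs snapshot tank/backups@2014-09-24
    # first run: full send to the old commodity server
    zfs send tank/backups@2014-09-24 | ssh oldnas zfs receive -F backup/backups
    # later runs: incremental sends of only the changes since the last snapshot
    zfs snapshot tank/backups@2014-09-25
    zfs send -i tank/backups@2014-09-24 tank/backups@2014-09-25 | ssh oldnas zfs receive backup/backups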

  • 2 vdevs of 10x 4TB drives in raidz2, for 58.2TB usable
  • 4 vdevs of 6x 4TB drives in raidz2, for 58.4TB usable
  • 1 vdev of 24x 4TB drives in raidz3, for 76.4TB usable
  • or a couple of different raidz* mixes to get some reliable pools and some performance pools
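For reference, here's roughly what the first two layouts look like from the shell, with the usable-space math. Device names are placeholders, and the FreeNAS volume manager builds the same thing from the GUI:

    # option 1: 2 vdevs of 10 disks in raidz2
    #   2 vdevs x (10 - 2 parity) x 4TB = 64TB raw, ~58.2TB usable
    zpool create tank \
        raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
        raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19

    # option 2: 4 vdevs of 6 disks in raidz2
    #   4 vdevs x (6 - 2 parity) x 4TB = 64TB raw, same usable space, more IOPS
    zpool create tank \
        raidz2 da0 da1 da2 da3 da4 da5 \
        raidz2 da6 da7 da8 da9 da10 da11 \
        raidz2 da12 da13 da14 da15 da16 da17 \
        raidz2 da18 da19 da20 da21 da22 da23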
 

L

Guest
I'm not quite following what you are doing with this box. Are you using it to back up the VMs and Symantec? When I hear the word backup, I always think first about network config. Are you pushing the 30-50GB over a network to this box? Nightly? Hourly? Having a really fast disk subsystem doesn't help if you are running it over 1x 1GbE. What is your network config?

If you are running this as storage for the VMs and CIFS, then I might architect two different pools. The VMs will probably want more performance, so I would mirror with as many vdevs as possible, putting the cache and log on that pool. For the Windows files I would probably go raidz2 with a couple of vdevs. I really like to build systems where the head node is separate from the JBOD; that way, in 18 months when everything is twice as fast and half as expensive, you can replace just the head node or add additional JBODs to increase storage capacity.
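Something like this, roughly - device names are made up, adjust to your hardware:

    # performance pool for the vm's: striped mirrors, plus the log and cache ssd's
    zpool create vmpool \
        mirror da0 da1 \
        mirror da2 da3 \
        mirror da4 da5 \
        log da20 \
        cache da21

    # capacity pool for the windows files: a couple of raidz2 vdevs
    zpool create filepool \
        raidz2 da6 da7 da8 da9 da10 da11 \
        raidz2 da12 da13 da14 da15 da16 da17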
 
L

Guest

Actually, I just read back through and saw that you are running the Symantec server against this box... I was thinking symmetric. So the next question would be: how often and when do you run backups?
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
mketek82 said:
Our nightly backups average 30-50GB, consisting of VMware CBT backups and Symantec BE2014 jobs servicing Windows file servers. All traffic uses NFS. With that in mind, our old 12TB FreeNAS system is starting to age and can no longer meet our retention needs.

I was just reading through the parts list and noticed the mention of BackupExec. I have the misfortune of using that software at one location. How have you integrated it into your workflow with FreeNAS? Is FreeNAS a target for BackupExec?
 

mketek82

Cadet
Joined
Sep 24, 2014
Messages
3
anodos,
You're right, what I meant was a ZIL and L2ARC. I have been looking for a vendor that carries that exact S840Z model, but the ones I found were out of stock, so if you have one please let me know. It's hard to find a small ZIL device with specs that high. Just saw your second reply - yes, BackupExec can be a pain, but it's OK for the time being. You are spot on: FreeNAS is the target, with the BE2014 server having NFS mount capability.
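For anyone curious, on the BE2014 side it's just the Windows Client for NFS mounting the FreeNAS export - something like the following, with a made-up export path:

    rem on the BE2014 server, with the "Client for NFS" feature installed
    mount -o anon \\freenas\mnt\tank\backups Z: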

Linda,
Good question. I am using FreeNAS as the NFS storage target for VMware's backups, which run nightly doing fulls and incrementals using CBT. The virtual appliance driving these backups is connected via NFS. After hours it's between 30-50GB over an eight-hour period on average; this can change for various reasons, be it volume expansions, clones, or moves resetting the CBT. Symantec also connects via NFS and uses it as a storage target. The network config is 4x 1GbE LACP to our Cisco switch, with the interfaces configured for EtherChannel. We should be switching to 10GbE in a couple of months. I also see your other reply - we run a mix of monthly and weekly fulls that don't step on each other, and incrementals everywhere else.
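For reference, under the hood FreeNAS builds that as a FreeBSD lagg in LACP mode - roughly the following, though the interface names and address here are placeholders and we actually set it up through the GUI:

    # 4x 1GbE aggregated with LACP; the matching Cisco ports are in an
    # etherchannel with channel-group mode active
    ifconfig lagg0 create
    ifconfig lagg0 laggproto lacp laggport igb0 laggport igb1 laggport igb2 laggport igb3
    ifconfig lagg0 inet 192.168.1.10/24 up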

Thank you for the questions and suggestions, much appreciated!
 
L

L

Guest
Just my general opinion: backups are by their nature slow. Having one big raidz2 with all your drives might suffice, or two sets. You might experiment with a small ZIL device to see if you can reduce your backup window; it should help. A cache SSD is not going to help your backup workload, but it might help on restores if they are small and multiple clients are using the same backup - even then it might end up slowing the backup workload down.

One thing people don't realize about the ZIL is that it is usually really tiny - maybe hundreds of MB. You can see it by running zilstat in the shell. I actually would love to see what zilstat looks like with this load when you bring it online.
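If you want to try it, the first number is the sampling interval in seconds and the second is how many samples to take:

    # sample ZIL traffic every 5 seconds, 12 times, while a backup is running
    zilstat 5 12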

I have a customer with 48 drives all hanging off a single 6Gb SAS controller, and they are getting 4.5Gb/sec on 10GbE. The 10GbE will probably close your backup window faster than anything else.
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
I'd certainly NOT do one 24-disk vdev, even at raidz3.

Purely for backups, which generally aren't too IOPS-intensive, I'd do 2 vdevs of 11 drives in raidz3 and keep two disks as hot spares that can be used as replacements without needing physical access.

You might also consider 4 vdevs of 6 disks in raidz2 for the same overall storage space. That would do a little better IOPS-wise, but gives up the online spares.
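From the shell, my preferred layout would look something like this (made-up device names):

    # 2 vdevs of 11 disks in raidz3, plus 2 hot spares
    #   2 x (11 - 3 parity) x 4TB = 64TB raw, and spares rebuild without a trip to the rack
    zpool create tank \
        raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 \
        raidz3 da11 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 \
        spare da22 da23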
 

mketek82

Cadet
Joined
Sep 24, 2014
Messages
3
@Linda Kateley
I was looking for a smaller ZIL drive, but the S840Z seems pretty hard to find. I was able to find another drive from HGST, the HUSMH8010BSS200. Comparing the two, the HGST really smokes the ST240FP0021, so I may go that direction. I also need to take into consideration that the IBM M1015 controller and the X9DR7-TF+-O onboard SAS are limited to SAS2 speeds. Maybe swap out the IBM for an LSI SAS3 HBA - I need to do more research there.
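In case it helps anyone else, the LSI utility that ships with FreeNAS can show what link rate the controller and each drive actually negotiated:

    # list the LSI SAS2 controllers, then dump controller and drive details
    # (look at the negotiated link rate per device)
    sas2ircu list
    sas2ircu 0 display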

HGST HUSMH8010BSS200 / 0B31069 stats - http://www.hgst.com/solid-state-storage/enterprise-ssd/sas-ssd/ultrastar-ssd800mhb#
  • Interface: SAS 12Gb/s
  • Capacity: 100GB
  • NAND: MLC
  • Max read throughput: 1,100MB/s (seq. 64K)
  • Max write throughput: 765MB/s (seq. 64K)
  • Max read IOPS: 130,000 (random 4K)
  • Max write IOPS: 110,000 (random 4K)

ST240FP0021 stats - http://www.newegg.com/Product/Product.aspx?Item=N82E16820248035
  • Interface: SATA III
  • Capacity: 240GB
  • NAND: MLC
  • Max read throughput: 520MB/s (seq. 64K)
  • Max write throughput: 450MB/s (seq. 64K)
  • Max read IOPS: 85,000 (random 4K)
  • Max write IOPS: 11,000 (random 4K)
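Whichever drive I pick, I'll sanity-check it from the shell before trusting the spec sheet - something like this, with a placeholder device name:

    # quick, non-destructive transfer test on the candidate log device
    # note: this only shows read throughput; a SLOG really lives or dies on sync write latency
    diskinfo -t /dev/da24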

@titan_rw
I'm with you on the 4 vdevs of 6x 4TB in Z2. With that many disks I will keep a couple of offline spares on site, and if I lose more than 3 disks in one vdev I'd be surprised (and busy grabbing the latest replicated snapshot ;) ).
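When a disk does die, swapping in one of those spares is just (disk names hypothetical):

    # check pool health, swap the failed disk for the spare, then watch the resilver
    zpool status tank
    zpool replace tank da7 da24
    zpool status tank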

Thanks
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
That HGST drive looks sweet. It looks like a PCIe version is coming as well. I don't see any stock available, only announcements. If you grab one, let us know where.
 