BUILD Hardware check for big FreeNAS box

Status
Not open for further replies.

KevinM

Contributor
Joined
Apr 23, 2013
Messages
106
I'm looking to build a couple of 36-bay Supermicro FreeNAS boxes. I'm interested in stability first, obviously, followed by performance, followed by capacity. Price cap will be about 50k for both the production and backup boxes.
  • Supermicro SSG-6047R-E1R36L Barebone 4U SuperStorage for E5-2600v2 $2198.87 $2198.87
  • 2 x Intel 2.6GHz E5-2630 v2 CPU $613.55 $1227.10
  • 16 x Samsung M393B2G70QH0-CMA $110.51 $1768.16
  • 2 x Chelsio T520-CR dual port 10 GbE $564.95 $1129.90
  • 4 x SIIG 3M Twinax cable $64.24 $256.96
  • 2 x Intel S3710 200GB (ZIL mirror) $261.29 $522.58
  • 2 x Intel S3710 400GB (L2ARC) $514.18 $1028.36
  • 2 x Supermicro internal drive bay $19.45 $38.90
  • 4 x SanDisk Cruzer Fit CZ33 16GB USB 2.0 Flash Drive $8.19 $32.76
Total less hard drives: $8203.59
  • 36 x HGST Hard Drive 6TB SAS-600 7200 RPM 3.5" $398.16 $14333.76
  • Total: $22537.35
or
  • 36 x HGST Hard Drive 8TB SAS 12Gb/s 7200RPM 3.5in $430.56 $15500.16
  • Total: $23703.75
History: A couple of years ago I built a couple of 6 x 6 RAIDZ2 FreeNAS boxes on Supermicro E1R36Ns, after repurposing the RAID cards and installing LSI HBAs. They've been very reliable. However, while the stated use case was archival storage, once word got out that a small sea of storage was available they've been used primarily as datastores in VMware and for direct-attach iSCSI. The production system currently has around 100 virtual machines running on it, and about 3-4 dozen iSCSI and NFS LUNs. Now that I know the actual use case, I'd like to get more performance out of these systems. We also have 10Gb Ethernet in our data center, so I'll be making use of that as well.

Questions:

From my research it seems like SAS3 is still problematic on FreeBSD 9. Is this still the case? If I use SAS2 HBAs and expanders, would it be ok to use SAS3 drives? At this point they're more plentiful in larger sizes.

Has anyone built a larger system using direct-attach SAS? I am assuming this will help performance but I have no experience with it. I am going to call Supermicro when they open to see what they have available.

I did not use compression on the first two boxes, but from what I've learned it generally does not impact performance. If so, I'd like to enable it from the beginning this time.

What would be the best balance between price and performance with 36 drives? For example, how sane would it be to create a pool with 18 ZFS 2-way mirrors? Would this be faster than 6 6-drive RAIDZ2 vdevs? If not these, what layout would be best if we're favoring performance over capacity?

I am seeing more memory use on the backup box, but I still plan on 256 GB memory for both boxes. Supported memory is not very expensive right now. That said, would the two 400GB L2ARC SSDs still be worthwhile?

200GB is pretty ridiculous for a ZIL, but that's the smallest Intel S3710 available. These drives are going to be inside the case, so I want something that has a good track record for reliability.

That's all I can think of off the top of my head. Any suggestions would be most welcome.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Price cap will be about 50k for both the production and backup boxes
Dang, I wanna come hang out with you... :)
@anodos , someone is looking to spend money "on a budget"; stop by will ya? ;)

While I have no real useful advice (above my pay scale), this is my way of watching this thread since it intrigues me. BTW, just read your avatar and it cracked me up. :D
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
While you might not want to be on the bleeding edge, depending on the timing you might want to look at FreeNAS 9.3.2. Think of it as FreeNAS 9.3 on FreeBSD 10.3. It's currently available as a nightly.

jkh announced it yesterday.

Sent from my iPhone using Tapatalk

edit: fix spelling mistakes
 
Last edited:

KevinM

Contributor
Joined
Apr 23, 2013
Messages
106
While you might not want to be on the bleeding edge, depending on the timing you might want to look at FreeNAS 9.3.2. Think of it as FreeNAS 9.3 on FreeBSD 10.3. It's currently available as a nightly.

jkh announced it yesterday.
Well, that is interesting news, especially since I won't be doing the buy until mid-to-late April. I had a look at the release notes and I see FreeBSD 10.1 has a new mpr driver with support for LSI's 12Gb/s HBAs, including the IT-mode SAS3008 in the Supermicro E1CR36L.

Does anyone have experience with the mpr driver from the FreeBSD 10.x codebase? If it's reliable I would much prefer the E1CR36L over the E1R36L.
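(Mostly a note to myself: once a box is up, I assume I can confirm the controller actually attached via mpr with something like the commands below; the unit number 0 is just a guess.)

Code:
dmesg | grep -i mpr
sysctl dev.mpr.0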
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Cherry picking the points I feel I can address with any confidence:
I did not use compression on the first two boxes, but from what I've learned it generally does not impact performance. If so, I'd like to enable it from the beginning this time.
There's no reason not to leave the default (LZ4) enabled.
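If you want to confirm the setting and see what it's buying you later on, something like this should do it (the pool name is just a placeholder):

Code:
zfs get compression,compressratio tank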
how sane would it be to create a pool with 18 ZFS 2-way mirrors? Would this be faster than 6 6-drive RAIDZ2 vdevs?
For block storage, the advice in these forums is always to go with mirrors: more vdevs = more IOPS.
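To make the trade-off concrete, here's a rough sketch of 12 disks laid out both ways; device names are placeholders, and on FreeNAS you'd normally build this through the volume manager rather than at the CLI:

Code:
# 6 two-way mirrors: 6 vdevs worth of IOPS, ~50% of raw space usable
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7 mirror da8 da9 mirror da10 da11

# alternative for the same 12 disks, 2 x 6-disk RAIDZ2: 2 vdevs worth of IOPS, ~66% usable
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 raidz2 da6 da7 da8 da9 da10 da11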
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
HGST Hard Drive 6TB SAS-600 7200 RPM 3.5"
HGST Hard Drive 8TB SAS 12Gb/s 7200RPM 3.5in
If it is not too much trouble, could you provide links for these? Not that I don't believe you; it's just that I have never seen a true SAS hard drive in those sizes...
 

KevinM

Contributor
Joined
Apr 23, 2013
Messages
106
If it is not too much trouble, could you provide links for these? Not that I don't believe you; it's just that I have never seen a true SAS hard drive in those sizes...
These are nearline SAS drives, i.e. SATA-class drives with SAS interfaces.

HGST Hard Drive 6TB SAS-600 7200RPM 3.5in
http://www.wiredzone.com/hgst-components-hard-drives-enterprise-hus726060als640-10024256

HGST Hard Drive 8TB SAS 12Gb/s 7200RPM 3.5in
http://www.wiredzone.com/hgst-components-hard-drives-enterprise-huh728080al5204-10023978
 

KevinM

Contributor
Joined
Apr 23, 2013
Messages
106
Cherry picking the points I feel I can address with any confidence:

There's no reason not to leave the default (LZ4) enabled.

Agreed.
For block storage, the advice in these forums is always to go with mirrors: more vdevs = more IOPS.
I'm considering doing multiple pools, maybe 12 drives in striped mirrors for iSCSI, then 24 drives in 3 or 4 RAIDZ2 vdevs. I'm still not sure how much I trust mirrors, given the kind of resilvering times I'd be seeing with 8TB drives.
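Back-of-the-envelope capacity for that split with 8TB drives (ignoring TB vs TiB, ZFS overhead, and the keep-block-storage-well-under-full guidance):

Code:
# 12 drives as 6 two-way mirrors:  6 x 8TB      = ~48TB raw usable
# 24 drives as 3 x 8-disk RAIDZ2:  3 x 6 x 8TB  = ~144TB raw usable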
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Thanks. Not that I have ever used NL-SAS drives myself; but are they really all they are cut out to be?
 
KevinM

Contributor
Joined
Apr 23, 2013
Messages
106
Thanks. Not that I have ever used NL-SAS drives myself; but are they really all they are cut out to be?
I expect this will be the last NAS we'll get that has spinning drives. But until then, nearline SAS drives are big and cheap, and in my limited experience they can be very reliable. I have two Supermicro E1R36N boxes with Seagate ST33000650SS 3TB SAS drives, 72 drives total, and so far none have failed in several years of continuous use. I'd use them again if they weren't so tiny.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
My NetApp has a 24-bay expansion shelf with 3TB NL-SAS drives; they've been a good compromise between storage and IOPS. I wouldn't put our production SQL databases on it, but it's fine for general use. I thought about buying the WD RE drives over the SE for our FreeNAS, but I didn't need dual-channel for the workload.

I guess I shouldn't use my phone as it didn't save the rest of what I typed...
NL-SAS is required if you are running an HA system, as both controllers need to connect to the drives, which requires the dual-channel SAS interface. If your chassis doesn't support dual controllers, you could use a second HBA for a redundant connection to the NL-SAS drives.

With 256GB of RAM you can go bigger on the L2ARC; that would help with the VM workload.

Edit: I hit reply to @Mirfster's question about NL-SAS but it didn't quote.
 
Last edited:

KevinM

Contributor
Joined
Apr 23, 2013
Messages
106
With 256GB of RAM you can go bigger on the L2ARC; that would help with the VM workload.
It's probably not excessive with 288TB raw. At any rate the wallet is open so I'm getting them.

I'm just wondering, though: if I end up going with two pools, I'd like to carve up the two 200GB and two 400GB SSDs so that I can host the ZIL and L2ARC for both pools. I'm pretty sure there isn't a way to do this in the GUI, but I'm assuming something might be possible from the command line. Can this be done without hosing up the GUI?
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
It's probably not excessive with 288TB raw. At any rate the wallet is open so I'm getting them.

I'm saying you might want to increase the size of the SSDs for the L2ARC cache. Generally speaking you can get away with about a 5:1 ratio of L2ARC to RAM, so with 256GB of RAM you could reasonably run on the order of 1TB of L2ARC.

I'm just wondering, though: if I end up going with two pools, I'd like to carve up the two 200GB and two 400GB SSDs so that I can host the ZIL and L2ARC for both pools. I'm pretty sure there isn't a way to do this in the GUI, but I'm assuming something might be possible from the command line. Can this be done without hosing up the GUI?

You can add the log and cache through the CLI without breaking the GUI.

Code:
gpart create -s GPT da2  # 200GB SSD (SLOG)
gpart create -s GPT da3  # 200GB SSD (SLOG)
gpart create -s GPT da4  # 400GB SSD (L2ARC)
gpart create -s GPT da5  # 400GB SSD (L2ARC)
# first partition on each SSD (p1), used by tank1
gpart add -t freebsd-zfs -a 4k -s 16G da2
gpart add -t freebsd-zfs -a 4k -s 16G da3
gpart add -t freebsd-zfs -a 4k -s 200G da4
gpart add -t freebsd-zfs -a 4k -s 200G da5
# second partition on each SSD (p2), used by tank2
# (note: 2 x 200G won't quite fit on a 400GB drive; adjust sizes to the actual SSDs)
gpart add -t freebsd-zfs -a 4k -s 16G da2
gpart add -t freebsd-zfs -a 4k -s 16G da3
gpart add -t freebsd-zfs -a 4k -s 200G da4
gpart add -t freebsd-zfs -a 4k -s 200G da5
zpool add tank1 log mirror da2p1 da3p1
zpool add tank1 cache da4p1 da5p1
zpool add tank2 log mirror da2p2 da3p2
zpool add tank2 cache da4p2 da5p2

That would be an example of how to do it, assuming the correct drive IDs and partition sizes for the drives you end up with.
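Once they're added, a quick sanity check that the log mirrors and cache devices landed on the right pools (pool names are examples, as above):

Code:
zpool status tank1
zpool status tank2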

Edit: I needed to clean up the partitions
 
Last edited:

KevinM

Contributor
Joined
Apr 23, 2013
Messages
106
I'm saying you might want to increase the size of the SSDs for the L2ARC cache. Generally speaking you can get away with about a 5:1 ratio of L2ARC to RAM, so with 256GB of RAM you could reasonably run on the order of 1TB of L2ARC.
Ah, understood. I'll get two 800GB S3710s for the L2ARC.
You can add the log and cache through the CLI without breaking the GUI.

Code:
gpart create -s GPT da2  # 200GB SSD (SLOG)
gpart create -s GPT da3  # 200GB SSD (SLOG)
gpart create -s GPT da4  # 400GB SSD (L2ARC)
gpart create -s GPT da5  # 400GB SSD (L2ARC)
# first partition on each SSD (p1), used by tank1
gpart add -t freebsd-zfs -a 4k -s 16G da2
gpart add -t freebsd-zfs -a 4k -s 16G da3
gpart add -t freebsd-zfs -a 4k -s 200G da4
gpart add -t freebsd-zfs -a 4k -s 200G da5
# second partition on each SSD (p2), used by tank2
# (note: 2 x 200G won't quite fit on a 400GB drive; adjust sizes to the actual SSDs)
gpart add -t freebsd-zfs -a 4k -s 16G da2
gpart add -t freebsd-zfs -a 4k -s 16G da3
gpart add -t freebsd-zfs -a 4k -s 200G da4
gpart add -t freebsd-zfs -a 4k -s 200G da5
zpool add tank1 log mirror da2p1 da3p1
zpool add tank1 cache da4p1 da5p1
zpool add tank2 log mirror da2p2 da3p2
zpool add tank2 cache da4p2 da5p2

That would be an example of how to do it, assuming the correct drive IDs and partition sizes for the drives you end up with.

Edit: I needed to clean up the partitions
Excellent. I remember reading here about issues people were having due to making changes under the covers, but that may have been back in the pre-9.x days.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
I remember reading here about issues people were having due to making changes under the covers, but that may have been back in the pre-9.x days.
Adding vdevs to a zpool, or additional zpools, through the CLI is going to break things in the GUI, but you can add the SLOG and L2ARC manually.
 
Joined
Nov 12, 2015
Messages
2
Hi KevinM.
First off, SAS2 will work with SAS3 drives, but I believe the unit will have a SAS2 bottleneck: with one expander for 36 drives, you will be stuck at 6Gbps. You can later upgrade the backplane to 12Gbps (SAS3) for about $900. They have 12Gbps chassis options; you have to dig for them on their website. FYI, I love the X10DRH-CT motherboard (2x 10G NICs and a built-in SAS3 LSI 3108), though that RAID controller doesn't have an HBA mode currently.

Stick with the HGST helium drives; they run 4x faster than the HGST Ultras and WD RE4 and are about 10°C cooler too.
My 2 cents.
 
Joined
Nov 12, 2015
Messages
2
Yes, it does have two expanders. But the RAID controller will cap you at about 6GB/s, with the bus it's on being about 8GB/s.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yes, it does have two expanders. But the RAID controller will cap you at about 6GB/s, with the bus it's on being about 8GB/s.

What RAID controller? That platform has an LSI 2308 in IT mode. It was like they *made* it for ZFS. :smile:

But assuming you meant "HBA" and mistyped "RAID controller"... That would seem to be a function of how you choose to connect the expanders to the host, rather than anything inherent. You can certainly attach to two separate HBAs if there's a PCIe bus bandwidth issue. When you're spending this much on a storage server, no one will blink at paying a few hundred extra for an additional HBA and maybe a cable.
 