Hardware for Supermicro SuperServer 6047R-E1R36N?


KevinM

Contributor
Joined
Apr 23, 2013
Messages
106
I am fairly new to FreeNAS and researching hardware for a new SAN using the Supermicro SuperServer 6047R-E1R36N. Here's my list so far:

16x Samsung 8GB DDR3-1600 ECC
36x Seagate Constellation ES.2 3TB 3.5" SAS 7200 RPM
2x Intel Xeon E5-2620 2.00GHz 15MB 6-core
1x Supermicro SuperServer 6047R-E1R36N (Xeon E5)
2x Supermicro SC847 internal drive
1x LSI Logic SAS 9207-8i controller card

The E1R36N comes with an LSI controller card, but according to this post the LSI 9207-8i will let you control individual disks. Having not seen either of these cards in action, I am wondering how I'll be able to identify and swap out bad drives when the card has only 8 hardware ports and 36 drives hanging off an expander backplane. Does anyone have experience with or recommendations for this kind of setup?

This SAN will have 108TB raw and 128 GB memory. Does this seem like a reasonable amount of memory for ZFS and RAIDZ2 if we are not using deduplication or compression? I could also get 4 TB drives for not too large a premium per TB.

I'm reading the FreeNAS guide, and for RAIDZ2 I'll want 2^n + 2 drives per vdev. What would be the recommended layout for 36 drives? Three vdevs with 8 data, 2 parity, and 2 spares each?

I've run into conflicting reports about the usefulness of SSDs for a dedicated ZIL (SLOG). Should I use a couple of mirrored SSDs, and if so, what size should I be looking at for this amount of storage?

In general, I would like to remove as many surprises as possible before opening the wallet. Any advice would be greatly appreciated.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
We've got a variant of this on the bench: an X9DR7-TF+ integrated into an SC846-BE26, with the backplane on an M1015 in IT mode. Identifying drives by serial number (labelmaker time!) works fine of course, and I do want to see whether the SES drive-failure lights are, or can be made to be, useful - I just haven't had time to play yet.

For purposes of attaching a backplane to a SAS controller by way of an SFF-8087-to-SFF-8087 cable, you may be better off thinking of the SAS HBA as having two SFF-8087 ports, and just remembering that each one is a four-lane cable. Functionally, you really just have a high-speed 24Gb/s interconnect to the SAS expander on the backplane (backplanes, plural, in the case of the 847). So you connect one to the front backplane and one to the back backplane, but for the most part it just appears as though your HBA has 24 (or 36, in your case) ports.
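If it helps to see where that 24Gb/s figure comes from and how it gets shared, here's a quick back-of-the-envelope sketch in Python (nominal SAS2 link rates only, ignoring protocol overhead; the 24-drive count is just a front backplane used as an example):

[CODE]
# Rough bandwidth math for one SFF-8087 (4-lane) SAS2 connection feeding
# an expander backplane. Nominal link rates, not measured throughput.

lanes_per_sff8087 = 4
sas2_gbps_per_lane = 6                                # SAS2 line rate per lane

link_gbps = lanes_per_sff8087 * sas2_gbps_per_lane    # 24 Gb/s total
link_gbytes = link_gbps / 8                           # ~3 GB/s

drives_behind_expander = 24                           # e.g. a front backplane
per_drive_mbytes = link_gbytes * 1000 / drives_behind_expander

print(f"Aggregate link: {link_gbps} Gb/s (~{link_gbytes:.0f} GB/s)")
print(f"Shared evenly: ~{per_drive_mbytes:.0f} MB/s per drive")
[/CODE]

That works out to roughly a single 7200 RPM drive's worth of sequential throughput per disk even when everything is streaming at once, so the shared link is usually not the first bottleneck.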

What's the point of the second proc, especially if you're not planning on compression?

I assume you mean MCP-220-84701-0N. At least in the 846, these make things veeeeeeeeery tight in the area of the power supply connector. Looks like in the 847, they've actually buried it underneath the mainboard tray. You might not like that for serviceability requirements.

In general, I think the 36-drive chassis may be a bad idea. The drives in back will tend to get cooked just a little bit more than the drives in front - especially the drives directly underneath the CPU, which benefit from the heat from the drives up front, their own heat, AND the heat of the CPU above. If there isn't a compelling reason to go with such a dense chassis, your hard drives are likely to experience a more pleasant environment in a more traditional storage server chassis like the 846. An 846 with 4TB drives is 96TB. An 847 with 3TB drives is 108TB. Just a thought.

Oh and the 846 has a 2x2.5" rear sled option, nice.

As for the amount of memory, ZIL, and other factors, you haven't provided any clues as to what the system will be used for. What's your expected working set size, for example? The 1GB-per-TB rule is likely a very loose rule once you get up to a reasonably-resourced system, and I could imagine that 64GB might be more than enough, or 256GB would be very tight, almost entirely depending on the workload.
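Just to put numbers next to that rule, purely as illustration (this is the arithmetic, not a recommendation):

[CODE]
# Illustrative only: the loose "1 GB RAM per TB of storage" rule of thumb
# applied to the proposed build, next to the workload-driven range above.

raw_tb = 36 * 3                    # 36 x 3TB drives = 108 TB raw
rule_of_thumb_gb = raw_tb * 1      # ~108 GB by the loose rule

print(f"Raw capacity: {raw_tb} TB")
print(f"1 GB/TB rule: ~{rule_of_thumb_gb} GB of RAM")
print("Workload-driven range discussed above: 64 GB to 256 GB")
[/CODE]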

As for RAIDZ2, you could do RAIDZ3 if you can afford a little less performance in exchange for greater redundancy, and work that out as three sets of 11 drives plus three warm spares.
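If it helps to compare those two layouts, the raw data-drive arithmetic works out like this (a sketch only, before any ZFS overhead, assuming 3TB drives):

[CODE]
# Raw data-drive counts for the two 36-bay layouts discussed above,
# with 3TB drives. Parity and spares hold no user data.

drive_tb = 3

# Layout from the original post: 3 x 10-drive RAIDZ2 (8 data + 2 parity),
# plus 6 spares.
raidz2_data = 3 * 8

# Layout suggested here: 3 x 11-drive RAIDZ3 (8 data + 3 parity),
# plus 3 warm spares.
raidz3_data = 3 * (11 - 3)

print(f"RAIDZ2 layout: {raidz2_data} data drives, ~{raidz2_data * drive_tb} TB raw")
print(f"RAIDZ3 layout: {raidz3_data} data drives, ~{raidz3_data * drive_tb} TB raw")
[/CODE]

Same raw capacity either way; the RAIDZ3 layout spends three of the would-be spares on a third parity disk per vdev instead.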
 

KevinM

Contributor
Joined
Apr 23, 2013
Messages
106
Regarding the use case for this box, it is to be tier 2 storage and there is no expectation of extremely high performance. Having said that, I would like to get as much performance as possible out of it. We have been space-constrained for some time but have been put off by the cost of expanding our NetApp environment. The current plan is to use it for Exchange archiving, Tivoli backups, rarely accessed scanned documents that we are legally required to keep, CIFS shares, and some iSCSI and/or NFS shares for test/dev VMs. Since I don't have complete information, my questions are more of the general-guideline type.

I will be doing 3x12 RAIDZ2 or RAIDZ3 vdevs, with either two or one spare respectively. Assuming this will be a relatively low-use SAN, what would the recommendations be for SSD ZIL and L2ARC drives, assuming 36 x 3TB SAS storage? Would these numbers scale linearly if I got 4TB drives? e.g.: for 128GB RAM and 36 x 3TB tier 2 storage in ZFS, the general recommendation is ______GB SSD (SLC?) for ZIL and ______GB SSD for L2ARC... Or should I even bother with SSDs for ZIL and/or L2ARC? The E1R36N chassis has room for 4 optional 2.5" drives, so I should be able to accommodate mirrors for both.

I'd really like to be able to identify bad drives by drive-failure lights. I am considering getting three 16-port LSI 9201-16i HBAs and directly wiring each port. These cards aren't very expensive, and I have read on this forum that people are using them without having to reflash.

I was interested to read about the 846. We originally found out about the E1R36N because it's on the Red Hat HCL, and it was pitched to us as part of a possible Gluster storage solution. The fact that Red Hat was willing to support this chassis to some extent legitimized it to our management team. However, we do not have funds available for two of these right now, so the thought is that we can run FreeNAS on the same hardware and, if things go well, roll out another chassis as an rsync target after the fiscal new year in July. I personally think there is the potential for much higher performance with ZFS.

So, that is where we are right now. I know my questions are incredibly vague. Apologies for that. Any guidance, no matter how general, about hardware or sizing would be greatly appreciated.

Edit: My initial thought was to create one large zpool with 3x12 vdevs inside it. Bad idea?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
For "tier 2"/"low use"/archival/backup use ...

ZFS performance, memory requirements, and L2ARC usefulness are driven largely by the size of the working set, which is the set of data that you're accessing more than once in a while (definitions of both "more" and "once in a while" can vary). If your working set is small enough to fit into ARC, then you get very fast answers (and no pool I/O) for those items. If your working set is larger than that, but can be stuffed into an L2ARC of manageable size, then you can avoid slow pool I/O for those items. But if your pool isn't really very busy at all, then the benefit of getting a 100ms-faster response from the NAS is marginal at best. ARC and L2ARC shine primarily under heavy loads, where they are augmenting access speeds by reducing pool IOPS (and latency).

For your use, you may not have sufficient locality in your access patterns to establish a meaningful working set, in which case both ARC and L2ARC are less useful. For a socket 2011-based box, 64GB is probably the reasonable low end of things; put it in there as four 16GB modules. You could try 128GB and see if there's a noticeable difference; if not, 64GB is quite possibly fine. But you kind of have to think about how you'll be using it.

For ZIL, that depends on how you're going to be writing to the server. A lot of sync writes? A SLOG is a good idea. The SLOG is not a cache, and you are still limited by your pool's speed. A SLOG doesn't need to be very big - around a gigabyte, or maybe a little more if you have a very large and fast system. But speed helps.
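If you want a rough way to size it, here's a back-of-the-envelope sketch assuming the SLOG only ever has to hold a few transaction groups' worth of incoming sync writes (illustrative numbers only; ZFS flushes a transaction group every few seconds by default):

[CODE]
# Back-of-the-envelope SLOG sizing: space needed to buffer a few seconds
# of incoming sync writes between transaction group flushes.

def slog_gb(ingest_mb_per_s, seconds_buffered=10):
    """Rough upper bound on SLOG space actually in use at once."""
    return ingest_mb_per_s * seconds_buffered / 1000

print(f"1GbE wire speed (~125 MB/s):   ~{slog_gb(125):.1f} GB")
print(f"10GbE wire speed (~1250 MB/s): ~{slog_gb(1250):.1f} GB")
[/CODE]

Even at 10GbE wire speed the space in use stays small, which is why device latency (and power-loss protection) matters far more than capacity.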
 
Joined
Dec 6, 2013
Messages
3
I'm sorry to bring this topic up again, but I am also planning to build new storage using a 6047R-E1R36N and I have some questions, since you guys seem to have experience with these Supermicro chassis.

Since jgreco commented on the heat problem with the drives in the back of the chassis, I'm thinking of getting a 6047R-E1R24N instead, since 24 bays is still enough for me. The main list remains almost the same:

- About 384GB of ECC RAM (depending on prices here in Brazil, maybe 256GB)
- 2x Intel Xeon E5-2630 v2, since I'm planning to use compression
- 24x Seagate Constellation ES 4TB 7200 RPM

I'm not sure yet whether 2x MCP-220-84701-0N will fit in the 6047R-E1R24N, but if they do:
- 4x Intel DC S3700 SSD (2 for SLOG / 2 for L2ARC)

My main question is which controller I should get. I was thinking of getting an AOC-USAS2-L8e, since the AOC-SAS2LP-H8iR that comes with the 6047R-E1R24N doesn't have an IT mode. Any suggestions?

Thanks in advance!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Any reason you are avoiding the de facto M1015 that thrives in the FreeNAS world? Reflash it to IT mode and you have an amazing controller.

They look like they might be virtually the same controller, except one has Supermicro's seal of approval. How much is the AOC-USAS2-L8e compared to an M1015? M1015s from eBay are the best source for controllers...
 
Joined
Dec 6, 2013
Messages
3
Nothing against the M1015 in particular, but I once ended up with the disks in a crazy arrangement after flashing an Intel card (i.e. the disk in slot 1 showing up as disk 17 in FreeNAS), so I'm afraid Supermicro isn't friendly with non-Supermicro cards. That's all.

I'm not sure yet what prices I can get here in Brazil, but since I have an easier way to get Supermicro equipment, the AOC-USAS2-L8e was my first thought.
 

KevinM

Contributor
Joined
Apr 23, 2013
Messages
106
Regarding the E1R24N and E1R36N, iXsystems resells the same units here. If they're selling the 36-drive version it must be at least somewhat reliable. With the 36-drive version there is little difference in cost, and you could start with 24 drives in the front, leaving the 12 bays in the back available for easy expansion. Just a thought.

I have two E1R36Ns at work, each with 3TB Constellations, 256GB RAM (16 x 16GB registered DIMMs), and 4 SSDs for ZIL (2 x 128GB, mirrored) and L2ARC (2 x 512GB, striped). I used two MCP-220-84701-0N drive trays in each system, so I can verify that these trays will each hold 2 SSDs. You will need two power splitters per box to feed power to the SSDs. As jgreco notes, space is tight and you will need to power down the box to service these drives.

When I was putting my systems together, one of my main questions was whether the LSI 2108 the E1R36N comes with is flashable to IT mode. I never did get a definitive answer, so in the end I used LSI 9207-8i HBAs, which work well.

You may also want to look at the Supermicro E1R24L and E1R36L, which come out of the box with LSI 2308 controllers flashed to IT mode. Note that the L versions have 16 DIMM sockets vs. 24 in the N versions, so this would limit you to 256GB using inexpensive 16GB registered DIMMs.

Regrets/observations:

One system is the main production box and the second is primarily a replication target for backups. So far performance and reliability have been excellent. I am running compression on a few CIFS datasets, perhaps 5% of overall storage, and so far I have not seen memory use top 100GB. I should have some headroom left for more compression and possibly for slaving an expansion chassis in the future.

I went with 3TB drives because they were on Supermicro's HCL and the powers that be at work were skittish about this science project. If I were to do this again I would definitely go with 4TB drives.

Seagate advertises the ES.2 Constellation as having 512-byte sectors, but I have not seen anything that says definitively whether they are 512-byte or 4K sectors internally. I have read here that it does no harm to select 4K sectors for non-SSD drives when configuring your vdevs, and if you are using 4K-sector drives there can be a significant boost in performance. Were I to do this again, I would probably select 4K sectors for the Seagates when creating the vdevs.

For my setup I used six 6-drive RAIDZ2 vdevs. This follows the 2^n + 2 best practice for vdevs and offers a good balance of performance and redundancy. Internally I have two 128GB SSDs mirrored for the ZIL and two 512GB SSDs for L2ARC. The SSDs may not be necessary, but I wanted them anyway.
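For anyone sizing a similar layout, the raw arithmetic works out roughly like this (a sketch only, before ZFS metadata and other overhead):

[CODE]
# Raw capacity for the 6 x 6-drive RAIDZ2 layout described above, with
# 3TB drives. Excludes ZFS metadata, slop space, and other overhead.

vdevs = 6
drives_per_vdev = 6
parity_per_vdev = 2          # RAIDZ2
drive_tb = 3

raw_tb = vdevs * drives_per_vdev * drive_tb                  # 108 TB
data_drives = vdevs * (drives_per_vdev - parity_per_vdev)    # 24 drives
usable_tb = data_drives * drive_tb                           # 72 TB

print(f"{raw_tb} TB raw, {data_drives} data drives, ~{usable_tb} TB before overhead")
[/CODE]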

So far performance has been excellent, up to the limits imposed by our network. I attached an NFS share to one of our AIX boxes and it was within 12% of the Fibre Channel LUN served up by one of our NetApp filers. Unfortunately our backbone is gigabit only with no jumbo frame support, and we only have one iSCSI VLAN, so those limits are not very high.

I purchased a separate 4-port Intel i350 for each system and created two 4-port lagg interfaces, each using two onboard ports and two on the card: one non-routable interface for iSCSI, and a second public interface for CIFS/NFS. We have some funds allocated this year to begin upgrading our backbone to 10G, but this is an expensive and complicated process and will take some time to complete.

For ZFS replication I used Intel X520-LR1 single-mode 10G fiber cards to directly attach the two systems, since the backup box is in a separate building several hundred yards from the production system. It is not explicitly stated in FreeBSD's documentation but I can verify that these cards work out of the box with FreeNAS 8.3.2.

You will probably need to disable booting from the HBA in order to boot from your USB stick. There was no problem until I installed the drives, but apparently with so many drives installed the BIOS gives up before searching the USB ports.

Finally, I can verify Supermicro's tech support is excellent. I used them to help diagnose a bad stick of memory when I was setting up the first system.

That's all I can think of at the moment. Good luck with your project.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
I'd say search around and see if you can find someone who has used that model. The M1015 is a safer bet in my opinion (and possibly cheaper).
 
Joined
Dec 6, 2013
Messages
3
Kevin and cyberjock, thanks a lot for your feedback; it helped me a lot. Kevin, your shared experience means a lot to a newbie like me. I think this project is heading in the right direction.

I'm now looking into the M1015 and its pros/cons, but this project as a whole now depends on the interested lab and which hardware they choose. There's a risk they'll just buy Dell (very, very expensive, especially in Brazil) or HP storage, but I will not let that happen. So far I love learning and working with ZFS; it's pretty awesome.

If they approve this one, I may post the build process here with photos, if you're interested.
 