FreeNAS Dell R510 ESXi storage build - looking for advice


Jon K

Explorer
Joined
Jun 6, 2016
Messages
82
I've just received my new (to me) Dell R510 12-bay. It's equipped with:
  • Two Intel E5620s (4c/8t, 2.4GHz)
  • 64GB DDR3 ECC Memory
  • Intel Pro/1000 Quad Port 1 GbE NIC
  • iDRAC Enterprise
  • Dual PSU
  • Twelve WD Enterprise 1TB SATA disks
  • One (or two?) Samsung 850 Evo 250GB SATA SSDs (advice needed)
  • Two SanDisk 32GB thumb drives (internal USB ports)
  • PERC 6/i to be replaced with some sort of HBA (advice needed)
I currently have the disks above (plus three more) in a Dell MD1000, configured as RAID50 DAS attached to one of the ESXi 6.0 hosts in my cluster, but I want to share the storage. So I am building the R510 as a "roll your own SAN" solution, serving iSCSI and NFS. I am most interested in maximizing IOPS. I would probably do the equivalent of RAID6, so RAIDZ2, and I would like to use an SSD tier since I only have 64GB of RAM. The cluster currently supports 20-30 VMs that are all somewhat chatty, so I'd like to maximize IOPS there.

I guess what I need to know is which HBA I should pick up - I am used to hardware RAID and very competent in virtualization, but FreeNAS is newish to me. Do I need to mirror the SSDs, or is one recommended? There are two 2.5" bays internal to the machine that will hold SSDs nicely. I looked and it seems the IBM M5015 is "supported" - should I get that card, or is an M1015 an option, and which is better? Any advice is appreciated. Thanks all!

Jon
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
To maximize IOPS you will want to use mirrors instead of any RAIDZ'n' topology.
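
For a concrete picture, here is a rough sketch of a pool of striped mirrors from the command line, assuming your twelve disks show up as da0-da11 (hypothetical device names; FreeNAS normally builds the pool through the GUI and uses GPT labels rather than raw devices):

  # Six 2-way mirror vdevs striped together. Each vdev contributes its own
  # IOPS, so this layout gives roughly 6x the random IOPS of a single
  # 12-disk RAIDZ2 vdev, at the cost of half the raw capacity.
  zpool create tank \
    mirror da0 da1   mirror da2 da3   mirror da4 da5 \
    mirror da6 da7   mirror da8 da9   mirror da10 da11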

You definitely want to ditch the PERC 6/i, and you don't want an IBM M5015 either, because FreeNAS needs an HBA, not a RAID card. The IBM M1015 or the Dell equivalent (PERC H200) are good choices; both are in widespread use here and are recommended.

There is a great deal of reference material here on the forum about these subjects. Hopefully one of the enterprise experts will drop by and comment on the merits of NFS vs. iSCSI. I personally have had better luck with NFS as a VMware datastore, but I'm just a one-man shop.

Welcome, and good luck!
 

Jon K

Explorer
Joined
Jun 6, 2016
Messages
82
Thanks for the welcome! Darn, I wish I had known - the seller of my R510 would have supplied an H200 but I didn't think it'd work :( Does an H200 need to be flashed or anything special?

I understand mirroring will give the maximum IOPS from the spindles, but I am hoping I can keep the capacity and still improve performance by using SSD caching.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Perhaps the seller will swap controllers with ya? In any case, H200s go for about $60-100 on eBay; M1015s the same or a little more. And yes, you will need to flash either card to IT mode.
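
For reference, crossflashing one of these SAS2008-based cards (M1015 or H200) to LSI IT-mode firmware usually runs along these lines, typically from a DOS or EFI boot environment. The firmware file names and the SAS address below are placeholders, and exact steps vary by card, so treat this as a rough outline and follow one of the forum crossflashing guides:

  sas2flash -listall                          # note the card's SAS address before erasing
  sas2flash -o -e 6                           # erase the existing firmware/BIOS (don't reboot yet)
  sas2flash -o -f 2118it.bin -b mptsas2.rom   # write the 9211-8i IT firmware (BIOS optional)
  sas2flash -o -sasadd 500605bxxxxxxxxx       # restore the SAS address recorded earlier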

I understand about wanting more capacity... life's full of compromises, ain't it? A pair of 6-disk RAIDZ2 arrays would give slightly better performance at the cost of 1/3 of your disk space being given up to parity. Still better than giving up 1/2 of your space to mirrors. But IOPS scale with the number of vdevs; with a single 12-disk vdev you only get the IOPS of a single drive.
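
As a sketch of that middle-ground layout (again with hypothetical device names da0-da11), two 6-disk RAIDZ2 vdevs in one pool would look like:

  # Two RAIDZ2 vdevs of six disks each: 8 disks' worth of usable space,
  # 4 given up to parity, and roughly twice the random IOPS of a single
  # 12-disk RAIDZ2 vdev (one vdev's worth per RAIDZ2 group).
  zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11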

Not sure how you envision using an SSD 'tier'. Do you mean that you intend to add a ZIL SLOG device? Or an L2ARC? Or both?

A ZIL SLOG SSD (or faster device) is indicated when serving up VM block storage, unless you're willing to throw caution to the winds, turn off synchronous writes, and hope you don't suffer a catastrophic power failure. For a SLOG SSD, you want a device with integral capacitor power backup, low latency, fast writes, and very high write endurance, like the Intel DC S3700 or S3710.
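
If you do add a SLOG, attaching it to an existing pool is a one-liner; mirroring it is optional but guards against losing in-flight sync writes if the device dies (device names here are hypothetical):

  zpool add tank log da12               # single SLOG device
  zpool add tank log mirror da12 da13   # or a mirrored SLOG pair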

Most of the experts around here will recommend purchasing more memory before adding an L2ARC, as it consumes memory for overhead and can actually have a detrimental effect on performance in some circumstances. But you may have enough RAM to warrant adding an L2ARC SSD. Search the forums and you'll find a great deal of informed discussion about this. Any number of Intel or Samsung or other branded products work well for this purpose. If memory serves, the L2ARC shouldn't be any larger than roughly 4 times your ARC size, so in your case a 256GB drive would be about right.
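
If you do decide an L2ARC makes sense, the rough arithmetic and the command look something like this (sizes are ballpark and the device name is hypothetical):

  # With 64GB of RAM the ARC tops out somewhere below 64GB, so a ~4:1
  # L2ARC-to-ARC ratio works out to roughly 200-250GB -- which is why a
  # single 250GB SSD is in the right neighborhood here.
  zpool add tank cache da13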
 