BUILD ESXi Based Home Build Check


CaptnIgnit

Cadet
Joined
Sep 29, 2016
Messages
7
I'm building an ESXi server for my house to accomplish a few things:
  1. FreeNAS Storage
  2. pfSense Router
  3. Plex Media Server
  4. Temporary ad hoc VMs for personal projects
Hardware
Case: Fractal Design Node 804
Motherboard: SuperMicro X10SRM-F
CPU: Intel E5-2620 v4
Memory: Crucial 128GB (4x 32GB) RDIMM
Network: Intel 2x 1Gbps onboard + 2x 1Gbps PCI-E Card

Storage Controller: M1015
Storage: 6x 6TB WD Red
Storage: 2x Intel S3500 SSD (See question about size)
Storage: 2x 32GB SuperDOM
Build
The 2 SATA DOMs will be mirrored and partitioned as follows:
  • 8GB - ESXi Boot
  • Remainder - FreeNAS Boot
A FreeNAS VM will be created with the following:
  • 2 Cores
  • 64GB RAM
  • M1015 Passthrough (6 HDD + 2 SSD)
  • 1Gb NIC Passthrough
Revised based on feedback and further research: I think striped mirrors achieve my goals better. A stripe of three 2-way mirrors of the 6TB drives, with a mirror of the SSDs as the SLOG. Once set up, I'll attach a portion of the storage to ESXi as an iSCSI store. The rest will be used for shared file storage.
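For illustration, here is a rough sketch of that layout in zpool/zfs terms (pool and device names are placeholders; in practice the FreeNAS volume manager GUI would build the equivalent):
  # Stripe of three 2-way mirrors across the six 6TB Reds:
  zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5
  # Mirrored SLOG on the two SSDs:
  zpool add tank log mirror da6 da7
  # A zvol to share with ESXi over iSCSI (the size here is arbitrary):
  zfs create -V 2T -o volblocksize=16K tank/esxi-iscsi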
Questions
  1. Are there any huge errors I'm making in my HW choices?
  2. Is dividing up the SSDs a terrible idea? Should the boot devices move off to their own device and leave the SSDs solely for SLOG?
  3. Do I really need a SLOG? I've read that my particular case is the de facto case for using a SLOG; is that true?
  4. Is 32GB RAM sufficient? If not, is there a cutoff where more RAM doesn't result in a big performance increase?
  5. How big should the SSDs be? As drive size increases, write performance increases. For a SLOG I've read I don't need much space, but write speed is what matters.
  6. Would I benefit from an L2ARC?
  7. Other recommendations/advice?
  8. The M1015 will be maxed out in this config; is that a potential performance concern? What HBAs with 12 SATA ports (3 SAS connectors) are recommended?
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Hello and most welcome to the FreeNAS Forums.

1. A better CPU would be the E5-1620, up to the E5-1650.
2. Yes. Terrible.
3. Maybe. Statistics from your usage will tell. The best approach is to build without, try, test, post statistics combined with your user experience to assess if the setup would benefit from L2ARC or SLOG.
4. Probably yes, though whether it will be enough for your desired performance is less certain. When you've read further on L2ARC and SLOG you'll discover recommendations in relation to RAM size. Follow those recommendations.
5. You'll find out when you read further on SLOG and L2ARC.
6. See 3, 4 and 5.
7. Put the ESXi plans on hold until you've assessed the performance and become well acquainted with FreeNAS. Once properly set up, it is easy to migrate a standalone installation onto the ESXi platform and, conversely, to migrate back out of ESXi. I've done both on several machines, multiple times. ESXi is nice and provides a lot of functionality, but once things start breaking it is a nightmare.

SSD Passthrough/VMDK (Undecided which approach is better)
This is the strongest indicator yet that you are nowhere near prepared enough to undertake the ESXi journey on FreeNAS at this point.

Here's what I suggest you do to get up to par.
Look through every subforum's top section and read all the stickies thoroughly. There is nothing wrong with returning to the "newbie threads"; most experienced users do that from time to time to straighten questions out, since it is impossible to internalize them fully at first glance. From your post, my initial assessment is that you need to revise SLOG/L2ARC, ZIL, and the posts about virtualizing FreeNAS (there is more than one). Lastly, I'd read joeschmuck's "my dream system" thread in the off-topic section from start to finish.

When you've done this, you will have answers to your questions and be faced with a new set of questions. Looking forward to that.

Good luck :)
 
Joined
Mar 22, 2016
Messages
217
Dice gave some great advice about what to look into further.

A couple of questions though:
What do you plan to do with the FreeNAS? The reason I ask is that you are talking about a SLOG device but did not give a reason for it.
Before you go with an L2ARC you want to max out the memory. Since you can fit another 128GB in your ESXi system, that would likely be the first step, and then dedicate more RAM to the FreeNAS VM.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
I plan to set up a RAIDZ2 with the 6 drives and set up the SSDs as the SLOG device. Once set up, I'll attach it to the ESXi server as an NFS or iSCSI store.
If doing iSCSI you are going to need more RAM. Then make sure that "sync=always" is set for the iSCSI pool, and toss in a good SLOG (whether you are using NFS or iSCSI). Forget partitioning the SSD(s) for an ESXi datastore and SLOG...
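As an illustration, the sync setting is a per-dataset/zvol ZFS property; assuming a zvol named tank/esxi-iscsi (a placeholder), the GUI's sync option corresponds to:
  # Force every write to the iSCSI zvol to be committed synchronously:
  zfs set sync=always tank/esxi-iscsi
  # Confirm the setting:
  zfs get sync tank/esxi-iscsi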

Have the SSD(s) on the controller that is passed through to FreeNAS. Make sure you use SSD(s) that have Power Loss Protection; I would recommend the Intel DC S3710 200GB.

Forget about L2ARC for now; focus on getting more RAM reserved/locked in ESXi for the FreeNAS system.

Keep backups of both the ESXi and FreeNAS Configs.
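As a sketch of how those backups can be pulled (the ESXi command is the stock backup mechanism; /data/freenas-v1.db is where FreeNAS stores its configuration database, and the hostname below is a placeholder):
  # On the ESXi host: generate a config bundle and print a URL to download it from
  vim-cmd hostsvc/firmware/backup_config
  # On FreeNAS: the GUI's "Save Config" exports the database at /data/freenas-v1.db,
  # which can also be copied off-box, e.g.:
  scp root@freenas:/data/freenas-v1.db ./freenas-config-backup.db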
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
...and if you're using FreeNAS as a VM datastore, you'll want to set up your pool in striped mirrors, not RAIDZn.
 

CaptnIgnit

Cadet
Joined
Sep 29, 2016
Messages
7
Thanks for all the replies so far; I've updated the original post with the revised build.
 

CaptnIgnit

Cadet
Joined
Sep 29, 2016
Messages
7
Dice gave some great advice about what to look into further.

A couple of questions though:
What do you plan to do with the FreeNAS? The reason I ask is that you are talking about a SLOG device but did not give a reason for it.
Before you go with an L2ARC you want to max out the memory. Since you can fit another 128GB in your ESXi system, that would likely be the first step, and then dedicate more RAM to the FreeNAS VM.

It's largely file storage, but a portion will be for VMs. Based on a number of forum/blog posts I've read, the example use case for a SLOG is a VM store, which is what I will be doing. You can do async writes to the NFS or iSCSI store, but that opens the door to risk I'd rather not take. The open question is what performance looks like without a SLOG versus with one, and most of what I've read says the difference is significant in this use case.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
The M1015 will be maxed out in this config; is that a potential performance concern? What HBAs with 12 SATA ports (3 SAS connectors) are recommended?
On rotating rust, there is no documented reason to worry about speeds on that M1015 with your type of setup.
Multiple folks use a single M1015 to run 24-36 drives.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
On rotating rust, there is no documented reason to worry about speeds on that M1015 with your type of setup.
Multiple folks use a single M1015 to run 24-36 drives.

The bottleneck on the M1015 is the PCIe 2.0 x8 interface, which is about 4GB/s. Each mini-SAS port supports about 2.4GB/s. But your peak HD speed is probably less than 300MB/s.

But will your system ever be able to serve at 4GB/s?

If you assume 600MB/s for each of the two SSDs, you still have 2.8GB/s for your HDs.
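Spelled out with approximate figures (assuming ~500MB/s per PCIe 2.0 lane):
  PCIe 2.0 x8:        8 lanes x ~500MB/s = ~4GB/s total
  Two SSDs:           2 x 600MB/s        = 1.2GB/s
  Left for the HDs:   4GB/s - 1.2GB/s    = 2.8GB/s
  Per HD (6 drives):  2.8GB/s / 6        = ~465MB/s each, above the ~300MB/s peak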

BTW, look into the E5-16xx v4 range of CPUs: higher single-thread speeds, and you're not paying the dual-processor tax like you are with the E5-26xx line.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
It's largely file storage, but a portion will be for VMs. Based on a number of forum/blog posts I've read, the example use case for a SLOG is a VM store, which is what I will be doing.
Do I understand you intend to partition a single SSD, using one partition as your SLOG device and another partition as VM storage? If so, this is a very bad idea. A SLOG device needs to be a dedicated, write-optimized device with low latency and a supercapacitor/battery backup. You defeat the purpose of a SLOG by partitioning the device and using some of the partitions for other purposes.
You can do async writes to the NFS or iSCSI store, but that opens the door to risk I'd rather not take. The open question is what performance looks like without a SLOG versus with one, and most of what I've read says the difference is significant in this use case.
I run VM storage on a RAIDZ2 array (see 'my systems' below) with quite acceptable performance. I am a developer and don't hit the VMs too hard; I would advise a different design for other use-cases. Also, I use a separate VMware virtual switch for the storage network, as described in this guide. This approach really helps with VM performance.

I ran my system without a SLOG device for a year or so before installing an Intel DC S3700 as a dedicated SLOG; during that time I turned off synchronous writes on my NFS VM dataset. I found the system unusable with synchronous writes turned on but no SLOG device present. The two attached ATTO benchmark runs show the difference between running with synchronous writes turned off and running with a SLOG device and synchronous writes turned on. In the latter case, writes slowed to ~180MB/s max, still quite usable but much slower than with sync off.
[Attached: ATTO benchmarks, IntelSSD-ZIL-sync-disabled.jpg and IntelSSD-ZIL-sync-enabled.jpg]
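For reference, a sketch of the equivalent commands with placeholder pool/dataset/device names (FreeNAS exposes both steps through the GUI):
  # Add the Intel DC S3700 as a dedicated SLOG device:
  zpool add tank log da8
  # Re-enable sync writes on the NFS dataset backing the VMs
  # (sync=standard honors client sync requests; sync=always forces every write):
  zfs set sync=standard tank/vmware-nfs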
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
You defeat the purpose of a SLOG by partitioning the device and using some of the partitions for other purposes.

Not so sure about that. It depends on how much use the other partitions get.

L2ARC and SLOG on the same device is a bad idea because they compete for IOPS. But if the ESXi datastore were mostly static, then it would be okay.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
@Spearfoot btw, just looked at your sig. Most of the HDs should be in TB but you have them as TiB ;)
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Not so sure about that. It depends on how much use the other partitions get.

L2ARC and SLOG on the same device is a bad idea because they compete for IOPS. But if the ESXi datastore were mostly static, then it would be okay.
If the ESXi datastore is mostly static, why put it on SSD-based storage?
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I've since split it out, but the original thought was to avoid buying an extra pair of storage devices for the ESXi boot and FreeNAS OS.
How will you mirror the SATA DOMs? While ESXi supports a subset of RAID controllers from the major manufacturers, it can't be used to create a RAID1 array itself. Also, you don't need to partition the SATA DOM, as VMware will allow you to create a single datastore on any storage media (other than a USB flash drive) where you can then install both ESXi and FreeNAS (and other VM images), provided there's enough space. I think your 32GB DOM will be adequate for ESXi and FreeNAS, but you may want to check this out (and perhaps other forum members with more experience will chime in about it).

AFAIK, the best you can do with a pair of SATA DOMs is create datastores on both, installing ESXi to one of them. Then use the FreeNAS installer's 'mirrored installation' feature to create a mirrored FreeNAS installation on both devices. That will at least give you redundancy for your FreeNAS boot image.

If 32GB is too small for both ESXi and FreeNAS, you can always install ESXi to a USB flash drive -- the SanDisk Cruzer Fit is a good, durable USB 2.0 model that works well for this purpose -- and then install mirrored FreeNAS images on the two SATA DOMs. I originally started out using a very similar approach in my FreeNAS all-in-ones: I booted ESXi from a 16GB SanDisk Cruzer Fit USB 2.0 drive and installed FreeNAS mirrored to a pair of small SSDs attached directly to the motherboard's SATA ports. Small SSDs are cheap compared to SATA DOMs of similar capacity, which makes this a nice option if you have budget constraints.

FreeNAS is so easy to reinstall and restore from a saved configuration file that you might be tempted to just install it and ESXi to a single SATA DOM and keep the other as a spare. It depends on how critical uptime is to you.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
As has hopefully become a bit clearer, there actually is some value in the advice to become familiar with FreeNAS before going 'bigger'.

I'm not saying one needs years of experience. A healthy couple of months and a few reinstalls/migrations would certainly be beneficial.
 