Advice for development lab, ESXi and FreeNAS 11 storage

eldo

Explorer
Joined
Dec 18, 2014
Messages
99
I've acquired two machines that I want to set up and use primarily for development, and additionally as offsite backups for my critical data (financial documents, family photos, etc.).

Equipment:
Machine A:​
MOBO/CPU: ASRock C2750D4I (8 cores, 8 threads total)
RAM: 16GB ECC​
Storage capacity: 6 HDDs​
Machine B:​
Dell R900​
CPU: 4 @ Xeon X7350 (4 core, 4 thread each, 16 core, 16 thread total)​
RAM: 128GB ECC​
Storage capacity: 5 HDDs​
Storage:​
HBA: LSI00194 (SAS 9211-8i, 8-port internal, 6Gb/s SATA+SAS, PCIe 2.0)
6 @ 4TB WD Red​


Use Case

One full desktop Linux VM (persistent sessions; a Linux environment instead of the corporate Win 7 my laptop runs)
A small number (likely 3 to 5) of mostly light Linux VMs for Docker development.
A few Windows hosts with various development platforms (Visual Studio, Allen-Bradley, Schneider, Beckhoff PLC programming platforms, etc.)
The Windows hosts will remain mostly powered off and be used for roughly 1 to 5 hours a week, with at most 2 Windows hosts powered on at the same time.
The Windows hosts are relatively light and not particularly resource-heavy (other than Windows itself)
Dedicated 1 GbE interfaces on each box solely for A<->B traffic.​
All management and user traffic (myself and at most 1 or 2 other users) will occur on the existing LAN

Initial Thoughts
  • I figure that Machine A would be best used as the FN server. It's my old primary home server, and I have had no issues with it since the board was replaced after a hardware failure.
  • I figure that Machine B would best be used as the ESXi host (even if limited to 6.0 U3) due to the doubled core count and vastly increased memory.
  • Reading through resources and other threads, I will likely be using iSCSI for the Windows hosts. To keep things simple, I'll likely use iSCSI for all VMs unless there's a good reason to use NFS for the Linux hosts.
  • 3 striped mirrors (three 2-way mirror vdevs) in FN, used as the datastore for ESXi; a rough sketch of the layout follows this list.
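Roughly what I have in mind for the pool, as a sketch only; I'd actually build it through the FreeNAS UI, and "vmpool" plus the da0-da5 device names below are placeholders rather than what the system will really assign:

# Three 2-way mirror vdevs striped together (the planned layout):
zpool create vmpool \
  mirror /dev/da0 /dev/da1 \
  mirror /dev/da2 /dev/da3 \
  mirror /dev/da4 /dev/da5
zpool status vmpool

# The alternative I ask about below, a single 6-way mirror, keeps only one
# drive's worth of capacity and roughly one drive's worth of write throughput:
# zpool create vmpool mirror /dev/da0 /dev/da1 /dev/da2 /dev/da3 /dev/da4 /dev/da5
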
Help Needed / Questions
  • I am not familiar with the X7350-generation hardware, but all the reviews/comparisons I can find put the two boxes at around the same performance, both single-threaded and multi-threaded.
    • Am I reading the comparisons wrong, and is the similar performance for 1 Xeon socket vs. the Atom SoC?
    • Is using A for FN and B for ESXi the best choice?
  • Would a 6-way mirror provide any benefits over 3 striped mirrors?
    • I would think a 6-way mirror would not be a wise choice for VM storage, as its write performance would be similar to that of a single drive
  • Am I incorrect in thinking that an FN box acting as an iSCSI/NFS datastore would provide better performance than a single disk/SSD local to the ESXi server?
    • I would prefer FN storage due to snapshots, redundancy, and other ZFS voodoo.
  • Machine B is new to me. I initially bought the HBA to use via PCI passthrough on Machine A, running ESXi with a virtualized FN.
    • Would there be significant performance increases using the HBA in Machine B (ESXi with virtual FN via PCI pass-through), and finding another use for Machine A? I understand that if I virtualize FN, very bad things may happen, and that I should not store any critical data. I'm only considering this option since no data is expected to be critical on the hardware.
  • Teaming / LACP
    • This is unknown territory
    • I do not expect to be saturating a GbE connection, but I don't know what to expect when using a network datastore for ESXi
    • Since I have 3 free GbE ports on each machine for ESXi/FN comms, would it make any sense to set up LACP / teaming?
      • I've read some on LACP, and know enough that I don't know enough to make any decisions
      • I'm thinking that there's probably not a good reason to go down this road, and why invite dragons to play if not needed
  • I also want to use this as a replication target for critical data on my main FN box
    • My existing FN data is on non-encrypted pools.
    • Can I replicate to an encrypted area on this system?
      • This would mainly be for disaster recovery in the event of a catastrophic failure of my existing RAIDZ2 pool on my main FN box
    • Would I be able to set up multiple such targets, for cross-backup with other FN boxes?
      • I have 2 other people considering setting up FN and cross-hosting critical backups for offsite storage.
I think this is all of the major questions and concerns I have.​
Did I miss anything obvious, or are there other things I should be watching out for?​
Suggestions welcome, and thanks for any help!
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
My opinions...

There is no fundamental issue with virtualizing FN. I've been running it that way for years on ESXi. Most of the issues were caused by (a) people not setting it up correctly or (b) using crappy HW, and that caused a lot of FUD on this forum as a result.

If you need the compute horsepower of two machines, then certainly use two machines. If not, I don't see the issue with virtualizing FN and running an all in one box with system B.

Since you have HDDs, I would make your mirror setup as wide (as many vdevs) as possible. (Yes, you want at least 2-disk mirrors.) Most of my VM hosting is on an SSD pool now, but my HDD pool is 6 mirror vdevs wide (6x2=12 disks).

Re: network, no don't do LACP. If you do iSCSI you'll want separate subnets and multipath. Though my experience with iSCSI as an ESXi store w/FreeNAS was not good. It worked, but I didn't get the performance I wanted. I moved to NFS. Most of my VMs are on the same box as the FN VM, so the networking speed is all internal vswitch. Works great.
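For reference, a rough sketch of both options from the ESXi shell; the address, export path, datastore name, and vmhba/vmk numbers below are placeholders, not anything from your setup:

# NFS v3 datastore backed by a FreeNAS dataset:
esxcli storage nfs add -H 10.0.10.2 -s /mnt/vmpool/esxi -v fn-nfs

# If you do go iSCSI, bind one vmkernel port per subnet to the software iSCSI
# adapter so ESXi can actually multipath across them:
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2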

I see no reason you would have an issue replicating to an encrypted pool on a backup. I haven't experimented with the newer replication to know if it supports multiple backup targets. You'd have to research that. At a minimum I think you could daisy chain it. A > B, then B > C, then C > D, etc...
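Under the hood it's all zfs send/receive; FreeNAS drives it from the Replication Tasks UI, but a hand-rolled sketch (pool, dataset, snapshot, and host names are all placeholders) looks like:

# On the source box: snapshot, then push to the backup machine over ssh.
zfs snapshot -r tank/critical@manual-backup-1
zfs send -R tank/critical@manual-backup-1 | \
  ssh replication@backup-box zfs receive -du backup

# Whether "backup" sits on a GELI-encrypted pool doesn't matter to the stream,
# since that encryption lives below ZFS. Daisy chaining is just another hop:
# B sends its received copy on to C the same way.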
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It's not so much FUD as it is people who believe themselves experts after reading a random blog post or watching a random YouTube video. If you're willing to follow good engineering practices, it's generally safe to do, but this includes following a sane strategy, using compatible hardware, and not trying to "go cheap" on resources.

https://www.ixsystems.com/community...ide-to-not-completely-losing-your-data.12714/

If you're planning to go iSCSI for VM hosting, consider 32GB to be the absolute minimum unless your VMs are so sleepy that there's really no activity. We generally suggest 64GB or more for VM hosting.

Validate your virtualization configuration. Most of the older Intel CPUs seemed to exhibit some issues with stuff like PCIe passthru on at least some platforms. I would not bother trying even on old Supermicro X8 gen stuff, and what you're talking about is the equivalent of Supermicro X7, which is dodgy as hell. I don't know how cleanly that experience translates to Dell gear. But test it for a month or two before putting anything even vaguely valuable on it. Virtualization is a complex house of cards.
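If you do experiment with passthru anyway, at least sanity-check that the guest really owns the HBA before trusting it with a pool. A rough sketch from inside the FreeNAS VM (device names will differ on your gear):

pciconf -lv | grep -B 3 -i lsi   # is the 9211-8i visible on the guest's PCI bus?
dmesg | grep -i mps              # did the mps(4) driver attach to the SAS2008?
camcontrol devlist               # are all six drives enumerated?
smartctl -a /dev/da0 | head      # is SMART reachable through the passthru path?

None of that proves long-term stability, which is why I say test it for a month or two.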

The C2750D4I with 64GB would make a nice NAS platform for hosting VMs. The only caveat there is that the C2750 is known to have a manufacturing defect that causes it to die prematurely.
 

eldo

Explorer
Joined
Dec 18, 2014
Messages
99
Thanks toadman, jgreco.
If you need the compute horsepower of two machines, then certainly use two machines. If not, I don't see the issue with virtualizing FN and running an all in one box with system B.
I doubt I need the compute power of these 2 boxes. I was planning on using just Machine A, but just before I started to get it set up I was gifted Machine B from an internal decommission.

Re: network, no don't do LACP. If you do iSCSI you'll want separate subnets and multipath. Though my experience with iSCSI as an ESXi store w/FreeNAS was not good. It worked, but I didn't get the performance I wanted. I moved to NFS. Most of my VMs are on the same box as the FN VM, so the networking speed is all internal vswitch. Works great.
Understood on LACP; that's what I thought, but I'd much rather learn from the mistakes of others than trash a good thing myself. :)

Should I take this to mean that, if I use iSCSI, each VM should have its own subnet, or just that iSCSI traffic should be isolated from other network traffic?
My plan was to have a direct patch cable between the ESXi and FN boxes (if I used 2 machines).

Have you used NFS with Windows hosts? My initial thought was to use NFS since I'll be mostly Linux/BSD based, but I've read that VMware/NFS datastores aren't the best for Windows guests.

I see no reason you would have an issue replicating to an encrypted pool on a backup. I haven't experimented with the newer replication to know if it supports multiple backup targets. You'd have to research that. At a minimum I think you could daisy chain it. A > B, then B > C, then C > D, etc...
Thanks for the pointer. I've only dealt with my single FN, and haven't ever had the opportunity to go down the replication path.

It's not so much FUD as it is people who believe themselves experts after reading a random blog post or watching a random YouTube video. If you're willing to follow good engineering practices, it's generally safe to do, but this includes following a sane strategy, using compatible hardware, and not trying to "go cheap" on resources.
https://www.ixsystems.com/community...ide-to-not-completely-losing-your-data.12714/
I've read that, as well as the iXsystems post by Josh Paetzel here, before starting this thread.
I bought the HBA specifically to try not to cheap out on necessary hardware, and the more I read about everything, the happier I am that I did so.

If you're planning to go iSCSI for VM hosting, consider 32GB to be the absolute minimum unless your VMs are so sleepy that there's really no activity. We generally suggest 64GB or more for VM hosting.
Honestly, I'm more familiar with NFS, and like the idea of file, rather than block, storage because I'm lazy and if I /need/ access to any files I'd like to just be able to see them on FN.
In your experience, does NFS work pretty well with Windows? The only reason I'm thinking about iSCSI is the reports I've read online that imply performance is quite a bit worse with NFS and Windows hosts.

If I do check out iSCSI, I'll certainly keep the sizes you suggest in mind. As it turns out, I usually create my VMs with around 64GB disks anyway.

Of course, when I get the system set up I'll likely test both, but I do like best practices from those who came before me.

Validate your virtualization configuration. Most of the older Intel CPUs seemed to exhibit some issues with stuff like PCIe passthru on at least some platforms. I would not bother trying even on old Supermicro X8 gen stuff, and what you're talking about is the equivalent of Supermicro X7, which is dodgy as hell. I don't know how cleanly that experience translates to Dell gear. But test it for a month or two before putting anything even vaguely valuable on it. Virtualization is a complex house of cards.

The C2750D4I with 64GB would make a nice NAS platform for hosting VMs. The only caveat there is that the C2750 is known to have a manufacturing defect that causes it to die prematurely.
A major part of my concern about the Dell system is exactly its age, plus passthrough support that might be flaky given that the latest revision of ESXi VMware says will run on that hardware is 6.0 U3.

My plan for validation was to build ESXi-compatible VMs on my laptop running VMware Workstation, then copy them over to the ESXi system and run them remotely, while keeping copies of all the VMs locally in case of Bad Things happening.
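For the copy step I'm assuming VMware's ovftool (it ships with Workstation) will do the job; something like this sketch, where the paths, host name, and datastore name are just placeholders:

# Export the Workstation VM to a single OVA file:
ovftool "C:\VMs\dev-linux\dev-linux.vmx" dev-linux.ova

# Deploy it to the ESXi host, targeting the FreeNAS-backed datastore:
ovftool --datastore=fn-nfs dev-linux.ova vi://root@esxi-host/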

I think I'll stick with the Avoton system then, and either find another use for the Dell or find someone else who's interested in it.

Thanks for the reminder about RAM. I'd always meant to upgrade from the 2x8GB modules I have installed in it.

I'm painfully aware of the 2 major defects; Machine A is my old personal server and hit both the watchdog and clock bugs. ASRock sent me an out-of-warranty replacement once the clock bug hit and killed my system.
Honestly, I'm happy I'm able to find a use for the Avoton board since I upgraded to the dual-socket X9 system I've got as my primary now.
It would have been /slightly/ overkill for a pfSense/OpenVPN box.
 

eldo

Explorer
Joined
Dec 18, 2014
Messages
99
Well, apparently I overlooked the fact that the C2750 board lacks VT-d, and the Dell R900 lacks VT-d as well. Both systems support VT-x.
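For anyone else checking their own hardware, a rough way to confirm this from a FreeBSD/FreeNAS shell (output formats vary by release, and the features also have to be enabled in the BIOS):

grep -o VMX /var/run/dmesg.boot | head -1   # VT-x advertised in the CPU feature flags?
acpidump -t | grep -ci DMAR                 # VT-d: is a DMAR ACPI table present? (0 = no)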

My understanding at this point is that it would be unwise to store any data that I am not ready to lose in an instant on either of these, if I choose to virtualize FN on ESXi.

So I think my options are:
  • Virtualize anyway, and accept the risks of data loss with no warning on the virtual FN system
  • Fall back to the 2-machine setup, 1 for ESXi, 1 for FN
  • Try and convince management to purchase a new(er) board/CPU that supports both VT-x and VT-d
Work is pretty cheap, so I doubt they'll spring for the upgraded hardware. Obviously I would prefer to give FN direct access to the disks, so I think I need to use both machines.

My question now becomes, which to use for what purpose.

The Avoton is a newer platform, but has fewer cores and much, much less RAM, while the R900 has gobs of RAM to spare.

My thought is that I should set up FN on the Avoton and install ESXi on the R900, since the R900 would (I think) support more VMs, with more and higher-clocked cores and more RAM available to each VM.

Does this sound reasonable?

I'm very unfamiliar with the R900 hardware, while I know the Avoton system has been a rock-solid FN server for 5+ years.

jgreco,
I read your recommendation against the Dell as being due to the headaches of virtualizing FreeNAS, not to the platform itself being unsuited for one job or the other.
Did I understand your comment correctly?

toadman,
Your recommendation was to use the Dell as an all-in-one FN platform, but I don't want to risk the data loss that could come with virtualizing FN on a non-VT-d system. Should I take your comments to mean that you would recommend using the Dell as the ESXi box if I need 2 machines?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
No, my comment was along the lines of

"We know that the equivalent Supermicro X7 and X8 systems have had lots of problems with virtualization."

Some people have suggested that this is a CPU silicon issue, others have suggested that it is due to insufficiently-tested BIOS or platform configuration issues. I honestly don't know.

What I *do* know is that I'd be a bit paranoid. It seems that once things got to Sandy Bridge, we were seeing far fewer random issues with PCIe passthru. Prior to that, it had been a mayhem of near-constant catastro-fscks in the forums, because people with the older hardware couldn't get VT-d to work (correctly, or at all), and we had this huge problem where lots of people were trying things such as RDM to get disks to FreeNAS. Then they'd be doing this on crapper-visors with only 8-16GB of memory, which meant they'd squeeze FreeNAS for memory, which also had known issues with crashing FreeNAS. It's also a known fact that VT-d support was dodgy early on, but where the fault was is anyone's guess. It could be Intel, it could be the server manufacturer, etc.

Out of this era came: my raising the minimum memory requirement for ZFS with FreeNAS to 8GB, a stern warning not to virtualize FreeNAS in production, and many hours wasted trying to help people whose setups had somehow gone south in ways that were too complex to reproduce.

If you're non-VT-d, go bare metal. Enough things may have changed in the last half-decade that maybe there are other paths forward, but you follow those paths mostly on your own.

It's worth noting that even on a platform where it works fine, you can run into terrifying situations such as the cold shiver you get when reinstalling ESXi and it lists all your NAS drives as "available" targets for reinstall.
 

eldo

Explorer
Joined
Dec 18, 2014
Messages
99
Thanks for the clarification.

Without VT-d, I'm going bare metal. I've decided to put FN on the Avoton and have it do nothing but be a datastore for ESXi guests, with the exception of being a ZFS replication target for offsite backups. I'd wager the 16GB of RAM will be plenty for this use, and I will explore replication to an encrypted pool further down the line when I have more time to play around and experiment.

When I started messing about with bhyve a year or two ago, I decided to keep my dunce cap on and allocated 12 or 16 of the 16GB of Avoton memory to a VM, thinking that overprovisioning wasn't going to be an issue. Once my system took a dump, sweet n low beat me over the head with a club and knocked that cap off. I have no desire to get those chills again.

Thanks for your help, I appreciate the guidance to not be an idiot again with my data.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
16GB of RAM is going to be tight, as in, it should work but it may be wicked slow under load. 32-64GB is better for block storage, especially iSCSI. But that doesn't mean you can't give it a shot at 16. Just trying to set expectations reasonably.
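One way to see whether 16GB is actually holding up once VMs are on it is to watch the ARC while a representative workload runs. A rough sketch using the FreeBSD 11 sysctl names:

sysctl kstat.zfs.misc.arcstats.size      # current ARC size in bytes
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
# If hits/(hits+misses) keeps sliding downhill under VM load, that's the RAM
# ceiling showing up, and the upgrade request writes itself.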
 

eldo

Explorer
Joined
Dec 18, 2014
Messages
99
Thanks for the warning.

I'll start the project using NFS and the Linux guests only for now, and will put in a request to upgrade the RAM once I get a proof of concept going.
 