Home ESXi storage and Container host

Slidspitfire

Cadet
Joined
Mar 29, 2019
Messages
7
Hello there,

This is my first topic after the self introduction one.

I hope to get some advice from you fellow community members on the build I am planning.

I currently have a VM host computer (still testing several hypervisors; I should move to ESXi in the coming days) that, through an UnRAID VM, provides network storage and some services (PiHole DNS, UNMS, Grafana+InfluxDB+Telegraf, a web server...) to my house.
This machine is rather powerful and houses several high-end GPUs and CPUs so it can run all the VMs my family and I need.

In order to decouple the services from the VDI appliance I am planning on moving from a single super powerful (and power hungry) host to a VDI host booted on demand and an always-on server for network storage and the services listed above.

The two systems will be located in two symmetric PC cases welded side by side into a single chassis. The FreeNAS server has a mATX motherboard, which leaves the empty PCIe slots of its case free for the VDI host's GPUs via risers (de facto allowing single-slot mounting of dual-slot GPUs).

The two systems will be connected to the home network (for the moment) via their own 1GbE NICs, while a P2P 10GbE connection will be put in place in order to allow for the complete removal of storage devices from the VDI host side.
In fact the FreeNAS server will act as an NFS/iSCSI target datastore for ESXi, and all the virtual machine disks will be stored there.
It will house a 10GbE Mellanox ConnectX-3 NIC, a 32GB Intel Optane NVMe cache drive, 3 WD Gold 1TB, 2 WD VelociRaptor 600GB and 2/3 Samsung SATA SSDs.
Eventually I'll add a standalone 960 EVO NVMe SSD for a shared video game library.
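For concreteness, carving out the ESXi-facing storage on the FreeNAS side might look something like this (a minimal sketch only; the pool and dataset names are made up, and FreeNAS would normally do this through its GUI):

```shell
# Names are hypothetical; "tank" stands in for whatever the pool is called.
# A dataset to export over NFS as an ESXi datastore:
zfs create tank/esxi-nfs

# A sparse (-s) zvol to expose as an iSCSI extent; with -s, space is
# only allocated as the VMs actually write:
zfs create -s -V 500G tank/esxi-iscsi
```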

The section's question "Will it FreeNAS?" is basically related to the CPU choice.
I have found a deal (183€) for an awesome BMC+IPMI motherboard, the Asus C246M WS PRO and the CPU compatibility list is the following: https://www.asus.com/Commercial-Servers-Workstations/WS-C246-PRO/HelpDesk_CPU/

Recap:
  • I am planning on building a FreeNAS host
  • It will provide basic NAS features (backup server, shared network storage)
  • It will run several services in the form of Docker containers
  • It will act as an NFS/iSCSI target for ESXi VM disks
  • It will be connected via dual 1GbE to home LAN and through 10 GbE to the ESXi host
  • It will handle some 8-10 SATA drives, plus an NVMe cache and an NVMe shared-software drive

What would be the best CPU choice for my project? I think the i3-8100 is great value with its ECC support and 4 full cores, but maybe a Pentium or a Celeron would be just enough.
Can you help me select the best option?

Thank you in advance,

Slid
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
(I find this description of what you are trying to do rather confusing. perhaps you could make a before-and-after tabular summary?) it is hard to tell what you are doing without more specific hardware context. forum rules require posting your hardware details; without that, posts often don't get any replies. many of the regular posters put their hardware in their signature.
Intel Optane 32GB cache drive on NVME
what is the point of the optane cache drive, when it can only be a read-only L2ARC? cache drives are rarely useful (your ARC is always your fastest cache, and an L2ARC device eats a chunk of RAM to function anyway, RAM that could be caching files instead). you generally don't need L2ARC until your ARC is maxed out (as in, you've RAMed your server to the hilt and it can't take any more) and you still find you don't have enough. adding cache drives to ZFS with insufficient RAM can slow your pool performance drastically. dumping cache drives into ZFS is not a good idea most of the time.
3 WD Gold 1TB, 2 WD Velociraptors 600GB and 2/3 Samsung SATA SSDs.
you list 3 different types of drives but give no context as to their purpose; are they going in the same pool? if so, the pool will mostly run at the speed of the slowest drive.
your NFS will be limited by the slowest drive on the pool hosting the VMs, particularly if that pool is also hosting the ZIL - esx always sync writes VM data, which can bring your pool to a stuttering halt if not set up correctly. i believe iSCSI doesn't use sync writes by default, and you can disable sync writes on NFS, but no sync writes is risky, particularly with VMs, and can mean possible VM corruption on power loss/connection loss for any write.
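to illustrate the sync-write point: whether those requests are honored is controlled per dataset by the `sync` property (sketch only; the dataset name is hypothetical):

```shell
# "standard" honors the client's sync requests (what ESXi over NFS relies on);
# "disabled" acknowledges writes before they reach stable storage - faster,
# but a power or link loss can corrupt VM disks.
zfs get sync tank/esxi-nfs
zfs set sync=standard tank/esxi-nfs   # the safe default
```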
symmetric PC cases welded side by side
why make 2 servers and then weld them together? is that because you're trying to use the freenas box to physically hold some GPUs that will be connected to the hypervisor hardware? seems like adding an extra complication and point of failure for spurious benefit.
but maybe a Pentium or a Celeron would be just enough.

if you choose a CPU based on it being "just enough", the chances of it not being enough tend to be rather good, particularly if you add more services later on.

freenas support for docker is...a bit spotty at best, as freebsd doesn't have native docker hosting, so you have to allocate a full VM to do it.
what is it about freenas that makes you want to move to it instead of using unraid, which looks like it already works and does what you are looking for? it seems to me you could just build a new esx box and move over the VM functions.
 

Slidspitfire
Hi artlessknave,

Thanks for your reply and for your suggestions!

Since the FreeNAS hardware is yet to be defined it is difficult for me to add it to the sig.
The VM host is as follows, even if I cannot see the correlation between its hardware and the choice of the FreeNAS one:
  • CPU: dual Xeon E5 (2623 v3 for the moment)
  • MOBO: Asus Z10PE-d16
  • RAM: 8*4GB DDR4 ECC Crucial
  • GPU: 2 Radeon Vega FE, 2 Nvidia Quadro P4000
  • Additional: Apex 2800, Mellanox ConnectX 3, various PCIe to m.2 adapters

I'll try to give a clearer description of the FreeNAS system requirements.
At the moment my server is used for VDI and for NAS/services. I found it consumes too much energy and heats the room a lot just for the NAS/services part, given that the VMs are needed only at specific times of the day.
I hope to move the storage/NAS/services part to a separate box, in order to lower consumption as well as the noise and heat in the room.
At that point the VM host would simply be booted on demand, instead of being up 24/24.

For this reason I want to decouple the two systems.
For the NAS/services part it is quite easy to configure the FreeNAS host, since the load won't be heavy.
My main concern is the requirements for using one of the FreeNAS pools as an ESXi datastore over NFS/iSCSI via 10 GbE. I think this will affect (and be affected by) the CPU quite a lot, and I hope to figure out the right CPU choice for this workload.

I try to answer your questions:
  1. The Optane drive is meant to be a cache drive for the software repositories. I just want the application files contained, let's say, in the other SSDs to be cached there to reduce the load on the other drives.
  2. The drives will be tiered as listed: laptop backups and data (NAS functions) on the WD Golds, VM backups and Docker containers' persistent storage on the VelociRaptors, and the SSDs for vdisks and software. I think I'll create at least 3 pools, maybe with an additional one using the 960 NVMe.
  3. As said, the union of the two cases is a matter of taste and other reasons, yet to be determined BTW. :)
  4. Nice point about UnRAID. I am thinking about moving to ESXi+FreeNAS in order to use the hardware acceleration technologies that are available only under VMware products (e.g. Teradici PCoIP via the Apex 2800). In addition I would like a more professional storage solution, since I think UnRAID's approach is quite limited in this sense. I also want to use iSCSI, mainly to learn how to use it, and I want several independent storage arrays hosting different kinds of data, which is something UnRAID does not handle so well.
    Given the lacking container support in FreeNAS, I think an UnRAID VM can keep running such services.
Let me know if it is clearer now, and don't hesitate to ask more if needed!

Slid
 

artlessknave
again, your RAM is your cache (ARC); everything on any pool will be cached as much as possible by basic ZFS design. L2ARC cuts into your ARC, so unless you have filled your freenas system with every bit of RAM possible, and can look at the graphs and see that your ARC hit ratio would improve with more cache, adding cache drives is unlikely to help performance, and has been known to reduce it.
basically, if you don't already have a pool that needs L2 cache, you don't need to add cache drives. many people think cache drives make up for having low RAM, which can apply to unraid but does not apply to ZFS: if you don't have enough RAM for your pool and then add cache, which itself needs RAM to work, you can cripple your pool.
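one way to check this before buying an L2ARC device is to look at the ARC counters freebsd exposes (a sketch; these sysctl names come from freebsd's ZFS kstats):

```shell
# Current ARC size and its configured ceiling, in bytes:
sysctl kstat.zfs.misc.arcstats.size
sysctl vfs.zfs.arc_max

# Hits vs misses; only if the ARC sits at its ceiling AND the hit
# ratio is still poor does an L2ARC device start to make sense:
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
```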

one of the reasons for asking about the freenas box is to know how much RAM, and what other specs, you are thinking of giving it, even if you don't have the hardware yet. if you give it 8GB you're likely to have a very poor experience. if you choose hardware with 10TB of RAM and 8 CPUs, you will definitely not need cache drives. without that context it's very hard to make any kind of judgement or recommendation, which is one of the reasons the forum rules require hardware details.

it does make a bit more sense now, but this is clearly a more advanced setup, and you seem to be tying a lot of services together with a lot of complexity.
 

Slidspitfire
Hello again,

Ok, I got it. I think the cache approach of UnRAID and FreeNAS is quite different. In the former the cache is a write cache that simply prevents the disks from being spun up all the time; for FreeNAS it's more of a cache in the true sense. Is that right?

The ability to add a lot of RAM, maybe expanding it in the future, is the reason why I opted for a mATX board with 4 RAM slots. I was planning on 16GB via two 8GB sticks, but that might not be enough after your explanation.

At that point maybe the Optane cache is much more interesting as a datastore cache on the ESXi host, rather than inside the FreeNAS box.
It should be able to handle the most used applications (which are rarely modified, so reliability and change propagation are not crucial) in its 32GB.

Indeed this is an advanced setup, and I think I have to organise it better to get the best out of all the products. Basically I want to do the following:
  • Have a host with a hypervisor capable of running my high-perf VMs. This machine should be booted on demand. This is mainly done.

  • Have a NAS appliance for reliable data storage, backups and the VMs' vdisks. The VM images should be served to the hypervisor over the network. This machine should run 24/24 with the highest reliability possible.

  • Have a container host capable of running my service containers 24/24.
So, the question is: is FreeNAS the right choice for the second bullet? Can it handle even the third one or should I move to another software solution?

Thank you,

Slid
 

artlessknave
cache approach for UnRAID and FreeNAS is quite different.
yes. I don't use unraid, but I do know from other people trying to do the same thing that what it calls cache is different.
For FreeNAS is more of a cache in the real way
not sure what this means.
zfs is copy-on-write (in-place data is safe) and uses RAM for both read and write cache. it will use as much RAM as it can as a read cache for recently used files, while writes are flushed in batches every 5 seconds or so, releasing RAM for OS and application requests on demand.
the closest thing to a disk write cache is a SLOG device that moves the ZIL out of the pool, but this only applies to sync writes that request a guarantee of being written to non-volatile storage, and a SLOG device has specific requirements. if your pool is all SSDs, a SLOG probably won't do anything useful.
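for reference, attaching a SLOG is a one-liner, though the device choice matters far more than the command (a sketch; the pool and device names are hypothetical, and an Optane-class device with power-loss protection is the kind of part that fits here):

```shell
# Mirror the SLOG so losing one device doesn't lose in-flight sync writes:
zpool add tank log mirror /dev/nvd0 /dev/nvd1

# Confirm the log vdev is attached and watch per-vdev traffic:
zpool status tank
zpool iostat -v tank 5
```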
is FreeNAS the right choice for the second bullet?
absolutely; zfs is enterprise storage. it does, however, typically need to be fed enterprise hardware, as recommended by the freenas devs and forum gurus; you can find forum posts of people who didn't read, or failed to follow, the requirements and had Very Bad Days.
Can it handle even the third one
this is harder for me to answer, as I am not familiar with the services you listed. freenas natively supports regular jails, which are bsd's container equivalent, as well as plugins, which are prepackaged jails. you can also run docker in a VM, but I found that far more work than it was worth, when I can just install from the bsd repos into a native jail with no VM overhead and virtually no overhead at all (I never got even one docker container to the point of having storage, much less got it to turn on, and that was after I finally tricked freenas into making the VM work at all).
if your services are available in the freebsd repos/ports, you can run them in a jail as easily as standard software.
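as a sketch of how light that path is (jail and package names here are illustrative; freenas 11.x manages jails with iocage):

```shell
# Create a jail on DHCP (dhcp needs bpf), install a service from pkg,
# enable it at jail boot, and start it:
iocage create -n monitoring -r 11.2-RELEASE dhcp=on bpf=yes boot=on
iocage exec monitoring pkg install -y grafana6
iocage exec monitoring sysrc grafana_enable=YES
iocage exec monitoring service grafana start
```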

perhaps you should spin up a freenas VM and see if you can configure it to do what you want. storage is easy, but specific application service requirements are not.
i assume you mean 24/7.
 

Slidspitfire
Hi again,

Thanks for the constructive discussion!

In fact UnRAID is focused on spinning the disks up as rarely as possible, and the cache basically acts as a (faster?) drive that buffers writes so they can be flushed to the array later.
So that's really a buffer, not a cache.

By "cache in the real way" I mean that a cache basically does two things: it acts as a buffer for writes, and it proactively (or upon first access) loads data to increase read performance. In that sense FreeNAS implements a real caching algorithm and array.

I have read the FreeNAS hardware requirements, but I think I am not really far from enterprise-level hardware.
What do you think is not the right choice with respect to the components I started to list?
I think that despite the naming, the latest i3 CPUs have a bunch of unusual features (e.g. ECC support) that set them apart from older i3s.
I read an interesting article on STH about the i3-8100 vs the Xeon E-21XX line, and they clearly stated that the i3 is a good alternative for low-priced machines without giving up the most important features.

Great digression on containers/jails as well. The advantage of Docker containers is how easy the installation and configuration are.
I think that translating such services, even commercial ones (e.g. UNMS by Ubiquiti), into jails would be an enormous hassle.

About 24/24 vs 24/7, sorry, but in Italy we usually say "24 hours out of 24" to mean the same thing! :)

Cheers,

Slid
 

artlessknave
FreeNAS implements a real caching algorithm and array
technically this is implemented by zfs, freenas really just provides a fancy GUI for ease of management.
What do you think is not the right choice in that respect wrt the components I started to list?
I don't see a problem with the components you did list, other than what I have already pointed out. anything you didn't list can't be commented on, because...you didn't list it. supermicro tends to be favored by freenas people, who REALLY like using stuff that is shown to work reliably; asus is relatively new to the server space, so they don't have as much of a reputation (I have not been happy enough with my own asus consumer boards to want to make them run more than gaming desktops).
a board with ECC is a good indication of being on the right path; you don't seem inclined to buy the cheapest board possible, so critiquing your hardware choice might not be productive, but it is still not possible to make specific recommendations, as seeing the hardware you think you would use defines your budget and performance expectations succinctly.
most of what I try to do is present information, since ultimately it is your decision based on the pros and cons for your implementation goals.

and yes, while I don't think they were mentioned so far, the newer i3's with server-y features can be good if you don't need the performance of the full xeons.
The advantage of such docker containers is how easy the installation and configuration are.
considering I couldn't get docker to work at all, I can't really agree with containers being easy to install or configure...what little I did figure out just seemed needlessly complicated to me, but then I dunno what those apps you listed even are, so it's not like I'm a good authority in that sphere.
 

Slidspitfire
Yay, my fault on attributing the caching algorithm to FreeNAS rather than ZFS...
I think I'm going to read some ZFS docs to better understand the tiering/caching question then.

Regarding the hardware, I almost always use Asus products. I own a Z10PE-d16 for my VM host and it works flawlessly. It is more of a WS motherboard, but still it's a nice hybrid which packs at least some features that server-specific boards lack (e.g. lots of x16 slots without a crazy price tag).
At work I have to deal with Supermicro servers and I am rarely happy with them: their build quality is somewhat disappointing and I am not really convinced about having one in my own machine. In addition I like to try out "new" approaches, so using a motherboard not yet tested here lets me learn a bit more and, eventually, mark the path for other users.

The main concern, or doubt, I have is whether the i3 CPU would be powerful enough to handle iSCSI or NFS on a 10 GbE link. I have little experience with 10 GbE and I just want to make sure the CPU won't be the bottleneck of such a setup.

Regarding containers, it seems BSD even improves the safety of Docker containers by putting them in a jail (on other distros it's simply a chroot), and a Docker package is available for BSD, so I should at least be able to test them in a VM.

Cheers,

Slid
 