Sanity check - AIO Virtualized FreeNAS ESXi build


Sphinxicus

Dabbler
Joined
Nov 3, 2016
Messages
32
Hi All

Looking to get a sanity check on my potential build. I'm looking to virtualise FreeNAS on an ESXi host (6.5 or 6.7*).

FreeNAS usage

  • No more than 3 concurrent users accessing at any one time.
  • To store images/video and allow them to be edited via Adobe Lightroom/Photoshop/Premiere.
  • To store movies/TV shows to be streamed to a Raspberry Pi connected to a TV (via Plex).
  • To archive CCTV footage from a separate physical box at monthly intervals (this data is not critical, but nice to have).
The ESXi host will have the following VMs:
  • FreeNAS
  • Plex (max 2 x 1080p streams, but more likely only a single stream at a time)
  • Owncloud
  • Windows 7

The Windows 7 VM will need some external USB drives and 8 internal HDDs passed through to it, so my plan was to pass through the onboard SATA controllers to that VM and also pass through the rear USB 3.0 ports.
*Caveat here: I have read about issues with ESXi 6.7 not allowing onboard SATA ports to be passed through. Hopefully this will be fixed in 6.7U1; if not, 6.5 seems to work.

I will pass through the LSI M1015 to the FreeNAS VM so that it can manage the 7 drives I will give it directly.

Read/write performance is not a priority here. I am not looking to saturate 10GbE, as I have none of that tech available to me; my network is all 1GbE. I would like to make use of the dual Gigabit NICs, as I can configure EtherChannel on my Cisco switch, but as long as it performs as well as or better than the shocking 15MB/s transfer rate of my current NAS, I'll be happy. Stability and data integrity are what I am aiming for.
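For what it's worth, a rough sketch of the link aggregation I have in mind (port numbers and vSwitch name are hypothetical; as I understand it, an ESXi standard vSwitch needs a static port-channel, i.e. "mode on" rather than LACP, paired with IP-hash load balancing):

Code:
! Cisco switch side: static EtherChannel on the two ports facing the host (hypothetical ports)
interface range GigabitEthernet1/0/1 - 2
 channel-group 1 mode on
# ESXi side: set the vSwitch load-balancing policy to IP hash
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash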

Backups will be performed regularly to the NAS that this build is replacing, which in turn backs up to a series of external 8TB USB drives on a round-robin basis.



Parts to purchase

Motherboard: Supermicro X11SSM-F-O

CPU: E3-1230v6 (with stock heatsink/fan)

RAM (will be increased when funds allow):

1 x 16GB Crucial CT16G4WFD824A (DDR4-2400 ECC UDIMM)
OR
1 x 16GB Kingston KVR24E17D8/16 (DDR4-2400 ECC UDIMM)
Depending on price at the time of purchase

PSU: Corsair RM1000x (1000W, 80 Plus Gold)

Storage:

6 x 3TB WD Reds in Z2 (to give approx. 8.3TiB usable capacity)
1 x 1TB WD Purple (to store CCTV footage - don’t need redundancy for this)
1 x Samsung 970 EVO 250GB M.2 NVMe (for the ESXi datastore, to house the FreeNAS boot disk and other VMs)

Boot disk for ESXi (I've read on these forums that the USB 3.0 drives get a bit hot, and they are the same price, so I may just go for the USB 2.0):

1 x SanDisk Cruzer Fit 16GB USB 2.0
OR
1 x SanDisk Ultra Fit 16GB USB 3.0

M.2 x4 PCIe 3.0 NVMe adapter (to mount the NVMe drive in an x4 PCIe slot)

2 x Mini-SAS reverse breakout cables (which seem to be as rare as hen's teeth in the UK/Ireland :()

1 x UPS with the ability to shut down the ESXi host gracefully (need to research that one)


Parts I already own

1 x APC UPS SC450RMI1U (no serial cable or shutdown software, though)
1 x LSI M1015 (flashed to P20 IT mode)
2 x SFF-8087 to SFF-8087 SAS cables
1 x 4U 24-bay rackmount case with a 6Gb/s backplane and 6 Mini-SAS ports (this fits an ATX PSU and gives me the option to expand storage)
8 x various-sized SATA HDDs, from Seagate IronWolfs to Hitachi He8s (these will not be part of any ZFS pool, as explained above)

Future Purchases if required/when funds allow

Another 16GB of RAM
A SAS expander, like the IBM 46M0997 or Intel RES2CV360, if I need to add any more drives/ZFS pools.

Q: As I'm looking to pass the rear USB 3.0 controller through to the Windows VM, does that mean I am limited to booting ESXi from the USB 2.0 controller, or is the internal USB 3.0 port on the motherboard run from a separate controller and therefore usable for a bootable USB?

I think that covers it all and I don't think I left anything out. Does it pass the "will it FreeNAS" test? Am I missing anything, or heading for a mess?

Thanks!
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
The Windows 7 VM will need some external USB drives and 8 internal HDDs passed through to it, so my plan was to pass through the onboard SATA controllers to that VM and also pass through the rear USB 3.0 ports.

There is no need to pass through the host USB controller; you can add up to 20 host-attached USB devices to a VM. If they are USB 3 devices, just make sure you change the virtual USB controller type to USB 3.

Boot disk for ESXi (I've read on these forums that the USB 3.0 drives get a bit hot, and they are the same price, so I may just go for the USB 2.0)

Booting ESXi off a USB flash drive or an SD card is perfectly acceptable; however, you need to relocate the ESXi scratch space to a permanent storage location (such as the NVMe drive you're going to use for a datastore). https://kb.vmware.com/s/article/1033696
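For example, from the ESXi shell the relocation can be done along these lines (datastore name is hypothetical; a reboot is needed for it to take effect):

Code:
# create a scratch directory on persistent storage
mkdir /vmfs/volumes/nvme-datastore/.locker
# point the host at it (per KB 1033696), then reboot
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/nvme-datastore/.locker

After the reboot, /scratch should point at the new location.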

As I'm looking to pass the rear USB 3.0 controller through to the Windows VM, does that mean I am limited to booting ESXi from the USB 2.0 controller, or is the internal USB 3.0 port on the motherboard run from a separate controller and therefore usable for a bootable USB?

Covered this one in my first reply. ;)
 

Sphinxicus

Dabbler
Joined
Nov 3, 2016
Messages
32
There is no need to pass through the host USB controller; you can add up to 20 host-attached USB devices to a VM. If they are USB 3 devices, just make sure you change the virtual USB controller type to USB 3.

Ah, that's great news and makes it much simpler! Thanks.

Booting ESXi off a USB flash drive or an SD card is perfectly acceptable; however, you need to relocate the ESXi scratch space to a permanent storage location (such as the NVMe drive you're going to use for a datastore). https://kb.vmware.com/s/article/1033696

Good point. I'd read that at install time it would not allow the datastore to be placed on a flash drive, but I'd just assumed the same for the scratch disk. We all know what assuming makes of us :P
 

Sphinxicus

Dabbler
Joined
Nov 3, 2016
Messages
32
OK, so I'm hoping that since no one has said, "Don't do that, it will be a slow hunk of junk!", the above parts list is up to scratch for my stated use case (from the years I have spent reading this forum, I believe the bar is set quite high and everyone has great attention to detail, which is why I have posted here). I am also trying to think a bit down the road.

If I were to add a SAS expander to allow me to have more drives, would the fact that the ZFS pool disks were attached directly to the HBA and are suddenly attached to the expander instead cause issues with the configuration? Would FreeNAS just pick up the drives and recognise the pool, or will I have to import the pool again and then add my new pool or vdev (depending on which I go for)? If it's a case of importing the pool, then it's something I would like to do a test run on before I even think about putting my data onto it.

Thanks
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Boot disk for ESXi (I've read on these forums that the USB 3.0 drives get a bit hot, and they are the same price, so I may just go for the USB 2.0):

Of course, you could boot ESXi off the M.2 drive and totally forgo the USB boot disks.

Not sure I'm following why you want to share 7 drives into the Windows VM...
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Your CPU and RAM are a bit anemic. Only 4 physical cores and 4 VMs? They should all be single-core, except FreeNAS and Windows, which will greatly benefit from multi-core operation. So I figure that's two dual-core VMs and two single-core VMs. It's doable, but it's tight. On the RAM side, FreeNAS REQUIRES 8GB, but 10GB with iSCSI is as low as I would go. That leaves 6GB for Windows, Plex, and Owncloud. You would make better use of the RAM by keeping the latter two as jails in FreeNAS. That would make better use of the CPU cores/cycles too.

EDIT: ESXi will chew up a chunk too; figure almost a gigabyte.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
You should be fine as long as you have low to no expectations of performance.
 

Sphinxicus

Dabbler
Joined
Nov 3, 2016
Messages
32
First of all, thanks for all the feedback, guys. This is exactly the reason I posted before just jumping in and then wondering why it wasn't going so well. I've spent a long time reading and thinking I had got this right, but I'm obviously wide of the mark.

Of course, you could boot ESXi off the M.2 drive and totally forgo the USB boot disks.
Good point. The reason I was going to boot from USB was two-fold:

  1. I have never booted from a drive installed via a PCIe converter card and wasn't sure if the motherboard would play nice with doing so.
  2. From reading this thread, I like the idea of being able to unplug the ESXi USB stick and plug in a FreeNAS boot stick in the event I have a nasty virtualisation issue. I get that I could still override the boot sequence to boot from USB in this event, but if I'm away and the missus needs to recover for me, then "power off, remove USB A and replace with USB B" would be much easier than trying to get her to use the IPMI to reorder the boot sequence.
Not sure I'm following why you want to share 7 drives into the Windows VM...

The 7 drives (6 x WD Red + 1 x WD Purple) will be my ZFS disks, passed through to FreeNAS on my M1015 controller. Without going into too much detail: the data on the 8 disks is specific to an operation that will be performed on the Windows VM and contains Shabal256 hashes. The data can be recreated if required (with a lot of time/CPU usage) and is of no use to anything other than a specific process that will be running on this Windows VM. That is the only purpose of the VM; it will not be used as a "normal" daily-driver machine (no web surfing/media playing). Utilising the full capacity of these drives is the important factor here. I could migrate to a Linux OS if this would be lighter on the cores, but the drives are all NTFS-formatted and I've never played with mounting NTFS drives in any Linux OS, so I don't know how well they will play.
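(From a quick search, it looks like something along these lines would do it on CentOS; the device name and mount point are made up, ntfs-3g comes from the EPEL repository, and I haven't tested it:)

Code:
# install the NTFS userspace driver from EPEL
yum install -y epel-release
yum install -y ntfs-3g
# mount a hypothetical NTFS data disk read/write
mkdir -p /mnt/plots
mount -t ntfs-3g /dev/sdb1 /mnt/plots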

Only 4 physical cores and 4 VMs? They should all be single-core, except FreeNAS and Windows, which will greatly benefit from multi-core operation. So I figure that's two dual-core VMs and two single-core VMs. It's doable, but it's tight.

I guess running Owncloud and Plex each in its own VM is probably wasteful; I could probably combine them into one. I have a concern with plugins: that owners may just decide to stop developing/supporting them, and you are left not being able to upgrade easily, etc.

With regards to RAM, 16GB is just to get me up and running. That will be doubled to 32GB shortly after, when funds allow.

Other CPU options? I'm not too well versed when it comes to Xeon CPUs and their naming conventions. It looks like the E3 series all have a max of 4 cores. I took a look at the E5 series, but I'm struggling to come up with a CPU/motherboard combo for a similar price (options are very limited in Ireland) that doesn't have woeful clock speeds, e.g. I'm seeing the E5-2603 v3, but I'm not sure what a clock speed of 1.6GHz would do to SMB transfers :(.

I have seen some builds with Xeon D "system-on-a-chip" boards, but they seem to be limited to mini-ITX with only one PCIe slot (I need two, unless the board provides an M.2 connector also).

Any suggestions? E5 or SoC? The ideal budget for a CPU/mobo combo would be circa £500, and it would need to include the AVX2 instruction set.

Thanks again
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
contains Shabal256 hashes. The data can be recreated if required (with a lot of time/CPU usage)
Taste the rainbow! ;)
I could migrate to a Linux OS if this would be lighter on the cores, but the drives are all NTFS-formatted and I've never played with mounting NTFS drives in any Linux OS, so I don't know how well they will play.
It would be worth testing, as Windows is a pig, more so on RAM than CPU, but everything helps. As a perk (no, not the Dell PERC), if you're running the same version of Linux multiple times on the same ESXi host, you can disable memory salting and utilise Transparent Page Sharing to save some RAM.
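As a sketch, the salting knob is a host advanced setting; I believe the option path is the one below, but verify it on your build before relying on it:

Code:
# 0 = no salting, which allows transparent page sharing between VMs
esxcli system settings advanced set -o /Mem/ShareForceSalting -i 0
# confirm the new value
esxcli system settings advanced list -o /Mem/ShareForceSalting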
I have a concern with plugins: that owners may just decide to stop developing/supporting them, and you are left not being able to upgrade easily, etc.
Plugins on FreeNAS have always been trash. If you run your own jails, it's about the same as running them on Linux, with a few jail caveats about mapping storage and slightly quirky networking. Even then, most of that SHOULD be ironed out by 11.2-U1 (about 2 months away).
Other CPU options? I'm not too well versed when it comes to Xeon CPUs and their naming conventions. It looks like the E3 series all have a max of 4 cores. I took a look at the E5 series, but I'm struggling to come up with a CPU/motherboard combo for a similar price (options are very limited in Ireland) that doesn't have woeful clock speeds, e.g. I'm seeing the E5-2603 v3, but I'm not sure what a clock speed of 1.6GHz would do to SMB transfers :(.
Yeah, you're going to be looking at the E5 series. I don't know the line-up that well, or the motherboards, as I tend to run older hardware. Perhaps @Chris Moore has something to add; he makes a lot of hardware suggestions around here.
I have seen some builds with Xeon D "system-on-a-chip" boards, but they seem to be limited to mini-ITX with only one PCIe slot (I need two, unless the board provides an M.2 connector also).
I would kill for a mini-ATX board with two PCIe x8 slots and a Xeon D!
 

Sphinxicus

Dabbler
Joined
Nov 3, 2016
Messages
32
Only 4 physical cores and 4 VMs? They should all be single-core, except FreeNAS and Windows, which will greatly benefit from multi-core operation.

Sorry @kdragon75, just going back over this point again. I'm no VMware expert, so I could be wrong, but I thought the whole point of virtualisation was to abstract the physical layer and hence be able to "share" CPU cores/cycles?

From what I have read on these forums, most people's CPUs are not pegged above the 70-80% range all the time (granted, scrubs will increase usage), so while it makes sense to give the FreeNAS VM 2 vCPUs in its config, it doesn't seem to make sense to reserve the equivalent clock cycles of 100% of those cores for that VM alone, does it?

Now this is where my lack of VMware understanding shows... Since those vCPUs would sit at very low utilisation most of the time (this is a home NAS, which I probably should have mentioned, and serving files should not require lots of horsepower), those unused CPU cycles can still be used by other VMs, right? So, let's say I reserve 3.5GHz (the equivalent of 50% of 2 vCPUs' clock cycles on the E3-1230v6); that would guarantee the FreeNAS VM those clock cycles (i.e. when performing a scrub) and therefore not let it get bogged down, but anything beyond that would have to queue.

So looking at my VM usage:
FreeNAS VM - serving files to 3 users is a very light workload.
Owncloud VM - I currently run this on ESXi 5.5 on an HP MicroServer Gen7 [AMD Turion II 2.2GHz + 16GB RAM] along with 2 other Linux VMs and a vSphere appliance. The interface does not suffer any slowdowns, but the file transfer speed is dreadful (probably because the datastore is on a spinning disk and the current NAS is running on a SPARC CPU with 128MB RAM!). I would consider this VM to have a light workload too.
Windows VM - this VM will need more CPU resources, as it runs a task approximately every 4 minutes (one that makes use of that AVX2 instruction set) which will probably peg a single core. The task lasts approximately 40 seconds. I would reserve 3.5GHz here, as I want this task to complete ASAP without contention.
Plex VM - this could probably go on the Owncloud VM, since when we're watching movies we aren't uploading files, etc. ;) I don't know how this will perform on this CPU but, going from the above, I feel like there are plenty of clock cycles available to provide for my needs.

Granted, if all VM's decide, "We want all the CPU cycles now" then there is going to be contention and processes will have to queue for their CPU time. I guess here i could setup a resource priority for Freenas + The Windows VM and a CPU core affinity to keep Freenas and the Windws VM away from using the same cores. (Did i make that up? Is it even a thing?) :oops:


Now, obviously, all of the above doesn't even hold up to 'back of a napkin' theory and there are no hard numbers to back it up, but in my head it makes sense. Honest, guv! :confused:

if you're running the same version of Linux multiple times on the same ESXi host, you can disable memory salting and utilise Transparent Page Sharing to save some RAM.

That is simply amazing. Thanks for that link! Yeah, all my Linux VMs are CentOS (it's what I'm comfy with, because I use it at work).

Yeah, you're going to be looking at the E5 series. I don't know the line-up that well, or the motherboards, as I tend to run older hardware. Perhaps @Chris Moore has something to add; he makes a lot of hardware suggestions around here.

Yeah, I'm running my head in circles now. Going on my ramblings above, it seems like it should fly, but I'm not exactly best placed to make that call vs people like yourself.

I'll see if @Chris Moore chimes in with any recommendations. Unfortunately, E5-series hardware isn't the easiest to get hold of here in Ireland; most of my shopping list has therefore been restricted to the likes of Amazon.co.uk, since they will ship here. :(

I would kill for a mini-ATX board with two PCIe x8 slots and a Xeon D!

Right with you there! I'm not asking for the world... just the moon on a stick
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419

JohnK

Patron
Joined
Nov 7, 2013
Messages
256
OK, so I'm hoping that since no one has said, "Don't do that, it will be a slow hunk of junk!", the above parts list is up to scratch for my stated use case (from the years I have spent reading this forum, I believe the bar is set quite high and everyone has great attention to detail, which is why I have posted here). I am also trying to think a bit down the road.

If I were to add a SAS expander to allow me to have more drives, would the fact that the ZFS pool disks were attached directly to the HBA and are suddenly attached to the expander instead cause issues with the configuration? Would FreeNAS just pick up the drives and recognise the pool, or will I have to import the pool again and then add my new pool or vdev (depending on which I go for)? If it's a case of importing the pool, then it's something I would like to do a test run on before I even think about putting my data onto it.
Thanks

- I had 7 drives directly attached to the HBA in the past and now have 14 drives on a SAS expander. It is possible. What you should focus on is how you are going to expand. For me it was easy, as I also have a backup server.
- Personally, I no longer buy 3TB WD Red drives. From what I have read, the 4TB is more reliable, and I'm replacing my failing 3TB drives with those.
- Lastly, I'm running 2 Win 7 VMs, Plex, NextCloud and FreeNAS on basically the same hardware you are looking at, with my ESXi install on a SATA DOM. I would recommend 32GB of RAM, though 16 should be possible.
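A rough sketch of what a test run of that move could look like from the shell (the pool name "tank" is hypothetical; FreeNAS would normally do the export/import through the GUI). ZFS identifies pool members by the labels on the disks themselves, not by the controller port they hang off:

Code:
zpool export tank   # cleanly detach the pool
# power down, recable the drives through the expander, boot
zpool import        # scans all disks and lists any pools found by label
zpool import tank   # re-import; device names may change, the pool doesn't care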
 

Sphinxicus

Dabbler
Joined
Nov 3, 2016
Messages
32
- I had 7 drives directly attached to the HBA in the past and now have 14 drives on a SAS expander. It is possible. What you should focus on is how you are going to expand. For me it was easy, as I also have a backup server.

Noted, and thanks. I would probably be adding another pool rather than another vdev to the same pool, since I'm not interested in insane speeds, and the worry factor of all my eggs being in one basket disturbs me. I have backups too, but I would say copying it all across the network again would be a PITA. If it's possible to just plug the drives back in, but into the expander, and have them be seen again, then that's a thumbs up in my book.

- Personally, I no longer buy 3TB WD Red drives. From what I have read, the 4TB is more reliable, and I'm replacing my failing 3TB drives with those.

I didn't realise there were issues with these. Moving up to 4TB would push my purchase back for sure. Thanks for sharing your experience with them.

- Lastly, I'm running 2 Win 7 VMs, Plex, NextCloud and FreeNAS on basically the same hardware you are looking at, with my ESXi install on a SATA DOM. I would recommend 32GB of RAM, though 16 should be possible.

This is joy to my ears. You are running more than I would be, and yours is working. Maybe my theory/understanding is not so far off. *crosses fingers*
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I thought the whole point of virtualisation was to abstract the physical layer and hence be able to "share" CPU cores/cycles?
This is true, but it works best at scale, where we can take advantage of the law of averages. Also, due to how VMs get scheduled on the CPU, a VM with x vCPUs REQUIRES x physical cores even when idle. So if you think that through: if you have 4 cores and two 4-core VMs, you can only execute one VM per clock. If you have 4 cores and 4 VMs (two 2-core and two 1-core), you can run both 2-core VMs per clock, or one 2-core VM and both 1-core VMs. If you have a 3-core VM and the rest are 2-core, then while the 3-core VM is running nothing else can run, and the remaining core is "wasted" during that clock. Hyper-Threading helps, especially if you have a 2-core VM running a primarily single-threaded application, as the scheduler can place the VM's idle cores on a virtual (hyperthreaded) core.
Now this is where my lack of VMware understanding shows... Since those vCPUs would sit at very low utilisation most of the time (this is a home NAS, which I probably should have mentioned, and serving files should not require lots of horsepower), those unused CPU cycles can still be used by other VMs, right? So, let's say I reserve 3.5GHz (the equivalent of 50% of 2 vCPUs' clock cycles on the E3-1230v6); that would guarantee the FreeNAS VM those clock cycles (i.e. when performing a scrub) and therefore not let it get bogged down, but anything beyond that would have to queue.
Now you're getting deeper into resource management. Thinking in terms of clock speeds applies better at scale: lots of cores and lots of VMs. With a low core count, be extremely careful with reservations. CPU reservations only apply in times of load or contention, but they can cause all other VMs to be starved during contention. Setting shares is preferred to reservations. Memory reservations are just that: reserved, and unavailable to other VMs.
Owncloud VM - I currently run this on ESXi 5.5 on an HP MicroServer Gen7 [AMD Turion II 2.2GHz + 16GB RAM] along with 2 other Linux VMs and a vSphere appliance.
I don't recall for VCSA 6.0-6.5, but 6.7 requires a minimum of 16GB; any less and it may not be stable.
I would reserve 3.5GHz
Again, reservations are not ideal. Shares can make a VM run at a higher priority, but if something else needs CPU time, it still gets some.
I feel like there are plenty of clock cycles available to provide for my needs.
Keep in mind you need to have cores to schedule on, not just clocks.
Granted, if all VMs decide, "We want all the CPU cycles now!", then there is going to be contention and processes will have to queue for their CPU time.
Cores AND clocks. It's not one machine sharing all the cores; it's several machines sharing sets of cores and clocks.
CPU core affinity to keep FreeNAS and the Windows VM away from using the same cores. (Did I make that up? Is it even a thing?) :oops:
It's a thing, but unless you know you need it and exactly why, odds are you don't need or want it. Think about how the VMs get scheduled: if you start locking things to cores, it makes it MUCH harder for the scheduler to do its job.

Again, I'm not saying it won't work; it just won't be fast or do much more than you're asking without being in a state of contention.
 

JohnK

Patron
Joined
Nov 7, 2013
Messages
256
Noted, and thanks. I would probably be adding another pool rather than another vdev to the same pool, since I'm not interested in insane speeds, and the worry factor of all my eggs being in one basket disturbs me. I have backups too, but I would say copying it all across the network again would be a PITA.

Though I use rsync for backups between my two servers across a dedicated cable, I use zfs send/receive to move bulk data. I basically plug my backup server's drives, which are attached to a SAS expander, into my main server. That way I can move 10TB in no time!
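A minimal sketch of that kind of bulk move (pool and dataset names are hypothetical):

Code:
# take a recursive snapshot of the source pool
zfs snapshot -r tank@migrate
# replicate everything under it to the locally attached backup pool
zfs send -R tank@migrate | zfs receive -F backup/tank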
 

Sphinxicus

Dabbler
Joined
Nov 3, 2016
Messages
32
Again, I'm not saying it won't work; it just won't be fast or do much more than you're asking without being in a state of contention.

Thanks for the detailed response. It seems insane that a full core can be tied up by a VM when there are clock cycles going spare, but who am I to throw stones. The E3-1230v6 does have Hyper-Threading, but I'm not sure how much this would help if the physical core gets tied up. I told you I was showing my lack of VMware understanding :p

I guess moving up to a higher core count (see: the E5 series) is the safe (but more expensive and harder to acquire, even from that famous auction site) bet, and the current config gets me across the line, but not so gracefully. Lots more thinking to do.

If anyone has any E5-xxxx CPU/motherboard combos that fit the bill (6+ cores, AVX or AVX2, circa £500), I'd be very grateful.


Thanks
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Just did some digging and found the following:
http://www.supermicro.com/products/motherboard/Xeon/C600/X9SRL-F.cfm
https://ark.intel.com/products/75268/Intel-Xeon-Processor-E5-2643-v2-25M-Cache-3-50-GHz-
You would also be able to use DDR3 registered memory, which is dirt cheap! It's all a bit older, but it should provide a solid system for cheap if you can find the parts.
JohnK

Patron
Joined
Nov 7, 2013
Messages
256
This is joy to my ears. You are running more than I would be, and yours is working. Maybe my theory/understanding is not so far off. *crosses fingers*

I think a lot will depend on how resource-intensive your Win 7 VM is.

I use mine mostly for D-Link Central WiFiManager, which organises the connections to my multiple access points. NextCloud I use for moving photos from our iPhones to storage. (Love this feature!)

When using Plex for transcoding, I'm using about 12% CPU capacity!
 

Sphinxicus

Dabbler
Joined
Nov 3, 2016
Messages
32
I think a lot will depend on how resource-intensive your Win 7 VM is.

I use mine mostly for D-Link Central WiFiManager, which organises the connections to my multiple access points. NextCloud I use for moving photos from our iPhones to storage. (Love this feature!)

When using Plex for transcoding, I'm using about 12% CPU capacity!

These real-world usage details are really great. The Win 7 VM will be very CPU-intensive, but only for ~40 seconds approximately every 4 minutes. I think I may look into migrating this to CentOS; I've just been too lazy to look into reliably mounting an NTFS disk on it.
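(If I do, I'm assuming a hypothetical /etc/fstab entry along these lines would make the ntfs-3g mount survive reboots; the label and mount point are made up:)

Code:
# /etc/fstab - mount the NTFS data disk at boot via ntfs-3g
LABEL=plots1  /mnt/plots1  ntfs-3g  defaults,noatime  0 0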


Just did some digging and found the following:
http://www.supermicro.com/products/motherboard/Xeon/C600/X9SRL-F.cfm
https://ark.intel.com/products/75268/Intel-Xeon-Processor-E5-2643-v2-25M-Cache-3-50-GHz-
You would also be able to use DDR3 registered memory, which is dirt cheap! It's all a bit older, but it should provide a solid system for cheap if you can find the parts.

I really appreciate you taking the time to look! Those specs look great, and I love the amount of expansion on the board. I've found a refurbished CPU already, but the motherboard seems to be a bit trickier. Then there is the cost of electricity to take into account: those CPUs definitely draw more power, and unfortunately electricity is stupidly expensive here.

Time to get the calculator out and do some sums! Thanks again, all.
 