Virtualization Experts - Please Enter


Steve_TST

Cadet
Joined
Aug 3, 2016
Messages
2
Hello. This is going to seem like a crazy first post, but if you are a FreeNAS virtualization guru, please keep reading.

My name is Steven and I'm an exec at TST Industries. We are a small startup business in the sportbike industry. I'm not an IT expert - I only know enough to dabble and when the time comes, I research/ask for help until I can pull something together. The forums have been a staple resource for our setup. I appreciate the work the members put into the community.

TL;DR PURPOSE OF THIS POST: If possible and (adequately) safe, I want a virtualization expert to virtualize our current FreeNAS deployment in a production environment and to provide 1-on-1 support for future maintenance. If someone wants to do this pro bono, I'm all ears, but I am prepared to compensate accordingly. The reasoning for virtualizing is to better utilize the hardware capabilities of our server investment. We are on a small-business budget: we do not resort to cheap band-aid solutions, but we want to maximize our dollar. I recognize I have neither the skills nor the time to pull this off properly, if it's even safe to do with our hardware in the first place.

IF YOU'RE INTERESTED IN THIS OPPORTUNITY & THINK WE CAN DO THIS ADEQUATELY, DETAILS:
I have deployed a successful bare-metal FreeNAS file server for our business that has operated well for the past 10 months (9.10-STABLE). We now need to deploy virtual machines for internal application hosting (Linux VMs for web apps, a Windows 10 VM for SQL Server and app hosting, etc.). Rather than purchase an additional host, I'm hoping we can better utilize our FreeNAS server hardware.

The machine setup:
ZFS pool config and storage requirements:
  • Our ZFS pool configuration has all (10) drives in (1) vdev of RAIDZ2. As we need more pool space, we will purchase (10) more 6TB drives and add a second vdev to the pool. Our chassis handles a total of (30) 3.5" drives, so we will have a maximum of (3) vdevs in the pool long term before a new machine is needed.
  • Our data is split into 2 categories - 1) a collection of critical, normal-sized files. 2) a collection of large media files from our video/photo marketing team.
  • 5% of the data is in the critical category. This data is backed up constantly to the cloud. There is no database or other high-frequency I/O data set. Most of the critical files are employee docs, product CAD files, purchase records, etc.
  • 95% of the data is media files. 4K high bitrate video footage, RAW camera files, etc. Recent projects are important. Older projects are convenient to have stored should we need to grab a random clip for a current project. Otherwise, it is not mission critical. Currently, this media data is not backed up to the cloud (our internet upload would be tapped out 20 hours a day to handle this...).
  • Our pool is currently 44TB and is 19% used at ~10TB (growing rapidly these days).
Virtualizing Idea:
  • ESXi hypervisor
  • Pass through the R750 HBA to FreeNAS (a verification sketch follows this list)
  • Give FreeNAS 48GB+ of RAM and sufficient vCPUs
  • Purchase and install reliable SSDs for ESXi and the VMs. These would be connected to the SATA ports on the Supermicro mobo. Realistically, we only need ~500GB of storage for ESXi and the VMs in a RAID 1 setup + backup.
  • If possible, it seems easiest to keep the storage for the hypervisor and VMs completely separate from FreeNAS and its pool.
  • The VMs are Linux and Windows based, hosting small databases for internal operations.
  • The mobo has a 1Gb onboard NIC separate from the X540-T2 that could be used, if needed.
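For illustration, here's a minimal sketch of how the passthrough part could be verified once ESXi is on the box, using Python with pyVmomi (VMware's vSphere SDK). The host address and credentials are placeholders, and this is an approach I'm assuming from reading, not something we've run:

Code:
# Minimal sketch: list the PCI devices ESXi considers passthrough-capable.
# Host address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience; verify certs in production
si = SmartConnect(host="esxi.local", user="root", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

for dev in host.config.pciPassthruInfo:
    if dev.passthruCapable:
        state = "enabled" if dev.passthruEnabled else "disabled"
        print(dev.id, state)  # the HBA should show up here before FreeNAS gets it

Disconnect(si)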
Additional Notes:
  • Network config: 1Gb to workstations and the general office. An 8-port 10Gb Netgear switch connects to the FreeNAS server. Cat6 cable throughout the office. Some of our media editors may eventually receive 10Gb NICs in their workstations; otherwise, 1Gb connections are sufficient for most people.
  • I can attempt to answer any other questions about our setup.
Once again, my preference is for someone to walk me through the steps to accomplish this, remotely (unless you happen to be in the central Florida area) and privately. Once ready, we would get on a phone call or Skype call, provide SSH, etc., so that the expert directs these steps; it just so happens that I hit the keyboard. Furthermore, it'd be great if this person could be on a retainer to help in the future, should we need the support. If this is successful, I'd be happy to document the work on the forums so that future readers can learn from it.

If you're interested, please private message me or provide a contact method and we can take it from there.

P.S. If this is inappropriate for the forums, please feel free to remove the thread immediately. I was simply thinking this could be a great place to find, and recognize, some of the members here that put a lot of work into the forums on their own time/dime.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Welcome to the forums.
Thank you for an excellently structured post.

I'll lay out two succinct recipes for your requests.

Getting FreeNAS virtualized:
Lock the FreeNAS VM's memory (reserve all of it). Pass through the HBA. You've got that part.

1. Ez-mode: host the VMs from an ESXi datastore directly.
2x SSD in a hardware RAID mirror on the motherboard SATA controller.

2. Complicated:
Let FreeNAS share a datastore back to the ESXi host.
Get: a SLOG device for the VM pool (best: NVMe; meh: SATA with full power-loss protection and decent DWPD).
Get a couple of drives to run mirrors for a second pool dedicated to VMs. Since you need quite a lot of space, I'd like to see 2x vdevs (4x 3TB HDDs would probably do the trick, or a couple of SSDs). The key is to maintain boatloads of free space: consider 50% = filled (see the sketch below). On top of this, a hot spare would be great, plus a cold spare on the shelf (burned in and tested according to forum standards). This pool could be backed up and snapshotted to the larger pool, leveraging FreeNAS features.
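To put numbers on the free-space rule, a quick sketch (drive counts match the bullet above; the 50% figure is the usual forum comfort margin for VM pools):

Code:
# Usable space for the proposed VM pool under the 50%-full guideline
# (a sketch; drive counts match the suggestion above).
n_mirror_vdevs = 2      # 4x 3TB drives as two 2-way mirrors
drive_tb = 3            # TB per drive
fill_ceiling = 0.50     # treat 50% full as "full" for VM workloads

raw_usable_tb = n_mirror_vdevs * drive_tb  # mirrors yield one drive's worth per vdev
print(f"Raw usable: {raw_usable_tb}TB")                            # 6TB
print(f"Practical ceiling: {raw_usable_tb * fill_ceiling:.0f}TB")  # 3TB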

When running the latter configuration, there is a BOATLOAD of details to keep in mind before the system is reliable and survives a reboot.
VMs must start in a particular order, commands must run, and shutdowns must happen in sequence; interactions with the UPS and host system need to be configured to allow for more or less graceful shutdowns, so that data isn't compromised by abruptly shutting off VMs. Fortunately, there are some scripts to help out in the Resources section. Nonetheless, this type of setup requires a bit of testing and careful planning to pan out right.
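To illustrate the sequencing part, here is a minimal sketch using pyVmomi (VMware's Python SDK); the host, credentials and VM names are placeholders, and the Resources scripts are more complete. The idea: guests living on FreeNAS-backed storage go down first, FreeNAS last (and the reverse at startup):

Code:
# Minimal sketch of ordered, graceful shutdown via pyVmomi.
# Host, credentials and VM names below are placeholders.
import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

SHUTDOWN_ORDER = ["returns-app", "win10-sql", "freenas"]  # FreeNAS last: it backs the storage

ctx = ssl._create_unverified_context()  # lab convenience; verify certs in production
si = SmartConnect(host="esxi.local", user="root", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vms = {vm.name: vm for vm in view.view}

for name in SHUTDOWN_ORDER:
    vm = vms.get(name)
    if vm is None or vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOn:
        continue
    vm.ShutdownGuest()  # graceful shutdown; requires VMware Tools in the guest
    while vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
        time.sleep(5)   # poll until the guest has actually powered off

Disconnect(si)

ESXi's built-in autostart manager can handle simple start/stop orderings too; scripting matters when a VM must wait for the FreeNAS-backed datastore to actually be available.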

The Ez-mode version features:
- Next to no visibility into drive health (SMART is hidden behind the RAID controller)
- Less smooth to expand, in terms of both performance and space
+ Easy to set up, and cheapest

The Complicated version:
+ Leverages all FreeNAS capabilities
- More expensive
- More complex to set up

Final words:
These recommendations are not exhaustively detailed, but they give the grand scheme of things, including some specific hardware pointers.
I might add that the complicated option is not an experimental, highly philosophical build, but a setup that a few active contributors on the forum (me included) actually run rather successfully.

Cheers
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
2x SSD in a hardware RAID mirror on the motherboard SATA controller.
Is that possible while running ESXi? I thought the motherboard RAID controller was not a true RAID controller and requires the OS to support it properly, making it more of a software RAID. Maybe ESXi does, but I couldn't get that to work on my motherboard; I had to get a true RAID card (which was free thanks to @jgreco) and mirror my boot drives. This worked like a champ.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
P.S. If this is inappropriate for the forums, please feel free to remove the thread immediately. I was simply thinking this could be a great place to find, and recognize, some of the members here that put a lot of work into the forums on their own time/dime.
It is not inappropriate; however, be careful asking someone to remotely manipulate your system. You have never met them and have no idea who they are. They could be hacking into your employee or customer database and stealing identities. I just found out I was convicted of a felony in MN back in 2000. Funny, I've never been there. I've had my identity stolen once before, but this old record just popped up. So use caution, my friend. And nope, I'm not up for the task, but I'm happy to provide any help I can through the forum.

You not being an IT kind of person raises the question of why you are doing this type of work. I'm not trying to put you down; it's just that setting up FreeNAS is fairly easy, but configuring a bunch of VMs to work properly can be a nightmare.

I have a question: for the VMs you want to run other than FreeNAS, how much RAM and CPU are you going to give them, and how responsive do they need to be? The answer would tell you whether your hardware is capable of providing proper speed to your VMs. FreeNAS itself will be fine; it's the other stuff that may suffer.

I agree with @Dice's suggestions. And as a business server that contains critical data, you need it to work 100% of the time and work well.

I would build the system similar to my ESXi machine but add in a pair of SSDs for the SLOG for FreeNAS, add a true RAID card to boot the server from and attach a pair of 2TB hard drives mirrored. You would boot from this device and have true failover should one of the drives fail. All your VMs would also be stored on this mirror. If 2TB is not enough then maybe a pair of 3TB drives. Remember, this is just for the VMs not data storage, that is what your FreeNAS is for. Things need to be sized based on your actual needs and then add in some extra capacity for future expansion (I like to double it). But you are not building a new system here and you do have good expansion abilities.

Well, let me add one more thing to think about... it's good not to put all your eggs in one basket, meaning that it may be best to run a second server with ESXi on it and all your VMs. Sure, it's cheaper to add another CPU and RAM to the board you currently have, but it's just something to consider.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Is that possible while running ESXi? I thought the motherboard RAID controller was not a true RAID controller and requires the OS to support it properly, making it more of a software RAID. Maybe ESXi does, but I couldn't get that to work on my motherboard; I had to get a true RAID card (which was free thanks to @jgreco) and mirror my boot drives. This worked like a champ.
Good catch. Fortunately, it is easily fixed by grabbing another controller.

but add in a pair of SSDs for the SLOG for FreeNAS,
This looks like a typo. If not, please elaborate on what brings the need for doubling SLOGs, even if the OP were to choose an NVMe.
I will pay my respects and attempt to make sense of the suggestion: the backend of running datastores and VMs on the same box would bump the network throughput enough to motivate doubled SLOGs to keep up. If that is the case, then the underlying pool must be significantly faster than a set of mirrored spinners for the SLOG to empty without tanking the pool. The priority should therefore be to bump the performance of the underlying pool rather than doubling up on SLOG performance.

All your VMs would also be stored on this mirror. If 2TB is not enough then maybe a pair of 3TB drives. Remember, this is just for the VMs not data storage, that is what your FreeNAS is for.
I concur, and I'd like to add for the OP: if performance is lacking on the VMs, there are at least three ways to alleviate the problem. One is adding more spindles. Second, adding RAM to FreeNAS. Third, adding an L2ARC attached to the VM pool.
In this vein, I'd like to propose to the OP, along with the other recommended build items, to address RAM. 64GB for the entire system may seem like a lot from outside the ZFS bubble; once inside, it is oh so quickly consumed. One of the better reasons to bump RAM at this point is to let you successfully enable a larger L2ARC for the VM pool. That would grant far more of a performance increase than adding additional HDD vdevs (obviously the free-space requirement remains).
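A rough way to sanity-check L2ARC against RAM (a sketch; the ~5:1 L2ARC-to-ARC ratio is a commonly cited forum rule of thumb, not a hard limit, since L2ARC headers themselves live in ARC):

Code:
# Sanity-check L2ARC size against RAM (a sketch; 48GB is the FreeNAS
# allocation floated earlier in the thread).
freenas_ram_gb = 48
arc_gb = freenas_ram_gb - 8    # rough ARC estimate after OS overhead
max_l2arc_gb = 5 * arc_gb      # past this, L2ARC headers start starving the ARC

print(f"ARC ~{arc_gb}GB -> keep total L2ARC under ~{max_l2arc_gb}GB")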

Well, let me add one more thing to think about... it's good not to put all your eggs in one basket, meaning that it may be best to run a second server with ESXi on it and all your VMs. Sure, it's cheaper to add another CPU and RAM to the board you currently have, but it's just something to consider.
This struck me too.
Although the idea of making the best use of <el dollares> is honorable, it is not obvious how investment vs. performance pans out beyond certain thresholds.

If you need to invest in a 2nd CPU and more RAM, a RAID card, and 2-4 quality SSDs for datastores ($250 + 2x$350 + $150 = $1,100 very roughly, with 1TB drives), you're really not that far off from getting another box.
Especially compared to a 2nd-hand server from eBay, which can probably be purchased for a lesser total at far higher value.
I'm having good fun with this thread - I'm already out scouting ebay ;)

This box gives you all the RAM the VMs need and bumps the CPU power, and it still costs less than decent add-ons to your current box (well, yes, it doesn't include drives). It sure gives some perspective.
http://www.ebay.com/itm/1U-Supermic...970582?hash=item23836f6356:g:HiEAAOSww5NZA9X9
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I thought the motherboard RAID controller was not a true RAID controller and requires the OS to support it properly, making it more of a software RAID.
Yup. AFAIK, it only works under Windows.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
This looks like a typo. If not, please elaborate on what brings the need for doubling SLOGs, even if the OP were to choose an NVMe.
I will pay my respects and attempt to make sense of the suggestion: the backend of running datastores and VMs on the same box would bump the network throughput enough to motivate doubled SLOGs to keep up. If that is the case, then the underlying pool must be significantly faster than a set of mirrored spinners for the SLOG to empty without tanking the pool. The priority should therefore be to bump the performance of the underlying pool rather than doubling up on SLOG performance.
Maybe I lost the bubble on this (a submariner term), but the SLOG caches writes to the vdev. If there is a failure of the SLOG at the right time it's possible to lose data. I'm not saying it's going to happen, however in a business machine I'd think you would want to cover this type of failure. Of course, you could start saying that we would need a completely mirrored machine to be completely covered. I'm sure the odds of a quality SLOG SSD failing are slim, but I'm sure it does happen. Am I overthinking this?

I concur, and I'd like to add for the OP: if performance is lacking on the VMs, there are at least three ways to alleviate the problem. One is adding more spindles. Second, adding RAM to FreeNAS. Third, adding an L2ARC attached to the VM pool.
In this vein, I'd like to propose to the OP, along with the other recommended build items, to address RAM. 64GB for the entire system may seem like a lot from outside the ZFS bubble; once inside, it is oh so quickly consumed. One of the better reasons to bump RAM at this point is to let you successfully enable a larger L2ARC for the VM pool. That would grant far more of a performance increase than adding additional HDD vdevs (obviously the free-space requirement remains).
Agreed.

Also nice find on the server.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
If there is a failure of the SLOG at the right time it's possible to lose data.
Here is my understanding of how SLOGs interact. You have probably read it all before, yet for the purpose of the discussion I'll lay out some text.
Let's disregard any sort of performance-oriented argument at this point. The SLOG comes into play for data-loss protection only when there is an unforeseen shutdown of FreeNAS, the likes of a power outage or an abrupt VM shutdown.
The reason power-loss protection becomes important as a feature of the SLOG SSD is just that it... won't do any good in terms of keeping data safe if its contents are not written to disk prior to shutoff. Hence the capacitors. That's what we need it for.
So the only motivation for doubling SLOGs would be the scenario where the SLOG fails in a way that defeats its PLP and end-to-end protection. It is designed to handle a system crash or power outage; what's left is... sort of the effect I imagine a sledgehammer would have :p

Am I overthinking this?
I think you are, but there is some merit in exploring the avenues of consequence your suggestion holds.
The incentive to get a second SLOG would be in scenarios where no single SSD is capable of handling the data throughput, typically to VMs, typically via LAN. Hence, the typical calculation of SLOG size is based on network interface speed. Now, network speed is sort of unlimited when running an all-in-one ESXi box with pure virtual adapters on virtual switches; copper speed limitations don't apply. Here the sizing of the SLOG is a bit more difficult to get right.
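For readers who want the arithmetic, here is the usual back-of-the-envelope sizing as a sketch. The 5-second transaction-group window and the 2x in-flight factor are common forum guidance rather than hard limits, and on an all-in-one box you would substitute whatever rate the pool can actually absorb for the link speed:

Code:
# Rule-of-thumb SLOG sizing (a sketch; constants are common forum
# guidance and vary with ZFS tuning).
link_gbps = 10       # worst-case sync-write ingress, e.g. one 10GbE port
txg_seconds = 5      # roughly how long a transaction group stays open
in_flight_txgs = 2   # allow for two groups' worth of dirty data

slog_gb = (link_gbps / 8) * txg_seconds * in_flight_txgs
print(f"~{slog_gb:.1f}GB of SLOG is plenty for {link_gbps}GbE")  # ~12.5GB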
 
Last edited:

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Damn, you make it sound so good. You had me at "Here".
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
I did some editing above to remove some oversimplifications and some unnecessary ice-skating adventures :p
Here is a gem on the topic:
https://forums.freenas.org/index.php?threads/finding-suitable-zil-and-l2arc-drives.13561/#post-64274

edit:
Which also includes:
https://forums.freenas.org/index.php?threads/finding-suitable-zil-and-l2arc-drives.13561/#post-63985

edit2:
For any incoming reader, you should dig through the top part of this:
https://forums.freenas.org/index.php?threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/

edit 3:
Did additional cleaning in the post above.
 
Last edited:

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
While one can virtualize FreeNAS, *I* wouldn't go that route for your use case. I would set up another box for ESXi and point the ESXi server at your FreeNAS box.

iXsystems does not provide support for FreeNAS, but they do have a "network of 3rd party consultants" that provides professional support for FreeNAS.

I can't link to the page from my phone, but if you do a Google search for "FreeNAS support iXsystems" I am sure you can find the form.
 

Steve_TST

Cadet
Joined
Aug 3, 2016
Messages
2
First of all, I'd like to thank everyone for their input. I'm now going to take the time to address a few things from above, plus a few things from private conversations a couple of members have started with me. This way, the info is quickly accessible to everyone in one location.

TL;DR
  • We likely require less IT horsepower than originally perceived. Our "production" environment is not as "production" as a 250-employee business that runs off a central server system. We currently have 15 people in the business. Shutting down our FreeNAS server or Windows 10 SQL Server machine for an hour isn't optimal, but it can be done without significant hiccup/cost. The proposed VMs are small, resource-light deployments. Our first NodeJS web app is currently being prototyped on a Raspberry Pi. It's a little slower than we'd want, but it's still doing the job. Our Windows VM is minimal in the grand scheme, needing 50GB of storage for the OS, programs, and SQL Server database.
  • Virtualizing FreeNAS on our current machine is most valuable to me to 1) better utilize hardware that currently sees a <5% load most of the time, and 2) learn how to do this and what's involved, for future consideration.
  • If you actually read everything below, thank you for your time and attention. It's sincerely appreciated.

1. What are our business IT needs for this?
I'm suspecting our IT needs are currently minimal in comparison to the "production" systems many of you are familiar with. Perhaps my original post skewed the perception the wrong way. In the office, we do not have a datacenter, we do not host our website, we don't store customer data, we don't have an email server, and we don't have complex database systems that would take days to rebuild. We have 13 employees in the office, 3 remote, who do most of their work on their workstations; the FreeNAS server is a shared storage resource. To summarize our current setup:
  • The FreeNAS server. The machine usually sees a <5% CPU load. RAM is obviously maxed for the ZFS ARC. Network traffic is predominantly the video editors pushing 4K video files over the network through their editing software (200MB/s via 2 workstations). The storage pool currently has 1TB of normal, important archival data (past purchase orders, part design data, general marketing assets, supplier catalogs/info, etc.). This was the "critical stuff" I mentioned above; it is backed up continuously. Then we have roughly 10TB of video/photo assets. Our video team streams the footage over the network for their daily editing tasks. Yes, it's a problem if the file server is down for an extended period, as these employees would have to divert their attention elsewhere, but it is not catastrophic or thousands of dollars per hour of lost work. Furthermore, if we lost the entire pool and those 10TB of video data were gone overnight, it would be a significant setback, but it would not destroy the business. ** That said, an immediate goal of mine is to back up the previous 6 months of video data externally, so that recent projects are protected.
  • A normal Dell desktop running Windows 10 for our warehouse inventory database. Small SQL Server deployment; the database is around 500MB right now. Sees around 300 transactions per day via 3 users (usually no concurrent I/O). The db is backed up with the 1-2-3 location method. If the machine were on fire one morning, I could get us back up and running in less than an hour on another machine. This is the most "production" thing we have, and when it goes down, our team simply diverts their focus to another aspect of the warehouse while the system is restored. This has already been put to the test once. Eventually (and hopefully!), we will have enough daily orders that this system will warrant a more structured setup - i.e., it will be economically sensible to have a ready-to-go backup machine, copied instance, etc., that can be flipped to "live" in an instant should the main system go down for any reason. I believe virtualizing will unlock beneficial features to accomplish this effectively, which is why I've thought of hosting this Windows machine in a VM on the FreeNAS server (or another host if the FreeNAS server doesn't pan out for virtualizing).
  • NEW (we don't currently have this, but need to deploy it over the next couple of weeks) - Linux deployments to host NodeJS web apps for internal LAN use (possibly broader scopes in the future). Currently, I have written a few .NET programs for us that make certain jobs more efficient. However, looking ahead, we see value in moving toward an agnostic, browser-based development platform for our internal applications. I have recently hired a developer to start our first project in this arena: an "order returns" application that handles item returns from customer orders. In the future, our goal is to continue developing small apps that make daily operations more efficient. These apps will be small and resource-light. The returns app will use the Windows SQL Server database (there is reasoning for this I won't get into here), and the transactions will be very light: on the order of 10-20 transactions per day by 2 users. In the immediate future (12-24 months), other apps will be similarly lightweight. I don't currently envision the need for an I/O-intense app, like hosting our website, that would see 1,000 concurrent users and hundreds of thousands of queries per day.
2. Realistically, what hardware resources do your VMs need?
Cutting to the chase, I hypothesize an adequate resource allocation would be:
  • Windows VM - 4 to 8GB of RAM, 1 vCPU core, 50GB of storage for the OS, programs, and the SQL Server database.
  • Linux VM (resources are per web app, which is only 1 at the moment, possibly scaling to 2 or 3 by the end of the year) - 2GB RAM, 1 vCPU core, 10GB of storage for the OS, NGINX, web app files, and database.
  • ^^^ we hope to host a couple of our small internal apps per Linux VM and use NGINX to handle routes. Thus, if we had 2 apps running on the VM, we'd likely only need to give it 2-4GB RAM, 2 vCPU cores, and 10-20GB of storage.
  • FreeNAS - every resource left over. If we did this immediately, without purchasing an additional physical CPU and more RAM, this would leave 58GB of RAM and 6 vCPU cores/12 vThreads.
Because our VMs currently need very little storage space, the idea was to have a couple of RAID 1 SSDs that are only touched by the hypervisor and the VMs. The VM OSes and the small databases would reside on the SSDs. Thus, (2) 480GB SSDs would get us going with room to spare for a little future development (a quick budget check follows).
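To double-check my arithmetic, a quick sketch (the 64GB RAM and 8-core/16-thread host totals are what my leftover figures imply):

Code:
# Tally the planned carve-up against the host (a sketch; host totals
# are inferred from the leftover figures quoted above).
host_ram_gb = 64
host_cores = 8

vms = {                       # name: (RAM in GB, vCPU cores)
    "win10-sql": (4, 1),      # low end of the 4-8GB range
    "linux-webapps": (2, 1),
}

used_ram = sum(ram for ram, _ in vms.values())
used_cores = sum(cores for _, cores in vms.values())

print(f"Left for FreeNAS: {host_ram_gb - used_ram}GB RAM and "
      f"{host_cores - used_cores} cores/{(host_cores - used_cores) * 2} threads")
# -> Left for FreeNAS: 58GB RAM and 6 cores/12 threads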

3. By the time you virtualize the FreeNAS machine, you could have purchased/built a separate host machine for the VMs.

Going with a separate machine, and forgetting about virtualizing FreeNAS, was always plan B. I simply wanted to throw a post up here to see if someone could confirm that our FreeNAS hardware was more or less ready to be virtualized (with the exception of normal things like drives to store the hypervisor and VMs on), and that for a small investment we could better utilize our current FreeNAS server hardware, which usually sees a 1%-3% load (with small spikes from time to time).

Furthermore, I enjoy knowing how things work. Who knows, perhaps we will have another need in the future where basic experience virtualizing a ZFS system is valuable. It would allow me to make a smarter decision for the business, know who to contact for advice, etc.

4. Why are you doing this work, Steven, if you don't know what you're doing?
Good question! I will get flak for this, but even though I have heavy responsibilities running our business, I have an unquenchable thirst to know the fundamentals of how things work. I like researching tech stuff and am a mechanical engineer by education (I just had to know how a combustion engine actually works), but most importantly, I'm an entrepreneur who has to hit challenges and figure out the most effective way to solve them, with both short- and long-term considerations. Right now, it's been valuable for me to have a basic understanding of how a lot of random things work (from ZFS to warehouse fulfillment practices to real estate). In the long run, my philosophy is that I don't have to be the one who assembles the puzzle, but I like to know what pieces are available and what the final picture can look like. It's purely an opportunity->resources->outcome approach, and usually, the more you know about each part, the better the results. Hopefully from the above, you now recognize that our IT needs aren't as complex as those of a 250-employee or larger business that operates an internal/external datacenter, holds sensitive customer data in house, or would experience significant labor loss if a critical system is down for an hour. Hopefully one day we will get there, and in that event, I don't think you all should expect a thread here to solve the challenge...

5. How do you back up data?
As mentioned above, our 1TB of important FreeNAS pool data is backed up to the cloud continuously. The photo/video assets are not. It'd be great if they were, but this is a risk I currently see we have to take with our available resources.
Our SQL Server database is backed up daily. I will be increasing this frequency very soon, and eventually I'd like an effective way for this to be done continuously. It is backed up twice locally (on different machines in separate physical locations) and then to the cloud.
Employee workstations keep their important data on the FreeNAS server. My workstation receives more attention and is continuously backed up to the cloud.

6. Be careful giving access to people remotely.
Of course. I wouldn't provide admin access to our systems and then turn a blind eye. In fact, my goal from this thread is to receive advice and instruction so that I can physically perform what's needed without an external person receiving the credentials or access needed to perform changes themselves. If this isn't practical, or if I can't find someone I trust, I'll abandon the project.
 
Last edited:

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
In fact, my goal from this thread is to receive advice and instruction so that I can physically perform what's needed without an external person receiving the credentials or access needed to perform changes themselves.
The hardware requirements ...running off a Pi? ...damn.
Your goals can easily be achieved. I reckon the pieces are already laid out in the thread, including more elaborate options.
The missing aspects mostly concern the ESXi part, but to be honest, it is mostly self-instructing. When I first got into it, I was quite shocked at how simple it all is. Obviously, there are some cans of worms and some thinking that needs to be done, but all in all... getting the basics up is easy and quickly done.

A couple of notes not to forget:
- Startup and shutdown sequence
- VMware Tools
- UPS integration (to allow the host to shut down all VMs gracefully upon losing power)
- CPU & memory limitations, or <over subscription> configuration
- Virtual networking (loads of good stuff here)


Just as a reminder: the good thing about virtualization is that you could do most of the setup work on another temporary host, to get the VMs configured prior to migrating. Any modern Intel (VT-d supporting) CPU would do (eh, yes, well... some BIOS settings regarding virtualization may need to be switched, but no big deal). That is, you could do the bulk of the fiddling and configuring prior to moving the VMs (pay attention to PCI passthrough).
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Well, I think your mind is made up, which is perfectly fine, so I'm going to drop a little advice...

1) For your FreeNAS VM, lock its RAM. Based on what your FreeNAS use case is, I'd say give it 16GB.
2) Read all you can about ESXi 6.5 (the current version).
3) Try to build your FreeNAS VM as virtual hardware version 8, 9, or 10. I prefer 8 because then I can use vSphere to fully configure it, but this is just something you can play around with.
4) Install the vSphere Client; even if you use version 13 VMs, there are still features you can adjust that you can't do well from the ESXi web GUI.
5) Back up your FreeNAS configuration file before you start (a tiny sketch of one way to grab it follows this list).
6) Disconnect the FreeNAS hard drives during all your ESXi work, and only plug them back in after ESXi is operational. This will prevent you from accidentally deleting/formatting your data drives.
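For item 5, here's a tiny sketch of one way to grab the config over SSH. FreeNAS keeps its configuration database at /data/freenas-v1.db; the hostname is a placeholder, and the web GUI's System -> General -> Save Config does the same job:

Code:
# Pull a dated copy of the FreeNAS config database over SSH
# (a sketch; the hostname is a placeholder).
import datetime
import subprocess

stamp = datetime.date.today().isoformat()
subprocess.run(
    ["scp", "root@freenas.local:/data/freenas-v1.db",
     f"freenas-config-{stamp}.db"],
    check=True,  # raise if the copy fails
)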

One of the problems I see for you is that you need to take down the current FreeNAS system in order to make some modifications to the hardware, and then install ESXi, configure the vSwitch, pass through the HBA, configure the FreeNAS VM, restore your config file, and reconfigure the NIC in FreeNAS. Last is to test that it all works. Then you can start to add some new VMs as you see fit.

I would add a 2TB hard drive for your ESXi boot drive and datastore for the VMs. This gives you ample room to work with. You might consider a VM backup program so you can maintain a copy of your VMs in case a failure occurs.

One last thing, I started a thread in the off topics section last year when I was headed down the ESXi path. It is lengthy but I feel there are some good things worth reading. Hopefully you can glean some good information from it.

Good Luck,
-Mark (aka. Joe)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Good catch. Fortunately, it is easily fixed by grabbing another controller.

Which, for a mainboard controller, means grabbing a different mainboard, and while there are some decent mainboards with ESXi-friendly controllers, such as the X9DR7-TF+, that might be an overkill strategy.

At this point, hobbyists and cheapskate small business users are rejoicing as the three-to-five-year refresh cycle for large enterprises has finally hit some of the second generation 6Gbps RAID controllers, specifically the LSI 2208 family. I recently scored a bunch of these on eBay at $150/each, complete with LSICVM, a package that'd run around $750/each new.

As an aside to the audience, I am not suggesting that these are good FreeNAS controllers. They *are* good ESXi RAID controllers, if you do your homework and use the correct firmware and drivers.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I recently scored a bunch of these on eBay at $150/each, complete with LSICVM, a package that'd run around $750/each new.
That's a significant price difference. Nice score!
 
Joined
Nov 11, 2014
Messages
1,174
That's a significant price difference. Nice score!

I think you should get one too, if they are not gone. I know you need one. :)

I paid $150 just for the battery (which was actually a capacitor) itself.
 

Joined
Nov 11, 2014
Messages
1,174
Yeah. And $500 for the controller. But it was brand new from superbiz.com. So if eBay offers it for $150 (card + capacitor) instead of $750, I wouldn't hesitate. Loosen up the pockets, Joe, I know you can. :)
 
Status
Not open for further replies.