New to FreeNAS and having 2 issues with initial setup

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
I think you're cherry-picking quotes from that link, which was written in 2015 for FreeNAS 9. What's linked here is current practice for FreeNAS 11, which has a larger footprint than FreeNAS 9.
 

Big Ry

Dabbler
Joined
Jun 28, 2020
Messages
30
Samuel Tai said:
I think you're cherry-picking quotes from that link, which was written in 2015 for FreeNAS 9. What's linked here is current practice for FreeNAS 11, which has a larger footprint than FreeNAS 9.
Thanks, but I'll trust a software engineer from the company in question over some random person on a forum.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
You asked for help on this forum. If you don't want our help, based on thousands of hours running FreeNAS over several hardware and virtual baselines, and seeing what works, then that's your choice.
 

Big Ry

Dabbler
Joined
Jun 28, 2020
Messages
30
Samuel Tai said:
You asked for help on this forum. If you don't want our help, based on thousands of hours running FreeNAS over several hardware and virtual baselines, and seeing what works, then that's your choice.
No, I don't want your help.

Actually, I was warned of the arrogance on this board before I ever posted. Guess they were correct.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,740
1. FreeBSD does not run very well in Hyper-V. Technical fact.
2. If you must use a hypervisor, go ESXi.
3. Initially you wrote you had a dedicated system for FreeNAS, so why use a hypervisor at all?

You can find an assortment of machines that I run FreeNAS on in my signature. None of this is anywhere near the size/cost range of a cheap home Plex server. Most regulars on this forum run FreeNAS for business; that's why we are regulars.

You might be better off with just Linux or FreeBSD (modulo the Hyper-V constraint), ZFS and a regular Plex install.

I can't encourage you to go without ECC, but it is sufficiently proven that the "scrub of death" is a myth and ZFS without ECC is still better than any other filesystem without ECC. Do a burn-in and memory test and don't overclock ...

HTH,
Patrick
 

Big Ry

Dabbler
Joined
Jun 28, 2020
Messages
30
Patrick M. Hausen said:
1. FreeBSD does not run very well in Hyper-V. Technical fact.
2. If you must use a hypervisor, go ESXi.
3. Initially you wrote you had a dedicated system for FreeNAS, so why use a hypervisor at all?

You can find an assortment of machines that I run FreeNAS on in my signature. None of this is anywhere near the size/cost range of a cheap home Plex server. Most regulars on this forum run FreeNAS for business; that's why we are regulars.

You might be better off with just Linux or FreeBSD (modulo the Hyper-V constraint), ZFS and a regular Plex install.

I can't encourage you to go without ECC, but it is sufficiently proven that the "scrub of death" is a myth and ZFS without ECC is still better than any other filesystem without ECC. Do a burn-in and memory test and don't overclock ...

HTH,
Patrick
1. If this is true, why does a senior software engineer from iXsystems state "You absolutely can virtualize FreeNAS" and "Other hypervisors such as bhyve, KVM, and Hyper-V also work..."? That entire blog post is calling out statements like yours and others on this thread as, at best, a misleading interpretation of his earlier statements, and at worst, patently false. I've also found multiple people who have successfully set up FreeNAS in a Hyper-V VM. I get that it may not be easy, and I get that there are issues unique to PCIe passthrough which might inhibit a prescriptive solution. But that doesn't mean a FreeNAS VM is going to underperform, let alone be impossible to configure.
2. If I was able to move away from Windows, I wouldn't even need a hypervisor.
3. Not sure where you're getting this from. I stated at the very beginning of the OP that this machine is a dual-purpose surveillance/Plex server that I'm also currently using as a workstation during the pandemic. I'm not sure how that part of my post fell on deaf ears; I tried to make it as transparent as possible. The workstation is not a permanent use case, but it is the current situation I'm in and there's nothing I can do about it. It's easily the most limiting factor here, as my machine cannot be down for extended periods and I'm not skilled enough to rebuild an entire machine in a foreign OS in just a few hours or a day at most.

That's fine if you are all using FreeNAS for business, but realize that not everyone is. After all, this is FreeNAS, so plenty of home users run it as well, considering there are no expensive licenses involved. I'm literally just trying to use it to manage a single 3-disk RAIDZ, nothing more. I'm not running Plex in FreeNAS, not running any other drives in FreeNAS, not managing the individual files in FreeNAS, no plugins, nothing. I'm purely trying to utilize the redundancy, security, and speed of ZFS, while managing everything else in Windows. And I can say right now it is working flawlessly and is very fast, from a file storage/transfer perspective. But that is all for naught if I am setting my data up for failure by limiting access to security features inherent to the drives and RAIDZ and limiting access to features of FreeNAS itself.

Everything I've read suggests I need to pass the HBA through to get there. But I'm not even at the point of passing the HBA yet, so I don't know what kind of effort I'm in for. At this point I can't even figure out how to dismount the HBA in Windows. But seeing as how I'm running a very common LSI HBA, I'd think someone would have some guidance here. Then maybe I could actually attempt to pass the card. Whether or not that's even possible is another question, as someone on Reddit directed me to a Microsoft article stating that PCIe passthrough is not available in Windows 10 Hyper-V (only in Windows Server). If this is true, then virtualizing any OS for the purpose of data management is pointless.

If I can/do decide to continue with FreeNAS, I will not be replacing all my memory sticks. I just spent money expanding from 16GB to 32GB. I don't need XMP or any memory OC at all; I'm just running it because why not. I also don't game whatsoever, so that's absolutely no concern.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Big Ry said:
LSI SAS 4i4e HBA

I'm assuming you're using DDA for that, PCIe passthrough. Which, last I checked, is only supported on the server SKUs, not the client SKUs.
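
For reference, the DDA sequence on a Windows Server host is all PowerShell. Below is a rough sketch of the documented steps, not something I've run against your hardware: the VM name and device location path are placeholders, and again, this will not work on the Windows 10 client SKUs.

Code:
# Hypothetical sketch of DDA on a Windows *Server* Hyper-V host.
# 'FreeNAS' and the location path are placeholders - find the real path via
# Get-PnpDevice and the DEVPKEY_Device_LocationPaths property.
$vm  = 'FreeNAS'
$loc = 'PCIROOT(0)#PCI(0100)#PCI(0000)'   # example location path only

# DDA requires the VM's automatic stop action to be TurnOff
Set-VM -Name $vm -AutomaticStopAction TurnOff

# Disable the device on the host, dismount it, then assign it to the VM
Disable-PnpDevice -InstanceId (Get-PnpDevice -FriendlyName '*LSI*').InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $loc -Force
Add-VMAssignableDevice -LocationPath $loc -VMName $vm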

As this will only be used for Plex media, I get your risk profile.

As for networking: that's a Hyper-V question, and depends a bit on how you do networking on Hyper-V. You'll need an external virtual switch and a legacy network adapter (which means a Generation 1, not Generation 2, virtual machine), and from there it's basic network troubleshooting; maybe ask in a Hyper-V forum.
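
If it helps, the switch and legacy adapter can be set up from an elevated PowerShell prompt. A minimal sketch, assuming placeholder names 'Ethernet' for the physical NIC and 'FreeNAS' for the VM:

Code:
# Hypothetical sketch - 'Ethernet' and 'FreeNAS' are placeholder names.
# Legacy (emulated) adapters only exist on Generation 1 VMs, and the VM
# must be powered off while swapping adapters.
New-VMSwitch -Name 'External' -NetAdapterName 'Ethernet' -AllowManagementOS $true

# Remove the default synthetic adapter, then add a legacy one on the switch
Remove-VMNetworkAdapter -VMName 'FreeNAS'
Add-VMNetworkAdapter -VMName 'FreeNAS' -IsLegacy $true -SwitchName 'External'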

Keep in mind that FreeNAS has a short list of supported adapters. If your host machine is using Realtek, and that's showing up as Realtek within FreeBSD, you'll have issues. There is a list on these forums of supported adapters, in a nutshell: Intel server adapters. If all else fails, DDA one of those through to FreeNAS for exclusive use.

There's a guide here that'll step you through the legacy adapter setup that's necessary: https://www.servethehome.com/install-FreeNAS-hyperv-part-1-basic-configuration/ . Note he's using a dedicated NIC.

People here are not likely to have virtualized FreeNAS using Hyper-V. Some are doing this with ESXi and even Proxmox, with PCIe passthrough for the HBA.

What it comes down to is: You can definitely do as you're doing. If (or when) you run into issues specific to a VM deployment on Hyper-V, you are not likely to get good guidance here, as it's not a setup that people are running - or feel like offering free support for.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,740
My "not very well" statement is founded in the fact that ESXi officially supports FreeBSD including guest additions while Hyper-V to my knowledge doesn't. Possibly my knowledge is outdated, possibly it's not.

In your initial post you wrote:
For running the FreeNAS VM, I have the following hardware that will be used for no other purpose besides FreeNAS:
I interpreted that as "these are parts of a machine that has no other purpose beside FreeNAS" ...

But if you are not even intending to run Plex in FreeNAS, why not run OpenZFS on Windows or a simple mirror (if I am not mistaken Windows supports that without additional software) instead of going through all these contortions to plug a NAS into your Windows machine, then use a network protocol to access it ...? Seriously.

See, us "regulars" run FreeNAS as the only OS on our hardware regardless of size and then integrate $stuff into FreeNAS. After all it's a capable server OS - containers, hypervisor and everything ...

Kind regards
Patrick
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Big Ry said:
After all, this is FreeNAS, so plenty of home users run it as well, considering there are no expensive licenses involved.

Blessing and a curse. Watch the recent interview with Kris Moore (https://www.youtube.com/watch?v=z5H9gB0FVdY) and pay attention to the laughing references to the kind of hardware people want to run FreeNAS on. 9 out of 10 issues with FreeNAS come down to "creative" hardware choices.

You are embarking on a journey here. I do think you can (probably) get some form of this going, though with additional data-loss risk, because Win10 won't let you DDA the HBA. Which you have clearly stated you are fine and dandy with, and I get that. After all, it's just entertainment media, which can always be put back on again.

You are very much out on a limb of "I am doing my own thing, and I can then YouTube this and have people marvel at my crazy setup and how I got that to work". As a home user myself, I get the urge to tinker and do "perverted" things with software and virtualization.

All the people on this forum are trying to do is make you very aware of what you're embarking on. I do agree the message could have been a little more gentle. Keep in mind though that there are a LOT of "I want to run FreeNAS on VirtualBox on Ryzen Gaming Pro MegaRGB, why am I having issues?" kind of posts, and folk can get a little fatigued with it. Have some patience with that fatigue, please, and some of us at least will do our utmost to have patience with your FrankenBuild. Which it is. I'm saying this affectionately. It's way out there on a limb, and if you get it to work, as "wobbly" as it would be for production, I think it'd be kinda cool. As long as "I may lose all data at any time and ZFS won't help me" is clearly understood, because the lack of DDA will hurt if (or when) a drive goes south.
 

Big Ry

Dabbler
Joined
Jun 28, 2020
Messages
30
Yorick said:
I'm assuming you're using DDA for that, PCIe passthrough. Which, last I checked, is only supported on the server SKUs, not the client SKUs.

As this will only be used for Plex media, I get your risk profile.

As for networking: that's a Hyper-V question, and depends a bit on how you do networking on Hyper-V. You'll need an external virtual switch and a legacy network adapter (which means a Generation 1, not Generation 2, virtual machine), and from there it's basic network troubleshooting; maybe ask in a Hyper-V forum.

Keep in mind that FreeNAS has a short list of supported adapters. If your host machine is using Realtek, and that's showing up as Realtek within FreeBSD, you'll have issues. There is a list on these forums of supported adapters, in a nutshell: Intel server adapters. If all else fails, DDA one of those through to FreeNAS for exclusive use.

There's a guide here that'll step you through the legacy adapter setup that's necessary: https://www.servethehome.com/install-FreeNAS-hyperv-part-1-basic-configuration/ . Note he's using a dedicated NIC.

People here are not likely to have virtualized FreeNAS using Hyper-V. Some are doing this with ESXi and even Proxmox, with PCIe passthrough for the HBA.

What it comes down to is: You can definitely do as you're doing. If (or when) you run into issues specific to a VM deployment on Hyper-V, you are not likely to get good guidance here, as it's not a setup that people are running - or feel like offering free support for.
Thanks. This was helpful.

Regarding DDA, that's what I'm trying to figure out now. And it sounds like you're also confirming my statement in my last post that this apparently is flat-out unavailable in Windows 10 Hyper-V. In which case, a FreeNAS VM is completely out of the question for me (unless you know some other way to give FreeNAS full access to the drives).

Yes, there's an astronomical difference between a 3-disk RAIDZ for a Plex home server and servers housing dozens or hundreds of disks in multiple arrays being deployed in a business environment. Not even in the same ballpark. If there's ever a case to be made for running minimum specs on a build, this is it. The only simpler configuration I can think of is a single disk, but who would ever do that?

I believe I have the networking configured correctly now, or at least everything seems to be working fine to me. My NIC is an Intel(R) Ethernet Connection (7) I219-V, which I set up as an external adapter in Hyper-V. My VM seems to run completely fine, checkpoints aside. And I can access the shared pool in Windows Explorer and access the FreeNAS WebGUI in a Windows browser without issue, so unless I'm missing something, I'm all good on the network part.
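
(Side note in case anyone hits the same checkpoint quirk: from what I've read, checkpoints can be switched off per-VM; a sketch, with 'FreeNAS' as the placeholder VM name:)

Code:
# Hypothetical sketch - 'FreeNAS' is a placeholder VM name.
# Disables Hyper-V checkpoints for this VM entirely.
Set-VM -Name 'FreeNAS' -CheckpointType Disabled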

I realize virtualized FreeNAS isn't common, and running in a Hyper-V VM is even less common. But of all the sources available, this place should be the best chance of getting info on this. It's too uncommon of a build configuration to find on other boards, and this is, after all, the official FreeNAS board, no?
 

subhuman

Contributor
Joined
Nov 21, 2019
Messages
121
Patrick M. Hausen said:
why not run OpenZFS on Windows or a simple mirror (if I am not mistaken Windows supports that without additional software)
It does, through Storage Spaces. It handles 2-way mirrors just fine.
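
For the curious, a minimal PowerShell sketch of a 2-way mirror (pool and virtual disk names are placeholders; the Storage Spaces control panel does the same thing graphically):

Code:
# Hypothetical sketch - pool and virtual disk names are placeholders.
# Pools every unused disk, then carves a 2-way mirror out of the pool.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'Pool' -StorageSubSystemFriendlyName 'Windows Storage*' -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName 'Pool' -FriendlyName 'Mirror' -ResiliencySettingName Mirror -UseMaximumSize
# Then initialize, partition, and format the new disk as usual.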
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Big Ry said:
But of all the sources available, this place should be the best chance of getting info on this

You'd think so, but not really. Most everybody on this forum is quite fond of their data. Even with backups, people don't really feel like having major failures. And that means, if one is to virtualize, there needs to be PCIe Passthrough. At this point the choices are:

- ESXi, well tested, several prominent members run it. Not a lot of support but at least there's a track record
- Proxmox, a bit more out there, but again several YT videos on how to do that and folk have had success
- Hyper-V on Windows Server, completely unproven, quite niche, FreeBSD support is not a thing MS cares about greatly, and it takes a Win Server license, which is in a different ballpark than Proxmox or ESXi, which can, if things haven't changed lately, both be free.

Running it on Win10 without DDA - just no for any kind of use where keeping the data is even remotely desirable.

Edited to add: People have done things like run ESXi, pass an HBA through to FreeNAS, and pass a GPU through to a Win10 VM, and that way get Win10 and FreeNAS all running on the same hardware. Doable. A little fiddly. More power to those for whom that works well and meets their needs.
 
Last edited:

Big Ry

Dabbler
Joined
Jun 28, 2020
Messages
30
My "not very well" statement is founded in the fact that ESXi officially supports FreeBSD including guest additions while Hyper-V to my knowledge doesn't. Possibly my knowledge is outdated, possibly it's not.

In your initial post you wrote:

I interpreted that as "these are parts of a machine that has no other purpose beside FreeNAS" ...

But if you are not even intending to run Plex in FreeNAS why not run OpenZFS on Windows or a simple mirror (if I am not mistaken Windows supports that without additional software) instead of going through all these contortions to plug a NAS into your Windows machine, then use a network protocol to access it ...? Seriously.

See, us "regulars" run FreeNAS as the only OS on our hardware regardless of size and then integrate $stuff into FreeNAS. After all it's a capable server OS - containers, hypervisor and everyting ...

Kind regards
Patrick
The statement I made about hardware dedicated to FreeNAS meant hardware that I'm not allocating to the host OS. The storage drives are entirely dedicated to the VM/FreeNAS, whereas the "2 cores" and "12GB RAM" were resources allocated to the VM in Hyper-V. So the full CPU and full RAM aren't dedicated to the VM; that is just how much of each I allocated to it.
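
(In Hyper-V terms, that allocation amounts to something like the sketch below, with 'FreeNAS' as the placeholder VM name; from what I've read, dynamic memory should stay off because ZFS expects to own its RAM:)

Code:
# Hypothetical sketch - 'FreeNAS' is a placeholder VM name.
# Pin 2 vCPUs and a fixed 12GB to the VM (no dynamic memory for ZFS).
Set-VMProcessor -VMName 'FreeNAS' -Count 2
Set-VMMemory -VMName 'FreeNAS' -DynamicMemoryEnabled $false -StartupBytes 12GB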

Why not run OpenZFS? I don't know the answer to that question. I'm a novice here, and FreeNAS and Unraid were the only options I ever found for Hyper-V solutions. So it's not that I can't or won't run OpenZFS, I just don't know anything about it and it was never a recommended option to me. One would think though that a dedicated NAS OS like FreeNAS would be superior to a more general ZFS-based OS.

I'm not running mirroring, because these drives are ~$400 apiece. I'm trying to get the most bang for my buck on the software side.

Again, I'm not saying my situation is ideal, but I can't just take this server offline to tinker around with a new OS. I don't understand why this is so difficult to understand.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,740
OpenZFS is not an OS, it's an addition to Windows that lets you create ZFS pools directly in Windows. I am wondering why you want to go with a separate OS installation running inside a hypervisor at all? Create a mirror in Windows and share that from Windows. Or use OpenZFS in Windows and then share that ... no separate OS, no hypervisor, no hassle ...

Patrick
 

Big Ry

Dabbler
Joined
Jun 28, 2020
Messages
30
Yorick said:
Blessing and a curse. Watch the recent interview with Kris Moore (https://www.youtube.com/watch?v=z5H9gB0FVdY) and pay attention to the laughing references to the kind of hardware people want to run FreeNAS on. 9 out of 10 issues with FreeNAS come down to "creative" hardware choices.

You are embarking on a journey here. I do think you can (probably) get some form of this going, though with additional data-loss risk, because Win10 won't let you DDA the HBA. Which you have clearly stated you are fine and dandy with, and I get that. After all, it's just entertainment media, which can always be put back on again.

You are very much out on a limb of "I am doing my own thing, and I can then YouTube this and have people marvel at my crazy setup and how I got that to work". As a home user myself, I get the urge to tinker and do "perverted" things with software and virtualization.

All the people on this forum are trying to do is make you very aware of what you're embarking on. I do agree the message could have been a little more gentle. Keep in mind though that there are a LOT of "I want to run FreeNAS on VirtualBox on Ryzen Gaming Pro MegaRGB, why am I having issues?" kind of posts, and folk can get a little fatigued with it. Have some patience with that fatigue, please, and some of us at least will do our utmost to have patience with your FrankenBuild. Which it is. I'm saying this affectionately. It's way out there on a limb, and if you get it to work, as "wobbly" as it would be for production, I think it'd be kinda cool. As long as "I may lose all data at any time and ZFS won't help me" is clearly understood, because the lack of DDA will hurt if (or when) a drive goes south.
As an avid forum user, I get the annoyance about repeated posts on the same topic. But the other side is that the poster doesn't know it's an oft-repeated topic, so the "regulars" need to take this into account or just not respond. It helps no one to post some of the replies I received in response to this.

"Embarking on a journey" is a description I've read before in terms of passing PCIe through a VM, and don't think I'm not weighing the value of such an attempt. I figured I'd try dabbling a little first, since I can just delete the VM easily if it fails. Part of this dabbling is gathering the necessary information and troubleshooting, which I'm doing now. Though it's looks increasingly unlikely that I'll be able to make this work, which is fine, but I won't know until I try, right? It's not that I don't have other alternatives to try, such as Drivepool_snapraid, but this just seemed the best option to try first.
 

Big Ry

Dabbler
Joined
Jun 28, 2020
Messages
30
Yorick said:
You'd think so, but not really. Most everybody on this forum is quite fond of their data. Even with backups, people don't really feel like having major failures. And that means, if one is to virtualize, there needs to be PCIe Passthrough. At this point the choices are:

- ESXi, well tested, several prominent members run it. Not a lot of support but at least there's a track record
- Proxmox, a bit more out there, but again several YT videos on how to do that and folk have had success
- Hyper-V on Windows Server, completely unproven, quite niche, FreeBSD support is not a thing MS cares about greatly, and it takes a Win Server license, which is in a different ballpark than Proxmox or ESXi, which can, if things haven't changed lately, both be free.

Running it on Win10 without DDA - just no for any kind of use where keeping the data is even remotely desirable.

Edited to add: People have done things like run ESXi, pass an HBA through to FreeNAS, and pass a GPU through to a Win10 VM, and that way get Win10 and FreeNAS all running on the same hardware. Doable. A little fiddly. More power to those for whom that works well and meets their needs.
I hear you, and the way you stated it here is fair. But others came in just chastising me for suggesting I'm attempting to run it in a VM, which is ridiculous. As you just said, ESXi is well tested and run by several on the board, so getting on my case about running in a VM is just a waste of everyone's time. Who cares why I want to do it; I'm doing it. Just post any help you may have on the topic or don't post at all. I'm not directing this at you, BTW, just speaking in generalities.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
OpenZFS on Windows is over yonder. Note it's at version 0.23, and installation requires you to turn on the option to load unsigned drivers. I view this as a bit experimental, but maybe @Patrick M. Hausen has experience with running it.


Instructions for installing it and creating a pool at https://github.com/openzfsonwindows/ZFSin
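
Going by the readme there, pool creation uses the familiar zpool syntax from an elevated prompt. A rough sketch, with 'tank' and the disk number as placeholders:

Code:
# Hypothetical sketch - 'tank' and PHYSICALDRIVE1 are placeholders.
# Check disk numbers first (e.g. with Get-Disk) so you don't wipe the wrong drive.
zpool create -O casesensitivity=insensitive -O compression=lz4 tank PHYSICALDRIVE1
zpool status tank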
 

subhuman

Contributor
Joined
Nov 21, 2019
Messages
121
Big Ry said:
I'm not running mirroring, because these drives are ~$400 apiece. I'm trying to get the most bang for my buck on the software side.
And this is the thing... you can buy a Supermicro X9SCL motherboard with a Xeon E3-1220 or E3-1230 and 32GB ECC RAM for under $200 off eBay. Say another $150 for a new case and power supply (OK, power supplies are hard to find in stock right now...) and you're looking at $350-400. Now you have an actual server, and for less money than just one of those hard drives. It will have no upgrade path (RAM is already maxed out), but it does cover the requirements of what you've been describing.
THIS is what I meant when I said in my initial post that you may want to re-think your hardware choices, and why I recommended in my second that you read forum stickies. This is all covered. There are many posts about recommended hardware, and how to get it on a budget.
 
Last edited:

Big Ry

Dabbler
Joined
Jun 28, 2020
Messages
30
OpenZFS is not an OS, it's an addition to Windows that lets you create ZFS pools directly in Windows. I am wondering why you want to go with a separate OS installation running inside a hypervisor at all? Create a mirror in Windows and share that from Windows. Or use OpenZFS in Windows and then share that ... no separate OS, no hypervisor, no hassle ...

Patrick
See, that just goes to show how little I even know about this stuff. I've never heard of OpenZFS as an option in my research; it just wasn't a name that ever came up, for whatever reason. And of course now I will be looking into it. But my first impression has me confused, as there's no website? OpenZFS.org is just a wiki page, and the link to the Windows version takes you to a website with nothing on it except a single fuzzy photo. Is this like a prank or something?

[attached screenshot: 1593443892210.png]


Mirrors are out because, as mentioned, these drives are $$$. That's a last resort.
 

Big Ry

Dabbler
Joined
Jun 28, 2020
Messages
30
subhuman said:
And this is the thing... you can buy a Supermicro X9SCL motherboard with a Xeon E3-1220 or E3-1230 and 32GB ECC RAM for under $200 off eBay. Say another $150 for a new case and power supply (OK, power supplies are hard to find in stock right now...) and you're looking at $350-400. Now you have an actual server, and for less money than just one of those hard drives. It will have no upgrade path (RAM is already maxed out), but it does cover the requirements of what you've been describing.
THIS is what I meant when I said in my initial post that you may want to re-think your hardware choices, and why I recommended in my second that you read forum stickies. This is all covered.
You can't compare HDD prices to server components. The cost of HDDs is fixed no matter what you are building. That's like saying I could have bought a massive server for the price I paid for my house. Yeah, sure, that's a true statement, but it's irrelevant.

Not to mention, hindsight is 20/20, and you don't have any better hindsight than I do. So telling someone to trash their equipment and drop hundreds or thousands on new equipment is not a reasonable suggestion.
 