Question for those using Proxmox?

Ianm_ozzy

Dabbler
Joined
Mar 2, 2020
Messages
43
Hi all.

Just for context:
I have used old equipment as a server for, I think, close on 4 years: my old gaming hardware, an i7-3770K with 32GB of DDR3.
It was a FreeNAS-only machine, and it worked well.
Then I swapped out the CPU for an i7-3770 (for IOMMU support).
I installed Proxmox and virtualised TrueNAS maybe 3-ish years ago.
It has worked just fine.
The onboard SATA controller was passed through to TrueNAS using hardware passthrough.

The CPU power on that machine was lacking for my needs. TrueNAS is just one of the virtual machines.

So I looked at old server hardware: something with 64GB+ of RAM, a decent motherboard & a powerful CPU.
Of course, nothing suitable was available locally. Buying internationally was either stupidly expensive or dubious.
By dubious I mean refurbished motherboards I do not trust, or likely major delivery issues.
Proper server hardware was just ludicrous in cost.
So I stopped wasting time and reluctantly got 64GB of DDR4 on sale.

So now it is a B450 Tomahawk motherboard, a 3600X CPU & 64GB of memory.

I did go through and read this info:

So IOMMU works on the machine, but only successfully through the PCIe x16 slot connected directly to the CPU.
That slot has a quad-port NIC passed through to OPNsense. It is the most important virtual machine, so that is what I am doing.
I use a SATA card for the Proxmox boot & data drives.
Passing through the onboard SATA controller to TrueNAS just gives a few I/O errors and freezes the machine.
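
For reference, a rough way to see how a board groups devices for passthrough (and why a chipset-attached SATA controller often cannot be passed through cleanly) is to walk /sys/kernel/iommu_groups on the Proxmox host. A minimal sketch, assuming lspci (pciutils) is available:

    #!/usr/bin/env python3
    # Sketch: list IOMMU groups on the Proxmox host. Devices that share a
    # group with other chipset functions generally cannot be passed through
    # individually, which is a common situation on B450 boards.
    import glob
    import os
    import subprocess

    groups = sorted(glob.glob("/sys/kernel/iommu_groups/*"),
                    key=lambda p: int(os.path.basename(p)))
    for group in groups:
        print(f"IOMMU group {os.path.basename(group)}")
        for dev in sorted(glob.glob(os.path.join(group, "devices", "*"))):
            addr = os.path.basename(dev)  # e.g. 0000:03:00.1
            desc = subprocess.run(["lspci", "-nn", "-s", addr],
                                  capture_output=True, text=True).stdout.strip()
            print("  " + (desc or addr))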

In Proxmox, the TrueNAS data drives are directly attached to the TrueNAS VM.
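
("Directly attached" here means handing whole disks to the VM by their stable /dev/disk/by-id names with qm set, rather than passing through a controller. A small sketch that only prints the commands; the VM ID of 100, the starting slot and the "ata-" filter are placeholders:)

    #!/usr/bin/env python3
    # Sketch: print (not run) the "qm set" commands that attach whole physical
    # disks to a VM by their stable /dev/disk/by-id names. VM ID 100 and the
    # "ata-" filter are placeholders - adjust for the actual VM and disks.
    import glob

    VMID = 100        # hypothetical TrueNAS VM ID
    FIRST_SLOT = 1    # scsi0 is usually the guest's own boot/virtual disk

    disks = [p for p in sorted(glob.glob("/dev/disk/by-id/ata-*"))
             if "-part" not in p]       # whole disks only, skip partitions
    for slot, path in enumerate(disks, start=FIRST_SLOT):
        print(f"qm set {VMID} -scsi{slot} {path}")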

A BIOS update MAY fix the IOMMU issues, but it is risky. B450 BIOS updates, it seems, can have a high 'bricking your motherboard' rate.
I am highly motivated to NOT spend any more cash on it. My PC parts priority is for gaming!

According to the above post, hardware passthrough is preferred, but on my hardware it is apparently broken.

So directly attached drives?
How risky?

Are there any useful statistics or studies?
Of course there will be some horror stories in the forums, but how does that help?
For every disaster there are maybe 1,000 machines running just fine with no need to post in the forums; who knows.

I will probably always be running a single machine for all my VMs. This is to save space & power.
Proxmox is my hypervisor of choice. Recommended or not, that is what I am doing.

Oh, and of course all data is backed up.

Useful info appreciated.
 
Last edited:

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
In Proxmox, the TrueNAS data drives are directly attached to the TrueNAS VM.
I just switched to TrueNAS as my hypervisor so I can't check with my Proxmox installation, but I recall this is not the way to do passthrough. Do not select the drives for the TrueNAS VM. Find your mainboard SATA controller and pass that one through. From what I researched a few weeks ago, this may or may not work, depending on the mainboard.
It would probably be better to attach the drives to the PCIe card for passthrough and use the internal SATA for Proxmox.
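
Roughly, the idea is: find the SATA controller's PCI address on the host, then hand that whole device to the VM. A sketch of how one might locate candidates and the command involved (the VM ID of 100 is a placeholder, and the Proxmox system disks must not sit on the controller you pass through):

    #!/usr/bin/env python3
    # Sketch: list SATA controllers on the host and print the command that
    # would pass one through to a VM as a whole PCI device. VM ID 100 is a
    # placeholder; make sure Proxmox's own disks are NOT on that controller.
    import subprocess

    VMID = 100  # hypothetical TrueNAS VM ID

    lspci = subprocess.run(["lspci", "-D", "-nn"],
                           capture_output=True, text=True).stdout
    for line in lspci.splitlines():
        if "SATA controller" in line:
            addr = line.split()[0]      # e.g. 0000:01:00.1
            print(line)
            print(f"  candidate: qm set {VMID} -hostpci0 {addr}")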
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
I use Proxmox myself and passthrough works just fine (see signature). But I am running on server-grade hardware, where this kind of thing tends to work better.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I know you are asking about Proxmox specifically, but have you considered ESXi? It's also free and, at least in the past, tended to work on more machines than Proxmox. But as @chuck32 said, if you can pass through the entire controller, that is the better way to do it. ESXi also has RDM; it's a less desirable way to pass individual drives through, but it has worked for me in the past when I tested it, and I've helped others use RDM when no other option existed.

As I understand it, SCALE should have better virtualization capabilities if you wanted to virtualize directly through TrueNAS SCALE.

Good luck.
 
Last edited:

Ianm_ozzy

Dabbler
Joined
Mar 2, 2020
Messages
43
Thanks for the replies.

So as I have stated, 'server grade' hardware is impractical to get a hold of. I spent quite a bit of time looking for something suitable & affordable with no luck. It was very frustrating.
I am using a 2-port SATA card for both the Proxmox boot & data drives.
Passing that one through is not practical.
PCIe passthrough does not work at all with anything to do with the motherboard chipset. I suspect that is just the way it is with B450 motherboards.
The main CPU-connected slot has a quad-port NIC for my router VM, and it is staying there. It is the most important virtual machine.
I made this clear in my post.

With the previous motherboard & CPU, IOMMU worked, but I upgraded as I need decent CPU power with options to upgrade. So presently a 3600X, with maybe a second-hand 16-core down the track.

I use Proxmox as it is free with all the features, and I will continue to do so. I understand that with other software you need to pay for the useful features.

As for TrueNAS SCALE, I tried using it as a hypervisor and found it highly lacking.
When I tried it:

No backup options for virtual machines.
No firewall options for a virtual machine.
No memory sharing.
I found Proxmox containers vastly better than the TrueNAS SCALE 'apps'.

I understand with other hypervisors, some of those options cost you. Not happening.

What I need is useful info on how directly attaching drives to Proxmox may change the chances of data loss/corruption. I can find no statistics on this. There seem to be only guides & opinions.

The guide I put in my post seems fairly old, relating mainly to very old server hardware.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
What I need is useful info on how directly attaching drives to Proxmox may change the chances of data loss/corruption. I can find no statistics on this. There seem to be only guides & opinions.

You did find the info, you linked to it above. You're not going to find "statistics" because there is no formal method for people to report their failures, and many are resistant to owning up to it anyways. The stats, as they are, are mostly in my head because of the interactions I've had providing support over the years.

Do not use RDM, or Proxmox's stupid variation on it, regardless of what was said above, because unless something's changed recently in ESXi, RDM will stall in a manner similar to a nonredundant VMDK when there's a failure, and if the RDM setup gets messed up, recovery is very difficult. This caused a lot of pool losses back in the day. I worked with a couple of people on "directly attached drives" for Proxmox until I identified that it just seemed to be RDM-for-Proxmox and caused problems there too.

You do not want the host hypervisor's I/O stack getting involved especially where it may be handling things in ways which are not ZFS-compatible. This isn't so much of a problem for ESXi, since the main error mode seems to be stalling on I/O, but Proxmox is a bit more dicey because it is unclear on whether or not their "directly attached drives" guarantees synchronous writes to disk; experimentally, based on observed write speeds, it seems likely that it just passes it on to the driver, which means that you have all the classic RAID card write reordering issues and other related crap to worry about.
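
One crude way to sanity-check that from inside a guest is to time a batch of small fsync'd writes against the attached disk; if the reported rate is far beyond what a single spinning disk can plausibly sustain, the flushes are probably being absorbed somewhere below the guest. A minimal sketch, with the test path as a placeholder:

    #!/usr/bin/env python3
    # Crude sketch: time a batch of small fsync'd writes from inside the
    # guest. If the rate is far beyond what a single spinning disk can
    # plausibly sustain (very roughly 100-200 flushes/second), the "sync"
    # writes are probably being absorbed by a cache below the guest.
    # TESTFILE is a placeholder path on the pool/disk under test.
    import os
    import time

    TESTFILE = "/mnt/testpool/syncprobe.bin"   # hypothetical test path
    COUNT = 500
    BLOCK = b"\0" * 4096

    fd = os.open(TESTFILE, os.O_WRONLY | os.O_CREAT, 0o600)
    start = time.monotonic()
    for _ in range(COUNT):
        os.write(fd, BLOCK)
        os.fsync(fd)                           # ask for a real flush to media
    elapsed = time.monotonic() - start
    os.close(fd)
    print(f"{COUNT / elapsed:.0f} fsync'd 4 KiB writes per second")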

We mainly find the bits that make people bleed when people are actually punctured by the bits and injured. I extrapolate some stuff from that point, which you are free to label as "opinion" if you wish, but a lot of what I do is to try to steer people onto clearly safe pathways to design reliable systems. I can't and don't stop people from doing whatever dumbfool things they may wish.

The guide I put in my post seems fairly old, relating mainly to very old server hardware.

That's only because you want to believe that. The real truth is that it's still entirely relevant. PCIe passthru has not changed in the ten years since I wrote it, and smarter ways to virtualize have not appeared either. The very first words are "This is still as relevant as ever"; the only thing that's changed since THEN has been that Proxmox has gotten a bit more mature (PCIe passthrough is still listed as "experimental"), XCP-ng has sprung into being, and Hyper-V seems to be not entirely catastrophic in some cases.

Fundamentals of compsci don't change quickly. CPUs have not picked up psychic PCIe lanes to communicate directly with users. RAM has not developed infinite speed and capacity. HDDs have not grown to the yottabyte range. Virtualization is still functionally the same as it was ten years ago, so it isn't particularly smart to discount the hard-won wisdom of those who came before you just because you think it's "fairly old". New server hardware doesn't work differently.
 

Ianm_ozzy

Dabbler
Joined
Mar 2, 2020
Messages
43
You did find the info, you linked to it above. You're not going to find "statistics" because there is no formal method for people to report their failures, and many are resistant to owning up to it anyways. The stats, as they are, are mostly in my head because of the interactions I've had providing support over the years.

Do not use RDM, or Proxmox's stupid variation on it, regardless of what was said above, because unless something's changed recently in ESXi, RDM will stall in a manner similar to a nonredundant VMDK when there's a failure, and if the RDM setup gets messed up, recovery is very difficult. This caused a lot of pool losses back in the day. I worked with a couple of people on "directly attached drives" for Proxmox until I identified that it just seemed to be RDM-for-Proxmox and caused problems there too.

You do not want the host hypervisor's I/O stack getting involved especially where it may be handling things in ways which are not ZFS-compatible. This isn't so much of a problem for ESXi, since the main error mode seems to be stalling on I/O, but Proxmox is a bit more dicey because it is unclear on whether or not their "directly attached drives" guarantees synchronous writes to disk; experimentally, based on observed write speeds, it seems likely that it just passes it on to the driver, which means that you have all the classic RAID card write reordering issues and other related crap to worry about.

We mainly find the bits that make people bleed when people are actually punctured by the bits and injured. I extrapolate some stuff from that point, which you are free to label as "opinion" if you wish, but a lot of what I do is to try to steer people onto clearly safe pathways to design reliable systems. I can't and don't stop people from doing whatever dumbfool things they may wish.



That's only because you want to believe that. The real truth is that it's still entirely relevant. PCIe passthru has not changed in the ten years since I wrote it, and smarter ways to virtualize have not appeared either. The very first words are "This is still as relevant as ever"; the only thing that's changed since THEN has been that Proxmox has gotten a bit more mature (PCIe passthrough is still listed as "experimental"), XCP-ng has sprung into being, and Hyper-V seems to be not entirely catastrophic in some cases.

Fundamentals of compsci don't change quickly. CPUs have not picked up psychic PCIe lanes to communicate directly with users. RAM has not developed infinite speed and capacity. HDDs have not grown to the yottabyte range. Virtualization is still functionally the same as it was ten years ago, so it isn't particularly smart to discount the hard-won wisdom of those who came before you just because you think it's "fairly old". New server hardware doesn't work differently.

Thanks for the reply.
So I understand that it is better to pass through the SATA controller.
It does not work with my hardware.
As for other hardware, I have to live in the real world where money is tight.
For TrueNAS/server hardware I have specifically bought:
Another hard drive for parity.
A quad-port NIC.
Memory.
This was all to go with hardware I already had, over a span of about 4 years.

Second-hand server stuff is a no-go for reasons already explained.


So with what I am using, PCIe passthrough does not work with the controller.
My 'dumbfool' approach of connecting drives directly is my only realistic option.
I have been using it for a week so far with no issues.

Your guide did seem dated, but I am a user & not an expert on servers or guides. Anyone looking at a technical guide of any sort made about a decade ago should be skeptical. The 'dumbfool' option would be to just go with an apparently old guide.

A few edits:
'What I need is useful info on how directly attaching drives to Proxmox may change the chances of data loss/corruption. I can find no statistics on this. There seem to be only guides & opinions.'

Seems valid to me.
If I were a shareholder, and I assume it is deployed in many situations, I would be demanding to know why not.

I am using TrueNAS for convenience.
It was on a bare-metal machine and is now virtualised for convenience, so there is no need to recopy all my data over from backups.

I would like to point out that I changed my router from pfSense to OPNsense.
It had nothing to do with any technical aspects.
The Netgate forum was just unpleasant & unhelpful.
The OPNsense one I find much more useful & pleasant.

This is becoming the case for TrueNAS.
If it does fail/break down, I will be using something else.
Proxmox, I understand, has very convenient options.

Bye
 
Last edited:

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Your guide did seem dated, but I am a user & not an expert on servers or guides. Anyone looking at a technical guide of any sort made about a decade ago should be skeptical.

Then definitely be sure not to use PCIe, gigabit ethernet, or SATA.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
A few edits:
'What I need is useful info on how directly attaching drives to Proxmox may change the chances of data loss/corruption. I can find no statistics on this. There seem to be only guides & opinions.'
You didn't try hard enough. You know... there is this magical function that the forums have called search. It works wonderfully and enables you to find people in the past who have actually tried things like RDM, along with other types of shenanigans, and ended up coming back weeping over their data. I myself have used that function to compile sort of a 'greatest hits' collection of people who have lost their pools from implementing sub-optimal solutions.

It's just that if you expect people on the forums to do free work compiling all the statistics for you... you may be out of luck. But you know... you can build the statistics yourself using all the data you find here... instead of demanding that someone else spoon-feed it to you for free.

Seems valid to me.
If I were a shareholder, and I assume it is deployed in many situations, I would be demanding to know why not.
Obviously, you're not a shareholder or even a paying ixSystems customer, because otherwise I'm sure you could be demanding some answers from them directly. Though your responses sure make it seem as if you are indeed a paying customer/shareholder.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Though your responses sure make it seem as if you are indeed a paying customer/shareholder.

This has been an unfortunate trend in the free software ecosystem over the last decade or two: people feeling that they are entitled to be handed answers on a silver platter, and owed the work product of others for free. The community here is made up of members who contribute their time and effort to help others, but it is more than a bit frustrating to have contributions such as the virtualization guides blown off just because someone feels that they're 'old'. Need I re-release these every year? Every month? Just a bit annoyed.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Oh my, I promise that I will never mention RDM again. I understand some people have had issues with it. I don't know why, but whatever. I do not prefer that method over just passing through the entire controller.
 