HP / Dell - HBA discussion (again)

dustmaster

Dabbler
Joined
Nov 24, 2020
Messages
10
Hi to all!
For a fresh Proxmox on ZFS machine I was planning to purchase a refurbished HP DL380 Gen10 with integrated S100i controller, 8x SAS/SATA backplane and NVME PCIe riser to 8x NVME/U.2 cage.

When I started researching an HBA that fits this setup I quickly realised that the S100i, as well as the standard HPE controllers like the P408 or E208, are said not to be a good choice. For example, see this post: the hybrid mode is not a "real" passthrough, and in any case the drivers are far less proven reliable in practice and by the community.

I got a bit desperate, as the viable generic HP choices seemed to come down to the HPE H200 and H220 (8x, PCIe 3.0, 6G) in IT mode, which sport LSI chips (unlike the H240). But no 12G, and no idea about my NVME/U.2 drives.

Installing an LSI PCIe card like the LSI SAS 9305-16i or LSI SAS 9300-8i would be the second option, but nobody really wanted to endorse it outright. I found many threads about HP being picky with third-party hardware, jet-engine fans when hardware goes unrecognized, etc.

So I even thought about going Dell as from my research those PERCs seemed to be the better-liked devices - not to mention Dell also being the more favoured brand for homelab servers as they are less prone to license pains and compatibility issues.

In the end I returned to Reddit's ZFS community, where a rather reliable source dismissed all the HP bashing and even revived my original plan: just use a P408 in hybrid mode - put the system volume on hardware RAID1, run the other drives as HBA and create your ZFS pool on them. He posted some original drive info he pulled from an HPE SmartArray E208i-p, so it all looked fairly trustworthy.

What do you make out of this? There seems to be a controversy on the horizon.

I'd much rather follow the positive stance, as I can't believe it should be that tricky to get a decent Proxmox install with ZFS on fairly current HP hardware .. also those refurbed HPs seem to be quite a bit cheaper than Dells. :wink:
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
those PERCs seemed to be the better-liked devices
No, but Dell does have a bunch of HBA options which are lightly-customized LSI HBAs.
no idea about my NVME/U.2 drives.
Those are irrelevant to the HBA discussion. Using NVMe SSDs with a tri-mode HBA is akin to buying racing tires for a Fiat Punto - technically possible, but you just wasted the SSDs because the HBA is holding things up. Incredibly, those things pipe all the NVMe stuff through the SCSI stack, eliminating the latency benefits of NVMe and severely hampering the potential bandwidth.
Directly connect NVMe SSDs, using redrivers/retimers and PCIe switches as needed.
a rather reliable source
I tend to disagree, sounds like some random guy with an HPE server. We've seen plenty of misbehaving systems that boiled down to "HPE's SAS stuff is dodgy as hell". And despite multiple attempts, nobody has really been able to demonstrate a reliable SAS solution with anything other than LSI/Broadcom (some SATA/AHCI controllers are okay, but that area is a minefield). Note that "the disks are visible and seem to work" is a low bar to clear, though admittedly even that is a struggle for many RAID controllers (especially old ones).
Installing an LSI PCIe card like the LSI SAS 9305-16i or a LSI SAS 9300-8i would be the second option but nobody really wanted to approve it directly. Found many threads about HP being picky about third-party hardware, jet engine fans in cases of unrecognized hardware, etc.
Picky is one thing, but rejecting HBAs in the standard slots? That's nasty even by HPE standards.
 

dustmaster

Dabbler
Joined
Nov 24, 2020
Messages
10
Hi there,
interesting what you wrote! Two very different opinions on the topic - maybe he sells those HPs .. :wink:

So to be constructive the way to go with my HP box could be:
- get an HPE H200 or H220 (safe but slow) OR an IT-flashed LSI SAS 9305-16i / SAS 9300-8i (fast, but nobody has yet given me a thumbs up on the combination with the DL380) to take care of the SAS drives
- connect the U.2 drive cage with something like this here, which I think is already present in the machine (I did not think about a tri-mode controller when I posted)

OR

- get a Dell :wink:

Questions:
- are the U.2 drives as described a safe bet, and will they work out of the box with ZFS via Proxmox? Meaning stable drivers, SMART values and such? It seems a far less critical thing to do compared to using certain HBAs!?
- maybe somebody could comment on the combo of my particular machine with the LSI PCIe cards?

- would you rather recommend going Dell, say with a R740 and a Dell PERC H730P Adapter? Will be a bit tougher to get NVME support in that one, though

Btw., concerning the PCIe cards I just meant that, for example, temperature or similar readings might not work, which confuses thermal management. I have heard of HP being picky, but what that means in each individual case would have to be found out ..
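On the SMART question: under a stock Proxmox/Debian install, NVMe health is readable directly via the kernel's NVMe driver, with no HBA in the path. A minimal sketch - the device name and the sample output line are hypothetical; on a real host you would query the drive itself:

```shell
# On the Proxmox host (as root), either of these reads NVMe health directly:
#   smartctl -a /dev/nvme0
#   nvme smart-log /dev/nvme0
# Hypothetical sample line from `nvme smart-log` (real values vary per drive):
sample='percentage_used                     : 3%'
# Extract the wear figure the same way you would from the live output:
echo "$sample" | awk -F: '{gsub(/[ %]/, "", $2); print $2}'
```

The same `smartctl` binary Proxmox ships also feeds its built-in disk health view, so if these commands work, the GUI values generally do too.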
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
- would you rather recommend going Dell, say with a R740 and a Dell PERC H730P Adapter? Will be a bit tougher to get NVME support in that one, though

The PERC H730P is a RAID controller and falls squarely under the RAID controller guidance:


Proxmox is not TrueNAS but there is a lot of commonality in drivers. Proxmox and basically all the other ZFS projects warn against RAID controllers if you bother to dredge through the documentation, and TrueNAS has an edge here because of the sheer number of deployments. Lots of people have used "other things" with TrueNAS and then eventually came to regret it; it is ill-advised to listen to people with more opinion than experience who suggest you can just use RAID controller $XYZ because they've seen it (seem to) work.

The H730P/H740P do not support NVME AFAIK.
 

dustmaster

Dabbler
Joined
Nov 24, 2020
Messages
10
The PERC H730P is a RAID controller and falls squarely under the RAID controller guidance:
I know .. I just read somewhere that you could switch it to a proper HBA mode.
The H730P/H740P do not support NVME AFAIK.
Right, they don't. I was actually thinking about a "splitter" solution similar to the one I linked above for the HP.

So it sounds like getting suitable Dell adapters would also need some more research :rolleyes:
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I know .. it just read somewhere like you could change it to a proper HBA mode.

Go read the link I provided, especially 3), 4), 4a), and 5).

Just because someone calls it "proper HBA mode" doesn't make it so, and also even if it were a true HBA mode, we're also looking for driver reliability as well. The only disk adapters that are known to work well are the LSI HBA IT (and also IR), Intel SATA AHCI, Intel PCH SCU, and a few other more exotic things. Most of the cheap SATA AHCI controllers are dodgy. If you don't mind occasional problems, then be my guest...
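One quick sanity check on whatever card ends up in the box: LSI/Broadcom devices carry PCI vendor ID 1000, so `lspci -nn` tells you immediately whether you are on the well-trodden path. The sketch below runs against a hypothetical sample line; on a live system you would pipe `lspci -nn` directly:

```shell
# Hypothetical lspci -nn line for an LSI SAS3008 (the chip on a 9300-8i);
# on a real host: lspci -nn | grep '\[1000:'
sample='01:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 [1000:0097]'
if echo "$sample" | grep -q '\[1000:'; then
    echo "LSI/Broadcom controller present"
fi
```

A card that shows up with a different vendor ID (HPE Smart Array parts report as Adaptec/Microsemi, for instance) is using a different driver stack entirely, whatever its marketing name suggests.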
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
@dustmaster These guys know what they are talking about; they live and breathe this stuff and have been here for a long while. If either of them told me it will not work or be stable, I would not use it. Of course, if you take a risk and it works, please post the results and how you did it. If it flops, post that as well. Even failures help educate the community.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
even if it were a true HBA mode, we're also looking for driver reliability as well
This is an overlooked piece of the equation, and historically has been a big problem on TrueNAS because of the FreeBSD base, specifically with the HP cciss driver experiencing an "exciting and novel approach" to behavior under pressure.

This issue is somewhat ameliorated on TrueNAS SCALE by virtue of it being able to use vendor Linux drivers - which, for better or worse, often receive more attention than FreeBSD - but that isn't license to use just any chip.

u/ewwhite is definitely more than "some guy who owns HP servers", so I'm inclined to put faith in his experience; but if I recall correctly it is based primarily in the Linux world, so use caution when applying it to the FreeBSD-based CORE.

It may well be that the Microsemi-based cards using the smartpqi driver are perfectly stable and reliable under SCALE, even in a "mixed-mode" - but many TrueNAS users adopt a similar attitude to an extreme-sports enthusiast looking at a yet-untraversed piece of terrain - "Looks safe to me. You go first."
 

dustmaster

Dabbler
Joined
Nov 24, 2020
Messages
10
Well, yes,
when I asked u/ewwhite whether his setups work out of the box (with my Proxmox setup, for example) or only with some magic sauce, I did not get an answer. Btw., in the Proxmox groups I got almost no feedback at all about my planned setup.

Don't get me wrong - I am not trying to advocate some quirky cards here just to stir something up.

I am looking for a solution that is affordable and easy to get. @Ericloewe pointed to the LSI based Dell adapters, so I hypothesized about the H730p. Which ones actually feature LSI chips is open for my weekend research and I have no clue yet if this route would be more convenient than the HP options I noted.

Still nobody comments on them 9300 LSI cards inside Gen10 HP metal :wink:

And to bypass the whole HBA issue I proposed the NVME variant.. would this be safe at least?
 
Last edited by a moderator:

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
And to bypass the whole HBA issue I proposed the NVME variant.. would this be safe at least?
One of the major benefits of NVMe is that it removes a big layer of firmware implementing what is, by now, a roughly 40-year-old protocol, depending on whether you count SASI as Gen. 0 SCSI or not. For the purposes of our discussion, it means a lot less can go wrong.
If you don't need PCIe switches (i.e. if you're not going to be using a bunch of PCIe cards), you can just wire things up directly with retimer cards and enjoy not paying Broadcom or Microchip for the luxury of attaching storage to your server.
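In Proxmox terms, once the directly-attached U.2 drives show up as /dev/nvme* devices there is nothing special left to do - ZFS consumes them like any other block device. A minimal sketch; the pool name and device IDs below are hypothetical placeholders, so substitute the real /dev/disk/by-id/ paths of your drives:

```shell
# Two directly-attached U.2 drives as a ZFS mirror, no HBA in the path.
# ashift=12 assumes 4K sectors, the safe default for modern SSDs.
# By-id paths are stable across reboots, unlike bare /dev/nvmeXn1 names.
zpool create -o ashift=12 tank \
    mirror /dev/disk/by-id/nvme-EXAMPLE_DRIVE_A /dev/disk/by-id/nvme-EXAMPLE_DRIVE_B
zpool status tank
```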
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
@Ericloewe pointed to the LSI based Dell adapters
Well, it was more a statement to the effect of "Dell will sell you a server with a first-class HBA option", namely the HBA330 for Gen 13/14/15 or HBA350 for Gen 15/16. The PERCs aren't really suitable (some kinda work, but they're immature as HBAs and performance is abysmal, others are outright unsuitable).
That's not to say you can't buy a Dell HBA330 in PCIe card form factor for an HPE server, but it's typically easier and cheaper to get a vanilla LSI SAS 9300.
 

dustmaster

Dabbler
Joined
Nov 24, 2020
Messages
10
Thanks a lot, @Ericloewe, that's precious info!

As you mentioned the smartpqi driver - I can't really find out if this one is integrated into Proxmox. Maybe it just boils down to that - ZFS via TrueNAS Core/FreeBSD vs. Proxmox/Debian. Certain drivers supporting certain cards ..
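To answer the Proxmox part concretely: whether a given driver ships with the running kernel can be checked with `modinfo`. The sketch below runs the check against a hypothetical `modinfo` output line (the kernel version in the path is made up); on the actual host you would simply run `modinfo smartpqi`:

```shell
# On the Proxmox host itself: modinfo smartpqi
# A driver bundled with the kernel reports a filename under /lib/modules.
# Hypothetical sample of that line, for illustration only:
sample='filename:       /lib/modules/6.8.12-4-pve/kernel/drivers/scsi/smartpqi/smartpqi.ko'
case "$sample" in
    *smartpqi*) echo "smartpqi available" ;;
    *)          echo "smartpqi missing"   ;;
esac
```

Whether the driver is present is of course a separate question from whether it is trustworthy, which is the point the thread keeps circling back to.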
 