HPE Gen9 Smart Array, or HBA mode?

pter9527

Cadet
Joined
Feb 2, 2022
Messages
3
Hello guys. My team's got refurbished HP DL380 Gen9 boxes, each with a P440ar controller and a 12-bay LFF front drive cage. We're planning to run TrueNAS as both a storage provider and a hypervisor, hosting all our business VMs plus a shared directory for our branch offices.

I know there have been lots of discussions about HBA vs. hardware RAID, but there seem to be few threads about this newer HP hardware. Since we only have LFF drives and an LFF backplane, we probably can't find an extra backplane for the SFF drives we'd need to install the TrueNAS OS on (ideally a hardware RAID1 on a separate controller).

So we're left with just the P440ar (which should offer both RAID mode and HBA mode). I'm testing an install of TrueNAS onto a RAID1 array, then adding individual single-drive RAID0 volumes as vdevs. We do run a business, so: is that reliable, or risky? We're on TrueNAS CORE 12.0-U2.1. Thanks in advance for the help!
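
In case it helps, here is roughly how I've been switching the controller between modes with HPE's ssacli tool (a sketch only, run from a Linux maintenance environment or the offline Smart Storage Administrator; the slot number is an assumption, so check what your box reports):

# Show the controller and its current mode (slot 0 is an assumption)
ssacli controller slot=0 show detail
# Switch to HBA mode -- note this destroys any existing logical drives
# and requires a reboot to take effect
ssacli controller slot=0 modify hbamode=on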
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Start by reading this:
 

pter9527

Cadet
Joined
Feb 2, 2022
Messages
3
Start by reading this:
Thanks for sharing. So it seems FreeBSD, by its nature, works better with LSI HBAs.
Is there any risk if I install the OS on a single drive? My worry is: if that drive is lost, are my data and VMs safe?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It's not really a FreeBSD issue. At its heart, ZFS needs correct behaviour out of its storage systems. Many devices do not work "correctly" for some portions of the puzzle: perhaps they do not correctly implement hot plug, or they're idiot JBODs that encapsulate the disk in their own proprietary partition table (hi, 3Ware!), or they have cache that may improperly reorder commands/writes, or they have weedy CPUs that get crushed under the workload, or (additional list of stupid fails here). On top of this, device drivers may also be problematic with FreeBSD and/or Linux; some of them are just designed for other use cases (such as CISS), or authored without assistance or documentation from the manufacturer (lots).

On a FreeNAS/TrueNAS system, your data, including jails, VMs, etc., is stored on the storage pool(s), not on the boot media. However, the boot media may contain the "system" dataset, which includes the system configuration. You may want that to reside on the storage pool, and/or you may wish to keep an independent backup of the system configuration. This is easily downloaded.
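
If you want a belt-and-braces copy outside the GUI, a minimal sketch (this assumes the stock config location on CORE; the destination host and path are hypothetical):

# The CORE system configuration lives in a single SQLite database;
# copy it off-box after any significant change
scp /data/freenas-v1.db admin@backuphost:/backups/truenas-config.db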
 

pter9527

Cadet
Joined
Feb 2, 2022
Messages
3
It's not really a FreeBSD issue. At its heart, ZFS needs correct behaviour out of its storage systems. Many devices do not work "correctly" for some portions of the puzzle: perhaps they do not correctly implement hot plug, or they're idiot JBODs that encapsulate the disk in their own proprietary partition table (hi, 3Ware!), or they have cache that may improperly reorder commands/writes, or they have weedy CPUs that get crushed under the workload, or (additional list of stupid fails here). On top of this, device drivers may also be problematic with FreeBSD and/or Linux; some of them are just designed for other use cases (such as CISS), or authored without assistance or documentation from the manufacturer (lots).

On a FreeNAS/TrueNAS system, your data, including jails, VMs, etc., is stored on the storage pool(s), not on the boot media. However, the boot media may contain the "system" dataset, which includes the system configuration. You may want that to reside on the storage pool, and/or you may wish to keep an independent backup of the system configuration. This is easily downloaded.
Great news! Already testing!!
 

beardmann

Cadet
Joined
Oct 11, 2021
Messages
8
We are also in the process of building a larger system, where we will be using an HPE server.
For the data zpools we will be using LSI HBAs in IT mode, which work great.
For the boot-pool we do have some issues, because we would like to create it as a mirror so that we can tolerate a disk failure there.
We have been testing this on a VM (ESXi-based) with two disks; the installer shows the disks, and it also creates the boot-pool in a mirrored setup.
Yet as we start testing and replacing disks, the system cannot boot... here is the recipe:
Virtual disk 1: sda
Virtual disk 2: sdb
A mirrored boot-pool with the two disks is created.
We are now able to boot fine... we can simulate a disk failure by removing disk 1, and we are still able to boot just fine...
We then present the VM with a replacement disk, wipe it, and then do a replace on the boot-pool.
This resilvers the pool just fine, but it ends with an error like this:
[EFAULT] Command grub-install --target=i386-pc /dev/sdb failed (code 1): Installing for i386-pc platform. grub-install: error: failed to get canonical path of `/dev/replacing-1'.
We then try to issue the grub-install command described above in the console, and it completes OK...
Yet... we are now unable to boot the system off this replaced disk... the system reports "No Operating System...", so it does not even identify anything to boot... no GRUB, no nothing...
This is on TrueNAS SCALE (22.02.00).
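
For what it's worth, the `/dev/replacing-1` in that error suggests grub-install was invoked while the pool was still mid-replace, when the vdev temporarily appears as a "replacing" placeholder. A hypothetical manual sequence we have been experimenting with (partition numbers and device names are assumptions for a stock SCALE layout):

# sda is the surviving member, sdb the blank replacement disk
sgdisk /dev/sda -R /dev/sdb                   # copy the partition table to sdb
sgdisk -G /dev/sdb                            # give the copy unique GUIDs
zpool replace boot-pool <old-dev> /dev/sdb3   # assumed boot-pool data partition
zpool status boot-pool                        # wait until the resilver completes
grub-install --target=i386-pc /dev/sdb        # only then reinstall GRUB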

Because of this, we are also looking into using a RAID controller to mirror the boot disks.
Sadly, we currently only have the built-in B140i Smart Array, and even after we have created the mirror, TrueNAS identifies both disks in the installer rather than one mirrored device... this can only be because of a missing driver.
I would think that with one of the "larger" Smart Array controllers, like the P440 or P480, the mirrored disks would be presented as one logical disk to the installer?

We would very much like to avoid this RAID controller, so can someone explain why we cannot boot after a disk replacement? :smile:

/B
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
We would very much like to avoid this RAID controller, if someone can explain why we cannot boot after a disk replacement?

Yeah, my solution to this is to use IR firmware on an HBA. The IR and IT firmware are closely related (IT is a stripped-down version) and both work through the MPR/MPS driver, so it is highly compatible. Using IR to create a "hardware" (card-based) RAID1 boot device gives you the redundancy, but robs ZFS of its ability to fix issues with redundancy. You can use THREE SSDs for nirvana: two in a RAID1, which become your "boot mirror" behind the card, and then one more as a ZFS redundancy source (which will never be booted from).
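
A sketch of the end state, assuming the card exposes the RAID1 pair as one device (device names and partition numbers here are purely illustrative):

# ada0 = the card's RAID1 volume (two SSDs), ada1 = the bare third SSD
zpool attach boot-pool ada0p2 ada1p2   # ZFS mirrors the RAID1 volume with the bare SSD
zpool status boot-pool                 # now ZFS has redundancy of its own to repair with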

I am not interested in ruining my brain this morning by looking up HP part numbers, so my vague direction to you is to use the same type of controller for the boot pool as you are using for the main pool, presumably some OEM LSI HBA in IT mode; just buy another and flash it to IR mode.
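
If you go that route, the reflash itself is a sketch like this (Broadcom's sas3flash for a SAS3-generation card; the firmware and boot ROM filenames are placeholders for whatever matches your exact model):

sas3flash -listall                              # confirm the adapter is seen
sas3flash -o -f 9300-8i_IR.bin -b mptsas3.rom   # write IR firmware + boot ROM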

It is even okay to load the main pool's HBA with IR firmware if necessary, but there is a performance drop of several percent in doing so, and of course you MUST avoid creating any "hardware" card-based RAID1 or JBOD configurations on it.

If you are having problems figuring out my point here, please ask. I am half asleep. :smile:
 

beardmann

Cadet
Joined
Oct 11, 2021
Messages
8
Yeah, my solution to this is to use IR firmware on an HBA. The IR and IT firmware are closely related (IT is a stripped-down version) and both work through the MPR/MPS driver, so it is highly compatible. Using IR to create a "hardware" (card-based) RAID1 boot device gives you the redundancy, but robs ZFS of its ability to fix issues with redundancy. You can use THREE SSDs for nirvana: two in a RAID1, which become your "boot mirror" behind the card, and then one more as a ZFS redundancy source (which will never be booted from).

I am not interested in ruining my brain this morning by looking up HP part numbers, so my vague direction to you is to use the same type of controller for the boot pool as you are using for the main pool, presumably some OEM LSI HBA in IT mode; just buy another and flash it to IR mode.

It is even okay to load the main pool's HBA with IR firmware if necessary, but there is a performance drop of several percent in doing so, and of course you MUST avoid creating any "hardware" card-based RAID1 or JBOD configurations on it.

If you are having problems figuring out my point here, please ask. I am half asleep. :smile:
Well, you do not have to stay awake because of this ;-)

Our server is a DL380 Gen9 with only the on-board B140i SATA controller, which I do not think we can re-flash into IR mode.
It has two modes: either "Smart Array", from which you can configure a mirror, or "JBOD". The mirrored setup, I think, is driver-based, and TrueNAS must lack that driver, since it presents both disks at installation...
HPE has a few other Smart Array card options that install on the motherboard instead of in a PCIe slot (we will need all of our PCIe slots) :smile:
I would think, or hope, that those array cards would be able to mirror two disks and present one device to TrueNAS...
Meanwhile we are still testing "IT" mode with a VM and two disks... so far we have ended up with unbootable setups 3-4 times ;-)
But I think some kind of pattern is starting to form... and just maybe it will work... but the errors we see while using the GUI to replace disks are not comforting at all :smile:
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Experimentation and testing prior to deployment correlate with eventual success. :smile:
 

djb

Explorer
Joined
Nov 15, 2019
Messages
76
Hello everyone! I have a DL380 Gen9 and set the controller to HBA mode.
During the TrueNAS installation I can see all the drives. I select two of them for RAID1 and get the error "can't install on selected drives".
Any suggestions?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Hello everyone! I have a DL380 Gen9 and set the controller to HBA mode.
During the TrueNAS installation I can see all the drives. I select two of them for RAID1 and get the error "can't install on selected drives".
Any suggestions?

@sretalla posted the answer up above in #2. See


You most likely need to pull the fake HBA and replace it with a real HBA.
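
One quick way to confirm what you actually have, assuming CORE/FreeBSD (a Smart Array still attaches via the ciss driver even in "HBA mode", while a real LSI HBA shows up under mps or mpr):

# See which storage driver claimed the controller at boot
dmesg | grep -Ei 'ciss|mps|mpr'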
 