HP Gen9 Server w/ P840 (HBA Mode) - No Drives Visible

Status
Not open for further replies.

rwyatt85

Cadet
Joined
Mar 23, 2015
Messages
7
Hey guys, I'm working through a stable FreeNAS build for an organization. It's an HP shop, so sticking to the brand has some meta-advantages that I won't get into. We're also avoiding repurposing old hardware, for longevity reasons. Given those constraints, the build I'm piloting ends up being this:

HP DL380 Gen9 server with 12 large-form-factor drive bays, 16 GB ECC RAM, one populated (Intel) CPU socket, and the OS installed to USB flash media. The SKU came pre-loaded with an HP Smart Array P840 RAID card. Initial deployment will be eight 4 TB Western Digital RE (SAS) drives, with the intent to grow to 12 drives as needed. I'm running FreeNAS 9.3-STABLE.

Now, I don't need the primer on not using hardware RAID. I read the 4-part build guide on the website and looked over the forums. My intention is to do this right. That said, here's where my build is breaking...

The P840 card (which currently returns zero search results on the forum) has an HBA mode option, which I enabled before even starting the install. However, FreeNAS isn't picking up any of the drives. From the console I checked /dev/ and didn't see any disk devices there either.
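
For reference, this is roughly what I ran from the console to look for the disks, and it came back with nothing:

ls /dev/da* /dev/ada*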

So my assumption is that the issue is FreeBSD not knowing how to interface with this card, even in HBA mode. Based on my build plan, the need to support 8 (preferably 12) 4 TB drives, and the need to plug into whatever cable "standard" this HP backplane presents at the back of the drive cage, what can the community recommend?

I'm sure the OS will catch up to the hardware in time, but that doesn't help me at the moment. I'm fine ordering an alternative HBA-capable card and swapping it in; it just has to support 4 TB drives, 12 drives total, and plug into these cables from the backplane. I would feel better hearing some of you agree with my assessment of what the actual problem is, though.

(Edit) I'd also like to mention that the riser supports two PCIe 3.0 x8 cards, so if I need to buy two cards to reach 12-drive support, that's viable.
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
Since it's Gen9 I suppose it's SAS 12Gbps based, which uses SFF-8643 Mini-SAS HD connectors (almost square in footprint) internally. These can be found on the LSI 9300-8i cards. Usually these servers have an expander backplane, so an 8-port controller is sufficient for all 12 bays.

That said, the HBA mode of the P840 might still not be "good enough". I'd be interested in the outputs of 'dmidecode' and 'camcontrol devlist'. IIRC the newer HP cards don't provide a proper HBA mode; your best bet is an H240ar, but even that one might not be supported.
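
From the FreeNAS shell (or over SSH), something like this should capture both:

dmidecode > /tmp/dmidecode.txt
camcontrol devlist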
 

rwyatt85

Cadet
Joined
Mar 23, 2015
Messages
7
Doesn't look like SFF-8643, based on the pictures I found. I've never had to go this deep into SAS interface technicalities before, so I'm trying to figure out exactly what this interface actually is. I've attached two photos: one of the ports on the RAID card, and one of the connections on the drive backplane. I unplugged the connector from the backplane so I could get a better shot of it. There are three of these in total on the backplane to support the 12 drives.

Also, I attached a txt of the dmidecode output (it was long) and here's the output of camcontrol devlist:

<WD WD4001FYYG-01SL3 VR08> at scbus1 target 0 lun 0 (pass0)
<WD WD4001FYYG-01SL3 VR08> at scbus1 target 1 lun 0 (pass1)
<WD WD4001FYYG-01SL3 VR08> at scbus1 target 2 lun 0 (pass2)
<WD WD4001FYYG-01SL3 VR08> at scbus1 target 3 lun 0 (pass3)
<WD WD4001FYYG-01SL3 VR08> at scbus1 target 4 lun 0 (pass4)
<WD WD4001FYYG-01SL3 VR08> at scbus1 target 5 lun 0 (pass5)
<WD WD4001FYYG-01SL3 VR08> at scbus1 target 6 lun 0 (pass6)
<WD WD4001FYYG-01SL3 VR08> at scbus1 target 7 lun 0 (pass7)
<Verbatim STORE N GO 5.00> at scbus3 target 0 lun 0 (da0,pass8)

It sees the drives, so that's fun.


(Edit) Here's at least the controller section of the dmidecode output, in case that's all you were interested in:

Handle 0x00C3, DMI type 203, 31 bytes
OEM-specific Type
Header and Data:
CB 1F C3 00 B3 00 93 00 3C 10 39 32 3C 10 CB 21
01 07 FE FF 00 00 07 0A 02 01 FF FF 01 02 03
Strings:
PciRoot(0x0)/Pci(0x3,0x2)/Pci(0x0,0x0)
HD.Slot.2.1
Smart Array P840 Controller
 

Attachments

  • WP_20150323_001.jpg (492.9 KB)
  • WP_20150323_002.jpg (95.5 KB)
  • freenasoutput.txt (45.6 KB)

rwyatt85

Cadet
Joined
Mar 23, 2015
Messages
7
Alright, so I've educated myself a bit more. The connectors coming off the hard drive backplane are the standard, ubiquitous SFF-8087. There are three of them, each serving four drives, which makes sense: 3 x 4 = 12, the number of drives the server can hold.

However, I still can't identify the connector on the RAID card end of the cable. It looks like a double-wide SFF-8087: literally the same, except wider. I don't see why I couldn't just buy three SFF-8087 cables, bypass the existing unidentified cable, and plug into a new card that uses standard SFF-8087 ports. To that end, it really comes down to whatever card you guys would recommend for 12+ drives.

On another note: is the fact that camcontrol devlist returned all the hard drives of interest? Does it indicate that the problem may be something other than what I'm presuming?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
It's interesting that camcontrol devlist shows the drives, but there are no da* or ada* designations for them. I don't know what that means, but I'd speculate it's something driver-related.
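
If you want to poke at it, something along these lines would show which driver claimed the card and whether any da devices ever attached (I'm assuming it's attaching via the ciss(4) Smart Array driver):

pciconf -lv | grep -B2 -i 'smart array'
dmesg | egrep -i 'ciss|da[0-9]'
camcontrol devlist -v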

I concur that the backplane connectors look like SFF-8087s. Assuming your backplane doesn't contain a SAS expander (and if it has three SAS ports on it, I'd guess it wouldn't), the easiest and best-supported solution would be 2 x IBM M1015s, or equivalent LSI-branded cards, flashed to IT mode with P16 firmware.
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
No, I can confirm from personal experience that 9.3 is still using P16.

Weird, I could have sworn my home build keeps telling me to upgrade from 16 to 18. I'll have to double-check tonight now. I haven't had time to upgrade my M1015 to match, so I've been ignoring the warning nag screen.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
9.2.1.x and 9.3 use P16. This won't change anytime soon. Maybe for 10.1, if the whole situation gets sorted out.
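
If you want to check what a given card is actually running, the mps(4) driver prints it at attach time; something like

grep -i 'mps0' /var/run/dmesg.boot

should turn up a line along the lines of "mps0: Firmware: 16.00.00.00, Driver: 16.00.00.00-fbsd" (exact format varies a bit between driver versions). The alert nag is based on that firmware/driver comparison, IIRC.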
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
Yep, I was wrong. Just fixed that and flashed P16. My mistake: the old card had the correct P16, but when I moved the drives to a new system a while back, the new card's firmware didn't match. Upgrading to 9.3 over the weekend raised the flag, and I read it backwards.
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
For business servers I wouldn't bother with crossflashing M1015 cards; grab the LSI 9207-8i and 9207-4i4e instead. These can be flashed to P16 firmware as well, without any crossflashing.
 

rwyatt85

Cadet
Joined
Mar 23, 2015
Messages
7
For business servers I wouldn't bother with crossflashing M1015 cards; grab the LSI 9207-8i and 9207-4i4e instead. These can be flashed to P16 firmware as well, without any crossflashing.

This (the 9207-8i) is precisely what I had decided on based on my research, so it's comforting to hear it recommended. I've even already downloaded a copy of the P16 firmware for it.
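
For anyone following along later, the flashing procedure I'm planning on is just LSI's sas2flash utility (there are DOS, EFI, and FreeBSD builds; the firmware and BIOS file names below are whatever ships in the P16 package, so adjust to match yours):

sas2flash -listall
sas2flash -o -f 9207-8.bin -b mptsas2.rom
sas2flash -listall            # confirm the firmware now shows a P16 (16.x) version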

I've got two SFF-8087 cables on the way to replace the nonstandard ones in the server, and before ordering the card I'm going to take a shot at the two onboard SFF connectors I noticed on the motherboard. If they work, they may suffice until the server needs to grow past 8 disks; either way, I needed the cables.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
BTW, you may want to reconsider your pool configuration. There isn't really a good way to expand a RAIDZ pool from 8 disks to 12.
 

rwyatt85

Cadet
Joined
Mar 23, 2015
Messages
7
BTW, you may want to reconsider your pool configuration. There isn't really a good way to expand a RAIDZ pool from 8 disks to 12.

This is something that I wanted to experiment with once I got the server stood up, before it went into production. My understanding is that you can add vdevs to a storage pool to expand it. Is this not true, or "not as easy as it sounds"? My architecture drawing that I threw together for my team is attached. My assumption was that I could add an additional two vdevs (5 & 6) using the same mirror-pair structure.

I'm basing that on section "8.1.1.3. Extending a ZFS Volume" in the online documentation.
 

Attachments

  • nasdiskplan.png (29.1 KB)

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
This is something that I wanted to experiment with once I got the server stood up, before it went into production. My understanding is that you can add vdevs to a storage pool to expand it. Is this not true, or "not as easy as it sounds"? My architecture drawing that I threw together for my team is attached. My assumption was that I could add an additional two vdevs (5 & 6) using the same mirror-pair structure.

I'm basing that on section "8.1.1.3. Extending a ZFS Volume" in the online documentation.
Yes, you can expand the pool by adding vdevs. Since you're mirroring, you'd start with four pairs striped together and then add two more pairs to the stripe when you extend the pool. Be very careful when doing the extension, though: once you commit it, it is done. People have been known to accidentally extend with a non-redundant vdev, and that can't be reversed short of destroying the pool. Another caveat is that after you extend, the pool may be a bit slower for writes, since ZFS will direct more writes to the two new pairs until all pairs are balanced (how noticeable this is depends on how much data is on the original four pairs when you extend).
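
At the command line the operation boils down to something like this (pool and device names are placeholders; in FreeNAS you would normally do it through the Volume Manager in the GUI so the new disks get partitioned the same way as the originals):

zpool add -n tank mirror da8 da9 mirror da10 da11    # dry run: confirm both new vdevs show up as mirrors
zpool add tank mirror da8 da9 mirror da10 da11
zpool status tank

If you leave out the word "mirror" in there, you'd be asking for single, non-redundant vdevs, which is exactly the mistake that can't be undone.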
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
What you're proposing will work just fine. My concern was that if you were to set up an 8-disk RAIDZ2 now, adding four disks later would have the pool rather unbalanced. Everything would still work, but performance might not be optimal. With mirrors, though, there shouldn't be an issue. You'll be using half your total capacity for redundancy, of course, and the loss of both disks in any mirrored pair will trash your pool.
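
To put rough numbers on it: twelve 4 TB drives as six 2-way mirrors is about 24 TB of raw pool capacity (before ZFS overhead and free-space headroom), versus roughly 40 TB raw for a single 12-disk RAIDZ2 (10 data disks x 4 TB). The mirror layout trades capacity for easier expansion and faster resilvers.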
 

cjbraun5151

Dabbler
Joined
Apr 28, 2015
Messages
10
rwyatt85, I'm curious how your build turned out. We have the exact same server configuration, an HP DL380 with the P840 controller, and we're not seeing any of the 6 TB drives installed. Did you go with the 9207-8i and replace the cables? How did it work out?
 

rwyatt85

Cadet
Joined
Mar 23, 2015
Messages
7
rwyatt85, I'm curious how your build turned out. We have the exact same server configuration, an HP DL380 with the P840 controller, and we're not seeing any of the 6 TB drives installed. Did you go with the 9207-8i and replace the cables? How did it work out?

I ended up buying three SFF-8087 cables and two LSI 9207-8i cards, flashed the firmware, and that got me in business. The cables run straight from the drive backplane into the cards, so 3-foot SFF-8087 cables are the ideal length to order.

You HAVE to use an LSI card (or two); the onboard controller simply doesn't work, no matter what you do.
 

cjbraun5151

Dabbler
Joined
Apr 28, 2015
Messages
10
Thanks, I just got my LSI card today and am waiting on the cables. From the LSI download page, which firmware version did you use? The FreeBSD P16?
 