HBA Card and Storage

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Hello guys,

So, I'm getting a couple of LSI HBA cards (9400-16i). As per the datasheet, they support HDD, SSD and NVMe. I want to know whether it is fine to mix media types on one card, or whether I should dedicate one card per type of storage media, i.e. Card #1 for HDD, Card #2 for SSD and Card #3 for NVMe. If I use them all together on a single card, will it bottleneck or otherwise affect the media speed? Is there any kind of risk involved, and which is the correct way?

Thanks
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Hey friend!

Of course you can mix it up with no risk. That is the good part of ZFS: the drives don't care where they connect, ZFS just wants all the drives available so it can provide the data. Bottlenecking is always possible, but all "bottleneck" means is that a given item is the slowest component in the entire data path. For HDDs, the HDD itself is the bottleneck; for SATA SSDs, the SSD or the SATA interface is the bottleneck; for NVMe, it is the interface, which could be the NVMe link to the LSI card or the LSI card's link to the motherboard. One last item here: if you are transferring data in or out of the NAS, then the NIC is likely the bottleneck. And we will of course assume that the pool design can provide some really high speeds as well.
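
To put rough numbers on that data path (a back-of-the-envelope sketch with ballpark figures I'd assume for a 10GbE NIC and a PCIe 3.0 x8 HBA, not measurements from your hardware):

```python
# Ballpark throughput of each component in a typical data path (MB/s).
# These are rule-of-thumb numbers, not measurements from any specific system.
path = {
    "HDD (7200 rpm, sequential)":    250,
    "SATA SSD (SATA 6Gb/s limit)":   550,
    "NVMe SSD (PCIe Gen3 x4)":      3000,
    "HBA host link (PCIe Gen3 x8)": 7880,  # ~985 MB/s per lane x 8 lanes
    "NIC (10GbE)":                  1250,  # 10 Gb/s divided by 8 bits per byte
}

# The bottleneck is simply the slowest component in whatever path the data takes.
for name, mbps in sorted(path.items(), key=lambda kv: kv[1]):
    print(f"{name:32s} ~{mbps:5d} MB/s")

print("\nSlowest link in this example:", min(path, key=path.get))
```

So for HDD traffic the drives themselves limit you, while for NVMe it is the card's host link or the NIC that limits you first.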

But you can use the drives wherever you desire. If you happen to be benchmark testing, you can test, power off, rotate some cables around, power on and retest. This will help you determine the optimum configuration, assuming one configuration is slower than another.

EDIT: I would, however, recommend that you use the minimum number of add-on cards; they can consume a lot of power and generate a lot of heat.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Of course you can mix it up with no risk. That is the good part of ZFS: the drives don't care where they connect, ZFS just wants all the drives available so it can provide the data.
Bingo!

I was a bit worried about that.
Bottlenecking is always possible, but all "bottleneck" means is that a given item is the slowest component in the entire data path. For HDDs, the HDD itself is the bottleneck; for SATA SSDs, the SSD or the SATA interface is the bottleneck; for NVMe, it is the interface, which could be the NVMe link to the LSI card or the LSI card's link to the motherboard. One last item here: if you are transferring data in or out of the NAS, then the NIC is likely the bottleneck. And we will of course assume that the pool design can provide some really high speeds as well.
I meant any loss of speed when mixing media types on one HBA card. For example, I read on this forum that although LSI/Avago/Broadcom call it a Tri-Mode adapter, connecting NVMe through it is not a good idea and a PLX (PCIe switch) card should be used instead, as it is more reliable. Is that correct?

But you can use the drives wherever you desire. If you happen to be benchmark testing, you can test, power off, rotate some cables around, power on and retest. This will help you determine the optimum configuration, assuming one configuration is slower than another.
Cool cool!

EDIT: I would, however, recommend that you use the minimum number of add-on cards; they can consume a lot of power and generate a lot of heat.
Gotcha. Good news on that front: I'm already limited by my PSU anyway.

Secondly, for the pool design: I know that RAIDZ1 is risky with drives this large, so should it be RAIDZ2 or RAIDZ3? How many vdevs out of 14 disks of 16TB each? What are my options for maximum IOPS/speed and redundancy?

Third, is there any way to know if an HBA card is failing? And suppose it fails during a read/write operation, what's the risk? Data loss, or will the drives be affected too? I think the main cause of failure is poor cooling, but I'm not worried about that. I have seen a few HBA cards fail, yet I have rarely seen the Intel PCH/onboard SATA fail.

Fourth, I'm missing a rack chassis with a good number of bays, but I also don't need one at the moment. Since my PSU only has 14 SATA connectors, is there any way I can power a few more disks with some solution, like a custom backplane or some kind of converter? I think I have the option of Molex adapters to add two more SATA devices, but honestly I'm not comfortable with that; I had some issues with those in the past, though those connectors were also cheap ones from the local market.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
A few questions I would like to ask:

1. Do all NICs have the option to use MTP/MPO and LC cables? I assume the two cable types require different modules/transceivers, or are the modules the same?

2. Do MTP/MPO cables have something like the fiber jacket color coding of LC cables?

3. Cost aside, which is more reliable and better in terms of speed: LC or MTP/MPO?

4. In addition, I just came to know that MTP/MPO connectors need to be cleaned. How frequently is cleaning needed, and what is the best and safest method? Do LC cables also need to be cleaned?

5. From this forum, I got the impression that QSFP is an older technology than SFP28. Is that true?

6. I was looking at the Broadcom site to see whether they have any newer HBA cards, and I noticed a few things I would like to ask about:

- I saw the 9620-16i/9600-24i/9600-16i and the 9502-16i/9500-8i/9500-16i. The 96xx models are labelled eHBA. What is an eHBA (enhanced HBA)? Is it called that because of the 24Gb/s support?

- I saw that these newer 96xx and 95xx HBAs have a single mixed firmware, i.e. no separate IT-mode or IR-mode firmware. So is this some kind of unified firmware, and does the card have any way to switch between IT and IR mode?

- Is this fine with TrueNAS? As far as I know, TrueNAS handles the RAID itself and no hardware/software RAID should be configured underneath it; things can get messy if done that way. Although I have never done that myself, and I have never seen any warnings when someone tried to use RAID underneath.

- I also noticed that Broadcom says the 95xx and 96xx are SAS4. Does that mean the 94xx is still SAS3?

- Are these new 95xx and 96xx HBAs supported on TrueNAS Core or TrueNAS Scale?

7. I also saw there is the P411-32P NVMe adapter. I think it can connect up to 32 NVMe devices and has 4 ports. Does that mean 8 NVMe per port?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
How many vdevs out of 14 disks of 16TB each? What are my options for maximum IOPS/speed and redundancy?
Read this; there are different ways to trade off IOPS and redundancy vs. capacity. There are lots of threads discussing these topics.

Third, is there any way to know if an HBA card is failing?
Your drives start reporting errors, smoke comes out of the magic chip, etc...
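
If you want something more systematic than watching for smoke, here's a minimal sketch (assuming a box where the `zpool` utility is on the PATH, e.g. the TrueNAS shell or an SSH session) that just asks ZFS whether any pool is reporting trouble:

```python
# Minimal health check: ask ZFS whether any pool is reporting problems.
# Assumes the `zpool` utility is available (TrueNAS shell, SSH session, cron job).
import subprocess

def pools_healthy() -> bool:
    # `zpool status -x` prints "all pools are healthy" when nothing is wrong,
    # otherwise it prints details only for the troubled pool(s).
    out = subprocess.run(
        ["zpool", "status", "-x"],
        capture_output=True, text=True, check=True,
    ).stdout
    return "all pools are healthy" in out

if __name__ == "__main__":
    if pools_healthy():
        print("No pool-level errors reported.")
    else:
        print("At least one pool is reporting errors; check `zpool status` and "
              "drive SMART data. A flaky HBA tends to show up as errors spread "
              "across many drives at once, rather than on a single disk.")
```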

I meant any loss of speed when mixing media types on one HBA card. For example, I read on this forum that although LSI/Avago/Broadcom call it a Tri-Mode adapter, connecting NVMe through it is not a good idea and a PLX (PCIe switch) card should be used instead, as it is more reliable. Is that correct?
Define slow. As for card type, I don't know the answer. I suggest you build your system starting with the most minimal setup you need, then test and see if it works for you.

You are asking a lot of questions. Google is your friend. Include 'truenas' in the search and you are apt to find that most of your questions are already answered on the forums.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Read this; there are different ways to trade off IOPS and redundancy vs. capacity. There are lots of threads discussing these topics.
Thank you for the resource. Will go through it!

Your drives start reporting errors, smoke comes out of the magic chip, etc...
Hmm. And if the HBA card fails, will my drives remain intact or will they be affected too?

Define slow. As for card type, I don't know the answer. I suggest you build your system starting with the most minimal setup you need, then test and see if it works for you.
For example, hooking up NVMe (U.2) drives along with some SSDs and HDDs on a single adapter. Will the NVMe then deliver less than its rated speed? A Gen3 NVMe drive roughly does 3000 MB/s read and write. System build is in progress :)
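
Back-of-the-envelope, the way I understand it (assuming the card's host interface is PCIe 3.x x8, which is what I believe the datasheet says; please correct me if I'm off):

```python
# How many Gen3 NVMe drives running flat out would saturate one HBA's host link?
# Assumptions (not measurements): ~985 MB/s usable per PCIe Gen3 lane,
# an x8 host interface on the HBA, ~3000 MB/s per Gen3 NVMe drive.
LANE_MBPS = 985
HBA_LANES = 8
NVME_MBPS = 3000

uplink = LANE_MBPS * HBA_LANES  # bandwidth shared by every device on the card
print(f"HBA host link:       ~{uplink} MB/s")
print(f"One Gen3 NVMe drive: ~{NVME_MBPS} MB/s")
print(f"Drives to saturate the link: {uplink / NVME_MBPS:.1f}")
# HDDs and SATA SSDs on the same card are far slower, so they barely dent the
# host link; the NVMe should only slow down once combined demand exceeds it.
```

Is that roughly how it works?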

You are asking a lot of questions. Google is your friend. Include 'truenas' in the search and you are apt to find that most of your questions are already answered on the forums.
The reason I ask is that I did not find answers to these questions on any website or forum, so posting here was ultimately my only option :frown:

But I see that people are not happy with the questions. Maybe I'm the biggest noob on this planet, with tons of questions about NAS and related things ;(
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Secondly, for the pool design: I know that RAIDZ1 is risky with drives this large, so should it be RAIDZ2 or RAIDZ3? How many vdevs out of 14 disks of 16TB each? What are my options for maximum IOPS/speed and redundancy?
In that case a 3-way mirror would be advisable. You left out the third aspect, which is space efficiency. RAIDZ is great for the latter, but not for IOPS.
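
To make the trade-off concrete, here is a rough comparison of a few possible layouts for 14 x 16TB drives (a sketch using common rules of thumb: usable space is roughly data disks times drive size, random IOPS scales roughly with the number of vdevs; it ignores ZFS overhead and padding):

```python
# Rough layout comparison for 14 x 16 TB drives.
# Rules of thumb only (not exact ZFS space accounting):
#   usable TB   ~= (disks per vdev - redundancy per vdev) * drive size * vdevs
#   random IOPS ~= roughly one drive's worth per vdev
DRIVE_TB = 16

layouts = [
    # (name, vdevs, disks per vdev, redundant disks per vdev)
    ("1 x 14-wide RAIDZ3",          1, 14, 3),
    ("2 x 7-wide RAIDZ2",           2,  7, 2),
    ("7 x 2-way mirror",            7,  2, 1),
    ("4 x 3-way mirror (12 disks)", 4,  3, 2),
]

print(f"{'layout':30s} {'usable TB':>10s} {'vdevs (IOPS)':>13s} {'per-vdev redundancy':>20s}")
for name, vdevs, per_vdev, redundancy in layouts:
    usable = (per_vdev - redundancy) * DRIVE_TB * vdevs
    print(f"{name:30s} {usable:10d} {vdevs:13d} {redundancy:20d}")
```

More vdevs gives you more IOPS, wide RAIDZ gives you more space, and mirrors win on IOPS and resilver time at the cost of capacity.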
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
In that case a 3-way mirror would be advisable. You left out the third aspect, which is space efficiency. RAIDZ is great for the latter, but not for IOPS.
Yes, you're right. I forgot about the space aspect. Recently I tested 8 drives in RAIDZ and was unable to saturate 10Gb/s, but when I went with 4 vdevs of 2-drive mirrors I was able to hit 10Gb/s. BTW, in that case, does each vdev have one redundant drive?
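
If I'm reasoning about it correctly, it would be something like this (my own rough sketch with hypothetical drive names, happy to be corrected):

```python
# 8 drives arranged as 4 x 2-way mirror vdevs (hypothetical drive names).
from itertools import combinations

vdevs = [("d1", "d2"), ("d3", "d4"), ("d5", "d6"), ("d7", "d8")]
all_drives = [d for vd in vdevs for d in vd]

def pool_survives(failed):
    # Each 2-way mirror vdev tolerates losing one of its two drives;
    # losing an entire vdev loses the whole pool.
    return all(sum(d in failed for d in vd) < len(vd) for vd in vdevs)

for n in (1, 2):
    cases = list(combinations(all_drives, n))
    ok = sum(pool_survives(set(c)) for c in cases)
    print(f"{n} failed drive(s): pool survives {ok} of {len(cases)} combinations")
```

So one redundant drive per vdev: any single failure is fine, but two failures in the same mirror pair would take the pool down. Is that right?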
 