Drive Replacement

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
So @Etorix, after researching for a bit, here's what I can conclude.

1. Standard SATA III is capable of 6 Gb/s, which works out to roughly 600 MB/s. Enterprise HDDs can reach about 300 MB/s maxed out. For instance, the Seagate X18 is rated at up to 270 MB/s per the datasheet; it does around 272 MB/s in TrueNAS and 289-300 MB/s in Windows.

Here, we can see that the bandwidth required to run one HDD at full speed is roughly 300 MB/s, or about 3 Gb/s.

The LSI 9400-16i has 4 SAS ports, which can be broken out into 16 SATA ports (4 SATA breakouts per SAS port).

So, that means 4x SATA drives at 300 MB/s (3 Gb/s) each on one SAS port will require 1200 MB/s, or 12 Gb/s, per SAS port.

With all four SAS ports carrying 4x SATA each, the total required bandwidth comes to 48 Gb/s.

Now, as the HBA can push only 48 Gb/s max (12 Gb/s per SAS port), one can only install up to 16 HDDs.

So, for my use case, 16 x 300 MB/s = 4800 MB/s, or roughly 48 Gb/s, which is exactly the maxed-out throughput of the LSI 9400-16i. I left some bandwidth in reserve, since nothing is 100% efficient; the drive actually does 272 MB/s, leaving 28 MB/s of headroom per drive to hold the speed at peak load.

2. Now consider a typical SATA SSD, capable of roughly 500-600 MB/s. For instance, the Intel S4610 is rated at up to 560 MB/s read and 510 MB/s write per the datasheet, so we can conclude the following.

Here, we can see that the bandwidth required to run one SSD at full speed is roughly 600 MB/s, or about 6 Gb/s.

As above, the LSI 9400-16i provides 16 SATA breakouts across its 4 SAS ports.

So, that means 4x SATA drives at 600 MB/s (6 Gb/s) each on one SAS port will require 2400 MB/s, or 24 Gb/s, per SAS port.

With all four SAS ports carrying 4x SATA each, the total required bandwidth comes to 96 Gb/s.

Now, as the HBA can push only 48 Gb/s max (12 Gb/s per SAS port), one can only install up to 8 SSDs, just half the number of HDDs.

So, for my use case, 8 x 600 MB/s = 4800 MB/s, or roughly 48 Gb/s, which again is the maxed-out throughput of the LSI 9400-16i. I left some bandwidth in reserve, since nothing is 100% efficient; the drives actually do 510-560 MB/s, leaving a little headroom to hold the speed at peak load.
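
To sanity-check the arithmetic, here's a quick Python sketch. The 300/600 MB/s per-drive figures and the 12 Gb/s-per-SAS-port ceiling are just my working assumptions from above, and I'm using the convention that the SATA line rate in Gb/s is about 10x the payload MB/s (because of 8b/10b encoding, which is why SATA III's 6 Gb/s is roughly 600 MB/s):

```python
# Back-of-the-envelope check of the drive-count math above.
# Assumptions (mine, from this post): 300 MB/s per HDD, 600 MB/s per SATA SSD,
# and 12 Gb/s per SAS port -- the figure that gets corrected later in the thread.

SAS_PORTS = 4                               # LSI 9400-16i: 4 SAS connectors
GBPS_PER_PORT = 12                          # working assumption at this point
HBA_TOTAL_GBPS = SAS_PORTS * GBPS_PER_PORT  # 48 Gb/s

def max_drives(drive_mb_s: float) -> int:
    """Drives the HBA could feed at full speed under the assumptions above."""
    line_rate_gbps = drive_mb_s * 10 / 1000  # 8b/10b: 300 MB/s -> 3 Gb/s
    return int(HBA_TOTAL_GBPS // line_rate_gbps)

print(max_drives(300))  # HDD at ~300 MB/s -> 16 drives
print(max_drives(600))  # SSD at ~600 MB/s -> 8 drives
```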

The above calculation sounds fair to me, provided each SAS port really is 12 Gb/s. But if the whole HBA card is just 12 Gb/s, i.e. all 4 SAS ports share a combined total bandwidth of 12 Gb/s, then the max I can go for is 4 SATA HDDs, which I doubt. It's hard to digest that an Intel/AMD PCH board can offer 6-12 SATA ports at 6 Gb/s each (in the enterprise segment, and sometimes the consumer segment too), while an HBA dedicated to exactly this function would offer far less than that.

Let me know if the math is correct now :)
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
So, that means 4x SATA drives at 300 MB/s (3 Gb/s) each on one SAS port will require 1200 MB/s, or 12 Gb/s, per SAS port.
Nonsense. You're never going to see an application where you'll be reading or writing anywhere close to that maximum speed for all those drives simultaneously. I doubt you'd see that speed sustained on a single drive, much less four simultaneously.

Moreover, the SAS3 bandwidth is 12 Gb/sec per lane, not per port. There are four SAS lanes on one SAS port, so that means 48 Gb/sec/port. The 9400-16i card you're looking at has four ports, which means 16 lanes, which means a max throughput of 192 Gb/sec (assuming the PCIe bus doesn't limit it; I haven't done the math there). That card, presuming appropriate use of expanders, could in theory support 64 HDDs at what you claim to be their maximum sustained speed of 3 Gb/sec/drive, simultaneously.
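
To put rough numbers on it (a back-of-the-envelope sketch only; it deliberately ignores PCIe and protocol overhead):

```python
# Rough sketch of the SAS3 lane math (ignores PCIe and protocol overhead).
GBPS_PER_SAS3_LANE = 12
LANES_PER_PORT = 4
PORTS = 4                        # LSI 9400-16i

per_port_gbps = GBPS_PER_SAS3_LANE * LANES_PER_PORT  # 48 Gb/s per port
hba_total_gbps = per_port_gbps * PORTS               # 192 Gb/s over 16 lanes
hdd_gbps = 3                     # the claimed max sustained speed per HDD

print(per_port_gbps, hba_total_gbps, hba_total_gbps // hdd_gbps)  # 48 192 64
```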
Now, as the HBA can push only 48 Gb/s max (12 Gb/s per SAS port), one can only install up to 16 HDDs.
Once again, this limitation assumes a scenario where you'd need, and be able to get, full transfer speed from all drives simultaneously. Maybe you're more creative than I am, but I can't imagine any such scenario in anything like the real world.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Nonsense. You're never going to see an application where you'll be reading or writing anywhere close to that maximum speed for all those drives simultaneously. I doubt you'd see that speed sustained on a single drive, much less four simultaneously.
Well, that's an approximate speed to keep the math simple. When configured in a NAS, those drives may only push 200 MB/s, or 2 Gb/s, max. I'm aware of that :)

Moreover, the SAS3 bandwidth is 12 Gb/sec per lane, not per port.
Ouch, that was the confusion that @Davvo, @Etorix, and I had. Thanks for clearing it up.

There are four SAS lanes on one SAS port, so that means 48 Gb/sec/port. The 9400-16i card you're looking at has four ports, which means 16 lanes, which means a max throughput of 192 Gb/sec (assuming the PCIe bus doesn't limit it; I haven't done the math there).
Bingo, problem solved! It's now very clear; a big thank you for explaining it. But how do I check whether the PCIe bus limits it? Any way to find that out?

That card, presuming appropriate use of expanders, could in theory support 64 HDDs at what you claim to be their maximum sustained speed of 3 Gb/sec/drive, simultaneously.
Well, as stated above, 3 Gb/s is just an approximation to make the math easier :)

So, without the use of expanders, I can definitely go for 16-18 HDDs using the breakout cables on the LSI 9400-16i without any bandwidth throttling, right?

Another question I had: can I use HDD/SSD/NVMe together on a single LSI 9400-16i? Moreover, if it's possible, is it really a safe approach, or is it better to run one kind of device on each HBA?

Also, a forum member mentioned that I cannot connect U.2 drives to this HBA. Is that correct? And is this HBA not a good choice for connecting U.2 drives? I need to connect about 4x U.2 drives for my use case. Would the LSI 9400-16i do the job without any hassle, or should I look at U.2 cards like the HighPoint 1120/1180?

Once again, this limitation assumes a scenario where you'd need, and be able to get, full transfer speed from all drives simultaneously. Maybe you're more creative than I am, but I can't imagine any such scenario in anything like the real world.
Hehehe. Of course not. It was just to make the math easier for me ;)

But hey, thanks for clearing it up.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Also, a forum member mentioned that I cannot connect U.2 drives to this HBA. Is that correct? And is this HBA not a good choice for connecting U.2 drives? I need to connect about 4x U.2 drives for my use case. Would the LSI 9400-16i do the job without any hassle, or should I look at U.2 cards like the HighPoint 1120/1180?
Have you searched for and found a case and a backplane that accommodates 3.5" SATA drives and 2.5" U.2 drives simultaneously?
You are not planning to go without front-accessible hot-plug drive bays, are you?
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Have you searched for and found a case and a backplane that accommodates 3.5" SATA drives and 2.5" U.2 drives simultaneously?
Never ever. To clear it up, I was looking to connect the U.2 drives using an SFF-8643 to U.2 cable.

You are not planning to go without front-accessible hot-plug drive bays, are you?
Not at all. I don't care about that at the moment :)

So, I think you assumed that I'm actually using a backplane. Since I'm not, I think I can just connect using the cables, right?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
I would never build a system without a backplane. So you really want to shut down the system, open the case, loosen 4 screws, ... every time you need to swap a disk?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
I'm starting to get the impression you are trolling. You go over calculations of theoretical bandwidth limits that have no practical relevance while planning to hack together a server with duct tape and hot glue. 16 disks and no chassis with proper drive bays? You can't be serious.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
I'm starting to get the impression you are trolling.
Why would you even think that, dude?

You go over calculations of theoretical bandwidth limits that have no practical relevance while planning to hack together a server with duct tape and hot glue.
As mentioned in the above post, it was just a rough calculation to make the math easier on my end. Moreover, as @danb35 explained, it was just theory, nothing practical, and no real-world deployment either. This was all done to ensure I buy the right hardware, so that I don't have to replace it soon just because it underperforms. It's already taking a lot of time, but things are getting clearer and I can probably make a better choice.

But is this how the senior users on this forum treat new members who are trying to learn and set up their own NAS? Everyone has a budget and a particular income and has to work within it, don't they? Who wouldn't like a ready-made chassis or a complete system from a brand like Supermicro?

16 disks and no chassis with proper drive bays? You can't be serious.
Not sure where I mentioned I would run the drives and other hardware in an open environment. I think you missed my post https://www.truenas.com/community/threads/drive-replacement.114601/post-796699 where I gave the details of my chassis, including the airflow.

If someone is trying to solve something, help them instead of trolling or making fun of them. Just because you've invested so much time and know every bit and byte of building a proper NAS doesn't mean you should mock others, especially when you haven't read the information they provided and are assuming things on your own. I don't want to be rude here, and I apologize if you feel offended. I have learnt a lot from all the members here, including you, over the last couple of months, and I want to always be a part of this forum, but I seriously don't want anyone mocking others for no good reason :)

I hope you understand :)
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Maybe I was a bit snarky. Apologies for that.

I still fail to see how 18 disk drives in total can be a hobbyist project including CNC metal work and custom solutions instead of a proper business case and a budget. Everything I have ever seen that uses more than 4 disks has been "business". And designed accordingly.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
So, without the use of expanders, I can definitely go for 16-18 HDDs using the breakout cables on the LSI 9400-16i without any bandwidth throttling, right?
Without expanders, you're only using breakout cables, which will limit you to 16 drives. Throttling will not be an issue; each lane operates at twice the maximum bandwidth of the SATA drive.
can I use HDD/SSD/NVMe together on a single LSI 9400-16i? Moreover, if it's possible, is it really a safe approach?
You can, and it's safe enough, but it's silly--as you've already been told, repeatedly and exhaustively up-thread, running NVMe through an HBA adds a pointless and unnecessary expense and layer of processing to those devices.
But how do I check whether the PCIe bus limits it? Any way to find that out?
There's a whole lot of information on the Internet, and there are search engines to help you find it. I'm sure finding the maximum bandwidth of PCIe 3.1 x8 would be trivial. Have you made any effort to find the answers for yourself?
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Maybe I was a bit snarky. Apologies for that.
No worries. Misunderstandings happen; they're a part of life ;) As long as we're both on good terms, it's all fine :)

I still fail to see how 18 disk drives in total can be a hobbyist project including CNC metal work and custom solutions instead of a proper business case and a budget.
Well, then check out the Fractal Design Define 7 XL chassis, which has proper drive mounts for 18 drives: 14 in a single row and 4 more at the bottom, with proper cooling if you use some nice fans ;)

I also provided the average temps during the summers.

Everything I have ever seen that uses more than 4 disks has been "business". And designed accordingly.
Yes, but as mentioned before on this forum, I do have some home + business needs, which involve video editing, rendering, and backup/restore on a regular basis :)
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Why would you even think that, dude?
I'll answer why I would suspect that: because you're obsessing about tiny details that have no relevance to anything ("which cables should I buy?" answer: "nobody cares") while not lifting a finger to actually find the information for yourself. For one recent example, all the posts of yours asking about SAS bandwidth--a 15-second Google search would have found that information for you. But you kept asking here, pinging other users (who were almost certainly getting thread notifications anyway). That's troll-ish behavior.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Without expanders, you're only using breakout cables, which will limit you to 16 drives. Throttling will not be an issue; each lane operates at twice the maximum bandwidth of the SATA drive.
Cool cool

You can, and it's safe enough
Sounds nice!

but it's silly--as you've already been told, repeatedly and exhaustively up-thread
Actually, I apologize for that. I accept that this thread has become long and exhausting. But I'm new here and trying to learn things. If not here, where would I go?

running NVMe through an HBA adds a pointless and unnecessary expense and layer of processing to those devices.
Hmm. So what kind of card/equipment do you recommend for connecting U.2 drives?

There's a whole lot of information on the Internet, and there are search engines to help you find it. I'm sure finding the maximum bandwidth of PCIe 3.1 x8 would be trivial. Have you made any effort to find the answers for yourself?
Now that the bandwidth question is solved for me, a big thanks to you :). That's what I will look into :)
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Hmm. So what kind of card/equipment do you recommend for connecting U.2 drives?
A system with a backplane connected to the PCIe bus. The idea of NVMe is to eliminate all intermediate devices.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
I'll answer why I would suspect that: because you're obsessing about tiny details that have no relevance to anything ("which cables should I buy?" answer: "nobody cares") while not lifting a finger to actually find the information for yourself.
Well, I understand your point, but everyone is a beginner at some point. We all learn and grow :). I asked because I have literally never used an SFF-8643 or SAS cable before; only a couple of months ago did I learn that those SAS ports are generally SFF-8643. Of course, I'm not saying one is free to ask anything small or random that comes to mind on a forum, but I preferred to ask here so that I could make the right choice on the hardware when picking the parts.

For one recent example, all the posts of yours asking about SAS bandwidth--a 15-second Google search would have found that information for you. But you kept asking here, pinging other users (who were almost certainly getting thread notifications anyway). That's troll-ish behavior.
I still have the tabs open; I couldn't find information on whether 12 Gb/s is the whole SAS bandwidth or per port. I'm still new to this, but users like @Davvo and @Etorix are far more advanced than me, and they didn't have the correct information either. It was you who got the correct information for all of us :)


Again, I know this has been a long thread, but it's all solved in the end, the purpose has been served, and I want to thank all of you for your support and patience. There was no intention to troll, make fun of, or bug the experienced forum members, and I apologise again if you feel that way ;(.

I hope you understand and will forgive me for my mistake :)
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
But is this how the senior users on this forum treat new members who are trying to learn and set up their own NAS?
Having helped for more than a few pages in this thread, that hurts.

Remember that the LSI 9400-16i uses a PCIe 3.0 x8 interface, not a full x16, which makes me ponder the bandwidth distribution.
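
As a rough sketch of what I mean (raw link rates only; real-world throughput is lower due to protocol overhead):

```python
# PCIe 3.x x8 host-side ceiling vs. the HBA's SAS-side aggregate.
# PCIe 3.0/3.1: 8 GT/s per lane with 128b/130b encoding.
GT_PER_S = 8e9
ENCODING = 128 / 130
PCIE_LANES = 8

pcie_bytes_per_s = GT_PER_S * ENCODING * PCIE_LANES / 8  # ~7.88 GB/s
pcie_gbps = pcie_bytes_per_s * 8 / 1e9                   # ~63 Gb/s

sas_gbps = 16 * 12  # 16 SAS3 lanes x 12 Gb/s = 192 Gb/s

print(round(pcie_gbps), sas_gbps)  # ~63 vs. 192: PCIe is the bottleneck
```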
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I still have the tabs open; I couldn't find information on whether 12 Gb/s is the whole SAS bandwidth or per port. I'm still new to this, but users like @Davvo and @Etorix are far more advanced than me, and they didn't have the correct information either.
Allow me to quote myself at post #73:
That's 12 Gb/s per SAS lane, or 6 Gb/s per SATA lane, although the HBA may not have the total capacity to handle all 16 lanes at full (theoretical) bandwidth simultaneously.
…which is pretty much the information you could not find. The open question is whether I'm lacking in paedagogic skills and/or whether you're lacking in listening skills.

Hmm. So what kind of card/equipment do you recommend for connecting U.2 drives?
If you're not using the convenience of a backplane, it has been suggested quite a few times up-thread to either connect the drives to PCIe slots, using bifurcation and suitable adapters, or to use an add-in card with a PLX switch. (The backplane itself relies on bifurcation.)
There was also a link to actual tests showing the consequences of using a Tri-Mode HBA vs. directly attached drives: namely, the HBA works fine until it doesn't; from there, latency goes up badly and total throughput quickly hits a hard limit, while directly attached drives keep scaling up, reaching a distinctly higher maximum throughput than the HBA at a distinctly lower latency.
[Chart: StorageReview, Dell PowerEdge PERC12, 64K sequential read]
 