SAS Multipath performance


Pierce

I have an LSI 2308 based 8e-style SAS controller connected to 2 external SAS bays filled with SAS disks via dual porting (4 channels to the primary ports, 4 channels to the secondary ports). There are 50 disks on this HBA.

I've noted that this only seems to function in active/standby mode, so no more than 4 channels at a time are used to access the 50 disks, which are in one big zpool. Needless to say, this is a performance bottleneck with that many disks. Other operating systems seem to be able to use SAS Multipath as active/active, allowing all 8 channels to be used for disk IO. Is this something FreeBSD and FreeNAS may be able to support in the near future, or should I recable my system so 4 channels go to each external SAS tray (there are 2) instead of daisy chaining them the way I did (and yes, lose the redundancy, but what is that really protecting against? A failed card? Oops, it's the same card).
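For a rough sense of what's at stake, here is a back-of-envelope sketch (purely illustrative, using nominal 6Gb/s SAS line rates and an assumed 8b/10b encoding overhead; real numbers depend on the HBA, expanders and disks) comparing the usable bandwidth of the 4-lane active path against all 8 lanes:

```python
# Rough estimate only (not a benchmark): nominal payload bandwidth of 6Gb/s SAS
# lanes, ignoring HBA, expander and protocol overhead.

SAS2_LANE_GBPS = 6.0          # 6Gb/s line rate per lane
ENCODING_EFFICIENCY = 0.8     # 8b/10b encoding -> roughly 600MB/s usable per lane

def usable_mb_per_sec(lanes):
    """Approximate usable payload bandwidth for a given number of active lanes."""
    return lanes * SAS2_LANE_GBPS * 1000 / 8 * ENCODING_EFFICIENCY

active_passive = usable_mb_per_sec(4)   # only the primary 4-lane path carries I/O
all_lanes      = usable_mb_per_sec(8)   # active/active, or 4 lanes cabled to each tray

print(f"4 lanes (active/passive): ~{active_passive:.0f} MB/s")
print(f"8 lanes (active/active):  ~{all_lanes:.0f} MB/s")
```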
 

cyberjock

Sorry that nobody has answered this question. I'm looking into it.

However, do you really think that having only 4x 6Gb/sec lanes is going to be a bottleneck for you? Are you running multiple 10Gb LANs?
 

Pierce

I'd like to run multiple 10Gb, but I'm not currently. I'm more interested in local disk performance, like from a PostgreSQL server running in a jail.
 

cyberjock

Well, if you are going to talk about running databases, throughput isn't your enemy, I/O (IOPS) is. And 50 disks is definitely a good start, but multipath in any mode only buys you redundancy against a lost path. The 4 lanes won't be a bottleneck at all, since random I/O is going to be your limiter... by a very, very wide margin.

If you plan to do database stuff, you want as many vdevs as you can get. So if you do 25 mirrors, you're looking at a peak throughput of about 2.5GB/sec (assuming the typical rule of thumb of 100MB/sec per vdev), which is just scratching the edge of what 4x 6Gb lanes can do. Databases aren't going to be running at 2.5GB/sec. In fact, I'd be astonished if you're using a database and actually need/get 1GB/sec.
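As a quick sanity check of those figures (the 100MB/sec-per-vdev number is only a rule of thumb, and the ~600MB/sec-per-lane figure assumes 8b/10b encoding on 6Gb/s SAS):

```python
# Back-of-envelope check of the numbers above (rule-of-thumb figures, not measurements).

MB_PER_SEC_PER_VDEV = 100     # typical streaming estimate per spinning-disk vdev
MIRROR_VDEVS = 25             # 50 disks arranged as 25 two-way mirrors

LANE_MB_PER_SEC = 600         # ~usable payload of one 6Gb/s SAS lane after encoding
LANES = 4                     # the active path in active/passive multipath

pool_estimate = MIRROR_VDEVS * MB_PER_SEC_PER_VDEV   # ~2500 MB/s
lane_ceiling  = LANES * LANE_MB_PER_SEC              # ~2400 MB/s

print(f"25-mirror pool estimate: ~{pool_estimate} MB/s")
print(f"4-lane SAS ceiling:      ~{lane_ceiling} MB/s")
```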

If you aren't doing databases, you can do multiple RAIDZ2 vdevs and see some pretty high throughput (going up into 3-4GB/sec in theory), and you could certainly bottleneck the storage subsystem with only 4 lanes. Unfortunately you're likely to still be bottlenecked by the networking side of things, and then still more by the protocols used. Some of iXsystems' larger customers do 2-3GB/sec (some do it 24x7 for weeks on end when running various projects). They have very large systems (far larger than your system and FAR more expensive).
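Purely as an illustration of where that 3-4GB/sec figure comes from, assuming a hypothetical layout of eight 6-disk RAIDZ2 vdevs (48 of the 50 disks, 2 spares) and the same 100MB/sec-per-data-disk rule of thumb:

```python
# Illustrative only: one possible RAIDZ2 layout for 50 disks and its rough
# streaming estimate, versus the 4-lane ceiling from the earlier sketch.

VDEVS = 8                     # assumed: eight 6-disk RAIDZ2 vdevs, 2 disks left as spares
DATA_DISKS_PER_VDEV = 6 - 2   # RAIDZ2 uses 2 parity disks per vdev
MB_PER_SEC_PER_DISK = 100     # rule-of-thumb streaming rate per spinning disk

streaming_estimate = VDEVS * DATA_DISKS_PER_VDEV * MB_PER_SEC_PER_DISK  # ~3200 MB/s
four_lane_ceiling  = 4 * 600                                            # ~2400 MB/s

print(f"RAIDZ2 streaming estimate: ~{streaming_estimate} MB/s")
print(f"4-lane SAS ceiling:        ~{four_lane_ceiling} MB/s")
```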

Anyway, what I'm trying to say is that it sounds to me like someone is number counting and wanting the highest number, regardless of whether there is actual value added and regardless of whether it is even possible to hit said number. ;) BUT, regardless of whether I'm right or not, I'm waiting for a response on the active/active mode. I'm 99% sure it's active/passive with no option to do anything else, and there's a good reason for this. But I'd like to give you the low-down on exactly why (and I'm curious now too).

But I think that until you are actually running workloads that saturate the throughput of your 4 lanes, you shouldn't be the least bit worried about it. I've got 24 disks in my system right now, and I'm running with just 4 lanes (I do run dual 10Gb), and I never see saturation of my 10Gb links for more than short bursts because of the network protocols, etc.
 

cyberjock

The answers you seek:

No easy way to make active/active work in FreeNAS. :(

Active/active hits a bug in some hard drives' firmware, and performance ends up very poor as a result. Active/passive seems to be what most everyone wants anyway, so the default is active/passive.
 