SAS Multipath support on TrueNAS Scale 22.12.X

morphin

Dabbler
Joined
Jun 27, 2023
Messages
31
Hello everyone!

I'm new to this forum, but I'm a long-time FreeNAS user.
I wanted to build a new NFS storage server, but after 3-4 years away even the FreeNAS name has changed.

I did some research on speed and stability comparisons and decided to use TrueNAS SCALE because it is Linux-based and faster in some cases.
After installation I realized that there is no multipath support yet.
I checked the kernel with "lsmod | grep multipath"; the kernel has the multipath module and it is loaded.
I ran some multipath-related commands, but they do not exist in userland.

I have a lot of experience configuring the multipath service and its config, but I'm worried about two things:
1- It could somehow conflict with the GUI and corrupt the pool.
2- If multipath support is added in a future release, a version update could cause problems and corrupt the pool.


Now let's talk about best practice for manually implementing multipath (a rough command-level sketch follows the list):
1- Export all pools (if any exist)
2- Install the multipath tools
3- Work through the multipath config
4- Generate the multipath links
5- Check the links with the "multipath -ll" command via the shell
6- If everything looks good, reboot the server (to be safe, I recommend disabling auto pool import and testing first)
7- If everything looks good, enable auto pool import and reboot the server
8- Check the pool via "zpool status" and make sure everything is all right.
9- Check the drive links and make sure the pool was imported using the multipath links.
That's it!
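For reference, here is a rough command-level sketch of those steps (purely illustrative and unsupported on TrueNAS; the pool name "tank" is made up, and installing packages via apt may be locked down on recent SCALE releases):

zpool export tank                  # 1- export the pool(s)
apt install multipath-tools        # 2- install the userland tools (Debian package name)
vi /etc/multipath.conf             # 3- adjust blacklists, path grouping policy, etc.
systemctl restart multipathd       # 4- (re)generate the multipath maps
multipath -ll                      # 5- verify that every SAS disk shows the expected paths
# 6/7- reboot with auto pool import disabled, confirm the maps come back, then re-enable it
zpool import -d /dev/mapper tank   # import via the multipath device nodes
zpool status -P tank               # 8/9- confirm pool health and that vdev paths point at /dev/mapper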
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I did some research on speed and stability comparisons and decided to use TrueNAS SCALE because it is Linux-based and faster in some cases.

That's weird. What cases are those? In general, the ZFS codebase is the same between CORE and SCALE, but CORE leads on stuff like dynamic ARC memory management and in-kernel iSCSI that make it a much more compelling choice.

After installation I realized that there is no multipath support yet.

It has been stated several times that no multipath support is planned, although if enough paying customers squeak, that could change.

CORE remains the recommended choice if your needs are oriented towards heavy duty high reliability storage.
 

wasbash

Cadet
Joined
Apr 20, 2023
Messages
1
Hello everyone!

I'm new to this forum, but I'm a long-time FreeNAS user.
I wanted to build a new NFS storage server, but after 3-4 years away even the FreeNAS name has changed.

I did some research on speed and stability comparisons and decided to use TrueNAS SCALE because it is Linux-based and faster in some cases.
After installation I realized that there is no multipath support yet.
I checked the kernel with "lsmod | grep multipath"; the kernel has the multipath module and it is loaded.
I ran some multipath-related commands, but they do not exist in userland.

I have a lot of experience configuring the multipath service and its config, but I'm worried about two things:
1- It could somehow conflict with the GUI and corrupt the pool.
2- If multipath support is added in a future release, a version update could cause problems and corrupt the pool.


Now let's talk about best practice for manually implementing multipath:
1- Export all pools (if any exist)
2- Install the multipath tools
3- Work through the multipath config
4- Generate the multipath links
5- Check the links with the "multipath -ll" command via the shell
6- If everything looks good, reboot the server (to be safe, I recommend disabling auto pool import and testing first)
7- If everything looks good, enable auto pool import and reboot the server
8- Check the pool via "zpool status" and make sure everything is all right.
9- Check the drive links and make sure the pool was imported using the multipath links.
That's it!
We definitely need multipath support in our TrueNAS SCALE environment. I wish this would come very soon.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
We definitely need multipath support in our TrueNAS SCALE environment. I wish this would come very soon.

If you're an Enterprise customer, contact your sales rep and make some noise. If you're not, there's a Jira ticket around somewhere advocating for it, if my memory is correct. The impression I've gotten is that it isn't going to happen unless there's some major change. I believe that iX spent a lot of time and effort getting it to work correctly under FreeBSD, and found the RoI to be a bit questionable. It's a nice feature to have, but it might be a bit heavy-handed as a fix for rare/infrequent problems.
 

morphin

Dabbler
Joined
Jun 27, 2023
Messages
31
Hello jgreco! Thank you for the answer.

That's weird. What cases are those? In general, the ZFS codebase is the same between CORE and SCALE, but CORE leads on stuff like dynamic ARC memory management and in-kernel iSCSI that make it a much more compelling choice.

I only checked users' benchmark results and did not run any benchmarks myself, so I cannot vouch for any particular case, but:
1- I see that the network is more stable. I don't know why, but with the same test there were more ups and downs on CORE. Maybe it is a monitoring problem in the GUI, I really do not know, but in this test CORE finished 2-3 seconds before SCALE. This is confusing for me.
2- NFS I/O performance looks better on SCALE. I also learned this from users' tests, but NFS could well be better on Linux.

CORE remains the recommended choice if your needs are oriented towards heavy duty high reliability storage.

When we compare lifetimes, of course CORE is the winner because it is very stable and has been tested by many users.
SCALE is a new player, but I couldn't find good official information about:
1- Which features do not exist on SCALE?
2- What is the main reason for developing SCALE? (My guess is that it is only for adding Linux-related tools and features like Docker, etc. But I want to know whether there is any plan for ZFS and NFS, because other things are not important for me.)
3- How long will CORE stay the recommended choice?

My reasons for wanting to use SCALE instead of CORE:
1- I stopped using Unix and FreeBSD 6-7 years ago. My Linux knowledge is more current, and I can work faster and better there. I feel free and powerful on Linux.
2- When ZFS was first ported to Linux, Unix-related things had to be ported and simulated. I haven't tracked OpenZFS for 3 years, so I'm not sure, but I believe it is more reliable on Linux now. I know FreeBSD is more stable, but I also don't think we will have any problems with ZFS on Linux.
3- I only care about NFS, and maybe iSCSI for special cases. I don't know about NFS stability on FreeBSD versus Linux, but I'm able to manage an NFS server and debug it when necessary.

To sum up, it would be great if you could give some advice to help my decision on this :)

It has been stated several times that no multipath support is planned, although if enough paying customers squeak, that could change.

Sad to hear that. Multipath is a default capability in any OS these days, so it shouldn't be a premium feature for a storage product, and it also exists on CORE, right? Of course, I cannot decide on this, but no experienced user will consider this feature a must for getting a license. I believe it would be better if only GUI and automation-related features were treated as Enterprise extras, but it is what it is.

Community note: every open-source project exists thanks to user support and contributions. If you don't want to buy, that's okay; you can still donate a few bucks, it won't hurt. :wink:
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Hello everyone!

I'm new to this forum, but I'm a long-time FreeNAS user.
I wanted to build a new NFS storage server, but after 3-4 years away even the FreeNAS name has changed.

I did some research on speed and stability comparisons and decided to use TrueNAS SCALE because it is Linux-based and faster in some cases.
After installation I realized that there is no multipath support yet.
I checked the kernel with "lsmod | grep multipath"; the kernel has the multipath module and it is loaded.
I ran some multipath-related commands, but they do not exist in userland.

I have a lot of experience configuring the multipath service and its config, but I'm worried about two things:
1- It could somehow conflict with the GUI and corrupt the pool.
2- If multipath support is added in a future release, a version update could cause problems and corrupt the pool.


Now let's talk about best practice for manually implementing multipath:
1- Export all pools (if any exist)
2- Install the multipath tools
3- Work through the multipath config
4- Generate the multipath links
5- Check the links with the "multipath -ll" command via the shell
6- If everything looks good, reboot the server (to be safe, I recommend disabling auto pool import and testing first)
7- If everything looks good, enable auto pool import and reboot the server
8- Check the pool via "zpool status" and make sure everything is all right.
9- Check the drive links and make sure the pool was imported using the multipath links.
That's it!

Yes, FreeNAS supported multipath.

However, it was removed from both TrueNAS CORE 13.0 and SCALE for safety reasons. We've had several users lose whole pools by miscabling systems and making administrative mistakes. It was about the only thing that caused catastrophic data loss.

TrueNAS CORE 13 still imports multipath pools but discourages users from creating new pools that way. TrueNAS SCALE does not import or create multipath pools.

We converted iX products from using multipath to not using it. We've not seen any decrease in reliability, but we have definitely reduced the number of tricky support situations.

It was good technology, but complex and high risk. Risk-reward was poor.
 

morphin

Dabbler
Joined
Jun 27, 2023
Messages
31
Yes, FreeNAS supported multipath.

However, it was removed from both TrueNAS CORE 13.0 and SCALE for safety reasons. We've had several users lose whole pools by miscabling systems and making administrative mistakes. It was about the only thing that caused catastrophic data loss.

TrueNAS CORE 13 still imports multipath pools but discourages users from creating new pools that way. TrueNAS SCALE does not import or create multipath pools.

We converted iX products from using multipath to not using it. We've not seen any decrease in reliability, but we have definitely reduced the number of tricky support situations.

It was good technology, but complex and high risk. Risk-reward was poor.

Hello morganL, Thank you for the answer.

I see the reason now, thanks, but I want to talk about this.
I'm a software engineer and I spent 8 years designing an enterprise storage product (from scratch, on Unix and then Linux after the ZFS license change), and with multipath+ZFS I did not lose or corrupt any pool or drive.
I know it is hard to predict all of the problems, but it is not that hard to develop a protection algorithm (a rough sketch of the path check appears after the list):
1- Do not auto-create multipath links.
2- During boot, get the multipath interface hardware status for all real block devices.
3- Check the status output and compare each drive path against its SAS ID.
4- While checking for missing paths on each channel:
4.a: If one of the channels is not working (on the same HBA interface or enclosure), that is okay, but raise a warning and do not continue without user permission.
4.b: If some drives are only visible on channel 1 and others only on channel 2 (on the same HBA interface or enclosure), raise a warning and do not continue until this problem is solved.
5- Use a default blacklist and do not create multipath maps for SATA drives or virtual block devices.
6- Before importing a pool, check the pool and drive health; if you see any problem, raise a warning and do not continue without user permission.
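As a rough illustration of the path check in step 4 (my own sketch, not anything TrueNAS ships; it assumes every dual-ported SAS disk should expose exactly two paths and that lsblk reports a WWN for them):

# Group whole-disk devices by WWN; a WWN seen any number of times other than 2
# means a missing or extra path, which roughly covers cases 4.a and 4.b above.
# SATA or otherwise single-pathed disks would need to be excluded first (step 5).
for dev in /dev/sd[a-z] /dev/sd[a-z][a-z]; do
    [ -b "$dev" ] && lsblk -dno WWN "$dev"
done | grep . | sort | uniq -c | \
    awk '$1 != 2 { print "WARNING: " $2 " has " $1 " path(s), expected 2" }'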

I really do not know the issues related to multipath, but I guess this would prevent any multipath-related corruption.

BTW, I want to ask a question: do you have any benchmark results for the "multipath vs. no-multipath" case on FreeBSD and Linux?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
1- I stopped using Unix and FreeBSD 6-7 years ago. My Linux knowledge is more current, and I can work faster and better there. I feel free and powerful on Linux.
Given the amount of fiddling you should be doing with the base OS (approximately zero in either case), this really isn't relevant.
We converted iX products from using multipath to not using it. We've not seen any decrease in reliability, but we have definitely reduced the number of tricky support situations.
I've always felt like this would be the case, a feature invented to solve a problem that did not really exist in practice... but it's nice to have some data backing up that assessment.
 
Joined
Jul 3, 2015
Messages
926
BTW, I want to ask a question: do you have any benchmark results for the "multipath vs. no-multipath" case on FreeBSD and Linux?
By default, multipath in FreeNAS was active/passive, so there was zero performance benefit out of the box. Only if you enabled active/active could you see some benefit, and in my experience it was only helpful when running scrubs.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
1- I see that the network is more stable. I don't know why, but with the same test there were more ups and downs on CORE. Maybe it is a monitoring problem in the GUI, I really do not know, but in this test CORE finished 2-3 seconds before SCALE. This is confusing for me.

I have no idea what "more up and down" means. In general, on a modern platform, CORE tends to be faster due to its better ARC memory management -- if you have more ARC, you go to the pool for less data, and overall speeds are better in large part because of that. Both CORE and SCALE should do very well with 1GbE, 10GbE, etc. at basic network I/O as long as you're using recommended network cards (see the 10 Gig Networking Primer). Certain stuff like iSCSI is also going to be much faster on CORE because iXsystems sponsored a kernel iSCSI target system. Also, I tend to find that if a SCALE system is left to idle for a while, it seems to like to swap more stuff out, and then when you try to access it, it can be laggy while recovering.

2- NFS I/O performance looks better on SCALE. I also learned this from users' tests, but NFS could well be better on Linux.

Haven't seen that. Again, ARC memory management.

1- Which features do not exist on SCALE?

Kernel iSCSI, ARC memory management, bhyve, jails, and a lack of Linux.

2- What is the main reason for developing SCALE? (My guess is that it is only for adding Linux-related tools and features like Docker, etc.

iX wanted to be able to address Kube and Gluster, to cover scale-out issues that would have been tricky on FreeBSD.

But I want to know whether there is any plan for ZFS and NFS, because other things are not important for me.)

Both CORE and SCALE already support ZFS and NFS so I don't get what you're asking.

3- How long will CORE stay the recommended choice?

This suggests that you are under the impression that SCALE will become the recommended choice at some point. This may never happen. Sites that need rock solid NAS are going to be on CORE for quite some time at the rate things are going, and it could be that that will never change. Linux has a substantially more restrictive license, and FreeBSD's performance is going to be better for the foreseeable future. It's hard to speak to organizational motivations, but my guess is that SCALE is simply a great way to address some features like Kube and Gluster. Having been in the forum here for many years, there were lots of users who would come in and say "but it's not Linux" and then leave. SCALE could be nothing more than a ploy to offer the Linux lovers an answer to that. I expect that the ARC issues with Linux will be addressed at some point, at which point it becomes more of a bake-off between the two operating systems.

I stopped using Unix and FreeBSD 6-7 years ago. My Linux knowledge is more current, and I can work faster and better there. I feel free and powerful on Linux.

It's an appliance. You're really not supposed to be digging around in the innards. If you intend to do so, then, yes, whatever you feel more comfortable with is going to be a better fit.

I'm not sure, but I believe it is more reliable on Linux now.

It is not. The ZFS codebase is the same for both. These forums are not filled with people crying about the unreliability of ZFS on either platform.

I know FreeBSD is more stable, but I also don't think we will have any problems with ZFS on Linux.

The big issue on Linux is the sucky ARC memory management. If you are fine with putting double the RAM into your Linux system that you would put into a FreeBSD system, you can mostly eliminate this disparity.
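For what it's worth, on generic OpenZFS-on-Linux the ARC ceiling is just a module parameter, so it can be raised by hand. A minimal sketch, assuming a ~48 GiB target on a 64 GiB box (SCALE's middleware may manage or override this, so treat it as illustrative only):

# Raise the ARC ceiling at runtime (value is in bytes):
echo $((48 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
# Make it persistent across reboots via module options (51539607552 bytes = 48 GiB):
echo "options zfs zfs_arc_max=51539607552" > /etc/modprobe.d/zfs.conf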

I only care about NFS, and maybe iSCSI for special cases. I don't know about NFS stability on FreeBSD versus Linux, but I'm able to manage an NFS server and debug it when necessary.

This doesn't seem to be a reason to use SCALE instead of CORE. Both offer RFC-compliant NFS implementations and are managed similarly. I don't have any idea what "NFS stability" means. It's not like the NFS server code on FreeBSD was written by amateurs; parts of it trace back to original Sun Microsystems code, I believe.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
One big reason NOT to use SCALE (or Linux in general) is the fast pace of kernel development. You might say that is good. But there have been many bugs introduced into the kernel, some lasting years. Some are security issues. Even though SCALE is likely going to use a stable, long-term-support Linux kernel, people will want SCALE BECAUSE of the faster kernel development. This is due to newer hardware more or less requiring newer kernels. Back-porting those drivers takes time and work due to an unstable kernel API.

"Real" Unix OSes, FreeBSD, Solaris & AIX support a stable kernel interface for a particular level of OS, (aka FreeBSD 13.x, Solaris 11.4, AIX 7.2), which allows easy porting of newer hardware's drivers. Linux on the other hand, can be a pain for this to happen. So much so, one of the few constant requests in the forums for SCALE is updated kernels for newer hardware.

Even OpenZFS has problems with newer Linux kernels. Basically, OpenZFS is only "qualified" for specific Linux kernel ranges. The latest Linux kernel releases are almost never (if ever) supported out of the box. It takes a few weeks for the OpenZFS people to find the differences and make OpenZFS work.


Now, all that said, I do use Linux on all my non-NAS home computers (miniature desktop, miniature media server, 2 laptops). And all storage is ZFS (2 pools on each: one for the OS, one for the remaining space). But I have found that my Linux computers are more stable with long-term-support kernels. Only if I buy a new computer might I have to resort to using a Linux kernel that is not long-term support. (And even then, that's temporary, until another long-term-support kernel is released that I can migrate to.)


I wish the Linux kernel (and all supporting GNU software) would prioritize stability, reliability and security over new features. Case in point: the program "screen" is being deprecated in some Linux distros because it has not been updated in a while. People think the code is "crufty" and hard to maintain, and they say maintained alternatives are available, like "tmux". Now I have to learn all the details of "tmux" that I already know for "screen".


Anyway, that's my rant.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Hello morganL, Thank you for the answer.

I see the reason now, thanks, but I want to talk about this.
I'm a software engineer and I spent 8 years designing an enterprise storage product (from scratch, on Unix and then Linux after the ZFS license change), and with multipath+ZFS I did not lose or corrupt any pool or drive.
You are a special case. We have to target TrueNAS at less experienced users.

BTW, I want to ask a question: do you have any benchmark results for the "multipath vs. no-multipath" case on FreeBSD and Linux?

We see more performance benefit and less complexity from wideporting.
However, that is a single HBA with 2 ports to the same JBOD or expander.

On a pure storage performance basis, FreeBSD (TrueNAS 13.0) is still generally better than Linux (TrueNAS SCALE). We hope to close that gap, but there are Linux/ZFS issues to be resolved.
 

morphin

Dabbler
Joined
Jun 27, 2023
Messages
31
You are a special case. We have to target TrueNAS at less experienced users.

We see more performance benefit and less complexity from wideporting.
However, that is a single HBA with 2 ports to the same JBOD or expander.

On a pure storage performance basis, FreeBSD (TrueNAS 13.0) is still generally better than Linux (TrueNAS SCALE). We hope to close that gap, but there are Linux/ZFS issues to be resolved.

I understand. Dealing with Linux after Unix is a real mess; Linux requires special tuning for different use scenarios and hardware, and it's hard to track all the kernel changes because of the pace of development.

After I set up this storage I won't touch it, and I probably will not update it until I see a bug or failure. So I have decided to use CORE for this project.

Thank you for the answers and the great explanation of the current development status.
 

pricemc1

Dabbler
Joined
Aug 20, 2023
Messages
14
Yes, FreeNAS supported multipath.

However, it was removed from both TrueNAS CORE 13.0 and SCALE for safety reasons. We've had several users lose whole pools by miscabling systems and making administrative mistakes. It was about the only thing that caused catastrophic data loss.

TrueNAS CORE 13 still imports multipath pools but discourages users from creating new pools that way. TrueNAS SCALE does not import or create multipath pools.

We converted iX products from using multipath to not using it. We've not seen any decrease in reliability, but we have definitely reduced the number of tricky support situations.

It was good technology, but complex and high risk. Risk-reward was poor.
Sorry, but I'm a bit confused. I have installed 3 instances of CORE 13 U5 in the last few days, and all of them are hooked up to dual-controller SAS JBODs (Dell MD1400/1420) via 2-port SAS HBAs. Each HBA port is cabled to one of the enclosure's controllers. When I look in the web GUI, there is a multipathing section that appears under Storage. It shows all the SAS drives in the JBOD as multipath (active/passive though, not active/active). You mentioned that multipath had been deprecated in 13. If that is so, why am I seeing the multipath stuff in the interface?
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Sorry, but I'm a bit confused. I have installed 3 instances of CORE 13 U5 in the last few days, and all of them are hooked up to dual-controller SAS JBODs (Dell MD1400/1420) via 2-port SAS HBAs. Each HBA port is cabled to one of the enclosure's controllers. When I look in the web GUI, there is a multipathing section that appears under Storage. It shows all the SAS drives in the JBOD as multipath (active/passive though, not active/active). You mentioned that multipath had been deprecated in 13. If that is so, why am I seeing the multipath stuff in the interface?

When was the system set up?
 

pricemc1

Dabbler
Joined
Aug 20, 2023
Messages
14
All three systems have been set up in the last few days. See the screenshots showing the web GUI and the version of CORE.

When was the system set up?
Just this week I set them up using the latest stable CORE media available for download. See the attached screenshots showing the version and the web GUI page.

[Attached screenshots: multipath_core.jpg, core_version.jpg]
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
All three systems have been set up in the last few days. See the screenshots showing the web GUI and the version of CORE.


Just this week I set them up using the latest stable CORE media available for download. See the attached screenshots showing the version and the web GUI page.


There's some ambiguity.

The software changes were designed to allow existing multipath pools to keep operating. We didn't remove the organ.

However, we no longer document or support their operation, and we actively discourage any use, because we found there were too many support cases due to user mistakes, with loss of pools and data. We actually don't use it in TrueNAS Enterprise either.

If you can create pools with it.... I would not trust them. There is not enough testing.
 

pricemc1

Dabbler
Joined
Aug 20, 2023
Messages
14
There's some ambiguity.

The software changes were designed to allow existing multipath pools to keep operating. We didn't remove the organ.

However, we no longer document or support their operation, and we actively discourage any use, because we found there were too many support cases due to user mistakes, with loss of pools and data. We actually don't use it in TrueNAS Enterprise either.

If you can create pools with it.... I would not trust them. There is not enough testing.
Perhaps it would be better if they updated the documentation to specifically state that it is deprecated and not supported. As you say, there is definitely some ambiguity in how it is presented, since the web interface has nothing to indicate that it could be an unsupported or sub-optimal configuration. Most people are probably going to assume that they should connect both controllers on their enclosures redundantly, since most commercial vendors support and recommend a multipath model for traditional monolithic storage systems. If TrueNAS and iX have a different philosophy about multipathing, that's fine, but I would recommend it be clearly stated in an official manner (i.e. the documentation) rather than in forum posts. That being said, I do appreciate your response here in the forum, and since these systems are not in production yet I will redo them without the redundant HBA connections to avoid any potential issues.

Thanks,
pricemc1
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Perhaps it would be better if they updated the documentation to specifically state that it is deprecated and not supported. As you say, there is definitely some ambiguity in how it is presented, since the web interface has nothing to indicate that it could be an unsupported or sub-optimal configuration. Most people are probably going to assume that they should connect both controllers on their enclosures redundantly, since most commercial vendors support and recommend a multipath model for traditional monolithic storage systems. If TrueNAS and iX have a different philosophy about multipathing, that's fine, but I would recommend it be clearly stated in an official manner (i.e. the documentation) rather than in forum posts. That being said, I do appreciate your response here in the forum, and since these systems are not in production yet I will redo them without the redundant HBA connections to avoid any potential issues.

Thanks,
pricemc1

Agreed - it should be in Release Notes and in docs if it appears in web UI.

For the record:

In 13.0/CORE: SAS Multipath is not removed from the software. Existing pools can operate. It is no longer supported and tested by iXsystems, so new multipath pools are not recommended.

In SCALE: SAS Multipath was explicitly removed, from both the software and the web UI. Users with multipath pools cannot migrate from CORE to SCALE without removing multipath.

For both CORE and SCALE:

SAS wideporting - multiple cables from the same HBA to the same controller on a JBOD - is supported. The drives do not appear as multiple drives in the system. If one cable is unplugged, the system keeps operating.

In the event of an HBA or JBOD Controller failure, all cables can be moved to a replacement device manually.

For the Enterprise appliances, iX only uses wideporting in some limited cases for extra bandwidth. Each NAS controller is connected to a separate JBOD controller. We fail over both simultaneously.

These decisions were made after several customers accidentally recabled systems and caused pool corruption. The current system has not seen any of these problems.
 

pricemc1

Dabbler
Joined
Aug 20, 2023
Messages
14
Agreed - it should be in Release Notes and in docs if it appears in web UI.

For the record:

In 13.0/CORE: SAS Multipath is not removed from the software. Existing pools can operate. It is no longer supported and tested by iXsystems, so new multipath pools are not recommended.

In SCALE: SAS Multipath was explicitly removed, from both the software and the web UI. Users with multipath pools cannot migrate from CORE to SCALE without removing multipath.

For both CORE and SCALE:

SAS wideporting - multiple cables from the same HBA to the same controller on a JBOD - is supported. The drives do not appear as multiple drives in the system. If one cable is unplugged, the system keeps operating.

In the event of an HBA or JBOD Controller failure, all cables can be moved to a replacement device manually.

For the Enterprise appliances, iX only uses wideporting in some limited cases for extra bandwidth. Each NAS controller is connected to a separate JBOD controller. We fail over both simultaneously.

These decisions were made after several customers accidentally recabled systems and caused pool corruption. The current system has not seen any of these problems.
So I rebuilt one of the systems with the JBOD chassis cabled with only one SAS cable from a single HBA port to a single port on a single controller, since, as you say, that would be the supported configuration. I completely blew away the old TrueNAS VM, wiped the physical drives, redid the passthroughs, and did everything from scratch. There is a combination of SAS and SATA drives in the array that was rebuilt.

After the rebuild, when I went to create a new pool in the pool creation wizard, the SATA drives showed up as regular devices (e.g. da24), but the SAS devices showed up as multipath devices (e.g. multipath/disk5). It was very odd, because after the initial TrueNAS install and before creating the first pool there was no Multipath tab showing. Once I created the first pool, the Multipath tab started appearing under Storage as well. When I had installed TrueNAS the first time, with both JBOD controllers hooked up during install, the Multipath tab was there from the beginning.

When I looked in the Drives tab, all the drives were displayed as individual devices regardless of whether they were SATA or SAS. The only places I saw references to multipath were when I went to create a pool and on the Multipath tab (after pools are created). When cabled to only one JBOD controller with a single cable, after building a pool using SAS drives I started getting alerts like "Multipath multipath/disk2 connection is not optimal. Please check disk cables." for each SAS drive in the pool. I could dismiss the alerts, but that just grayed them out; they didn't actually go away. Each JBOD controller has 2 ports on it. So next, I tried cabling both ports on the HBA to the same JBOD controller (wide-porting, as you referred to it) to see if that might help, but it didn't resolve the multipath alerts. I then cabled the second HBA port to the second JBOD controller, and all the alerts went away. It seems that even in its current version, CORE sees dual-ported SAS drives as multipath regardless of whether both controllers in the JBOD are cabled up, and it expects both ports to be cabled to both controllers.

Given this behavior, are you sure about your statements regarding how multipath currently behaves in CORE? I'm not positive, but I think I did a test SCALE install, and with SCALE I didn't see anything showing up as multipath in the interface regardless of how things were cabled, and I didn't get any alerts about it. So SCALE seems to corroborate what you have said, but not so with CORE...
 