JMB582/JMB585 on 11.2 - will it work?

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
No, it won't. FreeNAS 11.2 has been completely unsupported since the end of 2019, so I recommend upgrading to TrueNAS CORE 12.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You could also consider getting an LSI HBA instead of that card and then have the choice of upgrading or not.
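If you do go the LSI route, the standard advice around here is a card flashed to IT-mode firmware. Assuming the sas2flash utility is available on your system, a quick check of what the card reports looks like this:

Code:
# List any LSI SAS2 controllers found, along with their firmware;
# the firmware should identify as IT (initiator-target), not IR/RAID.
sas2flash -list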
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
No, do not try to use SATA port multipliers. No good will come of it.

Okay, fffffffffffffffffffffffffffffine. I wrote a resource about this.

 

bruor

Cadet
Joined
Dec 27, 2021
Messages
7
I just had some issues attempting to get an OWC ThunderBay 8 (Thunderbolt 3) enclosure working via PCI passthrough of its 2x JMB585 controllers. Wondering if you had any issues with bare metal and that chipset on v12?

The cards seemed to be detected fine and would sporadically allow a pool to be created, but then all the drives would start throwing CAM errors and show as disconnected from the controllers. I ended up having to use physical RDM for the disks and passing them through to the VM that way.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
What's "v12"? What's a VM in this discussion? TrueNAS? Or are you trying to run a VM on top of TrueNAS and having problems? Please take some time to describe your setup so that other people can understand what you're doing and what you're asking about, because while these things may be super-obvious to you, they're confusing to everyone else.
 

bruor

Cadet
Joined
Dec 27, 2021
Messages
7
I was trying to ask the OP if they had success with that JMicron chipset on a bare-metal install of TrueNAS CORE 12. Sorry if "v12" was a little ambiguous.

Here's a quick summary of my setup and what I saw:
I'm running ESXi 7.0U2 on an Intel-based 2018 Mac mini.
I have an 8-bay Thunderbolt 3 enclosure attached, which exposes 2x JMB585-based controllers, each hosting 4x WD Red 2TB disks.
When using PCI passthrough of the controllers to the VM, the controllers would drop the disks and the console would spam CAM timeouts and detachment messages.
I was able to set up the volume on the disks while using PCI passthrough after a few tries/reboots, but gave up on it since I wasn't able to get the drives to stay online/attached.
I disabled passthrough on the controllers and converted the drives to physical RDM. The VM found the disks/pool on boot and it has been running as expected since.
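For reference, the physical RDM mapping was done from the ESXi shell, roughly like this (the device and datastore paths below are placeholders, not my actual ones):

Code:
# Find the raw device names first:
ls /vmfs/devices/disks/
# Create a physical-mode RDM pointer for one disk (repeat per disk):
vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX \
    /vmfs/volumes/datastore1/truenas/disk1-rdm.vmdk
# Then attach the resulting .vmdk to the TrueNAS VM as an existing disk.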

I'm thinking of moving this installation to a mini PC if I can find something reasonably priced that can attach to the disk enclosure, but I don't want to make that leap unless I can verify that someone else is using that chipset without issues.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That makes a whole heck of a lot more sense; context is everything.

Spam with CAM timeouts is symptomatic of SATA port multipliers. Do you know if this is a port multiplier doohickey? If so, my suspicion is that it's going to be difficult to guarantee stability.
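For anyone searching later: the CAM spam in question typically looks something like "(ada2:ahcich2:0:0:0): CAM status: Command timeout" followed by detach messages, and can be fished out of the kernel log with:

Code:
# Pull CAM timeout / detach noise out of the FreeBSD kernel messages
dmesg | grep -Ei 'cam status|timeout|detached'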
 

MaksDampf

Cadet
Joined
May 9, 2023
Messages
2
That makes a whole heck of a lot more sense; context is everything.

Spam with CAM timeouts is symptomatic of SATA port multipliers. Do you know if this is a port multiplier doohickey? If so, my suspicion is that it's going to be difficult to guarantee stability.
Sorry to revive a dead topic, but I came here via a Google search and I think we should correct this for future searchers.

You are talking about SATA port multipliers, and I understand the problems that come with them. For one, none of the Intel chipsets I know of are even verified for use with said port multipliers, so most problems will have to do with that.

But the JMB585 is not a SATA port multiplier.
I think you are mistaking it for the JMB575, which is a SATA-to-SATA port multiplier.
As far as I know, the JMB585 is a true PCIe Gen3 x2 controller with 5x individual SATA ports, which in turn support SATA port multipliers such as the 575 again, for up to 25 ports in total.
But if you use the JMB585 as a normal host controller with one drive on each port, there should be no problems at all.
It will reach 1700 MB/s, showing clearly that this is not a simple port multiplier. While it is not server-grade but rather designed for workstation-class Thunderbolt enclosures, these numbers are no worse than comparable LSI chipsets using 2x PCIe lanes.
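For what it's worth, the 1700 MB/s figure is at least plausible from the link math alone. A rough sanity check (PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding; the remaining gap is packet overhead):

Code:
# Usable line rate per PCIe 3.0 lane, in MB/s, before packet overhead:
echo $(( 8000 * 128 / 130 / 8 ))        # ~984 MB/s per lane
echo $(( 8000 * 128 / 130 / 8 * 2 ))    # ~1968 MB/s for x2; ~1700 MB/s after
                                        # TLP/DLLP overhead is in the right range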
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Sorry to revive a dead topic, but I came here via a Google search and I think we should correct this for future searchers.

?

For one, none of the Intel chipsets I know of are even verified for use with said port multipliers, so most problems will have to do with that.

What is this responsive to? We were discussing JMB parts, not Intel parts.

But the JMB585 is not a SATA port multiplier.
I think you are mistaking it for the JMB575, which is a SATA-to-SATA port multiplier.

I'm not mistaking it for anything. It is not my job to research what a given chipset is. If it's a SATA AHCI controller, the JMB's are known to be problematic. If it's a SATA port multiplier, PM's are known to be generally problematic for many reasons. Either way it is a bad idea and probably a lost cause.

As far as I know, the JMB585 is a true PCIe Gen3 x2 controller with 5x individual SATA ports, which in turn support SATA port multipliers such as the 575 again, for up to 25 ports in total.

Which is a nonstarter as explained in the resource.

But if you use the JMB585 as a normal host controller with one drive on each port, there should be no problems at all.

Not true; Thunderbolt is not supported as an attachment technology, and the JMB SATA AHCI controllers are not recommended in any case. Piling two bad things on top of each other does not make it better.

comparable LSI chipsets using 2x PCIe lanes.

And, pray tell, what part number would THAT be? I've been working with the LSI stuff for a long time and there are no "comparable LSI chipsets". Or LSI chipsets with 2x PCIe lanes. The nearly unobtainium 9211-4i is x4.

I think we should correct this for future searchers.

Well, this should be much clearer now.
 

MaksDampf

Cadet
Joined
May 9, 2023
Messages
2
Please be respectful of other forum members.
What is this responsive to? We were discussing JMB parts, not Intel parts.
And what are SATA port multipliers often connected to? Right, SATA ports. And if they are not validated for that, it is the Intel chipset that causes problems, not the port multipliers that are connected to it. I responded to your post erroneously confusing the JMB585 PCIe SATA controller with prior JMicron SATA port multipliers.
I'm not mistaking it for anything. It is not my job to research what a given chipset is.
If you don't have the time to read and comprehend what the question is, then don't answer. You are filling the web with half-truths and that is a problem for any user that tries to get answers.

If it's a SATA AHCI controller, the JMB's are known to be problematic. If it's a SATA port multiplier, PM's are known to be generally problematic for many reasons. Either way it is a bad idea and probably a lost cause.
That is like saying HGST hard disks are problematic based on experiences with the Deskstar 75GXP series. If you are not interested in the truth, then move on and don't bug people who care about it.
Which is a nonstarter as explained in the resource.
Have you actually read the "resource"?
There is not a single mention of the JMB585 there, and neither is there in the linked Backblaze article, which is mostly about quality issues of backplanes, not controllers. They mention a SYPEX chipset that is just slow because it's only PCIe 2.0 x1, and which is furthermore completely unrelated. And no, SATA port multipliers and controllers like the JMB585 are not comparable at all.
Not true; Thunderbolt is not supported as an attachment technology, and the JMB SATA AHCI controllers are not recommended in any case. Piling two bad things on top of each other does not make it better.
I am sorry to disappoint you, but the Thunderbolt protocol is the same as the PCIe protocol. It is just tunneled through PHYs to allow it to be used with special cables. So technically all you need to do is combine a PCIe controller like the JMB585 with a Thunderbolt PHY and you can connect it to your notebook. Which is what many DAS devices actually do.
And, pray tell, what part number would THAT be? I've been working with the LSI stuff for a long time and there are no "comparable LSI chipsets". Or LSI chipsets with 2x PCIe lanes. The nearly unobtainium 9211-4i is x4.
You can run it at x2, though. Same as running it on PCIe 2.0, which many people here do, as many older Intel chipsets (pre-Haswell) don't offer 3.0 from the chipset lanes, only from the CPU.
Anyway, you will likely see no speed difference at all, since hard disks will not get close to the 2 GB/s PCIe 3.0 x2 cap. 1700 MB/s is, after all, a realistic saturation speed after you subtract the protocol overhead; the controller might even be faster if it had more lanes.
Well, this should be much clearer now.
No, it isn't. This is just a bunch of half-truths, not very helpful to anybody.

I did some research, and the "problematic" JMB575 controller seems to be just fine running Linux mdadm and Unraid. From what I read, the real culprit is FreeBSD's handling of UUIDs, which trips over a bug in port multipliers and USB enclosures: these chips often don't communicate the drives' real serial numbers, but report the same string for every controller or drive. FreeBSD incorrectly treats serial numbers as UUIDs and thus assumes it is multipath to a single drive rather than multiple drives.
So yes, it is a bug in the SATA port multiplier. But it is also unexpected behaviour from FreeBSD, which I would classify as a bug.
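If you want to check whether your own enclosure or multiplier misbehaves this way on FreeBSD/TrueNAS CORE, something like this should show it (assuming smartctl is installed and the disks attach as ada devices; adjust names to match):

Code:
# Print the serial each disk reports; if several disks behind the enclosure
# return the same string, the multipath misdetection described above can occur.
for d in /dev/ada?; do
  printf '%s: ' "$d"
  smartctl -i "$d" | grep 'Serial Number'
done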

But this is not about port multipliers anyway, but about the JMB585 PCIe SATA controller, which has no such bug and which, according to Reddit, works just fine with Linux mdadm as well as TrueNAS.
 
Last edited by a moderator:

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
And what are SATA port multipliers often connected to? Right, SATA ports. And if they are not validated for that, it is the Intel chipset that causes problems, not the port multipliers that are connected to it. I responded to your post erroneously confusing the JMB585 PCIe SATA controller with prior JMicron SATA port multipliers.

For the rest of the audience here, a SATA chipset that doesn't support SATA port multipliers, such as Intel's SATA controllers, simply does not work (sometimes passing through only the drive on the first PM port). Doesn't "cause problems". Simply doesn't work.

If you don't have the time to read and comprehend what the question is, then don't answer. You are filling the web with half-truths and that is a problem for any user that tries to get answers.

I read an immense amount and answer a large number of questions, as should be clear from the approximately 18,000 post count you see.

Have you actually read the "resource"?

I wrote the resource. Additionally, I have done extensive research on the topic and have spent years driving people towards the LSI HBA solution because SATA PM's are so terrible in a ZFS environment.

There is not a single mention of the JMB585 there, and neither is there in the linked Backblaze article, which is mostly about quality issues of backplanes, not controllers. They mention a SYPEX chipset that is just slow because it's only PCIe 2.0 x1, and which is furthermore completely unrelated. And no, SATA port multipliers and controllers like the JMB585 are not comparable at all.

Yes, in the years since Backblaze experimented with SATA PM's, various backplanes (which are typically carriers for the port multipliers) have come and gone. They finally gave up about a decade ago because the damn things are so problematic. Still, the JMB controllers are known to be problematic.

I am sorry to disappoint you, but the Thunderbolt protocol is the same as the PCIe protocol. It is just tunneled through PHYs to allow it to be used with special cables. So technically all you need to do is combine a PCIe controller like the JMB585 with a Thunderbolt PHY and you can connect it to your notebook. Which is what many DAS devices actually do.

I'm sorry to disappoint *you*, but this disqualifies Thunderbolt as a candidate. No serious server should rely on this technology.

You can run it at x2, though. Same as running it on PCIe 2.0, which many people here do, as many older Intel chipsets (pre-Haswell) don't offer 3.0 from the chipset lanes, only from the CPU.

x2 sockets on a server mainboard are not a thing.

Anyway, you will likely see no speed difference at all, since hard disks will not get close to the 2 GB/s PCIe 3.0 x2 cap. 1700 MB/s is, after all, a realistic saturation speed after you subtract the protocol overhead; the controller might even be faster if it had more lanes.

Still not seeing x2 sockets on a server mainboard.

I did some research, and the "problematic" JMB575 controller seems to be just fine running Linux mdadm and Unraid.

That's fine, it's not ZFS. ZFS throws crushing I/O loads at the disks and makes SATA port multipliers, even ones that work 100%, unusable for most applications.

From what I read, the real culprit is FreeBSD's handling of UUIDs, which trips over a bug in port multipliers and USB enclosures: these chips often don't communicate the drives' real serial numbers, but report the same string for every controller or drive. FreeBSD incorrectly treats serial numbers as UUIDs and thus assumes it is multipath to a single drive rather than multiple drives.

That appears to be a matter of opinion as to how UUID's should be generated for ephemeral devices. iXsystems seems to have decided that this is really a port multiplier and USB enclosure problem. Since their focus is on producing NASware for their hardware platform, which does not rely on these things, they have previously indicated that using the serial number to identify unique disks is a design decision, not a bug.

So yes, it is a bug in the SATA port multiplier. But it is also unexpected behaviour from FreeBSD, which I would classify as a bug.

It would actually appear to be a behaviour in TrueNAS that you disagree with, if we want to be accurate about it.

But this is not about port multipliers anyway, but about the JMB585 PCIe SATA controller, which has no such bug and which, according to Reddit, works just fine with Linux mdadm as well as TrueNAS.

You can find just about anything "according to Reddit". Not particularly credible.

We've had problems with the JMB controllers. I'm not willing to recommend them unless/until these can be dialed in with more specificity. ZFS generates crushing amounts of I/O that tends to tease out design flaws in hardware. By way of comparison, the ASMedia 106x SATA controllers have worked reasonably well for people as long as there's no SATA port multiplier involved and as long as they are not knockoff silicon.
 

Sine

Cadet
Joined
Apr 15, 2022
Messages
5
The JMB585 is not a port multiplier. But it *supports* port multiplication. Not that you would want to use that feature.

And yes ... it is far from data-center-grade hardware. But for some homelab NAS shenanigans it does the job.

I have two of these controllers in use at the moment (even the M.2/NVMe-slot variant, God forbid).
One is in my own NAS to add some extra SATA ports for a couple of SSD scratch disks.

The second one (sensitive souls, avert your eyes) is in the NVMe slot of an Intel NUC with three SSDs hanging off it (one being the boot-pool, the other two a data mirror).
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The JMB585 is not a port multiplier.

Yes, of course it's not, but it is commonly used on cards that incorporate port multipliers and they often don't clearly identify as anything more than JMB585. I have actually been doing support here for a number of years and a number of thousands of messages, which is what finally tormented me into writing the associated resources regarding this topic. It doesn't really matter if the JMB585 is a port multiplier or not because it still doesn't work well, even as a pure AHCI controller. So it's easy just to say "don't use JMB585" and be correct regardless.

And yes ... it is far from data-center-grade hardware. But for some homelab NAS shenanigans it does the job.

Generally at a price more expensive than buying a used LSI HBA would cost.
 