In my opinion, yes, a separate 12Gbps SAS card with 8e ports for each enclosure would be best. Your enclosures are quite large at 60 disks, so you want more SAS lanes and higher throughput per lane (assuming your JBOD enclosure's SAS Expander supports SAS 3's 12Gbps per-lane speed).
Daisy chaining works better with a smaller number of disks in the upstream enclosure, like HBA <-> 12 disk JBOD <-> 12 disk JBOD. But 60 behind 60 is too many disks sharing one link, in my opinion.
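To put rough numbers on that, here is a quick back-of-the-envelope sketch. The per-disk streaming speed and the usable-bandwidth fraction are my assumptions, not measurements, so plug in your own figures:

```python
# Rough SAS bandwidth math for the configurations discussed above.
# Assumptions (adjust to your hardware): SAS 3 = 12 Gbps per lane,
# ~80% usable after protocol overhead, ~250 MB/s streaming per HDD.

LANE_GBPS = 12.0    # SAS 3 per-lane signaling rate
OVERHEAD = 0.80     # assumed usable fraction after framing/protocol overhead
DISK_MBPS = 250.0   # assumed sequential throughput of one HDD

def link_mbps(lanes: int) -> float:
    """Usable MB/s of a SAS link with the given number of lanes."""
    return lanes * LANE_GBPS * 1000 / 8 * OVERHEAD

for label, lanes, disks in [
    ("One 4-lane cable, one 60-disk JBOD", 4, 60),
    ("Two 4-lane cables (8e), one 60-disk JBOD", 8, 60),
    ("One 4-lane cable, two daisy-chained 60-disk JBODs", 4, 120),
]:
    usable = link_mbps(lanes)
    demand = disks * DISK_MBPS
    print(f"{label}: {usable:,.0f} MB/s link for {demand:,.0f} MB/s "
          f"of disk demand ({usable / disks:,.0f} MB/s per disk)")
```

Under those assumptions, daisy chaining two 60-disk JBODs leaves each disk roughly 40 MB/s of shared link bandwidth during a full-pool streaming workload, versus about 160 MB/s with a dedicated 8e connection per enclosure.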
And since you likely want low down time, pre-planning & installing 2 x SAS 12Gbps cards with 8e ports, plus cables and rack space, is warranted. In theory, you can add the second 60 disk JBOD live to your NAS, since it would be on a dedicated SAS HBA.
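As an illustration of what that live expansion could look like, here is a minimal sketch. The pool name, the layout (six 10-disk RAIDZ2 vDevs) and the device names are all hypothetical; it only prints the "zpool add" command so you can review it before running anything:

```python
# Hypothetical sketch: build the "zpool add" command for a second 60-disk
# JBOD arranged as six 10-disk RAIDZ2 vDevs. Prints only; nothing is run.

POOL = "tank"                               # hypothetical pool name
DISKS = [f"da{n}" for n in range(60, 120)]  # hypothetical device names

cmd = ["zpool", "add", POOL]
for i in range(0, len(DISKS), 10):          # one RAIDZ2 vDev per 10 disks
    cmd += ["raidz2"] + DISKS[i:i + 10]

print(" ".join(cmd))                        # review, then run by hand
```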
There are some SAS 16e cards, which might be useful. But those generally need a full height PCIe slot for the 4 x SAS connectors on the PCIe back panel.
Do your enclosures support 2 x 4 lane SAS connectors?
Do they also support SAS 3's 12Gbps speed for their SAS Expander?
Failure modes depend on circumstances. As has been said, ZFS pool loss comes from losing a vDev. However, a loss of communication (HBA overheating, cable failure or JBOD power failure) does not generally result in ZFS pool loss. Your pool would go offline and you would have to fix the problem. ZFS was designed with this specific failure in mind.
The example where @Patrick M. Hausen lost his pool was probably due to unknown vDev redundancy loss before the enclosure lost power. That is the purpose of the e-mails: they let you know of failures so you can deal with them.
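TrueNAS has its own alert system for this, but to illustrate what those e-mails do, here is a minimal sketch of a health check you could run from cron on plain OpenZFS. The sender, recipient and mail host are placeholders:

```python
# Minimal sketch: e-mail an alert when "zpool status -x" reports anything
# other than healthy pools. Addresses and mail host are placeholders.
import smtplib
import subprocess
from email.message import EmailMessage

status = subprocess.run(
    ["zpool", "status", "-x"], capture_output=True, text=True
).stdout

# "zpool status -x" prints "all pools are healthy" when nothing is wrong.
if "all pools are healthy" not in status:
    msg = EmailMessage()
    msg["Subject"] = "ZFS pool needs attention"
    msg["From"] = "nas@example.com"          # placeholder sender
    msg["To"] = "admin@example.com"          # placeholder recipient
    msg.set_content(status)
    with smtplib.SMTP("localhost") as smtp:  # placeholder mail host
        smtp.send_message(msg)
```

The point is the same either way: catch the first redundancy loss while the vDev is still intact, rather than finding out after a second failure takes the pool with it.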