Hey
@NWMrTIm
The page has actually since been updated - the "mirrored cluster" option has been dropped in favor of the erasure-coding option, but the same general principle remains. "Federated Storage" means "each server has its own storage, but all of it can be accessed through a single namespace" - but if Server A goes offline, the data on it is temporarily inaccessible. Think of a single share path \\TRUENAS containing FOLDER A and FOLDER B - two separate folders, each living on a separate server.
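To make that concrete, here's a purely illustrative sketch (all names made up) of why a federated namespace loses access to a member's data when that member goes down:

```python
# Illustrative only: in a federated namespace, each folder is backed by
# exactly one member server, so an offline server takes its folder with it.

SERVERS_ONLINE = {"server-a": False, "server-b": True}   # server-a is down

# \\TRUENAS looks like one server, but each folder lives on one member
FEDERATED_NAMESPACE = {
    r"\\TRUENAS\FOLDER A": "server-a",
    r"\\TRUENAS\FOLDER B": "server-b",
}

def open_folder(path: str) -> str:
    backing = FEDERATED_NAMESPACE[path]
    if not SERVERS_ONLINE[backing]:
        raise ConnectionError(f"{path} is unreachable: {backing} is offline")
    return f"{path} served by {backing}"

print(open_folder(r"\\TRUENAS\FOLDER B"))   # fine: server-b is up
print(open_folder(r"\\TRUENAS\FOLDER A"))   # raises: server-a is offline
```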
The mirrored or erasure-coded cluster distributes the data across all of the member servers in the cluster - you still have a single namespace of \\TRUENAS, but because the data is distributed with redundancy, the loss of a single server still lets you reconstruct its data from the remaining members, similar to a RAIDZ configuration.
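And here's the distributed idea in miniature - a single-parity XOR sketch, analogous in spirit to RAIDZ1 (not the cluster's actual erasure-coding scheme): any one lost member can be rebuilt from the survivors.

```python
# Minimal single-parity sketch: split data across N-1 members plus one
# XOR parity member, so losing any single member leaves enough
# information to rebuild it from the rest.
from functools import reduce
from operator import xor

def split_with_parity(data: bytes, members: int) -> list[bytes]:
    """Split data into members-1 equal chunks and append an XOR parity chunk."""
    size = -(-len(data) // (members - 1))                 # ceiling division
    padded = data.ljust(size * (members - 1), b"\0")
    chunks = [padded[i*size:(i+1)*size] for i in range(members - 1)]
    parity = bytes(reduce(xor, col) for col in zip(*chunks))
    return chunks + [parity]

def rebuild(surviving: list[bytes]) -> bytes:
    """XOR of all surviving members reproduces the single missing one."""
    return bytes(reduce(xor, col) for col in zip(*surviving))

shares = split_with_parity(b"hello from the cluster!", members=4)
lost = shares.pop(1)                # simulate one server going offline
assert rebuild(shares) == lost      # remaining three rebuild the lost share
```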
But I want to ask HoneyBadger:
When you say the "shared-storage clustering" you're looking to implement is only supported with TrueNAS Enterprise on the M-series platform, does that mean I can't set that up on a couple of VMs, or on some old hardware - even if that old hardware is a Xeon E3-12xx series with an LSI 9211 controller flashed to IT mode to act as an HBA? (BTW I set up TrueNAS CORE on it and it works great.) I guess my question is: should it work if the hardware supports it, or will the software prevent it because it detects I'm not running TrueNAS Enterprise on the M-series platform specifically?
The shared-storage clustering identified as the "HA System" above does require a TrueNAS Enterprise license and iXsystems hardware that supports it (TrueNAS X/M/F-series) - and based on your other thread about wanting hypervisors to have failover/uninterrupted access, that's what you'd need there as well.
You can certainly set up two TrueNAS systems and use some manner of replication/sync between them - something like Syncthing, or scheduled ZFS replication - to keep their files in near-real-time sync. But to have failover happen in an "uninterrupted" fashion when a single system is rebooted, both systems need access to the same drives over a primary storage bus, such as a dual-ported SAS/NVMe backplane, as well as arbitration to decide which head unit should be mounting the pool at any given time. Having two controllers attempt to forcibly mount the same ZFS pool at once is a recipe for data corruption.
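If you go the "two systems kept in sync" route, here's a rough sketch of what a snapshot-based replication loop could look like - the dataset name, standby host, and interval are all placeholders, and note this gives you a standby copy a few minutes behind, not the uninterrupted failover described above:

```python
# Sketch of snapshot-based replication to a standby box (host, dataset,
# and interval are placeholders, not a TrueNAS API). The standby trails
# the source by up to INTERVAL seconds - a recovery copy, not HA failover.
import subprocess
import time

SOURCE_DATASET = "tank/shares"        # hypothetical source dataset
TARGET_HOST = "standby-nas"           # hypothetical standby system
TARGET_DATASET = "tank/shares"        # dataset on the standby
INTERVAL = 300                        # seconds between snapshots

def replicate_once(prev_snap: str | None) -> str:
    """Take a new snapshot and send it (incrementally if possible)."""
    snap = f"{SOURCE_DATASET}@repl-{int(time.time())}"
    subprocess.run(["zfs", "snapshot", snap], check=True)
    send_cmd = ["zfs", "send"] + (["-i", prev_snap] if prev_snap else []) + [snap]
    send = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
    # -F rolls the standby back to the last common snapshot before receiving
    subprocess.run(["ssh", TARGET_HOST, "zfs", "recv", "-F", TARGET_DATASET],
                   stdin=send.stdout, check=True)
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")
    return snap

prev = None
while True:
    prev = replicate_once(prev)
    time.sleep(INTERVAL)
```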