Could you explain what you mean by 'flaky and fragile'? Is that something specific to FreeBSD, or a USB problem in general? Is it something that rears its ugly head with all USB connections or just some (i.e. if my connections seem stable, can I assume they're okay)? Is it a problem ZFS can work around, even with some performance hit, or something endangering data integrity even in a case where snapshots are stored on the local SATA drives?
I mean, at the moment I'm using a literal array of USB drives under Windows for dumb storage, and they have worked fine for two years without any problems unique to the interface. (Sure, drives die, but so do SATA drives, and it's more hassle to replace a SATA drive than to plug in a replacement USB drive.) I know some people get connect/disconnect cycles, and there can be hassles like plugging in one drive knocking another drive off while the system figures out what to access, but I'm seeking more specific details, because otherwise my particular needs seem to indicate USB is the best choice.
I'm aware there might be performance bottlenecks in trying to run a striped array through a single USB bus.
Even if this is not my "primary" storage (if in the end I have a SATA pool), I still need to experiment with creating a sneakernet migration volume: a single ZFS pool on two or more USB drives, so that all the protection against data corruption and so forth remains in force during transit, until it can be imported into another system or copied onto SATA-based local pool storage. I'm hoping all that integrity verification and redundancy would make it better suited to that job than accessing the drives from Windows and mailing them formatted as NTFS. :P
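For reference, the sneakernet workflow I'm picturing would look roughly like this sketch. The device names (/dev/da0, /dev/da1) and the pool name "sneakernet" are placeholders for whatever the USB drives actually show up as on the FreeBSD box:

```shell
# Create a mirrored pool across two USB drives so every block is
# checksummed and stored redundantly during transit.
# (da0/da1 are hypothetical device names; adjust to your system.)
zpool create sneakernet mirror /dev/da0 /dev/da1

# ...copy data onto the pool, then cleanly detach it before unplugging:
zpool export sneakernet

# At the destination machine, plug the drives in and import the pool:
zpool import sneakernet

# Verify everything survived the trip, then check for errors:
zpool scrub sneakernet
zpool status sneakernet
```

The mirror is what buys the in-transit protection: if one drive takes a knock in the mail, the scrub on arrival can repair damaged blocks from the surviving copy.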
The second likely/desired use (whether it's done with USB drives, or SATA drives in hot-swap trays) is the creation of "offline storage sets", since I got a firm talking-to about the difficulty of going much over 32TB, that being the frontier so far. My librarian project doesn't need "24/7 100% uptime storage"; I need dumb storage sitting on the shelf, saving data for a future when such datasets are easier to manage in real time. The sets would be periodically reconnected (probably often enough to prevent head stiction, since powered-down drives shouldn't suffer excessive bit rot) to enable a full scrub (knowing full well it might take a week plus to run that scrub) and then powered back down again once done.
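A maintenance pass on one of those shelved sets might be sketched like this. The pool name "shelf01" is invented, and the polling interval is arbitrary:

```shell
#!/bin/sh
# Hypothetical maintenance cycle for one offline storage set.
# ("shelf01" is an invented pool name; substitute your own.)

zpool import shelf01           # bring the shelved pool back online

zpool scrub shelf01            # verify every block against its checksum

# Poll until the scrub finishes -- on a large pool this loop
# could genuinely run for a week or more.
while zpool status shelf01 | grep -q "scrub in progress"; do
    sleep 3600
done

zpool status shelf01           # note any repaired or unrecoverable errors
zpool export shelf01           # cleanly detach before powering drives down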