I honestly can't even tell you why there's such a thing as a mandatory root dataset for ZFS pools. I bet if you traced it back to its origins, you'd find out that the developers just shrugged their shoulders and figured "Eh, we'll just keep it that way. Who cares."
Unless someone can demonstrate the creation of a pool that contains no datasets at all, not even a root dataset? Or a pool in which you can have multiple root datasets?
Think of how much better ZFS would be if that were the case.
It seems arbitrary that every brand-new pool comes with a root dataset bearing the same name as the pool itself.
A pool is a pool. A dataset is a dataset. Pool commands affect the pool (features, scrubs, vdevs), and dataset commands affect datasets (snapshots, compression, encryption, recordsize, etc.). Yet for some reason they went with this weird relationship between a pool and its mandatory root dataset...
It removes flexibility and enforces strange rules and hierarchies (as seen in a situation like this).
Ideally, creating a brand-new pool should give you a pool with zero datasets.
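In that imagined world, pool creation would look something like this (to be clear, this is made-up behavior, not real ZFS; today `zpool create` always creates the root dataset):

```shell
# Imagined behavior (NOT real ZFS): creating a pool yields zero datasets.
zpool create BigPool mirror /dev/sda /dev/sdb

# Imagined result: listing the pool's datasets would show nothing,
# not even a root dataset named "BigPool".
zfs list -r BigPool
```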
And then you create root datasets (yes, plural), such as:
- BigPool
  - robert
    - archives
    - documents
    - media
    - torrents
  - sally
  - billy
    - archives
    - documents
  - temporary
See? Then you can replicate an entire root dataset from another pool and stick it in as another root dataset.
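A sketch of what that replication could look like. The `BigPool:sally` target syntax is invented to illustrate the idea of receiving a tree as a new root dataset; real ZFS has no such form, and the pool/dataset names are just examples:

```shell
# Snapshot sally's entire tree on the old pool...
zfs snapshot -r OldPool/sally@migrate

# ...then replicate it into BigPool as its own root dataset.
# (Imagined target syntax -- real ZFS would force a target
# underneath the root dataset, e.g. BigPool/sally.)
zfs send -R OldPool/sally@migrate | zfs receive BigPool:sally
```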
But nope. For some reason you have no choice but to have the one-and-only root dataset "BigPool" under the pool "BigPool". (It's tacky, but you can still create your own pseudo-root datasets.)
I just don't understand why they went this route with ZFS.
Can you imagine if partitioning were like this? "You want to create partitions under Drive1? Sure! But you must always have a master partition named Drive1, and only within it can you create multiple partitions..."
