I'm upgrading from Synology to TrueNAS, and I'm trying to get the best of all worlds here (TrueNAS as software, but with the expansion functionality in Synology).
I think I've figured out how to make that happen.
I have a mixture of drive sizes, and I plan to add drives over time. I have 18 bays to fill, and I currently have seven drives (four 6TB, two 8TB, and one 18TB).
Since drive capacities usually come in multiples of 2TB, I'm planning to break each drive down into multiple 2TB partitions (up to 15 per drive), combine one partition from each drive into a vdev, and then stripe those vdevs together into a single pool.
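As a sanity check on that layout, here's a quick sketch (plain Python, with my current drives hard-coded; the drive labels are just illustrative) of how many 2TB slices each drive yields and how many full-width vdevs that supports:

```python
# Sketch: how many 2 TB slices each drive yields, and how many
# full-width vdevs the current seven drives could support.
# Drive sizes are from this post; the labels are made up.
SLICE_TB = 2

drives = {
    "6TB-a": 6, "6TB-b": 6, "6TB-c": 6, "6TB-d": 6,
    "8TB-a": 8, "8TB-b": 8,
    "18TB-a": 18,
}

# Whole 2 TB slices per drive (integer division drops any remainder).
slices = {name: size // SLICE_TB for name, size in drives.items()}
print(slices)  # each 6 TB drive -> 3 slices, the 18 TB drive -> 9

# A full-width vdev takes one slice from each of the 7 drives, so the
# number of vdevs spanning all drives is limited by the smallest drive:
full_width_vdevs = min(slices.values())
print(full_width_vdevs)  # -> 3
```

So the smallest drives cap things at three 7-wide vdevs, with spare slices left over on the 8TB and 18TB drives for later.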
Then, when I add drives, I can use a script to "fail" individual 2TB partitions on existing drives and rebuild them onto the new drives, repeating until enough drives have 2TB free to create a new vdev of parallel partitions. This should let me grow the array one drive at a time, upgrade drive sizes whenever I want, and avoid losing any space, since everything effectively lives in one huge array (which can survive two or three drive failures, depending on how I set it up, without needing extra space).
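The "fail and redistribute" step should mostly boil down to `zpool replace` per slice. Here's a dry-run sketch of what the script might do (the pool name and partition names are placeholders, and it only prints the commands rather than running them):

```shell
# Dry-run sketch of migrating 2 TB slices onto a freshly added drive.
# Pool and partition names are placeholders, not real devices.
POOL="tank"
NEW_DRIVE="sdh"

# Move one slice each from two existing drives onto the new drive,
# freeing 2 TB on each donor. ZFS resilvers only the affected vdev
# for each replace, so these would be done one at a time.
i=1
CMDS=""
for old_part in sda5 sdb5; do
    CMD="zpool replace $POOL $old_part ${NEW_DRIVE}$i"
    echo "$CMD"
    CMDS="$CMDS $CMD"
    i=$((i + 1))
done
```

Once enough donors have a free 2TB slice, those slices become the members of the next vdev.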
I'm pretty sure I can script all of this using the excellent command-line functionality in TrueNAS SCALE. I'd honestly love to see this built into TrueNAS, as it would make it a no-brainer upgrade from Synology (especially since Synology hasn't added many, or any, new high-capacity third-party drives to their compatibility list).
Has anyone tried this before? I don't see any obvious issues as long as it's scripted (so that I don't configure it wrong). My only concern at the moment is that ZFS might write files across partitions in separate vdevs, thinking it's spreading out the load, but unintentionally concentrate a lot of extra wear on the small number of drives that happen to hold those partitions. I'd presume ZFS generally balances the load well enough that this shouldn't be too big of a deal.
I figure that this group of fine folks has tried just about everything and might have some good input before I go and do something very bad :)
BTW, this approach should also make it possible to maximize drive usage (without needing to wait for the OpenZFS update that expands vdevs in place), albeit with a little shuffling around of data. As long as there are at least 3 (or 4) drives of the same or greater capacity, those drives can each contribute additional 2TB partitions to a new vdev added to the pool, expanding the storage space further. The only downside is that those larger drives get a little more wear and tear, but I think that's the normal tradeoff with mixed-size arrays (and something TrueNAS could mitigate by preferring to store infrequently-used data on the larger drives).
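Concretely, once the spare 2TB slices exist on the larger drives, adding the extra vdev is a one-liner. Another dry-run sketch (pool and partition names are made up; in my case the three larger drives would be the two 8TB and the 18TB):

```shell
# Dry-run sketch: after the full-width vdevs are built, the 8 TB and
# 18 TB drives each have at least one leftover 2 TB slice, so a
# narrower raidz1 vdev can be added from just those three drives.
# Partition names are placeholders.
POOL="tank"
SPARE_PARTS="sde4 sdf4 sdg4"   # one leftover slice per larger drive

ADD_CMD="zpool add $POOL raidz1 $SPARE_PARTS"
echo "$ADD_CMD"
```

The new vdev is narrower than the others, which is fine for capacity, though its redundancy level should match the rest of the pool's tolerance goals.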