Fragmentation on ZFS doesn't matter -- until it does. And for me and my current pool / setup, it actually doesn't matter much.
I have an existing pool that is fragmented and a new pool that is empty. The ZFS mythos says that a send / copy will reduce fragmentation. In my case, though I am fluent with the commands and process, that myth is just that: the new "copy" ends up twice as fragmented as the source.
Because fragmentation comes up so often, it makes sense to try a number of approaches and lay bare what does and does not work in reducing it, using the only tools we really have: new blank drives and the bash prompt. Let's be scientific. Pose hypotheses. I'm happy to supply the petri dish and report back on the results.
I made this a contest just to have a little fun and contribute results back to the community for others to point to, a la: "[contest entrant] proved that ______ doesn't work when you are migrating to a new pool -- your fragmentation may go up and not down"
Specifics:
I will run up to ten (10) tests, executing user-suggested commands from the community. I will award a $50 Amazon Gift Card to the person who lowers fragmentation on my new pool by the largest amount. And as a token of thanks to iXsystems for FreeNAS and for hosting great conversations on this forum, I will donate $50 to iXsystems in whatever manner they deem acceptable. Oh, and bragging rights go to the winner, too!
Phase 1: indicate that you are interested, asking whatever questions you have about the system that I haven't already answered. This phase will end when we hit around 20 or so interested people, at which point I will list the top 10 players, scoring each as 2500 - (100 * queue position) + number of forum posts and sorting in descending order. [note: still open as of Nov 22 @ 1934 GMT]
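To illustrate the scoring: someone fifth in the queue with 40 forum posts would score 2500 - (100 * 5) + 40 = 2040, so early sign-ups and active forum members rank higher.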
Phase 2: Each player will describe the configuration settings and commands I am to run. I will execute each player's test and report the results.
Phase 3: If there is sufficient lobbying / groaning about a missing scenario, I will do a couple of last runs. Then I will award the prize. I reserve the right to award early if someone just nails it.
Setup: [last edited Nov 28 @ 9:42 GMT; space got a little tighter on the source pool, but I'm not worried]
- SuperMicro 12-bay SAS enclosure
- SuperMicro X8DTN mobo
- Avago 9211-8i HBA, in IT mode; capable of 6Gbps
- 2x Intel Xeon X5670 processors
- 96 GB ECC RAM
- Ubuntu 16.04.1 with native ZFS (kernel module, not FUSE) (why am I here? Because ZFS on Ubuntu is nascent. Someday I will convert to FreeNAS)
- SLOG available, but not in pool (HyperX® Predator PCIe 240GB SSD with speeds of up to 1400MB/s read and 1000MB/s write)
- Existing pool:
- 6x 4TB in Raidz2, unencrypted on 6Gbps SAS drives (6Gbps is limit of backplane)
- Lots of big media files
- 84% of capacity utilized (I will use a common snapshot for all tests; see the snapshot command after this list)
- I had a SLOG in place for the first 8 TB or so, then removed it (see the add / remove sketch after this list)
- pool is currently -- and has *always* been -- about 22% fragmented, ever since rsyncing files to it from an old NAS
- compression is on for all datasets, but the compression ratio is 1.00x for all but one, where it is 1.03x
- I have not tweaked logbias nor anything else on the pool
- Drives are Seagate 7200rpm (ST4000NM0023)
- recordsize is 128K [thanks SweetAndLow]
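For reference, here is a minimal sketch of the snapshot and SLOG commands mentioned above. The snapshot name comes from my baseline command below; the NVMe device path is a placeholder, not my exact device:

sudo zfs snapshot -r tank1@recursive_shot_112116   # recursive snapshot of tank1 and all child datasets
sudo zpool add tank1 log /dev/nvme0n1              # attach the SSD as a SLOG (placeholder device path)
sudo zpool remove tank1 /dev/nvme0n1               # detach it again, as I did after ~8 TB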
Goal:
- Copy all files to new pool, which is 6x 4TB (identical size)
- New pool is encrypted using LUKS (a sketch of the setup follows this list). Given AES-NI, there is no expected performance hit. LUKS plays the role GELI plays on FreeBSD; it is what one uses on Ubuntu.
- [update 11/28: encryption process does not make fragmentation any worse than when using bare drives directly]
- Drives are slightly better models (ST4000NM0034) but ostensibly the same
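For anyone curious how the LUKS layer sits under ZFS, here is a minimal sketch of how such a pool could be built on Ubuntu. The device paths and mapper names are placeholders, not my exact layout:

sudo cryptsetup luksFormat /dev/sdb        # format each of the six drives (repeat per drive)
sudo cryptsetup luksOpen /dev/sdb crypt1   # open it, creating /dev/mapper/crypt1
sudo zpool create tank22 raidz2 /dev/mapper/crypt1 /dev/mapper/crypt2 \
    /dev/mapper/crypt3 /dev/mapper/crypt4 /dev/mapper/crypt5 /dev/mapper/crypt6

You can sanity-check the AES-NI claim with cryptsetup benchmark, which reports raw aes-xts throughput on your CPU.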
Baseline:
- command: zfs send -R tank1@recursive_shot_112116 | pv -s 12T | zfs receive tank22
- results: sudo zpool list
NAME     SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
tank1    21.8T  18.0T  3.79T  -         22%   82%  1.00x  ONLINE  -
tank22   21.8T  17.9T  3.81T  -         49%   82%  1.00x  ONLINE  -
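A note for anyone digging into these numbers: the FRAG column measures fragmentation of the pool's free space, not of the files themselves. If you want to see where it sits, per-vdev figures and per-metaslab detail are available (the zdb output format varies by ZFS version):

sudo zpool list -v tank22    # per-vdev FRAG breakdown
sudo zdb -m tank22           # per-metaslab free-space detail (verbose)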
I'm a lurker in this community, so if this contest rubs people the wrong way and ends up being a horrible idea, I'll just politely tip my hat and slink away. But, to reiterate, I thought that my unique, time-isn't-an-issue, identical source / target situation might allow me to run tests that benefit us all.
Lastly: this contest is a private contest, not sponsored or endorsed in any way by iXsystems nor FreeNAS nor any commercial entity. iXsystems may at their sole discretion stop this contest for any reason [I certainly hope that they do not]. All test results published here become the property of iXsystems.
Best,
TC