Elliot Dierksen
Guru · Joined Dec 29, 2014 · Messages: 1,135
@Ender117 and @HoneyBadger - Moving discussion to a new thread.
I was maxing out at 4G NFS writes with my pool constructed as follows:
Code:
root@freenas2:/nonexistent # zpool list -v RAIDZ2-I
NAME                                            SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
RAIDZ2-I                                       14.5T  5.75T  8.75T         -    32%    39%  1.00x  ONLINE  /mnt
  raidz2                                       7.25T  5.52T  1.73T         -    60%    76%
    gptid/bd041ac6-9e63-11e7-a091-e4c722848f30     -      -      -         -      -      -
    gptid/bdef2899-9e63-11e7-a091-e4c722848f30     -      -      -         -      -      -
    gptid/bed51d90-9e63-11e7-a091-e4c722848f30     -      -      -         -      -      -
    gptid/bfb76075-9e63-11e7-a091-e4c722848f30     -      -      -         -      -      -
    gptid/c09c704a-9e63-11e7-a091-e4c722848f30     -      -      -         -      -      -
    gptid/c1922b7c-9e63-11e7-a091-e4c722848f30     -      -      -         -      -      -
    gptid/c276eb75-9e63-11e7-a091-e4c722848f30     -      -      -         -      -      -
    gptid/c3724eeb-9e63-11e7-a091-e4c722848f30     -      -      -         -      -      -
  raidz2                                       7.25T   240G  7.02T         -     5%     3%
    gptid/a1b7ef4b-3c2a-11e8-978a-e4c722848f30     -      -      -         -      -      -
    gptid/a2eb419f-3c2a-11e8-978a-e4c722848f30     -      -      -         -      -      -
    gptid/a41758d7-3c2a-11e8-978a-e4c722848f30     -      -      -         -      -      -
    gptid/a5444dfb-3c2a-11e8-978a-e4c722848f30     -      -      -         -      -      -
    gptid/a6dcd16f-3c2a-11e8-978a-e4c722848f30     -      -      -         -      -      -
    gptid/a80cd73c-3c2a-11e8-978a-e4c722848f30     -      -      -         -      -      -
    gptid/a94711a5-3c2a-11e8-978a-e4c722848f30     -      -      -         -      -      -
    gptid/aaa6631d-3c2a-11e8-978a-e4c722848f30     -      -      -         -      -      -
log                                                -      -      -         -      -      -
  nvd0p1                                       15.9G  5.65M  15.9G         -     0%     0%
cache                                              -      -      -         -      -      -
  nvd0p4                                        213G  17.3G   196G         -     0%     8%
spare                                              -      -      -         -      -      -
  gptid/4abff125-23a2-11e8-a466-e4c722848f30       -      -      -         -      -      -
I noticed the fragmentation and figured it was part of the issue, so I moved the data off and re-created the pool. Now it looks like this:
Code:
root@freenas2:/nonexistent # zpool list -v RAIDZ2-I
NAME                                            SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
RAIDZ2-I                                       14.5T   158G  14.3T         -     0%     1%  1.00x  ONLINE  /mnt
  raidz2                                       3.62T  39.6G  3.59T         -     0%     1%
    gptid/70852685-dddc-11e8-adca-e4c722848f30     -      -      -         -      -      -
    gptid/71686954-dddc-11e8-adca-e4c722848f30     -      -      -         -      -      -
    gptid/724e4021-dddc-11e8-adca-e4c722848f30     -      -      -         -      -      -
    gptid/73554422-dddc-11e8-adca-e4c722848f30     -      -      -         -      -      -
  raidz2                                       3.62T  39.6G  3.59T         -     0%     1%
    gptid/746dafd4-dddc-11e8-adca-e4c722848f30     -      -      -         -      -      -
    gptid/75626368-dddc-11e8-adca-e4c722848f30     -      -      -         -      -      -
    gptid/764f85ad-dddc-11e8-adca-e4c722848f30     -      -      -         -      -      -
    gptid/779e221d-dddc-11e8-adca-e4c722848f30     -      -      -         -      -      -
  raidz2                                       3.62T  39.6G  3.59T         -     0%     1%
    gptid/ce00e493-dddc-11e8-adca-e4c722848f30     -      -      -         -      -      -
    gptid/ceee50a2-dddc-11e8-adca-e4c722848f30     -      -      -         -      -      -
    gptid/cfe1d0e6-dddc-11e8-adca-e4c722848f30     -      -      -         -      -      -
    gptid/d0d66ee5-dddc-11e8-adca-e4c722848f30     -      -      -         -      -      -
  raidz2                                       3.62T  39.6G  3.59T         -     0%     1%
    gptid/d450f2fb-dddc-11e8-adca-e4c722848f30     -      -      -         -      -      -
    gptid/d55539a0-dddc-11e8-adca-e4c722848f30     -      -      -         -      -      -
    gptid/d64a2170-dddc-11e8-adca-e4c722848f30     -      -      -         -      -      -
    gptid/d7709adf-dddc-11e8-adca-e4c722848f30     -      -      -         -      -      -
log                                                -      -      -         -      -      -
  nvd0p1                                       15.9G      0  15.9G         -     0%     0%
cache                                              -      -      -         -      -      -
  nvd0p4                                        213G  17.5G   195G         -     0%     8%
spare                                              -      -      -         -      -      -
  gptid/f3fa63e8-dddc-11e8-adca-e4c722848f30       -      -      -         -      -      -
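For anyone curious, a recursive snapshot plus zfs send/receive to a scratch pool is one way to do that kind of move; the "scratch" pool and "data" dataset names below are just placeholders, not my actual layout:
Code:
# Snapshot the source dataset and stream it to a temporary pool
zfs snapshot -r RAIDZ2-I/data@migrate
zfs send -R RAIDZ2-I/data@migrate | zfs recv -F scratch/data

# After destroying and re-creating RAIDZ2-I, stream it back
zfs snapshot -r scratch/data@moveback
zfs send -R scratch/data@moveback | zfs recv -F RAIDZ2-I/data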
What puzzles me is that I added 8 disks at a time as RAIDZ2. I was expecting each of those additions to create a single vdev, but I ended up with 4x2, i.e. four 4-disk vdevs (the GUI did show that layout when I was creating them). What gives? I know more vdevs = more performance, but I am confused by how the vdevs were constructed. I did get about the amount of available storage space I was expecting (12 disks' worth out of 16). Strange things are afoot at the Circle-K....
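For reference, what I expected each 8-disk addition to produce was a single 8-wide RAIDZ2 vdev, i.e. something like this from the CLI (the daN device names are just placeholders for the gptid partitions FreeNAS actually uses):
Code:
# One 8-wide raidz2 vdev at pool creation, and a second one added later
zpool create RAIDZ2-I raidz2 da0 da1 da2 da3 da4 da5 da6 da7
zpool add RAIDZ2-I raidz2 da8 da9 da10 da11 da12 da13 da14 da15

# Instead the GUI built four 4-wide raidz2 vdevs, as the listing above shows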