reorg data

bumann

Dabbler
Joined
Jan 25, 2018
Messages
15
Hi,

I upgraded my FreeNAS from one 2x4TB (mirror) HDD vdev to four 2x4TB (mirror) HDD vdevs.
Before the upgrade, the 2x4TB volume was 95% used.

Is it possible to reorganize the data on the volume?

Here is the output of zpool iostat -v:

Code:
root@freenas10g:~ # zpool iostat -v
                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot                            1.02G  73.5G      0      0    478    511
  ada0p2                                1.02G  73.5G      0      0    478    511
--------------------------------------  -----  -----  -----  -----  -----  -----
pool4TB2x4                              3.53T  11.0T     79     94  7.55M  1.73M
  mirror                                3.49T   139G     79      5  7.54M  98.8K
    gptid/8f32429a-1eb8-11e8-a292-002590c1a044      -      -     37      3  3.77M   101K
    gptid/90654c23-1eb8-11e8-a292-002590c1a044      -      -     37      3  3.77M   101K
  mirror                                14.2G  3.61T      0     26  2.24K   491K
    gptid/bf1227d3-2adf-11e9-a62a-002590c1a044      -      -      0      7  1.09K   493K
    gptid/bff56701-2adf-11e9-a62a-002590c1a044      -      -      0      7  1.19K   493K
  mirror                                13.3G  3.61T      0     27  1.38K   505K
    gptid/00e8c2f3-2ae0-11e9-a62a-002590c1a044      -      -      0      7    695   508K
    gptid/01ebecc3-2ae0-11e9-a62a-002590c1a044      -      -      0      7    749   508K
  mirror                                11.2G  3.61T      0     21  1.78K   351K
    gptid/3255589c-2ae0-11e9-a62a-002590c1a044      -      -      0      7    915   354K
    gptid/332b0da4-2ae0-11e9-a62a-002590c1a044      -      -      0      7    938   354K
logs                                        -      -      -      -      -      -
  mirror                                15.9M   137G      0     12      0   327K
    gptid/70a2032d-2ae5-11e9-a62a-002590c1a044      -      -      0     12      4   327K
    gptid/710f27ca-2ae5-11e9-a62a-002590c1a044      -      -      0     12      4   327K
cache                                       -      -      -      -      -      -
  gptid/4e49e31f-2ae5-11e9-a62a-002590c1a044  16.5G   216G      0      0  25.9K  68.9K
--------------------------------------  -----  -----  -----  -----  -----  -----

root@freenas10g:~ #


I use the volume for Proxmox KVM storage via an NFS mount.

Do you think such a reorg is worth doing for performance?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
As new data is written to the pool, it will be striped across all vdevs. Old data that is not changed will remain only in its original location.
Since ZFS is copy-on-write, you can make it redistribute the data by making a copy. The copy will be written to a new location that is striped across all vdevs, then the original can be deleted.
Easily done, and it will probably be a performance improvement.
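A minimal sketch of the copy-then-delete approach using snapshot replication, assuming the VM images live in a dataset called pool4TB2x4/vmstore (a hypothetical name; substitute your actual dataset). Stop the NFS clients/VMs before the final swap so no writes are lost:

```shell
# 1. Snapshot the source dataset (hypothetical dataset name).
zfs snapshot pool4TB2x4/vmstore@rebalance

# 2. Replicate it to a new dataset. The received blocks are freshly
#    written, so ZFS stripes them across all four mirror vdevs.
zfs send pool4TB2x4/vmstore@rebalance | zfs recv pool4TB2x4/vmstore-new

# 3. After verifying the copy, destroy the original and rename the
#    new dataset into its place.
zfs destroy -r pool4TB2x4/vmstore
zfs rename pool4TB2x4/vmstore-new pool4TB2x4/vmstore

# 4. Check the per-vdev allocation to confirm the rebalance.
zpool iostat -v pool4TB2x4
```

A plain copy (cp or rsync to a new directory, then delete the original) achieves the same thing; send/recv just preserves dataset properties and is easy to verify before you delete anything.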
 