someone1
Dabbler
- Joined: Jun 17, 2013
- Messages: 37
Hello!
Background Info:
- I'm running on TrueNAS-12.0-U2.1
- I just got 6x8TB disks to make a new RaidZ2 pool with.
- I wanted to use encryption for the new pool, so I enabled it while creating the pool using the GUI
- I plan to migrate my existing 5x4TB RaidZ1 pool to this new pool
- Old Pool: MegaVol
- New Pool: GigaVol
Code:
$ zfs get all GigaVol
...
GigaVol  encryption            aes-256-gcm  -
GigaVol  keylocation           prompt       local
GigaVol  keyformat             hex          -
GigaVol  pbkdf2iters           0            default
GigaVol  encryptionroot        GigaVol      -
GigaVol  keystatus             available    -
GigaVol  special_small_blocks  0            default
As I can't replicate the pool wholesale due to an encrypted-root error, I am migrating dataset by dataset as follows:
Code:
# Shut down all plugins, VMs, jails, services, shares, etc.
$ zfs snapshot -r MegaVol@migrate
$ zfs list -t snapshot -r -o name MegaVol | grep migrate
# Generate a list of zfs send/receive commands to run. Example:
$ zfs send -p MegaVol/iocage@migrate | pv | zfs receive -F -o encryption=on GigaVol/iocage
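The command-generation step above could be sketched as a small helper like the one below. This is a hypothetical sketch, not the exact script used: `gen_migrate_cmds` is an invented name, the pool names are taken from the post, and dataset names are assumed to contain no spaces.

```shell
#!/bin/sh
# Hypothetical helper: read snapshot names on stdin and print one
# "zfs send | zfs receive" command per child dataset of MegaVol.
gen_migrate_cmds() {
    grep '@migrate' | grep '/' | while read -r snap; do
        child="${snap#MegaVol/}"    # drop the pool-name prefix
        child="${child%@migrate}"   # drop the snapshot suffix
        printf 'zfs send -p %s | pv | zfs receive -F -o encryption=on GigaVol/%s\n' \
            "$snap" "$child"
    done
}

# In practice the input would come from:
#   zfs list -t snapshot -r -o name -H MegaVol | gen_migrate_cmds
# Self-contained demo with a sample listing:
printf '%s\n' 'MegaVol@migrate' 'MegaVol/iocage@migrate' | gen_migrate_cmds
```

The second `grep '/'` deliberately skips the pool's root dataset, which can't be received into `GigaVol/` with this naming scheme and would need separate handling.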
This is working great, except that every dataset in GigaVol is larger than its counterpart in MegaVol. Even blank datasets are ~1% bigger, and most datasets (so far) are 10-13% larger:
Code:
$ zfs get logicalused,logicalreferenced,used,referenced,compressratio,recordsize,compression,encryption,encryptionroot,usedbydataset,usedbychildren MegaVol/iocage
NAME            PROPERTY           VALUE  SOURCE
MegaVol/iocage  logicalused        8.13G  -
MegaVol/iocage  logicalreferenced  7.92M  -
MegaVol/iocage  used               5.64G  -
MegaVol/iocage  referenced         14.7M  -
MegaVol/iocage  compressratio      1.78x  -
MegaVol/iocage  recordsize         128K   default
MegaVol/iocage  compression        lz4    local
MegaVol/iocage  encryption         off    default
MegaVol/iocage  encryptionroot     -      -
MegaVol/iocage  usedbydataset      14.7M  -
MegaVol/iocage  usedbychildren     5.62G  -
Code:
$ zfs get logicalused,logicalreferenced,used,referenced,compressratio,recordsize,compression,encryption,encryptionroot,usedbydataset,usedbychildren GigaVol/iocage
NAME            PROPERTY           VALUE        SOURCE
GigaVol/iocage  logicalused        7.59G        -
GigaVol/iocage  logicalreferenced  8.64M        -
GigaVol/iocage  used               6.16G        -
GigaVol/iocage  referenced         21.0M        -
GigaVol/iocage  compressratio      1.74x        -
GigaVol/iocage  recordsize         128K         default
GigaVol/iocage  compression        lz4          received
GigaVol/iocage  encryption         aes-256-gcm  -
GigaVol/iocage  encryptionroot     GigaVol      -
GigaVol/iocage  usedbydataset      21.0M        -
GigaVol/iocage  usedbychildren     6.14G        -
I can't figure out why there's such a large difference (5.64G used on the old pool vs 6.16G used on the new pool, roughly 9% more space). The same compression algorithm is in use and the exact same settings/data should exist in each dataset, yet the compression ratios differ. Does encryption (or the pool being RaidZ2 rather than RaidZ1) really add that much space overhead?
Any advice/guidance would be greatly appreciated!