
petervg

Dabbler
Joined
Sep 26, 2021
Messages
11
Hi guys,

I have a pool with 1.67 TB of data (a 2-drive mirror). I want to switch from CORE to SCALE, so I bought some new drives (identical to the ones I already have) and replicated the pool to a new pool. What I did was:
- take a manual snapshot of the source pool (recursive)
- replicate the pool to the new drives (checked full filesystem replication, added "manual-%Y-%m-%d_%H-%M" and "auto-%Y-%m-%d_%H-%M" to make sure the correct naming schemes are included) and set the snapshot retention policy to "same as source"
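
For reference, my understanding is that the CLI equivalent of what the replication task did is roughly this (pool and snapshot names below are just placeholders, not my actual ones):

zfs snapshot -r tank@manual-2022-03-20_10-00
zfs send -R tank@manual-2022-03-20_10-00 | zfs recv -F newtank

The -R on the send side is supposed to include all descendant datasets and their snapshots up to the named snapshot.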

I compared the replicated pool with the original one using FreeFileSync (only checked filenames and dates), and apart from a few TrueNAS-related system files (mainly logs) I cannot spot any relevant discrepancy between the two pools.

But... the source pool is 1.67 TB whereas the replicated pool is only 1.28 TB (so almost 400 GB of data missing???).

So I tried to investigate in more detail:
One of the datasets, called "databasebackups", where I store some backups, is 303.24 GB (1.23 compression ratio) on the source pool and only 106.28 GB (1.33 compression ratio) on the replicated pool. I used FreeFileSync again to compare not only the files but also their contents. Not a single difference, even though the replicated dataset is only about a third the size of the original.
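
I guess the CLI way to see the same numbers would be something like this (tank and newtank are placeholders for my actual pool names):

zfs list -o name,used,referenced,compressratio tank/databasebackups
zfs list -o name,used,referenced,compressratio newtank/databasebackups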

This question has been asked before on the forum (April 17, 2021) but it was never answered, so I'm still a bit puzzled why there is such a huge difference in size.

I did notice that I did not replicate the periodic snapshots, but from what I understood, snapshots take up virtually no disk space. The periodic snapshots are taken once per week, and this server has only been running for 6 months...

Does anybody have some more insight as to where this discrepancy in pool size could come from?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Periodic snapshots DO take up space. Any change to the pool won't be reflected in pre-existing snapshots (aka the periodic ones). The amount of space is directly related to the amount of changes.

For example, say you have a 1 GB file in your pool. A periodic snapshot comes along and "saves" the file. You then delete the file from the pool. But the space won't be released until the last snapshot that references that 1 GB file is removed.
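
If you want to see this in action, here is a rough sketch you could run against a scratch dataset (tank/test is a made-up name, use something disposable):

dd if=/dev/urandom of=/mnt/tank/test/bigfile bs=1M count=1024
zfs snapshot tank/test@keep
rm /mnt/tank/test/bigfile
zfs list -o name,used,usedbysnapshots tank/test

After the rm, the dataset's used space barely drops, because the @keep snapshot still pins the 1 GB of blocks.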

You can get a hint of how much space your snapshots are taking up with this command:

zfs list -t snapshot -r

There is probably a way to do this from the GUI, but I learned ZFS back in the Solaris days.
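
Also, any reasonably recent ZFS has a space-oriented view that breaks down, per dataset, how much is pinned by snapshots, (the USEDSNAP column):

zfs list -r -o space poolname

Replace poolname with the name of your pool.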
 

petervg

Dabbler
Joined
Sep 26, 2021
Messages
11
@Arwen: thanks for the feedback! I was indeed wrong about the snapshot disk size; they do take up a significant amount of disk space. I haven't yet figured out exactly how to interpret this (adding up the used space of the snapshots is still significantly less than what I expected), but you did point me in the right direction.
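
One thing I think I've figured out already: the USED value of a single snapshot only counts the blocks unique to that snapshot, so blocks shared between several snapshots aren't charged to any one of them, which would explain why summing the column comes up short. From what I've read, a dry-run destroy over a whole range of snapshots reports the combined reclaimable space correctly, something like (the snapshot names are placeholders for my real ones):

zfs destroy -nv tank/databasebackups@auto-2021-10-03_00-00%auto-2022-03-20_00-00

The -n makes it a dry run (nothing actually gets destroyed) and -v prints the total that would be reclaimed.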

Seems I have some reading to do :smile:
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
ZFS snapshots, clones, and alternate boot environments are great features. But they all require some thought in how you use them.

Hope your reading goes well.
 