ZFS Replication Pools are not equal after task is finished

Claywd

Cadet
Joined
Dec 25, 2022
Messages
8
So I am planning to move from one pool to a new, larger pool. However, my jails folder is not updated, even though the jails are turned off and the task says finished.

The plan was to keep the replication going until I could schedule some time to swap the pools. It's been running for about a week. I went to confirm the datasets were (mostly) equal in size and found several GB missing from the new pool.



Included screenshots of the pools, the replication task, and the logs.

Not sure if I did something wrong when configuring the replication task, or if it only replicates snapshots and I have some snapshots missing from my system.

 

Attachments

  • 113.txt (304 bytes)
Joined
Oct 22, 2019
Messages
3,641
You shouldn't be storing your media files within the plex jail's dataset itself. Use mountpoints instead.
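Something like the following is the usual shape of that (jail name and paths here are made up, and the syntax is from memory, so check iocage fstab --help on your system):

```shell
# Hypothetical sketch: null-mount a media dataset from the host into the
# plex jail read-only, instead of keeping the files inside the jail's own
# dataset. Adjust jail name and paths to your setup.
iocage fstab -a plex /mnt/tank/media /media nullfs ro 0 0
```

That way the media lives outside the jail's dataset, and replicating or destroying the jail never touches it.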

As for the size discrepancy, it could be missing "intermediate" snapshots that are not part of your "@auto_%Y%m%d_%H%M" Periodic Snapshot Task.
 

Claywd

Cadet
Joined
Dec 25, 2022
Messages
8
You shouldn't be storing your media files within the plex jail's dataset itself. Use mountpoints instead.

As for the size discrepancy, it could be missing "intermediate" snapshots that are not part of your "@auto_%Y%m%d_%H%M" Periodic Snapshot Task.
Good tip.

> As for the size discrepancy, it could be missing "intermediate" snapshots that are not part of your "@auto_%Y%m%d_%H%M" Periodic Snapshot Task.

Where would these have been created? I don't take manual snapshots, and my setup is fairly basic. I can't imagine how these snapshots would have been created if not by my Periodic Snapshot Task.
 
Joined
Oct 22, 2019
Messages
3,641
Where would these have been created? I don't take manual snapshots, and my setup is fairly basic. I can't imagine how these snapshots would have been created if not by my Periodic Snapshot Task.
If not manual snapshots, and if not automatic snapshots (using a different naming scheme), then these are possibly snapshots created for your jails from "update" and "upgrade" operations.

You can check either by visiting the Snapshots page or by running this command on the command line:
Code:
zfs list -r -t snap -o name,used,refer "SSD Storage"/iocage | grep -v auto

The above command will list all snapshots without "auto" in their name.
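To illustrate the filter on some made-up snapshot names:

```shell
# grep -v drops any line containing "auto", leaving the update/upgrade
# and manual snapshots. Sample names below are hypothetical.
printf '%s\n' \
  'tank/jails/plex@auto_20230101_0000' \
  'tank/jails/plex@ioc_update_13.0-RELEASE_2022-10-26' \
  'tank/jails/plex@manual-2022-10-11' \
  | grep -v auto
# Prints:
# tank/jails/plex@ioc_update_13.0-RELEASE_2022-10-26
# tank/jails/plex@manual-2022-10-11
```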

It's also discouraged to use "spaces" in the names of ZFS objects, such as pools, datasets, snapshots, and bookmarks. This could potentially cause issues in the future.
 

Claywd

Cadet
Joined
Dec 25, 2022
Messages
8
It's also discouraged to use "spaces" in the names of ZFS objects, such as pools, datasets, snapshots, and bookmarks. This could potentially cause issues in the future.
Yeah, I know. That's why the new pool doesn't use spaces.

Ah, so these snapshots were created when I updated the plugins, and that's why only the jails folders are missing the intermediate snapshots.

Okay, so now that I know I have snapshots which have not been replicated, what is the right way to remediate this? Do snapshots need to be applied in a particular order, so that I need to rerun a task, or are they atomic, so that I can just replicate them by tacking a command onto the above zfs list ... | grep to fill in the blanks in my ZFS replication?
 
Joined
Oct 22, 2019
Messages
3,641
Okay, so now that I know I have snapshots which have not been replicated, what is the right way to remediate this?
I commented on a bug report / feature request, because of this counter-intuitive way that TrueNAS's replications operate. They are, by design, not able to handle intermediary snapshots.

That's why I don't use the GUI for my replications (even though I prefer the GUI) because I want my intermediary snapshots to be replicated as well.
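For reference, outside the GUI a raw send/receive can carry the whole chain. A rough sketch, with made-up snapshot and target names (the capital -I is the important part; -n -v on the receive side makes it a dry run so nothing is written):

```shell
# -i sends only the delta between two snapshots; -I additionally sends
# every intermediary snapshot between them. Names here are hypothetical.
zfs send -I "SSD Storage/iocage/jails/plex@auto_20221201_0000" \
            "SSD Storage/iocage/jails/plex@auto_20230101_0000" \
  | zfs recv -n -v "plugins/iocage/jails/plex"
```

Drop the -n once the dry run shows what you expect.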

Sorry to say, but that's how TrueNAS was designed (and I would argue they won't fix it because of "fear" they might break something.)

If TrueNAS had a GUI for "Syncoid", it would serve as a practical and simplified backup solution, which doesn't require tethering replication tasks to snapshot tasks or "parseable" snapshot names.
 

Claywd

Cadet
Joined
Dec 25, 2022
Messages
8
So what I have so far is this YAML file, set up for use with the following zettarepl command. zettarepl is what the GUI uses under the hood.

Code:
zettarepl run --once replication.yaml


replication.yaml
Code:
replication-tasks:
  src:
    # Either push or pull
    direction: push

    # Transport option defines remote host to send/receive snapshots. You
    # can also just specify "local" to send/receive snapshots to localhost
    transport:
      type: local

    # Source dataset (a ZFS dataset name, not a /mnt path; quote names
    # containing spaces instead of backslash-escaping them)
    source-dataset: "SSD Storage"
    # Target dataset
    target-dataset: plugins

    # Or you can specify multiple source datasets, e.g.:
    # source-dataset:
    #   - data/src/work
    #   - data/src/holiday/summer
    # They would be replicated to data/dst/work and data/dst/holiday/summer

    # "recursive" and "exclude" work exactly like they work for periodic
    # snapshot tasks
    recursive: true
    exclude:
      - data/src/excluded

    # Send dataset properties along with snapshots. Enabled by default.
    # Disable this if you use custom mountpoints and don't want them to be
    # replicated to remote system.
    properties: true

    # When sending properties, exclude these properties
    properties-exclude:
      - mountpoint

    # When sending properties, override these properties
    properties-override:
      compression: gzip-9

    # Send a replication stream package, which will replicate the specified filesystem, and all descendent file systems.
    # When received, all properties, snapshots, descendent file systems, and clones are preserved.
    # You must have recursive set to true, exclude to empty list, properties to true. Disabled by default.
    replicate: false

    # # Use the following encryption parameters to create a target dataset.
    # encryption:
    #   # Encryption key
    #   key: "0a0b0c0d0e0f"
    #   # Key format. Can be "hex" or "passphrase"
    #   key-format: "hex"
    #   # Path to store encryption key.
    #   # A special value "$TrueNAS" will store the key in TrueNAS database.
    #   key-location: "/data/keys/dataset.key"

    # You must specify at least one of the following two fields for push
    # replication:

    # List of periodic snapshot tasks ids that are used as snapshot sources
    # for this replication task.
    # "recursive" and "exclude" fields must match between replication task
    # and all periodic snapshot tasks bound to it, i.e. you can't do
    # recursive replication of non-recursive snapshots and you must
    # exclude all child snapshots that your periodic snapshot tasks exclude
    # periodic-snapshot-tasks:

    # List of naming schemas for snapshots to replicate (in addition to
    # periodic-snapshot-tasks, if specified). Use this if you want to
    # replicate manually created snapshots.
    # As any other naming schema, this must contain all of "%Y", "%m",
    # "%d", "%H" and "%M". You won't be able to replicate snapshots
    # that can't be parsed into their creation dates with zettarepl.
    also-include-naming-schema:
      - "SSD Storage/iocage/jails/homebridge@ioc_update_12.3-RELEASE_2022-10-11_16-54-5"
      - "SSD Storage/iocage/jails/homebridge@ioc_update_12.3-RELEASE_2022-10-11_21-20-3"
      - "SSD Storage/iocage/jails/homebridge@ioc_plugin_upgrade_2022-10-12"
      - "SSD Storage/iocage/jails/homebridge@ioc_update_12.3-RELEASE_2022-12-23_16-46-1"
      - "SSD Storage/iocage/jails/homebridge/root@ioc_update_12.3-RELEASE_2022-10-11_16"
      - "SSD Storage/iocage/jails/homebridge/root@ioc_update_12.3-RELEASE_2022-10-11_21"
      - "SSD Storage/iocage/jails/homebridge/root@ioc_plugin_upgrade_2022-10-12"
      - "SSD Storage/iocage/jails/homebridge/root@ioc_update_12.3-RELEASE_2022-12-23_16"
      - "SSD Storage/iocage/jails/nextcloud@ioc_update_12.2-RELEASE_2022-10-11_14-43-59"
      - "SSD Storage/iocage/jails/nextcloud@ioc_update_12.2-RELEASE_2022-10-11_16-38-46"
      - "SSD Storage/iocage/jails/nextcloud@ioc_plugin_upgrade_2022-10-11"
      - "SSD Storage/iocage/jails/nextcloud@ioc_update_13.1-RELEASE_2022-11-11_22-03-32"
      - "SSD Storage/iocage/jails/nextcloud@ioc_update_13.1-RELEASE_2022-12-16_18-21-46"
      - "SSD Storage/iocage/jails/nextcloud/root@ioc_update_12.2-RELEASE_2022-10-11_14-"
      - "SSD Storage/iocage/jails/nextcloud/root@ioc_update_12.2-RELEASE_2022-10-11_16-"
      - "SSD Storage/iocage/jails/nextcloud/root@ioc_plugin_upgrade_2022-10-11"
      - "SSD Storage/iocage/jails/nextcloud/root@ioc_update_13.1-RELEASE_2022-11-11_22-"
      - "SSD Storage/iocage/jails/nextcloud/root@ioc_update_13.1-RELEASE_2022-12-16_18-"
      - "SSD Storage/iocage/jails/nginx@ioc_update_12.3-RELEASE_2022-10-11_16-54-48"
      - "SSD Storage/iocage/jails/nginx@ioc_update_12.3-RELEASE_2022-10-11_17-57-05"
      - "SSD Storage/iocage/jails/nginx@ioc_update_12.3-RELEASE_2022-10-11_17-57-16"
      - "SSD Storage/iocage/jails/nginx@ioc_update_12.3-RELEASE_2022-10-11_21-19-01"
      - "SSD Storage/iocage/jails/nginx@ioc_update_12.3-RELEASE_2022-10-11_21-19-58"
      - "SSD Storage/iocage/jails/nginx@ioc_update_12.3-RELEASE_2022-10-11_21-20-18"
      - "SSD Storage/iocage/jails/nginx@ioc_update_12.3-RELEASE_2022-10-11_21-26-58"
      - "SSD Storage/iocage/jails/nginx@ioc_update_12.3-RELEASE_2022-10-11_21-30-45"
      - "SSD Storage/iocage/jails/nginx@manual-2022-10-11_23-31"
      - "SSD Storage/iocage/jails/nginx/root@ioc_update_12.3-RELEASE_2022-10-11_16-54-4"
      - "SSD Storage/iocage/jails/nginx/root@ioc_update_12.3-RELEASE_2022-10-11_17-57-0"
      - "SSD Storage/iocage/jails/nginx/root@ioc_update_12.3-RELEASE_2022-10-11_17-57-1"
      - "SSD Storage/iocage/jails/nginx/root@ioc_update_12.3-RELEASE_2022-10-11_21-19-0"
      - "SSD Storage/iocage/jails/nginx/root@ioc_update_12.3-RELEASE_2022-10-11_21-19-5"
      - "SSD Storage/iocage/jails/nginx/root@ioc_update_12.3-RELEASE_2022-10-11_21-20-1"
      - "SSD Storage/iocage/jails/nginx/root@ioc_update_12.3-RELEASE_2022-10-11_21-26-5"
      - "SSD Storage/iocage/jails/nginx/root@ioc_update_12.3-RELEASE_2022-10-11_21-30-4"
      - "SSD Storage/iocage/jails/plex@ioc_update_12.2-RELEASE_2022-04-18_18-55-53"
      - "SSD Storage/iocage/jails/plex@ioc_update_12.2-RELEASE_2022-04-23_16-04-50"
      - "SSD Storage/iocage/jails/plex@ioc_update_12.2-RELEASE_2022-04-23_16-13-27"
      - "SSD Storage/iocage/jails/plex@ioc_update_12.2-RELEASE_2022-04-23_16-47-29"
      - "SSD Storage/iocage/jails/plex@ioc_update_12.2-RELEASE_2022-04-23_16-50-47"
      - "SSD Storage/iocage/jails/plex@ioc_update_12.2-RELEASE_2022-08-22_09-04-26"
      - "SSD Storage/iocage/jails/plex@ioc_update_12.2-RELEASE_2022-08-22_09-34-12"
      - "SSD Storage/iocage/jails/plex@ioc_plugin_upgrade_2022-10-12"
      - "SSD Storage/iocage/jails/plex@ioc_update_12.2-RELEASE_2022-10-11_17-47-43"
      - "SSD Storage/iocage/jails/plex@ioc_update_12.2-RELEASE_2022-10-11_17-47-51"
      - "SSD Storage/iocage/jails/plex@ioc_update_12.2-RELEASE_2022-10-11_23-20-24"
      - "SSD Storage/iocage/jails/plex@ioc_update_13.0-RELEASE_2022-10-26_22-21-17"
      - "SSD Storage/iocage/jails/plex@ioc_update_13.0-RELEASE_2022-12-27_23-05-54"
      - "SSD Storage/iocage/jails/plex/root@ioc_update_12.2-RELEASE_2022-04-18_18-55-53"
      - "SSD Storage/iocage/jails/plex/root@ioc_update_12.2-RELEASE_2022-04-23_16-04-50"
      - "SSD Storage/iocage/jails/plex/root@ioc_update_12.2-RELEASE_2022-04-23_16-13-27"
      - "SSD Storage/iocage/jails/plex/root@ioc_update_12.2-RELEASE_2022-04-23_16-47-29"
      - "SSD Storage/iocage/jails/plex/root@ioc_update_12.2-RELEASE_2022-04-23_16-50-47"
      - "SSD Storage/iocage/jails/plex/root@ioc_update_12.2-RELEASE_2022-08-22_09-04-26"
      - "SSD Storage/iocage/jails/plex/root@ioc_update_12.2-RELEASE_2022-08-22_09-34-12"
      - "SSD Storage/iocage/jails/plex/root@ioc_plugin_upgrade_2022-10-12"
      - "SSD Storage/iocage/jails/plex/root@ioc_update_12.2-RELEASE_2022-10-11_17-47-43"
      - "SSD Storage/iocage/jails/plex/root@ioc_update_12.2-RELEASE_2022-10-11_17-47-51"
      - "SSD Storage/iocage/jails/plex/root@ioc_update_12.2-RELEASE_2022-10-11_23-20-24"
      - "SSD Storage/iocage/jails/plex/root@ioc_update_13.0-RELEASE_2022-10-26_22-21-17"
      - "SSD Storage/iocage/jails/plex/root@ioc_update_13.0-RELEASE_2022-12-27_23-05-54"
      - "SSD Storage/iocage/releases/12.3-RELEASE/root@nginx"

    # If true, replication task will run automatically either after bound
    # periodic snapshot task or on schedule
    auto: false

    # Same crontab-like schedule used to run the replication task.
    # Required when you don't have any bound periodic snapshot task but
    # want replication task to run automatically on schedule. Otherwise,
    # overrides bound periodic snapshot task schedule.
    # schedule:
    #   minute: "0"

    # How to delete snapshots on target. "source" means "delete snapshots
    # that are no more present on source", more policies documented
    # below
    retention-policy: source


Of course, this didn't fix the difference in size between the new pool and the old pool for my jails either... so now I'm a bit unsure of the right way to fix this. There has to be a way to replicate these snapshots to the new pool.
 

Claywd

Cadet
Joined
Dec 25, 2022
Messages
8
If the answer is that I need to copy the data using something besides the ZFS replication tools outlined here, then is it possible to backfill instead of overwriting? I'm using SSDs, so I don't want to just rewrite almost 4 TB of data to the new pool.
 

Claywd

Cadet
Joined
Dec 25, 2022
Messages
8
Also, if zettarepl can't handle intermediate snapshots, then I suppose I should be using a different tool for replicating datasets to backup pools. Anybody recommend Sanoid?
 
Joined
Oct 22, 2019
Messages
3,641
Anybody recommend sanoid?
Syncoid, part of the Sanoid suite, works great, and I've used it... but not in TrueNAS.

It's scriptable, and takes its own snapshot on demand before sending anything. You can also choose whether to use "-i" or "-I"; the latter includes intermediary snapshots. It's entirely decoupled from any other tasks. (It handles its own sends, snapshot creation, and snapshot pruning.)
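From memory (verify the flags against the Sanoid documentation for your installed version), a recursive run looks something like this; the source and target names are made up:

```shell
# Replicate a dataset tree; by default syncoid sends the full snapshot
# chain (zfs send -I style). Adding --no-stream would send only the
# newest snapshot (zfs send -i style). Names are hypothetical.
syncoid --recursive "SSD Storage/iocage" plugins/iocage
```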

Problem is, it's not included with TrueNAS (likely never will be), and you shouldn't try to install packages within TrueNAS itself, due to its appliance nature (and you'll lose it after a reboot.) It might even cause unintended breakages in other places that confuse the middleware.
 