Is it possible to allow a jail to see all host pools and datasets?

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
1) TCP transport without the SSH overhead. My servers are connected over a WireGuard tunnel. TrueNAS does have SSH+netcat, which I suspect is somewhat similar, but meh.
Why "meh"? SSH for the control channel with strong authentication, netcat for low-overhead TCP. That does exactly what you require ...

2) Automatic retries. Again, TrueNAS might do this, but I cannot see any documentation that it does.
If the connection to my remote system goes down, e.g. for a day or two, TrueNAS will just continue and re-transmit all intermediate snapshots once it comes back up. The only condition is that there needs to be a common snapshot at both ends to use as the base for the differential.
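
(That is just standard ZFS incremental behaviour: as long as a common base snapshot exists on both sides, everything in between can be re-sent in one stream. A rough sketch of the underlying commands, with made-up dataset and host names:)

Code:
# Hypothetical example: tank/data@base exists on both the source and the backup.
# -I sends all intermediate snapshots between @base and @latest in one stream.
zfs send -I tank/data@base tank/data@latest | \
    ssh backup-host zfs receive backup/data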

3) Resumable sends and receives. As this is a ZFS feature, I'm sure TrueNAS does this, but again it isn't documented.
Don't know, honestly.

4) Automatic bookmark & hold management for guaranteed incremental send & recv. Let's say I want to snapshot every minute and send the smallest amount each minute. Realistically I only want to store minute snapshots for an hour. If there is some outage (for example, moving the backup server offsite after seeding locally), then I would want the snapshots to be kept until they are synced. I'm pretty sure TrueNAS doesn't hold snapshots, but I may be wrong.
5) Easier and more flexible snapshot management. Easier to set up age-based fading (grandfathering) of snapshots.
Yeah ... got it :) I keep hourly snapshots for a week, sync with the remote hourly, but keep only one snapshot per day for four weeks at the remote. TrueNAS can do this with standard configuration via the UI.
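
Coming back to point 4 above: the ZFS primitives behind that kind of guarantee are holds and bookmarks. A minimal sketch of the manual equivalents (dataset, tag and snapshot names are made up):

Code:
# A hold prevents a snapshot from being destroyed until it is released.
zfs hold zrepl_keep tank/data@zrepl_20211221
zfs release zrepl_keep tank/data@zrepl_20211221

# A bookmark remembers a snapshot's creation point, so it can still serve as
# the base of an incremental send after the snapshot itself has been destroyed.
zfs bookmark tank/data@zrepl_20211221 tank/data#zrepl_20211221
zfs send -i tank/data#zrepl_20211221 tank/data@zrepl_20211222 | \
    ssh backup-host zfs receive backup/data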

6) Cross-platform tool.
TrueNAS uses zettarepl, which supposedly is cross-platform:

Kind regards,
Patrick
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
From a cursory glance a while back, zettarepl gave me something of a “not invented here” vibe. I wonder if anyone has a more informed opinion around here.
 

scott2500uk

Dabbler
Joined
Nov 17, 2014
Messages
37
So, to give a brief update: I have been able to run zrepl on the host system at both ends by installing the zrepl plugin and then stopping the jail, as we just need the binaries.

Here is my first testing config; I had to add a few more global settings than normal to set the run paths.

Local system config /mnt/tank/iocage/jails/zrepl/root/usr/local/etc/zrepl/zrepl.yml:

Code:
global:
  logging:
    - type: "stdout"
      level: "error"
      format: "human"
    - type: "syslog"
      level: "info"
      format: "logfmt"
  control:
    # Control socket used by "zrepl status" etc.; placed under the jail root
    # because the daemon runs on the host but uses the plugin's directory layout.
    sockpath: /mnt/tank/iocage/jails/zrepl/root/var/run/zrepl/control
  serve:
    stdinserver:
      sockdir: /mnt/tank/iocage/jails/zrepl/root/var/run/zrepl/stdinserver

jobs:
- name: sunfish_to_sturgeon
  type: push
  connect:
    type: tcp
    address: "10.0.0.2:8888"
  filesystems: {
    "tank/photography": true,
  }
  snapshotting:
    type: periodic
    prefix: zrepl_
    interval: 10m
  pruning:
    keep_sender:
    # Never prune snapshots that have not been replicated yet, plus the last 10.
    - type: not_replicated
    - type: last_n
      count: 10
    keep_receiver:
    # Keep everything for 1h, then one per hour for 24h, one per day for 30 days,
    # one per 30 days for 6 intervals; only snapshots matching ^zrepl_.
    - type: grid
      grid: 1x1h(keep=all) | 24x1h | 30x1d | 6x30d
      regex: "^zrepl_"

My backup server config /mnt/tank/iocage/jails/zrepl/root/usr/local/etc/zrepl/zrepl.yml:

Code:
global:
  logging:
    - type: "stdout"
      level: "error"
      format: "human"
    - type: "syslog"
      level: "info"
      format: "logfmt"
  control:
    sockpath: /mnt/tank/iocage/jails/zrepl/root/var/run/zrepl/control
  serve:
    stdinserver:
      sockdir: /mnt/tank/iocage/jails/zrepl/root/var/run/zrepl/stdinserver

jobs:
- name: sink
  type: sink
  serve:
    type: tcp
    listen: ":8888"
    # Allow binding even if the address is not configured yet at daemon start.
    listen_freebind: true
    # Map each allowed client IP to an identity; received datasets land
    # under root_fs/<identity>, e.g. tank/Sunfish/...
    clients: {
      "10.0.0.1": "Sunfish"
    }
  root_fs: "tank"

To begin with, I'm just testing with one of our datasets, which is small at about 125GiB. The initial snapshot was transferred at an average speed of 2.5Gbps over a 10Gbps direct connection between the two servers using jumbo frames. The tank pool is two six-disk RAIDZ2 vdevs using 12x12TB mechanical disks. I would have expected a better speed, but I can totally live with 2.5Gbps for the initial seed. Once the backup is offsite it will be limited to 1Gbps, and the transfers will be very small incremental amounts throughout the day.
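
If it helps anyone following along, zrepl's own status view can be used to watch a run like this, and a job can be triggered without waiting for its interval. A sketch, assuming the CLI honours --config for the non-default control socket above:

Code:
CONF=/mnt/tank/iocage/jails/zrepl/root/usr/local/etc/zrepl/zrepl.yml
zrepl --config "$CONF" status                             # live per-job progress view
zrepl --config "$CONF" signal wakeup sunfish_to_sturgeon  # trigger the push job now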

On the receiving end, it is set up as fan-in, so it stores the received datasets like this:

tank/$HOST/$POOL/$DATASET
(screenshot of the resulting dataset tree on the backup pool)

I would prefer to be able to control the exact source-to-destination mapping, but it's not the end of the world.

I found out that the replicated datasets are not mounted at the other end. That is fine, but whatever you do, don't mount them; otherwise you change the remote dataset and the backup cannot continue due to diverged data. If you want to look inside a dataset to check on it, clone a snapshot and mount that instead.
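
For the record, inspecting a backup that way looks roughly like this on the backup server (the snapshot and clone names are only examples):

Code:
# Clone a received snapshot read-only instead of mounting the received dataset.
zfs clone -o readonly=on -o mountpoint=/mnt/inspect \
    tank/Sunfish/tank/photography@zrepl_20211221_120000 tank/inspect_photography
# ... poke around under /mnt/inspect ...
zfs destroy tank/inspect_photography   # throw the clone away afterwards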

What I don't fully understand yet:
Both ends have their pools encrypted; encryption was set up under TrueNAS 12 when creating the pools.
It seems like the data on the backup server is not encrypted...
Does that mean that, if the disks were yanked into another system, the backed-up datasets could be read without the main pool's decryption key?
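
One way to check what actually landed on the backup pool rather than guessing (run on the backup server; the dataset name is just an example):

Code:
# encryption shows the cipher (or "off"), encryptionroot shows which dataset
# holds the key, and keystatus shows whether a key is currently loaded.
zfs get -r encryption,encryptionroot,keystatus tank/Sunfish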

A few other things I need to work out are:
1) How does the pruning/snapshotting interact with snapshots created by the TrueNAS GUI, where they are prefixed auto_?
2) The best way to make sure the zrepl daemon is run and kept running on both systems at boot. Suggestions welcome.
3) Notifications of progress and errors. At a minimum, we need emails if there are any errors and a weekly digest of what has happened. zrepl can log to syslog and stderr, so I wonder if the TrueNAS default notifications could pick it up.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
2) The best way to make sure the zrepl daemon is run and kept running on both systems at boot. Suggestions welcome.
Startup script.
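
One possible shape for that, assuming the jail-root paths from earlier in the thread; daemon(8) restarts zrepl if it exits and writes a supervisor pidfile so it can be stopped again:

Code:
# Sketch of a host-side post-init command / startup script.
/usr/sbin/daemon -f -r -P /var/run/zrepl_supervisor.pid \
    /mnt/tank/iocage/jails/zrepl/root/usr/local/bin/zrepl \
    --config /mnt/tank/iocage/jails/zrepl/root/usr/local/etc/zrepl/zrepl.yml daemon
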
3) Notifications of progress and errors. At a minimum, we need emails if there are any errors and a weekly digest of what has happened. zrepl can log to syslog and stderr, so I wonder if the TrueNAS default notifications could pick it up.
I don't think so, but you can do some shell trickery in the startup script to capture the output and email it daily. Emailing immediately on errors is going to be trickier.
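
A crude sketch of that kind of trickery, assuming zrepl is logging to syslog as configured above (log path, match pattern, and recipient are placeholders):

Code:
#!/bin/sh
# Run daily from cron: mail any zrepl error lines found in the syslog file.
LOG=/var/log/messages
RECIPIENT=admin@example.com
TMP=$(mktemp) || exit 1
grep -i 'zrepl' "$LOG" | grep -i 'error' > "$TMP"
if [ -s "$TMP" ]; then
    mail -s "zrepl errors on $(hostname)" "$RECIPIENT" < "$TMP"
fi
rm -f "$TMP"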
 
Joined
Oct 22, 2019
Messages
3,641
1) How does the pruning/snapshotting interact with snapshots created by the TrueNAS GUI, where they are prefixed auto_?

You mean to use TrueNAS's built-in auto snapshot task, in conjunction with zrepl in a jail?

I'd be very, very careful, as TrueNAS uses zettarepl (in-house software) to manage auto snapshots, pruning, and replications (all via the GUI). It's based on parsing the snapshot names and comparing them against existing/enabled tasks.

This can lead to unintentionally destroyed snapshots.
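
One way to sanity-check that the two tools keep to their own snapshots before letting any pruning loose (a sketch; the dataset name matches the config above, the prefixes are assumptions):

Code:
# The zrepl pruning rules above only match ^zrepl_, and the GUI tasks use an
# auto prefix; listing both side by side makes an overlap easy to spot.
zfs list -t snapshot -o name -r tank/photography | grep -E '@(zrepl_|auto)'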

See this post for further elaboration:
 