Replicating iocage jails for disaster recovery

FloRho

Cadet
Joined
Feb 2, 2021
Messages
6
Hi,
I am trying to replicate the whole iocage dataset from one TrueNAS system to another.

In case of a complete failure of the main TrueNAS, I want to be able to run the jails on the second one.

Replication works fine, but when I try to select the Plugin and Jail Storage pool on the second TrueNAS, I get the following error: [EFAULT] Failed to activate Pool-1: ZFS pool "Pool-1" root dataset is locked.

Can someone help me replicate the jails in such a way that, in case of an emergency, I can use them on another TrueNAS?

Thanks,
Florian
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Can someone help me replicate the jails in such a way that, in case of an emergency, I can use them on another TrueNAS?
You may notice, if you run zfs get all Pool-1/iocage on your backup server, that the readonly property is probably on.

The way to do it would be to set all of the iocage datasets to not be readonly and then activate the pool... you may need to export the jails on the source and later import them when bringing up the replica so that they appear in the GUI.

You won't be able to continue with replication if you modify the property on the target side, though, so you would only do this when it's actually needed.
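In shell terms, the steps above might look roughly like this. This is only a sketch: Pool-1 and myjail stand in for your actual pool and jail names, and running it breaks further incremental replication to that target, as noted above.

```shell
# 1. Clear readonly on the replicated iocage tree. Children may have
#    received their own readonly=on, so check recursively afterwards:
zfs set readonly=off Pool-1/iocage
zfs get -r readonly Pool-1/iocage

# 2. Activate the pool for iocage on the backup system:
iocage activate Pool-1

# 3. If the jails still don't appear in the GUI, export them on the
#    source beforehand and import the archives on the replica:
#      (on the source)   iocage export myjail
#      (on the replica)  iocage import myjail
```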
 

FloRho

Cadet
Joined
Feb 2, 2021
Messages
6
Hi,
thank you for your response.

You may notice, if you run zfs get all Pool-1/iocage on your backup server, that the readonly property is probably on.
No, it isn't set as readonly:
Code:
root@LittleAlice-1[~]# zfs get all Pool-1/iocage
NAME           PROPERTY                VALUE                   SOURCE
Pool-1/iocage  type                    filesystem              -
Pool-1/iocage  creation                Wed Apr 21  3:21 2021   -
Pool-1/iocage  used                    58.8G                   -
Pool-1/iocage  available               180G                    -
Pool-1/iocage  referenced              10.8M                   -
Pool-1/iocage  compressratio           1.34x                   -
Pool-1/iocage  mounted                 yes                     -
Pool-1/iocage  quota                   none                    default
Pool-1/iocage  reservation             none                    default
Pool-1/iocage  recordsize              128K                    default
Pool-1/iocage  mountpoint              /mnt/Pool-1/iocage      default
Pool-1/iocage  sharenfs                off                     default
Pool-1/iocage  checksum                on                      default
Pool-1/iocage  compression             lz4                     received
Pool-1/iocage  atime                   on                      default
Pool-1/iocage  devices                 on                      default
Pool-1/iocage  exec                    on                      default
Pool-1/iocage  setuid                  on                      default
Pool-1/iocage  readonly                off                     local
Pool-1/iocage  jailed                  off                     default
Pool-1/iocage  snapdir                 hidden                  default
Pool-1/iocage  aclmode                 passthrough             received
Pool-1/iocage  aclinherit              passthrough             received
Pool-1/iocage  createtxg               737                     -
Pool-1/iocage  canmount                on                      default
Pool-1/iocage  xattr                   on                      default
Pool-1/iocage  copies                  1                       local
Pool-1/iocage  version                 5                       -
Pool-1/iocage  utf8only                off                     -
Pool-1/iocage  normalization           none                    -
Pool-1/iocage  casesensitivity         sensitive               -
Pool-1/iocage  vscan                   off                     default
Pool-1/iocage  nbmand                  off                     default
Pool-1/iocage  sharesmb                off                     default
Pool-1/iocage  refquota                none                    default
Pool-1/iocage  refreservation          none                    default
Pool-1/iocage  guid                    15886875950319145120    -
Pool-1/iocage  primarycache            all                     default
Pool-1/iocage  secondarycache          all                     default
Pool-1/iocage  usedbysnapshots         840K                    -
Pool-1/iocage  usedbydataset           10.8M                   -
Pool-1/iocage  usedbychildren          58.8G                   -
Pool-1/iocage  usedbyrefreservation    0B                      -
Pool-1/iocage  logbias                 latency                 default
Pool-1/iocage  objsetid                585                     -
Pool-1/iocage  dedup                   off                     default
Pool-1/iocage  mlslabel                none                    default
Pool-1/iocage  sync                    standard                default
Pool-1/iocage  dnodesize               legacy                  default
Pool-1/iocage  refcompressratio        1.10x                   -
Pool-1/iocage  written                 456K                    -
Pool-1/iocage  logicalused             72.3G                   -
Pool-1/iocage  logicalreferenced       8.10M                   -
Pool-1/iocage  volmode                 default                 default
Pool-1/iocage  filesystem_limit        none                    default
Pool-1/iocage  snapshot_limit          none                    default
Pool-1/iocage  filesystem_count        none                    default
Pool-1/iocage  snapshot_count          none                    default

I know that I cannot continue the replication once I change something on the second NAS; that's okay, as I am currently just testing the replication.

This is the complete error log:
Code:
Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/jail_freebsd.py", line 1459, in activate
    iocage.activate(pool['name'])
  File "/usr/local/lib/python3.8/site-packages/iocage_lib/iocage.py", line 368, in activate
    ioc_common.logit(
  File "/usr/local/lib/python3.8/site-packages/iocage_lib/ioc_common.py", line 107, in logit
    callback(content, exception)
  File "/usr/local/lib/python3.8/site-packages/iocage_lib/ioc_common.py", line 80, in callback
    raise callback_exception(message)
RuntimeError: ZFS pool "Pool-1" root dataset is locked

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 137, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self,
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1206, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1110, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 977, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/jail_freebsd.py", line 1461, in activate
    raise CallError(f'Failed to activate {pool["name"]}: {e}')
middlewared.service_exception.CallError: [EFAULT] Failed to activate Pool-1: ZFS pool "Pool-1" root dataset is locked


Is there a better option for replicating the iocage datasets?

Florian
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You may need to activate the pool first (before replicating the iocage datasets) to avoid that lock.
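In other words, the suggested order of operations on the backup system would be roughly as follows (an untested sketch; the pool name is taken from this thread):

```shell
# 1. Activate the pool for iocage while it is still empty, so the
#    Pool-1/iocage tree is created by iocage itself:
iocage activate Pool-1

# 2. Only then set up and run the replication task from the source,
#    sending the iocage datasets into Pool-1/iocage on this system.
```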
 