Zerotier surviving reboots


Jul 17, 2019

I played with a new FreeNAS setup yesterday and ran into the issue already described various times in these forums: after a reboot, the joined networks are lost and the ID of the zerotier client changes.

Why does that happen?
Zerotier stores its runtime data in /var/db/zerotier-one. Changes in this directory are lost after a reboot because /var is mounted as tmpfs:
tmpfs on /var (tmpfs, local)

The steps needed to solve this are these:
  • Create a dataset that will hold the zerotier runtime data
  • Make a nullfs-mount to the runtime directory at boot time
  • Restart the zerotier daemon after the nullfs-mount

If you already have zerotier running on your FreeNAS box, make sure to stop the daemon first; it will not like having its data juggled around:
service zerotier stop

Navigate to System ⯈ Tunables
Remove the zerotier_enable rc.conf entry if you already have one. Our start script will set it instead: if zerotier started before the mountpoint is available, it would create new keys at each boot. Enabling the service only after the mountpoint exists prevents this.

Creating the dataset
My configuration uses a zpool named zpool. Make sure to adapt this to your instance.

Navigate to Storage ⯈ Pools
Click on ⋮⯈ Add Dataset next to zpool
Enter zerotier as name. All other options can usually be inherited.
Click Save

The dataset is now mounted at /mnt/zpool/zerotier

Moving existing configuration there
If you already had zerotier running, you now need to move the existing configuration onto the newly created dataset:
mv /var/db/zerotier-one/* /mnt/zpool/zerotier/
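If you prefer to script this step, here is a minimal sketch with source and destination as parameters (move_zt_state is a made-up name). ZeroTier keeps the node's keypair in identity.secret, so checking for that file confirms the identity survived the move:

```shell
# Hypothetical helper: move zerotier's state directory and verify that
# identity.secret (the keypair that defines the node ID) arrived.
move_zt_state() {
  src=$1
  dst=$2
  mv "$src"/* "$dst"/ || return 1
  # if this file is lost, the daemon generates a new identity on next start
  [ -f "$dst/identity.secret" ]
}
```

Called as move_zt_state /var/db/zerotier-one /mnt/zpool/zerotier after stopping the daemon; if it returns non-zero, investigate before restarting zerotier.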

Create the start script
The start script will run at boot time to mount the dataset in the right place (I chose not to modify the default mountpoint of the dataset):
cat >/mnt/zpool/zerotier/ <<EOF
#!/bin/sh
# mount the persistent dataset over the zerotier runtime directory,
# then enable and (re)start the daemon
mkdir -p /var/db/zerotier-one
/sbin/mount_nullfs /mnt/zpool/zerotier /var/db/zerotier-one
sysrc zerotier_enable=YES
/usr/sbin/service zerotier restart
EOF
chmod +x /mnt/zpool/zerotier/

Run the start script at boot time
Navigate to Tasks ⯈ Init/Shutdown Scripts
Click ADD
Select Type: Script
Enter Script: /mnt/zpool/zerotier/
Select When: Pre Init
Make sure Enabled is checked.
Click Save

Now it's time to either reboot to verify that the script runs at boot, or simply run the start script by hand.

Verify and connect
To verify that the mounts are correct and zerotier is running:
mount |grep zerotier
# zpool/zerotier on /mnt/zpool/zerotier (zfs, local, nfsv4acls)
# /mnt/zpool/zerotier on /var/db/zerotier-one (nullfs, local)
zerotier-cli info
# 200 info 00XXXXXXXX 1.2.12 ONLINE
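If you want to check the status from a script, the info line parses easily with awk. The line below is a sample stand-in for real zerotier-cli info output (fields: code, command, node ID, version, status):

```shell
# sample line standing in for the output of: zerotier-cli info
info_line='200 info 0123456789 1.2.12 ONLINE'
node_id=$(echo "$info_line" | awk '{print $3}')
status=$(echo "$info_line" | awk '{print $5}')
echo "node $node_id is $status"
```

In a real check you would pipe zerotier-cli info into awk directly instead of using a sample variable.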

If you did not have zerotier running before, you can now join a network:
zerotier-cli join <network-id>
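Network IDs are 16 hexadecimal digits, so a quick sanity check before joining can catch copy-paste mistakes. The ID below is a made-up placeholder:

```shell
nwid=0123456789abcdef   # placeholder, not a real network ID
# ZeroTier network IDs are exactly 16 hex digits
if echo "$nwid" | grep -Eq '^[0-9a-fA-F]{16}$'; then
  echo "network id looks valid"
else
  echo "not a valid network id" >&2
fi
```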

Please let me know if this guide works for you or if you run into any issues.

That's all,


Interesting, and thanks for writing this up--I've been frustrated that the devs seem to be satisfied with such a half-baked implementation of zerotier. But I'm wondering if this couldn't be simplified by setting the mountpoint property of the zerotier dataset to be /var/db/zerotier. If so, it would save the need to run the post-init script, as it would automatically mount to the right place on pool import.


Jul 17, 2019
@danb35 I chose not to modify the default mountpoint here, as I don't know how that might interfere with FreeNAS's expectations. I also use the start script to create a bridge and add the zerotier interface to it:
ifconfig bridge1 create
ifconfig bridge1 addm zxxxxxxxxxxxxxx

After adding vnet1:bridge1 to VNET-enabled jails, this gives jails direct zerotier access with a single zerotier instance running. It's just important to enable bridging on the ZeroTier interface and to assign a static IP in the jail.

Caution: zerotier uses an MTU of 2800, so the zerotier interface must be the first one added to the bridge. The script must also run pre-init.
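The bridge setup above can be wrapped into the start script as a small function. This is only a sketch using FreeBSD ifconfig syntax, with zxxxxxxxxxxxxxx standing in for the real zerotier interface name (the function is defined here, not run):

```shell
# sketch: the bridge creation as done in my start script (FreeBSD ifconfig);
# zxxxxxxxxxxxxxx stands in for the actual zerotier interface name
setup_zt_bridge() {
  ifconfig bridge1 create
  # add the zerotier interface first, per the MTU 2800 caution above
  ifconfig bridge1 addm zxxxxxxxxxxxxxx
}
```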