Replication via Tailscale?

Fish

Contributor
Joined
Jun 4, 2015
Messages
108
Hi all,

I've got 2 TrueNAS Core systems running the latest version; we'll call them MainServer and RemoteServer. My goal is to set up a replication task from MainServer to RemoteServer so I have an offsite backup.

I'd like to use Tailscale to connect the 2 machines so I don't have to worry about punching holes in firewalls, etc. Here is the setup I'm thinking of:

1. Install Tailscale in a jail on RemoteServer as a Subnet Router and advertise the IP of the TrueNAS host (192.168.40.5/32)
2. Drop the Tailscale binary on MainServer in /root/ (so it doesn't get eaten during an upgrade) and set up an init script to start the daemon on boot.

Then I think I should be able to point a MainServer replication task at 192.168.40.5 and the system should be able to route packets via the Tailscale tunnel.

Is there anything I'm missing here? Any easier way to do this?
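For reference, step 2 might look something like this (a sketch only; the tailscaled flags and file locations are assumptions on my part, not a tested setup):

```shell
#!/bin/sh
# Hypothetical post-init script for MainServer, added via
# Tasks -> Init/Shutdown Scripts (When = Post Init).
# The binary lives in /root/ so it survives upgrades; the state
# file path is an assumption, adjust to taste.
/root/tailscaled \
    --state=/root/tailscaled.state \
    --socket=/var/run/tailscaled.sock \
    >> /var/log/tailscaled.log 2>&1 &
```

After the daemon is up, a one-time `/root/tailscale up` would still be needed to authenticate the node to the tailnet.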
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
It's simple enough to test: just see if you can SSH from MainServer to RemoteServer on that IP after you do the two things you propose.

If that works, the replication will also work.
 

loremia

Cadet
Joined
Apr 23, 2023
Messages
6
I'm doing the exact same thing with 2 TrueNAS Scale systems, but I'm using the official Tailscale App instead of a jail.
I can SSH from my laptop to both systems, but I am not able to establish an SSH connection between the two. I don't really understand what I am doing wrong.

Did you manage to get the task working?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Using what address?

loremia

Cadet
Joined
Apr 23, 2023
Messages
6
Using what address?
Any! I really tried all the combinations. I think there is something to set up in Kubernetes.

Why that?
From my laptop I can ping all the devices inside the Tailscale network, but I cannot do it from the TrueNAS-A and TrueNAS-B shells.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
what is your output from netstat -pr with the tunnel running?
 

loremia

Cadet
Joined
Apr 23, 2023
Messages
6
what is your output from netstat -pr with the tunnel running?
From my TrueNAS Scale A

Screenshot 230512 144305.png
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
So there's nothing in the routing table to catch any packets coming from your NAS and push them into the Kubernetes container that runs Tailscale.

To be fair, the container was designed to support routing for other containers (already inside the kubernetes network), so that's not shocking.

So perhaps it can be made to work by advertising the route of your subnet from the tailscale client (192.168.100.0/24), which is done in your account at tailscale.com (after adding that to the "Advertise Routes" list in the app config).

After you do that and restart the app, does the routing table change?
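To make the routing-table point concrete, here is a toy illustration in Python (the table entries below are invented to mirror the situation, not taken from the screenshot):

```python
import ipaddress

# Invented routing-table entries: (destination CIDR, interface).
# There is no entry pointing at the Tailscale container, which is
# the problem described above.
routes = [
    ("0.0.0.0/0", "eno1"),         # default route to the LAN gateway
    ("192.168.100.0/24", "eno1"),  # the local subnet
]

def route_for(ip, routes):
    """Longest-prefix match: which route catches packets for ip?"""
    addr = ipaddress.ip_address(ip)
    matches = [(ipaddress.ip_network(net), dev)
               for net, dev in routes
               if addr in ipaddress.ip_network(net)]
    return max(matches, key=lambda m: m[0].prefixlen, default=None)

# Packets for a tailnet peer (100.x.y.z) fall through to the default
# route and never reach the tunnel:
print(route_for("100.4.5.6", routes))
```

Until a route for the tailnet (or the remote subnet) shows up in netstat -pr, everything bound for 100.x.y.z goes out the default gateway and dies there.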
 

loremia

Cadet
Joined
Apr 23, 2023
Messages
6
So there's nothing in the routing table to catch any packets coming from your NAS and push them into the Kubernetes container that runs Tailscale.

To be fair, the container was designed to support routing for other containers (already inside the kubernetes network), so that's not shocking.

So perhaps it can be made to work by advertising the route of your subnet from the tailscale client (192.168.100.0/24), which is done in your account at tailscale.com (after adding that to the "Advertise Routes" list in the app config).

After you do that and restart the app, does the routing table change?
I advertised my NAS with 192.168.100.10/32 and added the route from the tailscale.com dashboard, but unfortunately nothing changed.

This is how I set up the App
Screenshot 230512 145118.png
 

loremia

Cadet
Joined
Apr 23, 2023
Messages
6
So there's nothing in the routing table to catch any packets coming from your NAS and push them into the Kubernetes container that runs Tailscale.

To be fair, the container was designed to support routing for other containers (already inside the kubernetes network), so that's not shocking.

So perhaps it can be made to work by advertising the route of your subnet from the tailscale client (192.168.100.0/24), which is done in your account at tailscale.com (after adding that to the "Advertise Routes" list in the app config).

After you do that and restart the app, does the routing table change?
OK, maybe I found the problem.
I will try unchecking the Userspace variable when I'm back home, and then I will let you know.
Screenshot 230512 154423.png
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
That does look like a pretty good explanation of what's happening.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Also, you put the Advertise Routes in the wrong place (although what you did may have worked)... it should be at the top, not in the environment variables.
 

loremia

Cadet
Joined
Apr 23, 2023
Messages
6
Also, you put the Advertise Routes in the wrong place (although what you did may have worked)... it should be at the top, not in the environment variables.
As we predicted, after unchecking the Userspace entry everything worked perfectly (without even advertising routes!).
For completeness, I'll note that the same thing is doable with ZeroTier, but Tailscale has much more functionality and I could not give up the exit node function.

Thank you for the support! :)
 

Alma11

Cadet
Joined
Jul 27, 2023
Messages
4
Hi all,

I've got 2 TrueNAS Core systems running the latest version; we'll call them MainServer and RemoteServer. My goal is to set up a replication task from MainServer to RemoteServer so I have an offsite backup.

I'd like to use Tailscale to connect the 2 machines so I don't have to worry about punching holes in firewalls, etc. Here is the setup I'm thinking of:

1. Install Tailscale in a jail on RemoteServer as a Subnet Router and advertise the IP of the TrueNAS host (192.168.40.5/32)
2. Drop the Tailscale binary on MainServer in /root/ (so it doesn't get eaten during an upgrade) and set up an init script to start the daemon on boot.

Then I think I should be able to point a MainServer replication task at 192.168.40.5 and the system should be able to route packets via the Tailscale tunnel.

Is there anything I'm missing here? Any easier way to do this?
Hi Fish (and anyone else interested),

So I am doing the same thing: two TrueNAS CORE systems (TrueNAS-A and TrueNAS-B), and I want to perform replication from A to B over the internet.

Both systems have Tailscale installed in a jail. Both on the same Tailscale account. Both systems advertise their local route.

That said, every time I attempt to perform replication from TrueNAS-A to TrueNAS-B over Tailscale, the two TrueNAS systems cannot see each other.

Did you have any luck figuring this out on TrueNAS CORE a few months ago? Any advice would be appreciated.
 

Fish

Contributor
Joined
Jun 4, 2015
Messages
108
@Alma11 Sure, here's how I did it. Be advised that this method is a bit janky and there are definitely better ways to set this up.

This is just a quick notes dump on my setup, if I get time I'll do a full blog post in the future.

I have 2 machines setup with TrueNAS - I'll call them Primary and Backup. Backup is intended to be installed in a remote loation, possibly behind any number of firewalls/NAT layers such that accessing it directly is not easily done. Backup has a jail called tailscale in which I have installed and logged into Tailscale. I have also set a static IP on the jail which will be important later. The LAN IP of Backup is 10.40.1.248.

On my home network where Primary lives, I have a machine with Tailscale installed that acts as a Subnet Router (in tailscale terminology). Therefore Primary is accessible across my Tailnet via its LAN IP 10.10.1.153.

So now we can access Primary from within the Tailscale jail, but what we need is for TrueNAS on Backup to be able to access Primary during a Replication Task. To solve this, I used Nginx as a reverse proxy on ports 22 and 443. There are other tools much better suited for this task, such as using OpenSSH as a forwarding proxy or socat, but this was a quick solution and allows me to forward both SSH and HTTPS traffic to the web interface (since otherwise we have no way of accessing the Web UI on Backup).

The nginx config at /usr/local/etc/nginx/nginx.conf in the Tailscale jail looks like this:

Code:
load_module /usr/local/libexec/nginx/ngx_stream_module.so;
daemon on;
events {}
pid nginx.pid;

http {
    server {
        listen 443 ssl;
        ssl_certificate /etc/ssl/certs/nginx.crt;
        ssl_certificate_key /etc/ssl/private/nginx.key;
        ssl_prefer_server_ciphers on;
        location / {
            proxy_pass https://10.40.1.248:443;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_ssl_protocols TLSv1.3;
            proxy_ssl_ciphers DEFAULT;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}

stream {
    server {
        listen 22;
        proxy_pass 10.10.1.153:22;
    }
}


In order to reverse proxy SSH traffic, the ngx_stream_module is required. I can't remember if I installed it as a separate package or if the pkg version of nginx included it. You may have to build the port to enable the module.

In Backup, I setup an SSH Connection:
  • Host: <tailscale Jail IP>
  • User: root
then setup a Replication Task:
  • direction: Pull
  • SSH Connection: <name from previous step>
 

Alma11

Cadet
Joined
Jul 27, 2023
Messages
4
Hi Fish,

To start—thank you very much for taking the time to share your setup. It provided good direction. I wanted to attempt some of this myself before responding, to wrap my head around some of the steps. Just adding a 'Caution' for anyone new reading this: the steps outlined below are still a work in progress. I don't want to fool anyone into thinking it's a 'working solution'. With that said, I tried to document exactly what was attempted. A lot of this is new to me, so if I am doing something wrong, specific detail is appreciated (both incorrect steps and small tweaks you might have made are welcome). Orange text indicates specific questions, but I also want to verify throughout that I am doing individual steps correctly, or whether I am missing something. By the way, this is best viewed on desktop (not mobile).


DEFINITIONS

For the sake of discussion, I kept the same terminology and IPs you stated earlier for 'Primary' and 'Backup'. That said, I expanded them to also include a made-up Tailscale JAIL IP (the local router-assigned jail address; DHCP in my case, but static in your recommendation), and also a made-up Tailscale IP (the address assigned by the Tailscale application, in the format 100.x.y.z).

  • Primary: TrueNAS LAN IP 10.10.1.153, Tailscale JAIL IP 10.10.1.160, Tailscale IP 100.1.2.3

  • Backup: TrueNAS LAN IP 10.40.1.248, Tailscale JAIL IP 10.40.1.161, Tailscale IP 100.4.5.6

OVERALL THOUGHTS

Overall, this is how I view/interpret what has to be done. Primary and Backup both need Tailscale installed in a jail (and both must be on the same Tailscale account). Both machines need to advertise their own TrueNAS IP route.

In a Replication Task (created on PRIMARY) between Primary and Backup, we have to input a ‘Source’ (Primary, where the data currently resides) and a ‘Destination Location’ (Backup, where we want to replicate the data to). The goal is to use Primary’s TrueNAS LAN IP as the ‘Source’ (the TrueNAS IP of 10.10.1.153, which can’t be edited from within a Replication Task), and enter the Backup’s TrueNAS LAN IP (10.40.1.248) into the ‘Destination Location’ SSH connection.

Going off your writeup, in order to achieve this we need a reverse proxy (Nginx) and an SSH connection. Here is the primary place I am stuck (where to install what, and whether I am missing steps in my installation). My thought was to install Nginx and edit the config file on PRIMARY (so Primary could associate with Backup’s SSH connection). Then set up an SSH connection on BACKUP (I was thinking this had to be configured on Backup for Primary to SSH into Backup). Then go back to PRIMARY and create a Replication Task (Source=PRIMARY_TrueNAS_LAN_IP and Destination=BACKUP_TrueNAS_LAN_IP_via_SSH). Thoughts?

In this way, data would be ‘pushed’ from Primary to Backup (which I prefer over a ‘pull’, if possible....so everything, other than an SSH connection to Backup, can be done on Primary). In the end, Primary Pool data is dumped to Backup Pool via Tailscale and SSH.

STEPS & QUESTIONS

A. On Primary (Tailscale + Nginx)
  1. I installed Tailscale in a Jail (set up with DHCP—you did static, but for now I assumed that as long as the IP does not change, DHCP would be OK). This was made a subnet router, with an advertised route of the Primary LAN IP (tailscale up --advertise-routes=10.10.1.153/32), and the route approved via the Tailscale admin console.
  2. Inside the Tailscale Jail, nginx was installed. Nginx was tested by typing in ifconfig, getting the epair0b inet IP—and typing it in a browser (which showed the ‘Welcome to nginx!’ screen).
  • pkg update
  • pkg install nginx nano python
  • sysrc nginx_enable=yes
  • pkg install py39-certbot openssl
  • service nginx onestart
3. Although installed above, I didn’t use certbot… it looks like you would have to create a domain, which I was hoping to avoid if possible (this is new territory for me, so I don't understand enough to know what is or is not required). Thoughts?
4. I opened the Tailscale Jail shell and typed the following commands to open and edit the nginx.conf file.
  • cd /usr/local/etc/nginx/
  • nano nginx.conf
5. Inside the .conf file is where I found the http{---server{…}---} and stream{---server{…}---} sections. I deleted everything and pasted in your provided nginx.conf (with my equivalent IP addresses... maybe this was a mistake?). Note, in answer to your question: I found ngx_stream_module.so in the directory you listed in your .conf code, so it must be installed automatically with nginx. I saved and exited nano via CTRL+S and CTRL+X.
6. Code (copy of your above post, with some comments)​
Code:
load_module /usr/local/libexec/nginx/ngx_stream_module.so;
daemon on;
events {}
pid nginx.pid;

http {
    server {
        listen 443 ssl;
        ssl_certificate /etc/ssl/certs/nginx.crt; #HOW DID YOU GET THIS?
        ssl_certificate_key /etc/ssl/private/nginx.key; #HOW DID YOU GET THIS?
        ssl_prefer_server_ciphers on;
        location / {
            proxy_pass https://10.40.1.248:443; #BACKUP LAN IP, PORT 443
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_ssl_protocols TLSv1.3;
            proxy_ssl_ciphers DEFAULT;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}

stream {
    server {
        listen 22;
        proxy_pass 10.10.1.153:22; #PRIMARY LAN IP, PORT 22
    }
}


7. In the above ssl_certificate and ssl_certificate_key, did you generate the nginx.crt and nginx.key somehow, or was that automatic with the nginx install? I ask because I couldn’t find those in the jail folders and am wondering if I am missing a step to obtain them.

B. On Backup (Tailscale + SSH Connection)
  1. I installed Tailscale in a Jail (set up with DHCP—you did static). This was made a subnet router, with an advertised route of the Backup LAN IP (tailscale up --advertise-routes=10.40.1.248/32), and the route approved via the Tailscale admin console.
  2. Via TrueNAS menus, I navigated to System, SSH Connection, and Add.
  • TrueNAS URL (Backup Tailscale JAIL IP?): https://10.40.1.161
  • Username: root
  • Password: *Make up a new one, or use the TrueNAS root password?*
  • Private Key: Generate new key.
  • Note that when trying to save this, the connection did NOT work; I just stated what I tried for completeness.
C. On Primary (Replication Task)
  1. Open TrueNAS Tasks, Replication Task, and Add.
  • Source Location: On this System (Primary)
  • Destination Location: *SSH into Backup at TrueNAS IP (10.40.1.248).*
  • Note that this is what I planned on doing if the above SSH had worked.

Apologies for the long writeup—but I decided to lean on specifics to best communicate where I am at and avoid confusion. Any thoughts, corrections, etc. are welcome and appreciated. Thanks again for the direction and advice; I look forward to getting this working.
 

Alma11

Cadet
Joined
Jul 27, 2023
Messages
4

@Fish -- Forgot to reference your post (and can't seem to edit a comment)--adding here.
 

Alma11

Cadet
Joined
Jul 27, 2023
Messages
4
Hey @Fish, thanks for the link. Unfortunately it covers TrueNAS SCALE, and I am on CORE. So close! So I still have to fiddle with figuring this out on CORE. Did you switch to SCALE after all?
 

Fish

Contributor
Joined
Jun 4, 2015
Messages
108
@Alma11 I'm having a hard time tracking what questions you have with the inline comments but I think they boil down to:

  • Should I install certbot and where do I generate SSL certificates?
    • I didn't do any SSL stuff because both machines connect over WireGuard. The ssl_certificate lines you see in the nginx config just point to whatever self-signed certs nginx came with when I installed it.
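If a fresh nginx install doesn't come with usable certs, a self-signed pair can be generated by hand. A sketch (the paths and CN below are placeholders, not what my jail actually uses):

```shell
# Generate a throwaway self-signed cert/key pair for nginx to serve.
# Paths are relative here for illustration; point the
# ssl_certificate / ssl_certificate_key lines at wherever they end up.
mkdir -p certs private
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout private/nginx.key -out certs/nginx.crt \
    -subj "/CN=backup.example"
# Inspect what was created:
openssl x509 -in certs/nginx.crt -noout -subject
```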
I don't really have the capacity to help you troubleshoot your setup. Feel free to open a new thread in this forum and lay out where you are and what's not working.
 