apt runs so slowly inside a jail? I'm trying to install wireguard on an Ubuntu Jammy jail, and the progress is glacial; I have another SSH session connected to the host, and its CPU usage is virtually idle.

Strange indeed; apt install wireguard has been running for something like 45 minutes, and is still at 20% - the apt process barely goes up from 0% CPU, and then only for a brief interval, every few minutes. Average load is under 1.0, on a 4-core CPU.

P.S.: monitoring from a second SSH shell, with btop
I noticed something similar early on in my testing of jlmkr. It turned out to be DNS timeouts for me. Maybe it's something to consider for you?
I have a pihole docker container running in a jail using macvlan networking, and my TrueNAS Scale system can't talk to it. On my DD-WRT router I push pihole as the primary DNS server on my network via DHCP options, and the rest of my clients can talk to it without issue - but not my TrueNAS. This is related to annoying networking configuration issues - I ran TrueNAS on BSD for years and never had to deal with pesky issues like this. Someone earlier in this thread linked a video to set up a bridge to solve this, but it's not a big deal for me to just use my router for DNS on TrueNAS so I haven't looked into it further (using pihole mostly for ad blocking, etc)
Anyway, one of the first things I noticed was super slow apt updates and installs inside jails, and after investigating further, found it to be DNS lookup timeouts. I had statically set the TrueNAS nameserver to what everything else on my network uses - the pihole - and TrueNAS couldn't talk to it, would time out, and then talk to the secondary nameserver and get a response. I fixed it under Network -> Global Configuration - I changed the nameserver to the router (the same IP as the default route for me), and things immediately improved.
May not be your problem, but thought it might be useful to suggest looking in this direction just in case.
-D
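For anyone hitting the same symptom, a quick way to check from inside the jail whether the resolver is the bottleneck (the hostname is just an example; use whatever your sources list points at):

```shell
# Which nameservers does the jail actually use?
grep '^nameserver' /etc/resolv.conf

# Time a single lookup. Several seconds of wall-clock time with almost no
# CPU use is the signature of a resolver timing out before falling back.
time getent hosts archive.ubuntu.com
```

If the lookup hangs for seconds before succeeding, the first nameserver in the list is likely unreachable from the jail, as described above.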
I know this project is new(ish) to Scale and more testing/feedback is needed, but I can say that my initial experience has been extremely positive. As of this past weekend I finalized testing the changes needed to feel comfortable ditching ESXi altogether in my home environment. A couple of learning moments along the way: when using TN Scale as a guest under ESXi, creating a bridge failed until I found this post about modifying the network security settings of the host. Obviously TN Scale on bare metal bypasses that. The only other thing I noticed along the way was that Jailmaker complained about using /mnt/tank/jails as an initial spot to run from and wanted to see a jailmaker directory with jlmkr.py in it. I complied, however when I reboot Scale I have to manually start the jail each time. I'm assuming this has to do with the location of the jail in a different directory, but I'm not sure.
Either way, great work - was able to say goodbye to VMware and Core as well as embrace Scale on bare metal and couldn't be happier. It's been a dream. Thank you!
Did you add a startup task to run jlmkr.py startup? And I think you need to set startup=1 in the config for the jail too.
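Putting those two suggestions together, a minimal sketch (the dataset path and jail name here are assumptions; substitute your own):

```
# In the jail's config file, e.g. /mnt/tank/jailmaker/jails/<name>/config
startup=1
```

Then add a Post Init command under System Settings -> Advanced -> Init/Shutdown Scripts that runs jlmkr.py from wherever your jailmaker directory lives, e.g.:

```
/mnt/tank/jailmaker/jlmkr.py startup
```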
Tailscale requires CAP_NET_ADMIN capability to be enabled. I found I also had to enable CAP_NET_RAW in order for Tailscale to work.

Thank you, it's worth investigating.
Also, I'm not sure of the security implications, but I've been running my systemd-nspawn jails with systemd_nspawn_user_args=--capability=CAP_NET_ADMIN
- Tailscale refuses to run otherwise (though it may be simply because I'm not very well versed in this feature, and thus haven't found a safer workaround).
Tailscale requires CAP_NET_ADMIN capability to be enabled. I found I also had to enable CAP_NET_RAW in order for Tailscale to work.
I stopped using the official TrueNAS app for Tailscale and switched to running it in an nspawn container. It was the last remaining TN app I was running, and the drop in resource usage after unsetting the pool in Kubernetes was very noticeable.
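Combining the two capability reports above, the jail config line would look something like this (a sketch, not verified on every setup; systemd-nspawn's --capability= option takes a comma-separated list, and whether CAP_NET_RAW is also needed may depend on your Tailscale configuration):

```
systemd_nspawn_user_args=--capability=CAP_NET_ADMIN,CAP_NET_RAW
```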