NFS for configuration data?

ptyork

Dabbler
Joined
Jun 23, 2021
Messages
32
Pain point: I want to store VM-hosted application configuration data in shared directories on TrueNAS for reliability and ease of migration. Some apps apparently store data in database files (SQLite or similar, I assume) that seem incompatible with NFS sharing. These apps lock up or crash unless the config directories live on a local file system, so for now the config data is stored on individual VM disks.

Hoping for some expert advice on this.

First, is there some non-default NFS configuration option that can make this type of data access work reliably on remote storage?

Second, I've not tried SMB. I'm just assuming it'll be similar if not worse, since I've heard SMB is generally less performant for Linux-to-Linux sharing. Is this actually the case, or should I explore SMB?

If not, are there best practices for this scenario? Use rsync to sync a ZFS directory to a VM-local directory (something like the sketch below)? Some other elegant hack using iSCSI or VirtFS?
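To make the rsync idea concrete, here's roughly what I'm imagining (paths are made up, and I haven't tested this):

# Pull the authoritative copy from the TrueNAS NFS export to local storage,
# run the app against the local directory, then push changes back.
rsync -a --delete /mnt/truenas/appconfig/sonarr/ /var/lib/sonarr/
# ... app runs against /var/lib/sonarr ...
rsync -a --delete /var/lib/sonarr/ /mnt/truenas/appconfig/sonarr/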

TIA.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
NFS will never be reliable for storing databases, due to the mismatch between the record-level (byte-range) locking databases need and the crappy file-level locking NFS provides. SMB is somewhat better with file locking, but still no good for databases, for the same reason. Just search here for all the trials and tribulations people have had trying to run Access reliably off a share.
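If you want to see the failure mode for yourself, here's a quick reproduction sketch (hypothetical mount path, assuming the sqlite3 CLI is installed):

# Hammer one SQLite file from two concurrent writers over the NFS mount.
sqlite3 /mnt/nfs-share/test.db "CREATE TABLE IF NOT EXISTS t (x INTEGER);"
for i in $(seq 1 500); do sqlite3 /mnt/nfs-share/test.db "INSERT INTO t VALUES ($i);"; done &
for i in $(seq 1 500); do sqlite3 /mnt/nfs-share/test.db "INSERT INTO t VALUES ($i);"; done
wait
# On local storage both loops finish cleanly; over NFS you'll typically see
# "database is locked" errors, hangs, or even corruption, because SQLite's
# fcntl() byte-range locks aren't reliably honored across NFS.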

iSCSI will work, because it appears as a local drive to the VM.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
If the majority of your applications can use e.g. MySQL or MariaDB, you can deploy a central database server and save all the data there. On CORE that can easily be done in a jail. On SCALE there is probably an App for that. SQLite doesn't have a client-server mode over the network; that's probably why it's called "lite" ;-)
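For the apps that support it, the setup is straightforward. A minimal sketch (host name, database, user, and password are placeholders): create a per-app database on the central server, then point each application at it.

# Create a database and user for one app on the central MariaDB server.
mysql -h dbserver.local -u root -p <<'SQL'
CREATE DATABASE app1;
CREATE USER 'app1'@'%' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON app1.* TO 'app1'@'%';
FLUSH PRIVILEGES;
SQL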
 

ptyork

Dabbler
Joined
Jun 23, 2021
Messages
32
Thanks for the replies. Sadly, the apps (Plex and Sonarr are the ones I can't work around) don't seem to have options for a real database server. They're locked into SQLite.

Noob Q: Is there any solution that allows a normal ZFS directory to be presented as a virtual block device? Looks like VirtFS might do that when/if supported, but any option over TCP/IP? iSCSI and Gluster both seem like they would work, but I think both require dedicated volumes/extents.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Noob Q: Is there any solution that allows a normal ZFS directory to be presented as a virtual block device?
What do you mean by that? A block device is by definition a range of blocks, i.e. a "virtual disk". The VM puts its own filesystem data structures on top of that. You can put something like this into a single file on ZFS or use a ZVOL. But on the host system it will always be a single "blob" of blocks.
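For illustration, a sketch of the ZVOL route (pool and dataset names are made up):

# Create a 20 GiB ZVOL to back a VM disk or iSCSI extent.
zfs create -V 20G tank/vms/app1-disk
# On the host it appears as /dev/zvol/tank/vms/app1-disk; the VM or iSCSI
# initiator sees a raw disk and lays its own filesystem on top.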

There is to my knowledge no technology that allows a directory on the host to be mounted as a directory inside the VM while keeping full POSIX semantics (i.e. local storage properties) inside the VM. Best is not to use a VM at all but a container technology that allows local file system mounts. Both Docker & Co. and FreeBSD jails can do that (see the sketch below). It's a bit easier to orchestrate with jails for my taste.
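To show what I mean by a local mount into a container, a minimal Docker example (image and paths are just illustrative):

# Bind-mount a dataset on the host directly into the container; the app sees
# an ordinary local filesystem with full POSIX locking semantics.
docker run -d --name sonarr \
  -v /mnt/tank/appconfig/sonarr:/config \
  linuxserver/sonarr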
 

ptyork

Dabbler
Joined
Jun 23, 2021
Messages
32
Sorry, yes. "Block device" is the wrong term; what I'm after is a remotely mountable FS that behaves close enough for this scenario. Just Googling... maybe it's got to implement the 9P protocol? Something like diod, maybe? I don't know enough to know even what to look for, really.

Agreed on the use of containers. But I won't always want the containers hosted on the device that hosts the files, hence the desire to have the files shared across the wire.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Just Googling...maybe it's got to implement the 9P protocol? Something like diod, maybe? I don't know enough to know even what to look for, really.
You are looking for the right thing, but all of these are cutting-edge rather than established technologies. So to rephrase: there is to my knowledge no established technology (read: with a track record like NFS or SMB) that preserves local file system semantics over the wire.

Of course you can get that with any kind of block device; you weren't wrong about that either. iSCSI does work like a local disk. But you will end up with a single object on the server instead of a number of files.

Could you try to rephrase your "user story", as that is commonly called today?

What's the problem with a block device "blob"? You can put the virtual disk of your VM on a zvol, then snapshot, roll back, replicate, and archive that. Snapshots and replication only transfer differentials, so the amount of storage needed is of the same order of magnitude as with a file-based incremental backup. Plus, because it is ZFS, everything is checksummed and known to be consistent.
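A sketch of that workflow, assuming a zvol at tank/vms/app1-disk and a backup host reachable over SSH:

# Take an hourly snapshot of the zvol backing the VM.
zfs snapshot tank/vms/app1-disk@hourly-0100
# Send only the delta since the previous snapshot to the backup system.
zfs send -i tank/vms/app1-disk@hourly-0000 tank/vms/app1-disk@hourly-0100 | \
  ssh backuphost zfs recv backup/vms/app1-disk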

I am perfectly fine with treating my VM disks as single unstructured objects and using the mechanisms outlined above. The only drawback is that a partial restore, as opposed to a full rollback, requires more effort: you need to boot another VM with the correct snapshot version of the virtual disk and then extract the files from within the guest (sketched below). But apart from that?
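For that partial-restore case, the usual trick (names illustrative) is to clone the snapshot so the original zvol stays untouched:

# Clone the snapshot to a new zvol, attach it to a throwaway VM, copy the
# needed files out from inside the guest, then destroy the clone.
zfs clone tank/vms/app1-disk@hourly-0100 tank/vms/app1-restore
zfs destroy tank/vms/app1-restore   # when done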

I run hourly snapshots of all my VMs and keep them for a week, replicate them locally from SSD mirror to HDD RAIDZ2 for a week, additionally replicate one of the snapshots every 24 hours to a remote system and keep them for four weeks ... ZFS is just amazing.

HTH,
Patrick
 

ptyork

Dabbler
Joined
Jun 23, 2021
Messages
32
Thanks, Patrick. That's very helpful. I probably have an unhealthy desire to maintain "universal" access to my files. It just doesn't need to happen in this case. Storing a few files and a large SQLite blob as a single, slightly larger blob isn't a big deal. I need to separate data from my main VM disk, but I CAN create small iSCSI extents for each app and store the data there. Not my "ideal", but the data does remain portable.

SO, I created the iSCSI share and followed the directions here (https://www.truenas.com/docs/scale/shares/) to connect my Ubuntu VM. A bit of trial and error and I got it working. BUT, it's all disconnected on reboot. I edited fstab and added the _netdev option, but the device (/dev/sdb1) is just not there after reboot; I have to run through the manual iscsiadm --login sequence. I'm missing something needed to reinitiate the login on boot, presumably by editing some conf file somewhere that I just can't seem to find documented.
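Best guess at what I'm missing, from poking around the open-iscsi docs (untested; the target IQN and portal IP below are placeholders for mine):

# Mark the node for automatic login so iscsid reconnects it at boot...
sudo iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:app1 -p 192.168.1.10 \
  --op update -n node.startup -v automatic
# ...and make sure the iSCSI services themselves are enabled at boot.
sudo systemctl enable iscsid open-iscsi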

Any ideas?
 