I'm playing with TrueNAS Scale to see if I want it to be the basis for the replacement for our home storage/VM server. However, I appear to have faceplanted quite early on, and I'm looking for advice as to whether this is my own dumb fault or whether this is a limitation in Scale itself.
Since whatever I choose will stick for a long time (experience suggests 5-6 years or so), I want to get a solid feel for it before committing. I also want to try out the app catalogue to see if there's much advantage over fully virtualising each function, and to gain experience with Scale ahead of possibly using it at work.
I should state that I've been using KVM-hosted VMs for the better part of a decade, first on CentOS and later on Ubuntu, so I'm fairly familiar with the concepts involved. I'm mainly looking to evaluate whether I should go with the likes of Scale for simpler management of storage, focus on the VM side of things with Proxmox, or learn nothing new and just go with another Ubuntu server system.
In terms of first impressions, the storage side of things looks about the same as it's always been in FreeNAS/Core, so I'm not particularly worried. I'll get to testing that later.
However, I'm running into an error attempting to spin up my first VM for testing. I figured I'd try installing Ubuntu Server 22.04 from scratch as a test VM. I fired up a bridge and walked through the UI options for creating a VM, all of which looked fairly familiar from a KVM perspective. It failed, though, at the final "confirm options" step. Clicking "save" results in the following error:
Error creating VM.
Error while creating the DISK device. [ENAMETOOLONG] attributes.path: Disk path /dev/zvol/ssd-mirror/vm/ubuntu_2204_blank/ubuntu_2204_blank-7ko1yf is too long, reduce to less than 63 characters
More info shows:
Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 176, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1293, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/service.py", line 920, in create
    rv = await self.middleware._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1293, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1133, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1265, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/vm/vm_devices.py", line 167, in do_create
    data = await self.validate_device(data, update=False)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/vm/vm_devices.py", line 484, in validate_device
    raise verrors
middlewared.service_exception.ValidationErrors: [ENAMETOOLONG] attributes.path: Disk path /dev/zvol/ssd-mirror/vm/ubuntu_2204_blank/ubuntu_2204_blank-7ko1yf is too long, reduce to less than 63 characters
Now, the error message is fairly self-evident: there's something in there that limits the full path to a disk image to less than 63 characters.
However, the question I have is: why?
The path I gave to the location isn't massively complicated. Relative to the root of the SSD mirror I'm testing with, I'm trying to create a VM with a disk image in the 'vm/ubuntu_2204_blank' subdirectory. That doesn't seem excessive; I'm not attempting to traverse a tree that's 30 subdirectories deep, though even if I were, I would expect that to be perfectly legal.
The UI path ends up translating to an internal path in /dev/zvol that's 42 characters long including the trailing slash. TrueNAS then wants to create a disk image name in that location that's an additional 24 characters long. That totals 66 characters, which is apparently illegal.
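For anyone who wants to check my sums, the exact path from the error message breaks down like this (trivial Python, nothing TrueNAS-specific):

Code:
# The path from the error above, split at the final "/".
zvol_dir = "/dev/zvol/ssd-mirror/vm/ubuntu_2204_blank/"  # directory part, incl. trailing slash
disk_name = "ubuntu_2204_blank-7ko1yf"                   # zvol name TrueNAS generated

print(len(zvol_dir), len(disk_name), len(zvol_dir + disk_name))  # 42 24 66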
Now, while I could easily rename things to trim three characters, I'm having a hard time understanding why I should be forced to.
ZFS itself has a path length limit of something like 4096 characters, I believe, so the restriction isn't coming from there.
Similarly, KVM doesn't have any limitation like this. I've used KVM on various Linux distributions for years with similar naming conventions (both with and without ZFS) and have never run into anything like this.
A 63-character limit seriously restricts how storage can be organised. It more or less requires a very shallow, wide directory tree, with very few subdirectories between the root and each disk image. This is compounded by the fact that every character of the directory name is duplicated in the name of the disk image, to which 7 more characters are then appended, presumably to reduce the chance of collisions with an existing file.
Heck, I'm only one directory level past the root of the pool, and a two-character one at that, and it's still failing because I gave the VM's storage dataset a minimally descriptive name. This all feels like an unacceptable limitation, not too far removed from the 8.3 filenames of yesteryear.
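To put rough numbers on it, here's a back-of-the-envelope sketch of how long a VM dataset name can actually be under this check. I'm assuming the generated zvol name is always <dataset>-<6 random characters>, as it was in my case; I haven't verified that in the middleware code.

Code:
# Rough budget: /dev/zvol/<pool>/<parents>/<name>/<name>-xxxxxx must fit the check.
LIMIT = 63                  # from vm_devices.py: len(path) > 63 fails validation
PREFIX = len("/dev/zvol/")  # 10
SUFFIX = 1 + 6              # "-" plus 6 random characters appended to the name (assumed)

def max_name_len(pool, parents):
    """Longest VM dataset name that still fits, given the pool and intermediate datasets."""
    fixed = PREFIX + len(pool) + 1 + len(parents) + 1  # everything up to the VM dataset
    return (LIMIT - fixed - 1 - SUFFIX) // 2           # the name appears twice, plus a "/"

print(max_name_len("ssd-mirror", "vm"))  # 15, so my 17-character "ubuntu_2204_blank" blows the budget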
Since none of these complications are managed, or even flagged, by the UI beforehand, I'm leaning towards the conclusion that I did something dumb somewhere and that this is not a typical limitation. However, I'm struggling to understand what I could have messed up.
I'm also having a hard time reconciling the notion of user error with the code flagged in the traceback, which makes me wonder if this is an actual problem with legacy code. Line 354 of /usr/lib/python3/dist-packages/middlewared/plugins/vm/vm_devices.py shows:
Code:
if path and len(path) > 63:  # SPECNAMELEN is not long enough (63) in 12, 13 will be 255
    verrors.add(
        'attributes.path',
        f'Disk path {path} is too long, reduce to less than 63 characters',
        errno.ENAMETOOLONG
    )
Note the comment which seems to reference a planned change from TrueNAS 12 to 13 to raise this limit to 255 characters. I don't have a Core 13 system I can check this on (our work server is still on the FreeNAS 11.3 train), though I suppose I could spin up a VM to take a look to see if this was ever changed there. I'll post back here with whatever I find.
I did find this thread, which discusses a near-identical problem. There didn't appear to be much of a resolution that I could see, and this was from two years ago on a FreeNAS machine while cloning a jail. So, similar, but different.
This is on a fresh install of TrueNAS Angelfish (version TrueNAS-SCALE-22.02.2). I've done next to nothing to it as of yet; I only got it up on the network and wanted to play with VMs as those and apps are the parts of the system that will decide whether I stick with it. It's so early days that it's not currently serving space to the rest of the network and I've not added any bulk storage drives so far - I'm assuming all that is as bombproof as ever, and I'll get to that later.
The hardware shouldn't really matter in this case, but this is based on a 4-core Supermicro X10SDV with 32GB of ECC RAM. I'm looking forward to taking advantage of 10G later on, and may up the RAM at some point. It's going in a Jonsbo N1 for a major size reduction over the old tower that it'll replace, and will allow me to run up to 5 bulk storage drives (TBD, not currently installed).
The mirror in question is on a pair of SSDs borrowed from elsewhere while I test all of this, with a free partition on each forming the mirror I'm using as my test volume. These will be replaced in the final system with dedicated drives bought for the machine, but for now I'm just fiddling about with what I have on hand. I don't believe it has any bearing, but both vm and vm/ubuntu_2204_blank in the discussion above are child datasets of the ssd-mirror test pool; on more general-purpose Ubuntu setups I'm in the habit of putting each VM in its own dataset so that each can be snapshotted and replicated independently as needed, and I intend to continue doing that here.