ENAMETOOLONG when creating new VM disk image

Smeghead

Cadet
Joined
Oct 28, 2020
Messages
6
I'm playing with TrueNAS Scale to see if I want it to be the basis for the replacement for our home storage/VM server. However, I appear to have faceplanted quite early on, and I'm looking for advice as to whether this is my own dumb fault or whether this is a limitation in Scale itself.

Since whatever I choose will stick for a long time (experience suggests 5-6 years or so), I want to get a solid feel for it before committing. I also want to try out the app catalogue to see if there's much advantage over fully virtualising each function, and to gain experience with Scale ahead of possibly using it at work.

I should state that I've been using KVM-hosted VMs for the better part of a decade, first on CentOS and later on Ubuntu, so I'm fairly familiar with the concepts involved. I'm mainly looking to evaluate whether I should go with the likes of Scale for simpler management of storage, focus on the VM side of things with Proxmox, or learn nothing new and just go with another Ubuntu server system.

In terms of first impressions, the storage side of things looks about the same as it's always been in FreeNAS/Core, so I'm not particularly worried. I'll get to testing that later.

However, I'm running into an error attempting to spin up my first VM for testing. I figured I'd try installing Ubuntu server 22.04 from scratch as a test VM. I fired up a bridge and walked through the UI options for creating a VM, all of which looked fairly familiar from a KVM perspective. However, this failed at the final "confirm options" step. Clicking "save" results in the following error:

Error creating VM.

Error while creating the DISK device. [ENAMETOOLONG] attributes.path: Disk path /dev/zvol/ssd-mirror/vm/ubuntu_2204_blank/ubuntu_2204_blank-7ko1yf is too long, reduce to less than 63 characters

More info shows:
Code:
    Error: Traceback (most recent call last):
      File "/usr/lib/python3/dist-packages/middlewared/main.py", line 176, in call_method
        result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self)
      File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1293, in _call
        return await methodobj(*prepared_call.args)
      File "/usr/lib/python3/dist-packages/middlewared/service.py", line 920, in create
        rv = await self.middleware._call(
      File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1293, in _call
        return await methodobj(*prepared_call.args)
      File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1133, in nf
        res = await f(*args, **kwargs)
      File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1265, in nf
        return await func(*args, **kwargs)
      File "/usr/lib/python3/dist-packages/middlewared/plugins/vm/vm_devices.py", line 167, in do_create
        data = await self.validate_device(data, update=False)
      File "/usr/lib/python3/dist-packages/middlewared/plugins/vm/vm_devices.py", line 484, in validate_device
        raise verrors
    middlewared.service_exception.ValidationErrors: [ENAMETOOLONG] attributes.path: Disk path /dev/zvol/ssd-mirror/vm/ubuntu_2204_blank/ubuntu_2204_blank-7ko1yf is too long, reduce to less than 63 characters

Now, the error message is fairly self-evident: there's something in there that limits the full path to a disk image to less than 63 characters.

However, the question I have is: why?

The path I gave to the location isn't massively complicated. Relative to the root of the SSD mirror I'm testing with, I'm trying to create a VM with a disk image in the 'vm/ubuntu_2204_blank' subdirectory. That doesn't seem excessive; I'm not attempting to traverse a tree that's 30 subdirectories deep, though even if I were, I would expect that to be perfectly legal.

The UI path ends up translating to an internal path in /dev/zvol that's 42 characters long including the trailing slash. TrueNAS then wants to create a disk image name in that location that's an additional 24 characters long. That totals 66 characters, which is apparently illegal.
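The arithmetic is easy to sanity-check with a few lines of Python, using the exact strings from the error message above:

```python
# Sanity check of the path arithmetic, using the exact strings
# from the ENAMETOOLONG error message.
base = "/dev/zvol/ssd-mirror/vm/ubuntu_2204_blank/"  # zvol parent dir (trailing slash included)
name = "ubuntu_2204_blank-7ko1yf"                    # generated disk image name

full = base + name
print(len(base), len(name), len(full))  # 42 24 66 - three characters over the cap
```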

Now, while I could easily rename things to trim three characters, I'm having a hard time understanding why I should be forced to.

ZFS itself certainly isn't the constraint here: as far as I know, dataset names can run to 255 characters, and full filesystem paths are limited only by the OS's PATH_MAX (4096 bytes on Linux).

Similarly, KVM doesn't have any limitation like this. I've used KVM on various Linux distributions for years with similar naming conventions (both with and without ZFS) and have never hit anything comparable.

A 63-character limit places serious limitations on the ability to organise where everything is stored. It more or less requires a very shallow and wide directory tree, with very few subdirs in between the root and each disk image. This is complicated by the fact that every character used in the directory name is then duplicated for the name of the disk image, to which 7 more characters are appended, presumably as a way to reduce the chance of collisions with an existing file.
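To see how tight this is in practice, here's a rough back-of-envelope sketch of the longest VM name a given parent dataset leaves room for. The `/dev/zvol/<parent>/<name>/<name>-XXXXXX` layout and the 7-character suffix are just what I observed in the error above; I don't know the exact generator the middleware uses:

```python
# Back-of-envelope: longest VM name a given parent dataset allows,
# assuming the layout seen in the error message:
#   /dev/zvol/<parent>/<name>/<name>-XXXXXX
# The 7-char "-XXXXXX" tail is an assumption based on "-7ko1yf" above.
PREFIX = len("/dev/zvol/")  # 10
SUFFIX = 7                  # dash plus 6 random characters
LIMIT = 63                  # the middleware check is `len(path) > 63`

def max_vm_name_len(parent: str) -> int:
    # Two slashes surround the name directory, and the name appears twice.
    budget = LIMIT - PREFIX - len(parent) - 2 - SUFFIX
    return budget // 2

print(max_vm_name_len("ssd-mirror/vm"))  # 15 - but 'ubuntu_2204_blank' is 17
```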

Heck, I'm attempting to use one directory level past the root of the pool with a two-character name and it's failing because I gave the VM's storage dir a minimally descriptive name. This all feels like an unacceptable limitation not too far removed from the 8.3 filenames of yesteryear.

Since none of these complications are managed or even flagged by the UI beforehand, I'm leaning towards the conclusion that I did a dumb somewhere, and that this is not a typical limitation. However, I'm struggling to understand what I could have messed up.

I'm also having a hard time reconciling the notion of user error with the script flagged in the blurb, which makes me wonder if this is an actual problem with legacy code. Line 354 of /usr/lib/python3/dist-packages/middlewared/plugins/vm/vm_devices.py shows:
Code:
            if path and len(path) > 63:
                # SPECNAMELEN is not long enough (63) in 12, 13 will be 255
                verrors.add(
                    'attributes.path',
                    f'Disk path {path} is too long, reduce to less than 63 characters', errno.ENAMETOOLONG
                )


Note the comment, which seems to reference a planned change from TrueNAS 12 to 13 to raise this limit to 255 characters. I don't have a Core 13 system I can check this on (our work server is still on the FreeNAS 11.3 train), though I suppose I could spin up a VM to see whether it was ever changed there. I'll post back here with whatever I find.

I did find this thread, which discusses a near-identical problem. There didn't appear to be much of a resolution that I could see, and this was from two years ago on a FreeNAS machine while cloning a jail. So, similar, but different.

This is on a fresh install of TrueNAS Angelfish (version TrueNAS-SCALE-22.02.2). I've done next to nothing to it as of yet; I only got it up on the network and wanted to play with VMs as those and apps are the parts of the system that will decide whether I stick with it. It's so early days that it's not currently serving space to the rest of the network and I've not added any bulk storage drives so far - I'm assuming all that is as bombproof as ever, and I'll get to that later.

The hardware shouldn't really matter in this case, but this is based on a 4-core Supermicro X10SDV with 32GB of ECC RAM. I'm looking forward to taking advantage of 10G later on, and may up the RAM at some point. It's going in a Jonsbo N1 for a major size reduction over the old tower that it'll replace, and will allow me to run up to 5 bulk storage drives (TBD, not currently installed).

The mirror in question is on a pair of SSDs that I'm borrowing from elsewhere while testing all of this, using a free partition on each as my test mirror. These will be replaced in the final system with dedicated drives bought for the machine, but for now I'm just fiddling about with what I have on hand. I don't believe it has any bearing, but both vm and vm/ubuntu_2204_blank in the discussion above are child datasets of the ssd-mirror test pool. On more general-purpose Ubuntu setups I'm in the habit of putting each VM in its own dataset so each can be snapshotted and replicated independently as needed, and I intend to continue doing that here.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
This appears to be a straight carryover from the TrueNAS Core 13.0-RELEASE code, because I find the exact same path length restriction there in /usr/local/lib/python3.9/site-packages/middlewared/plugins/vm.py, line 2300. The underlying file system can handle path lengths much longer, so this might be a UI restriction to protect bhyve's limitations, or to work around a limit with the UI's path handler in Core.

Edit: This looks like a concession to the 255-byte maximum command length in FreeBSD, as launching a bhyve VM needs to include all the devices and their properties in the exec arguments. See https://wiki.freebsd.org/bhyve for example bhyve invocations.
 

Smeghead

Cadet
Joined
Oct 28, 2020
Messages
6
Yeah, I was going to say the same thing. I spun up a Core 13 VM and ran into exactly the same error when attempting to set up a VM there. I found the vm.py script you mention, and indeed, line 2300 does much the same thing as its Scale sibling, with the same magic 63-character limit.

Is it safe to assume that this is worth a ticket at this point?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Is it safe to assume that this is worth a ticket at this point?

More like a feature request, since KVM isn't subject to the same restrictions as bhyve. However, I don't hold out much hope for action, as iX explicitly wants to facilitate migrations from Core->Scale, and possibly back the other way.
 

Smeghead

Cadet
Joined
Oct 28, 2020
Messages
6
Hmmm. I'll go look at their ticket system to see if it's worth my while. It's certainly possible to work around this with shorter paths, but I can see this being a repeated irritant without a change.

However, if what you say is true, then I think I'll be avoiding Scale for this particular setup and looking at a different option. I'm not convinced I want to commit to Scale for a sizeable chunk of a decade with the likes of the above, this, and the warning upon install that none of the space on the boot devices is available for my use regardless of how oversized they are compared with Scale's needs. It's possible that I'm just in the right place at the wrong time, what with Scale being so early on in its lifecycle. It has a huge amount of potential, but being an early adopter of anything has its risks.

In an ideal world this machine would become a pure NAS (I'd likely just stick Core on it and call it job done) and I would have a separate box as a VM host; a Ryzen NUC-alike stuffed with RAM would run rings around this little Xeon D. However, that's not currently approved by the Minister for War & Finance, so I'm having to make do with a straight replacement of my existing old all-in-one server that's currently on its last legs.
 

fastzombies

Explorer
Joined
Aug 11, 2022
Messages
57
I just hit the same problem on Core 13. Does this basically mean VM cloning won't work? My paths are already fairly short, I can't reduce them any further.

This would mean we all are forced to do a new OS install whenever we want another VM.
 