Error adding mirror drive to pool

fabio.soggia

Cadet
Joined
Oct 8, 2021
Messages
4
Hello. I built a pool with a single disk and am now trying to attach a second disk as a mirror for testing. The disks are identical (both Crucial NVMe 1 TB, one week old), but the procedure fails saying the new disk is too small. Disk 1 is empty apart from the initial pool settings, so destroying the pool is no problem here, but what if this were a production system?
TrueNAS-12.0-U6 CORE.
I read about someone who had the same problem with U1 and solved it with U5.
Any suggestions? Many thanks.

Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 403, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 975, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool_/attach_disk.py", line 82, in attach
    await job.wrap(extend_job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 496, in wrap
    raise CallError(subjob.exception)
middlewared.service_exception.CallError: [EFAULT] concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 214, in extend
    i['target'].attach(newvdev)
  File "libzfs.pyx", line 391, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 214, in extend
    i['target'].attach(newvdev)
  File "libzfs.pyx", line 2030, in libzfs.ZFSVdev.attach
libzfs.ZFSException: device is too small

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 94, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 979, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 217, in extend
    raise CallError(str(e), e.code)
middlewared.service_exception.CallError: [EZFS_BADDEV] device is too small
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 398, in __run_body
    rv = await self.middleware._call_worker(self.method_name, *self.args, job={'id': self.id})
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1227, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1154, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1128, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [EZFS_BADDEV] device is too small

root@truenas[~]# diskinfo -v nvd0
nvd0
        512             # sectorsize
        1000204886016   # mediasize in bytes (932G)
        1953525168      # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        CT1000P2SSD8    # Disk descr.
        2211E619079A    # Disk ident.
        Yes             # TRIM/UNMAP support
        0               # Rotation rate in RPM
root@truenas[~]# diskinfo -v nvd1
nvd1
        512             # sectorsize
        1000204886016   # mediasize in bytes (932G)
        1953525168      # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        CT1000P2SSD8    # Disk descr.
        2211E6190998    # Disk ident.
        Yes             # TRIM/UNMAP support
        0               # Rotation rate in RPM
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
[EZFS_BADDEV] device is too small
You're going to have to look harder at the size of the device... the likelihood is that either you added the first disk without the 2 GB swap partition and the second one is now trying to include it, or the second disk really is smaller than the first.

If you set the "swap size in GiB" (on System | Advanced) to 0, does it work?
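To see why this setting matters: TrueNAS carves the configured swap size out of each new disk before creating the data partition, so the swap setting translates directly into a sector count. A quick sketch of that arithmetic (assuming 512-byte sectors, as these drives report):

```shell
#!/bin/sh
# Sectors consumed by the swap partition for a given "Swap Size in GiB"
# setting, assuming 512-byte sectors.
sectors_for_gib() {
    echo $(( $1 * 1024 * 1024 * 1024 / 512 ))
}

echo "2 GiB swap  = $(sectors_for_gib 2) sectors"    # 4194304
echo "32 GiB swap = $(sectors_for_gib 32) sectors"   # 67108864
```

Everything left after swap (and alignment) becomes the freebsd-zfs partition, so a larger swap setting yields a smaller data partition, and ZFS then refuses to attach it to the existing vdev.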
 

fabio.soggia

Cadet
Joined
Oct 8, 2021
Messages
4
Thank you. My assumption was that adding a disk to a pool would first create the same partition scheme as the existing disk and then copy the data.
Instead, the swap partition is created from the setting in System | Advanced, so the two disks ended up with different layouts, and correctly (or not?) it was the data partition that was too small, not the device.
By the way, for future users in the same situation (or for my own reference):
1) If you see this error, check the disk geometry:
gpart show
mine showed this:
=>        40  1953525088  nvd0  GPT  (932G)
          40          88        - free -  (44K)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  1949330696     2  freebsd-zfs  (930G)

=>        40  1953525088  nvd1  GPT  (932G)
          40          88        - free -  (44K)
         128    67108864     1  freebsd-swap  (32G)
    67108992  1886416136     2  freebsd-zfs  (900G)
2) Go to Storage | Disks | (the drive you want to add, the one giving the error) and click WIPE to destroy its partition scheme (BE CAREFUL!!!)
3) Go to System | Advanced and set Swap Size in GiB to match your existing pool drive
4) Try adding the drive to the pool as usual (Storage | Pools | Status | Extend); it should work now
5) Go back to System | Advanced and restore Swap Size in GiB to whatever you need
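To see why step 3 matters, compare the freebsd-zfs partition sizes in the gpart output from step 1: the data-partition shortfall is exactly the extra swap. A sketch using those sector counts (512-byte sectors):

```shell
#!/bin/sh
# Sector counts taken from the gpart output above (512-byte sectors).
zfs_nvd0=1949330696   # freebsd-zfs on nvd0 (disk with 2 GiB swap)
zfs_nvd1=1886416136   # freebsd-zfs on nvd1 (disk with 32 GiB swap)
swap_nvd0=4194304     # 2 GiB swap partition
swap_nvd1=67108864    # 32 GiB swap partition

diff_zfs=$(( zfs_nvd0 - zfs_nvd1 ))
diff_swap=$(( swap_nvd1 - swap_nvd0 ))

echo "data partition shortfall: $diff_zfs sectors"
echo "extra swap:               $diff_swap sectors"
echo "shortfall in GiB:         $(( diff_zfs * 512 / 1024 / 1024 / 1024 ))"
```

Both differences come out to 62914560 sectors (30 GiB): the 30 GiB of extra swap on nvd1 is exactly what its data partition is missing, which is why ZFS reports the device as too small even though the disks are identical.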
 

fabio.soggia

Cadet
Joined
Oct 8, 2021
Messages
4
My question now is: why didn't the boot pool follow this behaviour?
Let me explain:
1) I installed on a single disk with 16 GB swap
2) Went to System | Boot | Boot Pool Status | Extend
3) It completed with no error, even though System | Advanced | Swap Size in GiB was set to 32 GiB
=>        40  234441568  ada0  GPT  (112G)
          40       1024     1  freebsd-boot  (512K)
        1064   33554432     3  freebsd-swap  (16G)
    33555496  200867840     2  freebsd-zfs  (96G)
   234423336      18272        - free -  (8.9M)

=>        40  234441568  ada1  GPT  (112G)
          40       1024     1  freebsd-boot  (512K)
        1064   33554432     3  freebsd-swap  (16G)
    33555496  200886112     2  freebsd-zfs  (96G)
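In this gpart output, both boot disks got a 33554432-sector swap partition matching the installer's original layout rather than the 32 GiB configured in System | Advanced. A quick check that 33554432 sectors really is the 16 GiB chosen at install time (assuming 512-byte sectors):

```shell
#!/bin/sh
# Convert the swap partition size from the boot-pool gpart output above
# into GiB, assuming 512-byte sectors.
swap_sectors=33554432
echo "$(( swap_sectors * 512 / 1024 / 1024 / 1024 )) GiB"   # 16 GiB
```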
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I think all the logic of the boot pool setup is in the installer, so mirroring later is probably a bit janky.

Feel free to raise a bug about it as I agree it should probably work more along the lines of what you mentioned... just use the same partition layout as the existing one and mirror it.


EDIT: re-reading that, I see you were saying the opposite... I guess my answer applies in the reverse then...

The key difference is that the swap partition size setting is meant to apply to all pool member disks, whereas the boot pool handles swap differently (particularly in recent versions).

A general point would be that it's always best to wipe a disk before using it to add to a pool.
 