surfrock66
I have a ZFS pool I've been carrying forward for years: it started on FreeNAS, moved to TrueNAS CORE, and is now on TrueNAS SCALE (specifically TrueNAS-SCALE-22.02.3). The pool is healthy, and I just replaced the final 2TB disk with a 4TB disk, so I should be ready to expand the pool to use the new space. When I try to expand, I get an error:
Code:
pool.expand Error: [EZFS_NOCAP] cannot relabel '/dev/disk/by-partuuid/905647b7-3ca7-11e9-a8f0-8cae4cfe7d0f': unable to read disk capacity
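For what it's worth, that "unable to read disk capacity" string seems to come from ZFS failing to read the size of the underlying disk while relabeling the partition. These are the sanity checks I'd run against it (just a sketch, assuming the partition still lives on /dev/sdb as shown further down; blockdev is standard util-linux, and I believe sgdisk ships with SCALE):
Code:
# Raw disk capacity as the kernel currently sees it
blockdev --getsize64 /dev/sdb
# Verify the GPT is consistent with the new, larger disk
sgdisk -v /dev/sdb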
I can get more details on that error:
Code:
Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 260, in __zfs_vdev_operation
    op(target, *args)
  File "libzfs.pyx", line 411, in libzfs.ZFS.__exit__
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 260, in __zfs_vdev_operation
    op(target, *args)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 324, in <lambda>
    self.__zfs_vdev_operation(name, label, lambda target, *args: target.online(*args), expand)
  File "libzfs.pyx", line 2211, in libzfs.ZFSVdev.online
libzfs.ZFSException: cannot relabel '/dev/disk/by-partuuid/905647b7-3ca7-11e9-a8f0-8cae4cfe7d0f': unable to read disk capacity

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 114, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1276, in nf
    return func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 324, in online
    self.__zfs_vdev_operation(name, label, lambda target, *args: target.online(*args), expand)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 262, in __zfs_vdev_operation
    raise CallError(str(e), e.code)
middlewared.service_exception.CallError: [EZFS_NOCAP] cannot relabel '/dev/disk/by-partuuid/905647b7-3ca7-11e9-a8f0-8cae4cfe7d0f': unable to read disk capacity
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 411, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 446, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1272, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1140, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/expand.py", line 74, in expand
    await self.middleware.call('zfs.pool.online', pool['name'], c_vd['guid'], True)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1344, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1301, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1307, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1222, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1205, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [EZFS_NOCAP] cannot relabel '/dev/disk/by-partuuid/905647b7-3ca7-11e9-a8f0-8cae4cfe7d0f': unable to read disk capacity
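From the traceback, the expand job appears to walk every vdev in the pool and call zfs.pool.online with the expand flag set, so my assumption (not verified) is that the failing step is roughly the equivalent of this manual per-device command:
Code:
# Per-vdev expansion the middleware seems to be attempting under the hood
zpool online -e sr66-nas-v01 /dev/disk/by-partuuid/905647b7-3ca7-11e9-a8f0-8cae4cfe7d0f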
My pool is online and healthy. SMART tests are all clean:
Code:
# zpool status
  pool: boot-pool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:00:06 with 0 errors on Mon Sep 12 03:45:07 2022
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdm     ONLINE       0     0     0
            sdn     ONLINE       0     0     0

errors: No known data errors

  pool: sr66-nas-v01
 state: ONLINE
  scan: resilvered 562G in 03:05:15 with 0 errors on Sat Sep 17 12:08:54 2022
config:

        NAME                                      STATE     READ WRITE CKSUM
        sr66-nas-v01                              ONLINE       0     0     0
          raidz2-0                                ONLINE       0     0     0
            2163cd00-d9f2-11ec-8049-3cecef2b41e8  ONLINE       0     0     0
            905647b7-3ca7-11e9-a8f0-8cae4cfe7d0f  ONLINE       0     0     0
            8e9d48ba-83f4-11e8-8495-b8975a5c2ef9  ONLINE       0     0     0
            079dc207-aeeb-11eb-916f-3cecef2b41e8  ONLINE       0     0     0
            cef87bc6-2244-11ed-af8a-3cecef2b41e8  ONLINE       0     0     0
            cc77e3e8-9b15-11ec-9b4e-3cecef2b41e8  ONLINE       0     0     0
          raidz2-1                                ONLINE       0     0     0
            798b36b5-98f8-11eb-916f-3cecef2b41e8  ONLINE       0     0     0
            d90771d8-af7e-11eb-916f-3cecef2b41e8  ONLINE       0     0     0
            421a0481-b657-40e5-a31a-4a9be1b94d54  ONLINE       0     0     0
            8484ff04-fa36-11eb-94bb-3cecef2b41e8  ONLINE       0     0     0
            80f8ce97-98f8-11eb-916f-3cecef2b41e8  ONLINE       0     0     0
            81d0f180-98f8-11eb-916f-3cecef2b41e8  ONLINE       0     0     0

errors: No known data errors
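In case it's useful for comparison, the unclaimed space per vdev should also be visible in the EXPANDSZ column of zpool list:
Code:
# EXPANDSZ shows space each vdev could grow into after 'zpool online -e'
zpool list -v sr66-nas-v01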
I can also show that smartctl reports no problems, and that the disk and its full size are visible via blkid and lsblk:
Code:
# blkid | grep -e 9056
/dev/sdb2: LABEL="sr66-nas-v01" UUID="6450892928072451702" UUID_SUB="4346000397375589435" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="905647b7-3ca7-11e9-a8f0-8cae4cfe7d0f"

# smartctl -l selftest /dev/sdb
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.10.131+truenas] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%         30993         -
# 2  Short offline       Completed without error       00%         30980         -
# 3  Short offline       Completed without error       00%         30885         -
# 4  Extended offline    Completed without error       00%         30797         -
# 5  Short offline       Completed without error       00%         30693         -
# 6  Short offline       Completed without error       00%         30525         -
# 7  Extended offline    Completed without error       00%         30438         -

# lsblk /dev/sdb
NAME          MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sdb             8:16   0  3.6T  0 disk
├─sdb1          8:17   0    2G  0 part
│ └─md126       9:126  0    2G  0 raid1
│   └─md126   253:1    0    2G  0 crypt [SWAP]
└─sdb2          8:18   0  3.6T  0 part
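One thing I notice in the lsblk output is that sdb1 is an active member of the md swap mirror (md126). In case that's what's keeping the disk from being re-read, here's what I'd check next (a sketch; partprobe may refuse while the md member is active, which would itself be telling):
Code:
# Is the swap mirror holding a partition on this disk?
cat /proc/mdstat
# Ask the kernel to re-read sdb's partition table
partprobe /dev/sdb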
I had this issue once before on the same disk and never got a response, but my hope is that the new debug info above will be a clue to what I should do to get this fixed. Thanks!