Hello,
I am using TrueNAS Core 12.0-U8. I have an existing mirror pool of 2 disks, 4TB each, created through the GUI. One disk is SATA, the other is SAS.
I would ideally like to add another 2-disk mirror vdev to this pool, but at this point I can't do anything at all with the two new disks inside the GUI. Here are the different things I have tried:
1. In Storage/Pools/Add Vdevs to Pool - I select the disks and click Add Vdev
Code:
FAILED
[EFAULT] [EZFS_NOENT] no such pool or dataset
Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 403, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 975, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 858, in do_update
    raise CallError(extend_job.error)
middlewared.service_exception.CallError: [EFAULT] [EZFS_NOENT] no such pool or dataset
2. In Storage/Pools - Add New Pool, select the two disks
Code:
FAILED
('no such pool or dataset',)
Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 111, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 979, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 111, in do_create
    zfs.create(data['name'], topology, data['options'], data['fsoptions'])
  File "libzfs.pyx", line 391, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 111, in do_create
    zfs.create(data['name'], topology, data['options'], data['fsoptions'])
  File "libzfs.pyx", line 1294, in libzfs.ZFS.create
libzfs.ZFSException: no such pool or dataset
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 403, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 975, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 773, in do_create
    raise e
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 710, in do_create
    z_pool = await self.middleware.call('zfs.pool.create', {
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1256, in call
    return await self._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1213, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/service.py", line 484, in create
    rv = await self.middleware._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1221, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1227, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1154, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1128, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: ('no such pool or dataset',)
3. Storage/Pools/Expand Pool (a suggestion I found in a thread)
It asks whether I want to use all the storage in the pool. I say yes, and it just returns to the menu with no change.
4. Create the pool from the command line
No issues - it creates fine. Then I did a zpool destroy, again without issue.
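For reference, the step-4 commands were along these lines. The pool name and device nodes here are placeholders rather than my exact ones, and the commands are only printed (not executed) so the sketch is non-destructive:

```shell
# Sketch of the step-4 CLI create/destroy. "testpool", ada3 and da1 are
# placeholder names, not necessarily what your system enumerates.
# The commands are stored and printed rather than run.
create_cmd="zpool create testpool mirror /dev/ada3 /dev/da1"
destroy_cmd="zpool destroy testpool"
printf '%s\n' "$create_cmd" "$destroy_cmd"
```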
5. Repeat 1 and 2, thinking perhaps the create/destroy might have reset something. No change.
6. Reboot. Repeat 1 and 2. No change.
7. Tried creating a single-disk pool with each disk. Both attempts errored with:
Code:
FAILED
[EFAULT] [EZFS_NOENT] no such pool or dataset
Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 111, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 979, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 111, in do_create
    zfs.create(data['name'], topology, data['options'], data['fsoptions'])
  File "libzfs.pyx", line 391, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 111, in do_create
    zfs.create(data['name'], topology, data['options'], data['fsoptions'])
  File "libzfs.pyx", line 1294, in libzfs.ZFS.create
libzfs.ZFSException: no such pool or dataset
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 403, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 975, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 773, in do_create
    raise e
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 710, in do_create
    z_pool = await self.middleware.call('zfs.pool.create', {
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1256, in call
    return await self._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1213, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/service.py", line 484, in create
    rv = await self.middleware._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1221, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1227, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1154, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1128, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: ('no such pool or dataset',)
8. Create the pool at the command line, export it, and import it in TrueNAS. Success. But I wanted to keep things the 'correct' TrueNAS way and have it created in the GUI.
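The CLI side of step 8 was roughly the following (the pool name is a placeholder; the import itself was done through the GUI's Storage/Pools/Import Pool, and the command is only printed here, not executed):

```shell
# Sketch of step 8's export before the GUI import. "testpool" is a
# placeholder name; the command is printed rather than run.
export_cmd="zpool export testpool"
printf '%s\n' "$export_cmd"
```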
9. Destroy the pool (in TrueNAS), then try to create a new pool in TrueNAS. Same error as above.
10. Tried zeroing the start and end of the disk with dd (a suggestion from another thread):
# dd if=/dev/zero of=/dev/ada3 bs=1m count=1
# dd if=/dev/zero of=/dev/ada3 bs=1m oseek=`diskinfo ada3 | awk '{print int($3 / (1024*1024)) - 4;}'`
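The oseek value in the second dd is just the disk size in MiB minus 4, i.e. it zeroes the last few MiB of the disk, where ZFS keeps its backup labels. With a stubbed media size standing in for the diskinfo output (4TB is an assumption), the arithmetic looks like:

```shell
# Compute the dd oseek offset (in 1 MiB blocks) for zeroing the tail of
# the disk. mediasize is a stubbed value standing in for field 3 of
# `diskinfo ada3`; on a real system substitute the actual byte count.
mediasize=4000787030016                            # bytes (assumed 4TB disk)
offset_mib=$(( mediasize / (1024 * 1024) - 4 ))    # integer division, like awk's int()
echo "$offset_mib"
```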
Then, back to the GUI to add the vdev to my pool:
Code:
Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 403, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 975, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 858, in do_update
    raise CallError(extend_job.error)
middlewared.service_exception.CallError: [EFAULT] [EZFS_NOENT] no such pool or dataset
Creating a pool with either or both of the disks also results in the same no such pool or dataset error as above.
I have searched for multiple days in every way I can think of, but can find no other solution to try. If there is one, I apologize in advance for not having found it.
My complete setup:
TrueNAS 12.0-U8
Intel Core i5-2500K, 3.30GHz
Gigabyte B75M-D3H motherboard
24GB RAM (non-ECC)
1 HGST Ultrastar 4TB SAS drive (and 1 to be added)
1 WD Red 4TB SATA drive (and 1 to be added)
LSI 9220-8i SAS controller
Intel Gigabit ET Dual port ethernet
Thank you
Patrick