
SOLVED Can't add to existing pool or create new pool


pallen38

Cadet
Joined
Feb 25, 2022
Messages
5
Hello,

I am using TrueNAS Core 12.0-U8. I have an existing mirror pool: two 4 TB disks, created through the GUI. One disk is SATA, the other is SAS.

I would ideally like to add another two-disk mirror vdev to this pool, but at this point I can't actually do anything with these two disks inside the GUI. Here are the different things I have tried:



1. In Storage/Pools/Add Vdevs to Pool - I select the disks and click Add Vdev

Code:
FAILED
[EFAULT] [EZFS_NOENT] no such pool or dataset

Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 403, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 975, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 858, in do_update
    raise CallError(extend_job.error)
middlewared.service_exception.CallError: [EFAULT] [EZFS_NOENT] no such pool or dataset




2. In Storage/Pools - Add New Pool, select the two disks

Code:
FAILED
('no such pool or dataset',)

Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 111, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 979, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 111, in do_create
    zfs.create(data['name'], topology, data['options'], data['fsoptions'])
  File "libzfs.pyx", line 391, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 111, in do_create
    zfs.create(data['name'], topology, data['options'], data['fsoptions'])
  File "libzfs.pyx", line 1294, in libzfs.ZFS.create
libzfs.ZFSException: no such pool or dataset
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 403, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 975, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 773, in do_create
    raise e
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 710, in do_create
    z_pool = await self.middleware.call('zfs.pool.create', {
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1256, in call
    return await self._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1213, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/service.py", line 484, in create
    rv = await self.middleware._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1221, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1227, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1154, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1128, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: ('no such pool or dataset',)



3. Storage/Pools/Expand Pool (a suggestion I found in a thread)
It asks if I want to use all the storage in the pool; I say yes, and it just returns to the menu.


4. Create pool from command line
No issues; it creates fine. Then I ran zpool destroy, again without issue.
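For reference, a sketch of what that CLI test looks like. The pool name ("testpool") and the file-backed vdev paths are hypothetical stand-ins, and the zpool lines are shown as comments since they need root and a live ZFS system:

```shell
# Sketch only: pool name and backing-file paths are hypothetical.
# Two sparse files stand in for the real disks so nothing is touched:
truncate -s 64M /tmp/vdev0.img /tmp/vdev1.img
# On the actual TrueNAS shell, the working commands were of this shape:
#   zpool create testpool mirror /dev/ada3 /dev/da2
#   zpool status testpool
#   zpool destroy testpool
stat -c %s /tmp/vdev0.img   # prints 67108864 (64 MiB)
```

The point of the test is that the same create succeeds at the CLI but fails through the middleware, which is what points at a GUI/middleware problem rather than the disks themselves.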

5. Repeat 1 and 2, thinking perhaps the create/destroy may have reset something. No change.

6. Reboot. Repeat 1 and 2. No change.

7. Tried creating a single-disk pool with each disk. Both failed with:

Code:
FAILED
[EFAULT] [EZFS_NOENT] no such pool or dataset

Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 111, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 979, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 111, in do_create
    zfs.create(data['name'], topology, data['options'], data['fsoptions'])
  File "libzfs.pyx", line 391, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 111, in do_create
    zfs.create(data['name'], topology, data['options'], data['fsoptions'])
  File "libzfs.pyx", line 1294, in libzfs.ZFS.create
libzfs.ZFSException: no such pool or dataset
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 403, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 975, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 773, in do_create
    raise e
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 710, in do_create
    z_pool = await self.middleware.call('zfs.pool.create', {
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1256, in call
    return await self._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1213, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/service.py", line 484, in create
    rv = await self.middleware._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1221, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1227, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1154, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1128, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: ('no such pool or dataset',)



8. Create pool at the command line, export it, import it in TrueNAS. Success. But I wanted to keep things the 'correct' TrueNAS way and have it created in the GUI.
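For anyone following along, that workaround looks roughly like this. The pool name ("tank2") and device names are hypothetical, and the script only prints the commands, since the real ones need root and actual disks:

```shell
# Hypothetical pool/device names; substitute your own.
create_cmd='zpool create tank2 mirror /dev/da2 /dev/da3'
export_cmd='zpool export tank2'
printf '%s\n%s\n' "$create_cmd" "$export_cmd"
# After the export, Storage > Pools > Add > "Import an existing pool" in
# the GUI should pick tank2 up and manage it from then on.
```

The export before importing matters: the middleware expects to import a pool that is not currently mounted, so importing a still-imported CLI pool through the GUI is not the supported path.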

9. Destroy the pool (in TrueNAS), then try to create a new pool in TrueNAS. Same error as above.

10. Tried zeroing the start and end of the disk with dd (a suggestion from another thread):

Code:
# dd if=/dev/zero of=/dev/ada3 bs=1m count=1
# dd if=/dev/zero of=/dev/ada3 bs=1m oseek=`diskinfo ada3 | awk '{print int($3 / (1024*1024)) - 4;}'`
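To unpack the second dd line: on FreeBSD, the third field of diskinfo output is the media size in bytes, so the awk expression converts that to whole MiB and steps back 4 MiB, landing the zeroes on the end of the disk, where ZFS keeps its backup labels and the backup GPT table lives. A sketch of the arithmetic with a hypothetical 4 TB media size:

```shell
# Hypothetical media size in bytes for a 4 TB drive; on a real system this
# value would come from the third field of `diskinfo ada3`.
bytes=4000787030016
oseek=$(( bytes / (1024 * 1024) - 4 ))   # whole MiB on the disk, minus 4
echo "$oseek"                            # prints 3815443
```

The first dd line handles the other end, zeroing the first 1 MiB where the front ZFS labels and primary partition table sit.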


Then, back to the gui to add the vdev to my pool:

Code:
Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 403, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 975, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 858, in do_update
    raise CallError(extend_job.error)
middlewared.service_exception.CallError: [EFAULT] [EZFS_NOENT] no such pool or dataset



Creating a pool with either or both of the disks also results in the same 'no such pool or dataset' error as above.


I have searched for several days in every way I can think of, but can find no other solution to try. If there is something, I apologize in advance for not having found it.

My complete setup:
TrueNAS Core 12.0-U8
Intel Core i5-2500K, 3.30 GHz
Gigabyte B75M-D3H motherboard
24 GB RAM (non-ECC)
1 HGST Ultrastar 4 TB SAS drive (and 1 to be added)
1 WD Red 4 TB SATA drive (and 1 to be added)
LSI 9220-8i SAS controller
Intel Gigabit ET dual-port Ethernet

Thank you

Patrick
 

sretalla

Hall of Famer
Joined
Jan 1, 2016
Messages
6,548
Would it make sense for me to just manage the pool outside of the gui?
Why would you need TrueNAS if you don't use the GUI?

Can you share a screenshot of the Add/create pool screen before you click create so we can see if there is anything unusual there?

I can confirm it works on my installs.
 

pallen38

Cadet
Joined
Feb 25, 2022
Messages
5
Why would you need TrueNAS if you don't use the GUI?
Well, because I am using TrueNAS for everything else, but I was wondering whether it was worth even dealing with this particular issue or just working around it.

I can confirm it works too - I have created all of my other pools with other disks in the gui.

Below is a screenshot of the GUI before clicking Create. From my experience creating my other pools, I see nothing strange.

[Attached screenshot: 1649943524049.png]
 

sretalla

Hall of Famer
Joined
Jan 1, 2016
Messages
6,548
OK, so you seem to be doing nothing "wrong" (although we haven't seen the adding a VDEV version of the screenshot... I assume nothing different there). I guess you confirm that clicking the Create button produces that "no such pool or dataset" error?

I would probably recommend taking a config backup, installing fresh and restore your config, then try again.
 

pallen38

Cadet
Joined
Feb 25, 2022
Messages
5
I guess I'll have to try the backup and restore method, since there have been no other thoughts on this. Things failing through the GUI that work fine from the CLI, backing up and restoring to fix an issue - I was hoping to avoid exactly this type of thing by using TrueNAS as opposed to some Linux solution, but here we are.
 

sretalla

Hall of Famer
Joined
Jan 1, 2016
Messages
6,548
The process to get back to exactly where you are is surprisingly easy and fast... actually one of the key benefits of an appliance-type OS like TrueNAS.
 

pallen38

Cadet
Joined
Feb 25, 2022
Messages
5
Just wanted to update this in case anyone ever has this issue. I did not find the idea of reinstalling acceptable (sure, it's easy enough, but I don't want a system that I'll need to reinstall every time there's an issue), and I did not feel like switching to another solution, so I just left those drives offline. Today I went in and applied pending updates, and after that I am now able to add the disks/vdev.

So, it appears that whatever was broken got fixed in the update - or, more likely, a corrupt file or configuration got overwritten.
 

sretalla

Hall of Famer
Joined
Jan 1, 2016
Messages
6,548
I went in and applied pending updates, and after that I am now able to add the disks/vdev.

So, it appears that whatever was broken got fixed in the update - or, more likely, a corrupt file or configuration got overwritten.
An update makes an entirely new copy of the "OS" part of the appliance, so it would have replaced whatever was corrupted with a fresh copy.

If you change your boot environment back, the corruption will still be there.
 