Unable to create new Pool ([EFAULT] Partition type 6a898cc3-1dd2-11b2-99a6-080020736631 not found on sda)

jolness1

Dabbler
Joined
May 21, 2020
Messages
29
I am unable to create a pool consisting of a single SSD to use for VMs in Scale.
I have tried deleting partitions using fdisk as suggested here: Unable to Create Stripe Pool with One NVMe AIC
I see this is a known error, but I haven't been able to find a solution that gets it working.

Attached below is the error modal I get, along with its contents. I am getting the same error as I did before running fdisk, deleting both partitions, and writing the changes to the disk.
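
For reference, this is roughly the fdisk session I ran (assuming the SSD shows up as /dev/sda; the single-letter commands are entered at the interactive fdisk prompt):

Code:
sudo fdisk /dev/sda
# at the fdisk prompt:
#   d    delete a partition (run once per partition)
#   p    print the table to confirm nothing is left
#   w    write the changes to disk and exit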

Thanks in advance!

Code:
[EFAULT] Partition type 6a898cc3-1dd2-11b2-99a6-080020736631 not found on sda

Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 382, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 418, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1131, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1263, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool.py", line 743, in do_create
    formatted_disks = await self.middleware.call('pool.format_disks', job, disks)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1310, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1267, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/format_disks.py", line 56, in format_disks
    await asyncio_map(format_disk, disks.items(), limit=16)
  File "/usr/lib/python3/dist-packages/middlewared/utils/asyncio_.py", line 16, in asyncio_map
    return await asyncio.gather(*futures)
  File "/usr/lib/python3/dist-packages/middlewared/utils/asyncio_.py", line 13, in func
    return await real_func(arg)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/format_disks.py", line 32, in format_disk
    devname = await self.middleware.call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1310, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1278, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1182, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/disk_/disk_info_linux.py", line 98, in gptid_from_part_type
    raise CallError(f'Partition type {part_type} not found on {disk}')
middlewared.service_exception.CallError: [EFAULT] Partition type 6a898cc3-1dd2-11b2-99a6-080020736631 not found on sda
 

roysen

Dabbler
Joined
Dec 11, 2021
Messages
15
I have the same problem here, any solution?

Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 382, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 418, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1131, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1263, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool.py", line 743, in do_create
    formatted_disks = await self.middleware.call('pool.format_disks', job, disks)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1310, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1267, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/format_disks.py", line 56, in format_disks
    await asyncio_map(format_disk, disks.items(), limit=16)
  File "/usr/lib/python3/dist-packages/middlewared/utils/asyncio_.py", line 16, in asyncio_map
    return await asyncio.gather(*futures)
  File "/usr/lib/python3/dist-packages/middlewared/utils/asyncio_.py", line 13, in func
    return await real_func(arg)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/format_disks.py", line 32, in format_disk
    devname = await self.middleware.call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1310, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1278, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1182, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/disk_/disk_info_linux.py", line 98, in gptid_from_part_type
    raise CallError(f'Partition type {part_type} not found on {disk}')
middlewared.service_exception.CallError: [EFAULT] Partition type 6a898cc3-1dd2-11b2-99a6-080020736631 not found on sde
 

roysen

Dabbler
Joined
Dec 11, 2021
Messages
15
My problem seems to be that my disk was used in a RAID 5 array before. I used mdadm to stop the device so it was no longer busy.

sudo mdadm --detail /dev/mdxxx
sudo mdadm --stop /dev/mdxxx
sudo mdadm --remove /dev/mdxxx
sudo mdadm --zero-superblock /dev/sdx

After this I was able to create my pool.
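
If you want to double-check that the old RAID metadata is really gone before retrying the pool creation, something like this should show it (replace /dev/sdx with your disk; both commands are read-only checks):

Code:
# Show any leftover md superblock on the raw disk
sudo mdadm --examine /dev/sdx

# List remaining filesystem/RAID signatures without changing anything (-n = no-act)
sudo wipefs -n /dev/sdx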
 

jolness1

Dabbler
Joined
May 21, 2020
Messages
29
roysen said:
I have the same problem here, any solution?

I SSH'd in and used fdisk to delete all the partitions (I hadn't used the disk, so I don't know what was on there), then rebooted the server. After that it worked fine from the GUI. You can also create the pool manually; the OpenZFS documentation has information on how to do so.
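
For anyone curious about the manual route, here is a minimal sketch, assuming the disk is /dev/sdb and the pool is to be named tank (both placeholders). This destroys whatever is on the disk, and the OpenZFS docs cover the zpool create options in detail:

Code:
# Clear any old partition/RAID signatures from the disk (destructive)
sudo wipefs -a /dev/sdb

# Create a single-disk pool on the whole device
sudo zpool create tank /dev/sdb

# Export it so it can be imported from the TrueNAS web UI (Storage -> Import Pool)
sudo zpool export tank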
 

pyhtoncoder

Cadet
Joined
Mar 12, 2023
Messages
1
Hello TrueNAS community. I am somewhat new to a lot of this server stuff, so please forgive me if I have left out any details or used improper terms; I have included a small key below to try to bridge that gap. I have been running TrueNAS SCALE for about 4 months now. I'm on my 3rd install of TrueNAS SCALE and I love it. However, I have run into an issue that keeps repeating any time I want to add a single "used" drive as a new pool. Before everyone jumps my case: I have read up on this and am fully aware that this setup is not officially supported by iXsystems.

When I first installed TrueNAS for testing, it had only a single 500 GB hard drive in it, and things worked great. So great that I decided to install it permanently on my network. However, when installing again (same hardware and the same single drive, with the addition of two 4 TB drives), I was met with the error listed below when trying to make a pool on that same single 500 GB drive. No biggie, I didn't really need it anyway, so I moved on. I installed a 1 TB drive and was able to make the single-drive pool again with no issues. But when I deleted that pool to re-add it with proper names and permissions, I was back to square one.

Before anyone jumps my case about how unsafe this is, either: it's a cache drive, and it would be a waste of resources to make it redundant. All other multi-drive pools work fine and import after the fact just fine. I was even able to reinstall TrueNAS and import all of those pools when I had an OS drive failure last week. Only single "used" drives in a striped pool have this issue. Not to beat a dead horse, but to confirm: all drives listed below except one WERE FUNCTIONING as single drives in a striped setup at one point on my server; the issues started when I wanted to change my setup. The single drive that has never worked came from a TrueNAS Core setup (it worked at one point, but not on my system).

My understanding of this issue: the drives are retaining superblock data that tells the OS the drive is a member of a RAID array, even after a wipefs. I did try running wipefs on a separate system as well. I am aware that iXsystems does not support this configuration, but it worked once, so why does it not work a second time? If I am out of luck, I am willing to accept that and set the cache drive up as a proper redundant pool... I just don't want to, because I need about 6 TB for this cache location and, as mentioned above, it is a huge waste of resources in my case to make it redundant. I just want to know why this is even coming up. It seems preposterous to me that I dd-wiped this 8 TB drive for 19 hours and yet TrueNAS can't do anything with it. Also, food for thought: the 8 TB drive was in fact in production in a single-drive setup in my friend's server for about 2 years. With this information and my own testing I know this is possible, and it does work, although it is not supported for some pretty clear reasons. It's not just me that runs things like this once in a while, and it's not just me it works for. I am also willing to accept that my understanding of this issue is wrong. I just want to get to the bottom of this, whatever the case.

Info:

"new" = 3.5-inch HDD that has NEVER been a member of any pool in the past
"used" = 3.5-inch HDD that has been a member of a pool in the past
"separate system" = drives removed from the TrueNAS box and installed in an Ubuntu Server box, to attempt these tasks on a non-TrueNAS system

Drives tested (all intended as a cache drive in a single-drive setup; all drives are 3.5-inch SATA HDDs):
(1) 500 GB Toshiba drive: worked the first time, never again.
(2) 650 GB WD Blue sister drives: worked the first time, never again after (not in RAID, each as its own single-drive pool; I tried both, but they are the same brand and spec).
(1) 1 TB drive: again, it worked the first time but never again.
(1) 8 TB drive (from a friend who used it in a NAS in the past): NEVER WORKED.

Things I have tried to solve this issue already:
I have used the internal format utility to format the drives.

I have used gpart to remove all partition data (both on the host system and on a separate machine).

I have done dd secure wipes and quick wipes many times on all drives listed.

I have, of course, restarted (and reinstalled TrueNAS SCALE).

I have run wipefs on drives and then dd-wiped them (see the sketch after this list).

I have placed a drive in a separate server (Ubuntu Server) and used mdadm manually. I was able to create a single-drive array in every case and it functioned great in the other machine, repeatably. I then moved the drive to TrueNAS and attempted to import the pool with no luck; TrueNAS does not even see that the drive is a member of a RAID array.

I have read (hopefully all of) the content on this forum about this issue.

I have formatted a drive on a separate machine and left it with no partitions when moving it back to TrueNAS to try the import.

I have tried the solutions listed in this thread by jolness1 and roysen.

I have watched many YouTube videos on RAID, how to set it up, and common issues when trying to make new pools (I looked for TrueNAS content as well as generic mdadm content). I don't even know how many or which videos I watched; I just want to show I have been trying to help myself before coming here.

I have banged my head on the wall and let the project sit a few days to think about it. Two months, to be exact. I've tried a few times since installing TrueNAS, and I have tried every option I could think of or that was recommended to me before posting.
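
For reference, the wipe steps above boil down to something like this sketch (replace /dev/sdx with the target disk; every command here is destructive, and sgdisk comes from the gdisk/gptfdisk package, which may need to be installed):

Code:
# Remove the md RAID superblock if one is present
sudo mdadm --zero-superblock /dev/sdx

# Erase all known filesystem/RAID signatures
sudo wipefs -a /dev/sdx

# Destroy the GPT and MBR partition tables
sudo sgdisk --zap-all /dev/sdx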

System Specs:
CPU: AMD FX-8350
RAM: 16 GB
Single network interface
SanDisk SSD OS drive
Platform: Generic
Version: TrueNAS-SCALE-22.12.1 (I just upgraded; I had this issue before the update too)

Full Error message:
[EFAULT] Partition type 6a898cc3-1dd2-11b2-99a6-080020736631 not found on sda

Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 426, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 461, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1186, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1318, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool.py", line 765, in do_create
    await self.middleware.call('pool.format_disks', job, disks)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1386, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1335, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/format_disks.py", line 33, in format_disks
    await asyncio_map(format_disk, disks.items(), limit=16)
  File "/usr/lib/python3/dist-packages/middlewared/utils/asyncio_.py", line 16, in asyncio_map
    return await asyncio.gather(*futures)
  File "/usr/lib/python3/dist-packages/middlewared/utils/asyncio_.py", line 13, in func
    return await real_func(arg)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/format_disks.py", line 26, in format_disk
    devname = await self.middleware.call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1386, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1346, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1249, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/disk_/disk_info.py", line 82, in gptid_from_part_type
    raise CallError(f'Partition type {part_type} not found on {disk}')
middlewared.service_exception.CallError: [EFAULT] Partition type 6a898cc3-1dd2-11b2-99a6-080020736631 not found on sda
 

deeverse

Cadet
Joined
Dec 31, 2023
Messages
2
I have the same issue and have also followed each and every measure you describe. I additionally initialized the disk on a Mac, to no avail. I am stuck with the issue.
 

deeverse

Cadet
Joined
Dec 31, 2023
Messages
2
Reporting back: I have successfully been able to add the disk now. In my case, I was able to connect it to another TrueNAS Scale system, where I could create and export a pool without the error. Then, I was able to import the pool into the system in question. None of the fdisk/wipefs/dd and rebooting processes on record helped before.
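
In case it helps anyone else, the command-line side of that workaround looks roughly like the sketch below (the pool name vmpool is just a placeholder; on the destination TrueNAS system the import is normally done from the web UI under Storage -> Import Pool rather than from the shell):

Code:
# On the donor system, after creating the pool without errors:
sudo zpool export vmpool

# On the system in question, list pools available for import:
sudo zpool import

# The web UI import does the equivalent of:
sudo zpool import vmpool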
 

jolness1

Dabbler
Joined
May 21, 2020
Messages
29
deeverse said:
Reporting back: I have successfully been able to add the disk now. In my case, I was able to connect it to another TrueNAS Scale system, where I could create and export a pool without the error. Then, I was able to import the pool into the system in question. None of the fdisk/wipefs/dd and rebooting processes on record helped before.
I should have updated this thread: it ended up being a bad cable from my HBA. Or at least that's my guess, as a new cable solved the issue entirely and it never came back (although my board has since had its internal SATA ports fail recently, including the ones I use for a SATA DOM as a boot disk).

I am glad you were able to get it sorted out!
 

Parakeet3215

Cadet
Joined
Jan 31, 2024
Messages
1
jolness1 said:
I should have updated this thread: it ended up being a bad cable from my HBA. Or at least that's my guess, as a new cable solved the issue entirely and it never came back (although my board has since had its internal SATA ports fail recently, including the ones I use for a SATA DOM as a boot disk).

I am glad you were able to get it sorted out!
It would seem this was the problem for me too. I moved to a different SATA port on my motherboard and now I can create a pool. It's really strange, as I was able to format and use the drive on the same hardware (motherboard, cable, SATA port, etc.) using Pop!_OS, which led me to believe this was a software issue, not a hardware one.
 