Failed to wipe disk da0. Bug?

Joined
Apr 26, 2015
Messages
320
Bit confused about what is going on here.

TrueNAS-12.0-U4
Running on a 1U IBM server with 24GB of RAM.
External storage is an IBM 1746 FAStT that worked perfectly with an older version of FreeNAS, 9.3.

I created multiple 1 and 2 TB arrays on an external storage chassis.
I then created a pool on TN and started using it as shared storage for both Windows and Linux machines on the LAN.
I then came to create another pool but cannot get past this error:

[EFAULT] Failed to wipe disk da0: [EFAULT] Command gpart create -s gpt /dev/da0 failed (code 1): gpart: Invalid argument

The details show the following:

Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 403, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 973, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 666, in do_create
    formatted_disks = await self.middleware.call('pool.format_disks', job, disks)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1241, in call
    return await self._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1198, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool_/format_disks.py", line 56, in format_disks
    await asyncio_map(format_disk, disks.items(), limit=16)
  File "/usr/local/lib/python3.9/site-packages/middlewared/utils/asyncio_.py", line 16, in asyncio_map
    return await asyncio.gather(*futures)
  File "/usr/local/lib/python3.9/site-packages/middlewared/utils/asyncio_.py", line 13, in func
    return await real_func(arg)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool_/format_disks.py", line 29, in format_disk
    await self.middleware.call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1241, in call
    return await self._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1209, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1113, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/local/lib/python3.9/site-packages/middlewared/utils/io_thread_pool_executor.py", line 25, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/disk_/format.py", line 25, in format
    raise CallError(f'Failed to wipe disk {disk}: {job.error}')
middlewared.service_exception.CallError: [EFAULT] Failed to wipe disk da0: [EFAULT] Command gpart create -s gpt /dev/da0 failed (code 1):
gpart: Invalid argument
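
For reference, the same steps the middleware runs can be tried by hand from a shell, which sometimes gives a more useful error. This is just a sketch; the first two commands are read-only, and the third is the exact command from the traceback:

gpart show da0        # current partition table on the disk, if any
diskinfo -v da0       # sector size, media size, stripe size as FreeBSD sees them
gpart create -s gpt /dev/da0    # re-run the failing command outside the GUI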

What is causing this and how can I fix it?
 
Joined
Jun 2, 2019
Messages
591
Is da0 your boot pool?
 
Joined
Apr 26, 2015
Messages
320
I believe it is mfid0.

[screenshot attachment: nas-01.png]
 
Joined
Jun 2, 2019
Messages
591
Seems you're not the first. Some have solved it by formatting the drive via the CLI first.
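
If you want to try that route, the usual sequence from the shell looks something like this. A sketch only, not tested on your hardware, and it destroys everything on da0, so double-check the device name first:

gpart destroy -F da0
# Zero the start of the disk to clear any stale GPT/RAID metadata
dd if=/dev/zero of=/dev/da0 bs=1m count=10
# Zero the end as well, where the backup GPT header lives
dd if=/dev/zero of=/dev/da0 bs=1m oseek=`diskinfo da0 | awk '{print int($3 / (1024*1024)) - 10}'`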

 
Joined
Apr 26, 2015
Messages
320
It's external storage, already formatted into logical drives, so there's no option for that.
 
Joined
Apr 26, 2015
Messages
320
So I found that I cannot create pools larger than 2TB.
The thing is that the storage is already RAID5 on the external device, so I don't need additional protection from TN.
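
Since the cutoff seems size-related, it may be worth checking what capacity the LUN actually reports to the host; a mismatch between what the 1746 exports and what FreeBSD sees would be a clue. Read-only commands, assuming the new LUN is da0:

diskinfo -v da0       # media size and sector size as the OS sees them
camcontrol readcap da0 -h    # capacity straight from the SCSI READ CAPACITY reply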
 
Last edited:

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
The thing is that the storage is already RAID5 on the external device, so I don't need additional protection from TN.
You're using ZFS incorrectly and in fact are at risk of pool loss with that method...


Please consider changing your setup if you want to keep your data.
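
Roughly, the recommended layout is to put the array in JBOD/pass-through mode so each physical disk shows up individually, and let ZFS provide the redundancy itself. As a sketch only (pool and disk names here are placeholders; the GUI pool wizard does the equivalent):

# One RAIDZ2 vdev built from six raw disks, instead of a single
# hardware-RAID5 LUN presented to ZFS as one device
zpool create tank raidz2 da0 da1 da2 da3 da4 da5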
 
Joined
Apr 26, 2015
Messages
320
I looked at that post and don't really see how this is possible. The data is very well protected by the IBM storage device; there is no risk of data loss, especially since it all gets backed up to a second identical storage device.

It's not clear to me what you mean when you say I'm using ZFS incorrectly.
I created logical drives on the storage.
TN is presenting the pools just as I've created them.
TN will let me use 1TB pools but nothing more.
TN will also let me use multiple logical drives to create a pool, so long as they are all 1TB.

What is it I'm not understanding, other than the fact that TN won't allow me to use pools larger than 1TB?
What do you mean by "change your setup"? Do you mean change the hardware, as in a chassis with the drives built in that TN can control?
For years, I've used both internal and external storage with FN, up to 9.3.
 