Error creating pool

Joined: Apr 26, 2015 · Messages: 320
I've been trying to use a FreeNAS setup I installed some time back, but every time I come back to it I run into problems setting up pools. The storage is an external IBM array attached via Fibre Channel (FC), and FreeNAS does see it. I'm including some images to show how far I'm getting.

Did something go wrong when I built the system, so that I need to rebuild, or is it something else? I'm at a loss with this newer (to me) version.
I've been using 9.3 for years without a single issue.

Can anyone give me some leads on what to look into to get this system working?

Overview
Platform: Generic
Version: FreeNAS-11.3-U3.2
HostName: nas02.home.loc
Uptime: 2:12PM up 124 days, 20:38, 0 users

(Screenshots attached: 2020-12-11_140807.png, 2020-12-11_140840.png, 2020-12-11_140935.png)

Code:
Error: Traceback (most recent call last):

  File "/usr/local/lib/python3.7/site-packages/tastypie/resources.py", line 219, in wrapper
    response = callback(request, *args, **kwargs)

  File "./freenasUI/api/resources.py", line 1421, in dispatch_list
    request, **kwargs

  File "/usr/local/lib/python3.7/site-packages/tastypie/resources.py", line 450, in dispatch_list
    return self.dispatch('list', request, **kwargs)

  File "./freenasUI/api/utils.py", line 252, in dispatch
    request_type, request, *args, **kwargs

  File "/usr/local/lib/python3.7/site-packages/tastypie/resources.py", line 482, in dispatch
    response = method(request, **kwargs)

  File "/usr/local/lib/python3.7/site-packages/tastypie/resources.py", line 1384, in post_list
    updated_bundle = self.obj_create(bundle, **self.remove_api_resource_names(kwargs))

  File "/usr/local/lib/python3.7/site-packages/tastypie/resources.py", line 2175, in obj_create
    return self.save(bundle)

  File "./freenasUI/api/utils.py", line 493, in save
    form.save()

  File "./freenasUI/storage/forms.py", line 282, in save
    return False

  File "./freenasUI/storage/forms.py", line 279, in save
    }, job=True)

  File "/usr/local/lib/python3.7/site-packages/middlewared/client/client.py", line 399, in call
    return jobobj.result()

  File "/usr/local/lib/python3.7/site-packages/middlewared/client/client.py", line 172, in result
    raise ClientException(job['error'], trace={'formatted': job['exception']})

middlewared.client.client.ClientException: [EFAULT] Failed to wipe disk da0: [EFAULT] Command gpart create -s gpt /dev/da0 failed (code 1):
gpart: Invalid argument
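
For what it's worth, here's roughly what I plan to try by hand next to see the raw failure (da0 is taken from the traceback above; these are all stock FreeBSD commands):

Code:
# the exact command the middleware ran, minus the wrapper
gpart create -s gpt da0

# sector size, media size, and ident for the device
diskinfo -v da0

# any kernel complaints from the failed attempt
dmesg | tail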
 
Joined: Apr 26, 2015 · Messages: 320
I then upgraded to 12.0-U1 and got a similar error.

Code:
Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/middlewared/job.py", line 361, in run
    await self.future
  File "/usr/local/lib/python3.8/site-packages/middlewared/job.py", line 397, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 973, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool.py", line 655, in do_create
    formatted_disks = await self.middleware.call('pool.format_disks', job, disks)
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1238, in call
    return await self._call(
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1195, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool_/format_disks.py", line 56, in format_disks
    await asyncio_map(format_disk, disks.items(), limit=16)
  File "/usr/local/lib/python3.8/site-packages/middlewared/utils/asyncio_.py", line 16, in asyncio_map
    return await asyncio.gather(*futures)
  File "/usr/local/lib/python3.8/site-packages/middlewared/utils/asyncio_.py", line 13, in func
    return await real_func(arg)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool_/format_disks.py", line 29, in format_disk
    await self.middleware.call(
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1238, in call
    return await self._call(
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1206, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1110, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/local/lib/python3.8/site-packages/middlewared/utils/io_thread_pool_executor.py", line 25, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/disk_/format.py", line 25, in format
    raise CallError(f'Failed to wipe disk {disk}: {job.error}')
middlewared.service_exception.CallError: [EFAULT] Failed to wipe disk da0: [EFAULT] Command gpart create -s gpt /dev/da0 failed (code 1):
gpart: Invalid argument
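
In case it's stale metadata left over from a previous life of these LUNs, a manual wipe would look something like this (destructive, obviously; da0 again taken from the traceback):

Code:
# force-destroy any existing partition table
gpart destroy -F da0

# zero the first MB; the GPT backup header lives at the end of the disk,
# so the last MB would need zeroing too (offset computed from diskinfo's
# mediasize), before retrying the create
dd if=/dev/zero of=/dev/da0 bs=1m count=1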
 

Chris Moore · Hall of Famer · Joined: May 2, 2015 · Messages: 10,080
Why are your serial numbers all the same? That won't work.
 
Joined: Apr 26, 2015 · Messages: 320
Hi Chris,

I've no idea; it's not something I've done manually or intentionally.
Is it something I can view or edit from the CLI?

Code:
# ctladm port -l
Port Online Frontend Name pp vp
3    YES    camtgt   isp0 0  0  naa.21000024ff25f3b8
4    YES    camtgt   isp1 0  0  naa.21000024ff25f3b9

I see there is an edit option for the disks. Should I manually make each serial unique?
Edit: nope, it won't let me edit that field.
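
For anyone following along, this is how I'd expect to read the serials from the CLI; camcontrol(8) and geom(8) are standard tools, and da0 is just the first FC disk here:

Code:
# serial number as CAM reports it (repeat for each da device)
camcontrol inquiry da0 -S

# GEOM's view; 'ident' should be the serial
geom disk list | grep -E 'Geom name|ident'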
 
Joined: Apr 26, 2015 · Messages: 320
Anyone?
 

Chris Moore · Hall of Famer · Joined: May 2, 2015 · Messages: 10,080
Will you share the output of the command zpool status -v?

It should look something like this:

Code:
  pool: Emily
 state: ONLINE
  scan: scrub repaired 0B in 06:21:38 with 0 errors on Tue Dec 15 06:21:40 2020
config:

        NAME                                            STATE     READ WRITE CKSUM
        Emily                                           ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/af7c42c6-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b07bc723-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b1893397-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b2bfc678-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b3c1849e-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b4d16ad2-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/acb0b918-ba5d-11e9-b6dd-00074306773b  ONLINE       0     0     0
            gptid/85d8ab3b-e442-11ea-99b6-00074306773b  ONLINE       0     0     0
            gptid/d1ea0d87-ba96-11e9-b6dd-00074306773b  ONLINE       0     0     0
            gptid/b9de3232-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/baf4aba8-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/bbf26621-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
        logs
          gptid/ae487c50-bec3-11e8-b1c8-0cc47a9cd5a4    ONLINE       0     0     0
        cache
          gptid/ae52d59d-bec3-11e8-b1c8-0cc47a9cd5a4    ONLINE       0     0     0

errors: No known data errors

 
Joined: Apr 26, 2015 · Messages: 320
Hi, sure:

Code:
# zpool status -v
  pool: freenas-boot
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done, the
        pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 00:00:39 with 0 errors on Tue Dec 15 03:45:39 2020
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          mfid0p2     ONLINE       0     0     0

errors: No known data errors


I have not been able to create a pool, of course, but I think you know that. I see the mention of zpool upgrade?
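
(For my own notes: if I'm reading the status message right, the upgrade would be something like the below. I haven't run it, since feature upgrades are one-way and this is the boot pool, where the boot loader also has to support whatever gets enabled.)

Code:
# with no arguments, lists pools whose features are not all enabled
zpool upgrade

# would enable all supported features on the boot pool -- one-way, so held off
# zpool upgrade freenas-boot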
 

Chris Moore · Hall of Famer · Joined: May 2, 2015 · Messages: 10,080
I have not been able to create a pool, of course, but I think you know that. I see the mention of zpool upgrade?
Sorry, it looks like I replied to the wrong post. I am not sure why, but the serial numbers need to be unique. I had another person on the forum almost two years ago who had a bunch of drives whose serial numbers were all zero. That doesn't work either; ZFS doesn't tolerate it.
You have to figure out why the serial numbers are all the same. It must be something about that IBM external via FC ...
 
Joined: Apr 26, 2015 · Messages: 320
I guess that's what is odd: my old 9.3 version works just fine with all the same hardware. It seems like the devs changed things so that we can't use FC, so as not to compete with their commercial side. That's just an opinion, but it's what I suspect, since they don't want to support FC anymore.
 

Chris Moore · Hall of Famer · Joined: May 2, 2015 · Messages: 10,080
Having unique serial numbers on the drives appears to be a requirement independent of the FC question. If I recall correctly, the user who had a quantity of SATA hard drives with zero serial numbers (I don't know why they were zero) had attempted to use both a SATA controller and a SAS controller, and was not able to use more than one of those drives in the same pool. He sent me two of the drives for testing, and I believe I even tried using them on Linux with no success. When or how that "feature" came to be, I can't say, but it has been part of the landscape of ZFS on both Linux and FreeBSD for at least two years.
As for the FC portion of the question, I don't think iXsystems is using FC with their TrueNAS installations. I don't work for them, but I have spoken with their sales people about implementing one of their solutions where I work. I am not saying the technology is deprecated, just that I don't think they use it.
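
If anyone wants to repeat that Linux test, the serials are easy to check there too; something like this should do it (lsblk column names from memory, so double-check the man page):

Code:
# serial and WWN per block device
lsblk -o NAME,SERIAL,WWN

# by-id links are built from model+serial, so duplicate serials collide here
ls -l /dev/disk/by-id/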
 

Chris Moore · Hall of Famer · Joined: May 2, 2015 · Messages: 10,080
I went looking and found about as many articles saying Fibre Channel is dead as I found saying it is alive and growing.
Hopefully someone can add to this story.
 
Joined: Apr 26, 2015 · Messages: 320
It's such an amazing technology, easy to grow, network, etc.
I saw that others have gotten FC to work, and I always thought maybe it stopped being supported for commercial reasons. I'm not saying that's the case, just that it's what I felt might be the case. I don't know anything about the company, only that I have enjoyed using FN and would enjoy continuing with it.

I appreciate your help. Maybe I have no choice but to stick with the old 9.3 for FC-based storage.
 

Chris Moore · Hall of Famer · Joined: May 2, 2015 · Messages: 10,080
It's such an amazing technology, easy to grow, network, etc.
I found this thread that I thought might be helpful:

 