Drives not showing assigned to pool

tensi0n

Cadet
Joined
Jan 30, 2021
Messages
5
Hello everyone,

I am new to this forum, so please forgive me for not contributing much. FreeNAS and TrueNAS have been wonderful thus far and I've never needed to come here until today. That being said, here is my issue:

I recently migrated my TrueNAS system to new hardware. The old system was a custom box built around a SuperMicro motherboard with a built-in SAS controller, with the OS running off two USB drives for redundancy. The new setup is a SuperMicro JBOD chassis connected via two SFF-8088 cables to a Dell PowerEdge R310, using an "LSI 9200-8e 6Gbps 8-lane external SAS HBA P20 IT Mode" controller. Everything moved over fine and was up and running as normal.

It wasn't until I added some older SATA drives to create a new pool that I noticed my main pool (Pool1) no longer showed its existing drives as being used by Pool1. The drives I moved over are 4TB drives, and while creating the new pool I saw that the four 4TB drives were offered as available for the new pool. That can't be right, because those drives are supposedly already assigned. I have updated TrueNAS to the most current version and run a pool scrub. I'm not sure what to do at this point.
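For reference, this is how I ran the scrub and checked the pool from the shell (Pool1 being my main pool):

Code:
root@truenas[~]# zpool scrub Pool1        # start a scrub of the main pool
root@truenas[~]# zpool status -v Pool1    # check scrub progress and pool state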

I've attached some screenshots to help make clear what I'm talking about. Thank you.
 

Attachments

  • truenas issue.JPG (100.8 KB)
  • truenas issue2.JPG (95.8 KB)

Kris Moore

SVP of Engineering
Administrator
Moderator
iXsystems
Joined
Nov 12, 2015
Messages
1,471
That does appear odd for sure. Please go ahead and make a ticket on jira.ixsystems.com and attach these screenshots, as well as a debug file (system -> advanced -> save debug). We'll need to investigate what's going on here and see if this is a legit bug or not.
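If the web UI gives you trouble, the debug can also be generated from a shell session; on CORE the freenas-debug script should do the same job (I believe the output is collected under /var/tmp/fndebug):

Code:
root@truenas[~]# freenas-debug -A    # -A gathers diagnostics from every module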
 

Electr0

Dabbler
Joined
Dec 18, 2020
Messages
47
@tensi0n - Did you ever file that bug report?

@Kris Moore - Did anyone else report this issue?

What happened with this issue?
I've run into it myself...

Screen Shot 2021-08-09 at 20.35.38.png


I too can add the three missing disks to a new pool in the pool manager, as shown in the "truenas issue2.jpg" screenshot.

I've attached the output of geom disk list and zpool status -v.

As you can see, ada0 and ada1 are actually SSDs, not HDDs.

Also, ada2, ada3 & ada4 don't actually "exist"; they should be da0-da5.
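For reference, the kernel itself only enumerates the real devices; it should report something like the following, while the phantom ada2-ada4 entries live only in the middleware's view:

Code:
root@truenas[~]# sysctl kern.disks
kern.disks: da5 da4 da3 da2 da1 da0 ada1 ada0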

The only thing I can think of that may have caused it:
1. I did a fresh install of TrueNAS
2. Set up the SSDs as a mirrored boot pool (ada0p2 and ada1p2)
3. Used the remaining space (ada0p3 and ada1p3) as a mirrored storage pool
4. Restored a config file from a different hardware setup, from which I just moved the 6 HDDs

My thought is that something from the previous setup/config got imported and overwrote the correct (new) config.
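If that theory is right, the stale entries should be visible in the middleware's config database. A rough way to look (assuming the table and column names are still storage_disk, disk_identifier, disk_name and disk_serial; I haven't verified them against this release):

Code:
root@truenas[~]# sqlite3 /data/freenas-v1.db \
    "SELECT disk_identifier, disk_name, disk_serial FROM storage_disk;"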

Any idea what could be wrong?

-------------------

EDIT:

I just added the output of both the midclt call disk.query and midclt call -job disk.sync_all commands.
When I run the second command, it fails and spews some errors from the middleware Python package.
I found these two commands in this bug report from last year that details a similar issue:
TrueNAS | NAS-105996 | UI: Storage > Disks: missing disk - but correctly reported in pool status

Possible related errors:

I've also attached my debug file.

-----------------

geom disk list
Code:
root@truenas[~]# geom disk list
Geom name: ada0
Providers:
1. Name: ada0
   Mediasize: 250059350016 (233G)
   Sectorsize: 512
   Mode: r2w2e5
   descr: Samsung SSD 860 EVO 250GB
   lunid: 5002538e4969cc67
   ident: S3YJNX1M608874M
   rotationrate: 0
   fwsectors: 63
   fwheads: 16

Geom name: ada1
Providers:
1. Name: ada1
   Mediasize: 250059350016 (233G)
   Sectorsize: 512
   Mode: r2w2e5
   descr: Samsung SSD 860 EVO 250GB
   lunid: 5002538e497953e1
   ident: S3YJNX0M741744E
   rotationrate: 0
   fwsectors: 63
   fwheads: 16

Geom name: da0
Providers:
1. Name: da0
   Mediasize: 6001175126016 (5.5T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r2w2e5
   descr: ATA ST6000VN0033-2EE
   lunid: 5000c500c2e8096c
   ident: ZAD98P31
   rotationrate: 7200
   fwsectors: 63
   fwheads: 255

Geom name: da1
Providers:
1. Name: da1
   Mediasize: 6001175126016 (5.5T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r2w2e5
   descr: ATA ST6000VN0033-2EE
   lunid: 5000c500c2e7d37c
   ident: ZAD98PRK
   rotationrate: 7200
   fwsectors: 63
   fwheads: 255

Geom name: da2
Providers:
1. Name: da2
   Mediasize: 6001175126016 (5.5T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r2w2e5
   descr: ATA ST6000VN0033-2EE
   lunid: 5000c500c29b374f
   ident: ZAD93KHP
   rotationrate: 7200
   fwsectors: 63
   fwheads: 255

Geom name: da3
Providers:
1. Name: da3
   Mediasize: 6001175126016 (5.5T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r2w2e5
   descr: ATA ST6000VN0033-2EE
   lunid: 5000c500c2db0ef9
   ident: ZAD97RRQ
   rotationrate: 7200
   fwsectors: 63
   fwheads: 255

Geom name: da4
Providers:
1. Name: da4
   Mediasize: 6001175126016 (5.5T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r2w2e5
   descr: ATA ST6000VN0033-2EE
   lunid: 5000c500c2d6065e
   ident: ZAD97WY1
   rotationrate: 7200
   fwsectors: 63
   fwheads: 255

Geom name: da5
Providers:
1. Name: da5
   Mediasize: 6001175126016 (5.5T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r2w2e5
   descr: ATA ST6000VN0033-2EE
   lunid: 5000c500c2e7a0e4
   ident: ZAD98JVD
   rotationrate: 7200
   fwsectors: 63
   fwheads: 255


zpool status -v
Code:
root@truenas[~]# zpool status -v
  pool: boot-pool
state: ONLINE
  scan: resilvered 1.14M in 00:00:08 with 0 errors on Mon Aug  9 10:47:13 2021
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0p2  ONLINE       0     0     0
            ada1p2  ONLINE       0     0     0

errors: No known data errors

  pool: ssd
state: ONLINE
config:

        NAME                                            STATE     READ WRITE CKSUM
        ssd                                             ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/6ddbf6b4-f8ef-11eb-afa2-002590b5331b  ONLINE       0     0     0
            gptid/7016c503-f8ef-11eb-afa2-002590b5331b  ONLINE       0     0     0

errors: No known data errors

  pool: tank
state: ONLINE
  scan: scrub repaired 0B in 12:24:29 with 0 errors on Mon Aug  9 09:49:41 2021
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank                                            ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/00ece65b-e3fc-11eb-aa57-d485646a6b67  ONLINE       0     0     0
            gptid/0119ab6e-e3fc-11eb-aa57-d485646a6b67  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/0157c9f9-e3fc-11eb-aa57-d485646a6b67  ONLINE       0     0     0
            gptid/016508f7-e3fc-11eb-aa57-d485646a6b67  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0


midclt call disk.query
Code:
root@truenas[~]# midclt call disk.query
[{"identifier": "{serial_lunid}ZAD97RRQ_5000c500c2db0ef9", "name": "da3", "subsystem": "da", "number": 3, "serial": "ZAD97RRQ", "size": 6001175126016, "multipath_name": "", "multipath_member": "", "description": "", "transfermode": "Auto","hddstandby": "ALWAYS ON", "hddstandby_force": false, "advpowermgmt": "DISABLED", "acousticlevel": "DISABLED", "togglesmart": true, "smartoptions": "", "expiretime": null, "critical": null, "difference": null, "informational": null, "model": "ATA ST6000VN0033-2EE", "rotationrate": 7200, "type": "HDD", "zfs_guid": "8450599037489905515", "devname": "da3", "enclosure": null, "pool": null}, {"identifier": "{serial_lunid}ZAD98PRK_5000c500c2e7d37c", "name": "ada4", "subsystem": "ada", "number": 4, "serial": "ZAD98PRK", "size": 6001175126016, "multipath_name":"", "multipath_member": "", "description": "", "transfermode": "Auto", "hddstandby": "ALWAYS ON", "hddstandby_force": false, "advpowermgmt": "DISABLED", "acousticlevel": "DISABLED", "togglesmart": true, "smartoptions": "", "expiretime": null, "critical": null, "difference": null, "informational": null, "model": "ST6000VN0033-2EE110", "rotationrate": 7200, "type": "HDD", "zfs_guid": null, "devname": "ada4", "enclosure": null, "pool": null}, {"identifier": "{serial_lunid}ZAD93KHP_5000c500c29b374f", "name": "ada3", "subsystem": "ada", "number": 3, "serial": "ZAD93KHP", "size": 6001175126016, "multipath_name": "", "multipath_member": "", "description": "", "transfermode": "Auto", "hddstandby": "ALWAYS ON", "hddstandby_force": false, "advpowermgmt": "DISABLED", "acousticlevel": "DISABLED", "togglesmart": true, "smartoptions": "", "expiretime": null, "critical": null, "difference": null, "informational": null, "model": "ST6000VN0033-2EE110", "rotationrate": 7200, "type": "HDD", "zfs_guid": null, "devname": "ada3", "enclosure": null, "pool": null}, {"identifier": "{serial_lunid}ZAD98P31_5000c500c2e8096c", "name": "ada2", "subsystem": "ada", "number": 2, "serial": "ZAD98P31", "size": 6001175126016, "multipath_name": "", "multipath_member": "", "description": "", "transfermode": "Auto", "hddstandby": "ALWAYS ON", "hddstandby_force": false, "advpowermgmt": "DISABLED", "acousticlevel": "DISABLED", "togglesmart": true, "smartoptions": "", "expiretime": null, "critical": null, "difference": null, "informational": null, "model": "ST6000VN0033-2EE110", "rotationrate": 7200, "type": "HDD", "zfs_guid": null, "devname": "ada2", "enclosure": null, "pool": null}, {"identifier": "{serial_lunid}ZAD98JVD_5000c500c2e7a0e4", "name": "ada1", "subsystem": "ada", "number": 1, "serial": "ZAD98JVD", "size": 6001175126016, "multipath_name": "", "multipath_member": "", "description": "", "transfermode": "Auto", "hddstandby": "ALWAYS ON", "hddstandby_force": false, "advpowermgmt": "DISABLED", "acousticlevel": "DISABLED", "togglesmart": true, "smartoptions": "", "expiretime": null, "critical": null, "difference": null, "informational": null, "model": "ST6000VN0033-2EE110", "rotationrate": 7200, "type": "HDD", "zfs_guid": "1925182301347828445", "devname": "ada1", "enclosure": null, "pool": null}, {"identifier": "{serial_lunid}ZAD97WY1_5000c500c2d6065e", "name": "ada0", "subsystem": "ada", "number": 0, "serial": "ZAD97WY1", "size": 6001175126016, "multipath_name": "", "multipath_member": "", "description": "", "transfermode": "Auto", "hddstandby": "ALWAYS ON", "hddstandby_force": false, "advpowermgmt": "DISABLED", "acousticlevel": "DISABLED", "togglesmart": true, "smartoptions": "", "expiretime": null, "critical": null, "difference": null, 
"informational": null, "model": "ST6000VN0033-2EE110", "rotationrate": 7200, "type": "HDD", "zfs_guid": "12915272379188220704", "devname": "ada0", "enclosure": null, "pool":  null}] 


midclt call -job disk.sync_all
Code:
root@truenas[~]# midclt call -job disk.sync_all
Status: (none)
Total Progress: [________________________________________] 0.00%
expected string or bytes-like object
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 403, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 973, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/disk_/sync.py", line 151, in sync_all
    await self.middleware.call('enclosure.sync_disk', disk['disk_identifier'])
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1248, in call
    return await self._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1216, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1120,in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/local/lib/python3.9/site-packages/middlewared/utils/io_thread_pool_executor.py", line 25, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/middlewared_truenas/plugins/enclosure.py", line 217, in sync_disk
    enclosure, element = self._get_slot_for_disk(disk["name"])
  File "/usr/local/lib/middlewared_truenas/plugins/enclosure.py", line 164, in _get_slot_for_disk
    return self._get_slot(lambda element: element["data"]["Device"] == disk)
  File "/usr/local/lib/middlewared_truenas/plugins/enclosure.py", line 152, in _get_slot
    for enclosure in self.middleware.call_sync("enclosure.query", enclosure_query or []):
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1275,in call_sync
    return methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 977, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/middlewared_truenas/plugins/enclosure.py", line 70, in query
    for enc in self.__get_enclosures():
  File "/usr/local/lib/middlewared_truenas/plugins/enclosure.py", line 342, in __get_enclosures
    return Enclosures(self.middleware.call_sync("enclosure.get_ses_enclosures"),
  File "/usr/local/lib/middlewared_truenas/plugins/enclosure.py", line 362, in __init__
    enclosure = Enclosure(num, data, stat, system_info)
  File "/usr/local/lib/middlewared_truenas/plugins/enclosure.py", line 413, in __init__
    self._parse(data)
  File "/usr/local/lib/middlewared_truenas/plugins/enclosure.py", line 417, in _parse
    self._parse_freebsd(data)
  File "/usr/local/lib/middlewared_truenas/plugins/enclosure.py", line 436, in _parse_freebsd
    self._set_model(data)
  File "/usr/local/lib/middlewared_truenas/plugins/enclosure.py", line 537, in _set_model
    elif MINI_REGEX.match(self.system_info["system_product"]):
TypeError: expected string or bytes-like object
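Reading the last frame, MINI_REGEX.match() is handed self.system_info["system_product"], so my guess (unverified) is that the product string comes back as None on non-iX hardware. What SMBIOS actually reports can be checked from the loader environment:

Code:
root@truenas[~]# kenv smbios.system.product    # errors out if the loader never set it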
 

Attachments

  • debug-truenas-20210810120231.tgz (1.1 MB)

Electr0

Dabbler
Joined
Dec 18, 2020
Messages
47
Following up on this, I found the ticket that (I'm assuming) @tensi0n lodged on jira.ixsystems.com:
TrueNAS | NAS-109184 | Drives not showing assigned to pool

According to the only comment on the ticket:
your da4-da7 drives have the same serial numbers as da0-da3 drives, your system is seeing the same drives twice, probably due to a wiring issue. We support multipaths, but we don't create multipaths from drives that are already used in a pool. Please consult your controller manual in order to disable this.

I've double-checked and cross-referenced the serial numbers of the drives, but they all appear (as far as I can tell) to be reported as unique in TrueNAS.
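For anyone who wants to repeat that check, the serials can be pulled straight out of geom; this prints any serial that appears more than once, and on my system it prints nothing:

Code:
root@truenas[~]# geom disk list | awk '$1 == "ident:" {print $2}' | sort | uniq -d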

Additionally, da3 is the only disk reported in the "Reporting / Disk" section. It is also the only disk that is reported correctly in the "Storage / Disks" section (screenshot attached).
 

Attachments

  • Screen Shot 2021-08-10 at 15.56.45.png (337.8 KB)

Electr0

Dabbler
Joined
Dec 18, 2020
Messages
47
For anyone who stumbles upon this thread in the future...

I created a ticket on jira.ixsystems.com:
TrueNAS | NAS-111776 | Middleware Plugin Python Error & Drives Not Assigned to Pool in GUI

According to the assignee:
The traceback that you're experiencing has already been resolved in our next release. It's a minor hot-fix release and will be 12.0-U5.1. As of today, it should be out in the next week or so.

He also provided me with a copy of the enclosure.py file, which I was able to place on my system.
After restarting the middleware daemon (and performing a full reboot), the issue was resolved completely.
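For completeness, the steps were roughly: drop the patched file over /usr/local/lib/middlewared_truenas/plugins/enclosure.py (the path from the traceback above), restart the middleware, and re-run the sync:

Code:
root@truenas[~]# service middlewared restart     # reload the patched plugin
root@truenas[~]# midclt call -job disk.sync_all  # now completes without the traceback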
 

mt008

Cadet
Joined
May 3, 2022
Messages
2
Hello, I am new to TrueNAS. I imported a ZFS pool from XigmaNAS, and the pool imported fine along with my cache SSD, but the drives are not showing up as assigned to the pool, so they are still available to select when creating a new pool, etc.

I found this thread, and it speaks to an issue from a previous version of TrueNAS.

I have run midclt call -job disk.sync_all in the shell, and it executes without error.
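In case the details help, this is how I've been comparing what ZFS reports against the GPT labels; as far as I understand, zpool status shows the member gptids and glabel status maps each gptid back to a device:

Code:
root@truenas[~]# zpool status -v    # pool members listed by gptid (or device name)
root@truenas[~]# glabel status      # maps each gptid label to its da/ada device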

Any help would be greatly appreciated.

Thanks.

Version:
TrueNAS-12.0-U8.1



2022-05-04 06_38_16-TrueNAS - 10.10.99.29.png


2022-05-04 06_38_57-TrueNAS - 10.10.99.29.png
 