Disk upgrade went bad: duplicated pool?

jpoudet

Dabbler
Joined
Jan 4, 2020
Messages
10
Hi,
I was upgrading a RAID-Z1 pool of 3x 2 TB disks when ... I don't know exactly what happened, but it went bad. Probably a power issue with the disks. I had to reboot.
Then, FreeNAS reported the pool as FAILED.
I plugged the old disk back in.
I tried a zpool import -fF, which apparently fixed the pool: it came back online, no errors, scrubbed OK, etc.

But I still can't access the content. I see the structure in the GUI (with jails, iocage, and the main dataset for the data), but nothing that relies on it works, as if some pointers are off: jails not booting, SMB access to the data impossible, etc.

Did I miss something?

Also, strangely, I saw during the boot sequence something about not being able to import "POOL" because it was already imported. Could it be that the ZFS pool got split into two versions during the upgrade (one version created while the disk was disconnected, and a different one once the disk was put back)?

What can I try?
What do you need from me to diagnose?

Thanks!
I'm in trouble...
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
I... do not think that the import -fF was a good idea. ZFS not importing a pool is usually for a really good reason, and forcing past that should generally only be done when recommended by someone who knows the system.
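For future reference, there's a dry-run form that reports whether a forced rewind/recovery would even work, without actually changing anything; something along these lines (just a sketch, pool name from your setup):
Code:
# check whether the pool could be recovered with -F, without actually importing it
zpool import -F -n POOL
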
The first question is "do you have a backup?", but it sounds like the answer is probably "no".
The second question is... did you read the forum rules? Many of the people who *could* help won't get involved if you didn't do what they view as the minimum effort, and the content of your post, particularly some of the things that are not present, suggests that you probably didn't.
The third question is: do you have spare drives that you can try replicating the pool to, so you have a backup before you do anything else? If not, you may want to acquire some, as someone who tries to help may need them.
Unfortunately, it definitely sounds like you need one of those gurus familiar with the fairly threadbare world of ZFS recovery, so I would recommend you try to make your post as compliant as you can.
(Note that RAIDZ1 is highly recommended against unless you know what you are doing, understand your risks, and have a backup or are working with data you don't care about.)
 

jpoudet

Dabbler
Joined
Jan 4, 2020
Messages
10
First post, sorry. I'm kind of stressed by the situation, and wanted to keep my question focused. That's why I asked "what do you need from me to diagnose?"

No, no backup. I have a lot on my plate in my life right now, I trusted ZFS, and I took the risk. And I failed. Thank you...

So, let's try with the system first:
Supermicro X10SDV-4C-TLN2F, 32 GB ECC RAM, NVMe boot disk.
ESXi 6.7 U3 on top, with the Lynx Point controller passed through to the FreeNAS VM, which has 12 GB RAM and 4 vCPUs.
FreeNAS-11.2-U7
Two pools, POOL and POOL2. POOL is the affected one; POOL2 is OK.

POOL was originally made of 3x Seagate NAS 2 TB HDDs. I had already migrated one (ada6) to a 10 TB WD successfully. When I tried to insert the second 10 TB drive to replace ada5, the problem occurred. So the first (removed) disk is probably out of sync with the other two.

Did I miss anything about the system?

BTW, thank you for lecturing me on the proper usage of the forum rules, but maybe you could start by pointing me to what is required?

I'll dump what I think can help, but... I'm not even sure where to start.
I followed the recommendations here: https://docs.oracle.com/cd/E23823_01/html/819-5461/gbbwl.html#gbctt

Please, if those gurus can help, I'd be very thankful.
 

jpoudet

Dabbler
Joined
Jan 4, 2020
Messages
10
Code:
root@freenas11:~ # zpool status -v
  pool: POOL
 state: ONLINE
  scan: scrub in progress since Fri Jan  3 22:58:56 2020
        2.34T scanned at 739M/s, 1.44T issued at 453M/s, 4.96T total
        0 repaired, 28.94% done, 0 days 02:16:03 to go
config:

        NAME                                            STATE     READ WRITE CKSUM
        POOL                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/e574a6f0-7fb0-11e7-9d4e-000c296065e2  ONLINE       0     0     0
            gptid/e629996e-7fb0-11e7-9d4e-000c296065e2  ONLINE       0     0     0
            gptid/4b1ada95-2e40-11ea-9d12-000c296065e2  ONLINE       0     0     0

errors: No known data errors


Scrub finished without errors.

Code:
zfs list
NAME                                                             USED  AVAIL  REFER  MOUNTPOINT
POOL                                                            3.31T   207G   128K  /POOL
POOL/.bhyve_containers                                          70.3M   207G  70.3M  /POOL/.bhyve_containers
POOL/.system                                                     629M   207G   980K  legacy
POOL/.system/configs-66311c036e824820af44b2dbf4c55f10            117M   207G   116M  legacy
POOL/.system/cores                                              29.4M   207G  4.48M  legacy
POOL/.system/rrd-66311c036e824820af44b2dbf4c55f10                431M   207G  96.9M  legacy
POOL/.system/samba4                                             3.22M   207G  1.07M  legacy
POOL/.system/syslog-66311c036e824820af44b2dbf4c55f10            46.4M   207G  40.2M  legacy
POOL/.system/webui                                               117K   207G   117K  legacy
POOL/esxi_nfs                                                    117K   207G   117K  /POOL/esxi_nfs
POOL/iocage                                                     12.4G   207G  4.16M  /iocage
POOL/iocage/download                                             532M   207G   117K  /iocage/download
POOL/iocage/download/11.1-RELEASE                                260M   207G   260M  /iocage/download/11.1-RELEASE
POOL/iocage/download/11.2-RELEASE                                272M   207G   272M  /iocage/download/11.2-RELEASE
POOL/iocage/images                                               117K   207G   117K  /iocage/images
POOL/iocage/jails                                               9.67G   207G   117K  /iocage/jails
POOL/iocage/jails/jackett                                        787M   207G   128K  /iocage/jails/jackett
POOL/iocage/jails/jackett/root                                   786M   207G  1.69G  /iocage/jails/jackett/root
POOL/iocage/jails/radarr                                        2.73G   207G   464K  /iocage/jails/radarr
POOL/iocage/jails/radarr/root                                   2.73G   207G  3.11G  /iocage/jails/radarr/root
POOL/iocage/jails/transmission                                  6.17G   207G   240K  /iocage/jails/transmission
POOL/iocage/jails/transmission/root                             6.17G   207G  6.87G  /iocage/jails/transmission/root
POOL/iocage/log                                                  149K   207G   149K  /iocage/log
POOL/iocage/releases                                            2.22G   207G   117K  /iocage/releases
POOL/iocage/releases/11.1-RELEASE                               1.10G   207G   117K  /iocage/releases/11.1-RELEASE
POOL/iocage/releases/11.1-RELEASE/root                          1.10G   207G  1.10G  /iocage/releases/11.1-RELEASE/root
POOL/iocage/releases/11.2-RELEASE                               1.12G   207G   117K  /iocage/releases/11.2-RELEASE
POOL/iocage/releases/11.2-RELEASE/root                          1.12G   207G  1.11G  /iocage/releases/11.2-RELEASE/root
POOL/iocage/templates                                            117K   207G   117K  /iocage/templates
POOL/jails                                                      19.3G   207G   234K  /POOL/jails
POOL/jails/.warden-template-pluginjail-11.0-x64                  592M   207G   592M  /POOL/jails/.warden-template-pluginjail-11.0-x64
POOL/jails/.warden-template-pluginjail-11.0-x64-20180129100457   592M   207G   592M  /POOL/jails/.warden-template-pluginjail-11.0-x64-20180129100457
POOL/jails/.warden-template-standard-11.0-x64                   2.18G   207G  2.18G  /POOL/jails/.warden-template-standard-11.0-x64
POOL/jails/.warden-template-standard-11.0-x64-20180319163448    2.18G   207G  2.18G  /POOL/jails/.warden-template-standard-11.0-x64-20180319163448
POOL/jails/.warden-template-standard-11.0-x64-20181030170244    2.17G   207G  2.17G  /POOL/jails/.warden-template-standard-11.0-x64-20181030170244
POOL/jails/Jackett                                               903M   207G  2.78G  /POOL/jails/Jackett
POOL/jails/Sonarr                                               8.74G   207G  10.7G  /POOL/jails/Sonarr
POOL/jails/headphones_2                                         2.03G   207G  2.60G  /POOL/jails/headphones_2
POOL/main_smb                                                   3.27T   207G  3.27T  /POOL/main_smb


Code:
gpart show -l
=>      40  33554352  ada0  GPT  (16G)
        40      1024     1  (null)  (512K)
      1064  33553320     2  (null)  (16G)
  33554384         8        - free -  (4.0K)

=>         40  11721045088  ada1  GPT  (5.5T)
           40           88        - free -  (44K)
          128      4194304     1  (null)  (2.0G)
      4194432  11716850688     2  (null)  (5.5T)
  11721045120            8        - free -  (4.0K)

=>         40  11721045088  ada2  GPT  (5.5T)
           40           88        - free -  (44K)
          128      4194304     1  (null)  (2.0G)
      4194432  11716850688     2  (null)  (5.5T)
  11721045120            8        - free -  (4.0K)

=>         40  11721045088  ada3  GPT  (5.5T)
           40           88        - free -  (44K)
          128      4194304     1  (null)  (2.0G)
      4194432  11716850688     2  (null)  (5.5T)
  11721045120            8        - free -  (4.0K)

=>        40  3907029088  ada4  GPT  (1.8T)
          40          88        - free -  (44K)
         128     4194304     1  (null)  (2.0G)
     4194432  3902834688     2  (null)  (1.8T)
  3907029120           8        - free -  (4.0K)

=>        40  3907029088  ada5  GPT  (1.8T)
          40          88        - free -  (44K)
         128     4194304     1  (null)  (2.0G)
     4194432  3902834688     2  (null)  (1.8T)
  3907029120           8        - free -  (4.0K)

=>         40  19532873648  ada6  GPT  (9.1T)
           40           88        - free -  (44K)
          128      2097152     1  (null)  (1.0G)
      2097280  19530776408     2  (null)  (9.1T)
 

jpoudet

Dabbler
Joined
Jan 4, 2020
Messages
10
Code:
zdb -C
freenas-boot:
    version: 5000
    name: 'freenas-boot'
    state: 0
    txg: 14800033
    pool_guid: 7446991895193817932
    hostname: ''
    com.delphix:has_per_vdev_zaps
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 7446991895193817932
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 7804841523949619088
            path: '/dev/ada0p2'
            whole_disk: 1
            metaslab_array: 33
            metaslab_shift: 27
            ashift: 9
            asize: 17174364160
            is_log: 0
            DTL: 145
            create_txg: 4
            com.delphix:vdev_zap_leaf: 31
            com.delphix:vdev_zap_top: 32
    features_for_read:


Also, around the time it happened, I found this in the logs:
Code:
Jan  3 22:01:37 freenas11 kernel: Limiting closed port RST response from 255 to 200 packets/sec
Jan  3 22:07:30 freenas11 ZFS: vdev state changed, pool_guid=52674479369185151 vdev_guid=9034277843773831370
Jan  3 22:07:30 freenas11 ada5 at ahcich34 bus 0 scbus37 target 0 lun 0
Jan  3 22:07:30 freenas11 ada5: <ST2000VN000-1HJ164 SC60> s/n W720P41C detached
Jan  3 22:07:30 freenas11 GEOM_MIRROR: Device swap0: provider ada5p1 disconnected.
Jan  3 22:07:30 freenas11 (ada5:ahcich34:0:0:0): Periph destroyed
Jan  3 22:07:34 freenas11 GEOM_ELI: Device mirror/swap0.eli destroyed.
Jan  3 22:07:34 freenas11 GEOM_MIRROR: Device swap0: provider destroyed.
Jan  3 22:07:34 freenas11 GEOM_MIRROR: Device swap0 destroyed.
Jan  3 22:07:34 freenas11 GEOM_MIRROR: Device mirror/swap0 launched (2/2).
Jan  3 22:07:34 freenas11 GEOM_ELI: Device mirror/swap0.eli created.
Jan  3 22:07:34 freenas11 GEOM_ELI: Encryption: AES-XTS 128
Jan  3 22:07:34 freenas11 GEOM_ELI:     Crypto: hardware
Jan  3 22:08:17 freenas11 kernel: Limiting closed port RST response from 219 to 200 packets/sec
Jan  3 22:08:17 freenas11 kernel: Limiting closed port RST response from 219 to 200 packets/sec
Jan  3 22:08:44 freenas11 kernel: Limiting closed port RST response from 202 to 200 packets/sec
Jan  3 22:08:44 freenas11 kernel: Limiting closed port RST response from 202 to 200 packets/sec
Jan  3 22:08:54 freenas11 ada5 at ahcich34 bus 0 scbus37 target 0 lun 0
Jan  3 22:08:54 freenas11 ada5: <ST2000VN000-1HJ164 SC60> ACS-2 ATA SATA 3.x device
Jan  3 22:08:54 freenas11 ada5: Serial Number W720P41C
Jan  3 22:08:54 freenas11 ada5: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
Jan  3 22:08:54 freenas11 ada5: Command Queueing enabled
Jan  3 22:08:54 freenas11 ada5: 1907729MB (3907029168 512 byte sectors)
Jan  3 22:08:54 freenas11 ZFS: vdev state changed, pool_guid=52674479369185151 vdev_guid=907820637324977230
Jan  3 22:08:54 freenas11 ZFS: vdev state changed, pool_guid=52674479369185151 vdev_guid=9034277843773831370
Jan  3 22:08:54 freenas11 ZFS: vdev state changed, pool_guid=52674479369185151 vdev_guid=457353395039255039
Jan  3 22:08:59 freenas11 ZFS: vdev state changed, pool_guid=52674479369185151 vdev_guid=457353395039255039
Jan  3 22:08:59 freenas11 ada6 at ahcich35 bus 0 scbus38 target 0 lun 0
Jan  3 22:08:59 freenas11 ada6: <WDC WD100EMAZ-00WJTA0 83.H0A83> s/n 1EGEKM2Z detached
Jan  3 22:08:59 freenas11 (ada6:ahcich35:0:0:0): Periph destroyed
Jan  3 23:22:42 freenas11 syslog-ng[2535]: syslog-ng starting up; version='3.20.1'
Jan  3 23:22:42 freenas11 Waiting (max 60 seconds) for system process `vnlru' to stop... done
Jan  3 23:22:42 freenas11 Waiting (max 60 seconds) for system process `bufdaemon' to stop... done
Jan  3 23:22:42 freenas11 Waiting (max 60 seconds) for system process `syncer' to stop...
Jan  3 23:22:42 freenas11 Syncing disks, vnodes remaining... 0 0 0 0 0 0 0 done
Jan  3 23:22:42 freenas11 All buffers synced.
Jan  3 23:22:42 freenas11 GEOM_ELI: Device mirror/swap0.eli destroyed.
Jan  3 23:22:42 freenas11 GEOM_ELI: Detached mirror/swap0.eli on last close.
Jan  3 23:22:42 freenas11 GEOM_ELI: Device mirror/swap1.eli destroyed.
 

jpoudet

Dabbler
Joined
Jan 4, 2020
Messages
10
And here's what I did:
Code:
root@freenas11:~ # zpool status
  pool: POOL2
 state: ONLINE
  scan: scrub repaired 0 in 0 days 13:53:25 with 0 errors on Sun Dec 29 13:53:26 2019
config:

        NAME                                            STATE     READ WRITE CKSUM
        POOL2                                           ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/8bdff6b3-e055-11e7-9b9a-000c296065ec  ONLINE       0     0     0
            gptid/8cb2682f-e055-11e7-9b9a-000c296065ec  ONLINE       0     0     0
            gptid/8d7f25e1-e055-11e7-9b9a-000c296065ec  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:17 with 0 errors on Fri Jan  3 03:45:17 2020
config:

        NAME        STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          ada0p2    ONLINE       0     0     0

errors: No known data errors
root@freenas11:~ # zpool import
   pool: POOL
     id: 52674479369185151
  state: FAULTED
 status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
        devices and try again.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-3C
 config:

        POOL                                            FAULTED  corrupted data
          raidz1-0                                      FAULTED  corrupted data
            gptid/e574a6f0-7fb0-11e7-9d4e-000c296065e2  ONLINE
            9034277843773831370                         UNAVAIL  cannot open
            gptid/4b1ada95-2e40-11ea-9d12-000c296065e2  ONLINE


Changed the ada5 disk here, from the new 10 TB to the former 2 TB.

Code:
root@freenas11:~ # zpool import
   pool: POOL
     id: 52674479369185151
  state: FAULTED
 status: The pool metadata is corrupted.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-72
 config:

        POOL                                            FAULTED  corrupted data
          raidz1-0                                      FAULTED  corrupted data
            gptid/e574a6f0-7fb0-11e7-9d4e-000c296065e2  ONLINE
            gptid/e629996e-7fb0-11e7-9d4e-000c296065e2  ONLINE
            gptid/4b1ada95-2e40-11ea-9d12-000c296065e2  ONLINE


root@freenas11:~ # zpool status
  pool: POOL2
 state: ONLINE
  scan: scrub repaired 0 in 0 days 13:53:25 with 0 errors on Sun Dec 29 13:53:26 2019
config:

        NAME                                            STATE     READ WRITE CKSUM
        POOL2                                           ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/8bdff6b3-e055-11e7-9b9a-000c296065ec  ONLINE       0     0     0
            gptid/8cb2682f-e055-11e7-9b9a-000c296065ec  ONLINE       0     0     0
            gptid/8d7f25e1-e055-11e7-9b9a-000c296065ec  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:17 with 0 errors on Fri Jan  3 03:45:17 2020
config:

        NAME        STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          ada0p2    ONLINE       0     0     0

errors: No known data errors
root@freenas11:~ # zpool import -fF
   pool: POOL
     id: 52674479369185151
  state: FAULTED
 status: The pool metadata is corrupted.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-72
 config:

        POOL                                            FAULTED  corrupted data
          raidz1-0                                      FAULTED  corrupted data
            gptid/e574a6f0-7fb0-11e7-9d4e-000c296065e2  ONLINE
            gptid/e629996e-7fb0-11e7-9d4e-000c296065e2  ONLINE
            gptid/4b1ada95-2e40-11ea-9d12-000c296065e2  ONLINE
root@freenas11:~ # zpool import -fF POOL
root@freenas11:~ # zpool status
  pool: POOL
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: resilvered 8.07M in 0 days 00:00:01 with 0 errors on Fri Jan  3 22:56:21 2020
config:

        NAME                                            STATE     READ WRITE CKSUM
        POOL                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/e574a6f0-7fb0-11e7-9d4e-000c296065e2  ONLINE       0     0     0
            gptid/e629996e-7fb0-11e7-9d4e-000c296065e2  ONLINE       0     0     0
            gptid/4b1ada95-2e40-11ea-9d12-000c296065e2  ONLINE       0     0    44

errors: No known data errors
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
to what would be required?
I can be insensitive, but mainly it's the hardware info, which you just gave, that is one of the first things. I don't see the PSU or case, though.
Is POOL mounted (I'm not a fan of the names, tbh; they kind of invite confusion)? If it is, can you navigate to it?
potentially useful:
Code:
zfs list
df -h
ls /mnt/POOL   # or whatever df says
camcontrol devlist
glabel status

Is there a possibility that the data/power connector/cable on the drive is loose/bent/twisted? Sometimes that happens when mucking about in a case to replace non-hot-swap-bay drives.
Since you only replaced one drive so far, your pool should still be, uhm, ~3.5 TB, right?
RAIDZ1 with 10 TB drives and no backup is, frankly, asking for data loss, so your first priority is getting read-only access to the pool and copying everything *somewhere*. Often the "best" way to "fix" ZFS pools is to nuke them and start again, but obviously without a backup that method won't help here yet. Is it possible to connect the original 3x 2 TB and, I assume, the 3x 10 TB drives at the same time, or are you limited by case size/layout or SATA/SAS ports? Do you have extra drives in the 2 TB range that could be used to create a backup pool? Ideally you want two cloned pools of at least RAIDZ1 (e.g. POOL and backup), though being able to make a RAIDZ2 would be drastically better. If you really care about the contents of this pool, you may have to bite the wallet and get some more drives so you can get to a point of stability before attempting any further changes.
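Roughly, the replication route would look something like this (just a sketch: the "backup" pool and the snapshot name are placeholders, the backup pool has to exist first, and a new snapshot can only be taken while POOL is still writable):
Code:
# recursive snapshot of everything on POOL (only possible while it's writable)
zfs snapshot -r POOL@rescue
# replicate all datasets, properties and snapshots into the backup pool
zfs send -R POOL@rescue | zfs receive -F backup/POOL
# or, from a read-only import, just copy the files instead, e.g.:
# rsync -avh /mnt/POOL/ /mnt/backup/POOL/
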
Unfortunately, if I am tracking the gptids correctly here, it looks like the drive that's failing is the last of the 2 TB ones, and you may be experiencing the exact reason RAIDZ1 is not recommended: there is an increased risk of the rebuild procedure itself being enough to kill another drive during the resilver.
I ran across this, which may be more reliable than how you currently have it mounted (you do NOT want to be writing anything at this time).
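For the read-only part, the usual shape of it is roughly this (a sketch only, pool name from your setup; the -R mounts everything under /mnt where FreeNAS expects it):
Code:
# import read-only, with all datasets mounted under /mnt
zpool export POOL
zpool import -o readonly=on -R /mnt POOL
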
 

jpoudet

Dabbler
Joined
Jan 4, 2020
Messages
10
Yeah, I think so. The power cable might have been disconnected during the reinstallation of the bay.
3.31 TB. Everything seems like before (even the fact that it's over 90% used ...).

My strategy is what I can manage with the time I have. The plan was to free up the 3x 2 TB to use as cold-storage drives... I'm used to ZFS being rock solid in a professional environment, and... well, this is at home. And I'm no expert at all, just a basic user with a hectic life. And I've had a run of bad luck, including a motherboard and the backup disk also failing in the last few weeks... (which triggered the move to upgrade those disks, the panic, etc.)
So I used Z1 for the "small" disks, which is a tradeoff, I agree. Now, for the 10 TB drives, also agreed, that's too large for Z1, but they would have held only large but insignificant data (movies and such).
Unfortunately, I had tons of important data on those 2 TB drives... Anyway, I know I screwed up. Don't do that when you're tired or in a hurry.

Back to the issue.
My pool <POOL> is mounted and "navigable" (see zfs list). Most of the content was in <main_smb>, which is not accessible via SMB. Same for all the jails: I can see them in the tree, but they won't start.

I'll be able to connect the 6 drives at the same time. I'll disconnect the <POOL2> pool, import <POOL> read-only, copy it to (say) <POOL1>, and hope the Z1 holds with the last disk.
I'll have to do that next weekend, though. I'm leaving for the week (business). But if that works, that's a good idea.
Do you have a link for a procedure you trust?

Otherwise, I should be able to find a disk big enough to be a temporary target, yes.

You think the last one failed? ada6 / gptid = 4b...? Because of the checksum errors?
I wonder if that's really the culprit, though. (It's the brand-new one, and the two others share the same power cable...)

What I find strange, though, is that the disk repaired itself (apparently) and works OK, but the content is like... void? I'm at the point where I need to figure out what is dead and what is not.

Thanks for the link for the "Don't panic" thread.

MTF for what you asked.
 

jpoudet

Dabbler
Joined
Jan 4, 2020
Messages
10
zfs list: see above for POOL

df -h:
Code:
root@freenas11:~ # df -h
Filesystem                                                        Size    Used   Avail Capacity  Mounted on
freenas-boot/ROOT/11.2-U7                                         7.9G    772M    7.1G    10%    /
devfs                                                             1.0K    1.0K      0B   100%    /dev
tmpfs                                                              16G     10M     16G     0%    /etc
tmpfs                                                             2.0G    8.0K    2.0G     0%    /mnt
tmpfs                                                             2.0T    472M    2.0T     0%    /var
fdescfs                                                           1.0K    1.0K      0B   100%    /dev/fd
[...POOL2 ...]
POOL                                                              207G    128K    207G     0%    /POOL
POOL/.bhyve_containers                                            207G     70M    207G     0%    /POOL/.bhyve_containers
POOL/esxi_nfs                                                     207G    117K    207G     0%    /POOL/esxi_nfs
POOL/jails                                                        207G    234K    207G     0%    /POOL/jails
POOL/jails/.warden-template-pluginjail-11.0-x64                   208G    592M    207G     0%    /POOL/jails/.warden-template-pluginjail-11.0-x64
POOL/jails/.warden-template-pluginjail-11.0-x64-20180129100457    208G    592M    207G     0%    /POOL/jails/.warden-template-pluginjail-11.0-x64-20180129100457
POOL/jails/.warden-template-standard-11.0-x64                     209G    2.2G    207G     1%    /POOL/jails/.warden-template-standard-11.0-x64
POOL/jails/.warden-template-standard-11.0-x64-20180319163448      209G    2.2G    207G     1%    /POOL/jails/.warden-template-standard-11.0-x64-20180319163448
POOL/jails/.warden-template-standard-11.0-x64-20181030170244      209G    2.2G    207G     1%    /POOL/jails/.warden-template-standard-11.0-x64-20181030170244
POOL/jails/Jackett                                                210G    2.8G    207G     1%    /POOL/jails/Jackett
POOL/jails/Sonarr                                                 218G     11G    207G     5%    /POOL/jails/Sonarr
POOL/jails/headphones_2                                           210G    2.6G    207G     1%    /POOL/jails/headphones_2
POOL/main_smb                                                     3.5T    3.3T    207G    94%    /POOL/main_smb
POOL/iocage                                                       207G    4.2M    207G     0%    /iocage
POOL/iocage/download                                              207G    117K    207G     0%    /iocage/download
POOL/iocage/download/11.1-RELEASE                                 207G    260M    207G     0%    /iocage/download/11.1-RELEASE
POOL/iocage/download/11.2-RELEASE                                 207G    272M    207G     0%    /iocage/download/11.2-RELEASE
POOL/iocage/images                                                207G    117K    207G     0%    /iocage/images
POOL/iocage/jails                                                 207G    117K    207G     0%    /iocage/jails
POOL/iocage/jails/jackett                                         207G    128K    207G     0%    /iocage/jails/jackett
POOL/iocage/jails/jackett/root                                    209G    1.7G    207G     1%    /iocage/jails/jackett/root
POOL/iocage/jails/radarr                                          207G    464K    207G     0%    /iocage/jails/radarr
POOL/iocage/jails/radarr/root                                     210G    3.1G    207G     1%    /iocage/jails/radarr/root
POOL/iocage/jails/transmission                                    207G    240K    207G     0%    /iocage/jails/transmission
POOL/iocage/jails/transmission/root                               214G    6.9G    207G     3%    /iocage/jails/transmission/root
POOL/iocage/log                                                   207G    149K    207G     0%    /iocage/log
POOL/iocage/releases                                              207G    117K    207G     0%    /iocage/releases
POOL/iocage/releases/11.1-RELEASE                                 207G    117K    207G     0%    /iocage/releases/11.1-RELEASE
POOL/iocage/releases/11.1-RELEASE/root                            208G    1.1G    207G     1%    /iocage/releases/11.1-RELEASE/root
POOL/iocage/releases/11.2-RELEASE                                 207G    117K    207G     0%    /iocage/releases/11.2-RELEASE
POOL/iocage/releases/11.2-RELEASE/root                            208G    1.1G    207G     1%    /iocage/releases/11.2-RELEASE/root
POOL/iocage/templates                                             207G    117K    207G     0%    /iocage/templates
POOL/.system                                                      207G    980K    207G     0%    /var/db/system
POOL/.system/cores                                                207G    4.5M    207G     0%    /var/db/system/cores
POOL/.system/samba4                                               207G    1.1M    207G     0%    /var/db/system/samba4
POOL/.system/syslog-66311c036e824820af44b2dbf4c55f10              207G     40M    207G     0%    /var/db/system/syslog-66311c036e824820af44b2dbf4c55f10
POOL/.system/rrd-66311c036e824820af44b2dbf4c55f10                 207G     98M    207G     0%    /var/db/system/rrd-66311c036e824820af44b2dbf4c55f10
POOL/.system/configs-66311c036e824820af44b2dbf4c55f10             207G    116M    207G     0%    /var/db/system/configs-66311c036e824820af44b2dbf4c55f10
POOL/.system/webui                                                207G    117K    207G     0%    /var/db/system/webui


Code:
root@freenas11:~ # ls /mnt/
md_size  POOL2/

POOL is not here.

Code:
root@freenas11:~ # camcontrol devlist
<VMware Virtual SATA Hard Drive 00000001>  at scbus3 target 0 lun 0 (pass0,ada0)
<NECVMWar VMware SATA CD01 1.00>   at scbus4 target 0 lun 0 (pass1,cd0)
<WDC WD6001F4PZ-49CWHM0 01.0RAE1>  at scbus33 target 0 lun 0 (pass2,ada1)
<WDC WD6001F4PZ-49CWHM0 01.0RAE1>  at scbus34 target 0 lun 0 (pass3,ada2)
<WDC WD6001F4PZ-49CWHM0 01.0RAE1>  at scbus35 target 0 lun 0 (pass4,ada3)
<ST2000DM001-1CH164 HP34>          at scbus36 target 0 lun 0 (pass5,ada4)
<ST2000VN000-1HJ164 SC60>          at scbus37 target 0 lun 0 (pass6,ada5)
<WDC WD100EMAZ-00WJTA0 83.H0A83>   at scbus38 target 0 lun 0 (pass7,ada6)


Finally:
Code:
root@freenas11:~ # glabel status
                                      Name  Status  Components
gptid/9d958009-5625-11e7-90cb-000c296065e2     N/A  ada0p1
gptid/8bdff6b3-e055-11e7-9b9a-000c296065ec     N/A  ada1p2
gptid/8cb2682f-e055-11e7-9b9a-000c296065ec     N/A  ada2p2
gptid/8d7f25e1-e055-11e7-9b9a-000c296065ec     N/A  ada3p2
gptid/e574a6f0-7fb0-11e7-9d4e-000c296065e2     N/A  ada4p2
gptid/e629996e-7fb0-11e7-9d4e-000c296065e2     N/A  ada5p2
gptid/4b1ada95-2e40-11ea-9d12-000c296065e2     N/A  ada6p2
gptid/4b093140-2e40-11ea-9d12-000c296065e2     N/A  ada6p1
gptid/8bcfc141-e055-11e7-9b9a-000c296065ec     N/A  ada1p1
 

jpoudet

Dabbler
Joined
Jan 4, 2020
Messages
10
I can be insensitive
You're trying to help. That's what's important to me.

I just feel very stupid and overwhelmed.
So: Thanks!
And happy f***ing New Year! ;-)
 

Tsaukpaetra

Patron
Joined
Jan 7, 2014
Messages
215
This might be a stupid question, but after you mucked around in the terminal and got the pool to mount, did you reboot the system to let FreeNAS try and wrap its head around what you did?
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
This might be a stupid question,
Nope, that is an excellent question. However, one of the things to be cautious of with potentially damaged, backupless pools is that you want to avoid reboots and remounts as much as possible, because each one increases the risk. You also want to avoid leaving such a pool running for a week before dealing with it, so it might be best to shut the whole thing off, see if it starts up correctly after the break, and only then deal with trying to recover anything or further mucking about.
On the point of a backup, technically you don't need more than one drive to at least have a backup. You won't have automatic healing, but ZFS WILL tell you the instant any file is corrupted; this doesn't help with getting the bad file back, but at least you KNOW which files might be bad. You could also get one drive and set copies=2, but that is advanced cmd stuff.
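(If you ever go that route, it's really just one property on the backup pool; something like this, where the pool name and device are made up:)
Code:
# single-disk backup pool; "backup" and da7 are just example names
zpool create backup da7
# store two copies of every block, so isolated bad sectors can still self-heal
zfs set copies=2 backup
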
I just feel very stupid and overwhelmed.
Ya, and I'm bad at remembering to take that into account, and very good at making the "stupid and overwhelmed" feelings worse for no good reason.
 

jpoudet

Dabbler
Joined
Jan 4, 2020
Messages
10
might be best to shut the whole thing off, and see if it starts up correctly after the break


I'll look into it tomorrow, for sure!
As suggested above: read-only.
I'll report back.


on the point of a backup, technically you don't need more than one drive to at least have a backup
Yeah ... I used a USB3 external drive for that. Somehow, it became corrupted too. (ZFS, single drive.)
When that happened, I decided to rush the upgrade to the new disks and use one of the former ones as a new backup.
That didn't go well...
Apparently, I'm in a Murphy's Law period.

you won't have automatic healing, but zfs WILL tell you the instant any file is corrupted; this doesnt help with getting the bad file back but at least you KNOW what files might be bad.
I was expecting that when I ran the forced import. But nothing.
Perhaps the problem is now with FreeNAS not being able to retrieve the files and the mount parameters?

you could also get one drive and set copies=2, but that is advanced cmd stuff.
I'll go with an automated encrypted backup to the cloud instead... I already tried that, but got stuck somewhere in the past.
Having two copies locally is not really better: if you lose your disks, you lose both versions.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
When that happened, I decided to rush the upgrade to the new disks
ah, the classic "OMG its broken I should immediately make it worse!" reaction.
 
Last edited:

jpoudet

Dabbler
Joined
Jan 4, 2020
Messages
10
ah, the classic "OMG its broken i should immediately make it worse!" reaction.
Yeah. Exactly.
I have to take more time to investigate from now on.

About that: still no luck this weekend. Mounted read-only, with another name for the pool (to see if that helps with the "duplication" I saw).
Cannot read the content. Everything looks fine otherwise.
I'm at my limits there now.

Any idea?
Could it be that the file system is broken?

How do I find the message I saw about the pool being already imported? Where are those logs?
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Logs are in /var/log/messages, or /var/log/messages* for the archived copies.
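To dig that message out, something like the following should work (the exact wording of the "already imported" message is from memory, so grep loosely):
Code:
grep -i import /var/log/messages
# rotated logs are compressed; bzcat can read those, e.g.:
bzcat /var/log/messages.0.bz2 | grep -i import
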
 