Pool pool1 state is offline: None

phoenix13023

Cadet
Joined
Feb 10, 2020
Messages
9
My FreeNAS (11.3-RELEASE) runs as a VM on a PVE (Proxmox VE) host.
pool1 consists of 3 hard drives. One of the drives was not recognized by PVE. After upgrading PVE, everything returned to normal, but when FreeNAS started, pool1 became unknown.
[Attachment: pool.jpg]

[Attachment: disk.JPG]


Can someone help me figure out what to do next?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
That's an odd mix of drive sizes for those 3 disks if they're in the same pool.

How are you getting those drives from PVE to the FreeNAS VM? Passthrough PCIe controller?

Are they physical disks or PVE virtual disks (seems from the name not to be the case, but just checking)?

Output from zpool import?
 

phoenix13023

Cadet
Joined
Feb 10, 2020
Messages
9
Not passthrough, just virtual disks. I don't know either system very well, so all the settings are just the basics.
Yes, the 3 disks are in the same pool.
Do I need to post some information from PVE?

Code:
root@freenas[~]# zpool import
   pool: pool1
     id: 17058582633973292600
  state: UNAVAIL
 status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
    devices and try again.
   see: http://illumos.org/msg/ZFS-8000-3C
 config:

    pool1                                         UNAVAIL  insufficient replicas
      8985129990810004893                         UNAVAIL  cannot open
      gptid/96fb84be-f27f-11e9-be62-3f425a160e15  ONLINE
      gptid/989f2a1b-f27f-11e9-be62-3f425a160e15  ONLINE
      gptid/99f82ce3-f27f-11e9-be62-3f425a160e15  ONLINE
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
OK, so here's your problem... you have no fault tolerance and a pool of 4 disks... 1 is missing and your pool is unable to mount without it since there is no fault tolerance.
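For reference, here's roughly what the difference looks like at pool-creation time (a sketch only, not a fix for the current situation; zpool create destroys anything on the named disks, and on FreeNAS you'd normally build the pool from the GUI rather than the command line):
Code:
# striped, no redundancy (what you have now): losing any one disk loses the pool
zpool create pool1 da1 da2 da3 da4

# raidz1: the pool survives the loss of any single disk
zpool create pool1 raidz1 da1 da2 da3 da4
# (device names are placeholders)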

You're going to need to figure out where that 4th disk is or start the acceptance process for having lost your pool contents.

Additionally (compounding the risk you're taking by using no fault tolerance in your pool layout), using virtual disks robs FreeNAS of the ability to apply ZFS management and SMART testing to the disks. So you're also likely to have failures/loss of those virtual disks for reasons you won't find out about until after they're gone, since FreeNAS can't monitor what's going on in the background.

You need to read this article: https://www.ixsystems.com/community...ide-to-not-completely-losing-your-data.12714/

I hope you weren't storing important data on it.
 

phoenix13023

Cadet
Joined
Feb 10, 2020
Messages
9

Hi sretalla,
I don't know why it shows 4 disks. I am sure this pool only uses 3 disks, and I can find all of them on the FreeNAS Disks page.

Can you point me to a guide for getting the disk reconnected?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
I am sure this pool only uses 3 disks, and I can find all of them on the FreeNAS Disks page.
Your pool disagrees with you.

pool1 UNAVAIL insufficient replicas <-- This is the pool telling you a disk is missing
8985129990810004893 UNAVAIL cannot open <-- This is the missing disk
gptid/96fb84be-f27f-11e9-be62-3f425a160e15 ONLINE
gptid/989f2a1b-f27f-11e9-be62-3f425a160e15 ONLINE
gptid/99f82ce3-f27f-11e9-be62-3f425a160e15 ONLINE

Perhaps you need to look in PVE to see which disks were presented there. FreeNAS will just use the disk as normal once it's back.
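For example, on the PVE host something like this should list exactly which virtual disks are attached to the VM (assuming the FreeNAS VM ID is 100; adjust to match yours):
Code:
# show the VM's disk lines from its Proxmox config
qm config 100 | grep -Ei 'scsi|sata|virtio|ide'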
 
Last edited:

K_switch

Dabbler
Joined
Dec 18, 2019
Messages
44
I don't know why it shows 4 disks
Normally I would say something about running
Code:
geom disk list
but in this instance just check the Hardware settings of your Proxmox node... I know you tried
Code:
zpool import
but could you please post the output of
Code:
zpool status
It may sound trivial, but I'd be interested to know how a KVM FreeNAS could just lose a disk.

Thanks!
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Not passthrough, just virtual disks
That, in combination with the striped configuration of your pool, has an extremely high likelihood of being fatal to your data.
 

phoenix13023

Cadet
Joined
Feb 10, 2020
Messages
9
Normally I would say something about running
Code:
geom disk list

Code:
root@freenas[~]# geom disk list
Geom name: cd0
Providers:
1. Name: cd0
   Mediasize: 0 (0B)
   Sectorsize: 2048
   Mode: r0w0e0
   descr: QEMU QEMU DVD-ROM
   ident: (null)
   rotationrate: unknown
   fwsectors: 0
   fwheads: 0

Geom name: da0
Providers:
1. Name: da0
   Mediasize: 26843545600 (25G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   descr: QEMU QEMU HARDDISK
   ident: (null)
   rotationrate: unknown
   fwsectors: 63
   fwheads: 255

Geom name: da1
Providers:
1. Name: da1
   Mediasize: 1954210119680 (1.8T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   descr: QEMU QEMU HARDDISK
   ident: (null)
   rotationrate: unknown
   fwsectors: 63
   fwheads: 255

Geom name: da2
Providers:
1. Name: da2
   Mediasize: 2931315179520 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   descr: QEMU QEMU HARDDISK
   ident: (null)
   rotationrate: unknown
   fwsectors: 63
   fwheads: 255

Geom name: da3
Providers:
1. Name: da3
   Mediasize: 2931315179520 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   descr: QEMU QEMU HARDDISK
   ident: (null)
   rotationrate: unknown
   fwsectors: 63
   fwheads: 255

but could you please post the output of
Code:
zpool status
It may sound trivial, but I'd be interested to know how a KVM FreeNAS could just lose a disk.

Code:
root@freenas[~]# zpool status
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:14 with 0 errors on Tue Apr 28 03:45:14 2020
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors


That, in combination with the striped configuration of your pool, has an extremely high likelihood of being fatal to your data.

Yes, I know. It worked for almost a year without any trouble. But after this happened, I will use a more secure setup. For now, I just hope to get the data back.

Code:
root@pve:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0   2.7T  0 disk
├─D2T01-D2T01_tmeta          253:20   0    88M  0 lvm 
│ └─D2T01-D2T01-tpool        253:22   0   2.7T  0 lvm 
│   ├─D2T01-D2T01            253:23   0   2.7T  0 lvm 
│   └─D2T01-vm--100--disk--0 253:24   0   2.7T  0 lvm 
└─D2T01-D2T01_tdata          253:21   0   2.7T  0 lvm 
  └─D2T01-D2T01-tpool        253:22   0   2.7T  0 lvm 
    ├─D2T01-D2T01            253:23   0   2.7T  0 lvm 
    └─D2T01-vm--100--disk--0 253:24   0   2.7T  0 lvm 
sdb                            8:16   0   1.8T  0 disk
├─D1T01-D1T01_tmeta          253:15   0   120M  0 lvm 
│ └─D1T01-D1T01-tpool        253:17   0   1.8T  0 lvm 
│   ├─D1T01-D1T01            253:18   0   1.8T  0 lvm 
│   └─D1T01-vm--100--disk--0 253:19   0   1.8T  0 lvm 
└─D1T01-D1T01_tdata          253:16   0   1.8T  0 lvm 
  └─D1T01-D1T01-tpool        253:17   0   1.8T  0 lvm 
    ├─D1T01-D1T01            253:18   0   1.8T  0 lvm 
    └─D1T01-vm--100--disk--0 253:19   0   1.8T  0 lvm 
sdc                            8:32   0 931.5G  0 disk
├─sdc1                         8:33   0     2G  0 part
└─sdc2                         8:34   0 929.5G  0 part
sdd                            8:48   0   2.7T  0 disk
├─D2T02-D2T02_tmeta          253:9    0    88M  0 lvm 
│ └─D2T02-D2T02-tpool        253:11   0   2.7T  0 lvm 
│   ├─D2T02-D2T02            253:12   0   2.7T  0 lvm 
│   └─D2T02-vm--100--disk--0 253:13   0   2.7T  0 lvm 
├─D2T02-D2T02_tdata          253:10   0   2.7T  0 lvm 
│ └─D2T02-D2T02-tpool        253:11   0   2.7T  0 lvm 
│   ├─D2T02-D2T02            253:12   0   2.7T  0 lvm 
│   └─D2T02-vm--100--disk--0 253:13   0   2.7T  0 lvm 
└─D2T02-D2T02_meta0          253:14   0    88M  0 lvm 
sde                            8:64   0 178.9G  0 disk
├─sde1                         8:65   0  1007K  0 part
├─sde2                         8:66   0   512M  0 part /boot/efi
└─sde3                         8:67   0 178.4G  0 part
  ├─pve-swap                 253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0  44.5G  0 lvm  /
  ├─pve-data_tmeta           253:2    0   1.1G  0 lvm 
  │ └─pve-data-tpool         253:4    0 107.7G  0 lvm 
  │   ├─pve-data             253:5    0 107.7G  0 lvm 
  │   ├─pve-vm--100--disk--0 253:6    0    25G  0 lvm 
  │   ├─pve-vm--103--disk--0 253:7    0    42G  0 lvm 
  │   └─pve-vm--104--disk--0 253:8    0    55G  0 lvm 
  └─pve-data_tdata           253:3    0 107.7G  0 lvm 
    └─pve-data-tpool         253:4    0 107.7G  0 lvm 
      ├─pve-data             253:5    0 107.7G  0 lvm 
      ├─pve-vm--100--disk--0 253:6    0    25G  0 lvm 
      ├─pve-vm--103--disk--0 253:7    0    42G  0 lvm 
      └─pve-vm--104--disk--0 253:8    0    55G  0 lvm 


sda, sdb and sdd are the disks used for FreeNAS. It looks like the configuration has two virtual disk entries for FreeNAS (VM 100).
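A quick way to narrow that down on the PVE side is to list only the logical volumes belonging to VM 100 (a sketch using standard LVM tools on the PVE host):
Code:
# list every LV whose name contains vm-100 (i.e. disks PVE created for VM 100)
lvs -o vg_name,lv_name,lv_size | grep 'vm-100'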
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
It's a long shot as you don't have all pool disks, but can you try zpool history pool1

I'm at a loss to explain where that 4th disk in the pool came from... I wasn't there to see what happened the whole time, so I can only tell you what the pool config currently says, which is 4 disks, 1 missing, can't mount.

There's a tiny possibility you can force it to mount/repair with zpool import -fF -o readonly=on pool1 (you might want to try it with -fFn first to see what it says... dry run).
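Put together, the sequence would look something like this (the n in -fFn makes it a dry run that only reports what the recovery would do):
Code:
zpool import -fFn pool1                 # dry run: show what -F recovery would attempt
zpool import -fF -o readonly=on pool1   # then attempt the read-only import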
 

phoenix13023

Cadet
Joined
Feb 10, 2020
Messages
9

Maybe this is not the right way.
Code:
root@freenas[~]# zpool history pool1
cannot open 'pool1': no such pool
root@freenas[~]# zpool import -fF -o readonly=on pool1
cannot import 'pool1': no such pool or dataset
    Destroy and re-create the pool from
    a backup source.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
Does zpool import still show pool1?

Maybe you need to do it by ID

zpool import -fF -o readonly=on 17058582633973292600
 

phoenix13023

Cadet
Joined
Feb 10, 2020
Messages
9
Code:
root@freenas[~]# zpool import
   pool: pool1
     id: 17058582633973292600
  state: UNAVAIL
 status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
    devices and try again.
   see: http://illumos.org/msg/ZFS-8000-3C
 config:

    pool1                                         UNAVAIL  insufficient replicas
      8985129990810004893                         UNAVAIL  cannot open
      gptid/96fb84be-f27f-11e9-be62-3f425a160e15  ONLINE
      gptid/989f2a1b-f27f-11e9-be62-3f425a160e15  ONLINE
      gptid/99f82ce3-f27f-11e9-be62-3f425a160e15  ONLINE
root@freenas[~]# zpool import -fF -o readonly=on 17058582633973292600
cannot import 'pool1': no such pool or dataset
    Destroy and re-create the pool from
    a backup source.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
How about zpool history?
 

phoenix13023

Cadet
Joined
Feb 10, 2020
Messages
9
@K_switch
If the default settings don't do it, then it's not.

@sretalla
Code:
root@freenas[~]# zpool history
History for 'freenas-boot':
2019-10-15.22:24:07 zpool create -f -o cachefile=/tmp/zpool.cache -o version=28 -O mountpoint=none -O atime=off -O canmount=off freenas-boot da0p2
2019-10-15.22:24:07 zpool set feature@async_destroy=enabled freenas-boot
2019-10-15.22:24:07 zpool set feature@empty_bpobj=enabled freenas-boot
2019-10-15.22:24:07 zpool set feature@lz4_compress=enabled freenas-boot
2019-10-15.22:24:07 zfs set compress=lz4 freenas-boot
2019-10-15.22:24:07 zfs create -o canmount=off freenas-boot/ROOT
2019-10-15.22:24:13 zfs create -o mountpoint=legacy freenas-boot/ROOT/default
2019-10-15.22:26:01 zpool set bootfs=freenas-boot/ROOT/default freenas-boot
2019-10-15.22:29:11 zfs set beadm:nickname=default freenas-boot/ROOT/default
2019-10-15.22:29:11 zfs snapshot -r freenas-boot/ROOT/default@2019-10-15-14:29:11
2019-10-15.22:29:16 zfs clone -o canmount=off -o mountpoint=legacy freenas-boot/ROOT/default@2019-10-15-14:29:11 freenas-boot/ROOT/Initial-Install
2019-10-15.23:01:30 zfs set beadm:nickname=Initial-Install freenas-boot/ROOT/Initial-Install
2019-10-19.23:16:25 <iocage> zfs set org.freebsd.ioc:active=no freenas-boot
2019-10-23.03:45:04 zpool scrub freenas-boot
2019-10-31.03:45:06 zpool scrub freenas-boot
2019-11-08.03:45:05 zpool scrub freenas-boot
2019-11-16.03:45:05 zpool scrub freenas-boot
2019-11-24.03:45:06 zpool scrub freenas-boot
2019-12-02.03:45:05 zpool scrub freenas-boot
2019-12-10.03:45:05 zpool scrub freenas-boot
2019-12-18.03:45:05 zpool scrub freenas-boot
2019-12-26.03:45:04 zpool scrub freenas-boot
2020-01-03.03:45:05 zpool scrub freenas-boot
2020-01-11.03:45:04 zpool scrub freenas-boot
2020-01-19.03:45:04 zpool scrub freenas-boot
2020-01-27.03:45:05 zpool scrub freenas-boot
2020-02-04.03:45:04 zpool scrub freenas-boot
2020-02-05.16:37:26 zfs snapshot -r freenas-boot/ROOT/default@2020-02-05-16:37:26
2020-02-05.16:37:26 zfs clone -o canmount=off -o mountpoint=legacy freenas-boot/ROOT/default@2020-02-05-16:37:26 freenas-boot/ROOT/11.2-U7
2020-02-05.16:37:27 zfs set beadm:nickname=11.2-U7 freenas-boot/ROOT/11.2-U7
2020-02-05.16:37:27  zfs set beadm:keep=False freenas-boot/ROOT/11.2-U7
2020-02-05.16:37:29  zfs set sync=disabled freenas-boot/ROOT/11.2-U7
2020-02-05.16:40:29  zfs inherit  freenas-boot/ROOT/11.2-U7
2020-02-05.16:40:30 zfs set canmount=noauto freenas-boot/ROOT/11.2-U7
2020-02-05.16:40:30 zfs set mountpoint=/tmp/BE-11.2-U7.pY6k9FGA freenas-boot/ROOT/11.2-U7
2020-02-05.16:40:30 zfs set mountpoint=/ freenas-boot/ROOT/11.2-U7
2020-02-05.16:40:30 zpool set bootfs=freenas-boot/ROOT/11.2-U7 freenas-boot
2020-02-05.16:40:30 zfs set canmount=noauto freenas-boot/ROOT/Initial-Install
2020-02-05.16:40:30 zfs set canmount=noauto freenas-boot/ROOT/default
2020-02-05.16:40:35 zfs promote freenas-boot/ROOT/11.2-U7
2020-02-05.21:43:29 zfs snapshot -r freenas-boot/ROOT/11.2-U7@2020-02-05-21:43:29
2020-02-05.21:43:29 zfs clone -o canmount=off -o beadm:keep=False -o mountpoint=/ freenas-boot/ROOT/11.2-U7@2020-02-05-21:43:29 freenas-boot/ROOT/11.3-RELEASE
2020-02-05.21:43:29 zfs set beadm:nickname=11.3-RELEASE freenas-boot/ROOT/11.3-RELEASE
2020-02-05.21:43:30  zfs set sync=disabled freenas-boot/ROOT/11.3-RELEASE
2020-02-05.21:46:36  zfs inherit  freenas-boot/ROOT/11.3-RELEASE
2020-02-05.21:46:36 zfs set canmount=noauto freenas-boot/ROOT/11.3-RELEASE
2020-02-05.21:46:36 zfs set mountpoint=/tmp/BE-11.3-RELEASE.My4P35PB freenas-boot/ROOT/11.3-RELEASE
2020-02-05.21:46:36 zfs set mountpoint=/ freenas-boot/ROOT/11.3-RELEASE
2020-02-05.21:46:36 zpool set bootfs=freenas-boot/ROOT/11.3-RELEASE freenas-boot
2020-02-05.21:46:36 zfs set canmount=noauto freenas-boot/ROOT/11.2-U7
2020-02-05.21:46:36 zfs set canmount=noauto freenas-boot/ROOT/Initial-Install
2020-02-05.21:46:36 zfs set canmount=noauto freenas-boot/ROOT/default
2020-02-05.21:46:41 zfs promote freenas-boot/ROOT/11.3-RELEASE
2020-02-11.03:45:01  zpool scrub freenas-boot
2020-02-18.03:45:01  zpool scrub freenas-boot
2020-02-25.03:45:01  zpool scrub freenas-boot
2020-03-03.03:45:02  zpool scrub freenas-boot
2020-03-10.03:45:03  zpool scrub freenas-boot
2020-03-17.03:45:01  zpool scrub freenas-boot
2020-03-24.03:45:01  zpool scrub freenas-boot
2020-03-31.03:45:01  zpool scrub freenas-boot
2020-04-07.03:45:01  zpool scrub freenas-boot
2020-04-14.03:45:01  zpool scrub freenas-boot
2020-04-28.03:45:01  zpool scrub freenas-boot
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
OK, so we're really out of options here... you need to find that 4th disk or it's over. If the data is really important to you, I saw a post a few weeks back where somebody was promoting a ZFS recovery tool, but if that disk is really missing, then I wouldn't expect much to be recovered anyway.

On the Proxmox host you might try find / -name "*-100*"
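For example, something along these lines (a rough sketch; the exact name pattern depends on how your storage is set up, and thin LVs also show up under /dev/mapper):
Code:
find / -name '*-100-*' 2>/dev/null      # look for anything named after VM 100
ls /dev/mapper | grep 'vm--100'         # device-mapper nodes for VM 100's thin LVs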
 

K_switch

Dabbler
Joined
Dec 18, 2019
Messages
44
@phoenix13023

The other thing you could do is check the Proxmox VM .conf file, located at /etc/pve/qemu-server/*.conf.
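For example (assuming the FreeNAS VM ID is 100):
Code:
cat /etc/pve/qemu-server/100.conf
# the scsiN: / virtioN: / sataN: / ideN: lines show which disks are attached to the VM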

The other thought is that if these are just virtual disks, then maybe you should start browsing through the file structure on your Proxmox host...
 

phoenix13023

Cadet
Joined
Feb 10, 2020
Messages
9
@K_switch @sretalla

Thank you so much for what you have done.

I know this error is maybe not so serious, but I don't know PVE and FreeNAS very well. At this point, I guess PVE created a new virtual disk entry for FreeNAS and the old data is still there.

I give up. I've lost some data and will start on a new solution for my home file server.
Thanks again.
 