need help on zfs_members

Kieros

Explorer
Joined
Jan 13, 2022
Messages
50
In TrueNAS I had a ZFS pool consisting of 2 NVMe SSD storage drives (Samsung 980 Pro).
I was fixing network issues and got kicked out while trying different subnets.

When I got back in via the shell, the ZFS pool was somehow not reachable. It was showing as offline / not visible.
So I did something very stupid and disconnected the pool, thinking I would then be able to import it into a new TrueNAS installation.
But now the zpool has somehow fallen apart into separate zfs_members.
The data and pool are untouched.

The hardware is 2x 1 TB NVMe SSDs.
I have now installed Proxmox to keep the internet going, because I had wanted to run pfSense as a VM inside TrueNAS.
Proxmox is installed on another SSD, so I can run shell commands without touching the NVMe drives.

In Proxmox I now see the 2 drives, labeled as 2 zfs_members.
I need to restore this data. Can I still access the 2 drives separately? Is there a way to combine them back into a zpool without destroying the data?
I am very stressed at the moment because all my kids' childhood photos are on these 2 disks.

[Screenshot attached: Proxmox disk view showing the two NVMe drives as zfs_member]

Please help me fix this.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Have you tried the following command to see if the pool is importable?
Code:
zpool import

Note that this does not actually import the pool; it just displays any exported pools visible on the system.
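
If the plain command comes up empty, it can also be worth pointing the scan at an explicit device directory. A sketch using common Linux device paths (adjust for your system):
Code:
zpool import -d /dev/disk/by-id
zpool import -d /dev

Both forms only scan and list what they find; neither imports anything by itself.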
 

Kieros

Explorer
Joined
Jan 13, 2022
Messages
50
Hi, thank you.
The command replied that no pools are available to import.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Do you remember how your pool was constructed? Was it a mirror of 2 disks, or a stripe of 2 disks? Also, do you remember the name of your pool?
 

Kieros

Explorer
Joined
Jan 13, 2022
Messages
50
The drives were partitioned:
1. boot, mirrored
2. ssd-storage-apps, mirrored (destroyed)
3. ssd-storage, data striped into one zpool (most important)
I do not care about the first 2.

But Proxmox always showed ZFS pool nr. 3.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Unfortunately, Proxmox allows you to do things like this. TrueNAS insists on pool members being full disks, and makes it very hard to create partitioned setups like this. Furthermore, since your data vdevs are striped, you don't have any redundancy if you corrupt one of the members during recovery.

This is beyond what can be fixed with the ZFS command-line tools on TrueNAS. You'll have to use Klennet ZFS Recovery to rescue your data.
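
Before buying anything, it may still be worth confirming whether any ZFS labels survive on the data partitions. zdb -l only reads the on-disk labels and writes nothing; the device path below is just an example and will differ on your system:
Code:
zdb -l /dev/nvme0n1p3

If that prints a label with a pool name, GUIDs, and a vdev tree, the pool metadata is still on that partition; if it reports that it cannot unpack any of the labels, that member really has lost its metadata.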
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Be that as it may, you'll need specialty rescue software now. The ZFS command-line tools won't help you at this point.
 

Kieros

Explorer
Joined
Jan 13, 2022
Messages
50
OK, I understand this is my own doing, and I know it was not recommended. Can this software run on Debian?
 

Kieros

Explorer
Joined
Jan 13, 2022
Messages
50
Yes, and these guides are NOT SUPPORTED.
However, it is TrueNAS that made this weird disconnection of the pool partition. Whether it is a partition or an entire disk should not matter, imo.
If it had been a bunch of whole disks, it would have been the same problem. It is all software in the end.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
OK, I understand this is my own doing, and I know it was not recommended. Can this software run on Debian?
No, Klennet only runs on Windows.
 

Kieros

Explorer
Joined
Jan 13, 2022
Messages
50
That is very expensive software.
How can it be that intact members are unable to rejoin a pool?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Because the pool itself no longer exists. All the command-line tools for re-attaching members to a pool assume the pool is already there. The only command remaining is zpool create, which will destroy the contents of your zfs_members.
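
For reference, a rough sketch of why the usual commands don't apply here (pool and device names are placeholders):
Code:
# Both of these require an already-imported pool, which no longer exists:
zpool attach <pool> <existing-device> <new-device>
zpool add <pool> <new-device>

# And this writes fresh labels over whatever it is given,
# destroying the existing zfs_member metadata, so do not run it on these disks:
zpool create <pool> <device1> <device2>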
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@Kieros - Try this command:
Code:
zpool import -D

I am not sure a plain import will show destroyed pools. But you mention a destroyed app pool, so perhaps the wrong pool got destroyed. This is pretty harmless to try.

If the output shows something that looks right (a 2-disk striped pool), then we can look at importing it using recovery options.
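
If it does show up, a sketch of what a recovery-style import might look like, kept read-only so nothing is written to the members (the pool name is a placeholder, use whatever the listing reports):
Code:
zpool import -D -f -o readonly=on -R /mnt <poolname>

Here -D allows importing a pool marked as destroyed, -f forces the import if the pool looks like it was last used on another system, and readonly=on keeps ZFS from modifying the disks while you copy data off.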
 

Kieros

Explorer
Joined
Jan 13, 2022
Messages
50
Too bad, no destroyed pools were discovered with that command either.
It should be possible to create an empty zpool where you could bring back former zfs_members.
The drives have not been touched since the moment the pool was somehow destroyed, so why can it not be brought back together?
I did a scan with Klennet. I am still looking into that.

But I keep thinking there must be a way to force this back into a zpool, or something.
Is there no way to create a new zpool with a single drive and add this data back into it, or is it always wiped?

So I took this 120 GB SSD, installed Proxmox, installed Windows 10, and passed through the 2x 1 TB drives as shown above.
The thing is, I am missing about 64 GB on both drives in Windows. That means I had partitioned these drives with 32 GB mirrored for the TrueNAS boot and 32 GB for the mirrored ssd-storage-apps partition (the one I destroyed myself), but that data is somehow not readable in Klennet, which only shows about 2x 936 GB. Meaning the mirrored ZFS boot is probably still intact but also not recognized, because I tried to boot from it and the PC kept rebooting.
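
One way to sanity-check that partition layout from the Proxmox shell is to list both drives; the device names below are assumptions and may differ on your system:
Code:
lsblk -o NAME,SIZE,FSTYPE,LABEL /dev/nvme0n1 /dev/nvme1n1

The two ~32 GB partitions (boot mirror and the destroyed apps mirror) and the ~936 GB zfs_member data partitions should each show up as separate rows, which would confirm where the missing space went.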
 

AlexGG

Contributor
Joined
Dec 13, 2018
Messages
171
It should be possible to create an empty zpool.

Be careful about creating anything empty on TRIM-capable media. Once something decides the space is free and TRIMs it, the data is poof-gone, and that happens pretty quickly.
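
If you want to check whether the devices even advertise TRIM to the operating system, lsblk can show the discard parameters (device names assumed); non-zero DISC-GRAN and DISC-MAX values mean discards can reach the media:
Code:
lsblk --discard /dev/nvme0n1 /dev/nvme1n1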
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
No, you cannot create an empty ZFS pool and re-add members that already have data on them. ZFS pools need a device to start with, and when you add new devices, one of two things happens: either you get a warning that they are potentially part of a different ZFS pool and you have to force the add, which wipes the metadata on the new members, or you get no warning and the metadata is overwritten anyway.

Basically, you need to be exceptionally careful about what you try. Ideally, you would make a 100% block-for-block copy of the devices and attempt recovery on those copies. If/when recovery fails, simply make another block-for-block copy and try again.
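
A sketch of such a copy using dd (source and destination are placeholders; the destination must be a separate disk or file system with at least 1 TB free, never one of the zfs_member drives):
Code:
dd if=/dev/nvme0n1 of=/mnt/backup/nvme0n1.img bs=1M conv=noerror,sync status=progress

GNU ddrescue is an alternative that handles read errors more gracefully, but the idea is the same: work on copies and keep the originals untouched.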

Many of us on the forums suggest regular, supportable, and conservative hardware, software, and configurations, precisely to avoid situations like what happened to your pool.

Good luck.
 