Importing pool crashes and server reboots

Mithor

Dabbler
Joined
Jan 30, 2023
Messages
13
Thanks for the tips. I was now able to copy the zvol from the readonly HostingPool to the PreciousPool without any error messages.

But when I list the volumes on the server I discover that the columns USED and AVAIL differ:
Code:
$ sudo zfs list -t volume
NAME                                      USED  AVAIL  REFER  MOUNTPOINT
HostingPool/VMs/vm-disks/dctmsrv1-wiem8  91.0G  3.33T  9.39G  -
PreciousPool/VMs/dctmsrv1-wiem8          9.39G  8.52T  9.39G  -

Is that because the stats reflect the pool as a whole and not just the zvol?

PreciousPool has about 11 TiB and HostingPool about 3.6 TB of usable capacity.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
Nice! Which method did you end up using?

Yes, AVAIL shows what is available to the pool as a whole, not just that zvol.
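You can cross-check against the pool-level numbers; note that zpool list reports raw capacity including parity/redundancy overhead, so FREE won't match AVAIL exactly:
Code:
$ sudo zpool list HostingPool PreciousPool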

Just in case you weren't aware: you only transferred the current state of the zvol, not any previous snapshots. Sorry, I forgot to mention it earlier.

If you need the previous snapshots, you'd need to check the documentation for zfs send to see how to transfer them all; I think it's the -R flag.
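From memory, the shape would be something like this (untested, so double-check the man page; <snapshot> is one of the snapshot names on the source, and since the destination already exists you may need a fresh destination name or -F on the receive):
Code:
$ sudo zfs send -R HostingPool/VMs/vm-disks/dctmsrv1-wiem8@<snapshot> | sudo zfs receive -v PreciousPool/VMs/dctmsrv1-wiem8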
 

Mithor

Dabbler
Joined
Jan 30, 2023
Messages
13
I tried adding the -R flag but got an error saying it was unsupported:
Code:
$ sudo zfs send -R HostingPool/VMs/vm-disks/dctmsrv1-wiem8 | sudo zfs receive -v PreciousPool/VMs/dctmsrv1-wiem8
Error: Unsupported flag with filesystem or bookmark.
cannot receive: failed to read from stream


The source volume holds 5 snapshots:
Code:
$ sudo zfs list -t snapshot | grep dctmsrv1-wiem8
HostingPool/VMs/vm-disks/dctmsrv1-wiem8@Debian_12.2_LVM_SSH_2023-10-31_19-26                                                                                            516M      -  5.08G  -
HostingPool/VMs/vm-disks/dctmsrv1-wiem8@PostgreSQL15_pgAdmin4-2023-11-06_17-37                                                                                         16.5M      -  6.04G  -
HostingPool/VMs/vm-disks/dctmsrv1-wiem8@ODBC_2023-11-10_22-21                                                                                                          12.6M      -  6.15G  -
HostingPool/VMs/vm-disks/dctmsrv1-wiem8@OpenJDK17_2023-11-10_22-56                                                                                                     8.35M      -  6.50G  -
HostingPool/VMs/vm-disks/dctmsrv1-wiem8@preSetup_2023-11-11_00-15                                                                                                      6.87M      -  9.08G  -

Maybe I should do send-receive incrementally, one snapshot at a time?
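Something like this for each step, if I'm reading the zfs-send man page correctly (untested, and the oldest snapshot would presumably need a full send first):
Code:
$ sudo zfs send -i @Debian_12.2_LVM_SSH_2023-10-31_19-26 HostingPool/VMs/vm-disks/dctmsrv1-wiem8@PostgreSQL15_pgAdmin4-2023-11-06_17-37 | sudo zfs receive -v PreciousPool/VMs/dctmsrv1-wiem8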
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
it looks like your syntax is incorrect. you need to specify a *snapshot* to send, but you specified the dataset instead

sudo zfs send -R HostingPool/VMs/vm-disks/dctmsrv1-wiem8 | sudo zfs receive -v PreciousPool/VMs/dctmsrv1-wiem8
should look more like
sudo zfs send -R HostingPool/VMs/vm-disks/dctmsrv1-wiem8@Debian_12.2_LVM_SSH_2023-10-31_19-26 | sudo zfs receive -v PreciousPool/VMs/dctmsrv1-wiem8

there is also a way to send all snaps but i don't remember it exactly. something like
sudo zfs send -R HostingPool/VMs/vm-disks/dctmsrv1-wiem8@ | sudo zfs receive -v PreciousPool/VMs/dctmsrv1-wiem8
 

Mithor

Dabbler
Joined
Jan 30, 2023
Messages
13
Some success.

I was able to send all snapshots of the source volume to the destination volume.

The source volume had five snapshots:
Code:
$ sudo zfs list -t snapshot | grep dctmsrv1
HostingPool/VMs/vm-disks/dctmsrv1-wiem8@Debian_12.2_LVM_SSH_2023-10-31_19-26                                                                                            516M      -  5.08G  -
HostingPool/VMs/vm-disks/dctmsrv1-wiem8@PostgreSQL15_pgAdmin4-2023-11-06_17-37                                                                                         16.5M      -  6.04G  -
HostingPool/VMs/vm-disks/dctmsrv1-wiem8@ODBC_2023-11-10_22-21                                                                                                          12.6M      -  6.15G  -
HostingPool/VMs/vm-disks/dctmsrv1-wiem8@OpenJDK17_2023-11-10_22-56                                                                                                     8.35M      -  6.50G  -
HostingPool/VMs/vm-disks/dctmsrv1-wiem8@preSetup_2023-11-11_00-15                                                                                                      6.87M      -  9.08G  -


Using the -R option together with the name of the latest snapshot, all five snapshots were sent:
Code:
$ sudo zfs send -R HostingPool/VMs/vm-disks/dctmsrv1-wiem8@preSetup_2023-11-11_00-15 | sudo zfs receive -v PreciousPool/VMs/dctmsrv1-wiem8

Now both the source and the destination have the same set of snapshots:
Code:
$ sudo zfs list -t snapshot | grep dctmsrv1                                                                                              
HostingPool/VMs/vm-disks/dctmsrv1-wiem8@Debian_12.2_LVM_SSH_2023-10-31_19-26                                                                                            516M      -  5.08G  -
HostingPool/VMs/vm-disks/dctmsrv1-wiem8@PostgreSQL15_pgAdmin4-2023-11-06_17-37                                                                                         16.5M      -  6.04G  -
HostingPool/VMs/vm-disks/dctmsrv1-wiem8@ODBC_2023-11-10_22-21                                                                                                          12.6M      -  6.15G  -
HostingPool/VMs/vm-disks/dctmsrv1-wiem8@OpenJDK17_2023-11-10_22-56                                                                                                     8.35M      -  6.50G  -
HostingPool/VMs/vm-disks/dctmsrv1-wiem8@preSetup_2023-11-11_00-15                                                                                                      6.87M      -  9.08G  -
PreciousPool/VMs/dctmsrv1-wiem8@Debian_12.2_LVM_SSH_2023-10-31_19-26                                                                                                    516M      -  5.08G  -
PreciousPool/VMs/dctmsrv1-wiem8@PostgreSQL15_pgAdmin4-2023-11-06_17-37                                                                                                 16.5M      -  6.04G  -
PreciousPool/VMs/dctmsrv1-wiem8@ODBC_2023-11-10_22-21                                                                                                                  12.6M      -  6.15G  -
PreciousPool/VMs/dctmsrv1-wiem8@OpenJDK17_2023-11-10_22-56                                                                                                             8.35M      -  6.50G  -
PreciousPool/VMs/dctmsrv1-wiem8@preSetup_2023-11-11_00-15                                                                                                                 0B      -  9.08G  -


Unfortunately, I do not believe this operation included the changes made in the source volume after the latest snapshot, since the two volumes differ in size:
Code:
$ sudo zfs list -t volume                
NAME                                      USED  AVAIL  REFER  MOUNTPOINT
HostingPool/VMs/vm-disks/dctmsrv1-wiem8  91.0G  3.33T  9.39G  -
PreciousPool/VMs/dctmsrv1-wiem8          91.0G  8.52T  9.08G  -


To get the latest changes I would need to create another snapshot, but that is not possible since I could only import the pool as read-only:
Code:
$ sudo zfs snapshot HostingPool/VMs/vm-disks/dctmsrv1-wiem8@latest_changes
cannot create snapshots : pool is read-only


So I guess I have to settle for only getting the five existing snapshots into the copied volume and continue from there.
Unless you've got any other tips to share.

Thanks for all the guidance that helped me find the right zfs operations to use in this situation.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
you could use something like find to identify all files that have changed since the snapshot and rsync them
only you know what the content is and whether that's worth trying.
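very rough sketch, assuming you can mount the filesystems from both the current VM disk and your copy somewhere readable (the mount points are placeholders, and the cutoff should match your newest snapshot's timestamp):
Code:
# /mnt/current = filesystem from the current VM disk, /mnt/copy = the replicated copy
$ cd /mnt/current
$ sudo find . -type f -newermt '2023-11-11 00:15' -print0 | sudo rsync -av --from0 --files-from=- . /mnt/copy/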

as you have experienced, zfs is not great at data recovery; its design heavily assumes that you have backups.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
Unfortunately, I do not believe this operation included the changes made in the source volume after the latest snapshot, since the two volumes differ in size:
Code:
$ sudo zfs list -t volume               
NAME                                      USED  AVAIL  REFER  MOUNTPOINT
HostingPool/VMs/vm-disks/dctmsrv1-wiem8  91.0G  3.33T  9.39G  -
PreciousPool/VMs/dctmsrv1-wiem8          91.0G  8.52T  9.08G  -

But when I list the volumes on the server I discover that the columns USED and AVAIL differ:
Code:
$ sudo zfs list -t volume
NAME                                      USED  AVAIL  REFER  MOUNTPOINT
HostingPool/VMs/vm-disks/dctmsrv1-wiem8  91.0G  3.33T  9.39G  -
PreciousPool/VMs/dctmsrv1-wiem8          9.39G  8.52T  9.39G  -

The problem is that you specified an existing snapshot when using -R, so everything after that snapshot wasn't included. However, when you used the command as I suggested, without -R and without specifying a snapshot, the current state of the zvol was sent.

So in theory there should be a way to combine both methods.

Replicate as I suggested to get the current data, and then also replicate with the -R flag to a different dataset (just in case), e.g. HostingPool/VMs/vm-disks-backups/. After some amount of time, once you decide you no longer need the older snapshots, delete them. This way you will have all of your data, but at the cost of using roughly double the space.
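Roughly like this; the destination names are just examples:
Code:
$ sudo zfs send HostingPool/VMs/vm-disks/dctmsrv1-wiem8 | sudo zfs receive -v PreciousPool/VMs/dctmsrv1-wiem8-latest
$ sudo zfs send -R HostingPool/VMs/vm-disks/dctmsrv1-wiem8@preSetup_2023-11-11_00-15 | sudo zfs receive -v PreciousPool/VMs/dctmsrv1-wiem8-snapshots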

Or maybe someone more knowledgeable knows how to combine the -R flag with the fact that the current state can apparently be captured without running zfs snapshot. Maybe the -i / -I flags come into play here.
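For reference, -I sends all intermediate snapshots between two existing snapshots in one go, something like the line below (untested); it still requires the newer snapshot to already exist, though, so it doesn't get around the read-only problem:
Code:
$ sudo zfs send -I @Debian_12.2_LVM_SSH_2023-10-31_19-26 HostingPool/VMs/vm-disks/dctmsrv1-wiem8@preSetup_2023-11-11_00-15 | sudo zfs receive -v PreciousPool/VMs/dctmsrv1-wiem8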

For lack of better knowledge: if you need the older snapshots, temporarily store two copies as described above.
 

Mithor

Dabbler
Joined
Jan 30, 2023
Messages
13
I decided to go for two copies and created the second copy with:
Code:
$ sudo zfs send HostingPool/VMs/vm-disks/dctmsrv1-wiem8 | sudo zfs receive -v PreciousPool/VMs/dctmsrv1-wiem8-latest
receiving full stream of HostingPool/VMs/vm-disks/dctmsrv1-wiem8@--head-- into PreciousPool/VMs/dctmsrv1-wiem8-latest@--head--
received 13.3G stream in 11.80 seconds (1.12G/sec)

... and got (where HostingPool is the source):
Code:
$ sudo zfs list -t volume                                                                                           
NAME                                      USED  AVAIL  REFER  MOUNTPOINT
HostingPool/VMs/vm-disks/dctmsrv1-wiem8  91.0G  3.33T  9.39G  -
PreciousPool/VMs/dctmsrv1-wiem8          91.0G  8.51T  9.08G  -
PreciousPool/VMs/dctmsrv1-wiem8-latest   9.39G  8.43T  9.39G  -


As I mentioned earlier, using the -R option without specifying a snapshot returned Error: Unsupported flag with filesystem or bookmark.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
As I mentioned earlier, using the -R option without specifying a snapshot returned Error: Unsupported flag with filesystem or bookmark.
No doubt, I just think there has to be some sort of workaround I don't know about.

But the result is what matters, and for now you've got all your data! :)
 