Check on the status of a ZFS send job

Poetart · Dabbler · Joined: Jan 4, 2018 · Messages: 43
Try that, yes. Just don't save any new data to it, since those changes won't be reflected when you try to send the snapshot back to the "new" pool.

But since the snapshot exists, and the GUIDs match, trying to export/re-import and to trigger a mount to verify the files exist isn't really necessary. You can still do it though, if you want to be sure.
Just for my own sanity, I'll test that out.

Looks like it finds it just fine on the import.
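For anyone following along, a rough CLI sketch of that check (assuming the pool is named tds and the snapshot is tds@backup, as elsewhere in this thread):

```shell
# Import the pool (if it was exported) and confirm the snapshot exists.
zpool import tds
zfs list -t snapshot -r tds

# The snapshot's GUID should match the one on the other pool.
zfs get guid tds@backup
```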

Doesn't show anything in the storage tab.

but does show the folders in the sharing.

Seems like a permission issue when I try to access the share from my Windows machine, even with the same permissions and share setup.


I'll fiddle with this a bit more.
 

Poetart · Dabbler · Joined: Jan 4, 2018 · Messages: 43
Windows caching was causing some issues.
Got into the share now and everything looks good!

So I guess I am ready to remove the old datastore and remake it.
And I'll just be sending the same snapshot backup over? Not recreating it or anything, correct?
 
Joined: Oct 22, 2019 · Messages: 3,641
Doesn't show anything in the storage tab.
What is it supposed to show? You were only dealing with a single dataset.


And i'll just be sending the same snapshot backup over? Not recreating it or anything correct?
Yes, but the snapshot needs to be "nested" under the new pool's root dataset, so it'll end up like this:
tds@backup -> newtank/tds@backup

Don't try to do this, thinking you can "swap" tds for newtank:
tds@backup -> newtank@backup




Saving files and folders directly inside the root dataset, and stuffing everything into a single dataset is precarious, and you end up with more convoluted migrations.
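A sketch of both forms, using the names above (tds as the old dataset's snapshot, newtank as the new pool):

```shell
# Correct: nest the receive under the new pool's root dataset,
# so tds@backup ends up as newtank/tds@backup.
zfs send -Rv tds@backup | zfs recv newtank/tds

# Wrong: "swapping" tds for newtank targets the root dataset itself
# (newtank@backup) -- don't do this.
# zfs send -Rv tds@backup | zfs recv newtank
```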
 

Poetart · Dabbler · Joined: Jan 4, 2018 · Messages: 43
What is it supposed to show? You were only dealing with a single dataset.



Yes, but the snapshot needs to be "nested" under the new pool's root dataset, so it'll end up like this:
tds@backup -> newtank/tds@backup

Thought it would have included the .bhyve_containers & iocage datasets on the original pool, but nothing was stored in those to begin with, so it really doesn't matter.

Ohh so that will solve the root file problem during the migration.
That's extremely helpful, thanks!
So right now this is what I plan on using for the transfer back:

zfs send -Rv tds@backup | pv | zfs receive datastorev2/tds@backup

That look good?
Not sure what the flags do; trying to look them up now.
 
Joined: Oct 22, 2019 · Messages: 3,641
Thought it would have included the .bhyve_containers & iocage datasets on the original pool, but nothing was stored in those to begin with, so it really doesn't matter.
It depends how you configured and sent the dataset/snapshot. (If not "recursive" or "full", it will only send the specified dataset/snapshot, without including child datasets.)
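To illustrate the difference (dataset names from this thread; a recursive snapshot has to exist first, created with zfs snapshot -r):

```shell
# Recursive snapshot of tds and all of its child datasets:
zfs snapshot -r tds@backup

# Non-recursive send: only tds@backup itself; children are skipped:
zfs send tds@backup | zfs recv datastorev2/tds

# Recursive ("replication") send: includes child datasets such as
# tds/iocage, provided they were snapshotted:
zfs send -R tds@backup | zfs recv datastorev2/tds
```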


So right now this is what I plan on using for the transfer back:

zfs send -Rv tds@backup | pv | zfs receive datastorev2/tds@backup

That look good?
I hope so. :tongue:

I would advise you to get comfortable with tmux and to take advantage of "resume tokens".

On the first attempt of a send-recv, you don't need to point the "sending side" at an existing resume token. On any subsequent attempt that you want to resume, you need to specify the token with the -t flag.

However, on every attempt (first, second, third, etc.), you always tell the receiving side to generate a resume token with the -s flag.
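Putting that together, a sketch with the dataset names from this thread (the token value shown is a placeholder):

```shell
# Run the transfer inside tmux so it survives a dropped SSH session:
tmux new -s zfssend

# Every attempt: -s on the receiving side makes the receive resumable.
zfs send -Rv tds@backup | pv | zfs recv -s datastorev2/tds

# If it gets interrupted, read the token from the partial receive...
zfs get -H -o value receive_resume_token datastorev2/tds

# ...and resume the send with -t (no snapshot argument this time):
zfs send -t <resume_token> | pv | zfs recv -s datastorev2/tds
```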

 

Poetart · Dabbler · Joined: Jan 4, 2018 · Messages: 43

Getting an error when I try to send over the snapshot:

"Cannot receive: Cannot specify snapshot name for multi-snapshot stream"

*I just kept the same datastore name, with the capital D in Datastore.
 

Poetart · Dabbler · Joined: Jan 4, 2018 · Messages: 43
From what I gather, the dataset has to exist. All the data on the array is just Plex media files, so I just made a media dataset and pointed the zfs recv to that dataset.

Unless that is a bad way of doing it, it appears to be transferring back now.
 
Joined: Oct 22, 2019 · Messages: 3,641
Getting an error when I try to send over the snapshot:
I missed that part in your example command. The "recv" should only have the dataset; no snapshots specified. (The "send" takes care of that.)

zfs send -Rv tds@backup | pv | zfs receive datastorev2/tds




Unless that is a bad way of doing it, it appears to be transferring back now.
You didn't use "-s", so if it gets interrupted, you won't be able to resume.
 