Check on the status of a ZFS send job

Poetart

Dabbler
Joined
Jan 4, 2018
Messages
43
Moving my data to a temp array while I upgrade and rebuild my current one.
Total size is 60.7TB of used space.

I have been replicating this data for a few days, but Windows decided that now was the time to update, so my console connection closed. From the disk usage I can see that it is still writing to the disks.

Is there any way for me to see the progress of a ZFS send job that is currently in progress?
 
Joined
Oct 22, 2019
Messages
3,641
There should be a "Tasks" icon in the upper-right corner of any page you're viewing.

It looks like a "mini clipboard".

tasks-icon.png
 
Joined
Oct 22, 2019
Messages
3,641
Can you provide the hardware specs? Are they in enclosures or some sort of "RAID" card?

Is this the sending server or the receiving server?

Scroll down the "Tasks" list, since your replication is likely further down, as it was started some time ago.
 

Poetart

Dabbler
Joined
Jan 4, 2018
Messages
43
Dell R620 through a Dell 7RJDT SAS card to a NetApp 24-disk shelf, with the temp array in a Frankenstein storage box.

Current array is 3 x 5 x 8TB drives in RAIDZ2.
New array is going to be 4 x 6 x 8TB RAIDZ2.

Temp storage is just some drives I had available to me:
2 x 12TB
3 x 10TB
3 x 8TB
3 x 5TB
all just striped.

All the storage is local to the TrueNAS machine, so it's just a pool-to-pool clone, as the current array needs to be rebuilt / is at 97% full.

I don't see an active replication job in the task list, or one that has completed, but I still see data being written to the drives:
[screenshot]
 

Poetart

Dabbler
Joined
Jan 4, 2018
Messages
43
Just so I am not missing something, this is the current task list:
[screenshots of the task list]
 
Joined
Oct 22, 2019
Messages
3,641
I have been replicating this data for a few days
I wonder if there's a "cutoff" point for Tasks that are viewable in the list?

Can you use the "search" function to see if you can find anything from the keywords, such as "repl" or "send"? There's a small field to start typing text when you click the Tasks icon.
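If the Tasks list turns up nothing, a rough check from the shell is to look for the send process itself. This is just a sketch; nothing here is TrueNAS-specific:

```shell
# Rough check for an in-flight replication: look for a running
# `zfs send` process. The [z] trick keeps grep from matching itself.
if ps ax | grep -q "[z]fs send"; then
    echo "zfs send is running"
else
    echo "no zfs send process found"
fi
```

If it is running, `zpool iostat tds 5` (destination pool name assumed) shows live write throughput, which is about as close to a progress bar as a raw `zfs send` gets.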
 

Poetart

Dabbler
Joined
Jan 4, 2018
Messages
43
"send" gives me nothing, and "rep" gives me the errored tasks.

I can tell by this point that data is getting written. It's going to fill that drive up pretty much to the top, and it has gone from 10TB to 5TB free in the last 8 hours or so.
 

Poetart

Dabbler
Joined
Jan 4, 2018
Messages
43
So it looks like it completed overnight.

Although I have no idea how to validate the snapshot that was sent over.
Is there a way to validate it?
 
Joined
Oct 22, 2019
Messages
3,641
Is there a way to validate it?
If the snapshot exists, then it was a success. There is no "partial" snapshot-to-snapshot replication.

If instead you see datasets with obscure names, such as mydata_023823, then this could be the result of a failed replication.
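A quick way to eyeball this from the shell is to list the snapshots on both pools side by side. A sketch, with the pool names from this thread assumed:

```shell
# List every snapshot under a pool; run it for both the source and the
# destination and check that the backup snapshot appears in each list.
list_snaps() {
    zfs list -t snapshot -o name -r "$1"
}

# list_snaps Datastore
# list_snaps tds
```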
 

Poetart

Dabbler
Joined
Jan 4, 2018
Messages
43
The temp pool (TDS) is filled now, but I assumed it would restore the files. I assume it just sends the snapshot itself to the new datastore.

[screenshots]


Just trying to validate, as the next step is to delete the datastore and remake it.
 

Joined
Oct 22, 2019
Messages
3,641
Does the same exact backup snapshot exist on both pools?

Your root dataset is over 60 TB? You saved everything directly inside the root dataset? :oops:

The temp pool (TDS) is filled now, but I assumed it would restore the files. I assume it just sends the snapshot itself to the new datastore.
Can you rephrase this? I'm not sure what you mean.
 

Poetart

Dabbler
Joined
Jan 4, 2018
Messages
43
Datastore: Where the media is stored
TDS: The temp storage where I'm trying to store the snapshot
[screenshot]


This is what I am seeing on the snapshot page now that the process is complete.
[screenshot]
 

Poetart

Dabbler
Joined
Jan 4, 2018
Messages
43
Also, I just assumed the zfs send command would extract the datasets / data to the pool. It looks like it just moved the snapshot itself, and it is inaccessible for file browsing.
 
Joined
Oct 22, 2019
Messages
3,641
If you want to confirm:
Code:
zfs get guid Datastore@backup
zfs get guid tds@backup
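As a sketch, those two lookups can be wrapped into one comparison; `-H -o value` strips the header so only the number is printed. The snapshot name `backup` is assumed here:

```shell
# Compare the GUIDs of the source and destination snapshots; matching
# GUIDs mean the exact same snapshot exists on both pools.
check_same_snapshot() {
    src=$(zfs get -H -o value guid "$1")
    dst=$(zfs get -H -o value guid "$2")
    if [ "$src" = "$dst" ]; then
        echo "match"
    else
        echo "differ"
    fi
}

# check_same_snapshot Datastore@backup tds@backup
```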
 
Joined
Oct 22, 2019
Messages
3,641
Also, I just assumed the ZFS send command would extract the datasets / data to the pool. Looks like it just moved the snapshot itself and it is inaccessible for file browsing.
The snapshot is a property of the dataset. Meaning, the filesystem tds@backup should be the same as the filesystem Datastore@backup.

The reason you cannot browse the files is because the dataset hasn't been mounted. This might have been an option configured in the task. You can try to export and then re-import the tds pool, which should mount the relevant datasets. If it does not, you need to check this property:
Code:
zfs get canmount tds
zfs get mountpoint tds


Sidenote: It's bad practice to save files and folders directly inside the root dataset of any pool.
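If those properties come back as canmount=off or mountpoint=none, a hedged fix would look like this. The property values and the /mnt prefix are assumptions; TrueNAS keeps its pools mounted under /mnt:

```shell
# Assumed fix if the replicated dataset refuses to mount: re-enable
# mounting, point it under /mnt (where TrueNAS keeps pools), mount it.
remount_dataset() {
    zfs set canmount=on "$1"
    zfs set mountpoint="/mnt/$1" "$1"
    zfs mount "$1"
}

# remount_dataset tds && ls /mnt/tds
```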
 

Poetart

Dabbler
Joined
Jan 4, 2018
Messages
43
GUIDs match. That size difference was throwing me off.
This is the output of those two commands.

And just to be clear, you are saying to export the tds pool and just re-import it?

Also, that is a good point to note. When I get everything moved over, I'll make a new dataset to hold all the media.

[screenshot of the command output]
 
Joined
Oct 22, 2019
Messages
3,641
And just to be clear, you are saying to export the tds pool and just re-import it?
Try that, yes. Just don't save any new data to it, since those changes won't be reflected when you try to send the snapshot back to the "new" pool.

But since the snapshot exists, and the GUIDs match, trying to export/re-import and to trigger a mount to verify the files exist isn't really necessary. You can still do it though, if you want to be sure.
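For reference, the export/re-import cycle sketched as a small shell helper, with the pool name from this thread assumed; only do this while nothing else is using the pool:

```shell
# Export and re-import the temp pool; the import re-runs the mount
# logic, so datasets with sane mount properties come back mounted.
refresh_pool() {
    zpool export "$1"
    zpool import "$1"
    zfs list -r -o name,mountpoint "$1"   # confirm what got mounted
}

# refresh_pool tds
```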
 