scurrier
Summary: I'm getting an "error getting available space" and "error getting total space" on my backup mirrors while replicating. I may have brought this on myself by changing the compression method on the backup dataset while replicating to it.
--------------------------------------------------------------
Today, I set up replication and got it running. I am using it to back up from one local ZFS array to another. The target dataset is nested one level below a top-level zvol. After the replication started, an entry appeared on the replication page showing progress at 0%. I'm replicating about 4 TB of data, but the progress didn't move after waiting well past the point where I thought it should have.
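For context, my understanding is that under the hood the GUI replication task boils down to a recursive snapshot plus a send/receive pipeline, roughly like this (pool and dataset names here are placeholders, not my actual ones):

    # rough equivalent of what the replication task runs (placeholder names)
    zfs snapshot -r tank/data@auto-snap
    zfs send -R tank/data@auto-snap | zfs recv -F backup/data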
So I began to investigate:
- I looked at disk usage and found the replication was barely touching the disks. We're talking occasional flashes of a few MB/s of reads and writes.
- I looked at the replication process and found it sitting at 100% WCPU (rough commands for both checks are below).
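For anyone who wants to reproduce those checks, something along these lines from the FreeNAS shell shows the same information (exact invocations are approximate, and the grep pattern is just a guess at the process names):

    # per-disk throughput; I only saw occasional bursts of a few MB/s
    gstat
    # the replication processes themselves; WCPU was pinned at 100%
    top -SH
    ps auxww | egrep 'zfs (send|recv|receive)'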

...the storage tab shows an additional row with an error message about my backup dataset. So that's a problem. I searched the forum and couldn't find anyone who had run into this issue the same way I did. The zvol parent of this dataset is still reported as "healthy."
You'll also notice the backup dataset shows very little used space. So even if I borked this dataset by changing compression while replicating, I doubt everything was A-OK before that, because the replication should have been filling up the dataset faster.
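If it would help with an investigation, this is the kind of output I can grab from the shell; "backup" and "backup/data" below are placeholders for my actual pool and backup target:

    # space accounting and compression settings on the backup target (placeholder names)
    zfs list -o space -r backup
    zfs get compression,compressratio,used,available backup/data
    zpool status -v backup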
Is this a bug or did I do something stupid? Or both?
Is there any data I could gather to help with an investigation?
Any advice on how to cleanly stop a replication that's in progress?