Restoring data from Replication task

Joined
Jan 17, 2023
Messages
3
Being new to all this, I started to set up my first dataset, saw deduplication as an option, and thought, "Wonderful, I have at least 200GB of duplicate files." With 16GB of RAM I wasn't too worried about the performance-hit warning that popped up, and I even dedicated a 500GB SSD for the dedup tables. After importing a bit under 3TB to the pool I could see the performance hit, and a bit of googling taught me that dedup doesn't even do what I thought it did and would basically make my system almost unusable. (A little popup explaining some of the options while you are setting them up would have been very helpful.)

I decided to copy the files back and nuke the dataset so I could start over. Since I couldn't find a good way to export the data back to NTFS (those drives had already been wiped and added to the pool), I decided to use a replication task to back up my data so I could nuke the pool and then rebuild it without deduplication. I made a new pool, saw that the replication completed successfully, nuked pool 1, and now I have no idea how to get the data back. I'm using SCALE, by the way, and everything I have read and seen about restoring from a replication looks like it was done on CORE, because I don't see any of the options they are using. I also have no experience with Linux or BSD (I'm OK on DOS, though), so it's more a matter of just not knowing any of the commands.

Stupidly, I replicated it right to the top level of the 2nd pool, which is probably a large part of why I'm struggling here.

At this point, even if I could only restore the old pool and copy the data off the very, very, very slow way, I would be fine with it.

Thanks for taking the time to read all this, and especially if you have any suggestions to help.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Welcome to the forums.

Sorry to hear you're having trouble. Please take a few moments to review the Forum Rules, conveniently linked at the top of every page in red, and pay particular attention to the section on how to formulate a useful problem report, especially including a detailed description of your hardware.

Dedup is not to be taken lightly, and once enabled, it is very difficult to expunge. You cannot just delete a dataset to get rid of it, as it becomes a fundamental part of your pool. 16GB of RAM on SCALE gives you about 8GB of ARC, and 8GB of ARC is nowhere near enough to support a 500GB SSD for your dedup tables; dedup requires gobs of RAM even if just for the pointers to the DDT. It isn't clear to me exactly what has happened to you, so I'm not going to comment further, but hopefully someone else has a suggestion or two.
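For anyone reading later who still has the dedup'd pool intact, you can get a feel for this yourself by looking at the DDT directly. Something along these lines from a root shell (the pool name is just a placeholder, the output format varies by OpenZFS version, and you may need to point zdb at the system's zpool.cache with -U if it can't see the pool):

    zdb -DD poolname

The summary shows how many DDT entries there are and how much space they take "on disk" versus "in core"; the in-core number is what has to fight with everything else for that 8GB of ARC.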
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
I would start with zfs list

Once you get a handle on what data you have and where it is (from that output), you can help us to understand what you want the end result to be.
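For example, from the Shell (poolname below is just a placeholder for whichever pool you want to inspect):

    zfs list -r -o name,used,mountpoint
    zfs list -t snapshot -r poolname

The first command lists every dataset with how much it holds and where (or whether) it is mounted; the second lists whatever snapshots the replication task carried across. Paste the output here and we can take it from there.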
 
Joined
Jan 17, 2023
Messages
3
Sorry for not explaining my setup enough. Pool 1 I called pp; it was five 3TB drives in RAIDZ2. When I decided to replicate it, I created pool 2, qq, out of 3 or 4 small drives (all I had left) with no redundancy. I nuked pool 1, and since I read somewhere that it's easier to restore if you name the pool the same thing it was, pool 3 is also called pp. No, I do not have any apps, but I can't seem to get rid of them from either pool. Every time I try to share qq I get this error:
[screenshot of the sharing error]

But you can see in the zfs list output that the 2.7GB of data is there; I just don't know how to get to it or do anything with it.
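From my googling I'm guessing I need something like this to see where (or whether) the replicated data under qq is actually mounted, but I don't want to run anything blindly, so please correct me if this is wrong:

    zfs get -r mountpoint,mounted qq

Is that roughly the right direction, or do I need to somehow replicate it back onto the new pp instead?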
 

Attachments

  • zfs list.png