Select all ZFS snapshots for deletion in one click

Status
Not open for further replies.

sysfu

Explorer
Joined
Jun 16, 2011
Messages
73
I'm trying to find a way to "select all" in the ZFS snapshots screen so I can destroy multiple snapshots with a few clicks. Currently it seems that each snapshot must be checked manually before clicking the destroy button at the bottom left.

I have hundreds of snapshots to destroy and manually checking each box with the mouse is utter madness.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
I agree with ProtoSD. But as another alternative in the meantime, you should be able to find a script (or make your own) that can delete them all for you with just a few commands.
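A rough sketch of what such a one-liner might look like from the console (tank/mydataset is just a placeholder; put an "echo" in front of the second zfs to dry-run it first):
Code:
# list every snapshot under the dataset, one name per line, then destroy each one
zfs list -H -t snapshot -o name -r tank/mydataset | xargs -n 1 zfs destroy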
 

sysfu

Explorer
Joined
Jun 16, 2011
Messages
73
Turns out you can use shift+click in the GUI to perform the equivalent of select all. Thanks, William, for bringing this to my attention in the ticket.
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
Thanks for posting that. I'm not sure if that's in the manual, but it's good to know.
 

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,231
Fantastic! My periodic snapshots have got a little out of hand (I need to configure these a little better) and I wasn't looking forward to 3000+ clicks :o

If this isn't in the manual, it should be!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
It is in the manual. And a good administrator should never need to manually delete snapshots. That can totally fubar replication in nasty ways.
 

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,231
I'm deleting them so I can set up replication :p but thanks for the advice. Once I've got it set up properly I won't delete any more, promise!

Unsurprisingly, it doesn't like doing them all in one go, but seems happy with 100 at a time. That's still 100x better!
 

RichTJ99

Patron
Joined
Sep 12, 2013
Messages
384
Very helpful advice! I needed to do this & will hopefully not need to do it again.
 

brumnas

Dabbler
Joined
Oct 4, 2015
Messages
33
I wanted to trash my snapshots yesterday as well, but in the GUI it just said "working" - I waited about 10 minutes and then closed the dialog. Then I fired
Code:
zfs destroy pool2/backup
, which held all the unwanted snapshots (coming from the periodic sync from pool1). pool2 was still visible with all those datasets in the GUI, so I dismounted it from the GUI and then wanted to re-import the volume (i.e. the pool). But the disks were not selectable! So I rebooted and tried again: nothing, the pool was gone. The whole idea was to get rid of the snapshots synced from pool1.

The interesting point is from @cyberjock - that the synced snapshot scheme can somehow break things if you manually delete snapshots. I haven't found anything related to this so far, but it seems to be very true. I tried to manually delete the synced pool2 snapshots (i.e. the copies), and of course the sync was not running at that time. And I ended up with a broken pool.

Situation:
- pool1 is source
- pool2/sync is sync destination
- pool1 was reorganized heavily, which caused very big snapshots
- pool2 is a bit smaller than pool1, so I wanted to clean up the unneeded snapshots while keeping the snapshots on pool1 for the moment
- both pool1/2 are encrypted

FreeNAS behavior:
- Clicking "destroy" on pool2/backup froze up in "waiting" (or loading or something like that)
- manually destroying poo2/backup went smoothly, but GUI was still showing it
- trying to "detach" pool2 in GUI (I guess it's an "export") went ok but GUI didn't show the disks in "import volume"

A bit scary for me; it seems that if the GUI is not responding and I work around it from the console, the pool can be gone like magic. But what do I do if the GUI also stops responding after the reboot?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
So, there's a few things to discuss:

1. If you destroy a dataset that has a lot of used space that must be freed, ZFS will work in one of two modes:

a. If you have the zfs feature flag async_destroy enabled (check with zpool get all <zpoolname>), then the destroy should return in a few seconds and ZFS will asynchronously destroy the data as it goes.
b. If the feature flag isn't listed or is disabled, then the dataset must be destroyed synchronously. This could take a few minutes, or a few days. The speed of the zpool and the rate at which ZFS can find all of the used blocks to clear them in a single transaction are the limiting factors.

If the flag is enabled, the async_destroy feature will clean up the disk space in the background and your freed space will slowly increase over time. It is possible that the async_destroy process itself will use up so much I/O that the zpool will go catatonic until the job finishes.
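If you just want to check that one flag rather than scanning the whole "zpool get all" output, something like this should work (pool2 here is only an example name):
Code:
zpool get feature@async_destroy pool2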

Once you've attempted to destroy a dataset, if the system is rebooted (regardless of the feature flag being enabled or not), the system *must* complete the deletion before the zpool will be mountable. So on bootup the system will start the mounting process of the zpool, then stop and handle the open transaction, and once that is done it will continue the bootup sequence. There are some ways to get around this, but they are somewhat hacky and nothing I'd recommend. The best advice is to leave the system on and let it run until it finishes. Assuming your system isn't corrupt or having some kind of hardware failure, it *will* eventually finish.
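One way to see whether that background cleanup is still pending is the pool's "freeing" property, which reports how many bytes are still waiting to be reclaimed; when it drops to 0 the destroy has finished (again, pool2 is just an example name):
Code:
zpool get freeing pool2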

It's possible that your detach had the "destroy the zpool" checkbox checked and you destroyed the zpool permanently. I can't say exactly what happened because I've never tried to export a zpool that had a bunch of async destroy work to do, but there could be some kind of unforeseen consequence to doing this that I'm not aware of. Not many people destroy a dataset and then try to export the zpool. ;)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
As for the snapshot and replication code: if you are using any version of FreeNAS (or TrueNAS) from after about Dec 2015, then the replicator is much more friendly to you destroying snapshots on the source system (you still shouldn't be doing this on the destination system unless you know what you are doing). But you must ensure that your source and destination have at least one snapshot in common, or the source system will destroy the destination dataset and begin re-replicating all of the data from scratch.
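A quick way to sanity-check that before letting replication run again is to compare the snapshot names on both sides; a rough sketch, with pool1/shares and pool2/backup/shares as stand-in dataset names (run each zfs list on the system that actually holds that pool):
Code:
# keep only the part after the @, sorted, then print the snapshot names both sides have in common
zfs list -H -t snapshot -o name -d 1 pool1/shares | sed 's/.*@//' | sort > /tmp/src_snaps
zfs list -H -t snapshot -o name -d 1 pool2/backup/shares | sed 's/.*@//' | sort > /tmp/dst_snaps
comm -12 /tmp/src_snaps /tmp/dst_snaps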
 

brumnas

Dabbler
Joined
Oct 4, 2015
Messages
33
Thank you, nice :). I've checked the async flag: it's ON for both pool1 and pool2 - so it was probably running in the background =>

Question #1: Could I find it then with something like "ps -ax | grep zfs"? It's not running now, so I can't check.

Question #2: Is it safe (GUI-wise) to delete the dataset from the console with something like "zfs destroy pool/ds"?

Today I simulated the whole thing once again: I replicated all the snapshots from pool1 onto pool2, which is too small to hold them, so it eventually stopped and gave me a "pool full" warning. So far so good. But now I would like to delete all those SS from pool2, after having disabled the replication. I've looked into the crontab and checked "autosnap.py", which reads this "enabled" flag before syncing the SS. So it should be safe to trash the pool2/backup (destination) snapshots, or the whole "backup" dataset itself. But: is it really?

I followed the advice from this forum and wanted to select all the SS in the browser with mouse-clicking - there are 5880 of them ;-).. I already felt bad about trying to select them in the browser; the way the GUI looks, it doesn't smell like the most stable jQuery solution.. Of course it sucked; I was able to select them, but clicking "destroy" did just nothing. So I tried to select fewer of them, but still the same: "destroy" does nothing. =>

Question #3: Is there any "safe" limit on how many SS can be selected for destroying in the GUI? If so, it should be enforced for the user (I guess FreeNAS 10 will behave differently anyway).

Question #4: May I trash all of these from the console without confusing the GUI/FreeNAS addons?

To answer your points above: no, I surely didn't select "mark as unused" while detaching the volume. But it's true that the async process may have been doing its job - my failure that I didn't notice this, I'm new to ZFS. Maybe the zfs process was destroying in the background and I forced the export.. Hm.. a really ugly thought :).. But it shouldn't trash my pool - the disks were totally "pool-unaware" after that; I was really sweating after the NAS reboot, as pool1 was the only copy I was left with. So now I've attached an external Seagate 8TB BackupPlus, formatted it for UFS from the terminal, and copied the most important stuff just to be sure..

PS: I'm in the process of switching from a Synology 12-disk box to my first FreeNAS Supermicro-X11/XeonV5/64GB build - which is another story, as I'm living proof that one can do that based on these forums ;-). Although there were a few very unclear things which made/make me scratch my head!
 

brumnas

Dabbler
Joined
Oct 4, 2015
Messages
33
UPD: I've risked it ;-).. To help others looking to clean up space taken by a lot of snapshots:

1. Get the space used by snapshots vs real data:
Code:
zfs list -o space -t all pool1/shares


Notice: the real space used by a snapshot is not visible without the "-o space".

2. List all the snapshots for a folder:
Code:
zfs list -t snapshot | grep /XXX | less


Notice: I was not able to list it like "zfs list -t snapshot pool1/", as I wanted to refer to the "root" of pool1, which FreeNAS shows as "pool1/pool1" - a bit confusing (it's named a "volume", but it's a root dataset filesystem). I didn't know what to enter after the "pool1/", as "pool1/pool1" didn't work. So I grepped for it - somebody could probably give me a clue :).

3. See what would happen when cleaning those up (i.e. simulate it with "-n"):
Code:
zfs destroy -rvn POOL/XXX@%


Notice: the % acts as a wildcard matching all the snapshots. This also shows how much space would be freed up.

4. Empty 'em
..just leave out "-n" from the command above
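In other words, with the same placeholder names as above:
Code:
zfs destroy -rv POOL/XXX@%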

The response is immediate, no waiting or anything like that. I looked for zfs doing something asynchronously: nope, nothing; "htop" showed nothing, "ps -ax | grep zfs" nothing - I thought I had made a mistake, but it is really that fast!

And the FreeNAS GUI is happy, the space is freed up!

Happy Sunday :)
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
You can specify the dataset you want to look at
Code:
zfs list -t snap -rd1 XXX
and not need the grep. The additional flags specify recursion and limit the depth to one. The depth might not be needed, as you're already restricting the output to snapshots.
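So for the "root of pool1" question above, the dataset to use is just the pool name itself, e.g.:
Code:
zfs list -t snap -rd1 pool1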
 

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
Turns out you can use shift+click in the GUI to perform the equivalent of select all. Thanks, William, for bringing this to my attention in the ticket.
Update: in FreeNAS 11.1, shift-click (in Edge) didn't work. What did work was selecting them in one sweep by keeping the left mouse button pressed and dragging over all the line numbers so they are marked blue.
 