Unable to Destroy Snapshot - IO ERROR

NicoLambrechts

Dabbler
Joined
Nov 13, 2019
Messages
12
I am unable to delete snapshots from the UI or the command line.
The error in the GUI:
Items Delete Failed: [EFAULT] I/O error
From the command line:
cannot destroy snapshots: I/O error

Any ideas?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
So you have an I/O error on at least one of the disks in your pool...

dmesg | grep da may help you to identify which disk(s).

zpool status -v

For the disks in that pool:
smartctl -a /dev/daX (replace X with the disk number, and/or use adaX if that's appropriate for your disks)
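On a many-disk system, a quick loop over the device nodes saves typing. This is just a sketch assuming FreeBSD-style /dev/daX names; adjust the glob for adaX or other device naming:

```shell
# Run the overall SMART health check on every da device.
# Use smartctl -a instead of -H if you want the full attribute dump.
for d in /dev/da[0-9]*; do
  echo "== $d =="
  smartctl -H "$d"
done
```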
 

NicoLambrechts

Dabbler
Joined
Nov 13, 2019
Messages
12
zpool status -v shows no errors.
SMART is enabled on all disks (this is a 60-disk system with 4x RAIDZ2 vdevs).
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Run a scrub and let's see where that goes.

What hardware are you running? OS version?
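For reference, kicking off and then monitoring a scrub looks like this (the pool name tank is a placeholder):

```shell
# Start a scrub; it runs in the background.
zpool scrub tank
# Check progress and any files with detected errors.
zpool status -v tank
```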
 

jones982

Cadet
Joined
Dec 1, 2020
Messages
5
Same error here (TrueNAS-12.0-U6). I made a recursive snapshot on a pool (2-HDD mirror with a cache SSD) via the GUI; a dataset had "refreservation" set and the disk became full. Now:
  • zfs destroy -r "test@manual-2021-10-24_21-57" leads to "cannot destroy snapshots: I/O error"
  • zpool status -v test responds with "errors: No known data errors"
  • zpool scrub test leads to "cannot scrub test: out of space"
I can't even remove a dataset to regain space.
  • zfs destroy test/photos responds with "cannot destroy 'test/photos': filesystem has children"

A strange issue that seems to leave no way forward or back.
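To see exactly which children and snapshots are blocking those destroys, something like this helps (using the pool name from the post above):

```shell
# List every dataset and snapshot under the pool, with usage,
# so the "filesystem has children" message can be traced.
zfs list -r -t all -o name,used,refer test
```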
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
I have been told that ZFS requires free space to delete a snapshot for some reason.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
ZFS is copy-on-write. If your pool is completely full, ZFS can't write the new copies of the old pointers it needs to update the snapshot lists. At that point the pool effectively becomes read-only, and your only options are either a) expand the pool, or b) destroy the pool and reconstitute it with only the most recent data.
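Before choosing between those options, it's worth seeing where the space actually went. A minimal sketch, using the pool name from this thread:

```shell
# Break usage down into data, snapshots, reservations and children.
zfs list -r -o space test
# refreservation can pin space even when a dataset looks empty.
zfs get -r refreservation test
```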
 

jones982

Cadet
Joined
Dec 1, 2020
Messages
5
For reference, I found a way out by reducing the 2-HDD mirror + 1 log pool to a single drive, leaving 2 spare disks to work with.
Code:
zpool detach test spare1
zpool detach test spare2

# Check autoexpand is on; did not use zpool online -e
zpool get autoexpand test

#Expand size (not reversible)
zpool add test spare1

#Fix errors
zfs destroy -r "test@manual-2021-10-24_21-57"
zfs set refreservation=none "test/dataset1"

#Create migrate snapshot
zfs snapshot -r test@migrate

#Copy to new pool on spare2
zpool create test_new spare2
zfs send -R test@migrate | pv | zfs receive -F test_new

Currently running...
 