Tank reaching 97%

Status
Not open for further replies.

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
But why has the process not changed after hours?
Because you have a pool consisting of a single, twelve-disk vdev, with resulting horrible IOPS. And another disk that's showing lots of problems too.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Ouch, 4.37 KB/s?!

And 6 data errors? That pool is a goner. You'll want to destroy it and make a new one (with a more sane layout).
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
My guess is that the slowness is caused by the degraded disk drive.

You could do a zpool clear to reset the error count and then watch whether it climbs again.
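
A minimal sketch, assuming the pool is named tank as in the thread title:

Code:
# Reset the error counters on the pool.
zpool clear tank
# Re-check after some I/O has happened; climbing counters
# mean the disk is still throwing errors.
zpool status -v tank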

The data may be suspect, though.

You want 5 or 6 TB drives and go Z2 or Z3... The only reason to use Z3 is a lack of attention being paid to the RAID system.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
The data may be suspect, though.
...if by "suspect" you mean "known to be defective"...
You want 5 or 6 TB drives and go Z2 or Z3
For 20 TB of block storage? Block storage workloads want the pool kept at no more than roughly 50% full to stave off fragmentation, so that means about 40 TB of net capacity. Ten 5 TB disks in RAIDZ2? It'd be an improvement over what's there now, but...
 

dnet

Dabbler
Joined
Mar 27, 2014
Messages
23
If the resilvering process shows no progress after a week, it may be necessary to restart the machine.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
The filesystem that's using the extent is presumably not going to react well to a sudden shrinkage of the underlying disk, so fixing this would require one or more of the following:
  • Nuke it, restore from backup to a new share that is properly configured to not allow for the pool to get so full.
  • Add more storage.
  • Get rid of snapshots.
  • Move other data elsewhere.

For future reference: by writing zeros across the free space, assuming ZFS compression is enabled (it is by default), you should actually be able to get some space back.
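
A minimal sketch of that trick, assuming a Linux client with the extent mounted at the hypothetical path /mnt/extent; the zeroed blocks compress to nearly nothing on the ZFS side, so the space is reclaimed once the file is deleted:

Code:
# Fill the client filesystem's free space with zeros, then delete the file.
# dd exits with "No space left on device" when free space runs out; that's expected.
dd if=/dev/zero of=/mnt/extent/zeros.tmp bs=1M
rm /mnt/extent/zeros.tmp
sync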
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
...if by "suspect" you mean "known to be defective"...

For 20 TB of block storage? For block storage workloads, that means about 40 TB of net capacity. Ten 5 TB disks in RAIDZ2? It'd be an improvement over what's there now, but...

40 TB or ideally more; 60 TB wouldn't be unreasonable. And not in RAIDZ-anything.
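
For illustration, a block-storage-friendly layout would be striped mirrors rather than RAIDZ; the device names below are hypothetical, and on FreeNAS you would normally build the pool through the GUI rather than at the shell:

Code:
# Striped two-way mirrors: IOPS scale with the number of vdevs,
# unlike a single wide RAIDZ vdev.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5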
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
For future reference: by writing zeros across the free space, assuming ZFS compression is enabled (it is by default), you should actually be able to get some space back.
Yes, writing zeros to freed space on the client side of the block device should work as a crude stand-in for TRIM/UNMAP.
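
To confirm the reclaim actually happened, comparing pool occupancy before and after the client-side zeroing should suffice; a minimal check, assuming the pool is named tank:

Code:
# ALLOC/FREE/CAP should improve after the zero-fill and delete.
zpool list tank
zfs list -o space tank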
 