Pool offline because of a damaged disk after a power outage - How to recover the pool and data?

alvareztb

Cadet
Joined
Nov 30, 2022
Messages
2
Dear friends of this community.

My truenas system is built on:
  • Motherboard: Gigabyte GA-H110M-H
  • CPU: Intel(R) Core(TM) i5-6400 CPU @ 2.70GHz
  • Memory: 2 x 16 GB DDR4 3200
  • Available Memory: 31.8 GiB (as shown in Dashboard)
  • Storage: 3 x 2 TB HDD in raidz1 array.
  • Only one Pool "AWS_tank"
  • Version: TrueNAS-12.0-U8
After a power outage, one of the disks got damaged.

The pool "AWS_tank" is now OFFLINE and the data is not available.

I replaced the damaged disk with a new one (same brand, model, and capacity) and tried some workarounds from other posts, with no success.

The output of the zpool status command is the following:

Code:
root@truenas[~]# zpool status
  pool: boot-pool
 state: ONLINE
  scan: resilvered 2.71M in 00:00:05 with 0 errors on Wed Nov 30 11:31:51 2022
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            da1p2   ONLINE       0     0     0
            da0p2   ONLINE       0     0     0

errors: No known data errors


The output of the zpool import command is the following:

Code:
root@truenas[~]# zpool import
   pool: AWS_tank
     id: 16198973132710883895
  state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing devices and try again.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-3C
 config:

        AWS_tank                                        FAULTED  corrupted data
          raidz1-0                                      DEGRADED
            gptid/ec7f6f5e-0e69-11ec-836c-503eaa1fd636  ONLINE
            gptid/ec9f349f-0e69-11ec-836c-503eaa1fd636  UNAVAIL  cannot open
            gptid/eca8fd77-0e69-11ec-836c-503eaa1fd636  ONLINE

The entry "gptid/ec9f349f-0e69-11ec-836c-503eaa1fd636 UNAVAIL cannot open" is the damaged disk. It is dead; I tried it on another Linux system and it is unreadable.
A new disk is attached now.

I also tried the zpool import -f AWS_tank command, with the following response:

Code:
root@truenas[~]# zpool import -f AWS_tank
cannot import 'AWS_tank': I/O error
        Destroy and re-create the pool from
        a backup source.

I don't have a previous data backup.

What can I do in order to recover the pool and data ?

Thanks in advance for your time and help.
Tei
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Try this command and post the output here in the forum:
Code:
zpool import -fFn AWS_tank

If that does not work, try:
Code:
zpool import -fFXn AWS_tank

These will not import your pool, but they should give us a better idea of why the pool won't import.
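For anyone following along, here is a rough gloss of what those flags do, as I understand them from the OpenZFS zpool-import man page (a sketch, not a substitute for reading the man page):

```shell
# Dry-run recovery import: with -n, nothing on the pool is modified.
#   -f  force the import even if the pool appears active on another system
#   -F  recovery mode: try discarding the last few transactions to find
#       a consistent state
#   -X  extreme rewind: search much further back for an importable
#       transaction group (can discard more recent data; use with care)
#   -n  combined with -F: only report whether recovery would succeed,
#       without actually rewinding anything
zpool import -fFn AWS_tank
zpool import -fFXn AWS_tank
```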


ZFS was specifically designed to have zero data loss on unexpected power-offs (aka crashes). The only data you can lose is data in flight, just like with any other file system. When a crash occurs during writes, either the full set of in-flight data was written and is available afterwards, or none of it is.

There are exceptions to this: bad hardware. For example, if:
  • A storage device lies about flushing its write cache
  • A drive re-orders writes
  • A write-cache-based hardware RAID controller is used
  • Or, potentially, non-ECC RAM with errors
When Sun Microsystems designed and tested ZFS, they did not anticipate the massive number of users on home and consumer hardware. Thus, those exceptions generally don't apply to actual server-grade hardware designed for NAS use.
 

alvareztb

Cadet
Joined
Nov 30, 2022
Messages
2
Thank you Arwen.

Neither command generates any console output. :oops:

Code:
root@truenas[~]# zpool import -fFn AWS_tank
root@truenas[~]# zpool import -fFXn AWS_tank
root@truenas[~]#


Maybe I'm missing something?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Well, if the commands did not generate any output, I would guess that ZFS's initial checks indicate the pool is importable.

I worry about the part where the normal import has "FAULTED corrupted data" for the pool.

It is possible that discarding the last transactions may allow you to import your pool. That's the "-F" option. If that does not work, it may be possible to use extreme measures by adding "-X".

If you are OK with throwing away the last few transactions (which may mean losing recently written data), try this:
Code:
zpool import -fF AWS_tank
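If the plain recovery import also fails, one possible escalation path looks like the following. This is a sketch, not a guarantee: `-X` can roll the pool back much further and discard data, and the read-only import at the end is a common last-resort suggestion for copying data off before rebuilding, not something specific to this pool:

```shell
# 1. Recovery import: discard only the last few transactions
zpool import -fF AWS_tank

# 2. If that fails, extreme rewind (may lose considerably more recent data)
zpool import -fFX AWS_tank

# 3. Last resort: read-only import so nothing further is written,
#    then copy the data off and re-create the pool
zpool import -f -o readonly=on AWS_tank
```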
 