Just lost 600 GB of data from pool

avalon60

Guru
Joined
Jan 15, 2014
Messages
597
I have had two disks mirrored holding backups of media and data files until a few days ago, when I installed the update from TrueNAS 12.0-U8 to 13.0.
The pool in question is 1 TB and held over 600 GB of data, but it now only holds 11.5 GB.
There were no warnings or error messages about this; I only found out when I wanted to back up more data to a specific folder on the 1 TB pool. That folder and many others had just disappeared.
I assume the lost data has gone for good now, but I'm curious as to how it happened.

Out of interest, if I restored a backup of the system, i.e. went back to 12.0-U8, would the lost data come back?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
How did you check the amount of data and the presence of your folders? Command line via SSH, or via file sharing and e.g. Windows Explorer or the Mac Finder?

Let's start by assessing the current status. Please post the output of these two commands:
Code:
zpool status -v
zfs list
 

avalon60

Guru
Joined
Jan 15, 2014
Messages
597
I checked the presence of the folders by browsing the FreeNAS file shares from my Linux Mint desktop PC. I also checked by looking at the pool in question after logging into the TrueNAS GUI. In the first instance there are only three folders showing, and no other files. When checking the pool, I saw that there was only 11.65 GB of used space, whereas before it was about 650 GB.
Code:
root@freenas:~ # zpool status -v
  pool: Backup_Data
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 924K in 01:30:20 with 0 errors on Sun Jun 12 01:30:21 2022
config:

        NAME                                            STATE     READ WRITE CKSUM
        Backup_Data                                     ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/ce563f3f-528e-11e6-9216-0cc47a0c1eef  ONLINE       0     0     0



errors: No known data errors
root@freenas:~ # zfs list
NAME                                                           USED  AVAIL     REFER  MOUNTPOINT
Backup_Data                                                   11.7G   888G     8.53G  /mnt/Backup_Data
Backup_Data/.system                                            955M   888G      850M  legacy
Backup_Data/.system/configs-2c252f0045f7400bba077e5a4017475a  55.9M   888G     55.9M  legacy
Backup_Data/.system/cores                                       88K  1024M       88K  legacy
Backup_Data/.system/rrd-2c252f0045f7400bba077e5a4017475a      37.0M   888G     37.0M  legacy
Backup_Data/.system/samba4                                    3.75M   888G      404K  legacy
Backup_Data/.system/services                                    96K   888G       96K  legacy
Backup_Data/.system/syslog-2c252f0045f7400bba077e5a4017475a   7.69M   888G     7.69M  legacy
 
Last edited by a moderator:

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Your mirror pool lost one disk. Was it created with one disk, with the second added later to establish the mirror? That's the only scenario I can think of that fits your situation, with the older disk holding the bulk of the data and only new data landing on the newer disk.

You may want to check the cabling to both disks, including the power cables, as it's very uncommon for disks to just drop out like this.
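If you want to confirm how the mirror was originally put together, the pool history should show it; a rough filter like this (adjust the pattern as needed) would narrow things down:
Code:
zpool history Backup_Data | grep -E 'create|attach|detach|replace'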
 

avalon60

Guru
Joined
Jan 15, 2014
Messages
597
To be honest, as it was about seven years ago when I created the mirror, I can only think it was with both disks. I bought two 1 TB WD Red disks at the same time.
I have reseated the cabling to both 1 TB disks, but no change.
One thing I didn't do when I upgraded to TrueNAS 13.0 was run 'zpool upgrade'. Should I have done so?
Thanks
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Are both members of the pool connected to motherboard SATA ports? I've seen ports go bad before. Try shuffling the missing disk to a different port.
 

avalon60

Guru
Joined
Jan 15, 2014
Messages
597
Yes, they go to an HBA card via a mini-SAS cable.
What about the 'zpool upgrade'? Should I do that, and how?

Thanks
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
What about the 'zpool upgrade'? Should I do that, and how?

No, if you upgrade, the offline disk won't have the same pool settings, and you'll be guaranteed not to recover from this.
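For what it's worth, running zpool upgrade with no pool name only reports which pools have features disabled; it's the form with a pool name that actually commits the change. A safe way to see where you stand without touching anything:
Code:
zpool upgrade        # lists pools that could be upgraded; changes nothing
zpool upgrade -v     # lists the features this ZFS version supports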

Does the drive spin up? Does the HBA see it?
 

avalon60

Guru
Joined
Jan 15, 2014
Messages
597
OK, I won't upgrade the ZFS pool then.
The only way I can tell if the drives are spinning up is by touch, and both drives feel as though they are spinning.
How can I tell if the HBA card sees the drives?
Thanks
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Code:
camcontrol devlist
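If the disk shows up in that list, you can also ask it for its identity directly with smartctl; for example (the device name here is an assumption, substitute whichever daX the suspect disk maps to):
Code:
smartctl -i /dev/da3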
 

avalon60

Guru
Joined
Jan 15, 2014
Messages
597
Thanks
I have 8 drives in 3 pools, and all 8 drives are showing, if it means anything:


Code:
root@freenas:~ # camcontrol devlist
<ATA WDC WD20EFRX-68E 0A82> at scbus0 target 0 lun 0 (pass0,da0)
<ATA WDC WD2001FFSX-6 0A81> at scbus0 target 2 lun 0 (pass1,da1)
<ATA WDC WD10EFRX-68F 0A82> at scbus0 target 4 lun 0 (pass2,da2)
<ATA WDC WD10EFRX-68F 0A82> at scbus0 target 5 lun 0 (pass3,da3)
<ATA WDC WD20EFRX-68A 0A80> at scbus0 target 6 lun 0 (pass4,da4)
<ATA WDC WD20EFRX-68A 0A80> at scbus0 target 7 lun 0 (pass5,da5)
<ATA WDC WD20EFRX-68A 0A80> at scbus0 target 8 lun 0 (pass6,da6)
<ATA WDC WD20EFRX-68A 0A80> at scbus0 target 9 lun 0 (pass7,da7)

There are two Intel SSD mirrored boot drives and an AHCI SGPIO enclosure below the listed drives.
 
Joined
Oct 22, 2019
Messages
3,641
What about the other pools?

Something about this doesn't make sense. Why would even removing a drive from a mirror take data with it? A zpool that is ONLINE or DEGRADED still presents the complete set of saved data, with the caveat that you lose redundancy until you replace the failed drive in the vdev. It's not as if some files are saved on one particular drive in the mirror while others are saved on the other.

Perhaps the pool's history can clue you in?

Maybe there's an event around the time you noticed the data went missing?

Code:
zpool history NameOfPool


You might have to redirect the output into a text file to view later.

Code:
zpool history NameOfPool > /path/to/NameOfPool-history.txt


zpool history requires root/sudo privileges.
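If the full history is long, filtering for destructive operations may surface the culprit faster; a rough sketch (adjust the pattern to taste):
Code:
zpool history Backup_Data | grep -Ei 'destroy|rollback|recv'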
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Also, see if the GPT UUID for that disk still exists:
Code:
glabel status
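If the list is long, you can grep for the member in question; for example, assuming the missing member is the gptid/cf115aaa device (substitute whichever it actually is):
Code:
glabel status | grep cf115aaa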
 

avalon60

Guru
Joined
Jan 15, 2014
Messages
597
Just to be clear, I did not remove any disk from the pool, and I don't know why I would.
As you said, I would have thought that if one disk goes bad, then all the data is still there on the other disk, but in my case it is missing completely. Only three folders exist on the disk now.

The other two pools are fine and the data is there.

I have only copied the history from before I upgraded to TrueNAS 13.0, when all the data was still on the disks.

Code:
2022-06-12.09:34:46 zpool import 2390377312181417493 Backup_Data
2022-06-12.09:34:46 zpool set cachefile=/data/zfs/zpool.cache Backup_Data
2022-06-12.09:35:03 zfs snapshot Backup_Data/.system/samba4@wbc-1655022898
2022-06-12.09:40:08 zpool import 2390377312181417493 Backup_Data
2022-06-12.09:40:08 zpool set cachefile=/data/zfs/zpool.cache Backup_Data
2022-06-12.09:40:25 zfs snapshot Backup_Data/.system/samba4@wbc-1655023220
2022-06-28.20:41:37 zfs destroy Backup_Data/.system/samba4@update--2021-08-06-07-34--12.0-U4.1
2022-06-28.20:41:42 zfs snapshot Backup_Data/.system/samba4@update--2022-06-28-19-41--12.0-U8.1
2022-06-28.20:44:25 zpool import 2390377312181417493 Backup_Data
2022-06-28.20:44:25 zpool set cachefile=/data/zfs/zpool.cache Backup_Data
2022-06-28.20:44:38 zfs set acltype=off Backup_Data/.system
2022-06-28.20:44:41 zfs snapshot Backup_Data/.system/samba4@wbc-1656445480
2022-07-09.23:15:37 zpool import 2390377312181417493 Backup_Data
2022-07-09.23:15:38 zpool set cachefile=/data/zfs/zpool.cache Backup_Data
2022-07-09.23:15:53 zfs snapshot Backup_Data/.system/samba4@wbc-1657404952
2022-07-10.12:01:59 zpool import 2390377312181417493 Backup_Data
2022-07-10.12:01:59 zpool set cachefile=/data/zfs/zpool.cache Backup_Data
2022-07-10.12:02:14 zfs snapshot Backup_Data/.system/samba4@wbc-1657450933
2022-07-10.13:01:58 zpool import 2390377312181417493 Backup_Data
2022-07-10.13:01:58 zpool set cachefile=/data/zfs/zpool.cache Backup_Data
2022-07-10.13:02:13 zfs snapshot Backup_Data/.system/samba4@wbc-1657454533
2022-07-10.13:13:29 zpool import 2390377312181417493 Backup_Data
2022-07-10.13:13:29 zpool set cachefile=/data/zfs/zpool.cache Backup_Data
2022-07-10.13:13:43 zfs snapshot Backup_Data/.system/samba4@wbc-1657455223

Code:
root@freenas:~ # glabel status
Name Status Components
gptid/258dc84a-823c-11e9-be50-0cc47a0c1eef N/A ada0p1
gptid/3deb57ca-820a-11e9-ac36-001f16a89eef N/A ada1p1
gptid/459c45ee-4792-11e6-a4b6-0cc47a0c1eef N/A da0p2
gptid/464e7612-4792-11e6-a4b6-0cc47a0c1eef N/A da1p2
gptid/ce563f3f-528e-11e6-9216-0cc47a0c1eef N/A da2p2
gptid/cf115aaa-528e-11e6-9216-0cc47a0c1eef N/A da3p2
gptid/434eb5fb-4792-11e6-a4b6-0cc47a0c1eef N/A da4p2
gptid/44db1bfb-4792-11e6-a4b6-0cc47a0c1eef N/A da5p2
gptid/441279f9-4792-11e6-a4b6-0cc47a0c1eef N/A da6p2
gptid/d647d843-4792-11e6-a4b6-0cc47a0c1eef N/A da7p2
 
Last edited:
Joined
Oct 22, 2019
Messages
3,641
I have only copied the history from before I upgraded to TrueNAS 13.0, when all the data was still on the disks.
There's nothing of interest during / immediately after the time of the upgrade?

Either way, if that's really your pool's history, it's quite sparse and uneventful. In fact, there's no indication of any other datasets except the System Dataset (".system").

Did you ever create datasets under the top-level root dataset in that pool? Or did you simply create folders and directly save your files there?

Did you ever create any (recursive) snapshots for the top-level root dataset?

Is there earlier history than what you pasted which shows the existence/activity of any other datasets or snapshots?
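A quick way to answer both questions from the command line is to list everything in the pool, snapshots included:
Code:
zfs list -r -t all Backup_Data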
 

avalon60

Guru
Joined
Jan 15, 2014
Messages
597
As far as I can remember, I think I just created folders and directly saved the files to them.
No, again, I don't think I ever created any recursive snapshots for the top-level dataset.

Out of interest, as I asked Samuel Tai: since the pool is showing as online with no errors, would running 'zpool upgrade' have any effect either way? Samuel said not to, as one of the disks might be bad.
Also, I have a backup config file for TrueNAS 12.0-U8; could I restore it, in the hope of all the files and folders being returned?

OK, this is the complete history of the pool, attached.
 

Attachments

  • history.txt
    63.4 KB · Views: 95
Joined
Oct 22, 2019
Messages
3,641
As far as I can remember, I think I just created folders and directly saved the files to them.
That's risky and also discouraged. One should never save files directly in the top-level root dataset.
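For reference, the usual pattern is one child dataset per share, created before any files are saved; a hypothetical example:
Code:
zfs create Backup_Data/media
zfs create Backup_Data/documents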

And what do you mean you created folders? Via an SMB share? Via some jail? Via the command-line itself?


Here's what I'm guessing happened...

You created your pool.

You never created any datasets underneath the top-level root dataset (to save your files into).

You set this pool to house the System Dataset (".system"), which is fine. This explains the events that deal with ".system".

You set this pool to hold your iocage jails (which you later removed or relocated to another pool?). I ask this because it appears even iocage is gone, and there are "destroy" entries for iocage in your history.

At some point recently, the files/folders that live directly under the top-level root dataset were deleted (not sure why or by what mechanism), and no snapshots exist, so you cannot revert this.

Another alternative explanation is that you were saving your files inside a jail directly, and upon changing the location of iocage to another pool, you either lost (or relocated) these files.


Also, I have a backup config file for TrueNAS 12.0-U8; could I restore it, in the hope of all the files and folders being returned?
Re-importing the config file will only restore your settings, not your saved/destroyed data.
 
Last edited:

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
I wonder why your mirror pool shows only one disk as per your zpool status output above? Or did you manually trim the output and delete the line for the second disk?
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
I wanted to back up more data to a specific folder on the 1 TB pool. That folder and many others had just disappeared.
What 'mechanism' of backup do you refer to here?
Please describe with as much detail as possible.
 

avalon60

Guru
Joined
Jan 15, 2014
Messages
597
I wonder why your mirror pool shows only one disk as per your zpool status output above? Or did you manually trim the output and delete the line for the second disk?

No, I didn't knowingly trim a line from the output when I posted it, but I have reposted it and checked that all lines are there. That was an error on my part!
Code:
root@freenas:~ # zpool status Backup_Data
  pool: Backup_Data
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 924K in 01:30:20 with 0 errors on Sun Jun 12 01:30:21 2022
config:

        NAME                                            STATE     READ WRITE CKSUM
        Backup_Data                                     ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/ce563f3f-528e-11e6-9216-0cc47a0c1eef  ONLINE       0     0     0
            gptid/cf115aaa-528e-11e6-9216-0cc47a0c1eef  ONLINE       0     0     0
 
Last edited: