Pool offline after moving hard discs to another pc

Joined
Oct 5, 2015
Messages
9
Hello,

My Dell server died, so I moved the 4 HDDs from that server to a new PC, connected them, plugged in the USB flash drive with TrueNAS from the dead server, and booted it.
I set up the network and connected to TrueNAS through the web interface. After logging in I get this:
True-NAS-01.png

I can see the HDDs:
True-NAS-02.png

When I type zpool status -v I get this:
Code:
Last login: Wed Mar  6 12:52:16 on pts/2
FreeBSD 13.1-RELEASE-p9 n245429-296d095698e TRUENAS

        TrueNAS (c) 2009-2023, iXsystems, Inc.
        All rights reserved.
        TrueNAS code is released under the modified BSD license with some
        files copyrighted by (c) iXsystems, Inc.

        For more information, documentation, help or support, go here:
        http://truenas.com
Welcome to VASPKS FreeNAS server

Warning: the supported mechanisms for making configuration changes
are the TrueNAS WebUI and API exclusively. ALL OTHERS ARE
NOT SUPPORTED AND WILL RESULT IN UNDEFINED BEHAVIOR AND MAY
RESULT IN SYSTEM FAILURE.

root@freenas:~ # zpool status -v
  pool: freenas-boot
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 00:08:18 with 11 errors on Tue Nov 28 03:53:18 2023
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  DEGRADED     0     0     0
          da0p2       DEGRADED     0     0     0  too many errors

errors: Permanent errors have been detected in the following files:

        freenas-boot/ROOT/13.0-U6.1@2020-05-12-19:57:21:/boot/kernel-debug/t4fw_cfg.ko
        freenas-boot/ROOT/13.0-U6.1@2020-05-12-19:57:21:/data/pkgdb/freenas-db
        freenas-boot/ROOT/13.0-U6.1@2020-05-12-19:57:21:/usr/local/www/freenasUI/locale/ru/LC_MESSAGES/django.mo
        freenas-boot/ROOT/13.0-U6.1@2020-05-12-19:57:21:/usr/local/lib/python3.7/site-packages/s3transfer/__pycache__/upload.cpython-37.pyc
        freenas-boot/ROOT/13.0-U6.1@2020-05-12-19:57:21:/usr/local/lib/python3.7/site-packages/formtools/wizard/__pycache__/views.cpython-37.pyc
        freenas-boot/ROOT/13.0-U6.1@2020-05-12-19:57:21:/usr/local/lib/python3.7/site-packages/botocore/data/greengrass/2017-06-07/service-2.json
        freenas-boot/ROOT/13.0-U6.1@2020-05-12-19:57:21:/boot/kernel-debug/mlx4ib.ko
        freenas-boot/ROOT/13.0-U6.1@2020-05-12-19:57:21:/usr/share/locale/zh_CN.UTF-8/LC_COLLATE
        freenas-boot/ROOT/13.0-U6.1@2020-05-12-19:57:21:/boot/kernel-debug/smartpqi.ko
        freenas-boot/ROOT/13.0-U6.1@2020-05-20-21:00:42:/usr/local/lib/migrate93/freenasUI/services/migrations/__pycache__/0067_auto__del_field_pluginsjail_jail_ip.cpython-37.pyc
root@freenas:~ #


Can you please help me activate the pool, or put the HDDs back into that pool, or whatever else I need to do to bring it ONLINE and RUNNING?
Thanks a ton in advance for any suggestions.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
Your boot pool is degraded; start with a fresh installation and import your configuration backup. If that doesn't work, did you try importing the pool?
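For reference, a minimal import attempt from the shell looks something like this (the pool name here is a placeholder, replace it with yours):
Code:
zpool import              # scan attached disks and list any importable pools
zpool import yourpool     # import the pool by name if it shows up
zpool import -f yourpool  # force the import if ZFS complains the pool
                          # was last used by another system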
 
Joined
Oct 5, 2015
Messages
9
Yes, I tried importing the pool with "zpool import" and got the "no pools available to import" message.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
This is above my paygrade then.

You may want to list all of your hardware in detail (I would argue both the old server and the new one), what exactly died, etc., in order to receive helpful answers.

Edit: In general, swapping hardware is rather painless. Take your boot drive with you, or restore from a config backup on a fresh install; usually the pool is imported automatically then. That's why I can't help you if that doesn't happen and a manual import doesn't work either.
 
Last edited:
Joined
Oct 5, 2015
Messages
9
A couple of questions:

1. Should I change the System Dataset Pool to freenas-boot to be able to do something with FS_Pool, which is offline?

kfTnR3V.png


2. If I EXPORT/DISCONNECT FS_Pool, would I be able to create a new pool and import the HDDs without destroying the data on them?

J2FSAmd.png


KYnokUG.png
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
NO, you do NOT want to "create a new pool". That would destroy all the data on your existing pool.

ZFS import and export are just ways of mounting or unmounting all of a pool's file systems and preparing the disks for use or for removal.
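A minimal sketch of the two operations at the CLI (the pool name is a placeholder):
Code:
zpool export yourpool   # unmount all datasets and mark the disks as exported
zpool import yourpool   # rescan the disks and mount everything again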

Yes, I tried importing the pool with "zpool import" and got the "no pools available to import" message.
This indicates that the "magic number" and "ZFS header" of the disks are missing, probably because the partition table is not located at the beginning of the disks.

Did your prior Dell server use hardware RAID and supply LUNs to the old TrueNAS?

Some hardware RAID disk controllers reserve some disk space for their status & sync tables. If that was the case, it is possible that the disks are good, but unusable as plain disks. This is one reason we highly discourage hardware RAID controllers when using ZFS.
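One way to check whether ZFS can still see its labels on a disk (p2 is the usual TrueNAS data partition; adjust the device name to your layout, which may differ):
Code:
zdb -l /dev/ada0p2   # prints the vdev label if one is present;
                     # "failed to unpack label" on all 4 labels means
                     # ZFS sees nothing on that partition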


We need a partition table listing from the 4 disks. I don't remember the FreeBSD command. Perhaps someone else knows.

Edit: Found the command. Please run this against each of your 4 data disks (replacing DEVICE with your disk's device name):
gpart show DEVICE
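For comparison, a TrueNAS CORE data disk normally carries a small swap partition followed by a freebsd-zfs partition, roughly like this (illustrative numbers for a 2 TB disk, not your actual output):
Code:
=>        40  3907029088  da1  GPT  (1.8T)
          40          88       - free -  (44K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  3902834696    2  freebsd-zfs  (1.8T)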
 
Last edited:
Joined
Oct 5, 2015
Messages
9
I have a Dell PowerEdge R320 server where those HDDs were, and the iDRAC died on that server; that's why I transferred the HDDs to a new PC.
I hope this information helps.

When I enter gpart show I get this:
Code:
root@freenas:~ # gpart show
=>      40  30218768  da0  GPT  (14G)
        40      1024    1  freebsd-boot  (512K)
      1064  30212096    2  freebsd-zfs  (14G)
  30213160      5648       - free -  (2.8M)

root@freenas:~ #


And when I type gpart show ada0 (or ada1, ada2, ada3), I get the same error:
Code:
root@freenas:~ # gpart show ada0
gpart: No such geom: ada0.
 
Last edited:

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Sorry, I have no clue whether the Dell PowerEdge R320 server has a hardware RAID controller. Perhaps someone else can answer that question.

You want to use:
Code:
gpart show /dev/ada0
gpart show /dev/ada1
gpart show /dev/ada2
gpart show /dev/ada3
 
Joined
Oct 5, 2015
Messages
9
Is there any way to extract the data from those 4 HDDs outside the TrueNAS server?
For instance, if I put those HDDs into a PC running some Linux, will I be able to copy the data to another HDD?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Before you move the disks, make sure your OS is working right. Try listing the boot disk:

gpart show /dev/da0


As for the data pool disks, in Linux I would use:

fdisk -l

That will list all the disks' partition tables that the OS can find.
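If the Ubuntu live session has network access, the ZFS tools can also scan for importable pools directly (zfsutils-linux lives in the "universe" repository; this is a sketch, with no guarantee it will find anything):
Code:
sudo apt update
sudo apt install -y zfsutils-linux
sudo zpool import    # lists any pools ZFS can see; imports nothing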
 
Joined
Oct 5, 2015
Messages
9
I booted the PC with the RAID HDDs using an Ubuntu Live image, and in the terminal I get this:
Code:
root@ubuntu:/# lsblk -f
NAME     FSTYPE          FSVER    LABEL       UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
loop0    squashfs        4.0                                                             0   100% /rofs
loop1    squashfs        4.0                                                             0   100% /snap/core22/1122
loop2    squashfs        4.0                                                             0   100% /snap/bare/5
loop3    squashfs        4.0                                                             0   100% /snap/firefox/3836
loop4    squashfs        4.0                                                             0   100% /snap/snapd/20671
loop5    squashfs        4.0                                                             0   100% /snap/gtk-common-themes/1535
loop6    squashfs        4.0                                                             0   100% /snap/snap-store/959
loop7    squashfs        4.0                                                             0   100% /snap/gnome-42-2204/141
loop8    squashfs        4.0                                                             0   100% /snap/snapd-desktop-integration/83
sda      ddf_raid_member 01.00.00             Dell    \x10                                       
└─ddf1_FS_disk
                                                                                                  
sdb      ddf_raid_member 01.00.00             Dell    \x10                                       
└─ddf1_FS_disk
                                                                                                  
sdc      ddf_raid_member 01.00.00             Dell    \x10                                       
└─ddf1_FS_disk
                                                                                                  
sdd      ddf_raid_member 01.00.00             Dell    \x10                                       
└─ddf1_FS_disk
                                                                                                  
sde                                                                                               
└─sde1   vfat            FAT32    UBUNTU 22_0 60AE-42EC                               9.8G    32% /cdrom
root@ubuntu:/#

Can you please help me mount ddf1_FS_disk and copy the files from that pool?

Thanks in advance.
 
Joined
Oct 5, 2015
Messages
9
Here is the output of fdisk -l:
Code:
root@ubuntu:/# fdisk -l

Disk /dev/sda: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000VX004-1RU1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device     Boot Start        End    Sectors Size Id Type
/dev/sda1           1 4294967295 4294967295   2T ee GPT

Partition 1 does not start on physical sector boundary.


Disk /dev/sdb: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000VX004-1RU1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdc: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000VX004-1RU1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdd: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000VX004-1RU1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sde: 14.45 GiB, 15512174592 bytes, 30297216 sectors
Disk model: DataTraveler 2.0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x48a3c0f0

Device     Boot Start      End  Sectors  Size Id Type
/dev/sde1  *     2048 30297215 30295168 14.4G  c W95 FAT32 (LBA)


Disk /dev/mapper/ddf1_FS_disk: 5.46 TiB, 5999532441600 bytes, 11717836800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 196608 bytes
Disklabel type: gpt
Disk identifier: 1F5E7F4E-8C2E-11E5-8B01-B083FED93844

Device                           Start         End     Sectors  Size Type
/dev/mapper/ddf1_FS_disk-part1     128     4194431     4194304    2G FreeBSD swap
/dev/mapper/ddf1_FS_disk-part2 4194432 11717836759 11713642328  5.5T FreeBSD ZFS

root@ubuntu:/# 
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I don't understand what "ddf_raid_member" is, except that it is perhaps an artifact from a hardware RAID controller. If that is the case, hardware RAID controllers don't work well (or at all) with ZFS, and I would not know how to recover from this.

The partition listing from fdisk -l shows part of the problem. Disk sda has a partition table entry, but the other 3 do not. This is likely due to the hardware RAID controller. It is almost as if you had the 4 disks in a single RAID LUN (really not recommended with ZFS).

The only thing I can recommend at this point is to perform a recovery using the same model of hardware RAID controller. I've done this once, at least 7 years ago, and it was not pleasant. (But I had real documentation on how it was done, plus vendor support if needed.)

Then, when (or if) you get your ZFS pool working, back up the data and redo the disks.
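That said, since your Ubuntu live session has already assembled the DDF set as /dev/mapper/ddf1_FS_disk, and fdisk can see a FreeBSD ZFS partition inside it, a read-only import attempt from Linux might be worth a try before hunting down a controller. A sketch, assuming zfsutils-linux is installed and the pool really is named FS_Pool (going by your screenshots, so treat that as a guess):
Code:
sudo zpool import -d /dev/mapper   # scan only the mapped device nodes
# if FS_Pool shows up, import it read-only under /mnt and copy the data off:
sudo zpool import -d /dev/mapper -o readonly=on -R /mnt FS_Pool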


Sorry I can't be more helpful.
 