Hello, today I was "smart" and wanted to test out TrueNAS SCALE, so I installed it on a separate SSD while leaving the original Core install alone. I installed SCALE, booted, imported the settings from Core, and all I got was an offline pool. Then I shut the server down and started it again; this time the pool showed up with 4 of 6 disks working, and 2 of them seem to have a corrupted GPT table. I restarted once more, and the pool was offline again. I tried going back to Core, but of course it doesn't work there either now.
Motherboard: Gigabyte GA-990FXA-UD3 rev 4.0
CPU: AMD FX 8350
16 GB RAM
Disk configuration:
6 HDDs in Z2, one vdev (connected directly to motherboard)
1 boot SSD (connected via a cheap 2-port PCIe card)
Found this in /var/log/messages:
truenas ada2: <WDC WD40EFAX-68JH4N1 83.00A83> ACS-3 ATA SATA 3.>
truenas ada2: Serial Number WD-WX22DA04C3EH
truenas ada3: <WDC WD40EFAX-68JH4N1 83.00A83> ACS-3 ATA SATA 3.>
truenas ada3: Serial Number WD-WX12D110ZLYZ
....
truenas GEOM: ada2: the primary GPT table is corrupt or invalid.
truenas GEOM: ada2: using the secondary instead -- recovery str>
truenas GEOM: ada3: the primary GPT table is corrupt or invalid.
truenas GEOM: ada3: using the secondary instead -- recovery str>
truenas intsmb0: <AMD SB600/7xx/8xx/9xx SMBus Controller> at de>
truenas smbus0: <System Management Bus> on intsmb0
truenas kernel: lo0: link state changed to UP
truenas kernel: re0: link state changed to UP
truenas (ada5:ata0:0:0:0): READ_DMA48. ACB: 25 00 87 be c0 40 d>
truenas (ada5:ata0:0:0:0): CAM status: Command timeout
truenas (ada5:ata0:0:0:0): Retrying command, 3 more tries remain
truenas (ada5:ata0:0:0:0): READ_DMA48. ACB: 25 00 87 be c0 40 d>
truenas (ada5:ata0:0:0:0): CAM status: Command tim
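For reference, this is what I'm considering trying from the Core shell before touching anything else (commands sketched from the man pages, using the device names from my log; please correct me if they're wrong):

```shell
# GEOM says it fell back to the secondary GPT, so the backup table at the
# end of each disk should still be intact. On FreeBSD/Core, gpart can
# rebuild the primary table from that backup copy:
gpart show ada2        # inspect the partition table state first
gpart recover ada2     # rewrite the primary GPT from the backup copy
gpart recover ada3

# ada5 is timing out on reads, so check its health before any recovery:
smartctl -a /dev/ada5
```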
root@truenas[~]# zpool import
pool: tank
id: 1482594721782283518
state: FAULTED
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported using
the '-f' flag.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
config:
tank FAULTED corrupted data
raidz2-0 DEGRADED
disk/by-partuuid/c43b21c4-0805-11ec-8d17-74d4350b5e08 UNAVAIL cannot open
gptid/c4c0c266-0805-11ec-8d17-74d4350b5e08 ONLINE
gptid/c4f2ee7f-0805-11ec-8d17-74d4350b5e08 ONLINE
gptid/c4ce501d-0805-11ec-8d17-74d4350b5e08 ONLINE
disk/by-partuuid/c4fc1ca1-0805-11ec-8d17-74d4350b5e08 UNAVAIL cannot open
gptid/c50fa01f-0805-11ec-8d17-74d4350b5e08 ONLINE
root@truenas[~]#
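From reading the OpenZFS docs, the "last accessed by another system" part seems to just be the hostid changing between Core and SCALE, and the status output itself suggests the -f flag. Would something like this be a sane next step (read-only first, so nothing gets written to the damaged disks)?

```shell
# Try a read-only forced import first, so nothing is written to the pool:
zpool import -f -o readonly=on tank

# If that mounts cleanly and the data is readable, back everything up
# before attempting a normal read-write import:
# zpool import -f tank
```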
I didn't expect such a failure from just testing whether the import works properly.
Is there any way to resolve this situation? Should I stay on Core, or try to fix it on SCALE?
Hopefully the 7 TB of data can be saved...