optic-cyclone
Cadet · Joined: Mar 18, 2016 · Messages: 2
During a recent replacement of my hard drive fans I had to disconnect my hard drive cables. I did not label them, and although I thought I had reconnected them the way they were, I must not have remembered correctly. When I went to turn the system back on it would not boot. After connecting a monitor and keyboard I found that the BIOS was not pointing to the correct boot drive (one of the drives I had disconnected serves as my FreeNAS boot drive, while the others are all data drives). In the BIOS setup I simply moved the boot drive to the top of the boot order and restarted the system. It booted right up, but the system reported a critical error and showed that my pool was degraded. When I took a look at
zpool status
it showed that the drive that was UNAVAIL had a long string of numbers as its ID, instead of a gptid like the other drives (e.g. 'gptid/38ac7ea1-c584-11e5-a678-00241dce521e'). It is unclear to me why this happened, but I figured I still did not have the cable order correct, so I shut the system down again and moved the cables again. This time the system booted, but my pool was offline and unavailable. There are now two drives with what appears to be missing partition information. I would not have guessed this would be an issue with FreeNAS, and after a fair amount of reading on these forums I am still unclear why the cable swapping caused it, but right now I just want to get my pool back up and available. It's clear now that I should have stopped everything and asked this question before the pool became unavailable, but I am already nervous running a RAIDZ1 (yes, I am planning a complete overhaul soon to RAIDZ2 or RAIDZ3) and simply wanted to get everything back in order immediately.

My pool is made up of two vdevs: my original 4x 2TB drives plus the 3x 5TB drives I added later. Two of the 5TB drives are the ones that are currently offline. The pool had been running well in its current state for years. I am currently running FreeNAS-9.10.2-U6 and won't be doing any upgrading until I can build a better system.
Current status of my pool:
zpool import
Code:
   pool: ztank
     id: 10487339118375057665
  state: UNAVAIL
 status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing devices and try again.
    see: http://illumos.org/msg/ZFS-8000-3C
 config:

        ztank                                           UNAVAIL  insufficient replicas
          raidz1-0                                      UNAVAIL  insufficient replicas
            gptid/38ac7ea1-c584-11e5-a678-00241dce521e  ONLINE
            5180206853595741775                         UNAVAIL  cannot open
            6120802147465411546                         UNAVAIL  cannot open
          raidz1-1                                      ONLINE
            gptid/1fd79dd9-3966-11e5-a06c-00241dce521e  ONLINE
            gptid/205fa2dd-3966-11e5-a06c-00241dce521e  ONLINE
            gptid/20f2d89b-3966-11e5-a06c-00241dce521e  ONLINE
            gptid/83dfc093-76ae-11e5-98b1-00241dce521e  ONLINE
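If it helps, my understanding (and please correct me if this is wrong) is that FreeNAS addresses pool members by their gptid labels, so I was planning to run the following read-only checks to see which disks still have their labels and partition tables. The device names are simply the ones from my system above.
Code:
# List the gptid labels GEOM currently knows about (read-only)
glabel status

# Show the partition layout of a known-good 5TB drive and the two
# problem drives (read-only); I expect ada5 and ada6 to report no GPT
gpart show ada4
gpart show ada5
gpart show ada6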
The system does still know about the other drives (ada5 and ada6) as seen by camcontrol.
camcontrol devlist
Code:
<Hitachi HDS5C3020ALA632 ML6OA180>  at scbus0 target 0 lun 0 (pass0,ada0)
<Hitachi HDS5C3020ALA632 ML6OA580>  at scbus1 target 0 lun 0 (pass1,ada1)
<Hitachi HDS5C3020ALA632 ML6OA180>  at scbus2 target 0 lun 0 (pass2,ada2)
<HDS722020ALA330 RSD HUA JKAOA31E>  at scbus3 target 0 lun 0 (pass3,ada3)
<TOSHIBA MD04ACA500 FP1A>           at scbus4 target 0 lun 0 (pass4,ada4)
<TOSHIBA MD04ACA500 FP2A>           at scbus4 target 1 lun 0 (pass5,ada5)
<TOSHIBA MD04ACA500 FP2A>           at scbus5 target 0 lun 0 (pass6,ada6)
<Hitachi HDT721032SLA380 ST2OA39D>  at scbus5 target 1 lun 0 (pass7,ada7)
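To make sure I finally have the cables sorted out, I figure I can match each adaX device to a physical drive by its serial number with something like this (read-only, as far as I know):
Code:
# Print the serial number each disk reports so the device name can be
# matched to the label on the physical drive
camcontrol identify ada5 | grep -i 'serial number'
camcontrol identify ada6 | grep -i 'serial number'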
Looking at dmesg for information on ada5 and ada6, I found the following.
dmesg | grep 'ada[56]'
Code:
ada5 at ata0 bus 0 scbus4 target 1 lun 0
ada5: <TOSHIBA MD04ACA500 FP2A> ATA8-ACS SATA 3.x device
ada5: Serial Number 6567K3YFFS9A
ada5: 150.000MB/s transfers (SATA, UDMA5, PIO 8192bytes)
ada5: 4769306MB (9767539055 512 byte sectors)
ada5: Previously was known as ad1
ada6 at ata1 bus 0 scbus5 target 0 lun 0
ada6: <TOSHIBA MD04ACA500 FP2A> ATA8-ACS SATA 3.x device
ada6: Serial Number 55T7K38MFS9A
ada6: 150.000MB/s transfers (SATA, UDMA5, PIO 8192bytes)
ada6: 4769306MB (9767539055 512 byte sectors)
ada6: Previously was known as ad2
GEOM: ada5: corrupt or invalid GPT detected.
GEOM: ada5: GPT rejected -- may not be recoverable.
GEOM: ada6: corrupt or invalid GPT detected.
GEOM: ada6: GPT rejected -- may not be recoverable.
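Since GEOM is rejecting the GPT on both drives, I assume I can at least check (read-only) whether the primary and backup GPT headers are still intact. My understanding is that the primary header lives in LBA 1 and the backup in the last LBA (9767539054 here, derived from the 9767539055-sector size dmesg reports), and a healthy header should start with the "EFI PART" signature:
Code:
# Dump the primary GPT header (LBA 1) -- read-only
dd if=/dev/ada5 bs=512 skip=1 count=1 2>/dev/null | hexdump -C | head -2

# Dump the backup GPT header in the drive's last LBA -- read-only
dd if=/dev/ada5 bs=512 skip=9767539054 count=1 2>/dev/null | hexdump -C | head -2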
Per cyberjock's thread, I also attempted a recovery of the partition table on the two drives:
gpart recover ada5
Code:
gpart: arg0 'ada5': Invalid argument
gpart recover ada6
Code:
gpart: arg0 'ada6': Invalid argument
In another thread, Dusan suggested that the partition information could be copied from a known-good drive to the affected ones. I haven't attempted this yet and am looking for help first, because I really don't want to cause any further damage and I want to recover my data. Can someone familiar with this type of situation please explain why what I did caused such an issue and how I can correct it? If it helps, I do know the gptids of the two affected drives. I have not written any data to these drives since this happened, so I believe all of the data should still be intact if I can just find a way to make the pool happy again.
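For my own clarity, here is roughly what I understand that partition-copy suggestion to mean, assuming ada4 is a healthy 5TB drive from the same vdev with an identical layout. I have NOT run any of this and won't until someone confirms it is the right approach, since it rewrites the partition tables on ada5 and ada6:
Code:
# Copy the partition layout (not the data) from the known-good 5TB
# drive onto each affected drive. This REWRITES their partition tables.
gpart backup ada4 | gpart restore -F ada5
gpart backup ada4 | gpart restore -F ada6

# If the ZFS labels inside the data partitions are untouched, the pool
# should then show up again
zpool import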