"It's all on the dubious-sounding controller."
Why? What am I missing?
Hi BigDave, first off I would remove all three drives (of the functioning volume) from this machine ASAP, in case this has been caused by a hardware failure. Once tank1 is safe, connect the tank2 volume to the machine without the use of eSATA, using the mobo SATA ports instead. My hope is you get lucky, but I'm not holding out much hope. Good luck!
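Before pulling anything, it may help to confirm which drive is sitting on which port. A minimal sketch of how to check, assuming FreeBSD-style device names such as ada0 (illustrative commands, not from the thread):

camcontrol devlist    # lists every disk together with the controller channel it is attached to
smartctl -i /dev/ada0    # prints the model and serial number so the device can be matched to a physical drive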
"It's all on the dubious-sounding controller."
The N36L's built-in controller? I didn't realize it was considered dubious.
"It's all on the dubious-sounding controller."
Hi Eric, that disk cage is the default offering from the HP MicroServer - not sure what it's called, SATA disk controller? I would have thought there are no issues there.
"Will the change in /dev/ada number have any impact, or does FreeNAS use some other reference ID?"
ZFS uses its own labels, so the device number doesn't matter.
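To illustrate the point, a quick sketch (storageTank2 is the pool name that shows up later in the thread):

glabel status    # shows each gptid/... label and the adaX partition it currently lives on
zpool status storageTank2    # the pool config references those gptid labels, not the adaX numbers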
Sounds good and I'll make use of the great advice here.
@BigDave's made an excellent suggestion.
There are ample resources here to help guide you, and folks willing to review proposed builds for potential issues.
"ZFS uses its own labels, so the device number doesn't matter."
Ok, so this is what I'll give a try.
"Curious to know what will be the status of tank1 once all the drives in tank 1 are disconnected?"
You could go ahead and use the GUI to detach the Tank1 volume (please reference the manual for this). I would do this just to eliminate any potential confusion with error messages.
"5. Detach tank2 through UI"
5. Detach tank2 through CLI (since that's how you imported it most recently).
"6. Auto-import through UI (this failed last time)"
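A rough sketch of what step 5 could look like from the CLI, using the pool name that appears later in the thread (illustrative, not the exact commands used):

zpool export storageTank2    # cleanly exports (detaches) the pool from this system
zpool import    # with no pool name, lists pools that are available for import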
"Were you able to get your data back?"
Apologies to all. I meant to keep the thread updated. The machine is in a shutdown state, and I plan to work on Robert and BigDave's advice over the weekend (Saturday morning my time).
[root@freenas] /mnt/storageTank2/Backup# zpool status -v
  pool: storageTank2
 state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
        attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
  scan: scrub repaired 0 in 0h0m with 0 errors on Sat May 28 22:46:57 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        storageTank2                                    ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/16035408-f10b-11e2-a097-984be1087f8d  ONLINE       0     0     0
            gptid/16c352dd-f10b-11e2-a097-984be1087f8d  ONLINE       0     0   153
            gptid/17859654-f10b-11e2-a097-984be1087f8d  ONLINE       0     0    24

errors: No known data errors
"What do you guys suggest I should do?"
I would leave well alone until you've recovered as much data as possible. If you start scrubbing now it will slow down your data transfer. Then by all means try scrubbing, clearing, and SMART tests.
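If you want to keep an eye on the pool while the copy runs without kicking off a scrub, something along these lines is non-intrusive (a sketch only, pool name taken from the output above):

zpool status storageTank2    # check whether the read/write/checksum counters are still climbing
zpool iostat -v storageTank2 5    # watch per-device throughput every 5 seconds during the transfer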
Some positive news and update. As advised, I carried out the following tasks:
- Detach tank1 through the UI. I noticed an export of tank2 was not required, as the zpool import command already returned tank2 in the list
- Shut down and safely removed all drives from the system, then placed the 3x Hitachi drives into the HP controller
- Upon successful boot, I confirmed all disks were recognised by the OS using dmesg, the FreeNAS UI showed all disks as online, and zpool import returned tank2 in the list
- I then used 'auto import' from the UI and was able to successfully import tank2!
- At this stage, I created the AFP/CIFS shares to copy my data back to a remote location

That's awesome! Glad to hear this worked for you. Kudos for keeping your cool.

And thanks for your advice @BigDave and @Robert Trevellyan in this post.
"The ultimate goal will be to identify and replace any failing disks, then rebuild (or replace)"
Hi Robert, I plan to run a scrub after the backup jobs complete successfully. I am keen to diagnose the issue, and this may take a few more days due to time constraints at my end. I will keep you guys posted.
Since the problem pool used the ODD and eSATA connections, are you running one of the hacked BIOSes on the server? If not, those connections use IDE emulation. To get AHCI, you need an alternative BIOS. Glad to hear things worked for you.

Thanks @gpsguy. My N36L is almost 5 years old now. From memory, I think I updated the BIOS to enable AHCI for drives #4 & #5. Still, it's a good reminder; I will go into the BIOS and confirm once I have the chance to shut down the server again.
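In case it's useful, a rough way to check from the running system which mode the ports are in, without rebooting into the BIOS (a sketch, not from the thread):

dmesg | grep -i ahci    # AHCI-attached ports show up as ahci0 / ahcichN channels
dmesg | grep -i ada    # each ada disk line names the channel it attached on (ahcichN vs. an ata channel)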
[root@freenas] /etc# zpool status
  pool: storageTank2
 state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
        attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
  scan: scrub repaired 0 in 0h0m with 0 errors on Sat May 28 22:46:57 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        storageTank2                                    ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/16035408-f10b-11e2-a097-984be1087f8d  ONLINE       0     0     0
            gptid/16c352dd-f10b-11e2-a097-984be1087f8d  ONLINE       0     0 15.7K
            gptid/17859654-f10b-11e2-a097-984be1087f8d  ONLINE       0     0   242
[root@freenas] /etc# zpool scrub storageTank2
[root@freenas] /etc# zpool status -v
  pool: storageTank2
 state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
        attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
  scan: scrub repaired 14.0G in 3h29m with 0 errors on Fri Jun 10 23:09:43 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        storageTank2                                    ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/16035408-f10b-11e2-a097-984be1087f8d  ONLINE       0     0     0
            gptid/16c352dd-f10b-11e2-a097-984be1087f8d  ONLINE       0     0  246K
            gptid/17859654-f10b-11e2-a097-984be1087f8d  ONLINE       0     0 1.46K

errors: No known data errors
[root@freenas] /etc# zpool clear storageTank2
[root@freenas] /etc# zpool status -v
  pool: storageTank2
 state: ONLINE
  scan: scrub repaired 14.0G in 3h29m with 0 errors on Fri Jun 10 23:09:43 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        storageTank2                                    ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/16035408-f10b-11e2-a097-984be1087f8d  ONLINE       0     0     0
            gptid/16c352dd-f10b-11e2-a097-984be1087f8d  ONLINE       0     0     0
            gptid/17859654-f10b-11e2-a097-984be1087f8d  ONLINE       0     0     0

errors: No known data errors
"I am inclined to run a thorough SMART test on the disks if it is of any worth."
You should schedule regular short and extended SMART tests as part of the proper care and feeding of any FreeNAS system. Combined with SMART status checks and working email notifications, they are likely to give you early warning of disk failure.
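For the thorough test mentioned above, a minimal sketch using smartctl directly (ada0 is just an example device; in the FreeNAS GUI these would normally be set up as scheduled S.M.A.R.T. tests rather than run by hand):

smartctl -t short /dev/ada0    # short self-test, usually a couple of minutes
smartctl -t long /dev/ada0    # extended self-test, can take several hours on large disks
smartctl -a /dev/ada0    # afterwards, review the self-test log and SMART attributes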