yarkin
Cadet
- Joined
- Jul 31, 2016
- Messages
- 5
Hi to all!
I installed FreeNAS 9.10-STABLE-201605021851 on new hardware 2 months ago, and it ran well until last night. It has 2 ZFS pools configured: ssdpool1 (4 SSDs in RAIDZ1) and hddpool1 (5 HDDs in RAIDZ2). I'm a beginner with FreeNAS and ZFS.
After FreeNAS started scrubbing the ssdpool1 pool last night (it sent me an email), the OS rebooted and then failed to initialize ZFS (it dropped to a "db>" debugger prompt).
I didn't know what else to do, so I just reinstalled FreeNAS with the latest stable release (9.10-STABLE-201606270534) and then tried to import my data pools. hddpool1 was imported without any issue, but when I tried to import the other one there were many errors on the server console like:
Code:
Tracing command kernel pid 0 tid 100088 td 0xffffff000262b000
sched_switch() at sched_switch+0x154
mi_switch() at mi_switch+0x21d
sleepq_switch() at sleepq_switch+0x123
sleepq_wait() at sleepq_wait+0x4d
_sleep() at _sleep+0x357
taskqueue_thread_loop() at taskqueue_thread_loop+0xb7
fork_exit() at fork_exit+0x12a
fork_trampoline() at fork_trampoline+0xe
--- trap 0, rip = 0, rsp = 0xffffff8012426d30, rbp = 0 ---
and then the server rebooted.
'zpool import' shows ssdpool1 is fine:
Code:
# zpool import
   pool: ssdpool1
     id: 5935814353070452819
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
	the '-f' flag.
    see: http://illumos.org/msg/ZFS-8000-EY
 config:

	ssdpool1                                        ONLINE
	  raidz1-0                                      ONLINE
	    gptid/a8a04159-393a-11e6-8dc6-0025905c9f10  ONLINE
	    gptid/a8c9b9e9-393a-11e6-8dc6-0025905c9f10  ONLINE
	    gptid/a8f63183-393a-11e6-8dc6-0025905c9f10  ONLINE
	    gptid/a91fa680-393a-11e6-8dc6-0025905c9f10  ONLINE
I can import it in read-only mode:
Code:
# zpool import -F -f -o readonly=on -R /mnt/temp ssdpool1
and 'zfs list' shows everything inside the pool.
I tried to copy the data I need (2 zvols) to hddpool1 using 'zfs send ... | zfs recv ...'. The first zvol copied fine (it's small, about 1 MB of data), but while the second one (it's big, about 300 GB of data) was copying, after some time the error occurred again and the system rebooted.
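For context, the copy was done with a pipeline of this general form (the snapshot and zvol names here are illustrative placeholders, not my exact ones):

Code:
# snapshot the zvol on the read-only-imported pool, then stream it to the other pool
# (names "zvol2" and "@rescue" are placeholders)
zfs snapshot ssdpool1/zvol2@rescue
zfs send ssdpool1/zvol2@rescue | zfs recv hddpool1/zvol2

(These commands are environment-specific and need an imported pool, so they are shown only to illustrate the shape of the pipeline.)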
I've checked my RAM (new memory with ECC) with memtest86, and it completed 2 passes without any errors.
Now I'm running 'zdb -e -bcsvL ssdpool1', but it will take many hours to finish.
What should I do if zdb hits the same error and my system reboots, or even if it doesn't report any errors?
Hoping for your help! Thanks.