ZFS not mounting

Status
Not open for further replies.

urcaGun

Dabbler
Joined
Dec 10, 2012
Messages
19
Hi...

The system rebooted by itself and now it won't start: it stops at "Mounting local file system" and goes no further.
I have 2 ZFS drives - host and warehouse.
If I disconnect the "host" ZFS drive, then the system boots up fine and works.
The second ZFS drive I can see over the network.

What can I do to make the "host" drive boot with the OS? How can I check it? How can I check ZFS?
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
Use [code][/code] tags. From an SSH session as root, post the output of the following:
Code:
zpool status -v

camcontrol devlist

glabel status

gpart show
 

urcaGun

Dabbler
Joined
Dec 10, 2012
Messages
19
ProtoSD, that's only possible after the OS boots up... but it doesn't boot while the "host" ZFS drive is connected.

Without the problem disk I get this:
Code:
zpool status -v

  pool: swap1
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
	replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
  scan: none requested
config:

	NAME                   STATE     READ WRITE CKSUM
	swap1                  UNAVAIL      0     0     0
	  2137101375734684980  UNAVAIL      0     0     0  was /dev/ada1p2

  pool: warehouse
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	warehouse   ONLINE       0     0     0
	  ada1p2    ONLINE       0     0     0

errors: No known data errors


Code:
 camcontrol devlist

<Maxtor 6Y080P0 YAR41BW0>          at scbus0 target 0 lun 0 (ada0,pass0)
<WDC WD10EARS-00Y5B1 80.00A80>     at scbus1 target 0 lun 0 (ada1,pass1)


Code:
glabel status

          Name  Status  Components
 ufs/FreeNASs3     N/A  ada0s3
 ufs/FreeNASs4     N/A  ada0s4
ufs/FreeNASs1a     N/A  ada0s1a



Code:
gpart show

=>        1  160086527  ada0  MBR  (76G)
          1         62        - free -  (31k)
         63    1930257     1  freebsd  [active]  (942M)
    1930320         63        - free -  (31k)
    1930383    1930257     2  freebsd  (942M)
    3860640       3024     3  freebsd  (1.5M)
    3863664      41328     4  freebsd  (20M)
    3904992  156181536        - free -  (74G)

=>      0  1930257  ada0s1  BSD  (942M)
        0       16          - free -  (8.0k)
       16  1930241       1  !0  (942M)

=>        34  1953525101  ada1  GPT  (931G)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  1949330696     2  freebsd-zfs  (929G)
  1953525128           7        - free -  (3.5k)
 

William Grzybowski

Wizard
iXsystems
Joined
May 27, 2011
Messages
1,754
It is probably trying to mount the ZFS pool at that point, and Something Bad Happened (TM).

Press Ctrl+T while it is hung to check the running process and its state.
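
On FreeBSD, Ctrl+T sends SIGINFO to the foreground process and the console prints a one-line status for it. The output looks roughly like this (the values and wait channel here are made up for illustration):

```
load: 0.15  cmd: zpool 1234 [tx->tx_sync_done_cv] 120.05r 0.00u 0.02s 0% 3012k
```

The name in square brackets is the wait channel, which tells you what the process is currently blocked on.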
 

urcaGun

Dabbler
Joined
Dec 10, 2012
Messages
19
Do I need to post all of these processes?

It is probably trying to mount the ZFS at that point

It's obvious, man...

My Linux Mint has ZFS utils, but they only support version 23 of the pool format. I can see the pool and see that it's active, but I can't mount it or see any params or properties...
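
If the pool was created by a newer FreeNAS (pool version 28), Linux ZFS tools capped at version 23 cannot import it. One way to confirm the on-disk version without importing, as a sketch (the device path here is an example, not from this thread):

```shell
# Read the ZFS vdev labels directly from the partition; the "version"
# field in the label shows the on-disk pool version.
zdb -l /dev/sdb2 | grep -m1 version
```

If the reported version is higher than what your tools support, the import refusal is expected and not a sign of corruption.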
 

William Grzybowski

Wizard
iXsystems
Joined
May 27, 2011
Messages
1,754
Go fu** yourself :)

If you don't know how to explain a problem or post details, the problem is yours. You should go back to grade school.

I hope you lose all your data :)
 

urcaGun

Dabbler
Joined
Dec 10, 2012
Messages
19
I removed the HDD from the NAS, connected it to "Victoria" and checked it. The full surface test passed. FreeNAS works fine without this disk. The problem, IMHO, is in the logical structure of the data on this HDD.
Are there any checks for this?
 

urcaGun

Dabbler
Joined
Dec 10, 2012
Messages
19
Hmmm.... While there is no advice, I'm trying to solve the problem myself.. :(
What I've done:
1. Disconnected the problem drive
2. Booted up FreeNAS
3. Detached the ZFS pool from the web interface
4. Rebooted the system and reconnected the drive
5. The system booted up normally
6. Under "Drives" - auto import, selected my pool "host", import.... about 10 minutes of something... and nothing..:)
7. OK, reboot - web interface - drives: one working pool, the second pool is absent.
8. OK, PuTTY as root: "mount -uw /", "zpool import" - I see my pool, its status is good, it can be imported...
9. "zpool import host"... and waiting, waiting.... After 3-5 minutes - kernel panic....
And now the system can't boot with the problem drive connected...

What can I do with this drive, can somebody help? Or how can I change loader.conf so my system boots without any panic messages... ???
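
For what it's worth, a common last-resort approach on FreeBSD for a pool that panics the kernel on import is a read-only import with the recovery tunable set. This is a sketch only, assuming the panic happens while the import replays pending transactions; it is not verified against this specific panic:

```shell
# In /boot/loader.conf (or at the loader prompt), enable ZFS recovery
# mode so the import tolerates some on-disk inconsistencies:
#   vfs.zfs.recover="1"

# Then, after booting (e.g. single-user, or with the pool not
# auto-imported), try a forced read-only import under an alternate
# root, so nothing is ever written back to the pool:
zpool import -f -o readonly=on -R /mnt host

# If that succeeds, copy the data off before attempting any
# read-write import or repair.
```

A read-only import cannot make the on-disk state worse, which is why it is usually tried before anything destructive.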
 
Joined
Nov 15, 2013
Messages
1
(Translated from Russian)

The bug occurred on:
OS: FreeNAS 9.1.1, AMD Athlon, 4GB DDR3, RAID-Z1 3x1TB
OS: FreeNAS 9.1.1, Intel Celeron, 2GB DDR3, RAID-Z1 3x2TB

While deleting large files on "Vol0/DATASET", an error appeared in the console:
"kernel: pid 76109 (***), uid 0, was killed: out of swap space."
After rebooting FreeNAS, booting stopped at "Mounting local file system".
Pressing Ctrl+T shows a zpool process that for some reason cannot finish, so the remaining processes from rc.d never start.

Solution:

1. Powered off FreeNAS
2. Disconnected the power cable from the disks...
3. Booted FreeNAS
4. Reconnected the power cable to the SATA disks
5. Logged in over SSH and ran:
cd /etc/rc.d
./ix-zfs stop
./ix-zfs start
6. In the web application, after about 10 minutes of active disk activity the start completed, the disk status was shown as HEALTHY, and all partitions mounted successfully...
7. After a reboot, no errors were observed...

Good mood to everyone)))
 