Problem with disk in zvol

Status
Not open for further replies.

mykolaq

Explorer
Joined
Apr 10, 2014
Messages
61
Hello!
I have installed FreeNAS-9.2.1.8-RELEASE-x64 on a server with 2 internal disks and 36 external disks attached through an LSI MegaRAID SAS 9280-4i4e (don't ask me why this controller). All 36 disks are configured as single-disk RAID 0 arrays (following this instruction http://skeletor.org.ua/?p=3850 — in Russian, but the commands are easy to understand).
I have two zpools, and they are now degraded:
Code:
 NAME                                            STATE     READ WRITE CKSUM
        MainData                                        DEGRADED     0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/e59aa02b-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/e5e6b1dc-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/e6362bad-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/e68340ff-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/e6d22a15-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/e71c161d-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/e76147c7-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/e7afc172-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/e7fd1436-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/e84c91ba-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
          raidz2-1                                      DEGRADED     0     0     0
            gptid/e89e5a42-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/e8f220ba-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/e948520e-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/e998d540-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/e9ec9702-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/ea3b8926-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/ea891935-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            18168852208074115215                        UNAVAIL      0     0     0  was /dev/gptid/eada0883-6411-11e4-958c-002590e8bb9e
            gptid/eb2cd53a-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/eb7ca2ff-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
          raidz2-2                                      DEGRADED     0     0     0
            gptid/ebdcf4f4-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/ec340b46-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/ec852bcd-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            11500276686699951897                        OFFLINE      0     0     0  was /dev/gptid/ecdd547d-6411-11e4-958c-002590e8bb9e
            gptid/ed3459cf-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/ed9c7b67-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/edf4b74a-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/ee49ff5a-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/eea155c4-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/eef8df33-6411-11e4-958c-002590e8bb9e  ONLINE       0     0     0

 pool: Mirror
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        Mirror                                          DEGRADED     0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/4accb0c7-6412-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/4b1ecbae-6412-11e4-958c-002590e8bb9e  ONLINE       0     0     0
          mirror-1                                      DEGRADED     0     0     0
            gptid/4b7b6298-6412-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            10847201807994614780                        UNAVAIL      0     0     0  was /dev/gptid/4bd07944-6412-11e4-958c-002590e8bb9e
          mirror-2                                      ONLINE       0     0     0
            gptid/4c336b20-6412-11e4-958c-002590e8bb9e  ONLINE       0     0     0
            gptid/4c888550-6412-11e4-958c-002590e8bb9e  ONLINE       0     0     0


I tried to replace this disk, and took it offline and brought it back online afterwards, but that didn't help.
RAID 0 is OK on these disks (checked with MegaCli).
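For reference, the replacement attempt described above would look roughly like this on the ZFS side. A sketch only: the GUID is taken from the `zpool status` output above, but the new device name is a placeholder for whatever partition the replacement disk gets.

```shell
# Take the failed member offline (it may already show UNAVAIL/OFFLINE);
# ZFS accepts the numeric GUID shown in `zpool status` as the device name.
zpool offline MainData 18168852208074115215

# Replace it with the new disk's partition
# (gptid/new-partition-id is a hypothetical placeholder):
zpool replace MainData 18168852208074115215 gptid/new-partition-id

# Watch the resilver progress:
zpool status -v MainData
```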
 

mykolaq

Explorer
Joined
Apr 10, 2014
Messages
61
The system doesn't see my disks:
Code:
kern.disks: mfid33 mfid32 mfid31 mfid30 mfid29 mfid28 mfid27 mfid26 mfid25 mfid24 mfid23 mfid22 mfid21 mfid20 mfid19 mfid18 mfid17 mfid16 mfid15 mfid14 mfid13 mfid12 mfid11 mfid10 mfid9 mfid8 mfid7 mfid6 mfid5 mfid4 mfid3 mfid2 mfid1 mfid0
 

enemy85

Guru
Joined
Jun 10, 2011
Messages
757
Looks like you are using hardware RAID, not following the software recommendations...
It looks like you are having multiple disk failures, but it should be fixable by replacing the failed disks. If that doesn't work, it could be a RAID card failure.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
What enemy85 said. Things don't work right with hardware RAID. The problem is that you may be able to unplug a hard drive from the system, but the RAID card may or may not report the true status of the array. The RAID card will happily lie to the system about all sorts of things, and the last thing you want with ZFS is to be lied to. That *always* ends badly.

My advice is to backup the pool, destroy it, and recreate it without using hardware RAID. Yes, doing RAID0 of single disks is still using hardware RAID. It is a shame that you have such a big pool yet still used hardware RAID. It will certainly take considerable time to restore that pool from backup. :(

Normally this is where I'd ask for a bunch of info about the disk, but you can't retrieve that info either, because you are using that controller. Unfortunately you're at an impasse: I can't use the normal diagnostic tools to even figure out the status of your server, nor can I tell you how to get out of this problem, because you are using what is basically a configuration that is unique to your server.

All that being said, I do hope you take this opportunity to recreate your pool and use proper hardware instead of trying to work around whatever problem you are currently experiencing. This was your "warning" from the ZFS gods that they aren't happy. Take advantage of the warning and don't be the next guy to lose his pool. A few weeks ago I had 3 different people, all with RAID-0 single-disk arrays on that controller or equivalents, who lost their pools without warning and had no backup. One was a company that was willing to pay whatever it took, because the company was out of business without the data.
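The backup-then-recreate route above can be done with ZFS replication. A minimal sketch, assuming a second pool (or remote machine) is available to hold the data; `backup/MainData` is a hypothetical target dataset:

```shell
# Snapshot the whole pool recursively so every dataset is captured:
zfs snapshot -r MainData@migrate

# Send the full recursive stream to the backup location
# (-R includes descendants and properties, -F forces a rollback on the target):
zfs send -R MainData@migrate | zfs receive -F backup/MainData

# After destroying and recreating the pool without hardware RAID,
# reverse the send/receive to restore the data.
```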
 

mykolaq

Explorer
Joined
Apr 10, 2014
Messages
61
Thank you all for the answers, but I solved the problem. It wasn't hard.
The problem was with the RAID card.
Short manual:
List all disks:
Code:
MegaCli -PDList -aALL

and look for drives in the "Unconfigured" state, then bring each one back to a good state, for example:
Code:
MegaCli -PDMakeGood -PhysDrv \[30:18\] -a0

where 0 is the controller number, 30 the enclosure number, and 18 the slot number. Then reboot the server.
I can't change this card now, and I can't switch it to HBA mode, because there is no HBA firmware for it :(
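Once the drive is back in a good state and the server has rebooted, the pool members still have to be brought back into ZFS. A sketch, using the GUIDs from the `zpool status` output earlier in the thread:

```shell
# Re-enable the members that ZFS marked OFFLINE/UNAVAIL;
# the numeric GUIDs are the ones shown in `zpool status` above.
zpool online MainData 11500276686699951897
zpool online Mirror 10847201807994614780

# Check that resilvering has started, and verify the pools afterwards:
zpool status -v MainData
zpool scrub MainData
```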
 