ZFS is full and system is now unusable


dovaka

Dabbler
Joined
Apr 2, 2013
Messages
31
I currently have 168,194 left, and yes, I know one of the drives is bad; that is what triggered this whole thing.
Code:
[root@freenas ~]# zpool status -v                                                                                                 
  pool: Main_Storage                                                                                                               
state: DEGRADED                                                                                                                   
status: One or more devices are faulted in response to persistent errors.                                                         
        Sufficient replicas exist for the pool to continue functioning in a                                                       
        degraded state.                                                                                                           
action: Replace the faulted device, or use 'zpool clear' to mark the device                                                       
        repaired.                                                                                                                 
  scan: resilvered 61.5M in 3h42m with 0 errors on Thu Jun  5 19:53:24 2014                                                       
config:                                                                                                                           
                                                                                                                                   
        NAME                                            STATE    READ WRITE CKSUM                                                 
        Main_Storage                                    DEGRADED    0    0    0                                                 
          raidz2-0                                      DEGRADED    0    0    0                                                 
            gptid/9312d37a-9a28-11e2-b80d-003048b9116a  ONLINE      0    0    0                                                 
            gptid/935bccb3-9a28-11e2-b80d-003048b9116a  ONLINE      0    0    0                                                 
            gptid/93a52983-9a28-11e2-b80d-003048b9116a  ONLINE      0    0    0                                                 
            gptid/93ef29f6-9a28-11e2-b80d-003048b9116a  ONLINE      0    0    0                                                 
            gptid/943cec6e-9a28-11e2-b80d-003048b9116a  ONLINE      0    0    0                                                 
            gptid/948beb6e-9a28-11e2-b80d-003048b9116a  ONLINE      0    0    0                                                 
            gptid/94dfc0e8-9a28-11e2-b80d-003048b9116a  ONLINE      0    0    0                                                 
            gptid/9531217e-9a28-11e2-b80d-003048b9116a  ONLINE      0    0    0                                                 
            gptid/95825069-9a28-11e2-b80d-003048b9116a  ONLINE      0    0    0                                                 
            gptid/95d426a1-9a28-11e2-b80d-003048b9116a  ONLINE      0    0    0                                                 
            gptid/96265ad5-9a28-11e2-b80d-003048b9116a  ONLINE      0    0    0                                                 
            gptid/9678026c-9a28-11e2-b80d-003048b9116a  FAULTED    10  139    0  too many errors                               
                                                                                                                                   
errors: No known data errors                                                                                                       
[root@freenas ~]#                                                                                                                 

Code:
[root@freenas ~]# zpool list -v                                                                                                   
NAME                                    SIZE  ALLOC  FREE    CAP  DEDUP  HEALTH  ALTROOT                                         
Main_Storage                            10.9T  10.6T  233G    97%  1.00x  DEGRADED  /mnt                                         
  raidz2                                10.9T  10.6T  233G        -                                                             
    gptid/9312d37a-9a28-11e2-b80d-003048b9116a      -      -      -        -                                                     
    gptid/935bccb3-9a28-11e2-b80d-003048b9116a      -      -      -        -                                                     
    gptid/93a52983-9a28-11e2-b80d-003048b9116a      -      -      -        -                                                     
    gptid/93ef29f6-9a28-11e2-b80d-003048b9116a      -      -      -        -                                                     
    gptid/943cec6e-9a28-11e2-b80d-003048b9116a      -      -      -        -                                                     
    gptid/948beb6e-9a28-11e2-b80d-003048b9116a      -      -      -        -                                                     
    gptid/94dfc0e8-9a28-11e2-b80d-003048b9116a      -      -      -        -                                                     
    gptid/9531217e-9a28-11e2-b80d-003048b9116a      -      -      -        -                                                     
    gptid/95825069-9a28-11e2-b80d-003048b9116a      -      -      -        -                                                     
    gptid/95d426a1-9a28-11e2-b80d-003048b9116a      -      -      -        -                                                     
    gptid/96265ad5-9a28-11e2-b80d-003048b9116a      -      -      -        -                                                     
    gptid/9678026c-9a28-11e2-b80d-003048b9116a      -      -      -        -                                                     
[root@freenas ~]#  
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Please replace the failed disk drive, either now or, at the latest, when you get down to 94% capacity.

Think of your RAID-Z2 with 12 disks as if it had the reliability of two RAID-5s, each with 6 disks.

P.S. Your faulted disk has nothing to do with the number of snapshots you have or the speed at which the system is removing them.
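
For what it's worth, once a replacement disk is physically in place, the swap can also be done from the shell. This is only a rough sketch (da9 is a placeholder for the new device name; the GUI Volume Manager is the normal FreeNAS route):
Code:
# gptid of the faulted member, taken from the zpool status output above
zpool offline Main_Storage gptid/9678026c-9a28-11e2-b80d-003048b9116a
# once the new disk is installed (da9 is a placeholder device name)
zpool replace Main_Storage gptid/9678026c-9a28-11e2-b80d-003048b9116a da9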
 

dovaka

Dabbler
Joined
Apr 2, 2013
Messages
31
I actually can't replace it. When I go to the Volume Manager it just sits there saying "loading" forever and never shows me the option to replace it.
 

dovaka

Dabbler
Joined
Apr 2, 2013
Messages
31
It seems to have deleted all of the snaps. When I imported the volume it allowed me to do all the cleaning, but I don't see it showing up anywhere as actually mounted. And when I go back to try to import it again, it's not there either.
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Please post once again the results of
Code:
tail /var/log/messages
zpool list -v
zpool status -v
I have no idea what is going on; I'm trying to check the status first.
 

dovaka

Dabbler
Joined
Apr 2, 2013
Messages
31
Code:
[root@freenas ~]# tail /var/log/messages                                                                                         
Jun  9 11:52:56 freenas kernel: ....................................................................                             
Jun  9 11:52:56 freenas kernel: ..........+++                                                                                   
Jun  9 11:52:57 freenas smartd[3332]: Configuration file /usr/local/etc/smartd.conf parsed but has no entries (like /dev/hda)   
Jun  9 11:52:57 freenas root: /etc/rc: WARNING: failed to start smartd                                                           
Jun  9 11:53:00 freenas mDNSResponder: mDNSResponder (Engineering Build) (Feb  8 2014 00:26:22) starting                         
Jun  9 11:53:00 freenas mDNSResponder:  8: Listening for incoming Unix Domain Socket client requests                           
Jun  9 11:53:00 freenas mDNSResponder: mDNS_AddDNSServer: Lock not held! mDNS_busy (0) mDNS_reentrancy (0)                       
Jun  9 11:53:01 freenas kernel: done.                                                                                           
Jun  9 11:53:01 freenas mDNSResponder: mDNS_Register_internal: ERROR!! Tried to register AuthRecord 0000000800C2FD60 freenas.local. (Addr) that's already in the list
Jun  9 11:53:01 freenas mDNSResponder: mDNS_Register_internal: ERROR!! Tried to register AuthRecord 0000000800C30180 200.1.16.172.in-addr.arpa. (PTR) that's already in the list
[root@freenas ~]#

Code:
[root@freenas ~]# zpool list -v                                                                                                     
no pools available                                                                                                                  
[root@freenas ~]#

Code:
[root@freenas ~]# zpool status -v                                                                                               
no pools available                                                                                                               
[root@freenas ~]#                                                                                                               
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
I would have asked for the output of dmesg | grep ada; however, S.M.A.R.T. does not see any disks...

Does the BIOS see the disks? The controller might have failed. Also, please try
Code:
camcontrol devlist
dmesg | grep -i controller | grep -v USB
 

dovaka

Dabbler
Joined
Apr 2, 2013
Messages
31
Yes, the controller sees all the disks, and FreeNAS saw them when I did the auto-import, but now they don't show up at all after I did that.
Code:
[root@freenas ~]# camcontrol devlist                                           
<ATA WDC WD1002FBYS-0 0C06>        at scbus0 target 12 lun 0 (da0,pass0)       
<ATA WDC WD1002FBYS-0 0C06>        at scbus0 target 13 lun 0 (da1,pass1)       
<ATA WDC WD1002FBYS-0 0C06>        at scbus0 target 14 lun 0 (da2,pass2)       
<ATA WDC WD1002FBYS-0 0C06>        at scbus0 target 15 lun 0 (da3,pass3)       
<ATA WDC WD1002FBYS-0 0C06>        at scbus0 target 16 lun 0 (da4,pass4)       
<ATA WDC WD1002FBYS-0 0C06>        at scbus0 target 17 lun 0 (da5,pass5)       
<ATA WDC WD1002FBYS-0 0C06>        at scbus0 target 18 lun 0 (da6,pass6)       
<ATA WDC WD1002FBYS-0 0C06>        at scbus0 target 19 lun 0 (da7,pass7)       
<WDC WD1002FBYS-02A6B0 03.00C06>  at scbus1 target 0 lun 0 (ada0,pass8)       
<WDC WD1002FBYS-02A6B0 03.00C06>  at scbus2 target 0 lun 0 (ada1,pass9)       
<WDC WD1002FBYS-02A6B0 03.00C06>  at scbus3 target 0 lun 0 (ada2,pass10)     
<WDC WD1002FBYS-02A6B0 03.00C06>  at scbus4 target 0 lun 0 (ada3,pass11)     
<SanDisk Cruzer 1100>              at scbus8 target 0 lun 0 (pass12,da8)       
[root@freenas ~]#

Code:
[root@freenas ~]# dmesg | grep -i controller | grep -v USB                     
ahci0: <Intel ICH9 AHCI SATA controller> port 0x1c50-0x1c57,0x1c44-0x1c47,0x1c48-0x1c4f,0x1c40-0x1c43,0x18e0-0x18ff mem 0xde000800-0xde000fff irq 17 at device 31.2 on pci0
atkbdc0: <Keyboard controller (i8042)> port 0x60,0x64 irq 1 on acpi0           
fdc0: <floppy drive controller> port 0x3f0-0x3f5,0x3f7 irq 6 drq 2 on acpi0   
[root@freenas ~]#
 

dovaka

Dabbler
Joined
Apr 2, 2013
Messages
31
I found the cause of the issue. I redid a new install and import, and noticed on the terminal that the import failed because one of my plugin jails' names was too long, which caused it to fail. Is there a way to remove that, or to have it not import the jails from the pool?
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
I am working on the assumption that the message you saw reflects some unexpected GUI limit.

There could be a FreeNAS way of doing it, but a brute-force approach would be to import from the command line with zpool import

To learn the name of the dataset with the plugin, run zfs list

Now you can destroy just that one dataset with zfs destroy Jail_Name_from_above

Wrap up by exporting (so you can import in the GUI!) with zpool export Your_Pool_Name — see the sketch below.
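
Putting those steps together, a rough end-to-end sketch (the jail dataset path is only a placeholder; take the real name from the zfs list output):
Code:
zpool import Main_Storage                  # import the pool from the shell, bypassing the GUI
zfs list -r Main_Storage                   # find the dataset that holds the offending jail
zfs destroy Main_Storage/jails/Jail_Name   # placeholder path; add -r if the dataset has children or snapshots
zpool export Main_Storage                  # export so the GUI auto-import can pick the pool up again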

P.S. Before proceeding, you may want to post here the error message (screenshot) about the jail name being too long
 

dovaka

Dabbler
Joined
Apr 2, 2013
Messages
31
Well, after a lot of not sleeping, I got the original stick to boot up, and it doesn't seem to have a problem with that jail name, most likely because it's pre-existing and not being imported. After that I finished deleting the remaining snapshots, approximately 32k of them, and did a bit of housecleaning in the GUI for plugins and jails. The system is now functional but seems extremely slow. Now that I can finally turn CIFS back on, I have begun copying my data off of it so I don't have to pull 8 TB from my cloud storage, and then I'm just going to blank the server and start over.
Thanks for all the help; it was greatly appreciated.
 