SCSI Failure Error

Status
Not open for further replies.

fluentd

Dabbler
Joined
Aug 20, 2013
Messages
26
I noticed that FreeNAS was giving me a Pool Degraded alert. This has happened before; each time, I would take the system outside, dust it out, and make sure all the cables are seated tight.

I tried that today, and after I booted the system back up I am getting the following:

SCSI sense: HARDWARE FAILURE asc:44,0 (Internal target failure)

Attached is a screenshot of Pool1 with the faulted drive. I have no way to figure out which drive is faulted, or whether that is even why I am getting the SCSI error.
 

Attachments

  • drives.jpg (182.6 KB)

fluentd

Dabbler
Joined
Aug 20, 2013
Messages
26
Here is what my log is showing:

I tried pulling one of the drives' cables, but then the GUI won't show me any information for the pool, so I am a bit lost. When I built the machine I was learning as I went, so labeling the drives never happened. Any trick to help me figure out which drive is giving me issues would be great.

Also, since I got the SCSI errors I am no longer able to access my shared folders, so I can't get to any of my data.
 

Attachments

  • log.txt (45.2 KB)

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
In code tags, please post the output of:
zpool status
and then:
glabel status
 

fluentd

Dabbler
Joined
Aug 20, 2013
Messages
26
Code:
 pool: Pool1                                                                                                                      
state: DEGRADED                                                                                                                   
status: One or more devices are faulted in response to persistent errors.                                                          
        Sufficient replicas exist for the pool to continue functioning in a                                                        
        degraded state.                                                                                                            
action: Replace the faulted device, or use 'zpool clear' to mark the device                                                        
        repaired.                                                                                                                  
  scan: scrub in progress since Sun Nov 22 18:02:28 2015                                                                           
        2.42T scanned out of 17.6T at 424M/s, 10h25m to go                                                                         
        0 repaired, 13.77% done                                                                                                    
config:                                                                                                                            
                                                                                                                                   
        NAME                                            STATE     READ WRITE CKSUM                                                 
        Pool1                                           DEGRADED     0     0     0                                                 
          raidz1-0                                      DEGRADED     0     0     0                                                 
            gptid/c5473810-2358-11e1-99f3-14dae943ed94  ONLINE       0     0     0  block size: 512B configured, 4096B native      
            gptid/c5a653a3-2358-11e1-99f3-14dae943ed94  ONLINE       0     0     0  block size: 512B configured, 4096B native      
            gptid/c6572670-2358-11e1-99f3-14dae943ed94  ONLINE       0     0     0  block size: 512B configured, 4096B native      
            gptid/c70cad1f-2358-11e1-99f3-14dae943ed94  FAULTED      0 3.09K     1  too many errors                                
            gptid/c7d4b16c-2358-11e1-99f3-14dae943ed94  ONLINE       0     0     0                                                 
            gptid/c88da5f0-2358-11e1-99f3-14dae943ed94  ONLINE       0     0     0                                                 
          raidz1-1                                      ONLINE       0     0     0                                                 
            gptid/ab3937a7-ea46-11e1-a72d-14dae943ed94  ONLINE       0     0     0                                                 
            gptid/abf87f67-ea46-11e1-a72d-14dae943ed94  ONLINE       0     0     0                                                 
            gptid/acd87751-ea46-11e1-a72d-14dae943ed94  ONLINE       0     0     0  block size: 512B configured, 4096B native      
                                                                                                                                   
errors: No known data errors                                                                                                       
                                                                                                                                   
  pool: Pool3                                                                                                                      
state: ONLINE                                                                                                                     
status: The pool is formatted using a legacy on-disk format.  The pool can                                                         
        still be used, but some features are unavailable.                                                                          
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the                                                            
        pool will no longer be accessible on software that does not support feature                                                
        flags.                                                                                                                     
  scan: scrub repaired 0 in 2h39m with 0 errors on Sun Nov  1 01:39:47 2015                                                        
config:                                                                                                                            
                                                                                                                                   
        NAME                                            STATE     READ WRITE CKSUM                                                 
        Pool3                                           ONLINE       0     0     0                                                 
          raidz1-0                                      ONLINE       0     0     0                                                 
            gptid/10c071bc-3cbb-11e1-8dcd-14dae943ed94  ONLINE       0     0     0                                                 
            gptid/1139224c-3cbb-11e1-8dcd-14dae943ed94  ONLINE       0     0     0                                                 
            gptid/11c27864-3cbb-11e1-8dcd-14dae943ed94  ONLINE       0     0     0                                                 
            gptid/12495eae-3cbb-11e1-8dcd-14dae943ed94  ONLINE       0     0     0                                                 
                                                                                                                                   
errors: No known data errors 


Code:
Name  Status  Components                                                                     
gptid/c5a653a3-2358-11e1-99f3-14dae943ed94     N/A  ada0p2                                                                         
gptid/c5473810-2358-11e1-99f3-14dae943ed94     N/A  ada1p2                                                                         
gptid/ab3937a7-ea46-11e1-a72d-14dae943ed94     N/A  ada2p2                                                                         
gptid/abf87f67-ea46-11e1-a72d-14dae943ed94     N/A  ada3p2                                                                         
gptid/acd87751-ea46-11e1-a72d-14dae943ed94     N/A  ada4p2                                                                         
gptid/c6572670-2358-11e1-99f3-14dae943ed94     N/A  da0p2                                                                          
gptid/c70cad1f-2358-11e1-99f3-14dae943ed94     N/A  da1p2                                                                          
gptid/c7d4b16c-2358-11e1-99f3-14dae943ed94     N/A  da2p2                                                                          
gptid/c88da5f0-2358-11e1-99f3-14dae943ed94     N/A  da3p2                                                                          
gptid/10c071bc-3cbb-11e1-8dcd-14dae943ed94     N/A  da4p2                                                                          
gptid/1139224c-3cbb-11e1-8dcd-14dae943ed94     N/A  da5p2                                                                          
gptid/11ab821f-3cbb-11e1-8dcd-14dae943ed94     N/A  da6p1                                                                          
gptid/11c27864-3cbb-11e1-8dcd-14dae943ed94     N/A  da6p2                                                                          
gptid/1233b57b-3cbb-11e1-8dcd-14dae943ed94     N/A  da7p1                                                                          
gptid/12495eae-3cbb-11e1-8dcd-14dae943ed94     N/A  da7p2                                                                          
                             ufs/FreeNASs3     N/A  da8s3                                                                          
                             ufs/FreeNASs4     N/A  da8s4                                                                          
                    ufsid/53b5e06f1a44c077     N/A  da8s1a                                                                         
                            ufs/FreeNASs1a     N/A  da8s1a                                                                         
                            ufs/FreeNASs2a     N/A  da8s2a   
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
Code:
gptid/c70cad1f-2358-11e1-99f3-14dae943ed94     N/A  da1p2

According to the gptid label, it's drive da1 @ line 8.
Wait until the scrub finishes and see if you can gain access to the
GUI. You need the drive's serial number. Write down the gptid number
and set it aside for now; we may need it later. If the drive in question
shows issues after some SMART tests, you may need to replace the disk.
For now, you wait. If you are religious, say a prayer; your pool is made
up of a lot of drives and you have only one drive's worth of parity. :eek:
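The gptid-to-device lookup above can be scripted. Here is a minimal sketch; on a live system you would pipe `glabel status` straight into awk, but two sample lines copied from the output posted earlier in the thread stand in so the pipeline is self-contained.

```shell
# Find which disk a faulted gptid (from `zpool status`) belongs to.
faulted="gptid/c70cad1f-2358-11e1-99f3-14dae943ed94"

# Sample of `glabel status` output from this thread; replace with:
#   glabel status
glabel_sample='gptid/c5a653a3-2358-11e1-99f3-14dae943ed94     N/A  ada0p2
gptid/c70cad1f-2358-11e1-99f3-14dae943ed94     N/A  da1p2'

# Column 3 of `glabel status` is the component (e.g. da1p2); strip the
# partition suffix to get the bare device node.
part=$(printf '%s\n' "$glabel_sample" | awk -v id="$faulted" '$1 == id {print $3}')
disk=${part%p[0-9]*}
echo "$disk"
```

With the real `glabel status` output this prints the device node (here, da1) that the faulted gptid maps to.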
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
While waiting for that scrub to finish, please tell us which version of FreeNAS you are running.
You might also get familiar with the section of the manual on replacing a failed drive,
just in case :rolleyes:
If you are on FreeNAS 9.3, the section is 8.1.10, Replacing a Failed Drive.
 

fluentd

Dabbler
Joined
Aug 20, 2013
Messages
26
I still don't understand why I can't access my shared folders with data. The degraded state has been around for about a month now, and it was working today; I was copying files over to it earlier.
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
I still don't understand why I can't access my shared folders with data. The degraded state has been around for about a month now, and it was working today; I was copying files over to it earlier.
Right now you are trying to prevent the loss of all your data, please don't worry about your shared
folder not showing up on your network. You have bigger fish to fry at the moment...
FreeNAS-9.2.1.8-RELEASE-x64
Thanks for this ^^^^
Do you have access to a copy of the 9.2.1.8 manual? Study up on replacing a failed drive.
 

fluentd

Dabbler
Joined
Aug 20, 2013
Messages
26
I'm guessing the highlighted drive is the one that is faulted? The 2TB one?
 

Attachments

  • drives.jpg (319.4 KB)

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
"and gain the extra space", no

The replacement drive just needs to be the same size or larger.

Please go back to the CLI and type in # smartctl -a /dev/da1
and post the results.
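If the full `smartctl -a` output is overwhelming, the serial number can be pulled out with a short pipe. This is a sketch; the sample text below is made-up stand-in output, since on the NAS you would run `smartctl -a /dev/da1` directly.

```shell
# Extract the serial number from smartctl's information section so the
# faulted disk can be matched to a physical drive in the chassis.
# Hypothetical stand-in for real output; on the NAS run:
#   smartctl -a /dev/da1
smart_sample='Device Model:     EXAMPLE-MODEL-2000
Serial Number:    EXAMPLESERIAL123'

serial=$(printf '%s\n' "$smart_sample" | awk -F':[ \t]+' '/^Serial Number:/ {print $2}')
echo "$serial"
```

Writing that serial on a sticky note makes it easy to find the right drive once the case is open.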
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
I forgot to ask: is the scrub done yet? If so, is the pool still giving a degraded warning?
 

fluentd

Dabbler
Joined
Aug 20, 2013
Messages
26
Well, it looks like the data is still good. I am watching a show from Plex, whose storage is on my FreeNAS system.

Scrub is at 40%.
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
All right, I'm hittin' the sack; check the SMART output of that da1 drive ASAP.
I'll come back to this thread tomorrow morning and check.
You have some serious issues we need to go over so your data doesn't go POOF! G'nite!
 