[SOLVED] Critical Alert - SickRage corrupted

Status
Not open for further replies.

henrique

Cadet
Joined
Oct 23, 2013
Messages
4
Hello,

I'm getting a critical alert and I don't know how to fix it. Can you please help me?

The output of zpool status -v:
Code:
  
pool: Data                                                                                                                       
state: ONLINE                                                                                                                     
  scan: scrub repaired 0 in 0h33m with 0 errors on Sun Oct 18 04:33:12 2015                                                        
config:                                                                                                                            
                                                                                                                                   
        NAME                                          STATE     READ WRITE CKSUM                                                   
        Data                                          ONLINE       0     0     0                                                   
          gptid/e5b75fab-38d8-11e3-9a06-3cd92b0c1f4e  ONLINE       0     0     0                                                   
                                                                                                                                   
errors: No known data errors                                                                                                       
                                                                                                                                   
  pool: freenas-boot                                                                                                               
state: ONLINE                                                                                                                     
  scan: scrub repaired 0 in 0h5m with 0 errors on Thu Nov  5 03:50:17 2015                                                         
config:                                                                                                                            
                                                                                                                                   
        NAME        STATE     READ WRITE CKSUM                                                                                     
        freenas-boot  ONLINE       0     0     0                                                                                   
          da0p2     ONLINE       0     0     0                                                                                     
                                                                                                                                   
errors: No known data errors                                                                                                       
                                                                                                                                   
  pool: mediacenter                                                                                                                
state: ONLINE                                                                                                                     
status: One or more devices has experienced an error resulting in data                                                             
        corruption.  Applications may be affected.                                                                                 
action: Restore the file in question if possible.  Otherwise restore the                                                           
        entire pool from backup.                                                                                                   
   see: http://illumos.org/msg/ZFS-8000-8A                                                                                         
  scan: scrub repaired 0 in 3h51m with 1 errors on Sun Nov  1 03:51:27 2015                                                        
config:                                                                                                                            
                                                                                                                                   
        NAME                                          STATE     READ WRITE CKSUM                                                   
        mediacenter                                   ONLINE       0     0     0                                                   
          gptid/161b6de3-17d7-11e3-afbd-3cd92b0c1f4e  ONLINE       0     0     0                                                   
          gptid/169af013-17d7-11e3-afbd-3cd92b0c1f4e  ONLINE       0     0     0                                                   
          gptid/1749c828-17d7-11e3-afbd-3cd92b0c1f4e  ONLINE       0     0     0                                                   
                                                                                                                                   
errors: Permanent errors have been detected in the following files:                                                                
                                                                                                                                   
        /mnt/mediacenter/jails/sickrage_1/usr/pbi/sickrage-amd64/share/sickrage/SickRage/lib/tornado/platform


I know it's something related to the SickRage plugin, but I searched the forum and it seems nobody else has the same problem.

Thanks in advance
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
The problem is data corruption on your pool--it just happens to be in one of the files of your sickrage plugin. Since you built your pool with no redundancy, ZFS has no way to fix the corrupted data. Your only real option is to delete and reinstall the plugin.
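For reference, the command-line view of that cleanup would look roughly like this (dataset names are taken from the error path in your zpool output; the plugin itself is normally removed through the FreeNAS GUI, so treat this as an illustrative sketch only):
Code:
# List the files ZFS has flagged as permanently corrupted
zpool status -v mediacenter

# After deleting the SickRage plugin in the GUI, remove its jail dataset
# if it is still present (assumes the jail lives in its own dataset)
zfs list -r mediacenter/jails
zfs destroy -r mediacenter/jails/sickrage_1

# Scrub again so ZFS re-checks the pool
zpool scrub mediacenter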
 

henrique

Cadet
Joined
Oct 23, 2013
Messages
4
Thanks for the quick reply!

Now a newbie question: is there an easy way to add some redundancy? I just want to avoid this happening in the future.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
There really isn't a way to add redundancy at this point. Unfortunately, the only real answer would be to back up your data, destroy the pool, and rebuild it in a redundant configuration (RAIDZ).
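If you do go that route, the rough sequence from the shell would be something like the following (the backup pool and the da1/da2/da3 device names are placeholders; on FreeNAS you'd normally create the new pool through the GUI so the disks get partitioned and labelled properly, so this is just to show the idea):
Code:
# 1. Back everything up first, e.g. replicate to another pool or machine
zfs snapshot -r mediacenter@migrate
zfs send -R mediacenter@migrate | zfs receive -F backuppool/mediacenter

# 2. Destroy the non-redundant pool
zpool destroy mediacenter

# 3. Recreate it as RAIDZ across the same three disks
#    (da1/da2/da3 are placeholder device names)
zpool create mediacenter raidz da1 da2 da3

# 4. Restore the data from the backup
zfs send -R backuppool/mediacenter@migrate | zfs receive -F mediacenter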
 

henrique

Cadet
Joined
Oct 23, 2013
Messages
4
Hi again,

After removing and reinstalling the SickRage plugin, I still have the critical alert. Should I run something to refresh the state of FreeNAS?
 

Joshua Parker Ruehlig

Hall of Famer
Joined
Dec 5, 2011
Messages
5,949
Hi again,

After removing and reinstalling the SickRage plugin, I still have the critical alert. Should I run something to refresh the state of FreeNAS?
A zpool scrub would clear that.
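Assuming the affected pool is mediacenter as in your output, something along these lines (just a sketch):
Code:
# Re-scrub the pool; with the corrupted file gone, the error entry
# should drop off the list once the scrub completes
zpool scrub mediacenter

# Check progress / results
zpool status -v mediacenter

# If a stale error entry still lingers, this resets the pool's error
# counters (it does not repair anything by itself)
zpool clear mediacenter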

As to Dan's earlier comment, you could actually add redundancy if you want to go with a mirror: you could attach a mirror drive to your current setup.
Most people go with RAIDZ/RAIDZ2 setups because you get better space utilization as you add drives, but in that case you can't migrate directly from your current zpool.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
As to Dan's earlier comment, you could actually add redundancy if you want to go with a mirror: you could attach a mirror drive to your current setup.
Since his pool appears to consist of a single vdev of three striped disks, I didn't think it would be possible to add mirrors to each of them. I haven't tried it, though, so I could certainly be wrong.
 

Joshua Parker Ruehlig

Hall of Famer
Joined
Dec 5, 2011
Messages
5,949
Since his pool appears to consist of a single vdev of three striped disks, I didn't think it would be possible to add mirrors to each of them. I haven't tried it, though, so I could certainly be wrong.
Whoops, you're right! I only looked at the first pool, which is obviously his freenas-boot.
 