Failing disk and time to change RAID-z strategy?


freenas-supero

Contributor
Joined
Jul 27, 2014
Messages
128
Hello,

I am currently using 8x 2TB drives (a mixture of WD, Seagate & Hitachi) in a single RAID-Z3 pool (so 5 disks for data, 3 for parity). All of these are installed in a Supermicro 16-bay enclosure, and the server has plenty of memory and CPU power to handle the resulting 10TB vdev, split into 4 datasets with compression enabled on all of them (except one).

While the read speeds easily saturate a gigabit connection, I am having frequent performance problems writing or moving data in/out of the FreeNAS server, as I usually get around 7.5-10 MB/s, which is tremendously low (IMO).

I believe this is due to the hard drives being of the green type (except the Hitachis).

Just this morning one of the Seagate Barracudas started throwing unrecoverable sector errors (8 of them), which I believe is just the start. Right now the pool is being scrubbed, and I expect it to complete within the next 30 hours.

I will order a replacement drive, but in the meantime I am wondering how (and whether I should at all) to start over with the config and set up my drives differently.

Right now I see 3 options:

-Order a 2TB drive and replace the faulty one
-Order a 4TB drive, replace the faulty one, and gradually (as the remaining 2TB drives die, or as money allows) replace them with 4TBs, expanding the pool at the end
-Reconfigure everything using the remaining seven 2TB drives and a new 3-4TB drive

My goals are in order of importance:
-Data integrity (over and above all others combined)
-Storage space (a 10TB+ pool is good; 20+TB is too big, I will never use all that space)
-Write performance (my virtual machines regularly back files up and need some decent performance)
-Read performance (this FreeNAS server is used for media streaming and some other file reads, but not so much)

I am seeking any advice, opinions, etc. to help me make up my mind.

Thanks!

EDIT: Important to mention: all 2TB drives were purchased around the same time and are all 4 years old, so I expect them to slowly die (especially the WD and Seagate ones).
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
You cannot blame a write speed of 10 MB/s on using Green HDDs.

What is your NIC? Intel? Realtek?
 
L

Guest
The general rule of thumb is: more vdevs (or RAID sets), better performance. One big RAID group is going to give the worst overall performance; more, smaller groups are better for perf. Also think about how you want to grow: if you want to add disks to the pool, you will want to add them in like groups.
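For illustration (hypothetical pool and device names, not your layout), the two extremes look like:

# one big RAID group: most capacity, worst random I/O
zpool create tank raidz3 da0 da1 da2 da3 da4 da5 da6 da7

# four mirror vdevs: least capacity, best random I/O
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7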
 

freenas-supero

Contributor
Joined
Jul 27, 2014
Messages
128
Hello solarisguy

I agree 1000000% with you on the green drives... I'm almost glad they're failing, because I will be able to replace them with 7200RPM ones.

NICs are 3x Intel (ESB2/Gilgal) 82563EB

If I decide to reorganize, based on Linda's comment, I am thinking about splitting up into different vdevs and mirroring them, but I'd need same-size drives for that?

What would you recommend? Do you really think I could achieve better performance with 7200RPM drives?

Statistically speaking, wouldn't I be better off with more smaller drives (8x 2TB) instead of fewer larger drives? My reasoning is that if I want to keep more or less the same proportion of pool parity (right now 37.5%), then this space will be spread as parity across the drives of the pool. So for a RAID-Z3 pool like mine, a little less than 800GB on each drive is used for parity. If I end up with an identical pool using 4TB drives, for example, the parity amount per drive will be around 1.6TB. If one drive fails, it will take twice as long to resilver.
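As a quick back-of-the-envelope check (assuming an 8-disk RAID-Z3 with parity spread evenly across all members):

parity fraction = 3 drives / 8 drives = 37.5%
parity share per 2TB drive = 0.375 x 2TB = 0.75TB (the "little less than 800GB")
parity share per 4TB drive = 0.375 x 4TB = 1.5TB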

The other way would be a simpler RAID-Z2 array with larger drives, but the redundancy would be less. Finally, another way would be a simple RAID1 (mirror) setup, but in that case a lot of space would be used for redundancy.

I have to think about that a bit more....
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
I think that what Solarisguy said was that your slow write speed is not due to the green drives. Getting faster drives will not help at this sort of write speed. You need to find out why it is slow and try to fix that before planning new drives or pools. For instance, what percentage of your pool is used? And as Solarisguy asked, what NIC does your server use? I suppose it would be useful to the experts (of which I am not one) to know all the specs for your FreeNAS machine.
 

freenas-supero

Contributor
Joined
Jul 27, 2014
Messages
128
Hello Roger,

ask and you shall receive!

-Pool is 58.4% used (dataset 1 is 4% used, dataset 2 is 70% used, dataset 3 is 26% used, dataset 4 is 54% used)
-Pool is RAID-Z3 with 10.4TB usable
-Network is using only one onboard Intel 82563EB Gigabit controller (out of 3 available), tested & confirmed to work at gigabit speeds
-2x Quad core Xeon L5420 at 2.4GHz
-48GB DDR2-800 ECC RAM
-4 drives connected to an IBM M1015, the other 4 connected to another IBM M1015
-I was wrong in my initial post: all datasets are using LZ4 compression
-CPU usage rarely exceeds 20% (even during a scrub like right now; it's averaging 12.5%)
-System load is between 1.0 & 1.5 (during the scrub at least; will check after the scrub, but I think it's usually below 1.0)
-Disk activity rarely exceeds 30MB/s (even during the scrub)
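
For what it's worth, I have been watching the disk numbers with something like this (one-second intervals):

[root@freenas] ~# zpool iostat -v zpool 1
[root@freenas] ~# gstat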

Anything else please ask!!!
Thanks!
 

freenas-supero

Contributor
Joined
Jul 27, 2014
Messages
128
The M1015s are flashed to IT mode (simple HBA), and they are in the two PCI-E x8 slots of the motherboard.

What FreeBSD commands can I run to find out more about the health/status of the storage (disk) subsystem? Sorry to ask; I am a Linux guy and I'm just starting out with FreeBSD...

:)
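
From poking around so far, I am guessing the usual suspects are something like this (corrections welcome):

smartctl -a /dev/da5    # SMART health for one disk
zpool status -v         # pool/vdev state and per-device errors
camcontrol devlist      # what the HBAs actually see
gstat                   # live per-disk I/O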

EDIT: pciconf -lv shows:

mps0@pci0:9:0:0: class=0x010700 card=0x30201000 chip=0x00721000 rev=0x03 hdr=0x00
vendor = 'LSI Logic / Symbios Logic'
device = 'SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]'
class = mass storage
subclass = SAS
mps1@pci0:10:0:0: class=0x010700 card=0x30201000 chip=0x00721000 rev=0x02 hdr=0x00
vendor = 'LSI Logic / Symbios Logic'
device = 'SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]'
class = mass storage
subclass = SAS
 

freenas-supero

Contributor
Joined
Jul 27, 2014
Messages
128
I think the issues are creeping...

Dec 1 18:57:03 freenas kernel: (da5:mps1:0:1:0): READ(10). CDB: 28 00 04 60 40 d8 00 00 08 00 length 4096 SMID 141 terminated ioc 804b scsi 0 state 0 xfer 0
Dec 1 18:57:03 freenas kernel: (da5:mps1:0:1:0): READ(10). CDB: 28 00 d8 f3 4d f8 00 00 08 00 length 4096 SMID 75 terminated ioc 804b scsi 0 state 0 xfer 0
Dec 1 18:57:03 freenas kernel: (da5:mps1:0:1:0): READ(10). CDB: 28 00 33 5a b5 a0 00 00 08 00 length 4096 SMID 208 terminated ioc 804b scsi 0 state 0 xfer 0
Dec 1 18:57:03 freenas kernel: (da5:mps1:0:1:0): READ(10). CDB: 28 00 b2 7c 43 98 00 00 38 00 length 28672 SMID 302 terminated ioc 804b scsi 0 state 0 xfer 0
Dec 1 18:57:03 freenas kernel: (da5:mps1:0:1:0): READ(10). CDB: 28 00 b2 7c 42 88 00 00 e0 00
Dec 1 18:57:03 freenas kernel: (da5:mps1:0:1:0): CAM status: SCSI Status Error
Dec 1 18:57:03 freenas kernel: (da5:mps1:0:1:0): SCSI status: Check Condition
Dec 1 18:57:03 freenas kernel: (da5:mps1:0:1:0): SCSI sense: MEDIUM ERROR asc:11,0 (Unrecovered read error)
Dec 1 18:57:03 freenas kernel: (da5:mps1:0:1:0): Info: 0xb27c4308
Dec 1 18:57:03 freenas kernel: (da5:mps1:0:1:0): Error 5, Unretryable error
Dec 1 18:58:44 freenas smartd[3490]: Device: /dev/da5 [SAT], 8 Currently unreadable (pending) sectors
Dec 1 18:58:44 freenas smartd[3490]: Device: /dev/da5 [SAT], 8 Offline uncorrectable sectors
Dec 1 18:58:44 freenas smartd[3490]: Device: /dev/da5 [SAT], ATA error count increased from 0 to 1
Dec 1 18:58:47 freenas smartd[3490]: Device: /dev/da5 [SAT], 8 Currently unreadable (pending) sectors
Dec 1 18:58:47 freenas smartd[3490]: Device: /dev/da5 [SAT], 8 Offline uncorrectable sectors
Dec 1 18:58:47 freenas smartd[3490]: Device: /dev/da5 [SAT], ATA error count increased from 0 to 1
Dec 1 19:04:50 freenas kernel: (da5:mps1:0:1:0): READ(10). CDB: 28 00 d6 8b d8 c8 00 00 38 00 length 28672 SMID 565 terminated ioc 804b scsi 0 state 0 xfer 0
Dec 1 19:04:50 freenas kernel: (da5:mps1:0:1:0): READ(10). CDB: 28 00 b2 96 a3 70 00 00 40 00
Dec 1 19:04:50 freenas kernel: (da5:mps1:0:1:0): CAM status: SCSI Status Error
Dec 1 19:04:50 freenas kernel: (da5:mps1:0:1:0): SCSI status: Check Condition
Dec 1 19:04:50 freenas kernel: (da5:mps1:0:1:0): SCSI sense: MEDIUM ERROR asc:11,0 (Unrecovered read error)
Dec 1 19:04:50 freenas kernel: (da5:mps1:0:1:0): Info: 0xb296a378
Dec 1 19:04:50 freenas kernel: (da5:mps1:0:1:0): Error 5, Unretryable error
Dec 1 19:05:22 freenas kernel: (da5:mps1:0:1:0): READ(10). CDB: 28 00 b2 96 e4 d0 00 00 08 00 length 4096 SMID 995 terminated ioc 804b scsi 0 state 0 xfer 0
Dec 1 19:05:22 freenas kernel: (da5:mps1:0:1:0): READ(10). CDB: 28 00 b2 96 e4 b8 00 00 30 00
Dec 1 19:05:22 freenas kernel: (da5:mps1:0:1:0): CAM status: SCSI Status Error
Dec 1 19:05:22 freenas kernel: (da5:mps1:0:1:0): SCSI status: Check Condition
Dec 1 19:05:22 freenas kernel: (da5:mps1:0:1:0): SCSI sense: MEDIUM ERROR asc:11,0 (Unrecovered read error)
Dec 1 19:05:22 freenas kernel: (da5:mps1:0:1:0): Info: 0xb296e4d8
Dec 1 19:05:22 freenas kernel: (da5:mps1:0:1:0): Error 5, Unretryable error
Dec 1 19:28:45 freenas smartd[3490]: Device: /dev/da5 [SAT], 24 Currently unreadable (pending) sectors (changed +16)
Dec 1 19:28:45 freenas smartd[3490]: Device: /dev/da5 [SAT], 24 Offline uncorrectable sectors (changed +16)
Dec 1 19:28:46 freenas smartd[3490]: Device: /dev/da5 [SAT], ATA error count increased from 1 to 3
Dec 1 19:28:46 freenas smartd[3490]: Device: /dev/da5 [SAT], 24 Currently unreadable (pending) sectors (changed +16)
Dec 1 19:28:46 freenas smartd[3490]: Device: /dev/da5 [SAT], 24 Offline uncorrectable sectors (changed +16)
Dec 1 19:28:46 freenas smartd[3490]: Device: /dev/da5 [SAT], ATA error count increased from 1 to 3
Dec 1 19:47:02 freenas kernel: (da5:mps1:0:1:0): READ(10). CDB: 28 00 d3 48 96 58 00 00 e0 00 length 114688 SMID 627 terminated ioc 804b scsi 0 state 0 xfer 0
Dec 1 19:47:02 freenas kernel: (da5:mps1:0:1:0): READ(10). CDB: 28 00 d3 48 ab d0 00 00 20 00 length 16384 SMID 605 terminated ioc 804b scsi 0 state 0 xfer 0
Dec 1 19:47:02 freenas kernel: (da5:mps1:0:1:0): READ(10). CDB: 28 00 d3 48 ed b0 00 00 a0 00 length 81920 SMID 473 terminated ioc 804b scsi 0 state 0 xfer 0
Dec 1 19:47:02 freenas kernel: (da5:mps1:0:1:0): READ(10). CDB: 28 00 d3 48 97 38 00 00 e0 00 length 114688 SMID 136 terminated ioc 804b scsi 0 state 0 xfer 0
Dec 1 19:47:02 freenas kernel: (da5:mps1:0:1:0): READ(10). CDB: 28 00 d3 48 ab c8 00 00 08 00
Dec 1 19:47:02 freenas kernel: (da5:mps1:0:1:0): CAM status: SCSI Status Error
Dec 1 19:47:02 freenas kernel: (da5:mps1:0:1:0): SCSI status: Check Condition
Dec 1 19:47:02 freenas kernel: (da5:mps1:0:1:0): SCSI sense: MEDIUM ERROR asc:11,0 (Unrecovered read error)
Dec 1 19:47:02 freenas kernel: (da5:mps1:0:1:0): Info: 0xd348abc8
Dec 1 19:47:02 freenas kernel: (da5:mps1:0:1:0): Error 5, Unretryable error
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
You might be better off not trying to finish the scrub. Can you borrow a 2TB disk from a friend? :)

Do you know how to get S.M.A.R.T. data from the command line? (If yes, please use -a, not -x)
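For example, something like this (device name adjusted to yours):

smartctl -a /dev/da5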

Does your IBM M1015 have the correct firmware level? You only wrote that it has the proper firmware type.
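
On the FreeBSD side the firmware level usually shows up right in the boot messages; something like this should pull it out (the mps line below is from memory, exact format may vary):

dmesg | grep -i firmware
mps0: Firmware: 16.00.00.00, Driver: 16.00.00.00-fbsd

If LSI's sas2flash utility is installed, sas2flash -listall reports it as well.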
 

9C1 Newbee

Patron
Joined
Oct 9, 2012
Messages
485
Solarisguy is correct. I'd be less worried about green drives and more worried about da5.

By the way, my transfer rates are 60MB/sec. That doesn't even come close to maxing out my green drives.
 

freenas-supero

Contributor
Joined
Jul 27, 2014
Messages
128
OK I will stop the scrub.

Right now a pool status gives:

[root@freenas] ~# zpool status
  pool: zpool
 state: ONLINE
  scan: scrub in progress since Mon Dec 1 10:05:02 2014
        3.78T scanned out of 10.7T at 93.5M/s, 21h41m to go
        108K repaired, 35.18% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        zpool                                           ONLINE       0     0     0
          raidz3-0                                      ONLINE       0     0     0
            gptid/70057ca3-0fb8-11e4-9267-0030487f11ba  ONLINE       0     0     0
            gptid/7231ce76-0fb8-11e4-9267-0030487f11ba  ONLINE       0     0     0
            gptid/74010031-0fb8-11e4-9267-0030487f11ba  ONLINE       0     0     0
            gptid/74c45142-0fb8-11e4-9267-0030487f11ba  ONLINE       0     0     0  (repairing)
            gptid/7577d07e-0fb8-11e4-9267-0030487f11ba  ONLINE       0     0     0
            gptid/7799b692-0fb8-11e4-9267-0030487f11ba  ONLINE       0     0     0
            gptid/7979c1c6-0fb8-11e4-9267-0030487f11ba  ONLINE       0     0     0
            gptid/7ba4673f-0fb8-11e4-9267-0030487f11ba  ONLINE       0     0     0

errors: No known data errors


I will replace the faulty da5 with a brand-new 3TB Seagate I have lying around, and order a Hitachi 3TB as a spare...

I just hope the resilvering completes before the spare arrives from Newegg... Good thing I am using RAID-Z3!
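
(If I end up doing this from the CLI instead of the GUI, I gather the graceful sequence is roughly the following, assuming the "(repairing)" gptid above really is da5; the GUI's Volume Status → Replace is apparently the supported path:)

zpool offline zpool gptid/74c45142-0fb8-11e4-9267-0030487f11ba
(physically swap the disk, then)
zpool replace zpool gptid/74c45142-0fb8-11e4-9267-0030487f11ba da5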


smartctl -a on da5 gives:

[root@freenas] ~# smartctl -a /dev/da5
smartctl 6.2 2013-07-26 r3841 [FreeBSD 9.2-RELEASE-p9 amd64] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family: Seagate Barracuda 7200.14 (AF)
Device Model: ST2000DM001-1CH164
Serial Number: S1E1REY8
LU WWN Device Id: 5 000c50 060fb47fd
Firmware Version: CC24
User Capacity: 2,000,398,934,016 bytes [2.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 7200 rpm
Device is: In smartctl database [for details use: -P show]
ATA Version is: ATA8-ACS T13/1699-D revision 4
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Mon Dec 1 22:01:58 2014 EST

==> WARNING: A firmware update for this drive may be available,
see the following Seagate web pages:
http://knowledge.seagate.com/articles/en_US/FAQ/207931en
http://knowledge.seagate.com/articles/en_US/FAQ/223651en

SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 575) seconds.
Offline data collection
capabilities: (0x73) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
No Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 1) minutes.
Extended self-test routine
recommended polling time: ( 217) minutes.
Conveyance self-test routine
recommended polling time: ( 2) minutes.
SCT capabilities: (0x3085) SCT Status supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 117 093 006 Pre-fail Always - 123097192
3 Spin_Up_Time 0x0003 096 095 000 Pre-fail Always - 0
4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 67
5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0
7 Seek_Error_Rate 0x000f 081 060 030 Pre-fail Always - 132758628
9 Power_On_Hours 0x0032 091 091 000 Old_age Always - 8456
10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 67
183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 0
184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0
187 Reported_Uncorrect 0x0032 096 096 000 Old_age Always - 4
188 Command_Timeout 0x0032 100 100 000 Old_age Always - 0 0 0
189 High_Fly_Writes 0x003a 096 096 000 Old_age Always - 4
190 Airflow_Temperature_Cel 0x0022 074 061 045 Old_age Always - 26 (Min/Max 20/33)
191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 0
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 37
193 Load_Cycle_Count 0x0032 047 047 000 Old_age Always - 106473
194 Temperature_Celsius 0x0022 026 040 000 Old_age Always - 26 (0 17 0 0 0)
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 24
198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 24
199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0
240 Head_Flying_Hours 0x0000 100 253 000 Old_age Offline - 6793h+00m+27.197s
241 Total_LBAs_Written 0x0000 100 253 000 Old_age Offline - 44744340727
242 Total_LBAs_Read 0x0000 100 253 000 Old_age Offline - 76091246758

SMART Error Log Version: 1
ATA Error Count: 4
CR = Command Register [HEX]
FR = Features Register [HEX]
SC = Sector Count Register [HEX]
SN = Sector Number Register [HEX]
CL = Cylinder Low Register [HEX]
CH = Cylinder High Register [HEX]
DH = Device/Head Register [HEX]
DC = Device Command Register [HEX]
ER = Error register [HEX]
ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 4 occurred at disk power-on lifetime: 8454 hours (352 days + 6 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455

Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 e0 ff ff ff 4f 00 15d+00:23:01.209 READ FPDMA QUEUED
60 00 e0 ff ff ff 4f 00 15d+00:23:01.209 READ FPDMA QUEUED
60 00 10 ff ff ff 4f 00 15d+00:23:01.209 READ FPDMA QUEUED
60 00 08 ff ff ff 4f 00 15d+00:23:01.208 READ FPDMA QUEUED
60 00 08 ff ff ff 4f 00 15d+00:23:01.208 READ FPDMA QUEUED

Error 3 occurred at disk power-on lifetime: 8453 hours (352 days + 5 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455

Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 08 ff ff ff 4f 00 14d+23:41:20.808 READ FPDMA QUEUED
60 00 60 ff ff ff 4f 00 14d+23:41:20.807 READ FPDMA QUEUED
60 00 d8 ff ff ff 4f 00 14d+23:41:20.807 READ FPDMA QUEUED
60 00 30 ff ff ff 4f 00 14d+23:41:20.806 READ FPDMA QUEUED
60 00 30 ff ff ff 4f 00 14d+23:41:20.806 READ FPDMA QUEUED

Error 2 occurred at disk power-on lifetime: 8453 hours (352 days + 5 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455

Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 38 ff ff ff 4f 00 14d+23:40:48.290 READ FPDMA QUEUED
60 00 40 ff ff ff 4f 00 14d+23:40:48.290 READ FPDMA QUEUED
60 00 28 ff ff ff 4f 00 14d+23:40:48.287 READ FPDMA QUEUED
60 00 f0 ff ff ff 4f 00 14d+23:40:48.214 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 14d+23:40:48.213 READ FPDMA QUEUED

Error 1 occurred at disk power-on lifetime: 8453 hours (352 days + 5 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455

Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 08 ff ff ff 4f 00 14d+23:33:02.133 READ FPDMA QUEUED
60 00 08 d8 40 60 44 00 14d+23:33:02.122 READ FPDMA QUEUED
60 00 08 ff ff ff 4f 00 14d+23:33:02.092 READ FPDMA QUEUED
60 00 38 ff ff ff 4f 00 14d+23:33:02.081 READ FPDMA QUEUED
60 00 e0 ff ff ff 4f 00 14d+23:33:02.081 READ FPDMA QUEUED

SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 8440 -
# 2 Extended offline Completed: read failure 40% 8440 2791145480
# 3 Short offline Completed without error 00% 8368 -
# 4 Short offline Completed without error 00% 8296 -
# 5 Short offline Completed without error 00% 8224 -
# 6 Short offline Completed without error 00% 8152 -
# 7 Short offline Completed without error 00% 8080 -
# 8 Short offline Completed without error 00% 8008 -
# 9 Short offline Completed without error 00% 7936 -
#10 Short offline Completed without error 00% 7864 -
#11 Short offline Completed without error 00% 7792 -
#12 Short offline Completed without error 00% 7719 -
#13 Extended offline Completed without error 00% 7717 -
#14 Short offline Completed without error 00% 7695 -
#15 Short offline Completed without error 00% 7623 -
#16 Short offline Completed without error 00% 7551 -
#17 Short offline Completed without error 00% 7479 -
#18 Short offline Completed without error 00% 7407 -
#19 Short offline Completed without error 00% 7335 -
#20 Short offline Completed without error 00% 7263 -
#21 Short offline Completed without error 00% 7191 -

SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

I am trying to find out the firmware version of the M1015s.

EDIT: Is there a way to identify the faulty da5 disk without shutting down the FreeNAS server and pulling each drive out of the caddies one by one? The caddies have LED lights, so if making da5 do some work would turn on its LED, I'd know which one it is... Is that possible?
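
(Maybe something as crude as hammering da5 with reads and watching which activity LED stays lit would do it? E.g.:

dd if=/dev/da5 of=/dev/null bs=1m count=10000

...assuming the caddy LEDs show per-drive activity.)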
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
In View Disks the first column is Name and the second is Serial. Write down the serial number from there, get a good magnifying glass and some light.

You may need to shutdown your system...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You already provided the info on what drive it is...

[root@freenas] ~# smartctl -a /dev/da5
smartctl 6.2 2013-07-26 r3841 [FreeBSD 9.2-RELEASE-p9 amd64] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family: Seagate Barracuda 7200.14 (AF)
Device Model: ST2000DM001-1CH164
Serial Number: S1E1REY8
LU WWN Device Id: 5 000c50 060fb47fd
 

9C1 Newbee

Patron
Joined
Oct 9, 2012
Messages
485
Yeah you are gonna have to get a look at the serial numbers to find S1E1REY8. It will be one of the Seagates :p
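
Before pulling anything, a quick loop like this should map each device node to its serial (a sketch; assumes all eight disks show up as da0-da7):

for d in /dev/da[0-7]; do echo -n "$d: "; smartctl -i $d | grep Serial; done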
 

freenas-supero

Contributor
Joined
Jul 27, 2014
Messages
128
OK here's a good one for you guys..

I have searched the web for a clear procedure on how to replace the dead drive (gracefully, not just pulling the dead drive out and kicking a replacement in) but couldn't find any. Then I thought, why not look at the FreeNAS handbook?

Section 6.3.12 of the handbook states:

Before physically removing the failed device, go to Storage → Volumes → View Volumes → Volume Status and locate the failed disk.

Storage → Volumes → View Volumes brings me to the datasets, not the disks. I assume where I need to go is "View Disks"... But on that page (where I see all 8 drives), I do not see any "Offline" or "Replace" buttons. Even if I highlight the da5 drive, all I get are "Edit" and "Wipe" buttons.

So do I simply pull the defective drive out and swap it? Or do I add a new drive first, and will FreeNAS then understand that it is a replacement drive?

See the screenshot. I just don't want to wreck the pool; right now it's still "Online" and green... ;)

The screenshot doesn't show which FreeNAS version I use, but it is "FreeNAS-9.2.1.6-RELEASE-x64 (ddd1e39)".
 

Attachments

  • snapshot10.jpg (169.3 KB)

freenas-supero

Contributor
Joined
Jul 27, 2014
Messages
128
Hmm, OK for the errata, but that doesn't explain why I cannot follow the manual to the letter...

As per the screenshot I provided, you can see that I have no "Replace", "Offline" or other buttons, so clearly there is a discrepancy between the manual and the version of FreeNAS I run.
Tonight I will post a screenshot of the "View Volumes" page showing no such buttons as well... View Volumes shows my datasets, not the underlying disks.

EDIT: Based on my comments, would it be possible that the "Offline" or "Replace" buttons are not there because the disk has not failed yet? Technically it has not failed, as it is still responding to FreeNAS and only has bad sectors. In that case, can I pull the drive, then pop the replacement in and proceed with the instructions in the manual?

I'm surprised nobody has encountered this scenario yet!?
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
Have you actually clicked on one of the disks to select it? This mouse business is quite hard to get used to. But things happen when you actually select a given item.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
You have to click on a disk for the buttons to show up, IIRC.
 