Cannot import 'pool' message after running wdidle3 tool

Status: Not open for further replies.

KempelofDoom (Explorer) | Joined: Apr 11, 2014 | Messages: 72
I was told that I needed to run the wdidle3 tool to disable the head-parking feature of the 3TB WD Green drives I have. I had been seeing timeouts like these:

> ahcich8: Timeout on slot 22 port 0
> ahcich8: is 00000000 cs 00400000 ss 00400000 rs 00400000 tfd 50 serr 00000000 cmd 10009617
> (ada7:ahcich8:0:0:0): WRITE_FPDMA_QUEUED. ACB: 61 08 98 d2 8a 40 11 00 00 00 00 00
> (ada7:ahcich8:0:0:0): CAM status: Command timeout
> (ada7:ahcich8:0:0:0): Retrying command
> ahcich8: Timeout on slot 12 port 0
> ahcich8: is 00000000 cs 00001000 ss 00001000 rs 00001000 tfd 50 serr 00000000 cmd 10008c17
> (ada7:ahcich8:0:0:0): WRITE_FPDMA_QUEUED. ACB: 61 08 c0 e1 8a 40 11 00 00 00 00 00
> (ada7:ahcich8:0:0:0): CAM status: Command timeout
> (ada7:ahcich8:0:0:0): Retrying command

I shut the server down, pulled the drives, booted Ultimate Boot CD on a separate PC, and disabled the park feature on each drive, one at a time.
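
For reference, the sequence I ran on each drive was roughly this, from the DOS prompt on the boot CD (/R reports the current idle3 head-park timer, /D disables it, and a second /R confirms the change; those switches are from the tool's own help screen, so verify them on your version):

Code:
wdidle3 /R
wdidle3 /D
wdidle3 /R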

When I booted the server back up I received cannot import 'pool': I/O error, and the boot output said I needed to run zpool import -F pool and then run a scrub. The scrub is in progress now, fingers crossed. I was told to post the output of the smartctl command in a new thread.

[root@freenas] ~# zpool import -F pool
cannot mount '/pool': failed to create mountpoint
cannot mount '/pool/.system': failed to create mountpoint
cannot mount '/pool/.system/cores': failed to create mountpoint
cannot mount '/pool/.system/samba4': failed to create mountpoint
cannot mount '/pool/.system/syslog': failed to create mountpoint
cannot mount '/pool/jails': failed to create mountpoint
[root@freenas] ~#


[root@freenas] ~# smartctl -x /dev/ada7
smartctl 6.2 2013-07-26 r3841 [FreeBSD 9.2-RELEASE-p3 amd64] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Caviar Green (AF, SATA 6Gb/s)
Device Model:     WDC WD30EZRX-00DC0B0
LU WWN Device Id: 5 0014ee 2084aba3c
Firmware Version: 80.00A80
User Capacity:    3,000,592,982,016 bytes [3.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Wed Apr 30 17:59:42 2014 CDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is:   Unavailable
APM feature is:   Unavailable
Rd look-ahead is: Enabled
Write cache is:   Enabled
ATA Security is:  Disabled, NOT FROZEN [SEC1]
Wt Cache Reorder: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x84) Offline data collection activity
                                        was suspended by an interrupting command from host.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (40320) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        ( 404) minutes.
Conveyance self-test routine
recommended polling time:        (   5) minutes.
SCT capabilities:              (0x70b5) SCT Status supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
  1 Raw_Read_Error_Rate     POSR-K   200   200   051    -    0
  3 Spin_Up_Time            POS--K   181   181   021    -    5933
  4 Start_Stop_Count        -O--CK   100   100   000    -    37
  5 Reallocated_Sector_Ct   PO--CK   200   200   140    -    0
  7 Seek_Error_Rate         -OSR-K   200   200   000    -    0
  9 Power_On_Hours          -O--CK   089   089   000    -    8124
 10 Spin_Retry_Count        -O--CK   100   253   000    -    0
 11 Calibration_Retry_Count -O--CK   100   253   000    -    0
 12 Power_Cycle_Count       -O--CK   100   100   000    -    36
192 Power-Off_Retract_Count -O--CK   200   200   000    -    26
193 Load_Cycle_Count        -O--CK   001   001   000    -    842728
194 Temperature_Celsius     -O---K   114   104   000    -    36
196 Reallocated_Event_Count -O--CK   200   200   000    -    0
197 Current_Pending_Sector  -O--CK   200   200   000    -    0
198 Offline_Uncorrectable   ----CK   200   200   000    -    0
199 UDMA_CRC_Error_Count    -O--CK   200   200   000    -    0
200 Multi_Zone_Error_Rate   ---R--   200   200   000    -    0
                            ||||||_ K auto-keep
                            |||||__ C event count
                            ||||___ R error rate
                            |||____ S speed/performance
                            ||_____ O updated online
                            |______ P prefailure warning

General Purpose Log Directory Version 1
SMART           Log Directory Version 1 [multi-sector log support]
Address    Access  R/W   Size  Description
0x00       GPL,SL  R/O      1  Log Directory
0x01           SL  R/O      1  Summary SMART error log
0x02           SL  R/O      5  Comprehensive SMART error log
0x03       GPL     R/O      6  Ext. Comprehensive SMART error log
0x06           SL  R/O      1  SMART self-test log
0x07       GPL     R/O      1  Extended self-test log
0x09           SL  R/W      1  Selective self-test log
0x10       GPL     R/O      1  NCQ Command Error log
0x11       GPL     R/O      1  SATA Phy Event Counters
0x80-0x9f  GPL,SL  R/W     16  Host vendor specific log
0xa0-0xa7  GPL,SL  VS      16  Device vendor specific log
0xa8-0xb7  GPL,SL  VS       1  Device vendor specific log
0xbd       GPL,SL  VS       1  Device vendor specific log
0xc0       GPL,SL  VS       1  Device vendor specific log
0xc1       GPL     VS      93  Device vendor specific log
0xe0       GPL,SL  R/W      1  SCT Command/Status
0xe1       GPL,SL  R/W      1  SCT Data Transfer

SMART Extended Comprehensive Error Log Version: 1 (6 sectors)
No Errors Logged

SMART Extended Self-test Log Version: 1 (1 sectors)
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

SCT Status Version:                  3
SCT Version (vendor specific):       258 (0x0102)
SCT Support Level:                   1
Device State:                        Active (0)
Current Temperature:                    36 Celsius
Power Cycle Min/Max Temperature:     29/36 Celsius
Lifetime    Min/Max Temperature:     19/46 Celsius
Under/Over Temperature Limit Count:   0/0

SCT Temperature History Version:     2
Temperature Sampling Period:         1 minute
Temperature Logging Interval:        1 minute
Min/Max recommended Temperature:      0/60 Celsius
Min/Max Temperature Limit:          -41/85 Celsius
Temperature History Size (Index):    478 (286)

Index    Estimated Time   Temperature Celsius
 287    2014-04-30 10:02    37  ******************
 ...    ..( 84 skipped).    ..  ******************
 372    2014-04-30 11:27    37  ******************
 373    2014-04-30 11:28    38  *******************
 374    2014-04-30 11:29    37  ******************
 375    2014-04-30 11:30    38  *******************
 ...    ..(  3 skipped).    ..  *******************
 379    2014-04-30 11:34    38  *******************
 380    2014-04-30 11:35    37  ******************
 381    2014-04-30 11:36    38  *******************
 ...    ..(  3 skipped).    ..  *******************
 385    2014-04-30 11:40    38  *******************
 386    2014-04-30 11:41    37  ******************
 387    2014-04-30 11:42    38  *******************
 388    2014-04-30 11:43    38  *******************
 389    2014-04-30 11:44    38  *******************
 390    2014-04-30 11:45    37  ******************
 391    2014-04-30 11:46    38  *******************
 ...    ..( 16 skipped).    ..  *******************
 408    2014-04-30 12:03    38  *******************
 409    2014-04-30 12:04    37  ******************
 410    2014-04-30 12:05    38  *******************
 ...    ..( 24 skipped).    ..  *******************
 435    2014-04-30 12:30    38  *******************
 436    2014-04-30 12:31    37  ******************
 437    2014-04-30 12:32    38  *******************
 ...    ..(138 skipped).    ..  *******************
  98    2014-04-30 14:51    38  *******************
  99    2014-04-30 14:52     ?  -
 100    2014-04-30 14:53    29  **********
 101    2014-04-30 14:54     ?  -
 102    2014-04-30 14:55    28  *********
 ...    ..(  5 skipped).    ..  *********
 108    2014-04-30 15:01    28  *********
 109    2014-04-30 15:02    29  **********
 110    2014-04-30 15:03    29  **********
 111    2014-04-30 15:04    29  **********
 112    2014-04-30 15:05    30  ***********
 113    2014-04-30 15:06    30  ***********
 114    2014-04-30 15:07    30  ***********
 115    2014-04-30 15:08    31  ************
 116    2014-04-30 15:09     ?  -
 117    2014-04-30 15:10    28  *********
 118    2014-04-30 15:11     ?  -
 119    2014-04-30 15:12    28  *********
 ...    ..(  2 skipped).    ..  *********
 122    2014-04-30 15:15    28  *********
 123    2014-04-30 15:16     ?  -
 124    2014-04-30 15:17    29  **********
 ...    ..(  2 skipped).    ..  **********
 127    2014-04-30 15:20    29  **********
 128    2014-04-30 15:21    30  ***********
 ...    ..(  2 skipped).    ..  ***********
 131    2014-04-30 15:24    30  ***********
 132    2014-04-30 15:25    31  ************
 ...    ..(  4 skipped).    ..  ************
 137    2014-04-30 15:30    31  ************
 138    2014-04-30 15:31    32  *************
 ...    ..(  2 skipped).    ..  *************
 141    2014-04-30 15:34    32  *************
 142    2014-04-30 15:35    33  **************
 ...    ..(  3 skipped).    ..  **************
 146    2014-04-30 15:39    33  **************
 147    2014-04-30 15:40    34  ***************
 ...    ..(  3 skipped).    ..  ***************
 151    2014-04-30 15:44    34  ***************
 152    2014-04-30 15:45    35  ****************
 ...    ..(  2 skipped).    ..  ****************
 155    2014-04-30 15:48    35  ****************
 156    2014-04-30 15:49    36  *****************
 ...    ..(  8 skipped).    ..  *****************
 165    2014-04-30 15:58    36  *****************
 166    2014-04-30 15:59    35  ****************
 167    2014-04-30 16:00    36  *****************
 168    2014-04-30 16:01    36  *****************
 169    2014-04-30 16:02    35  ****************
 ...    ..( 19 skipped).    ..  ****************
 189    2014-04-30 16:22    35  ****************
 190    2014-04-30 16:23    36  *****************
 ...    ..( 29 skipped).    ..  *****************
 220    2014-04-30 16:53    36  *****************
 221    2014-04-30 16:54    37  ******************
 ...    ..( 64 skipped).    ..  ******************
 286    2014-04-30 17:59    37  ******************

SCT Error Recovery Control command not supported

Device Statistics (GP Log 0x04) not supported

SATA Phy Event Counters (GP Log 0x11)
ID      Size     Value  Description
0x0001  2            0  Command failed due to ICRC error
0x0002  2            0  R_ERR response for data FIS
0x0003  2            0  R_ERR response for device-to-host data FIS
0x0004  2            0  R_ERR response for host-to-device data FIS
0x0005  2            0  R_ERR response for non-data FIS
0x0006  2            0  R_ERR response for device-to-host non-data FIS
0x0007  2            0  R_ERR response for host-to-device non-data FIS
0x0008  2            0  Device-to-host non-data FIS retries
0x0009  2            2  Transition from drive PhyRdy to drive PhyNRdy
0x000a  2            1  Device-to-host register FISes sent due to a COMRESET
0x000b  2            0  CRC errors within host-to-device FIS
0x000f  2            0  R_ERR response for host-to-device data FIS, CRC
0x0012  2            0  R_ERR response for host-to-device non-data FIS, CRC
0x8000  4         2115  Vendor specific

[root@freenas] ~#
 

joeschmuck (Old Man, Moderator) | Joined: May 28, 2011 | Messages: 10,996
> I shut the server down, pulled the drives, booted Ultimate Boot CD on a separate PC, and disabled the park feature on each drive, one at a time.
>
> When I booted the server back up I received cannot import 'pool': I/O error, and the boot output said I needed to run zpool import -F pool and then run a scrub. The scrub is in progress now, fingers crossed. I was told to post the output of the smartctl command in a new thread.

> ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
>   5 Reallocated_Sector_Ct   PO--CK   200   200   140    -    0
>   9 Power_On_Hours          -O--CK   089   089   000    -    8124
> 193 Load_Cycle_Count        -O--CK   001   001   000    -    842728
> 196 Reallocated_Event_Count -O--CK   200   200   000    -    0
> 197 Current_Pending_Sector  -O--CK   200   200   000    -    0

So the smartctl data doesn't show any errors. What it does show is that for almost a year your heads have been loading/unloading dramatically often. It's a good thing you ran the utility. In your posting (parts I didn't quote) I noticed that you haven't run any SMART tests. Did you intentionally not set up FreeNAS to run them? I highly recommend setting up a nightly short SMART test for all your drives and a weekly long SMART test. This can be done from the GUI.

After your scrub is complete, start a long test on all your drives. It will take 404 minutes to complete (almost 7 hours). Your FreeNAS will still be functional, but it will be slow, and the more you use FreeNAS while testing, the longer the test will take. You can start the SMART test via the GUI or SSH. The GUI is the safer option, but SSH is super easy: type 'smartctl -t long /dev/ada0', and to check how the testing went, run 'smartctl -a /dev/ada0' and look for the SMART self-test log. You could run the short version of the test if you want a quick check; it takes 2 minutes to run.
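
If you'd rather kick them all off at once over SSH, something like this works. I'm assuming your data drives are ada0 through ada7, so adjust the glob to match your system; also note FreeNAS's root shell is csh, so drop into sh first:

Code:
sh
for d in /dev/ada?; do
    smartctl -t long $d        # the test runs inside each drive, so they all run in parallel
done
# roughly 7 hours later, read back each drive's self-test log
for d in /dev/ada?; do
    echo "== $d =="
    smartctl -l selftest $d
done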

Also, since you indicated in a previous thread that your system has crashed several times, I recommend that you list your system hardware specs, run MemTest on your RAM, and test your CPU as well. You could have a failure anywhere in your system causing the crashing. It could even be the power supply (that happens more often than you'd think).

When looking at the smartctl data, IDs 5, 196, and 197 are the indicators of a failing hard drive. If the raw values of those start showing anything other than zero, you are likely looking at a failure soon. Look at all your drives, and if you see something questionable, post those results. Use 'smartctl -a /dev/ada0' to look at them.
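
A quick way to eyeball those three attributes across every drive at once (same ada0 through ada7 assumption as above):

Code:
for d in /dev/ada?; do
    echo "== $d =="
    smartctl -a $d | egrep 'Reallocated_Sector_Ct|Reallocated_Event_Count|Current_Pending_Sector'
done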
 

KempelofDoom (Explorer) | Joined: Apr 11, 2014 | Messages: 72
I was under the impression that I was running SMART tests until I joined this site. The current hardware I have was deemed insufficient since it doesn't support ECC RAM, so I just purchased the following and plan on replacing everything inside and importing the pool so everything runs on proper hardware.

List of hardware that I'm going to switch over to
Mobo - http://www.newegg.com/Product/Product.aspx?Item=N82E16813182341
Proc - http://www.newegg.com/Product/Product.aspx?Item=N82E16819116935
128GB of memory that matches what supermicro recommended
Drive controller - http://www.newegg.com/Product/Product.aspx?Item=N82E16816118112
Drive expander - http://www.newegg.com/Product/Product.aspx?Item=N82E16816117207

I'm using this case with all 24 bays filled: 16 x 3TB drives + 8 x 4TB drives, divided up into 3 raidz1 vdevs.

Current hardware
Mobo - http://www.newegg.com/Product/Product.aspx?Item=N82E16813131881
Proc - http://www.newegg.com/Product/Product.aspx?Item=N82E16819113281
32GB of memory that matches what asus recommends
Power supply - http://www.newegg.com/Product/Product.aspx?Item=N82E16817182082
7 drives are run from the mobo
1 run from http://www.newegg.com/Product/Product.aspx?Item=N82E16816115097
16 run on an Areca ARC-1260
intel network card

I came to this forum to see why I was experiencing drops when transferring data to the server, and found out that I was rockin' amateur hardware, so I've been getting things ready for the swap once the rest of the memory arrives. I also learned that running raidz1 was stupid.
 

KempelofDoom (Explorer) | Joined: Apr 11, 2014 | Messages: 72
Should I be worried about this?

[root@freenas] ~# zpool import -F pool
cannot mount '/pool': failed to create mountpoint
cannot mount '/pool/.system': failed to create mountpoint
cannot mount '/pool/.system/cores': failed to create mountpoint
cannot mount '/pool/.system/samba4': failed to create mountpoint
cannot mount '/pool/.system/syslog': failed to create mountpoint
cannot mount '/pool/jails': failed to create mountpoint
[root@freenas] ~#

It sees the pool now, and that's what it's running the scrub on, but it's not mounted anywhere. Is that ok?
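
If there's a quick way to check the mount state, I'm guessing it's something like this? It only reads dataset properties, so hopefully it's safe to run during the scrub:

Code:
zfs list -r -o name,mounted,mountpoint pool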
 

joeschmuck (Old Man, Moderator) | Joined: May 28, 2011 | Messages: 10,996
I don't think it's a good sign at all. To be honest, someone like @cyberjock should look at this and offer his advice. He may tell you your pool is history, or that you can save it. Using -F to force recovery on import has risks; you may have found the downside, since it didn't mount the pool. What does 'zpool status' return? Please put that in either code brackets or pre brackets to retain the format.
 

warri (Guru) | Joined: Jun 6, 2011 | Messages: 1,193
I was one of the people telling him to use wdidle3, but only with respect to the extremely high load cycle count. At the same time, I had already suggested on Monday that he run SMART tests.

I can't really imagine that wdidle is the cause of this. Most likely the drive(s) had another pre-existing condition, maybe whatever caused the timeout errors in the first place. As joeschmuck said, the output of zpool status could help. Also check your other drives for SMART issues.

Next time you post output from the shell, you can use [code]-tags to make it more readable in the forums.
 

cyberjock (Inactive Account) | Joined: Mar 25, 2012 | Messages: 19,525
I heard my name.. so here I am.

> I can't really imagine that wdidle is the cause of this. Most likely the drive(s) had another pre-existing condition, maybe whatever caused the timeout errors in the first place.

You're both right and wrong. The issue is that as the load cycle count increases, you *are* wearing out your hardware. Once that hardware wears out to a certain point, it will begin failing and generally just be very ugly to work with. There's no "this failure is from load cycles", so any failure with a high load cycle count can be attributable to the load cycles. The keyword is attributable, because there is no guarantee of anything except "the drive is broken and needs to be replaced/RMAd".

As an engineer I take particular care not to exceed design limits and such. Given that these drives are rated for 350k-500k load cycles and this drive has over 800k, that's not a good sign. So this may have been self-induced by not fixing the setting, or it may just be a coincidence. We will never know.

But, what I think we can *all* agree on is that going over the design limits definitely isn't a particularly smart idea, especially if you can control it. ;)

As for your pool, you are in deep crap. There's not going to be any "easy" fix for this. I'd strongly recommend you not use the server further until you decide if you want to do recovery or just restore from backup. I offer data recovery services; if you want to discuss that, send me a PM or message me in IRC. We can talk pricing and whatnot. I won't need you to mail me the disks, but I will need remote access to the FreeNAS box via SSH.

Just let me know if you are interested.
 

KempelofDoom (Explorer) | Joined: Apr 11, 2014 | Messages: 72
Code:
[root@freenas] ~# zpool status
  pool: pool
state: ONLINE
  scan: scrub in progress since Wed Apr 30 17:30:51 2014
        17.3T scanned out of 45.1T at 440M/s, 18h20m to go
        0 repaired, 38.50% done
config:
 
        NAME                                            STATE    READ WRITE CKSUM
        pool                                            ONLINE      0    0    0
          raidz1-0                                      ONLINE      0    0    0
            gptid/c59ba78d-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
            gptid/c62bee4e-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
            gptid/c6b5eefb-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
            gptid/c74caf33-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
            gptid/c7ea809f-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
            gptid/c8e3aa85-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
            gptid/c9d36e9a-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
            gptid/caccde76-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
          raidz1-1                                      ONLINE      0    0    0
            gptid/0cee6a0a-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
            gptid/0d0aec4a-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
            gptid/0d2a1c99-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
            gptid/0d473040-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
            gptid/0d643217-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
            gptid/0d804c20-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
            gptid/0d9cad2e-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
            gptid/0db889c9-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
          raidz1-2                                      ONLINE      0    0    0
            gptid/872574b3-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
            gptid/8743b24c-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
            gptid/87662313-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
            gptid/8783c9e7-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
            gptid/87a19df7-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
            gptid/87bf7c23-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
            gptid/87ddaaf0-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
            gptid/87fcffd6-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
 
errors: No known data errors
[root@freenas] ~#


That's where it's at now.
 

warri (Guru) | Joined: Jun 6, 2011 | Messages: 1,193
This output looks good and healthy for now.

The issue that it's not mounted correctly may be because you did not specify the mount path: the root file system is read-only, so the path /pool could not be created. Try importing with zpool import -R /mnt -f pool. This will create the necessary mount directories in /mnt, the same way the FreeNAS GUI does it.

I'd wait for the scrub to finish first. Then export the pool with zpool export pool and try the import command with -R, or just use the GUI to auto-import the volume.
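
In other words, once the scrub is done, the manual route would look roughly like this (the GUI auto-import does the equivalent for you):

Code:
zpool export pool              # cleanly detach the pool first
zpool import -R /mnt pool      # reimport with /mnt as the altroot, the way the GUI does
zfs list                       # datasets should now show mountpoints under /mnt/pool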
 

KempelofDoom (Explorer) | Joined: Apr 11, 2014 | Messages: 72

cyberjock (Inactive Account) | Joined: Mar 25, 2012 | Messages: 19,525
You should try mounting the pool from the WebGUI. Unless that fails, you shouldn't be making a habit of mounting from the CLI. ;)

Oh god, KempelofDoom. That idea would be a nightmare if you had 20+TB of data: swapping cartridges where you will probably only be able to fill 3 or 4 in a day due to transfer rates, etc. I wouldn't want that job. You'll spend a week just trying to get a full backup done. Ugh!
 

warri (Guru) | Joined: Jun 6, 2011 | Messages: 1,193
Restarting the server or using the GUI is probably your safest bet. If that doesn't work out as expected, only then try to manually import the pool for further diagnosis.
If you ever recreate the pool you should probably use RAID-Z2 or Z3 vdevs. RAID-Z1 on eight drives is quite risky.
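
The difference is just the vdev type when the pool is built. On FreeNAS you'd do it through the GUI volume manager rather than by hand, but in plain zpool terms (placeholder disk names):

Code:
# an 8-disk RAID-Z2 vdev keeps your data through any two simultaneous drive failures
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7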
 

KempelofDoom (Explorer) | Joined: Apr 11, 2014 | Messages: 72
Tapes would be slow, but I wouldn't use compression, so it should transfer relatively quickly. Much faster than me re-encoding everything and torrenting the rest. Some of the torrents aren't available anymore, so if I recover from this I'm doing a disk copy of those. Not to mention all the work of renaming files to conform with Plex rules. I do have a buddy in another state who is my backup, and I'm his, so I could get a large chunk of it that way. We ship a 16TB array back and forth to stay in sync, since he uses a Synology instead.

It was only after joining this forum that I realized my vdev setup was flawed by using raidz1. None of that advice was in the manual, which I read before I started anything. I also played with some junk disks that I had, to run scenarios before going full bore with the real data. What I learned then and from the manual seems trivial compared to the presentation that cyberjock made. Live and learn.
 

cyberjock (Inactive Account) | Joined: Mar 25, 2012 | Messages: 19,525
Actually, compression isn't the problem; the problem is getting data from your server to the tape drive. Do you plan to do 10Gb between the devices? If not, you are never going to get more than about 125MB/sec, since that's the maximum throughput of Gigabit (1000Mb/s divided by 8 bits per byte). And FreeNAS doesn't support any tape drives at all...

To sum it up, you're looking at something that involves two machines (FreeNAS + a desktop) connected via a LAN of some kind, doing the backup to tape, where you'll have to change out a tape every 4+ hours or so, then do the next tape until it's done.

The reality of it is, you could probably build a whole second box to do ZFS replication to, have it handle everything without messing around with tapes, have the convenience of instant access to the backups whenever needed, and it's probably far more reliable since you'd be using ZFS. ;)
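
For scale, a bare-bones replication cycle between two boxes is just snapshots plus zfs send/receive. The hostname and destination pool below are made up, and FreeNAS wraps this same mechanism in its Replication Tasks GUI:

Code:
zfs snapshot -r pool@backup1                                    # recursive snapshot of the whole pool
zfs send -R pool@backup1 | ssh backupbox zfs receive -F tank    # full initial copy
# later runs only ship the changes since the previous snapshot
zfs snapshot -r pool@backup2
zfs send -R -i pool@backup1 pool@backup2 | ssh backupbox zfs receive -F tank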

Big picture: we've seen people propose the tape idea before, and for big pools, tapes are not economical, not logistically sound, not fast, or all of the above.
 

KempelofDoom (Explorer) | Joined: Apr 11, 2014 | Messages: 72
You make some great points. Those were the driving reasons for going in with someone else and mirroring each other's data, so we'd be each other's backup. I can access his data, but trying to transfer TBs of data over a WAN connection would be brutal, hence the 16TB array we ship back and forth. I did entertain the idea of building another FreeNAS box to mirror my own, but that means waiting for next year's tax return, since I'm tapped out after getting all the new hardware to rock ZFS properly.

Has anyone here tried Amazon's Glacier storage option? And is the hardware I have listed for the rebuild appropriate? I made a point of getting a mobo that will support up to 512GB of RAM once those sticks are sold. The processor was looked down on in a few other threads, but it looks like such a decent proc and a banger on cost.
 

KempelofDoom (Explorer) | Joined: Apr 11, 2014 | Messages: 72
Code:
[root@freenas] ~# zpool status
  pool: pool
state: ONLINE
  scan: scrub in progress since Wed Apr 30 17:30:51 2014
        21.0T scanned out of 45.1T at 419M/s, 16h44m to go
        0 repaired, 46.55% done
config:
 
        NAME                                            STATE    READ WRITE CKSUM
        pool                                            ONLINE      0    0    0
          raidz1-0                                      ONLINE      0    0    0
            gptid/c59ba78d-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
            gptid/c62bee4e-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
            gptid/c6b5eefb-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
            gptid/c74caf33-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
            gptid/c7ea809f-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
            gptid/c8e3aa85-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
            gptid/c9d36e9a-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
            gptid/caccde76-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
          raidz1-1                                      ONLINE      0    0    0
            gptid/0cee6a0a-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
            gptid/0d0aec4a-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
            gptid/0d2a1c99-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
            gptid/0d473040-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
            gptid/0d643217-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
            gptid/0d804c20-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
            gptid/0d9cad2e-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
            gptid/0db889c9-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
          raidz1-2                                      ONLINE      0    0    0
            gptid/872574b3-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
            gptid/8743b24c-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
            gptid/87662313-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
            gptid/8783c9e7-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
            gptid/87a19df7-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
            gptid/87bf7c23-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
            gptid/87ddaaf0-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
            gptid/87fcffd6-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
 
errors: No known data errors
[root@freenas] ~#


I almost want to stay up till this finishes. It's like sitting in the waiting room while someone important to you is getting surgery.
 

cyberjock (Inactive Account) | Joined: Mar 25, 2012 | Messages: 19,525
My guess is everything is going to come back clean and the pool will mount. warri probably had it right when he noticed you weren't even trying to mount it from the CLI correctly.
 

KempelofDoom (Explorer) | Joined: Apr 11, 2014 | Messages: 72
Is it a good idea to use the SATA ports on the motherboard to control the disks, or should everything be run from the same controller card? How about combining the use of both?
 

cyberjock (Inactive Account) | Joined: Mar 25, 2012 | Messages: 19,525
It doesn't matter at all unless you have bottlenecks you are trying to avoid.
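
If you're curious which controller each disk actually landed on, camcontrol will show the attachment points:

Code:
camcontrol devlist -v    # every disk CAM sees, grouped by the bus/controller it hangs off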
 

KempelofDoom (Explorer) | Joined: Apr 11, 2014 | Messages: 72
Code:
[root@freenas] ~# zpool status
  pool: pool
state: ONLINE
  scan: scrub repaired 0 in 38h7m with 0 errors on Fri May  2 07:38:25 2014
config:
 
        NAME                                            STATE    READ WRITE CKSUM
        pool                                            ONLINE      0    0    0
          raidz1-0                                      ONLINE      0    0    0
            gptid/c59ba78d-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
            gptid/c62bee4e-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
            gptid/c6b5eefb-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
            gptid/c74caf33-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
            gptid/c7ea809f-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
            gptid/c8e3aa85-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
            gptid/c9d36e9a-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
            gptid/caccde76-c484-11e2-b6fd-60a44ca93e6b  ONLINE      0    0    0
          raidz1-1                                      ONLINE      0    0    0
            gptid/0cee6a0a-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
            gptid/0d0aec4a-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
            gptid/0d2a1c99-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
            gptid/0d473040-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
            gptid/0d643217-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
            gptid/0d804c20-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
            gptid/0d9cad2e-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
            gptid/0db889c9-f64d-11e2-9680-60a44ca93e6b  ONLINE      0    0    0
          raidz1-2                                      ONLINE      0    0    0
            gptid/872574b3-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
            gptid/8743b24c-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
            gptid/87662313-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
            gptid/8783c9e7-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
            gptid/87a19df7-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
            gptid/87bf7c23-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
            gptid/87ddaaf0-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
            gptid/87fcffd6-bdca-11e3-b83b-60a44ca93e6b  ONLINE      0    0    0
 
errors: No known data errors
[root@freenas] ~#


Have to wait till the workday is done before I can go home and restart. Fingers crossed.
 