marcevan · Patron · Joined: Dec 15, 2013 · Messages: 432
Running FreeNAS-9.1.1-RELEASE-x64 (a752d35) with 16GB RAM.
I had one pool called MEDIA: a stripe of three 2-disk mirrors, plus one 1TB spare. The mirrors were all 3TB drives.
So, like this (from gpart show):
[CODE]=> 34 5860533101 ada0 GPT (2.7T)
34 94 - free - (47k)
128 4194304 1 freebsd-swap (2.0G)
4194432 5856338696 2 freebsd-zfs (2.7T)
5860533128 7 - free - (3.5k)
=> 34 5860533101 ada1 GPT (2.7T)
34 94 - free - (47k)
128 4194304 1 freebsd-swap (2.0G)
4194432 5856338696 2 freebsd-zfs (2.7T)
5860533128 7 - free - (3.5k)
=> 34 5860533101 ada2 GPT (2.7T)
34 94 - free - (47k)
128 4194304 1 freebsd-swap (2.0G)
4194432 5856338696 2 freebsd-zfs (2.7T)
5860533128 7 - free - (3.5k)
=> 34 5860533101 ada3 GPT (2.7T)
34 94 - free - (47k)
128 4194304 1 freebsd-swap (2.0G)
4194432 5856338696 2 freebsd-zfs (2.7T)
5860533128 7 - free - (3.5k)
=> 34 5860533101 ada4 GPT (2.7T)
34 94 - free - (47k)
128 4194304 1 freebsd-swap (2.0G)
4194432 5856338696 2 freebsd-zfs (2.7T)
5860533128 7 - free - (3.5k)
=> 34 5860533101 ada5 GPT (2.7T)
34 94 - free - (47k)
128 4194304 1 freebsd-swap (2.0G)
4194432 5856338696 2 freebsd-zfs (2.7T)
5860533128 7 - free - (3.5k)
=> 63 15638465 da3 MBR (7.5G)
63 1930257 1 freebsd [active] (942M)
1930320 63 - free - (31k)
1930383 1930257 2 freebsd (942M)
3860640 3024 3 freebsd (1.5M)
3863664 41328 4 freebsd (20M)
3904992 11733536 - free - (5.6G)
=> 0 1930257 da3s1 BSD (942M)
0 16 - free - (8.0k)
16 1930241 1 !0 (942M)[/CODE]
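As a sanity check on the gpart numbers, the sector counts above can be converted to sizes by hand (gpart reports 512-byte sectors). A quick illustration, not FreeNAS tooling:

```python
# Convert gpart's 512-byte sector counts into GiB/TiB to confirm the
# reported partition sizes (values copied from the gpart show output).
SECTOR = 512

def sectors_to_gib(sectors: int) -> float:
    """Size of a partition, in GiB, from its sector count."""
    return sectors * SECTOR / 2**30

swap_gib = sectors_to_gib(4194304)        # freebsd-swap partition
zfs_gib = sectors_to_gib(5856338696)      # freebsd-zfs partition

print(f"swap: {swap_gib:.1f} GiB")        # swap: 2.0 GiB
print(f"zfs:  {zfs_gib / 1024:.2f} TiB")  # zfs:  2.73 TiB (gpart rounds to 2.7T)
```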
Then zdb shows the pool, which is encouraging:
[CODE]
MEDIA:
    version: 5000
    name: 'MEDIA'
    state: 0
    txg: 220477
    pool_guid: 9173378823382608771
    hostid: 1822453256
    hostname: 'freenas.basement.local'
    vdev_children: 4
    vdev_tree:
        type: 'root'
        id: 0
        guid: 9173378823382608771
        create_txg: 4
        children[0]:
            type: 'mirror'
            id: 0
            guid: 4578489925772993350
            metaslab_array: 34
            metaslab_shift: 34
            ashift: 12
            asize: 2998440558592
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 7034236002078255833
                path: '/dev/gptid/e5cda0d2-5ba7-11e3-a9f0-6805ca1adbd8'
                phys_path: '/dev/gptid/e5cda0d2-5ba7-11e3-a9f0-6805ca1adbd8'
                whole_disk: 1
                DTL: 193
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 8131083936012038549
                path: '/dev/gptid/e6533050-5ba7-11e3-a9f0-6805ca1adbd8'
                phys_path: '/dev/gptid/e6533050-5ba7-11e3-a9f0-6805ca1adbd8'
                whole_disk: 1
                DTL: 192
                create_txg: 4
        children[1]:
            type: 'mirror'
            id: 1
            guid: 7287074714158299330
            metaslab_array: 38
            metaslab_shift: 34
            ashift: 12
            asize: 2998440558592
            is_log: 0
            create_txg: 220
            children[0]:
                type: 'disk'
                id: 0
                guid: 18438491820313231629
                path: '/dev/gptid/0a02dfab-65aa-11e3-b5ba-6805ca1adbd8'
                phys_path: '/dev/gptid/0a02dfab-65aa-11e3-b5ba-6805ca1adbd8'
                whole_disk: 1
                DTL: 212
                create_txg: 220
            children[1]:
                type: 'disk'
                id: 1
                guid: 5740885520923749251
                path: '/dev/gptid/6175d11f-5baa-11e3-a9f0-6805ca1adbd8'
                phys_path: '/dev/gptid/6175d11f-5baa-11e3-a9f0-6805ca1adbd8'
                whole_disk: 1
                DTL: 195
                create_txg: 220
        children[2]:
            type: 'mirror'
            id: 2
            guid: 7549782153430211792
            metaslab_array: 149
            metaslab_shift: 34
            ashift: 12
            asize: 2998440558592
            is_log: 0
            create_txg: 45718
            children[0]:
                type: 'disk'
                id: 0
                guid: 15446892777379765105
                path: '/dev/gptid/4776dabb-65ab-11e3-b5ba-6805ca1adbd8'
                phys_path: '/dev/gptid/4776dabb-65ab-11e3-b5ba-6805ca1adbd8'
                whole_disk: 1
                DTL: 214
                create_txg: 45718
            children[1]:
                type: 'disk'
                id: 1
                guid: 8708169765859944444
                path: '/dev/gptid/d9fcbbaf-65a5-11e3-b5ba-6805ca1adbd8'
                phys_path: '/dev/gptid/d9fcbbaf-65a5-11e3-b5ba-6805ca1adbd8'
                whole_disk: 1
                DTL: 210
                create_txg: 45718
        children[3]:
            type: 'disk'
            id: 3
            guid: 9809961858194575065
            path: '/dev/gptid/ca8978cb-5db9-11e3-b5ba-6805ca1adbd8'
            phys_path: '/dev/gptid/ca8978cb-5db9-11e3-b5ba-6805ca1adbd8'
            whole_disk: 1
            metaslab_array: 151
            metaslab_shift: 34
            ashift: 12
            asize: 1998246641664
            is_log: 0
            DTL: 194
            create_txg: 45726
    features_for_read:
[/CODE]
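To make the vdev layout in that dump easier to see, here's a rough sketch (plain Python, just an illustration, not FreeNAS tooling) that lists the type of each top-level vdev. The embedded sample reproduces the structure of the dump above with indentation restored and values abbreviated; note zdb reports vdev_children: 4 — three mirrors plus a lone disk:

```python
# Sketch: list the top-level vdevs in a zdb vdev_tree dump.
# The sample mirrors the structure of the MEDIA pool's zdb output
# (indentation restored, values abbreviated).
SAMPLE = """\
vdev_tree:
    type: 'root'
    children[0]:
        type: 'mirror'
        children[0]:
            type: 'disk'
        children[1]:
            type: 'disk'
    children[1]:
        type: 'mirror'
        children[0]:
            type: 'disk'
        children[1]:
            type: 'disk'
    children[2]:
        type: 'mirror'
        children[0]:
            type: 'disk'
        children[1]:
            type: 'disk'
    children[3]:
        type: 'disk'
"""

def top_level_vdevs(dump: str) -> list[str]:
    """Return the type of each direct child of vdev_tree.

    A direct child is a children[N]: line at the shallowest
    children indent level; its type is the next type: line."""
    top_indent = None
    types = []
    want_type = False
    for line in dump.splitlines():
        stripped = line.strip()
        indent = len(line) - len(line.lstrip())
        if stripped.startswith("children["):
            if top_indent is None:
                top_indent = indent       # first children[...] sets the top level
            if indent == top_indent:
                want_type = True          # the next type: line names this vdev
            continue
        if want_type and stripped.startswith("type:"):
            types.append(stripped.split("'")[1])
            want_type = False
    return types

print(top_level_vdevs(SAMPLE))  # ['mirror', 'mirror', 'mirror', 'disk']
```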
When I discovered that one of the 3TB drives was on a SATA-1 port while all the others were on SATA-3 ports, I got suspicious about performance and decided to:
1. Remove the 1TB drive, since it was occupying a good SATA-3 port.
2. Unplug the 3TB drive from the SATA-1 port and move it to the freed-up SATA-3 port.
From there I knew it would resilver, and I saw it start.
Then the box was inadvertently restarted; children... ugh.
OK, I rebooted, and now the volume MEDIA shows as detached, so nobody can reach anything, but I cannot auto-import it:
freenas manage.py: [middleware.exceptions :38] [MiddlewareError : The volume "MEDIA" failed to import, for further details check pool status]
OK, here's the little tell from zpool status:
[CODE]
no pools available
[/CODE]
I suspect "available" is the key word, and I hope it means the pool will become available at some point.
The problem is I have no way to see how long that will take, or whether it will happen at all.
Any help given the above outputs would be greatly appreciated.
Oh, I'm also adding this, though I believe it's of doubtful use in my situation, because resilvering was in progress when the PC power-cycled:
zpool import
[CODE]
   pool: MEDIA
     id: 9173378823382608771
  state: UNAVAIL
 status: One or more devices were being resilvered.
 action: The pool cannot be imported due to damaged devices or data.
 config:

        MEDIA                                           UNAVAIL  missing device
          mirror-0                                      ONLINE
            gptid/e5cda0d2-5ba7-11e3-a9f0-6805ca1adbd8  ONLINE
            gptid/e6533050-5ba7-11e3-a9f0-6805ca1adbd8  ONLINE
          mirror-1                                      ONLINE
            gptid/0a02dfab-65aa-11e3-b5ba-6805ca1adbd8  ONLINE
            gptid/6175d11f-5baa-11e3-a9f0-6805ca1adbd8  ONLINE
          mirror-2                                      ONLINE
            gptid/4776dabb-65ab-11e3-b5ba-6805ca1adbd8  ONLINE
            gptid/d9fcbbaf-65a5-11e3-b5ba-6805ca1adbd8  ONLINE
[/CODE]
But for anyone wanting to help (please, please), I'm also showing:
camcontrol devlist
[CODE]
<WDC WD30EFRX-68EUZN0 80.00A80> at scbus5 target 0 lun 0 (ada0,pass0)
<WDC WD30EFRX-68EUZN0 80.00A80> at scbus6 target 0 lun 0 (ada1,pass1)
<WDC WD30EFRX-68EUZN0 80.00A80> at scbus7 target 0 lun 0 (ada2,pass2)
<WDC WD30EFRX-68EUZN0 80.00A80> at scbus8 target 0 lun 0 (pass3,ada3)
<WDC WD30EFRX-68EUZN0 80.00A80> at scbus9 target 0 lun 0 (ada4,pass4)
<WDC WD30EFRX-68EUZN0 80.00A80> at scbus10 target 0 lun 0 (ada5,pass5)
<Generic USB SD Reader 1.00> at scbus14 target 0 lun 0 (pass6,da0)
<Generic USB CF Reader 1.01> at scbus14 target 0 lun 1 (pass7,da1)
<Generic USB xD/SM Reader 1.02> at scbus14 target 0 lun 2 (pass8,da2)
<Generic USB MS Reader 1.03> at scbus14 target 0 lun 3 (pass10,da4)
<Kingston DataTraveler G3 PMAP> at scbus15 target 0 lun 0 (pass9,da3)
[/CODE]
As well as:
glabel status
[CODE]
Name Status Components
gptid/d9fcbbaf-65a5-11e3-b5ba-6805ca1adbd8 N/A ada0p2
gptid/e6533050-5ba7-11e3-a9f0-6805ca1adbd8 N/A ada1p2
gptid/6175d11f-5baa-11e3-a9f0-6805ca1adbd8 N/A ada2p2
gptid/e5cda0d2-5ba7-11e3-a9f0-6805ca1adbd8 N/A ada4p2
gptid/0a02dfab-65aa-11e3-b5ba-6805ca1adbd8 N/A ada5p2
ufs/FreeNASs3 N/A da3s3
ufs/FreeNASs4 N/A da3s4
ufs/FreeNASs1a N/A da3s1a
gptid/4765aca5-65ab-11e3-b5ba-6805ca1adbd8 N/A ada3p1
gptid/4776dabb-65ab-11e3-b5ba-6805ca1adbd8 N/A ada3p2
[/CODE]
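One more cross-check that may help: comparing the gptids referenced in the zdb vdev tree against those glabel reports as present. A quick sketch (data copied from the two outputs above):

```python
# Which gptid from the zdb vdev tree is absent from glabel status?
# Both sets are copied from the outputs posted above.
zdb_gptids = {
    "e5cda0d2-5ba7-11e3-a9f0-6805ca1adbd8",
    "e6533050-5ba7-11e3-a9f0-6805ca1adbd8",
    "0a02dfab-65aa-11e3-b5ba-6805ca1adbd8",
    "6175d11f-5baa-11e3-a9f0-6805ca1adbd8",
    "4776dabb-65ab-11e3-b5ba-6805ca1adbd8",
    "d9fcbbaf-65a5-11e3-b5ba-6805ca1adbd8",
    "ca8978cb-5db9-11e3-b5ba-6805ca1adbd8",
}
glabel_gptids = {
    "d9fcbbaf-65a5-11e3-b5ba-6805ca1adbd8",
    "e6533050-5ba7-11e3-a9f0-6805ca1adbd8",
    "6175d11f-5baa-11e3-a9f0-6805ca1adbd8",
    "e5cda0d2-5ba7-11e3-a9f0-6805ca1adbd8",
    "0a02dfab-65aa-11e3-b5ba-6805ca1adbd8",
    "4776dabb-65ab-11e3-b5ba-6805ca1adbd8",
}
missing = zdb_gptids - glabel_gptids
print(missing)  # {'ca8978cb-5db9-11e3-b5ba-6805ca1adbd8'}
```

The one gptid in the vdev tree that glabel does not see is the children[3] member from the zdb dump, i.e. the vdev member not currently visible to the system.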