Need help with getting pool online after making mistakes


Fuzzball
The short version:
I made some mistakes. I thought I had a good backup of everything important, but some of the files I need are missing from those backups. I have a raidz1 with three 3TB drives. Two of the drives should still be functional, but one of them is listed as unavailable. I'd like to bring the pool online, at least read-only, using the two /dev/gpt/tank-* drives shown below so I can grab the data I really need. Is there a way to force-mount it?
Code:
$ zpool status tank
  pool: tank
 state: FAULTED
status: One or more devices is currently being resilvered.  The pool will
		continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Dec 23 17:23:11 2017
		0 scanned at 0/s, 0 issued at 0/s, 7.30T total
		0 resilvered, 0.00% done, no estimated completion time
config:

		NAME					 STATE	 READ WRITE CKSUM
		tank					 FAULTED	  3	 0	 0
		  raidz1-0			   DEGRADED	43	33	 0
			7893811221329945986  UNAVAIL	  0	 0	 0  was /dev/gpt/tank-0-K790
			gpt/tank-1-K7EW	  ONLINE	   0	 0	 0
			5490429709276012074  UNAVAIL	  0	 0	 0  was /dev/gptid/d8d8a354-e82f-11e7-ace6-60eb699bdba8
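
What I was hoping to try, unless someone tells me it's a bad idea, is to export the faulted pool and re-import it forced and read-only, searching /dev/gpt for the labelled partitions. Something along these lines; I have not run it yet and I'm not sure the -f plus readonly=on combination is the right call here:
Code:
# export the faulted pool so it can be re-imported cleanly
$ zpool export tank
# forced, read-only import, searching /dev/gpt for the labels,
# mounted under an alternate root so nothing gets written back
$ zpool import -f -o readonly=on -R /mnt -d /dev/gpt tank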


The long story:
I have been running FreeBSD 10 and rolling my own NAS on an HP MicroServer N40L with three Seagate 3TB drives in a raidz1 and a fourth Seagate 3TB drive formatted as UFS. I'm at the physical limit of what I can jam in there and wanted to use FreeNAS for the easy web GUI and updates, so I got myself an eBay special Dell C2100 with the backplane that has three SFF-8484 connectors. One SFF-8484 connects via a breakout cable to the four onboard SATA ports, and for the other two I bought an SFF-8484 to SFF-8087 cable to connect to an IBM M1015 flashed to IT mode. I think my problems lie with the backplane and/or the cables, since I've used the M1015 before. I also got my hands on several used HGST 3TB drives that I intended to set up as a raidz2 and migrate my data onto, but they haven't been involved in any of this, so I won't refer to them again. Mistake 1 was not testing the system thoroughly. Mistake 2 was not immediately moving the drives back to the old system when I started to see issues.

I set up FreeNAS 11.1 on the C2100. Two of the drives in the pool went on the motherboard connections, the third drive in the pool went on the M1015, the UFS drive went on the motherboard (unmounted), and finally a spare Seagate 3TB went on the motherboard. The tank pool mounted fine with everything online. I did not upgrade the zpool version. I made a change to one of the unimportant files in the pool and later saw the pool was in a degraded state, with the drive on the M1015 being the culprit. I went ahead and told FreeNAS to replace that drive with the spare on the motherboard (Mistake 3). A day passed and only a few KB had resilvered, with no active progress. I did a few reboots, but nothing moved along. I then moved all the drives over to the M1015 ports with no change (Mistake 4). Then I saw one of the two good drives marked as unavailable, and my heart sank knowing I had screwed up.
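
I admit I don't fully know what the GUI replace actually did under the hood. If it helps with diagnosing, I can post the output of the following; I'm assuming zpool history will show the exact replace command FreeNAS issued:
Code:
# show every command that has been run against the pool, including the GUI's replace
$ zpool history tank
# watch the resilver and the per-device error counters
$ zpool status -v tank
# recent kernel messages, in case the controller or backplane is throwing errors
$ dmesg | tail -n 50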

Here's where things stand:
Code:
$ ls /dev/gpt
freebsd-boot	homefs		  rootfs		  swapfs		  tank-0-K790	 tank-1-K7EW	 tmpfs		   usrfs		   varfs
$ zdb
freenas-boot:
	version: 5000
	name: 'freenas-boot'
	state: 0
	txg: 79690
	pool_guid: 12110775019371991801
	hostname: ''
	com.delphix:has_per_vdev_zaps
	vdev_children: 1
	vdev_tree:
		type: 'root'
		id: 0
		guid: 12110775019371991801
		create_txg: 4
		children[0]:
			type: 'disk'
			id: 0
			guid: 11021728861805289996
			path: '/dev/da0p2'
			whole_disk: 1
			metaslab_array: 33
			metaslab_shift: 28
			ashift: 9
			asize: 32010141696
			is_log: 0
			DTL: 110
			create_txg: 4
			com.delphix:vdev_zap_leaf: 31
			com.delphix:vdev_zap_top: 32
	features_for_read:
tank:
	version: 5000
	name: 'tank'
	state: 0
	txg: 30131799
	pool_guid: 7103296614445671964
	hostid: 3691257932
	hostname: 'freenas'
	com.delphix:has_per_vdev_zaps
	vdev_children: 1
	vdev_tree:
		type: 'root'
		id: 0
		guid: 7103296614445671964
		children[0]:
			type: 'raidz'
			id: 0
			guid: 17979348125048439281
			nparity: 1
			metaslab_array: 30
			metaslab_shift: 36
			ashift: 12
			asize: 8995321675776
			is_log: 0
			create_txg: 4
			com.delphix:vdev_zap_top: 160
			children[0]:
				type: 'disk'
				id: 0
				guid: 7893811221329945986
				path: '/dev/gpt/tank-0-K790'
				phys_path: '/dev/gpt/tank-0-K790'
				whole_disk: 1
				DTL: 154
				create_txg: 4
				com.delphix:vdev_zap_leaf: 161
			children[1]:
				type: 'disk'
				id: 1
				guid: 5811819241200281728
				path: '/dev/gpt/tank-1-K7EW'
				phys_path: '/dev/gpt/tank-1-K7EW'
				whole_disk: 1
				DTL: 156
				create_txg: 4
				com.delphix:vdev_zap_leaf: 163
			children[2]:
				type: 'disk'
				id: 2
				guid: 5490429709276012074
				path: '/dev/gptid/d8d8a354-e82f-11e7-ace6-60eb699bdba8'
				whole_disk: 1
				DTL: 229
				create_txg: 4
				com.delphix:vdev_zap_leaf: 227
				resilver_txg: 30131394
	features_for_read:
$ zpool status tank
  pool: tank
 state: FAULTED
status: One or more devices is currently being resilvered.  The pool will
		continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Dec 23 17:23:11 2017
		0 scanned at 0/s, 0 issued at 0/s, 7.30T total
		0 resilvered, 0.00% done, no estimated completion time
config:

		NAME					 STATE	 READ WRITE CKSUM
		tank					 FAULTED	  3	 0	 0
		  raidz1-0			   DEGRADED	43	33	 0
			7893811221329945986  UNAVAIL	  0	 0	 0  was /dev/gpt/tank-0-K790
			gpt/tank-1-K7EW	  ONLINE	   0	 0	 0
			5490429709276012074  UNAVAIL	  0	 0	 0  was /dev/gptid/d8d8a354-e82f-11e7-ace6-60eb699bdba8
$


Both the FreeBSD 10 and FreeNAS 11.1 installs are on separate USB drives, so I can move them around easily. I really appreciate any help.
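
One more thing I was planning to try, since both tank-0-K790 and tank-1-K7EW still show up under /dev/gpt: dumping the ZFS labels on those two partitions to confirm the drives I think are good still have intact metadata, and seeing what an import scan of /dev/gpt reports. As far as I know these are read-only operations, but please correct me if that's wrong:
Code:
# dump the ZFS labels from the two partitions I believe are still good
$ zdb -l /dev/gpt/tank-0-K790
$ zdb -l /dev/gpt/tank-1-K7EW
# list pools that look importable from the labels under /dev/gpt
$ zpool import -d /dev/gpt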
 

dlavigne (Guest)
If it's still not complete, please create a ticket at bugs.freenas.org and post the issue number here.
 