blocksize and ashift: one of my vdevs in a pool is set up incorrectly

Status
Not open for further replies.

melbournemac

Dabbler
Joined
Jan 21, 2017
Messages
20
Hi,

FreeNAS 11.1-RELEASE
  • two pools; the main data pool consists of 4 x WD 6TB Reds configured as two 2-disk mirror vdevs
Have had FreeNAS running for a couple of years without issue. System UI performance was poor this morning; I discovered a scrub of the main data pool was still running, and the CPU and disks were getting a workout. I'll alter the scrub start time to better align with low system usage.
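
For reference, the scrub's progress and estimated completion time can be checked from the shell with a stock ZFS command; a minimal check, using the pool name from the zdb output below:

Code:
# Shows scrub progress, speed, and estimated time remaining
zpool status MacJones_pool1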

Whilst looking into the scrub I noticed that one of the vdevs in MacJones_pool1 is reporting a block size mismatch (children[1] in the output below). More digging showed that its ashift is set to 9 instead of 12. I am unsure how I managed to achieve this.

Code:
root@freenas:~ # zdb -U /data/zfs/zpool.cache
MacJones_pool1:
	version: 5000
	name: 'MacJones_pool1'
	state: 0
	txg: 7269129
	pool_guid: 11917207372033963665
	hostid: 3259113761
	hostname: ''
	com.delphix:has_per_vdev_zaps
	vdev_children: 2
	vdev_tree:
		type: 'root'
		id: 0
		guid: 11917207372033963665
		children[0]:
			type: 'mirror'
			id: 0
			guid: 6602097936253641891
			metaslab_array: 38
			metaslab_shift: 35
			ashift: 12
			asize: 5999022833664
			is_log: 0
			create_txg: 4
			com.delphix:vdev_zap_top: 789
			children[0]:
				type: 'disk'
				id: 0
				guid: 8089679698083475835
				path: '/dev/gptid/8255ae0b-8ae1-11e6-b81d-001e676ba520'
				whole_disk: 1
				DTL: 189
				create_txg: 4
				com.delphix:vdev_zap_leaf: 790
			children[1]:
				type: 'disk'
				id: 1
				guid: 8581421012564144232
				path: '/dev/gptid/836aaed3-8ae1-11e6-b81d-001e676ba520'
				whole_disk: 1
				DTL: 188
				create_txg: 4
				com.delphix:vdev_zap_leaf: 791
		children[1]:
			type: 'mirror'
			id: 1
			guid: 14728795341280710015
			metaslab_array: 35
			metaslab_shift: 33
			ashift: 9
			asize: 5999022833664
			is_log: 0
			create_txg: 4
			com.delphix:vdev_zap_top: 792
			children[0]:
				type: 'disk'
				id: 0
				guid: 18259605005843121397
				path: '/dev/gptid/b7a2b02f-8bf0-11e6-88b8-001e676ba520'
				whole_disk: 1
				DTL: 186
				create_txg: 4
				com.delphix:vdev_zap_leaf: 793
			children[1]:
				type: 'disk'
				id: 1
				guid: 12383229025031835147
				path: '/dev/gptid/415e27e6-8c05-11e6-88b8-001e676ba520'
				whole_disk: 1
				DTL: 192
				create_txg: 4
				com.delphix:vdev_zap_leaf: 794
	features_for_read:
		com.delphix:hole_birth
		com.delphix:embedded_data
MacJones_pool2:
	version: 5000
	name: 'MacJones_pool2'
	state: 0
	txg: 1911297
	pool_guid: 15754485980253393599
	hostid: 3259113761
	hostname: ''
	com.delphix:has_per_vdev_zaps
	vdev_children: 1
	vdev_tree:
		type: 'root'
		id: 0
		guid: 15754485980253393599
		children[0]:
			type: 'disk'
			id: 0
			guid: 11404614096857004109
			path: '/dev/gptid/c3f51df8-b363-11e6-bc6e-001e676ba520'
			whole_disk: 1
			metaslab_array: 35
			metaslab_shift: 33
			ashift: 9
			asize: 998052462592
			is_log: 0
			DTL: 49
			create_txg: 4
			com.delphix:vdev_zap_leaf: 50
			com.delphix:vdev_zap_top: 51
	features_for_read:
		com.delphix:hole_birth
		com.delphix:embedded_data
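
For reference, the per-vdev ashift values can be pulled out of that zdb output without walking the whole tree, and the FreeBSD sysctl that controls the minimum ashift for newly created or added vdevs can be checked at the same time; a minimal check, reusing the cache-file path from above:

Code:
# Just the ashift of each top-level vdev (pool1 mirror 0, pool1 mirror 1, pool2)
root@freenas:~ # zdb -U /data/zfs/zpool.cache | grep ashift
	ashift: 12
	ashift: 9
	ashift: 9

# Minimum ashift applied to newly created/added vdevs; 12 forces 4K alignment
root@freenas:~ # sysctl vfs.zfs.min_auto_ashift

If the second mirror was attached from the command line at some point while that sysctl was still at its default of 9, that would be one plausible way to end up with a mixed-ashift pool like this.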


Reading other threads/posts, it appears I have to rebuild the pool, and if I do I might get some performance gains. To help me decide whether I want to fix this configuration error, I'm hoping for some input:
  • is there any way to fix this vdev without doing the entire pool?
  • is there any way to estimate the performance gains that may be achieved if I fix it? (a rough way to measure the current setup is sketched after this list)
  • are there any other risks of leaving the pool / vdev configuration as it is?
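
For what it's worth, one rough way to see whether the ashift=9 mirror is actually holding things back is to watch per-vdev and per-disk activity while the pool is busy (e.g. during a scrub or a large copy); a small sketch, using the pool name from this thread:

Code:
# Per-vdev read/write ops and bandwidth, refreshed every 5 seconds
zpool iostat -v MacJones_pool1 5

# Per-disk %busy and latency on FreeBSD; the two disks in the ashift=9
# mirror should stand out here if that vdev is the bottleneck
gstat -p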

thanks,

Steve
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
1) There is no way to fix the vdev without doing the entire pool.

2) The performance gains are not likely to be substantial, unless you already notice that the vdev is unusually slow.

3) It's fine to leave it.
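
If the decision were made to rebuild, the usual outline is a recursive snapshot plus zfs send/receive to a temporary location, then recreating the pool so every vdev comes up at ashift=12. A rough sketch only (backup_pool is a placeholder for wherever the data gets parked, not something from this thread), and names and paths should be double-checked before running anything:

Code:
# Make sure any newly created vdevs default to 4K alignment
sysctl vfs.zfs.min_auto_ashift=12

# Snapshot everything recursively and copy it off to a temporary pool
zfs snapshot -r MacJones_pool1@migrate
zfs send -R MacJones_pool1@migrate | zfs receive -F backup_pool/MacJones_pool1

# Destroy the old pool and recreate the 2 x 2-way mirror layout
# (preferably from the FreeNAS GUI), then copy the data back
zpool destroy MacJones_pool1
zfs send -R backup_pool/MacJones_pool1@migrate | zfs receive -F MacJones_pool1

Restoring from an existing, current backup instead of the send/receive round trip works just as well.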
 

melbournemac

Dabbler
Joined
Jan 21, 2017
Messages
20
jgreco said:
1) There is no way to fix the vdev without doing the entire pool.

2) The performance gains are not likely to be substantial, unless you already notice that the vdev is unusually slow.

3) It's fine to leave it.

appreciate the update
 