Volume expansion question


scottmaverick

Dabbler
Joined
Aug 24, 2016
Messages
24
My original volume consisted of 12 3 TB drives in RAIDZ3. I had read that performance degrades once capacity is over 80%, so I bought another 12 3 TB drives and used them to extend the volume. I did this through the GUI, extending the existing volume by adding the new 12 drives as a second RAIDZ3 vdev. I now have a volume that consists of two vdevs, each of which is RAIDZ3.
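
For reference, I believe the GUI extension is roughly equivalent to something like the following on the command line (the device names below are placeholders, not my actual disks, and FreeNAS actually partitions the drives and adds them by gptid):
Code:
# Add a second 12-disk RAID-Z3 vdev to the existing pool "Media".
# da12 through da23 are placeholder device names.
zpool add Media raidz3 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 da22 da23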

I started copying more files to the volume and noticed that it was still only writing to my first vdev, which was 80% full. The idea was to keep each vdev under 80% to avoid the performance degradation. Is there a way to have new files written only to the new vdev? Or is my understanding of how this works completely wrong?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Please show us the following output in CODE tags:
Code:
zpool status
zpool list -v
 

scottmaverick

Dabbler
Joined
Aug 24, 2016
Messages
24
Code:
 zpool status
  pool: Media
 state: ONLINE
  scan: scrub repaired 0 in 17h9m with 0 errors on Thu Dec 15 21:09:49 2016
config:

		NAME											STATE	 READ WRITE CKSUM
		Media										   ONLINE	   0	 0	 0
		  raidz3-0									  ONLINE	   0	 0	 0
			gptid/04c115ba-6a25-11e6-a63b-e0cb4e0ffe38  ONLINE	   0	 0	 0
			gptid/05c1e9ee-6a25-11e6-a63b-e0cb4e0ffe38  ONLINE	   0	 0	 0
			gptid/06c70250-6a25-11e6-a63b-e0cb4e0ffe38  ONLINE	   0	 0	 0
			gptid/07c5a98d-6a25-11e6-a63b-e0cb4e0ffe38  ONLINE	   0	 0	 0
			gptid/08c1d3f1-6a25-11e6-a63b-e0cb4e0ffe38  ONLINE	   0	 0	 0
			gptid/09bf94c1-6a25-11e6-a63b-e0cb4e0ffe38  ONLINE	   0	 0	 0
			gptid/0abdd3ba-6a25-11e6-a63b-e0cb4e0ffe38  ONLINE	   0	 0	 0
			gptid/0bbe0b65-6a25-11e6-a63b-e0cb4e0ffe38  ONLINE	   0	 0	 0
			gptid/0cbd894c-6a25-11e6-a63b-e0cb4e0ffe38  ONLINE	   0	 0	 0
			gptid/0dbaaae6-6a25-11e6-a63b-e0cb4e0ffe38  ONLINE	   0	 0	 0
			gptid/0eb7691c-6a25-11e6-a63b-e0cb4e0ffe38  ONLINE	   0	 0	 0
			gptid/0fbbce51-6a25-11e6-a63b-e0cb4e0ffe38  ONLINE	   0	 0	 0
		  raidz3-1									  ONLINE	   0	 0	 0
			gptid/72193012-c792-11e6-a5e5-e0cb4e0ffe38  ONLINE	   0	 0	 0
			gptid/733d960f-c792-11e6-a5e5-e0cb4e0ffe38  ONLINE	   0	 0	 0
			gptid/7467fddf-c792-11e6-a5e5-e0cb4e0ffe38  ONLINE	   0	 0	 0
			gptid/75845a00-c792-11e6-a5e5-e0cb4e0ffe38  ONLINE	   0	 0	 0
			gptid/76a5dfeb-c792-11e6-a5e5-e0cb4e0ffe38  ONLINE	   0	 0	 0
			gptid/77c36a78-c792-11e6-a5e5-e0cb4e0ffe38  ONLINE	   0	 0	 0
			gptid/78e5066a-c792-11e6-a5e5-e0cb4e0ffe38  ONLINE	   0	 0	 0
			gptid/7a0ffc76-c792-11e6-a5e5-e0cb4e0ffe38  ONLINE	   0	 0	 0
			gptid/7b3745ee-c792-11e6-a5e5-e0cb4e0ffe38  ONLINE	   0	 0	 0
			gptid/7c5d623b-c792-11e6-a5e5-e0cb4e0ffe38  ONLINE	   0	 0	 0
			gptid/7d7dcd79-c792-11e6-a5e5-e0cb4e0ffe38  ONLINE	   0	 0	 0
			gptid/7e9c399a-c792-11e6-a5e5-e0cb4e0ffe38  ONLINE	   0	 0	 0

errors: No known data errors

  pool: Plex
 state: ONLINE
  scan: scrub repaired 0 in 0h7m with 0 errors on Thu Dec 15 04:07:43 2016
config:

		NAME										  STATE	 READ WRITE CKSUM
		Plex										  ONLINE	   0	 0	 0
		  gptid/2d25dedb-696f-11e6-8378-e0cb4e0ffe38  ONLINE	   0	 0	 0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Wed Dec  7 03:45:38 2016
config:

		NAME										  STATE	 READ WRITE CKSUM
		freenas-boot								  ONLINE	   0	 0	 0
		  gptid/92ff4e72-68c0-11e6-bc4a-e0cb4e0ffe38  ONLINE	   0	 0	 0

errors: No known data errors



Code:
zpool list -v
NAME									 SIZE  ALLOC   FREE  EXPANDSZ   FRAG	CAP  DEDUP  HEALTH  ALTROOT
Media									 65T  27.7T  37.3T		 -	18%	42%  1.00x  ONLINE  /mnt
  raidz3								32.5T  27.4T  5.10T		 -	36%	84%
	gptid/04c115ba-6a25-11e6-a63b-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
	gptid/05c1e9ee-6a25-11e6-a63b-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
	gptid/06c70250-6a25-11e6-a63b-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
	gptid/07c5a98d-6a25-11e6-a63b-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
	gptid/08c1d3f1-6a25-11e6-a63b-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
	gptid/09bf94c1-6a25-11e6-a63b-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
	gptid/0abdd3ba-6a25-11e6-a63b-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
	gptid/0bbe0b65-6a25-11e6-a63b-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
	gptid/0cbd894c-6a25-11e6-a63b-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
	gptid/0dbaaae6-6a25-11e6-a63b-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
	gptid/0eb7691c-6a25-11e6-a63b-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
	gptid/0fbbce51-6a25-11e6-a63b-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
  raidz3								32.5T   275G  32.2T		 -	 0%	 0%
	gptid/72193012-c792-11e6-a5e5-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
	gptid/733d960f-c792-11e6-a5e5-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
	gptid/7467fddf-c792-11e6-a5e5-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
	gptid/75845a00-c792-11e6-a5e5-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
	gptid/76a5dfeb-c792-11e6-a5e5-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
	gptid/77c36a78-c792-11e6-a5e5-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
	gptid/78e5066a-c792-11e6-a5e5-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
	gptid/7a0ffc76-c792-11e6-a5e5-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
	gptid/7b3745ee-c792-11e6-a5e5-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
	gptid/7c5d623b-c792-11e6-a5e5-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
	gptid/7d7dcd79-c792-11e6-a5e5-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
	gptid/7e9c399a-c792-11e6-a5e5-e0cb4e0ffe38	  -	  -	  -		 -	  -	  -
Plex									81.5G  20.9G  60.6G		 -	71%	25%  1.00x  ONLINE  /mnt
  gptid/2d25dedb-696f-11e6-8378-e0cb4e0ffe38  81.5G  20.9G  60.6G		 -	71%	25%
freenas-boot							14.2G  1.28G  13.0G		 -	  -	 8%  1.00x  ONLINE  -
  gptid/92ff4e72-68c0-11e6-bc4a-e0cb4e0ffe38  14.2G  1.28G  13.0G		 -	  -	 8%

 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
It would appear your second vDev, "raidz3-1", is currently using about 275 GB.

How much new data have you written to the pool overall?

ZFS should favor the new RAID-Z3 vDev over the original one, since it biases new writes toward vDevs with more free space. If you have old, static data in the pool that you can copy into a new dataset, the copy should land mostly on the new vDev and help rebalance the pool.

Otherwise, you appear to have done the correct thing in adding your new RAID-Z3 vDev. (Occasionally we see mismatched pools...)
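
As a rough sketch of the rebalance-by-copying idea (the dataset names here are examples only, not taken from your pool):
Code:
# Snapshot an old, static dataset and copy it into a new one;
# the new writes will land mostly on the emptier vdev.
zfs snapshot Media/movies@rebalance
zfs send Media/movies@rebalance | zfs recv Media/movies_new

# After verifying the copy, the old dataset can be destroyed
# and the new one renamed in its place:
# zfs destroy -r Media/movies
# zfs rename Media/movies_new Media/movies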
 