Moving data from a full volume?

Status: Not open for further replies.

zey

Explorer
Joined
Oct 31, 2014
Messages
51
Here's my situation.

I built a new server with 4 x 3TB hard drives and created a volume using the only real option I had, mirrors. I moved all my data over and it filled the volume up to 95%.

I moved the hard drives (4 x 2TB) from the old server to the new one and extended the volume by adding the four as mirrors. My question is: how do I balance the load? Currently my performance is impacted due to the first volume being full.

My future upgrade will most likely be 8 x 8TB drives. But that won't be anytime soon.
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
Well unless you resilver or otherwise manipulate the data it stays where it was written. That is the whole point of CoW.

But what do you mean by “first volume”?

If you extended the pool with 2 mirrors of 2 x 2TB drives, then you should have added just under 4 TB to the pool. Those new disks will be faster than the full disks and thus pick up the performance slack. Sure, you don't get the IO all eight could have given you, but that will even out when you start replacing with the 8 TB disks.
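(Side note, not from the original reply: you can watch this imbalance directly. zpool list -v reports size, allocated, and free space per vdev, so the original mirrors should show up nearly full and the newly added ones nearly empty, evening out as data gets rewritten.)

Code:
# Per-vdev capacity breakdown; pool name "cargo" taken from the output
# posted later in this thread. The original mirrors should show high
# ALLOC, the new ones mostly FREE.
zpool list -v cargo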
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
You need to rewrite the data before it will spread across the new vdevs. So move it to a different dataset or something.
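A rough sketch of what that can look like (the dataset names here are made up, adjust to your layout; you also need enough free space for the second copy, which the newly added mirrors provide):

Code:
# Snapshot the existing dataset and send it to a new one; the receive
# rewrites every block, spreading it across all four mirrors.
zfs snapshot cargo/media@rebalance
zfs send cargo/media@rebalance | zfs receive cargo/media_new

# After verifying the copy, drop the old dataset and rename the new one.
zfs destroy -r cargo/media
zfs rename cargo/media_new cargo/media

Repeat per dataset; anything you don't rewrite stays on the old mirrors, as garm said.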
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
but that will even out when you start replacing with the 8 TB disks.
...and will even out as you write data to the pool and change data that's already there.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
Here's my situation.

I built a new server with 4 x 3TB hard drives and created a volume using the only real option I had, mirrors. I moved all my data over and it filled the volume up to 95%.

I moved the hard drives (4 x 2TB) from the old server to the new one and extended the volume by adding the four as mirrors. My question is: how do I balance the load? Currently my performance is impacted due to the first volume being full.

My future upgrade will most likely be 8 x 8TB drives. But that won't be anytime soon.
How is it you filled up what should have been about 5 TB of storage with the content from the old server when it only should have had about 3 TB of capacity?
That confuses me.
I wonder how you actually have the pool configured. Can you post the output of zpool status?

It would help if you posted the output in [ code ] tags so it looks like this:
Code:
  pool: Vol1
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Mon Sep  4 17:01:17 2017
        76.7G scanned out of 5.05T at 90.5M/s, 16h0m to go
        18.3G resilvered, 1.48% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        Vol1                                            DEGRADED     0     0 87.3K
          raidz2-0                                      DEGRADED     0     0  175K
            gptid/bd936cf5-9894-11e6-b64f-d05099c0c5b3  ONLINE       0     0     0
            gptid/be3e1755-9894-11e6-b64f-d05099c0c5b3  ONLINE       0     0     0
            gptid/bef699fe-9894-11e6-b64f-d05099c0c5b3  ONLINE       0     0     0
            gptid/bfe92d1c-9894-11e6-b64f-d05099c0c5b3  ONLINE       0     0     0
            gptid/c0cf3a07-9894-11e6-b64f-d05099c0c5b3  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:
 

zey

Explorer
Joined
Oct 31, 2014
Messages
51
How is it you filled up what should have been about 5 TB of storage with the content from the old server when it only should have had about 3 TB of capacity?
That confuses me.
I wonder how you actually have the pool configured. Can you post the output of zpool status?

Code:
  pool: cargo
 state: ONLINE
  scan: scrub repaired 0 in 7h17m with 0 errors on Thu Nov  2 05:06:22 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        cargo                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/8b1c8f4a-bda4-11e7-87cf-000c29005692  ONLINE       0     0     0
            gptid/8be1530e-bda4-11e7-87cf-000c29005692  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/8ca7c3a1-bda4-11e7-87cf-000c29005692  ONLINE       0     0     0
            gptid/8d6d22ad-bda4-11e7-87cf-000c29005692  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/11e2308d-bf6b-11e7-a472-0025905dfe3f  ONLINE       0     0     0
            gptid/14b4f149-bf6b-11e7-a472-0025905dfe3f  ONLINE       0     0     0
          mirror-3                                      ONLINE       0     0     0
            gptid/1acfbaaf-bf6b-11e7-a472-0025905dfe3f  ONLINE       0     0     0
            gptid/1d8e0647-bf6b-11e7-a472-0025905dfe3f  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da0p2       ONLINE       0     0     0

errors: No known data errors


The old server zpool was just striped, giving me a little over 7TB.
 