Lost a lot of performance when I added multiple disks


kikotte

Explorer
Joined
Oct 1, 2017
Messages
75
Hi,

I was running a pool of 6 disks in mirrored pairs and got good performance, with every disk writing about 200 MB/s. But after I added 6 more disks, each disk only writes about 60 MB/s. I test write speed with:

dd if=/dev/zero of=tmp.dat bs=2048k count=150k
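
To see which disks are actually doing the work while that runs, per-disk and per-vdev activity can be watched from a second shell (just a rough check using standard FreeBSD/ZFS tools; Store2 is the pool shown below):

Code:
# Per-physical-disk throughput and busy percentage
gstat -p

# Per-vdev read/write bandwidth for the pool, refreshed every second
zpool iostat -v Store2 1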

Check my signature for the hardware.

Version: FreeNAS-11.1-U1


The disks are in the same pool but do not do the same amount of work. Is it because the controller is busy with other things?
[Attached screenshots: freenas disk cehck.PNG, freenas disk problem.PNG, freenas disk.PNG]


Code:
[root@freenas ~]# zpool status
  pool: Store2
 state: ONLINE
  scan: none requested
config:

	NAME                                                  STATE     READ WRITE CKSUM
	Store2                                                ONLINE       0     0     0
	  mirror-0                                            ONLINE       0     0     0
	    gptid/9d17bcbc-066b-11e8-b8a8-0cc47a5808e8.eli    ONLINE       0     0     0
	    gptid/9df33389-066b-11e8-b8a8-0cc47a5808e8.eli    ONLINE       0     0     0
	  mirror-1                                            ONLINE       0     0     0
	    gptid/a03134f9-066b-11e8-b8a8-0cc47a5808e8.eli    ONLINE       0     0     0
	    gptid/a0f17bac-066b-11e8-b8a8-0cc47a5808e8.eli    ONLINE       0     0     0
	  mirror-2                                            ONLINE       0     0     0
	    gptid/a3334340-066b-11e8-b8a8-0cc47a5808e8.eli    ONLINE       0     0     0
	    gptid/a3f64588-066b-11e8-b8a8-0cc47a5808e8.eli    ONLINE       0     0     0
	  mirror-4                                            ONLINE       0     0     0
	    gptid/5d4a0fd3-1312-11e8-b729-0cc47a5808e8.eli    ONLINE       0     0     0
	    gptid/5e12941f-1312-11e8-b729-0cc47a5808e8.eli    ONLINE       0     0     0
	  mirror-5                                            ONLINE       0     0     0
	    gptid/612876f0-1312-11e8-b729-0cc47a5808e8.eli    ONLINE       0     0     0
	    gptid/61eacf52-1312-11e8-b729-0cc47a5808e8.eli    ONLINE       0     0     0
	  mirror-6                                            ONLINE       0     0     0
	    gptid/65096079-1312-11e8-b729-0cc47a5808e8.eli    ONLINE       0     0     0
	    gptid/65d19969-1312-11e8-b729-0cc47a5808e8.eli    ONLINE       0     0     0
	logs
	  mirror-3                                            ONLINE       0     0     0
	    gptid/a4d713ca-066b-11e8-b8a8-0cc47a5808e8.eli    ONLINE       0     0     0
	    gptid/a549c4e3-066b-11e8-b8a8-0cc47a5808e8.eli    ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:05 with 0 errors on Thu Feb 15 03:45:05 2018
config:

	NAME          STATE     READ WRITE CKSUM
	freenas-boot  ONLINE       0     0     0
	  mirror-0    ONLINE       0     0     0
	    da12p2    ONLINE       0     0     0
	    da13p2    ONLINE       0     0     0

errors: No known data errors


Code:
[root@freenas ~]# glabel status
                                      Name  Status  Components
gptid/a4d713ca-066b-11e8-b8a8-0cc47a5808e8     N/A  nvd0p2
gptid/a549c4e3-066b-11e8-b8a8-0cc47a5808e8     N/A  nvd1p2
gptid/9d17bcbc-066b-11e8-b8a8-0cc47a5808e8     N/A  da2p2
gptid/9df33389-066b-11e8-b8a8-0cc47a5808e8     N/A  da5p2
gptid/a03134f9-066b-11e8-b8a8-0cc47a5808e8     N/A  da8p2
gptid/a0f17bac-066b-11e8-b8a8-0cc47a5808e8     N/A  da9p2
gptid/a3334340-066b-11e8-b8a8-0cc47a5808e8     N/A  da10p2
gptid/a3f64588-066b-11e8-b8a8-0cc47a5808e8     N/A  da11p2
gptid/ed20a3de-b526-11e7-bceb-002590e35b10     N/A  da12p1
gptid/ed276c6c-b526-11e7-bceb-002590e35b10     N/A  da13p1
gptid/5d4a0fd3-1312-11e8-b729-0cc47a5808e8     N/A  da0p2
gptid/5e12941f-1312-11e8-b729-0cc47a5808e8     N/A  da1p2
gptid/612876f0-1312-11e8-b729-0cc47a5808e8     N/A  da3p2
gptid/61eacf52-1312-11e8-b729-0cc47a5808e8     N/A  da4p2
gptid/65096079-1312-11e8-b729-0cc47a5808e8     N/A  da6p2
gptid/65d19969-1312-11e8-b729-0cc47a5808e8     N/A  da7p2
gptid/65becc6a-1312-11e8-b729-0cc47a5808e8     N/A  da7p1
gptid/64f8aa3f-1312-11e8-b729-0cc47a5808e8     N/A  da6p1
gptid/61da5155-1312-11e8-b729-0cc47a5808e8     N/A  da4p1
gptid/6117fff7-1312-11e8-b729-0cc47a5808e8     N/A  da3p1
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,555
Please abide by the forum rules and post your hardware in the thread so we can see it and it stays persistent. I have no idea what your SLOG drives are, but I'm guessing you're maxing them out?
 

kikotte

Explorer
Joined
Oct 1, 2017
Messages
75
Please abide by the forum rules and post your hardware in the thread so we can see it and it stays persistent. I have no idea what your SLOG drives are, but I'm guessing you're maxing them out?

Check my signature for the hardware.

Code:
FreeNAS1: motherboard Supermicro X9DRH-7TF, CPU 2x E5-2680 v1, RAM 256GB ECC, HBA card LSI 9300-16i, HDD 12x WD Gold 6TB, SLOG/ZIL 2x Intel DC P3520 1.2TB, OS disks 2x Samsung SM863 120GB
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,599
One thing ZFS tries to do is balance the space used across all the vDevs. So, if your pre-existing 3 x 2-way mirrored vDevs were fairly full, then new writes would go to the new vDevs almost exclusively.

Please supply the output of zpool list -v in code tags.

Note that there is no automatic rebalancing of existing data when adding new vDevs; data stays where it was originally written. Other RAID solutions may do this, but ZFS does not, partly because advanced features like snapshots and clones would complicate data balancing.
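
As a rough illustration of that point (not something tested here, and with a hypothetical dataset name), existing data only spreads onto the new vDevs when it is rewritten, for example by replicating a dataset within the same pool:

Code:
# Hypothetical dataset Store2/data; rewriting it lets ZFS allocate the blocks across all vDevs
zfs snapshot Store2/data@rebalance
zfs send Store2/data@rebalance | zfs recv Store2/data_new

# Only after verifying the copy would the old dataset be destroyed and the new one renamed
zfs destroy -r Store2/data
zfs rename Store2/data_new Store2/data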
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
And why are you running a pair of drives in a mirror for your SLOG? That just adds overhead.
 

kikotte

Explorer
Joined
Oct 1, 2017
Messages
75
One thing ZFS tries to do is balance the space used across all the vDevs. So, if your pre-existing 3 x 2-way mirrored vDevs were fairly full, then new writes would go to the new vDevs almost exclusively.

Please supply the output of zpool list -v in code tags.

Note that there is no automatic rebalancing of existing data when adding new vDevs; data stays where it was originally written. Other RAID solutions may do this, but ZFS does not, partly because advanced features like snapshots and clones would complicate data balancing.

Code:
[root@freenas ~]# zpool list -v
NAME                                                 SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
Store2                                              32.6T  4.35T  28.3T         -     1%    13%  1.00x  ONLINE  /mnt
  mirror                                            5.44T  1.41T  4.02T         -     2%    25%
    gptid/9d17bcbc-066b-11e8-b8a8-0cc47a5808e8.eli      -      -      -         -      -      -
    gptid/9df33389-066b-11e8-b8a8-0cc47a5808e8.eli      -      -      -         -      -      -
  mirror                                            5.44T  1.45T  3.98T         -     2%    26%
    gptid/a03134f9-066b-11e8-b8a8-0cc47a5808e8.eli      -      -      -         -      -      -
    gptid/a0f17bac-066b-11e8-b8a8-0cc47a5808e8.eli      -      -      -         -      -      -
  mirror                                            5.44T  1.46T  3.98T         -     2%    26%
    gptid/a3334340-066b-11e8-b8a8-0cc47a5808e8.eli      -      -      -         -      -      -
    gptid/a3f64588-066b-11e8-b8a8-0cc47a5808e8.eli      -      -      -         -      -      -
  mirror                                            5.44T  10.2G  5.43T         -     0%     0%
    gptid/5d4a0fd3-1312-11e8-b729-0cc47a5808e8.eli      -      -      -         -      -      -
    gptid/5e12941f-1312-11e8-b729-0cc47a5808e8.eli      -      -      -         -      -      -
  mirror                                            5.44T  10.9G  5.43T         -     0%     0%
    gptid/612876f0-1312-11e8-b729-0cc47a5808e8.eli      -      -      -         -      -      -
    gptid/61eacf52-1312-11e8-b729-0cc47a5808e8.eli      -      -      -         -      -      -
  mirror                                            5.44T  10.7G  5.43T         -     0%     0%
    gptid/65096079-1312-11e8-b729-0cc47a5808e8.eli      -      -      -         -      -      -
    gptid/65d19969-1312-11e8-b729-0cc47a5808e8.eli      -      -      -         -      -      -
log                                                     -      -      -         -      -      -
  mirror                                            1.09T  39.2M  1.09T         -     0%     0%
    gptid/a4d713ca-066b-11e8-b8a8-0cc47a5808e8.eli      -      -      -         -      -      -
    gptid/a549c4e3-066b-11e8-b8a8-0cc47a5808e8.eli      -      -      -         -      -      -
freenas-boot                                         111G  2.07G   109G         -      -     1%  1.00x  ONLINE  -
  mirror                                             111G  2.07G   109G         -      -     1%
    da12p2                                              -      -      -         -      -      -
    da13p2                                              -      -      -         -      -      -
[root@freenas ~]#


And why are you running a pair of drives in a mirror for your SLOG? That just adds overhead.

Money is not a big problem, I just need to save up for a while.

 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,555
He doesn't mean the cost, but the system resources needed for an IO.
 

kikotte

Explorer
Joined
Oct 1, 2017
Messages
75
He doesn't mean the cost, but the system resources needed for an IO.

Ah, does it get worse when you mirror the SLOG, does it degrade IO in that way?
 

kikotte

Explorer
Joined
Oct 1, 2017
Messages
75
@Arwen

It's fine, it does not use them because there is no data on them.

So the question is how do I fix this? Otherwise, I can buy additional hard disks to use as backup disks.

[Attached screenshot: freenas disk problem 2.PNG]
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
And why are you running a pair of drives in a mirror for your SLOG? That just adds overhead.

It’s actually a best practice, if your transactions are very important, or if you need the system to stay performant even when a SLOG fails.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
It’s actually a best practice, if your transactions are very important, or if you need the system to stay performant even when a SLOG fails.
Eh, I suppose. That exceeds my level of paranoia.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,599
Eh, I suppose. That exceeds my level of paranoia.
Please note that at an earlier time, loss of a SLOG meant not just the loss of the transactions it was saving, but the entire pool!

So, Sun Microsystems added the ability to mirror SLOGs as a way to prevent pool loss.

It was years later that someone (I don't know if it was Sun or an outsider) fixed that issue.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,599
@kikotte, take a look at the busy disks and how much they have stored on them. You might be able to answer your own question. (Sorry, I can't today, I have FBS - Fried Brain Syndrome...)
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
200 MB/s -> 60 MB/s sounds suspiciously like the slowdown from an empty disk to a semi-full and fragmented disk.
 

kikotte

Explorer
Joined
Oct 1, 2017
Messages
75
@kikotte, take a look at the busy disks and how much they have stored on them. You might be able to answer your own question. (Sorry, I can't today, I have FBS - Fried Brain Syndrome...)

I guess I can buy two hard drives that I can move the data over to, and then redo the pool.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Please note that at an earlier time, loss of a SLOG meant not just the loss of the transactions it was saving, but the entire pool!

So, Sun Microsystems added the ability to mirror SLOGs as a way to prevent pool loss.

It was years later that someone (I don't know if it was Sun or an outsider) fixed that issue.
True... but I don't intend to jump back several years in FreeNAS versions to experience that particular pain point :)
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
So, as I hinted at, there are two reasons to mirror a SLOG.

One is to deal with the rare failure case where a SLOG dies during a crash, i.e. it disappears before the separate intent log can be replayed to the main pool. The other reason is if the loss of the SLOG would cause a serious enough performance loss that it might as well be considered a system failure.
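
For reference, a sketch of how a mirrored log vDev is attached at the command line (the device names here are placeholders; FreeNAS normally does this through the volume manager in the GUI):

Code:
# Add two devices as a mirrored SLOG (placeholder device names)
zpool add Store2 log mirror /dev/ada0p2 /dev/ada1p2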
 

kikotte

Explorer
Joined
Oct 1, 2017
Messages
75
Is there no one who can solve my problem?

Only 6 disks are being used since there is no data on the other 6 disks. How do I move the data so that all 12 disks can work?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
You don't. You use the pool normally and ZFS will optimize things to maximize performance.
 