"Hard" limit?


aleks46

Cadet
Joined
Nov 2, 2016
Messages
3
Hi. I just bought and built my first FreeNAS box, but I've run into some kind of problem. When the RAM fills up, the read speed of the disks gets hard limited, if that's the right term.
At the moment there is a scrub running in the background.

[Attachments: ram.png, disc.png, cpu.png (RAM, disk, and CPU usage graphs)]


MOBO: AsRock C2550D4I
RAM: ECC 4X8GB
Disks: 10X3TB WD Red (WD30EFRX)
Storage: 1 pool in Raid-Z2
Boot: SanDisk Cruzer Fit 16G
 

nojohnny101

Wizard
Joined
Dec 3, 2015
Messages
1,477
Hmm, that is a bit strange. Can you post the output of "zpool status" in code tags? You don't have any resilvers or anything out of the ordinary going on, do you? Any recent changes?

How many users are accessing the box? What is your network setup?

Your screen capture of the memory is, I believe, unrelated; FreeNAS will always use all the RAM it can, so that looks normal.
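
For reference, roughly what I would run to check for anything in flight; just a sketch, substitute your own pool name:

Code:
zpool status -v             # pool health plus any scrub/resilver in progress
zpool history | tail -20    # recent pool-level operations (scrubs, snapshots, etc.)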
 

nojohnny101

Wizard
Joined
Dec 3, 2015
Messages
1,477
I saw that; I was just double-checking to make sure there was nothing else currently going on that was accidentally left out of the OP.
 

aleks46

Cadet
Joined
Nov 2, 2016
Messages
3
Sorry for the long delay in answering.
Hmm, that is a bit strange. Can you post the output of "zpool status" in code tags? You don't have any resilvers or anything out of the ordinary going on, do you? Any recent changes?
How many users are accessing the box? What is your network setup?

Nothing out of the ordinary except the scrub.
I'm the only one using the NAS.

Code:
  pool: freenas-boot
state: ONLINE
  scan: none requested
config:

		NAME										  STATE	 READ WRITE CKSUM
		freenas-boot								  ONLINE	   0	 0	 0
		  gptid/1603a0c5-9c87-11e6-9327-d05099c19979  ONLINE	   0	 0	 0

errors: No known data errors

  pool: pool
state: ONLINE
  scan: scrub in progress since Thu Nov 10 13:06:44 2016
		2.21T scanned out of 23.4T at 36.3M/s, 162h37m to go
		0 repaired, 9.46% done
config:

		NAME											STATE	 READ WRITE CKSUM
		pool											ONLINE	   0	 0	 0
		  raidz2-0									  ONLINE	   0	 0	 0
			gptid/b8b66271-9e0f-11e6-a470-d05099c19979  ONLINE	   0	 0	 0
			gptid/b96e8e7e-9e0f-11e6-a470-d05099c19979  ONLINE	   0	 0	 0
			gptid/ba24d3cd-9e0f-11e6-a470-d05099c19979  ONLINE	   0	 0	 0
			gptid/bae63e31-9e0f-11e6-a470-d05099c19979  ONLINE	   0	 0	 0
			gptid/bba2c657-9e0f-11e6-a470-d05099c19979  ONLINE	   0	 0	 0
			gptid/bc5cf0bf-9e0f-11e6-a470-d05099c19979  ONLINE	   0	 0	 0
			gptid/bd19f012-9e0f-11e6-a470-d05099c19979  ONLINE	   0	 0	 0
			gptid/bdd6ac59-9e0f-11e6-a470-d05099c19979  ONLINE	   0	 0	 0
			gptid/be934c54-9e0f-11e6-a470-d05099c19979  ONLINE	   0	 0	 0
			gptid/bf50428c-9e0f-11e6-a470-d05099c19979  ONLINE	   0	 0	 0

errors: No known data errors


At the beginning the scrub was running at around 100 MB/s, but then it dropped to around 36 MB/s.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Everything seems perfectly normal to me. How full is the pool and how long does the scrub actually take?
 

nojohnny101

Wizard
Joined
Dec 3, 2015
Messages
1,477
What @SweetAndLow said. I have often found that scrubs start off slow (mine usually start right around 30 MB/s), then increase and top out after about 30 minutes. By no means is a scrub's speed uniform throughout the process.
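
If it helps, a rough way to watch how the rate changes over the course of a scrub (just a sketch; the pool name is taken from the output above, adjust the interval to taste):

Code:
# Print the scrub progress line every 5 minutes
while true; do
    date
    zpool status pool | grep -E 'scanned|repaired'
    sleep 300
done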
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
@aleks46 Could you post the output of "zpool status" again? I'm curious how far it has gotten, and whether it's normal for a scrub to run 7+ days on ten 3TB disks. Also post the output of "zfs list", the version of FreeNAS you are running, and the hardware you are using.
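
For completeness, roughly the commands I mean; the /etc/version path is from memory of FreeNAS 9.x, so double-check it on your build:

Code:
zpool status -v     # pool state and scrub progress
zfs list            # datasets with used/available space
cat /etc/version    # FreeNAS build string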
 

aleks46

Cadet
Joined
Nov 2, 2016
Messages
3
When I first ran a scrub (November 2nd), it got to somewhere between 40 and 50% (but then I had to cancel the operation, because I didn't expect it to run so long and the NAS was still in my bedroom). Now it is at 88%.

Code:
  pool: freenas-boot
state: ONLINE
  scan: none requested
config:

		NAME										  STATE	 READ WRITE CKSUM
		freenas-boot								  ONLINE	   0	 0	 0
		  gptid/1603a0c5-9c87-11e6-9327-d05099c19979  ONLINE	   0	 0	 0

errors: No known data errors

  pool: pool
state: ONLINE
  scan: scrub in progress since Thu Nov 10 13:06:44 2016
		12.0T scanned out of 23.4T at 115M/s, 28h42m to go
		0 repaired, 51.30% done
config:

		NAME											STATE	 READ WRITE CKSUM
		pool											ONLINE	   0	 0	 0
		  raidz2-0									  ONLINE	   0	 0	 0
			gptid/b8b66271-9e0f-11e6-a470-d05099c19979  ONLINE	   0	 0	 0
			gptid/b96e8e7e-9e0f-11e6-a470-d05099c19979  ONLINE	   0	 0	 0
			gptid/ba24d3cd-9e0f-11e6-a470-d05099c19979  ONLINE	   0	 0	 0
			gptid/bae63e31-9e0f-11e6-a470-d05099c19979  ONLINE	   0	 0	 0
			gptid/bba2c657-9e0f-11e6-a470-d05099c19979  ONLINE	   0	 0	 0
			gptid/bc5cf0bf-9e0f-11e6-a470-d05099c19979  ONLINE	   0	 0	 0
			gptid/bd19f012-9e0f-11e6-a470-d05099c19979  ONLINE	   0	 0	 0
			gptid/bdd6ac59-9e0f-11e6-a470-d05099c19979  ONLINE	   0	 0	 0
			gptid/be934c54-9e0f-11e6-a470-d05099c19979  ONLINE	   0	 0	 0
			gptid/bf50428c-9e0f-11e6-a470-d05099c19979  ONLINE	   0	 0	 0

errors: No known data errors

Version is FreeNAS-9.10.1 (d989edd)
Hardware is listed in the first post.
zfs list:
Code:
NAME													USED  AVAIL  REFER  MOUNTPOINT
freenas-boot											641M  13.8G	31K  none
freenas-boot/ROOT									   633M  13.8G	25K  none
freenas-boot/ROOT/Initial-Install						 1K  13.8G   613M  legacy
freenas-boot/ROOT/default							   633M  13.8G   630M  legacy
freenas-boot/grub									  6.33M  13.8G  6.33M  legacy
pool												   17.8T  2.29T   256K  /mnt/pool
pool/.system										   6.72M  2.29T  1.83M  legacy
pool/.system/configs-7f4d67ae16c94917b949456bb9f364ad   768K  2.29T   768K  legacy
pool/.system/cores									 1.21M  2.29T  1.21M  legacy
pool/.system/rrd-7f4d67ae16c94917b949456bb9f364ad	   219K  2.29T   219K  legacy
pool/.system/samba4									 878K  2.29T   878K  legacy
pool/.system/syslog-7f4d67ae16c94917b949456bb9f364ad   1.86M  2.29T  1.86M  legacy
pool/TV												1.46T  2.29T  1.46T  /mnt/pool/TV
pool/xxxxxx											 16.3T  2.29T  16.3T  /mnt/pool/xxxxxx
pool/xxxxxx											   102G  2.29T   102G  /mnt/pool/xxxxxx
pool/guest											  481M  2.29T   481M  /mnt/pool/guest
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I'm not sure, but I find scrubs much slower than resilvers. As in 10x.

I'm guessing the scrub is trying not to degrade performance.
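
If I remember right, the FreeBSD-based FreeNAS 9.x scan code does throttle scrubs in favor of normal I/O through a few sysctls. A rough sketch of where to look (names from memory, so verify them with "sysctl -d" before changing anything):

Code:
# Read-only check of the scrub throttling tunables
sysctl vfs.zfs.scrub_delay      # ticks inserted between scrub I/Os when the pool is busy
sysctl vfs.zfs.resilver_delay   # same idea for resilvers (lower = more aggressive)
sysctl vfs.zfs.scan_idle        # idle threshold before the throttle is relaxed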
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
I have forgotten a lot about scrubs, which is why I just did a Google search on the topic "freenas scrub small files" (because I think it's a small-file thing), and here is what I came up with; the posting from @jgreco is the main one to read...
https://forums.freenas.org/index.php?threads/slow-scrubs-and-resilvers-on-a-freenas-mini.40521/

While I can't say for sure because the data wasn't provided, I suspect the OP has a lot of small files, and the fact that the drives are almost full also contributes to how slow the operation is. My speeds are mostly in the 430 MB/s range because most of my data is large backup files, but my system slows down to around 120 MB/s when it hits all the small files, and that slowdown lasts almost 20 minutes of the 4.7 hours my scrub runs overall.
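
As a rough sanity check of those two factors, something along these lines should show how full and fragmented the pool is and give a feel for the small-file count (the 128k cutoff and the TV dataset are just examples pulled from the zfs list above):

Code:
# Capacity and fragmentation of the pool
zpool list -o name,size,alloc,free,cap,frag pool

# Rough count of small files in one dataset (example cutoff: 128 KB)
find /mnt/pool/TV -type f -size -128k | wc -l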
 