Abysmal write speed within RAIDZ2 pool

Status
Not open for further replies.
Joined
Sep 5, 2017
Messages
8
I've noticed a problem with rsync speeds to my RAIDZ2 pool. I typically got around 50 MB/s, but now I can barely sustain a single MB/s. The pool is nearly at 100% capacity, so I expected some performance degradation, but I still have about 150 GB of free space and didn't expect writes to fall into KB/s territory. The logs haven't shown me anything. Is there anything else I should check? I suspected a faulty HBA or some bad disks, but I haven't found evidence of either anywhere.
Code:
[root@nas-1] /mnt/data-2# dd if=/dev/random of=test bs=20M count=1
1+0 records in
1+0 records out
20971520 bytes transferred in 66.804719 secs (313923 bytes/sec)

[root@nas-1] /mnt/data-2# zpool status data-2
  pool: data-2
 state: ONLINE
  scan: scrub repaired 0 in 64h16m with 0 errors on Sat Feb 17 20:16:49 2018
config:

   NAME                                             STATE     READ WRITE CKSUM
   data-2                                           ONLINE       0     0     0
     raidz2-0                                       ONLINE       0     0     0
       gptid/ccf1de64-fa35-11e3-98e0-90e2ba6dbb00   ONLINE       0     0     0
       gptid/f69a14cf-8ea0-11e7-865b-90e2ba6dbb00   ONLINE       0     0     0
       gptid/ce60619c-fa35-11e3-98e0-90e2ba6dbb00   ONLINE       0     0     0
       gptid/cf174b89-fa35-11e3-98e0-90e2ba6dbb00   ONLINE       0     0     0
       gptid/cfcc414a-fa35-11e3-98e0-90e2ba6dbb00   ONLINE       0     0     0
       gptid/d0863aab-fa35-11e3-98e0-90e2ba6dbb00   ONLINE       0     0     0
       gptid/d13c7b54-fa35-11e3-98e0-90e2ba6dbb00   ONLINE       0     0     0
       gptid/d1f2daa9-fa35-11e3-98e0-90e2ba6dbb00   ONLINE       0     0     0
       gptid/d2ad0683-fa35-11e3-98e0-90e2ba6dbb00   ONLINE       0     0     0
       gptid/d3627b12-fa35-11e3-98e0-90e2ba6dbb00   ONLINE       0     0     0
       gptid/d415a1c4-fa35-11e3-98e0-90e2ba6dbb00   ONLINE       0     0     0
   cache
     gptid/d47a78bd-fa35-11e3-98e0-90e2ba6dbb00     ONLINE       0     0     0
     gptid/d49de253-fa35-11e3-98e0-90e2ba6dbb00     ONLINE       0     0     0
errors: No known data errors

Code:
[root@nas-1] /mnt/data-2# df -hl
Filesystem          Size    Used   Avail  Capacity  Mounted on
***
data-2              4.3G     78k    4.3G      0%    /mnt/data-2
data-2/dataset-1     17T     17T    152G     99%    /mnt/data-2/dataset-1
data-2/dataset-2     14T     14T    4.3G    100%    /mnt/data-2/dataset-2
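
For anyone wanting to rule out the HBA or a slow disk here, these are the kinds of checks that apply (a rough sketch; da0 is an example device name, substitute your own disks):
Code:
# Per-vdev and per-disk operations/bandwidth while the pool is busy;
# one member lagging far behind its peers points at that disk or its port:
zpool iostat -v data-2 5

# GEOM-level stats: a failing disk usually shows very high ms/w or %busy
# compared to the other pool members:
gstat -p

# SMART health on each member disk (da0 is an example device name):
smartctl -a /dev/da0 | egrep -i "reallocated|pending|uncorrect"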
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Yes, fullness is THE cause. Past a certain fill level, ZFS gives up on its fast first-fit block allocation and switches to a space-saving best-fit search, and with free space this fragmented every write has to hunt for scattered gaps. Some of the slowness may now be semi-permanent: files written in this state are themselves fragmented, and ZFS has no defragmentation tool, so the only real fix is to free up substantial space and rewrite (or restore) the affected data.
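
You can confirm this from the pool's own counters (the FRAG column needs the spacemap_histogram pool feature, so it may be blank or "-" on older pools):
Code:
# Pool-wide capacity and fragmentation of the remaining free space:
zpool list -o name,size,allocated,free,capacity,fragmentation data-2

# Where the space actually went, including snapshots pinning old blocks:
zfs list -o space -r data-2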
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
Don't forget that ZFS is a copy-on-write filesystem, so any assumption you have about full-disk performance based on NTFS or ext4 is not applicable here. ZFS never overwrites blocks in place: every write, even to an existing file, has to allocate fresh free space first, so a nearly full pool slows down all writes, not just new files.
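
You can see this directly: rewriting data inside an existing file is just as slow as writing a new one, because ZFS still has to allocate new blocks for it (reusing the test file from the first post; numbers will vary):
Code:
# Overwrite 20 MB inside the existing test file without truncating it.
# Copy-on-write means this still allocates ~20 MB of new blocks (plus
# metadata) before the old blocks can be freed:
dd if=/dev/random of=/mnt/data-2/test bs=20M count=1 conv=notrunc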
 