SOLVED 2 datasets on same pool having different performance

Status
Not open for further replies.

Artix

Cadet
Joined
May 25, 2016
Messages
8
Hello,
I see a big performance difference when reading between 2 datasets on the same vdev:
Code:
[root@NAS-02 /mnt]# dd if=/mnt/earth/homes/file.mkv of=/dev/null bs=2048k
10210+1 records in
10210+1 records out
21413851234 bytes transferred in 71.962681 secs (297568837 bytes/sec) (283 MB/s)
[root@NAS-02 /mnt]# dd if=/mnt/earth/multimedia/file.mkv of=/dev/null bs=2048k
10210+1 records in
10210+1 records out
21413851234 bytes transferred in 293.451611 secs (72972342 bytes/sec) (69 MB/s)
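As a side note, dd's summary figures can be reproduced by hand: the parenthesized rate is bytes transferred divided by elapsed seconds, scaled by 2^20 (so despite the "MB/s" label, the arithmetic is base-2 MiB/s). A sketch recomputing the two rates from the output above:

```shell
# Recompute the rates from the dd output above.
# The parenthesized figure is bytes / seconds / 2^20.
bytes=21413851234
awk -v b="$bytes" -v t=71.962681  'BEGIN { printf "homes:      %d MB/s\n", b/t/1048576 }'
awk -v b="$bytes" -v t=293.451611 'BEGIN { printf "multimedia: %d MB/s\n", b/t/1048576 }'
```

That reproduces the 283 MB/s vs 69 MB/s gap, roughly a 4x difference for the same file size.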

Code:
[root@NAS-02 /mnt]# zpool status earth
  pool: earth
 state: ONLINE
  scan: scrub repaired 0 in 0 days 02:48:10 with 0 errors on Sun Jul 15 03:48:10 2018
config:

        NAME                                            STATE     READ WRITE CKSUM
        earth                                           ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/08684652-3041-11e8-8855-0cc47abbd230  ONLINE       0     0     0
            gptid/09257735-3041-11e8-8855-0cc47abbd230  ONLINE       0     0     0
            gptid/09e317f1-3041-11e8-8855-0cc47abbd230  ONLINE       0     0     0

errors: No known data errors

Code:
[root@NAS-02 /mnt]# zfs list -r earth
NAME               USED  AVAIL  REFER  MOUNTPOINT
earth             2.16T  4.86T   117K  /mnt/earth
earth/homes        152G  4.86T   152G  /mnt/earth/homes
earth/multimedia  2.01T  4.86T  2.01T  /mnt/earth/multimedia

How is this possible?

I checked: the hard drives are read at ~100 MB/s when using earth/homes but only ~30 MB/s when using earth/multimedia.

I checked all settings with "zfs get all" and both datasets have the same properties (for example, a 128K recordsize).
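A quick way to diff every property of the two datasets at once (a sketch; it assumes a system with the zfs tool, and the helper name is made up):

```shell
# List every property that differs between two datasets.
# Any line of output is a property whose values disagree.
compare_props() {
  diff <(zfs get -H -o property,value all "$1") \
       <(zfs get -H -o property,value all "$2")
}
```

Usage: compare_props earth/homes earth/multimedia — no output means the datasets are configured identically.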

What can I do to investigate further?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Vdevs do not have datasets. Vdevs provide storage for the pool; the Dataset and Snapshot Layer (DSL) provides POSIX filesystems backed by the pool.

Most likely, one of them happened to end up with a more fragmented version of the file, if the properties are indeed the same. Or maybe the metadata for one of them was in ARC.
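One way to rule the ARC in or out is to read the same file twice in a row: if the second pass is dramatically faster, caching rather than on-disk layout explains the gap. A minimal sketch (the helper name is made up):

```shell
# Read a file twice with dd and print each pass's summary line.
# A much faster second pass points at ARC caching.
read_twice() {
  for pass in 1 2; do
    echo "pass $pass:"
    dd if="$1" of=/dev/null bs=2048k 2>&1 | tail -1
  done
}
```

Usage: read_twice /mnt/earth/multimedia/file.mkv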
 

Artix

Cadet
Joined
May 25, 2016
Messages
8
Yeah, sorry for the confusion (vdev/pool).

Fragmentation of the file is probably the answer: I copied the file to another directory on the same dataset and performance is back to normal.

I checked my migration procedure: I had used robocopy with /MT:4, so multithreaded copying (which interleaves writes from several files and fragments each of them) could explain it too :D

Sadly, I haven't found a working way to check file fragmentation on FreeBSD/ZFS to be sure that's the problem.
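The copy workaround described above can be sketched as follows: rewriting a file forces ZFS to allocate fresh blocks, which usually land more contiguously on disk. The helper name is made up, and this is not an official defrag tool:

```shell
# Rewrite a file in place by copying it, so ZFS allocates new
# (usually more contiguous) blocks for its data.
rewrite_file() {
  cp "$1" "$1.rewrite.tmp" && mv "$1.rewrite.tmp" "$1"
}
```

Usage: rewrite_file /mnt/earth/multimedia/file.mkv (note this briefly needs free space for a second copy of the file).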
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Sadly, I haven't found a working way to check file fragmentation on FreeBSD/ZFS to be sure that's the problem.
That's because there isn't one. ZFS is complicated.
 

Artix

Cadet
Joined
May 25, 2016
Messages
8
OK, I can confirm that fragmentation was the problem.

Always migrate data with a single thread ^^
 