Slow pool-to-pool transfer speeds

cyrus104

Explorer
Joined
Feb 7, 2021
Messages
70
After logging in through SSH, when I rsync a large movie file (50GB) from one pool to another I get 30-45MB/s. When I download any file from either pool via SMB, I get 700MB/s. Any file (I tested different ones to make sure they were not cached) takes only a few seconds to copy instead of 5 minutes. Since the ZFS memory cache works at the ZFS level, I didn't think it would affect an rsync or cp (I used the time command to test) differently than an SMB copy.
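For reference, this is the shape of the command I'm timing (the paths here are just placeholders, not my actual dataset names):
Code:
# illustrative only -- source/destination paths are placeholders
rsync -ah --progress /mnt/ssd-pool/media/movie.mkv /mnt/nvme-pool/media/
The --progress output is where the 30-45MB/s figure comes from.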

New install:
CPU: AMD Epyc 7282 (16-core)
RAM: 128GB DDR4 ECC 3200MHz
Network: 10GbE

NVME-Pool
4x Intel P4610 NVMe 6.4TB (RAIDZ1) - going to be a quick pool for VMs, incremental backup daily

SSD-Pool
8x Intel D3-S4610 SATA 1.92TB (4x mirrored vdevs) - similar to NVME-Pool, incremental backup daily

I'm happy to run any tests to help troubleshoot this issue.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
NVME-Pool
4x Intel P4610 NVMe 6.4TB (RAIDZ1) - going to be a quick pool for VMs, incremental backup daily
Using RAIDZ1 will limit the IOPS of that pool to that of a single NVMe disk... not sure you want to throw away 3/4 of your IOPS like that if it's block storage. Have a read of this: https://www.truenas.com/community/threads/the-path-to-success-for-block-storage.81165/

After logging in through SSH, when I rsync a large movie (50GB) file from one pool to another pool I get 30-45MB/s.
What do you get with cp?
 

cyrus104

Explorer
Joined
Feb 7, 2021
Messages
70
@sretalla , thanks for your quick reply.

I am using this pool right now, but I can temporarily migrate the data off and then convert it to a layout similar to the SSD-pool:
2x mirrored vdevs (are they mirrored or striped vdevs?)
- vdev 1 (mirrored)
--Intel P4610 U.2 6.4TB
--Intel P4610 U.2 6.4TB
- vdev 2 (mirrored)
--Intel P4610 U.2 6.4TB
--Intel P4610 U.2 6.4TB

I understand I will get less space than with all of these drives in RAIDZ1, but from reading through the article I should get "double-ish" the IOPS.
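For illustration, something like this is what I have in mind when recreating the pool (device names are placeholders; in practice I'd build it through the TrueNAS GUI):
Code:
# sketch only -- nvd0..nvd3 stand in for the four P4610s
zpool create nvme-pool mirror nvd0 nvd1 mirror nvd2 nvd3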

I use
Code:
time cp
to get an idea of how long it takes to copy files from one pool to another.

I have been testing with fio, but because I need to use ioengine=posixaio instead of libaio, I get wildly different results than on another system I have locally. The numbers are still relative within this system, though.
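For what it's worth, the kind of fio run I've been using looks roughly like this (the directory and sizes are just what I picked, adjust as needed):
Code:
# sequential write test; the target directory must already exist on the pool
fio --name=seqwrite --directory=/mnt/nvme-pool/fio-test --ioengine=posixaio \
    --rw=write --bs=1M --size=10G --iodepth=16 --end_fsync=1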

I don't have a cache drive in yet (I have an Intel 905P to install), and when I copy a file from my desktop to either pool I get 1.06GB/s (pretty much the max speed for a 10GbE connection). I thought ZFS only used RAM to cache reads, not writes?
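One thing I can do while a copy runs is watch the per-pool throughput directly, e.g.:
Code:
# print pool-level I/O stats every second during a transfer
zpool iostat -v nvme-pool ssd-pool 1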
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703

cyrus104

Explorer
Joined
Feb 7, 2021
Messages
70
Even without making the change to mirrored vdevs, based on the hardware the speeds should be much better.

rsync test:
ssd-pool -> nvme-pool = 47.67MB/s
nvme-pool -> ssd-pool = 219MB/s
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703

cyrus104

Explorer
Joined
Feb 7, 2021
Messages
70
In theory, yes, but I also tested each of these drives on an Ubuntu box not using ZFS: with the NVMe drives I get about 3000MB/s sequential reads and writes, while I get about 450MB/s sequential reads and writes on the SSDs.

I can use cp, but it doesn't really output any stats to help troubleshoot. When I do a file copy with cp, the copy time gives the same results as with the rsync command.
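Since cp itself is silent, I just derive the rate by hand from the timed copy, e.g.:
Code:
# rough throughput = file size / elapsed time; paths illustrative
time cp /mnt/ssd-pool/media/movie.mkv /mnt/nvme-pool/media/
# e.g. 50GB in ~20 minutes works out to roughly 50000MB / 1200s = ~42MB/s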
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
When I do a file copy with cp I get the same results in time to copy as with the rsync command.
OK, that's the answer I was looking for (eliminating rsync's pre-check overhead from the equation).
 