scurrier · Patron · Joined Jan 2, 2014 · Messages: 297
I have two pools. Pool "backupone" is a 2 disk mirror of Seagate ST4000VN000 4TB drives. Pool "firstvol" is striped mirrors with the 6 of the same disks, 2 disks per vdev.
When transferring many large files (>5 GB) from backupone to firstvol, I am noticing via zpool iostat that the reads on backupone can be extremely uneven. Here's an example; note the difference in bandwidth and ops for the two disks in backupone.
Code:
                                          capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
backupone                               2.31T  1.31T    110      0   110M      0
  mirror                                2.31T  1.31T    110      0   110M      0
    gptid/cb74ec4e-42a9-11e5-82d0-002590f06808  -  -    110      0   110M      0
    gptid/c64e0866-c650-11e3-a8b9-002590f06808  -  -      0      0      0      0
--------------------------------------  -----  -----  -----  -----  -----  -----
firstvol                                7.29T  3.59T     42    689   315K   124M
  mirror                                2.39T  1.23T     16    248  91.6K  47.5M
    gptid/5fa53587-4121-11e5-82d0-002590f06808  -  -      9    116  66.1K  47.6M
    gptid/5a40f730-c136-11e3-b86a-002590f06808  -  -      6    118  25.5K  48.6M
  mirror                                2.52T  1.10T     11    201  71.7K  26.2M
    gptid/f800d612-421f-11e5-82d0-002590f06808  -  -      6     85  31.9K  26.2M
    gptid/95397879-c136-11e3-b86a-002590f06808  -  -      4     88  39.8K  26.2M
  mirror                                2.37T  1.25T     14    239   151K  50.2M
    gptid/2d62c253-42df-11e5-82d0-002590f06808  -  -      6    125  82.8K  50.3M
    gptid/2def5aed-42df-11e5-82d0-002590f06808  -  -      7    124  68.5K  50.3M
--------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot                            9.69G  5.06G      0      0      0      0
  da7p2                                 9.69G  5.06G      0      0      0      0
--------------------------------------  -----  -----  -----  -----  -----  -----
One disk is doing nothing and the other disk is doing all the reads, for a period of 5 seconds or more. It is not consistent which disk is doing all the work. It does this very frequently but not all the time. Sometimes both disks are reading.
Here's a look at the fragmentation and other stats.
Code:
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
backupone   3.62T  2.14T  1.48T         -    10%    59%  1.00x  ONLINE  /mnt
firstvol    10.9T  7.46T  3.41T         -    30%    68%  1.00x  ONLINE  /mnt
When I add a properly executed dd command to fully burden the reads on backupone, this uneven behavior keeps occurring. So it's not as if the other pool is too slow to receive and ZFS decides to loaf around with the reads on the mirror.
Can anyone explain this unbalanced behavior? I'm disappointed that transfers do not seem to be maxing out the throughput of the disks in the mirror.
Full disclosure: in another scenario, I am getting what I consider to be poor performance from firstvol during reads of large, "sequential" files. I see approximately 160 MB/s in a properly executed dd test of a file with only one segment. I will address this in another thread eventually, but I mention it here for completeness.
Hardware in my signature.
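For reference, the kind of single-file sequential dd read test described above can be sketched as follows. This is illustrative only, not the exact commands behind the 160 MB/s figure: the file path and size here are hypothetical, and a real test should use a large file on the pool itself with a cold ARC so cached reads don't inflate the number.

```shell
# Hypothetical path; a real test would use a file on the pool under test,
# e.g. somewhere under /mnt/firstvol.
TESTFILE=/tmp/seqread-test.bin

# Write a 256 MiB test file. Use /dev/urandom rather than /dev/zero:
# on a compressed ZFS dataset, an all-zero file compresses away and the
# read test would measure almost nothing.
dd if=/dev/urandom of="$TESTFILE" bs=1M count=256

# Sequential read; dd reports elapsed time and throughput on stderr.
dd if="$TESTFILE" of=/dev/null bs=1M
```

For the numbers to mean anything, the read should come off the disks, not the ARC, so either reboot, export/import the pool, or use a file much larger than RAM before timing the read.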