Largeblocks?


mav@
iXsystems
Joined: Sep 29, 2011
Messages: 1,428
The default value there is 8MB. Try increasing it to 2-4MB per disk in your pool. A bigger value should improve sequential read speed, but may increase latency for random I/O running at the same time.
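For illustration, a rough sketch of applying that advice from the shell, assuming the tunable under discussion is vfs.zfs.zfetch.max_distance (in bytes); the 30-disk sizing and exact value are assumptions, not from this thread:

# ~4MB per disk on a 30-disk pool is roughly 120MB:
sysctl vfs.zfs.zfetch.max_distance=125829120

On FreeNAS the same value can be persisted across reboots as a sysctl entry under System -> Tunables.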
 

mav@
iXsystems
Joined: Sep 29, 2011
Messages: 1,428
Quote:
I don't seem to have that sysctl. I'm on FreeNAS 9.3 Stable.

9.3 had completely different prefetcher code in ZFS, so those tunables were named differently. The current FreeNAS version is 9.10.2.
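If you're unsure which prefetch tunables a given build exposes, listing the zfetch subtree shows them (a sketch; the 9.3-era name below comes from the old prefetcher and is noted as an assumption):

# Print every tunable under the zfetch node:
sysctl vfs.zfs.zfetch
# The old prefetcher exposed names like vfs.zfs.zfetch.block_cap;
# the rewritten one exposes vfs.zfs.zfetch.max_distance.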
 

RAIDTester
Dabbler
Joined: Jan 23, 2017
Messages: 45
Running 15x2 mirrored vdevs.
Here's what I got with max_distance set to 32MB:


Record Size 1024 KB
File size set to 104857600 KB
No retest option selected
Include fsync in write timing
Setting no_unlink
Command line used: /Iozone3_414/iozone -t 5 -i 0 -i 2 -i 8 -r 1M -s 100G -+n -e -w
Output is in Kbytes/sec
Time Resolution = -0.000000 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Throughput test with 5 processes
Each process writes a 104857600 Kbyte file in 1024 Kbyte records

Children see throughput for 5 initial writers = 974178.02 KB/sec
Parent sees throughput for 5 initial writers = 929435.04 KB/sec
Min throughput per process = 186521.83 KB/sec
Max throughput per process = 202927.11 KB/sec
Avg throughput per process = 194835.60 KB/sec
Min xfer = 95158272.00 KB

Children see throughput for 5 random readers = 126651.16 KB/sec
Parent sees throughput for 5 random readers = 126650.45 KB/sec
Min throughput per process = 25064.70 KB/sec
Max throughput per process = 25514.94 KB/sec
Avg throughput per process = 25330.23 KB/sec
Min xfer = 103008256.00 KB

Children see throughput for 5 mixed workload = 917896.87 KB/sec
Parent sees throughput for 5 mixed workload = 832083.92 KB/sec
Min throughput per process = 14500.82 KB/sec
Max throughput per process = 463400.19 KB/sec
Avg throughput per process = 183579.37 KB/sec
Min xfer = 3047424.00 KB

Children see throughput for 5 random writers = 928103.17 KB/sec
Parent sees throughput for 5 random writers = 877307.06 KB/sec
Min throughput per process = 169603.14 KB/sec
Max throughput per process = 192635.41 KB/sec
Avg throughput per process = 185620.63 KB/sec
Min xfer = 89324544.00 KB


Disabling MPIO yields faster reads.
Random performance seems horrible.
 

RAIDTester
Dabbler
Joined: Jan 23, 2017
Messages: 45
On-system FreeNAS performance is >1.6GB/s.
No separate ZIL (SLOG) or L2ARC devices.
Over iSCSI, we barely break 400MB/s for sequential reads.
Am I right to assume there is significant overhead due to the 64K NTFS block size? IOPS go up significantly over iSCSI, but the disks are only 30-50% "busy" in gstat.
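One way to sanity-check that theory (the pool/zvol name here is hypothetical): compare the zvol's block size against the NTFS allocation unit, and watch per-disk load while a test runs:

# Read-modify-write amplification shows up when these sizes mismatch:
zfs get volblocksize tank/iscsi-zvol
# -p limits gstat to physical disks during the benchmark:
gstat -p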

Are there other sysctls to tune to make iSCSI reads faster?
Playing with max_distance helped a little; I believe we saw the best performance at 128MB.
We're still losing about 75% of the demonstrated on-system read speed. I have a hard time believing it's all iSCSI overhead, since writes are barely affected: over iSCSI they run at 90+% of on-system speed.
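For what it's worth, one knob sometimes raised for iSCSI throughput on FreeBSD-based systems is the iSCSI common layer's socket buffer sizes. The sysctl names and values below are an assumption for this release; verify they exist on your build before relying on them:

# Larger socket buffers for the iSCSI common layer (values illustrative):
sysctl kern.icl.sendspace=1048576
sysctl kern.icl.recvspace=1048576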

Any tips?
Thanks!
 

bigphil
Patron
Joined: Jan 30, 2014
Messages: 486
Maybe try the iSCSI tweaks listed here on your Windows server, especially the last two. There are quite a few articles about tuning for Windows Server 2008 R2, but those two settings seem to be universally accepted as a possible performance boost for iSCSI on Windows Server 2008 or newer. Reboot after applying the changes and test.
 