Slow read performance with wide vdev

Elliott

Dabbler
Joined
Sep 13, 2019
Messages
40
I'm testing sequential read and write with a single striped vdev at various widths, and I found that write performance scales pretty well, but read performance actually gets worse after 8 disks. I'm trying to figure out what causes this. CPU usage is minimal during reads. I'm currently using FreeNAS 11.3-U1. I ran the same test a few months ago with slightly different hardware on version 11.2 and got much better results. Is there something I can tune to improve this?

For the test, I set compression=off and recordsize=1M. I'm using the script below to write 10GB, then export and re-import the pool to clear the ARC, ensuring we are actually reading from the disks. I sync after each write so the timing reflects data actually reaching the disks.

Here are the results I'm getting, in MiB/s:
Code:
width  write  read
1      241    239
2      465    460
4      850    818
8      1429   1205
16     2571   1062
24     2801   872


I tested the controller bandwidth by running dd against all disks at once and measured about 5300 MB/s combined, for both read and write, like this:
Code:
for i in {0..23}; do dd if=/dev/zero of=/dev/da$i bs=1M count=100000 & done
for i in {0..23}; do dd of=/dev/null if=/dev/da$i bs=1M count=10000 & done
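
As an additional per-drive sanity check (just a sketch, not part of my original test; diskinfo is in base FreeBSD/FreeNAS and its -t option runs a built-in transfer-rate test), each disk can also be benchmarked on its own:
Code:
# Sequential transfer-rate test at the outer, middle, and inner zones of each disk
for i in {0..23}; do echo "=== da$i ==="; diskinfo -t /dev/da$i; done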

I'm using this script to repeat the test:
Code:
# Run with the pool name as the first parameter
zfs set primarycache=all $1
zfs set compression=off $1
zfs set recordsize=1M $1
echo "Performing write test..."
# Time the write plus a sync so the data is on disk before the clock stops
time sh -c "dd if=/dev/zero bs=1M count=10000 of=/$1/ddtest; sync"
# Export and re-import to empty the ARC so the read test hits the disks
zpool export $1
zpool import $1
echo "Performing read test..."
dd if=/$1/ddtest of=/dev/null bs=1M
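
For context, this is roughly how the sweep over widths can be driven (a hypothetical wrapper, not part of the script above; the speedtest.sh filename and pool names are just examples, and it destroys any data on the member disks):
Code:
# Build a striped pool of each width from da0..da(w-1), run the test script, destroy the pool
for w in 1 2 4 8 16 24; do
  disks=""
  for i in $(jot $w 0); do disks="$disks da$i"; done
  zpool create "test$w" $disks
  sh speedtest.sh "test$w"
  zpool destroy "test$w"
done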


Machine specs:
FreeNAS 11.3-U1
Dual Xeon 4216
96GB RAM
Disks are Seagate ST10000NM0206: 10TB each, 12Gb SAS, 7200 RPM, connected to an LSI 3008 controller.
 

Elliott

Dabbler
Joined
Sep 13, 2019
Messages
40
I should note that, for simplicity, I've been creating the pools on the command line instead of through the FreeNAS UI, simply like this:
zpool create tank24 da{0..23}
So it's using the entire disks without partitioning. I wanted to see whether partitions make any difference, so I created a 24-wide stripe through the FreeNAS UI, and the read/write speeds are identical. This is how each disk looks after creating the pool in FreeNAS:
Code:
# gpart show da0
=>         6  2441609205  da0  GPT  (9.1T)
           6         122       - free -  (488K)
         128     4194304    1  freebsd-swap  (16G)
     4194432  2437414779    2  freebsd-zfs  (9.1T)

These disks are 4Kn and I have ashift=12, so I think this alignment is okay.
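
One way to confirm the ashift actually recorded in the pool (a sketch; /data/zfs/zpool.cache is where FreeNAS keeps its pool cache file, and plain `zdb -C tank24` may also work depending on how the pool was imported):
Code:
# Print the ashift stored for each vdev in the pool configuration
zdb -U /data/zfs/zpool.cache -C tank24 | grep ashift
# FreeBSD's minimum ashift applied to newly created vdevs
sysctl vfs.zfs.min_auto_ashift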
 