woyteck
Dabbler
Joined: Jul 10, 2018
Messages: 13
I have a problem with my L2ARC.
It seems to have reads capped at ~175MB/s.
Why am I bothered? Well, it's a 2TB Intel SSD, a P3520, PCIe-based NVMe drive.
When testing reads directly from /dev/nvd0, I easily and consistently get around 600MB/s:
# dd if=/dev/nvd0 of=/dev/null bs=128k count=100000
100000+0 records in
100000+0 records out
13107200000 bytes transferred in 21.553133 secs (608134336 bytes/sec)
However, I noticed that with a smaller block size, reads from this SSD are slower:
# dd if=/dev/nvd0 of=/dev/null bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes transferred in 0.169981 secs (240968499 bytes/sec)
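The 4k run above is also very short (0.17s), so it's mostly noise. To pin down the block-size dependence properly, I'd sweep block sizes while reading the same total amount each pass. A rough sketch (run as root against the same device):

# Read ~4GB per pass at each block size, so the runs are comparable
for bs in 4096 16384 65536 131072 1048576; do
    echo "bs=${bs}:"
    dd if=/dev/nvd0 of=/dev/null bs=${bs} count=$((4294967296 / bs)) 2>&1 | grep bytes
done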
Is there anything I can adjust in the configuration so this drive delivers better throughput?
The problem is that when data is read from ARC, I get line speed on the 10Gbps interface (about 1.1GB/s via NFS).
When data is read from L2ARC, I get 250-300MB/s, and that much only because part of the reads are also served from the disks.
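As an aside, to see how much of a run is actually served from the cache device, the L2ARC counters can be compared before and after a test. On FreeBSD they are exposed as sysctls under kstat.zfs.misc.arcstats (counter names from memory, so double-check on your build):

# Snapshot before and after a dd run, then diff the numbers
sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses kstat.zfs.misc.arcstats.l2_read_bytes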
I've done some tests on a lightly used server; I ran everything 3 times and I'm giving the middle result of each.
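For each configuration below, the cache properties were flipped on the dataset like this (dataset name is just a placeholder, and I set one property per command to be safe):

zfs set primarycache=none tank/test
zfs set secondarycache=all tank/test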
When data is in ARC and the filesystem is set to primarycache=all and secondarycache=all:
# echo 3 > /proc/sys/vm/drop_caches;dd if=/test/randomfile.bin of=/dev/null bs=1M
9129+1 records in
9129+1 records out
9573093376 bytes (9.6 GB, 8.9 GiB) copied, 10.6273 s, 901 MB/s
When the filesystem is set to primarycache=none and secondarycache=none:
# echo 3 > /proc/sys/vm/drop_caches;dd if=/test/randomfile.bin of=/dev/null bs=1M
9129+1 records in
9129+1 records out
9573093376 bytes (9.6 GB, 8.9 GiB) copied, 44.2833 s, 216 MB/s
When the filesystem is set to primarycache=metadata and secondarycache=none, and metadata is in ARC:
# echo 3 > /proc/sys/vm/drop_caches;dd if=/test/randomfile.bin of=/dev/null bs=1M
9129+1 records in
9129+1 records out
9573093376 bytes (9.6 GB, 8.9 GiB) copied, 25.4434 s, 376 MB/s
When the filesystem is set to primarycache=none and secondarycache=metadata, and metadata is in L2ARC:
# echo 3 > /proc/sys/vm/drop_caches;dd if=/test/randomfile.bin of=/dev/null bs=1M
9129+1 records in
9129+1 records out
9573093376 bytes (9.6 GB, 8.9 GiB) copied, 53.1836 s, 180 MB/s
When the filesystem is set to primarycache=none and secondarycache=all, and metadata and data are in L2ARC:
# echo 3 > /proc/sys/vm/drop_caches;dd if=/test/randomfile.bin of=/dev/null bs=1M
9129+1 records in
9129+1 records out
9573093376 bytes (9.6 GB, 8.9 GiB) copied, 53.8619 s, 178 MB/s
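Summing those up: with no caching at all I get 216MB/s straight off the raidz, but with everything in L2ARC I get 178MB/s, so reading from the NVMe cache is actually slower than not caching at all.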
Also, the maximum number of read operations I've ever seen on this cache device is 1.52K.
I've seen 3K on the spinning disks, so I'm wondering why:
                                               capacity     operations    bandwidth
                                             alloc   free   read  write   read  write
cache                                            -      -      -      -      -      -
  gptid/a3ee0c74-46e1-11e8-86ce-ac1f6b0c24d0  141G  1.68T  1.52K      0   195M      0
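Doing the math on that line: 195M/s divided by 1.52K reads/s comes out to roughly 128KiB per read, i.e. one 128K record per I/O, so this looks like a ~1.5K IOPS ceiling on the cache device rather than a bandwidth limit.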
I found this as well:
vfs.zfs.arc_average_blocksize: 8192
Would changing it affect L2ARC access speed?
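If anyone wants to experiment with it: on FreeBSD this is a boot-time loader tunable, so changing it would look like the sketch below (the value is just an example, and as far as I can tell this knob only sizes the ARC hash table):

# /boot/loader.conf -- read at boot, so a reboot is needed to apply
vfs.zfs.arc_average_blocksize=131072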
Hardware:
Supermicro 36-bay server
Dual Xeon Scalable, 8-core/2.1GHz
128GB RAM
12x 10TB Hitachi Disks in RAIDZ2
LSI 9300 HBA 12Gbps
200GB Hitachi SAS as SLOG
2TB Intel P3520 as cache (the one we are talking about).