PhiloEpisteme
Guru · Joined Oct 18, 2018 · Messages: 969
So I'm looking into whether a SLOG device would be worth the money to me or not. I did some tests, which I will describe below and am somewhat surprised by the results. Specifically, with my test SLOG device it seems I was not able to get close to saturating my 1G network. If folks have suggestions for better ways to run these tests or settings I can change to improve performance please, let me know.
My system
FreeNAS Release: FreeNAS-11.2-RELEASE-U2
Board: SuperMicro X11SSM-F
Processor: Intel Core i3-7100 @ 3.90GHz, 2 Cores
Memory: 32 GB of 2x16GB
HBA: LSI/Broadcom SAS9207-8i, 6Gbps SAS PCI-E 3.0 HBA, Firmware 20.00.07.00
Storage Pool 1: 1 vdev
vdev-0 RAIDZ2:
2x 7200RPM 3TB Seagate Constellation ST3000NM0033
4x 5900RPM 3TB Seagate IronWolf NAS ST3000VN007
Storage Pool 2 (encrypted): 1 vdev
vdev-0 RAIDZ2:
3x 7200RPM 2TB mixed desktop drives
3x 5900RPM 2TB Seagate IronWolf NAS ST2000VN007
Test SLOG devices: 2x 240GB Samsung 970 EVOs plugged into a Supermicro AOC-SLG3-2M2 PCIe3.0x8 -> 2x M.2 @ PCIe3.0x4. No power-loss protection (PLP), so obviously poor long-term SLOG devices, but I had them lying around to test with.
The Tests
I did not write or use a script to process the data automatically. For network speed I took samples and averaged them; for disk I/O and CPU idle % I watched the values and recorded the stable range. All tests were done with both machines connected via a router over a wired 1GbE connection.
Network performance*: `netstat -w 1 -I igb0`. 2-3 sets of 20-30 values were taken, averaged, and converted to Mb/s.
Disk performance: `zpool iostat -v <pool> 1`
CPU performance**: `top -P`
Large file transfer: Using an NFS share I transferred a directory to the pool containing 10 1.024GB files created with `for n in {1..10}; do dd if=/dev/urandom of=${n}.txt bs=64000000 count=16; done`. Time to complete was recorded by simple timing with a digital watch.
Small file transfer: Using an NFS share I transferred a directory to the pool containing 10000 1.024MB files created with `for n in {1..10000}; do dd if=/dev/urandom of=${n}.txt bs=1024 count=1000; done`. Time to complete was recorded by simple timing with a digital watch.
* Network speed for the first benchmark was taken directly from `iperf`.
** I only performed CPU benchmarks where listed. Given the high idle percentage and the similar network speeds of both pools with the SLOG, I don't think the CPU is the bottleneck.
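The manual sampling and averaging of `netstat` output can also be scripted. A minimal sketch, assuming FreeBSD's `netstat -w` column layout (input bytes in the fourth field) and the igb0 interface name from above:

```shell
# Take 30 one-second samples (-q 30), skip the two header lines,
# then average the input-bytes column and convert to Mb/s.
netstat -w 1 -q 30 -I igb0 | awk '
    NR > 2 { sum += $4; n++ }
    END    { if (n) printf "%.1f Mb/s\n", sum / n * 8 / 1e6 }'
```

The same one-liner works for output bytes by switching the field number to match `netstat`'s output columns.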
Initial Benchmarks
Code:
[server] $ iperf3 -s
[client] $ iperf3 -c <ip> -F <1.024G file> -f k
Result: 935Mb/s
To get a best-case SLOG baseline, I added a 6GB RAM disk to the pool as a log device:
Code:
$ mdconfig -a -t swap -s 6g -u 1
$ zpool add <pool> log md1
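After a RAM-disk run, the log vdev should be detached and the RAM disk destroyed so a volatile device doesn't linger as the pool's SLOG. A sketch of the teardown, assuming the same md1 unit as above:

```shell
# Detach the log vdev from the pool (log devices can be removed live),
# then destroy the backing RAM disk to free the memory.
zpool remove <pool> md1
mdconfig -d -u 1
```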
Large file results
Network: 840Mb/s
Disk: pool 170-240MB/s; vdev 70-130MB/s; SLOG 100-110MB/s
CPU: 80-85% idle
Time: 1:41
Small file results
Network: 670Mb/s
Disk: pool 130-210MB/s avg 150MB/s; vdev 65-120MB/s avg 70MB/s; SLOG 75-85MB/s
CPU: 80-85% idle
Time: 2:06
These values looked pretty reasonable to me. For large files I was nearly able to saturate my 1Gb/s network. Considering protocol overhead, the router, etc., I would guess that with the RAM disk the network was the bottleneck.
Actual Tests
Without SLOG
Encrypted Pool Large file results
Network: 323Mb/s
Disk: pool 75-110MB/s
CPU: -
Time: 4:48
Encrypted Pool Small file results
Network: 123Mb/s
Disk: pool 20-50MB/s
CPU: -
Time: 11:23
Regular Pool Large file results
Network: 335Mb/s
Disk: pool 50-100MB/s avg 75MB/s
CPU: -
Time: 4:26
Regular Pool Small file results
Network: 135Mb/s
Disk: pool 20-60MB/s
CPU: -
Time: 11:16
With SLOG
Encrypted Pool Large file results
Network: 460Mb/s
Disk: pool 90-120MB/s; vdev 40-65MB/s; SLOG 50-55MB/s
CPU: -
Time: 3:10
Encrypted Pool Small file results
Network: 490Mb/s
Disk: pool 120MB/s; vdev 60-65MB/s; SLOG 50-60MB/s
CPU: -
Time: 2:57
Regular Pool Large file results
Network: 466Mb/s
Disk: pool 80-120MB/s avg 115MB/s; vdev 30-65MB/s avg 60MB/s; SLOG 50-60MB/s
CPU: 70-90% idle
Time: 3:06
Regular Pool Small file results
Network: 445Mb/s
Disk: pool 80-120MB/s avg 116MB/s; vdev 40-65MB/s avg 60MB/s; SLOG 50-55MB/s
CPU: 85-90% idle
Time: 3:14
Because the RAM disk performed so well, I assume the bottleneck isn't my pool, even though it is RAIDZ2, which I know does not perform as well as multiple striped mirror vdevs. It appears that with the RAM disk I am bottlenecked by my network, but with the Samsung 970 EVOs I am bottlenecked by the SLOG devices themselves, despite their being rated for much higher write speeds. I have not yet overprovisioned the devices or changed any other settings, and would love advice on better ways to test my system and specific settings/tweaks I should make.

Edit: If I decide to buy dedicated SLOG devices, I've looked around and figure I should get either
2x 120GB Samsung SM863 SATA SSDs, or
2x 100GB Intel Optane SSD DC P4801X M.2 PCIe x4 SSDs
I would happily consider other devices, keeping in mind that I have two pools to add SLOG devices to, plenty of available SATA ports (hence the SM863s), two M.2 slots thanks to that adapter, and no remaining PCIe slots, since I have plans for my final x4 slot.
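For reference, the overprovisioning mentioned above can be done by giving ZFS only a small partition rather than the whole NVMe device. A sketch, assuming one of the 970 EVOs shows up as nvd0 and that 16GB is far more SLOG than a 1GbE link can fill between transaction groups (the device node, label, and size here are assumptions, not tested values):

```shell
# WARNING: destroys all data on nvd0.
# Hand ZFS only a 16GB slice; the untouched remainder gives the
# controller spare area for wear leveling and garbage collection.
gpart destroy -F nvd0
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -s 16G -l slog0 nvd0
zpool add <pool> log nvd0p1
```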