dpskipper
Dabbler
- Joined
- Jan 25, 2020
- Messages
- 13
I've recently put together an all-SSD pool containing six Seagate 1200 SAS3 drives, which I'll be using as a really fast scratch disk over the network. The network hardware is a 10-gigabit switch, with SFP+ fiber to my PC and to the FreeNAS VM inside an ESXi host.
FreeNAS has 32GB of RAM and two sockets with 4 cores each, 8 cores total.
The drives are in an MD1220 disk shelf connected through a 9207-8e HBA, which is passed through to FreeNAS. I understand that I'm running SAS3 drives at SAS2 speeds, but there should still be enough bandwidth between the disk shelf and the host system for much better speeds than I'm seeing.
The problem is that I'm seeing terrible numbers over SMB.
CrystalDiskMark gives me sequential read speeds of ~800MB/s and write speeds of no more than 400MB/s. Considering each drive has a rated sequential write speed of 750MB/s, I'm a little confused why it's so low.
I ran this fio command to benchmark writes on the pool:
Code:
fio --name=seqwrite --rw=write --direct=1 --bs=256k --numjobs=8 --size=256G --runtime=600 --group_reporting
and the results are as follows:
Code:
WRITE: bw=1039MiB/s (1089MB/s), 1039MiB/s-1039MiB/s (1089MB/s-1089MB/s), io=609GiB (654GB), run=600003-600003msec
I've done no performance tweaking at all because I really have no idea what to tweak. My network is on MTU 1500. I have confirmed that I can get about 6-7Gbps from the FreeNAS ARC to my PC, so I know the network can do at least 700MB/s. My iperf tests have been run on Windows, but I only see about 4Gbps with a single stream; I've heard iperf on Windows isn't reliable, though.
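For my own sanity, here's the back-of-envelope conversion of those line rates to MB/s (just dividing Mbit/s by 8; this ignores protocol overhead, so real throughput will be a bit lower):

```shell
#!/bin/sh
# Convert link/observed rates from Mbit/s to MB/s (divide by 8).
echo "10GbE line rate:       $((10000 / 8)) MB/s"  # theoretical ceiling
echo "ARC-to-PC (~6 Gbps):   $((6000 / 8)) MB/s"   # what I measured from cache
echo "iperf, single stream:  $((4000 / 8)) MB/s"   # ~4 Gbps on Windows
```

So the 400MB/s SMB writes sit right around what a single ~3-4Gbps stream would deliver, which makes me suspect a per-stream or CPU limit rather than the pool itself.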
I guess my first step should be diagnosing why the network isn't getting at least 9Gbps over iperf? I don't have another physical PC to test with, just my Windows 10 PC with a Mellanox ConnectX-3 card. The other end is a MikroTik 10-gigabit SFP+ switch and an Intel 10-gigabit NIC going into a Dell R720 server. FreeNAS has the correct 10-gigabit vNIC.
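If the network does check out, the next thing I was planning to try is Samba tuning via auxiliary parameters. These are standard smb.conf options I've seen suggested for SMB throughput; I haven't verified whether they actually help on the Samba build that ships with FreeNAS, so treat them as things to benchmark, not a known fix:

```
# Auxiliary parameters (Services -> SMB in the FreeNAS UI).
server multi channel support = yes   ; experimental in Samba; client must support it
aio read size = 1                    ; async I/O for reads larger than 1 byte
aio write size = 1                   ; async I/O for writes larger than 1 byte
```

If anyone knows whether multichannel behaves on FreeBSD, or whether these are already defaults here, I'd appreciate the input.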