RKMStudios (Dabbler) · Joined: Feb 28, 2019 · Messages: 14
Hello,
This is my first post in this community, so I apologize if I don't give all the information that is needed. I am eager to learn, but very new to a lot of this.
I have a 2U all-flash system purchased from iXsystems, and I am having some difficulty maximizing throughput to my clients. I've tried all the share types; iSCSI came the closest to saturating the 10GbE connection on both read and write (~900 MB/s write, ~1150 MB/s read), but iSCSI doesn't quite work for our environment. We really need shared storage with simultaneous read/write access from each client. So, NAS.
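For context, those iSCSI numbers are already close to the practical ceiling of a 10GbE link. A quick back-of-envelope conversion (plain shell arithmetic; the 9.41 Gbit/s figure is the iperf result from the benchmarks further down):

```shell
# Convert 10GbE line rate to MB/s using integer math (value scaled by 100).
gbits_x100=941                              # 9.41 Gbit/s, as measured by iperf
mb_per_s=$((gbits_x100 * 1000 / 8 / 100))   # Gbit/s -> MB/s is divide by 8
echo "usable ceiling: ~${mb_per_s} MB/s"    # ~1176 MB/s before protocol overhead
```

So ~1150 MB/s iSCSI reads are essentially line rate; the gap to chase is on the file-sharing protocols.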
I have a lot of questions, but I am going to keep this first post focused on improving speeds on an AFP share. I am not using SMB or NFS because their throughput was far worse; AFP gives me the best performance.
I've used four benchmarks plus a control to arrive at the conclusion that I should expect more.
iXsystems Benchmarks:
iPerf: 9.41 Gbits/sec (client to server)
iPerf: 9.41 Gbits/sec (server to client)
Blackmagic Disk Speed Test: 706 MB/s write, 270 MB/s read
AJA Disk Test: ~730 MB/s write, ~1000 MB/s read
Internal DD:
[root@Phoenix /mnt/Avalon_Pool]# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 42.921751 secs (2501626342 bytes/sec)
[root@Phoenix /mnt/Avalon_Pool]# dd if=tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 44.755781 secs (2399113164 bytes/sec)
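One caveat on these dd numbers (and the even stranger ones from the homebrew box below): /dev/zero is perfectly compressible, so if the pool has ZFS compression enabled (lz4 is the default on recent FreeNAS), dd is partly measuring CPU compression speed and ARC caching rather than the disks. A small local demo of the effect, no ZFS required (the /tmp paths are just for illustration):

```shell
# Write 16 MB of zeros and 16 MB of random data, then compress both.
dd if=/dev/zero    of=/tmp/zeros.dat bs=2048k count=8 2>/dev/null
dd if=/dev/urandom of=/tmp/rand.dat  bs=2048k count=8 2>/dev/null
gzip -kf /tmp/zeros.dat /tmp/rand.dat

# Zeros shrink to almost nothing; random data barely compresses at all.
# A pool with lz4 behaves the same way, so writing zeros mostly tests RAM/CPU.
ls -l /tmp/zeros.dat.gz /tmp/rand.dat.gz
```

To benchmark the disks themselves, either set `compression=off` on a scratch dataset or feed dd an incompressible source file instead of /dev/zero.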
So, as a control of sorts, I have a second home-built FreeNAS box that I put together just to learn the ropes. Here are the same benchmarks using the same client on the same network:
Homebrew FreeNAS
iPerf: 9.40 Gbits/sec (client to server)
iPerf: 9.40 Gbits/sec (server to client)
Blackmagic Disk Speed Test: 800 MB/s write, 1034 MB/s read
AJA Disk Test: ~818 MB/s write, ~1049 MB/s read
Internal DD:
[root@Resolve /mnt/Resolve_Database]# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 17.173343 secs (6252375265 bytes/sec)
[root@Resolve /mnt/Resolve_Database]#
[root@Resolve /mnt/Resolve_Database]# dd if=tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 10.285223 secs (10439655227 bytes/sec)
Now for the system specs:
iXsystems:
Intel six-core Xeon 3104 (1.7 GHz)
32 GB DDR4-2666
Pool: 9× 1 TB Samsung QVO SSDs, striped
Intel X540-T2
Homebrew FreeNAS:
Intel Core i7 quad-core (4.5 GHz)
32 GB Crucial 1600 MHz
GTX Titan X (Lol, it was actually all I had lying around)
Pool: one 120 GB USB SSD (gets about 200-300 MB/s on its own; not fast)
Chelsio Dual SFP+ Card
Client:
macOS Mojave
Intel Core i7 quad-core (4.5 GHz)
32 GB Crucial 1600 MHz
Chelsio Dual SFP+ Card
Closing thoughts: By all accounts, it confuses me that the internal read/write tests were so far off base. Shouldn't the system with nine striped SSDs be blazing fast? There are nine of them!
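To put a number on that intuition, here's a naive best-case estimate (plain shell arithmetic; the 550 MB/s per-drive figure is an assumed SATA spec-sheet number, which QLC drives like the QVO only sustain while their SLC write cache holds out):

```shell
per_drive=550     # MB/s, assumed spec-sheet sequential speed per SATA SSD
drives=9
theoretical=$((per_drive * drives))
observed=2400     # MB/s, roughly what the internal dd read test showed
echo "naive ceiling: ${theoretical} MB/s"   # 4950 MB/s
echo "observed:      ${observed} MB/s ($((observed * 100 / theoretical))% of ceiling)"
```

Landing at roughly half of the naive ceiling is suspicious but not impossible; sustained QLC throughput, compression, and CPU limits can each eat into it.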
Questions for the community:
- How is the internal performance of the nine-way stripe only hitting about 2.4 GB/s? Shouldn't it be closer to 9 × 550 MB/s?
- How can a much more meager (homebrew) system toast the iXsystems build's performance?
- How can I improve tuning or settings on the iXsystems machine to improve AFP performance? (I'm already using jumbo frames across the board.)
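On the jumbo frames point, it's worth confirming that the 9000-byte MTU actually survives the whole path, switch included. The maximum ping payload follows directly from the MTU (the IP address below is a placeholder):

```shell
mtu=9000
payload=$((mtu - 20 - 8))   # subtract IPv4 header (20 bytes) and ICMP header (8 bytes)
echo "max ping payload for MTU ${mtu}: ${payload} bytes"   # 8972

# With fragmentation disallowed (macOS/FreeBSD syntax; Linux uses 'ping -M do'):
#   ping -D -s ${payload} 192.168.1.10
```

If that ping fails while a default-size ping succeeds, something in the path is silently dropping jumbo frames.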
Caveats:
- I know everyone is going to point out that the Blackmagic and AJA benchmarks are circumstantial, which is why I put together the second test: same client, same benchmarks, different NAS.
- Beyond these tests, I can anecdotally say that all share types perform better on the homebrew machine than on the iXsystems build.
Thank you all for your time and thoughts, I really appreciate it!