Greg10
Dabbler
- Joined: Dec 16, 2016
- Messages: 24
Please help me figure out why my IOmeter test configuration never gives me more than 31,000-33,000 total IOPS.
The spec sheet for the drives (4x 250 GB Samsung 850 EVO SSDs in RAID-0) says each drive is supposed to deliver "up to" 98,000 random-read IOPS, so where is my bottleneck?
I've just stood up a FreeNAS box and connected to it via iSCSI. I created four extents and mounted them on a Windows Server 2016 box. I have tried tests with 2, 3, and 4 SSDs in RAID-0, with both a SATA II (3 Gb/s) RAID card and a SATA III (6 Gb/s) card. I have tried varying the transfer size from 512 bytes to 16 KB, and I have tried 100% random reads and 100% sequential reads.
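Some quick math on those numbers first (a back-of-the-envelope sketch in Python; I'm assuming the Broadcom ports are 1 GbE, which carries roughly 117 MB/s of payload):

```python
# Sanity check: how much bandwidth do small random reads actually use?
io_size_bytes = 512        # smallest transfer size tested
observed_iops = 33_000     # best result seen so far
rated_iops = 98_000        # Samsung's "up to" spec per 850 EVO

print(f"Observed: {io_size_bytes * observed_iops / 1e6:.1f} MB/s")  # ~16.9 MB/s
print(f"Rated:    {io_size_bytes * rated_iops / 1e6:.1f} MB/s")     # ~50.2 MB/s
```

Even the rated figure would fit comfortably inside a single gigabit link, so raw bandwidth doesn't look like the limiter; per-I/O round-trip time seems like the more likely suspect.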
Here's my setup:
FreeNAS box:
AMD Athlon II X4 640 processor (4 cores @ 3.0 GHz)
MSI 870-G45 Motherboard
8GB RAM
Samsung 850 EVO OS drive
IBM M1015 flashed to IR mode
4x Samsung 850 EVO 250 GB drives in RAID-0
2-port Broadcom NIC
Windows Server box:
Dell Precision T5400
2x Xeon E5405 CPUs @ 2 GHz
20GB RAM
2-port Broadcom NIC
Each box is set up with the on-board NIC as a management interface, and the two ports on the Broadcom NIC are in separate VLANs (iSCSI1 is 192.168.50.x and iSCSI2 is in a different VLAN, 192.168.51.x).
FreeNAS is set up with both ports on the NIC in one target portal, and the Windows box is set up using MPIO across both iSCSI interfaces using round robin. The RAID card and the NIC are in the PCI-E x16 slots on the motherboard and the four SSDs are in straight passthrough mode.
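To get a feel for the per-path latency, here's a rough probe I can run from the Windows box (Python sketch; the portal IPs below are placeholders for my two iSCSI addresses, and 3260 is the standard iSCSI port):

```python
# Time TCP connects to each iSCSI target portal as a crude latency probe.
import socket
import time

PORTALS = ["192.168.50.10", "192.168.51.10"]  # placeholder portal IPs
SAMPLES = 20

for ip in PORTALS:
    samples = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        with socket.create_connection((ip, 3260), timeout=2):
            pass  # connect/teardown only; no iSCSI login
        samples.append((time.perf_counter() - start) * 1000)
    print(f"{ip}: min {min(samples):.2f} ms, avg {sum(samples) / len(samples):.2f} ms")
```

A TCP connect isn't a full iSCSI round trip, but it puts a floor under the per-I/O latency each path can deliver.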
When I run the IOmeter test, CPU on the FreeNAS box sits around 30%, wired memory is under 1 GB, and there is no swap usage. Network traffic on FreeNAS totals about 36 GB on each iSCSI interface.
On the Windows box, CPU utilization is around 20%, memory use is under 2 GB, and network traffic across both iSCSI interfaces matches the FreeNAS box at about 36 GB each.
IOmeter is set up with 8 workers (one for each core on the Dell), each with the following specification:
100% read
100% random
512 B transfer request size
32 outstanding I/Os (queue depth)
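Doing the math on those queue depths (a rough Little's Law sketch in Python, using my numbers above):

```python
# Little's Law: IOPS = I/Os in flight / average per-I/O latency.
workers = 8
queue_depth = 32
in_flight = workers * queue_depth          # 256 outstanding I/Os

observed_iops = 33_000
target_iops = 98_000

print(f"Implied latency now: {in_flight / observed_iops * 1e3:.2f} ms")          # ~7.76 ms
print(f"Latency needed for {target_iops}: {in_flight / target_iops * 1e3:.2f} ms")  # ~2.61 ms
```

So each I/O is effectively taking almost 8 ms to complete, which makes me suspect the network/iSCSI round trip rather than the SSDs themselves.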