EnlightenCor (Cadet)
Joined: Jul 28, 2020
Messages: 4
First, I'll say I have been a long-time lurker on these forums and have tried to do a fair bit of investigation and problem solving on my own, but I think I've gone as far as I can and would like some insight from those who have been around and in the mix a lot longer than I have.
I'm attempting to use FreeNAS iSCSI with ESXi 6.5 (Dell R610). I am not seeing the performance I believe I could attain. At this point I'm leaning towards my drives being the bottleneck, but before spending any further cash, I would like others to take a look.
Continued after the Hardware section.
Needed Information:
FreeNAS Version: FreeNAS-11.3-U3.2
Hardware:
MOBO: SuperMicro X8SIE
Intel(R) Xeon(R) CPU X3460 @ 2.80GHz amd64
32 GB ECC memory. Memory brands match and the modules are in the correct slots per the Supermicro manual.
02:00.0 Ethernet controller: Mellanox Technologies MT26448 [ConnectX EN 10GigE, PCIe 2.0 5GT/s] (rev b0)
03:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
04:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
05:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8169 PCI Gigabit Ethernet Controller (rev 10)
LSI SAS2008 PCIe HBA, flashed to IT mode.
Drive Configuration:
2X4 Mirror
Drive Models:
5 of ATA ST2000DL003 1.82 TB (5900 RPM)
2 of ATA ST2000DM001 1.82 TB (7200 RPM)
1 of ATA ST32000542AS 1.82 TB (5900 RPM)
1 KINGSTON SUV400S37240G 240GB as a LOG device
(It was a spare SSD lying around; including it seemed to improve performance.)
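As a sanity check on whether the SLOG is actually absorbing writes, this is roughly what I watch while a VM is writing (pool name matches mine; your device names will differ):

```shell
# Per-vdev activity, refreshed every second. If sync writes are
# going through the SLOG, the SSD should show write activity
# under the "logs" section of the output.
zpool iostat -v tank 1
```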
DD_Tests:
1) DD 10GB write of Zeros
dd if=/dev/zero of=/mnt/tank/ddtest1.dat bs=1024k count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 3.951187 secs (2653825470 bytes/sec)
2) DD 10GB read of Zeros
dd of=/dev/null if=/mnt/tank/ddtest1.dat bs=1024k count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 1.702495 secs (6159055428 bytes/sec)
3) DD 20GB Write of Zeros
dd if=/dev/zero of=/mnt/tank/ddtest2.dat bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 7.574889 secs (2768557911 bytes/sec)
4) DD 20GB read of Zeros
dd of=/dev/null if=/mnt/tank/ddtest2.dat bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 3.387624 secs (6190628305 bytes/sec)
5) DD 10GB write of Random (CPU Intensive, but CPU only 14% Utilized)
dd if=/dev/random of=/mnt/tank/ddtest3.dat bs=1024k count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 100.059708 secs (104795029 bytes/sec)
6) DD 10GB Read of Random (Same CPU utilization)
dd of=/dev/null if=/mnt/tank/ddtest3.dat bs=1024k count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 3.565204 secs (2941138916 bytes/sec)
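One caveat I realize about these numbers: with lz4 compression on (the FreeNAS default), zeros compress to almost nothing, so the zero tests are mostly measuring RAM and the compression path, not the disks. A sketch of how I'd rerun the write test against an uncompressed dataset (the dataset name is just an example):

```shell
# Throwaway dataset with compression disabled so dd actually
# hits the disks instead of writing compressed zeros.
zfs create -o compression=off tank/ddtest
dd if=/dev/zero of=/mnt/tank/ddtest/zeros.dat bs=1024k count=10000
# Clean up afterwards.
zfs destroy -r tank/ddtest
```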
The ESXi 6.5 server (Dell R610) has 2 iSCSI datastores from the FreeNAS box. One is over the 10Gb link, and the other is over 2 x 1Gb iSCSI links. The 1Gb setup is through another vSwitch entirely.
When spinning up VMs, I can't seem to get beyond 30 to 40 MB/s on the disks and datastores, maybe 50 MB/s if I'm lucky. It doesn't matter whether I'm using the 10Gb datastore or the 1Gb datastore; I don't see much of a difference between them in this scenario.
I can see consistent writes across all disks, usually in the 35 to 50 MB/s range.
The CPU rarely goes beyond 10% utilization, and the load average is a consistent ~0.16.
I've tested iperf between the FreeNAS box and ESXi, and it's a sustained 9 Gb/s either way, so networking seems okay. I haven't ruled out using a better driver for the 10Gb NIC (currently the nmlx4 driver), but I'm not sure that's the issue.
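For completeness, the network test was just a stock iperf pair; adding parallel streams would also rule out a single-stream limit (the IP address below is a placeholder):

```shell
# On the FreeNAS box, start the server:
iperf -s
# From the ESXi side (placeholder IP), a 30-second run
# with 4 parallel streams:
iperf -c 192.0.2.10 -P 4 -t 30
```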
I've added more RAM (was 12GB, now 32GB) because I know iSCSI likes a lot of RAM, but when it's cranking away I still have 15GB of RAM unused, and I didn't see a huge difference in performance.
I added an iSCSI connection from a separate Windows box over a 1Gb connection, and it starts off well but then settles around 65 MB/s.
That being said, I'm pretty confident the bottleneck is in the drives I have, but before I run out and purchase more, I want to do any further testing that might identify outliers. IOMeter read tests are good (160 MB/s, 32k, 100% reads), but with any mix of writes the performance deteriorates.
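One test I still want to try: I'm not 100% sure how ESXi's iSCSI initiator handles sync writes here, but comparing performance with sync disabled would tell me whether the sync-write/SLOG path is the limiter. This is a diagnostic only; sync=disabled is unsafe for real data and should be reverted afterwards:

```shell
# Check the current sync behavior on the pool backing the datastore.
zfs get sync,logbias tank
# Diagnostic only: disable sync writes, rerun the VM/IOMeter test,
# then revert. Do NOT leave this set on a pool holding real data.
zfs set sync=disabled tank
# ... rerun the test ...
zfs set sync=standard tank
```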
I've tried different combinations of pools with and without the SSD.
4X2 Mirrors
4X2 stripe within vdev. Lots of space available, but not really reliable in the long run.
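For reference, the 4x2 mirror layout above corresponds to four 2-way mirror vdevs, created roughly like this (the device names are hypothetical; mine differ):

```shell
# Four 2-way mirror vdevs striped together: ~4 drives' worth of
# usable space, and random I/O scales with the number of vdevs.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7
```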
The workloads aren't anything special, mostly just testing things like Ansible, Terraform, and some shoddy Python applications. I like having the ability to test setups and spin up a VM for testing. It doesn't need to be lightning fast, but any improvements are always appreciated.
Thanks and I hope I've listed enough info to get everyone up to speed.