Hey guys and girls,
I've been playing around with FreeNAS since last year. Until now I only had some small test machines. Now I use two FreeNAS boxes solely for serving iSCSI (plus one NFS share for the ESXi logs), but I'm getting strange read/write speeds.
This is the setup:
#1 FreeNAS 9.3 x64
- Intel S1200KPR miniITX
- Celeron G1610 (2.60GHz Dual-Core)
- 8GB Kingston ECC RAM
- 4x WD Red 1TB (WD10EFRX)
- Dual-NIC onboard
- RAIDZ2
#2
- Intel S1200KPR miniITX
- Celeron G1610 (2.60GHz Dual-Core)
- 16GB Kingston ECC RAM
- 4x WD Red 1TB (WD10EFRX)
- Dual-NIC onboard
- RAIDZ2
On #1 there are 7 ZVOLs, all shared via iSCSI. Compression is off and the ZVOLs were created with the default values. All targets are accessed by Windows servers using MPIO and formatted with NTFS.
On #2 I have created 5 ZVOLs: 3 for an ESXi 5.1 host running a Windows Server 2008 R2, a Windows 8.1 x64 and a Xubuntu 14.04.1 VM. The other 2 are accessed by the Windows DC for the Exchange DB and some file sharing (CIFS sharing directly from FreeNAS caused trouble with some devices, like scanners).
Everything is connected on the same subnet.
Running "dd if=/dev/zero of=/mnt/ZPOOL/testfile bs=4M count=10000" gives:
#1
41943040000 bytes transferred in 209.444176 secs (200258803 bytes/sec) => ~190 MB/s, CPU ~20%
#2 (with running VMs, but idle)
41943040000 bytes transferred in 238.388392 secs (175944137 bytes/sec) => ~167 MB/s, CPU ~20%
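For reference, the ~190 and ~167 MB/s figures are just dd's decimal bytes/sec divided by 1024² (dd reports raw bytes/sec, while most tools display MiB/s):

```shell
# Convert dd's bytes/sec output to MiB/s.
bytes_per_sec=200258803                              # pool #1
echo "$((bytes_per_sec / 1024 / 1024)) MiB/s"        # -> 190 MiB/s
bytes_per_sec=175944137                              # pool #2
echo "$((bytes_per_sec / 1024 / 1024)) MiB/s"        # -> 167 MiB/s
```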
This seems OK to me. But when I copy files onto the mounted iSCSI drives, the numbers confuse me.
Server
- Intel XEON 5130
- 24GB ECC Kingston RAM
- SAS LSI-RAID10
- Dual Intel Pro/1000 EB
- Server 2008 R2 SP1 x64
ESXi 5.1 host
- XEON E3-1225
- 32GB ECC Kingston RAM
- iSCSI with Intel Gigabit ET Dual Port Server Adapter (E1G42ET)
- Multipath, roundrobin, IOPS=1
~50-60 MB/s, CPU ~25%, network ~200 Mbit/s
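For anyone wondering how the round-robin/IOPS=1 setting was applied: this is done per device with esxcli (the naa.* device ID below is a placeholder, not my actual device):

```shell
# List NMP devices to find the iSCSI LUN's naa.* identifier.
esxcli storage nmp device list

# Make the device switch paths after every I/O (IOPS=1)
# instead of the default 1000 I/Os per path.
esxcli storage nmp psp roundrobin deviceconfig set \
    --device naa.600144f0... --type iops --iops 1
```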
Does this look right?
I'd expect reads to be much faster, and the ESXi host should be getting a lot more MB/s, shouldn't it?
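A side note on my own numbers: the dd runs above only measure writes. A read-side check would look like this, assuming the test file from above still exists (and is larger than RAM, so the ARC doesn't just serve it from cache):

```shell
# Read the earlier test file back to measure sequential read speed.
dd if=/mnt/ZPOOL/testfile of=/dev/null bs=4M
```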
Yes, I know I should use MPIO with more NICs and separate subnets, but I don't think that's the bottleneck... Please correct me if I'm wrong.
Thanks in advance. :)