Sure, my mistake for not providing that information up front.
1: CPU: Xeon(R) E3-1230 v3 @ 3.30GHz; RAM: 16GB ECC
2: FreeNAS-9.3-STABLE-201511280648
3: file extents
4: QLogic QLE 2460
5: Link Speed is 4G (color is orange)
6: Windows is a VM on a Proxmox host, which uses the same HBA as FreeNAS
I am setting up Debian 8.2 as a VM to check whether it shows the same performance.
EDIT:
With Debian 8.2 and virtio drivers, the result is the same.
I just set up my first FC-FreeNAS test setup and had fantastic benchmark results.
Here is my Test Setup:
FreeNAS Server:
- Dell T105
- 8 GB ECC DDR2
- 1.6 GHz dual-core AMD Athlon II u250 (so not that powerful)
- QLE2462 using the latest QLogic BIOS, version 3.29 (available on QLogic's website)
- 1m LC-LC Duplex 50/125 Multimode 10Gb Fiber Patch Cable
- Latest FreeNAS build
- 3 x 1TB 7200 RPM SATA drives using onboard ATA Controller
- Drives configured into RAID-Z1 with default settings (no GELI encryption - CPU doesn't support hardware AES)
- Test setup with 4 x 100 GB ZVols, created with a mixture of sparse on/off and lz4 on/off (see the example commands after this list)
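For anyone who wants the equivalent from the command line (on FreeNAS you would normally do all of this through the GUI), the commands below are only a rough sketch; the pool, disk, and ZVol names are placeholders, not the ones I actually used:

zpool create tank raidz1 ada1 ada2 ada3                         # RAID-Z1 pool from three disks
zfs create -s -V 100G -o compression=lz4 tank/zvol-sparse-lz4   # sparse ZVol with lz4 on
zfs create -V 100G -o compression=off tank/zvol-nocomp          # non-sparse ZVol, compression off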
Initiator Server:
- Dell R210
- i3 Sandy Bridge CPU
- 32 GB Ram
- Windows 7 SP1
- QLE2460 using the latest QLogic BIOS, version 3.29 (available on QLogic's website)
- Latest Windows 2008/2012 driver from QLogic's website
- 4 x 100 GB disks formatted NTFS with GPT partition format
- ATTO Disk Benchmark
ATTO Results:
- See screenshots below: Run 1 on the LZ4-compressed ZVol, Run 2 with no ZFS compression.
- Beyond a 32 KB block size, results were 350-410 MByte/second read/write speeds (according to the benchmark) across all ZVol types.
- The non-compressed ZVol (lz4 off) was slightly faster at the smaller block sizes.
- Sparse made no difference in the benchmarks.
So as you can see, depending on the workload and configuration (how large your read/write IO block size is), the throughput scales up to the link speed of 4 Gbit Fibre Channel - roughly 400 MByte/second of usable bandwidth, which matches the 350-410 MByte/second seen above. Most of the time people won't be in the optimal situation (desktops / VMware typically use 8 KB to 128 KB block sizes, depending), but latency, throughput, etc. should still be better over FC than over Ethernet.
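To put rough numbers on that (a back-of-the-envelope sketch only, assuming ~400 MByte/second of usable 4G FC bandwidth; the loop below is just illustrative shell, not something from my test):

for bs in 8 32 128 1024; do
    echo "${bs} KB blocks: $((400 * 1024 / bs)) IOPS to fill ~400 MByte/s"
done

At 8 KB you would need on the order of 50,000 IOPS to fill the link, while at 128 KB only about 3,200 - which lines up with the benchmark only reaching link speed beyond 32 KB block sizes.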
I think next I'll throw ESXi 6 on this R210 and run a few Windows VMs off of "SAN" datastores to see what the performance looks like (both RDM and through VMFS 5). Stay tuned.