Intel D99083 10GB XF Series 10GbE Single Port PCI-e Server Adapter Dell RN219

Status
Not open for further replies.

MartynW

Dabbler
Joined
Feb 23, 2014
Messages
39
Hi,

My plan is to start really ramping up the number of VMs I'm running on a Windows 8.1 PC for work/study. I'm very conscious that my 1GbE NICs are a bottleneck.

I'm contemplating buying 2 x Intel D99083 10GB XF Series 10GbE Single Port PCI-e Server Adapters (Dell RN219) for my lab, but have a couple of questions before I commit the cash:

1) Will they actually work in a crossover (no switch) configuration?
2) What would the expected increase in performance be? CrystalDiskMark (1 GB file) currently gives the following results (MB/s) from the Win 8.1 PC to an iSCSI drive on the FreeNAS server:

SEQ: Read 108.3, Write 78.42
512K: Read 98.94, Write 75.1
4K: Read 9.296, Write 6.523
4K QD32: Read 86.66, Write 73.33
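
Rough numbers on why the GbE link is the ceiling here:

1 Gbit/s / 8 = 125 MB/s raw
after TCP/IP and iSCSI overhead, roughly 110-118 MB/s usable
so a 108.3 MB/s sequential read is effectively wire speed for 1GbE
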
My config is:
FreeNAS 9.2.1.3
  • Supermicro MBD-X10SL7-F-O motherboard
  • Intel Core i3-4340
  • 32 GB ECC Hynix memory
  • 6 x Seagate NAS HDD 4TB SATA in RAIDZ2
Windows 8.1
  • Shuttle SH87R6 barebone XPC with Intel H87 motherboard and 1 x 1GbE NIC
  • 16 GB 1600 MHz RAM (might up this to 32 GB)
  • Intel Core i7-4770S
Thanks in advance
 

MartynW

Dabbler
Joined
Feb 23, 2014
Messages
39
To follow up on my own question.
1) Still don't know if this will work, anyone?
2) I'm thinking the performance improvement won't be as good as I'd hoped. The card is PCIe 1.1, so according to this web site:

http://www.tested.com/tech/457440-theoretical-vs-actual-bandwidth-pci-express-and-thunderbolt/

A single PCIe 1.0 (or 1.1) lane can carry up to 2.5 Gigatransfers per second (GT/s) in each direction simultaneously
...
After overhead, the maximum per-lane data rate of PCIe 1.0 is eighty percent of 2.5GT/s. That gives us two gigabits per second, or 250MB/s

Thoughts?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
1. It should work, but that may depend on the controller. My 10Gb cards are direct-linked since I'm not about to drop $1000+ on 10Gb switches.
2. If you look at the card, it's an x8 PCIe card... http://www.ebay.com/itm/Intel-D9908...I-e-Server-Adapter-DELL-RN219-Z-/111268860766

x8 means 250MB/s x 8, which gives you enough to saturate both uplink and downlink simultaneously with plenty of headroom to spare. Not sure why you think it's going to be a disappointment.
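
To put numbers on that headroom, using the per-lane figure from the article above:

PCIe 1.1 per lane: 2.5 GT/s x 80% (8b/10b encoding) = 2 Gbit/s = 250 MB/s
x8 slot: 8 x 250 MB/s = 2 GB/s = 16 Gbit/s in each direction
10GbE line rate: 10 Gbit/s = 1.25 GB/s, leaving roughly 60% headroom each way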

Your disappointment is probably going to be the fact that your pool can't keep up with 10Gb. ;)
 

MartynW

Dabbler
Joined
Feb 23, 2014
Messages
39
OK, all installed, and it's rocking (CrystalDiskMark, MB/s):

SEQ: Read 260.5, Write 247.8
512K: Read 249.1, Write 232.0
4K: Read 9.572, Write 15.31
4K QD32: Read 177.1, Write 138.7

The cards were $75 each from eBay, plus $30 in shipping to Australia. Then an LC-LC fiber cable between the two; no crossover needed.
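
For anyone replicating this, a rough sketch of the point-to-point addressing I used (the interface name ix0 on the FreeNAS side and the adapter name "10GbE" on the Windows side are assumptions for illustration; FreeNAS normally wants this set through the GUI so it persists across reboots):

# FreeNAS/FreeBSD end - Intel 10GbE cards use the ixgbe driver, so ix0 is assumed
ifconfig ix0 inet 192.168.2.200 netmask 255.255.255.0 up

# Windows 8.1 end - run from an elevated prompt; adapter name is assumed
netsh interface ip set address "10GbE" static 192.168.2.1 255.255.255.0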
 

MartynW

Dabbler
Joined
Feb 23, 2014
Messages
39
Interestingly, I'm only getting 2.48 Gbits/sec on iperf:

iperf -c 192.168.2.200
------------------------------------------------------------
Client connecting to 192.168.2.200, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.2.1 port 50162 connected with 192.168.2.200 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 2.88 GBytes 2.48 Gbits/sec

Not sure if that is a coincidence. Is the bottleneck still my network, or is that the max speed of my pool? I'll continue to tinker.
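
One way to separate the network from the pool is to push iperf harder with a bigger TCP window and parallel streams; iperf runs memory-to-memory, so the pool isn't involved at all. Something like:

iperf -c 192.168.2.200 -w 256k -P 4 -t 30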
 

MartynW

Dabbler
Joined
Feb 23, 2014
Messages
39
OK, after some tweaking of the MTU (jumbo frames), we're doing much better.

Based on this article, I set the MTU to 9000 on both ends. No idea if that is ideal or not. http://dak1n1.com/blog/7-performance-tuning-intel-10gbe

http://windowsitpro.com/windows/q-how-do-i-enable-jumbo-frames
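
For reference, roughly what the MTU change looks like on each end (interface and adapter names are assumptions; on FreeNAS the mtu option should go into the interface settings in the GUI so it survives a reboot, and on Windows the NIC driver's "Jumbo Packet" advanced property has to be raised as well, typically to 9014 on Intel drivers):

# FreeNAS/FreeBSD end (ix0 assumed)
ifconfig ix0 mtu 9000

# Windows 8.1 end (adapter name assumed); set Jumbo Packet in the driver's Advanced tab too
netsh interface ipv4 set subinterface "10GbE" mtu=9000 store=persistent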

iperf -c 192.168.2.200 -w256k -t60
------------------------------------------------------------
Client connecting to 192.168.2.200, TCP port 5001
TCP window size: 256 KByte
------------------------------------------------------------
[ 3] local 192.168.2.1 port 49446 connected with 192.168.2.200 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-60.0 sec 58.4 GBytes 8.35 Gbits/sec

CrystalDiskMark is much better too (MB/s):

SEQ: Read 665.0, Write 285.9
512K: Read 478.0, Write 258.2
4K: Read 9.926, Write 16.19
4K QD32: Read 223.3, Write 150.4
 

Raptor_007

Cadet
Joined
Oct 29, 2016
Messages
2
Hey there, I am thinking of doing the same thing, using these same cards. Sorry to dig up an old topic, but I'm curious if you're still using this fiber setup and if you had any additional thoughts regarding your experience with it.
Thanks!
 

MartynW

Dabbler
Joined
Feb 23, 2014
Messages
39
Yes, it all works a treat; I'm mostly using iSCSI disks for VMs. I didn't quite get the speed I wanted, as the pool became the bottleneck. I added an L2ARC and a dedicated ZIL (SLOG) device too, and that helped.
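
For anyone following along, adding those from the command line looks roughly like this (the pool name "tank" and the SSD device names are hypothetical; in FreeNAS you would normally do this through the Volume Manager instead):

# add an SSD as an L2ARC (read cache) device
zpool add tank cache ada6
# add an SSD as a dedicated log device (SLOG) for the ZIL
zpool add tank log ada7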
 

Raptor_007

Cadet
Joined
Oct 29, 2016
Messages
2

Awesome, thanks for the quick response! Glad to hear it's worked well for you. I think I'll jump in and buy these.
 