My first post (constant lurker).
Hopefully this helps someone else; my reasons for wanting this setup are explained below.
I have had multiple problems with the SAS2008 HBA under FreeNAS 8.0.x (FreeBSD 8.2) running as a VM under ESXi 5.
Being a FreeBSD noob did not help!
All credit for solving it goes to the web and Google... eventually!
The cards (LSI SAS2008-based: IBM M1015 / Dell H200, plus a SAS expander) work fine under FreeNAS 8 on a standalone PC.
I could not get them to work under FreeNAS 8.0.x on ESXi 5 (passed through), although ESXi 5 itself could use the card without any problems whatsoever.
I did not want to add a VMFS layer in between ESXi 5 and the FreeNAS ZFS pool.
OpenIndiana can also access the card fine under ESXi 5 (passed through), but I had already gone down the FreeNAS road with about 30 TB of storage (Norco 24-bay chassis with a Chenbro CK23601 6 Gb/s 36-port SAS expander).
So I figured it was the FreeBSD SAS2008 (mps) driver.
Please note that the Chenbro CK23601 card played no role in the problems I experienced; they started before I even attached it to the SAS HBA.
Running an AMD FX-8150 with 16 GB of RAM and a 4-port Intel network card (lagg on a managed switch) just for FreeNAS seemed like a bit of a waste.
My ESX server with all my other Virtual Machines was underutilized.
As I was running ESXi anyway for SABnzbd/CouchPotato/Sick Beard/a Windows domain/Asterisk/Exchange and various other SIP servers, with multiple snapshots and configs, I wanted to utilize it better.
Installing FreeNAS 8.0.x under ESXi 5 with the SAS2008 card passed through (VMDirectPath) produced the following errors during FreeNAS boot:
run_interrupt_driven_hooks: still waiting after 60 seconds for xpt_config mps_startup
run_interrupt_driven_hooks: still waiting after 120 seconds for xpt_config mps_startup
and boot never gets past that stage.
To solve that problem:
1. Shut down the VM.
2. Edit the FreeNAS VM settings and detach the SAS card.
3. Boot FreeNAS.
4. Add the following to /boot/loader.conf:
hw.pci.enable_msi="0" # Driver interrupt problem with SAS2008
hw.pci.enable_msix="0" # Driver interrupt problem with SAS2008
5. Shut down the VM.
6. Edit the FreeNAS VM settings and re-add the passthrough SAS card. (VMDirectPath requires the VM's memory reservation to equal the memory allocated to the VM; otherwise it will not boot.)
7. Boot FreeNAS.
Problem Solved!
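The loader.conf edit can be done straight from the FreeNAS console shell; a minimal sketch, assuming a stock FreeNAS 8 image where the root filesystem is mounted read-only and has to be remounted first:

```shell
# Remount root read-write (stock FreeNAS 8 keeps it read-only).
mount -uw /

# Append the two tunables that disable MSI/MSI-X for the mps driver.
cat >> /boot/loader.conf <<'EOF'
hw.pci.enable_msi="0"   # Driver interrupt problem with SAS2008
hw.pci.enable_msix="0"  # Driver interrupt problem with SAS2008
EOF

# Return root to read-only.
mount -ur /
```

On 8.0.3-p1 the same values can also be added through the GUI's loader tunables instead of editing the file by hand.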
If you use multiple NICs in a lagg with MTU 9000, you may have to add the following to loader.conf as well (I have 4 NICs; on 8.0.3-p1 this can also go in the loader tunables):
kern.ipc.nmbclusters="262144" # Network Buffers Problem with MTU9000 and Multiple NICS
Otherwise not all of my NICs can be fully utilized due to buffer constraints, and a lot of my jumbo frames get dropped.
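For a sense of scale, a rough sizing sketch: each standard mbuf cluster is 2 KiB, and kern.ipc.nmbclusters caps how many may exist, so 262144 clusters works out to 512 MiB of buffer space:

```shell
# Each standard mbuf cluster is 2 KiB (2048 bytes); nmbclusters caps
# how many may exist. 262144 * 2048 bytes = 512 MiB of buffers.
clusters=262144
bytes=$((clusters * 2048))
echo "$((bytes / 1048576)) MiB of mbuf clusters"
# prints: 512 MiB of mbuf clusters
```

On a 16 GB VM that is a comfortable ceiling; you can check actual usage and any denied allocations with `netstat -m`.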
Under ESXi 5, if you add more than one vCPU, you may encounter "interrupt storm on irqXX" messages (IRQ 18 in my case).
To solve that, boot into the FreeNAS VM's BIOS and disable the floppy drive, COM ports, parallel (printer) port, and anything else you are not going to use.
Save changes.
Reboot
Solved!
STATS:
ESXi 5
FreeNAS 8.0.3-p1
Virtual machine: FreeBSD 64-bit, 16 GB RAM, 2 vCPUs (1 socket each), E1000 VMware adapter
Passthrough PCIe card: Dell H200 flashed to LSI 9211-8i (P12) firmware (chipset: SAS2008)
For testing I used some old drives:
4 x 750 GB WD Green drives in a striped UFS config:
[root@freenas8-test] /mnt# dd if=/dev/zero of=/mnt/raid0/testfile bs=8192k count=1000
1000+0 records in
1000+0 records out
8388608000 bytes transferred in 25.843491 secs (324592679 bytes/sec)
[root@freenas8-test] /mnt# dd if=/dev/zero of=/mnt/raid0/testfile bs=8192k count=10000
10000+0 records in
10000+0 records out
83886080000 bytes transferred in 271.635798 secs (308818207 bytes/sec)
3 x 750 GB WD Green drives in a RAID-Z1 ZFS config:
[root@freenas8-test] ~# dd if=/dev/zero of=/mnt/test1-raidz1/testfile bs=8192k count=1000
1000+0 records in
1000+0 records out
8388608000 bytes transferred in 48.232895 secs (173918817 bytes/sec)
[root@freenas8-test] ~# dd if=/dev/zero of=/mnt/test1-raidz1/testfile bs=8192k count=10000
10000+0 records in
10000+0 records out
83886080000 bytes transferred in 536.209856 secs (156442630 bytes/sec)
4 x 750 GB WD Green drives in a RAID-Z1 ZFS config:
[root@freenas8-test] /mnt/test1-raidz1# dd if=/dev/zero of=/mnt/test1-raidz1/testfile bs=8192k count=1000
1000+0 records in
1000+0 records out
8388608000 bytes transferred in 29.539966 secs (283974871 bytes/sec)
[root@freenas8-test] /mnt/test1-raidz1# dd if=/dev/zero of=/mnt/test1-raidz1/testfile bs=8192k count=10000
10000+0 records in
10000+0 records out
83886080000 bytes transferred in 378.921389 secs (221381222 bytes/sec)
Samba transfer rate: 60 MB/s write and 75 MB/s read with AIO on (read size: 8192 / write size: 8192).
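For easier comparison, the dd figures above (the 8 GB runs) convert to MiB/s like so, using plain shell integer arithmetic:

```shell
# dd reports bytes/sec; divide by 1048576 (1 MiB) to compare results.
# Figures: 4-drive UFS stripe, 3-drive RAID-Z1, 4-drive RAID-Z1.
for bps in 324592679 173918817 283974871; do
    echo "$bps bytes/sec = $((bps / 1048576)) MiB/s"
done
```

So roughly 309 MiB/s for the 4-drive stripe, 165 MiB/s for 3-drive RAID-Z1, and 270 MiB/s for 4-drive RAID-Z1.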
