LSI SAS2008 IT vs IR Firmware


markw78

Dabbler
Joined
Oct 17, 2012
Messages
22
I just built a FreeNAS server and had good results: about 110MB/sec to the pool according to zpool iostat, with 8 Windows servers running a 75% write, 75% random, 4K IO pattern at about 20-30ms latency.

I shut down FreeNAS, flashed the controller to the IT firmware, and started the VMs back up...

Much to my surprise, the same test now yields only about 50-70MB/sec, and the VMs running the exact same test as above now see 40ms latency.

I'm going to flash back to IR... unless I need to go so far as to rebuild the zpool now that the firmware has changed, or something...
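
For anyone repeating this, the reflash itself is done with LSI's sas2flash utility. A minimal sketch of the sequence is below; the image names (2118it.bin, mptsas2.rom) are examples from a typical SAS2008 firmware package, so substitute the files for your card, and note that the erase step can leave the card unusable if interrupted:

Code:
# list installed LSI controllers and their current firmware/BIOS versions
sas2flash -listall

# erase the existing flash, then write the IT firmware and boot ROM
# (image names are examples; use the files from your card's firmware package)
sas2flash -o -e 6
sas2flash -o -f 2118it.bin -b mptsas2.rom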
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Keep in mind what the read/write cache settings are on the controller itself (if there are any). On my system I found that disabling the write cache and setting the read cache to the smallest read-ahead amount (for me the setting was "conservative") provides a fourfold increase in speed versus any other settings.
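
If the controller BIOS exposes nothing, you can at least check the disks' own write-cache state (separate from any controller cache) from the FreeNAS shell with camcontrol. A rough sketch, with example device names:

Code:
# SATA disk: look for the write-cache line in the ATA identify output
camcontrol identify ada0 | grep -i "write cache"

# SAS disk: the WCE bit in caching mode page 0x08 shows the write-cache setting
camcontrol modepage da0 -m 0x08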
 

markw78

Dabbler
Joined
Oct 17, 2012
Messages
22
As near as I could tell, there were no adjustable cache settings on the controller at all... nothing in the config menus mentioned write cache, etc.

FreeNAS Server:
Older MN2-LR board + processor (not sure what it is)
PCIe LSI SAS2008 controller, connecting one 120GB log SSD, one 1.5TB disk, and one 128GB cache SSD
6 onboard SATA II ports, connecting another five 1.5TB disks and the other 120GB log SSD
8GB RAM - the maximum supported on this board (4x2GB)
Single onboard NIC dedicated to storage traffic
Single onboard NIC dedicated to management

The test uses IOMeter and VMware ESX, with 2 ESX hosts, each with its own LUN and set of VMs:
i.e. VMSERVER1 = 4 VMs on LUN1 and VMSERVER2 = 4 VMs on LUN2.

Each VM has 2GB of RAM. IOMeter is configured as follows:
Test file size: 7.5GB
Disk queue depth: 3
Write: 100%
Random: 100%
IO transfer size: 4K

Each test ran for 30 minutes; the first run was thrown out to exclude caching and data-building effects.
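
As a sanity check on the numbers below: at a 4K transfer size the throughput column should just be IOPS x 4096 bytes, and it matches if IOMeter's "MBps" is read as MiB/s. For example, the first IR row:

Code:
# 518.116029 IOPS * 4096 bytes / 2^20 bytes/MiB ~= 2.0239, matching the MBps column
echo "518.116029 * 4096 / 1048576" | bc -l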

Surely some formatting will be lost below...

Code:
IOps        MBps      Avg. Write Response Time (ms)   Bytes Written   Write I/Os

IR Firmware - NFS
518.116029  2.023891  46.317555                       3810344960      930260
537.77005   2.100664  44.590117                       3955040256      965586
634.490623  2.478479  37.789613                       4666241024      1139219
632.681712  2.471413  37.932419                       4652875776      1135956

IT Firmware - NFS
674.84783   2.636124  35.561073                       4963000320      1211670
629.366709  2.458464  38.108609                       4628602880      1130030
661.849424  2.585349  36.258458                       4867469312      1188347
658.876941  2.573738  36.413746                       4845559808      1182998

IT Firmware - iSCSI
683.656827  2.670534  35.103562                       5027811328      1227493
697.390624  2.724182  34.402822                       5128740864      1252134
685.238942  2.676715  35.022641                       5039288320      1230295
660.814984  2.581309  36.317034                       4859723776      1186456


Result:

IT firmware is slightly faster than IR firmware, and iSCSI is slightly faster than NFS, though the differences are small.

One thing I noticed: when using NFS, zpool iostat shows up to 80-90MB/sec of writes, while with iSCSI it was only around 20-25MB/sec, even though the VMs themselves were getting about the same IO - kind of unexpected.
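
For anyone wanting to watch the same thing, the per-vdev view of zpool iostat makes the NFS-vs-iSCSI difference easy to see live (the pool name "tank" is just an example):

Code:
# report pool-wide and per-vdev throughput every 5 seconds
zpool iostat -v tank 5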
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Flashing to IT or IR mode gets rid of lots of crap. In IR mode, the most readily apparent difference (versus IT) is that a new line item appears in the BIOS setup utility for setting up basic RAID functionality. I just tossed an IR controller in a FreeNAS box to confirm: the drives show up under the mps driver, and a RAID 1 volume shows up as "LSI Logical Volume 3000". Oh, and yikes: about a 3.5MB/sec write rate to that volume (the individual disks do around 60MB/sec). Yow, even the RAID 1 read sucks, 33MB/sec (disks around 70MB/sec).
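
For reference, that kind of raw sequential number can be reproduced by reading the devices directly with dd; a minimal sketch, with example device names (da0 for a bare member disk, da1 for the exported logical volume):

Code:
# sequential read from a bare member disk (example device name)
dd if=/dev/da0 of=/dev/null bs=1m count=1000

# sequential read from the "LSI Logical Volume" (example device name)
dd if=/dev/da1 of=/dev/null bs=1m count=1000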

Always fun getting a slow little PowerPC core involved in the data path.
 