m1015 and Dell MD1000

Status
Not open for further replies.

rfink02

Cadet
Joined
Feb 27, 2017
Messages
4
The questions below are primarily for anyone out there running an MD1000 behind an m1015, but any feedback/thoughts are greatly appreciated.

My configuration:
FreeNAS-9.10.2-U1 (USB)
Dell R610 (latest BIOS) 12GB RAM E5620/2.4 Ghz
m1015 cross flashed to latest P20 IT mode (20.00.07.00)
Dell MD1000 A.04 (unified mode) populated with 14x 1TB Hitachi HUA72101 A74A (SATA + interposers)
MD1000 connected via single SAS cable from m1015 to EMM#1

I've benchmarked the system (compression off, using dd if=/dev/zero of=/mnt/vol1/dataset1/ddfile bs=2048k count=10000 and dd of=/dev/null if=/mnt/vol1/dataset1/ddfile bs=2048k count=10000). Generalized results:
1. For reference, a single disk reads/writes at ~80 MB/s
2. Any other configuration reads/writes at <200 MB/s (mirror, 6-disk RAIDZ2, 4x mirror, 2x 7-disk RAIDZ2), with individual drive performance well below 80 MB/s and negligible CPU load (a raw-disk cross-check is sketched below)
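
To take ZFS out of the picture, a raw-disk cross-check along these lines can be run in parallel (just a sketch; the da numbers are placeholders for whatever camcontrol devlist reports on your system):

Code:
# Read several raw disks at once, bypassing ZFS entirely; each dd prints
# its own throughput summary when it finishes.
for d in da0 da1 da2 da3; do
    dd if=/dev/${d} of=/dev/null bs=2048k count=2500 &
done
wait

If the aggregate here still caps out near single-drive speed, the bottleneck sits below ZFS (HBA, cable, or enclosure).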

My thoughts/observations:
1. Based on the (admittedly) little I know, I would have thought the dd results with a 2x 7-disk RAIDZ2 would be somewhere around 500-600 MB/s for this hardware before tapping out the 6 Gbps x4 SAS interface.
2. Seeing as how the individual drive performance drops off roughly proportionately to the number of drives in a volume tested, I suspect there is an issue between the m1015 and the MD1000 resulting in only a single SAS lane being used.
3. I'm not necessarily trying to eke out every last drop of performance. Rather, my concern is that there is an underlying hardware or configuration problem. This system is for home use.
4. It looks to me like the m1015 has LEDs to indicate the SAS lane activity (marked CR2-CR9 on the board). I am seeing only a single LED lit when under load - CR5 or CR9 depending on which SAS port is connected to the MD1000.
5. I've tried swapping SAS cables, pulling an EMM, and changing EMMs, with no change in behavior.
6. SMART tests look ok for every drive.
7. When in the m1015 BIOS, I can't expand the MD1000 enclosure to view individual drives and the signaling summary info is blank. Not sure if this is an issue or to be expected. FreeNAS has no issue seeing any drives.
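
For reference, this is roughly how I'm confirming visibility from the OS side (a sketch; device numbering is from my box and will differ):

Code:
# List every device CAM has attached; the MD1000 shows up as ses entries
# alongside the da disks.
camcontrol devlist

# Controller/firmware sanity check (sas2flash ships with the LSI tools,
# assuming it's on the PATH in your install).
sas2flash -list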

Question 1 (for anyone with m1015 and MD1000): What level of performance are you seeing with your system using dd? Are my expectations way off?
Question 2 (for anyone with m1015): If you physically look at your m1015, is there more than one LED lit while the system is under load?
Question 3: Is there any way to determine the number of SAS lanes active on a SAS HBA from within FreeNAS/FreeBSD?
Question 4: Should a single process (i.e., dd) activate/use all available SAS lanes? Or are lanes placed into use as additional processes request/write data (similar to LACP)?
Question 5: Am I doing something wrong or are my expectations on performance or thought processes way off?

I am obviously new to FreeNAS, so any help or direction is greatly appreciated. BTW, there is no data on this system yet, so if it would be helpful for me to provide specific dd results using different drive configurations, I am able to make changes as needed.

Thank you in advance!

Rob
 

toyebox

Explorer
Joined
Aug 20, 2016
Messages
87
Hey Rob,

I do not have an MD1000, but I was using an m1015 quite frequently before I recently upgraded my mobo to one with an onboard LSI adapter.

I believe my m1015 has one solid light and one flashing LED when there's activity.

I'm not sure how much you have dealt with the m1015, but I know mine would overheat rather quickly due to the small heatsink and the cramped area I put it in, so I put a small fan on it. It doesn't sound like this is your issue though, given that when mine was overheating, it would actually freeze and reboot my machine :S

Just as a benchmark, here are the results of running your dd commands on my system:

Code:
dd if=/dev/zero of=/mnt/vol1/dataset1/ddfile bs=2048k count=10000
20971520000 bytes transferred in 40.128083 secs (522614549 bytes/sec)

dd of=/dev/null if=/mnt/vol1/dataset1/ddfile bs=2048k count=10000
20971520000 bytes transferred in 47.267360 secs (443678683 bytes/sec)

This is on a mirror setup (six 4 TB disks, three mirror vdevs).
 

rfink02

Cadet
Joined
Feb 27, 2017
Messages
4
Thanks for the reply! Did some more digging - there may be something going on between the HBA and the MD1000. I think there should be phys 0-3 in the output below (one attached phy per SAS lane), not just phy 0. Aside from both SAS cables (brand new - IBM 95P4588) being bad, I don't have a clue where to go with this.

Code:
[root@freenas ~]# smp_discover /dev/ses0
  phy   0:S:attached:[500605b003646fe0:07  i(SSP+STP+SMP)]  3 Gbps
  phy   8:T:attached:[5001e4f23de59840:00 exp t(SMP)]  3 Gbps
  phy   9:T:attached:[5001e4f23de59840:01 exp t(SMP)]  3 Gbps
  phy  10:T:attached:[5001e4f23de59880:00 exp t(SMP)]  3 Gbps
  phy  11:T:attached:[5001e4f23de59880:01 exp t(SMP)]  3 Gbps
  phy  12:D:attached:[5001e4f23de5980c:00  V t(STP)]  3 Gbps

[root@freenas ~]# smp_rep_manufacturer ses0
Report manufacturer response:
  SAS-1.1 format: 1
  vendor identification: DELL
  product identification: MD1000
  product revision level: A01
  component vendor identification: LSI
  component id: 517
  component revision level: 1

[root@freenas ~]# smp_discover --phy=0 ses0
Discover response:
  phy identifier: 0
  attached device type: SAS or SATA device
  negotiated logical link rate: phy enabled, 3 Gbps
  attached initiator: ssp=1 stp=1 smp=1 sata_host=0
  attached sata port selector: 0
  STP buffer too small: 0
  attached target: ssp=0 stp=0 smp=0 sata_device=0
  SAS address: 0x5001e4f23de59800
  attached SAS address: 0x500605b003646fe0
  attached phy identifier: 7
  programmed minimum physical link rate: 1.5 Gbps
  hardware minimum physical link rate: 1.5 Gbps
  programmed maximum physical link rate: 3 Gbps
  hardware maximum physical link rate: 3 Gbps
  phy change count: 7
  virtual phy: 0
  partial pathway timeout value: 7 us
  routing attribute: subtractive
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I don't know about the speed issue directly, but it appears your MD1000 is a SAS 1 device (3 Gbps). So if for some reason you are using just one SAS lane, then less than 300 megabytes per second is what I would expect.
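
Back-of-the-envelope, assuming SAS-1 signaling with 8b/10b encoding (10 bits on the wire per byte of data):

Code:
1 lane : 3 Gbps / 10  = 300 MB/s raw, roughly 250-280 MB/s after protocol overhead
4 lanes: 4 x 300 MB/s = 1200 MB/s raw across the x4 wide port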

Just in case you don't know, SAS 1 has a limit of 2TB on disk sizes (if I recall correctly). It appears you are using 1TB disks at present, so it's good for now. You can later upgrade to 2TB disks, or larger, with the limitation that the array won't see beyond 2TB per disk.

On question 4, yes, a single dd process should cause all connected SAS lanes to be used, provided there are more disks than SAS lanes. A single disk transaction will be limited to a single SAS lane, but that should be okay.
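
One way to watch whether traffic actually spreads across the disks is to monitor them from a second shell while the dd runs (a sketch; both tools ship with FreeBSD):

Code:
# Per-disk busy% and throughput, physical providers only.
gstat -p

# Or extended per-device statistics, refreshed every second.
iostat -x -w 1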
 

rfink02

Cadet
Joined
Feb 27, 2017
Messages
4
Just a follow-up to my original post for anyone that stumbles upon a similar problem.

The root cause of the poor performance was the cabling - apparently I had originally purchased a single-lane cable. I recently replaced it with a new one (Monoprice SFF-8470 to SFF-8088) and the issue is resolved. Read/write is now in the neighborhood of 500 MB/s, and "smp_discover /dev/ses0" clearly shows all four lanes connected.
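
For anyone checking their own cabling, counting attached phys is the giveaway; with the 4-lane cable, phys 0-3 all report the HBA's SAS address instead of just phy 0 (a sketch; the ses number will vary):

Code:
smp_discover /dev/ses0        # 4-lane cable: phys 0-3 attached to the HBA
camcontrol smpphylist ses0    # similar phy table via CAM, if you prefer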
 

mcooper139

Cadet
Joined
Aug 11, 2016
Messages
1
Greetings. I have a PE2950 II with an IBM M1015 flashed to 9211-8i IT mode with the latest P20 firmware, running FreeNAS 9.10. The six slots in my PE2950 were a space-limiting problem, so I bought an MD1000 and the cables to hook it up to the two internal ports on the M1015 HBA (cable: 2m 28AWG External SAS 34-pin (SFF-8470) Male to Internal Mini SAS 36-pin (SFF-8087) Male - https://www.amazon.com/gp/product/B005E2XVA0/ref=oh_aui_search_detailpage?ie=UTF8&psc=1). The HBA works, and has for a few years when connected to the backplane of the PE2950. I can see all six drives (4 TB WD Red NAS SATA drives) when they are connected to the PE2950's backplane at boot.

When I move the drives into the MD1000 and plug the cables into the HBA and the MD1000, none of the drives, nor the MD1000 itself, are seen on bootup. No SAS topology appears in the configuration utility for the M1015 at boot (the utility provided by the LSI 9211-8i flash).

I've tried unified and split mode on the MD1000. I've tried using only one controller on the MD1000. I have the cables plugged into the "In" port of the MD1000. The blue light is illuminated on the MD1000. All my drives are SATA drives; I do not have interposers between the drives and the MD1000.

What am I missing? Do I need an "expander" plugged into my PE2950? I thought that is what the controllers (EMMs) on the MD1000 were (they say they also control SATA drives), but I'm not sure. Thanks in advance.
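
Happy to run diagnostics if someone can point me at them; I'm assuming something along these lines from a FreeNAS shell would at least show whether the enclosure enumerates (the commands are my guess, not verified output):

Code:
# Does any MD1000 enclosure (ses) or da device appear at all?
camcontrol devlist

# HBA driver (mps) messages from attach time -- link/cable problems often show here.
dmesg | grep -i mps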
 

rfink02

Cadet
Joined
Feb 27, 2017
Messages
4
I can't see any drives or the SAS topology (except for the MD1000 itself) from the m1015 BIOS either, but I can see each drive detected during the full BIOS boot. Not sure why. They are of course subsequently detected and mounted by FreeBSD as it loads.

As far as I know, the SAS interposers are not strictly required (but I do have them on every drive). All of my drives are SATA. I have my MD1000 in unified mode.

Is your m1015 in the internal storage slot on the PE2950? Perhaps that has something to do with it. I don't recall the details, but I'm pretty sure that slot is restricted to internal storage.

Sorry I can't be of more help.
 