Performance issues on HP Server and M1015 (both with HDD and SSD)

Matute

Dabbler
Joined
Apr 12, 2017
Messages
21
Hi all. Though I've been working with FN in production for 3 years now (and I LOVE IT), I still consider myself a newbie in some respects.

Right now I'm trying to build a new FN box using this hardware:
- HP DL180 G5 (old 2U server)
- 2x IBM M1015 cards (IT-flashed with the latest version; sas2flash output: SAS2008(B2) 20.00.07.00 14.01.00.08)
- 12x Intel DC S3610 800GB SSDs in 6 two-drive mirrors (similar to RAID 10)
- Also tried an alternative pool: 5x 3TB WD Red with an Intel DC S3500 80GB ZIL SSD
- 16 GB ECC RAM (I know it's a low amount; the server can't handle more)
- The server has PCIe v1.0 4x and 8x slots
- 10 Gb fibre network card
- 1 Gb Ethernet network card

Also, one of my production backup servers is:
- IBM System x3200 M2
- 1x IBM M1015 card (IT-flashed with the latest version; sas2flash output: SAS2008(B2) 20.00.07.00 14.01.00.08)
- 6x 3TB WD Red with an Intel DC S3500 80GB ZIL SSD in RAIDZ2
- 16 GB ECC RAM
- The server also has PCIe v1.0 4x and 8x slots
- 1 Gb Ethernet network card

Situation: I created a jail and installed Bonnie++ on both the new server and the production server mentioned above.

On both I ran:
Code:
bonnie -d /test -s 64000

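As a quick cross-check independent of bonnie++, a simple dd sequential test can be run in the same jail. This is just a sketch: the temp dir is a placeholder (point it at a dataset on the pool under test), and zeros from /dev/zero compress away on a dataset with compression enabled, so turn compression off there for meaningful numbers.

```shell
# Sketch of a dd sequential write/read cross-check.
# TESTDIR is a placeholder: replace mktemp with a directory on the pool
# under test. Zeros compress away, so disable compression on that dataset
# (or write random data) to get meaningful throughput figures.
TESTDIR=$(mktemp -d)
dd if=/dev/zero of="$TESTDIR/ddtest.bin" bs=1M count=256   # sequential write
dd if="$TESTDIR/ddtest.bin" of=/dev/null bs=1M             # sequential read
rm -rf "$TESTDIR"
```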
Results are:
Code:
                     -------Sequential Output--------    ---Sequential Input---    --Random--
                     -Per Char-   --Block--  -Rewrite-   -Per Char-   --Block--    --Seeks--
                      K/sec %CPU  K/sec %CPU  K/sec %CPU  K/sec %CPU  K/sec %CPU    /sec  %CPU
New server - SSD     245921 93.8 333140 90.3 257534 91.7 214043 99.8 658649 99.8 12265.8 244.2
Prod srvr  - HDD     360194 90.4 517828 87.4 399126 90.1 321813 99.3 945104 96.7 12128.0 138.5
New server - HDD     241881 93.3 349314 93.8 269634 94.6 214645 99.9 663871 99.8 12350.7 250.7


Notice that the prod server has roughly 50% higher performance in every metric than the new one, even though the configuration in the HDD scenario is fairly similar. I concluded that there's an issue between the M1015 cards and the new (HP) server. I verified:
- IRQs (assignment, collisions, etc.); though I'm no expert, I didn't find anything suspicious, and the configurations look fairly similar on both servers
- Left only one M1015 card in the new server for the HDD test and placed it in an 8x slot. Same result.
- Checked with lspci -vvv on both servers; found no significant (to my eyes) differences.
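One more thing that might be worth comparing on both boxes is the PCIe link the HBA actually negotiated: in the lspci output, LnkCap is what the card supports, while LnkSta is what the link actually trained to. A minimal sketch (06:00.0 is the SAS2008 address on my server; find yours with `lspci | grep -i sas`):

```shell
# Pull the supported (LnkCap) vs. negotiated (LnkSta) PCIe link for the HBA.
# 06:00.0 is the SAS2008 address on my box; adjust for yours.
if command -v lspci >/dev/null 2>&1; then
    lspci -vv -s 06:00.0 | grep -E 'LnkCap:|LnkSta:'
fi
```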

If someone can point me in the right direction as to how I can troubleshoot this, I would highly appreciate it.

To summarize all the above:
- I seem to be hitting some kind of bottleneck on the HP server. When I look at the reporting graphs in the FN UI while the tests are running, I see 10 MB/s per disk on the prod server but only 2 to 4 MB/s on the new one, and I can't figure out why. Same disks, same card, similar server specs...
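In case anyone wants to see the same numbers outside the UI, per-disk throughput can also be watched from the CLI while the benchmark runs, to check whether a single disk or the whole set is slow. A sketch using FreeBSD's iostat (flags are FreeBSD syntax: extended stats, 1-second interval, 5 samples):

```shell
# Watch per-disk throughput while bonnie++ runs (FreeBSD iostat syntax:
# -x extended stats, -w 1 one-second interval, -c 5 five samples).
if [ "$(uname)" = "FreeBSD" ]; then
    iostat -x -w 1 -c 5
fi
```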

Thanks! (And sorry if this post doesn't comply with the forum rules; I haven't posted much.)
Matias.
 

Just in case it makes it easier for someone to help me, here's the lspci -vvv output for the SAS controller on the new (slow) server:
Code:
06:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)           
        Subsystem: LSI Logic / Symbios Logic Device 3020                                                                            
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+                      
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-                        
        Latency: 0, Cache Line Size: 64 bytes                                                                                      
        Interrupt: pin A routed to IRQ 16                                                                                          
        Region 0: I/O ports at e000                                                                                                
        Region 1: Memory at fcefc000 (64-bit, non-prefetchable)                                                                    
        Region 3: Memory at fce80000 (64-bit, non-prefetchable)                                                                    
        Expansion ROM at fce00000 [disabled]                                                                                        
        Capabilities: [50] Power Management version 3                                                                              
                Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)                                          
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-                                                              
        Capabilities: [68] Express (v2) Endpoint, MSI 00                                                                            
                DevCap: MaxPayload 4096 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us                                              
                        ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W                                      
                DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-                                                  
                        RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-                                                      
                        MaxPayload 128 bytes, MaxReadReq 512 bytes                                                                  
                DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr- TransPend-                                                
                LnkCap: Port #0, Speed 5GT/s, Width x8, ASPM L0s, Exit Latency L0s <64ns                                            
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp-                                                            
                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk-                                                              
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-                                                              
                LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-                                  
                DevCap2: Completion Timeout: Range BC, TimeoutDis+, LTR-, OBFF Not Supported                                        
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled                                        
                         AtomicOpsCtl: ReqEn-                                                                                      
                LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-                                                      
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-                          
                         Compliance De-emphasis: -6dB                                                                              
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-                                
                         EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-                                        
        Capabilities: [d0] Vital Product Data                                                                                      
                Not readable                                                                                                        
        Capabilities: [a8] MSI: Enable- Count=1/1 Maskable- 64bit+                                                                  
                Address: 0000000000000000  Data: 0000                                                                              
        Capabilities: [c0] MSI-X: Enable+ Count=15 Masked-                                                                          
                Vector table: BAR=1 offset=00002000                                                                                
                PBA: BAR=1 offset=00003800                                                                                          
 

Hi, if any moderator could do me the favor of moving this thread to "Will it FreeNAS? - FreeNAS Build Discussion", I would really appreciate it. (I'm not trying to duplicate content, but so far I haven't been able to get support here.)
 