Proliant ML350 G5

Status
Not open for further replies.

DannyKlenz

Dabbler
Joined
Aug 12, 2014
Messages
31
I just purchased one of these for $75 and have a few questions.

HP QuickSpecs: http://www8.hp.com/h20195/v2/GetDocument.aspx?docname=c04284193

I understand the E200i will not work with ZFS. Can I get away with using UFS/hardware RAID until I get the funds to upgrade to the M1015/cables/drives?

Is there a way I can still use the backplane and drive cage?

Will PCIe 1.0 be a bottleneck for me? This will be a simple home NAS for media/files.

Any other thoughts or suggestions?

Thanks for your time.

Dan
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You may not even be able to get it to boot FreeNAS/FreeBSD. My idiot check is to see if FreeBSD is listed as a supported OS when dealing with HP and Dell hardware. It's not. So in my experience your chances of this even working have decreased significantly. It *may* work, but you're already in the 10% chance category.

You can do whatever you want with UFS and hardware RAID. I wouldn't recommend it, but you can do it.
 

DannyKlenz

Dabbler
Joined
Aug 12, 2014
Messages
31
I've had FreeBSD installed on an HP XW6400/XW8400 and it worked fine. I understand it's not a supported OS, but I've never been one to pay heed to that when it comes to installing *nix. I'm more in this for the learning experience; if I end up with a rock-solid NAS at the end, that's great too.
 

DannyKlenz

Dabbler
Joined
Aug 12, 2014
Messages
31
From the FreeBSD 9.3 Hardware Notes:

[i386,ia64,amd64] Controllers supported by the ciss(4) driver include:

  • HP Smart Array E200

  • HP Smart Array E200i
So I should be good to go. (Famous last words.)

I understand that you are absolute about your use of ZFS and specific hardware, but one does not wait until they own a Lamborghini to go for a drive. Try to keep everything in perspective before you adamantly dismiss a technology that has kept redundant data safe for over a decade.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah, that's not what I was referring to. Many people can't even get FreeNAS to boot because of peculiarities in the hardware. No clue why it panics or locks up. I just see people upset about it and I don't know what to say.
 

DannyKlenz

Dabbler
Joined
Aug 12, 2014
Messages
31
"Will PCIe 1.0/1.1 be a bottleneck?"

Doing some reading, I found that each version of PCIe has the following bandwidth per lane after accounting for encoding overhead:

1.0/1.1 = 250MB/s per lane.
2.0 = 500MB/s per lane.
3.0 = 985MB/s per lane.

The HBA:

The IBM M1015/LSI SAS2008 (the preferred HBA for FreeNAS and ZFS) is a PCIe 2.0 x8 card, meaning it can use 8 lanes of PCIe 2.0 at 500MB/s per lane, or 4000MB/s of total bandwidth. The ML350 G5 has 3 PCIe 1.1 x8 slots, but electrically they are only x4, which leaves 250MB/s per lane or 1000MB/s of total bandwidth available to the HBA. The PCIe standard requires that every new version be backwards compatible with previous versions, so it's fine to use a PCIe 2.0 card in a PCIe 1.0/1.1 slot (I run PCIe 3.0 graphics cards in PCIe 1.1 slots in two of my computers and have no issues playing games).
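
Back-of-envelope, the link runs at the lower generation and the narrower width of the card and the slot. A quick Python sketch of that, using the per-lane numbers above (it ignores protocol overhead beyond encoding, so treat the result as a ceiling):

    # Effective PCIe link bandwidth: lowest common generation x narrowest common width
    PER_LANE_MBPS = {"1.1": 250, "2.0": 500, "3.0": 985}  # approx. MB/s per lane

    def link_bandwidth(card_gen, card_lanes, slot_gen, slot_lanes):
        gen = min(card_gen, slot_gen, key=lambda g: PER_LANE_MBPS[g])
        lanes = min(card_lanes, slot_lanes)
        return PER_LANE_MBPS[gen] * lanes

    # M1015 (PCIe 2.0 x8) in an ML350 G5 slot (PCIe 1.1, x4 electrical)
    print(link_bandwidth("2.0", 8, "1.1", 4))  # -> 1000 MB/s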

The Drives:

Reviews of the WD Red drives show a maximum throughput of 100-140MB/s under benchmarks. Real-world numbers are likely lower, but let's stick with 100MB/s to be on the safe side and not overload the x4 PCIe 1.1 interface. 1000/100 = 10, so we can have a maximum of 10 drives. That means I can use all 8 ports of the M1015/SAS2008 and not have to worry about bottlenecking at the PCIe bus. This is good news! However, if I want to add more HDDs I can't simply hang them off a SAS expander; I'll have to buy a second HBA, since the first one has maxed out the throughput of its PCIe slot on the motherboard.
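
Same math in a couple of lines (the 100MB/s per drive is the conservative figure above):

    SLOT_MBPS = 1000                 # x4 PCIe 1.1, from the HBA section above
    DRIVE_MBPS = 100                 # conservative sustained throughput per WD Red
    print(SLOT_MBPS // DRIVE_MBPS)   # -> 10 drives before the slot becomes the bottleneck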

Other Thoughts:

One might think that since this is all going through a gigabit Ethernet connection you can add more drives and not worry about bottlenecking the PCIe bus. However, if/when you have to rebuild/resilver a drive, the available bus bandwidth makes a big difference in how long that takes and how much strain it puts on the hardware.
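
To put a rough number on it, a resilver has to rewrite about one drive's worth of data, so even in the best case (sequential I/O, no other load, and assuming the ~100MB/s sustained figure above, which is optimistic) a 2TB drive takes hours; cut the available bandwidth and it only gets longer:

    # Rough lower bound on resilver time for one 2TB drive at ~100MB/s sustained
    drive_mb = 2 * 1_000_000     # 2TB in MB (decimal)
    mbps = 100
    print(round(drive_mb / mbps / 3600, 1))  # -> 5.6 hours, best case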
 

no_connection

Patron
Joined
Dec 15, 2013
Messages
480
If you decide to use the ML350 G5, do note that it draws a fair bit of power.
Depending on the processor, you need to populate half of the memory slots to get 8GB, and those DIMMs are 5W a pop.
So figure 180-200W. They need to cool all of that as well, and since a fair amount of the heat comes from the memory, you will not find whisper-quiet fans here.
Also, please get some fans for the internal heatsinks, otherwise you could burn yourself (speaking from experience here).
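
Roughly where that lands (only the 5W-per-DIMM figure is from above; the CPU, drive, and fan/board numbers below are guesses on my part):

    # Very rough ML350 G5 power budget -- everything except the DIMM figure is a guess
    cpus       = 2 * 55    # assumed: two LGA771 Xeons under light load, ~55W each
    dimms      = 4 * 5     # four 2GB DIMMs at ~5W each (half the slots, for 8GB)
    drives     = 4 * 8     # assumed: ~8W per 3.5" disk
    fans_board = 30        # fans, board, and PSU losses -- a guess
    print(cpus + dimms + drives + fans_board)  # -> 192, in the 180-200W ballpark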


I use one for ESXi and one for gaming. If you hacksaw open one of the PCIe slots it can power a bus-powered graphics card without a problem; I use a Quadro 2000.
On my ESXi box I removed the front bays and put in a big fan. The PSU fans were acting up a bit (running at 100%, which was loud), and I want to keep it alive long enough to replace it with something quieter that won't drain the UPS so quickly.
 

DannyKlenz

Dabbler
Joined
Aug 12, 2014
Messages
31
For $75.00 I couldn't pass it up. With redundant power supplies/fans and an internal USB port so the key drive stays nice and safe, I thought it would be perfect for the job even though it uses a bit more electricity.

Yes, I noticed the northbridge gets pretty darn hot! I can't believe it does not come with a fan.

This is how it's going to be set up:

1x Xeon 5160
16GB ECC RAM (8x2GB)
LSI SAS2008 using the 6-bay hot-swap cage
4x WD Red 2TB in RAIDZ2

I also own an HP XW6400 and XW8400, so I'm familiar with the platform it's built on.
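
For what it's worth, the usable space on that layout works out like this (RAIDZ2 gives up two drives to parity; drive makers count TB as 10^12 bytes, and ZFS metadata will shave off a bit more):

    drives, size_tb, parity = 4, 2, 2          # 4x 2TB WD Red in RAIDZ2
    usable_tb = (drives - parity) * size_tb    # 4 TB of raw data space
    print(usable_tb, "TB, or about", round(usable_tb * 1e12 / 2**40, 2), "TiB before overhead")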
 

DannyKlenz

Dabbler
Joined
Aug 12, 2014
Messages
31
All of the parts will be here Thursday, so hopefully everything comes together well. With six 15k drives it was pretty darn loud; the damn thing was whistling, lol. After taking them out and running it, the noise was not so bad. For me this is a first draft, and after FreeNAS has proven itself I will invest in some better hardware down the road.
 

no_connection

Patron
Joined
Dec 15, 2013
Messages
480
Do you think you could test whether the backplane will run all 8 drives with one cable connected, or whether it is a 1:1 ratio between drives and channels (with the LSI card)?

I did see a diagram of the power connector somewhere if you want to reuse the backplane.
 

DannyKlenz

Dabbler
Joined
Aug 12, 2014
Messages
31
My ML350 has the 6-bay LFF cage, so I can't test that, but I do know that the way it's set up on the E200i, bays 1-4 are on port/cable 1 of the controller and bays 5-6 are on port/cable 2. If you use the cable I linked to above and you have the 8-bay SFF cage, it would be 4 drives on each cable and should work fine.

These servers come with the option of a 6-bay LFF (Large Form Factor) cage for standard 3.5in desktop drives or an 8-bay SFF (Small Form Factor) cage for 2.5in drives like you find in laptops. SSDs may fit as well.

The power cable should already be plugged into the backplane unless someone removed it.

This is the service and maintenance manual for the ML350 G5: Here

Page 34 illustrates and explains how to plug it in.
 

DannyKlenz

Dabbler
Joined
Aug 12, 2014
Messages
31
The last of the parts arrived yesterday and I was able to put everything together. I had a bit of difficulty flashing the Dell card over to LSI firmware, but I got it done. It is plugged into the hot-swap cage and recognizes all of the drives; the only issue is that the activity lights built into the backplane do not work, which is something I can definitely live with. It's currently booting off an SSD hooked up to the E200i, which is temporary, since without direct communication with the SSD there is no way for the OS to send TRIM commands.
 

JasonRosen

Cadet
Joined
Sep 5, 2014
Messages
1
Hi - I just purchased all the parts for this exact same setup:
HP ML350 G5 (1 x Xeon, 32GB ECC RAM)
Dell H200 (LSI 9210-8i) flashed to IT mode
4 x Seagate NAS SATA drives (3.5")

All I need now are the cables to connect the HBA to the drive cage backplane.

I found this looking for the same info about SFF-8087 host to SFF-8484 backplane cables so I could use the drive cage.

I noticed there are two 4-pin connectors on the back of the drive cage that look like they might be for LED activity (one connector for each set of 3 drives in the cage).

Have you found a way to get the HDD LEDs to function in the drive cage?

How has your setup worked for your FreeNAS build?

Thanks!
 

DannyKlenz

Dabbler
Joined
Aug 12, 2014
Messages
31
From what I understand, those pins are labeled as a programming interface for the backplane, not for the lights. I have been searching high and low for a way to get them to work with an HBA, but I have not found a solution besides writing the disk ID on the case. After playing around with FreeNAS I decided to go another route. Not saying FreeNAS isn't great; it just didn't suit all my needs.
 