Any gotchas with a HP DL360 G5 and MSA70's?

Status
Not open for further replies.

fragged

Cadet
Joined
Mar 5, 2013
Messages
6
Planning to build a FreeNAS system using a DL360 G5 and a P600 array controller with two MSA70 enclosures. That's 6 drives in the server and 24 in each enclosure, all 146 GB 10k RPM SAS. Processing is 2x quad-core with 32 GB of RAM. Anyone know of any showstoppers or gotchas with this type of setup? One article mentioned the P800 not being able to be set to JBOD.

This is to serve iSCSI to two ESXi 5 hosts. Just that, nothing else.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If it doesn't do JBOD, then ZFS isn't a very good idea.. at all.

As for the rest of the hardware, I'd check the FreeBSD compatibility matrix and see if all of the hardware is listed.

There are also plenty of threads where iSCSI + ZFS = fail. You should read up on the limitations of ZFS with iSCSI and make sure that's the route you want to take. You may find the performance is far too low for your needs.
 

fragged

Cadet
Joined
Mar 5, 2013
Messages
6
So I shouldn't use ZFS, but UFS, for this setup since it's iSCSI only. Stick with hardware RAID in that situation?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
So I shouldn't use ZFS, but UFS, for this setup since it's iSCSI only. Stick with hardware RAID in that situation?

Not sure. I know there are UFS-RAID options, but I'm not a UFS user so I can't really provide feedback on how best to deal with UFS.
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
If you find iSCSI over ZFS doesn't do what you want, and since you have a hardware RAID controller anyway, can't you just do hardware RAID straight into iSCSI?

Set up the hardware RAID array however you like, and set up iSCSI to use /dev/whatever-the-raid-array-is. As far as I understand it, FreeNAS will simply be passing iSCSI commands directly to your RAID controller. Also as I understand it, most of the RAM in the system will go unused, as ZFS won't be doing any caching; you'll be relying on the cache on the hardware RAID card.
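If it helps to picture it, the extent really is just a path in the iSCSI target config that FreeNAS writes for you. Something along these lines in istgt terms (the device node and target name below are only placeholders for whatever your RAID volume shows up as):

Code:
# istgt.conf fragment - illustrative only, the FreeNAS GUI generates the real thing
[LogicalUnit1]
  TargetName raid-extent0
  Mapping PortalGroup1 InitiatorGroup1
  UnitType Disk
  # point the LUN straight at the hardware RAID volume (placeholder device node)
  LUN0 Storage /dev/da1 Auto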

If you can't set up the RAID card in HBA mode, I'd be wary of using ZFS at all, really.
 

cheezehead

Dabbler
Joined
Oct 3, 2012
Messages
36
The problem with the controller is that the damn Smart Arrays don't have a true JBOD mode; you still have to carve a LUN manually for each drive to get a quasi-JBOD, and in the event a drive dies you need to reboot and then replace the drive within ORCA. Last time I did a build on a DL380 G5 with 8x 146s connected to a P400, I ended up creating 8 single-drive LUNs and then ran ZFS on top of them. Did it work? Yes, but it was fuggly and I wouldn't suggest it outside of just getting familiar with the software.
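For reference, the ugly recipe was basically one RAID 0 logical drive per physical disk (from ORCA at boot, or HP's hpacucli tool if you have it installed) and then a pool across the resulting devices. Something like this, with the slot/bay numbers and pool layout as placeholders only:

Code:
hpacucli ctrl slot=0 create type=ld drives=1I:1:1 raid=0
hpacucli ctrl slot=0 create type=ld drives=1I:1:2 raid=0
# ...repeat for each bay; the logical drives show up as da* via ciss(4)
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7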

Which leaves you with three options:
1) Replace the Smart Array with a pass-through controller, though you'd also be looking at new SAS cables since the ProLiants use SFF-8484 ports.
2) Use the Smart Array as it was intended, carving up LUNs as you normally would.
3) Present multiple LUNs to the host with resiliency and run UFS or ZFS on top of them.

Considering the size of the array, personally I would pick up a new controller and a pair of SAS cables for inside the DL360 (or run the internal drives off the P600 and the MSA70s off the replacement card).

There's a good post over on the ServeTheHome forums (http://forums.servethehome.com/f19/lsi-raid-controller-hba-complete-listing-plus-oem-models-599.html) which gives a full rundown on the re-branded controllers. I'm using an IBM M1015 at home, IT-flashed, so it handles pass-through easily and the price was great, but it doesn't have the external SFF-8088 ports. Off-hand, the cheapest way I can think of would be to get a card with 4x SFF-8087 ports and then get a dual-port (or quad, might be cheaper, never know) SFF-8087 to SFF-8088 adapter for the MSA70s.
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
cheezehead,

What did you come up with as a solution for this?


Which leaves you with three options:
1) Replace the Smart Array with a pass-through controller, though you'd also be looking at new SAS cables since the ProLiants use SFF-8484 ports.
Any examples?
2) Use the Smart Array as it was intended, carving up LUNs as you normally would.
Do you have any performance data for this? Like results from dd?
3) Present multiple LUNs to the host with resiliency and run UFS or ZFS on top of them.
I've seen some hateful threads about mixing hardware RAID and software RAID/ZFS on top of each other; do you have any experience with that?


There's a good post over on the ServeTheHome forums (http://forums.servethehome.com/f19/lsi-raid-controller-hba-complete-listing-plus-oem-models-599.html) which gives a full rundown on the re-branded controllers. I'm using an IBM M1015 at home, IT-flashed, so it handles pass-through easily and the price was great, but it doesn't have the external SFF-8088 ports. Off-hand, the cheapest way I can think of would be to get a card with 4x SFF-8087 ports and then get a dual-port (or quad, might be cheaper, never know) SFF-8087 to SFF-8088 adapter for the MSA70s.
The link was broken when I tried it. Do you have any examples of cards like the M1015 with external ports that are known to work well with FreeNAS?
 

fragged

Cadet
Joined
Mar 5, 2013
Messages
6
I ended up using the servers in another project and didn't proceed with this setup.
 

victorhooi

Contributor
Joined
Mar 16, 2012
Messages
184
Hi,

I also have an HP DL360 G5 server, currently running with a P400i controller card. Apparently the battery in the card is giving up the ghost, so it's time to replace it. However, I thought this might be a good opportunity to move away from hardware RAID. (We're actually not using FreeNAS on this particular server... haha, but this thread seemed to target our question perfectly.)

Does anybody have any recommendations or experiences with alternative controller cards that are compatible with the DL360 G5, and also support JBOD mode for the disks?

Regards,
Victor
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
Does anybody have any recommendations or experiences with alternative controller cards that are compatible with the DL360 G5, and also support JBOD mode for the disks?

I ended up using an LSI Logic SAS3801E, connected to an MSA70, and am planning to get an MSA60 as well. Only having 8 disk slots was not enough for me because very large SAS disks are hella expensive. It was cheaper for me to use a disk enclosure and a bunch of small disks.

The P400/600/800 will not work for ZFS (I played with them like everyone else). If all you want to use is the 8 internal disk slots, the IBM RAID card flashed to IT mode is the recommended favorite. I suspect it would work fine in the 380, but remember the PCIe bus is slower in the G5s; I forget off the top of my head, maybe version 2? I think it has one x4 and two x8 slots. I suppose there is a SAS cable that will let you connect many different cards to the internal backplane, but I am not really sure about that. Might need some research.

The one good use I did find for my P400 was turning it into a SLOG. I created a mirrored disk set on it, and then set its write cache to 100% write. Since I had a 512 MB cache, it does OK for a SLOG. It's obviously not as good as some smoking-fast solid states like the ones people recommend on here, but it was much cheaper since I had all the parts, and I can handle the performance trade-off.
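If anyone wants to copy that setup, it boils down to something like the following; the slot number, drive bays, device name, and pool name are just placeholders for whatever your system reports:

Code:
# two-drive RAID 1 logical drive on the Smart Array, cache biased entirely to writes
hpacucli ctrl slot=1 create type=ld drives=1I:1:1,1I:1:2 raid=1
hpacucli ctrl slot=1 modify cacheratio=0/100
# the logical drive shows up as a da device via ciss(4); attach it as the log vdev
zpool add tank log da8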
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The one good use I did find for my P400 was turning it into a SLOG. I created a mirrored disk set on it, and then set its write cache to 100% write. Since I had a 512 MB cache, it does OK for a SLOG. It's obviously not as good as some smoking-fast solid states like the ones people recommend on here, but it was much cheaper since I had all the parts, and I can handle the performance trade-off.

Keep in mind that this can break ZFS. The SLOG is supposed to be non-volatile, and you've potentially broken that by enabling the write cache. Assuming your write cache has a BBU (and if it doesn't, you are even crazier than I am), on an improper shutdown the data held in the battery-backed cache will be lost if the BBU goes dead before the system is powered back on. In this case you'll be in recovery mode, because the data in your controller's cache will be lost and your actual SLOG devices will not have all of the data that they should.

This is why we don't recommend this configuration, especially when a decent SSD can be purchased for just a few hundred bucks.
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
The SLOG is supposed to be non-volatile.
Assuming your write cache has a BBU (and if it doesn't, you are even crazier than I am), on an improper shutdown the data held in the battery-backed cache will be lost if the BBU goes dead before the system is powered back on.

Yep, got it. My P400 hosting that mirror for the SLOG is absolutely battery-backed; otherwise I wouldn't have done a SLOG at all. I think everyone who can properly spell IT should understand the need for a write cache to be battery-backed.

I've entertained the idea of using a P800 (otherwise just lying around) simply because it actually has two batteries instead of one, so it would keep the data for longer. The only downside to using a RAID controller at all instead of two directly connected drives is that FreeBSD/FreeNAS hates RAID and can't monitor it for hardware failure. I guess that one's on me to pay attention until I get some enterprise SSDs. (Wish in one hand and check my bank account with the other.)

All this said, my G5 is running off a UPS which is monitored by my VMware vCenter. When it goes on battery, it triggers a remote shutdown of FreeNAS after powering off all the VMs in the cluster FreeNAS is supporting. Further details are off topic...

I'm beginning the search for SSDs that are hot-swappable in an HP G5, but my experience is that it's not as easy as running to the store and getting a nice SSD like all the users buying home PC products; enterprise stuff is usually expensive. God bless eBay though...

I'm wanting to turn on sync=always to test a battery-backed RAID controller mirror vs. an enterprise SSD. I think it would be a cool comparison to add to the 'Using an HP G5' experience, so people can evaluate whether to seriously consider it or not.
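The switch itself is just a property on whatever dataset or zvol backs the iSCSI extent, something like this (the dataset name is a placeholder):

Code:
zfs set sync=always tank/iscsi-vol
# run the comparison, then put it back to the default
zfs set sync=standard tank/iscsi-vol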

I know this thread is older, but it's a pretty cool one since it relates to recycling the old enterprise hardware most IT shops have lying around, and helping with the decision to chuck it or re-purpose it into a storage system for maybe low-value or backup data.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
I'd like to see the same thing done with SSDs and a controller with a decent-sized cache. Once you blow through the cache you are limited by disk speed, but before that you basically have a tiny NVRAM solution.

I like the threads on old repurposed gear as well. A lot of this stuff is retired because it's off-lease or power-hungry, not because it is unreliable.

Unfortunately, we may see fast enough PCIe cards in the near term to make this moot.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
But, on the bright side, most RAID controllers are 512MB or larger, and that's going to be difficult to fill for most people.

For starters, blocks that are >32KB are committed to the pool immediately. Whatever is left still has to come in from the LAN port. If you are doing 1Gb LAN then you are going to see, best case, about 80MB/sec or so. Since a transaction group is written every 6 seconds regardless, the most that can accumulate is about 480MB.
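If you want to sanity-check that on your own box, the transaction group interval is exposed as a sysctl on FreeBSD/FreeNAS (the default on your particular build may not be exactly the 6 seconds I used above):

Code:
sysctl vfs.zfs.txg.timeout
# worst case sync data in flight on 1GbE: ~80 MB/s * 6 s = ~480 MB,
# which still fits inside a 512 MB controller cache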

Now, if you are doing 10Gb LAN then things change quite a bit. Your LAN throughput can be quite large, and your ZIL can, in theory, be a limiting factor. This isn't generally a problem, because very, very few workloads are capable of generating that kind of consistent write load made up of blocks 32KB or smaller.

On the downside, though, anything you write to the SLOG must eventually be written to the SLOG drive. Yes, this means it's possible for data to land in the RAID controller's cache, get written to the pool at the next transaction group, and only after that actually be flushed out to the SLOG drive. Very backwards, but as long as you don't have a scenario where the BBU goes dead with ZIL transactions still stored, all will be fine.

If you are running 10Gb (or 1Gb with a horribly slow ZIL drive) it is totally possible to choke the RAID controller's cache and ZIL drive, which results in your pool basically hitting a brick wall on writes. The ZIL doesn't take kindly to suddenly having high latency because you've filled the RAID controller's cache and the ZIL drive can't keep up. However, again, unless you're doing something stupid like using a drive that is wholly inadequate for your server anyway, you shouldn't run into this problem.

Edit: This is why I tell people that even an 8GB ZIL drive is ginormous. It's so oversized you are incapable of filling it. It is also why those $3000+ STEC RAM drives that are 8GB are just freakin' amazing. 8GB is plenty for a ZIL/SLOG and is rocketship fast since it's all RAM. :)
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
If you are running 10Gb (or 1Gb with a horribly slow ZIL drive) it is totally possible to choke the RAID controller's cache and ZIL drive, which results in your pool basically hitting a brick wall on writes. The ZIL doesn't take kindly to suddenly having high latency because you've filled the RAID controller's cache and the ZIL drive can't keep up. However, again, unless you're doing something stupid like using a drive that is wholly inadequate for your server anyway, you shouldn't run into this problem.
I would really like to know some commands and how to interpret the data for this type of scenario. Can you explain how to determine if the described scenario was taking place? Maybe another thread since it's off topic? Or a link to something that already explains the exact same thing?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Sorry, what you are asking for is very advanced-level ZFS. It's not something I could write up and explain in less than a few hours. You're talking deep-level analysis, and that's far beyond the purview of this forum (and definitely far, far beyond free support).
 