SOLVED FreeNAS with HP SmartArray - Tips and Tricks

ethanf

Dabbler
Joined
Jul 1, 2017
Messages
26
Here is my experience with HP SmartArray controllers with FreeNAS. Maybe this can help out others :).

Preface: Old HP server hardware is cheap! I have over 75TB of data and not much extra cash to drum up a proper solution. Keep in mind that this is for my home and the data isn't mission critical. I'm not a daily linux user. I know that you lose out on a lot of features by running each drive in an individual RAID0 logical volume. Here are some of the things I've done to mitigate potential issues.

  • Start by updating your HP SmartArray (SA) controller's firmware. Originally, the older controllers did not support drives over 2TB, but the newer firmware does. I'm running 20x 5TB drives.
  • As I stated in the preface, the HP SAs don't support JBOD mode, so you need to put each drive in its own RAID0 logical volume.
  • After creating your first vdev and before loading any data into it, run 'zpool status' and save the output. This gives you the list of device names and gptids. If you try to view the hard disk serial numbers from the web UI, you'll see the same serial for every drive because it's coming from the RAID controller. Instead, disconnect/remove one drive at a time and see which device goes offline, so you can label your drives on your chassis/enclosure (see the sketch after the monitoring script below). This makes life a lot easier in the event of a drive failure.
  • Since you lose out on some of the automatic SMART drive monitoring, you'll have to do this manually through the use of cciss_vol_status and smartctl. I got cciss_vol_status installed by:
  1. Create a jail.
  2. Download, compile, and install cciss_vol_status which is found here: https://sourceforge.net/projects/cciss/files/cciss_vol_status/ (look for cciss_vol_status-1.12.tar.gz)
    tar zxvf cciss_vol_status-1.12.tar.gz
    cd cciss_vol_status-1.12
    ./configure
    make
    make install
    (if you get permission errors, I found it easier to copy the archive to /tmp/ and work from there)
  3. This will install to the jail's /usr/local/bin/, so if you were to access this from outside the jail, it would be at /{jail storage}/{jail name}/usr/local/bin/cciss_vol_status
  4. You can read up on the docs, but the typical usage would be something like: cciss_vol_status -s /dev/ciss0
    note that /dev/ciss0 is your RAID controller, not a particular drive. In my box I have two: ciss0 which is a p410 and ciss1 which is a p812.
  • Luckily, smartctl is already installed. So your typical command would be like: smartctl -d cciss,0 -a /dev/ciss0
    where cciss,0 is the disk and again /dev/ciss0 would be your controller. So if you want to look at disk 2 on the controller, you would do: smartctl -d cciss,1 -a /dev/ciss0
    Or disk 5 on controller 2: smartctl -d cciss,4 -a /dev/ciss1
  • I then hacked together a daily health script to email me the output of some of these commands. Note that I used BiduleOhm's scripts (https://forums.freenas.org/index.php?posts/175548/) as a starting point.
    Code:
    #!/bin/sh
    
    ### Parameters ###
    logfile="/tmp/smart_report.tmp"
    email="email@gmail.com"
    subject="Status Report for FreeNAS"
    drives="0 1 2 3 4 5 6 7 8 9 10 11"                            # cciss port numbers to check on /dev/ciss0
    devdrives="da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11" # not used below (leftover from the script this was based on)
    cciss_vol_status="/mnt/data/jails/cciss/usr/local/bin/cciss_vol_status"
    tempWarn=40      # these thresholds/symbols are also leftovers and aren't used in this version
    tempCrit=45
    sectorsCrit=10
    testAgeWarn=1
    warnSymbol="?"
    critSymbol="!"
    
    
    rm -f "$logfile"    # -f so the first run doesn't complain when the file doesn't exist yet
    
    
    ###### zpool status ######
    echo "########## zpool Status ##########" > "$logfile"
    zpool status >> "$logfile"
    echo "" >> "$logfile"
    
    
    ###### raid status ######
    echo "" >> "$logfile"
    echo "########## raid Status ##########" >> "$logfile"
    "$cciss_vol_status" -s /dev/ciss0 >> "$logfile"
    echo "" >> "$logfile"
    
    
    ###### SMART status for each drive ######
    for drive in $drives
    do
    	brand="$(smartctl -d cciss,"$drive" -i /dev/ciss0 | grep "Device Model" | awk '{print $3, $4, $5}')"
    	serial="$(smartctl -d cciss,"$drive" -i /dev/ciss0 | grep "Serial Number" | awk '{print $3}')"
    	(
    		echo ""
    		echo "########## SMART status report for ${drive} drive (${brand}: ${serial}) ##########"
    		smartctl -d cciss,"$drive" -H -A -l error /dev/ciss0
    		smartctl -d cciss,"$drive" -l selftest /dev/ciss0 | grep "# 1 \|Num" | cut -c6-
    		echo ""
    	) >> "$logfile"
    done
    sed -i '' -e '/smartctl 6.5/d' "$logfile"
    sed -i '' -e '/Copyright/d' "$logfile"
    sed -i '' -e '/=== START OF READ/d' "$logfile"
    sed -i '' -e '/SMART Attributes Data/d' "$logfile"
    sed -i '' -e '/Vendor Specific SMART/d' "$logfile"
    sed -i '' -e '/SMART Error Log Version/d' "$logfile"
    
    
    ### Send report ###
    mail -s "$subject" "$email" < "$logfile"
    
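For the drive-labeling tip earlier in the list, here's a rough sketch of the kind of one-off commands I mean. The /dev/ciss0 device and the port numbers are just what's in my box--adjust them for your own controller(s):

Code:
    #!/bin/sh
    # Sketch: record the gptid <-> daX mapping and pull each disk's real
    # serial number through the Smart Array, so the drives can be labeled
    # on the chassis.

    # gptid to device mapping (run this before loading any data)
    zpool status
    glabel status

    # real serial number behind each single-drive RAID0 logical volume
    for port in 0 1 2 3 4 5 6 7 8 9 10 11
    do
        echo "=== /dev/ciss0 port ${port} ==="
        smartctl -d cciss,"${port}" -i /dev/ciss0 | grep -E "Device Model|Serial Number"
    done

Between the gptid list and the real serial numbers, pulling one drive at a time to see which daX disappears is a lot less error-prone.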
In summary: I know it's not perfect and I'm open to any suggestions as I'm still new to FreeNAS. Please don't suggest to buy new hardware because I'm making do with what I have. I hope this helps anyone who is in the same boat as me.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,970
Please don't suggest to buy new hardware because I'm making do with what I have.
I think you need new hardware ;)
 

ethanf

Dabbler
Joined
Jul 1, 2017
Messages
26
I'm taking donations :D
 

Martin Maisey

Dabbler
Joined
May 22, 2017
Messages
34
I'm taking donations :D

Seriously though, you must have spent a fortune on your drives. Why not get a couple of 8 port SAS HBAs and use the motherboard ports for the other 4? It'd probably cost not much more than a single drive ...

75TB of data is one hell of a lot to be taking risks with. Even if you are able to redownload or re-rip it (or however you got hold of your "non mission critical data" in the first place ;)), that's a lot of time to spend!

Each to their own, though :smile:

---

Chenbro RM23212-O12C 12-bay 2u chassis/backplane
2 x E5645 6-core Xeon
128GB RAM
LSI 9211-16i HBA
4 x WD RED 4TB, mirrored in zpool with Intel 320 80GB slog
2 x Kingston SSDNow UV400 120GB in mirrored boot zpool
 
Last edited:

ethanf

Dabbler
Joined
Jul 1, 2017
Messages
26
So this new FreeNAS setup is a definite move in the right direction for me. Over the past few years I've just been buying a new 5TB external drive every few months or so and they've all been connected via USB. It started getting to be a pain managing data across 14 individual drives. Another thing is I was dreading the migration of the data.

I finally broke down recently and bought an HP DL180se for $237 with 2x quad-core Xeons and 32GB ECC RAM. It has 12 bays, so I loaded 6 new drives in. I set up FreeNAS and transferred data from 6 existing drives, then added those 6 to the pool. I'm currently transferring my third batch of 6 drives. Now that the server chassis is full, I ordered a Norco 20 bay case and power supply. I'm going to use an HP SAS Expander (only $20 from eBay) with a 1x PCIE to 16x PCIE riser (similar to this http://www.ebay.com/itm/162572171188) to get power to it. The SAS Expander doesn't require a motherboard--it's fully independent and just gets power via the PCIE slot. This will allow me to run the entire Norco 20 bay off the same controller (it has like 128 data lanes) AND since there's no motherboard in the case, I can probably squeeze another 20 drives or so inside--they just won't be hot swappable. That should set me up to where I can continue to buy groups of 6 drives and not run out of physical space in the case. I figure by the time that happens there will be some storage technology breakthrough.

So my shopping list in order to get this up has been (rough prices here):
HP DL180se - $237
HP p812 Controller - $40
HP SAS Expander - $20
PCIE Riser - $4
Norco Case - $325
Power supply - $60

Everything else I already owned.

So I came from individual drives with absolutely no redundancy at all to RAIDZ1 groups of 6 drives in a single zpool. Yes, I know, everyone loves to hate on RAIDZ1 but again, this is an acceptable risk since I'm not pouring money into it.
 

Martin Maisey

Dabbler
Joined
May 22, 2017
Messages
34
So this new FreeNAS setup is a definite move in the right direction for me. Over the past few years I've just been buying a new 5TB external drive every few months or so and they've all been connected via USB. It started getting to be a pain managing data across 14 individual drives. Another thing is I was dreading the migration of the data.

That was sort of what I meant - doing *anything* with this volume of data is painful, let alone losing it.

This will allow me to run the entire Norco 20 bay off the same controller (it has like 128 data lanes) AND since there's no motherboard in the case, I can probably squeeze another 20 drives or so inside--they just won't be hot swappable.

That sounds kind of interesting from a cooling perspective - albeit I have no real experience in this. Certainly sounds like a fairly unique setup that's worth getting some proper advice on from one of the forum experts.

So I came from individual drives with absolutely no redundancy at all to RAIDZ1 groups of 6 drives in a single zpool. Yes, I know, everyone loves to hate on RAIDZ1 but again, this is an acceptable risk since I'm not pouring money into it.

I'm sure you know this, but one advantage of the USB drive setup is that, while there's no redundancy, a failure is localised to the data stored on one drive, and there's a good chance you might be able to restore some data as it's on a consumer filesystem. A slip-up with a zpool and everything is gone. Irretrievably.

If you haven't already read @cyberjock's intro linked from https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/ it's definitely worth scaring yourself with it. While it may feel like doing FreeNAS/ZFS with some RAIDZ1 is a step up (and it is, in terms of manageability etc.), it can actually be a step back in terms of resilience in practice. Especially if you don't have a good off-server backup.

I've (sometimes knowingly, sometimes not) played a bit fast and loose with hardware before, but everything's snapshotted and replicated to a second ZFS box, everything I care about is also sent to CrashPlan cloud, and the really important photos/videos/keepsakes etc are also on Dropbox and Google Drive. Which gives me a certain amount of confidence that if I stuff up, it won't be disastrous. You may well have something similar, so apologies if I'm stating the obvious above.

Also, I'm paranoid, mostly because I don't feel qualified to accurately assess the real risks of the corners I might cut - and it appears that the storage engineers on the forum are the most paranoid of all. But we all have different risk tolerances :smile:

Martin


Sent from my iPad using Tapatalk
 

ethanf

Dabbler
Joined
Jul 1, 2017
Messages
26
Thank you for your comments! I really appreciate getting other views. It's one of those things where I feel like there is no true solution: "unlimited storage for free" :p


That was sort of what I meant - doing *anything* with this volume of data is painful, let alone losing it.
Yeah, I understand. The biggest issue was data duplication, just because I had multiple places to store the same type of data. I don't use the deduplication feature on FreeNAS, but it shouldn't be too much of an issue now.

That sounds kind of interesting from a cooling perspective - albeit I have no real experience in this. Certainly sounds like a fairly unique setup that's worth getting some proper advice on from one of the forum experts.
You can call it a rudimentary Backblaze Pod :). I do plan on having several fans throughout. But from my limited research, the biggest producers of heat are the CPUs. I believe the wattage consumption of 20 drives is close to a couple of CPUs under load.

I'm sure you know this, but one advantage of the USB drive setup is that, while there's no redundancy, a failure is localised to the data stored on one drive, and there's a good chance you might be able to restore some data as it's on a consumer filesystem. A slip-up with a zpool and everything is gone. Irretrievably.
That's something I struggled with a bit but I decided to just go for it. I dislike the fact that a failed vdev results in a total pool loss--very inconvenient.

If you haven't already read @cyberjock's intro linked from https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/ it's definitely worth scaring yourself with it. While it may feel like doing FreeNAS/ZFS with some RAIDZ1 is a step up (and it is, in terms of manageability etc.), it can actually be a step back in terms of resilience in practice. Especially if you don't have a good off-server backup.
I did actually come across that. Not sure how I found it but I did when doing initial research. Thank you for the link because I didn't save it :)

I've (sometimes knowingly, sometimes not) played a bit fast and loose with hardware before, but everything's snapshotted and replicated to a second ZFS box, everything I care about is also sent to CrashPlan cloud, and the really important photos/videos/keepsakes etc are also on Dropbox and Google Drive. Which gives me a certain amount of confidence that if I stuff up, it won't be disastrous. You may well have something similar, so apologies if I'm stating the obvious above.
I will be keeping 2 separate copies of the important stuff like family photos, videos, and docs. I'll be rsyncing them to 2 different drives (NTFS) stored in different locations.
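Roughly what I have in mind for those copies is something like this (the paths here are just placeholders, not my real ones):

Code:
    # sketch only: copy the important datasets to an NTFS-formatted external drive
    # -rt instead of -a because NTFS can't keep ZFS ownership/permissions,
    # and --modify-window=1 avoids needless re-copies caused by NTFS's
    # coarser timestamp resolution
    rsync -rtv --modify-window=1 /mnt/data/family/ /mnt/offsite1/family/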

I've thought about doing the Amazon unlimited cloud storage but I haven't pulled the trigger yet. That would solve many problems but I can't imagine sending all that data up there.

As for the hardware, I read that since I'm using single-drive RAID0 logical units, if a controller dies I should be able to drop in a replacement. Those old controllers are cheap too, so I've considered keeping one as a spare.
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
I would not have used raidz1 personally.
 

ethanf

Dabbler
Joined
Jul 1, 2017
Messages
26
I would not have used raidz1 personally.
I didn't want to use raidz2 just because of the extra disk, but I suppose I'll create a new pool in raidz2 and slowly move over to that. I read the article in cyberjock's signature and it makes sense. With everyone saying not to use it, I guess I'd have to be pretty stupid not to listen.

Also, I can say that all this extra monitoring might have saved some headache. One of my old external 5TB drives that I added to the pool is starting to throw SMART errors. I wouldn't have noticed until it actually failed if it were still connected via USB.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
If you do decide to migrate to RAIDZ2 you might want to consider 8-way RAIDZ2, which will give you 25% parity overhead vs 33% with 6-way. Or even 9-way. I wouldn't go beyond 9-way.
 

ethanf

Dabbler
Joined
Jul 1, 2017
Messages
26
Right. The storage loss is too much with a 6-way raidz2. I was thinking of doing 8. Is there a particular reason why you don't recommend going beyond 9? Just an increased risk of multiple failures? To be honest, I probably won't go past 8 just because I'll have to get 8 drives in order to expand the pool, lol.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
Right. The storage loss is too much with a 6-way raidz2. I was thinking of doing 8. Is there a particular reason why you don't recommend going beyond 9? Just an increased risk of multiple failures? To be honest, I probably won't go past 8 just because I'll have to get 8 drives in order to expand the pool, lol.

I've heard performance drops off a cliff. Also, 9 is quite efficient with relatively minimal padding loss; padding loss increases dramatically after 9.

Have a play with this to see what I mean:
http://wintelguy.com/zfs-calc.pl
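The raw numbers are just the 2 parity drives divided by the vdev width, before any padding or metadata overhead, e.g.:

Code:
    # back-of-the-envelope RAIDZ2 parity fraction per vdev width;
    # the calculator linked above also accounts for padding effects
    awk 'BEGIN { for (n = 6; n <= 12; n++) printf "%2d-wide RAIDZ2: %4.1f%% parity\n", n, 200 / n }'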
 

Martin Maisey

Dabbler
Joined
May 22, 2017
Messages
34
I've thought about doing the Amazon unlimited cloud storage but I haven't pulled the trigger yet. That would solve many problems but I can't imagine sending all that data up there.

I think unfortunately they've just shuttered that anyway:

https://www.theregister.co.uk/2017/06/08/last_unlimited_storage_holdout_amazon/

And are cracking down on rclone, at least:

https://www.theregister.co.uk/2017/05/23/amazon_drive_bans_rclone_storage_client/

Plays to what you were just saying about there being no such thing as a free lunch in storage - even if you're Amazon ;-)


Sent from my iPad using Tapatalk
 

Martin Maisey

Dabbler
Joined
May 22, 2017
Messages
34
You can call it a rudimentary Backblaze Pod :). I do plan on having several fans throughout. But from my limited research, the biggest producers of heat are the CPUs. I believe the wattage consumption of 20 drives is close to a couple of CPUs under load.

Again, playing devil's advocate, the other thing to think about is vibration - especially if you're using normal desktop hard drives, which I imagine you may well be, if you were purchasing them without FreeNAS and your current setup in mind. One of the differences with the WD RED and similar is that they are supposedly designed to be more tolerant of running in close proximity to other drives. There might be something you can do with rubber grommets that would partially mitigate this, as you'll be running some seriously dense storage. Backblaze appear to spend quite a lot of time thinking about this, as apparently it makes quite a difference - https://www.backblaze.com/blog/180tb-of-good-vibrations-storage-pod-3-0/

I think people have had good success with consumer drives, but it does increase the risk of failure, so your decision to move to RAIDZ2 sounds like a good one.

BTW, if you have any WD Greens, I can personally attest to the point in Cyberjock's article about Intellipark configuration being really, really important. I hadn't done sufficient research before putting them into my FreeNAS array, and after a number of months' use one drive started to throw SMART and scrub errors due to bad sectors. Even though I immediately reconfigured all the drives in the array once I'd realised my mistake, by the time I was able to purchase replacement WD REDs and transfer all of the data across, three of the four drives in my pool were on their last legs. The experience was enough to make me much more careful to use kit designed for the task at hand ...

I still keep them, as it's useful to add them (without using them) to a server, just to check the SMART notifications are working correctly :smile:

Cheers,

Martin


Sent from my iPad using Tapatalk
 
Last edited:

ethanf

Dabbler
Joined
Jul 1, 2017
Messages
26
Dang, that sucks, haha. I didn't realize that they axed that plan. I suppose I'll just have to keep backing up the important stuff to various external drives stored at different places. After Stux shared that zfs calculation tool, I think I'll move to a 9-drive raidz2 since it provides the best bang for the buck. I'll keep up my habit of buying a new drive every month or so, that way by the time I need to expand it won't be a huge cost all at once.

Again, playing devil's advocate, the other thing to think about is vibration - especially if you're using normal desktop hard drives, which I imagine you may well be, if you were purchasing them without FreeNAS and your current setup in mind. One of the differences with the WD RED and similar is that they are supposedly designed to be more tolerant of running in close proximity to other drives. There might be something you can do with rubber grommets that would partially mitigate this, as you'll be running some seriously dense storage. Backblaze appear to spend quite a lot of time thinking about this, as apparently it makes quite a difference - https://www.backblaze.com/blog/180tb-of-good-vibrations-storage-pod-3-0/
So my plan was to make similar rails to hold rows of 6 drives, and I also planned on using screws with rubber grommets. I was hoping that would be enough, but maybe not. One trick I've used in other applications (smoothing out 3-space sensor/accelerometer readings on drones) was to strap the boards to about half-inch thick memory foam. I might try something like that.

BTW, if you have any WD Greens, I can personally attest to the point in Cyberjock's article about Intellipark configuration being really, really important. I hadn't done sufficient research before putting them into my FreeNAS array, and after a number of months' use one drive started to throw SMART and scrub errors due to bad sectors. Even though I immediately reconfigured all the drives in the array once I'd realised my mistake, by the time I was able to purchase replacement WD REDs and transfer all of the data across, three of the four drives in my pool were on their last legs. The experience was enough to make me much more careful to use kit designed for the task at hand ...
I've been using Seagate drives strictly due to cost. That's an interesting post by cyberjock (https://forums.freenas.org/index.php?threads/hacking-wd-greens-and-reds-with-wdidle3-exe.18171/). Based off the cycle counts he's talking about, it seems like these Seagates don't have the same issue with the heads parking. Although I have been using them in their USB enclosures, so it could be that they perform differently once removed. This is something I'll monitor for sure. The drive I mentioned earlier that was showing signs of failure only showed 60k load cycles over 1.87 years of lifetime.


Again, I want to thank everyone who has taken the time to post. I really have gained a wealth of knowledge.
 

ethanf

Dabbler
Joined
Jul 1, 2017
Messages
26
Just wanted to follow up on this nearly 2.5 years later in case anyone else finds it helpful:
  • I'm still running on the same server and SAS controller. I did buy another SAS controller and expander as spares just in case.
  • I went the route of 9x drives in raidz-2. I now have 4 vdevs like this for a total of 211.5 TB at 73% utilized (buying another 9 drives in the next 6 months most likely).
  • I've had a total of two drive failures which were replaced a day or so later without any issues. Both were older 5TB drives that were 3 and 4 years old--so I feel like I got a fair life-expectancy with them.
  • Vibration isn't bad enough to notice within the Norco case that holds a power supply, SAS expander, and drives.
  • I'm still on 11.1-U2 and it has been going strong.
 

ethanf

Dabbler
Joined
Jul 1, 2017
Messages
26
Another update after 4 years of service:

I decided to move away from the HP Smart Array cards because they were a hassle to deal with. There were times when a card thought a drive 'moved' and it wouldn't bring ANY of the drives online until I returned the drive to its original position. If there was a single drive failure, again, it wouldn't bring any drives online until that one drive was replaced. I had to delete that logical drive to get the pool online (in a degraded state) until I got a replacement and could create a new logical drive.

I got a Supermicro motherboard and chassis with 36 bays, so I'm going to migrate everything to that. The 36 bays run off an LSI SAS3008 HBA that I updated to the latest firmware. I also got another LSI 9206-16e HBA so I can attach my other JBOD boxes. I suspect having HBAs instead of RAID cards will make life much easier.

Another lesson I took away is that while it is nice to have one massive pool, it becomes a big pain if you need to migrate it. I have a single pool with 45 drives and a lot of storage space. I believe the HP Smart Array cards change the gptid via the logical drive, so if I were just to plug the drives into an HBA, I don't think it would work, based on the experience I had with drives "moving". For example, if I were to delete the logical drive in the HP Smart Array and then re-add the same drive as a new single-drive RAID0 logical drive, TrueNAS would think it's a new drive. On my new server I'm going to create multiple pools, each made of a single RAIDZ-2 vdev of 12 drives. I'll just have to deal with the inconvenience of multiple pools in case I ever need to move the data again. This way I don't have to worry about doubling my storage just to migrate data.
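For the actual data move, the plan is a plain snapshot-and-send between pools, something along these lines (the pool and dataset names below are placeholders, not my real ones):

Code:
    #!/bin/sh
    # Sketch: replicate everything from the old pool to a new pool, then
    # catch up with an incremental send before the final cutover.
    zfs snapshot -r oldpool@migrate1
    zfs send -R oldpool@migrate1 | zfs receive -uF newpool/old

    # later, after the bulk copy: send only what changed since migrate1
    zfs snapshot -r oldpool@migrate2
    zfs send -R -i @migrate1 oldpool@migrate2 | zfs receive -uF newpool/old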
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
The benefit of having all your vdevs in one pool, of course, is that you have more vdevs in the pool :), which increases IOPS.

I recently reconfigured my primary pool into mirrors because I needed more vdevs to make it a 10gbe class node, and my storage requirements are not growing as fast as they once were.

Still using a 2016-era system :) and it's doing fine
 

ethanf

Dabbler
Joined
Jul 1, 2017
Messages
26
Hmm. You bring up a good point about having enough vdevs for a 10GB connection. I intend to migrate my 10GB connection to the new server. I use TrueNAS to store VM images and that faster connection makes a difference. Once I move that connection I’ll have to test it out with a single vdev of 12 drives.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,906
I intend to migrate my 10GB connection to the new server. I use TrueNAS to store VM images and that faster connection makes a difference. Once I move that connection I’ll have to test it out with a single vdev of 12 drives.
A single RAIDZ2 VDEV for use with VM images is already not ideal with a 1 Gbps connection when we talk about classic HDDs. For 10 Gbps you should seriously look at SSDs, or have a lot (possibly more than 20) of VDEVs. A single VDEV with HDDs will not give you more than roughly 300-400 IOPS, whereas with SSDs you will get 10k+.

An alternative approach, which I have chosen for my home lab, is to put a single 1 TB SSD into my VM hosts and then have regular backups and snapshots onto my RAIDZ2 pool. With this, the worst-case scenario is to lose 1 hour of data changes. That case, however, is pretty unlikely and I could also live with it.
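To illustrate the hourly idea (assuming the SSD in the VM host is itself a small ZFS pool; every name below is made up, and in practice the built-in periodic snapshot and replication tasks handle this bookkeeping for you):

Code:
    #!/bin/sh
    # Illustration only: hourly snapshot of the VM dataset on the host's SSD
    # pool, plus an incremental send of the last hour's changes to the RAIDZ2
    # pool. Pool, dataset, and host names are placeholders.
    prev="ssd/vms@hourly-09"    # previous snapshot, already on the backup side
    curr="ssd/vms@hourly-10"
    zfs snapshot "$curr"
    zfs send -i "$prev" "$curr" | ssh truenas zfs receive -u tank/backup/vms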
 