Dell R620 w/ 10Gbps and SAS 6Gbps HBA?

Status
Not open for further replies.

mevans336

Dabbler
Joined
Aug 16, 2016
Messages
23
We have an R620 with an Intel X540-T2 10Gbps adapter (2 x 10Gbps), a pair of Intel 1Gbps ports, and a pair of Dell SAS 6Gbps HBAs connected to a pair of PowerVault chassis. The server has a Xeon E5-2637 v2 with 32GB of ECC RAM.

In the chassis I have the following SAS/SATA drives:

16 x 136GB 15k RPM SAS 6Gbps (2.17TB raw)
14 x 931GB WD Red SATA (13TB raw)
4 x Intel 250GB DC3500 SATA SSD (1TB raw)

They are split evenly across both PowerVault chassis, and each chassis has 2 x 6Gbps SAS connections, one to each HBA.

We want to turn this from a Windows Storage Server into a FreeNAS server. We'd like to use about 10TB for backups, then use the SAS drives and SSD drives for a fast iSCSI or NFS or SMB mount for various uses.

First - should I have any issues with the 10Gbps NIC I have? Second - what about the HBAs?

If the answer to the previous two is no - how should I configure the pools? Backup performance is not that important, 100MB/s-200MB/s should be plenty. Data security is not very important - being able to lose 1 or 2 drives without losing everything would be sufficient.

For the iSCSI/NFS/SMB mounts, I'm assuming I'll want to create some sort of ZFS pool and use the SSD drives as a caching tier?

Thanks for any guidance you can provide!
 
Last edited:

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
I'm not sure about your hardware, but I'll comment on the storage config if the hardware turns out OK.

For the backup storage, I'd do 2x 7-drive RAIDZ2 vdevs. You might be able to do 1x 14-drive RAIDZ2 vdev, but I feel like 14 drives is a bit wide for a single ZFS vdev. I want to say that Sun recommended no more than 8 drives per vdev, though I can't find a source for that statement at the moment.
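If the hardware pans out, a sketch of that backup layout would look something like this (the pool name and the da* device names are placeholders, not your actual devices):

```shell
# Sketch only: two 7-drive RAIDZ2 vdevs built from the 14 WD Reds.
# "backup" and da0..da13 are placeholders; substitute your real devices
# (camcontrol devlist or the FreeNAS GUI will show them).
zpool create backup \
    raidz2 da0 da1 da2 da3 da4 da5 da6 \
    raidz2 da7 da8 da9 da10 da11 da12 da13
```

Each vdev loses two drives to parity, so usable space works out to roughly 10 x 931GB, or about 9TB before overhead, which is close to your ~10TB backup target.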

For the fast mounts, I suppose I would use the SAS drives as striped mirrors, though you could do 2x 8-drive RAIDZ2 vdevs. I would use two of the SSDs as a mirrored SLOG. With only 32GB of memory, you won't benefit much from adding an L2ARC. As an alternative to using the SAS drives, you could just use the four SSDs in striped mirrors.
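As a sketch, the striped-mirror layout with a mirrored SLOG might look like this (again, pool and device names are placeholders for whatever your system actually enumerates):

```shell
# Sketch only: the 16 SAS drives as 8 mirrored pairs striped together,
# with two of the SSDs as a mirrored SLOG. All names are placeholders.
zpool create fast \
    mirror da0 da1   mirror da2 da3   mirror da4 da5   mirror da6 da7 \
    mirror da8 da9   mirror da10 da11 mirror da12 da13 mirror da14 da15 \
    log mirror ada0 ada1
```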

Honestly, for your application, I feel like your collection of SAS drives is fairly useless. They're so small, it's almost like why even bother? Modern SSDs are not only faster, but they offer just as much (if not more) space, and using hybrid storage with SSDs and big HDDs gives you pretty much all the benefits of both. If it were up to me, I'd sell the SAS drives for whatever I could get and buy some large multi-TB drives. If you can get $50/SAS drive, that buys 4 x 3TB WD Red drives. Put those in striped mirrors, and with the SLOG SSDs, you'd have 6TB of space for your shares.
 

mevans336

Dabbler
Joined
Aug 16, 2016
Messages
23
If it were up to me, I'd sell the SAS drives for whatever I could get and buy some large multi-TB drives. If you can get $50/SAS drive, that buys 4 x 3TB WD Red drives. Put those in striped mirrors, and with the SLOG SSDs, you'd have 6TB of space for your shares.

Luckily, we're a small company, and in 12 months they haven't even exceeded 90GB of total storage usage. The main reason I am exploring this is that I want a little more flexibility, and for some reason I'm seeing horribly inconsistent performance with this server currently running Windows Storage Server and using the drives (via JBOD) under Storage Spaces.

I do have the budget to add anywhere from 32GB to 96GB (64GB to 128GB total) of RAM, however. I have a couple hundred GB of RAM lying around; I'm just not sure it's compatible with this server. Would that enable me to use an L2ARC?

Thanks for your expertise!
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
You really need a lot of memory to make an L2ARC worthwhile.

The level 1 ARC is in memory. Part of the level 1 ARC is needed to manage the data in the level 2 ARC (L2ARC). If you have too little memory, then adding an L2ARC would actually slow you down, because you'd be using your fast cache to manage your slower cache. Usually, I would not recommend an L2ARC unless you had at least 64GB of memory, and were still seeing performance issues. If you don't have any performance issues with 32GB of memory, then adding an L2ARC won't help.

The best rule of thumb for enterprise use is that you should only turn to an L2ARC once you've added all the memory you can to your system.
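One way to check whether more cache would even help is to look at the hit rate your ARC is already getting; on FreeNAS/FreeBSD the counters are exposed via sysctl:

```shell
# Current ARC size plus cumulative hit/miss counters; a hit ratio that
# is already high means an L2ARC would buy you very little.
sysctl kstat.zfs.misc.arcstats.size \
       kstat.zfs.misc.arcstats.hits \
       kstat.zfs.misc.arcstats.misses
```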
 

mevans336

Dabbler
Joined
Aug 16, 2016
Messages
23
The best rule of thumb for enterprise use is that you should only turn to an L2ARC once you've added all the memory you can to your system.

Gotcha. I don't think our use case is anywhere near high enough to require that then. Right now, with the inconsistent performance I see as an admin and power user, our users aren't complaining at all. They mainly do small Office documents, whereas I am copying huge ISOs around. It seems after 100MB or so, Windows Storage pukes, then recovers, then pukes, then recovers to a modest throughput rate.

I'd like to move my VMware Horizon UEM profile storage and VMware vDP storage to this system and off my SAN however. But even then, we're talking <50 users.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
I'd like to move my VMware UEM profile storage and VMware vDP storage to this system and off my SAN however.
There should be no problem with that. If you used your SAS drives in striped mirrors, then you might not even need a SLOG, though I'd still set it up that way since you've already got the SSDs.

The one gotcha that trips people up is that you want to stay below 50% utilization for iSCSI. Once you get above 50%, your performance will plummet very painfully.
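That's easy to keep an eye on from the shell; the CAP column is the one to watch for iSCSI pools:

```shell
# Pool-level capacity overview; for iSCSI-backed storage you want CAP
# to stay below ~50% to avoid the performance cliff.
zpool list -o name,size,alloc,free,cap
```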
 

mevans336

Dabbler
Joined
Aug 16, 2016
Messages
23
There should be no problem with that. If you used your SAS drives in striped mirrors, then you might not even need a SLOG, though I'd still set it up that way since you've already got the SSDs.

Excellent. Now here's hoping my hardware is compatible.

Do you know offhand what version of FreeBSD FreeNAS is based on? I could probably get a decent idea by looking at the FreeBSD HCL.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215

mevans336

Dabbler
Joined
Aug 16, 2016
Messages
23
Thanks for the help, everyone. I got this built yesterday and all seems well. On the RAIDZ2 array used for backups I am seeing a consistent 500MB/s over the network via SMB, which is very impressive for 2.5" 'Intellipower' WD Red drives. Running under Windows Storage Server, I was lucky to see 75MB/s sustained - and it would rise and fall like the stock market. Under FreeNAS, it was a consistent 450MB/s-500MB/s for the entire 3TB transfer.

I am getting an alert about firmware not matching the driver for both my HBAs, but since it's a Dell branded card (and firmware) I'm assuming there isn't anything I can do about that? It's updated to the latest firmware, which was released in 2015.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
What is the output of sas2flash -listall (In CODE Tags please)...

 

mevans336

Dabbler
Joined
Aug 16, 2016
Messages
23
What is the output of sas2flash -listall (In CODE Tags please)...

Code:
LSI Corporation SAS2 Flash Utility
Version 16.00.00.00 (2013.03.01)
Copyright (c) 2008-2013 LSI Corporation. All rights reserved

        Adapter Selected is a LSI SAS: SAS2008(B2)

Num   Ctlr            FW Ver        NVDATA        x86-BIOS         PCI Addr
----------------------------------------------------------------------------

0  SAS2008(B2)     07.15.08.00    07.00.00.19    07.11.10.00     00:03:00:00
1  SAS2008(B2)     07.15.08.00    07.00.00.19    07.11.10.00     00:04:00:00

        Finished Processing Commands Successfully.
        Exiting SAS2Flash.


Here is the actual alert also, if it helps: Firmware version 7 does not match driver version 21 for /dev/mps1. Please flash controller to P21 IT firmware.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
For the record, I have a system that has a Perc H200 flashed to LSI 9211-8i and a LSI SAS 9200-8e.

This is my output:
Code:
# sas2flash -listall
LSI Corporation SAS2 Flash Utility
Version 16.00.00.00 (2013.03.01)
Copyright (c) 2008-2013 LSI Corporation. All rights reserved

        Adapter Selected is a LSI SAS: SAS2008(B1)

Num   Ctlr            FW Ver        NVDATA        x86-BIOS         PCI Addr
----------------------------------------------------------------------------

0  SAS2008(B1)     20.00.04.00    14.01.00.08    07.39.00.00     00:03:00:00
1  SAS2008(B2)     20.00.07.00    14.01.00.07    07.39.02.00     00:04:00:00

        Finished Processing Commands Successfully.
        Exiting SAS2Flash.


Keep in mind that with the most recent updates to FreeNAS, you will get an error message regarding version 21. However, I believe that message can be ignored if you have version 20. Reference: https://forums.freenas.org/index.php?threads/firmware-version-p21.45130/page-4

Seeing that your cards have the same chipset (SAS2008), you could feasibly cross-flash them to LSI SAS 9200-8e firmware and then apply the updated firmware(s). This may be an option, but I don't own one of those cards and can't vouch 100%, although I am 95% sure... ;)
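For reference, the cross-flash procedure people generally follow with sas2flash looks roughly like this; I'm not vouching for the exact file names, which will come from whichever 9200-8e IT-mode firmware package you download:

```shell
# ROUGH sketch of an IT-mode cross-flash. "firmware.bin" and
# "mptsas2.rom" are placeholders for the files in the actual 9200-8e
# package. Flashing the wrong image can brick the card, so verify
# everything against the package's own instructions first.
sas2flash -listall                    # note the controller index and SAS address
sas2flash -o -e 6 -c 0                # erase flash on controller 0
sas2flash -o -f firmware.bin -b mptsas2.rom -c 0   # flash IT firmware + boot BIOS
sas2flash -listall                    # confirm the new FW Ver is reported
```

If the erase wipes the card's SAS address, it can be restored with `sas2flash -o -sasadd <address>` using the address you noted in the first step.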

There is an update from Dell, but I am not sure if that will get you up to version 20. Then again, you may not need to be at version 20 with the Dell update...
http://www.dell.com/support/home/us/en/04/Drivers/DriversDetails?driverId=K161K
 

mevans336

Dabbler
Joined
Aug 16, 2016
Messages
23
That is the (Dell) version I am running. I believe I have another SAS 6Gbps controller here that I could test on. Where can I find the correct firmware version, and are there any special instructions for flashing a non-OEM firmware?
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215

mevans336

Dabbler
Joined
Aug 16, 2016
Messages
23
Honestly, for your application, I feel like your collection of SAS drives is fairly useless.

I'd just like to update this thread and state, you are correct. They wound up being useless. I could never find an adequate use for them.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
I'd just like to update this thread and state, you are correct. They wound up being useless. I could never find an adequate use for them.
How about making them a VM datastore (say, iSCSI) and tossing some VMs there for testing purposes, if you are so inclined? :)
 
Status
Not open for further replies.
Top