SCALE on R510

DaAwesomeP

Hey everyone,

I recently acquired the following hardware and plan to run SCALE on it:
  • Dell R510
    • PERC H700 internal RAID controller 512MB cache, original backplane
    • PERC H800 external RAID controller 1GB cache
    • 2x internal 2.5" 146GB HDD (for boot)
    • 12x 2TB 7200 RPM SATA drives in hot-swap bays
    • 28GB RAM PC3-10600 ECC (will work toward upgrading this over time; this server will not be high-traffic)
    • Redundant PSU
    • Dual Gigabit onboard NICs (probably Intel, unsure)
    • Broadcom 5709 Dual Port 1GbE NIC w/TOE iSCSI
    • Dual Xeon E5640
  • Dell PowerVault MD1200
    • Redundant/in-out SAS
    • 12x 2TB 7200 RPM SAS drives in hot-swap bays
    • Redundant PSU
And now for my hardware and setup questions:
  1. What preparation or configuration (if any) do I need to do on the RAID cards? I do not know how they are currently configured.
  2. Realistically I am only going to use one NIC, or two of them with LACP. Should I use the onboard NICs or the ones on the Broadcom card?
  3. Will I be able to monitor the redundant PSUs from SCALE, or will only the Dell management controller be able to do that? (See the sketch after this list.)
  4. Anything else I need to do before I install SCALE?
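For question 3, my understanding is that the R510's BMC/iDRAC exposes the PSU sensors over IPMI, so something along these lines should show them from the OS side. This is only a rough sketch assuming the IPMI kernel modules are loaded and ipmitool is present (I have not tried it on SCALE yet):

  ipmitool sdr type "Power Supply"   # list power-supply sensor records (presence / failure status)
  ipmitool sensor                    # dump all sensor readings, including PSU-related ones if the BMC reports them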
And my ZFS questions:

I am unfortunately in a situation where this server may not be physically accessible for a week or more at a time to swap in new drives, or a replacement drive might not be on hand, so RAID-Z2 with some hot spares seems like a good solution for me. I am thinking of the following options (there is a command sketch after the list); please correct my misunderstandings and make recommendations:
  1. Each chassis (R510, MD1200) is an independent 11-drive RAID-Z2 array with one hot spare. This keeps the SATA/H700 and SAS/H800 drives separate in case performance or response-characteristic differences between the drives and RAID cards matter, and it allows 2 drives plus the hot spare to fail before the next failure loses data. The downside is that the spares cannot be shared between the two chassis. Do I need to worry about the RAID card/SATA/SAS differences?
  2. Both chassis combined into a single 20-drive RAID-Z2 array with 4 hot spares. This gives the same usable capacity as the option above but seems more fault-tolerant: all 4 hot spares would have to be used up and two more drives would have to fail (6 drives) before the next failure risks data loss, and the spares are usable across the entire array. This is of course what I will choose if the mismatched RAID cards and SAS/SATA drives don't matter within one array.
  3. But maybe that is overly redundant? I could also do a 22-drive RAID-Z2 array with 2 hot spares, which gives me 40 TB of usable storage and means 4 drives must fail before the next failure causes data loss.
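For reference, here is roughly what I am picturing for option 2 at the command line, mostly so someone can tell me if I am misunderstanding how ZFS hot spares work. I would actually build the pool in the SCALE web UI, and the pool name and sdX device names below are just placeholders, not my real disks:

  # one 20-disk RAID-Z2 vdev plus 4 shared hot spares (placeholder device names)
  zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn sdo sdp sdq sdr sds sdt \
    spare sdu sdv sdw sdx
  zpool status tank   # the spares should show up in their own "spares" section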
Thanks!
 

DaAwesomeP

As a side note, I know that I do NOT want to use the RAID cards as RAID cards, so really my question is: can I use them as HBAs?
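In case it helps anyone answer: the rough check I know of for whether a controller is passing drives through raw (rather than hiding them behind virtual disks) is something like this from a booted Linux environment, with /dev/sda just as an example device:

  lsblk -o NAME,MODEL,SERIAL,SIZE   # pass-through disks show their real model/serial; PERC virtual disks show up as "PERC ..." volumes
  smartctl -i /dev/sda              # SMART info readable directly (no -d megaraid,N needed) suggests the disk is not behind a RAID volume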
 

DaAwesomeP

Well, with more reading it looks like I am probably replacing the two RAID cards rather than reflashing them (results seem to be very mixed). I will definitely try reflashing if it seems worth it, though.

Any suggestions? One replacement needs internal SAS connections and the other needs external. This document seems to suggest the H200 and the "6Gbps SAS HBA LSI 2032."

EDIT: I see tons of listings on eBay for reflashed H200 and H200e cards that claim to work in the Dell storage slots and with TrueNAS. Are these what I want?
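If I do go with one of those reflashed cards, my plan for sanity-checking it would be roughly the following, assuming the card really is LSI SAS2008-based and the sas2flash utility is available (I have not done this myself yet):

  sas2flash -listall    # list SAS2 controllers and their firmware; an IT-mode image is normally reported with an "IT" product ID
  sas2flash -list -c 0  # more detail for controller 0 (firmware, BIOS, NVDATA versions)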
 