Supermicro hardware help

Status
Not open for further replies.

Robert Hart

Dabbler
Joined
Oct 23, 2014
Messages
15
Hi All,

So I thought I had a system all chosen, but I've now been told to buy something else. Any help would be greatly appreciated.

Firstly, my requirements:

12-bay NAS with 4 TB SATA HDDs in RAID 10; the primary use is media storage and running a Plex server for local and remote use. I thought software RAID was best, but I'm open to recommendations, as I have no real clue and am only going by what I've read.

Original system I was going for

CSE-826BE1C-R920LPB
MBD-X10SL7-F-O
Xeon E3 1270
16 GB Crucial memory

I've now been told to buy the following system instead.

CSE-826BE16-R920LPB
MBD-X10SLL-F-O
AOC-S2308L-L8I
Same CPU and memory

I would be very grateful for any help.

My HDDs are HGST Deskstars, and I will also have 2 x 2.5" SSDs in RAID 0 for the FreeNAS OS. I will change HDDs if I have to.

Thanks in advance.
 


danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
You are correct that software RAID is best with FreeNAS--you do not want to use a hardware RAID controller. If you're planning on using 12 disks, why would you use striped mirrors? You'll be wasting a lot of capacity that way, and the use case you mention doesn't need the potential higher performance.

The two systems you describe seem pretty much equivalent, though I'd expect the first to be less expensive. Who told you you needed the second system, and why did they say that? I'd lean toward the first myself. You'll need a reverse SAS breakout cable to connect to the backplane. You'll notice that I have a similar chassis; I've found it works well, but if you want to mount it in a rack cabinet, make sure you get one that's deep enough.

You will not have your boot devices in RAID 0 with FreeNAS. You could mirror them (which would be wise), but that would be most similar to RAID 1, not RAID 0. Try to break the habit of referring to numbered RAID levels (0, 1, 10, 5, 6, etc) with FreeNAS, as they're really not valid with ZFS. ZFS RAID is similar, but not the same. For your boot devices, a pair of 16 GB USB sticks or SATA DOMs would be fine--any more capacity would be wasted.
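If you do want to mirror the boot devices from the shell later, the general shape looks something like this. This is a rough sketch only: the device names and partition numbers are hypothetical, and the FreeNAS installer can create the mirror for you if you simply select both devices at install time.

```shell
# Sketch: turning a single FreeNAS boot device into a mirror.
# da0 = existing boot device, da1 = new device (hypothetical names).

# Copy the existing partition layout onto the new device
gpart backup da0 | gpart restore -F da1

# Attach the new device's ZFS partition to the boot pool,
# converting the single-disk vdev into a two-way mirror
zpool attach freenas-boot da0p2 da1p2

# Watch the resilver complete
zpool status freenas-boot
```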
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Why on Earth would you want to buy a separate motherboard and HBA, if the X10SL7-F is somewhat cheaper?
 

Robert Hart

Dabbler
Joined
Oct 23, 2014
Messages
15
Thank you, danb35 and Ericloewe, for your replies.

Dan

I want to have full redundancy, and as I was previously using a Synology NAS with RAID 10, I will be using the FreeNAS equivalent. I may in the future have 8 transcoding tasks running at the same time, or at least need the power to handle that.

I was told that the first system had compatibility issues, but this may have been on the assumption that I was using hardware RAID; unfortunately I'm not sure. I basically picked the original system myself by reading the recommended hardware thread on here and choosing a chassis that fit my requirements. As this is my first build, I didn't know what cables I would need, so I called Supermicro, who mentioned the compatibility issue and recommended the second system instead. As I say, I think this may be because they assumed I was using hardware RAID instead of software. With the OS, I want to mirror the devices, which is what I meant to say: a RAID 1, not RAID 0 as I wrote, though again I will be using the FreeNAS equivalent. Lastly, do you have a link or model number for the cables I would need for the first system? Thank you so much for your help.

Eric

Although I don't have a massive budget, I'm not too fussed about price; I just want to get the right system the first time, as this will be my 4th NAS, although the others were QNAP and Synology units. I believe the first system (the one I chose, albeit with help from this site) is the best for my requirements. If the second system is better I'd gladly go for it, but I think it may have been wrongly spec'd.

Thanks very much to everyone who has replied.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
"Full redundancy" isn't a very meaningful phrase--what, exactly, do you mean by it? There is no level of redundancy that makes it impossible for you to lose your data, but certainly some levels give more protection than others. How many disks are you planning to use at the outset, and how (if at all) are you planning to grow?

Supposing you're intending to fill your chassis at the outset with 12 disks, probably the most commonly recommended pool configuration ("configuration 1") would be two six-disk RAIDZ2 vdevs striped into a single pool. The alternative that you're suggesting ("configuration 2") would be six two-disk mirrors striped together. Let's compare, assuming all your disks are the same size:
  • Capacity
    • Configuration 1 has 8 disks' net capacity
    • Configuration 2 has 6 disks' net capacity
  • Parity/Redundancy
    • Configuration 1 has 4 disks of redundancy, two in each of two vdevs
    • Configuration 2 has 6 disks of redundancy, one in each of six vdevs
  • Fault tolerance
    • Configuration 1 can tolerate the failure of up to four disks, with a maximum of two in each vdev. If three disks fail in any single vdev, your pool will fail and all your data will be lost. Configuration 1 will thus tolerate the failure of any two disks, and possibly as many as four if they're the "right" four.
    • Configuration 2 can tolerate the failure of up to six disks, but only one in each vdev. If both disks in any mirror fail at the same time, your pool will fail and all your data will be lost. Configuration 2 will tolerate the failure of any one disk, and possibly as many as six if they're the "right" six. Put differently, configuration 2 can be destroyed by the failure of as few as two disks.
  • Performance
    • Configuration 2 will have higher IOPS than Configuration 1
For media storage, I'd think configuration 1 would be better--it's more space-efficient, and the redundancy works out better in the end.
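For reference, the two layouts would be created something like this at the command line. This is only a sketch with hypothetical device names (da0 through da11); in practice you'd build the pool through the FreeNAS GUI volume manager.

```shell
# Configuration 1: two 6-disk RAIDZ2 vdevs striped into one pool
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11

# Configuration 2: six 2-disk mirrors striped into one pool
zpool create tank \
    mirror da0 da1  mirror da2 da3  mirror da4 da5 \
    mirror da6 da7  mirror da8 da9  mirror da10 da11
```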
 

Robert Hart

Dabbler
Joined
Oct 23, 2014
Messages
15
Thank you again, Dan, for your response. I just worry about data loss and would like to be as safe as possible. I currently have 6 x 4 TB HDDs and 2 x 2 TB drives. I was thinking of having 3 volumes (pools?):
Films: 4 x 4 TB
TV Box Sets: 2 x 4 TB
Other (Music, Pictures, etc.): 2 x 2 TB

For the following question, let's say I have an 8-bay NAS with the above configuration. Could I temporarily remove the HDDs for Other (Pictures, Music), install 2 x 6 TB drives, create a new volume, move the data from Box Sets to the new larger volume, remove the old TV Box Sets HDDs, and then put the Other (Pictures, Music) HDDs back in?

I am very grateful for your help and guidance. If you can point me in the right direction for the cable I need from the mobo to the backplane, that would be much appreciated. Also, the number got deleted from your recommendation in your last post, so I don't know if you recommended Config 1 or 2; if I had to guess, I'd go with 1.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Both systems you quote are very similar. The difference is that the X10SL7-F already has an LSI SAS 2308 HBA, which you would be adding separately to the X10SLL-F. Unless someone is offering you a major discount, the first option is cheaper and otherwise equivalent (a bit better, even, since it has two Intel i210 GbE controllers and has been verified with Micron/Crucial RAM).
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
As to the cabling, you should be able to use this reverse breakout cable, or something comparable, to connect four of the SAS ports on your motherboard to the SAS backplane in your enclosure. This would enable all 12 bays. Hopefully @Ericloewe or @jgreco can confirm.

I wouldn't use the 2.5" bays for anything unless and until you can confirm that you need L2ARC or SLOG devices. From what you describe of your usage scenario, the latter seems very unlikely, and the former is generally only useful once you already have loads of RAM.

For your disk configuration, it's going to depend a bit on how much capacity you need. I'd probably make a single pool of 6 x 4 TB disks in RAIDZ2, and not use the 2 TB disks at all. This will give you about 14.4 TiB of net capacity. You want to keep your pool less than 80% full, so this would mean you'd have 11.5 TiB of usable space before you reached that level. Your system would tolerate the loss of any two disks without any data loss. If you needed to upgrade your capacity in the future, you could add six more disks to the pool in another RAIDZ2 vdev. Or you could upgrade by replacing your 4 TB disks with larger ones, one at a time. Or both.
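The rough arithmetic behind those numbers looks like this (ignoring ZFS metadata and reservation overhead, which shaves off roughly another one percent, bringing the raw figure down to about 14.4 TiB):

```python
# Rough net-capacity math for a 6 x 4 TB RAIDZ2 vdev.
# Ignores ZFS metadata/slop overhead, so the real figures come out slightly lower.

TB = 10**12   # drive vendors use decimal terabytes
TIB = 2**40   # the OS reports binary tebibytes

disks = 6
parity = 2    # RAIDZ2: two disks' worth of parity per vdev
size_tb = 4

net_tib = (disks - parity) * size_tb * TB / TIB
usable_tib = net_tib * 0.8   # keep the pool under 80% full

print(f"net:    {net_tib:.1f} TiB")    # ~14.6 TiB raw, ~14.4 TiB after overhead
print(f"usable: {usable_tib:.1f} TiB")
```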
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
As to the cabling, you should be able to use this reverse breakout cable, or something comparable, to connect four of the SAS ports on your motherboard to the SAS backplane in your enclosure. This would enable all 12 bays. Hopefully @Ericloewe or @jgreco can confirm.

In principle, you're right. But the expander is SAS3, so this requires a reverse-breakout cable for whatever the new connector is called, not for SFF-8087.
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
With the X10SL7 it's easier to buy the chassis with SATA connectors on the backplane, the 826TQ-R500LPB. No adapter-cable voodoo needed, just straight SATA-to-SATA cables, and it's cheaper than the expander versions. If you want to go used, there's an 826TQ-R800LPB with a single PSU on sale on eBay.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
With the X10SL7 it's easier to buy the chassis with SATA connectors on the backplane, the 826TQ-R500LPB. No adapter-cable voodoo needed, just straight SATA-to-SATA cables, and it's cheaper than the expander versions. If you want to go used, there's an 826TQ-R800LPB with a single PSU on sale on eBay.

You know what, for 12 drives, that's actually feasible with the X10SL7-F. Nice thought.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
With the X10SL7 it's easier to buy the chassis with SATA connectors on the backplane, the 826TQ-R500LPB. No adapter-cable voodoo needed, just straight SATA-to-SATA cables, and it's cheaper than the expander versions. If you want to go used, there's an 826TQ-R800LPB with a single PSU on sale on eBay.

No, just a different kind of cable voodoo... you have to cross your fingers and hope all those frickin' SATA cables don't give you problems, else you're digging through a forest of connectors, and then you end up inadvertently screwing up OTHER connectors. As one of the early proponents of large storage servers with SATA, I can tell you that digging around in a chassis with discrete SATA connectors, especially non-locking ones, is hazardous.

If you're just doing hard drives, there's no real good reason not to get the CSE-826BE16-R920LPB and keep everything SAS 6 Gbps. The cabling is readily available. Even if you were to get a SAS 12 Gbps controller, you could still wire an SFF-8643-to-SFF-8087 cable to a SAS 6 Gbps expander backplane; of course it would run at 6 Gbps then.
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
With the exception that Supermicro will only support 12Gbps expanders with 12Gbps controllers. No mix'n'match there.

Also, reverse breakout cables are four SATA cables too, just with an SFF-8087 at one end. I'd only use cables with locking connectors, whether SATA, SAS or SFF-8087. I don't see the need for a much more expensive expander chassis when you can utilize 12 SATA bays via straight cabling. On the plus side, you can swap out single SATA cables easily, whereas you'd need another of those less common reverse breakout cables if one cable in a x4 link fails.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Also, reverse breakout cables are four SATA cables too, just with an SFF-8087 at one end.
That's four SATA cables going to a single locking connector on the backplane, compared to 12 with the TQ backplane. I have the 826TQ backplane with all the slots populated, and the cables get more than a little messy, to the point where I'm contemplating replacing the backplane with the BE16 so I can use a single SAS cable instead. I probably won't, since it won't actually buy me anything in terms of capacity or performance, but it is an attractive thought.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
To expand a particular pool I can replace 1 HDD at a time, and once all HDDs are replaced the capacity will increase.
Correct. You can also add another vdev to your pool. When more than one vdev is present, the pool is striped across all vdevs. If you follow my suggestion and build your pool with a single six-disk RAIDZ2 vdev, your chassis will have six more bays free, and you can easily expand your pool by adding six disks as a second RAIDZ2 vdev. If you do that, you can later increase pool capacity by replacing either group of six disks, one at a time (you don't have to replace all 12).
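Sketched at the command line, the two growth paths look something like this (hypothetical pool and device names; FreeNAS exposes the same operations through its GUI):

```shell
# Growing an existing pool named "tank" (hypothetical names throughout).

# Option A: stripe in a second six-disk RAIDZ2 vdev
zpool add tank raidz2 da6 da7 da8 da9 da10 da11

# Option B: replace each disk in one vdev with a larger one, one at a time;
# the extra capacity appears once every disk in that vdev has been replaced
zpool set autoexpand=on tank
zpool replace tank da0 da12   # then da1, da2, ... waiting for each resilver
```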
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
With the exception that Supermicro will only support 12Gbps expanders with 12Gbps controllers. No mix'n'match there.

Also reverse breakout cables are 4 SATA cables too - just with a SFF-8087 at one end. I'd use cables with locking connectors only - for SATA, SAS and SFF-8087. I don't see the need for a way more expensive Expander chassis when you can utilize 12 SATA bays via straight cabling.

You don't need an expander backplane if you don't want one. The point is that the TQ chassis is the worst choice available, because of the sheer number of individual connectors. You can get the backplane that brings the channels out to SFF-8087 (the 826A, with three SFF-8087 connectors), which omits the expander and gives you vastly simplified, more reliable cabling.

On the plus side, you can swap out single SATA cables easily, whereas you need to get another one of those not as common reverse breakout cables if one cable in a x4 link fails.

And while you're poking around in there amongst the "easily" swapped SATA cables, you end up knocking another connector loose and cause a new catastrophe. We switched to multilane SAS when it became apparent that this was a significant issue with the individual SATA connector method. That was a decade ago.

I have the 826TQ backplane, with all the slots populated, and the cables get more than a little messy there,

Exactly that.
 