Server Build? I [Still] Have No Idea What I'm Doing *Updated*

Seiryu

Cadet
Joined
May 23, 2019
Messages
9
Hello,

I've been toying with the idea of buying/building a proper home server, which would primarily be used for Plex, in order to off-load the media server handling from my main PC (Plex doesn't play nice with my VPN) and consolidate my storage instead of having WAY too many external drives (currently at 14, with more that are not in use; I have a problem, don't judge me). I keep seeing FreeNAS come up in my research on what I should be doing, so I figured this was a good place to start for feedback on whether I am as lost as I think I am.

Currently the build I am thinking about looks like this:
Amazon List (for reference): https://www.amazon.com/hz/wishlist/ls/1C5QE7W54FOBN?ref_=wl_share

Case: Norco 24 Bay 4U Rack Mount Server Chassis (RPC-4224)
Motherboard: SuperMicro E-ATX Server Motherboard, Dual Socket LGA-2011 (MBD-X9DRI-F-0)
Processors: (2x) Intel Xeon E5-2630 v2 6-Core 2.6GHz (BX80635E52630V2)
CPU Coolers: (2x) Cooler Master Hyper 212 Evo
Memory: (4x) Samsung DDR3-1600 16GB ECC/REG Server Memory (M393B2G70BH0-YK0) (64GB total)
Graphics Card: HP Nvidia Quadro K2000 Workstation Graphics Card
OS Drive: (2x) Patriot Scorch 256GB NVMe M.2 PCIe SSD (Mirrored)
Power Supply: EVGA Supernova 1600 T2 80+ Titanium, 1600W Power Supply (220-T2-1600-X1)
Storage Drives: (4x) Western Digital 8TB Elements Desktop Hard Drive (WDBWLG0080HBK-NESN) (Shucked, WD Red Drives)

I already have the WD storage drives, as well as a bunch of other SATA drives I plan to use, which is why I went with the 24 bay case that has room for all I've got plus more room to grow. The prices on Amazon can be disregarded, as I would be ordering from Amazon, Newegg, and eBay, and currently that list (not including the HDDs) is coming in at ~$1,200 at best. I don't really have a hard budget set for this, but the cheaper the better. I definitely don't want to go over $2k, but I was hoping to stay well under that number.

So, first things first, is there anything obvious that I missed here regarding compatibility with the parts listed or other parts I would need? Second, are there any parts that are ill-advised and should be changed out for something better? Oh, and of course, would this work for a FreeNAS/Plex server?

Also, something I've been a little confused about, does FreeNAS support running a Plex server independent of any other PC being involved? Would FreeNAS be my best option for a Plex media server + mass storage and backup of other files?

Thank you for reading and for any feedback you can provide. :D
 

Stevie_1der

Explorer
Joined
Feb 5, 2019
Messages
80
Graphics Card: HP Nvidia Quadro K2000 Workstation Graphics Card
You won't need a graphics card.
The mainboard you selected has IPMI (remote console, even including BIOS), so you can see all output on any PC via LAN.
If you're planning to use that graphics card for Plex transcoding, that won't work.
There is work in progress to use the IGP of the CPU for transcoding, but that would require a different CPU and mainboard.

Or are you planning to virtualize everything and pass-through the graphics card?

OS Drive: (2x) Patriot Scorch 256GB NVMe M.2 PCIe SSD (Mirrored)
This is overkill, you don't need that much capacity and speed.
Moreover, your board has no M.2 slots, and I doubt it will boot if you use a riser card, because the BIOS/UEFI will know nothing about NVMe (that would require a modded BIOS/UEFI).
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,466
does FreeNAS support running a Plex server independent of any other PC being involved?
Absolutely, and it works quite well.

On your build overall, 2x Xeon E5 chips seems quite excessive for your stated use case. And if you're looking at that kind of chassis anyway, seriously consider used gear from eBay.
 

CraigD

Patron
Joined
Mar 8, 2016
Messages
343
You will need a SAS card and expander. EDIT: or a couple of reverse breakout cables and an expander.

I'm 99% sure the Hyper 212 Evo cooler will not fit; you need narrow ILM coolers (double check).

Have Fun
 

Seiryu

Cadet
Joined
May 23, 2019
Messages
9
Wow, OK. Thanks for the input; clearly I was right about not knowing what I was doing. I've done a few desktop builds for myself over the years, but getting into servers opens up a whole bunch of new things I haven't dealt with before.

New plan: I was debating getting a refurbished system and making minor modifications to it, and based on this and danb35's feedback I think that is probably the better way to go.

Here is my new proposal:
Newegg list: https://secure.newegg.com/Wishlist/SharedWishlistDetail?ID=cO2jp8KkVbE=

Base: Supermicro SuperServer 24 Bay LFF 4U Rackmount Server (6047R-E1R24N)
Specs:
System: Supermicro SuperServer 6047R-E1R24N 24-Bay LFF 4U Rackmount Server
Processor: 2x Intel Xeon E5-2670 2.6GHz 8 Core 20MB Cache Processors
Memory: 16GB DDR3 ECC Registered Memory (4 x 4GB)
Controller: LSI 9265-8i 6Gbps SAS/SATA RAID Controller; RAID (0 1 5 10 50 60)
Hard Drive Bays: Supports 24x 3.5" SATA Drives (24x Trays with screws included)
Backplane: 24-Port 4U SAS2 6Gbps Backplane
Management: IPMI 2.0 / KVM over LAN / Media over LAN
Network: Intel i350 Quad-Port 1 GbE
Power Supplies: Redundant 920W Power Supplies
Rails: Rail Kit Not Included

The 6047R-E1R24N is a high-end storage system comprised of two main subsystems: the SC846E16-R920B 4U/rack mount chassis and the X9DRi-LN4F+ dual processor serverboard.

Then I would remove the pre-installed memory and make the following additions:

OS Drive Mount: Rear side 2x 2.5" HDD kit (MCP-220-84606-0N)
OS Drives: (2x Mirrored) Crucial BX500 2.5" 120GB SATA III SSD (CT120BX500SSD1)
Memory: (4x) Samsung DDR3-1600 16GB ECC/REG Server Memory (M393B2G70BH0-YK0) (64GB total)

And the storage drives would remain the same.

Does this seem like a better option? It does bring the price down, which is nice, but I still don't know if this would make it so that I had everything I need for what I want to do. If I do go this route I would probably just get everything from Newegg as it is on that list, since I believe those were the best prices for all of the items.

Thanks.

Side Note: I've also been considering a rack-mount UPS to go with this, but I'm not sure what kind of voltage/wattage I would need, as I have never used a UPS before (shame on me, I know).
 

Stevie_1der

Explorer
Joined
Feb 5, 2019
Messages
80
Controller: LSI 9265-8i 6Gbps SAS/SATA RAID Controller; RAID (0 1 5 10 50 60)
This is a RAID card, but you should use an HBA instead, with a SAS2008, SAS2308, or SAS3008 chipset.

And you should also think of getting 2 more 8TB drives, to build up a RAIDz2 with 6 HDDs.
You shouldn't use RAIDz1 with HDDs bigger than 1TB.
Then you can later extend your pool by adding another vdev of 6 HDDs.
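To put rough numbers on that advice, here is a quick sketch of the RAIDZ1-vs-RAIDZ2 capacity tradeoff (raw decimal TB only; real pools report less after ZFS overhead and TiB conversion):

```python
# Rough usable space of a single RAIDZ vdev: (disks - parity) * drive size.
# Raw numbers only; a real pool reports less after ZFS overhead.
def raidz_usable_tb(disks: int, size_tb: float, parity: int) -> float:
    return (disks - parity) * size_tb

# 6 x 8 TB: RAIDZ2 gives up one more drive's worth of space than RAIDZ1,
# but survives a second drive failure during a long resilver.
print(raidz_usable_tb(6, 8, 1))  # -> 40 (RAIDZ1)
print(raidz_usable_tb(6, 8, 2))  # -> 32 (RAIDZ2)
```

That 8 TB difference is the price of being able to lose any two drives in the vdev instead of just one.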
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,466
I reiterate that dual E5s are gross overkill for your stated use case. Even though the older ones are cheap these days, you should also consider power consumption; my similar system idles at around 300 watts. A current-generation E3 (or even i3) can handle up to 64 GB of RAM and burns a lot less power. But with that said, there's no kill like overkill, and DDR3 prices seem to be much lower than DDR4. A few other points:

  • The server you list seems to only come with 12 trays. You'd really want to get all 24, partially for future expansion and partially for airflow concerns in the interim. They're available, and not too expensive, but you should plan for that.
  • You don't really need the boot devices in hot-swap bays; Supermicro have a separate bracket to mount them internally that's a good bit cheaper.
  • I'd even question whether you need mirrored SSDs; I've found a single SATA SSD to be plenty reliable.
  • As was already mentioned, the link you shared includes a RAID card; you'd want to replace that with a SAS HBA.
but I'm not sure what kind of voltage/wattage I would need,
Well, voltage is easy; just use whatever line voltage is in your location (I assume you're in .us, where it's 120V). Wattage is harder because of the way manufacturers rate their units--the rating indicates the maximum load the unit can handle, even for a very short period of time. That is important--overloading the UPS could damage it, or more likely trip a circuit breaker--but it doesn't say anything about runtime, which is (at least to me) a much more important issue. APC, at least, publishes runtime charts based on load; you can determine the watt load of your system by plugging it into a Kill-A-Watt meter. Be aware that those runtime charts are going to be based on new batteries; you should probably derate by half.

You can get used UPSs too, and the batteries are designed to be easily replaced by the end user in most cases.
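To illustrate that runtime arithmetic (the 864 Wh battery figure below is a made-up example, not the spec of any particular UPS):

```python
# Toy UPS runtime estimate: battery energy divided by measured load,
# then derated by half for aged batteries, per the advice above.
def runtime_minutes(battery_wh: float, load_w: float, derate: float = 0.5) -> float:
    return battery_wh / load_w * 60 * derate

# A ~300 W idle load (similar to the system mentioned above):
print(round(runtime_minutes(864, 300), 1))              # -> 86.4 (derated)
print(round(runtime_minutes(864, 300, derate=1.0), 1))  # -> 172.8 (new-battery ideal)
```

The real number to trust is still the manufacturer's runtime chart at your measured load, cut in half.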
 

Seiryu

Cadet
Joined
May 23, 2019
Messages
9
Alright, so I've made some more changes to the list, and I think I may be getting closer to a working build (I hope).

I went down to a cheaper base to start with that has lesser processors (dual 2.0GHz 6-core rather than 2.6GHz 8-core). I see what you are saying about the power draw, but unfortunately it seems like all the SuperMicro 24-bay servers available on Newegg have dual E5s, so I don't have a choice unless I find another manufacturer that has 24-bay chassis or I build my own as originally proposed. While less power consumption would be nice, I don't mind overkill, as I'd rather have too much than too little.

Here is the new list on Newegg: https://secure.newegg.com/Wishlist/SharedWishlistDetail?ID=cO2jp8KkVbE=

The new base is: Supermicro SuperServer 24 Bay LFF 4U Rackmount Server (6047R-E1R24N)
Specs (per Newegg):
System: Supermicro SuperServer 6047R-E1R24N 24-Bay LFF 4U Rackmount Server
Processor: 2x Intel Xeon E5-2620 2.0GHz 6 Core 15MB Cache Processors
Memory: 16GB DDR3 ECC Registered Memory (4 x 4GB)
Controller: LSI 9265-8i 6Gbps SAS/SATA RAID Controller; RAID (0 1 5 10 50 60)
Hard Drive Bays: Supports 24x 3.5" SATA Drives (24x Trays with screws included)
Backplane: 24-Port 4U SAS2 6Gbps Backplane
Management: IPMI 2.0 / KVM over LAN / Media over LAN
Network: Intel i350 Quad-Port 1 GbE
Power Supplies: Redundant 920W Power Supplies
Rails: Rail Kit Not Included

The 6047R-E1R24N is a high-end storage system comprised of two main subsystems: the SC846E16-R920B 4U/rack mount chassis and the X9DRi-LN4F+ dual processor serverboard.
I do intend to get all 24 trays, but I'm not exactly sure what it includes, as the listing has conflicting info. The product name and short description say 12 trays, but the overview says it comes with all 24. Either way, I would plan to fill out the remaining trays if 24 are not included.

The modifications would stay somewhat the same; however, I changed the SSD mount and dropped the mirror. The dual hot-swappable mount was the only thing I could find from SuperMicro that said it worked with this chassis, so I went with a generic bracket instead that sits in an open card slot.

OS Drive Mount: EnLabs PCI Slot 2.5" HDD/SSD Mounting Bracket (PCIBR25SSDKIT) w/ 40CM SATA Data & Power Combo Cable
OS Drive: Crucial BX500 2.5" 120GB SATA III SSD (CT120BX500SSD1)
Memory: (4x) Samsung DDR3-1600 16GB ECC/REG Server Memory (M393B2G70BH0-YK0) (64GB total)

Taking the suggestion to drop the RAID card and put in a SAS HBA, I found this:

Card: LSI megaRaid SAS9200-8e 6Gb/s PCIe Sas Host Bus Adapter (617824-001)
Cable: QSFP to Mini-SAS (SFF-8088) DDR Cable, 1-Meter 3.3ft

I found these via a Reddit post I came across while trying to research how a SAS HBA works and what exactly I would need. According to that source, this card/cable combo allows a 24-bay chassis to run with all bays full, but I don't know. These two items are cheaper on eBay than Newegg, so the prices in the linked list can be ignored.

And lastly, since I had mentioned a UPS before, I found this (which should be significantly more wattage than needed):

UPS: Dell 2700R UPS 2700W 120V 4U Rack Mount Battery Backup (CN-0KMGMW CN-0K803N)

Again, the reason for choosing this one is that I found it on Ebay used/like new for significantly less than the Newegg price.

The final item on the list is the frame itself for rack mounting everything, but that doesn't really matter. Anyway, does this look better with the cheaper server to start and the SAS HBA added? Disregarding the cost of the UPS and rack frame, this gets the server itself down to under $1,000, which is great, though if anyone has suggestions to cut the price down further I'd love to hear them.

Thanks.

Another side note: I just remembered one more question while proofreading my post. What Stevie_1der said about adding 2 more HDDs... I don't really get it, or I guess I don't get how the different RAID levels work. How do the hot-swappable drive bays work exactly? Do I need some specific array of matching drive sizes, or can I just put whatever SATA/SAS drives I want in each bay? Currently, in addition to the previously mentioned 4x 8TB WD externals, I have an 8-bay enclosure with 4x 6TB drives and 4x 3TB drives in it, so I was planning on having those 12 loaded as a start. I also have some other drives that are not in use right now which I could put in, but I don't recall what they are off the top of my head.

Also, since I've never used a NAS before, how exactly does this appear to the other computers on my network? Would I see each individual drive the way I do now with them external, or would it appear as one network drive with the combined capacity of all the drives installed? If it is the latter can I assume that the NAS software allocates the files in such a way that if one drive fails you would only lose the data from that one drive, rather than it being like a striped RAID where one failure makes the whole RAID un-readable?

Sorry for asking all these questions. I have a habit of trying to get 100% understanding of something before I try it, so that I don't make any costly mistakes either monetarily or via data loss. Thanks again.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,466
The dual hot-swappable mount was the only thing I could find from SuperMicro that said it worked with this chassis
The part number you'd be looking for is MCP-220-84603-0N.
How do the hot-swappable drive bays work exactly?
I don't think this is really your question. Your uncertainty really seems to focus on how ZFS works; I'd suggest you start here on that subject.
how exactly does this appear to the other computers on my network?
It appears as a file server with whatever sharing protocol you choose (likely SMB), with one or more shared directories (as many as you've set up).
If it is the latter can I assume that the NAS software allocates the files in such a way that if one drive fails you would only lose the data from that one drive
No, FreeNAS does not work in that way, and cannot readily be made to do so. In the most common use case, you'd put all your disks into a single ZFS pool, and you'd need to construct that pool with a suitable level of redundancy. Again, the link above should get you started.
 

Seiryu

Cadet
Joined
May 23, 2019
Messages
9
I looked over the ZFS guide you linked, thanks. I think I understand now, but let me see if I have this straight.

As I said, I already have 4x8TB, 4x6TB & 4x3TB. If I get 2 more of each size I could make 3 pools as RAIDZ-2, which would make it:

6x8TB, double-parity, 32TB usable
6x6TB, double-parity, 24TB usable
6x3TB, double-parity, 12TB usable

For a total of 68TB usable storage, in which data would not be lost unless 3 or more drives in the same pool fail, right? Then I would have 6 open bays to add another RAIDZ-2 pool in the future. For the purpose of the file server would I see this as one 68TB server, or would each pool appear separately?

If I have that right, that just brings me back to whether the build itself is good. Does the last part list I gave with the cheaper SuperMicro server look good? How much power do I really need for this use? Would I be better off going back to the original chassis I linked in the first post and building my own with a single CPU board and a Xeon E3 (or staying with dual and just dropping the CPUs to E3), in order to reduce the cost to compensate for needing to buy more HDDs?

Also, it just occurred to me that the drives probably need to be blank for setting this up, or will be reformatted, right? Since I already have drives in use, would it work to set up one pool, move data onto it to clear the other drives, then set up the other pools? If I need to have all 18 drives I'd be using blank at the same time that opens up a whole other issue I would need to work out...

Thanks again.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,466
If I get 2 more of each size I could make 3 pools as RAIDZ-2
You would most likely create a single pool with three vdevs, which would then behave as you describe, barring losses to TiB vs TB, ZFS overhead, etc.
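As a quick sketch of where some of those losses come from (raw arithmetic only, using the drive sizes already listed in this thread):

```python
# Drive makers rate capacity in decimal TB (10**12 bytes), while ZFS
# tools report binary TiB (2**40 bytes), so the same pool looks smaller.
vdev_usable_tb = [(n - 2) * size for n, size in [(6, 8), (6, 6), (6, 3)]]
total_tb = sum(vdev_usable_tb)         # 32 + 24 + 12 = 68 decimal TB raw
total_tib = total_tb * 10**12 / 2**40  # same bytes, binary units
print(total_tb, round(total_tib, 1))   # -> 68 61.8, before ZFS overhead
```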
For the purpose of the file server would I see this as one 68TB server
If configured as I just described, it would appear as a single volume. However, you'd generally want to share more than a single directory (and you'd never want to share the whole volume), and then each shared directory would appear separately.

Your build looks fine. It is overkill (but note mine is very similar), but used gear will save a good bit of money--I think your numbers have a dual E5 system costing less than a single E3.

Since I already have drives in use, would it work to set up one pool,
Again, you'd most likely end up with a single pool consisting of three vdevs. You can add vdevs to a pool at any time (though not, with very specific exceptions, remove vdevs). So build your pool with 6 x 8 TB disks. Move any data living on any of the 6 TB disks to the pool, then add those to the pool as a second vdev. Repeat with the 3 TB disks.
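That staged plan can be sketched as a running capacity check (the "data in use" numbers below are hypothetical placeholders, not anyone's actual figures):

```python
# Staged migration: start the pool with the drives that are already
# empty, move data off each older batch, then add that batch as a
# new vdev. RAIDZ2 usable space = (disks - 2 parity) * drive size.
def raidz2_usable(disks: int, size_tb: float) -> float:
    return (disks - 2) * size_tb

pool_tb = raidz2_usable(6, 8)   # step 1: 6 x 8 TB vdev -> 32 TB usable
data_on_6tb = 20                # hypothetical TB to move off the 6 TB drives
assert data_on_6tb <= pool_tb   # confirm it fits before wiping them

pool_tb += raidz2_usable(6, 6)  # step 2: add 6 x 6 TB vdev -> 56 TB total
data_on_3tb = 8                 # hypothetical TB on the 3 TB drives
assert data_on_3tb <= pool_tb   # step 3: repeat with the 3 TB drives
```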
 

Seiryu

Cadet
Joined
May 23, 2019
Messages
9
Right, vdevs, not pools. I meant that, but skimmed the ZFS info you gave quickly while at work and mixed up my terminology. Anyway, now that I seem to have it all figured out, I've officially ordered parts. The final build + accessories came out to this:

SERVER (not including storage)
Base: Supermicro SuperServer (6047R-E1R24N) 24 Bay LFF 4U Rackmount Server
Includes:​
Chassis: SuperMicro 4U/rack mount chassis (SC846E16-R920B)​
Motherboard: SuperMicro dual processor serverboard (X9DRi-LN4F+)​
Processor: (2x) Intel Xeon E5-2620 2.0GHz 6 Core 15MB Cache Processors​
Memory: 16GB (4 x 4GB) DDR3 ECC Registered Memory (will be removed, not used)​
Controller: LSI 9265-8i 6Gbps SAS/SATA RAID Controller; RAID (0 1 5 10 50 60) (will be removed, not used)​
Hard Drive Bays: Supports 24x 3.5" SATA Drives (24x Trays with screws included)​
Backplane: 24-Port 4U SAS2 6Gbps Backplane​
Management: IPMI 2.0 / KVM over LAN / Media over LAN​
Network: Intel i350 Quad-Port 1 GbE​
Power Supplies: Redundant 920W Power Supplies​
Memory: 64GB (4x16GB) Samsung DDR3L 1600 (PC3L 12800) Server Memory Modules (M393B2G70BH0-YK0)
OS Drive Mount: SuperMicro Dual 2.5" Fixed HDD Tray (MCP-220-84603-0N)
OS Drive: Crucial BX500 2.5" 120GB SATA III 3D NAND Internal Solid State Drive (SSD) (CT120BX500SSD1)
SAS HBA: LSI megaRaid SAS9200-8e 6Gb/s PCIe Sas Host Bus Adapter (617824-001)
SAS Cable: QSFP to Mini SAS (SFF-8088) DDR Cable, 1-Meter 3.3ft

STORAGE
Already Had:
(3x) Seagate 3TB Desktop HDD SATA 6Gb/s 64MB Cache 3.5-Inch Internal Bare Drive (ST3000DM001)
(1x) Western Digital Caviar Green 3 TB SATA III 64 MB Cache Bare/OEM Desktop Hard Drive (WD30EZRX)
(4x) Toshiba X300 6TB Desktop HDD 7200 RPM 128MB Cache SATA 6.0Gb/s 3.5 Inch Internal Hard Drive (HDWE160XZSTA)
(4x) Western Digital 8TB Elements Desktop Hard Drive - USB 3.0 - (WDBWLG0080HBK-NESN)
Shucked: (2x) WDC WD80EMAZ-00WJTA0 (Helium) (2x) WDC WD80EMAZ-00M9AA0 (Air)​
Newly Acquired for Server:
(2x) Hitachi Ultrastar 7K3000 3TB 7200 RPM 64MB Cache SATA III 6.0Gb/s 3.5" Enterprise Hard Drive (HUA723030ALA641)
(2x) WD Elements 6TB USB 3.0 Desktop Hard Drive (WDBWLG0060HBK-NESN) (will be shucked)
(2x) WD easystore 8TB External USB 3.0 Hard Drive (WDBCKA0080HBK-NESN) (will be shucked)
(6x) WD "White Label" RED 10TB 5400RPM 256MB 3.5" Hard Drive (WD100EZAZ-11TDBA0)

I know some of these drives are not ideal for a NAS, but hopefully they will be fine since I am just using it for Plex and as a personal file server/backup. I will be pooling them into 4 RAIDZ-2 vdevs as 6x3TB, 6x6TB, 6x8TB & 6x10TB, which I believe will give me 108TB usable with full double-parity. Since I found a good deal on the 10TB drives I opted to just fill out the remaining 6 bays so that I could do those first rather than trying to get enough data moved around on my existing drives to set it up.

RACK & ACCESSORIES
Rack Frame: 32U Frame 4 Post Open Relay 19" Network Server Rack Cabinet Adjustable Depth 24"-37"
UPS: Dell 2700R Telco HV-US 2700W 208V 2700VA 4U Battery Backup Online Rack UPS (H950N)
Surge Protector/Power Strip: Tripp Lite 15 ft. Cord 14 Outlets 3000 Joules Rackmount Surge Suppressor (DRS-1215)
Network Switch: TP-Link 24-Port Gigabit Rackmount Switch (TL-SG1024)
Server Rails: SuperMicro Short Rail Set (MCP-290-00058-0N)
Spacers/Ventilation: Rising 2 Pack 2U Vents Rack Mount Panel Server Network Racks Enclosures Spacer 19 inch (2U 3.5")
Rack Shelves: (3x) Raising Electronic 19''2U Relay Rack Mount Cantilever Network Shelf 14'' Deep 40LBs Capacity

I swapped the 42U rack I had listed for a 32U rack, because I realized the ceiling in the room I plan to put this in is less than 7', so the 42U wouldn't have fit (oops). Threw in some venting spacers so that it will look better, and some shelves to fill out the remainder of the rack for functional use. I'll most likely be using the shelves for various things I plan to do video capture from, such as a VCR, Beta VCR, and HD-DVD player (note: this will be in the same room as my primary desktop; I don't expect to be able to do video capture with the server). (Also note: yes, I have a thing for dead formats. Copying and backing up various types of physical media are a big part of why I need so much storage.)

Luckily for me special financing is a thing, so this wasn't quite as crazy as it looks (but only not quite, do not be fooled, I am crazy).

The final totals for this were:
The server itself (no drives): $939.43
6 HDDs (3,6,8TB): $580.06
6 HDDs (10TB): $1,019.94
Rack, UPS, the rest: $812.59

Grand Total: $3,352.02

The parts came from Newegg (12 month 0%), Best Buy (6 month 0%) and Ebay (6 month 0% via PayPal), so I will be paying for this for the next year... $415.94 a month for the first 6 months then $142.73 a month for the remaining 6 months... so that isn't so bad. I admit, I had been trying to move and had put an offer in on a condo. Actually going through with the purchasing here was a bit of an impulse buy when my offer was rejected, so I decided to pay for a server instead of a mortgage. Like I said, I'm crazy.

In conclusion: I thank you all for your input and support on this endeavor. Shipping estimates make it look like I won't have all the parts until mid-June or later, so I can't start assembly for a month or so (the suspense will kill me). Once everything arrives I'll be sure to come back with updates and pictures (if that's OK), and I'm sure I'll have more questions once I actually start trying to get it all set up and installed.

Thanks again!
 

ethereal

Guru
Joined
Sep 10, 2012
Messages
762

Seiryu

Cadet
Joined
May 23, 2019
Messages
9
...I did not. Uh oh...

So, I just looked at that site but I'm not sure what it means or how to tell if it is compatible. For the model UPS I am getting it says the driver is MGE-SHUT, and based on some cursory googling I think that means it should work with FreeNAS and this system? The connection appears to be serial (based on images; the seller didn't specify), which the motherboard on this system has.

I also realized another error with my UPS selection... it has the 208V back panel, not the 120V that Dell offers. This means I will need to figure out how to power it, since I don't have an outlet that can take a 250V cord in the room I plan to put this. It looks like I should be able to get a step-up transformer, I guess, that would convert a standard US type B 110V outlet to power the 250V cord, but I'm not certain what type plug the included cord will need since the pictures are too small to see the prongs and again, the seller didn't specify. The alternative would be to have a new outlet run to the room that could provide the higher voltage.

It also looks like to be able to plug the server into the UPS I would need 2 C13 to L6-30 power cords, since it has 2 redundant power supplies. The good news is that the power supplies are rated for a range that will accept the higher voltage, so that shouldn't be a problem.

I suppose at this point all I can do is wait for everything to be delivered, then I can see what else I need before I'll be able to power it all on.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,466
The alternative would be to have a new outlet run to the room that could provide the higher voltage.
I'd think this would be a better option than a step-up transformer, if you can't either (1) cancel the order for the UPS, or (2) economically convert it to use 120V.
 

JohnnyGrey

Dabbler
Joined
Jul 1, 2017
Messages
45
Holy crap, this is going to be one heck of a machine! Just for comparison, I’m on the opposite end of the scale, I have a Core i3 6100T, which is a dual core 35w chip, and I’m easily able to handle a single transcoding Plex stream. You’d probably be able to handle like a dozen at the same time :)
 

Seiryu

Cadet
Joined
May 23, 2019
Messages
9
Hello again! I have returned as I said I would, as I am lost, as I said I would be!

I've now received all the parts I ordered (and then some), but upon trying to assemble everything I seem to have hit a snag with the SAS card I got, or more specifically the cable for it.

The pre-installed LSI 9265-8i 6Gbps SAS/SATA RAID Controller card has 3 cables attached to it:

20190610_212811.jpg


One goes to that panel attached to the side of the chassis (not sure what that is honestly) and the other two go to the back of the HDD array:

20190610_212848.jpg


I tried to remove the card and cable that was there, but after doing so I couldn't figure out where I needed to plug in the QSFP to MiniSAS(SFF-8088) DDR Cable I got. The excessively large metal pieces on both ends of the cable make it unable to fit in anywhere.

20190610_212547.jpg

20190610_212502.jpg


For reference, the SAS card I got is a LSI megaRaid SAS9200-8e 6Gb/s PCIe Sas Host Bus Adapter 617824-001.

20190610_212723.jpg

20190610_212635.jpg


I'd also note that there don't seem to be any internal connectors on this card like there were on the previous RAID card, just the two external ones. I've tried searching online but I can't seem to find any sort of guide or instructions for getting this set up correctly. Can anyone help me figure out what needs to be connected where, or point me in the direction of some instructions that may be able to help?

I still need to swap out the memory and install the mounting bracket and SSD for the OS drive, but I shouldn't have any problem with those.

Thanks!
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
The external ports on a SAS card are usually used to attach external JBOD enclosures. That's one way of increasing the storage of an existing system rather than having to build a completely new system when you need more disks.

It seems you will need to buy a different SAS card with internal ports OR MacGyver a solution where an external cable is re-routed back into the chassis and connected to the disk backplane.
 

Seiryu

Cadet
Joined
May 23, 2019
Messages
9
OK, so I've done more tinkering and research. "Back-plane" was the magic word I couldn't think of that unlocked the Google results I needed. Now, after completely removing the back-plane panel so that I could see the ports without any obstruction and also get the model number (BPN-SAS2-846EL1) I've had two revelations:

1. Both ends of that cable are intended for external use, hence the giant metal ends, and neither will fit anywhere internally on this or likely any other system.

and

2. I have no idea why the source I found online was recommending a QSFP (external) to SFF-8088 (also external) cable in the first place. It seems like the cable I need is an SFF-8087 (internal) to SFF-8088 (external) if I want to use that card, which I could just run through an open slot next to the card to attach to the back, or I could get an HBA with internal ports, as Inxsible said, and an SFF-8087 to SFF-8087 cable. The kicker? Those two cables running to the RAID card the unit came with? As far as I can tell those are SFF-8087 on both ends.

That said, I have now ordered a different SAS HBA card (LSI SAS 9207-8i SATA/SAS 6Gb/s PCI-E 3.0 Host Bus Adapter IT Mode SAS9207-8i US) which, if the estimate is to be believed, I should have no later than next Monday, 6/17. I REALLY hope that this will be the last part I have to order to get this thing running, cause I am getting real tired of playing Ebay roulette for a compatible part...

In the meantime I have now installed everything else, though the OS drive mount seems weird. It didn't include any instructions, and the ones on SuperMicro's site show pictures of something completely different. There was only one place inside the chassis I could find that seemed right at all, but it blocks one of the memory slots and makes the plastic ventilation an extremely tight fit. Maybe I put it in wrong? But regardless, it's in.

With the SSD installed (I went ahead and got 2 to mirror. With as much as this mess has cost what harm is $20 more for the second SSD) and the memory swapped out it should run now in theory, right? I haven't put in any of the storage drives yet, since I don't have the back-plane connected anyway, but I should be able to test everything else I think.

Would there be any harm in installing FreeNAS on the internal SSD when there are no other HDDs present yet and a different HBA to be installed later? I figure I should be able to get the basic framework up and running then just set up the pool & vdevs later, assuming that's possible.

Thoughts? Advice? Have I made a terrible mistake?

Thanks.
 

ethereal

Guru
Joined
Sep 10, 2012
Messages
762

I would burn in / test your server while waiting for the HBA - then I'd install FreeNAS as you said.
 