Used Supermicro 4U server or homebuilt?

Andrew Ostrom

Explorer
Joined
Jul 28, 2017
Messages
57
I've wanted to build a FreeNAS system for a while, and it's finally a priority. I've read a lot of articles on building one myself, and I'm confident I could do it, but I wonder if a used commercial server is easier/cheaper/more reliable (I've built plenty of computers over the years, but never a fileserver). I want it to be rack-mount, if possible, to live in the rack in my basement "tech room." It's a room dedicated to my lighting system, network rack, etc., so noise, heat, and power are non-issues (except power cost).

I've found a used Supermicro server on eBay - the link is: https://www.ebay.com/itm/Supermicro...157339?hash=item28709ef49b:g:IjAAAOSwqlxcEUYV

In summary, it's a 4U server with dual Xeons, 64GB ECC RAM, an LSI SAS controller, etc. It looks like it would make a good FreeNAS box, but I want to be sure before spending that much money. It seems to have plenty of CPU and RAM, the needed SAS2 backplane, and good Ethernet and SAS controllers. I suppose I'll have to flash the LSI controller into IT mode - I did that on the LSI SAS 9207-8i card I have in my PC. Is that still true?

My primary usage is as a file/backup server for 3 to 5 PCs, media server for home theater, etc. I am a serious photographer with several TB of photos, and plan to scan/edit many thousands of 35mm slides and negatives from my pre-digital days. I will be buying a few new disks, but mostly moving several 4TB and 6TB disks from my current PC. I thought that Win10 Storage Spaces would do what I needed, but it was a disaster.

The detailed configuration is:
  • Supermicro SuperChassis 846E1-R1200B Dual 6 Core Xeon 24 x HDD Storage Server
  • Dual Intel Xeon E5-2620 (6-core, 15MB cache, 2.0GHz)
  • 64GB RAM (8x 8GB PC3-12800R)
  • 24x 3.5'' Drive Caddies with Screws Installed
  • SAS Controller: LSI 9266-8i
  • Ethernet: Intel I350T4BLK (4 Port Gigabit)
  • Motherboard X9DRi-F
  • Backplane: BPN-SAS2-846EL1
  • Power Supplies: Dual PWS-1K21P-1R 1200W 80 Plus Gold
Thanks to you all for your input!
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
That's a good configuration to have, and that's actually a great price too. I have seen barebones chassis go for around that price, so getting the board and RAM along with it is great.

Usually, picking your own components turns out to be a bit cheaper and lets you choose exactly what you want (say, a board that allows further expandability), whereas a pre-built system like this is pretty much turn-key and can get you up and running much sooner.

Building your own can also send you down a rabbit hole: researching one board turns up another board with that one special feature you suddenly must have -- but now you have to change the RAM and the CPU to match the new board, and so on. Sometimes that effort gets too tedious and the build keeps getting postponed. Ask me how I know. :rolleyes:

Depending on how soon you want to be up and running, I'd choose one or the other option. Just make sure the SAS controller is one that can be converted into IT mode.
 

Andrew Ostrom

Explorer
Joined
Jul 28, 2017
Messages
57
Thanks for the input. I know I'll have to replace the LSI card with a SAS9207-8i in IT mode, but that's no problem; I'm running that setup in my current PC. With 24 slots for disks, I assume more spindles are better for redundancy and performance than fewer big disks.

Any suggestions about using the onboard Intel NIC vs. the 4-port PCIe card? I have Cat-6 cabling and decent network infrastructure throughout my house (30 years in the network hw/sw industry, so I kinda overdid things when we built the house).

Thanks again!
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
Any suggestions about using the onboard Intel NIC vs. the 4-port PCIe card? I have Cat-6 cabling and decent network infrastructure throughout my house (30 years in the network hw/sw industry, so I kinda overdid things when we built the house).

Thanks again!
Just use the on-board NIC. Having multiple NICs in a FreeNAS system is only useful if you are willing to use LAGG -- and that only provides real value if your other network gear, like managed switches, supports it.
Since you have been in the hw/sw industry, you know that "things" depend on many factors. I have 2 on-board NICs and 1 IPMI port. I just use 1 NIC for the data and the IPMI port.

You can use the 4-port NIC in another build -- say a pfSense or OPNsense router build or something like that.

Although, if you intend to use the server as a hypervisor with FreeNAS as a VM, then having multiple ports may be useful if you want to segregate VMs onto different subnets or create a DMZ.
 

Andrew Ostrom

Explorer
Joined
Jul 28, 2017
Messages
57
Thanks for the details -- it pretty much confirms what I was thinking after doing a bunch of reading. I tried building a large drive "pool" (~36TB) using Windows 10 Storage Spaces (4TB and 6TB IronWolf drives), but the performance was horrible, drives would go offline for no apparent reason, and I spent hours and hours rebuilding virtual drives and moving data to deal with fragmentation and the failure to reclaim unused space. Now I've gone back to 12 individual drives with no RAID protection, and that's a real pain, too. I KNOW a drive will fail someday, and then I'll lose a bunch of stuff. I assume Storage Spaces works on Windows Server, but the Windows 10 implementation sucks.

It sure would be nice if the UNIX/Linux and Windows worlds used terms like "pool" and "virtual device" to mean the same things. My background is PDP-11, VAX/VMS, Windows NT, and such, so the Unix terminology is often confusing.

Everything I've read tells me that 1GbE shouldn't saturate very often, if at all. I will definitely use IPMI; I assumed that would work over the onboard NIC. Video and photo editing & streaming will probably be my biggest bandwidth drivers - for example, Lightroom will try to load a whole folder of perhaps 100 to 250+ 30MB images at a time, and then write out JPEGs (8MB each?) of all of them at once. I expect it to take some time, but a really long delay going from image to image, or writing out the edited images, would be maddening.

If I run into issues I can run link aggregation, I suppose (my primary ASUS RT-AC88U router supports it), but I'd have to upgrade my "core" switch and the NIC in my (being rebuilt) desktop workstation PC. If I have to upgrade those parts anyway, I could just go to SFP+ over fiber with two SFP+ NICs (I planned ahead: there's conduit from a box in the office to the "server" room, and it's only a short pull). Or I could use my existing Cat-6 cabling with a 10GbE copper switch and 10GbE NICs. So many choices...
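
As a sanity check, here's a quick back-of-envelope for that Lightroom batch (assuming ~110MB/s of practical throughput on 1GbE and roughly ten times that on 10GbE - assumed figures, not measurements):

Code:
# Rough time to move a Lightroom batch over the network.
# Assumed sustained throughput: ~110 MB/s on 1GbE, ~1100 MB/s on 10GbE;
# real numbers depend on protocol overhead, the pool layout, and caching.

def batch_seconds(num_images, mb_per_image, link_mb_per_s):
    return num_images * mb_per_image / link_mb_per_s

for link, speed in [("1GbE", 110), ("10GbE", 1100)]:
    load = batch_seconds(250, 30, speed)    # load 250 scans @ ~30MB each
    export = batch_seconds(250, 8, speed)   # export 250 JPEGs @ ~8MB each
    print(f"{link}: load ~{load:.0f}s, export ~{export:.0f}s")

# Prints roughly: 1GbE: load ~68s, export ~18s / 10GbE: load ~7s, export ~2s

So even the worst case on 1GbE is about a minute of bulk transfer for a whole folder; going image to image should feel fine.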
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
With 24 slots for disks, I assume more spindles are better for redundancy and performance than fewer big disks.
More disks are usually better.
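To put rough numbers on that, here's a quick sketch comparing a few example layouts (plain parity arithmetic only; it ignores ZFS metadata, padding, and the usual advice to keep a pool below ~80% full):

Code:
# Rough RAIDZ2 comparison. More, smaller vdevs give you more redundancy
# groups and more vdevs' worth of random IOPS; usable space here is just
# (disks - 2 parity) * size per vdev, so it overstates real capacity.

def raidz2_usable_tb(disks_per_vdev, vdevs, tb_per_disk):
    return (disks_per_vdev - 2) * vdevs * tb_per_disk

layouts = [
    ("2 vdevs of 6x 6TB", 6, 2, 6),
    ("3 vdevs of 8x 4TB", 8, 3, 4),
    ("4 vdevs of 6x 4TB", 6, 4, 4),
]
for name, per_vdev, vdevs, tb in layouts:
    print(f"{name}: ~{raidz2_usable_tb(per_vdev, vdevs, tb)}TB usable, "
          f"{vdevs}x vdev random IOPS, survives 2 failures per vdev")

Each extra vdev adds another set of spindles that can seek independently, which is why more disks across more vdevs usually wins for mixed workloads.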
I will definitely use IPMI; I assumed that would work over the onboard NIC.
NIC-0 can also carry the IPMI connection but there is a dedicated IPMI port if you want it connected to a discrete network. I put a 10Gb interface in my system, so I use the dedicated IPMI port instead.
If I run into issues I can run link aggregation,
I tried that. Link aggregation doesn't help here; a single transfer still tops out at one 1Gb link. That is why I went to 10Gb.
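
That's because LACP balances traffic per connection: the switch hashes each flow's MAC/IP/port fields to one member link, no matter how many links are in the bundle. A toy sketch of the idea (hypothetical helper and made-up addresses, just to illustrate):

Code:
# LACP-style per-flow hashing: every packet of a given connection maps
# to the SAME member link, so one transfer is capped at one link's speed.

def member_link(src_ip, dst_ip, src_port, dst_port, num_links):
    return hash((src_ip, dst_ip, src_port, dst_port)) % num_links

# One SMB session from a workstation to the NAS (made-up addresses):
flow = ("192.168.1.50", "192.168.1.10", 52311, 445)
links_used = {member_link(*flow, num_links=4) for _ in range(1000)}
print(links_used)   # always a single link, never all four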

One of the posts I made about it:
https://forums.freenas.org/index.ph...-for-ssd-or-simply-overkill.71587/post-495878

One where I talked about the network cards:
https://forums.freenas.org/index.php?threads/10-gig-networking-primer.25749/post-492357

The post where I gave links to the firmware / software that is needed for the switch:
https://forums.freenas.org/index.php?threads/upgrade-recommendations.69981/post-485285
 

Andrew Ostrom

Explorer
Joined
Jul 28, 2017
Messages
57
Thanks so much for the help. My box will be here on Friday and I'll report my experience to the forums after I get things up and running. I have a SAS9207-8i to replace the SAS9266-8i. I figure I'll get it all up and running on the 1GbE link and then move over to 10GbE if/when I see a bottleneck.
 

Death Dream

Dabbler
Joined
Feb 18, 2019
Messages
24
I ended up getting my server today; they didn't want to deliver to my house yesterday. It arrived in great shape, though. I still have to power it on and test a few things.

I also still have to order a SAS controller and hard drives.

How's your server coming along, Andrew?
 

Death Dream

Dabbler
Joined
Feb 18, 2019
Messages
24
What is this card on top of the SAS controller? Looking online at the SAS9266-8i, I don't see this card attached by default, and I can't find any info on the smaller card.

I ask because the battery pack seems useful if there are ever power issues, but it connects directly to the smaller card. I'm looking at replacement SAS controllers in IT mode to see whether I could still use that battery pack.

Andrew, were you able to still make use of that stuff with your SAS9207-8i?
 

Attachments

  • SAScards.jpg
  • Cardontop.jpg

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
What is this card on top of the SAS controller? Looking online at the SAS9266-8i, I don't see this card attached by default, and I can't find any info on the smaller card.
The card you have is a hardware RAID controller with a memory module and an external battery pack. Those just need to come out and be replaced by the SAS HBA that FreeNAS needs, so it can have direct access to the drives.
I ask because the battery pack seems useful if there are ever power issues, but it connects directly to the smaller card. I'm looking at replacement SAS controllers in IT mode to see whether I could still use that battery pack.
The battery pack is only there to preserve the data in the RAID card's cache RAM through a power loss; an HBA has no cache RAM, so it doesn't need a battery.
 


Death Dream

Dabbler
Joined
Feb 18, 2019
Messages
24
My goodness that is expensive! I could only dream that I could sell it for that much!

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
My goodness that is expensive! I could only dream that I could sell it for that much!
Probably only half that. I think that seller is very optimistic. They had good photos though.
 

Andrew Ostrom

Explorer
Joined
Jul 28, 2017
Messages
57
Yes, my box arrived last week. The swap of the SAS9266 for the SAS9207 was completely painless. My eight 3TB Seagate Constellation ES.3 drives ($39 each on eBay) arrived on Monday and have been running badblocks since. Thirty-six hours in, it's just finishing the 3rd of 4 patterns with no errors so far. I have a feeling setting up the shares will be more complex, but there seems to be a lot of online info to digest; as soon as badblocks and a long SMART test complete, I'll try building my vdev(s) and pool.
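
That pace matches the back-of-envelope math, too (assuming ~140MB/s sustained, which is just a ballpark for these drives - they slow down toward the inner tracks):

Code:
# Each badblocks pattern makes one full write pass and one full read
# pass over the disk, so the default 4-pattern run is 8 full passes.

disk_mb = 3e6        # 3TB drive, in MB
mb_per_s = 140       # assumed sustained transfer rate
pass_hours = disk_mb / mb_per_s / 3600
pattern_hours = 2 * pass_hours          # write pass + read pass
print(f"~{pattern_hours:.0f}h per pattern, ~{4 * pattern_hours:.0f}h total")
# ~12h per pattern, ~48h for all four patterns

So 36 hours for three patterns is right on schedule.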

I'm going to list the SAS9266-8i on eBay, too - as long as I get what I paid for the SAS9207, I'll be happy. It also came with a 4-port Intel GbE NIC that I'll list. The IPMI on the dedicated port works fine, and that still leaves two onboard GbE ports. My biggest challenge is going to be mounting the thing in my rack - I had forgotten that when I bought the rack (in 2001) I saved money by buying one with only front mounting capability; the back is designed to take a door that I didn't buy. Plus, the server came with rails for square-hole racks, which mine isn't. I'll figure something out.

Other than the frustration of unlearning and relearning 40 years of command-line syntax (and the totally mystifying Unix jargon), it has been pretty easy. I'm not a Unix/Linux person - my background is RSTS/E, RT-11, RSX-11, VMS, TOPS-10, TOPS-20, Windows NT [yes, I worked for Digital Equipment]. I understand that Unix came out of Bell Labs and the telephone world rather than general-purpose computing, but I'd have thought that over the years it would have become at least a little more intuitive... [Wow - I just realized that April 19th will mark 40 years since I started in the computer biz.]
 


Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
PS. I can't recall if I shared the configuration guide with you. If you haven't seen it, you might want to take a look at it, and at these scripts:

Uncle Fester's Basic FreeNAS Configuration Guide
https://www.familybrown.org/dokuwiki/doku.php?id=fester:intro

Github repository for FreeNAS scripts, including disk burnin
https://forums.freenas.org/index.ph...for-freenas-scripts-including-disk-burnin.28/

Building, Burn-In, and Testing your FreeNAS system
https://forums.freenas.org/index.php?resources/building-burn-in-and-testing-your-freenas-system.38/
 

Death Dream

Dabbler
Joined
Feb 18, 2019
Messages
24
Here we go...

I have another 64GB incoming, but I figured I'd test this stuff out while I wait.
 

Attachments

  • memtest.JPG