New Build - First FreeNAS - Supermicro 847

bsotak

Dabbler
Joined
Aug 30, 2020
Messages
11
Hi all... I've been reading a lot in the forums for the last month and figured I would ask some questions. I've been using a QNAP 8-bay system (TS-869 Pro) for 8 years and I'm looking for an upgrade. I have a couple of reasons for upgrading:
  • Looking for more drive bays, which led me to the Supermicro cases. I think I'd like the 846 (24-bay) better, but those seem pretty few and far between on eBay right now. The used 847 market seems pretty big, although I think the more compressed interior is going to mean more noise (I'll hopefully address that with some fan updates and an SQ power supply).
    • I currently have 8 6TB WD Reds and 6 3TB WD Reds (all SATA) that I plan on using, so I wanted something over the 12-bay Supermicro.
  • 10GbE for multiple users editing video directly on the NAS
  • Plex media server for a couple of transcodes at the same time
I was originally looking at an X9 motherboard, but with the price of ECC RAM not that different between DDR3 and DDR4, I'm now thinking X10 (X10DRi-T4+) and 64GB of RAM. I do have some questions, though.
  1. What processor should I go with? As I said, I want to run a couple of transcodes (4K to 1080p) at the same time. I saw some info that suggested an E5-2660, but I can't seem to find it anymore. What do you think?
  2. I'd like to knock the noise down some, so I'm going with the PWS-1K28P-SQ power supply. And, if I can, I'd swap the internal fans for a Noctua fan wall (120mm, 3000rpm). I won't have all the bays populated, so does that sound reasonable? Or do I need low-profile active coolers on the CPUs as well?
  3. It seems like most of these come with SAS3 backplanes, but I don't think I'll ever need that. I'd pair it with a 9311-8i controller in IT mode.
  4. What size boot SSD should I get? 120GB drives are ridiculously cheap; should I go bigger? Do I need one for caching? If I run any VMs, should they go on a separate SSD?
  5. As I grow the system, is there any reason to get SAS drives over SATA?
I think that's all for now. I appreciate the help, and any insight into the build is welcome!
 

Christopher_P

Dabbler
Joined
Nov 10, 2019
Messages
10
With everything I've read, it sounds like you really do not want to replace this case's fans with Noctua fans. The reason has to do with static pressure within the case: the louder stock fans are what actually move air across the tightly packed drives.

I just ordered a CSE-847 with the same motherboard and have similar questions about the right dual CPUs to load it with. I'm wondering whether I should go with used or brand-new processors, and whether there are any meaningful differences between a top-of-the-line E5, a bottom-of-the-line E5, and any of the E3s. Which is most cost-effective with the best performance?

Anyway, going to lurk here for a bit and hope this thread gets some traffic!
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,909
@Christopher_P, Noctua also makes fans that are optimized for static pressure. I have two of the 3000rpm ones and they are awfully loud, so there is not much to gain in terms of making the server quieter by that move.

But you are absolutely right that this aspect is critical.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,946
How many transcodes do you plan on running at the same time?
How many users at any one time?
SMB is single-threaded, so the number of users should help define the number of cores you need, and the transcodes should help define how fast a clock is needed. As a general rule, I would go for a higher clock and fewer cores.
120GB as a boot drive is more than what is needed, so you are fine there. I can't comment on the fans, other than to say that compromises here will tend to come back and bite you later.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I'll jump in and offer my 2 cents' worth!

There's no such thing as a 'quiet' 24 or 36-bay Supermicro server. I've got a collection of Supermicro servers in a rack in my shop, where noise isn't a concern. I would never install these beasts here in my office or anywhere else in the house. They're just too noisy.

Certainly you can try to quiet one down a little, I suppose, by tinkering with the fan setup. But these systems were designed the way they are for a reason -- cooling. The more likely result of changing the fan system is that you'll fry your hard drives. But the only way to find out is to try it. You would definitely want active CPU coolers if you replace the Supermicro-spec chassis fans.

The 847-series chassis are tougher to modify in that you really only have a 2U compartment for the motherboard. So you only have half the height to work with versus the 24-bay chassis. I may be mistaken, but I'm pretty sure this means installing 120mm Noctua fans isn't an option.

A 24-bay Supermicro 846-type chassis is a better choice, but as you pointed out -- these are hard to find on the used market nowadays.

For a standard Samba-based file server, I suggest a CPU with fewer cores and a faster clock speed. Why? Samba is single-threaded, so you want a CPU with better single-thread performance. On the other hand, if you're planning on running a large number of virtual machines, you might want to go the other way and choose a CPU with more cores.

You can pore over the Intel Xeon E5-2600 v4 specs on Intel's website to research this.

The 8-core E5-2667 v4 @ 3.2GHz looks like a good choice. Compared with 4-core and 22-core Xeons from the same E5 family, it has the best single-thread rating of the three.
Two of these 8-core CPUs will give you 16 cores -- 32 threads with hyperthreading -- which is plenty for supporting a small number of virtual machines.

If you selected the X10DRi-T4+ motherboard for its 4 x 10GbE ports, I'd keep shopping. Those are 10Gbase-T ports, which means RJ-45 / copper. The industry is really oriented towards optical for high-speed networking, meaning SFP+ ports. There are plenty of switch choices available with SFP+ ports, but 10Gbase-T? Not so much. So for a motherboard, I'd be content with 1GbE ports as long as you have at least one PCIe x8 slot to plug a NIC into for SFP+/optical 10GbE networking. Ignore this advice, of course, if you already have a suitable switch supporting 10Gbase-T.

The X10DRi-T4+ doesn't support M.2 NVMe. You can still install these using an adapter card, but if M.2 NVMe support is important to you, that's another reason to search for a different motherboard.

A SAS3 backplane with an LSI SAS9300 IT-mode HBA is great, even if you only use 6Gbps SATA disks. It gives you the option of gaining performance if you switch to SAS3 12Gbps disks in the future, because yes, SAS3 disks are an improvement over SATA & 6Gbps SAS2 devices.
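If you want to confirm that a used card really is running IT-mode firmware, Broadcom's sas3flash utility will report it. A minimal check, assuming the utility is installed and the card shows up as controller 0:

  # List every SAS3 controller the utility can see, with firmware info
  sas3flash -listall

  # Show full details (firmware version, BIOS, product ID) for adapter 0
  sas3flash -c 0 -list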

For booting, the smallest SATA SSDs you can find are more than enough.

With multiple users, you might benefit from an L2ARC. The rough rule-of-thumb guide for sizing these is 4 or 5 x RAM, so a 256GB SSD would work with a 64GB system. I would use NVMe SSDs for the L2ARC.
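If you do try an L2ARC, attaching and detaching a cache device is low-risk and reversible. A minimal sketch, where the pool name "tank" and the FreeBSD NVMe device node "nvd0" are example values, not from this thread:

  # Add an NVMe SSD to the pool "tank" as L2ARC
  zpool add tank cache nvd0

  # Cache devices hold no unique data, so they can be removed at any time
  zpool remove tank nvd0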

Good luck!
 

salty_penguin

Dabbler
Joined
Mar 13, 2021
Messages
10
According to veterans here, SAS provides hardly any noticeable difference over SATA. There’s a bump in IOPS, but it varies by workload.

Any cheap SSD for a boot disk works. Run a RAID 1 mirror (I mean, when in Rome...); I say that because my boot disk did get corrupted once and forced me to do unhappy administrator things.
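If you didn't set up the boot mirror at install time, you can attach a second device to the boot pool later. A hedged sketch -- the pool name (freenas-boot on older versions, boot-pool on newer) and the ada0p2/ada1p2 partition names are assumptions; check yours with zpool status and gpart first:

  # See the current boot pool layout and the existing boot device
  zpool status freenas-boot

  # Attach a matching partition on the second SSD to form a mirror
  zpool attach freenas-boot ada0p2 ada1p2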

You asked about a processor for encoding? That depends on what you mean by video editing "directly on the NAS". If you're planning on running publishing software inside a VM inside TrueNAS, decide based on your software vendor's guidelines. Otherwise, the TrueNAS CPU won't be processing the data... your workstation will. The NAS will serve and cache the files, your workstation will process the changes using its own compute resources, and then it will copy the changes back.

The Plex plugin can use a dedicated GPU for video transcoding to relieve the NAS CPU, though (because that part does live directly inside TrueNAS).

Also... YES on the 24 bays and the 10GbE! And trust me, mirrored vdevs will give the best performance! It comes at the cost of usable storage volume, but IOPS mean a LOT once you start using your NAS as virtual disks for VMs.

If you still want high performance from 24 spindles but with more resilience, consider 4 vdevs of 6 drives each in RAIDZ2 (4 data + 2 parity in each of the 4 vdevs) instead of 12 mirrored vdevs; more vdevs = more IOPS. Or go all-out with 8 vdevs of three-way mirrors for reliability and performance. Test with your final hardware to find the trade-off between space, performance, and reliability that is best for you.
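To make those layouts concrete, here is a minimal sketch of the two 24-drive pool geometries at creation time. The pool name "tank" and the da0-da23 device names are placeholders; in practice you'd use gptid or by-id labels:

  # 12 x 2-way mirrors: maximum IOPS, 50% usable capacity
  zpool create tank \
    mirror da0 da1   mirror da2 da3   mirror da4 da5   mirror da6 da7 \
    mirror da8 da9   mirror da10 da11 mirror da12 da13 mirror da14 da15 \
    mirror da16 da17 mirror da18 da19 mirror da20 da21 mirror da22 da23

  # 4 x 6-wide RAIDZ2: ~66% usable capacity, any 2 drives per vdev can fail
  zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11 \
    raidz2 da12 da13 da14 da15 da16 da17 \
    raidz2 da18 da19 da20 da21 da22 da23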

You should already be getting SSD-like performance from the NAS, so separate SSDs for VMs likely won't help. I recommend using a separate machine as a hypervisor for VMs and simply mounting network storage from your NAS on the hypervisor. I recommend Proxmox! I run three old Dell OptiPlex SFF desktops in a cluster and mount an NFS share as cluster storage for all my VM disks, snapshots, and backups. It's absolutely the way to go.
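For reference, wiring that up on the Proxmox side is a single command once the NFS share exists on the NAS. A sketch with made-up values -- the storage ID, server address, and export path are examples, not from this thread:

  # Register the NAS export as shared cluster storage for VM disks,
  # ISOs, and backups
  pvesm add nfs nas-vmstore \
    --server 192.168.1.10 \
    --export /mnt/tank/vmstore \
    --content images,iso,backup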

I tune my storage machine to store.
I tune my hypervisors to virtualize.

I feel it serves me well in overall security and stability, and it keeps my NAS safe from the networking disruptions and other risks that VMs can create. I consider my NAS a "production" machine... it is NOT for "development" or "testing" in my lab. If I'm spinning up a VM, 98% of the time it will be torn down again.

Don't add cache ("L2ARC") drives until after you have installed the maximum RAM your system will allow. That's 1.5 terabytes for YOU, sir!

The advice on L2ARC is HORRIBLE! DON'T DO IT! That Supermicro board lets you install 24 x 64GB sticks of RAM for a total of 1.5 massive terabytes of quad-channel DDR4, putting 59.7 GB/s of sweet memory bandwidth behind your caching.

That motherboard literally cannot supply the number of PCIe lanes required to stripe together enough NVMe "L2ARC" to come anywhere CLOSE to the performance of just adding RAM. Lol.
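A rough back-of-the-envelope version of that argument (the throughput figures below are ballpark assumptions for illustration, not measurements):

  # Ballpark comparison: RAM bandwidth vs. striped NVMe L2ARC
  # (all figures are rough assumptions, not benchmarks)
  ram_bw_per_socket = 59.7   # GB/s, quad-channel DDR4 per CPU socket
  nvme_bw = 3.5              # GB/s, one PCIe 3.0 x4 NVMe SSD, best case
  lanes_per_drive = 4        # PCIe lanes consumed per NVMe drive

  drives = ram_bw_per_socket / nvme_bw
  print(f"NVMe drives to match one socket's RAM bandwidth: {drives:.0f}")
  print(f"PCIe lanes that would consume: {drives * lanes_per_drive:.0f}")
  # ~17 drives and ~68 lanes -- most of the 80 lanes two E5 v4 CPUs
  # provide, before the HBA and the 10GbE NIC get any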

"The rough rule-of-thumb guide for sizing these is 4 or 5 x RAM"

Yeah, so just stripe together 16-24 PCIe 3.0 x4 NVMe drives... you'd need a total of 6TB, so definitely do 300GB-512GB each.

Yeah, don't "help" your system by adding "cache" that runs at 10% of the speed the box was designed for.

Ignore the FUD regarding SFP+ vs. Base-T on the 10GbE network. If your cable runs are less than 30 meters, you won't come near seeing any performance deterioration.

SFP+ means buying DACs, transceivers, fiber-optic cables, and all that extra garbage that adds up and gives zero return on capital expenditure. And if, God forbid, your switch has no RJ45 ports, dig a transceiver out of the drawer rather than blowing a few hundred bucks on new equipment when what you have already performs great.

Good luck!

Post photos!!!

Post stats!!!
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,909
Those are rather assertive statements, especially relative to what you state as your experience. To pick one: saying that 4-12 vdevs of traditional HDDs come even close to an SSD is just plain wrong. A single HDD delivers about 300-400 IOPS; now compare that to even the slowest SATA SSD you can find.

This is the very first time I have publicly criticized someone like this here, but I think some of these statements are made too forcefully and are not necessarily universally true.
 