BUILD 24U FreeNAS Build - Your feedback is welcome!

Status
Not open for further replies.

fn369

Explorer
Joined
Jun 17, 2016
Messages
60
Hello all,

I am very new to the forum, as well as FreeNAS in general, and I must apologise in advance for how long this post has become!

I started out 5 or 6 years ago using various Drobos, then moved on to 2x Synology DS2413+ (12 bay) units (using 1 as a backup) - which I’ve been using for the last 3 years.

Having just suffered major data losses on both Synologys, I decided to use it as an opportunity to upgrade. Suddenly realising how underwhelming Synology's hardware is, I was considering a 16 bay QNAP Enterprise device.

However, it seems that by going the FreeNAS route I could get a lot more hardware for a lot less money, and end up with a more secure NAS to boot, and who doesn’t like that?!

Having spent the past 10 days or so reading through the forums, following links and trying to configure an ideal setup, I have become utterly confused reading through all the different specs on Supermicro's and Intel's websites.

For example, what I’ve seen praised here as ‘a great setup’ often appears not to follow Supermicro's recommended compatibility, and I trust the experts here more than Supermicro when it comes to getting the best out of FreeNAS!

GOALS

This is what I’d like to achieve (includes some future proofing):
  1. Very large family photo library
  2. iTunes media library (music / films / TV)
  3. Plex media library (media converted from iTunes, plus more from (8) below)
  4. Plex transcoding of up to 9 simultaneous 1080p/4K streams to TVs and iOS devices
  5. Sonos music server playing 24/7 in up to 12 zones
  6. nextCloud server for 4-5 people
  7. Backup for 5 computers (Mac / Linux)
  8. SickRage / SABnzbd / Transmission / CouchPotato, possibly running in a Jail 24/7
Plus virtualisation of up to 6 Linux-based VMs, with the ability to run a couple of them as day-to-day computers from a dumb terminal via HDMI output, if that’s possible?

Notes:

I’ve currently got 25TB of data, of which 20TB is iTunes media. However, because I want to start using Plex, I first need to convert that media to a Plex-friendly format. This will virtually double my space requirements, giving me an immediate need for 45TB of usable space.

Having done my best to work through the FreeNAS guidelines, I believe I would want vdevs of 8 drives in a RAIDZ2 configuration. My Synology units both have 12 x 4TB WD Red drives, and I will be able to use 12 of them towards the FreeNAS.

My thought was to start off by creating a brand new ‘future proof’ ~40TB Plex vdev by buying 8 x 8TB WD Reds for use in a RAIDZ2 configuration.

I will maintain triple backups of crucial data (family photos, personal files etc.) in addition to the FreeNAS, but everything else is media so I think it’s reasonable to rely on RAIDZ2 for now, and perhaps add secondary backup later? (to save having to download everything again in case of catastrophic loss / failure)
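As a sanity check on those capacity numbers, here is a rough back-of-the-envelope for a single RAIDZ2 vdev. This is a sketch only: the TB-to-TiB conversion and the common "keep ZFS pools under ~80% full" rule of thumb are assumptions, and real ZFS overhead (metadata, padding, ashift) varies.

```python
def raidz2_usable_tib(drives: int, drive_tb: float, fill_limit: float = 0.8) -> float:
    """Rough usable capacity of a single RAIDZ2 vdev, in TiB.

    RAIDZ2 dedicates two drives' worth of space to parity.
    Marketing terabytes (10**12 bytes) are converted to TiB (2**40 bytes).
    fill_limit reflects the common advice to keep ZFS pools under ~80% full.
    """
    data_drives = drives - 2                      # RAIDZ2 parity costs 2 drives
    raw_tib = data_drives * drive_tb * 1e12 / 2**40
    return raw_tib * fill_limit

# 8 x 8TB RAIDZ2 vdev, as proposed:
print(round(raidz2_usable_tib(8, 8.0), 1))   # ~34.9 TiB comfortably usable
# 8 x 4TB RAIDZ2 vdev built from the existing WD Reds:
print(round(raidz2_usable_tib(8, 4.0), 1))   # ~17.5 TiB
```

By this rough measure the 8 x 8TB vdev lands at around 35 TiB comfortably usable, so the second vdev of 4TB drives may be needed sooner rather than later to cover a 45TB target.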

PROPOSED HARDWARE

If the following list seems familiar to you, it's because I've copied it directly from @DataKeeper’s first build (because I’m confused and his planned usage was most similar to mine, and he kindly invited me to do so!)

Chassis:
SUPERMICRO CSE-846E16-R1200B 1200W PSUs

Mainboard:
SUPERMICRO MBD-X10SRL-F

Processor:
Intel Xeon E5-1650 v3 Haswell-EP 3.5GHz

CPU Cooler:
Noctua NH-U9DX i4

RAM:
4 x SAMSUNG 16GB SDRAM ECC Reg DDR4 Model M393A2G40DB0-CPB (RDIMM)
Or something else? (Not Kingston! Not Kingston! Not Kingston!)

Install Drives:
2 x SUPERMICRO SSD-DM064-PHI SATA DOM (they only cost $3 more than 32GB)

Data Drives:
8 x 8TB WD Red / 8 x 4TB WD Red (Another 8 later on)

Data Cables:
2 x Supermicro CBL-0281L-01 75cm Mini-SAS (SFF-8087) to Mini-SAS (SFF-8087) Cable

HBA:
IBM ServeRAID M1015 SAS/SATA Controller

NIC:
At the moment I only have a 1GbE network and I’m using the Synology units with link aggregation, but I’d like to go to 10GbE or Fibre equivalent at some point.

I was thinking of starting with a 4 x 1GbE NIC (2 in link aggregation and 2 in failover), with the ability to add 10GbE / Fibre later on, but perhaps I should upgrade to a NETGEAR ProSAFE 10 Gigabit switch now with the money saved by not buying the QNAP…? Or is fibre better?

HDMI Output?
This is something offered by QNAP, but I’ve not seen anyone mention it here. Can it be done?

What would you do?

I have contacted DataKeeper, and I'm told that his NAS is running perfectly and that he’s extremely happy with it, and I don’t doubt him for a moment!

So why am I posting this?

1. Simply because it’s been 15 months since his build.
2. It’s possible that the landscape has changed with regard to best-component recommendations.
3. I don’t know anyone who has their own NAS (let alone FreeNAS), so I’ve got no-one to bounce ideas off, and I would really appreciate some expert feedback.

If you’ve made it this far, THANK YOU very much for the time you’ve already given me. I’m sure you’re busy and I recognise that it’s been an extremely long post.

And now, if you’ve still got the energy and inclination, please tell me what you think. Have hardware recommendations changed since then? What would you do? Exactly as above, or something different?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
HDMI isn't easily possible and the purpose of such would be unclear.

Might want to head on over and read the 10 Gig networking primer.

The Intel E5-1650 v4 might be available now. Don't get all OCD over it though, not particularly important.
 

fn369

Explorer
Joined
Jun 17, 2016
Messages
60
Thanks for your reply, @jgreco

HDMI isn't easily possible and the purpose of such would be unclear

I'd like to replace a couple of desktop computer boxes and run them from the server using VirtualBox. Perhaps I should consider a dedicated 1U/2U server to run such things instead?

Might want to head on over and read the 10 Gig networking primer

Thank you for the reminder! I've read myself senseless over the past week, and I forgot about that thread. I am reading/re-reading now

The Intel E5-1650 v4 might be available now. Don't get all OCD over it though, not particularly important.

I'll have a look. Was there anything else that you'd consider doing differently? I recall you saying the other day that you thought DataKeeper's setup was 'almost' your idea of perfect!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Thanks for your reply, @jgreco

HDMI isn't easily possible and the purpose of such would be unclear

I'd like to replace a couple of desktop computer boxes and run them from the server using VirtualBox. Perhaps I should consider a dedicated 1U/2U server to run such things instead?

I'm entirely unfamiliar with the possibilities here. It may be that VirtualBox could support a GPU via PCI passthru, in which case I'd say that this could make sense, but you would probably be on the leading edge/bleeding edge of things.

Might want to head on over and read the 10 Gig networking primer

Thank you for the reminder! I've read myself senseless over the past week, and I forgot about that thread. I am reading/re-reading now

The Intel E5-1650 v4 might be available now. Don't get all OCD over it though, not particularly important.

I'll have a look. Was there anything else that you'd consider doing differently? I recall you saying the other day that you thought DataKeeper's setup was 'almost' your idea of perfect!

'Cuz it's a really solid platform.

Also, and I want to put out the big red caution flag when I say this, that platform WOULD be capable of running ESXi, meaning virtualized FreeNAS and virtualized Windows instances would be a possibility. I strongly advise against this if you are not familiar with ESXi and FreeNAS already, but it is the way I'd approach virtualizing some desktops on the same box. You use PCI passthru for the M1015 and give that to FreeNAS. Add an internal SSD for the ESXi VM datastore, using the Supermicro internal drive bracket; that's where your Windows VMs live. Then you PCI passthru a GPU or two (theoretically/not something I've tried/should work though). Absolutely positively follow my guide to virtualizing FreeNAS: https://forums.freenas.org/index.ph...ide-to-not-completely-losing-your-data.12714/

Don't do that unless you feel comfortable with everything you read though.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
24U FreeNAS Build
This I gotta see... That is one heck of a case... Pics or it didn't happen... ;)

Plus virtualisation of up to 6 Linux-based VMs, with the ability to run a couple of them as a day-to-day computer from a dumb terminal via HDMI output if that's possible?
Would you consider just using an RDP type of solution instead?

Don't do that unless you feel comfortable with everything you read though.
I would agree that leveraging ESXi would be my preferred method. But I also doubly agree with the warnings and the need for thorough understanding that jgreco mentioned.
 

fn369

Explorer
Joined
Jun 17, 2016
Messages
60
I'm entirely unfamiliar with the possibilities here. It may be that VirtualBox could support a GPU via PCI passthru, in which case I'd say that this could make sense, but you would probably be on the leading edge/bleeding edge of things.

In all honesty, that doesn't sound like a comfortable place for me to be at the moment!

'Cuz it's a really solid platform.
I was simply wondering if you would do anything differently 15 months on, but I'm more than happy to simply copy @DataKeeper's build in its entirety. :smile:

Also, and I want to put out the big red caution flag when I say this, that platform WOULD be capable of running ESXi, meaning virtualized FreeNAS and virtualized Windows instances would be a possibility. I strongly advise against this if you are not familiar with ESXi and FreeNAS already, but it is the way I'd approach virtualizing some desktops on the same box. You use PCI passthru for the M1015 and give that to FreeNAS. Add an internal SSD for the ESXi VM datastore, using the Supermicro internal drive bracket; that's where your Windows VMs live. Then you PCI passthru a GPU or two (theoretically/not something I've tried/should work though). Absolutely positively follow my guide to virtualizing FreeNAS: https://forums.freenas.org/index.ph...ide-to-not-completely-losing-your-data.12714/

Don't do that unless you feel comfortable with everything you read though.
Big Red Flag noted, and crisis averted, thank you. After all I've read, I have absolutely no intention of virtualising FreeNAS, so a separate ESXi server seems like the way to go.
 

fn369

Explorer
Joined
Jun 17, 2016
Messages
60
This I gotta see... That is one heck of a case... Pics or it didn't happen... ;)
Fair enough! I must admit to feeling duly daunted, but I believe I can follow instructions, so hopefully it will all work out well!

Would you consider just using an RDP type of solution instead?
Thank you for the suggestion. I've just done some searches, and perhaps that will work. The problem is that this is not my field of expertise (at all) and I only know what I want it to *do*, so I'm scrabbling around for answers.

From what I understand, QNAP allows one to access Linux desktop environments with nothing more than an HDMI connection, and the idea of getting rid of some boxes that are cluttering up my office is very appealing.

I would agree that leveraging ESXi would be my preferred method. But I also doubly agree with the warnings and the need for thorough understanding that jgreco mentioned.
I have no understanding of ESXi, let alone a thorough one, so this is not an avenue I will be exploring. Thanks for the warning.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Thank you for the suggestion. I've just done some searches, and perhaps that will work. The problem is that this is not my field of expertise (at all) and I only know what I want it to *do*, so I'm scrabbling around for answers.

From what I understand, QNAP allows one to access Linux desktop environments with nothing more than an HDMI connection, and the idea of getting rid of some boxes that are cluttering up my office is very appealing.
FreeNAS does have the ability to run VMs and you could access them via VNC (I have used TightVNC in my past testing). So from a technical standpoint having multiple Linux VMs within FreeNAS is totally feasible.

If you are needing HDMI though that is perhaps another issue. If you were mentioning HDMI purely based on thinking that was a method needed to connect to the VMs, then foregoing that and using a VNC Client and VMs on FreeNAS should be a viable route. Bonus would be no need for additional cabling since it is all done over LAN.
 

fn369

Explorer
Joined
Jun 17, 2016
Messages
60
FreeNAS does have the ability to run VMs and you could access them via VNC (I have used TightVNC in my past testing). So from a technical standpoint having multiple Linux VMs within FreeNAS is totally feasible.

Excellent, thank you!

If you are needing HDMI though that is perhaps another issue. If you were mentioning HDMI purely based on thinking that was a method needed to connect to the VMs, then foregoing that and using a VNC Client and VMs on FreeNAS should be a viable route. Bonus would be no need for additional cabling since it is all done over LAN.

I mentioned HDMI simply because I thought it was necessary, but I'm delighted to know that I was wrong on this occasion! Thank you again.
 

fn369

Explorer
Joined
Jun 17, 2016
Messages
60
Can someone please confirm whether the following backplane would be appropriate for my needs?

Supermicro BPN-SAS2-846EL1

It's included with a second hand chassis I've seen, but not sure if it's what I need? Thanks.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Supermicro BPN-SAS2-846EL1
That should work--it's the same one I have in my chassis.

@Mirfster's comment upthread ("this I gotta see") is because you've captioned this as a 24U build, not a 24-bay build. 24U would have the chassis over 3' tall. Your proposed build is still pretty serious, but not that serious.

The only points I'd make that haven't been pointed out already relate to your boot devices. SATA DOMs are generally considerably more expensive than plain SSDs, and mirrored SSDs/DOMs are generally regarded as overkill (which didn't stop me from putting mirrored DOMs on my last server). You might be able to save a few bucks by going for a single regular SSD.

Also, keep an eye out for complete servers--I snagged my system for just over $1k, and it would give you another 12 bays. I've seen similar deals come up a couple of times since then.
 

fn369

Explorer
Joined
Jun 17, 2016
Messages
60
That should work--it's the same one I have in my chassis.
Good to know, thank you!

@Mirfster's comment upthread ("this I gotta see") is because you've captioned this as a 24U build, not a 24-bay build. 24U would have the chassis over 3' tall. Your proposed build is still pretty serious, but not that serious.

Aaarrrrgggghhhh... (hides face in embarrassment) And I didn't even notice my mistake when @Mirfster pointed it out.

The only points I'd make that haven't been pointed out already relate to your boot devices. SATA DOMs are generally considerably more expensive than plain SSDs, and mirrored SSDs/DOMs are generally regarded as overkill (which didn't stop me from putting mirrored DOMs on my last server). You might be able to save a few bucks by going for a single regular SSD.

OK, thank you. I had understood from my reading that mirrored SATA DOMs were what was recommended!

Also, keep an eye out for complete servers--I snagged my system for just over $1k, and it would give you another 12 bays. I've seen similar deals come up a couple of times since then.

I would love to find a complete server! I hesitate to ask, but if by any chance you come across one, please do let me know. An extra 12 bays would allow me to use 4TB drives throughout, which would be quite an initial saving (let alone what I'd save if I bought a complete 36-bay server for less than the cost of a new 24-bay one!)
 

philhu

Patron
Joined
May 17, 2016
Messages
258
I bought a 4U SC-847/X8DTN+ with IPMI, complete with 2 quad-core Xeon processors, 48GB of memory and 36 HD slots, for $700, so they are out there. I added 4 SSDs to the internal controller: 2 for caching and 2 to boot from as a mirror.

System works great. I have 80TB in it, running 2 jails for Plex, PyTivo and Bacula (tape backup), and 2 CentOS 6 VMs under a VirtualBox jail.

It takes 288 watts all the time, but it let me retire my old Dell 2950, which used, gulp, 1777 watts all the time.
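For the curious, that wattage difference is easy to turn into a yearly running cost; a quick sketch (the $0.12/kWh rate is purely an assumed example tariff — plug in your own):

```python
def annual_cost_usd(watts: float, usd_per_kwh: float = 0.12) -> float:
    """Yearly electricity cost of a device drawing `watts` continuously."""
    kwh_per_year = watts * 24 * 365 / 1000
    return kwh_per_year * usd_per_kwh

freenas = annual_cost_usd(288)    # the SC-847 build
dell = annual_cost_usd(1777)      # the retired Dell 2950
print(round(freenas), round(dell), round(dell - freenas))  # → 303 1868 1565
```

At that example rate, retiring the old box saves on the order of $1,500 a year in electricity alone.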
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
You can also buy a server and add a JBOD expansion chassis to it. Also, since you want triple protection, I would suggest thinking about a second system to be a replication target. Preferably somewhere separate from the primary.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I had understood from my reading that mirrored SATA DOMs were what was recommended!
FreeNAS has supported mirrored boot devices since the 9.3 release. If you're using USB sticks as boot devices, mirroring them is highly recommended, as they're notoriously unreliable. SSDs and DOMs are much more reliable, so there's less incentive to mirror them. Also, reinstallation is simply a matter of doing a clean install and uploading a saved copy of your config file, so the consequences of a failed boot device are minimal. Mirrored DOMs are certainly fine, but probably not necessary.
I would love to find a complete server! I hesitate to ask, but if by any chance you come across one, please do let me know.
I'll see, though I'm not looking as much now that I've found one (though I did get an email a couple of weeks ago from the folks I bought mine from--if I get another I'll let you know). I think my ebay search when I found mine was simply "supermicro 4u". My server came with E5-2660s, which I upgraded to E5-2670s for no other reason than that the 2670s were too cheap to pass up, and a RAID card that I sold to recoup some of my cost. I'll warn you that energy-efficiency isn't its strong suit--it idles at over 300 watts, and burns 400-450 most of the time.
 
Last edited:

philhu

Patron
Joined
May 17, 2016
Messages
258
FreeNAS has supported mirrored boot devices since the 9.3 release. If you're using USB sticks as boot devices, mirroring them is highly recommended, as they're notoriously unreliable. SSDs and DOMs are much more reliable, so there's less incentive to mirror them. Also, reinstallation is simply a matter of doing a clean install and uploading a saved copy of your config file, so the consequences of a failed boot device are minimal. Mirrored DOMs are certainly fine, but probably not necessary.

Well, when you can get two brand-new SanDisk 140GB SSDs for $59, why wouldn't you mirror them? I just plugged all the SSDs into the unused 6-port SATA motherboard controller.

And yes, USB sticks wear out quickly. One of mine died in less than 2 weeks.
 

fn369

Explorer
Joined
Jun 17, 2016
Messages
60
I bought a 4U SC-847/X8DTN+ with IPMI, complete with 2 quad-core Xeon processors, 48GB of memory and 36 HD slots, for $700, so they are out there. I added 4 SSDs to the internal controller: 2 for caching and 2 to boot from as a mirror.

System works great. I have 80TB in it, running 2 jails for Plex, PyTivo and Bacula (tape backup), and 2 CentOS 6 VMs under a VirtualBox jail.

It takes 288 watts all the time, but it let me retire my old Dell 2950, which used, gulp, 1777 watts all the time.
Wow, that sounds like an incredible setup for $700, and what an amazing power saving! Thanks for your feedback!
 

fn369

Explorer
Joined
Jun 17, 2016
Messages
60
You can also buy a server and add a JBOD expansion chassis to it. Also, since you want triple protection, I would suggest thinking about a second system to be a replication target. Preferably somewhere separate from the primary.
Thanks for your suggestions. I only need triple backup for photos and some work files, so external HDs and encrypted cloud (SpiderOak) should be OK, I hope.

I did wonder about a 2U server + expansion chassis, but it seemed that all-in-one solutions were favoured here?
 