Does my build make sense?

ZeroBit

Dabbler
Joined
Nov 6, 2016
Messages
28
I'm putting together my second FreeNAS system, and would greatly appreciate a sanity check (constructive criticisms also welcome).

The goals for the system are as follows:
  1. Provide redundant storage for pictures, home videos, documents, etc. Losing files is not an option, as these are things like family pictures and videos, so RAIDZ2 is a must-have minimum.
  2. Provide an offsite backup of my first FreeNAS system, and vice-versa for anything added to this system.
  3. Run OpenVPN in a jail to allow for remote login, and then run rsync with my first FreeNAS system (see the sketch after this list).
  4. Run the Plex plug-in/jail, and serve up to 5 devices. In the future, I would like the option to put in a dedicated GPU for hardware transcoding, when Plex supports it while running in a jail.
  5. Be quiet enough to place in the living room, with the TV.
  6. Run the Minecraft Server plug-in/jail.
  7. Run a git server, along with Gogs, in a jail.
  8. Run a VM in a jail, for things like an automated build system, continuous integration, etc.
  9. Don't break the bank, but also don't be so cheap as to regret it later.
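For goal 3, the workflow I have in mind is roughly the following, once the VPN tunnel is up (a sketch only; the host address and dataset paths are placeholders):

  # On this system, pull anything new from the first system over the tunnel:
  rsync -avz root@10.8.0.1:/mnt/tank/shared/ /mnt/tank/shared/

And the same thing in the other direction on the first system.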

The hardware I'm considering is:
  1. AsRock Rack E3C246D4U Micro ATX LGA1151 Server Motherboard.
  2. Intel Xeon E-2146G 3.5GHz 6-Core Processor
  3. be quiet! Shadow Rock LP 51.4 CFM CPU Cooler
  4. NEMIX 32GB DDR4-2666 ECC UDIMM (Replacement for Samsung M391A4G43MB1-CTD)
  5. be quiet! Straight Power 11 650W 80+ Gold Certified Fully Modular ATX Power Supply
  6. Fractal Design Node 804 MicroATX Mid Tower Case
  7. Fractal Silent Series R2 40.6 CFM 120mm PWM Fan (x3) [provided with the case]
  8. be quiet! Silent Wings 3 50.5 CFM 120mm PWM Fan (x2)
  9. be quiet! Silent Wings 3 59.5 CFM 140mm PWM Fan (x1)
  10. Seagate Ironwolf 510 240 GB M.2 2280 NVMe SSD [boot drive]
  11. Seagate Ironwolf 3TB CMR (x6) [configured as a single RAIDZ2 vdev]
An area I'm less sure about is the RAIDZ2 drive configuration. I had understood that six drives were needed to get the redundancy, i.e. the ability to lose two drives while still being able to recover the data. However, from something I read this evening, it seemed like the same level of redundancy and recoverability was achievable with four drives. So I'm wondering: would I be better off going with four larger drives?
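To make the comparison concrete, these are the two layouts I'm weighing (a sketch; device names are placeholders, and both can lose any two drives in the vdev):

  zpool create tank raidz2 da0 da1 da2 da3 da4 da5   # six 3TB drives -> ~12TB usable
  zpool create tank raidz2 da0 da1 da2 da3           # four 6TB drives -> ~12TB usable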

I've noticed the TrueNAS Mini systems use the Intel Atom C3000 series processors and compatible motherboards. Comparing the Intel Atom C3000 series PassMark scores to the Intel Xeon E-2100 series, I'm wondering if maybe the Xeon is overkill, or if iXsystems is not expecting TrueNAS Mini users to run as many jails, VMs, etc. as I'm planning?

I'm tempted to go with two 16GB UDIMM modules, for 32GB, to actually get the dual channel. But my thought is that I would like to eventually expand the memory, possibly to 128GB, and I don't like buying modules I would need to replace. Is the performance of dual channel now really worth the interim spend?

I know a lot of people prefer the SuperMicro motherboards. My experience with the X11SSM-F-O SuperMicro motherboard, in my first FreeNAS system, has made me want to consider other manufacturers though. If anyone knows of a good alternative, please let me know. Some of the reasons I'm looking are:
  • the BMC IPMI interface requires the use of an old/insecure version of Java (SuperMicro rarely provides updates), which requires the IE Tab plug-in to run in the Chrome browser.
  • the fan power management control is not granular enough, and does not work well with the Noctua PWM fans (the Noctua PWM fans tended to ramp up and down a lot, until I set them to the Standard setting in the BMC IPMI interface and used a fan control script I found in the forums to manage them based on the temperatures)
  • the BIOS interface is very terse/old; it seems like a '90s American Megatrends BIOS rather than a current, modern BIOS/UEFI interface.
One of the things I did like about the SuperMicro board was the two SATA ports with power for SATA DOMs. I used dual 32GB SuperMicro SATA DOMs in a mirrored configuration for the boot drive, though I've had one of them go flaky on me. The AsRock Rack motherboard only supports 1 powered SATA DOM, and that shares the PCIe lane with the M.2. This time I thought I might try the M.2 for the boot drive, even though it is not mirrored (riskier, yes, but the data is what I'm worried about; I figure I can rebuild the boot drive and import the ZFS pool if need be). If you have an alternative suggestion, or think there's a risk I'm not considering with that approach, please share.
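For reference, the recovery path I have in mind for the un-mirrored boot drive is roughly this (pool name is a placeholder, and a saved config backup would shortcut most of it):

  # After reinstalling FreeNAS on a replacement boot device:
  zpool import tank
  # ...then restore the saved configuration backup via the GUI.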
 

demon

Contributor
Joined
Dec 6, 2014
Messages
117
Certainly if you're going to run Plex and be doing much in the way of heavy transcoding, you're likely to find the Atoms underpowered. So that's something to consider.

As for the config you have here, I would only question the memory because I don't see that part on the QVL for that board. You'll do best to stick to the QVL if at all possible. Also, the 650W power supply might be a bit overkill for the config you're positing, but check your power budget math to see if or how far that'll put you outside the main efficiency range for it.

I always opt for dual-channel memory configurations myself whenever possible. I don't know if anyone has done a real head-to-head to see how much of a real world difference it makes.

Also, re: SuperMicro boards, the one I have in my newest NAS (an X11SCH-LN4F, see signature) AFAIK works with Java 11, and also includes an HTML5-only console app, though it does not support remote media.

If you are going to go with 4 larger drives, maybe do a RAID10-ish configuration (2 mirrored vdevs in your pool)? I went with 16TB drives though, so I'm not the best one to ask...
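I.e., something like this (device names are placeholders):

  # Two 2-way mirrors striped into one pool, RAID10-style:
  zpool create tank mirror da0 da1 mirror da2 da3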
 

dak180

Patron
Joined
Nov 22, 2017
Messages
310
I know a lot of people prefer the SuperMicro motherboards. My experience with the X11SSM-F-O SuperMicro motherboard, in my first FreeNAS system, has made me want to consider other manufacturers though. If anyone knows of a good alternative, please let me know. Some of the reasons I'm looking are:
  • the BMC IPMI interface requires the use of an old/insecure version of Java (SuperMicro rarely provides updates), which requires the IE Tab plug-in to run in the Chrome browser.
  • the fan power management control is not granular enough, and does not work well with the Noctua PWM fans (the Noctua PWM fans tended to ramp up and down a lot, until I set them to the Standard setting in the BMC IPMI interface and used a fan control script I found in the forums to manage them based on the temperatures)
  • the BIOS interface is very terse/old; it seems like a '90s American Megatrends BIOS rather than a current, modern BIOS/UEFI interface.
You should check out my sig, both for the system and for scripts, including very fine-grained fan control based on HDD temps.

Run the Plex plug-in/jail, and serve up to 5 devices. In the future, I would like the option to put in a dedicated GPU for hardware transcoding, when Plex supports it while running in a jail.
You can check with AsRock Rack to be sure (and what the right BIOS settings are), but you should be able to use the iGPU for hardware transcoding.

I'm tempted to go with two 16GB UDIMM modules, for 32GB, to actually get the dual channel. But my thought is that I would like to eventually expand the memory, possibly to 128GB, and I don't like buying modules I would need to replace. Is the performance of dual channel now really worth the interim spend?
While I cannot speak to the rest of the system, ZFS is going to care more about the amount of RAM than the speed.

The AsRock Rack motherboard only supports 1 powered SATA DOM, and that shares the PCIe lane with the M.2.
This is not exactly true (check the block diagram for details); that M.2 slot is dual-mode, PCIe or SATA. If you put a SATA M.2 drive in it, it will deactivate the associated SATA port, but a PCIe NVMe drive will not.

An area I'm less sure about is the RAIDZ2 drive configuration. I had understood that six drives were needed to get the redundancy, i.e. the ability to lose two drives while still being able to recover the data. However, from something I read this evening, it seemed like the same level of redundancy and recoverability was achievable with four drives. So I'm wondering: would I be better off going with four larger drives?
Can you do a 4-drive Z2 array? Sure, but why would you not do mirrors then, for easier expandability later? The question you really want to ask yourself is: when you want to expand storage in this machine in the future, how do you want to do that? Add more disks, or replace existing ones with larger ones?
 

demon

Contributor
Joined
Dec 6, 2014
Messages
117
You can check with AsRock Rack to be sure (and what the right BIOS settings are), but you should be able to use the iGPU for hardware transcoding.


As it has a C246 PCH, it should work. The SuperMicro board I have has the same PCH and works fine, while the C242-based equivalents don't. But yeah, definitely confirm with ASRock Rack that they have the support code in the BIOS ROM.
 

dak180

Patron
Joined
Nov 22, 2017
Messages
310
But yeah, definitely confirm with ASRock Rack that they have the support code in the BIOS ROM.
One of the reasons I suspect it will work is that when I asked about the E3C246D4U2-2L2T (a cousin of the E3C246D4U), I was told that it would work there.
 

ZeroBit

Dabbler
Joined
Nov 6, 2016
Messages
28
First of all, thank you for your in-depth, and well thought out responses.

As for the config you have here, I would only question the memory because I don't see that part on the QVL for that board. You'll do best to stick to the QVL if at all possible.
Yeah, this is a great reminder. I had trouble locating anyone actually selling the memory listed on the QVL:
  DDR4-2666 ECC | 32GB | Samsung | M391A4G43MB1-CTD (Q901) | chips: SEC K4AAG085WM-BCTD
So I did a little research on NEMIX, since they claim their memory is a compatible replacement. They seem to serve some of the larger government and aerospace clients; however, they seem to be relatively new to the prosumer market. It's definitely a bit of a gamble, and the cost may be having to buy more RAM that actually works.

Also, the 650W power supply might be a bit overkill for the config you're positing, but check your power budget math to see if or how far that'll put you outside the main efficiency range for it.
Yeah, it's definitely overkill right now. The thought here was that if I end up eventually getting a beefy GPU for hardware transcoding, I don't know what its power requirements will be, so I was trying to give myself room. The 2080 Ti in my last desktop build lists a 250W TDP and recommends a 600W power supply. The build as configured above was around 150W TDP before the drives, and approximately 240W TDP with the drives, iirc. That should leave plenty of room for incidentals, like fans, give me some breathing room if I choose a different GPU or change to different hard drives, and also keep me away from the limit during start-up. My hope is that a second-hand Nvidia RTX 2000-series card may be cheap enough to pick up next year.
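Rough load math with the numbers above (assumed TDP figures, not measured draw):

  ~240W build / 650W PSU = ~37% load now
  (~240W + ~250W GPU) / 650W PSU = ~75% load later

Both sit well inside the load range over which an 80+ Gold unit is rated, so the efficiency curve shouldn't suffer much either way.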

I always opt for dual-channel memory configurations myself whenever possible. I don't know if anyone has done a real head-to-head to see how much of a real world difference it makes.
Yeah, I tend to err on the side of dual-channel as well. From what I know of the gaming builds I've programmed on for work, dual-channel can make a significant difference, depending on how much memory needs to be accessed all at once. My uncertainty here is whether the OS will optimize for memory read/write access as well, and whether it would make a difference in the ZFS checks, as well as in the transcoding by Plex. It really depends on the memory access patterns, which is something I'm not familiar enough with to answer.

Also, re: SuperMicro boards, the one I have in my newest NAS (an X11SCH-LN4F, see signature) AFAIK works with Java 11, and also includes an HTML5-only console app, though it does not support remote media.
Thank you, I'll definitely take another look with that new knowledge. I'm not too concerned about remote media support, since I'll have pretty much 24/7 physical access to one box, and at least monthly access to the off-site box (100 miles between them, so I can always hop in the car and drive if absolutely necessary).

If you are going to go with 4 larger drives, maybe do a RAID10-ish configuration (2 mirrored vdevs in your pool)? I went with 16TB drives though, so I'm not the best one to ask...
I thought with the mirrored vdevs it would only guarantee recovery from one drive failure, because if the second drive in an already-degraded mirror failed, the data would be lost. Whereas with the RAIDZ2 configuration, you could lose any two drives and still be able to recover. Is my understanding wrong?
 

ZeroBit

Dabbler
Joined
Nov 6, 2016
Messages
28
Thank you for your review and insights.

You should check out my sig, both for the system and for scripts, including very fine-grained fan control based on HDD temps.
Will do. I took just a quick peek at your scripts, and they look a lot more in-depth than the previous ones I found and borrowed from the forums.

You can check with AsRock Rack to be sure (and what the right BIOS settings are), but you should be able to use the iGPU for hardware transcoding.
To be honest, I'm not quite sure exactly what to ask AsRock Rack, other than will your board support hardware transcoding with Plex, and what do I need to enable in the BIOS (if anything) to support it. Having been on the receiving end of similar kinds of questions at work, I know it can be difficult to answer questions like that without having actually run the specific software/configuration.

While I cannot speak to the rest of the system, ZFS is going to care more about the amount of RAM than the speed.
Yeah, my understanding from the documentation, and from having read CyberJock's posts years ago, is that each terabyte of hard-drive space requires a gigabyte of RAM, no exceptions. With six 3TB drives, that comes to 18GB, leaving 14GB for everything else (pretty much on the cusp of needing more memory). Each Minecraft server gets about 4GB, and we currently swap between two servers. If I did four 6TB drives, it would be 24GB, which only leaves 8GB for everything else. At that point, I would probably have to think about going to 64GB of RAM now.

This is not exactly true (check the block diagram for details); that M.2 slot is dual-mode, PCIe or SATA. If you put a SATA M.2 drive in it, it will deactivate the associated SATA port, but a PCIe NVMe drive will not.
Yep, I misspoke here when I said, "shares". My AsRock Z390 Phantom-9 motherboard gaming build does the same thing.

Can you do a 4-drive Z2 array? Sure, but why would you not do mirrors then, for easier expandability later? The question you really want to ask yourself is: when you want to expand storage in this machine in the future, how do you want to do that? Add more disks, or replace existing ones with larger ones?
Definitely great questions to think about. My number one priority is recoverability. The choice of RAIDZ2 is entirely based on the ability to lose any two drives and still recover. With my current system (six WD 3TB Red CMR drives in RAIDZ2), I have yet to reach 10TB of usable space in four years. My second priority, and really the impetus for investing in this second system, is having an off-site backup for my first system (other than portable hard drives; yes, shame on me). The second system's location also provides the benefits of off-site document backup (rsync'd to the first system), and Plex and such to some family members, for the cost of allowing me to house and run the server at their place. I don't expect them (or me, by rsync-ing the first server's contents to the second system) to fill up the server anytime soon, so expanding the vdevs is less of a concern. I did have the joy, and anxious experience, of replacing two drives earlier this year, when two of the WD drives started reporting SMART errors. About the same time, I learned about the CMR/SMR debacle (fun). Fortunately, all my drives are CMR, so the resilvering process went off without a hitch. But this put a fine point, for me, on the need to get a proper off-site backup, with good recoverability. Additionally, the need to store and back up pictures, videos and documents at my family's place has recently hit a critical point, due to some recent events. :)
 

dak180

Patron
Joined
Nov 22, 2017
Messages
310
To be honest, I'm not quite sure exactly what to ask AsRock Rack, other than will your board support hardware transcoding with Plex, and what do I need to enable in the BIOS (if anything) to support it. Having been on the receiving end of similar kinds of questions at work, I know it can be difficult to answer questions like that without having actually run the specific software/configuration.
What you want to know is whether the iGPU is still available for Quick Sync transcoding while the BMC is active. If the OS can see it, then you can make it available to a jail.
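If you want to check from the OS side once the board is in hand, something like this (a sketch; the devfs ruleset name and number are examples, not FreeNAS defaults):

  # Does FreeBSD see the iGPU at all?
  pciconf -lv | grep -B4 -i display
  # With the drm-kmod package installed, load the Intel driver and look for a render node:
  kldload i915kms
  ls /dev/dri
  # To expose /dev/dri inside a jail, a devfs rule along these lines in /etc/devfs.rules:
  #   [plexgpu=10]
  #   add include $devfsrules_hide_all
  #   add include $devfsrules_unhide_basic
  #   add include $devfsrules_unhide_login
  #   add path 'dri*' unhide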
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
AsRock Rack E3C246D4U Micro ATX LGA1151 Server Motherboard.
the BMC IPMI interface requires the use of an old/insecure version of Java (SuperMicro rarely provides updates), which requires the IE Tab plug-in to run in the Chrome browser.
I am using an X10SLH-F with socket H3, and it has HTML5 IPMI support. That was one of the most important features I was looking for, since I also have an X9SCL-F, which never got HTML5 support, and the Java-based remote console sucks!

I don't know if BIOS & BMC updates that support HTML5-based iKVM have been released for the X11 gen yet. Some boards may have them, some may not.
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
Since your goal here is more about running jails than about storage, as you already have a separate system for that, have you considered using a hypervisor like ESXi or Proxmox, which might offer better virtualization/containerization?

At this moment, ESXi and Proxmox are better suited to virtualization as compared to the TrueNAS/Bhyve combo. But if you only intend on running pure FreeBSD jails, then obviously Bhyve doesn't come into play, and it would depend more on whether you are comfortable with FreeBSD vs Linux.
 

ZeroBit

Dabbler
Joined
Nov 6, 2016
Messages
28
I am using an X10SLH-F with socket H3, and it has HTML5 IPMI support. That was one of the most important features I was looking for, since I also have an X9SCL-F, which never got HTML5 support, and the Java-based remote console sucks!

I don't know if BIOS & BMC updates that support HTML5-based iKVM have been released for the X11 gen yet. Some boards may have them, some may not.
After reading your comment that the X10SLH-F board has HTML5 IPMI support, I was wondering why the X10SLH-F would get it, but the X11SSM-F board I'm running wouldn't. So, I went looking at SuperMicro's site and found the BMC/IPMI firmware updates. Reading through the BMC/IPMI firmware release notes, it appears they added HTML5 support somewhere around June of 2018.

2. Fixed problem of the help page of iKVM/HTML5 console not supporting multi-language content and only supporting English.

The release notes only go back to June 2018; I bought the board in 2016 and remember looking for updates a while after building it, so I must have given up on updates sometime in between. Looks like I'll be updating my system shortly and seeing what new updates they've put in. Maybe I'll re-examine SuperMicro for the new board afterwards.

Thank you for your prompting. :)
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
After reading your comment that the X10SLH-F board has HTML5 IPMI support, I was wondering why the X10SLH-F would get it, but the X11SSM-F board I'm running wouldn't.
I think it was a given that X10 and above generations would get HTML5 support for IPMI. Initially they had declared that the X9 gen would get it too, but later they backed out. Any new board that comes out of Supermicro will have HTML5-based IPMI from the get-go, I think.
 

ZeroBit

Dabbler
Joined
Nov 6, 2016
Messages
28
Since your goal here is more about running jails than about storage, as you already have a separate system for that, have you considered using a hypervisor like ESXi or Proxmox, which might offer better virtualization/containerization?

At this moment, ESXi and Proxmox are better suited to virtualization as compared to the TrueNAS/Bhyve combo. But if you only intend on running pure FreeBSD jails, then obviously Bhyve doesn't come into play, and it would depend more on whether you are comfortable with FreeBSD vs Linux.
I have not considered either option yet. While I'm not a day-to-day user of FreeBSD or Linux, I'm not afraid of digging into either, and have done so on a number of occasions, ranging from working on my current NAS to work-related *nix items. The day-to-day work is usually done on a Windows laptop, and sometimes crosses over onto Macs with their flavor of *nix.

My number one goal is recoverability of the data, and a close second is the off-site backup of the data. This is for things like pictures, home videos, etc. that cannot be replaced if I don't have those two things. Some of the jails I run, and plan to run on the new box, are TrueNAS plugins (like Plex, Minecraft, Gogs), so I believe those are running in Bhyve. Before Bhyve support was fully implemented in FreeNAS, I also manually set up VMs using iohyve, and experimented with running Ubuntu Server and Desktop, and Windows 7 and 10, in those. The purpose was to see what the performance was of something like Handbrake in a VM; it didn't perform as well as I had hoped, so I shut them down. I don't usually spin up/down a bunch of VMs/containers; it's more about serving the purpose for which I need them. Once I have them set up and running, I tend to leave them, and don't blow them away, duplicate them, or rebuild them all that often. So, beyond isolating the programs in VMs, I'm unsure what other benefit the extra effort might bear out.
 

ZeroBit

Dabbler
Joined
Nov 6, 2016
Messages
28
I think it was a given that X10 and above generations would get HTML5 support for IPMI. Initially they had declared that the X9 gen would get it too, but later they backed out. Any new board that comes out of Supermicro will have HTML5-based IPMI from the get-go, I think.
Nice! Tbh, I don't see much of SuperMicro; supporting my personal FreeNAS/TrueNAS is about my only interaction with them. My other issue with the SuperMicro board is the fan control. Since this runs in my living room, I can't have the typical server fans that blast at 60+ dB. Instead, I went with Noctua PWM fans on my current build, and there seems to be some issue with the low-end cutoff, as well as what I can best describe as power revving/cycling. This was mitigated with a script I found in the FreeNAS/TrueNAS forums, and it works well. I'm hoping the next board won't need the scripts and can manage the fans itself. You'll notice I'm giving the be quiet! fans a try this time around. I thought I might try them with the old system too, when I get them. :)
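For anyone curious, the gist of the script approach is something like this (a sketch only; the raw IPMI bytes are the commonly posted Supermicro zone/duty values, so verify them for your board before using):

  #!/bin/sh
  # Read the CPU temperature on FreeBSD, dropping the trailing "C":
  TEMP=$(sysctl -n dev.cpu.0.temperature | tr -d C)
  # Pick a PWM duty cycle: 100% above 60C, otherwise 50%:
  if [ "${TEMP%.*}" -gt 60 ]; then DUTY=0x64; else DUTY=0x32; fi
  # Set fan zone 0 via the Supermicro BMC:
  ipmitool raw 0x30 0x70 0x66 0x01 0x00 "$DUTY"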
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
My number one goal is recoverability of the data, and a close second is the off-site backup of the data. This is for things like pictures, home videos, etc. that cannot be replaced if I don't have those two things.
Then having a 2nd FreeNAS/TrueNAS system would probably be the go-to route as it would be just a matter of setting up ZFS replication.
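At its core it is just snapshots plus zfs send/receive, e.g. (dataset and host names here are placeholders; note that a given replication stream is one-directional, source to target):

  zfs snapshot tank/shared@2020-11-01
  zfs send tank/shared@2020-11-01 | ssh backuphost zfs recv -F backup/shared
  # Later runs send only the incremental delta between snapshots:
  zfs send -i @2020-11-01 tank/shared@2020-12-01 | ssh backuphost zfs recv -F backup/shared

FreeNAS/TrueNAS can also schedule all of this for you under Replication Tasks.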
Some of the jails I run, and plan to run on the new box, are TrueNAS plugins (like Plex, Minecraft, Gogs),
Try setting up manual jails and then installing whatever you want in them. It's easier to upgrade etc.
so I believe those are running in Bhyve.
I don't think so. If you are talking about the plugins in FreeNAS, they simply create a separate jail and don't use Bhyve. Did you mean iocage?
So, beyond isolating the programs in VMs, I'm unsure what other benefit the extra effort might bear out.
Isolation of programs can be had even with FreeBSD jails. The jails are completely isolated from the host. What Proxmox and ESXi provide are better virtualization/containerization options.
 

ZeroBit

Dabbler
Joined
Nov 6, 2016
Messages
28
Then having a 2nd FreeNAS/TrueNAS system would probably be the go-to route as it would be just a matter of setting up ZFS replication.
I will now be looking up ZFS replication. I have some limited experience with rsync and was planning on going that route; however, if this is easier and requires less management, then yay! The key question I have, after a quick search and read, is: if I have a directory (call it Shared/Pictures), and I add files to it on the first server while my family adds files to it on the second server, will ZFS replication support syncing the changes from both servers to each other?

Try setting up manual jails and then installing whatever you want in them. It's easier to upgrade etc.
I've played a bit with this in the past, and for plug-ins that don't get updated regularly, this is definitely the way to go. I almost switched my Plex plug-in to a manual jail after a snafu with Plex itself, but opted for the expediency of reinstalling the plug-in after days of frustration from the snafu.

I don't think so. If you are talking about the plugins in FreeNAS, they simply create a separate jail and don't use Bhyve. Did you mean iocage?
Yeah, iocage. It's been a long while since I messed with this, and apparently blended the two names.

Isolation of programs can be had even with FreeBSD jails. The jails are completely isolated from the host. What Proxmox and ESXi provide are better virtualization/containerization options.
I apparently have a lot to learn here, and need to go do some reading to change my understanding of what virtualization/containerization can do. I've briefly used VirtualBox, VMware, and Docker in my past jobs. We used them to spin up server configurations, with pre-configured OS images, for running game servers we were writing. We would set the number of cores, RAM, etc., designate the OS image to use, and finally run the game server applications once the VM was up and running. All of this is more of a one-time setup for what I want to do with an automated build and continuous-integration system at home. What are the better options you're referring to?
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
Yeah, definitely read up on virtualization and see if it is even useful for you. I just threw it out there since you already had a FreeNAS system and your use case involves creating many jails etc.

Again, you don't have to use virtualization, but it might be useful. I have a FreeNAS server which initially started as my Plex server, plus Transmission and CouchPotato.
Then I built a Proxmox server and have since created:
  1. A VM for a desktop -- PCoIP (a desktop to use from my Chromebook whenever I don't feel like sitting in my office)
  2. Containers for:
    • transmission
    • a self-hosted password manager, syncing to my phone and other desktops/browser add-ons
    • a guacamole server
    • a self-hosted home page
    • a self-hosted Nextcloud instance
    • self-hosted OnlyOffice, connected to the above Nextcloud
    • a Syncthing server, to sync my phone and back up my photos etc. to the NAS
    • Caddy 2 -- as a reverse proxy -- so that I don't have to remember any of the links and port numbers for the various services (see the sketch after this list)
    • security camera software, which connects to the cameras around my house
    • Omada Controller software, which handles all the access points in my house -- similar to the UniFi Controller software
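The Caddy config for each of those is tiny; a minimal sketch of one entry (hostname and upstream address are hypothetical):

  nextcloud.home.lan {
      reverse_proxy 192.168.1.20:8080
  }

Each service gets its own hostname block like that, so everything is reachable by name instead of by port.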
 