Want an initial check-in on my hardware upgrade path...

Bageland2000

Dabbler
Joined
Aug 24, 2014
Messages
48
Hello!

I have an existing build running TrueNAS-13.0-U6.1.
  • ASRock E3C226D2I Mini ITX LGA1150
  • Intel Xeon E3-1275 V3 3.5 GHz Quad-Core
  • Crucial 16 GB (2 x 8 GB) DDR3-1600 CL11
  • Dell H310 6Gbps SAS HBA w/ LSI 9211-8i P20
    • Western Digital Red 3 TB 3.5" 5400RPM Internal (x5)
    • Western Digital Red 4 TB 3.5" 5400RPM Internal (x3)
My uses are:
  • Media Server (Plex, Radarr, Sonarr)
  • Backup server (resilio, cron to AWS S3)
  • Other "fun" stuff
    • Pi-hole
    • Unifi controller (future)
    • Other fun stuff.
I've noticed that my current setup struggles with Plex (high bitrate, transcoding), and I'm already short on RAM for my existing pool(s). I'd like to upgrade, and I'd like feedback. I really have an open budget; if it makes sense for my use case, I can find the money to make it happen.

Current upgrade outlook
  • SuperMicro X11 (I've read through this and I still have no idea which to get. Could I benefit from NVMe support for the iocage pool?)
  • Intel Xeon (can someone recommend one?)
  • 128GB DDR4 ECC RAM (or maybe 64? seems like that might be my limit and/or more than 64 would be overkill)
Should I at least explore a rack server? I'd love to have a network/server cage mounted in my basement in the future, but maybe this is overkill?
Should I explore other hardware or do I seem to be on the right track given my use case?

I want the ability to play with more "server" capabilities in the future (VMs, web hosting, etc.)
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
First, please clarify your Western Digital Red models. Are they SMR or CMR?

If they are SMR, of course performance will suck after a while. SMR drives are simply unsuitable for ZFS.
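If you want to check from the shell, here is a minimal sketch that flags likely SMR WD Reds by model suffix. It assumes smartmontools is installed and the disks show up as /dev/ada* or /dev/da* on TrueNAS CORE; the EFAX-is-SMR / EFRX-is-CMR mapping is the commonly reported one for 2-6 TB Reds, so treat it as a heuristic, not gospel.

```python
# Rough sketch: flag likely-SMR WD Reds by model suffix (heuristic only).
# Assumes TrueNAS CORE device names and smartmontools installed.
import glob
import re
import subprocess

SMR_SUFFIX = "EFAX"   # 2-6 TB WD Red ...EFAX are commonly reported as SMR; ...EFRX are CMR

devices = [d for d in sorted(glob.glob("/dev/ada*") + glob.glob("/dev/da*"))
           if not re.search(r"p\d+$", d)]        # skip partitions like ada0p2

for dev in devices:
    out = subprocess.run(["smartctl", "-i", dev],
                         capture_output=True, text=True).stdout
    match = re.search(r"Device Model:\s*(.+)", out)
    if not match:
        continue
    model = match.group(1).strip()
    kind = "likely SMR" if model.endswith(SMR_SUFFIX) else "likely CMR"
    print(f"{dev}: {model} -> {kind}")
```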


As for the other questions, I don't have much advice, except that if you are using 16GB of RAM now, then 64GB should be good enough. With the caveat that if you can do 64GB and still have memory slots free to expand later, that would be helpful.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,995
SuperMicro X11 (I've read through this and I still have no idea which to get. Could I benefit from NVMe support for the iocage pool?)
I can't access the link; however, given your use case I would not recommend NVMe. That said, if you want to use it, it can be fun to explore a bit.

128GB DDR4 ECC RAM (or maybe 64? seems like that might be my limit and/or more than 64 would be overkill)
Given that you are running 16GB of RAM now, and for your use case, 64GB is a LOT! But any unused RAM will still be used as ARC. If the price is right for you, get the 64GB.
I'm already short on RAM for my existing pool(s)
I don't know what you are referencing unless it's the 1GB of RAM per 1TB of storage rule. Ignore that rule; it does not really factor in here.
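For context, a quick back-of-the-envelope of what that old rule would suggest for your current drives, purely to show it isn't the deciding factor:

```python
# "1 GB RAM per 1 TB storage" rule of thumb applied to the current pool (raw TB).
drives_tb = [3] * 5 + [4] * 3        # 5x 3 TB + 3x 4 TB WD Reds
raw_tb = sum(drives_tb)              # 27 TB raw
print(f"raw capacity: {raw_tb} TB -> rule-of-thumb RAM: ~{raw_tb} GB")
# Either 64 GB or 128 GB clears that easily; what actually matters is ARC hit
# rate and whatever jails/VMs you pile on top, not raw pool size.
```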

Intel Xeon (can someone recommend one?)
I've noticed that my current setup struggles with Plex (high bitrate, transcoding)
You should look into a Plex forum to see what they are using for transcoding. Do not focus on high core count; everything will come with many cores. It is the single-core processing speed that will be your measuring stick. The Plex forums should help a lot.
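If it helps as a starting point, the number you will see quoted most often over there is roughly 2000 PassMark per 1080p software transcode. A rough sketch, with placeholder scores you would replace with real numbers from a benchmark site:

```python
# Ballpark transcode capacity from the commonly quoted ~2000 PassMark per
# 1080p software transcode guideline. Scores below are placeholders.
PASSMARK_PER_1080P_TRANSCODE = 2000

def estimated_1080p_streams(passmark_score: int) -> int:
    return passmark_score // PASSMARK_PER_1080P_TRANSCODE

for label, score in [("current CPU (look up real score)", 7000),
                     ("candidate CPU (look up real score)", 20000)]:
    print(f"{label}: ~{estimated_1080p_streams(score)} simultaneous 1080p transcodes")
```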

Should I at least explore a rack server? I'd love to have a network/server cage mounted in my basement in the future, but maybe this is overkill?
That depends on whether you are okay with screaming fans. Okay, not all rack-mounted servers are screamers, but most are. My question is: do you NEED a new case? If yes, then do you want a rack mount? You do not need it, and I personally do not recommend hot-swap drive bays, or really the mindset of swapping drives that way rather than powering off first.

I want the ability to play with more "server" capabilities in the future (VMs, web hosting, etc.)
64GB of RAM is good, but if you have grand plans, then be able to expand to 128GB of RAM, and get a fast CPU with at least 2 threads (1 core) per VM you plan to run. I pulled that out of my butt. Look at my build and you can see I can run multiple VMs effortlessly.
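To put that rule of thumb into numbers (the CPU examples are hypothetical, and the reserve for TrueNAS itself is my own guess):

```python
# VM headroom from the "2 threads (1 core) per VM" rule of thumb above.
def max_comfortable_vms(total_threads: int, threads_per_vm: int = 2,
                        reserved_for_nas: int = 4) -> int:
    """Reserve a few threads for TrueNAS itself, split the rest across VMs."""
    return max(0, (total_threads - reserved_for_nas) // threads_per_vm)

for label, threads in [("4-core/8-thread CPU", 8), ("8-core/16-thread CPU", 16)]:
    print(f"{label}: ~{max_comfortable_vms(threads)} VMs with headroom")
```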

Plan, plan, and plan some more before spending any money. The motherboard, CPU, and RAM are the main things to focus on.

Good luck.
 

Bageland2000

Dabbler
Joined
Aug 24, 2014
Messages
48
First, please clarify your Western Digital Red models. Are they SMR or CMR?

If they are SMR, of course performance will suck after a while. SMR drives are simply unsuitable for ZFS.


As for the other questions, I don't have much advice, except that if you are using 16GB of RAM now, then 64GB should be good enough. With the caveat that if you can do 64GB and still have memory slots free to expand later, that would be helpful.
EDIT: They're non-shingled...

They're CMR, not SMR. I'm not really concerned about the storage. I think I have that pretty well figured out.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
SuperMicro X11 (I've read through this and I still have no idea which to get. Could I benefit from NVMe support for the iocage pool?)
Putting iocage on NVMe cannot hurt, but is probably not a priority. Priorities should be:
1/ Get rid of these SMR drives, especially if the pool is raidz. These are old drives anyway; a handful of 16+ TB drives (whatever the best TB/$ ratio is these days…) would replace them with benefits: fewer drives, no more HBA, less power use.
2/ Bump RAM beyond the 16 GB your motherboard is capped at. Your target here defines the next step.

For 64 GB, which should be enough, a Core i3-9100 and an X11SCH (SCM, SCL) or the ASRock Rack equivalent (E3C246D4U) will do.
For more RAM and/or more than 4 cores/8 threads, a Xeon E-2100/2200 with the same motherboard. Or go RDIMM with the usual suspects (Xeon D-1500: X10SDV series; Atom C3000: A2SDi series). Or go RDIMM and wild with the "total overkill but hey they're cheap these days" options of second-hand X11 boards: X11SP and first-generation Xeon Scalable (I did say "total overkill"…) or X11SRM and Xeon W-2100.
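To put rough numbers on point 1/, here is the consolidation math, assuming a single raidz2 vdev of 16 TB drives for the new pool (your current layout wasn't stated, so only raw capacity is shown for it, and ZFS overhead is ignored):

```python
# Fewer, bigger drives: old 8-drive pool vs a hypothetical 4x 16 TB raidz2.
def raidz_usable_tb(drive_tb: float, count: int, parity: int) -> float:
    """Very rough usable capacity: (N - parity) * drive size, ignoring overhead."""
    return (count - parity) * drive_tb

old_raw_tb = 5 * 3 + 3 * 4                 # 27 TB raw across 8 drives + HBA
new_usable_tb = raidz_usable_tb(16, 4, 2)  # ~32 TB usable from 4 drives, no HBA
print(f"old: 8 drives, {old_raw_tb} TB raw")
print(f"new: 4 drives, ~{new_usable_tb:.0f} TB usable in raidz2")
```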

Intel Xeon (can someone recommend one?)
That entirely depends on your choice of motherboard above…
 

Bageland2000

Dabbler
Joined
Aug 24, 2014
Messages
48
Putting iocage on NVMe cannot hurt, but is probably not a priority. Priorities should be:
1/ Get rid of these SMR drives, especially if the pool is raidz. These are old drives anyway; a handful of 16+ TB drives (whatever the best TB/$ ratio is these days…) would replace them with benefits: fewer drives, no more HBA, less power use.
2/ Bump RAM beyond the 16 GB your motherboard is capped at. Your target here defines the next step.

For 64 GB, which should be enough, a Core i3-9100 and an X11SCH (SCM, SCL) or the ASRock Rack equivalent (E3C246D4U) will do.
For more RAM and/or more than 4 cores/8 threads, a Xeon E-2100/2200 with the same motherboard. Or go RDIMM with the usual suspects (Xeon D-1500: X10SDV series; Atom C3000: A2SDi series). Or go RDIMM and wild with the "total overkill but hey they're cheap these days" options of second-hand X11 boards: X11SP and first-generation Xeon Scalable (I did say "total overkill"…) or X11SRM and Xeon W-2100.


That entirely depends on your choice of motherboard above…
1. I meant CMR. They're non-shingled. Why no HBA? I thought that was generally the guidance around here. How much power is an HBA typically drawing?
2. The current mobo maxes out at 16GB (2x8GB). I would've dropped in denser DIMMs a long time ago if that weren't the case.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Using a SAS HBA is the recommendation if you have more drives than can attach to your motherboard OR if you run TrueNAS virtualised.
When running bare metal with a limited number of drives, it's easier to just use SATA ports from the chipset. Keep it simple…
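In other words, the decision reduces to something like this (the port and drive counts are just examples):

```python
# When a SAS HBA is actually warranted, per the recommendation above.
def need_sas_hba(drive_count: int, chipset_sata_ports: int, virtualised: bool) -> bool:
    # Virtualised TrueNAS wants a whole HBA passed through; otherwise an HBA
    # only earns its keep once you outgrow the chipset SATA ports.
    return virtualised or drive_count > chipset_sata_ports

print(need_sas_hba(drive_count=4, chipset_sata_ports=8, virtualised=False))  # False
print(need_sas_hba(drive_count=8, chipset_sata_ports=6, virtualised=False))  # True
```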
 

Bageland2000

Dabbler
Joined
Aug 24, 2014
Messages
48
Using a SAS HBA is the recommendation if you have more drives than can attach to your motherboard OR if you run TrueNAS virtualised.
When running bare metal with a limited number of drives, it's easier to just use SATA ports from the chipset. Keep it simple…
OK, that makes a lot of sense. The HBA allows passthrough of the drives to the TrueNAS OS if it's being virtualized...
 

Bageland2000

Dabbler
Joined
Aug 24, 2014
Messages
48
It just occurred to me that maybe the right approach here is to basically keep the NAS as-is and build a new, separate Proxmox server to run all the other "stuff." I can offload Plex and the other plugins to the new server...
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,995
It just occurred to me that maybe the right approach here is to basically keep the NAS as-is and build a new, separate Proxmox server to run all the other "stuff." I can offload Plex and the other plugins to the new server...
That is a possibility. I would recommend that you price out a new build for the separate server, then think about whether you want to power and cool (HVAC) two systems. I only mention this because if you are building a server to be a Type 1 hypervisor, that means you will drop a lot of RAM into it (64 to 128GB, possibly more if you really wanted to), have a very nice high-core-count CPU, drop it into a properly sized case to ensure things stay cool, and then figure out your HDD/SSD/NVMe storage medium.

If you just want to make a physically small server to run several VMs (I just came up with this idea as I'm typing), you could buy something like the Asus DeskMini or DeskMeet, install 64GB to 256GB of RAM depending on the model you get, install an AMD Ryzen 5 5600G (PassMark CPU score 21008), which will transcode 4K HDR according to the Plex forums, add up to two NVMe drives (Gen 3 or Gen 4, but Gen 3 will stay cooler), and add up to two much cheaper large-capacity SSDs to boot Proxmox and hold a ton of storage for the VMs. The AMD version of the DeskMeet 600 supports ECC RAM (hint hint). The DeskMeet also has one PCIe slot, nice if you need to add a different NIC. I'm not telling you that you should build this; I'm just showing you what can be built if you are building just a server and not for high storage capacity.
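As a sanity check on the RAM range, here is a completely made-up VM list just to show how the budget adds up:

```python
# Hypothetical RAM budget for a dedicated Proxmox box (sizes are illustrative).
vm_ram_gb = {
    "Plex": 8,
    "Pi-hole": 2,
    "UniFi controller": 4,
    "general-purpose Linux VM": 8,
    "web hosting VM": 8,
}
host_overhead_gb = 8                               # Proxmox plus some slack
total_gb = sum(vm_ram_gb.values()) + host_overhead_gb
print(f"VMs: {sum(vm_ram_gb.values())} GB, with host overhead: ~{total_gb} GB")
# ~38 GB here, so 64 GB leaves room to grow; 128 GB is for the grand plans.
```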

All I'm saying is to really plan it out to determine the PROs and CONs of each path.

EDIT: The Asus Desk series does not have IPMI, something most people really like, but so long as Proxmox boots without issue every time, it's not a big deal. I love my IPMI.
 