CPU choice; hyperthreading or not? (first build)

Steiner-SE

Dabbler
Joined
Jul 13, 2020
Messages
37
To head off some of the usual advice (ECC etc.): my choice of CPU for my first build is limited by the motherboard, so ECC is not an option.
My choices are basically between Intel Core i5 or i7 of the 6th or 7th generation (Z270 chipset/Socket 1151).
I will not do any virtualization, but I will do iSCSI (a datastore for an ESXi server).
So my limit is 4 cores, and I assume I should get the fastest I can, but other than that?
Will FreeNAS benefit enough from hyperthreading (adding 4 threads to the mix) to justify the premium cost of an i7, or would it be better to just grab the fastest 7th-gen i5 (the 7600K)?
 

warllo

Contributor
Joined
Nov 22, 2012
Messages
117
Would you be willing to consider used enterprise gear off eBay? You can get huge value by going this route. Perhaps enough value to get ECC memory and server-grade hardware. Not sure what Sweden has for a used server market.
 

Steiner-SE

Dabbler
Joined
Jul 13, 2020
Messages
37
Would you be willing to consider used enterprise gear off eBay? You can get huge value by going this route. Perhaps enough value to get ECC memory and server-grade hardware. Not sure what Sweden has for a used server market.
Oh, I'm no stranger to that route; that's how I got my ESXi host (a Dell PowerEdge R710). Unfortunately, I'm kind of locked in right now, as I already have most of the hardware (I paid for a rather expensive motherboard, well, two, as I bought the wrong one at first).
This is getting discouraging by now (I also asked on Facebook), as so many people tell me my build is pointless and that I might as well not run FreeNAS at all, that I WILL get corruption, crashed pools, etc. I've been running a homemade NAS on low-end consumer hardware for years and never had any issues despite incorrect shutdowns and the like, but that was with Xpenology (Btrfs). In the end, though, that is emulation of a more limited set of hardware, and I thought something else would be better for upgrades and stability, so I got quite interested in FreeNAS.
As for the used market (as well as new stuff), everything here in Sweden is VERY expensive, and tariffs, taxes, and penalties on imports make buying outside the EU prohibitive. I'm on a very low fixed income, so most of the gear I get to use comes through donations from friends and kind people; basically I make do and make the best of whatever I can get my hands on.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
If you aren’t going to run VMs, and only need to share via iSCSI, you won’t need a hugely powerful CPU. Even a low-power i3 will work. What you shouldn’t scrimp on is RAM.
 

Steiner-SE

Dabbler
Joined
Jul 13, 2020
Messages
37
If you aren’t going to run VMs, and only need to share via iSCSI, you won’t need a hugely powerful CPU. Even a low-power i3 will work. What you shouldn’t scrimp on is RAM.
Thank you. I was planning for 16 GB of RAM, but might be able to manage 32 GB; it would still be non-ECC due to the motherboard/chipset. My main concern with the CPU choice was whether I would benefit from hyperthreading in any way for my use case (only storage and shares via iSCSI and SMB, no VMs, etc.). The money saved by not getting the i7 would definitely go into more RAM instead.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
For just storage, you won’t need a lot of CPU cycles, and definitely not hyper threading. You will want lots and lots of RAM for iSCSI. Even 32GB is low for that use case. On mobile right now, but go find the resource(s) on here that talk about block storage and how to build for it.
 

Steiner-SE

Dabbler
Joined
Jul 13, 2020
Messages
37
For just storage, you won’t need a lot of CPU cycles, and definitely not hyper threading. You will want lots and lots of RAM for iSCSI. Even 32GB is low for that use case. On mobile right now, but go find the resource(s) on here that talk about block storage and how to build for it.

OK, I opted for a cheaper CPU than I intended in order to secure more than 16 GB of RAM. I will now get 32 GB, and might be able to stretch that to 48 GB (or maybe wait a bit and then do 64 GB?).
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
You said you'll share iSCSI volumes to ESX. I assume these will be working drives for the guest VMs? What mix of OSs? What iSCSI performance do you need?

RAM is beneficial to keep more blocks in ARC. However, you also need to consider if your budget would be better allocated to faster networking between FreeNAS and the ESX server, and in particular, MPIO, which would require multiple fast links between FreeNAS and ESX.
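On the ESXi side, MPIO over two links is usually set up with iSCSI port binding plus the round-robin path policy, with each FreeNAS interface on its own subnet. A rough sketch of the ESXi commands involved (the adapter, VMkernel, and device names below are placeholders, not from this thread; check yours with `esxcli iscsi adapter list` and `esxcli storage nmp device list`):

```shell
# Bind one VMkernel port per physical 10 GbE link to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Use round-robin across both paths to the FreeNAS extent
# (replace naa.XXXX with the actual device identifier)
esxcli storage nmp device set --device=naa.XXXX --psp=VMW_PSP_RR
```

This is only the ESXi half; the FreeNAS portal also needs to listen on both interfaces.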
 

Steiner-SE

Dabbler
Joined
Jul 13, 2020
Messages
37
You said you'll share iSCSI volumes to ESX. I assume these will be working drives for the guest VMs? What mix of OSs? What iSCSI performance do you need?

RAM is beneficial to keep more blocks in ARC. However, you also need to consider if your budget would be better allocated to faster networking between FreeNAS and the ESX server, and in particular, MPIO, which would require multiple fast links between FreeNAS and ESX.

This is strictly for personal use in my home; I just like to tinker, and most of this is wildly over-specced for my needs. On the rack server with ESXi I run a handful of Server 2016 machines that work as web, mail, Plex, and FTP servers etc., as well as a domain controller and a Linux HAProxy machine. Yes, the datastore will hold everything for the ESXi host, so the VMs will be stored on it and boot from it. (Currently they run off a RAID mirror with one dead disk on the internal RAID controller, SATA disks running at 3 Gb/s... so I seriously need to move ASAP.)
As for iSCSI performance, it's more "I'll take whatever I can get" than actual need.
The core job for the NAS will be holding the 18 or so TB of media files for my Plex server. All those files are now stored on an Xpenology NAS using older consumer hardware; it's been very stable and comes back up every time after power losses without complaining (I just got a UPS that will be used with TrueNAS if I can get that to work).
I have considered the networking, and when budget allows I'm thinking of getting two 10 GbE cards and making a direct link between the NAS and the rack server; anything else on my LAN will do fine with gigabit access. My choice of motherboard was basically to secure an upgrade path for an additional HBA and a 10 GbE NIC in the future (I will have four x8 PCIe slots and one x4 available; currently only one slot will be used for my HBA).

Since my hardware has changed since I started planning, this is what will be used now:
Asus Z270-WS motherboard (dual Intel gigabit NICs, four x8 and one x4 PCIe 3.0 slots via PLX chip)
Intel i5-7600 CPU
32 GB 2400 MHz RAM (2x 16 GB)
550 W 80 Plus Gold PSU
2x Crucial 120 GB SSD for the boot pool
PowerWalker VI1000CWS UPS
3x 12 TB Seagate IronWolf NAS HDD (CMR)
4x 4 TB Seagate IronWolf NAS HDD (CMR)
2x 4 TB older Seagate SATA HDDs (unknown)
 
Last edited:

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
OK, for home lab use, 32 GB RAM is a good starting point. You should look into fast SLOG SSDs with PLP as well, as you'll see a mixed load of reads/writes. For your 10G NICs, I recommend 2-port models on both sides, as this will allow you to implement MPIO for faster throughput.
 

Steiner-SE

Dabbler
Joined
Jul 13, 2020
Messages
37
OK, for home lab use, 32 GB RAM is a good starting point. You should look into fast SLOG SSDs with PLP as well, as you'll see a mixed load of reads/writes. For your 10G NICs, I recommend 2-port models on both sides, as this will allow you to implement MPIO for faster throughput.
Thank you kindly. As I will have PCIe 3.0 x8 slots available, a dual-port 10 GbE card will work just fine; I've been advised to use SFP+ variants for the dedicated link. I have a leftover 120 GB SSD, but I'm not sure it will be good enough as a SLOG (is that the same as the ZIL?). I just need a card that isn't too expensive and is supported by both ESXi and TrueNAS.
(I updated my previous post with the current hardware list, if that affects anything.)
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Yes, SLOG and ZIL are the same thing. The Intel DC S3700s have a good reputation here for that purpose. You can add a single one initially, and another one later to establish the mirror. And these don't have to be large. 120GB each is plenty for SLOG.
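From the command line, the add-one-now, mirror-later approach described above looks roughly like this (the pool and device names are made-up examples; on FreeNAS the same can be done through the UI, which uses gptid labels rather than raw device names):

```shell
# Add a single SSD as the SLOG for pool "tank"
zpool add tank log da1

# Later, attach a second SSD to turn the lone log device into a mirror
zpool attach tank da1 da2
```

Note that `zpool attach` mirrors an existing device, while `zpool add` creates a new vdev, so mixing them up here would add a second independent log instead of a mirror.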
 
Last edited:

Steiner-SE

Dabbler
Joined
Jul 13, 2020
Messages
37
Yes, SLOG and ZIL are the same thing. The Intel DC S3700s have a good reputation here for that purpose. You can add a single one initially, and another one later to establish the mirror. And these don't have to be large. 120GB each is plenty for SLOG.
Now the thread is veering off target a bit, but I have to ask.
I thought the ZIL disk was one per volume, but you could/should mirror two SSDs per pool? Or is that per machine? Do you set the SSD mirror up as a separate vdev and then add it to the pool as a ZIL?
You can tell I'm new to this ;) With Xpenology, once you figured out the emulation there really wasn't much to do or consider.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
ZIL is actually a VDEV you add to a pool. Like any VDEV, this can be either a single disk, or multiple disks. The system will only allow creation of mirror VDEVs for ZIL.
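If both SSDs are on hand from the start, the mirrored log vdev can also be created in a single step; `zpool status` then shows it under its own `logs` heading, separate from the data vdevs (pool and disk names below are illustrative):

```shell
# Create a mirrored log vdev in one step
zpool add tank log mirror da1 da2

# Verify: the mirror appears under a separate "logs" section
zpool status tank
```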
 