Sizing TrueNAS CPU

Fred974

Contributor
Joined
Jul 2, 2016
Messages
190
Hi,

I am building a TrueNAS server to serve as shared storage for xcp-ng. The plan is to use primarily NFS, but maybe also iSCSI. We will have 4 xcp-ng hosts connected to it via 10G networking.

Is there a rule of thumb on how to size the CPU when building a TrueNAS server?

Thank you
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
If you're serving iSCSI or NFS, any reasonably recent Xeon should do it. CPU should not be a bottleneck for block storage.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919

@Fred974, well, that depends on your requirements. What I was getting at is that your reaction seemed to indicate that you took @sretalla's response as "this CPU is recent and therefore OK". At least to me it certainly is not: that chip is from 2010 and also has a low clock frequency. So I would have expected a different response, like "in that case I need a newer CPU". At least something like Sandy Bridge seems to be the consensus as a starting point, and that is for 1 Gbps.

But the storage will be the bigger fish to fry anyway. And that means you not only need SSDs, but also sufficient I/O capacity. The latter, I would assume, will be a problem with a system that is 10+ years old.

If this setup is for anything but playing around and only having small workloads, I think you need something newer (not more than 5-7 years old).

Please also check the 10 Gbps link in my signature (recommended readings).
 

Fred974

Contributor
Joined
Jul 2, 2016
Messages
190
@ChrisRJ I understand what you mean now... I was planning to run on a 10G network, so what am I looking for in a CPU to support 10G networking?
The plan was to have 1x R610 connected to a PowerVault MD1220 via SAS (6Gb/s). The R610 has 2x X5650 @ 2.66GHz (12 cores total). From what you said, it's too old, being more than 10 years old.
The alternative could be a PowerEdge R730 with 16 HDDs, but again the CPU is an E5-2620 v3.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
A couple of notes.

The R610 is going to be relatively old, and therefore close to the end of its service life. If you have two of these handy, then that could be okay. Have a game plan for what's going to happen when your shared storage platform dies. It will, however, likely work, and if you can fill it with a crapton of memory, it could work pretty well. I have a client with a bunch of R510s in service; the trick is that of the three R510s at each site, one is powered off and standing by to take over for the inevitable failures. DIMMs fail, hosts won't boot, etc.

Please make sure you've read all of


for information critical to success with block storage.

Please make sure you also read all of


which will explain why you cannot use whatever tragic RAID card is typically included in those servers (maybe an H700?). These MUST be pulled and replaced with a proper HBA, such as the H200 or H310 crossflashed to IT mode.

Please also make sure your 10G network is appropriate; see the 10 Gig Networking Primer.


I think both your proposed platform choices could be workable, but perhaps not optimal. However, it's easy to make mistakes that will make them untenable.
 

Fred974

Contributor
Joined
Jul 2, 2016
Messages
190
Thank you very much for the pointer. I seem to have a bit of reading to do :)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I should also point out that lots of the points made above by previous posters are correct or at least correct-ish. The big upside to older DDR3 platforms at this point is that you can potentially find large amounts of RAM for them at pretty good prices. Having twice the RAM with an older system than you would have had with a newer system can significantly improve performance. The kernel-based iSCSI and NFS make CPU clock speed slightly less important than it was years ago, when iSCSI was done in userspace. However, you're still going to be limited in various ways.

The upside is that you can always swap out the storage platform if it isn't working out. So I feel it isn't bad to "try it and see".
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
Flip side... DDR3 RAM is slow, and you're going to have slow PCIe lanes. These will support a 10GbE network, but you'll want to consider how many trips your data has to make across all these various components. A bandwidth budget isn't just for your network.
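
To make that concrete, here's a rough sketch of such a budget in Python. Every per-hop figure is an assumed round number for an aging DDR3/PCIe 2.0 platform, not a measurement:

```python
# Rough bandwidth budget: data crosses several components on its way
# from disk to the network, and the slowest hop bounds throughput.
# All figures below are illustrative assumptions, not measurements.
hops_mb_s = {
    "HDD vdevs (aggregate, assumed)": 800,
    "SAS HBA on PCIe 2.0 x8 (assumed)": 4000,
    "DDR3-1600, single channel": 12800,
    "10GbE NIC (practical, assumed)": 1100,
}

# The slowest hop bounds end-to-end throughput.
bottleneck = min(hops_mb_s, key=hops_mb_s.get)
print(f"Bottleneck: {bottleneck} at {hops_mb_s[bottleneck]} MB/s")
```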
 

Fred974

Contributor
Joined
Jul 2, 2016
Messages
190
Flip side... DDR3 RAM is slow, and you're going to have slow PCIe lanes. These will support a 10GbE network, but you'll want to consider how many trips your data has to make across all these various components. A bandwidth budget isn't just for your network.
Very interesting. I never considered RAM as a possible bottleneck before. Looks like if I want a good production system, I need a beefier server with a good CPU and RAM.
Could anyone please recommend a used make/model of server that is still relevant for what I want to achieve? eBay is full of servers, but it would be great to have a starting point.

What are your thoughts on a Dell PowerEdge R430 with 2x Intel Xeon E5-2680 v4 or 2x E5-2670 v3 and 32GB RAM, combined with a Dell PowerVault MD1220 for the storage?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
DDR3 RAM is slow, and you're going to have slow PCIe lanes.

It's not that slow, and having twice or thrice the memory is going to be a lot faster than having to go out to HDD to fetch data. It can still be a winner.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
It's not that slow, and having twice or thrice the memory is going to be a lot faster than having to go out to HDD to fetch data. It can still be a winner.

Single-channel DDR3 at 1600MHz only gives you 12800 MB/sec, no? A 10GbE network can run upwards of 700MB/sec, so memory is only ~18x faster than the network it's seeking to feed. Add in buffer copies, CRC calculations, etc., and that 18x gets chewed up pretty quickly. Yes, having twice or thrice the memory will help, but don't discount the memory channel interleave that also results from such a config. It's a force multiplier that smaller DDR3 configs don't get to take advantage of.
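
The same arithmetic in Python (the 700MB/sec figure being the assumed practical 10GbE number above):

```python
# Single-channel DDR3-1600: 1600 MT/s across a 64-bit (8-byte) channel.
transfer_rate_mt_s = 1600
bus_width_bytes = 8
mem_bw_mb_s = transfer_rate_mt_s * bus_width_bytes  # 12800 MB/s

net_mb_s = 700  # assumed practical 10GbE throughput
print(f"Memory is ~{mem_bw_mb_s / net_mb_s:.0f}x the network")  # ~18x
```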
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Single-channel DDR3 at 1600MHz only gives you 12800 MB/sec, no?

Nehalem runs triple-channel, and even if he's capped at 800MHz by Dell peculiarities regarding total DIMM count, that still provides him with 19.2GB/s.

"Twice or thrice the memory" is a huge multiplier as well, and I would personally much rather have 3x the amount of data accessible at even 19.2GB/s vs. having to go to vdevs at even 1/10th the speed.
 

Fred974

Contributor
Joined
Jul 2, 2016
Messages
190
What are your thoughts on a Dell PowerEdge R430 with 2x Intel Xeon E5-2680 v4 or 2x E5-2670 v3 and 32GB RAM, combined with a Dell PowerVault MD1220?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
32GB of RAM would be considered "not enough" for hosting VM storage (NFS or iSCSI) in any kind of larger/production capacity.

Personally, I would consider the R610 with X5650s to be a good option, and certainly much less expensive to fill with RAM. A bit dated, certainly, but you'll also need to consider SLOG and pool vdev speed before fretting too much about RAM speed and CPU checksumming as potential bottlenecks. The E5 v1 and v2 can also use DDR3 (at higher clock speeds than the Xeon 5600 series, as well), so you wouldn't necessarily be throwing good money after bad.
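
As a very rough sizing sketch, here's the kind of rule of thumb often repeated around these forums, in Python form; the figures are assumed conventions to adjust against your workload, not guarantees:

```python
# Rough RAM sizing using commonly cited forum rules of thumb.
# All constants here are assumptions/starting points, not guarantees.
pool_tb = 20                 # assumed usable pool size
base_gb = 16                 # often-cited TrueNAS baseline
per_tb_gb = 1                # ~1GB of RAM per TB of pool
block_storage_floor_gb = 64  # frequently suggested minimum for VM block storage

suggested_gb = max(base_gb + pool_tb * per_tb_gb, block_storage_floor_gb)
print(f"Suggested starting point: {suggested_gb} GB RAM")  # 64 GB here
```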
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Thing is, the R430 will easily take a lot more than 32 GB of RAM. That's a single medium-density DIMM for the platform!
So... R430 plus extra RAM would be an excellent solution, if it fits your budget. You also get PCIe 3.0, which you wouldn't on Nehalem/Westmere.
iDRAC 8 is also a lot more usable than iDRAC 6 or 7, as it has an HTML5 console.

An R620 or similar might be an interesting intermediate option. You're stuck with iDRAC 7, but you get PCIe 3.0 with Xeon E5 v2 CPUs and you can use DDR3 for cheap RAM.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
@Ericloewe is bringing up similar points to mine. With the R430 you get PCIe 3.0, which significantly improves the speed per lane (x1, x4, x8, x16...) and switches lane encoding formats to deliver another ~20% of throughput. Switching to the E5 v2 CPUs gets you more than double the L3 cache per socket, which makes DDR3's speed more tolerable too.
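
A quick sketch of that per-lane math (PCIe 2.0 at 5 GT/s with 8b/10b encoding vs. PCIe 3.0 at 8 GT/s with 128b/130b):

```python
# Usable bytes per second per lane: raw transfer rate times encoding
# efficiency, divided by 8 bits per byte.
gen2_gb_s = 5.0 * (8 / 10) / 8      # PCIe 2.0: 0.5 GB/s per lane
gen3_gb_s = 8.0 * (128 / 130) / 8   # PCIe 3.0: ~0.985 GB/s per lane
print(f"PCIe 2.0: {gen2_gb_s:.3f} GB/s/lane")
print(f"PCIe 3.0: {gen3_gb_s:.3f} GB/s/lane ({gen3_gb_s / gen2_gb_s:.1f}x)")
```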
 

Fred974

Contributor
Joined
Jul 2, 2016
Messages
190
Does an E5-2640 v3 2.6GHz eight-core count as a reasonably recent Xeon?

I found a deal for a Dell PowerEdge fx2 for £3,864

Dell PowerEdge FX2 with 1x4 Backplane
  • Chassis Model:
    1 x Dell PowerEdge FX2 1x4 Enclosure for up to 4 x Blocks
  • FC630 Blocks:
    2 x Dell PowerEdge FC630 1x2 2.5" SAS, 2 x E5-2650 v3 2.3GHz Ten-Core, 128GB, 2 x 400GB SAS SSD, PERC H730P, iDRAC8 Enterprise
    1 x Dell PowerEdge FC630 1x2 2.5" SAS, 2 x E5-2640 v3 2.6GHz Eight-Core, 128GB, PERC H730P, iDRAC8 Enterprise
  • FD332 Blocks:
    1 x Dell PowerEdge FD332 1x16 2.5" SAS, 4 x 900GB SAS 10k, Dual PERC9
  • Switch Modules:
    2 x Dell PowerEdge FN2210S 10GbE I/O Aggregator for FX2 Chassis - Ref
I read that I can dedicate a storage block to a single server, so I was thinking of dedicating the storage block to the E5-2640 v3 node and running it as Non-RAID (pass-through), since the PERC H730P supports pass-through.

What do you guys think?
 