X9SRL-F w/E5-1650v2 or X9DRH-iTF w/dual-E5-2680v2 for FreeNAS?

pcmoore

Cadet
Joined
Jan 7, 2019
Messages
2
I posted this question recently on another forum, but it occurred to me I should probably ask the experts too :) Any input, thoughts, or experience you have with the configurations below would be greatly appreciated.

I currently have a FreeNAS system running on a Supermicro X9SRL-F with an E5-1650v2 that is humming along just fine, but due to some unrelated lab upgrades and some dumb luck I now find myself with an extra X9DRH-iTF and a pair of E5-2680v2 processors, and I'm wondering if it is worth moving from the X9SRL-F to the X9DRH-iTF. The E5-1650v2 wins on single-core performance with a 3.5 GHz base (3.9 GHz turbo) compared to the E5-2680v2 with its 2.8 GHz base (3.6 GHz turbo); however, there is no contest when it comes to cores, with the E5-1650v2 having six and the pair of E5-2680v2s having 20 between them. The X9DRH does offer on-board 10 Gb links, but I'm unlikely to use those for the networking (see below). However, the X9DRH does offer additional memory slots (I'm currently maxed out on the X9SRL), which could be useful for future upgrades, as I understand ZFS/FreeNAS loves RAM.

If it helps, the system serves up a variety of clients via CIFS and NFS shares; I don't run any additional services/plugins on the FreeNAS system (e.g. no Plex, VMs, etc.). The CIFS clients tend to all be 1 Gb links while the NFS clients are a mix of 1 Gb and 10 Gb links. I expect to add 40 Gb links for some of the NFS clients within the next month or so (just need to install the NICs and cabling). I do not currently have any iSCSI or WebDAV clients, but I may experiment with that in the future.

For reference, the system currently has 128 GB of memory and eight 4 TB WD Red drives in a single pool made up of two four-drive RAID-Z2 vdevs; the pool has a 500 GB NVMe cache (L2ARC) disk (Samsung something or other) and a 280 GB log (SLOG) disk (Optane 900p). The WD Reds are split evenly across two LSI 9211-8i cards (flashed to IT mode). Network connectivity is supplied by a dual-port Chelsio T520, soon to be replaced by a dual-port Chelsio T580. Everything listed above would stay the same if I switched from the X9SRL to the X9DRH.
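In case the layout description is ambiguous, the pool was built roughly along these lines (a sketch from memory; "tank" and the da*/nvd* device names are placeholders, not my actual names):

    # one pool, two four-drive RAID-Z2 vdevs
    zpool create tank \
        raidz2 da0 da1 da2 da3 \
        raidz2 da4 da5 da6 da7
    # NVMe cache (L2ARC) and Optane log (SLOG) devices
    zpool add tank cache nvd0
    zpool add tank log nvd1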
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I guess it depends on how CPU constrained you are, and I wouldn't think you are very constrained. I have a similar system, same X9SRL-F, but with an eight-core Xeon E5-2650 v2 @ 2.6 GHz. What does the CPU graph show you during typical utilization?
I don't find that my system is CPU constrained. The number of vdevs appears to be the bottleneck in performance for me.
More vdevs for more IOPS is the direction I am looking, or SSDs if I can afford them.
I have a system at work where we use gzip-9 compression, and it will completely slam the CPUs to 100% because gzip is much more processor intensive than LZ4. That system runs a bit slow even with two eight-core processors because they can't fully keep up with the compression.
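Compression is a per-dataset property in ZFS, so the tradeoff is easy to try for yourself on a test dataset (hypothetical pool/dataset names):

    # LZ4 is light on CPU and the sensible default
    zfs set compression=lz4 tank/general
    # gzip-9 squeezes harder but hammers the CPU on every write
    zfs set compression=gzip-9 tank/archive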
I would not switch to a dual socket board just to have it because it will use more power and generate more heat. It would be a different thing if you were using FreeNAS to run jails and VMs and you actually needed the compute power.

I know it isn't an attractive option, but you can always replace the memory modules in the X9SRL-F board with higher-capacity modules. If I remember correctly, that board is able to support 512 GB.

As to the question of memory: it might be a nice advantage to have more memory slots, but you gain those slots at the expense of having all your CPU cores in the same package. When one processor needs memory that is held in the RAM attached to the other processor, it has to pull that data across a relatively slow inter-processor channel (compared to accesses inside a single package), and that can actually reduce performance. It is always best if you can get the job done on a single-socket system. On multi-socket systems, you improve performance if you have a mechanism for binding the memory and the task to the same CPU, but FreeNAS does not have that. There has been a lot of research into this over the years, and having done a lot of reading about it lately, my take is that unless you are creating VMs using a hypervisor, there is no point in having a dual-socket system.
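To illustrate what that binding mechanism looks like on systems that do expose one (this is Linux's numactl, not anything FreeNAS offers; the workload name is made up):

    # run a workload with its threads and its memory
    # allocations both pinned to NUMA node 0
    numactl --cpunodebind=0 --membind=0 ./some_workload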
 

pcmoore

Cadet
Joined
Jan 7, 2019
Messages
2
I guess it depends on how CPU constrained you are, and I wouldn't think you are very constrained. I have a similar system, same X9SRL-F, but with an eight-core Xeon E5-2650 v2 @ 2.6 GHz. What does the CPU graph show you during typical utilization?

Thanks for the quick and thorough reply.

As I mentioned, the existing system is doing just fine so far; if I hadn't had the extra MB/CPUs fall into my lap, I wouldn't have thought for a minute about upgrading. It was more a matter of having these extra pieces and wondering if they would be an improvement; based on your comments (and my own suspicions), moving to the X9DRH wouldn't gain much, if anything, other than a higher power bill.

I don't find that my system is CPU constrained. The number of vdevs appears to be the bottleneck in performance for me.
More vdevs for more IOPS is the direction I am looking, or SSDs if I can afford them.
I have a system at work where we use gzip-9 compression, and it will completely slam the CPUs to 100% because gzip is much more processor intensive than LZ4. That system runs a bit slow even with two eight-core processors because they can't fully keep up with the compression.
I would not switch to a dual socket board just to have it because it will use more power and generate more heat. It would be a different thing if you were using FreeNAS to run jails and VMs and you actually needed the compute power.

The only thing I wasn't sure about is how ZFS handles I/O across multiple physical disks and/or vdevs: is it able to take advantage of additional cores/threads to complete more of the I/O in parallel? I do realize some of this depends on the upper-level remote filesystem (CIFS/NFS) as well.

I know it isn't an attractive option, but you can always replace the memory modules in the X9SRL-F board with higher-capacity modules. If I remember correctly, that board is able to support 512 GB.

As to the question of memory: it might be a nice advantage to have more memory slots, but you gain those slots at the expense of having all your CPU cores in the same package. When one processor needs memory that is held in the RAM attached to the other processor, it has to pull that data across a relatively slow inter-processor channel (compared to accesses inside a single package), and that can actually reduce performance. It is always best if you can get the job done on a single-socket system. On multi-socket systems, you improve performance if you have a mechanism for binding the memory and the task to the same CPU, but FreeNAS does not have that. There has been a lot of research into this over the years, and having done a lot of reading about it lately, my take is that unless you are creating VMs using a hypervisor, there is no point in having a dual-socket system.

NUMA is equal parts good and bad. It's a relatively easy way for system builders to add capacity, but if the task affinity isn't managed correctly, the shuffling between nodes can become a problem very fast.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
NUMA is equal parts good and bad. It's a relatively easy way for system builders to add capacity, but if the task affinity isn't managed correctly, the shuffling between nodes can become a problem very fast.
Sounds like you know exactly what I am talking about.
 