Sanity check on new build

Status
Not open for further replies.

deafen

Explorer
Joined
Jan 11, 2014
Messages
71
See any issues with this?

CPU: Xeon E5-1603 V3
Mobo: X10SRi-F
RAM: 64 GB ECC RDIMM (Samsung M393A4K40BB0-CPB0Q)
NIC 1: Intel XL710-DA2 (twinax to switch)
NIC 2: Broadcom 57810S 10Gbase-T (direct connect to one particular workstation that can only be reached by cat6)
HBA: LSI 9211-8i
Drives: 7x8TB WD Red in raidz2 (eventually)

To start off, I'll be reusing the drives from my old system: a mix of seven 3TB drives. They all started off as Seagate ST3000DM008s, but as they've failed one by one, they've been replaced with Toshiba, WD, and HGST drives.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194

deafen

Explorer
Joined
Jan 11, 2014
Messages
71
E5-1630 v3, you mean? That's fine.

Nope, it's actually 1603. It's the lowest of the low end - 4 cores, no HT - but it's still got 10MB of L3 cache. Budget is stretched tight like a drum at the moment, so this is just to get things going. I figured it was better to invest in the RAM and motherboard. If it's not able to keep up, I'll upgrade it later, since this mobo can take a zillion different CPUs.

https://ark.intel.com/products/82761/Intel-Xeon-Processor-E5-1603-v3-10M-Cache-2_80-GHz

That's a bit dubious. The Intels work fine and the Chelsios are better.

Okay, gotcha. Just trying to save a few bucks, but I see the Intel X520-T2 cards for as little as $100 on eBay.

Oh, and don't pay attention to the other Intel card part number - I got mixed up with something at work. I'm not putting a 40GbE card in this thing ... at least not yet. :) It's actually an X520-DA2.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Nope, it's actually 1603. It's the lowest of the low end - 4 cores
It will probably do the job just fine, and you can always upgrade. What are you doing with the system besides plain file storage?
 

deafen

Explorer
Joined
Jan 11, 2014
Messages
71
It will probably do the job just fine, and you can always upgrade. What are you doing with the system besides plain file storage?

Not a whole lot. My wife is a wedding photographer who deals in 350MB+ PSD files over CIFS, thus the need for 10G and enough RAM to try to keep her working set in ARC, but no real need for IOPS. That should barely even tickle the CPU.

I do audio work, but that's so slow the system won't even notice (8 tracks of 44.1kHz audio at 24 bits is 8.5 Mb/s).
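As a quick sanity check on that figure (just shell arithmetic, nothing fancy):

    # 8 tracks x 44,100 samples/s x 24 bits/sample
    echo $(( 8 * 44100 * 24 ))    # 8467200 bits/s, i.e. roughly 8.5 Mb/s, or about 1 MB/s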

I'll also probably move over my Plex server and a few other low-impact VMs, but they're quiescent 95% of the time.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I'll also probably move over my Plex server and a few other low-impact VMs, but they're quiescent 95% of the time.
Out of all that, the only thing that might be a reason for an upgrade is Plex. I run Plex on my FreeNAS, and for some files there is almost no transcoding needed, but for others it can be a lot. When I had a lower-powered processor, there were times when it would be maxed out doing the transcode. Keep an eye on the CPU utilization; it will be a bit of a judgement call on your part. If you see that the CPU is too slow, you can always upgrade it later, but you might be fine. So much of that depends on the source file and what you are playing the video on.
 

deafen

Explorer
Joined
Jan 11, 2014
Messages
71
As it turns out, I was able to get ahold of another 64GB. With that much RAM, it makes more sense to run ESXi on the new box as an AIO, so I can move my existing VMs onto it. So that's what I did. That also allows me to let VMware handle the Broadcom card, which it does just fine.

I'm waiting to build my wife's new workstation before I can test how much of that 10G I can push. From a VM to the FreeNAS server over the internal vswitch, iperf is topping out at about 2.5 Gb/s - maybe that's CPU-limited, because the vswitch is all in software and has no offload? I'm hoping that bumps up when I start using the physical 10G NIC, but that traffic would still pass through the vswitch. If that turns out to be the bottleneck, I'll try passing the NIC through to FreeNAS (but then I might hit compatibility issues, so ...)
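For reference, the test itself is nothing fancy - roughly the following, where the FreeNAS VM's address is just a placeholder:

    # On the FreeNAS VM (server side):
    iperf -s
    # On the client VM, 30-second run:
    iperf -c 192.168.1.10 -t 30
    # Adding parallel streams helps show whether a single stream is CPU-bound:
    iperf -c 192.168.1.10 -t 30 -P 4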

Also, can I just say that IPMI is the bomb? Never buying a board without it again.
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
Also, can I just say that IPMI is the bomb? Never buying a board without it again.
Never buying a server board without it again.

Fixed it for you!!

You don't need IPMI for desktop computers, and you don't need server boards for routers and the like, e.g. pfSense (which means you won't get IPMI there, since IPMI is usually only available on server-grade boards).

Also, be aware of the pitfalls of IPMI. It isn't the most secure.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Also, can I just say that IPMI is the bomb? Never buying a board without it again.
My feelings also, and it's one of the reasons I suggest server boards to everyone who comes here asking about using an old workstation or desktop computer...
From a VM to the FreeNAS server over the internal vswitch, iperf is topping out at about 2.5 Gb/s - maybe that's CPU-limited, because the vswitch is all in software and has no offload?
No, that is likely a limitation of the number of drives you are using, or some other configuration settings within FreeNAS.
Drives: 7x8TB WD Red in raidz2 (eventually)
What is the storage configuration you are using right now?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
As it turns out, I was able to get a hold of another 64GB. With that much RAM, it makes more sense to run ESXi on the new box as an AIO
PS: As you work on your system, you might want to update the thread from time to time. That way we can make suggestions.
For example, here is some good reading about virtualizing FreeNAS on ESXi, optimizing storage speed for virtualization, and optimizing 10Gb networking:

Build Report: Node 304 + X10SDV-TLN4F [ESXi/FreeNAS AIO]
https://forums.freenas.org/index.ph...node-304-x10sdv-tln4f-esxi-freenas-aio.57116/

Testing the benefits of SLOG
https://forums.freenas.org/index.php?threads/testing-the-benefits-of-slog-using-a-ram-disk.56561

The ZFS ZIL and SLOG Demystified
http://www.freenas.org/blog/zfs-zil-and-slog-demystified/

10 Gig Networking Primer
https://forums.freenas.org/index.php?resources/10-gig-networking-primer.42/

Out of curiosity, what chassis are you building this in? You will probably need to add more drives to fully realize the speed of a 10Gb network - probably as many as 24 drives, if you really want all of that...
 

deafen

Explorer
Joined
Jan 11, 2014
Messages
71
No, that is likely a limitation of the number of drives you are using, or some other configuration settings within FreeNAS.

What is the storage configuration you are using right now?

iperf is a pure network bandwidth test - it doesn't touch storage at all. I know the disks will never be able to shovel data out anywhere near that fast, which is why I'm trying to keep her working set in ARC. At 96 GB of RAM for the FreeNAS VM, that should be doable.

Current storage configuration is 7x3TB drives in raidz2. They all started off as Seagate ST3000DM008s, but as some of those have failed I've replaced them with various makes (WD Red, HGST, Toshiba). It scrubs at about 450 MB/s, so I know there's no way I would ever saturate 10G just from disk.
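In case it's useful, this is roughly how I've been eyeballing scrub speed and ARC behaviour - the pool name is a placeholder, and the sysctls are the stock FreeBSD ARC counters:

    # Scrub progress and speed for the pool:
    zpool status tank
    # ARC size and hit/miss counters on FreeBSD/FreeNAS:
    sysctl kstat.zfs.misc.arcstats.size
    sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses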

Regarding Stux's build report: I've read it over before, but haven't referenced it since I started my build. I'll run through it again and see if there are any network tweaks I need to make.

Regarding SLOG: I'm running sync=disabled. I am fully aware of the risks and have determined that this is the right choice for me. I've got solid backup power (7-10 minutes of UPS runtime, plus a generator that fires up within about 15 seconds once power has been out for two minutes), a quality power supply, 4x daily snapshots, and daily offline backups. And frankly, my VMs just aren't all that important anyway - nothing I couldn't recreate in a couple of hours while watching a movie.
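For anyone following along, that's just the standard ZFS dataset property; the dataset name below is a placeholder for whatever actually backs the VMs:

    zfs set sync=disabled tank/vmstore
    zfs get sync tank/vmstore    # confirm it took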

The chassis is an NZXT; I don't know the model number offhand. It has 8x 3.5" bays and 3x 5.25" bays.

Interesting side observation that will require more research: between a Windows VM and the FreeNAS VM, iperf shows a consistent 2.4 Gb/s of bandwidth. Between two Windows VMs, it does 4.3 Gb/s - about an 80% bump in speed. I'll continue to dig into that.

Thanks for all your help!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080

deafen

Explorer
Joined
Jan 11, 2014
Messages
71
Made a couple of quick tweaks and got a lot closer - close enough that I may not bother digging deeper.

First, I allocated 4 vCPUs to FreeNAS, instead of the 2 I had before. Then I implemented jumbo frames end to end (client VM, vswitch, FreeNAS, vmkernel port).
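For the record, the MTU changes on each hop look roughly like the following - the vSwitch, vmkernel, and interface names are placeholders, and the FreeNAS side can also be done from the GUI:

    # ESXi: raise the MTU on the vSwitch and the vmkernel port
    esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000
    # FreeNAS VM: set the MTU on the VMXNET3 interface
    ifconfig vmx0 mtu 9000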

Et voila:
[Screenshot attached: Screen Shot 2018-06-12 at 2.11.52 PM.png]


Good enough for me!
 

deafen

Explorer
Joined
Jan 11, 2014
Messages
71
Catching up on the last week.

So far, so good! Everything has been working great, for the most part. I had a little hiccup with my AD setup due to a bug in the monitoring, but that's resolved now.

Also, I had to change the topology a bit, to one that required bridging through this box. After attempting various methods for this (bridging within FreeNAS, setting up a dedicated VM to bridge the traffic, etc.) I gave up and dug into the piggy bank to buy a Netgear GS110MX, which has 2x10Gbase-T ports. Problem solved!

My wife's workstation is built and happily talking 10G. I experimented with moving some large files (1GB+) around over CIFS, and as expected, the disks can't shovel data that fast, although I was happy to see peak transfer speeds of 400-500 MB/s reading from disk. After a read to prime the ARC, I was able to sustain 1 GB/s until my client's read buffer filled up, after which it dropped back to about 300 MB/s, which I believe is limited by the destination drive.

Writes are a little more of a mixed bag, which I also expected. Sustained writes bounce all over the place, maxing out around 350 MB/s and averaging probably 200 MB/s. I can't imagine that this would create any kind of bottleneck in practice, but if it ever does, I plan to dig into the transaction group tunables to see whether buffering more writes helps or is counterproductive.
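If I do go down that road, I'd start by just reading the current values before touching anything - these are the FreeBSD sysctls I have in mind, though the exact names can vary by FreeNAS version:

    sysctl vfs.zfs.txg.timeout       # seconds between transaction group commits
    sysctl vfs.zfs.dirty_data_max    # cap on dirty (not-yet-written) data held in RAM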

So bottom line, this is the system as built:

Hypervisor: VMware ESXi 6.7
Mobo: Supermicro X10SRi-F
CPU: Xeon E5-1603 V3
RAM: 128GB of 192GB Samsung registered ECC
pNIC: Broadcom 57810S
vNIC: VMware VMXNET 3
Boot drive: 16GB vmdk hosted on Samsung MMCRE 128GB SSD
HBA: LSI SAS9211-8i (PCI passthrough)
Data drives: 7x3TB SATA drives, various marques, in raidz2
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I had to change the topology a bit, to one that required bridging through this box. After attempting various methods for this (bridging within FreeNAS, setting up a dedicated VM to bridge the traffic, etc.) I gave up and dug into the piggy bank to buy a Netgear GS110MX, which has 2x10Gbase-T ports. Problem solved!
Dude. You really must ask. I could have saved you from needing to spend on that, although if you only need two 10-Gigabit ports, I guess it isn't a bad price.
However, if you like to tinker, you might want to take a look at this:
https://youtu.be/p39mFz7ORco
I built one like this and it gives me 10 ports at 1Gig and 4 ports at 10Gig. I have been using it for over a year now with no problems and the only thing I needed to buy was the dual port 10Gig cards, 2 at $35 each with shipping set me back about $60.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080

IQless

Contributor
Joined
Feb 13, 2017
Messages
142
I needed to buy was the dual port 10Gig cards, 2 at $35 each with shipping set me back about $60.
Do you have a link to the dual port one you used?
 

deafen

Explorer
Joined
Jan 11, 2014
Messages
71
I built one like this and it gives me 10 ports at 1Gig and 4 ports at 10Gig. I have been using it for over a year now with no problems and the only thing I needed to buy was the dual port 10Gig cards, 2 at $35 each with shipping set me back about $60.

Yeah, I saw a couple variants like that, but my challenge is that I need 10Gbase-T, and the cards are way more expensive than the SFP+ cards (like $90 minimum). However, this morning I happened to snag a deal on eBay for a couple of Intel X540-T2 cards for $30 apiece! I've also got a couple spare Intel 4x1Gb cards, and the Supermicro mobo has lots of PCIe slots, so I'm going to take another run at setting up a VM that will act as a switch across all of these ports. Need to do a little research on how to set up the vswitches properly for the FreeNAS VM to talk to everything, but I'm pretty sure I can make this all work. I'm still in the return period for the Netgear switch, so if this works I can get that $200 back.
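The ESXi side of the plumbing looks straightforward enough - something like the following, where the vSwitch, uplink, and port group names are all placeholders I haven't settled on yet:

    # Create a vSwitch, attach a physical uplink, and add a port group for the FreeNAS VM
    esxcli network vswitch standard add --vswitch-name=vSwitch2
    esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch2
    esxcli network vswitch standard portgroup add --portgroup-name=Storage10G --vswitch-name=vSwitch2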
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Do you have a link to the dual port one you used?
I used a Mellanox card like this in my FreeNAS and in my Windows computer.
https://www.ebay.com/itm/RT8N1-0RT8...0GBe-ETHERNET-NIC-SERVER-ADAPTER/351416547732
I just ordered two more of those so I can get two more of my systems on the 10Gig side.

In the switch I used this kind of Chelsio card:
https://www.ebay.com/itm/Chelsio-110-1106-30-Dual-Port-10GB-Adapter-SFP/273237196387

For the 1Gig ports I used these Intel NICs:
https://www.ebay.com/itm/DELL-INTEL...-SERVER-ADAPTER-EXPI9404VT-YT674/201528557612
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Supermicro mobo has lots of PCIe slots, so I'm going to take another run at setting up a VM that will act as a switch across all of these ports.
You can run VyOS in a VM. That is actually one of the configurations they talk about on their site, though as a router.
https://vyos.io/

I am sure it would work as a switch as well; you would just need to pass all the cards through to the VM. I ran it on bare metal using an older socket 1366 board I had that I wasn't using for anything else.
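If anyone goes the switch-in-a-VM route, the bridge setup in VyOS is only a few lines - a sketch along these lines, where the interface names depend on how the NICs show up in the VM (older releases used the bridge-group syntax instead of member interface):

    configure
    set interfaces bridge br0
    set interfaces bridge br0 member interface eth0
    set interfaces bridge br0 member interface eth1
    set interfaces bridge br0 member interface eth2
    commit
    save
    exit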
 