A new TrueNAS build for a lab

plissje

Dabbler
Joined
Mar 6, 2017
Messages
22
Hi guys,
So we currently have a TrueNAS server running our lab, but the build was initially very simple and was only supposed to handle around 20-30 VMs. Even for that, performance wasn't a big concern; we just needed everything to work. Since then we have grown, and there are now about 90 VMs, with around 40-60 of them running most of the time, so even general performance is pretty terrible and the box needs an upgrade. The server is 95% used as VM storage over iSCSI, with a simple SMB share for some files.

The current build is:
  • MB: X10SLM+-F
  • CPU: E3-1231 v3 @ 3.40GHz
  • RAM: 32GB ECC 1600
  • Disks: 4x 1TB SSD (one RAIDZ1 vdev), 8x 3TB WD Red (4 mirror vdevs, "RAID10")
  • LSI HBA 9211-8i
  • Mellanox ConnectX-4 10GbE

So I'm trying to build a new system to meet the new demand. This is currently what I'm looking at:
  • MB: Supermicro X11SSM-F
  • CPU: Intel Xeon E3-1220V6
  • RAM: 64GB ECC 2400 (Timetec Hynix IC 64GB KIT (4x16GB) DDR4 2400MHz)
Now for the disks, this is where I'm running into questions. The board has 8 SATA ports, which should be more than enough if I get my "dream" setup. Would it be possible to use a PCIe add-on card on this board with 4 NVMe drives? It looks like a 4-drive NVMe card needs an x16 slot, and this board's x16 slot only runs at x8 electrically. So maybe I could put a 2-drive NVMe card in one of the x8 slots, add 2 more NVMe drives, one each on the x4 slots, and then combine all four into a 4-drive "RAID 10" (striped mirrors) in TrueNAS?
For the 8 onboard SATA ports, I'd like to deploy eight SA500 2TB WD Red SSDs to provide the rest of the storage, and I'll use the last x8 PCIe slot for the 10GbE card.
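
To sanity-check the capacity side, here's rough math for the usable space (back-of-the-envelope only: the NVMe drive size below is a placeholder, and the ~50% utilization figure is just the commonly quoted rule of thumb for iSCSI/block storage, not something measured here):

```python
# Back-of-the-envelope usable capacity for the planned pools (assumptions, not measurements).
# "RAID 10" in ZFS terms is a stripe of mirror vdevs, so raw capacity is roughly halved.

def mirrored_pool_capacity_tb(drive_count: int, drive_tb: float) -> float:
    """Usable raw capacity of a stripe of 2-way mirrors, before ZFS overhead."""
    assert drive_count % 2 == 0, "2-way mirrors need an even number of drives"
    return (drive_count // 2) * drive_tb

nvme_pool = mirrored_pool_capacity_tb(4, 2.0)  # 4x NVMe; 2TB per drive is a placeholder guess
sata_pool = mirrored_pool_capacity_tb(8, 2.0)  # 8x SA500 2TB

# For iSCSI/zvol workloads the usual advice is to keep pool utilization low
# (often quoted around 50%) to keep fragmentation and latency under control.
for name, raw_tb in [("NVMe mirrors", nvme_pool), ("SA500 mirrors", sata_pool)]:
    print(f"{name}: ~{raw_tb:.0f} TB raw, ~{raw_tb * 0.5:.0f} TB comfortable for block storage")
```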

Can anyone confirm that this is actually possible? Specifically the NVMe setup: are there even PCIe NVMe carrier cards that work with TrueNAS?

Thanks!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
90 VMs with around 40-60 of them running most of the time, so now even my general performance is pretty terrible
RAM: 32GB ECC 1600

Super impressed it works at all.

Make sure you walk through


to check your plans against best practices.

I need an x16 port for a x4 NVME card and the board only has x16 that work over x8. so maybe I can use an x2 NVME card on one of the x8, 2 more nvme,
other 8 sata ports, I would like to deploy an x8 SA500 2TB WD RED SSD,

This is a confusing mash of xN's. Start over. If you are thinking about one of the cards that breaks out an x16 to four x4 M.2 NVMe, that won't work on a board that has an electrical x8 in physical x16 slot. Guessing you meant "eight SA500 SSD's".

You CAN use something like a Supermicro AOC-SHG3-4M2P which will let you use 4 NVMe SSD's in an x8 slot (and should even work in an electrical x4-in-physical-x8 slot). This uses a PLX switch.
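
To spell out the lane arithmetic (a generic PCIe sketch, not board-specific documentation; the slot widths below are just examples):

```python
# Generic PCIe lane arithmetic (illustrative only, not taken from any board manual).
# A passive "x16 -> 4x M.2" card relies on the slot bifurcating into four x4 links,
# so it needs 16 electrical lanes. A switch card (PLX-style, like the AOC-SHG3-4M2P)
# provides its own downstream x4 links and only needs whatever uplink the slot offers.

LANES_PER_NVME = 4  # each M.2 NVMe drive wants an x4 link

def drives_on_passive_card(slot_electrical_lanes: int) -> int:
    """Passive bifurcation card: one drive per x4 group the slot can actually supply."""
    return slot_electrical_lanes // LANES_PER_NVME

def drives_on_switch_card(card_ports: int) -> int:
    """Switch card: all ports work, they just share the slot's uplink bandwidth."""
    return card_ports

print(drives_on_passive_card(8))   # 2 -> an electrical x8 slot only yields two x4 groups
print(drives_on_passive_card(16))  # 4 -> a true x16 slot can bifurcate to four drives
print(drives_on_switch_card(4))    # 4 -> switch card runs all four over a shared x8 uplink
```

The trade-off with the switch card is that all four drives share the slot's uplink bandwidth, which is usually fine for a lab.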

But overall, I wanted to say that you may just be setting yourself up for failure here. If you actually need 40-60 VM's, you should really be looking at E5 kit, not E3, so that you can expand RAM past 64GB. 64GB is the starting point we suggest for small iSCSI setups, and that's for just a handful of modestly busy VM's. It feels like you may be up in the 128GB-needed range, or maybe even more. It's much harder to deploy stuff like L2ARC when you are all-flash, but this does NOT change the underlying system pressures that make larger RAM advantageous.


Just because you have lots of fast flash doesn't mean that you can cheap out on system memory. Burning IOPS thrashing pages in and out of memory unnecessarily is a great way to kill performance. You need enough RAM that a reasonable working set of blocks settles into the ARC and reduces your pool reads. The exact size is generally debatable, but at 32GB and 60 running VM's you are primarily thrashing and it is not going to go well. It might hurt "less" at 64GB but it is likely to still be very bad.
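
As a very rough illustration of the arithmetic (every per-VM number below is an assumption made up for the example, not a measurement from any real system):

```python
# Crude ARC head-room estimate (illustrative assumptions only, not measurements).
running_vms        = 50   # middle of the 40-60 range
hot_data_per_vm_gb = 4    # guess at each VM's frequently touched blocks
working_set_gb     = running_vms * hot_data_per_vm_gb   # ~200 GB of "hot" data

for ram_gb in (32, 64, 128, 256):
    arc_gb = max(ram_gb - 8, 0)                 # very roughly, RAM not used by the OS feeds ARC
    coverage = min(arc_gb / working_set_gb, 1.0)
    print(f"{ram_gb:>3} GB RAM: ARC ~{arc_gb} GB, covers ~{coverage:.0%} of the working set")
```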
 

plissje

Dabbler
Joined
Mar 6, 2017
Messages
22
Thanks for the info. I guess the definition of "works" changes from person to person. As this is a lab environment, I didn't need high-end performance or IOPS; all that was required was an environment that works decently well for testing integration between different components. It also took quite a lot of tinkering and accepted limitations to actually get the current box to an operational state.

The need itself didn't really change; we just grew and require more machines. I don't think adding more vdevs to the current system would come even remotely close to covering that, which is why I'm trying to think through how to correctly start over.
Regarding the NVMe drives, the add-on card you linked is exactly what I was looking for: a way to run 4 NVMe drives on a board that has neither M.2 slots nor an x16 slot. Looks interesting.

Regarding the CPU, that's an interesting thought. I was aiming for something that is an upgrade over the existing setup without going too far, as at the end of the day there is a budget limit here. That's why I was even looking at maybe changing the board to an X11SSH-CTF, which has SAS and 10GbE built in, to save some room, but I'm open to suggestions.
Any recommendations for an E5 board that would fit my needs?

At the end of the day, the thought process here was: if I was able to run 30 VMs on the existing hardware, doubling that and moving to all-flash should be more than enough to handle the additional load. Would I ever put this in a prod environment? Hell no. But I figured it should be OK for a lab (which I back up daily anyway, just in case).
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Sometimes reducing the network latency can help with responsiveness. So dual 10Gbps Ethernet, load shared across iSCSI paths, will probably make a difference (assuming some or most of the iSCSI client servers also have 10Gbps Ethernet).
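
Rough numbers just to illustrate the contention side of it (assuming the load spreads more or less evenly across the running VMs, which is a simplification):

```python
# Average per-VM share of the storage network (illustrative assumptions only).
link_gbps   = 10
running_vms = 50   # middle of the 40-60 range

for paths in (1, 2):   # single 10GbE link vs. dual links with iSCSI load sharing
    aggregate_gbps = link_gbps * paths
    per_vm_mbps = aggregate_gbps * 1000 / running_vms
    print(f"{paths} x 10GbE: ~{aggregate_gbps} Gb/s aggregate, ~{per_vm_mbps:.0f} Mb/s average per VM")
```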
 