Upgrade to 10Gb, worth it with my config?

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Looks like it. Before you do anything, make a backup.
Anyway, to explain my previous no to the question "Upgrade to 10Gb, worth it with my config?": you simply don't have enough drives to experience a substantial benefit from it. Please read the following resource.

Also, if you have space for a 10G card you have the space for an HBA, don't you?
 

Lucas Rey

Contributor
Joined
Jul 25, 2011
Messages
180
Reason being?
I meant, my board has:
1 x PCIe 16x ==> Intel 82571EB Gigabit Ethernet Controller (4 ports) - All ports used
1 x PCIe 4x ==> Intel 82571EB Gigabit Ethernet Controller (4 ports) - Only 2 ports used
1 x PCIe 1x ==> Free

Coming to the original post, I'm still thinking about moving my server to 10Gb (I'll use copper instead of SFP+ anyway, because my switch has 4 RJ45 10Gb ports). To do that I can swap one of the Intel 82571EB cards for an X540-T2 or X550-T2.

I believe I cannot use the free PCIe x1 slot for an HBA (I need 4 SATA connections and throughput would suffer, plus I don't see any reliable card with that connector on the market), but maybe I can swap one 4-port Intel 82571EB for a 2-port Gigabit card and move it to the PCIe x1 slot. That would free the PCIe x4 slot, where I can put an HBA and pass it through to TrueNAS. Note that with this arrangement the PCIe x4 slot becomes x2, because it shares its lanes with the x1 slot.
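As a rough sanity check on the x1 throughput concern, here is a back-of-the-envelope sketch; the per-lane and per-disk figures are my own assumptions (PCIe 2.0 lanes, spinning disks), not numbers from this thread:

```python
# Can a PCIe x1 slot feed four SATA HDDs? Rough numbers only.
lane_mbs = 500                 # assumed usable MB/s per PCIe 2.0 lane
drives, hdd_mbs = 4, 200       # assumed ~200 MB/s sequential per HDD

need = drives * hdd_mbs        # ~800 MB/s if all four disks stream at once
for lanes in (1, 2, 4):
    have = lanes * lane_mbs
    verdict = "ok" if have >= need else "bottleneck"
    print(f"x{lanes}: {have} MB/s available vs ~{need} MB/s needed -> {verdict}")
```

So an x1 link only becomes a bottleneck when all four disks stream sequentially at once; the x2 arrangement would be enough for spinning disks.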

The problem is that this way I cannot use the PCIe x1 slot for a dual-port 10Gb card, so I would have to give up 10Gb on my server.

The other option is to use NVMe for the Proxmox OS and pass the whole SATA controller to TrueNAS. I already have a 2TB NVMe drive connected to the board, and it currently handles all the VMs. If I install Proxmox on it, I can still use it for VMs and VM backups. But if that M.2 drive fails, the whole server goes down. Currently Proxmox OS runs on two 512GB SATA disks in RAID1, which is the reason I cannot pass the whole controller to TrueNAS.

Decisions, decisions, decisions.....
 
Last edited:

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Why are you using so many connections? You have dual Gigabit native on the board, plus six more you added... with 4 drives in RAIDZ you are never going to come even close to saturating all of that.
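Rough numbers to back that up, using assumed figures (~200 MB/s per disk, best-case sequential RAIDZ1 scaling), not measurements:

```python
# Ballpark: what can a 4-disk RAIDZ1 pool stream, versus the NICs?
drives, parity = 4, 1
hdd_mbs = 200                           # assumed MB/s per disk
pool_mbs = (drives - parity) * hdd_mbs  # ~600 MB/s, best sequential case

gbe_mbs = 125                           # 1 Gb/s expressed in MB/s
print(f"pool best case: ~{pool_mbs} MB/s")
print(f"8x 1GbE total : ~{8 * gbe_mbs} MB/s")
print(f"1x 10GbE link : ~{10 * gbe_mbs} MB/s")
```

Small or random I/O lands far below that 600 MB/s best case, so the pool cannot keep eight Gigabit ports busy, let alone a 10GbE link.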
You can use the x16 for the HBA and the x4 for the 10G card... or the other way around, since you only have 4 drives to pass through.

As a side note, a hardware refresh might be a good idea. There are tons of much better server motherboards out there.
 
Last edited:

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
And the x1 slot for an M.2 boot drive.
More exotic solutions could involve bifurcating the x16 slot to x8x8 or x8x4x4.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
More exotic solutions could involve bifurcating the x16 slot to x8x8 or x8x4x4.
I don't think the motherboard supports bifurcation... at that point he should just change the motherboard.
 

Lucas Rey

Contributor
Joined
Jul 25, 2011
Messages
180
Why are you using so many connections?
Proxmox handles not only TrueNAS but several other VMs (about 20 in total), including pfSense, which handles the GREEN, RED and BLU networks separately. Plus I have the GREEN and TrueNAS networks configured in LACP (2 NICs each).

Using 10Gb NICs will reduce the number of used ports to 3 if I join the TrueNAS and GREEN networks into one:
1. Green (used also for TrueNAS)
2. Red
3. Blu
But then I have to find a 4-port 10Gb network card so everything fits in one slot.

Putting the Proxmox OS boot drives on the x1 slot could be a cheap solution. I can look for an x1 HBA card with support for two SATA or M.2 drives.
There are so many options to check and validate that my head is smoking now :)
 
Last edited:

probain

Patron
Joined
Feb 25, 2023
Messages
211
This sounds like you could benefit from separating things into VLANs instead of physical network ports. If you're upgrading to 10Gbit, then you won't need the LACP in the same way either.
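The LACP point deserves a number: a bond hashes each flow to a single member link, so one SMB/NFS transfer is capped at 1 Gb/s no matter how many Gigabit links are aggregated, while a single 10GbE port lifts that per-flow cap to 10 Gb/s. A toy sketch of the idea (the hash here is illustrative; real bonding drivers use different math):

```python
# Why LACP doesn't speed up a single transfer: one flow -> one link.
import zlib

def pick_link(src_ip, dst_ip, src_port, dst_port, n_links=2):
    """Illustrative layer3+4-style hash mapping a flow to a bond member."""
    key = f"{src_ip}>{dst_ip}:{src_port}>{dst_port}".encode()
    return zlib.crc32(key) % n_links

# Every packet of one file copy carries the same tuple, so it always
# lands on the same member link and is capped at that link's speed.
print(pick_link("192.168.1.50", "192.168.1.2", 49152, 445))
# Only a second, distinct flow can end up on the other link.
print(pick_link("192.168.1.51", "192.168.1.2", 49811, 445))
```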
 

Lucas Rey

Contributor
Joined
Jul 25, 2011
Messages
180
This sounds like you could benefit from separating things into VLANs instead of physical network ports. If you're upgrading to 10Gbit, then you won't need the LACP in the same way either.
I cannot use VLANs with my config, because I need 3 physical network ports:
1. RED WAN coming from ISP modem
2. GREEN LAN going to Switch
3. BLU WIFI going to the Netgear RBR850 hotspot

And sure, LACP is useless with 10Gb, so both LACP groups will be removed.

In the end, the config in my head, which will also save me some money, will be:

PCIe x16 ==> Intel X540-T2, dual 10Gb RJ45 ports ==> handles GREEN and RED
PCIe x4 (x2) ==> single 2.5Gb port ==> BLU WIFI network to the Netgear RBR850 hotspot (this Netgear router has only a 2.5Gb port)
PCIe x1 ==> HBA just to boot Proxmox

Then I'll be able to pass the whole controller to TrueNAS and also have 10Gb on my network.
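One check worth doing before locking this in: on the Proxmox host, confirm the SATA controller sits in its own IOMMU group, since KVM can generally only pass through whole groups. A small sketch reading the standard sysfs layout:

```python
# List IOMMU groups on the Proxmox host.
from pathlib import Path

root = Path("/sys/kernel/iommu_groups")
if not root.is_dir():
    print("No IOMMU groups: enable VT-d in the BIOS and intel_iommu=on")
else:
    for group in sorted(root.iterdir(), key=lambda p: int(p.name)):
        devs = ", ".join(d.name for d in (group / "devices").iterdir())
        print(f"group {group.name}: {devs}")
```

If the controller shares a group with other devices, everything in that group has to go to the VM together.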

What do you think, guys? Could this be the final solution?
 
Last edited:


probain

Patron
Joined
Feb 25, 2023
Messages
211
Of course you know your LAN best, but I would've looked into putting GREEN and BLU on VLANs instead. This of course requires switches that can handle VLANs properly.

Red obviously needs to be on its own port. But the rest would be better handled as VLANs. IMHO :)
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Red obviously needs to be on its own port.
Though this wouldn't be my preferred way to handle it, red could also be a VLAN, at the cost of one more port on the switch.
 

Lucas Rey

Contributor
Joined
Jul 25, 2011
Messages
180
Actually, I can already handle VLANs: I have a Netgear GS324T (24-port 1Gb), but in the past I preferred to use separate Ethernet ports on the server.
I'm now moving to a Netgear MS510TXM, which manages 4x 10Gb RJ45 + 2x SFP + 4x 2.5Gb RJ45. So I'll have fewer ports available, which is why it would not be a good idea to use VLANs that occupy other precious switch ports.

Below is the basic network schema I would like to use... Work in progress!
[Image: Immagine 001.png]


And sure, BLU and GREEN could also be handled by VLANs in the new network, but I prefer to keep the LANs separate and preserve switch ports.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I trust traffic is taking the following path:
  1. ISP
  2. pfSense
  3. Switch
  4. Others, including the virtualized TrueNAS instance
Did you consider a motherboard upgrade?

Anyway, correct me if I'm counting wrong:
One 10G Base-T for RED (x16 SLOT card)
One 10G Base-T for GREEN (merged BLUE) (same x16 SLOT card)
Motherboard's two Gbit Base-T for various things.
HBA (in the x4 slot)
To me it looks like you are good to go.
 
Last edited:

Lucas Rey

Contributor
Joined
Jul 25, 2011
Messages
180
Did you consider a motherboard upgrade?
Sure, especially because I would like to increase the core/thread count and possibly use a less power-hungry CPU. The problem is that I'm using a Fractal Design 804 case, which supports only mATX boards. So the search is reduced to a few specific (expensive) models, apart from the fact that I'm not sure there is an mATX board with 3 PCIe slots. Also, upgrading the motherboard means changing RAM and CPU; adding in the cost of the new 10Gb switch and the Ethernet cards, this would be out of my budget.
Could you, or someone else, advise me on a good mATX board with a low-power CPU?

Anyway, correct me if I'm counting wrong:
Nope. Currently I'm thinking of something like:
  • PCIe x16 ==> Intel X540-T2, dual 10Gb RJ45 ports ==> handles GREEN and RED
  • PCIe x4 (x2) ==> single 2.5Gb port ==> BLU WIFI network to the Netgear RBR850 hotspot (this Netgear router has only a 2.5Gb port)
  • PCIe x1 ==> HBA with 2 SATA ports, just to boot Proxmox
  • Onboard 1Gb port 1 ==> Proxmox management
  • Onboard 1Gb port 2 ==> free for future use
Or:
  • PCIe x16 ==> Intel X710-T4, four 10Gb RJ45 ports ==> handles GREEN, RED and BLU (one port free for future use)
  • PCIe x4 ==> HBA with 2 SATA ports, just to boot Proxmox
  • Onboard 1Gb port 1 ==> Proxmox management
  • Onboard 1Gb port 2 ==> free for future use
In both cases I'll pass the whole controller to TrueNAS, because it will only contain NAS disks (the boot disks are moved to the HBA).
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
There is Supermicro's X11SCL-F board, which can be found pretty easily on eBay for cheap: it's compatible with your current CPU, so you wouldn't need to change it; reducing power consumption while increasing the C/T count is not going to be cheap. Keeping the same count would point towards an E-2136, while decreasing to 4C/8T would bring the price down with an E-2134.

The board supports up to 128GB of unbuffered ECC RAM, has 3 PCIe slots (one physical x16 / electrical x8, and two x8/x4), an M.2 slot, two Gigabit ports, and a dedicated IPMI port.

I would also look into AMD's garden. IMHO your best shot would be to go with the second option but using the dual 10Gb card: do not go down the 2.5Gbps route... just merge BLU into GREEN using the switch and put them on different VLANs.
Or maybe get a NUC to run pfSense with something like this. Or maybe spend a lot of money to completely renew the homelab. Or maybe...
 
Last edited:

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Could you, or someone else, advise me on a good mATX board with a low-power CPU?
Same socket, though you'll still need to replace CPU and RAM to have ECC.

More ambitious (but probably higher power):

Guaranteed extra-low power (2 PCIe slots… but LSI HBA and 10GBase-T are already on-board):
 

winstontj

Explorer
Joined
Apr 8, 2012
Messages
56
@Lucas Rey the Gigabyte mobo is not doing you any favors. The cards that are perfect to use (Dell H310 or Supermicro AOC-STGN-i2S) are all PCIe 2.0 x8, so PCIe 3.0 x4 has the speed (bandwidth per lane) to handle 2.0 x8 - but it depends on the PCIe card and the chipset; some might end up with lowered bandwidth. Also, I own the X11SCL-F board that @Davvo suggested and I have it in a Fractal 804 case (TrueNAS CORE). That board would be perfect for what you are trying to accomplish. I have an X11SSH-F, an X11SCL-F, and 3x X11SPM-TPF mobos, all in Fractal 804 cases.

I have no clue how Proxmox behaves, but on my ESXi hosts I run out of RAM far before I run out of CPU cores/resources.

Also, break out pfSense: don't virtualize it like that. It'll be a disaster when something happens.

I'm not sure if you can use the i9 CPU in the X11SCL-F board. Also, to be honest, if I had to do it all over again I would buy the C246 chipset version: the X11SCH-F.

The SFP+ cards are cheaper than RJ45 ones, and the switches you mentioned both have SFP and SFP+ interfaces, so it would work. I believe the PCIe 3.0 x4 slot on your motherboard is x16-sized but x4 electrically. Technically 3.0 x4 should handle 2.0 x8 bandwidth, but I'm not sure how that works electrically. Also, if you are doing 4 drives in RAIDZ2, why not do 2 pairs of mirrored vdevs?
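For what it's worth, the lane math works out like this (approximate usable figures after encoding overhead; whether a given card negotiates x4 cleanly is the open question):

```python
# Per-lane throughput roughly doubled from PCIe 2.0 (8b/10b encoding)
# to 3.0 (128b/130b), so a 3.0 x4 slot has about the raw bandwidth a
# 2.0 x8 card wants.
per_lane_gbs = {"2.0": 0.5, "3.0": 0.985}  # approx. usable GB/s per lane

for gen, lanes in (("2.0", 8), ("3.0", 4), ("2.0", 4)):
    print(f"PCIe {gen} x{lanes}: ~{per_lane_gbs[gen] * lanes:.1f} GB/s")
```

The catch is that a 2.0-only card in that slot links at 2.0 x4, about 2 GB/s rather than the full 4 GB/s; still plenty for a 4-drive HBA, and nearly enough for both ports of a dual SFP+ card at full tilt.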

This is an option too: https://www.amazon.com/ELUTENG-Expr...e&keywords=pcie+x1+sata&qid=1708317488&sr=8-3 But I'd make 100% sure that the card you buy works with TrueNAS, and I'd probably only use one of those PCIe x1 cards for the boot drives.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Also, break out pfSense: don't virtualize it like that. It'll be a disaster when something happens.
This is indeed something I would support. I used to run pfSense in ESXi and when I switched the hypervisor to XCP-ng, I moved pfSense to a small PC. It made my life so much easier!
 