Is there a performance gain to segmenting physical NIC ports for the Web GUI and OS Network?


PlowHouse

Dabbler
Joined
Dec 10, 2012
Messages
23
Hey all,

My MB came with two Intel 1G NIC ports, and I'm curious whether there's a performance or security gain to separating the web GUI and operating system IP networks. Right now the internal IP for the FreeBSD server is the same as the web GUI IP (say, for example, 10.1.1.20). This is for home use, so my initial thought is no, it won't matter, but I'd be interested in knowing whether that's completely true, or if there are situations in an enterprise where this would matter.

Also, would I want to use LACP and just team the two ports together? The switch my NAS is connected to only supports 1 Gbps per port, so even if I could theoretically saturate 2 Gbps worth of data on the NAS, would I just be wasting my time teaming the ports?

Thanks in advance!
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I'm a little unclear on what you're asking re "Web GUI" and "OS IP" - do you mean "sharing services" perhaps, so having your web GUI accessible on 10.1.1.20 but your CIFS/NFS mount point being 10.1.1.120?

It's a more complex issue.

Performance?
Generally no. Management traffic doesn't demand enough bandwidth or packets-per-second to impact most workloads. However, if you're serving up things like iSCSI/NFS to VM hosts, you're going to want that segmented for stability/air-gap reasons anyway.
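
If you ever do need to carve that off, it usually means a dedicated VLAN interface per storage network. Purely as a sketch of what that looks like from a FreeBSD shell (FreeNAS does the same thing through its GUI; igb0, the tag, and the addresses below are placeholders I made up):

    ifconfig vlan10 create                    # create the tagged interface
    ifconfig vlan10 vlan 10 vlandev igb0      # VLAN tag 10 on physical NIC igb0
    ifconfig vlan10 inet 10.1.10.20/24 up     # storage-only subnet for iSCSI/NFS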

Security?
Yes. In one of my more-paranoid configurations, I have the management IP cut into a separate VLAN/subnet which is only accessible by other devices sharing it. If I want to SSH to that machine, I bounce off of an SSH proxy inside the "management VLAN" - direct connection attempts aren't routed there and thus fail. The other ports on the machine are connected to a pair of iSCSI VLANs and the "public data" VLAN where end-users can access a CIFS share. Attempting to hit a management GUI or SSH to the CIFS IPs just gets denied.
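
The proxy hop itself is nothing exotic; plain OpenSSH can do it with ProxyCommand. A minimal client-side ~/.ssh/config sketch, where the hostnames and addresses are purely illustrative:

    # "jumpbox" is the only machine allowed into the management VLAN
    Host jumpbox
        HostName 10.1.9.5
        User admin

    # reach the NAS management IP by bouncing through the jumpbox
    Host nas-mgmt
        HostName 10.1.9.20
        User root
        ProxyCommand ssh -W %h:%p jumpbox

With that in place, "ssh nas-mgmt" does the two-hop dance automatically, while hitting 10.1.9.20 directly just fails.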

LACP Teaming?
Only has value if you plan to run multiple concurrent CIFS/NFS sessions over that link, and even then LACP hashes each flow onto a single member port, so any individual session tops out at 1 Gbps. A single client won't see 2 Gbps even if it has multiple links of its own.
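
If you decide to set one up anyway, it's a lagg(4) interface underneath. Roughly the shell equivalent (FreeNAS builds this from its Link Aggregation page; igb0/igb1 and the address are assumptions on my part):

    ifconfig lagg0 create
    ifconfig lagg0 laggproto lacp laggport igb0 laggport igb1
    ifconfig lagg0 inet 10.1.1.20/24 up

Keep in mind the two switch ports have to be configured as an LACP channel group on the other end, or the lagg never comes up.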
 

PlowHouse

Dabbler
Joined
Dec 10, 2012
Messages
23
Thanks for the quick reply, Badger; that's pretty much exactly what I was looking for. My mistake for the confusion on web GUI vs. OS IP: you're right about what I originally meant, I just picked a poor choice of wording to describe it =D. I'll also have to look into the SSH proxy you mentioned. I don't have a real need for it at home, but it definitely seems like good practice and worth knowing, since I'm sure I can apply it to other setups and not just FreeNAS. Good stuff
 

barny

Dabbler
Joined
Feb 4, 2015
Messages
15
HoneyBadger,
This could be the wrong place to post this, but my first box ran 9.2.1.8 on a Supermicro X7 motherboard. One day I had both Ethernet ports on the board plugged into my switch (by accident) and noticed a huge speed increase. I left it that way until I upgraded the motherboard and OS. When I upgraded to 9.3 and an X10 board it wouldn't work that way, so I only use one Ethernet port now. Both ports were on the same subnet, and the switch was a basic Netgear 8-port. I've often wondered why the old setup worked so well and this one doesn't.
Thanks for any thoughts.
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
Sounds like a placebo. There is no way (based on current computing and networking technology) that just having both NICs plugged in would increase the speed of anything. It's technologically impossible.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
If he flipped a magic switch then everything is possible... :)
 