Chris Moore
Hall of Famer · Joined May 2, 2015 · Messages: 10,079
Sorry. I am still learning. - I definitely don't always agree with @Chris Moore -
Sorry it looks like an attack to you. I don't know the specific incident you think was an attack, but the reason was likely an effort to convince the poster that they should not do what they were trying to do.
> I just happened to reread this and I was wondering what industry your company is in?

I work in a much different environment. .... It can literally take a week when it is fast-tracked.
I suspect the goal is to enforce best practices so FreeNAS doesn't get blamed for connectivity problems that are really poor network configuration.... iX is creating an enterprise product here, and NONE of their customers would be asking for this.
> I just happened to reread this and I was wondering what industry your company is in?

I work at a US Government agency and there are multiple departments involved (around a dozen people) between the approval and implementation of a network change request.
The global configuration is where the default route and name servers are set, regardless of DHCP.
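For a concrete picture, on the plain FreeBSD that FreeNAS is built on, those same global settings would look roughly like the fragment below (the interface name `em0` and all addresses are placeholders; on FreeNAS itself these values are managed through the GUI/middleware rather than edited by hand):

```
# /etc/rc.conf -- static interface configuration; em0 and addresses are examples
ifconfig_em0="inet 192.168.1.10 netmask 255.255.255.0"
defaultrouter="192.168.1.1"        # the "default route" part of the global config

# /etc/resolv.conf -- the "name servers" part
nameserver 192.168.1.1
nameserver 8.8.8.8
```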
I think your Padawan made numerous errors, and it would be a stretch to solely blame DHCP assignments; DHCP addresses don't disappear the second the server goes down.
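That point about leases surviving a server outage follows from the DHCP renewal timers. A minimal sketch of the RFC 2131 default timers (T1 at 50% and T2 at 87.5% of the lease time; the function name is mine):

```python
def lease_timers(lease_seconds: float) -> tuple[float, float]:
    """RFC 2131 default renewal timers for a DHCP lease.

    The client keeps using its address until the lease expires; it merely
    tries to renew at T1 (unicast to its server) and rebind at T2
    (broadcast to any server). A server outage shorter than the remaining
    lease therefore never takes the address away.
    """
    t1 = 0.5 * lease_seconds    # start of the RENEWING state
    t2 = 0.875 * lease_seconds  # start of the REBINDING state
    return t1, t2

# With a one-day lease, renewal starts at 12 hours and rebinding at 21 hours:
print(lease_timers(86400))  # (43200.0, 75600.0)
```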
DHCP servers are generally critical; it would have been noticed had it gone down.
VRRP and multiple other technologies exist to provide redundancy for services, including DHCP.
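As one concrete example of redundant DHCP, ISC dhcpd supports a two-server failover protocol; a sketch of the primary server's dhcpd.conf (peer name, addresses, and ranges are placeholders):

```
failover peer "dhcp-failover" {
    primary;                       # the partner server declares "secondary"
    address 10.0.0.1;              # this server
    peer address 10.0.0.2;         # its partner
    port 647;
    peer port 647;
    mclt 3600;                     # max client lead time (primary only)
    split 128;                     # share the pool 50/50 (primary only)
}

subnet 10.0.0.0 netmask 255.255.255.0 {
    pool {
        failover peer "dhcp-failover";
        range 10.0.0.100 10.0.0.200;
    }
}
```

If either server goes down, its partner keeps answering lease renewals for the shared pool.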
The question is: if the product is aimed at professionals and, as you say, "@siconic offers the classic reasoning for wanting servers to be DHCP-configured above, and that's fine for what it is worth. Tradeoffs." then why make the restriction? Tradeoffs are everywhere; let people make them.
FN is used in a multitude of environments, and they're all different. While setting primary/management interfaces to DHCP may not be ideal, FN has had HV functionality for some time now, and with many interfaces possible via VLANs, it doesn't seem impossible that a valid configuration could include multiple DHCP addresses.
> Don't know why FreeNAS has to be the outlier that can't elegantly handle DHCP on multiple interfaces.

Exclusively in regard to servers, it has been best practice for as long as I can remember: interfaces should be configured by the administrator, locally in the system, not by DHCP. What you are suggesting, having a DHCP server hand devices the same IP every time they show up on the network, works great; it is how we do things for everything that is not part of the 'infrastructure'. That means the printers and desktop computers and laptops, etc. Servers still need to come up with the same IP address after a reboot, even if they can't reach the DHCP server, which is why they get static assignments. Servers are often remotely managed, and if they come up with some other IP address, you can't reach them unless you physically travel to the data center, which is in another building at the site where I work. Switches, routers, servers and even some management systems need static addresses so they can still work when something else is not working properly.
Just wanted to check my understanding of a couple of things:...
At work, we have several physically separated networks for security reasons and some have DHCP servers, but I manage three that have no DHCP server and I have to put the IP address into all of those systems manually. It is a huge management pain, but it is a requirement for the environment.
What is the downside of statically assigned DHCP, other than that if the server fails the network may get a bit funky until you fix it? Not sure if DHCP troubles are actually relevant for non-enterprise scenarios or just dogma.
Unless I am missing something, DHCP reservations in the router (especially if it has a nicely laid out UI) are a self-documenting way of configuring the network. In a small network where everything depends on one firewall/router/DHCP box like a pfSense (or, God forbid, a consumer router), if that box fails, everything is pooched anyway until it gets fixed. I like the flexibility. With all of the network configuration being stored on the router, I can pick up any machine, plug in a random RJ45 and it'll work out of the box. Even if the router dies and I need to drop in a replacement, all the machines have their SSH instantly available (albeit at random IP addresses) without the need to run around with a serial cable.
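For reference, the kind of reservation being described is a one-liner in most DHCP servers; in ISC dhcpd syntax it looks like this (the host name, MAC address, and IP below are made up):

```
host office-nas {
    hardware ethernet 00:11:22:33:44:55;  # match this client MAC...
    fixed-address 192.168.1.50;           # ...and always hand out this IP
}
```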
> Don't know why FreeNAS has to be the outlier that can't elegantly handle DHCP on multiple interfaces. Maybe copy the solution Linux or Windows uses?

It looks like this may be a non-issue due to software changes since my OP.
> You mention multiple admins/signoff, therefore you need a "conspiracy" to get a rogue machine authorized, not just a single bad actor. Correct?

Absolutely. There are three offices that need to approve the action before the people with the capability to act are allowed to begin work. Then, one branch provides the hardware, another branch places the hardware and makes the physical connection to the network, another branch sets port security so the device can communicate on the network, and another branch makes the configuration active in the management server so DHCP can work. The MAC address and machine name must match records, and there is software on the system that must check in. It is all in the name of security, and the process is lengthy. It can take a month to get a computer moved from one cubicle to another in the same building. It is divided among so many in an effort to prevent a bad actor from having unrestricted access to the network. All network ports are shut by default, and if the MAC address of the hardware changes, they go back to shut. There are ways to spoof the MAC, which is why there is also software on the computer that must check in with the network. It prevents booting from a Live CD or something like that. They also go around periodically with signal sniffers looking for cellular or WiFi hardware, as both are banned from the facility. USB flash drives are also forbidden, but you can get special approval for USB hard disk drives.
> Does lack of DHCP make it significantly more difficult for a bad actor who has remotely compromised a machine to pivot to another machine?

We have a DHCP server in my environment; it is used for the desktop computers and printers, but not servers. The DHCP server will only assign an IP address to a recognized MAC address. It is intended to make it more difficult to get a physical device on the network. We have an array of firewall, intrusion detection (IDS), and intrusion prevention (IPS) systems that are there to attempt to prevent or stop bad actors from outside. The network at work is operated on a "trust no-one" principle because most data breaches in the organization's history have been from the inside.
> Unless I am missing something, DHCP reservations in the router (especially if it has a nicely laid out UI) are a self-documenting way of configuring the network.

I guess it is a matter of perspective. If you have a system on the network that has the IP address assigned in the OS instead of receiving that IP address from a DHCP server, that system will not know or care if the DHCP server is up, down, or sideways. At home, my network switch is a 48-port device and everything connects to it, including the device that issues DHCP addresses. The switch, my servers and my computer all have static IP addresses, and they can communicate with each other always and forever, even if I disconnected the router from the network. That is the advantage: if your router or the device that serves DHCP addresses should go down, nothing is "pooched" and you have some systems that are able to be pinged for testing to find out where the problem is. If the computer you are using has a DHCP address, you can't even ping the router to see if the router is down. I know it makes some additional management pain, I deal with that every day at home and at work, but it makes the network more fault tolerant and gives you more troubleshooting options. Also, if your only documentation of network configuration is in the router, where do you look when the router fails?
> Don't know why FreeNAS has to be the outlier that can't elegantly handle DHCP on multiple interfaces. Maybe copy the solution Linux or Windows uses?

Did you try using the new interface with FreeNAS v11.2-U7? I haven't had the need to reconfigure my NICs, but as far as I can see, the new interface allows DHCP on all interfaces. @iLikeWaffles, if you check it out, please post back here and let us know if it works, or if it's just in the UI but does nothing or throws an error.
DHCP Relay will help bridge the quest for DHCP across interfaces.

As the OP, I'm going to put my $0.02 worth in.... It's not that big a deal to just configure a static IP, and having lived that way for a couple of years I can see the point. My box lives on 3 subnets, and it actually worked out better having the IP configured where they could all have the same host number (x.x.x.hostid/24), and I can remember it if the DHCP has a hiccup.

Resurrecting an old thread here because I've just run into this too.
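The DHCP Relay suggestion above usually means running a relay agent such as ISC's dhcrelay on the router between the subnets; a sketch (the interface names and server address are placeholders):

```shell
# Forward DHCP broadcasts heard on em1 and em2 to the real DHCP server,
# so clients on those subnets can still lease addresses from 10.0.0.1
# without a DHCP server on every segment.
dhcrelay -i em1 -i em2 10.0.0.1
```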
There are multiple threads on this exact topic, and none of them answer the question; instead, they resort to attacking the person raising the question.
Why has someone seemingly wasted time coding it so you can't have multiple DHCP interfaces?