This actually doesn't prove your point, because I'll bet that if you look at your routing table, ens256 is the interface for the default route with a metric of 0. In that case this makes perfect sense: you are using ping, which does not bind to a specific interface, to try to demonstrate how a service/process that IS bound to an interface will behave, and the two do not behave the same in any way, shape, or form. Additionally, ICMP does not replicate the TCP conversation involved in iSCSI, which is the main topic here, and I can replicate this on many other fronts: DNS server, DHCP, etc. So all you've actually shown is that ICMP can be received on one interface but will always go out the default interface, which is why ICMP is very limited in real network troubleshooting and is not the be-all and end-all of it.
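As a quick illustration (a minimal sketch, with a hypothetical destination and the interface names from your setup assumed), an unbound ping just follows the routing table, whereas forcing the interface, which is roughly what a bound service does, changes the path:

```
# Unbound sender: the kernel picks the route (and therefore the interface) for you.
ip route get 8.8.8.8        # will show "dev ens256" if that interface holds the metric-0 default
ping -c 3 8.8.8.8           # follows the same lookup, i.e. goes out the default interface

# Pinning the source interface, which is roughly what a bound service/process does:
ping -c 3 -I ens192 8.8.8.8
```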
Now, the default interface is handled pretty similarly in both Windows and Linux. I can't speak to FreeBSD because I do not know its internals that well, but with the others the outgoing interface is determined by three criteria: first, which one is assigned to the network I am on (source IP/subnet matching); if none match, the default route (0.0.0.0/0) is selected. Then, which of the remaining interfaces is connected (line protocol up), and finally, which one has the lowest metric. The metric is the tie-breaker when multiple interfaces exist on the same subnet, and if you have multiple interfaces with a default gateway defined and do not explicitly state the metric for each, it is literally first come, first served: whichever interface is initialized first (and gets DHCP first, in your case) will have metric 0. Debian (and derivatives) will assign 100 as the metric on all other interfaces by default. What happens if you have multiple interfaces that meet all three criteria? You get a round-robin event, which can be a lot of fun. I don't think Linux will use the lower IP as the source every time, but then again, that's only been my observation in practice, not an absolute statement.
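To make the metric piece concrete, here's a minimal sketch (hypothetical gateway address, interface names assumed) of what two DHCP-installed default routes look like and how you'd pin the tie-breaker yourself instead of relying on initialization order:

```
ip route show default
#   default via 192.168.1.1 dev ens256 proto dhcp metric 0     <- initialized (and leased) first
#   default via 192.168.1.1 dev ens192 proto dhcp metric 100   <- the Debian-style default for the rest

ip route get 8.8.8.8    # resolves via ens256, purely because of the lower metric

# Demote ens256 explicitly so the choice is deliberate rather than first come, first served
# (a DHCP renew can put it back; the persistent fix belongs in the netplan/dhclient config):
sudo ip route del default via 192.168.1.1 dev ens256
sudo ip route add default via 192.168.1.1 dev ens256 metric 200
ip route get 8.8.8.8    # now resolves via ens192, since metric 100 beats 200
```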
Now, here is the caveat that makes your broad statement true: if the interfaces are ALL on the same subnet, then yes, you will have some goofy asymmetric routing because of it. In the case of iSCSI it doesn't matter, because iSCSI is multipath aware and can handle this occurring. In other cases, such as OpenSSH server, you bind it to an interface, and that interface is the only one that will accept incoming connections. As for outgoing traffic, it's not really a concern, because it's an established connection that returns over the same interface and path it was received on, unless you really screw with your routing and force an asymmetric route.
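For instance (assuming a stock OpenSSH install, with a hypothetical management address), the binding is actually done per address rather than per NIC, but the effect is the same: only the interface holding that address accepts the connection, and the established TCP session replies back out the same way:

```
# On the server, sshd_config carries something like:
#   ListenAddress 10.27.200.10
sudo sshd -T | grep -i listenaddress    # shows the effective bind address(es)
ss -tlnp | grep ':22'                   # confirms the listener is bound to that address only
```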
Now, the key to why having multiple interfaces on the same subnet works for load balancing on a storage network is that the storage subnet is different from the one the console (or default gateway) is using. This is the same as what
@Arwen mentions. This means that the first criterion, source IP <> subnet matching, will cause one of the storage interfaces to be selected over the default, because it matched first. In the case of multiples, well, if you leave the metric the same, then you get a round-robin event, which is only a concern if you are the client and you are not using a multipath-aware protocol. Also, on these interfaces you don't assign a default gateway, because quite frankly I wouldn't want my iSCSI traffic going across my router, or even let the router see that broadcast domain at all. Yes, Linux (and Windows, surprisingly) attaches a specific interface to every route entry, which is how this "magic" works.
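A minimal sketch of that (hypothetical addresses, done by hand here rather than through netplan): an address-only storage interface installs nothing but its connected route, and that route is pinned to the device, so no gateway ever sees the traffic:

```
# Address only, no gateway, on a storage NIC:
sudo ip addr add 10.27.204.11/24 dev ens224
ip route show 10.27.204.0/24
#   10.27.204.0/24 dev ens224 proto kernel scope link src 10.27.204.11
```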
Observe my routing table below:
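This is roughly what `ip route show` gives on a four-NIC Linux host laid out the way mine is (the addresses here are illustrative, the interface names are the ones I describe):

```
ip route show
#   default via 10.27.200.1 dev ens160 proto static
#   10.27.200.0/24 dev ens160 proto kernel scope link src 10.27.200.10
#   10.27.201.0/24 dev ens192 proto kernel scope link src 10.27.201.10
#   10.27.204.0/24 dev ens224 proto kernel scope link src 10.27.204.11
#   10.27.204.0/24 dev ens256 proto kernel scope link src 10.27.204.12
```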
Notice how each route entry has a specific interface assigned to it? This is how I know which interface to expect certain traffic on. I don't know if FreeBSD does the same by default, but I know Linux does (case in point) and Windows does as well, to the best of its ability. BTW, only ens160 is statically assigned; the other three are DHCP in my example, which is why it's the default route for the entire system. I have not defined that in my netplan file, but I certainly could.
Any traffic destined for the 10.27.204.0/24 subnet will go out either ens224 or ens256; it will not go out ens160 or ens192 UNLESS both of the other interfaces are down. Now, since this is iSCSI, my observation has been that the iSCSI initiator/portal is smart enough to know that, since the request from the target was received on ens224, it should send the response traffic out that interface. In this case, and I have found nothing to contradict this, the iSCSI server is aware of this and able to attach itself directly to the IP stack and send out the desired interface. Again, this is my observation using Wireshark, and having found no documentation stating otherwise, I have to accept it as the expected behavior. I am using these interfaces as examples, so take that with a grain of salt.
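A minimal way to sanity-check that from the shell (hypothetical peer address, same interfaces as above): an unbound lookup just follows the routing table, while constraining the lookup to a device, which is effectively what an interface-bound iSCSI session does, forces that NIC:

```
ip route get 10.27.204.50                # unbound: the kernel picks ens224 or ens256 per the table
ip route get 10.27.204.50 oif ens256     # lookup constrained to ens256, as a socket bound to it would be
```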
You mention that "Some specialized equipment does this because they're based on custom high performance IP stacks that have very limited functionality", which is technically true, but those devices still use a BSD or Linux kernel behind the scenes. I'm specifically referring to the COTS equipment that enterprises typically use, which is where my experience is. Some of the vendors that use bare-bones IP implementations probably never hit my radar in my career, so I have no knowledge of what they can and cannot do.
Now, as for this statement, "Some of your other ideas about ESXi also appear to be errant but commonly held misconceptions": I challenge you to tell me which "ideas" I have that are errant or misconceptions. I have been an ESXi and vSphere SME for a number of years, both internally in my company and externally, so please, educate me and the rest of the world on these errant misconceptions. Like
@HoneyBadger stated, there are some interesting things you can do in ESXi if you want multiple default gateways on a system, but I'm not even touching that here. I welcome the debate, because what I have detailed above is exactly how my production environment runs, with absolutely no performance issues, no data degradation, nor anything else that says this is a problem. I also run my home network like this to balance across two physical switches, without using vPC, and have no issues, because ESXi has multipath-aware iSCSI and because I don't have a single flat network. I am absolutely interested to hear this, because there may be something I can learn from it, but honestly it is morbid curiosity on my part.
Overall, as a single broad statement, some of your article and what you've said is spot on, but when you start getting into specific use cases, the statement falls apart. I totally agree that trying to do this on a single, flat network is not the smartest idea in the world and things will be wonky, to say the least, but like anything in the IT world, there's no one-size-fits-all scenario, and blanket statements often lead to limited decision making. Looking back at my original post, I did not detail the separate-interfaces-on-a-segregated-subnet piece, which honestly probably wouldn't have changed your response, given the "I do not wish to entertain a debate as to whether or not this is right or wrong." statement you make in the article. You are set in the knowledge you have and will not entertain even the possibility of being slightly incorrect or of learning something new. Honestly, with that statement, why did you even bother to respond to my post, unless you are interested in just trolling folks rather than entertaining open discussion that might alter your long-held beliefs?
As for my qualifications and background, since you have made some assumptions: as previously stated, I am a vSphere and ESXi SME, I'm also a network "engineer" (I hate the legal distinctions around that word), and I also manage OS-level items in Linux and Windows. I am not actually a "storage person" by trade; to me it's all a network protocol to deal with. I don't deal with "it's SAS, it's SATA, it's NVMe..." That's all interface-level stuff I couldn't care less about. What I care about is the systems talking to each other; let the storage folks geek out on the interface stuff and give me the high-level view.
I see this as an Advanced Feature that should have all the appropriate warnings and the associated "I understand the risks" checkbox before proceeding, but to not even offer it as an option for edge cases is frustrating, to say the least, and an "I'm right, you're wrong, piss off" stance is a really good way to make folks defensive.
On the last point in my original post, which hasn't been addressed: why can't I have 3 DHCP interfaces in different subnets? There's no technical reason behind this either, and it could also be an Advanced Setting that we could enable if we want. I guess the main thing is that I consider the management interface to be separate and isolated onto a management subnet that no services run over; maybe I'm the edge case here, who knows.
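For what it's worth, a plain Linux host has no problem with that arrangement. A rough sketch (interface names assumed, using dhclient directly rather than whatever the appliance middleware does):

```
sudo dhclient -v ens192 ens224 ens256   # lease each interface on its own subnet
ip -4 addr show | grep 'inet '          # three addresses on three interfaces, no conflict
ip route show                           # each lease installs its own connected route
```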