Well, I didn't say you wanted to skip STP... but you do want to be very aware of what the implications will be, especially if you are doing this with a single vSwitch on ESXi.
In our most recent design here, we've got something that somewhat resembles the above. Two Dell Networking N4032Fs form the network core in the server rack, and two Dell Networking 7048Ps provide edge switching in the distribution frame (which can be scaled out as needed).
There are some differences: nothing's stacked, and we have many, many vlans in the environment that appear on all switches. For around two decades we've done fully redundant core networks, but that redundancy used to be handled at layer 3 with OSPF. It's still designed that way, but now we can handle redundancy at layer 2 as well.
The two N4032Fs are connected at 40Gbps. Each N4032F has a 20Gbps LACP LAG to each 7048P. Clients typically can't be redundant, but in theory they could be. Getting from one edge switch to the other requires going through the core. The nonobvious thing is that STP will block the 20Gbps LAGs from the non-root N4032F to both 7048Ps, meaning that traffic from that N4032F to the edge always traverses the 40Gbps link. Unless that link is down, in which case one of the blocked 20Gbps LAGs starts forwarding. Since the edges really don't need more than 10Gbps, this is a very robust design that ought to last a few years. ;-)
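If it helps to see why that happens, here's a toy sketch of the root-path-cost comparison spanning tree does (plain Python, with made-up bridge names and costs; real STP elects the root from bridge priorities and assigns per-port roles, so treat this as an approximation, not what the switches run):

```python
# Toy model of why STP blocks the non-root core's LAGs to the edges.
# Bridge names and costs are invented for illustration.
import heapq

ROOT = "core1"            # assume core1 wins the root bridge election
links = {                 # (bridge_a, bridge_b): STP cost of the link/LAG
    ("core1", "core2"): 1,   # 40G core interconnect, cheapest path
    ("core1", "edge1"): 2,   # 20G LAGs down to the edges
    ("core1", "edge2"): 2,
    ("core2", "edge1"): 2,
    ("core2", "edge2"): 2,
}

def root_path_costs(root, links):
    """Dijkstra from the root bridge: each bridge's cheapest path toward it."""
    graph = {}
    for (a, b), cost in links.items():
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    best = {root: (0, None)}        # bridge -> (cost to root, next hop toward root)
    heap = [(0, root)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > best[node][0]:
            continue
        for nbr, c in graph[node]:
            if cost + c < best.get(nbr, (float("inf"), None))[0]:
                best[nbr] = (cost + c, node)
                heapq.heappush(heap, (cost + c, nbr))
    return best

best = root_path_costs(ROOT, links)
for (a, b), cost in links.items():
    # A link stays forwarding if either end uses it as its best path toward the root.
    on_tree = best[a][1] == b or best[b][1] == a
    print(f"{a} <-> {b}: {'forwarding' if on_tree else 'BLOCKED'}")
```

Run as-is, it leaves the core interconnect and the root core's two LAGs forwarding, and blocks both LAGs from the other core, which is exactly the behaviour described above.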
The ESXi hosts each have a pair of vSwitches, with four 10G uplinks. In our environment we have LOTS of vlans, so some are presented on one vSwitch and some on the other. This is tied up with the layer 3 routing protocols, and it would be very confusing without the historical perspective, but it basically gives us some load distribution between the switches. However, with modern gear we can run link aggregation in failover mode, so the links in green are backup links that only come up if the primary goes down. This means either N4032F can be rebooted without an issue: all vlans and traffic fall back onto the other switch and take the backup paths as needed. It's also expandable as needed.
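The failover behaviour itself is nothing fancy; a minimal sketch of the idea (Python, with made-up vmnic names, not the actual ESXi teaming policy, which has more knobs like failback and notify-switches):

```python
# Rough model of active/standby uplink failover on a vSwitch.
# Uplink names are placeholders; the real policy is configured per portgroup.
failover_order = ["vmnic0", "vmnic2"]    # active first, standby (the "green" link) last
link_up = {"vmnic0": True, "vmnic2": True}

def current_uplink(order, up):
    """Pick the first healthy uplink in the failover order."""
    for nic in order:
        if up.get(nic, False):
            return nic
    return None    # all uplinks down

print(current_uplink(failover_order, link_up))   # vmnic0 while the primary is up

link_up["vmnic0"] = False                        # core switch reboots, primary drops
print(current_uplink(failover_order, link_up))   # vmnic2: the backup link takes over
```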
So that's all lots of networking fun and doesn't even touch on the layer 3 goodness.
Now back to yours.
In your diagram, you've drawn the two Zyxels connected together, but the thing I'd have to wonder is: how much traffic will actually traverse that link? Keeping them separate is probably a better idea; in the unusual case where you actually need traffic to flow from one edge switch to the other, let it traverse the Junipers.
In the architecture I've drawn above, I have the advantage of multiple vlans, so I can encourage traffic not to traverse the 40Gbps link by putting vlans A, B, and C on vSwitch 0 on each ESXi host, and vlans D, E, and F on vSwitch 1. In that model, each core switch ends up handling most of the traffic for its set of vlans.
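As a toy illustration of that steering (vlan, vSwitch, and switch names are all invented here):

```python
# Pinning vlans to one core switch via the vSwitch they ride on.
vswitch_for_vlan = {"vlan_A": 0, "vlan_B": 0, "vlan_C": 0,
                    "vlan_D": 1, "vlan_E": 1, "vlan_F": 1}
core_for_vswitch = {0: "core1", 1: "core2"}   # each vSwitch's active uplinks land on one core

def core_carrying(vlan):
    return core_for_vswitch[vswitch_for_vlan[vlan]]

# Two ESXi hosts talking on the same vlan hit the same core switch at both ends,
# so that frame never needs to cross the 40G core interconnect.
print(core_carrying("vlan_A"))   # core1 on every host
print(core_carrying("vlan_D"))   # core2 on every host
```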
You don't have that advantage, and you might find you get into an oversubscription situation if you allow traffic to flow between Zyxels in that manner, because your upstream connectivity on that one Juniper is 10G. I'm unclear on an optimal way to deal with this. There seem to be a bunch of suboptimal possibilities; figuring out the implications of STP blocking really requires a better understanding of what your traffic patterns are likely to be, and that probably goes beyond the sort of help you'll get on a forum that isn't even focused on networking topologies.
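The back-of-the-envelope check is simple enough, though; here it is with placeholder numbers, since only you know your real traffic:

```python
# Back-of-the-envelope oversubscription check for the inter-Zyxel path.
# All traffic figures are guesses; plug in your own estimates.
juniper_uplink_gbps = 10.0           # upstream capacity on the one Juniper

north_south_gbps    = 6.0            # traffic that has to use that uplink anyway (guess)
zyxel_to_zyxel_gbps = 5.0            # east-west traffic you'd let ride the same path (guess)

offered = north_south_gbps + zyxel_to_zyxel_gbps
print(f"offered load {offered} Gbps vs {juniper_uplink_gbps} Gbps uplink")
if offered > juniper_uplink_gbps:
    print("oversubscribed: the east-west traffic starves the upstream path")
```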
I got tired of all that crap years ago and so as you can see above, I just do a kind of maximal configuration and then don't worry about it.