It all comes down to your workload, at the end of the day. Writes suffer more than reads on QLC (and TLC!) drives, and SSD manufacturers are well aware of this. Sustained writes can get so bad that they literally perform worse than a spinning HDD.
To mitigate this, the drive firmware keeps a buffer of NAND pre-programmed as SLC. When that cache isn't full, writes run at line rate on SATA devices and pretty stinkin' fast on NVMe drives too. If your workload rarely commits enough writes to fill the buffer, you might not even notice you're on QLC. If you're using mirror vdevs, the size of that buffer scales linearly with the number of mirror vdevs, and write endurance scales the same way. The drive firmware's logic and ZFS's logic meld pretty well. The same trend holds for drives with DRAM caches versus those without: DRAM is faster than SLC and acts as a first tier of caching, which then gets flushed to either SLC or straight to TLC/QLC. If your write queue depths are shallow enough for the drives to commit the writes to their final destination, you're fine.
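To make the "cache scales with the number of mirror vdevs" point concrete, here's a minimal back-of-envelope sketch in Python. It is not a benchmark, and the per-drive SLC cache size and vdev count in the example are made-up numbers; check reviews or the datasheet for your actual drives.

```python
# Back-of-envelope sketch: does a write burst fit in the pooled SLC cache of a
# ZFS pool built from mirror vdevs? All figures below are hypothetical examples.

def pooled_slc_cache_gb(per_drive_cache_gb: float, mirror_vdevs: int) -> float:
    """ZFS stripes writes across vdevs, so usable SLC cache grows with the
    number of mirror vdevs (the extra drives inside each mirror only add
    redundancy and endurance, not extra cache capacity)."""
    return per_drive_cache_gb * mirror_vdevs

def burst_fits(burst_gb: float, per_drive_cache_gb: float, mirror_vdevs: int) -> bool:
    """True if the burst lands entirely in SLC, i.e. you never hit the slow
    native TLC/QLC write speed."""
    return burst_gb <= pooled_slc_cache_gb(per_drive_cache_gb, mirror_vdevs)

if __name__ == "__main__":
    # Example: 4 mirror vdevs of QLC drives, ~30 GB dynamic SLC cache each.
    vdevs, cache = 4, 30.0
    for burst in (20, 100, 200):
        verdict = "stays in SLC" if burst_fits(burst, cache, vdevs) else "spills to QLC"
        print(f"{burst:>4} GB burst -> {verdict}")
```

If your typical bursts come out on the "stays in SLC" side and the pool gets enough idle time to drain the cache, the QLC penalty mostly never shows up.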
Generally I wouldn't recommend QLC NAND or drives without DRAM, but if the system is big enough the downsides aren't so bad. That said, I've personally found that a single mirror vdev of Optane 905Ps was faster in every possible regard than a 24-drive, 12-way-mirror pool of crappy consumer 120GB DRAM-less TLC drives. Sometimes simplicity is your best bet xD.