Three thoughts:
1. After replacing all the drives in a RAIDZ1 one by one, resilvering after each swap, you can in fact expand the pool to use the additional capacity (provided the new drives are larger, of course). The replacement drives need not occupy the same SATA or SAS ports as the drives they replace. As long as the boot pool can still initiate the boot and the AHCI / SAS controllers have driver support, ZFS just finds the roaming drives and stitches everything back together. (Just remember: SATA controllers can't talk to SAS drives, but all SAS controllers can talk to SATA drives.)
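A minimal sketch of that whole-pool upgrade, assuming a pool named "tank" and hypothetical device names (substitute your own from /dev/disk/by-id):

```shell
# Swap one drive at a time; wait for each resilver to finish before the next.
zpool replace tank ata-OLD_DRIVE_1 ata-NEW_DRIVE_1
zpool status tank    # repeat the replace once this shows "resilvered"

# After every member has been replaced with a larger drive,
# let the pool grow into the new space:
zpool set autoexpand=on tank
zpool online -e tank ata-NEW_DRIVE_1   # -e expands the device to its full size
```

With autoexpand=on set before the last resilver completes, the expansion can happen automatically; the explicit `zpool online -e` covers the case where it was off.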
2. The reason that 2007 article's predicted 2009 apocalypse hasn't panned out is that it was written against a specific statistical benchmark, and that benchmark moved. The uncorrectable read error rate of 1 in 10^14 bits has, for many manufacturers, become 1 in 10^16 due to part commonality with enterprise lines. The SATA spec requires only 1 in 10^14, but the SAS spec required 1 in 10^16 even back in 2013. It's subtle, but significant. And remember SATA is a dead end; nobody is working on it anymore with a goal of advancing the state of the art. SAS is still being developed, so 1 in 10^16 is the benchmark going forward. (Actually, I need to check what SAS-4 requires...)
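The two orders of magnitude matter a lot at rebuild scale. A rough sketch (my own back-of-envelope arithmetic, not from the article) of the chance of hitting at least one URE while reading a hypothetical 12 TB drive end to end during a rebuild:

```python
import math

def ure_probability(bytes_read: float, rate_per_bit: float) -> float:
    """P(at least one unrecoverable read error) over bytes_read,
    modeling errors as independent per-bit events (Poisson approximation)."""
    bits = bytes_read * 8
    return 1 - math.exp(-rate_per_bit * bits)

drive = 12e12  # 12 TB read in full during a resilver
print(f"1 in 10^14 (SATA spec): {ure_probability(drive, 1e-14):.1%}")
print(f"1 in 10^16 (SAS spec):  {ure_probability(drive, 1e-16):.2%}")
```

Under this naive model the SATA-spec rate gives roughly a 62% chance of at least one URE per full-drive read, while the 1-in-10^16 rate drops it below 1% — which is the whole difference between "apocalypse" and "non-event." Real drives aren't independent-per-bit, so treat this as an upper-bound illustration.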
3. The resilver workload killing another drive is a good reason to keep a mix of drive models and ages in your pool. If they're all the same model, with the same number of hours accrued in the same environment, then this is a very real risk. But a ZFS scrub should be every bit as stressful as a resilver, so a pool that routinely survives its scheduled scrubs is already demonstrating it can take that workload.