Right, but on most filesystems bad RAM won't necessarily trash every single file on the entire server array. Typically the corruption is limited to files created or modified after a certain date. So even if the system itself is trashed, you can mount the array with a fresh copy of Windows and 80%+ of the stored files will be fine. Most people will live with a few corrupt files. Just look at how much data corruption occurs daily around the world because of UREs or silent corruption on hard drives that you have no clue about. That's just a reality today, unfortunately.
With ZFS, it can be, and pretty much is, an expected side effect of bad RAM to eat every single file on your pool, then belch it out all over your screen with a "zpool unmountable" equivalent error. And then you think you're awesome because you have backups made every night and you'll just recover from backups. But then you find out that those religious scrubs you've been doing actually trashed virtually ALL of your files. You'll be less than thrilled with the outcome. We (the FreeNAS community) aren't privy to the information Sun knew when they first created ZFS, but we are definitely seeing firsthand what Sun probably knew all along. The consequences of bad RAM with ZFS cannot be overstated. It can cause data corruption that replicates through all of your files and all of your backups, and you aren't likely to know about the issue until it's too late to fix it.
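To make the scrub failure mode concrete, here's a toy sketch in plain Python (not ZFS internals, and a deliberate simplification: real scrubs verify against checksums stored in parent block pointers and repair from redundant copies). It assumes a faulty RAM cell that flips the same bit on every read; once the scrub "repairs" the block and a checksum is computed over the corrupted in-memory copy, the damage looks valid to every future scrub:

```python
import hashlib

def checksum(block: bytes) -> bytes:
    """Stand-in for a ZFS-style block checksum."""
    return hashlib.sha256(block).digest()

def flip_bit(block: bytes, bit: int) -> bytes:
    """Simulate a faulty RAM cell flipping one bit as data passes through memory."""
    b = bytearray(block)
    b[bit // 8] ^= 1 << (bit % 8)
    return bytes(b)

# On-disk state: a perfectly good block and its correct checksum.
disk_block = b"important file contents"
disk_sum = checksum(disk_block)

# Scrub with GOOD RAM: the block read back matches its checksum; nothing to do.
assert checksum(disk_block) == disk_sum

# Scrub with BAD RAM: the block is corrupted as it passes through memory,
# so the scrub concludes the ON-DISK copy is bad...
in_ram = flip_bit(disk_block, 5)
assert checksum(in_ram) != disk_sum

# ...and "repairs" it. In this simplified model the repaired data also
# passes through the same faulty RAM, so corruption is written back to
# disk with a checksum that now matches the corrupted data.
disk_block = in_ram
disk_sum = checksum(disk_block)
assert checksum(disk_block) == disk_sum  # corruption now passes verification
```

Run this over thousands of blocks per scrub, nightly, and you get the scenario above: each pass can silently convert good data into "verified" bad data, which then flows into the backups too.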
Before Apple, everything was ECC. Apple wanted to save some money back in the late 70s and early 80s, and also realized that while it would suck to have some stuff get corrupted because of RAM, it wasn't necessarily an "end of the world" scenario. For ZFS, though, it pretty much is an "end of the world" scenario. Apple realized they could save a boatload on RAM costs if that extra 1/9th of RAM wasn't necessary. So non-ECC was born, and it has stuck around because it's cheap.
Wanna know something cool about RAM? It was predicted that error rates would roughly double (due to background radiation) every time you double the density. Makes sense, right? You'd expect 4GB of RAM to have twice as many errors as 2GB of RAM. Except it's not double the error rate. In fact, it's not even close. It's far less than expected, and nobody really understands why. It was also expected that as transistor sizes shrank, error rates would increase (again due to background radiation): as a transistor gets smaller, a high-energy particle affects a larger percentage of the total electrons in it, leading to more bit flips. Except, again, the error rates did increase, but at a far slower rate than expected. Nobody really understands this effect, just that it's been documented.