Mid-term, couldn't the operating system have a process whitelist that exempts some code from the fix? The question is whether you trust that code enough to do so.
No. I'm going to try to describe this in a general computer-science way, so some of the specific terms and issues are a little different; the point is to see if I can get the concept across. So anyone who wants to "correct" me will get a Grinchly response.
So modern computing systems have a virtual memory system, which includes the ability to create a virtual address space for each process. This is how you can run several processes, each with a 3 GB address space, on a 32-bit system with 4 GB of RAM... the virtual memory system can swap in pages from disk, or in some cases even share pages.
We also usually have a kernel, a portion of code that acts as the operating system and has privileges to do ANYTHING to the system. Because this is dangerous, we create a second tier, for user-provided code, which doesn't have those sweeping privileges. Since users are bad and evil and out to crash your system, we require user programs to gateway risky things, such as talking to I/O devices, through the kernel.
The interface between user and kernel is called a "syscall." A syscall hands control from the user process that made it to the kernel function that implements the requested functionality, such as "write data to file" or "read stuff from network."
Now, remember where I talked about virtual memory systems? Switching contexts between processes takes a certain amount of time, because the virtual memory system has to be reconfigured to run the other process. You can watch the rate at which a normal UNIX box does this by running "vmstat 1" and looking at the "cs" column. Usually in the hundreds-to-thousands-per-second.
Because context switches take a bunch of time, we have traditionally used tricks to map the kernel into the same address space as each running process, which means that a CPU can switch from user mode to kernel mode and back without suffering a full context switch and without needing to remap the virtual memory spaces. This is generally a good thing; it makes your system run faster. And it is supposed to be fine, because the CPU and the virtual memory system are supposed to protect memory that is marked as privileged, such as the kernel's, against user accesses.
Intel fscked this up.
The specifics are unclear, but modern CPUs speculatively begin processing instructions several cycles ahead ("pipelining"), and apparently there isn't sufficient logic in Intel's pipeline design to protect privileged memory properly. Apparently a clever attacker can cause a byte, maybe a few bytes, maybe even a page, of privileged memory to be read through clever sequencing of instructions with only user privileges. So this means that a user process can see into the kernel's memory space. Most of what's there is boring, but it can also hold important stuff: encryption keys, information about other processes, etc.
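For the curious, the general shape of the attack as it has been publicly described looks roughly like this (pseudocode only, deliberately not a working exploit, and the details vary by CPU): speculatively read a kernel byte, use it as an index into a user-owned array, then recover the byte by timing which cache line got loaded.

```
probe = user_array of 256 cache lines     // one line per possible byte value
flush every line of probe from the cache

try:
    b = *kernel_address          // faults... but may execute speculatively first
    touch probe[b * LINE_SIZE]   // side effect: that one line is now cached
catch fault:
    ignore                       // the fault is architectural; the cache isn't

for v in 0..255:
    if time_to_load(probe[v * LINE_SIZE]) is fast:
        leaked_byte = v          // the cache "remembers" what the fault undid
```

The key point is that the faulting read is rolled back architecturally, but its footprint in the cache survives, and the cache can be interrogated with nothing more than user-level timing.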
So the problem is that you have a binary choice. You can trust user code not to be malicious, in which case things are fine as they are. This might be an okay decision in some cases. But for most of computing, it is a dangerous one to make, and the decision many or most will make is that the kernel needs to be protected. So we have to look at mitigation.
The problem is easily mitigated, FSVO "easily", by putting the kernel into its own virtual address space. This means that kernel memory simply isn't mapped while user code runs, so a user process cannot abuse the pipeline to peek into it.
The PROBLEM is that putting the kernel into its own virtual address space means that each time you make a syscall, you have to do a context switch and map out the user process, and map in the kernel, and then resume execution in the kernel, then reverse those steps to return back to the user process when done. The CPU cost to do this has been measured at between 5% and 70%, depending on the workload. I'm pretty sure that even 70% is not an upper bound.
This SUUUUUUUUCKKKKSSSS.
Right now, this is causing a massive panic in the world of cloud, where it is looking fairly likely that cloud resources are suddenly going to get noticeably slower, which means that the cloud is going to have to expand. Because there hasn't been a clear disclosure, those of us who do virtualization infrastructure are expecting that this is going to significantly impact hypervisors, and there's a good case to be made that guest operating systems can examine or even escape into the hypervisor management plane. The NSA has been rumored to have tools capable of VM escape for quite some time now, and perhaps this is the vulnerability they were using. If so, shame on them. This is an IT train wreck.
Tell me if you do or don't understand what I've written. This is the kind of train wreck many of us have feared as CPUs have gone from a few thousand transistors (really!) to billions.