jyavenard
Patron · Joined Oct 16, 2013 · Messages: 361
I think what's getting cyberjock's underwear all twisty is the fact that the use of random data is completely unnecessary here. As he points out, encrypting a block of zeros and a block of random data takes the *same* amount of computational time. The same. Whether it's AES-CBC or AES-XTS at work (and 128- or 256-bit AES, for that matter), the overhead of the algorithm is always in cycles/byte, not in cycles/type-of-byte. :) All AES-NI does is reduce the number of cycles per byte.
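For anyone who wants to sanity-check that claim in isolation, here's a minimal sketch in Python that times AES-CBC over a zero buffer and a random buffer. It assumes the third-party `cryptography` package, and the key size, mode, and buffer size are arbitrary choices, not anything FreeNAS-specific; both buffers should encrypt at essentially the same rate:

```python
# Minimal sketch: AES throughput should not depend on plaintext content.
# Assumes the third-party "cryptography" package (pip install cryptography).
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_seconds(data: bytes, key: bytes, iv: bytes) -> float:
    """Return the seconds taken to AES-128-CBC encrypt `data` in one pass."""
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    t0 = time.perf_counter()
    enc.update(data)
    enc.finalize()
    return time.perf_counter() - t0

key, iv = os.urandom(16), os.urandom(16)
size = 256 * 1024 * 1024            # 256 MiB per buffer (arbitrary)
buffers = {"zeros": bytes(size), "random": os.urandom(size)}

for label, buf in buffers.items():
    secs = encrypt_seconds(buf, key, iv)
    print(f"{label:>6}: {size / secs / 1e6:.1f} MB/s")
```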
I understand the math behind encryption...
Normally I wouldn't have taken this any further, since logic dictates the outcome. But curiosity got the better of me, and I ran a little experiment:
In 9.1: writing 100 GB of zeros to an unencrypted pool takes the same amount of time as writing random data. Good so far.
On an encrypted pool, however, I consistently found writing zeros to be slower on average than writing random data, by over 10%.
The zero-write times also varied wildly: across 5 runs I saw a variance of 6200, while the random-data runs were all consistent with one another.
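For anyone who wants to try this at home, here's a rough sketch of that kind of test. The path, chunk size, total size, and run count below are illustrative placeholders, not the exact original setup (which wrote 100 GB per run); results will of course also depend on pool settings:

```python
# Rough sketch of the zeros-vs-random pool write test.
# POOL_PATH, sizes, and run count are illustrative placeholders.
import os
import statistics
import time

POOL_PATH = "/mnt/tank/benchfile"  # hypothetical file on the pool under test
CHUNK = 1 << 20                    # 1 MiB writes
TOTAL = 1 << 30                    # 1 GiB here; the original test wrote 100 GB

def timed_write(chunk: bytes) -> float:
    """Write TOTAL bytes of `chunk` repeated, fsync, and return elapsed seconds."""
    t0 = time.perf_counter()
    with open(POOL_PATH, "wb") as f:
        for _ in range(TOTAL // CHUNK):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    return time.perf_counter() - t0

zero_chunk = bytes(CHUNK)
rand_chunk = os.urandom(CHUNK)  # pre-generated so RNG cost isn't part of the timing

results = {"zeros": [], "random": []}
for _ in range(5):
    for label, chunk in (("zeros", zero_chunk), ("random", rand_chunk)):
        results[label].append(timed_write(chunk))
        os.remove(POOL_PATH)

for label, times in results.items():
    print(f"{label:>6}: mean {statistics.mean(times):.2f}s, "
          f"variance {statistics.variance(times):.2f}")
```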
I'm at a loss as to why; it makes no sense. Maybe it's hitting a corner case in the encryption code.
In 9.2: it makes no difference, as it should.
That seems to contradict what you wrote earlier, @jkh: in 9.1, either encrypting zeros isn't as quick as encrypting other data, or, if it's not at the encryption level, something else is at play.