ZFS disk encryption with safe init


damian

Dabbler
Joined
Oct 26, 2013
Messages
14
So last night I finally powered up my new system and decided to go with encryption, since my CPU supports AES-NI. However, it's been almost 12 hours now and the disks are still being written to, with what I presume is pseudo-random crap. I've got six 2TB WD RE4s; how long does it usually take to finish this?

Doing some preliminary math (probably full of mistakes, lol, I haven't slept in like 3 days): at 7.1TB usable space in RAID-Z2, with the disks averaging 10MB/s write per disk (60MB/s total for 6 disks), it will take 300+ hours to fully complete. That's 12+ days! Insane!

[Attached screenshot: init.jpg]


Am I crazy, or are those indeed the correct figures?
 

HolyK

Ninja Turtle
Moderator
Joined
May 26, 2011
Messages
654
Well... I have two 2TB REDs in a mirror, and the "safe initialize" took approximately 17 hours, with both disks writing at ~30MB/s.

If I do the math for 2TB at 30MB/s, it works out to 2,000,000MB / 30MB/s / 3600s/h ≈ 18.5 hours, which almost matches my 17 hours. So for you it should be about 2,000,000MB / 10MB/s / 3600s/h ≈ 55 hours.
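The same estimate as a quick sketch in Python (assuming all disks are wiped in parallel, so the per-disk time is the total time; the speeds are just the figures observed above):

def wipe_hours(disk_tb, mb_per_sec):
    # Hours to overwrite one whole disk at the given per-disk write speed.
    return disk_tb * 1_000_000 / mb_per_sec / 3600

print(wipe_hours(2, 30))  # ~18.5 h, close to the ~17 h observed on the mirror
print(wipe_hours(2, 10))  # ~55.5 h at 10MB/s per disk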

Now, here is a question...
If I guess that a single disk can write at full speed, ~110MB/s, then 2TB would take about 5 hours, so six disks done one after another (separately, not in a mirror/RAID) should take about 30 hours (5 hours each).

So my guess is that the safe initialization of a mirrored/RAID-Z pool is slowed down by the parity calculation (or by syncing the mirrored disks). If I'm right, it would be significantly faster to do one disk after another (each as a single-disk ZFS stripe) and, after the last one, just destroy all six pools and re-create the mirror/RAID-Z pool without "safe initialize" checked. The result would be the same, because each disk would already be full of random data. That "safe initialize" is actually "dd if=/dev/random of=/dev/gptid/blablabla.eli bs=1m" running in the background...

Or did I miss something?
 

damian

Dabbler
Joined
Oct 26, 2013
Messages
14
Thanks HolyK, I think I'm going to go with your approach here. The initialization on a RAID-Z2 is taking way too long; the parity calculations are really slowing things down. Doing the disks separately is definitely the way to go - wish I'd thought of it sooner, lol.
 

HolyK

Ninja Turtle
Moderator
Joined
May 26, 2011
Messages
654
Let us know the speed of a single disk and how long it takes.
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
So my guess is that the safe initialization of a mirrored/RAID-Z pool is slowed down by the parity calculation (or by syncing the mirrored disks). If I'm right, it would be significantly faster to do one disk after another (each as a single-disk ZFS stripe) and, after the last one, just destroy all six pools and re-create the mirror/RAID-Z pool without "safe initialize" checked.
Actually, the initialization (dd) happens in parallel (there's a separate thread per disk) and it happens before the creation of the zpool, so there can't be any parity/syncing overhead.
See random_wipe (one thread per drive to wipe) in /usr/local/www/freenasUI/middleware/encryption.py and __create_zfs_volume (it calls random_wipe(device_list) before doing zpool create) in /usr/local/www/freenasUI/middleware/notifier.py.
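For illustration, a minimal sketch of that pattern (this is not the actual FreeNAS code; the device names and the use of subprocess to call dd are assumptions):

import subprocess
import threading

def wipe_disk(device):
    # Overwrite the whole device with random data in 1MB blocks,
    # roughly what "safe initialize" does for each disk.
    subprocess.run(["dd", "if=/dev/random", "of=" + device, "bs=1m"])

def random_wipe(devices):
    # One thread per drive: all disks are wiped in parallel,
    # and no zpool exists yet, so there is no parity or mirror syncing.
    threads = [threading.Thread(target=wipe_disk, args=(d,)) for d in devices]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Hypothetical usage with six disks:
random_wipe(["/dev/da%d" % i for i in range(6)])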
 

HolyK

Ninja Turtle
Moderator
Joined
May 26, 2011
Messages
654
Aha, so I really did miss something... the pool is created after the initialization, so that makes sense. Then the bottleneck is probably the "random" generator (CPU)? Or how else should I understand why the speed drops so drastically when more disks are being initialized?

Anyway, thank you for the reply. I'll check both files, I'm curious :]

BTW: Greetings, brothers ^^
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
Yes, I think so. My first thought was: doh, /dev/random blocks if it runs out of entropy, we'd better use /dev/urandom. However, that's a Linux thing; the FreeBSD /dev/random doesn't block, and /dev/urandom is actually just a symlink to /dev/random.
I just did some quick tests. My E3-1230v2 can generate random numbers (/dev/random) at about 91MB/s. I tried a single thread and two threads, and the total speed was the same -- with two threads, the per-thread speed was half of the single-thread scenario. This indicates that the multithreaded wipe doesn't help much, as you are limited by the /dev/random throughput.
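For anyone who wants to repeat that test, a rough sketch (the block size and total byte count are arbitrary choices, not what I actually used):

import time

def random_throughput_mb_s(total_bytes=1 << 30, block=1 << 20):
    # Read 1GiB from /dev/random in 1MiB chunks and report MB/s.
    start = time.time()
    with open("/dev/random", "rb") as f:
        remaining = total_bytes
        while remaining > 0:
            remaining -= len(f.read(min(block, remaining)))
    return total_bytes / (1 << 20) / (time.time() - start)

print("%.1f MB/s" % random_throughput_mb_s())

Running two copies at once should show each process getting roughly half the rate if the generator really is the bottleneck.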

Re: BTW: Greetings, I've already noticed there are more of us here :).
 