Encrypting a large ZFS Volume... Initialize with Random Numbers

Status
Not open for further replies.

darkconz

Dabbler
Joined
Apr 23, 2013
Messages
32
Hi,

I just got my hardware all installed and I got a chance to run FreeNAS 8.3.1 on my newly built server.

Here are the specs for my server:

E3-1230v2
32GB RAM
11 x 3TB WD Reds
RAIDZ3

Now, when I create an encrypted volume without checking the "Initialize with Random Data" box, the volume (21.3TB) is created perfectly fine. However, when I check that box, the system looks like it's working for about 15 minutes, then the popup window closes automatically and no volume appears. After that, I cannot wipe hard drives or create another volume UNTIL I reboot the server.

Has anybody experienced this?

Thanks
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
The initialization can take hours. On my system of 6 drives it took about 700 minutes.
 

darkconz

Dabbler
Joined
Apr 23, 2013
Messages
32
The initialization can take hours. On my system of 6 drives it took about 700 minutes.

I don't see the status though. As I mentioned, after 15 minutes or so I get sent back to the view volume page with no volume on it... so I don't know how long it's going to take or how far it has gone. Is there any way to check the status, or to do all of this from the command line?
 

darkconz

Dabbler
Joined
Apr 23, 2013
Messages
32
Update,

So I let my system run while SSH'ed into the box with a 'top' window open. The web UI does go back to the view volume page with no volume, but 'top' shows 11 dd commands running. That tells me the system is actually doing something and the web UI just isn't reflecting it. I will let it run overnight, or a day or two, and report back once it's done.

But if anybody has experienced this and knows where I can find the progress of the random initialization, please let me know... I am flying half blind because I don't know the % completed; I only know the system is doing something... haha.
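
For reference, this is all I'm using to watch it: nothing FreeNAS-specific, just stock FreeBSD tools over SSH (the grep pattern just filters the process list, adjust as needed):

Code:
# ps auxww | grep '[d]d'
# top -m io

The first lists the running dd processes; the second puts top into I/O mode so you can see per-process disk throughput instead of CPU.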
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
I doubt you'll get a "status" update. The way dd works, it doesn't really know how far it has to go; I believe the commands just run until they error out (end of disk).
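
That said, FreeBSD's dd does print a progress line (records and bytes transferred so far) when it receives SIGINFO, so you might be able to coax some numbers out of it from a shell:

Code:
# kill -INFO $(pgrep -x dd)

Note that each dd prints the line to its own stderr, so for dd's spawned by the GUI the output may end up somewhere you can't see; it works best for dd's you started yourself.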

I believe that the last time I did an encrypted zpool I started it, went to bed, and when I woke up the next morning the zpool was online and waiting to be used.
 

darkconz

Dabbler
Joined
Apr 23, 2013
Messages
32
I have an update on this. After 48 hours I went back and checked on the machine... the dd's were still going and I got worried. I went to the box physically, looked at the attached monitor, and it was being flooded with errors.

Two errors were repeating every second, from syslogd and dd, both saying "/var filesystem full".

At that point I decided to reboot the machine and just create an encrypted volume WITHOUT initializing with random data. Could the reason be that I am running FreeNAS off a USB stick, and the stick does not have enough room for dd to write 11 x 3TB drives' worth of data?

I don't know how to approach this problem now...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yeah, I think the webserver times out waiting for completion, I noticed that but didn't research it further. This is definitely confusing if you're not familiar with the system.
 

darkconz

Dabbler
Joined
Apr 23, 2013
Messages
32
Yeah, I think the webserver times out waiting for completion, I noticed that but didn't research it further. This is definitely confusing if you're not familiar with the system.

Did you have any success initializing with random data while running off a USB drive? My logic leads me to think the problem lies in the USB drive not having enough room for dd to work...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I ran it with no problems on a 4GB hard drive for the system, which is basically indistinguishable for the purpose. However, I was running it on relatively small data drives. I would not expect /var being full to be an earth-shattering issue, except that it might interfere with logging and reporting.
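
If you want to see what's actually filling /var, the stock tools will tell you; for example:

Code:
# df -h /var
# du -sk /var/* | sort -n

The du/sort pair lists the subdirectories by size in kilobytes, biggest one last.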
 

darkconz

Dabbler
Joined
Apr 23, 2013
Messages
32
I ran it with no problems on a 4GB hard drive for the system, which is basically indistinguishable for the purpose. However, I was running it on relatively small data drives. I would not expect /var being full to be an earth-shattering issue, except that it might interfere with logging and reporting.

The /var full error was causing dd to stall. The initialization started fine (at least up to the 12 hour mark, when I physically checked the box), but when I checked after 2 days, that error kept flashing on the screen... I thought 2 days of initialization was too long, so I decided to reboot the machine and create the volume without initialization... I can't do more tests because I don't have room for another hard drive to run FreeNAS off of; I am running FreeNAS off the 4GB USB key.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
How exactly was it causing dd to stall? Did you actually observe that activity to the member drives had ceased, or are you assuming that errors scrolling on the console meant that no progress was being made?
 

darkconz

Dabbler
Joined
Apr 23, 2013
Messages
32
I assumed the errors were causing dd to stall, because the system had been initializing for 48 hours by the time I checked the machine and saw that error on the screen... Can it really take that long to initialize? This is also when I noticed the logs weren't appearing correctly (no graphs in the Reporting tab).
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yes, /var being full will disrupt the ability of the system to maintain logs and charts.

A 4TB drive writing at 100MB/sec would take about half a day to write, I think. You would be best off checking the write rate and seeing whether it is within the realm of plausibility. If it is reading from /dev/random, speeds might be very much lower than 100MB/sec.
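
To check the rate, the stock FreeBSD tools from an SSH session are enough:

Code:
# gstat
# iostat -x -w 5

gstat shows live per-provider throughput; iostat -x prints extended per-device statistics every 5 seconds.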
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
I will tell you that /dev/random is relatively slow compared to /dev/zero.

Code:
# dd if=/dev/random of=/dev/null bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 16.398468 secs (63943534 bytes/sec)
# dd if=/dev/zero of=/dev/null bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 0.088124 secs (11898864807 bytes/sec)
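
At that rate, a quick back-of-the-envelope for one of the 3TB drives in this thread:

Code:
# echo "3 * 10^12 / 63943534" | bc
46916
# echo "46916 / 3600" | bc
13

So roughly 13 hours per drive, and if all eleven dd's are drawing from that same /dev/random source at once, the whole initialization could plausibly take several days.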
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I haven't actually looked at the encryption technique used by geli but it might be more effective to create the pool then fill it with a zero-filled file, which would presumably be encrypted to what would still appear to be random data. Hm.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Why not just write zeros to the drives?

Code:
# dd if=/dev/zero of=/dev/yourdrive bs=4M

Poof, instant wipe.

I know, there's all that rumor that "someone with the right technology might have a reasonable chance of getting at your data", but there's never been any evidence of this happening anywhere, nor has anyone demonstrated a breakthrough that would even remotely make the scenario possible.

The only reason I'd use /dev/random over /dev/zero is so that whoever gets the drive will spend lots of time trying to 'decrypt' random data instead of realizing it's all zeros and that there's nothing really there. I'd love to hear a story about some government somewhere spending big, big dollars trying to decrypt a drive that held no data but was simply a random dump.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Because that leaks information that even a beginner can identify.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Because that leaks information that even a beginner can identify.

Elaborate please? I'm not seeing why /dev/random would be safer than /dev/zero.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Okay, this is basic crypto.

If you do:

Code:
# dd if=/dev/zero of=/dev/da0
<now create encrypted filesystem on da0>


then even a newbie can tell that a certain amount of space is or has been used on the encrypted filesystem: you count the number of blocks that still contain all zeroes. This is an information leak, a major crypto no-no.
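
For instance, an untouched region gives itself away at a glance; hexdump collapses the repeated all-zero lines into a single "*" (output illustrative):

Code:
# dd if=/dev/da0 bs=1m count=1 2>/dev/null | hexdump -C | head -3
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00100000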

However, if you do

Code:
<create encrypted filesystem>
# dd if=/dev/zero of=/mnt/my-encrypted-fs/file


and let it fill the filesystem, the data actually written to da0 is an encrypted stream of zeroes; each block is different from the next, and you need the decryption key to figure out that they're all actually zero.

Now, from a security point of view, that's still not really all that great, because someone who is attacking the system to recover data, and is able to obtain the encryption key, may be given clues as to which blocks are irrelevant to analyze.

So the ideal method is to fill the disks from /dev/random. However, the point I'm making is that if filling your disks with noise takes so long that it doesn't complete successfully or in a reasonable amount of time... well, zero-filling a file on the encrypted filesystem at least eliminates the first, most obvious leak.
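
In practice, with a pool mounted at, say, /mnt/tank (the name here is just an example), that would look something like:

Code:
# dd if=/dev/zero of=/mnt/tank/zerofill bs=1m
dd: /mnt/tank/zerofill: No space left on device
# rm /mnt/tank/zerofill

The "no space" error is expected: the zeroes pass through geli and land on the platters as ciphertext, and deleting the file gives the space back. Two caveats, though: this assumes compression is off (compressed zeroes wouldn't actually fill the disk), and running a ZFS pool completely full is awkward, so treat it as a trade-off rather than a drop-in replacement for a random wipe.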
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
I think there are two different goals here.

If you're wiping a drive purely so the data is not recoverable, then a /dev/zero wipe should be sufficient. The drive is empty, and nothing is retrievable. (I think the jury's still out on whether, with advanced analysis, you could still get at the original data once it's been overwritten with zeroes. I don't think so.)

Then there's wiping a drive with the intent of using it to store encrypted data. That has to be wiped with random data so that encrypted data written later can't be told apart from the unused random fill: 'used' encrypted blocks should be indistinguishable from 'old' random ones.

I don't think I'd use the GUI for wiping drives that are going to be used for encryption. I'd start the dd's on the console and monitor them from there (rough sketch below). Depending on the CPU, there may be little point in starting all the drives at once: I assume /dev/random is single threaded, so with a quad core CPU there may be no point in writing random data to more than 4 drives at a time.

After the dd random wipe, which may take quite a while, I'd use the GUI to make the pool, and NOT check the option to initialize with random data.
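
Something along these lines, assuming the eleven data disks are da0 through da10 (just an example; verify your device names first, because this overwrites them):

Code:
# run on the console or inside tmux -- this DESTROYS everything on da0..da10
for d in da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10; do
    dd if=/dev/random of=/dev/$d bs=1m &
done
# later, ask each dd for a progress line (each prints to its own stderr)
kill -INFO $(pgrep -x dd)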
 