Single drive RAID-Z


Krumelej

Cadet
Joined
Jun 15, 2012
Messages
7
Well sort of.
I don't really care if I happen to lose a drive completely, as the data is usually backed up.
However, I do want to avoid silent corruption and avoid losing data if just a few pieces of a drive are damaged.
So is it possible to have parity data on the same drive but in a different physical location so that any corruption could be reconstructed by resilvering?
Or does this already happen just by using ZFS, via its built-in checksums?

I guess you could make a few virtual volumes in ESXi and put them on the same disk, but I'm not sure how that would work compared to ZFS actually seeing the drive.
 

Hexland

Contributor
Joined
Jan 17, 2012
Messages
110
You don't necessarily need to add a second drive or vdev to a pool to achieve this, I believe. You can set the 'copies' property of a volume or dataset to 2, so that the data is automatically duplicated within the filesystem. If checksum verification fails when reading your file, the second copy is used to correct the error.
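
For example, something like this from the shell should do it on an existing dataset (just a sketch -- 'tank/mydata' is a placeholder name, and the setting only applies to blocks written after you change it):

zfs set copies=2 tank/mydata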

I believe you can also build a pool from 2 partitions on the same physical device, but I've never tried it.

I'm sure someone more knowledgeable will be along in a minute to tell me I'm smoking crack. I'm still new to all this ZFS nonsense :)
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
LOL Hexland, you're smoking crack! ;)

Just kidding, you're absolutely correct, but it would need to be done from the command line, and of course more copies means trading usable space for redundancy.

ZFS and Unix/Linux will let you get away with doing all kinds of things, even if they are not correct. They give users the power to think for themselves, even though some of us aren't very good at that. ;)
 

Krumelej

Cadet
Joined
Jun 15, 2012
Messages
7
Thanks for the answers.

I tried the multi virtual volume idea by creating five 50 GB volumes. It worked, but I only got 2 MB/s, which is a bit too slow to be practical. Not surprising, though, as it has to write to five places on the same disk for every piece of data.

Using the command line to configure it is not my cup of tea, so I'd rather stay out of that area if I can.

Someone has to come up with a simple, partially redundant way to ensure data integrity on a single disk without too much loss of space. Then you'd just choose how big an error can get before it becomes uncorrectable.
 

Hexland

Contributor
Joined
Jan 17, 2012
Messages
110
Command line is really easy...

Create your single-drive pool in the Volume Manager through the GUI as you normally would...

Open the shell (either through SSH, the GUI Shell, or option 9 on the head TTY) and type

zfs create -o copies=2 <POOL_NAME>/<DATA_SET_NAME>

(insert your pool and dataset names inside the angle brackets)...

Now you can go back to the GUI and manage the volumes and datasets as normal...

Put all your data on the new dataset you just created, and you should be covered.
 

Krumelej

Cadet
Joined
Jun 15, 2012
Messages
7
So you can think of that dataset as a mini RAID 1 just for the files you choose?

[root@freenas] ~# zfs create -o copies=2 <VM250GB>/<RAID1>
Ambiguous input redirect.

Don't think it worked.
 

Hexland

Contributor
Joined
Jan 17, 2012
Messages
110
My bad... I meant replace the names inside the angle brackets (including the angle brackets)... sorry -- I often forget that not everyone is a programmer :)

zfs create -o copies=2 VM250GB/RAID1
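
If you want to double-check that it took, this should show the property (same names as above):

zfs get copies VM250GB/RAID1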
 

Krumelej

Cadet
Joined
Jun 15, 2012
Messages
7
Thanks.
That did create a dataset.
However, it does not show up in the GUI, which means I can't change the default read-only flag, so no data can be put on it.
I did restart FreeNAS but no change.

Also, is scrubbing supposed to give any indication that it's doing anything, other than increased CPU usage?
 

Hexland

Contributor
Joined
Jan 17, 2012
Messages
110
Would you be running a version number < 8.2.x by any chance?

Changes in the 8.2 branch versus 8.0.x

The 8.2 branch of FreeNAS introduces many functional changes when compared with the 8.0.x releases.

ZFS can be manipulated from the CLI, and changes for supported items tracked by FreeNAS will be reflected in the GUI. zvols, datasets, and entire volumes can be created, destroyed, or manipulated on the CLI and will be propagated to the GUI.


Scrubbing is an asynchronous event, so it happens in the background. I've never triggered one through the GUI, so I don't know if you ever get a dialog or status report when it's done. It can potentially take a LONG time (the last one I did took 22 hours for 5 TB of data).

You can get the scrub status from the command line (sorry, I know you don't like the command line, but it's generally the only way I know how to do stuff).

'zpool status' will produce a status report for your storage pool... at the top will be the current scrub status (and an ETA for completion).
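
If you want to kick off a scrub by hand as well, something like this should work (assuming your pool is named VM250GB, as earlier in the thread):

zpool scrub VM250GB
zpool status VM250GB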
 

Krumelej

Cadet
Joined
Jun 15, 2012
Messages
7
FreeNAS-8.0.4-RELEASE_MULTIMEDIA-p2-x64 (11405)
Should I use 8.2.x instead?
Took 17 minutes to scrub what took a lot longer to put there.
 

Hexland

Contributor
Joined
Jan 17, 2012
Messages
110
Whether you upgrade or not is up to you... I'm not going to say 'YES' since the 8.2.x branch is still beta (despite not having any real problems with it myself).

However, I can't see any way of doing what you want to do (and still using the GUI) without upgrading (but I'm still new to FreeNAS too, so I could be wrong)
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,402
Took 17 minutes to scrub what took a lot longer to put there.
FYI, scrub time is a function of the speed of your zpool + the amount of data in the pool.
 

Krumelej

Cadet
Joined
Jun 15, 2012
Messages
7
Hmm.
Upgrading from both the GUI and the CD failed for some reason.
I just created a new VM and added the ZFS disk, though, so no worries.

*edit* The dataset seems to work just fine, as any file put on there eats up double the space.

It would be neat to have a 1/5 or 1/10 factor for recovery, but this should be sufficient for now. Thanks.
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,402
*edit* The dataset seems to work just fine, as any file put on there eats up double the space.

It would be neat to have a 1/5 or 1/10 factor for recovery, but this should be sufficient for now. Thanks.
Not sure what you mean by "1/5 or 1/10" or how it would be possible.

Keep in mind this is a crap alternative to a mirror. It gives you zero protection against a drive failure.

However, it will help if bad sectors form on the disk, since a copy is saved in a second place. Assuming there aren't any bad sectors there as well.
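
After a scrub finishes, something like this should also list any files with permanent errors, so you know exactly what to restore from backup (using the pool name from earlier in the thread):

zpool status -v VM250GB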
 

Krumelej

Cadet
Joined
Jun 15, 2012
Messages
7
Think of the 1/5 as a RAIDZ with 5 drives in it.
Then pretend that the single drive we want to protect against faults is divided up into 5 pieces.
Four of those pieces are written in one contiguous place, while the parity information that the 5th disk would hold is written in another place. Or rather, the parity for every piece is gathered up in one place while the usual data is in another, resulting in two writes while taking up less space than a clean mirror. But you can only lose a small piece of data, instead of half of it, before it becomes uncorrectable.
Meaning a few flipped bits that would normally result in a broken file could be corrected, as long as it's a slow decay and you scrub fairly often.
From what I have heard, silent corruption in disks and arrays is a larger problem than drive failures, and complete mechanical failures are less common than minor faults.
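
As a rough example with made-up numbers: split a 250GB disk into five 50GB pieces and use one piece for parity, and you get 200GB usable, a 20% overhead instead of the 50% that copies=2 costs. The catch is that only one damaged piece per stripe can be rebuilt.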

"It gives you zero protection against a drive failure."

Correct. But not everyone can expand with four or five drives each time.
And if you use your NAS to store a music collection or similar, you might be more interested in keeping data integrity than in preventing a complete loss.

Hope that explains how I'm thinking, wrong or right.
 