ZFS ZVOL replication

Joined: Apr 28, 2012
Messages: 16
I was very excited to try this feature after many years of using FreeNAS/NAS4Free, but my happiness was short-lived. I did RTFM and found that snapshots are stored together with the volume, so here is the quandary: how is it possible to make snapshots of ZVOLs at all, since all the space will be allocated to the zvol itself?
 

cyberjock

Inactive Account
Joined: Mar 25, 2012
Messages: 19,525
I don't even know who the heck you are asking.

zvols are an independent dataset type, exposed as a block device rather than a file system, but they behave basically like any other dataset. So snapshotting a zvol is no different from snapshotting a dataset.
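
As a minimal sketch, assuming a hypothetical zvol named tank/myzvol, taking and listing zvol snapshots uses exactly the same commands as for any other dataset:
Code:
# Take a snapshot of a zvol exactly as you would for a dataset
# ("tank/myzvol" is a placeholder name)
zfs snapshot tank/myzvol@manual-1

# List all snapshots under that zvol
zfs list -t snapshot -r tank/myzvol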
 

mav@

iXsystems
Joined: Sep 29, 2011
Messages: 1,428
A snapshot takes almost no space at the time of its creation. Later it may consume some space if the original ZVOL is modified. Since your ZFS pool should have some free space for normal operation anyway, it is usually not a problem to also keep the few most recent snapshots needed for replication. To estimate the free space required, estimate the amount of data modified within your snapshot interval.
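
As a rough sketch of how to measure that (again assuming a hypothetical zvol tank/myzvol), the "written" property shows how much data has changed since the most recent snapshot, and the space-accounting properties show what existing snapshots are already holding:
Code:
# Data written since the most recent snapshot of this zvol
zfs get written tank/myzvol

# Space accounting: live data vs. space pinned by snapshots vs. reservation
zfs list -o name,used,usedbydataset,usedbysnapshots,refreservation tank/myzvol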
 
Joined: Apr 28, 2012
Messages: 16
So, in short, either nobody knows or FreeNAS has a bug, because I only manage to take one snapshot per zvol.

So, how is it possible to create space for the snapshots of a zvol?
 

mav@

iXsystems
Joined: Sep 29, 2011
Messages: 1,428
It would be better if you described your problem in more detail. There is no limit of one snapshot per ZVOL.

If there is no space on the pool for more snapshots because you created a non-sparse ZVOL equal in size to the pool capacity, then it may be considered as a configuration error. In that case the only ways out are to destroy and recreate the ZVOL, increase the pool capacity, or switch the ZVOL to sparse and monitor for potential overflows.
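
For the third option, switching an existing non-sparse ZVOL to sparse essentially means dropping its refreservation; a sketch, with the zvol name tank/myzvol as a placeholder:
Code:
# A non-sparse zvol reserves its full volsize up front
zfs get volsize,refreservation tank/myzvol

# Dropping the reservation makes it effectively sparse (thin-provisioned).
# The pool can now be overcommitted, so free space must be monitored.
zfs set refreservation=none tank/myzvol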
 

jgreco

Resident Grinch
Joined: May 29, 2011
Messages: 18,680
then it may be considered as a configuration error.

@mav@ misspelled "then that is a catastrophic configuration error." He was trying to be nice about it. A ZFS pool should never be filled. The normal ceiling for file-based storage is often quoted as 80%, but for block storage, to have a chance of acceptable performance, you probably want to use no more than 60% of pool capacity (potentially even less). You cannot just create a non-sparse zvol equal in size to the pool capacity... it'll all go to hell pretty quickly.
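
For what it's worth, the allocation percentage is easy to watch; a sketch, with the pool name tank as a placeholder:
Code:
# CAP is allocated space as a percentage of total pool size;
# for block/iSCSI workloads the guidance above is to keep it well under ~60%
zpool list -o name,size,allocated,free,capacity tank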
 

mav@

iXsystems
Joined: Sep 29, 2011
Messages: 1,428
@mav@ misspelled "then that is a catastrophic configuration error."

It is true that a pool should never be filled completely, but these days there may be no direct relation between a ZVOL's size and the pool space it occupies. If the initiator supports UNMAP, then a ZVOL created equal to or even bigger than the pool capacity may still work fine if it is never filled enough to overflow the pool. If the stored data compress well, creating a ZVOL bigger than the pool size may even be reasonable. It does require careful administration, but it is not fatal. It can indeed be catastrophic if the administrator has no idea what he is doing and/or does not monitor the storage. There is a reason why FreeNAS supports the UNMAP, Stun and Space Threshold VAAI primitives.
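
As a sketch of the kind of setup mav@ describes (the pool name, zvol name and size are placeholders), a thin-provisioned zvol is created with the -s flag, and compression can be enabled on it like on any other dataset:
Code:
# Create a sparse (thin-provisioned) 2T zvol: only blocks actually written
# consume pool space, and UNMAP from the initiator can give space back
zfs create -s -V 2T -o compression=lz4 tank/iscsi-lun0

# Confirm there is no refreservation pinning the full volsize
zfs get volsize,refreservation,compression tank/iscsi-lun0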
 

jgreco

Resident Grinch
Joined: May 29, 2011
Messages: 18,680
Right, but:

If there is no space on the pool for more snapshots because you created a non-sparse ZVOL equal in size to the pool capacity, then it may be considered as a configuration error.

The sentence I was correcting already assumed that the space on the pool was effectively depleted.
 

mav@

iXsystems
Joined: Sep 29, 2011
Messages: 1,428
The sentence I was correcting already assumed that the space on the pool was effectively depleted.

I haven't expressed myself well. I wanted to say that there may still be free physical space on a pool that cannot be used because of the space reservation set by a big non-sparse ZVOL. Here is an example:
Code:
# zpool list test0
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
test0  1,52T  6,51G  1,51T         -     1%     0%  1.00x  ONLINE  -
# zfs list test0
NAME    USED  AVAIL  REFER  MOUNTPOINT
test0  1,47T   830M    85K  /test0
# zfs list test0/huge
NAME         USED  AVAIL  REFER  MOUNTPOINT
test0/huge  1,20T  1,20T  1,00G  -
# zfs snapshot test0/huge@test
cannot create snapshot 'test0/huge@test': out of space
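
In that situation the snapshot becomes possible once the refreservation stops pinning the remaining space; a hypothetical continuation of the session above (not part of mav@'s output):
Code:
# Drop the reservation held by the huge non-sparse zvol
zfs set refreservation=none test0/huge

# The snapshot should now succeed, since only the ~1G actually
# referenced has to be accounted for
zfs snapshot test0/huge@test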
 

jgreco

Resident Grinch
Joined: May 29, 2011
Messages: 18,680
I haven't expressed myself well. I wanted to say that there may still be free physical space on a pool that cannot be used because of the space reservation set by a big non-sparse ZVOL.

With all due respect, I'd say that could be considered just another example of bad configuration.

The typical problem we've seen over the last four years is that someone creates an X TB pool and then tries to create a Y TB iSCSI device on it, where Y = X. Users tend to assume that they'll be able to use the entire pool capacity as a block storage device.

That used to be a total disaster. It is less of a disaster now because of the kernel iSCSI improvements, UNMAP, etc., but those improvements basically just turn what used to be an immediate train wreck into a deferred train wreck: the pool degrades and eventually has issues as it fills.

Most new users are going to expect to be able to use up to 100% of their iSCSI device, and sooner or later many will try. If we take that into consideration, then the goal should be to educate people that they should not create zvols that are equal in size to their pools. Relying on people not to fill their zvols seems to be a losing strategy. I'm a little burnt out on that whole thing. But with that context in mind, my earlier reply might now make more sense. I'm not really interested in the clever ZFS possibilities and edge cases. In my experience, most people using iSCSI just need a reliable iSCSI device, and the usual problem is that they under-resource it due to various misunderstandings, and (eventually?) hit badness.

As a counterexample, the new FreeNAS VM storage server I'm working on has 24 x 2TB 2.5" drives in a 2U form factor (48TB raw). Seven vdevs of three-wide mirrors, plus three warm spare drives. I expect to see about 5-7TB of actual guaranteed usable iSCSI space out of the device on a 14TB pool. Very conservative, I know.
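
A layout like that would be assembled roughly as follows; a sketch only, with hypothetical da* device names, and the three warm spares simply left out of the pool rather than attached as hot spares:
Code:
# Seven three-way mirror vdevs from 21 of the 24 x 2TB drives (~14TB pool);
# da21-da23 stay unconfigured as warm spares
zpool create tank \
  mirror da0  da1  da2 \
  mirror da3  da4  da5 \
  mirror da6  da7  da8 \
  mirror da9  da10 da11 \
  mirror da12 da13 da14 \
  mirror da15 da16 da17 \
  mirror da18 da19 da20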
 