ZVol snapshot as iSCSI?

Status
Not open for further replies.

divB

Dabbler
Joined
Aug 20, 2012
Messages
41
Hi,

I use a ZVol to export an iSCSI target to a Linux host, which creates a dm-crypt container with ext4 on it. The Linux host stores daily backups on it via rsync, so I also want to create a snapshot after each backup.
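Concretely, the post-backup step I have in mind is something like this (the pool/zvol names are placeholders for my actual layout):

```shell
# Names are placeholders: adjust "volume1/test1" to your own pool/zvol.
SNAP_NAME="backup-$(date +%Y%m%d)"
# Only attempt the snapshot where the zfs tool actually exists.
if command -v zfs >/dev/null 2>&1; then
    zfs snapshot "volume1/test1@${SNAP_NAME}" || true
fi
echo "snapshot name: ${SNAP_NAME}"
```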

How can I access these snapshots? I expected to be able to mount a specific snapshot via iSCSI, but I can't work out how.

Any hints?
Regards
divB
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I'm not sure it's even possible.

Using an iSCSI file extent, you can definitely do the backup part of this, but you're going to have to do a little work to get that snapshot to be accessible via iSCSI - the iSCSI subsystem isn't going to do this for you automatically. If you only need it to be able to restore from, then I suggest setting up an iSCSI file extent and doing snapshots of that; if and when you discover a need to restore something from an older snapshot, you'll probably need a little CLI magic to determine the path to the snapshot, and then either some CLI fudging of the iSCSI config, or maybe the GUI will let you do it. Haven't tried.
 

divB

Dabbler
Joined
Aug 20, 2012
Messages
41
Hmm, I do not understand what you mean by "backup part", or what a file extent has to do with it.

I figured out that I can make a clone from a snapshot, and the clone can be shared via iSCSI.

This would be an option, but without restarting iSCSI the new volume does not appear on the client. Is there a way to "refresh" the exported targets apart from stopping and starting the iSCSI service?
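For reference, the clone step is roughly this (dataset and snapshot names as on my box; adjust to taste):

```shell
# Hypothetical names matching this thread; substitute your own.
SNAP="volume1/test1@manual-20120820"   # existing snapshot
CLONE="volume1/test1-restore"          # clone to expose over iSCSI
if command -v zfs >/dev/null 2>&1; then
    # The clone then shows up as /dev/zvol/volume1/test1-restore
    zfs clone "$SNAP" "$CLONE" || true
fi
```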
 

Peter Bowler

Dabbler
Joined
Dec 18, 2011
Messages
21
Did you create a new device extent and then a target/extent pair, or at least add it to an existing target?

The only way to access a snapshot is to clone it or roll back to it.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Hmm, I do not understand what you mean by "backup part"

Your strategy has two parts: one part backs up, one part recovers. You can use a file extent to accomplish the backup-with-snapshot part.

or what a file extent has to do with it?

Because you can use file extents with ZFS snapshots, and it's easy to see how it works. I'm not sure ZFS supports snapshots with device extents - it might - but the paradigm is potentially unclear, and it isn't going to be well-supported under FreeNAS (unlike file extent snapshots, where at least the FreeNAS system understands that half of the picture and will help you make it work).
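A rough sketch of the file-extent approach - the dataset name and extent path here are made up for illustration, not FreeNAS defaults:

```shell
# Sketch only: dataset "pool1/iscsi" and the extent path are assumptions.
# A file extent is just a big file on a ZFS dataset, so ordinary dataset
# snapshots capture the whole exported disk image.
EXTENT=/mnt/pool1/iscsi/backup-extent
if command -v zfs >/dev/null 2>&1; then
    truncate -s 100G "$EXTENT" || true       # sparse file backing the extent
    zfs snapshot pool1/iscsi@manual-20120820 || true
fi
```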

I figured out that I can make a clone from a snapshot which can be shared via iSCSI.

This would be an option, but without restarting iSCSI the new volume does not appear on the client. Is there a way to "refresh" the exported targets apart from stopping and starting the iSCSI service?

You'll have to figure out whether or not you can hack up istgt's configuration appropriately. It does a limited read on SIGHUP.

http://www.peach.ne.jp/archives/istgt/
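Something along these lines, assuming the stock PID-file location (verify the path on your system):

```shell
# The PID-file path is the usual FreeBSD/FreeNAS location; it may differ.
PIDFILE=/var/run/istgt.pid
if [ -r "$PIDFILE" ]; then
    kill -HUP "$(cat "$PIDFILE")"   # istgt re-reads (part of) its config
else
    echo "istgt not running on this machine"
fi
```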
 

divB

Dabbler
Joined
Aug 20, 2012
Messages
41
I am new to this, so please forgive me that I still do not understand.

After reading all the answers five times, I think I know what you mean: I can access a snapshot as a file in the file system? E.g. /mnt/pool1/Zvol1/.zfs/manual-20120820 ?

When I tried it yesterday, I found no corresponding file to use for a file extent though ...
 

divB

Dabbler
Joined
Aug 20, 2012
Messages
41
OK, now I was able to try it again.

This attachment shows the snapshot of my ZVol: freenas-snapshot.jpg

This attachment shows that I cannot find any file representing the snapshot: freenas-extent.jpg

Also:

Code:
[root@freenas] /mnt# ls -R
./       ../      .snap/   md_size  volume1/

./.snap:
./  ../

./volume1:
./  ../


I can find it in /dev:

Code:
[root@freenas] /dev/zvol/volume1# find /dev/zvol
/dev/zvol
/dev/zvol/volume1
/dev/zvol/volume1/test1
/dev/zvol/volume1/test1@manual-20120820


But when I manually enter /dev/zvol/volume1/test1@manual-20120820 into the "Add File Extent" dialog, I get the error:

"You need to specify a filepath, not a directory."

Regards,
divB
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I am new to this, so please forgive me that I still do not understand.

After reading all the answers five times, I think I know what you mean: I can access a snapshot as a file in the file system? E.g. /mnt/pool1/Zvol1/.zfs/manual-20120820 ?

When I tried it yesterday, I found no corresponding file to use for a file extent though ...

I said ".zfs/snapshot" ... it's a directory. Look around inside it. Based on what you say above, I'm guessing the full directory name on your system is /mnt/pool1/Zvol1/.zfs/snapshot
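Roughly, assuming that path is right for your layout:

```shell
# Path is a guess based on this thread; .zfs is hidden, so a plain
# "ls" of the dataset root will not show it.
SNAPDIR=/mnt/pool1/Zvol1/.zfs/snapshot
if [ -d "$SNAPDIR" ]; then
    ls "$SNAPDIR"   # one subdirectory per snapshot
else
    echo "no such dataset on this machine"
fi
```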
 

Peter Bowler

Dabbler
Joined
Dec 18, 2011
Messages
21
I'm not sure ZFS supports snapshots with device extents, it might, but the paradigm is potentially unclear, and it isn't going to be well-supported under FreeNAS (unlike file extent snapshots, where at least the FreeNAS system understands that half of it all and will help you make it work).

This prompts a follow-up question on my part.

I am using snapshots of device extents, so it works... but as for how well, I'm not sure, as it's only been a few months.

I chose to create device extents rather than normal (file) extents in my iSCSI setup because FreeNAS indicates they perform better (it seems relevant enough for them to put it directly into the web UI as you create extent/target pairings).

Does anyone have any more info on what the differences are from a performance standpoint?

I can start a new thread if needed.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I've been wondering that myself. The problem with ZFS is that it is designed as copy-on-write, which means that when you're writing stuff, new writes are (hopefully) mapped "near" the old data, but possibly/probably a seek away. So if you've got this long contiguous file (iSCSI file extent) and you're randomly writing here and there inside it, you're fragmenting the file horribly, while with UFS, it's just writing in-place so you don't have the fragmentation issue. I'm guessing it'd work out fine and dandy if you have L2ARC to assist, but minimization of unnecessary I/O (especially writes) seems to be the best practice if you want to use iSCSI with ZFS.
 

Peter Bowler

Dabbler
Joined
Dec 18, 2011
Messages
21
How does ZFS treat a device extent differently than a file extent in that scenario?
If a (file) extent can get fragmented, how does ZFS differentiate a device extent? Does it take contiguous regions from the devices? I can't imagine it does; that kind of goes against the whole pool concept.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
How does ZFS treat a device extent differently than a file extent in that scenario?
If a (file) extent can get fragmented, how does ZFS differentiate a device extent? Does it take contiguous regions from the devices? I can't imagine it does; that kind of goes against the whole pool concept.

It's presumably similar to the way files are treated. I would imagine it takes a contiguous region from the pool, but then when the next write comes along, that's allocated elsewhere. That could lead to massive fun and fragmentation doing trivial stuff like atime updates. The question becomes whether or not it's a good idea to use ZFS for iSCSI, at least for applications that blithely write useless data frequently, which is, sadly, more than you'd expect.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
It's presumably similar to the way files are treated. I would imagine it takes a contiguous region from the pool, but then when the next write comes along, that's allocated elsewhere. That could lead to massive fun and fragmentation doing trivial stuff like atime updates. The question becomes whether or not it's a good idea to use ZFS for iSCSI, at least for applications that blithely write useless data frequently, which is, sadly, more than you'd expect.

That's why the more I read about how ZFS operates, the more I think that a defrag tool will become more important in the future. I've actually thought a lot about "how" I will upgrade my ZFS pools someday. If I simply replace each drive one at a time with resilvering, I'll be stuck with whatever the current fragmentation is. But if I make a whole new zpool and copy all of the data to it from the command line, then I can effectively "restart" the fragmentation from zero.
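The copy-to-a-new-pool approach would look something like this using replication (pool names are placeholders, and I haven't tested this exact sequence):

```shell
# Placeholder pool names for the old and new pools.
SRC_POOL=tank
DST_POOL=tank2
SNAP="${SRC_POOL}@migrate"
if command -v zfs >/dev/null 2>&1; then
    zfs snapshot -r "$SNAP" || true
    # -R sends the whole dataset hierarchy; -F rolls the target to match.
    zfs send -R "$SNAP" | zfs receive -F "$DST_POOL" || true
fi
```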
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The design of modern file systems has become more aggressive over time; in the old days, storage was very expensive, files were small, and it was common to see systems tuned to minimums for free space reservations (which meant severely reduced write performance). It is somewhat different today. I mean, really, a terabyte of disk space? You're not storing mostly small files in that, are you? :smile:

So one of the things to note is that modern filesystems like ZFS tend to be written - and optimized - for handling larger files. That also includes some hidden assumptions: one is that big files are not as commonly randomly written; another is that violations of that assumption (such as databases) frequently don't require sequential read access anyway, so fragmentation is less of an issue; and another is that sites implementing such things can address performance problems with other ZFS features such as L2ARC, which neatly addresses both sequential and random read performance issues resulting from fragmentation.

On the other hand, ever-larger free space reservations are becoming commonplace in environments where there are a lot of writes (good anti-fragmentation policy in any case), and the usage patterns of storage are changing as well: much data is put on storage and then left for extremely long periods without further access, so even if it is somewhat fragmented, that may not be a serious problem.

This turns out to be bad for small-scale iSCSI users, though, where you get the random writes and fragmentation and don't have an L2ARC to help "fix" the situation. There are probably other use cases that break as well. My suspicion, however, is that we're not going to see a "defrag" tool anytime soon. Those who have the most pressing need for such a thing also have other solutions to their performance issues.
 