Strange results from zfs receive

Status
Not open for further replies.
Joined
Jul 13, 2013
Messages
286
Huh; the zfs receive apparently completed successfully (there were errors relating to certain Solaris options that don't exist on FreeBSD, I think, but the final status was success). The 48.6G is the result of the send/receive; it wasn't there before.

(The duplicated "lydy" at the end may well be an error of mine in setting up the zfs send, or something; I'm not sure. But whatever the cause, I should be able to find the files making up that 48.6G, shouldn't I?)

[part of zfs list output]
NAME USED AVAIL REFER MOUNTPOINT
zzback 48.7G 25.1T 153K /mnt/zzback
zzback/.system 4.09M 25.1T 166K legacy
zzback/.system/configs-5ece5c906a8f4df886779fae5cade8a5 665K 25.1T 460K legacy
zzback/.system/cores 153K 25.1T 153K legacy
zzback/.system/rrd-5ece5c906a8f4df886779fae5cade8a5 153K 25.1T 153K legacy
zzback/.system/samba4 1.24M 25.1T 729K legacy
zzback/.system/syslog-5ece5c906a8f4df886779fae5cade8a5 1.52M 25.1T 633K legacy
zzback/fsfs-backup 48.6G 25.1T 153K /mnt/zzback/fsfs-backup
zzback/fsfs-backup/fsfs 48.6G 25.1T 153K /mnt/zzback/fsfs-backup/fsfs
zzback/fsfs-backup/fsfs/zp1 48.6G 25.1T 153K /mnt/zzback/fsfs-backup/fsfs/zp1
zzback/fsfs-backup/fsfs/zp1/lydy 48.6G 25.1T 153K /mnt/zzback/fsfs-backup/fsfs/zp1/lydy
zzback/fsfs-backup/fsfs/zp1/lydy/lydy 48.6G 25.1T 48.6G /mnt/zzback/fsfs-backup/fsfs/zp1/lydy/lydy
zzback/jails 153K 25.1T 153K /mnt/zzback/jails
zzback/local 1.37M 25.1T 1.14M /mnt/zzback/local
zzback/rebma-backup 256K 25.1T 153K /mnt/zzback/rebma-backup

But, trying to find the actual files -- nothing:

[root@zzbackup ~]# ls -al /mnt/zzback/fsfs-backup/fsfs/zp1/lydy/
total 3
drwxrwxr-x+ 2 fsfs-backup wheel 3 Dec 10 00:15 .
drwxrwxr-x+ 3 fsfs-backup wheel 4 Dec 10 00:15 ..
-rwxrwxr-x+ 1 fsfs-backup wheel 0 Dec 10 00:15 .windows

[root@zzbackup ~]# ls -al /mnt/zzback/fsfs-backup/fsfs/zp1/lydy/lydy
ls: /mnt/zzback/fsfs-backup/fsfs/zp1/lydy/lydy: No such file or directory
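A couple of things worth checking before assuming the space is phantom (a sketch; the dataset name is taken from the zfs list output above): whether the received datasets are actually mounted, and whether the 48.6G is held by snapshots rather than live files.

```shell
# 1. Are the received datasets mounted?  A dataset can have its space
#    accounted in zfs list while remaining unmounted, in which case ls
#    only shows the empty underlying directory.
zfs get -r mounted,mountpoint zzback/fsfs-backup

# 2. Is the space held by snapshots rather than by live files?
zfs list -t snapshot -r zzback/fsfs-backup

# 3. Break down where the space is accounted (dataset vs. snapshots
#    vs. children) using the predefined "space" column set.
zfs list -o space -r zzback/fsfs-backup
```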

What's up?
 
Further investigation, to no profit. Running a scrub doesn't turn up anything weird -- it takes the right amount of time for the amount of disk it says is consumed, with no indication of any error. Still can't actually find the files anywhere, though. When I back up to external disks via ZFS send/receive, the resulting disk is a normal ZFS filesystem that I can access -- so why not here?
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Weird...

I have no idea, but I have a couple of standard questions:
  • What are the versions of FreeNAS? (Assuming send/receive between the FreeNAS systems in your signature.)
  • What was the command line used to copy?
  • What were the errors (the entire output) of the send/receive process?
  • Any errors in /var/log/messages since you initiated the send?
  • Do you still have the original data?
  • Is 48.6G the size of data on the sending side?
  • Did you reboot? (I hope not)
P.S.
You cannot see what was received until the receiving process is completely finished. However, you stated that you had seen a successful completion message.
 
This was between a Solaris system and the FreeNAS system, which I see I only half-mentioned originally; sorry! The FreeNAS is 9.3, which I believe was current at the time of the test (there was a release last night or so, and I've just upgraded). I must have given more detail in a previous question and forgotten this was a different one.

I do still have the original data (I'm using my live server as the source, since it's the only place I have big chunks of data, and since moving that data to FreeNAS is coming up on the agenda pretty soon -- that is, once my understanding of FreeNAS is enough up to snuff that I'm willing to).

The data on the sending side describes itself as 62.1G (today; but knowing the usage patterns I'm quite confident it hasn't changed).

Command was:
localcmd="zfs send -Rp $nowsnap"                        # -R: recursive replication stream; -p: include properties
rmtcmd="zfs recv -Fudv $SERVERPATH/$HOSTNAME/$LOCALFS"  # -F: force rollback; -u: don't mount; -d: name from sent path; -v: verbose
pfexec $localcmd | ssh "$SERVER" "$rmtcmd"
for suitable values of the variables. That does reliably show the exact options used (initiated from the current Solaris server, then sent over ssh to the FreeNAS box).

I've rebooted since then, I'm sure -- at least for the upgrade. That didn't make the phantom disk usage go away.

I don't have the full errors, though I mentioned the two I saw (many copies of each):
cannot receive sharesmb property on zzback/fsfs-backup/fsfs/zp1/lydy/lydy: permission denied
cannot receive quota property on zzback/fsfs-backup/fsfs/zp1/lydy/lydy: permission denied

That's from the thread "Solaris compatibility" in this same section of the forums.

I should emphasize that the data that's in "phantom mode" is completely expendable; this FreeNAS box is entirely a test system right now, the actual (source) data is safe, I can burn it down to the ground and start over if some interesting theory makes it seem worth doing.
 

solarisguy

I have read the other thread, now.

Since you do not use quotas, you can remove the quota property on the Solaris side.

And temporarily set sharesmb to off, then let's see what happens.
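On the Solaris side that would be something like the following (a sketch; the dataset name is assumed from the paths earlier in the thread -- substitute the real source dataset):

```shell
# Clear the two properties that produced "permission denied" on receive.
# zp1/lydy/lydy is an assumption based on the received paths above.
pfexec zfs set quota=none zp1/lydy/lydy   # "none" removes the quota
pfexec zfs set sharesmb=off zp1/lydy/lydy # temporarily disable SMB sharing
```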
 
I think I'll start with a more artificial test -- create a new dataset on Solaris without those options, rather than disturbing them on the real copy. If that doesn't reproduce the interesting symptoms when I send it to FreeNAS, I can consider actually removing them from the real dataset; that should be perfectly safe, in that they should go back on with no trouble. Messing with the real data scares me anyway.
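A minimal version of that artificial test might look like this on the Solaris side (all names here are made up for illustration; the send/recv options mirror the script quoted above):

```shell
# Sketch of a round-trip test with a fresh dataset and default properties.
pfexec zfs create zp1/sendtest               # new dataset, no quota/sharesmb set
pfexec cp -r /some/test/data /zp1/sendtest/  # put a little data in it
pfexec zfs snapshot -r zp1/sendtest@t1
pfexec zfs send -Rp zp1/sendtest@t1 | \
    ssh backuphost "zfs recv -Fudv zzback/fsfs-backup/sendtest"

# Then, on the FreeNAS box, check whether the data is visible:
#   zfs get -r mounted zzback/fsfs-backup/sendtest
#   ls /mnt/zzback/fsfs-backup/sendtest
```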

Probably not for a couple of days; tomorrow is the final for a course I'm teaching and I'm not fully ready for that yet, and then there'll be a rush to get grades turned in. If I do it much sooner, it's avoidance behavior :smile:.

You're not, at least, laughing hilariously at my trying to use send/receive between Solaris and FreeBSD ZFS, right?
 

solarisguy

[...] You're not, at least, laughing hilariously at my trying to use send/receive between Solaris and FreeBSD ZFS, right?
No, not at all. But, myself, I would have turned off quota and sharesmb properties as soon as the errors appeared. (Assuming a pool not in use. Otherwise, retrying with a fresh dataset is an excellent option.)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Can I laugh at you for using Solaris and FreeBSD ZFS? Just kidding.

In all seriousness, this is something that a few people in years gone by have tried, with various definitions of "success". I, personally, would rather copy the data than try to use ZFS replication, because of all of the problems I've seen while watching other users attempt it.

In theory, what you are trying to do is supposed to work. In practice, there are very few who have claimed it works successfully to a FreeNAS box.
 