9.2.1.6-RC-71b05dd-x64 TOTAL DISASTER

Status
Not open for further replies.

DaPlumber

Patron
Joined
May 21, 2014
Messages
246
You gave me an idea: one of these days I'll try a manual snapshot + manual send/receive :)
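For anyone who wants to try that manual path, it would look roughly like this (pool, dataset and host names here are made-up placeholders, not taken from any system in this thread):

Code:
# take a recursive snapshot on the push side
zfs snapshot -r tank/data@manual-test
# full send of that snapshot to the pull side over ssh
zfs send -R tank/data@manual-test | ssh backuphost "zfs receive -F backup/data"
# later, an incremental send between two manual snapshots
zfs send -R -i tank/data@manual-test tank/data@manual-test2 | ssh backuphost "zfs receive -F backup/data"


If the manual path works cleanly but autorepl.py keeps failing, that would point at the script's hold/state bookkeeping rather than at ZFS send/receive itself.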


Finding things that shouldn't make a difference but do is one of the cherished tools of bug hunting... ;)

Oh, and "Just because it's a rare corner case, doesn't make the bug any less frustrating when YOU encounter it. If anything it makes it worse!"
 

panz

Guru
Joined
May 24, 2013
Messages
556
I'm astonished that this "bug" showed up on a standard configuration (see my signature). And on a system installed from scratch.

The only thing I did was import the pools (which should be a common task) from a very recent previous version.

User misconfiguration? My setup is so simple (and I NEVER touched the CLI) that even a child could figure it out. ;)
 

DaPlumber

Patron
Joined
May 21, 2014
Messages
246
@panz: "Thank you for being a good Guinea Pig. Now hold still... " :eek:

I know it won't make you feel any better, but in previous work lives I have been where you are now more than a few times (although not with FreeNAS). At the moment I'd be looking at how your system differs from iXsystems' test rigs (hence the request for the DB dump, I'm guessing) and also trying to screen-capture the exact set of actions that provokes the issue. Repeatability is key. Like they say on CSI, "follow the evidence". :cool:
 

panz

Guru
Joined
May 24, 2013
Messages
556
I'm waiting, waiting... And my server is shut down!
 

marian78

Patron
Joined
Jun 30, 2011
Messages
210
Hi, I have two production servers running FreeNAS-9.2.1.6-RELEASE x64, with autosnapshot backing up the first server's volume to the second server recursively.

I also get the message "cannot hold snapshot"... What is wrong? My second server (the backup) is not wiped, but I haven't compared its data with the first server (there are many files and folders - about 10 TB).

Log from the first server:

Code:
fileserver1 autorepl.py: [common.pipesubr:58] Popen()ing: /usr/bin/ssh -c arcfour256,arcfour128,blowfish-cbc,aes128-ctr,aes192-ctr,aes256-ctr -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 1024 192.168.0.11 "zfs list -Hr -o name -t snapshot -d 1 volume0 | tail -n 1 | cut -d@ -f2"
Jul 15 08:15:09 fileserver1 autorepl.py: [common.pipesubr:72] Executing: /usr/bin/ssh -c arcfour256,arcfour128,blowfish-cbc,aes128-ctr,aes192-ctr,aes256-ctr -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 1024 192.168.0.11 "/sbin/zfs inherit freenas:state volume0@auto-20140715.0800-4w"
Jul 15 08:15:09 fileserver1 autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs inherit freenas:state volume0@auto-20140714.2000-4w
Jul 15 08:15:09 fileserver1 autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs release -r freenas:repl volume0@auto-20140714.2000-4w
Jul 15 08:15:11 fileserver1 autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs set freenas:state=LATEST volume0@auto-20140715.0800-4w
Jul 15 08:15:11 fileserver1 autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs hold -r freenas:repl volume0@auto-20140715.0800-4w
Jul 15 08:15:12 fileserver1 common.pipesubr: cannot hold snapshot 'volume0@auto-20140715.0800-4w': tag already exists on this dataset



Log from the second server (backup):

Code:
Jul 15 08:01:15 fileserver1 autosnap.py: [tools.autosnap:58] Popen()ing: /sbin/zfs snapshot -r -o freenas:state=NEW volume0@auto-20140715.0800-4w
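

If it helps narrow things down, the hold the script complains about can be inspected by hand on the push side, and released if it is genuinely stale (snapshot name taken from the log above; only do this if you are sure a replication run is not in flight):

Code:
# list every hold on the snapshot and its descendants
zfs holds -r volume0@auto-20140715.0800-4w
# if the freenas:repl tag is left over from an earlier run, drop it
zfs release -r freenas:repl volume0@auto-20140715.0800-4w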
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Can you post your hardware and FreeNAS version please?
 

marian78

Patron
Joined
Jun 30, 2011
Messages
210
Both servers have FreeNAS-9.2.1.6-RELEASE-x64 (ddd1e39) installed. CPU: Intel(R) Xeon(R) E5-2609 v2 @ 2.50GHz; RAM: 24521 MB ECC REG/1333; motherboard: Supermicro.

The output of dmesg is in the attached file.
 

Attachments

  • dmesg.txt
    38.5 KB

toadman

Guru
Joined
Jun 4, 2013
Messages
619
I have this issue too. However, when I check, the hold is apparently there on the push system.

Code:
Jul 18 01:00:03 fileserver autorepl.py: [common.pipesubr:58] Popen()ing: /usr/bin/ssh -c arcfour256,arcfour128,blowfish-cbc,aes128-ctr,aes192-ctr,aes256-ctr -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 backupserver.freenas.lan "zfs list -Hr -o name -t snapshot -d 1 bpool0/iscsizvol0 | tail -n 1 | cut -d@ -f2"
Jul 18 01:02:26 fileserver autorepl.py: [common.pipesubr:58] Popen()ing: /usr/bin/ssh -c arcfour256,arcfour128,blowfish-cbc,aes128-ctr,aes192-ctr,aes256-ctr -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 backupserver.freenas.lan "zfs list -Hr -o name -t snapshot -d 1 bpool0/iscsizvol0 | tail -n 1 | cut -d@ -f2"
Jul 18 01:02:26 fileserver autorepl.py: [common.pipesubr:72] Executing: /usr/bin/ssh -c arcfour256,arcfour128,blowfish-cbc,aes128-ctr,aes192-ctr,aes256-ctr -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 backupserver.freenas.lan "/sbin/zfs inherit freenas:state bpool0/iscsizvol0@auto-20140718.0100-1y"
Jul 18 01:02:26 fileserver autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs inherit freenas:state pool0/iscsizvol0@auto-20140717.0100-1y
Jul 18 01:02:27 fileserver autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs release -r freenas:repl pool0/iscsizvol0@auto-20140717.0100-1y
Jul 18 01:02:27 fileserver autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs set freenas:state=LATEST pool0/iscsizvol0@auto-20140718.0100-1y
Jul 18 01:02:27 fileserver autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs hold -r freenas:repl pool0/iscsizvol0@auto-20140718.0100-1y
Jul 18 01:02:27 fileserver common.pipesubr: cannot hold snapshot 'pool0/iscsizvol0@auto-20140718.0100-1y': tag already exists on this dataset


Code:
fileserver# zfs holds -r pool0/iscsizvol0@auto-20140718.0100-1y
NAME  TAG  TIMESTAMP
pool0/iscsizvol0@auto-20140718.0100-1y  freenas:repl  Fri Jul 18  1:00 2014
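

Which matches the error: the tag is already on the snapshot when the script issues its unconditional zfs hold. Purely as a sketch (this is not what autorepl.py actually does), a more defensive sequence would check for the tag first:

Code:
SNAP=pool0/iscsizvol0@auto-20140718.0100-1y
# place the recursive hold only if the freenas:repl tag is not already there
if ! zfs holds -H "$SNAP" | awk '{print $2}' | grep -qx freenas:repl; then
    zfs hold -r freenas:repl "$SNAP"
fi


So, going by the zfs holds output above, the "cannot hold snapshot" line looks more like a harmless retry artifact than a lost hold.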


System info:
FreeNAS-9.2.1.6-RELEASE-x64 (ddd1e39)
AMD Phenom(tm) II X4 B55 Processor
16329MB ECC
ASUS motherboard
LSI 9211 IT mode
 