/mnt empty. Unable to locate volumes?

Status
Not open for further replies.

Baconmanic

Dabbler
Joined
Jul 2, 2013
Messages
19
So last night I decided to start adding some new drives to my FreeNAS. I had a mirrored 260GB volume as well as a 40GB volume.

So for safety I decided to remove one of the 260GB drives I have in a mirrored pool, so my Pictures CIFS share would be safe.

Installed a 300GB and a 20GB drive. Powered on. Went to Volumes, named it "test", and selected both the 300GB and 40GB drives. Continue > OK (complete).

Went to my CIFS shares, tried to add the 320GB pool to the share, and got an error.

Now I have a yellow alert blinking at the top of my FreeNAS web admin page with something about being unable to locate volumes (Pictures (260), NAS (40), and Test (320)). My Transmission is no longer running either.

I SSH into the box to check out my /mnt folder and it is empty. Did I just destroy everything? I really hope that the one (260GB) drive I removed with my pictures on it can still be saved.

Ideas? Sorry if this seems kind of foggy and scattered. I am working from memory here and work has taken a toll on me today. I really hope I am using the terminology right.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Removing your drive for "safe keeping" was a very big mistake.

Post the output of "zpool import" and "zpool status". Please post them inside CODE blocks or as text file attachments. Just pasting them into this window will lose the formatting and make the output nearly useless.
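If it's easier, here's a rough way to capture both into one file you can attach (a sketch, assuming a root shell on the FreeNAS box; /tmp/zpool-info.txt is just an example path):

Code:
# Capture both outputs into one file for attachment.
# /tmp/zpool-info.txt is just an example file name.
zpool import  > /tmp/zpool-info.txt
zpool status >> /tmp/zpool-info.txt
cat /tmp/zpool-info.txt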
 

Baconmanic

Dabbler
Joined
Jul 2, 2013
Messages
19
When I get back to the house tonight I will post the requested information.

What harm could have come from removing one of the mirrored drives? Isn't the idea of a mirror an exact copy?

Sorry, I am kind of new to the whole NAS storage thing.

Guess it is a good thing I have all the pictures backed up to another drive! I just wanted to keep the 260GB drive as an "archive."
 

Baconmanic

Dabbler
Joined
Jul 2, 2013
Messages
19
Code:
[root@freenas ~]# zpool import
[root@freenas ~]# zpool status
no pools available
[root@freenas ~]#


These are the outputs from the commands you asked for... Looks bad...

I do remember using UFS, if that matters.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Yeah. You are in SERIOUS trouble. Here's where you start looking for your backups..

Post the output of:

gpart list
camcontrol devlist
gpart status
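
For reference, roughly what each of those reports (a quick sketch, run as root):

Code:
gpart list          # full partition/slice details for every GEOM provider
camcontrol devlist  # every ATA/SCSI/USB device the kernel currently sees
gpart status        # one-line status summary per partition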
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
And what FreeNAS version are you using and what are your server components?
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
Serious trouble? He was using UFS. Zpool and zfs commands need not apply.
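
For a UFS setup, something like this is more likely to show where the volumes went (a sketch only; all read-only, run as root, and ada1 is just an example device name):

Code:
gpart show            # partition layout on every disk
glabel status         # GEOM labels; FreeNAS mounts UFS volumes by label (e.g. /dev/ufs/<name>)
gmirror status        # state of any gmirror set, which is how a FreeNAS UFS RAID1 is usually built
mount                 # what is actually mounted under /mnt right now
dumpfs -m /dev/ada1   # read-only; prints filesystem parameters if a UFS superblock exists (ada1 is an example)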

Sent from my Nexus 5
 

Baconmanic

Dabbler
Joined
Jul 2, 2013
Messages
19
So what commands should I run with UFS? I started to panic a bit when I saw no output. I will run the other commands in about an hour.
 

Baconmanic

Dabbler
Joined
Jul 2, 2013
Messages
19
Outputs:
Code:
[root@freenas] ~# gpart list
Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7897087
first: 63
entries: 4
scheme: MBR
Providers:
1. Name: da0s1
  Mediasize: 988291584 (942M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 32256
  Mode: r1w0e1
  attrib: active
  rawtype: 165
  length: 988291584
  offset: 32256
  type: freebsd
  index: 1
  end: 1930319
  start: 63
2. Name: da0s2
  Mediasize: 988291584 (942M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 988356096
  Mode: r0w0e0
  rawtype: 165
  length: 988291584
  offset: 988356096
  type: freebsd
  index: 2
  end: 3860639
  start: 1930383
3. Name: da0s3
  Mediasize: 1548288 (1.5M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 1976647680
  Mode: r0w0e0
  rawtype: 165
  length: 1548288
  offset: 1976647680
  type: freebsd
  index: 3
  end: 3863663
  start: 3860640
4. Name: da0s4
  Mediasize: 21159936 (20M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 1978195968
  Mode: r1w1e2
  rawtype: 165
  length: 21159936
  offset: 1978195968
  type: freebsd
  index: 4
  end: 3904991
  start: 3863664
Consumers:
1. Name: da0
  Mediasize: 4043309056 (3.8G)
  Sectorsize: 512
  Mode: r2w1e4
 
Geom name: da0s1
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 1930256
first: 0
entries: 8
scheme: BSD
Providers:
1. Name: da0s1a
  Mediasize: 988283392 (942M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 40448
  Mode: r1w0e1
  rawtype: 0
  length: 988283392
  offset: 8192
  type: !0
  index: 1
  end: 1930256
  start: 16
Consumers:
1. Name: da0s1
  Mediasize: 988291584 (942M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 32256
  Mode: r1w0e1
 
 
[root@freenas] ~# camcontrol devlist
<ST3300831A 3.03>                  at scbus0 target 0 lun 0 (pass0,ada0)
<Maxtor 2B020H1 WAH21PB0>          at scbus1 target 0 lun 0 (pass1,ada1)
<Maxtor 6E040L0 NAR61EA0>          at scbus1 target 1 lun 0 (pass2,ada2)
<USB 2.0 USB Flash Drive 0.00>     at scbus7 target 0 lun 0 (pass3,da0)
 
 
[root@freenas] ~# gpart status
  Name  Status  Components
 da0s1      OK  da0
 da0s2      OK  da0
 da0s3      OK  da0
 da0s4      OK  da0
da0s1a      OK  da0s1
[root@freenas] ~# 
 

Baconmanic

Dabbler
Joined
Jul 2, 2013
Messages
19
Interestingly enough, when I removed all the new drives I put in, my volume was found. At least my 40GB one was. The Pictures volume was not, but that is because the drives are not in the machine.

Would changing the cable positions cause this issue?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Well, you said "mirrored pool" in the first post, and that implies ZFS. If you are using UFS you are on your own, as I'm not good with UFS. In fact, none of us experienced guys are. :(

I will say that all of your drives are unpartitioned, which means you are looking at SERIOUS data recovery. None of this girly-man stuff where you run a few commands and your data magically reappears. If you had used ZFS, I could probably recover it right now.

Sorry and good luck!
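
If you want to check whether anything filesystem-like still survives on those disks before writing them off, these probes are read-only (a sketch; ada1 is an example, substitute your actual device names):

Code:
# Read-only probes of the raw devices; they do not modify anything.
# ada1 is an example device name.
file -s /dev/ada1       # reports "Unix Fast File system" if a UFS superblock sits at the start of the disk
fsck_ufs -n /dev/ada1   # -n answers "no" to everything, i.e. check without repairing
gmirror list            # dumps any surviving gmirror (UFS RAID1) metadata and member devices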
 

Baconmanic

Dabbler
Joined
Jul 2, 2013
Messages
19
cyberjock said:
If you had used ZFS, I could probably recover it right now.

I will keep that in mind after I do a rebuild: use ZFS!
Nonetheless, after I removed the new hardware and put the old stuff back in the same cable positions, it found my volume. Would moving the cable positions cause this kind of issue?

Also, I apologize that my use of terminology is not good; I am new to storage building. I'm more of a CCNA network guy. I guess what I meant to say was that I used UFS RAID 1.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Baconmanic said:
Would moving the cable positions cause this kind of issue?

Maybe. I wouldn't think so, but I don't know much about how UFS is implemented. But you should seriously consider going to ZFS. UFS is being removed after FreeNAS 9.2.
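
On the cable question: device names like ada0/ada1 follow the controller port, so moving cables renumbers the disks, while FreeNAS normally tracks volumes by GEOM label rather than by raw device name. To see how the disks and labels currently line up (a sketch, run as root):

Code:
camcontrol devlist    # which physical disk (by model) landed on which adaX after recabling
glabel status         # whether the volume's label still points at a live device
dmesg | grep ada      # boot-time probe messages showing model/firmware per adaX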
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Nope. Just delete the UFS volumes and create a ZFS volume.
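
For reference, the command-line shape of a ZFS mirror is just this (a sketch only; on FreeNAS you would create the volume through the GUI so the config database stays in sync, and "tank", ada1 and ada2 are example names):

Code:
# Example only: this destroys whatever is currently on ada1 and ada2.
zpool create tank mirror ada1 ada2
zpool status tank    # should show a mirror-0 vdev with both disks ONLINE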
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Dusan, you just became the resident expert on UFS.. lol
 

Baconmanic

Dabbler
Joined
Jul 2, 2013
Messages
19
Weird thing: as long as the drive is on the same cable, it seems to be found. Also, when I run either of the UFS commands, there is no output.
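
If the volume is back, a couple of quick checks that it is really mounted and readable (a sketch; the exact mount point names depend on your volume names):

Code:
mount | grep /mnt    # each imported volume should show up mounted under /mnt
df -h                # free space per mounted filesystem
ls -l /mnt           # one directory per volume, with your data inside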
 