Pool gone after reset, import does not help

Status
Not open for further replies.

Thomas

Dabbler
Joined
Jun 30, 2013
Messages
29
Hey all,

Because I couldn't access the GUI anymore, I reset my FreeNAS installation. Unfortunately, this caused my pool to disappear too. Normally this shouldn't be a problem, I've read, because you can import the pool again with auto-import. But for me this doesn't work: auto-import won't find any pool, and neither does "zpool import -f [whateveroptions]". I tried Google of course, but I can only find people with a completely different scenario, or where the import did work.

I can see the disks, though, but I don't want to create a new pool for fear of losing all my data, if there's any left anyway... I hope you can help me with this.

I use FreeNAS 8.3.1 with 3 disks in a raidz setup.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Don't create a new pool. Let's see what we can do for you.

Post the following for me:

- Hardware specs, FreeNAS version, and anything that has been a problem recently or in the past. Please be detailed if it was very recent.
- Output of the following commands, either as an attached file or as CODE:

zpool status
zpool import
gpart status
gpart list

Now listen closely... this is where a lot of people start screwing themselves. Don't go running all sorts of commands. Many commands people find on the internet cost them their data; they run them without realizing what they do or that they can be destructive. So please don't start running random commands you find online. Post that information back and be patient. I'll try to help you, but if you do anything to destroy your data, that'll be on you. You've made the right choice by not creating a new pool; let's take this slowly and logically, OK?
 

Thomas

Dabbler
Joined
Jun 30, 2013
Messages
29
Alright, thank you. I already figured I shouldn't try every command in the book :D

Hardware:
Virtualized guest on ESXi 4.1 with 2 virtual CPUs and 4GB of RAM. One virtual disk with the FreeNAS installation. The 3 disks for my pool are 3 SAMSUNG F2 EcoGreen 1TB disks, connected as Mapped Raw LUNs to the guest, thus bypassing any interference from VMware. The virtual disk with the installation and the 3 pool disks are on separate (virtual) SCSI controllers. None of the hardware has caused any problems recently.

Software:
FreeNAS 8.3.1 x64, a rather recent installation because of a switch to new disks. No problems encountered except the one that made me reset: the GUI was inaccessible, displaying a message: "This ??? is not available at the moment, please try again later" (or something along those lines). This wouldn't go away, so I used option 8) Reset to factory defaults. The original pool was a raidz setup with one dataset and nothing fancy.

Outputs:
Code:
[root@freenas ~]# zpool status
no pools available

Code:
[root@freenas ~]# zpool import                                             
[root@freenas ~]# 

Code:
[root@freenas ~]# gpart status                                             
  Name  Status  Components                                                 
da0s1      OK  da0                                                       
da0s2      OK  da0                                                       
da0s3      OK  da0                                                       
da0s4      OK  da0                                                       
da0s1a      OK  da0s1                                                     
[root@freenas ~]# 

Code:
Geom name: da0                                                             
modified: false                                                           
state: OK                                                                 
fwheads: 255                                                               
fwsectors: 63                                                             
last: 16777215                                                             
first: 63                                                                 
entries: 4                                                                 
scheme: MBR                                                               
Providers:
1. Name: da0s1                                                             
  Mediasize: 988291584 (942M)                                             
  Sectorsize: 512                                                         
  Stripesize: 0                                                           
  Stripeoffset: 32256                                                     
  Mode: r1w0e1                                                           
  attrib: active                                                         
  rawtype: 165                                                           
  length: 988291584                                                       
  offset: 32256                                                           
  type: freebsd                                                           
  index: 1                                                               
  end: 1930319                                                           
  start: 63   
2. Name: da0s2                                                             
  Mediasize: 988291584 (942M)                                             
  Sectorsize: 512                                                         
  Stripesize: 0                                                           
  Stripeoffset: 988356096                                                 
  Mode: r0w0e0                                                           
  rawtype: 165                                                           
  length: 988291584                                                       
  offset: 988356096                                                       
  type: freebsd                                                           
  index: 2                                                               
  end: 3860639                                                           
  start: 1930383
3. Name: da0s3                                                             
  Mediasize: 1548288 (1.5M)                                               
  Sectorsize: 512                                                         
  Stripesize: 0                                                           
  Stripeoffset: 1976647680                                               
  Mode: r0w0e0                                                           
  rawtype: 165                                                           
  length: 1548288                                                         
  offset: 1976647680                                                     
  type: freebsd                                                           
  index: 3                                                               
  end: 3863663                                                           
  start: 3860640
4. Name: da0s4                                                             
  Mediasize: 21159936 (20M)                                               
  Sectorsize: 512                                                         
  Stripesize: 0                                                           
  Stripeoffset: 1978195968                                               
  Mode: r1w1e2                                                           
  rawtype: 165                                                           
  length: 21159936                                                       
  offset: 1978195968                                                     
  type: freebsd                                                           
  index: 4                                                               
  end: 3904991                                                           
  start: 3863664
Consumers:                                                                 
1. Name: da0                                                               
  Mediasize: 8589934592 (8.0G)                                           
  Sectorsize: 512                                                         
  Mode: r2w1e4                                                           
                                                                           
Geom name: da0s1                                                           
modified: false                                                           
state: OK                                                                 
fwheads: 255                                                               
fwsectors: 63                                                             
last: 1930256                                                             
first: 0                                                                   
entries: 8                                                                 
scheme: BSD                                                               
Providers:                                                                 
1. Name: da0s1a                                                           
  Mediasize: 988283392 (942M)                                             
  Sectorsize: 512                                                         
  Stripesize: 0                                                           
  Stripeoffset: 40448                                                     
  Mode: r1w0e1                                                           
  rawtype: 0                                                             
  length: 988283392                                                       
  offset: 8192                                                           
  type: !0                                                               
  index: 1                                                               
  end: 1930256                                                           
  start: 16                                                               
Consumers:                                                                 
1. Name: da0s1                                                             
  Mediasize: 988291584 (942M)                                             
  Sectorsize: 512                                                         
  Stripesize: 0                                                           
  Stripeoffset: 32256                                                     
  Mode: r1w0e1
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
So your hard drives aren't even being detected by the computer.

Try these commands:

dmesg
camcontrol devlist
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Whoa, stop the truck. I read your command outputs but not your paragraph above.

You are virtualizing. It's going to be really hard for me to help you. The possible causes and solutions aren't always evident, and in many cases, if you don't know how it was set up before it broke, you may not be able to fix it without losing your data.

Do you have backups?
 

Thomas

Dabbler
Joined
Jun 30, 2013
Messages
29
Well, they are:

Code:
[root@freenas ~]# camcontrol devlist
<VMware Virtual disk 1.0> at scbus2 target 0 lun 0 (pass0,da0)
<ATA SAMSUNG HD103SI 1AG0> at scbus3 target 0 lun 0 (pass1,da1)
<ATA SAMSUNG HD103SI 1AG0> at scbus3 target 1 lun 0 (pass2,da2)
<ATA SAMSUNG HD103SI 1AG0> at scbus3 target 2 lun 0 (pass3,da3)
[root@freenas ~]#


Code:
da0 at mpt0 bus 0 scbus2 target 0 lun 0
da0: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device
da0: 320.000MB/s transfers (160.000MHz, offset 127, 16bit)
da0: Command Queueing enabled
da0: 8192MB (16777216 512 byte sectors: 255H 63S/T 1044C)
da1 at mpt1 bus 0 scbus3 target 0 lun 0
da1: <ATA SAMSUNG HD103SI 1AG0> Fixed Direct Access SCSI-5 device
da1: 6.600MB/s transfers (16bit)
da1: 953869MB (1953525168 512 byte sectors: 255H 63S/T 121601C)
da2 at mpt1 bus 0 scbus3 target 1 lun 0
da2: <ATA SAMSUNG HD103SI 1AG0> Fixed Direct Access SCSI-5 device
da2: 6.600MB/s transfers (16bit)
da2: 953869MB (1953525168 512 byte sectors: 255H 63S/T 121601C)
da3 at mpt1 bus 0 scbus3 target 2 lun 0
da3: <ATA SAMSUNG HD103SI 1AG0> Fixed Direct Access SCSI-5 device
da3: 6.600MB/s transfers (16bit)
da3: 953869MB (1953525168 512 byte sectors: 255H 63S/T 121601C)
SMP: AP CPU #1 Launched!
GEOM: da0s1: geometry does not match label (16h,63s != 255h,63s).
 

Thomas

Dabbler
Joined
Jun 30, 2013
Messages
29
cyberjock said:

Whoa, stop the truck. I read your command outputs but not your paragraph above.

You are virtualizing. I can't help you. You are totally on your own. That's a very, very messy can of worms when you virtualize. You have to have the virtualization set up a certain way and passed through properly (if there is such a thing), then the actual VM itself has to be configured properly, then FreeNAS. Too many problems with it, and people's data spontaneously disappears without warning. http://forums.freenas.org/threads/p...nas-in-production-as-a-virtual-machine.12484/ was created because if you don't know what you are doing, you shouldn't expect to recover your data if something goes wrong. Notice that post says not to do RDM. Yeah, you look like the next person to be made an example of :(

Unfortunately, at this point I have no way to assist you. The possible causes and solutions aren't always evident, and in many cases, if you don't know how it was set up before it broke, you may not be able to fix it without losing your data.

Do you have backups?

Well, that's a pity. I had no real problems with virtualization, except some bad sectors on my previous disks. I think I have a backup... well, I certainly hope so, anyway; this setup was supposed to be my backup solution.

Thanks for your help anyway. I'll read the other topic and see if there's anything left to do.

And now that I think of it: the reset procedure 'hung'; it didn't respond for so long that I just reset the machine manually. Maybe the metadata went crazy there or something?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
We just had a discussion about RDM in the http://forums.FreeNAS.org/threads/disks-not-configured-in-FreeNAS-9-1-release.14287/ thread. I posted there, and hopefully those guys will be able to help you. It's not that I'm choosing not to; it's that I'm not experienced enough to think I have a chance of recovering your data. Hopefully those guys who argued with me just days ago about how RDM should work, blah blah blah, can see that this really is a problem.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Don't do anything with those disks yet. See if one of those guys responds. They argued with me over this; now they can see how bad "bad" can be.

In all honesty, I'd seriously consider getting FreeNAS out of the VM. It's just dangerous and asking for lost data (as you may have found out for yourself). :(

Sorry, I can't edit my own posts. The forum software is broken with Firefox right now. /sigh.
 

Thomas

Dabbler
Joined
Jun 30, 2013
Messages
29
I see. I'm not totally uninformed on this subject, but I really never read that there might be a danger here (and a big one, it appears). Moreover, I mostly read about people encouraging ZFS in a VM!

I'll leave the disks alone for the moment. I have been thinking about moving the pool out of the VM for some time now; maybe this is a 'subtle hint' toward the right choice :P
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah, it's something that a lot of people complain about. If you read up on ZFS and how it works, it needs direct disk access with as little as possible between it and the hardware. Virtualizing is a dangerous game because you add a layer that ZFS can't see, while it expects full, direct disk access.

The manual has been revised with the appropriate warnings. They may or may not have existed when you set up the machine. This is something that has bitten so many people it isn't funny. I had really hoped not to see another thread like this one. :(

Let's hope one of those guys who were upset with me for telling them they don't know what they are doing will respond. They might actually have a chance of recovering your data. I know our ESXi forum god doesn't even entertain many questions/comments about virtualizing, because he's tired of telling people what not to do, having them ignore him, and then having them beg him to spend time recovering their data. He's of the opinion that if you can't figure it out from the manual and forum warnings, you pretty much deserve what's coming (and I don't blame him). This used to be a weekly thread topic.

Good luck! If you do get your data back, please report back with what was wrong and how you fixed it.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You may want to post in that thread and ask if someone could provide any insight. I wouldn't be surprised if one or more of them have me on their ignore list... lol.
 

Thomas

Dabbler
Joined
Jun 30, 2013
Messages
29
Thank you, cyberjock! Reading the other thread, I got some ideas though:

- FlynnVT: do you have any more information on VT-d? My hardware should be VT-d capable, and I might use it to access the disks without RDM, thus maybe recovering my data...
- I'm on 8.3.1 now; I'll try 9.1/9.2. Who knows...
- I might boot FreeNAS live instead of ESXi (4.1) and try to access the disks.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
One thing I will say is this (and you are welcome to try it if you agree with the logic): you might try pulling those physical disks out of the system and putting them in another machine. "In theory" it should work. But "in theory" RDM shouldn't be causing some of the problems it has caused. Go figure!
 

Thomas

Dabbler
Joined
Jun 30, 2013
Messages
29
cyberjock said:

One thing I will say is this (and you are welcome to try it if you agree with the logic): you might try pulling those physical disks out of the system and putting them in another machine. "In theory" it should work. But "in theory" RDM shouldn't be causing some of the problems it has caused. Go figure!

Haha, yeah, I thought of that "theory"! Unfortunately, I don't have another machine at hand where the server stands, but if all else fails I'm going to try just that.
 

Thomas

Dabbler
Joined
Jun 30, 2013
Messages
29
Well, no luck with a bare-metal installation of FreeNAS either! :( Guess the VT-d option won't help then. I'm stuck once more. Oh, and I tried 9.1, but that didn't work either.

Code:
Geom name: ada0
Providers:
1. Name: ada0
Mediasize: 1000204886016 (931G)
Sectorsize: 512
Mode: r0w0e0
fwsectors: 63
fwheads: 16
 
Geom name: ada1
Providers:
1. Name: ada1
Mediasize: 1000204886016 (931G)
Sectorsize: 512
Mode: r0w0e0
fwsectors: 63
fwheads: 16
 
Geom name: ada2
Providers:
1. Name: ada2
Mediasize: 1000204886016 (931G)
Sectorsize: 512
Mode: r0w0e0
fwsectors: 63
fwheads: 16
 
Geom name: ada3
Providers:
1. Name: ada3
Mediasize: 1000204886016 (931G)
Sectorsize: 512
Mode: r0w0e0
fwsectors: 63
fwheads: 16

Code:
[root@freenas ~]# gpart status
Name Status Components
ada1s1 OK ada1
da0s1 OK da0
da0s2 OK da0
da0s3 OK da0
da0s4 OK da0
da0s1a OK da0s1

Code:
[root@freenas ~]# camcontrol devlist
<SAMSUNG HD103SI 1AG01118> at scbus2 target 0 lun 0 (pass0,ada0)
<SAMSUNG HD103SJ 1AJ10001> at scbus3 target 0 lun 0 (pass1,ada1)
<SAMSUNG HD103SI 1AG01113> at scbus4 target 0 lun 0 (pass2,ada2)
<SAMSUNG HD103SI 1AG01113> at scbus5 target 0 lun 0 (pass3,ada3)
<JetFlash Transcend 4GB 8.07> at scbus6 target 0 lun 0 (pass4,da0)
[root@freenas ~]# 
 

FlynnVT

Dabbler
Joined
Aug 12, 2013
Messages
36
From the tone of certain replies in that recent conversation on the linked thread, you might imagine that "we" wrote ESXi and forced everybody to use it! Nonetheless, here's my take on your situation. I hope I can help.

Something initially happened that caused the web GUI not to show? Would the VM still start and boot to the console? Even if the storage disk RDMs had been messed up (in a content sense), BSD should still have started and booted to the console and GUI. If the RDMs/discs had disappeared from the bus, the VM would have refused to start until the missing discs were removed or relinked.

In any case, you booted the FreeNAS ISO and asked it to reset/reinstall? Did you run this just once, and did you definitely select that 8GB da0 (virtual?) root disc as the target?

It seems odd that all 3 drives would be damaged simultaneously. Do they sound otherwise OK? Now, perhaps it's the ghost of ESXi, but could it have been a power spike or a lightning strike? Have you checked the SMART status? (e.g. smartctl -a /dev/da1)

Running through what you've posted; under ESXi:
  1. ZFS didn't find any valid vdev/pool (blank zpool status & zpool import). This is definitely a problem.
  2. The emulated discs seem to be detected, enumerated on the SCSI buses OK (dmesg & camcontrol devlist) but have no disklabels (blank gpart status & gpart list). This isn't necessarily a problem if you were using a ZFS vdev running on "whole disks". Did you originally add the raw devices to the vdev by hand rather than using the FreeNAS GUI?
On bare metal:
  1. The discs seem to be detected and enumerated on the buses, but again have no disklabels (it seems you've pasted geom disk list & camcontrol devlist). Again, the missing gpart status isn't necessarily a big deal.
  2. Did you try auto-import and the zpool commands at this stage too?
"geom disk list" from under ESXi would have been useful, but no worries.

I'd suggest looking at the raw block device(s) and seeing if there's a GPT/MBR or ZFS header. On bare metal, it looks like your ZFS drives have been picked up as ada0, ada2 & ada3. Under ESXi, they were da1, da2 & da3. Try one of these, depending on where you are running now. If you were using whole-disks then hopefully you'll simply see a ZFS header (notice the version and name strings), in which case retry the import routine:
Code:
~# cat /dev/ada0 | od -A x -c | head
<or>
~# cat /dev/da1 | od -A x -c | head
 
0000000   \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
*
0003fd0   \0  \0  \0  \0  \0  \0  \0  \0 021   z  \f 261   z 332 020 002
0003fe0    ?   *   n 177 200 217 364 227 374 316 252   X 026 237 220 257
0003ff0  213 264   m 377   W 352 321 313 253   _   F  \r 333 222 306   n
0004000  001 001  \0  \0  \0  \0  \0  \0  \0  \0  \0 001  \0  \0  \0   $
0004010   \0  \0  \0      \0  \0  \0  \a   v   e   r   s   i   o   n  \0
0004020   \0  \0  \0  \b  \0  \0  \0 001  \0  \0  \0  \0  \0  \0  \0 034
0004030   \0  \0  \0      \0  \0  \0      \0  \0  \0 004   n   a   m   e
0004040   \0  \0  \0  \t  \0  \0  \0 001  \0  \0  \0 004   z   f   s   1


If not, then try this:
Code:
~# cat /dev/ada0 | file -
/dev/stdin: x86 boot sector; partition...

If it's detected as an MBR or GPT partition, then it may be a case of a mangled partition table, rejected by BSD, so that ZFS can't find any block devices with a ZFS header. Unless the "cat /dev/xxx | od -A x -c" command shows all \0's, \377's, or random rubbish (an outright disc failure [power spike?], or ESXi causing the referred-to worst-case corruption), my next suggestion would be to search the block device for a ZFS header:
Code:
~# cat /dev/da1 | od -A x -x | grep 7a11 | head

0003fd0      0000    0000    0000    0000    7a11    b10c    da7a    0210
001ffd0      0000    0000    0000    0000    7a11    b10c    da7a    0210
0020fd0      0000    0000    0000    0000    7a11    b10c    da7a    0210
0021fd0      0000    0000    0000    0000    7a11    b10c    da7a    0210
0022fd0      0000    0000    0000    0000    7a11    b10c    da7a    0210
0023fd0      0000    0000    0000    0000    7a11    b10c    da7a    0210
0024fd0      0000    0000    0000    0000    7a11    b10c    da7a    0210
0025fd0      0000    0000    0000    0000    7a11    b10c    da7a    0210
0026fd0      0000    0000    0000    0000    7a11    b10c    da7a    0210
0027fd0      0000    0000    0000    0000    7a11    b10c    da7a    0210


If found, but not at (effectively) 0K/256K from the start or 512K/256K from the end of the block device, then something may simply have mangled the partition table(s), though that seems unlikely for all 3 discs. (Address line 3fd0 above is at ~16K, and refers to a ZFS header beginning at 0.) You could then conceivably construct a partition table that creates block devices at the base of the ZFS structure.
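Those four label locations can also be checked directly and read-only. dump_labels below is my own sketch (not part of FreeNAS); it assumes 512-byte sectors, and the diskinfo size lookup in the usage line is a FreeBSD-ism:

```shell
#!/bin/sh
# Sketch: dump the head of each of the four 256 KiB ZFS vdev labels --
# L0/L1 at 0 and 256K from the start, L2/L3 at 512K and 256K from the end
# of the vdev. Read-only; the size must be supplied in bytes.
dump_labels() {
    dev="$1"; size="$2"
    for off in 0 262144 $((size - 524288)) $((size - 262144)); do
        echo "== label at byte offset $off =="
        # a live label starts with an nvlist carrying 'version' and 'name'
        dd if="$dev" bs=512 skip=$((off / 512)) count=8 2>/dev/null \
            | od -A x -c | head -4
    done
}
# e.g. dump_labels /dev/ada0 "$(diskinfo /dev/ada0 | awk '{print $3}')"
```

If the vdev lived inside a partition rather than on the whole disc, substitute the partition's device node and size instead.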

No need to go to extremes just yet though. The first "cat | od | head" will be the most insightful here.
 

Thomas

Dabbler
Joined
Jun 30, 2013
Messages
29
First of all, thank you for your reply! I really appreciate you trying to help me, despite all the arguing in the other topic.

From the data I have collected with your 'tests' I'm not getting much happier, but maybe you see more in it than I do. There wasn't a power spike or anything like that. I reset the machine through the BSD console, using option 8) Reset to factory defaults. The machine would boot up to BSD and the console fine, and everything worked there without error. The only thing not working was the web interface (I have no idea why or when it stopped working). I guess I'll call Ghostbusters and see if they can handle ESXi ghosts too :P

But without further ado, here are the results:

Bare metal "cat" command:
Code:
[root@freenas] ~# cat /dev/ada0 | od -A x -c | head
0000000    Q   < 256 325   P 226   O 216 243 017   : 315   @ 216   S 332
0000010  034   W 312 304 316   s 214   R   k   X   * 251 232   7   g 327
0000020  212 206 216   J 275 247   T 353   Y   $ 374 347   r 302 322 325
0000030    { 246 230 346 017 241 251 304   '  \b 241 334   , 252   ; 254
0000040    z 361   E 317 240   3   c 024 247 314      \a 371   * 356   <
0000050  355   s 341   " 364 276   )   [ 273   k   8   q 313   Q   h   &
0000060  370       p 373   G 203 330   0   6 004   N 340 240   y 263 002
0000070    n 372 277   1   U   # 314 246 006 032 367 344   L   T 221 245
0000080  257 376 016   =   K 302 245 032 233 315  \t  \r 250   G 246   a
0000090  034 362 327 324 356 031   7   % 004   i 201   > 177 305 317 301
[root@freenas] ~#


Bare metal "gpart status" command: (Please note: I switched the cables around, so ada2 is now the non-interesting disk. We're looking for ada0, ada1 and ada3 now.)
Code:
[root@freenas] ~# gpart status
  Name  Status  Components
ada2s1      OK  ada2
 da0s1      OK  da0
 da0s2      OK  da0
 da0s3      OK  da0
 da0s4      OK  da0
da0s1a      OK  da0s1


The second "cat" command:
Code:
[root@freenas] ~# cat /dev/ada0 | file -
/dev/stdin: data


The "grep 7a11" search:
Code:
[root@freenas] ~# cat /dev/ada0 | od -A x -x | grep 7a11 | head
0011c70      97d2    8e20    cbca    0f5d    0040    7b77    eaa6    7a11
0074cd0      467a    1327    7a11    cad3    f911    0087    b644    72bf
0076b60      1918    90ae    4073    368f    f3f8    8657    7a11    86eb
007a110      c591    d60c    1364    70a3    982d    20df    a5fa    985e
0080750      4a1b    8142    c918    1673    7a11    80d0    774d    438c
0083b90      dd85    f689    7a11    b734    2057    8411    f7f2    fba0
009bd80      a1c5    c930    519d    5f78    7a11    f682    9f36    2124
009ce70      f9b3    c976    8a55    0695    b72b    7a11    c2ec    2ff2
00c2430      640c    e011    7185    7232    7a11    0920    1ea0    1b46
0124280      301f    7a11    e296    4208    031f    31d1    0080    f4a7
[root@freenas] ~#


And a part of the SMART readout for ada0:
Code:
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%     19102         -


Someone on another forum told me I could try ZFSGuru, but I don't know if that'll do anything, because the ZFS version is exactly the same... I'll just wait for your response.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I wouldn't try ZFSGuru just yet; it may make things worse without you knowing it. Let FlynnVT try to help you. I'm interested to see if he sees what happened, because you aren't the first to have this kind of problem, and you definitely won't be the last if people keep thinking RDM works... It would be nice to have a solution for those users who lose their data to the RDM gods.
 

FlynnVT

Dabbler
Joined
Aug 12, 2013
Messages
36
OK, so the base of that disc is neither an MBR/GPT nor a ZFS header. Something was definitely corrupted. Best case: this oft-mentioned ESXi issue is simply that the address of a sector write was truncated to 0, wiping out the old disc label.

If you set up the discs via FreeNAS using the standard options, then you'll have a 2GB swap partition at the head of each disc, with ZFS after it. That "cat /dev/ada0 | od -A x -x | grep 7a11 | head" search you ran found a full set of matches within only ~1.2MB (possibly "random" swap data).

Let's look beyond and narrow the search:
Code:
cat /dev/ada0 | dd bs=1024 skip=1900000 | od -A x -x | grep 7a11 | grep b10c | head

Try dropping the "grep b10c" if this shows nothing - I'm not sure those pairs will always fall on the same 16-byte od line under all circumstances. (I don't have a live FreeNAS installation here, so I'm starting a bit further back, at about 1.9GB. I'm also deliberately using cat as the source here, as it's safer than a dd typo.)

Trivia lesson: endian issues aside, the ZFS tree root (or "uberblock") is marked with "00BA B10C", while the key/value pairs are "7A11 DA7A B10C" ("tall data block"?). Hex speak is the highest form of humour :)
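A read-only scan for that uberblock magic in both byte orders might look like this (scan_uberblocks is a hypothetical helper of mine; it relies on the uberblocks being 1 KiB-aligned, so the magic always lands at the start of a 16-byte od line):

```shell
#!/bin/sh
# Sketch: scan a device (or image file) for the ZFS uberblock magic
# 0x00bab10c. od -x prints little-endian 16-bit words, so the magic shows
# up as "b10c 00ba" (little-endian on disk) or "ba00 0cb1" (byte-swapped).
scan_uberblocks() {
    dev="$1"
    od -A x -x "$dev" | grep -E '^[0-9a-f]+ +(b10c +00ba|ba00 +0cb1)' | head
}
# e.g. scan_uberblocks /dev/ada0
```

Hits clustered together would suggest an uberblock array inside a surviving label; scattered single hits are more likely noise.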
 