Problem with thin provisioned volume mount

Status
Not open for further replies.

jax157

Cadet
Joined
Apr 26, 2016
Messages
2
I have FreeNAS-9.2.1.3 installed as a VM in a VMware ESX environment. I have a problem with one of the volumes, which is a vmdk disk. It was a thin-provisioned disk, and after some period of time, probably after a reboot, FreeNAS is unable to mount it.

There are errors in the GUI under Storage related to that volume:
error getting available space
error getting total space

I'm sure this is a problem with thin provisioning because the other volumes (all thick) are fine.
I converted the disk to thick, but it did not help.

I need help with:
- how to mount it manually?
- or how to mount it manually under a different OS (Ubuntu, Knoppix, etc.)?
 

Attachments

  • gui_volume.jpg
  • esx_vmdk.jpg
  • gpart_da5.jpg
  • gpart_da5_.jpg

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
Use your backup; VMware might have killed the virtual disk. Read the forum FAQ.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Step 1: Consult the VMware forums to figure out why a vmdk got borked. If not successful, proceed to step 2.
Step 2: Since this looks like it was a single-disk pool, and therefore there is no disk protection, the only option if you've gotten this far is to restore from backups.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
You're using vmdks as storage disks for FreeNAS?!? o_O

You've got some, and by some I mean a lot, of reading to do. Luckily you'll have time while the backup of the failed virtual disk is restoring.
 

jax157

Cadet
Joined
Apr 26, 2016
Messages
2
It turns out that repairing the pool was quite easy.
ZFS has functionality for repairing corrupted files or a whole pool.
In my case there was a problem with the metadata.
During those steps I was warned that I would lose only the last 7 seconds of data.
After the successful import, a scrub checked the pool for errors.
There was not a single error.

Posting a few simple commands showing how to do that:

Code:
zpool import               # list pools available for import and show potential problems

zpool import pool_name     # import a pool

zpool import -F pool_name  # import a pool with the recovery option

zpool status               # status of the imported pools
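
For anyone hitting the same problem later, the sequence above can be sketched as one shell session. This is a sketch, not a guaranteed fix: the pool name `tank` is a placeholder for your actual pool, and `-F` can discard the last few transactions (the last few seconds of writes), which `zpool` warns about before proceeding.

```shell
# List importable pools and any problems ZFS reports with them
zpool import

# Try a normal import first; only fall back to recovery mode if it fails
zpool import tank || zpool import -F tank

# Check pool health, then scrub to verify every block's checksum
zpool status tank
zpool scrub tank
zpool status tank   # re-run to watch scrub progress and the final result
```

A `zpool import -nF tank` dry run is also worth trying first: it reports whether recovery would succeed without actually rolling back any transactions.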
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
jax157 said:
It turns out that repairing the pool was quite easy. [...]
Repairing damage is very different from throwing out the last few transactions.
 