SOLVED Unable to start or connect Virtual Machine - FreeNAS 11 - Intel i7 2600k

Status
Not open for further replies.

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
should I run zpool labelclear DISK on each disk to be sure
If it's resilvering the disk back into the pool, there wouldn't be any need to do the zpool labelclear--I was suggesting that just so FreeNAS would see it as a new disk. If it's working without it, that should be fine.
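For anyone following along, the command in question is a one-liner (the device name is just a placeholder, and it should only be run against a disk that is no longer part of an active pool):

Code:
# Clear the on-disk ZFS vdev labels so FreeNAS treats it as a brand-new disk
zpool labelclear -f /dev/ada4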
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
For the next disk, you could try the recovery that is recommended: gpart recover ada4. If it succeeds, see if gpart show ada4 still reports the table as corrupt.
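In other words, roughly this (ada4 being whichever disk you're working on):

Code:
gpart recover ada4   # rebuild the damaged copy of the GPT from the intact one
gpart show ada4      # re-check; the scheme should no longer be flagged CORRUPT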
 

moosethemucha

Dabbler
Joined
Feb 25, 2017
Messages
33
With regard to recovery, should I run those commands once the disk has been offlined, or can it be done whilst the pool is active? The resilver is currently about 80% done - this is going to take a while.

Thank you everyone for the help.
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
You can try it online, but if it complains, you will probably have to do it with the pool exported/offline. Wait until the resilver is complete, and treat this as something that could potentially create a drive failure: if anything is wrong afterwards, don't take any other drives offline or replace them.
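If it does complain, the exported sequence would look roughly like this (pool and device names are placeholders):

Code:
zpool export volume1   # take the pool offline so nothing is using the disk
gpart recover ada4     # retry the GPT repair
zpool import volume1   # bring the pool back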
 

moosethemucha

Dabbler
Joined
Feb 25, 2017
Messages
33
I'm not really sure what you mean - in any case, the resilver has completed:

Code:
scan: resilvered 3.06T in 25h5m with 0 errors on Tue Dec 19 17:52:57 2017


What I don't understand is how I can run gpart recover on a drive that doesn't really have a GPT table - for example, if I run it on ada4 I get an error:
Code:
gpart: arg0 'ada4': Invalid argument
since it doesn't appear in the gpart list output:

Code:
Geom name: ada2
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 5860533134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada2p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   rawuuid: a2272f74-50b4-11e7-9238-50e549e84010
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: ada2p2
   Mediasize: 2998445412352 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   rawuuid: a23bf851-50b4-11e7-9238-50e549e84010
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2998445412352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 5860533127
   start: 4194432
Consumers:
1. Name: ada2
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r2w2e5

Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 62517214
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da0p1
   Mediasize: 104857600 (100M)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 17408
   Mode: r0w0e0
   rawuuid: 1440971d-5108-11e7-91c2-50e549e84010
   rawtype: c12a7328-f81f-11d2-ba4b-00a0c93ec93b
   label: (null)
   length: 104857600
   offset: 17408
   type: efi
   index: 1
   end: 204833
   start: 34
2. Name: da0p2
   Mediasize: 31903932416 (30G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 104878080
   Mode: r1w1e1
   rawuuid: 145e5293-5108-11e7-91c2-50e549e84010
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 31903932416
   offset: 104878080
   type: freebsd-zfs
   index: 2
   end: 62517207
   start: 204840
Consumers:
1. Name: da0
   Mediasize: 32008830976 (30G)
   Sectorsize: 512
   Mode: r1w1e2

Geom name: ada0
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 5860533127
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: ada0p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r1w1e1
   rawuuid: a885301b-e3b6-11e7-9160-50e549e84010
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: ada0p2
   Mediasize: 2998445408256 (2.7T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r1w1e2
   rawuuid: a890b501-e3b6-11e7-9160-50e549e84010
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2998445408256
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 5860533119
   start: 4194432
Consumers:
1. Name: ada0
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Mode: r2w2e5

Geom name: ada1
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 5860533127
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: ada1p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   rawuuid: e1f0094e-e3b6-11e7-9160-50e549e84010
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: ada1p2
   Mediasize: 2998445408256 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   rawuuid: e213522f-e3b6-11e7-9160-50e549e84010
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2998445408256
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 5860533119
   start: 4194432
Consumers:
1. Name: ada1
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r2w2e5


I'm just going to continue resilvering the drives, because when I try to run recovery on a drive that I've just resilvered, such as ada1 or ada0 (I resilvered two drives at the same time since I've got a raidz2 pool), I receive this:

Code:
ada2 recovering is not needed

I'll let you guys know how the rest of this goes - then hopefully I can update to 11.1 and see if I can get bhyve to work.
 

moosethemucha

Dabbler
Joined
Feb 25, 2017
Messages
33
Ok, so I finally finished resilvering all my drives and still have the same issue with the VM - exactly the same, including the lack of logs within messages. I'm going to try running the VM manually via the command line. I'm also going to try installing 9.10 and see if the problem persists - the way I see it, it's either a bug that has been present since 9.10 (unlikely but possible) or a hardware configuration issue.
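For the manual attempt, I'm planning to try something along these lines - a rough hand-rolled bhyve boot using the UEFI firmware and a VNC framebuffer. The bridge/tap names, the NIC (em0), the ISO path and the firmware path are all guesses on my part that I'll have to adapt:

Code:
kldload vmm                                  # hypervisor module (no-op if already loaded)
ifconfig tap0 create
ifconfig bridge0 create
ifconfig bridge0 addm em0 addm tap0 up       # bridge the guest NIC onto the LAN

bhyve -c 2 -m 2G -H -A -w \
  -s 0:0,hostbridge \
  -s 3:0,virtio-blk,/dev/zvol/volume1/ubuntuVM-storage \
  -s 4:0,ahci-cd,/mnt/volume1/isos/ubuntu.iso \
  -s 5:0,virtio-net,tap0 \
  -s 29:0,fbuf,tcp=0.0.0.0:5900,wait \
  -s 31:0,lpc -l com1,stdio \
  -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
  ubuntuvm

bhyvectl --destroy --vm=ubuntuvm             # tear the instance down afterwards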

Any further suggestions would be greatly appreciated. Could my mobo, or the fact that I don't have ECC RAM, be the issue? From memory my mobo doesn't support ECC; I'll do some investigation.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I don't know that I have any other suggestions to offer on that, other than that it wouldn't be the lack of ECC RAM. ECC is a good idea for lots of reasons that have been thoroughly discussed, but it won't keep you from booting a VM.
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
It appears to me that all of these drives were set up when they were attached to some sort of motherboard RAID. This puts the secondary GPT table in the wrong place, not really at the end of the disk. The real end of the disk is hidden by the RAID controller. The problem with trying to repair that is that it could overwrite some of the ZFS information at the end of the disk.

The first step here is the same as always: back up ALL DATA YOU WISH TO KEEP to external media.

After that, it would be quickest to detach the volume and wipe the disks (with the GUI option for that). A quick wipe is, well, quick. Then restore from backup.

Otherwise, detach one disk at a time, wipe it, then reattach it and wait for it to resilver. Continue until they are all done.

As above, do not start that until all data you wish to keep has been backed up.
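For reference, the rough CLI equivalent of a quick wipe on a single detached disk is something like this (ada4 is only a placeholder, and it destroys everything on that disk):

Code:
gpart destroy -F ada4                          # remove the GPT, primary and backup
zpool labelclear -f /dev/ada4                  # clear any stale ZFS vdev labels
dd if=/dev/zero of=/dev/ada4 bs=1m count=32    # zero the first 32 MB for good measure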
 

moosethemucha

Dabbler
Joined
Feb 25, 2017
Messages
33
Otherwise, detach one disk at a time, wipe it, then reattach it and wait for it to resilver. Continue until they are all done.

I'm not sure if you've read the thread, but I've done exactly what you've said.

As danb35 suggested, this is exactly what occurred:

Yes, it is, and I don't think I have a good explanation of what's going on there. If I were to wildly speculate, I might guess something like this:
  • The other disks (ada0-1 and 3-5) were previously used in a FreeNAS pool which was created through the web GUI. This resulted in their being partitioned as FreeNAS typically does.
  • A GPT-partitioned disk has two copies of the partition table stored, one at the beginning of the disk and one at the end (this one I'm pretty sure is true)
  • When you created a new pool at the CLI using the whole disks, that overwrote the partition table at the beginning of the disks, but not the one at the end.
  • There's been some change in the disk/partition recognition code between FreeBSD 11.0 and 11.1, such that the remaining partition tables on the disks are seen in 11.1, but not in 11.0.
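If it helps anyone check the same thing, this is a read-only way to look at both layers (ada4 is just a placeholder):

Code:
zdb -l /dev/ada4   # dumps any ZFS vdev labels written to the raw device
gpart show ada4    # shows the GPT view; a scheme flagged CORRUPT means the two copies disagree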
 

moosethemucha

Dabbler
Joined
Feb 25, 2017
Messages
33
So when I ran vm template show I received this message:

Code:
/usr/local/sbin/vm: WARNING: $vm_enable is not set properly - see rc.conf(5).
/usr/local/sbin/vm: ERROR: $vm_enable is not enabled in /etc/rc.conf!


So the rc.conf variables for vm aren't set up correctly. I'm not too sure how to set this up, but then I found this thread - these are the vars I've tried to set, i.e. sysrc ENVVAR=VALUE (this was not maintained during reboot):
Code:
vm_enable="YES"
vm_dir="zfs:/dev/zvol/volume1/ubuntuVM-storage"
vm_list=""
vm_delay="5"


What I'm not sure of is what my vm_dir should be. I've created a zvol in my volume whose path in /dev is /dev/zvol/volume1/ubuntuVM-storage (just a thought: could the '-' be causing some sort of parsing issue?). Would this be my vm_dir path? I doubt it, as I receive this error:

Code:
/usr/local/sbin/vm: ERROR: unable to locate mountpoint for ZFS dataset /dev/zvol/volume1/ubuntuVM-storage


Which makes sense, as it's not mounted. I'm not really sure what I should do here - should my zvol be mounted? Below is my current setup within the FreeNAS GUI for my storage.
Screen Shot 2017-12-24 at 9.57.30 am.png

Does this look ok?
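From what I've read so far, vm-bhyve seems to want vm_dir pointing at a ZFS dataset (pool/dataset) rather than a zvol device node, so I'm tempted to try something like this - the dataset name volume1/vm is just a guess on my part:

Code:
zfs create volume1/vm          # a plain dataset for vm-bhyve to manage its guests under
sysrc vm_enable="YES"
sysrc vm_dir="zfs:volume1/vm"  # note: pool/dataset after the zfs: prefix, not a /dev/zvol path
vm init                        # one-time setup of the directory layout and templates

The catch seems to be that FreeNAS regenerates /etc/rc.conf at boot, which would explain why plain sysrc settings didn't survive a reboot; an rc.conf-type tunable in the GUI looks like the intended way to persist them.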

Anyway, to the root of the problem (as per the thread above) - tunables. I've left these to be automatically generated by FreeNAS (maybe not the best idea, but wouldn't this be considered a bug?). I haven't tried disabling them yet - I thought maybe we should try to figure out why this is occurring first. Below is a picture of the tunables that are there.

Screen Shot 2017-12-24 at 10.01.52 am.png


Also below is my rc.conf file - from what I can tell it's basically the same as the original. Github Raw Code

('example' in the diff below is the file linked above.)
Code:
diff example /etc/rc.conf
43c43
< # We run savecore in middleware.  The rc script
---
> # We run savecore in ix-fstab manually.  The rc script
46a47,49
> # compress coredumps and keep a max of 5 around
> savecore_flags="-z -m 5"
>
75d77
< nginx_login_class="nginx"
88c90
< consul_alerts_args="--alert-addr=localhost:8542 --watch-events --watch-checks --log-level=warn"
---
> consul_alerts_args="--alert-addr=localhost:8542 --watch-events --watch-checks --log-level=info"
90d91
< consul_dir="/var/db/system/consul"
93c94
< consul_args="-server -bootstrap-expect=1 -ui -bind=127.0.0.1 -enable-script-checks=true -log-level=warn"
---
> consul_args="-server -bootstrap-expect=1 -ui"
97,99c98,101
<
< # Enable iocage jails to autoboot if enabled in the GUI
< iocage_enable="YES"
---
 

moosethemucha

Dabbler
Joined
Feb 25, 2017
Messages
33
I've got VMs working, kind of. After some investigating I found my motherboard's Intel virtualization (VT-x) option was disabled - this is after I had changed it; it turns out that for some reason the board wasn't saving my settings after a reboot and would constantly revert back to optimised defaults. After trying a couple of different versions of the BIOS I was able to enable virtualization support on the board. I then booted into 11.1-RC3, set up a new VM, and BOOM, it ran.....
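For anyone who hits the same thing, these generic FreeBSD checks should confirm from the shell whether the BIOS change actually stuck:

Code:
grep -E 'VT-x|Features2' /var/run/dmesg.boot   # the CPU feature flags should list VMX, with a VT-x capability line
kldstat | grep vmm                             # the bhyve hypervisor module should load once VT-x is available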

Then, when I connected to the VM via VNC to install Ubuntu, I was presented with an error message in the VNC view - error 1006. After doing some internet searching it appears it's a reported bug in RC3 - Bug 26990.

I then reverted back to 11.0-U4, set up the VM, and BOOM, now I'm in business.

Thank You to everyone for the help and guidance.
 