Question about iSCSI


TravisT

Patron
Joined
May 29, 2011
Messages
297
iSCSI - Only 4 targets show up

I'm trying to get iSCSI set up on FreeNAS-8.2.0-RELEASE-p1-x64 and I'm running into some trouble. I'm totally new to iSCSI, so keep that in mind.

I set up a 4-disk zpool named "raptor" and created a zvol for each virtual machine I am configuring. Let's say the names are:

machine1.disk1
machine2.disk1
machine3.disk1
machine4.disk1

I then created device extents for each of these, and created the targets/extents required to get iSCSI up and running. I rescanned the iSCSI adapter on my VMware ESXi 5 box and all four disks were detected. Cool.
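
For anyone following along, the zvols themselves can also be created from the FreeNAS shell instead of the GUI. A minimal sketch, assuming the pool really is named "raptor"; the 100G size here is just an example, not something stated in the post:

Code:
# Create one zvol per virtual machine on the "raptor" pool
# (100G is an example size; use whatever each VM actually needs).
zfs create -V 100G raptor/machine1.disk1
zfs create -V 100G raptor/machine2.disk1
zfs create -V 100G raptor/machine3.disk1
zfs create -V 100G raptor/machine4.disk1

# Verify the volumes exist.
zfs list -t volume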

When setting up a new virtual machine, ESXi still wants to save the VM's settings on the host's local hard drive. I'm trying to free that drive up and remove all local storage from the host, so I figured I'd just create another iSCSI extent and attach it to the ESXi host to store the VM settings. However, when I set up a fifth target/extent, the ESXi box will not discover it. I've tried both manual (static) and automatic (dynamic) discovery, and the fifth extent was created exactly the same way as the first four. Am I doing something wrong or missing something? Could this be a limitation of ESXi? Is there a better way to do this?

I've searched the web and can't find anything useful.
 

TravisT

Patron
Joined
May 29, 2011
Messages
297
So I figured this out. After restarting the iSCSI service, the additional target showed up.
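
For reference, the same restart can be done from the FreeNAS console; a minimal sketch, assuming FreeNAS 8.x is using istgt as its iSCSI target daemon (toggling Services > iSCSI in the GUI does the equivalent):

Code:
# Restart the iSCSI target daemon so newly added targets/extents
# are picked up by initiators.
service istgt restart

# Verify the daemon came back and is listening on the iSCSI port.
service istgt status
sockstat -4 -l | grep 3260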

Now I'm fighting with ESXi after reconfiguring the target configs. It says one drive is unattached and shows 5 paths but only 4 devices. Back to google...
 

matram

Dabbler
Joined
Aug 22, 2012
Messages
18
It says one drive is unattached and shows 5 paths but only 4 devices. Back to google...

A device can have multiple paths if there is more than one NIC through which traffic can reach it. It would be normal to have two paths to one device for failover.
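
A quick way to compare paths against devices is from the ESXi shell; a sketch, assuming SSH access to the host and a software iSCSI adapter named vmhba33 (the adapter name on your host may differ):

Code:
# List every storage path the host knows about, one block per path.
esxcli storage core path list

# List the devices those paths resolve to; a healthy iSCSI LUN shows up
# in both lists, an orphaned path shows up only in the first.
esxcli storage core device list

# Rescan one adapter (or all of them) after changing targets on the NAS.
esxcli storage core adapter rescan --adapter=vmhba33
esxcli storage core adapter rescan --all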

/Mats
 

TravisT

Patron
Joined
May 29, 2011
Messages
297
Mats,

Thanks for the reply. I plan to add multiple paths in the future, but right now I only have one path. The paths I'm referring to are shown below:

(Screenshots attached: Path.jpg, devices.jpg)

My terminology is probably all wrong, but here's the deal. I added four disks and started iSCSI, and everything seemed to work. I then added another disk and it didn't show up. Before I figured out that I needed to restart the iSCSI service, I attached one of the drives that had been named for a virtual machine to the ESXi host instead, intending to use it only for storing VM settings. Once I figured out the service restart, I wanted the disks to correspond correctly to the machines they would be used for, so I detached that drive from the ESXi host and added it to the correct VM. When I did that, the VM said I couldn't install Windows because the disk was unbootable (it had been formatted for ESXi).

Although I'm sure there was a better way to do it, I just deleted the zvol, recreated it, and set up iSCSI again. When I did that, ESXi didn't seem to like it: it saw the re-added disk as a new path, but it doesn't list the newly re-added disk under devices. Also, note that in the "runtime name" column there is no vmhba39:C0:T3:L0; that was the original disk that I deleted and re-created. Maybe there's a way to reassign these numbers, but I couldn't find it. I tried using the same serial number as the original one, but that didn't seem to work. Probably a simple fix, but I don't want to screw this up again/more.
 

matram

Dabbler
Joined
Aug 22, 2012
Messages
18
Some suggestions

Travis,

I am not sure if I am able to help, but ...

Have you tried to rescan the iSCSI adapter in vSphere? Use "Rescan All..." at the top right of the Configuration > Storage Adapters page in the vSphere client.

A more complete "remount" of iSCSI on the ESXi side would be to:
1. Unmount the iSCSI datastore under Configuration > Storage > Datastores in the vSphere client: right-click the datastore and choose "Unmount" (this assumes no VM is using the datastore).
(1b. Use "Delete" instead if you want to completely scrap the old datastore.)
2. Detach the iSCSI device under Configuration > Storage > Devices: right-click the iSCSI device and choose "Detach".
3. Remove the FreeNAS IP addresses from Dynamic Discovery and Static Discovery: Configuration > Storage Adapters, right-click the iSCSI adapter, "Properties".
4. Answer YES when asked whether you want to "Rescan All", and you should have no visible iSCSI datastores, devices, or paths in the vSphere client.

Assuming your datastore was properly formatted, you should be able to reattach and remount it by doing things in roughly the reverse order (a CLI sketch of the same steps follows below):
1. Add the FreeNAS IP address to Dynamic Discovery for the iSCSI storage adapter and answer YES to the "Rescan All" question.
2. Attach the iSCSI device by right-clicking the greyed-out device under Storage > Devices and selecting "Attach".
3. Under Storage > Datastores select "Rescan All", then right-click the greyed-out datastore and choose "Mount".
(3b. If you used "Delete" in the first step, there is no datastore to remount; you have to use "Add Storage" instead.)
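
For those more comfortable in the ESXi shell, the discovery and rescan parts of the above can also be done with esxcli; a rough sketch, assuming the software iSCSI adapter is vmhba33 and the FreeNAS box is at 192.168.1.10 (substitute your own adapter name and address):

Code:
# Remove and re-add the FreeNAS portal from dynamic (send targets) discovery.
esxcli iscsi adapter discovery sendtarget remove --adapter=vmhba33 --address=192.168.1.10:3260
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.10:3260

# Rescan the adapter so new targets, devices and VMFS volumes are picked up.
esxcli storage core adapter rescan --adapter=vmhba33

# Confirm what the host now sees.
esxcli storage core device list
esxcli storage filesystem list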

When a datastore is created, I believe a unique identifier (UUID) is stored within the datastore which allows ESXi to identify it. This UUID is what the VMs use to find the datastore. Based on your description this seems to be one of the problems, i.e. you effectively deleted a datastore from the FreeNAS end by deleting the zvol.
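
Related to that UUID: if ESXi sees an existing VMFS volume on a device whose identity has changed (for example a LUN re-presented with a different serial number), it treats it as a "snapshot" volume and will not mount it automatically. A sketch of how to check for and mount such a volume from the ESXi shell, assuming a datastore label of "raptor-ds" (a made-up name for illustration):

Code:
# List VMFS volumes that ESXi considers snapshots/replicas (unmounted copies).
esxcli storage vmfs snapshot list

# Either mount the volume keeping its existing signature/UUID...
esxcli storage vmfs snapshot mount --volume-label=raptor-ds

# ...or give it a new signature if the original datastore is also present.
esxcli storage vmfs snapshot resignature --volume-label=raptor-ds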

I am not sure how to recover from that. If you had the configuration file on a local disk, I guess you could go into the VM settings, remove the hard disk from the VM, and then create a new hard disk on good storage; that should break any link to the old (non-existent) storage. Of course, any data on that initial hard disk (on iSCSI) would be lost.

You can also SSH into the ESXi host to access the local filesystem; your datastores will be visible there. If you are not very sure about what you are doing, I would probably not try it.
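
For reference, mounted datastores show up under /vmfs/volumes on the host; a harmless, read-only look, assuming SSH or the ESXi Shell is enabled:

Code:
# Each mounted datastore appears as a directory named by its UUID, plus a
# human-readable symlink carrying the datastore label.
ls -l /vmfs/volumes/

# Show mounted VMFS/NFS filesystems with their UUIDs and labels.
esxcli storage filesystem list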

My approach in these situations, if I have no data to lose, is typically to scrap everything and redo it properly from the start.

On a more general note, I personally would not structure my storage the way (I think?) you are doing. I would keep each VM's disks together with its VM settings in the same folder. I would also use one disk / iSCSI datastore for all VMs. In my case I have about 15 VMs, two NAS boxes (primary and backup) and three physical servers in my home office / test setup.

Best of luck /Mats
 

TravisT

Patron
Joined
May 29, 2011
Messages
297

I'm just seeing this reply, but I've somehow fixed the problem since I posted.

I ended up having to reboot the ESXi host, and after the reboot things seemed to be working correctly. I did try the above-mentioned steps first, but it wouldn't let me unmount or delete the datastore because it was showing as active. Maybe deleting the VM and re-adding it after everything was fixed would have worked. Rescanning would show the path, but not the device.

I've messed around with the CLI of the host before, and don't feel very comfortable doing so - I generally stay away from that.

As for my setup, I have no idea if I'm doing things per any best practices out there. I'm completely self-taught in VMware and iSCSI, so I'm trying to learn from this. I would also like something that is easy to back up/restore in the event of a configuration change or failure. Currently, I'm running my VMs off a local SATA drive on my ESXi host. I know that this is a recipe for disaster, and it's proven true in the past. My new setup is much different.

The way I currently have it set up is that my ESXi server has one zvol mapped as its "local" storage. I believe I created this datastore with 100 GB of usable space. Then I have another zvol for each machine, with a ".disk1" suffix in case I need/want to add a disk later. Each of these is added as an RDM to the respective VM. I chose to store the settings with the VMs, but it still asks for a location for the RDM mapping files, which I store on the zvol mapped to the ESXi host for local storage. Once I get all this set up and configured and transfer services over to the iSCSI shares from the local SATA disk, I will pull the SATA disk and not have any local storage on the host.
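
For what it's worth, the RDM mapping file the wizard asks about can also be created by hand from the ESXi shell with vmkfstools; a sketch, assuming a hypothetical device ID (naa.6589cfc000000) for the iSCSI LUN and a datastore named "esxi-local" to hold the pointer file:

Code:
# Find the device identifier (naa.*) of the iSCSI LUN to map.
esxcli storage core device list | grep -i naa.

# Create a virtual-compatibility RDM pointer file on the "esxi-local" datastore.
# (Use -z instead of -r for a physical-compatibility RDM.)
vmkfstools -r /vmfs/devices/disks/naa.6589cfc000000 \
    /vmfs/volumes/esxi-local/machine1/machine1.disk1-rdm.vmdk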

On the iSCSI side of things, I currently have four 1 TB drives in a ZFS striped-mirror volume (the ZFS equivalent of RAID 10). I plan to have the iSCSI traffic on a separate NIC on both the FreeNAS box and the ESXi box.
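
When that dedicated NIC goes in on the ESXi side, the VMkernel port can be bound to the software iSCSI adapter so the iSCSI traffic actually uses it; a sketch, assuming the new VMkernel interface is vmk1 and the software iSCSI adapter is vmhba33:

Code:
# Bind the dedicated VMkernel port to the software iSCSI adapter.
esxcli iscsi networkportal add --nic=vmk1 --adapter=vmhba33

# Confirm the binding and rescan so paths come up over the new NIC.
esxcli iscsi networkportal list --adapter=vmhba33
esxcli storage core adapter rescan --adapter=vmhba33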

Please offer any recommendations you may have; I'm sure I'm doing something wrong!
 