hey all-
(Just a little write-up of my experience; maybe someone will benefit from it.)
I have 6 drives installed.
I've got ESXi installed on my NAS box and have dedicated one drive to ESXi and the VMs. I then have FreeNAS installed as a VM. (I'm aware of the lack of redundancy here.)
I have the other 5 drives passed through to my FreeNAS VM as per the instructions provided here:
http://blog.davidwarburton.net/2010/10/25/rdm-mapping-of-local-sata-storage-for-esxi/
(If there is a better way to do this, please let me know.)
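For anyone who doesn't want to click through, the linked guide boils down to creating RDM pointer files on the ESXi host and attaching them to the VM. A rough sketch of what that looks like from the ESXi shell (the device identifier and datastore paths here are just examples from my setup; yours will differ):

```shell
# list the local disks to find their device identifiers
ls -l /vmfs/devices/disks/

# create a virtual-mode RDM pointer file for one physical disk
# (the t10.ATA... identifier and datastore path below are examples only)
vmkfstools -r /vmfs/devices/disks/t10.ATA_____ExampleDisk_____SERIAL01 \
  /vmfs/volumes/datastore1/freenas/rdm-disk1.vmdk
```

Repeat for each drive, then add the resulting .vmdk files to the FreeNAS VM as existing disks.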
I created a RAIDZ volume with the 5 drives. To test the RAIDZ, I powered down the entire server and removed one of the data drives.
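I did this through the FreeNAS GUI, but for reference, the equivalent from the shell looks roughly like this (da1–da5 and the pool name "tank" are example names; substitute your own):

```shell
# create a single-parity RAIDZ pool across five disks
zpool create tank raidz da1 da2 da3 da4 da5

# verify every disk shows ONLINE before pulling one out to test
zpool status tank
```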
When I powered up ESXi and then tried to power up the FreeNAS VM, I got an error informing me that I couldn't power on the VM because one of the disks was "invalid". To get around this, from the vSphere client where I manage the VMs, I edited the settings for the FreeNAS VM and removed the 'faulty' hard drive.
I was then able to get into FreeNAS. FreeNAS alerted me that there was a degraded disk, as expected, and upon checking the data on my CIFS shares, all of it was there. (RAIDZ did its job =))
I powered down FreeNAS and ESXi, put a different hard drive in, powered up ESXi, and performed the above steps to pass the new drive through to FreeNAS.
Going back into FreeNAS, I still had a degraded array. I went to my degraded RAIDZ volume, viewed the disks, and replaced the bad drive with the newly added one.
Reviewing the status of the volume, I could see that all disks were online.
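As I understand it, the GUI's "replace" button is doing something like this under the hood (again, device and pool names here are just examples):

```shell
# tell ZFS to replace the missing/failed disk with the new one;
# this automatically kicks off a resilver onto the new disk
zpool replace tank da2 da5

# the pool status reflects the replacement immediately
zpool status tank
```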
I was still concerned at this point, as I was still getting a 'degraded' alert. After a few minutes, the red alert (in the upper right) turned green and my array was healthy again. I didn't have to do anything to get the rebuild process to start; it just did its magic. I do wish I had been told that the rebuild was taking place, though. Is there a way to check on the rebuild progress/status?
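(Partially answering my own question: if you open a shell on the FreeNAS box, zpool status should show resilver progress while a rebuild is running; assuming your volume is named tank, something like:)

```shell
zpool status tank
# while a rebuild is running, the output includes a progress line
# along the lines of: "resilver in progress ... XX% done, 0h31m to go"
# (exact wording varies between ZFS versions)
```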
Well, I hope this helps and that someone benefits from it.
Thanks-
Matlock