FreeNAS 9.3 sharing volumes to a VMware cluster with vMotion

Joined
Dec 30, 2014
Messages
45
I am in trouble!

I work in an environment with FreeNAS 9.3, with five 1TB SATA disks in a RAIDZ pool, connected over iSCSI to one VMware server.

I configured two gigabit NICs in FreeNAS and two gigabit NICs on the VMware server, using round-robin multipathing.

Now I need to add another VMware server to a cluster and use vMotion, so I need central storage serving these two ESXi servers.

I haven't been sleeping because I'm reading about iSCSI vs. ZFS and NFS vs. ZFS, and honestly I don't know what to think.

I have the budget to change the disks and install SSD drives for ZIL and L2ARC, if necessary.

Please help me with this:

* There will be 25-35 virtualized servers running all flavors of Linux and Windows;
* My biggest workload will be 4 production servers, and these guys will live in the vMotion/cluster environment;
* My two domain controllers (DNS, AD) will be installed here;
* Most of this environment will be staging/test/pre-production servers;

iSCSI+ZFS or NFS+ZFS?

I'm reading some official VMware documentation and some NFS and ZFS tuning guides, but I can't get to an answer on my own.

Thank you.
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
If you need it done the professional way, contact iXsystems support.

And you should start by reading the forum stickies... you haven't, otherwise we would see your hardware specs here.
 
Joined
Dec 30, 2014
Messages
45
If you need it done the professional way, contact iXsystems support.

And you should start by reading the forum stickies... you haven't, otherwise we would see your hardware specs here.

Sorry man, I thought the hardware specs were in my signature!

Intel S1200V3RP, Intel(R) Xeon(R) E3-1220 v3 @ 3.10GHz, 32GB ECC memory, 3 gigabit NICs, 5 Seagate 1TB disks, and one ZFS RAIDZ pool.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
The second ESX server shouldn't really have anything to do with this (except the possibility of increased load).

The first thing to consider is your pool composition. RAID-Z1 is not going to give you decent performance for anything more than a couple of VMs. And you'd better have backups, because with only 1 parity disk, you are asking for trouble. Striped mirrors are the generally recommended configuration for shared storage.

The second thing is the sharing methodology: iSCSI vs. NFS. Given everything else going on, if it were mine, I'd go NFS. This means you will need free capacity equal to the used amount of your iSCSI zvol. Mount the NFS share on a host and Storage vMotion from the iSCSI share to the NFS share. If you can't do this, it's not a big deal, since the RAID-Z1 is going to kill your performance anyway.
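If it helps, mounting the NFS export as a datastore can also be done from the ESXi shell before the Storage vMotion, roughly like this (the IP, export path, and datastore name below are made-up examples):

# esxcli storage nfs add -H 192.168.1.50 -s /mnt/zfspool/nfs -v freenas-nfs
# esxcli storage nfs list

The Storage vMotion itself is then done from the vSphere client.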
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
Greg, just some small hints: do not fill the pool over 50%. Create a dataset for each vSphere server, otherwise you will have many lock requests for VMFS. iSCSI will be much faster than NFS, not only but also because of the different sync-write settings. And please use mirrors; RAIDZ is slow, really slow. You might need more disks and a SLOG if you prefer NFS. For L2ARC you do not have enough RAM.
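If you do go NFS and end up needing a SLOG, it is roughly something like this from the shell (pool and device names are just examples; on FreeNAS you would normally do it through the GUI Volume Manager):

# zpool add zfspool log mirror ada6 ada7
# zpool status zfspool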
 
Joined
Dec 30, 2014
Messages
45
The second ESX server shouldn't really have anything to do with this (except the possibility of increased load).

Yes, increased load and the possibility of using the vMotion feature.

"The first thing to consider is your pool composition. RAID-Z1 is not going to give you decent performance for anything more than a couple of VMs. And you'd better have backups, because with only 1 parity disk, you are asking for trouble. Striped mirrors are the generally recommended configuration for shared storage."

Would the best option be to set up 2 RAID 1 pairs and stripe them? Like RAID 1+0? Only 4 disks?

My chassis does not fit more than 8 disks.

"The second thing is the sharing methodology: iSCSI vs. NFS. Given everything else going on, if it were mine, I'd go NFS. This means you will need free capacity equal to the used amount of your iSCSI zvol. Mount the NFS share on a host and Storage vMotion from the iSCSI share to the NFS share. If you can't do this, it's not a big deal, since the RAID-Z1 is going to kill your performance anyway."

I can change my RAID.

In this scenario, is it a good option to use a ZIL and L2ARC, using RAID 1+0 with 4TB disks?
 
Joined
Dec 30, 2014
Messages
45
Greg, just some small hints: do not fill the pool over 50%. Create a dataset for each vSphere server, otherwise you will have many lock requests for VMFS. iSCSI will be much faster than NFS, not only but also because of the different sync-write settings. And please use mirrors; RAIDZ is slow, really slow. You might need more disks and a SLOG if you prefer NFS. For L2ARC you do not have enough RAM.

Very nice, zambanini!

I really prefer to use iSCSI because I have worked with it in the past.
The problem with "create a dataset for each vSphere server" is that to use vMotion I need to share one dataset between the vSphere servers. That is a big problem, right?
So maybe one idea is: configure RAID 1 pairs and stripe them, or use all of my space and stripe RAID 1 pairs across 8 disks total.
Without a ZIL, with only my 32GB of ECC RAM, and ensuring the pool does not go over 50% used.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Nothing needs to be done to your existing FreeNAS setup to enable vMotion. That is all handled on the VMware side.

What we are suggesting is that you correct the FreeNAS settings. Yes, striped mirror pools are like RAID 10. Is there any reason you can't use all 8 drives?
 
Joined
Dec 30, 2014
Messages
45
Nothing needs to be done to your existing FreeNAS setup to enable vMotion. That is all handled on the VMware side.

What we are suggesting is that you correct the FreeNAS settings. Yes, striped mirror pools are like RAID 10. Is there any reason you can't use all 8 drives?

Well, back in the day I worked with some storage arrays, and the EMC experts recommended using smaller disks in a larger number of physical drives, to increase throughput. Doesn't that make sense?

I could manually split loads between these pools too.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Yep, makes sense. Although the only reason they said smaller disks was because they are cheaper. At that scale you are buying drives to meet a performance target, not necessarily a capacity target.

I'd go with 1 pool of 8 drives (striped mirrors - 4 vdevs).

Of course, like I said, you can add the 2nd ESX server and see how things go. vMotion doesn't put a lot of impact on the storage; it just syncs the 2 ESX hosts to move the state of a guest from one host to the other. If you have been happy with the performance so far, it might be fine in the future.
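For reference, a 4-vdev striped mirror layout created from the shell looks roughly like this (pool and disk names are just examples; in the FreeNAS GUI you would build the same thing in the Volume Manager as four 2-disk mirrors in one volume):

# zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7
# zpool status tank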
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, back in the day I worked with some storage arrays, and the EMC experts recommended using smaller disks in a larger number of physical drives, to increase throughput. Doesn't that make sense?

Here you want to use larger disks and a larger number of them. ZFS is a CoW filesystem and you want to maintain a high percentage of free space. Just like the EMC guys said, the number of spindles improves overall performance. However, with ZFS, having gobs of free space (like 50%+++) helps to mitigate a whole slew of performance issues too.

I don't see any value in creating a dataset for each host, though.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I don't see any value in creating a dataset for each host, though.

I was scratching my head on that as well. How do you handle vMotion? What is the value? I could possibly see a little bit of value in a dataset per VM, but even that seems extremely slim. I don't understand the per-host dataset.
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
VMFS is a clustered file system. When multiple vSphere nodes/servers access the same storage pool for write operations (iSCSI extent, NFS share, whatever), vCenter and the vSphere nodes verify on every write that the involved file on VMFS or NFS is not in use. It also uses some kind of heartbeat-style locks. To make a long story short (typing this on my BlackBerry), you will get a much higher write delay. For more information, please search for VMFS locking. We got rid of the VMware high-latency warnings when we split the volumes and put each iSCSI extent on its own dataset. Our setup is still FreeNAS 9.2.
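Roughly what the split looks like (pool and dataset names are just examples): one dataset per iSCSI extent instead of everything in one place, for example

# zfs create zfspool/vmfs-ds1
# zfs create zfspool/vmfs-ds2

then one iSCSI extent (and one VMware datastore) per dataset.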
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Putting each iSCSI extent on its own dataset is different than having a dataset for each host.

I thought a lot of the VMFS locking issues were resolved with VMFS-5 and FreeNAS 9.3 (which added VAAI support for ATS, among other primitives).

http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html
http://www.virten.net/2015/01/working-with-freenas-9-3-block-vaai-primitives/
Joined
Dec 30, 2014
Messages
45
Here you want to use larger disks and a larger number of them. ZFS is a CoW filesystem and you want to maintain a high percentage of free space. Just like the EMC guys said, the number of spindles improves overall performance. However, with ZFS, having gobs of free space (like 50%+++) helps to mitigate a whole slew of performance issues too.

I don't see any value in creating a dataset for each host, though.

But must this 50%+ free space be at the vdev level, the zpool level, or within the extents already allocated for VMware (in which case the control is more complicated)? Thank you.

VMFS is a clustered file system. When multiple vSphere nodes/servers access the same storage pool for write operations (iSCSI extent, NFS share, whatever), vCenter and the vSphere nodes verify on every write that the involved file on VMFS or NFS is not in use. It also uses some kind of heartbeat-style locks. To make a long story short (typing this on my BlackBerry), you will get a much higher write delay. For more information, please search for VMFS locking. We got rid of the VMware high-latency warnings when we split the volumes and put each iSCSI extent on its own dataset. Our setup is still FreeNAS 9.2.

Nice, I had not seen anything about it. Thank you!

Putting each iSCSI extent on its own dataset is different than having a dataset for each host.

I thought a lot of the VMFS locking issues were resolved with VMFS-5 and FreeNAS 9.3 (which added VAAI support for ATS, among other primitives).

http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html
http://www.virten.net/2015/01/working-with-freenas-9-3-block-vaai-primitives/

I think some questions will come up after reading these links. Thank you.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
50% is at the zpool level.
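You can keep an eye on it with something like (pool name is just an example):

# zpool list zfspool

and watch the CAP column, which is the percentage of the pool that is allocated.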

Happy reading! :smile:
 
Joined
Dec 30, 2014
Messages
45
50% is at the zpool level.

Happy reading! :)

depasseg, the second link is very nice!

One question:
When I set up a new extent for VMware, I have two options in "Extent Type": "Device" or "File". I can't choose the "Device" option because when I go to bind the target to the extent, the device does not appear, so I select "File", etc.

This is the way I work, creating file extents in /mnt/zfspool. But this way I can't set the "Sparse volume" option mentioned in " http://www.virten.net/2015/01/working-with-freenas-9-3-block-vaai-primitives/ "

So I thought about creating a zvol and, inside this volume, creating the extent files and migrating my data.

Is this the best practice?
 
Joined
Dec 30, 2014
Messages
45
50% is at the zpool level.

Happy reading! :)

Check this out:

# esxcli storage core device vaai status get -d naa.6589cfc000000d9791ecf2e133890cd6
naa.6589cfc000000d9791ecf2e133890cd6
VAAI Plugin Name:
ATS Status: supported
Clone Status: supported
Zero Status: supported
Delete Status: unsupported
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I'm not sure that FN supports sparse volumes. I don't use iSCSI, but when I tested it, I thought it fully allocated the space.
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
Yes, creating zvols and using them for device-type extents is the recommended practice with the new iSCSI target in 9.3. And they can be sparse and support VAAI Delete.
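A rough sketch of what that looks like (pool, zvol name, and size are just examples; in the 9.3 GUI you create the zvol under the pool with "Sparse volume" checked, then add a Device extent pointing at it):

# zfs create -s -V 1T zfspool/esx-extent1

After binding the device extent to the target and rescanning, the esxcli VAAI check above should report Delete as supported.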
 