How are people dealing with VMs and sharing?

el_pedriyo

Explorer
Joined
Jun 24, 2018
Messages
65
Hello,

I have been using FreeNAS for the last 6 months, so I am still a little new to it. I am not a big fan of BSD systems, simply because I am already used to working on Debian OSes, so I have been running VMs with bhyve since then. With that said, I mount my data shares in my Debian VMs over NFS, and over CIFS on my Windows PCs, so that I can access the same data. In my VMs I run services like Plex, Nextcloud, and so on. This works fine at the moment, but with the VMs and NFS I cannot use inotify, and data has to be downloaded to the VM first and then uploaded to its destination to arrive at its final place. I just wanted to know if this is how people set up their environments when using VMs instead of jails.
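For reference, the inotify limitation is inherent: inotify only reports events that pass through the local kernel, so a watch on an NFS or CIFS mount never sees changes made on the server or by other clients. A small Python sketch (written for this thread, not taken from any of these tools) that flags such mounts by parsing /proc/mounts-style text; the sample mount table below is made up for illustration:

```python
# inotify watches on network filesystems miss remote changes, so it can
# help to detect which mount points are NFS/CIFS before relying on it.
NETWORK_FS = {"nfs", "nfs4", "cifs", "smbfs"}

def network_mounts(mounts_text):
    """Return [(mountpoint, fstype)] for network filesystems.

    Expects /proc/mounts format: device mountpoint fstype opts dump pass.
    """
    found = []
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[2] in NETWORK_FS:
            found.append((fields[1], fields[2]))
    return found

# Hypothetical mount table, for illustration only.
SAMPLE = """\
/dev/vda1 / ext4 rw,relatime 0 0
freenas:/mnt/tank/media /media nfs4 rw,vers=4.1 0 0
//freenas/share /share cifs rw,uid=1000 0 0
"""

print(network_mounts(SAMPLE))  # → [('/media', 'nfs4'), ('/share', 'cifs')]
```

On a real Debian VM you would pass `open('/proc/mounts').read()` instead of the sample string.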

I have also been asking myself how the iSCSI option would work. As far as I know, if I create an iSCSI volume and add it to ESXi as a datastore, and then mount it in a VM as a local drive rather than a network drive, inotify would work fine. But NFS, Samba, and the other services that FreeNAS offers would not work with the iSCSI volume, and I do not know whether more critical services, like SMART monitoring, would keep working either.

I would really appreciate it if someone could tell me what kinds of system setups you are using.

Kind regards
 

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,231
For the VMs I run on FreeNAS, I simply mount NFS shares into the VM. That said, I'm not using the VM for things I can natively run just as well on FreeBSD, like Plex (or in my case Emby) and Nextcloud.
 

el_pedriyo

Explorer
Joined
Jun 24, 2018
Messages
65
For the VMs I run on FreeNAS, I simply mount NFS shares into the VM. That said, I'm not using the VM for things I can natively run just as well on FreeBSD, like Plex (or in my case Emby) and Nextcloud.
Yes, that's the case. I imagine you rely on jails, but since I do not want to use FreeBSD jails, I prefer to set up my VMs with all the services I want, as I am running more than just those two.
 

el_pedriyo

Explorer
Joined
Jun 24, 2018
Messages
65
I have continued digging into this and found the following:

https://www.ixsystems.com/documentation/freenas/9.10/sharing.html#extents

Really? If I switch to the iSCSI protocol, I won't be able to use more than 50% of the actual disk storage? :|
Just to make a quick calculation: if I buy, let's say, 100TB in 10TB drives and set them up in RAIDZ, that ends up as 81.9TB usable.

Then if I create an iSCSI volume, I won't be able to use more than 40.95TB? :|

Can someone please clarify this? If that is the case, I have to assume that a share over NFS or CIFS will give much more usable storage than iSCSI.

Kind regards
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Yes.

Rather than RAIDZx, we recommend using striped mirrors for iSCSI. So, at a minimum, you'll lose 50% of your storage to redundancy.

It's highly recommended that you keep your usage below 50% (of usable storage) for performance reasons when using block storage (like iSCSI).

Search the forums for more information.
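The capacity arithmetic behind these recommendations can be sketched in a few lines. The figures are idealized (no ZFS metadata or slop-space overhead), and the second 50% is the block-storage occupancy recommendation, which is separate from redundancy:

```python
# Back-of-envelope usable capacity for the layouts discussed in this thread.
def usable_tb(disks, disk_tb, layout):
    """Idealized usable TB for a single vdev group of the given layout."""
    if layout == "raidz1":
        return (disks - 1) * disk_tb   # one disk's worth of parity
    if layout == "raidz2":
        return (disks - 2) * disk_tb   # two disks' worth of parity
    if layout == "mirror2":
        return disks // 2 * disk_tb    # striped 2-way mirrors: half the disks
    raise ValueError(layout)

for layout in ("raidz1", "raidz2", "mirror2"):
    total = usable_tb(10, 10, layout)  # ten 10TB drives
    print(f"{layout}: {total} TB usable, ~{total / 2:.1f} TB at 50% fill")
```

So with ten 10TB drives, striped mirrors give 50TB usable, and the 50% occupancy guideline for iSCSI halves that again to about 25TB of comfortably usable space.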

Just to make a quick calculation: if I buy, let's say, 100TB in 10TB drives and set them up in RAIDZ, that ends up as 81.9TB usable.

Then if I create an iSCSI volume, I won't be able to use more than 40.95TB? :|
 

el_pedriyo

Explorer
Joined
Jun 24, 2018
Messages
65
Yes.

Rather than RAIDZx, we recommend using striped mirrors for iSCSI. So, at a minimum, you'll lose 50% of your storage to redundancy.

It's highly recommended that you keep your usage below 50% (of usable storage) for performance reasons when using block storage (like iSCSI).

Search the forums for more information.
As far as I know, that other 50% is not used for parity; with iSCSI it is just needed for the pool to work well. And using a RAID0? What if you lose one disk? All the data is gone.
People already complain about RAID6 because you lose two disks, but losing 50% of the space does not seem like good management of resources to me...

Kind regards
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
With traditional two-way mirrors, you can lose up to one disk in any vdev without losing your pool. If you want more redundancy, consider a three- or four-way mirror.

As an example, one of our users, @jgreco, was testing a system for iSCSI use a few years ago. As I recall, he had 24 2TB disks and used 8 vdevs of 3-way mirrors. So that 48TB of raw storage equates to roughly 14.5TiB of usable space. Apply the 50% recommendation, and that gets it down to about 7TiB max.

Yes, it can look expensive, but compare the cost of COTS hardware and a free OS to the capex and opex of a commercial solution.
 

el_pedriyo

Explorer
Joined
Jun 24, 2018
Messages
65
With traditional two-way mirrors, you can lose up to one disk in any vdev without losing your pool. If you want more redundancy, consider a three- or four-way mirror.
Yes, with a mirror you can lose up to one disk, because it is mirrored on another one, but with plain striping that is not possible.

As an example, one of our users, @jgreco, was testing a system for iSCSI use a few years ago. As I recall, he had 24 2TB disks and used 8 vdevs of 3-way mirrors. So that 48TB of raw storage equates to roughly 14.5TiB of usable space. Apply the 50% recommendation, and that gets it down to about 7TiB max.

I think this makes no sense; you purchase a bunch of disks just to end up with the capacity of a traditional €500 laptop...

Yes, it can look expensive, but compare the cost of COTS hardware and a free OS to the capex and opex of a commercial solution.

Well, I can tell you that the big companies that run datacenters also buy HDDs from the same brands as us, and even cheaper because they buy in bulk, and they also use free OSes. So we cannot compare ourselves to that, since they can get more TB than us at a better price with the same or better specs.

Sorry, but I have to say I won't go with iSCSI. For me, losing 50% of the space for nothing is just not funny. Hopefully someone will change their mind and come up with a workaround, because as I said at first, this is just a wrong way of managing resources...

Kind regards
 

Holt Andrei Tiberiu

Contributor
Joined
Jan 13, 2016
Messages
129
Sorry, but I have to say I won't go with iSCSI. For me, losing 50% of the space for nothing is just not funny. Hopefully someone will change their mind and come up with a workaround, because as I said at first, this is just a wrong way of managing resources...

Best-practice advice says that for the best performance on iSCSI, you should keep the zvol no more than 50% used.
I have more than 70% used on a few systems, and the performance impact is not that high (it is only a little), but all of them are at least 8-disk striped mirrors.
And yes, I lost half of the space with striped mirrors, but it is the best for IOPS.
Nowadays I personally use smaller HDDs or SSDs in striped mirrors for performance, and the large HDDs in RAIDZ2 for storage.
For example:

I have 3 x striped mirrors for speed to VMware, and 4 x 7-disk RAIDZ2 for storage.
Not all of my VMs are disk-intensive. For Plex, for example, I used one 450GB drive from the striped mirror set for the install and transcoding, and 3 x 5TB drives from the storage array. I used 3 x 5TB instead of one 15TB drive so I can migrate more easily between my datasets.

And I have several iSCSI datasets for my ESXi servers, and I also have several FreeNAS boxes for my ESXi servers, each with a specific storage type for different purposes. Delta, for example, is my backup dataset; it is composed of 2 x 6-drive RAIDZ2.
 

Attachments

  • plex.png (35.9 KB)

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
I said striped mirrors (similar to RAID10), not a straight stripe (akin to RAID0).

Nothing prevents you from running iSCSI on RAIDZ2 or going over the 50% recommendation.

If you don’t heed these recommendations you might run into issues like poor performance, running out of disk space, etc.

Yes, with a mirror you can lose up to one disk, because it is mirrored on another one, but with plain striping that is not possible.

That 500 Euro laptop won’t be able to function as an iSCSI target for an ESXi cluster running a bunch of VM’s.
 

Holt Andrei Tiberiu

Contributor
Joined
Jan 13, 2016
Messages
129
Do note that VMware, Hyper-V, and other bare-metal hypervisors were built with iSCSI or FC in mind.

And yes, speed has its price.
I have clients who spent more than $10,000 on a 24-bay NetApp where all the disks are used in RAID10, so they lost half the capacity, and we are talking 900GB drives at 15,000 RPM.

So, considering that in FreeNAS you can use whatever storage drives you can afford, and the software is free, putting 10 drives in striped mirrors and losing 50% of the capacity is really not that big a deal compared to the cost of other SAN/NAS systems.
 

el_pedriyo

Explorer
Joined
Jun 24, 2018
Messages
65
And yes, speed has its price.
I have clients who spent more than $10,000 on a 24-bay NetApp where all the disks are used in RAID10, so they lost half the capacity, and we are talking 900GB drives at 15,000 RPM.
Yes, I know that with RAID10 you sacrifice half of the disks, but you do it for a specific purpose: not losing data while also increasing performance. But in this case we are talking about 50% for nothing, just because of ZFS; that is what I am pissed off about.
At the moment I am running one 250GB NVMe for VMs and 4 x 4TB for Plex and personal data in RAIDZ1, so I already lose one disk to parity. I was considering moving to 4 x 10TB as I need more storage, so as you can imagine, losing one disk to parity, which ends up around 28TB, and then losing half of that, which would be 14TB, is not my aim in life :p
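As a side note, part of the gap in figures like these is TB versus TiB. A quick sketch of where the "28TB or so" comes from, assuming idealized RAIDZ1 with exactly one drive's worth of parity:

```python
# 4 x 10TB in RAIDZ1: one drive's worth of parity, and marketing TB
# (10^12 bytes) shrink once expressed in TiB (2^40 bytes).
TB, TiB = 1000**4, 1024**4

data_bytes = 3 * 10 * TB  # 4 drives minus 1 for parity
print(f"data space: {data_bytes / TiB:.1f} TiB")                        # → 27.3 TiB
print(f"at 50% full (block storage): {data_bytes / 2 / TiB:.1f} TiB")   # → 13.6 TiB
```

So ~30TB of nominal data space shows up as roughly 27.3TiB, and the 50% block-storage guideline would leave about 13.6TiB comfortably usable.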
 

Holt Andrei Tiberiu

Contributor
Joined
Jan 13, 2016
Messages
129
Yes, I know that with RAID10 you sacrifice half of the disks, but you do it for a specific purpose: not losing data while also increasing performance. But in this case we are talking about 50% for nothing, just because of ZFS; that is what I am pissed off about.
At the moment I am running one 250GB NVMe for VMs and 4 x 4TB for Plex and personal data in RAIDZ1, so I already lose one disk to parity. I was considering moving to 4 x 10TB as I need more storage, so as you can imagine, losing one disk to parity, which ends up around 28TB, and then losing half of that, which would be 14TB, is not my aim in life :p

OK, but what is the exact scenario we are talking about?
With a 4-drive setup, you are better off using a 1TB NVMe for the VMs to boot from, and the 4-drive RAIDZ1 for data. But as for RAIDZ1, this happened to a friend the day before yesterday (see attached), so maybe a 6-drive Z2 would be better.

I would only think about RAID10 (striped mirrors; couldn't they find a shorter term, like raidx? If I write RAID10 on the forum I will get hammered) above 8 disks, but it really depends on the storage needs, purpose, and funds.

So he has a RAIDZ1 and two failed drives, ada0 and ada3.
 

Attachments

  • Capture.PNG (34.5 KB)

el_pedriyo

Explorer
Joined
Jun 24, 2018
Messages
65
Yes, I am currently using the NVMe SSD just for the VM OSes, and then I store all the data on the HDDs. I then access the data through NFS.
At the moment I only have one parity disk, plus another drive outside the pool for backups, apart from also uploading my encrypted files to the cloud.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Sorry, but I have to say I won't go with iSCSI. For me, losing 50% of the space for nothing is just not funny. Hopefully someone will change their mind and come up with a workaround, because as I said at first, this is just a wrong way of managing resources...

There isn't "someone [who] will change his mind and make a workaround". This is computer science, not magic. ZFS can make things move hella-fast, much faster than conventional HDD, if you give it plentiful resources. However, because it does this through CoW-style features, which give you the snapshots and all the other awesome features, there's also a price to be paid. The price is that speeds slow dramatically, down to random access on raw disk speeds, when you do not resource ZFS the way it expects and needs to be resourced.
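To make the free-space point concrete, here is a toy copy-on-write allocator written for this thread: every overwrite frees the old block and allocates a fresh one, so a fuller pool ends up with only short runs of contiguous free blocks. Real ZFS allocation (metaslabs, gang blocks, etc.) is far more involved, so treat this purely as an illustration:

```python
import random

def longest_free_run(free, pool_size):
    """Length of the largest contiguous run of free blocks."""
    best = run = 0
    for blk in range(pool_size):
        run = run + 1 if blk in free else 0
        best = max(best, run)
    return best

def simulate(pool_size, fill_fraction, overwrites, seed=0):
    rng = random.Random(seed)
    free = set(range(pool_size))
    live = []
    # Fill the pool to the requested fraction with sequential writes.
    for blk in range(int(pool_size * fill_fraction)):
        free.remove(blk)
        live.append(blk)
    # Random overwrites: CoW frees the old copy and allocates a free block.
    for _ in range(overwrites):
        idx = rng.randrange(len(live))
        free.add(live[idx])
        live[idx] = new = rng.choice(sorted(free))
        free.remove(new)
    return longest_free_run(free, pool_size)

for fill in (0.5, 0.9):
    print(f"{int(fill * 100)}% full -> longest free run:",
          simulate(1000, fill, 2000))
```

Running it shows the fuller pool fragmenting into much shorter free runs after the same amount of churn, which is one intuition behind the "stay under 50%" advice for block storage.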

You can get plain, straightforward, and generally sucky performance if you go with something that isn't ZFS.

You can also get plain, complicated, and sucky performance if you under-resource ZFS.

The goal is to give ZFS resources it needs and then suddenly you're getting amazing SSD-like speeds from mere HDD.

It's computer science. You are trading cheap resources to get the desired performance.
 