FreeNAS for Horizon View

Status
Not open for further replies.

Hunterok

Dabbler
Joined
Jul 11, 2016
Messages
10
Hi all. Today is a day of firsts: my first post on this forum and my first server build for virtualization purposes.

It's all about VDI (Horizon View) for 10-15 office employees and 2-3 power users (CAD software), with FreeNAS as the storage for all the VMs.

Here is the initial config:
  • Xeon E5-2620 V3
  • 64Gb RAM
  • LSI 9207-8i
  • 4x 2TB HGST HUS726020AL
  • NVMe Intel SSD DC P3600 400GB (ZIL+l2ARC)

Could you please comment on this config and suggest some add-ons? The main problem is that I don't have any experience building such systems, so I can't estimate the IO load. I know that ESXi will generate a lot of random read/write IO, but our budget is limited, so we can't afford a lot of HDDs or SSDs.


The concept of my dream storage (screenshot taken from https://blogs.technet.microsoft.com/filecab/2016/04/27/s2dtp5new/):
[image: 3tier1.png]


Is it possible to build this using FreeNAS? Or something close to it?
 

Hunterok

Dabbler
Joined
Jul 11, 2016
Messages
10
you will need more disks and ram
How much? We really need no more than 2 TB of usable space. Maybe it would be better to buy some SSDs to get enough I/O for virtualization? If I have an enterprise DC SSD for ZIL + L2ARC, would it be a good idea to buy something like 2x Samsung 850 Pro 512 GB as a pool for the VMs, and use 2-4 TB as another pool for cold data? Or is it better to buy an additional 4x 1TB HDDs and add them to the pool?
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Even if it is very fast NVMe?
I'm not saying it won't work, it's just bad practice to mix multiple functions on one device.
Intel P3600 has both Enhanced Power Loss Data Protection and End-to-End Data Protection
OK, good.
We really need no more than 2 TB of usable space.
With 4x 2TB in two striped mirrors, your pool will be roughly 50% full. The post I linked to above suggests 50% as a high water mark for high performance block storage.
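To make the capacity math concrete, here is a quick sketch. The figures are nominal TB; real usable space is somewhat lower once ZFS metadata, swap reservation, and the TB-vs-TiB conversion are accounted for:

```python
# Back-of-the-envelope capacity math for 4x 2TB in two striped mirrors.
drives, drive_tb, mirror_width = 4, 2.0, 2

vdevs = drives // mirror_width    # 2 mirror vdevs
usable_tb = vdevs * drive_tb      # each mirror stores only one copy of the data
needed_tb = 2.0                   # the stated requirement

fill = needed_tb / usable_tb
print(f"usable: {usable_tb:.1f} TB, fill at 2 TB used: {fill:.0%}")
```

So 2 TB of data lands right at the 50% high-water mark recommended for block storage.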
 

Hunterok

Dabbler
Joined
Jul 11, 2016
Messages
10
With 4x 2TB in two striped mirrors, your pool will be roughly 50% full. The post I linked to above suggests 50% as a high water mark for high performance block storage.
So we lose 50% to redundancy and another 50% to the ZFS performance recommendation? :)

l2arc is not persistent
Could you please provide more details?

Does FreeNAS need fast storage to be installed on?
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
You should learn the ZFS basics; you need to understand it. L2ARC is empty after a reboot.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Hi Hunterok, welcome to the forums.

As someone currently running Horizon View on ZFS storage: you have a lot of "firsts" in your post there, and to be honest that's a little scary. Have you done any pilot testing, proof-of-concept work, or mucking about in a test lab with an eval license of Horizon (or VMUG)?

Task workers in VDI are no problem, but CAD work demands a lot of resources - both in the CPU/RAM category and in the GPU department. To get any kind of acceptable CAD performance in VDI, you basically need a shared GPU via vSGA, such as a GRID, FirePro, or Tesla card. If you're worried about the cost of SSDs and HDDs, you do not want to see the price tags on those.

You can greatly simplify things by making your CAD boxes physical, at which point you only need to deliver the much lower requirements of "task workers": an office suite and a web browser, generating very bursty workloads that can be timeshared easily.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
If it were me, and the budget were that tight, I would get rid of the NVMe and go with 1TB SSDs in RAIDZ1 (if you have backups; Z2 if not). 10-15 desktops plus the 3 CAD desktops are going to melt a RAIDZ spinning-disk pool: you would effectively be running 15 machines off the IO capability of one drive (not counting the L2ARC). If you plan to use NFS, or to force sync writes for iSCSI, you will need a good SLOG device.
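The "IO capability of one drive" point can be sketched numerically. The 150 random-IOPS figure for a 7200 rpm disk is a rule-of-thumb assumption, not a measurement:

```python
# Rough random-IOPS comparison: one RAIDZ vdev vs two striped mirrors (4 HDDs).
disk_iops = 150   # ballpark for a 7200rpm HDD (assumption)
desktops = 15

# A RAIDZ vdev delivers roughly the random IOPS of a single member disk.
raidz_pool = 1 * disk_iops

# Two 2-way mirrors = two vdevs for writes; reads can be served by either member.
mirror_write = 2 * disk_iops
mirror_read = 4 * disk_iops

print(f"RAIDZ pool: ~{raidz_pool} IOPS -> ~{raidz_pool // desktops} IOPS per desktop")
print(f"Striped mirrors: ~{mirror_write} write / ~{mirror_read} read IOPS")
```

Roughly 10 random IOPS per desktop on RAIDZ is nowhere near enough for VDI, which is why SSDs or mirrors come up repeatedly in this thread.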
 

Hunterok

Dabbler
Joined
Jul 11, 2016
Messages
10
As someone currently running Horizon View on ZFS storage: you have a lot of "firsts" in your post there, and to be honest that's a little scary. Have you done any pilot testing, proof-of-concept work, or mucking about in a test lab with an eval license of Horizon (or VMUG)?
I'm not running Horizon View yet, but I'm running some tests with ESXi storage software: FreeNAS, OmniOS, Nexenta.
Task workers in VDI are no problem, but CAD work demands a lot of resources - both in the CPU/RAM category and in the GPU department. To get any kind of acceptable CAD performance in VDI, you basically need a shared GPU via vSGA, such as a GRID, FirePro, or Tesla card. If you're worried about the cost of SSDs and HDDs, you do not want to see the price tags on those.
I know about that, so we bought an Nvidia Quadro 4000 ($160) for vSGA or vDGA.
You can greatly simplify things by making your CAD boxes physical, at which point you only need to deliver the much lower requirements of "task workers": an office suite and a web browser, generating very bursty workloads that can be timeshared easily.
No, we can't, as all workplaces must be equipped with thin clients.

If it were me, and the budget were that tight, I would get rid of the NVMe and go with 1TB SSDs in RAIDZ1 (if you have backups; Z2 if not). 10-15 desktops plus the 3 CAD desktops are going to melt a RAIDZ spinning-disk pool: you would effectively be running 15 machines off the IO capability of one drive (not counting the L2ARC). If you plan to use NFS, or to force sync writes for iSCSI, you will need a good SLOG device.
Could it be a Samsung SSD 850 Pro?
 

Hunterok

Dabbler
Joined
Jul 11, 2016
Messages
10
24GB RAM, 70/300GB ZIL/L2ARC on the Intel P3600, 3x 2TB HDD - FreeNAS (iSCSI), single VM:
[benchmark screenshot: img-2016-07-10-13-47-04_cr.png]
The same with Windows Storage Spaces (iSCSI):
[benchmark screenshot: img-2016-07-13-13-43-28_cr.png]
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I know about that, so we bought an Nvidia Quadro 4000 ($160) for vSGA or vDGA.

o_O

No, we can't, as all workplaces must be equipped with thin clients.

...grumbles something about back to mainframe computing...

Could it be Samsung SSD 850 Pro?

You could use 850 Pros, or probably even Evos, for a pool, yes. But I do not suggest RAIDZ for block storage - you will run into interesting issues. Look to mirrors for block storage.
 

Hunterok

Dabbler
Joined
Jul 11, 2016
Messages
10
...grumbles something about back to mainframe computing...
Maybe it's hard to believe, but in some countries your business can be shut down by the police or some inspection for no reason at all. They take your PCs away for "inspection," and that can last about 1-2 years. So it is essential to have workplaces that keep running.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Maybe it's hard to believe, but in some countries your business can be shut down by the police or some inspection for no reason at all. They take your PCs away for "inspection," and that can last about 1-2 years. So it is essential to have workplaces that keep running.

That line of reasoning would tend to support the idea of not putting all your eggs in one big hypervisor basket that could be carted off, not to be seen again "for about 1-2 years."
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I'm not currently running Horizon View but I'm running some tests with ESXi storage software: FreeNAS, OmniOS, Nexenta.

They're all using ZFS under the hood, so they'll have broadly similar performance characteristics, as well as similar behavior with regard to ARC/L2ARC/SLOG.

I would strongly suggest you build a View test lab and start getting yourself oriented in that world with respect to replicas/linked clones so that you've got a good grasp on the terminology, or you might not be able to make full use of any layout I build for you.

I know about that, so we bought an Nvidia Quadro 4000 ($160) for vSGA or vDGA.

Just a single card, or multiple? A single card shared three ways via vSGA will not leave a lot of resources: a Quadro 4000 only provides 256 Fermi-generation CUDA cores and 2GB of VRAM. If you can fit your CAD workload into that space, more power to you, but I wouldn't be surprised to find performance lacking. Multiple hosts also means multiple cards.
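Rough arithmetic on that sharing (this assumes the card's full 2GB is evenly divisible between guests and ignores any hypervisor reservation):

```python
# VRAM available per CAD desktop when one Quadro 4000 is shared via vSGA.
vram_mb = 2048   # Quadro 4000 total VRAM
cad_vms = 3      # the 2-3 power users from the original post

per_vm_mb = vram_mb // cad_vms
print(f"~{per_vm_mb} MB of VRAM per CAD desktop")
```

Around 680 MB per desktop is less than many single CAD assemblies want on their own.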

No, we can't as all workplaces must be equipped with thin clients.

Sucks for performance, but I understand. Where I am we use thin/zero clients as loss prevention. Some tweaked-out meth junkie breaks in and throws one in his coat? Congratulations, you've stolen a paperweight that will phone home to me. Best of luck selling it.

Important question coming up: Have you purchased or spec'd out any of the View hosts?

Go RAM-heavy, as you'll want to be able to sustain at least a single host failure - so if you're buying three hosts, size as if you're only getting two. Also, depending on your View configuration, you'll need anywhere from an extra 16GB to 64GB for the management components (vCenter, View Composer, View Connection Server, View Security Server, and don't forget a SQL server to hold the DBs), depending on whether you do an all-in-one (which could work for your small deployment) or separate all services into individual VMs (best practice, but might be overkill in your situation).
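The N-1 sizing rule as arithmetic; the host count and per-host RAM here are hypothetical placeholders, not a recommended spec:

```python
# Size the cluster so all VMs still fit with one host down (N-1 sizing).
hosts = 3
ram_per_host_gb = 128   # hypothetical per-host RAM
mgmt_gb = 64            # high-end estimate for vCenter/Composer/Connection/SQL

survivable_gb = (hosts - 1) * ram_per_host_gb   # capacity after one failure
vm_budget_gb = survivable_gb - mgmt_gb
print(f"RAM budget for desktops after one host failure: {vm_budget_gb} GB")
```

Whatever desktop RAM you plan for has to fit inside that N-1 budget, not the raw cluster total.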

Sorry for the delayed response.
 

Hunterok

Dabbler
Joined
Jul 11, 2016
Messages
10
They're all using ZFS under the hood, so they'll have broadly similar performance characteristics, as well as similar behavior with regard to ARC/L2ARC/SLOG.
What do you think about Nutanix storage performance compared to FreeNAS? The developers have also announced NVMe support in future product updates.
I would strongly suggest you build a View test lab and start getting yourself oriented in that world with respect to replicas/linked clones so that you've got a good grasp on the terminology, or you might not be able to make full use of any layout I build for you.
That's exactly what I'm going to do after I get my 850 Pros. Thank you for the advice!
Just a single card, or multiple?
A single card for the test lab, and multiple for production.
Have you purchased or spec'd out any of the View hosts?
Not yet. I'll do it in 2 weeks. Right now I'm waiting on an RMA for my LSI 9207-8i, which suddenly stopped working after I connected a 5th HDD for some tests.
or separate all services into individual VMs (best practice, but might be overkill in your situation.)
Why does separating services into individual VMs matter so much for performance?

Sorry for the delayed response.
I'm in no hurry right now. :) I'm looking for good answers rather than fast ones, and your answers are very helpful. Thank you for that!

P.S. Sorry for my English. Still learning.
 