FreeNAS for 300-user VDI instant clones

Status
Not open for further replies.

mhejek

Dabbler
Joined
Feb 17, 2018
Messages
16
Hi,
I need advice on building FreeNAS for VMware VDI with 300 users.
Could someone point me to how many hard disks I should stripe to get better IOPS?

This is what I am trying to build:
SuperStorage 5029P-E1CTR12L
2x 10G NIC
1x Xeon Silver 4112 (single socket)
1x 32 GB DDR4-2666 memory
6x 600 GB 10K RPM 2.5" SAS 12Gbps 128MB 512n Seagate hard disks

Is it enough?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
If you're looking at 300 concurrent users, it's way under what's needed.

I think you are a good candidate for implementing a ZIL/SLOG device (searching this forum for how to do that is easy) to improve your IOPS. More RAM will also help a lot; 32 GB works out to only about 100 MB per user, which seems low.

If the instant clone means you will not be keeping any user data (and you store your gold image somewhere outside the pool), you can probably afford to just stripe as many disks as you can afford and fit inside the chassis, and accept the risk of a failed disk causing downtime. If you cannot accept downtime, you will need to work with mirrored vdevs (again, as many as you can afford) to improve the IOPS, losing 50% of capacity to redundancy. (RAIDZ1/2/3 will not give you sufficient IOPS.)
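To put very rough numbers on the stripe vs. mirror trade-off, here is a small Python sketch; the per-drive IOPS figure and the 12-bay layout are assumptions for illustration only, not measurements:

```python
# Rough sketch of the layout trade-off for a 12-bay chassis.
# The per-drive numbers are assumptions for illustration, not measurements.
drives = 12
drive_size_tb = 0.6        # 600 GB 10k SAS, as in the original parts list
drive_iops = 170           # ballpark random IOPS often assumed for a 10k SAS disk

# Plain stripe: every drive adds capacity and IOPS, but one failure loses the pool.
stripe_capacity_tb = drives * drive_size_tb
stripe_iops = drives * drive_iops

# 2-way mirror vdevs: half the usable capacity, IOPS scales with the number of vdevs,
# and the pool survives a failed disk in any vdev.
mirror_vdevs = drives // 2
mirror_capacity_tb = mirror_vdevs * drive_size_tb
mirror_iops = mirror_vdevs * drive_iops

print(f"stripe : {stripe_capacity_tb:.1f} TB usable, ~{stripe_iops} IOPS, no redundancy")
print(f"mirrors: {mirror_capacity_tb:.1f} TB usable, ~{mirror_iops} IOPS, tolerates one failure per vdev")
```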
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I hope you have not bought this?
mhejek said:
I need advice on building FreeNAS for VMware VDI with 300 users... 6x 600 GB 10K SAS... Is it enough?

 

mhejek

Dabbler
Joined
Feb 17, 2018
Messages
16
sretalla said:
I think you are a good candidate for implementing a ZIL/SLOG device... to improve your IOPS.
Will a ZIL still help if all my drives are SSDs?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Because you should articulate the situation and ask for advice before purchasing hardware; the hardware you need may be totally different from the hardware you bought.


 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
mhejek said:
Will a ZIL still help if all my drives are SSDs?

You didn't list SSDs, but if you exchange your spinning disks for SSDs, it's doubtful that a ZIL would make a big difference (though it may still have a small positive impact if you select the right drive for it).
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
mhejek said:
Will a ZIL still help if all my drives are SSDs?
You said 10k RPM spinning disks initially.
I ask again, are you asking what to buy or is this what you have already?
SSD pool or not, with that number of VMs, you will need a very good SLOG device.


Sent from my SAMSUNG-SGH-I537 using Tapatalk
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
VDI is virtualization, which means synchronous writes. There are already documents explaining the difference between the in-pool ZIL and a SLOG (separate log), so I am not retelling the story here.
Look at the links in my signature.
Even with SSDs, without a SLOG every sync write will be written twice.

 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
sretalla said:
If you're looking at 300 concurrent users, it's way under what's needed.
If the instant clone means you will not be keeping any user data
Just because you start with a clone doesn't mean they can't diverge from one another.


 

mhejek

Dabbler
Joined
Feb 17, 2018
Messages
16
Chris Moore said:
Because you should articulate the situation and ask for advice before purchasing hardware...
This hardware is what I want to buy. OK, let me shorten it:
SuperStorage 5029P-E1CTR12L
2x 10G NIC
1x Xeon Silver 4112 (single socket)
Hard disks: ?? (need advice)
Memory: ?? (need advice)
There will be 5 Windows 2016 VMs handling 300 concurrent users.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
mhejek said:
I need advice on building FreeNAS for VMware VDI with 300 users.
VDI will not be as demanding on IOPS as something like a database server, but I can only guess at the actual IOPS your systems use. If your virtualization infrastructure already exists in any capacity, so it can be tested, it would be best to use a tool like VMware Capacity Planner, Recon or Lanamark to find out what you are doing now. They are great tools; alternatively, just start up perfmon and measure IOPS. If you can determine the amount used by a single virtual desktop instance, you just multiply that by the number of instances you will have. So, if one virtual desktop uses 10 IOPS and you need to be able to run 300 concurrently, then you need to be able to sustain 3000 IOPS.
You get your best IOPS with mirror vdevs, and if we estimate that each spinning disk can give you about 170 IOPS, that means you need roughly 18 drives, which equates to 9 two-way mirror vdevs. You could get that in a single 24-bay drive enclosure.
Yes, you do need a SLOG device.
Where I work, we just ordered two new servers using the Intel 3D XPoint DC P4800X as the SLOG device.
https://www.newegg.com/Product/Product.aspx?Item=9SIA6ZP6M02493
It is very fast, but it is also very expensive. If your budget won't support that, some people have also used the Optane 900P:
https://www.newegg.com/Product/Product.aspx?Item=9SIA12K6TW8192

This tells you the number of drives you need to support the IOPS, but if you switch to SSDs, the number of drives is reduced, because SSDs support far more IOPS than hard disk drives. That is a financial question, though; either solution will work, but one is more expensive than the other. With SSDs, you could probably manage with 8 drives in mirror pairs. Never go without a mirror: single drives are not an option if you want reliability. Some people even use 3-way mirrors if they just can't tolerate downtime.
You will need to choose a drive size based on the amount of storage you want to have. If you were using the 9 mirror vdevs from the example above with 500GB drives, you would have around 8.1TB of raw storage, but you can't use all of that: mirroring halves it to roughly 4TB usable, and you can only fill the pool about halfway before performance (in this application) starts to suffer. So you would need to keep utilization of the pool under 3TB, probably more like 2TB. Your mileage may vary.
mhejek said:
Memory: ?? (need advice)
FreeNAS and ZFS use RAM to cache read and write operations, so you need as much as you can reasonably afford. Because of the high cost of memory, I only went with 256GB in the new servers we ordered; the servers can support four times that much, so I can expand it if I see it is needed.

This is meant to give you an idea of how to make the calculations. You need to get some numbers, like how much storage capacity you need and how many IOPS your virtual desktops will use, so you can make more meaningful calculations. Just throwing hardware at a problem without understanding the problem can lead to something that is overpowered in one aspect without being powerful enough in another.
Research is key to making a wise investment in hardware.
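If it helps, here is the same arithmetic as a short Python sketch; the 10 IOPS per desktop and 170 IOPS per drive are the same illustrative estimates used above, not measured values:

```python
# Worked version of the estimate above; all input figures are illustrative.
iops_per_desktop = 10
desktops = 300
required_iops = iops_per_desktop * desktops              # 3000 IOPS

iops_per_drive = 170                                      # rough estimate for a 10k spinning disk
drives_needed = -(-required_iops // iops_per_drive)       # ceiling division -> 18 drives
mirror_vdevs = -(-drives_needed // 2)                     # 9 two-way mirror vdevs

drive_size_gb = 500
raw_tb = drives_needed * drive_size_gb / 1000             # ~9 TB raw (about 8.2 TiB)
usable_tb = raw_tb / 2                                    # mirroring halves it
keep_under_tb = usable_tb / 2                             # only fill the pool about halfway

print(f"need ~{required_iops} IOPS -> {drives_needed} drives in {mirror_vdevs} mirror vdevs")
print(f"raw ~{raw_tb:.1f} TB, usable ~{usable_tb:.1f} TB, keep used data under ~{keep_under_tb:.1f} TB")
```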
 

mhejek

Dabbler
Joined
Feb 17, 2018
Messages
16
Chris Moore said:
This is meant to give you an idea of how to make the calculations... Research is key to making a wise investment in hardware.
Very good explanation.
Thanks very much!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Something like this would probably be more what you need:
https://www.supermicro.com/products/system/2U/2029/SSG-2029P-E1CR24H.cfm
 
jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Chris Moore said:
Something like this would probably be more what you need:
https://www.supermicro.com/products/system/2U/2029/SSG-2029P-E1CR24H.cfm

That's probably overkill. A well-specced single CPU system is probably sufficient until you get up into the past-10Gbps range, and the duals are generally hideously expensive in comparison, which is money better spent on RAM. You really want no less than 64GB RAM, and 128GB or more may be desirable or even required depending on workload. You probably want or need to maintain gobs of free space on lots of drives on a HDD pool to maintain good long term performance, see any of my posts on fragmentation, so more 2.5" drives or larger capacity 3.5" drives are both possibilities - there's really no case where you'd want less than 2TB drives even if you only need a small fraction of the space. In a VDI environment, you may want to see how big the working set is and then try to leverage as much of that into L2ARC as you possibly can. Basically what @Chris Moore said in #12 about it being hard to provide specific advice without specific information applies.
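As a rough way to think about the working-set point, here is a hypothetical sizing sketch; every figure in it (per-desktop working set, RAM, L2ARC size) is an assumption for illustration, and the real numbers have to come from measuring your own workload:

```python
# Hypothetical sketch: does the VDI working set fit in ARC (RAM) plus L2ARC?
# All figures below are assumptions for illustration only.
desktops = 300
working_set_per_desktop_gb = 2                 # assumed hot data per desktop
total_working_set_gb = desktops * working_set_per_desktop_gb   # 600 GB

ram_gb = 128                                   # ARC lives in RAM (minus what FreeNAS itself needs)
l2arc_gb = 480                                 # e.g. one SSD dedicated to L2ARC
# Note: the L2ARC index itself consumes some ARC RAM; that overhead is not modeled here.

cache_gb = ram_gb + l2arc_gb
coverage = min(1.0, cache_gb / total_working_set_gb)
print(f"working set ~{total_working_set_gb} GB, cache ~{cache_gb} GB, coverage ~{coverage:.0%}")
```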
 

mhejek

Dabbler
Joined
Feb 17, 2018
Messages
16
jgreco said:
You really want no less than 64GB RAM, and 128GB or more may be desirable or even required depending on workload... In a VDI environment, you may want to see how big the working set is and then try to leverage as much of that into L2ARC as you possibly can.
This is my second project using VDI.
My first project used VMware Horizon published apps. Users starting their first app took about 70 seconds, but that was running from ESX local disk, not a NAS, and it made me upset.

So for this second project I want all my VMs to reside on the NAS.

The ESX servers I want to build:
2x (main server and backup) using vMotion and HA
SuperServer 6019P-WTR
dual Intel Xeon Scalable Silver 4114 10C
256GB memory
2x 480GB Intel SSD
dual 10GB NIC

VM datastore using iSCSI multipath to a pair of NAS boxes:
SuperStorage 5029P-E1CTR12L
2x 10G NIC
1x Xeon Silver 4112 (single socket)
256GB memory (after your advice)
12x 480GB Intel SSD in RAIDZ2 (how about that?)

Thanks
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
RAIDZ2 will not give you the IOPS your system needs. Use mirror vdevs, and you will need a SLOG device.

 

RSA

Dabbler
Joined
Mar 25, 2018
Messages
18
I think SSDs will give the IOPS and there is no need for a SLOG.
 
Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
RSA said:
I think SSDs will give the IOPS and there is no need for a SLOG.
RAIDZ2 gives the IOPS of a single drive. Do you have any experience to back up your imagination?
ESXi will be doing synchronous writes. That means, without a SLOG, the NAS will write to the in-pool ZIL, send the acknowledgement that the write is done, and then send the write to stable storage in the pool, so everything is written to the pool twice. When you implement a SLOG (separate log), writing to it is very fast (faster than the SSDs in the pool if you have a proper SLOG device), then the acknowledgement is sent, and the data is written to the pool only once. So for sync writes, having a SLOG makes writing faster, oftentimes more than twice as fast.
If you don't know how it works, you cannot make it work better.
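A back-of-the-envelope illustration of that double write, as a small Python sketch; the daily sync-write volume is a made-up number, and the point is only the 2x factor on the pool without a SLOG:

```python
# Illustrative only: bytes written to the data pool with and without a SLOG.
sync_writes_gb_per_day = 200          # assumed synchronous write traffic, purely hypothetical

# Without a SLOG: sync writes land in the in-pool ZIL first, then again in their
# final location, so the pool absorbs roughly twice the traffic.
pool_gb_no_slog = 2 * sync_writes_gb_per_day

# With a SLOG: the ZIL lives on the separate log device, so the pool is written once
# and the log device absorbs the other copy.
pool_gb_with_slog = sync_writes_gb_per_day
slog_gb = sync_writes_gb_per_day

print(f"no SLOG  : ~{pool_gb_no_slog} GB/day written to the pool")
print(f"with SLOG: ~{pool_gb_with_slog} GB/day to the pool, ~{slog_gb} GB/day to the SLOG")
```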
 

RSA

Dabbler
Joined
Mar 25, 2018
Messages
18
I understand how it works.
Check your calculations:
Chris Moore said:
So, if one virtual desktop is using 10 IOPS and you need to be able to run 300 concurrently, then you need to be able to sustain 3000 IOPS.
For example, with 12 Intel DC S3710 SSDs (85,000 read IOPS and 35,000 write IOPS each), which are not new, not very expensive and not especially fast SSDs, you would get approximately 106,935 write IOPS and 1,020,000 read IOPS in RAID6 (RAIDZ2), according to the calculator at http://wintelguy.com/raidperf.pl. I know about synchronous writes, but a "very fast proper SLOG SSD" is very expensive, and if you already have a lot of IOPS, I think you don't need it. But you are absolutely right for HDDs: lots of mirror vdevs and a SLOG.
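For comparison, here are the two write-IOPS estimates from this thread side by side in a quick Python sketch, using the drive figures quoted above. The classic RAID6 write penalty of 6 is used only as a simple stand-in for the calculator's model (it does not reproduce its exact number), so treat all of these as rough illustrations:

```python
# The two RAIDZ2 write-IOPS estimates discussed in this thread, side by side.
# Drive figures are the DC S3710-class numbers quoted above; everything else is illustrative.
drives = 12
write_iops_per_drive = 35_000

# Model A (Chris Moore): for small random I/O, a RAIDZ vdev behaves roughly like one drive.
raidz2_write_a = write_iops_per_drive                     # ~35,000

# Model B (generic RAID calculator style): aggregate drive IOPS divided by a
# classic RAID6 write penalty of 6. This is an approximation, not ZFS-specific.
raidz2_write_b = drives * write_iops_per_drive // 6       # ~70,000

# Mirror layout from earlier in the thread, for comparison (6 two-way mirrors).
mirror_write = (drives // 2) * write_iops_per_drive       # ~210,000

print(f"RAIDZ2, single-drive model : ~{raidz2_write_a} write IOPS")
print(f"RAIDZ2, write-penalty model: ~{raidz2_write_b} write IOPS")
print(f"6x 2-way mirrors           : ~{mirror_write} write IOPS")
```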
 