Exploring possibilities


blipblip

Cadet
Hi all,
We have some hardware coming off a pilot program that we thought we could build into a storage system:
HPE Gen8 servers - dual Intel Xeon E5-2640 or better; a couple of 1Us and a couple of 2Us
Each server has a minimum of 192GB RAM
Each server has at least one 2-port Intel X520 10GbE NIC
Each has an HP P420i RAID controller
Storage components we have:
6 x Intel P3520 2TB NVMe SSDs
12 x HP 1.8TB 10k SAS drives

We are exploring the possibility of creating a single storage system with around 10TB of usable space that can support 3 ESXi hosts with a total of around 40 VMs running a general workload (DC, RDS, file server). We are OK with spending some money if we need to add components, but nothing elaborate (<$5k).

What is the best system we could build out of these components?
 

kdragon75

Wizard
I hate these questions. We need to know the workload profile to even hazard a guess.
 

blipblip

Cadet
Sorry for not being more specific.

When I mentioned a general workload (DC, RDS, file server) above, the profile will most likely be something like:
50/50 read/write
Peaks at around 3k IOPS
Average I/O size ~19KB
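
To put those numbers in perspective, here is a minimal sketch converting the quoted peak IOPS and average I/O size into throughput (the 3k/19KB/50-50 figures are just the profile above):

```python
# Rough throughput implied by the workload profile above.
peak_iops = 3_000       # quoted peak
avg_io_kb = 19          # quoted average I/O size
read_fraction = 0.5     # quoted 50/50 read/write split

total_mb_s = peak_iops * avg_io_kb / 1024        # ~56 MB/s total
read_mb_s = total_mb_s * read_fraction           # ~28 MB/s of reads
write_mb_s = total_mb_s * (1 - read_fraction)    # ~28 MB/s of writes

print(f"Peak: ~{total_mb_s:.0f} MB/s total "
      f"({read_mb_s:.0f} read / {write_mb_s:.0f} write)")
```

That is well within what a single 10GbE link can carry; the harder part is sustaining the IOPS on spinning disks.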
 

kdragon75

Wizard
Sorry for not being more specific.

When I mentioned a general workload (DC, RDS, file server) above, the profile will most likely be something like:
50/50 read/write
Peaks at around 3k IOPS
Average I/O size ~19KB
Now THAT was a refreshing post! Real numbers we can work with. Sorry to rant, but so many people come in here and say "I have a 40k budget, what should I buy" when they hardly know IOPS from throughput!

As for your use case, you could do that with a quarter of what you have. With those numbers and the workloads (the file server is the potential exception) being smallish data sets, most of it can stay hot in the ARC cache and just be insanely fast. At this point, it's a matter of prioritizing space or more IOPS. You mentioned 12 SAS disks. Does that include an enclosure? You could use the SSDs for your SLOG, as they are power-loss protected.
 

joeinaz

Contributor
"We are exploring the possibility of creating a single storage system with around 10TB usable space that can support 3 ESXi hosts with total of around 40 VMs, running general workload (DC, RDS, fileserver). We are ok with spending some money if we need to add some components, but nothing elaborate (<$5k)."

At first glance, if you have six E5-2640 CPUs you may be light on CPU for VMware, FreeNAS, and 40 VMs. Your systems have plenty of memory. If it were me, I would do the following:

1. Find the largest CPU (core-wise) you can put in your existing systems. Get 6 of those CPUs to build your VMware farm. If we know the exact server model you have, we can help pick the new CPUs.
2. Find another server chassis (for FreeNAS) that can support the number and types of disks you currently have.
3. Get a Supermicro motherboard that will fit in the selected chassis AND support your existing memory.
4. Get a CPU for your Supermicro motherboard.
5. Use a small amount of existing memory (at least 16GB) from the VMware server farm for your Supermicro motherboard.
6. Create your FreeNAS solution using the Supermicro-based system and the old disks.
7. Install ESXi 6 on a USB key in each of the 3 HP servers.
8. Use the FreeNAS system to house the 40 VMs.

If done properly, you may be able to allocate 1 core per VM and still have overhead available in the event one system in the VMware farm is offline for any reason. You should also be less than $5K in outlays.
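
A quick core-count sanity check of that plan. The 12-core figure below is purely an assumption for illustration (e.g. a larger E5-2600 family part); the right upgrade CPU depends on the exact Gen8 model:

```python
# Core-count check for "1 core per VM" with one host offline (N-1).
cores_per_cpu = 12        # assumed upgrade CPU with 12 cores; illustrative only
sockets_per_host = 2
hosts = 3
vms = 40

total_cores = hosts * sockets_per_host * cores_per_cpu              # 72
n_minus_one_cores = (hosts - 1) * sockets_per_host * cores_per_cpu  # 48

print(f"All hosts up: {total_cores} cores; one host down: {n_minus_one_cores} cores")
print(f"Needed at 1 vCPU per VM: {vms}")   # 48 >= 40, so N-1 still covers it
```

With the existing 6-core E5-2640s the N-1 number drops to 24 cores, which is where the "light on CPU" concern comes from.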
 

blipblip

Cadet
Thank you for the input.

The 1U servers have 12 drive slots, whereas the 2U servers have 16 and can expand to 25 if needed.

@joeinaz these servers don't need to run ESXi. The components mentioned here only need to run FreeNAS. As far as servers go, we have to stay with HP for various reasons.

So my thought now is:
Grab a 2U server and put in 384GB of RAM
A pair of P3520s for SLOG (overkill at 2TB? Maybe replace with an Intel P4800X 375GB?)
A single P3520 for L2ARC
12 SAS drives as 6 mirrored-pair vdevs in one pool
Not sure what CPU to grab. I think there is a pair of E5-2690s in the mix.
Set the P420i controllers to HBA mode or grab an LSI card

This should give me around 10TB usable. Then use NFS for the ESXi hosts.
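
A minimal sketch of where that ~10TB comes from with 6 mirrored pairs (before ZFS overhead and before keeping space free, so practical capacity will be lower):

```python
# Usable capacity of 12 x 1.8 TB drives laid out as 6 two-way mirrors.
drives = 12
drive_tb = 1.8
mirror_width = 2

vdevs = drives // mirror_width      # 6 mirrored pairs
raw_tb = drives * drive_tb          # 21.6 TB raw
usable_tb = vdevs * drive_tb        # 10.8 TB before ZFS overhead

print(f"{vdevs} mirror vdevs, {raw_tb:.1f} TB raw, ~{usable_tb:.1f} TB usable")
# Keeping ~50% free for VM workloads would leave roughly 5 TB of comfortable space.
```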

What do you guys think?

I also wonder how much performance this will yield. 10k IOPS?
 

kdragon75

Wizard
Then use NFS for the ESXi hosts.
That all sounds great, but why do so many people insist on NFS?
I also wonder how much performance this will yield. 10k IOPS?
This is hard to say, as it will depend on the read/write patterns; some will greatly benefit from the SLOG/L2ARC. I'm guessing the drives are good for 150+ IOPS each, so 12 x 150 = 1,800 raw read IOPS, and half that for writes.
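
The same estimate as a sketch; the 150 IOPS per 10k SAS drive is a guess, not a measured figure:

```python
# Raw IOPS for 12 drives arranged as 6 two-way mirrors.
drives = 12
per_drive_iops = 150              # assumed figure for a 10k SAS drive
vdevs = drives // 2

read_iops = drives * per_drive_iops    # reads can be served by either mirror member: ~1,800
write_iops = vdevs * per_drive_iops    # writes hit both members, so ~150 per vdev: ~900

print(f"~{read_iops} raw read IOPS, ~{write_iops} raw write IOPS (before ARC/SLOG help)")
```

ARC/L2ARC hits and the SLOG absorbing sync writes are what close the gap between those raw numbers and the 3k IOPS peak in the workload profile.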
 

Dice

Wizard
A pair of P3520s for SLOG (overkill at 2TB?
You size the SLOG according to your network speed.
Its function is to hold the incoming (sync) write data between ZIL flushes, until it is committed to disk.
This would need to be rechecked, but IIRC from a dev post somewhere, there is currently a more or less fixed ceiling of approximately 8-12GB on the useful SLOG size.
The calculation is along the lines of 40Gbps / 8 = 5GB incoming per second. If the ZIL flushes every 4 seconds, you end up at 20GB.
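
A sketch of that sizing math, using both the 40Gbps example and the 10GbE NICs these servers actually have; the 4-second flush interval is the figure quoted above and in practice depends on ZFS tunables:

```python
def slog_size_gb(link_gbps: float, flush_interval_s: float) -> float:
    """Worst case: data that can arrive between transaction group flushes."""
    return link_gbps / 8 * flush_interval_s   # Gbit/s -> GByte/s, times seconds

# The 40 Gbps example from above, plus the 10GbE NICs in these servers.
for gbps in (40, 10):
    print(f"{gbps} Gbps link, 4 s flush: ~{slog_size_gb(gbps, 4):.0f} GB of SLOG needed")
# 40 Gbps -> ~20 GB, 10 Gbps -> ~5 GB; either way, far below a 2 TB P3520.
```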
Maybe replace with an Intel P4800X 375GB?
I like that choice.
 

joeinaz

Contributor
Thank you for the input.

The 1U servers have 12 drive slots, whereas the 2U servers have 16 and can expand to 25 if needed.

@joeinaz these servers don't need to run ESXi. The components mentioned here only need to run FreeNAS. As far as servers go, we have to stay with HP for various reasons.

So my thought now is:
Grab a 2U server and put in 384GB of RAM
A pair of P3520s for SLOG (overkill at 2TB? Maybe replace with an Intel P4800X 375GB?)
A single P3520 for L2ARC
12 SAS drives as 6 mirrored-pair vdevs in one pool
Not sure what CPU to grab. I think there is a pair of E5-2690s in the mix.
Set the P420i controllers to HBA mode or grab an LSI card

This should give me around 10TB usable. Then use NFS for the ESXi hosts.

What do you guys think?

I also wonder how much performance this will yield. 10k IOPS?

384GB may be overkill for a FreeNAS solution; 16GB would likely be fine for just FreeNAS. As for the CPU, a single E5-2640 should be enough for just FreeNAS. Finally, if you are looking for 10k IOPS, you will need either far more than a dozen 10k 1.8TB disks or SSDs to achieve that goal.
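
Rough math behind that last point, again assuming ~150 IOPS per 10k SAS drive and two-way mirrors:

```python
import math

# How many spindles 10k IOPS would take at ~150 IOPS per 10k SAS drive.
target_iops = 10_000
per_drive_iops = 150   # assumed figure from earlier in the thread

drives_for_reads = math.ceil(target_iops / per_drive_iops)                # 67 drives
# Writes land on both members of a mirror, so each pair only adds ~150 write IOPS.
mirrored_drives_for_writes = math.ceil(target_iops / per_drive_iops) * 2  # 134 drives

print(f"~{drives_for_reads} drives for 10k read IOPS, "
      f"~{mirrored_drives_for_writes} drives in mirrors for 10k write IOPS")
```

That is far more spindles than the dozen on hand, which is why hitting 10k IOPS realistically means leaning on SSDs and caching.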
 

Dice

Wizard
384GB may be overkill for a FreeNAS solution; 16GB would likely be fine for just FreeNAS.
I disagree with this suggestion.

RAM is imperative to host L2ARC.
My prediction is that you'll pass the point of diminishing returns at around 128GB. If the machines carry 192GB as they sit, all the better.
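
A rough illustration of why RAM matters here: every L2ARC record keeps a header in RAM. The ~70 bytes per header is an approximation for the ZFS code of that era, and the record sizes and 960GB L2ARC are assumptions, so treat this as order-of-magnitude only:

```python
# Approximate RAM consumed by L2ARC headers: one header per cached record.
l2arc_gb = 960          # hypothetical L2ARC size (the 5 x 192 GB figure comes up below)
header_bytes = 70       # approximate per-record overhead; varies by ZFS version

for record_kb in (16, 128):   # small VM-style records vs. the 128 KB default
    records = l2arc_gb * 1024**3 / (record_kb * 1024)
    ram_gb = records * header_bytes / 1024**3
    print(f"{l2arc_gb} GB L2ARC at {record_kb} KB records: ~{ram_gb:.1f} GB RAM for headers")
# Smaller records mean far more headers, which is why a huge L2ARC on a
# modest-RAM box can hurt more than it helps.
```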
 

joeinaz

Contributor
I disagree with this suggestion.

RAM is imperative to host L2ARC.
My prediction is that you'll pass the point of diminishing returns at around 128GB. If the machines carry 192GB as they sit, all the better.

Oops! I forgot about the use of an L2ARC.
 

blipblip

Cadet
I disagree with this suggestion.

RAM is imperative to host L2ARC.
My prediction is that you'll pass the point of diminishing returns at around 128GB. If the machines carry 192GB as they sit, all the better.
So if the L2ARC needs to be <= 5 x RAM size, and I go with 192GB of RAM, I'll need to limit the SSD size to 960GB. I wonder if I can do that with the NVMe Intels...

All the servers have at least 192GB of RAM, using 16GB sticks. I can put in 24 sticks to max a box out at 384GB, but if that is not going to get me extra performance, there's no point doing it.

Thank you all for your input!
 

Dice

Wizard
Another key to performance with ZFS mirrors for IO-intensive workloads is <free space>. Gobs of it.
You can think of ZFS as a way to extract killer performance out of cheap hardware - however, you'll need quite a lot of hardware.
Here's a way to think of the upgrade path, a priority list if you will.
- Free up more space. Ideally 50% free, to avoid fragmentation among other problems.
- Add more vdevs (mirrors)
- Add RAM
- Add L2ARC

I'd not go so far as to say that 128GB is <the definitive max you'll benefit from>.
If your workload would benefit from a larger L2ARC than 128GB of RAM could host, add RAM and go!
At some point the poolsize:L2ARC ratio becomes sort of philosophical. Think of this scenario: a 1TB L2ARC and 5TB of real usable space, of which you'll perhaps end up using 3TB... which would put about 30% of your pool into the L2ARC... which is... entertaining :P
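
That scenario in numbers, as a small sketch (the 5TB/3TB/1TB figures are the hypothetical ones from the paragraph above):

```python
# Free-space and L2ARC coverage for the hypothetical scenario above.
usable_tb = 5.0     # real usable pool space
used_tb = 3.0       # data actually stored
l2arc_tb = 1.0      # L2ARC device size

free_pct = (usable_tb - used_tb) / usable_tb * 100    # 40% of the pool still free
coverage_pct = l2arc_tb / used_tb * 100               # ~33% of live data fits in L2ARC

print(f"Pool {free_pct:.0f}% free; L2ARC could hold ~{coverage_pct:.0f}% of the used data")
```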
 