BUILD iSCSI Datastore for 25 VMs

Status
Not open for further replies.

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
No. They need to be separate devices.

Correct. Especially since the two devices have radically different requirement profiles. A SLOG device needs to have power-loss protection, so something like the Intel 750 or DC P3700 is the usual choice. An L2ARC should be one or two devices that are extremely good at reads and will see a moderate level of writes, for which inexpensive large consumer NVMe drives are probably ideal.
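To make the "separate devices" point concrete, here's a minimal sketch of how the two roles end up as different vdev types on a pool. The pool name "tank" and the device nodes "nvd0"/"nvd1" are hypothetical examples of mine, and on FreeNAS you'd normally do this through the GUI rather than scripting the CLI:

```python
# Minimal sketch: attach a SLOG and an L2ARC as *separate* devices to an
# existing pool using the zpool CLI. Names are hypothetical placeholders.
import subprocess

POOL = "tank"        # hypothetical pool name
SLOG_DEV = "nvd0"    # e.g. Intel 750 / DC P3700 -- needs power-loss protection
L2ARC_DEV = "nvd1"   # e.g. large consumer NVMe -- read-optimized is fine

# SLOG: added as a "log" vdev; only absorbs synchronous writes.
subprocess.run(["zpool", "add", POOL, "log", SLOG_DEV], check=True)

# L2ARC: added as a "cache" vdev; extends the read cache beyond RAM.
subprocess.run(["zpool", "add", POOL, "cache", L2ARC_DEV], check=True)
```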
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
So one thing to bear in mind is that only having two 6TB vdevs might not be that awesome, but I *strongly* encourage you not to go down to four 4TB vdevs or something like that. If performance turns out to be an issue, add additional 6TB vdevs until performance is acceptable, even if your pool utilization is fairly low. From your described usage, I'm kinda suspecting you'll be "okay-but-not-thrilled" with the two 6TB vdevs.
Would you like to elaborate a little on the reasoning behind that part?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Would you like to elaborate a little on the reasoning behind that part?

Because the natural impulse of a storage admin is to add more spindles as a first-tier strategy to increase IOPS. That's a factor in ZFS too, of course, but ZFS may actually get more of a boost from having the extra space and some L2ARC. Having (for example) an array of 12 2TB drives and no ability to expand is bad. Having an array of 4 6TB drives and finding you want double the IOPS means you can go to 8 6TB drives or even 12, you're not full, and in a crisis you can even trade off some IOPS for additional space that is actually sitting right there ready to go.

So rather than going FIRST to IOPS and maybe overestimating what the thing will deliver, and finding yourself stuck with 12 2TB drives and a limited number of IOPS and no expansion capability, I'd go with the larger drives and see what's what.
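To put rough numbers on the expansion argument, here's a toy back-of-envelope calculation; the per-disk IOPS figure and the mirror scaling rules are assumptions of mine, not anything measured:

```python
# Rough comparison of the two strategies above.
# Assumptions (mine, not from the thread): 2-way mirror vdevs, ~200 IOPS per
# 7200 rpm disk, write IOPS of a mirror vdev roughly equal to one member disk,
# read IOPS roughly equal to the sum of its members.

PER_DISK_IOPS = 200  # assumed ballpark for a 7200 rpm SAS/SATA drive

def mirror_pool(n_drives, drive_tb):
    vdevs = n_drives // 2
    return {
        "drives": n_drives,
        "raw_tb": n_drives * drive_tb,
        "usable_tb": vdevs * drive_tb,          # mirrors: half of raw
        "write_iops": vdevs * PER_DISK_IOPS,    # scales with vdev count
        "read_iops": n_drives * PER_DISK_IOPS,  # both sides of each mirror
    }

# Maxed-out small-drive chassis vs. large drives with expansion headroom
for cfg in (mirror_pool(12, 2), mirror_pool(4, 6),
            mirror_pool(8, 6), mirror_pool(12, 6)):
    print(cfg)
```

The point it illustrates: 12 x 2TB in mirrors starts with the same usable space as 4 x 6TB but is already at its IOPS ceiling, while the 6TB pool can roughly triple both its usable space and its vdev count before the 12-bay chassis is full.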
 

sekim

Cadet
Joined
Jun 14, 2016
Messages
6
Turns out that there is a single-CPU version of the ready-built chassis I was looking at originally, which makes life easy.

Following your feedback, the revised spec looks like this:

5028R-E1CR12L SuperStorage Server 2U (with X10SRH-CLN4F motherboard)
Supermicro Rear hot-swap drive bay for 2x 2.5" drives
2 x Supermicro 32GB SATA-III SuperDOM
Intel Xeon E5-1620 v3 Quad-core 3.50 GHz
Intel X540-T1 10GbE NIC
128 GB (4 x 32 GB) - Crucial DDR4 RDIMM
4 x HGST Ultrastar 7K6000 6TB SAS HDD
Intel 750 400GB NVMe SLOG
SanDisk X400 1 TB 2.5" SSD L2ARC

I am fairly sure that this will meet my needs, but I will add more drives if required. It's not worth buying 4TB drives, as the cost saving is negligible, and as @jgreco says, 2TB drives leave me no room for expansion, so I would rather not do that.
 
Joined
Mar 22, 2016
Messages
217
First, the X10DRH might not be the best option. It's a big, hot dual-socket board. For a compute node, great, but for a NAS device with only 12 drives, you're unlikely to need it. You'd *probably* be better off with the X10SRL and an E5-1650 v4 (6-core, very fast), or quite likely the E5-1620 v4 would be fine. Make sure the memory you're getting is ECC Registered RDIMM (LRDIMM won't work with the E5-16xx) if you go that route. The two mirror vdevs will give you a varying number of IOPS, which could be as low as ~250 but as high as ~10K, depending on choices you make.

Getting IOPS out of a pool is a matter of properly sizing things, and there isn't a straightforward formula. If you give ZFS gobs of space to work with, for example, you might easily get 10x the write IOPS out of a hard drive pool that you'd expect to be able to get, but this would be because you're only using 10% of the space, and that doesn't cleanly translate to read IOPS, for which you need ARC/L2ARC.

https://forums.freenas.org/index.ph...d-why-we-use-mirrors-for-block-storage.44068/

etc

Maybe a tad late to the game on this one, but is there any news about LRDIMM support for the new E5-16xx v4? They have a theoretical max memory capacity of 1536GB. That would require eight 192GB DIMMs. Don't know how you'd get that with RDIMMs.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Maybe a tad late to the game on this one, but is there any news about LRDIMM support for the new E5-16xx v4? They have a theoretical max memory capacity of 1536GB. That would require eight 192GB DIMMs. Don't know how you'd get that with RDIMMs.

As far as I know, still no LRDIMM. It is intended as a workstation processor series.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
but is there any news about LRDIMM support for the new E5-16xx v4?
It's traditional for Xeon E5-16xx not to support LRDIMMs (despite the serious lack of documentation), so definitely do not expect it to work.

They have a theoretical max memory capacity of 1536GB. That would require eight 192GB DIMMs. Don't know how you'd get that with RDIMMs.
In a few years, once 256GB RDIMMs are available. It'll probably be close to the end of DDR4's product cycle, with DDR5 around the corner, though, so pricing may never be favorable.
 
Joined
Mar 22, 2016
Messages
217
Sigh, oh well. Thanks for the info!

256GB RDIMMs would be pretty intense. My dad's 5-year-old laptop's HDD isn't even that big.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
They have a theoretical max memory capacity of 1536GB. That would require eight 192GB DIMMs. Don't know how you'd get that with RDIMMs.

With twelve slots, not eight. Normally what's been done on Xeon E5-26xx v3/v4 is to add a third DIMM per channel. See

http://www.supermicro.com/support/resources/memory/X10_memory_config_guide.pdf

This implies you'd get there with 128GB quad-rank LRDIMMs, and we'd probably miss out on seeing 128GB dual-rank RDIMMs, because who'd buy them if LRDIMMs are available? I'm not sure where @Ericloewe is getting 256GB from.

The downside is that memory speeds are typically reduced with the addition of that third DIMM per channel. Also, obviously, the guide listed is for 26xx systems; for 16xx, you basically toss out all the LRDIMM talk.
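Putting numbers on the slot math (a sketch of my own; the only hardware fact assumed is four memory channels per CPU on these Xeons):

```python
# Quick arithmetic behind the 1536GB figure.
# Xeon E5-16xx/26xx CPUs have 4 memory channels; boards offer 2 or 3 DIMM
# slots per channel (the third slot being the extended config noted above).

CHANNELS = 4
TARGET_GB = 1536

def dimm_size_needed(dimms_per_channel):
    slots = CHANNELS * dimms_per_channel
    return slots, TARGET_GB / slots

for dpc in (2, 3):
    slots, size = dimm_size_needed(dpc)
    print(f"{dpc} DIMMs/channel -> {slots} slots, {size:.0f}GB per DIMM")

# 2 DIMMs/channel -> 8 slots, 192GB per DIMM (not a standard module size;
#                    the next step up would be 256GB, hence the earlier comment)
# 3 DIMMs/channel -> 12 slots, 128GB per DIMM (exists as quad-rank LRDIMM)
```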
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I'm not sure where @Ericloewe is getting 256GB from.
Just assuming two DIMMs per channel. Of course, boards that take three per channel make it actually possible.

Come to think of it, the IMCs probably can't address DIMMs larger than 128GB...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Just assuming two DIMMs per channel. Of course, boards that take three per channel make it actually possible.

Come to think of it, the IMCs probably can't address DIMMs larger than 128GB...

Okay, but there's really no basis for thinking they'd ever head out to what's typically assumed to be an extended memory configuration with only eight slots. It's nice that the v4 boosted the per-CPU maximum to 1536GB, though.

That makes this

https://news.samsung.com/global/sam...-gigabyte-ddr4-modules-for-enterprise-servers

interesting, but I don't want to know the price. Even the 64GB modules are hideously expensive, around $800, while the 32GB modules are around $160 apiece, so the same capacity in 32s costs less than half as much.
 

Joined
Feb 15, 2014
Messages
8
You are aware that, in case of any problems with ESX, you will not get any support at all from VMware Support, because that config is by no means supported? It will work for sure, but I'm just saying that if you pay for support, you will not get it.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You are aware that, in case of any problems with ESX, you will not get any support at all from VMware Support, because that config is by no means supported? It will work for sure, but I'm just saying that if you pay for support, you will not get it.

What configuration exactly are you referring to?

TrueNAS has been VMware Certified for some time, and FreeNAS is basically the same product minus bells and whistles.

http://www.vmware.com/resources/com...0&sortColumn=Partner&sortOrder=Asc&bookmark=1
 
Joined
Feb 15, 2014
Messages
8
That won't matter to VMware. Not supported is not supported; you will not get support, no matter how supported TrueNAS is.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That won't matter to VMware. Not supported is not supported; you will not get support, no matter how supported TrueNAS is.

The usual solution to that is to tell a little white lie, or so I thought was the common practice...
 
Joined
Feb 15, 2014
Messages
8
Sure, maybe that works, as long as they do not ask you to open a joint support case with the manufacturer of the storage system, and as long as the iSCSI initiator names or such don't give it away.
 

diehard

Contributor
Joined
Mar 21, 2013
Messages
162
I have never had anyone from VMware care what storage system was being used.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I have never had anyone from VMware care what storage system was being used.

That's not true, but this is no longer 2009 and what's needed for iSCSI to work is well-known, and the TrueNAS platform's been certified. There are certainly ways you could generate broken FreeNAS storage systems that wouldn't work well with VMware, but then that brings us back to the reason that the OP presumably started this thread, and the VMware support conversations are more likely to be of the form "performance stinks"-->"fix your storage server" than "Oh we noticed that your 'FreeNAS storage server' isn't actually on the certified list, go bugger off."
 

diehard

Contributor
Joined
Mar 21, 2013
Messages
162
That's not true, but this is no longer 2009 and what's needed for iSCSI to work is well-known, and the TrueNAS platform's been certified. There are certainly ways you could generate broken FreeNAS storage systems that wouldn't work well with VMware, but then that brings us back to the reason that the OP presumably started this thread, and the VMware support conversations are more likely to be of the form "performance stinks"-->"fix your storage server" than "Oh we noticed that your 'FreeNAS storage server' isn't actually on the certified list, go bugger off."
I was getting the impression that just because you had unsupported storage you automatically got no support from VMware for anything vSphere or ESXi, which has definitely not been my experience. But yeah, I wouldn't expect much support if the problem stems from something they think is storage-related...
 