Confirm this build layout is appropriate for ESXi with 10 hosts and about 25-30 VMs.

Status
Not open for further replies.

zimmy6996

Explorer
Joined
Mar 7, 2016
Messages
50
Good morning all ... I've been using FreeNAS in my ESXi environment for about 7 months now, and have been very pleased. @jgreco will likely remember talking to me when I stood up my last system. You all provided a lot of great info back then to get me started. While my last build was kind of a "hacker's hobby", now I'm trying to standardize on a solid build to migrate over to, one that provides increased speed along with a path for future growth.

Just a refresher: right now I've got a storage environment of a FreeNAS box with about 15TB of usable storage, running RAIDZ2 (7 x 3TB disks, with a hot spare). Yes, I know that isn't ideal for ESXi; I was dealing with what I had at the time, with the space I needed. The network backend on this old box is two 1GbE connections. Each interface has its own IP, and I use MPIO from ESXi to talk to the storage backend rather than LACP, since LACP doesn't seem to work worth a darn. The system is dual hex-core Xeons with 96GB RAM. For SLOG I have 20GB of Intel S3700 space: a 20GB partition on each of 2 x 100GB drives, mirrored. The 2 remaining 80GB partitions are striped for 160GB of L2ARC.

All in all, for my workloads, performance is acceptable. There are times the system comes under load, like when a Microsoft Exchange server does maintenance on an Exchange DB, but again, it's been tolerable.

I have a 2nd NAS that is running Windows Storage Server 2012 R2. I won't bother with the specs, because this box is going away.

Right now, the 2 NAS boxes are used solely for iSCSI storage for ESXi.

The goal now is to build two new boxes following "best practices", migrate everything over to the new FreeNAS boxes, and retire these old hodgepodge pieces of crap.

I'm going to start out and say right now: the plan is to build with 1GbE network interfaces to start with. We are looking to upgrade the network to 10GbE next year when funds are available, but for now I'm going to start on 1GbE. I will be dropping dual-port 10GbE NICs into these new units, so the cards are there when the time comes.

Having said all that, here is what I was looking to do ...

I have already purchased 2 Supermicro 36-bay servers. Specs are below:

Supermicro 847E16-R1400LPB SuperChassis
Comes with 2 SAS2 backplanes:
Front: SAS2-846EL1 (24-port 3.5" SAS/SATA backplane with expander)
Rear: SAS2-826EL1 (12-port 3.5" SAS/SATA backplane with single LSI SAS2X28 expander chip)
Supermicro X8DTN+ motherboard
Dual Intel Xeon E5645 2.26GHz hex-core CPUs, 144GB RAM (18 x 8GB DDR3 ECC REG)
LSI 9211-8i HBA (JBOD card)
Onboard Intel 82576 dual-port gigabit Ethernet controller
Dual 1400W power supplies

My intent is to drop in a 2nd Intel 82576 dual-port NIC, giving me four 1GbE ports, and also drop in a Chelsio T520 for the future 10GbE expansion.

With all of that said, the next thing is to make sure we purchase drives, SLOG, and L2ARC appropriately, so that the setup works now, still works when we move to 10GbE, and provides a path for expansion. Here is my intended layout on each machine:

2 x Verbatim 64GB USB Flash Drives for OS
2 x Intel DC S3700 200GB Drives mirrored for SLOG
2 x Samsung 850 Pro 256GB Drives striped for 512GB L2ARC
14 x 3TB Seagate Enterprise ES.3 7200RPM Drives for storage.

The storage layout would use 12 of the drives as 2-drive mirrors, striped together to give me 18TB of usable storage, with 2 hot spare drives available in the chassis. (And I know about not "filling" it all the way for performance reasons. I say 18TB usable because with compression, which I have seen give me about 1.5:1, I'll come in at a comfortable number under the 18TB of raw space.)
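For clarity, I'd actually build this through the FreeNAS GUI, but in zpool terms the layout would look roughly like this (the pool name "tank" and the da0-da13 device names are just placeholders):

# 6 striped 2-way mirrors plus 2 hot spares (device names are placeholders)
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7 \
  mirror da8 da9 \
  mirror da10 da11 \
  spare da12 da13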

Thoughts on this layout? Clearly, with only a handful of 1GbE network ports, throughput will be network-limited. And a 200GB SLOG is overkill, but I wanted to make sure it is sized right for the future when we go live with 10GbE.

And a small question about this layout: it leaves me with 16 available bays. From what I read, it "sounds" like I can add another 3TB of space by dropping in 2 additional 3TB drives, mirroring them, and adding them to the volume. If I'm wrong about that, lemme know.
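If I've got that right, the expansion is just another mirror vdev added to the existing pool, something like this (hypothetical device names again):

# add one more 2-way mirror vdev; the pool grows by 3TB
zpool add tank mirror da14 da15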

Any thoughts are appreciated. The only thing I'm currently locked in with is the chassis, as I have already purchased them. All of the drive specs for volumes, SLOG, L2ARC are all open to be discussed, and honestly I want to make sure I get it right.

@jgreco Your wonderful insight here would be greatly appreciated!
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
The storage layout would use 12 of the drives as 2-drive mirrors, striped together to give me 18TB of usable storage, with 2 hot spare drives available in the chassis. (And I know about not "filling" it all the way for performance reasons. I say 18TB usable because with compression, which I have seen give me about 1.5:1, I'll come in at a comfortable number under the 18TB of raw space.)
Just note that with iSCSI you actually want to stay below 50% used space. So since you are calculating 18TB, then in reality you want to only use 9TB max (I would personally stay lower like 40% max).
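One way to keep yourself honest there is to size the zVol itself at that limit and thin-provision it. A sketch, assuming a pool named "tank" (the name and the 9T figure are just placeholders):

# sparse 9TB zvol on an 18TB pool; caps how much the initiators can ever write
zfs create -s -V 9T tank/iscsi-vol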

For SLOG I have 20GB of Intel S3700 space: a 20GB partition on each of 2 x 100GB drives, mirrored. The 2 remaining 80GB partitions are striped for 160GB of L2ARC.
Are you saying that you have partitioned the SSDs and are using them for both SLOG and L2ARC? If so, then that is a "No No"...

Question regarding the two *new* boxes for iSCSI: Are you planning on running them as isolated machines? Meaning that each will have its own instance of FreeNAS or are you just going to make them JBODs? I am asking because you mentioned a:
LSI 9211-8i HBA JBOD
Which kind of leads me to believe that they will be their own instances of FreeNAS; just wanted to be sure. If that is the case, then maybe you might want to consider just using an LSI 9200-8e instead and attaching the boxes to your FreeNAS server as an actual JBOD.

Lastly, are you using "sync=always"? Very important to do this especially when dealing with VMs...
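That's just a property on the zvol; for example (zvol name is a placeholder):

# force all writes through the ZIL/SLOG so VM data is actually protected
zfs set sync=always tank/iscsi-vol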
 

zimmy6996

Explorer
Joined
Mar 7, 2016
Messages
50
Just note that with iSCSI you actually want to stay below 50% used space. So since you are calculating 18TB, then in reality you want to only use 9TB max (I would personally stay lower like 40% max).


Are you saying that you have partitioned the SSDs and are using them for both SLOG and L2ARC? If so, then that is a "No No"...

Question regarding the two *new* boxes for iSCSI: Are you planning on running them as isolated machines? Meaning that each will have its own instance of FreeNAS or are you just going to make them JBODs? I am asking because you mentioned a:

Which kind of leads me to believe that they will be their own instances of FreeNAS; just wanted to be sure. If that is the case, then maybe you might want to consider just using an LSI 9200-8e instead and attaching the boxes to your FreeNAS server as an actual JBOD.

Lastly, are you using "sync=always"? Very important to do this especially when dealing with VMs...


RE SPACE - I agree with you. My current deployment is 15TB raw, I provisioned 12TB for iSCSI, and am getting 1.5x compression.

RE SLOG/L2ARC - I know that is a "no no". That is why I said that build was a hacker hobby. My provisions for the new systems are dedicated SLOG devices, and dedicated L2ARC devices.

RE DEDICATED BOXES - Yes, these two machines are intended to be two standalone FreeNAS boxes. I don't quite understand what you are suggesting regarding using the 9200-8e.

And yes ... SYNC=ALWAYS!!! :)
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
You can set up the other two boxes as just JBODs with no mobo or CPU and connect them via SAS to the main box, to essentially have 3x36 bays in one machine.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
RE SPACE - I agree with you. My current deployment is 15TB raw, I provisioned 12TB for iSCSI, and am getting 1.5x compression.
Yeah, but that is running RAIDZ2 (7 x 3TB disks, with a hot spare). If doing mirrors for iSCSI, then usable space is going to be a lot less.

Since you are thinking about 14 x 3TB (**staying away from Seagate discussions...), you're roughly looking at 21TB (mirrors with no hot spare), so consider not using more than 50% = 10.5TB... Ballpark figures...

Consider this (from another post I responded to):
One of my setups results in ~17.5TB of usable space, but I plan on never going above 8TB combined for my iSCSI volume (it has two zVols). Currently I have each zVol set to only 2.5TB, but I can increase it to 4TB later if needed. *** From my understanding, while one can increase a zVol, you don't want to try and decrease it...

So in conclusion while I have 40TB of Raw Space (10 x 4TB), which yields me ~17.5TB Usable Space (Mirror vDevs); I will only plan to ever use 8TB of that... *** Not counting the 2 Hot Spares or 1 Cold Spare

Just some food for thought...
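And since it came up: growing a zVol later is a one-line property change (placeholder names below); it's shrinking that you don't want to attempt:

# grow an existing zvol from 2.5TB to 4TB; rescan on the initiator side afterwards
zfs set volsize=4T tank/zvol1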
 

zimmy6996

Explorer
Joined
Mar 7, 2016
Messages
50
Yeah, but that is running RAIDZ2 (7 x 3TB disks, with a hot spare). If doing mirrors for iSCSI, then usable space is going to be a lot less.

Since you are thinking about 14 x 3TB (**staying away from Seagate discussions...), you're roughly looking at 21TB (mirrors with no hot spare), so consider not using more than 50% = 10.5TB... Ballpark figures...

Consider this (from another post I responded to):
One of my setups results in ~17.5TB of usable space, but I plan on never going above 8TB combined for my iSCSI volume (it has two zVols). Currently I have each zVol set to only 2.5TB, but I can increase it to 4TB later if needed. *** From my understanding, while one can increase a zVol, you don't want to try and decrease it...

So in conclusion while I have 40TB of Raw Space (10 x 4TB), which yields me ~17.5TB Usable Space (Mirror vDevs); I will only plan to ever use 8TB of that... *** Not counting the 2 Hot Spares or 1 Cold Spare

Just some food for thought...



I completely understand the concerns about space. You want to keep the used FreeNAS space below 50%.

Can you provide any additional input regarding the rest of the build?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The price differential between 3TB and 4TB Constellation ES drives should be fairly small ($20?) and you should probably consider the larger drive, as more free space on the pool translates to reduced fragmentation. Make sure you're calculating your space requirements based on mirrors, not RAIDZ2. Be aware that large block sizes with RAIDZ2 compress a lot better than the smaller, faster block sizes you might want to use with mirrors, so reliance on compression might be bad.
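Before banking on your current 1.5:1, it's worth checking what ratio you actually get once you're on mirrors with a small volblocksize; for example (zvol name hypothetical):

# shows the achieved compression ratio alongside the block size in use
zfs get compressratio,volblocksize tank/iscsi-vol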

With 144GB of memory and 10GbE in play, you might also consider moving towards NVMe for L2ARC. SATA is fine for 1GbE, but once you are able to be pushing 1GByte/sec of traffic out to the network, it's nice to be able to pull that from L2ARC at similar speeds. I would suggest that a pair of 950 Pro's results in a nice setup. I suggest the 512's, with the possible caveat that it's possible to configure your system such that the main ARC gets stressed out by too many L2ARC header entries for certain ZFS record sizes. It's really interesting to watch a ZFS pool that's doing almost no reads from HDD because you've given it enough L2ARC to fulfill everything from SSD.
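To put rough numbers on that caveat (the ~180 bytes per L2ARC header is the figure commonly cited for this generation of ZFS, so treat it as a ballpark):

512GB L2ARC / 16KB average block size = ~32 million L2ARC entries
32M entries x ~180 bytes per header = ~5.8GB of ARC consumed by L2ARC bookkeeping

Halve the block size and that doubles, which is how an oversized L2ARC ends up starving the main ARC.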
 

zimmy6996

Explorer
Joined
Mar 7, 2016
Messages
50
The price differential between 3TB and 4TB Constellation ES drives should be fairly small ($20?) and you should probably consider the larger drive, as more free space on the pool translates to reduced fragmentation. Make sure you're calculating your space requirements based on mirrors, not RAIDZ2. Be aware that large block sizes with RAIDZ2 compress a lot better than the smaller, faster block sizes you might want to use with mirrors, so reliance on compression might be bad.

With 144GB of memory and 10GbE in play, you might also consider moving towards NVMe for L2ARC. SATA is fine for 1GbE, but once you are able to be pushing 1GByte/sec of traffic out to the network, it's nice to be able to pull that from L2ARC at similar speeds. I would suggest that a pair of 950 Pro's results in a nice setup. I suggest the 512's, with the possible caveat that it's possible to configure your system such that the main ARC gets stressed out by too many L2ARC header entries for certain ZFS record sizes. It's really interesting to watch a ZFS pool that's doing almost no reads from HDD because you've given it enough L2ARC to fulfill everything from SSD.

Jgreco-

Thanks for the input. Honestly, I'm seeing 3TB Constellation drives for around $80 each, and 4TB Constellation drives for around $140 each. That is why the 3TB drives looked so appealing.

That said, here's a random question: for the cost, do you always stick to enterprise drives, or do you consider cheaper alternatives?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Note that Samsung 960 Pros may be cheaper than the 950 Pros, once available.
 
