iSCSI/NFS for ESXi & general use NAS on one machine, feedback and recommendations requested

Status
Not open for further replies.

imanz

Dabbler
Joined
Oct 13, 2014
Messages
11
You'll find that going to 64GB of RAM on that board is *outrageously* expensive. It's made from Unicorn blood or something. You could upgrade to an E5 system (which has a MUCH higher capacity for RAM) for the price of the DIMMs that will get you to 64GB of RAM on the Avoton.
Yes, I saw that originally. I wanted to use higher-density memory, especially after reading all of your fun threads about ZFS and low-memory issues. I think I'll simply look into building the E5-based machine with 64GB of memory as a dedicated ESXi box, drop in some SSDs in a striped configuration, and use some of my spare capacity on the FreeNAS box as a backup for the datastore. I did not do my homework on performance issues and specifics when I was planning this build. I know our systems guys complain about I/O on the different SANs, but I just thought they were being difficult ;). Our storage at work is all over the place: we do have a ridiculous 3PAR SAN, and the rest are EqualLogic. None of these have "trickled" down to the development environment I use; I'm fairly certain we are still running whatever it is the EqualLogics replaced.

Thanks again, guys; now that I have a solution mapped out I will start reading up on the more specific details.
 

imanz

Dabbler
Joined
Oct 13, 2014
Messages
11
Don't take this the wrong way imanz, but you *really* need to figure out what you are wanting to do and go with it. Your last thread was 'wishy-washy' about what you wanted to do, then you created this thread and you've already discussed that you are committing to 32GB of RAM and maybe 64GB of RAM. Now you're talking about a second setup.

How about doing some more research and come up with a *solid* setup. Don't wishy-washy it and fill in the blanks as you go. Figure out *exactly* what you want to do and then post that.

Changing your expectations with every post is wasting everyone's time.

My apologies; I am under other deadlines, and English is not my first language, so sometimes I jump around and expect others to follow!

The project is simple: the hardware I listed originally is what I currently have assembled and working. Around the same time I finished a couple of other projects and had a few boards lying around (Supermicro Avoton boards) that I was deciding whether to return or build into a cluster of ESXi nodes. A large part of this project stemmed from moving to a work-from-home schedule instead of the office, and having the same test environments at home would be perfect. For work, the majority of my development and testing produces memory-intensive load on the VMs. As for other requirements, I am fairly flexible about specifics as long as it gets the job done; the two biggest things I look at are power usage and noise, both of which matter a lot in a machine that runs 24/7 like the FreeNAS build.

I did not do nearly enough research into iSCSI and using FreeNAS this way beyond confirming it was possible. I was simply hoping to get it running and move on, as other deadlines are slowly encroaching. Now that I have read some additional information and these posts, I am starting to form a better picture.

Considering the cost of DDR3 ECC SODIMMs being described as "unicorn blood" I'd go with keeping your homelab on the simpler side and letting work pick up the tab for the expensive testbeds. ;)

That was never out of the question either way ;). My test environment at work is more than sufficient, but working from home full-time and being reliant on my internet connection makes me very uneasy. I guess it's time to price something out.

Either way, I wanted to thank everyone on here for their time; it has been very helpful. Out of curiosity, does this project offer any kind of entry-level tier for showing support? I know pfSense offered a $99 membership, which is a great price point for supporting a project that has given you as much as this one has!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The only support you're going to get for FreeNAS aside from community support is through iX. They do 3-hour consult/repair rates, but I won't quote prices here because that's not really my department to speak for and I could get them wrong. :p
 

imanz

Dabbler
Joined
Oct 13, 2014
Messages
11
The only support you're going to get for FreeNAS aside from community support is through iX. They do 3-hour consult/repair rates, but I won't quote prices here because that's not really my department to speak for and I could get them wrong. :p

I was thinking of it as an affordable way to support the project! Either way, I will keep you updated once everything is set up, with some real-world usage statistics in case someone else reading this has a similar build.
 

DJABE

Contributor
Joined
Jan 28, 2014
Messages
154
A wild HoneyBadger appeared!

So here's my thoughts:

SLOG might be the answer but bear in mind that you'll only see the gains from it if you're using NFS or forcing sync writes in iSCSI. In your case I'd use NFS unless you really need the higher throughput from iSCSI MPIO, but that would necessitate breaking up that 3-way LACP trunk. Also, while 3-ways can be fun for all parties under certain circumstances, they aren't a best practice for LACP since you get funny load-balancing behavior. It'll work, you just might not be getting full utilization out of that third link.

With how inexpensive cheap MLC SSD is (and how expensive good MLC is) you could probably get something like 4x Crucial MX100 256GB and either make a zpool from them (with the M1015) or put them directly into an ESXi box as a local datastore. You did say both "server" and "cluster" when referring to it though; if you've got >1 server, stick them in the FreeNAS box.

Short version:
Get a little more RAM, an M1015, and a bunch of cheap SSDs. Export them over NFS to your ESXi host(s) as a datastore. Enjoy.
Doesn't that lose the whole point of "NAS/SAN" storage? Shouldn't we be thinking in terms of a 5 or 10 server setup rather than a single server? I thought FreeNAS (after the iXsystems acquisition) was targeting 'enterprise' use cases...

I plan to pursue MPIO further (with a quad-port Intel I350-T4 on both ends, ~4 Gbit/s total). Solaris MPIO seems to offer what LACP/LAGG and other L2 features cannot: https://www.microway.com/download/whitepaper/Nexenta_iSCSI_Multipath_Configuration.pdf
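
For anyone wondering why I'm leaning toward MPIO over LACP for single-session iSCSI throughput, here is a toy Python sketch of the flow-pinning behaviour. It is purely illustrative: the link count, addresses and hash are invented, not taken from any real configuration.

Code:
# Toy model: LACP pins each flow to one member link via a header hash,
# while MPIO round-robins individual I/Os across every path.
# All values below are invented for illustration.
LINKS = 4  # quad-port NIC on both ends

def lacp_link_for_flow(src_ip, dst_ip):
    # LAGG/LACP hashes the headers, so one iSCSI session always lands
    # on the same member link (max ~1 Gbit/s per session).
    return hash((src_ip, dst_ip)) % LINKS

def mpio_link_for_io(io_number):
    # MPIO with a round-robin policy spreads successive I/Os over all
    # paths, so a single session can approach the aggregate ~4 Gbit/s.
    return io_number % LINKS

sessions = [("10.0.0.10", "10.0.0.2%d" % i) for i in range(3)]
used = {lacp_link_for_flow(s, d) for s, d in sessions}
print("LACP: %d sessions pinned to %d of %d links" % (len(sessions), len(used), LINKS))

paths = {mpio_link_for_io(n) for n in range(100)}
print("MPIO: 100 I/Os from one session spread over %d of %d links" % (len(paths), LINKS))

In other words, LACP only helps when there are many flows to hash across the links, while MPIO can push a single iSCSI session over all four paths at once.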

Forcing sync writes in iSCSI with a decent Intel 530 MLC 250 GB SSD as a SLOG - any good chances for a low/moderate VM workload? Not 50 VMs, but let's say 16 max on an eight-core CPU/32 GB RAM... it all depends on peak load, I know, but let's give it a try. ;)
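
Roughly what I have in mind, in case it helps anyone following along. This is only a sketch: the pool, zvol and device names are placeholders, and on FreeNAS you would normally do all of this through the GUI rather than from a script.

Code:
# Sketch only: force sync writes on the zvol backing the iSCSI extent and
# attach the SSD as a dedicated log (SLOG) device. Names are placeholders.
import subprocess

POOL = "tank"             # placeholder pool name
ZVOL = "tank/vmstore"     # placeholder zvol behind the iSCSI extent
SLOG_DEV = "/dev/ada4"    # placeholder device node for the Intel 530

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)

# Treat every write as a sync write so the SLOG is actually used for iSCSI.
run(["zfs", "set", "sync=always", ZVOL])

# Add the SSD to the pool as a dedicated log device.
run(["zpool", "add", POOL, "log", SLOG_DEV])

# Verify the result.
run(["zpool", "status", POOL])
run(["zfs", "get", "sync", ZVOL])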

 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Forcing sync writes in iSCSI with a decent Intel 530 MLC 250 GB SSD as a SLOG - any good chances for a low/moderate VM workload? Not 50 VMs, but let's say 16 max on an eight-core CPU/32 GB RAM... it all depends on peak load, I know, but let's give it a try. ;)

Absolutely no way to know because everything is so arbitrary. I ran 2 VMs on my pool (neither were loaded except when I used them) and I could have slit my wrists and bled out faster than they would load. It was terrible, and it is why I do not run VMs from my pool anymore. ;)

The bottom line: if you plan to go with VMs, be ready to drop the money if you want it to perform. That quite possibly means E5, 64GB+ of RAM, ZIL and/or L2ARC. Basically, if it's slow that means "open your wallet". Still slow? That means "open your wallet more". Rinse and repeat until performance is acceptable.

Unfortunately, if you do this wrong you could spend stupendous amounts of money on things that don't actually make your system faster. This is why iXsystems is in business. It can be easier and cheaper to just pay someone to make the problem go away. If it's too slow you call them up and they make it write.. err.. right. :D
 

DJABE

Contributor
Joined
Jan 28, 2014
Messages
154
Nexenta:
Optimize I/O performance for CIFS/NFS/iSCSI UPS-backed deployments?

Optimize the appliance's I/O performance by disabling ZFS cache flushing. While this provides a considerable performance improvement in certain scenarios (in particular those involving CIFS, NFS or iSCSI), this setting may be unsafe in terms of application-level data integrity. It is strongly recommended to use this feature if and only if your storage is NVRAM-protected and the hardware platform is connected to an Uninterruptible Power Supply (UPS). Default setting: unchecked (disabled).
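
For what it's worth, if I understand it correctly the rough FreeBSD/FreeNAS counterpart to that checkbox is the vfs.zfs.cache_flush_disable tunable; treat that name as my assumption and check it against your own release. A quick sketch of how I would check the current state before even considering flipping it:

Code:
# Sketch: report whether ZFS cache flushing has been disabled on a
# FreeBSD/FreeNAS box. The tunable name is my assumption of the
# equivalent to Nexenta's checkbox; verify it on your version first.
import subprocess

TUNABLE = "vfs.zfs.cache_flush_disable"

proc = subprocess.run(["sysctl", "-n", TUNABLE],
                      stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                      universal_newlines=True)
if proc.returncode != 0:
    print("%s not present on this system" % TUNABLE)
elif proc.stdout.strip() == "0":
    print("Cache flushing enabled (the safe default)")
else:
    print("Cache flushing DISABLED - only sane with NVRAM and a UPS")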
 