iSCSI/NFS for ESXi & general use NAS on one machine, feedback and recommendations requested


imanz

Dabbler
Joined
Oct 13, 2014
Messages
11
Hello Everyone!

I am not sure whether this counts as a cross-post; my original question about RAID 5/6 in this thread https://forums.freenas.org/index.php?threads/zfs-6-drive-raid-iscsi-storage-share-question.24074/
was answered, but it raised some additional hardware-related questions.

Here is my current system:
Supermicro A1SAI-2750F-O (8-core Avoton)
16GB DDR3 1600MHz ECC memory
6 × 3TB WD Red drives in RAIDZ2
4 × 1GbE Intel NICs + dedicated IPMI
CyberPower 1000VA UPS (CP1000PFCLCD)
--------------------------------------------------------
Network equipment connected:
pfSense running on an A1SRI-2558F-O with 1 WAN / 5 LAN ports
TP-Link TL-SG3216 (16-port managed switch) - 3 NICs on FreeNAS connected to the switch in LACP on one subnet, which contains the ESXi server and my office machines. 1 NIC on a separate subnet/VLAN for external access, e.g. ownCloud, etc.

The goal of this system is to serve at home as my main storage device for machine backups, extended storage, DLNA media server, FTP, ownCloud, etc.; the only other regular user is my wife, aside from the occasional file I share via FTP/ownCloud. I will keep a copy of all my data on this server, but any files I consider truly important will additionally be backed up to an external USB drive and remotely. I would also like to use a portion (~1.2TB) of my storage as an iSCSI (or maybe NFS?) target for my ESXi cluster. From my basic understanding, iSCSI on ZFS is not exactly the fastest combination when it shares the main volume. If that is the case, what is a budget-friendly alternative for iSCSI that does not use ZFS?

If it is possible to combine an iSCSI/NFS target for ESXi with general FreeNAS use on the same machine and array, which of the following would you recommend for increasing performance?
1. Increase memory to 32GB (4 × 8GB 1333MHz DDR3 ECC) - I really do love these new motherboards, but unfortunately mine uses SODIMMs, and although the theoretical max is 64GB, I have yet to find 16GB sticks. The 8GB sticks are already ~$100, so I can only imagine what 16GB sticks would cost.
2. Add the IBM ServeRAID M1015 controller that seems to be recommended with FreeNAS.
3. One or two enterprise SSDs (Intel S3500 80GB) used as a SLOG (ZIL device) for the whole pool. I am not sure whether a mirrored SLOG is needed, as I have seen some discussion on the forums about this; let me know what you think.
4. Buy two (maybe four :/ ) cheaper and larger SSDs, like the Intel 530 240GB, and build a separate dedicated SSD array to serve as the iSCSI target (a rough CLI sketch of this is below).

OR
I could scrap all of this and scale down to a single ESXi node with built-in storage. :(
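
For reference, if the separate SSD pool in option 4 wins out, a rough CLI sketch of what it might look like (device names, pool name, and sizes are placeholders; the FreeNAS GUI accomplishes the same thing under Storage and Sharing):

```
# Check device names first with `camcontrol devlist`; ada4-ada7 are hypothetical.
zpool create ssdtank mirror ada4 ada5 mirror ada6 ada7   # two striped 2-way mirrors
zfs set compression=lz4 ssdtank                          # cheap win for VM images
zfs create -V 1T -o volblocksize=16K ssdtank/esxi-lun0   # zvol to export as an iSCSI extent
```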


Thanks for looking over everything!
 

DJABE

Contributor
Joined
Jan 28, 2014
Messages
154
Mirrors trump raidz almost every time. Far higher IOPS potential from a mirror pool than any raidz pool, given equal number of drives. Only downside is redundancy - raidz2/3 are safer, but much slower. Only way that doesn't trade off performance for safety is 3-way mirrors, but it sacrifices a ton of space...

For disks of 3TB and larger, 3-way mirrors become more and more compelling.

http://nex7.blogspot.com/2013/03/readme1st.html
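
To make the layout concrete, here is a hedged sketch of a six-disk striped-mirror pool versus the same disks as raidz2 (hypothetical device names; pick one, not both):

```
# Three striped 2-way mirrors: far better random IOPS, but each mirror only survives one failure.
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5

# Same six disks as raidz2: more usable space, any two disks can fail, slower for VM-style I/O.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
```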
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Definitely look into more memory if you don't want to be Really Unhappy With Performance(tm).
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Definitely look into more memory if you don't want to be Really Unhappy With Performance(tm).

Gee, exactly what I said in the other thread. :P
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well if you want to be that way, N00b, let me remind you, N00b, I been sayin' that a lot longer than you. :tongue:
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Actually, I wasn't making fun of you or anything like that. I was just giving the same response as in the other thread, because I'm not seeing why a new thread was necessary.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
A wild HoneyBadger appeared!

So here's my thoughts:

More RAM is typically the first answer for anyone looking for better performance, but if you're doing light ESXi loads, you could probably get away with 16GB. Swapping 2x4GB for 2x8GB and ending up with 24GB might be prudent, though. If you do decide to bump it up, take a look at bug 1531 w.r.t. increasing transaction sizes with larger RAM, and maybe employ a few tunables to adjust your write_limit_max value to avoid the server stalling under heavy I/O. Storage traffic doesn't handle "oops, writes stalled" very well.
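
As a hedged example of what those tunables look like on older FreeNAS/ZFS builds (the write_limit sysctls were removed in later releases with the reworked write throttle, so verify they exist on your version first; the 1 GiB value is purely an illustration):

```
# Inspect the current cap on dirty data per transaction group.
sysctl vfs.zfs.write_limit_max

# Override it to roughly 1 GiB; in FreeNAS you'd add this under System -> Tunables.
sysctl vfs.zfs.write_limit_override=1073741824
```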

SLOG might be the answer, but bear in mind you'll only see gains from it if you're using NFS or forcing sync writes over iSCSI. In your case I'd use NFS unless you really need the higher throughput of iSCSI MPIO, but that would mean breaking up that 3-way LACP trunk. Also, while 3-ways can be fun for all parties under certain circumstances, they aren't a best practice for LACP since you get funny load-balancing behavior. It'll work; you just might not get full utilization out of that third link.
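
If you do go iSCSI and want the SLOG to matter, forcing sync writes on the backing zvol is one line (dataset name is a placeholder):

```
# Force synchronous semantics on the zvol behind the iSCSI extent so the SLOG is actually used.
zfs set sync=always tank/esxi-lun0
zfs get sync tank/esxi-lun0   # confirm the setting took
```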

With how inexpensive cheap MLC SSD is (and how expensive good MLC is) you could probably get something like 4x Crucial MX100 256GB and either make a zpool from them (with the M1015) or put them directly into an ESXi box as a local datastore. You did say both "server" and "cluster" when referring to it though; if you've got >1 server, stick them in the FreeNAS box.

Short version:
Get a little more RAM, an M1015, and a bunch of cheap SSDs. Export them over NFS to your ESXi host(s) as a datastore. Enjoy.
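
A rough sketch of that last step, assuming an SSD pool named ssdtank and a FreeNAS host reachable as freenas.local (both placeholders; the export itself is set up under Sharing -> UNIX (NFS) in the GUI):

```
# On FreeNAS: a dataset for VM storage with cheap compression.
zfs create -o compression=lz4 ssdtank/vmstore

# On each ESXi host: mount the NFS export as a datastore.
esxcli storage nfs add --host=freenas.local --share=/mnt/ssdtank/vmstore --volume-name=ssd-nfs
```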
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The txg sizing stuff works much better in FreeNAS 9, though you can still cause a flood of traffic and get it to burp once or maybe twice.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Hey Cujo, that's a nicer avatar. What prompted the change?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Got tired of the bloody rabbit.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
I'll be the guy that goes out on a limb and asks WHY? (Edit: I see honey badger went there as well.)

You have a small Avoton-based ESXi box that is used very lightly for work and doesn't run anything critical. You are looking at 1.2TB as a storage spec. That is what I gathered from the other thread.

With 1GbE, even if you manage to figure out multipath iSCSI, nothing you can do at a reasonable cost will approach the speed of a local drive. It's not even close. Grab one or more SSDs and stick them in the ESXi box. Use that as your FAST datastore. If you then want to connect to your FreeNAS box for some BULK iSCSI storage... do it. It is very hard to touch the 500+ MB/s and IOPS that local SSDs can provide; it takes 10GbE and a whack of resources and money.
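
For anyone following along, one way to do that from the ESXi shell looks roughly like this (the device path is a placeholder and assumes partition 1 already exists on the SSD; the vSphere client's "Add Storage" wizard does the same thing):

```
# List local devices to find the SSD's identifier.
esxcli storage core device list

# Create a VMFS5 datastore on the SSD's first partition (hypothetical device path).
vmkfstools -C vmfs5 -S local-ssd /vmfs/devices/disks/t10.ATA_____INTEL_SSDSC2BW240A4:1
```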

The story is different if you need 50TB at high speed shared across many servers. But you don't. Simpler is way, way faster and better in this case. It might not be as sexy and complicated, but it will blow the doors off anything your current gear can do.

Two bits. My home/work ESXi box gets hammered.
 

imanz

Dabbler
Joined
Oct 13, 2014
Messages
11
Thanks, guys. Appreciate the help.

I should have mentioned before posting that I was essentially guaranteed to bump the RAM to 32GB after my first post; my only concern was seeing multiple threads recommending 64GB for iSCSI, and although this motherboard supports it, it may not be possible.

Otherwise, I guess I am convinced; I will get a separate pool of SSDs. I really do not want to mess with my main pool too much by introducing factors that could cause data loss down the road. Nobody has mentioned CPU limitations as a cause for concern, so I imagine I should get decent performance out of this box for a while to come after these last few improvements.

Unrelated to this post: I ran a few quick real-world tests on the current setup, copying movies and backup files, and I have to say I am happy with the performance.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Nothing you can do at a reasonable cost will approach the speed of a local drive. It's not even close.

Agreed. I've got a Solaris box here with dual 8Gbps FC, 96GB of RAM, and enterprise SLC for L2ARC/SLOG. Local SSD still wins.

If it's a single server, you've got no plans to make a cluster, and you have drive bays to go local, do it.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You'll find that going to 64GB of RAM on that board is *outrageously* expensive. It's made from Unicorn blood or something. You could upgrade to an E5 system (which has a MUCH higher capacity for RAM) for the price of the DIMMs that will get you to 64GB of RAM on the Avoton.
 

imanz

Dabbler
Joined
Oct 13, 2014
Messages
11
I'll be the guy that goes out on a limb and asks WHY? (Edit: I see honey badger went there as well.)

You have a small Avoton-based ESXi box that is used very lightly for work and doesn't run anything critical. You are looking at 1.2TB as a storage spec. That is what I gathered from the other thread.

With 1GbE, even if you manage to figure out multipath iSCSI, nothing you can do at a reasonable cost will approach the speed of a local drive. It's not even close. Grab one or more SSDs and stick them in the ESXi box. Use that as your FAST datastore. If you then want to connect to your FreeNAS box for some BULK iSCSI storage... do it. It is very hard to touch the 500+ MB/s and IOPS that local SSDs can provide; it takes 10GbE and a whack of resources and money.

The story is different if you need 50TB at high speed shared across many servers. But you don't. Simpler is way, way faster and better in this case. It might not be as sexy and complicated, but it will blow the doors off anything your current gear can do.

Two bits. My home/work ESXi box gets hammered.

I should have mentioned this in the original post: I was going to set up a second Avoton board with 16GB of RAM to add to the ESXi system; I currently have a cluster of two with the goal of adding more. My virtual machines do get used a lot for work, but most of the usage is not disk-heavy. Also, I was hoping to set up an ESXi cluster as a better test system, since we have been playing with Anvil a lot and hope to push through a project to start using it for future deployments. I am still trying to decide between going this route or just building a regular server-class machine where adding 64GB of memory will be easier, and sucking it up and VPNing into work for access to clustered environments.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Don't take this the wrong way imanz, but you *really* need to figure out what you want to do and go with it. Your last thread was wishy-washy about what you wanted; then you created this thread, and you've already gone from committing to 32GB of RAM to maybe 64GB. Now you're talking about a second setup.

How about doing some more research and coming up with a *solid* setup. Don't wishy-washy it and fill in the blanks as you go. Figure out *exactly* what you want to do and then post that.

Changing your expectations with every post is wasting everyone's time.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
my only concern was seeing multiple threads recommending 64GB for iSCSI, and although this motherboard supports it, it may not be possible

Generally it's "64GB for VM hosting" regardless of protocol simply because VMs tend to multiply like rabbits. "Oh, I'll just run a couple." "Let me just clone this and test on it." "Oh, I've got the space, I'll spin up another copy of X."

Considering the cost of DDR3 ECC SODIMMs being described as "unicorn blood" I'd go with keeping your homelab on the simpler side and letting work pick up the tab for the expensive testbeds. ;)
 