10Gb NIC direct connected can't ping.


Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,176
Is that why it seemed so familiar.
I hereby propose that a period in the place of a question mark denote a rhetorical question.
 

Brendonb

Dabbler
Joined
Oct 14, 2014
Messages
26
;)

So...further info and testing...

jgreco, in response to your post and link... the pool in question (12 x 1TB 7200 SAS) does have a 128GB SLOG (not that it's using it all, but I got it on a deal). The hardware specs of the server in question are in my signature... pretty beefy.

With the same LUN, here are the results I'm seeing:
[screenshot of benchmark results attached]


Does this make sense to you? Are these what you would expect? Is FreeNAS just that much stronger on iSCSI vs. NFS?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If you read up on L2ARC, you'll find that your L2ARC shouldn't exceed about 5x your ARC. You are obviously at something like 10x your ARC size, because your ARC is almost certainly not more than 10 or 11GB in a "best case" situation.

I know you've heard this from me at least twice, but 16GB of RAM is NOT enough RAM for what you are trying to do. One of the reasons is that L2ARC uses ARC for its index, which means you need even more RAM to avoid hurting performance. We've had lots and lots of users wrongly assume this is like Windows and that more hardware is always better. In fact, with an ARC that's already stressed because of its small size, adding an L2ARC can actually decrease performance. Yes, I just said you can add more hardware and see performance decrease.
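If you want to see where you actually stand, something along these lines from the FreeNAS shell will show the current ARC and L2ARC sizes (these are the standard FreeBSD ZFS sysctls; values are in bytes):

sysctl kstat.zfs.misc.arcstats.size         # current ARC size
sysctl kstat.zfs.misc.arcstats.c_max        # maximum ARC size
sysctl kstat.zfs.misc.arcstats.l2_size      # data held in L2ARC
sysctl kstat.zfs.misc.arcstats.l2_hdr_size  # ARC memory consumed by L2ARC headers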

Much of what you are hashing out in this thread is clearly explained in my noobie guide. Your problems have been documented by dozens of users before you who weren't happy with the "add more than 32GB of RAM" mantra we give here, but that's the harsh reality. You are welcome to continue down this path, but just like the dozens of people before you, you are spinning your tires on problems that aren't going to be solved by more tire-spinning. At some point time is money and it's just not worth your time. I've literally seen a few users spend months trying to get extra performance out of insufficient hardware. Some of them spent so much time that they realized that, had they been paid minimum wage for the hours spent trying to make their existing hardware work, those wages would have paid for the appropriate hardware more than once over.

And FYI, there's another active thread around here debating the whole iSCSI vs. NFS question, so you might want to read up on it. It covers much the same ground as this one, but the discussion there has taken a slightly different angle, so it's worth seeing what has been said.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Does this make sense to you? Are these what you would expect? Is FreeNAS just that much stronger on iSCSI vs. NFS?

Yes. Yes. No. (Linked an entire detailed post on the topic for you.)
 

Brendonb

Dabbler
Joined
Oct 14, 2014
Messages
26
We're talking about an HP DL380 dual Xeon with 130GB of RAM.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Yes. And?

No, but seriously. You think you have a hot machine. That's great. So therefore you must have no problems. Why are you here, then?

If you're waiting for me to hand you a more personalized clue than what you got back in #16, better sit down and get comfy, it'll be a while. I need you to follow along with the things I've asked of you, because I hate repeating myself.
 

Brendonb

Dabbler
Joined
Oct 14, 2014
Messages
26
And I don't have any L2ARC (SSD as "cache") because I have so much RAM (ARC).

I'm sorry if it's not been clear; perhaps I should have moved to a different thread or asked a different question to avoid confusion about the hardware.

This thread started about trying to find a compatible 10Gb card, but now I am just very surprised by how slow NFS is versus iSCSI on FreeNAS regardless of NIC. I do understand the whole principle of sync vs. non-sync and ARC vs. L2ARC vs. SLOG (separate intent log), but judging by the chart above, others just don't take as severe a penalty on NFS as FreeNAS does, and I was hoping for some tuning advice.

Thank you for the original feedback on the NIC.
 

Brendonb

Dabbler
Joined
Oct 14, 2014
Messages
26
FWIW, I've read every link you supplied and do appreciate your assistance; it just doesn't answer why FreeNAS is 1/5 the performance of other NFS servers.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
I specifically asked you to try something in #16. I don't recall seeing an answer. (Feel free to correct me if I missed it though.)
 

Brendonb

Dabbler
Joined
Oct 14, 2014
Messages
26
Hi. Nope, I missed that suggestion. I'll hopefully run it today and get back with results. I assume this is to determine the rough speed of the hardware and establish how much ESXi/NFS sync writes might be getting in the way.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Between that and the pool design... those are the usual problems.
 

Brendonb

Dabbler
Joined
Oct 14, 2014
Messages
26
This is the command that I ran: zfs set sync=disabled Big50/Big50-NFS-ESX
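For reference, checking the current setting and putting it back to the default afterwards is just:

zfs get sync Big50/Big50-NFS-ESX            # shows the current value
zfs set sync=standard Big50/Big50-NFS-ESX   # back to the default; sync=disabled risks data loss on power failure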

FreeNAS, sync=standard (default setting): IOPS = 684.16
FreeNAS, sync=disabled: IOPS = 1269.99
Solaris, sync=standard: IOPS = 2594.97

Further info: this pool is a 4x3 "RAID50"-style layout (4 vdevs of 3-disk RAIDZ1) with a 128GB SLOG.

I read one of jgreco's bug reports concerning the txg size being too large relative to RAM, but that seems to have been resolved?
My question now is: is there something to tune in NFS to get better IOPS, or do I just accept this if I want to use FreeNAS?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Well, that pretty much clears things up: it's working right about where it ought to. Not sure what's happening with Solaris, but it is "wrong" for most definitions of that term.

A RAIDZ vdev has IOPS approximately equivalent to a single member device. Typical high performance SAS drives might see around 200-300 IOPS per drive, and since you've used RAIDZ, you are basically limited to 4x that because you have 4 vdevs. Assuming fast SAS drives, or 300 IOPS capability, 300*4 = 1200. Sync=disabled is giving you 1270.

Unless you have a lightning-fast SLOG, a 50% loss for sync=standard versus sync=disabled is within the ballpark for an SSD SLOG. There it's all about latency, and every little bit hurts you.

So the very first thing I would suggest you try is to ditch the RAIDZs. Go with six mirror vdevs of 2x1TB. My guess is that'll bump you up into the 2000-2500 range, and in practice you ought to see more with all that RAM. Also, my memory says we've previously pegged the 840 Pro as a poorly performing SLOG device, so you might want to consider making that L2ARC and getting... maybe an Intel S3700? I forget the latest hotness for SSD SLOG. With 128GB of RAM you can easily do some L2ARC if you feel that'd be beneficial. It won't show up on most benchmark tests, but in production it makes a hell of a difference.
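To be clear about the layout I mean, it'd look roughly like this from the command line (pool and device names are just placeholders; in FreeNAS you'd actually build it through the GUI volume manager):

# six 2-way mirrors of the 1TB drives, a fast SSD as SLOG, the 840 Pro as L2ARC
# (the da*/ada* names are hypothetical -- substitute your real devices)
zpool create tank \
    mirror da0 da1 \
    mirror da2 da3 \
    mirror da4 da5 \
    mirror da6 da7 \
    mirror da8 da9 \
    mirror da10 da11 \
    log ada0 \
    cache ada1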
 

Brendonb

Dabbler
Joined
Oct 14, 2014
Messages
26
Wow that was meaty! Thank you very much.

So what I'm hearing is that, by the math, I'm basically pushing near the max I/O of the pool regardless of 1Gb vs 10Gb.

In my case this pool is more for bulk VM/data storage (data drives) than pure VM speed, so I'll have to weigh the loss of 2-3TB of space against the increased performance. I have another 8x300GB SAS in RAID10 for pure VM speed. Someday I hope to add a nice set of RAIDZ(2?) SSDs for a top performance tier.

I don't understand the Solaris difference myself, but wanted to make sure I didn't have a limiter in here somewhere causing the server to crawl.

I have a second 128GB SSD as well, so I'll evaluate upgrading the SLOG to something better. The Intels were pricey last I checked.
 

bestboy

Contributor
Joined
Jun 8, 2014
Messages
198
May I ask how you are measuring the IOPS?

Also, I believe RAIDZ is not the best option when it comes to maximizing IOPS.

Your vdevs contain 3 spindles of spinning rust. Let's assume each one can do 300 IOPS.
For writes, ZFS splits a data block into 2 parts, computes the parity, and lets the 3 spindles write concurrently: 2 spindles write data, 1 writes parity. A write operation is only finished when all 3 parts are on disk, so you get the write IOPS of the slowest disk in the vdev.

For reads, ZFS needs to fetch the data parts back to reconstruct the block (and the parity too if the checksum doesn't verify). Before it can return the block, it has to wait for the spindles holding those parts to finish their fetches. The 3 spindles behave like a single drive, which is why you get only about 300 of the possible 900 read IOPS.

Now, you have 4 of these vdevs striped. Theoretically that gives you 4x 300 = 1200 IOPS.


MEH: I just noticed jgreco beat me to it and posted the same conclusion.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Without knowing lots more, yes, the math is suggesting you're at least in the ballpark of max I/O. You can make some incremental improvements through tuning and fiddling around with things. There's a lot of squishy room for tuning improvements, but when I say "a lot" I mean like 10%-25%, not 100% or 500%.

There's no requirement that your vdevs be the same size. Assuming you mean you have 8x300GB SAS drives in 4 sets of mirrored vdevs, you could simply add the 1TB drives as mirrored pairs to that pool and wind up with 10 vdevs. Assuming all drives are pretty fast, that'd probably land you in the neighborhood of 2500-3500 IOPS. There's some loss because the smaller drives can't be used equally.
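Something along these lines, conceptually (pool name and device names are placeholders; FreeNAS's volume manager does the equivalent from the GUI):

# extend the existing 4-mirror pool with the twelve 1TB drives as six more 2-way mirrors
zpool add vmpool \
    mirror da8 da9 \
    mirror da10 da11 \
    mirror da12 da13 \
    mirror da14 da15 \
    mirror da16 da17 \
    mirror da18 da19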

SSDs in RAIDZ2 are not likely to be good for VM speed. As with HDDs, mirror them.
 

Brendonb

Dabbler
Joined
Oct 14, 2014
Messages
26
bestboy: I'm using the VMware I/O Analyzer, which is a packaged Linux appliance with a web interface for running Iometer. I find it to be an excellent way to run "standardized" tests against multiple storage systems.

Joe: My thought on the SSDs is that because they're so impressively fast, you won't feel the IOPS loss from a RAIDZx configuration, and because they're so small capacity-wise I wouldn't want only half the usable space.

Additionally, it would just feel wrong to mix disk sizes. In my case I do plan to use them differently, so it makes sense to keep them separate (I think). I'm doing a sort of manual tiering, because most free OSes don't do auto-tiering; ZFS has L2ARC, and that's an ethereal "tier". I am curious about MS Storage Spaces and its auto-tiering, but I don't have duplicate storage boxes. Maybe someday.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Joe: My thought on the SSDs is that because they're so impressively fast, you won't feel the IOPS loss from a RAIDZx configuration, and because they're so small capacity-wise I wouldn't want only half the usable space.

Bear in mind that VM storage burns a lot of space; in order to avoid problems due to fragmentation, you shouldn't fill a dataset above probably 60% in the best case. Otherwise you'll see things grind to a crawl. Basically with ZFS you have to throw a lot of resources at the problem. If mirroring vs RAIDZ is an issue for you, then you may not be willing to appropriately resource your filer. Think on this before you proceed.
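An easy way to keep an eye on that is something like the following (the FRAG column depends on a relatively recent pool feature flag, so it may just show a dash on older pools):

zpool list -o name,size,alloc,free,cap,frag    # watch CAP and FRAG as the pool fills up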
 