Slow speed with FreeNAS iSCSI target


gzartman

Contributor
Joined
Nov 1, 2013
Messages
105
I'm not seeing very good read/write numbers with a CentOS machine connected to FreeNAS via iSCSI.

My FreeNAS box:
- 12-bay server, X8STi-F board, Intel Xeon L5639 (six-core), 12GB RAM
- 6 x 1TB WD Red drives in a RAIDZ1 configuration

Here is what I'm seeing for write performance on the CentOS 6 machine:
[root@sme9b1 ~]# dd if=/dev/zero of=testfile0 bs=1M count=25000; sync;
dd: writing `testfile0': No space left on device
4666+0 records in
4665+0 records out
4892278784 bytes (4.9 GB) copied, 130.534 s, 37.5 MB/s

I have a Promise VTrak M610i hardware RAID device, and with a similar CentOS machine connected to that box I'm getting almost 3 times that speed:
[root@nameserver ~]# dd if=/dev/zero of=testfile0 bs=1M count=6000; sync;
6000+0 records in
6000+0 records out
6291456000 bytes (6.3 GB) copied, 67.2032 seconds, 93.6 MB/s
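
(I realize dd isn't a great benchmark here: the first run actually died with "No space left on device", and in both cases dd prints its rate before the trailing sync finishes. A more apples-to-apples run on both boxes would probably be something like the line below, with the count kept well under the free space and conv=fsync so dd waits for the data to reach disk before reporting; the count of 4000 is just an example size. /dev/zero writes can also be flattered by compression if it's enabled on the target, so treat all of these as ballpark numbers.)

dd if=/dev/zero of=testfile0 bs=1M count=4000 conv=fsync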

Anyone have any insight on the difference? Do I need more memory or something in my FreeNAS box?

Thanks
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
A hardware RAID subsystem with a write cache is very likely to be pretty fast.

ZFS uses your CPU in lieu of a hardware RAID controller; the L5639 is a fairly poor choice because it trades away per-core performance in favor of more cores (generally useless to FreeNAS) and a lower clock speed to gain a better TDP.

Basically your Promise/CentOS setup has the CPU doing next to nothing (just moving packets from the ethernet to a hardware RAID) while the FreeNAS system actually has to do the array's work, and is hobbled by the poor hardware platform. However, do note that even with a better choice of CPU and more memory, ZFS is fairly piggy on system resources, and you might have to carefully select your hardware in order to get good performance.
 

gzartman

Contributor
Joined
Nov 1, 2013
Messages
105
Basically your Promise/CentOS setup has the CPU doing next to nothing (just moving packets from the ethernet to a hardware RAID) while the FreeNAS system actually has to do the array's work, and is hobbled by the poor hardware platform.


I'm pretty surprised that a Xeon processor that is around 2 years old is considered a poor hardware platform. I didn't realize FreeNAS required such high end hardware to provide even middle of the road performance. If that is the case, I'm probably better off buying another VTrak unit off eBay. They can be picked up for $700 or so.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
I'm pretty surprised that a Xeon processor that is around 2 years old is considered a poor hardware platform. I didn't realize FreeNAS required such high end hardware to provide even middle of the road performance. If that is the case, I'm probably better off buying another VTrak unit off eBay. They can be picked up for $700 or so.
I don't think that's quite fair sir. It's not that FreeNAS "requires high end hardware". FreeNAS requires APPROPRIATE hardware. My $80 CPU (G3220) is simply going to outperform your six-core Xeon for certain FreeNAS tasks (like, CIFS sharing, I'll clobber yours).

Jgreco was not saying that this CPU was "not good enough" as a CPU in general; he was saying that the strengths of that CPU match poorly with FreeNAS. It's not about getting the fastest/best hardware, it's about getting the RIGHT hardware. There are, of course, an untold number of places where your fairly expensive 6-core Xeon will be far, far better. FreeNAS is just not one of them.

Jgreco is simply advising you that in the "cores vs. clock vs. TDP" calculus, you've chosen poorly, by getting a CPU that is stronger-than-average in categories FreeNAS doesn't care about, and weaker than average in categories that it does.

As for "middle of the road performance", you can achieve that with FreeNAS with far less expensive hardware (I have, for example, achieved relatively high performance with hardware that is all lower-end to medium-end), as long as you choose CORRECTLY with FreeNAS in mind.

The forum is littered with people achieving very good performance on hardware whose value, on ebay, would be less than $100.
 

gzartman

Contributor
Joined
Nov 1, 2013
Messages
105
Jgreco is simply advising you that in the "cores vs. clock vs. TDP" calculus, you've chosen poorly, by getting a CPU that is stronger-than-average in categories FreeNAS doesn't care about, and weaker than average in categories that it does.

As for "middle of the road performance", you can achieve that with FreeNAS with far less expensive hardware (I have, for example, achieved relatively high performance with hardware that is all lower-end to medium-end), as long as you choose CORRECTLY with FreeNAS in mind.

The forum is littered with people achieving very good performance on hardware whose value, on ebay, would be less than $100.

I'm not sure how we got down this rabbit trail. The CPU is not the bottleneck here, nor would it ever be with a Xeon 5600 series processor unless I started doing a bunch of FS compression, deduping, or other CPU intensive operations like Samba. This CPU is more than sufficient for ZFS.

This is either an IO problem or a memory problem. I am just not familiar enough with ZFS yet to know how much an SSD helps vs adding more RAM, or both.

This is supported by the fact that my CPU is sitting 70% idle during the entire disk write operation I initially presented. Even if I start half a dozen instances all hammering the iSCSI connection, my CPU never gets lower than 60% idle.

I guess I was expecting a little more constructive advice than "your six-core Xeon CPU is hobbling the array." I mentioned this to a Linux developer buddy of mine and he just started laughing.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
OK, well, have a good day.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
No, it's largely a latency problem. Your "developer" friend isn't a competent systems engineer; he's unable to see the complete picture in a complex system with lots of moving parts.

If you don't intend to listen to the advice, don't waste our time by asking. kthxbye!
 

gzartman

Contributor
Joined
Nov 1, 2013
Messages
105
No, it's largely a latency problem. Your "developer" friend isn't a competent systems engineer; he's unable to see the complete picture in a complex system with lots of moving parts.

If you don't intend to listen to the advice, don't waste our time by asking. kthxbye!


The developer is one of the lead developers for one of the major Linux distros. Don't think his competence in systems engineering is in question.

I'm listening, I just don't agree with you. You seem to believe my disagreeing is somehow rude. However, I'm looking into your claim and trying to see where or how the CPU could be the bottleneck, and I can't find any evidence of it on the system. That processor is based on the Gulftown core, so it is pretty solid. I did a test where I did my best to saturate the iSCSI connection and the CPU isn't breaking a sweat:

[Screenshot: FreeNAS resource usage graphs captured during the iSCSI test]


This is telling me the bottleneck is somewhere else.
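
If there's a good way to narrow that down further, I'm happy to run it. One thing I plan to try (assuming I'm reading the FreeBSD man page right) is watching per-disk load on the FreeNAS side while the CentOS dd is running:

gstat

If the individual drives sit near 100% busy while only moving a few MB/s each, that would point at the disk/controller path rather than the CPU.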
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
First: He never said that your CPU is the bottleneck. He said that it was a "poor choice", but you took that to imply that it's the bottleneck.

Second: It's pretty well known around these parts that CPU frequency has a much larger effect on performance than more cores (within reason). One core at 10GHz is full of fail. Feel free to RTFM and see that we mention several times that frequency does matter. Size doesn't though... according to jgreco's mom!

Third: Linux is not FreeBSD. You're the 3rd person I've had to explain that to in 3 days. Linux is NOT FreeBSD. Linux is NOT FreeBSD. I don't care if your buddy designed the whole damn storage subsystem for Linux by himself; his knowledge is almost useless in FreeBSD land. They aren't related at all.

Fourth: Really, about all he told you was that trying to compare ZFS with FreeNAS to CentOS is apples and oranges. The comparison is completely idiotic. ZFS is way more than a file system. It has significant RAM and CPU needs compared to every other file system you're used to. He didn't tell you anything more than that. He didn't even start recommending hardware to you.

Fifth: If you read around these forums you'll see that iSCSI doesn't perform particularly fast on ZFS without some serious hardware to throw at it or serious tweaking. ZFS looks like a hardware RAID controller to people who only scratch the surface of ZFS, but it's not. It does way more than your hardware RAID controller will ever do. Until you realize this you are going to continue to make pretty ballsy comparisons of your hardware that only show that you don't have a deep understanding of what is going on. And your buddy with Linux experience has no clue what he's talking about with respect to ZFS. ZFS changes the way a lot of things behave. As a Linux guy he can't even understand the surface of how ZFS works.
 

gzartman

Contributor
Joined
Nov 1, 2013
Messages
105
First: He never said that your CPU is the bottleneck. He said that it was a "poor choice", but you took that to imply that it's the bottleneck.

He actually said my FreeNAS setup was: "hobbled by the poor hardware platform." I took this to mean he felt my CPU was causing the poor performance I was seeing. I'm just saying I disagree based on what the system stats are telling me. I now think it's an IO subsystem problem. I've ordered the IBM M1015 card that jgreco recommends. We'll see if that makes a difference.

Third: Linux is not FreeBSD. You're the 3rd person I've had to explain that to in 3 days. Linux is NOT FreeBSD. Linux is NOT FreeBSD. I don't care if your buddy designed the whole damn storage subsystem for Linux by himself; his knowledge is almost useless in FreeBSD land. They aren't related at all.

Yep, I understand that. I believe the comments were in reference to the Xeon L5639 being a poor processor. It may not be the ideal processor for FreeNAS given some of the Xeon E3 and E5 processors out there, but it certainly isn't a bad one. When I first looked at FreeNAS 3-4 years ago, none of you were running anything near this processor and were getting better performance results than I'm now seeing.

Fourth: Really, about all he told you was that trying to compare ZFS with FreeNAS to CentOS is apples and oranges. The comparison is completely idiotic. ZFS is way more than a file system. It has significant RAM and CPU needs compared to every other file system you're used to. He didn't tell you anything more than that. He didn't even start recommending hardware to you.

I wasn't attempting to compare CentOS to FreeNAS. If CentOS had native ZFS, I wouldn't be here right now. Not because I think CentOS is better than FreeBSD -- it's just that RHEL/CentOS is my briar patch. I've been working with RH for over 15 years.

The hardware question was basically what I was asking. Everyone seems to focus on ARC and L2ARC around here, so I was wondering if increasing one and/or the other would help. Since asking this question, I've ruled out memory being the problem and L2ARC (I added an SSD and it did nothing). As previously noted, the CPU isn't the problem either; therefore it has to be my SATA controller.
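
(For what it's worth, my understanding is that L2ARC only ever helps reads, so I wouldn't expect the SSD to change a pure write test anyway. If the ARC numbers are useful to anyone, I believe they can be pulled straight from sysctl on the FreeNAS box with something like the line below; the counter names are just what the FreeBSD ZFS port appears to expose:

sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses

I can post those if it helps.)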

Thanks for the feedback.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Hint: An M1015 isn't likely to help. I'm an iSCSI user and have an M1015. :)
 

gzartman

Contributor
Joined
Nov 1, 2013
Messages
105
Hint: An M1015 isn't likely to help. I'm an iSCSI user and have an M1015. :)


Once I get the local filesystem performance up, I'll worry about the iSCSI part. I'm not so much worried about iSCSI long term. This was just the best way for me to do some of the testing I wanted to do, because I've got a Xen machine set up and can quickly set up VMs to play with.

I'm pretty sure I found the problem. As you know, I had an older 32-bit Xeon box that simply wouldn't work for FreeNAS. Last week I replaced it with the box I detailed in the first post of this thread. The box is a 2U rack server with 12 hot-swap SATA bays. 6 of the bays plug right into the motherboard and 6 into a SATA RAID card that came with the box. On closer inspection of the SATA card, I found that it was a PCI-X board plugged into a PCI slot. The drives that plug into this card were the first six drives in the drive cage, so they were the ones I was testing with.

266MB/s of maximum shared PCI bandwidth divided over 6 drives works out to roughly 44MB/s per drive at best = pathetic throughput.

I tested a small 3-drive raidz in the drive bays plugged right into the motherboard and my performance jumped up by a factor of 3. I'm pretty sure now that the M1015 is gonna make a huge difference.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Wait.

Am I understanding you correctly? All of the FreeNAS installs you've been talking about having these issues with have been virtualized in Xen? Or did I miss something?
 

gzartman

Contributor
Joined
Nov 1, 2013
Messages
105
No, my FreeNAS is on bare metal. I have a Xen box (CentOS 5) that I use to install VMs on to mess around with different OSs and configs. I prefer Xen to VMware.

Sorry for the confusion.

Greg
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Don't think his competence in systems engineering is in question.


Absolutely not in question. Now we KNOW he's not competent as a systems engineer. Heh.

See, the problem here is that "developers" often confuse themselves for systems engineers, but they so rarely are. They write code. They're even good at it. But it is only one small cog in making large, complex systems work well.

So since you've been hindered by an incompetent opinion, I will give you one clue as to what's going on. Then I'm off, because I don't really have the time this month, and I'm not going to force clue upon you.

ZFS is reliant on software to do things that hardware normally does. So. Your problem, I'm betting, is latency. A little latency dramatically reduces the speed at which things move.

We have seen this time and time again where users come in with a slowish platform (and my guess is that the L5639 has per-core performance around 2/3rds that of what we consider a reasonable CPU, the E3-1230). They complain that it is sooooo slow. They upgrade to a platform that is only 50% faster yet their speeds double.

So why is that? Because when you look at their CPU stats, it may not LOOK like the CPU is the bottleneck... the trite and overly simplistic view you've taken...

For large amounts of data coming in from the net, it floods the TCP buffer and essentially becomes similar to a blocking operation. So let's simplify our discussion by pretending that network I/O is synchronous (which it most certainly isn't, but it /resembles/ being synchronous for this discussion).

So let's think about what happens here when some data arrives from the net. The CPU is idle. Data comes in. The CPU gets busy, finding blocks, calculating parity, and preparing a transaction group. This should in theory max out the CPU, but it is largely not threaded, so it is basically maxing out a single CPU core for a small chunk of time. An acknowledgement is sent back over the network. The CPU is idle. At some future point, the CPU will schedule a transaction group flush to the disk (once a txg is full).

This repeats many times a second, processing data coming in from the network. Without even considering the act of transaction group flushing, can you see that there are periods where the CPU is idle?

And then there's the transaction group flush. When writing large amounts of data, the CPU is forced to idle pending completion of the previous transaction group. CPU speed is mostly irrelevant here; flushing a txg is a lightweight operation in that it is just shuttling data out to disk.

So here's the thing. Your CPU not being 100% busy is no shocker. But a faster CPU reduces the time between receiving data and being done processing that data, and handling the data faster means that your NAS goes faster. THAT's the thing a faster CPU helps with.

And in practice it is much more complicated than this... it isn't linear with the speed of the CPU, due to the clever layers of caching and buffering built in to multiple layers of UNIX and ZFS. I just wanted you to be able to have a chance to grasp what's going on, so this discussion is very simplified.
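
If you want to see that rhythm for yourself rather than take my word for it, watch the pool from the FreeNAS shell while your CentOS box is writing; one-second samples are enough:

# zpool iostat 1

You will typically see the write bandwidth arrive in bursts every few seconds as each transaction group flushes, with comparatively quiet gaps in between, rather than a nice steady stream. That burst-and-wait pattern is exactly where the latency I'm describing hides.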

Remember, too, ZFS is replacing an expensive RAID controller (silicon designed specifically to do one task) with a general-purpose CPU. Sun's bet was that a CPU was going to be cheaper than the specialized hardware. That does not mean that every CPU is fantastic for use with ZFS! One still has to pick one that's suited to the requirements, in much the way that if you buy a 4-port hardware RAID controller to run eight drives, that isn't going to work out so well for you. People just naturally GET that latter example, but it is harder for people to wrap their heads around the qualities that make for a great ZFS CPU. Prefer clock speed over core count. Prefer turbo boost. Two cores is probably fine, although four cores helps reduce contention. Most filers won't ever make good use of more than four cores. If you start adding in compression and encryption, increase the CPU.

So. Your CPU does not need to be pegged at 100% for me to estimate that it is likely to be introducing additional latency. It is hard to know how much, without actually doing some extensive testing. But it was one of two things I noticed that would obviously impact performance, the other of which was your choice to employ RAIDZ1 with six disks. And latency is a performance-killer. I wouldn't be shocked if a 5 disk RAIDZ1 array on an E3-1230 with 32GB was more than twice as fast.

Many of us here in the forum have seen this sort of problem. We had some very nice Opteron 240 based storage servers from the mid-2000's that were fully capable of gigabit goodness under FreeBSD+UFS, and when I upgraded one to FreeNAS, I was shocked at the horrible performance. It is because with ZFS, the CPU really does matter. It is infuriating and makes planning more complex and difficult. I feel the pain. But it is what it is. I didn't write it. And I've said it before, ZFS is a resource PIG. But if you give it the resources to do the job, it will do an awesome job. It is just too bad that those resources are massively larger than your average NAS. Only you can make the call as to whether or not you wish to provide those resources. Hopefully this message will help you and your developer friend understand why it needs CPU resources.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yep. Eventually people will realize that a CPU at 100% is definitely the bottleneck. But a CPU that isn't at 100% (or that doesn't make it obvious that a single-threaded process is bottlenecked) isn't necessarily not a bottleneck.
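
If you want a quick way to see it, the overall idle percentage hides what individual cores and threads are doing. From the FreeNAS shell, something like this (flags per the FreeBSD top man page) breaks it out:

# top -SHP

-P shows per-CPU usage and -H shows individual threads. A single core or thread pegged near 100% while the summary graph reads 60-70% idle is exactly the situation we're describing.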

Surprised you even spent that much time writing, jgreco. You are more gracious than I'd have been.

I'm unsubscribing from this thread though. Good luck to everyone that posted.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Hey, you know me. Occasionally I am victorious in the neverending effort to increase cluefulness. OP is at least trying to determine the problem, and I don't blame him for believing a developer he clearly trusts. I mean, you go to the resources that have worked in the past... but this isn't a code problem, it is complex system dynamics.

I guess I could additionally comment that testing of individual subsystems would be very helpful in further isolating the problem(s). I don't believe, for example, that the disk controller is a serious problem, but actual testing of that with dd on the raw disks would actually answer that.
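
For the write side, a local test against the pool itself (taking iSCSI and the network out of the picture entirely) would split the problem in half. Something like the following; I'm assuming the pool is mounted at /mnt/tank, and note that /dev/zero compresses to nothing, so only run it with compression off:

# dd if=/dev/zero of=/mnt/tank/ddtest bs=1048576 count=20000

If the local number is healthy and the iSCSI number is still poor, the problem is in the network/iSCSI path rather than the pool.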

My guess? Local disk speeds of the RAIDZ1 for write are sub-100MB/s, switching to 5 disk Z1 improves it 10-25%, and going to three striped mirror vdevs could get performance up to 200-300MB/s write - assuming the disk controller is not a bottleneck, which, being PCI-X, it is. I would still hope for in the area of 200MB/sec local write performance on a three vdev stripe.
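
For reference, the three-vdev striped mirror layout I'm describing would be built out of the same six disks with something like this (pool and device names assumed, and obviously this destroys the existing pool and its data):

# zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5

You trade capacity for IOPS: roughly 3TB usable instead of the ~5TB a six-disk RAIDZ1 gives you, but much better random performance and easier future expansion.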

This is all based on my guess that the CPU is a significant limiting factor.
 

gzartman

Contributor
Joined
Nov 1, 2013
Messages
105
Absolutely not in question. Now we KNOW he's not competent as a systems engineer. Heh.

See, the problem here is that "developers" often confuse themselves for systems engineers, but they so rarely are. They write code. They're even good at it. But it is only one small cog in making large, complex systems work well.

So since you've been hindered by an incompetent opinion, I will give you one clue as to what's going on. Then I'm off, because I don't really have the time this month, and I'm not going to force clue upon you.

This person designed and built the package build system that compiles and maintains all of the packages and repositories for the distribution. The IMS build system has something like 92 cores, 140GB of DRAM, and 20-some-odd TB of SAS-attached RAID storage.

I trust his opinion.

Many of us here in the forum have seen this sort of problem. We had some very nice Opteron 240 based storage servers from the mid-2000's that were fully capable of gigabit goodness under FreeBSD+UFS, and when I upgraded one to FreeNAS, I was shocked at the horrible performance. It is because with ZFS, the CPU really does matter. It is infuriating and makes planning more complex and difficult. I feel the pain. But it is what it is. I didn't write it. And I've said it before, ZFS is a resource PIG. But if you give it the resources to do the job, it will do an awesome job. It is just too bad that those resources are massively larger than your average NAS. Only you can make the call as to whether or not you wish to provide those resources. Hopefully this message will help you and your developer friend understand why it needs CPU resources.

I'm fairly sure I've established that my immediate bottleneck was the poor RAID/SATA board I had in my box (a PCI-X card plugged into a PCI slot). I've ordered the PCIe IBM M1015 card you recommend crossflashing. Once I get that board installed, we'll see how the system performs.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Writing a package build system is not all that impressive an accomplishment, sorry.

It does not make sense that your bottleneck is the existing RAID/SATA controller. Try the following:

Do a "camcontrol devlist" and gather the existing device names (quite possibly "da0" ... "da5")

Then do:

# for i in 0 1 2 3 4 5; do
> dd if=/dev/da${i} of=/dev/null bs=1048576 &
> done
# iostat da0 da3 1

If it shows each device averaging at least 25MB/sec, that's about what I'd expect for PCI-X. You are certainly being limited by such a controller, but if your iSCSI is only able to write at less than 40MB/sec, that aggregate disk bandwidth is still more than enough: you are always writing into a fresh transaction group while ZFS is shoveling the bits from the previous transaction group out to storage, and as long as that flush finishes before the next transaction group comes due, you're fine.
 