ZFS iSCSI Write Performance Slow?


liukuohao

Dabbler
Joined
Jun 27, 2013
Messages
39
Hi All,

FreeNAS Server Hardware Spec:
Motherboard = PDSME
CPU = Pentium D930 3.0GHz dual core 775 socket
RAM = 8GB DDR2 (max.)
NIC = Intel 82573 PCI-E Gigabit Ethernet

OS = FreeNAS v9.1.0 x64
Formatted with the ZFS file system
The server uses the motherboard's south bridge ICH7R
as the SATA II controller for the 4 disks.
Mirror A = 2TB WD20EARX x 2
Mirror B = 2TB WD20EARSX x 2
WD IntelliPark = disabled
ZFS RAID 10 = Mirror A striped with Mirror B
Total capacity = 3.6TB

ZVOL created
Volume size = 3TB (cannot set higher)
Compression Level = Off
Block Size = 4,096 bytes
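
These zvol settings can also be confirmed from the FreeNAS shell with something like the line below; "tank/zvol0" is just a placeholder for the actual pool/zvol name.

  # show the zvol's block size, compression and volume size
  zfs get volblocksize,compression,volsize tank/zvol0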

iSCSI device extent (not a file extent) created
iSCSI initiator created
iSCSI portal added
iSCSI target created, Logical Block Size = 4096 bytes
Extent associated with target.

Windows 7 Client PC
The Windows 7 Ultimate 64-bit client PC can connect to the FreeNAS server's iSCSI target successfully.
The client is connected to the FreeNAS server over a dedicated network, with its own
gigabit switch and network cables.
The client's 3rd-party virus scanner and firewall are disabled, but by default the
Windows 7 built-in firewall re-activates automatically when the 3rd-party firewall is disabled.
The client PC uses an Atheros L1 Gigabit Ethernet adapter.

Atheros L1 Gigabit Ethernet
Device Manager- Network adapters- Advanced- Property:
Flow Control = ON
Interrupt Moderation = ON
Max IRQ per Second = 5000
Maximum Frame Size = 1514
Media Type = Auto
Network Address = Not Present
Number of Receive Buffers = 256
Number of Transmit Buffers = 256
Power Saving Mode = Off
Shutdown wake up = Off
Sleep Speed down to 10M = On
Task Offload = On
Wake Up Capabilities = None

Below is the result that I have screen-captured.
As you can see, the sequential write speed shown in CrystalDiskMark is approx. 39MB/s.
I cannot complain about the sequential read speed of 92MB/s.
The write traffic looks very choppy and unsteady (shown by the vertical green lines) in the Bandwidth Monitor.
RAM usage on the FreeNAS server during the write operation is approx. 5GB on average, as you can see in the
physical memory utilization diagram below.

Question:
Would a higher RAM capacity, say 16GB, improve
the write speed to perhaps 80MB/s?

Your advice is much appreciated here.

Thank you.

ZFS_Raid10_iSCSi_Write_Performance.jpg
 

datnus

Contributor
Joined
Jan 25, 2013
Messages
102
Network throughput in FreeNAS sucks, with no fix yet.
I don't believe more RAM will help your write speed.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Network throughput in FreeNAS sucks, with no fix yet.

What a weird claim.

I don't believe more RAM will help your write speed.

It certainly could, but it depends on what the problem is. Throughput is a function of the system as a whole; you have relatively complicated subsystems (ZFS, iSCSI, TCP, device drivers) where poor performance in any one of them affects the system as a whole. That's why we typically advocate looking at each subsystem individually to help isolate problems that need to be rectified...
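
For example, a crude way to take the network and iSCSI out of the picture and look at the pool by itself is a local sequential write/read on the server; the dataset path and sizes below are only placeholders, and the zero-fill test is only meaningful with compression off:

  # write ~8GB of zeros straight to the pool, then read it back
  dd if=/dev/zero of=/mnt/tank/ddtest bs=1m count=8192
  dd if=/mnt/tank/ddtest of=/dev/null bs=1m
  rm /mnt/tank/ddtest

If that local number looks healthy, the bottleneck is more likely in the network or iSCSI layers.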
 

liukuohao

Dabbler
Joined
Jun 27, 2013
Messages
39
What a weird claim.



It certainly could, but it depends on what the problem is. Throughput is a function of the system as a whole; you have relatively complicated subsystems (ZFS, iSCSI, TCP, device drivers) where poor performance in any one of them affects the system as a whole. That's why we typically advocate looking at each subsystem individually to help isolate problems that need to be rectified...

Yes, I was thinking about that: looking at each component individually and troubleshooting everything that affects the write speed.
But I just could not handle all that FreeBSD CLI stuff involved in using IOzone and XDD.
I am really just a noob here; I could not figure out all those commands when reading the FreeNAS guide towards the end (page 254).
Is there a simplified layman's guide, so that a noob like me can read through it quickly and understand the ins and outs?
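
For what it's worth, from what I can make of the guide, the kind of IOzone command it describes seems to be something like the line below (run on the server; the path, file size and record size are just my guesses):

  # sequential write/rewrite (-i 0) and read/re-read (-i 1), 4GB test file, 128KB records
  iozone -i 0 -i 1 -r 128k -s 4g -f /mnt/tank/iozone.tmp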
 

liukuohao

Dabbler
Joined
Jun 27, 2013
Messages
39
Yes, I was thinking about that: looking at each component individually and troubleshooting everything that affects the write speed.
But I just could not handle all that FreeBSD CLI stuff involved in using IOzone and XDD.
I am really just a noob here; I could not figure out all those commands when reading the FreeNAS guide towards the end (page 254).
Is there a simplified layman's guide, so that a noob like me can read through it quickly and understand the ins and outs?

But I have to say this: if I lower the file size from 500MB to 50MB, the write speed improves a bit, to around 60MB/s.
 

liukuohao

Dabbler
Joined
Jun 27, 2013
Messages
39
Network throughput in FreeNAS sucks, with no fix yet.
I don't believe more RAM will help your write speed.

Thanks for your comment, but I cannot simply give up now that I have spent money and valuable time getting the FreeNAS server running.
 

datnus

Contributor
Joined
Jan 25, 2013
Messages
102
But I have to say this: if I lower the file size from 500MB to 50MB, the write speed improves a bit, to around 60MB/s.

It's just an indication that you need more RAM, especially for small writes, since RAM caches small writes quite well.
And RAM will cache the reads, so the writes become more "sequential" => faster writes.

Can you show me how you did the test?
E.g. % read, % write, and how many workers.
 

datnus

Contributor
Joined
Jan 25, 2013
Messages
102
What a weird claim.



It certainly could, but it depends on what the problem is. Throughput is a function of the system as a whole; you have relatively complicated subsystems (ZFS, iSCSI, TCP, device drivers) where poor performance in any one of them affects the system as a whole. That's why we typically advocate looking at each subsystem individually to help isolate problems that need to be rectified...

I just wonder if there is any post that could prove the opposite :smile:
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
But I have to say this: if I lower the file size from 500MB to 50MB, the write speed improves a bit, to around 60MB/s.

Quite frankly, a better test is the larger file. The thing that most people don't quite get is that ZFS uses massive system resources ... and the "write cache" (in ZFS, a "transaction group") can be larger than you would expect. You may be testing mostly the ability of your system to move data from the network to memory with the smaller file size.
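
If you are curious how large that transaction group is allowed to get on your box, the relevant tunables can be dumped from the shell; the exact sysctl names vary between ZFS versions, so this is just a starting point:

  # list the transaction-group and write-limit related ZFS tunables
  sysctl -a | grep vfs.zfs.txg
  sysctl -a | grep vfs.zfs.write_limit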

Generally speaking, the two things that stand out to me here are the poor performance for smaller amounts of data, which suggests some networking or performance issue not related to the pool itself, and the relatively high percent utilization of the pool, which is not recommended.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
datnus,

As you and I have discussed in several threads now, as soon as you go into the realm of using NFS for ESXi datastores or iSCSI, you are asking for a lot. And as I said before, noobies will not get it right without significant research and experimenting. Nevertheless, I am not surprised at all that you are still not having good performance. I'd expect that your above-average IT guy who's never touched ZFS before will be doing research and testing for at least a month, best case. That is, unless he happens to get very lucky (think winning-the-lottery lucky). Despite my reading and experimenting, I wouldn't take on making NFS for ESXi or iSCSI work as a part-time hobby. It would literally turn into a full-time job for potentially weeks (and lots of reboots).

It's not trivial. You can't just steal some settings from someone else's working setup. And now you see why the vast majority of new users choose to go with UFS for ESXi datastores or for iSCSI.
 

liukuohao

Dabbler
Joined
Jun 27, 2013
Messages
39
It's just an indication that you need more RAM, especially for small writes, since RAM caches small writes quite well.
And RAM will cache the reads, so the writes become more "sequential" => faster writes.

Can you show me how you did the test?
E.g. % read, % write, and how many workers.

@datnus, I did a full test involving various data sizes on CrystalDiskMark, from the smallest at 50MB all the way up to 4000MB.
The results showed that as the size gradually increased from 50MB to 4000MB, the write speed deteriorated considerably.
So the worst result is writing 4000MB sequentially to the ZFS iSCSI target; I cannot remember offhand what the speed was like (I need to dig out the xls file to check the exact figures), but it is miserably slow!
"E.g. % read, % write, and how many workers" - what do you mean?
 

liukuohao

Dabbler
Joined
Jun 27, 2013
Messages
39
Quite frankly, a better test is the larger file. The thing that most people don't quite get is that ZFS uses massive system resources ... and the "write cache" (in ZFS, a "transaction group") can be larger than you would expect. You may be testing mostly the ability of your system to move data from the network to memory with the smaller file size.

Generally speaking, the two things that stand out to me here are the poor performance for smaller amounts of data, which suggests some networking or performance issue not related to the pool itself, and the relatively high percent utilization of the pool, which is not recommended.

Should I test with a single crossover cable to rule out any problem with the network connection?
But I highly doubt there is any kind of network issue here, because on the testing table I only have 1 gigabit switch (a green D-Link energy-saving switch) connected with 2 cables, one to the client PC and the other to the FreeNAS server.
I may change the switch to another D-Link switch (non-energy-saving) just to see if I can improve the write speed.
 

liukuohao

Dabbler
Joined
Jun 27, 2013
Messages
39
It's just an indication that you need more RAM, especially for small writes, since RAM caches small writes quite well.
And RAM will cache the reads, so the writes become more "sequential" => faster writes.

Yes, you could be on the right track.
Here is the link: http://arstechnica.com/civis/viewtopic.php?f=11&t=1195173
As you can read in that thread, a guy was running a FreeNAS server with 4GB of RAM (the recommended minimum is actually 8GB) and got very low write speeds.
When he upgraded to 16GB of RAM, the write speed improved significantly.
In my case, I am stuck with 8GB of RAM because the motherboard only supports a maximum of 8GB.
Unless I change the whole motherboard to the latest i3 or i5 platform, which supports 16GB of RAM, but that is only wishful thinking.
It is not going to happen, though!
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Should I test with a single crossover cable to rule out any problem with the network connection?

With gigabit components, you can use a straight-through cable (gigabit ports handle the crossover automatically).

I'd run that test to rule out any issue with your energy-saving switch.
 

datnus

Contributor
Joined
Jan 25, 2013
Messages
102
Normally, I use IOmeter to test.
And I need to input the I/O pattern, such as % read and % write.
The I/O pattern will affect the result.
 

datnus

Contributor
Joined
Jan 25, 2013
Messages
102
datnus,

As you and I have discussed in several threads now, as soon as you go into the realm of using NFS for ESXi datastores or iSCSI, you are asking for a lot. And as I said before, noobies will not get it right without significant research and experimenting. Nevertheless, I am not surprised at all that you are still not having good performance. I'd expect that your above-average IT guy who's never touched ZFS before will be doing research and testing for at least a month, best case. That is, unless he happens to get very lucky (think winning-the-lottery lucky). Despite my reading and experimenting, I wouldn't take on making NFS for ESXi or iSCSI work as a part-time hobby. It would literally turn into a full-time job for potentially weeks (and lots of reboots).

It's not trivial. You can't just steal some settings from someone else's working setup. And now you see why the vast majority of new users choose to go with UFS for ESXi datastores or for iSCSI.

I have reduced the write latency a lot by caching all, or almost all, of the reads, so the writes "seem" more sequential and get better throughput.
I have stupidly increased the ARC write max to 128-256 MB/s with an ARC shrink shift of 8 (~48-92 MB); strangely, the latency spikes have disappeared.
My full-time job is now looking at the numbers. But I'm not able to reboot the servers that frequently.
 

datnus

Contributor
Joined
Jan 25, 2013
Messages
102

My advice is to have:
- More RAM and an SSD L2ARC. Fewer reads hitting the disks (thanks to the cache) will make the writes more sequential and therefore faster.
Sequential write = write, write, write, write.
Random write = write, seek/read, write, seek/read, write, write...
So remove the seek/reads by caching them.

- An SSD ZIL (SLOG) will improve synced writes A LOT! (Example commands below.)
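
If you go that route, adding the devices to an existing pool is one command each from the shell; the pool and device names below are placeholders, and an SLOG device should really have power-loss protection:

  # add an SSD as L2ARC (read cache)
  zpool add tank cache ada4
  # add an SSD as a separate ZIL log device (SLOG)
  zpool add tank log ada5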
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Personally, if I were going to try to do iSCSI on ZFS, or NFS from ESXi to a zpool, I wouldn't even consider using a system with less than 16GB of RAM. Even at that, you just don't have the necessary resources to start adjusting cache settings and such. Even then, unless someone wanted to hire me and pay me hourly to tune their system, I wouldn't touch it until you hit 32GB. RAM is that important.

The less RAM you have, the harder it is going to be to optimize your settings. You have fewer resources to work with, so you have less margin for error when your tuning settings are "slightly less than ideal". It's already very, very difficult; why throw a wrench into the mix too?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
But I highly doubt there is any kind of network issue here

Please don't "highly doubt." We waste more time around here looking for problems when someone has dismissed the possibility of X being a problem, where X in fact turns out to be the problem. Either you have checked and it is a problem, or you have checked and it is not.

This is 2013. It takes mere minutes to validate your network with "iperf" and "top". Just do it and then we don't need to wonder.
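
The basic check is just the stock iperf client/server pair; the address below is a placeholder for your FreeNAS box:

  # on the FreeNAS server
  iperf -s
  # on the Windows client (using an iperf build for Windows), run for 30 seconds
  iperf -c 192.168.1.10 -t 30

Run it in both directions, and keep top open on the server to see whether a CPU core is getting pegged.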
 

liukuohao

Dabbler
Joined
Jun 27, 2013
Messages
39
Please don't "highly doubt." We waste more time around here looking for problems when someone has dismissed the possibility of X being a problem, where X in fact turns out to be the problem. Either you have checked and it is a problem, or you have checked and it is not.

This is 2013. It takes mere minutes to validate your network with "iperf" and "top". Just do it and then we don't need to wonder.

@jgreco, thanks for your advice. Yes, I should go back and check whether there is actually a network issue here!
 