ZFS-iSCSI Write Performance Slow?

Status
Not open for further replies.

liukuohao

Dabbler
Joined
Jun 27, 2013
Messages
39
With gigabit components, you can use a straight through cable.

I'd run that test to rule out any issue with your energy saving switch.

@gpsguy, thanks for your advice. I should go back and see whether there is a network issue here!
 

liukuohao

Dabbler
Joined
Jun 27, 2013
Messages
39
Hi guys,

I am now using a D-Link DGS-1008D H/W Ver: C5 (non-energy-saving) switch;
previously I was using the D-Link DGS-1008D H/W Ver: E1 (green switch).

Surprisingly, the write speed has improved somewhat, and the difference is significant.

So this proves that the green (energy-saving) switch (H/W Ver: E1) has higher latency than
the earlier non-green model (H/W Ver: C5), although the non-green switch of course runs
a bit warmer than the green one.

OK, I have done 2 separate tests here:
1) Test with a crossover cable
2) Test with straight-through cables (2 network cables + a D-Link DGS-1008D Ver: C5, non-energy-saving switch)

Please see the screen captures below:

ZFS_Raid10_iSCSi_Write_Performance_v2(DLinkDGS1008Dswitch).jpg

ZFS_Raid10_iSCSi_Write_Performance_v3(cross-over cable).jpg
 

liukuohao

Dabbler
Joined
Jun 27, 2013
Messages
39
My advice is to have:
- More RAM and an SSD L2ARC. Fewer reads hitting the disks (because they are served from cache) will make the writes more sequential and therefore faster.
Sequential write = write, write, write, write.
Random write = write, seek/read, write, seek/read, write, write...
So remove the seek/reads by caching them.

- An SSD ZIL will improve synced writes A LOT!

@datnus, I am not sure whether your testing is correct about having the log stored on an SSD.
In my case, trying to improve write speed (RAM remaining at 8GB), the result was not
so fantastic! Actually, it is the worst speed I have experienced.

I did some tests last week using a Kingston 60GB SSD as a log device.
The write speed got even worse. I may have to do another screen capture to prove it to you.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
@datnus, I am not sure whether your testing is correct about having the log stored on an SSD.

No, he's not incorrect. For many (but not all), adding an SSD slog may help. How much, and under what circumstances, is a totally different argument.
In my case, trying to improve write speed (RAM remaining at 8GB), the result was not
so fantastic! Actually, it is the worst speed I have experienced.

And that's because his workload, hardware, and userbase are not yours. Remember that I said before that each person will have to customize to their exact server needs.

I did some tests last week using a Kingston 60GB SSD as a log device.
The write speed got even worse. I may have to do another screen capture to prove it to you.

Not surprising in my opinion either. Adding a ZIL increases the stress on the ZFS cache. Stuff stored in the ZIL is still stored in RAM. So adding a ZIL only increases the amount of data that needs to be committed to the zpool, which will cut down the amount of read cache you have available for ZFS.

And before you think that an L2ARC will help: an L2ARC uses RAM to index its own contents. So now you are beginning to scratch the surface of my very deeply worded sentence...

The less RAM you have, the harder it is going to be to optimize your settings. You have fewer resources to work with, so you have less margin for error when your tuning settings are "slightly less than ideal". It's already very, very difficult; why throw a wrench into the mix too?
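If you want to see how much RAM the cache is actually consuming, here is a minimal sketch using the standard FreeBSD sysctls, run from the FreeNAS shell (these OID names are stock FreeBSD; they may differ slightly between releases):

# Current ARC size, in bytes
sysctl kstat.zfs.misc.arcstats.size
# Ceiling the ARC is allowed to grow to
sysctl kstat.zfs.misc.arcstats.c_max

If size sits pinned near c_max while writes are slow, the cache is already starved, and a ZIL or L2ARC will only squeeze it further, as described above.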

Good luck!
 

liukuohao

Dabbler
Joined
Jun 27, 2013
Messages
39
@cyberjock, thank you for your comment.

Because I was not born in the USA and English is not my native language, I have a bit of trouble understanding local slang, like:
"why throw a wrench into the mix too"
I have no idea what this means. Sorry, I have to say this: I do appreciate your response, and I would appreciate it even more if you could write short and sweet messages that I can understand easily.

The less RAM you have, the harder it is going to be to optimize your settings. You have fewer resources to work with, so you have less margin for error when your tuning settings are "slightly less than ideal". It's already very, very difficult; why throw a wrench into the mix too?

So the last message (shown above) that you wrote means I need to increase the RAM size from 8GB to 16GB in order to see a significant gain in write speed, right?

According to the FreeNAS guide, the minimum RAM requirement to run ZFS is 8GB, is it not? But because extra RAM is needed for optimizing the settings, more than 8GB of RAM should be ideal?

And am I the only one having this problem here? That is, not having enough RAM to see satisfactorily good write speed?
 

datnus

Contributor
Joined
Jan 25, 2013
Messages
102
Testing with small writes is not a reliable way to measure speed.
My server can hit 300 MB/s for a 500MB write but only 60 MB/s for a 10GB file via iSCSI.
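To illustrate the point, here is a rough sketch of such a comparison from the FreeNAS shell; the pool path and file sizes are just examples, and because /dev/zero is highly compressible you should disable compression on the test dataset to get honest numbers:

# ~500MB write: small enough to be largely absorbed by RAM and transaction groups
dd if=/dev/zero of=/mnt/tank/test_small bs=1M count=500
# ~10GB write: large enough to force sustained commits to the disks
dd if=/dev/zero of=/mnt/tank/test_large bs=1M count=10240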

@cyberjock: Correct me if I'm wrong. The RAM seems to store the metadata of data blocks, and ZFS needs to check the metadata and SEEK for an empty block to write to. So bigger RAM means more space for metadata and faster seeks, hence faster writes?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
I'm not trying to be a jerk, but it is sufficiently complicated that no quick writeup here will be thorough, and there's a relatively recent writeup about it.

Please read http://dtrace.org/blogs/ahl/2012/12/13/zfs-fundamentals-transaction-groups/

Some of us had to learn that stuff the UTSL way :smile: But it really covers all the important points I can think of, without delving too deep technically.

Anecdotally: I have a 30TB pool (11 x 4TB RAIDZ3). If I set the system memory to 6GB, I get about 70MB/sec write speed. If I set the system memory to 32GB, I get about 230MB/sec. There are some other tuning items that have been done to prop up responsiveness, so those numbers are kind of specific to this one host.
 

ror

Dabbler
Joined
Sep 18, 2013
Messages
11
I've been lurking, researching, and testing ZFS (for many months, with OmniOS + Napp-IT, FreeNAS, and a bit of Nexenta) as a datastore for ESXi over NFS or iSCSI, and the wall I keep running into is EXACTLY what jgreco, cyberjock, and several other experienced users on this forum and others have said thousands of times.

You want good performance? (Given that you meet the requirements of a good amount of RAM and several mirrored vdevs:)

- Add a super-fast ZIL device for sync writes, or forget about the whole project. << I am out looking for a suitable SSD right now, and the Intel S3700 gives me hope. In fact, I just created a thread in the Hardware section of this forum asking for help: http://forums.freenas.org/threads/ssd-suitable-for-zil-intel-dc-s3700.15130/ I'm going to order the Intel S3700 to test. If it doesn't help much, I will return it and eat the 15% restocking fee. Then I'll just use iSCSI, with redundant UPSes at the colocation, and stop wasting more time on the ZFS datastore project.

- If you can live with NFS sync=disabled or iSCSI sync=standard (which are the same thing), then there is no need for a ZIL device, but see the sketch below. Better be sure you have redundant power supplies AND a UPS.
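For reference, a minimal sketch of toggling this per dataset from the FreeNAS shell; the dataset name tank/vmstore is just an example:

# Check the current sync setting
zfs get sync tank/vmstore
# Trade data safety for speed (testing only!)
zfs set sync=disabled tank/vmstore
# Restore the default behavior afterwards
zfs set sync=standard tank/vmstore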

You can see whether your ZIL is being used by SSHing to your FreeNAS box and running "zilstat"... I forget the rest of the options; look them up. :) If you see your ZIL being constantly used (lots of numbers moving), add a ZIL device, or performance will never be what you expect/want. I did this while running a Veeam replication and my ZIL was constantly hit. Then I set sync=disabled, zilstat showed zeroooos... and my performance and IOPS skyrocketed.
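A minimal sketch of that check (I am assuming the usual interval/count arguments; check zilstat's own usage text):

# Print ZIL activity once per second, ten times
zilstat 1 10

Sustained non-zero ops and bytes during your workload mean sync writes are hitting the ZIL, which is exactly the case where a dedicated log device pays off.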

As for a ZIL device with a supercapacitor for power-loss protection: from my research, the Intel S3700 is currently the best hope that is somewhat affordable, but I have to test it. No one has said much about the new Seagate 600 Pro. Or a ZeusRAM, if you've got big $$$ ($2500 for 8GB).
 

NORATIO

Cadet
Joined
Jan 12, 2014
Messages
1
OK guys, I need to stop you immediately. The problem is not the RAM. I have lots of experience using VMware ESXi with FreeNAS, and the problem is just bad driver/code compatibility between FreeNAS and ESXi.

I have multiple computers to test with, and one of them is a Pentium D with 4GB that can get 70MB/s no problem using FreeNAS 8.1. With version 8.2, 8.3, or 9.2, they all give me poor performance of 16MB/s. When writing large files, having 4GB or 32GB of RAM makes no difference; it does for small files. We are in 2014 and we mostly transfer big files: playing back large files like MKVs, storing and burning ISOs, or playing games that load 1-2GB data files.

I have also tested a Windows iSCSI initiator connected to FreeNAS and it works with no problems, no matter the version of FreeNAS. So it's really a piece of compatibility between ESXi iSCSI and FreeNAS, useful to users like us, that the FreeNAS programmers left behind.

Your solution right now is to install version 8.1 and try again.

Here's the configuration of this Pentium D with 4GB RAM, plus information on my ESXi install.

Pentium D 3GHz, 4GB RAM
1Gbit NIC (Intel)
$30 Vantec RAID card (SATA II 150)
4x 3TB, RAID 10 (mirrored, for a total of 6TB)

FreeNAS config:
ZFS volume (6TB, 5.7TB usable)
iSCSI with 3 extent files, 3 targets (512-byte blocks); I kept the default settings when creating them

ESXi:
Version 5.1
Running 8 VMs (5x Windows 2003 32-bit Datacenter, 1x Ubuntu 13, 2x VDR)
All VMs use a virtual NIC (E1000); I kept the default settings again when I configured them.

To test all of this, I just have to update to a newer version of FreeNAS and everything becomes slow. Revert, and all is fine. I now think the newer versions ditched compatibility with older hardware, that's all. That doesn't mean it never worked with older versions of FreeNAS.

I spent hours trying to make the latest version of FreeNAS work and it's always a dead end. However, I have found that a RAID in stripe mode (no parity) will give you good performance between FreeNAS iSCSI and ESXi. But who wants a RAID with no redundancy? So to me it just means the write path for mirrored RAID 10 or RAID 1 is missing a part that was useful for some users with hardware like ours.

I will certainly not spend $2000 on a FreeNAS server with lots of RAM when you can buy a 5-bay Synology for less than $1000. FreeNAS is for cheap solutions. I have one server that has been running for 2 years, had bad drives along the way, and all is good after resilvering. So the solution works if you have the right version.

Thanks, and I hope this will help others with the same problem.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
No, the newer versions of FreeNAS didn't "ditch old compatibility with older hardware."

There are a few basic issues:

1) There are some oddities with newer versions of istgt (which incidentally is not written by the FreeNAS Project) that may or may not be related to your issue, but definitely seem to appear more on slowish gear. istgt itself has always been a bit of a "best of a bunch of not-great options" choice. The FreeBSD Foundation sponsored the development of a new kernel iSCSI target module which will be available in a new FreeNAS version. The kernel and userland frameworks appear to be in 9.2.0-RELEASE but without support from the FreeNAS GUI and middleware; this implies that it could be interesting to set it up "by hand" and see how well it works.

2) ZFS is a CoW storage system. This means that if you write a string of blocks, then rewrite a block in the middle of that string, you wind up with noncontiguous blocks (and a dual seek penalty if you read them as a group). Block storage for something like a busy R/W iSCSI environment is exceedingly challenging for a CoW system. Normally admins mitigate this by providing additional RAM and eventually L2ARC. Your system is tiny, only half the minimum recommended memory for NORMAL use. You are basically giving ZFS very little to work with.

3) FreeNAS has over time been tuned to work better with larger systems, tuning which comes at the cost of good out-of-the-box support for smaller systems. You can alter many of the settings to make it work better on a small system. However, in general, I think the attitude is that there are already plenty of NAS platforms aimed at recycling old gear. FreeNAS is a test platform for TrueNAS, a commercial enterprise-grade NAS offering, and FreeNAS is one of very few free software NAS platforms that'll really shine on a 256GB RAM system with a terabyte of L2ARC.

4) Installing older versions of FreeNAS may "fix" your problem, but be aware that some of them have known security issues.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
AMAZING! NORATIO is using FreeNAS 8.1! That never existed!

Yes, I'm teasing. I'm sure you have a typo somewhere.
 

ZFS Noob

Contributor
Joined
Nov 27, 2013
Messages
129
I will certainly not spend $2000 on a FreeNAS server with lots of RAM when you can buy a 5-bay Synology for less than $1000. FreeNAS is for cheap solutions. I have one server that has been running for 2 years, had bad drives along the way, and all is good after resilvering. So the solution works if you have the right version.
I think you're missing some key features of FreeNAS, honestly. I've been struggling to get NFS performance with FreeNAS + XenServer to be reasonable and I've failed, but the iSCSI performance in my environment is exceptional. I'm seeing > 12,000 read IOPS and > 5,000 write IOPS on a 4-drive repurposed system. This may offer horrid performance for streaming videos like you're doing, but I simply can't imagine that something like a Synology with 512MB of RAM would offer anything close to that. I haven't tested my aging EqualLogic yet (I need to trust the FreeNAS system enough to migrate all the database servers off it first for a fair test), but 5,000 write IOPS is comparable to something like a 25-drive RAID-0 array with pretty fast drives in it.

I'm sorry FreeNAS isn't working for you. If older versions worked better, and you don't need things like GUI-based snapshotting and snapshot transfers between hosts, then you should probably take a look at NAS4Free, which is where the old codebase migrated when FreeNAS went in for a rewrite.
 