Slow simultaneous read


sampledi

Dabbler
Joined
Nov 23, 2012
Messages
13
Hi everybody,
this is my system:
FreeNAS-8.3.0-RELEASE-p1-x64 (r12825)
Supermicro X7SBE motherboard
Intel(R) Core(TM)2 Duo CPU E6750 @ 2.66GHz
8GB RAM
Intel PWLA8494MT PRO/1000 MT Quad Port Server Adapter with link aggregation
L2 managed switch that supports link aggregation
Separate network for NAS
4 x 2TB Seagate Barracuda drives in RAID 0
----
Read performance:
dd if=/mnt/RAID/tmp.000 of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 185.820114 secs (577839395 bytes/sec)
----
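For reference, the matching write test (presumably how tmp.000 was created in the first place; /dev/zero is highly compressible, so the number only means something with compression disabled on the dataset):
----
dd if=/dev/zero of=/mnt/RAID/tmp.000 bs=2048k count=50k
----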

Goal is to provide a consistent read speed for 3 AFP clients, ideally about 110 MB/s per client. Before FreeNAS I was testing Debian + XFS + Netatalk, and I was getting those speeds without any tweaks.
However, I really like ZFS & FreeNAS (it just saved me from bad RAM and serious corruption), and I want this to work with FreeNAS.
With a single client, performance is fine - about 110 MB/s read & write, although a bit "bumpy" (it goes up to 113, then down to 103 ...). However, when I start a simultaneous test from a second client, write speed stays where it was while read speed drops to 50-60 MB/s.
I have tried to find a solution, but without success. Am I missing some tunables / sysctls?

Thank you in advance.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
You do realize that in RAID0, you lack redundancy. Any disk lost means a lost pool.

ZFS is a little more CPU and memory piggy than you might be used to. More memory might help sustain high read speeds by increasing the ARC and the space for read-ahead (prefetch).
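If you want to see what the ARC and prefetch are actually doing, these are stock FreeBSD sysctls you can check from the FreeNAS shell:
----
# Current ARC size versus its configured ceiling (both in bytes):
sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max
# Whether file-level prefetch is disabled (0 = prefetch active):
sysctl vfs.zfs.prefetch_disable
----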

But I suspect that your problem may be link aggregation. You may find that all your traffic from your test happens to be hitting a single interface, in which case both clients are sharing the single gigE.

This is a side effect of how link aggregation is supposed to work. To ensure the ability of silicon to rapidly "pick" a consistent interface to use, techniques such as hashing are used to select an interface. Many beginners are surprised to learn that aggregating two interfaces does NOT result in a two gigabit link, but rather two one gigabit links between the same network endpoints. Link aggregation picks a consistent interface to use because in most cases, a receiving system that receives packets out of order has to do significant work to re-order those, and this results in large performance degradation. Queuing packets for the same destination out the same interface consistently means that packets will traverse the ethernet in the order sent, so ordering is maintained.
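To make the hashing concrete, here is a toy sketch (not the actual lagg(4) algorithm, which hashes more header fields) showing why every flow between the same two endpoints lands on the same member link:
----
# Toy illustration: hash the src/dst addresses, modulo the member count.
# The same endpoint pair always yields the same link index.
src=0x1a; dst=0x2b; nlinks=2
echo "link $(( (src ^ dst) % nlinks ))"
----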

Try your test again and see if everything is going out one interface. If you only set up two out of the four interfaces, you had a 50% chance that both clients would end up on the same link. Upping this to four interfaces reduces the chance of that happening (but does not eliminate it). There are ways around this.
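An easy way to check is to watch the per-interface counters while both clients are running (interface names here are just examples):
----
# Per-second traffic on one member link:
netstat -I em0 -w 1
# Or watch all interfaces at once:
systat -ifstat
----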
 

sampledi

Dabbler
Joined
Nov 23, 2012
Messages
13
RAID 0 is there to give enough speed. I have external backup, and backup of that backup :)
I know how link aggregation works, but I don't think that's the issue here, because performance is the same when I remove the switch and go directly to the 4 port adapter. Also, please note that write speed stays fine.
I'll try to tweak ARC and prefetch and do more tests, thank you.
 

sampledi

Dabbler
Joined
Nov 23, 2012
Messages
13
I'm an idiot :( It was link aggregation.
With a little additional tuning I get about 110 MB/s read on 3 clients (direct connection to the 4 port Intel adapter). When a 4th comes in, speed goes down to 90-100 MB/s, but I think that is due to my hardware's limits (IOPS, RAM, etc.). Anyway, I'm more than pleased with the performance.
I forgot to mention there is an additional 3TB WD Red drive inside for the first level of backup, so 8GB RAM is probably not enough for such a heavy load (11TB, 4 clients). I suppose adding an SSD for cache would help, but not on this board, which has no 6Gb/s ports, and there is no sense in pumping it further when I cannot add more RAM.
It will do the job for the next half year or so, and in the meantime I'll start preparing a new system with a socket 1366 board, 24GB RAM, more HDDs, an SSD, an LSI HBA, some nice case...
This is addictive :)
 

dfsooner

Dabbler
Joined
Sep 29, 2011
Messages
26
8GB RAM is probably not enough for such a heavy load (11TB, 4 clients).

If requirements are proportional, I'm in bigger trouble than you. I have 48TB of storage and 24GB of memory. However, I ran with 8GB until recently and it seemed to run OK. If the rule of thumb of 6GB + 1GB per TB of storage is correct, then I should have 54GB of RAM. I don't think my motherboard would support that.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If requirements are proportional, I'm in bigger trouble than you. I have 48TB of storage and 24GB of memory. However, I ran with 8GB until recently and it seemed to run OK. If the rule of thumb of 6GB + 1GB per TB of storage is correct, then I should have 54GB of RAM. I don't think my motherboard would support that.

I would just max it out. If you start having problems that seem to be RAM related, then worry about more RAM.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
It is not required to strictly follow the 6GB + 1GB per TB rule if you can tolerate less-than-ideal performance. Using ZFS as a home user, for backups, or for other less-demanding uses, you still need to scale up as you expand storage, just not by the full recommended amount. You don't want to be the poor sucker trying to support a 48TB pool on 4GB or 8GB of RAM, though I suspect 16GB would be workable. Remember that the first 6GB is really more for FreeNAS than for ZFS, so if it helps, think of it this way: an 8GB system has only 2GB for ZFS, and a 16GB system has 10GB for ZFS. That isn't the way it actually works out, but it may help you understand the scaling a little.
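As back-of-the-envelope arithmetic only (memory is not actually partitioned this way):
----
# The "first 6GB for FreeNAS, the rest for ZFS" mental model:
total_gb=16; base_gb=6
echo "$(( total_gb - base_gb )) GB notionally left for ZFS"
----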
 

sampledi

Dabbler
Joined
Nov 23, 2012
Messages
13
It would be great if someone who is serving more than 3 clients simultaneously, with a larger RAID and intensive I/O, could give us details about the hardware used (and the transfer speed per client).
Or should I open "Freenas Extreme" / "Pimp my Freenas" thread :) ?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
A friend has a FreeNAS server for his home. He streams movies to 2 TVs and serves files for his whole family across 3 computers. It has 16GB of RAM and great speed (you can get over 110 MB/s if your local disk can keep up). He uses an i3-530 CPU and a 3ware controller with 10 x 3TB drives.
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
6+1 rule

I'm really curious where this "6GB+1GB per TB" rule I keep seeing on the forum comes from. The documentation doesn't contain anything like that:

http://www.freenas.org/images/resou...s8.3_guide.html#__RefHeading__7608_1957652121
ZFS typically requires a minimum of 8 GB of RAM in order to provide good performance. The more RAM, the better the performance, and the FreeNAS® Forums provide anecdotal evidence from users on how much performance is gained by adding more RAM. For systems with large disk capacity (greater than 6 TB), a general rule of thumb is 1 GB of RAM for every 1TB of storage.

All I see there is that you should have 8GB RAM or, if you have 7+ TB of storage, 1GB per 1TB. Of course, 7GB of RAM doesn't make much sense, so I understand the rule as: 1 GB per 1TB of storage, with 8GB being the minimum even if you have less storage.
 

sqwob

Explorer
Joined
Jan 8, 2013
Messages
71
All I see there is that you should have 8GB RAM or, if you have 7+ TB of storage, 1GB per 1TB. Of course, 7GB of RAM doesn't make much sense, so I understand the rule as: 1 GB per 1TB of storage, with 8GB being the minimum even if you have less storage.

What you say is absolutely correct ;)

I have a 16TB system running with 16GB of RAM and getting excellent performance. I only have 1 gigabit NIC and I'm completely saturating it whatever I do.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You are correct, but also incorrect. While your quote from the manual is verbatim, the manual does not make any link from 6TB of storage to 6GB of RAM. It doesn't really say how much RAM you should have if you had only 4TB of disk space, except that you shouldn't have less than 8GB. The short and sweet answer is to just say 6GB + 1GB per TB of disk space.

Depending on your usage you may be able to go under the recommended RAM, or you may have to go horribly over (by multiple factors). If you have 1 user and 20+TB, you may be able to get by with less than 20GB of RAM. If you have 20+TB of disk space and 10,000 users, you will need substantially more. If you choose to be crazy and enable deduplication, you shouldn't enable it unless you have at least 5GB of RAM per 1TB of storage on top of the base 6GB from the rule of thumb. Of course, there is no actual upper limit to how much RAM dedup may need, so using it is entirely "at your own risk". If you don't have enough RAM with dedup enabled, your system will crash, and you will not be able to get the zpool to mount again until you do have enough. That's why dedup has an extremely harsh warning in the manual and release notes. I consider it borderline irresponsible to use it with anything less than 64GB of RAM.
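Worked through for a concrete size, so the scale is obvious:
----
# Rule of thumb with dedup: 6GB base + at least 5GB per TB of storage.
tb=20
echo "$(( 6 + 5 * tb )) GB of RAM as a floor for a ${tb}TB pool with dedup"
----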

Most people will use more than 6TB of hard disk space, so remembering 6GB + 1GB per TB of disk space is easiest and not entirely inaccurate. I'm not sure I've seen any situation (aside from dedup being enabled) where 6GB + 1GB per TB has not given excellent performance. I have 32TB of storage space on my server, and as a single user I couldn't get good transfer rates or system reliability until I upgraded from 12GB to 20GB. I wasn't doing anything crazy either: no jail, compression and dedup disabled, etc. It was just me and some occasional streaming video.
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
However, the 6GB + 1GB per TB rule is IMHO overkill, especially for the noobs you mention. For somebody throwing together a system with five 2TB drives in RAIDZ (about 8TB usable), the thumbrule recommends 14GB of RAM (6 + 8) -- but in my experience 8GB is enough for normal (home) use with 8TB of storage. (Unless the rule has an implied "1GB per every 1TB over 6TB" condition that I didn't see mentioned anywhere.)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
However, the 6GB + 1GB per TB rule is IMHO overkill, especially for the noobs you mention. For somebody throwing together a system with five 2TB drives in RAIDZ (about 8TB usable), the thumbrule recommends 14GB of RAM (6 + 8) -- but in my experience 8GB is enough for normal (home) use with 8TB of storage. (Unless the rule has an implied "1GB per every 1TB over 6TB" condition that I didn't see mentioned anywhere.)

Your standards are skewed. "Normal" use for a FreeNAS server would be a corporate environment. That's what ZFS was designed for. Sure, it can be adapted to work at home, but that's not what FreeNAS is sold as. Call up iXsystems and tell them you'd like them to build a FreeNAS server for you at home. They'll probably laugh at you.

Sure, a lot of the people here that have problems are home users, but that doesn't discount the number of business users. They may represent the majority of FreeNAS use on the planet, and they just don't cut corners like many home users who complain about spending $100 on 16GB of RAM. They'd rather spend the $100 now than lose days to troubleshooting, data recovery, lost man-hours, etc. because they didn't spend the money the first time. At home you have a lot more room to troubleshoot on your own time. If some of the people here who have spent weeks troubleshooting their FreeNAS servers were in a corporate environment, devoting weeks of a technician's time, it would be better for the corporation to ditch FreeNAS and go to something else, or to fire the employee for incompetence and hire someone who can fix the problem in a day (or less).

The plain and simple truth is that there's TONS of evidence that 6GB + 1GB per TB of storage works. If you want to use less, fine. If you want to use a lot less, fine. But if you show up and expect someone to spend time teaching you how to tweak your system to work with 2GB of RAM and 20TB of storage space, you have another thing coming.

Besides, it's a thumbrule. It's not hard and fast, but it's a very good start. If you've never built a FreeNAS server before, it is an excellent way to see if you are even remotely close to the amount of RAM you may need. Which would suck more: building a system that maxes out at 16GB and finding out you need 48GB, so you have to build a whole second system, or building a system that can take 48GB but only populating 16GB, which turns out to work fine so you never add more? Exactly.
 

datnus

Contributor
Joined
Jan 25, 2013
Messages
102
Are you using NFS or iSCSI?
I have 1 quad-core CPU, 8GB RAM, and 6 x 500GB in RAIDZ1, but the performance is nowhere near 30 MB/s :(
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Are you using NFS or iSCSI?
I have 1 quad-core CPU, 8GB RAM, and 6 x 500GB in RAIDZ1, but the performance is nowhere near 30 MB/s :(

Please do not start chatting in someone else's thread about something unrelated to their issue. In the first post, the OP said he uses AFP.
 

sampledi

Dabbler
Joined
Nov 23, 2012
Messages
13
Are you using NFS or iSCSI?
I have 1 quad-core CPU, 8GB RAM, and 6 x 500GB in RAIDZ1, but the performance is nowhere near 30 MB/s :(

Here is the OP again :)
I use AFP, but NFS works the same (about 110 MB/s) for 3 workstations simultaneously.

Just to update the thread: I have purchased parts for the new FreeNAS, and the old one will be used as one more level of backup.
This will be the new configuration:
Supermicro X8SAX (because of the PCI-X 133 MHz slot for the PWLA8494MT PRO/1000 MT Quad Port Server Adapters I already have)
Intel Xeon 5530
24GB ECC RAM

We'll see if there is any difference when a fourth and maybe a fifth user hit the RAID. Of course more HDDs will be added soon, together with an LSI SAS2008 6Gb/s HBA and probably SSDs for L2ARC and ZIL.
 