Where to get more speed

Status
Not open for further replies.

Doug183

Dabbler
Joined
Sep 18, 2012
Messages
47
OK, I have a FreeNAS system all up and running with 32 GB of memory and two 10-drive zpools: one set up as a 10-drive RAIDZ2 and the other as two RAIDZ1 vdevs striped into one pool. Both the FreeNAS box and my 3 clients (2 Macs and 1 Windows 7 machine) have Myricom 10 GbE cards. Two clients at a time are direct-connected to the FreeNAS box, but I usually only have one client running at a time.

Using the AJA speed test on OSX, I can write about 375 MB/s and read about 500 MB/s to and from the FreeNAS AFP shares on the Macs. AJA settings: 16 GB file size, 10-bit 2K video. Blackmagic says I can get about 1000 MB/s both read and write with a 5 GB file.

However, in real-world tests my speeds are much different. Using large video files, 30 GB and up, Windows (over SMB/CIFS) only gets 70-80 MB/s, and OSX maxes out at 250 MB/s no matter what drive I am coming off, even a fast mini-SAS RAID.

What am I missing here? Any suggestions?

- Also, I am not using an L2ARC or ZIL.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
It might be the 10 GbE card's FreeBSD drivers. I can't comment on Myricom specifically, but most people here use Chelsio; look at the hardware stickies. It might be cabling. It might be a client issue. What are the full server specs? Do the clients have SSDs or spinning rust? Are you using certified, properly rated cables? You might be hitting the IOPS limit for your pool. Samba can also be chatty and do funny things.
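If you want to rule the network in or out first, measure it by itself; iperf is the usual tool, but a throwaway script works too. Here is a minimal sketch using only Python's standard library; it moves data memory-to-memory, so the disks are never involved. The address, port, and sizes are placeholders, not values from this thread:

# net_throughput.py: crude memory-to-memory test to separate the
# 10 GbE link from the disks. If this can't move close to wire speed,
# the problem is NICs/drivers/cabling, not ZFS or Samba.
import socket
import sys
import time

HOST = "192.168.1.10"    # hypothetical server address; adjust
PORT = 5201
CHUNK = 1 << 20          # 1 MiB per send/recv
TOTAL = 8 << 30          # move 8 GiB in total

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    received = 0
    while received < TOTAL:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    conn.close()
    srv.close()

def client():
    payload = b"\x00" * CHUNK
    sock = socket.create_connection((HOST, PORT))
    start = time.monotonic()
    sent = 0
    while sent < TOTAL:
        sock.sendall(payload)
        sent += CHUNK
    sock.close()
    print(f"{sent / (time.monotonic() - start) / 1e6:.0f} MB/s on the wire")

if __name__ == "__main__":
    server() if sys.argv[1:] == ["server"] else client()

Run it with "server" as the argument on the FreeNAS box, then with no argument on a client. Anything well under ~1000 MB/s here would mean the disks were never the story.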
 

Doug183

Dabbler
Joined
Sep 18, 2012
Messages
47
Could be. Still, why are those apps, which are designed to test disk speed and are writing big files full of zeros I assume, getting such good speeds when the Finder and Windows Explorer are not?
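That zeros guess is likely a big part of it. Recent FreeNAS versions enable LZ4 compression on new datasets by default, and a stream of zeros compresses to almost nothing, so a zero-writing benchmark partly measures CPU and RAM rather than the disks. A throwaway sketch to see the gap, run on a compressed dataset; the path is a placeholder:

# Write the same amount of zeros vs. random bytes; a large gap means
# LZ4 is inflating the zero-based benchmark numbers.
import os
import time

PATH = "/mnt/tank/benchfile"   # hypothetical dataset path; adjust
CHUNK = 1 << 20                # 1 MiB writes
COUNT = 2048                   # 2 GiB total

def timed_write(block):
    start = time.monotonic()
    with open(PATH, "wb") as f:
        for _ in range(COUNT):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())   # push it past the write cache
    return COUNT * CHUNK / (time.monotonic() - start) / 1e6

zeros = b"\x00" * CHUNK
noise = os.urandom(CHUNK)
print(f"zeros : {timed_write(zeros):.0f} MB/s")
print(f"random: {timed_write(noise):.0f} MB/s")
os.remove(PATH)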
 

Donny Davis

Contributor
Joined
Jul 31, 2015
Messages
139
You are already losing 4 drives out of 10 to parity. I would set it up in RAID 10 and you should see a huge jump in performance.

I can't remember off the top of my head how to do it in the UI, but give this a shot:

Recreate the pool as a mirror and then drag the dot *down* until you have 5 mirrors.

Should be 5x2...

You will only get the performance of one disk in any RAIDZx array without striping. On the two striped RAIDZ1 vdevs you are getting the performance of 2 disks... but RAIDZ1 and RAIDZ2 don't have that great performance anyway. However, on a 5x2 striped mirror you get full redundancy and the performance of 5 disks together.


Edit: My bad, you said you have two 10-drive arrays, so 20 drives total. RAIDZ1 isn't recommended, but the math doesn't come out right for 20 drives in RAIDZ2.

If you did four 5-drive RAIDZ1 vdevs you would get much better performance, and you would still only lose 4 drives to parity.

However, if speed is what you want, then 10x2 would be even faster.

You would lose half of your storage, but you could feel confident that your data isn't going anywhere.


Just my 2c

Donny D
 
Last edited:

Doug183

Dabbler
Joined
Sep 18, 2012
Messages
47
Donny,

Thanks for your advice, I appreciate your time. However, there is a factual inaccuracy here that I don't want someone to stumble upon and take as true.

You will only get the performance of one disk in any RAIDZx array without striping.
This is not true. RAIDZ1 and RAIDZ2 are not limited to single-drive speed.

Your other advice, not to have more than 10 drives in a zpool, is accurate and I follow that rule too. However, after re-reading my original explanation I see I was not entirely clear. In fact, two of my zpools are set up exactly as you prescribed.

I have 3 zpools, each with 10 disks in it:
1) 10 disks of 4 TB drives in one RAIDZ2
2) 5 disks in RAIDZ1 striped with another 5 disks in RAIDZ1
3) 5 disks in RAIDZ1 striped with another 5 disks in RAIDZ1 (same as number 2)

Using the Blackmagic speed test and the AJA disk test, I am seeing results of about 1000 MB/s and 350 MB/s respectively on both the RAIDZ2 and the striped RAIDZ1 zpools. (These speed tools are giving odd results, but that is a different problem.) However, I get OSX Finder copy speeds that top out at 250 MB/s and Windows 7 speeds of about 100 MB/s. Bottom line: I don't think my bottleneck is at the drives.
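For what it's worth, the back-of-envelope arithmetic supports that. A small sketch; the ~150 MB/s per-disk figure is an assumption (a common ballpark for 4 TB SATA drives), not a measurement from this system:

# Rough streaming ceilings from pool geometry (assumed per-disk rate).
per_disk = 150                            # MB/s sequential, assumption
pools = {
    "10-wide RAIDZ2 (8 data disks)": 8,
    "2 x 5-wide RAIDZ1 (8 data disks)": 8,
}
for name, data_disks in pools.items():
    print(f"{name}: ~{per_disk * data_disks} MB/s streaming ceiling")

wire = 10e9 / 8 / 1e6                     # 10 GbE line rate in MB/s
print(f"10 GbE wire: ~{wire:.0f} MB/s before protocol overhead")

Both pools' ~1200 MB/s ceilings sit right at the ~1250 MB/s wire rate, so 250 MB/s Finder copies and 100 MB/s Explorer copies do point at the protocol and client path rather than the disks.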
 

Donny Davis

Contributor
Joined
Jul 31, 2015
Messages
139
I should have been more clear with my statement. Write speeds will be limited to that of a single drive in a pool without striping.
 

Waco

Explorer
Joined
Dec 29, 2014
Messages
53
I should have been more clear with my statement. Write speeds will be limited to that of a single drive in a pool without striping.
That's incorrect. Write speeds are limited to the sum of the data drives' throughput.

Doug - make sure you disable compression when benchmarking. I had to apply a fair bit of tuning to my CIFS configuration to get streaming gigabit speeds out of my box (four 3-way mirrors).
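On the CIFS tuning: the exact knobs depend on the Samba version FreeNAS ships, so treat these as commonly tried starting points rather than a recipe (they go in the auxiliary parameters box; the buffer values are guesses to tune from, not known-good numbers):

use sendfile = yes
aio read size = 16384
aio write size = 16384
socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072

And seconding the compression point: for a scratch benchmark dataset it can be switched off per dataset (zfs set compression=off pool/dataset) so the test exercises the disks rather than LZ4.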
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874

Waco

Explorer
Joined
Dec 29, 2014
Messages
53
I fully understand how vdevs work.

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I fully understand how vdevs work.
Then you probably also know that this statement isn't completely accurate: "Write speeds are limited to the sum of the data drives' throughput."
 

Waco

Explorer
Joined
Dec 29, 2014
Messages
53
Then you probably also know that this statement isn't completely accurate: "Write speeds are limited to the sum of the data drives' throughput."

It is 100% accurate. You'll never write faster than the sum of the data drives' throughput. Bad workloads, however, will limit throughput to something less than that.

Note, however, that the OP is talking about workloads that are sequential in nature, not random. He is likely using very low queue depths in the real copies (unlike his benchmarks). His CIFS config will definitely need some work to saturate 10 GbE, but his AFP settings must be decent, based on the benchmark he ran.
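One way to see the request-size half of that on a client is to time the same single-stream copy at a few buffer sizes; Finder/Explorer style copies sit near the small end of the range, benchmarks near the top. A rough sketch; the paths are placeholders, and client-side caching will flatter repeat runs unless the file is larger than RAM:

# Time one single-stream copy at several request sizes.
import os
import shutil
import time

SRC = "/Volumes/tank/bigfile.mov"   # hypothetical mounted-share path
DST = "/tmp/bigfile.copy"
size = os.path.getsize(SRC)

for block in (64 << 10, 1 << 20, 8 << 20):   # 64 KiB, 1 MiB, 8 MiB
    start = time.monotonic()
    with open(SRC, "rb") as src, open(DST, "wb") as dst:
        shutil.copyfileobj(src, dst, length=block)
    rate = size / (time.monotonic() - start) / 1e6
    print(f"{block >> 10:5d} KiB requests: {rate:.0f} MB/s")
os.remove(DST)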
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
If by speed you mean MB/s then yes, it's perfectly accurate. What's limited to one drive's specs is the IOPS.
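To put numbers on that distinction, here is the rule-of-thumb model behind it: streaming MB/s scales with the number of data disks, random IOPS with the number of vdevs. The per-disk figures are assumptions (roughly a 7200 RPM SATA drive), not measurements:

# Rule-of-thumb model only; real results vary with record size,
# fragmentation, and caching.
disk_mbps = 150    # assumed sequential MB/s per disk
disk_iops = 100    # assumed random IOPS per disk

layouts = {                        # (vdevs, data disks per vdev)
    "1 x 10-wide RAIDZ2": (1, 8),
    "2 x 5-wide RAIDZ1":  (2, 4),
    "5 x 2-way mirrors":  (5, 1),  # 1 data disk per mirror on writes
}
for name, (vdevs, data) in layouts.items():
    print(f"{name}: ~{vdevs * data * disk_mbps} MB/s streaming, "
          f"~{vdevs * disk_iops} random IOPS")

Mirror reads can do better than the write-side number, since both halves of a mirror can serve reads.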
 

Waco

Explorer
Joined
Dec 29, 2014
Messages
53
If by speed you mean MB/s then yes, it's perfectly accurate. What's limited to one drive's specs is the IOPS.

That's why I said throughput. Sequential workloads are rarely IOPS-limited unless the pool is very full or fragmented (which the OP's clearly is not).
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I wasn't referring to the IOPS, I was referring to throughput. I missed the reference to only the data drives, however, which was my point. My bad.
 