Apple Final Cut Pro 7 Dropped Frames with FreeNAS 9.10.2-U3


skyyxy

Contributor
Joined
Jul 16, 2016
Messages
136
Hi everybody. Sorry for my English :). I am a new FreeNAS user. My server and client configuration is:
Server:
Intel E5-1620 v3
Supermicro X10SRL-F
64 GB registered ECC DDR4-2133 memory
8 GB USB drive for the FreeNAS boot device
LSI 9211-8i flashed to IT mode
24x Seagate 8 TB Enterprise HDDs in RAIDZ2
6x Intel X520-DA2 dual-port 10 Gb SFP+ adapters, directly connected to the 6 client machines

Client (the others are almost the same):
Mac Pro 6,1 running Mac OS X 10.9.5
Intel quad-core E5
32 GB DDR3
Dual D300 graphics cards
256 GB SSD
Promise SANLink2 10 Gb SFP+ Ethernet adapter


Synthetic speed in AJA System Test is 900-1000 MB/s write and almost the same for read (very good).
Real-world speed in Mac OS X (AFP and SMB) through the SANLink2 is 500-950 MB/s read and write (I have a lot of small files, so sometimes it is high and sometimes low) and stable (very good).

The problem is:
When a client machine uses Final Cut Pro 7, it regularly reports errors like the one in the attached picture (Dropped Frames), during both playback and recording to tape (3-4 times each day). This never happened before, when we were using local drives. The media format is 1080p ProRes 422. Does anybody have experience with this, or any advice? Maybe there is something I need to add in sysctl? Thanks a lot.
 

Attachments

  • 111111111.jpg (59.2 KB)

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
RAIDZ2 isn't the best for performance with regard to writes. If you want to improve your write performance, you should consider switching to mirrored vdevs.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
From the OpenZFS Performance Tuning page:
If sequential writes are of primary importance, raidz will outperform mirrored vdevs. Sequential write throughput increases linearly with the number of data disks in raidz while writes are limited to the slowest drive in mirrored vdevs. Sequential read performance should be roughly the same on each.
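For illustration, here is a rough sketch of how each layout would be created from the shell, with hypothetical disk names (da0-da7); your pool was presumably built through the FreeNAS GUI, so treat this only as a picture of the trade-off.

Code:
# four mirrored vdevs: roughly 4x the random IOPS, but only half the raw capacity is usable
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7

# one 8-disk RAIDZ2 vdev: better sequential throughput and capacity, one vdev's worth of IOPS
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7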
 

skyyxy

Contributor
Joined
Jul 16, 2016
Messages
136
RAIDZ2 isn't the best for performance with regard to writes. If you want to improve your write performance, you should consider switching to mirrored vdevs.
Thanks a lot, but I still don't quite understand why RAIDZ2 performs poorly for this kind of project; it is only about 20 MB/s to the server's disks, which shouldn't be that demanding.
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
It should work. However, I'm not familiar with that application.

Re: causes, it could be a number of things. How full is your pool? What other applications are running? Is a scrub running? What is the pool topology? (24 disks in a single RAIDZ2?) What protocol is the share, SMB?

While I was more concerned with random IOPS for VMs, I found adjusting vfs.zfs.txg.timeout to "1" helped to "smooth" out the write performance. On my system a value of "5" (could still be the default, I have no idea) sometimes caused a stall when writing as the previous tx group flushed to disk. (When I added more vdevs I "think" the problem went away, but I can't remember specifically testing it. I'm still using a value of "1".)

@jgreco had some other recommendations in an old thread. You can probably search his name together with "vfs.zfs.txg.timeout" and "vfs.zfs.txg.synctime_ms".
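In case it helps, this is roughly how that tunable can be checked and changed from the FreeNAS shell; treat the value of 1 as an experiment rather than a recommendation, and add it under System → Tunables (type Sysctl) if you want it to survive a reboot.

Code:
# show the current transaction group timeout (seconds)
sysctl vfs.zfs.txg.timeout

# flush transaction groups more often to smooth out bursty writes
sysctl vfs.zfs.txg.timeout=1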
 

skyyxy

Contributor
Joined
Jul 16, 2016
Messages
136
It should work. However, I'm not familiar with that application. …
Thanks a lot. I will try your suggestion and hope it can be fixed. Thanks again.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
How is your pool structured?

Can you provide the output of zpool status in CODE tags?
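For reference, the output comes straight from the shell (or an SSH session), e.g.:

Code:
zpool status
zpool list -v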
 

skyyxy

Contributor
Joined
Jul 16, 2016
Messages
136
How is your pool structured?

Can you provide the output of zpool status in CODE tags?
Hi Stux, thanks for your reply.
I'm at home right now, and I have an almost identical server here; the only difference is that the home server has 18 HDDs instead of 24, also in RAIDZ2. Please check the pool structure picture in the attachment. Thanks again.
 

Attachments

  • 3333.png (58.1 KB)

skyyxy

Contributor
Joined
Jul 16, 2016
Messages
136
It should work. However, I'm not familiar with that application. …
Sorry, I forgot to answer your questions:
1: At most 50% of the space is used.
2: Just FCP7, nothing else. No Premiere, no After Effects, no DaVinci, etc. Just FCP7.
3: Actually I don't know whether a scrub was running, but I don't think scrubs are the cause, because the problem happens 3-4 times each day and I certainly haven't scheduled scrubs to run that often.
4: Yes, 24 disks in a single RAIDZ2, shared with everybody.
5: I tried AFP, SMB, and even NFS, but it's the same.

Thanks a lot.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Okay, that's what I feared.

So, you have a 24-way RAIDZ2?

I would strongly suggest you refactor your home pool into two 9-way RAIDZ2 vdevs, and your ... work? ... pool into either three 8-way RAIDZ2 vdevs or four 6-way RAIDZ2 vdevs.

Three 8-way if available space is more important than performance, four 6-way if performance is more important.

This will cost you a little bit of extra parity (i.e. reduced storage), but will double your IOPS at home, and triple or quadruple your IOPS at your other location. IOPS are random access operations... which happen even if you're streaming video.

Also, it will make resilvers after a drive error run 4-16x faster, which reduces the chance of an additional error occurring while the pool is degraded.

I don't know exactly how much slower it is, but as you scale past a sensibly sized RAIDZ2 (say <=9 drives) it gets slower and slower.

I suspect your issue is caused by the excessively wide nature of your single-vdev pool.

Of course, the trick is: how do you refactor your pool? Unfortunately, you basically need to back it up and restore it. You should have a backup anyway ;)

As a benefit, because your 24-wide pool is using 20GiB of parity/padding (even though you'd expect it to use 15GiB), if you change to 3x8 RAIDZ2 you'll be using 50GiB, which is only 2.5x, even though you've tripled the parity. That's 10GiB back, thanks to the inefficient padding at 24-wide.

9-way is even more efficient than 8-way.
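To make that concrete, a rebuilt 24-disk pool with three 8-wide RAIDZ2 vdevs would be created roughly like this (hypothetical device names; the data has to be backed up first, e.g. with zfs send or a file-level copy, and the pool destroyed, because vdevs cannot be restructured in place):

Code:
zpool create tank \
  raidz2 da0  da1  da2  da3  da4  da5  da6  da7 \
  raidz2 da8  da9  da10 da11 da12 da13 da14 da15 \
  raidz2 da16 da17 da18 da19 da20 da21 da22 da23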
 

skyyxy

Contributor
Joined
Jul 16, 2016
Messages
136
Okay, that's what I feared. So, you have a 24-way RAIDZ2? …
WOWOWOWOWOWOW
That's horrible if it's true. If so, maybe FreeNAS isn't for me anymore, because when my workers are recording to tape it cannot stop, and we want an easy, big, and fast shared disk that works at least for FCP. Maybe I need to switch to OpenMediaVault or something else based on Linux and ext filesystems. Of course, FreeNAS is really fast for DaVinci Resolve projects, especially 4K or 8K footage, and very stable. I will try what you said and then decide whether I keep using FreeNAS. Thanks everybody.

PS: I will try the newest FreeNAS 11 RC and hope it goes well.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
That's horrible if it's true. … I will try what you said and then decide whether I keep using FreeNAS. …
If you don't care about checksums, snapshots, or bit rot, then go ahead and switch. ZFS is a much better choice than ext, and I can't really come up with a reason to use ext when you need this many disks.

A vdev should not be more than 10 disks, and the more vdevs you have, the more IOPS you get. I would have tested mirrors, 4 RAIDZ2 vdevs, and 3 RAIDZ2 vdevs before ever putting this system into production (see the quick sketch below). This is pretty common knowledge for people who use ZFS.

You're going to want to rebuild with a layout that follows best practices.
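As a rough sketch of how to compare candidate layouts before committing (the dataset name and sizes below are placeholders, and a sequential dd test won't show the random-IOPS differences, but it is a start):

Code:
# create a test dataset with compression off so the numbers are honest
zfs create -o compression=off tank/benchtest

# write ~20 GiB sequentially and note the throughput dd reports at the end
dd if=/dev/zero of=/mnt/tank/benchtest/testfile bs=1m count=20480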

 

skyyxy

Contributor
Joined
Jul 16, 2016
Messages
136
If you don't care about checksums, snapshots, or bit rot, then go ahead and switch. …
Thanks for the reply. I am really thinking about what you said; snapshots are also very important to me. Hmmmm, very confused :). What about building three RAIDZ2 vdevs of eight HDDs each? Would that also work well for my FCP7 projects? Thanks.
BTW: if I really want to build a 24-HDD server with FreeNAS and only one vdev, what is the best solution? What about iSCSI, or something else?
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Thanks for the reply. I am really thinking about what you said; snapshots are also very important to me. Hmmmm, very confused :). What about building three RAIDZ2 vdevs of eight HDDs each? Would that also work well for my FCP7 projects? Thanks.
BTW: if I really want to build a 24-HDD server with FreeNAS and only one vdev, what is the best solution? What about iSCSI, or something else?
You wouldn't put 24 drives in a single vdev. That's just not something you would do. Get bigger drives and use more vdevs.

 

skyyxy

Contributor
Joined
Jul 16, 2016
Messages
136
You wouldn't put 24 drives in a single vdev. That's just not something you would do. Get bigger drives and use more vdevs.

Thanks for your suggestion.
But I have a small question: what about iXsystems' Certified Servers, like the FreeNAS Certified 4U (http://www.freenas.org/freenas-certified-servers/)? It also has 24 HDD bays; does it also need multiple vdevs? Thanks.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Thanks for your suggestion.
But I have a small question: what about iXsystems' Certified Servers, like the FreeNAS Certified 4U (http://www.freenas.org/freenas-certified-servers/)? It also has 24 HDD bays; does it also need multiple vdevs? Thanks.
Yes. Things get too slow during rebuilds as you add more drives to a single vdev. What is confusing you?

 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
It is best practice to structure 24 drives into multiple vdevs in one pool, rather than one vdev of 24 drives in one pool.
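In zpool status, that layout shows up as several raidz2 groups under the same pool; a trimmed, hypothetical example:

Code:
  pool: tank
 state: ONLINE
config:
        NAME        STATE
        tank        ONLINE
          raidz2-0  ONLINE
            da0     ONLINE
            ...
          raidz2-1  ONLINE
            da8     ONLINE
            ...
          raidz2-2  ONLINE
            da16    ONLINE
            ...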
 

skyyxy

Contributor
Joined
Jul 16, 2016
Messages
136
Thanks everybody. Now I understand. Thanks again.
 