Performance Tuning for Media Workflows?

dev_willis

Dabbler
Joined
Jan 30, 2021
Messages
28
I'm trying to figure out how best to configure TrueNAS for a video editing workflow. I found this in the docs: https://www.truenas.com/docs/core/solutions/optimizations/mediaentertainment/

However, I have a number of questions that aren't answered there. And is that information current? E.g., I recall reading around here that jumbo frames aren't "worth it" anymore but maybe they still are in the specific case of video editing?

How about the relative performance of RAID types? I understand the various levels of fault tolerance but how significant is the performance difference between, say, RAIDZ1 and RAIDZ2 or the "typical recommendation" of 2+1 RAIDZ? If Z1 is appreciably more performant than the others would it be a good idea to use it and make more frequent backups to compensate for the reduced redundancy? I'm not clear on what exactly 2+1 is.

I've read that disabling Atime can boost performance. Are there downsides to disabling it?

I've read that LZ4 compression is blazingly fast and often faster than no compression but does file size or use case make a difference?

How can I determine what record size to use? We'll have many multi-gigabyte video files and many more multi-megabyte photo files but there will also be a large number of small config-type files as well. Would it be best to find the median file size and set it to that?

Are there things outside of the pool options that I can tweak to improve performance?

Any advice is appreciated!
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
jumbo frames aren't "worth it" anymore but maybe they still are in the specific case of video editing?
You can test with and without. Just remember that jumbo frames need to be configured on the NAS, the PC, and ALL switches in the data path, or your 9k frames will be fragmented back down to 1.5k frames, killing performance in the process.
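A quick way to test whether the whole path actually passes 9k frames is a don't-fragment ping. This is just a sketch; the interface name and IP address below are placeholders for your own:

```shell
# Set MTU 9000 on the NAS interface (TrueNAS: Network > Interfaces, or the shell).
# igb0 and 192.168.1.10 are placeholders.
ifconfig igb0 mtu 9000

# From a client, ping with the don't-fragment flag and a payload sized for a
# 9000-byte MTU (9000 - 20 IP header - 8 ICMP header = 8972). If any hop in
# the path is still at 1500, this fails instead of silently fragmenting.
ping -D -s 8972 192.168.1.10        # FreeBSD/macOS syntax
# ping -M do -s 8972 192.168.1.10   # Linux equivalent
```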

How about the relative performance of RAID types?
ZFS performance depends on the number of vdevs. Generally, for performance you will want mirrors. This works especially well for video, as you typically ingest once and then edit, scrubbing around the file. Mirrors give you half the write performance of a stripe of the same drives, since the same data has to be written to both disks. The upshot is that reads can come from either disk, so read performance is as fast as a simple disk stripe (think RAID 0).
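To make the layouts concrete, here's a sketch of creating striped mirrors versus a single RAIDZ2 vdev from the same eight disks (pool and disk names are placeholders):

```shell
# Four striped 2-way mirrors: best random IO, reads scale like a stripe,
# usable capacity is half of raw.
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7

# The same eight disks as one RAIDZ2 vdev: more usable capacity (six disks'
# worth), but roughly the random IO of a single vdev.
# zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
```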

I've read that disabling Atime can boost performance. Are there downsides to disabling it?
This only affects the last-access time stamp. It's up to you whether that's important. If you share your storage via iSCSI, this should be disabled, as the file system you place on the zvol (iSCSI LUN) will track access times anyway. Either way, I don't think it will make much of a difference.
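If you decide access times aren't worth tracking, disabling it is a one-liner per dataset (the dataset name is a placeholder):

```shell
# Stop updating the last-access timestamp on every read.
zfs set atime=off tank/media

# Verify the current setting.
zfs get atime tank/media
```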

I've read that LZ4 compression is blazingly fast and often faster than no compression but does file size or use case make a difference?
lz4 is fast enough that you should just leave it on. For my photos I save little to no space; video will be the same. lz4 can speed some things up because it reduces the amount of data that needs to be physically written to or read from the disks. On my vSphere storage I get about 1.5:1 compression and it does speed some things up a bit. Again, leave it on, but don't expect much in terms of performance or space savings.
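Once some data is loaded you can check what compression is actually buying you (dataset name is a placeholder):

```shell
# lz4 is already the TrueNAS default, but to set it explicitly:
zfs set compression=lz4 tank/media

# Show the achieved ratio; already-compressed video will sit near 1.00x.
zfs get compressratio tank/media
```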

How can I determine what record size to use? We'll have many multi-gigabyte video files and many more multi-megabyte photo files but there will also be a large number of small config-type files as well. Would it be best to find the median file size and set it to that?
On a ZFS pool you can have multiple datasets, each with a different record size. If you have a shared catalog or index sort of thing, you can place it on a dataset with a smaller record size like 16K; for the bulk of the media, you may want 256K+, since you won't read chunks smaller than that often enough for it to matter.
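A sketch of that split, with per-use record sizes (dataset names and sizes are illustrative, not recommendations for your exact workload):

```shell
# Large sequential media: big records reduce per-block overhead.
zfs create -o recordsize=1M tank/video

# Shared catalog/index with small random reads and writes.
zfs create -o recordsize=16K tank/catalog

# Note: recordsize only applies to files written after it is set,
# so configure it before you start ingesting.
```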

Are there things outside of the pool options that I can tweak to improve performance?
This is a great and loaded question. L2ARC may help, but it needs to be reconfigured to fill faster (to allow caching for streaming workloads). You can also make your ARC and L2ARC more efficient by selectively enabling them for the datasets that need them. If you archive your video on the same box, you can keep the archive on a separate dataset with only its metadata cached. Note: selective ARC caching is not a GUI option. I pushed for it to be added to the GUI, but so few people know about it that it never got traction.
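The selective caching above is done per dataset with the primarycache/secondarycache properties, from the shell rather than the GUI. A sketch (dataset names are placeholders; the sysctl tunable names are for FreeBSD-based TrueNAS CORE and the values are examples, not tested recommendations):

```shell
# Archive dataset: cache only metadata in ARC (RAM) and L2ARC (SSD),
# leaving the caches free for the working datasets.
zfs set primarycache=metadata tank/archive
zfs set secondarycache=metadata tank/archive

# Working dataset: cache everything (this is the default).
zfs set primarycache=all tank/video
zfs set secondarycache=all tank/video

# Let the L2ARC fill faster and cache streaming (prefetch) reads;
# the defaults are conservative.
sysctl vfs.zfs.l2arc_write_max=67108864   # 64 MiB/s
sysctl vfs.zfs.l2arc_noprefetch=0
```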

Be sure to benchmark your disks locally and benchmark the network separately, so you know whether further tuning is needed and where.
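A minimal sketch of both benchmarks, assuming fio and iperf3 are available (paths and addresses are placeholders):

```shell
# Local disk: sequential 1M writes, roughly matching a video ingest workload.
fio --name=seqwrite --directory=/mnt/tank/video --rw=write --bs=1M \
    --size=10G --numjobs=1 --ioengine=posixaio --end_fsync=1

# Network: raw throughput between the NAS and an editing workstation.
iperf3 -s                     # run on the NAS
iperf3 -c 192.168.1.10 -P 4   # run on the client, 4 parallel streams
```

If the local disk numbers are fine but the network test falls short, tune the network side first, and vice versa.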
 

wdp

Explorer
Joined
Apr 16, 2021
Messages
52
That's an excellent response from kdragon75. I'm relatively new to TrueNAS / ZFS but have been a video production engineer for quite some time. It was refreshing to read such a detailed response and a nice sign of what the TrueNAS community has to offer. I stumbled into this discussion looking for similar configuration options and how to best optimize TrueNAS as I work on my first large build.

The best configuration for video editing really comes down to the environment it will be used in, as well as the value of your data and your desire to maintain a server.

In a shared environment where multiple editors might need to access one unit or project, you generally want to consider drive size, how many drives, how many vdevs you can create, and how much you want to worry about your data. In addition, what kind of media files are you working on? RAW, 8K, 4K, ProRes, smaller files... are there multicams or just simple one-track edits? And are you trying to edit at full resolution at all times?

There's not really an end to the hardware / workflow options you can throw at video editing problems. But most solo editors who get into larger scale storage drastically overshoot their actual needs.

I've seen over 50 Synology boxes on RAID 6 with stock configs used for making full productions with multiple editors. And if the storage bottlenecks at all, you always have the option to work smarter and fall back on lower-resolution or proxy workflows. On most TV productions or 8K/big 4K projects I work on, the biggest servers we have can't handle the load when you throw more than three editors at them.

RAIDZ2-style setups are pretty common these days, usually split into two vdevs across a 24-bay or 12-bay chassis. The downside is that when the pool is mostly full and you have to rebuild, you start looking at some very long resilver times. And if the server is 4-5 years old, your risk of seeing a second drive fail during that window climbs astronomically.

Some prefer a mirrored-vdev environment because of this. It can grow to scale slowly, has a simple maintenance schedule, and rebuild times are dramatically faster. But that's in a full-time video production scenario, where speed is critical and you are probably backing up the entire server to LTO tape or some other near-line storage.

I for one wouldn't touch single-drive-tolerance options with a 10-foot pole unless I'm backing the entire thing up. I sleep better at night with two drives having my back. But lately I find myself looking at mirrored vdev setups more and more, because they just make sense, at the sacrifice of half your total storage capacity.

As of right now, the more vdevs you have, the better the random IO performance, but that REALLY only matters if you're working with others or doing very complex edits with data spread across the entire server.
 

dev_willis

Dabbler
Joined
Jan 30, 2021
Messages
28
This is great information and I thank you both very much. In our situation, I've filled all eight bays of this Mini XL+ with 8TB drives and I've got 20+ TB of data to load on it right off the bat so I didn't feel like I could make the space sacrifice necessary for a mirror and went with RAIDZ2. We have only two editors working on mainly 4K and 6K footage so it will probably be sufficient, particularly with caching and proxies. When we outgrow the Mini and have to step up to a larger solution I'll be sure to plan for mirrors. I also decided to set the record size to 1MB and see how it goes. And I'm not going to mess with jumbo frames. I don't like troubleshooting.

How can I tell if the system would benefit from an L2ARC? It has a 480GB SSD in it specifically for use as an L2ARC but I got the idea that using one when it's not needed can actually degrade performance so I haven't added it to the pool yet. The system has 64GB RAM and I noticed, when I began copying data to it, the ZFS cache quickly filled up most of the available RAM. I kinda figured that was just how it worked tho.

Again, thank you both very much. I appreciate the help!
 

wdp

Explorer
Joined
Apr 16, 2021
Messages
52
It's pretty rare that Jumbo Frames are the answer to any problems, they usually just create them.

4K footage should be fine if it's not RAW or a heavy codec; 6K, depending on the codec, could be pushing it though. There are a lot of variables to consider with video. RAIDZ2 is the right call in a Mini XL, in my opinion.

I'm honestly not incredibly versed in the real-world benefits of cache layers for video editing/large files. I started building my first large TrueNAS server a few weeks ago and it's undergoing heavy testing at the moment before I roll it out to production. But I have deployed plenty of RAID 6 servers with 8 bays on projects, and we managed to get by fine without any major concerns. So depending on the codec/footage, you could be fine without any major optimizations. But my ZFS experience is limited, and maybe that's not the case and you just have to brute-force it with hardware.

Yes, the RAM cache (the ARC) will fill up whatever memory isn't otherwise in use on the system, and the system will take it back when it needs to.
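As for whether an L2ARC would help, one sanity check I've seen suggested (using the stats tools that ship with TrueNAS) is to look at the ARC hit rate under a real editing load first; if the ARC is already serving most reads, the SSD won't have much to add:

```shell
# Summary of ARC size, hit ratio, and eviction stats.
arc_summary

# Live per-second ARC hits/misses while editors are working.
arcstat 1
```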

You can always work smarter in video though. If the hardware can't keep up for a shared environment, proxies are your best friend.
 