Building a Video Editing Server for 8K Files

Joined
Apr 12, 2023
Messages
4
Hey TrueNAS community, I'm a newbie and would very much like your opinion on building my very cool server with very cool hardware for editing very cool videos.

I own a video editing company; we make car commercials worldwide for many cool brands (if you want to see some of our work, check out our Instagram @abdalabrothers).
We mainly shoot 8K RED RAW, 6K CinemaDNG, and H.264 files.

I will try to keep it short but describe everything that we have in terms of hardware and how I think I should be setting it up, and would very much love your opinion on it.



I currently have two systems:

(working system) "theflash" is a QNAP TS-h886 with the ZFS file system, intended as a "current editing projects" NAS: 8x 3.84TB IronWolf Pro SATA SSDs in RAIDZ1 for about 22TB of usable space. All the projects being edited right now live on this system because it is all-flash, so it's very snappy. We usually have 6-8 editors hitting it at the same time, connected to my switch (QSW-M1208-8C) through a single 10G link (each editor gets a 2.5G connection). I could run 2x10G to the switch with LAG, but it glitches out every time, and 10G has been fine for now.

(building new system) "bigbro" is supposed to be our main storage server, where every video is stored unless it is currently being edited; even backups of "theflash" could live there in the future. It has 12x 20TB IronWolf Pro HDDs, which at the end of the year (2023) will be expanded to a total of 24x 20TB IronWolf Pros, a 64-core AMD EPYC, 128GB of DDR4 RAM, and a 1TB IronWolf 525 NVMe drive.

How I think I should configure the new system:
motto: everything is upgradable. Performance/speed of reading and writing video files is the main goal here, even if it costs; if the performance gains are worth it, we may gravitate towards it. Being able to edit videos straight from this system would be a blast (RED files are usually 4GB each, CinemaDNG saves each frame as a ~7MB DNG, and H.264 files vary between 100MB and 1GB).
I am thinking in phases of upgrades: Phase 1 is 240TB, Phase 2 would be 480TB, and Phase 3 (end of 2024) should be 1 petabyte, AND I'M PRETTY EXCITED FOR IT, so I wanna build things the right way from the beginning.


System: given that my main need is video file reads/writes, with a Plex server or something like that in the future, should I go CORE or SCALE?

CPU: I currently have this maxed out, but it would be interesting to know if we actually need 64 cores, or if I could have gone with a 24- or 32-core EPYC.

RAM: currently sitting at 128GB DDR4. I don't think that's enough considering this will very soon be a 240TB system, and since ZFS uses RAM as cache, I think we should start with 512GB DDR4.

Drives: I am thinking of setting up a pool of 12x 20TB drives in RAIDZ2, and as I expand at the end of the year, adding another pool of 12x 20TB in RAIDZ2.

Cache?: I thought of using the 1TB IronWolf NVMe as L2ARC for when memory is full. (1TB may not be enough for us, since every client project we edit is usually 1-3TB, so maybe using one of the 3.84TB SSDs from the QNAP NAS could be a solution?)

and here is a crazy idea I had:
since I now have spare drives, I can take 2x 3.84TB SSDs out of the QNAP NAS and use them as the special metadata vdev that Wendell from Level1Techs talks so much about. Is this a good idea? Would it help performance in my use case?


TrueNAS settings: are there any settings I should specifically enable/disable given my use case? I have seen hundreds of YT videos about setting up TrueNAS, but most of them just teach you to create a "simple" TrueNAS build without the cache disks that could potentially make your big storage really fast with large video files.



thanks for reading all the way through, and I would very much appreciate the community's 2 cents on my case haha
 
FYI, we won't ever be using this system's CPU for transcoding video files. Maybe in the future, for a Plex server, we can throw in an Nvidia GPU for that alone, but editing is done on each editor's individual system (each one has an i9 + RTX 3080 minimum). This is just for storing the files.
 
[removed per jgreco]
this is literally a Chat-GPT answer, I just checked it. I need real world opinion on this. [image removed, available as attachment for reference]
 

Attachments

  • 1681324931654.png
    1681324931654.png
    141.2 KB · Views: 273
Last edited by a moderator:

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
this is literally a Chat-GPT answer, I just checked it. I need real world opinion on this. [image removed, available as attachment for reference]

I've deleted this (also the copies in your response). There is no need for confusing half-truth ChatGPT answers to be posted here. In the future, please feel free to "Report" content you feel may be generated by ChatGPT. Thanks, -JG for the moderation team
 
I've deleted this (also the copies in your response). There is no need for confusing half-truth ChatGPT answers to be posted here. In the future, please feel free to "Report" content you feel may be generated by ChatGPT. Thanks, -JG for the moderation team
thanks! waiting for a real world reply now haha
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Since no one chimed in, although we've seen other video editors here, I'll offer some non-AI comments, with the caveat that I have no practical experience building such powerful systems.

(building new system) "bigbro" is supposed to be our main storage server, where every video is stored unless it is currently being edited; even backups of "theflash" could live there in the future. It has 12x 20TB IronWolf Pro HDDs, which at the end of the year (2023) will be expanded to a total of 24x 20TB IronWolf Pros, a 64-core AMD EPYC, 128GB of DDR4 RAM, and a 1TB IronWolf 525 NVMe drive.
Solid foundation, although the phase 2/3 versions could certainly use more RAM.

System: given my main need being video file reads/writes, with a plex server or something like that in the future should I go CORE or SCALE?
If it's mostly (or only) storage, go for CORE! The kernel allocator in SCALE cannot give more than 50% of RAM to ARC; it would be a shame to basically waste half of hundreds of gigabytes of RAM…

CPU: I currently have this maxed out, but it would be interesting to know if we actually need 64 cores, or if I could have gone with a 24- or 32-core EPYC.
Serving 6-8 users should not require 64 cores. If it's SMB, which is single-threaded per connection, a 16-core EPYC with higher clocks should actually do better than a 64-core CPU with a lower base frequency.

Drives: I am thinking of setting up a pool of 12x 20TB drives in RAIDZ2, and as I expand at the end of the year, adding another pool of 12x 20TB in RAIDZ2.
12 is a bit on the wide side; 8- to 10-wide would be the recommended range for raidz2.
Do you really mean "add another pool", or is it rather "another vdev in the same pool"? I suspect the latter.
With tens of drives, and going to a petabyte, you may be in a territory where it could be worth looking into dRAID.
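To make the "another vdev in the same pool" point concrete, here's a rough sketch of the capacity math and the commands involved. The pool name and /dev/sdX names are placeholders I made up, and on TrueNAS you'd normally do all of this through the web UI rather than the shell:

```shell
# Rough usable capacity per 12-wide raidz2 vdev of 20 TB drives
# (ignores raidz padding, slop space, and TB-vs-TiB differences)
DRIVES=12; PARITY=2; SIZE_TB=20
echo "approx usable per vdev: $(( (DRIVES - PARITY) * SIZE_TB )) TB"

# Hypothetical initial pool (placeholder names; do NOT run blindly):
#   zpool create bigbro raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl
# End-of-year expansion as a second raidz2 vdev in the SAME pool,
# which stripes new writes across both vdevs:
#   zpool add bigbro raidz2 sdm sdn sdo sdp sdq sdr sds sdt sdu sdv sdw sdx
```

A second *pool* would instead be a separate namespace with its own free space, which is usually not what you want for one big archive.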

Cache?: I thought of using the 1TB IronWolf NVMe as L2ARC for when memory is full. (1TB may not be enough for us, since every client project we edit is usually 1-3TB, so maybe using one of the 3.84TB SSDs from the QNAP NAS could be a solution?)
I'm unsure about the need for a L2ARC. 128+ GB is already plenty for ARC, but holding a 1-3 TB working set in L2ARC does not look like a realistic target. And I thought that editing would still be done on the all-SSD NAS, so what's the point?
If you do need an L2ARC on "bigbro", an NVMe drive would be better.

and here is a crazy idea I had:
since I now have spare drives, I can take 2x 3.84TB SSDs out of the QNAP NAS and use them as the special metadata vdev that Wendell from Level1Techs talks so much about. Is this a good idea? Would it help performance in my use case?
A special vdev is pool-critical and must be as resilient as the rest of the pool, so you'd need at least a 3-way mirror to match raidz2.
But I doubt it would be useful. Metadata is typically around or below 1% of capacity, and probably even less in your case because you have very big files. With lots of RAM, all the metadata which would go in the special vdev will just remain in ARC as long as the system is on.

Alternate crazy idea:
With all the PCIe lanes of an EPYC, you could set up an all-NVMe pool for editing videos. If 8x 3.84 TB offers sufficient capacity in "theflash", "bigbro" could match it with 8x 3.84 TB U.2 drives, or 4x 7.68 TB.
Although it may be better to keep the editing pool and the archiving pool on different servers, for security.

TrueNAS settings: are there any settings I should specifically enable/disable given my use case? I have seen hundreds of YT videos about setting up TrueNAS, but most of them just teach you to create a "simple" TrueNAS build without the cache disks that could potentially make your big storage really fast with large video files.
Setting a large record size (1 MiB, under "Advanced Options") is an obvious candidate, so I suppose it's already done.
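For reference, the CLI equivalent of that Advanced Options setting (the dataset name "bigbro/projects" is just a placeholder):

```shell
# CLI equivalent of the "Record Size" field in the dataset's Advanced Options.
# Run as root on the NAS; "bigbro/projects" is a made-up dataset name.
zfs set recordsize=1M bigbro/projects

# Verify. Note this only affects newly written blocks, not existing files:
zfs get recordsize bigbro/projects
```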
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
So reading your comments here, I think you really should have bought an actual TrueNAS Enterprise product rather than playing around. Production mission-critical storage really should be built by the people who make the product, just sayin' :P The R series has really good entry-level pricing for all-flash storage. https://www.truenas.com/r-series/

But here we are. The only data I have to pull from for this type of use case is really Linus Tech Tips, who has absolutely gone through the pain of trying to do exactly what you are trying to do. The difference between you and his successes/failures is that you are only using SATA SSDs for your bulk storage. This gives your system more breathing room, and you aren't going to have the weird stability issues caused by a pool that wants to go faster than your RAM.

CPU: I currently have this maxed out, but it would be interesting to know if we actually need 64 cores, or if I could have gone with a 24- or 32-core EPYC.
The more editors you have hitting the NAS at any given time, the more CPU cores you are likely going to need. SMB performance to a single end node is directly tied to single-threaded CPU performance, so depending on how many editors (6 to 8 now, but maybe more sooner than later?) are accessing the server at once, it may be better to have a 32-core "F" series processor like the 75F3. It really depends on your workload in real life. Without access to more data, it's hard to say.

RAM: currently sitting at 128GB DDR4. I don't think that's enough considering this will very soon be a 240TB system, and since ZFS uses RAM as cache, I think we should start with 512GB DDR4.
Cache?: I thought of using the 1TB IronWolf NVMe as L2ARC for when memory is full. (1TB may not be enough for us, since every client project we edit is usually 1-3TB, so maybe using one of the 3.84TB SSDs from the QNAP NAS could be a solution?)
It's all about what you are getting for cache hits. How many of your editors are accessing the same files at the same time? How often are your editors accessing the same files? On my server, which has 256GB of DDR4 and an EPYC 7302, I can see that going from 128GB to 256GB was a meaningful upgrade. However, adding an L2ARC on a 118GB Optane drive didn't yield much additional performance, as my mean hit ratio over the past day was 6%. That doesn't mean yours will be the same, and monitoring your system will tell you whether or not you could use more RAM. If your hit ratio is high, then more RAM will help. An L2ARC should only be considered once you have maxed out your RAM.
[screenshot: 1681675391970.png]
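If anyone wants to check their own numbers, the overall hit ratio can be pulled out of the kernel's arcstats with a one-liner. A small sketch with made-up sample values; on a live SCALE box you'd read /proc/spl/kstat/zfs/arcstats instead of the here-doc:

```shell
# Compute the ARC hit ratio from arcstats-style "name type value" lines.
# The here-doc holds sample data; on a real system, point awk at
# /proc/spl/kstat/zfs/arcstats instead.
awk '/^hits /{h=$3} /^misses /{m=$3} END{printf "ARC hit ratio: %.1f%%\n", 100*h/(h+m)}' <<'EOF'
hits 4 940000
misses 4 60000
EOF
```

With these sample values it prints "ARC hit ratio: 94.0%"; a persistently low number means the working set simply doesn't fit in (or revisit) the cache.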


since I now have spare drives, I can take 2x 3.84TB SSDs out of the QNAP NAS and use them as the special metadata vdev that Wendell from Level1Techs talks so much about. Is this a good idea? Would it help performance in my use case?
YEESSSS!! :) Consider a 3-way mirror though; if you lose a special vdev, you lose your pool.
Check my post over there: https://forum.level1techs.com/t/zfs-metadata-special-device-real-world-perf-demo/191533
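Roughly what that looks like on the command line, as a sketch only; the pool and device names are placeholders, on TrueNAS you'd add the vdev through the UI, and the small-blocks value below is just an example:

```shell
# A special vdev holds pool metadata, so losing it loses the pool --
# hence the 3-way mirror to match raidz2's two-disk fault tolerance.
# Placeholder pool/device names; do NOT run blindly:
#   zpool add bigbro special mirror /dev/sdy /dev/sdz /dev/sdaa

# Optionally also steer small file blocks to the SSDs (per-dataset;
# the default of 0 means metadata only; 64K here is an example value):
#   zfs set special_small_blocks=64K bigbro/projects
```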

Truenas Settings: is there any settings I should specially enable/disable given my use case scenario? I have seen hundreds of YT videos about setting up truenas but most of them just teach you to create a "simple" truenas build with not many cache disks that could potentially make your big storage be real fast with large video files.

Since SCALE is running Linux, like others have said above, ZFS behaves differently because of differences between the BSD and Linux kernels: by default, OpenZFS on Linux will only use 1/2 of your RAM for caching.
Modifying zfs_arc_max will help you tune your performance. But caution is advised: if you are too aggressive, you will create system instability. The limit is there for a reason.
[screenshot: 1681675805853.png]
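As a sketch, assuming a 512GB machine and an arbitrary 400 GiB cap (the sysfs path is the standard OpenZFS module parameter location on Linux):

```shell
# Example: cap ARC at 400 GiB on a 512GB SCALE box. 400 GiB is an
# arbitrary example value -- leave headroom for the OS and services.
ARC_MAX=$((400 * 1024 * 1024 * 1024))
echo "$ARC_MAX"   # 429496729600 bytes

# Apply at runtime (as root). On TrueNAS, persist it as a post-init command,
# since module parameters reset on reboot:
#   echo "$ARC_MAX" > /sys/module/zfs/parameters/zfs_arc_max
```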


I have a few tweaks that I run on startup in the UI
[screenshot: 1681675900482.png]


Most of them are ZFS tunables like the above. I also use Datacenter TCP. @jgreco wrote a neat write-up in the resources section about high-speed networking, here: https://www.truenas.com/community/r...ng-to-maximize-your-10g-25g-40g-networks.207/
 