Help with a Video Production NAS Setup

Techie

Cadet
Joined
Sep 26, 2022
Messages
2
Hey Guys,

First off, thank you for your help! I primarily work on video projects (Premiere Pro) and have been trying to create an optimal solution for my needs.

Basically, this is my first venture into TrueNAS. What I want the NAS to be is the live project machine that a couple of client systems (one primarily, but there are a few of us) use for hosting video files and the active project directory.

My current setup is a QNAP TS-x32PX with 10GbE running 5 x 14TB in RAID 5. I also had a SATA SSD read/write cache (4 x 2TB RAID 10), but I've turned that into a 3 x 2TB RAID 5 for live projects because one of the drives would randomly disconnect and ruin the cache.

The main editing machine is a Ryzen 5950X, 128GB RAM, RTX 3080, ASRock X570 Creator (10GbE, TB3).

10GbE switch: TP-Link TL-SX1008

When working off the spinning disks it becomes hard to edit because of very noticeable lag (latency, I assume). The SATA SSDs also end up being a bottleneck when transferring files (which can exceed 500GB) from a Gen 3 PCIe drive. So what I want to do is create an "NVMe live project NAS" and back up to the QNAP. The main benefits of going NVMe (probably Gen 3, but maybe Gen 4) would be latency and transfer speed; I want to be working within seconds or minutes, not 30 minutes.
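To put rough numbers on that, here's a back-of-envelope sketch; the throughput figures are my own ballpark assumptions for real-world sustained speeds, not benchmarks of this specific hardware:

```python
# Rough transfer-time estimates for a large project copy.
# Throughput figures are ballpark real-world assumptions.
FILE_GB = 500

links_mb_per_s = {
    "HDD RAID 5 (spinning)": 400,
    "SATA SSD": 550,
    "10GbE network": 1100,
    "NVMe Gen 3 x4": 3000,
    "40Gb network": 4400,
}

for name, mbps in links_mb_per_s.items():
    minutes = FILE_GB * 1000 / mbps / 60
    print(f"{name:22s} ~{minutes:4.1f} min for {FILE_GB} GB")
```

Even at ideal 10GbE speeds a 500GB copy sits in the 7-8 minute range, so the 40Gb point-to-point link is what gets it down toward the "couple of minutes" target.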

So I'm thinking the solution would be: a Gen 3 drive for recording from the camera / ingest in a Thunderbolt 40Gbps enclosure, a 40Gbps NIC in both the main editing machine and the NAS, and then 4 x 2TB NVMe drives in RAIDZ1.
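For reference, a simplified view of what that pool layout yields in usable space (this ignores ZFS allocation/padding overhead and the TB-vs-TiB difference, so real usable space will be somewhat lower):

```python
# Simplified RAIDZ1 capacity: one drive's worth of space goes to
# parity across the vdev (ignores ZFS metadata/padding overhead).
def raidz1_usable_tb(drives: int, size_tb: float) -> float:
    return (drives - 1) * size_tb

print(raidz1_usable_tb(4, 2.0))  # 4 x 2TB RAIDZ1 -> 6.0 TB nominal
```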


I have a lot of PC hardware on hand, but wanted to check a couple things and make sure the parts I've put aside are recommended. I'm happy to buy something else if required.


Case: Node 304, but I can go larger to accommodate another motherboard.
Motherboard: Recommendations? I have an ASUS ROG Strix B560-G Gaming WiFi, but I assume that wouldn't be a first choice for a NAS. ECC support? It also needs to support the add-in card below plus the 40Gb NIC and 10Gb NIC.
CPU: Intel i7-11700F
Memory: Yet to be determined because of the motherboard choice / ECC. Also, any capacity suggestions? A cache seems like it would be useless to me because of the file sizes.
40Gbps NIC: Recommendations? I've seen used Mellanox NICs come up a few times on the forum, with people saying they are good and cheap second-hand.
10Gbps NIC: Recommendations? Other computers that I may edit from have 10GbE/5GbE, so I was going to wire the NAS into the 10GbE network and run the 40Gb connection point-to-point to the main editing PC.
Drives: (4 x M.2 NVMe) I have two barely used XPG SX8200 Pros (different controller revisions) rated at 1,280TBW with TLC (though they are consumer drives), which seems decent, so I may get another two. Or should I go for WD SN700s for double the endurance, or something else if recommended?
Add-in card (drives): Supermicro AOC-SHG3-4M2P... Has anyone used this, or is something else recommended for my setup? Is there a performance/latency hit for four x4 PCIe drives on an x8 card with a PLX switch?
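On the endurance point above, relating a TBW rating to expected lifespan is simple arithmetic; the daily write volume below is a made-up illustration, not a measured figure:

```python
# Years until a drive's rated endurance (TBW) is consumed at a
# given sustained daily write load (hypothetical workload figure).
def endurance_years(tbw: float, daily_write_tb: float) -> float:
    return tbw / daily_write_tb / 365

# 1,280 TBW (the SX8200 Pro 2TB rating) at 1TB written per day:
print(round(endurance_years(1280, 1.0), 1))  # -> 3.5
```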



Thank you! Happy to answer any questions!
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,909
What is your budget? Yes, you said that you can re-use hardware, but thinking about how much money one would be willing to spend usually helps ;-)

Why do you want to edit off of the NAS vs. using the NAS for storage of "inactive" projects only?

I would certainly go for a proper server motherboard and ECC RAM.

Please also check the first six articles from the "Recommended readings" in my signature for an overall understanding of what ZFS/TrueNAS can and cannot do for you.
 

Techie

Cadet
Joined
Sep 26, 2022
Messages
2
Hi Chris, thank you for your response!

What is your budget? Yes, you said that you can re-use hardware, but thinking about how much money one would be willing to spend usually helps ;-)

Honestly, budget conscious but flexible. I'm a big fan of used but good gear (though mostly not for storage / motherboards). Let's say sub-$1,000-ish for the main system (no drives).

Why do you want to edit off of the NAS vs. using the NAS for storage of "inactive" projects only?

Good question; primarily so that the active project repository can be accessed on multiple machines without affecting the main editing system. My primary editor can be working on a project while I research, gather assets, and ingest footage for another without affecting his machine. Plus the possibility of scaling up production if I need more people. Another benefit I was thinking about is previous projects' B-roll. This doesn't need to be on NVMe, but there are TBs of B-roll that may be used in future projects, and I'd rather have it on snappier SATA SSDs, which could be contained in this solution. I could get a couple of 4TB NVMes and RAID 1 them locally for now, but I feel like I'm just putting off the inevitable.

I would certainly go for a proper server motherboard and ECC RAM.

Makes sense, I'll see if I can find some recommended boards... unless you have suggestions? From my (limited) knowledge, high single-core performance is what's most useful in a scenario like this?

Please, also check the first six articles from the "Recommended readings" in my signature for an overall understanding what ZFS/TrueNAS can and cannot do for you.

Will do, thank you! Does the fact that you referenced it imply an incompatibility? I'll have a read, but do you foresee an issue?
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,909
Good question; primarily so that the active project repository can be accessed on multiple machines without affecting the main editing system. My primary editor can be working on a project while I research, gather assets, and ingest footage for another without affecting his machine.
Perhaps I misunderstand your workflow, but wouldn't this be a perfect scenario for having the "archive" files on the NAS and doing the editing locally on the workstation?
Plus the possibility of scaling up production if I need more people.
Yes and no. Any given hardware configuration will have a certain limit on the amount of parallel editing it can sustain. So you would always have to buy something bigger than what you need right now. That is not necessarily a bad thing, but it is something to be aware of.
Another benefit I was thinking about is previous projects' B-roll. This doesn't need to be on NVMe, but there are TBs of B-roll that may be used in future projects, and I'd rather have it on snappier SATA SSDs, which could be contained in this solution. I could get a couple of 4TB NVMes and RAID 1 them locally for now, but I feel like I'm just putting off the inevitable.
Basically, see below. The big conceptual change you have to keep in mind is that moving to a distributed system adds a lot of complexity. Putting SSDs in a NAS (instead of HDDs) and expecting the same results as if they were local will only work up to a point.
Makes sense, I'll see if I can find some recommended boards... unless you have suggestions? From my (limited) knowledge, high single-core performance is what's most useful in a scenario like this?
The single-core thing with SMB is per session. So as soon as you have multiple PCs connected, you can make better use of multiple cores, and will even need them.

The overall challenge is that we are basically talking about performance and performance tuning. Whatever the gap between requirements and actual performance, there is always one factor that serves as the bottleneck and thereby limits overall performance. Performance tuning then means identifying that factor (often far from trivial) and removing it.

Turning this around means that unless a ton of data is available (I have yet to see that happen), there are two possible approaches: 1) go big right from the start and accept that you will spend money on things that are (sometimes significantly) more powerful, and therefore more expensive, than needed; or 2) use an incremental approach and replace things as you learn more about the way the system's components interact with one another.

With a NAS that you edit on directly, a ton of additional aspects come into play, most prominently I/O latency and bandwidth. PCIe lanes all of a sudden become a major factor. This is a complex topic, and many people make a very good living just from knowing these things. In other words, don't expect to master this in a short timeframe; plan on a couple of months at least.
Will do, thank you! Does the fact that you referenced it imply an incompatibility? I'll have a read, but do you foresee an issue?
No specific issues. But you have a pretty advanced scenario, so background knowledge is needed; otherwise you will have a hard time even asking the right questions. In 10 years your current requirements will possibly be covered by a standard NAS, but today they are not. So you are a bit on the leading edge, at least relative to what most people here do. (From an enterprise perspective it is different, but there we are talking about very different amounts of money.)
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,116
Case: Node 304, but I can go larger to accommodate another motherboard.
Mini-ITX is going to be a severe limitation… beginning with putting two NICs and an AOC-SHG3-4M2P on a single PCIe slot.
I also see no 3.5" HDDs in there.

Motherboard: Recommendations? I have an ASUS ROG Strix B560-G Gaming WiFi, but I assume that wouldn't be a first choice for a NAS. ECC support? It also needs to support the add-in card below plus the 40Gb NIC and 10Gb NIC.
ECC may not be a priority if this is fast short-term storage and finished projects are moved to long-term secure storage, but anything with enough PCIe lanes for your needs will likely come with ECC support anyway.

Add-in card (drives): Supermicro AOC-SHG3-4M2P... Has anyone used this, or is something else recommended for my setup? Is there a performance/latency hit for four x4 PCIe drives on an x8 card with a PLX switch?
A PLX switch implies some extra latency, which is probably negligible.
With a x16 slot bifurcated to x4/x4/x4/x4, a plain, cheaper riser card (ASUS Hyper M.2, or a generic equivalent) would do, with no added latency.
 