How to get the fastest speeds possible? Building a Video Editing Monster.

Grassfire
Cadet · Joined Apr 20, 2022 · Messages: 1
Let me start by saying I'm not a networking professional BUT I know my way around the kitchen pretty well, haha.

I need help configuring TrueNAS to be the fastest storage server possible with the hardware I have. If I need to make small/medium hardware changes I can.

My company has been using a TrueNAS CORE server I built about three years ago, with great results. However, we need more space, so I recently built a new server to replace the older machine. Specs are at the bottom of the post.

Goal #1:
Deploy TrueNAS Scale with optimized settings for fast access to very large video files.

Goal #2:
Provide approximately 500MB-1000MB per second of data transfer, simultaneously, to five editors.
Each editor workstation has dual 10GbE NICs. My understanding is that the latest version of Samba allows for faster speeds when a transfer is spread across multiple connections. When we built our studio, Cat6a cabling was installed, so it's my only option. The good news is we have two cable runs at each workstation.
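A quick back-of-envelope check of those targets may help frame the answers below. The figures here are assumptions for illustration, not measurements (practical 10GbE payload, per-disk sequential throughput, and the pool layout are all guesses):

```python
# Feasibility sketch for Goal #2 -- all constants are assumptions, not benchmarks.
GBE10_MBPS = 1150           # practical payload of one 10GbE link in MB/s (assumed)
editors = 5
target_per_editor = 1000    # MB/s, upper end of the stated goal
hdd_seq = 180               # conservative sequential MB/s per 7200 RPM disk (assumed)
data_disks = 12             # e.g. 14 drives as 2x 7-wide RAIDZ2 -> ~12 data disks

aggregate_target = editors * target_per_editor
per_editor_nic = 2 * GBE10_MBPS           # dual 10GbE with SMB multichannel
pool_seq_ceiling = data_disks * hdd_seq   # ideal case, ignores seeks and contention

print(f"aggregate target  : {aggregate_target} MB/s")
print(f"per-editor NIC cap: {per_editor_nic} MB/s")
print(f"HDD pool ceiling  : {pool_seq_ceiling} MB/s")
```

Under these assumptions, dual 10GbE caps each editor around 2.3GB/s, but five editors at the top of the stated range would want more aggregate throughput than the spinning-disk pool can sustain; the lower end of the range (500MB/s each) is much more realistic for the HDD pool alone.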

Goal #3:
Automatically sync our server data with our unlimited Google Drive account.
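For reference, TrueNAS's built-in Cloud Sync tasks use rclone under the hood, so the scheduled UI task is roughly equivalent to a command like the following (the `gdrive:` remote name and dataset path are placeholders):

```
rclone sync /mnt/tank/projects gdrive:studio-backup \
    --transfers 8 --checkers 16 --drive-chunk-size 128M
```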

Goal #4:
Host a Plex server within TrueNAS.

Goal #5:
Host a Windows VM and use it as a Render Server for programs that are optimized for multi core rendering, such as Blender.

Goal #6:
Protect ourselves against someone accidentally or intentionally deleting files that should not have been deleted.
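ZFS snapshots are the usual answer here: they are read-only, can be scheduled from the TrueNAS UI as Periodic Snapshot Tasks, and old file versions remain browsable per dataset. A manual sketch (the dataset name is an assumption):

```
zfs snapshot -r tank/projects@before-cleanup
zfs list -t snapshot -r tank/projects      # see what can be restored
zfs rollback tank/projects@before-cleanup  # whole-dataset restore; single files
                                           # can be copied out of .zfs/snapshot/
```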

Goal #7:
Have two different permission levels so that employees only have access to specific folders that we want them to see.

Goal #8:
Find a way to use our old TrueNAS CORE server. This could be a backup (albeit much smaller than our new server: 30TB vs. approx. 250TB), or we could set up some kind of Fast Tier vs. Storage Tier arrangement. My only concern there is that moving projects back and forth between the servers could take too much time (man power, not network speed). Is there a way to automate this without creating duplicates for my automatic Google Drive upload backups?
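One low-effort way to use the old box is as a replication target: TrueNAS Replication Tasks wrap incremental `zfs send`/`zfs receive`, which copies only changed blocks and reads from the same source dataset the Google Drive job backs up, so it should not create duplicates for the cloud sync to re-upload. A manual sketch (hostnames, dataset names, and snapshot names are placeholders):

```
zfs snapshot tank/projects@repl-2
zfs send -i @repl-1 tank/projects@repl-2 | ssh old-nas zfs receive backup/projects
```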

Statements/Questions:
  1. While write speeds are nice when uploading footage to the server, it's read speeds that matter most for our needs.
  2. We went with such high-end CPU specs and RAM amounts in the hope of accomplishing Goal #5 without degrading our network disk performance.
  3. We have a 1Gb/s symmetrical fiber internet connection.
  4. We understand that, currently, TrueNAS SCALE is slower than TrueNAS CORE. We are curious whether it is worth dealing with temporary slowdowns in the short term, knowing that it will be optimized over time. Do the benefits of TrueNAS SCALE outweigh the temporary performance issues?
  5. We are curious whether there are any benefits to setting up an iSCSI solution, which (according to my limited knowledge) makes the network drives show up as physical drives on each workstation. Historically we have simply mapped network drives on each workstation.
  6. What specific tweaks need to be made to optimize read speed for our editors? (i.e. network settings, HD cluster size, Samba config, etc.)
  7. I wanted to create a tiered storage solution in Windows Server 2022 but, like all those who tried before me, I failed hard. Windows needs to get its act together. I searched for other tiered storage solutions within a modest budget but found nothing. In a perfect world I could "pin" HOT projects to the NVMe tier until we were done with them, then "unpin" them, at which point they would automatically move to the slower mechanical hard drives. This sounds so simple and yet is so hard to make happen! If you have ideas, hit me up!
  8. Most of the files we deal with are very large, on average ranging between 10GB and 200GB each.

Networking Hardware:
Gateway: Ubiquiti UniFi Dream Machine Pro
Switch: Ubiquiti UniFi Switch Pro Aggregation Layer 3 switch with (28) 10G SFP+ ports and (4) 25G SFP28 ports.
Access Point: Ubiquiti UniFi Access Point WiFi 6 Pro

NEW Server Hardware:
Processor: 2x Intel Xeon E5-2695 v4, 2.10GHz, 18 cores each (36 cores total)
Memory: 256GB DDR4 ECC REG
Hard Drives: 14x Seagate 18TB 7200 RPM 256MB SAS 3.5 4096/512E
Hard Drives: 4x Seagate FireCuda 530 2TB NVMe Drives
Hard Drives: 2x Crucial MX500 1TB 3D NAND (boot)
NVMe PCI Adapter: ASUS Hyper M.2 X16 PCIe 4.0 X4 Expansion Card Supports 4 NVMe M.2 (Yes I know it is a PCIe 4.0 card in a PCIe 3.0 slot)
Storage Controller: 1x AOC-S3008L-L8e HBA 12Gb/s
NIC: Integrated Intel X540 Dual Port 10GBase-T
NIC: 2x AOC-S25G-I2S SUPERMICRO 2-PORT 25GBE SFP28 PCIE NETWORK CONTROLLER
Motherboard: X10DRI-T
* Integrated IPMI 2.0 Management
Backplane: BPN-SAS3-836EL1 16-port 3U SAS3 12Gbps single-expander backplane, support up to 16x 3.5-inch SAS3/SATA3 HDD/SSD
PCI-Expansions slots: Full Height 3 PCI-E 3.0 x16, 3 PCI-E 3.0 x8
HD Caddies: 16x 3.5" Supermicro caddy (Rear 2qty 2.5" is wired to on board SATA)
Power Supply: 2x 1000Watt Power Supply

OLD (but still working) Server Hardware:
Processor:
Intel Xeon 12 Core CPU
Memory: 20GB DDR4 ECC REG
Hard Drives: 5x Seagate 8TB 7200 RPM 256MB SATA 3.5
Hard Drives: 4x Crucial MX500 1TB 3D NAND. 2x operating as Cache, 2x operating as Logs
Storage Controller: on motherboard.
NIC: Intel X540 Dual Port 10GBase-T
Motherboard: X10DRI-T
Backplane: none
PCI-Expansions slots: unknown
HD Caddies: 10x 3.5"
Power Supply: 750Watt
 

sretalla
Powered by Neutrality · Moderator · Joined Jan 1, 2016 · Messages: 9,703
My understanding is that the latest version of SAMBA allows for faster speeds when spread across multiple connections.
Incorrect unless you're talking about more than one transfer at a time and somehow using separate TCP sessions.

Deploy TrueNAS Scale with optimized settings for fast access to very large video files.
Be aware (as you mention you are) that reports and some fairly convincing evidence show that SCALE is nowhere close to the performance of CORE. SCALE has yet to reach the maturity of tuning expected to arrive around U3 (several months in the future), and even then it may or may not reach performance parity.

Host a Windows VM and use it as a Render Server for programs that are optimized for multi core rendering, such as Blender.
It may work OK from the perspective of the VM if you get the devices (virtIO) and drivers right, but it's a terrible waste of RAM from your storage server if that's its primary purpose.

It will also divide the server's resources between providing block storage for the VM and providing shares to the workstations (which won't be optimal on the same pool unless you're doing iSCSI instead).

I wanted to create a Tiered Storage solution
Maybe this thread would be something to look at: https://www.truenas.com/community/t...tion-i-wish-to-run-autotier.94103/post-651286

We are curious if there are any benefits to setting up an iSCSI solution which (according to my limited knowledge) make the network drives show up as physical drives on each of the server workstations. Historically we have simply mapped network drives to each workstation.
It can do that (show up as a locally attached disk on the workstation), but it will necessitate that you understand sync writes and probably that you put together a high-performing SLOG setup unless you're prepared to lose data sometimes.

Because each iSCSI target will effectively be a disk private to the computer that mounts it, no sharing would happen from the TrueNAS side, only from the workstation if you chose/needed to do that.
 

mervincm
Contributor · Joined Mar 21, 2014 · Messages: 157
The Samba feature I think you are talking about is called SMB Multichannel. It works well Windows-to-Windows, but was not enabled by default on Linux until Samba 4.15:

"server multi channel support" no longer experimental
-----------------------------------------------------
This option is enabled by default starting with 4.15 (on Linux and FreeBSD).
Due to dependencies on kernel APIs of Linux or FreeBSD, it's only possible
to use this feature on Linux and FreeBSD for now.
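For completeness: on builds where the feature was still experimental (pre-4.15), it could be switched on explicitly via an smb.conf global parameter; on 4.15 and later this is already the default, per the note above:

```
[global]
    server multi channel support = yes
```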
 
Joined Apr 14, 2020 · Messages: 8
By no means am I trying to be rude, but that list is a lot to ask of a community forum. I think you would be best served by paying a professional solutions provider to evaluate your needs and formulate a plan tailored to you.

I can see from the list of hardware in your build that you didn't follow some of the key hardware recommendations for TrueNAS servers. The first thing to jump out is the use of client hardware, which is usually discouraged, even more so for a use case such as yours. Then there's the use of the very expensive Seagate FireCuda 530s, which are PCIe 4.0 as you pointed out, in a system that only has PCIe 3.0; and the ASUS card is only an adapter from M.2 to PCIe and relies on motherboard bifurcation support. For the price of those drives you could have picked up an actual M.2 RAID card like the ones from RocketPoint. Also, an X10 motherboard, even three years ago, is fairly dated tech for an aggressive use case like yours. If you had gone with a dual EPYC Rome server, not only would you have more compute, you'd have more cores to help with the VMs, not to mention PCIe 4.0 with tons of lanes (128).

Again, I am not trying to berate you, just trying to help you out. I could be wrong about the recommendations I made as well, which is why I encourage you to consult a pro who does this sort of thing every day. I hope I helped.

EDIT: In my recommendations above I screwed up the name of the NVMe cards. They are called RocketRAID, by HighPoint, and to be honest they may violate one of the basic tenets of TrueNAS: no hardware RAID. I believe they can be configured to behave like an HBA as well, though. I have never looked up their performance under TrueNAS. They are my next must-have item to buy and I will check out what people have to say about them; regardless of their applicability to TrueNAS, I'm still going to pick one up.
 