Filesystem build for shared 4k video editing + 10GBASE-T

Status
Not open for further replies.

Catterick

Dabbler
Joined
Nov 3, 2018
Messages
10
Hi all. Newly registered here but been lurking for the past week as I research this new build. Quick background and context for this:

- I've recently founded a small video production company and we're moving to our office by end of November. I've worked at multiple post-production houses throughout my career, all of which had various levels of shared storage for multi-user video editing (SAN and NAS solutions).
My budget can't quite stretch to these off-the-shelf enterprise solutions (and our needs aren't quite as demanding), so I was originally looking at the QNAP TVS-1282T3 as our shared fileserver solution. Thunderbolt 3 sounded great, especially since we have Macs and PCs in our workflow and the setup in the beginning would be almost plug and play. However, I think we'd quickly outgrow TB3 infrastructure-wise and have to utilise the 10GbE within about 6 months.
When you're a small startup that's a lot of money to drop on a single piece of hardware, so I wanted to do my due diligence, which took me to the FreeNAS community!

Our setup: 2x video editors (1 Mac, 1 PC). This would expand to 3 by end of the year, and hopefully 4 in 6 months. PC workstations would have Intel X540-T2s, Macs have Sonnet 10G breakouts. Each workstation will have local NVMe/SSDs for scratch disks/render files.
Our Needs: Shared filesystem for storing all 4k footage that all editors work from. No VMs, possibly utilising it later as a remote render station. Read speed/capacity is our priority, write speed/redundancy is secondary.
Current build spec:

Processor: Intel - Xeon E5-2620 V4
Motherboard: Supermicro - X10SRM-TF
RAM: Kingston - 64GB (4 x 16GB) Registered DDR4-2133
HDDs: 8x Seagate - IronWolf Pro 8TB
PSU: SeaSonic - FOCUS Plus Gold 650W 80+ Gold
Case: Fractal Design - Node 804
UPS: APC - BR1200GI

Current planned setup: 2 vdevs of RAIDZ1 into a single pool for the shared fileserver. In the beginning the 2 editors will attach directly to one of the two onboard 10GbE ports each, and when we expand to more editors I'll purchase a switch and route through that.
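For reference, this is roughly what that layout would look like at the pool level (just a sketch; da0-da7 are placeholder device names and I'd actually build the pool through the FreeNAS GUI rather than the shell):

Code:
# Sketch: two 4-disk RAIDZ1 vdevs striped into a single pool (placeholder device names)
zpool create tank \
  raidz1 da0 da1 da2 da3 \
  raidz1 da4 da5 da6 da7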

Current Backup Solution: 2x TerraMaster 2-bay DAS units in RAID1, each with 10TB WD Red drives. Backups to be done nightly. Not a perfect solution, but it was my home setup from when I was freelancing and has done me good for the past few years. I also want to integrate cloud syncs (Backblaze etc., but this is new to me and I need to research the FreeNAS integration).

So... provided I'm not wildly off on my build spec and setup, I've been going in circles on a couple of points:

  1. The X540-T2 NIC on the motherboard concerns me. From what I've read, this chipset will work with FreeNAS, but Chelsio NICs are the preferred choice. It kinda bugs me that I wasn't able to find anyone who has used this motherboard in their builds! It ticks all my boxes (mATX, dual 10GbE etc.).
  2. The vdev config. Initially I was set on a single RAIDZ2, but my research points me towards using multiple vdevs in a single pool. This is fine, but am I right in thinking 2x RAIDZ1 is the way to go, or should I look at a mirror setup? I think I would need a larger drive pool to really saturate the 10GbE line, but I'm looking for the best setup possible with 8 drives.
  3. L2ARC. I read through the guide and the conclusion I came to is that I wouldn't benefit from SSDs as an L2ARC, considering my planned RAM capacity and usage. Since the NAS is used for active video projects, our individual video files are between 5-50GB, with projects taking up anywhere between 800GB-2TB. With 2 concurrent editors working on separate projects, that's roughly 1.6TB-4TB of 'active' data my ARC+L2ARC would have to support (apologies if I'm completely misunderstanding this). I would want to max out my RAM first, but since this board supports a max of 64GB, it makes me doubt my MB choice a little bit more...
I think I've covered everything, and apologies if I've missed some critical reading!

And to finish, this is a slightly more expensive build that I'm also thinking of:
PCPartPicker part list / Price breakdown by merchant

CPU: Intel - Xeon E5-2620 V4 2.1GHz 8-Core Processor (€417.50 @ Mindfactory)
CPU Cooler: Noctua - NH-U9DXi4 37.8 CFM CPU Cooler (€59.90 @ Amazon Deutschland)
Motherboard: Supermicro - MBD-X10SRL-F-O ATX LGA2011-3 Narrow Motherboard
Memory: Kingston - 64GB (4 x 16GB) Registered DDR4-2133 Memory (€829.99 @ Amazon Deutschland)
Storage: Seagate - IronWolf Pro 8TB 3.5" 7200RPM Internal Hard Drive (€288.89 @ Amazon Deutschland)
Storage: Seagate - IronWolf Pro 8TB 3.5" 7200RPM Internal Hard Drive (€288.89 @ Amazon Deutschland)
Storage: Seagate - IronWolf Pro 8TB 3.5" 7200RPM Internal Hard Drive (€288.89 @ Amazon Deutschland)
Storage: Seagate - IronWolf Pro 8TB 3.5" 7200RPM Internal Hard Drive (€288.89 @ Amazon Deutschland)
Storage: Seagate - IronWolf Pro 8TB 3.5" 7200RPM Internal Hard Drive (€288.89 @ Amazon Deutschland)
Storage: Seagate - IronWolf Pro 8TB 3.5" 7200RPM Internal Hard Drive (€288.89 @ Amazon Deutschland)
Storage: Seagate - IronWolf Pro 8TB 3.5" 7200RPM Internal Hard Drive (€288.89 @ Amazon Deutschland)
Storage: Seagate - IronWolf Pro 8TB 3.5" 7200RPM Internal Hard Drive (€288.89 @ Amazon Deutschland)
Storage: Seagate - IronWolf Pro 8TB 3.5" 7200RPM Internal Hard Drive (€288.89 @ Amazon Deutschland)
Storage: Seagate - IronWolf Pro 8TB 3.5" 7200RPM Internal Hard Drive (€288.89 @ Amazon Deutschland)
Storage: Seagate - IronWolf Pro 8TB 3.5" 7200RPM Internal Hard Drive (€288.89 @ Amazon Deutschland)
Storage: Seagate - IronWolf Pro 8TB 3.5" 7200RPM Internal Hard Drive (€288.89 @ Amazon Deutschland)
Case: Lian-Li - PC-A76WX ATX Full Tower Case (€249.90 @ Caseking)
Power Supply: SeaSonic - FOCUS Plus Gold 650W 80+ Gold Certified Fully-Modular ATX Power Supply (€85.99 @ Amazon Deutschland)
UPS: APC - BR1200GI UPS (€299.00 @ Amazon Deutschland)
Other: Chenbro OCR Kabel Mini-SAS to 4x SATA 0.6m (€23.40)
Other: LSI-SAS-9207 (€253.40)
Other: Intel-X540-T2 (€138.00)
Total: €5823.76
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2018-11-04 01:55 CET+0100

EDIT 2018-11-06

- Replaced the Case with a Fractal Define R5
- Replaced RAM with 4x32GB Crucial CT32G4RFD424A DDR4-2400 regECC DIMM CL17
- Added 2x Noctua NF-A14 PWM 140mm fans for hard drive cooling
 
Last edited:

Evertb1

Guru
Joined
May 31, 2016
Messages
700
I can't give you any advice about your setup for video editing. I develop corporate software and have a completely different use for my FreeNAS storage. But I think you should be careful with the configuration of your storage pool. As far as I know, if a vdev in the pool fails your entire pool is lost. And with a RAIDZ-1 vdev, once one disk fails, it only takes a second disk failure in the same vdev to lose that vdev.
 
Last edited:

Catterick

Dabbler
Joined
Nov 3, 2018
Messages
10
I can't give you any advice about your setup for video editing. I develop corporate software and have a completely different use for my FreeNAS storage. But I think you should be careful with the configuration of your storage pool. As far as I know, if a vdev in the pool fails your entire pool is lost. And with a RAIDZ-1 vdev, once one disk fails, it only takes a second disk failure in the same vdev to lose that vdev.

Yep, I believe that's a similar situation (though not with RaidZ) to what happened in this video. Scary stuff.

So after a bit more reading this morning, I think that a mirrored vdev setup is what I should be looking to go with. Something like a 4x2 way mirror for my 8 drive build, or a 6x2 way mirror if I go with the more expensive 12 drive build.
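For reference, the 8-drive mirror layout I'm describing would look roughly like this (again just a sketch with placeholder device names; I'd build it via the GUI):

Code:
# Sketch: 4 x 2-way mirrors striped into one pool (placeholder device names)
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7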
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
Yep, I believe that's a similar situation (though not with RaidZ) to what happened in this video. Scary stuff.

So after a bit more reading this morning, I think that a mirrored vdev setup is what I should be looking to go with. Something like a 4x2 way mirror for my 8 drive build, or a 6x2 way mirror if I go with the more expensive 12 drive build.
Normally I stay away from discussions about which setup is the best. I firmly believe that no 2 use cases are alike. However, I can tell you that, while I started with a 2 x 2 mirrored setup a couple of years ago, I switched to RAIDZ-2 with 6 drives. With this setup I can lose any 2 disks and still save the day. With 6 drives (or more) RAIDZ-2 gives a nice balance between usable storage and "safety". The real safety, of course, is in my backup strategy according to the 3-2-1 rule. In your use case I think that availability is very important and should be a firm deciding factor.
 
Last edited:

Evertb1

Guru
Joined
May 31, 2016
Messages
700
what happened in this video. Scary stuff.
Yes, the (by now) famous Whonnock server of Linus Tech Tips. I just love his channel but to be honest he'd best stay away from my stuff :smile:. I mean, that guy works so chaotically sometimes.
 

Catterick

Dabbler
Joined
Nov 3, 2018
Messages
10
Yes, the (by now) famous Whonnock server of Linus Tech Tips. I just love his channel but to be honest he'd best stay away from my stuff :). I mean, that guy works so chaotically sometimes.

You're telling me... As someone who's worked in post production for nearly a decade, some of his early workflow solutions were... pretty ghetto to say the least. I'm eager not to repeat mistakes I've personally experienced when setting up my own company's infrastructure.

Evertb1 said:
Normaly I stay away from discussions about what setup is the best. I firmly believe that no 2 use cases are alike.

Yep, exactly. While a lot of the isolated research I've read says that mirrored setups are the best and only way, there is always another line of thought suggesting the opposite. For my use case, multi-user read access in a video production environment, the closest "same case scenario" I've found is from 45 Drives (again found through good ol' Linus...). Using their ZFS calculator, in a setup of 10 drives they recommend 2 vdevs with 5 drives each (in either RAIDZ1 or RAIDZ2).
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Yep, I believe that's a similar situation (though not with RaidZ) to what happened in this video.
Let's not even go there. The less said, the better.

It sounds like you're not editing on the server, correct? Just sequentially moving big files around to and from, right? If so, RAIDZ2 is probably a good choice for you. I'd say not RAIDZ1 because of the lower reliability, but RAIDZ2 is fine there. RAIDZ3 if you want extra reliability.

Failure of a single vdev will result in loss of data (some exceptions apply), but the idea is that each vdev is reliable in its own right. And that works just fine.

That said, for eight disks, just go with a single 8-wide RAIDZ2 vdev instead of two 4-wide RAIDZ1 vdevs. It's going to be a better experience.
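In other words, something along these lines (device names are placeholders; the FreeNAS GUI will build the same thing for you):

Code:
# Single 8-wide RAIDZ2 vdev: any two drives can fail (placeholder device names)
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7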
 

Catterick

Dabbler
Joined
Nov 3, 2018
Messages
10
Let's not even go there. The less said, the better.

It sounds like you're not editing on the server, correct? Just sequentially moving big files around to and from, right?

Correct, all raw footage will live on the server and just be read by the host edit machines. Each edit machine will have an M.2 SSD for scratch/render files and cache. Final renders of the project will probably go direct to the server, depending on the write performance, although it may end up being quicker to export to the local SSD and copy it across afterwards. At the end of the day, read speed from the server is my main priority.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Would files typically be read multiple times or just once? If the former, L2ARC might help out a bit.
 

Catterick

Dabbler
Joined
Nov 3, 2018
Messages
10
Would files typically be read multiple times or just once? If the former, L2ARC might help out a bit.

Definitely multiple times, and quite randomly in some situations too. I've actually ordered the gear mentioned in my first post, but bumped the RAM up to 128GB (the max capacity supported on the X10SRM with RDIMMs).
My plan is to run real-world tests at home before deploying it live, and see how the ARC holds up with this RAM. I have budget left over in case I need to add an L2ARC.
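For the testing itself, the idea (as I understand it) is just to watch the ARC size and hit/miss counters from the FreeNAS shell while the editors hammer it, something like:

Code:
# FreeBSD sysctls for ARC size and hit/miss counters
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.hits
sysctl kstat.zfs.misc.arcstats.misses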

A lot of what I've read seems to opt for an Optane drive as an L2ARC, though is it still possible (and still beneficial) to use SATA SSDs or M.2 drives? I didn't dig too deep into this.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I think you should consider a 24-bay 4U chassis rather than the Node 804.

As an investment in critical infrastructure I think it makes sense. Then you could start with two 6-way RAIDZ2 vdevs, or one 8-way, etc.

For large sequential files, RAIDZ2 is the way to go.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
A lot of what I've read seems to opt for an Optane drive as an L2ARC, though is it still possible (and still beneficial) to use SATA SSDs or M.2 drives? I didn't dig too deep into this.

Well, Optane makes a lot of sense for a SLOG. But for L2ARC? Not so much. You need something that is faster than your HDDs, and it'd be good if it were faster than your network connection. A Samsung EVO NVMe drive would probably be a good choice: good burst read performance, and it would be able to sustain the trickle of L2ARC writes.
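And adding one later is painless; roughly (the device name is just an example, an NVMe drive shows up as an nvd device on FreeBSD):

Code:
# Attach an NVMe drive to an existing pool as L2ARC (example device name)
zpool add tank cache nvd0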
 

Catterick

Dabbler
Joined
Nov 3, 2018
Messages
10
I think you should consider a 24-bay 4U chassis rather than the Node 804.

As an investment in critical infrastructure I think it makes sense. Then you could start with two 6-way RAIDZ2 vdevs, or one 8-way, etc.

For large sequential files, RAIDZ2 is the way to go.

I actually ditched the Node 804 for this reason. The limitation I have is that we are in a shared office environment and it's not feasible to have a large rack mount style server there, otherwise I would have gone with a large bay chassis like you suggest. I needed a smaller, more regular footprint so opted for a Define R5. This thing is designed to see us through about a year before we hopefully expand and can justify our own larger office space, at which point the IT infrastructure will be completely different :)
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
You could probably go for a TB or so of L2ARC with 128 GB of RAM. Definitely set recordsize=1M on the datasets storing your videos to save space on metadata, both in RAM and on disk.

I also vote for the Samsung 970 Evo or whatever it's called for L2ARC.
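For reference, setting it is a one-liner (the dataset name is just an example):

Code:
# Use 1M records for datasets holding large video files (example dataset name)
zfs set recordsize=1M tank/video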
 

Catterick

Dabbler
Joined
Nov 3, 2018
Messages
10
You could probably go for a TB or so of L2ARC with 128 GB of RAM. Definitely set recordsize=1M on the datasets storing your videos to save space on metadata, both in RAM and on disk.

I also vote for the Samsung 970 Evo or whatever it's called for L2ARC.
Nice, thanks for the advice!
 

JohnK

Patron
Joined
Nov 7, 2013
Messages
256
I actually ditched the Node 804 for this reason. The limitation I have is that we are in a shared office environment and it's not feasible to have a large rack mount style server there, otherwise I would have gone with a large bay chassis like you suggest. I needed a smaller, more regular footprint so opted for a Define R5. This thing is designed to see us through about a year before we hopefully expand and can justify our own larger office space, at which point the IT infrastructure will be completely different :)
You might have to add more fans to keep your drives from frying. Also, have you considered 32GB RAM modules? Users here generally shy away from Kingston RAM; look at the supported RAM on Supermicro's website. Lastly, I found a few posts regarding your motherboard when searching the forum.
 

Catterick

Dabbler
Joined
Nov 3, 2018
Messages
10
You might have to add more fans to keep your drives from frying. Also, have you considered 32GB RAM modules? Users here generally shy away from Kingston RAM; look at the supported RAM on Supermicro's website. Lastly, I found a few posts regarding your motherboard when searching the forum.
Yep, I added a couple of extra 140mm fans as intake at the front for the drives. I've also changed the RAM to Crucial RAM listed on Supermicro's website.
 

Catterick

Dabbler
Joined
Nov 3, 2018
Messages
10
Just wanted to give an update on this build and thank this community for such an incredible amount of knowledge and feedback.

The final build ended up being:

Processor: Intel - Xeon E5-2620 V4
Motherboard: Supermicro - X10SRM-TF
RAM: 4x 32GB Crucial CT32G4RFD424A DDR4-2400 regECC
HDDs: 6x 10TB Seagate Exos X10 ST10000NM0016
PSU: SeaSonic - FOCUS Plus Gold 650W 80+ Gold
Case: Fractal Define R5
UPS: APC - BR1200GI
10GbE switch: Netgear XS708T

Ended up going for the 10TB Exos drives as my supplier didn't have the 8TB IronWolf Pros in stock and it would have taken too long to order them in. Will be adding 2 more 10TB Exos drives, either to expand the existing pool or to keep as spares.
I'm currently testing with the drives in a 3x2 mirrored setup and am pretty happy so far. It gives me 23TB of usable storage and allows me to expand the pool quite easily.
The 10GbE tests I've done so far have been pretty good: I'm able to saturate the link with 2 clients in synthetic tests, and real-world tests have shown no issues with both clients editing from the server with actual live projects.
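Expanding later should just be a matter of striping in another mirrored pair, something like this (placeholder device names; I'd do it via the GUI):

Code:
# Sketch: grow the pool by adding another 2-way mirror vdev (placeholder names)
zpool add tank mirror da6 da7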
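For anyone wanting to repeat the synthetic side of this, a parallel iperf3 run from each workstation against the server is the easy check (IP address and options below are just examples):

Code:
# On the FreeNAS box
iperf3 -s
# On each workstation, run at the same time to load the 10GbE link
iperf3 -c 192.168.1.10 -P 4 -t 30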

So basically, I'm super happy with the setup and looking forward to deploying it live next week!

I did run in to a couple of issues though:

- The X10SRM-TF does not seem to like PWM fans. I originally had 2x140mm fans at the front for intake, and a single 140mm at the rear for exhaust. These were Noctua NF-A14s. The motherboard would constantly ramp the fans up and down, from min speed to max speed, no matter what fan setting I used in the BIOS. Apparently this is due to the motherboard thinking low RPM fans are actually failing? I'm not sure, but I tried for ages to curb the fan speeds with various IPMI tools and nothing seemed to work. I gave up, switched to 3-pin Fractal Design fans, and now run them through the case's integrated fan controller. Will revisit this later down the line though!

- One editing client uses an Asus XG-C100 and for some reason it was a bit buggy when setting the receive and transmit buffers. It completely disabled itself after maxing the transmit buffer and I had to reinstall the card. It worked fine after that.

- For some reason, buttons in the FreeNAS GUI sometimes just don't respond. If I refresh the page the buttons work, but then the next thing I try, a button won't respond again. The same happens in Chrome and Firefox.
e.g. I want to install a plugin: I select the plugin, click install, and it works fine. Then I go to install another plugin and clicking the install button doesn't do anything. I refresh the page and the button works again. Very strange.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
- The X10SRM-TF does not seem to like PWM fans. I originally had 2x140mm fans at the front for intake, and a single 140mm at the rear for exhaust. These were Noctua NF-A14s. The motherboard would constantly ramp the fans up and down, from min speed to max speed, no matter what fan setting I used in the BIOS. Apparently this is due to the motherboard thinking low RPM fans are actually failing?
There's an easy solution for that:
https://forums.freenas.org/index.ph...nge-ipmi-sensor-thresholds-using-ipmitool.35/
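The short version is lowering the BMC's lower fan-RPM thresholds so slow-spinning Noctuas stop tripping the "failed fan" logic. Roughly (the sensor name and RPM values below are examples only; check your own first):

Code:
# List the fan sensors and their current thresholds
ipmitool sensor list | grep -i fan
# Lower the lower non-recoverable / critical / non-critical thresholds for one fan
# (sensor name and RPM values are examples; repeat per fan)
ipmitool sensor thresh FAN1 lower 100 200 300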
 