High Speed Read Server

AntoineR

Cadet
Joined
Nov 12, 2020
Messages
4
Hi,

I'm planning to build a high-speed FreeNAS server for video editing and calibration at the school where I work.

My main problem is the 2K DPX files: 12 MB per frame at 25p, so 300 MB/s of sustained read is needed.
What if two or more people need to work at the same time (not on the same files, of course)?
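
Here's my back-of-envelope math, just to sanity-check the numbers above (a quick Python sketch; the per-frame size and frame rate are the figures from our files):

Code:
# Sustained-read requirement for 2K DPX playback at 25p.
FRAME_SIZE_MB = 12   # one 2K DPX frame
FPS = 25             # 25p playback

per_user = FRAME_SIZE_MB * FPS  # MB/s of sustained read per editor
for users in (1, 2, 3):
    total = per_user * users
    print(f"{users} editor(s): {total} MB/s sustained (~{total * 8 / 1000:.1f} Gbit/s)")

So one editor needs ~2.4 Gbit/s on the wire, and even three concurrent editors would fit inside a single 10GbE link, at least in theory.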

My plan is to use a Supermicro 6049P-E1CR24H, which comes with a Super X11DPH-T board and 2x Intel Xeon Scalable Silver 4214 (12 cores / 24 threads each).
24x 10 TB enterprise SATA HDDs on a SAS 9305-16i, plus the SATA ports from the motherboard.
RAM: whatever is needed; a lot, I guess.
Should I use an L2ARC or just add more RAM? I couldn't find an answer.
I'll use an NVMe SSD for the ZIL/SLOG.

The Supermicro comes with 2x 10GbE.
I've read that 10GbE uses a lot of CPU, but how can I calculate the processing power needed?

My last question is simple: where will my bottleneck be?

Put another way: can I achieve that kind of read speed with this build, or do I need to consider buying a TrueNAS X10/X20 or an iXsystems 4U rack?
Or is it possible but too much for my budget, and I'm just a dreamer? :rolleyes:

I know the RAIDZ/vdev/pool configuration will be important too, but that will be another thing to solve later.
 

jayecin

Explorer
Joined
Oct 12, 2020
Messages
79
Will students be working on the server itself, or will they need to transfer a sustained 300 MB/s between their local PCs and the server?
 

jenksdrummer

Patron
Joined
Jun 7, 2011
Messages
250
Couple things...

A) Those disks set up as mirrored vdevs would give the most throughput; you should see 300 MB/sec easily enough.
B) I'm running a Supermicro box with 4x 10G connections and a single Silver 4210. Under certain circumstances I can saturate all four NICs; mostly it depends on the workload, small files vs. large, with large files able to saturate the links better.
C) Considering read is your primary concern, I would recommend NVMe for your L2ARC. Depending on the files, I'd get enough capacity to cover most of the working set, or at least try to hit the 90% mark.
D) A ZIL/SLOG will do nothing to increase your read or write speed; it improves write safety/sanity, but it won't make anything faster.
E) You can either bond the NICs and hope your users spread across them, or you can turn on SMB multichannel, though it's still considered "beta" after what, ten years? It will let Windows boxes use more than one NIC for throughput, but that depends on whether your clients have more than one NIC. If not, just bond them (LACP) and call it a day.
F) RAM: as much as you can get, if reading data is your thing. The more RAM, the less the system has to read from L2ARC, or from disk if the data is in neither. Also note that L2ARC consumes RAM for its lookup tables (see the sketch below). At least 128 GB; 256 GB would be better. Probably not over 512 GB unless you have the budget for it.
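
To give a rough idea of that L2ARC RAM overhead, here's a quick sketch; the 2 TB device size is just an example, and the ~88 bytes per record header cost is an assumption that varies by OpenZFS version:

Code:
# Rough L2ARC RAM-overhead estimate. HEADER_BYTES is an approximation;
# the shape of the math is the point, not the exact constant.
L2ARC_GB = 2000        # hypothetical 2 TB NVMe L2ARC
RECORDSIZE_KB = 128    # ZFS default recordsize; large media files sit near this
HEADER_BYTES = 88      # approximate RAM cost per record cached in L2ARC

records = L2ARC_GB * 1024 * 1024 / RECORDSIZE_KB
ram_gb = records * HEADER_BYTES / 1024**3
print(f"~{records / 1e6:.0f}M records -> ~{ram_gb:.1f} GB of RAM for L2ARC headers")

With a large recordsize the overhead is modest; with small records it balloons, which is one more reason to size RAM first.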
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Should I use an L2ARC or just add more RAM? I couldn't find an answer.
Always more RAM before L2ARC.
My last question is simple: where will my bottleneck be?
That is not a simple question. Odds are it will be your vdev configuration.

You did not state how much storage space you need. That too is critical at design time. Realize that increasing storage capacity with ZFS is not as simple as adding another hard drive; well, not if done properly.
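
As a very rough illustration of why the vdev layout matters, here is a sketch of sequential-read ceilings for 24 disks; the 200 MB/s per-disk figure is an assumption for 10 TB enterprise SATA drives, and real pools will land well below these numbers:

Code:
# Crude sequential-read ceilings for a 24-disk pool in different layouts.
# DISK_MBPS is an assumed per-disk streaming rate; compare the layouts,
# not the absolute numbers.
DISK_MBPS = 200

layouts = {
    "12x 2-way mirrors": 24 * DISK_MBPS,      # reads can be served from every disk
    "4x 6-disk RAIDZ2":  4 * 4 * DISK_MBPS,   # ~4 data disks per vdev
    "2x 12-disk RAIDZ2": 2 * 10 * DISK_MBPS,  # ~10 data disks per vdev
}
for name, mbps in layouts.items():
    print(f"{name}: ~{mbps} MB/s sequential-read ceiling")

The ranking is what matters: mirrors extract the most read throughput from the same 24 disks.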
 

AntoineR

Cadet
Joined
Nov 12, 2020
Messages
4
Thanks for all the answers; they help a lot to see things more clearly.

The server will be storage only; clients (mostly macOS) will use a 10GbE network with appropriate NICs.
The plan is to LAG 2x 10GbE now, and 4x 10GbE in the future.

Everyone agrees about the RAM, and so do I.

As for the storage, we will use our old Synology for backups, so RAIDZ2 or RAIDZ1 should be OK.
I don't like mirroring as it consumes too much disk space.

As for the vdev configuration, I know I need to read up on it.
If more storage is needed in the future, an expansion will be added: slower, but for less demanding use (archive/admin).
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I don't like mirroring as it consumes too much disk space.
We all hear you on this; no one likes to waste space. However, you may not get the performance you desire with a simple RAIDZ configuration; mirrors are likely the best bet. Before you commit any significant data to your project, I assume you will test it out and ensure you have the speeds you desire. Testing is always the best thing to do, as it makes better use of your overall time.

So how much data are you looking to store on the server? A basic rule of thumb is not to exceed 50% of storage capacity, and odds are the data will grow; it always does. Remember one thing, and this is very true: buy good hardware that will last, so you do not need to spend more money on those portions of the server. The one thing you can expect to replace is the hard drives; they will fail. If you boot from a USB thumb/flash drive you can expect it to fail eventually too, which is why we recommend booting from an SSD. It doesn't need to be a fancy SSD, but at least a mainstream one.
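
To put numbers on the mirror-vs-RAIDZ space trade-off and the 50% rule of thumb, a quick sketch (raw TB, ignoring ZFS overhead and the TB/TiB difference):

Code:
# Usable space for 24x 10 TB drives in different layouts, and what the
# "stay below 50% full" rule of thumb leaves you to actually use.
DISKS, SIZE_TB = 24, 10

layouts = {
    "12x 2-way mirrors": (DISKS // 2) * SIZE_TB,
    "4x 6-disk RAIDZ2":  4 * (6 - 2) * SIZE_TB,   # 2 parity disks per vdev
    "2x 12-disk RAIDZ2": 2 * (12 - 2) * SIZE_TB,  # 2 parity disks per vdev
}
for name, usable in layouts.items():
    print(f"{name}: {usable} TB usable, ~{usable // 2} TB at 50% fill")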

Good luck
 

AntoineR

Cadet
Joined
Nov 12, 2020
Messages
4
Thanks.
I will take all that storage capacity advice into account.
And I'll let you know the final system configuration and the results.

I still need to figure out whether it's better to use 8x 32 GB or 4x 64 GB memory modules.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I still need to figure out whether it's better to use 8x 32 GB or 4x 64 GB memory modules.
Normally this would be an easy answer, but money is a factor, I'm sure. First of all, make sure you read the user manual for the motherboard. Second, I would reference the QVL for the RAM and buy based on what was tested, or take a risk if you buy something that hasn't been certified to work. Third, you should consider future expansion. Last, I'd consider overall cost, including having to add more RAM in the near future should performance be an issue. I do not see you ever needing 5 TB of RAM, although it might be nice to have for bragging rights.
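
One more factor to check is memory channel population. CPUs of the Silver 4214's generation have six memory channels per socket, so more, smaller DIMMs can mean more bandwidth at the same capacity. A quick sketch, assuming modules are split evenly between the two sockets:

Code:
# Channel-population check for a dual-socket board with 6 memory channels
# per socket (Xeon Scalable, Silver 4214 generation). One DIMM per channel
# until channels run out.
CHANNELS, SOCKETS = 6, 2

for modules, size_gb in ((8, 32), (4, 64), (12, 16)):
    per_socket = modules // SOCKETS
    used = min(per_socket, CHANNELS)
    print(f"{modules}x {size_gb} GB = {modules * size_gb} GB total: "
          f"{used}/{CHANNELS} channels populated per socket")

On paper, 8x 32 GB populates four channels per socket versus two for 4x 64 GB, so it should give noticeably more memory bandwidth at the same capacity.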
 

no_connection

Patron
Joined
Dec 15, 2013
Messages
480
The bigger question is: what will it be used for? Playing back video files is not the same as editing with scrubbing, etc.
Depending on what you do, latency is going to be very important as well: how long is it allowed to take to deliver the frame that was requested, etc.

For example, would it be more cost-effective to have an NVMe pool for editing and a bulk pool for storage and rendering?
What happens if someone renders while two people are editing their projects?
Would a beefy server that converts footage into proxies be a better route?
Even client-side CPU usage would have to be taken into account at some point.
Or is it going to be good enough and not worth worrying too much about?

You could look at LTT, what they do, and what challenges come with high-bitrate RED footage and many editors, then figure out what applies to you.
 

AntoineR

Cadet
Joined
Nov 12, 2020
Messages
4
You're right, I didn't say much about the use.

Editing is done on high-end iMacs using proxy files.
Color correction is done on beefy Hackintoshes using ProRes HQ.
But we're starting to need color correction on DPX files, and we can't use proxies for that.

For now everything is stored on a Synology in RAID 10 over a 4x 1GbE LAN, but we don't have the read speed we need.

If the speed can't be achieved, we'll use internal RAID storage in the clients, but we'd prefer to share resources from central storage; it's easier to manage, back up, etc.
 