shrödinger
Cadet · Joined Nov 23, 2018 · Messages: 8
Hello!
I'm planning my first FreeNAS build, for professional use, as a replacement for my current internal hardware RAID (8 x 3 TB in RAID10).
Working in the video post-production industry, my needs lean towards quite large files (30-40 GB), many high-speed sequential reads, and few high-speed sequential writes.
The NAS will only be used to store the video files used in the edits I'm working on, and the video files I'm rendering when I'm done. I may also store on it the cache files used by the software. No virtualization, no Plex, no plugins. If I find plugins or scripts that may be useful, I'll just run them on a server accessing the NAS.
The NAS will serve the following clients (around 8 to 12 hours a day, using Samba):
Main workstation (custom build running Windows 10 - 90% of the time)
Render workstation (Mac Pro 3,1 - 10% of the time)
Laptop (MacBook Pro 15" 2015 - 10% of the time)
Projects server (E200-8D running CentOS - 10% of the time)
Here's my parts list:
Case : SuperMicro SC846E16
MoBo : SuperMicro X10SRL-F
CPU : Intel Xeon E5-1650 v4
RAM : 4 x 16 GB Crucial DDR4 2400 ECC Registered
HBA : LSI 9201-16i (instead of two 8-port HBAs, to save PCIe lanes)
NIC : Chelsio T520-CR (probably aggregated)
Switch : QNAP QSW-804-4C (clients will have 10GBASE-T NICs)
Boot : Kingston A400 SSD (already owned, in an additional fixed internal 2.5" bay)
Main drives (for source files):
8 x Seagate SV35 3TB 7200RPM (already owned)
16 x WD Red 3TB 5400RPM (8 already owned, 8 to buy)
Render drives : 4 x Samsung 970EVO 500GB M.2 in a PCIe adapter
Cache drives : 4 x Samsung 970EVO 500GB M.2 in a PCIe adapter
On the setup side, here's my plan:
Main Zpool: 4 x RAIDZ2 vdevs of 6 x 3TB drives each = ~36TB usable, given the 80% max-fill rule
Render Zpool: 4 x 500GB striped vdevs = ~2TB
Cache Zpool: 4 x 500GB striped vdevs = ~2TB
The render and cache Zpools won't need redundancy, as data won't stay on them for long (once a video file is rendered, I'll move it to the main volume for archival, and the cache will be erased each day). The goal of the multiple Zpools is to never read and write simultaneously on the same volume.
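The capacity figures above can be sanity-checked with a quick sketch (my own back-of-envelope math; it ignores ZFS metadata and slop-space overhead, which, along with TB-to-TiB conversion, is what pulls the nominal figure down toward ~36TB):

```python
# Rough capacity estimate for the proposed pools. Not ZFS-exact:
# ignores metadata/slop overhead and TB-vs-TiB conversion.

def usable_tb(vdevs, drives_per_vdev, parity, drive_tb, fill=0.8):
    """Nominal usable capacity in TB at a given max fill ratio."""
    return vdevs * (drives_per_vdev - parity) * drive_tb * fill

main = usable_tb(4, 6, 2, 3)                # 4 x 6-wide RAIDZ2, 3TB drives
render = usable_tb(1, 4, 0, 0.5, fill=1.0)  # 4 x 500GB striped

print(round(main, 1))    # prints 38.4 (nominal; ~36TB after real overhead)
print(round(render, 1))  # prints 2.0
```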
I have some questions regarding this list:
- I've read that mixing 7200RPM and 5400RPM drives won't be an issue (all will run at the slowest speed). Is that true?
- Other forum threads about the SC846E16 backplane (SC846EL1) say that only two 4-lane SAS cables are needed to connect it to the HBA. However, the backplane datasheet (p. 9) shows three SAS connectors apparently going to the HBA. Would an LSI 9207-8i be enough?
- Would you have any specific advice regarding M.2 drives in PCIe adapters for this kind of build? I couldn't find much.
- Given this parts list, do you think I'll be able to saturate 10GbE with sequential reads/writes of large files from a single client?
- Do you think adding SLOG or L2ARC would be useful? Writes are not critical as I'll always be able to restart them (I'll mostly write from USB drives, clients internal drives, or rendering from one Zpool to another), and I'd like to configure the main Zpool to be fast enough that it won't be the bottleneck.
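On the 10GbE question, here's the rough arithmetic I'd start from (the ~150 MB/s per-drive sequential figure is an assumption, not a measurement, and SMB/protocol overhead is ignored):

```python
# Can the main pool's aggregate sequential read rate fill a 10GbE link?
# Assumes ~150 MB/s sustained sequential per 3TB drive (a guess, not a
# measured figure).

link_mb_s = 10 * 1000 / 8      # 10GbE ceiling: ~1250 MB/s
data_drives = 4 * (6 - 2)      # 4 x 6-wide RAIDZ2 = 16 data drives
pool_mb_s = data_drives * 150  # ~2400 MB/s aggregate sequential

print(pool_mb_s > link_mb_s)   # prints True
```

If that holds, the spindles alone shouldn't be the bottleneck for large sequential transfers, which is why I suspect SLOG/L2ARC may not buy much here.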
Many thanks for reading, feel free to throw any thought you might have!