Repurpose 240TB of hard drives into video editing NAS.

Status
Not open for further replies.

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
But in the future, when I build a faster production server, I guess I need to be looking at single core performance for single client maximum performance. Is this true?
The speed of the processor does affect the rate at which the system can push data out to the network. Recent reports indicate that an 8-core AMD EPYC processor on a Supermicro H11SSL-i system board with an Intel X540-T2 NIC can provide "sustained performance around 8.5 Gbit/sec (peaking at 9, averaging at 8.4 or so) with only 8 disks", but I don't know what kind of disks were being used. The user was configuring the server strictly for storage, with no virtualization of any kind.
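When chasing numbers like that, it helps to separate raw network throughput from disk and protocol overhead. One common way is an iperf3 run between the client and the server, which takes the disks out of the equation entirely. A rough sketch, assuming iperf3 is installed on both ends and the server's address is 192.168.1.100 (the IP is a placeholder):

```shell
# On the FreeNAS box: start an iperf3 server.
iperf3 -s

# On the client: test raw TCP throughput over the 10G link for 30 seconds,
# using 4 parallel streams. If this maxes out near line rate but SMB does not,
# the bottleneck is CPU/protocol, not the network.
iperf3 -c 192.168.1.100 -t 30 -P 4
```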
8x8TB Seagate ST8000DM004 (Yes I know they are SMR, but for WORM use case it is fine.)
Be aware that if you need to replace a drive, the resilver will run very slowly; that is a characteristic of SMR disks.
This is the kind of disk I am planning to purchase for my next upgrade; you might want to consider it:
https://www.ebay.com/itm/HGST-Ultra...TA-6Gb-s-3-5-64MB-HDD-hard-drive/223016135412
If I start loading data onto this pool and then add more drives, how do I make sure the data gets spread over the vdevs?
The new vdev will be slightly faster and emptier, so ZFS will direct more of the new writes to it. There is a kind of automatic load balancing between the vdevs based on their free space and capability, but the pool will never become fully balanced on its own, because existing data is not moved around.
Once you write data to the first vdev, it stays on that vdev even after you add a second one. The only way to spread the original data across both vdevs is to rewrite it: copy it into a different directory on the pool, causing new writes (which are distributed across all vdevs), then delete it from the old location.
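For anyone who wants to rebalance after adding a vdev, the copy-and-delete approach can be sketched like this. This assumes a dataset mounted at /mnt/tank/media (the path is hypothetical), enough free space to hold a second copy, and no snapshots pinning the old blocks:

```shell
# Copy the data to a temporary directory on the same pool.
# The fresh writes are distributed across all current vdevs.
rsync -a /mnt/tank/media/ /mnt/tank/media-rebalance/

# Verify the copy before deleting anything.
diff -r /mnt/tank/media /mnt/tank/media-rebalance

# Remove the originals and move the rewritten copy into place.
rm -rf /mnt/tank/media
mv /mnt/tank/media-rebalance /mnt/tank/media
```

Note that snapshots will keep referencing the old blocks until they are destroyed, so the space from the deleted copy may not be freed immediately.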
Anything else I should set up?
Did you set up the monitoring scripts?

Building, Burn-In, and Testing your FreeNAS system
https://forums.freenas.org/index.php?resources/building-burn-in-and-testing-your-freenas-system.38/

Github repository for FreeNAS scripts, including disk burnin
https://forums.freenas.org/index.ph...for-freenas-scripts-including-disk-burnin.28/
but how do I set up notifications if a drive starts failing?
As long as you have the system configured with email access, it will send you email alerts of system faults.
Here is a link to a general guide for configuration, which is still useful despite being a little out of date:
https://www.familybrown.org/dokuwiki/doku.php?id=fester:intro
This is the section that deals with email setup:
https://www.familybrown.org/dokuwiki/doku.php?id=fester:email
 

riggieri

Dabbler
Joined
Aug 24, 2018
Messages
42
Is there any way to see a more detailed breakdown of CPU usage by core? It looks like I am maxing out one core when doing the SMB/CIFS transfer, but I can't be 100% sure just from looking at the reporting page.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
It looks like I am maxing out one core when doing the SMB/CIFS transfer, but I can't be 100% sure just from looking at the reporting page.
SMB is single-threaded, so it is entirely possible (likely) that you are seeing exactly what you think you are seeing. If I recall correctly, it is one thread per transfer, so if you had another client, Samba should spawn another process on another core.
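If you want to confirm this from an SSH session rather than the reporting page, FreeBSD's top can break usage out per CPU, which should show one core pegged by a single smbd process during a transfer. A quick check (flags are from FreeBSD's top, not anything FreeNAS-specific):

```shell
# Show one usage line per CPU instead of a single aggregate,
# sorted by CPU usage so smbd floats to the top.
top -P -o cpu

# Or list the smbd processes and their CPU share directly;
# one process per connected client is expected.
ps -axo pid,comm,%cpu | grep smbd
```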
 
Joined
Feb 2, 2016
Messages
574
two different 10G clients on SMB/CIFS. I have a feeling I am hitting the single-threaded limit of the L5640 here.

Have you tried NFS?

In my limited experience, when you're CPU-bound, sometimes NFS can pump more bits down the pipe than SMB.

Cheers,
Matt
 