Hi everyone.
Typically I do not post; I search for the right answers and make my own decisions. I have been successfully running a FreeNAS system for the company I work at for the last 2 years, with a second remote backup server. In the last year or so we have moved heavily into processing photogrammetry surveys, and I am quickly filling up our current server with this data, so it is time to add an additional server for point cloud and photogrammetry purposes.
The reason I would like some help is that the data I process is accessed simultaneously by a cluster of 13 compute nodes. Typically we run 5-10 projects at a time over a two-week span, then store the useful data and remove the working files.
A project starts out as images from our survey, typically 8,000-15,000 images at 10 MB each (80-150 GB). Each node reads the images and creates large processing chunks, typically 200 chunks of 0.5-2 GB each; a finished project is about 1 TB. In later stages these chunks are read back for further processing. Each node is fed by 10 Gb fiber, and the server has a LAGG of 2x 10 Gb links. This appears sufficient for our current purposes, since the nodes spend longer processing than transferring and bandwidth does not seem to be the bottleneck, but it may be upgraded in the future. We currently peak at 14 Gb/s through the LAGG while processing.
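To sanity-check those figures, here is a quick back-of-envelope sizing in Python. All numbers come straight from the description above; the 1 TB per project and 10 concurrent projects are the stated upper bounds, not measurements.

```python
# Back-of-envelope sizing from the figures above (approximate).
min_images, max_images = 8_000, 15_000
image_mb = 10

min_payload_gb = min_images * image_mb / 1024   # lower bound of raw images
max_payload_gb = max_images * image_mb / 1024   # upper bound of raw images

project_tb = 1          # finished project size, as stated above
active_projects = 10    # upper bound of 5-10 concurrent projects
working_set_tb = active_projects * project_tb

print(f"image payload per project: {min_payload_gb:.0f}-{max_payload_gb:.0f} GB")
print(f"active working set: ~{working_set_tb} TB")
```

So the hot working set is on the order of 10 TB at any given time, which matters later for deciding whether a cache layer can hold a useful fraction of it.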
Due to the large amount of random reads (images) and simultaneous reads and writes (working chunks), I would like to pick a pool layout that does not throttle much. Would RAIDZ3 be up to the task? I am looking at a 22-disk chassis, possibly filled with a pool of two vdevs of 11 x 16 TB disks each. Any suggestions on an HBA controller?
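For comparison, a rough usable-capacity sketch of some 22-disk layouts (decimal vendor TB, ignoring ZFS metadata, padding, and slop space, so real usable space will be somewhat lower). The mirror line is included because each vdev contributes roughly one disk's worth of random-read IOPS, so more vdevs generally behave better for a many-client random-read workload, at a steep capacity cost:

```python
# Rough usable-capacity comparison; ignores ZFS overhead, slop space,
# and raidz padding. "TB" here is the disk vendor's decimal TB.
disk_tb = 16

def usable_tb(width, parity, n_vdevs):
    """Usable capacity: (data disks per vdev) x vdevs x disk size."""
    return (width - parity) * n_vdevs * disk_tb

layouts = {
    "2 x 11-wide RAIDZ3": usable_tb(11, 3, 2),
    "2 x 11-wide RAIDZ2": usable_tb(11, 2, 2),
    "11 x 2-way mirrors ": usable_tb(2, 1, 11),
}
for name, tb in layouts.items():
    print(f"{name}: ~{tb} TB usable")
```

This is only an arithmetic sketch, not a performance claim; actual throughput depends on recordsize, fragmentation, and how sequential the chunk I/O really is.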
To help with frequent access to the data I am thinking of adding an L2ARC. Is this a good or bad idea for such a workload? If it is worthwhile, what would be recommended?
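One thing worth checking before buying an L2ARC device: every record cached in L2ARC costs a small header in RAM (roughly 70 bytes per record in recent OpenZFS, though the exact figure varies by version). At a 128K recordsize the overhead is tiny, but it grows quickly if the recordsize is small. A sketch of the estimate, under those assumptions:

```python
# Rough estimate of ARC RAM consumed by L2ARC headers.
# Assumes ~70 bytes of header per cached record (OpenZFS ballpark;
# the exact figure varies by version) and a uniform recordsize.
def l2arc_header_ram_gb(l2arc_gb, recordsize_kb, header_bytes=70):
    records = (l2arc_gb * 1024**3) / (recordsize_kb * 1024)
    return records * header_bytes / 1024**3

for size_gb in (500, 1000, 2000):
    ram = l2arc_header_ram_gb(size_gb, recordsize_kb=128)
    print(f"{size_gb} GB L2ARC @ 128K records -> ~{ram:.2f} GB RAM for headers")
```

So the header overhead itself is modest at large recordsizes; the bigger question is whether the ARC (and therefore the L2ARC feed rate) is large enough to be useful with 64 GB of RAM against a ~10 TB working set.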
Our current server runs on an E3-1245 with 64 GB of RAM; would a similarly specced CPU be sufficient?
Any help would be appreciated, and if any more info is required I would be happy to provide it. Thank you.