Issues manipulating huge files - performance alternatives?

bonox

Dabbler
Joined
May 2, 2021
Messages
17
Hello All

Been using FN/TN for a long time, but am finally reaching the end of the road with one particular problem: editing large files (terabyte scale) is practically impossible, and I'm looking for a better way if anyone knows one.

I'm using datasets on multiple pools of 12-wide stripes of mirrors, ranging from 15K SAS disks down to 7.2K disks (same disk type within each pool, obviously), with dual 10Gbps NICs, 160GB RAM, dual X5650s and mostly a 1MB record size. Generally it's no slouch for single-user performance.
The scenario is something like a large encrypted container that you want to write a large file update to (say, updating a 4GB file inside a 500GB container). These will often potter along at only 1 or 2 megabytes per second, after screaming through the cache of course. In the past I've generally had to do the edits on another file system like ReFS, then copy the master back to the NAS.
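
In case it helps anyone reproduce what I'm seeing, a random-overwrite fio run against the dataset is roughly the access pattern the container update generates. This is only a sketch of how I'd test it; the directory and sizes are placeholders for my setup:

fio --name=container-update --directory=/mnt/tank/scratch \
    --filesize=100G --rw=randwrite --bs=128k --ioengine=posixaio \
    --runtime=60 --time_based --end_fsync=1 --group_reporting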

I'm wondering if this is just a nasty side effect of copy-on-write behaviour, or whether alternatives like a zvol/block-level share instead of datasets might improve things a bit? Or am I stuck with a hybrid workflow instead of being able to rely on the NAS as a direct working tool (as opposed to a backup target after doing the work somewhere else)?
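
For context, the sort of thing I'm imagining trying (untested on my side, and the pool/dataset names and block sizes below are just placeholders) is either a dataset with a smaller record size for the working copies, or a sparse zvol with a small volblocksize shared out as a block device:

# what the current working dataset is set to
zfs get recordsize,compression,sync tank/work

# a dataset tuned for smaller in-place overwrites
zfs create -o recordsize=128K tank/work-small

# a sparse 1T zvol with a small volblocksize, to export over iSCSI
zfs create -s -V 1T -o volblocksize=16K tank/workvol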

I get the same issue on huge unencrypted files as well, for what it's worth: editing intermediate solution files for stress analysis (FEA) runs that reach into the hundreds of gigabytes.

Thanks in advance for any tips.
 