Tweaking RAM

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Applying home PC or laptop constraints to a server OS leads to confusion.
Yes, but even on a home PC, OP's concept is wrong. You have the RAM, use it for something--and a sensible (i.e., non-Windows) OS will do just that. Possibly even recent versions of Windows handle this better; I haven't done a lot with them. But your analogy makes just as much sense on my laptop as it does on my servers. (And yes, I know you know this--I just don't want to leave that "out" there.)
 

ilmarmors

Dabbler
Joined
Dec 27, 2014
Messages
25
My 2 TrueNAS systems are running fine. I have been running rsync tasks for the last 3 weeks. I feel that the rsync processes are somewhat slow.

Recently I updated the server to TrueNAS 12.0-U1 to take advantage of zstd compression, which in my particular case (uncompressed TIFF files) gave a significant compression ratio improvement compared to lz4. But I needed to copy the files to force recompression. zfs send | zfs recv didn't give the full benefit, and single-threaded cp and rsync were very slow. IMPORTANT: my server WAS NOT limited by CPU or disk I/O, and the network wasn't involved at all; it was limited by the single thread doing the copying synchronously. RAM didn't seem to play a big role; it probably helps if the filesystem metadata fits in RAM, but that matters less with big files.
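For context, switching to zstd is just a dataset property change, and that alone doesn't touch existing blocks; roughly (dataset names here are examples):

zfs set compression=zstd tank/srcdataset              # only newly written blocks get zstd
zfs get compression,compressratio tank/srcdataset     # compressratio only improves once old data is rewritten

That's why the copy step below is needed at all.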

So, I ended up using msrsync (multi-stream rsync) - a Python wrapper around rsync (original author: https://github.com/jbd/msrsync ). For running on the TrueNAS server itself, which has python3, I actually used the forked and fixed msrsync3 from https://github.com/carlilek/msrsync/tree/python3 . It reports 3 self-test failures, but they are not real failures - just unit tests comparing identical binary and unicode strings.
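In case somebody wants to run it the same way, roughly how to get it onto the server (paths are just examples, and the --selftest flag is from the upstream msrsync, so double-check it against the fork):

git clone -b python3 https://github.com/carlilek/msrsync.git /mnt/scratch/msrsync
cd /mnt/scratch/msrsync
chmod +x msrsync3          # script name matching the invocation below
./msrsync3 --selftest      # expect the 3 harmless string-comparison failures mentioned above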

Here is what I did in my particular case. It's not directly applicable to your case, but maybe it can be useful to somebody directly or indirectly. I haven't tested how msrsync handles the situation where some files are deleted on the src dataset and should be deleted on the dst dataset too.

I ran 12 rsync processes in parallel (the CPU has 12 cores / 24 threads), partitioning the data into buckets of 1000 files (the default) or 32 GB (default is 1 GB) per rsync invocation, provided a custom bucket partition location on scratch space, added stats, and optimized the rsync invocation with -W (copy whole files, since there were no files at the destination) and --inplace (write directly to the destination file without a temp file):

./msrsync3 -P -p 12 -s 32G -b /mnt/scratch/tmp --stats --rsync "-a -W --inplace" /mnt/tank/srcdataset/ /mnt/tank/dstdataset/

Afterwards I checked with rsync that there were no changes between the src and dst datasets, deleted the src dataset, and renamed dstdataset to srcdataset. The zstd compression benefit is now fully unlocked on the old data - the compression ratio increased from 1.24 to 1.83.
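For anyone following along, the verify-and-swap steps look roughly like this (a sketch only - make sure the dry run is clean before any zfs destroy):

rsync -avn --delete /mnt/tank/srcdataset/ /mnt/tank/dstdataset/   # dry run; should list nothing to transfer or delete
zfs destroy -r tank/srcdataset
zfs rename tank/dstdataset tank/srcdataset
zfs get compressratio tank/srcdataset                             # this is where the 1.24 -> 1.83 improvement shows up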
 