winnielinnie
MVP
I almost want to, because it's as if the developers are still in the mentality of a pre-ZFS world with slow CPUs. I'm not going to cry to coreutils about it, though.
If I'm understanding correctly: inline compression, fast CPUs, and CoW filesystems (such as ZFS and Btrfs) make "preserving sparseness" pointless in these use-cases.
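To make that concrete, here's a minimal sketch (Python, assuming it's run on a dataset with inline compression enabled, e.g. ZFS with compression=lz4; the path below is hypothetical): the file is written as literal zeros with no seek()-created holes, yet the allocated space stays tiny, because the compressor collapses the zero runs anyway.

```python
import os

# Hypothetical path on a compressed dataset (e.g. ZFS with compression=lz4).
path = "/mnt/tank/scratch/dense_zeros.bin"

# Write 64 MiB of literal zeros -- no holes, every byte is actually written.
with open(path, "wb") as f:
    f.write(b"\x00" * (64 * 1024 * 1024))
    f.flush()
    os.fsync(f.fileno())

st = os.stat(path)
print("apparent size :", st.st_size)           # 67108864 bytes
print("allocated     :", st.st_blocks * 512)   # a few KiB on an lz4/zstd dataset
```

(The allocated figure may take a moment to settle until the filesystem commits the write, but the point stands: the zeros never hit the disk as data, sparse or not.)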
But like the situation with mainline Linux kernel development, I wouldn't be surprised if mentioning ZFS made them scoff at you in disgust.
Not very different from the point of the final result, but reading/writing holes would be a waste of time.

A round-trip over the network is an even bigger waste of time. I'd rather just let my CPU with LZ4 inline compression do its magic, and leverage the speed of a server-side copy that bypasses the network round-trip instead. (Especially when there aren't really "sparse" files in the first place.)
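For what it's worth, an in-kernel (and, where supported, server-side) copy can be requested from a script too. A rough sketch (Python 3.8+ on Linux; the function name is my own, and whether the bytes actually skip the network round-trip depends on the filesystem/protocol, e.g. NFS 4.2 copy offload):

```python
import os

def offloaded_copy(src_path, dst_path):
    """Ask the kernel to copy the data without routing it through userspace.

    copy_file_range() lets the kernel (or, over NFS 4.2, the server itself)
    move the bytes, so the caller never reads or writes the payload.
    On setups that don't support it, it raises OSError and you'd fall back
    to a plain read/write loop instead.
    """
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        remaining = os.fstat(src.fileno()).st_size
        while remaining > 0:
            copied = os.copy_file_range(src.fileno(), dst.fileno(), remaining)
            if copied == 0:  # unexpected end of source
                break
            remaining -= copied
```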
Besides, up until these last few posts, this wasn't even about preserving the sparseness of a very small file that is "apparently" large. It dealt with MKV and MP4 videos (containing the same video stream) in different containers. Neither is a "highly sparse" file, and it's likely the way an MKV header is saved that seems to fool cp's "crude heuristic". (This can be true for other file types, but I don't want to keep repeating it over and over for each one.)
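For context, my understanding of that "crude heuristic" is roughly the comparison below (a sketch, not coreutils' actual code): if the blocks a file has allocated don't cover its apparent size, cp --sparse=auto assumes there are holes worth recreating. On a compressed dataset, a compressible header (or any compressible file) can trip that test without containing a single hole.

```python
import os

def looks_sparse(path):
    """Rough analogue of the sparseness guess cp --sparse=auto is said to use.

    st_blocks is counted in 512-byte units regardless of the filesystem's
    block size.  If the allocated blocks cover less than the apparent size,
    assume the file has holes.  The catch: on ZFS with LZ4, a file whose
    header compresses well also allocates fewer blocks than its apparent
    size, so this reports "sparse" for files with no holes at all.
    """
    st = os.stat(path)
    return st.st_blocks * 512 < st.st_size
```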