> The only supported file system for TrueNAS is ZFS

Since support for NTFS (and possibly extFS) is built into the kernel that TrueNAS Scale uses, I think it is safe to assume that it will remain there for a very long time.
> As far as data verification with hashes of the files using third-party utilities, I have been there; it is too much work to do it right. It was the triggering point for me to go to Synology (for the BTRFS) and afterwards to come here to TrueNAS and ZFS.

Hashing is a feature of rsync, so it probably isn't fair to claim that it is "too much work" to get some level of certainty.
> Hashing is a feature of rsync, so it probably isn't fair to claim that it is "too much work" to get some level of certainty.

By hashing I mean verifying the data integrity of my source data and also my backups.
> By hashing I mean verifying the data integrity of my source data and also my backups.

My source data is on a ZFS volume. I'm only interested in whether the backed-up data has the same hash as the source.
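For anyone wanting to script that kind of source-vs-backup comparison, here is a minimal sketch using only coreutils (`find` + `sha256sum`). The throwaway demo directories at the bottom are placeholders; in real use you would point the function at the source dataset and the mounted backup drive instead.

```shell
#!/bin/sh
# Hash every file under SOURCE, then re-check the same list against BACKUP.
verify_backup() {
  src=$1; dst=$2
  manifest=$(mktemp)
  # Relative paths in the manifest so it can be checked from either tree.
  (cd "$src" && find . -type f -exec sha256sum {} +) | LC_ALL=C sort -k2 > "$manifest"
  if (cd "$dst" && sha256sum --check --quiet "$manifest" >/dev/null 2>&1); then
    echo "backup verified"
  else
    echo "MISMATCH found"
  fi
  rm -f "$manifest"
}

# Demo with throwaway directories; substitute e.g. /mnt/tank/data and the
# mounted USB drive in practice.
demo=$(mktemp -d)
mkdir -p "$demo/src" "$demo/dst"
echo "hello" > "$demo/src/a.txt"
cp "$demo/src/a.txt" "$demo/dst/a.txt"
verify_backup "$demo/src" "$demo/dst"   # prints "backup verified"
```

Note this reads every byte on both sides, so it is inherently slow on large trees; it is a verification pass, not something to run on every sync.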
I can't imagine why one would use ZFS on a USB drive.
The only supported file system for TrueNAS is ZFS; plan for any others to be turned off and locked out at some point. If you want to use a backup drive formatted with another file system, I would recommend setting up a workflow where you connect the drive to some other machine and transfer the data via a network protocol. The other option is a single-device pool formatted with ZFS, which is what I use for my external backup.
> I have used the import feature from an NTFS formatted drive and some files or folders couldn't be migrated to ZFS due to some special characters in the name. I have given up on the use case after that as not being reliable.

If I remember correctly, ALL foreign file systems are being deprecated in TrueNAS. That would include NTFS and the "import data" function. That is why I assumed ZFS on the external drive, and persisted with that thought until @harsh was clear about using NTFS.
The reason "import data" is being deprecated, (again IF I REMEMBER CORRECTLY), is that file attributes are not fully maintained. Thus, any attempt to use the files afterwards in a share would require manual intervention to potentially change owner, group, permissions, and/or ACLs.
Now whether deprecating "import data" is a good decision or bad, is beyond my knowledge.
If I am wrong about the deprecating of "import data" function, feel free to correct me. I will not be offended for being wrong.
> My source data is on a ZFS volume. I'm only interested in whether the backed-up data has the same hash as the source. By hashing I mean verifying the data integrity of my source data and also my backups.

Probably because you are not using ZFS replication. Though, to be fair, most of my iocage root datasets crawl to a stop during replication.
The integrity of the data has absolutely nothing to do with my question. My question is why my backups are taking so long.
Can we concentrate on my question?
An update on things I've tried to obtain acceptable performance:
I stopped the in-progress rsync, remounted the NTFS formatted USB drive using "mount -t ntfs /dev/sdi2 <dest>" instead of "ntfs-3g /dev/sdi2 <dest>", and the speed has picked up by a factor of 17 (now just under 24 MB/s). Still not what I was hoping for, but it will cut days off the backup.
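If it helps anyone confirm which driver they actually ended up with: the filesystem-type column in /proc/mounts distinguishes the in-kernel NTFS drivers ("ntfs" or "ntfs3") from ntfs-3g, which shows up as "fuseblk" because it runs in userspace through FUSE; that userspace round trip is the likely reason for the speed difference. A quick one-liner (device names will of course differ):

```shell
# Print device, mountpoint, and driver for any NTFS or FUSE block mounts.
# "ntfs3"/"ntfs" = in-kernel driver; "fuseblk" = ntfs-3g via FUSE.
awk '$3 ~ /^(ntfs|ntfs3|fuseblk)$/ {print $1, $2, $3}' /proc/mounts
# (No output simply means no such mounts are active right now.)
```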
I found this suggestion elsewhere because most seem to think that discussing ZFS ad nauseam will be the answer.
Until better information surfaces, this may help others in a similar situation.
zfs send pool/fs@snap | gzip > backupfile.gz
> Are the speeds you're seeing during the scan or transfer portion of rsync?

During the actual writing of files to the USB drive. There are no writes during the scan phase and no existing files on the destination to read.
> Also, what rsync command are you using?

rsync -acv
> One of the benefits of using zfs snapshots instead is that you wouldn't need to do the comparison scan and instead would just have the transfer speed. I'm guessing a good part of the problem you're running into is all of the random access as rsync attempts to determine what's changed in order to update the drive.

There's nothing "changed". The files aren't on the destination. Mine is the worst-case scenario for an incremental backup approach. Have you ever tried to do a ground-up restore from incremental backups?
> I've never tried it, but you should be able to send a zfs snapshot to a file on the USB drive.

I want an archival backup that isn't tied to TrueNAS or ZFS. I may decide to go another direction if the performance will be that poor on something as simple as a full backup. I'm pretty sure I can get much better archival performance on other operating systems, as I archived this server once before using a similar setup when it was running Windows Server 2016 and it went a lot faster. My goal here is to try to get TrueNAS Scale to at least approach that level of performance.
> During the actual writing of files to the USB drive. There are no writes during the scan phase and no existing files on the destination to read.

You stated in your earlier posts that you are using a Dell server with the USB drive attached to it, and also that you are using a desktop. So I would suggest you experiment with rsync only between the server and the desktop's internal drive over the LAN. This way you exclude the USB interface altogether. Let's see what kind of performance you can get.
I've checked the CPU usage and it is barely above idle.
Please, please, please -- no more theories about ZFS on the destination. If I can't figure out how to make a portable backup that doesn't take weeks, my TrueNAS experiment will be over. I've been stuck with backups that I couldn't read on other systems before (previously because of tape formats) and I felt like all my backup efforts were wasted.
I'm contemplating doing some testing with a different USB adapter that might have better support from TrueNAS Scale, in case it is a driver issue.
> So I would suggest you experiment with rsync only between the server and the desktop internal drive over LAN.

That's my next step. My power is going to be out tomorrow morning, so I don't want to interrupt the progress of the current rsync session.
> Also, my suggestion didn't involve ZFS at the destination. I was just pointing out a possibly faster way to back up via snapshot to a file.

It would defeat the purpose of making a portable archive of the files if I had to build a ZFS-capable machine to read an all-inclusive snapshot.
> Have you tried testing rsync from one pool to another within TrueNAS? Hopefully you can easily add a single drive to do so.

This also defeats the purpose, as it produces a ZFS formatted hard drive. Portability is an absolute must when archiving.
> I know that I've had rsync take a while when doing an initial copy, and that was ext to ext. I've also run into problems compressing and uncompressing to/from USB drives, both with ext and NTFS.

Whether it is rsync or any other application, it is going to take a while to do a full dump. The issue here is that "while" was amounting to weeks.
> rsync -acv

The -c flag is going to slow down the entire process by a substantial degree.
rsync -aAHSXxv --delete --stats SOURCE DESTINATION
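To illustrate why -c hurts: by default rsync skips files whose size and modification time already match, while -c forces it to read and checksum every byte of every file on both sides, even ones that are already identical. A rough shell stand-in for the two comparisons (this mimics the idea, it is not rsync's actual code):

```shell
#!/bin/sh
# Default rsync "quick check": metadata only (size + mtime), no file reads.
quick_same() {
  [ "$(stat -c '%s %Y' "$1")" = "$(stat -c '%s %Y' "$2")" ]
}

# rsync -c behaviour: hash the full contents of both files.
checksum_same() {
  [ "$(sha256sum < "$1")" = "$(sha256sum < "$2")" ]
}

src=$(mktemp); dst=$(mktemp)
echo "same content" > "$src"
touch -d '2001-01-01 00:00:00' "$src"   # fixed mtime so the demo is deterministic
cp -p "$src" "$dst"                     # -p preserves size and mtime

quick_same "$src" "$dst"    && echo "quick check: match (no data read)"
checksum_same "$src" "$dst" && echo "checksum:    match (every byte read)"

touch "$dst"                            # bump the mtime only
quick_same "$src" "$dst"    || echo "quick check: would re-copy after touch"
checksum_same "$src" "$dst" && echo "checksum:    contents still identical"
```

For a first full copy onto an empty destination, -c buys nothing, since every file gets transferred anyway; it only adds a full read of the source (and of the destination on later runs).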