Mount USB drive and use it as backup target

Maxburn

Explorer
Joined
Oct 26, 2018
Messages
60
Searching around, I'm not seeing many solutions for mounting a USB drive in TrueNAS; is there a reason for this? Basically I want to mount the USB drive, run a local rsync from my main file share to it, then eject it and store it offline.

I'm currently doing this over the network, and I've tweaked the mounts and cipher to get the maximum transfer speed gigabit allows, but why do I even need to go over the network? Doing it on the NAS itself would let it run as fast as the backup drive can write. This seems like a common enough use case for TrueNAS that I'm actually surprised there isn't functionality for it in the GUI, the way "Import Disk" is in the GUI. You'd hope people back up their TrueNAS more often than they import disks...

What am I not getting here?
 
Joined
Jun 2, 2019
Messages
591
Fairly simple:
1. Plug the USB drive into a USB port
2. Confirm the new drive shows up in the disks list
3. Create a pool using the USB disk
4. Run rsync manually, or add an rsync job to cron (example below)
Done
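
If you'd rather do steps 3 and 4 from the shell, it amounts to something like this; a sketch only, where "usbbackup" and the device name are placeholders (TrueNAS normally creates pools through the GUI):

    # Confirm the device name in the disks list first. da5 is a placeholder
    # (FreeBSD-style naming on CORE; SCALE would use something like /dev/sdb).
    # -m sets the mountpoint to match where TrueNAS keeps pools.
    zpool create -m /mnt/usbbackup usbbackup /dev/da5
    zpool status usbbackup    # sanity check that the new pool is healthy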
 

Maxburn

Explorer
Joined
Oct 26, 2018
Messages
60
So it has to be a pool; I guess that would be OK, though I'd really prefer a format that could be easily read on another computer, which is probably why I wasn't finding much in my research. This method would allow an elegant way to stripe across a couple of disks though, so that makes sense.

Do you know of a good writeup on how to do this? Ideally I'd like to automate as much as possible: plug in the drive, kick off a task, have it perform the backup, then unmount/eject the USB drive.
 
Joined
Jun 2, 2019
Messages
591
The easiest solution is to just keep the USB drive plugged in and create a cron job to rsync the data to your USB pool (something like the command below). As long as you have not set up a share (SMB, NFS, etc.) directly on the USB drive, clients will not be able to access it.
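
The command field of such a cron job might look something like this; the paths are placeholders for your own share and USB pool:

    # -a preserves permissions, ownership, and times; --delete mirrors deletions
    rsync -a --delete /mnt/tank/share/ /mnt/usbbackup/share/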


[Screenshot: cron job setup in the TrueNAS GUI]


Anything fancier and you would likely have to create a script (bash, python, etc.) that monitors for the existence of the USB drive and executes whatever commands you desire, then launch the script as a background process, either via a cron at boot (@reboot) or simply as a post-init script. Just put the script on your data pool, add execute permissions (chmod a+x), and append "&" to the command so it launches in the background. You might be blazing a new trail, unless someone has already cooked up something similar; a rough sketch is below.

[Screenshot: post-init script setup in the TrueNAS GUI]
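
A minimal sketch of such a monitor script, assuming a USB pool named "usbbackup" and a source dataset at /mnt/tank/share (both placeholders):

    #!/bin/sh
    # Sketch only: poll for the exported USB pool; when the drive is
    # plugged in, import the pool, run the backup, then export the pool
    # so the drive can be unplugged.
    POOL="usbbackup"          # placeholder pool name
    SRC="/mnt/tank/share/"    # placeholder source dataset

    while true; do
        # "zpool import" with no arguments lists pools available for import
        if zpool import 2>/dev/null | grep -q "pool: ${POOL}"; then
            zpool import "${POOL}" &&
                rsync -a --delete "${SRC}" "/mnt/${POOL}/share/" &&
                zpool export "${POOL}"
        fi
        sleep 60              # check once a minute
    done

Saved on the data pool and launched in the background as described, e.g. /mnt/tank/scripts/usb-backup.sh &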
 

Maxburn

Explorer
Joined
Oct 26, 2018
Messages
60
I already have another pool permanently in the NAS that gets a twice-weekly rsync and isn't shared at all, almost exactly as you're describing. The only reason for it to exist is protection against accidents, the "oops, deleted the whole directory instead of a file" type of thing, and it makes for a fast restore. There's also an off-site backup protecting against the "oops, the house burned down" type of disaster.

What I'm looking for here is protection against ransomware: except while actually running the backup, the drives must be unplugged from any computer. You would also keep two sets of these "offline" ransomware backups, the one you know is good (but older) and the one you are about to update. I find it hard to believe no one is targeting this use case; ransomware is a huge threat now, and you are all far more advanced with these systems than I am.

Personally, I'd like such a task to be a manual trigger that I start after plugging in the drives (maybe you have more than one), but it should finish by unmounting the USB storage, making it inaccessible as quickly as possible and safe to unplug. The goal is to make it inaccessible to ransomware as quickly as possible; it's on the user (somehow) to make sure the system is clean before starting a backup.

That's another reason I'd like the contents to be easily viewable on other computers: I'd like to be able to verify that some files can be opened outside of the NAS.

Maybe I've talked myself out of a faster solution directly on the TrueNAS box. My dataset doesn't change much, so doing rsync over the network, while painful the first time, shouldn't be a huge deal going forward.
 
Joined
Oct 22, 2019
Messages
3,641
The only reason for it to exist is protection against accidents, the "oops, deleted the whole directory instead of a file" type of thing, and it makes for a fast restore.
What I'm looking for here is protection against ransomware
That's what snapshots offer. Why not leverage regular snapshot tasks to safeguard against this? (Even though snapshots are not backups, they do what you're asking for in this thread.)

Outside of that, rsync to your USB drive will work for a crude way of backing up the contents of a dataset, which you seem to already be doing.
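
For what it's worth, a manual snapshot from the shell is a one-liner, and the GUI's periodic snapshot tasks just automate the same thing on a schedule ("tank/share" is a placeholder dataset):

    # -r also snapshots child datasets; the name after @ is arbitrary
    zfs snapshot -r "tank/share@manual-$(date +%Y-%m-%d)"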
 


Maxburn

Explorer
Joined
Oct 26, 2018
Messages
60
That's what snapshots offer. Why not leverage regular snapshot tasks to safeguard against this? (Even though snapshots are not backups, they do what you're asking for in this thread.)

Outside of that, rsync to your USB drive will work for a crude way of backing up the contents of a dataset, which you seem to already be doing.
Snapshots can't be encrypted? Sounds like I need to read up on that.
 

Maxburn

Explorer
Joined
Oct 26, 2018
Messages
60
Snapshots are a property of the dataset. If the dataset is encrypted, then its snapshots are de facto encrypted.
Got it; then that's not what I'm looking for in this thread. I only wandered off topic to address another use case that was mentioned. But yeah, I'll look into snapshots, though I don't think I'm willing to do away with my local backup drive.
 
Joined
Oct 22, 2019
Messages
3,641
But yeah, I'll look into snapshots, though I don't think I'm willing to do away with my local backup drive.
Don't you have that taken care of with rsync? Subsequent rsync runs should be much faster after the initial "full" backup.
 

Maxburn

Explorer
Joined
Oct 26, 2018
Messages
60
Don't you have that taken care of with rsync? Subsequent rsync runs should be much faster after the initial "full" backup.
Wandering off topic again, but my NAS has a RAIDZ2 array containing my main share. It also has a RAIDZ1 array that contains the backup, and that's all it does. If I take up your suggestion and look into snapshots, I might be able to eliminate that second array of drives. I'd really have to completely wrap my head around snapshots first: how they work and any downsides.

The remote-site backup and this thread's topic of a backup that mostly lives offline are two other backup strategies. Each strategy targets a different problem, and I don't consider them interchangeable.
 
Joined
Oct 22, 2019
Messages
3,641
Snapshots can't be encrypted?
There's overlap in terminology.

I thought you meant "encrypted" as in intentionally "encrypted by ZFS", for security and privacy reasons. But upon a second reading, you meant encrypted by a third party using ransomware?

Snapshots are read-only, and the records (blocks) at the very moment you took your snapshot are forever immutable. (Unless you destroy the snapshot and any other snapshots that reference these records.)

In theory, if a hacker uses ransomware to "encrypt" all your files in a dataset (or SMB share), it will double the "used" size, due to ZFS's inherent copy-on-write design. The existing records of data are still "as they were" in your snapshot, while the ransomware's attempt to "encrypt in place" actually writes new "ransomed" data.

A simple revert to the previous snapshot, and you're back in business. (This "revert" takes only seconds.)
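
From the shell, that revert is a single command, with the caveat that it discards everything written after the snapshot (names are placeholders):

    # Roll the dataset back to yesterday's snapshot; -r also destroys
    # any snapshots taken after it.
    zfs rollback -r tank/share@auto-2022-07-02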

However, if you're at a point where a hacker is freely using ransomware on your SMB share or even inside the server itself, you've got bigger problems to address.
 
Joined
Oct 22, 2019
Messages
3,641
The remote-site backup and this thread's topic of a backup that mostly lives offline are two other backup strategies. Each strategy targets a different problem, and I don't consider them interchangeable.
You can do as suggested by @elvisimprsntr, which means your backup drives would be ZFS pools themselves. That way you can import, back up, export, and physically store the drive somewhere safe.

ZFS can be read on another TrueNAS machine, on Linux distros, and on Windows with third-party software. (I use ZFS on Linux without any issues.)
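
On the other machine, verifying the backup is just an import/export cycle (pool name is a placeholder):

    # On a Linux box with OpenZFS installed:
    zpool import usbbackup    # pool mounts at its configured mountpoint
    # ...open a few files to confirm they're readable...
    zpool export usbbackup    # clean detach before unplugging
    # If the pool wasn't exported cleanly last time, import may need -f.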

Otherwise, you'd have to use a common filesystem (such as exFAT) and do your backups over the network. Thankfully, there shouldn't be too much data to transfer over the gigabit network after you've already completed a full backup.

Just keep in mind you'll lose certain file attributes and permissions if going from ZFS to exFAT or NTFS.
 

Maxburn

Explorer
Joined
Oct 26, 2018
Messages
60
I did mean ransomware, thanks. And yes my concern is mostly them coming in and encrypting a mounted share from some other infected workstation.

Initial concerns with snapshots, before doing my own research:
1. Any issue with doing, say, two a week, forever?
2. Is releasing old snapshots and merging into the current version an expensive operation?
3. Is that easy to automate?

My build four years ago was a little pricey with all the extra drives (actually a big step down from what I had; I just updated my signature), but I have every confidence in how the internal backup works: a twice-weekly rsync from the main share to the backup pool.
 

Maxburn

Explorer
Joined
Oct 26, 2018
Messages
60
You can do as suggested by @elvisimprsntr, which means your backup drives would be ZFS pools themselves. That way you can import, back up, export, and physically store the drive somewhere safe.

ZFS can be read on another TrueNAS machine, on Linux distros, and on Windows with third-party software. (I use ZFS on Linux without any issues.)
I'm going to play with this idea once I pick up another drive.
 
Joined
Oct 22, 2019
Messages
3,641
1. Any issue with doing, say, two a week, forever?
No real issue. If you begin to run low on space, you can prune older snapshots. Do realize that a snapshot takes up zero space unless it is the only remaining reference to certain files/records (as in the case of deleted or modified files).

If all that ever happens is you keep adding files, never deleting anything, then even your oldest snapshots will not take up any space; the only thing taking up more and more space is the very files you are creating and adding.

Only when you start to delete or modify files will the older snapshots "take up space", due to keeping references to the original records of data (which your live filesystem no longer references because you deleted those files).
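
You can watch this accounting directly; the USED column shows only the space each snapshot alone is holding onto ("tank/share" is a placeholder dataset):

    # USED = space uniquely referenced by that snapshot
    zfs list -t snapshot -r -o name,used,refer tank/share
    # Pruning an old snapshot frees only its uniquely held space:
    zfs destroy tank/share@auto-2021-01-01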


2. Is releasing old snapshots and merging into the current version an expensive operation?
That's not how it works with ZFS, unlike "snapshots" in other technologies. Creating and destroying snapshots is instantaneous, and destroying an old snapshot simply frees whatever records only it referenced. There is no "merging" to do in the context of ZFS snapshots; the live filesystem is already the current version.


3. Is that easy to automate?
From within the TrueNAS GUI, you can create automatic snapshot tasks and customize their frequency and expiration times.


I did mean ransomware, thanks. And yes my concern is mostly them coming in and encrypting a mounted share from some other infected workstation.
Snapshots protect against this.

Let's say the infected workstation accesses the SMB share that houses 100 GB of files. The day before, you had luckily taken a snapshot of this dataset on the TrueNAS server.

The ransomware encrypts all 100 GB of files, and deletes the original files!

From the SMB share and even from within TrueNAS on the live filesystem, all you see is 100 GB of useless encrypted files. All original files are gone!

You'll notice that your dataset now consumes 200 GB, not just 100 GB.

This is because your snapshot from yesterday is an exact replica of the dataset and all its files "as they were" at that moment in time the day before. The snapshot uniquely references 100 GB worth of data that no longer exists elsewhere, while the live filesystem references 100 GB of useless encrypted data. That is why 200 GB is being used.

You go to the snapshot and choose to revert to it.

Now your dataset essentially "rewinds" to that snapshot in an instant, and you're good to go once again. The 100 GB of useless encrypted data is no longer referenced, not even by the live filesystem.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
See also:
 