Mount disks at a given time

WhiteTiger

Explorer
Joined
Jan 2, 2014
Messages
86
I would like to keep some disks offline, for example two internal and one external on USB, and bring them online at a certain time.
How can I do that?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
FreeNAS isn't designed for this.

What might be possible depends on your concept of "offline".

Are these going to be powered on all the time? That fails many definitions of "offline", but you might be able to do that and just have a cron script to import the pool at the desired time.
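
A minimal sketch of such a script, assuming your backup pool is named "Backup" (the name is just a placeholder) and that you point a cron job or Scheduled Task at it:

#!/bin/sh
# import the backup pool only if it is not already imported
if ! zpool list Backup > /dev/null 2>&1; then
    zpool import Backup
fi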

If they're not going to be powered on all the time, you'd be better off having the operator coordinate with the NAS when the disks are attached and powered on.
 

WhiteTiger

Explorer
Joined
Jan 2, 2014
Messages
86
FreeNAS isn't designed for this.

What might be possible depends on your concept of "offline".

Are these going to be powered on all the time? That fails many definitions of "offline", but you might be able to do that and just have a cron script to import the pool at the desired time.

If they're not going to be powered on all the time, you'd be better off having the operator coordinate with the NAS when the disks are attached and powered on.

My idea is that some disks would be connected and configured, but not mounted.
Obviously in that case they would not be accessible to users or to the NAS services themselves; for example, they would not be accessible for backup.
However, a script could mount them, and that script could be started at a certain time or launched by a service.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,906
Just out of curiosity: What is the overall goal behind this idea?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
My idea is that some disks would be connected and configured, but not mounted.
Obviously in that case they would not be accessible to users or to the NAS services themselves; for example, they would not be accessible for backup.
However, a script could mount them, and that script could be started at a certain time or launched by a service.

Okay, so disks just sort of standing by. You can implement a script to do whatever you'd like and call it from cron ("scheduled task"). It won't really be easy to have it do anything NAS-y, but you could use it for making a backup copy of data or something like that.
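
For the scheduling part, the GUI "Scheduled Tasks" entry is just a front end for cron, so the raw equivalent would be something along these lines (the script path and time are placeholders):

# run the backup script every night at 02:00
0 2 * * * /root/bin/offline_backup.sh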
 

WhiteTiger

Explorer
Joined
Jan 2, 2014
Messages
86
Okay, so disks just sort of standing by. You can implement a script to do whatever you'd like and call it from cron ("scheduled task"). It won't really be easy to have it do anything NAS-y, but you could use it for making a backup copy of data or something like that.
Great!
The first time the disks are online I want to create SMB folders, configure them as a mirror, or activate other services on them.
Obviously these would be services not shared with the other disks.
When I take the disks offline, these services are obviously not accessible.
The important thing is that I don't lose the configuration, and that everything becomes available again when the disks come back online.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
Maybe look here for some inspiration: https://www.truenas.com/community/resources/how-to-backup-to-local-disks.26/
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
FreeNAS/TrueNAS is not going to be able to watch for these drives to "become available" and magically start SMB or other services on them; that's simply far outside the design goals.

However, there is a full API, so the same script that you use to bring the drives online could also talk thru the API to request a new SMB share (etc) to be created. Obviously you would also need a "shutoff" script that reversed all the steps.
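
As a rough illustration only (this targets the v2.0 REST API; the host, API key, path and share name below are placeholders, so check the API docs on your own box before relying on the exact fields):

# ask the middleware to create an SMB share on the freshly imported pool
curl -s -X POST "http://truenas.local/api/v2.0/sharing/smb" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"path": "/mnt/Backup/share", "name": "backupshare"}'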

As @sretalla notes, this is not entirely unheard-of.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
You don't necessarily need to remove or re-create the shares...

If you just yank out (export) a pool, the shares from it remain (although the user experience of trying to access it at that point may not be pretty... probably timeouts and other errors), when you re-attach (import) it again, all the shares would start working as before.

If you want to smooth the user experience, you'll need to master the APIs for share creation and removal as @jgreco mentions.
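
For the removal side, a sketch using midclt (the share name is a placeholder, and you need the share's numeric id, so query it first; exact method arguments may vary between versions):

# find the share's id, then delete it
midclt call sharing.smb.query '[["name", "=", "backupshare"]]'
midclt call sharing.smb.delete <id>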
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
If you just yank out (export) a pool, the shares from it remain (although the user experience of trying to access it at that point may not be pretty... probably timeouts and other errors), when you re-attach (import) it again, all the shares would start working as before.

[reaction GIF: saturday-night-live-maya-rudolph.gif]
 

WhiteTiger

Explorer
Joined
Jan 2, 2014
Messages
86
Maybe look here for some inspiration: https://www.truenas.com/community/resources/how-to-backup-to-local-disks.26/

The linked document does not help me much.

The goal is to make backups and then keep them offline to avoid accidental loss, ransomware and other problems.

Removable disks or tape cartridges cannot be considered, partly because of the cost, but above all because there is no one person to take care of them.
So everything has to be automatic.

In practice, 5 HDDs are online with their services fully functional.
At a predetermined time a script mounts two other HDDs already configured but kept offline.
Once these are online too, the first 5 HDDs will be backed up on these two.
At the end of the backup, the two HDDs are placed offline again.

So the folders on the "offline" disks must already be created.
Any commands to be executed to make them accessible can be placed in the script.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
I hope you're not expecting someone to have already written the exact thing you want?

The link @sretalla provided includes a lot of the general stuff you'd need, but how exactly you want to go about your particular solution may be somewhat different.

There is no such thing in FreeNAS as "already configured but kept offline"; the middleware expects a pool to be mounted and usable. You can either run your "offline" stuff entirely yourself with scripting as @sretalla linked to, or you can create scripts that use the API to import your pool (bringing it online), configure an SMB share or run rsync or whatever, and then export the pool when done, as I suggested above. If you don't care to have this be accessible by Samba, and it sounds to me like you don't, then the stuff @sretalla pointed at strikes me as a plausible starting point for your project.
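
To make that concrete, the skeleton of such a script might look roughly like this (pool names and paths are placeholders; this is a sketch, not a finished solution):

#!/bin/sh
# bring the backup pool online
zpool import Backup || exit 1
# copy the data over (repeat for each source dataset)
rsync -a /mnt/tank/data/ /mnt/Backup/data/
# flush and take the pool offline again
sync
zpool export Backup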
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
The linked document does not help me much.
I disagree.

At a predetermined time a script mounts two other HDDs already configured but kept offline.
https://www.truenas.com/community/resources/how-to-backup-to-local-disks.26/ said:
zpool import Backup

Once these are online too, the first 5 HDDs will be backed up on these two.
https://www.truenas.com/community/resources/how-to-backup-to-local-disks.26/ said:
./backup_freenas_rsync
You need to write the syntax of that script to tell rsync what to send to the Backup Pool.
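
For example, the core of it could be as simple as this (source and destination paths are placeholders; --delete makes the copy a true mirror of the source, so leave it out if you want to keep deleted files on the backup):

rsync -av --delete /mnt/tank/data/ /mnt/Backup/data/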

At the end of the backup, the two HDDs are placed offline again.
https://www.truenas.com/community/resources/how-to-backup-to-local-disks.26/ said:
Export backup disk:
cd /
sync
zpool status Backup
zpool export Backup
zpool list

So the folders on the "offline" disks must already be created.
rsync does that

Or if you decide to do it with snapshots and zfs send | recv, there's nothing to do there.
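
A rough sketch of that variant, with placeholder pool and snapshot names (an incremental send with -i from the previous snapshot is the usual follow-up):

zfs snapshot tank/data@backup1
zfs send tank/data@backup1 | zfs recv -F Backup/data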
 

WhiteTiger

Explorer
Joined
Jan 2, 2014
Messages
86
I apologize, but I may not have explained clearly what I was asking.
I didn't want anyone to do the work for me; I wanted confirmation, an opinion or some advice on a scenario where the NAS is managed in a totally automatic way.
The linked document instead assumed that a disk is hot-inserted and then hot-removed when the job is finished.
In my scenario, that disk (or that set of RAID disks) would instead be mounted and unmounted automatically by scripts.

Of course I'm not thinking of taking disks out of a RAID. If I take two disks out of a 4-disk RAID5, the RAID simply doesn't work anymore.
In fact, what I had in mind is a second RAID (RAID 0 or 5/6) that I would put online only when I need it and take offline when the job is finished.

What I was expecting are answers like, for example, these:
  • It can be done / no, it cannot be done.
  • It can be done, but only with single disks, not with RAID.
  • It can be done, but you need to reboot before you can use them.
  • Can / cannot be done using this plugin / app
  • We recommend that you do this other thing instead.
As for the document, I'm re-reading it more carefully so I can run some tests right away with a NAS in a virtual machine.

As I said earlier, the goal is to manage a backup.
I am not talking about an Enterprise environment, nor a home environment.
I'm talking about a small office where I want to secure backups made from the NAS or PC so that if ransomware comes along, it won't find all the drives online.
I want to avoid human intervention because I have no guarantee that the job will then be done or done correctly.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
The linked document instead assumed that a disk is hot-inserted and then hot-removed when the job is finished.
If you were to follow all of the steps it mentioned, yes.

In fact, what I had in mind is a second RAID (RAID 0 or 5/6) that I would put online only when I need it and take offline when the job is finished.
Right... hence export the pool and only import it when you want to use it, then export it when done.

It can be done / no, it cannot be done.
Yes, it can.

It can be done, but only with single disks, not with RAID.
Any ZFS pool will work, RAIDZ, Mirror or otherwise.

It can be done, but you need to reboot before you can use them.
Nope. No reboot required.

Can / cannot be done using this plugin / app
You would need to script the stop/start of jails/plugins to correspond with what you're doing. Some plugins/jails could work to help, but are not specifically required.
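
On CORE, for example, that part of the script could be as simple as the following (the jail name is a placeholder):

iocage stop myjail
# ... import the pool, run the backup, export the pool ...
iocage start myjail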

We recommend that you do this other thing instead.
Do what you want. If you're going to script things, then the possibilities are endless.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
What I was expecting are answers like, for example, these:
It can be done / no, it cannot be done.

I outlined this as possible in #2 and #5 above.

It can be done, but only with single disks, not with RAID.
It can be done, but you need to reboot before you can use them.

The discussion of possibilities should have made these answers implicitly understood. Certainly the stuff @sretalla pointed to is talking about pools, and doing this without reboot. It isn't clear what you think a reboot would do anyways. This isn't Windows. :smile:

Can / cannot be done using this plugin / app
We recommend that you do this other thing instead.
As for the document, I'm re-reading it more carefully so I can run some tests right away with a NAS in a virtual machine.

I think most of the responses here have generally indicated that what you want to do is possible, except that there isn't an app that's available for this, because, well, it's a weird thing to do. Please pay attention to that last bit, because you're hardly the only person who has concerns like:

As I said earlier, the goal is to manage a backup.
I am not talking about an Enterprise environment, nor a home environment.
I'm talking about a small office where I want to secure backups made from the NAS or PC so that if ransomware comes along, it won't find all the drives online.
I want to avoid human intervention because I have no guarantee that the job will then be done or done correctly.

Generally, this would not be referred to as an enterprise environment, but rather a SOHO environment. Enterprise environments typically have paid sysadmin staff keeping an eye on the thing, support contracts for all the hardware, etc.

This feels like you are fighting some overall concepts here too. One of the most powerful ZFS-isms is snapshots, which capture the state of the pool or a dataset at a given moment in time, which you can then later access. Some of us have snapshots going back a decade, because, well, it's easy to do. This is the common way that ZFS is used to protect against ransomware, even if the PCs have direct access to a fileshare on the filer. They still cannot corrupt the snapshots. It takes no extra hardware, and you can get multiple snapshot tiers, so you have every ten minutes going back six hours, every hour for the last two weeks, and then every week for a decade.
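
The GUI handles this for you with Periodic Snapshot Tasks, but at the command line the underlying idea is nothing more than this (dataset and snapshot names are placeholders):

zfs snapshot -r tank@auto-hourly-1
zfs list -t snapshot -r tank
# roll a dataset back to a known-good snapshot after an incident
zfs rollback tank/data@auto-hourly-1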

For people who want that additional layer of safety, you would normally set up a SECOND server, without any services running except rsync, have that copy from the first server via a "pull", and then snapshot on the second server as well. Both servers are then extremely resistant to ransomware.
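
On the second box, the pull plus snapshot could be sketched as simply as this (hostname, pool and paths are placeholders):

# pull the data from the primary, then snapshot the local copy
rsync -a root@nas1:/mnt/tank/data/ /mnt/backup/data/
zfs snapshot backup/data@pull-$(date +%Y%m%d)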
 