I need to create a dataset to archive files I never want to delete.
Basically, I currently have several datasets used to maintain the integrity of my personal documents, but they are always in a state of flux. For instance, I have photos I can access at any time, and I move some of them back and forth between the computers on my network. Between the backups and the copies, I end up with redundant files spanning several datasets. I have compressed backups of those files, and I want to be able to set them read-only so that I cannot delete them through user error.
With this in mind, I have created a dataset that will contain those backup files, or files/documents that I have no need or desire to modify.
My question is more on the management side of things.
Ideally, I want to retain access to these files in the new dataset as a share, but I want to prevent the files from being erased. I also do not want write access except when I need to move new files into the archive. Once a file has been copied, the dataset has to revert to its "no delete" and "no write" permissions. What is the best strategy to accomplish this?
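This is a rough sketch of the update cycle I have in mind, as a shell script. The dataset name `tank/archive` and the paths are placeholders for my actual pool, and `DRYRUN=1` only prints the commands instead of running them, since I am not sure yet this is the right approach:

```shell
#!/bin/sh
# Sketch of the archive update cycle: unlock, copy, re-lock, snapshot.
# tank/archive and the paths are placeholder names, not my real pool layout.
DRYRUN=1                      # set to empty to actually run the zfs commands

run() {
    # print the command; execute it only when DRYRUN is unset
    echo "+ $*"
    [ -n "$DRYRUN" ] || "$@"
}

SNAP="archive-$(date +%Y%m%d)"

run zfs set readonly=off tank/archive              # open the archive for writing
run cp -n /mnt/tank/staging/* /mnt/tank/archive/   # copy new files in, no overwrite
run zfs set readonly=on tank/archive               # lock it down again
run zfs snapshot "tank/archive@$SNAP"              # record the new state
```

The idea is that the dataset spends almost all of its time with `readonly=on`, and the snapshot taken after each update gives me a restore point even if something slips through while it is unlocked.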
With that in mind, I would also like to be able to run some kind of script that tells me whether a file has been added or removed. I understand it is possible to query snapshots to see the differences. Is this the proper way of doing it?
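For the add/remove check, I was thinking of parsing the output of `zfs diff` between two snapshots. Here is a small sketch of the parsing; the diff output below is hard-coded sample data with made-up paths, standing in for something like `zfs diff -H tank/archive@old tank/archive@new`:

```shell
#!/bin/sh
# Parse `zfs diff -H` style output: tab-separated change type and path.
# "+" = file added, "-" = file removed, "M" = modified, "R" = renamed.
# Hard-coded sample standing in for: zfs diff -H tank/archive@old tank/archive@new
diff_output='+	/mnt/tank/archive/new-photo.jpg
-	/mnt/tank/archive/old-backup.tar.gz
M	/mnt/tank/archive/index.txt'

added=$(printf '%s\n' "$diff_output" | awk -F'\t' '$1 == "+" { print $2 }')
removed=$(printf '%s\n' "$diff_output" | awk -F'\t' '$1 == "-" { print $2 }')

echo "Added:   $added"
echo "Removed: $removed"
```

If that is the right direction, I could run this after each snapshot and flag any "-" lines, since nothing should ever be removed from the archive.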