Basil Hendroff
This is a hypothetical scenario at this stage. A FreeNAS server may be the wrong appliance to use for this, but I'd like to understand why and what the limitations are.
Consider a small business that runs predominantly Windows PCs. The business doesn't have a lot of cash to throw at hardware or software, but still wants to maximise business continuity by minimising system downtime. Two FreeNAS servers are being considered as file servers; one active, the other on cold standby (i.e. no automatic switching). Replication has been set up between the two servers, but the business understands there is the potential to lose any data created since the last replication event. To keep costs down, local authentication rather than directory services is employed. The servers are hardened in the sense that they meet the FreeNAS basic hardware requirements, and ZFS RAIDZ2 has been implemented. Note, though, that the servers may not be identical in their hardware or pool configuration, e.g. the second server may employ fewer, but larger, disks for its pool.
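For concreteness, the replication I have in mind is ZFS snapshot replication, along the lines of the dry-run sketch below (pool and host names are made up; in practice this would be a FreeNAS replication task rather than a hand-rolled script):

```shell
#!/bin/sh
# Dry-run sketch of snapshot replication from the active server to the
# cold standby. run() only prints each command; on a real system the
# echo would be replaced with actual execution.
run() { echo "+ $*"; }

SRC_POOL="tank"      # hypothetical pool on the active server
DST_HOST="standby"   # hypothetical hostname of the second server
DST_POOL="backup"    # the standby pool need not match the source layout
SNAP="repl-$(date +%Y%m%d%H%M)"

# Take a recursive snapshot of the source pool...
run zfs snapshot -r "${SRC_POOL}@${SNAP}"

# ...and stream it to the standby. -R preserves child datasets and
# their properties; after the first full send, incremental sends (-i)
# would keep the standby up to date.
run "zfs send -R ${SRC_POOL}@${SNAP} | ssh ${DST_HOST} zfs receive -dF ${DST_POOL}"
```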
In preparation for swinging the second server into action in the event of a catastrophic failure of the active server, what are some of the things to consider within the FreeNAS OS to minimise downtime? Below are several questions I've been pondering and searching the forums for answers to.
- It appears possible to replicate the pool directly, except for the system dataset, which needs to be treated separately. From the perspective of the second server, how important is the system dataset of the first server?
- I notice that user and group account information that exists on the first server isn't 'replicated' (by design, I understand) across to the second server. At the time of a switch, I doubt it is just a matter of unplugging the boot drive from the first server and plugging it into the second to 'transfer' account information across. What steps should be taken to ensure that account information is 'synced' between the two servers under normal operating conditions?
- As in the previous point, SMB shares aren't 'replicated' as such to the second server, but the underlying datasets are. How should shares be treated?
- I understand permissions are transferred during replication. However, under normal conditions, files on the second server need to be read-only to prevent users from accidentally changing data on the wrong server. At the time of a switch, the permissions have to change to those of the active server. Is it possible to toggle permissions in this way?
- It's unlikely that jails, VMs and plugins would be employed for this business, but if any of these were, are there challenges in replicating and activating these?
- Assuming it is possible to implement a standby FreeNAS file server, is there anything else I might need to consider?
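On the shares, my understanding is that a share is just configuration metadata that ends up as a Samba stanza roughly like the one below (share name and path are made up), so recreating the same definitions on the second server, or restoring a config backup, should reproduce them:

```ini
; Roughly the Samba stanza a FreeNAS SMB share boils down to
; (share name and path are made up for illustration).
[office]
    path = /mnt/tank/shares/office
    read only = no
    browseable = yes
```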
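To make the system dataset point concrete, here's roughly how I picture scoping replication around it (pool and dataset names are made up, and I realise a FreeNAS replication task would normally handle the scoping). My understanding is that the system dataset holds host-specific state such as Samba secrets, syslog and reporting data, so it wouldn't be meaningful on the second server anyway:

```shell
#!/bin/sh
# Dry-run sketch: replicate each top-level dataset except .system,
# which is host-specific state and gets regenerated on the standby.
run() { echo "+ $*"; }

SNAP="repl-demo"
# Simulated dataset list for the sketch; on a real system this would be:
#   zfs list -H -o name -d 1 tank
datasets="tank/.system
tank/shares
tank/home"

echo "$datasets" | while read -r ds; do
    case "$ds" in
        */.system) echo "skipping host-specific $ds" ;;
        *) run "zfs send -R ${ds}@${SNAP} | ssh standby zfs receive -dF backup" ;;
    esac
done
```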
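On the account question, the part I think matters most is that the numeric UIDs/GIDs match on both servers, since that's what the replicated file ownership refers to. FreeNAS keeps accounts in its config database, so restoring a config backup on the second server is presumably the supported route; the dry-run sketch below (with a made-up user list) is only meant to illustrate the UID-matching requirement using plain FreeBSD pw(8):

```shell
#!/bin/sh
# Dry-run sketch: recreate local users on the standby with the SAME
# numeric UIDs/GIDs as on the active server, so replicated file
# ownership resolves to the same people. The records below are a
# made-up stand-in for a list exported from the first server; FreeNAS
# itself manages accounts in its config database, so this illustrates
# the requirement, not the supported procedure.
run() { echo "+ $*"; }

# "name:uid:gid" records (hypothetical)
users="alice:1001:1001
bob:1002:1002"

echo "$users" | while IFS=: read -r name uid gid; do
    run pw useradd "$name" -u "$uid" -g "$gid" -m
done
```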
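On the read-only question, rather than rewriting file permissions in bulk, would the ZFS readonly property do the job? The replicated copy stays read-only at the dataset level, and a single property flip at failover makes it writable, with the replicated permissions untouched. A dry-run sketch (dataset name made up):

```shell
#!/bin/sh
# Dry-run sketch: the standby's replicated dataset is held read-only
# at the ZFS level (which also protects it between replication runs),
# and one property flip makes it writable at failover. File
# permissions inside the dataset arrive intact via replication and
# never need rewriting.
run() { echo "+ $*"; }

DS="backup/shares"   # hypothetical replicated dataset on the standby

# Normal operation: nobody can modify the standby copy.
run zfs set readonly=on "$DS"

# At failover: make the copy writable and start serving from it.
run zfs set readonly=off "$DS"
```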