Hello everyone,
we plan to use Dell R530 hardware with the following specs to store backup copies of virtual machines via an NFS datastore. This was working well until we had an issue with the housekeeping of older backups and the dedup tables grew too big. But more details first.
Server: Dell R530, Xeon E5-2603 v3 @ 1.60GHz, 128 GB ECC RAM, 4 x Broadcom Gigabit LAN, no RAID controller (using the internal SATA ports), 7 x 4 TB SATA 7200 rpm drives in RAID-Z1 giving a 20 TB pool, and one 200 GB mixed-use SSD as cache/L2ARC. Redundant power supplies and a UPS attached. The network is configured as one failover pair for the management IP (lagg0) and a second failover pair for data traffic (lagg1).
As I wrote, we had some issues with removing old backup copies due to an error in a script, so we kept more than 100 copies of the VMs instead of a maximum of 10. But deduplication was working very well, and we still had lots of free disk space.
One site had a dedup ratio of 42 with 18.8M allocated blocks, another site had a ratio of 19 with 65.7M blocks. We are now starting from scratch with a fixed housekeeping script and are going to test the environment.
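For the block counts above, a back-of-the-envelope estimate of the RAM the dedup table (DDT) needs can be done with the often-cited rule of thumb of roughly 320 bytes of core memory per unique allocated block (this is an approximation, not an exact figure):

```python
# Rough DDT RAM estimate, assuming ~320 bytes per unique block in core.
# Block counts are the ones observed at our two sites.

BYTES_PER_DDT_ENTRY = 320  # rule-of-thumb value, not exact

def ddt_ram_gib(allocated_blocks: float) -> float:
    """Approximate DDT size in GiB for a given number of unique blocks."""
    return allocated_blocks * BYTES_PER_DDT_ENTRY / 2**30

for site, blocks in [("site A", 18.8e6), ("site B", 65.7e6)]:
    print(f"{site}: ~{ddt_ram_gib(blocks):.1f} GiB of RAM for the DDT")
```

So even the larger site's DDT (~20 GiB) would still have fit comfortably in the 128 GB of RAM, which matches our experience that dedup kept working despite the runaway copy count.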
The future plan is to use ZFS replication to sync the day-to-day changes to another site. This is the main reason why we want to use deduplication - to identify the changed blocks in the backup data. Dedup is the key to minimizing the traffic for the replication.
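The daily sync we have in mind would look roughly like this (pool, dataset, and host names are placeholders for our setup):

```shell
#!/bin/sh
# Sketch of a daily incremental replication run; names are hypothetical.
DS=backup1/vmcopies   # local dataset holding the VM backup copies
REMOTE=backupsite2    # receiving host at the other site

# Take today's snapshot.
zfs snapshot "$DS@2016-01-02"

# Incremental send: transfers only the blocks that changed between
# yesterday's and today's snapshot, piped to the receiving site.
zfs send -i "$DS@2016-01-01" "$DS@2016-01-02" | \
    ssh "$REMOTE" zfs receive -F "$DS"
```

An incremental `zfs send` between two snapshots ships only the blocks written in between, so the snapshot schedule itself determines what crosses the wire.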
Given this plan, what do you think of the configuration?
Should we switch to RAID-Z2, since due to compression and dedup we won't need all that space even at big sites? Or should we use a second SSD as ZIL instead of one hard disk (the server has 8 drive bays)?
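For reference, here is a rough usable-capacity comparison of the vdev layouts we are weighing for the 8-bay chassis (parity cost only; ZFS metadata and slop overhead are ignored, which is why the real pool shows ~20 TB rather than 24):

```python
# Rough usable capacity of RAID-Z layouts under discussion, 4 TB drives.
# Only parity overhead is counted; ZFS metadata/slop is not.

DRIVE_TB = 4

def usable_tb(drives: int, parity: int) -> int:
    """Data capacity of one RAID-Z vdev: (drives - parity) * drive size."""
    return (drives - parity) * DRIVE_TB

print("7-wide RAID-Z1:", usable_tb(7, 1), "TB")  # current layout
print("7-wide RAID-Z2:", usable_tb(7, 2), "TB")  # same disks, more parity
print("8-wide RAID-Z2:", usable_tb(8, 2), "TB")  # 8th bay used for data
```

Going to an 8-wide RAID-Z2 would keep the same data capacity as today's 7-wide RAID-Z1 while tolerating two disk failures, at the cost of the spare bay.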
How will this configuration work with ZFS snapshots and the replication?
We usually have lots of write traffic and only read access if we need to restore a VM, which happens very rarely.
What is your suggestion on this configuration?
Thanks in advance, Harald