jlw52761
Explorer
- Joined: Jan 6, 2020
- Messages: 87
I'm finally in a position to upgrade my FreeNAS system to TrueNAS Core, and one of the things I am trying to decide is whether I need a SLOG or not. I currently do not have one (or an L2ARC, for that matter), but wanted to determine whether, given that I'm using 7.2k spinning rust, I need one.
What I'm looking for is something similar to arcstat but that will report ZIL utilization. I know I can see what is being written into the ZIL using zilstat, but that doesn't tell me whether the ZIL is reaching its max, or even what the current max is. Similar to the hit/miss/size stats we have for ARC, I'm hoping to see whether the ZIL can flush to disk fast enough, or whether I need a SLOG to ensure that I don't have data loss.
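While I look for better stats, here's the back-of-envelope math I'm working from (a rough sketch, assuming the OpenZFS default 5-second txg timeout and my 1GbE front end; the numbers are assumptions from my setup, not measurements):

```python
# Rough sanity check (assumptions, not measurements): how much sync-write
# data the in-pool ZIL could be asked to absorb per transaction group,
# bounded by what a 1GbE front end can even deliver.

GBE_LINK_MBPS = 1000                          # 1GbE front end (my network)
LINK_BYTES_PER_S = GBE_LINK_MBPS * 1e6 / 8    # ~125 MB/s wire ceiling
TXG_INTERVAL_S = 5                            # OpenZFS default zfs_txg_timeout

# Worst case: every inbound write is synchronous (NFS and ESXi over iSCSI
# tend to behave this way), so the ZIL holds up to one txg interval of data.
max_zil_bytes_per_txg = LINK_BYTES_PER_S * TXG_INTERVAL_S

print(f"Worst-case ZIL backlog per txg: {max_zil_bytes_per_txg / 1e6:.0f} MB")
```

By that math the pool only ever has to commit about 625 MB of ZIL data per txg at my current network speed, which a 12-disk spread of 7.2k spindles can plausibly keep up with; at 10GbE the picture changes, which is part of why I'm asking.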
My current rig is using about 52GB of ARC at a >98% hit rate. It's got (12) 2TB 7.2k SAS drives (4x 3-wide RAIDZ), and I'm serving a mix of NFS, iSCSI (ESXi datastores), and internal bhyve VMs.
For reference, here are the specs of the system I'm converting to:
Dual Intel Xeon E5-2603 (Quad Core, No HT)
128GB RAM
(12) 2TB 7.2k SAS
(2) 100GB SATAIII SSD (ZFS Mirrored Boot)
I will be moving the workload from the current rig to this new one, moving from FreeNAS-11.2-U8 to TrueNAS Core. I have two M2400 systems, old EMC Avamar DataDomain Storage Nodes that have 12 3.5" drive bays and the backplane and SAS expander already there.
I am not married to the zpool configs, to be honest, as I can provide NFS from a Linux VM and use the array purely as iSCSI storage. If I go that route, I will probably go with a 6x 2-wide mirror zpool. That will be slightly faster than the 4x 3-wide RAIDZ, but not by a lot. Factor in that I'm also on a 1Gb network backend that may someday go to a 10GbE backend, so I'm kinda planning on not having to do this again. The previous system, once everything's transported, will be rebuilt and located offsite as a replication partner for this new system, with some limited local ESXi running on it as well.