A little background:
We've been using a ZFS storage system for main storage for about 3 years now. It's a Supermicro chassis, 36x 2TB Seagate Constellations, an Intel SRCSASJV controller with BBU, and OCZ SAS SSDs for log devices. Without going into too many specifics, we've been running OpenIndiana with much success. Lately, though, hardware failures have plagued us. We lost a SAS expander, which involved downtime, and recently we had three drives fail within a 24-hour period, which put us at risk of data loss. Thankfully that didn't happen, though the resilver time on the raidz2 pool was excruciating. (We also have a mirror/stripe pool, which recovered in hours instead of days.)
So, we bought a new Supermicro storage server to act in a replication role. Ideally, we'd like to have one production pool on each server, and a backup pool. The servers would replicate to each other, and in the event of a catastrophe, we'd have a replica of each on the other.
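Roughly, the replication I have in mind is scheduled ZFS send/receive between the two boxes; a minimal sketch of the idea (pool, dataset, and host names here are placeholders, not our actual layout):

    zfs snapshot -r tank/production@rep-20140901
    zfs send -R tank/production@rep-20140901 | ssh backup-host zfs receive -duF backup/production

After the initial full send, subsequent runs would be incremental (zfs send -R -i <previous-snap> <new-snap>) on a schedule.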
The new server is the 6047R-E1R24L. Highlights are an LSI 2308 HBA in IT mode, 64GB RAM, 24x WD RE SAS 4TB disks, and 2x Intel SSD Pro 2500 Series.
I'm evaluating FreeNAS because OpenIndiana has an issue with the HBA / WD disks where it thinks they are over temp and faults them.
So far, I'm liking what I'm seeing. I haven't evaluated FreeNAS in about 4 years; we chose OI over it then because of the superiority of Solaris's CIFS server over Samba, though that's no longer much of a concern for us.
But --
Abysmal writes. I set up a single stripe/mirror pool using 22 of the 24 WD disks, with the other two as spares. It's encrypted. I took a bit of a circuitous route in setting up the log devices, preferring to do it at the CLI with small partitions (kind of like this: http://mark.nellemann.nu/2013/01/31/zfs-log-and-cache-on-sliced-disks/) so I didn't have to waste my entire 240GB SSDs on a log device that will maybe use 2GB. The log is a mirror of two 2GB slices, one on each SSD. I'm using a 2TB zvol presented to an ESXi host via iSCSI. With sync=always set on the zvol, I can see the log devices getting written to, but I can't see any performance improvement over not using them at all. With sync=standard (which, as I understand it, effectively means sync is off with ESXi over iSCSI), I get about the performance I'd expect given the underlying hardware and the network.
It's difficult for me to compare with my other system, as both the OS and the HBA are different. We never experienced this kind of problem on the OI system, even before adding SSD log devices, but having a BBU-backed write cache probably helped a lot.
Sorry if this is a rambling first post. I'll provide as much detail as needed but figured this might get me started. This is a non-production system, so I can manhandle it as I please for testing purposes.