Hi,
I would like some advice on my FreeNAS build.
My business has a small cluster of 3 ESXi servers running about 25 VMs in total. I moved from local storage to iSCSI on a QNAP NAS a few months ago, but the latency is too high for my liking (frequently over 50ms, with spikes over 100ms). Although the QNAP has an SSD cache, it doesn't play nicely with iSCSI - enabling it gives latency spikes of over 1,000ms and an error log full of messages saying "dm-kcopyd track job out of array". I've lost confidence in the QNAP and want to move to something more stable and better understood.
Our usage is at the low end of the spectrum - IOPS is usually below 200 but spikes up to 1,000 on occasion (usually when I reboot a VM or similar). One of the VMs is a MySQL DB and another is an Exchange server with about 50 mailboxes, but both have a generous amount of RAM so they don't tend to hammer the disk.
We have 10GbE, which is overkill for our needs, but that's another story - we have it now.
So here is the spec I am thinking of:-
1 x SSG-6028R-E1CR12L Supermicro SuperStorage Server 2U Rackmount (with Super X10DRH-iT)
1 x MCP-220-82609-0N Supermicro Rear hot-swap drive bay for 2x 2.5" drives
2 x Supermicro 32GB SATA-III SuperDOM
1 x Intel Xeon E5-2609 v4 Quad-core 2.50 GHz Processor
4 x 32GB Crucial DDR4 RAM modules (128GB total)
4 x HGST Ultrastar 7K6000 6TB configured as 2 x mirrored vdevs
1 x 400GB Intel P3700 PCIe SSD - Dedicated SLOG
If I understand correctly, 4 disks as 2 mirrored vdevs will give me around 500 read IOPS. Writes will be about half that, but the SLOG will give a large buffer to keep sync-write performance up.
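The rough math behind those numbers, as a sketch - assuming the common rule of thumb of ~125 random IOPS per 7200 RPM spindle (the per-disk figure is an assumption, not a measured value for the 7K6000):

```python
# Back-of-envelope IOPS estimate for a pool of 2 mirrored vdevs (4 disks).
iops_per_disk = 125   # assumed ~125 random IOPS per 7200 RPM spindle
disks = 4
mirror_vdevs = 2

# Reads can be satisfied by either side of a mirror, so every spindle contributes.
read_iops = disks * iops_per_disk          # 4 * 125 = 500

# Writes must land on both disks of each mirror, so only one
# "effective" spindle per vdev contributes.
write_iops = mirror_vdevs * iops_per_disk  # 2 * 125 = 250

print(read_iops, write_iops)  # 500 250
```

Note the SLOG only absorbs sync writes (e.g. iSCSI with sync=always); steady-state async write throughput is still bounded by the spindles.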
I wasn't going to bother with an L2ARC as I think 128GB RAM should cover things.
Hopefully this is overkill for what I need today but I want something that will cover our future needs and I prefer to run systems under-stressed rather than over! :)
Cheers