Michael Wynne
Cadet
Joined: Mar 5, 2015
Messages: 3
Looking for Second Opinions
The plan is to set up a new storage device using FreeNAS 9.3 and keep the cost around $13,000.
I currently have the following hardware planned:
Qty 1 - SuperMicro Chassis - 216BAC-R920LPB (room for 24 2.5-inch drives) $1300
Qty 1 - SuperMicro Motherboard - X10DRC-T4+ (4 Intel 10GbE ports & LSI controller) $750
Qty 2 - Processors - Intel Xeon E5-2623v3 (3GHz, 4-core) $500 each
128GB DDR4 ECC Memory $1400
Qty 3 - LSI 9300-8i HBA $250 each (multiple 12G SAS lanes may or may not be necessary for this SSD setup, but I also do not want to have to crack open the case and add cards later)
Qty 1 - Intel X540-T2 (10GBase-T NIC for expansion)
Qty 8 - Samsung 845DC PRO 800GB SSDs $850 each (to start, with later expansion up to 24; the drives have power-loss protection, high endurance, and 20% overprovisioning)
Qty 2 - Samsung 845DC EVO 240GB $200 each, mirrored boot drives (the 845DCs are necessary for the capacitors)
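As a sanity check against the $13,000 target, the listed prices can be tallied. The X540-T2 has no price listed above, so the figure for it here is an assumed placeholder, not a quoted price:

```python
# Rough budget tally of the parts listed above. The X540-T2 NIC price was
# not given in the plan, so a $400 placeholder is assumed for illustration.
parts = {
    "216BAC-R920LPB chassis": 1 * 1300,
    "X10DRC-T4+ motherboard": 1 * 750,
    "Xeon E5-2623v3 CPUs": 2 * 500,
    "128GB DDR4 ECC memory": 1400,
    "LSI 9300-8i HBAs": 3 * 250,
    "Intel X540-T2 (assumed price)": 400,  # not listed in the plan
    "845DC PRO 800GB SSDs": 8 * 850,
    "845DC EVO 240GB boot SSDs": 2 * 200,
}
total = sum(parts.values())
print(f"Total: ${total:,}")  # Total: $12,800 with the assumed NIC price
```

That leaves only a couple hundred dollars of headroom under the target once the real NIC price, shipping, and spares are factored in.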
The plan is to run a zpool of striped mirrored-pair vdevs, eight drives initially, adding mirror pairs to the pool as capacity is needed, up to a maximum of 12 drives. After 12 drives, a new pool would be created using a different LUN/NIC and iSCSI target IP. My hope is that 12 drives will come close to saturating an interface.
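The growth plan above can be sketched with some rough arithmetic. The per-drive IOPS figures below are illustrative round numbers I'm assuming for the sake of the example, not measured or vendor-rated numbers for the 845DC PRO:

```python
# Illustrative scaling for a pool of striped 2-way mirrors of 800GB SSDs.
# Per-drive IOPS values are assumptions for illustration only.
DRIVE_GB = 800
READ_IOPS_PER_DRIVE = 90_000    # assumed
WRITE_IOPS_PER_DRIVE = 50_000   # assumed

def mirror_pool(drives):
    pairs = drives // 2
    return {
        "mirror_pairs": pairs,
        "usable_GB": pairs * DRIVE_GB,               # one copy per pair
        "read_iops": drives * READ_IOPS_PER_DRIVE,   # reads served by both sides
        "write_iops": pairs * WRITE_IOPS_PER_DRIVE,  # writes committed to both sides
    }

for n in (8, 12):
    print(n, "drives:", mirror_pool(n))
```

Note the usable figures are raw, before ZFS metadata overhead and the free-space headroom usually kept on block-storage (iSCSI) pools, so the practical capacity is noticeably lower.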
Our current environment has 12 physical servers and 6 virtual machines that we are planning to move over to a Hyper-V failover cluster, using the new SSD FreeNAS system as an iSCSI target for the Cluster Shared Volumes and quorum. Two of these machines are currently running SQL on Server 2008 and will be migrated to VMs on 2012R2/SQL 2014 with 128GB of RAM allocated to each. All of the databases, log files, and tempdb currently reside on the same LUN on an EMC VNXe3150. I have my doubts that the EMC can supply the IOPS I need for the new VMs and believe we can see dramatic improvements using the SSD NAS. The plan is to host the virtual machines on the FreeNAS LUN, as well as the databases and log files. I will use the EMC to host the tempdb and a few of the low-activity VMs.
Our largest databases are around 30GB, with around 70 devices connecting at any given time and 15 databases total, but only 3 that are fairly active. I set up a FreeNAS fileserver running 9.2.1.8 last year that has been performing like a rock star. It has a single Xeon 1670v2, 64GB of memory, mirrored 845DC 240GB OS drives, 10GbE NICs, and a 128GB L2ARC. The ZFS config is 16 WD Red 4TB drives, arranged as four 4-drive RAIDZ2 vdevs in the zpool. I may move a couple of things to this storage device as well to split things up. It is currently only serving as an iSCSI target for a CentOS Samba server (I never could get CIFS to function in a stable enough state for production and opted for the iSCSI route using Samba3 and Winbind).
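For reference, the existing fileserver's layout works out as follows (raw figures, before ZFS metadata overhead):

```python
# Usable capacity of the existing pool: four 4-drive RAIDZ2 vdevs of
# 4TB WD Reds. RAIDZ2 spends two drives' worth of space on parity per vdev.
drives_per_vdev = 4
vdevs = 4
drive_tb = 4
parity_drives = 2  # RAIDZ2

data_tb_per_vdev = (drives_per_vdev - parity_drives) * drive_tb
usable_tb = vdevs * data_tb_per_vdev
print(usable_tb)  # 32 TB raw data capacity across 16 drives
```

A 4-wide RAIDZ2 pays the same 50% space cost as mirrors, but each vdev tolerates any two drive failures, which is part of why that pool has been so dependable.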
The main goal of this project is to ensure our storage device has enough IOPS to handle 15-20 VMs and some SQL databases, with bandwidth being secondary but still important. I also want expandability for future projects such as VDI, which could add 40-50 virtual machines. Ideally I would like some kind of HA/load balancing, or at least replication, by building an identical server in the future to replicate to. It makes me nervous using only mirrors in the zpool in a production environment, but this is most likely necessary to keep the IOPS high, and hopefully the low failure rates of these drives and their power-loss protection offset that a bit. We also keep nightly onsite and offsite backups of all systems and will be live-replicating the VMs to an offsite Hyper-V server. I've also considered looking at the TrueNAS systems, but I'm not sure what configurations they offer or how difficult it would be to add SSD pairs as needed; I will most likely contact them for a quote.
Feel free to blow holes in my logic; any suggestions are welcome.