HalfJawElite
Cadet
- Joined
- Apr 18, 2017
- Messages
- 9
Hello Everyone,
First off I want to say that I've long browsed the online forums for help and ideas on setting up a FreeNAS system, and figured it's now time for me to join in on the discussions and gain some knowledge. I'm fairly new to FreeNAS, but look forward to learning from the experts here what its full potential can be for my home lab.
And without further ado, the million-dollar question:
1. All-SSD vdevs with maxed-out RAM? (no SLOG or L2ARC)
OR
2. WD RE4 vdevs with an SSD SLOG (ZIL) and L2ARC, plus maxed-out RAM?
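For concreteness, here's a rough sketch of how the two layouts might look at the command line. Device names (da0, da1, …), the pool name, and the two-mirror width are placeholders for illustration, not a recommendation:

```shell
# Option 1: all-SSD pool as striped mirrors, no SLOG or L2ARC
# (device names are placeholders)
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3

# Option 2: WD RE4 spinners as striped mirrors,
# with SSDs added as SLOG and L2ARC
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3
zpool add tank log da4     # SSD SLOG, absorbs sync writes (NFS/ESXi)
zpool add tank cache da5   # SSD L2ARC read cache
```

Mirrors rather than RAIDZ are shown because that's the usual recommendation for VM block storage, where random-I/O performance matters most.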
I'm looking to build this as a shared SAN for my VMware ESXi cluster, to which I hope to add more hosts in the future while replacing aging ones. Currently I'm running two VMware 5.5 U3b hosts with local SSD storage for my VMs, plus an NFS "backup" volume on a QNAP NAS that I copy the VMs to whenever I snapshot them. I'd like to add a fast SAN that can hold all of the VMs, removing the need for local storage on the hosts and giving me better flexibility when I need to spin up VMs or migrate them between hosts for maintenance. I currently run some VMs for study purposes, but also have a few running for some friends. On the networking side I'll be running either 40 Gb InfiniBand or 10 Gb Ethernet, connected directly to each ESXi host until the time comes to add a switch.
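As an aside on wiring it up, assuming NFS ends up being the protocol of choice, mounting the FreeNAS export on each ESXi 5.5 host is a one-liner (the IP, export path, and datastore name below are placeholders):

```shell
# Mount a FreeNAS NFS export as an ESXi datastore
# (host IP, share path, and datastore name are placeholders)
esxcli storage nfs add -H 10.0.0.10 -s /mnt/tank/vmstore -v freenas-ds

# Verify the datastore is mounted
esxcli storage nfs list
```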
I’ve done some digging through the forums and the FreeNAS primer document and found some useful information, but nothing concrete that can help assess which option is better suited for my scenario.
Links to similar forum posts:
ESXi NFS storage SSD RAIDZ3
I'm building a high performance ZFS SAN for education
An All SSD Rig and FreeNAS
Successful SAN build for ESXi
High Performance All SSD Array?
I'm looking for more detail on which FreeNAS drive setup would deliver the fastest speed and higher IOPS. For sizing purposes, a minimum of 6-7 VMs will be running across the hosts at any time, with potentially more spun up as my other users need them. One of these VMs will run a small email server, and another two will host a few websites. Suggestions for hardware changes to the SAN are welcome as well, since the system has not been purchased yet.
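As a rough back-of-the-envelope comparison of the two options (the per-device IOPS figures below are generic ballpark assumptions, not measurements): random IOPS in a ZFS pool scale roughly with the number of vdevs, and a mirror vdev performs about like a single member device for random writes.

```python
# Ballpark random-IOPS estimate: each mirror vdev performs roughly
# like one member device; pool IOPS scale with the vdev count.
# Per-device figures are generic assumptions, not benchmarks.

def pool_iops(num_vdevs: int, per_device_iops: int) -> int:
    """Rough random-I/O estimate for a pool of mirror vdevs."""
    return num_vdevs * per_device_iops

# Option 2: four WD RE4 7200 rpm drives as two mirrors (~150 IOPS each)
print(pool_iops(2, 150))     # -> 300

# Option 1: four SATA SSDs as two mirrors (~50,000 IOPS each)
print(pool_iops(2, 50_000))  # -> 100000
```

Even with a generous SLOG and L2ARC, the spinner pool's sustained random-write floor is set by the disks themselves, which is why the all-SSD option tends to win for VM workloads at this scale.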
The bare essentials for the SAN configuration will be as follows:
Motherboard: MBD-X11SSL-F-O
CPU: Xeon E3-1220 v5 OR i3-6100
RAM: 64 GB DDR4 ECC UDIMMs (4 x 16 GB sticks)
PSU: SeaSonic 400 watt 2U PSU
Case: Norco RPC-250
Networking: Dual-port Mellanox ConnectX-2 InfiniBand OR dual-port Intel 10 Gb copper PCIe card
The two ESXi hosts are currently configured with the following setup:
Motherboard: Tyan S5512GM4NR
CPU: Xeon E3-1230 v2
RAM: 32 GB DDR3 ECC UDIMMs (4 x 8 GB sticks)
PSU: SeaSonic 400 watt 2U PSU
Case: Norco RPC-250
Networking: Dual-port Intel ET gigabit PCIe card for the vMotion network & single-port Mellanox ConnectX-2 InfiniBand OR single-port Intel 10 Gb copper PCIe card
Storage: 1 x Crucial M500 960 GB SSD