meggenberger
Cadet
- Joined
- Aug 28, 2017
- Messages
- 1
Hi all.
I'm building a new VMware lab environment and part of it is the storage. My plan is to use the following:
Server: HP DL380 G8 with two Xeon E5-2670 with 256GB Memory and second drive bay for a total of 16 2.5" drives
Drives: 16x Samsung 850 Pro 1TB 2.5" SSD
Boot Device: 16GB Sandisk SD Card
Controller Card: LSI SAS 9207-8i (P20 IT mode)
Network Card: Chelsio T520-CR
Now my reasoning behind the setup. The server was easy: it's my main workhorse and I have a few of them in use as VMware hosts and for other purposes, so I get familiarity with the hardware and no additional costs. The servers run in a rack in the basement, so noise, cooling and power are not an issue.
For drives I settled on SSD only. As it will be a VMware datastore only, the main characteristic is performance. 10TB of usable storage is enough, but with quite a few VMs the main usage pattern will be random writes and random reads, with more reads than writes. Sequential traffic is not so common as it's around 80 VMs; I do see some sequential I/O when doing work on the databases (nightly loads and maintenance jobs).
As it's 2.5" bays only, I'm limited to a max of 1TB for SATA rotating disks; 2.5" SAS tops out around 900GB as a viable option, and then there's SSD. From a performance perspective rotating disks are limited: in other environments I normally calculate with around 75-100 usable IOPS for SATA drives and ~150 IOPS for SAS. So without caching I would get ~2k IOPS from 13 data disks (16 - 1 spare - 2 parity) of SAS. With caching a bit more, but as it's not an ideal case for caching (lots of random access) I probably won't see a huge increase. Price-wise the SAS disks are also quite expensive and almost on par with the SSDs, which are specced at 100k IOPS random 4k read and 90k IOPS random 4k write. Yes, those are spec-sheet values, but I should still be able to get quite a few IOPS out of those disks.
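Just to make the back-of-the-envelope math above explicit, here is a tiny sketch of the estimate. The per-disk IOPS figures are the rule-of-thumb assumptions from this post (not benchmarks), and the naive "IOPS scale with data disks" model ignores ZFS/RAID-Z specifics:

```python
def pool_iops(data_disks: int, iops_per_disk: int) -> int:
    """Naive aggregate random IOPS: assumes IOPS scale linearly with data disks."""
    return data_disks * iops_per_disk

data_disks = 16 - 1 - 2  # 16 bays minus 1 hot spare minus 2 parity

sas_total = pool_iops(data_disks, 150)      # ~150 IOPS per 2.5" SAS disk
ssd_total = pool_iops(data_disks, 90_000)   # 90k random 4k write spec (850 Pro)

print(sas_total)  # 1950 -> the "~2k IOPS" figure above
print(ssd_total)  # 1170000, even if real-world numbers land well below spec
```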
I didn't plan to have a dedicated SLOG device, as I would assume even the shared SSDs should put out plenty of IOPS for the ZIL. I could add a PCIe-based Intel SSD 750 later if I really need a SLOG, and maybe some L2ARC on one of those faster NVMe SSDs, but I'm not sure it would be worth it. You might have different experiences here.
I thought I'd go for 1 hot spare disk and the rest in a RAID-Z2 config.
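A quick sanity check that this layout actually reaches the 10TB target. This assumes a single 15-disk RAID-Z2 vdev plus one hot spare with 1TB disks, and uses 80% as a rough rule-of-thumb ceiling for how full to let the pool get (the exact figure is debatable, especially for block storage):

```python
# Planned layout: 16 bays = 1 hot spare + (13 data + 2 parity) in RAID-Z2.
disks = 16
spares = 1
parity = 2
tb_per_disk = 1.0

raw_usable_tb = (disks - spares - parity) * tb_per_disk  # before any overhead
usable_at_80pct = raw_usable_tb * 0.8                    # keep free-space headroom

print(raw_usable_tb)              # 13.0
print(round(usable_at_80pct, 1))  # 10.4 -> just clears the 10 TB goal
```

This ignores ZFS metadata and allocation overhead, so the real usable number will be a bit lower still.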
Controller card: nothing special here, LSI-based in IT mode.
For the network card, a Chelsio T520-CR, as I read the Chelsio ones are the best fit for FreeBSD. I will use iSCSI with two separate networks on the two ports to benefit from VMware's VAAI.
For the boot device I planned on an SD card, as the server has a convenient slot. I read that some had issues with failing SD cards, but I've been running VMware off SD cards without any issues for a while now (>3 yrs). Does FreeNAS use the boot device more heavily? When VMware runs from an SD card it uses a RAM disk for scratch, and you normally specify an NFS share for all the data it writes (logs etc.). How does that work in FreeNAS? And if the card fails, how hard is it to recover with a fresh install? When you do a re-setup, does it read the whole disk config/layout from the disks, or do you have to redo it? Are there any good documents about this process?
I already have all parts except the drives. Those I'd like to order soon (together with an order a customer of mine is making, to get some decent discounts ;-) ...)
So ... any input from you guys is appreciated, as I'm still fairly new to FreeNAS and normally work with "Enterprise" storage: the kind with the bigger price tags, not necessarily the better products.
Regards,
Marc