HP DL380 G8 Setup for VMware iSCSI datastore host

Status
Not open for further replies.

meggenberger

Cadet
Joined
Aug 28, 2017
Messages
1
Hi all.

I'm building a new VMware lab environment and part of it is the storage. My plan is to use the following:

Server: HP DL380 G8 with two Xeon E5-2670 with 256GB Memory and second drive bay for a total of 16 2.5" drives
Drives: 16x Samsung 850 Pro 1TB 2.5" SSD
Boot Device: 16GB SanDisk SD card
Controller Card: LSI SAS 9207-8i (P20 IT mode)
Network Card: Chelsio T520-CR

Now my reasoning behind the setup. The server was easy: it's my main workhorse, and I already have a few of them in use as VMware hosts and for other purposes, so I get familiarity with the hardware at no additional cost. The servers run in a rack in the basement, so noise, cooling and power are not an issue.

For the drives I settled on SSD only. Since this will be a VMware datastore, the main requirement is performance. 10TB of usable storage is enough, but with around 80 VMs the main usage pattern will be random writes and random reads, with more reads than writes. Sequential traffic is not so common; I see some when doing work on the databases (nightly loads and maintenance jobs).
Since the bays are 2.5" only, I'm limited to a maximum of 1TB for SATA spindles, 900GB for SAS, or SSDs. From a performance perspective, rotating disks are limited: in other environments I normally calculate with around 75-100 usable IOPS for SATA drives and ~150 IOPS for SAS. So without caching I would get about 2k IOPS from 13 data disks (16 - 1 spare - 2 parity) on SAS. With caching a bit more, but since the workload is mostly random access it's not an ideal case for caching, so I probably won't see a huge increase. Price-wise, the SAS disks are also quite expensive, almost on par with the SSDs. Those are rated at 100k IOPS random 4k read and 90k IOPS random 4k write. Yes, those are spec-sheet values, but I should still be able to get quite a few IOPS out of those disks.
I didn't plan on a dedicated SLOG (ZIL device), as I would assume even the shared SSDs should deliver plenty of IOPS. I could add a PCIe-based Intel SSD 750 later if I really need one, and maybe some L2ARC on those faster NVMe SSDs, but I'm not sure that would be worth it. You might have different experiences here.
I thought I'd go for 1 hot spare disk and the rest in a RAIDZ2 config.
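As a rough sanity check on the numbers above, here is a back-of-the-envelope calculation. The per-drive IOPS figures are the rule-of-thumb values from this post, not measurements, and the simple per-disk sum is optimistic for RAIDZ:

```python
# Back-of-the-envelope estimate for 16 bays, 1 hot spare, RAIDZ2.
# Per-drive IOPS are rule-of-thumb figures, not measured values.
# Note: a single wide RAIDZ2 vdev tends to deliver closer to one
# drive's worth of random-write IOPS; the simple sum below mirrors
# the post's estimate and is best read as an upper bound.

BAYS = 16
SPARES = 1
PARITY = 2  # RAIDZ2

data_disks = BAYS - SPARES - PARITY  # disks contributing capacity

# Rule-of-thumb random 4k IOPS per drive type
iops_per_drive = {"sata_hdd": 100, "sas_hdd": 150, "ssd_850pro_spec": 90_000}

for drive, iops in iops_per_drive.items():
    print(f"{drive}: ~{data_disks * iops:,} aggregate random IOPS")

# Usable capacity with 1TB drives, before ZFS overhead:
print(f"usable capacity: ~{data_disks} TB")
```

The SAS case lands at ~1,950 IOPS, matching the ~2k figure above, and 13 data disks comfortably cover the 10TB usable target even after ZFS overhead.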

For the controller card, nothing special here: an LSI-based HBA flashed to IT mode.

For the network card, a Chelsio T520-CR, as I've read the Chelsio cards are the best fit for FreeBSD. I will use iSCSI with two separate networks on the two ports for multipathing, and to benefit from VMware's VAAI support.
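For reference, on the ESXi side the two iSCSI networks are usually bound to the software iSCSI adapter and combined with round-robin path selection. A sketch of the usual commands; the vmhba/vmk names and the naa. device ID are placeholders for your actual adapter, VMkernel ports, and LUN:

```shell
# Bind both VMkernel ports to the software iSCSI adapter.
# Check your names with `esxcli iscsi adapter list` and
# `esxcli network ip interface list`.
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Set round-robin path selection on the FreeNAS LUN.
# Find the real device ID with `esxcli storage nmp device list`.
esxcli storage nmp device set --device=naa.XXXX --psp=VMW_PSP_RR
```

With one VMkernel port per subnet, ESXi then sees one path per network and alternates I/O between them.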

For the boot device I planned on an SD card, as the server has a convenient slot. I've read that some people had issues with failing SD cards, but I've been running VMware off SD cards without any issues for a while now (>3 years). Does FreeNAS use the boot device more heavily? When VMware boots from SD card it uses a RAM disk for scratch, and you normally point it at an NFS share for everything it writes (logs etc.). How does FreeNAS handle that? And if the card fails, how hard is it to recover with a fresh setup? Does a reinstall read the whole disk config/layout from the disks, or do you have to redo it? Are there any good documents about this process?

I already have all the parts except the drives. Those I'd like to order soon (together with an order a customer of mine is placing, to get some decent discounts ;-) ...)

So ... any input from you guys is appreciated, as I'm still fairly new to FreeNAS and normally work with "enterprise" storage. The kind with the bigger price tags, not necessarily the better products.

Regards,
Marc

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
FreeNAS has a configuration db that you need to backup. With that, you can be back in business from a reinstall in 30 minutes.


Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
For the boot device I planned on an SD card, as the server has a convenient slot. I've read that some people had issues with failing SD cards, but I've been running VMware off SD cards without any issues for a while now (>3 years). Does FreeNAS use the boot device more heavily? When VMware boots from SD card it uses a RAM disk for scratch, and you normally point it at an NFS share for everything it writes (logs etc.). How does FreeNAS handle that? And if the card fails, how hard is it to recover with a fresh setup? Does a reinstall read the whole disk config/layout from the disks, or do you have to redo it? Are there any good documents about this process?

I am not familiar with that server chassis, so I don't know what other boot options you have. FreeNAS can boot from a zpool, and many people run a mirrored pair of boot devices so the server does not crash if one of them fails. If you are running from a single boot device and it fails, your server will crash as soon as it needs to access a file from the boot device, such as a module that is not in RAM. Most, but not all, of the system is loaded into RAM at boot. This does not happen frequently, but you will want to use a quality boot device to reduce the likelihood.
I already commented about the config db. It stores all the configuration options in FreeNAS. You can do a clean install on a fresh boot device, boot into the FreeNAS installation, upload your config db, and reboot, and it will bring you back to the configuration at the time the backup was made. If there are no hardware changes, it is relatively quick and easy.
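The config db can be saved from the web UI (System → General → Save Config), or copied off the box on a schedule. A minimal sketch, assuming the usual FreeNAS database location and a hypothetical backup host named "backuphost":

```shell
# Copy the FreeNAS configuration database to another machine.
# /data/freenas-v1.db is the standard location on FreeNAS 9.x/11.x;
# "backuphost" and the destination path are placeholders.
scp root@freenas:/data/freenas-v1.db \
    backup@backuphost:/backups/freenas-config-$(date +%Y%m%d).db
```

Restoring is then just System → General → Upload Config on a fresh install, followed by a reboot.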
As for the zpool that holds your data, it is independent of the boot pool. You could export the pool from one FreeNAS server and import it into another FreeNAS or BSD server. It is hardware-independent: the OS just needs to be able to see all the disks that make up the pool, and it will mount. You can easily migrate a pool from one hardware platform to another, even using a different model of disk controller, as long as all the disks are addressable by the OS.
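At the ZFS level that migration boils down to two commands; the pool name "tank" is an example, with the export run on the old box and the import on the new one:

```shell
# On the old server: cleanly detach the pool from the OS.
zpool export tank

# On the new server: scan attached disks for importable pools,
# then import the one you want by name.
zpool import          # lists pools found on the attached disks
zpool import tank
```

On FreeNAS itself the import is normally done through the web UI (Storage → Import Volume), which wraps the same mechanism and also registers the pool in the config db.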
FreeNAS is the free version of TrueNAS, and although I understand there are some features in TrueNAS that we don't get in FreeNAS, it is very much an enterprise-worthy OS based on BSD Unix and the ZFS file system.

I didn't see much in the way of questions in your initial post and it sounds like you have a fair understanding already. If there is something you need to know, please ask. I don't want to waste your time with things you already know.