Hi All,
I'm in the process of building an updated ESX cluster for our corporate servers at work.
After messing with HP's 3PAR equipment for weeks trying to get it working without an HP support contract, I threw my toys and decided to go with something I've been using at home for years: TrueNAS.
A corporate environment and a home environment have very different requirements, so I'm looking to cross my t's and dot my i's and get the best performance out of it.
The server itself is pretty much overkill, but I managed to point to it sitting in a corner, unused, and say "Can I have that?" and get told yep.
HPE DL360 Gen9, 72 logical cores, 256 GB RAM. Not a problem. I've installed two NVMe drives in a mirror for a SLOG.
It's connected to a Supermicro 44-disk JBOD through 12 Gb/s SAS (HD connectors), fully populated with 1.92 TB 6 Gb/s SATA SSDs.
It will be connected to the cluster via 2x 10GbE iSCSI with multipath.
Our existing environment is just under 30 TB, so that will give us plenty of spare room.
We have a couple of MSSQL servers, which are my biggest concern for performance. The rest are MS file servers, DCs, front-end web applications, etc. — run-of-the-mill stuff.
So with these 44 SSDs, I want to make sure I build the pool with the optimal vdev layout. Whatever the layout, I don't think bandwidth will be an issue, so I'm more inclined to focus on IOPS.
I was thinking of making 4x 10-disk RAIDZ2 vdevs. Would this be good, or should I be looking at more vdevs with fewer disks each?
Not really sure how to calculate IOPS etc.
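From what I've gathered, random IOPS on RAIDZ scale roughly with the number of vdevs rather than the number of disks, while mirrors scale per pair. Here's my rough back-of-envelope math for a few 44-disk layouts — the per-SSD IOPS figure is just a placeholder assumption, not a measurement, so please correct me if the rule of thumb itself is wrong:

```python
# Back-of-envelope comparison of pool layouts for 44 x 1.92 TB SATA SSDs.
# Rule of thumb (assumption, not a benchmark): a RAIDZ vdev delivers
# roughly the random IOPS of ONE member disk; a 2-way mirror vdev does
# about one disk of write IOPS (reads can be better).

DISK_TB = 1.92          # raw capacity per SSD
SSD_IOPS = 50_000       # assumed random IOPS per SATA SSD (placeholder)

def raidz_layout(vdevs, width, parity):
    """Return (usable TB before overhead, approx random write IOPS)."""
    usable = vdevs * (width - parity) * DISK_TB
    iops = vdevs * SSD_IOPS          # ~1 disk of IOPS per RAIDZ vdev
    return usable, iops

def mirror_layout(pairs):
    """Return (usable TB, approx random write IOPS) for 2-way mirrors."""
    usable = pairs * DISK_TB
    iops = pairs * SSD_IOPS          # ~1 disk of write IOPS per mirror
    return usable, iops

layouts = {
    "4 x 10-wide RAIDZ2": raidz_layout(4, 10, 2),   # 40 disks used
    "6 x 7-wide RAIDZ2":  raidz_layout(6, 7, 2),    # 42 disks used
    "22 x 2-way mirrors": mirror_layout(22),        # all 44 disks
}
for name, (usable, iops) in layouts.items():
    print(f"{name:20s} usable ~{usable:6.2f} TB, write IOPS ~{iops:,}")
```

If that math holds, mirrors give several times the write IOPS of wide RAIDZ2 at the cost of capacity — but 22 pairs still comes out well over our ~30 TB, which is why I'm second-guessing the 4x 10-wide plan.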
Hoping I can get some feedback or suggestions on this, and an explanation as to why would be extra helpful!
Thanks!