FreeNAS for iSCSI - VMware ESXi storage


curruscanis

Dabbler
Joined
Feb 15, 2018
Messages
17
Hello, I'm hoping that the FreeNAS forums can assist me with a sanity check of some of my presumptions. I have been reading a lot about FreeNAS and everything it can do. My specific goal for a new FreeNAS build is a "SAN" for connecting multiple VMware ESXi hosts. I have done some virtual testing and understand the basics of how the software works, but now I am at the point of throwing money at the solution and want to ensure that I am not just throwing money away. I have worked extensively with the Dell MDxxxx series SANs and a few other NAS solutions like QNAP / Synology, and I am looking to move away from vendors like Dell / EMC because they require you to purchase the hardware components from them at far too high a premium.

The FreeNAS solution that I am looking for will handle the following use attributes:
- Enterprise-grade hardware (or as close as possible, given the lack of dedicated R&D)
- Connect via iSCSI to multiple ESXi hosts - VM counts in the 10-20 range, with varied applications from light web server use / SQL / AD / file and print services
- High-speed storage for good VM performance
- 2 x 10Gb SFP+ networking for redundant iSCSI paths


The FreeNAS hardware that I am considering would consist of:
- 8-24 hot-swap bay chassis as a head unit (Supermicro?) - SAS presumed, probably with an LSI-based HBA controller
- 12-24 hot-swap bay expansion chassis for future use - the possibility of multiple expansion chassis would be a plus
- Dual SATA DOM for the FreeNAS boot OS
- Hardware to connect the head unit to the expansion chassis - an LSI card with an external connector and a cable to the secondary unit... hardware recommendations?
- RAM - presumably 64GB or more
- I have not listed disk sizes as they are a variable; I have multiple locations that all require the same solution but not the same data storage needs. I presume that each unit will have multiples of the same size disk in various increments - presumably 2-4TB SAS disks, for example - and extrapolations can be done after the fact for smaller or larger environments.
- I presume an SSD for a ZIL? I don't know the absolute need for this, but from what I have read it can help with a storage array used for iSCSI.


I have been reviewing various suppliers of Supermicro or similar equipment. I will keep my commentary to a single vendor, Thinkmate, as they have a fairly simple configuration page and seem to carry a lot of the Supermicro components that many people recommend on the FreeNAS forums. I am in no way associated with Thinkmate; I just found their configuration system easy to use...

So here are some hardware specifications that I configured using the Thinkmate site:
Six-Core Intel® Xeon® Processor E5-2603 v4 1.70GHz 15MB Cache (85W)
Thinkmate® 2U Datacenter Class Passive Heatsink
Intel® C612 Chipset- Dual Intel® Gigabit Ethernet - 10x SATA3 - IPMI 2.0 with LAN
4 x 16GB PC4-19200 2400MHz DDR4 ECC Registered DIMM
Thinkmate® RAX-2308 2U Chassis - 8x Hot-Swap 3.5" SATA/SAS3 - 600W Single Power
2 x 128GB SATA 6.0Gb/s Disk on Module (MLC) (Vertical)
8 x 2.0TB SAS 3.0 12.0Gb/s 7200RPM - 3.5" - Hitachi Ultrastar™ 7K6000 (512e)
LG Slim 8x DVD-RW / 24x CDR Combo (SATA)
LSI SAS 9300-8i SAS 12Gb/s PCIe 3.0 8-Port Host Bus Adapter
Thinkmate® 2U Riser Card - Left Side WIO - 4x PCIe 3.0 x8
Thinkmate® 2U Riser Card - Right Side WIO - 1x PCIe 3.0 x8

Here is the JBOD expansion chassis:
Thinkmate® STX-2312 2U Chassis - 12x Hot-Swap 3.5" SATA/SAS3 - 12Gb/s SAS Single Expander - 740W Redundant Power
12 x 2.0TB SAS 3.0 12.0Gb/s 7200RPM - 3.5" - Hitachi Ultrastar™ 7K6000 (512e)
LSI SAS 9300-8e SAS 12Gb/s PCIe 3.0 8-Port Host Bus Adapter
2 x 1-Meter External SAS Cable - 12Gb/s to 12Gb/s SAS - SFF-8644 to SFF-8644


I did not include any SSD for the ZIL cache, as I am not sure if I need one (advice please).

Thinkmate has this configuration at about $8,700.00 - not a horrible price, but money spent wisely is cheaper than money spent wrong.

I am not tied to any vendor or hardware at all, but I want to keep my considerations to "enterprise"-grade hardware that will remain readily available for a period of time in case of replacement, expansion, or component failure. Making these systems flexible in scale and price is also a goal; as stated, I have many locations that vary in size from three ESXi hosts with 20+ VMs to locations that only have 4 VMs on two hosts.

Reliability is key to my purchasing of any hardware, with the noted exception of not going with a major manufacturer's system...

Thank you all for your consideration of this issue, and I hope that I am not being redundant in my request for assistance. I was unable to find much information on "new" hardware and solutions, as most people are repurposing existing hardware and the questions and comments are tailored to that situation rather than a new build.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Most people who are asking for help here are looking for an inexpensive solution for home, but I have been doing this very type of research for a new storage system at work and I will be happy to share some of my results with you. I just need to get home first.

Is there a reason for the two 12 bay chassis solution instead of a single 24 bay chassis?

Yes, for iSCSI to VMs you will need a SLOG (cache), but I think it would be best to use NVMe in PCIe card form. I will show you some options if you want.

Yes, Supermicro is a good choice because you can configure it to the spec you need, whereas both Dell and HP have some built-in hardware that is suboptimal.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
4 x 16GB PC4-19200 2400MHz DDR4 ECC Registered DIMM
More memory is better; you might even want to max it out if the budget permits, because ZFS uses memory to cache read and write operations.

Did you see the question I asked earlier? I would be happy to help, but some feedback would be nice.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
For an ESXi dataset to perform well, you are going to need a SLOG (separate log device), and it needs to be fast at writing and have power loss protection (PLP). A PCIe NVMe drive like the Intel DC P3700 series card would be a good option. Also, you will need something for L2ARC, which you would want to be fast as well, but you could probably get away with using a small group of Samsung 960 Evo drives. There is some flexibility.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Disclaimer: I'm doing exactly what you want at home, running about 40-50 VMs to support my home, lab, etc.

The FreeNAS hardware that I am considering would consist of:
- 8-24 Hot Swap bay chassis as a head unit ( Supermicro? ) - SAS presumed probably with an LSI based HBA controller
- 12-24 Hot Swap bay expansion chassis for future use - possibility of multiple extended chassis would be a plus
More chassis means more space. I'd start with a 36-bay chassis like I have... you can build a very respectable system with 36 drives.

- Dual DOM SATA for FreeNAS boot OS
Nothing wrong with DOMs. The 36-bay chassis can handle up to 4 2.5" drives internally, so you could also go to 2.5" drives in those internal bays.

- Hardware to connect head unit to expansion chassis - LSI card with external connector and cable to secondary unit... hardware recommendations?
Again, I wouldn't be worried about the expansion chassis. If you want to do that down the road, just add a second HBA with external ports.

- RAM - presumably 64Gb or more
You can't get too much RAM. I have 192GB (that's the limit until I get into 16GB LRDIMMs, which get substantially more expensive). Don't worry about L2ARC until you max out the memory.

- I have not listed disk sizes as it is a variable, I have multiple locations that all require the same solution but not the same data storage needs. I presume that each unit will have multiples of the same size disks in various increments. Presumable 2-4TB SAS disks for an example - extrapolations can be done after the fact for smaller or larger environments.
Keep in mind that, for the purposes of bandwidth and IOPS, you want many vdevs. You also must configure them in striped mirrors (2-way or 3-way depending on your level of paranoia).

- I presume a SSD for a ZIL? I don't know the absolute need for this but from what I have read it can help with a storage array for iSCSI.
Every ZFS pool has a ZIL, within the pool. You're referring to an SLOG, which you *absolutely need* for any sort of high-performance iSCSI/NFS store. It doesn't need to be big (16GB will do unless you are running 40GbE or better), but it needs to be insanely fast. It also needs to have power loss protection (typically only found in "enterprise" drives) and very high write endurance. The enterprise Intel Optane DC P4800X drive, while expensive, is the pinnacle. If you want to run an L2ARC (again, after maxing the memory as I discussed above), a second one of these would be the best solution.
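To put a rough number on "16GB will do": here is a back-of-the-envelope sketch, assuming the commonly cited rule of thumb that the SLOG only ever holds a couple of transaction groups' worth of sync writes (roughly 5 seconds each by default) and that those writes can't arrive faster than the network line rate. The function name and parameters are just for illustration, not anything FreeNAS-specific.

```python
def slog_size_gb(link_gbit: float, txg_seconds: float = 5.0, txgs_held: int = 2) -> float:
    """Rough upper bound on SLOG usage: sync writes can't arrive faster than
    the link's line rate, and ZFS only keeps a couple of transaction groups'
    worth of data in the log before flushing it to the pool."""
    bytes_per_second = link_gbit / 8 * 1e9
    return bytes_per_second * txg_seconds * txgs_held / 1e9  # result in GB

for link in (1, 10, 40):
    print(f"{link:>2} GbE -> ~{slog_size_gb(link):.0f} GB of SLOG ever in use")
# prints roughly: 1 GbE -> ~1 GB, 10 GbE -> ~12 GB, 40 GbE -> ~50 GB
```

So at 10GbE line rate, a 16GB log device is already larger than the log will ever grow; anything beyond that is mostly useful as over-provisioning headroom.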

I have no clue about Thinkmate... I guess they're just putting their own name on Supermicro servers that they configure?

Six-Core Intel® Xeon® Processor E5-2603 v4 1.70GHz 15MB Cache (85W)
I'm not a fan of the 2603. It's just clocked too low, which impairs anything that's single-threaded (Samba especially, if you intend to also run CIFS on this thing).

Thinkmate® 2U Datacenter Class Passive Heatsink
The stock Supermicro heatsink works fine, as part of the overall cooling system.

16GB PC4-19200 2400MHz DDR4 ECC Registered DIMM
MOAR MEMORIES. :)

128GB SATA 6.0Gb/s Disk on Module (MLC) (Vertical)
You don't need drives this big. 40GB is more than enough.

2.0TB SAS 3.0 12.0Gb/s 7200RPM - 3.5" - Hitachi Ultrastar™ 7K6000 (512e)
8 drives only gives you 4 vdevs... or about 400 IOPS total. That will be tolerable for lightly-loaded VMs, but don't expect to run something data-hungry (database, Splunk, etc.) on that. Also, keep in mind that you can't exceed 50% pool utilization without performance dropping precipitously. So, this configuration only gives you ~3.3TiB usable space.
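For anyone following the arithmetic, here is a minimal sketch of where a figure like that comes from, assuming 2-way mirrors and the ~50% utilization ceiling; the gap between the ~3.6TiB below and the ~3.3TiB above is ZFS metadata and padding overhead.

```python
drives, drive_tb = 8, 2.0
mirrored_tb = drives / 2 * drive_tb        # 4 mirror vdevs x 2TB usable each = 8TB
mirrored_tib = mirrored_tb * 1e12 / 2**40  # vendor (decimal) TB -> TiB, ~7.3
print(round(mirrored_tib * 0.5, 1))        # 3.6 TiB at the 50% utilization ceiling
```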

LG Slim 8x DVD-RW / 24x CDR Combo (SATA)
Not needed

Here is the JOBD Expansion chassis:
Thinkmate® STX-2312 2U Chassis - 12x Hot-Swap 3.5" SATA/SAS3 - 12Gb/s SAS Single Expander - 740W Redundant Power
12 x 2.0TB SAS 3.0 12.0Gb/s 7200RPM - 3.5" - Hitachi Ultrastar™ 7K6000 (512e)
LSI SAS 9300-8e SAS 12Gb/s PCIe 3.0 8-Port Host Bus Adapter
2 x 1-Meter External SAS Cable - 12Gb/s to 12Gb/s SAS - SFF-8644 to SFF-8644
Again, I wouldn't worry about the expansion chassis. Build a big box, max it out, then worry about expansion.

Hopefully that helps a bit!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Is there a reason for the two 12 bay chassis solution instead of a single 24 bay chassis?
Complete agreement with @tvsjr . I was actually going to suggest the 4U chassis that has 48 x 2.5" drives across the front.
https://www.thinkmate.com/system/stx-nl-xd72-24s1-10g
More drives means more IOPS. Each individual drive does not need to be large; I see a lot of these types of systems running 300GB or 600GB drives. Part of it depends on how much storage you need, but you need to have enough drives to get the IOPS.
 

curruscanis

Dabbler
Joined
Feb 15, 2018
Messages
17
Thank you very much for your replies, I appreciate the assistance. I see the main point is more memory, and as always with storage more disks. It seems that I also need to add a SLOG drive for performance in my application. I will also move to a larger chassis with more initial space for better capacity and performance return.
 

curruscanis

Dabbler
Joined
Feb 15, 2018
Messages
17
Disclaimer: I'm doing exactly what you want at home, running about 40-50 VMs to support my home, lab, etc.


More chassis means more space. I'd start with a 36-bay chassis like I have... you can build a very respectable system with 36 drives.

I will up my chassis scale to accommodate more drives...

Nothing wrong with DOMs. The 36-bay chassis can handle up to 4 2.5" drives internally, so you could also go to 2.5" drives in those internal bays.

Would you recommend an alternative to the DOMs? Internal SATA disks perhaps, or USB / memory? (I want to stay with a mirror for insurance.)

I see that in your system you're running Intel 320 SSDs in a mirror - is this what you would recommend over the DOMs? They are certainly a better price point, but I was concerned about them taking up drive bays unless there is an internal bay in the chassis to overcome that limitation.

Again, I wouldn't be worried about the expansion chassis. If you want to do that down the road, just add a second HBA with external ports.

I just want to ensure this is always an option, thank you for your thoughts.

You can't get too much RAM. I have 192GB (that's the limit until I get into 16GB LRDIMMs, which get substantially more expensive). Don't worry about L2ARC until you max out the memory.


Keep in mind that, for the purposes of bandwidth and IOPS, you want many vdevs. You also must configure them in striped mirrors (2-way or 3-way depending on your level of paranoia).

This is very good advice, as my follow-up questions were going to be about IOPS and vdevs and how they interact for the best performance in an iSCSI role. I am sure that 2-way striped mirrors are fine for my application. Can you elaborate on the best way to configure a given number of disks... say 8 drives vs 10 vs 16, or any other number that relates to the chassis size?

Every ZFS pool has a ZIL, within the pool. You're referring to an SLOG, which you *absolutely need* for any sort of high-performance iSCSI/NFS store. It doesn't need to be big (16GB will do unless you are running 40GbE or better), but it needs to be insanely fast. It also needs to have power loss protection (typically only found in "enterprise" drives) and very high write endurance. The enterprise Intel Optane DC P4800X drive, while expensive, is the pinnacle. If you want to run an L2ARC (again, after maxing the memory as I discussed above), a second one of these would be the best solution.

I will certainly add a SLOG drive given your advice - boy, are those Intel Optane DC P4800X drives pricey though... is there a recommended alternative? I will go that route if there is some data that says it will dramatically increase the speed of the system, but a $1,700.00+ drive vs something else seems like a very high cost for the return. I see in your system you're using the Intel S3700 SSD, a $200 component - is this only a "good enough" solution, or is there a middle ground?

I have no clue about Thinkmate... I guess they're just putting their own name on Supermicro servers that they configure?

Yes, it seems they are just "branding" Supermicro products... as I stated, I am just using it as a simple configurator - no affiliation or desire to use them, just a simple interface to ballpark costs and parts.

I'm not a fan of the 2603. It's just clocked too low, which impairs anything that's single-threaded (Samba especially, if you intend to also run CIFS on this thing).

I see your system has dual CPUs, but in other reading I have seen that a dual-CPU setup is not really that advantageous. I understand your comment on clock speed, and I will see what I can do to get into a 2.5GHz+ CPU in my scenario.

The stock Supermicro heatsink works fine, as part of the overall cooling system.


MOAR MEMORIES. :)


You don't need drives this big. 40GB is more than enough.


8 drives only gives you 4 vdevs... or about 400 IOPS total. That will be tolerable for lightly-loaded VMs, but don't expect to run something data-hungry (database, Splunk, etc.) on that. Also, keep in mind that you can't exceed 50% pool utilization without performance dropping precipitously. So, this configuration only gives you ~3.3TiB usable space.



Not needed


Again, I wouldn't worry about the expansion chassis. Build a big box, max it out, then worry about expansion.

Hopefully that helps a bit!

Your assistance is greatly appreciated, thank you very much for sharing your hard earned knowledge and expertise.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The big reason for dual CPU is the additional PCIe lanes to connect more interfaces. Core count isn't the point. I would go with a high clock speed, low core count and dual socket for the lanes.

 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Thank you very much for your replies, I appreciate the assistance. I see the main point is more memory, and as always with storage more disks. It seems that I also need to add a SLOG drive for performance in my application. I will also move to a larger chassis with more initial space for better capacity and performance return.
Not just a SLOG (separate intent log) but also an L2ARC (level 2 adaptive replacement cache), and they need to be fast, high-endurance SSDs, but they don't need to be high capacity. The ones they offer on the Thinkmate site (on the page I linked above) are 2TB and 4TB. That would be massive overkill.
I will certainly add a SLOG drive given your advice, boy are those Intel Optane DC P4800X drives pricey though... is there a recommended alternative?
Don't get me wrong, I like me some overkill, but that is just too much and the Optane drives are so new that the prices are still just stupid high. Something like this should be fine:
https://www.newegg.com/Product/Product.aspx?Item=9SIA8PV5VV1499
One of the users on the forum (Stux) wrote a good guide on how to over-provision (OP) the drive, which you can find here:
https://forums.freenas.org/index.ph...n4f-esxi-freenas-aio.57116/page-4#post-403374
Stux says this drive is "extreme overkill," but that is because he is using it at home. It should be much more appropriate for your needs.
Here is another post that you might find interesting:
Testing the benefits of SLOG
https://forums.freenas.org/index.php?threads/testing-the-benefits-of-slog-using-a-ram-disk.56561
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Would you recommend an alternative to the DOM's? internal sata disks perhaps or USB / memory ( I want to stay with a mirror for insurance )
Definitely not USB. The DOMs are a great solution, but they're not known for being the cheapest. The chassis I run has four 2.5" drive bays internally (there's a bracket you need), so that's where my two 320 boot drives, my SLOG, and my L2ARC are located. I bought the 320s off eBay for $25/ea. a few years back.

This is very good advice as my follow up questions were going to be about IOPS and vdevs and how they interact for best performance in a iSCSI role. I am sure that striped mirrors 2-way is fine for my application. Can you elaborate on the best way to configure a given number of disks in an example... say 8 drives vs 10 vs 16 or any other number that is relational to the chassis size?
When measuring disk throughput, there are two main numbers... bandwidth (the amount of data transferred per unit time) and IOPS (input/output operations per second). Typically, for a block filestore like iSCSI, IOPS is the killer. 7200RPM SATA drives are good for about 100 IOPS. From a pool perspective, the total IOPS is the sum, across vdevs, of the IOPS of the slowest drive in each vdev. If you went with the same chassis I'm running and wanted maximum performance, you would fill the chassis with 36 identical drives, configured as 1 zpool of 18 2-way mirror vdevs. This would provide you ~1,800 IOPS, which is quite respectable. You would also add an ultra-fast SLOG device, lots of RAM, and a very fast L2ARC.
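Since the earlier question was how this scales with drive count (8 vs 10 vs 16 and so on), here is a minimal estimator under the same assumptions - 2-way mirrors and ~100 IOPS per 7200RPM drive. The function name and defaults are illustrative, not a FreeNAS tool.

```python
def pool_iops(drives: int, iops_per_drive: int = 100) -> int:
    """Each 2-way mirror vdev delivers roughly one drive's worth of random IOPS."""
    return (drives // 2) * iops_per_drive

for n in (8, 12, 16, 24, 36):
    print(f"{n:>2} drives -> {n // 2:>2} mirror vdevs -> ~{pool_iops(n)} IOPS")
# 8 -> ~400, 12 -> ~600, 16 -> ~800, 24 -> ~1200, 36 -> ~1800
```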


I will certainly add a SLOG drive given your advice, boy are those Intel Optane DC P4800X drives pricey though... is there a recommended alternative? I will go that route if there is some data that says it will dramatically increase the speed of the system but considering a $1700.00+ drive vs something else seems very high cost for return. I see in your system your using the intel s3700 SSD, a $200 component, is this only a "good enough" solution, is there a middle ground?
It's good enough, but realize that "good enough" means things won't be quite as fast. The SLOG device should have power loss protection and high write endurance. Any solution meeting these requirements will work. I would try to stick with something PCIe, as it will be orders of magnitude faster than SATA. My system is ~3 years old at this point and I haven't seen the need to upgrade, but it's also not being used in a true production environment (just running my home and my lab, although that still consists of a complete Active Directory environment, certificate authority, SIEM, etc. etc. etc.)


I see your system has dual CPUs, but in other reading I have seen that a dual-CPU setup is not really that advantageous. I understand your comment on clock speed, and I will see what I can do to get into a 2.5GHz+ CPU in my scenario.
Dual chips gets you double the PCIe lanes and double the memory (most dual-socket boards will only activate half the memory and half the PCIe slots if you only run one chip). I wanted to run a good amount of memory (192GB is the best I can do without going to much more expensive 16GB LRDIMMs) and, well, I like overkill.
If you're stuck buying new, I would look at one of the "frequency optimized" chips like the E5-2643 v4. But, those are quite expensive. There's a huge market for used systems that are last-generation so they're a few years old, but still extremely competent for FreeNAS purposes. If you can buy used, consider something like:
https://unixsurplus.com/collections...-2x-e5-2680-2-8ghz-192gb-2-port-10gbe-sfp-nic

That system is ready to go. If you like being paranoid, buy two and have one sitting around spare. Spend your money on more drives and better SLOG/L2ARC devices and you'll have a far higher performing system for less cost.
 

curruscanis

Dabbler
Joined
Feb 15, 2018
Messages
17
Definitely not USB. The DOMs are a great solution, but they're not known for being the cheapest. The chassis I run has four 2.5" drive bays internally (there's a bracket you need), so that's where my two 320 boot drives, my SLOG, and my L2ARC are located. I bought the 320s off eBay for $25/ea. a few years back.


When measuring disk throughput, there are two main numbers... bandwidth (the amount of data transferred per unit time) and IOPS (input/output operations per second). Typically, for a block filestore like iSCSI, IOPS is the killer. 7200RPM SATA drives are good for about 100 IOPS. From a pool perspective, the total IOPS is the sum, across vdevs, of the IOPS of the slowest drive in each vdev. If you went with the same chassis I'm running and wanted maximum performance, you would fill the chassis with 36 identical drives, configured as 1 zpool of 18 2-way mirror vdevs. This would provide you ~1,800 IOPS, which is quite respectable. You would also add an ultra-fast SLOG device, lots of RAM, and a very fast L2ARC.



It's good enough, but realize that "good enough" means things won't be quite as fast. The SLOG device should have power loss protection and high write endurance. Any solution meeting these requirements will work. I would try to stick with something PCIe, as it will be orders of magnitude faster than SATA. My system is ~3 years old at this point and I haven't seen the need to upgrade, but it's also not being used in a true production environment (just running my home and my lab, although that still consists of a complete Active Directory environment, certificate authority, SIEM, etc. etc. etc.)



Dual chips gets you double the PCIe lanes and double the memory (most dual-socket boards will only activate half the memory and half the PCIe slots if you only run one chip). I wanted to run a good amount of memory (192GB is the best I can do without going to much more expensive 16GB LRDIMMs) and, well, I like overkill.
If you're stuck buying new, I would look at one of the "frequency optimized" chips like the E5-2643 v4. But, those are quite expensive. There's a huge market for used systems that are last-generation so they're a few years old, but still extremely competent for FreeNAS purposes. If you can buy used, consider something like:
https://unixsurplus.com/collections...-2x-e5-2680-2-8ghz-192gb-2-port-10gbe-sfp-nic

That system is ready to go. If you like being paranoid, buy two and have one sitting around spare. Spend your money on more drives and better SLOG/L2ARC devices and you'll have a far higher performing system for less cost.

That is a pretty good system at a good price; I think that may be a good place to start as a test / lab unit in preparation for a "new" purchase for production.

I suppose that I could add drives for the OS internally, plus storage and a SLOG / L2ARC, possibly using the "over-provisioned" method discussed by Chris Moore, referencing the Stux user build.

The build would then consist of:

Supermicro 4U System
1x X9DRi-LN4F+ Motherboard
- Integrated Quad Intel 1000BASE-T Ports
- Integrated Software Supported RAID
- Integrated IPMI 2.0 Management
2x Intel Xeon E5-2680 V1 Octo Core 2.7GHz
128GB DDR3
36x 3.5" Drive Caddies
1x AOC Dual Port 10GbE SFP+
1x LSI 9211-8i (JBOD IT mode)
Dual Power Supply

Either DOM or SATA SSD boot drives, mirrored

Intel P3700 for SLOG / L2ARC, over-provisioned as per these instructions:
https://forums.freenas.org/index.ph...n4f-esxi-freenas-aio.57116/page-4#post-403374


Additional components I still have questions about:
Hard drives - is there a significant reason to go with SAS over SATA given this type of hardware setup?
What is the best drive count? Would it be best to populate all drive bays initially rather than expand, given that I will be wanting IOPS for iSCSI?

I am still fuzzy on how to allocate the drives properly for the best results... I am not familiar enough with pools and vdevs to ask the right questions yet.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
IMO, it's not worth the price increase to go from SATA to SAS if you're sticking with 7200RPM drives. If you're going to 10K or 15K drives, then SAS is the way... but those are dramatically more expensive, loud, and they generate tons of heat.

As far as drive count, the more drives, the better. If you have the financial wherewithal to populate all 36 bays AND you need that sort of IOPS, that's the way to go. You would configure those drives as one pool consisting of 18 vdevs, with each vdev being a mirrored pair. The GUI makes this configuration dead-simple.
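For a concrete picture of that layout, here is a sketch of what 18 two-way mirror vdevs look like when expressed as a single pool. The pool name and device names are hypothetical, and on FreeNAS you would build this in the GUI rather than typing it out.

```python
devices = [f"da{i}" for i in range(36)]                        # hypothetical disk names
pairs = [devices[i:i + 2] for i in range(0, len(devices), 2)]  # 18 two-way mirrors
layout = " ".join("mirror " + " ".join(pair) for pair in pairs)
print(f"zpool create tank {layout}")
# -> zpool create tank mirror da0 da1 mirror da2 da3 ... mirror da34 da35
```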

As far as SLOG/L2ARC, @Stux has it figured out. Traditionally, you want to keep those two on separate devices... but he's done the research and has it sorted, IMO. Some day, I'll probably move to a similar configuration... or simply build a complete PCIe SSD array for my VMs.
 

curruscanis

Dabbler
Joined
Feb 15, 2018
Messages
17
IMO, it's not worth the price increase to go from SATA to SAS if you're sticking with 7200RPM drives. If you're going to 10K or 15K drives, then SAS is the way... but those are dramatically more expensive, loud, and they generate tons of heat.

As far as drive count, the more drives, the better. If you have the financial wherewithal to populate all 36 bays AND you need that sort of IOPS, that's the way to go. You would configure those drives as one pool consisting of 18 vdevs, with each vdev being a mirrored pair. The GUI makes this configuration dead-simple.

As far as SLOG/L2ARC, @Stux has it figured out. Traditionally, you want to keep those two on separate devices... but he's done the research and has it sorted, IMO. Some day, I'll probably move to a similar configuration... or simply build a complete PCIe SSD array for my VMs.


Thank you again for your commentary.

If there is no real reason to go with SAS, then I will stick with SATA for the 25% cost difference... Is there any concern with the LSI HBA card controlling that many SATA vs SAS drives? I am probably being overly concerned, but I have read about so many different configurations that I was confused about whether this was an issue.


If I were to start with 12 x 3TB NAS-style SATA drives - like Western Digital Reds - with the expectation of expanding to 36 drives in the future, would this give me a good environment to test theories and get numbers for VMware performance? I realize that more disks means more presumed performance, but it would be nice to see the incremental values as the number of drives increases.

I would think that 3TB drives would give me radically more storage than I need for now, even with 12 disks, and if 18 disks is 1800 IOPS, I presume the math says that 12 drives would be 600 IOPS?
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
The system I pointed you at has two expander backplanes in it (front and rear)... so you'll run one cable from the HBA to the front and one to the rear. You won't have any speed issues, even if you fill the system with drives.

18 *vdevs* have 1800 IOPS assuming 7200RPM drives... that's 36 drives, because each vdev is a mirrored pair. If you went with 12 WD Reds (which are 5900RPM drives, and more like 75 IOPS), you would have 6 vdevs - or about 75*6 = 450 IOPS - and, assuming 3TB drives, about 8TB of usable space (accounting for overhead and the 50% maximum rule on iSCSI pools).
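Checking those two figures with the same rough method used earlier in the thread, assuming ~75 IOPS per 5900RPM Red and 3TB per drive (the ~8TB quoted above also leaves room for ZFS overhead):

```python
drives, drive_tb, iops_per_drive = 12, 3.0, 75   # WD Red assumptions: ~5900RPM, ~75 IOPS
vdevs = drives // 2                               # 6 two-way mirrors
print(vdevs * iops_per_drive)                     # 450 IOPS
raw_tib = vdevs * drive_tb * 1e12 / 2**40         # ~16.4 TiB of mirrored space
print(round(raw_tib * 0.5, 1))                    # ~8.2 TiB at the 50% ceiling
```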

If you are concerned about IOPS but aren't concerned about heat generation, I would recommend going with WD Red Pro or HGST NAS drives, just for the IOPS boost of the 7200RPM drive.
 

curruscanis

Dabbler
Joined
Feb 15, 2018
Messages
17
The system I pointed you at has two expander backplanes in it (front and rear)... so you'll run one cable from the HBA to the front and one to the rear. You won't have any speed issues, even if you fill the system with drives.

18 *vdevs* have 1800 IOPS assuming 7200RPM drives... that's 36 drives, because each vdev is a mirrored pair. If you went with 12 WD Reds (which are 5900RPM drives, and more like 75 IOPS), you would have 6 vdevs - or about 75*6 = 450 IOPS - and, assuming 3TB drives, about 8TB of usable space (accounting for overhead and the 50% maximum rule on iSCSI pools).

If you are concerned about IOPS but aren't concerned about heat generation, I would recommend going with WD Red Pro or HGST NAS drives, just for the IOPS boost of the 7200RPM drive.


That is great advice thank you for the tutorial.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
BTW, do make sure you have plans to keep this system cool. A 4U rackmount box consuming 300-600 watts depending on how many drives you add will throw off some significant heat, and it won't be quiet. It needs to live in a conditioned space. Throwing it in some little closet without any sort of air conditioning will let those drives get hot, massively shortening their lives.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
The system I pointed you at has two expander backplanes in it (front and rear)... so you'll run one cable from the HBA to the front and one to the rear. You won't have any speed issues, even if you fill the system with drives.

18 *vdevs* have 1800 IOPS assuming 7200RPM drives... that's 36 drives, because each vdev is a mirrored pair. If you went with 12 WD Reds (which are 5900RPM drives, and more like 75 IOPS), you would have 6 vdevs - or about 75*6 = 450 IOPS - and, assuming 3TB drives, about 8TB of usable space (accounting for overhead and the 50% maximum rule on iSCSI pools).

If you are concerned about IOPS but aren't concerned about heat generation, I would recommend going with WD Red Pro or HGST NAS drives, just for the IOPS boost of the 7200RPM drive.

If you’re really concerned about IOPS, use enterprise SSDs to build the array and don’t bother with SLOG

E.g., a mirrored P4800X will probably have nearly a million IOPS. (I didn't bother looking up the specs... but they're about as good as you can get.)
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Re SLOG: these days the Optane 900p is better than the P3700, BUT you need to check whether it's compatible with ESXi yet.

The "I don't care about the cost" option is the P4800X.
 

curruscanis

Dabbler
Joined
Feb 15, 2018
Messages
17
BTW, do make sure you have plans to keep this system cool. A 4U rackmount box consuming 300-600 watts depending on how many drives you add will throw off some significant heat, and it won't be quiet. It needs to live in a conditioned space. Throwing it in some little closet without any sort of air conditioning will let those drives get hot, massively shortening their lives.

Understood, all of my systems live in controlled server rooms.
 