SAN Build with FreeNAS for VMware

munnavai

Dabbler
Joined
Oct 30, 2019
Messages
11
Hi,

I need some suggestions on building a SAN with FreeNAS for VMware.
These ESXi hosts will mostly run VMs with Oracle DB installed.
VMware will access the storage through iSCSI multipathing from 3 ESXi hosts.
We are planning to build this with:

- Dell R730xd, 12 bay
- 128GB ECC RAM
- 2 x 128GB SSD for a mirrored boot pool
- 8 x 14TB Seagate Exos SAS drives

Pool1 RAID10 - 4 drives for file storage
Pool2 RAID10 - 4 drives for VMware

Now, I assume there will be heavy read/write load from the Oracle DBs. How can I achieve the best read/write performance for the Oracle DBs across all 3 ESXi hosts?
Please give me suggestions on cache planning and how to achieve the best performance.

Thanks
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
FreeNAS doesn't do "RAID10", perhaps you mean striped mirrors.

https://www.ixsystems.com/community/threads/terminology-and-abbreviations-primer.28174/

You'd also be well advised to browse through

https://www.ixsystems.com/community/threads/the-path-to-success-for-block-storage.81165/

VM storage and database storage are very similar problems. Also be aware that whatever controller Dell sold you in that R730xd is probably not suitable to the task; please also check out

https://www.ixsystems.com/community...s-and-why-cant-i-use-a-raid-controller.81931/

ZFS's write cache is your system memory. For the fastest write performance, turn off sync writes. There is a lot of confusion from people who mistakenly think the write cache is the SLOG/ZIL; SLOG and ZIL are never faster than simply disabling sync writes. If you need sync writes, for example because you have valuable database or VM data, you should consider a high-quality SLOG device. See

https://www.ixsystems.com/community/threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/

That article is a bit dated as far as specific devices are concerned, but the general description is still spot on.
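For reference, sync behavior is a per-dataset/zvol property, and an SLOG is attached as a "log" vdev. It looks something like the following (pool and device names are examples only, not a recommendation to disable sync):

    # check / set the sync policy on the zvol backing an iSCSI extent
    zfs get sync tank/vmware
    zfs set sync=always tank/vmware      # treat every write as synchronous (safest for VM/DB data)
    zfs set sync=disabled tank/vmware    # fastest, but in-flight data is lost on power failure
    # attach a mirrored SLOG to the pool
    zpool add tank log mirror nvd0 nvd1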

Modern SLOG devices are typically Optane SSDs.

For read caching, large quantities of inexpensive SSD are recommended. You can probably support 1TB of L2ARC on your system, which will greatly increase read speeds.
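L2ARC is added as a "cache" vdev, for example (device name is illustrative; the FreeNAS GUI can do the same thing):

    zpool add tank cache nvd2
    # note: L2ARC headers live in RAM, which is why its practical size is bounded by system memory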
 

munnavai

Dabbler
Joined
Oct 30, 2019
Messages
11
Hi @jgreco

Thanks for your help. I have read the links you provided; now please give me your valuable suggestions on the following.

Here is my plan:
- Dell R730xd, 12 bay front, 2 bay rear. RAID card in HBA mode or IT mode.
- 128GB RAM
- 8 x 14TB Seagate Exos SAS 7200 RPM

1. Pool1 (striped mirror vdevs) - 4 drives - for dedup, compression, file storage
2. Pool2 (striped mirror vdevs) - 4 drives - for VMware storage through iSCSI multipathing
3. ARC using the RAM
4. SLOG - mirrored Intel Optane or Samsung SM1715 NVMe drive
5. Boot drives - 2 x SLC USB, mirrored

Now, I need your suggestions on:
1. What else can I do to make things faster for the VMs, as they will all have Oracle DB inside?
2. As I understand it, the SLOG is per pool, so do I need another SLOG for the 2nd pool?
3. Can I partition a single SLOG device and allocate it to both pools?
4. With my pools, how much SLOG space will actually be used? What size should I buy?
5. Is the Intel Optane SSD 900P a good choice for SLOG? Can you comment on its PLP feature? I have read some blogs that say it has PLP but not enhanced PLP. Please clarify.

Thanks a lot.
 
Last edited:

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
Hi @jgreco

Thanks for your help. I have read the links you provided; now please give me your valuable suggestions on the following.

Here is my plan:
- Dell R730xd, 12 bay front, 2 bay rear. RAID card in HBA mode or IT mode.
- 128GB RAM
- 8 x 14TB Seagate Exos SAS 7200 RPM

1. Pool1 (striped mirror vdevs) - 4 drives - for dedup, compression, file storage
2. Pool2 (striped mirror vdevs) - 4 drives - for VMware storage through iSCSI multipathing
3. ARC using the RAM
4. SLOG - mirrored Intel Optane or Samsung SM1715 NVMe drive
5. Boot drives - 2 x SLC USB, mirrored

Now, I need your suggestions on:
1. What else can I do to make things faster for the VMs, as they will all have Oracle DB inside?
2. As I understand it, the SLOG is per pool, so do I need another SLOG for the 2nd pool?
3. Can I partition a single SLOG device and allocate it to both pools?
4. With my pools, how much SLOG space will actually be used? What size should I buy?
5. Is the Intel Optane SSD 900P a good choice for SLOG? Can you comment on its PLP feature? I have read some blogs that say it has PLP but not enhanced PLP. Please clarify.

Thanks a lot.

I have been running systems like this for nearly 6 years now. Our systems are a mix of R720xd, R730xd, and R740xd servers, mostly with JBODs.

1: CPU matters. Make sure the R730xd has enough CPU to handle parity calculations when a failure happens, in addition to its normal workloads. For what you propose above, you should be fine with 12 to 16 cores total.

2: DO NOT, I repeat, DO NOT use a PERC RAID card for this. Besides the part where the FreeNAS hardware guide recommends against using RAID cards, even in passthrough mode, the PERC cards have some quirks that can sometimes make them a pain to deal with in FreeNAS. Trust me on this one, stay away from the Dell RAID cards. Your best bet, if you would like to stay with Dell hardware, is to use an HBA330 mini which will slot onto the motherboard in place of the PERC RAID card. You can also get a PCI Express version of the HBA330. These can be had new, from Dell, for ~$300 or ~$100 from ebay. They are based on the LSI 9300 controller and we have had very good experiences with them. These cards can be flashed to IT mode, but in my experience, that is not necessary.

3: Make sure the backplane for your R730xd is properly connected to the HBA330. Make sure you are using two cables. If you only use a single cable, you may have an issue where FreeNAS will not properly detect your drives as multipath, and you will not be able to hot swap them.

4: Consider using battery backed NVDIMMs for your SLOG devices instead of Optane drives. If you don't know what that is, think of a memory DIMM with an equal amount of RAM and flash chips, with a battery attached. They use a memory slot instead of a PCI express slot or drive bay. If power to your server were to suddenly fail, the battery connected to the NVDIMM would provide enough power for the data on the RAM chips of the NVDIMM to be written to the flash chips. If your server supports NVDIMMs, and the R730xd does, they are a far better choice for an SLOG device vs Optane drives for a number of reasons. First of all, because you are writing to RAM instead of flash, NVDIMMs are significantly faster than Optane drives. Additionally, because you only write to the flash of an NVDIMM in the event of a power failure, and you are always writing to the flash of an Optane drive, the Optane drives will wear out far faster than the NVDIMMs. Optane drives are theoretically capable of writing data fast enough to survive power loss, but Intel does not guarantee that capability.

5: As far as SLOG size goes, you only need to store 5 seconds of data. Anything more than that is wasted space. Two 10Gbe network interfaces only require an SLOG size of 12.5GB, at worst. My suggestion here is to go with 4 x Micron battery backed 16GB NVDIMM modules, running them as two mirrors, one for each pool.
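Where that 12.5GB figure comes from, roughly (assuming both links are fully saturated with sync writes):

    2 x 10 Gbit/s = 20 Gbit/s ≈ 2.5 GB/s
    2.5 GB/s x 5 seconds of outstanding writes ≈ 12.5 GB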

I realize it's a lot to read but hopefully that helps.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Excellent post but a couple quick nits to pick.

... Optane drives are theoretically capable of writing data fast enough to survive power loss, but Intel does not guarantee that capability.
For enterprise workloads, use the DC P48xx series Optane devices, where power-loss-protection (and warranty coverage in a shared-access environment) is guaranteed. The 900p/905p "prosumer" drives have specific language denying warranty if used in a "server" style setup. Weigh the risk of the lack of support vs. the cost savings.

As far as SLOG size goes, you only need to store 5 seconds of data.
This is outdated. The values now are derived from the vfs.zfs.dirty_data tunable family and presently max out at 4GB regardless of the size of your SLOG device. If you're using 16GB NVDIMMs you could bump those values up to match your effective usable space in order to be able to absorb larger bursts of writes (at the cost of some of your RAM as well, mind you) - but this is a balancing act you walk between the speed of your network vs. your vdevs vs. memory usage.
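For anyone who wants to look at these on their own system, they are sysctls on FreeNAS. The 16GB value below is only an illustration; depending on your version, vfs.zfs.dirty_data_max_max caps the runtime value and may need to be raised as a boot-time tunable under System > Tunables:

    # current limits, in bytes
    sysctl vfs.zfs.dirty_data_max vfs.zfs.dirty_data_max_max
    # example: allow up to ~16GB of dirty data
    sysctl vfs.zfs.dirty_data_max=17179869184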
 
Last edited:

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
This is outdated. The values now are derived from the vfs.zfs.dirty_data tunable family and presently max out at 4GB regardless of the size of your SLOG device. If you're using 16GB NVDIMMs you could bump those values up to match your effective usable space in order to be able to absorb larger bursts of writes (at the cost of some of your RAM as well, mind you) - but this is a balancing act you walk between the speed of your network vs. your vdevs vs. memory usage.

Good information, I did not know that.
 

munnavai

Dabbler
Joined
Oct 30, 2019
Messages
11
I have been running systems like this for nearly 6 years now. Our systems are a mix of R720xd, R730xd, and R740xd servers, mostly with JBODs.

1: CPU matters. Make sure the R730xd has enough CPU to handle parity calculations when a failure happens, in addition to its normal workloads. For what you propose above, you should be fine with 12 to 16 cores total.

2: DO NOT, I repeat, DO NOT use a PERC RAID card for this. Besides the part where the FreeNAS hardware guide recommends against using RAID cards, even in passthrough mode, the PERC cards have some quirks that can sometimes make them a pain to deal with in FreeNAS. Trust me on this one, stay away from the Dell RAID cards. Your best bet, if you would like to stay with Dell hardware, is to use an HBA330 mini which will slot onto the motherboard in place of the PERC RAID card. You can also get a PCI Express version of the HBA330. These can be had new, from Dell, for ~$300 or ~$100 from ebay. They are based on the LSI 9300 controller and we have had very good experiences with them. These cards can be flashed to IT mode, but in my experience, that is not necessary.

3: Make sure the backplane for your R730xd is properly connected to the HBA330. Make sure you are using two cables. If you only use a single cable, you may have an issue where FreeNAS will not properly detect your drives as multipath, and you will not be able to hot swap them.

4: Consider using battery backed NVDIMMs for your SLOG devices instead of Optane drives. If you don't know what that is, think of a memory DIMM with an equal amount of RAM and flash chips, with a battery attached. They use a memory slot instead of a PCI express slot or drive bay. If power to your server were to suddenly fail, the battery connected to the NVDIMM would provide enough power for the data on the RAM chips of the NVDIMM to be written to the flash chips. If your server supports NVDIMMs, and the R730xd does, they are a far better choice for an SLOG device vs Optane drives for a number of reasons. First of all, because you are writing to RAM instead of flash, NVDIMMs are significantly faster than Optane drives. Additionally, because you only write to the flash of an NVDIMM in the event of a power failure, and you are always writing to the flash of an Optane drive, the Optane drives will wear out far faster than the NVDIMMs. Optane drives are theoretically capable of writing data fast enough to survive power loss, but Intel does not guarantee that capability.

5: As far as SLOG size goes, you only need to store 5 seconds of data. Anything more than that is wasted space. Two 10Gbe network interfaces only require an SLOG size of 12.5GB, at worst. My suggestion here is to go with 4 x Micron battery backed 16GB NVDIMM modules, running them as two mirrors, one for each pool.

I realize it's a lot to read but hopefully that helps.


You are simply awesome !!!
Now I am changing the hardware selection and need your advice on that too.
As I have only 12 bays, I am planning it this way:

1. Pool1 (striped mirror vdevs) - 4 drives - for dedup, compression, file storage
2. Pool2 (striped mirror vdevs) - 4 drives - for VMware storage through iSCSI multipathing
AND
3. Pool3 (striped mirror vdevs) - 4 x 4/8TB SSD - for high-I/O-intensive VMs through iSCSI multipathing


(1) HBA330 mini (with 2 cables to the backplane; hopefully my hardware vendor knows how to do that)
(2) 4 x Micron battery-backed 16GB NVDIMMs (if I can manage to purchase them)
Or
(3) 2 x DC P48xx series Optane devices. I am choosing the Intel® Optane™ SSD DC P4801X Series (100GB, M.2 110mm, PCIe x4, 3D XPoint™).

(4) And finally, for L2ARC, please suggest an Intel SSD model for a read-intensive workload and for this amount of space.

- Can you please tell me if there is a PCIe adapter that can take these 2 x M.2 Optane drives? Any recommendation? The 100GB model is the smallest capacity and best fits my requirements.
- And please tell me how to make the partitions. Is it done from the command line? What is the process?


Thanks a lot.
 

munnavai

Dabbler
Joined
Oct 30, 2019
Messages
11
Excellent post but a couple quick nits to pick.


For enterprise workloads, use the DC P48xx series Optane devices, where power-loss-protection (and warranty coverage in a shared-access environment) is guaranteed. The 900p/905p "prosumer" drives have specific language denying warranty if used in a "server" style setup. Weigh the risk of the lack of support vs. the cost savings.


This is outdated. The values now are derived from the vfs.zfs.dirty_data tunable family and presently max out at 4GB regardless of the size of your SLOG device. If you're using 16GB NVDIMMs you could bump those values up to match your effective usable space in order to be able to absorb larger bursts of writes (at the cost of some of your RAM as well, mind you) - but this is a balancing act you walk between the speed of your network vs. your vdevs vs. memory usage.

Hi @HoneyBadger

Thanks a lot. Please shed some light on the following:

- Can you please tell me if the drives I am going to use for the storage (Seagate Exos SAS 7200 RPM) are OK, or do you recommend something else? I need as much capacity as possible for the storage.
- According to my pool size, is there any ZFS value to change to make the I/O faster? Let's say I will use the DC P48xx series for the ZIL and a good Intel SSD for the L2ARC.
- And according to my pool size and disk speed, is there anything I can do about the "larger bursts of writes" you mentioned? I am thinking about my Oracle DB VMs, as they put a good load on the drives. Can I make the transfers faster, say by changing to jumbo frames, etc.?

I hope that makes sense.

Thanks
 

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
You are simply awesome !!!
Now I am changing the hardware selection and need your advice on that too.
As I have only 12 bays, I am planning it this way:

1. Pool1 (striped mirror vdevs) - 4 drives - for dedup, compression, file storage
2. Pool2 (striped mirror vdevs) - 4 drives - for VMware storage through iSCSI multipathing
AND
3. Pool3 (striped mirror vdevs) - 4 x 4/8TB SSD - for high-I/O-intensive VMs through iSCSI multipathing


(1) HBA330 mini (with 2 cables to the backplane; hopefully my hardware vendor knows how to do that)
(2) 4 x Micron battery-backed 16GB NVDIMMs (if I can manage to purchase them)
Or
(3) 2 x DC P48xx series Optane devices. I am choosing the Intel® Optane™ SSD DC P4801X Series (100GB, M.2 110mm, PCIe x4, 3D XPoint™).

(4) And finally, for L2ARC, please suggest an Intel SSD model for a read-intensive workload and for this amount of space.

- Can you please tell me if there is a PCIe adapter that can take these 2 x M.2 Optane drives? Any recommendation? The 100GB model is the smallest capacity and best fits my requirements.
- And please tell me how to make the partitions. Is it done from the command line? What is the process?


Thanks a lot.

I have had good results from the StarTech line of adapters. If you have the extra PCI Express slots, I would suggest going with a single drive per slot as the card complexity and price increase dramatically the more drives per card you want.

Make sure you understand your workloads. You may not actually need a dedicated L2ARC device. We've found that the primary ARC in RAM is more than enough for our workloads.
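One quick way to gauge that before buying hardware is to watch the ARC hit ratio under a representative load, for example:

    # cumulative ARC hits/misses since boot
    sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
    # current and maximum ARC size, in bytes
    sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max

If the hit ratio is already very high, an L2ARC device will not buy you much.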

As far as a specific model for an L2ARC device, pretty much any enterprise grade SAS or NVME SSD from a major vendor will work. We use Micron drives but Intel, Samsung, Western Digital, or Seagate all make good SSDs.

You should not have to do any manual work to create the pools. The storage pool wizard provided in the FreeNAS UI is more than sufficient for most things. If you need a step by step procedure, the FreeNAS User Guide provides a very good walkthrough.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
These questions all relate to each other, in a sense.

- Can you please tell me if the drives I am going to use for the storage (Seagate Exos SAS 7200 RPM) are OK, or do you recommend something else? I need as much capacity as possible for the storage.
- According to my pool size, is there any ZFS value to change to make the I/O faster? Let's say I will use the DC P48xx series for the ZIL and a good Intel SSD for the L2ARC.
- And according to my pool size and disk speed, is there anything I can do about the "larger bursts of writes" you mentioned? I am thinking about my Oracle DB VMs, as they put a good load on the drives. Can I make the transfers faster, say by changing to jumbo frames, etc.?

1. Pool1 (striped mirror vdevs) - 4 drives - for dedup, compression, file storage
2. Pool2 (striped mirror vdevs) - 4 drives - for VMware storage through iSCSI multipathing
AND
3. Pool3 (striped mirror vdevs) - 4 x 4/8TB SSD - for high-I/O-intensive VMs through iSCSI multipathing

For your main "file pool" Pool1, if you are looking at maximum capacity then spinning disks are still the least expensive way to get there. The Seagate drives are fine. 7200rpm NL-SAS drives are for the most part all the same. You will not have any SMR or "shingled" drives in that category. I would suggest that you do not use deduplication though; it is memory-intensive, and you may not get the results you desire. Definitely use compression.
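Both are per-dataset properties and can be set from the FreeNAS dataset options or the shell, for example (the pool/dataset name is illustrative):

    zfs set compression=lz4 pool1
    zfs set dedup=off pool1
    # see how much compression is actually saving
    zfs get compressratio pool1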

For Pool2, this will also be NL-SAS - but if this is for VMware storage, bear in mind that only four disks will limit the available IOPS. Use this for larger, slower virtual machines. An L2ARC would be a good choice here if there are certain workloads that will be identified as "hot" on this pool.

Pool3 - Your data vdev SSDs should be capable of handling the write workload and offering sustained performance - if you're ordering them as options from Dell, consider the Toshiba PX05SV "Mixed workload" line since you are talking about I/O intensive Oracle DBs. They are rated at 3 DWPD, so the 3.84TB model would be good for roughly 21PB over its 5-year warranty life.
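The endurance figure works out roughly as:

    3.84 TB x 3 drive writes/day x 365 days x 5 years ≈ 21 PB written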

(2) 4 x Micron battery-backed 16GB NVDIMMs (if I can manage to purchase them)
Or
(3) 2 x DC P48xx series Optane devices. I am choosing the Intel® Optane™ SSD DC P4801X Series (100GB, M.2 110mm, PCIe x4, 3D XPoint™).
Note that the Micron NVDIMM solution will be significantly faster than the Optane cards - "ten times faster" would not be unreasonable to say.

Compare the two results here:

 

munnavai

Dabbler
Joined
Oct 30, 2019
Messages
11
These questions all relate to each other, in a sense.





For your main "file pool" Pool1, if you are looking at maximum capacity then spinning disks are still the least expensive way to get there. The Seagate drives are fine. 7200rpm NL-SAS drives are for the most part all the same. You will not have any SMR or "shingled" drives in that category. I would suggest that you do not use deduplication though; it is memory-intensive, and you may not get the results you desire. Definitely use compression.

For Pool2, this will also be NL-SAS - but if this is for VMware storage, bear in mind that only four disks will limit the available IOPS. Use this for larger, slower virtual machines. An L2ARC would be a good choice here if there are certain workloads that will be identified as "hot" on this pool.

Pool3 - Your data vdev SSDs should be capable of handling the write workload and offering sustained performance - if you're ordering them as options from Dell, consider the Toshiba PX05SV "Mixed workload" line since you are talking about I/O intensive Oracle DBs. They are rated at 3 DWPD, so the 3.84TB model would be good for roughly 21PB over its 5-year warranty life.


Note that the Micron NVDIMM solution will be significantly faster than the Optane cards - "ten times faster" would not be unreasonable to say.

Compare the two results here:


Hi @HoneyBadger

Thanks a million !!!


- Now I need to understand what to do with the extra SLOG space available on the Optane drives. If I purchase 100GB x 2 = 100GB of mirrored SLOG, with 16GB for Pool1, 16GB for Pool2, and 16GB for Pool3 (later), I still have 50%+ of the space left over.

- I want to use the SLOG for more than just 5 seconds of data; within 5 seconds it will not fill the 16GB. Since this is persistent data, filling it with more data should give higher performance, right? Is there any way to write to the SLOG for more than 5 seconds?

- And is there any GUI option to partition the disks before I add them as SLOG and L2ARC? So far I have only found it being done at the CLI.

Thanks again.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
If I understand things correctly, you want to run ESXi and then FreeNAS in a VM. In that case you will need to pass through an HBA to your FreeNAS VM. That HBA will then not be available to ESXi, so in essence you will need 2 HBAs in your server. Without this approach, ZFS will not have direct access to the disks, with all the associated risks.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Hi @HoneyBadger

Thanks a million !!!


- Now I need to understand what to do with the extra SLOG space available on the Optane drives. If I purchase 100GB x 2 = 100GB of mirrored SLOG, with 16GB for Pool1, 16GB for Pool2, and 16GB for Pool3 (later), I still have 50%+ of the space left over.

- I want to use the SLOG for more than just 5 seconds of data; within 5 seconds it will not fill the 16GB. Since this is persistent data, filling it with more data should give higher performance, right? Is there any way to write to the SLOG for more than 5 seconds?

- And is there any GUI option to partition the disks before I add them as SLOG and L2ARC? So far I have only found it being done at the CLI.

Thanks again.

Sorry for the delay.

1. Leave the extra space unpartitioned for the drive to use for wear-leveling; it will help increase overall endurance. If you want to use a device for L2ARC, use a separate one; Optane is overpriced for that job.

2. The ZFS write behavior is more complex than "fill for five seconds" but you can set up some tunables to allow for more dirty data to match your desired 16GB SLOG partition size. This won't change the sustained write speed of your vdevs though, so it would only "increase" the burst write speeds. Counter-question - what is the network connection that will be used?

3. There's no GUI option to partition SLOG devices, as sharing them across pools has its own caveats (high activity on one pool can impact another).
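If you do decide to split a pair of devices at the CLI despite those caveats, the general shape is something like this (device, label, and pool names are examples only; the FreeNAS GUI otherwise expects whole disks, so manage such vdevs with care):

    # one 16GB partition per pool on each Optane device
    gpart create -s gpt nvd0
    gpart add -t freebsd-zfs -a 1m -s 16g -l p1slog-a nvd0
    gpart add -t freebsd-zfs -a 1m -s 16g -l p2slog-a nvd0
    gpart create -s gpt nvd1
    gpart add -t freebsd-zfs -a 1m -s 16g -l p1slog-b nvd1
    gpart add -t freebsd-zfs -a 1m -s 16g -l p2slog-b nvd1
    # attach each mirrored pair as a log vdev
    zpool add pool1 log mirror gpt/p1slog-a gpt/p1slog-b
    zpool add pool2 log mirror gpt/p2slog-a gpt/p2slog-b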
 

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
If I understand things correctly, you want to run ESXi and then FreeNAS in a VM. In that case you will need to pass through an HBA to your FreeNAS VM. That HBA will then not be available to ESXi, so in essence you will need 2 HBAs in your server. Without this approach, ZFS will not have direct access to the disks, with all the associated risks.

To the OP: Hopefully, this isn't your plan. I read your post and understood it as if you were creating a FreeNAS system that other ESXi servers would access. If your plan is to use the system mentioned in your post as an ESXi host running a FreeNAS VM, I would strongly caution against that for production workloads, especially latency-sensitive workloads such as Oracle DB storage.

It is possible to run FreeNAS as a VM and there are many scenarios where it can be beneficial. However, there are also significant drawbacks, specifically around CPU contention, performance, and stability. Additionally, VMware does not support the virtualization of "software defined" storage as a method of sharing local storage across the network. Again, it is possible, but not recommended.

My personal experience around this involves using virtual FreeNAS VMs to break out chunks of VMWare vSAN storage for our development teams, into VMFS datastores. The idea being that the developers could go nuts within a small, defined space, and I really didn't care what they did so long as it fit within the space they were allocated. It was a good idea in theory but in practice it led to a considerable number of stability and reliability issues for the developers and we eventually dropped the concept.
 

munnavai

Dabbler
Joined
Oct 30, 2019
Messages
11
To the OP: Hopefully, this isn't your plan. I read your post and understood it as if you were creating a FreeNAS system that other ESXi servers would access. If your plan is to use the system mentioned in your post as an ESXi host running a FreeNAS VM, I would strongly caution against that for production workloads, especially latency-sensitive workloads such as Oracle DB storage.

It is possible to run FreeNAS as a VM and there are many scenarios where it can be beneficial. However, there are also significant drawbacks, specifically around CPU contention, performance, and stability. Additionally, VMware does not support the virtualization of "software defined" storage as a method of sharing local storage across the network. Again, it is possible, but not recommended.

My personal experience around this involves using virtual FreeNAS VMs to break out chunks of VMWare vSAN storage for our development teams, into VMFS datastores. The idea being that the developers could go nuts within a small, defined space, and I really didn't care what they did so long as it fit within the space they were allocated. It was a good idea in theory but in practice it led to a considerable number of stability and reliability issues for the developers and we eventually dropped the concept.


Hi @ChrisRJ and @firesyde424

No, absolutely not, it is not my intention to run FreeNAS as a VM.
I needed a guideline for building a SAN with FreeNAS and ZFS to be accessed from VMware.

I have gotten plenty of insight into building the SAN. As I am new to building FreeNAS systems, more specific suggestions will be highly appreciated.

Thanks.
 

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
I think your plan seems fairly sound, based on what you've posted here. The one further suggestion I would make is, once you've put this all together, test it and simulate some workloads for a few days to make sure that it's going to perform the way you expect or need it to perform. If it doesn't, because you are only testing, you can make changes much quicker without having to evacuate data or schedule downtime.
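For the workload simulation, something as simple as fio run from inside a Linux test VM on the new datastore gives a reasonable first approximation of an OLTP-style load (the parameters below are only a starting point, not a tuned Oracle profile):

    fio --name=oltp-sim --ioengine=libaio --direct=1 \
        --rw=randrw --rwmixread=70 --bs=8k \
        --iodepth=32 --numjobs=4 --size=10G \
        --runtime=300 --time_based --group_reporting

Watch the latency percentiles as well as the throughput; a database workload cares far more about the former.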
 