AIO ESXi+FreeNAS Whitebox Build


Wizman87
Cadet · Joined Dec 7, 2017 · Messages: 9
Hello, there!

I'm really new to the FreeNAS forums and wanted to personally thank everyone for all of their support here. FreeNAS is a really great community and I've been learning quite a lot from everybody!

With that said, I have a couple of questions regarding my upcoming all-in-one (AIO) whitebox build. My current plan is to run ESXi bare-metal, virtualize FreeNAS on top of ESXi, emphasize redundancy, and leave room for future expansion in my home lab testing environment. At this time, I have already bought most of my hardware and am now down to choosing a hardware RAID controller, host bus adapter (HBA), and two solid-state drives (SSDs). Before I get to any questions, though, here's my current build:

-Server Build-
-In Progress-
  • Solid State Drives: 32-512GB (2x???GB)
  • Hardware RAID Controller: ???
  • Host Bus Adapter: ???
-Requirements-
  1. Run enterprise-grade SSDs in RAID 1 for ESXi + FreeNAS + local VMs and datastores.
  2. Run six HDDs in RAID-Z2 using HBA pass-through to FreeNAS for replication, data backups, media, etc.
  3. Hardware RAID controller must be compatible with ESXi, have a battery-backed cache (BBU), support SATA 6Gb/s, and optionally support 2 or more SSDs and >4TB HDDs.
  4. Host Bus Adapter must be compatible with HBA pass-through for FreeNAS, have battery backed up cache (BBU), and must support SATA 6Gb/s, 6 or more HDDs, and >4TB HDDs.
-Optional Requirements-
  • I would prefer to have ESXi on mirrored SSDs/HDDs for redundancy instead of USB devices because:
      a) the failure rate of (hot-running) USB 3.0/2.0 devices and SD cards is higher than that of SSDs;
      b) the log files generated by the ESXi and FreeNAS boot environments can be stored on the SSDs;
      c) ESXi and FreeNAS boot times and write performance are much quicker on SSDs; and
      d) USB 3.0/2.0 devices and SD cards cannot utilize RAID 1 while running both ESXi and FreeNAS, whereas SSDs can (though, with a hardware RAID controller and ESXi or FreeNAS running bare-metal, it is possible to use RAID 1 with dual disk-on-modules (DOMs), dual USB 3.0/2.0 devices (e.g. 16-32GB SanDisk Cruzer Fit), or dual SD cards).
-Future Plans-
  • I will eventually add another Intel Xeon E5-2680 v2 CPU and an additional 32-64GB of RAM.
  • I want to eventually get a dedicated Intelligent Platform Management Interface (IPMI) module (e.g. AXXRMM4 or AXXRMM4LITE).
  • I want to eventually get two redundant power supplies (PSUs). However, to do so, I will most likely require a new server chassis.
  • I want to eventually get a racked uninterruptible power supply (UPS).
  • I may eventually build and connect another racked server or DAS/NAS/SAN to my whitebox using iSCSI/FC interfaces and 10Gb/s+ connections.
-Questions and Concerns-
  • Is an SLOG (e.g. for ZIL/L2ARC/ARC) required? Would an SLOG only increase read and write performance?
  • Is SSD TRIM support lost when using hardware RAID? Is there a work around (e.g. for syncs and writes)?
  • ESXi does not support fake RAID (e.g. on-board SATA/SAS RAID controllers), with the rare exception of generic AHCI/IDE/RAID driver support (not recommended!), or pass-through of such controllers as HBAs (AFAIK).
I frequently read that the IBM M1015, Dell H200/H310/H700, and LSI 9211-8i/9200-8e hardware RAID controllers and host bus adapters are highly recommended, and that they must be flashed to IT mode to enable HBA pass-through for FreeNAS, or to IR mode for running RAID arrays. Right now, I do not know the advantages or disadvantages of those (just yet), as I am still researching the differences online. But if anyone has any recommendations, it would be greatly appreciated!

Also, I am somewhat unfamiliar with how ZFS works - in terms of both the filesystem itself and future hard disk drive expansion (beyond slowly adding larger hard disk drives one-by-one and using ZFS mirrors/vdevs?). I've sketched my current understanding of expansion just below - please correct me if it's wrong! If anyone has any advice on that too, again, I would be very grateful!

Lastly, for enterprise-grade SSDs, I have been eyeing the Intel S3520/S3610 series. Are there any other great brands, or specific low-wattage, low-heat, high drive-writes-per-day (DWPD) or total-bytes-written (TBW), high-IOPS SSD models out there?
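From my research so far, the usual ZFS expansion options seem to be either adding a whole new vdev to the pool, or replacing every disk in an existing vdev with larger ones (the vdev only grows once the last disk has been swapped). Here's a rough CLI sketch of my understanding, assuming a hypothetical pool named "tank" and FreeBSD-style device names (I gather FreeNAS normally does all of this through the GUI):

    # Option 1: grow the pool by adding a second RAID-Z2 vdev (six more disks)
    zpool add tank raidz2 da6 da7 da8 da9 da10 da11

    # Option 2: grow an existing vdev by replacing each disk with a larger one,
    # one at a time, waiting for each resilver to finish
    zpool set autoexpand=on tank
    zpool replace tank da0 da12
    zpool status tank    # wait for the resilver to complete, then repeat for the next disk

As I understand it, you cannot add a single extra disk to an existing RAID-Z2 vdev - expansion happens per vdev. Please correct me if any of that is off!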

As I move forward, I am motivated to learn as much as I can, knowing full well that I will encounter some roadblocks ahead. Likewise, for those who are also learning, if anyone has any questions, please feel free to ask me! I am more than happy to help in any way that I can!

Thank you in advance for everyone's support!
 

Ericloewe
Server Wrangler · Moderator · Joined Feb 15, 2014 · Messages: 20,194
There's a lot to comment on here, but I'm on my phone, so I'll stick to the overview:

First of all, Hardware RAID with FreeNAS is a recipe for poor performance and data loss.

Second, your goals - and by extension your build - are confusing. Why do you care how quickly the thing boots? Over three years, my disks are in the low single-digit power cycles, which is a decent proxy for server power cycles. Adding a RAID controller just to boot ESXi and FreeNAS is wasteful and does FreeNAS no good.
 

Wizman87
Cadet · Joined Dec 7, 2017 · Messages: 9
There's a lot to comment on here, but I'm on my phone, so I'll stick to the overview:

First of all, Hardware RAID with FreeNAS is a recipe for poor performance and data loss.

Second, your goals - and by extension your build - are confusing. Why do you care how quickly the thing boots? Over three years, my disks are in the low single-digit power cycles, which is a decent proxy for server power cycles. Adding a RAID controller just to boot ESXi and FreeNAS is wasteful and does FreeNAS no good.
My first post may have been unclear in regard to what my initial intentions and goals were, so I will try to help clarify those two points as best as I can.

To start, my original reason for running hardware RAID was to improve IOPS performance and reliability for my VMs. As a secondary benefit, hardware RAID would also provide faster boot times and RAID 1 redundancy for ESXi and FreeNAS (since they would all be running on the same SSD boot device(s)). And while it's not advisable to run consumer SSDs behind a hardware RAID controller (e.g. due to shorter life expectancy and the loss of TRIM support), enterprise-grade SSDs largely make up for this with over-provisioning, consistent read/write IOPS, power-loss protection, vendor support, and higher DWPD/TBW ratings.

With that said, since my first post I have decided to go with a single, larger 256-512GB enterprise-grade SSD for my homelab setup, despite knowing the disadvantage of having a single point of failure. I believe that is the right decision for my particular use case: 1) in terms of budget, I know what I can work with right now; 2) in terms of expansion, I have left plenty of room for upgrading my whitebox build in the future; and 3) in terms of reliability, I can always back up (e.g. to my mechanical and external HDDs), upgrade (e.g. to a larger capacity), or replace my SSD (e.g. with the same make or model) later on.

I hope that clears up any confusion and questions that you may have had, Ericloewe!
 

Ericloewe
Server Wrangler · Moderator · Joined Feb 15, 2014 · Messages: 20,194
Okay, so VMs running on ESXi-managed storage. Okay, now I understand what you were going for. So FreeNAS doesn't have to serve block storage to the VMs, right?

For starters, check out the Hardware Recommendations guide to get a better overview of your hardware doubts. I'll have a more detailed reply at a later time, if you remind me by answering this thread.
 

Wizman87
Cadet · Joined Dec 7, 2017 · Messages: 9
Okay, so VMs running on ESXi-managed storage. Okay, now I understand what you were going for. So FreeNAS doesn't have to serve block storage to the VMs, right?

For starters, check out the Hardware Recommendations guide to get a better overview of your hardware doubts. I'll have a more detailed reply at a later time, if you remind me by answering this thread.
Yes, that's correct. The VMs would be running on ESXi-managed storage. Additionally, I will be using a FreeNAS VM to manage ZFS for my 6x4TB Western Digital Red HDDs, which are passed through via the HBA.
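For reference, the plan for that pool is a single RAID-Z2 vdev of all six drives. I understand this is normally set up through the FreeNAS GUI, but the rough CLI equivalent (with hypothetical device names) would be something like:

    # Six 4TB WD Reds passed through to the FreeNAS VM, in one RAID-Z2 vdev
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5
    zpool status tank

(I know the GUI also handles partitioning, swap, and GPT labels, so the above is only meant to illustrate the intended layout.)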

I will be sure to check out the Hardware Recommendations Guide! Thank you, Ericloewe!
 

Ericloewe
Server Wrangler · Moderator · Joined Feb 15, 2014 · Messages: 20,194
When you upgrade, look into used Supermicro 4U chassis. It's hard to beat their value. Some sellers even clean them up and they're like new when they get to you. Also don't be afraid to buy an entire server and get rid of the internals and keep the chassis, if the price is right.

Universal Power Supply
Uninterruptible Power Supply. ;)
I want to eventually get a racked uninterruptible power supply (UPS).
See?

Host Bus Adapter must be compatible with HBA pass-through for FreeNAS, have battery backed up cache (BBU
Not only is there no such thing, you do not want it and you do not need it. Caches on the disk controller are a very bad thing - a necessary evil in the case of hardware RAID, and completely avoidable with ZFS.

I may eventually build and connect another racked server or DAS/NAS/SAN to my whitebox using iSCSI/FC interfaces and 10Gb/s+ connections.
If you just want more storage in FreeNAS, you just have to attach an external disk shelf with an SAS expander and add the disks to the pool. Though, with a 4U, you'd have a lot of room to expand beyond the initial six drives.

Is an SLOG (e.g. for ZIL/L2ARC/ARC) required? Would an SLOG only increase read and write performance?
A lot of confusion here:
Every ZFS pool has a ZIL to safely buffer sync writes without destroying performance.
A Separate Log device is a disk/set of disks which is dedicated to the ZIL, offloading it from the pool and allowing for decent performance. An SLOG is only useful in sync write scenarios (generally block storage and little else).

The ARC is ZFS' read cache and is in RAM. The Level 2 ARC is an extension of the ARC to mass storage media (e.g. SSDs) to have an additional layer of caching before needing to read from the pool itself.
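To make it concrete, attaching either kind of device to a pool is a one-liner (hypothetical device names, pool named tank):

    # SLOG: a mirrored pair of small, power-loss-protected SSDs dedicated to the ZIL
    zpool add tank log mirror da8 da9

    # L2ARC: a single SSD extending the ARC (no redundancy needed - it is only a read cache)
    zpool add tank cache da10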
 

Wizman87
Cadet · Joined Dec 7, 2017 · Messages: 9
This is really great information! Thank you!

When you upgrade, look into used Supermicro 4U chassis. It's hard to beat their value. Some sellers even clean them up and they're like new when they get to you. Also don't be afraid to buy an entire server and get rid of the internals and keep the chassis, if the price is right.
Yeah, their value is great if you can catch a good deal on them. I was eyeing a few used Supermicro 4U chassis before I purchased my Rosewill 4U RSV-R4000 chassis, but they're a bit out of my price range right now.

Uninterruptible Power Supply. ;)
Heh, I saw that "universal" typo yesterday and quickly fixed it before your most recent post! Nice catch! ;)

Not only is there not such a thing, you do not want it and you do not need it. Caches on the disk controller are a very bad thing, a necessary evil in the case of hardware RAID and completely avoidable with ZFS.
After reading the "Confused about that LSI card?" thread on the forums and researching the various hardware RAID and HBAs online, I concluded that I need a hardware RAID controller in the future, and an additional HBA for my needs right now. In regards to that, are there any newer RAID controllers that can be cross-flashed between both IT and IR mode? I would like to have the option available so that I can switch between the two different modes when I need them, but if that's not possible, then I won't worry about it too much. Also, is cross-flashing to IT/IR mode only possible with hardware RAID controllers that are based on the SAS 2008 chipset? And seeing as I won't need cache for an HBA, would having a hardware RAID controller with cache (that can be cross-flashed to an HBA) be beneficial for later?

If you just want more storage in FreeNAS, you just have to attach an external disk shelf with an SAS expander and add the disks to the pool. Though, with a 4U, you'd have a lot of room to expand beyond the initial six drives.
I had in mind the eventual possibility of setting up a SAN with iSCSI/FC interfaces to expand my homelab environment - more so for testing purposes, but also for continued educational and career development. Theoretically speaking, wouldn't performance over iSCSI/FC be faster than using a SAS expander?

Every ZFS pool has a ZIL to safely buffer sync writes without destroying performance.
A Separate Log device is a disk/set of disks which is dedicated to the ZIL, offloading it from the pool and allowing for decent performance. An SLOG is only useful in sync write scenarios (generally block storage and little else).

The ARC is ZFS' read cache and is in RAM. The Level 2 ARC is an extension of the ARC to mass storage media (e.g. SSDs) to have an additional layer of caching before needing to read from the pool itself.
I see! If I am understanding this correctly, a separate SLOG device (preferably SSD) would be beneficial to have for increasing write performance with block storage. In that case, I probably wouldn't need an SLOG unless my storage array became too large? Though, I do have a 24TB storage array, so perhaps I might need one after all. From what I've been reading online, L2ARC isn't really necessary either until you've reached a point of using up all of your RAM. Is that correct?
 

Ericloewe
Server Wrangler · Moderator · Joined Feb 15, 2014 · Messages: 20,194
After reading the "Confused about that LSI card?" thread on the forums and researching the various hardware RAID and HBAs online, I concluded that I need a hardware RAID controller in the future, and an additional HBA for my needs right now. In regards to that, are there any newer RAID controllers that can be cross-flashed between both IT and IR mode? I would like to have the option available so that I can switch between the two different modes when I need them, but if that's not possible, then I won't worry about it too much. Also, is cross-flashing to IT/IR mode only possible with hardware RAID controllers that are based on the SAS 2008 chipset? And seeing as I won't need cache for an HBA, would having a hardware RAID controller with cache (that can be cross-flashed to an HBA) be beneficial for later?
As long as the hardware RAID is not for ZFS...
Most LSI SAS2008/2308/3008 cards support both IT and IR, but IR is very low-end RAID and only really usable for RAID 0/1/10. If you need high-end hardware RAID, you can get an LSI SAS3108-based (or any LSI SAS3 RAID controller, really) RAID controller, which will work as a plain HBA using the mrsas driver, which is the default on FreeNAS. It's not as tested as IT mode HBAs, but it does provide the same functionality on paper. Expect to pay absurd amounts of cash for one, though.
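For reference, the crossflash itself usually boils down to a few sas2flash commands from a DOS or EFI shell (the firmware and BIOS file names vary by card and firmware package, so treat these as placeholders):

    sas2flash -listall                          # confirm the controller and current firmware are detected
    sas2flash -o -e 6                           # erase the existing flash (needed on some OEM cards)
    sas2flash -o -f 2118it.bin -b mptsas2.rom   # write the IT firmware and, optionally, the boot ROM
    sas2flash -listall                          # verify the new firmware version and IT mode

OEM cards like the M1015/H310 may need an extra megarec step to clear the OEM flash first, so follow a guide for your specific card.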

I had in-mind the eventual possibility of setting up a SAN with iSCSI/FC interfaces for my homelab environment.
The important question is "what's going to be using it?" - ESXi?

Theoretically speaking, wouldn't performance over iSCSI/FC be faster than using a SAS expander?
No way in hell. Besides the additional overhead of piping SCSI via TCP/IP and all the complexities that entails, bandwidth is much lower. One 10GbE link vs. four aggregated 12Gb/s SAS channels (6Gb/s with SAS2 gear).
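(Roughly: 4 x 12Gb/s = 48Gb/s for a SAS3 wide port, or 4 x 6Gb/s = 24Gb/s with SAS2, versus 10Gb/s for a single 10GbE link - and that's before iSCSI/TCP overhead.)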

In that case, I probably wouldn't need an SLOG unless my storage array became too large?
Block storage has nothing to do with the amount of data you're dealing with. I could have a 200GB iSCSI share and a 200TB SMB share.

From what I've been reading online, L2ARC isn't really necessary either until you've reached a point of using up all of your RAM. Is that correct?
You need L2ARC if you've added as much RAM as you can realistically add (cost, physical limitations, etc.) but still need to increase your ARC hit rate to reach desired performance.
using up all of your RAM
Empty RAM is wasted RAM. It will always be close to full.
 

Wizman87
Cadet · Joined Dec 7, 2017 · Messages: 9
As long as the hardware RAID is not for ZFS...
Most LSI SAS2008/2308/3008 cards support both IT and IR, but IR is very low-end RAID and only really usable for RAID 0/1/10. If you need high-end hardware RAID, you can get an LSI SAS3108-based (or any LSI SAS3 RAID controller, really) RAID controller, which will work as a plain HBA using the mrsas driver, which is the default on FreeNAS. It's not as tested as IT mode HBAs, but it does provide the same functionality on paper. Expect to pay absurd amounts of cash for one, though.
Yeah, I wouldn't be using the hardware RAID controller for ZFS - only in IR mode for RAID 1 on my SSD boot devices (for ESXi/FreeNAS/VMs), or in IT mode as an HBA for my HDDs (for local ZFS data storage), but not both at the same time. I had only heard that certain hardware RAID controllers could be flashed to IT or IR mode, switched back and forth between the two, or, very rarely, used in both modes simultaneously. Since I didn't know what was possible, thank you for the clarification.

As you mentioned in the Hardware Recommendations Guide, the LSI 9300-8i is one of the SAS3 controllers, albeit at a price point that is nearly double or triple that of current SAS2 controllers. There are still a few other IBM ServeRAID, Intel, and Supermicro SAS 3108 controllers out there (e.g. listed in the ServeTheHome "LSI RAID Controller and HBA Complete Listing Plus OEM Models" thread) that could become more reasonably priced in the future. But again, you will more than likely pay even more absurd amounts for those than for the LSI 9300-8i - unless, of course, businesses and corporations start retiring a massive supply of them.

Needless to say, I am still interested in the SAS3 controllers, but more as a future consideration - I won't be using or purchasing one anytime soon. I still like having some knowledge of the subject and an idea of what that upgrade might entail down the road, though.

With that said, I will definitely be picking up a SAS2 controller that can support both IT and IR modes - as I will be using it as an HBA now and for RAID 1/0/10 in the future.

Which SAS2 controller(s) would you recommend?

The important question is "what's going to be using it?" - ESXi?
Yes, as I currently have it planned, ESXi will be using the SAN. That may change as I test out Dell EMC and other data storage solutions, though. Moving forward, I don't want to be locked into ESXi, and I believe I will have the flexibility to change solutions when that time comes.

Besides the additional overhead of piping SCSI via TCP/IP and all the complexities that entails, bandwidth is much lower. One 10GbE link vs. four aggregated 12Gb/s SAS channels (6Gb/s with SAS2 gear).
Ah, I had not considered aggregated 12Gb/s SAS channels - I didn't even know that was possible - and I was forgetting that we're no longer constrained to 6Gb/s SAS channels and 10GbE link aggregation. One thing I did consider, however, was the price-to-performance of 10GbE links versus 8-16Gb/s+ FC links.

Block storage has nothing to do with the amount of data you're dealing with. I could have a 200GB iSCSI share and a 200TB SMB share.
In terms of the amount of data, are you referring to bandwidth utilization and maxing out the channel? I will have to research this a bit more before I fully understand the concept. Also, I could have rephrased my previous question a bit better. Perhaps the question that I should be asking is this: would having a separate SLOG device help in my particular use case?

You need L2ARC if you've added as much RAM as you can realistically add (cost, physical limitations, etc.) but still need to increase your ARC hit rate to reach desired performance.
Ah, okay. Got it! That makes a lot more sense.

Empty RAM is wasted RAM. It will always be close to full.
I believe I calculated out the correct amount of RAM that I need for my whitebox build. My understanding is that FreeNAS requires a minimum of 8GB of RAM and another 512MB~1GB of RAM per TB for ZFS. As I already have 24TB for a RAID-Z2 array, I should be good with 32GB of RAM (8GB + 1GBx24 = 32GB). Does ZFS utilize RAM based on total storage capacity (24TB) or usable storage capacity (14~16TB)?
 

Ericloewe
Server Wrangler · Moderator · Joined Feb 15, 2014 · Messages: 20,194
Which SAS2 controller(s) would you recommend?
Anything with an SAS2308.

In terms of the amount of data, are you referring to bandwidth utilization and maxing out the channel?
Neither. The question is block storage (disk images, if you will) versus file storage (SMB, etc.). Though NFS seems to rely on sync writes even for files.
would having a separate SLOG device help in my particular use case?
Unlikely.

Does ZFS utilize RAM based on total storage capacity (24TB) or usable storage capacity (14~16TB)?
ZFS uses all available RAM. The question is "how much RAM do you need for the desired performance level?"
As for raw versus used versus usable - it's a rule of thumb. It's deliberately vague.
 

Wizman87
Cadet · Joined Dec 7, 2017 · Messages: 9
Anything with an SAS2308.
I spent most of today researching and comparing hardware RAID controllers and HBAs. I narrowed my selection down to the LSI 9207-8i (~$60) controller after considering the price, performance, and feature differences between the LSI 9240-8i (~$65), Dell H310 (~$30), and IBM ServeRAID M1015 (~$75) controllers (which are all essentially the same card underneath the SAS 2008 chipset). Additionally, I noticed that the LSI 9207-8i could be flashed to both IT and IR mode, but wasn't sure if it was possible to also cross-flash to an LSI 9240-8i (for RAID 5/50 support). I can't seem to find anything on that online.

Neither. The question is block storage (disk images, if you will) versus file storage (SMB, etc.). Though NFS seems to rely on sync writes even for files.
Ah, you were referencing the low-level file system and application layers. I'll definitely have to look further into how writes are managed by their respective block and file systems.

Unlikely.
Perfect.

ZFS uses all available RAM. The question is "how much RAM do you need for the desired performance level?"
As for raw versus used versus usable - it's a rule of thumb. It's deliberately vague.
Got it. I'll just have to keep an eye on my server's RAM usage and performance levels, then, and adjust accordingly. I'll be sure to keep that rule of thumb in mind too.

Thank you, Ericloewe!
 

Ericloewe
Server Wrangler · Moderator · Joined Feb 15, 2014 · Messages: 20,194
which are all essentially the same card underneath the SAS 2308 chipset)
No, of those, only the SAS 9207 is. The 2008 is okay, but the 2308 is better and for that price a no-brainer.

Also, I noticed that the LSI 9207-8i could be flashed to both IT and IR modes, but wasn't sure if it was cross-flashable to an LSI 9240-8i (for RAID 5/50 support).
Not possible. Not that you'd want to, given the abysmal performance people report.

I'll just have to keep an eye on my server's RAM usage
No. As I said, nearly all RAM will always be in use, same as any modern OS.

Windows does the same, though at a different layer than ZFS:

[Attached screenshot: Windows Task Manager memory graph - nearly all RAM in use, only a thin sliver free]

The tiny rectangle at the end is free memory. The rest between it and used memory is cached stuff from disk. Using ZFS, it would be the ARC, with the OS doing minimal caching.
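On FreeNAS you can see the equivalent from a shell (or on the Reporting page in the GUI), roughly:

    sysctl kstat.zfs.misc.arcstats.size    # current ARC size, in bytes
    sysctl vfs.zfs.arc_max                 # the configured ARC ceiling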
 

Wizman87
Cadet · Joined Dec 7, 2017 · Messages: 9
No, of those, only the SAS 9207 is. The 2008 is okay, but the 2308 is better and for that price a no-brainer.
That was a typo on my part - I meant they were all based on the same SAS 2008 chipset. Sorry about that! I do realize the LSI 9207-8i is based on the SAS 2308 chipset.

Thank you for the reassurance. I'll be picking up an LSI 9207-8i then!

Not possible. Not that you'd want to, given the abysmal performance people report.
No worries - it's not a deal-breaker. I'll be getting a real hardware RAID controller (or an LSI 9240-8i) later on anyway.

No. As I said, nearly all RAM will always be in use, same as any modern OS.

Windows does the same, though at a different layer than ZFS:

The tiny rectangle at the end is free memory. The rest between it and used memory is cached stuff from disk. Using ZFS, it would be the ARC, with the OS doing minimal caching.
Right. I understand how ZFS allocates RAM a bit better now. My original thought process was that ZFS only used RAM as a type of read/write buffer, but now I see that is the job of the ARC. I just want to ensure that I can still share RAM with my VMs while using ZFS.
 

Ericloewe
Server Wrangler · Moderator · Joined Feb 15, 2014 · Messages: 20,194
Yes, the ARC will yield under memory pressure, no worries there.
It might be too slow in some specific scenarios and cause the memory manager to page some stuff out to disk. If this impacts you, just limit the ARC to a lower amount and it'll stop.
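Capping it is a single tunable. A minimal sketch, assuming a hypothetical 16GiB ceiling:

    # FreeNAS GUI: System > Tunables > Add Tunable
    #   Variable: vfs.zfs.arc_max
    #   Value:    17179869184    (16GiB, in bytes - pick whatever ceiling you need)
    #   Type:     loader
    # Reboot, then verify with:
    sysctl vfs.zfs.arc_max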
 