Am I doing this right? FreeNAS baremetal to ESXi Host for HomeLab

Status
Not open for further replies.

gamedude9742

Dabbler
Joined
Oct 1, 2014
Messages
26
Hello Everyone,

It's been a few years since I played with FreeNAS and I am now getting back into it. A few years older, a few years smarter, and a lot more reading this time around.

So let me cut to the chase... I am building a bare metal FreeNAS box, since from what I've read many of the senior forum members here say putting FreeNAS in a VM is pretty much asking for problems and difficulties. I intend to use this box as a SAN for an ESXi lab environment via NFS or iSCSI (I haven't done enough research yet into which protocol is better; last I checked NFS was ahead, but that was a few years back).

The specifications of the build are as follows:



Of course the case, fans, etc. are all included, but I don't feel those require as much detail.

If you're still reading at this point, thank you very much, and I appreciate any and all input. If I missed something stupid, by all means call me out on it! I'd much rather hear it now than after spending more money. Thanks for the validation / corrections in advance.
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
A few things: I'm not sure about the raw performance of iSCSI vs NFS, but one of the benefits of iSCSI is that you can treat it as a locally attached block device and format it with whatever filesystem you want, while NFS exposes ZFS itself as the underlying filesystem. Also, iSCSI needs a bigger buffer of free space in your pool (I remember reading on here that keeping 50% of the pool free is optimal).
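Just to make the difference concrete, here's a minimal sketch of the two share types from the command line; the pool name "tank", the size, and the dataset names are all placeholders, and on FreeNAS you'd normally do this through the GUI anyway:

Code:
# iSCSI: a zvol, i.e. a block device that ESXi formats with VMFS
# -s makes it sparse so the full 500G isn't reserved up front
zfs create -s -V 500G -o volblocksize=16K tank/vm-iscsi

# NFS: an ordinary dataset, ZFS stays the filesystem underneath
zfs create -o compression=lz4 tank/vm-nfs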

I definitely recommend getting more than 16 GB of RAM; I would go with at least 32 GB. The more the better, since the majority of it will be used for the ARC (read cache). I have 64 GB in mine, but then again my VMs live on my server and not on ESXi.
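If you're curious how the memory is actually being used once the box is running, a couple of read-only checks; the sysctls are standard FreeBSD ZFS, and arc_summary.py should be on the FreeNAS image, but treat that as an assumption:

Code:
# current ARC size and its configured ceiling, in bytes
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max
# fuller breakdown: hit ratio, MRU/MFU split, etc. (if the script is present)
arc_summary.py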

also from my understanding a SLOG might not necessarily be required if I up my RAM to 32GB

I think you're confusing a SLOG with an L2ARC: a SLOG is just the ZFS Intent Log (ZIL) moved from the pool to a separate device, usually a power-protected SSD. It "caches" synchronous writes and lets them be dumped to the pool in batches, so operations complete faster (no waiting for every bit to be committed to the pool one write at a time). A SLOG is beneficial for your use case, since you will be using your pool as block storage for your hypervisor. Without going into too much detail (and I'm sure other people can explain it better), the SLOG helps not only for the above reason but also in case of power outages: the VM's last writes land on the protected SSD before all power is lost, so the filesystem won't become corrupted the way it could if there wasn't enough time to flush everything to rotational storage.
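For reference, attaching a SLOG to an existing pool is a one-liner; the pool name, device label and zvol name below are placeholders (FreeNAS can do the same thing from the GUI):

Code:
# add the SSD as a dedicated log vdev
zpool add tank log gpt/slog0
# for VM storage you generally want sync writes honoured so they actually hit the ZIL/SLOG
zfs set sync=always tank/vm-iscsi
# the device should now appear under "logs"
zpool status tank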

Adding more RAM won't do anything in this case. Also, an L2ARC is only useful if you have a bunch of people/things accessing a lot of data and you have already maxed out your DIMM slots. I added an L2ARC to my system before just for the hell of it, and it was only about 1% utilized since my L1ARC was handling almost everything.
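And if you ever do add a cache device later, it's easy to check whether it's earning its keep; the device label here is a placeholder:

Code:
zpool add tank cache gpt/l2arc0   # attach the cache device
zpool iostat -v tank 5            # the cache device gets its own row of read/write stats
arc_summary.py                    # includes L2ARC hit/miss figures once one exists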

I know you didn't ask about it, but you will most likely want to use striped mirrors for your pool layout rather than RAIDZ1/2/3, since they can handle far more IOPS/throughput, at the cost of halving your usable capacity and somewhat less redundancy.
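For illustration, six disks laid out as three mirrored pairs looks like this from the command line (raw device names only for the sketch; the FreeNAS GUI partitions the disks and builds the equivalent layout for you):

Code:
# three 2-way mirrors striped together = one pool, roughly 3x the vdev IOPS of a single RAIDZ2
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5
zpool status tank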
 

gamedude9742

Dabbler
Joined
Oct 1, 2014
Messages
26
Thanks for all the information. Yeah, I was planning on using striped mirrors, as the IOPS should be much better since there is less write penalty. Unless iSCSI is much better for ESXi usage, given the information you provided I will likely go the NFS route, since I can use more of the storage I have available. I will have to look into pricing the additional RAM to bump it up to 32 GB. Also, out of curiosity, can you mix RAM sizes? For example, if the MoBo has 4 slots, can I put in 2x 8GB ECC and 2x 16GB ECC? That would give me 32 + 16 = 48 GB, which would mean I can maximize the available slots, and if I want to upgrade again in the future I only "waste" 2x 8GB DIMMs instead of 4x 8GB DIMMs.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I am building a bare metal FreeNAS box, since from what I've read many of the senior forum members here say putting FreeNAS in a VM is pretty much asking for problems and difficulties.
Naw, it really just requires proper discipline, or you can mess things up easily. A few years ago putting FreeNAS on ESXi was discouraged because of this; some people tried to take shortcuts, then lost data, and that makes FreeNAS look bad. If you want to put FreeNAS on ESXi you can do it, and you will get support from this site if you have the proper attitude for this project. But no one will talk you out of placing FreeNAS on bare metal; it is preferred, of course.

Also, out of curiosity, can you mix RAM sizes? For example, if the MoBo has 4 slots, can I put in 2x 8GB ECC and 2x 16GB ECC? That would give me 32 + 16 = 48 GB, which would mean I can maximize the available slots, and if I want to upgrade again in the future I only "waste" 2x 8GB DIMMs instead of 4x 8GB DIMMs.
If the motherboard supports it (most do) then yes. The only issue would be interleaving, so you would want pairs of the same RAM in the proper paired slots (refer to the user manual). Your RAM would also run at the slowest common speed if you have a mixed bag of RAM.
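If you want to double check what the board actually negotiated after mixing sticks, dmidecode (included with FreeNAS/FreeBSD) will show each populated slot:

Code:
# size, speed and slot position for every DIMM the board detected
dmidecode -t memory | grep -E 'Locator|Size|Speed'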
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
Dump the PERC H310 (SAS2008)... they are getting old. The HP H220 HBA (SAS2308) can be found for ~$45. The Samsung 850 Pro is not a good SLOG; you'd really want something write-optimized with power loss protection (Intel S3700/S3710, to name a couple). Yes, NVMe is superior but costly (the Intel P3700 is not too expensive). A zvol-based device extent shared via iSCSI is typically better for VMware environments running on FreeNAS storage (VAAI support).
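If you do go iSCSI, you can confirm from the ESXi shell that the VAAI primitives are actually being offered for the FreeNAS device; this is plain esxcli, nothing FreeNAS specific:

Code:
# lists ATS / Clone / Zero / Delete status per attached device
esxcli storage core device vaai status get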
 

gamedude9742

Dabbler
Joined
Oct 1, 2014
Messages
26
But no one will talk you out of placing FreeNAS on bare metal; it is preferred, of course.
My thoughts exactly. Seeing as the last time I attempted to use FreeNAS with ESXi I had issues (likely hardware related), and that was with both on bare metal, I figured for my first attempt back at it with proper HW I would make my life that little bit simpler by not putting FreeNAS in a VM. I suppose I could always do that after the fact with an additional HBA in my ESXi host and use it for backups or playing around or something :)

The only issue would be interleaving, so you would want pairs of the same RAM in the proper paired slots (refer to the user manual).
Excellent, thank you very much. I will be sure to keep them paired properly.

Dump the PERC H310 (SAS2008)... they are getting old. The HP H220 HBA (SAS2308) can be found for ~$45.
Thanks for the advice; do you mind if I ask what benefit the SAS2308 has over the SAS2008? From the quick research I have done it appears that the card itself has more bus bandwidth, being PCIe 3.0 vs 2.0, but that bandwidth shouldn't ever be an issue since I plan on using 7200 RPM spinning disks (or slower when I go to WD Reds); even a dozen of those at roughly 150-200 MB/s each is well under what a PCIe 2.0 x8 link can move. To work around any potential IOPS limit I can connect my SSDs straight to the MoBo SATA ports so that they talk through a different controller.
Unless I'm missing something... let me know if I'm wrong :) The H310s are actually in transit from Amazon; I bought them a day or two ago, but I can always cancel/return if there is a huge reason to. Other than what I noted above, the HP H220 seems to be about $15 more per card, which is not a large difference in the scheme of things, but if there's no huge benefit I'd rather put the 30 bucks towards a PSU, RAM, or NAS HDDs.

The Samsung 850 Pro is not a good SLOG; you'd really want something write-optimized with power loss protection (Intel S3700/S3710, to name a couple).
Thanks for the advice on this. I was going to just stick with the 850 Pro for cost reasons (I already have it, versus buying an S3700 for ~$100 on eBay), but the power loss protection rings a bell from documentation I read previously and is probably a good idea.

A zvol-based device extent shared via iSCSI is typically better for VMware environments running on FreeNAS storage (VAAI support).
Thanks, I did not know this, much appreciated!

One thing I forgot to mention is that this will all be connected to a UPS, so I should have protection from power outages. Is the power loss protection SSD still required even with a UPS?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
If I recall correctly, that system board only has PCIe 2.0 anyway, so there's not much point in having the PCIe 3.0 card.

 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
One thing I forgot to mention is that this will all be connected to a UPS, so I should have protection from power outages. Is the power loss protection SSD still required even with a UPS?

Yes, you still want power loss protection in the event the server suddenly loses power... that could happen from any number of factors that a UPS does not help with (think the power supply dying, etc.). No need for the HP H220 if you have the Dell H310s in transit.
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
Be sure to wipe all data from the S3700 when it arrives and make sure it's running the latest firmware.
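A quick way to do both from the FreeNAS shell; I'm assuming the drive shows up as da6, so check the real device node first:

Code:
camcontrol devlist            # find the S3700's actual device node
gpart destroy -F da6          # blow away any old partition table
smartctl -i /dev/da6          # shows model and firmware revision to compare against Intel's latest release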
 

gamedude9742

Dabbler
Joined
Oct 1, 2014
Messages
26
Hey Everyone, got an additional question about my PSU. Was thinking of grabbing this:

Seasonic FOCUS Plus Series SSR-850FX 850W 80+ Gold

I did read https://forums.freenas.org/index.php?threads/proper-power-supply-sizing-guidance.38811/ and they recommend Seasonic in there, but this isn't the G series, so I figured I would ask for some verification. It says it's a single 12V rail @ 70 amps, so I think that should cover me if I want to include 12 drives at some point. That would be 3 SSDs and a max of 12 HDDs on the above noted hardware.

Also, an update for anyone interested:
  • Got my 2x H310s crossflashed to IT mode (a quick way to verify the flash is in the sketch below)
  • Have another 16 GB of RAM on the way that will bring my total to the max of 32 GB supported by the board
  • Have 6x 2TB 7200 RPM drives installed currently; might upgrade to WD Reds in the future (they seem very highly reviewed here on the forums, and the lower spindle speed means lower power)
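For anyone wanting to verify a crossflash, a read-only check with LSI's sas2flash utility (assuming you have the FreeBSD binary on hand) looks like this:

Code:
sas2flash -listall    # both controllers should show up running IT firmware (no IR, no RAID BIOS needed for FreeNAS)
sas2flash -c 0 -list  # full details for the first controller: firmware version, NVDATA, BIOS, etc.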
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
That looks like a fine power supply, except the power might be a bit on the light side if you are actually going to have 12 hard drives plus all the other hardware you mentioned above. Make sure you have calculated all the power your system will use. I would do an internet search for that specific power supply and see how the reviews are; if you don't find any bad reviews, buy it.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Seasonic Focus is the replacement for the G-Series.

the power might be a bit on the light side if you are actually going to have 12 hard drives
How do you figure? It should be good for 20+ drives.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Yes, you still want power loss protection in the event the server suddenly loses power... that could happen from any number of factors that a UPS does not help with (think the power supply dying, etc.). No need for the HP H220 if you have the Dell H310s in transit.

It’s not that you need the SLOG for power loss scenarios; ZFS will deal with that without a SLOG.

The problem is that the way ZFS deals with it is *very* slow... without the SLOG.

And, if you use a SLOG to avoid the slow ZIL writing to your pool, then the SLOG itself needs to deal with sudden power loss.

And the Samsung 850 does not.

Would make a fine L2ARC, but I suspect you don’t have enough ARC for that.
 

gamedude9742

Dabbler
Joined
Oct 1, 2014
Messages
26
Hey Everyone,

So I updated my main post with some important notes that changed over the past few days with my additional purchases. If you don't want to go back, here's the TL;DR:
  • RAM increased from 16 to 32 GB
  • Purchased an Intel S3700 100GB for the SLOG
  • Using 2x Samsung 850 Pros as storage for the ESXi host OS drives; will probably set them up in a basic mirror
  • The Toshiba drives I already had lying around; if/when I buy HDDs they will be WD Reds, as I feel it would be downright stupid/disrespectful to ask for help on these forums and then ignore the huge following Reds have just to save a few dollars.

Seasonic Focus is the replacement for the G-Series.
I did not know that. Thank you!

For those of us that aren't as smart, maybe @jgreco could update his Proper Power Supply Sizing Guidance post to include the Focus alongside the G series. This may already be done and I might have missed it; sorry if that's the case. I live and die by those How-To guides posted up here by the super users.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
How do you figure? It should be good for 20+ drives.
I'm thinking of not just the hard drives but also the two PERC H310 cards, the motherboard, and the CPU.

But I may have been wrong by indicating the power supply could be on the edge for a 12-drive system. Okay, I really think I could be wrong; sorry for the bad advice. I still think looking up a good review of the power supply and how it behaves when stressed is the best thing to do.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I'm thinking of not just the hard drives but also the two PERC H310 cards, the motherboard, and the CPU.
Well, the HBAs are 10 W each, and the motherboard and CPU 100-150 W; let's say 200 W total, which is very pessimistic. That leaves 650 W, which is plenty for 21 drives even at a worst-case ~30 W spin-up each.
 