FreeNAS 9 under ESXi 6 with X10SL7-F and SAS drives.

Status
Not open for further replies.
Joined
Aug 17, 2015
Messages
4
I just ordered a bunch of hardware for a local Print Shop and I have a couple questions about setup. First, the hardware:

Xeon 1220v3
Supermicro MBD-X10SL7-F-O
Crucial 16GB (2x8GB) ECC Registered DDR3 1600
6x Seagate Constellation ES.3 (SAS)
2x Seagate Barracuda 500GB (SATA)
NORCO RPC-2008 2U Rackmount chassis
iStarUSA TC-2U/50-80 500W PSU

The 2x Barracudas will be RAID0, and ESXi + the VMs will be installed on that.
The 6x Constellations will be a 4-drive RAIDZ2 with 2 active spares.

1) Can I pass the LSI 2308 directly to the FreeNAS VM through the vSphere client, or does that only work with PCI HBA cards? What do I need to do to give FreeNAS direct access to these HDDs?

2) I've seen a lot of mentions of having to flash this mobo to IT mode: is this just if you want extra SATA ports, or will it still be needed if I'm using SAS drives? I've never flashed a mobo before, so I'm not sure what the ramifications are.
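From the reading I've done so far, my understanding is that the flash actually targets the onboard LSI 2308 controller (not the board's BIOS itself), using LSI's sas2flash tool from a DOS or UEFI shell. Something like this, if I have it right (the firmware file name is just a placeholder, not the actual name from Supermicro's package):

Code:
sas2flash.efi -list                        # note the controller's SAS address first
sas2flash.efi -o -e 6                      # erase the existing IR firmware
sas2flash.efi -o -f 2308IT.rom             # flash the IT-mode firmware (placeholder name)
sas2flash.efi -o -sasadd 5003048xxxxxxxxx  # restore the SAS address noted earlier

Is that roughly the right idea, or am I off base?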

Any help or pointers would be much appreciated. I'm really looking forward to this build!

-David
 

JDCynical

Contributor
Joined
Aug 18, 2014
Messages
141
Crucial 16GB (2x8GB) ECC Registered DDR3 1600
Well, I can safely say you have the wrong memory listed there. The board you have uses unbuffered ECC, not registered.

Also, it's highly recommended not to run FreeNAS as a VM, regardless of whether you set it to directly access the card.
 
Joined
Aug 17, 2015
Messages
4
Ok. I'm a little fuzzy on registered ECC vs. unbuffered ECC. I can't seem to find any documentation that states it doesn't support registered; I can't even find documentation that specifies unbuffered, it all just says ECC. More specifically, I ordered this ECC registered RAM. I was under the impression that registered vs. unbuffered was a feature-set difference within ECC, not a matter of compatibility? Can somebody point me to some good documentation on the topic?

Edit: Never mind, did some research and found that not only does that mobo not support registered RAM, the Xeon E3-1200 series CPUs don't even support it. Time to order new RAM, shoot....

As for running it in a VM: I don't really have a choice; the server also has to serve as a domain controller, PM Suite controller, and mail server, so it has to run other VMs alongside FreeNAS. Besides, according to this post, it's supported and OK if you're careful to set it up right (which I'll hopefully be doing). I also plan to have a backup instance of FreeNAS installed on a flash drive, so that if ESXi goes down for some reason, they'll at least be able to boot off that and access their files.
 
Last edited:

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I run a system very much like you describe, on similar hardware (see my signature). You can indeed run FreeNAS as a VM under ESXi and use it for a VM datastore, provided you jump through all of the required hoops:

- You must pass control of the disks directly to FreeNAS via a suitable HBA, which requires a system with VT-d capability; the Supermicro X10SL7 you've specified (and which I also use) meets this need very nicely.
- You must have plenty of RAM. I don't believe you will have much luck with only 16GB; I have 32GB and give half of that to FreeNAS.
- Another 'gotcha': you must stick with the E1000 ESXi NIC (details at the link below).
- You'll have to decide whether to use iSCSI or NFS-based datastores.
- You will need separate storage from which to boot FreeNAS (I use mirrored SSDs on the mobo SATA ports).
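Enabling passthrough itself is straightforward (menu names here are from memory): in the vSphere client, go to the host's Configuration > Advanced Settings (DirectPath I/O), mark the LSI 2308 for passthrough, reboot the host, and then add it to the FreeNAS VM as a PCI device. If you want to double-check the controller from the ESXi shell first, just as a sketch (your PCI address will differ):

Code:
# List the host's PCI devices and look for the LSI 2308 entry;
# it will show an address along the lines of 0000:01:00.0
esxcli hardware pci list

Once it's passed through, FreeNAS sees the controller and the six SAS drives behind it as if it were running on bare metal, and ESXi itself can no longer touch them.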

You will need to run this command to force ESXi to scan the FreeNAS-based datastores after FreeNAS starts up:

Code:
ssh root@[your-esxi-server-name] esxcli storage core adapter rescan --all
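If you'd rather not kick that off by hand, one way to automate it (just a sketch, meant to run on the ESXi host itself; "freenas-ds" is a placeholder for your datastore's name) is to keep rescanning until the datastore shows up:

Code:
#!/bin/sh
# Rescan the storage adapters until the FreeNAS-backed datastore appears.
# "freenas-ds" is a placeholder; substitute your own datastore name.
while ! esxcli storage filesystem list | grep -q "freenas-ds"; do
    esxcli storage core adapter rescan --all
    sleep 10
done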


I'm a developer and an old hand at working with computers. I started my build after much study, was fully aware of the risks and issues, and I've been shaking down and testing this rig for months. So far, so good: it has been stable and reliable, letting me fire up Oracle database instances and other test environments as needed.

However, I would hesitate to recommend this setup for production use, and if your customer insists on such a rig, I would be very explicit about the caveats involved and make them aware of the risks right from the start.

There's a very good installation guide available at this link:

https://b3n.org/freenas-9-3-on-vmware-esxi-6-0-guide/

Good luck!
 
Joined
Aug 17, 2015
Messages
4
@Spearfoot Thank you for the info! I actually won't be using FreeNAS as an ESXi datastore, just file storage. The customer is actually my mother, who owns a small print shop (only 5 employees), so she listens to my recommendations (maybe not wise sometimes, but I keep learning). If FreeNAS has direct access to the LSI controller, what exactly ARE the caveats and risks?

The structure I'm planning on using is:

2X 500GB drives in RAID0. ESXi and the ESXi datastore will be on these drives, as well as all VM images/configs.
6X 1TB SAS drives via the LSI 2308, DirectPath I/O passthrough to the FreeNAS VM if possible.
VMs:
FreeNAS (10-12GB of RAM)
Univention UCS (Active Directory compatible PDC) (1-2GB of RAM)
Ubuntu Server (Redmine server, 1GB of RAM)

Dedicated RAM: 12-15GB of the 16GB. I know it's close, but they can always add an additional 16GB later down the line.

FreeNAS would be installed on the RAID0 ESXi datastore and given DirectPath I/O access to the LSI 2308 (giving it direct access to all 6 SAS drives). The 6 SAS drives will be set up in a 4-drive RAIDZ2 with 2 spares, for 2TB of usable storage. This will just be CIFS share(s). I've read that for FreeNAS, RAM should be >= 8GB + 1GB per usable TB of storage. If this is true (is it?), shouldn't 10-12GB of RAM give me acceptable performance, since my net storage space will only be 2TB?
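In zpool terms, what I'm describing would look something like the sketch below; I'd actually build it through the FreeNAS GUI, and the da0-da5 device names are just placeholders:

Code:
# 4-drive RAIDZ2 vdev plus 2 hot spares (device names are placeholders)
zpool create tank raidz2 da0 da1 da2 da3 spare da4 da5
# usable space: (4 drives - 2 parity) x 1TB = ~2TB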

Both FreeNAS and Redmine will be connected to UCS via LDAP. Redmine and UCS will backup to FreeNAS and FreeNAS will backup to MozyPro.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
The risk is that you can lose all of your data! :)

Striping 2 x 500GB drives in RAID0 is not a good idea; you have no redundancy. Better to mirror the 2 drives (RAID1) so that the system doesn't crash if one of the disks goes down. Also, there's little point in setting up 4 drives in RAIDZ2 with 2 spares; just put all 6 drives into the RAIDZ2 array. Otherwise you're simply wasting the space on the 2 spare drives.
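To put rough numbers on it (1TB drives, ignoring ZFS overhead):

Code:
# 4-drive RAIDZ2 + 2 spares: (4 - 2) x 1TB = ~2TB usable, tolerates 2 failed disks
# 6-drive RAIDZ2:            (6 - 2) x 1TB = ~4TB usable, also tolerates 2 failed disks
zpool create tank raidz2 da0 da1 da2 da3 da4 da5   # device names are placeholders

Same two-disk fault tolerance either way, but twice the usable space.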

There is a lot of information here on the forum about virtualizing FreeNAS, the strengths and weaknesses of the various ZFS volume layouts (RAIDZ1, RAIDZ2, etc.), the importance of RAM, and so forth. I believe you would profit from some research here before you commit to a system design.
 
Joined
Aug 17, 2015
Messages
4
Ok, fair enough :). For the past 5 years they've been working off of this D-Link with only ONE 1TB HDD in it. They have MozyPro doing 5x/day backups/snapshots, which is a good start, but they need something more robust on-site in the first place.

Risk: Lose all my data.
Ok, so... what would/could cause this, and how can I minimize my risk?

Oh, whoops! Got my RAIDs confused. Yes, I meant RAID1, not RAID0! lol. Well, given how many troubles they've had, I wouldn't feel comfortable building a system that didn't have a number of spares equal to its failure tolerance. I live about 250 miles away from where the server will be, so I need spare drives in the system and ready to go at all times, since I can only get over there on weekends. I know it's a lot of 'wasted' space, but the second a drive fails I'll be glad I had spares plugged in. Plus, I've read that during a RAID5/6/Z/Z2/Z3 rebuild there is historically about a 10% chance of a second drive failure, which would of course render the zpool useless. That's why I'm going with 2 spares. I know $300/TB is steep, but it's worth it for the redundancy in my case.
 