Best way to use hardware I have.

Status
Not open for further replies.

Ahira

Dabbler
Joined
Aug 22, 2018
Messages
11
Hello!

What I've got:

1x Cisco UCS C220 M4, dual 2660 v4, 384GB RAM, HBA card.

8x 1.2TB 10K SAS drives or 8x 480GB Intel S3500 SSDs.

unRAID gaming machine running Plex, Deluge, Nextcloud, Radarr, Sonarr, etc.

What I'd like to setup:

ESXi, an OS for Docker containers, OPNsense, and FreeNAS to serve the storage.

I would use the storage for VM storage, media (movies, music, etc.), home photos/videos, and important docs. Of these, I only truly care about the home media and the docs. What is the best way to use the 8 storage bays?

I have ESXi installed to a RAID 1 of SD cards. My thought was to use one bay for an SSD, install the VMs to it (FreeNAS, OPNsense, Windows 10, etc.), and use the remaining 7 bays for storage. While I don't have a ton of media, my whole reason for moving away from unRAID is that I ran out of bays, so I'd like to get as much storage out of this as is reasonable. Any thoughts? Is it important to have a SLOG? These SSDs are not terribly performant (Micron P400e). Any advice/input is greatly appreciated, especially if FreeNAS is overkill for what I need and there is a simpler way to proceed. TIA!
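Since squeezing the most storage out of the remaining 7 bays is the goal, here is a back-of-envelope raw-capacity comparison of the usual layouts (simple arithmetic only; real ZFS overhead and free-space headroom shave these numbers down further):

```shell
# Rough usable capacity for 7 bays of 1.2 TB drives under common ZFS
# layouts (raw data-drive count x drive size; actual ZFS overhead
# reduces these further).
drive_tb=1.2
raidz1_tb=$(awk -v t="$drive_tb" 'BEGIN { printf "%.1f", t * 6 }')  # 6 data + 1 parity
raidz2_tb=$(awk -v t="$drive_tb" 'BEGIN { printf "%.1f", t * 5 }')  # 5 data + 2 parity
mirror_tb=$(awk -v t="$drive_tb" 'BEGIN { printf "%.1f", t * 3 }')  # 3 mirror pairs + 1 spare bay
echo "RAIDZ1: ${raidz1_tb} TB  RAIDZ2: ${raidz2_tb} TB  mirrors: ${mirror_tb} TB"
```

RAIDZ2 is the usual forum recommendation for a single 7-wide vdev of spinning disks; a stripe of mirrors trades capacity for the IOPS that VM storage likes.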
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
You have a big learning curve going from unRAID to ZFS. It is not what you are accustomed to.

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/

This is an example of building an ESXi host and virtualizing on it...

Build Report: Node 304 + X10SDV-TLN4F [ESXi/FreeNAS AIO]
https://forums.freenas.org/index.ph...node-304-x10sdv-tln4f-esxi-freenas-aio.57116/

Testing the benefits of SLOG
https://forums.freenas.org/index.php?threads/testing-the-benefits-of-slog-using-a-ram-disk.56561

The ZFS ZIL and SLOG Demystified
http://www.freenas.org/blog/zfs-zil-and-slog-demystified/

10 Gig Networking Primer
https://forums.freenas.org/index.php?resources/10-gig-networking-primer.42/
 
Joined
Dec 29, 2014
Messages
1,135
The UCS servers have SD cards that you can use to boot ESXi. I use my UCS boxes (M3 generation) as bare-metal FreeNAS and ESXi, and they all boot from the FlexFlash. Does this one have an LSI RAID controller in it? Even if you set JBOD mode on the drives, that won't play very nice with FreeNAS. The other issue is that you need somewhere other than the SD cards to write the logs from ESXi. All the drives in the cage come off two cables, so you could connect those to a SAS controller and pass that through to FreeNAS; the way that drive cage is built, you can't share it. You could also get a PCIe M.2 SSD adapter, boot ESXi from that, and store the logs there.
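Once there is a datastore other than the SD cards, pointing ESXi's logs at it can be sketched like this from the ESXi shell (the datastore name is a placeholder, and esxcli option syntax varies somewhat across ESXi versions, so check yours first):

```shell
# Redirect ESXi syslog output to a persistent datastore instead of the
# SD cards ("datastore1" is a placeholder for your datastore name).
esxcli system syslog config set --logdir=/vmfs/volumes/datastore1/logs
# Apply the new configuration without a reboot.
esxcli system syslog reload
```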
 
Joined
Dec 29, 2014
Messages
1,135
Another thought is that you could use the virtualization features of FreeNAS for VMs. I know there are a lot of threads about setting up Plex in an iocage jail, and that is what a lot of people use for media. I don't combine functions because I have the space to split them between boxes, but a lot of people do use FreeNAS as a combination storage and VM host.
 

Ahira

Dabbler
Joined
Aug 22, 2018
Messages
11
Thanks for the replies! I do have the HBA card, so I think I'll be OK on that front. I do have the FlexFlash as well, so I've already installed ESXi to the SD cards. Looks like my weak link right now is the P400e SSDs; I don't think I'll get great performance out of them. It also looks like I either have to go the route of a PCIe M.2 adapter or lose two drive bays to a SLOG/FreeNAS install drive. I've been trying to find out for the past hour if the P400e has power-loss protection...
 
Joined
Dec 29, 2014
Messages
1,135
If you virtualize FreeNAS, you don't need to worry about passing through a boot drive. That would just be a VMDK on the ESXi host, I think; don't take my word for it because I have never done it. There is quite a lengthy discussion on how to virtualize FreeNAS in this link: https://forums.freenas.org/index.ph...ide-to-not-completely-losing-your-data.12714/. If there are only two cables coming from your HDD backplane, as I suspect, you aren't going to be able to split that up.
 

Ahira

Dabbler
Joined
Aug 22, 2018
Messages
11
Thanks Elliot, it just now clicked! Since I'm passing the HBA controller through to the FreeNAS VM, I obviously can't boot FreeNAS from a disk attached to said controller. Is that what you mean? In that case, I must purchase at least two more drives: one for SLOG, and one for ESXi logs and the datastore for the FreeNAS VM?
Alternatively, I can run FreeNAS bare metal. I really don't want to do this, but I also don't want to drop any more cash. I'll start researching this now. If you have any must-reads, I'd greatly appreciate it. I see from your sig you're running the FreeNAS install from the SD cards; any gotchas regarding that? In this method, I would only have to upgrade my SLOG if performance is an issue and if I can confirm whether the Micron P400e supports PLP. Thanks again!
 
Joined
Dec 29, 2014
Messages
1,135
Glad that helped. I don't know that the SD cards would help you much. They are more help for a VM host when you have a large single pool where you host all your VMs; then you don't need to dedicate a drive for boot, and you change ESXi to write its logs to the datastore. If you don't, the log writes will likely wear out your SD cards. Since I have another pool of HDDs for that, I probably would not use the SD cards if I were building it from scratch; I would use the small pool where I store the logs for booting as well, on both FreeNAS and ESXi. Live and learn. If ESXi will let you use the datastore from which you booted, you don't need the SD cards. I never do it that way, so I can't tell you for sure.

In your case, here is my recommendation, assuming that you can't use the boot media to store VMs (which I think is the case). Install ESXi on the SD cards. The CIMC can do HW RAID (don't yell at me about that), but you might be able to use ZFS to do the mirroring between the SD cards. That is all moot if you just use a single SD card. Get some kind of NVMe PCIe card to use as the datastore for VMs and log files from ESXi. When you create the FreeNAS VM, its boot media would be a VMDK on that NVMe disk. Pass through a supported SAS HBA to FreeNAS, and then FreeNAS will have full control over the drives in your cage. There is likely more that I have missed because I have never done it that way. I would strongly encourage you to read the thread I referenced because it goes into a lot more detail from people who have actually done it.
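As a sketch of the passthrough step from the ESXi shell (the PCI address below is a placeholder, and the `pcipassthru` esxcli namespace is from ESXi 7.x; on older releases use the host client UI instead):

```shell
# List PCI devices to find the SAS HBA's bus address.
esxcli hardware pci list | less
# Mark the HBA (placeholder address) for passthrough, then reboot the
# host; afterwards the device can be attached to the FreeNAS VM.
esxcli hardware pci pcipassthru set -d 0000:03:00.0 -e true
reboot
```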
 

Ahira

Dabbler
Joined
Aug 22, 2018
Messages
11
OK, that settles it. I'm going to grab an Optane 900p. I'll use a USB drive for the FreeNAS boot drive (it will be a datastore under ESXi). Now to research whether L2ARC is needed in my setup. Thank you for all the help! You rock!
 
Joined
Dec 29, 2014
Messages
1,135
1. Can I create datastores on the Optane 900p and use it as a boot/SLOG/L2ARC device?

I have an Optane 900P that I use as an SLOG, and it works great for that. Whether you need an SLOG or L2ARC depends on how you are using your data. I mount my ESXi data stores via NFS which uses synchronous writes. The write performance is terrible without the SLOG, but it is very good with it.
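For reference, attaching a dedicated log device to an existing pool is a one-liner; the pool name "tank" and device nvd0 (how FreeNAS's FreeBSD base typically names an NVMe drive) are placeholders here:

```shell
# Attach an NVMe device as a dedicated SLOG ("tank" and nvd0 are
# placeholders for your actual pool and device names).
zpool add tank log nvd0
# Confirm the log vdev shows up in the pool layout.
zpool status tank
# NFS datastores issue synchronous writes; verify the dataset isn't
# overriding the default sync behavior.
zfs get sync tank
```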

2. Can I expect this setup to saturate a direct-peer 10Gb link?

Again, it depends. ESXi can get mid 8G reading from my FreeNAS and mid 4G writing. I am happy with that. It was around 500M writing without the SLOG. Depending on the 10G NIC you have, you might need to adjust some tunables; I haven't done any tuning for that. I think it works as well as it does because I stayed with the FreeNAS 10G NIC of choice, which is Chelsio. Mine are T5-based, and I have been very happy with them.
 
Last edited:

Ahira

Dabbler
Joined
Aug 22, 2018
Messages
11
Thanks for the info Elliott! I ran into a whole other can of worms. Once I replaced the SAS 12G RAID card with the Cisco HBA card, the CIMC decided that the fan policy should be "high power" because PCI cards need extra cooling, meaning the fans are going full blast and will not come down. Cisco is saying "expected behavior," which I call BS on. Returning the RAID card to the same PCI slot resolves the issue. So my choices at this point are either go HW RAID or risk ZFS on a JBOD setting. Cisco assures me all info is passed through and people use it for SDS all the time, but everything I've read says this is bad, really bad.
 
Joined
Dec 29, 2014
Messages
1,135
I understand exactly what you mean. The biggest delay for me was figuring out which cards the CIMC liked so it didn't try to turn the system fans into a Cessna trying to take off! I had some nice Chelsio NICs, but the CIMC hated them. 15K drives also made it want to run the fans higher, and which bays were populated seemed to change that calculation as well. It delayed me months putting the M3s into service because they sit in my home office and the noise was really annoying. I don't think there is any risk to using that controller in JBOD; it just means that SMART may not work right. For me, the discovery failed, but SMART would work if I manually hacked the smartd.conf file. I have not yet had (knocking furiously on wood) a drive failure in these, so I don't know if the driver for the RAID controller would provide enough of an alert to let me know. That is the only downside that leaps to mind, but you could configure alerting in the CIMC to make sure you get notified if a disk dies.
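The smartd.conf hack mentioned above generally looks something like this (the controller type, device node, and disk indices are illustrative assumptions, not taken from this system; `smartctl --scan` shows what your controller actually exposes):

```shell
# Discover how smartmontools sees drives hidden behind the RAID controller.
smartctl --scan
# Query one drive directly through a MegaRAID-style controller
# (device node and ",0" index are examples only).
smartctl -a -d megaraid,0 /dev/mfid0
# Matching hand-written smartd.conf entries would look like:
#   /dev/mfid0 -d megaraid,0 -a
#   /dev/mfid0 -d megaraid,1 -a
```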
 
Last edited:

pgiblox

Cadet
Joined
Sep 18, 2014
Messages
2
Elliot: I just acquired a C240 M4. I'm curious how you got FlexFlash to work as the FreeNAS boot drive. I upgraded all firmware. When I try to install the latest FreeNAS (install media mounted via the KVM virtual DVD), it fails when attempting to create partitions on the block device (the logical "CiscoVD" RAID 1 drive). I understand the device might technically be read-only, since it is designed to boot a hypervisor into RAM, but there has to be some way to load data onto the cards? I tried both UEFI and BIOS modes, thinking that the EFI partition may have caused problems originally. Any help? In the meantime, I have a SanDisk Ultra plugged into the internal USB slot.

Regarding jail and FreeNAS log storage, I have a pair of 15K SAS drives in a mirrored zpool. I plan to move those to SATA/SAS SSDs when budget allows.

Also, thanks for suggesting the 900P. I planned to use a PCIe SSD for SLOG (I have 128GB RAM, so I probably don't need L2ARC). Did you use a pair of them to ensure data consistency, or do you just have faith in that single SSD?

The only other change I plan on is to possibly replace the dual E5-2906v3's with a single E5-2970v3 or similar. But maybe I should plan on getting two in order to utilize the other 3 PCIe slots (riser 2). I'm not hosting a bunch of VMs, so single-core performance is more important to me (ZFS performance, Plex transcoding, etc.).

All in all, I am looking forward to migrating to this new system. The SuperMicro I purchased 5 years ago (when it was already 5 years old) is definitely showing its age. However, it should serve as a great off-site backup and help me feel more confident moving from RAIDZ2 to a stripe of mirrors for the primary system.

Edit: Now that I think about it, I actually had to disable FlexFlash entirely. Even when booting from the SanDisk USB drive, FreeNAS would kernel panic when trying to probe the FlexFlash drive; something about a "DRDY ERR", the same thing I would receive when trying to install to FlexFlash with the installer ISO. The SD cards are Cisco-branded 64GB cards, and they boot ESXi just fine.
 
Last edited:
Joined
Dec 29, 2014
Messages
1,135
You have to go into the CIMC and enable the FlexFlash controller and the virtual drives. I have a pair of internal HDDs that I am using for the logs so I don't wear out the SD cards. I could have used that for the boot drive too, and I don't know why I didn't think of it then. I don't have the laptop with me that has my M4 build notes, but I can find that next week. If you want to use all the HDD bays for storage, you can boot from the SD but put the system dataset on the HDD pool you build. It will work with SD card boot, but it is a tedious process to set up. That is why I have detailed notes on my work laptop.
 

pgiblox

Cadet
Joined
Sep 18, 2014
Messages
2
Thanks for the feedback. The devil must be in the details. I originally enabled the FlexFlash controller via the CIMC and enabled (host-connected) the virtual drive. That is, the controller pooled the two physical cards and exposed one virtual drive named "CiscoVD Hypervisor". I could see this when installing FreeNAS, but selecting it caused all sorts of CAM errors when partitioning and prepping the filesystem. I'll try it again later this weekend and get back with the exact error.
 
Joined
Dec 29, 2014
Messages
1,135
I am going from memory, but you also have to do something in the CIMC to sync the two cards together to get the RAID going. If you don't do that, it won't let you write to it, as I recall.
 
Joined
Dec 29, 2014
Messages
1,135
Here are the steps I use for a customer build of an M4-series ESXi host:

1. Run the HUU DVD
2. Enter the CIMC
3. Log in as admin/password, then change the password
4. Set the time zone
5. Enable FlexFlash, configure the cards, and select auto sync
6. Once the sync completes, enable the virtual drive
 