FreeNAS iSCSI VM Datastore (ProxMox)

qwertymodo

Contributor
Joined: Apr 7, 2014
Messages: 144
I've been running FreeNAS for a few years now, and over the years it has slowly taken on more and more "general purpose" functions. After spending some time in Reddit's /r/homelab section, I decided it was time to upgrade my setup to a proper VM host and return my NAS to its primary purpose as a NAS. I managed to pick up a used Dell R710 without disks for far cheaper than I expected, and then reality hit me and brought me back to the point in my FreeNAS build years ago when I realized how much of the cost of the build the disks were going to account for. But then I realized I *still have the NAS*, and I could just use it as my primary VM datastore. However, this is a pretty significant paradigm shift from every other build I've ever done, so I wanted to ask questions before diving headlong into completely unknown waters.

My NAS:
ASRock C2750d4i (yes, I'm aware of AVR54, and have already had one board die on me, but for now, this is what I have)
32GB RAM
Boot: 2x8GB Kingspec SLC SATA DOMs, mirrored
Data: 6x2TB WD Red, RAIDZ2
2x GbE NICs (on-board)
FreeNAS 9.10, up-to-date

My VM Host:
Dell R710
Dual Xeon L5638 2.4GHz hex-core
48GB (6x8GB) RAM
PERC H700 RAID card
Boot: 2x500GB WD Blue, RAID1 (temporary, they were all I had on hand, definitely going to replace them when I get the chance)
4x GbE NICs
ProxMox VE 4.4

Router: Mikrotik/Routerboard RB3011UiAS-RM
Switch: Linksys SRW2024 24-port Gigabit

I also purchased a pair of Mellanox ConnectX-2 10GbE cards super cheap, and I'm interested in giving those a try. It is my understanding that, while they are not officially supported by FreeNAS out of the box, they are supported in FreeBSD 10, that it is possible to build and load the necessary drivers, and that people have successfully done so. It is also my understanding that two machines can be directly connected over SFP+ without the need for a switch. If any of this is incorrect, please let me know before I run any farther down a dead end.
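If that's all correct, my understanding is that a direct SFP+ link is just a little point-to-point network between the two boxes. Something like the following is what I had in mind; the interface names (mlxen0 on the FreeBSD side, eth4 on the ProxMox side) and the 10.10.10.0/24 subnet are just my guesses, and I realize the FreeNAS end really belongs in the GUI so it persists, but this is the gist of it:

    # FreeNAS / FreeBSD end (assumes the ConnectX-2 attaches as mlxen0)
    ifconfig mlxen0 inet 10.10.10.1/24 mtu 9000 up

    # ProxMox / Linux end (assumes the card shows up as eth4)
    ip addr add 10.10.10.2/24 dev eth4
    ip link set dev eth4 mtu 9000 up

    # quick throughput sanity check (iperf3 server on FreeNAS, client on ProxMox)
    iperf3 -s
    iperf3 -c 10.10.10.1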

I've done some reading on SLOGs, and I'm not sure whether or not I need one. My workload will be a handful of network-related VMs (DHCP/DNS/firewall); a few web apps, the most disk-intensive of which is probably ownCloud, or possibly Plex, though that workload is a lot more sequential; a Minecraft server; and then mostly just VMs for messing around with. I'm a little worried by this issue that I seem to be experiencing, but I'm hopeful that it can be worked out.
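If it turns out I do need one, it sounds like a SLOG can be added and removed without rebuilding the pool, so I'm assuming I could test it after the fact with something along these lines (the pool name tank and the ada6/ada7 device names are placeholders for whatever SSD I end up buying):

    # add a single SLOG device (a mirrored pair would be safer)
    zpool add tank log ada6
    # or, mirrored:
    # zpool add tank log mirror ada6 ada7

    # remove it again if it doesn't help
    zpool remove tank ada6

    # watch per-vdev activity while testing
    zpool iostat -v tank 5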

Basically, the thought is to create an iSCSI target on the FreeNAS box and serve it to the ProxMox machine as my primary datastore. Are there any issues with this setup, or is it at least a decent/sane plan? Is there anything I can do to improve on it without much more cost? In the long run, if changes need to be made, I can make them, but I'm just getting started, and I don't want to dump a bunch of money in right now when I don't really know what I'm dealing with. I know this is a rather large and open-ended question, but I'm looking for advice, best practices, corrections to any assumptions I might have made, known pitfalls that I can avoid, and so on.
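From what I've read so far, the ProxMox end of it would look roughly like this, usually with LVM layered on top of the LUN so multiple guests can share it; the portal address and target IQN below are just placeholders for whatever I end up configuring on the FreeNAS side:

    # discover targets exported by the FreeNAS portal
    iscsiadm -m discovery -t sendtargets -p 10.10.10.1

    # register the target as a ProxMox storage backend
    pvesm add iscsi freenas-iscsi --portal 10.10.10.1 --target iqn.2005-10.org.freenas.ctl:vmstore

    # list the LUNs ProxMox can see on it
    pvesm list freenas-iscsi

Does that match how people here normally do it, or is there a better-supported path?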

Also, if there is any additional information that I could provide that would be helpful, let me know.
 

bigphil

Patron
Joined: Jan 30, 2014
Messages: 486
Directly connecting two machines over 10Gb SFP+ works just fine (I've only tested with DAC/Twinax, as that is what I use). I've looked at the bug report you linked to, and that seems to be attributed to autotune. I have 72GB of RAM in my main FreeNAS system and it doesn't have this issue. I checked it with the arc_summary.py script: my ARC size is 85.6% with a target size of 86%, while currently getting an ARC hit ratio of 72% (the pool only hosts VMs).
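If you want to check the same numbers on your box, the script ships with FreeNAS, and the raw sysctl counters underneath it are handy for watching over time:

    # summary report: ARC size, target size, hit ratio, etc.
    arc_summary.py

    # raw counters (current size, target size, hits/misses)
    sysctl kstat.zfs.misc.arcstats.size
    sysctl kstat.zfs.misc.arcstats.c
    sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses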

As for iSCSI, I prefer it for VM storage (I'm on ESXi), and I'm sure ProxMox/KVM will work great with it too. If using MPIO, make sure your interfaces are on different subnets so FreeNAS will route traffic out of multiple interfaces properly. A SLOG with power-loss protection is highly recommended for any VM setup, as is setting the ZFS property sync=always on your zvol. In your iSCSI extent on FreeNAS, you might disable physical block size reporting too; I know VMware still doesn't like anything over 512 bytes, and I'm unsure what KVM expects, but just a heads up.

Almost forgot: VMs usually call for a mirrored vdev setup (striped mirrors / RAID10 style) for increased IOPS over RAIDZ protection levels. I'd configure your pool as 3x mirrored vdevs instead of RAIDZ2 if you want better VM performance.
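For reference, that all boils down to something like the commands below. The pool/zvol names and the da0-da5 device names are just examples, and obviously recreating the pool as mirrors wipes whatever is on it now, so move your data off first (the FreeNAS GUI does the same thing under the hood, this is just the gist):

    # striped mirrors ("RAID10 style"): three 2-way mirrors instead of one RAIDZ2 vdev
    zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5

    # zvol to export as the iSCSI extent; 16K volblocksize is a common starting point for VM images
    zfs create -V 1T -o volblocksize=16K tank/vmstore

    # force every write to be a sync write so the SLOG actually protects VM data
    zfs set sync=always tank/vmstore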
 