BUILD Going to run FreeNAS in a VM, want to comment on my build?


penetal

Cadet
Joined
Apr 5, 2013
Messages
9
Hey, so I am pretty much done picking out parts for my SFF server, where I intend to have a few virtualized servers running, one of which will be FreeNAS. I want to be able to do PCI passthrough of the SATA controller (hence the separate card).

I think I'll dedicate around 16GB of RAM to FreeNAS and use the other 16GB for other VMs. The NAS will, however, see a very light load, I assume, as it's only me using it.

If you have any experience with the parts I picked, I'm very interested in hearing what you have to say about them, and more specifically whether you have heard that PCI passthrough does not work well with these components. I will also try to do PCI passthrough of a Radeon HD 5450 so I can use a VM as a media center on my TV.

(I know it's not recommended to virtualize FreeNAS, but I'm willing to take the performance hit so I can run other servers on the same box.)
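In case it's useful, below is the rough pre-flight check I plan to run from a Linux live USB before committing to a hypervisor. This is purely my own sketch: the paths are Linux-specific, it only confirms the basics (CPU virtualization flags and IOMMU messages in the kernel log), and ESXi lists passthrough-capable devices in its own UI instead.

```python
#!/usr/bin/env python3
# Rough pre-flight check for PCI passthrough, run from a Linux live environment.
# It does not prove that passing through a specific card will work; it only
# checks that the CPU exposes virtualization flags and that the kernel log
# mentions VT-d / AMD-Vi at all.
import re
import subprocess

def cpu_has_virt_flags() -> bool:
    # Intel VT-x shows up as "vmx", AMD-V as "svm" in /proc/cpuinfo.
    with open("/proc/cpuinfo") as f:
        return bool(re.search(r"\b(vmx|svm)\b", f.read()))

def iommu_mentioned_in_dmesg() -> bool:
    # Intel VT-d logs lines containing "DMAR"; AMD boards log "AMD-Vi".
    # (May require root to read dmesg, and intel_iommu=on on the kernel
    # command line for the IOMMU to actually be active.)
    log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
    return "DMAR" in log or "AMD-Vi" in log

if __name__ == "__main__":
    print("CPU virtualization flags:", "present" if cpu_has_virt_flags() else "missing")
    print("IOMMU (VT-d/AMD-Vi) in kernel log:", "yes" if iommu_mentioned_in_dmesg() else "no")
```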

---
My Current build:

Case - Lian Li PC-V354B

PSU - EVGA SuperNOVA 650W

MOBO - Intel S1200V3RPS

CPU - Intel Xeon E3-1230v3

Memory
Kingston ValueRAM TS DDR3 PC12800/1600MHz ECC CL11 8GB (KVR16LE11/8)
or (less likely)
Crucial DDR3 PC12800/1600MHz ECC 2x8GB (CT2KIT102472BD160B)

SATA Controller - IBM ServeRAID M1015, which I will crossflash (to IT mode) as per jgreco's recommendation.
I've never used one of these before, so I'm a little curious how to connect SATA disks to it.

Storage - 4x WD Red 3TB in RAIDZ1 (will probably grow)
---
Open to changes in the build if you have anything to recommend.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah.. the recommendation not to virtualize has nothing to do with performance. It's because of how many people said "I can do this". Then next week they're saying "ZOMG.. my wife will leave me if I can't get my data back".

You're taking your data into your own hands.. good luck with that.
 

penetal

Cadet
Joined
Apr 5, 2013
Messages
9
Yeah.. the recommendation not to virtualize has nothing to do with performance. It's because of how many people said "I can do this". Then next week they're saying "ZOMG.. my wife will leave me if I can't get my data back".

You're taking your data into your own hands.. good luck with that.


I'm not too worried about that. I will back up what is sensitive to loss, and no risk, no gain. I'd rather fail and learn than never learn at all. I get that there is a risk, especially the one of "when it goes wrong, it explodes".

Thanks for wishing me luck, though I sense you didn't really mean it.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Nope. I mean it 100%.

Guess who you get to talk to when everything goes south and you just "gotta" have some data off of the zpool? Me. I'm the unexpected resident data recovery guy for most people who haven't followed the advice. And I don't take too kindly to being asked to spend my weekend on Skype and TeamViewer trying to get your data back. That's assuming I even choose to help you and don't give you the "sucks to be you.. should have listened, huh?" answer and leave you out in the cold.
 

penetal

Cadet
Joined
Apr 5, 2013
Messages
9
Nope. I mean it 100%.

Guess who you get to talk to when everything goes south and you just "gotta" have some data off of the zpool? Me. I'm the unexpected resident data recovery guy for most people who haven't followed the advice. And I don't take too kindly to being asked to spend my weekend on Skype and TeamViewer trying to get your data back. That's assuming I even choose to help you and don't give you the "sucks to be you.. should have listened, huh?" answer and leave you out in the cold.


Wow, you really do that for random internet folks? That is incredible; you, sir, are a much better person than most.

Got any thoughts on the HW I have picked so far?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
That's the exact CPU I have, and 32GB of RAM is a good start. I will warn you, as an ESXi user, that those socket 2011 boards are more expensive, but the ability to load them up with more RAM than the 1155 boards makes them totally worth it for virtualizing. If you manage to get this working long term, you'll want to virtualize more and more, and you will certainly run out of RAM before you run out of CPU power.

The case is okay, I guess. I'm a rackmounting guy, and I consider cases to be more of a personal preference than anything else. I avoid building computers for size because inevitably you are putting more hardware into a more compact space, which can (and often does) lead to heat issues. An ESXi server is going to use more watts than a FreeNAS server since it's more likely to be "busy" with OS overhead and user workload. The case looks like it was designed with cooling in mind, but sometimes you need more than the case can provide.

I'm not a fan of Intel motherboards, though. Too many people have had weird, unexpected results with hardware that should have worked; Supermicro doesn't seem to have the same problem as often. This is definitely uncharacteristic for me: I'm a fan of Intel SSDs and Intel CPUs, but I'm hesitant to try an Intel board because of all of the trouble I've heard about them.
 

KMR

Contributor
Joined
Dec 3, 2012
Messages
199
I think he did mean it. We have no way of knowing your level of skill in this subject. There have been a number of horror stories with running FreeNAS as a VM and we would truly hate to see another one. That being said, backing up your sensitive data is smart.

For the M1015 you will need forward breakout cables; each cable connects four drives. I ordered the following cable from monoprice.com and it works well:
0.75m 30AWG Internal Mini SAS 36pin (SFF-8087) Male w/ Latch to SATA 7pin Female (x4) Forward Breakout Cable - Black
Regarding the storage configuration: RAIDZ2 is highly recommended over RAIDZ1, especially with 4 disks (a rough comparison of the trade-off is at the end of this post). If you read Cyberjock's guide, it goes over these details very well.
Glad to see you are using ECC RAM - get as much as you can and dedicate as much as you can to the FreeNAS VM.
Read the sticky on virtualizing FreeNAS.
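For a rough sense of the RAIDZ1 vs. RAIDZ2 trade-off with the 4 x 3TB Reds you listed, here is a back-of-the-envelope calculation. It's just my own sketch in plain Python and ignores ZFS metadata, padding, and the TB vs. TiB difference, so real usable space will be lower.

```python
# Back-of-the-envelope usable space for the 4 x 3 TB pool discussed above.
# Real ZFS numbers will be lower; the redundancy trade-off is the point here.
DISKS, SIZE_TB = 4, 3

def usable_tb(disks: int, size_tb: int, parity: int) -> int:
    # RAIDZ1 reserves the equivalent of one disk for parity, RAIDZ2 two.
    return (disks - parity) * size_tb

for level, parity in (("RAIDZ1", 1), ("RAIDZ2", 2)):
    print(f"{level}: ~{usable_tb(DISKS, SIZE_TB, parity)} TB usable, "
          f"survives {parity} simultaneous disk failure(s)")
# RAIDZ1: ~9 TB usable, survives 1 simultaneous disk failure(s)
# RAIDZ2: ~6 TB usable, survives 2 simultaneous disk failure(s)
```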
 

penetal

Cadet
Joined
Apr 5, 2013
Messages
9
I will warn you, as an ESXi user, that those socket 2011 boards are more expensive, but the ability to load them up with more RAM than the 1155 boards makes them totally worth it for virtualizing.

I would love more memory, but my budget is almost depleted with this, so I'm trying to get good quality for the money I have to spend. I had a Supermicro board picked out before, but that board is almost twice the cost; that's why I have the Intel one now. My hope is that the board will unofficially support 16GB sticks when those come out; that way I can upgrade without too many problems.

Also, the MOBO is socket 1150, not 1155; not sure if you just pressed the wrong key or read the wrong socket.


I think he did mean it. We have no way of knowing your level of skill in this subject. There have been a number of horror stories with running FreeNAS as a VM and we would truly hate to see another one.

I can understand this, and I'm fully aware that I'm not going the safest route and might end up chewing the carpet later on. I accept this, and if it were to happen and I'm not able to figure out how to fix it myself, I won't be upset if no one is willing to help, as I wouldn't deserve it.


For the M1015 you will need forward breakout cables

If I understood you right, this should do the job nicely?
Lindy internal SATA + SAS cable, SFF-8087 to 4 x SATA (latch)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
My hope is that the board will unofficially support 16GB sticks when those come out; that way I can upgrade without too many problems.

Not a chance in hell. When you go to 16GB sticks, they MUST be registered. It has to do with the density of the memory and the capacitance of the chips. If the board supported registered memory, you might have a chance.

Also, the MOBO is socket 1150, not 1155; not sure if you just pressed the wrong key or read the wrong socket.

Yeah, I knew better. Thought one thing and typed another.

I can understand this, and I'm fully aware that I'm not going the safest route and might end up chewing the carpet later on. I accept this, and if it were to happen and I'm not able to figure out how to fix it myself, I won't be upset if no one is willing to help, as I wouldn't deserve it.

Honestly, and this isn't personal, but I don't even try to do recovery on virtualized hardware. There are too many variables: things can go wrong with the real hardware, the virtualized hardware, the FreeNAS configuration, as well as the ESXi configuration. And since all problems roll downhill, it turns into a game of figuring out where the error is. It's just not worth my time...
 

penetal

Cadet
Joined
Apr 5, 2013
Messages
9
Not a chance in hell. When you go to 16GB sticks, they MUST be registered. It has to do with the density of the memory and the capacitance of the chips. If the board supported registered memory, you might have a chance.
....balls

Honestly, and this isn't personal, but I don't even try to do recovery on virtualized hardware. There are too many variables: things can go wrong with the real hardware, the virtualized hardware, the FreeNAS configuration, as well as the ESXi configuration. And since all problems roll downhill, it turns into a game of figuring out where the error is. It's just not worth my time...
I assume that if it dies I will have a snapshot of the VM in a working state, so the only thing that really matters if it breaks would be the zpool/disks. If those are fine, I don't really care if the VM spazzes out.

On another note, I was thinking I'd use Xen; not sure if you have tried it, but it seems you use ESXi. Are there any specific reasons why?
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
Oh my god. I just spent 15 minutes typing up a response, only to have my connection die on me when I hit submit. Sigh.

Anyway, I have always found Citrix XenServer to be lacking in a lot of ways, the ability to run any ISO being one of them.

I personally have ESXi and Proxmox environments in my house. I have become a huge proponent of using Proxmox with OpenVZ containers. Love them.

For PCI passthrough, ESXi is king if you have supported hardware. Not all off-the-shelf parts support PCI passthrough, so make sure you do your research.
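If you end up on a Linux-based hypervisor (Proxmox, KVM, Xen dom0) rather than ESXi, one quick way to do that research is to look at the IOMMU groups the board actually gives you, since devices in the same group generally have to be passed through together. This is just a sketch of mine using the standard sysfs layout; ESXi instead shows eligible devices in its own passthrough screen.

```python
# List IOMMU groups via sysfs on a Linux host; devices sharing a group
# usually cannot be split between different VMs. Requires the IOMMU to be
# enabled, otherwise /sys/kernel/iommu_groups will be empty or missing.
import os

GROUPS = "/sys/kernel/iommu_groups"
if not os.path.isdir(GROUPS) or not os.listdir(GROUPS):
    print("No IOMMU groups found - is VT-d/AMD-Vi enabled?")
else:
    for group in sorted(os.listdir(GROUPS), key=int):
        devices = os.listdir(os.path.join(GROUPS, group, "devices"))
        print(f"IOMMU group {group}: {', '.join(devices)}")
```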

Sent from my Galaxy Nexus
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Snapshots are disabled if you use PCI passthrough, so snapshots won't save you. If you aren't using PCI passthrough, you might as well throw your data away right now. Using virtual disks and the RDM feature is a recipe for failure.

http://forums.freenas.org/threads/a...ide-to-not-completely-losing-your-data.12714/

As I said before.. good luck. I'm thinking this is far from well planned out, unfortunately. Now you might be starting to see that doing VMs is far from trivial, and at best you might not lose anything. At worst, it'll work fine until it doesn't. And once you've walked into the world of "it doesn't work", you are already past the point of no return. :(

Off the record, there was some discussion about creating a section of the FN forums for ESXi/virtualizing users last week. Do you know why we chose to abandon the idea? We didn't want to give anyone the impression that it was "so easy a caveman can do it". It's not. It requires VERY careful planning of hardware that is compatible AND functional with your hypervisor, VERY careful planning of what hardware is used and how it is provided to the VMs, and VERY careful planning not to end up locking yourself out of your own data because of a mistake. If you haven't done it before you are just asking for lots of frustration and pain.

This thread was my first attempt at using a VM. I spent 10 solid days, 16 hours a day, on it and couldn't get it to work, and I had the assistance of our forum VM god. Lucky for me, I had no data to lose since I was in the process of upgrading, so the disks were blank. But I couldn't get it to work at all. And trust me, I'm very determined to get things to work and will keep trying until I'm convinced it really isn't possible at all.
 