ESXi server

Cougar014

Explorer
Joined
Oct 30, 2016
Messages
57
Hello,

I am looking for options to build an ESXi (home) server.
Instead of building two systems, I want to build one that will be divided into two different VMs.
I know FreeNAS does not like to be virtualized, so I am trying to follow the guidelines from this article:
http://www.freenas.org/blog/yes-you-can-virtualize-freenas/

I want to use the following hardware:
CPU: Intel Xeon E3-1230 v5
Motherboard: Supermicro X11SSL
RAM: 32GB ECC memory (16GB for each VM)
Drives: 2x WD Red for FreeNAS, 1 small SSD (~250GB) for the other VM
HBA: IBM 25R8071 - LSI Logic SAS3444E
PSU: still to be decided (is 400W good enough?)
Case: still to be decided

I want to use 2 drives (mirrored) for FreeNAS, with the ability to add another 2 in the future.
That's why I wanted to use this 4-port SATA HBA in a PCIe x8 slot instead of a 2-port SAS/SATA x8 card.

I have to admit I do not fully understand the article, so I have a few questions:

Will it FreeNAS with this HBA?

And is it good enough to use 2 mirrored drives? They talk about three different vdevs in the article...

If you have any other suggestions, I'll be happy to hear them!

Thanks in advance.

*EDIT*

I know the HBA has 3Gb/s SATA II ports.
But does that matter for mechanical drives?
I wouldn't be surprised if they don't even saturate 3Gb/s...
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
I have to admit I do not fully understand the article,
Good. This is the best thing you could do. Too few people admit their lack of knowledge to themselves.

Step 1: make sure you understand enough to get going.
I can tell from my own experience that the two threads on the topic were alien to me at first glance. Only after additional research on my own could I understand them. That is the whole point of the guides.
Everything you need to know is mentioned in the threads (don't virtualize FreeNAS, and the sister thread which basically says: if you absolutely have to, do this).

Explaining things in a short bullet-point list will not get you to the required level of understanding, and will not set you up to handle THE problems that WILL come up in the future.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I've built two "FreeNAS-on-ESXi" AIO (All-In-One) servers like you're contemplating building (see 'my systems' below). The best guide I've found for this setup is the "FreeNAS 9.10 on VMware ESXi 6.0 Guide".
Regarding mirrored drives... mirrors are fine. Topology just depends on your needs and how comfortable you are with the safety factor of whatever topology you use. I use RAIDZ2 because I like the warm-and-fuzzy feeling of knowing that I can lose any two of the seven drives in my main system - and still not lose my data!

In your case (assuming the 4-port HBA you mentioned actually works in this scenario) I would fully populate the HBA with 4 drives in a RAIDZ2 configuration, buying the largest-capacity drives I could afford. But starting out with a pair of mirrored drives would certainly work, too, and it would be simple to expand your pool later by adding another pair of drives as a second mirrored vdev.
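
If it helps to put numbers on the trade-off, here's a rough back-of-the-envelope sketch in plain Python. The 4TB drive size is just an assumed example, and real pools lose additional space to ZFS metadata and the recommended free-space margin:

Code:
# Rough capacity/redundancy comparison for the layouts discussed above.
# Drive size is an assumption for illustration only.

DRIVE_TB = 4  # hypothetical WD Red size

def mirror_pairs(n_drives, size_tb=DRIVE_TB):
    """n_drives arranged as 2-way mirror vdevs."""
    pairs = n_drives // 2
    usable = pairs * size_tb
    # Each mirror vdev survives one failure; losing both disks of any
    # single pair loses the pool.
    return usable, "1 disk per mirror pair"

def raidz2(n_drives, size_tb=DRIVE_TB):
    """Single RAIDZ2 vdev: two drives' worth of parity."""
    usable = (n_drives - 2) * size_tb
    return usable, "any 2 disks in the vdev"

layouts = {
    "2x mirror (start small)": mirror_pairs(2),
    "2x2 mirrors (after adding a pair)": mirror_pairs(4),
    "4-disk RAIDZ2": raidz2(4),
}
for label, (usable, tolerance) in layouts.items():
    print(f"{label}: ~{usable} TB usable, survives {tolerance}")

Both 4-drive layouts end up with roughly the same usable space; the difference is that RAIDZ2 survives any two failures, while striped mirrors only survive one failure per pair.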
Will it FreeNAS with this HBA?
Good question, and you should be concerned about finding out the answer, because I've never heard of that HBA and chances are it's obscure enough that no one has ever tried it as a passed-through HBA to a FreeNAS VM running under ESXi. For this reason, I'd be wary of trying it. Especially when LSI 9211/IBM M1015/Dell H200 HBA cards can be purchased used for reasonable prices, and are known to work flawlessly in this scenario.

I believe a 400W PSU will be adequate for your system, but consult the "Proper Power Supply Sizing Guidance" thread to make sure.
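
Just to illustrate the kind of arithmetic that sizing thread walks through - every wattage below is an assumption for illustration, not a measurement, and the thread itself is the authoritative reference:

Code:
# Ballpark power budget -- all wattages are assumed figures.
# HDD spin-up is the critical case, so size for startup, not idle.

loads_w = {
    "Xeon E3-1230 v5 (TDP)": 80,
    "motherboard + RAM":     40,
    "HBA":                   15,
    "SSD":                    5,
    "4x 3.5in HDD spin-up":  4 * 25,  # ~25 W each at spin-up (assumed)
}

total = sum(loads_w.values())
headroom = 0.7  # aim to stay below ~70% of the PSU rating
print(f"estimated peak draw: ~{total} W")
print(f"suggested PSU rating: >= {round(total / headroom)} W")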

What do you mean when you say "other VM"? Realize that ESXi and the FreeNAS VM must exist on a local datastore, and ESXi must be installed on a bootable device. I boot ESXi from a pair of SSDs connected to an LSI-based HBA in a bootable RAID1 configuration. This is also the local datastore where I install the FreeNAS VM. Once the FreeNAS VM boots up, it can provide an additional datastore to ESXi on which you can install as many VMs as you have space for.

In your case, you could configure your 250GB SSD as the boot device on which you would install ESXi and the FreeNAS VM. Then you would pass the HBA through to the FreeNAS VM and set up your HDDs on it, giving FreeNAS control over them and providing either NFS or iSCSI-based storage back to ESXi, on which you could install additional VMs. The disadvantage of this approach is that your boot device lacks redundancy.

This is all fun, but can be complicated... so you want to study it and try to gain an understanding of how all of the pieces of the puzzle fit together before you make expensive purchases that may be unsuitable.

Good luck!
 

Cougar014

Explorer
Joined
Oct 30, 2016
Messages
57
First thank you all for your answers and tips!

I know they say you shouldn't virtualize until you have to, but I probably don't want to build two separate systems.
So I just want to do it right the first time, to prevent lots of troubleshooting in the future.

About the article:
There are just a few things that sound contradictory to me.
First they say you shouldn't use additional controllers, but for PCI passthrough you NEED a controller.
Also, I don't understand why they want you to have 3 vdevs, each with its own redundancy...
That would mean I'd have to use a minimum of 6 hard drives to make this happen?
This makes no sense to me... at first glance, anyway.

But I think I will manage to understand it after a while...

About the HBA:
I also saw the HBAs you mentioned, and they're sweetly priced indeed.
But they only offer 2 ports on an x8 PCIe slot.
I assume that is because they are 6Gb/s ports.
The HBA I mentioned has 3Gb/s ports, so it can have 4 of them on one x8 PCIe slot.
But if it doesn't work, then it is obviously useless...


For the OS part:
I want to use VMware ESXi as the hypervisor, booted from a tiny SSD.
Then boot FreeNAS from a USB stick (maybe 2, mirrored).
For the other OS/VM I want to use Windows, booted and run from a small SSD.
There is no redundancy here because this system will only be used as a game-related server; there is no valuable data stored on it.
Also, I assume I can use both SSDs through the storage controller on the motherboard?

I know this all won't be built overnight. And even when it is finished, I want to test it with non-critical data for some time before I give it the all-green.

Again thanks for your help!

*EDIT*

I have one more question. I have to pass through a NIC for each VM, right?
Since the motherboard has 2 Ethernet ports, is it possible to use one for each VM?
Or do I have to use additional NICs?
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
First thank you all for your answers and tips!
You're welcome!
About the article:
There are just a few things that sound contradictory to me.
First they say you shouldn't use additional controllers, but for PCI passthrough you NEED a controller.
Also, I don't understand why they want you to have 3 vdevs, each with its own redundancy...
That would mean I'd have to use a minimum of 6 hard drives to make this happen?
This makes no sense to me... at first glance, anyway.
Where does the article say anything about not using additional controllers?

I too don't understand the precaution about using 3 vdevs in the "Yes you can virtualize FreeNAS" article. I assure you that there are hundreds or thousands of us FreeNAS users with a single pool comprised of a single vdev. Both of my AIO systems have a single pool made up of a single vdev; one contains a single 7-disk RAIDZ2 vdev, the other a 4-disk RAIDZ2 vdev. I'll post a question and see if Josh Paetzel can clarify this point.
But I think I will manage to understand it after a while...

About the HBA:
I also saw the HBAs you mentioned, and they're sweetly priced indeed.
But they only offer 2 ports on an x8 PCIe slot.
I assume that is because they are 6Gb/s ports.
The HBA I mentioned has 3Gb/s ports, so it can have 4 of them on one x8 PCIe slot.
But if it doesn't work, then it is obviously useless...
You misunderstand how the LSI 9211/IBM M1015/Dell H200 cards are designed: the 2 ports you mention each accept an SFF-8087 forward breakout cable allowing you to connect 4 devices. So these cards can connect 8 drives. With different cables and a backplane, they can drive up to 256 drives. And they run at 6Gb/s vs. the 3Gb/s HBA you mention above. There's a reason they come highly recommended here on the forum and are frequently used in ZFS servers.
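
If you want to see the bandwidth math (and why the 3Gb/s question from your first post doesn't really matter for spinning disks), here's a tiny Python sketch; the HDD transfer rate is an assumed typical figure, not a spec:

Code:
# Each SFF-8087 port carries 4 SAS/SATA lanes, so a 2-port card
# connects 8 drives directly (more via an expander/backplane).
PORTS = 2
LANES_PER_PORT = 4
LANE_GBPS = 6           # SAS2/SATA3 link rate per lane
HDD_SEQ_MBPS = 150      # assumed sequential speed of a typical WD Red

drives = PORTS * LANES_PER_PORT
hdd_gbps = HDD_SEQ_MBPS * 8 / 1000   # ~1.2 Gb/s

print(f"direct-attach drives: {drives}")
print(f"per-lane link rate:   {LANE_GBPS} Gb/s")
print(f"typical HDD transfer: ~{hdd_gbps:.1f} Gb/s")
# A mechanical drive doesn't come close to saturating even a 3 Gb/s
# link, so the 6 Gb/s ports matter mostly for SSDs and expanders.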
For the OS part:
I want to use VMware ESXi as the hypervisor, booted from a tiny SSD.
Then boot FreeNAS from a USB stick (maybe 2, mirrored).
For the other OS/VM I want to use Windows, booted and run from a small SSD.
There is no redundancy here because this system will only be used as a game-related server; there is no valuable data stored on it.
You can set your system up this way, if you choose. I don't see any advantages, but, again, it's your system to do with as you please. For a time, I booted ESXi from a USB flash drive, and installed the FreeNAS VM on two SSDs with its 'mirrored installation' feature. I eventually switched to the RAID1 setup I'm using now to gain the advantage of being redundant and therefore safer.

I just wanted to make the point that you can install both ESXi and FreeNAS (and probably your Windows VM as well) on a single SSD.
Also, I assume I can use both SSDs through the storage controller on the motherboard?
As local datastores for ESXi, yes.
I have one more question. I have to pass through a NIC for each VM, right?
No.
Since the motherboard has 2 Ethernet ports, is it possible to use one for each VM?
Or do I have to use additional NICs?
It's possible... but not what you want to do.

With ESXi, you set up one or both of the motherboard NICs on ESXi. Then you assign a virtual NIC to each VM. Read the "FreeNAS 9.10 on VMware ESXi 6.0 Guide" I mentioned above and this will make more sense.
 

Cougar014

Explorer
Joined
Oct 30, 2016
Messages
57
Aha, that explains it!
I didn't know that such SAS breakout cables existed! That will make the choice of an HBA a lot easier. Thanks!


For the controller part:
It is not mentioned in this article, but in other FreeNAS blogs I read that (RAID?) controllers are not advised because they put an extra layer between the storage and ZFS. But that doesn't matter for now...
Let's just stick to the way it is meant to be done.

About the storage:
The way you explain it, I think it is indeed better to just use 2 normal-sized SSDs and mirror them for ESXi, FreeNAS, and Windows. That would save some work as well.

PS: Sorry for not quoting your answers, but I'm on my phone at the moment.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Aha, that explains it!
I didn't know that such SAS breakout cables existed! That will make the choice of an HBA a lot easier. Thanks!
:)
For the controller part:
It is not mentioned in this article, but in other FreeNAS blogs I read that (RAID?) controllers are not advised because they put an extra layer between the storage and ZFS. But that doesn't matter for now...
Let's just stick to the way it is meant to be done.
That's true, you do not want to use a RAID controller with FreeNAS; it works best when it has direct control of the disks, while a RAID controller inserts an unwanted level of abstraction. The LSI HBA card we've been discussing (the LSI 9211/ IBM M1015 / Dell H200) can be used as a RAID card when its ROM is flashed with IR mode firmware, allowing for RAID-1 and RAID-5 arrays, for example. But for FreeNAS, we flash the ROM to IT mode, which makes the card function strictly as a Host Bus Adapter (HBA), giving FreeNAS direct control of the drives.

About the storage:
The way you explain it, I think it is indeed better to just use 2 normal-sized SSDs and mirror them for ESXi, FreeNAS, and Windows. That would save some work as well.
Well... that's what I do. But you have to use a RAID controller compatible with ESXi to do it. ESXi doesn't have drivers to set up any kind of RAID array with a motherboard's built-in SATA ports.

That's why both of my AIO systems (again, see 'my systems' below) have two LSI-based controllers: one in IR mode for the bootable RAID1 mirror of SSDs for ESXi and the FreeNAS VM, another in IT mode for passing through to FreeNAS.

We've discussed several different ways to set up an AIO:
  • Boot ESXi from USB, install FreeNAS VM on local SSD/HDD
  • Boot ESXi from USB, install FreeNAS VM on two local SSD/HDDs (using mirrored option)
  • Boot ESXi from local SSD/HDD, install FreeNAS VM on the same SSD/HDD
  • Boot ESXi from local SSD/HDD, install FreeNAS VM on two local SSD/HDDs - the boot disk plus another one - (using mirrored option)
  • Boot ESXi from RAID array, install FreeNAS VM on the same array
Any of these will work, some just offer more safety (redundancy).
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
I had FreeNAS 9.10 running under ESXi 6 with an M1015 passed through for a few months and it worked well once I got it set up, but getting there was a bit of a struggle. Virtualizing everything just adds a bunch of complexity which can cause a bunch of headaches, especially if you're using FreeNAS as a datastore for your VMs, but it does come with its geek cred.

I had FreeNAS installed on a 150GB WD Raptor and ESXi installed on another one of the same. I then had two other SSDs which I used as datastores for the OSes of my VMs. Like I said before, using FreeNAS as a datastore for the OSes just adds additional headaches, but it's obviously doable. I then connected the VMs to FreeNAS via NFS.
 

Cougar014

Explorer
Joined
Oct 30, 2016
Messages
57
Spearfoot,

I just ordered all the hardware to finally start building everything.

I have one question regarding the boot device for ESXi, FreeNAS, and Windows.
For now I want to use a single SSD for all three of them and upgrade in the future with an additional controller for RAID support.

But do I have to make three different partitions to do this?
Or is it just a matter of installing everything on it, and it manages itself?

And

If the boot SSD fails, I assume it is just a matter of installing a new SSD with the FreeNAS OS on it, and it will recognize my vdev without losing any data? (The FreeNAS data, that is; the Windows VM isn't that important.)

Once again, thanks in advance
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
No, you don't partition the disk; ESXi does that when you install it. ESXi doesn't require much space, so most of the SSD's capacity will be available as a VMware 'local datastore'. After installing ESXi, create your FreeNAS and Windows virtual machines on the local datastore, using 16+ GB of space for the FreeNAS boot disk and whatever size you deem appropriate for Windows. Leave some free space on the datastore for ESXi to use for swapping and other overhead.
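
As a rough illustration of how a single 250GB SSD might get carved up - all of these sizes are assumptions for the sake of example, not recommendations:

Code:
# Hypothetical layout of a single 250 GB SSD used as the ESXi boot
# device and local datastore. All sizes are assumed for illustration.
ssd_gb = 250

allocations_gb = {
    "ESXi install + scratch": 15,
    "FreeNAS VM boot disk":   16,
    "Windows VM disk":       120,
}

used = sum(allocations_gb.values())
free = ssd_gb - used
for name, size in allocations_gb.items():
    print(f"{name:32s} {size:4d} GB")
print(f"{'remaining for ESXi swap/overhead':32s} {free:4d} GB")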

The thing is... you don't have to get this right the very first time. You can always wipe everything and start over if you don't like the way things are going. You'll learn a lot about ESXi once you install it and play with it and all of this will make much more sense.
 

Cougar014

Explorer
Joined
Oct 30, 2016
Messages
57

Sorry to bother you again, Spearfoot, but if you don't mind I have a few questions regarding the guide Benjamin put together.

In step 5 Benjamin created a new vSwitch, with the storage network and the storage kernel both on the left side.
And even without a physical NIC you can see that they are both connected to each other.

I have to mention that I use VMware 6.5 (I thought I'd just grab the newest version), but I found out that not everything works the same way. Having come this far, I don't want to start over.

But in my setup I have 2 physical NICs on vSwitch0 and none on vSwitch1. The problem I have is that if I don't use a physical NIC on vSwitch1, the storage network and storage kernel are not connected to each other. Therefore I can't ping the storage kernel from FreeNAS and (logically) I can't create an NFS share between VMware and FreeNAS...

Right now I have it up and running with one physical NIC attached to vSwitch1, but this is not like the guide.
Can you tell me what the use of this NFS share is, and whether there are any downsides and/or potential risks in having a physical NIC attached to vSwitch1?
Or is there a cure for this? If so, I can't find it... anywhere...

Thanks again!
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
The purpose of the second 'storage' vSwitch is to have a separate network channel strictly for use between the ESXi host and the FreeNAS VM. All of the traffic on this network takes place internally on the server, again, only between the ESXi host and the FreeNAS VM. So there's no need for a NIC; all of the traffic simply passes through the switch, and is used for NFS or iSCSI-based datastores for the ESXi host.

It's very important that the two vSwitches are on separate subnets: I use 192.168.1.0/24 for my LAN and 10.0.58.0/24 for the storage network.

And it's very important to assign two virtual NICs to the FreeNAS VM, and assign them IP addresses within the two subnets, one on the LAN network, and the other on the storage network.
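
If you want a quick sanity check of an addressing plan like this, the standard Python ipaddress module works well. The two subnets below are the ones from this post; the two host addresses are made-up examples, not my actual configuration:

Code:
import ipaddress

# Example plan -- adjust to your own addressing.
lan_net     = ipaddress.ip_network("192.168.1.0/24")
storage_net = ipaddress.ip_network("10.0.58.0/24")

# Hypothetical addresses for the two virtual NICs on the FreeNAS VM.
freenas_nics = {
    "vNIC 1 (LAN)":     ipaddress.ip_address("192.168.1.10"),
    "vNIC 2 (storage)": ipaddress.ip_address("10.0.58.2"),
}

assert not lan_net.overlaps(storage_net), "subnets must not overlap"
print("LAN and storage subnets are distinct")

for name, ip in freenas_nics.items():
    net = lan_net if ip in lan_net else storage_net if ip in storage_net else None
    print(f"{name}: {ip} -> {net}")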

Perhaps ESXi 6.5 doesn't allow a vSwitch with no connected physical network adapters, unlike the earlier versions? I don't know. I use 6.0.
 

Cougar014

Explorer
Joined
Oct 30, 2016
Messages
57

Thanks for your answer!
I have the storage network and the storage kernel set on the /16 subnet (all my other devices etc. are set on /24).
So this is all done; I just don't get why the NIC is required to connect them...

For the rest I have everything set up just like the guide. So I will start using FreeNAS with non-critical data to check and test everything. We will see how this goes...

Thanks!
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
So this is all done; I just don't get why the NIC is required to connect them...
Hmmm... I don't get it either.

You created a virtual switch with related port group and vmkernel port, per Ben's instructions? The only IP involved here is for the vmkernel: I used 10.0.59.1 for mine, screenshot below. I assign IP 10.0.59.2 to the second NIC on the FreeNAS VM, and FWIW, I can't ping the switch (at 10.0.59.1) from the FreeNAS VM (at 10.0.59.2) - but it works flawlessly nonetheless:
[Screenshots: vmware-network-configuration.jpg, bacon-network-interfaces.jpg]
 

Cougar014

Explorer
Joined
Oct 30, 2016
Messages
57
OK, I have it working right now.
Still strange, though, how it all works.
This picture kind of explains my problem:
[Screenshot: Vswitch vmwarew.png]
With vSwitch1 configured the way it is above, it works fine.
But when I configure it like it is below, then I can't ping the host, and I can't create an NFS share...
But right now it works :)

And between my server and my router is a (physical) switch, with one line between the switch and the router.
So I assume there is little point in using both Ethernet ports on vSwitch0.

Thanks anyway for sharing your thoughts
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I'm confused... :confused:
Is it working now, without a NIC attached to the vSwitch?
Also, do you have your LAN and Storage vSwitches located on different subnets? They need to be...
 

Cougar014

Explorer
Joined
Oct 30, 2016
Messages
57

I have it working with the NIC attached to vSwitch1.
All my connections on vSwitch0 are on the /24 subnet.
And all the connections on vSwitch1 are on the /16 subnet (I can't set the switch itself to a certain subnet, or isn't that what you mean?).

Also, the LAN NIC on the FreeNAS VM is on /24 and the storage NIC of the VM is on /16.

So everything is up and running, just like the guide.
Except for the part where I have a NIC attached to vSwitch1, and the guide hasn't...
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I still don't understand how you've set up your network... On my system I use:
  • 192.168.1.0/24 for the LAN, with the VMkernel port IP=192.168.1.32
  • 10.0.59.0/24 for the Storage network, with the VMkernel port IP=10.0.59.1
In other words: two completely different networks. I see you set your Storage VMkernel port IP to 192.168.2.52 - what IP are you using for your LAN VMkernel port IP?

If you follow Ben's setup advice, you'll put the Storage network on a 10.x.x.x network and the LAN on a 192.168.x.x network. Doing this just makes things easier... :)

Also, you need to bind the NFS service to the IP address of the storage network interface. I bind it to both the LAN and storage networks so that other servers can access the NFS shares, like this:
[Screenshot: nfs-service-settings.jpg]
 

Cougar014

Explorer
Joined
Oct 30, 2016
Messages
57

I have set up my network like this:

vSwitch0 has the 'VM network' and the 'management network' (192.168.2.50/24).

vSwitch1 has the storage network (FreeNAS connected at 192.168.1.52/16)
and the storage kernel connected at 192.168.2.52/16.

The FreeNAS VM has 2 NICs, one on the storage network and one on the VM network. The VM network NIC is 192.168.2.15/24, the storage network NIC 192.168.1.52/16.

So the storage network is on the /16 subnet and the VM network (internet) is connected to the /24 subnet.

I'm not sure if this is understandable; I'm not used to juggling IPs this way...
To be honest, it took me 2 days to set this up :(


*EDIT*
On both vSwitches I have one physical NIC each...
The physical NIC on vSwitch1 is there for the reason I explained earlier (the post with the picture).
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Ah, ha! Your networking is malformed.

Class 'C' networks are of the form 192.168.x.x and may have subnet masks of /24, /25, /26, /27, /28, /29, or /30 - note that /24 just means you're using a subnet mask of 255.255.255.0

Your vSwitch0 VMkernel is configured at IP address 192.168.2.50 on a 192.168.2.0/24 network, which allows a host address range of 192.168.2.1 - 192.168.2.254 - so far, so good.

Your vSwitch1 VMkernel is configured at IP address 192.168.2.52 on a 192.168.0.0/16 network, not a valid class 'C' network, but which nonetheless allows a host range of 192.168.0.1 - 192.168.255.254 - a whopping huge number of hosts, and 'way more than you need for the storage network.

Do you see how they overlap? This is double-plus ungood. :)

You want the virtual machine and storage networks to be separate. I suggest designing the storage network as a class 'A' network, as Ben uses in his article. I use 10.0.59.0/24, which lets me use IP addresses between 10.0.59.1-10.0.59.254 on the storage network.
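
Here's the same thing expressed in a few lines of Python (standard ipaddress module, using the addresses from your post), which makes the overlap obvious:

Code:
import ipaddress

# The malformed setup: the /16 storage network swallows the /24 LAN.
lan     = ipaddress.ip_network("192.168.2.0/24")
storage = ipaddress.ip_network("192.168.0.0/16")
print("networks overlap?", lan.overlaps(storage))                 # True
print("storage VMkernel 192.168.2.52 also inside the LAN /24?",
      ipaddress.ip_address("192.168.2.52") in lan)                # True

# The suggested fix: a completely separate private range for storage.
fixed_storage = ipaddress.ip_network("10.0.59.0/24")
print("overlap after fix?", lan.overlaps(fixed_storage))          # False
print("usable storage hosts:", fixed_storage.num_addresses - 2)   # 254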

Here are some handy IP subnet calculators that I use all the time, and will make you into a network engineer, lickety-split! :D

http://www.subnet-calculator.com/
http://www.tuxgraphics.org/toolbox/network_address_calculator_add.html

Good luck!
 