My Dream System (I think)

Status
Not open for further replies.
Joined
Nov 11, 2014
Messages
1,174
Metrocast Cable. It used to be really crappy and unreliable before I joined them. I waited until I started hearing better things about them. They are the fastest ISP in town too, although I don't subscribe to the fastest speed. The pricing is a bit high for my tastes, but as long as I can stream two Netflix movies and surf the internet at the same time, all is good. The only ISP options I have are Metrocast Cable and Verizon DSL. If Verizon FiOS were an option, I'd likely jump ship.

Well, no jealousy here then, since I can't tell if Metrocast is worse than Comcast. :smile:

By the way, today I am playing with ESXi 5.5 and adding a 926i-8i RAID card, and I was wondering how your project went. Were you able to set it up successfully with the proper drivers installed? It's a little frustrating to figure out, but I know it will be worth it in the end.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
For my system please keep in mind that the RAID card is only used in IR mode and for the boot mirror drives for ESXi 6.0U2. The drivers were already part of the ESXi package so nothing special to load for that.
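For anyone wanting to confirm that the inbox driver is handling a card like this, ESXi's esxcli can list the installed VIBs and the storage adapters it has bound. A minimal sketch, to be run in the ESXi host shell (output format varies by ESXi version):

```shell
# On the ESXi host: confirm which megaraid driver VIB is present
esxcli software vib list | grep -i megaraid

# List the storage adapters ESXi has bound a driver to
esxcli storage core adapter list
```

If the megaraid entry shows a `VMware_bootbank_...` VIB, the card is running on the driver that shipped with ESXi, with nothing extra loaded.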
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The only workaround I had to use was described above by @jgreco, and it works quite reliably:

Pleasant to know that actually works now...

I also agree with the Grinch that a dual-CPU server w/ 32+GB RAM allocated to the FreeNAS VM is a minimal entry-level virtualized filer for real work; i.e., multiple users running multiple VMs in a production environment.

I seem to be quite disagreeable this morning, because I feel obligated to point out that the number of users, and maybe even the number of VMs, isn't anywhere near as relevant as how busy the VMs are.

My use-case is different; I'm a developer and my ESXi-plus-FreeNAS AIO is my essential developer's Swiss Army Knife. I don't regularly run any VMs except for FreeNAS, but I have VMs set up to run several versions of FreeBSD, Linux, Windows w/ Visual Studio, Oracle databases, etc., depending on what project I'm working on. So a single-CPU server with only 16GB of RAM dedicated to FreeNAS suits my needs perfectly.

The usual problem as you get down to smaller amounts of memory is that ZFS loses the ability to keep sufficient data in the ARC and identify blocks that are being repeatedly accessed. This has a lot more to do with the ratio of the size of the data being accessed to the ARC size than with any hard and fast rule about a minimum amount of RAM, but typical light VM usage seems to set this above the 16GB mark. Once you get below whatever ratio is suitable for your workload, performance just totally sucks ... but things do keep working. The ARC is just madly thrashing and not being very useful.

This is why there are various recommendations for a "minimum" for VM usage, often as high as 64GB. 64GB is large enough to allow ZFS to work correctly doing block storage work for several typical/average VMs. As with so many ZFS things, though, the "rules" aren't absolute and there are counterexamples that disprove every rule.
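As a back-of-envelope illustration of the ratio being described (the numbers below are made up for the example, not taken from the thread): a 16GB FreeNAS VM might leave roughly 12GB for the ARC, so guest VMs that repeatedly touch about 100GB of blocks would be running at roughly an 8:1 working-set-to-ARC ratio:

```shell
# Illustrative numbers only: a 16GB FreeNAS VM might leave ~12GB for ARC.
working_set_gb=100   # hot data the guest VMs actually re-read
arc_gb=12            # approximate ARC size available to ZFS
ratio=$(awk "BEGIN {printf \"%.1f\", $working_set_gb / $arc_gb}")
echo "working-set:ARC ratio = ${ratio}:1"
```

At ratios like this the ARC evicts blocks before they are re-read, which matches the "madly thrashing" behavior described above.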
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Back to the hardware RAID card setup vs. 'share iSCSI from FreeNAS'.
I see a couple of different priorities being implicitly discussed here, built by different lines of argument grinding against each other when perhaps, more fittingly, they run in parallel to each other.

I'll suggest a list (inspired by the man of all sorts of lists @Mirfster ) for fun.
It is a topology and hierarchy to illustrate the differences encapsulated in the above discussion:
  • Production box - Corporate level (at home) @jgreco
  • Home Production box - Angry wife level @joeschmuck
  • Home lab box - @Spearfoot
  • Less critical than home lab box - me :p
@jgreco advocates the hardware RAID solution. I doubt anyone is better informed on its advantages in the case of a 'production' box, or at the very least one leaning towards 'production' rather than 'one user at home'.
The hardware setup is probably the easiest way to build a reliable and "decently cost efficient" solution that gets the job done.
Though in my camp, another $150-200 for a RAID card that doesn't enable particularly smooth installation/notifications under the chosen ESXi seems far over the top.
IMO, that kind of cash is a lot more appealing to spend on SSDs for VM storage, or even on another dust collector for the shelf...

Learning outcome:
I'll take note to (attempt to) be more precise in future recommendations, particularly in relation to actual use-case requirements.
It is deceptive how apparently similar use cases yield quite different priorities and benefit from different approaches to solving the problem.
 
Joined
Nov 11, 2014
Messages
1,174
For my system please keep in mind that the RAID card is only used in IR mode and for the boot mirror drives for ESXi 6.0U2. The drivers were already part of the ESXi package so nothing special to load for that.

So you haven't installed anything for the RAID card, and you are able to see the card and drive health status in ESXi?

What about RAID volume management? How do you do that?
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
So you haven't installed anything for the RAID card, and you are able to see the card and drive health status in ESXi?

What about RAID volume management? How do you do that?
Check #341 and #342 for some additional 'gotchas'.
 
Joined
Nov 11, 2014
Messages
1,174
Check #341 and #342 for some additional 'gotchas'.
I did that very carefully. In fact I spent a good Saturday on it, and got as far as getting ESXi to show health, but had no luck with graphical management from another host.
That's why I asked what @joeschmuck did to get it to work, and how far he got.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
I did that very carefully. In fact I spent a good Saturday on it, and got as far as getting ESXi to show health, but had no luck with graphical management from another host.
That's why I asked what @joeschmuck did to get it to work, and how far he got.
Nope, not yet. I haven't had enough time to play with it; the family will hate me if I take the internet down a few times. But I may have an open window tonight.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
I did that very carefully. In fact I spent a good Saturday on it, and got as far as getting ESXi to show health, but had no luck with graphical management from another host.
That's why I asked what @joeschmuck did to get it to work, and how far he got.
Alright.
What magic 'knobs' did you turn?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
Alright.
What magic 'knobs' did you turn?
If I told you it wouldn't be a secret.

So back to the hardware monitoring using the Dell PERC H310 card.

I am a bit perplexed and will need to do more research. I can tell you what I currently have running on the hardware. In @jgreco 's post #342 there is a listing of the specific software to install. Before I go and mess things up, I think I need to be very clear on which packages need to be installed.

So I found the following files which I believe are correct for my situation:
megaraid_sas-06.803.73.00-offline_bundle-2152363.zip
vmware-esx-storcli-1.19.04.vib
VMW-ESX-5.5.0-lsiprovider-500.04.V0.59-0004-offline_bundle-3663115.zip

Now, the one problem I keep running into involves the two files I installed to enable NUT to work properly. Every time I need to update a file via esxcli, I get the following error messages, which I think I need to resolve before going any further. The way through this is to "remove" the package, after which software installs and upgrades work fine; then I need to reinstall the upsmon file to restore functionality:

Code:
VIB Margar_bootbank_upsmon_2.7.2-1.3.0vmw.500 violates extensibility rule checks: [u'(line 27: col 0) Element vib failed to validate content']
VIB Margar_bootbank_upsmon_2.7.2-1.3.0vmw.500's acceptance level is community, which is not compliant with the ImageProfile acceptance level partner
To change the host acceptance level, use the 'esxcli software acceptance set' command.
Please refer to the log file for more details.


So I need to either figure out what is happening in this upsmon file in order to fix it, or find a more recent version. Once I figure this out, I can work on the H310 and will have something to test the upsmon fix with.
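The remove/upgrade/reinstall dance described above can be scripted on the ESXi host. A sketch, using the VIB name from the error output; the `.vib` path is a placeholder for wherever the replacement file is staged:

```shell
# On the ESXi host. VIB name taken from the error message above;
# the install path below is a placeholder, not from the thread.
esxcli software vib remove -n upsmon

# ...perform the update that was being blocked...

# Community-level VIBs need the host acceptance level lowered first
esxcli software acceptance set --level=CommunitySupported
esxcli software vib install -v /vmfs/volumes/datastore1/upsmon.vib
```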

EDIT: The upsmon file was updated to a newer version 2 months ago. I'll give it a shot tomorrow evening.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
If I told you it wouldn't be a secret.

Now, the one problem I keep running into involves the two files I installed to enable NUT to work properly. [...]
As I recall, APC has a virtual appliance to monitor the UPS and shut down the host if a power failure were to occur.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
As I recall APC has a virtual appliance to monitor the ups and shutdown the host if a power failure were to occur.
I believe that is for an APC UPS that communicates via Ethernet, not a USB port. If that has changed, I'd like to know.
 
Joined
Nov 11, 2014
Messages
1,174
I am a bit perplexed and will need to do more research. I can tell you what I currently have running on the hardware. In @jgreco 's post #342 there is a listing of the specific software to install. Before I go and mess things up, I think I need to be very clear on which packages need to be installed.

I'll wait till you catch up so we can share experiences. First, there are two ways to install drivers: one with a zip file (offline bundle), the other with VIB files. The one jgreco describes is the offline bundle method using zip files, which requires an additional software bundle that only paid ESXi customers have access to; the other (with VIB files) doesn't. We'll get to that when you are done with your research too.
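For reference, the two install methods differ mainly in the esxcli flag used. A sketch with placeholder datastore paths (not from the thread):

```shell
# Offline bundle (zip) method: install from a depot bundle
esxcli software vib install -d /vmfs/volumes/datastore1/offline_bundle.zip

# Individual VIB method: install a single .vib file
esxcli software vib install -v /vmfs/volumes/datastore1/driver.vib
```

Both commands require absolute paths on the ESXi host, and a reboot is usually needed before a new driver takes effect.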


Now the one problem I keep running into are the two files I installed to enable NUT to work properly.

This part I can't help you with, because I don't use NUT, so I have no experience with it. I use the virtual appliance that APC offers and run it as another VM (512MB RAM with openSUSE); it works perfectly for shutting down the ESXi host. (This may only work with a Smart-UPS and a management card.)

For shutting down the VMs, you have 2 options:
1. You can ask the hypervisor to do that on the way out (but there is a trick to make it work, at least with ESXi 5.5).
2. Do it with the APC software from inside the VM (I like this one better).
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
I've removed "upsmon" so I could install the other three files without the warning messages. I will install it again soon, but with the new upsmon files.

I have installed all three files (the ones I listed above) without issue and rebooted the machine. Here is a screen capture from vSphere. As you can see, I can now view the drives in vSphere.
Capture.JPG

However, when I use "./storcli show" I get the following message:
Code:
[root@localhost:/opt/lsi/storcli] ./storcli show
Status Code = 0
Status = Success
Description = None

Number of Controllers = 0
Host Name = localhost
Operating System  = VMkernel6.0.0

[root@localhost:/opt/lsi/storcli]

and when I use the command "./storcli /c0 /vall show" I get the following message:
Code:
[root@localhost:/opt/lsi/storcli] ./storcli /c0 /vall show
Controller = 0
Status = Failure
Description = Controller 0 not found

[root@localhost:/opt/lsi/storcli]


So it appears that I can monitor the health of the RAID and drives, but the CLI does nothing for my configuration. Now I need to figure out how, or whether, the system will notify me of a problem. More work to do.
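One way to get the missing notification would be a small cron-driven check that parses `storcli show` and complains when no controllers are visible. A sketch, using the sample output above as stand-in input (on the real host you would replace the here-doc with the output of `/opt/lsi/storcli/storcli show`):

```shell
# Stand-in for the real `storcli show` output captured above
out=$(cat <<'EOF'
Status Code = 0
Status = Success
Description = None

Number of Controllers = 0
Host Name = localhost
Operating System  = VMkernel6.0.0
EOF
)

# Pull the controller count and warn when storcli cannot see the card
count=$(printf '%s\n' "$out" | awk -F' = ' '/Number of Controllers/ {print $2}')
if [ "$count" -eq 0 ]; then
    echo "WARNING: storcli reports no controllers"
fi
```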
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
Is it just me, or does the server seem to be running slower after installing the files to monitor the RAID card? Must be me, I need some sleep.
 
Joined
Nov 11, 2014
Messages
1,174
Is it just me, or does the server seem to be running slower after installing the files to monitor the RAID card? Must be me, I need some sleep.

Same thing happened to me. I only installed 2 of the 3 files, so I didn't install the storcli driver, but my writes dropped from 200 to 80 MB/s right after installing the driver.

P.S. I didn't mention it because I thought it was just me, but now that you mention it, I'm glad you did, so I know it wasn't my imagination. :smile:
How cool is it to share experiences with nice people who are working on the same projects. I love this forum.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
How cool is it to share experiences with nice people who are working on the same projects. I love this forum.
But I wish the experience were a more positive one. I'm not sure what to do at this point. I may back out the changes one at a time and see what happens. Hey, maybe you could do it before me again and let me know which one caused the slowdown. I'm about to head out for work in a minute, so I won't get the chance to play until later tonight.
 
Joined
Nov 11, 2014
Messages
1,174
But I wish the experience were a more positive one. I'm not sure what to do at this point. I may back out the changes one at a time and see what happens. Hey, maybe you could do it before me again and let me know which one caused the slowdown. I'm about to head out for work in a minute, so I won't get the chance to play until later tonight.

It will become positive, because we will get to the bottom of it. :smile:
I have an idea, but I won't be home soon to try it: since our RAID cards were recognized by ESXi without the card's own driver, I am thinking of starting with a fresh ESXi install, but this time installing only the management software (the one that makes the RAID controller health visible under ESXi "storage"). This way we can use "the good driver" that was already part of ESXi, which obviously works better, and at the same time monitor our RAID card health in ESXi.

P.S. The Black Ninja always goes for the cleanest and most elegant solution.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
So you are saying to reverse this action?

Code:
 VIBs Installed: LSI_bootbank_scsi-megaraid-sas_06.803.73.00-1OEM.550.0.0.1331820
  VIBs Removed: VMware_bootbank_scsi-megaraid-sas_6.603.55.00-2vmw.600.0.0.2494585


But not this action...
Code:
VIBs Installed: LSI_bootbank_lsiprovider_500.04.V0.59-0004
  VIBs Removed:
 
Joined
Nov 11, 2014
Messages
1,174
So you are saying to reverse this action?

Code:
 VIBs Installed: LSI_bootbank_scsi-megaraid-sas_06.803.73.00-1OEM.550.0.0.1331820
  VIBs Removed: VMware_bootbank_scsi-megaraid-sas_6.603.55.00-2vmw.600.0.0.2494585


But not this action...
Code:
VIBs Installed: LSI_bootbank_lsiprovider_500.04.V0.59-0004
  VIBs Removed:

You should start with a fresh ESXi install before you install anything, not after you have added, removed, or overridden drivers. Then install only the file named "...lsiprovider_500.04..." (which is not the actual driver for the RAID card, but more like management software that makes the card's health visible).
Then you should be all set to use and monitor your RAID card, with only one thing missing: the ability to change the RAID card configuration from ESXi. But you can do that from the card's BIOS at boot, which might be even better anyway.
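Under that approach, the only post-install step would be adding the LSI provider bundle. A sketch, assuming the bundle (the filename listed earlier in the thread) has been staged on a datastore whose path is a placeholder:

```shell
# On a fresh ESXi install: add only the LSI health/CIM provider,
# leaving the inbox megaraid driver untouched. Path is a placeholder.
esxcli software vib install -d /vmfs/volumes/datastore1/VMW-ESX-5.5.0-lsiprovider-500.04.V0.59-0004-offline_bundle-3663115.zip
reboot
```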
 