FreeNAS + ESXi Lab Build Log


Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
Correctamundo (with one caveat: correct port and correct IP address for the IPMI interface).

You have three options for which port IPMI uses, and you must connect your interfaces accordingly.
  • Dedicated: Always use the dedicated IPMI interface.
  • Shared: Always use the LAN1 interface.
  • Failover: If a link is detected on the IPMI port, use it; otherwise fall back to LAN1.
How do you have it connected and what do your IPMI BIOS settings look like?

I had given IPMI a static address within the BIOS settings but did not know about the port or interface requirements. I have not yet downloaded the IPMI manual (doing now), but will take a look. I also have the current Ethernet cable plugged into a non-IPMI interface, probably LAN1. If I have the time tonight I will switch things around and see what I can figure out. Thanks all for the help. Today was just about getting the server connected, being able to turn it off and on via the TP-Link HS110, and monitoring it. Work was also a little crazy, so not as much time was available to play/study.

Another aside - I tried to run SuperDoctor V on a Windows 10 VM and it did not seem to report the machine health properly, showing a few errors. This might be fixable, and I may look into it further later, but it is not as big a priority as IPMI, ESXi permissions/access, the UPS appliance, or FreeNAS.


Edit #2--looks like you just updated the post above me, or I am blind. I am re-reading and adjusting.
 

svtkobra7

Patron
Joined
Jan 12, 2017
Messages
202
I had given IPMI a static address within the BIOS settings but did not know about the port or interface requirements.
  • I think we were all there once. ;)
  • If someone says they were born knowing everything, they are about as full 'o $hit as Kim Jong-un (little "Rocket Man") saying he was born without a butthole.
I have not yet downloaded the IPMI manual (doing now), but will take a look.
  • That link I provided is actually for the Linux / Windows client used to access IPMI.
  • Alternatively, you can access via the assigned IP address in a browser, but I much prefer IPMI View. Reference Figure 1.
  • @Bidule0hm suggested the fans aren't controlled in the BIOS, rather IPMI, and he is absolutely correct.
  • You can adjust the fan mode manually via IPMI View (Figure 1) or the web interface, although I would recommend eventually automating thermal management via script. I'm happy to assist with that and provide the one I use; a rough sketch of the idea follows below. Basically, it checks the HDD temperatures once per minute and issues an IPMI command to use the least noisy fan mode needed to keep the warmest HDD under 40°C. One aside on that topic, as an advance FYI for when you begin to research it: the X9 boards lack the granular fan control offered by X10/X11 boards, and the scripts you see being actively maintained are for those boards.
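To make the idea concrete, here is a minimal toy sketch of such a loop, not the script I actually run: the device names, the 40°C target, and the raw fan-mode bytes are all assumptions you would need to adapt to your own board (the X9 raw commands differ from X10/X11, so check your manual).

```python
#!/usr/bin/env python3
"""Toy version of the loop described above: once a minute, read each
drive's temperature and pick the quietest Supermicro fan mode that
keeps the hottest drive under 40C. The device list, the target, and
the raw mode bytes are assumptions -- verify against your board."""
import subprocess
import time

DRIVES = ["/dev/da0", "/dev/da1"]  # hypothetical device names
TARGET_C = 40                      # keep the hottest drive below this

def drive_temp(dev):
    """Parse Temperature_Celsius (SMART attribute 194) from smartctl -A."""
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Temperature_Celsius" in line:
            return int(line.split()[9])  # RAW_VALUE column on most drives
    return 0

def set_fan_mode(mode):
    """Set the fan mode with a raw IPMI command; on many Supermicro
    boards 0x00 is standard and 0x01 is full, but values vary."""
    subprocess.run(["ipmitool", "raw", "0x30", "0x45", "0x01", mode],
                   check=True)

while True:
    hottest = max(drive_temp(d) for d in DRIVES)
    set_fan_mode("0x01" if hottest >= TARGET_C else "0x00")
    time.sleep(60)  # the once-per-minute cadence mentioned above
```

A real version would step through several intermediate fan levels rather than toggling between two, but the polling-and-raw-command structure is the same.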
I also have the current Ethernet cable plugged into a non-IPMI interface, probably LAN1.
  • So if you haven't changed the setting from the factory default (Failover), you should still be able to access IPMI even without IPMI_LAN connected, as long as your cable is in LAN1 (LAN2 won't work). My prior recommendation holds, however.
  • If you still can't access, I would check the jumper as mentioned in my last reply. It is possible the prior owner turned off BMC for security purposes via JPB1 on the motherboard and the jumper wasn't reset for you.
Another aside - I tried to run SuperDoctor V on a Windows 10 VM and it did not seem to report the machine health properly, showing a few errors. This might be fixable, and I may look into it further later, but it is not as big a priority as IPMI, ESXi permissions/access, the UPS appliance, or FreeNAS.
  • I'm only familiar enough with SuperDoctor to know it isn't needed (IMO) and requires a license key.
I haven't read your entire thread so if any advice has already been covered, my apologies.

Figure 1
ipmi.png
 

Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
SD5_notworking.png


There is the snapshot of SuperDoctor not working because it believes it is on a VM. It does pull some of the processor info through. The program is likely useless once I get IPMI up and running. I look forward to tinkering tonight and using all the info you have given me. In the meantime I put up a Nagios XI instance for one of my coworkers to explore. Much more to do, but the rails finally arrived at home, and with IPMI looking like it will be working by the end of the day, things are good.

As far as the scripts, yep, I will be doing one of those. I think Stux, now you, and possibly others have them available. Once I get access I will look into it. I may as well also go over all the motherboard jumpers one by one to make sure they are set correctly, or as best I can with what I know.

Another problem to report - does anyone else have "typing"/connection issues while accessing their home environments? When you enter a period, it inputs "E." or doubles the key you pushed, etc. I had hoped that the newer hardware would magically erase this, but it may be beyond my home lab scope, needing business Internet, better equipment, etc. I am also going through a VPN to a public server to ESXi to the VMs. The extra hops are not likely helping.
 

svtkobra7

Patron
Joined
Jan 12, 2017
Messages
202
You'll be fine with red drives. Your bigger concern should be climate control... you're going to be dissipating a fair amount of heat. Doing so into a sealed room like a closet is a bad idea, as the room will heat up substantially. I'm actually working on this problem myself.
  • That problem is a complete PITA. The room heats up substantially and then you get thermal runaway.
  • Been there if you want some guidance.
  • I went so far as to remove the sheetrock in my closet, reinforce the metal studs (a code requirement, as the home is a condo) with wood 2x4s and 2x6s so I could hang the server on the wall (26" server depth, but of course the closet had to be 25.5" deep), install Rockwool SAFE'n'SOUND (which is remarkable) between the studs, add an exhaust fan in the ceiling, and then replace the sheetrock. Air intake = air flow pulled under the closet door.
  • The following variables all work together to manage airflow: (1) Server is mounted upside down so it is pulling the coolest / densest air from the floor and under the door, (2) Server rear is facing the ceiling so it is forcefully expelling warm air upwards, (3) Exhaust fan in the ceiling is extracting that warm air. To the extent convection plays a role, that assists as well. I think everyone should vertically wall mount, and upside down. ;) Kidding, but the chassis orientation does align perfectly with the air flow path and the server is more easily serviced than if it was racked.
The good news is that the closet has an air vent, is on the bottom floor, and has a wall to outside and a wall to the garage. Depending on how it goes I will make modifications. I think that will be part of the fun, I hope.
  • By air vent do you mean a through wall vent not connected to your HVAC system, or a supply vent connected to your HVAC system?
  • I can 100% assure you that you are going to need a way to exhaust the warm air from your closet or you are going to have thermal issues.
  • Fortunately, your HDDs are 5400 RPM and don't get anywhere near as hot as the HGST Deskstar NAS HDDs I run (which are 7200 RPM). There is a reason those are excluded from the hardware recommendation guide, and if I hadn't already had them on hand, I might have chosen a different drive.
  • Of course, if you believe WD's rated operating temperature, you can hit 65°C. Everyone has their own opinion on optimal drive temperature and there are plenty of external resources to be cited whether that magic number is low or high, so I won't inject my own position. That being said, I think everyone would agree you should stay as far away from that 65°C maximum operating temperature as possible.
  • As @tvsjr goes on to note, the key will be to consider the full system and how air moves in and out of the closet.
  • I'm assuming that vent is an air supply, but given the option of a floor-level supply vent or a ceiling-level return vent (or vent to outside), I'd actually choose the latter. You can pull enough air in from under the door to be sufficient, but absolutely have a way to extract the warm air (or let it flow out of its own accord).
Just remember, it's not about air... it's about air *flow*. The heated air has to get back to the A/C return. If you seal the room up tight, you'll simply increase the static pressure of the room to the point that no air will move, and it'll be just about as bad as having no vent at all.

And yes, I've been doing a lot of thinking and studying on my closet, dealing with the same issues...
  • Sealing the room up tight is fine, as long as there is air movement (which there isn't at the moment for you, so I agree with @tvsjr). Theoretically, the fan in my closet cycles the entire volume twice per minute; see the rough numbers below this list.
  • I think having a well sealed room actually helps in my scenario, as the ceiling height fan induces negative pressure in the closet and forces all air intake to come from beneath the closet door.
  • When I misbehave my fiance locks me in the server closet, and from those experiences, I can attest to the fact that the closet is a negative pressure environment as I can feel cool air being sucked under the door on my toes and the door requires a slight degree of force to be opened.
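To put a rough number on that air-exchange claim (the closet dimensions here are hypothetical, just for scale): a 2 ft × 2.5 ft × 8 ft closet holds only 40 cubic feet of air, so cycling the full volume twice per minute works out to roughly an 80 CFM fan, well within reach of a quiet ceiling or inline exhaust fan.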
Here are a few pics of my server closet if you care to have a looksie: https://photos.app.goo.gl/ckG5k3zH1SR7e4ZV2

And before anyone suggests they would be uneasy with the HDDs aimed at the ground, I should note that hopefully you aren't swapping drives on a daily basis, and when you do, that piece of spinning rust is very carefully inserted into the hole. Further, the front bezel acts as a safety net should a caddy somehow unlatch itself.

In conclusion, I'm able to keep HDD temperatures at 40°C or lower even with an ambient temperature as high as 78°F during the summer, without the fans being on full speed, i.e. the system works, but it took a helluva lot of work to get there (hopefully you don't have to do the same). I'd do it again though; it was a fun project.
 

svtkobra7

Patron
Joined
Jan 12, 2017
Messages
202
The program is likely useless once I get IPMI up and running.
  • I couldn't have summed it up more succinctly myself. I think the only advantage SuperDoctor has is that you can use it to update the BIOS (but the last BIOS was published in 2015, and you incur the cost of a license fee for that "convenience").
A few references for your perusal when you start to play with fan speeds:
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Once you have IPMI, you can turn the server off and on with it. You can use the external device for something else.

 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I don't even have the front panel button connected on one of my servers.
It is turned on with IPMI only.

 

Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
Once you have IPMI, you can turn the server off and on with it. You can use the external device for something else.

You don't even need the machine on to access IPMI? I wonder how I can set up external access without needing to go through ESXi first (which needs power) while still being secure. Obnoxiously long password? Hmmm...


I am in IPMI now after battling with the rack/cage and learning how badly a second person is needed when mounting this chassis. I ended up using the box the chassis was shipped in to hold the chassis up while I maneuvered the inner rail onto the middle rail (don't do this). Also, whoever put the screws in for the adjustable-depth rack holes...wow. I have never had to work my zip gun that hard. I smelled burning oil and thought I saw smoke.

22961-da87409c2d628c7ee38c2fbd551ec9c9.jpg

The rails for the chassis were about 2 inches short, so I had to undo 24 T25 (Torx) screws to move the two vertical mounting rails.

22963-793065d1777ad175f4da48a6c42b23b5.jpg
<- not nice
22962-e8fa3b25569781d990aa7637a5995815.jpg
<- my first attempt at getting around the metal; it worked for an easy screw

I had to run out to Home Depot real quick to get a longer T25 bit, as they conveniently made part of the rail guard the screws from shorter bits. I'd like to do more, but I am tired and want to play some ESO before I get some sleep. I finally have the dang thing racked, and I have access to IPMI.

P.S. - After adding the pictures I realized that I told the story in reverse order, or at least jumbled it...ahh well. Victory!
 


Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
You don't even need the machine on to access IPMI?
IPMI comes on independently when you apply standby power to the system board. The IPMI interface IP address serves up a webpage that you can access and there are options on there for powering the server on or off or rebooting. It also has a remote console with remote media capability.
I use it instead of having a KVM.
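If you'd rather script those power actions than click through the web page, ipmitool can drive them over the LAN interface. A minimal sketch; the BMC address and ADMIN/ADMIN credentials are placeholders for your own:

```python
#!/usr/bin/env python3
"""Sketch of remote IPMI power control via ipmitool; 192.168.1.50 and
ADMIN/ADMIN are placeholders for your BMC's static IP and credentials."""
import subprocess

BMC = ["ipmitool", "-I", "lanplus", "-H", "192.168.1.50",
       "-U", "ADMIN", "-P", "ADMIN"]

# This works even when the host is powered down, because the BMC
# itself runs on standby power.
status = subprocess.run(BMC + ["chassis", "power", "status"],
                        capture_output=True, text=True).stdout.strip()
print(status)  # e.g. "Chassis Power is off"

if status.endswith("off"):
    subprocess.run(BMC + ["chassis", "power", "on"], check=True)
```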
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
You don't even need the machine on to access IPMI? I wonder how I can set up external access without needing to go through ESXi first (which needs power) while still being secure. Obnoxiously long password? Hmmm...
You might want to take a look at this.
https://forums.freenas.org/index.ph...ess-to-your-freenas-server.62376/#post-444738

I have never tried to make my IPMI interfaces available outside my home network, but if I were going to do it, it would be through a VPN.
 

Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
You might want to take a look at this.
https://forums.freenas.org/index.ph...ess-to-your-freenas-server.62376/#post-444738

I have never tried to make my IPMI interfaces available outside my home network, but if I were going to do it, it would be through a VPN.
My only issue with that is that you need something to run the VPN. That is either going to be on the server (a VM), the Cisco firewall I have yet to learn how to use, or my own computer. As options 2 and 3 are a no-go (keeping my personal PC off limits until everything is locked down), a remote IPMI connection with a ridiculous password may be the way to go, or just sticking with the remote power control through the smart plug. I wonder if the APC UPS has electrical monitoring. If so, I can shift the smart plug over to the wall between it and the UPS, using it to measure the electricity and as a master remote on/off...unless that can also be done through the UPS, in which case I will leave the Kill-a-Watt on the wall as a master electrical monitor.
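On the electrical-monitoring question: if I end up running apcupsd (the usual monitoring daemon for APC units) somewhere on the network, I gather its apcaccess utility already reports load and battery figures, which might answer that on its own. A quick sketch of the idea, assuming apcupsd is installed and talking to the UPS:

```python
#!/usr/bin/env python3
"""Sketch: print load/battery figures from an APC UPS via apcupsd's
apcaccess utility (assumes apcupsd is installed and connected)."""
import subprocess

status = subprocess.run(["apcaccess", "status"],
                        capture_output=True, text=True).stdout
for line in status.splitlines():
    key = line.split(":", 1)[0].strip()
    if key in ("LOADPCT", "BCHARGE", "TIMELEFT"):
        print(line.strip())  # e.g. "LOADPCT : 24.0 Percent"
```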

I do plan to set up secure remote access for myself and my friends/family/coworkers. That's all part of the learning experience. I will not be moving any sensitive files over to the server until I have confidence in its security. I've thought of using everything from a VPN to a PGP server to AAA...possibly all in time.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
All of this should be behind a VPN, especially IPMI. Figure out your firewall or buy something that supports it. pfSense, on dedicated hardware or an old PC, is a great choice.
 

svtkobra7

Patron
Joined
Jan 12, 2017
Messages
202
IPMI comes on independently when you apply standby power to the system board. The IPMI interface IP address serves up a webpage that you can access and there are options on there for powering the server on or off or rebooting. It also has a remote console with remote media capability.
I use it instead of having a KVM.

I'm glad to hear that you were able to get IPMI working. It makes everything so much easier.

To add a little more flavor to @Chris Moore's commentary ...

Via IPMI View you can see the "Chassis Power Status" is currently "OFF" ... all I have to do to bring the server back up is click the "Power Up" button. It is similar to WoL (Wake-on-LAN).
via ipmi view.png

Similarly, this can be achieved via web browser. Click "Perform Action" ("Power On Server" is already selected by default, as the host is off).
via browser.png

  • On to more important topics, and I believe my sentiment echoes that of others here: should you expose IPMI? Thinking about this from a homelab standpoint, what is the need, and who needs access to IPMI other than the server admin (you)? Let's say you are traveling on business, your UPS shuts down your host, and you need IPMI to power the server back on. I would forgo the risk of exposing IPMI and just have your wife do it (via the front panel).
  • The same logic applies to FreeNAS. Do you really want to allow others to log in and change FreeNAS settings, etc.? Or worse, open up the opportunity for someone to gain access to your entire system? In my opinion you are creating threat vectors with marginal (if any) value.
  • There are "safer" ways to expose server functionality (Plex as a ridiculously simple example) without compromising the security of your system.
  • But if you choose to do this, I would ensure that access is locked down tighter than Fort Knox (if we even actually have gold there), and then lock down access again.
  • There is a reason many of us put IPMI / FreeNAS on a management VLAN segregated from the rest of our internal networks and wouldn't think about exposing management, even via VPN.
  • It isn't as if this is a production server for a Fortune 500 company, where you need remote access to a server at a colo and resource redundancy with that access.
  • Just my perspective and $0.02.
 


svtkobra7

Patron
Joined
Jan 12, 2017
Messages
202
I am working on a contract at work to get another of these:
http://www.supermicro.com/products/system/4U/6048/SSG-6048R-E1CR60L.cfm
Damn, it has an LCD screen ... I want. Oh, and I suppose fully populated with 12 TB HDDs you are approaching a petabyte of gross capacity. Nice. I had a difficult enough time keeping my HDDs cool, but setting aside the fact that I bet few of these end up as homelabs (so the fans can run at full bore), how does that thing keep sixty 7200 RPM HDDs cool? I started to look at the manual but gave up, as I hate that SM doesn't include clickable section links.
I wish I could have one at home, but I had to settle for one like this:
http://www.chenbro.com/en-global/products/RackmountChassis/4U_Chassis/NR40700
  • You poor thing ... I think you will make it with 48 bays instead of 60. ;) Per your signature you have 16 populated currently? Not a pointed remark (question), just curious.
  • As an aside, I'm starting to regret getting the 836 instead of the 846, as I've had all 16 bays populated since day 1 with 6 TB drives (and have even replaced the DVD and floppy slots with 2.5" HDD hot-swap kits). I could use 8 more HDD bays and have had my eye on eBay 846 barebones systems.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I had a difficult enough time keeping my HDDs cool, but setting aside the fact that I bet few of these end up as homelabs (so the fans can run at full bore), how does that thing keep sixty 7200 RPM HDDs cool? I started to look at the manual but gave up, as I hate that SM doesn't include clickable section links.
The server room is a bit cooler than a regular room, so the front row of drives is around 25°C and each row gets a little warmer. The back row of drives runs about 35°C, and the boot drives in the very back run about 42°C, but those are SSDs and are pretty tolerant of the heat.
Per your signature you have 16 populated currently? Not a pointed remark (question), just curious.
I just have not updated that. I put another 16 in there to make an iSCSI share.
As an aside, I'm starting to regret getting the 836 instead of the 846, as I've had all 16 bays populated since day 1 with 6 TB drives (and have even replaced the DVD and floppy slots with 2.5" HDD hot-swap kits). I could use 8 more HDD bays and have had my eye on eBay 846 barebones systems.
I happen to know that this seller will accept an offer of $350...

https://www.ebay.com/itm/Server-Che...NEW-in-Box-New-Rail-Kit-Included/263519436540
 

Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
All of this should be behind a VPN, especially IPMI. Figure out your firewall or buy something that supports it. pfSense, on dedicated hardware or an old PC, is a great choice.

It will be in time. Right now I have a port forwarded from my dynamic public IP address to ESXi, only allowing HTTPS. In the future, likely sooner rather than later now that I have stable access to ESXi and IPMI (for better monitoring of the system under initial stress loads), I will restrict it down to specific IP addresses. There is much to learn, and I will not be putting any risky files on the server until it is tested as secure. I will be using Plex and other services to make media available, and I may restrict the lab on an application-per-person basis instead of giving people the keys to ESXi. Time will tell. I am really happy with what I have done so far and all I have learned. The best part, though, is that this is just the beginning of what could be a fantastic platform for a great many more milestones.
I'm glad to hear that you were able to get IPMI working. It makes everything so much easier.

Yep, easy fix, just never seem to have enough time at home.
  • On to more important topics, and I believe my sentiment echoes that of others here: should you expose IPMI? Thinking about this from a homelab standpoint, what is the need, and who needs access to IPMI other than the server admin (you)? Let's say you are traveling on business, your UPS shuts down your host, and you need IPMI to power the server back on. I would forgo the risk of exposing IPMI and just have your wife do it (via the front panel).
I agree. I only thought of allowing IPMI from the outside so I could monitor more easily from work during these initial days. I am still somewhat worried about the heat and the highly unlikely fire or other extreme risks, so I like keeping a close eye on it. I can do this just through ESXi, via a VM with a local connection to IPMI or IPMI View. I may use IPMI to teach a few people at work, but the above still applies. Finally, my wife would not be interested in helping much. She would pull the power cord though.

I will eventually be locking all of this down, but I am playing with it as a more open playground for the moment while I learn the more restricted lower levels. It is a risk, but I am only giving access to a select few whom I trust. I do appreciate the advice, though, and all of these suggestions will undoubtedly be part of my journey with this server, with or apart from FreeNAS use. This is part of a long-term plan, 5 years plus, and we are only in the first few days of implementation. It could be locked down by the end of next week if I am ready to use it for more than what it is now.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Keep in mind that many IPMI products have known, open security vulnerabilities... especially if you aren't running brand-new systems with 100% current patching. If you put something on the public interwebz on 443, you will start getting bots hitting it in minutes. One vulnerability and you're owned. IPMI systems were *never* intended to be publicly exposed.
 

svtkobra7

Patron
Joined
Jan 12, 2017
Messages
202
Keep in mind that many IPMI products have known, open security vulnerabilities... especially if you aren't running brand-new systems with 100% current patching. If you put something on the public interwebz on 443, you will start getting bots hitting it in minutes. One vulnerability and you're owned. IPMI systems were *never* intended to be publicly exposed.
  • Agreed.
I am really happy with what I have done so far and all I have learned.
  • I'm glad to hear that.
  • I can absolutely understand your sense of achievement, as last year I decided it was time to get rid of my 3 COTS NAS appliances and build a real server. Thanks almost exclusively to this forum and the wealth of knowledge offered here, I was able to make that rather large leap from COTS to a Supermicro chassis running ESXi, FreeNAS, pfSense, and a number of VMs.
  • I had never used ESXi, FreeNAS, or pfSense prior, and definitely hit some bumps along the way, but considering I don't have an IT background, I found the entire "experience" greatly rewarding, so again good for you.
I only thought of allowing IPMI from the outside so I could monitor more easily from work during these initial days. I am still somewhat worried about the heat and the highly unlikely fire or other extreme risks, so I like keeping a close eye on it.
  • I do appreciate the fact that you want to keep a close eye on it until you have a stable solution.
  • Heat is a valid concern, but fire? Yes, I'm aware that if you throw enough heat at something you can get it to combust, but I think we are a long way from that threshold here.
  • I'm very curious what your HDD temps look like with the closet door closed, considering there isn't a proper exhaust. I fought that epic battle with my server, but it was a rather laborious and somewhat costly fight. Thankfully I won, many hours and a few $$$ later.
Finally, my wife would not. She would pull the power cord though.
  • I think you misunderstood what I was saying. I was just suggesting that if your server went down and you were on the road (as a hypothetical), you could have your wife power it back on by pressing the power button on the front panel.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I am still somewhat worried about the heat and the highly unlikely fire or other extreme risks, so I like keeping a close eye on it.
I had a drive fail catastrophically in one of the servers I manage for work. According to the log, the drive hit 185°C before it died, and the heat was so intense it caused the two neighboring drives to have errors too. It is a good thing the server was running RAIDZ2 with a hot spare. I ended up having to replace three drives at one time, which is rare... That server didn't burst into flames, though. That is very unlikely for anything short of an electrical fault. I had a power supply fail that smelled like it was on fire, but no flames ever appeared. These things don't happen often; I have only seen them because I have been dealing with many servers for many years.

PS. We had one of the two cooling units for my section of the datacenter go down, and the room temp went up to the low 80s (Fahrenheit), causing the drives to go up to the high 50s Celsius. It was a stressful couple of weeks waiting for the parts to fix the cooling unit; with those commercial coolers, I would have thought the parts would be more readily available. The way the datacenter is partitioned, the other coolers didn't help my section much...
 