DrKK's New FreeNAS; Build Experiences

Status
Not open for further replies.

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Also, I agree, where are the pics? I'm actually curious about what this all looks like, I might consider buying a new case so I can move some things around and the one you have looks intriguing.
I'll do pictures. Maybe tomorrow.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
But here's the thing. Now I'll be nervous, every day, about the cables in there.
Meh, that's what email reporting is for. If that's not good enough, then I guess this would be a good test case for some of the other reporting methods @Kris Moore was talking about in one of his update threads, which are coming in future versions of FreeNAS.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
What *ELSE* is wrong with those cables that I haven't seen yet? Are they harming the performance of my pool?
Nah, SATA doesn't negotiate down to keep the interface working, that's far too smart for a consumer interface. It'll just throw errors like crazy.
I don't want my pool to degrade because of cables while I am overseas or something. etc.
Well, the degradation is mostly due to bending, there's not much to degrade if they're currently working and aren't touched.
 

Z300M

Guru
Joined
Sep 9, 2011
Messages
882
As expected from Newegg, my hard drives were more or less just tossed in the box without much attention being paid to securing them very well. Whatever. I'm sure it's fine.
The last few "bare" drives I've bought from NewEgg (all Seagate) have come in "cushion packs" (for want of a better term) in individual right-size cardboard cartons, and then those individual cartons in an outer carton with enough air packs to fill the space.
 

iammykyl

Explorer
Joined
Apr 10, 2017
Messages
51
I don't know what you mean by "12v supply". The 24pin connector includes (several) 12v supplies.

If you mean the two 4-pin (or single 8-pin) auxiliary power connectors, you'll find those are not necessary at all in lower-power builds to fire up the system. Of course, once I was done with the out-of-case testing, I did connect those as well. But I do not believe they are necessary at all in a situation like mine.
Yes, I was referring to the two 4-pin (or, single 8-pin) auxiliary power connectors. May not be critical to power up a low powered CPU, but is required for stable operation as it supplies +12 VDC to a voltage regulator module either on the MB or in the CPU. The X11SSM-F manual lists it as a required connection. As others remarked, could have been the odd POST behavior.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Yes, I was referring to the two 4-pin (or, single 8-pin) auxiliary power connectors. May not be critical to power up a low powered CPU, but is required for stable operation as it supplies +12 VDC to a voltage regulator module either on the MB or in the CPU. The X11SSM-F manual lists it as a required connection. As others remarked, could have been the odd POST behavior.
On further reflection, I am sure you're right. And it would also explain the odd POST behavior.

See, this is why we have these discussions. Thanks.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
UPDATE: Transferred the old pool last night while sleeping (zfs send via ssh). Saturated the 1 Gbps ethernet. No problems, looking good. The fact that the source machine did not have AES-NI for the ssh connection did not seem to be an issue at all; there was plenty of CPU to saturate the 1 Gbps.

I'm renaming pools, etc., and prepping to move the jails and stuff over now.
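For anyone who wants the shape of the overnight transfer, here is a rough sketch of a whole-pool migration over ssh. The pool names ("oldpool", "newpool"), the snapshot name, and the hostname are placeholders, not my actual values:

```shell
# Rough sketch of a whole-pool zfs send over ssh. All names here
# ("oldpool", "newpool", "migrate", the hostname) are placeholders.

# On the old box: take a recursive snapshot to serve as the send point.
zfs snapshot -r oldpool@migrate

# Stream it to the new box. -R replicates child datasets, snapshots,
# and properties; -F on the receiving side rolls back to a clean state
# before receiving if necessary.
zfs send -R oldpool@migrate | ssh root@newbox "zfs recv -F newpool"
```

On a sender without AES-NI, picking a non-AES cipher (e.g. `ssh -c chacha20-poly1305@openssh.com`, available in OpenSSH 6.5 and later) can keep ssh from becoming the bottleneck, though as noted above it wasn't an issue in this case.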
 
Last edited:

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Glad things are going smoothly.
Well not so much.

We'll see. I just moved the jail pool over, and that works out great, but three times I had a major system crash. I couldn't tell what it was, because it spewed about 10 Mbps of text errors over the IPMI; it looked board-related more than OS-related. Each time, it happened while I was messing with the newly imported jails. Since the system had been stable under load for 24 hours, and the only thing added since was the jail SSD, I assume something was jacked up with that.

Reseated all the connections to and from the jail SSD. The problem appears to have cleared, but I'll be nervous for another day. We'll see.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
Does that R5 case have a filter for the side panel fan?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Well not so much.

We'll see. I just moved the jail pool over, and that works out great, but three times I had a major system crash. I couldn't tell what it was, because it spewed about 10 Mbps of text errors over the IPMI; it looked board-related more than OS-related. Each time, it happened while I was messing with the newly imported jails. Since the system had been stable under load for 24 hours, and the only thing added since was the jail SSD, I assume something was jacked up with that.

Reseated all the connections to and from the jail SSD. The problem appears to have cleared, but I'll be nervous for another day. We'll see.
Sounds very weird. Never had any similar trouble with my X11SSM-F, though I've only ever tried it with the boot SSD.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Sounds very weird. Never had any similar trouble with my X11SSM-F, though I've only ever tried it with the boot SSD.
Yeah well, it's been trouble free now for many hours. The side panels are off. 99.1% sure it was an electrical problem induced by (as many others have said) the fact that the side panels put pressure on the drive connectors.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I have to agree, my X11SSM-F never gave me any trouble once I upgraded the initial firmware. Are you still using those jacked up SATA cables? Stick with the ugly ones if they work reliably until you can purchase new cables to your liking.

On a different topic, did you run the customary RAM and CPU stress testing?
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
On a different topic, did you run the customary RAM and CPU stress testing?
I did not. I have high confidence in the CPU and RAM, so I was going to let the customary RAM and CPU stress testing wait for a week or so. I'll get to it, but I think there's very little chance that Kingston ECC DIMMs would be jacked up, or that an Intel i3 would be jacked up. If I'm wrong, and I have to unbuild the box next week, so be it. I wanted to get my services (the Mumble server, Plex, the mail server, ZNC, etc.) back online as quickly as possible.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Also, update:

Today I did some more with the server. I set up the snapshot schedule, the scrub schedule, etc. But much more importantly, I moved the jail SSD over. And I want to talk about that for a moment, because I really developed an appreciation for this.

As a lot of you know, many of us run our jails on a separate pool, often on a single (or mirrored) SSD, even though we have ample space in our data pools to run the jails if we so chose. We often don't have super convincing reasons for doing this; at the end of the day, "it just seems like a smart thing to do" somehow. Well, let me tell you how awesome that was today as I moved my jail SSD over to the new box ("Daneel", by the way, is the name I settled on, another robot from Asimov's work). My jails are configured with VIMAGE, with specific fixed IPs outside the DHCP range of the router, and with specific MAC addresses that I put into the config. Here were the steps I followed:

1) I went into the old server's storage screen, highlighted the SSD with the jails on it, clicked the icon with the red X in it, was careful to make sure "erase the data" was *NOT* checked, and proceeded to "detach" the pool.

I really hate this language. We have vocabulary in FreeNAS that does not really match the common vocabulary in ZFS at large, and that makes it hard to understand what you are or are not doing. It took me a few minutes to convince myself that this thing called "detach the pool without clicking the DELETE DATA button" was code for "export the zpool". There are a few similar vocabulary liberties we take in FreeNAS in other places, but I will not dwell on those. If we have ZFS language, we should be using it, with a tooltip to help the novice user. "Exporting the pool" and "completely thermonuclearly blowing away the collection of devices that constitute a pool" are two completely separate, and unrelated, procedures, and they should not be part of the same user dialogue, particularly one which uses non-standard terminology, where the difference between one and the other is a checkbox.

And don't tell me "the whole screen turns crimson red and warns you that you are about to delete the data if you click the checkbox". I'm not buying it. It was either Jordan, or Xin, or Suraj, one of the guys at iXsystems, who once told me, "If you have to put an arrow or a flashing red screen on it in your GUI so the user doesn't screw it up, then your GUI is doing it wrong; the problem is the GUI, not the user's stupidity." Indeed. Since I don't write GUIs (rather, I write the computationally-intensive stuff GUIs call), this bit of wisdom was new to me, and I remembered it.
Anyway, as for why to "detach": the purpose of "exporting the zpool" ("detach" in FreeNASese) is to flush and commit all caches and outstanding transactions, so that the pool's metadata indicates to the next system that imports it that everything is good and fine, the last transactions completed, and we are ready to import. If you try to import a pool that was not properly exported, as many of you have personally experienced, there can be issues.

2) Put the SSD into the new box. I clicked "import volume" in the GUI, boom, it's imported, 2 seconds. All my jails are there in the pool.

3) Now, I just go to Jails->config, and say the jail "root" is ssd/jails. Boom done. All my jails are now in the "jails" tab in the GUI.

4) Magically, all the jails are right as I left them, FreeNAS is aware of the entire jail config, everything is good.

5) Of course, if you "added storage" from your pool into the jails, you'll have to set that up again, as that will not carry over.

6) Now I fire up the jails. And everything simply is EXACTLY as it was before, and the beauty of having it on the separate SSD, with static IPs (which of course carry over), specific MACs (which carry over), etc., is that all of my port forwarding rules, all of my static DHCP allotments, all of that stuff, totally seamless now, I don't have to do *A SINGLE THING*. Don't have to reconfigure the firewall to forward the mumble server port to the new box, don't have to reconfigure the other ports to forward to new boxes, don't have to do ANYTHING. It just works.

7) Of course, you lose all of your periodic snapshot tasks that went with the jail's dataset(s), so you'll have to reset those.

Now of course, you can have a similar experience if your jails are simply on your main pool, by zfs sending over your datasets and resetting the jail root and whatever. But the "just move the jail ssd to the new box and import the pool" method really has fewer moving parts, and the transfer speed is infinity, instead of 1Gbps or whatever.
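The GUI steps above map onto plain ZFS commands; here is a hedged sketch, with the pool name "ssd" as a placeholder:

```shell
# On the old box: "detach without deleting the data" in the FreeNAS GUI
# is, underneath, a clean export. Caches are flushed, the last
# transactions commit, and the pool is marked safe to import elsewhere.
zpool export ssd

# (The destructive checkbox path corresponds to the entirely separate
#  "zpool destroy" command, which is exactly why the two should not
#  share one dialogue.)

# On the new box, after physically moving the SSD over, "import volume"
# amounts to:
zpool import ssd

# Sanity-check that the datasets and snapshots all came across.
zfs list -r ssd
zpool status ssd
```

The zfs-send alternative for jails living on the main pool would replace the export/import pair with a send/receive of the jail datasets, after which you repoint the jail root the same way.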

I still have the side panels off the box. I think this is definitely a design flaw in the R5 case: with standard connectors, you are smashing your SSDs' connectors against the left panel, and your HDDs' connectors against the right panel, when you close it. Simply making the case 0.5" wider would totally solve this problem. I guess I'll just order some 90-degree connectors or something and lick my wounds. Overall, though, the case is pimp.
 
Last edited:

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Meh, that's what email reporting is for.
Better set that up right now, actually. Forgot that part, even though it's in my "How to Set Up your New FreeNAS" post from last year or whatever. lol
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
We have vocabulary in FreeNAS that does not really match the common vocabulary in ZFS at large, and that makes it hard to understand what you are or are not doing... If we have ZFS language, we should be using it, and a tooltip to help the novice user.
Yes, yes, OH YES, yes, yes.

Indeed yes.

This.

Whew.

Not least because existing ZFS documentation would then be applicable to FreeNAS, potentially saving significant effort reinventing wheels that could be more profitably spent elsewhere.
 