First Build: X11SSM-F Will it FreeNAS and Boot Device Question

reub

Dabbler
Joined
Sep 22, 2018
Messages
14
Hi all,

Context
Planning my first FreeNAS build based on the Supermicro X11SSM-F board. Would appreciate any feedback on the config below; I think it's fairly close to a lot of the builds recommended here recently. This will primarily be file storage (backups and Plex data). The plan is for another server on my network to read the Plex data via NFS mounts over 1Gbps networking. I've tried to include some CPU headroom to support future, more processor-intensive needs, since the difference in cost isn't dramatic.

Specs
Questions
  1. Boot device best practice. I'd rather avoid USB for the reliability and speed reasons noted on these forums, and I'm planning to max out the available SATA connections, so what's the next best option? Would that be an SSD on a PCIe card?
    1. If so, are there any uses/reasons to size it beyond what's needed for boot environments?
  2. PSU. I think I'm in the ballpark, but would spending another $10 for 650W be worthwhile? Napkin calcs (50W CPU + 240W storage + 100W misc) still leave me under 400W. Does this wattage sound reasonable?
  3. Would just appreciate a general look through the config. Pretty standard: is there anything dumb or inadvisable here that I may not be seeing?

Thanks in advance!
 
PhiloEpisteme

Joined
Oct 18, 2018
Messages
969
Boot device best practice. I'd rather avoid USB for the reliability and speed reasons noted on these forums, and I'm planning to max out the available SATA connections, so what's the next best option? Would that be an SSD on a PCIe card?
  1. If so, are there any uses/reasons to size it beyond what's needed for boot environments?
Using a PCIe SSD might not be the most efficient use of the speed in that slot. If your primary reason for avoiding SATA SSDs is running out of ports, you can pick up a cheap HBA off eBay for ~40-60 bucks, such as this LSI-9211. Doing so also lets you grow the number of drives significantly if your storage requirements change, and it frees up your PCIe slots for things like SLOG devices, 10G NICs, etc. Of course, whether that is the right choice for you depends on your budget and expected use of the system. If you can swing the price difference of a small SATA SSD + HBA + cables vs. a PCIe SSD, I think that option is the most flexible.

PSU. I think I'm in the ballpark, but would spending another $10 for 650W be worthwhile? Napkin calcs (50W CPU + 240W storage + 100W misc) still leave me under 400W. Does this wattage sound reasonable?
650W may be a bit more than you need, but not so much that it's a ridiculous expense. That extra buffer room is likely worth the 10 bucks.

Would just appreciate a general look through the config. Pretty standard: is there anything dumb or inadvisable here that I may not be seeing?
I don't see anything glaring. I would say you might consider not going with the tempered-glass version of that case if you care about noise at all; the server parts are not especially attractive to most people, and the solid-sided R6 cases have sound-dampening paneling. Your board also supports 7th-gen i3s if you cared to upgrade from that 6th-gen CPU. Again, whether that is worth it or not depends on the cost and what you plan to do with the system in the future.

Looks like a great starting build to me.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Boot device best practice. I'd rather avoid USB for the reliability and speed reasons noted on these forums, and I'm planning to max out the available SATA connections, so what's the next best option? Would that be an SSD on a PCIe card?
An M.2 PCIe SSD and a PCIe-slot-to-M.2 converter card are probably the way to go.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Or an X11SSH-F, which comes with one M.2 slot and can boot off a PCIe SSD in that slot.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Or an X11SSH-F, which comes with one M.2 slot and can boot off a PCIe SSD in that slot.
Poor value. That slot is only PCIe x2, and the price difference is typically larger than the price of an adapter.
 

reub

Dabbler
Joined
Sep 22, 2018
Messages
14
I ultimately went with the X11SSM-F plus a PCIe to M.2 converter card and a 250GB Sabrent drive, for the reason @Ericloewe stated, after reading up on the X11SSH board as a potential option last week. Appreciate the perspective regardless. I updated my case selection to the Fractal Design R5 given the feedback on glass noise from @PhiloEpisteme (thank you) and noting it had 8 drive bays. Much obliged all. I'll report back here as the build progresses.
 
PhiloEpisteme

Joined
Oct 18, 2018
Messages
969
I updated my case selection to the Fractal Design R5 given the feedback on glass noise from @PhiloEpisteme (thank you) and noting it had 8 drive bays. Much obliged all. I'll report back here as the build progresses.
To be clear, the R6 has different versions. I was suggesting you look at the R6 without the tempered-glass side, such as this one, which I have and which is very quiet. The R6 supports 6 HDDs out of the box and can easily be made to support 12; I've posted a build here with details if you like.

Though I also have heard good things about the R5 so if you like it better, by all means go for it. :) I just didn't want to come across as having said bad things about the R6.

Sounds like you're feeling confident about your build. Report back with how it goes!
 

reub

Dabbler
Joined
Sep 22, 2018
Messages
14
@PhiloEpisteme You were clear, I wasn't. The current pricing for the other R6 variants didn't seem to justify the difference (to me) over the R5. I also wasn't aware the R6 could easily be adapted to support 12 drives so it seemed like a straightforward choice.
 

reub

Dabbler
Joined
Sep 22, 2018
Messages
14
Sigh... The best laid plans. I have two issues:

1) I mistakenly ordered the X11SSH-F-O rather than the X11SSM-F. I could live with this as noted above or swap it out but...

2) It doesn't POST. I think this indicates bad memory, but I'm not certain. It's three short beeps, one long. Here's a recording (start at 26s):
https://www.dropbox.com/s/wujw903h31bu3ub/Beep codes.m4a?dl=0

The memory is what I originally described above, but I was a little suspicious when I received it in one two-pack containing only one DIMM and another single pack that had handwriting on it indicating a possible return. I've tried both DIMMs installed together (DIMMB2, DIMMA2) and each one alone in the DIMMB2 slot. Same result each time. No monitor output (unless my VGA-to-HDMI adapter is also an issue) on any attempt, and the same beep codes each time. FWIW, the board is connected to the PSU, VGA, and a USB keyboard, with no boot device installed, and I'm testing outside the case.

Is anyone familiar with this sequence? I'm thinking I should return the DIMMs (and probably the motherboard, to get the one I intended), but I'd like to confirm first. I realize this isn't technically FreeNAS-related, so mods, if this is off topic I'm happy to go look elsewhere on the support side.
 

PhiloEpisteme

Joined
Oct 18, 2018
Messages
969
I mistakenly ordered the X11SSH-F-O rather than the X11SSM-F. I could live with this as noted above or swap it out but...
This may be a blessing, since it seems to have an issue POSTing. Perhaps use it to verify your RAM if you can, and then return it for the one you wanted?

I received it in one two-pack containing only one DIMM
When I purchased Crucial memory it was shipped this way directly from Crucial. They seemed to just use the same packaging in this case.

another single pack that had handwriting on it indicating a possible return.
Certainly possible. Memtest will help give you confidence in the DIMM if you can get the board to POST.

Before you return/replace the board, I think it is worthwhile to make sure the issue is your DIMMs and not your board's ability to communicate with them. It sounds like you may have already done this, but it is worth saying explicitly.

To determine whether the issue is your board, a DIMM, or just using the wrong slot, try to get the board to POST using only 1 DIMM at a time, trying each DIMM in every slot. The board's documentation specifies which slots work with only a single DIMM, so you can start there, but it is sometimes difficult to tell which physical slot the specification refers to, so make sure you try each slot to find the right one. If you can get the board to POST with one DIMM in a slot but not with the other DIMM in that same slot, that is a good indication that the latter DIMM is faulty.

If they both POST when used in the same slot, try leaving one DIMM in that slot and putting the second in each of the other slots until you either get it to POST or exhaust the other slots. If it POSTs, you're all set. If it does not, that points to something being wrong with the board, since you know from the steps above that both DIMMs are good.
 

reub

Dabbler
Joined
Sep 22, 2018
Messages
14
An update, and a question. I appreciate the above suggestions on diagnosing the beeps. It turned out to be a damaged LGA socket, so I'm returning the board and have meanwhile swapped in the X11SSM-F I originally intended for the build. Silver lining, I suppose, but a bit of a delay.

So I've built out my system and it's in burn-in mode. A few passes of memtest ran without issue, and I plan to run more before I load up data. I've done the suite of smartctl tests noted in the Resources links, and I'm currently 32 hours into badblocks.
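For anyone following along, the burn-in commands are along these lines; purely illustrative, with /dev/ada0 standing in for each drive, and the flags may differ slightly depending on which burn-in guide you follow:

Code:
# SMART self-tests, run once per drive (ada0 through ada7 here)
smartctl -t short /dev/ada0   # quick sanity check, a couple of minutes
smartctl -t long /dev/ada0    # full surface scan, many hours on a 6TB disk
smartctl -a /dev/ada0         # review the results and SMART attributes afterwards

# Destructive badblocks run: writes and verifies four patterns (0xaa, 0x55, 0xff, 0x00)
# across the whole disk. Only safe before any pool or data exists on the drive.
badblocks -ws -b 4096 /dev/ada0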

One interesting observation: of my 8 (6TB WD Red) disks, one is taking measurably, but not overwhelmingly, longer to run than the others. The test is up to the second (0x55) pattern and ada2 is 62% complete, whereas all the others are between 80-83%. I'm simultaneously running mprime (for the last 24 hours at least). CPU temp is around 72C. Disk temps for ada0-ada4 are between 32-35C while ada5-7 are between 39-49C (airflow perhaps?). Given the Backblaze findings, I'm not all that concerned over the disk temps but note it here in case it's pertinent.
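(For reference, I'm reading the disk temps with smartctl, roughly like the loop below; the adaX names are just how my drives happen to enumerate.)

Code:
# Print the SMART temperature attribute for each of the eight drives
for i in 0 1 2 3 4 5 6 7; do
  echo -n "ada$i: "
  smartctl -A /dev/ada$i | grep -i temperature
done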

Does anyone know if this discrepancy in runtime is unusual or cause for concern?
 
PhiloEpisteme

Joined
Oct 18, 2018
Messages
969
Does anyone know if this discrepancy in runtime is unusual or cause for concern?
Some differences in completion time are expected, though a huge difference in speed could indicate a problem. You may want to run performance tests against the individual drives afterwards to try to eliminate that as a possibility. If the drives all perform about the same, I think you're probably good to go. Obviously, double-check the connections, etc., on everything.

Disk temps for ada0-ada4 are between 32-35C while ada5-7 are between 39-49C (airflow perhaps?). Given the Backblaze findings, I'm not all that concerned over the disk temps but note it here in case it's pertinent.
If you're talking about this Backblaze article, do keep in mind that they were looking at temps much lower than the 40s C; almost all of their drives are in the 20s. I'm not necessarily saying you're wrong about how much temps matter, just that that article is perhaps not a good indication of the failure rate to expect from drives running in the 40s C. Of course, I may be looking at the wrong article.
 

reub

Dabbler
Joined
Sep 22, 2018
Messages
14
Thanks @PhiloEpisteme. That was the article I was referencing. In addition to checking connections and such, I’ll reconsider the safety margin for temps. It looks like there are some good fan control scripts here.
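(Skimming those threads, the scripts appear to drive the fans through the BMC with ipmitool raw commands; the ones below are the commonly cited Supermicro X10/X11 incantations, shown here only to illustrate the mechanism, not something I've actually run yet.)

Code:
# Read fan RPMs and temperature sensors from the BMC
ipmitool sensor

# Set the fan mode to Full (0x01) so the BMC stops overriding manual duty cycles
ipmitool raw 0x30 0x45 0x01 0x01

# Set zone 0 (CPU/system fan headers) to ~50% duty cycle (0x32 = 50)
ipmitool raw 0x30 0x70 0x66 0x01 0x00 0x32

# Set zone 1 (peripheral fan headers) to ~50% duty cycle
ipmitool raw 0x30 0x70 0x66 0x01 0x01 0x32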

Could you elaborate on the recommended approach for performance testing drives? Badblocks again? dd tests?
 
PhiloEpisteme

Joined
Oct 18, 2018
Messages
969
Could you elaborate on the recommended approach for performance testing drives? Badblocks again? dd tests?
haha oh no! I was afraid you'd ask that. tbh I haven't done any performance testing beyond dd as suggested in some of the burn-in posts around here.
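The rough idea is just a sequential read from each raw device so you can compare the drives against each other; something like this, with the device name and size only as examples:

Code:
# Read ~10 GiB from the start of one drive and let dd report the transfer rate
# at the end. Repeat for each adaX and compare; one drive much slower than its
# siblings is suspect.
dd if=/dev/ada0 of=/dev/null bs=1m count=10240

# A write test (of=/dev/adaX) destroys data, so only do that before the pool is created.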
 
PhiloEpisteme

Joined
Oct 18, 2018
Messages
969
In addition to checking connections and such, I’ll reconsider the safety margin for temps. It looks like there are some good fan control scripts here.
Those are certainly helpful. The other thing to keep in mind about the case you're using is that it is rather large, and without enough fans to move the air around, the airflow may take a path that isn't over your components.
 

reub

Dabbler
Joined
Sep 22, 2018
Messages
14
haha oh no! I was afraid you'd ask that. tbh I haven't done any performance testing beyond dd as suggested in some of the burn-in posts around here.
Fair enough.

Those are certainly helpful. The other thing to keep in mind about the case you're using is that it is rather large, and without enough fans to move the air around, the airflow may take a path that isn't over your components.
I suspected that. When I have physical access again in a week or so, I'll check the correlation between temperature and position in the case and adjust the airflow accordingly.
 