Various newbie questions - HBA recommendations, PCIe bifurcation, networking, pool setup ++

Valantar

Dabbler
Joined
Apr 11, 2021
Messages
26
Hi everyone! I've had a Windows-based DIY NAS for around seven years now, but it's been giving me trouble lately and I've been weighing moving to some more NAS-oriented OS for a couple of years, so I guess the stars kind of aligned. I've had TrueNAS recommended to me the most among the various NAS OSes, so that's what I've landed on using. So far I've set up a test install on the core hardware it'll be running on, though I'll be re-using a lot of hardware, so the current iteration is necessarily rather preliminary. I've got a bunch of questions, and I've been searching, reading and looking around, but can't say I've found much in terms of answers. No doubt my searching and skimming skills are the source of that issue (as well as a limited understanding of relevant terminology), but most of my questions being kind of vague and specific at the same time is probably also a reason. I'm pretty experienced as a PC user, but I've never managed to penetrate Linux, BSD and other non-Windows OSes beyond the odd experiments and copy+pasting terminal commands from the internet. That might change now that I have an actual legitimate use for such an OS, though I also hope I can make this a mostly appliance-like setup with minimal maintenance required.

First, the hardware I'll be running:
CPU: Ryzen 5 1600X
Motherboard: Biostar X370GTN ITX
GPU: n/a (have a spare one in case of emergency)
RAM: Currently 2x8GB DDR4, planning to upgrade to 2x16GB ECC UDIMM
PSU: Silverstone SX500LG SFX-L, keeping this from the current build
Case: Fractal Design Node 304, also keeping this
Boot drive: currently a 32GB flash drive, long term I'm planning to re-use the 128GB SATA SSD from the current Windows setup
Storage: 2x4TB WD Red drives, 1x 6TB Seagate, all three from the current NAS. 4TB drives mirrored, used for backups, 6TB as standalone media storage. Also have a spare 512GB SSD I'm thinking I'll use for a read cache.

The motherboard and CPU are inherited from my main PC. PSU is from the current NAS, as is the storage. I don't have the budget to meaningfully upgrade this at the moment, but I'm planning to replace the 2x4TB drives with 10TB drives this year. The 6TB drive is for now treated as expendable, though it would be nice to get some parity set up for that too. But again, that's in the future.

Usage: this is a home backup/media storage build. There will be two users across five PCs, three wired and two wireless. Overall system load will be very low, with no performance intensive local tasks. Photo editing (Lightroom/Photoshop) off the NAS will probably be the most intensive workload in terms of actually noticeable performance differences depending on the configuration.

So, to the questions:
- The motherboard has four SATA ports, which is barely enough for the drives I currently have. So I need an HBA. LSI hardware seems widely recommended, but they have a million models (especially when counting OEM variants), making it pretty much impossible to figure out which to get, or if there are meaningful differences at all. I've seen recommendations to buy from specific Ebay sellers to ensure you get the correct firmware (though I don't need configuration options or anything beyond the drives showing up). Are there any widely accepted best practices/best HBAs/recommended sources of hardware?

- I'm also planning on wiring up the apartment for 2.5GbE (any faster isn't worth it for our use, but the speed increase over 1GbE will be nice for photo editing off the NAS and similar work), which means adding a NIC. I'm thinking one of the relatively ubiquitous Realtek-based ones, which do seem to have BSD driver support (at least judging by the relative lack of recent posts asking about it compared to a while back). Are these reliable, and does TrueNAS have drivers for them included? I haven't got the foggiest clue how to install drivers for anything in TrueNAS (I assume this requires the shell?), so it would definitely be nice to choose OOTB supported hardware. I know there is plenty of affordable used 10GbE hardware out there, but sadly most of that doesn't support 2.5/5GbE speeds, and 10GbE switches are either too loud for home use or disgustingly expensive, so that's a no-go (and SFP+ is out of the question). Any advice here would be much appreciated.

- Related to the above point, seeing how I want to add two pieces of PCIe hardware on an ITX board: does anyone have any experience with PCIe bifurcation in TrueNAS? Is there any reason to avoid it? My motherboard has the worst BIOS I've ever come across (that includes motherboards in the early 2000s - that's what I get for absolutely needing the first available AM4 ITX board I guess), so I'm not 100% sure bifurcation will actually work on it, but worst case scenario I get a split riser I can re-use for something else later, and I'll run the NIC off the m.2 slot instead. Still, I would love to hear if anyone has any (good/bad) experience to share.

- Storage setup: The current W10 setup runs two separate storage pools, with the 4TB drives in a parity Windows Storage Space and the 6TB drive by itself. I'd like to keep these separate still, so I'm thinking I'll set up two pools, one for media and one for backups etc. The backup pool will also be hosting photos for editing etc, so it's not quite cold storage, but it's not accessed frequently. Backups from PCs are periodic, though I'm thinking I might configure separate shares for Windows File History for the two main desktops, otherwise everything will be accessible to both of us. I don't think a write cache is at all necessary for this setup, but a read cache would be a huge boon for photo editing - browsing through a library of 50MB RAW files in Lightroom gets laggy quickly off a HDD. As mentioned I have a spare 500GB SATA SSD that could do this job nicely. Are there any glaring oversights in this setup? I know reusing old HDDs is a bit iffy, but they'll be upgraded in due time, and with the HBA I'll even have the SATA ports for a parity drive for media at some point.

I know this is quite the wall of text, and that most of these questions are no doubt answerable with sufficient time spent searching and reading. I just really hope someone can show some mercy to a TrueNAS beginner and help me parse some of the tons of (often conflicting) information out there. Thanks in advance.
 

Valantar

Dabbler
Joined
Apr 11, 2021
Messages
26
Well, there's clearly not that much help to be had here for a newbie with a ton of questions, but at least I did figure out the HBA part. I'll stick some links in here in case anyone comes along with similar questions later. Thanks to the proprietor of the excellent Ebay store The art of server (and their extremely handy HBA comparison video) I've landed on an IBM M1115 LSI 9210-8i HBA. As I'm only using HDDs for the main array (and have enough SATA ports on the motherboard for my boot + write cache SSDs) I didn't see a need for anything newer than that. Looking forward to getting the HBA in hand so that I can get the system set up properly :)
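For anyone else who ends up here with the same question: once the card arrives, my plan is to confirm from the TrueNAS shell that it's actually running IT-mode firmware and that the drives behind it show up. As far as I can piece together, something along these lines should do it (a sketch based on my reading, not tested yet):

# List LSI controllers and their firmware versions; the IT/IR variant should
# be visible in the output. sas2flash appears to ship with TrueNAS CORE.
sas2flash -listall

# The kernel probe messages show whether the mps(4) driver claimed the card:
dmesg | grep -i mps

# Disks hanging off the HBA should then be listed alongside the onboard SATA ones:
camcontrol devlist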
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Well, there's clearly not that much help to be had here for a newbie with a ton of questions

Well, you seem to have come in with a bunch of preconceived notions which could potentially be a lot of work to rectify. Please try to remember that the forum is made of community participants, not paid support staff. Trying to provide feedback on such posts can be time consuming, and since you failed to avail yourself of guidance in the form of existing resources and posts that address literally everything in your message, that may be why your post didn't attract any interest.

I'll address a few points, but I don't have the time to compose a comprehensive response today.

write cache SSDs

There is no such thing as a "write cache SSD". You might have misunderstood SLOG to be some sort of write cache; it isn't. Read this article.

I've never managed to penetrate Linux, BSD and other non-Windows OSes beyond the odd experiments and copy+pasting terminal commands from the internet.

That's not a crime. Part of the intent behind FreeNAS is to act as an appliance where you don't need to be doing "terminal commands."

RAM: Currently 2x8GB DDR4, planning to upgrade to 2x16GB ECC UDIMM
Also have a spare 512GB SSD I'm thinking I'll use for a read cache.

32GB of RAM is relatively tight for L2ARC (what you're calling a "read cache"). ZFS needs lots of RAM to do a good job of determining what is active enough to evict out to L2ARC.
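If you want to see how well your ARC is actually doing before you bolt an L2ARC onto it, the counters are right there in the shell. Roughly like this (a quick sketch; sysctl names from memory, so verify on your own box):

# Current ARC size and its ceiling, in bytes:
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max

# Hits vs. misses after the box has been doing real work for a while;
# if the hit rate is already high, an L2ARC has very little left to offer:
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses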

So I need an HBA. LSI hardware seems widely recommended, but they have a million models (especially when counting OEM variants), making it pretty much impossible to figure out which to get

https://www.truenas.com/community/threads/confused-about-that-lsi-card-join-the-crowd.11901/

https://www.truenas.com/community/t...s-and-why-cant-i-use-a-raid-controller.81931/

https://www.truenas.com/community/t...-sas-sy-a-primer-on-basic-sas-and-sata.26145/

I haven't got the foggiest clue how to install drivers for anything in TrueNAS (I assume this requires the shell?), so it would definitely be nice to choose OOTB supported hardware.

FreeNAS is an appliance, and doesn't really support "installing drivers". You can hammer things in, but it has the potential to cause problems and issues, so it is discouraged, especially for beginners.

I know there is plenty of affordable used 10GbE hardware out there, but sadly most of that doesn't support 2.5/5GbE speeds, and 10GbE switches are either too loud for home use or disgustingly expensive, so that's a no-go

So what's wrong with something like the Mikrotik switches? Inexpensive, dead silent ...

4 ports of SFP+ for $140.

(and SFP+ is out of the question). Any advice here would be much appreciated.

Oh dear lord, why is "SFP+ is out of the question"? Because 10G copper sucks in so many ways.

https://www.truenas.com/community/resources/10-gig-networking-primer.42/

does anyone have any experience with PCIe bifurcation in TrueNAS? Is there any reason to avoid it? My motherboard has the worst BIOS I've ever come across (that includes motherboards in the early 2000s - that's what I get for absolutely needing the first available AM4 ITX board I guess), so I'm not 100% sure bifurcation will actually work on it, but worst case scenario I get a split riser I can re-use for something else later, and I'll run the NIC off the m.2 slot instead. Still, I would love to hear if anyone has any (good/bad) experience to share.

There isn't really such a thing as PCIe bifurcation "in TrueNAS" (or "in Windows" or "in AnyOS"). PCI bifurcation is a platform configuration issue, and is supposed to be handled by the host BIOS. The OS really has next to nothing to do with it. If you have a CPU with 40 PCIe lanes, and four are routed to the first PCIe slot and eight are routed to the second PCIe slot, the BIOS describes this to the OS. If bifurcation results in the second slot being split up into x4+x4, the BIOS basically presents that to the OS in a manner that looks like that slot is actually two slots.

If your BIOS doesn't support it, then the OS isn't likely to do anything magic to support it either.
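If you want to sanity-check what the BIOS actually handed over, the OS can at least show you what enumerated. A rough sketch on FreeBSD/TrueNAS CORE (output details vary by version):

# List every PCI device the firmware presented, with names and capabilities;
# each card behind a properly bifurcated slot shows up as its own device,
# or not at all if the split didn't take:
pciconf -lvc

# In the PCI-Express capability line of that output, "link xN(xM)" is the
# negotiated lane width versus the maximum the device supports.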
 

GBillR

Contributor
Joined
Jun 12, 2016
Messages
189
I'll add a few other thoughts:

You probably turned off most people when you mentioned a Realtek NIC in your second question. Search the forum.

Also, take a look here:

https://www.truenas.com/community/t...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

https://www.truenas.com/community/resources/freenas-101.96/

https://www.truenas.com/community/resources/introduction-to-zfs.111/

I've found this forum to be filled with people who truly want to help... if you are willing to help yourself too.

These forums are filled with probably thousands of posts. It is unlikely you will have a question that has not been asked and answered already. That said, even if you do ask a question that has been answered before, you'll find people willing to take the time and point you in the right direction. I am a reader of many forums and I will tell you that more often than not, you will not find that degree of helpfulness.

Good luck with FreeNAS, and welcome to the forum.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
That said, even if you do ask a question that has been answered before, you'll find people willing to take the time and point you in the right direction.

And just to emphasize THAT, some of us have taken time to try to write deep dives in the form of resources and/or stickies posted in the forums. I do storage and IT professionally, but I like to try to educate through accessible (hopefully!) documentation, written such that someone who understands the difference between a hard drive and a desktop computer might have some chance of getting into ZFS, which is both very simple and extremely complicated.

Many of my answers now come in the form of links to previous resources. This is because those resources tend to be much more thorough answers, and also include their own followup Q&A and comment threads. We all understand that the volume of content is overwhelming, so I'm happy to give people a shove in the direction of the correct resources.
 

Valantar

Dabbler
Joined
Apr 11, 2021
Messages
26
Well, you seem to have come in with a bunch of preconceived notions which could potentially be a lot of work to rectify. Please try to remember that the forum is made of community participants, not paid support staff. Trying to provide feedback on such posts can be time consuming, and since you failed to avail yourself of guidance in the form of existing resources and posts that address literally everything in your message, that may be why your post didn't attract any interest.

I'll address a few points, but I don't have the time to compose a comprehensive response today.



There is no such thing as a "write cache SSD". You might have misunderstood SLOG to be some sort of write cache; it isn't. Read this article.



That's not a crime. Part of the intent behind FreeNAS is to act as an appliance where you don't need to be doing "terminal commands."



32GB of RAM is relatively tight for L2ARC (what you're calling a "read cache"). ZFS needs lots of RAM to do a good job of determining what is active enough to evict out to L2ARC.



https://www.truenas.com/community/threads/confused-about-that-lsi-card-join-the-crowd.11901/

https://www.truenas.com/community/t...s-and-why-cant-i-use-a-raid-controller.81931/

https://www.truenas.com/community/t...-sas-sy-a-primer-on-basic-sas-and-sata.26145/



FreeNAS is an appliance, and doesn't really support "installing drivers". You can hammer things in, but it has the potential to cause problems and issues, so it is discouraged, especially for beginners.



So what's wrong with something like the Mikrotik switches? Inexpensive, dead silent ...

4 ports of SFP+ for $140.



Oh dear lord, why is "SFP+ is out of the question"? Because 10G copper sucks in so many ways.

https://www.truenas.com/community/resources/10-gig-networking-primer.42/



There isn't really such a thing as PCIe bifurcation "in TrueNAS" (or "in Windows" or "in AnyOS"). PCI bifurcation is a platform configuration issue, and is supposed to be handled by the host BIOS. The OS really has next to nothing to do with it. If you have a CPU with 40 PCIe lanes, and four are routed to the first PCIe slot and eight are routed to the second PCIe slot, the BIOS describes this to the OS. If bifurcation results in the second slot being split up into x4+x4, the BIOS basically presents that to the OS in a manner that looks like that slot is actually two slots.

If your BIOS doesn't support it, then the OS isn't likely to do anything magic to support it either.
First off, thanks for a lengthy reply! Sorry if my initial post came off as full of preconceived notions - I did do a fair bit of research beforehand, so what I wrote here was rather what I had arrived at after that, including the points at which I was stumped. I realize I could probably have been clearer about that (as well as included some links to what I had already read).

I'm also well aware of just how much time can be needed to answer a lengthy post with a bunch of questions like this - I've written enough lengthy support essays over on TPU, smallformfactor.net and AnandTech to know the time expenditure required. Still, I thought it was better for me to collect all my questions into one post rather than posting several specific threads. At least that's the typically preferred approach on the forums I frequent. I was frankly just hoping someone would drop in with some suggestions for one or two points, rather than addressing everything at once, though of course I see how this is a bit much to ask.

I'm also starting to realize that a major fault in my searching here was in not focusing on the 'FreeNAS (Legacy Software Releases)' section. Given the consistent advice against upgrading (unless necessary) I focused on reading things centered on v12/TrueNAS to avoid any compatibility pitfalls or other changes, which clearly excluded a lot more than what I expected it to. For example, there are zero pinned threads in the various subforums under General Help (which tbh really surprised me). I obviously understand that transitioning a forum like that is a huge undertaking, but that seems like a rather major oversight to me (or at least not adding a pinned thread to each subforum reminding new users that the explicitly labeled legacy forums are still highly relevant and likely to contain continuously updated resources). It just seems odd to me as a new user to keep highly active subforums under headings that seem explicitly designed to warn away users by stating them as legacy. I should obviously have looked more, but this is also IMO an area where the forums could improve in terms of accessibility to new users.

Thanks for the links to the HBA posts - no doubt those drowned in the search results due to the combination of age and a billion other semi-relevant posts touching on the issue, though I do think I've seen some of them. The main issue is that I can't see that any of them actually draw up what the differences between the various models are, which was a necessity for me to even start to consider what I might want or need. Given that that HBA confusion thread was from 2013 I don't think I've even seen it listed. I might also have written off 8-year-old results as unlikely to be relevant - I'm really not used to hardware of that age still being so. I've got some adjusting to do, clearly. Luckily I found other resources that helped me figure out the differences and which ones matter to me, and I think I'll be happy with the M1115 I've ordered.

I also think there's a rather major difference in terms of wants, needs and expectations in play here. Of course I completely understand that the average enthusiast user here is likely to have far above average storage and home server needs, while I'd describe myself as barely having outgrown GbE and pure HDD storage. Thus I think I'm struggling to interpret most of what's written here in light of my own needs, while it also seems like the general recommendations here are way excessive for my needs. I live in a medium sized apartment with one other person, and this NAS is for a combination of backups and mass storage/media storage for SSD-only PCs. It won't see particularly heavy use - a worst case scenario would likely be simultaneous scheduled backups from three computers (which is unlikely as I doubt I'd configure them to run at the same time) while serving media to one and a RAW photo library to another. A more realistic scenario would be the aforementioned photo editing + some downloads and media serving at the same time. Of course I'm also well aware that my budget and the desire to re-use existing hardware place some pretty specific limitations on my setup that are entirely on me to deal with.

To answer a few of the specific points:

  • SFP+ is out of the question due to the inability to terminate my own wires or otherwise route them cleanly. My apartment has no way of running hidden cabling (only concrete walls), and I don't have the luxury of a server cabinet/room where I can store coils of overflow cabling. I also need to pass wiring through several concrete walls, which would be needlessly invasive (I don't much feel like impact drilling 1" holes through concrete - getting Cat6 through there will be enough of a challenge, though I might be able to sneak that through some door frame gaps etc.). Nor is it compatible with the existing 2.5G NICs in motherboards in the house, nor would it be a feasible add-on to two of the three ITX PCs here (my main PC and HTPC). So while I'm entirely aware that SFP+ is technically superior, it's very, very poorly suited for my use case. I understand that my dismissal can read as flippant, but it's well reasoned. Besides, as I said I'm not aiming for 10GbE (let alone anything higher), as it's complete overkill for my use case. 2.5GbE should do fine for many years to come, and Cat6 handles that (and even 10GbE should that become a desire) more than good enough for my use case.

  • I know PCIe bifurcation isn't OS dependent, but in my experience OSes can always throw wrenches in the gears of the operation of stuff like this. I've seen bifurcated setups work perfectly in Windows but one or both AICs fail to show up at all in Linux and vice versa, hence my question. I don't generally trust an OS to not mess things up once I start veering off the beaten path, particularly not one where I'm not experienced. I've since decided to take the easier path of running the second AIC from an m.2 riser rather than a bifurcation riser though, so this point is kind of moot now.

  • Sadly 32GB of RAM is the maximum supported by my motherboard. I'm aware that most ITX DDR4 boards will gladly accept 2x32GB DIMMs no matter what their spec says, but I'd also need to significantly stretch my budget to get there. Even finding 32GB ECC UDIMMs is somewhat of a challenge, and I frankly can't afford the risk of a $400+ eBay order from China or the US not working, as a return would likely be out of the question (or at least exorbitantly expensive). From reading the cyberjock slideshow @GBillR linked I now understand that L2ARC isn't likely to do what I want it to (and would likely underperform without at least 64GB of RAM), so I'll need to figure out some other setup there.

  • I'm sorry for not using the correct terms for various stuff - that is one of the parts I find particularly impenetrable in terms of understanding ZFS and TrueNAS (especially with so many abbreviations being confusingly similar).

  • The driver installation thing was pretty much what I'd seen already (i.e. it's not an intended operation, and is likely to break stuff), but it's good to get confirmation - which is why I asked. I've been reading various NIC driver lists, but they're pretty impenetrable overall, and it's difficult to tell how up to date they are.
  • The main reason for asking about drivers is support for the new-ish 2.5GbE Realtek NICs. I've seen several threads from a few years ago discussing driver support for those, but discussion seems to have dried up (unless my searching skills are just that severely lacking). Which to me might mean that a) it's a commonly accepted fact that they don't work, or b) that support has been introduced and it's no longer worth discussing. It's also essentially impossible for a new user to tell whether complaints about poor Realtek GbE NIC support in 2012 or 2016 are relevant to Realtek 2.5GbE NIC support in 2021, even if I understand that Realtek doesn't have a history of providing good BSD drivers. Still, if support isn't there (which, judging by the responses here, it isn't) the fall-back option is to splurge on an nGbE-compatible 10GbE NIC, which again sadly excludes the used server parts market (AFAIK none of the Intel NICs support nGbE in any form). (If there were reasonably affordable (around $200 or less) 2.5/5/10GbE switches with 5+ ports I could go the used NIC route, but nothing like that exists - the only options then are 1-2 10GbE ports + 5-8 GbE, which provides me with near zero benefit.) Anything Aquantia should do the job though, and from what I can tell it seems decently supported. It would be nice if AICs existed with Intel's new 2.5GbE NICs, but they don't. My main PC has one of those, but of course they also have that hardware bug in early revisions that can utterly kill performance with some switches, so it's not quite ideal.
Still, I'm a good deal closer to knowing what I want and need now, so thank you for taking the time to respond! I completely understand the frustration of being asked what looks like the same questions you've seen variations of hundreds of times - I'm plenty used to being on that side of relations like this. I think I've got somewhat of a new understanding of how hard it can be to tell which questions are the same as my own though, as differences in terminology, ways of asking, and specific hardware/software/etc. made a lot of what I read before posting this seem either not relevant or at least different enough to warrant asking myself. Still, thank you for pointing me in the right direction(s) and highlighting stuff that I've clearly misunderstood!


@GBillR Again, thanks for some really useful links! I definitely understand now how the Realtek thing might have turned people off, though coming in as a newbie to this it strikes me as one of those commonly accepted wisdoms that trace back so far it's difficult for new people to understand whether it's still relevant and true. It's good to get confirmation on that though, and as noted above, I'll rather be going with an Aquantia NIC. I'm definitely willing to put some effort into figuring this out for myself - I wouldn't be coming to a forum otherwise :) - but as I said above, in some cases it simply isn't possible with my lack of knowledge to tell whether the specifics that differentiate my questions from those of others are relevant or not, and the only way out is to ask.

So thank you both for being patient with me and pointing me in the right direction :)


I've re-thought my storage setup a bit from the resources provided here, so now I'm thinking something like this:
zpool1: backups, 1 VDEV of 2x4TB HDDs mirrored*
zpool2: media storage, 1 VDEV of 1x6TB HDDs, no redundancy needed
zpool3: SSD storage, 1 VDEV of 1x500GB SSD, used for semi-temporary storage of performance-sensitive data, likely with some sort of one-way sync to the backups zpool.
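Spelled out in command-line form purely for my own understanding (I'll be doing all of this through the GUI, and the device names below are made up), that's roughly:

# Very loosely what the GUI does under the hood for the layout above;
# TrueNAS actually partitions the disks and references them by gptid,
# so this is illustration only, not something I'd run by hand:
zpool create backups mirror /dev/ada1 /dev/ada2   # zpool1: 2x4TB mirror vdev
zpool create media /dev/ada3                      # zpool2: single 6TB disk, no redundancy
zpool create fast /dev/ada4                       # zpool3: single 500GB SSD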

This doesn't exactly strike me as an elegant setup, but it should work for my needs. The * in zpool1 is me being unsure of the best approach here. With RAIDZ1 being recommended against, and RAIDZ2 requiring 4 drives minimum (and from what I understand, 5 drives to provide any benefit over mirroring), it doesn't seem like an option in my case - buying three additional drives just to get the NAS up and running isn't feasible. It then seems like the best approach would be running a single mirrored VDEV here, replacing the drives with higher-capacity ones as the first step towards more capacity (planning on 10TB drives to begin with, as those should last for a few years and are well priced). Does this sound sensible?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
First off, thanks for a lengthy reply! Sorry if my initial post came off as full of preconceived notions - I did do a fair bit of research beforehand, so what I wrote here was rather what I had arrived at after that, including the points at which I was stumped. I realize I could probably have been clearer about that (as well as included some links to what I had already read).

I'm also well aware of just how much time can be needed to answer a lengthy post with a bunch of questions like this - I've written enough lengthy support essays over on TPU, smallformfactor.net and AnandTech to know the time expenditure required. Still, I thought it was better for me to collect all my questions into one post rather than posting several specific threads. At least that's the typically preferred approach on the forums I frequent. I was frankly just hoping someone would drop in with some suggestions for one or two points, rather than addressing everything at once, though of course I see how this is a bit much to ask.

Formats and preferences vary; around here, it seems to work better to have a focused question.

I'm also starting to realize that a major fault in my searching here was in not focusing on the 'FreeNAS (Legacy Software Releases)' section. Given the consistent advice against upgrading (unless necessary) I focused on reading things centered on v12/TrueNAS to avoid any compatibility pitfalls or other changes, which clearly excluded a lot more than what I expected it to. For example, there are zero pinned threads in the various subforums under General Help (which tbh really surprised me). I obviously understand that transitioning a forum like that is a huge undertaking, but that seems like a rather major oversight to me (or at least not adding a pinned thread to each subforum reminding new users that the explicitly labeled legacy forums are still highly relevant and likely to contain continuously updated resources). It just seems odd to me as a new user to keep highly active subforums under headings that seem explicitly designed to warn away users by stating them as legacy. I should obviously have looked more, but this is also IMO an area where the forums could improve in terms of accessibility to new users.

We have little control over what iXsystems does. They've also introduced additional confusion into the issue by rebranding FreeNAS as TrueNAS CORE, while also having a TrueNAS SCALE product which is Linux based. Both of them are "newish" things with various pain points compared to the "legacy" software. This makes sense from their marketing perspective, but is a nightmare transitionally.

Thanks for the links to the HBA posts - no doubt those drowned in the search results due to the combination of age and a billion other semi-relevant posts touching on the issue, though I do think I've seen some of them. The main issue is that I can't see that any of them actually draw up what the differences between the various models are, which was a necessity for me to even start to consider what I might want or need. Given that that HBA confusion thread was from 2013 I don't think I've even seen it listed. I might also have written off 8-year-old results as unlikely to be relevant - I'm really not used to hardware of that age still being so. I've got some adjusting to do, clearly. Luckily I found other resources that helped me figure out the differences and which ones matter to me, and I think I'll be happy with the M1115 I've ordered.

HBAs are mainly not supposed to have features, other than maybe PCI interface stuff, connector type issues, and SAS 6Gbps vs 12Gbps.

I also think there's a rater major difference in terms of wants, needs and expectations in play here. Of course I completely understand that the average enthusiast user here is likely to have far above average storage and home server needs, while I'd describe myself as barely having outgrown GbE and pure HDD storage. Thus I think I'm both struggling to interpret most of what's written here in light of my own needs, while it also seems like the general recommendations here are way excessive for my needs. I live in a medium sized apartment with one other person, and this NAS is for a combination of backups and mass storage/media storage for SSD-only PCs. It won't see particularly heavy use - a worst case scenario would likely be simultaneous scheduled backups from three computers (which is unlikely as I doubt I'd configure them to run at the same time) while serving media to one and a RAW photo library to another. A more realistic scenario would be the aforementioned photo editing + some downloads and media serving at the same time. Of course I'm also well aware that my budget and the desire to re-use existing hardware places some pretty specific limitations on my setup that are entirely on me to deal with.

To answer a few of the specific points:

SFP+ is out of the question due to the inability to terminate my own wires or otherwise route them cleanly.

Fine. We field terminate Cat6 here too. It's how we get pretty looking racks.

But. I think you might be wrong. In general, fiber can be easier to work with than Cat6.

https://extranet.www.sol.net/files/misc/biffiber.jpg



You can order fiber to exact lengths, relatively inexpensively. That OM4 BIF stuff, that's a cable that can be made to custom lengths; a 5M cable is $21.00, so it isn't horribly pricey. I order them down to the centimeter for in-rack work, and they are much more pleasant to work with than category cable. But, yes, no field terminations. So if you are actually going to drill a hole just big enough for a Cat6 cable, feed it thru, and then terminate it on each end, yes, fine, that's not easily done with fiber without some significant tooling. However, the ends on these things are all very similarly sized, so for any other use case, fiber is pretty attractive.

There's only one catch here, which is that Mikrotik also sells SFP+ modules that do copper, including 2.5G, 5G, and 10G.

https://mikrotik.com/product/s_rj10

And while I haven't tried one of these, it seems like this combined with some of their small 10G SFP+ switches would give you the flexibility to do a lot of different and interesting things.

My apartment has no way of running hidden cabling (only concrete walls), and I don't have the luxury of a server cabinet/room where I can store coils of overflow cabling.

The fiber is actually less obtrusive than category cable, and you can order it premade to exact lengths.

I also need to pass wiring through several concrete walls, which would be needlessly invasive (I don't much feel like impact drilling 1" holes through concrete - getting Cat6 through there will be enough of a challenge, though I might be able to sneak that through some door frame gaps etc.). Nor is it compatible with the existing 2.5G NICs in motherboards in the house, nor would it be a feasible add-on to two of the three ITX PCs here (my main PC and HTPC). So while I'm entirely aware that SFP+ is technically superior, it's very, very poorly suited for my use case. I understand that my dismissal can read as flippant, but it's well reasoned. Besides, as I said I'm not aiming for 10GbE (let alone anything higher), as it's complete overkill for my use case. 2.5GbE should do fine for many years to come, and Cat6 handles that (and even 10GbE should that become a desire) more than good enough for my use case.

The problem you're going to run into is that 2.5GbE is poorly supported. Not just in FreeNAS, but pretty much everything. The industry used to upgrade by an order of magnitude in speed every three years, so there was a lot of innovation and competition as we went from 10Mbps-100Mbps-1Gbps-10Gbps, but copper topped out at 1Gbps for a really long time. Faster wifi has been used as a justification for 2.5G and 5G ethernet, but is sort-of bullchips. There's a lot of interest in trying to generate a new low end ethernet market, but it is very rough to justify on the basis of technical merits, and is basically a PT Barnum class exercise.

I know PCIe bifurcation isn't OS dependent, but in my experience OSes can always throw wrenches in the gears of the operation of stuff like this. I've seen bifurcated setups work perfectly in Windows but one or both AICs fail to show up at all in Linux and vice versa, hence my question. I don't generally trust an OS to not mess things up once I start veering off the beaten path, particularly not one where I'm not experienced. I've since decided to take the easier path of running the second AIC from an m.2 riser rather than a bifurcation riser though, so this point is kind of moot now.

Sadly 32GB of RAM is the maximum supported by my motherboard. I'm aware that most ITX DDR4 boards will gladly accept 2x32GB DIMMs no matter what their spec says, but I'd also need to significantly stretch my budget to get there. Even finding 32GB ECC UDIMMs is somewhat of a challenge, and I frankly can't afford the risk of a $400+ ebay order from China or the US not working, as a return would likely be out of the question (or at least exorbitantly expensive). From reading the cyberjock slideshow @GBillR linked I now understand that L2ARC isn't likely to do what I want it to (and would liely underperform without at least 64GB of RAM), so I'll need to figure out some other setup there.

It is situation dependent, of course, but L2ARC can hurt as much as it can help on smaller memory systems.

  • I'm sorry for not using the correct terms for various stuff - that is one of the parts I find particularly impenetrable in terms of understanding ZFS and TrueNAS (especially with so many abbreviations being confusingly similar).

  • The driver installation thing was pretty much what I'd seen already (i.e. it's not an intended operation, and is likely to break stuff), but it's good to get confirmation - which is why I asked. I've been reading various NIC driver lists, but they're pretty impenetrable overall, and it's difficult to tell how up to date they are.
  • The main reason for asking about drivers is support for the new-ish 2.5GbE RTE NICs. I've seen several threads from a few years ago discussing driver support for those, but discussion seems to have dried up (unless my searching skills are just that severely lacking). Which to me might mean that a) it's a commonly accepted fact that they don't work, or b) that support has been introduced and it's no longer worth discussing. It's also essentially impossible for a new user to tell whether people complaining of poor Realtek GbE NIC support in 2012 or 2016 is relevant to Realtek 2.5GbE NIC support in 2021, even if I understand that RTE doesn't have a history of providing good BSD drivers.


That is drastically overestimating things; to my knowledge, Realtek has never provided ANY BSD drivers, the drivers in FreeBSD having been written by Bill Paul over at Wind River.

    Still, if suport isn't there (which, judging by the responses here, it isn't) the fall-back option is to splurge on a nGbE-compatible 10GbE NIC, which again sadly excludes the used server parts market (AFAIK none of the Intel NICs support nGbE in any form). (If there were reasonably affordable (>$200) 2.5/5/10GbE switches with 5+ ports I could go the used NIC route, but nothing like that exists - the only options then are 1-2 10GbE ports + 5-8 GbE, which provides me with near zero benefit.) Anything Aquantia should do the job though, and from what I can tell it seems decently supported. It would be nice if AICs existed with Intel's new 2.5GbE NICs, but they don't. My main PC has one of those, but of course they also have that hardware bug in early revisions that can utterly kill performance with some switches, so it's not quite ideal.
Still, I'm a good deal closer to knowing what I want and need now, so thank you for taking the time to respond! I completely understand the frustration of being asked what looks like the same questions you've seen variations of hundreds of times - I'm plenty used to being on that side of relations like this. I think I've got somewhat of a new understanding of how hard it can be to tell which questions are the same as my own though, as differences in terminology, ways of asking, and specific hardware/software/etc. made a lot of what I read before posting this seem either not relevant or at least different enough to warrant asking myself. Still, thank you for pointing me in the right direction(s) and highlighting stuff that I've clearly misunderstood!


@GBillR Again, thanks for some really useful links! I definitely understand now how the Realtek thing might have turned people off, though coming in as a newbie to this it strikes me as one of those commonly accepted wisdoms that trace back so far it's difficult for new people to understand whether it's still relevant and true. It's good to get confirmation on that though, and as noted above, I'll rather be going with an Aquantia NIC. I'm definitely willing to put some effort into figuring this out for myself - I wouldn't be coming to a forum otherwise :) - but as I said above, in some cases it simply isn't possible with my lack of knowledge to tell whether the specifics that differentiate my questions from those of others are relevant or not, and the only way out is to ask.

So thank you both for being patient with me and pointing me in the right direction :)


I've re-thought my storage setup a bit from the resources provided here, so now I'm thinking something like this:
zpool1: backups, 1 VDEV of 2x4TB HDDs mirrored*
zpool2: media storage, 1 VDEV of 1x6TB HDDs, no redundancy needed
zpool3: SSD storage, 1 VDEV of 1x500GB SSD, used for semi-temporary storage of performance-sensitive data, likely with some sort of one-way sync to the backups zpool.

This doesn't exactly strike me as an elegant setup, but it should work for my needs. The * in zpool1 is me being unsure of the best approach here. With RaidZ1 being recommended against, and RaidZ2 requiring 4 drives minimum (and from what I understand, 5 drives to provide any benefit over mirroring), it doesn't seem like an option in my case - buying 3 additional drives to get the NAS up and running isn't an option. It then seems like the best approach would be running a single mirrored VDEV here, replacing the drives with higher capacity drives as the first road towards higher capacity (planning on 10TB drives to begin with, as those should last for a few years and are well priced). Does this sound sensible?
 

Valantar

Dabbler
Joined
Apr 11, 2021
Messages
26
Formats and preferences vary; around here, it seems to work better to have a focused question.
Thanks, I'll keep that in mind in the future :)
We have little control over what iXsystems does. They've also introduced additional confusion into the issue by rebranding FreeNAS as TrueNAS CORE, while also having a TrueNAS SCALE product which is Linux based. Both of them are "newish" things with various pain points compared to the "legacy" software. This makes sense from their marketing perspective, but is a nightmare transitionally.
Yeah, it seems a bit of a mess. I guess I understand the desire for a single brand, but given that they're neither compatible nor aimed at the same markets, it seems a bit weird. Guess you mods here just have to keep up as best you can!
HBA's are mainly not supposed to have features, other than maybe PCI interface stuff, connector type issues, and SAS 6Gbps vs 12Gbps.
That's part of the source of my confusion - if they're all so similar, why the dozens of models? The video I linked above sorted that out pretty smoothly though. From what I gathered, it's 2000-series = HDD only, 3000-series = SSD compatible (TRIM etc.), and then there's various layouts, connectors, port counts, whether or not they have a cache chip for RAID controller use, etc. But it takes a while to figure out which differences to ignore!
Fine. We field terminate Cat6 here too. It's how we get pretty looking racks.

But. I think you might be wrong. In general, fiber can be easier to work with than Cat6.

https://extranet.www.sol.net/files/misc/biffiber.jpg



You can order fiber to exact lengths, relatively inexpensively. That OM4 BIF stuff, that's a cable that can be made to custom lengths; a 5M cable is $21.00, so it isn't horribly pricey. I order them down to the centimeter for in-rack work, and they are much more pleasant to work with than category cable. But, yes, no field terminations. So if you are actually going to drill a hole just big enough for a Cat6 cable, feed it thru, and then terminate it on each end, yes, fine, that's not easily done with fiber without some significant tooling. However, the ends on these things are all very similarly sized, so for any other use case, fiber is pretty attractive.
Fiber absolutely has its upsides, and the thinness is really attractive, yes. Those prices are also okay - not Cat6 level, but acceptable. I kind of doubt I'd be able to find anything at those prices here in Sweden though - shipping, import fees, VAT and so on tend to drive up prices, and we're a much smaller market than the US. But even then, I'd be worried about damaging the cabling, both in installation (sharp 90° turns when coming through walls) and from being moved/jostled. I'm planning to run solid-core Cat6 to the relevant rooms, terminate it with wall sockets, and then run short, custom-length cables between the sockets and devices, as this makes for the least risk of damage (two of the PCs in question are on sit/stand desks, so they need to move freely), the easiest repairs, and the cleanest installation. Doing the same with fiber would probably be possible, but costs would likely be 3-4x with everything added up. I guess the ideal setup would be if the wall sockets could convert fiber to Ethernet, but I doubt anything like that exists at all (and if it did it would need power, and likely cost a minor fortune). So Ethernet is definitely the way to go for me.
There's only one catch here, which is that Mikrotik also sells SFP+ modules that do copper, including 2.5G, 5G, and 10G.

https://mikrotik.com/product/s_rj10

And while I haven't tried one of these, it seems like this combined with some of their small 10G SFP+ switches would give you the flexibility to do a lot of different and interesting things.
That's no doubt true. But those modules seem to cost nearly as much as an Aquantia 10GbE NIC alone, and more when a used 10GbE SFP+ NIC is added to that. So while it would undoubtedly be both a faster setup and could allow for a cleaner installation, the overall cost simply isn't feasible within my budget - I might have been willing to stretch if it provided a tangible benefit, but I really don't have a use for that speed.
The problem you're going to run into is that 2.5GbE is poorly supported. Not just in FreeNAS, but pretty much everything. The industry used to upgrade by an order of magnitude in speed every three years, so there was a lot of innovation and competition as we went from 10Mbps-100Mbps-1Gbps-10Gbps, but copper topped out at 1Gbps for a really long time. Faster wifi has been used as a justification for 2.5G and 5G ethernet, but is sort-of bullchips. There's a lot of interest in trying to generate a new low end ethernet market, but it is very rough to justify on the basis of technical merits, and is basically a PT Barnum class exercise.
Well, that's one perspective - and to me it sounds like a datacenter/server oriented one that's rather out of touch with (even enthusiast) consumer applications. Another is that the industry has been half-heartedly trying to get >GbE into homes for a decade, but cost and power consumption (+ the need to replace existing cat5e cabling in many homes) makes 10GbE infeasible for the vast majority of home use, leading to stagnation at GbE speeds for pretty much two decades, and increasing user demand for something better. Enthusiasts have been hacking together 10GbE with used server gear for a decade, but that's just not feasible for a larger market.

nGbE provides an in-between step that delivers a tangible performance increase for home users (given that most storage these days can deliver >90MB/s, unlike ten years ago) but not unnecessarily so (given that very little storage in home use will ever see >200-300MB/s real-world load over significant time), all the while being cheap to make, consuming much less power than 10GbE, and generally being easier to integrate into systems. Most DIY-market motherboards for latest generation consumer platforms have 2.5GbE, either Intel or Realtek, and at least in Windows these do deliver 2-2.5x the performance of GbE "for free" (though of course motherboard prices are rising too). Now that we're also finally seeing consumer-focused nGbE switches at below $30/port (which is still 10-15x GbE, mind you), we might actually see home networking improve at scale. For the same to happen with 10GbE as the standard would have taken at least another five years, though more likely ten due to the required node shrinks to make the controllers small and efficient enough to live on a packed motherboard without any cooling.

So, while 2.5GbE (and 5GbE) are decidedly imperfect in-between solutions introduced after otherwise superior standards, they're far more likely to reach a level of actual adoption that can ultimately matter and lead to user experiences improving. There needs to be something in between $20 GbE switches alongside free NICs and complex fiber-based decommissioned server gear. 10GbE was supposed to do that, but has failed entirely - and is pretty overkill for most consumers anyhow. I completely agree that the WiFi arguments are a bit silly, but there are very real use cases (such as NAS use!) where the difference between GbE and 2.5GbE can be pretty significant.

With 2.5GbE I can get $30 NICs for the HTPC and the last desktop in the house, a $120-160-ish 5-8-port switch, and use cheap Cat6 cabling, so even with a single "expensive" Aquantia 10GbE NIC it's much cheaper than any other setup.

I definitely see how support in non-consumer facing OSes might be an issue, though using something like an nGbE-compatible Aquantia 10GbE NIC should circumvent that, no? It should auto-negotiate with the switch to whatever is the fastest speed they (+the cabling) can support, regardless of OS intervention, right?
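For my own future reference, my understanding is that once the NIC is in I should be able to confirm what was negotiated from the shell with something like the below (the interface name is a placeholder for whatever the card actually shows up as):

# Media types the interface claims to support:
ifconfig -m aq0

# The "media:" line of plain ifconfig output should show the speed actually
# negotiated with the switch, e.g. a 2500Base-T media type if 2.5GbE came up:
ifconfig aq0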
It is situation dependent, of course, but L2ARC can hurt as much as it can help on smaller memory systems.
Noted. I'll definitely stay away from that. I don't like the idea of having another network share just for high speed storage, but if that's what I have to do, that's what I'll do :)
That is drastically overestimating things; to my knowledge, Realtek has never provided ANY BSD drivers, the drivers in FreeBSD having been written by Bill Paul over at Wind River.
Wow, that's pretty terrible. Even if BSD is notably smaller than Linux, which again is a fraction of Windows in the consumer space, how hard (and expensive) would it be for them to provide driver support? I can't imagine it would be even a blip for them in terms of cost or effort. I don't overall have a particularly good impression of Realtek, but this definitely brings it down another notch.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,116
You have way too many questions to answer, but browsing through I think you're heading for trouble. Even if the motherboard supports bifurcation (and it's only a matter for the motherboard, not TrueNAS), how do you plan to fit an adapter and two PCIe cards in the Node 304 without losing two hard drive bays (and possibly spending more on the riser and PCIe cable than the cost of the motherboard)?
It may be better to look for a motherboard with enough SATA ports (and even possibly a 10G NIC that is N-Base-T compatible) rather than desperately attempt to reuse a round peg in a square hole.
 

Valantar

Dabbler
Joined
Apr 11, 2021
Messages
26
You have way too many questions to answer, but browsing through I think you're heading for trouble. Even if the motherboard supports bifurcation (and it's only a matter for the motherboard, not TrueNAS), how do you plan to fit an adapter and two PCIe cards in the Node 304 without losing two hard drive bays (and possibly spending more on the riser and PCIe cable than the cost of the motherboard)?
It may be better to look for a motherboard with enough SATA ports (and even possibly a 10G NIC that is N-Base-T compatible) rather than desperately attempt to reuse a round peg in a square hole.
The Node 304 fits HHHL cards with all three HDD bays installed, so that's not an issue. It's definitely a bit tight - that's why I went with a HBA with vertical/top ports rather than horizontal/end ports, as they'll make cable management much easier. But there's nothing that will interfere. Installing a GPU or some other large AIC would necessitate removal of the closest drive bay, but nothing HHHL (or even just "ITX length") does so.
 

Valantar

Dabbler
Joined
Apr 11, 2021
Messages
26
Well that took a while (mostly thanks to some extremely slow customs handling on my m.2 riser), but it's up and running! The build this past weekend took a bit more time than originally intended (made some custom power cables for the HDDs, which always takes time, and also DIYed a shroud between the CPU heatsink and rear fan for improved cooling). Looks janky as all get-out, but it works. So far everything seems to be working very well, except for that exhaust fan - I need to get a PWM fan in there, as my motherboard lacks DC fan control, and it's rather noisy at full speed. The HBA worked without a hitch, and I've used the m.2-to-PCIe riser to hook up a GPU for tuning the BIOS and so on, so that's working well too. So far it's just tucked away inside the case waiting for me to put a NIC in there.

Thanks to the incredibly informative, well structured and well-written manual (plus this video for a basic overview) I've set up my pools, datasets and shares, and made all the required users and groups to make things work as I want. I'm incredibly impressed with how easy it was to get automatic logins to shares from Windows working - this was far easier than getting seamless sharing from within Windows itself! MS should really take note. I'm also mightily impressed with how easy it was to create separate single-user shares for specific uses - thanks to this I can now have separate Windows File History shares for various PCs that don't clutter up the general backups dataset. The TrueNAS GUI designers definitely deserve some kudos for their work on making everything easily understood and managed.
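In case the shape of it is useful to anyone else, the datasets ended up along these lines (everything created in the GUI; the names below are just illustrative, and the shell listing is only there to show the structure):

# Read-only look at the layout; dataset names here are examples, not gospel:
zfs list -o name,used,avail -r backups media
# backups                   - shared backup/photo dataset for both of us
# backups/filehistory-pc1   - Windows File History share, desktop 1 only
# backups/filehistory-pc2   - Windows File History share, desktop 2 only
# media                     - shared media dataset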

Oh, and @Etorix, here's what clearances look like with the HBA installed :)
[Attached image: clearance around the HBA inside the Node 304]

There's not much clearance, but more than enough for it to work. If I was using an ATX PSU it would definitely be a squeeze (depending on PSU length it would essentially be right next to the HBA), but with my SFX-L PSU + me having made a custom mount for it there's more than decent room for cable management and the like. I'm still very happy I went with a top-ported HBA though, as it did make cabling a lot easier.

Thanks to this setup I now have plenty of room to grow too. I'll be replacing the two 4TB drives with drives in the 10-12TB range down the line (once this Chiacoin nonsense clears up and drive prices normalize), and there's still room for another three drives in the case. I was considering upgrading to a Silverstone CS381 at some point (and hot-swap drive bays would be amazing for upgrades and replacements), but given the current outlook of this system I don't really see the need.

Anyhow, thanks to everyone for their input and help! Definitely got some pointers here that have both made the setup easier and have let me avoid making bad plans for the future. The next step in this adventure will be reading up on and setting up the (command line :( ) Jottacloud file backup service. Not particularly looking forward to that - I might look into other cloud providers that offer better TrueNAS integration. We'll see. I also need to get my UPS configured. That will be interesting.

Oh, and by the way, is there any way in TrueNAS to monitor hardware beyond the very basic dashboard info? I.e. thermals, fan speeds, etc.?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,741
Oh, and by the way, is there any way in TrueNAS to monitor hardware beyond the very basic dashboard info? I.e. thermals, fan speeds, etc.?
You could install something like Observium or LibreNMS in a jail and point it to the TrueNAS with SNMP enabled plus IPMI for fan speeds and temperatures.
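Roughly, once the SNMP service is enabled on the TrueNAS side, a quick test from any machine with net-snmp installed would look something like this (hostname and community string are placeholders - use your own):

# Basic system information over SNMP; if this answers, LibreNMS/Observium can poll it:
snmpwalk -v2c -c public truenas.local system

# Storage and memory tables from HOST-RESOURCES-MIB, which the NMS turns into graphs:
snmpwalk -v2c -c public truenas.local hrStorageTable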
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
Oh, and by the way, is there any way in TrueNAS to monitor hardware beyond the very basic dashboard info? I.e. thermals, fan speeds, etc.?
If you had a supermicro board, you'd be in luck as there are already fan scripts developed for those (mostly) and they come with IPMI as a general rule. I'd be surprised to see you reporting back that you have IPMI on your board.

You can "retrofit" some of that functionality (thermal sensors and fan controls) by adding a Corsair Commander Pro to your box and using the script I have made available for that: https://www.truenas.com/community/posts/619377/
 

Valantar

Dabbler
Joined
Apr 11, 2021
Messages
26
If you had a supermicro board, you'd be in luck as there are already fan scripts developed for those (mostly) and they come with IPMI as a general rule. I'd be surprised to see you reporting back that you have IPMI on your board.

You can "retrofit" some of that functionality (thermal sensors and fan controls) by adding a Corsair Commander Pro to your box and using the script I have made available for that: https://www.truenas.com/community/posts/619377/
Yeah, no IPMI on this board. Whenever this setup needs an upgrade (which I hope is still 5+ years out) I'll hopefully be able to get more server oriented hardware rather than re-using old parts. Thanks for the suggestion regarding the Commander Pro though - I'll have to look into it. Though admittedly I would prefer using an Aquacomputer Quadro given that it's ~1/4 the size and does everything I need (my main system has one, and it's fantastic) plus has provisions for thermal sensors, though some quick searching doesn't make it seem likely that anything similar exists for that. I'll have to consider the Commander if I still feel the need for something like this after getting my exhaust fan sorted.
 

LarsR

Guru
Joined
Oct 23, 2020
Messages
716
I'm also using a Ryzen 1600X and an MSI X370 board. You will most likely have to disable Global C-States and AMD Cool'n'Quiet in your BIOS.
If those two settings are not disabled, it's possible that your NAS will freeze after 2-3 days of uptime and you'll need to do a hard reset.
 

Valantar

Dabbler
Joined
Apr 11, 2021
Messages
26
I'm also using a Ryzen 1600X and an MSI X370 board. You will most likely have to disable Global C-States and AMD Cool'n'Quiet in your BIOS.
If those two settings are not disabled, it's possible that your NAS will freeze after 2-3 days of uptime and you'll need to do a hard reset.
Hm, that doesn't sound good. I'll have to keep an eye out for this for sure, and disable it if it happens. Any idea why this is?
 

LarsR

Guru
Joined
Oct 23, 2020
Messages
716
Those are advanced power saving settings. If the CPU is mostly idle it will go into something like a hibernation state and cut power to some components. I had to learn that the hard way, and after weeks of advanced Google-fu I found the solution in a very old FreeNAS post here on this forum. After disabling those two settings and another setting called ErP Ready, the system became rock solid.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
I would prefer using an Aquacomputer Quadro
I have one of those and had bought it at the same time as I was trying to get the Commander Pro to work.

Unfortunately, the open source linux port of the reverse engineered code doesn't work to read or control the fans (even on linux), so it was no use to me for that project.

I did manage to fit the Commander Pro into my Node 304 though, even if it's a squeeze in the back right-hand side (and a mess of cables).
 

Valantar

Dabbler
Joined
Apr 11, 2021
Messages
26
I have one of those and had bought it at the same time as I was trying to get the Commander Pro to work.

Unfortunately, the open source linux port of the reverse engineered code doesn't work to read or control the fans (even on linux), so it was no use to me for that project.

I did manage to fit the Commander pro into my Node 304 though (even if it's a squeeze in the back right hand side. (and a mess of cables)
Well that sucks. At least you saved me a half hour of pointless just-to-be-sure googling though :P I definitely see how there could be room for a Commander Pro taped to the back wall though, and cabling shouldn't be too bad given that I won't be exceeding three fans, and don't need RGB or any of their USB extensions.

Also, judging by the thermals I'm currently seeing (idling in the low 50s, going above 60 at moderate loads installing plugins) I might just ditch the air cooler and get an Arctic Liquid Freezer II 120mm. Those temperatures aren't horrible by any means, but hotter than I'd like for a 24/7 machine, especially given that this one lives in a closet. I'd prefer to stick with air (pump failures and clogged tubes are always a worry), but I don't know of any 120mm-class tower coolers supporting mounting in that orientation on the AM4 mount.
 