HELP: Supermicro X11SSM-F RAM

soopaman

Dabbler
Joined
Dec 23, 2022
Messages
10
Hi Community,

I have been lurking around this forum for quite a while and have been inspired to take the plunge and build a server. At present, I have a Synology 916+ and a 412+, which primarily serve as file servers. I also have a couple of Intel NUCs running Docker and a few VMs. I have been plagued by many failures of WD Red HDDs and I have had enough of SATA drives.

My motivation is to switch over to SAS drives for much improved reliability and hopefully better performance. I have bought a Supermicro X11SSM-F and the parts below. I am currently stuck on the choice of RAM, as I cannot make sense of the numerous gotchas. I would like to max out my server at 64GB, and I have seen some good deals on eBay that I hope could save me some money. My choices are:
  1. Samsung 32GB 2Rx4 PC4-2400T M393A4K40BB1-CRC0Q
  2. Micron 16GB 1Rx4 PC4-2400T DDR4 ECC MTA18ASF2G72PZ-2G3B1QK
  3. Samsung 16GB 2Rx4 PC4-2133P Server RAM ECC DDR4 M393A2G40EB1 HP:752369-081
I would appreciate your help in figuring out whether any of these would work with my mobo, please.

I also plan to introduce a Samsung 980 NVMe drive to handle the VMs running on TrueNAS SCALE, which I will install once everything is built. Any input on how best to do this would be appreciated. I was thinking one drive would be enough, with backups to the SAS drives.

I plan to use a Noctua CPU cooler but am still trying to figure out which one will work with my CPU and case.

Thanks in advance to you all for your kind support. I appreciate it.
Motherboard: Supermicro X11SSM-F
CPU: Intel Xeon E3-1270 v5, 4 cores @ 3.6GHz (SR2LF, LGA1151)
PSU: Seasonic 750W Prime Platinum
HDD: 8 x Seagate (ST4000NM0023) 4TB SAS-2 (LFF) 6Gb/s 7.2K
RAID controller: Fujitsu D2607-A21 (LSI SAS2008, MegaRAID) SAS/SATA 6G
CHASSIS: Fractal Node 804
NVME: TBD
RAM: TBD
CPU COOLER: TBD
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
many failures of WD Red HDD's
Drives:
RED or RED PRO? RED drives are SMR, which is very poor for NAS use (hell, SMR is poor just about anywhere). WD decided a while ago to make their RED line not NAS-appropriate by sneaking in SMR.
While there is nothing wrong with switching to SAS, SATA drives of the correct varieties are basically just as reliable, usually for significantly less money. Just avoid WD RED like the plague on data hoarders it is.

RAID controller: NOOOO! TrueNAS + RAID = sadness.

You want an HBA; while a handful of RAID controllers have an HBA mode, most do not, as that defeats their purpose.
The M1015s are popular (9201, iirc), but there is a massive flood of fake cards.
I would recommend checking out The Art of Server on eBay (currently in vacation mode), who flashes IT mode firmware onto many cards, including the compatible RAID cards.
Alternatively, if you have money to burn, the 9305 line (I have the 9305-24i) are great. The 9305-16i is ~$200-300 cheaper.

You also need to plan your boot drives. SSDs are basically the only option.
NVMe for VMs will be fine, but only one drive means no redundancy: ZFS can detect errors but cannot repair them.

Chassis: no external hot-swap, but nothing wrong with it. I used the mATX Define for a while.

Motherboard + CPU:
I have that board. Excellent board. The 1270 will be overkill; you might consider looking for v6s while you're going for overkill anyway. I have a 1230 v5 and I don't think I've ever even pushed it.
The stock Intel cooler is all you need, and it will definitely fit. Anything else can be added later (like if the cooler is too loud, which it shouldn't be; it's barely audible most of the time, unless you are using it as a pillow :wink:)

PSU: you should probably get 800W+. Never skimp on your power supply; they provide less power as they age. Seasonic are excellent, though.
There is a forum resource for PSU sizing.

RAM:
None of the RAM you listed is valid for this motherboard. You need UDIMMs, not RDIMMs (the modules you listed are registered parts). The speed will be mostly irrelevant.




 

soopaman

Dabbler
Joined
Dec 23, 2022
Messages
10
Hi @artlessknave, thanks for your very generous feedback, I really appreciate it.

I plan to flash the HBA card to IT mode; I read a thread where another user did the same with success. I will leave the rest to TrueNAS.

I already have a couple of Samsung 980 NVMe drives I can use for the VMs.

I got the CPU for $50, so I feel it is a bargain, and too much power can never be a bad thing as I plan on running quite a few VMs on the server.

I have a couple of options for a cheap secondhand PSU:
  • The Seasonic 750W Prime Platinum is going for $100 with a 10-year warranty
  • There is also an offer for a brand new Be Quiet! Dark Power Pro 11 850W, also for $100
From reading through the recommended hardware list and the forum, it seemed to me that the Seasonic is the preferred choice.

RAM is a sticky choice because it is really expensive in Europe and it is hard to find any bargains. Thankfully, with your advice, I will be fine.

One last question: can you give me a recommendation on how best to configure the 8 SAS drives for performance and redundancy, considering that I will be using 64GB of RAM in the system?
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
What do you want to do with your system?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I plan to flash the HBA card to IT mode, I read a thread where another user did the same with success. I will leave the it to TrueNAS to do the rest.
I strongly suggest you read the following resource. Strongly. Especially point 5.

About the RAM, you want to go with ECC if you are building this kind of system. Make sure to buy the right kind (unbuffered and registered are not interchangeable).

If you want to know more about pool performance, read the following resource. Without knowing your use case, we can't offer you suggestions.

While we are at it, I suggest reading the following resources as well.
 
Last edited:

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
I plan to flash the HBA card to IT mode, I read a thread where another user did the same with success. I will leave the it to TrueNAS to do the rest.
That's OK; however, not all RAID controllers have IT mode firmware available. I am not familiar with that one.
When listing it in a parts list, instead of calling it a RAID controller, put something like:
SAS HBA: raid controller blah blah (flashed to IT)
I got the CPU for $50 so I feel it is a bargain and too much power can never be a bad thing as I plan on running quite a few VM's on the server.
Nice! No problems there, then.
I have a couple of options for cheap secondhand PSU:-

A used PSU is going to have some of its maximum output reduced. A Seasonic will probably handle that better, as they are usually excellent quality to begin with.
I would bet that the Be Quiet! you mention will not supply even 750W as well as the Seasonic. You really do not want to cheap out on the power supply. It is the core of the system; everything needs *stable*, *reliable* power, or you can get weird, random, nearly impossible to diagnose issues.
One last question, can you give me recommendation on how best to configure the 8 SAS drives for performance and redundancy considering that fact that I will be using 64GB of RAM in the system.
You have exactly 2 choices: raidz2/3 or mirrors. The amount of RAM isn't relevant here.
Since you plan to put the VMs on separate SSDs, you probably just want raidz2.
Main differences:
raidz2/3: vdevs cannot (yet) be changed. Once you assign 8 drives, it will always be 8 drives. 8-drive raidz2 = ~75% usable space, 8-drive raidz3 = ~63% usable space. Generally less great for small and random reads/writes, but can be better for very large contiguous writes.
mirrors: vdevs can be fully changed, going from stripes to 4x mirrors by simply attaching/detaching disks. Always 50% usable (or 33% for 3-way mirrors). Generally the fastest overall for ZFS storage.
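The ~75% / ~63% / 50% figures come straight from the parity math. A back-of-the-envelope sketch for 8 x 4 TB drives (raw space only; ZFS metadata, padding, and the usual ~80% fill guideline reduce what you actually get):

```shell
# Raw usable space for 8 x 4 TB drives under each layout (sketch only;
# real usable space is lower due to ZFS overhead and the ~80% fill rule).
drives=8
size_tb=4

raidz2=$(( (drives - 2) * size_tb ))    # 2 parity drives -> 6/8 = 75%
raidz3=$(( (drives - 3) * size_tb ))    # 3 parity drives -> 5/8 = ~63%
mirrors=$(( drives / 2 * size_tb ))     # 4 x 2-way mirrors -> 50%

echo "raidz2:  ${raidz2} TB usable"
echo "raidz3:  ${raidz3} TB usable"
echo "mirrors: ${mirrors} TB usable"
```

Same arithmetic applies to any drive size; just change the variables.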
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
that's OK, however, not all raid controller have IT mode firmware available. I am not familiar with that one.
That's NOT OK. [According to @jgreco's wisdom, which I trust] Having a RAID controller working in IT mode is not the same as having an HBA working in IT mode.
It is important to use the correct terminology so that new users who might read this do not misunderstand. Although I'm not familiar with that particular model either, since he originally wrote RAID controller, I'm gonna assume the worst scenario and make sure to scream "danger! danger!" as loud as I can.

you really do not not want to cheap out on the power supply. this is the core of the system, everything needs *stable* *reliable* power, or you can get weird, random, nearly impossible to diagnose issues.
Agreed!

you have exactly 2 choices. raidz2/3 or mirrors. the amount of RAM isn't relevant.
since you plan to put VM's on separate SSDs, probably you just want raidz2.
main differences:
raidz2/3: vdevs cannot (yet) be changed. one you assign 8 drives, it will always be 8 drives. 8 drive raidz2 = ~75% usable space, 8 drive raid3 = ~ 63% usable space. generally less great for small rw and random rw, but can be better for very large contiguous writes
mirrors: vdevs can be fully changed, going from stripes to 4xmirrors by simply attaching/detaching disks. always 50% usable (or 33% for 3 way mirrors). generally the fastest overall for zfs storage.
A few clarifications:
- RAM relevance depends on the use case; the more the merrier. Depending on the usage (please clarify that "quite a few VMs"), 64GB could be plenty or the bare minimum.
- Since it is advised to use identical vdevs in a single pool, mirrors allow an easier (and cheaper) way to expand your storage capacity compared to raidz.

With 8 HDDs you can go with a single vdev in raidz2 or raidz3 (depending on how critical your data is). Any other config won't be worthwhile if you plan to use them for data storage or similar uses.
And as has been said, you will need at least a pair of SSDs in a mirror for your many VMs (which will be a separate pool from the HDDs).

EDIT: my post doesn't aim to undermine @artlessknave's words in any way, but to highlight and expand on a few parts (especially the first half) that I think are important.
 
Last edited:

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Having a raid controller working in IT mode is not the same of having an HBA working in IT mode.

Well, not to put too fine a point on it, but if we are going to be technically correct, which according to Futurama is the best kind of correct (*), a RAID controller "working in IT mode" is merely a lobotomized RAID controller, which isn't likely to meet the necessary requirements of being an HBA, whether or not in some imagined "HBA mode" or "IT mode". The goal is to get very specific behaviours out of the HBA, to work swimmingly well with the host kernel driver, and to be otherwise highly compatible with TrueNAS in a variety of ways. See


(*) Futurama, S02E14, How Hermes Requisitioned His Groove Back
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
That's NOT ok. [According to @jgreco wisdom, which I trust] Having a raid controller working in IT mode is not the same of having an HBA working in IT mode.
a RAID controller "working in IT mode" is merely a lobotomized RAID controller,
How is a RAID controller crossflashed to IT mode (if available) different from an HBA running the same IT mode firmware?

The resource linked, which I have already read, specifically says:
You must crossflash to IT/IR firmware

beyond the obvious of it being kind of a waste of a RAID card
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
how is a raid controller crossflashed to IT mode (if available) different from an HBA running the same IT mode firmware?
The following are direct quotes from that resource (well, it's a third of it at this point). I have underlined some crucial passages that highlight the differences between an HBA and a RAID controller, although you have an answer in literally the first two sentences of the first point.
An HBA is a Host Bus Adapter.

This is a controller that allows SAS and SATA devices to be attached to, and communicate directly with, a server. RAID controllers typically aggregate several disks into a Virtual Disk abstraction of some sort, and even in "JBOD" or "HBA mode" generally hide the physical device. If you cannot see the type of device (such as "ST6000DX000-1H217Z") in "camcontrol devlist", you DO NOT HAVE A TRUE HBA. If you cannot get the output of "smartctl" for a device, you DO NOT HAVE A TRUE HBA. A true HBA passes communications through itself directly to a drive without further processing. No amount of marketing department wishful thinking can change that technical reality.

Note that having device names in "camcontrol devlist" and getting "smartctl" results is not any sort of proof that you do actually have an HBA instead of a RAID card. It's just an easy test that weeds out a wide range of RAID cards.
A RAID controller that supports "JBOD" or "HBA mode" isn't the same.

In these devices, you are relying on the RAID card driver to communicate from the host to the controller. As previously noted, the LSI HBA drivers have billions of proven run-hours, but in many cases, RAID drivers aren't as solid. Some of FreeBSD's RAID drivers have been tweaked to cope better with device error handling on the theory that you have redundancy (JBOD isn't), many do not allow you to poll the drive's SMART status, and in many cases you can inadvertently set up bad situations with write caching, etc. ZFS has the ability to generate immense amounts of I/O traffic that can be a crushing workload for the weedy little CPU's on a RAID controller, can totally flood the cache on a RAID card, etc. As mentioned in the previous section, many RAID cards also do things such as encapsulation of JBOD within a partition, which effectively locks you into having to use that RAID card. This is super-bad for error recovery. With SATA ports or LSI HBA ports, SATA drives are completely interchangeable.
A RAID controller with write cache is particularly bad.

A RAID controller with a write cache is likely to get swamped by the massive I/O ZFS is pushing. These devices are typically sized to cope with the sorts of I/O a standard server would push around, update a file here, read an executable from there, do some database updates... but ZFS will perform operations such as scrubs and resilvers that will maintain massive I/O pressure for hours or days. Even the normal I/O is demanding, as a ZFS transaction group can easily be a gigabyte, every five seconds. Hiding the actual performance of devices behind a tiny RAID card cache is not a good idea as it leads to less-predictable performance.

You must crossflash to IT/IR firmware
This refers to HBAs, as written immediately after the bold text.
If you don't crossflash, then a lot of the remainder of this ALSO applies to LSI non-IT-20.00.07.00 HBA's!! The IR firmware is also fine but is a few percent slower. It is not clear there is any value to doing this, as you would never want to use an IR virtual device with FreeNAS. We used to do this in the old days for boot devices, but with ZFS boot this may no longer be relevant.
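For anyone following along, the sanity check from the first quoted point boils down to two commands on a FreeBSD-based TrueNAS shell. A sketch only: the device name da0 is an example, and as the resource notes, passing this check does not prove a true HBA, it merely weeds out many RAID cards:

```shell
# Sketch of the quick HBA sanity check quoted above (FreeBSD/TrueNAS CORE).
# "da0" is an example device name; repeat for each attached drive.

# A true HBA exposes the real drive model string (e.g. ST4000NM0023):
camcontrol devlist

# SMART data must be readable directly from the physical drive;
# if this errors out or shows a virtual device, it is not a true HBA:
smartctl -a /dev/da0
```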
 
Last edited:

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
raid controller crossflashed to IT mode (if available) different from an HBA running the same IT mode firmware?

It's not. The LSI firmware is generally all based off the same CPU, and an LSI HBA is basically an LSI low-end RAID controller with features stripped, while an LSI high-end RAID card is an LSI low-end RAID card with a faster CPU, onboard cache RAM, and flash or battery backup to save the cache. So you can theoretically run IT firmware even on the LSI high-end RAID cards; there's a thread around here describing how to do this on a PERC H710. The problem is that you either need to be good at firmware hacking, or have the benefit of someone who has already done the firmware hacking for you. It is really only in the last few years that this has become a thing. My guess is that the market has been flooded with LSI 6Gbps high-end RAID controllers that are no longer supported by VMware ESXi, and that someone spotted the opportunity to convert these into HBAs.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
I am so confused.

The post was talking about flashing IT, HBA-only firmware onto the card (if compatible firmware is, in fact, available; I do not know the card), not about using a RAID card in RAID, JBOD, "passthrough", or whatever unsafe hack modes people have tried, and often failed spectacularly, to use. I already stated that that's a no. As long as the plan was to put IT firmware on the card, I said that was OK.

I have RAID cards flashed to their IT firmware, being the (iirc) 9201/9211 pairs that use the same firmware.

at no point have I advocated, suggested, or ever used, a RAID card, using RAID modes, with TrueNAS for data storage.
 
Last edited:

soopaman

Dabbler
Joined
Dec 23, 2022
Messages
10
Hi all, first of all my gratitude for all the outstanding feedback. Regarding the RAID controller, I read in this thread that it can be done. To be honest, it was really cheap, and my objective is uninterrupted, all-round performance, so I do not mind getting a proper HBA card. Bargainhardware has a lot of them going for good prices, so I do not mind replacing it with a properly specced one if that means peace of mind in the longer term. It was just a little overwhelming trying to make sense of all the terminology.

I could not make sense of the HBA cards: do I want one with zero memory or 1GB of memory on board?

My objective is to store my huge photo collection as safely as possible from potential HDD failures.
  • I run Plex and Nextcloud All-in-One as my core apps.
  • I also run Calibre-Web, Traefik and a few self-hosted apps in Docker.
  • The VMs are mainly Windows 10, for when I need access to shitty apps which are not available on Mac or Ubuntu.
  • The fact that TrueNAS SCALE can run Docker apps appeals to me.
  • Moving to TrueNAS SCALE will also give me a good chance to learn K3s.
  • I can scale the VMs from a RAM and resource perspective. If in the future I outgrow this system, I do not mind upgrading the board to something with more horsepower and 128GB of RAM.
I will follow @Davvo's recommendation and go with a single pool for the HDDs and a separate pool for the two NVMe drives. My world will not end if I ever lose the VM pool, as the data will be backed up to the SAS drives, and perhaps I will keep my Synology NASes around for a 3-2-1 backup strategy. In the meanwhile I will delve deeper into raidz2 and raidz3 to educate myself.

@Davvo as you are using a similar mobo can you recommend me a great CPU cooler please?

My gratitude and appreciation to you all. It really means a lot to me that you give your time to help me. Also, happy Christmas to you and your loved ones.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I could not make sense of the HBA cards, do I want one with zero memory or 1GB memory on board?
As far as I am aware, you don't want a cache (which can sometimes be turned off) between your drives and your CPU.
From a brief search on the forum your current one seems fine, but I am no expert on HBAs.
Regarding @artlessknave, your wording was a bit confusing, so I wanted to leave no room for any kind of doubt.

@Davvo as you are using a similar mobo can you recommend me a great CPU cooler please?
The issue is that many Supermicro mobos (especially used ones) come with a glued backplate that forces you to use its mounting screws.
This list can give you a few options besides the standard (Supermicro's) server-grade cooler.

I am using a very simple (and cheap) one since my CPU's TDP and usage are very low. It's not great, but it's honest work; however, please do note the CPU difference in our builds.
https://amzn.eu/d/2KJ63Q8

You want something with that kind of mounting mechanism. As I said, the server-grade ones are great (though don't buy one made for a rack if yours is in a case).

In the meanwhile I will delve deeper into raidz2 and raidz3 to educate myself.
You have been given a lot of material. Do take your time to read through it; at first it is a bit overwhelming, but once you get the gist of it, your issue becomes just finding the right resources. My tip is to start with the Introduction to ZFS.

Merry Christmas to you and your dears as well.
 
Last edited:

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I fail to see the doubt... can you explain it so I can try to keep it clear in future?
that's OK, however, not all raid controller have IT mode firmware available.
To an inexperienced user, this might have been seen as a green light to go with the RAID controller mentioned in the first post.

But to be fair, I just realized I was conflating a RAID controller that exposes an IT/HBA mode in its firmware with a card crossflashed to true HBA IT mode firmware.
I'm sorry for my oversight, everyone.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
raid controller having a IT/HBA mode
Also, not all controllers' IT/HBA modes are crap; some of them have legit HBA modes, which unfortunately does muddy the water.
I have an R730 which has a PERC H730P Mini (embedded). It has an HBA mode. You have to destroy/delete/clear everything RAID, and then reboot the whole system to switch it to HBA mode, so it's very clearly NOT passthrough/JBOD/etc.
Of course, being embedded, it's not gonna work in anything else, but as far as I could tell when testing, it's a true HBA mode (I'm using it as RAID for Proxmox testing right now).
 

soopaman

Dabbler
Joined
Dec 23, 2022
Messages
10
Hi dear Truenas family,

Thanks for all the encouragement and support. After much sweating and perseverance, I have finally managed to get things up and running. I had a right nightmare getting my paws on the correct memory for my system.

It has now dawned on me that this is only half of the story. I have done a lot of reading and am stuck trying to decide if RAIDZ2 or RAIDZ3 is the best way to go. I have a pair of Samsung 980 Pro NVMe drives I would like to use, in a striped configuration, for the VMs and some of the more demanding applications I will install, all of which will be backed up to the HDDs.

I would like to have performance and data integrity. Would it make sense to apply encryption when creating the storage pools? I am wondering if there is much of a performance overhead when encryption is applied, please?

I would welcome your valued input before I make a newbie error that will hurt later. Thanks and my sincere appreciation as always.
Cursor_and_TrueNAS.jpg

Cursor_and_TrueNAS2.jpg
 
Last edited:

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
I would like to have performance and data integrity. Would it make sense to apply encryption when creating the storage pools?
If you want the whole thing encrypted, yes. Do you need to encrypt everything? I wouldn't; I would just encrypt datasets as needed.
Encryption has little impact on performance (with AES-NI) and no bearing on data integrity.
*Make sure you back up and store any encryption keys securely and separately, as the pool will be little more than gibberish without them.*
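If you go the per-dataset route, it is a one-liner at creation time. A minimal sketch, assuming hypothetical pool/dataset names (tank, tank/private); in the TrueNAS UI this roughly corresponds to the Encryption Options when adding a dataset:

```shell
# Hypothetical sketch: keep the pool plain, encrypt only a child dataset.
# Pool/dataset names (tank, tank/private) are examples.
zfs create -o encryption=aes-256-gcm \
           -o keyformat=passphrase \
           tank/private

# Verify what ended up encrypted:
zfs get -r encryption tank
```

The passphrase (or key file) is what you must back up separately; without it the dataset cannot be unlocked.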
RAIDZ2 or RAIDZ3
RAIDZ2 is generally best for 8 drives, unless you really need to guarantee uptime. RAIDZ3 is generally better if you have a wider vdev, like 12+ drives. (Mine is raidz3; even at 11 drives it's overkill. I would probably switch to mirrors if I ever rebuild it.)
VMs and some of the more demanding applications I will install in a striped configuration
I would use a mirror, not a stripe. A stripe can detect damaged data but CANNOT repair anything. If anything ever goes wrong, you would have to restore from backups rather than letting ZFS just fix it.
 
Last edited:

soopaman

Dabbler
Joined
Dec 23, 2022
Messages
10
If you want the whole thing encrypted, yes. Do you need to encrypt everything? I wouldn't; I would just encrypt datasets as needed.
Encryption has little impact on performance (with AES-NI) and no bearing on data integrity.
*Make sure you back up and store any encryption keys securely and separately, as the pool will be little more than gibberish without them.*

RAIDZ2 is generally best for 8 drives, unless you really need to guarantee uptime. RAIDZ3 is generally better if you have a wider vdev, like 12+ drives. (Mine is raidz3; even at 11 drives it's overkill. I would probably switch to mirrors if I ever rebuild it.)

I would use a mirror, not a stripe. A stripe can detect damaged data but CANNOT repair anything. If anything ever goes wrong, you would have to restore from backups rather than letting ZFS just fix it.

I do not need everything encrypted, for sure. I will roll with your recommendation and encrypt using keys which I also have backed up offsite on hardware keys.

With regards to the drives, I was initially inclined towards RAIDZ3, but I started to suspect it would be overkill. :cool:
 