Building my first TrueNAS SCALE server

M2TMbYpP

Cadet
Joined
Jan 27, 2022
Messages
5
Hi
I've been using a Synology NAS for more than eight years now. But recently, the hardware limitations of my DS2413+ have put me in situations where the NAS becomes unresponsive even under simple tasks. My monthly backup to LTO tape crashes frequently, and most Docker workloads run poorly, which I no longer want to put up with.

Thus, I decided to switch to new hardware that is upgradeable and not bound to a single hardware manufacturer as with Synology, and which is capable of running Docker containers and VMs flawlessly.
In addition, I would also like to rely on FOSS for my future NAS, which is why I will be using TrueNAS.

In the past I had good experiences with HPE servers, so I thought about going with that manufacturer.

Here is my hardware composition:
  • Server: HPE ProLiant DL380 Gen10 4214 1P 16GB-R P816ia 12LFF 800W PS Server (P02468-B21):
    • CPU: Intel Xeon Silver 4214 (12-core, 2.2 GHz, 16.5 MB cache, 85 W)
    • Memory: 64GB or 128GB DDR4 RAM
    • Storage Trays: 12x 3.5"/LFF Hot-Swap Trays
    • Data Storage (Data Vdev): 6x 20TB 3.5" HDDs (RAIDZ2 --> two-disk parity) (e.g. Seagate Exos X20 HDDs)
    • OS Storage: 2x 250GB NVMe M.2 SSDs (mirrored during TrueNAS installation)
    • Cache Storage: 1x 2TB M.2 NVMe SSD (with a high total TBW rating)
    • Built-in Storage Controller 1: HPE Smart Array P816i-a SR Gen10 (16 SAS-Lanes, 4 GB Cache, Hardware RAID) (804338-B21)
    • Built-in Storage Controller 2: HPE Smart Array S100i SR Gen10 (14 SATA-Lanes, Software RAID)

  • HBA Storage Controller
    Since hardware RAID controllers are to be avoided and the DL380 has a built-in hardware RAID controller, I thought about buying the LSI SAS 9305-16i (or the LSI SAS 9305-24i, as the 16i is not often available in my region) for my 6 HDDs. And in case I need to expand my storage in the future, I'd still have more than enough SAS lanes.
  • The DL380 has no M.2 slots. Can you recommend an M.2 NVMe PCIe controller card? 4x M.2 slots would be perfect :)
    From HPE itself, I only found the HPE Universal SATA 6G AIC HHHL M.2 SSD Enablement Kit (878783-B21), which appears to be powered (?) via PCIe but uses SATA ports & cables for the actual data transfer, which seems to negate all the advantages of NVMe SSDs.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
How many slots (PCIe) are available to you and what specs?
Does the server support bifurcation?

I use 4 M.2's in a single PCIe3 x16 slot using an Asus Hyper M.2 PCIe3 x16 card and bifurcation in the BIOS, which lets me address all 4 M.2's separately. Other makes are available that do the same thing

Note - the motherboard MUST support bifurcation for this to work (x16 to x4, x4, x4, x4). My google-fu is ambivalent on whether your chosen platform supports it. It might only support x16 to x8, x8, which isn't ideal
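Once a quad M.2 card is installed, an easy sanity check is whether each stick shows up at its own PCIe address. A rough Python sketch along these lines (untested; assumes the usual Linux sysfs layout):

```python
#!/usr/bin/env python3
"""With working x4/x4/x4/x4 bifurcation, every NVMe stick should
appear as its own PCIe device; if some are missing, the slot is
not bifurcating."""
import glob
import os

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    # /sys/class/nvme/nvmeN/device links to the underlying PCI device
    pci = os.path.basename(os.path.realpath(os.path.join(ctrl, "device")))
    try:
        model = open(os.path.join(ctrl, "model")).read().strip()
    except OSError:
        model = "?"
    print(f"{os.path.basename(ctrl)}: PCIe {pci} ({model})")
```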

There are cards that don't require bifurcation - but I understand they are:
1. Very expensive
2. Something I have no experience with

The card you mention would support M.2 SATA SSD's, which would give you SATA speeds but SSD latency, which is possibly more important. Remember that M.2 NVMe <> M.2 SATA (M.2 SATA SSD's are cheaper, but seem less common, normally) and they don't fit in each other's slots
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Can you recommend an M.2 NVMe PCIe controller card? 4x M.2 slots would be perfect :)
From HPE itself, I only found the HPE Universal SATA 6G AIC HHHL M.2 SSD Enablement Kit (878783-B21), which appears to be powered (?) via PCIe but uses SATA ports & cables for the actual data transfer, which seems to negate all the advantages of NVMe SSDs.

It isn't clear to me why you would expect a card that SAYS "SATA" in its description to support NVMe. NVMe SSD's and SATA SSD's are different things. NVMe talks directly to the PCIe lanes in a system in most cases, or through a PLX switch chip. SATA SSD's talk to the system through an AHCI controller or SAS HBA. We use cards like your SATA AIC card to site M.2 SATA cards inside a host where slots for 2.5" SSDs and power would be hard to find. A nicer example is this Addonics card, which I wrote about a number of years ago. It supports two SATA M.2's *plus* an NVMe M.2 on the back.
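If you want to see which path a given drive actually takes on a running Linux system, sysfs makes it visible; a quick sketch (my assumption about the usual device-path layout):

```python
#!/usr/bin/env python3
"""Classify block devices by the bus they hang off: NVMe drives sit
directly on PCIe, SATA drives behind an AHCI (ata) controller, and
SAS-attached drives behind an HBA."""
import glob
import os

for blk in sorted(glob.glob("/sys/block/*")):
    path = os.path.realpath(blk)      # resolves to the full device path
    if "/virtual/" in path:           # skip loop devices, md, zvols, ...
        continue
    if "/nvme/" in path:
        bus = "NVMe (direct PCIe)"
    elif "/ata" in path:
        bus = "SATA via AHCI"
    else:
        bus = "SAS HBA / other"
    print(f"{os.path.basename(blk)}: {bus}")
```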

I thought about buying the LSI SAS 9305-16i (or the LSI SAS 9305-24i, as the 16i is not often available in my region) for my 6 HDDs. And in case I need to expand my storage in the future, I'd still have more than enough SAS lanes.

How many cables are connected to the host backplane? It's certainly possible that they used 12 of them to drive the bays off the P816i-a, but sometimes the backplane has its own SAS expander on it, which would make a 9300-8i acceptable.
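If you can boot a live Linux on the machine before choosing the HBA, the SAS transport layer will reveal an expander if one exists; a small sketch (assuming the standard sysfs class is present):

```python
#!/usr/bin/env python3
"""If /sys/class/sas_expander has entries, the backplane (or an
add-on expander) fans the drives out, and a single-cable 9300-8i
may be enough; if it is empty, the bays are direct-attached."""
import glob
import os

expanders = sorted(glob.glob("/sys/class/sas_expander/*"))
if not expanders:
    print("No SAS expander visible - bays are likely direct-attached.")
for exp in expanders:
    print(f"Found expander: {os.path.basename(exp)}")
```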

Also, I don't know what an HPE Smart Array S100i SR Gen10 (14 SATA-Lanes, Software RAID) is. That sounds like an insipid attempt to quantify mainboard SATA ports as some sort of magic RAID controller. Perhaps it is just a typical AHCI controller. If so, your best bet may be to determine what cables are needed to hook these up to your backplane.

Memory: 64GB or 128GB DDR4 RAM
Cache Storage: 1x 2TB M.2 NVMe SSD (with a high total TBW rating)

These items are drastically mismatched in size. Figure no more than 10x ARC size for L2ARC, and quite possibly more like 5x. This means that you should really not try to go for more than 1TB L2ARC on a 128GB system.
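To put rough numbers on that rule of thumb (my reading of it, not an official ZFS formula - L2ARC headers consume ARC, which is why oversizing it backfires):

```python
#!/usr/bin/env python3
"""Back-of-the-envelope L2ARC ceiling: 5-10x ARC size. This treats
ARC as roughly all of RAM (a dedicated NAS); note that ZFS on Linux
(SCALE) may default ARC to only half of RAM, making these ceilings
even lower."""

def l2arc_ceiling_tb(ram_gb: float, multiplier: int) -> float:
    return ram_gb * multiplier / 1000.0  # GB -> TB

for ram_gb in (64, 128):
    lo = l2arc_ceiling_tb(ram_gb, 5)
    hi = l2arc_ceiling_tb(ram_gb, 10)
    print(f"{ram_gb} GB RAM: keep L2ARC under ~{lo:.1f}-{hi:.1f} TB")
```

On a 128GB system that lands right around the 1TB figure above.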

switch to new hardware that is upgradeable and not bound to a single hardware manufacturer

I found this to be dissonant given your selection of a fixed form HPE chassis, which is the very epitome of non-upgradeable. When you want to change to the latest CPU generation, you're going to have to replace the chassis, not just the mainboard.
 

M2TMbYpP

Cadet
Joined
Jan 27, 2022
Messages
5
First of all, thank you for your answers :)

How many slots (PCIe) are available to you and what specs?
With one CPU, I can use two x8 PCIe 3.0 slots and one x16 PCIe 3.0 slot.
If I were to install a second CPU, I could have at least five more usable PCIe 3.0 slots.
Does the server support bifurcation?

I use 4 M.2's in a single PCIe3 x16 slot using an Asus Hyper M.2 PCIe3 x16 card and bifurcation in the BIOS, which lets me address all 4 M.2's separately. Other makes are available that do the same thing

Note - the motherboard MUST support bifurcation for this to work (x16 to x4, x4, x4, x4). My google-fu is ambivalent on whether your chosen platform supports it. It might only support x16 to x8, x8, which isn't ideal
It's rather difficult to find a reference regarding that question, especially for dual bifurcation, a.k.a. quadfurcation.
But today I found a document where the support is mentioned (see page 24):

Added a new BIOS/Platform Configuration (RBSU) for dual bifurcation (quadfurcation) of PCIe Adapters to the Advanced PCIe Configuration Options.
This option will allow a x16 PCIe device to be bifurcated into four x4 devices. This option would only be used for PCIe Adapters that support this level of bifurcation.



The card you mention would support M.2 SATA SSD's, which would give you SATA speeds but SSD latency, which is possibly more important. Remember that M.2 NVMe <> M.2 SATA (M.2 SATA SSD's are cheaper, but seem less common, normally) and they don't fit in each other's slots
It isn't clear to me why you would expect a card that SAYS "SATA" in its description to support NVMe. NVMe SSD's and SATA SSD's are different things. NVMe talks directly to the PCIe lanes in a system in most cases, or through a PLX switch chip. SATA SSD's talk to the system through an AHCI controller or SAS HBA. We use cards like your SATA AIC card to site M.2 SATA cards inside a host where slots for 2.5" SSDs and power would be hard to find. A nicer example is this Addonics card, which I wrote about a number of years ago. It supports two SATA M.2's *plus* an NVMe M.2 on the back.
I may have expressed myself somewhat poorly.
I know the difference between SATA and NVMe, and I know that this card is not suitable for my purpose.
It's just that I had never seen a card like this before. There are cards that only support SATA SSDs but still use the PCIe interface for data transfer. This card, however, uses additional SATA ports for data transfer, which - for me - is kind of strange.


Anyway, this card is nothing I need. @NugentS, which card are you using?
I found this Delock Host Bus Adapter (Item No. 89017), but I don't know how reliable this HBA and its manufacturer are.

How many cables are connected to the host backplane? It's certainly possible that they used 12 of them to drive the bays off the P816i-a, but sometimes the backplane has its own SAS expander on it, which would make a 9300-8i acceptable.
I don't know for sure, as there is not much information available, but I'll try to find that out.
At least no SAS expander was mentioned or visible in any of the specification documents and pictures I found.
Otherwise, I could still buy the server and decide later which SAS HBA I need.

Also, I don't know what an HPE Smart Array S100i SR Gen10 (14 SATA-Lanes, Software RAID) is. That sounds like an insipid attempt to quantify mainboard SATA ports as some sort of magic RAID controller. Perhaps it is just a typical AHCI controller. If so, your best bet may be to determine what cables are needed to hook these up to your backplane.
That's a typical embedded AHCI adapter. Sorry, I forgot to mention that. There are only four SATA ports available on the mainboard.

These items are drastically mismatched in size. Figure no more than 10x ARC size for L2ARC, and quite possibly more like 5x. This means that you should really not try to go for more than 1TB L2ARC on a 128GB system.
Is this L2ARC size limitation still recommended now that OpenZFS 2.0 supports persistent L2ARC?
I haven't found any answers regarding L2ARC sizing with OpenZFS 2.0.
Then I'd go for 1TB SSDs :)
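For what it's worth, OpenZFS 2.0 exposes the persistence switch as a module parameter; something like this should show whether rebuild-after-reboot is active (a sketch, assuming ZFS on Linux):

```python
#!/usr/bin/env python3
"""Check whether persistent L2ARC (header rebuild after reboot) is
enabled; l2arc_rebuild_enabled is the OpenZFS >= 2.0 module
parameter."""

PARAM = "/sys/module/zfs/parameters/l2arc_rebuild_enabled"

try:
    with open(PARAM) as f:
        print("Persistent L2ARC enabled:", f.read().strip() == "1")
except FileNotFoundError:
    print("Parameter not found - OpenZFS < 2.0 or zfs module not loaded.")
```

As I understand it, persistence only saves the warm-up after a reboot; the header for every L2ARC block still lives in ARC, so the sizing guideline above should still apply.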

I found this to be dissonant given your selection of a fixed form HPE chassis, which is the very epitome of non-upgradeable. When you want to change to the latest CPU generation, you're going to have to replace the chassis, not just the mainboard.
Well, as far as I know, it is possible to upgrade the CPU on HPE servers.
Even a second CPU is supported, and I could install up to ~2TB of RAM.
But it's true that the motherboard is tied to this very chassis.
And if I were to choose a newer processor with a different socket, I'd have to replace the whole chassis and not only the mainboard.

Would you recommend another manufacturer/model in this regard?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
As I said in my post: the Asus Hyper M.2 (although I only have the PCIe Gen3 version - it's a bit old)
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
  • Data Storage (Data Vdev): 6x 20TB 3.5" HDDs (RAIDZ2 --> two-disk parity) (e.g. Seagate Exos X20 HDDs)

I don't know about HDD prices in Switzerland, but with only 6 disks, you lose quite a bit of capacity. A few more but smaller HDDs would be more efficient.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Sorry, of course not. I meant "more but smaller HDDs" and have corrected my post. Thanks!
 

M2TMbYpP

Cadet
Joined
Jan 27, 2022
Messages
5
As I said in my post: the Asus Hyper M.2 (although I only have the PCIe Gen3 version - it's a bit old)
Oops, how could I have missed that?
I was probably already a little too tired ^^
Is this ASUS Hyper M.2 x16 Gen 4 card (90MC08A0-M0EAY0) the successor to your card?
As described in the manual, this card only supports PCIe 4.0 on AMD TRX40/X570 motherboards and apparently supports some types of RAID?
In the meantime, I would rather try the Delock HBA 4x NVMe M.2 card (PCIe 4.0 x16, bifurcation)

I don't know about HDD prices in Switzerland, but with only 6 disks, you lose quite a bit of capacity. A few more but smaller HDDs would be more efficient.
This is worth considering.
The hard drives here have about the following prices:
  • Seagate Exos X20 20TB for 430€
  • Seagate Exos X20 18TB for 414€
  • Seagate Exos X18 18TB for 299€
  • Seagate Exos X18 16TB for 293€
If I get 6x Exos X20 20TB HDDs, I get 80TB with RAIDZ2. With 8x Exos X18 18TB, I get 108TB. The total price for the HDDs would be almost the same, but I'd get more storage out of the 18TB hard drives.
My main thought was to save some energy and reduce environmental impact by purchasing as few, but as large, hard drives as possible.
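For reference, the raw math behind those numbers (simple parity subtraction; actual usable space will be lower after ZFS metadata and the usual fill-rate headroom):

```python
#!/usr/bin/env python3
"""Compare RAIDZ2 layouts by raw capacity and total cost.
Raw capacity = (disks - parity) * disk size; real usable space
is somewhat lower (metadata, padding, ~80% fill guideline)."""

def raidz_raw_tb(disks: int, size_tb: float, parity: int = 2) -> float:
    return (disks - parity) * size_tb

layouts = [
    ("6x Exos X20 20TB", 6, 20.0, 430),   # price per disk in EUR
    ("8x Exos X18 18TB", 8, 18.0, 299),
]
for name, n, size_tb, eur_per_disk in layouts:
    raw = raidz_raw_tb(n, size_tb)
    total = n * eur_per_disk
    print(f"{name}: {raw:.0f} TB raw, {total} EUR ({total / raw:.1f} EUR/TB)")
```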

I found this to be dissonant given your selection of a fixed form HPE chassis, which is the very epitome of non-upgradeable. When you want to change to the latest CPU generation, you're going to have to replace the chassis, not just the mainboard.
I've been thinking about this for the past few days, and it's definitely not the best choice in terms of upgradeability. Supermicro came to mind as another option, as Supermicro offers many ATX/E-ATX/EE-ATX-compatible chassis in addition to chassis for proprietary motherboards, which would definitely be a more future-proof solution.

So here's my new Supermicro-based hardware composition:
  • Chassis: Supermicro SuperChassis 826BE2C-R920LPB
    • Form Factor: 2U chassis | ATX/E-ATX/EE-ATX
    • Drive Bay: 12x 3.5" hot-swap SAS/SATA
    • SAS-Expander: 12-port SAS3 Expander (BPN-SAS3-826EL2)
    • Power Supply: 2x 920W 1U Redundant PWS W/ Quiet Mode (PWS-920P-SQ)
  • Motherboard: Supermicro X12DPi-NT6
    • Form Factor: E-ATX
    • CPU/Socket: 3rd Gen Intel Xeon / LGA4189
    • Memory: 18 DIMM slots | up to 4TB DDR4-3200 | Supported: ECC, LRDIMM, RDIMM, Intel Optane Persistent Memory
    • Chipset: Intel C621A
    • SATA: 14 SATA3 ports (Intel C621A)
    • IPMI: ASPEED AST2600 BMC
    • Network Controller: 2x 10GBase-T (Intel X550)
    • PCIe 4.0 Slots (with 1 CPU):
      • x16: 2x
      • x8: 1x
  • CPU: Intel Xeon Silver 4314 (16-core, 2.40GHz, 3.40GHz (turbo frequency), 24MB cache, 135W)
  • Memory: 2x Kingston Server Premier ECC RDIMM 64GB, DDR4-2666 (KSM26RD4/64MER)
  • HBAs
    • LSI SAS9300-8i SAS controller card
    • Delock 89017 HBA controller card (4x M.2 NVMe, PCIe 4.0)
  • Storage
    • Boot SSDs (mirrored): 2x Corsair Force MP510 (240 GB, 400 TBW, M.2 2280) (CSSD-F240GBMP510)
    • Storage HDDs (RAIDZ2):
      • 8x Seagate Exos X18 18TB, 512e SATA (ST18000NM000J)
        OR
      • 6x Seagate Exos X20 20TB, 512e SATA (ST20000NM007D)
    • L2ARC Cache (optional - I think I will hold off on L2ARC until my TrueNAS server is up and running and I can check whether I benefit from an additional cache at all; see the sketch below this list)
      • 1x Corsair MP600 (1000 GB, 1800 TBW, M.2 2280)
        OR
      • Patriot Viper VP4300 (1000 GB, 1000 TBW, M.2 2280)
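As mentioned in the L2ARC item above, here is the kind of check I have in mind once the box is running: a rough sketch that reads the ARC hit ratio straight from the kernel stats (assuming ZFS on Linux / SCALE). A consistently high hit ratio would suggest an L2ARC won't buy me much:

```python
#!/usr/bin/env python3
"""Print the ZFS ARC hit ratio from /proc/spl/kstat/zfs/arcstats
(ZFS on Linux). Counters are cumulative since module load."""

STATS = "/proc/spl/kstat/zfs/arcstats"

counters = {}
with open(STATS) as f:
    for line in f.readlines()[2:]:   # first two lines are headers
        name, _kind, value = line.split()
        counters[name] = int(value)

hits, misses = counters["hits"], counters["misses"]
ratio = 100.0 * hits / (hits + misses)
print(f"ARC hits={hits} misses={misses} hit ratio={ratio:.1f}%")
```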
Thank you all again for all your feedback :)

EDITED: reformatting, added amount of PCIe-slots.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Oops, how could I have missed that?
I was probably already a little too tired ^^
Is this ASUS Hyper M.2 x16 Gen 4 card (90MC08A0-M0EAY0) the successor to your card?
As described in the manual, this card only supports PCIe 4.0 on AMD TRX40/X570 motherboards and apparently supports some types of RAID?
In the meantime, I would rather try the Delock HBA 4x NVMe M.2 card (PCIe 4.0 x16, bifurcation)
I imagine the PCIe 4.0 card is indeed the more modern successor. I only have PCIe3, so the older card was perfect and just works (once I worked out SMC's arcane slot numbering / labelling) - I had to ask in the end

Try the other card - although the Asus card has the advantage of a built-in cooler, whereas with your idea you may need to add some cooling, depending on airflow / case design etc. It should work as long as the motherboard supports 4-way bifurcation (which it should)

Those boot SSD's are a bit overkill, but whatever. I assume they are going in the Delock, with the other two slots used later as required (plus one on the mobo, but it doesn't matter). I agree, BTW, with waiting on the L2ARC to see if it's likely to make a difference.

Nice spec. Should last quite a while.
 

M2TMbYpP

Cadet
Joined
Jan 27, 2022
Messages
5
Try the other card - although the Asus card has the advantage of a built-in cooler, whereas with your idea you may need to add some cooling, depending on airflow / case design etc. It should work as long as the motherboard supports 4-way bifurcation (which it should)
Yes, there's something to that, of course.
But what I had overlooked is the form factor of this card: it's a full-height PCIe card, but I need a low-profile card for my chassis.
So I'm leaning towards buying this Delock 89045 2x NVMe M.2 low-profile card.
If I ever want to add L2ARC to my NAS, I will either have to buy a second Delock 89045 card or search for another low-profile card with a built-in cooler.

Those boot SSD's are a bit overkill, but whatever.
They are larger than needed, but there is almost no difference in price compared to 120GB SSDs, which are very rare these days.
 

M2TMbYpP

Cadet
Joined
Jan 27, 2022
Messages
5
In the meantime, I have decided to get a different Supermicro chassis.
The Supermicro 826BE2C-R920LPB I wanted to buy has a 12-port SAS3 expander (BPN-SAS3-826EL2) as its HDD backplane. This has the advantage that I only have to get one SFF-8643 cable to connect the expander to the HBA. The disadvantage, however, is the lack of compatibility with NVMe drives.

The Supermicro 826BAC12-R802LPB, however, has the BPN-SAS3-LA26A-N12 as its HDD backplane, which is compatible with NVMe drives and has SlimSAS x4 (for HDDs) and SlimSAS x8 (for NVMe SSDs) ports.
The disadvantage is that I need more SAS cables (SFF-8654 -> SFF-8643) and an HBA with more ports, such as the Broadcom LSI HBA 9400-16i. But since there's not much difference in price, that's fine with me if the system is more future-proof for it :)
 