Home Server Build - Mixing HDD RPM

Delgon

Dabbler
Joined
Aug 24, 2022
Messages
18
Warning: Long Post xD
If you do not care about the specs and other info and thoughts, I would appreciate help with the HDD section.

CPU: AMD R7 5800X3D
MoBo: Gigabyte B550 Aorus Elite V2
RAM: 2x16GB UDIMM ECC - Samsung 3200MHz
BootDrive: 2x 240GB Kingston A400

HBA: AOC-S2308L-L8i - In IT mode.
HDD Old: 3x WD Red 8TB (Plus - bought before they even became Plus series) + 1x WD Red Pro 8TB
HDD New: 4x WD Red Plus 8TB
Pool: 8 Drives in RaidZ2
SLOG: 2x Intel Optane 16GB modules

OS: TrueNAS Scale

A few notes for the specs:
CPU: I get that it is kinda overkill for just a "simple" HDD array. I plan to maybe run some more Docker containers. I'll also try Plex and use it to offload x264 and x265 transcoding to the server itself (as I like ridiculous mega placebo params xD)
MoBo: I tested it and, in theory, the board is working correctly with ECC. It is not perfect, as the consumer platform lacks the necessary modules to properly verify ECC RAM - no error injection or even proper reporting of error corrections - but all hardware and software pointers say that ECC is ok (see the quick EDAC sketch below these notes).
RAM: So 32GB now with the possibility to upgrade to 64GB if I see the need.
BootDrive: 120GB would be enough, but the 240GB cost literally $2 more in the shop so I just went with them.
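
For reference, one of those "software pointers" on SCALE is the Linux EDAC counter set; a rough sketch of the check (assuming the amd64_edac driver is loaded - the counters should just sit at 0):

```python
# Minimal sketch: read the Linux EDAC error counters that TrueNAS SCALE
# exposes when an ECC-capable memory controller driver (amd64_edac on
# this platform) is loaded. Counters staying at 0 is the expected state.
from pathlib import Path

mc_dirs = sorted(Path("/sys/devices/system/edac/mc").glob("mc*"))
if not mc_dirs:
    print("No EDAC memory controller registered - ECC reporting may not be active")
for mc in mc_dirs:
    ce = (mc / "ce_count").read_text().strip()  # corrected errors
    ue = (mc / "ue_count").read_text().strip()  # uncorrected errors
    print(f"{mc.name}: corrected={ce} uncorrected={ue}")
```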

HBA:
Afaik it should be good :P It will be slotted into the x16 slot first but might be moved into the x2 slot later. I know the bandwidth is not the best there, but even at PCIe 3.0 x1 the HBA still has more bandwidth than the 2.5GbE link needs, so I think it will be fine.
SLOG: I want to use "Sync: Always" on a few datasets, and I might host a few VMs, in which case it will help. In theory, the 170-180MB/s writes they give will not be enough to saturate the 2.5GbE that the MoBo has, but I do not worry about it, as not all traffic will get there, and even if it did, it is still like 1.5Gb, which I would be more than happy with. I was thinking about getting a single (2 would be too much $$$ for me) ~100GB P4800X/P4801X for that, and maybe even for a mini L2ARC later, but it might not even be necessary if I ever upgrade the RAM to 64GB.
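
For the "Sync: Always" part, that is just a per-dataset property; a rough sketch of what I mean (the dataset names are made up, and the GUI exposes the same per-dataset Sync option):

```python
# Sketch: force synchronous writes on a few datasets. Dataset names are
# placeholders; "zfs" is the standard OpenZFS CLI that TrueNAS SCALE ships.
import subprocess

datasets = ["tank/vms", "tank/important"]  # hypothetical dataset names
for ds in datasets:
    subprocess.run(["zfs", "set", "sync=always", ds], check=True)
    # Confirm the property took effect
    subprocess.run(["zfs", "get", "sync", ds], check=True)
```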

HDD: This is a huge problem and I'm not 100% sure what to do about it. I read about it some but I would still like to get some insight into it.
Right now I have the "HDD Old" set, so 3 drives at 5400 RPM (or more - WD is kinda weird about the actual speed on those) and 1 drive at 7200 RPM. I have 2 options:
1. Buy 4 WD Red (5400 RPM) and have 7x 5400 and 1x 7200
2. Buy 5 WD Red and have all the "same" 5400 RPM drives. This however means I'll have 1 random free 8TB drive that I'll have nothing to do with.
Is option 1 a bad idea? From what I heard, mixing different RPM drives is not really that bad and should not really affect anything. In reality, the pool should just be as fast as the 5400 RPM drives, so I do not really lose anything. Option 2 is in theory possible but would cost me more and leave me with 1 random drive, which is kinda wasteful imo.

OS: I went with Scale even though there are a few vids and posts I saw showing lower performance compared to Core, which should not impact me a lot. It will allow me to use Docker containers, which I'm muuuch more familiar with, and which I hope will help me with the whole setup and everything.

Notes about usage:
The system will 95% be used for rather "cold" data - mostly backup/archive data and some media for potential Plex: movies, series, and anime. The server might run a few Docker containers for services like Plex, cloud, media conversion (x264/x265), and maybe some more, we will see. It will mostly be used by 1-2 people, as it is data storage for home usage.

This should give me some room for expansion in terms of RAM (32 -> 64GB) and a possible SLOG (and L2ARC) upgrade to a P4800X that might even double as L2ARC, but I think that should not be necessary since the usage is mostly archival (no real need for a huge cache/L2ARC) and mostly larger media files.

I think the whole setup is good enough for my usage, but I still need some insight on whether Option 1 in the HDD section is ok. Do you think I went super off in some aspect of the build, or should it be ok? I really wanted to do as many things as "right" as possible, so I really wanted ECC RAM. I cut corners a little bit with the HDDs, as they are SATA but the HBA is SAS - not sure if that is something to be "proud" of, but I thought having a proper HBA with SATA drives is better than just a SATA controller with SATA drives. Maybe I'm super wrong about that tho xD
 

gdarends

Explorer
Joined
Jan 20, 2015
Messages
72
You could do option 1 and use the 7200rpm drive as a hot spare.
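
If you go that route, adding the spare is a single zpool command; rough sketch (the pool name and disk path are placeholders for your own):

```python
# Sketch: add a disk as a hot spare to an existing pool. "tank" and the
# device path are placeholders - substitute your pool name and the
# /dev/disk/by-id/ path of the 7200 RPM drive.
import subprocess

subprocess.run(
    ["zpool", "add", "tank", "spare", "/dev/disk/by-id/ata-EXAMPLE_SERIAL"],
    check=True,
)
```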
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Before building anything from the WD drives, I would ensure that none of them are SMRs.

You must not use SMR drives for ZFS. So please, post the complete detailed model numbers of these drives before going any further.
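
Something along these lines will pull the exact model and serial for every disk (rough sketch; assumes SCALE's /dev/sdX whole-disk naming and smartctl, which TrueNAS ships):

```python
# Sketch: print model and serial number for each SATA disk so they can be
# checked against the known SMR model lists. Assumes TrueNAS SCALE (Linux)
# device naming and that smartctl is on the PATH.
import glob
import subprocess

for dev in sorted(glob.glob("/dev/sd?")):
    info = subprocess.run(["smartctl", "-i", dev],
                          capture_output=True, text=True).stdout
    for line in info.splitlines():
        if line.startswith(("Device Model", "Serial Number")):
            print(dev, line.strip())
```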
 

Delgon

Dabbler
Joined
Aug 24, 2022
Messages
18
The idea of the 7200 as a hot spare: hmm, that would leave me with 7 drives in RAIDZ2 and 1 sitting there doing nothing - no parity, no storage. In that case, I could go with RAIDZ3 for more parity, but I would rather stick with Z2 and have more space. And I think I read that it is best to have an odd drive count for Z1, even for Z2, and odd for Z3 (so the data drive count is always even). If I go with option 2, getting all the same 8x 5400 and having the 7200 as a spare, it would mean I need to either get another HBA or plug the spare into a SATA port on the motherboard.

Drives I currently have:
WDC_WD80EFZX-68UW8N0_R6GRLMGY
WDC_WD80EFZX-68UW8N0_VLH4W8YY
WDC_WD80EFAX-68KNBN0_VAGSHM5L
WDC_WD8003FFBX-68B9AN0_VRHMN04K

Afaik all are CMR - I also checked beforehand to make sure they are not SMR: https://nascompares.com/answer/list-of-wd-cmr-and-smr-hard-drives-hdd/
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Quite a few WD drives marketed as "5400 RPM class" were actually 7200 RPM. So in my view it is not even certain that you really have a problem here.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222

Delgon

Dabbler
Joined
Aug 24, 2022
Messages
18
Quite a few WD drives marketed as "5400 RPM class" were actually 7200 RPM. So in my view it is not even certain that you really have a problem here.
Hmm. I think the drives (WD80EFZX and WD80EFAX) are already quite different in terms of speed. The AX is pretty much 10-15% faster than the ZX, and the drives I would buy would be the ZZ, so there are more possible differences there :)
I can see differences in speed while I'm preparing the drives and doing a nice read / clear / read on them.
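
For context, the read / clear / read prep is just three dd passes per drive; a rough sketch with a placeholder device name (the middle pass wipes the disk):

```python
# Sketch of the read / clear / read pass, per disk. Destructive: the middle
# pass zeroes the whole drive. /dev/sdX is a placeholder - double check it.
import subprocess

dev = "/dev/sdX"
passes = [
    ["dd", f"if={dev}", "of=/dev/null", "bs=1M", "status=progress"],  # read
    ["dd", "if=/dev/zero", f"of={dev}", "bs=1M", "status=progress"],  # clear
    ["dd", f"if={dev}", "of=/dev/null", "bs=1M", "status=progress"],  # read again
]
for cmd in passes:
    # dd exits non-zero when the zero pass hits the end of the device,
    # so don't treat that as a fatal error.
    subprocess.run(cmd, check=False)
```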

That's actually not an issue.
Interesting. Didn't know about it. I was aiming for a proper HBA even with SATA drives just to be "safe", instead of a simple SATA controller/expansion card. In that case, do you think having a hot spare is better than a cold spare? The server will be in my home, so I'll be able to swap them if needed, and I'm not sure how good or bad having the drive sitting idle in the system is compared to cold on the shelf.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
CPU: AMD R7 5800X3D
MoBo: Gigabyte B550 Aorus Elite V2
I'm curious why you went for the extra cache (and cost) when the use case is mostly serving files to 1-2 users. Looks like a waste of resources.
Consumer motherboard, with Realtek NIC.

SLOG: 2x Intel Optane 16GB modules
SLOG does not need to be mirrored. Maybe you intend to stripe these to get some more performance and endurance, which 16 GB M10 lack.
But the question is: Do you really need sync writes for a media-serving home NAS?
SLOG: I want to use "Sync: Always" on a few datasets, and I might host a few VMs, in which case it will help. In theory, the 170-180MB/s writes they give will not be enough to saturate the 2.5GbE that the MoBo has, but I do not worry about it, as not all traffic will get there, and even if it did, it is still like 1.5Gb, which I would be more than happy with. I was thinking about getting a single (2 would be too much $$$ for me) ~100GB P4800X/P4801X for that, and maybe even for a mini L2ARC later, but it might not even be necessary if I ever upgrade the RAM to 64GB.
Upgrade RAM before you even consider a L2ARC. Again, it's not clear if it would be useful for a media-serving home NAS with two users.

HDD: This is a huge problem and I'm not 100% sure what to do about it.
As pointed out by @ChrisRJ, there's not even a problem here. The WD Red are "5400 rpm-class" in WD's own and unique marketing speak; they actually spin at 7200 rpm, just like the Red Pro. You couldn't even find (actual) 5400 rpm NAS drives: no one manufactures such drives any more.

Notes about usage:
The system will 95% be used for rather "cold" data - mostly backup/archive data and some media for potential Plex: movies, series, and anime. The server might run a few Docker containers for services like Plex, cloud, media conversion (x264/x265), and maybe some more, we will see. It will mostly be used by 1-2 people, as it is data storage for home usage.

This should give me some room for expansion in terms of RAM (32 -> 64GB) and a possible SLOG (and L2ARC) upgrade to a P4800X that might even double as L2ARC, but I think that should not be necessary since the usage is mostly archival (no real need for a huge cache/L2ARC) and mostly larger media files.

I think the whole setup is good enough for my usage, but I still need some insight on whether Option 1 in the HDD section is ok. Do you think I went super off in some aspect of the build, or should it be ok? I really wanted to do as many things as "right" as possible, so I really wanted ECC RAM. I cut corners a little bit with the HDDs, as they are SATA but the HBA is SAS - not sure if that is something to be "proud" of, but I thought having a proper HBA with SATA drives is better than just a SATA controller with SATA drives. Maybe I'm super wrong about that tho xD
As far as I know, the SATA controllers in Intel and AMD chipsets are just as good as a SAS HBA. SATA drives actually have a benefit over SAS drives in that they provide more SMART information.
Nothing seems "super wrong", but if you want to do it "proper", using a server board with Intel NICs, and preferably a BMC, should come before "using a SAS HBA". I also suspect you overspent on the special CPU with extra L3 cache, possibly the SLOG, and the HBA if the chipset alone could provide enough ports.
 

Delgon

Dabbler
Joined
Aug 24, 2022
Messages
18
I'm curious why you went for the extra cache (and cost) when the use case is mostly serving files to 1-2 users. Looks like a waste of resources.
Consumer motherboard, with Realtek NIC.
I went with it for two reasons.
1. I'll use the CPU for encoding and Plex so I wanted a capable CPU
2. I work on software for CPUs and I might boot this machine like once or twice a month to test this particular CPU. I know, not the best thing to do but I have to think about repurposing and using hardware I have as efficiently as possible :)

SLOG does not need to be mirrored. Maybe you intend to stripe these to get some more performance and endurance, which 16 GB M10 lack.
But the question is: Do you really need sync writes for a media-serving home NAS?
I might try striping them for some more performance.
Like I said, I might use a few VMs on it in which case it might help. We will see.

Upgrade RAM before you even consider a L2ARC. Again, it's not clear if it would be useful for a media-serving home NAS with two users.
Yes. As I mentioned, I'll first have to see how 32GB fares and then go up to 64GB if needed.

As pointed out by @ChrisRJ, there's not even a problem here. The WD Red are "5400 rpm-class" in WD's own and unique marketing speak; they actually spin at 7200 rpm, just like the Red Pro. You couldn't even find (actual) 5400 rpm NAS drives: no one manufactures such drives any more.
Ok. Nice to know there will be no problems with so many different models and potential speeds and other variations. NICE!

As far as I know, the SATA controllers in Intel and AMD chipsets are just as good as a SAS HBA. SATA drives actually have a benefit over SAS drives in that they provide more SMART information.
Nothing seems "super wrong", but if you want to do it "proper", using a server board with Intel NICs, and preferably a BMC, should come before "using a SAS HBA". I also suspect you overspent on the special CPU with extra L3 cache, possibly the SLOG, and the HBA if the chipset alone could provide enough ports.
The mobo itself has only 4 SATA ports, so I would need to buy a SATA controller anyway. The Intel Optane 16GB modules were not that expensive, so even if I end up not using them for SLOG, I might repurpose them for something else, like an RPi drive, if I deem them unnecessary for the server.

Was not aware that NIC brands and everything can have a meaningful impact on the build. Good to know and maybe I'll end up upgrading to 10Gb later, thanks for the tip.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Yeah - buy a different motherboard - that one is crap. (for all I know it might be a great gamer motherboard - but it ain't a great NAS motherboard)

It always astonishes me why people use that sort of motherboard for a NAS. It's a massive compromise from day one. It's got loads of stuff you don't want and should just turn off, and very little of the stuff you do want. It doesn't have a proper NIC (fundamental point) and has such crap expansion beyond a single usable x16 slot - the rest being largely useless (1x PCIe 3.0 x2, 2x PCIe 3.0 x1).
 

DigitalMinimalist

Contributor
Joined
Jul 24, 2022
Messages
162
If PLEX transcoding (you have PLEX pass?) is very important:

Intel 12500-12700 with Quicksync, W680 Mainboard, ECC RAM.

Mainboard with Intel NIC and probably enough SATA ports (no need for HBA).
Take one Optane as TrueNAS boot drive + 1 additional SSD for VMs (mirror if critical).
RAIDZ2 for media storage

If you want to stay with AM4: Asrock Rack x470, or x570
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
The mobo itself has only 4 SATA ports, so I would need to buy a SATA controller anyway. The Intel Optane 16GB modules were not that expensive, so even if I end up not using them for SLOG, I might repurpose them for something else, like an RPi drive, if I deem them unnecessary for the server.
These 16 GB Optane make perfect boot drives: Cheap second-hand, just the right size, do not use a SATA port. :wink:

Was not aware that NIC brands and everything can have a meaningful impact on the build. Good to know and maybe I'll end up upgrading to 10Gb later, thanks for the tip.
As suggested, you may look into AsRockRack X470/X570/B550 D4U server boards; some even come with on-board (Intel) 10 GbE.
 

Delgon

Dabbler
Joined
Aug 24, 2022
Messages
18
Any explanation of why Intel is good and anything else is bad? Just saying "non-Intel is crap" does not tell me anything.
As suggested, you may look into AsRockRack X470/X570/B550 D4U server boards; some even come with on-board (Intel) 10 GbE.
Ye, and they would cost me 3-4x what this board I currently have cost, which looks pricey just so I can have onboard 10GbE instead of (still dunno why it is a problem tho) 2.5GbE from non-Intel. And I have had a very bad experience with AsRock and regard them as pretty much the worst MoBo manufacturer there is.

Intel 12500-12700 with Quicksync, W680 Mainboard, ECC RAM.
I needed Zen3 for my tests, as I already have 12th gen in my laptop, so I went with it. And I prefer proper x264 encoding; hardware encoding always looks bad to me (Quicksync or NVENC). But if not for my laptop, I might have gone with 12th gen for it. I do not even see that kind of board here o_O

Mainboard with Intel NIC and probably enough SATA ports (no need for HBA).
Take one Optane as TrueNAS boot drive + 1 additional SSD for VMs (mirror if critical).
RAIDZ2 for media storage
Didn't really think about NOT going with an HBA, to be honest, as everywhere I read it is pretty much necessary to have an HBA for the drives etc. I might use the last 2 SATA ports on the board for some 2.5" SSDs, like 500GB/1TB in a mirror, for VMs, Docker containers, and some fast temp file storage between machines.
Yep, I'm planning to put those 8 HDDs into RAIDZ2 :)


Yeah - buy a different motherboard - that one is crap. (for all I know it might be a great gamer motherboard - but it ain't a great NAS motherboard)
The only thing you said about it is that the NIC is "crap", without any explanation of why, so I'm not sure how to take it. I'm also repurposing pretty much everything except the HBA and Optane, and I swapped the RAM to ECC instead of non-ECC.

It always astonishes me why people use that sort of motherboard for a NAS. It's a massive compromise from day one. It's got loads of stuff you don't want and should just turn off, and very little of the stuff you do want. It doesn't have a proper NIC (fundamental point) and has such crap expansion beyond a single usable x16 slot - the rest being largely useless (1x PCIe 3.0 x2, 2x PCIe 3.0 x1).
Expandability. To be honest, I built it for now, not for "in 10 years I'll need PCIe for 10 NVMe and 20 SSDs with 20GbE and what not".
Even the current set is very flexible.
1. What I plan to use now.
M.2: Optane
M.2: Optane
PCIe x16: HBA
PCIe x2: -
PCIe x1: -
PCIe x1: -

2. Will need 10GbE for some reason:
M.2: Optane
M.2: Optane
PCIe x16: HBA
PCIe x2: 10 GbE NIC
PCIe x1: -
PCIe x1: -

3. Want NVMe for some more speed
M.2: Optane
M.2: Optane
PCIe x16: 4x4 NVMe card
PCIe x2: 10 GbE NIC
PCIe x1: -
PCIe x1: HBA

Also, remember that the M.2 slots are proper x4. The lanes on the slots look lacking to you but are 100% usable, not like on other boards where half of the things stop working when you add things. And things like the X470/X570 ASRock Rack boards are more anemic in terms of PCIe slots; they have 10GbE instead of 2.5GbE tho (not like I have or need 10GbE on my home media/archive/backup setup).
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
SLOG does not need to be mirrored.
I never bothered with SLOG devices too much, since I don't have a lot of sync writes. But my understanding has actually been that a SLOG should indeed be mirrored. Losing a write-log entry because of a SLOG vdev failure, and thus potentially ending up with corrupted data, is something I would like to avoid.

Or did I miss something here?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Losing a write-log entry because of a SLOG vdev failure, and thus potentially ending up with corrupted data, is something I would like to avoid.
As often, it depends how paranoid you are…
A SLOG is potentially critical to data integrity, but it is only ever read in the event of a reboot after an unexpected shutdown. So the scenario for data loss is:
1/ Unexpected shutdown (software crash, sudden power loss, critical hardware failure); AND THEN
2/ SLOG fails upon reboot.
Under any other circumstances, a single drive SLOG is perfectly safe. (There's so little data to be read that URE are hardly a concern.)

A double failure seems quite unlikely, unless maybe it has a single cause, for instance a rogue PSU frying the board and drives with excessive voltage until they release the "magic smoke". For business-critical uses there's probably a point to a mirrored SLOG to recover from "the server was almost completely destroyed". For home use, at that point I'm happy to write off the whole NAS, with its last transactions, and fall back on the external backup.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Any explanation of why Intel is good and anything else is bad? Just saying "non-Intel is crap"
It's really "Realtek is crap." In the gigabit NIC sphere, their hardware is crap, and worse yet their drivers are crap. They suck less under Windows than under other OSs (because Realtek actually developed those drivers), but they still suck. Intel gigabit NICs are solid hardware and have solid drivers (under BSD, Linux, and most other OSs), they're widely available, and they're cheap. For gigabit, use Intel, end of discussion.

For 10G, it's a different story. Intel is still good, but so is Chelsio. So are a few others. Realtek is still crap.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Any explanation of why Intel is good and anything else is bad? Just saying "non-Intel is crap" does not tell me anything.
We're strictly talking about NICs here, and the answer is "driver quality".
Server-grade NICs have manufacturer-provided thoroughly tested drivers for FreeBSD and/or Linux. These include Chelsio, Solarflare and Intel server-grade NICs from X700/X500 series down to the i350 and i210, and are known to cope reliably with file serving duties ZFS may throw at them.
Consumer-grade NICs come with (quickly baked) Windows drivers from the manufacturer, and then whatever some lone developer came up with for Linux (SCALE) and either a quick FreeBSD port of the above or nothing at all (CORE). These include pretty much the whole Realtek range, and are known NOT to cope well with heavy load.

You may search the forum for reports of issues with Realtek 1 GbE NICs, and @jgreco 's rants about NICs, crappy drivers and why 2.5 GbE as a whole is technology which should not even exist.

RTL8125 may turn out to be better than its 1 GbE predecessors, and at least iXSystems is making some efforts to support it. In five years, we'll have some data points to evaluate it.
For now, purists here will regard going with a Realtek NIC to serve data as a worse offense against the "TrueNAS/ZFS hardware textbook" than not using ECC RAM or not using a HBA (no offense at all if the chipset alone provides enough ports).

And I have had a very bad experience with AsRock and regard them as pretty much the worst MoBo manufacturer there is.
For what it's worth, AsRockRack is not AsRock. These are two related, but separate, companies serving different markets.

3. Want NVMe for some more speed
M.2: Optane
M.2: Optane
PCIe x16: 4x4 NVMe card
PCIe x2: 10 GbE NIC
PCIe x1: -
PCIe x1: HBA
Ouch! That one would severely cripple the HDD pool…

Also, remember that the M.2 slots are proper x4. The lanes on the slots look lacking to you but are 100% usable,
But you'd want at least x4, preferably x8, for a HBA and x4 for a 10 GbE NIC. There are few uses for a x1/x2 slot in a server, except to boot from an NVMe drive on an adapter (these M10 Optane are x2).
So server boards have a more useful distribution of slots (min. x4) for server duties.
And server boards typically come with a BMC for remote management, which you may count for nothing if you've never used it but which is actually very handy once you've tried and enjoyed it. (It hardly matters here that the BMC typically uses a Realtek NIC: Management is low traffic.)
 

DigitalMinimalist

Contributor
Joined
Jul 24, 2022
Messages
162
Your idea of 4x4 M.2 in the PCIe x16 slot will only work with expensive cards with their own controller, as your mainboard does NOT support bifurcation…

I just bought an AsRock Rack X470D4U… Intel NIC onboard (doesn't matter if I take a PCIe NIC), bifurcation to use a $40 Asus Hyper M.2 card, and a BMC with integrated VGA output, which allows me to run an AM4 CPU without an iGPU…

I don't understand your hardware choices, but if you have that stuff already: use it, get an additional PCIe Intel NIC, and experiment with how well it works.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
With a used Supermicro board you would have a much more solid base and way fewer issues, but if you are repurposing hardware I guess you have to make choices... especially with those PCIe slots.

TrueNAS (ZFS iirc) doesn't care where you plug your drives, it will recognize and use them. Just make sure not to use any RAID controller / anything with a cache.

Example: you can have a RAIDZ2 pool with 5 disks, 4 connected to a proper HBA and the other directly to the mobo.

SLOG doesn't need to be mirrored, but everything is better when you have redundancy. If you lose it you only lose something like 5 seconds of data.
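
That "5 seconds" is the default OpenZFS transaction group timeout; a rough sketch of checking it on SCALE and doing the worst-case math at 2.5GbE line rate:

```python
# Sketch: read the OpenZFS txg timeout (source of the "~5 seconds" figure)
# and estimate the worst-case data in flight at 2.5 GbE line rate.
# Path is the standard OpenZFS module parameter location on Linux/SCALE.
from pathlib import Path

txg_timeout_s = int(Path("/sys/module/zfs/parameters/zfs_txg_timeout").read_text())
ingest_mb_s = 2.5 * 1000 / 8  # 2.5 GbE is ~312 MB/s of incoming writes, tops

print(f"zfs_txg_timeout = {txg_timeout_s} s (default is 5)")
print(f"worst case at stake: ~{txg_timeout_s * ingest_mb_s / 1000:.1f} GB")  # ~1.6 GB
```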

As a side note, a VDEV with mirrored SSD could help your dockers/VMs.
 

Delgon

Dabbler
Joined
Aug 24, 2022
Messages
18
Thanks for the nice info about the Realtek NICs. Maybe I'll try to add some proper 10GbE networking or something as soon as possible.
Your idea of 4x4 M.2 in the PCIe x16 slot will only work with expensive cards with their own controller, as your mainboard does NOT support bifurcation…
The board does support bifurcation. And with the X470 or X570 ASRock, if you even want to use the second PCIe slot there, your first one drops to x8, which means you cannot use proper 4x4 (4 drives) bifurcation and will be stuck with 2x4 (2 drives), which would require a more expensive card with a PLX switch. This board supports 2x8 / 1x8+2x4 and 4x4. This means that if I want 4x4, I would be left with the x1 slot for the HBA, which does not fix any more of my issues. Just use SATA instead of the HBA? Well, then it is not possible: 8 SATA HDDs and 2 SATA SSDs (boot) would mean 10 ports, and even if it were 8 (which the ASRock has), if I used all 8, I would lose M.2 slots as they share bandwidth. I went with the current board as it is SUPER clear what is on the board and what I can use, and nothing is cut off if I use some SATA, PCIe, M.2 or whatever.

I just bought an AsRock Rack X470D4U… Intel NIC onboard (doesn't matter if I take a PCIe NIC), bifurcation to use a $40 Asus Hyper M.2 card, and a BMC with integrated VGA output, which allows me to run an AM4 CPU without an iGPU…
I do not need a GPU/iGPU; that's why I went with a non-APU. A GPU is only needed for the BIOS setup and maybe if I want to change something, in which case I can just put in my handy 710 if needed.

For what it's worth, AsRockRack is not AsRock. These are two related, but separate, companies serving different markets.
Oh, did not know about it. Kinda like how HP and Hewlett Packard Enterprise (HPE) are split in the industry? Consumer vs enterprise?


Ouch! That one would severely cripple the HDD pool…

But you'd want at least x4, preferably x8, for a HBA and x4 for a 10 GbE NIC. There are few uses for a x1/x2 slot in a server, except to boot from an NVMe drive on an adapter (these M10 Optane are x2).
A x1 link is almost 1 GB/s of transfer and x2 is ~2 GB/s. It fits perfectly for:
x2 - 10 GbE NIC - somewhere around 1-1.25 GB/s of transfer (not gonna see the theoretical max but still)
x1 - HDD HBA - those drives are not gonna break the ~900 MB/s that x1 is capable of. Like, the speed on those drives is close to 100-180 MB/s depending on the end/beginning of the platter, which results in 600-1080 MB/s across the 6 data drives in my current 8-wide RAIDZ2 config. I know it is not "the best" but still ok for just a few HDDs I would say, at least looking at the throughput of the connectors themselves.
Also, if I ever go back on my SLOG idea due to very little use or something, those M.2 x4 slots can be repurposed just fine for the HBA and NIC, which will not bottleneck them in any way.
In most cases, when it comes to the lanes and potential throughput, I tried to think about the current setup, how close it is, and whether I can live with 90% of the performance using x1 instead of x2 or something for the HBA - for which a) I would need a 10GbE NIC, and b) I would need to run the drives full throttle, and even then they would not produce that kind of speed most of the time, and only sequentially.
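
The rough math behind that, written out (assuming ~985 MB/s per PCIe 3.0 lane after encoding overhead and the 100-180 MB/s per-disk figures above):

```python
# Back-of-the-envelope check of the slot plan above. Assumes ~985 MB/s per
# PCIe 3.0 lane and the 100-180 MB/s per-disk figures quoted above; an
# 8-wide RAIDZ2 streams from ~6 data disks.
pcie3_lane_mb_s = 985
hdd_min_mb_s, hdd_max_mb_s = 100, 180
data_disks = 8 - 2  # RAIDZ2 keeps two disks' worth of parity

print(f"pool sequential: {hdd_min_mb_s * data_disks}-{hdd_max_mb_s * data_disks} MB/s")  # 600-1080
print(f"HBA on x1: {pcie3_lane_mb_s} MB/s")
print(f"10 GbE NIC on x2: {2 * pcie3_lane_mb_s} MB/s vs ~1250 MB/s line rate")
```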

So server boards have a more useful distribution of slots (min. x4) for server duties.
And server boards typically come with a BMC for remote management, which you may count for nothing if you've never used it but which is actually very handy once you've tried and enjoyed it. (It hardly matters here that the BMC typically uses a Realtek NIC: Management is low traffic.)
When I get some more $$$ in a few years, I will definitely try to get some proper server-grade hardware and so on. Maybe my whole home network will also be switched to 10GbE so I'll be able to utilize it properly too.
Ye, I've never used any hardware with remote management, which was not a really appealing thing for me, especially at like 3x+ the price for the mobos. Maybe I'm bad at that; I'll have to get my hands on some cheap stuff to test and see how/what can be done with BMC, remote management, and IPMI - maybe it will change my view on it a lot :)


With a used Supermicro board you would have a much more solid base and way fewer issues, but if you are repurposing hardware I guess you have to make choices... especially with those PCIe slots.
I'll see how this build goes, and how much more expansion I'll need, which server hardware definitely provides compared to consumer stuff. Unfortunately, I do not have enough to buy new stuff for every component. I had many components that I only use sporadically for other things, so I wanted to put them to work XD

TrueNAS (ZFS iirc) doesn't care where you plug your drives, it will recognize and use them. Just make sure not to use any RAID controller / anything with a cache.

Example: you can have a RAIDZ2 pool with 5 disks, 4 connected to a proper HBA and the other directly to the mobo.

SLOG doesn't need to be mirrored, but everything is better when you have redundancy. If you lose it you only lose something like 5 seconds of data.
Ye, that's why my first thought was to go mirrored (like I read a lot about xD). I'll stick with the mirror and see how it goes. If I need more speed and come to the conclusion that maybe JUST having a SLOG is enough, maybe I'll stripe them. If I end up not needing them, well, I can always use them for some nicer NVMe "boost" xD

I know the hardware I chose is not "da best" and does not allow for great future expansion like many more drives with more HBAs, GPUs for potential HW encoding, or maybe passthrough for some work or whatnot. Those are the limitations of getting not 100% but almost everything (like 90% from what I can see) throughput-wise that I need and have right now. Remember, I use 8 HDDs as the main medium, so lanes for those are not that important; I do not use any kind of backplane with room for more drives, or SSDs, which I know would be limited by x1. Currently, I have a switch with 10GbE in theory, but the main rig still uses 2.5GbE, which is another limiting factor for now, and a potential upgrade I had in mind somewhere, as you can see xD But network bandwidth is not a priority for me; I'll not use this NAS for direct work, just backups, a dump of important files, and access from the internet for the family cloud (in which case I'm limited to 1Gb anyway).

I DO appreciate the insight into many more aspects like SATA, IPMI/BMC potential, and the NIC info, which I was totally not aware of. This is my first "proper" NAS (though I can see many do not want to call it that). I hear and read a lot about "test things for your use case" and "measure for your needs", which I try to do. This will be the first proper build, from which I'll learn what I will need in a few years as an upgrade. I get bombarded with new ideas and potential solutions to problems (sometimes with no explanation, just phrases like "this is crap" and "that is crap"). I hope I do not come across as ungrateful - I really appreciate the ideas, but I also thought a lot about these things and my current workload, not a potential future mega expansion xD I know having future options is nice, but I do not have enough money to throw at things that I might or might not need in the future and that benefit me in no way right now.

Do I need a SLOG? We will see. I can just scrap that idea; the Optanes were not super expensive and can be repurposed.
Do I need more encoding power for Plex? We will see xD
Do I need more RAM? Maybe, but I have the ability to go up to 64GB here, and that will tell me how I should look at it in the future.
I'll def get some IPMI/BMC hardware to see how much I'll crave it for the next build xD
I'll see how the current 2.5 Realtek works and possibly switch to something more tested soon :)
I'll def try to get more proper server/enterprise stuff in my next build, but for now, I have the hardware that I have. I could make some changes, but personally I do not see, at least for now, the benefits of the ASRock board: it would just cost me a lot of money and leave my current board unused, give me 10GbE but much fewer PCIe options, and if the HBA goes to x1 it's the same "issue" as I have now, while if it goes into the 2nd slot, I would lose the 4x4 bifurcation, which I do not want.
 