[Newbie] Mini-ITX Mobo in 2023

neetbuck

Explorer
Joined
May 16, 2023
Messages
56
Hi guys,

I've spent the last couple of weeks reading up as much as I can on what a good mini-ITX server build for TrueNAS could look like. I kept seeing reddit post after reddit post suggesting gaming mobos, only for the posters to seemingly upgrade their builds on a yearly basis, run into compatibility issues, and hit weird bugs... which struck me as not what I wanted, tbh... I'm glad I decided to look at the official forum, because apparently it's the norm here to recommend using server motherboards instead and focus on making the best build rather than a build that sorta works even if you need to hold it together with string and sellotape.



My biggest dilemma right now is which ASRock Rack or Supermicro board I should get... I've looked through the forums, and a lot of the recurring mini-ITX suggestions are either out of stock or seem to have become infamous for their problems (like the ASRock C2750D4I).

The only thing I've ordered so far is a Jonsbo N2 case - hence why mini-ITX is a must. I plan on having 5x HDDs + 1x or 2x SSDs (it seems like having 1x SSD for booting is enough, given that I don't plan on using any apps). Other must-haves are 10GbE and ECC memory (pretty sure this is what people in this forum already tend to suggest). I also keep reading that being able to install as much RAM as possible is good for caching(?) - so I would like to try to get a mobo that allows for 64-128GB of RAM if possible.

A major doubt I have about some of the options I've found is that sometimes they have, for example, 8 SATA ports total, but only 4 are directly onboard... the other 4 are only accessible via an OCuLink connector... is it possible to make a single pool of 5 HDDs with a setup like that, or would it cap at 4 HDDs?
Right now I'm looking at:

[ASRock Rack]
  • C236 WSI4-85 (255€) - [2nd hand unopened / 1GbE]
  • C2750D4I (349,00€) - [2nd hand - problematic? / 1GbE]
  • X570D4I-2T (397,82€) - [new / 1GbE]
  • E3C246D2I (428,78€) - [new / 1GbE]
  • X570D4I-2T (655,29€) - [new / 10GbE]
  • E3C256D4I-2T (659,86€) - [new / 10GbE]
[Supermicro]
  • MBD-X11SCH-F (431,82 €) - [new / 10GbE / only 32gb ram max.]
  • MBD-A2SDI-4C-HLN4F-O (454,57€) - [new / 10GbE]
  • X10SDV-2C-TLN2F (572,40€) - [new / 10GbE]
  • X10SDV-4C-TLN4F (789,83 €) - [new/ 10GbE]

Any thoughts on these options? Any alternative recommendations are very welcome - the Supermicro lineup is much harder to look through than ASRock Rack's, since the site sucks, so I've probably missed a few good options.
 

MrGuvernment

Patron
Joined
Jun 15, 2017
Messages
268
Those reddit posts suggesting gamer boards are from people who have little actual concern for the stability of the data they put on their TrueNAS. While debates can get heated on ECC vs. regular RAM, which CPU to use and such, do you really want to trust your data to a gaming board with more bells and whistles and possible problems, vs. a tried-and-true, trusted board as you noted above? So you are definitely looking in the right place.

As you noted, the more RAM the better; ARC is the primary cache system for TrueNAS. (You will also find people on reddit and other forums saying "add in an SSD for cache!" - they are clueless; L2ARC has a very, very specific use case.) So the more RAM the better, with 32GB being about the minimum you want off the bat.

My first question would be: what do you plan to use this build for? Feeding media files to front ends, running VMs via NFS shares, storing files for multiple computers to access, or use as a storage location?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
The only thing I've ordered so far is a Jonsbo N2 case - hence why mini-ITX is a must. I plan on having 5x HDDs + 1x or 2x SSDs (it seems like having 1x SSD for booting is enough, given that I don't plan on using any apps). Other must-haves are 10GbE and ECC memory
More RAM is always better, but if your use case is just to serve files over 10 GbE, 32 GB RAM with Core should be comfortable.

A major doubt I have about some of the options I've found is that sometimes they have, for example, 8 SATA ports total, but only 4 are directly onboard... the other 4 are only accessible via an OCuLink connector... is it possible to make a single pool of 5 HDDs with a setup like that, or would it cap at 4 HDDs?
It's possible, of course. You only need a breakout cable from OCuLink to 4*SATA. Same for the MiniSAS-HD (SFF-8643) ports on A2SDi motherboards.

You've somewhat painted yourself into a corner by picking a mini-ITX case before having the system to fit into it, but you have done good research, congratulations!
  • C236 WSI4-85 (255€) - [2nd hand unopened / 1GbE] // looks like a good value option
  • C2750D4I (349,00€) - [2nd hand - problematic? / 1GbE] // depends when exactly the board was manufactured, so not the most attractive option…
  • X570D4I-2T (397,82€) - [new / 1GbE] // X470? else there's a duplicate entry
  • E3C246D2I (428,78€) - [new / 1GbE]
  • X570D4I-2T (655,29€) - [new / 10GbE]
  • E3C256D4I-2T (659,86€) - [new / 10GbE]
  • X11SCH-F (431,82 €) - [new / 10GbE / only 32gb ram max.] // micro-ATX, and no 10 GbE
  • A2SDI-4C-HLN4F-O (454,57€) - [new / 10GbE] // not 10 GbE, this would require A2SDi-H-TF
  • X10SDV-2C-TLN2F (572,40€) - [new / 10GbE]
  • X10SDV-4C-TLN4F (789,83 €) - [new/ 10GbE]
Any thoughts on these options? Any alternative recommendations are very welcome - the Supermicro lineup is much harder to look through than ASRock Rack's, since the site sucks, so I've probably missed a few good options.
Any X10SDV in mini-ITX size would do, so references to avoid are those with 'TP' (on-board SFP+) or '7' (on-board HBA), which are Flex-ATX.

-TLN boards are gone, but this seller may still have a X10SDV-F for a fair price.

Think total cost. 10 GbE is easy to add, as you appear to have no other use for the single PCIe slot, and Solarflare NICs can be found for 45 € on eBay. Atom C3000 and Xeon D-1500 boards use DDR4 RDIMM (they can take ECC UDIMM as well, but this is typically more expensive for a lower capacity); the CPU is included, so you only need to add a 40 mm fan (the Noctua NF-A4x25 is a favourite) on the "passive" heatsink to ensure proper cooling in a non-server case. Socketed Intel C2x6 or AMD boards use DDR4 ECC UDIMM and require a matching CPU (Core i3 / Xeon E-2000, or Ryzen, preferably Pro for actual ECC support), so the total cost will likely be higher than an embedded board, but performance on single-threaded SMB will be higher.
 

neetbuck

Explorer
Joined
May 16, 2023
Messages
56
Those reddit posts suggesting gamer boards are from people who have little actual concern for the stability of the data they put on their TrueNAS. While debates can get heated on ECC vs. regular RAM, which CPU to use and such, do you really want to trust your data to a gaming board with more bells and whistles and possible problems, vs. a tried-and-true, trusted board as you noted above? So you are definitely looking in the right place.

As you noted, the more RAM the better; ARC is the primary cache system for TrueNAS. (You will also find people on reddit and other forums saying "add in an SSD for cache!" - they are clueless; L2ARC has a very, very specific use case.) So the more RAM the better, with 32GB being about the minimum you want off the bat.

My first question would be: what do you plan to use this build for? Feeding media files to front ends, running VMs via NFS shares, storing files for multiple computers to access, or use as a storage location?
Exactly, I would much rather spend my money on something that I feel confident in than on something that I know I will be constantly worrying about or having to patch up... I just saw yet another thread today where someone had purchased an aliexpress mobo and they were already having issues they were going to have to solve down the line.

Ok, so I had the right idea about RAM then... what about the matter where some mobos have their total SATA ports divvied up into two groups of 4 or fewer ports each - does that mean I can only have a pool of 4 HDDs, or can I have a pool of >4 HDDs?

To answer your question, I plan on using this build for backing up my devices, storing my media library (which an Intel NUC will pull from for Jellyfin - so the NUC will do any transcoding that might be necessary), and storing all my projects and work files (.psd, .ai, .logicx and video files) - I would also like to be able to work on and edit those files from one of my devices while they are stored on the NAS.

Thank you for responding :)

(sorry for my delayed response btw - I was on a mini-holiday until today)
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Ok, so I had the right idea about RAM then... what about the matter where some mobos have their total SATA ports divvied up into two groups of 4 or fewer ports each - does that mean I can only have a pool of 4 HDDs, or can I have a pool of >4 HDDs?
A SATA port is a SATA port, no matter whether it has its own 7-pin connector or comes packaged as OCuLink, MiniSAS, SlimSAS or whatever with a breakout cable.

To answer your question, I plan on using this build for backing up my devices, storing my media library (which an Intel NUC will pull from for Jellyfin - so the NUC will do any transcoding that might be necessary), and storing all my projects and work files (.psd, .ai, .logicx and video files) - I would also like to be able to work on and edit those files from one of my devices while they are stored on the NAS.
Extra duties may benefit from more RAM. More RAM is likely to favour platforms which use RDIMM over UDIMM.
However direct editing of big files on the NAS with a single and relatively narrow raidz vdev is unlikely to be as snappy as you'd like.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
it's the norm here to recommend using server motherboards instead and focus on making the best build rather than a build that sorta works even if you need to hold it together with string and sellotape.

I understand what the motivation of the gamer crowd over on Reddit is, and how irate they get when they find out their reused gaming boards aren't particularly good choices. Most people here who build a NAS are in it for the long haul, expecting a five to ten year lifecycle out of the thing.

Just like you wouldn't want to do serious modern gaming on a $50 mini-ITX desktop board, you really want the right tool for the job with a NAS. Thanks for reading FIRST, asking questions SECOND, and letting the community help you out. I don't have much more to say that hasn't been said, just wanted to appreciate the effort you've put in up front.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Regarding gaming boards: if you accept the lack of ECC RAM, the real compatibility issue is the usually omnipresent Realtek NIC.
 

MrGuvernment

Patron
Joined
Jun 15, 2017
Messages
268
If you check my current system in my specs, it is done building, and what I use it for is buttery smooth. To note, I am limited by the PCIe 3 slots for my NVMe drives, so I max them out just below 4Gb/s when I move anything around.

My day-to-day usage is from my desktop rig (AMD 5950X / 96GB RAM / Kingston S3000 2TB NVMe / 10Gb SFP+) to my Brocade ICX6450 to my TrueNAS (10Gb DAC).

Linux Mint is my main OS, with VMware Workstation running VMs directly from my TrueNAS NFS share, which runs 2x Samsung 980 Pro 2TB in a mirror.

I can run anywhere from 1-2 up to 12-15 VMs at the same time (mostly Windows Server, with several Linux); none are super heavy usage, but they are all buttery smooth (one is a Windows VM I use for work stuff and actively use throughout the day).

Copying a VMDK from the TrueNAS to my desktop
[screenshot: transfer speed]

(PCIe 3 x4 limitation of the mobo for the NVMe's)

I then have my SMB shares for personal files (4 spinning rust drives in a 2-vdev mirror config), and instead of getting into NFS and SMB ACL share conflicts and making my head spin :D I just use SMB to connect. Files open fast and instantly.

My pictures directory is just over 900GB, a mix of JPEG and RAW, all of which I can open pretty quickly.

One folder has 141 items, half CR2 raw files and half JPG; it took 6 seconds to show me thumbnails for the 65 JPEG files. And I have not opened this directory for as long as I can remember!

Moving 2 x 3.2GB files from my Desktop to my spinning rust pool:
[screenshot: transfer speed]

So being able to work with files from the TrueNAS should be decent if you have the networking behind it, and depending on what you are working on (massive 8K video files hundreds of GB in size may see some delay), but for other items you should be fine...
 

neetbuck

Explorer
Joined
May 16, 2023
Messages
56
More RAM is always better, but if your use case is just to serve files over 10 GbE, 32 GB RAM with Core should be comfortable.


It's possible, of course. You only need a breakout cable from OCuLink to 4*SATA. Same for the MiniSAS-HD (SFF-8643) ports on A2SDi motherboards.

You've somewhat painted yourself into a corner by picking a mini-ITX case before having the system to fit into it, but you have done good research, congratulations!
  • C236 WSI4-85 (255€) - [2nd hand unopened / 1GbE] // looks like a good value option
  • C2750D4I (349,00€) - [2nd hand - problematic? / 1GbE] // depends when exactly the board was manufactured, so not the most attractive option…
  • X570D4I-2T (397,82€) - [new / 1GbE] // X470? else there's a duplicate entry
  • E3C246D2I (428,78€) - [new / 1GbE]
  • X570D4I-2T (655,29€) - [new / 10GbE]
  • E3C256D4I-2T (659,86€) - [new / 10GbE]
  • X11SCH-F (431,82 €) - [new / 10GbE / only 32gb ram max.] // micro-ATX, and no 10 GbE
  • A2SDI-4C-HLN4F-O (454,57€) - [new / 10GbE] // not 10 GbE, this would require A2SDi-H-TF
  • X10SDV-2C-TLN2F (572,40€) - [new / 10GbE]
  • X10SDV-4C-TLN4F (789,83 €) - [new/ 10GbE]

Any X10SDV in mini-ITX size would do, so references to avoid are those with 'TP' (on-board SFP+) or '7' (on-board HBA), which are Flex-ATX.

-TLN boards are gone, but this seller may still have a X10SDV-F for a fair price.

Think total cost. 10 GbE is easy to add, as you appear to have no other use for the single PCIe slot, and Solarflare NICs can be found for 45 € on eBay. Atom C3000 and Xeon D-1500 boards use DDR4 RDIMM (they can take ECC UDIMM as well, but this is typically more expensive for a lower capacity); the CPU is included, so you only need to add a 40 mm fan (the Noctua NF-A4x25 is a favourite) on the "passive" heatsink to ensure proper cooling in a non-server case. Socketed Intel C2x6 or AMD boards use DDR4 ECC UDIMM and require a matching CPU (Core i3 / Xeon E-2000, or Ryzen, preferably Pro for actual ECC support), so the total cost will likely be higher than an embedded board, but performance on single-threaded SMB will be higher.
I want the option for 10GbE given that, while my ISP doesn't support it yet, they are beta testing it and it seems like it will be the norm soon (hopefully). Also, I would like to be able to edit and work with the files I store on the NAS using another device... so maybe more than 32 GB RAM would be necessary for my use case?

Ok, I was worried about that, given that I want a pool of 5 HDDs... I'd read that it wasn't recommended to make one pool with HDDs connected to different controllers - so I take it that if there are two OCuLink connections, they're both typically connected to the same controller on the board? Or should I make sure that is the case?

Ahaha, yes, I feel like I certainly have... but there's no turning back now, I just received the Jonsbo N2 case today. And also, you're right, I made a blunder in my list. The X570D4I-2T is 655,29€ and the 397,82€ unit is the E3C242D2I (which has 1GbE and 6 SATA ports).

Thanks for the link, I'm considering it, although I like to avoid buying second hand when possible. But it's a good option so I might contact them.

Is there any downside to using a NIC for the 10GbE other than the fact that I use up one PCIe slot? Or is it possible I will run into compatibility issues or something unexpected like that?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I want the option for 10GbE [...] Is there any downside to using a NIC for the 10GbE other than the fact that I use up one PCIe slot? Or is it possible I will run into compatibility issues or something unexpected like that?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Is there any downside to using a NIC for the 10GbE other than the fact that I use up one PCIe slot?
Nope.
Or is it possible I will run into compatibility issues or something unexpected like that?
Not if you choose your NIC wisely, following the guide Davvo linked. Best bang for buck right now looks like the Solarflare cards.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Thanks for the link, I'm considering it, although I like to avoid buying second hand when possible. But it's a good option so I might contact them.
This is obviously your choice. But second-hand typically provides the best value for the kind of not-last-generation server motherboards we look for, the matching not-top-speed ECC RAM (especially!) and server-grade add-in cards.

Is there any downside to using a NIC for the 10GbE other than the fact that I use up one PCIe slot? Or is it possible I will run into compatibility issues or something unexpected like that?
I'm not sure why you think that a NIC in a PCIe slot is different from an on-board NIC. Either way, Realtek would be bad; Intel, Chelsio and Solarflare are good. It's all a matter of drivers.
Same for the SATA ports. Why would it be more problematic to make a pool from a SATA port plus OCuLink breakouts than from several discrete SATA ports?

Using the PCIe slot is indeed a big decision on a mini-ITX board because it is the only one. But in your case this slot can only serve for a NIC or for making an NVMe pool from a bifurcated x16 slot.
 

neetbuck

Explorer
Joined
May 16, 2023
Messages
56
Extra duties may benefit from more RAM. More RAM is likely to favour platforms which use RDIMM over UDIMM.
However direct editing of big files on the NAS with a single and relatively narrow raidz vdev is unlikely to be as snappy as you'd like.
What do you mean exactly when you say extra duties? And what would be the best approach for editing big files on the NAS then - is that being a snappy process, as you said, ultimately an impossible feat on a 5-HDD NAS unit like the one I'm building?

I understand what the motivation of the gamer crowd over on Reddit is
I think a lot of it comes from watching youtube videos as a primer - I definitely started my own research on youtube and was exposed to all those ideas over there.

So being able to work with files from the TrueNAS should be decent if you have the networking behind it
It seems like your rig is entirely connected to your local network from what you described - have you used it to edit files remotely? I hope I understood your post well; it seems like you're telling me that if I make sure to have as few bottlenecks in my network as possible, then I should be able to semi-smoothly do what I intend to do - at least on my local network (?).
(4 spinning rust drives in a 2-vdev mirror config)
Also why do you use this configuration? (I need to read up more on this subject still)
the matching not-top-speed ECC RAM (especially!)
Could you explain this - I don't quite understand what you mean here (Sorry if that's a total noob question - I've been mostly picking up information here and there and I've never looked into hardware before t.t)
I'm not sure why you think that a NIC in a PCIe slot is different from an on-board NIC. (...) Same for the SATA ports. Why would it be more problematic to make a pool from a SATA port plus OCuLink breakouts than from several discrete SATA ports?
I was under the impression that if you were to add an expansion card to a motherboard to expand the number of SATA ports on it, it was not recommended to combine the onboard SATA ports with the expansion card's SATA ports into one pool, because they're using two different controllers, which can lead to issues. I thought this might also be a problem on motherboards that use OCuLink breakouts and similar options, since I thought they might be under a different controller than the SATA ports... but looking at the schematics in the manuals for a lot of these mobos, they seem to always be connected to the same controller.

As for the NIC in the PCIe slot vs. it being onboard: since I don't know that much about hardware and I try to err on the side of caution, I was worried that there would be some kind of tradeoff (beyond losing a PCIe slot) or risk in doing that vs. getting a model with an onboard 10GbE NIC. But you've all told me there isn't, so I feel less worried about that option now :)

or for making an NVMe pool from a bifurcated x16 slot.
What would be the point of this?



I have narrowed down my list a little bit more thanks to reading through your replies... I definitely have a few more questions now though, mainly the ones I asked above, but I'm also a bit confused about the use cases for SSD drives... are they only useful if I plan on using applications (which I don't, given that I'm using my NUC for any applications)? Or would they be at all useful for the purposes I listed for my NAS?

Thanks a lot for all your replies so far, it's been really helpful and reassuring!
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
What do you mean exactly when you say extra duties? & what would be the best approach for editing big files on the NAS then - is that being a snappy process, as you said, ultimately an impossible feat on a 5hdd NAS unit like the one I'm building?
Oops, sorry! By "extra duties" I meant "running apps/jails/VMs in addition to NAS duties". On re-reading, you're not planning any of that, and "streaming" to an external player is regular NAS duty. Although more RAM is better, 32 GB should be a decent amount to start with.
The issue with editing on the NAS is that a vdev has the IOPS of a single drive. Editing big video files will require random access to seek in the file and, due to the way ZFS writes, reading one big video file, which one may think of as one big sequential read, likely involves a fair amount of random access. (ZFS groups writes in transaction groups, typically of 5 seconds, and never holds more than two TXGs; so a big video file which took more than five seconds to transmit will have been split into chunks, interleaved with whatever else was being written at the time.)
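To put rough numbers on that splicing (hypothetical figures, purely for illustration, not a measurement of any real system):

```python
# Illustrative arithmetic only -- hypothetical numbers.
# Suppose a 10 GB file arrives over the network at ~500 MB/s, and ZFS
# flushes a transaction group (TXG) roughly every 5 seconds.

file_gb = 10
ingest_mb_s = 500
txg_interval_s = 5

transfer_s = file_gb * 1000 / ingest_mb_s        # 20 s to receive the file
txgs_touched = -(-transfer_s // txg_interval_s)  # ceiling division

# The file's blocks end up spread across ~4 separate transaction groups,
# interleaved on disk with whatever else was written in the meantime.
print(int(txgs_touched))  # 4
```

So even a "sequential" file can land in several on-disk chunks, which is why reading it back involves some seeking.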
Five HDDs are also not going to saturate a 10 GbE link.
From what I've read here, those who want to edit 4K directly on their NAS go all NVMe, on an EPYC or Xeon Scalable-class server.

Also why do you use this configuration? (I need to read up more on this subject still)
Understanding pool design is indeed an absolute requirement, as you can't change it afterwards.
10 disks as 5*(2-way mirror) have the IOPS of 5 disks (5 vdevs), but the storage capacity of only 5 disks (50% efficiency). Mirrors are good for IOPS, and thus for random access, but not very efficient, and high resiliency comes at high cost (3-way mirror!). But mirrors are flexible: You can add or remove disks in vdevs, as well as add and remove vdevs as you see fit (provided that there's enough capacity in the remaining pool for removal!).
10 disks as a 10-wide raidz2 vdev have the IOPS of a single drive, but the throughput and storage capacity of eight (80% space efficiency). Raidz (of any level, but Z1 is considered insecure with current large HDDs) is good for bulk storage, but not good at random access. Also, raidz vdevs cannot change geometry after creation: No adding or removing drives to change width, and absolutely no change in raidz level. You can add vdevs but not remove anything: Once added, it's there until the pool is destroyed.
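As a back-of-the-envelope sketch of that trade-off (plain arithmetic only; real ZFS usable space is somewhat lower due to metadata, padding and slop space):

```python
# Quick sketch of the capacity/IOPS trade-off between mirrors and raidz.

def layout(n_disks, disk_tb, vdevs, parity_per_vdev):
    """Return (usable TB, relative random-IOPS) for a given pool layout.

    Each vdev delivers roughly one disk's worth of random IOPS, so the
    IOPS figure is simply the vdev count.
    """
    per_vdev = n_disks // vdevs
    usable_tb = vdevs * (per_vdev - parity_per_vdev) * disk_tb
    return usable_tb, vdevs

# 10 x 4 TB disks as five 2-way mirrors: 20 TB usable (50%), 5x IOPS
print(layout(10, 4, vdevs=5, parity_per_vdev=1))   # (20, 5)

# Same 10 disks as one 10-wide raidz2: 32 TB usable (80%), 1x IOPS
print(layout(10, 4, vdevs=1, parity_per_vdev=2))   # (32, 1)
```

Same drives, very different balance: mirrors buy IOPS with capacity, raidz2 buys capacity with IOPS.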

With 5 bays, you've more or less committed to raidz anyway. A Fractal Design Node 304 (6 bays) would have given a choice between 3*mirror or 6-wide raidz2.

I was under the impression that if you were to add an expansion card to a motherboard to expand the number of SATA ports on it, it was not recommended to combine the onboard SATA ports with the expansion card's SATA ports into one pool, because they're using two different controllers, which can lead to issues.
Provided you mean "HBA" rather than "SATA expansion card", attaching drives to different controllers adds some routing overhead but is perfectly supported.
I thought this might also be a problem on motherboards that use OCuLink breakouts and similar options, since I thought they might be under a different controller than the SATA ports... but looking at the schematics in the manuals for a lot of these mobos, they seem to always be connected to the same controller.
You got it: The physical shape of a connector promises exactly nothing as to where it comes from, or what protocol is supported.

What would be the point of this?
Motherboard allowing, 4 M.2 drives in an x16 PCIe slot (requires support for x4x4x4x4 bifurcation) would make for a super fast pool with huge IOPS performance (2*mirror) for running VMs, or for saturating a 10 GbE link when editing video files.
If so, you need onboard 10 GbE and full bifurcation support (e.g. X10SDV-whatever-TF).
If not, you may go for a cheaper motherboard without 10 GbE and add a (refurbished) Solarflare NIC for $50 (example, no endorsement of this particular seller).
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Could you explain this - I don't quite understand what you mean here (Sorry if that's a total noob question - I've been mostly picking up information here and there and I've never looked into hardware before t.t)
I forgot that one. What I meant is that a NAS does not require the latest and greatest CPU. So, rather than the latest 12th/13th generations of Intel CPUs (which are actually advised against, for lack of proper support by the scheduler…), we look at the 10th/11th gen., or the 8th/9th gen. (when Core i3 CPUs did work with ECC UDIMM!), and possibly all the way back to Broadwell (the Xeon D-1500 in Supermicro X10SDV). "Not-last-generation CPUs".
These go with matching "not-last-generation" motherboards—much preferably of the server kind rather than desktop/gamer.
And these take RAM which is not of the latest DDR5 generation, and often not of the fastest DDR4 speed (a Core i3-9100 would use DDR4-2400; a D-1500, DDR4-2400 or 2133). So "not-top-speed" RAM. But for a ZFS NAS, a larger amount of slower RAM is better than a smaller amount of faster RAM, and second-hand DDR4 RDIMM is very affordable.
 

neetbuck

Explorer
Joined
May 16, 2023
Messages
56
The issue with editing on the NAS is that a vdev has the IOPS of a single drive. Editing big video files will require random access to seek in the file and, due to the way ZFS writes, reading one big video file, which one may think of as one big sequential read, likely involves a fair amount of random access. (...) From what I've read here, those who want to edit 4K directly on their NAS go all NVMe, on an EPYC or Xeon Scalable-class server.
Hmmm, so there's nothing I could do on the 5-HDD build I'm working on to make it more apt for editing larger files directly on it? Is my best option maybe to use this build exclusively for storage and build another all-flash NAS for editing and work files down the line?

Is there really nothing I could do to improve the performance of this 5-HDD build, though? If I threw two extra SSDs into the mix (I'm looking at the case, and I'm pretty sure I could add 2 SSDs no problem), could I do any alternative configuration or something to help it along, or is that, as we discussed earlier, 100% useless unless I plan on using the SSDs for running apps?

Maybe I could have a mirrored setup with the 5th HDD as a "hot spare"? t.t (trying to rule out all options)

Five HDDs are also not going to saturate a 10 GbE link.
That would be a good thing though, right? Or are you saying that 10 GbE would be overkill?

attaching to different controllers adds some routing overhead but is perfectly supported
Interesting - I read someone on reddit saying that they had surmised from elsewhere that it wasn't a recommended practice for some reason. It's good to know that isn't a problem.

Motherboard allowing, 4 M.2 in a x16 PCIe slot (requires support for x4x4x4x4 bifurcation) would make for super fast pool with huge IOPS performance (2*mirror) for running VMs—or for saturating a 10 GbE link in editing video files.
Taking note of that for future reference.

By the way, thanks for you replies so far Etorix! This is all helping me so much and I feel like I'm on the cusp of figuring out exactly what I should be doing.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Hmmm, so there's nothing I could do on the 5-HDD build I'm working on to make it more apt for editing larger files directly on it? Is my best option maybe to use this build exclusively for storage and build another all-flash NAS for editing and work files down the line?
You can use the NAS for storage, download to your workstation for editing and then store the resulting files to the NAS.

Is there really nothing I could do to improve the performance of this 5-HDD build, though? If I threw two extra SSDs into the mix (I'm looking at the case, and I'm pretty sure I could add 2 SSDs no problem), could I do any alternative configuration or something to help it along, or is that, as we discussed earlier, 100% useless unless I plan on using the SSDs for running apps?
You could have an all SSD pool just for editing… but these SSDs may be better used directly in your workstation.

Maybe I could have a mirrored setup with the 5th HDD as a "hot spare"? t.t (trying to rule out all options)
Yes, it's possible.

That would be a good thing though right or are you saying that 10 GbE would be overkill?
It's not overkill, and you will get more than 1 Gb/s, more than 2.5 Gb/s, but not the nominal 10 Gb/s while pushing big files.
 

neetbuck

Explorer
Joined
May 16, 2023
Messages
56
You can use the NAS for storage, download to your workstation for editing and then store the resulting files to the NAS.
Yeah I suppose that would work just as well lol... I must min-max every aspect of my workflow though! :p

You could have an all SSD pool just for editing… but these SSDs may be better used directly in your workstation.
I only have a laptop at the moment, I might setup a desktop soon, but regardless I want to work on my laptop when I'm out and about too if possible. Would that be achievable with an SSD pool just for editing on my NAS, as you said, or is that more of a solution for working exclusively on my local network?

Yes, it's possible.
Do you see any downsides to a mirrored configuration like that?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I only have a laptop at the moment, I might setup a desktop soon, but regardless I want to work on my laptop when I'm out and about too if possible. Would that be achievable with an SSD pool just for editing on my NAS, as you said, or is that more of a solution for working exclusively on my local network?
You will have to set up VPN tunnelling and connect through the VPN to do that, but it's technically possible. I don't think you will be able to do remote editing from outside, though.
 

neetbuck

Explorer
Joined
May 16, 2023
Messages
56
You will have to set up VPN tunnelling and connect through the VPN to do that, but it's technically possible. I don't think you will be able to do remote editing from outside, though.
Yeah, sorry, I wasn't asking whether it is technically possible or how to set it up in a secure way; I mean, hardware-wise, is it possible to do in a way that is fast enough?
 