[Newbie] Mini-ITX Mobo in 2023

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Remote editing is limited by the wider network: Your uplink, downlink at your remote location… and anything in between.
I don't know how much is "enough", but reality is unlikely to match the benchmark.
 

neetbuck

Explorer
Joined
May 16, 2023
Messages
56
Remote editing is limited by the wider network: Your uplink, downlink at your remote location… and anything in between.
I don't know how much is "enough", but reality is unlikely to match the benchmark.
Is editing files on a NAS remotely something people don't tend to do then? For some reason I was under the impression that they did - or is it just more common to download the files remotely from their NAS and then work on them on their portable workstation?

Btw, how would I use an SSD pool just for editing, like the one you mentioned earlier that I could have on this build? As in, would I manually transfer files onto that pool from my HDD pool, or does TrueNAS have a way of automating or streamlining a process like that?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Btw, how would I use an SSD pool just for editing, like the one you mentioned earlier that I could have on this build? As in, would I manually transfer files onto that pool from my HDD pool, or does TrueNAS have a way of automating or streamlining a process like that?
Using an SMB or NFS share.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Is editing files on a NAS remotely something people don't tend to do then? For some reason I was under the impression that they did

Editing files from remote is completely commonplace, which is why we have protocols such as DAV. However, you did mention video files, and the question then becomes whether or not you're expecting that to be included. In general, it is not, because most video files are massive, and most Internet connections - even if they are CLAIMED to be AT&T 5Gbit up/down - are nowhere near that fast in transit from one location to another, and you'd need that speed on both ends of the connection as well. Video editing tends to demand huge amounts of highly responsive I/O, which favors a local NVMe drive for manipulation of data locally. This can be combined with over-the-Internet transfer of the data back and forth to a NAS to implement a full-featured editing solution.
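To make that last workflow concrete, a minimal sketch in Python (all paths are hypothetical, the NAS share is assumed to be already mounted locally over your VPN, and shutil is just a stand-in for rsync or whatever transfer tool you prefer):

import shutil
from pathlib import Path

# Hypothetical mount points: the NAS share (reached over the VPN)
# and a scratch directory on the workstation's local NVMe drive.
NAS_SHARE = Path("/mnt/nas/video")
LOCAL_SCRATCH = Path("/home/me/scratch")

def pull(clip: str) -> Path:
    """Copy a clip from the NAS down to local NVMe before editing."""
    dst = LOCAL_SCRATCH / clip
    shutil.copy2(NAS_SHARE / clip, dst)
    return dst

def push(clip: str) -> None:
    """Copy the edited result back up to the NAS afterwards."""
    shutil.copy2(LOCAL_SCRATCH / clip, NAS_SHARE / clip)

The editing itself then runs entirely against the local copy, so the slow WAN link is only crossed twice per clip.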
 

MrGuvernment

Patron
Joined
Jun 15, 2017
Messages
268
Yeah sorry, I wasn't asking about whether it is technically possible and how to set it up in a secure way; I meant hardware-wise, is it possible to do in a way that is fast enough?

This starts to play more into networking: sure, you could have a fast 1Gb link from your ISP, but the WiFi wherever you are, plus the overhead of a VPN/WireGuard, removes more speed... it may not be ideal, but again, it depends on how large these files are.
You also now maybe have to consider corruption: working on a file from the NAS, you hit save, and at that same time your WiFi connection drops and doesn't actually save the last 3 hours of work you just did, and your app crashes on your device because it lost connection to your NAS over your VPN tunnel.
 

neetbuck

Explorer
Joined
May 16, 2023
Messages
56
Using an SMB or NFS share.
So manually (right?)... and there's no way for my NAS to recognise I'm opening and editing a heavy file that needs to be on the SSD rather than the HDD pool then?

I think what I'm describing sounds more like caching... would focusing as many resources as I can on caching (maxing out RAM) be helpful for being able to edit larger files (4K video for example) on the NAS? Or would it not really do much, with mirrored vdevs being the actual solution if I decide editing large files is a priority at some point?

(editing large files on the NAS isn't a huge priority for me right now, although I can see it being the case in the future, hence why I'm still asking about it)

Editing files from remote is completely commonplace, which is why we have protocols such as DAV. However, you did mention video files, and the question then becomes whether or not you're expecting that to be included. In general, it is not, because most video files are massive, and most Internet connections - even if they are CLAIMED to be AT&T 5Gbit up/down - are nowhere near that fast in transit from one location to another, and you'd need that speed on both ends of the connection as well. Video editing tends to demand huge amounts of highly responsive I/O, which favors a local NVMe drive for manipulation of data locally. This can be combined with over-the-Internet transfer of the data back and forth to a NAS to implement a full-featured editing solution.
Yeah sorry, I'm interested in the possibility of editing video files remotely; it's not a huge deal for me, but I am curious. So it sounds like you're all telling me that editing video remotely will likely require me to download files from the NAS first, edit them, then upload them back onto the NAS.

This starts to play more into networking: sure, you could have a fast 1Gb link from your ISP, but the WiFi wherever you are, plus the overhead of a VPN/WireGuard, removes more speed... it may not be ideal, but again, it depends on how large these files are.
You also now maybe have to consider corruption: working on a file from the NAS, you hit save, and at that same time your WiFi connection drops and doesn't actually save the last 3 hours of work you just did, and your app crashes on your device because it lost connection to your NAS over your VPN tunnel.
It's sounding more and more like editing videos on a NAS remotely is a pipe dream and not very practical in any way... ok, I can definitely live without that; if I need to edit something remotely I'll just download the files I need from my NAS wherever I am.

-------

I have one last question. I've read superficially about SLOG - would that be helpful in the scenario where I edit 4K vids locally on my NAS? I feel like I've seen some people on this forum say that SLOG is only for very specific use cases - but I'm wondering if this would qualify as one of those use cases?

So far, after taking in all of the replies I've gotten here, I'm considering getting a mobo that allows for 5 HDDs (storage pool), 2 NVMe SSDs (pool for editing large files) and an SSD (for booting - I'm not sure if a SATA SSD is a good option to boot from). Any thoughts or criticisms are welcome.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I think what I'm describing sounds more like caching... would focusing as many resources as I can on caching (maxing out RAM) be helpful for being able to edit larger files (4K video for example) on the NAS?
That would help with reads, including random seeking into the video… once it is loaded in ARC. Writes still have to go to the pool and are limited by its sustained write capability. A dedicated NVMe pool for editing would help, at least with local edits (remote editing is going to be constrained by the wider network, which will certainly not match 10 GbE at home), but you'll have to move files manually between HDD and NVMe.

I have one last question. I've read superficially about SLOG - would that be helpful in the scenario where I edit 4K vids locally on my NAS? I feel like I've seen some people on this forum say that SLOG is only for very specific use cases - but I'm wondering if this would qualify as one of those use cases?
Not at all. SLOG is for SYNC writes (database transactions, VMs). Your workload does not qualify and should actually be done ASYNC to benefit from maximal speed. Sync writes are A LOT slower than async; a SLOG makes sync writes… "sloggish", which is not as bad as SLUGgish but certainly not fast.
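You can get a feel for the gap from any client with a rough sketch like this (not a proper benchmark; the paths are placeholders, and fsync-after-every-write is only an approximation of forced sync behaviour):

import os
import time

def write_chunks(path: str, sync: bool, chunks: int = 256, size: int = 1 << 20) -> float:
    """Write `chunks` blocks of `size` bytes, fsync-ing after each one if `sync`."""
    buf = os.urandom(size)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(chunks):
            f.write(buf)
            if sync:
                f.flush()
                os.fsync(f.fileno())  # wait for the block to reach stable storage
    return time.perf_counter() - start

print(f"async: {write_chunks('/mnt/tank/bench_async', sync=False):.2f} s")
print(f"sync:  {write_chunks('/mnt/tank/bench_sync', sync=True):.2f} s")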

So far, after taking in all of the replies I've gotten here, I'm considering getting a mobo that allows for 5 HDDs (storage pool), 2 NVMe SSDs (pool for editing large files) and an SSD (for booting - I'm not sure if a SATA SSD is a good option to boot from). Any thoughts or criticisms are welcome.
Any cheap, small SSD is good for boot. If you have 6 SATA ports (i.e. at least one spare after plugging in the 5 HDDs), a SATA SSD is a good option to keep all NVMe slots available.
 

neetbuck

Explorer
Joined
May 16, 2023
Messages
56
A dedicated NVMe pool for editing would help, at least with local edits (remote editing is going to be constrained by the wider network, which will certainly not match 10 GbE at home), but you'll have to move files manually between HDD and NVMe.
Upon thinking about it, my need to edit remotely is basically zero - and after what you guys have told me, I reckon if I ever do have the need to do so, I'll just download the files and work on whatever device I have with me. I am, however, thinking very seriously about having an NVMe pool for editing locally like you mentioned.

Not at all. SLOG is for SYNC writes (database transactions, VMs). Your workload does not qualify and should actually be done ASYNC to benefit from maximal speed. Sync writes are A LOT slower than async; a SLOG makes sync writes… "sloggish", which is not as bad as SLUGgish but certainly not fast.
Ahhh ok, thanks for the clarification!

Any cheap, small SSD is good for boot. If you have 6 SATA ports (i.e. at least one spare after plugging in the 5 HDDs), a SATA SSD is a good option to keep all NVMe slots available.
Yeah that's sort of what I was thinking of doing. I'm looking at a few options that have 6 SATA ports and one PCIe slot with 16 lanes - if I'm not wrong, I should be able to use a ----- to attach two NVMe SSDs to the build? The only issue is that if I use the PCIe slot up like that, I don't think I can add 10 GbE unless the motherboard comes with those ports.

btw these are the options I'm looking at:

X10SDV-2C-TLN2F (585.30€) - comes with a Pentium D1508
X10SDV-4C-TLN4F (808.03€) - comes with a Xeon D-1541

X570D4I-2T (661.28€)
E3C256D4I-2T (665.90€)

X10SDV-F (499.00€) - comes with a Xeon D-1541

I'm considering these because they all have 10GbE already onboard (except for the X10SDV-F), 6-8 SATA ports (as far as I can tell), 1 PCIe (x16), and they all max out at 128 GB RAM. Thoughts? I don't know how much the CPUs will affect the kind of work I mentioned I would put the NAS through.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Before choosing a motherboard, especially an ITX one, you need to settle on your needs. Do you want to reserve the single PCIe slot for an HBA? An NVMe adapter? An SFP+ cage?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Yeah that's sort of what I was thinking of doing, I'm looking at a few options that have 6 SATA ports and one PCIe slot with 16 lanes- if I'm not wrong I should be able to use a ----- to attach two nvme sdds to the build? The only issue is if I use the PCIe slot up like that, I don't think I can add 10 GbE unless the motherboard comes with those ports.
Correct. With only a half-height slot it's either an NVMe adapter or a 10 GbE NIC.
This allows 4 M.2 with an x16 slot, if the board can bifurcate x4x4x4x4 (X10SDV can):
https://www.aliexpress.us/item/3256803691464173.html
With x8x4x4 (C256), only three slots will work, or you'll need an adapter with a PLX switch:
https://www.aliexpress.us/item/3256801832693327.html
There may be cooling issues for the drives in this little case.
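Spelling out the lane math (illustrative only; a passive adapter simply maps one bifurcated link per M.2 socket):

# Each NVMe drive wants an x4 link; a passive adapter can only hand out
# what the slot's bifurcation setting provides.
for layout in ([4, 4, 4, 4], [8, 4, 4]):
    usable = sum(1 for lanes in layout if lanes >= 4)
    print(layout, "->", usable, "usable M.2 sockets")

With x8x4x4, the fourth socket on a passive four-slot card is simply dead; a PLX-switch adapter is the workaround.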

I'm considering these because they all have 10GbE already onboard (except for the X10SDV-F), 6-8 SATA ports (as far as I can tell), 1 PCIe(x16) and they all max out at 128 ram. Thoughts? I don't know how much CPUs will affect the kind of work I mentioned I would put the NAS through.
For SMB sharing you don't need many cores; higher clocks are better, but I don't know how much punch a Xeon E-2300 or a Ryzen (Pro) brings in actual use over a Xeon D-1500.
 

neetbuck

Explorer
Joined
May 16, 2023
Messages
56
Before chosing a motherboard, especially an ITX one, you need to settle down on your needs. Do you want to reserve the single PCIe slot for an HBA? A NVMe adapter? A SFP+ cage?
These boards all have 1*PCIe slot (16 lanes) and 1*M.2 slot with 4 lanes. My understanding is that I would want to use the PCIe slot to connect an NVMe adapter that supports at least two NVMe SSDs at the same time, to form an SSD pool. I don't know what I could use the M.2 slot for, if anything?

This configuration works with every mobo I'm looking at except the X10SDV-F (as I mentioned), because it doesn't have 10GbE, so I would have to figure out how to add two NVMe SSDs and a network card(?) to have the build I want. I probably won't go with that option given this apparent issue.

This allows 4 M.2 with an x16 slot, if the board can bifurcate x4x4x4x4 (X10SDV can):
https://www.aliexpress.us/item/3256803691464173.html
Oh interesting, I was thinking just 2 NVMe SSDs, but it seems like I could do 4... would there be any benefits to bifurcating x8x8 over x4x4x4x4?
With x8x4x4 (C256), only three slots will work, or you'll need an adapter with a PLX switch:
https://www.aliexpress.us/item/3256801832693327.html
I'm confused about the adapter linked here - it says x8 to 4*M.2... does that mean it wouldn't take advantage of the 16 available lanes?
There may be cooling issues for the drives in this little case.
Any way I can mitigate this?

Also, what if I went with the X570D4I-2T model - where might I find information on whether it can bifurcate x4x4x4x4, if I wanted to do so?
For SMB sharing you don't need many cores; higher clocks are better, but I don't know how much punch a Xeon E-2300 or a Ryzen (Pro) brings in actual use over a Xeon D-1500.
What about the Pentium D1508... It has 2 cores, do you think that would be a bad pick?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Oh interesting, I was thinking just 2 NVMe SSDs, but it seems like I could do 4... would there be any benefits to bifurcating x8x8 over x4x4x4x4?
You'd do x8x8 for two x8 devices, for instance with one of these adapters,
but the case does not have enough space for these.

I'm confused about the adapter linked here - it says x8 to 4*M.2... does that mean it wouldn't take advantage of the 16 available lanes?
Indeed, I picked the wrong device. With 16 lanes available (and a board which cannot bifurcate all the way down to x4x4x4x4), the best suited would be this one:

Any way I can mitigate this?
Not really, because there's not much space for heatspreaders…
Even cooling the CPU is severely constrained. The Jonsbo N2 is not yet in Noctua's compatibility database, but with 65 mm of clearance an NH-L9x65 would barely fit; the NH-L9a/i are safe. Either way, this limits the CPU to 65 W TDP or less.

Also, what if I went with the X570D4I-2T model - where might I find information on whether it can bifurcate x4x4x4x4, if I wanted to do so?
The board's manual? :wink:

What about the Pentium D1508... It has 2 cores, do you think that would be a bad pick?
2.2 GHz base, 2.6 GHz turbo, easily cooled by a (mandatory) 60 mm fan on top. RAM crippled to 1866 MHz, but not bad for 1-2 clients maximum.
 

neetbuck

Explorer
Joined
May 16, 2023
Messages
56
You'd do x8x8 for two x8 devices (...) but the case does not have enough space for these.
Ah, what a shame - are they very tall by comparison? That's kind of annoying... because that would've been an ideal setup for what I wanted.
Not really, because there's not much space for heatspreaders… (...)
Damn, are they really that big? Would it maybe help with heat if - even if I got a PCIe adapter that allows for 4 NVMe SSDs - I only used two NVMe SSDs? (My country gets really hot in the summer, so heat is a genuine concern - that said, I have a NUC with an NVMe SSD that's seemingly doing ok without a heatsink.)
The board's manual? :wink:
haha fair enough.
2.2 GHz base, 2.6 GHz turbo, easily cooled by a (mandatory) 60 mm fan on top. RAM crippled to 1866 MHz, but not bad for 1-2 clients maximum.
That might not work then, I might have more than 2 clients.

I feel like I'm at a crossroads now. I was heavily considering getting the X10SDV-2C-TLN2F or the X10SDV-4C-TLN4F - but I'm discarding the former because of what you mentioned about the CPU... and the latter I'm not so sure about anymore...

My biggest question rn is whether having those 2-4 NVMe SSDs connected to a PCIe adapter will be problematic because of the heat... if so, then maybe I'm better off just focusing on making this a device for storage; maybe in the future it could be my backup NAS. In other words, these are the two configs I'm thinking of now, in light of this new information:

1. 5 HDDs for storage & 1 SSD for booting connected to the SATA ports, then 2-4 SSDs for an SSD pool connected to the PCIe slot with an adapter card. [In this config the risk seems to be overheating because of the NVMes without a heatsink]

2. 5 HDDs for storage & 1 SSD for booting connected to the SATA ports, then maybe a network card connected to the PCIe slot. [Given that I could get a cheaper 2nd-hand mobo like the X10SDV-F that only has gigabit ethernet, since I would have an empty PCIe slot now]

An in-between option I just thought of could be having a 1-NVMe-SSD pool using the M.2 port that all these boards have, which I didn't really have a use for... would that be a weird idea or bad for some reason?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
My biggest question rn is whether having those 2-4 NVMe SSDs connected to a PCIe adapter will be problematic because of the heat...
I believe the HDDs will be your greatest concern regarding heat.

2. 5 HDDs for storage & 1 SSD for booting connected to the SATA ports, then maybe a network card connected to the PCIe slot. [Given that I could get a cheaper 2nd-hand mobo like the X10SDV-F that only has gigabit ethernet, since I would have an empty PCIe slot now]
I would go for this.

An in-between option I just thought of could be having a 1-NVMe-SSD pool using the M.2 port that all these boards have, which I didn't really have a use for... would that be a weird idea or bad for some reason?
You don't have parity. In an ITX build you are bound to compromises.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Ah, what a shame - are they very tall by comparison? That's kind of annoying... because that would've been an ideal setup for what I wanted.
Your case is not a server rackmount, designed to have risers and PCIe cards parallel to the motherboard.

Damn, are they really that big? Would it maybe help with heat if - even if I got a PCIe adapter that allows for 4 NVMe SSDs - I only used two NVMe SSDs? (My country gets really hot in the summer, so heat is a genuine concern - that said, I have a NUC with an NVMe SSD that's seemingly doing ok without a heatsink.)
Your case also has space only for a half-height card, so something like an actively cooled Asus Hyper-M.2 is out of the question.
That leaves the nice and basic Linksys half-height adapter, with two M.2 on each side. The cards "on the back", facing the CPU, would get some moving air from the CPU cooler. The cards "on the top", facing the case itself, would be in a cooling dead spot. As would a NIC card, actually.
But SSDs can run hotter than HDDs; it's possible that relatively cool PCIe 3.0 drives would do fine in this setting. It's just something to keep an eye on.

I agree with @Davvo that HDD cooling is the main concern, with a 15 mm thin fan trying to pull air through a backplane. The Jonsbo N1 appears to have a better cooling model than the N2. Please report how it's working for you.

I feel like I'm at a crossroads now. I was heavily considering getting the X10SDV-2C-TLN2F or the X10SDV-4C-TLN4F - but I'm discarding the former because of what you mentioned about the CPU... and the latter I'm not so sure about anymore...
Total budget allowing (with RAM and possibly CPU), you may go for a Xeon D with 4-6 cores, or a ≤ 65 W Ryzen 3-5 (Pro) in the X570D4I-2T for a bit more CPU power. If 10 GbE is on-board, this keeps the PCIe slot for future options - which may never be needed.
With the 6th SATA port used for boot, the board's M.2 may be used for L2ARC if (and only if) needed, or for a single-vdev app/jail/VM pool (the lack of redundancy not being a problem if these are standard apps which could just be deployed anew in case of failure, and/or the data is backed up to the redundant HDD pool).
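And should L2ARC ever turn out to be warranted, attaching the M.2 drive as a cache vdev is a one-liner under the hood (pool and device names below are placeholders; the TrueNAS GUI exposes the same operation, which is the normal way to do it):

import subprocess

POOL = "tank"       # placeholder pool name
CACHE_DEV = "nvd0"  # placeholder NVMe device node

# Attach the M.2 drive as an L2ARC (cache) vdev. Only worth it if ARC
# statistics actually show misses: L2ARC headers themselves consume RAM.
subprocess.run(["zpool", "add", POOL, "cache", CACHE_DEV], check=True)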
 

neetbuck

Explorer
Joined
May 16, 2023
Messages
56
I believe the HDDs will be your greatest concern regarding heat.
I can't do anything about the HDDs heating up with this case, except maybe getting a better fan to replace the stock one. I was talking specifically about the NVMe SSDs having cooling issues, because Etorix mentioned earlier in the thread that that could be an issue.

I would go for this.
Why would you go for the 5 HDDs storage + 1 SSD boot + network card option?

You don't have parity. In an ITX build you are bound to compromises.
That sounds acceptable; I don't think parity is the biggest problem after doing some thinking... I would say the biggest problem is that it would probably be more efficient for me to download the video files onto my workstation at that point - I don't think a pool with only one NVMe would be good enough for larger video files.
Your case is not a server rackmount, designed to have risers and PCIe cards parallel to the motherboard.
Perhaps in the future.
Your case also has space only for a half-height card, so something like an actively cooled Asus Hyper-M.2 is out of the question.
That leaves the nice and basic Linksys half-height adapter, with two M.2 on each side. The cards "on the back", facing the CPU, would get some moving air from the CPU cooler. The cards "on the top", facing the case itself, would be in a cooling dead spot. As would a NIC card, actually.
But SSDs can run hotter than HDDs (....)
Ah I see, I only just noticed that it says that in the description for the Jonsbo N2 (when I first read that description several weeks ago it looked like gibberish to me). I'm definitely thinking about scrapping the whole 'NVMe pool' plan for this build and doing that on another build in the future...

...but just out of curiosity, what if I got that half-height adapter for 4 M.2, but only actually used 2 NVMes on the side facing the CPU cooler - would that potentially be a safe-ish choice?

... also, couldn't I attach some kind of passive cooling onto the NVMes at least? I've seen those NVMe passive cooling shields online before... are they useless or something?
I agree with @Davvo that HDD cooling is the main concern, with a 15 mm thin fan trying to pull air through a backplane. The Jonsbo N1 appears to have a better cooling model than the N2. Please report how it's working for you.
Yeah I definitely agree, especially now that I've had the opportunity to ask questions here and to think about potential builds... this case is less than ideal in a lot of ways. I'm hoping that it will work well for now, and later as a backup. I will report on it in the future.
If 10 GbE is on-board, this keeps the PCIe slot for future options - which may never be needed
The X10SDV-F is looking mighty tempting though; it has a better processor than the X10SDV-4C-TLN4F at almost half the price. Do you have any strong arguments against it beyond the fact that I will have to use the PCIe slot? Perhaps what you said earlier, that the NIC would be in a cooling dead spot, combined with all the other factors.

or a ≤ 65 W Ryzen 3-5 (Pro) in the X570D4I-2T for a bit more CPU power
I will look into this though, because I'm not really familiar with AMD CPUs at all.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I can't do anything about the HDDs heating up with this case, except maybe getting a better fan to replace the stock one. I was talking specifically about the NVMe SSDs having cooling issues, because Etorix mentioned earlier in the thread that that could be an issue.
Unless you constantly hammer them you will be fine. Use NVMe Gen 3 SSDs, and possibly attach passive heatsinks to them.

Why would you go for the 5 HDDs storage + 1 SSD boot + network card option?
Cheaper. What you save here can be used in a future system (which you appear to be already planning) with fewer constraints.
It is also perfectly able to fulfill most of your requirements.

That sounds acceptable; I don't think parity is the biggest problem after doing some thinking... I would say the biggest problem is that it would probably be more efficient for me to download the video files onto my workstation at that point - I don't think a pool with only one NVMe would be good enough for larger video files.
Here I don't understand the second half.
If you are talking about space, unless you edit terabytes of single files, a single SSD can do the work.
If you are talking about reliability, you don't need to store the files on the SSD: you can copy the footage from the HDD pool to the SSD, edit there, copy the edited footage back to the HDD, and delete it from the SSD.
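That's the same pattern as the remote sketch earlier in the thread, just between the two pools on the NAS itself; a rough sketch with placeholder dataset paths:

import shutil
from pathlib import Path

HDD_POOL = Path("/mnt/tank/footage")     # placeholder dataset on the HDD pool
SSD_POOL = Path("/mnt/scratch/editing")  # placeholder dataset on the NVMe pool

def checkout(clip: str) -> Path:
    """Stage footage on the fast pool for editing."""
    dst = SSD_POOL / clip
    shutil.copy2(HDD_POOL / clip, dst)
    return dst

def checkin(clip: str) -> None:
    """Archive the result back to the HDD pool and free the SSD space."""
    shutil.copy2(SSD_POOL / clip, HDD_POOL / clip)
    (SSD_POOL / clip).unlink()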

But as others said, live editing on the NAS may require tons of ARC (and possibly L2ARC).
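For a rough sense of what "tons" means here, back-of-the-envelope (the bitrate is an assumption - check your actual codec settings):

# Working-set estimate for live editing; numbers are assumptions.
bitrate_bps = 0.7e9   # ~700 Mbit/s, ballpark for UHD ProRes 422 HQ
seconds = 60 * 60     # one hour of source footage

working_set_gb = bitrate_bps / 8 * seconds / 1e9
print(f"~{working_set_gb:.0f} GB per hour of footage")  # ~315 GB

Even 128 GB of RAM holds only a fraction of that, so ARC helps with the parts you scrub repeatedly, not the whole project.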
 

neetbuck

Explorer
Joined
May 16, 2023
Messages
56
Unless you constantly hammer them you will be fine. Use NVMe Gen 3 SSDs, and possibly attach passive heatsinks to them.
You're both seemingly saying opposite things to me. I'll think about it then - I was also wondering if I could attach passive heatsinks to them.
Cheaper. What you save here can be used in a future system (which you appear to be already planning) with fewer constraints.
It is also perfectly able to fulfill most of your requirements.
Fair enough. Definitely considering that option more and more - given, like you said, that I'm already thinking of the next build I will do.
Here I don't understand the second half.
If you are talking about space, unless you edit terabytes of single files, a single SSD can do the work.
If you are talking about reliability, you don't need to store the files on the SSD: you can copy the footage from the HDD pool to the SSD, edit there, copy the edited footage back to the HDD, and delete it from the SSD.
I meant that after googling around for a bit, most people seemed to say that editing large video files from one NVMe on a NAS was less efficient than simply downloading the files onto my workstation; some people said it might lag. Perhaps they're wrong, but I have no idea, since I've never tried editing 4K video from a NAS on one NVMe drive.

But as others said, live editing on the NAS may require tons of ARC (and possibly L2ARC).
Define "tons of ARC"... as I was already planning on going for 128 GB of RAM.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Larger than your working set.
 
neetbuck

Explorer
Joined
May 16, 2023
Messages
56
Larger than your working set.
So I can't know for sure until I'm actively using the NAS? I mean, all the mini-ITX mobos with a higher max RAM than that had some other specs I didn't like (M.2 (x2) or no PCIe at all).
 