> Remote editing is limited by the wider network: your uplink, downlink at your remote location… and anything in between.
> I don't know how much is "enough", but reality is unlikely to match the benchmark.

Is editing files on a NAS remotely something people don't tend to do then? For some reason I was under the impression that they did - or is it more common to download the files remotely from their NAS and then work on them on a portable workstation?
> Btw, how would I use an SSD pool just for editing that you mentioned earlier that I could have on this build? As in, would I manually transfer files onto that pool from my HDD pool, or does TrueNAS have a way of automating or streamlining a process like that?

Using an SMB or NFS share.
> Is editing files on a NAS remotely something people don't tend to do then? For some reason I was under the impression that they did (...)
Yeah sorry, I wasn't asking whether it is technically possible and how to set it up in a secure way; I mean, hardware-wise, is it possible to do in a way that is fast enough?
> Using an SMB or NFS share.

So manually (right?)... and there's no way for my NAS to recognise I'm opening and editing a heavy file that needs to be on the SSD rather than the HDD pool, then?
> Editing files from remote is completely commonplace, which is why we have protocols such as DAV. However, you did mention video files, and the question then becomes whether or not you're expecting that to be included. In general, it is not, because most video files are massive, and most Internet connections - even if they are CLAIMED to be AT&T 5Gbit up/down - are nowhere near that fast in transit from one location to another, and you'd need that speed on both ends of the connection as well. Video editing tends to demand huge amounts of highly responsive I/O, which favors a local NVMe drive for manipulation of data locally. This can be combined with over-the-Internet transfer of the data back and forth to a NAS to implement a full-featured editing solution.

Yeah sorry, I'm interested in the possibility of editing video files remotely; it's not a huge deal for me, but I am curious and interested. So it sounds like you're all telling me that editing video remotely will likely require me to download files from the NAS first, edit them, then upload them back onto the NAS.
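To put rough numbers on the download/edit/upload loop, here's a quick back-of-envelope sketch. The file sizes, link speeds and the 70% efficiency factor are illustrative assumptions, not measurements; substitute your own numbers.

```python
# Back-of-envelope transfer times for pulling footage from the NAS over the
# Internet. File sizes, link speeds and the 70% efficiency factor are
# illustrative assumptions, not measurements; substitute your own numbers.

def transfer_hours(file_gb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Hours to move file_gb gigabytes over a link_mbps connection."""
    usable_mbps = link_mbps * efficiency  # protocol + VPN overhead, real-world slack
    return (file_gb * 8 * 1000) / usable_mbps / 3600

for file_gb in (10, 50, 200):          # a clip, a project, a full shoot
    for link_mbps in (20, 100, 1000):  # common uplinks; 1 Gb/s is optimistic
        print(f"{file_gb:>4} GB over {link_mbps:>5} Mb/s: "
              f"~{transfer_hours(file_gb, link_mbps):.1f} h")
```

Even the best row assumes you actually get the advertised rate on both ends of the connection, which, as noted above, you usually won't.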
> This starts to play more into networking: sure, you could have a fast 1 Gb link from your ISP, but the wifi wherever you are, plus the overhead of a VPN/WireGuard, removes more speed... it may not be ideal, but again, it depends how large these files are.
> You also now maybe have to consider corruption: working on a file from the NAS, you hit save, and at that same time your wifi connection drops and doesn't actually save the last 3 hours of work you just did, and your app crashes on your device because it lost connection to your NAS over your VPN tunnel?

It's sounding more and more like editing videos on a NAS remotely is a pipedream and not very practical in any way... ok, I can definitely live without that; if I need to edit something remotely I'll just download the files I need from my NAS wherever I am.
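For what it's worth, the usual mitigation for the corruption scenario quoted above is on the application side: never save over the original file, but write to a temporary file and atomically rename it into place. A minimal sketch, with a hypothetical project path; the rename is atomic on the server side for typical SMB/NFS setups, so a dropped connection mid-save leaves the previous version intact rather than a half-written file.

```python
# Minimal sketch of the "write to a temp file, then rename into place"
# pattern that avoids exactly this failure mode. The path is hypothetical.
import os
import tempfile

def safe_save(path: str, data: bytes) -> None:
    directory = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=directory)  # temp file on the same share
    try:
        with os.fdopen(fd, "wb") as tmp:
            tmp.write(data)
            tmp.flush()
            os.fsync(tmp.fileno())  # push the bytes out before the swap
        os.replace(tmp_path, path)  # atomic swap into place
    except BaseException:
        os.unlink(tmp_path)  # failed save: original file is untouched
        raise

safe_save("/mnt/share/project/edit.prproj", b"...project data...")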
> I think what I'm describing sounds more like caching... would focusing as many resources as I can on caching (max out RAM) be helpful for being able to edit larger files (4K video, for example) on the NAS?

That would help with reads, including random seeking into the video… once it is loaded in ARC. Writes still have to go to the pool and are limited by the sustained write capability of the pool. A dedicated NVMe pool for editing would help, at least with local edits (remote editing is going to be constrained by the wider network, which will certainly not match 10 GbE at home), but you'll have to move files manually between HDD and NVMe.
> I have another last question. I've read superficially about SLOG, would that be helpful in the scenario where I would edit 4k vids locally on my NAS? I feel like I've seen some people say that SLOG is only for very specific use cases on this forum - but I'm wondering if this would qualify as one of those use cases?

Not at all. SLOG is for SYNC writes (database transactions, VMs). Your workload does not qualify and should actually be done ASYNC to benefit from maximal speed. Sync writes are A LOT slower than async; a SLOG makes sync writes… "sloggish", which is not as bad as SLUGgish but certainly not fast.
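If you want to check this on your own system, the relevant knob is the per-dataset sync property. A small sketch using the standard zfs get/set commands; the dataset name tank/editing is a placeholder.

```python
# Sketch: inspect the ZFS "sync" property the advice above refers to.
# "zfs get" / "zfs set" are standard OpenZFS commands; the dataset name
# "tank/editing" is a placeholder. sync=standard honors application fsync()
# requests; sync=disabled forces everything async (fastest, but up to a few
# seconds of acknowledged writes can be lost on a power cut, so never use
# it for databases or VMs).
import subprocess

DATASET = "tank/editing"  # hypothetical dataset on the NVMe editing pool

def get_sync(dataset: str) -> str:
    """Return the current value of the sync property for a dataset."""
    result = subprocess.run(
        ["zfs", "get", "-H", "-o", "value", "sync", dataset],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

print("current sync mode:", get_sync(DATASET))
# To force async behavior (understand the risk first):
# subprocess.run(["zfs", "set", "sync=disabled", DATASET], check=True)
```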
> So far, after taking in all of the replies I've gotten here, I'm considering getting a mobo that allows for 5 HDDs (storage pool), 2 NVMe SSDs (pool for editing large files) and an SSD for booting (I'm not sure if a SATA SSD is a good option to mount the boot on). Any thoughts or criticisms are welcome.

Any cheap, small SSD is good for boot. If you have 6 SATA ports (i.e. at least one spare after plugging in the 5 HDDs), a SATA SSD is a good option to keep all NVMe slots available.
> A dedicated NVMe pool for editing would help, at least with local edits (...), but you'll have to move files manually between HDD and NVMe.

Upon thinking about it, my need to edit remotely is basically zero - and after what you guys have told me, I reckon if I ever do have the need to do so, I'll just download the files and work on whatever device I have with me. I am, however, thinking very seriously about having an NVMe pool for editing locally like you mentioned.
> Not at all. SLOG is for SYNC writes (database transactions, VMs). (...)

Ahhh ok, thanks for the clarification!
> Any cheap, small SSD is good for boot. (...)

Yeah, that's sort of what I was thinking of doing. I'm looking at a few options that have 6 SATA ports and one PCIe slot with 16 lanes - if I'm not wrong, I should be able to use a ----- to attach two NVMe SSDs to the build? The only issue is that if I use the PCIe slot up like that, I don't think I can add 10 GbE unless the motherboard comes with those ports.
> Yeah, that's sort of what I was thinking of doing. (...)

Correct. With only a half-height slot it's either NVMe adapter or 10 GbE NIC.
> I'm considering these because they all have 10 GbE already onboard (except for the X10SDV-F), 6-8 SATA ports (as far as I can tell), 1 PCIe (x16) slot, and they all max out at 128 GB RAM. Thoughts? I don't know how much CPUs will affect the kind of work I mentioned I would put the NAS through.

For SMB sharing you don't need many cores; higher clocks are better, but I don't know how much punch a Xeon E-2300 or a Ryzen (Pro) brings in actual use over a Xeon D-1500.
> Before choosing a motherboard, especially an ITX one, you need to settle on your needs. Do you want to reserve the single PCIe slot for an HBA? An NVMe adapter? An SFP+ cage?

These boards all have 1*PCIe slot (16 lanes) and 1*M.2 slot with 4 lanes. My understanding is that I would want to use the PCIe slot to connect an NVMe adapter that supports at least two NVMe SSDs at the same time to form an SSD pool. I don't know what I could use the M.2 slot for, if anything?
> This allows 4 M.2 with an x16 slot, if the board can bifurcate x4x4x4x4 (X10SDV can):
> https://www.aliexpress.us/item/3256803691464173.html

Oh interesting, I was thinking just 2 NVMe SSDs, but it seems like I could do 4... would there be any benefits to bifurcating x8x8 over x4x4x4x4?
> with x8x4x4 (C256), only three slots will work, or you'll need an adapter with a PLX switch:
> https://www.aliexpress.us/item/3256801832693327.html

I'm confused about the adapter linked here - it says x8 to 4*M.2... does that mean it wouldn't take advantage of the 16 available lanes?
> There may be cooling issues for the drives in this little case.

Any way I can mitigate this?
> For SMB sharing you don't need many cores; higher clocks are better (...)

What about the Pentium D1508... it has 2 cores; do you think that would be a bad pick?
> Oh interesting, I was thinking just 2 NVMe SSDs, but it seems like I could do 4... would there be any benefits to bifurcating x8x8 over x4x4x4x4?

You'd do x8x8 for two x8 devices, for instance with one of these adapters
> I'm confused about the adapter linked here - it says x8 to 4*M.2... does that mean it wouldn't take advantage of the 16 available lanes?

Indeed, I picked the wrong device. With 16 lanes available (and a board which cannot bifurcate all the way down to x4x4x4x4), the best suited would be this one:
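For a rough sense of what these lane splits mean in bandwidth terms, here's a small sketch; the per-lane figure is the usual PCIe 3.0 approximation of about 0.985 GB/s after encoding overhead.

```python
# Rough per-device bandwidth under the bifurcation layouts discussed above.
# The ~0.985 GB/s per PCIe 3.0 lane figure is the usual approximation after
# encoding overhead; real drives will land below it.
GBPS_PER_LANE = 0.985  # PCIe 3.0, approximate usable throughput per lane

layouts = {
    "x4x4x4x4 (four M.2 drives, needs full bifurcation)": [4, 4, 4, 4],
    "x8x8 (two x8 devices)": [8, 8],
    "x8x4x4 (three usable slots, e.g. the C256 case)": [8, 4, 4],
}

for name, lanes in layouts.items():
    per_dev = ", ".join(f"{n * GBPS_PER_LANE:.1f} GB/s" for n in lanes)
    print(f"{name}: {per_dev} (uses {sum(lanes)} of 16 lanes)")
```

Since an M.2 socket is x4 at most, x8x8 only makes sense for native x8 devices, as said above; a Gen 3 NVMe SSD is already saturated by its four lanes.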
> Any way I can mitigate this?

Not really, because there's not much space for heatspreaders…
> Also, what if I went with the X570D4I-2T model? Where might I find information on whether it can bifurcate x4x4x4x4, if I wanted to do so?

The board's manual?
> What about the Pentium D1508... it has 2 cores; do you think that would be a bad pick?

2.2 GHz base, 2.6 GHz turbo, easily cooled by a (mandatory) 60 mm fan on top. RAM crippled to 1866 MHz, but not bad for 1-2 clients maximum.
> You'd do x8x8 for two x8 devices (...) but the case doesn't have enough space for these.

Ah, what a shame, are they very tall in comparison? That's kind of annoying... because that would've been an ideal setup for what I wanted.
> Not really, because there's not much space for heatspreaders… (...)

Damn, are they really that big? Would it maybe help with heating if - even if I got a PCIe adapter that allows for 4 NVMe SSDs - I only used two NVMe SSDs? (My country gets really hot in the summer, so heat is a genuine concern - that said, I have a NUC with an NVMe SSD that's seemingly doing OK without a heatsink.)
> The board's manual?

haha fair enough.
> 2.2 GHz base, 2.6 GHz turbo (...) not bad for 1-2 clients maximum.

That might not work then, I might have more than 2 clients.
> My biggest question rn is will having those 2-4 NVMe SSDs connected to a PCIe adapter be problematic because of the heat...

I believe the HDDs will be your greatest concern regarding heat.
> 2. 5 HDDs for storage & 1 SSD for booting connected to the SATA ports, then maybe a network card connected to the PCIe socket. [Given that I could get a cheaper 2nd-hand mobo like the X10SDV-F that only has gigabit Ethernet, since I would have an empty PCIe socket now]

I would go for this.
> An in-between option I just thought of could be having a single NVMe SSD pool using the M.2 port that all these boards have and that I didn't really have a use for... would that be a weird idea or bad for some reason?

You don't have parity. In an ITX build you are bound to compromises.
> Ah, what a shame, are they very tall in comparison? (...)

Your case is not a server rackmount, designed to have risers and PCIe cards parallel to the motherboard.
> Damn, are they really that big? (...)

Your case also has space only for a half-height card, so something like an actively cooled Asus Hyper-M.2 is out of the question.
> I feel like I'm at a crossroads now. I was heavily considering getting the X10SDV-2C-TLN2F or the X10SDV-4C-TLN4F - but I'm discarding the former because of what you mentioned about the CPU... and the latter I'm not so sure about anymore....

Total budget allowing (with RAM and possibly CPU), you may go for a Xeon D with 4-6 cores, or a ≤ 65 W Ryzen 3-5 (Pro) on the X570D4I-2T for a bit more CPU power. If 10 GbE is on-board, this keeps the PCIe slot for future options, which may never be needed.
> I believe the HDDs will be your greatest concern regarding heat.

I can't do anything about HDDs heating up with this case, except maybe getting a better fan to replace the stock fan. I was talking specifically about NVMe SSDs having cooling issues, because Etorix mentioned earlier in the thread that that could be an issue.
> I would go for this.

Why would you go for the 5 HDDs storage + 1 SSD boot + network card option?
> You don't have parity. In an ITX build you are bound to compromises.

That sounds acceptable; I don't think parity is the biggest problem after doing some thinking... I would say the biggest problem is that it would probably be more efficient for me to download the video files onto my workstation at that point; I don't think a pool with only one NVMe would be good enough for larger video files.
> Your case is not a server rackmount, designed to have risers and PCIe cards parallel to the motherboard.

Perhaps in the future.
> Your case also has space only for a half-height card, so something like an actively cooled Asus Hyper-M.2 is out of the question.

Ah, I see. I only just noticed that it says that in the description for the Jonsbo N2 (when I first read that description several weeks ago it looked like gibberish to me). I'm definitely thinking about scrapping the whole "NVMe pool" plan for this build and doing that on another build in the future...
That leaves the nice and basic Linksys half-height adapter, with two M.2 on each side. The cards "on the back", facing the CPU, would get some moving air from the CPU cooler. The cards "on the top", facing the case itself, would be in a cooling dead spot. As would a NIC card, actually.
> But SSDs can run hotter than HDDs (...)
> I agree with @Davvo that HDD cooling is the main concern, with a 15 mm thin fan trying to pull air through a backplane. The Jonsbo N1 appears to have a better cooling model than the N2. Please report how it's working for you.

Yeah, I definitely agree, especially now that I've had the opportunity to ask questions here and to think about potential builds... this case is less than ideal in a lot of ways. I'm hoping that it will work well for now, and later as a backup. I will report on it in the future.
> If 10 GbE is on-board, this keeps the PCIe slot for future options, which may never be needed

The X10SDV-F is looking mighty tempting though; it has a better processor than the X10SDV-4C-TLN4F at almost half the price. Do you have any strong arguments against it beyond the fact that I will have to use the PCIe slot? Perhaps what you said earlier, that the NIC would be in a cooling dead spot, combined with all the other factors.
> or a ≤ 65 W Ryzen 3-5 (Pro) on the X570D4I-2T for a bit more CPU power

I will look into this though, because I'm not really familiar with AMD CPUs at all.
> I can't do anything about HDDs heating up with this case, except maybe getting a better fan to replace the stock fan. (...)

Unless you constantly hammer them you will be fine. Use NVMe Gen 3 SSDs, possibly attach passive heatsinks to them.
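If you'd rather keep an eye on them than guess, here's a quick sketch that polls drive temperatures with smartmontools (which ships with TrueNAS); the device paths are examples.

```python
# Sketch: spot-check NVMe drive temperatures with smartmontools (included
# with TrueNAS) instead of guessing. Device paths are examples; adjust to
# your system. The loop just picks the "Temperature:" line out of the
# standard "smartctl -a" report for NVMe devices.
import subprocess

DEVICES = ["/dev/nvme0", "/dev/nvme1"]  # hypothetical editing-pool drives

for dev in DEVICES:
    report = subprocess.run(["smartctl", "-a", dev],
                            capture_output=True, text=True)
    for line in report.stdout.splitlines():
        if line.startswith("Temperature:"):
            print(dev, "->", line.split(":", 1)[1].strip())
```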
> Why would you go for the 5 HDDs storage + 1 SSD boot + network card option?

Cheaper. What you save here can be used in a future system (which you appear to be already planning) with fewer constraints.
> That sounds acceptable; I don't think parity is the biggest problem after doing some thinking... I would say the biggest problem is that it would probably be more efficient for me to download the video files onto my workstation at that point; I don't think a pool with only one NVMe would be good enough for larger video files.

Here I don't understand the second half.
> Unless you constantly hammer them you will be fine. Use NVMe Gen 3 SSDs, possibly attach passive heatsinks to them.

You're both seemingly saying the opposite thing to me. I'll think about it then - I was also wondering if I could attach passive heatsinks to them.
> Cheaper. What you save here can be used in a future system (...)

Fair enough. Definitely considering that option more and more - given, like you said, that I'm already thinking of the next build I will do.
It is also perfectly able to fulfill most of your requirements.
> Here I don't understand the second half.

I meant that after googling around for a bit, most people seemed to say that editing large video files with one NVMe on a NAS was less efficient than simply downloading the files onto my workstation; some people said it might lag. Perhaps they're wrong, but I have no idea, since I've never tried editing 4K video from a NAS on one NVMe drive.
If you are talking about space: unless you edit terabytes of single files, a single SSD can do the work.
If you are talking about reliability: you don't need to store the files on the SSD; you can copy the footage from the HDD pool to the SSD, edit there, copy the edited footage back to the HDD and delete it from the SSD.
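That copy/edit/copy-back loop is easy to script if doing it by hand gets old. A sketch using rsync between the pools' mountpoints, run on the NAS itself; all paths are placeholders.

```python
# Sketch of the copy / edit / copy-back workflow described above, using
# rsync between the two pools' mountpoints on the NAS itself. All paths
# are placeholders; run it over SSH or as a cron job before and after an
# editing session.
import shutil
import subprocess

HDD_PROJECT = "/mnt/tank/footage/project42/"  # bulk HDD pool (placeholder)
SSD_SCRATCH = "/mnt/nvme/scratch/project42/"  # NVMe editing pool (placeholder)

def stage_in() -> None:
    """Copy footage HDD -> SSD before editing."""
    subprocess.run(["rsync", "-a", HDD_PROJECT, SSD_SCRATCH], check=True)

def stage_out() -> None:
    """Copy results SSD -> HDD after editing, then free the scratch space."""
    subprocess.run(["rsync", "-a", SSD_SCRATCH, HDD_PROJECT], check=True)
    shutil.rmtree(SSD_SCRATCH)

if __name__ == "__main__":
    stage_in()  # edit over the SMB share pointed at the SSD, then stage_out()
```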
> But as others said, live editing on the NAS may require tons of ARC (and possibly L2ARC).

Define "tons of ARC"... as I was already planning on going for 128 GB RAM.
> Define "tons of ARC"... as I was already planning on going for 128 GB RAM.

Larger than your working set.
> Larger than your working set.

So I can't know for sure until I'm actively using the NAS? I mean, all the mini-ITX mobos that had more max RAM than that had some other specs I didn't like (m.2(x2) or no PCIe at all).
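One way to pin down "your working set" before buying RAM: sum the size of the files a typical editing session actually touches. A sketch, with a placeholder project path.

```python
# "Larger than your working set" made concrete: add up the files a typical
# editing session actually touches. The path is a placeholder; point it at
# a real project. If the total sits comfortably below your RAM (minus what
# the OS and services keep for themselves), ARC can hold the whole thing
# after the first read.
import os

PROJECT = "/mnt/tank/footage/project42"  # hypothetical project directory

total_bytes = 0
for root, _dirs, files in os.walk(PROJECT):
    for name in files:
        total_bytes += os.path.getsize(os.path.join(root, name))

print(f"working set: {total_bytes / 1024**3:.1f} GiB")
```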