LFA - NAS build - a simple NAS or NAS w/ transcoding WS

ohboi

Dabbler
Joined
Mar 23, 2019
Messages
26
For a very long time I have wanted to build a NAS. I wanted a tiny machine (Fractal Node 304, mITX), but I always hit a wall: on the AMD side, for example, I would need a Ryzen, which needs a dGPU, which would take the only PCIe slot, which I then could not use for an HBA, so I would not have enough SATA ports (with similar problems on the Intel side). To be completely honest, I am still not decided on the OS, but FreeNAS is the one I dislike the least: I want to use ZFS, and FreeNAS is the most ready for it out of the box (OMV requires the Proxmox kernel and an additional plugin... meh).

I want to build a NAS with 24TB of usable storage (well, more like 23.16TB usable) striped across 3 vdevs, each one 3 HDDs in RaidZ1 (I really hope I got this right), ZFS all the way, with 32GB of ECC RAM. I will not mention prices on purpose; what I link below is more or less what I would like to go with, and there is still room for maneuvering, but right now it is already at almost $1500 (actually around 1450-1500€, but let's ignore the currency; prices are all over the place when comparing US and EU, mostly in favor of US pricing).
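To make sure I am describing the layout correctly, this is roughly what I have in mind (just a sketch; "tank" and the device names are placeholders, and in FreeNAS I would of course build the pool through the GUI rather than by hand). Each 3-disk RaidZ1 vdev gives roughly 2 disks' worth of usable space, so 3 x 2 x 4TB = 24TB of raw usable capacity:

    zpool create tank \
      raidz1 da0 da1 da2 \
      raidz1 da3 da4 da5 \
      raidz1 da6 da7 da8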

I am playing around with two build ideas so far:

0. Components common for both below:
CASE: Fractal Define R6
PSU: Corsair RM650x (+ one extra power SATA cable from my RM750x in my desktop)
CPU Cooler: SilentiumPC Fera 3 (you have probably never heard of it; it is our little EU secret, better than any CM Hyper 212-series cooler for about $25, I love that little beast)
HDD: 7x WD RED 4TB (WD40EFRX) - I thought about this a lot, but the server will sit in my office/bedroom, so I cannot risk 7200RPM drives (at the same time I hate the decision: here the 6TB WD HC310 costs only about 70% more than the 4TB WD RED, which IMO would be justifiable for the 50% extra capacity and the much better speed and reliability specs).

1. Building a "simple" NAS-only NAS, just a storage, no Plex, no VM. With this one I would minimize costs where possible (in this case on CPU), I was considering this build:

MB: Gigabyte C246-WU4
CPU: Intel Pentium G5400 (I believe a 2C/4T CPU is plenty for a "simple" NAS)
RAM: 32GB, 2x Crucial CT16G4WFD8266 (QVL'd)

A short explanation: 10 SATA ports on a workstation board. I originally looked at some Supermicro boards here, such as the MBD-X11SCL-F, but if I can avoid yet another add-in card or extra component, I will take that option. Since the G5400 has an iGPU, IPMI is not very significant for me, and on top of that the Gigabyte board is significantly cheaper than the Supermicro one.

2. Building a NAS with one VM, which I would use for ffmpeg transcoding. It would probably be some minimal CLI Debian or Ubuntu; I would literally just run ffmpeg on it, mostly re-encoding series ripped from BRs, which I would then probably integrate into Plex (no transcoding there, just centralized delivery; I have not yet dived into Plex or Jellyfin, I prefer open-source solutions, and there will be plenty of time for that later). A rough sketch of the kind of ffmpeg job I mean follows after this build's parts, which are:

MB: ASRock X370 Taichi (yes, a consumer board, but one reason I really like it is that it has 10 SATA3 ports; another is that it is an absolute monster in terms of VRM quality, which is great if I ever decide to go with a more powerful CPU)
CPU: AMD Ryzen 5 1600 AF (it is basically an R5 2600; I would upgrade in the future once the 4000 series hits the market, whether to a 4900X or to a cheap offer on a 3900X)
GPU: literally whatever low-end, completely passively cooled NVIDIA or AMD card; this will merely be a video output
RAM: 32GB, 2x Samsung M393A2K40CB2-CTD (not QVL'd, unfortunately)

This one gets me going the most; I get genuinely excited thinking about it, and I would love to have a dedicated transcoding machine outside my PC. I am not asking that much of the virtualization and media streaming side, I think, but what is your experience? Should I expect problems when running a VM? Any bad experience with bhyve? Is that man page still accurate in saying bhyve does not support more than 16 vCPUs per guest (that could potentially be a real bummer)?
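For context, the kind of job that VM would run is roughly this (just a sketch; the file names, codec and quality settings are whatever I end up settling on): re-encode the video track to x265 and copy every other stream untouched.

    ffmpeg -i input.mkv -map 0 -c copy \
      -c:v libx265 -crf 20 -preset slow \
      output.mkv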


A couple of notes on one or both build ideas:
- I suspect the suggestion for the second build will be to go Xeon, or to turn around and pick up some older platform with a used Xeon. I will be completely honest: I do not want to. Any new Xeon would set me back many hundreds of dollars, and for what? Nothing; they do not deserve people's money (I do wish there were AM4 Ryzen-compatible server motherboards, though; right now that segment is Epyc only)
- the reason I want to buy 7 HDDs is that I already have 2x WD RED 4TB. The plan is to first build 2 vdevs, get the pool up and running, copy all 7.5TB of data onto the NAS, then wipe the 2 old drives and burn-in test them per the guidelines on the Hardware Guide site, and if they are okay, add them together with the 7th new drive as the third vdev, expanding the pool to full capacity (a rough sketch of the burn-in is below this list)
- NICs are a bit of a problem. I plan on a 2.5/5GbE interconnect between my desktop and the NAS; my desktop has no PCIe slots left, so there I will need a USB-to-Ethernet adapter, either the Club3D CAC-1420 (2.5GbE) or the QNAP QNA-UC5G1T (5GbE). For the server itself I could go with a PCIe card, but anything I would consider available (under $100, preferably new, which I do not expect to find at that price for at least the next 3-5 years, and not something from China supposedly pulled from a decommissioned server in who-knows-what condition) is 10GbE for fiber interconnects only, and 2.5/5GbE PCIe NICs seem to be practically nonexistent.
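For reference, the burn-in I have in mind for each drive would go roughly along these lines (a sketch based on the usual forum guides; it is destructive, so only on wiped disks, and the device name is a placeholder):

    smartctl -t long /dev/ada2        # extended SMART self-test first
    badblocks -ws -b 4096 /dev/ada2   # destructive four-pattern write/read test
    smartctl -a /dev/ada2             # re-check SMART attributes afterwards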

And as for the other headaches...

RAM
If I do manage to build the second option, with one VM for transcoding, my testing suggests I will need to assign about 4GB of RAM to that VM, which leaves just 28GB for the NAS. I know about the recommendation of 1GB of RAM per 1TB of storage, but is that per TB of usable space or per TB of raw capacity? If it is the latter, would it be wise to bite my tongue and buy 3x 16GB sticks (or bite even harder, go insane, and jump straight to 64GB)?

I was originally thinking about an NVMe SSD or Optane for caching (SLOG/ZIL and/or L2ARC), but considering the above I am no longer sure the money would not be better spent on more RAM.
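If I skip them now, my understanding is that they can still be bolted onto the pool later, something like this (sketch only; "tank" and the NVMe device names are placeholders):

    zpool add tank log nvd0     # SLOG, only helps synchronous writes (NFS, iSCSI, databases)
    zpool add tank cache nvd1   # L2ARC, generally only worth it once RAM is maxed out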

ECC RAM - compatibility? Any, or only QVL'd?
Here is one thing that makes me think the ECC RAM market is pretty much organized crime. I can find a 16GB Kingston stick around $20 cheaper than any of the modules I can (if I am lucky) find on a motherboard's QVL (I recall the Supermicro C242 boards I was looking at previously had something like one Hynix model on the QVL). The Kingston memory I am referring to is the KTL-TS426D8/16G, but Kingston's site lists it only for some Lenovo servers. Do those sticks have some sort of DRM chip on them to prevent anyone from breaking the organized-crime traditions of this end of the market?

I am also asking because that second build idea is great and all, but the X370 Taichi has just two random 8GB sticks on its QVL, so I wonder how to improve my chances of finding compatible ECC RAM. Another reason is that if I did go to 64GB of RAM (as above), I would probably prefer 2x 32GB sticks (about $20-30 cheaper than 4x 16GB), since two sticks are "easier" compatibility-wise in a dual-channel memory layout.

Drivers (mostly tied to the choice of NICs)
As part of the build, I will need something faster than 1GbE. A 10GbE NIC would work, but finding one with RJ-45 is not a simple task (and a $300+ Intel 10GbE NIC is not a good deal in my opinion), so I am looking at a couple of USB-to-Ethernet 2.5GbE and 5GbE adapters: the Club3D CAC-1420 (2.5GbE) and the QNAP QNA-UC5G1T (5GbE). Neither of them officially has any sort of Linux/FreeBSD driver. How can I find out whether they can work at all? Ask the manufacturers which chipsets the products use? Does the chipset have to appear on FreeBSD's hardware compatibility list?
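If nobody knows offhand, my fallback plan is to just plug one in and see what FreeBSD reports (a rough sketch; there is no guarantee a driver attaches to either adapter):

    usbconfig list      # USB devices with vendor/product IDs, to identify the chipset
    pciconf -lv         # same idea for PCIe NICs
    ifconfig -a         # shows whether a driver actually attached and created an interface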

Interconnection with other devices
As I outlined above, I would like to ask whether the way I want to interconnect the NAS with the rest of my devices can actually be done, or whether I am just going nuts. I want to connect the NAS to:
1. My desktop PC via a direct ETH<>ETH connection (2.5 or 5GbE)
2. The rest of my LAN, for occasional access from a laptop, a phone, or whatever.
Can I do it? For point 1, do I just need to set static IPs in the same subnet? For point 2, if both devices from point 1 are also connected to the rest of the LAN, is there any chance of a conflict?
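My rough idea for point 1, assuming the point-to-point link gets its own small subnet separate from the LAN (interface names and addresses are placeholders; on FreeNAS this would of course be configured under Network > Interfaces rather than from the shell):

    # on the NAS (FreeBSD side)
    ifconfig ix0 inet 10.10.10.1/24
    # on the desktop (Linux side)
    ip addr add 10.10.10.2/24 dev eth1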


Any help will be very much appreciated. Thank you all in advance for stopping by.
 

ohboi

Dabbler
Joined
Mar 23, 2019
Messages
26
Okay, I think I ticked some boxes that probably discourage anyone from adding their take on a brainstorm like this.

I would like to update the original post: I think I am giving up on the VM+NAS plan. The way things are going, my next desktop CPU will most likely be a next-gen part with many, many threads, so why have two machines with high thread counts? I need to man up and leave the transcoding to the desktop.

I am from the EU, so while I know there are some hardware guides here and there on the forum, there are two problems. First, those are US listings (we are talking $40 in shipping, and that is before customs, which easily pushes the price to 1.25x).
Second, my choice of boards with 10 SATA ports has probably not scored many points here (by the way, I found a thread on the forum about that Gigabyte board and some concerns about the extra 2 ports being controlled by an ASMedia ASM1061, which might not support cache-flush commands); what I mostly see linked instead are eBay listings for open-box/refurbished Supermicro boards, and again, a $70 board easily becomes a $140 board by the time it gets "home". The ones with interesting prices are from China, and that was yet another argument I found here: questionable quality and origin, maybe even fakes.

That got me thinking again along my "fewer components, the better" line: I wanted to avoid potential driver problems with the aforementioned 2.5/5GbE USB adapters, so I looked at boards with onboard 10GbE. First I looked at the consumer market (e.g. AM4 boards, which could again be an option), but the prices were quite steep. I will probably buy one of those for my main PC at some point to get that sweet 10GbE NIC, but they are not, IMO, meant for servers, or even "servers".

I returned to Supermicro and tried to find C242/C246 boards with 10GbE and at least 8 SATA ports (I was considering taking a step back and doing just 2 RaidZ1 vdevs of 4 HDDs each), but I found none, which I frankly do not understand, because as far as Intel CPUs go, the 8th and 9th gen brought a major uplift in per-SKU specs. I had more luck with C236, where a few boards popped out:
X11SSH-TF - https://www.supermicro.com/en/products/motherboard/X11SSH-TF
X11SSH-CTF - https://www.supermicro.com/en/products/motherboard/X11SSH-CTF - the same, but with an extra SAS controller, which makes it quite a storage powerhouse (8 SATA ports plus 8 SAS3 ports on the LSI/Broadcom 3008)
X11SSZ-TLN4F - https://www.supermicro.com/en/products/motherboard/X11SSZ-TLN4F - now this one is weird; the only box it ticks is the 10GbE NICs, and I am not sure how to interpret the differences between it and the first board above. One thing I noticed is that it states it "Detects double-bit errors (using ECC memory)", whereas both boards above only mention correction of single-bit errors. Is that extra spec simply missing from the first two by mistake?
One note: fortunately, it does not seem to be a problem to get RAM qualified for any of these. For example, the KSM24ED8/16ME (these boards are on Kingston's supported list for it) is compatible and qualified, and a bit cheaper overall ($88 vs. roughly $100 for the modules I was initially looking at).

Would you recommend one of these boards? The X11SSH-CTF looks very interesting with its SAS controller, and although the price is very, very high, I am much more fine with it given it also has the 10GbE NICs; an add-in card with that Intel chipset alone would set me back about $120 (an eBay offer from China).
 

ohboi

Dabbler
Joined
Mar 23, 2019
Messages
26
And to spam you even further, I would like to ask a couple of extra questions:
- if I end up getting the X11SSH-CTF, should I expect any problems with 8 drives running off the SAS ports and 1 off an on-board SATA port? The SAS ports are driven by the LSI 3008 and the on-board SATA ports by the C236 chipset (the layout I mean is sketched after this list)
- if the answer to the above is "not recommended" and I instead change the plan to 2 RaidZ1 vdevs of 4x 4TB WD RED each, is that still an acceptable level of risk in a potential resilvering scenario?
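To be concrete, the layout I am asking about in the first question would look something like this (sketch only; da* for the LSI-attached disks and ada0 for the chipset SATA port are just how I expect them to enumerate):

    zpool create tank \
      raidz1 da0 da1 da2 \
      raidz1 da3 da4 da5 \
      raidz1 da6 da7 ada0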
 