Building a Video Editor NAS for two machines.

DenisInternet

Dabbler
Joined
Jun 14, 2022
Messages
27
Hey folks,

Long-time lurker, first-time poster! I've been looking for a good NAS solution for a while now, including reaching out to iXsystems directly. They had some great options, but unfortunately they were either way above the power/storage I need, or not quite the right fit. This would be something sitting under or near my desk, not in a storage room / closet. So server loudness/volume is important. I am running a few RAID systems now, and that's OK, just no "jet-engine" fans or PSUs please.

I considered the TrueNAS Mini, but the Atom CPU is a little too low-spec'd for my needs, and only 8 bays is not great either.
I have also looked at options from Synology and QNAP, but neither quite fits the bill. Synology systems are either underpowered or rack-mounted jet engines.
And while QNAP came close, their security record leaves me uneasy.

I have also reached out to ProMax and Jellyfish, and I am chatting with 45Drives tomorrow. The ProMax and Jellyfish software did not impress me, and the prices, especially from Jellyfish, were way too high for the hardware quality. While I understand turnkey solutions cost significantly more, this was too much for me (~27-32k USD).

While I don't expect to pay less than 12-15k for what I need, I've started to consider building something myself instead, as the cost savings could be significant, at the cost of some time spent researching, building, and setting up.


What I do:
I am a Video Editor, Motion Designer, Colorist, and Composer rolled into one. Software I use on a daily basis includes Adobe Premiere, DaVinci Resolve, After Effects, Ableton Live, Houdini, and Redshift. The average project lasts 1-2 months and requires 14-16 TB. If I can keep finished projects in "active" storage (on the working NAS) for 6 months after project completion, that would be ideal. After that, raw footage is deleted, and I only keep the tiny project files and a ProRes 4444 texted and textless export of each project.
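As a rough back-of-envelope (the midpoints below are my own approximation of the figures above), that retention policy works out to roughly this much active storage:

[CODE=python]
# Back-of-envelope estimate of "active" storage implied by the workflow above.
# All inputs are taken from the post; the midpoints are approximations.

TB_PER_PROJECT = 15        # midpoint of the stated 14-16 TB
MONTHS_PER_PROJECT = 1.5   # midpoint of the stated 1-2 months
RETENTION_MONTHS = 6       # keep finished projects online this long

# A project occupies space while being edited, then for the retention window.
months_on_nas = MONTHS_PER_PROJECT + RETENTION_MONTHS

# Average number of projects resident on the NAS at any given time.
concurrent_projects = months_on_nas / MONTHS_PER_PROJECT

active_tb = concurrent_projects * TB_PER_PROJECT
print(f"~{concurrent_projects:.0f} projects resident, ~{active_tb:.0f} TB of active storage")
# -> ~5 projects resident, ~75 TB of active storage (before any ZFS headroom)
[/CODE]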

The Build:
AMD EPYC™ 7302P 16-core/32-thread processor, up to 3.3 GHz -$825
Cooler: Noctua low profile - ~$100 (still looking for the right model)
Mobo: ASRock ROMED8-2T - $700 (includes two 10GbE Base-T ports and IPMI)
RAM: 128/256 GB DDR4-3200 ECC RDIMM (8 x 16/32 GB) - $560 (exact model depends on which motherboard I end up going with)
NVME Bay: ICY DOCK Rugged 4 x 2.5 NVMe U.2/U.3 SSD PCIe 4.0, 5.25" Bay (4 x OCuLink) - $450
OCuLink PCIe card: PCIe x16 to 4x OCuLink card - ~$200
ATX power supply - ~$200
Case: Rosewill 4U Server Chassis Hot Swap - $359
HDDs: Seagate Exos X20 20TB - ~$5330.76
SSDs: Ultrastar 7.68TB x4 - ~$3920 (I already own two)
Switch: Any recommendations? Thinking the NETGEAR ProSAFE (XS716E-100NES) ~$1850

Total Cost: ~$14,494

Now 15k is still a lot of money, but I think this would serve me quite well for 3-5 years, and maybe longer by adding a disk shelf.


What do you kind folks think, is this a reasonable build? Or should I stick to a pre-built system, and if so, any you recommend?


Thank you very much!
 
Winnie

Joined
Oct 22, 2019
Messages
3,587
Are you implying you'll be running the applications on the server itself? It sounds like you'll be running them on your daily-driver computer (Windows?), and then you wish to use the NAS server to offload such large dumps of multimedia and data.

Why the need for such an overpowered CPU on the NAS server? The main bottlenecks to look out for are the network, RAM, and possibly the drives, for the purpose of storing and reading data as fast as possible.
 

DenisInternet

Dabbler
Joined
Jun 14, 2022
Messages
27
Are you implying you'll be running the applications on the server itself? It sounds like you'll be running them on your daily-driver computer (Windows?), and then you wish to use the NAS server to offload such large dumps of multimedia and data.

Why the need for such an overpowered CPU on the NAS server? The main bottlenecks to look out for are the network, RAM, and possibly the drives, for the purpose of storing and reading data as fast as possible.
Hey Winnie, thanks for asking.

The CPU is for two reasons: 1) 128 PCIe lanes (I will be adding more and more U.2 NVMe drives as time passes, 8+ and maybe even 16 at some point), and 2) it would be nice to have the option to host the DaVinci Resolve PostgreSQL database and use it for transcodes when possible, for archival/compression.

I will be working from two systems, a Mac (primary) and a Windows machine (secondary), plus two laptops.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Hello!

Kudos to a well-written 1st post.
I've some feedback on the hardware setup and selection (plus, you've not mentioned anything on the ZFS setup, which ...really plays into the hardware selection!)

I offer some comments, from "top of mind":
AMD EPYC™ 7302P 16-core/32-thread processor, up to 3.3 GHz -$825
The selling point for such a CPU would be to get access to a lot of PCIe lanes. This requires the motherboard to be set up appropriately for such use, for example with PCIe bifurcation, making it suitable for cheap non-switched PCIe adapters for NVMe drives.
If there is a cheaper EPYC model with a lower core count and a higher or similar clock speed, that'll be just as good.
If you are using SMB shares you will be limited by core clock rather than anything else.
I guess you could have a go at running iSCSI.

Cooler: Noctua low profile - ~$100 (still looking for the right model)
There is a model that fits in 4U cases. I've one too. I expect there to be an EPYC-compatible version as well.

Mobo: ASRock ROMED8-2T - $700 (includes two 10GbE Base-T ports and IPMI)
This I really don't like. I don't trust ASRock a single bit, and the experience on the forums in general is not overwhelmingly positive in any way.
I'd go for Supermicro all the way.

Meanwhile, it seems convenient to have 2x 10Gbit onboard, but as a single user you're not getting 20Gbit out of this without really elaborate setups and workflows. You'll be looking at 10Gbit.
If the workflow is basically one main computer connected as fast as possible to the NAS under the same desk, I'd look for 2nd-hand 40GbE networking, which is stupidly cheap - you could pick up parts that cost more or less the same as 10GbE, or 20% of what 25GbE would cost. The reason? ...the natural upgrade path was "cut off" and thus the industry has been dumping them.
IMO: get two 40GbE cards and DAC cables (copper, no fiber, plug and go). Mellanox is a good brand.
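To put those link rates in perspective, a minimal sketch; the ~90% efficiency factor and the footage data rate are rough assumptions on my part, not measured figures:

[CODE=python]
# Rough sanity check of link speed vs. the data rates involved. The footage
# figure is a hypothetical example; only the nominal line rates are hard numbers.

def line_rate_gbytes_per_s(gbit_per_s, efficiency=0.9):
    """Convert a nominal Ethernet line rate to an approximate usable GB/s."""
    return gbit_per_s / 8 * efficiency

footage_mb_per_s = 300  # hypothetical raw 8K stream, MB/s

for gbit in (10, 25, 40):
    usable = line_rate_gbytes_per_s(gbit)
    streams = usable * 1000 / footage_mb_per_s
    print(f"{gbit:>3} GbE ~ {usable:.2f} GB/s usable ~ {streams:.1f} such streams")
# 10 GbE ~ 1.12 GB/s, 40 GbE ~ 4.5 GB/s with the assumed 90% efficiency
[/CODE]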

I'd also look for a motherboard SKU that only has the bells and whistles you could make use of.
Things that come to mind: M.2 slots, and potentially an onboard HBA. Calculate carefully whether the added SKU price equates to what comparable items would cost to just plug into a PCIe slot. Sometimes it makes sense - other times really not.

RAM: 128/256 GB DDR4-3200 ECC RDIMM (8 x 16/32 GB) - $560 (exact model depends on which motherboard I end up going with)
I'd grab 32GB sticks, 4 of them. More is usually merrier, but returns are diminishing. Plus, it is one of the easiest upgrades one could make down the line. IMO, no reason to overshoot from the start.

NVME Bay: ICY DOCK Rugged 4 x 2.5 NVMe U.2/U.3 SSD PCIe 4.0, 5.25" Bay (4 x OCuLink) - $450
I'm no fan of these, but I've not much of an argument as to why. I'm skeptical of the cooling capability and noise.
I'd look for NVMe adapter cards that accept 4x M.2 drives.
This is a "dumb" card for 2x M.2 drives: AOC-SLG3-2M2-O
Here's a "smart" card for 4x M.2 drives: AOC-SHG3-4M2P-O
The "smart" part is that it doesn't require PCIe bifurcation to enable all drives.

OCuLink PCIe card: PCIe x16 to 4x OCuLink card - ~$200
.
ATX power supply - ~$200
RED FLAG
You're running very power-hungry components, and this system is no joke when it comes under load.
I'd be wary of getting anything that doesn't match the specifications of the typical 1 kW rack-mounted PSUs.
I've seen a healthy 650 W PSU detonate when hooked up to a bare motherboard (X8 generation - power-huuuungry!)

Case: Rosewill 4U Server Chassis Hot Swap - $359
Apparently these are popular.

Your suggested route will have you spend more money on a new chassis + ATX PSU than you'd spend on a 2nd-hand Supermicro 4U, and you'll end up with a way worse product in the end.

HDDs: Seagate Exos X20 20TB - ~$5330.76
Are you aware that Seagate obscures its SMART data in ways that make it impossible to decode properly from anything other than a Windows machine with proprietary software? I'd look for some other drives. My favorite at the moment is the Toshiba MG09ACA18TE (18 TB, 512MB).

SSDs: Ultrastar 7.68TB x4 - ~$3920 (I already own two)
What's your use case for these?
Here's where you need to think about matching ZFS to hardware, and not matching hardware to ZFS.

Switch: Any recommendations? Thinking the NETGEAR ProSAFE (XS716E-100NES) ~$1850
Depends on how much speed you really need to the other computers, on their location, and on the existing cabling.
I'd avoid such a 10GbE copper base-T switch as long as possible.

What is your intended use case for the NVMe drives?
What's your plan for the ZFS setup?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,112
This would be something sitting under or near my desk, not in a storage room / closet. So server loudness/volume is important.
Spinning drives make noise. Too many drives are just impossible to ignore. So you have to think carefully about the maximum number of spinners. What's the use case / what are the requirements for the HDD pool and the apparently large SSD pool?

What's the use for 16 cores with just two clients?

NVME Bay: ICY DOCK Rugged 4 x 2.5 NVMe U.2/U.3 SSD PCIe 4.0, 5.25" Bay (4 x OCuLink) - $450
I'm no fan of these, but I've not much of an argument as to why. I'm skeptical of the cooling capability and noise.
I'd look for NVMe adapter cards that accept 4x M.2 drives.
I beg to differ here. With just basic, but intensive, use (single drive, no ZFS) I saw a HUGE difference in performance between an M.2 Kioxia XD5 and a U.2 XD5, and the only explanation I can think of is thermal throttling. The larger U.2 form factor is much better at dissipating heat and allows for more capacity.
Which brings us back to the question about the use case, and pool layout, for four 8 TB SSDs.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
HUGE difference in performance between an M.2 Kioxia XD5 and a U.2 XD5, and the only explanation I can think of is thermal throttling.
very cool.
Did the Kioxia XD5 M.2 have any decent heat spreader? Was it "properly cooled"?
I can definitely see the argument for U.2 being better cooled.
But whether that happens in an ICY DOCK is what I'm concerned about.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
This is key:
Which brings us back to the question about the use case, and pool layout, for four 8 TB SSDs.
- and the layout / use case for all the spinning rust.
Upon reading post #1 again, I'm hesitant to continue speculating on assumptions.

One thing the forum teaches: people's expectations and hopes can sometimes be ...wild.
Which ones are wild and which ones are sound is ...sometimes difficult to decipher.
 

DenisInternet

Dabbler
Joined
Jun 14, 2022
Messages
27
Thanks for the detailed responses, @Dice and @Etorix!

For the ZFS setup: ideally two storage pools, both RAIDZ2. One for NVMe, and one for regular HDDs.

The regular HDDs would store large raw data and projects that are on "standby" but might still need occasional tweaks/exports before being archived and deleted off the NAS. This is mostly for longer-form projects like feature films/documentaries that often need alternate cuts/exports/specs for various distributors (international vs US, TV cut vs theatrical release, DCI vs UHD, etc.).

My use case for the large NVMe pool (preferably a single RAIDZ2 pool with 6 drives) is to store all the cache media generated by the programs I use, as well as raw 8K and 12K footage that I need to color grade and work with directly. I've already purchased two NVMe Ultrastar SSDs, so if I could purchase 2-4 more of these it would be the least expensive option; however, I could also just purchase 6 of the brand-new Kioxia U.2 mixed-use SSDs if recommended.

The high-core-count CPU is what I picked based on similarly spec'd off-the-shelf systems with NVMe storage, such as QNAP's TS-h1290FX. Based on posts and videos from the Level1Techs forums, NASCompares, and the Liftgammagain forums, when you have a large number of NVMe SSDs it's recommended to have a higher core count. I could always go down to 12 cores and stay on the same platform?

ASRock being unreliable is not something I was aware of, but good to know! My choice of that specific motherboard was based on its bifurcation support and high praise from the folks at ServeTheHome; please see this review/article: https://www.servethehome.com/asrock-rack-romed8-2t-review-an-atx-amd-epyc-platform/

That said, I am not married to this motherboard, and I am sure I can find a Supermicro board which supports this CPU and PCIe bifurcation.

NVMe bay: ICY DOCK Rugged - you don't seem to be a big fan, haha. Are you hesitant about the reliability of the hardware, the vendor's track record, or something else? I haven't used this vendor before and am only familiar with their product line because of a video from Level1Techs which seemed promising: https://www.youtube.com/watch?v=-h6aaJdJ-Ts

I am just looking for a decent hot-swap capable NVMe PCIe 4.0 backplane and drive cage. If you know of a case that includes this, that would be awesome.

ATX power supply red flag? I haven't mentioned a specific model, but yes, I was looking in the 850 W range; based on your recommendations I should go with 1200 W? Any recommendations on ATX PSUs or server PSUs that don't sound like jet engines?

For the switch, I would be connecting 2 systems either via a Thunderbolt adapter from Sonnet (a more reliable company, in my experience) or Atto (Atto has a pricey 40 Gb/s adapter); for the PC I could just put in a PCIe card. 10GbE copper was something I just threw on the board and am not married to; I'm very interested to learn about 40 Gb/s as an option. I unfortunately understand painfully little about switches; I just need something powerful enough to connect the NAS to 3-4 computers.

Bummer about the Seagate drives; explains why they have been on sale/discounted, I guess...
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,112
My use case for the large NVMe pool (preferably a single RAIDZ2 pool with 6 drives) is to store all the cache media generated by the programs I use, as well as raw 8K and 12K footage that I need to color grade and work with directly. I've already purchased two NVMe Ultrastar SSDs, so if I could purchase 2-4 more of these it would be the least expensive option; however, I could also just purchase 6 of the brand-new Kioxia U.2 mixed-use SSDs if recommended.
If the goal is absolute performance, a stripe of mirrors is better than a RAIDZ2. 3x (2-way) loses "only" one drive's worth of capacity compared with a 6-wide Z2.
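Rough numbers for six 7.68 TB drives, ignoring RAIDZ padding and metadata overhead (a simplification):

[CODE=python]
# Usable-capacity comparison for six 7.68 TB SSDs, ignoring ZFS
# metadata/padding overhead (a simplification).

drive_tb = 7.68
drives = 6

raidz2_usable = (drives - 2) * drive_tb   # 2 drives' worth of parity
mirror_usable = (drives // 2) * drive_tb  # 3x 2-way mirrors, half the raw space

print(f"6-wide RAIDZ2  : ~{raidz2_usable:.1f} TB usable")
print(f"3x 2-way mirror: ~{mirror_usable:.1f} TB usable")
print(f"difference     : ~{raidz2_usable - mirror_usable:.1f} TB (one drive)")
[/CODE]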

The high-core-count CPU is what I picked based on similarly spec'd off-the-shelf systems with NVMe storage, such as QNAP's TS-h1290FX. Based on posts and videos from the Level1Techs forums, NASCompares, and the Liftgammagain forums, when you have a large number of NVMe SSDs it's recommended to have a higher core count. I could always go down to 12 cores and stay on the same platform?
An NVMe pool points towards EPYC or Xeon Scalable. But I'm sceptical about a high core count for just two clients. How many clients were involved in those posts/tests?
SMB is notoriously single-threaded… and a colleague of yours found it to be a bottleneck.
https://www.servethehome.com/asrock-rack-romed8-2t-review-an-atx-amd-epyc-platform/
That said, I am not married to this motherboard, and I am sure I can find a Supermicro board which supports this CPU and PCIe bifurcation.
Pretty much any EPYC/Xeon server board supports bifurcation, so no problem here.

I am just looking for a decent hot-swap capable NVMe PCIe 4.0 backplane and drive cage. If you know of a case that includes this, that would be awesome.
ServeTheHome has presented a few rack systems with NVMe backplane. Supermicro (or their competitors…) can probably sell you a complete system with 6+ NVMe and 6+ HDD hot-swap bays, ready for a TrueNAS install. But it may come with typical server fans and server noise…
If noise is a major concern, the best bet may be a big tower case, with large, slow, quiet fans blowing toward the drives. No hot swap and fewer drive bays in a bigger volume, but possibly quieter. At the very least, the noise floor would be defined by the number of spinning drives rather than by cooling.

ATX power supply red flag? I haven't mentioned a specific model, but yes, I was looking in the 850 W range; based on your recommendations I should go with 1200 W?
In ATX form, I would go for a high-end Seasonic model. No idea about quiet server PSUs.

For the switch, I would be connecting 2 systems either via a Thunderbolt adapter from Sonnet (a more reliable company, in my experience) or Atto (Atto has a pricey 40 Gb/s adapter); for the PC I could just put in a PCIe card. 10GbE copper was something I just threw on the board and am not married to; I'm very interested to learn about 40 Gb/s as an option. I unfortunately understand painfully little about switches; I just need something powerful enough to connect the NAS to 3-4 computers.
The constraint is macOS support for a NIC, in a Thunderbolt enclosure or as an AIC in a Mac Pro. Once you have settled on 10/25/40/100 GbE, look for an appropriate small switch (MikroTik maybe?).

Bummer about the Seagate drives; explains why they have been on sale/discounted, I guess...
Their custom encoding of some SMART parameters is no deal-breaker! The most important parameters are plainly readable.
With only three vendors to choose from (WD, Seagate, Toshiba), one cannot afford to be too picky. (And WD spared no effort to look like the bad guy of the bunch…) If Exos have the best price per TB, go for Exos.
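Price per TB is the only math that matters here; a trivial sketch, with placeholder prices rather than actual quotes:

[CODE=python]
# Simple $/TB comparison helper. The prices below are placeholders, not quotes;
# plug in whatever the drives actually cost at the time of purchase.

candidates = {
    "Seagate Exos X20 20TB": (20, 330.0),  # (capacity in TB, hypothetical USD price)
    "Toshiba MG09 18TB":     (18, 310.0),
}

# Print candidates sorted from cheapest to most expensive per TB.
for name, (tb, price) in sorted(candidates.items(), key=lambda kv: kv[1][1] / kv[1][0]):
    print(f"{name:24s} ${price / tb:6.2f}/TB")
[/CODE]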
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Hello,

I overall agree with @Etorix in #9.

Reading between the lines of your design and use case, @DenisInternet, I'm picking up a few things.

1) You are really concerned with noise.
No matter what, 20 spinning HDDs and even remotely appropriate cooling in a 24-bay setup will cause noise, and a fair bit more of it than I think you are looking forward to.

Having such a beast become accepted in a general office area requires the area to be really "sound polluted" already, and it would probably not be sane to do office work in.

I think you are underestimating the noise this thing will make, even with really silent fans and cooked drives.

2) Initially I interpreted this as you looking for a NAS connected to a single user/workstation.
Now you've specified that you'd need it connected to 3-4 computers. This changes things a bit. That would take the 40GbE option off the table, since 40G switches that are not <really jet engines> and not measly 1U rack units are not out there to my knowledge.

Is it still one editing machine that needs all the performance, or is this sort of "one editing rig" for your staff of editors? ("editors" used loosely here).

3) ICY DOCK. I've really no other product suggestion than to either get a proper backplane/setup from a verified "server vendor", or stay out of the desktop hot-swap game. I think the ICY Docks give the best visual impression that I've seen. But I don't give two cents for the cooling capacity. 2x 40 mm? 50 mm? Tiny little fans are supposed to remove 20 W of drive heat at idle, let alone at full tilt, where that'll be 48 W. They <MIGHT> be enough to stave off the complete combustion of the drives, but that's me being generous.
I would any day of the week aim for PCIe adapters, some heatsinks on the M.2 sticks, and some decent airflow in the case.

4) PSU: yes, at least 1.2 kW.

5) Networking: the way this is unfolding, I'm leaning towards a 10GbE networking solution for multiple computers, rather than 1-to-1 40GbE.

6) The ASRock rant: I agree the motherboard looks an absolute treat! (But that doesn't take away the taint on ASRock as a brand in these contexts.)


The more I think about the setup you're putting forward, the more I worry that you're not going to get quite what you hope for. Either in terms of price/performance (you could get enough performance for far less money, which is pretty much the ZFS memo), or because the noise constraint will either leave your hardware unjustifiably overheating or leave you unable to bear the noise it'll cause.

I've a hard time putting words to it; somehow I feel the direction of the project is a bit "off", i.e. it doesn't quite make sense to me. Here are a few examples of the contradictions that strike me: part of the components are super high-end, big-dollar items; other parts are the wonkiest DIY enthusiast parts possible. There is the expectation to have a 20+ bay high-performance NAS sitting under your desk, as if it would be only a little bit louder than a 2-bay toaster NAS, meanwhile dumping $2000 SSDs into a RAIDZ2.
I think you're in a territory where you'd like to spend money to get the best enterprise gear, but don't accept the necessary trade-offs, and at the same time keep looking at DIY stuff.
I picked up on the DIY route, and that's where this is going.

Starting from scratch, with your use case in mind, I would myself have tried a slightly different route.

- Build 2 boxes, not one.
One for spinning rust (HDDs) that will basically only work as backup/offloading.
One for the NVMe drives, which can thrive and coexist with humans in the "office space" without embarking on a cooking-show side hustle.
Add a UPS to your budget.

RustBox:
Anything like a 2nd-hand Supermicro X9/X10/X11 Xeon E3 build would be fantastic. Chuck in some 32 GB of RAM and you're fine. This could be picked up for $100-200. There is really no need for more horsepower.
Put this in whatever case holds enough drives, and try to place the box as far away as possible (where it doesn't cook).
Add an HBA for the drives and a 10GbE NIC; I happen to have had success with the Mellanox ConnectX-3.


NVMeBOX:
You could use any "ATX gaming case" with some good airflow.
I believe "enterprise grade" SSDs are really overkill and a waste of money in this build. Keep what's already bought; let's find a way to work those into the build in a way that makes sense.

ZFS thrives on masses of cheaper hardware, rather than a few pieces of excellent quality.
I'd go for a few PCIe boards as mentioned earlier.

Then for drives, I'd look for the best price at approximately the 4 TB size.

In terms of RAID configuration, the overhead "intensifies" with each level of RAIDZ. Mirrors, as pointed out, are the purest from a performance perspective and give heaps of benefits when working with block storage - in case you'd use iSCSI, this is somewhat important. iSCSI would only make sense if there is one "editor" on the working set.

My pragmatic setup would be mirrors of 4 TB drives. You can easily expand the pool by adding more mirrors, and also make use of your fancy 7.68 TB drives. I'd shoot for enough space that you'd never go past 75% utilization. Roughly calculated, if you want ~15 TB of space for your projects, the calculation is along the lines of: (15 TB needed / 0.75 max fill rate) / 4 TB usable per mirror pair = 5 mirror pairs = 10 drives total = 3 add-in cards used (see the sketch below). I'd chuck an old 4 TB spinning-rust drive in there as a hot spare, in case something fails. It's only there temporarily until you've replaced the failed drive with a proper one; then it returns to hot-spare status.
Adding your 7.68 TB drives and subtracting 2x 4 TB drives would be perfectly fine, of course.
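Here is that sizing arithmetic as a minimal sketch, assuming 2-way mirrors where usable space per pair equals one drive's capacity, and ignoring ZFS overhead:

[CODE=python]
# Sketch of the sizing arithmetic above: 2-way mirrors only, usable space per
# pair taken as one drive's capacity, ZFS overhead ignored.
import math

def mirror_pairs_needed(project_tb, drive_tb, max_fill=0.75):
    """How many 2-way mirror vdevs are needed to stay under max_fill utilization."""
    pool_tb_needed = project_tb / max_fill
    return math.ceil(pool_tb_needed / drive_tb)

drive_tb = 4
pairs = mirror_pairs_needed(project_tb=15, drive_tb=drive_tb)
usable = pairs * drive_tb
print(f"{pairs} mirror pairs = {pairs * 2} drives, "
      f"{usable} TB usable, {usable * 0.75:.0f} TB at 75% fill")
# -> 5 mirror pairs = 10 drives, 20 TB usable, 15 TB at 75% fill
[/CODE]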

If you feel mirrors are an absolute waste, I'd look at a RAIDZ1 setup, with the same drives and the same idea of a hot spare from some random rust.
RAIDZ2 is off the table in any way, shape or form, the way I see this use case.

Networking: while this route with SFP+ requires a bit more "careful shopping", it is rewarding.
As @Etorix pointed out, a 10GbE SFP+ MikroTik is a simple, cheap, good option.
For 8 ports: CRS309-1G-8S+IN ($270)
For 24 ports: CRS326-24S+2Q+RM ($499)
The 24-port version also has 2x 40GbE, which would be cool to hook up to the NVMe box.
These do require fiber, or DACs, or modules that convert to 10GbE Base-T in worst-case scenarios.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Then for drives, I'd look for the best price at approximately the 4 TB size.
I slept on my reply and would like to add:
Actually, there is no benefit to running stupid amounts of smaller drives when dealing with NVMe.
The upside is that you'll get away with less capable drives if you have more of them. That's what ZFS does best.
I selected that size sort of as a minimum, to keep the number of drives somewhat reasonable.
I fully expect there to be a hefty per-TB premium for the larger models.
In case there isn't, or the curve is "pretty linear", then I'd go for drives larger than 4 TB, for mere convenience.
Nothing else changes.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,112
I think @Dice nailed an important point in post #10: the build list is a mix of server and consumer parts.
And the suggestion to separate it into two distinct servers is a very good one. It will be much easier to find two cases dedicated to 2.5" NVMe and 3.5" HDDs than one single case with the right mix of bays.

How many HDDs by the way? I read "Exos X20" as the drive generation, not the quantity. If it's twenty spinning drives, I wouldn't want the build anywhere near my desk!

But I'd keep the enterprise-grade SSDs, for higher capacity and better specifications. Consumer QLC drives would not make a reliable pool.
 

DenisInternet

Dabbler
Joined
Jun 14, 2022
Messages
27
Hey folks, thank you both for your input, this has been super helpful!
Crazy week, so just getting back to this.

I think you're both right that I should focus on two separate boxes. For now, the more important/pressing one is the NVMe box; the HDD storage I am less worried about for now.

Pre-built options:

1)

There is this unit from QNAP: https://www.qnap.com/en-us/product/ts-h1290fx
Maybe I just purchase this, remove the QNAP DOM, and install TrueNAS? It would alleviate my security concerns and allow for future expandability with solid dual 25GbE connections. I would just need a switch and one of these Thunderbolt adapters below, and I am good to go.


2)
Supermicro has a lot of options; of course, the issue is noise. The pricing seems decent considering the hardware, but I doubt this is a good option unless I have a separate server room.

DIY
Get a normal or "gamer" PC tower case, and just stack it with NVMe drives without hot-swap bays.
I suppose I could go either Xeon or EPYC here. Is Intel the preferred option for TrueNAS? Is this because of software or clock speed?
A lot of NVMe solutions seem to use EPYC; is that just for the PCIe lanes, or is it a heat/energy-efficiency thing, or something else?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,112
There is this unit from QNAP: https://www.qnap.com/en-us/product/ts-h1290fx
Maybe I just purchase this, remove the QNAP DOM, and install TrueNAS?
Quite possibly, though you may want to check about changing the boot drive and OS.
Ouch! There's quite a Thunderbolt/Apple-user tax.
Supermicro has a lot of options; of course, the issue is noise. The pricing seems decent considering the hardware, but I doubt this is a good option unless I have a separate server room.
I agree that the desktop-oriented QNAP unit, with its two 92 mm fans, seems a better candidate with respect to noise than a 1U box and its row of 5k rpm fans.
The HDD unit would best go in a separate room (lots of HDDs make noise, there's no way around that), so Supermicro may have its revenge there?
DIY
Get a normal or "gamer" PC tower case, and just stack it with NVMe drives without hot-swap bays.
I suppose I could go either Xeon or EPYC here. Is Intel the preferred option for TrueNAS? Is this because of software or clock speed?
It's because server users tend to be conservative (at least when it comes to hardware…), and ZFS users, to be paranoid. So the vast experience of millions of user-years logged on Intel-based hardware matters a lot. But so far there are no bad reports from EPYC users (hi @Arwen!), and the community needs more users to dare venturing with AMD.
A lot of NVMe solutions seem to use EPYC; is that just for the PCIe lanes, or is it a heat/energy-efficiency thing, or something else?
Obviously the lane count. Plus PCIe 4.0 years before Intel had something to offer, and a higher core count if the server does more than just storage (dual/quad Xeon can consolidate to single/dual EPYC). Efficiency is not that different between Xeon and EPYC, at least compared with ARM.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
1)
There is this unit from QNAP: https://www.qnap.com/en-us/product/ts-h1290fx
Maybe I just purchase this, remove the QNAP DOM, and install TrueNAS?
I don't expect this to be a flawless experience. It might work just fine.
But it also might be a nightmare. Remember, the hardware used needs to be fully supported by FreeBSD 13, in all its quirks and glory.
The reason we on the forum constantly recommend Supermicro is that it is a solid foundation for a solid user experience.

2)
Supermicro has a lot of options; of course, the issue is noise. The pricing seems decent considering the hardware, but I doubt this is a good option unless I have a separate server room.
Solid reasoning.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,906
Not sure if this is in a professional context. If so, I would get an experienced storage consultant into the game. The time saved will probably be worth the fee.
 

beaker7

Cadet
Joined
Feb 4, 2023
Messages
2
Professional editor here.

Do yourself a favor and get the TS-h1290FX and fill it with as many Micron 9400s as you can afford. It's dead quiet and faster than you'll ever need. If you're paranoid about security, keep it off the network and direct-attach your computers to the built-in 25GbE or an installed 100GbE card.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,906
@beaker7, what does your workload look like?
 

beaker7

Cadet
Joined
Feb 4, 2023
Messages
2
@beaker7, what does your workload look like?

Freelance videographer doing high-end corporate and commercial work. Commonly this will be 8K .r3d footage or ARRIRAW, plus VFX. Primary working storage is a TS-h1290FX with 5-6 workstations pulling from it. It's not cheap once you fill it with drives, but nowhere else can you get the combination of speed, small-ish form factor, capacity, and silence.
 