Newbie Build

FlyingHacker

Dabbler
Joined
Jun 27, 2022
Messages
39
I am scoping out hardware for a TrueNAS Core build designed to support heavy video file use. (Projects on the order of a few hundred GB)

I was looking for opinions on a setup like this. I am not married to anything here and am open to suggestions.



AMD Ryzen 5 5600 Vermeer 3.5GHz 6-Core AM4 Boxed Processor - Wraith Stealth Cooler Included


Asrock x570 Phantom Gaming 4

https://www.asrock.com/MB/AMD/X570 Phantom Gaming 4/index.asp#Specification

This board supports ECC RAM; I would run 64 GB of ECC memory from the QVL.


A pair of Samsung EVO M.2 drives - probably 256-512GB each


8 Toshiba N300 8TB NAS drives using the 6 Gb/s SATA ports on the mobo. Would want RAID 5 or 6.

~800W power supply (80 Plus Gold or better)

Case will be some 4U rackmount.

Will add a 10GigE ethernet card (Chelsio)

Should I be looking at something else for a ZIL?

Is this type of consumer grade mobo/cpu a terrible choice? Any other suggestions?

Thanks in advance.
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
Do you have the parts already or building a shopping list?
Are you planning/wanting to work on the files directly on the NAS?
  • If so, plan for a working pool of SSD/NVMe drives for working files and a separate pool of HDDs for bulk storage.
  • Suggest using iSCSI for access.
What is the end game for drives? 8? 16? 24? etc.

Please read the terminology primer, as you will draw ire for incorrect terms; e.g., it's RAIDZ1, not RAID 5.
 

FlyingHacker

Dabbler
Joined
Jun 27, 2022
Messages
39
Thanks for the reply.

Building a shopping list, with immediate need.

The eBay link looks decent. I wonder if there is a way to get a 10GigE card in there? I definitely need that. I assume you would configure the HBA as JBOD and use TrueNAS for the actual RAID.

Files remain on the NAS, some files get automagically cached to local M.2, but they should be assumed to be on the server.

Interesting thought to keep those two separate in terms of working and bulk storage. So there is no magical caching on the NAS to handle this? It is best left as an exercise for the user?

Is iSCSI faster than mounting SMB volumes?

OK. RAIDZ1. RAID capable of losing at least one drive (possibly two).

Thanks!
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Is the H310 that much worse than the HBA330?
Yeah, but it can be lobotomized to something that would work: https://fohdeesha.com/docs/H310.html

Gen 13 is a decent upgrade over Gen 12, so if it works for you price-wise go for it. Up to four bays should support U.2 NVMe disks, if you wire them up to a PCIe x16 slot.
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
Agree the 730xd is a better choice; it will need a new HBA (Dell HBA330 Mini Mono).

This would be the correct choice:
 

FlyingHacker

Dabbler
Joined
Jun 27, 2022
Messages
39
Thanks for the feedback, guys. Most of this data center HW is foreign to me. So I appreciate all the input I can get.

That 730xd comes with a "PERC H730 1GB Controller with Battery." Would I still want the HBA330 Mini Mono instead? Again, I assume the preferred method is to disable the hardware RAID and let TrueNAS handle that.

Thanks again.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Would I still want the HBA330 Mini Mono instead?
100% yes.

Would this require additional hardware, or is this built in? Sorry, I have never seen inside one of these, and there is a lot to digest.
The backplane supports it. Some servers may ship with a suitable adapter and cables, others might not. The official Dell kit is a bit expensive, but generic equivalents should work.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Should I be looking at something else for a ZIL?
The proper terminology here is "SLOG", and this device is useless if there are no sync writes.
But even with a SLOG and sync writes enabled, writes are still an order of magnitude slower than with async writes. So set sync=disabled and don't bother with a SLOG.
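If you do want to control it explicitly, sync is just a dataset property; a minimal sketch from the shell (the tank/video dataset name is a placeholder, and the same setting is exposed per dataset in the TrueNAS GUI):

    # Check the current setting; the default is "standard"
    zfs get sync tank/video

    # Valid values are standard, always and disabled
    zfs set sync=disabled tank/video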

Is this type of consumer grade mobo/cpu a terrible choice? Any other suggestions?
It is a poorer choice compared with actual server motherboards: no BMC, and not necessarily engineered for 24/7 stability.
If you stick with Ryzen, look at the X470D4U, X570D4U and B550D4U boards from ASRock Rack; some variants even come with 10 GbE on board (but Base-T, not SFP+).
 

FlyingHacker

Dabbler
Joined
Jun 27, 2022
Messages
39
OK. Thank you all for your input. I will update when I get everything going, but I ordered that Dell PowerEdge R730xd (2x E5-2666 v3 @ 2.9 GHz, 256 GB RAM) and the Art of Server HBA330 (linked above). I also ordered 12 SAS HGST drives (used, but I will run full tests on them), and I only plan on using 10 of them.

We will then see if we need any of the NVMe drives or not. With 256 GB of RAM, that will likely provide plenty of read cache. And based on some of the Lawrence Systems videos on YouTube, it would appear that with our load of primarily larger files from a few users, a SLOG might not help much anyway.

Now I have to figure out how many drives to use in my pool(s). Do I put all 10 in one vdev, or two 5-drive vdevs? I am aiming for RAIDZ2. Some of the benchmarks I have seen seem to indicate the two 5-drive vdevs may actually be faster. But first I have to get everything, test the drives for about 6 days, and get TrueNAS set up. So this is a ways off!
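To make the comparison concrete, the two RAIDZ2 layouts I am weighing would look roughly like this (the pool name tank and the da0..da9 device names are placeholders; in TrueNAS this would normally be built through the GUI, but the vdev structure is the same):

    # Option A: one 10-wide RAIDZ2 vdev (8 drives of usable capacity)
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9

    # Option B: two 5-wide RAIDZ2 vdevs (6 drives of usable capacity, but more IOPS)
    zpool create tank raidz2 da0 da1 da2 da3 da4 raidz2 da5 da6 da7 da8 da9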
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
Not much point in "hot" spares for RAIDZ2; it would make more sense to go RAIDZ3 in that case. However, if you are doing 2x RAIDZ3 of 6 drives each, you may as well do 5x mirror pairs with 2 hot spares for significantly more performance.
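Roughly, that mirror layout would be built like this (pool and device names are placeholders; you would normally create it in the TrueNAS GUI, but the structure is the same):

    # 5 striped mirror pairs plus 2 hot spares (12 drives total, 5 drives of usable capacity)
    zpool create tank \
        mirror da0 da1 \
        mirror da2 da3 \
        mirror da4 da5 \
        mirror da6 da7 \
        mirror da8 da9 \
        spare da10 da11

Mirrors also resilver much faster than a wide RAIDZ vdev when a disk is replaced.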

Your system also has 2x 2.5" drives in the back for OS/Boot.
2x of these would work well:

There will be a large learning curve on Enterprise gear; plenty of folks here can help you out with that. You will want to do a few things first before even thinking of working with TN. This is a short list (not complete):
  • Get access to iDRAC (remote access management); it comes with an Enterprise license, so you will be able to use remote KVM and virtual media.
  • Update ALL firmware using SUU (System Update Utility) or the like (Lifecycle Controller online updates, etc.).
  • Run the built-in Dell diagnostics, both quick and thorough.
Welcome to a fun adventure.
 

FlyingHacker

Dabbler
Joined
Jun 27, 2022
Messages
39
Not much point in "hot" spares for RAIDZ2; it would make more sense to go RAIDZ3 in that case. However, if you are doing 2x RAIDZ3 of 6 drives each, you may as well do 5x mirror pairs with 2 hot spares for significantly more performance.

Ah, good point. This is likely the ideal setup.

Your system also has 2x 2.5" drives in the back for OS/Boot.
2x of these would work well:

Yep, that was the plan. Those would be for boot. A ZFS mirrored pair would be ideal, assuming TrueNAS lets you do that on install like the normal FreeBSD installer does?

There will be a large learning curve on Enterprise gear; plenty of folks here can help you out with that. You will want to do a few things first before even thinking of working with TN. This is a short list (not complete):
  • Get access to iDRAC (remote access management); it comes with an Enterprise license, so you will be able to use remote KVM and virtual media.
  • Update ALL firmware using SUU (System Update Utility) or the like (Lifecycle Controller online updates, etc.).
  • Run the built-in Dell diagnostics, both quick and thorough.

All on the list, and generally what I do with any hardware, new or used. Get all those updates out of the way before the machine becomes important. I have experience with some enterprise gear in the past, but everything is different.


Likely the old NAS will serve as a backup with some sort of rsync or similar. Due to the nature of the work, this type of system has served us well for backups for a long time (it had provisions to protect against ransomware encryption as well). But this needs further evaluation. ZFS snapshots would be ideal in the future.

Welcome to a fun adventure.
I'll likely use the badblocks concurrent script shown in this video to test all 12 drives at once.
https://www.youtube.com/watch?v=9bh5ZK8z4ZA
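In essence it just launches a destructive badblocks pass against every disk in its own tmux session, roughly like this (not the exact script from the video; the da0..da11 device names are placeholders for whatever the HBA enumerates):

    #!/bin/sh
    # WARNING: -w is a destructive write test and wipes every drive listed.
    # -b 4096 keeps the block count within badblocks' limits on large drives.
    for disk in da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11; do
        tmux new-session -d -s "bb-${disk}" \
            "badblocks -b 4096 -ws -o /root/badblocks-${disk}.log /dev/${disk}"
    done
    # Check on a drive later with: tmux attach -t bb-da0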

I've been using BSD since about 1995, and Linux almost as long. So none of that is an issue, and it is generally preferred (by me) over Windows for anything server related.

Thanks for all the info. Always more to learn.
 

FlyingHacker

Dabbler
Joined
Jun 27, 2022
Messages
39
You said keep asking questions... What is the best way to test used SSDs (for the boot drives)? I assume multiple full write and read passes (like badblocks) are bad due to adding additional wear... Or is that still the way to go?

Thanks.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Is this type of consumer grade mobo/cpu a terrible choice? Any other suggestions?
My first build was consumer and, honestly, it ran fine for a couple of years (I even had a year of uptime once), but it frustrated me every time something happened (usually a bad USB stick boot drive, back in the days when FreeNAS still used those) and I had to scavenge a keyboard/monitor combo from another machine. The best part of going server-grade, IMO, is IPMI: the ability to run headless and use virtually any remote machine on the network as a thin client terminal. The ability to shut down, power cycle, and watch your machine boot from hundreds of miles away from home through a VPN is priceless, in my humble personal opinion.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
You said keep asking questions... What is the best way to test used SSDs (for the boot drives)? I assume multiple full write and read passes (like badblocks) are bad due to adding additional wear... Or is that still the way to go?

Thanks.
Badblocks-esque testing is significantly less useful for SSDs because of the additional layers of abstraction that mean you can't effectively test every bit. What is the best option for testing an SSD? I'm not sure I have a good answer for you. Spares on hand is what I'd do.
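About the most you can do cheaply is sanity-check the SMART data and run the drive's own self-test before trusting it; a minimal sketch, assuming smartctl from smartmontools and a placeholder device name:

    # Health, power-on hours and wear/media attributes
    smartctl -a /dev/ada0

    # Run the extended self-test, then read the results once it finishes
    smartctl -t long /dev/ada0
    smartctl -l selftest /dev/ada0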
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Yeah. A dry run period before the system becomes critical would also be advisable.
 