New to NAS, Just built a system

HoneyDewStu

Cadet
Joined
Jul 5, 2023
Messages
2
Hello,

Very new to setting up a NAS. I've built computers before and have moved on to trying to data hoard. I have a second PC that I've stuffed with HDDs. I put an AIO on the CPU and overclocked it; temps are pretty good, and though the overclock may have been unnecessary, I'm taking advantage of the AIO. I'm considering putting a GPU in there, but I don't know how I would utilize it, since the iGPU is sufficient. I'm also considering removing the 2x4GB sticks of RAM, but I'm not sure whether that will improve or hurt overall performance (16GB vs 24GB of RAM).

Case: Enthoo Pro Full w/ Tempered Glass
Motherboard: Asrock Z77 Extreme4
CPU Cooler: Corsair H150i AIO
CPU: i5-3570K (overclocked to 4.0GHz)
RAM: 2x8GB Corsair DDR3 1600MHz and 2x4GB Corsair DDR3 1600MHz
Hard drives: 1x1TB Western Digital Blue, 1x2TB Seagate FireCuda, 1x2TB Hitachi, 1x5TB Seagate Expansion, 2x8TB Seagate Barracuda
Solid State Drive: 1x500GB Samsung 860 EVO (boot drive)
PSU: RM750x

Not sure whether to start with TrueNAS Scale or Core. I hear that Scale is newer and has less support but is improving, while Core is older and has a lot of support.

If anyone could provide some advice on where to start, I'd appreciate it. I just put it all together today and am working on a boot drive in the meantime.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
First off, TrueNAS works better with server components. However, it is possible to use leftover hardware, with varying degrees of success.

But the overall theme is reliability. Overclocking anything, CPU, memory, etc., is actually and actively discouraged. This is because a NAS needs to run day after day, month after month, year after year, storing your data without any errors. Heat, whether produced by overclocking or by a lack of proper cooling, reduces reliability.

In fact, some people use lower power components just to avoid some of those heat related problems.

Next, ZFS prefers similarly sized disks. Having a mish-mash of sizes without a clear reason does not make a good ZFS pool. That said, lots of home and small business users have multiple ZFS pools with different sizes or types of storage (HDD versus SSD). They then use those different ZFS pools for specific purposes, like larger hard disk drives for bulk storage, and SSDs or NVMe drives for fast, frequently needed data.

Many of us here in the TrueNAS forums are conservative in our NAS builds, because we love our data and really want it to survive as long as possible.


As for where you can go from here, we have lots of "Resources" and sticky threads with good information. The Resources link can be found at the top of any forum page, along with a link to the Documentation.
 

HoneyDewStu

Cadet
Joined
Jul 5, 2023
Messages
2
Hey,

Thanks for the reply. I put this thing together with spare parts hoping to get some use out of them, and I'll put it back to stock since it doesn't make sense to overclock.

So the mix of different sized drives is not a good idea? I guess I need to learn more about pooling drives.

When searching in Resources, I'm not even sure where to start with search keywords. I'm about to install TrueNAS Scale on my boot drive. I'm not well read on topics like local networking and ZFS. What would be a recommended route for learning?

Sorry for the basic questions.
Thanks,
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I don't keep a list of reading material for new users handy (some others in this forum do). Sorry.

If you use ZFS with RAID-Zx, a vDev (a group of disks making up a virtual device) takes the size of the smallest disk. Any additional space is temporarily wasted. Meaning, if you make a ZFS pool out of:

1x1TB Western Digital Blue
1x2TB Seagate FireCuda
1x2TB Hitachi
1x5TB Seagate Expansion
2x8TB Seagate Barracuda

you end up using only 1TB of each of the disks. However, if you drop the 1TB disk, then you use 2TB of each disk. But that still leaves 3TB wasted on the 5TB disk, and 6TB wasted on each of the 8TB disks.
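
Here is a rough sketch of that capacity arithmetic in Python, just to make the smallest-disk rule concrete. It is only an illustration; real RAID-Z usable space also depends on the parity level, padding and metadata overhead.

# Rough illustration of the RAID-Z smallest-disk rule:
# every member of a RAID-Z vDev contributes only as much space
# as the smallest disk in that vDev (parity and overhead ignored).

def per_disk_usage(disk_sizes_tb):
    smallest = min(disk_sizes_tb)
    used = [smallest] * len(disk_sizes_tb)
    wasted = [size - smallest for size in disk_sizes_tb]
    return used, wasted

all_six = [1, 2, 2, 5, 8, 8]      # including the 1TB WD Blue
without_1tb = [2, 2, 5, 8, 8]     # the 1TB disk dropped

for disks in (all_six, without_1tb):
    used, wasted = per_disk_usage(disks)
    print(disks, "-> used per disk:", used, "wasted per disk:", wasted)

With all six disks, each one contributes only 1TB; drop the 1TB disk and each contributes 2TB, which is where the 3TB and 6TB of waste above come from.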

So, if you don't plan on adding other disks, then a Mirror pool makes a bit of sense:

Mirror vDev 1:
1x2TB Seagate Firecuda
1x2TB Hitachi

Mirror vDev 2:
2x8TB Seagate Barracuda

A ZFS pool can have lots of vDevs. So the above gets you about 10TB of usable space from 20TB of raw disk (50% goes to Mirror redundancy). The other two disks don't really fit into any reasonable pool / vDev scheme.

One note: loss of a vDev (all of its redundancy) means loss of the pool, meaning all data in that pool. So, if you lose a Mirror component, replace it before the other Mirror component fails. Or have good backups. But, in the above configuration, you can lose one disk from each Mirror vDev and still have 100% access to your data. ZFS redundancy is per vDev.
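
A tiny sketch of that survival rule, assuming plain two-way mirrors (the disk labels below are just illustrative names for the drives above): a mirror vDev stays alive while at least one member is healthy, and the pool stays alive only while every vDev does.

# Simplified survival rule for a pool of mirror vDevs:
# a mirror vDev survives while at least one member disk survives,
# and the pool survives only while every vDev survives.

def pool_survives(vdevs, failed_disks):
    return all(
        any(disk not in failed_disks for disk in vdev)
        for vdev in vdevs
    )

vdevs = [
    ["2TB FireCuda", "2TB Hitachi"],        # Mirror vDev 1
    ["8TB Barracuda A", "8TB Barracuda B"], # Mirror vDev 2
]

# One disk lost from each mirror: every vDev still has a member, pool is fine.
print(pool_survives(vdevs, {"2TB Hitachi", "8TB Barracuda A"}))  # True

# Both members of one mirror lost: that vDev is gone, and so is the whole pool.
print(pool_survives(vdevs, {"2TB FireCuda", "2TB Hitachi"}))     # False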


Planning a ZFS pool can take time, and is very specific to a person's use case, budget and data loss tolerance.
 