Sanity check on pool layouts and first build for home NAS

pabo

Cadet
Joined
Aug 31, 2023
Messages
5
OK, so here is another one of those "need advice" posts.

I've decided to try and build my first home NAS. It will be used for media storage (probably via NFS), snapshots/backups of my laptop, hosting Home Assistant (replacing my current setup on a really old Raspberry Pi that is a bit weak), and probably hosting Syncthing and/or Nextcloud. I have about 1 TB of data today, but I expect this to increase once I actually have easy access to more storage. Let's say it will be 4 TB in 5 years.

I have a small Fractal Design Node 304 case with a Supermicro A2SDi-8C-HLN4F board (Intel Atom C3758 CPU) installed, which means I have one PCIe x4 expansion slot and one M.2 slot on the board. The case has 6x 3.5" disk bays, but it's possible to mount 2.5" SSDs on the outside of the bays, so it could actually fit 2x SSD + 6x 3.5" HDD. Cooling might factor in here, of course.

I have not yet settled on the vdev layout I want to use for the data pool. I currently have 2x WD Caviar Green at 2 TB each that I thought I'd make use of if sensible. Since 4 TB WD Reds are currently on sale in my area, I thought I might purchase a few of those to create a 4-5 disk raidz2 vdev. That vdev would of course be limited to 2 TB per disk, so 2-3x 2 TB of usable space, which is enough for now, and I can replace the 2 TB drives in the future when my needs change. And if the raidz expansion feature lands at some point, that's another option I guess.

Or I could go with a striped-mirror ("RAID 10") setup with one 4 TB mirror and one 2 TB mirror, and easily add another mirror in the future. But the idea of losing both drives in one vdev doesn't seem that unlikely (especially since the WD Caviar Greens are a bit old), and I don't think I need the IOPS for my use case?
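For my own sanity I did a rough back-of-the-envelope comparison of the usable space in the two layouts. The little helper functions below are just my own illustration and ignore ZFS overhead, padding and the keep-plenty-of-free-space guideline:

Code:
def raidz_usable_tb(disk_sizes_tb, parity):
    # Every disk in a raidz vdev effectively counts as the smallest member,
    # and 'parity' disks' worth of space goes to redundancy.
    return (len(disk_sizes_tb) - parity) * min(disk_sizes_tb)

def striped_mirrors_usable_tb(mirrors_tb):
    # Each mirror vdev contributes the size of its smallest member.
    return sum(min(mirror) for mirror in mirrors_tb)

# Option 1: 5-wide raidz2 mixing the two old 2 TB Greens with new 4 TB Reds
print(raidz_usable_tb([2, 2, 4, 4, 4], parity=2))    # 6 TB (capped at 2 TB/disk)

# Option 2: one 4 TB mirror + one 2 TB mirror ("RAID 10")
print(striped_mirrors_usable_tb([(4, 4), (2, 2)]))   # 6 TB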

I plan on using mirrored Samsung SSDs for the boot pool, and will probably put at least one VM on an NVMe drive in the M.2 slot for running the applications via Portainer (at least that's the current plan, feel free to change my mind). The latter I plan on backing up to the storage pool.

So to summarize the disks I plan on using:

Available disks at the moment:
1x M.2 NVMe SSD (256 GB)
1x Samsung 840 Pro SSD (256 GB)
2x WD Caviar Green (2 TB)

Planned purchases:
1x Samsung 870 EVO (256 GB, to mirror the other Samsung SSD)
2-3x WD Red 4 TB (chosen instead of Seagate IronWolf since they seem to make less noise)

I also plan to do backups to an external USB drive at regular intervals as offline backups.

So, is this a sensible plan or am I out of my mind? :tongue:
 

c77dk

Patron
Joined
Nov 27, 2019
Messages
468
Make sure the WD Reds are the Plus versions to avoid SMR
 


NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Oh, and those Greens have a habit of dying quickly, I understand.
 

pabo

Cadet
Joined
Aug 31, 2023
Messages
5
Oh, and those Greens have a habit of dying quickly, I understand.
I can't tell if you're being sarcastic or not.

Can you tell that I know very little about various hard drive makes and models? :tongue:
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Greens tend to park the heads very aggressively. This is fine in the short term, but not so hot in the long term.
 

pabo

Cadet
Joined
Aug 31, 2023
Messages
5
I decided to skip the WD Greens and ended up buying 6x WD Red Plus 4 TB instead.
This weekend I will complete the assembly and start the burn-in, but due to a couple of bumps in the road I have some questions.

1. I did a sneak preview test of the CPU with CPUStress from the UBCD, and it quickly reached almost 100°C and started to throttle. So I decided to replace the thermal paste with Arctic Silver 5 as well as mount a Noctua fan on the heat sink like in this article. Apparently you can mount the fan in either direction (push (down) / pull (up)). Is one way better than the other, or is it a try-and-see situation?

I would assume that a pull setup would direct the airflow better, since it would pull the hot air up and away from the motherboard and into the path of the HDD and exhaust fans. But maybe the airflow through the fins of the heat sink would not be optimal here?

Conversely, the push setup would probably give better airflow through the fins, but the hot air would go towards the motherboard and then up, possibly re-entering the top of the fan, so now the fan is sucking in hotter air. Am I just overthinking this?

UPDATE:
So I replaced the thermal paste and mounted the Noctua fan on the heat sink in push configuration like in the article above. I also added a couple of cable ties to hold the fan in place rather than just relying on gravity. And wow, did that work out nicely! I reran the prime test from Ultimate Boot CD and the temperature is steady at 56°C. How nice when stuff actually works out!

2. The motherboard has two fan zones (system and CPU), but the CPU zone has 3 available headers whereas the system zone has only one. How are these zones used, and does it matter if I switch them and put the CPU fan on the system zone and vice versa?

UPDATE:
Never mind the above question. I'm pretty sure the zones are used for controlling the fans through IPMI, and it really doesn't matter that much since all headers on my motherboard support PWM. However, I am a bit confused about the details. Other threads and online sources seem to talk about a CPU/system zone (e.g. FAN1-FANn) and a peripheral zone (e.g. FANA-FANx). The manual for my motherboard does mention a FANA header as well as dual cooling zones, but nothing about a peripheral zone, only the system/CPU zone. I'm not entirely sure what to make of this, but probably the other zone is just the single FANA header, no matter what the zone is called.

[Attachment: fans.png]
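For anyone else who is confused: an easy way to check empirically which headers follow which zone is to list what the BMC reports for each FAN sensor, change one zone's duty cycle, and list them again. A rough sketch of the first part (assuming ipmitool's usual pipe-delimited sensor output, run as root):

Code:
import subprocess

# Dump what the BMC reports for each FANx header.
out = subprocess.run(["ipmitool", "sensor"], check=True,
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    fields = [f.strip() for f in line.split("|")]
    if fields and fields[0].upper().startswith("FAN"):
        # e.g. "FAN1 3600.000 RPM" for a populated header, "na" for an empty one
        print(fields[0], fields[1], fields[2])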


3. My PSU only has 5 SATA connectors on the 2 modular cables, and I need 8 for 6x HDD and 2x SSD. I was planning on just adding Y-splitters at first (two splitters each powering a pair of one SSD and one HDD, and a third splitter from a Molex connector on the peripheral PSU cable powering 2 of the HDDs). But I've since read that this is probably not a good idea, the reasons being that the connector rating of 4.5 A might be less than the draw of 2 HDDs at spin-up, and that the SATA male-female connection on these splitters is not reliable.

Looking at the data sheet for the WD Red drive, it seems to have a peak current of 1.75 A. Does this mean that I can get away with using a splitter here, or is the flimsy connection the real issue? If so, I am quite comfortable with a soldering iron, so another option would be to take apart the Y-splitters and splice on the connectors individually instead, using soldered joints and heat-shrink tubing. Would this be preferable?
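Running the numbers for the layout I had in mind, and treating the 4.5 A figure as a per-splitter limit, the peak current itself seems to stay within the rating. The SSD figure below is just my own placeholder, not from a data sheet:

Code:
# 1.75 A peak (12 V spin-up) is from the WD Red data sheet; SATA SSDs draw
# far less and mostly on 5 V, so 0.5 A is a generous placeholder assumption.
HDD_SPINUP_A = 1.75
SSD_PEAK_A = 0.5              # assumed, not from a data sheet
SPLITTER_RATING_A = 4.5       # the connector rating quoted above

legs = {
    "splitter 1 (1x SSD + 1x HDD)": SSD_PEAK_A + HDD_SPINUP_A,
    "splitter 2 (1x SSD + 1x HDD)": SSD_PEAK_A + HDD_SPINUP_A,
    "molex splitter (2x HDD)":      2 * HDD_SPINUP_A,
}

for name, amps in legs.items():
    print(f"{name}: {amps:.2f} A peak, "
          f"{SPLITTER_RATING_A - amps:.2f} A below the 4.5 A rating")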

The modular cables are a lot longer than I need them to be, so I also thought about just adding some extra inline SATA terminals to the cable. It looks like they are just clamped on and basically cut through the cable insulation, so it seems like an easy modification to do, but I haven't been able to find a retailer that sells those terminals.

Any feedback on these issues from those of you with more experience?
 
Last edited:

pabo

Cadet
Joined
Aug 31, 2023
Messages
5
I updated the previous post with some findings from yesterday.

I also started to play around with IPMI, with a lot of help from other threads. I set the thresholds for the CPU fan and tried to control the duty cycle with raw commands. The values for the raw commands seem obscure and mostly a guessing game, so thanks to all of you who have been able to decipher a lot of them. I managed to set the duty cycle to various levels, but as discussed in this thread the fans go out of control when setting it to a higher value, and I had to reset the BMC.

But the exercise made me realize how useful a fan script can be, so I will take a look at the ones provided by other members and see how far they have come in controlling this behavior.
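To get my head around what such a script actually does, I sketched a minimal version. The raw bytes are the ones commonly cited for Supermicro boards in those threads, and the sensor name ("CPU Temp"), zone numbers and temperature thresholds are my own assumptions, so treat this as an illustration rather than something to leave running unattended:

Code:
import subprocess
import time

# Raw bytes below are the ones commonly cited for Supermicro BMCs; they are
# board/firmware dependent, so verify them before use. Most scripts also put
# the BMC into "Full" fan mode first so it does not fight the manual duty
# cycle:  ipmitool raw 0x30 0x45 0x01 0x01
ZONE_CPU = 0x00          # FAN1-FAN4 zone on this board (FANA would be 0x01)
POLL_SECONDS = 30

def ipmi(*args):
    # Run a local ipmitool command (as root) and return its stdout.
    return subprocess.run(["ipmitool", *args], check=True,
                          capture_output=True, text=True).stdout

def cpu_temp_c():
    # Read the BMC's "CPU Temp" sensor; the sensor name is an assumption.
    for line in ipmi("sensor").splitlines():
        fields = [f.strip() for f in line.split("|")]
        if fields and fields[0] == "CPU Temp":
            return float(fields[1])
    raise RuntimeError("CPU Temp sensor not found")

def set_duty(zone, percent):
    # Set a fan zone's duty cycle; clamp the range, since duty cycles that
    # push the fans past their RPM thresholds are what trip the BMC's
    # full-speed fallback.
    percent = max(25, min(100, percent))
    ipmi("raw", "0x30", "0x70", "0x66", "0x01", f"{zone:#04x}", f"{percent:#04x}")

def duty_for(temp_c):
    # Very coarse temperature-to-duty curve; thresholds are placeholders.
    if temp_c < 45:
        return 30
    if temp_c < 60:
        return 50
    return 100

if __name__ == "__main__":
    while True:
        set_duty(ZONE_CPU, duty_for(cpu_temp_c()))
        time.sleep(POLL_SECONDS)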

Today's mission is to chop up my Y-splitters and solder the molded power connectors to the modular harnesses.

I have to say it's a lot of fun tinkering with this stuff, and I'm learning a lot. Initially I wasn't sure whether I wanted to go with the "point-and-click" Synology solution or a DIY project with TrueNAS. But I'm glad I went with TrueNAS, because now I'm actually learning how the system works.
 
Last edited: