DIY Storage Server

softwaremaniac

Dabbler
Joined
Sep 14, 2021
Messages
10
Hello!

I'm running a company and have an abundance of data to back up (both personal and business-related).

I've been relying on cloud storage, but it does not come close to fulfilling my needs.

I want to create cold backups of my current drives and store them, as well as add new material as I gather it. OpenVPN will be configured, as remote access is essential. It will be located in my office space. After internal discussion, we've determined that the optimal amount of storage for now would be around 30TB, 25 of which would be mine and the remaining 5 would be for my colleagues.

As far as the hardware configuration is concerned, this is what we are considering:

Asus Prime H510M
Intel Pentium Gold G6405
16GB RAM
iGPU
Gigabyte 450W 80+ Bronze
Fractal Design Define R7 (we are also considering the smaller Fractal Design Node cases; a server rack is out of the question for now).

As for the drives, I've always been a fan of WD, but I had two (external) drives fail recently. I think I will go with either the Seagate IronWolf Pro or the WD Gold 18TB. I have an old 120 GB SSD which I can use to install TrueNAS on.

The goal for us is to store everything on the drives and have it readily accessible, with the ability to upload it to Nextcloud for external sharing when required.

The network is the problem. I am the only one with a reasonably fast network, 300/300 DIA, so it will be housed at my place. My team members have extremely poor network connections (we live in a rural area of the country). Down the road I would like to go to 10 GbE internally and purchase a 10 Gbit card and a 10 Gbit switch.

I am uncertain about which RAID type to choose. The goal is to keep me from worrying about my data and storage issues in general for quite some time, which means I want to be able to both store new data and keep the old data on there.

In addition to this setup, I have a separate QNAP NAS device with currently a single 10TB WD Red Pro drive in it.

Would it be possible for us to:

a) Have a file server to and from which we can upload/download files
b) Have a Nextcloud plugin, so I can maintain the Nextcloud server myself and self-host it on the same hardware

Additionally:
c) Do I need a dedicated hard drive for Nextcloud to keep it separate from the cold storage/backups?
d) Is TrueNAS optimal for this scenario?
e) Can we have it operate dead silent (as much as possible, I know that it's going to be hard with the spinners in)?
f) Would it make sense to go straight to 10 Gbit for internal transfers?

Thank you,

Mihael
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I am uncertain about which RAID type to choose. The goal is to keep me from worrying about my data and storage issues in general for quite some time, which means I want to be able to both store new data and keep the old data on there.
If you're not planning Virtual machines (it seems not) then RAIDZ is the best choice.

You mention 18TB disks, so that rules out RAIDZ1 from recommendation, meaning RAIDZ2 is the right option unless you want additional data protection at additional cost (RAIDZ3).

This only makes sense with 4 disks or more. If your plan was for fewer disks (due to cost), a mirror would be a consideration, but you'd lose the ability to survive more than one disk failing at a time.
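To put rough numbers on that trade-off with the 18 TB disks you mention, here's a quick sketch (plain arithmetic only; it ignores ZFS overhead and the TB-vs-TiB difference):

```python
# Ballpark usable capacity with 18 TB drives (ignores ZFS metadata overhead
# and the TB-vs-TiB difference, so treat these as rough numbers only).

DRIVE_TB = 18

def raidz2_usable(n_drives: int, drive_tb: float = DRIVE_TB) -> float:
    """RAIDZ2 keeps two drives' worth of parity, so usable space is (n - 2) drives."""
    if n_drives < 4:
        raise ValueError("RAIDZ2 only makes sense with 4 or more drives")
    return (n_drives - 2) * drive_tb

def mirror_usable(n_drives: int, drive_tb: float = DRIVE_TB) -> float:
    """A pool of 2-way mirrors keeps a full copy, so usable space is n/2 drives."""
    if n_drives % 2:
        raise ValueError("2-way mirrors need an even number of drives")
    return (n_drives // 2) * drive_tb

for n in (4, 6):
    print(f"{n} drives: RAIDZ2 ~{raidz2_usable(n)} TB, mirrors ~{mirror_usable(n)} TB")
# 4 drives: RAIDZ2 ~36 TB, mirrors ~36 TB; 6 drives: RAIDZ2 ~72 TB, mirrors ~54 TB
```

At four drives the usable space comes out the same either way, but RAIDZ2 survives any two drives failing, while a pool of 2-way mirrors loses data if both disks of the same pair fail.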

Would it be possible for us to:

a) Have a file server to and from which we can upload/download files
b) Have a Nextcloud plugin, so I can maintain the Nextcloud server myself and self-host it on the same hardware
Yes and yes.

Additionally:
c) Do I need a dedicated hard drive for Nextcloud to keep it separate from the cold storage/backups?
d) Is TrueNAS optimal for this scenario?
e) Can we have it operate dead silent (as much as possible, I know that it's going to be hard with the spinners in)?
f) Would it make sense to go straight to 10 Gbit for internal transfers?
c) No
d) It can certainly do the job... so can a QNAP or Synology and many other NASware distros of Linux... you be the judge of what's better/cheaper (TrueNAS may be better for a number of reasons, but may not be cheaper for the same or similar effective result, and with CORE you're not buying support; you're relying on people like me giving you help for free).
e) Up to your build quality, physical environment and if you want to futz around with fan scripts or let your disks run hot.
f) Transfers from/to what? If there's 10 Gbit in other devices you have, there's some logic there... be aware that 10 Gbit speed will potentially highlight a bunch of other performance bottlenecks, so if you aren't prepared to consider all components and make sure that you're using the right gear and settings in all parts, don't bother.

Gigabyte 450W 80+ Bronze
If you want quiet, consider a little higher and go for platinum.

Asus Prime H510M
You won't be able to control fan speeds with scripts on this board... you'll need to go with Supermicro or ASRock Rack (with IPMI built-in) for that.

Intel Pentium Gold G6405
I don't think this CPU supports ECC RAM. ECC is a recommendation here, so consider it. If you care enough about your data to be using ZFS, you should probably care enough to be using ECC RAM too.

Depending on your "working set" (the data that you use and change on a day to day basis), this might be OK, assuming the working set is around 5-10 GB.

If your workloads are data-heavy (video/photo editing or something like that), consider a lot more RAM and think more seriously about 10 Gbit for yourself... obviously for the remote colleagues, no difference for them either way as the Internet will bottleneck them first.
 
Last edited:

softwaremaniac

Dabbler
Joined
Sep 14, 2021
Messages
10
If you're not planning Virtual machines (it seems not) then RAIDZ is the best choice.

You mention 18TB disks, so that rules out RAIDZ1 from recommendation, meaning RAIDZ2 is the right option unless you want additional data protection at additional cost (RAIDZ3).

This only makes sense with 4 disks or more. If your plan was for fewer disks (due to cost), a mirror would be a consideration, but you'd lose the ability to survive more than one disk failing at a time.

No VMs or anything else. Pure storage server for now. I plan to start with a single 18TB drive and expand with another drive a month or two after. Long term, we want to expand even further (6+ HDDs).


d) It can certainly do the job... so can a QNAP or Synology and many other NASware distros of Linux... you be the judge of what's better/cheaper (TrueNAS may be better for a number of reasons, but may not be cheaper for the same or similar effective result, and with CORE you're not buying support; you're relying on people like me giving you help for free).

One of the main reasons I'm building one myself is the fact that things are not proprietary and I can control every piece of hardware and upgrade it as needed. An additional reason is the file size limitation: every cloud I tried had a hard single-file size limit which is nowhere near enough for me. Nextcloud was usable, and we'd like to continue using it for non-archival (day-to-day) stuff.

e) Up to your build quality, physical environment and if you want to futz around with fan scripts or let your disks run hot.
I don't have experience with TrueNAS, so I'm not sure what you mean by fan scripts. I assume you mean setting or configuring some kind of fan profile to ensure silent operation?


f) Transfers from/to what? If there's 10 Gbit in other devices you have, there's some logic there... be aware that 10 Gbit speed will potentially highlight a bunch of other performance bottlenecks, so if you aren't prepared to consider all components and make sure that you're using the right gear and settings in all parts, don't bother.

From my current hard drives to the archive (server). With 1 Gbit I won't be using the drives' full bandwidth; with 10 Gbit, the transfers will be 200 MB/s+ vs. 115 or so on 1 Gbit. Of course, I would purchase a new switch and the appropriate 10 Gbit cards for my devices (computer, NAS, server).
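Rough math on my side (just an estimate; real throughput will depend on the pool layout and protocol overhead):

```python
# Back-of-the-envelope transfer times (assumptions: ~115 MB/s usable on 1 GbE,
# and ~250 MB/s on 10 GbE where the HDD pool, not the link, is the limit).

def hours_to_copy(terabytes: float, mb_per_s: float) -> float:
    """Hours to move `terabytes` of data at a sustained `mb_per_s` rate."""
    return terabytes * 1_000_000 / mb_per_s / 3600

for label, speed in (("1 GbE (~115 MB/s)", 115), ("10 GbE, pool-limited (~250 MB/s)", 250)):
    print(f"{label}: ~{hours_to_copy(30, speed):.0f} h to move 30 TB")
# Prints roughly 72 h for 1 GbE and 33 h for the pool-limited 10 GbE case.
```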


If you want quiet, consider a little higher and go for platinum.
I am aware of this and fully agree. However, the shop I'm buying from doesn't have anything better unless I spend at least twice as much, and since I expect my load to be about 250 W at full load, I think that the PSU will be most efficient at around 50-60 percent. That's why I chose that one.

You won't be able to control fan speeds with scripts on this board... you'll need to go with Supermicro or ASRock Rack for that.
That wasn't my intention in the first place. I'm not looking to spend more just to have the ability to control the fans.

I don't think this CPU supports ECC RAM. ECC is a recommendation here, so consider it. If you care enough about your data to be using ZFS, you should probably care enough to be using ECC RAM too.
I am aware, see the PSU comment. I'm running into the same issue. Lack of parts. I was initially going to go with G4400 or something with ECC support, but it's out of stock.

Depending on your "working set" (the data that you use and change on a day to day basis), this might be OK, assuming the working set is around 5-10 GB.

If your workloads are data-heavy (video/photo editing or something like that), consider a lot more RAM and think more seriously about 10 Gbit for yourself... obviously for the remote colleagues, no difference for them either way as the Internet will bottleneck them first.
I'm expecting a few dozen terabytes of data to be transferred in the first two to three months. After that, the transfers will be much, much lower.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
I don’t think 10 GbE makes sense in this scenario for anything but a trunk to the switch. The HDD pool is unlikely to give you more than 250 MB/s and you have multiple users. Once you have more vdevs, you could revisit.

How big a pool are you planning for - just the 30 TB plus 20% as recommended, or more?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I plan to start with a single 18TB drive and expand with another drive a month or two after. Long term, we want to expand even further (6+ HDDs).
You'll need to revise that plan.

You can't change a single drive into anything other than a mirror (even when RAIDZ expansion arrives... it needs to start as the RAIDZ type that you want it to be at the end, so the fewest disks you could start with would be 3 for RAIDZ2... and then you're stuck at 3 until RAIDZ expansion arrives in a year or possibly longer... remembering that RAIDZ2 uses 2 disks for parity, so leaves you with 1 disk of usable space).

I don't have experience with TrueNAS, so I'm not sure what you mean by fan scripts. I assume you mean setting or configuring some kind of fan profile to ensure silent operation?
BIOS fan settings are structured around CPU temperature, and cases with lots of drive bays often have a "drive fan" which is either linked to the CPU temperature or fixed in the BIOS at one speed.

There are scripts in this forum (search for PID fan control) which can use the IPMI interface on server motherboards to control fan speeds based on disk temperatures (or with a bit of additional work, whatever other factor you want to use).

Keeping your disks at a reasonable temperature helps to prolong their healthy operation, so you don't want to set fans to minimum (quietest) all the time, as your disk activity (and hence temperature) won't track the CPU's, particularly when running long SMART tests, but also during a scrub or other heavy read/write operation.

You don't want to run your fans at the speeds needed to keep disks cool during a long SMART test all the time (because of the noise), so the script takes care of that and keeps it as quiet as possible while not frying your disks.
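To give a rough idea of what such a script does, here's a heavily stripped-down sketch (this is not the forum's PID script; it assumes a Supermicro-style BMC where the raw IPMI command below sets fan duty, smartctl is installed, and the disk device names are placeholders):

```python
# Minimal sketch of temperature-driven fan control, NOT the forum's PID script.
# Assumptions: a Supermicro-style BMC where the raw command below sets fan duty,
# smartctl available, and placeholder device names -- adapt before trusting it.
import re
import subprocess
import time

DISKS = ["/dev/ada0", "/dev/ada1"]  # placeholders; list your pool disks here
FAN_ZONE = 0x01                     # peripheral fan zone on many Supermicro boards

def disk_temp(dev: str) -> int:
    """Read SMART attribute 194 (drive temperature) via smartctl.
    Note: some drives report attribute 190 (Airflow_Temperature_Cel) instead."""
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    match = re.search(r"^194\s+Temperature_Celsius.*?-\s+(\d+)", out, re.MULTILINE)
    return int(match.group(1)) if match else 0

def set_fan_duty(percent: int) -> None:
    """Set the fan duty cycle (0-100%) for one zone via an IPMI raw command."""
    subprocess.run(["ipmitool", "raw", "0x30", "0x70", "0x66", "0x01",
                    f"0x{FAN_ZONE:02x}", f"0x{percent:02x}"], check=True)

while True:
    hottest = max(disk_temp(d) for d in DISKS)
    # Crude mapping: quiet while drives are cool, ramp up during scrubs/SMART tests.
    if hottest >= 45:
        set_fan_duty(100)
    elif hottest >= 40:
        set_fan_duty(60)
    elif hottest >= 35:
        set_fan_duty(40)
    else:
        set_fan_duty(25)
    time.sleep(60)
```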

It's an option and entirely up to you.

I think that the PSU will be most efficient at around 50-60 percent. That's why I chose that one.
OK, maybe you're right, but usually to avoid having the PSU fan spin up, you need to keep it to 20% or so.

That wasn't my intention in the first place. I'm not looking to spend more just to have the ability to control the fans.
You said "dead silent"... I assumed that eliminating unnecessary fan noise was a priority.

I am aware, see the PSU comment. I'm running into the same issue. Lack of parts. I was initially going to go with G4400 or something with ECC support, but it's out of stock.
OK, the choice is yours. Depending on your risk profile, you may find that another option (i.e. not ZFS) suits you (or the hardware you can get) better.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
I disagree re: power consumption. My rig is good for 80 TB total, and about 40 TB after parity and 80% fill losses, and it consumes about 106 W at the moment. See my sig for the setup.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
If you look at the advice that @jgreco gives in his resource page on PSUs, you’ll end up with a much larger PSU that will run at ~20% of its rated total at the plug under idle conditions. The key thing to understand here is how the PSU delivers its power and what it is designed for. Most PSU OEMs do not publish detailed capacity specs for each power rail, so total watts is not what I would go by.

Ditto for HDD OEMs re: power consumption. Look for farmerpling's resource page on HDD data; he has compiled a really great resource. It was the reason I went for helium-filled drives - less power and heat, and a longer likely life. You can buy 10 TB and larger drives on the used market no problem, some even with a seller warranty.

Low-capacity Platinum or Titanium PSUs may be interesting but were already difficult to acquire before the pandemic. Most standard PSUs are not offered that small and efficient. Even Supermicro's lower limit for efficient server PSUs is 720 W, IIRC.

Some Titanium PSUs are even offered fanless or with a hybrid mode where the fan only comes on if a temperature threshold is exceeded. I like to keep my stuff cool, i.e. HDDs under 30°C, SSDs under 40°C, and the board components under 45°C. Scripts can help, but I have not yet implemented them.
 
Last edited:

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I am aware of this and fully agree. However, the shop I'm buying from doesn't have anything better unless I spend at least twice as much, and since I expect my load to be about 250 W at full load, I think that the PSU will be most efficient at around 50-60 percent. That's why I chose that one.

That's fine for a very small number of drives, but once you get to four or maybe six, you need to start engineering this for startup current, and what you "think" is a good idea often ends up being a terrible idea. Your system may survive a bad choice, but you really do need to plan for startup current once you have a handful of drives. Just as with the GPU community, things work a bit differently than in a regular PC.

Please do refer to

https://www.truenas.com/community/threads/proper-power-supply-sizing-guidance.38811/

which walks through the whole thing in excruciating detail. Unlike basically everywhere else on the Internet, where some dolt wrote a fancy JavaScript app but couldn't be arsed to explain HOW the number is arrived at, I actually show you what the components of the guidance are based on.
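As a rough illustration of why the drive count matters (ballpark numbers only; the per-drive spin-up figures below are assumptions, so use the resource above and your drive's datasheet for real sizing):

```python
# Rough spin-up power estimate (assumptions: ~2 A on 12 V and ~0.8 A on 5 V per
# 7200 rpm drive while the platters spin up, plus ~100 W for the rest of the
# system; real figures vary per model, so use the linked resource for sizing).

def startup_watts(n_drives: int, base_system_w: float = 100.0) -> float:
    per_drive = 2.0 * 12 + 0.8 * 5   # ~28 W per drive during spin-up
    return base_system_w + n_drives * per_drive

for n in (2, 4, 6, 8):
    print(f"{n} drives: ~{startup_watts(n):.0f} W at spin-up")
# 2 drives ~156 W, 4 ~212 W, 6 ~268 W, 8 ~324 W -- and the 12 V rail rating,
# not the PSU's total wattage, is what has to cover most of that.
```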
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
450 W could do it for 6 drives in a Node 304.
Quiet drives in this case might produce an acceptable level of noise. More drives in a Node 804 or a Define R7 are not going to be quiet enough to sit close to a working desk.
6 drives in RAIDZ2 give 4 drives' worth of usable space, so 10 TB drives, or larger, would cover your 30 TB.
This would be a good mini-ITX motherboard for a Node 304, with onboard 10 GbE:

But ZFS requires that you provide these 6 drives from the start. If you can only buy drives one at a time, you should look at solutions other than TrueNAS.
 
Last edited:

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
I disagree. It all comes down to whether the OP wants to put in the time and treasure to make it happen.

Synology's BTRFS doesn't hold a candle to ZFS re: data integrity.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
Would it be possible for us to:

a) Have a file server to and from which we can upload/download files
Yes, but I'd consider starting with a 6-drive RAIDZ2 (Z2) setup using 12 TB drives: good performance and reasonable cost to get to a 30 TB capacity with 8 TB of room to spare (leaving 20% empty to allow top ZFS performance). On top of that, I'd consider your backup strategy - what drives/enclosure for offline backup storage. I standardized around one drive type by using an OyenDigital Mobius 5 in a RAID 5 configuration.
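The arithmetic behind that suggestion, as a rough sketch (it ignores ZFS overhead and the TB-vs-TiB difference):

```python
# Sizing check for a 6 x 12 TB RAIDZ2 pool (ballpark only; ignores ZFS overhead
# and the TB-vs-TiB difference).
drives, size_tb, parity = 6, 12, 2
usable = (drives - parity) * size_tb   # ~48 TB after two parity drives
comfortable = usable * 0.8             # keep ~20% free for ZFS performance
print(usable, round(comfortable, 1))   # 48 38.4 -> the 30 TB target plus ~8 TB spare
```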

b) Have a Nextcloud plugin, so I can maintain the Nextcloud server myself and self-host it on the same hardware

Additionally:
c) Do I need a dedicated hard drive for Nextcloud to keep it separate from the cold storage/backups?
No idea, have no experience with same.

d) Is TrueNAS optimal for this scenario?
Works great for me, but there is a steep learning curve. This is not a Synology with few options and a slick interface. TrueNAS is a Swiss Army knife, but it took me some time to learn even the beginnings of it.

e) Can we have it operate dead silent (as much as possible, I know that it's going to be hard with the spinners in)?
Unlikely, unless you like toasting the drives/CPU. Ditto for Synology, QNAP, etc. None are silent. It's an appliance that has to reject ~100 W of heat continuously. I suggest getting a case with very good airflow and going from there. The Q26 from Lian Li would have been perfect for you, but it's very hard to find these days.

f) Would it make sense to go straight to 10 Gbit for internal transfers?
You can, but with only one vdev the benefit is marginal. I'd use a 10 GbE trunk to the switch and leave it at that for now.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
I disagree. It all comes down to whether the OP wants to put in the time and treasure to make it happen.

Synology's BTRFS doesn't hold a candle to ZFS re: data integrity.

Then we can agree to disagree.
I agree that BTRFS isn't as good as ZFS - but it is good enough.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
Then we can agree to disagree. I agree that BTRFS isn't as good as ZFS - but it is good enough.
… for most people, perhaps… at least until they get bitten by a file system crash on Synology followed by data loss. Their cobbled-together file system works most of the time… until it doesn’t (see the forums). BTRFS has a lot of growing to do before it can "self-heal" and deliver whatever other features Synology promises at some point.

Not that this is an unusual issue. Synology is in pretty good company, as ReadyNAS, QNAP, Apple, etc. simply do not consider file integrity to be a top priority like TrueNAS does. The complexity of standing up a TrueNAS solution is rewarded by excellent, consistent performance that is resilient in the face of hardware failures and other disasters, provided the setup was sound and maintenance is ongoing. The few cases we've run into here where people suffered data loss are ones where disks that SMART had reported as failing for a long time kept being used until ZFS could no longer compensate.

No doubt, RAID 5/6 is sufficient for many folks, especially if they practice consistent backups, etc. But to me, TrueNAS is far more than an airtight storage container. It's an amazingly versatile framework for data storage supported by an awesome company and a loyal forum user base. To me, that is worth far more than the slick interfaces offered by ReadyNAS, QNAP, or Synology (and I have used all three). No doubt, the UI and documentation of TrueNAS continue to have lots of room for improvement, but TrueNAS offers much deeper access into the infrastructure than the other platforms, which behave more like appliances - for better and for worse.
 
Last edited: