Suggestions for a DIY build from a laptop

CookieMonster

Dabbler
Joined
May 26, 2022
Messages
34
Hi,
I am new to the NAS world, and I would like to build my first solution. I've been reading a lot, but I am drowning in the sea of information, and I was wondering if any of you folks could give me a sense of direction for my situation, so that I can start making progress on my build instead of sifting through all the available info for a year or more without anything to show for it.

I would like to have a home server with the following features:
  • NAS/file server
  • Media-server
  • Docker support
  • VM support would also be nice to have.

I am a broke student, so I am on a budget. I have already ordered two WD Red Pro 18 TB drives to start with. They should ship tomorrow and arrive next week.

I have a 2019 Dell G3 3590 laptop, and I was wondering if I could convert it into a decent NAS/home server.
This model is known for extremely poor build quality: the plastic case cracks, the display hinges commonly break after about 7 months of use, and the battery is poor. So it isn't much use as a laptop anymore, but its specs seem far more capable than the anemic-CPU, no-GPU boxes you get from, for example, Synology for a crazy amount of money.
The reasons I am considering it are the following:
  1. I could save money (as I mentioned, I am on a budget)
  2. It's a laptop, so energy cost savings should be great
  3. Built-in screen, should I have issues remoting into it
  4. It's got quite capable hardware:

    9th Generation Intel Core i5-9300H
    8 GB DDR4-2666 RAM (non-ECC)
    --> Not soldered, so I can add more if needed
    GTX 1660 Ti Max-Q --> Should allow a crap ton of "easy" 1080p->720p simultaneous transcode streams, and probably around 3-4 heavy-duty 4K->720p streams according to the Plex GPU comparison chart (it doesn't list my GPU, but it has the non-Ti desktop 1660, and the mobile 1660 Ti Max-Q is about 25% slower than a desktop non-Ti 1660 according to these benchmarks). 3-4 streams in the worst-case 4K transcoding scenario would be fine for me.
    512 GB NVMe SSD --> Can use for caching?
    HM370 chipset - supposed to support up to 4 SATA ports
As far as ports go, it has:

Externally:
  • 1x 1Gb Ethernet
  • One USB 3.1 Gen 2 Type-C port with DisplayPort
  • One USB 3.1 Gen 1 port
  • Two USB 2.0 ports
  • One HDMI 2.0 port
  • One SD-card slot
Internally:

  • One M.2 slot for an NVMe SSD (PCIe 3.0 x4, up to 32 Gbps)
  • One M.2 slot for Wi-Fi and Bluetooth combo card (Key E, but not sure how many lanes)
  • One SATA port (AHCI, up to 6 Gbps)


The main ports of interest are the NVMe M.2 slot (due to its massive bandwidth) and the SATA port.

Questions:
  1. Can I somehow connect multiple SATA drives to this one SATA port? Is there some kind of splitter?
  2. Also, here it says :
    SATA Operation - Configures operating mode of the integrated SATA hard drive controller.
    Default: RAID. SATA is configured to support RAID (Intel Rapid Restore Technology).

    Does it mean I can somehow connect a bunch of HDDs to that SATA port and use them in a RAID array?
  3. I was wondering if there are suitable controllers/adapters that could go into the M.2 slot and provide several regular full-sized PCIe slots in exchange?

I was thinking that maybe I could have some kind of external case with removable drive bays like in Synology and then connect the drives inside it either to:
  • M.2 (4 lanes for a total of a crazy 32 Gbps - if I find a way/adapter/controller to connect to it) or
  • SATA (6 Gbps should be enough for an array of platter drives) or
  • USB-C (10 Gbps should be enough for an array of platter drives) or
  • SS-USB (5 Gbps - probably okay if pulling data only from 2-3 volumes simultaneously)
I am a total noob, so I am not even sure what these adapters/controllers/disk bays would be called--if they even exist. I was wondering if you could help with that.


Some final questions:
  1. Do you think this is a good idea for someone on a budget?
  2. Will my data be safe without ECC?
  3. The built-in 1 Gbps Ethernet seems insufficient (even platter drives peak at a higher rate than that). What solution would you recommend to add 2.5, 5, or 10 Gbps Ethernet?
  4. Would FreeNAS be a good choice for me, or would you recommend considering other solutions for my use case?
    (Important note: I would like to stay open source and away from proprietary stuff.)
  5. Could you please recommend the parts necessary to make this happen?
  6. How would power management happen for the drives considering that they would probably need to be powered from an external PSU?
If there are parts/controllers available to make this happen, I could keep the bottom panel removed and position the laptop so that the potential M.2 -> PCIe (or whatever) controllers/adapters can stick out of it, and enclose the whole construction in some kind of box with fans.


I would appreciate your help and the opportunity to learn from your experience, folks.
Thank you.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I have a 2019 Dell G3 3590 laptop, and I was wondering if I could convert it into a decent NAS/home server.
I would stop right there and say you're not likely to have a lot of fun with TrueNAS and ZFS.

If you have no other choice of hardware, go for Linux-based NAS software that doesn't use ZFS.

OpenMediaVault, UnRAID, Xpenology, or something like those.

Just to handle each of your questions in turn:

Can I somehow connect multiple SATA drives to this one SATA port? Is there some kind of splitter?

Read this: https://www.truenas.com/community/r...t-multipliers-and-cheap-sata-controllers.177/

Does it mean I can somehow connect a bunch of HDDs to that SATA port and use them in a RAID array?
No

I was wondering if there are suitable controllers/adapters that could go into the M.2 slot and provide several regular full-sized PCIe slots in exchange?
There are adapters that convert M.2 to OCuLink or something similar and can potentially run 4 SATA ports from it, but again, your overall build is bad, so don't bother with that.

Do you think this is a good idea for someone on a budget?
No
Will my data be safe without ECC?
Possibly, but I wouldn't even get as far as using this build at all, so no.

The built-in 1 Gbps Ethernet seems insufficient (even platter drives peak at a higher rate than that). What solution would you recommend to add 2.5, 5, or 10 Gbps Ethernet?
USB adapters are available, but don't waste your money; you'll probably not be able to get much more than 1 Gbit out of one anyway.

Would FreeNAS be a good choice for me, or would you recommend considering other solutions for my use case?
(Important note: I would like to stay open source and away from proprietary stuff.)
Not at all. As already said, look for some of the others. OpenMediaVault and Xpenology are both Open Source (albeit with Synology behind Xpenology). UnRAID is not free, AFAIK.

  1. Could you please recommend the parts necessary to make this happen?
  2. How would power management happen for the drives considering that they would probably need to be powered from an external PSU?
Just don't; you're going to end up losing your data, or having a very unhappy time even if you don't.

Please do consider having another shot at it when you can source the right hardware (cheap server equipment is often available on eBay and the like; have a look at the hardware selection guide in the resources section for hints on what to look for... Supermicro, Intel and LSI HBAs are on the great-to-have list).
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Completely agree with what @sretalla says here. Just wanted to point out that, while it might seem otherwise, this isn't a matter of hardware snobbery, but rather a matter of hardware that's known to be safe and reliable. We use TrueNAS and ZFS because we care about our data, and we assume others considering TrueNAS care about their data too. We therefore tend to be pretty conservative when it comes to hardware recommendations. We understand budget limitations; we all have them. But all the sympathy in the world isn't going to make unsafe hardware safe.
Xpenology are both Open Source
I know you know this, but for OP's benefit, XPenology is an open-source bootloader that enables you to run Synology's closed-source (AFAIK) DSM management software on hardware (or a VM) of your choice.

@CookieMonster, as noted above, inexpensive used server hardware is available on eBay. I like the HPE Microserver line for a compact server, and the Gen8 is a pretty nice little machine for this purpose. Bays for four spinners, remote management (always nice for a server), internal USB for a boot device (though an SSD is preferred; you can use an inexpensive adapter for this), and a PCIe slot for a faster network card should it be needed. They're available on eBay in reasonably suitable configurations for around US$300-350.
 

CookieMonster

Dabbler
Joined
May 26, 2022
Messages
34
Thank you for your help, @sretalla and @danb35 .
I decided to follow your advice and build a proper system from scratch.
I read on multiple websites that people advise against using rack-mountable servers at home because they are super loud. Is that true for all of them? Should I go with a regular ATX case if exiling my server to the basement where it cannot be heard is not an option (I live in an apartment)?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Stick with a tower server. Rack servers are noisy; they need to be in order to get enough airflow through.
Towers can be quiet.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Not all rack chassis are noisy, but most are. They're generally designed to put the greatest amount of gear into the smallest space possible, and to keep that from turning into a very expensive EZ-Bake oven, they need lots of cooling. That means airflow, and that means noise. Don't even think about replacing the fans in your Dell/HP/Supermicro rack chassis with Noctuas.

And don't think you can solve the problem by putting it into a closed closet or something similar; that just transforms the closet into an even-more-expensive EZ-Bake oven. But I'm going to repeat my suggestion for the HPE Microservers.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
In addition to what has already been said:

I have already ordered two WD Red Pro 18 TB drives to start with.
Two drives can only serve as a mirror, so that's not very efficient, space- and money-wise.
Even with a third drive, raidz1 is definitely NOT advisable with such large drives. Actually, even 2-way mirrors become questionable at this size—and 3-way mirrors are wallet killers.

The best safe option, money-wise, is to look for whatever capacity comes cheapest per terabyte and make a raidz2 (or raidz3) out of that. But since ZFS does not allow geometry changes with raidz, you need all drives (at least four) right from the start.
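
To make that concrete, here is a rough command-line sketch of building the pool once, with all drives present (the pool name "tank" and the da0-da3 device names are placeholders of mine; on TrueNAS you would normally do this through the web UI):

    # Hypothetical example: create a 4-wide raidz2 pool in one step.
    zpool create tank raidz2 da0 da1 da2 da3

    # The geometry of this raidz2 vdev is then fixed: you can later replace
    # its drives with bigger ones, but you cannot add a fifth disk to it
    # (raidz expansion was still in development at the time of this thread).
    zpool status tank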

8 GB DDR4-2666 RAM (non-ECC) --> Not soldered, so I can add more if needed
That's the minimum to boot. More is better.

512 GB NVMe SSD --> Can use for caching?
Do not even think about L2ARC unless you have 64 GB RAM or more.
Never think of SLOG as a "write cache", and do not consider it unless you have an absolute business requirement for sync writes on a workload.

One M.2 slot for Wi-Fi and Bluetooth combo card (Key E, but not sure how many lanes)
There's not much support for Wi-Fi in TrueNAS… It's really meant for wired.
  • USB-C (10 Gbps should be enough for an array of platter drives) or
  • SS-USB (5 Gbps - probably okay if pulling data only from 2-3 volumes simultaneously)
USB-to-SATA adapters are OKish for boot drives where no better solution is available, but not reliable enough to serve as members of a ZFS pool holding real, valuable data.

Some final questions:
  1. Do you think this is a good idea for someone on a budget?
  2. Will my data be safe without ECC?
  3. The built-in 1 Gbps Ethernet seems insufficient (even platter drives peak at a higher rate than that). What solution would you recommend to add 2.5, 5, or 10 Gbps Ethernet?
  4. Would FreeNAS be a good choice for me, or would you recommend considering other solutions for my use case?
    (Important note: I would like to stay open source and away from proprietary stuff.)
  5. Could you please recommend the parts necessary to make this happen?
  6. How would power management happen for the drives considering that they would probably need to be powered from an external PSU?
1. No.
2. In a proper hardware setting, data should be reasonably safe without ECC. Running off a laptop, with some spaghetti of external power cables, SATA port multipliers or USB adapters, data would NOT be safe even with ECC.
3. Would the client PC(s) actually consume data at a rate faster than 1 Gbps anyway?
4. If cost is a major concern, ZFS may not be the best option. Under-specced hardware with ZFS is a liability… and I guess that setting up a second and a third NAS to serve as backups is not an option.
5. Any desktop style hardware… Preferably server-grade, but at least an old tower PC, holding all parts in a single case, with no exotic attachment solution. Second-hand and/or outdated hardware is fine, as long as it is still reliable and not worn out.
6. See 5.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Another option, if you're looking for something compact, is some of the QNAP hardware--some of it runs Intel processors, and has room for enough RAM to decently run TrueNAS. I haven't used it myself, but some of the folks here have.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Very good indeed, but they've gotten quite expensive. A Gen8 isn't as compact as a Gen10+, but a suitable one can be had for around US$300--used, of course. The only caveat to any of them is that they don't support hot-swap.
 

CookieMonster

Dabbler
Joined
May 26, 2022
Messages
34
In addition to what has already been said:


Two drives can only serve as a mirror, so that's not very efficient, space- and money-wise.
Even with a third drive, raidz1 is definitely NOT advisable with such large drives. Actually, even 2-way mirrors become questionable at this size—and 3-way mirrors are wallet killers.

The best safe option, money-wise, is to look for whatever capacity comes cheapest per terabyte and make a raidz2 (or raidz3) out of that. But since ZFS does not allow geometry changes with raidz, you need all drives (at least four) right from the start.

Thank you for the info.
I was under the impression that I could start with 2 drives in a mirror and then add more drives and have ZFS re-stripe itself across the new configuration. That's not the case?

Alternatively, I was thinking about having one mirror for the most important data, because I read that mirrors are more resilient to drive failures compared to 2-data+1-parity or 3-data+1-parity setups.
And then, when I need more space, I would add a less redundant array (like the above-mentioned 2-data+1-parity or 3-data+1-parity) to host less critical data. Is having arrays with different redundancy configurations not possible in TrueNAS?

I am trying to buy the hardware on sales to save money and stay within budget.

Why is raidz1 not advisable?
Is it that more drives ("at least four") are better because of the speed gain?

Again, thank you so much for the info! I am googling a lot, but it feels like I need a degree just to get started, lol.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I was under the impression that I could start with 2 drives in a mirror and then add more drives and have ZFS re-stripe itself across the new configuration. That's not the case?
For the most part, no, that isn't the case. You can't, for example, convert a two-disk mirror into a three-disk RAIDZ1 vdev. You could add another mirrored pair, but you can't change the type of vdev. Neither can you convert a three-disk RAIDZ1 into a four-disk RAIDZ1, though they're working on that.
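As a rough illustration of the difference (a sketch with placeholder pool and disk names of my own, not taken from the post above; the TrueNAS UI wraps the same operations):

    # Starting point: a pool consisting of one two-disk mirror vdev.
    zpool create tank mirror da0 da1

    # Supported: add a second mirror vdev; the pool then stripes data
    # across both vdevs.
    zpool add tank mirror da2 da3

    # Not supported: there is no command that turns the existing mirror
    # vdev into a raidz1/raidz2 vdev; that requires backup, destroy,
    # re-create and restore.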
Is having arrays of different redundancy configuration not possible in TrueNAS?
It is possible, though the UI will try to prevent you from putting those arrays in the same pool (it's possible to do so, but almost always a bad idea).
Why is raidz1 not advisable?
Manufacturers' URE specs would suggest that when one disk has failed, the chance of an unrecoverable read error on the remaining disks is very high. It's highly unlikely this would be a pool-destroying event, but it could result in data loss. I think the concern is somewhat overblown, but RAIDZ2 is still likely a better option.
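To put rough numbers on that (my own back-of-the-envelope illustration, not from the post above): resilvering after a failure means reading every surviving disk in full, and each full 18 TB drive read is about 18 × 10^12 bytes × 8 ≈ 1.4 × 10^14 bits. Against a consumer-class spec of one URE per 10^14 bits read, that works out to roughly 1.4 expected errors per resilver; against the one-per-10^15 spec that NAS-class drives such as the WD Red Pro typically advertise, it drops to about 0.14.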
Is it that more drives ("at least four") are better because of the speed gain?
He's saying at least four disks because that's the minimum for a RAIDZ2 vdev. Though more disks (within reason) in that vdev will increase your available capacity, and thus your efficiency.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
And then when I need more space, I would add less redundant array (like the abovementioned 2Data+1checksum or 3data+1checsum) to host less critical data. Is having arrays of different redundancy configuration not possible in TrueNAS?
It's possible to have as many different pools as you want in a single system, each with its own geometry.
It's always possible to add vdevs to a pool (preferably with the same geometry as the previous vdevs).
It is even possible to remove data vdevs from a pool, but only if the pool is made entirely of mirrors (no raidz#); so mirrors are very flexible (but not very space-efficient).
It is possible to add (or remove) drives to change the width of a mirror vdev. Again, no such flexibility with raidz#.
It is always possible to replace drives with larger drives; when all drives in a vdev are replaced, the available space will grow accordingly.
Other than this, the only way to reconfigure a pool with raidz# vdevs is "backup/destroy/create/restore".
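
A condensed command-line view of the operations listed above (a sketch of mine with placeholder pool/device names; the TrueNAS UI wraps all of these):

    # Add another vdev to an existing pool (always possible):
    zpool add tank mirror da4 da5

    # Widen a mirror vdev from 2-way to 3-way, or shrink it back
    # (only mirrors have this flexibility):
    zpool attach tank da0 da6
    zpool detach tank da6

    # Remove an entire data vdev (only in pools made purely of mirrors):
    zpool remove tank mirror-1

    # Replace a drive with a larger one; once every drive in the vdev is
    # replaced (and autoexpand is on), the extra space becomes available:
    zpool set autoexpand=on tank
    zpool replace tank da1 da7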

So whatever you decide, it is best to get it right from the start.

Why is raidz1 not advisable?
The reasoning and maths is here:
Upon a URE, a degraded raidz would report that "file X is damaged" rather than fault the entire pool, but the underlying issue is the same. And with >10 TB drives, the concern may even extend to 2-way mirrors.

If you only want to ensure data integrity and guard against "bit rot", raidz1 is adequate.
If you further want some protection against hardware failure, then be warned that raidz1 may no longer be "sufficient" protection against the loss of one drive—which ought to be the basic premise of raid5/raidz1. It's up to you to decide how paranoid you want to be and what is an "acceptable risk".
If you have a backup and/or can accept losing some data upon the loss of a drive, then raidz1 may be adequate. Else, if you want your main NAS to sustain the loss of one drive without data loss and without resorting to a backup to restore, then the minimum setting is a raidz2 (minimum 4 drives) or a 3-way mirror. (This does not protect from major damage to the NAS—fire, flooding, other disaster—so some backup is still recommended, preferably offline and/or off-site.)
 

CookieMonster

Dabbler
Joined
May 26, 2022
Messages
34
For the most part, no, that isn't the case. You can't, for example, convert a two-disk mirror into a three-disk RAIDZ1 vdev. You could add another mirrored pair, but you can't change the type of vdev. Neither can you convert a three-disk RAIDZ1 into a four-disk RAIDZ1, though they're working on that.

So, all VDEVs are supposed to be identical ideally?
Is it better to have two 4-disk VDEVs or one 8-disk VDEV?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
So, all VDEVs are supposed to be identical ideally?
In an ideal world, data VDEVs in a pool would all be the same type, size and number of disks (which would ensure ZFS has as many options as possible when it needs to pick a VDEV to write to).

Is it better to have two 4-disk VDEVs or one 8-disk VDEV?
That completely depends on your definition of "better"...

is it better to have fast throughput? (relatively wide RAIDZ VDEVs might be "better")

is it better to have more IOPS? (Mirrored VDEVs, and lots of them, will be "better")

is it better to have more capacity for the same money? (one VDEV will be "better" as you only lose one set of drives to parity)
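
To make the capacity point concrete (illustrative numbers of my own, not from the post above): with eight 18 TB drives, a single 8-wide RAIDZ2 vdev spends two drives on parity and leaves roughly 6 × 18 = 108 TB of raw space, while two 4-wide RAIDZ2 vdevs spend four drives on parity and leave only 4 × 18 = 72 TB, though the two-vdev layout gives roughly twice the IOPS.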
 