Need help choosing between hardware configurations

Timmeke

Cadet
Joined
Feb 1, 2019
Messages
9
Hi all,

I'm planning on replacing my current Linux system and would appreciate a little help in choosing the hardware.
My current system has the following hardware:
  • Motherboard: ASRock Rack C2550D4I
  • RAM: 8GB
  • Chassis: Coolermaster CMStacker
  • Hot-swap bays: Icy Dock IB544 (4 bays) + Mobile Rack AWB#04 (1 bay, used for the backup disk)
  • Data disks: 4x 2TB + 4x 1TB, both using Btrfs in RAID10
  • Backup disks: 2x 4TB WD Red WD40EFRX, one always stored offsite.
This system is in need of an upgrade. I need to expand the storage because it's almost full now.
My current strategy for upgrading the storage is as follows:
- buy 2 disks that are at least twice as large as the current backup disks and use these for backup
- buy 2 disks the same size as the current backup disks and then use these 4 disks as the data disks
I don't know how long I will be able to keep that up without resorting to spreading backups over 2 disks.

Also, that C2550D4I is doomed to fail sooner or later (I didn't know about the mass die-off until I read the Hardware Recommendations Guide) and the Linux running on it should be upgraded. Since the system is 4 or 5 years old, I might as well replace the whole system, and while I'm at it, I might as well give FreeNAS a try.

This system does not run 24/7, I boot it when I need it (which is not a whole lot), but I plan to run the new system 24/7.
It actually only runs Samba (for 2 users); the bulk of the data is photos (RAW), and the rest is some music and backups of desktop/laptop/tablet/smartphone.

The performance of this system is mostly OK. The only performance problem I've had was with the Lightroom catalog that's stored alongside the photos.
It consists of a ~2GB SQLite database and 28GB of previews spread over ~86000 files (and growing).
The slowness had two causes: whenever I start Lightroom this data is not yet in the cache (because I just booted the system), and the catalog was fragmented (solved by creating a copy, then replacing the original catalog with the copy).
Another area of improvement would be importing photos into the catalog from the desktop/laptop (8GB to 45GB at a time).
But I guess the bottleneck here is the 1Gbit network, which is not that easily upgraded, even if I had the budget right now.
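For reference, a rough estimate of those import times, assuming ~110 MB/s of usable throughput on gigabit (real Samba transfers will be a bit slower):

Code:
# rough transfer time over 1 Gbit/s at an assumed ~110 MB/s usable
awk 'BEGIN { MBps=110; printf "8 GB: ~%.0f min, 45 GB: ~%.0f min\n", 8*1024/MBps/60, 45*1024/MBps/60 }'

So even the biggest imports are in the range of minutes rather than hours.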

Requirements for the new system:
  • 24/7 operation
  • A lifespan of 5 years (probably more if I don't run into any limits of the system)
  • Silent (I'm very sensitive to noise)
  • Low idle power consumption (it's going to be idle 20+ hours a day). This is not really about the electricity bill (my current system, at 85-90W idle, would cost about 100 euro/year; see the rough calculation after this list); I just don't want to be wasting energy when it's not being used.
  • Low price is nice, but I prefer to buy quality components
  • Backup to a single disk that is easily (dis)connected and transported offsite
  • Maximum 2 users
  • Samba
  • ownCloud/Nextcloud (haven't used it before)
  • Possibly 1 or more VMs
  • Maybe Plex, but I don't know, haven't used it before.
  • In the future maybe replication to a remote NAS
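The idle-cost figure above comes from this back-of-the-envelope calculation (the 0.13 euro/kWh rate is an assumption for my situation):

Code:
# yearly energy cost of an always-on idle load; 0.13 euro/kWh is an assumed rate
awk 'BEGIN { watts=87; rate=0.13; kwh=watts*24*365/1000; printf "%.0f kWh/year -> ~%.0f euro/year\n", kwh, kwh*rate }'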
If I'm going to be running the system 24/7, I might add a Linux/FreeBSD VM (accessing the NAS through NFS) so that I can have a Unix environment that is separated from the NAS without keeping an extra desktop/laptop alongside the Windows ones. If I'm doing that I might also add a Windows VM and maybe some other VMs as a playground to try all sorts of stuff. So running multiple VMs should be a possibility, but I don't expect to be using them heavily.

Now on to the hardware. There are 2 hardware configurations that I'm considering:
1. Room for growth:
The idea behind this build is that it can be easily upgraded/expanded if needed. I can upgrade the CPU, add a SAS controller to connect more disks, add a 10Gbit network card, etc. In this build I would reuse the current 2 backup disks as data disks, which would mean I have to spend around 1450 euro.
If I'm going to run VMs on it I will probably add another 16GB of RAM and maybe upgrade the CPU to a Xeon E3-1220v6 or E3-1230v6.

Things I'm still thinking about:
  • Chassis: I don't like the idea of having to shut down the NAS to replace failed disks, so I would like to have hot-swap bays, but the Define R5 does not really support this. Some options I see:
    • SuperMicro SC733TQ-500B (4 bays) or SC743TQ-865B (8 bays), probably very noisy.
    • U-NAS NSC-400 (4 bay), U-NAS NSC-800 (8 bay) or SilverStone DS380/CS380. I think I read somewhere that these all have problems with cooling.
    • reuse my current chassis, which is somewhat noisy.
  • Memory: The Crucial website recommends DDR4-2666 memory for the X11SSM-F, but that motherboard only supports up to DDR4-2400. I guess that's not a problem, but I would like some advice on other memory that's compatible with this motherboard.
  • PSU: If I calculate the idle power consumption (combining data sheets and the PSU Sizing Guidance) I get 52W, possibly even lower considering a forum post reporting 59 watts idle with 10 disks. For the G360 this is less than the recommended 20% of its maximum capacity.
    Also, if I take a pessimistic view of the maximum power consumption (using the numbers from the PSU Sizing Guidance and ignoring datasheets), then I should get at least a 450W PSU. So I'm a bit confused here: what PSU should I get?
  • UPS: Haven't researched this in detail yet, so I have absolutely no idea what to get.
2. Lowest idle watts:
With this build I'm trying to minimize the idle power consumption by using a low-power CPU and only 2 data disks, which each consume less idle power than one 4TB disk. It would cost around 1800 euro. If I go for 4x 4TB instead (again reusing the current backup disks) it would be around 1350 euro.
Things I'm still thinking about:
  • Chassis: same problem as option 1, but I like the compactness of this chassis
  • Backup disk: Since the chassis does not have an ODD cage like the Define R5, I cannot add a hot-swap bay for the backup disk; I would need to use an external USB enclosure instead. These normally don't have a fan, so I'm not sure if this is a good solution.
  • Performance: Enough for a file-server, but what about light VM use?
  • Does this motherboard/CPU consume less power when idle than option 1? I can't find any info on that and if it doesn't, I don't see a big benefit in this configuration.

Looking forward to your feedback.
 

jro

iXsystems
iXsystems
Joined
Jul 16, 2018
Messages
80
Looks like you've done some research! Here are my thoughts:

If you want the option to run VMs (especially Plex), I'd opt for the first build with the E3-1260v3 CPU. Plex does live video transcoding, which can really put a load on the CPU. As you mentioned, 32GB total system memory would also be a good idea for VMs.

I wouldn't worry about lower memory speed negatively impacting performance. I would get whatever memory Supermicro recommends for that specific board.

For PSU, I'd err on the side of more power, especially since it doesn't cost much to jump from a 350W to a 500W power supply. Modern PSUs are very efficient and will typically be at peak efficiency at about 50% load (meaning ~250W output on a 500W PSU). Some PSUs will also shut down their fan under a certain load level. You might take these into consideration when making a selection. Personally, I'd probably get a 500W, especially if you're factoring in future expansion.

A UPS is almost essential. I'd recommend something like this: https://www.amazon.com/dp/B06VY12HW4/ You'll probably get ~10 minutes of run time which is plenty to have a delayed automatic shutdown happen.

As for RAID configuration, I would definitely encourage you to use striped mirrors/RAID 10 like you mentioned. You'll get much better performance in Lightroom and expansion can be done by simply adding another mirrored pair to the pool. I would recommend that you expand your pool with the same size disks. If you want to start with the 4x 4TB disks now, you can always add 2x 10TB disks as a second pool, transfer your data into that pool, and expand with 10TB disks down the road. If you mix drive capacities in the same pool, ZFS will send more data to the larger disks than to the smaller disks, so performance can get a bit wonky.
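To make that expansion path concrete, it looks roughly like this on the command line (pool and device names are just examples; on FreeNAS you'd normally do this through the web UI, which builds the same layout):

Code:
# initial pool: two mirrored pairs (the ZFS take on RAID10)
zpool create tank mirror ada1 ada2 mirror ada3 ada4
# later expansion: add another mirrored pair to the same pool
zpool add tank mirror ada5 ada6
zpool status tank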

If I were in your shoes, I'd probably go with the first option so you're set up for future expansion. You know you'll be using this system for at least the next 5 years, so it'll be good if it can support your needs as they change. As a fellow photographer, I can say with confidence that average image file sizes will be trending up over the next several years so your storage requirements might trend up along with it ;)

Let us know if you have any more questions!
 

tim.rohrer

Cadet
Joined
Feb 3, 2019
Messages
1
Hi all,

I'm planning on replacing my current Linux system and would appreciate a little help in choosing the hardware.
My current system has the following hardware:
...
Looking forward to your feedback.

That is a solid write up of requirements. I'm glad I came across this as I'm beginning the process of designing a FreeNAS system to replace a Mac-based OpenZFS system I'm using.

I noticed you mentioned quietness as a requirement, but only mentioned it in regard to your cooler. Did you compare the PSU or chassis fans?

I run my VMs off SSDs in a separate ZFS pool, housed in external mini-enclosures connected over Thunderbolt. For the new system I'll have to research TB2/3 options.

I'm looking at rack mounted, probably 2U, so I've got to research those.
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
  • Data disks: 4x 4TB WD Red WD40EFRX (probably as striped mirrors)
  • Backup disks: 2x 8TB WD Red WD80EFAX
The backup disks -- these are not going to be inside the same box, are they? What is your backup strategy? Are you going to stick a drive in and wait for it to resilver? Or do you plan to create a secondary pool and use replication, or are you going to use rsync tasks?

If you want to use Plex -- decide how many simultaneous streams you want. Also decide how many simultaneous "transcoding" streams you want to have. That will dictate which CPU you will need. I have a Pentium G3240, which is more than sufficient for me to stream 3 at the same time; if I am transcoding it will only do 1 stream. You should use a jail for Plex -- NOT a VM. You can, but it seems a mighty waste of resources to me especially if you are only going to use Plex on that VM.
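If you do go the jail route, it's only a handful of commands from the FreeNAS shell. Something along these lines -- jail name, release, interface, IP and dataset path are just examples, and the Plex plugin in the web UI does essentially the same thing for you:

Code:
iocage create -n plex -r 11.2-RELEASE ip4_addr="em0|192.168.1.50/24"
# give the jail read-only access to your media dataset
iocage fstab -a plex /mnt/tank/media /media nullfs ro 0 0
iocage exec plex pkg install -y plexmediaserver
iocage exec plex sysrc plexmediaserver_enable=YES
iocage exec plex service plexmediaserver start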

If you want to run other VMs like various Linux distros or Windows, then you have to bump up the CPU to a Xeon. A pentium or basic i3 won't cut it.

When it comes to the Node 304 -- stay away from it. In fact, stay away from all mini-ITX boards and cases. There are 3 reasons for that:
  1. Mini-ITX offers very little expansion down the line. It will have 1 PCIe slot, so if you are using a case with more drive bays than there are SATA ports on board, you will have to use an HBA card, leaving no more space for a 10Gb card down the line. ATX or even m-ATX offer a few more possibilities.
  2. Mini-ITX server grade boards are few and far between, making them very hard to find. As a result, they are often more expensive than comparable m-ATX or ATX boards.
  3. Mini-ITX cases are a pain in the butt for cable management because of the lack of space.
You'll get more IOPS if you have multiple mirrors, so I'd go with that setup instead of any RAIDZx for your use case. You might want to use a rack mount chassis which would allow you to put in 16-24 drives. That way you could use smaller disk size (cheaper to buy a 2GB than a 6GB or 8GB drive). But you can have 8 to 12 mirrors giving you more IOPS.

Are you ok with used hardware? You can get very good used hardware for a fraction of the price -- at least in the USA. You mentioned euro somewhere in your post, so I am assuming you are in Europe; you might have to look at the used server hardware market in your area. If you do go with rack-mount hardware, make sure you get a 3U or 4U chassis and not 1U or 2U. The 1U/2U tend to be super loud; 3U/4U give you much more space to put in larger coolers to mitigate the noise.

I have a 2U chassis -- but it's placed in a server closet, so I don't care too much. It is loud compared to what a 4U chassis would be.
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
As for RAID configuration, I would definitely encourage you to use striped mirrors/RAID 10 like you mentioned.
With ZFS, really?
 

jro

iXsystems
iXsystems
Joined
Jul 16, 2018
Messages
80
With ZFS, really?
Considering he's starting with 2-4 drives, definitely. 2x mirrors will give him much better performance than 4-wide Z2 and he'll be able to expand by purchasing 2 drives at a time instead of 4 at a time.
 

Timmeke

Cadet
Joined
Feb 1, 2019
Messages
9
Looks like you've done some research!
Yes, more than I like! I'm at a point where I just want to buy the hardware and get on with it.

I wouldn't worry about lower memory speed negatively impacting performance. I would get whatever memory Supermicro recommends for that specific board.
I couldn't find the memory Supermicro recommends (Micron/Apacer). If anybody knows where I can find that in Europe, please let me know.

For PSU, I'd err on the side of more power, especially since it doesn't cost much to jump from a 350W to a 500W power supply. Modern PSUs are very efficient and will typically be at peak efficiency at about 50% load (meaning ~250W output on a 500W PSU). Some PSUs will also shut down their fan under a certain load level. You might take these into consideration when making a selection. Personally, I'd probably get a 500W, especially if you're factoring in future expansion.
OK, peak efficiency is at 50%, but the system is running idle most of the time. The Seasonic G-series has 87% efficiency at 20%, but what happens if your system is only using 10%? Can it deliver stable power at that load?
I'm also having trouble finding the recommended PSUs here in Europe; does anyone know a good online shop?

A UPS is almost essential. I'd recommend something like this: https://www.amazon.com/dp/B06VY12HW4/
Thanks for the suggestion, but that UPS has US style sockets. Is this the European equivalent?

As a fellow photographer, I can say with confidence that average image file sizes will be trending up over the next several years so your storage requirements might trend up along with it ;)
Yes, unfortunately the camera companies will keep increasing the pixel count, but I already have enough resolution. My intention is to keep using my current cameras for the next 4 to 5 years. That should be no problem as long as they keep working ;-)

Thanks for the feedback.
 

Timmeke

Cadet
Joined
Feb 1, 2019
Messages
9
I noticed you mentioned quietness as a requirement, but only mentioned it in regard to your cooler. Did you compare the PSU or chassis fans?
I didn't check the specs of the PSU fan yet, but that is because I'm having trouble finding the recommended PSUs. So if I can find only one with a decent price, I'll have to stick with that.
Another important piece to consider is the chassis. The Fractal Design Define R5 claims to be silent. It uses sound-absorbing materials to shield the noisy components like hard disks and fans (except the rear fan).
 

Timmeke

Cadet
Joined
Feb 1, 2019
Messages
9
The backup disks -- these are not going to be inside the same box, are they? What is your backup strategy? Are you going to stick a drive in and wait for it to resilver? Or do you plan to create a secondary pool and use replication, or are you going to use rsync tasks?
My plan is to have a separate pool for each backup disk and then regularly do a zfs send/recv from the main pool to the backup pool. When the backup is done I will remove the disk from the hot-swap bay (or disconnect the external enclosure), transport the disk to the offsite location and bring back the other backup disk that was stored there. This way I will always have 1 backup stored offsite.
When I'm not doing a backup I intend to keep the backup disk disconnected from the system.
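Roughly what I have in mind for each backup run -- pool and snapshot names are just placeholders, and I'd still have to script the incremental sends properly:

Code:
# attach the backup disk and import its pool
zpool import backup1
# snapshot the main pool and replicate it to the backup pool
zfs snapshot -r tank@backup-2019-02-10
zfs send -R tank@backup-2019-02-10 | zfs recv -F backup1/tank
# export cleanly before pulling the disk and taking it offsite
zpool export backup1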

If you want to use Plex -- decide how many simultaneous streams you want. Also decide how many simultaneous "transcoding" streams you want to have.
I'm not sure yet whether I'm going to use Plex; I haven't researched it. But I would have a maximum of 2 simultaneous streams. I don't know if I would need transcoding.

You should use a jail for Plex -- NOT a VM. You can, but it seems a mighty waste of resources to me especially if you are only going to use Plex on that VM.
Indeed, running it in a VM would be silly. I don't think I said I would do that.

If you want to run other VMs like various Linux distros or Windows, then you have to bump up the CPU to a Xeon. A pentium or basic i3 won't cut it.
Well ... I guess it depends on the number of VMs and the load you put on them. 1 or 2 VMs with a light load should be OK on an i3, I think. But anyway, I think I'll go for the Xeon because I just cannot predict what I'll be running over the next 5 years.

When it comes to the Node 304 -- stay away from it. In fact, stay away from all mini-ITX boards and cases. There are 3 reasons for that:
  1. Mini-ITX offers very little expansion down the line. It will have 1 PCIe slot, so if you are using a case with more drive bays than there are SATA ports on board, you will have to use an HBA card, leaving no more space for a 10Gb card down the line. ATX or even m-ATX offer a few more possibilities.
  2. Mini-ITX server grade boards are few and far between, making them very hard to find. As a result, they are often more expensive than comparable m-ATX or ATX boards.
  3. Mini-ITX cases are a pain in the butt for cable management because of the lack of space.
I understand your reasoning against mini-ITX boards. But I already have one now (the C2550D4I) and it hasn't been a problem for me.
I have to disagree with number 2 though: I found a board that fits my needs and is actually cheaper than the X11SSM-F + CPU + cooler.

You might want to use a rack mount chassis which would allow you to put in 16-24 drives. That way you could use smaller disk size (cheaper to buy a 2GB than a 6GB or 8GB drive). But you can have 8 to 12 mirrors giving you more IOPS.
I assume that 2GB, 6GB and 8GB were meant to be 2TB, 6TB and 8TB?
I also have to disagree here. For the WD Red series, the 2TB drives cost more per GB than the larger drives, at least where I would buy them.
Also using more disks increases the power consumption, something I wish to keep as low as possible.
Keeping that in mind, I don't see a need to go for a rackmount chassis.
Now, in all honesty, I did consider using 10 2.5" 1TB drives (WD10JFCX) in RAIDZ2 because 4 of these still use less power than one 4TB drive. But I dismissed that setup because it would cost significantly more and the write IOPS would be lower than with striped mirrors.

Are you ok with used hardware?
I prefer to buy new.
 

Timmeke

Cadet
Joined
Feb 1, 2019
Messages
9
As for RAID configuration, I would definitely encourage you to use striped mirrors/RAID 10 like you mentioned
With ZFS, really?
Yes, I understand that if the wrong 2 disks fail I lose the pool, which is not the case with RAIDZ2.
But in my opinion the risk can be minimized:
  • Use disks from different batches/manufacturers in each mirror vdev
  • Use a hot spare to limit the time in degraded mode
  • Have a good backup strategy
I already have 2 4TB drives, so a 4x 4TB pool would have drives from at least 2 different batches. Also having to rebuild the pool from a backup would just be an inconvenience. I'm not running a business, I can afford a little downtime.
So if I have to balance the risk against the performance characteristics, I think in my case striped mirrors is the best option.
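For the record, the layout I have in mind would be created something like this (device names are only placeholders); as far as I understand, FreeNAS should put the spare in automatically when a disk faults:

Code:
# two mirrored pairs plus a hot spare
zpool create tank mirror ada1 ada2 mirror ada3 ada4 spare ada5
zpool status tank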
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
Yes, I understand that if the wrong 2 disks fail I lose the pool, which is not the case with RAIDZ2.
But in my opinion the risk can be minimized:
  • Use disks from different batches/manufacturers in each mirror vdev
  • Use a hot spare to limit the time in degraded mode
  • Have a good backup strategy
I already have 2 4TB drives, so a 4x 4TB pool would have drives from at least 2 different batches. Also having to rebuild the pool from a backup would just be an inconvenience. I'm not running a business, I can afford a little downtime.
So if I have to balance the risk against the performance characteristics, I think in my case striped mirrors is the best option.
Choosing mirrors can be a valid solution depending on your wants and needs. I just think that it should not be called RAID 10. There is no RAID controller (I hope); ZFS handles everything in software. Using the wrong terms can be confusing for new users.
 

jro

iXsystems
iXsystems
Joined
Jul 16, 2018
Messages
80
I couldn't find the memory Supermicro recommends (Micron/Apacer). If anybody knows where I can find that in Europe, please let me know.

I'm also having trouble finding the recommended PSUs here in Europe; does anyone know a good online shop?

Thanks for the suggestion, but that UPS has US style sockets. Is this the European equivalent?
I'm based in the States, so I unfortunately won't be much help in finding a European retailer for this stuff. Hopefully one of our European members can point you in the right direction. And yes, that UPS you linked looks correct, but the price seems really high compared to the US model... That could just be the reality of purchasing stuff in the EU.

OK, peak efficiency is at 50%, but the system is running idle most of the time. The Seasonic G-series has 87% efficiency at 20%, but what happens if your system is only using 10%? Can it deliver stable power at that load?
As the PSU gets closer to 0% output, its efficiency (as a percentage of output/input) will go down, but since the wattage is lower, the wasted watts will still be pretty low. For example, let's say a 500W PSU operates at 75% efficiency at 10% load. That means it'll deliver 50W to your parts but draw 66.7W from the wall, wasting 16.7W as heat, noise and component vibration. Compare that to 90% efficiency at 90% load (optimistic, but it keeps the numbers round): you'd deliver 450W but draw 500W from the wall, wasting 50W.

Obviously this is an oversimplified example, but hopefully it illustrates the point that you shouldn't worry too much about efficiency at very low outputs. To answer your other question, the output will be perfectly stable even at very low power outputs.
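If you want to plug in your own numbers, the arithmetic is just DC output divided by efficiency:

Code:
# wall draw = DC output / efficiency; wasted watts = wall draw - output
awk 'BEGIN {
  printf "10%% load: wall = %.1f W, waste = %.1f W\n", 50/0.75, 50/0.75 - 50
  printf "90%% load: wall = %.1f W, waste = %.1f W\n", 450/0.90, 450/0.90 - 450
}'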

Choosing mirrors can be a valid solution depending on your wants and needs. I just think that it should not be called RAID 10. There is no RAID controller (I hope); ZFS handles everything in software. Using the wrong terms can be confusing for new users.
In my opinion, RAID10 is a perfectly acceptable way to refer to stripes over mirrors on ZFS (assuming it's understood we're talking about ZFS). I don't think that RAID10 implies hardware RAID as there are many software implementations of RAID0, RAID1, and RAID10. I don't believe it confuses newcomers but rather it offers a touchpoint of familiarity that can help them begin to understand the very complex world of ZFS.
 

Timmeke

Cadet
Joined
Feb 1, 2019
Messages
9
I researched PSUs a little more, but I'm getting a bit confused by all the Seasonic models. Can somebody shed some light on this?
Is the Focus Gold 450W or the Focus Plus Gold 550W also a good option for FreeNAS? I could get those for a good price here. The only PSU mentioned in the hardware guide that I can get for a decent price is the Prime Titanium 650W, but at double the cost.
The hardware guide also mentions EVGA, but not which models. So is an EVGA SuperNOVA 550 G3 or SuperNOVA 550 G2 also a good option?
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
On a Dutch site (Hardware Info) this PSU was awarded an "EXCELLENT" rating. This is based on its build quality (electrical and mechanical), the results of the testing, the delivered cables, etc. As you are from Belgium it is possible that you can read the Dutch text; you can find the test here. If you are not able to read it: I did, and could not find anything bad being said about the PSU. In the conclusion of the test it is described as stable and efficient, and the values for ripple and sound production are more than OK.

As an afterthought: I own a Seasonic PSU myself (and have owned several) and have never been disappointed. I trust the brand.
 

Timmeke

Cadet
Joined
Feb 1, 2019
Messages
9
Thanks for the link! It's a useful article; I learned a few things.

I did some more calculations considering the maximum amount of hardware that I would possibly add to the system in the future, resulting in the need for a 650W PSU. This makes the price difference with the Prime Titanium 650W somewhat smaller, so I'm thinking of buying that one.

After discussing everything with my wife, we've decided to go for the following configuration:
We've decided to start with 16GB and jump to 32 GB later if needed.

One last question, would it make sense to go for a UPS of the APC Smart-UPS series? It produces a sinewave instead of the 'stepped approximation to a sinewave' of the Back-UPS Pro series. I have no idea what difference that would make.
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
One last question, would it make sense to go for a UPS of the APC Smart-UPS series? It produces a sinewave instead of the 'stepped approximation to a sinewave' of the Back-UPS Pro series. I have no idea what difference that would make.
Well, a UPS with pure sine wave output costs about twice as much. What you get is a smoother and cleaner output and somewhat better efficiency. Personally I don't think it is needed for a fairly simple appliance such as a home server. Modified sine wave UPS systems typically protect PCs, home entertainment systems, A/V components and media centers. If you have a good quality PSU it should be no problem to go for the cheaper solution. I own an APC BR1200G-GR, also known as the Back-UPS Pro 1200, and its wave shape is indeed a stepped approximation of a sine wave. My server has survived 3 power failures without a flaw over the past 2 years. Of course, if budget is not a problem, go for the gold :) By the way: you do realize that this is only a "problem" when you are running on battery?
 

Timmeke

Cadet
Joined
Feb 1, 2019
Messages
9
By the way: you do realize that this is only a "problem" when you are running on battery?
Right! No, I didn't realize it, but now that you mention it, it should have been obvious. :)

OK, that's it then, I can start ordering the parts.
Thanks everyone for the feedback! It was very helpful.
 

l@e

Contributor
Joined
Nov 4, 2013
Messages
143
The only thing a standard line-interactive UPS will not handle are changes faster than its designed response time. For most of them that is 8-10 ms, which is half a sine cycle at 50Hz. The PSU capacitors will not let the server power down during that window. The only risk is if the electricity where you live plays "jokes" with high-voltage spikes shorter than that; in that case only an online double-conversion UPS will protect you 100%, since its input is fully isolated from its output. But these jokes are mostly observed with long transmission lines and in stormy areas, so as long as you live in an urban area there is no need to invest too much. ZFS itself will not suffer from power outages; it is mostly the physical parts that do.
 