SOLVED Low-cost SuperMicro Atom C3338 (Denverton) build

Status
Not open for further replies.

lightwave

Explorer
Joined
Jun 14, 2018
Messages
68
Hi Everybody!

I’m trying to put together a low-cost, yet fairly capable FreeNAS build. It seems that most other builds described on the forum are targeting significantly higher performance and/or storage needs than (I think) I have. Hence, I’ve tried to put together a more lightweight build while trying to stay within the hardware recommendations.

I’m curious whether anybody has tried a similar setup and/or has some good advice?

The build (total cost around $1000):
  • SuperMicro A2SDi-2C-HLN4F (Atom C3338, 2-core, 9 W, 8x SATA, 2x DDR4, 4x GbE)
  • 1x 8 GB DDR4 ECC
  • Fractal Design Node 304
  • Seasonic G-360 360 W Gold power supply
  • 2x WD Red 6TB (storage, ZFS, mirrored)
  • WD Green 120 GB SSD (boot disk)
Potential upgrade path:
  • Double the memory (2x 8 GB)
  • Additional 6 TB disk (3-way mirror)
  • Additional ZFS pool with 3x 10 TB disks
Backup strategy:
  • Backup all data to an attached USB disk (non-ZFS)
  • Backup important data to another attached USB disk (non-ZFS) to be rotated off-site on a quarterly basis
Use cases:
  • Main use case is long-term safe storage of personal files for access by 1-3 simultaneous clients with a decent level of throughput over a 1Gbps network (no processing/transcoding on the server)
  • I also plan on using the server as a data store for 1-3 VMware ESXi hosts with a total of 5-15 active VMs with little or no IO performance requirements
I really appreciate any and all feedback on the build. I would also love to hear if anybody has managed to put together an even cheaper build.

Thanks!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I have seen people who have had problems with the WD Green SSD; several recent failures. I would suggest that you pick up a small-capacity, used Intel SSD from eBay. If it is healthy, and most are, it will last longer than the NAS.

Backup strategy is a false hope. ZFS is the file system.

 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Is cheaper the goal or lower power consumption?
I would bet on smashing that price with used gear.

 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
You would be better off starting with a single 16GB stick of memory, since that board only has 2 memory slots, giving you limited upgrade options.
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
Is Mini-ITX (IIRC, both the board and the chassis) a strict need?
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
I also plan on using the server as a data store for 1-3 VMware ESXi hosts with a total of 5-15 active VMs with little or no IO performance requirements
Little or no IO performance requirement is one thing when looking at an idle VM.
It can become quite a different story once you take into account the performance hit your pool suffers when you start looking at NFS settings such as sync=always for the datastore share.
At that point, turn your interest to reading up on, and selecting, a suitable SLOG.
In my experience, sync=always causes increased CPU utilization. Quite significant, too. It may turn out to be a surprise for your selected CPU. (I run an i3-6100 for reference.)

If the VM data is not super-critical with regards to uptime, another option is to run your VMs off a separate SSD pool (in some scenarios it may even be reasonable to run it without redundancy) that is in turn snapshotted to your HDD pool for easy local restore when need be.
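
As a minimal sketch of that tuning, assuming a pool named tank, a datastore dataset vmstore, and a spare SSD ada4 (all names hypothetical):

Code:
# Force synchronous semantics on the dataset backing the NFS datastore
zfs set sync=always tank/vmstore
# Attach a fast SSD as a separate log (SLOG) device to absorb those sync writes
zpool add tank log ada4

Note that a SLOG only helps synchronous writes, and ideally it should be an SSD with power-loss protection.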
 

lightwave

Explorer
Joined
Jun 14, 2018
Messages
68
Thank you all for your input! I really appreciate it.

I have seen people who have had problems with the WD Green SSD; several recent failures. I would suggest that you pick up a small-capacity, used Intel SSD from eBay. If it is healthy, and most are, it will last longer than the NAS.

Thanks for bringing the WD Green SSD issue to my attention. The additional cost for a new Intel SSD 545s 128 GB is negligible (approx. €20), so I will definitely go for the Intel.

Backup strategy is a false hope. ZFS is the file system.

I'm curious to know more about why the backup strategy would not work. Do you mind elaborating on this? My reason for considering non-ZFS for the backups was the prevalence of reports of failed ZFS pools and the lack of tools to repair and recover data from such pools. I simply wanted a way to recover (at least parts of) my data in case of a complete pool failure. This feels even more important until I "upgrade" to a 3-way mirror for the pool.

Is cheaper the goal or lower power consumption?
I would bet on smashing that price with used gear.

I suppose the goal is to keep the TCO at a "reasonable" level. I don't want to go so cheap that I end up with a setup that doesn't meet my requirements, but I don't want to pay a lot extra for potential performance that I will probably never need either. I usually end up buying stuff that can do 10 times more than I need, to be on the safe side. For once, I thought I'd try to keep it closer to my actual needs :)

I have, however, ruled out used gear due to bad experiences with the stability of used gear in the past. Also, the availability of used server-grade gear is rather limited in northern Europe, where I live.

You would be better off starting with a single 16GB stick of memory, since that board only has 2 memory slots, giving you limited upgrade options.

Makes sense if I will need more than 16 GB in the future (2x 8 GB is supposed to provide better performance than 1x 16 GB on this board). Based on your experience, is it likely that I would need 32 GB of memory with storage needs of 6 TB (in a 3-way ZFS mirror)? Does the answer change if I add a second 3-way 10 TB mirror?

Is Mini-ITX (IIRC, both the board and the chassis) a strict need?

I'm afraid so. We live in a city apartment, so the size of the set-up matters.
 

lightwave

Explorer
Joined
Jun 14, 2018
Messages
68
Little or no IO performance requirement is one thing when looking at an idle VM.
It can become quite a different story once you take into account the performance hit your pool suffers when you start looking at NFS settings such as sync=always for the datastore share.
At that point, turn your interest to reading up on, and selecting, a suitable SLOG.
In my experience, sync=always causes increased CPU utilization. Quite significant, too. It may turn out to be a surprise for your selected CPU. (I run an i3-6100 for reference.)

If the VM data is not super-critical with regards to uptime, another option is to run your VMs off a separate SSD pool (in some scenarios it may even be reasonable to run it without redundancy) that is in turn snapshotted to your HDD pool for easy local restore when need be.

Thanks. Good point! It definitely makes sense to run the datastore on a separate SSD pool. Downtime and loss of recent data will not be critical (just inconvenient), so redundancy is not required as long as I have a daily snapshot in the backups. The set-up has one free SATA port even after setting up a second 3-way ZFS pool, so it should work even in the long term.
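
A minimal sketch of such a daily snapshot-and-replicate step (pool and dataset names are hypothetical; later runs would use an incremental zfs send -i):

Code:
# Snapshot the SSD-backed VM dataset, then replicate it to the HDD pool
zfs snapshot ssdpool/vms@daily-20180701
zfs send ssdpool/vms@daily-20180701 | zfs recv -F tank/vm-backup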

The performance of my selected CPU is one of my main worries. I'm wondering whether it will be able to provide decent performance or whether I should opt for the $100 more expensive 4-core version ...
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
(2x 8 GB is supposed to provide better performance than 1x 16 GB on this board)
Not with the 23xx version of that CPU; it only supports single-channel memory.
Based on your experience, is it likely that I would need 32 GB of memory with storage needs of 6 TB (in a 3-way ZFS mirror)?
It depends entirely upon your workload. If you later expand your server to do other things, then you may need more. The cheaper route would be a single 16GB stick now rather than replacing 2x 8GB sticks to expand in the future.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
The cheaper route would be a single 16GB stick now rather than replacing 2x 8GB sticks to expand in the future.
(don't ask why I run 48GB RAM :P)
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I'm curious to know more about why the backup strategy would not work.


I'm not sure what @Chris Moore exactly meant; it's perfectly viable to back up to ZFS-formatted, USB-connected drives, and I believe @Arwen actually has some scripts to assist with this.
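
For example (a minimal sketch; the device name da0, the pool names usbbackup and tank, and the snapshot name are all hypothetical):

Code:
# Create a single-disk ZFS pool on the USB drive
zpool create usbbackup da0
# Replicate a recursive snapshot of the main pool onto it
zfs snapshot -r tank@offsite-20180701
zfs send -R tank@offsite-20180701 | zfs recv -F usbbackup/tank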
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
...
I'm curious to know more about why the backup strategy would not work.
...
I'm not sure what @Chris Moore exactly meant; it's perfectly viable to back up to ZFS-formatted, USB-connected drives, and I believe @Arwen actually has some scripts to assist with this.
One issue lots of people overlook with backups: if you can't verify the backup data, how do you know the backup data is good?

ZFS allows you to scrub your data, even backup data on a non-redundant pool. ZFS can't fix non-redundant data, but you at least KNOW it's bad before attempting a restore. Thus, you can go to another backup medium for your restore.

Some backup tools will checksum the files and can verify them before restoring. But why bother? ZFS will do this automatically.

And yes, I wrote a script to perform backups (though using rsync) from ZFS to ZFS. The method and script are in the resource section of this forum.
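
As a minimal sketch of the verification step (the pool name usbbackup is hypothetical):

Code:
zpool scrub usbbackup       # read and checksum every block in the backup pool
zpool status -v usbbackup   # after the scrub, lists any files that failed verification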
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
ZFS can't fix non-redundant data, but you at least KNOW it's bad before attempting a restore. Thus, you can go to another backup medium for your restore.
EDIT: For example a mirror? Let's assume one has some redundancy - a 2-way mirror.

Do ZFS and mirrors work like in this example: even though both disks of my mirror have some checksum errors, can the data be fixed if the affected sectors are not the corresponding ones?
 

anmnz

Patron
Joined
Feb 17, 2018
Messages
286
For example a mirror?
Well no, that would be an example of *redundant* storage, which ZFS *can* fix...

Do ZFS and mirrors work like in this example: even though both disks of my mirror have some checksum errors, can the data be fixed if the affected sectors are not the corresponding ones?
Yes, of course: if ZFS has a good copy of a block, it will use it (including to repair bad copies).
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
For example a mirror?
...
As stated above, mirrors are redundant, so of course fixes can be applied.
Do ZFS and mirrors work like in this example: even though both disks of my mirror have some checksum errors, can the data be fixed if the affected sectors are not the corresponding ones?
ZFS does something I had wanted for at least 6 years before ZFS was released. If you have bad data blocks on more disks than you have redundancy (like both disks of a 2-way mirror), as long as it's not the same data blocks, you can recover ALL the data. Even with ZFS it is a tricky fix, but not like before. Basically, you can replace in place, meaning you add the replacement disk and ZFS will create a temporary mirror of the failing disk. Any data from the failing disk that is still good is used, and where it is not, the other disk(s) are used.

Even if ZFS can't recover all your data (and yes, that can happen if you have too many un-repaired hardware faults), ZFS will tell you which files it could not repair. Remember, ZFS is not a backup tool. But backups are basically extra copies in case the main copies are damaged. Another disk (or set of disks) with those copies that uses ZFS can be a backup.
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
Well no, that would be an example of *redundant* storage, which ZFS *can* fix...
As stated above, mirrors are redundant, so of course fixes can be applied.
Thanks, edited.

Basically, you can replace in place, meaning you add the replacement disk and ZFS will create a temporary mirror of the failing disk.
Does it mean one needs to add the third disk (in the case of a 2-way mirror) before removing either of the failed ones, thus temporarily creating a 3-way mirror? And will the new disk, the third one, be the place for the temporary mirror? Or where else is this temporary mirror stored?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Does it mean one needs to add the third disk (in the case of a 2-way mirror) before removing either of the failed ones, thus temporarily creating a 3-way mirror? And will the new disk, the third one, be the place for the temporary mirror? Or where else is this temporary mirror stored?
Sorry, ZFS has too many options, so the answer is maybe yes to all of the above.

When I need to perform a repair, it's always situation-dependent, meaning I don't fix a flat tire by getting an oil change.

So, in the case of a 2-disk, 2-way mirror where you have a free disk slot and bad disk(s), I would add the replacement disk as a semi-permanent 3rd mirror, then re-assess. Others might use ZFS's replace-in-place option, especially if both source disks have failures. That automatically removes the disk being replaced after the new disk is resilvered, thus going back to a 2-way mirror (in this example).

So, without a concrete example, it's hard to give the safest or most common procedure. ZFS might be top of the heap for data safety, but others are catching up.
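
As a sketch of the first option (pool and device names are hypothetical):

Code:
# Attach a third disk to an existing 2-way mirror, turning it into a 3-way mirror
# "tank" is the pool, "ada1" an existing mirror member, "ada3" the new disk
zpool attach tank ada1 ada3
zpool status tank    # watch the resilver complete, then re-assess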
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
Others might use ZFS's replace-in-place option, especially if both source disks have failures
That's the case I'd like to understand. What does an in-place replacement mean? Assume a 2-way mirror where both drives have some failures...
 

anmnz

Patron
Joined
Feb 17, 2018
Messages
286
What does an in-place replacement mean? Assume a 2-way mirror where both drives have some failures...
So let's step back just a little bit.

When ZFS reads data from a disk it verifies the checksums, and if a checksum doesn't match it repairs the data at the same time, as part of the read operation, assuming it has write access to the disk and there is a good copy of the data available (for example, from the other side of a 2-disk mirror).

So in the normal state there are no known data errors on your mirrored disks, because ZFS would have repaired them as soon as it found them.

As for "in-place replacement", look up the documentation for the "zpool replace" command. You can "zpool replace" a disk before detaching it. In that case the new disk is added to the pool first and resilvered before the old disk is detached. In the case of a mirror you're right, that is exactly like turning your 2-way mirror temporarily into an 3-way mirror. And it's not quite the same as literally copying the content of the old disk to the new disk, because ZFS will read from both disks (I believe) and it will correct checksum errors as it finds them.

I hope this is of some help.
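
A minimal sketch of that replace-before-detach flow (pool and disk names are hypothetical):

Code:
# Replace a failing mirror member while it is still attached
# "tank" is the pool, "ada1" the failing disk, "ada3" its replacement
zpool replace tank ada1 ada3
zpool status tank    # the old disk detaches automatically once resilvering completes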
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
As for "in-place replacement", look up the documentation for the "zpool replace" command. You can "zpool replace" a disk before detaching it. In that case the new disk is added to the pool first and resilvered before the old disk is detached. In the case of a mirror you're right, that is exactly like turning your 2-way mirror temporarily into an 3-way mirror. And it's not quite the same as literally copying the content of the old disk to the new disk, because ZFS will read from both disks (I believe) and it will correct checksum errors as it finds them.
Is in-place similar to this:
When replacing a working disk, the process keeps the old disk online during the replacement. The pool never enters a degraded state, reducing the risk of data loss. zpool replace copies all of the data from the old disk to the new one.
?
 