Newbie FreeNAS build (old PC components vs. new server components)

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
This is VERY overstated. ECC is recommended and I fully agree with that recommendation. But losing all your data just because you use non-ECC memory is nonsense.
Some additional info on why Evert is right here:
It's important to note the difference in articles between "data loss" and "total pool failure". Most articles and statistics dealing with redundancy or ECC are talking about data loss, which can mean a whole drive, a whole pool or (most often) just a single block.

The chance of non-ECC RAM corrupting a single block is very small as long as your RAM checks out, way smaller than data loss due to disk failure. The chance of non-ECC corruption leading to a cascading failure that takes out a whole pool is many times smaller still.

Should you try to use ECC? Yes.
Is it required? No.
Please remember iX recommendations should always be read as: "Can you expect us or the forum to fully support you if you have issues due to not listening to us?"


Depending on how important your system is to you, I would wait for the release. On my production environment: no betas or RCs.
Please be aware that the TrueNAS 12 RC is roughly equivalent to the old FreeNAS RELEASE versions; there is a translation sheet in another topic :)
Most issues with beta and release versions of FreeNAS have always been with updating old systems. If you start from scratch, I think the current BETA2 is relatively solid.

(Not saying you should use it already, just stating some info you should take into account).


Other comments:
- Your build is rock solid.
- Go raidz2 or raidz3 for bulk archive storage, not striped mirrors and NOT raidz1.
- Be sure to set a nice high block size (the dataset recordsize).
 

Tobsen

Dabbler
Joined
Aug 14, 2020
Messages
21
@ornias thank you for commenting. All this advice in this community is so helpful and very appreciated.

I am thinking Raidz2 is a good fit. So the biggest block size, then? The video files are usually 1 GB or more.

In my current workflow I copy the original video files from the camera media straight to the editing drive and to archive HDDs xxA and xxB. The server will replace the single "A" drives. Whatever happens to the server, I will still have the single "B" archive drives.
It might not be critical to use ECC, but rebuilding the archive would be a pain if that rare case does happen. So ECC would be an extra safety feature.

Am I missing something else here?
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
I am thinking Raidz2 is a good fit.
I agree. For many applications Raidz2 is great where Raidz3 is a bit overdone and not very economical. Raidz2 offers a good balance between redundancy and usable storage space. To me a pool with 6 to 8 disks is ideal.
 
Last edited:

Tobsen

Dabbler
Joined
Aug 14, 2020
Messages
21
I agree. For many applications Raidz2 is great where Raidz3 is a bit overdone and not very economical. Raidz2 offers a good balance between redundancy and usable storage space. To me a pool with 6 to 8 disks is ideal.
I plan to have 8 14 tb disks so seems to be the right decision.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Block size can be set to 1MB, and for large video files, that’s a good idea. It’ll cut down on metadata stored, which you want to fit into ARC (RAM) if at all possible.

I agree with a raidz2 vdev size of between 6 and 8 drives, and as many vdevs in the pool as you want or have room for.
I am running 8 now, and that’s a very economical width for bulk storage.
Keep in mind that raidz width cannot be changed after the fact. At least not presently. If you start with 6 drives, it will always be 6 drives.

You can replace all drives in a raidz with bigger drives, one by one, and when you are done you have the additional capacity.
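To put those two tips into commands (a sketch only; "tank", "tank/archive" and the disk names are placeholders for your own pool and devices):

```shell
# Set 1M records for a dataset holding large video files.
# This only affects newly written files; existing data keeps its old record size.
zfs set recordsize=1M tank/archive
zfs get recordsize tank/archive

# Growing a raidz vdev by swapping in bigger disks, one at a time:
zpool replace tank da3 da9   # then wait for the resilver to finish
zpool status tank            # check resilver progress

# Once every disk in the vdev has been replaced, the extra space
# becomes available (immediately, if autoexpand is on):
zpool set autoexpand=on tank
```

Resilver each disk fully before starting the next replacement; until the last disk is swapped, the vdev stays at the old capacity.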
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
If you start from scratch I think the current BETA2 is relatively solid.
I agree that starting from scratch is less of a risk than updating an old version. But I tend to err on the side of caution. I recall the debacle with FreeNAS Corral. I am glad I was still running the old version of FreeNAS when the plug was pulled on Corral. And a lot of people that started out with Corral were not happy, to say the least. So yes, being an early adopter can lead to an unpleasant experience. Though I have a lot of trust in the current development team. That needs to be said as well.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Though I have a lot of trust in the current development team. That needs to be said as well.
Yeah, Corral was... well... crap, and one should be careful.

I for one have already thoroughly tested both OpenZFS 2.0 (on Linux, BSD and FreeNAS) and TrueNAS Core 12. While I have found some interesting bugs in OpenZFS 2.0 (though none that put data at risk), I've not been able to find any significant bugs in TrueNAS Core 12's ZFS and iocage middleware+GUI.
That's after about a hundred iocage jails being created, roughly triple that number of datasets (including some with custom parameters) and about 10 pool (re)creations.
That being said: I have not tested shares, VMs, LDAP, AD or ACLs as of yet.

The current roadmap for OpenZFS still lists some big features that might get added but are not yet merged into master, such as:
- draid (a total reimplementation of raid-on-ZFS with significantly(!) faster rebuild times)
- zstandard compression (think of it as slightly slower than lz4 but with gzip-level compression ratios)

So it might actually be wise to hold off creating a new NAS for a few months unless you can't help it.
It would mean TrueNAS Core 12 would be at least RC state and OpenZFS would have the biggest new features integrated.

I sincerely hope iX is not going to ship TrueNAS Core 12 with a gutted version of OpenZFS 2.0 though, but that remains to be seen.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Draid might just be enough of a draw to get me to destroy my pool for the second time. Maybe that's TrueNAS 12.1 time frame.

I'm assuming TrueNAS Core 12.0 is feature-complete. RC1 in less than 4 weeks and release a month after that, I don't see how a major feature like draid could land in time for 12.0.

zstd, I don't know, I'm not holding my breath. I see you've been busy on it ornias, hopefully you guys can shepherd it over the finish line, in such a way that it's entirely stable.
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
So it might actually be wise to hold off creating a new NAS for a few months unless you can't help it.
Should I need to create a new NAS right now, I would do it with the current released version of FreeNAS. I have switched one of my FreeNAS VMs to the TrueNAS 12.0 BETA train and this worked without trouble.

However, this also got me a warning about new ZFS feature flags. That's OK: with an update I get the choice to upgrade my pools; it's not done automatically. But I think that with a fresh install and new ZFS pools, my pools will have those new feature flags from the start. If those are not supported by the "old" FreeNAS, I am out of luck if TrueNAS does not work out for me. Correct me if I am wrong.

Reinstalling FreeNAS is not much work but I would hate to destroy the pool(s).
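For what it's worth, you can inspect this from the shell before deciding anything ("tank" here is a placeholder pool name):

```shell
# Show every feature flag on a pool and its state
# (disabled / enabled / active):
zpool get all tank | grep feature@

# With no arguments, this only *lists* pools that could enable newer
# feature flags. Running "zpool upgrade tank" is the one-way step that
# older releases may no longer be able to import.
zpool upgrade
```

So the warning itself is harmless; it is the explicit upgrade (or creating a fresh pool on the newer release) that closes the door on going back.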
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Draid might just be enough of a draw to get me to destroy my pool for the second time. Maybe that's TrueNAS 12.1 time frame.
Indeed. I expect both draid and zstd to at least be usable in TrueNAS Core via the CLI, if they manage to get into OpenZFS 2.0.

I'm assuming TrueNAS Core 12.0 is feature-complete. RC1 in less than 4 weeks and release a month after that, I don't see how a major feature like draid could land in time for 12.0.
They should've just waited for an OpenZFS 2.0 RC or stable and kept TrueNAS 12 in alpha till that moment, imho.
That being said: I expect zstd to at least land in U1, mainly because it requires a small GUI rework (splitting compression level from compression algorithm in the GUI).

zstd, I don't know, I'm not holding my breath. I see you've been busy on it ornias, hopefully you guys can shepherd it over the finish line, in such a way that it's entirely stable.
It's basically pro forma accepted; I expect it to be merged within a week or two.
As soon as it's merged I'll look into TrueNAS support myself :)

*edit*
And... ZSTD is merged...
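Assuming the merged patchset keeps the property syntax from the OpenZFS pull request, the compression level is picked as part of the property value, roughly like this ("tank/archive" is a placeholder dataset):

```shell
# Default zstd level (3):
zfs set compression=zstd tank/archive
# Explicit level, trading write speed for a better ratio:
zfs set compression=zstd-7 tank/archive
# The zstd-fast variants favor speed over ratio:
zfs set compression=zstd-fast-10 tank/archive
# Verify what a dataset ended up with:
zfs get compression,compressratio tank/archive
```

This is also why the GUI rework mentioned above is needed: today's GUI has no separate "level" field for a compression algorithm.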
 
Last edited:

Sasquatch

Explorer
Joined
Nov 11, 2017
Messages
87
I had quite a bad experience with non-ECC RAM: lost data, all of it. Call me biased, but to me no ECC = no data.
It was an 8 GB stick in a desktop motherboard. First I had a couple of write errors; one HDD was used, so I swapped it for a new one. A week later the pool was history.
FreeNAS rebooted during an SMB data copy and the pool was impossible to import after that.
I re-ran memtest, for a week this time, and found 2 errors; after another ~2 weeks (waiting for $$ for new hardware) I had 15 errors in total. Not a high error rate for failed RAM.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
I had quite a bad experience with non-ECC RAM: lost data, all of it. Call me biased, but to me no ECC = no data.
It was an 8 GB stick in a desktop motherboard. First I had a couple of write errors; one HDD was used, so I swapped it for a new one. A week later the pool was history.
FreeNAS rebooted during an SMB data copy and the pool was impossible to import after that.
I re-ran memtest, for a week this time, and found 2 errors; after another ~2 weeks (waiting for $$ for new hardware) I had 15 errors in total. Not a high error rate for failed RAM.
Just because your pool failed with failed RAM doesn't mean the two are connected... Not saying they aren't, but we can't tell from a half-story with n=1 whether that's the case.
 

Tobsen

Dabbler
Joined
Aug 14, 2020
Messages
21
So it might actually be wise to hold off creating a new NAS for a few months unless you can't help it.
It would mean TrueNAS Core 12 would be at least RC state and OpenZFS would have the biggest new features integrated.
I could wait 2-3 weeks, I guess. A couple of months? Rather not. draid sounds good though...
 

Tobsen

Dabbler
Joined
Aug 14, 2020
Messages
21
A different question I am asking myself (and the community now).

My HBA can take 8 drives, which I have already purchased. After moving my data over to the server at some point, I will have some spare drives left over: in this case, 4x 4 TB IronWolf that I could use for other things like backups of private files, etc.

HBAs are recommended, but would it be so bad to use the SATA ports on my mainboard?
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
HBAs are recommended, but would it be so bad to use the SATA ports on my mainboard?
No; unless the SATA controller on your motherboard is total crap, it will work just fine.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
And if you manage to run out again and want to add yet more drives, you can always throw a SAS expander behind that HBA. 8 x 6Gb/s is more bandwidth than you'll saturate in a hurry with spinning disks.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
And if you manage to run out again and want to add yet more drives, you can always throw a SAS expander behind that HBA. 8 x 6Gb/s is more bandwidth than you'll saturate in a hurry with spinning disks.
Only for SAS HBAs, not for motherboard SATA controllers.
 

Tobsen

Dabbler
Joined
Aug 14, 2020
Messages
21
I agree. For many applications Raidz2 is great where Raidz3 is a bit overdone and not very economical. Raidz2 offers a good balance between redundancy and usable storage space. To me a pool with 6 to 8 disks is ideal.
So time has passed and I am now creating my first TrueNAS Core pool for my archive. I still have 8x 14 TB drives. I am undecided on how to set up my archive pool.

My considerations right now:

1 vdev of 8 disks in raidz2 = 76.39 TiB, or
2 vdevs of 4 disks each in raidz1 = 76.39 TiB, or
4 vdevs of 2 disks each (mirrors) = 50.91 TiB

I will also keep cold copies of the entire archive on single disks.

The mirrored pool would be easier to expand (and safer against failure?) but also wastes a lot of storage space.

EDIT: Can I also add smaller drives later, in the same configuration as the other vdevs?

What do you think?
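A quick back-of-the-envelope check of those numbers (raw parity math only; actual usable space is a bit lower due to metadata, slop space and padding, which is why pool tools show slightly different figures):

```shell
# Usable space for 8 x 14 TB drives, parity overhead only.
# Drives are sold in TB (10^12 bytes); pools report TiB (2^40 bytes).
tb_to_tib() { awk -v tb="$1" 'BEGIN { printf "%.2f", tb * 1e12 / 1099511627776 }'; }

echo "1 x 8-wide raidz2 : $(tb_to_tib $((6 * 14))) TiB"     # 8 disks - 2 parity = 6 data
echo "2 x 4-wide raidz1 : $(tb_to_tib $((2 * 3 * 14))) TiB" # 2 vdevs x 3 data disks each
echo "4 x 2-way mirrors : $(tb_to_tib $((4 * 14))) TiB"     # 1 data disk per mirror
```

The first two layouts come out identical on paper (~76.4 TiB) because both dedicate 2 of the 8 disks to parity; the difference is that the single raidz2 vdev survives any 2 disk failures, while the raidz1 pair dies if 2 disks in the same vdev fail.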
 
Last edited: