Newbie FreeNAS build (old PC components vs. new server components)

Tobsen

Dabbler
Joined
Aug 14, 2020
Messages
21
Hello community,

I am new to FreeNAS and this is my first build.

I have read the hardware recommendations as well as several threads here, and I am still not 100% sure what is recommended in my case.

First of all, I am a filmmaker who needs lots of storage for archiving projects (current archive: 45 TB) that are only occasionally needed once finished. I also have a workstation with very fast storage (8 TB of flash storage running in RAID 5) for projects being worked on, which I would like to back up as well.

My current workflow for data ingestion and backup is as follows:

Shooting to SD card or SSD -> transfer to shuttle disk for transportation -> copy from shuttle disk to workstation -> copy to archive disks xxA and xxB -> after mastering, copy project files and their backups from workstation to archive disks

The intended archive server will be my "A" copy. The "B" copy will remain on single disks right now.

Here is what I have left over:

Intel i7-8700K (with Noctua cooler)
ASRock Z370M Pro4
Ripjaws 16GB F4-3200C16D
Seasonic Focus 750W FX

Here is the rest I am planning to use:

Fractal Define 7 XL case
8x 14TB WD Red (for archive, already purchased)
LSI 9300-8i HBA
Chelsio T520-SO-CR
SFP+ to 10GBase-T adapter to connect to the home network

A second pool of drives (4x 4TB IronWolf NAS) could also be considered for backing up all data and working files from the workstation.

I am not sure whether I should repurpose the old PC parts or sell them to buy dedicated server components (Intel and Supermicro), or even go with a Ryzen 5/ASRock Rack X470D4U setup.

I really appreciate any advice.
 

Tobsen

Dabbler
Joined
Aug 14, 2020
Messages
21
I forgot to mention my second question.

In my particular case, should I go for RAID-Z (1 or 2) or striped mirrors? With RAID-Z2, for instance, I should have around 67 TB of usable storage (80% of 84 TB), leaving me with 22 TB of headroom, which will last 1.5-2 years. Then I would need to expand somehow.

If I go with striped mirrors, I would lose more capacity, but it would be easier to expand later on, right?
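
Here is the rough math behind those numbers, as a quick Python sketch (assuming decimal drive-label TB and the usual "keep ~20% free" rule of thumb; ZFS metadata and padding overhead are ignored, so real numbers land a bit lower):

```python
# Back-of-the-envelope usable capacity for 8x 14TB drives.
DISKS = 8
SIZE_TB = 14     # decimal TB, as printed on the label
FILL = 0.80      # usable fraction before performance starts to suffer

# RAID-Z2: two disks' worth of parity in the vdev
raidz2 = (DISKS - 2) * SIZE_TB * FILL

# Striped mirrors: four 2-way mirror vdevs, half the raw capacity
mirrors = (DISKS // 2) * SIZE_TB * FILL

print(f"RAID-Z2 (8-wide): ~{raidz2:.1f} TB usable")   # ~67.2 TB
print(f"Striped mirrors:  ~{mirrors:.1f} TB usable")  # ~44.8 TB
```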
 

Sasquatch

Explorer
Joined
Nov 11, 2017
Messages
87
For ZFS, go for ECC RAM, unless you don't mind losing all of your data. If your old PC motherboard supports ECC then go ahead and reuse it, but only with ECC RAM.

A Chelsio T520-SO-CR plus an SFP+ to RJ45 module may cost you more than a T520-BT, T420-BT, or Intel X5x0-T2 (all native RJ45).
Going fibre works out even cheaper, unless your workstation has built-in 10GbE RJ45 Ethernet.

Current FreeNAS allows for RAID-Z pool extension; read the manual on the requirements.
RAID-Z1 is high risk: there is roughly an 8% chance that a second disk will fail during resilvering, and that would mean pool loss.
RAID-Z2 gives better safety than striped mirrors too; if the wrong second drive fails in a striped mirror pool, you're f***d.
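
The exact odds depend heavily on what you assume about error rates and resilver times, so treat the 8% as a ballpark. One common back-of-the-envelope, as a Python sketch with assumed (not measured) inputs:

```python
import math

# Chance of hitting an unrecoverable read error (URE) while resilvering
# a degraded 8-wide RAID-Z1: every surviving disk must be read in full,
# and with no parity left a URE means lost data. Inputs are assumptions;
# vendor URE specs range from 1e-14 to 1e-15 per bit read.
DISK_TB = 14
SURVIVORS = 7          # 8-wide RAID-Z1 with one disk already dead
URE_PER_BIT = 1e-15    # assumed rate, the optimistic end of the specs

bits_read = SURVIVORS * DISK_TB * 1e12 * 8
expected_ures = bits_read * URE_PER_BIT
p_at_least_one = 1 - math.exp(-expected_ures)  # Poisson approximation

print(f"Expected UREs during resilver: {expected_ures:.2f}")
print(f"P(at least one URE):           {p_at_least_one:.0%}")  # ~54%
```

With the pessimistic 1e-14 spec the same model makes an error near-certain; the point is less the exact number than that the risk grows with disk size and vdev width.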
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
You can expand RAID-Z pools now.

Well. Not as such. You can replace every disk with a higher-capacity one, yes. That's always been supported. You cannot extend the width of a RAID-Z vdev (the number of physical disks in it) after the fact.

ECC is nice; I'm not sure it's required. Yes, bad memory can cost you all your data, if you get very unlucky. You're still protected by parity and checksums. You know your own risk profile best, and how solid your backups are. Act accordingly. That board and that CPU do not support ECC.
 

Tobsen

Dabbler
Joined
Aug 14, 2020
Messages
21
For ZFS, go for ECC RAM, unless you don't mind losing all of your data. If your old PC motherboard supports ECC then go ahead and reuse it, but only with ECC RAM.

My old motherboard doesn't support ECC. I am not sure how important it is if I have a copy of all the data on separate disks outside the server in cold storage. It certainly sounds better, but maybe ECC is more important for other types of applications where data is more often accessed, modified, etc.
 

Tobsen

Dabbler
Joined
Aug 14, 2020
Messages
21
Well. Not as such. You can replace every disk with a higher-capacity one, yes. That's always been supported. You cannot extend the width of a RAID-Z vdev (the number of physical disks in it) after the fact.

If I read the manual correctly, in my case I would have to add another 8 drives of the same capacity in a new vdev and stripe it with the first? That is a bit annoying and expensive in this case. Or is there another way I have overlooked?
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
For ZFS, go for ECC RAM, unless you don't mind losing all of your data.
This is VERY overstated. ECC is recommended and I fully agree with that recommendation. But losing all your data just because of the use of non-ECC memory is nonsense.
 

Tobsen

Dabbler
Joined
Aug 14, 2020
Messages
21
This is VERY overstated. ECC is recommended and I fully agree with that recommendation. But losing all your data just because of the use of non-ECC memory is nonsense.

OK, so let's say I decide to sell my old parts and buy a dedicated server board and CPU that support ECC. ...

What CPU + mainboard combo would you recommend for an archive & backup server in a 10GbE network?
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Buying new or from eBay? If new, an X11SCL-F with an i3-9100F or E-2124 and 32GiB or so of ECC RAM. That's assuming you are going to use an add-in NIC for 10G. If 10GBase-T is decided already, then maybe a board with onboard 10GBase-T: an X11SSH-TF with an i3-7100 or E3-1220 v6.

The Xeon option gives you more addressable memory in the first case, 128GiB instead of 64GiB; that may be worth it to you.

For the slightly older X11SSH-TF, addressable memory is 64GiB for both CPUs.

I expect that for your use case 32GiB is plenty: a single module on the newer board, two modules on the older board.

If used, see the Resources section for some X10 options.
 

Sasquatch

Explorer
Joined
Nov 11, 2017
Messages
87
The A2SDi-H-TF seems to tick all the boxes for @Tobsen: 2x mini-SAS plus 4 SATA connectors and onboard 10GBase-T, plus a PCIe x4 slot for an optional extra HBA for expansion beyond 10 HDDs. Probably the same board as in the FreeNAS Mini+ :wink:
 

Tobsen

Dabbler
Joined
Aug 14, 2020
Messages
21
If new, an X11SCL-F with an i3-9100F or E-2124 and 32GiB or so of ECC RAM. That's assuming you are going to use an add-in NIC for 10G. If 10GBase-T is decided already, then maybe a board with onboard 10GBase-T: an X11SSH-TF with an i3-7100 or E3-1220 v6.
Thank you @Yorick - I will look into these options.
 

Tobsen

Dabbler
Joined
Aug 14, 2020
Messages
21
The A2SDi-H-TF seems to tick all the boxes for @Tobsen: 2x mini-SAS plus 4 SATA connectors and onboard 10GBase-T, plus a PCIe x4 slot for an optional extra HBA for expansion beyond 10 HDDs. Probably the same board as in the FreeNAS Mini+ :wink:

True, this board does tick a lot of boxes. Thanks for pointing it out. I own an LSI 9300-8i, though, which needs a PCIe 3.0 x8 slot. It would be great if I could use that as well. Maybe there is a version with more PCIe slots.
 

Sasquatch

Explorer
Joined
Nov 11, 2017
Messages
87
True, this board does tick a lot of boxes. Thanks for pointing it out. I own an LSI 9300-8i, though, which needs a PCIe 3.0 x8 slot. It would be great if I could use that as well. Maybe there is a version with more PCIe slots.
The C3758's PCIe lanes are maxed out on that board; more lanes = a different CPU.
Mini-ITX limits expandability no matter what board you get, although most (all?) server-grade ones support bifurcating the PCIe x16 slot into 2x x8.
But that opens another can of worms called bifurcated risers!
If you want expandability, go with the ASRock X470D4U:
Ryzen 3 2300 (any more CPU will be wasted on a storage-only FreeNAS)
LSI 9300 in the PCIe x8 slot
10GbE NIC in the PCIe x4 slot
and you have the PCIe x16 slot left for, say, a 16-port HBA for another two 8-HDD-wide vdevs.
Plus you can relatively cheaply jump to a Ryzen 7 or 9 and run a couple of VMs.

If you go Intel, say LGA1151, then affordable CPUs end at the i3 (perfectly fine for storage-only use and one-user media streaming/transcoding); i5 and up don't support ECC, and Xeons have appalling performance/£ compared to Ryzens.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
The Ryzen/ASRock Rack combo works well for people who have tried it. There's a quirk: because AMD doesn't validate ECC on Ryzen (as in, it works, but AMD doesn't go so far as to test it to the extent they test EPYC), reporting ECC errors in IPMI doesn't work. Reporting through TrueNAS Core does work.

Personally, I like having ECC errors show up in IPMI (and having it email me from there). You described a pure storage application, which means the added compute of Ryzen would not benefit you.

How important IPMI error reporting is to you is a personal choice. Certainly the Ryzen build will work.
 

Tobsen

Dabbler
Joined
Aug 14, 2020
Messages
21
Ryzen 3 2300 (any more CPU will be wasted on a storage-only FreeNAS)
LSI 9300 in the PCIe x8 slot
10GbE NIC in the PCIe x4 slot
and you have the PCIe x16 slot left for, say, a 16-port HBA for another two 8-HDD-wide vdevs.
Plus you can relatively cheaply jump to a Ryzen 7 or 9 and run a couple of VMs.
Sounds good... thanks for the advice and clarification.
When you say 10GbE in PCIe x4, you are referring to a single-port NIC, right? A Chelsio T520 and similar models need x8, I assume.
 

Tobsen

Dabbler
Joined
Aug 14, 2020
Messages
21
AMD doesn’t go so far as to test it to the extent they test EPYC), reporting ECC errors in IPMI doesn’t work. Reporting through TrueNAS Core does work.
OK, so it works in TrueNAS Core beta 2? I have not upgraded to TrueNAS Core yet. Is there a way to get a notification from the FreeNAS or TrueNAS GUI if ECC errors are detected?

Do you recommend upgrading to TrueNAS Core now, or should I wait for the RC or full release?
 

Sasquatch

Explorer
Joined
Nov 11, 2017
Messages
87
Sounds good... thanks for the advice and clarification.
When you say 10GbE in PCIe x4, you are referring to a single-port NIC, right? A Chelsio T520 and similar models need x8, I assume.
All 10GbE NICs are x8 but will work at full speed in an electrical x4 slot; that's true for PCIe 3.0 and 2.0 with a single link.
The X470 has PCIe:
2x x16 slots, one of them electrical x8
1x x8 slot, electrical x4: it will take an x8 card, and a PCIe 3.0 10GbE NIC (T520-xx) will have enough bandwidth for 2 links; a PCIe 2.0 NIC (T420-xx) will be limited to ~70% on 2 links.
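
Quick sanity check on those numbers, as a Python sketch (the protocol-overhead figure is an assumption; actual efficiency varies by card and workload):

```python
# Usable PCIe bandwidth in an electrical x4 slot vs. what two 10GbE
# links need. Line-code efficiency is exact; protocol overhead is an
# assumed rough figure (~15% lost to packetization), not a measurement.
GENS = {
    "PCIe 2.0": (5.0, 8 / 10),     # GT/s per lane, 8b/10b encoding
    "PCIe 3.0": (8.0, 128 / 130),  # GT/s per lane, 128b/130b encoding
}
LANES = 4
OVERHEAD = 0.85        # assumed fraction left after protocol overhead
NEEDED_GBPS = 2 * 10   # two 10GbE links, per direction

for gen, (gtps, encoding) in GENS.items():
    usable = gtps * encoding * LANES * OVERHEAD
    print(f"{gen} x{LANES}: ~{usable:.1f} Gb/s usable, "
          f"{usable / NEEDED_GBPS:.0%} of two 10GbE links")
# PCIe 2.0 x4: ~13.6 Gb/s, ~68%  -> two links throttled to roughly 70%
# PCIe 3.0 x4: ~26.8 Gb/s, ~134% -> two links fine
```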
 

Tobsen

Dabbler
Joined
Aug 14, 2020
Messages
21
All 10GbE NICs are x8 but will work at full speed in an electrical x4 slot; that's true for PCIe 3.0 and 2.0 with a single link.
The X470 has PCIe:
2x x16 slots, one of them electrical x8
1x x8 slot, electrical x4: it will take an x8 card, and a PCIe 3.0 10GbE NIC (T520-xx) will have enough bandwidth for 2 links; a PCIe 2.0 NIC (T420-xx) will be limited to ~70% on 2 links.
Thanks!
 


Evertb1

Guru
Joined
May 31, 2016
Messages
700
Do you recommend upgrading to TrueNAS Core now, or should I wait for the RC or full release?
Depending on how important your system is to you, I would wait for the release. On my production environment, no betas or RCs. And I have made it a habit never to be an early adopter with updates or new releases. I do have an ESXi lab server, so I can run it in a VM, but most of the time I wait a bit and keep an eye on the forum for a couple of days after a new release or update has become available.
 