Specific build components list - up to 32GB RAM

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Chris Moore submitted a new resource:

Specific build components list - up to 32GB RAM - First FreeNAS

Mar 3, 2019:

I put this together because I was directly asked for it (or something very similar) on several occasions. While this may not be perfect for you, I have put three builds together using the same model of hardware and they have worked for my purposes. If you don't plan to use Plex, you could probably go with a lower-performance CPU. I tried to target this to the majority of users seeking a first-time build.


Easily mount ten or eleven hard drives:

CASE: Fractal Design Define R5 Black...

Read more about this resource...
 

melloa

Wizard
Joined
May 22, 2016
Messages
1,749
Hi @Chris Moore, thank you for this resource. It's funny that, last time I checked, we didn't list the X9s as options in the recommended hardware. I find them cheap and reliable, they take DDR3 and come in single/dual-CPU versions, etc., so they are my board of choice.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Hi @Chris Moore, thank you for this resource. It's funny that, last time I checked, we didn't list the X9s as options in the recommended hardware. I find them cheap and reliable, they take DDR3 and come in single/dual-CPU versions, etc., so they are my board of choice.
There was once an older resource that listed older components.
Hardware Recommendations by @cyberjock - from 26 Aug 2014 - and still valid
https://forums.freenas.org/threads/hardware-recommendations-read-this-first.23069/
The new components are great, but the cost of a full new build causes some people to go for less desirable consumer gear. I just want folks to know there are options.
 

melloa

Wizard
Joined
May 22, 2016
Messages
1,749
The new components are great, but the cost of a full new build causes some people to go for less desirable consumer gear. I just want folks to know there are options.

That happened with me and others. My learning cost was a few thousand US dollars. In fact, I was able to sell my gear and fund my server-grade replacements.

Again, great idea and I hope people will read and understand that there are other options out there.
 

BKG

Dabbler
Joined
Mar 11, 2019
Messages
18
Hi Chris, I'm new to FreeNAS and would appreciate your advice on which SAS expander/SAS HBA is best for my build. I want to build two units: a FreeNAS host in a 12-bay 4U case and a 24-bay JBOD expander. It will all be used in … Which cards would be best to use with this system? Thanks in advance.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Hi Chris, I'm new to FreeNAS and would appreciate your advice on which SAS expander/SAS HBA is best for my build. I want to build two units: a FreeNAS host in a 12-bay 4U case and a 24-bay JBOD expander. It will all be used in … Which cards would be best to use with this system? Thanks in advance.
I use two of these in one of my systems. They work great. You just need to ensure the latest firmware is flashed.
https://www.ebay.com/itm/IBM-46M0997-ServeRAID-Expansion-Adapter-16-Port-SAS-Expander/122281398896

Here is a link to a video that talks about how to flash the firmware:
https://www.youtube.com/watch?v=Lw4TTI_HYqM

This is also an interesting video as it explains how you can use that expander to run up to 24 drives:
https://www.youtube.com/watch?v=qccpopxc_Uo
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I use two of them in a 24-bay chassis with 12 drives to each card.
If you give me a little more about the chassis you're using, I could make some suggestions.
 

BKG

Dabbler
Joined
Mar 11, 2019
Messages
18
Thank you, Chris. I can budget more money for the card if there is a better one. In a nutshell, I'm building storage that the CCTV server will archive footage to. Reliability and scalability are key. So, for instance, if in the future I want to add another JBOD enclosure, will that be possible? Please look at the attached picture for clarification. Thanks again.
 

BKG

Dabbler
Joined
Mar 11, 2019
Messages
18
I use two of them in a 24-bay chassis with 12 drives to each card.
If you give me a little more about the chassis you're using, I could make some suggestions.
That would be great. I will post more specific scenario later tonight.
 

BKG

Dabbler
Joined
Mar 11, 2019
Messages
18
Here is the PDF again. First one didn't load.
 

Attachments

  • SAS NET.pdf
    39.5 KB · Views: 496

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Here is the PDF again. First one didn't load.
I have a lot of ideas for this, but I need to know how much you are budgeting for the build.
Do you already have hard drives? Are you going to fill it up all at once?
 

BKG

Dabbler
Joined
Mar 11, 2019
Messages
18
Hi Chris, here is my plan:

1. A 4U rackmount server case or chassis with 12 SATA/SAS hot-swap drive bays. This will be the FreeNAS host with all the guts (CPU, RAM, and a non-RAID SAS HBA card to run the 10TB drives).
2. A rackmount server case with 24 hot-swappable SATA/SAS drive bays. This will be the expansion unit (a motherboard with no CPU, just hosting the SAS cards, plus a power supply).

Data will be archived at scheduled times over the local network connection.

I have budget for cards/controllers.

In the future I want to be able to add up to two more 24-bay expansion enclosures.

Thanks again.
 

Attachments

  • SAS NET.pdf
    39.5 KB · Views: 428

BKG

Dabbler
Joined
Mar 11, 2019
Messages
18
If decided upon, all the hardware will be purchased and assembled at once. As for budget, the idea is to save money compared to ready-built solutions, which I see as possible in this case. I'm open to shipping in from the US, but of course I have to see the price, and customs might be involved as well. But like I said - yes, I am interested.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
If decided upon, all the hardware will be purchased and assembled at once. As for budget, the idea is to save money compared to ready-built solutions, which I see as possible in this case. I'm open to shipping in from the US, but of course I have to see the price, and customs might be involved as well. But like I said - yes, I am interested.
You might want to consider a drive enclosure like this, and see what kind of deal they will make because it is a very economical way to get hot-swap drive bays, both in terms of cost and rack space:
https://www.ebay.com/itm/New-HGST-4...-J-12-JBOD-2x2x4lane-SAS3-12Gb-s/132963111773
That is only a SAS attached drive enclosure with no processor. It must be attached to a server.

As for the server chassis, you could go with one like this, which is what I use at home:
https://www.ebay.com/itm/4U-48-Bay-...2x-Xeon-Low-Power-6-Core-2-26Ghz/132799185756

That unit doesn't say it comes with any external SAS ports, so it probably doesn't, but you can easily add a couple of cards like this:
https://www.ebay.com/itm/LSI-9200-8...IT-Mode-ZFS-FreeNAS-unRAID-NoROM/163534822734

or this:
https://www.ebay.com/itm/LSI-9201-1...IT-Mode-ZFS-FreeNAS-unRAID-NoROM/162872615455

I have a system at work with six drive shelves attached that is running 124 drives. I sure wish I could swap in gear like this, because what I have takes up more than half the server rack. This hardware works great and is about the best value you are likely to find.
 

BKG

Dabbler
Joined
Mar 11, 2019
Messages
18
Chris, thanks a lot for all the help - these options look interesting! I don't think it would be a problem to ship in. Also, I am new to FreeNAS and need to understand the system's capabilities. For instance, I know RAID and what I am sacrificing when choosing certain builds for certain scenarios. But how does ZFS compare to the regular RAID levels (0, 1, 5, 6, 10, 60)? What is the best option when building a large system with 60 drives, or a small one with 8 drives? Can the storage be expanded in the future? Thanks.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
This is a PowerPoint presentation that does a fair job of introducing ZFS: https://1drv.ms/p/s!AipQGpAyAeDjgVrOZeXxYNhvq6WX
It is what I used to get started. There is a LOT to ZFS, and it takes a while to take it all in.

The short form of the answer: in ZFS, the pool is a collection of vdevs (virtual devices), and all the vdevs in the pool are striped together like a RAID-0. Any data sent into the pool is split among all vdevs, so if a single vdev fails, the entire pool fails. Redundancy in ZFS is done at the vdev level, so a vdev might be a mirror (kind of like RAID-1), or it could be RAIDz (kind of like RAID-5, 1 disk worth of parity), RAIDz2 (kind of like RAID-6, 2 disks worth of parity), or RAIDz3 (kind of like RAID-6+, 3 disks worth of parity).

For storing video (surveillance, right?) you probably want RAIDz2 vdevs, and to ensure you have a fast enough IO rate to deal with your data, you will want many vdevs. I would suggest going with either six or eight drives per vdev (probably six) so you can have more vdevs. It is generally not a good idea to have more than about ten drives in a vdev, though I have heard of a system where they put 45 drives in a single vdev. The thing about vdev performance is that, for random IO, each vdev is generally equivalent to a single physical disk. All data written into the pool is automatically checksummed by ZFS so that it can be monitored for errors, regardless of the type of redundancy selected.

General rule of thumb: if you need very high IOPS, for virtualization for example, you would use mirror vdevs so that you can have more vdevs without needing a massive number of drives.
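To make the layout concrete, here is a minimal sketch of what the equivalent pool-creation command would look like for six RAIDz2 vdevs of six disks each. The pool name "tank" and the da0-da35 device names are placeholders, and on FreeNAS you would normally build the pool from the GUI rather than the command line:

Code:
# Hypothetical pool: six RAIDz2 vdevs, six disks per vdev.
# "tank" and the da* device names are placeholders, not a real system.
zpool create tank \
  raidz2 da0  da1  da2  da3  da4  da5 \
  raidz2 da6  da7  da8  da9  da10 da11 \
  raidz2 da12 da13 da14 da15 da16 da17 \
  raidz2 da18 da19 da20 da21 da22 da23 \
  raidz2 da24 da25 da26 da27 da28 da29 \
  raidz2 da30 da31 da32 da33 da34 da35

Each raidz2 keyword starts a new vdev, and ZFS stripes writes across all six - which is exactly why losing any one whole vdev loses the pool.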

Here is a link to a capacity calculator that I like when I am trying to decide on the number of drives and the capacity of the drives for a particular project:
https://wintelguy.com/zfs-calc.pl

Here is an example with 1TB drives, 6 drives per vdev, and 6 vdevs in RAIDz2; just ignore the price.

[Screenshot: Capture.PNG - the calculator's output for this layout]
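As a quick sanity check on the calculator, here is a back-of-envelope sketch using the simple rule that each RAIDz2 vdev stores data on all but two of its disks. This ignores ZFS metadata and slop-space overhead, so the real usable figure will be somewhat lower:

Code:
# 6 vdevs x (6 disks - 2 parity) x 1 TB per disk
echo $(( 6 * (6 - 2) * 1 ))   # prints 24 (TB of data space, before overhead)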
 

jmckey

Cadet
Joined
Mar 14, 2019
Messages
1
Thanks for putting this together. I thought this was such a great resource, and it interested me so much that I think this will be the route I go, despite my prior poking into creating a cheap/used solution that is ESXi 6.0 compatible (i.e., on their official compatibility list). I'm curious about how the Dell/LSI IT-mode card will handle many drives booting up at once (say 8 total), plus there being 2 drives attached to the motherboard SATA and on the same power supply. I'm guessing the LSI card in IT mode has some spin-up options, but I'm wondering what I need to consider in terms of whether my mix of older SATA hard drives supports the spin-up delay, and whether my power supply or motherboard might be a limitation as well.

Right now I'm considering using a 9-year-old case that has 10 bays I can use for hard drives, with a Gigabyte GA-EP45-UD3R (again, 9 years old) that has 6 onboard SATA II connectors. The power supply is a 500W PC Power & Cooling "Silencer" EPS12V. I plan to eventually go with the 512GB RAM solution you suggest in another thread (Xeon/Supermicro, etc.), but for now I need to try to use this existing hardware due to budget constraints, and if that means going with fewer drives I may have to do that.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I'm curious about how the Dell/LSI IT-mode card will handle many drives booting up at once (say 8 total)
Totally a non-issue. I had all the drives in my system on a Dell H310 before I upgraded to a newer model card, and I only did that because I wanted to and had the opportunity to do it at no cost. If I recall correctly, with the correct assortment of SAS expanders, you can run up to 512 hard drives from the Dell H310 card. You must have a power supply that is able to handle the power draw of all those drives starting at once, but that is a completely different subject. The controller card does not provide the power, and no spin-up delay is needed if you have an adequately sized power supply.
Right now I'm considering using a 9-year-old case that has 10 bays I can use for hard drives, with a Gigabyte GA-EP45-UD3R (again, 9 years old) that has 6 onboard SATA II connectors. The power supply is a 500W PC Power & Cooling "Silencer" EPS12V.
That power supply is probably inadequate because that old system board, processor and memory are going to take a lot of power on their own. Here is a guide, but the guide was intended for newer hardware:

Proper Power Supply Sizing Guidance
https://forums.freenas.org/index.php?threads/proper-power-supply-sizing-guidance.38811/
but for now I need to try to use this existing hardware due to budget constraints, and if that means going with fewer drives I may have to do that.
It might be alright for four drives, but I would not try to go to eight. The amount of power the other components draw is more of a factor than you might realize. If you had a super-low-power system board, you could run nine or ten drives on a good-quality 550 watt power supply. With the old system you are talking about using, the CPU and other system board components will be gobbling up all your power, leaving none for the hard drives.
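To put rough numbers on it - and these wattages are assumptions for illustration, not measurements, so check your drives' data sheets for the actual spin-up draw:

Code:
# Assumed figures: ~25 W peak per 3.5" drive at spin-up, ~200 W for an
# older board + CPU + RAM. Ten drives on a 500 W unit leaves little margin:
echo $(( 10 * 25 + 200 ))   # prints 450 (watts peak, before fans/HBA/margin)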
 