I have lots of little questions... I certainly don't expect any one person to answer all of this. I know everyone has different recommendations and opinions, and I'm open to hearing it all! - THANK YOU IN ADVANCE!
Basic Specs and Usage:
Mostly used by just me (occasionally up to 2 or 3 people streaming via Plex).
Raidz2 - Probably 10 drives total
I store lots of backup data: personal pictures / documents / music, etc. (I add about 20GB of new data each day - tons of Word / Excel documents, music, some videos, program installers, etc.).
I stream movies via SMB locally and via Plex (local and remote) - mostly 720p / 1080p content, but I know that will shift to more and more 4K content.
I replicate (nightly) all of this to another local FreeNAS server via rsync.
I use snapshots (just in case of virus or accidental deletions). Online backup for the super important stuff.
My current servers, running similar hardware (and sometimes slower / older-gen i7 processors), all seem to run fine (just a little warm / hot due to poor circulation). I assume my processor usage rarely comes close to maxing out. I think my biggest bottleneck (besides 1G Ethernet) might be the SATA adapter (listed below).
I have built many FreeNAS servers since 2011. Most of them have been built from almost-new or "used" Dell XPS towers (8300, 8500, 8700, and 8900), so I feel fairly comfortable with the basics. For my personal servers, I have managed to secure 8 spinning drives in these desktop towers, which I think run a little warmer than normal due to poor circulation.
For this new server, I purchased a 12-bay Rosewill server chassis ($250): https://smile.amazon.com/gp/product/B00N9CXGSO/
I was going to buy a new motherboard with 10-12 SATA ports, but after searching tonight I gave up as I really don't know what's compatible. I know the boards I was looking at probably would have worked, but I would LOVE to hear a few recommendations on motherboards (I'm not planning on using ECC; build info and usage below). While I will probably use a motherboard from a Dell XPS 8920, I may change my mind based on recommendations.
RAM: non-ECC (for my existing board at least), probably 32GB PC4-2133P (what's compatible with my board). I'm not against ECC, but if I'm reusing a computer I already own, it's just not compatible. I'm open to buying a motherboard that supports it - I just don't know what to look for. My thought was to buy a board with all the needed SATA ports built in to reduce / eliminate heat from an add-on card. For years I assumed that motherboard SATA ports were better / more reliable, but after researching tonight, I see that might not be true. What is recommended: using the motherboard ports, or putting all drives on 1 or 2 SATA expansion cards?
SATA Adapter:
I have been using these IO Crest SATA cards as these Dell XPS desktops only have 4 or 5 SATA ports ($30): https://smile.amazon.com/gp/product/B00AZ9T264
But tonight (for this build) I purchased an LSI SAS 9211-8i ($90) / LSI00194 - https://smile.amazon.com/gp/product/B0056FIJP2/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1
These are / will be plugged into an x16 slot (the graphics slot). What is the approximate theoretical max rate of a RAIDZ2 with 10 or 12 drives? I saw that these WD drives pushed about 185 MB/s max. I ask because 1) I have no clue and 2) I may want to get 10GbE someday...
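To put rough numbers on my own question, here's my back-of-the-envelope math (assuming the ~185 MB/s per-disk figure I saw is the outer-track sequential max; real-world RAIDZ2 throughput will come in lower than this upper bound):

```python
# Rough theoretical sequential throughput of a RAIDZ2 vdev.
# RAIDZ2 stripes data across (n - 2) disks; the 2 parity disks
# don't add read/write bandwidth.
# Assumption: ~185 MB/s max sequential per disk (from benchmarks I saw).

PER_DISK_MBPS = 185  # MB/s, assumed per-disk sequential rate

def raidz2_seq_mbps(n_disks: int, per_disk: float = PER_DISK_MBPS) -> float:
    """Upper bound on sequential throughput: data disks * per-disk rate."""
    return (n_disks - 2) * per_disk

for n in (10, 12):
    mbps = raidz2_seq_mbps(n)
    print(f"{n} drives: ~{mbps:.0f} MB/s (~{mbps * 8 / 1000:.1f} Gb/s)")

# 10 drives: ~1480 MB/s; 12 drives: ~1850 MB/s.
# Either would saturate 10GbE (~1250 MB/s) in theory;
# on 1GbE (~117 MB/s usable) the network is the bottleneck regardless.
```

If this math is right, either layout is far faster than my 1G network, so 10GbE wouldn't be wasted on the pool.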
I have purchased quite a few WD Red drives (about $150 each / normally $200-300): WD80EFAX SATA, 256MB cache (shucked from WDBCKA0080HBK-NESN / https://www.bestbuy.com/site/wd-eas...-3-0-hard-drive-black/5792401.p?skuId=5792401 ).
Drive 4K / 512b alignment: I have read that some drives "lie" to the OS about their sector size. What is the process to check this on my existing server that uses these same drives? And what is the process to force the drives to 4K (which I understand is best for performance) before / during FreeNAS setup / pool creation?
I have read in several places that I should use RAIDZ2 with 10 drives for optimal performance and alignment. I have 12 bays in this chassis. If I set up RAIDZ2 with 12 drives (vs 10 drives), what kind of real-world performance hit am I looking at? Also, is this bad for the drives? Does it wear them out quicker?
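I tried to model the space-efficiency side of this myself. My understanding is that the "power of two data disks" advice is about how a record splits into 4K sectors across the data disks, plus RAIDZ's skip-sector padding. This is a simplified sketch (assuming ashift=12, the default 128K recordsize, and no compression - it ignores small blocks and metadata, so treat it as rough):

```python
import math

SECTOR = 4096            # ashift=12 -> 4 KiB sectors (assumed)
RECORDSIZE = 128 * 1024  # ZFS default recordsize
DATA_SECTORS = RECORDSIZE // SECTOR  # 32 sectors of real data per record

def raidz2_alloc_sectors(n_disks: int, data_sectors: int = DATA_SECTORS) -> int:
    """Approximate sectors allocated for one record on an n-wide RAIDZ2 vdev.

    Data is striped across (n - 2) data columns; each row carries 2 parity
    sectors; the total is then padded up to a multiple of (parity + 1) = 3
    (RAIDZ skip-sector padding). A rough model, not exact ZFS accounting.
    """
    data_disks = n_disks - 2
    rows = math.ceil(data_sectors / data_disks)
    total = data_sectors + rows * 2      # data + parity sectors
    return math.ceil(total / 3) * 3      # pad to a multiple of 3

for n in (10, 12):
    total = raidz2_alloc_sectors(n)
    print(f"{n}-wide: {total} sectors per 128K record "
          f"({DATA_SECTORS / total:.1%} space efficiency)")

# Both widths come out at 42 sectors per 128K record in this model,
# suggesting the 10-vs-12 difference is tiny for large records and
# matters mainly for small blocks.
```

If this model is anywhere near right, the 12-wide penalty for my workload (large media files) looks small - but I'd love confirmation from someone who knows the allocator better.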
Power supply: What rating of power supply should I use? Currently I've been using the PSU that came with the XPS computers (460 watt max). PSU specs: https://http2.mlstatic.com/fonte-de...mmv-D_NQ_NP_19541-MLB20172509054_102014-F.jpg
Is that PSU powerful enough for 10-12 drives?
8TB drive specs: rated 5V / 400mA, 12V / 550mA per drive.
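My own math on the drive power draw, from the label ratings above (the spin-up figure is an assumption on my part - I've seen ~2 A on the 12 V rail quoted for drives like these, but the WD datasheet would be the real source):

```python
# Steady-state power draw for the drive array, from the label ratings.
N_DRIVES = 12
P_5V = 5.0 * 0.400    # 2.0 W per drive on the 5 V rail
P_12V = 12.0 * 0.550  # 6.6 W per drive on the 12 V rail

steady_w = N_DRIVES * (P_5V + P_12V)

# Spin-up surge is much higher than steady state. ASSUMPTION: ~2 A per
# drive on the 12 V rail during spin-up (check the datasheet).
surge_12v_w = N_DRIVES * 12.0 * 2.0

print(f"steady state: ~{steady_w:.0f} W for {N_DRIVES} drives")
print(f"worst-case simultaneous spin-up (12 V rail only): ~{surge_12v_w:.0f} W")

# steady state: ~103 W; simultaneous spin-up: ~288 W on 12 V alone.
```

So ~103 W steady for the drives leaves headroom on a 460 W unit, but the simultaneous spin-up surge plus CPU/board is what I'd want sanity-checked against the PSU's 12 V rail rating (and whether staggered spin-up applies).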
Network speed: I have a gigabit switch and have set up some of my servers to use LACP with 2 or 4 Ethernet cables. After I installed them and set up LACP on the server and switch, I realized that while the links were capable of 2Gb/s or 4Gb/s combined, a single SMB transfer only uses one of the links, maxing out at 1Gb/s. (Please correct me if this is not true.) Even so, this lets my servers back up to each other at 1Gb/s while I push data to a server from my computer at a full 1Gb/s. I'm OK with 1Gb/s, as the price for 10G seems too much for me - I can see multiple $300 cards plus a switch adding up very quickly ($$$).
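For anyone wondering why SMB tops out at 1Gb/s over LACP, my understanding is that the switch/NIC hash each *flow* (by MAC/IP/port tuple) onto a single member link, and one SMB session is one TCP flow. A toy illustration of the idea (real switches use vendor-specific hashes, not Python's `hash`):

```python
# Toy model of LACP flow distribution: every packet of a given flow
# (src, dst, ports) hashes to the same member link, so a single SMB/TCP
# session can never exceed one link's bandwidth.

def pick_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
              n_links: int) -> int:
    # Stand-in for a vendor L2/L3/L4 hash: stable per flow within a run.
    return hash((src_ip, dst_ip, src_port, dst_port)) % n_links

# One SMB session -> one flow -> one link, no matter how many packets:
links = {pick_link("192.168.1.10", "192.168.1.20", 50000, 445, 4)
         for _ in range(1000)}
print(len(links))  # always 1 within a run - the whole session rides one link
```

Multiple simultaneous flows (different clients or ports) can land on different links, which is why LACP still helps with several machines transferring at once.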
With that said... any obvious bottlenecks or performance issues that I could fix? One of my biggest concerns is wasted processing power / overhead due to the 10/12-drive RAIDZ2 and the 4K drive alignment issue.