Just joined the FreeNAS

Status
Not open for further replies.

Cody White

Dabbler
Joined
Dec 5, 2015
Messages
16
Hey everyone,

My name is Cody, and I just wanted to pop in and say hi!

In the past I ran RAID setups for personal media collections using hardware RAID 5 and ran into a lot of issues with failing hard drives. When I started, I was trying to keep things on a budget and ended up buying WD Green 1.5 TB drives. I know how many things are wrong with that! Suffice it to say, I learned the hard way what NOT to do.

I have just rebuilt the new beast with hard drives I had lying around, so it's not at optimal settings. I want to replace all of my drives with WD Red 1TB 2.5" drives; to help reduce space I want to migrate to a 2.5" form factor.

This time I am trying to do things.. better... right... I hope.

I will be upgrading the following system bit by bit until I get where I want; as you can see, I already have some minor upgrades due in. I am not looking to build a gaming rig or any other BEAST. I already have that as my main PC, so until there is a valid reason to upgrade the core of my rig I will be running on old hardware.

System 1:
AMD Athlon II X2 7750 - Upgrading to AMD Phenom II X4 965 BE
8GB DDR2 800MHz RAM - Upgrading to 16GB DDR2 800MHz RAM
ASUS M3N78 Pro
LSI MegaRaid SAS 9240-8i - Cross Flashed 2118-IT
Volume 1: 8x 80GB 7200 (Mixed) HDD - Raid 10
NZXT Tempest EVO Chassis
2x Supermicro Mobile Rack CSE-M35TQB

FreeNAS-9.3-STABLE-amd64


Images Here

Let me know what you think.

Thanks!

-Cody.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Welcome!
to help reduce space I want to migrate to a 2.5 form factor
How do 1TB drives @ 2.5" offer space saving over, say, 4TB drives @ 3.5"?
I am trying to do things.. better... right... I hope.
Better, sure, but not what would generally be considered "right" in these forums:
Intel is preferred.
ASUS M3N78 Pro
Looks like it doesn't support ECC RAM.
2x Supermicro Mobile Rack CSE-M35TQB
This appears to be for 3.5" drives :confused:
 

Cody White

Dabbler
Joined
Dec 5, 2015
Messages
16
Thank you for the warm welcome!

How do 1TB drives @ 2.5" offer space saving over, say, 4TB drives @ 3.5"?
Moving to a 2.5" form factor to reduce physical space in my case :) You will understand in a moment.
Intel is preferred.
Looks like it doesn't support ECC RAM.
I know both of these items are potential risks with ZFS; however, for my current needs I am looking to get up and running quickly. In the meantime, I think I might rsync to an external USB drive to try to mitigate the potential loss of data without ECC memory. I will most likely move to a 1U Supermicro case and use my current case as a drive enclosure with a RES2SV240.
This appears to be for 3.5" drives :confused:
I am planning on replacing both CSE-M35TQBs with M28SAB-OEMs.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
In the meantime, I think I might rsync to an external USB drive to try to mitigate the potential loss of data without ECC memory.
It's important to have a backup of any data that doesn't exist elsewhere and cannot be recreated, regardless of other factors, so it's good that you're thinking about this. Using rsync is not strictly backup, unless you combine it with suitable scripting that retains a history of changes. Without a history, you risk overwriting your one good backup with newly corrupted data.

Regarding ECC RAM, there are lots of relevant threads in the forums, so I won't debate it here. However, please at least take note of what the FreeNAS documentation has to say about it, and read the linked case study.
M28SAB-OEM
Let's say you put 3 of these in place of the 2 CSE-M35TQB you have now. The maximum raw storage you can install with WD Red is 24TB. You can achieve 24TB of raw storage with 4 6TB 3.5" drives, with significantly reduced probability of encountering a drive failure, not to mention lower cost and reduced complexity. Of course, 4 drives may not meet your other requirements, but from a storage density point of view, I don't see how 2.5" drives can compete.
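The drive-count arithmetic behind that comparison, sketched out (raw capacity, before any redundancy):

```shell
# 2.5" route: 3 bays x 8 drives x 1TB WD Red (24 drives total)
echo $(( 3 * 8 * 1 ))   # prints 24 (TB raw)
# 3.5" route: 4 drives x 6TB
echo $(( 4 * 6 ))       # prints 24 (TB raw)
```

Same raw capacity, but 24 spindles versus 4, which is where the failure-probability, cost, and complexity arguments come from.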

I say this as someone who tried to justify the use of 2.5" drives in my own build (low power, cool running, etc). I finally had to surrender to the realization that it just didn't make sense.
 

Cody White

Dabbler
Joined
Dec 5, 2015
Messages
16
I say this as someone who tried to justify the use of 2.5" drives in my own build (low power, cool running, etc). I finally had to surrender to the realization that it just didn't make sense.
I am actually in the same debate myself; however, I am leaning in the direction of 2.5".

1) The CSE-M35TQB has a couple of flaws. It requires a SAS-to-SATA adapter with a sideband connector. This works well, and it's what I have; however, the bay supports 5 drives, so that is 1 SAS connector + sideband plus 1 SATA, and when you run a locate, the extra drive on each bay is not supported. I have been unable to find any bays that support SAS/SATA with sideband and full locate capability. Maybe I haven't looked hard enough and am stuck on the Supermicro site. The M28SAB-OEM, on the other hand, holds 8 drives, uses 2x SAS connectors, and has full locate capability on every bay. This seems like a +1 at this point. One thing to keep in mind is that I am trying to do this in a standard mid- to full-size case; I am not looking to have a server case that is 24 inches deep.

2) One of my issues when I was running RAID 5 in the past was that when a drive failed, the rebuild took forever. So I started researching what was going on, and using RAID 5 on larger drives was strongly recommended against. So now I am looking at using RAID 10; that way, on a drive failure, the rebuild is only a 1-to-1 mirror copy with no parity calculation required. Because I am thinking of running RAID 10, larger drives should be OK, but that is still something under debate.
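For reference, the ZFS analogue of RAID 10 is a pool of striped mirrors. A hypothetical `zpool create` for an 8-drive version (pool and device names are assumptions, not from this build):

```shell
# Creates 4 two-way mirrors striped together (RAID10-style).
# On a drive failure, only that mirror's partner is read to resilver,
# and ZFS resilvers only allocated blocks, not the whole disk.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7
```

Expanding later means attaching another `mirror daX daY` pair, which is part of why this layout is popular despite the capacity cost.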

3) I was also thinking of building the base hardware structure for the future :)... Another thought was that if I start building for 2.5" drives, then in the event that larger SSDs drop in price I might consider replacing my Reds with SSDs. However, I think that might still be a couple of years out. I do not have requirements for those kinds of speeds, but I figured SSDs would create less heat, use less power, and in turn maybe suffer fewer failures (spindle failures). Then the speed increase is a bonus.

If some of my information above is incorrect, I am sorry; I do not claim to know everything about NAS builds. This is just from reading forums and articles when I have had issues, plus some reading this past week while rebuilding and trying to come up with a new solution.

Price-to-price for me @ RAID 10, 4 drives:

4x Red 1TB 3.5" = 2TB usable @ $319.96 ≈ $160/TB
4x Red 2TB 3.5" = 4TB usable @ $439.96 ≈ $110/TB
4x Red 4TB 3.5" = 8TB usable @ $791.96 ≈ $99/TB
4x Red 6TB 3.5" = 12TB usable @ $1,199.96 ≈ $100/TB

4x Red 1TB 2.5" = 2TB usable @ $359.96 ≈ $180/TB


Hmm... the 4/6 TB drives have the best price per TB; however, they require a substantial upfront cost, and an incremental upgrade would not work until at least 4 drives are purchased.

This has given me some more things to think about again.

If anyone has any suggestions on enclosures please let me know.

Thanks!
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
  1. My only comment here is, do you really need hot swap for all drives? With a properly planned and executed build, you shouldn't be replacing drives very often.
  2. This is why we use RAIDZ2 instead of RAIDZ1 with larger drives. It has 2 drives' worth of redundancy so a URE during rebuild doesn't immediately result in data loss. It's also worth noting that with RAID5, the controller knows nothing about the filesystem, so it just stupidly rebuilds the entire disk capacity. ZFS only rebuilds the used capacity.
  3. Nice work if you can get it.
4/6 TB have the best price per TB
Don't forget to factor total build cost into $/TB.

EDIT: I should mention that the ZFS equivalent of RAID10 is known as striped mirrors. It's a configuration with many advantages, including higher performance, shorter rebuild time and easier expansion, but it's not as reliable as RAIDZ2 and you spend 50% of your raw capacity on redundancy regardless of the number of drives.
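To put numbers on that trade-off, here is the usable capacity for a hypothetical 8 x 1TB pool under each layout (raw sizes, ignoring ZFS overhead):

```shell
n=8; size_tb=1
# Striped mirrors: half of raw capacity goes to redundancy, always.
echo $(( n * size_tb / 2 ))    # prints 4 (usable TB from 4 two-way mirrors)
# RAIDZ2: two drives' worth of parity, regardless of pool width.
echo $(( (n - 2) * size_tb ))  # prints 6 (usable TB)
```

The wider the pool, the better RAIDZ2's capacity efficiency gets, while striped mirrors stay fixed at 50%.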
 