New to FreeNAS, need help on hardware


rockhead006

Dabbler
Joined
Jul 11, 2018
Messages
13
Ok, so I'll go for the SAS card, which would also leave the motherboard's SATA connections free for the boot drive. I'll find a small SSD somewhere. What is the recommended boot drive size for FreeNAS? 32GB?

I'll also up my PSU from 450W to 650W. I don't foresee needing more than that, as an online PSU calculator put my build at about 300W.

I'll go for RAIDZ2 with 8 drives, which should give me 24TB of storage. I see the case I selected has room for 10 x 3.5" drives and 2 x 2.5" drives, so I'll have space to put a couple of my existing 3TB drives in the other two 3.5" bays - which I'll use for a backup of critical files from the RAID - and put the boot SSD into one of the 2.5" bays.
I'm hoping the SAS card will fit in too.

So my planned system is:
CASE: Fractal Design Node 804 Case for Computer - Black
MOBO: Supermicro Micro ATX DDR4 LGA 1151 Motherboards X11SSM-F-O
CPU: Intel Pentium Dual-Core G4400 3.3 GHz Processor CPU
MEMORY: Kingston KTM-SX421/8G 8 GB DDR4-2133 MHz ECC Memory Module
HDDs: Seagate BarraCuda - 4 TB internal hard drive, Silver,ST4000DM004 x 8 (eventually)
PSU: Corsair CP-9020098-UK VS Series ATX/EPS 80 PLUS Power Supply Unit, 650 W
SAS CARD: 8x port SATA PCI-E SAS2008 HBA expansion hub LSI SAS 9211-8i IT mode M1015 #911
SAS CABLES: J&D Internal Mini SAS 36 Pin SFF-8087 to 4 SATA 7 Pin Forward Breakout Cable (50 cm) x 2

Am I missing anything? CPU cooler needed? Additional fans?

Total excluding HDDs is: £515 ($680). Hard drives for 8 would be £684 ($900).

My only problem is, I could afford the base system for now, and maybe a hard drive or 2. But not the whole 8 hard drives at once.
So how hard is it to add more HDDs in the future? I assume it requires a complete rebuild when adding each new drive. But is it easy/possible to do?
Could I just put the new drive(s) in, add them to the existing RAIDZ2 array, and have the array rebuild and increase the available space? Or is it a lot harder than that?
Or could I use my existing 3TB drives and add the new 4TB drives (obviously only able to use 3TB of their space) until they are all 4TB? Can I use the other 1TB (from the 4TB drives) somehow, e.g. in another array?
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
So how hard is it to add more HDDs in the future? I assume it requires a complete rebuild when adding each new drive. But is it easy/possible to do?
Could I just put the new drive(s) in, add them to the existing RAIDZ2 array, and have the array rebuild and increase the available space? Or is it a lot harder than that?
Or could I use my existing 3TB drives and add the new 4TB drives (obviously only able to use 3TB of their space) until they are all 4TB? Can I use the other 1TB (from the 4TB drives) somehow, e.g. in another array?

I've done a bit of this using mirrored drives, and it's fairly simple. I had a pool made up of a pair of 4TB drives and a pair of 2TB drives. I pulled one of the 2TB drives and dropped a 3TB in, performed the vdev replacement, waited for it to resilver, and then performed a scrub. Once the scrub was done, I pulled the other 2TB drive and repeated the resilver & scrub. When I was done I had 1TB more space. But note: I performed a 3-day burn-in of that first 3TB drive before doing this, as I had to be able to trust it while the second drive resilvered.
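For reference, here's roughly what that replacement dance looks like from the shell (FreeNAS normally drives this for you from the web UI; the pool name "tank" and the device names here are just placeholders):

    zpool status tank              # note which disk you are about to swap out
    zpool replace tank ada2 ada5   # tell ZFS to rebuild onto the new disk
    zpool status tank              # watch the resilver progress
    zpool scrub tank               # once resilvered, scrub to verify everything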

RAIDZ2, I believe, is a bit more difficult. Since I don't run it, and my experience with it is clouded by past Solaris experience, I'll let someone else answer here. However... there's no reason you can't just create another pool. If you have drives, and you have SATA ports and holes in your case to put them in, you can just spin them up as separate pools. Under the hood, FreeNAS is FreeBSD. You can ssh in, just like a Linux system, and use rsync, tar, cp, mv, etc. to shuffle your data between pools.
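As a hedged example of that kind of shuffling between pools (the pool and dataset names are made up), you could either copy at the file level or replicate at the ZFS level:

    # file-level copy of one share onto the other pool
    rsync -avh /mnt/oldpool/media/ /mnt/newpool/media/

    # or ZFS replication, which carries snapshots and dataset properties along
    zfs snapshot oldpool/media@migrate
    zfs send oldpool/media@migrate | zfs receive newpool/media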


On edit: I believe retail versions of that CPU will come with a cooler.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Even with my well known publicly stated bias towards HGST drives... For $9/ea more you can have new 2TB drives with a 2-year warranty.

https://www.newegg.com/Product/Product.aspx?Item=N82E16822149680
I bought a batch of eight 2TB Toshiba drives last year and inside six months three of them had failed. Not just bad sectors either; hard fails. I wouldn't take them if you gave them to me for free.
When I picked the HGST drives, I just grabbed the first thing that I saw and didn't shop for a deal. If you wanted a low price, you could take a look at these:
https://www.ebay.com/itm/Hitachi-HD...1-MLC-JPK28A-FW-28A-H3D20003272S/192578477904
Only $30, but they are already 8 years old and I would worry even about HGST drives at that age.
I have a server at work with 16 of the 4TB HGST drives that are 3 years in with no failures, but I have another chassis (same vendor) in the same rack with Seagate drives, and I have already had to replace 3 of those.
 

Jorsher

Explorer
Joined
Jul 8, 2018
Messages
88
I've had a 10 x 4TB HGST pool, along with a smaller HGST pool, running 24x7 for 3 years without any SMART errors. Of course, that's a small number of drives overall, but they tend to have competitive price and performance, with the only complaint I've seen being that they're a little louder. Seagate has released some duds and some great drives as well. WD seems alright, but at a typically higher price. I'll stick with HGST until I have a reason not to.
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
I'll go for RAIDZ2 with 8 drives, which should give me 24TB of storage. I see the case I selected has room for 10 x 3.5" drives and 2 x 2.5" drives, so I'll have space to put a couple of my existing 3TB drives in the other two 3.5" bays - which I'll use for a backup of critical files from the RAID - and put the boot SSD into one of the 2.5" bays.
I'm hoping the SAS card will fit in too.

So my planned system is:
CASE: Fractal Design Node 804 Case for Computer - Black
MOBO: Supermicro Micro ATX DDR4 LGA 1151 Motherboards X11SSM-F-O
CPU: Intel Pentium Dual-Core G4400 3.3 GHz Processor CPU
MEMORY: Kingston KTM-SX421/8G 8 GB DDR4-2133 MHz ECC Memory Module
HDDs: Seagate BarraCuda - 4 TB internal hard drive, Silver,ST4000DM004 x 8 (eventually)
PSU: Corsair CP-9020098-UK VS Series ATX/EPS 80 PLUS Power Supply Unit, 650 W
SAS CARD: 8x port SATA PCI-E SAS2008 HBA expansion hub LSI SAS 9211-8i IT mode M1015 #911
SAS CABLES: J&D Internal Mini SAS 36 Pin SFF-8087 to 4 SATA 7 Pin Forward Breakout Cable (50 cm) x 2

Am I missing anything? CPU cooler needed? Additional fans?

Total excluding HDDs is: £515 ($680). Hard drives for 8 would be £684 ($900).

My only problem is, I could afford the base system for now, and maybe a hard drive or 2. But not the whole 8 hard drives at once.
So how hard is it to add more HDDs in the future? I assume it requires a complete rebuild when adding each new drive. But is it easy/possible to do?
Could I just put the new drive(s) in, add them to the existing RAIDZ2 array, and have the array rebuild and increase the available space? Or is it a lot harder than that?
Or could I use my existing 3TB drives and add the new 4TB drives (obviously only able to use 3TB of their space) until they are all 4TB? Can I use the other 1TB (from the 4TB drives) somehow, e.g. in another array?

Don't overestimate the net pool storage; @Bidule0hm has a calculator that will give you some useful insight into pool geometry. With eight 4 TB drives in RAIDZ2 you will have roughly 19 TB of net storage, and you will need to start acting on your upgrade strategy a couple of terabytes before you reach that.
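To sketch the arithmetic (back-of-the-envelope; the calculator does this more precisely): 8 x 4 TB is 32 TB raw; RAIDZ2 spends two drives' worth on parity, leaving 6 x 4 TB = 24 TB; a "4 TB" drive is only about 3.6 TiB in binary units, so that is roughly 21.8 TiB; subtract ZFS metadata and allocation overhead and you land in the ~19 TB region, and the usual advice is to start planning an upgrade before the pool passes roughly 80% full.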

Also, you are right that there is no painless way of adding drives to a vdev. To change the topology you need to destroy the pool and rebuild it, but you can do that in the UI. If you only have two drives to start, I would set them up as a mirror, then add more pairs, and when you reach the number of drives you want in the RAIDZ, destroy the pool and rebuild. Going from six drives in mirror pairs to eight drives in RAIDZ2 will double your pool capacity. This also forces you to ensure you have a proper backup, as you will have to restore from it after the rebuild ;)
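If it helps to see it, the command-line equivalent of that progression is roughly this (you would normally do it all through the FreeNAS UI; "tank" and the device names are placeholders):

    zpool create tank mirror ada1 ada2   # start with one mirrored pair
    zpool add tank mirror ada3 ada4      # later, grow the pool with another pair
    # ...once all eight drives are on hand and the data is safely backed up elsewhere:
    zpool destroy tank
    zpool create tank raidz2 ada1 ada2 ada3 ada4 ada5 ada6 ada7 ada8
    # then restore the data from the backup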

A second option is to make sure you have the right number of drives in the vdev from the start. So if you have two 4 TB drives and six drives of various other sizes, you can build your 8-wide RAIDZ2 vdev right away. The pool's storage and reliability will be limited by the weakest drives, but you will have the right topology. Then it's just a matter of replacing the non-4TB drives until you have all eight. Storage will not increase until that last 4 TB drive has resilvered, but since you never destroy the pool you aren't forced to rely on your backups along the way.
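That in-place upgrade leans on the pool's autoexpand property; roughly (device names are placeholders again):

    zpool set autoexpand=on tank   # allow the pool to grow once every member is bigger
    zpool replace tank ada3 ada9   # swap one small drive for a 4 TB drive
    zpool status tank              # wait for the resilver to finish, then repeat per drive
    # capacity only increases after the last small drive has been replaced;
    # 'zpool online -e tank <disk>' can nudge the expansion if it doesn't happen on its own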
 

rockhead006

Dabbler
Joined
Jul 11, 2018
Messages
13
So I currently have 15TB of storage (~14TB usable) spread over 6 drives, and the same again in backup drives.
If I spend the £500 on the basic system (without the drives) and another £700 on 8 x 4TB drives, that's about £1200 total ($1575).
And set up as RAIDZ2, I would only gain about 5TB of additional space. I'm not sure that is worth it - that's £240/TB for the 5 new TB of storage.
It also sounds like a massive hassle to change the drives (increase space) in the RAID. In previous RAIDs I've had, you just put in a new drive and it rebuilds the RAID onto that drive. And if the drive is larger than the others, you can use the remaining free space in a separate RAID group/pool, which was relatively easy to do. FreeNAS sounds a lot more complicated than that.

However, if I go non-RAIDed, I would have about 30TB of usable storage. Obviously no backup/resilience of data, but I could also put in 2 of my old 3TB drives for critical data backup.
I could also duplicate data between the drives (manually) to use up any free space for backups (until the space is needed). It's not an ideal situation, as not all the data would be backed up.
But I could keep some of my old NASes around and use them for external/offline backup.

I know this type of system isn't really how FreeNAS works. I was just hoping that, as it's a NAS OS, it would basically be an OS dedicated to drive access and file sharing. I don't need the RAID side of things.
If you really think it's not worth using FreeNAS without one of the RAID options, then I will likely have to go for something else (likely some flavour of Linux - any recommendations?).

But thanks for all your help anyway. I've got some good advice on what hardware to use (e.g. SAS card rather than mobo SATA ports, ECC memory, etc).
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
You are looking at the cost in the wrong way, and you don't need to buy everything new. You can save hundreds of dollars with repurposed equipment.

If you are not using ZFS with redundant disks, you should not use FreeNAS.

This system can work the way you want to use it.
https://lime-technology.com/what-is-unraid/


 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
Before I started using ZFS (first on FreeBSD and now on FreeNAS) I had a multi-TB photo collection spread across multiple external and internal hard drives. These were NTFS and EXT4 drives.

When I started migrating all of this to my first web/storage server with a ZFS pool, I found to my horror that about 10% of my oldest photos (about twice as old as the drives they were on) were irreparably damaged.

https://arstechnica.com/information...-and-atomic-cows-inside-next-gen-filesystems/

I recently took possession of the electronic archive of a small housing association and immediately migrated it off the external drive it was on. It's 200 MB of PDF and Word documents going back about 15 years. The drive itself is about 5 years old and again, about 20% of the files older than 3 years are irreparably damaged. The association has hard copies of all the important stuff, but that's not the point.

Unless you have some system for ensuring that written data is unchanged, and for correcting errors, you cannot reliably store data across several devices, let alone across years.

I use ZFS because in the last 10 years I haven't lost a single file, spanning several generations of servers, drives and migrations. When anyone argues against ZFS, it always ends with the concession that they don't really care about their stuff - that large bulk storage is more important than long-term viability. And sure, if that is the use case, then go for something closer to 1:1 utilization. But if the survival of the information is important, I have yet to see anything that will beat ZFS. Whatever the cost per TB is for ZFS, any other solution is worth $0 to me, because it just can't guarantee me my data 40 years from now.

And if anyone could show me the better alternative I would switch to it the weekend after.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
You can't just look at the difference between the amount of new storage vs the amount of storage you already have. You are purchasing a whole new server with all the storage in it, not just the difference.
Where I work, we are paying $38k for a new system; if we only considered the change in available storage, it would be a very hard sell.
We are going from 6TB drives to 12TB drives so that we can change the pool configuration to mirror vdevs and get additional IOPS for the database that lives there. The amount of storage is only going to increase by around 60TB. Spending all that money for 60TB would be crazy, but it is the speed we are going for.
In your case, you would be gaining things that you do not even think about.
As for expansion of the pool, you can add more drives in a second vdev. I have 48 drives in my home NAS. ZFS can expand your pool until the end of time.
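For example, extending an existing pool with a second RAIDZ2 vdev is a single command (the pool and device names are placeholders, and ideally the new vdev matches the width and redundancy of the first):

    zpool add tank raidz2 da8 da9 da10 da11 da12 da13 da14 da15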


 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
You are looking at the cost in the wrong way, and you don't need to buy everything new. You can save hundreds of dollars with repurposed equipment.

If you are not using ZFS with redundant disks, you should not use FreeNAS.

This system can work the way you want to use it.
https://lime-technology.com/what-is-unraid/


Btrfs is not a mature solution and I'm fairly confident it's built on a flawed premise. I have a hard time seeing it being a viable option any time soon.

But more importantly, I don't see it bringing anything useful to the party. Maybe for pure media storage of things like H.265 video that might survive heavy damage.

https://www.phoronix.com/scan.php?page=news_item&px=Btrfs-Data-Bug-Hole-Read-Comp
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Btrfs is not a mature solution and I'm fairly confident it's built on a flawed premise. I have a hard time seeing it being a viable option any time soon.
I don't know much about the Btrfs file system except that I hear Red Hat abandoned it:

https://access.redhat.com/discussions/3138231

Personally, I wouldn't use it, and I will only use ZFS at work as long as they let me make the decisions. For example, I had to set up a Red Hat Enterprise Linux server at work. I didn't figure out how to make it boot from ZFS, so it boots from XFS but uses OpenZFS for the storage filesystem. The boot drives are mirrored using mdadm and the 326TB of storage is all one big ZFS pool. That system is very stable: it has only ever been down for a couple of planned power outages, where the facility power was being worked on and we had to shut down the whole building, and one time when we moved it from one rack to another. In the year and a half it has been online, I have only had to replace 4 of the 60 data drives. Sorry, I think I got off on a tangent.
 

rockhead006

Dabbler
Joined
Jul 11, 2018
Messages
13
I only really want protection from corruption for my important files like photos & documents. For music and movies/TV I don't care that much if a bit or two gets flipped, as it would go unnoticed.

Is there such a thing where you can get this sort of protection from corruption with perhaps just 2 disks (in a RAID)? I guess not, as I think 3 is the minimum. Then I would leave the rest of the disks as standalone non-RAID disks to maximise storage capacity.

Also, I've had a look around for used/non-new drives (and other parts) and they are not that much cheaper - maybe £60 instead of £85 for new. And you don't know how used/old/worn the used ones might be, or how long they would last. I would prefer to go for new; they usually have a warranty for X years if they fail, and I can get a quick replacement.

One other question: does FreeNAS power down the drives when they are not being used? I would prefer that, so they are not running 24/7/365.
If so, is this different for drives in a RAID versus un-RAIDed drives?
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
I bought a batch of eight 2TB Toshiba drives last year and inside six months three of them had failed. Not just bad sectors either; hard fails. I wouldn't take them if you gave them to me for free.

I just picked the cheapest new drives... New Seagate 2TB are $4 more. I was just pointing out that if you are sticking with smaller drive sizes, it doesn't cost much more to get a new drive with a warranty, and you can catch them on sale regularly. I don't see anything bigger than 6TB going on sale with any regularity.

I almost wish we had a garage sale section of the forum, but that would require heavy moderation, etc...
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Is there such a thing where you can get this sort of protection from corruption with perhaps just 2 disks (in a RAID)? I guess not, as I think 3 is the minimum. Then I would leave the rest of the disks as standalone non-RAID disks to maximise storage capacity.
You can have multiple pools; I do. Mirror vdevs are supported and you can have 2 or more disks mirroring each other. If you had particularly important data, you could have a 3-way mirror, which would allow 2 of the drives to fail before redundancy was lost.
It is not recommended, but it is possible to do a simple stripe of disks; I have seen forum members lose their data doing that. Having each disk as its own separate pool just doesn't make any sense, because it forces you to manage capacity on each disk individually. That is one of the problems that pooling storage is supposed to solve. If you have data that is so unimportant to you that you don't care if it is damaged, just use Windows or Linux and a regular file system. You are not gaining anything by using FreeNAS.
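A rough sketch of the multi-pool idea described above, with made-up pool names: a small, heavily redundant pool for the irreplaceable files and a larger pool for everything else:

    zpool create critical mirror ada1 ada2 ada3        # 3-way mirror: survives two drive failures
    zpool create bulk raidz2 da0 da1 da2 da3 da4 da5   # bigger pool for less precious data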
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Also, I've had a look around for used/non-new drives (and other parts) and they are not that much cheaper - maybe £60 instead of £85 for new. And you don't know how used/old/worn the used ones might be, or how long they would last. I would prefer to go for new; they usually have a warranty for X years if they fail, and I can get a quick replacement.
I don't really think used drives are worth the cost most of the time, but in the US we can save big money on the other server gear by picking up something that is 3 or 4 years old. It isn't top tier any more but it still gets the job done.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
You can have multiple pools; I do. Mirror vdevs are supported and you can have 2 or more disks mirroring each other. If you had particularly important data, you could have a 3-way mirror, which would allow 2 of the drives to fail before redundancy was lost.

I think the conceptual bit he's missing here... Consider that the NAS manages datasets within the pool. These datasets are folders / shares / iSCSI block allocations / data allocated to jails & VMs, etc. You can have a mix of them in each pool. The key bit is that there's another layer of organization after you've grouped disks into a pool.
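To make that concrete (dataset names invented for illustration): once the pool exists you just carve datasets out of it and share them, and they all draw on the same free space:

    zfs create tank/photos                     # one dataset per share/purpose
    zfs create -o compression=lz4 tank/media
    zfs list                                   # shows how much of the shared pool each dataset uses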

It is not recommended, but it is possible to do a simple stripe of disks; I have seen forum members lose their data doing that.

I've done a fair bit of that to test software performance. But that's a circumstance where I was after max IOPS per $, and didn't care at all about the data. And even then I had QA tests get blown up by failed disks!


One other question: does FreeNAS power down the drives when they are not being used? I would prefer that, so they are not running 24/7/365.

There are options for HDD standby, Advanced Power Management, and Acoustic management. I can't say I've really played with them. I have activities that happen around the clock that would likely spin the drives back up anyway.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
There are options for HDD standby, Advanced Power Management, and Acoustic management. I can't say I've really played with them. I have activities that happen around the clock that would likely spin the drives back up anyway.
There are a lot of defaults that need to be changed to get this to work properly, including moving the swap space off the pool disks, so it is no easy task, but some people have managed to do it.
One other question: does FreeNAS power down the drives when they are not being used? I would prefer that, so they are not running 24/7/365.
If so, is this different for drives in a RAID versus un-RAIDed drives?
It really isn't a good idea though, because the hardest thing for a drive to do is spin back up, so drives that stay spinning all the time usually outlast drives that start and stop. Most of the catastrophic and unexpected drive failures I have seen over the years have been a failure to start after a shutdown. When the drives stay spinning, they usually start giving bad sectors, so you have a warning that they are about to blow. When they are in a pool with redundancy, it is a simple matter to replace a drive and move on.
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
Spinning down the drives has been shown to reduce operational life, with very little benefit in terms of power saving.

I would say that having eight separate single-drive pools is better than not using ZFS; data will be lost and the disks will be poorly utilized, but being told the state of things is still preferable to silent data corruption. Just don't come blaming FreeNAS or ZFS when that happens.

Pooling all the disks in an 8-way striped pool is of course plausible for a volatile data store, but as a single drive failure will take the whole thing down, it really puts pressure on proper backup. ZFS and S.M.A.R.T. monitoring give you some chance of replacing a drive before it fails, but that's not guaranteed in any way.
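For what it's worth, the difference between those two layouts is just how the disks are grouped (pool and device names are placeholders): eight independent single-disk pools, where a dead drive takes out only its own pool, versus one eight-way stripe, where a dead drive takes out everything:

    # eight separate single-disk pools
    zpool create disk1 ada1
    zpool create disk2 ada2
    # ...and so on for each drive

    # or one big stripe with no redundancy at all
    zpool create bigstripe ada1 ada2 ada3 ada4 ada5 ada6 ada7 ada8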
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Just don’t come blaming FreeNAS or ZFS when that happens.
@rockhead006 What Garm said here is the big thing to me. If you make each disk a different pool, you are eventually going to have a failure, and I am trying to tell you to go use something else because I don't want you to blame FreeNAS and ZFS when you lose a drive. I am sure you keep your backups updated. Good luck with all that.
If you want to run single disks, where you are vulnerable to disk failure, you can still set ZFS to keep extra copies of the data so it can correct data errors. It is done from the command line. Here is how:
https://docs.oracle.com/cd/E19253-01/819-5461/gevpg/index.html
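For what it's worth, that setting is per-dataset and only affects data written after it is set; something like this (the dataset name is a placeholder):

    zfs set copies=2 tank/photos   # store two copies of every block in this dataset
    zfs get copies tank/photos     # confirm the setting

Bear in mind copies=2 helps against bad sectors on a single disk, not against the whole disk dying.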
 