Expanding storage responsibly?

Status
Not open for further replies.

nadinio

Dabbler
Joined
Jul 9, 2017
Messages
22
I am currently in the process of drafting my build before I sink much money into my FreeNAS project, and I am trying to figure out the optimal way to handle storage expansion. From the start I will probably be working in either a 20 or 24 bay server chassis. I will probably order 4 x 4TB drives and put those in a RAIDZ1 pool. After that I will probably pull the older 4 x 4TB drives from my old NAS and add those into the pool as well. I'm thinking adding four drives at a time is a pretty cost-effective way to go. However, I am concerned about having five or six Z1 vdevs in one storage pool. Would this be a risky setup? I know that if two drives fail in one vdev, I'm screwed. But I am not sure how to scale this server up effectively.
 

nadinio

Dabbler
Joined
Jul 9, 2017
Messages
22
Fair enough. I suppose this is why I am reaching out - to figure out the right way to do this. What would you recommend?
 

nadinio

Dabbler
Joined
Jul 9, 2017
Messages
22
Would I be able to expand a RAIDZ2 configuration? Say, from a 4-drive Z2 into an 8-drive Z2? Then ultimately, 3 x 8-drive RAIDZ2s?
 

iRefugee

Dabbler
Joined
Jul 17, 2017
Messages
20
Would I be able to expand a RAIDZ2 configuration? Say, from a 4-drive Z2 into an 8-drive Z2? Then ultimately, 3 x 8-drive RAIDZ2s?
I am in the same boat as you. I want to incrementally upgrade my system 4 disks at a time. Unfortunately, you cannot add more disks to a vdev; once a vdev is created, it cannot be expanded.

For my system, I have 8 expansion slots. I prefer RAIDZ2. I might go with (8) 1TB drives and sloooooowly upgrade. I hope there are more solutions mentioned here.

 

Thomas102

Explorer
Joined
Jun 21, 2017
Messages
83
I want to incrementally upgrade my system 4 disks at a time.
I think you should think again about why you want this.
RAIDZ2 is more efficient, with less waste, at 6 disks.
Striped mirrors let you expand with a pair of disks of any size.
 

nadinio

Dabbler
Joined
Jul 9, 2017
Messages
22
I think you should think again about why you want this.
RAIDZ2 is more efficient, with less waste, at 6 disks.
Striped mirrors let you expand with a pair of disks of any size.

Cost, mostly. I can probably do 6-drive Z2s, since that seems to be the way to go. 8-drive vdevs are recommended to be in Z3, correct?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
No, there's no hard rule on number of devices vs. RAIDZ type other than the minimums (3 disks for RAIDZ1, 4 disks for RAIDZ2, 5 disks for RAIDZ3).
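For anyone scripting out layout ideas, a minimal sketch of that rule might look like this; the dictionary and helper are just illustrative, not anything from FreeNAS itself:

```python
# Minimum disk counts per RAIDZ level, as listed above.
RAIDZ_MINIMUMS = {"raidz1": 3, "raidz2": 4, "raidz3": 5}

def meets_minimum(raidz_level: str, disk_count: int) -> bool:
    """True if the vdev has at least the minimum disk count for its RAIDZ level."""
    return disk_count >= RAIDZ_MINIMUMS[raidz_level]

print(meets_minimum("raidz2", 4))   # True  -- 4 disks is the floor for RAIDZ2
print(meets_minimum("raidz3", 4))   # False -- RAIDZ3 needs at least 5 disks
```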
 

Thomas102

Explorer
Joined
Jun 21, 2017
Messages
83
Cost, mostly. I can probably do 6-drive Z2s, since that seems to be the way to go. 8-drive vdevs are recommended to be in Z3, correct?
Cost is the reason you don't want a 4-drive RAIDZ2.

A single 6-disk RAIDZ2 has the same usable space as two 4-disk RAIDZ2 vdevs (four data disks either way).
6 x 3TB offers 50% more usable space for almost the same price as 4 x 4TB.

I think the only recommendations are: don't use RAIDZ1, and don't exceed 10-12 drives per vdev.
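If it helps to see the arithmetic, here is a rough Python sketch of that comparison (raw capacity only; a real pool loses a bit more to TiB conversion, metadata, and free-space headroom):

```python
def raidz_usable_tb(disks: int, disk_tb: float, parity: int) -> float:
    """Approximate usable capacity: data disks x disk size (raw TB, no overhead)."""
    return (disks - parity) * disk_tb

# One 6-disk RAIDZ2 vs two 4-disk RAIDZ2 vdevs: four data disks either way.
print(raidz_usable_tb(6, 4, 2))        # 16.0 TB
print(2 * raidz_usable_tb(4, 4, 2))    # 16.0 TB

# 6 x 3TB RAIDZ2 vs 4 x 4TB RAIDZ2: roughly 50% more usable space.
print(raidz_usable_tb(6, 3, 2))        # 12.0 TB
print(raidz_usable_tb(4, 4, 2))        #  8.0 TB
```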
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
You haven't told us how you plan to use your storage. For general storage, you might choose RAIDZ2.

OTOH, if you are planning to run VMs via iSCSI, then striped mirrors would be a better choice.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Cost, mostly. I can probably do 6-drive Z2s, since that seems to be the way to go. 8-drive vdevs are recommended to be in Z3, correct?
I built a system with a Z3 configuration and was very disappointed in the speed of access to the disks. I would stay away from it; you can absolutely have ten to twelve disks in a Z2 with no problem. You just need to monitor the disks for faults. I have my NAS configured to email me a daily report. There are scripts for that on this board if you have a look around.
In the end, a lot of these decisions depend on how you plan to use the system and what your storage need is now vs. what you expect the need to be in five to six years, which is an entirely reasonable life for a system; ten years is not. I built my NAS with 12 drives in the storage pool, divided into two vdevs of six drives, because I can get reasonable performance and upgrade each vdev independently to larger drives to increase my capacity. I am at about 50% utilization now and anticipate needing to upgrade in the next couple of years.
Hard drives are commodities that continue to fall in price, and they wear out. Don't buy big drives now (when they cost more) thinking they will last the life of the NAS. My suggestion would be to buy what you think you will use for drives and expand later when the bigger drives are cheaper. The reason I went with 12 drives at 2TB each is the speed of access combined with the amount of storage it would provide. It is easy to expand later if you plan it properly up front.
Generally speaking, the more vdevs you have, the faster your access to the data.
 

philhu

Patron
Joined
May 17, 2016
Messages
258
Just FYI, I have a Supermicro SC847 chassis with two RAIDZ3 vdevs. Each has 11 disks; one is 4TB drives and the other is 6TB drives, all Seagate. In this config, over a 10Gbps network line, I am consistently seeing 300-500 MB/s transfer rates.

This is using RAIDZ3. I have had 2 disks die in the 6 months of use, and they were easily replaced. I keep 2 spares each of the 4TB and 6TB drives.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Just FYI, I have a Supermicro SC847 chassis with two RAIDZ3 vdevs. Each has 11 disks; one is 4TB drives and the other is 6TB drives, all Seagate. In this config, over a 10Gbps network line, I am consistently seeing 300-500 MB/s transfer rates.

This is using RAIDZ3. I have had 2 disks die in the 6 months of use, and they were easily replaced. I keep 2 spares each of the 4TB and 6TB drives.
In my home system, the oldest disk in my Irene-NAS has just turned 50,000 hours (5.7 years) and is still running strong. You just gotta love HGST drives. For my money, they bash the crap out of WD every day. That is probably why WD bought HGST and pirated their technology for some of the most recent WD drive models. HGST is run as a separate company, which is why you can still buy drives from them instead of all the drives just being different WD models.
That is terribly slow for that number of drives. Really, with the number of drives you have, you should be seeing 800 to 900 MB/s all the time. Something must be bottlenecking your throughput. The number of drives directly relates to transfer speed. The bottleneck might be that old system board, because it probably limits the PCIe bus speed, and that, combined with RAIDZ3 needing extra compute power to calculate the additional drive's worth of parity data, could be a factor. Something certainly isn't right.
On the big server I manage at work, with 60 drives, the transfer speeds are consistently over 1000 MB/s, and often as high as 1500, and that pool is built from 4 vdevs at RAIDZ2. There is more overhead in RAIDZ3 because of that extra parity drive. It makes total performance slower; I think that is a pretty widely accepted fact. I have been running RAIDZ2 for several years now, and as long as you keep an eye on the system, it is very reliable. I have even replaced two drives at once, in the same vdev; totally not recommended and risky, but you can do it.
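For a sense of scale, a very rough streaming estimate is data disks per vdev times a per-disk rate, summed across vdevs. The 150 MB/s per-disk figure below is an assumption for 7200 RPM drives, and real numbers vary a lot with record size, fragmentation, CPU, and network:

```python
def est_streaming_mb_per_s(vdevs: int, disks_per_vdev: int, parity: int,
                           per_disk_mb_per_s: float = 150.0) -> float:
    """Back-of-envelope sequential throughput: data disks x per-disk rate, summed over vdevs."""
    return vdevs * (disks_per_vdev - parity) * per_disk_mb_per_s

print(est_streaming_mb_per_s(2, 11, 3))   # ~2400 -- the SC847 layout above, disks alone
print(est_streaming_mb_per_s(4, 15, 2))   # ~7800 -- the 60-drive work pool
```

When the observed numbers sit far below that kind of estimate, the bottleneck is usually somewhere other than the disks.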
 

philhu

Patron
Joined
May 17, 2016
Messages
258
Well, I forgot to mention: I have not yet VLAN'ed my network, and the line is shared with another app using 300-500 MB/s internally on the same subnet.

The two together use about 1GB/s, or most of the 10Gbps line.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I am currently in the process of drafting my build before I sink much money into my FreeNAS project, and I am trying to figure out the optimal way to handle storage expansion. From the start I will probably be working in either a 20 or 24 bay server chassis. I will probably order 4 x 4TB drives and put those in a RAIDZ1 pool. After that I will probably pull the older 4 x 4TB drives from my old NAS and add those into the pool as well. I'm thinking adding four drives at a time is a pretty cost-effective way to go. However, I am concerned about having five or six Z1 vdevs in one storage pool. Would this be a risky setup? I know that if two drives fail in one vdev, I'm screwed. But I am not sure how to scale this server up effectively.
My suggestion would be to find an alternate storage location for your data and put in 6 drives (to start) in a RAIDZ2 configuration. This gives you an extra drive's worth of fault tolerance. Six 4TB drives in RAIDZ2 should give you about 14TB of usable capacity. If you see that you need more, you can always add another six-drive vdev to the existing pool. That is basically where I am now: two vdevs of six drives each in RAIDZ2, but with 2TB drives.
The recommendation against RAIDZ1 is due to the possibility that a second disk will fail before a failed disk can be replaced and resilvered. It puts your data at risk, and in this forum most people are all about the safety and security of the data. There are some situations where RAIDZ1 might be the answer, but it depends on the amount of risk you are willing to accept, and if you are going to accept risk, you need to know that you are actually risking something.
In a 24 bay server, you can have 4 vdevs of 6 drives each. If you stay with the 4TB drive size and use RAIDZ2 for each vdev, that would get you about 56TB of usable storage. You do not need to keep all drives the same size, though. You could add a vdev to the existing pool that uses 5TB drives, then the next vdev could use 8TB drives, or you could replace the 4TB drives in vdev0 with 8TB drives instead of adding another vdev. If you do replace drives in an existing vdev, you have to do them one at a time, and only once all drives in a given vdev are replaced can you access the additional space.
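To put numbers on those options, a quick raw-TB sketch like the one below works (remember usable space is lower; six 4TB drives in RAIDZ2 really land nearer 14TB after TiB conversion and overhead):

```python
def pool_usable_tb(vdevs, parity=2):
    """vdevs: list of (disk_count, disk_tb); usable ~= data disks x drive size, raw TB."""
    return sum((disks - parity) * tb for disks, tb in vdevs)

print(pool_usable_tb([(6, 4)]))                          # starting vdev: 16 TB raw
print(pool_usable_tb([(6, 4)] * 4))                      # full 24-bay, all 4TB: 64 TB raw
print(pool_usable_tb([(6, 8), (6, 4), (6, 5), (6, 8)]))  # mixed drive sizes per vdev: 100 TB raw
```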
Lots of options. Ask questions if you have them.
 

luckyal

Dabbler
Joined
Aug 4, 2017
Messages
32
you can absolutely have ten to twelve disks in a Z2 with no problem. You just need to monitor the disks for faults.
Newb here, so apologies for ignorant questions/comments! How DO you "monitor" for faults? Daily checks, email logs? I don't even check email from my wife, so I don't want to be bothered with deciphering a log on a daily basis unless there's a big problem, in which case I'm probably screwed anyway.

The main reason I would choose ZFS to begin with is that, unlike other file systems, it runs checksums continually and executes "self healing" protocols, which is what I want: a trouble-free setup that I can set up and forget about for several years. If I have to deal with an ish storm every week, why not just get some off-the-shelf box like Synology?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
How DO you "monitor" for faults?
  • Enable the SMART service
  • Configure the SMART service to send email alerts of problems
  • Schedule regular SMART self-tests on your drives
  • Configure FreeNAS to email error reports (this is separate from the SMART email setup, which isn't ideal, but it is what it is)
  • Schedule a regular scrub for your pool every 2-3 weeks
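If you want a single thing to glance at (or mail to yourself from cron), a minimal health-check sketch along these lines works. It only shells out to commands that exist on FreeNAS ("zpool status -x" and smartctl); the device list and how you mail the output are assumptions you would adapt to your own system:

```python
import subprocess

DISKS = ["/dev/ada0", "/dev/ada1"]   # hypothetical device names -- adjust to your system

def run(cmd):
    """Run a command and return its stdout as text."""
    return subprocess.run(cmd, capture_output=True, text=True).stdout

report = ["== zpool health ==", run(["zpool", "status", "-x"])]
for disk in DISKS:
    report.append(f"== SMART overall health: {disk} ==")
    report.append(run(["smartctl", "-H", disk]))

print("\n".join(report))   # pipe this to mail(1) or whatever alerting you prefer
```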
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Newb here, so apologies for ignorant questions/comments! How DO you "monitor" for faults? Daily checks, email logs? I don't even check email from my wife, so I don't want to be bothered with deciphering a log on a daily basis unless there's a big problem, in which case I'm probably screwed anyway.

The main reason I would choose ZFS to begin with is that, unlike other file systems, it runs checksums continually and executes "self healing" protocols, which is what I want: a trouble-free setup that I can set up and forget about for several years. If I have to deal with an ish storm every week, why not just get some off-the-shelf box like Synology?

Many of the 'monitoring' tasks are automated by the operating system once you set them up, and there is documentation on how to do that here on the site, including some scripts that can be run on a schedule through crontab with email alerts. I get two or three emails a day from each of my NAS systems. All it takes is a quick glance to see that all is well. MOST of the time all is well and there is no 'ish storm' to deal with, but you need to keep an eye on it so you can deal with the hardware failures that will eventually happen before they get so bad that you lose data.
For example, my Irene-NAS was built with all new hard disk drives and has had zero errors of any kind for over a year. I still look at the email every morning when the system sends it to me. Another example: my Emily-NAS was built using a combination of drives that I had used in previous builds and drives that I purchased used from eBay. That system has been running for almost two years, and I have had to replace a drive about every second month. What you put into the system is directly related to what you get out. I saved some money but have had to do more maintenance. Still, I have not lost any data or had any significant downtime.
If you have a drive that starts to develop bad sectors, you don't want to count on ZFS to be self healing. Replace failing drives as soon as possible and resilver (rebuild) the array. ZFS is the best, but that doesn't mean you can shove your NAS in a closet and forget about it.
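For the bad-sector case specifically, a small sketch like this is the sort of check those report scripts do: read the SMART attribute table and flag any reallocated sectors. The attribute name is standard smartctl output; the device paths are just examples:

```python
import subprocess

def reallocated_sectors(device: str) -> int:
    """Return the raw Reallocated_Sector_Ct value from 'smartctl -A', or 0 if absent."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Reallocated_Sector_Ct" in line:
            return int(line.split()[-1])   # raw value is the last column for this attribute
    return 0

for dev in ["/dev/ada0", "/dev/ada1"]:     # adjust to your drive list
    count = reallocated_sectors(dev)
    if count > 0:
        print(f"{dev}: {count} reallocated sectors -- plan a replacement soon")
```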
 