Question: Ways to add storage, Best practices?

Amsoil_Jim

Contributor
Joined
Feb 22, 2016
Messages
175
As the title says, what are the ways to add storage, and what are the best practices for doing so? Has anything changed?

I'll start by telling you my situation.

I have been using FreeNAS since early 2016. My initial pool was created using six 3TB WD Red drives in a computer tower, set up as RAIDZ1. At some point the motherboard died and I upgraded to a used server that can hold 24 disks.

In December 2018 I purchased six 10TB Easystores and shucked them. I put all six new drives into the server and replaced each 3TB drive one at a time until the pool was running entirely on the new drives, which gave me more storage.

Due to "life" I have not kept up with the upgrades from FreeNAS to TrueNAS, I am currently running on FreeNAS-11.3-U5. I do want to upgrade at some point but I just don't want to deal with the repercussions that comes with upgrading, i.e. Jails breaking.

Currently I am at 74% used space, so I know I will be needing more space soon.
So back to the original question: how do I add more space? I know I can purchase larger drives and replace the existing ones to expand the pool. What would be the best approach in my situation, given that I can have up to 24 drives?
I was thinking of purchasing six 14TB drives that are on sale right now and replacing the 10TB drives, but I'm wondering if there's a way to keep using the 10TB drives at the same time.

I see there is an option to add vdevs, but the documentation states "the vdev being added must be the same type as existing vdevs". Would that mean that instead of replacing the 10TB drives with the 14s, I could add the six 14s to the pool as another vdev? If this is true, what's the downside?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
I see there is an option to add vdevs, but the documentation states "the vdev being added must be the same type as existing vdevs". Would that mean that instead of replacing the 10TB drives with the 14s, I could add the six 14s to the pool as another vdev? If this is true, what's the downside?
1. Yes, that's exactly what it means.
2. Why should there be a downside? Apart from space and power consumption of 6 additional drives, of course :wink:
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
what's the downside?
Your pool will be striped across the two vdevs, meaning that if one fails, your pool fails. If your current six disks are in RAIDZ2, and you keep reasonably on top of your system, this isn't much of a concern. If they're in RAIDZ1, I'd be a little more nervous.
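For illustration, here is roughly what that two-vdev layout amounts to at the command line (pool and device names below are made up; in practice you'd do this through the GUI):
  # A hypothetical pool built from one 6-disk RAIDZ1 vdev...
  zpool create tank raidz1 da0 da1 da2 da3 da4 da5
  # ...plus a second 6-disk RAIDZ1 vdev added later. Data is then striped
  # across both vdevs, so losing either whole vdev loses the whole pool.
  zpool add tank raidz1 da6 da7 da8 da9 da10 da11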
 

Amsoil_Jim

Contributor
Joined
Feb 22, 2016
Messages
175
1. Yes, that's exactly what it means.
2. Why should there be a downside? Apart from space and power consumption of 6 additional drives, of course :wink:
So I think I saw somewhere that using RAIDZ you have the ability to have 1 disk fail, and even though the pool has 2 vdevs using RAIDZ you can still only have 1 disk fail, but you're still losing 1 disk in each vdev for parity.
 

Amsoil_Jim

Contributor
Joined
Feb 22, 2016
Messages
175
meaning that if one fails, your pool fails.
I'm using RAIDZ.
If one drive fails? Or one vdev, meaning more than one drive failed in one vdev?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
There is no RAIDZ. Only RAIDZ1, RAIDZ2, RAIDZ3 - which are you using? Redundancy is at the vdev level. So you can lose as many drives per vdev as that particular vdev is configured to tolerate.

RAIDZ1 is discouraged for really large current drives, because of the risk of losing another one during the resilver process.

If you are using RAIDZ1, what you could do is:
  • add an independent 6 disk RAIDZ2 pool
  • copy the data - with zfs send|zfs receive you will know that the data has been copied correctly because of the checksumming
  • destroy your old pool
  • add the 6 old disks as another RAIDZ2 vdev to the new pool
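A rough command-line sketch of those steps (pool, dataset, and device names are placeholders, and on FreeNAS/TrueNAS you would normally create, destroy, and extend pools through the GUI so the middleware stays in sync):
  # 1. Create an independent RAIDZ2 pool on the six new disks
  zpool create newpool raidz2 da6 da7 da8 da9 da10 da11
  # 2. Snapshot the old pool and copy everything; send/receive verifies the
  #    data via ZFS checksumming as it goes
  zfs snapshot -r oldpool@migrate
  zfs send -R oldpool@migrate | zfs recv -F newpool
  # 3. Once the copy is verified, destroy the old pool
  zpool destroy oldpool
  # 4. Add the six old disks to the new pool as a second RAIDZ2 vdev
  zpool add newpool raidz2 da0 da1 da2 da3 da4 da5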
 


jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So I think I saw somewhere that using RAIDZ you have the ability to have 1 disk fail, and even though the pool has 2 vdevs using RAIDZ you can still only have 1 disk fail, but you're still losing 1 disk in each vdev for parity.

Yeah, I'm sure you saw that somewhere, there's lots of stupid about ZFS out there. That's wrong. This is correct:

You can lose up to one disk in each vdev in that configuration. Losing any disk means a loss of redundancy for the pool, which means that you need 100.0000% reliability out of the remaining disks or you face data loss. People often misunderstand this point, because if you lose one disk in vdev A but no disks in vdev B, you have only lost redundancy in vdev A. However, this means your POOL has lost redundancy, because there are places in it that if an error shows up, it represents data loss. Those places are clearly "any place on the remaining disks of vdev A".

You can additionally lose a disk in vdev B, and as long as all the remaining disks in vdev A and vdev B have no errors, you haven't lost any data, just ALL your redundancy.

This is why we typically prefer a minimum of RAIDZ2.
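If you want to see where a pool stands at any point, zpool status lays this out vdev by vdev (assuming a pool named tank here):
  # Each vdev and its member disks are listed separately; a vdev that has
  # lost a disk shows up as DEGRADED while the others may still be ONLINE.
  zpool status -v tank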
 

Amsoil_Jim

Contributor
Joined
Feb 22, 2016
Messages
175
Yeah, I'm sure you saw that somewhere, there's lots of stupid about ZFS out there. That's wrong. This is correct:

You can lose up to one disk in each vdev in that configuration. Losing any disk means a loss of redundancy for the pool, which means that you need 100.0000% reliability out of the remaining disks or you face data loss. People often misunderstand this point, because if you lose one disk in vdev A but no disks in vdev B, you have only lost redundancy in vdev A. However, this means your POOL has lost redundancy, because there are places in it that if an error shows up, it represents data loss. Those places are clearly "any place on the remaining disks of vdev A".

You can additionally lose a disk in vdev B, and as long as all the remaining disks in vdev A and vdev B have no errors, you haven't lost any data, just ALL your redundancy.

This is why we typically prefer a minimum of RAIDZ2.
OK, this is the way I thought it should work, but I was unsure. Thank you for confirming.
Now, using 2 vdevs in the same pool, hard links would still work, I'm assuming.
Since this pool is for Plex and Linux ISOs, using RAIDZ1 isn't as concerning. However, if I want better redundancy for more important files, I could create a second pool using RAIDZ2 on the same system, since I can have up to 24 disks.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Now, using 2 vdevs in the same pool, hard links would still work, I'm assuming.
The filesystem layer and the pool/vdev layer are completely oblivious to each other. You will still have only one pool and can create datasets and zvols as you see fit.

The pool/vdev topology determines your redundancy and to a certain extent the performance you can expect.
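In other words (dataset and zvol names below are just examples), everything is created against the pool, never against a particular vdev:
  # Datasets and zvols live at the pool level; the vdev layout underneath
  # is invisible to them, so hard links within a dataset work as usual.
  zfs create tank/media
  zfs create -V 50G tank/vm-disk1   # a 50 GB zvol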
 

Amsoil_Jim

Contributor
Joined
Feb 22, 2016
Messages
175
There is no RAIDZ. Only RAIDZ1, RAIDZ2, RAIDZ3 - which are you using? Redundancy is at the vdev level. So you can lose as many drives per vdev as that particular vdev is configured to tolerate.

RAIDZ1 is discouraged for really large current drives, because of the risk of losing another one during the resilver process.

If you are using RAIDZ1, what you could do is:
  • add an independent 6 disk RAIDZ2 pool
  • copy the data - with zfs send|zfs receive you will know that the data has been copied correctly because of the checksumming
  • destroy your old pool
  • add the 6 old disks as another RAIDZ2 vdev to the new pool
So I have a question about this: when creating an independent 6-disk RAIDZ2 pool, it has to have a different name. After using zfs send|zfs receive, how does that affect all the paths, or would that pool then have the same name? Essentially cloning the old pool to the new pool? Thanks
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
how does that affect all the paths
All of the paths will have the name of the new pool rather than the name of the old pool. If desired, after the data's copied over and the old pool removed from the system, you can rename the new pool as follows:
  • Export the pool using the GUI
  • From the CLI, zpool import newpool oldpool, substituting newpool and oldpool with the new and old pool names, respectively.
  • From the CLI, zpool export oldpool
  • From the GUI, import the pool
 
Joined
Oct 22, 2019
Messages
3,641
However, this means your POOL has lost redundancy, because there are places in it that if an error shows up, it represents data loss. Those places are clearly "any place on the remaining disks of vdev A".
Color me naïve, but I thought I read somewhere that ZFS will prioritize new writes to the full redundancy vdev(s) in a pool, until the user resilvers the compromised vdev. Not that this is an excuse to be lazy before correcting the degraded vdev, however.

(Although I might be remembering incorrectly, and what I in fact read was that ZFS will prioritize writes to a newly attached/expanded vdev to spread the writes as evenly as possible throughout the pool, which has nothing to do with redundancy per se.)
 

Amsoil_Jim

Contributor
Joined
Feb 22, 2016
Messages
175
There is no RAIDZ. Only RAIDZ1, RAIDZ2, RAIDZ3 - which are you using? Redundancy is at the vdev level. So you can lose as many drives per vdev as that particular vdev is configured to tolerate.

RAIDZ1 is discouraged for really large current drives, because of the risk of losing another one during the resilver process.

If you are using RAIDZ1, what you could do is:
  • add an independent 6 disk RAIDZ2 pool
  • copy the data - with zfs send|zfs receive you will know that the data has been copied correctly because of the checksumming
  • destroy your old pool
  • add the 6 old disks as another RAIDZ2 vdev to the new pool
So for copying the data using zfs send|zfs receive, is there somewhere I can see how to use this properly? A "how to" of sorts.
I created a new pool using the 6 new drives with RAIDZ2. Now it's just a matter of getting the data copied, destroying the old pool, and then adding those disks as another vdev.
Is it as simple as setting up a Replication Task, selecting the source as the original pool and the destination as the new pool?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
I use the following script to churn a dataset, i.e. copy it from one place to another and then rename it back to the original dataset. It's useful when attempting to balance vdevs or populate a special vdev. It does, however, contain the clues you want.
Note - do not use this on an encrypted dataset. There is absolutely no error checking on this - so make sure you have a backup - or do each step manually

  zfs snap BigPool/SMB/Common@migrate
  zfs send -R BigPool/SMB/Common@migrate | zfs recv -F BigPool/SMB/Common_New
  zfs snap BigPool/SMB/Common@migrate2
  zfs send -i @migrate BigPool/SMB/Common@migrate2 | zfs recv -F BigPool/SMB/Common_New
  zfs destroy -rf BigPool/SMB/Common
  zfs rename -f BigPool/SMB/Common_New BigPool/SMB/Common

Make a snapshot
Send the snapshot to the temporary dataset - can take a while
Make another snapshot
Send the second (incremental) snapshot to the temporary dataset
Destroy the original dataset
Rename the temporary dataset back to the original name
 

Amsoil_Jim

Contributor
Joined
Feb 22, 2016
Messages
175
I use the following script to churn a dataset, i.e. copy it from one place to another and then rename it back to the original dataset. It's useful when attempting to balance vdevs or populate a special vdev. It does, however, contain the clues you want.
Note - do not use this on an encrypted dataset. There is absolutely no error checking on this - so make sure you have a backup - or do each step manually

  zfs snap BigPool/SMB/Common@migrate
  zfs send -R BigPool/SMB/Common@migrate | zfs recv -F BigPool/SMB/Common_New
  zfs snap BigPool/SMB/Common@migrate2
  zfs send -i @migrate BigPool/SMB/Common@migrate2 | zfs recv -F BigPool/SMB/Common_New
  zfs destroy -rf BigPool/SMB/Common
  zfs rename -f BigPool/SMB/Common_New BigPool/SMB/Common

Make a snapshot
Send the snapshot to the temporary dataset - can take a while
Make another snapshot
Send the second (incremental) snapshot to the temporary dataset
Destroy the original dataset
Rename the temporary dataset back to the original name
So are you making the second snapshot and sending it as a second verification that it didn't miss anything, or just to copy any new data that was created while sending the first snapshot?
Also, I have daily snapshots set up; should I not use one of those as my first snapshot, and just make a new one when starting?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Yup - the second snapshot is in case anything changed during the send | recv of the first
I don't know if you can use an existing snapshot, but it's probably safer not to (this is not something I want to experiment with). You can always delete the snapshots afterwards, and being called migrate, they are easy to find.
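For example (using the dataset names from the script above), once the renamed dataset checks out, the temporary snapshots could be cleaned up with:
  # Remove the migration snapshots recursively from the (renamed) dataset
  zfs destroy -r BigPool/SMB/Common@migrate
  zfs destroy -r BigPool/SMB/Common@migrate2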
 

Amsoil_Jim

Contributor
Joined
Feb 22, 2016
Messages
175
So I created a snapshot and sent it using the following:
  zfs snapshot -r Media@migrate02-26
  zfs send -R Media@migrate02-26 | zfs recv -F Tank
Then this showed up on the console:
Feb 26 10:57:21 TrueNAS 1 2022-02-26T15:57:21.998783+00:00 TrueNAS.local devd 1746 - - notify_clients: send() failed; dropping unresponsive client
It looks like it's working, but if I close the terminal window will it stop? I was going to open a tmux window but I forgot.

Edit: So my laptop lost connection and the transfer stopped. How can I resume the send? If I just try the command again I get this:
  cannot receive new filesystem stream: destination has snapshots (eg. Tank@auto-2022-02-25_00-00) must destroy them to overwrite it
  warning: cannot send 'Media@auto-2022-02-12_00-00': signal received
  warning: cannot send 'Media@auto-2022-02-13_00-00': Broken pipe
  warning: cannot send 'Media@auto-2022-02-14_00-00': Broken pipe
  warning: cannot send 'Media@auto-2022-02-15_00-00': Broken pipe
  warning: cannot send 'Media@auto-2022-02-16_00-00': Broken pipe
  warning: cannot send 'Media@auto-2022-02-17_00-00': Broken pipe
  warning: cannot send 'Media@auto-2022-02-18_00-00': Broken pipe
  warning: cannot send 'Media@auto-2022-02-19_00-00': Broken pipe
  warning: cannot send 'Media@auto-2022-02-20_00-00': Broken pipe
  warning: cannot send 'Media@auto-2022-02-21_00-00': Broken pipe
  warning: cannot send 'Media@auto-2022-02-22_00-00': Broken pipe
  warning: cannot send 'Media@auto-2022-02-23_00-00': Broken pipe
  warning: cannot send 'Media@auto-2022-02-24_00-00': Broken pipe
  warning: cannot send 'Media@auto-2022-02-25_00-00': Broken pipe
  warning: cannot send 'Media@auto-2022-02-26_00-00': Broken pipe
  warning: cannot send 'Media@migrate02-26': Broken pipe
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
tmux is your answer

Also, assuming you have two pools, your first line should reference the old pool and dataset, whilst the second line should reference the new pool and dataset, I suspect.
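As a rough sketch of what I mean (the dataset name Tank/Media below is just an example, and note this restarts the copy rather than resuming it), run it inside tmux so a dropped connection doesn't kill the transfer:
  # Start (or re-attach to) a tmux session first so the send keeps running
  # even if your laptop disconnects
  tmux new -s migrate
  # Receive into a named dataset on the new pool instead of the pool root,
  # which avoids clashing with the snapshots that already exist on Tank itself
  zfs send -R Media@migrate02-26 | zfs recv -F Tank/Media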
 