ZFS pool transfer and upgrade questions

BetYourBottom

Contributor
Joined
Nov 26, 2016
Messages
141
I've finally used up over 80% of my ZFS storage and it's currently mad at me for it (read: I have the warning that ZFS doesn't work as efficiently in this range), so I'm looking to get some more drives.

I started with 6x4TB RAID-Z2 and am looking at getting 6x12TB or 14TB drives. I never enabled encryption on my old pool, even with an empty key, which was a mistake: it makes things more complicated should a drive start to fail and I need to dispose of it or trade it in securely. From my research, it wouldn't really be feasible to enable encryption after the fact, so I gave up on it.

Since I'm now reaching my storage limits, I figured this would be the perfect time to transfer the pool over to a new set of drives while enabling encryption. However, I'm having a bit of trouble planning it out so that I can do it as seamlessly and safely as possible.

I'd like to move my jails over seamlessly and keep any storage they have mounted set up the same. I'm also considering modifying my ACLs, and I'm not sure whether it would be better to do that on a fresh directory structure and then move the files in, or to just do it after the transfer and adjust all the files at once. I know that modifying a large number of ACLs at once on Windows can be slow and will sometimes freeze (IIRC), but there's the new built-in ACL editor that might make it a lot easier to manage.

Also, I'm debating what to do with the old drives. My server supports 12 drives total, so I was considering combining all the drives into one large zpool separated into 2 vdevs; however, I'm worried about how the data would balance out, storage efficiency, and drive failures.

On balance: if I just expanded the zpool with a new vdev I wouldn't be able to add encryption, which means I'd probably have to move the zpool to the new drives and then add in the old drives after it's finished. I'm not sure whether the data all sitting on the larger array would be terribly unbalanced or whether that could cause any issues.

On storage efficiency: ideally I'd be able to use only 2 parity drives across all 12 drives to increase the amount of storage I get, but with the previous setup it looks like I'd continue at 66% storage efficiency. Is there an elegant way to increase this without sacrificing data safety?

On drive failures: I am slightly worried about the cost of drive failures in the future. If I add the old 4TB drives into the pool, then when they fail I'd either have to buy 4TB drives that are no longer cost-efficient (and are likely now SMR) or replace them with larger drives whose space I won't be able to utilize until I've swapped all the 4TB drives and expanded the vdev.

With those issues in mind, are there good mitigations, or should I consider using the 4TB drives as a separate zpool that I'd have to manually balance data into, or use exclusively for certain files (e.g. movies or ISOs)?

Once I have the new drives, will transferring the data be as easy as doing a zfs send from the original pool into a zfs receive on the new pool? Or are there other precautions I have to take into account?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
I was considering combining all the drives into one large zpool separated into 2 vdevs; however, I'm worried about how the data would balance out, storage efficiency, and drive failures.
Don't be, unless your intent is to improve IOPS performance by adding the vdev (which doesn't seem to be the case from your description).

Redundancy should be matched, so you don't add failure risk.
 

BetYourBottom

Contributor
Joined
Nov 26, 2016
Messages
141
I'm planning on sticking with RAID-Z2 on both vdevs if I combine them into a single pool so that shouldn't be a concern then.

Do you have any insights into the other questions I posed? I'm nervous about doing such a large data transfer.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
Once I have the new drives, will transferring the data be as easy as doing a zfs send from the original pool into a zfs receive on the new pool?
Yes, look at this link for a suggestion on how to do it so you can see progress (it's a pain to watch it just sitting there, not knowing how far along it is): https://docs.oracle.com/cd/E36784_01/html/E36835/gnheq.html
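
Something like this is the basic shape of it; "tank" and "newtank" are placeholder pool names, so adjust for your setup:

    # Take a recursive snapshot of everything on the old pool
    zfs snapshot -r tank@migrate
    # Send the full replication stream (datasets, properties, snapshots)
    # into the new pool; -v reports progress as it runs
    zfs send -R -v tank@migrate | zfs receive -F newtank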

Or are there other precautions I have to take into account?
Since it's a copy, you're fairly safe to just go ahead; the original is still there until you're happy to remove it.
 


BetYourBottom

Contributor
Joined
Nov 26, 2016
Messages
141
Thank you very much; that information should prove extremely useful, and that guide is probably exactly what I need.

I guess one last question was on the ACLs; I know that ACLs have changed a tad with the move to 11.3 but I'm not exactly sure how.

When I originally created my pool I set it to use Windows ACLs so that I could easily share it out with SMB, set most things to be read-only for everyone, and restrict read access on the few folders that needed it. I had to create a separate dataset for my MineOS server instance (I mounted the dataset into the plug-in jail where the server directory goes) so that it could be hosted using standard Linux ACLs, because for some reason the software wouldn't work when set to Windows ACLs.

Alongside that, I have a few other folders where I need to adjust ACLs, because I've learned a lot since originally creating them. (I used to add the master editor user to each root folder manually instead of just adding them to the folder owner's group, which made things messier.)

So the question kinda boils down to these:
  • Is there still a separation between Windows-style ACLs and Linux ACLs or has that been changed (I no longer see the distinction made in the UI)?
  • Is the ACL editor good enough that I should be able to make large changes to many files without issues (I had difficulty with large changes using Windows remotely)?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
  • Is there still a separation between Windows-style ACLs and Linux ACLs or has that been changed (I no longer see the distinction made in the UI)?
  • Is the ACL editor good enough that I should be able to make large changes to many files without issues (I had difficulty with large changes using Windows remotely)?
You can work with the ACL editor to do everything now, since it is a frontend for setfacl.
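
If you'd rather script a bulk change from the shell, the NFSv4 flavour of setfacl looks roughly like this (the group name and path here are examples only):

    # Give the editors group modify rights, inherited by new files (f) and dirs (d)
    setfacl -m g:editors:modify_set:fd:allow /mnt/tank/share
    # FreeBSD's setfacl has no recursive flag, so use find for existing content
    find /mnt/tank/share -exec setfacl -m g:editors:modify_set:fd:allow {} +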
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
while enabling encryption.
Why? Encryption with FreeNAS has long been (and remains) flaky, and carries with it a fairly significant risk of your data going away. Unless there's a legal or regulatory requirement that data at rest be encrypted, using it is not likely to be a good idea. FN12 (OK, "TrueNAS CORE 12") will introduce dataset-level encryption, which should be much more robust.
 

BetYourBottom

Contributor
Joined
Nov 26, 2016
Messages
141
Why? Encryption with FreeNAS has long been (and remains) flaky, and carries with it a fairly significant risk of your data going away. Unless there's a legal or regulatory requirement that data at rest be encrypted, using it is not likely to be a good idea. FN12 (OK, "TrueNAS CORE 12") will introduce dataset-level encryption, which should be much more robust.

I've never heard that it had any issues. I've only ever heard that it was useful and would help in the situation where a drive had to be RMA'd or resold, because it would prevent any data leakage.

I wasn't planning on setting it up with a passphrase either (I'd prefer that it could boot normally without needing a password, in case I'm remote when a long power outage occurs). I just wanted the protection that if a drive failed and I couldn't erase it properly, I'd still not have to worry about data leakage.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I've never heard that it had any issues.
Do a search here and read a lot. I've seen a number of cases of lost pools when encryption went sideways.

The biggest issue (and this is second hand, as I don't use encryption) appears to be that FreeNAS' implementation has several idiosyncratic requirements, and the GUI doesn't warn you about them. It doesn't warn you, for example, that the pool must be re-keyed following a disk replacement, or that the recovery key you'd previously generated is no longer valid afterwards. If you strictly follow the manual for things like pool expansion and disk replacement, you should be fine. But most of us here don't use it, so if you ask here about replacing a disk, we probably won't mention re-keying the pool--and that could result in your losing your pool.
 

BetYourBottom

Contributor
Joined
Nov 26, 2016
Messages
141
The more I think about it, the more I think you are probably right on this. I have yet to actually have a drive fail (4 years of almost 24/7 operation, low usage though), and even if I did, the worst case is likely sending the drive to the manufacturer for RMA. For that to be any risk, they'd have to either keep the drive lying around or image the entire drive in hopes that I'd eventually send in enough drives to reconstruct my data. Even then there are problems with that: the drives might not even work together by the time I'd send in future drives (i.e. the underlying data has changed too much to form a cohesive pool), or I may never send in enough drives, and the freakout a manufacturer doing this sort of thing would cause if it were leaked would be at international-lawsuit levels.

In any other case of drive failure, I could take a power drill to the drive and send it to be recycled with rather little worry (I'm not a major high-security facility, after all).

If I just decide to sell off old drives then they sure as hell better be in good enough condition for me to do a multi-pass wipe that would completely destroy any chance of data recovery, otherwise I'd be scamming some people.
 

BetYourBottom

Contributor
Joined
Nov 26, 2016
Messages
141
@danb35

Taking this change of philosophy into account, which do you think is the safest and best way to expand my pool?
  1. Replace a disk, allow it to resilver, and repeat until all disks are replaced, then expand (or it will auto-expand) -- see the sketch after this list
    • allows me more room to choose whether the 4TB disks will remain in use for the main pool, be repurposed, or be sold off
  2. Add the new disks as a new RAID-Z2 vdev, allow ZFS to balance the data itself, and basically take a hands-off approach
    • requires the 4TB disks, now ~3.5 years old, to stay in service. If they start dropping like flies then I'm gonna be spending more money.
  3. Create a new pool with the new disks, zfs send the data over, then use the tutorial previously posted by @sretalla to move over to it
    • this will require a lot more micromanaging and may no longer be necessary if I'm not adding pool encryption
    • also allows me to choose whether the 4TB disks will remain or be removed
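
For reference, my understanding is option 1 boils down to something like this per disk (device names are placeholders):

    # Let the pool grow automatically once every disk in the vdev is larger
    zpool set autoexpand=on tank
    # Swap one old disk for a new one; repeat after each resilver finishes
    zpool replace tank /dev/da0 /dev/da6
    zpool status tank    # watch resilver progress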
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
If you're feeling strongly about that collection of photos and videos, you can always stick them on an encrypted dataset come TrueNAS Core 12. That'll be a lot saner.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I'd go with 1). If you have the room and power in the chassis, you can also do 1b): Mark all drives to be replaced at once, with both original and replacement drives in the chassis. When replacement is done, remove the original drives and decide what to do with them.
 

BetYourBottom

Contributor
Joined
Nov 26, 2016
Messages
141
I'd go with 1). If you have the room and power in the chassis, you can also do 1b): Mark all drives to be replaced at once, with both original and replacement drives in the chassis. When replacement is done, remove the original drives and decide what to do with them.

Yeah, 1b is basically what I was leaning towards. My chassis is a 12-bay and can easily support the power. I was planning on running the new drives in the chassis for a while to do the initial break-in tests (e.g. smart short, long, ext -> badblocks -> smart short, long, ext) before they actually get used.
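
In command form, that break-in is roughly the following (da6 is a placeholder device; badblocks is destructive, so only run it on drives with no data):

    # SMART short and extended self-tests, checking results between runs
    smartctl -t short /dev/da6
    smartctl -t long /dev/da6
    smartctl -a /dev/da6
    # Full write/read pattern test; -b 4096 is needed on large drives
    badblocks -b 4096 -ws /dev/da6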

I completely forgot that I can detach and replace drives from the UI, which means I don't have to rip a drive out and physically replace it with the new drive; I can just swap them remotely one at a time and deal with the old drives when I'm done.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I can just swap them remotely one at a time

It doesn't even have to be one at a time. My understanding is you can kick off replacement in batches, and ZFS will keep trucking as long as the original drives are there and providing parity. Don't detach, but do replace. You can take an Online drive and replace it, and I believe that should work.
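
In command terms that's just kicking off several in-place replaces before the first resilver finishes, something like (hypothetical device names):

    # Each old disk stays online and keeps providing parity while
    # its replacement resilvers alongside it
    zpool replace tank da0 da6
    zpool replace tank da1 da7
    zpool replace tank da2 da8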

Edit: The source for this is tales from people who look after pools of 130 disks. They buy 40-50 replacements at a time, then replace in batches alongside the original disks.
 

BetYourBottom

Contributor
Joined
Nov 26, 2016
Messages
141
It doesn't even have to be one at a time. My understanding is you can kick off replacement in batches, and ZFS will keep trucking as long as the original drives are there and providing parity.

Oooh, that's really cool. I'll double check the docs on everything of course but that would speed things up a lot.

BTW, about the encrypted datasets in TrueNAS Core 12 (I'm just finding out about that change): I'm guessing that since it's at the dataset level, I'd always be able to create a new encrypted dataset to move files into, rather than something as limited as the current setup?

Off topic: Did I see you were helping someone with that ultra-alpha vdev editing feature for ZFS? I think you re-based it on the ZFS master recently. How's that going along? Are you still helping with it?
I'm hoping the next time I have to expand my storage (read: years from now), I'll be able to expand the vdev so I don't have to have 4 parity drives across 12 drives.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I'm guessing that since it's at the dataset level, I'd always be able to create a new encrypted dataset to move files into, rather than something as limited as the current setup

That's right. This is ZFS-level encryption. Currently, FreeNAS encrypts the disks individually using geli, and then ZFS sits on the unlocked disks, unaware of the encryption. That's brittle, shall we say.
The new feature is pure ZFS: encryption on a per-dataset level, handled by ZFS itself. You still need to manage your keys carefully of course, as with any encryption. And I feel a lot more confident about this than about the current system.
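
For the curious, the per-dataset syntax should look something like this (names are placeholders, and details may still shift before 12 ships):

    # Create an encrypted dataset keyed off a passphrase
    zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/secure
    # After a reboot, load the key and mount it again
    zfs load-key tank/secure
    zfs mount tank/secure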

I think you re-based it on the ZFS master recently. How's that going along are you still helping on it?

For some value of "still helping". I haven't done a lick in a month. Just got back to it a week ago, I think I am slowly getting a handle on how zfs-tests works. If you're interested in collaborating on that, shoot me a PM here!
 

BetYourBottom

Contributor
Joined
Nov 26, 2016
Messages
141
That's right. This is ZFS-level encryption. Currently, FreeNAS encrypts the disks individually using geli, and then ZFS sits on the unlocked disks, unaware of the encryption. That's brittle, shall we say.
The new feature is pure ZFS: encryption on a per-dataset level, handled by ZFS itself. You still need to manage your keys carefully of course, as with any encryption. And I feel a lot more confident about this than about the current system.

That's really cool. ZFS itself handling it is definitely going to be better than a cobbled-together solution. Since it's at the per-dataset level, I can always add an encrypted dataset separately if I absolutely need to later. Worst case (i.e. if they don't make it possible to add encryption later), I could make a new dataset under the root and just move everything into it.

I'll keep my eye on it, thanks for the info.

For some value of "still helping". I haven't done a lick in a month. Just got back to it a week ago, I think I am slowly getting a handle on how zfs-tests works. If you're interested in collaborating on that, shoot me a PM here!

I'm not sure that I have much I can help with. I'll look at the repos sometime but I'm not sure I'll be able to grok all of it.
 

BetYourBottom

Contributor
Joined
Nov 26, 2016
Messages
141
Really dumb question: would it be possible to set up the 4TB drives as their own RAID array and have it act as a hot spare for the zpool? I could see it being useful, very temporarily, to have a 3x4TB RAID0 array standing in for a 12TB drive in a RAID-Z2 vdev until a proper replacement comes in.
 