RAIDZ expansion, it's happening ... someday!

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
So now I am planning to copy the 4TB onto one disk, and create a 9-disk raidz3 array which should hopefully give me 12TB of space with 3 copies of all blocks.

Then I will copy the 4TB back onto raidz3.

There is an alternative to that. It's a little tricky; read the forums for exact guidance. What you are going to do is build around a phantom drive:

- Create a sparse file of 4TB
- Create a 10-wide raidz3 that uses nine (9) physical disks and one (1) sparse file
- DO NOT COPY ANY DATA ON THERE
- Offline the sparse file "disk", and delete the sparse file
- Your pool is now degraded with 2x redundancy
- Copy the 4TB onto the pool
- Bring in the 4TB drive you had your data on, tell ZFS to use it as a replacement for the "failed" sparse file drive, and wait for resilver. Tada: a 10-wide raidz3 with proper 3x redundancy.
- When building your pool and resilvering, take care not to use physical drive letters. You need unique disk identifiers (gptid labels), and partitioning the drives for future replacement ease is a grand idea. This is why this gets a bit tricky: usually TrueNAS takes care of all that, but it won't let you build a pool with sparse files. You can let it partition the drives on a 9-wide pool, then blow that pool away from the command line and rebuild it from the CLI using gptid identifiers rather than /dev names, with the sparse file included. I recommend reading up on it, having a solid checklist, and going in prepared to blow it all away and redo it until the pool is exactly the way it needs to be. Then, and only then, put data on it. A rough sketch of the CLI sequence follows.
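A minimal sketch of what that sequence might look like from the shell, assuming made-up gptid labels, a pool named "tank", and 4TB drives; substitute your own nine labels (from glabel status) and double-check every step before running any of it:

```
# Sparse file the same size as the real disks; it occupies no actual space.
truncate -s 4T /root/phantom.img

# Build the 10-wide raidz3 from nine real disks plus the sparse file.
# The gptid labels below are placeholders, not real ones.
zpool create tank raidz3 \
    gptid/aaaa1111 gptid/bbbb2222 gptid/cccc3333 \
    gptid/dddd4444 gptid/eeee5555 gptid/ffff6666 \
    gptid/aaaa7777 gptid/bbbb8888 gptid/cccc9999 \
    /root/phantom.img

# Degrade the pool on purpose and reclaim the file before copying data in.
zpool offline tank /root/phantom.img
rm /root/phantom.img

# ...copy the 4TB of data onto the pool...

# Finally, hand the old 4TB source drive to ZFS in place of the phantom.
zpool replace tank /root/phantom.img gptid/dddd0000
zpool status tank    # watch the resilver bring back full 3x redundancy
```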
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
(BTW the reason I want raidz3 is because I don't intend to ever monitor this setup. It's an install-it-and-leave-it scenario.)
No way, too risky, because there are always issues: power failures, electronics failures, fires, all kinds of things. If you desire a system which requires no monitoring and no maintenance, rent some cloud space. People who set up a FreeNAS system several years ago and went the leave-it-alone route were crawling on their knees when problems eventually arose. Some folks even lost data.
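To put a floor under what "monitoring" means here, a minimal sketch of a nightly health check run from cron; the zpool check is standard behavior, but the mail address and schedule are placeholders, and TrueNAS already has built-in e-mail alerting, so this only illustrates the bare minimum:

```
#!/bin/sh
# Hypothetical nightly check: mail the admin if any pool is not healthy.
STATUS=$(zpool status -x)
if [ "$STATUS" != "all pools are healthy" ]; then
    echo "$STATUS" | mail -s "ZFS pool problem on $(hostname)" admin@example.com
fi
```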
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
Very little can be left alone for years without issues and the hardware required is usually out of reach for non-nation state pocketbooks. Think Voyager, Mars rovers, etc. That wasn't off-the-shelf hardware; it was purpose-built by teams that literally wrote the book on planetary exploration.

We are all intrinsically planning for failure via pool design, component selection, and the value (to us) of the data. See all the cost-benefit analyses we collectively go through on the "Will it FreeNAS" forum pages.

A Z3 VDEV in a pool does not guarantee absolute data integrity until the sun melts us all in 5BN years... all it does (IF properly implemented, monitored, etc.) is lower the probability of pool failure. Besides the disks, other stuff can fail too. Hence the tendency towards server-grade hardware.

Regardless, backups are still needed and those better be scrubbed periodically also. The most vigilant also invest in off-site replication (cloud or ZFS-send, your choice).
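On the "scrubbed periodically" point, a bare-bones sketch of what that looks like from cron on a plain FreeBSD or Linux box; the pool name "backup" is a placeholder, and TrueNAS/FreeNAS schedule scrubs from the GUI instead:

```
# Hypothetical root crontab entry: scrub the backup pool at 02:00 on the 1st and 15th.
0 2 1,15 * * /sbin/zpool scrub backup
```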
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Very little can be left alone for years without issues and the hardware required is usually out of reach for non-nation state pocketbooks.
There are some REALLY solidly designed hardware platforms available that could do such a thing, and they are relatively inexpensive. Often the cut-down server boards are already solid enough to run 5+ years without maintenance. CPUs themselves are also able to run 5+ years without maintenance.
It's mostly storage going bad, hardware getting dirty, and PSUs failing that cause hardware failures.

But above all: it's most often the software that is at fault for preventing 5+ years of uptime, not the hardware.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
It can take years for hardware / software flaws to show up. I also agree that software is usually the more likely culprit for failure and that a stripped-down feature set combined with watchdogs can result in relatively stable performance.

The above suggests getting a multi-year proven hardware platform as NOS, burning it in, and installing something known for years to be stable to run on it. Naturally, that may limit the capabilities of said board and software, but that is the price of stability. From a software perspective, shutting down all extraneous processes / features can also be helpful.

I was very fortunate to have been the purchaser of a Mini XL, since the hardware support by iXsystems is simply fantastic. I'm on my fourth or fifth ASRock C2750D4I motherboard now, and my fingers are crossed that, between a v1.03 hardware revision (suggesting a post-AVR54 C2750 Atom CPU) and p.035 firmware / BIOS, the board may work trouble-free for years as a server.

I have found the -STABLE releases to be just that, STABLE, with no issues re uptime. What usually drove reboots were updates and/or hardware changes. The combination of FreeNAS and FreeBSD has been remarkable in that regard.

That leaves us with the most common culprit, the media as a failure point, which has been the Achilles heel of almost all systems out there (including mine). Hard to design around very high availability requirements without any physical maintenance. Allowing degraded pools to fester is likely the prime reason for data loss in FreeNAS-land other than grossly-misconfigured pools for a particular use case.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I would agree that a person could take a good quality system and make it run for a very long time, but as both of you @ornias @Constantin have said, the storage does eventually fail. Sometimes it fails much earlier (infant mortality) and sometimes it will fail well beyond expectations. My server has been rock solid since I built it. I do update the ESXi software with patches as they become available. FreeNAS 11.3-U5 (planning to test TrueNAS today) has been very stable for me; actually, most of the FreeNAS Stable versions have been good for my limited use case (storage and Plex).

Those of us who value our data will buy appropriate hardware, and we have expectations that the hardware will survive a certain period of time. For example, I expect my motherboard, RAM, and CPU to survive at least 10 years, the CPU fan maybe 5 years, and the power supply also 10 years (it may need a fan replaced, or just go fanless), but the hard drives I expect to live no longer than 5 years. My SSD boot drive I expect to last well over the life of the project. If you force a system software reboot periodically, then you can address any memory leaks or other software problems that manifest over long periods of runtime (I don't do that, since I update the system every few months for ESXi anyway).

With all that said, I would still never suggest that someone build a system and forget about it, even if it would operate fine, because one day, at the worst possible time, the owner will notice something failed; if they are notified early enough, they might be able to fix it before experiencing data loss. For some people, managing a system like this is more work than they are willing to do, so for those people there are other options, such as cloud storage, if they want to keep data in a fairly safe place. Over the long haul, some of these cloud storage solutions are not badly priced versus building your own system, and many have free tiers (I keep 4GB of important data in free storage as an off-site backup, in an Acronis, soon to be Macrium, backup file). But use case matters: I tried cloud storage for a few months, and it only works for me as off-site storage of my important information, not for holding system backups, which get very large and are a pain to download in full for a restoration.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
Could not agree more. There is a case for cloud storage, and there is a different case for on-site storage. The cloud lends itself well to (properly encrypted) files without low-latency requirements. On-site has more predictable latency, though with the added cost of acquisition, maintenance, etc.

At the end of the day, once you add up our electrical rates, purchase costs, upkeep effort, the time value of money, and so on, cloud storage is on the whole likely cheaper than buying new hardware... and said in-house hardware had better last a long time! Cloud storage is also perfect for folks who are not technically inclined, not scrupulous about maintenance, etc.

However, I prefer to be the master of my own ship and leave pretty much nothing in the cloud. Never mind security implications, like iCloud being decryptable by Apple, for example. I've had my shared web server hacked too many times to trust the cloud.

Even if the attacker cannot decrypt the presumably-encrypted data hosted by the cloud, they can still subtly corrupt it. Assuming you even detect said corruption, the time it takes to upload whole new data sets can be eye-opening. For me, DAS-based backups have their place, ditto a locally seeded and then snapshot-replicated offsite NAS.
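For what "locally seeded, then snapshot-replicated" can look like in practice, a rough sketch; the dataset tank/data, the host offsite-nas, and the snapshot names are all placeholders:

```
# Seed the offsite box while it is still on the local network (fast).
zfs snapshot tank/data@seed
zfs send tank/data@seed | ssh offsite-nas zfs receive -u backup/data

# Once the box is offsite, only the deltas travel over the wire.
zfs snapshot tank/data@weekly-07
zfs send -i @seed tank/data@weekly-07 | ssh offsite-nas zfs receive -u backup/data
```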
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@AbbeyRoad1124

Gee, everyone said basically what I would have said, except;

In general, these forums are for FreeNAS, which is based on FreeBSD, not Linux. iXsystems is developing a new product with a slightly different focus: still a NAS, but with more virtualization, and based on Linux. That said, TrueNAS SCALE is not production-ready software.

ZFS was NOT created to avoid your "catastrophe". ZFS is 14 years old, (production-wise), and until recently, other methods were used to expand a pool. It was invented to avoid lots of other problems. For example, (but not limited to);

- File system integrity, (to avoid long, at boot, file system checks)
- Data integrity checks, (so you KNOW when data went bad)
- Combine file system, volume management and RAID functions to ease management of large amounts of disks
- Remove RAID-5/6 type write hole

RAID-Zx expansion was only thought about later.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
To bring this back to raidz expansion: Get hyped, we may have a beta build “maybe as soon as in a few weeks”, which I interpret optimistically as “beta may just happen Q1 2021, and if not that, Q2”. From there to production - 1 to 2 years?
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
The PR, in Matt’s comments there :)
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Can you just hardlink next time, instead of references without a source, and now a source reference that requires others to backtrack your steps?

For other people reading, CLICK:
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I make no promises, particularly when on mobile and it's already linked in the OP. That said, sure, can't hurt to link it again.
 

Vega1210

Cadet
Joined
Feb 21, 2021
Messages
1
When this becomes available, will we be able to expand our existing RAIDZ2, or will we have to pull all our data off the drives, start over, and then be able to expand in the future?
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
When this becomes available, an existing N-wide raidz1/2/3 can become an N+1-wide raidz1/2/3. It'll keep the same redundancy; that can't be changed. After the expansion finishes, you could then add another drive, and so on.
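Going by the pull request, the proposed syntax is an attach against the raidz vdev itself, roughly as sketched below; the pool and device names are placeholders, and the exact command could still change before release:

```
# Proposed raidz expansion syntax (per the PR); names are made up.
zpool attach tank raidz2-0 /dev/ada9   # grow the existing raidz2 vdev by one disk
zpool status tank                      # the expansion shows up as a long-running operation
```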
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@Vega1210, Yes, you can expand twice or more. For example;

- RAID-Z2 with 4 disks expanded to 5
- RAID-Z2 with now 5 disks, expanded a second time to 6 disks

The limitation that existing data is still "striped" across 4 disks remains.

And if I recall correctly, a potential future, (as in NOT now), feature might be to allow expansion by 2 disks at the same time.

However, RAID-Z1 to Z2, or Z2 to Z3, type expansion is not on any drawing board. It's wanted, but no one has committed to work on it.
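For the 4-to-5-to-6 example above, once the feature ships that would presumably just be two attach operations run back to back; the device names below are made up, and this only sketches the proposed behavior:

```
# Hypothetical: widen a 4-disk raidz2 twice, waiting for each expansion to finish.
zpool attach tank raidz2-0 /dev/ada4   # 4-wide raidz2 becomes 5-wide
# ...wait for the expansion to complete (watch "zpool status")...
zpool attach tank raidz2-0 /dev/ada5   # 5-wide becomes 6-wide
```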
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
In principle and as far as the design goes, this can be applied to existing pools (with loss of compatibility with older systems, of course).
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
It’s out of alpha, and there’s a talk about it at the FreeBSD dev summit today!


 