How do I make a degraded 2-drive RAIDZ?


Skrenes

Cadet
Joined
Sep 15, 2018
Messages
9
Hello everyone. I have a workstation running Debian stretch with 24 GB of RAM and an 8TB WD Red, with an identical setup in another city (at my parents' home); the two mirror each other via a nightly rsync script. I'm running out of storage (83% occupied), so I've decided to give FreeNAS a shot and bought two more 8TB WD Reds. My goal is to set up the three 8TB WD Reds as RAID5/RAIDZ.
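
For context, the nightly job boils down to something like this (the paths and hostname here are placeholders, simplified from the real script):

  # -a preserves permissions/ownership/timestamps, -z compresses over the WAN,
  # --delete keeps the remote copy an exact mirror of the source
  rsync -az --delete /srv/data/ backup@remote-nas:/srv/data/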

My question is how to migrate from the occupied drive to the RAIDZ setup. Is it possible to set up the two empty drives as a degraded RAIDZ volume, copy the data over, then repair the RAID using the source drive?
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
No, this is not possible within FreeNAS. You will need to back up your data and build the pool with all three drives attached.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
My question is how to migrate from the occupied drive to the RAIDZ setup. Is it possible to set up the two empty drives as a degraded RAIDZ volume, copy the data over, then repair the RAID using the source drive?
I would say no, but I have been shown to be wrong before, so there may be a more experienced person who knows some trickery from the command line. I still think it is a flat no using the GUI.

First, though, you should read some of the documentation to guide you toward good choices, because it sounds like you are making some less-than-ideal choices based on incomplete information. Have a look at these resources:

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/

FreeNAS® Quick Hardware Guide
https://forums.freenas.org/index.php?resources/freenas®-quick-hardware-guide.7/

Hardware Recommendations Guide (Rev 1e) 2017-05-06
https://forums.freenas.org/index.php?resources/hardware-recommendations-guide.12/

There are many more useful links under the button in my signature.
 

Skrenes

Cadet
Joined
Sep 15, 2018
Messages
9
Thanks everyone for your prompt and honest feedback! I spent more than an hour reading the presentation and links provided by Chris. I think I will stick with my Debian setup (which, I forgot to mention, uses a boot partition and a LUKS AES-XTS 512-bit encrypted partition). I'll use mdadm to expand my one-drive setup to RAID 5 and continue mirroring to my remote NAS.
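
Roughly, the plan looks like this (device names are placeholders, and LUKS would sit on top of the md device):

  # Create a degraded 3-disk RAID5 from the two new drives; the "missing"
  # keyword reserves the third slot for the original drive
  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc missing
  # ...set up LUKS and a filesystem on /dev/md0, copy the data over...
  # Then add the old drive and let mdadm rebuild parity onto it
  mdadm --manage /dev/md0 --add /dev/sda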

When I get more time, I will purchase a used server with 8 hot-swap LFF bays and 64GB+ of ECC memory and use FreeNAS.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
When I get more time, I will purchase a used server with 8 hot-swap LFF bays and 64GB+ of ECC memory and use FreeNAS.
Closer to the end of the calendar year, or early part of the new year, there is usually a good supply of retired data-center gear on eBay. If you come back with a budget, we can help you find something that will be a good value and fully compatible. There are some hardware requirements that are specific to BSD (the base OS) and ZFS (the file system) so the hardware does need to be chosen with care for the sake of reliability.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Yes, it's possible, but as @HoneyBadger says, strongly discouraged. In general, the process would work like this:
  • truncate -s 8T /root/sparsefile (or: dd if=/dev/zero of=/root/sparsefile bs=1m count=8m)
  • zpool create tank raidz1 ada0 ada1 /root/sparsefile
  • zpool offline tank /root/sparsefile
  • (optional) rm /root/sparsefile
  • Set up your shares, copy your data, etc.
  • Through the GUI, do a disk replacement to bring the third disk online and write the appropriate data.
Despite its 8 TB apparent size, the sparse file takes very little space on your boot device: truncate never actually allocates the blocks, and even the dd variant works because ZFS has compression enabled by default and zeroes are highly compressible.
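
You can see this for yourself by comparing the file's apparent size with what is actually allocated:

  ls -lh /root/sparsefile   # apparent size: 8T
  du -h /root/sparsefile    # blocks actually allocated: next to nothing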

Now, this procedure doesn't partition your disks or create the swap partition. Nor does it create the pool using gptids. Both of those are normally done by FreeNAS and are discussed elsewhere in the forums; those details are left as an exercise for the reader.

Of course, until the disk replacement is complete, you have (potentially) several TB of data on a pool with no redundancy. A disk failure means complete data loss, while a read error will likely mean corrupted data. So, not really a good idea.
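
For reference, the GUI replacement in the last step corresponds roughly to this at the command line (assuming the wiped source disk shows up as ada2; the GUI also handles the FreeNAS partitioning mentioned above):

  # Swap the offlined sparse file for the real third disk and resilver
  zpool replace tank /root/sparsefile ada2
  # Watch the resilver progress
  zpool status tank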
 
Last edited:

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@danb35, we probably should have a Resource on how to do that. It comes up every few months. And with a resource, we can make sure the wording has "strongly discouraged" as part of the first sentence.

Then we can get the details of the correct partitioning and device pathing documented.
 

Allan Jude

Dabbler
Joined
Feb 6, 2014
Messages
22
dd if=/dev/zero of=/root/sparsefile bs=1m count=8g
You can just use truncate -s 8g /root/sparsefile instead. It won't take nearly as long as writing out 8 GB of zeros (even if ZFS compressed them away).

You can also use 'gzero', a special FreeBSD device that ignores all writes and returns zeros for all reads. Then just offline it immediately.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
You can also use 'gzero', a special FreeBSD device that ignores all writes
Sounds like an interesting form of write-only memory. So the syntax in that case would be zpool create tank raidz1 ada0 ada1 gzero?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
You can also use 'gzero', a special device in FreeBSD that makes a huge device that ignores all writes, and returns 0s for all reads. Then just offline it immediately.
Curious whether that would even successfully create the pool, since setting up the partitions would probably fail when the device just returns zeros.

Edit: Yep, it works. Just do geom zero load and you'll get the gzero device available. You need to force pool creation with -f due to the mismatched device size, but it works. Trying to mirror the FreeNAS-specific partition setup might make it fail, but passing the whole device appears to work long enough to fool it. I'll take a stab at documenting it later on with the caveat suggested by @Arwen: "you really REALLY shouldn't do this even if you think it's a good idea"

truncate should do the trick too; you'll want to specify 8T to match the other drives (and then offline it immediately). Although, again, this results in zero redundancy for any blocks written to the Z1 in this configuration.
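
Putting the pieces together, the gzero variant is just this (pool and disk names assumed, and the same "don't actually do this" caveat applies):

  geom zero load                                 # makes /dev/gzero available
  zpool create -f tank raidz1 ada0 ada1 gzero    # -f because of the size mismatch
  zpool offline tank gzero                       # offline the fake member immediately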

And in this case, the OP has signalled their intention to hold off for an 8-drive setup rather than play Russian roulette with their data.
 
Last edited:

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
You can just use truncate -s 8g /root/sparsefile instead. It won't take nearly as long as writing out 8gb of zeros (even if ZFS compressed them away)

You can also use 'gzero', a special device in FreeBSD that makes a huge device that ignores all writes, and returns 0s for all reads. Then just offline it immediately.
Oh snaps, we got some real devs in here now!
 

Allan Jude

Dabbler
Joined
Feb 6, 2014
Messages
22
Yeah, definitely don't run this for long. When I used gzero it was a RAID-Z2, and the extra drive (a DOA replacement) was coming within 48 hours. This is definitely a 'footgun'.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
"you really REALLY shouldn't do this even if you think it's a good idea"
Or even "especially if you think it's a good idea"...

Although it gives you, in effect, the ability to turn a two-disk stripe into a three-disk RAIDZ1 at some future time. (Edit: or to turn an n-disk RAIDZ1 into an n+1-disk RAIDZ2.) If you look at it that way, maybe it isn't such a terribly bad idea; not obviously worse than using striped disks, anyway...
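
A sketch of that RAIDZ2 variant, with all the same caveats (sizes and device names assumed):

  # A 4-disk RAIDZ2 with one fake member runs with RAIDZ1-equivalent
  # redundancy until the fourth disk arrives
  truncate -s 8T /root/sparsefile
  zpool create -f tank raidz2 ada0 ada1 ada2 /root/sparsefile
  zpool offline tank /root/sparsefile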
 
Last edited:

Skrenes

Cadet
Joined
Sep 15, 2018
Messages
9
You're all awesome! What a response! This definitely makes me look forward to using FreeNAS in the future. Before I even pursued this upgrade path, I did a sanity check on RAID reliability with conservative/pessimistic MTBF estimates for my drives using this nifty resource. It definitely made me nervous about running the degraded mdadm setup, which is why, in addition to my source drive, I have the cloned off-site NAS and a portable external drive, all mirrored and checksummed for good measure. I'd have to have the worst luck in the world to hit three consecutive drive failures or UREs during this process.

I'll definitely check back before I purchase a used server to ensure a good fit with FreeBSD and ZFS.
 
