How would you do this?

Norlig

Explorer
Joined
Jul 13, 2013
Messages
59
Currently I am running:
- 4x 4TB drives in RaidZ
- 4x 8TB Drives in RaidZ

I recently purchased 4x 8TB drives that I want to use to replace the 4TB ones, but I want to set up RAIDZ2 with all 8x 8TB drives.

Would I need to copy all my data to a temporary storage location, set up all my drives as a new pool then copy the data back?
Or can I somehow merge them into a new array without having to move any data?

I suppose it's the first one, but wanted to ask in case I can save a lot of time :)
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Merging of RAIDZ vdevs is not possible. Do you have space for all 12 drives in your case? You could:

1. Copy the data from the 4x 4 TB pool to the 4x 8 TB pool if there is enough space.
2. Build an 8x 4/8 TB RAIID/2 pool with your new drives and the 4 TB ones.
3. Copy all of the data to that one.
4. Replace the 4 TB drives with the old 8 TB ones without copying data.
5. Remove the 4 TB drives.
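
For the last two steps (swapping the old 8 TB drives in for the 4 TB ones), the shell side would look roughly like this, one disk at a time; the pool and disk names here are just placeholders, and the Replace function in the TrueNAS UI does the same job:

Code:
# zpool set autoexpand=on tank
# zpool replace tank da2 da8
# zpool status tank

autoexpand lets the pool grow on its own once every member disk is 8 TB, zpool replace swaps one 4 TB disk (da2 in this example) for an 8 TB one (da8), and zpool status shows the resilver, which should finish before you move on to the next disk.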

HTH,
Patrick
 

Norlig

Explorer
Joined
Jul 13, 2013
Messages
59
Merging of RAIDZ vdevs is not possible. Do you have space for all 12 drives in your case? You could:

1. Copy the data from the 4x 4 TB pool to the 4x 8 TB pool if there is enough space.
2. Build an 8x 4/8 TB RAIID/2 pool with your new drives and the 4 TB ones.
3. Copy all of the data to that one.
4. Replace the 4 TB drives with the old 8 TB ones without copying data.
5. Remove the 4 TB drives.

HTH,
Patrick

wow, thanks for the advice!
Already started CP-ing the files over, luckily I have enough space!

I assume RAIID/2 is the same as RAIDZ2 (or a typo?)?
Can't find anything on "RAIID/2".

That would initially net me an 8x 4TB RAIDZ2, which would become 8x 8TB once I replace the 4TB ones and resilvering is completed?

I do have an HBA-flashed LSI Card that I can use to connect all 12 drives. (8 on the motherboard and 4 on the HBA)
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Ah, yes - just a typo, sorry. I inserted the (intended) "RAIDZ2" after finishing the entire list to make things clearer and I don't know what my fingers thought they were doing ;)

Yes. You would start with an 8x 4 TB pool (built from half 4 TB and half 8 TB drives) and that would expand once you changed the 4 TB drives for 8 TB ones each in turn.
 

Norlig

Explorer
Joined
Jul 13, 2013
Messages
59
Ah, yes - just a typo, sorry. I inserted the (intended) "RAIDZ2" after finishing the entire list to make things clearer and I don't know what my fingers thought they were doing ;)

Yes. You would start with an 8x 4 TB pool (built from half 4 TB and half 8 TB drives) and that would expand once you changed the 4 TB drives for 8 TB ones each in turn.

Restoring the (old) 4x8TB Snapshot to the 4x8TB+4x4TB pool now.

But it seems to stop when I close the browser.
Do I really need to keep the shell open during the operation?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
You are using zfs send | zfs receive? Yes of course the shell needs to be running until that job is finished.
But you can login via SSH instead of using the browser shell and use screen or tmux - both are installed on TrueNAS.

Or you define a local replication task in the UI ...
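
Over SSH that could look roughly like this (the session, pool and snapshot names are only placeholders):

Code:
# tmux new -s sendrecv
# zfs send -R oldpool@migrate | zfs recv -F newpool

Detach with Ctrl-b d and log out; the transfer keeps running, and a later tmux attach -t sendrecv from any SSH session brings it back to the screen.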
 

Norlig

Explorer
Joined
Jul 13, 2013
Messages
59
You are using zfs send | zfs receive? Yes of course the shell needs to be running until that job is finished.
But you can login via SSH instead of using the browser shell and use screen or tmux - both are installed on TrueNAS.

Or you define a local replication task in the UI ...

Yes, followed this thread: https://www.reddit.com/r/freenas/comments/bl0ra0/restoring_a_snapshot_to_new_hard_drives/

I opened PuTTY on my HTPC (it runs 24/7) and started it there, set the keepalive timer to 120; working well so far, 0.5 TB done already, only 17 TB to go!
 

Pitfrr

Wizard
Joined
Feb 10, 2014
Messages
1,531
For background shell operations I found that screen is a good tool (and also natively available in FreeNAS). You can restore your session if it gets broken.
I use it a lot in zfs send | zfs receive cases.
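
For example (the session name is just an example):

Code:
# screen -S sendrecv
# screen -r sendrecv

The first command opens the named session you run zfs send | zfs receive in; the second, from a fresh login after a dropped connection, picks that session up again with the pipeline still running.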
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
For background shell operations I found that screen is a good tool (and also natively available in FreeNAS).
I recall reading somewhere that screen was deprecated and would disappear at some point, so have always used tmux (which has a few more functions).
 

Norlig

Explorer
Joined
Jul 13, 2013
Messages
59
Seems this zfs send | zfs receive is only running at 95-ish MB/s.

Kind of slow?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
only running at 95-ish MB/s

kind of slow?

Writing to a RAIDZ array approximates the IOPS of a single disk (the slowest one in each vdev). If the way that send | recv is working makes the workload IOPS-heavy, there's no surprise at that speed.
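
You could watch the per-disk numbers while it runs to see whether it really is IOPS-bound (the pool name is a placeholder):

Code:
# zpool iostat -v tank 5

That prints operations and bandwidth per vdev and per disk every 5 seconds, so you can see whether the disks are saturated on operations long before they reach their sequential throughput.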
 

Norlig

Explorer
Joined
Jul 13, 2013
Messages
59
Well...

Goodbye data, I guess...?

scared to go home from work now.

well_shiit.png
 

Norlig

Explorer
Joined
Jul 13, 2013
Messages
59
In case anyone was wondering, the status is that the array is still working.
I lost contact with the NAS, so I pushed the power button, waited what seemed like 20 minutes for it to shut down, then powered it back up again.

The resilvering restarted... but only 1 drive came back as faulted, at the 5% progress mark.

I also moved the drives into a hard drive enclosure I had taken out of a computer case, instead of having them sit directly on the table with a small mouse mat under them.

2020-11-25%2015.12.04.jpg
2020-11-25%2015.11.02.jpg
2020-12-02%2013.52.48.jpg
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
I run mine in a similar case (see build #3 in my sig), three quick thoughts:

1. Be wary of 90 degree bends on SATA connections, and the clearance to the side panels. The stress can cause connectivity issues. I ended up fixing mine by ordering SFF cables with 90 deg locking ends.

2. The newer 8 TB drives are coming out with adherence to the newer SATA spec that uses 3.3 V on the power plug as a "sleep" signal. Since you have both native SATA and Molex-to-SATA conversion in view in the pics, keep in mind that moving drives from one power plug to another may alter their behavior.

3. Others may have different opinions, but... That 350 watt PSU is kind of small for 8+ drives.
 

Norlig

Explorer
Joined
Jul 13, 2013
Messages
59
3. Others may have different opinions, but... That 350 watt PSU is kind of small for 8+ drives.

I have a wattage meter on the power cord; the computer has a maximum draw of 260 watts from the wall (with the current 12 drives).



Wondering if I have corrupted data now though:

1606948039641.png
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
I have a wattage meter on the power cord; the computer has a maximum draw of 260 watts from the wall (with the current 12 drives).



Wondering if I have corrupted data now though:


Sorry to hear... Just let ZFS do its thing and see what happens at this point.


On the PSU wattage... 12 drives... That makes it even worse. Mine only draws around 200 watts as well, and configuring an 850 watt PSU was overkill, but it got pulled from another system as a freebie and has lots of 12v capability. I have run on a 500 watt, but it was marginal given I'm planning a CPU/Memory upgrade. Ideally I'd probably want a 650.

The real question is how much load is on each supply rail. What is the inrush wattage as the drives spin up at boot? There are 3.3 V, 5 V and 12 V rails, and each gets a limited slice of the total 350 watts. It's quite possible to be well under 350 watts overall but over the limit on an individual rail, or over the limit during a load spike. This can lead to instability, odd hangs, drives dropped from pools at boot, etc. Configuring a PSU with more headroom won't make it use more power than it needs, but it adds reserve capacity for load spikes.
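
As a rough illustration (the per-drive figure is a ballpark, not from any datasheet): a 3.5" drive can draw on the order of 1.5-2 A from the 12 V rail while spinning up, so 12 drives starting together can ask for roughly 12 x 2 A x 12 V ≈ 290 W from the 12 V rail alone, before the board, CPU and fans are counted. Whether a 350 W unit copes depends on how much of that rating it can actually deliver on 12 V.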
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
Looks like it restarted. The question: Is it the same drive, or a different one?

You can get more info by running:
Code:
# zpool status <poolname>


From an ssh session, or even the UI shell.
 

Norlig

Explorer
Joined
Jul 13, 2013
Messages
59
Looks like it restarted. The question: Is it the same drive, or a different one?

You can get more info by running:
Code:
# zpool status <poolname>


From an ssh session, or even the UI shell.

I think it was foolish of me to click Replace on 4 drives simultaneously.

Tested some of the files with "Permanent errors" and was able to open them; no noticeable errors in them.

I am not able to tell if the resilvering is going well, looking at this image though.

1607092761992.png
 