Migrating to new drives


hertzsae

Contributor
Joined
Sep 23, 2014
Messages
118
I am replacing 4 x 4TB drives with 6 x 6TB drives, and a friend is purchasing my old 4TBs. I have borrowed his LSI SATA/SAS controller (already flashed to IT mode) so that I can have all 10 drives plugged in at once. The 6TBs are nearing the end of their stress tests.

All I really want to do is a "replace" operation to migrate my pool from my existing vdev to a new one. Searching through the documentation, it sounds like replace only works at the individual disk level.

What is the easiest way to go about a migration to new hard drives? My searching keeps turning up false positives about single-disk replacements or moving to the same number of larger drives. My goal is to not have to reconfigure my jails. Avoiding downtime would be nice, but it is not mandatory.

Thank you much!

Possibly useful information below:

zpool status:
Code:
  pool: Vol1
state: ONLINE
  scan: scrub repaired 0 in 10h45m with 0 errors on Sun Apr 24 10:45:34 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        Vol1                                            ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/a35a86a2-84b1-11e4-8c11-0cc47a0981c5  ONLINE       0     0     0
            gptid/a3ddc4a0-84b1-11e4-8c11-0cc47a0981c5  ONLINE       0     0     0
            gptid/a46e42f7-84b1-11e4-8c11-0cc47a0981c5  ONLINE       0     0     0
            gptid/a4eb68b6-84b1-11e4-8c11-0cc47a0981c5  ONLINE       0     0     0

errors: No known data errors


zfs list:
Code:
 
NAME                                                    USED  AVAIL  REFER  MOUNTPOINT
Vol1                                                   6.26T   558G   140K  /mnt/Vol1
Vol1/.system                                           38.4M   558G   151K  legacy
Vol1/.system/configs-f36704f2fe794cb6a75657843255655d  24.5M   558G  24.4M  legacy
Vol1/.system/cores                                     4.92M   558G  2.48M  legacy
Vol1/.system/rrd-f36704f2fe794cb6a75657843255655d       140K   558G   140K  legacy
Vol1/.system/samba4                                     820K   558G   622K  legacy
Vol1/.system/syslog-f36704f2fe794cb6a75657843255655d   7.75M   558G  7.06M  legacy
Vol1/media                                             6.21T   558G  6.16T  /mnt/Vol1/media
Vol1/pluginjails                                       47.5G   558G   244K  /mnt/Vol1/pluginjails
Vol1/pluginjails/.warden-template-pluginjail--x64       792M   558G   791M  /mnt/Vol1/pluginjails/.warden-template-pluginjail--x64
Vol1/pluginjails/.warden-template-pluginjail-9.3-x64    508M   558G   508M  /mnt/Vol1/pluginjails/.warden-template-pluginjail-9.3-x64
Vol1/pluginjails/couchpotato_1                          717M   558G  1.16G  /mnt/Vol1/pluginjails/couchpotato_1
Vol1/pluginjails/plexmediaserver_1                     42.8G   558G  37.2G  /mnt/Vol1/pluginjails/plexmediaserver_1
Vol1/pluginjails/sabnzbd_1                             1.11G   558G  1.29G  /mnt/Vol1/pluginjails/sabnzbd_1
Vol1/pluginjails/sickrage_1                            1.63G   558G  1.53G  /mnt/Vol1/pluginjails/sickrage_1
Vol1/users                                              244K   558G   244K  /mnt/Vol1/users
freenas-boot                                            630M  57.0G    31K  none
freenas-boot/ROOT                                       616M  57.0G    25K  none
freenas-boot/ROOT/9.10-STABLE-201605240427              611M  57.0G   488M  /
freenas-boot/ROOT/Initial-Install                         1K  57.0G   480M  legacy
freenas-boot/ROOT/Pre-9.10-STABLE-201605021851-118227     1K  57.0G   481M  legacy
freenas-boot/ROOT/default                              4.53M  57.0G   485M  legacy
freenas-boot/grub                                      12.7M  57.0G  6.33M  legacy


camcontrol devlist:
Code:
<ATA HGST HDN726060AL T517>        at scbus0 target 0 lun 0 (pass0,da0)
<ATA HGST HDN726060AL T517>        at scbus0 target 1 lun 0 (pass1,da1)
<ATA HGST HDN726060AL T517>        at scbus0 target 2 lun 0 (pass2,da2)
<ATA HGST HDN726060AL T517>        at scbus0 target 3 lun 0 (pass3,da3)
<HGST HDN724040ALE640 MJAOA5E0>    at scbus1 target 0 lun 0 (ada0,pass4)
<HGST HDN724040ALE640 MJAOA5E0>    at scbus2 target 0 lun 0 (ada1,pass5)
<HGST HDN724040ALE640 MJAOA5E0>    at scbus3 target 0 lun 0 (ada2,pass6)
<HGST HDN724040ALE640 MJAOA5E0>    at scbus4 target 0 lun 0 (ada3,pass7)
<HGST HDN726060ALE610 APGNT517>    at scbus5 target 0 lun 0 (ada4,pass8)
<HGST HDN726060ALE610 APGNT517>    at scbus6 target 0 lun 0 (ada5,pass9)
<Samsung Flash Drive FIT 1100>     at scbus8 target 0 lun 0 (pass10,da4)
<Samsung Flash Drive FIT 1100>     at scbus9 target 0 lun 0 (pass11,da5)
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
I think you are a little confused about the "Replace" option. While it is true that "Replace" could be used, it is not the right tool for your situation, because you not only want to increase the drive size, you also want to add two drives. You cannot add or remove drives from a RaidZx vdev; it is basically "locked" to that number of drives.

Since you originally created that vdev with 4 drives, you could only replace those 4 drives with larger ones (one at a time, letting the resilvering process complete before moving on to the next drive).
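For reference, that disk-by-disk route looks roughly like this from the shell (device and gptid names here are just placeholders taken from your output; on FreeNAS you would normally do this through the GUI's Volume Status > Replace so the new disk gets partitioned and labeled properly):

Code:
# Replace one member of raidz2-0 with a new disk, e.g. da0 (placeholder)
zpool replace Vol1 gptid/a35a86a2-84b1-11e4-8c11-0cc47a0981c5 da0

# Watch the resilver; only start the next replacement after it finishes
zpool status Vol1

But again, that only ever gets you 4 x 6TB; it will not turn a 4-disk vdev into a 6-disk one.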

Take a look at @depasseg's "Howto: migrate data from one pool to a bigger pool", which should be what you really want to do.
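In short, that approach amounts to building a new pool on the 6TB disks and replicating everything over with ZFS snapshots. A rough sketch of the idea (the new pool name "Vol2" and the snapshot names are just placeholders; follow the actual steps and flags in the howto):

Code:
# Recursive snapshot of the old pool
zfs snapshot -r Vol1@migrate1

# Full replication of all datasets and their properties to the new pool
zfs send -R Vol1@migrate1 | zfs recv -F Vol2

# Optional second, incremental pass after stopping jails/services,
# to pick up anything written since the first snapshot
zfs snapshot -r Vol1@migrate2
zfs send -R -i Vol1@migrate1 Vol1@migrate2 | zfs recv -F Vol2

Since you want to keep your jails working without reconfiguration, the trick of detaching the old pool and importing the new one under the old name (so your /mnt/Vol1 paths stay valid) is worth looking into, but follow the howto for the FreeNAS-safe way to do that.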

Also, since I see some jails in your zfs list output, take a look at @Fuganater's "Moving Jails from main volume to new SSD volume" (this is also linked in Depasseg's thread).
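If you replicate the entire pool as above, the pluginjails datasets come along for the ride and the main thing left is repointing the jail root path in the Jails configuration. If you ever need to move just the jails on their own, it is the same send/receive pattern scoped to the jail dataset (names below are placeholders from your output):

Code:
# Stop the jails first, then replicate only the jail datasets
zfs snapshot -r Vol1/pluginjails@jailmove
zfs send -R Vol1/pluginjails@jailmove | zfs recv Vol2/pluginjails

# Then point the jail root path in the GUI at the new location
# (e.g. /mnt/Vol2/pluginjails) and start the jails back up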
 