Migration / Upgrade pool to new drives

Ben Smith

Dabbler
Joined
Nov 3, 2016
Messages
14
Evening,

My home server has got to the age where I need to replace the HDDs, so I'm migrating the storage pool to a set of SAS SSDs. The original pool is 8 x 3TB WD Red HDDs in a raidZ2 layout and the new pool will be 5 x 1.9TB SAS SSDs (PM1633a) - yes, a lot less, but my circumstances have changed since the original build and I no longer need as much storage, so I'm going for lower power and quieter.

I am following the procedure given by https://www.truenas.com/community/threads/howto-migrate-data-from-one-pool-to-a-bigger-pool.40519/ and I've done a test run which appears to have worked fine, but it did throw up a couple of questions I haven't found answers to on the forums.
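
For context, the core of that procedure as I understand it is a recursive snapshot of the old pool followed by a send/receive into the new pool; the commands below are only a rough illustration and may not match the guide word for word:

Code:
# recursive snapshot of everything on the old pool
zfs snapshot -r old-tank@migrate

# replicate datasets, properties and snapshots into the new pool
zfs send -R old-tank@migrate | zfs receive -F tank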

1. When creating the partitions on the new SSDs, do I need to apply any specific parameters to optimise the layout? There are quite a few forum posts about configuring block sizes, 4K sectors, etc., but they are quite old. Is it reasonable to assume gpart in FreeNAS 13 is aware of this and will 'do the right thing' in terms of sizes and alignments?
The drives are EMC units which report a physical block size of 4K in SMART.

Code:
=== START OF INFORMATION SECTION ===
Vendor:               SAMSUNG
Product:              PA33N1T9 EMC1920
Revision:             EQL8
Compliance:           SPC-4
User Capacity:        1,920,924,123,136 bytes [1.92 TB]
Logical block size:   512 bytes
Physical block size:  4096 bytes
LU is resource provisioned, LBPRZ=1
Rotation Rate:        Solid State Device
Form Factor:          2.5 inches
Logical Unit id:      0x5002538a0754ace0
Serial number:        9VNA0J502933
Device type:          disk
Transport protocol:   SAS (SPL-3)
Local Time is:        Sun May 28 18:51:45 2023 BST
SMART support is:     Available - device has SMART capability.
SMART support is:     Enabled
Temperature Warning:  Disabled or Not Supported

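To put the question concretely: is something along these lines sufficient for these drives, or are there extra flags I should be passing? (da0 is just an example device name.)

Code:
# make sure new vdevs are created with at least 4K sectors (ashift=12)
sysctl vfs.zfs.min_auto_ashift=12

# GPT scheme plus a single 4K-aligned ZFS partition per SSD
gpart create -s gpt da0
gpart add -t freebsd-zfs -a 4k da0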

2. On the original drive setup I've got a 2GB swap partition on each drive, which shows up in swapinfo as 2 mirrors giving a total of 4GB of swap.
I forgot to consider swap and haven't created any on the SSDs.
Do I need to create a new swap area?
The system has 64GB of RAM, which seems more than enough. Realistically the existing 4GB of swap is not going to make much difference if something manages to consume all the RAM, but opinions seem divided on whether a swap partition of some kind should always be made available 'just in case'.
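
If the consensus is that some swap is worth keeping, I'm assuming it would just be a case of something like the below on each new drive before the ZFS partition goes on (device name and size are placeholders):

Code:
# what swap is currently active (the mirrors built from the old HDDs)
swapinfo -h

# a 2GB swap partition on one of the new drives, 4K aligned
gpart add -t freebsd-swap -a 4k -s 2g da0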

3. After the migration, if I look in the GUI under Storage -> Disks, the SSDs are not shown as in use by the new pool.
I assume this is because I created a degraded pool through the shell, as I don't have enough ports to connect all the drives at once. (I used the instructions from https://www.truenas.com/community/resources/creating-a-degraded-pool.100/ to create the new pool in a degraded state while I perform the migration.)
Could this cause issues down the line?
Is there a way to get the GUI to recognise these drives are part of the new pool?
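
For completeness, the gist of that resource is to stand a sparse file in for the missing drive and then offline it; what I ran was roughly the following (the gptid names are placeholders for the real SSD partitions):

Code:
# sparse file standing in for the fifth SSD until it can be connected
truncate -s 1920G /root/sparsefile

# create the pool from 4 real partitions plus the file, then offline the
# file so nothing is ever written to it
zpool create tank raidz1 gptid/<ssd1> gptid/<ssd2> gptid/<ssd3> gptid/<ssd4> /root/sparsefile
zpool offline tank /root/sparsefile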

Thanks
Ben
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
1. When creating the partitions on the new SSDs, do I need to apply any specific parameters to optimise the layout? There are quite a few forum posts about configuring block sizes, 4K sectors, etc., but they are quite old. Is it reasonable to assume gpart in FreeNAS 13 is aware of this and will 'do the right thing' in terms of sizes and alignments?
Most of what you can find about block sizes, 4K sectors and optimization is still valid, but those settings matter much more on spinning rust than on SSDs. If you don't tell us more about your use case we can't help much, but the default settings won't hurt you and will, at worst, cost you some performance (which might be an issue if you do block storage or similar things, depending on your objective).
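
If you want to double-check after creation that the vdevs came out 4K-aligned, something like the following should show it; <poolname> is your new pool, and on CORE the pool cache file is usually /data/zfs/zpool.cache (adjust the -U path if yours differs):

Code:
# ashift: 12 means the vdevs use 4 KiB sectors
zdb -U /data/zfs/zpool.cache -C <poolname> | grep ashift

# the minimum ashift ZFS will auto-select for new vdevs
sysctl vfs.zfs.min_auto_ashift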

2. On the original drive setup I've got a 2GB swap partition on each drive, which shows up in swapinfo as 2 mirrors giving a total of 4GB of swap.
I forgot to consider swap and haven't created any on the SSDs.
Do I need to create a new swap area?
The system has 64GB of RAM, which seems more than enough. Realistically the existing 4GB of swap is not going to make much difference if something manages to consume all the RAM, but opinions seem divided on whether a swap partition of some kind should always be made available 'just in case'.
If you do things (creating the pool) through the WebUI, I believe the system will take care of it for you. Realistically, with 64 GB of RAM you shouldn't need swap space.

3. After the migration, if I look in the GUI under Storage -> Disks, the SSDs are not shown as in use by the new pool.
I assume this is because I created a degraded pool through the shell, as I don't have enough ports to connect all the drives at once. (I used the instructions from https://www.truenas.com/community/resources/creating-a-degraded-pool.100/ to create the new pool in a degraded state while I perform the migration.)
Could this cause issues down the line?
Is there a way to get the GUI to recognise these drives are part of the new pool?
Please show us the output of zpool status and camcontrol devlist. Please format it using the [CODE][/CODE] brackets.

Now export the pool using zpool export testpool, and import it through the GUI. Once the missing disk is available, you can replace it into the pool using the GUI.
This line is taken from the guide you linked: I assume you would see the pool and the drives correctly once you do that; maybe @danb35 can confirm this.
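
In other words, once the data is across and the fifth SSD is physically connected, the rough shell sequence would be the following (the replace can equally be done from the GUI; pool name, sparse file path and gptid are placeholders):

Code:
# hand the pool over to the middleware, then import it from the GUI
# (Storage -> Pools -> Add -> Import an existing pool)
zpool export testpool

# after the fifth SSD is connected and partitioned, swap out the sparse file
zpool replace testpool /path/to/sparsefile gptid/<new-ssd-partition>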
 

Ben Smith

Dabbler
Joined
Nov 3, 2016
Messages
14
Most of what you can find about block sizes, 4K sectors and optimization is still valid, but those settings matter much more on spinning rust than on SSDs. If you don't tell us more about your use case we can't help much, but the default settings won't hurt you and will, at worst, cost you some performance (which might be an issue if you do block storage or similar things, depending on your objective).
I'm more concerned about sub-optimal configuration causing premature wear than squeezing maximum performance out of the drives. Sounds like on that score it's taken care of.
Please show us the output of zpool status and camcontrol devlist. Please format it using the [CODE][/CODE] brackets.

zpool status
Code:
# zpool status
  pool: freenas-boot
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:00:22 with 0 errors on Tue May 30 03:45:22 2023
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            ada0p2    ONLINE       0     0     0
            ada1p2    ONLINE       0     0     0

errors: No known data errors

  pool: old-tank
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 11:22:44 with 0 errors on Sun May 14 11:22:56 2023
config:

        NAME                                            STATE     READ WRITE CKSUM
        old-tank                                        ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/1bdcec49-db56-11e6-9efe-0cc47ae11bbe  ONLINE       0     0     0
            gptid/1c99a96b-db56-11e6-9efe-0cc47ae11bbe  ONLINE       0     0     0
            gptid/1d579ea3-db56-11e6-9efe-0cc47ae11bbe  ONLINE       0     0     0
            gptid/1e16de80-db56-11e6-9efe-0cc47ae11bbe  ONLINE       0     0     0
            gptid/1ed93e28-db56-11e6-9efe-0cc47ae11bbe  ONLINE       0     0     0
            gptid/1f9b2dbf-db56-11e6-9efe-0cc47ae11bbe  ONLINE       0     0     0
            gptid/20605ffe-db56-11e6-9efe-0cc47ae11bbe  ONLINE       0     0     0
            gptid/2127fc9d-db56-11e6-9efe-0cc47ae11bbe  ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank                                            DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            /root/sparsefile                            OFFLINE      0     0     0
            gptid/852a7f20-fd3d-11ed-a9ce-0cc47ae11bbe  ONLINE       0     0     0
            gptid/d494f720-fd3d-11ed-a9ce-0cc47ae11bbe  ONLINE       0     0     0
            gptid/d622796e-fd3d-11ed-a9ce-0cc47ae11bbe  ONLINE       0     0     0
            gptid/d759073f-fd3d-11ed-a9ce-0cc47ae11bbe  ONLINE       0     0     0

errors: No known data errors


camcontrol devlist
Code:
# camcontrol devlist
<SAMSUNG PA33N1T9 EMC1920 EQL8>    at scbus0 target 0 lun 0 (da0,pass0)
<SAMSUNG PA33N1T9 EMC1920 EQL8>    at scbus0 target 1 lun 0 (da1,pass1)
<SAMSUNG PA33N1T9 EMC1920 EQL8>    at scbus0 target 2 lun 0 (da2,pass2)
<SAMSUNG PA33N1T9 EMC1920 EQL8>    at scbus0 target 3 lun 0 (da3,pass3)
<ATA WDC WD30EFRX-68E 0A82>        at scbus0 target 4 lun 0 (da4,pass4)
<ATA WDC WD30EFRX-68E 0A82>        at scbus0 target 5 lun 0 (da5,pass5)
<ATA WDC WD30EFRX-68E 0A82>        at scbus0 target 6 lun 0 (da6,pass6)
<ATA WDC WD30EFRX-68E 0A82>        at scbus0 target 7 lun 0 (da7,pass7)
<CT120BX500SSD1 M6CR013>           at scbus1 target 0 lun 0 (ada0,pass8)
<CT120BX500SSD1 M6CR013>           at scbus2 target 0 lun 0 (ada1,pass9)
<WDC WD30EFRX-68EUZN0 82.00A82>    at scbus3 target 0 lun 0 (ada2,pass10)
<WDC WD30EFRX-68EUZN0 82.00A82>    at scbus4 target 0 lun 0 (ada3,pass11)
<WDC WD30EFRX-68EUZN0 82.00A82>    at scbus5 target 0 lun 0 (ada4,pass12)
<WDC WD30EFRX-68EUZN0 82.00A82>    at scbus6 target 0 lun 0 (ada5,pass13)
<AHCI SGPIO Enclosure 2.00 0001>   at scbus7 target 0 lun 0 (ses0,pass14)

This line is taken from the guide you linked: I assume you would see the pool and the drives correctly once you do that; maybe @danb35 can confirm this.
Yes, I did the export from the CLI and the import within the GUI. Both the new and old pools are listed as expected on the Storage > Pools page, but on the Storage > Disks page the new disks say n/a in the Pool column. The original disks have the pool 'old-tank' listed.
I'm guessing that when a pool is created via the GUI it records something (serial? GPT GUID?) linking the disks to the pool. It's probably no big deal - but it might mean the GUI either allows, or prevents, me doing something in the future because it doesn't know the disks are linked to the new pool.
If it's likely to come back and bite me in the future, I could re-create the new pool by temporarily removing the old disks, attaching all 5 SSDs, and creating the pool through the GUI - I think that would register everything correctly. It's just a bit more fiddling with cables that I'd prefer to avoid if possible.
 