Sgt_Bizkit
Cadet
Joined: Apr 29, 2023
Messages: 3
Hello All, (my first post)
Version:
TrueNAS-SCALE-22.02.4
Running within VMware with direct physical access to the disks.
(This is done so I can dual-boot directly into TrueNAS SCALE if I want, and it continues to function normally as a backup OS.)
Due to budget constraints, I created the pool via the CLI using
3 x 18 TB disks
2 x fake 18 TB disks (offlined)
to create a degraded RAIDZ2.
I later added two extra 18 TB disks via the CLI, replacing the offlined fake disks, and all was well.
Five months later one disk encountered uncorrectable sectors, so I offlined it and used the GUI to replace the disk. (In hindsight I should have used the CLI again.)
I didn't notice any immediate issue, but when I checked the drive stats with "fdisk --list" I saw the replacement had a 2G swap partition created, whereas the others did not.
It started the resilvering process without issue.
This means the ZFS data partition on the replacement is smaller than the data partitions on the other pool members, so I imagine the resilver will fail near 100% (currently at 81%, 6 hours to go); there's a quick size check after the fdisk output below.
However, I'm not 100% sure, as some people state the swap partition is created in case disks from different manufacturers vary slightly in size, and that it's advisable to keep it.
Others state it's used for swapping out RAM, or for backward compatibility with CORE (lots of interpretations).
My question:
Should I disable swap using "midclt call system.advanced.update '{"swapondrive": 0}'", then offline the RMA drive, delete its partitions, and let TrueNAS resilver it?
Or should I just see if the resilver completes with the 2G partition in place?
The RMA drive is /dev/sdc (disk 2 in the Windows screenshot).
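For reference, this is the rough sequence I have in mind if I redo it via the CLI. It is only a sketch, I haven't run any of it yet, and I'm assuming the drive keeps the name /dev/sdc and that sgdisk is the right tool to clear the GUI-created layout:
# not yet run - assumes the RMA drive is still /dev/sdc
midclt call system.advanced.update '{"swapondrive": 0}'   # stop the middleware adding swap to future data disks
zpool offline StoragePool /dev/sdc2                       # take the partially resilvered partition out of the pool
sgdisk --zap-all /dev/sdc                                 # wipe the GUI-created swap + data partitions
zpool replace -f StoragePool /dev/sdc2 /dev/sdc           # hand the whole disk back and resilver again
zpool status -v StoragePool                               # watch the resilver
(I'm not certain zpool will accept the same physical disk as its own replacement here, or whether -f is enough to get past any leftover labels.)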
root@truenas[~]# fdisk --list
Disk /dev/sdd: 16.37 TiB, 18000207937536 bytes, 35156656128 sectors
Disk model: VMware Virtual S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E903C3E2-0EC7-D140-BF00-207392D0F46B
Device Start End Sectors Size Type
/dev/sdd1 2048 35156637695 35156635648 16.4T Solaris /usr & Apple ZFS
/dev/sdd9 35156637696 35156654079 16384 8M Solaris reserved 1
Disk /dev/sdc: 16.37 TiB, 18000207937536 bytes, 35156656128 sectors
Disk model: VMware Virtual S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D6F9D8A4-5383-4C84-AA23-C2147ADE4674
Device Start End Sectors Size Type
/dev/sdc1 128 4194304 4194177 2G Linux swap
/dev/sdc2 4194432 35156656094 35152461663 16.4T Solaris /usr & Apple ZFS
Disk /dev/sdb: 16.37 TiB, 18000207937536 bytes, 35156656128 sectors
Disk model: VMware Virtual S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: A87EAFD6-7424-A940-BB1A-8B1CC3EA0685
Device Start End Sectors Size Type
/dev/sdb1 2048 35156637695 35156635648 16.4T Solaris /usr & Apple ZFS
/dev/sdb9 35156637696 35156654079 16384 8M Solaris reserved 1
Disk /dev/sdf: 16.37 TiB, 18000207937536 bytes, 35156656128 sectors
Disk model: VMware Virtual S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: EA6E172A-78E0-5B41-BB22-A1967F989676
Device Start End Sectors Size Type
/dev/sdf1 2048 35156637695 35156635648 16.4T Solaris /usr & Apple ZFS
/dev/sdf9 35156637696 35156654079 16384 8M Solaris reserved 1
Disk /dev/sda: 120 GiB, 128849018880 bytes, 251658240 sectors
Disk model: VMware Virtual S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: B154E109-E16F-4562-AD84-875DDE6BA813
Device Start End Sectors Size Type
/dev/sda1 4096 6143 2048 1M BIOS boot
/dev/sda2 6144 1054719 1048576 512M EFI System
/dev/sda3 34609152 251658206 217049055 103.5G Solaris /usr & Apple ZFS
/dev/sda4 1054720 34609151 33554432 16G Linux swap
Partition table entries are not in disk order.
Disk /dev/sde: 16.37 TiB, 18000207937536 bytes, 35156656128 sectors
Disk model: VMware Virtual S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: CE1CB627-ADA3-9744-BAB6-AD59F74C855D
Device Start End Sectors Size Type
/dev/sde1 2048 35156637695 35156635648 16.4T Solaris /usr & Apple ZFS
/dev/sde9 35156637696 35156654079 16384 8M Solaris reserved 1
Disk /dev/mapper/sda4: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@truenas[~]#
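Doing the subtraction on the sector counts above (512-byte sectors), the ZFS partition the GUI created on the replacement really does come out roughly 2 GiB smaller than the other members' data partitions:
echo $(( (35156635648 - 35152461663) * 512 ))   # 2137080320 bytes, just under 2 GiB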
Checking my old notes i used these commands when creating the original pool:
truncate -s 18000207937536 /tmp/FD1.img
truncate -s 18000207937536 /tmp/FD2.img
zpool create StoragePool -o ashift=12 -f raidz2 /dev/sdd /dev/sdc /dev/sdb /tmp/FD1.img /tmp/FD2.img
zpool offline StoragePool /tmp/FD1.img
zpool offline StoragePool /tmp/FD2.img
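(As far as I understand it, when zpool is handed whole disks like this, ZFS on Linux partitions them itself into a data partition (1) and a small 8 MiB reserved partition (9), which would explain why sdd/sdb/sdf/sde show no swap partition in the fdisk output above.)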
root@truenas[~]# zpool status StoragePool
pool: StoragePool
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
config:
NAME STATE READ WRITE CKSUM
StoragePool DEGRADED 0 0 0
raidz2-0 DEGRADED 0 0 0
sdf ONLINE 0 0 0
sde ONLINE 0 0 0
sdd ONLINE 0 0 0
/tmp/FD1.img OFFLINE 0 0 0
/tmp/FD2.img OFFLINE 0 0 0
zpool replace StoragePool -f /tmp/FD2.img /dev/sdb
zpool online StoragePool /dev/sdb
zpool replace StoragePool -f /tmp/FD1.img /dev/sdc
zpool online StoragePool /dev/sdc
Thank you for taking the time to read & reply