HOWTO: Using a pair of large SSDs with boot and data pools for SCALE

cmoyroud

Cadet
Joined
Dec 11, 2020
Messages
9
Moderator Warning: This HOWTO will damage the ability of your TrueNAS appliance to properly maintain itself, will break the ability to replace failed drives, and may cause problems with upgrades and updates. TrueNAS is not designed for this sort of partitioning, and the developers are not interested in supporting it. You may be on your own if you do this. Further discussion of the topic is available in this resource.

This is the SCALE version of this howto. It is very similar; the differences come down to the Linux-specific commands used to copy partition info and bootloader data.

I'm currently running Proxmox with a TrueNAS VM for my filer, plus a couple more VMs for Docker containers and a Unifi controller. I am very interested in SCALE because it would let me run all three tasks on a single system, making more efficient use of resources (it would also avoid the not-really-supported SATA passthrough from the hypervisor to the TrueNAS VM).

Since I want to improve the performance of the overall system (which also hosts a Windows VM with GPU passthrough that I'm typing this from), I wanted to see whether using a mirror of Samsung 970 EVO NVMe drives for both the boot pool and a high-performance VM pool was doable with SCALE. I don't have the NVMe drives yet, so I ran this experiment in a Proxmox VM.

Assumptions:
1. You're installing onto additional, non-partitioned disks /dev/sdb and /dev/sdc. Be careful to choose the right disks (see the sanity check below)! I suspect NVMe drives will show up as something like /dev/nvmeXnY, based on how they appear in Proxmox VE, which is also Debian-based.
2. SCALE has been installed on /dev/sda.
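
Before running anything destructive, it's worth confirming which device names map to which physical disks. A minimal sanity check (lsblk ships with SCALE's Debian base; available columns may vary by version):

Code:
# List every disk with its size, model and serial number
> lsblk -d -o NAME,SIZE,MODEL,SERIAL

# Confirm the target disks carry no partitions yet
> lsblk /dev/sdb /dev/sdc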

Code:
# Copy the partition information between disks
> sfdisk -d /dev/sda | sfdisk --force /dev/sdb
> sfdisk -d /dev/sda | sfdisk --force /dev/sdc

# Attach the newly created partitions to the pool
> zpool attach boot-pool sda3 sdb3
> zpool attach boot-pool sda3 sdc3

# Copy the boot partition
> dd if=/dev/sda1 of=/dev/sdb1
> dd if=/dev/sda1 of=/dev/sdc1

# Copy the EFI partition
> dd if=/dev/sda2 of=/dev/sdb2
> dd if=/dev/sda2 of=/dev/sdc2

# Install GRUB on the disks
> grub-install /dev/sdb
> grub-install /dev/sdc

# Wait for boot-pool to resilver. I monitor with:
> watch -n 5 -d zpool status boot-pool

# Remove the initial install drive from the pool
> zpool offline boot-pool sda3
> zpool detach boot-pool sda3


Now you can power off, remove your original install drive (e.g. a USB stick), make sure that your new boot drives are selected in the BIOS/UEFI and be on your merry way.
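
Before you actually pull the original drive, though, it can't hurt to confirm the pool now consists solely of the new mirror (expect an ONLINE mirror of sdb3 and sdc3, with no sign of sda3):

Code:
> zpool status boot-pool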
 
Last edited by a moderator:

cyrus104

Explorer
Joined
Feb 7, 2021
Messages
70
Thanks a bunch for this; I needed it to replace my boot partition and couldn't find a good guide for doing it.

I think there is a typo in the "Copy the EFI partition" step: you are copying the boot partition to the EFI partition.

The quoted step in the original post read:

Code:
# Copy the EFI partition
> dd if=/dev/sda1 of=/dev/sdb2
> dd if=/dev/sda1 of=/dev/sdc2
 

cmoyroud

Cadet
Joined
Dec 11, 2020
Messages
9
That is correct @cyrus104! I edited the post to fix that.

Also note that if you want to create other partitions later, you'll need to fix the GPT so it uses the whole disk (the table copied by sfdisk still reflects the source disk's size). I have not found a way to do this unattended. You can do it with parted: just ask parted to print the partition table and it will offer to fix the GPT, roughly as in the sketch below. Short of using tools like expect, I don't know of any other way.
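
For reference, the interactive exchange looks roughly like this (the warning text and block count vary by parted version and disk size):

Code:
> parted /dev/sdb print
Warning: Not all of the space available to /dev/sdb appears to be used, you can
fix the GPT to use all of the space (an extra NNNN blocks) or continue with the
current setting?
Fix/Ignore? Fix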
 

th3n3k

Cadet
Joined
Apr 24, 2021
Messages
5
UPDATE: I've found https://www.reddit.com/r/truenas/comments/lgf75w/scalehowto_split_ssd_during_installation/ with the "remaining steps" (create the additional partition in the free space + create the new storage pool)


@cmoyroud Please, could you post how to create the new partition and pool? I've copied from a 64 GB SSD to a larger 250 GB one and everything is OK (thank you!). I would like to create a new pool in the new space on TrueNAS SCALE, but I keep finding commands for BSD/TrueNAS CORE and I'm a little bit lost with the ZFS commands here on Linux. Thank you in advance.
 
Last edited:

cmoyroud

Cadet
Joined
Dec 11, 2020
Messages
9
Here you go @th3n3k:
Code:
truenas-scale# fdisk /dev/sda

Welcome to fdisk (util-linux 2.36.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): n
Partition number (5-128, default 5):
First sector (134217695-201326558, default 134217728):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (134217728-201326558, default 201326558):

Created a new partition 5 of type 'Linux filesystem' and of size 32 GiB.

Command (m for help): t
Partition number (1-5, default 5):
Partition type or alias (type L to list all): 67

Changed type of partition 'Linux filesystem' to 'Solaris /usr & Apple ZFS'.

Command (m for help): w
The partition table has been altered.
Syncing disks.


Important note: I've found that the 'partition type or alias' number moves around depending on the OS, version of fdisk, etc. So I'd just use the 'L' option first to see which number corresponds to the 'Solaris /usr & Apple ZFS' partition type on your system. The wording in fdisk seems to indicate that you should be able to enter 'Solaris /usr & Apple ZFS' directly at the prompt, but I've never been brave enough to try it ;)
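
If you'd rather avoid the interactive prompts entirely, here is a non-interactive sketch using sgdisk, assuming the gdisk tools are available on your install (BF01 is sgdisk's type code for 'Solaris /usr & Apple ZFS'; partition number 5 matches the layout above):

Code:
# Create partition 5 spanning the largest free region, then set its type to ZFS
> sgdisk -n 5:0:0 -t 5:BF01 /dev/sda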

Once you've done that on all your disks, you'll have to create the ZFS pool from the command line, since the TrueNAS Web UI won't let you create zpools from partitions:
Code:
truenas-scale# zpool create nvme-rocket mirror /dev/sda5 /dev/sdb5
truenas-scale# zpool export nvme-rocket


You need to export the pool so that TrueNAS can import it. Just go to the Web UI and, under Storage, click the Import button; your pool should be there.
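
One hedged refinement: /dev/sdX names can change between reboots, so you may prefer to build the pool against stable identifiers instead. A sketch using /dev/disk/by-partuuid (the UUID values are placeholders; list yours first):

Code:
# Find the partition UUIDs backing sda5 and sdb5 (values are system-specific)
> ls -l /dev/disk/by-partuuid/

# Create the mirror against the stable paths, then export as before
> zpool create nvme-rocket mirror /dev/disk/by-partuuid/<uuid-of-sda5> /dev/disk/by-partuuid/<uuid-of-sdb5>
> zpool export nvme-rocket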
 

mattyv316

Dabbler
Joined
Sep 13, 2021
Messages
27
Thank you @cmoyroud, this guide is very helpful. I was able to successfully move boot from USB to a mirrored SSD pair, but I am running into problems trying to create a partition from the rest of the drive: fdisk doesn't see any free space. Here is some of the output. The first sector range doesn't look right, and fdisk won't let me enter what I think it should be. Also, if I run the "F" option, it says there are 0 bytes and 0 sectors free. I am not great with Linux, so I am not sure what my options are at this point. Any help is greatly appreciated.


Code:
root@truenas[~]# fdisk -l
Disk /dev/sda: 372.61 GiB, 400088457216 bytes, 781422768 sectors
Disk model: HSCAC2DA4SUN400G
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 3FB2DA40-113D-41D8-AD41-4D16164764E0

Device       Start      End  Sectors  Size Type
/dev/sda1       40     2087     2048    1M BIOS boot
/dev/sda2     2088  1050663  1048576  512M EFI System
/dev/sda3  1050664 30031838 28981175 13.8G Solaris /usr & Apple ZFS


Disk /dev/sdb: 372.61 GiB, 400088457216 bytes, 781422768 sectors
Disk model: HSCAC2DA4SUN400G
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 3FB2DA40-113D-41D8-AD41-4D16164764E0

Device       Start      End  Sectors  Size Type
/dev/sdb1       40     2087     2048    1M BIOS boot
/dev/sdb2     2088  1050663  1048576  512M EFI System
/dev/sdb3  1050664 30031838 28981175 13.8G Solaris /usr & Apple ZFS


Disk /dev/sdc: 14.32 GiB, 15376318464 bytes, 30031872 sectors
Disk model: Ultra Fit
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@truenas[~]# fdisk /dev/sda

Welcome to fdisk (util-linux 2.36.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): n
Partition number (4-128, default 4): 4
First sector (34-39, default 34): 30031839
Value out of range.
First sector (34-39, default 34):

Command (m for help): F
Unpartitioned space /dev/sda: 0 B, 0 bytes, 0 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
 

mattyv316

Dabbler
Joined
Sep 13, 2021
Messages
27
Never mind. I used your parted reference above (the GPT copied from the 14 GiB USB stick still capped the usable space) and the info here.
 

NovaSaltado

Cadet
Joined
Feb 3, 2018
Messages
7
@cmoyroud

Did you ever get around to doing this yourself? I'm trying this now with the latest TrueNAS SCALE and got as far as:

Code:
grub-install /dev/sdm

Output:
grub-install: error: cannot find EFI directory

Currently researching how to get around this, but wanted to share the situation in the meantime.
 

NovaSaltado

Cadet
Joined
Feb 3, 2018
Messages
7
Found a reddit thread that let me move forward:

Code:
Then do this to format your efi partition with mkfs.vfat:

mkfs.vfat /dev/<your efi partition>


In the end here's what I did:
Code:
mkfs.vfat /dev/sdm2
mount /dev/sdm2 /boot
grub-install --efi-directory=/boot

mkfs.vfat /dev/sdo2
mount /dev/sdo2 /boot
grub-install --efi-directory=/boot
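
A slightly tidier variant of the same steps, assuming nothing else needs /boot at that moment, unmounts between the two installs instead of stacking a second mount on top of the first:

Code:
mkfs.vfat /dev/sdm2
mount /dev/sdm2 /boot
grub-install --efi-directory=/boot
umount /boot

mkfs.vfat /dev/sdo2
mount /dev/sdo2 /boot
grub-install --efi-directory=/boot
umount /boot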
 

NovaSaltado

Cadet
Joined
Feb 3, 2018
Messages
7
Okay, scratch my previous messages: I was able to get this to work with TrueNAS SCALE 22.02.1.

I've made a post about the process, as I deviated quite a bit from the OP.
 