HOWTO: setup a pair of larger SSDs for boot pool and data

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,737
Yes, it works precisely this way. My procedure is not necessary for fresh installs.
 

raidflex

Guru
Joined
Mar 14, 2012
Messages
531
Yes, it works precisely this way. My procedure is not necessary for fresh installs.

Great. Currently I have my boot drive on one 250 GB SSD, but I want to install two 1 TB SSDs mirrored and move the jails, boot, and system dataset config to them.
 

HarryMuscle

Contributor
Joined
Nov 15, 2021
Messages
161
I'm not sure if such an involved process was required when SCALE first became available; however, at this point this process is overly complicated for SCALE. All you need to do is change literally one line in the install script before starting it, and it will create a smaller partition on the boot drive. It's a single, simple sed statement. I'll try to post the exact statement I used when I'm back at my computer.

Thanks,
Harry
 

Koskee

Cadet
Joined
Jan 23, 2022
Messages
3
Well, it definitely works. Although fair warning, it's mind-numbingly slow trying to run the installer USB 2.0 -> USB 2.0. Took about an hour and a half, not including the first boot, which is... still going at this point. lol
Very useful guide though.
Appreciatory sentiments to @Patrick M. Hausen && @acdoussan
Cheers
 

Grinchy

Explorer
Joined
Aug 5, 2017
Messages
78
Thank you so much! Works great!

There's just one point that won't work for me.
If I try to change the settings of the created pool (adding compression, etc.) in the GUI, it tells me "[aclmode] Invalid choice: DISCARD".

Using SSH it seems to work fine.

Any idea?
 

Thibaultmol

Cadet
Joined
Nov 17, 2016
Messages
5
Could you post what you did?
I'm not sure if such an involved process was required when SCALE first became available; however, at this point this process is overly complicated for SCALE. All you need to do is change literally one line in the install script before starting it, and it will create a smaller partition on the boot drive. It's a single, simple sed statement. I'll try to post the exact statement I used when I'm back at my computer.

Thanks,
Harry
EDIT: ended up following this guide, works perfectly now
 

HarryMuscle

Contributor
Joined
Nov 15, 2021
Messages
161
I'm not sure if such an involved process was required when SCALE first became available; however, at this point this process is overly complicated for SCALE. All you need to do is change literally one line in the install script before starting it, and it will create a smaller partition on the boot drive. It's a single, simple sed statement. I'll try to post the exact statement I used when I'm back at my computer.

Thanks,
Harry
Finally getting back to this ... for those who asked about this, all you have to do is select the Shell option in the installation GUI and run the following command:

Code:
sed -i 's/sgdisk -n3:0:0/sgdisk -n3:0:+16G/' /usr/sbin/truenas-install


where +16G is the size of the boot partition that you want. Then run:

Code:
truenas-install


to start the installation GUI again. That's all there is to it. This essentially tells the script to limit the size of the boot partition when it creates it, so that it doesn't take up the whole drive. Once the installation is done, you will still need to use the command line to create the partition and pool that will hold your data on the same drive as the boot partition, but that is much more standard practice and easily googled; a rough sketch follows.
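For completeness, a rough sketch of that follow-up step on SCALE. The device name (sdX), the new partition number (5), and the pool name (ssd-data) are placeholders; adjust them to your own system:

Code:
# add a ZFS partition (type BF01) in the remaining free space
sgdisk -n5:0:0 -t5:BF01 /dev/sdX
partprobe /dev/sdX
# note the PARTUUID of the new partition
blkid /dev/sdX5
# create a pool on it, export it, then import it from the web UI
zpool create ssd-data /dev/disk/by-partuuid/<PARTUUID-from-blkid>
zpool export ssd-data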

Thanks,
Harry
 

Bonoboy

Cadet
Joined
Jul 13, 2018
Messages
9
Unfortunately this isn't working. I made a bootable USB with Rufus from the downloaded ISO (https://download.freenas.org/13.0/STABLE/RELEASE/x64/TrueNAS-13.0-RELEASE.iso) and booted from it. When I use the shell option and type the command above, I get "sed: 1: "/usr/sbin/truenas-install": unterminated substitute pattern". I checked the ISO by mounting it in Windows, and there is no /usr/sbin/truenas-install.
Maybe it has to do with the warning I received when using Rufus to create the bootable USB:
"The image you have selected is an ISOHybrid, but its creators have not made it compatible with ISO/File copy mode. As a result, DD image writing mode will be enforced."
Any idea?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,110
@Bonoboy The above sed wizardry is for SCALE but you're using CORE. Refer to the first post for the FreeBSD procedure.
 

fastzombies

Explorer
Joined
Aug 11, 2022
Messages
57
New to TrueNAS, so I had the luxury of a fresh install. I followed the original guide with one change:

I installed to two SSDs and a USB drive, then offlined and detached the SSDs from the boot pool. From there I followed the rest of the guide and it worked. Thank you OP for this guide; a sketch of the detach step is below.
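In case it helps anyone, the detach step is just the standard ZFS commands; the device names below are placeholders, check zpool status for yours:

Code:
zpool status boot-pool           # note the SSD partitions in the mirror
zpool offline boot-pool ada1p2   # placeholder device names
zpool detach boot-pool ada1p2
zpool offline boot-pool ada2p2
zpool detach boot-pool ada2p2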
 

toobes

Cadet
Joined
Sep 25, 2022
Messages
1
Hi all,

first of all: this is an unsupported configuration. I have been running it myself for quite some time and I'm confident that it will continue to work, but you proceed at your own risk.

1. Motivation

Current generations of SSDs frequently start at 250 GB or even 500 GB. It is a waste of space and money to put only the boot pool on a drive this large. On the other hand, in a production installation it is highly desirable to use SSDs instead of USB thumb drives.
My system has two SSDs holding the boot pool and another pool with virtual machines. The ZVOLs in the latter are regularly replicated to my main data pool (RAIDZ2), which is made up of spinning disks.

2. Prerequisites

If you already run FreeNAS from one or two SSDs that you would like to repurpose, you must perform a fresh installation. It is not possible to shrink a vdev, so we need to repartition.

Parts:
  • 1 USB drive of suitable size, 16 or 32 GB
  • 1, or better 2, SSDs for the boot pool and additional data
My target drives:

Code:
ada4: <Samsung SSD 860 PRO 512GB RVM01B6Q> ACS-4 ATA SATA 3.x device
ada4: Serial Number S42YNX0MA13733E
ada4: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 512bytes)
ada4: Command Queueing enabled
ada4: 488386MB (1000215216 512 byte sectors)
ada5 at ahcich5 bus 0 scbus5 target 0 lun 0
ada5: <Samsung SSD 860 PRO 512GB RVM01B6Q> ACS-4 ATA SATA 3.x device
ada5: Serial Number S42YNX0MA13643E
ada5: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 512bytes)
ada5: Command Queueing enabled
ada5: 488386MB (1000215216 512 byte sectors)

I'll assume UEFI boot throughout this post.

3. Preparation

As stated, if you are already running the system, a complete reinstall is necessary, so:
  • Backup your configuration
  • Export your data pools from the UI
4. Install to the USB drive

Use your preferred method (I frequently use VMware Fusion on my Mac) to perform a fresh FreeNAS install. Actually, there's nothing that fundamentally prohibits what we are trying to achieve; it is just that the installer won't let us partition the system drive(s) manually. Hence the workaround with a smaller initial drive.

5. Boot the USB installation

This should be easy using the boot menu of your EFI BIOS. The tricky part is that you want to make sure you are really running off the USB drive and don't import and mount the pool still on your SSDs. They are both named the same ("boot-pool"), so the system might get confused. I do not have the time to try it in a VM and do a transcript. Up to this point you will definitely not damage anything, stay calm ;)

6. Enable SSH and login

Configure the "new" FreeNAS installation to the point where you can get a shell via SSH. Then do

Code:
zpool status boot-pool
  pool: boot-pool
state: ONLINE
config:

    NAME        STATE     READ WRITE CKSUM
    boot-pool  ONLINE       0     0     0
      da0p2  ONLINE       0     0     0

errors: No known data errors

So we are running from USB (da0) ...

7. Wipe your SSDs

My SSDs are ada4 and ada5 (see above); ada0 - ada3 are the spinning disks that make up my main storage pool. Danger, Will Robinson! We are deleting the old installation now. Don't confuse the disk drives!

Code:
zpool labelclear ada4p2
zpool labelclear ada5p2
gpart delete -i2 ada4
gpart delete -i2 ada5
gpart delete -i1 ada4
gpart delete -i1 ada5
gpart destroy ada4
gpart destroy ada5


8. Transfer the new system to the SSDs

Copy the partition table:

Code:
gpart backup da0 | gpart restore ada4
gpart backup da0 | gpart restore ada5


Attach the devices to the zpool:

Code:
zpool attach boot-pool da0p2 ada4p2
zpool attach boot-pool da0p2 ada5p2


Copy the EFI boot partition:

Code:
dd if=/dev/da0p1 of=/dev/ada4p1
dd if=/dev/da0p1 of=/dev/ada5p1


Wait for the resilver to finish! (should not take long)
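If you want to watch the progress, the standard status command shows the resilver state (nothing specific to this setup):

Code:
zpool status boot-pool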

Detach the USB drive from the pool:

Code:
zpool offline boot-pool da0p2
zpool detach boot-pool da0p2


Result:

Code:
zpool status boot-pool
  pool: boot-pool
state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:02 with 0 errors on Thu Jan  9 03:45:02 2020
config:

    NAME        STATE     READ WRITE CKSUM
    boot-pool  ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        ada4p2  ONLINE       0     0     0
        ada5p2  ONLINE       0     0     0

errors: No known data errors
gpart show ada4
=>        40  1000215136  ada4  GPT  (477G)
          40        2008        - free -  (1.0M)
        2048      524288     1  efi  (256M)
      526336    67108864     2  freebsd-zfs  (32G)
    67635200   932579976     - free -   (445G)

You should now have quite a bit of free space on your SSDs.

9. Shutdown, unplug the USB drive, boot from your SSDs, restore services

If the system boots all right, now is the time to import your main data pool and restore your configuration.

Apart from the smaller partition size (which the installer refuses to create) there is absolutely nothing different from a regular install. So unless some future upgrade decides to wipe and repartition the system drives during the process this setup will definitely work like any other. We had that (wipe and repartition) once in the past when FreeNAS switched from UFS to ZFS for the boot drive. There might be a situation in the future when the boot loader changes (again) - you are on your own, watch the release notes for every major upgrade!

10. Create a new pool in the available space

Again we will need the command line; once we are finished, the pool will be available in the GUI like any other:

Code:
gpart add -t freebsd-zfs -a 1m ada4
gpart add -t freebsd-zfs -a 1m ada5
gpart list ada4
gpart list ada5

Look for these in the output of the gpart list command:

Code:
3. Name: ada4p3
[...]
   rawuuid: 25fe934a-19d6-11ea-82a1-ac1f6b76641c
[...]
3. Name: ada5p3
[...]
   rawuuid: 3fc8e29a-19d0-11ea-9848-ac1f6b76641c


Now create the pool - use the UUIDs from the previous step:

Code:
zpool create ssd mirror gptid/25fe934a-19d6-11ea-82a1-ac1f6b76641c gptid/3fc8e29a-19d0-11ea-9848-ac1f6b76641c


Lastly, we export the pool from the command line:

Code:
zpool export ssd


11. Import the new pool from the GUI

Done! Enjoy :)

Patrick


Edit 20210514: replace "freenas-boot" with "boot-pool" for current versions
Thank you so much. Everything worked out.
 

Bannix

Cadet
Joined
Sep 24, 2023
Messages
2
I know that it is not supported in any way, but is it possible to use the free space not for storage in an additional new pool, but instead as an L2ARC (as a metadata vdev replacement, as described here) for another, already existing pool? This is much less critical in terms of data protection but does not completely waste the precious SSD space. Which adjustments to the step-by-step guide would I need to make for this to work? I'm not familiar with the TrueNAS SCALE CLI, so any guidance is appreciated.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
The first question is whether you would benefit from L2ARC at all. But yes, conceptually possible.
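Mechanically, a rough, untested sketch of what that would look like on SCALE; the device name (sdX), partition number (5), and pool name (tank) are placeholders:

Code:
# add a partition in the free space left behind the boot partition
sgdisk -n5:0:0 -t5:BF01 /dev/sdX
partprobe /dev/sdX
# attach it to an existing pool as a cache (L2ARC) vdev
zpool add tank cache /dev/disk/by-partuuid/<PARTUUID-of-the-new-partition>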
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,110
This is much less critical in terms of data protection but does not completely waste the precious SSD space.
Why "precious SSD space"? The clear advice is to go small and CHEAP for the boot drive, and not to mind about "wasted space"—think rather about "wasted time" hacking around the expectations of an appliance OS.
L2ARC (assuming an L2ARC is useful at all) is best served by a high-endurance SSD. Running L2ARC on the extra space of an old 256 GB SSD serving as the boot drive may end up wearing out the boot drive. Oops!

If you have a use for L2ARC, and meet the hardware requirements for it (reminder: at least 64 GB for CORE, 96 GB for SCALE!), you can afford a discrete L2ARC drive.
 

Bannix

Cadet
Joined
Sep 24, 2023
Messages
2
Why "precious SSD space"? The clear advice is to go small and CHEAP for the boot drive, and not to mind about "wasted space"—think rather about "wasted time" hacking around the expectations of an appliance OS.
L2ARC (assuming an L2ARC is useful at all) is best served by a high-endurance SSD. Running L2ARC on the extra space of an old 256 GB SSD serving as the boot drive may end up wearing out the boot drive. Oops!

If you have a use for L2ARC, and meet the hardware requirements for it (reminder: at least 64 GB for CORE, 96 GB for SCALE!), you can afford a discrete L2ARC drive.
Thanks for the response. After doing more research I guess I was a bit overzealous; I found that in my case, on a warm system, the metadata is cached in ARC anyway and I can ls -R my million files (all residing on HDDs) within a second. I'm using the new 23.10 RC with ZFS 2.2.
Still, out of curiosity, it would be interesting to see how to attach part of the SSD to an existing pool.
 

noncompliant

Cadet
Joined
Oct 19, 2023
Messages
1
Thanks a lot! Here's my process using the technique that modifies the installer.
Code:
//Before install, start shell (actually I used vi but this is a clear description of what to do)
# sed -i 's/sgdisk -n3:0:0/sgdisk -n3:0:+32G/' /usr/sbin/truenas-install
// Then run truenas-install


// After install:

#  parted /dev/nvme0n1
GNU Parted 3.4
Using /dev/nvme0n1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print free
Model: SAMSUNG XYZ (nvme)
Disk /dev/nvme0n1: 512GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: pmbr_boot

Number  Start   End     Size    File system  Name  Flags
        17.4kB  2097kB  2080kB  Free Space
 1      2097kB  3146kB  1049kB                     bios_grub, legacy_boot
 2      3146kB  540MB   537MB   fat32              boot, esp
 4      540MB   17.7GB  17.2GB                     swap
 3      17.7GB  52.1GB  34.4GB  zfs
        52.1GB  512GB   460GB   Free Space

(parted) mkpart
Partition name?  []?                                                     
File system type?  [ext2]? zfs                                           
Start? 52.1GB                   <-- Equal to Start of Free Space entry                                           
End? 512GB                      <-- Equal to End of Free space entry


# blkid /dev/nvme0n1p5
/dev/nvme0n1p5: LABEL="boot-pool" UUID="12851368433026806858" UUID_SUB="13930648100761960720" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="5b5bd244-a7f9-416f-b145-57f9dd0de8e2" <-- !! Careful here, not sure why it's named boot-pool for me, had to verify that this was the right one !!

# zpool create -f nvmePool /dev/disk/by-partuuid/5b5bd244-a7f9-416f-b145-57f9dd0de8e2

# zpool export nvmePool

// Now go into UI and import your pool -
 

catchimran

Cadet
Joined
Nov 6, 2023
Messages
1
Moderator Warning: This HOWTO will damage the ability of your TrueNAS appliance to properly maintain itself, will break the ability to replace failed drives, and may cause problems with upgrades and updates. TrueNAS is not designed to do this sort of partitioning, and the developers are not interested in supporting this. You may be on your own if you do this. Further discussion of the topic is available here in this resource.

I am trying to install the NAS for the first time, so this is a fresh install on a 500 GB SSD. Which steps do I have to omit? Please confirm.

By the way, I appreciate these steps.
 

underpickled

Contributor
Joined
Oct 1, 2013
Messages
166
Yes, it works precisely this way. My procedure is not necessary for fresh installs.
Is there any reason the shorter method wouldn't work if you exported the config and ZFS pool of the original system, did a fresh install, and imported both?
Edit: I think I worked out that there is no difference; the first guide is just for reclaiming space from an existing SSD boot pool, whereas if you're moving a system to new hardware or installing on the SSD for the first time, you can just do a fresh install and import the config and pool. Hopefully someone will correct me if I'm wrong.

Separately, I assume the moderator warning "will break the ability to replace failed drives" is referring to the partitioned SSD and not other HDDs in a separate pool?
 