Swap partition on new pools

mm0nst3r

Dabbler
Joined
Sep 5, 2021
Messages
33
How do I disable the swap partition, or set its size to zero, on new pools? It used to be in the settings in Core, but I can't find it in Scale.
 

Chris3773

Dabbler
Joined
Nov 14, 2021
Messages
17
I could not find the option in the GUI; however, you can change the value using the API.

Create an API key under the root user.

Then you can use a curl command to call the API and set the swap partition size to 0, disabling swap for new pools.
Code:
curl -X PUT -d "{\"swapondrive\": 0}" -H "Content-Type: application/json" -H "Authorization: Bearer {{APIKEY}}" https://{{TRUENAS SCALE IP}}/api/v2.0/system/advanced



Remember to remove the API key from the root user once done. You will need to recreate the pool to remove existing swap partitions.
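
If you want to verify the change, reading the same endpoint back should show "swapondrive": 0 (same API key and host placeholders as above; add -k if the web UI uses a self-signed certificate):
Code:
curl -H "Authorization: Bearer {{APIKEY}}" https://{{TRUENAS SCALE IP}}/api/v2.0/system/advanced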
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
or you could just do it from the CLI/shell...


Code:
midclt call system.advanced.update '{"swapondrive": 0}'
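
To confirm the new value took effect, you can read it back (jq is optional, just for readability):
Code:
midclt call system.advanced.config | jq '.swapondrive'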
 

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,041
or you could just do it from the CLI/shell...


midclt call system.advanced.update '{"swapondrive": 0}'
Thank you for the useful command. When I check the config, I don't see the option? Can you please let me know how to display the related advanced settings?
Code:
# midclt call system.advanced.config | jq
{
  "id": 1,
  "consolemenu": true,
  "serialconsole": false,
  "serialport": "ttyS0",
  "serialspeed": "9600",
  "powerdaemon": false,
  "swapondrive": 0,
  "overprovision": null,
  "traceback": true,
  "advancedmode": false,
  "autotune": false,
  "debugkernel": false,
  "uploadcrash": true,
  "anonstats": true,
  "anonstats_token": "",
  "motd": "Welcome to TrueNAS",
  "boot_scrub": 7,
  "fqdn_syslog": false,
  "sed_user": "USER",
  "sysloglevel": "F_INFO",
  "syslogserver": "",
  "syslog_transport": "UDP",
  "kdump_enabled": false,
  "isolated_gpu_pci_ids": [],
  "kernel_extra_options": "",
  "syslog_tls_certificate": null,
  "syslog_tls_certificate_authority": null,
  "consolemsg": false
}
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Uh, did you overlook this?
Code:
# midclt call system.advanced.config | jq
{
  "id": 1,
  "consolemenu": true,
  "serialconsole": false,
  "serialport": "ttyS0",
  "serialspeed": "9600",
  "powerdaemon": false,
  "swapondrive": 0,
...

Or was it something else you were looking for?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
We should probably have a quick Resource on some of these tunables. But, I will leave it to others who have more knowledge of them... (I did write 3 Resources on subjects where I do have enough knowledge.)
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Please note that the extra / unused space on data pool disks serves a function. If you get a replacement disk that is a tiny bit smaller (like less than 2GB), you can steal that space from the swap partition. But if you don't have that extra space on your data pool disks, and use the full disk size, you would be screwed. (If the replacement was even 1 sector smaller...)

As I said elsewhere, it would be nice if we had standard sizes from all manufacturers. But, that's not what we have.
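
If you want to see how much two "identically sized" drives actually differ before committing to a replacement, comparing raw sector counts with generic Linux tools is enough; the device names below are only examples:
Code:
# exact size in 512-byte sectors, then in bytes
blockdev --getsz /dev/sda
blockdev --getsize64 /dev/sda
blockdev --getsz /dev/sdb
blockdev --getsize64 /dev/sdb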
 
winnielinnie

Joined
Oct 22, 2019
Messages
3,641
and remove that 2GB partition from zfs pools.
There's an added benefit to the default 2GB "swap" partition when creating a new vdev. It creates buffer space, so that when replacing a failed disk you're not as likely to be hit with a disk that is just barely smaller than the other disks in the vdev. (Even if the marketed size is "identical" to your existing drives, there's a chance they differ by a marginal amount.)

EDIT: @Arwen beat me to it by mere seconds. :tongue:
 

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,041
Please note that the extra / unused space on data pool disks serves a function.
I see what you mean, makes sense now.
There's an added benefit to the default 2GB "swap" partition when creating a new vdev.
In 2020, iXsystems removed the swapondrive option from the UI and set "swapondrive": 0 as the default in Core. I thought the same setting should apply to Scale, especially since iXsystems is making sure the same standards are implemented in OpenZFS. Obviously, I'm here to learn and understand what the best practices are.

As an exercise, I wiped /dev/sda; here are the changes after resilvering. Why the wipefs differences? cc @HoneyBadger

Disk with 2GB partition:
Code:
# fdisk -l /dev/sda
Disk /dev/sda: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: HUH728080ALE601
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 210FD27E-4AFF-44BF-B605-6C9B8142D498

Device       Start         End     Sectors  Size Type
/dev/sda1      128     4194304     4194177    2G Linux swap
/dev/sda2  4194432 15628053134 15623858703  7.3T Solaris /usr & Apple ZFS

# wipefs /dev/sda
DEVICE OFFSET        TYPE UUID LABEL
sda    0x200         gpt
sda    0x74702555e00 gpt
sda    0x1fe         PMBR

Wiped disk, imported fresh into pool:
Code:
# fdisk -l /dev/sda
Disk /dev/sda: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: HUH728080ALE601
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 210FD27E-4AFF-44BF-B605-6C9B8142D498

Device     Start         End     Sectors  Size Type
/dev/sda1     40 15628053134 15628053095  7.3T Solaris /usr & Apple ZFS

# wipefs /dev/sda
DEVICE OFFSET        TYPE       UUID LABEL
sda    0x3f000       zfs_member
sda    0x3e000       zfs_member
sda    0x3d000       zfs_member
sda    0x3c000       zfs_member
sda    0x3b000       zfs_member
sda    0x3a000       zfs_member
sda    0x39000       zfs_member
sda    0x38000       zfs_member
sda    0x37000       zfs_member
sda    0x36000       zfs_member
sda    0x35000       zfs_member
sda    0x34000       zfs_member
sda    0x33000       zfs_member
sda    0x32000       zfs_member
sda    0x31000       zfs_member
sda    0x30000       zfs_member
sda    0x2f000       zfs_member
sda    0x2e000       zfs_member
sda    0x2d000       zfs_member
sda    0x2c000       zfs_member
sda    0x2b000       zfs_member
sda    0x2a000       zfs_member
sda    0x29000       zfs_member
sda    0x28000       zfs_member
sda    0x200         gpt
sda    0x74702555e00 gpt
sda    0x1fe         PMBR
 
Last edited:
winnielinnie

Joined
Oct 22, 2019
Messages
3,641
In 2020, iXsystems removed the swapondrive option from the UI and set "swapondrive": 0 as the default in Core.

I'm getting mixed messages then, especially when you see a red warning message, even with TrueNAS Core 13.0-U3.1.

[Attached screenshots: swap-0.png, swap-tooltip.png]
 

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,041
My procedure to set a 2GB swap on disks: I take the disk offline in the UI and run the following commands as root:
Code:
# midclt call system.advanced.update '{"swapondrive": 2}' | jq '.swapondrive'
2
# midclt call system.advanced.config | jq '.swapondrive'
2

# wipefs -af /dev/sda
/dev/sda: 8 bytes were erased at offset 0x0003f000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x0003e000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x0003d000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x0003c000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x0003b000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x0003a000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00039000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00038000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00037000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00036000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00035000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00034000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00033000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00032000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00031000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00030000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x0002f000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x0002e000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x0002d000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x0002c000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x0002b000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x0002a000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00029000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00028000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sda: 8 bytes were erased at offset 0x74702555e00 (gpt): 45 46 49 20 50 41 52 54
/dev/sda: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa

# fdisk -l /dev/sda
Disk /dev/sda: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: HUH728080ALE601
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Lastly, I force the disk replacement in the UI. wipefs is very fast; it takes only seconds to wipe the disk. I'm waiting on resilvering to see if wipefs reports the same format as the other pool disks.
Code:
# zpool status default
  pool: default
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
    continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Dec 29 13:50:31 2022
    19.0T scanned at 19.1G/s, 1.52T issued at 1.53G/s, 19.0T total
    130G resilvered, 8.01% done, 03:15:26 to go
config:

    NAME                                      STATE     READ WRITE CKSUM
    default                                   ONLINE       0     0     0
      raidz2-0                                ONLINE       0     0     0
        b768e9a0-820f-47cb-95f1-0a205dbe69a2  ONLINE       0     0     0
        11c649c6-15fb-4e4d-bb9e-a6e49d92dbe4  ONLINE       0     0     0
        66f8cece-5550-4032-abdf-2c62f5c193f4  ONLINE       0     0     0
        576baa03-374f-435e-906a-1b897df113dc  ONLINE       0     0     0
        6b6ae667-ae27-46e1-b059-3ba6f5ca4c5c  ONLINE       0     0     0
        eed80073-49af-40d5-842c-5b6b607ce36c  ONLINE       0     0     0
        de70ed0b-3d3b-43c8-a5ec-c536dba8cea7  ONLINE       0     0     0
        e77f0ce2-b5d1-4b5e-af46-237519b4495b  ONLINE       0     0     0
        d0b1ac77-1ed9-462d-a954-7cf0da6451f6  ONLINE       0     0     0  (resilvering)
        03dfccd5-8388-43c0-90d6-481b53f73a4e  ONLINE       0     0     0
        0e89244d-6504-411b-82ee-32ace0ed0359  ONLINE       0     0     0
        17700665-232a-48c1-a99e-82bdc5e26c14  ONLINE       0     0     0

errors: No known data errors

This was actually a good learning experience, in case someone wants to safely change the disk swap partition size. If everything is okay, I'm going to add this to my Bluefin Recommended Settings and Optimizations thread.
 
Last edited:

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,041
Everything looks good. I added the Pool Disks Swap Partition section to my guide, giving credit to @Arwen and @winnielinnie. What's the recommended way to resize the GPT partitions with tools like fdisk or parted? I want to add to the guide how to reduce the swap partition to 1GB (as an example) and move the new free space to the ZFS partition.

I'm used to working with logical volumes and Red Hat things. :smile:
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
@morganL

@Kris Moore

What is the official stance from iXsystems?

We choose the defaults to be safe... most of our systems get tested this way.

For our customers, we try not to mix drive types in a system. So, we may not see and test all of the drive sizing issues. We'd prefer to make sure users can easily resolve them.

While systems with lots of RAM don't need much swap in normal operation, sometimes the extra swap space is useful in abnormal circumstances.

So, we prefer defaults. We allow changes at your own risk. We'd prefer not to recommend that users change the defaults unless there's a specific reason.

We are open to recommendations on changing defaults... if there are known issues. However, we do prefer defaults to be safe rather than optimizing for efficiency in specific situations.

@Kris Moore has more technical experience with this specific issue.
 

bcat

Explorer
Joined
Oct 20, 2022
Messages
84
As I said elsewhere, it would be nice if we had standard sizes from all manufacturers. But, that's not what we have.
I'm just curious: is this primarily an issue with older drives, or do newer drives show it too? I've looked at a smallish sample of drives manufactured from ~2016 onward (about a dozen WD and Seagate consumer hard drives >= 8 TB in nominal capacity, and about a half dozen consumer SSDs of 128 GB to 4 TB nominal capacity), and they all have the standard sector counts specified by SFF-8447 (warning: Word doc).

I'm sure there are older drives that don't follow that standard; I'm just wondering if non-standard sector counts are still an issue for drives being made today.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I need to poke about at this a fair bit, but I imagine there are some "artifacts" and a number of different behavior cases that could happen when a pool with/without existing swap is created on a system with/without the swapondrive value set, and in how a resilver/replace operation engages with them.

But in general, the 2GB swap partition was added primarily to be the "release valve" - common boot devices prior to 9.3 were USB-based or other low-endurance media. With the switch to ZFS based boot, these devices rapidly fell out of favor, so the current strategy of "use the boot device for swap, if sufficiently sized" makes more sense.

As @morganL suggests, the desired solution is "safe defaults" - we can do a lot to account for many use cases, but an incorrect assumption could rapidly degrade a boot device or otherwise have adverse effects.
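
For anyone curious where swap currently lives on their own system, the standard Linux tools will show it (nothing TrueNAS-specific here):
Code:
swapon --show
cat /proc/swaps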
 