curl -X PUT -d "{\"swapondrive\": 0}" -H "Content-Type: application/json" -H "Authorization: Bearer {{APIKEY}}" https://{{TRUE SCALE IP}}/api/v2.0/system/advanced
Thank you for the useful command. When I check the config, I don't see the option. Can you please let me know how to display the related advanced settings?

Or you could just do it from the CLI/shell...
midclt call system.advanced.update '{"swapondrive": 0}'
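For reference, and assuming the same {{APIKEY}} and {{TRUE SCALE IP}} placeholders as in the curl example above, the current advanced settings can be read back over the REST API, or narrowed to the single key from the shell with jq:

# full advanced-settings object over the REST API (add -k if the UI uses a self-signed certificate)
curl -X GET -H "Authorization: Bearer {{APIKEY}}" https://{{TRUE SCALE IP}}/api/v2.0/system/advanced

# shell equivalent, filtered to the one value
midclt call system.advanced.config | jq '.swapondrive'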
# midclt call system.advanced.config | jq
{
  "id": 1,
  "consolemenu": true,
  "serialconsole": false,
  "serialport": "ttyS0",
  "serialspeed": "9600",
  "powerdaemon": false,
  "swapondrive": 0,
  "overprovision": null,
  "traceback": true,
  "advancedmode": false,
  "autotune": false,
  "debugkernel": false,
  "uploadcrash": true,
  "anonstats": true,
  "anonstats_token": "",
  "motd": "Welcome to TrueNAS",
  "boot_scrub": 7,
  "fqdn_syslog": false,
  "sed_user": "USER",
  "sysloglevel": "F_INFO",
  "syslogserver": "",
  "syslog_transport": "UDP",
  "kdump_enabled": false,
  "isolated_gpu_pci_ids": [],
  "kernel_extra_options": "",
  "syslog_tls_certificate": null,
  "syslog_tls_certificate_authority": null,
  "consolemsg": false
}
Uh, did you overlook this?

  "swapondrive": 0,

Yes, I need new glasses. What is the swapondrive value format? 2048? Edit: I found it; the value is 2 (the unit is GiB).

I'm going to update my disk formatting thread with the proper way to format the disks and remove that 2GB partition from zfs pools.

We should probably have a quick Resource on some of these tunables.
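On the value-format question above: the setting is the size of the swap partition created on newly formatted disks, in GiB, which lines up with the roughly 4194304-sector (2 GiB) swap partition shown in the fdisk output further down. A quick sanity check of that arithmetic from the shell:

# 2 GiB expressed in 512-byte sectors
echo $(( 2 * 1024 * 1024 * 1024 / 512 ))    # prints 4194304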
Please note that the extra / un-used space on data pool disks serves a function. There's an added benefit to defaulting 2GB for "swap" when creating a new vdev: you create a buffer space, so that when replacing a failed disk you're not as likely to be hit with a disk that is just barely smaller than the capacities of the other disks in the vdev. (Even if the marketed size is "identical" to your existing drives, there's a chance that they slightly differ by a marginal amount.)
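Not from the thread, but a quick way to see that buffer in practice is to compare raw sector counts before committing to a swap-less layout (device names here are hypothetical, adjust as needed):

# raw size in 512-byte sectors
blockdev --getsz /dev/sda    # existing vdev member
blockdev --getsz /dev/sdX    # candidate replacement
# With swapondrive=2 the data partition is about 4194304 sectors (2 GiB) smaller
# than the raw disk, so a replacement that comes up a few thousand sectors short
# still fits; with swapondrive=0 the partition spans the whole disk and there is
# no slack at all.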
I see what you mean; it makes sense now.
In 2020, iXsystems removed the swapondrive option from the UI and set "swapondrive": 0 as the default in Core. I thought the same setting should apply to Scale, especially since iXsystems is making sure the same standards are implemented in OpenZFS. Obviously, I'm here to learn and understand what the best practices are.
Can you explain the wipefs differences below? cc @HoneyBadger

# fdisk -l /dev/sda
Disk /dev/sda: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: HUH728080ALE601
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 210FD27E-4AFF-44BF-B605-6C9B8142D498

Device        Start          End      Sectors  Size Type
/dev/sda1       128      4194304      4194177    2G Linux swap
/dev/sda2   4194432  15628053134  15623858703  7.3T Solaris /usr & Apple ZFS

# wipefs /dev/sda
DEVICE OFFSET        TYPE UUID LABEL
sda    0x200         gpt
sda    0x74702555e00 gpt
sda    0x1fe         PMBR
# fdisk -l /dev/sda
Disk /dev/sda: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: HUH728080ALE601
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 210FD27E-4AFF-44BF-B605-6C9B8142D498

Device      Start          End      Sectors  Size Type
/dev/sda1      40  15628053134  15628053095  7.3T Solaris /usr & Apple ZFS

# wipefs /dev/sda
DEVICE OFFSET        TYPE       UUID LABEL
sda    0x3f000       zfs_member
sda    0x3e000       zfs_member
sda    0x3d000       zfs_member
sda    0x3c000       zfs_member
sda    0x3b000       zfs_member
sda    0x3a000       zfs_member
sda    0x39000       zfs_member
sda    0x38000       zfs_member
sda    0x37000       zfs_member
sda    0x36000       zfs_member
sda    0x35000       zfs_member
sda    0x34000       zfs_member
sda    0x33000       zfs_member
sda    0x32000       zfs_member
sda    0x31000       zfs_member
sda    0x30000       zfs_member
sda    0x2f000       zfs_member
sda    0x2e000       zfs_member
sda    0x2d000       zfs_member
sda    0x2c000       zfs_member
sda    0x2b000       zfs_member
sda    0x2a000       zfs_member
sda    0x29000       zfs_member
sda    0x28000       zfs_member
sda    0x200         gpt
sda    0x74702555e00 gpt
sda    0x1fe         PMBR
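As an aside (the /dev/sd? glob is an assumption, adjust for your system), a small loop makes it easier to compare layout and signatures across every pool member at once; wipefs without -a only lists signatures and does not erase anything:

for d in /dev/sd?; do
  echo "== $d =="
  fdisk -l "$d" | grep -E '^(Disk /|/dev/)'   # size line plus partition rows
  wipefs "$d"                                 # read-only signature listing
done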
On the point that iXsystems removed the swapondrive option from the UI in 2020 and set 0 as the default in Core: I don't understand that either; thank you for the screenshots. See NAS-106531.

I'm getting mixed messages, then.
# midclt call system.advanced.update '{"swapondrive": 2}' | jq '.swapondrive'
2
# midclt call system.advanced.config | jq '.swapondrive'
2
# wipefs -af /dev/sda
/dev/sda: 8 bytes were erased at offset 0x0003f000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x0003e000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x0003d000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x0003c000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x0003b000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x0003a000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00039000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00038000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00037000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00036000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00035000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00034000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00033000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00032000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00031000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00030000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x0002f000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x0002e000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x0002d000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x0002c000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x0002b000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x0002a000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00029000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00028000 (zfs_member): 0c b1 ba 00 00 00 00 00
/dev/sda: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sda: 8 bytes were erased at offset 0x74702555e00 (gpt): 45 46 49 20 50 41 52 54
/dev/sda: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
# fdisk -l /dev/sda
Disk /dev/sda: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: HUH728080ALE601
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
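Not something from the thread, just a quick way to confirm nothing was left behind by the wipe; wipefs with no options is read-only, and a freshly wiped disk should show no partitions under lsblk:

# no output means no known signatures remain
wipefs /dev/sda
# should show only the bare device, no children
lsblk /dev/sda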
wipefs is very fast; it takes seconds to wipe the disk. I'm waiting on the resilver to see if wipefs reports the same format as the other pool disks.

# zpool status default
  pool: default
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Dec 29 13:50:31 2022
        19.0T scanned at 19.1G/s, 1.52T issued at 1.53G/s, 19.0T total
        130G resilvered, 8.01% done, 03:15:26 to go
config:

        NAME                                      STATE     READ WRITE CKSUM
        default                                   ONLINE       0     0     0
          raidz2-0                                ONLINE       0     0     0
            b768e9a0-820f-47cb-95f1-0a205dbe69a2  ONLINE       0     0     0
            11c649c6-15fb-4e4d-bb9e-a6e49d92dbe4  ONLINE       0     0     0
            66f8cece-5550-4032-abdf-2c62f5c193f4  ONLINE       0     0     0
            576baa03-374f-435e-906a-1b897df113dc  ONLINE       0     0     0
            6b6ae667-ae27-46e1-b059-3ba6f5ca4c5c  ONLINE       0     0     0
            eed80073-49af-40d5-842c-5b6b607ce36c  ONLINE       0     0     0
            de70ed0b-3d3b-43c8-a5ec-c536dba8cea7  ONLINE       0     0     0
            e77f0ce2-b5d1-4b5e-af46-237519b4495b  ONLINE       0     0     0
            d0b1ac77-1ed9-462d-a954-7cf0da6451f6  ONLINE       0     0     0  (resilvering)
            03dfccd5-8388-43c0-90d6-481b53f73a4e  ONLINE       0     0     0
            0e89244d-6504-411b-82ee-32ace0ed0359  ONLINE       0     0     0
            17700665-232a-48c1-a99e-82bdc5e26c14  ONLINE       0     0     0

errors: No known data errors
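If you want to keep an eye on the resilver from the shell (a convenience sketch, not something from the thread), either of these works:

# refresh the full status every minute
watch -n 60 zpool status default
# or pull just the progress lines on demand
zpool status default | grep -A2 'scan:'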
Is it possible to resize the gpt partitions with tools like fdisk or parted? I want to add to the guide how to reduce the swap partition to 1GB (as an example) and move the new free space to the zfs partition.

I'm just curious: is this primarily an issue with older drives, or do newer drives show it too? I've looked at a small-ish sample of drives manufactured from about 2016 onward (about a dozen WD and Seagate consumer hard drives >= 8 TB in nominal capacity, and about a half dozen consumer SSDs of 128 GB to 4 TB nominal capacity) and they all have the standard sector counts specified by SFF-8447 (warning: Word doc).

As I said elsewhere, it would be nice if we had standard sizes from all manufacturers. But that's not what we have.
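As a rough illustration only: the sketch below lays out a 1 GiB swap partition plus a ZFS data partition on a hypothetical blank disk /dev/sdX with sgdisk (type codes 8200 = Linux swap, BF01 = Solaris /usr & Apple ZFS). It is not how the TrueNAS middleware formats disks, and on an already-populated disk simply shrinking the swap partition does not free space for the existing data partition, because the ZFS partition starts right after it and its start cannot be moved without destroying it.

# hypothetical blank disk; do NOT run against a disk holding data
sgdisk -n1:0:+1G -t1:8200 /dev/sdX    # 1 GiB swap partition at the front
sgdisk -n2:0:0  -t2:BF01 /dev/sdX     # rest of the disk as a ZFS data partition
sgdisk -p /dev/sdX                    # review the resulting table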
swapondrive value set, and how a resilver/replace operation engages with them.