TrueNAS SCALE setting up RAID1 for ZFS mirrors?

Sawtaytoes

Patron
Joined
Jul 9, 2022
Messages
221
Ever since upgrading from TrueNAS Core to SCALE, I've been noticing these RAID drives show up:

1671485146969.png

None of these drives are in RAID1 configs except the last two (the boot drives, in md127). The rest are in ZFS mirrors, but two of the drives listed here are in different zpools, and the others are in different vdevs.

These are the log messages:
1671485218487.png

It's adding swap space to my ZFS drives?

What can I do to fix this?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Uh, there is nothing to fix. It is standard practice for TrueNAS, Core or SCALE, to have swap partitions on ZFS data pool disks.

This has the side benefit that if you buy a different brand for a replacement disk, and it is just a tiny bit smaller than the dead disk, you can "steal" the space from the swap partition.

All that said, TrueNAS SCALE, being based on Linux, does appear to have some oddities in its swap mirror management. If an MD-RAID device is shut down cleanly, it should not need much re-syncing on the next boot. Some TrueNAS SCALE users report that the swap RAID-1 mirrors are rebuilt on most or all reboots; I don't know why that would be, if it's true.
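If you want to see whether a re-sync is actually in progress, the state is visible from the shell. This is a generic Linux sketch; the array names (md124, md127, etc.) will differ per system:

```shell
# Show all MD arrays; an in-progress rebuild appears as a
# "resync" or "recovery" progress line under the affected array
cat /proc/mdstat 2>/dev/null || echo "no md arrays present"

# Detailed state of one swap mirror (replace md127 with your array)
mdadm --detail /dev/md127 2>/dev/null || true
```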
 

Sawtaytoes

Patron
Joined
Jul 9, 2022
Messages
221
Mine aren't just rebuilt on reboots; it seems to happen periodically throughout the day.

That made me wonder whether this is causing drive read errors or other performance issues.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
If the MD-RAID Mirrors are being re-synced long after reboot, then that is a problem. I don't have any thoughts why that would happen.

Assuming you can gather good data, open a Jira bug on the problem. (Report a Bug at the top of the forum.) Then post the number and link here.
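One way to gather that data, assuming a stock SCALE install where journalctl is available: the kernel log timestamps will show whether re-syncs really start long after boot.

```shell
# Kernel messages about MD resync/recovery events since last boot;
# the timestamps show whether re-syncs happen long after startup
journalctl -k --no-pager 2>/dev/null | grep -iE 'md[0-9]+|resync|recovery' || true

# Snapshot the current array state to attach to the report
cat /proc/mdstat 2>/dev/null || true
```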
 

Sawtaytoes

Patron
Joined
Jul 9, 2022
Messages
221
I have a lot of zpool disks (67), and only a few have SWAP partitions.

My boot drives have 16GB of SWAP as well as two drives in a single-mirror pool.

All hard drives and solid state drives in other pools have a 2GB partition, but it's only set as SWAP on a few drives.

Here's an example of disks with no SWAP:
Code:
sdan         66:112  0   9.1T  0 disk 
├─sdan1      66:113  0     2G  0 part 
└─sdan2      66:114  0   9.1T  0 part 
sdao         66:128  0   9.1T  0 disk 
├─sdao1      66:129  0     2G  0 part 
└─sdao2      66:130  0   9.1T  0 part 
sdap         66:144  0   9.1T  0 disk 
├─sdap1      66:145  0     2G  0 part 
└─sdap2      66:146  0   9.1T  0 part 

Here are some sprinkled throughout with SWAP:
Code:
sdac         65:192  0   9.1T  0 disk 
├─sdac1      65:193  0     2G  0 part 
└─sdac2      65:194  0   9.1T  0 part 
sdad         65:208  0   1.8T  0 disk 
├─sdad1      65:209  0     2G  0 part 
└─sdad2      65:210  0   1.8T  0 part 
sdae         65:224  0   9.1T  0 disk 
├─sdae1      65:225  0     2G  0 part 
└─sdae2      65:226  0   9.1T  0 part 
sdaf         65:240  0   9.1T  0 disk 
├─sdaf1      65:241  0     2G  0 part 
│ └─md124     9:124  0     2G  0 raid1
│   └─md124 253:2    0     2G  0 crypt [SWAP]
└─sdaf2      65:242  0   9.1T  0 part 
sdag         66:0    0   1.8T  0 disk 
├─sdag1      66:1    0     2G  0 part 
└─sdag2      66:2    0   1.8T  0 part 
sdah         66:16   0   9.1T  0 disk 
├─sdah1      66:17   0     2G  0 part 
│ └─md126     9:126  0     2G  0 raid1
│   └─md126 253:5    0     2G  0 crypt [SWAP]
└─sdah2      66:18   0   9.1T  0 part 
sdai         66:32   0   9.1T  0 disk 
├─sdai1      66:33   0     2G  0 part 
│ └─md124     9:124  0     2G  0 raid1
│   └─md124 253:2    0     2G  0 crypt [SWAP]

I've been having connection issues with some drives in the pool and have been debugging those issues. That might be why there's a re-sync.

The real question is why it's so random which drives have partitions that become SWAP.
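To see which of those 2G partitions are actually active as swap (as opposed to merely reserved), something like this should work on any Linux system:

```shell
# Active swap devices only; reserved-but-unused partitions won't appear
swapon --show

# Cross-reference: raid1 members and [SWAP] entries in the block tree
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT 2>/dev/null | grep -iE 'raid1|swap' || true
```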
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
The OS probably has a target count for swap devices on data disks. Once that limit is reached, it stops activating additional swap partitions.

As for why each disk still has the space reserved: as I explained above, the space is kept so that if you need to replace a disk, and the replacement ends up being slightly smaller because it's from a different manufacturer, it's no problem. You just give up the swap space on that disk.

In a perfect world, disks of a given nominal size would be exactly the same size from every vendor. Sun Microsystems used to take the smallest size among multiple vendors and use that as the standard size for all vendors in that size slot. Vendors whose disks were slightly larger effectively ended up with more spare sectors than originally designed.
 

Sawtaytoes

Patron
Joined
Jul 9, 2022
Messages
221
I understand the space-allocation issue, but is there a rhyme or reason to it?

I don't think the OS has a maximum count, because when I added more drives, a few more of them showed up as SWAP.

Also, the swaps are spread out among arrays. Wouldn't it want to keep them within certain pools?

Lastly, is there a way to turn off that feature? I feel like using the extra drive space could cause pool problems, which is why ZFS typically likes to control the whole drive rather than just a partition.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
There probably is a way to force TrueNAS to ignore the extra partition on the data disks for swap, but I don't know it.
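For what it's worth, TrueNAS exposes a "swap size on each data disk" value (swapondrive) in its advanced settings, which can be changed from the shell via the middleware client. This is a sketch, not an officially documented procedure; as I understand it, the value is in GiB and only affects disks formatted after the change, leaving existing partitions alone:

```shell
# Set swap size for newly-formatted data disks to 0 GiB
# (existing swap partitions are not touched)
midclt call system.advanced.update '{"swapondrive": 0}'
```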

And to be clear, after the MD-RAID mirrors are re-synced, there is no additional I/O to the data disks unless swap actually ends up in use. The swap usage is more of a warning: if your server is swapping, then something is wrong. (Though it beats crashing due to lack of memory...)
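A quick way to check whether the server has actually been pushed into swap, on any Linux system:

```shell
# Total vs. free swap; if SwapFree is well below SwapTotal,
# the system has been using swap at some point
grep -E '^Swap(Total|Free)' /proc/meminfo
```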
 