Constant writes to boot device despite moving system dataset

Takaani

Cadet
Joined
Aug 23, 2023
Messages
8
My current setup is an HPE ProLiant MicroServer Gen10 Plus with two 16 TB HDDs in RAID 0 (I know it has no redundancy, but it is just a small home server to start with), 16 GB of ECC memory, and a 1 TB NVMe SSD to test things like L2ARC/SLOG. I have now moved the system dataset to the NVMe as well: putting the boot pool itself there would consume the whole drive, but this way the system dataset can at least benefit from its large write endurance. The boot pool is on an internal flash drive, which is why I am worried about its write endurance and longevity. I am limited by the MicroServer's HDD slots and its single PCIe slot. The setup is for an iSCSI (directly connected) gaming/media NAS with Nextcloud and remote access from outside the network.

I checked many posts regarding the noise issue, all of which say to move the system dataset to an SSD and to point syslog at the system dataset. I have done both. Those posts also claimed that this would remove virtually all read/write activity from the boot pool, but that was not the case. At idle, the other drives show no activity, yet the flash drive holding the boot pool is written to every 5 seconds. I know it is exactly 5 seconds because of the transaction group (txg) flush timeout, and I know that raising that timeout could mean losing up to that much data on a crash, which is why I am avoiding it; the writes would still happen with that "solution" anyway. I unset the pool for Docker/Kubernetes and stopped the service via systemctl stop to make sure it wasn't the cause. I also ran sudo find . -type f -mmin +6 hoping to see what had changed in the last 6 minutes, but to be frank I don't understand its output.

root@truenas[~]# sudo find . -type f -mmin +1
./dead.letter
./.zshrc
./.warning
./.zsh-histfile
./.freeipmi/sdr-cache/sdr-cache-truenas.localhost
./.gdbinit
./.zlogin
./.bashrc
./.profile
./.midcli.hist
./tdb/persistent/activedirectory_user.tdb
./tdb/persistent/snapshot_count.tdb
./tdb/persistent/ldap_group.tdb
./tdb/persistent/activedirectory_group.tdb
./tdb/persistent/ldap_user.tdb
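A note on the find flags used above: with `-mmin +N`, find matches files modified more than N minutes ago, while `-mmin -N` matches files modified within the last N minutes, so the command above mostly lists old dotfiles rather than the recent writers. A minimal sketch of the difference, assuming GNU touch for the backdating trick:

```shell
# +N = modified MORE than N minutes ago; -N = within the LAST N minutes.
tmp=$(mktemp -d)
touch "$tmp/fresh"                      # modified just now
touch -d '10 minutes ago' "$tmp/stale"  # backdated 10 minutes

find "$tmp" -type f -mmin -5   # matches only "fresh"
find "$tmp" -type f -mmin +5   # matches only "stale"
rm -rf "$tmp"
```

Running `find` with `-mmin -1` over the suspect filesystem every few minutes is a quick way to narrow down which files are actually being touched.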

[Attached screenshot: disk I/O reporting graphs]

sda and sdb are the two 16 TB HDDs; sdc is the flash drive. I want to move the constant writes to the NVMe drive without moving the boot pool, if possible. There was one more interesting post, but it pertained to FreeNAS.

https://www.truenas.com/community/threads/nas-is-writing-every-5-seconds.18925/

In the most recent replies, cyberjock mentioned that syslog actually writes to a RAM disk rather than to the boot pool itself. I would like to know whether that is still true for TrueNAS, and whether these charts would fail to reflect something like that. I may have missed a few details, but that should cover the majority.
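One way to settle the RAM disk question directly is to check which filesystem actually backs /var/log: a tmpfs source would mean RAM, while a zfs source means the writes hit a pool on disk. A small sketch using findmnt (standard util-linux; verify the path on your own box):

```shell
# findmnt -T walks up from the given path to the mount that contains it
# and prints the backing source and filesystem type.
findmnt -T /var/log -n -o SOURCE,FSTYPE
```

On a TrueNAS shell, `zfs list -r -o name,mountpoint | grep -F '.system'` should likewise show which pool currently hosts the .system datasets.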
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
In the most recent replies, cyberjock mentioned that syslog actually writes to a RAM disk rather than to the boot pool itself
Most recent is a long time ago in that case. He hasn't been here for at least 5 years, and the RAM disk has been a thing of the past since before FreeNAS 9.3.

For sure this will be the reporting data written to your system dataset, which you can move to a pool which is running on SSDs to avoid spinning HDD activity.

Under System Settings, Advanced, click the Configure Button on the Storage Widget, then move the System Dataset to a more appropriate place, which could be your boot pool (if it's on an SSD) or another SSD-based pool.
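For anyone who prefers the shell over the GUI, the middleware client can reportedly do the same move. The `systemdataset.config` / `systemdataset.update` method names below are recalled from memory rather than verified, so treat this as a sketch and check them against your own TrueNAS version first:

```shell
# Hypothetical sketch: inspect (and move) the system dataset via midclt.
# Method names are an assumption; guard so this degrades on non-TrueNAS hosts.
if command -v midclt >/dev/null 2>&1; then
    midclt call systemdataset.config                       # show current pool
    # midclt call systemdataset.update '{"pool": "tank"}'  # move it (uncomment)
else
    echo "midclt not available on this host"
fi
```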
 

Takaani

Cadet
Joined
Aug 23, 2023
Messages
8
Most recent is a long time ago in that case. He hasn't been here for at least 5 years, and the RAM disk has been a thing of the past since before FreeNAS 9.3.

For sure this will be the reporting data written to your system dataset, which you can move to a pool which is running on SSDs to avoid spinning HDD activity.

Under System Settings, Advanced, click the Configure Button on the Storage Widget, then move the System Dataset to a more appropriate place, which could be your boot pool (if it's on an SSD) or another SSD-based pool.
That is what I already did. It is currently in the "System" pool that resides on nvme0n1, but the graph still shows constant writes to the flash drive. I'm not sure whether the graph is wrong or the system dataset never actually moved. Also, as I said in the post, I restarted after moving the system dataset and saw no errors.
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
I just moved my system dataset from the boot pool (too many writes for the devices I have) to an HDD pool as a test. I have the same issue: even after rebooting, it shows the same ongoing writes to the boot pool. I did get some sort of error about not being able to stop or unmount syslog when I did the move, but figured that would go away upon a reboot.

Here's what I see upon reboot:

zfs list -t filesystem | grep -F ".system" | column -t
boot-pool/.system                                           1.59G  73.9G  1.40G  legacy
boot-pool/.system/configs-6185a5ad8d7a46da9a9af93ab68469e9  50.0M  73.9G  50.0M  legacy
boot-pool/.system/cores                                     120K   1024M  120K   legacy
boot-pool/.system/ctdb_shared_vol                           96K    73.9G  96K    legacy
boot-pool/.system/glusterd                                  104K   73.9G  104K   legacy
boot-pool/.system/rrd-6185a5ad8d7a46da9a9af93ab68469e9      104M   73.9G  104M   legacy
boot-pool/.system/samba4                                    3.85M  73.9G  668K   legacy
boot-pool/.system/services                                  96K    73.9G  96K    legacy
boot-pool/.system/syslog-6185a5ad8d7a46da9a9af93ab68469e9   40.3M  73.9G  40.3M  legacy
boot-pool/.system/webui                                     96K    73.9G  96K    legacy
tank/.system                                                1.55G  8.62T  1.39G  legacy
tank/.system/configs-6185a5ad8d7a46da9a9af93ab68469e9       40.0M  8.62T  40.0M  legacy
tank/.system/cores                                          192K   1024M  192K   legacy
tank/.system/ctdb_shared_vol                                192K   8.62T  192K   legacy
tank/.system/glusterd                                       236K   8.62T  236K   legacy
tank/.system/rrd-6185a5ad8d7a46da9a9af93ab68469e9           93.3M  8.62T  93.3M  legacy
tank/.system/samba4                                         812K   8.62T  812K   legacy
tank/.system/services                                       192K   8.62T  192K   legacy
tank/.system/syslog-6185a5ad8d7a46da9a9af93ab68469e9        28.5M  8.62T  28.5M  legacy
tank/.system/webui                                          192K   8.62T  192K   legacy

And here was the error (really didn't want tank, wanted app pool so did it again):

[EFAULT] Unable to umount tank/.system/syslog-6185a5ad8d7a46da9a9af93ab68469e9:
umount: /var/db/system/syslog-6185a5ad8d7a46da9a9af93ab68469e9: target is busy.
The following processes are using '/var/db/system/syslog-6185a5ad8d7a46da9a9af93ab68469e9':
[ { "pid": "72133", "name": "virtlogd", "cmdline": "/usr/sbin/virtlogd", "paths": [] } ]
 
Last edited:

Takaani

Cadet
Joined
Aug 23, 2023
Messages
8
I just moved my system dataset from the boot pool (too many writes for the devices I have) to an HDD pool as a test. I have the same issue: even after rebooting, it shows the same ongoing writes to the boot pool. I did get some sort of error about not being able to stop or unmount syslog when I did the move, but figured that would go away upon a reboot.

Here's what I see upon reboot:

zfs list -t filesystem | grep -F ".system" | column -t
boot-pool/.system                                           1.59G  73.9G  1.40G  legacy
boot-pool/.system/configs-6185a5ad8d7a46da9a9af93ab68469e9  50.0M  73.9G  50.0M  legacy
boot-pool/.system/cores                                     120K   1024M  120K   legacy
boot-pool/.system/ctdb_shared_vol                           96K    73.9G  96K    legacy
boot-pool/.system/glusterd                                  104K   73.9G  104K   legacy
boot-pool/.system/rrd-6185a5ad8d7a46da9a9af93ab68469e9      104M   73.9G  104M   legacy
boot-pool/.system/samba4                                    3.85M  73.9G  668K   legacy
boot-pool/.system/services                                  96K    73.9G  96K    legacy
boot-pool/.system/syslog-6185a5ad8d7a46da9a9af93ab68469e9   40.3M  73.9G  40.3M  legacy
boot-pool/.system/webui                                     96K    73.9G  96K    legacy
tank/.system                                                1.55G  8.62T  1.39G  legacy
tank/.system/configs-6185a5ad8d7a46da9a9af93ab68469e9       40.0M  8.62T  40.0M  legacy
tank/.system/cores                                          192K   1024M  192K   legacy
tank/.system/ctdb_shared_vol                                192K   8.62T  192K   legacy
tank/.system/glusterd                                       236K   8.62T  236K   legacy
tank/.system/rrd-6185a5ad8d7a46da9a9af93ab68469e9           93.3M  8.62T  93.3M  legacy
tank/.system/samba4                                         812K   8.62T  812K   legacy
tank/.system/services                                       192K   8.62T  192K   legacy
tank/.system/syslog-6185a5ad8d7a46da9a9af93ab68469e9        28.5M  8.62T  28.5M  legacy
tank/.system/webui                                          192K   8.62T  192K   legacy

And here was the error (really didn't want tank, wanted app pool so did it again):

[EFAULT] Unable to umount tank/.system/syslog-6185a5ad8d7a46da9a9af93ab68469e9:
umount: /var/db/system/syslog-6185a5ad8d7a46da9a9af93ab68469e9: target is busy.
The following processes are using '/var/db/system/syslog-6185a5ad8d7a46da9a9af93ab68469e9':
[ { "pid": "72133", "name": "virtlogd", "cmdline": "/usr/sbin/virtlogd", "paths": [] } ]
I also had an error, but mine was a hang while trying to move the system dataset. I rebooted and reattempted, and it went through fine. I also tried moving it back and forth between multiple pools, which succeeded, but the writes still remain on the boot drive. I don't think it output any error logs, but I wouldn't know where to look for them outside the GUI at the moment.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I agree that moving the System dataset around is important to avoid HDD pools spinning when not desired, but I don't think anyone here has ever said that it's possible to have the boot pool do no writing at all.

If you're not comfortable with your boot media being able to handle small writes from time to time (even quite regularly), you don't have the right boot media.
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
I did delete the extra remaining pools since the move did not delete them. The second time I moved them, I got better results and now the writes to the boot pool are minimal.
 

Takaani

Cadet
Joined
Aug 23, 2023
Messages
8
I did delete the extra remaining pools since the move did not delete them. The second time I moved them, I got better results and now the writes to the boot pool are minimal.
Could you show how much writing is being done, and explain a bit more what you mean by remaining pools? I'm still new to TrueNAS. Thanks!

[Attached screenshot: boot device write activity graph]


For me, it is writing around 750 KiB per 5 seconds, which would result in about 12.5 GiB per day.
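That rate is easy to sanity-check with a little awk arithmetic (assuming a steady 750 KiB every 5 seconds):

```shell
# Extrapolate 750 KiB per 5-second flush to daily and yearly write totals.
awk 'BEGIN {
    kib_per_sec  = 750 / 5                              # 150 KiB/s sustained
    gib_per_day  = kib_per_sec * 86400 / 1024 / 1024    # 86400 s in a day
    tib_per_year = gib_per_day * 365 / 1024
    printf "%.1f GiB/day, %.1f TiB/year\n", gib_per_day, tib_per_year
}'
```

That lands at roughly 12.4 GiB per day, in line with the ~12.5 GiB figure above, or about 4.4 TiB per year hitting the flash drive.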

I agree that moving the System dataset around is important to avoid HDD pools spinning when not desired, but I don't think anyone here has ever said that it's possible to have the boot pool do no writing at all.

If you're not comfortable with your boot media being able to handle small writes from time to time (even quite regularly), you don't have the right boot media.
It's not that I'm not comfortable with it, but since write endurance is not listed for most flash drives, I want to do all that I can to keep it healthy. There was actually a person who claimed "virtually no writing", but I don't know how little that is to them. I'm hoping to gain some insight on minimizing the writes beyond moving the system dataset as well.
 
Last edited:

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
write endurance not being listed on most flash drives
It's not something many people would be happy with if they knew the reality.

The best option is to replace a USB stick with a USB to SATA adapter with an SSD attached. (or an external SSD)
 

Takaani

Cadet
Joined
Aug 23, 2023
Messages
8
It's not something many people would be happy with if they knew the reality.

The best option is to replace a USB stick with a USB to SATA adapter with an SSD attached. (or an external SSD)
Just out of space constraints, what do you think of USB to M.2?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Just out of space constraints, what do you think of USB to M.2?
Effectively included in what I mentioned... (there are M.2 SSDs as well as SATA SSDs) Many external SSDs are in that category.
 

Takaani

Cadet
Joined
Aug 23, 2023
Messages
8
Effectively included in what I mentioned... (there are M.2 SSDs as well as SATA SSDs) Many external SSDs are in that category.
That's good then. I was asking primarily because M.2 SSDs can be had as a single solid stick without wires, essentially a longer flash drive. That helps because I move the server from time to time, and I don't want to open up the case each time, on top of the space constraints.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Just FYI starting in Cobia we log exclusively (/var/log) to the boot device. Writes will be unavoidable. Plan accordingly.
Why?
Given how many people read docs etc - this is gonna cause some "entertainment" with all those that have used USB Thumb drives for years and won't pick up this warning.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
Why?
Given how many people read docs etc - this is gonna cause some "entertainment" with all those that have used USB Thumb drives for years and won't pick up this warning.

USB boot devices with low write endurance have been discouraged in our documentation since FreeNAS 9.3 was released. We're talking about syslog here, not collectd RRDs and other parts of the system dataset.
 
Joined
Oct 22, 2019
Messages
3,641
Just FYI starting in Cobia we log exclusively (/var/log) to the boot device.
Is that related to this issue:


Long story short: OP does not have a "syslog" dataset mounted anywhere, nor can they view log entries before the current reboot. Whether they view /var/log/messages or use journalctl, it's the same issue.
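On a systemd-based SCALE install, whether logs survive a reboot comes down to journald's storage mode: it persists logs only when /var/log/journal exists; otherwise it writes to the volatile /run/log/journal. A quick check, sketched with fallbacks so it degrades where the tools are missing:

```shell
# journald keeps logs across reboots only with persistent storage.
if [ -d /var/log/journal ]; then
    echo "journal storage: persistent"
else
    echo "journal storage: volatile (RAM only, lost on reboot)"
fi

# List the boots journald still has entries for (one line per boot).
journalctl --list-boots --no-pager 2>/dev/null || echo "journalctl not available"
```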
 
Joined
Oct 22, 2019
Messages
3,641
Just FYI starting in Cobia we log exclusively (/var/log) to the boot device. Writes will be unavoidable. Plan accordingly.
An unintended consequence of this is that you cannot keep your logs contained in an encrypted dataset. At least the System Dataset and its child datasets can inherit the encryption of the pool's root dataset.

The boot device is always unencrypted, and thus your logs will now always be saved unencrypted. :frown:

I really hope this change isn't brought to TrueNAS Core...
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
Is that related to this issue:


Long story short: OP does not have a "syslog" dataset mounted anywhere, nor can they view log entries before the current reboot. Whether they view /var/log/messages or use journalctl, it's the same issue.
I don't know about that particular case, but in general the extra complexity of putting logs into the system dataset created the potential for errors that would result in logs being missed or "lost". It was also an additional avenue for system dataset setup to fail, leading to dependent services failing to start or initialize properly.

At the end of the day, reliable logs and startup were more important than supporting legacy hardware choices and how-to guides.
Logs are mission-critical; support for boot devices with low write endurance is a rather lower priority.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
An unintended consequence of this is that you cannot keep your logs contained in an encrypted dataset. At least the System Dataset and its child datasets can inherit the encryption of the pool's root dataset.

The boot device is always unencrypted, and thus your logs will now always be saved unencrypted. :frown:

I really hope this change isn't brought to TrueNAS Core...
This change is not being backported; major, potentially breaking changes go into the next version. If you think that having an encrypted log path is important for your use case, you are more than welcome to file a feature request for Cobia / DragonFish.
 