Bug in SCALE 22.12.0?
My TrueNAS SCALE is running under Proxmox VE (7.3-3) with VirtIO SCSI drives. I upgraded from CORE to SCALE 22.02.4 some months ago without any problems. Now I have installed 22.12.0 and get the following message on every reboot:
A Fail event had been detected on md device /dev/md127.
It could be related to component device /dev/sdc1.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sda1[1] sdc1[0](F)
2095040 blocks super 1.2 [2/1] [_U]
[=>...................] resync = 9.6% (201600/2095040) finish=0.1min speed=201600K/sec
Every reboot gives me this message again.
Running cat /proc/mdstat after startup does not show this problem:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
zpool shows no errors and all pools are healthy.
Switching back to 22.02.4 does not generate this message during startup.
Any hints on what to do (I am running 22.02.4 again for now)? Anyone with similar problems?
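For reference, these are roughly the checks I used to convince myself nothing is actually degraded (as far as I understand, md127 is the small mirror SCALE builds for swap; the device names are from my VM and will differ elsewhere):

# current md arrays and their sync state (empty after boot in my case)
cat /proc/mdstat

# detailed state of the mirror and one of its members,
# only meaningful while the array is actually assembled
mdadm --detail /dev/md127
mdadm --examine /dev/sdc1

# confirm the ZFS pools themselves report no problems
zpool status -x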
Update December 26:
The problem seems to be solved; thanks a lot to the community.
In summary, this never showed up in Angelfish but became visible in Bluefin due to kernel updates.
In a future version this will not be shown anymore:
# mdmonitor.service in particular causes mdadm to send emails to the MAILADDR line
# in /etc/mdadm/mdadm.conf. By default, that's the root account, so end-users are
# getting unnecessary emails about these devices. Since middlewared service is
# solely responsible for managing md devices there is no reason to run this monitor
# service. This prevents unnecessary emails from being sent out.
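Until that change ships, the same effect can presumably be achieved by hand. This is only a sketch of the obvious systemd route, untested on my side, and a later upgrade may well re-enable the unit:

# see whether the monitor that sends these mails is running
systemctl status mdmonitor.service

# stop it now and keep it from starting on the next boot
systemctl disable --now mdmonitor.service

# the recipient it would otherwise notify is the MAILADDR line here
grep MAILADDR /etc/mdadm/mdadm.conf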