TrueNAS SCALE 22.02.1 Overview

boro

Cadet
Joined
Jun 29, 2022
Messages
2
I have 768 GB of RAM, a 2×84-disk JBOD, and 8 NVMe SSDs. I successfully set up a RAID group in 22.02.0.1, but after upgrading to 22.02.1 I get an error. I've tried changing between RAIDZ1 and RAIDZ3 and from 1 to 17 vdevs; none of it helps.
Do all of your drives show up under `Storage > Disks`, or does it hang and load forever? I was getting the same error after trying to add my two Fusion-io SSDs as mirrored SLOG devices to an existing pool. I had to add the SLOG drives to the pool via the `zpool` CLI from the shell and then reboot the server, and that fixed it for me on 22.02.1.
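For reference, adding mirrored SLOG devices from the shell is done with `zpool add`. A minimal sketch, assuming a pool named `tank` and two NVMe devices at `/dev/nvme0n1` and `/dev/nvme1n1` (substitute your own pool name and device paths):

```sh
# Check that the pool is healthy and the disks are visible before touching anything
zpool status tank
lsblk -o NAME,SIZE,MODEL,SERIAL

# Add the two devices as a mirrored log (SLOG) vdev to the pool
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1

# Confirm a "logs" section now appears in the pool layout
zpool status tank
```

As noted above, the web UI may only pick up the change after a reboot.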
 

emsicz

Explorer
Joined
Aug 12, 2021
Messages
78
I just noticed this. I've been moving data into SCALE for about the last ~30 minutes through both of its NICs (I use them separately because bonding them didn't work), and they show up as In: ~100 MiB/s (gigabit Ethernet), but the graphs stay flat. Why?
[Attachment: 2022-07-15_155653.png]
 

rodpas

Cadet
Joined
Jul 17, 2022
Messages
3
Upgraded to TrueNAS-SCALE-22.02.2. The Windows shares seem to be all OK, but the Time Machine shares and the iSCSI shares are now completely unusable. I don't care much about iSCSI for now, since it was only at the testing stage, but we use Time Machine heavily to back up all our MacBooks.

We recreated the datasets from scratch and can connect to the shares from any of the Macs, but Time Machine sits at "Looking for Backup Drive" for several minutes and then finally fails. We haven't gotten past that stage yet. To clarify, TrueNAS is joined to our Active Directory, but for Time Machine we use local TrueNAS users; I'm not sure whether that's a problem yet.
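As a diagnostic sketch only (not a confirmed fix), it can help to point a Mac at the share manually from Terminal and see what error comes back; the hostname `truenas.local`, share name `timemachine`, and user `tmuser` below are placeholders for your own values:

```sh
# Explicitly register the SMB share as a Time Machine destination
sudo tmutil setdestination -a "smb://tmuser@truenas.local/timemachine"

# List the destinations macOS currently knows about
tmutil destinationinfo

# Trigger a backup and watch whether it gets past "Looking for Backup Drive"
tmutil startbackup --auto
```

If `tmutil setdestination` fails outright, the problem is likely on the share/authentication side rather than in Time Machine itself.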
 

emsicz

Explorer
Joined
Aug 12, 2021
Messages
78
So after some time the graphs always start working. Somehow it feels like it first needs to collect about an hour of traffic to fill some internal buffer for calculating the graph, and then it just works. Given my uptime is usually measured in months, I'm not likely to see this very often. Still a bug, tho.
 