
TrueNAS SCALE 21.06-BETA Now Available!

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
864
I only got this far with ACL:

I was able to save ACL permissions with user/group selected and assigned to "USERA" on the left side (File Information) AND UserObj; GroupObj; Other on the right side (ACL).

Given that I did not set permissions for Other, I can only connect to this share with USERA. That's as expected.

I cannot, however, add additional (read-only) permissions for another user, "USERB". The varying error messages do not necessarily make sense to me, and I cannot save any changes to the permissions I set up above.
In another thread it was indicated that it's easier to do complex things with the dataset ACL type set to NFSv4 rather than POSIX.
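For what it's worth, on a POSIX-type dataset a read-only grant for a second user ends up as an extra `user:<name>:r-x` entry in `getfacl` output. A minimal sketch of checking for such an entry from the shell, using an invented listing and a placeholder user name USERB (on a live system you would pipe `getfacl /mnt/pool/share` into it instead):

```shell
# has_read USER: exit 0 if the piped POSIX-ACL listing grants USER read access.
has_read() {
    awk -F: -v u="$1" '$1 == "user" && $2 == u && $3 ~ /r/ { found = 1 }
                       END { exit !found }'
}

# Invented listing; USERB and the entries are placeholders, not from this thread.
acl='user::rwx
user:USERB:r-x
group::rwx
mask::rwx
other::---'

echo "$acl" | has_read USERB && echo "USERB can read"
```

Note that with POSIX ACLs the effective rights of named users are also capped by the `mask::` entry, so a grant can appear in the listing yet still be restricted in practice.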
 

Gcon

Member
Joined
Aug 1, 2015
Messages
49
I can't create a stripe of mirrored vdevs from the GUI. These six 8TB SATA drives are hanging off an LSI RAID controller (SAS9207-8i, latest firmware, genuine part) with nothing else on that controller (two unused ports). It works fine in TrueNAS Core. I wiped the disks; that didn't help. "Beta", I believe, is software where "you should be able to get everything set up and going, but things sometimes break". Not being able to even set up a pool... guys, this is still alpha. Anyway, hopefully this error report helps the project in some way.

truenas scale error creating pool.png

truenas scale error creating pool 2.png
 

ornias

Neophyte Sage
Joined
Mar 6, 2020
Messages
1,461
I wiped the disks - didn't help. "Beta" I believe is software where "you should be able to get everything set up and going, but things sometimes break". Not being able to even set up a pool - guys, this is still alpha.
ALPHA formally means "Still in active development, not all features added yet"
BETA formally means "Feature complete but still buggy"

SCALE is, in fact, feature complete but still needs a lot of bugfixes and polish.
Also: just because YOU cannot create a pool does not mean the whole feature is broken. It's more helpful for iX if you just file a bug report on Jira so they can check out your issue, instead of complaining on the forum that it isn't a beta because you encountered a bug.

Bugs that are not filed in the bug tracker (Jira) generally also do not help the developers, because they only work from Jira tickets, not forum posts.
 

Patrick M. Hausen

Dedicated Sage
Joined
Nov 25, 2013
Messages
3,434
Second, please post the entire "more info" as text, not just a screenshot of the upper third. Thanks!
 

dffvb

Neophyte
Joined
Apr 14, 2021
Messages
4
First of all: Scale looks really nice, and is even more intuitive than Core. Really excited about it.
Secondly: unfortunately my Intel X550 only reaches 550 MB/s, not the 1 GB/s it manages under Core. I just wanted to provide this as feedback; if it's wrong here, please delete it, and maybe tell me where I could give such feedback.
 

Gcon

Member
Joined
Aug 1, 2015
Messages
49
ALPHA formally means "Still in active development, not all features added yet"
BETA formally means "Feature complete but still buggy"
Also: Just because YOU cannot create a pool, does not mean the complete feature is broken.
Understood. From a purely semantic viewpoint you are right. But in my decades of IT work, "beta" software tends to have bugs that are more corner-case, and the product is actually somewhat usable. Creating storage is fundamental to a NAS, and I have the most commonplace hardware getting around: a standard LSI SAS2 RAID card in IT mode with current firmware, running 6 Seagate IronWolf drives (mostly new, all tested). Nothing esoteric. You put the emphasis on me with the all-caps "YOU", but I can tell you it's not me but SCALE with the issue, as I did the exact same GUI pool build in CORE straight afterwards and it created the pool fine. So perhaps lay off the ad hominems and appreciate that I'm giving this project (which I'm quite excited about) some much-needed testing and feedback.

Second, please post the entire "more info" as text, not just a screenshot of the upper third. Thanks!
Would love to, but alas, I wanted to prove that my hardware setup was fine, so I've already blown away the box with TrueNAS CORE to make sure there were no hardware issues (considered proven, since it works fine in CORE). The test environment is a Dell R710 chassis: dual-socket 6-core CPUs, 288GB ECC RAM, 6x Seagate IronWolf 8TB SATA drives (current model and firmware), and an LSI SAS9207-8i RAID card (2308 controller) with IT-mode 20.00.07.00 firmware (UEFI and BIOS OpROMs erased). No other PCIe cards except for a genuine Intel 10Gbps PCIe NIC, an X540-T2. (In production I'll be using Dell R730s/R740s.)

Hopefully there is a JIRA case for this issue already? If not, then I might be persuaded to reinstall SCALE to replicate and test. Not even being able to create a working pool has thrown a big spanner in the works, as I had planned a range of testing on iSCSI, NFS, SMB + ACLs, UPS, SNMP, NTP, LLDP, KVM and k8s over the weekend, which will now unfortunately have to wait.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
864
LSI SAS9207-8i RAID card (2308 controller) with IT mode 20.00.07.00 firmware (UEFI and BIOS OpRoms erased). No other PCIe cards except for a genuine Intel 10Gbps PCIe NIC X540-T2. (in production I'll be using Dell R730's / R740's).
Hopefully there is a JIRA case for this issue already?
Not sure if there is another case. It is probably true that 9200-series SAS cards are not part of automated testing and need community testing. We're working through the issues associated with the 9300, 9400 and 9500 series cards; these are the only cards our current products ship with.

TrueNAS CORE has billions of hours of testing on legacy hardware. TrueNAS SCALE has much less (<0.1%) of that testing. So I would advise users to test, and to assume there may be issues with older hardware that is not being used in new systems. Please report bugs when you find them; we need help resolving them.
 

ornias

Neophyte Sage
Joined
Mar 6, 2020
Messages
1,461
@Gcon You keep acting like the product is inherently faulty and keep ignoring the fact that you are (until now) the only one reporting this issue. Serious bugs that hit just a single user are quite frequent in betas.


I'll leave it at that.
 

proligde

Junior Member
Joined
Jan 29, 2014
Messages
17
First of all, thanks for the amazing SCALE beta! I've been using it for a few days and I'm really happy with it.

However, I stumbled across a strange problem which I found tricky to track down, so I wanted to share what I found out:

The problem: the system often froze for several seconds (in the GUI, for example, or when I logged in via SSH), sometimes for nearly a minute, despite running on fast hardware (Ryzen 5 3600, 32 GB ECC, NVMe mirror plus 10 spinning-rust drives).

The load was above 2 or 3 even though nothing was running on the system at all. CPU usage itself was not too high, only spiking sometimes on one or two cores; it was basically only the overall load reported by top.

I checked iostat, top, htop, iftop and so on, but nothing really pointed me in the right direction until I tried to install a package, which was painfully slow. I think it took like 30 minutes to install the tiny iotop package. And then there were also one or two timeout messages visible in dmesg.

Although these error messages didn't explicitly mention any device, I now suspected my "boot pool", which is basically a single 32 GB USB stick.

Although I know it's discouraged, this worked perfectly fine and fast for years on an old USB 2 stick with FreeBSD under TrueNAS CORE. The new stick is a Kingston DataTraveler USB 3 stick which, in theory, should provide decent throughput according to benchmarks. Maybe it's just kind-of-broken (without corrupting data, because it DOES work; it's just incredibly slow).

Solution: I exported all my settings, reinstalled TrueNAS SCALE onto an old, small SSD, and imported the settings again. It took me less than 20 minutes and all the problems are gone now. Load is down to 0.something at idle, just as I would expect, and the CPU cores are not spiking anymore.

I'd still be interested if someone has an idea about why such a thing happens (when it doesn't on TrueNAS CORE).
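For anyone chasing a similar high-load-but-idle-CPU symptom later: the usual culprit is one device pinned near 100% utilization, which `iostat -x` reports in its last column. A small filter sketch for that, assuming the `%util` column comes last (as in current sysstat) and a threshold of 90% picked arbitrarily; the sample lines below are invented, and on a live system you would run `iostat -x 5 | busy_devs` instead:

```shell
# busy_devs: print devices from `iostat -x`-style output whose %util
# (last column) exceeds 90. Device-name pattern and threshold are assumptions.
busy_devs() {
    awk '$1 ~ /^(sd|nvme)/ && $NF + 0 > 90 { print $1, $NF "%" }'
}

# Invented sample: sda (standing in for a slow USB boot stick) is saturated.
printf 'Device  r/s  w/s  %%util\nsda  1.0  2.0  99.80\nsdb  0.1  0.2  3.10\n' | busy_devs
```

A saturated boot device with near-zero throughput matches the symptoms above exactly: load counts tasks stuck waiting on I/O, so it climbs even while the CPUs stay idle.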
 

soleous

Neophyte
Joined
Apr 14, 2021
Messages
8
Yeah, I know people who still use USB sticks on Core; however, SCALE uses Debian, which I've never had work well from USB.

You could try moving the "System Dataset Pool" off the boot-pool. This always helps with USB sticks.
 

ornias

Neophyte Sage
Joined
Mar 6, 2020
Messages
1,461
Although these error messages didn't explicitly mention any device, I now was suspecting it might have to do with my "boot pool" which is basically only one 32 GB USB stick.
As posted in another thread, SCALE is not (yet) optimised for USB media, so depending on the speed of your USB media, that might cause these issues.
That being said: USB boot drives have been troublesome in the past, though I expect iX to spend some more polish on that one nonetheless before the release hits.
 

ornias

Neophyte Sage
Joined
Mar 6, 2020
Messages
1,461
You could try moving the "System Dataset Pool" off the boot-pool. This always helps with USB sticks.
This is definitely a good idea; the combined I/O of this and the system files might indeed choke your already limited USB IOPS.
 

proligde

Junior Member
Joined
Jan 29, 2014
Messages
17
You could try moving the "System Dataset Pool" off the boot-pool. This always helps with USB sticks.
That being said: USB bootdrives have been troublesome in the past, though I expect iX to spend some more polish on that open none-the-less before release hits.
Thanks for the feedback! I didn't realize the quality of USB support was so different between FreeBSD and Debian. If anything, I had assumed Debian had the better consumer-grade hardware support for stuff like that. And I thought the use of USB sticks was primarily discouraged in terms of reliability.

I already had my System Dataset Pool on one of the proper disk pools (even before moving to SCALE). Can't even imagine how bad it would have been otherwise.

Anyway, I guess it's just "don't install SCALE on USB sticks", at least for now :smile:
 

Ericloewe

Not-very-passive-but-aggressive
Moderator
Joined
Feb 15, 2014
Messages
17,231
9400 and 9500 series cards and these are the only cards our current products ship with.
How's the driver/firmware stack for the Tri-Mode stuff these days? When the 9400-series was released, it was buggy as hell (same for the 9300).
 

pico89

Newbie
Joined
Jul 13, 2021
Messages
1
Hello everyone, new user here, so forgive me if I miss something trivial.
I have been using Linux for many years now, but I'm new to TrueNAS and to NAS systems in general.

I recently built myself a new system (hardware specs below) and started playing around with it to check that everything was working properly.
I ran into a strange problem with the NVMe SSD on which I installed SCALE, which I think is software-related, but I'm having a hard time figuring out the cause.

In short: the SMART information for the SSD shows a huge number of error-log entries, which keeps increasing over time. Additionally, the 'data units written' figure is also very high for a new disk that has basically spent most of its time idle. The two problems seem correlated, but not completely.


In detail: I installed the ALPHA version of SCALE shortly before the new BETA came out. I followed this guide to partition the SSD in order to have the boot-pool and an ssd-pool for the apps on the same disk (I know it is not good practice, but it suits my needs). The idea is similar to this.
Everything went smoothly, but after a while there was a system alert that the number of error-log entries on the SSD had increased by a certain amount.
I inspected the smartctl info because I feared a hardware problem with the disk.
Searching around I found this and this, which describe my problem well, so I thought it was a minor issue.
To clarify: the system currently has no HDDs connected, I basically didn't change any settings, and the system spent most of its time idle; I was essentially just checking temperature, power consumption and hardware integrity. The problem seemed to get much worse when I activated the "apps" on the ssd-pool partition. After around a week (and after updating to the BETA version of SCALE, without any significant change), the number of error-log entries was in the order of hundreds of thousands (!!!) and the 'data units written' over 300GB, if I recall correctly. This worries me a bit, because in terms of SSD wear, hundreds of GB written on an idle system seems quite strange (correct me if I'm interpreting this number in the wrong way).

To further check whether it was software or hardware, I made a fresh install of the stable version of TrueNAS CORE (12.0-U4) on the whole SSD.
In this case the problem was mostly absent: on the order of one to a few error-log entries per day. Unfortunately I didn't check the data written, but I can do so if you think it is necessary or useful.

I then reinstalled the BETA version of SCALE, this time on the whole SSD. Paying closer attention to the smartctl information, I noticed that with the system idle, the behaviour was close to that of CORE: a few errors here and there. But as soon as I tried to do something, including actions as simple as refreshing the dashboard, the error-log entries easily increased by tens or hundreds at a time. Additionally, even when the errors were not increasing, the 'data units written' grew by around 10GB over a period of roughly 12 hours with the system idle.
My conclusion was that, in the previous installation, activating the "apps" on the other partition was simply magnifying an existing problem.

I attached the smartctl and nvme-cli info at the bottom.
I do not know if this problem is specific to my SSD; unfortunately, I do not have another one to test with.
Let me know what you think and whether I can provide further information, or if you think it would be useful to open a bug report.

Thank you very much for your help!


HARDWARE SPECS:
MB: Asus P11C-I + ASMB9-iKVM
CPU: Intel Core i3-9100
RAM: 1x16GB ECC Samsung M391A2K43BB1-CTD
SSD: Sabrent Rocket Nano NVME 512GB


SMARTCTL REPORT:

Code:
truenas# smartctl -a /dev/nvme0n1
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.10.42+truenas] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number:                       Sabrent Rocket nano
Serial Number:                      A5CD07140D9183415843
Firmware Version:                   RKT301.3
PCI Vendor/Subsystem ID:            0x1987
IEEE OUI Identifier:                0x6479a7
Total NVM Capacity:                 512,110,190,592 [512 GB]
Unallocated NVM Capacity:           0
Controller ID:                      1
NVMe Version:                       1.3
Number of Namespaces:               1
Namespace 1 Size/Capacity:          512,110,190,592 [512 GB]
Namespace 1 Formatted LBA Size:     512
Namespace 1 IEEE EUI-64:            6479a7 4ae020019f
Local Time is:                      Tue Jul 13 03:42:14 2021 PDT
Firmware Updates (0x12):            1 Slot, no Reset required
Optional Admin Commands (0x0017):   Security Format Frmw_DL Self_Test
Optional NVM Commands (0x005e):     Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
Log Page Attributes (0x0e):         Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg
Maximum Data Transfer Size:         64 Pages
Warning  Comp. Temp. Threshold:     85 Celsius
Critical Comp. Temp. Threshold:     95 Celsius

Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
 0 +     4.50W       -        -    0  0  0  0        0       0
 1 +     2.70W       -        -    1  1  1  1        0       0
 2 +     2.16W       -        -    2  2  2  2        0       0
 3 -   0.0700W       -        -    3  3  3  3     1000    1000
 4 -   0.0020W       -        -    4  4  4  4     5000   45000

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         1
 1 -    4096       0         0

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02)
Critical Warning:                   0x00
Temperature:                        17 Celsius
Available Spare:                    100%
Available Spare Threshold:          5%
Percentage Used:                    0%
Data Units Read:                    22,691 [11.6 GB]
Data Units Written:                 727,398 [372 GB]
Host Read Commands:                 485,158
Host Write Commands:                40,341,997
Controller Busy Time:               55
Power Cycles:                       42
Power On Hours:                     237
Unsafe Shutdowns:                   18
Media and Data Integrity Errors:    0
Error Information Log Entries:      511,463
Warning  Comp. Temperature Time:    0
Critical Comp. Temperature Time:    0
Temperature Sensor 1:               37 Celsius

Error Information (NVMe Log 0x01, 16 of 16 entries)
Num   ErrCount  SQId   CmdId  Status  PELoc          LBA  NSID    VS
  0     511463     0  0x001b  0x4004      -            0     0     -
  1     511462     0  0x001a  0x4004      -            0     0     -
  2     511461     0  0x0017  0x4004      -            0     0     -
  3     511460     0  0x0016  0x4004      -            0     0     -
  4     511459     0  0x0014  0x4004      -            0     0     -
  5     511458     0  0x0017  0x4005      -            0     0     -
  6     511457     0  0x0015  0x4005      -            0     0     -
  7     511456     0  0x0014  0x4005      -            0     0     -
  8     511455     0  0x0018  0x4005      -            0     0     -
  9     511454     0  0x001b  0x4005      -            0     0     -
 10     511453     0  0x0019  0x4005      -            0     0     -
 11     511452     0  0x0018  0x4005      -            0     0     -
 12     511451     0  0x0016  0x4005      -            0     0     -
 13     511450     0  0x0015  0x4005      -            0     0     -
 14     511449     0  0x001a  0x4004      -            0     0     -
 15     511448     0  0x0019  0x4004      -            0     0     -




NVME-CLI REPORT:

Code:
truenas# nvme error-log /dev/nvme0n1
Error Log Entries for device:nvme0n1 entries:16
.................
 Entry[ 0]   
.................
error_count     : 511463
sqid            : 0
cmdid           : 0x1b
status_field    : 0x4004(INVALID_FIELD: A reserved coded value or an unsupported value in a defined field)
parm_err_loc    : 0xffff
lba             : 0
nsid            : 0
vs              : 0
trtype          : The transport type is not indicated or the error is not transport related.
cs              : 0
trtype_spec_info: 0
.................
 Entry[ 1]   
.................
error_count     : 511462
sqid            : 0
cmdid           : 0x1a
status_field    : 0x4004(INVALID_FIELD: A reserved coded value or an unsupported value in a defined field)
parm_err_loc    : 0xffff
lba             : 0
nsid            : 0
vs              : 0
trtype          : The transport type is not indicated or the error is not transport related.
cs              : 0
trtype_spec_info: 0
.................
 Entry[ 2]   
.................
error_count     : 511461
sqid            : 0
cmdid           : 0x17
status_field    : 0x4004(INVALID_FIELD: A reserved coded value or an unsupported value in a defined field)
parm_err_loc    : 0xffff
lba             : 0
nsid            : 0
vs              : 0
trtype          : The transport type is not indicated or the error is not transport related.
cs              : 0
trtype_spec_info: 0
.................
 Entry[ 3]   
.................
error_count     : 511460
sqid            : 0
cmdid           : 0x16
status_field    : 0x4004(INVALID_FIELD: A reserved coded value or an unsupported value in a defined field)
parm_err_loc    : 0xffff
lba             : 0
nsid            : 0
vs              : 0
trtype          : The transport type is not indicated or the error is not transport related.
cs              : 0
trtype_spec_info: 0
.................
 Entry[ 4]   
.................
error_count     : 511459
sqid            : 0
cmdid           : 0x14
status_field    : 0x4004(INVALID_FIELD: A reserved coded value or an unsupported value in a defined field)
parm_err_loc    : 0xffff
lba             : 0
nsid            : 0
vs              : 0
trtype          : The transport type is not indicated or the error is not transport related.
cs              : 0
trtype_spec_info: 0
.................
 Entry[ 5]   
.................
error_count     : 511458
sqid            : 0
cmdid           : 0x17
status_field    : 0x4005(INVALID_FIELD: A reserved coded value or an unsupported value in a defined field)
parm_err_loc    : 0xffff
lba             : 0
nsid            : 0
vs              : 0
trtype          : The transport type is not indicated or the error is not transport related.
cs              : 0
trtype_spec_info: 0
.................
 Entry[ 6]   
.................
error_count     : 511457
sqid            : 0
cmdid           : 0x15
status_field    : 0x4005(INVALID_FIELD: A reserved coded value or an unsupported value in a defined field)
parm_err_loc    : 0xffff
lba             : 0
nsid            : 0
vs              : 0
trtype          : The transport type is not indicated or the error is not transport related.
cs              : 0
trtype_spec_info: 0
.................
 Entry[ 7]   
.................
error_count     : 511456
sqid            : 0
cmdid           : 0x14
status_field    : 0x4005(INVALID_FIELD: A reserved coded value or an unsupported value in a defined field)
parm_err_loc    : 0xffff
lba             : 0
nsid            : 0
vs              : 0
trtype          : The transport type is not indicated or the error is not transport related.
cs              : 0
trtype_spec_info: 0
.................
 Entry[ 8]   
.................
error_count     : 511455
sqid            : 0
cmdid           : 0x18
status_field    : 0x4005(INVALID_FIELD: A reserved coded value or an unsupported value in a defined field)
parm_err_loc    : 0xffff
lba             : 0
nsid            : 0
vs              : 0
trtype          : The transport type is not indicated or the error is not transport related.
cs              : 0
trtype_spec_info: 0
.................
 Entry[ 9]   
.................
error_count     : 511454
sqid            : 0
cmdid           : 0x1b
status_field    : 0x4005(INVALID_FIELD: A reserved coded value or an unsupported value in a defined field)
parm_err_loc    : 0xffff
lba             : 0
nsid            : 0
vs              : 0
trtype          : The transport type is not indicated or the error is not transport related.
cs              : 0
trtype_spec_info: 0
.................
 Entry[10]   
.................
error_count     : 511453
sqid            : 0
cmdid           : 0x19
status_field    : 0x4005(INVALID_FIELD: A reserved coded value or an unsupported value in a defined field)
parm_err_loc    : 0xffff
lba             : 0
nsid            : 0
vs              : 0
trtype          : The transport type is not indicated or the error is not transport related.
cs              : 0
trtype_spec_info: 0
.................
 Entry[11]   
.................
error_count     : 511452
sqid            : 0
cmdid           : 0x18
status_field    : 0x4005(INVALID_FIELD: A reserved coded value or an unsupported value in a defined field)
parm_err_loc    : 0xffff
lba             : 0
nsid            : 0
vs              : 0
trtype          : The transport type is not indicated or the error is not transport related.
cs              : 0
trtype_spec_info: 0
.................
 Entry[12]   
.................
error_count     : 511451
sqid            : 0
cmdid           : 0x16
status_field    : 0x4005(INVALID_FIELD: A reserved coded value or an unsupported value in a defined field)
parm_err_loc    : 0xffff
lba             : 0
nsid            : 0
vs              : 0
trtype          : The transport type is not indicated or the error is not transport related.
cs              : 0
trtype_spec_info: 0
.................
 Entry[13]   
.................
error_count     : 511450
sqid            : 0
cmdid           : 0x15
status_field    : 0x4005(INVALID_FIELD: A reserved coded value or an unsupported value in a defined field)
parm_err_loc    : 0xffff
lba             : 0
nsid            : 0
vs              : 0
trtype          : The transport type is not indicated or the error is not transport related.
cs              : 0
trtype_spec_info: 0
.................
 Entry[14]   
.................
error_count     : 511449
sqid            : 0
cmdid           : 0x1a
status_field    : 0x4004(INVALID_FIELD: A reserved coded value or an unsupported value in a defined field)
parm_err_loc    : 0xffff
lba             : 0
nsid            : 0
vs              : 0
trtype          : The transport type is not indicated or the error is not transport related.
cs              : 0
trtype_spec_info: 0
.................
 Entry[15]   
.................
error_count     : 511448
sqid            : 0
cmdid           : 0x19
status_field    : 0x4004(INVALID_FIELD: A reserved coded value or an unsupported value in a defined field)
parm_err_loc    : 0xffff
lba             : 0
nsid            : 0
vs              : 0
trtype          : The transport type is not indicated or the error is not transport related.
cs              : 0
trtype_spec_info: 0
.................
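A counter this large is easiest to judge as a rate. The sketch below extracts the "Error Information Log Entries" counter from `smartctl` output so two samples taken some hours apart can be compared; the sample lines are hard-coded here (the first is from the report above, the second value is invented), and on a live system you would pipe `smartctl -a /dev/nvme0n1` through `errlog_count` instead:

```shell
# errlog_count: pull the error-log counter out of `smartctl -a` output,
# stripping the thousands separators.
errlog_count() {
    grep 'Error Information Log Entries' | awk '{gsub(/,/, "", $NF); print $NF}'
}

# First sample from the report above; second sample invented for illustration.
before=$(echo 'Error Information Log Entries:      511,463' | errlog_count)
after=$(echo 'Error Information Log Entries:      511,907' | errlog_count)
echo "$((after - before)) new entries between samples"
```

If the counter only climbs while the system is being used, that would suggest the drive is rejecting specific commands the Linux NVMe driver issues (the 0x4004/0x4005 INVALID_FIELD statuses above) rather than suffering media damage, which is consistent with the report's "Media and Data Integrity Errors: 0".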
 

albicocca

Newbie
Joined
Jul 13, 2021
Messages
1
I want to know how long the wait for the official version will be. Once it's out, can the beta version be smoothly updated to the official version? :oops:
 

UmbraAtrox

Newbie
Joined
Jul 20, 2021
Messages
2
Hi, I found this: if one shuts down the server via the web interface, the URL truenas.local/ui/others/shutdown stays as the last thing in the tab. If I then start the server and switch to that tab, Chrome reloads the page and shuts the server down again. I don't know if that is a bug, a mild inconvenience, or intended. If it is intended, please ignore me.
 