Couple items noted

jenksdrummer

Patron
Joined
Jun 7, 2011
Messages
250
Took a running 11.3-U1 system, dumped my zpool before upgrading to TrueNAS-12.0-MASTER-202003250424, then built a new zpool. So far so good; currently doing a snapshot-replication pull from another 11.3-U1 box via netcat+SSH, using a job I had set up before but disabled. Roughly 5TB of data will transfer over a 10G network, so it should be done fairly quickly. ;)
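For anyone reproducing this kind of pull, a minimal sketch of a ZFS snapshot replication over plain SSH (host, pool, and dataset names here are placeholders, not the poster's setup; the netcat transport the GUI replication job uses is omitted):

```shell
# On the destination box: pull a recursive snapshot stream from the source.
# "tank/data@migrate" and "source-nas" are hypothetical names.
ssh root@source-nas "zfs send -R tank/data@migrate" | zfs recv -F tank/data
```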

* Noted the dashboard says I have non-ECC RAM. Untrue: this is a complete Supermicro system; ECC was/is part of the spec, and I can see the RAM has the extra chip. Perhaps it could imply that ECC is somehow not enabled, but I'm skeptical the dashboard is accurate...

* Related: noticed that ARC seems to cap out at 96GB of RAM. I have 196GB installed; the dashboard says 190.06GB (due to hardware reservations; it always said that), but somewhere ARC seems capped at using 96GB.
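One way to check whether the cap is a tunable or an ARC-internal target is to compare the sysctl ceiling against the live ARC stats on FreeBSD-based TrueNAS (a sketch; run as root on the affected box):

```shell
# Configured ARC ceiling in bytes (0 on some builds means auto-sized).
sysctl vfs.zfs.arc_max
# Effective maximum the ARC is currently targeting, and its current size.
sysctl kstat.zfs.misc.arcstats.c_max
sysctl kstat.zfs.misc.arcstats.size
```

If `arc_max` reads roughly 96GB in bytes, something set the tunable; if it reads 0 but `c_max` is 96GB, the cap is being computed elsewhere.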

* Accidentally left the VMware configuration in place; I can't remove it without the UI reloading, and it logged the following:
daemon 4885 - - regdb_init: Failed to open registry /var/db/system/samba4/registry.tdb (Permission denied)
daemon 4885 - - Failed to initialize the registry: WERR_ACCESS_DENIED
daemon 4885 - - error initializing registry configuration: SBC_ERR_BADFILE
daemon 4885 - - Traceback (most recent call last):
daemon 4885 - - File "/usr/local/bin/wsdd.py", line 767, in <module>
daemon 4885 - - main()
daemon 4885 - - File "/usr/local/bin/wsdd.py", line 751, in main
daemon 4885 - - LP_CTX.load("/usr/local/etc/smb4.conf")
daemon 4885 - - RuntimeError: Unable to load file /usr/local/etc/smb4.conf

* Reporting - yeah... not as useful as in 11.3-U1; it seems broken when it comes to disk reporting. Hard to tell whether my mirrored SSDs are being used by the metadata/small-I/O functionality, or whether that setup has any benefit. I know my SSDs sat idle as SLOG before, but I also didn't use sync writes.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
There are open bugs for ECC reporting and the ARC usage.

Nightlies stopped building on 0325 and haven't started back up. That means what we see now doesn't have all available fixes, until the alpha is out or nightlies start back up.
 

Kris Moore

SVP of Engineering
Administrator
Moderator
iXsystems
Joined
Nov 12, 2015
Messages
1,448
We've implemented some better CI testing on the nightlies now. We don't push them until 100% of API tests are passing, so we can ensure that they work (reasonably) well.

We had some merges break a few things this week, but we expect them to be back and pushing normally soon.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,545
* Accidentally left the VMware configuration in place; I can't remove it without the UI reloading, and it logged the following:
daemon 4885 - - regdb_init: Failed to open registry /var/db/system/samba4/registry.tdb (Permission denied)
daemon 4885 - - Failed to initialize the registry: WERR_ACCESS_DENIED
daemon 4885 - - error initializing registry configuration: SBC_ERR_BADFILE
daemon 4885 - - Traceback (most recent call last):
daemon 4885 - - File "/usr/local/bin/wsdd.py", line 767, in <module>
daemon 4885 - - main()
daemon 4885 - - File "/usr/local/bin/wsdd.py", line 751, in main
daemon 4885 - - LP_CTX.load("/usr/local/etc/smb4.conf")
daemon 4885 - - RuntimeError: Unable to load file /usr/local/etc/smb4.conf.
This was fixed last week and should not be an issue in the next release we push. 12.0 adds FSRVP support over SMB, which required changes to how the SMB configuration is managed (the feature relies on libsmbconf to dynamically generate shares). These changes ended up being incompatible with how I was using Samba's Python bindings to read the SMB configuration in wsdd.py, so I rewrote how we load the configuration in that particular application. That said, it has no functional impact other than log spam and broken WS-Discovery.
 

jenksdrummer

Patron
Joined
Jun 7, 2011
Messages
250
Since last round I moved back to 11.3, but then yesterday switched trains back to 12.

ECC is reporting correctly. Yay!
Reporting is working a LOT better now. Yay!!!
ARC still seems capped at approx 96GB =?

Special vdevs are going to be a game changer! My layout:
5x mirrored HDD vdevs
1 special vdev (mirrored SSDs) for metadata/small-block I/O
1 NVMe disk assigned as the dedup vdev

Turned on dedup on the pool from the get-go, and performance is legit. Generally I'd see performance start tanking within a few hours of data ingestion, but it's eating everything I can throw at it. WIN!!!

A bit of reading in the 0.8.1 manpages on special vdevs: by default, metadata and dedup tables are stored there; small-block I/O can be too, but the default threshold is "0", meaning no blocks are redirected. First, this means I didn't really need to spec that NVMe disk for dedup; I could use the mirrored special vdev for that. But, as a question: since I have a single NVMe disk allocated there, should that disk fail, I imagine I would see a pool failure? I.e., is the dedup table stored elsewhere as well? It's also not like L2ARC, I assume...
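On the failure question: unlike L2ARC, allocation-class vdevs (special and dedup) hold the only on-disk copy of their data, so losing an unmirrored dedup vdev does put the pool at risk; they should carry the same redundancy as the data vdevs. A sketch of the related knobs (pool and dataset names are placeholders):

```shell
# Small-block redirection stays off until a per-dataset threshold is set;
# this sends blocks of 32K or smaller to the special vdev.
zfs set special_small_blocks=32K tank/mydata

# Review the vdev layout; special/dedup vdevs listed without "mirror"
# are single points of failure for the whole pool.
zpool status tank
```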

Second, and a bit of a realization for me: this isn't really 'tiered storage'. If you have 2TB of qualifying small blocks and a 1TB special vdev, it will fill the 1TB special vdev and then write the remaining data out to the data vdev(s) until space is cleared by deletion within the special vdev; i.e., there's no process that would move that data off. It also means it's possible to 'poison' the special vdev with infrequently accessed/written data. This tells me one should have a significantly larger special vdev and be very careful with how one sets the small-block I/O threshold!
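Because of that spill-over behavior, it's worth watching how full the special vdev is; a sketch (pool name is a placeholder):

```shell
# Per-vdev capacity breakdown; once the "special" line nears full,
# qualifying small blocks silently land on the ordinary data vdevs instead.
zpool list -v tank
```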

jenksdrummer

Patron
Joined
Jun 7, 2011
Messages
250
Updated to the latest posted build. Created a clone of a replicated dataset from an 11.3 FreeNAS box to test SMB multi-streaming (works; at least it seems similar to what's in 11.3, since multi-streaming is still not supported).

TN12 pool has dedup enabled; 6x 2-disk mirror HDD vdevs, 2x mirrored 1TB NVMe M.2 for metadata, 1TB NVMe M.2 for L2ARC, 2x mirrored SATA SSDs for SLOG (but sync not enforced); 192GB ECC RAM, Supermicro box with an Intel 4210.
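For reference, a hedged sketch of creating that layout from the CLI; the device names (da0..da11, nvd0..nvd2, ada0/ada1) are hypothetical, not the poster's actual devices:

```shell
# Six 2-disk HDD mirrors, a mirrored NVMe special vdev, an NVMe L2ARC,
# and a mirrored SATA-SSD SLOG, with dedup enabled pool-wide.
zpool create tank \
  mirror da0 da1  mirror da2 da3  mirror da4 da5 \
  mirror da6 da7  mirror da8 da9  mirror da10 da11 \
  special mirror nvd0 nvd1 \
  cache nvd2 \
  log mirror ada0 ada1
zfs set dedup=on tank
```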

Created an SMB share of that clone; all seemed to work fine. Took out the share, disabled SMB, went to delete the cloned dataset, and the box rebooted unexpectedly. I'd provide logs if I knew where to grab them.

jenksdrummer

Patron
Joined
Jun 7, 2011
Messages
250
...and it's repeatable: the dataset was still there after the box rebooted itself; I attempted to delete it again, and it bounced itself again.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
You can grab a debug from System -> Advanced, and also open a case with the debug attached from System -> Support.
 

jenksdrummer

Patron
Joined
Jun 7, 2011
Messages
250
Yeah, not interested in creating yet another account.

Anyhow, updated to the latest as of today, attempted it again, and did a capture of the console, pasted below.

(console capture attached: 1588615694322.png)

jenksdrummer

Patron
Joined
Jun 7, 2011
Messages
250
Updated to the latest nightly (today) and it still does it. Going to try dropping the dataset via the CLI when it finishes booting back up =/

The CLI blew chunks on it too. Looks like I'll blow out the pool and not do that again until release. =/
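For anyone following along, the CLI attempt would look roughly like this (pool and dataset names are hypothetical; on this build the command reportedly panics the box just as the UI does):

```shell
# Destroy the cloned dataset and anything beneath it (snapshots, children).
zfs destroy -r tank/clone-of-replica
```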