NVMe Namespaces with Kingston DC1500M

Tony-1971

Contributor
Hello,
I want to write up my first experience with NVMe namespaces, because I couldn't find much information on this subject.
I'm not a FreeBSD or TrueNAS Core expert, so I have probably made some mistakes.
I bought a Kingston DC1500M (960 GB, 64 namespaces supported).
I installed the new disk in my system to check that it works, made a pool with only that disk, ran some tests, and then destroyed the pool.
Then I started using the nvmecontrol command.
List NVMe controllers and namespaces
Code:
root@tn-epyc[~]# nvmecontrol devlist
 nvme0: Samsung SSD 980 PRO with Heatsink 2TB
    nvme0ns1 (1907729MB)
 nvme1: Samsung SSD 980 PRO with Heatsink 2TB
    nvme1ns1 (1907729MB)
 nvme2: Samsung SSD 980 PRO with Heatsink 2TB
    nvme2ns1 (1907729MB)
 nvme3: Samsung SSD 980 PRO with Heatsink 2TB
    nvme3ns1 (1907729MB)
 nvme4: KINGSTON SEDC1500M960G
    nvme4ns1 (915715MB)

The only disk that supports namespace management is the last one, so I use that disk (nvme4) in the following commands.
Print summary of the IDENTIFY information
Code:
root@tn-epyc[~]# nvmecontrol identify nvme4
Controller Capabilities/Features
================================
Vendor ID:                   2646
Subsystem Vendor ID:         2646
Serial Number:               50026B7282F56809
Model Number:                KINGSTON SEDC1500M960G
Firmware Version:            S67F0103
Recommended Arb Burst:       3
IEEE OUI Identifier:         b7 26 00
Multi-Path I/O Capabilities: Not Supported
Max Data Transfer Size:      262144 bytes
Controller ID:               0x0001
Version:                     1.3.0

Admin Command Set Attributes
============================
Security Send/Receive:       Not Supported
Format NVM:                  Supported
Firmware Activate/Download:  Supported
Namespace Management:        Supported
Device Self-test:            Not Supported
Directives:                  Not Supported
NVMe-MI Send/Receive:        Not Supported
Virtualization Management:   Not Supported
Doorbell Buffer Config:      Not Supported
Get LBA Status:              Not Supported
Sanitize:                    Not Supported
Abort Command Limit:         4
Async Event Request Limit:   4
Number of Firmware Slots:    4
Firmware Slot 1 Read-Only:   No
Per-Namespace SMART Log:     No
Error Log Page Entries:      64
Number of Power States:      1
Total NVM Capacity:          960197124096 bytes
Unallocated NVM Capacity:    0 bytes
Firmware Update Granularity: 00 (Not Reported)
Host Buffer Preferred Size:  0 bytes
Host Buffer Minimum Size:    0 bytes

NVM Command Set Attributes
==========================
Submission Queue Entry Size
  Max:                       64
  Min:                       64
Completion Queue Entry Size
  Max:                       16
  Min:                       16
Number of Namespaces:        64
Compare Command:             Not Supported
Write Uncorrectable Command: Not Supported
Dataset Management Command:  Supported
Write Zeroes Command:        Not Supported
Save Features:               Supported
Reservations:                Not Supported
Timestamp feature:           Supported
Verify feature:              Not Supported
Fused Operation Support:     Not Supported
Format NVM Attributes:       All-NVM Erase, All-NVM Format
Volatile Write Cache:        Not Present

NVM Subsystem Name:          nqn.2023-02.com.kingston:nvm-subsystem-sn-50026B7282F56809
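
To check quickly whether a controller supports namespace management, without reading the whole identify output, a grep like this is enough (the pattern is just my own choice):
Code:
root@tn-epyc[~]# nvmecontrol identify nvme4 | grep -E 'Namespace Management|Number of Namespaces|NVM Capacity'
Namespace Management:        Supported
Total NVM Capacity:          960197124096 bytes
Unallocated NVM Capacity:    0 bytes
Number of Namespaces:        64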

List active and allocated namespaces
Code:
root@tn-epyc[~]# nvmecontrol ns active nvme4
Active namespaces:
         1
root@tn-epyc[~]# nvmecontrol ns allocated nvme4
Allocated namespaces:
         1

Print IDENTIFY for allocated namespace
Code:
root@tn-epyc[~]# nvmecontrol ns identify nvme4ns1
Size:                        1875385008 blocks
Capacity:                    1875385008 blocks
Utilization:                 1875385008 blocks
Thin Provisioning:           Not Supported
Number of LBA Formats:       1
Current LBA Format:          LBA Format #00
Data Protection Caps:        Not Supported
Data Protection Settings:    Not Enabled
Multi-Path I/O Capabilities: Not Supported
Reservation Capabilities:    Not Supported
Format Progress Indicator:   Not Supported
Deallocate Logical Block:    Read Not Reported
Optimal I/O Boundary:        0 blocks
NVM Capacity:                960197124096 bytes
Globally Unique Identifier:  00000000000000000026b7282f568095
IEEE EUI64:                  0026b72002f56809
LBA Format #00: Data Size:   512  Metadata Size:     0  Performance: Best

List all controllers in NVM subsystem
Code:
root@tn-epyc[~]# nvmecontrol ns controllers nvme4
NVM subsystem includes 1 controller(s):
  0x0001

List controllers attached to a namespace
Code:
root@tn-epyc[~]# nvmecontrol ns attached nvme4ns1
Attached 1 controller(s):
  0x0001

I think the commands above are safe to run on a live system.
Now I want to destroy this namespace (it currently uses all the space on the disk) and try to create 4 namespaces instead.
Warning: from this point on, the commands destroy namespaces and all the data on the disk!!!
Don't run them on a disk you care about!!!
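
Before the destructive part, a sanity check I would suggest (only a sketch; nvd4 is the disk device for nvme4ns1 on my system, and TrueNAS pools reference gptid/... labels, which glabel maps back to the device):
Code:
# make sure no pool still references the namespace you are about to delete
root@tn-epyc[~]# glabel status | grep nvd4
# (any gptid/... shown here must not appear in zpool status)
root@tn-epyc[~]# zpool status | grep -i nvd4
# (should print nothing if the test pool was really destroyed)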

Detach a controller from a namespace

Code:
root@tn-epyc[~]# nvmecontrol ns detach nvme4ns1
namespace 1 detached

# Check
root@tn-epyc[~]# nvmecontrol ns attached nvme4ns1
Attached 0 controller(s):

root@tn-epyc[~]# nvmecontrol ns active nvme4
Active namespaces:

root@tn-epyc[~]# nvmecontrol ns allocated nvme4
Allocated namespaces:
         1

Delete a namespace
Code:
root@tn-epyc[~]# nvmecontrol ns delete nvme4ns1
namespace 1 deleted

# Check
root@tn-epyc[~]# nvmecontrol devlist
...
 nvme4: KINGSTON SEDC1500M960G

root@tn-epyc[~]# nvmecontrol ns allocated nvme4
Allocated namespaces:

These are the sizes of my disk in bytes and in blocks (from the nvmecontrol ns identify nvme4ns1 command above), and I want 4 equal namespaces:
Bytes:
960197124096 / 4 = 240049281024

Blocks:
1875385008 / 4 = 468846252
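
Just to sanity-check the arithmetic in the shell (values taken from the ns identify output above; 512-byte LBAs):
Code:
# blocks per namespace
root@tn-epyc[~]# echo $((1875385008 / 4))
468846252
# bytes per namespace (512-byte blocks)
root@tn-epyc[~]# echo $((468846252 * 512))
240049281024
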
Create 4 namespaces
Code:
root@tn-epyc[~]# nvmecontrol ns create -s 468846252 -c 468846252 -L 0 -d 0 nvme4
namespace 1 created
root@tn-epyc[~]# nvmecontrol ns create -s 468846252 -c 468846252 -L 0 -d 0 nvme4
namespace 2 created
root@tn-epyc[~]# nvmecontrol ns create -s 468846252 -c 468846252 -L 0 -d 0 nvme4
namespace 3 created
root@tn-epyc[~]# nvmecontrol ns create -s 468846252 -c 468846252 -L 0 -d 0 nvme4
namespace 4 created

# Check
root@tn-epyc[~]# nvmecontrol ns active nvme4
Active namespaces:
root@tn-epyc[~]# nvmecontrol ns allocated nvme4
Allocated namespaces:
         1
         2
         3
         4

Attach the namespaces to the controller
Code:
root@tn-epyc[~]# nvmecontrol ns attach -n 1 -c 1 nvme4
namespace 1 attached
root@tn-epyc[~]# nvmecontrol ns attach -n 2 -c 1 nvme4
namespace 2 attached
root@tn-epyc[~]# nvmecontrol ns attach -n 3 -c 1 nvme4
namespace 3 attached
root@tn-epyc[~]# nvmecontrol ns attach -n 4 -c 1 nvme4
namespace 4 attached

# Check
root@tn-epyc[~]# nvmecontrol ns active nvme4
Active namespaces:
         1
         2
         3
         4
root@tn-epyc[~]# nvmecontrol ns allocated nvme4
Allocated namespaces:
         1
         2
         3
         4
root@tn-epyc[~]# nvmecontrol devlist
 nvme0: Samsung SSD 980 PRO with Heatsink 2TB
    nvme0ns1 (1907729MB)
 nvme1: Samsung SSD 980 PRO with Heatsink 2TB
    nvme1ns1 (1907729MB)
 nvme2: Samsung SSD 980 PRO with Heatsink 2TB
    nvme2ns1 (1907729MB)
 nvme3: Samsung SSD 980 PRO with Heatsink 2TB
    nvme3ns1 (1907729MB)
 nvme4: KINGSTON SEDC1500M960G
    nvme4ns1 (228928MB)
    nvme4ns2 (228928MB)
    nvme4ns3 (228928MB)
    nvme4ns4 (228928MB)

To create all the /dev/nvme4ns? device files, I execute a controller reset
Code:
root@tn-epyc[~]# nvmecontrol reset nvme4

# Check
root@tn-epyc[~]# ls -la /dev/nvme4*
crw-------  1 root  wheel  0x55 Jun 21 14:42 /dev/nvme4
crw-------  1 root  wheel  0x87 Jun 21 14:42 /dev/nvme4ns1
crw-------  1 root  wheel  0xf2 Jun 22 14:06 /dev/nvme4ns2
crw-------  1 root  wheel  0xf3 Jun 22 14:06 /dev/nvme4ns3
crw-------  1 root  wheel  0xf4 Jun 22 14:06 /dev/nvme4ns4

Not all of the /dev/nvd? devices were created, so I rebooted the server.
After the reboot all the devices are present:
Code:
root@tn-epyc[~]# ls -la /dev/nvd?
crw-r-----  1 root  operator  0x73 Jun 22 14:49 /dev/nvd0
crw-r-----  1 root  operator  0x79 Jun 22 14:49 /dev/nvd1
crw-r-----  1 root  operator  0x7f Jun 22 14:49 /dev/nvd2
crw-r-----  1 root  operator  0x85 Jun 22 14:49 /dev/nvd3
crw-r-----  1 root  operator  0x8e Jun 22 14:49 /dev/nvd4
crw-r-----  1 root  operator  0x8f Jun 22 14:49 /dev/nvd5
crw-r-----  1 root  operator  0x90 Jun 22 14:49 /dev/nvd6
crw-r-----  1 root  operator  0x91 Jun 22 14:49 /dev/nvd7

But in the TrueNAS web interface nvd4 is missing.
[Screenshot: TrueNAS disk list with nvd4 missing]

Probably I did something wrong with the nvmecontrol commands.
If someone has some ideas on how to fix my configuration...
Sorry for the long post!
Best Regards,
Antonio
 

Tony-1971

Contributor
Hello,
Today I tried stopping and starting the TrueNAS server, and now the web interface shows only the nvd7 disk.
It seems like a problem with TrueNAS, because nvmecontrol detects all 4 namespaces.
Best Regards,
Antonio
 

Tony-1971

Contributor
Hello,
I also tried using the nda driver (hw.nvme.use_nvd=0 as a LOADER type Tunable), but the problem is not resolved.
In the web interface I see only the nda7 disk.
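
For reference, I believe this corresponds to the following line in /boot/loader.conf on plain FreeBSD (on TrueNAS the Tunables screen manages it, so treat this only as an illustration); the current value can be checked with sysctl:
Code:
# /boot/loader.conf -- 0 = expose NVMe disks via nda (CAM), 1 = via nvd
hw.nvme.use_nvd="0"

root@tn-epyc[~]# sysctl hw.nvme.use_nvd
hw.nvme.use_nvd: 0
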
Best Regards,
Antonio
 

Patrick M. Hausen

Hall of Famer
TrueNAS relies on disk serial numbers to identify individual disks. If 4 "disks" share a single serial number it won't work as expected. I doubt this can be considered a bug because it is intended behaviour. Can you set a serial number for a namespace?
 

Tony-1971

Contributor
Hello,
When I create a Namespace I can't set a serial number:
Code:
root@tn-epyc[~]# nvmecontrol ns create
Missing arg controller-id|namespace-id
Usage:
    nvmecontrol ns create <args> controller-id|namespace-id

Create a namespace
Options:
 -s, --nsze=<NUM>              - The namespace size
 -c, --ncap=<NUM>              - The capacity of the namespace (<= ns size)
 -f, --lbaf=<NUM>              - The FMT field of the FLBAS
 -m, --mset=<NUM>              - The MSET field of the FLBAS
 -n, --nmic=<NUM>              - Namespace multipath and sharing capabilities
 -p, --pi=<NUM>                - PI field of FLBAS
 -l, --pil=<NUM>               - PIL field of FLBAS
 -L, --flbas=<NUM>             - Namespace formatted logical block size setting
 -d, --dps=<NUM>               - Data protection settings

And in the dmesg log they all have the same serial number after creation:
Code:
root@tn-epyc[~]# dmesg| grep nda
FreeBSD is a registered trademark of The FreeBSD Foundation.
...
nda4 at nvme4 bus 0 scbus40 target 0 lun 1
nda4: <KINGSTON SEDC1500M960G S67F0103 50026B7282F56809>
nda4: Serial Number 50026B7282F56809
nda4: nvme version 1.3 x4 (max x4) lanes PCIe Gen3 (max Gen3) link
nda4: 228928MB (468846252 512 byte sectors)
nda5 at nvme4 bus 0 scbus40 target 0 lun 2
nda5: <KINGSTON SEDC1500M960G S67F0103 50026B7282F56809>
nda5: Serial Number 50026B7282F56809
nda5: nvme version 1.3 x4 (max x4) lanes PCIe Gen3 (max Gen3) link
nda5: 228928MB (468846252 512 byte sectors)
nda6 at nvme4 bus 0 scbus40 target 0 lun 3
nda6: <KINGSTON SEDC1500M960G S67F0103 50026B7282F56809>
nda6: Serial Number 50026B7282F56809
nda6: nvme version 1.3 x4 (max x4) lanes PCIe Gen3 (max Gen3) link
nda6: 228928MB (468846252 512 byte sectors)
nda7 at nvme4 bus 0 scbus40 target 0 lun 4
nda7: <KINGSTON SEDC1500M960G S67F0103 50026B7282F56809>
nda7: Serial Number 50026B7282F56809
nda7: nvme version 1.3 x4 (max x4) lanes PCIe Gen3 (max Gen3) link
nda7: 228928MB (468846252 512 byte sectors)

But with the identify command there are two fields that are different for each namespace:
Code:
root@tn-epyc[~]# nvmecontrol ns identify nvme4ns1 | grep -e Globally -e IEEE
Globally Unique Identifier:  00000000000000000026b7282f568095
IEEE EUI64:                  0026b72002f56809
root@tn-epyc[~]# nvmecontrol ns identify nvme4ns2 | grep -e Globally -e IEEE
Globally Unique Identifier:  00000000000000010026b7282f568095
IEEE EUI64:                  0026b72012f56809
root@tn-epyc[~]# nvmecontrol ns identify nvme4ns3 | grep -e Globally -e IEEE
Globally Unique Identifier:  00000000000000020026b7282f568095
IEEE EUI64:                  0026b72022f56809
root@tn-epyc[~]# nvmecontrol ns identify nvme4ns4 | grep -e Globally -e IEEE
Globally Unique Identifier:  00000000000000030026b7282f568095
IEEE EUI64:                  0026b72032f56809
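
To double-check what the OS reports at the GEOM level (which is, I assume, where the middleware ultimately reads the serial from), the ident field can be inspected; a sketch:
Code:
root@tn-epyc[~]# geom disk list nda4 nda5 | grep -E 'Name|ident'
# both namespaces report the same ident (the controller serial number)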

Best Regards,
Antonio
 

Patrick M. Hausen

Hall of Famer
Again: TrueNAS relies on the serial number exclusively. It's irrelevant what other identifiers might be there.

You might want to open an issue in JIRA; I have no idea whether iX will consider it, though.
 

Tony-1971

Contributor
Hello,
Yesterday I powered off the server to add some RAM, and now I can see all the namespaces in the web interface:
[Screenshot: all namespaces visible in the TrueNAS web interface]

And I can create a new pool:
[Screenshot: creating a new pool from the four namespaces]

The new pool seems OK:
Code:
root@tn-epyc[~]# zpool status -v test-namespaces
  pool: test-namespaces
 state: ONLINE
config:

        NAME                                          STATE     READ WRITE CKSUM
        test-namespaces                               ONLINE       0     0     0
          gptid/6638ae1e-31fe-11ee-9c41-a8a159f24f0f  ONLINE       0     0     0
          gptid/66351e80-31fe-11ee-9c41-a8a159f24f0f  ONLINE       0     0     0
          gptid/6636db73-31fe-11ee-9c41-a8a159f24f0f  ONLINE       0     0     0
          gptid/663b1f0b-31fe-11ee-9c41-a8a159f24f0f  ONLINE       0     0     0

errors: No known data errors

I don't think that adding RAM (I went from 64 GB to 128 GB) is the real fix.
I don't remember any other difference, except maybe the newer Core version, TrueNAS-13.0-U5.3.

Best Regards,
Antonio
 

benda

Dabbler
Tony-1971 said:
Yesterday I powered off the server to add some RAM, and now I can see all the namespaces in the web interface: [...]

Are you still using the Kingston DC1500M with multiple namespaces?
I plan to do this with two of them: create a mirrored boot pool and use the rest for SLOG and maybe a metadata vdev.
 

Tony-1971

Contributor
Yes, but only for testing purposes.
I'm using the following Intel NVMe in TrueNAS SCALE:
Code:
root@tn-xeond[~]# nvme list
Node                  Generic               SN                   Model                                    Namespace Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme2n1          /dev/ng2n1            XXXXXXXXXXXXXXXX      Samsung SSD 970 PRO 512GB                1         511.14  GB / 512.11  GB    512   B +  0 B   1B2QEXP7
/dev/nvme1n1          /dev/ng1n1            YYYYYYYYYYYYYYYY      Samsung SSD 970 PRO 512GB                1         511.14  GB / 512.11  GB    512   B +  0 B   1B2QEXP7
/dev/nvme0n3          /dev/ng0n3            ZZZZZZZZZZZZZZZZZZZ   INTEL SSDPF2KX019T1M                     3           1.44  TB /   1.44  TB    512   B +  0 B   9CV10410
/dev/nvme0n2          /dev/ng0n2            ZZZZZZZZZZZZZZZZZZZ   INTEL SSDPF2KX019T1M                     2         412.32  GB / 412.32  GB    512   B +  0 B   9CV10410
/dev/nvme0n1          /dev/ng0n1            ZZZZZZZZZZZZZZZZZZZ   INTEL SSDPF2KX019T1M                     1          68.72  GB /  68.72  GB    512   B +  0 B   9CV10410

The namespaces are used for SLOG (62 GB) and L2ARC (412 GB) for the HDD pool.
The big namespace is used as a Proxmox dataset for VMs (connected using NFS).
No redundancy (very scary, but it is for home use); I back the VMs up to the HDD pool.
The two Samsungs are used as a metadata vdev for the HDD pool.
Best Regards,
Antonio
 

benda

Dabbler
Tony-1971 said:
Yes, but only for testing purposes. I'm using the following Intel NVMe in TrueNAS SCALE: [...]

Thank you for the information.
I ordered 2x Kingston DC1500M 1.92 TB and plan to use them as a mirror for boot, SLOG (HDD pool), L2ARC, metadata, and a dataset for the app pool (Docker/Kubernetes).
Do you think it's a good idea?
The background is to save power consumption and not to use a separate SSD for each purpose.

Do you know how I would replace one SSD in the mirror if it dies? "Create the same namespaces and sizes on the new SSD and install it"?
 

Tony-1971

Contributor
I never tried to boot from an NVMe with multiple namespaces: I don't know if it works.
In general I think yours is a good idea for a home user, but TrueNAS doesn't support it (yet?).
I mean: if you want to use this solution for work, or if you have a support contract for your system, I don't think it is a good idea.

Also, I don't know what to do if one disk fails; probably the way you suggest is the right one (see the sketch below).
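
For the replacement, I would expect something like the following to work, but it is only a sketch that I have not tested (device names, the controller id and the <placeholders> are just examples):
Code:
# 1) recreate the same namespace layout on the new drive (use the sizes from your own layout)
root@tn-epyc[~]# nvmecontrol ns create -s <blocks> -c <blocks> -L 0 -d 0 nvme5
root@tn-epyc[~]# nvmecontrol ns attach -n 1 -c 1 nvme5
# ...repeat create/attach for each namespace, then:
root@tn-epyc[~]# nvmecontrol reset nvme5
# 2) replace the failed member in each affected pool/vdev
root@tn-epyc[~]# zpool replace <pool> <old-gptid-or-device> <new-device>
# (on TrueNAS the replace step is normally done from the web UI)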

If you want the "fastest" NVMe drive, there are better (and more expensive) choices than Kingston.

Best Regards,
Antonio
 

Etorix

Wizard
If you don't know the answer to these questions, you should not consider partitioning drives. Period.

Incidentally, L2ARC and metadata roles are somewhat redundant, and mixing a metadata vdev with any other duty is a Very Bad Idea™: a metadata vdev is pool-critical, and if it is lost the whole pool is lost.
 

benda

Dabbler
Tony-1971 said:
I never tried to boot from an NVMe with multiple namespaces: I don't know if it works. [...]

I got the 2x 1.92 TB for 179 € and they are on the way. I will do some tests and let you know if it works.
Is there anything I should note when I create the namespaces on the SSDs?
I will use this guide.
 

benda

Dabbler
Etorix said:
If you don't know the answer to these questions, you should not consider partitioning drives. Period. [...]

I will mirror it. Still not a good idea?
 

Etorix

Wizard
That's only one degree of redundancy, which is not much for a pool-critical device. Unlike HDDs, which degrade gradually, SSDs tend to die suddenly with no advance warning. And some high-write duties (L2ARC, and SLOG if used at all), which in themselves are not pool-critical, could cause symmetrical high wear on your pair of pool-critical SSDs.

One of my NAS currently has a 900p Optane serving double duty as SLOG and persistent metadata L2ARC (for dedup… a misguided experiment), none of which is critical.
But I would personally NEVER mix special/dedup vdev duties with anything else.

I would advise to:
  • Get anything cheap and small for boot (no need to mirror).
  • Consider whether you actually have a use for a SLOG (i.e. mandatory sync writes).
  • Get a third drive if you want to proceed with a pool critical special vdev.
  • Otherwise, use a single drive as (possibly persistent) L2ARC provided you have enough RAM (need at least 192 GB RAM to support a 1.92 TB L2ARC!) AND yet there are still misses in ARC (as reported by arc_summary; L2ARC is of no use if hit rate is >99%).
But in the end it's your data.
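
If you want to check the last point before spending money, the raw ARC counters are available as kstats (a sketch; on SCALE the same numbers live in /proc/spl/kstat/zfs/arcstats):
Code:
# FreeBSD / TrueNAS CORE
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
# hit rate = hits / (hits + misses); an L2ARC only helps if there are real misses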
 

Tony-1971

Contributor
Etorix said:
If you don't know the answer to these questions, you should not consider partitioning drives. Period. [...]

Namespaces are not partitions, as far as I understand NVMe drives.
In TrueNAS (Core and SCALE) I can use namespaces from the web interface, but I cannot create, delete, etc. namespaces there (for that I have to use the command line).

I'm using two NVMe drives (each with only one namespace) as a metadata vdev.

The only thing I don't understand about metadata vdevs is under what conditions you can safely remove a functioning metadata vdev from a pool.

In my case I have a pool of 4 HDDs in raidz1 with a mirrored metadata vdev. I think there is a rule about having the same "resilience" across vdevs: in my case the pool can keep working with one drive missing from each vdev:
Code:
root@tn-xeond[~]# zpool list -v tank-big
NAME                                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank-big                                  58.7T  49.4T  9.21T        -         -    16%    84%  1.00x    ONLINE  /mnt
  raidz1-0                                58.2T  49.4T  8.79T        -         -    16%  84.9%      -    ONLINE
    9386a7b5-7cf8-4e9f-abd8-0b3646d93397  14.6T      -      -        -         -      -      -      -    ONLINE
    595ecabd-287b-4141-bc69-108465b30e93  14.6T      -      -        -         -      -      -      -    ONLINE
    e7aa7037-9b56-4294-94ef-8ff719641a5a  14.6T      -      -        -         -      -      -      -    ONLINE
    3cec06bf-7f44-404a-aebe-66e8167a78d9  14.6T      -      -        -         -      -      -      -    ONLINE
special                                       -      -      -        -         -      -      -      -         -
  mirror-1                                 476G  43.3G   433G        -         -    59%  9.10%      -    ONLINE
    efdef78e-6c2f-426a-a8c6-580613fc9a4a   477G      -      -        -         -      -      -      -    ONLINE
    7db1a718-e022-4f1c-9de4-23692ba2c1dc   477G      -      -        -         -      -      -      -    ONLINE
logs                                          -      -      -        -         -      -      -      -         -
  eb4c09d5-adf6-478b-9b39-3cf0c7ab5d6b    64.0G   936K  63.5G        -         -     0%  0.00%      -    ONLINE
cache                                         -      -      -        -         -      -      -      -         -
  nvme0n2p1                                384G   384G   470M        -         -     0%  99.9%      -    ONLINE


Best Regards,
Antonio
 

Etorix

Wizard
Tony-1971 said:
Namespaces are not partitions, as far as I understand NVMe drives.
Unless TrueNAS sees namespaces as completely different drives, it won't change the behaviour: Not supported by the middleware; you're on your own…

Tony-1971 said:
The only thing I don't understand about metadata vdevs is under what conditions you can safely remove a functioning metadata vdev from a pool.
Same as for any data vdev: you can remove it if and only if the pool comprises only mirrors.
With raidz1 you're toast: to remove the special vdev you'd have to destroy the pool. You can increase resiliency by going for a 3-way mirror, though the weakest part is likely the raidz1 of large drives.
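
Using Antonio's pool above as the example, the attempt would look like this (a sketch only; on that pool it should simply be refused because of the raidz1 data vdev):
Code:
# a special vdev is removed by naming its top-level vdev
root@tn-xeond[~]# zpool remove tank-big mirror-1
# expected to fail here: device removal requires mirror/single-disk top-level
# data vdevs (and matching ashift)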
 

Tony-1971

Contributor
benda said:
I got the 2x 1.92 TB for 179 € and they are on the way. [...] I will use this guide.

This is exactly the same guide that I was using.
 

Tony-1971

Contributor
Joined
Oct 1, 2016
Messages
147
Etorix said:
Unless TrueNAS sees namespaces as completely different drives, it won't change the behaviour: Not supported by the middleware; you're on your own…
TrueNAS SCALE does see namespaces as completely different drives. The first three drives here are the 3 namespaces:
[Screenshot: TrueNAS SCALE disk list; the first three entries are the 3 namespaces]

Also, the serials (not shown in the screenshot) are the same.
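
A quick way to double-check this on SCALE (a sketch; the device paths match my Intel drive with its 3 namespaces):
Code:
root@tn-xeond[~]# lsblk -d -o NAME,MODEL,SERIAL /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3
# all three namespaces report the controller serial; the per-namespace IDs
# (nguid / eui64) can be read with "nvme id-ns /dev/nvme0n1"
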
Core is different: sometimes it shows them correctly, sometimes not.
Right now I can't see all of them (only the last one, nvd4):
[Screenshot: TrueNAS Core disk list showing only nvd4]

But the pool seems OK:
[Screenshot: the pool reported as healthy in TrueNAS Core]

To confirm that the pool is working, I ran some tests on a zvol from this pool exported via iSCSI:
[Screenshot: performance test results on the iSCSI zvol]
 