SOLVED Unable to Create Stripe Pool with One NVMe AIC

mroptman

Dabbler
Joined
Dec 2, 2019
Messages
23
Hello all!

Taking SCALE out for a spin and encountered a big blocker: I'm unable to create a stripe pool from a single NVMe SSD (Samsung PM1735). As a test I tried the same thing (creating a stripe pool from the single drive) with an 80mm M.2 NVMe SSD (970 Evo): it works with the 970 Evo but fails with the PM1735.

Fresh install of 21.04 SCALE; additional hardware details are in my signature. I have not yet tried going in via the CLI to erase the PM1735 (it probably has existing partitions from a previous system).

Any additional help would be greatly appreciated!

Here's the stack trace received when attempting to create a stripe pool with the PM1735:
Code:
Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 378, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 414, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1001, in nf
    return await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool.py", line 673, in do_create
    formatted_disks = await self.middleware.call('pool.format_disks', job, disks)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1239, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1196, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/format_disks.py", line 56, in format_disks
    await asyncio_map(format_disk, disks.items(), limit=16)
  File "/usr/lib/python3/dist-packages/middlewared/utils/asyncio_.py", line 16, in asyncio_map
    return await asyncio.gather(*futures)
  File "/usr/lib/python3/dist-packages/middlewared/utils/asyncio_.py", line 13, in func
    return await real_func(arg)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/format_disks.py", line 32, in format_disk
    devname = await self.middleware.call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1239, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1207, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1111, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/lib/python3/dist-packages/middlewared/utils/io_thread_pool_executor.py", line 25, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/disk_/disk_info_linux.py", line 97, in gptid_from_part_type
    raise CallError(f'Partition type {part_type} not found on {disk}')
middlewared.service_exception.CallError: [EFAULT] Partition type 6a898cc3-1dd2-11b2-99a6-080020736631 not found on nvme0n1
 
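For what it's worth, once I get into the shell I assume something like this would show whatever partitions are left over on the drive (device name taken from the error above, so treat it as an assumption):
Code:
lsblk -o NAME,SIZE,FSTYPE,PARTTYPE /dev/nvme0n1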

Altamous

Cadet
Joined
May 25, 2021
Messages
1
Hey man, I had the same issue with an NVMe drive that already had a Linux distro on it.
Go into the shell, and use:
  • sudo fdisk /dev/*NVMe device* (mine was /dev/nvme0n1)
  • option d (to delete a partition)
  • *repeat if multiple partitions exist*
  • option w (to write the changes)
  • Reboot
When I repeated the above it couldn't find a partition to delete and gave me a warning to reboot. After the reboot, you should be able to create the pool.
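In case it helps, the whole session is roughly this (just a sketch; the device name and the number of partitions are assumptions, so double-check yours with lsblk first):
Code:
sudo fdisk /dev/nvme0n1
  p   (print the current partition table)
  d   (delete a partition; repeat until none are left)
  w   (write the empty table and exit)
sudo reboot

If you prefer a one-shot, non-interactive route, wipefs -a /dev/nvme0n1 or sgdisk --zap-all /dev/nvme0n1 clears the old signatures as well (again, be very sure of the device name first).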
 

Trexx

Dabbler
Joined
Apr 18, 2021
Messages
29
This is likely caused by your NVMe drive already having an MBR-based partition structure on it from prior usage. I ran into a similar error when trying to create a mirror vdev with spinning drives.

Delete the existing partitions on the drive and reformat it with a GPT/GUID partition table and a FAT32 partition. After that, TrueNAS SCALE should be happy, although I am not sure how you are going to create a stripe out of a single drive. A stripe by nature typically requires more than one device, although maybe that's different in ZFS (which I am new to).
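Something like this is what I mean, from the shell (a sketch only; the device and partition names are assumptions, so double-check them before running anything):
Code:
sudo parted /dev/nvme0n1 mklabel gpt
sudo parted /dev/nvme0n1 mkpart primary fat32 1MiB 100%
sudo mkfs.vfat /dev/nvme0n1p1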
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@Trexx - In ZFS terms, saying "striped" means non-Mirrored and non-RAID-Zx. So calling a 1-disk pool "striped" is kind of slang for "no redundancy".

Of course, even with a single disk, ZFS can support redundancy. Metadata by default has 2 copies and critical metadata has 3 copies, even on a single-disk pool. Not to mention you can use "copies=2" to get DATA redundancy on a single-disk pool. Not perfect, as loss of the entire disk means both of your "copies=2" copies are gone... but better than nothing for some uses.
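For anyone following along, setting that is a one-liner from the shell (a sketch; "tank/important" is just a made-up pool/dataset name):
Code:
zfs set copies=2 tank/important
zfs get copies tank/important

Note it only applies to data written after the property is set; existing blocks are not rewritten.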
 

Trexx

Dabbler
Joined
Apr 18, 2021
Messages
29
@Trexx - In ZFS terms, saying "striped" means non-Mirrored and non-RAID-Zx. So calling a 1-disk pool "striped" is kind of slang for "no redundancy".

Of course, even with a single disk, ZFS can support redundancy. Metadata by default has 2 copies and critical metadata has 3 copies, even on a single-disk pool. Not to mention you can use "copies=2" to get DATA redundancy on a single-disk pool. Not perfect, as loss of the entire disk means both of your "copies=2" copies are gone... but better than nothing for some uses.

Arwen - Thanks for taking the time to help educate me on the nuances of ZFS & its terminology/conventions. I am well versed in storage/SAN/NAS technology (non-ZFS), so I am used to striped drives (RAID-0, etc.) from way back.

I also appreciate your non-condescending tone as I have not always experienced that from some of the community members.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Arwen - Thanks for taking the time to help educate me on the nuances of ZFS & its terminology/conventions. I am well versed in storage/SAN/NAS technology (non-ZFS), so I am used to striped drives (RAID-0, etc.) from way back.

I also appreciate your non-condescending tone as I have not always experienced that from some of the community members.
Sure.

In some implementations of RAID-0 / stripe, the number of disks is fixed. In ZFS it's not. ZFS will select the number of disks to "stripe" the data across based on how much data there is and the number of disks at the time of the write. Meaning a 1-disk stripe will hold all the data. Then, if you add another disk (with no redundancy) for more storage, new data can be striped across both disks.
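A quick sketch of that (pool and disk names are made up):
Code:
zpool create tank nvme0n1     # 1-disk "stripe": no redundancy
zpool add tank nvme1n1        # later: new writes can spread across both disks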

Further, ZFS will automatically stripe across multiple vdevs made up of Mirrors or RAID-Zxs. But the pool would not be "called" a striped pool; it is usually referred to as a Mirror or RAID-Zx pool. (Mixing Mirror and RAID-Zx in the same pool is not recommended, though allowed.)
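For example, a pool of two mirror vdevs (hypothetical disk names again) stripes new writes across both mirrors on its own:
Code:
zpool create tank mirror sda sdb mirror sdc sdd
zpool status tank             # shows two mirror vdevs; ZFS spreads writes across them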
 

Trexx

Dabbler
Joined
Apr 18, 2021
Messages
29
Further, ZFS will automatically stripe across multiple vdevs made up of Mirrors or RAID-Zxs. But the pool would not be "called" a striped pool; it is usually referred to as a Mirror or RAID-Zx pool. (Mixing Mirror and RAID-Zx in the same pool is not recommended, though allowed.)

I played around with the Mirror Pool and Raid-Z2 pool when creating my pool.

Based on my understanding, besides the difference in capacity loss between, say, a zpool of 6 mirror vdevs vs. a single RAID-Z2 pool, the main difference is that the mirror pool will have better random/small-block (iSCSI) write performance due to the parity calculation overhead of the RAID-Z2.
 

beagle

Explorer
Joined
Jun 15, 2020
Messages
91
I played around with the Mirror Pool and Raid-Z2 pool when creating my pool.

Based on my understanding, besides the difference in capacity loss between, say, a zpool of 6 mirror vdevs vs. a single RAID-Z2 pool, the main difference is that the mirror pool will have better random/small-block (iSCSI) write performance due to the parity calculation overhead of the RAID-Z2.
There is also a difference in the level of redundancy. With a single 12-disk RAID-Z2 vdev the pool would still recover after any 2 drives fail, whilst on a 6 x 2-way-mirror vdev pool, if 2 drives in the same mirror fail it means 1 of the 6 vdevs has failed and therefore the whole pool is unrecoverable.

The data in a pool is striped over vdevs, and the reliability of your whole pool is defined by the vdev with the lowest reliability.
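To make the comparison concrete, these are the two layouts being discussed (hypothetical disk names):
Code:
# single 12-disk RAID-Z2 vdev: any 2 disks may fail
zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl

# 6 x 2-way mirror vdevs: 2 failures in the same mirror lose the pool
zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf \
                  mirror sdg sdh mirror sdi sdj mirror sdk sdl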
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
The main difference is that the mirror pool will have better random/small-block (iSCSI) write performance due to the parity calculation overhead of the RAID-Z2.
No.
Random/small-block IO gets spread over multiple vdevs, each of which delivers (about) the IOPS of a single disk.
With mirrors you have better performance from the same number of disks, because you have more vdevs.

The parity calculations are just a side note compared to the gains from running multiple vdevs.

However:
You can also run a metadata SSD (triple) mirror alongside your raidz(2/3) pool and have the best of both worlds, because you can put the contents of a dataset completely onto those SSDs or filter what lands there based on block size.
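Roughly like this (a sketch; the pool, disk and dataset names are made up, and the 3-way mirror matches the "triple" above):
Code:
zpool add tank special mirror nvme0n1 nvme1n1 nvme2n1   # metadata lands on the SSD mirror
zfs set special_small_blocks=32K tank/vms               # also send data blocks <= 32K there

Setting special_small_blocks equal to the dataset's recordsize pushes (newly written) data for that dataset entirely onto the special vdev.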

There is also a difference in the level of redundancy. With a single 12-disk RAID-Z2 vdev the pool would still recover after any 2 drives fail, whilst on a 6 x 2-way-mirror vdev pool, if 2 drives in the same mirror fail it means 1 of the 6 vdevs has failed and therefore the whole pool is unrecoverable.

Don't forget something else:
With 2 mirrored drives, if one completely dies, you have no way of repairing any damaged files that are detected during a rebuild.
 

mroptman

Dabbler
Joined
Dec 2, 2019
Messages
23
Hey man, I had the same issue with an NVMe drive that already had a Linux distro on it.
Go into the shell, and use:
  • sudo fdisk /dev/*NVMe device* (mine was /dev/nvme0n1)
  • option d (to delete a partition)
  • *repeat if multiple partitions exist*
  • option w (to write the changes)
  • Reboot
When I repeated the above it couldn't find a partition to delete and gave me a warning to reboot. After the reboot, you should be able to create the pool.

This solved the issue! Thanks so much for the help.
 