Unable to create pool - invalid continuation byte

Steve_18

Cadet
Joined
Feb 20, 2021
Messages
1
Hello,
so, I recently installed FreeNAS on an old computer. The server works as it should; however, I can't create a pool. Whatever I do, it always says:

'utf-8' codec can't decode byte 0xc3 in position 19: invalid continuation byte

Here is the full error message:

Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/local/lib/python3.8/concurrent/futures/process.py", line 239, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/usr/local/lib/python3.8/site-packages/middlewared/worker.py", line 91, in main_worker
res = MIDDLEWARE._run(*call_args)
File "/usr/local/lib/python3.8/site-packages/middlewared/worker.py", line 45, in _run
return self._call(name, serviceobj, methodobj, args, job=job)
File "/usr/local/lib/python3.8/site-packages/middlewared/worker.py", line 39, in _call
return methodobj(*params)
File "/usr/local/lib/python3.8/site-packages/middlewared/worker.py", line 39, in _call
return methodobj(*params)
File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 977, in nf
return f(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/zfs.py", line 111, in do_create
zfs.create(data['name'], topology, data['options'], data['fsoptions'])
File "libzfs.pyx", line 391, in libzfs.ZFS.__exit__
File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/zfs.py", line 111, in do_create
zfs.create(data['name'], topology, data['options'], data['fsoptions'])
File "libzfs.pyx", line 1294, in libzfs.ZFS.create
File "libzfs.pyx", line 977, in libzfs.ZFS.errstr.__get__
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc3 in position 19: invalid continuation byte
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/middlewared/job.py", line 367, in run
await self.future
File "/usr/local/lib/python3.8/site-packages/middlewared/job.py", line 403, in __run_body
rv = await self.method(*([self] + args))
File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 973, in nf
return await f(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool.py", line 764, in do_create
raise e
File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool.py", line 701, in do_create
z_pool = await self.middleware.call('zfs.pool.create', {
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1238, in call
return await self._call(
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1195, in _call
return await methodobj(*prepared_call.args)
File "/usr/local/lib/python3.8/site-packages/middlewared/service.py", line 455, in create
rv = await self.middleware._call(
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1203, in _call
return await self._call_worker(name, *prepared_call.args)
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1209, in _call_worker
return await self.run_in_proc(main_worker, name, args, job)
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1136, in run_in_proc
return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1110, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc3 in position 19: invalid continuation byte

Is there a reasonable solution for this?

Thanks
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
It's helpful if you list:
  • Complete hardware
  • Which TrueNAS OS, (Core or Scale), and which version
  • Configuration, (where you are installing the OS)
There are some odd things. Some larger hard drives, (>2TB if I remember correctly), had trouble with or outright did not work on SAS 1 controllers.
 

Z_BeTa

Cadet
Joined
May 5, 2021
Messages
3
Hi,
I have exactly the same issue.
I built my NAS with:
- Fractal Design Node 304 case
- Intel Core i5-9600K
- Asus Prime H310I-Plus R2.0/CSM motherboard
- Kingston 16 GB 2400 MHz DDR4 CL15 DIMM
- Samsung 970 EVO Plus NVMe M.2 250 GB for the OS
- Corsair CX550M 550 W PSU
- Noctua NH-L9x65 cooler
- 1x Seagate IronWolf Pro 16 TB (I'd like to add more in the future)
I hope someone can help me.
Thanks in advance
 

Z_BeTa

Cadet
Joined
May 5, 2021
Messages
3
It came when I wanted to create a pool. I have one disk for the moment and I wanted to create a vdev. TrueNAS told me it wasn't a good idea (no redundancy if the disk fails), but I confirmed that I wanted to do it. After that, TrueNAS started to create the pool, but this error appeared.
 

Z_BeTa

Cadet
Joined
May 5, 2021
Messages
3
The version of my TrueNAS is:
TrueNAS CORE 12.0-U3.1
 

appliance

Explorer
Joined
Nov 6, 2019
Messages
96
Happened with a 2 TB SSD on the internal [AMD] FCH SATA controller, in TrueNAS-SCALE-22.02.0.1.
Code:
Job <bound method returns.<locals>.returns_internal.<locals>.nf of <middlewared.plugins.pool.PoolService object at 0x7fc410ed2fd0>> failed
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 423, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 459, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1129, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1261, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool.py", line 749, in do_create
    await self.middleware.call('pool.format_disks', job, disks)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1318, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1275, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/format_disks.py", line 18, in format_disks
    await self.middleware.call('disk.sed_unlock_all')
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1318, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1275, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/disk.py", line 323, in sed_unlock_all
    result = await asyncio_map(lambda disk: self.sed_unlock(disk['name'], disk, advconfig), disks, 16)
  File "/usr/lib/python3/dist-packages/middlewared/utils/asyncio_.py", line 16, in asyncio_map
    return await asyncio.gather(*futures)
  File "/usr/lib/python3/dist-packages/middlewared/utils/asyncio_.py", line 13, in func
    return await real_func(arg)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/disk.py", line 385, in sed_unlock
    locked, unlocked = await self.middleware.call('disk.unlock_ata_security', devname, _advconfig, password)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1318, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1275, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/disk_/sed_linux.py", line 20, in unlock_ata_security
    output = cp.stdout.decode()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb6 in position 70: invalid start byte
[2022/04/30 00:13:48] (DEBUG) EtcService.generate():428 - No new changes for /etc/glusterfs/glusterd.vol
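The paste above fails at `cp.stdout.decode()` in sed_linux.py: the drive's response contains a raw 0xb6 byte, which is not valid UTF-8, so the strict decode raises instead of returning the output. A minimal sketch of the failure mode and the usual defensive fix (the bytes here are fabricated for illustration; real ones would come from the drive):

```python
# Fabricated stdout containing the offending 0xb6 byte.
stdout = b"Security: supported \xb6 not locked"

try:
    stdout.decode()  # strict UTF-8, like the sed_linux.py call in the traceback
except UnicodeDecodeError as e:
    print(e)  # 'utf-8' codec can't decode byte 0xb6 in position 20: invalid start byte

# What middleware code could do instead of aborting the whole pool
# creation: substitute undecodable bytes with U+FFFD.
text = stdout.decode(errors="replace")
```

With `errors="replace"` the stray byte becomes a replacement character and the rest of the output survives, which is usually acceptable for log/diagnostic text like this.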
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399

Moral of the story: don't use accented letters in the pool name.
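For reference, a minimal sketch of why an accented name can produce exactly this error: libzfs hands its error string back as raw bytes, and if those bytes aren't valid UTF-8 at the point Python decodes them, the decode in `libzfs.ZFS.errstr` raises instead of reporting the real error. The pool name and encoding below are made up for illustration:

```python
# Hypothetical repro: an error message containing an accented pool name,
# encoded as Latin-1 rather than UTF-8.
msg = "cannot create 'piscine-été': invalid argument".encode("latin-1")

try:
    msg.decode("utf-8")  # strict decode, as the middleware does
except UnicodeDecodeError as e:
    print(e)  # 'utf-8' codec can't decode byte 0xe9 ... invalid continuation byte

# A tolerant decode would at least surface the message,
# at the cost of replacement characters:
print(msg.decode("utf-8", errors="replace"))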
 

appliance

Explorer
Joined
Nov 6, 2019
Messages
96
Moral of the story: don't use accented letters in the pool name.
the story is that the storage system can't create a single pool :)

For the sake of diversity, I plugged in all types of drives. There are no accented letters anywhere, but I'd expect the UI to validate that (just as I'd expect the error message to appear in the UI instead of a frozen "Fetching data..." dialog):

HDD: [EFAULT] Partition type xxxxxxx-1dd2-11b2-99a6-xxxxxxxxx not found on sdb
HDD: [EFAULT] Partition type xxxxxxx-1dd2-11b2-99a6-xxxxxxxxx not found on sdc
HDD: [EFAULT] Partition type xxxxxxx-1dd2-11b2-99a6-xxxxxxxxx not found on sdd
HDD: [EFAULT] Partition type xxxxxxx-1dd2-11b2-99a6-xxxxxxxxx not found on sde
SSD: 'utf-8' codec can't decode byte 0xd8 in position 74: invalid continuation byte
NVME: [EFAULT] Partition type xxxxxxx-1dd2-11b2-99a6-xxxxxxxxx not found on nvme0n1
 

appliance

Explorer
Joined
Nov 6, 2019
Messages
96
I got the SSD working as unencrypted only, so I proceeded, but transferring the System Dataset to it doesn't work:
[EFAULT] Unable to umount boot-pool/.system/syslog-a617fea1951f4cf0a2768efxxxx2a087: umount: /var/db/system/syslog-a617fea1951f4cf0a2768efxxxx2a087: target is busy.
After a reboot I can create the NVMe pool, but I still can't move the System Dataset. It is marked as moved, but shows very little size, so I worry the next boot will fail.
Looking at line 305 of sysdataset.py, the umount happens at the beginning of the process, so it's better to move back to the boot drive first:
Code:
if mounted_pool and mounted_pool != config['pool']:
    self.middleware.logger.debug('Abandoning dataset on %r in favor of %r', mounted_pool, config['pool'])
    async with self._release_system_dataset():
        await self.__umount(mounted_pool, config['uuid'])  # <---- fails here
        await self.__setup_datasets(config['pool'], config['uuid'])

`zfs list` shows everything is still on the boot-pool, so the next boot will look for the System Dataset on the SSD and won't find it.
Here's the list of processes holding /var/db/system/syslog-* open:

Code:
COMMAND     PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
asyncio_l   644 root   57w   REG   0,56   400973  129 /var/db/system/syslog-a617fea1951f4cf0a2768ef9d432a087/log/middlewared.log
asyncio_l   644 root   66r   REG   0,56   766391  267 /var/db/system/syslog-a617fea1951f4cf0a2768ef9d432a087/log/messages
asyncio_l   644 root   68w   REG   0,56        0  131 /var/db/system/syslog-a617fea1951f4cf0a2768ef9d432a087/log/zettarepl.log
syslog-ng  7208 root  mem    REG   0,56    16384   56 /var/db/system/syslog-a617fea1951f4cf0a2768ef9d432a087/syslog-ng.persist
syslog-ng  7208 root   10u   REG   0,56    16384   56 /var/db/system/syslog-a617fea1951f4cf0a2768ef9d432a087/syslog-ng.persist
syslog-ng  7208 root   15w   REG   0,56  3465369  537 /var/db/system/syslog-a617fea1951f4cf0a2768ef9d432a087/log/syslog
syslog-ng  7208 root   16w   REG   0,56   766391  267 /var/db/system/syslog-a617fea1951f4cf0a2768ef9d432a087/log/messages
syslog-ng  7208 root   17w   REG   0,56   193965  273 /var/db/system/syslog-a617fea1951f4cf0a2768ef9d432a087/log/cron.log
syslog-ng  7208 root   18w   REG   0,56   889902  268 /var/db/system/syslog-a617fea1951f4cf0a2768ef9d432a087/log/kern.log
syslog-ng  7208 root   19w   REG   0,56  3339713  539 /var/db/system/syslog-a617fea1951f4cf0a2768ef9d432a087/log/daemon.log
syslog-ng  7208 root   22w   REG   0,56   464136  272 /var/db/system/syslog-a617fea1951f4cf0a2768ef9d432a087/log/auth.log
syslog-ng  7208 root   23w   REG   0,56  8103468  269 /var/db/system/syslog-a617fea1951f4cf0a2768ef9d432a087/log/error
winbindd   9138 root    2w   REG   0,56      816    4 /var/db/system/syslog-a617fea1951f4cf0a2768ef9d432a087/log/samba4/log.winbindd-idmap
winbindd   9138 root   14w   REG   0,56      816    4 /var/db/system/syslog-a617fea1951f4cf0a2768ef9d432a087/log/samba4/log.winbindd-idmap
winbindd   9138 root   20w   REG   0,56        0  264 /var/db/system/syslog-a617fea1951f4cf0a2768ef9d432a087/log/samba4/auth_audit.log
python3   10173 root   57w   REG   0,56   400973  129 /var/db/system/syslog-a617fea1951f4cf0a2768ef9d432a087/log/middlewared.log


-> Let's fix it by changing the syslog location; there is such a setting. If I disable "[ ] Use System Dataset", the moving process might leave it alone (actually the setting seems to be ignored all the time).
-> Didn't work. Let's do syslog-ng-ctl stop.
-> Didn't work. Let's kill everything lsof shows on syslog* (this logs the user out of the UI; perhaps syslog is busy with the current issues 1 and 2).
-> Done, the System Dataset moved and is present in zfs list.
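The lsof-then-kill step above can be sketched in Python by walking /proc instead of parsing lsof output. This is a hypothetical helper for illustration, not TrueNAS code; it is Linux-only and needs root to see other users' file descriptors:

```python
import os

def pids_holding(path_prefix):
    """Return PIDs that have a file open under path_prefix (via /proc/<pid>/fd)."""
    pids = set()
    for pid in filter(str.isdigit, os.listdir("/proc")):
        fd_dir = f"/proc/{pid}/fd"
        try:
            fds = os.listdir(fd_dir)
        except OSError:  # process vanished, or no permission
            continue
        for fd in fds:
            try:
                target = os.readlink(f"{fd_dir}/{fd}")
            except OSError:
                continue
            if target.startswith(path_prefix):
                pids.add(int(pid))
                break
    return pids

# e.g. pids_holding("/var/db/system/syslog-") before retrying the umount
```

Each PID returned corresponds to a process that would keep the umount failing with "target is busy" until it closes the file or is stopped.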
 
Last edited: