UPGRADE from Core to Scale

darksoul

Dabbler
Joined
Dec 1, 2020
Messages
11
When trying to update to SCALE from CORE 13.0-U2, I get the errors below.
Notes: I have downloaded the update twice.
The server is fully patched to 13.0-U2.
The server was rebooted after patching, and again before trying to patch with the second download.
I have tried using the temp storage and my pool called TANK (/mnt/tank). I always get the same errors.
On a second attempt using the temp storage I get "the /var/tmp/firmware drive is full". I am at a loss; generally my upgrades are flawless. Is there something I'm missing that I need to do first?


Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 355, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 391, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 975, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/update.py", line 389, in file
    await self.middleware.call('update.install_manual_impl', job, destfile, dest_extracted)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1278, in call
    return await self._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1246, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1151, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/local/lib/python3.9/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/update_/install_freebsd.py", line 66, in install_manual_impl
    return self._install_scale(job, path)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/update_/install_freebsd.py", line 86, in _install_scale
    return self.middleware.call_sync(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1305, in call_sync
    return methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/update_/install.py", line 29, in install_scale
    our_checksum = subprocess.run(["sha1", os.path.join(mounted, file)], **run_kw).stdout.split()[-1]
  File "/usr/local/lib/python3.9/subprocess.py", line 528, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['sha1', '/var/tmp/firmware/squashfs-root/rootfs.squashfs']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 359, in run
    raise handled
middlewared.service_exception.CallError: [EFAULT] Command sha1 /var/tmp/firmware/squashfs-root/rootfs.squashfs failed (code 1):

sha1: /var/tmp/firmware/squashfs-root/rootfs.squashfs: No such file or directory
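
For reference, the failing check can be reproduced from a root shell (a sketch; the path is the one from the traceback above):

# The same check the middleware runs during the manual update
sha1 /var/tmp/firmware/squashfs-root/rootfs.squashfs
# In my case the extracted rootfs isn't there at all
ls -lh /var/tmp/firmware/squashfs-root/
# Staging filesystem the installer extracts the SCALE image into
df -h /var/tmp/firmware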
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
When trying to update to SCALE from CORE 13.0-U2, I get the errors below.
Notes: I have downloaded the update twice.
The server is fully patched to 13.0-U2.
The server was rebooted after patching, and again before trying to patch with the second download.
I have tried using the temp storage and my pool called TANK (/mnt/tank). I always get the same errors.
On a second attempt using the temp storage I get "the /var/tmp/firmware drive is full". I am at a loss; generally my upgrades are flawless. Is there something I'm missing that I need to do first?

It's probably not a hardware issue, but for completeness, can you document the hardware setup and the services/VMs/Apps that you are using?
 

darksoul

Dabbler
Joined
Dec 1, 2020
Messages
11
Supermicro 2U server, dual Xeon X5670 procs, 84GB ECC memory, 10 SAS 4TB 7200 RPM drives in a RAIDZ1 config + 1 hot spare, and 1 NVMe partitioned for ZIL and logs.
The boot disk is a 128GB SSD, with an Intel 4-port PCI NIC card.

I have 1 LAGG using the onboard NICs; of the other 4 ports, only em2 is being used, for my 3 jails.

Services enabled
iSCSI (not in use anymore)
NFS
Rsync
SMART
SMB
SNMP
SSH
TFTP

I have 3 jails (yes, I know they will be lost) running SABnzbd, Sonarr, and Radarr.
I have 1 pool other than boot,
no zvols,
and 6 datasets:
domain share
docker mnt (NFS storage for Docker, running on an external hypervisor)
iocage (for jails)
ISO (for ISO images, not really used anymore)
backups (backups for the Hyper-V server)
users (remote mount for domain users)

I have 1 rsync task that uploads my backups to Google Drive,
and a couple of cron jobs that clean up my backups.

And that's about it. I don't use VMs because bhyve is atrocious and fails constantly when I try to use it. I have almost 0 CPU use and 74.2GB of memory free, with 9GB used by services.

There is nothing incompatible about my hardware. I understand I will no longer have access to my jails, which is fine; I will recreate them with Docker. Honestly, the only thing I care about is the pool tank.

truenas system info.png

zpool-stat.png
 

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,041
Honestly, all you should care about is your data stored in the tank pool. For the rest, start from scratch with a new SCALE install and import your pool. And use the TrueCharts apps. Also, you're playing with fire using RAIDZ1 on a 10-disk pool; you should definitely run RAIDZ2. Wait until the day comes when you're resilvering one disk and another disk fails: you've lost all your data.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Supermicro 2U server, dual Xeon X5670 procs, 84GB ECC memory, 10 SAS 4TB 7200 RPM drives in a RAIDZ1 config + 1 hot spare, and 1 NVMe partitioned for ZIL and logs.
The boot disk is a 128GB SSD, with an Intel 4-port PCI NIC card.

I have 1 LAGG using the onboard NICs; of the other 4 ports, only em2 is being used, for my 3 jails.
Looking for anything unusual:
84GB??? What's the config?
A partitioned NVMe drive?

If the fresh install suggestion doesn't work... please report-a-bug
 

darksoul

Dabbler
Joined
Dec 1, 2020
Messages
11
I'm not sure what your question is.
Yes, I have 84GB of ECC RAM.
I have a 128GB NVMe partitioned into two 64GB chunks; I use one chunk for the ZIL and one for logs.

What do you mean, what's the config?
 

GastonJ

Dabbler
Joined
Feb 2, 2022
Messages
15
I have this same issue. It looks as though /var/tmp/firmware isn't large enough to extract the upgrade file

manualupdate.tar

to do anything with it.

/dev/label/updatemdu 2.6G 8.0K 2.4G 0% /var/tmp/firmware

Monitoring that while running the upgrade from the GUI, it runs out of space:

Oct 23 14:07:16 truenas kernel: pid 2920 (unsquashfs), uid 0 inumber 160258 on /var/tmp/firmware: filesystem full
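
For anyone who wants to watch the same thing, this is roughly how to monitor the staging space while the GUI upgrade runs (a minimal sketch, assuming a root shell; the mount point is the one from the df output above):

# Print the staging filesystem's usage every few seconds during the upgrade
while true; do
    df -h /var/tmp/firmware
    sleep 5
done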

Is it possible to increase the size of /var/tmp/firmware ?

Cheers
 

GastonJ

Dabbler
Joined
Feb 2, 2022
Messages
15
I ended up temporarily unmounting the filesystem

/var/tmp/firmware

This allowed it to use the space on the /var filesystem during the upgrade. It had 5GB free.
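
For reference, roughly what that looks like from a shell (a sketch; make sure /var has enough free space first, since the extraction lands there once the dedicated mount is gone):

# Check that /var actually has room for the extracted update (it needed roughly 5GB here)
df -h /var
# Drop the small dedicated staging filesystem; writes to /var/tmp/firmware then land on /var
umount -f /var/tmp/firmware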

All now upgraded to Scale and working. Hope this helps someone else.

Cheers
 

Spirix

Cadet
Joined
Jul 2, 2017
Messages
1
I ended up temporarily unmounting the filesystem

/var/tmp/firmware

This allowed it to use the space on the /var filesystem during the upgrade. It had 5GB free.

All now upgraded to Scale and working. Hope this helps someone else.

Cheers
This worked like a charm.
umount -f /var/tmp/firmware
is the command I used. Thanks.
 

pmsan

Cadet
Joined
Feb 17, 2023
Messages
8
I had the same problem, and the umount solution described here did not work for me. My filesystem also reported that /var/tmp/firmware was full, but a umount did not help.

I then tried to just change the train in the dropdown on the update page: I changed from TrueNAS CORE 13 to TrueNAS-SCALE-Bluefin.
I held my breath, and then it worked for me too.
 