database sqlite3 [EFAULT] The uploaded file is not valid: file is not a database. how to convert?

spyroot

Cadet
Joined
Aug 29, 2020
Messages
9
Hi Folks,

I haven't touched my NAS in 2-3 years : ) and uptime was 900-something days. Last night I upgraded FreeNAS from 10 to 11.4, mostly for the more recent NFS code.

I haven't touched my pool or anything else since the install. The GUI shows the pool, but not iSCSI, etc.

I'm trying to load a saved config, but it isn't accepted:
"[EFAULT] The uploaded file is not valid: file is not a database"

I've uploaded the DB file directly to the device so I could check it with the matching sqlite3 version:
root@nas:/tmp # sqlite3 /tmp/freenas10_current.db "pragma integrity_check"
Error: file is not a database
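
To double-check what kind of file it actually is, the header gives it away, since a valid SQLite 3 database always starts with the 16-byte magic string "SQLite format 3". Same path as above:

root@nas:/tmp # file /tmp/freenas10_current.db
root@nas:/tmp # head -c 16 /tmp/freenas10_current.db

Here file(1) reports text rather than an SQLite database, and head shows the start of a JSON document instead of the magic string.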

I've checked the file, and it looks like a JSON file. Do I need to convert it to some intermediate representation so FreeNAS will accept it, and if so, what are the steps?
The docs mostly say to take the file and upload it, but that doesn't work out of the box.

Can someone recommend troubleshooting steps? Do I need to convert the file?
(The last time I looked at sqlite3 was 10 years back, and it was a binary format.) I guess FreeNAS 10 stores it as JSON.

Thank you very much,
MB>
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
FreeNAS 10 (Corral) was abandoned. 11 only takes configs from 9, sorry.
 

spyroot

Cadet
Joined
Aug 29, 2020
Messages
9
Could you please recommend a migration strategy, since I already upgraded?
Can I downgrade and then go up the upgrade chain release by release?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Look under System->Boot, and pull down the 3 dots to activate your previous 10 installation. Reboot back into that.

The only migration strategy from 10 is a clean install followed by manually configuring all your settings again, so once you're back in 10, go through your entire configuration and save or print screenshots.
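
If you prefer the shell, the same switch should be doable with beadm; the boot environment name below is just a placeholder, use whatever beadm list shows for your old 10 install:

root@nas:~ # beadm list
root@nas:~ # beadm activate <your-10-boot-environment>
root@nas:~ # reboot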
 

spyroot

Cadet
Joined
Aug 29, 2020
Messages
9
Just to update: I guess I can re-create things, since both pools are healthy. The only question I have is how to re-create the iSCSI shares.
I see a JSON file in each pool.

For example, the old share-iscsi:

{"type": "iscsi", "enabled": true, "target_type": "ZVOL", "properties": {"%type": "share-iscsi", "size": 2199023255552, "serial": "ac1f6b05468700", "block_size": 512, "physical_block_size": true, "tpc": false, "vendor_id": null, "product_id": null, "device_id": null, "rpm": "SSD", "read_only": false, "xen_compat": false, "naa": "0x6589cfc000000b76db147f6020b6e757"}, "name": "iscsishare", "target_path": "share/vms", "immutable": false, "description": "", "updated_at": {"$date": "2017-03-15 00:41:02.932000"}, "created_at": {"$date": "2017-03-15 00:41:02.932000"}, "id": "78458a0e-4f2c-4320-b089-27cec31d80ca"}

and I don't see a share/vms mount point:

sharessd /mnt/sharessd
sharessd/.system /var/db/system
sharessd/.system/cores /var/db/system/cores
sharessd/.system/samba4 /var/db/system/samba4
sharessd/.system/syslog-76c11d7f8a944b3d8e42fe35420dbaa3 /var/db/system/syslog-76c11d7f8a944b3d8e42fe35420dbaa3
sharessd/.system/rrd-76c11d7f8a944b3d8e42fe35420dbaa3 /var/db/system/rrd-76c11d7f8a944b3d8e42fe35420dbaa3
sharessd/.system/configs-76c11d7f8a944b3d8e42fe35420dbaa3 /var/db/system/configs-76c11d7f8a944b3d8e42fe35420dbaa3
sharessd/.system/webui /var/db/system/webui
share /mnt/share
share/smb /mnt/share/smb
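
Or do zvols even show up in the mount list? If they're block devices, I'd guess listing volumes directly is the right way to check whether share/vms still exists:

root@nas:~ # zfs list -t volume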
 

spyroot

Cadet
Joined
Aug 29, 2020
Messages
9
Samuel Tai, thank you very much for your help, I really appreciate it. I think I only need to restore the iSCSI config, since the large pool is just SMB shares, so that's not a problem.

I had two iSCSI shares, one on SSD and one on a disk, but I don't see a vms directory for either of them:

{"type": "iscsi", "enabled": true, "description": "", "properties": {"%type": "share-iscsi", "size": 2199023255552, "block_size": 512, "physical_block_size": false, "tpc": false, "xen_compat": false, "rpm": "SSD", "read_only": false, "serial": "ac1f6b05468701", "vendor_id": null, "product_id": null, "device_id": null, "naa": "0x6589cfc000000d525c6a26022c8e31fa"}, "target_type": "ZVOL", "target_path": "sharessd/vms", "name": "vms", "permissions": {"user": "root", "group": "wheel"}, "immutable": false, "updated_at": {"$date": "2017-03-15 01:34:15.460000"}, "created_at": {"$date": "2017-03-15 01:34:15.460000"}, "id": "e85cbda4-289c-463f-8217-64e8915334f7"}
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Yes, this is because your zvols don't actually reside in /mnt, but in /dev/zvol/share/vms and /dev/zvol/sharessd/vms. They only appear under their respective pools in the GUI.

I can't tell you if the installer correctly migrated them there or if they were removed by the upgrade.
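
A quick check is to look for the device nodes directly:

root@nas:~ # ls -l /dev/zvol/share/vms /dev/zvol/sharessd/vms

If those nodes exist, the zvols and their data survived, and only the share definitions need to be re-created.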
 

spyroot

Cadet
Joined
Aug 29, 2020
Messages
9
I managed to restore both. So, for other folks like me:

Make sure you check the dotfiles in each zpool: each one holds a JSON file, and each JSON file has the old serial number. Based on the JSON you can re-create the old iSCSI LUN with the same serial, etc.

For example, in my case (note that the dotfiles are located in each dataset):

root@nas:/mnt/share # ls -l /mnt/share/.con*
-rw------- 1 root wheel 613 Mar 14 2017 /mnt/share/.config-iscsi-iscsishare.json
-rw------- 1 root wheel 491 Mar 20 2017 /mnt/share/.config-nfs-nfsshare.json
-rw------- 1 root wheel 410 Mar 23 2017 /mnt/share/.config-webdav-webshare.json

root@nas:/mnt/share # ls -l /mnt/sharessd/.c*
-rw------- 1 root wheel 661 Mar 14 2017 /mnt/sharessd/.config-iscsi-vms.json

The content looks like this:

{"type": "iscsi", "enabled": true, "target_type": "ZVOL", "properties": {"%type": "share-iscsi", "size": 2199023255552, "serial": "ac1f6b05468700", "block_size": 512, "physical_block_size": true, "tpc": false, "vendor_id": null, "product_id": null, "device_id": null, "rpm": "SSD", "read_only": false, "xen_compat": false, "naa": "0x6589cfc000000b76db147f6020b6e757"}, "name": "iscsishare", "target_path": "share/vms", "immutable": false, "description": "", "updated_at": {"$date": "2017-03-15 00:41:02.932000"}, "created_at": {"$date": "2017-03-15 00:41:02.932000"}, "id": "78458a0e-4f2c-4320-b089-27cec31d80ca"}

Note the NAA (you might find it recorded on the client side if you didn't save it anywhere).

The only part I didn't get: I have two zpools, and the first one was detected automatically, but the second I had to import manually.
Note: I use block storage for ESXi, and ESXi keeps track of signatures, so in other setups you might hit issues.

The NAA will be regenerated, so it might cause issues for some iSCSI implementations.
In my case, I set the serial manually: ac1f6b05468700.
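
To verify the re-created LUN actually presents the old serial and NAA before reconnecting clients, ctladm on the NAS side can list what CTL is exporting (my understanding, not from the docs):

root@nas:~ # ctladm devlist -v

The verbose listing includes each LUN's serial number and device ID, so you can compare them against the values saved in the JSON.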
 