Help needed: unable to mount disks from a previous FreeNAS server on a new TrueNAS server

dbucher

Cadet
Joined: Aug 19, 2023
Messages: 1
Hello,

After trying dozens of commands and reading roughly 80 web pages and forum threads, I am still unable to simply open my FreeNAS disks under TrueNAS, and it is driving me crazy! Here is all the information:

1. The disks are 100% OK. (I reconnected them to the previous FreeNAS VM and they work perfectly again.)
2. Under the previous FreeNAS I don't understand AT ALL how they are mounted, because there is nothing in /etc/fstab.
3. I can't find where they are configured in the web interface.
4. But they do work on the FreeNAS system (under the names Storage1401 and Storage1801).

Could someone help me discover how they are configured, and then how to open them under TrueNAS? (I would like to migrate.)

Thanks a lot in advance for any help or hint !

SOLVED! With all the information below, I still don't fully understand how it works, but the disks seem to be used as "Pools". By going to "Pools > Import" I was able to import them into the new system. Everything is OK now! I am leaving my message here in case it can help other people.
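(For reference, the command-line equivalent would be roughly the following. This is only a sketch using my pool names; on TrueNAS the web interface import is the recommended route, since it also records the pools in the system configuration.)

# zpool import                   (lists pools that are available for import)
# zpool import -f Storage1401    (-f may be needed if the pool was last used on the old host)
# zpool import -f Storage1801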

Denis

P.S. Under the old system:

# cat /etc/fstab
freenas-boot/grub /boot/grub zfs rw,noatime 1 0
/dev/da1p1.eli none swap sw 0 0
/dev/da2p1.eli none swap sw 0 0

# mount|grep Sto
Storage1401 on /mnt/Storage1401 (zfs, local, nfsv4acls)
Storage1801 on /mnt/Storage1801 (zfs, local, nfsv4acls)
Storage1401/.system on /var/db/system (zfs, local, nfsv4acls)
Storage1401/.system/cores on /var/db/system/cores (zfs, local, nfsv4acls)
Storage1401/.system/samba4 on /var/db/system/samba4 (zfs, local, nfsv4acls)
Storage1401/.system/syslog-74564c350ac445f98d2696edfb9a82ed on /var/db/system/syslog-74564c350ac445f98d2696edfb9a82ed (zfs, local, nfsv4acls)
Storage1401/.system/rrd-74564c350ac445f98d2696edfb9a82ed on /var/db/system/rrd-74564c350ac445f98d2696edfb9a82ed (zfs, local, nfsv4acls)
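(Note added after solving: these are ZFS dataset mounts, which is why /etc/fstab is empty; each dataset stores its own mountpoint as a ZFS property. Something like the following should confirm it, though I haven't rechecked the exact output format:)

# zfs get mountpoint Storage1401 Storage1801    (shows where each pool's root dataset mounts)
# zfs list -o name,mountpoint                   (the same for every dataset on the system)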

# blkid
(empty)

# dmesg | grep da1
da1 at mpt0 bus 0 scbus2 target 1 lun 0
da1: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device
da1: 320.000MB/s transfers (160.000MHz, offset 127, 16bit)
da1: Command Queueing enabled
da1: 9437184MB (19327352832 512 byte sectors: 255H 63S/T 1203072C)
GEOM_ELI: Device da1p1.eli created.

# dmesg | grep da2
da2 at mpt0 bus 0 scbus2 target 2 lun 0
da2: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device
da2: 320.000MB/s transfers (160.000MHz, offset 127, 16bit)
da2: Command Queueing enabled
da2: 19046400MB (39007027200 512 byte sectors: 255H 63S/T 2428075C)
GEOM_ELI: Device da2p1.eli created.

# fdisk /dev/da1
******* Working on device /dev/da1 *******
parameters extracted from in-core disklabel are:
cylinders=1203072 heads=255 sectors/track=63 (16065 blks/cyl)

Figures below won't work with BIOS for partitions not in cyl 1
parameters to be used for BIOS calculations are:
cylinders=1203072 heads=255 sectors/track=63 (16065 blks/cyl)

Media sector size is 512
Warning: BIOS sector numbering starts with sector 1
Information from DOS bootblock is:
The data for partition 1 is:
sysid 238 (0xee),(EFI GPT)
start 1, size 4294967295 (2097151 Meg), flag 0
beg: cyl 0/ head 0/ sector 2;
end: cyl 1023/ head 255/ sector 63
The data for partition 2 is:
<UNUSED>
The data for partition 3 is:
<UNUSED>
The data for partition 4 is:
<UNUSED>

# fdisk /dev/da2
******* Working on device /dev/da2 *******
parameters extracted from in-core disklabel are:
cylinders=2428075 heads=255 sectors/track=63 (16065 blks/cyl)

Figures below won't work with BIOS for partitions not in cyl 1
parameters to be used for BIOS calculations are:
cylinders=2428075 heads=255 sectors/track=63 (16065 blks/cyl)

Media sector size is 512
Warning: BIOS sector numbering starts with sector 1
Information from DOS bootblock is:
The data for partition 1 is:
sysid 238 (0xee),(EFI GPT)
start 1, size 4294967295 (2097151 Meg), flag 0
beg: cyl 0/ head 0/ sector 2;
end: cyl 1023/ head 255/ sector 63
The data for partition 2 is:
<UNUSED>
The data for partition 3 is:
<UNUSED>
The data for partition 4 is:
<UNUSED>

# geom disk list da1
Geom name: da1
Providers:
1. Name: da1
Mediasize: 9895604649984 (9.0T)
Sectorsize: 512
Mode: r2w2e5
descr: VMware Virtual disk
ident: (null)
fwsectors: 63
fwheads: 255

# geom disk list da2
Geom name: da2
Providers:
1. Name: da2
Mediasize: 19971597926400 (18T)
Sectorsize: 512
Mode: r2w2e5
descr: VMware Virtual disk
ident: (null)
fwsectors: 63
fwheads: 255

# gpart show da1
=> 34 19327352765 da1 GPT (9.0T)
34 94 - free - (47k)
128 4194304 1 freebsd-swap (2.0G)
4194432 19323158360 2 freebsd-zfs (9T)
19327352792 7 - free - (3.5k)

# gpart show da2
=> 34 39007027133 da2 GPT (18T)
34 94 - free - (47k)
128 4194304 1 freebsd-swap (2.0G)
4194432 39002832728 2 freebsd-zfs (18T)
39007027160 7 - free - (3.5k)


# camcontrol devlist
<NECVMWar VMware IDE CDR10 1.00> at scbus1 target 0 lun 0 (cd0,pass0)
<VMware Virtual disk 1.0> at scbus2 target 0 lun 0 (da0,pass1)
<VMware Virtual disk 1.0> at scbus2 target 1 lun 0 (da1,pass2)
<VMware Virtual disk 1.0> at scbus2 target 2 lun 0 (da2,pass3)

# file -s /dev/da1p2
/dev/da1p2: data

# file -s /dev/da2p2
/dev/da2p2: data


# zpool status -v
pool: Storage1401
state: ONLINE
scan: scrub repaired 0 in 0h59m with 0 errors on Sun Jul 16 00:59:46 2023
config:

NAME STATE READ WRITE CKSUM
Storage1401 ONLINE 0 0 0
gptid/98db4062-273d-11e5-89a1-000c29725dcf ONLINE 0 0 0

errors: No known data errors

pool: Storage1801
state: ONLINE
scan: scrub repaired 0 in 13h6m with 0 errors on Sun Jul 23 13:06:49 2023
config:

NAME STATE READ WRITE CKSUM
Storage1801 ONLINE 0 0 0
gptid/21d20321-141f-11e8-84c5-000c29725dcf ONLINE 0 0 0

errors: No known data errors

pool: freenas-boot
state: ONLINE
scan: scrub repaired 0 in 0h0m with 0 errors on Fri Aug 4 03:45:08 2023
config:

NAME STATE READ WRITE CKSUM
freenas-boot ONLINE 0 0 0
da0p2 ONLINE 0 0 0

errors: No known data errors


# glabel status
Name Status Components
gptid/b278836b-a8ae-11e4-b7b7-000c29725dcf N/A da0p1
gptid/98db4062-273d-11e5-89a1-000c29725dcf N/A da1p2
gptid/21d20321-141f-11e8-84c5-000c29725dcf N/A da2p2

# ls -la /dev/gptid
total 1
dr-xr-xr-x 2 root wheel 512 Aug 19 19:12 ./
dr-xr-xr-x 10 root wheel 512 Aug 19 19:12 ../
crw-r----- 1 root operator 0x69 Aug 19 19:12 21d20321-141f-11e8-84c5-000c29725dcf
crw-r----- 1 root operator 0x67 Aug 19 19:12 98db4062-273d-11e5-89a1-000c29725dcf
crw-r----- 1 root operator 0x60 Aug 19 19:12 b278836b-a8ae-11e4-b7b7-000c29725dcf

# zpool get version
NAME PROPERTY VALUE SOURCE
Storage1401 version - default
Storage1801 version - default
 

danb35

Hall of Famer
Joined: Aug 16, 2011
Messages: 15,504
I still don't fully understand how it works, but the disks seem to be used as "Pools".
Yes. TrueNAS uses ZFS, and storage volumes in ZFS are called pools. Strongly recommend you do some reading about it before you get yourself into a dangerous configuration. Some starting points:
...and Uncle Fester's guide, linked in my signature.
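To see the same idea from the shell, something along these lines should work (a sketch only; adjust the pool name, and the available properties may differ slightly between versions):

# zpool list -o name,size,health               (the pools, i.e. the storage volumes)
# zfs list -r -o name,mountpoint Storage1401   (the datasets inside a pool and where they mount; no /etc/fstab involved)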
 