Having an issue importing a ZFS pool from OMV6

no1ninja

Dabbler
Joined
Jan 22, 2013
Messages
24
The pool imports with no problem; it is seen and comes up with perfect health.

I then try to create a dataset on it, calling it "nas", and get this message:

Error: CallError

[ENOENT] Path not found [/mnt/Media].
I have no problems making ZFS pools from scratch, but importing is fraught with error after error. I don't know what to do; that ZFS pool holds 85 TB that I need.

Also, even though the error comes up, it still creates the Media folder underneath it, but it acts as if it's just a dataset for the free space and doesn't include any of my data. The ZFS pool shows 85 TB used of 144 TB, but the new dataset shows 239 bytes of 144 TB.

Trying to share it only makes the errors worse. I am new to FreeNAS, so I am lost and don't know what to do. The first error above comes up when creating the dataset, and the dataset only shows free space and none of my data.
 

Attachments

  • 1.jpg
  • 2.jpg

no1ninja

Dabbler
Joined
Jan 22, 2013
Messages
24
[ENOENT] Path not found [/mnt/Media].
More info...
Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 181, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1266, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1169, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1288, in nf
    return func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1158, in nf
    res = f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/filesystem.py", line 597, in acl_is_trivial
    raise CallError(f'Path not found [{path}].', errno.ENOENT)
middlewared.service_exception.CallError: [ENOENT] Path not found [/mnt/Media].
 

no1ninja

Dabbler
Joined
Jan 22, 2013
Messages
24
How do I create a dataset without getting the above error? I have tried everything I can think of.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
First off, can you supply the output of the following commands in code tags?

zpool status
zpool list -v
zfs list -t all -r
zfs get mountpoint

It appears (without having the above command outputs) that you put all 84 TB in the top-level dataset. In general, it is best to have different datasets for different purposes, even if the ZFS pool is all one purpose.
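For example (just a sketch, assuming a pool named "Media" and per-purpose areas you might want; adjust the names to taste):

zfs create Media/movies
zfs create Media/music
zfs create Media/backups

Each dataset then gets its own properties (snapshots, quotas, record size, shares) while drawing from the same pool of free space.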

Next, you already have a "Media/nas" dataset.

Last, it is not recommended to have very wide RAID-Zx vDevs. The output of the second screen capture indicates you have a 17-disk-wide RAID-Z2. Performance can be spotty once the pool starts to fill up or gets fragmented. Nothing to work on today, just something you should be aware of. In general, 12 disks is considered roughly the maximum, though others may think it is lower or slightly higher.
 
Last edited:

no1ninja

Dabbler
Joined
Jan 22, 2013
Messages
24
First off, can you supply the output of the following commands in code tags?

zpool status
zpool list -v
zfs list -t all -r
zfs get mountpoint

It appears (without having the above command outputs) that you put all 84 TB in the top-level dataset. In general, it is best to have different datasets for different purposes, even if the ZFS pool is all one purpose.

Next, you already have a "Media/nas" dataset.

Last, it is not recommended to have very wide RAID-Zx vDevs. The output of the second screen capture indicates you have a 17-disk-wide RAID-Z2. Performance can be spotty once the pool starts to fill up or gets fragmented. Nothing to work on today, just something you should be aware of. In general, 12 disks is considered roughly the maximum, though others may think it is lower or slightly higher.
Thank you so much for helping me.

My TrueNAS does not seem to understand these commands. I type them in the Shell and over SSH, and in both instances it says the command is not recognized.

zpool is not understood at all, nor are any zfs commands. My previous install of OpenMediaVault understood all these commands, but the TrueNAS I installed does not. I downloaded the latest stable copy, the TrueNAS 22.12 ISO.


I have a server with 36 drives that I want to use as my media server for Plex. It runs perfectly under OMV6, but I was not happy with the performance: I have 10 GbE NICs on both ends and only get reads of 100-150 MB/s, while writes are fast, up to 1 GB/s but usually around 750 MB/s.

I have 160 GB of RAM, with 256 GB more on order.
I am also waiting for 2.5-inch caddies to mount a 4 TB L2ARC.


I am new to ZFS, so I don't know the best design practices; if you could guide me, that would be amazing.

I will have over 30 18 TB drives, and 4 x 1 TB SSDs for L2ARC.


I was going to do this in OMV6, but I read that TrueNAS has native support for ZFS, and I figured my performance problems would be fixed if I migrated to it. I can't understand why I am getting such crummy reads when there are 10 GbE cards on both ends and so many drives; the writes I have no problem with, as they are very fast.

------------------------------------------------

So, back to my issue: how do I execute the commands in TrueNAS when it does not seem to have the basic libraries and commands installed? I am new to Unix and Debian and did everything from examples on the OMV6 system. It works and I love ZFS; I just thought it was an OMV6 issue, as no matter what I did on that system, whether mounting one drive or making a ZFS pool with six drives, reads were always limited to 100-150 MB/s: 100 MB/s at first, and 150 MB/s once the data was in memory.

I am pulling my hair out with TrueNAS; it doesn't even understand zfs commands natively.
 

no1ninja

Dabbler
Joined
Jan 22, 2013
Messages
24
I also keep reading that you should do nothing in the shell or command prompt and that everything must be done in the GUI, so I am totally confused by this system when the GUI does not seem to work properly for importing my ZFS pool.
 

no1ninja

Dabbler
Joined
Jan 22, 2013
Messages
24
I would like to mount the 17-drive ZFS pool so that I can read it and copy it to another NAS; then I will rebuild with 25 18 TB drives, as 8 of the 18 TB drives will be used to hold the data as a backup. So what is the best design for 25 drives?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
There is no "best design". There is simply a set of compromises that best matches your use case.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
My TrueNAS does not seem to understand these commands. I type them in the Shell and over SSH, and in both instances it says the command is not recognized.

zpool is not understood at all, nor are any zfs commands. My previous install of OpenMediaVault understood all these commands, but the TrueNAS I installed does not. I downloaded the latest stable copy, the TrueNAS 22.12 ISO.

So, back to my issue: how do I execute the commands in TrueNAS when it does not seem to have the basic libraries and commands installed? I am new to Unix and Debian and did everything from examples on the OMV6 system. It works and I love ZFS; I just thought it was an OMV6 issue, as no matter what I did on that system, whether mounting one drive or making a ZFS pool with six drives, reads were always limited to 100-150 MB/s: 100 MB/s at first, and 150 MB/s once the data was in memory.

I am pulling my hair out with TrueNAS; it doesn't even understand zfs commands natively.
I'm going to assume you are running TrueNAS SCALE, in which case those commands won't be understood unless you are logged in to the root shell. Log in again as root, not your regular user account, and those commands should work.
 

no1ninja

Dabbler
Joined
Jan 22, 2013
Messages
24
I'm going to assume you are running TrueNAS SCALE, in which case those commands won't be understood unless you are logged in to the root shell. Log in again as root, not your regular user account, and those commands should work.
Yes, I am running TrueNAS SCALE. Should I be using CORE instead? I log in using the admin password I made during install.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Yes, I am running TrueNAS SCALE. Should I be using CORE instead? I log in using the admin password I made during install.
In that case, do sudo su - and you should gain the root shell.
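For example (a minimal sketch, assuming the default SCALE admin account over SSH):

sudo su -
zpool status
zfs list

Once the prompt changes to root@truenas, the zpool and zfs commands should be found.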

This is kinda why I hate Linux sometimes most of the time....
 

no1ninja

Dabbler
Joined
Jan 22, 2013
Messages
24
Can 2 pools of 12 drives, or 3 pools of 10 drives, be made in such a configuration that everything is mounted as one drive? Is that possible under ZFS?
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Can 2 pools of 12 drives, or 3 pools of 10 drives, be made in such a configuration that everything is mounted as one drive? Is that possible under ZFS?
I think you need to read the ZFS primer. Pools consist of vdevs, which in turn consist of your individual drives. Each pool then corresponds to a mount point. So if you create 2 pools, you will end up with 2 mount points, 3 pools means 3 mount points, and so on. How you arrange the drives in the vdevs is up to preference: you can have a bunch of mirrored vdevs or a RAIDZ-type vdev. What you should use depends on your use case and risk tolerance.
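As a rough sketch only (hypothetical pool and disk names, and on TrueNAS you would normally build this from the GUI rather than the command line), one pool made of two 6-disk RAID-Z2 vdevs still gives you a single mount point:

zpool create tank raidz2 sda sdb sdc sdd sde sdf raidz2 sdg sdh sdi sdj sdk sdl

ZFS stripes data across the two vdevs, so you see one filesystem with the combined capacity rather than two separate "drives".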
 

no1ninja

Dabbler
Joined
Jan 22, 2013
Messages
24
root@truenas[~]# zpool status
pool: Media
state: ONLINE
scan: scrub canceled on Wed Jan 25 14:30:02 2023
config:

NAME STATE READ WRITE CKSUM
Media ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
ee7c0b2d-25ca-c549-81f9-e05854d986c7 ONLINE 0 0 0
0d25223b-e373-8845-9292-cd737b7ff150 ONLINE 0 0 0
389cc3fb-8d92-b84d-83dc-a846ef8dd5f0 ONLINE 0 0 0
c7d3b1d5-caa9-6543-b8a6-da6019be34bf ONLINE 0 0 0
bfa2d4af-d979-9347-8a35-50a781bef9f0 ONLINE 0 0 0
e626ebfb-f5b2-d64a-b1f3-2aa7f732c6a8 ONLINE 0 0 0
f637ff90-1221-1647-b1e3-4bccd50352dc ONLINE 0 0 0
761308c0-a5e2-be4b-9862-bd8db319ca4b ONLINE 0 0 0
7b38c2c3-7c65-b64d-8074-acaf5f34d615 ONLINE 0 0 0
33716810-edc2-cc46-a1fd-ae050ee1d928 ONLINE 0 0 0
b27f7b08-a723-5d47-ab75-55ffb990703c ONLINE 0 0 0
4309bd7f-a2d6-0e47-bd58-194d07acbd94 ONLINE 0 0 0
8b189ec5-35e0-7d4d-8434-386fe9f36fb9 ONLINE 0 0 0
fc14c43d-073e-a441-af7a-01e81aa4b1e9 ONLINE 0 0 0
e3c2401a-7a16-b14f-9004-3356fed12fe1 ONLINE 0 0 0
c0c0a23a-ce4c-6748-b0dc-59ed462e6a01 ONLINE 0 0 0
db32a415-a58c-654b-a69f-c91ad1502706 ONLINE 0 0 0

errors: No known data errors

pool: boot-pool
state: ONLINE
status: Some supported and requested features are not enabled on the pool.
The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
config:

NAME STATE READ WRITE CKSUM
boot-pool ONLINE 0 0 0
sda3 ONLINE 0 0 0

errors: No known data errors

pool: nas2
state: ONLINE
config:

NAME STATE READ WRITE CKSUM
nas2 ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
7ac44cb4-9672-4137-ae4f-18aa30b5bdd9 ONLINE 0 0 0
cd5d56b6-98bb-44db-9d94-782041be3c5e ONLINE 0 0 0
f139eebb-c677-475d-bdb3-a22d406f8f7b ONLINE 0 0 0
58a8feca-45dd-4493-b075-1e34eb4cf394 ONLINE 0 0 0
30a8b7a6-0777-4586-8b86-38c4ea77f960 ONLINE 0 0 0
05b22030-0191-412b-8926-e5a16e80ef7f ONLINE 0 0 0

errors: No known data errors

----------------------------------------

Don't worry about the last zpool; I was testing to see how I could get FreeNAS to work, and I will be deleting it.
 

no1ninja

Dabbler
Joined
Jan 22, 2013
Messages
24
root@truenas[~]# zpool list -v
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
Media 278T 102T 176T - - 0% 36% 1.00x ONLINE /mnt
raidz2-0 278T 102T 176T - - 0% 36.7% - ONLINE
ee7c0b2d-25ca-c549-81f9-e05854d986c7 16.4T - - - - - - - ONLINE
0d25223b-e373-8845-9292-cd737b7ff150 16.4T - - - - - - - ONLINE
389cc3fb-8d92-b84d-83dc-a846ef8dd5f0 16.4T - - - - - - - ONLINE
c7d3b1d5-caa9-6543-b8a6-da6019be34bf 16.4T - - - - - - - ONLINE
bfa2d4af-d979-9347-8a35-50a781bef9f0 16.4T - - - - - - - ONLINE
e626ebfb-f5b2-d64a-b1f3-2aa7f732c6a8 16.4T - - - - - - - ONLINE
f637ff90-1221-1647-b1e3-4bccd50352dc 16.4T - - - - - - - ONLINE
761308c0-a5e2-be4b-9862-bd8db319ca4b 16.4T - - - - - - - ONLINE
7b38c2c3-7c65-b64d-8074-acaf5f34d615 16.4T - - - - - - - ONLINE
33716810-edc2-cc46-a1fd-ae050ee1d928 16.4T - - - - - - - ONLINE
b27f7b08-a723-5d47-ab75-55ffb990703c 16.4T - - - - - - - ONLINE
4309bd7f-a2d6-0e47-bd58-194d07acbd94 16.4T - - - - - - - ONLINE
8b189ec5-35e0-7d4d-8434-386fe9f36fb9 16.4T - - - - - - - ONLINE
fc14c43d-073e-a441-af7a-01e81aa4b1e9 16.4T - - - - - - - ONLINE
e3c2401a-7a16-b14f-9004-3356fed12fe1 16.4T - - - - - - - ONLINE
c0c0a23a-ce4c-6748-b0dc-59ed462e6a01 16.4T - - - - - - - ONLINE
db32a415-a58c-654b-a69f-c91ad1502706 16.4T - - - - - - - ONLINE
boot-pool 7.25T 2.64G 7.25T - - 0% 0% 1.00x ONLINE -
sda3 7.26T 2.64G 7.25T - - 0% 0.03% - ONLINE
nas2 32.7T 14.3G 32.7T - - 0% 0% 1.00x ONLINE /mnt
raidz2-0 32.7T 14.3G 32.7T - - 0% 0.04% - ONLINE
7ac44cb4-9672-4137-ae4f-18aa30b5bdd9 5.46T - - - - - - - ONLINE
cd5d56b6-98bb-44db-9d94-782041be3c5e 5.46T - - - - - - - ONLINE
f139eebb-c677-475d-bdb3-a22d406f8f7b 5.46T - - - - - - - ONLINE
58a8feca-45dd-4493-b075-1e34eb4cf394 5.46T - - - - - - - ONLINE
30a8b7a6-0777-4586-8b86-38c4ea77f960 5.46T - - - - - - - ONLINE
05b22030-0191-412b-8926-e5a16e80ef7f 5.46T - - - - - - - ONLINE
root@truenas[~]# zfs list -t all -r
NAME USED AVAIL REFER MOUNTPOINT
Media 83.8T 144T 82.4T /mnt/data
Media@zfs-auto-snap_daily-2023-01-20-1251 51.1M - 37.3T -
Media@zfs-auto-snap_daily-2023-01-21-1305 20.6G - 55.6T -
Media@zfs-auto-snap_daily-2023-01-22-1310 173G - 76.9T -
Media@zfs-auto-snap_daily-2023-01-23-1331 40.3G - 82.6T -
Media@zfs-auto-snap_daily-2023-01-24-1323 111M - 82.4T -
Media@zfs-auto-snap_daily-2023-01-25-1313 0B - 82.2T -
Media@zfs-auto-snap_weekly-2023-01-25-1313 0B - 82.2T -
Media@zfs-auto-snap_daily-2023-01-27-1314 106M - 82.2T -
Media@zfs-auto-snap_daily-2023-01-28-1313 1.96M - 82.2T -
Media@zfs-auto-snap_hourly-2023-01-29-0417 0B - 82.2T -
Media@zfs-auto-snap_hourly-2023-01-29-0517 0B - 82.2T -
Media@zfs-auto-snap_hourly-2023-01-29-0617 197K - 82.2T -
Media@zfs-auto-snap_hourly-2023-01-29-0717 1.44M - 82.2T -
Media@zfs-auto-snap_hourly-2023-01-29-0817 3.85M - 82.2T -
Media@zfs-auto-snap_hourly-2023-01-29-0917 1.77M - 82.2T -
Media@zfs-auto-snap_hourly-2023-01-29-1017 1.23M - 82.2T -
Media@zfs-auto-snap_hourly-2023-01-29-1117 2.02M - 82.2T -
Media@zfs-auto-snap_hourly-2023-01-29-1217 866K - 82.2T -
Media@zfs-auto-snap_daily-2023-01-29-1240 0B - 82.2T -
Media@zfs-auto-snap_hourly-2023-01-29-1317 0B - 82.2T -
Media@zfs-auto-snap_hourly-2023-01-29-1417 1.46M - 82.2T -
Media@zfs-auto-snap_hourly-2023-01-29-1517 394K - 82.3T -
Media@zfs-auto-snap_hourly-2023-01-29-1617 0B - 82.3T -
Media@zfs-auto-snap_hourly-2023-01-29-1717 0B - 82.3T -
Media@zfs-auto-snap_hourly-2023-01-29-1817 866K - 82.4T -
Media@zfs-auto-snap_hourly-2023-01-29-1917 472K - 82.4T -
Media@zfs-auto-snap_hourly-2023-01-29-2017 0B - 82.4T -
Media@zfs-auto-snap_hourly-2023-01-29-2117 0B - 82.4T -
Media@zfs-auto-snap_hourly-2023-01-29-2217 0B - 82.4T -
Media@zfs-auto-snap_hourly-2023-01-29-2317 0B - 82.4T -
Media@zfs-auto-snap_hourly-2023-01-30-0017 0B - 82.4T -
Media@zfs-auto-snap_hourly-2023-01-30-0117 0B - 82.4T -
Media@zfs-auto-snap_hourly-2023-01-30-0217 0B - 82.4T -
Media@zfs-auto-snap_frequent-2023-01-30-0245 0B - 82.4T -
Media@zfs-auto-snap_frequent-2023-01-30-0300 0B - 82.4T -
Media@zfs-auto-snap_frequent-2023-01-30-0315 0B - 82.4T -
Media@zfs-auto-snap_hourly-2023-01-30-0317 0B - 82.4T -
Media@zfs-auto-snap_frequent-2023-01-30-0330 0B - 82.4T -
boot-pool 2.64G 7.12T 96K none
boot-pool/ROOT 2.63G 7.12T 96K none
boot-pool/ROOT/22.12.0 2.63G 7.12T 2.62G legacy
boot-pool/ROOT/22.12.0@2023-01-30-03:51:15 6.86M - 2.62G -
boot-pool/ROOT/Initial-Install 8K 7.12T 2.62G /
boot-pool/grub 8.20M 7.12T 8.20M legacy
nas2 9.51G 21.7T 192K /mnt/nas2
nas2/.system 61.3M 21.7T 224K legacy
nas2/.system/configs-5a0a2a47cd884dbcbe527966286bfc29 320K 21.7T 320K legacy
nas2/.system/cores 192K 1024M 192K legacy
nas2/.system/ctdb_shared_vol 192K 21.7T 192K legacy
nas2/.system/glusterd 208K 21.7T 208K legacy
nas2/.system/rrd-5a0a2a47cd884dbcbe527966286bfc29 57.9M 21.7T 57.9M legacy
nas2/.system/samba4 527K 21.7T 527K legacy
nas2/.system/services 192K 21.7T 192K legacy
nas2/.system/syslog-5a0a2a47cd884dbcbe527966286bfc29 1.35M 21.7T 1.35M legacy
nas2/.system/webui 192K 21.7T 192K legacy
nas2/Nas2 9.45G 21.7T 9.45G /mnt/nas2/Nas2
root@truenas[~]# zfs get mountpoint
NAME PROPERTY VALUE SOURCE
Media mountpoint /mnt/data local
Media@zfs-auto-snap_daily-2023-01-20-1251 mountpoint - -
Media@zfs-auto-snap_daily-2023-01-21-1305 mountpoint - -
Media@zfs-auto-snap_daily-2023-01-22-1310 mountpoint - -
Media@zfs-auto-snap_daily-2023-01-23-1331 mountpoint - -
Media@zfs-auto-snap_daily-2023-01-24-1323 mountpoint - -
Media@zfs-auto-snap_daily-2023-01-25-1313 mountpoint - -
Media@zfs-auto-snap_weekly-2023-01-25-1313 mountpoint - -
Media@zfs-auto-snap_daily-2023-01-27-1314 mountpoint - -
Media@zfs-auto-snap_daily-2023-01-28-1313 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-29-0417 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-29-0517 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-29-0617 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-29-0717 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-29-0817 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-29-0917 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-29-1017 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-29-1117 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-29-1217 mountpoint - -
Media@zfs-auto-snap_daily-2023-01-29-1240 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-29-1317 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-29-1417 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-29-1517 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-29-1617 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-29-1717 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-29-1817 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-29-1917 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-29-2017 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-29-2117 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-29-2217 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-29-2317 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-30-0017 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-30-0117 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-30-0217 mountpoint - -
Media@zfs-auto-snap_frequent-2023-01-30-0245 mountpoint - -
Media@zfs-auto-snap_frequent-2023-01-30-0300 mountpoint - -
Media@zfs-auto-snap_frequent-2023-01-30-0315 mountpoint - -
Media@zfs-auto-snap_hourly-2023-01-30-0317 mountpoint - -
Media@zfs-auto-snap_frequent-2023-01-30-0330 mountpoint - -
boot-pool mountpoint none local
boot-pool/ROOT mountpoint none inherited from boot-pool
boot-pool/ROOT/22.12.0 mountpoint legacy local
boot-pool/ROOT/22.12.0@2023-01-30-03:51:15 mountpoint - -
boot-pool/ROOT/Initial-Install mountpoint / local
boot-pool/grub mountpoint legacy local
nas2 mountpoint /mnt/nas2 default
nas2/.system mountpoint legacy local
nas2/.system/configs-5a0a2a47cd884dbcbe527966286bfc29 mountpoint legacy local
nas2/.system/cores mountpoint legacy local
nas2/.system/ctdb_shared_vol mountpoint legacy local
nas2/.system/glusterd mountpoint legacy local
nas2/.system/rrd-5a0a2a47cd884dbcbe527966286bfc29 mountpoint legacy local
nas2/.system/samba4 mountpoint legacy local
nas2/.system/services mountpoint legacy local
nas2/.system/syslog-5a0a2a47cd884dbcbe527966286bfc29 mountpoint legacy local
nas2/.system/webui mountpoint legacy local
nas2/Nas2 mountpoint /mnt/nas2/Nas2 default
root@truenas[~]#
 

no1ninja

Dabbler
Joined
Jan 22, 2013
Messages
24
I would just like to be able to read and write to the first pool, Media, at this time. I will read the primer, but I still need to get my data.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Can you RAID 0 the mount points to make 1 device?
No, what you need to do is create one pool and stripe the vdevs. Also, you can't really modify RAIDZ vdevs once created, so you're stuck with it.

I would just like to be able to read and write to the first pool, Media, at this time. I will read the primer, but I still need to get my data.
Your pools look fine. I think all you need to do is probably just a zpool export Media from the command line and then reimport the pool from the web UI.
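A minimal sketch of that sequence (assuming the pool really is named Media):

zpool export Media

Then use the import pool option under Storage in the web UI. One thing worth noting from your zfs get mountpoint output above: Media currently mounts at /mnt/data, while the error complains about /mnt/Media, so if the reimport alone doesn't fix it, that mismatch is the likely suspect (for example, zfs set mountpoint=/mnt/Media Media, though treat that as a guess on my part rather than gospel).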
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Also, for goodness sake, please format your console dumps in CODE tags. They're so hard to read.
 

no1ninja

Dabbler
Joined
Jan 22, 2013
Messages
24
It won't let me do anything with the pool once it's imported. I just exported it from the command line and brought it back in with the GUI, and I have the same issue.

Well, when I went in with the GUI, it still showed the old pool, but its health was an X. So I deleted it and imported the pool again. Perfect health again.

The minute I try to create a dataset, it complains about the mount point... and after that it's impossible to give it shares.

So it's useless. I just want to be able to read and write to it from other machines.


Import is forgetting something. That is why I built the same kind of 6-drive pool, to see if I would get the same errors, and no... it worked fine. I can make everything work as long as I build it from scratch.

If I try to make the imported pool work, the GUI complains about errors. The minute I try to create a dataset on the imported pool, no matter what I call it, it says there is an error due to mount points.
 