Creating first (user defined) dataset in TrueNAS CORE 13

ehansin

Cadet
Joined
Nov 11, 2022
Messages
8
Hi everyone,

I am kind of new to ZFS, but know a little and learning. I am finally getting around to setting up a new TrueNAS Mini X we purchased. I upgraded the installed CORE version from 12 to 13, and am now at the current 13 U3 release. May not matter here, but just for some context...

The purpose of the NAS is to provide SMB shared file storage. "Out of the box", on the spinning-disk side the system has a ZFS pool called "tank" and, within that pool, a dataset called "iocage", which I am sure is not news to people here.

My question is, just to make sure I am doing the right thing: should I just create a second dataset (the first user-defined dataset) within the "tank" pool? I would then expose/map that dataset via Samba/SMB to make it accessible to user clients.

Again, this might be obvious to those more knowledgeable than myself, but it seems like the right approach; I just want to be sure before migrating a bunch of data over. As a side note, I already upgraded the ZFS pools to the latest version ("This system is currently running ZFS filesystem version 5. All filesystems are formatted with the current version.") just to get that out of the way.

Thanks!
Eric
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399

ehansin

Cadet
Joined
Nov 11, 2022
Messages
8
Thanks Samuel! I just wanted to make sure I didn't need to create a new zpool. I am starting to understand the concepts enough to realize that the default "tank" zpool is what resides on the vdevs (i.e. the spinning disks) as a mirrored config, so I can create the additional dataset I need for file/data storage on that existing "tank" pool. No need to delete the default tank zpool and create a new one. At least that makes the most sense to me. Again, I am still solidifying my understanding of the core concepts, but getting there.

And thanks for the links, very helpful!
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@ehansin - You might want to put the output of zpool status tank & the size of the disks here in the forums. I don't know the default configuration, but we can advise you if your existing pool has limitations. (For larger disks, you don't want RAID-Z1 vDevs.)

Also, if you end up creating a second, user defined dataset for a different purpose, like backups, you can set quotas and reservations to limit and reserve space, for either user dataset, or both.
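
For example, something along these lines from a shell (the dataset names here are just placeholders; the same settings are available in each dataset's options in the web UI):

Code:
root@truenas[~]# zfs set quota=2T tank/storage           # cap how much tank/storage and its children can consume
root@truenas[~]# zfs set reservation=500G tank/backups   # guarantee tank/backups at least 500G of pool space
root@truenas[~]# zfs get quota,reservation tank/storage tank/backups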


By the way, the ZFS file system version 5 is basically static today. It's the pool version that changes. If you upgrade the pool version, in general you won't be able to use any older version of TrueNAS.
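
If you're curious, you can see the distinction from a shell with something like:

Code:
root@truenas[~]# zfs get version tank      # filesystem version, stays at 5
root@truenas[~]# zpool get version tank    # shows "-" once a pool uses feature flags
root@truenas[~]# zpool upgrade             # lists any pools with feature flags not yet enabled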
 

ehansin

Cadet
Joined
Nov 11, 2022
Messages
8
Here you go (and thanks!) Btw, we have three 12TB disks installed in addition to the included boot SATA DOM (seen as the "boot-pool" pool):

Code:
root@truenas[~]# zpool list

NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
boot-pool  14.5G  6.37G  8.13G        -         -     3%    43%  1.00x    ONLINE  -
tank       9.08T  98.8M  9.08T        -         -     0%     0%  1.00x    ONLINE  /mnt


And...

Code:
root@truenas[~]# zpool status tank

  pool: tank
 state: ONLINE
  scan: scrub repaired 0B in 00:00:12 with 0 errors on Sun Nov  6 00:00:12 2022
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank                                            ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/21c6d93a-ec3a-11ec-b39a-d05099fb0e12  ONLINE       0     0     0
            gptid/21cd8352-ec3a-11ec-b39a-d05099fb0e12  ONLINE       0     0     0


Btw, I detached the spare drive from the pool, as I want to convert the two-disk mirror into a three-disk mirror (at least that is what I am thinking of doing.) I am not seeing a way to add this disk through the web UI, but it appears I can attach it via a "zpool attach" terminal command after partitioning the drive with "gpart" to get a gptid (so I can attach with that vs. an adaX device name.) Another topic, but I think I have it mostly figured out.
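
Roughly what I have in mind, just as a sketch (ada3 and the gptid placeholders are not my real device names, and the 2G swap size just matches the existing disks):

Code:
root@truenas[~]# gpart create -s gpt ada3                     # new GPT table on the third disk (ada3 is a placeholder)
root@truenas[~]# gpart add -t freebsd-swap -a 4k -s 2G ada3   # 2G swap partition, matching the existing disks
root@truenas[~]# gpart add -t freebsd-zfs -a 4k ada3          # the rest of the disk for the mirror
root@truenas[~]# glabel status | grep ada3p2                  # look up the gptid of the new ZFS partition
root@truenas[~]# zpool attach tank gptid/<existing-member> gptid/<new-partition>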

In summary, the existing "tank" pool has plenty of space and is using (or will use, once I add the third disk back in) all the available spinning disks. The dataset question of course comes from still learning, but it seems to make sense that I just add a dataset to the existing "tank" pool and should be good to go.

Thanks again.
 

ehansin

Cadet
Joined
Nov 11, 2022
Messages
8
I stand corrected, we have three 10TB disks. Just for clarity's sake. I had misinterpreted something.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
A 3 way mirror of 10TB disks will work fine. You get a little performance boost on parallel reads, because each disk can be reading something different. (They all have the same data.) Writes will have to hit all 3 disks, but you knew that. Plus, a 3 way mirror is a bit safer with huge disks like 10TB.
 

ehansin

Cadet
Joined
Nov 11, 2022
Messages
8
Thanks Arwen! That is what I figured from things I read online (going with a 3-way mirror vs. a 2-way mirror + hot spare.) Two disks could fail before any data loss, and there is no need to resilver onto a spare after a single failure (and if a second disk failed during that resilver, well, there you go.) FYI, there will be a nightly backup pushed off-site as well. Oh, and there is a fourth drive bay available if I do want to attach another standby to the 3-way mirror ;)

I think I know enough now: I just need to attach the third disk to the existing two-disk mirror, and then add a storage ZFS dataset to the "tank" pool the system came with from iXsystems. Then map that dataset to an SMB/Samba share and good to go!
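
From the command line I gather the dataset part is as simple as something like this (using "storage" as a made-up name; I will most likely do it through the UI):

Code:
root@truenas[~]# zfs create tank/storage      # new dataset under the existing pool
root@truenas[~]# zfs list -r tank             # confirm it shows up under /mnt/tank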

Thanks everyone.
 

ehansin

Cadet
Joined
Nov 11, 2022
Messages
8
I'll add in one missing step first. Before attaching the third disk, I will partition it to match the two existing disks, which came with a 2GB swap partition and the rest of the space used for the mirror. That way I can add that third large partition to the existing mirror by gptid!
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
I'll add in one missing step first. Before attaching the third disk, I will partition it to match the two existing disks, which came with a 2GB swap partition and the rest of the space used for the mirror. That way I can add that third large partition to the existing mirror by gptid!
No need. Extending the VDEV in the GUI will automatically partition and format the third disk to match the other members of the VDEV.
 

ehansin

Cadet
Joined
Nov 11, 2022
Messages
8
Thanks Samuel (and everyone else.) I couldn't find that before, but I just figured it out after you said it could be done (Storage > Pools > Pool Status, then the three vertical dots next to the VDEV (in my case a mirror) to expose the menu, then Extend.) Did exactly as desired. Glad to know I can do this via the UI, which by the way is fantastic.

That all said, glad I also struggled a little in a shell, good to see things at that level as well. I'm getting a "crash course" in ZFS and that is great. In the end though, happy to have the UI take care of things as that way I know things are consistent vs. partitioning myself with gpart, etc. Next task, create that dataset I need under the tank pool...

All is good in the universe (for now), thanks again!
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
You are welcome. Glad we could help.


By the way, I lived in Colorado (Denver area) for many years. It was nice always having some place to drive to on the weekends, like hot springs, small towns or scenic places. Living near Mordor is not as much fun.
 

ehansin

Cadet
Joined
Nov 11, 2022
Messages
8
Arwen, yeah, this was all pretty helpful. Got me over the initial nervous hurdles! Really excited to be running ZFS in a "real world" scenario. I live in Boulder and work at the university here, which is actually where the TrueNAS is being put to use. Right now no car, so I don't get far ;) But hopefully again sometime. I get around a lot on an electric bike these days (don't get far, but I enjoy getting to where I do get!) Thanks again, and I appreciate so many people jumping in and giving help and advice.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
OMG, Boulder, I am so sorry for you :smile:.

When you finally get a car, (or go on a drive with someone with a car), Nederland is an easy drive from Boulder. Plus, if you go at the right time, you can attend Frozen Dead Guy Days festival.
 