
SOLVED How to set up FreeNAS on a (partitioned) single SSD with boot and jails.


Patrick M. Hausen

Dedicated Sage
Joined
Nov 25, 2013
Messages
2,777
Please copy & paste text and not a screenshot. That aside, it looks good. What's your problem? Do you have only one disk for boot, not a mirror?
 

zierbeek

Junior Member
Joined
Apr 4, 2021
Messages
20
Yes, fixed it in a code block, apologies.

I rebooted, but how can I create the pool in the extra space? I can't seem to find it.
 

Patrick M. Hausen

Dedicated Sage
Joined
Nov 25, 2013
Messages
2,777
Use gpart list ada0 to find the rawuuid for ada0p4. Then use zpool create <yourpoolname> gptid/<the-rawuuid> to create the pool. Without any redundancy this is quite risky, though.

Then use zpool export <yourpoolname> and finally import the pool from the UI.
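
In sketch form, assuming your extra partition is ada0p4 and using a placeholder for the rawuuid, the whole sequence looks like this:
Code:
gpart list ada0                              # note the rawuuid: line under ada0p4
zpool create <yourpoolname> gptid/<the-rawuuid>
zpool export <yourpoolname>                  # then import the pool from the UI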
 

zierbeek

Junior Member
Joined
Apr 4, 2021
Messages
20
SOLVED: I did not need to run the following commands; I could just import the pool through the TrueNAS UI.

First and foremost, thank you for your time! I get this error.

Code:
zpool create ssd gptid/1bd5436e-9781-11eb-b91a-d050992848c3
cannot use '/dev/gptid/1bd5436e-9781-11eb-b91a-d050992848c3': must be a block device or regular file


I also tried this one:
Code:
zpool create ssd ada0p4/1bd5436e-9781-11eb-b91a-d050992848c3
cannot open 'ada0p4/1bd5436e-9781-11eb-b91a-d050992848c3': no such device in /dev
must be a full path or shorthand device name


The pool will be used to store my jails, so it's no problem if it dies; I'll just reinstall then.
 
Last edited:

Patrick M. Hausen

Dedicated Sage
Joined
Nov 25, 2013
Messages
2,777
So you already created the pool? Did you use the UUID when you did that? What does zpool status ssd report?
 

zierbeek

Junior Member
Joined
Apr 4, 2021
Messages
20
It's online. I just followed the guide in the first post, so there was no need to create the pool anymore. I just exported the 'jail' pool and imported it again via the CLI as 'ssd'.
 

Patrick M. Hausen

Dedicated Sage
Joined
Nov 25, 2013
Messages
2,777
The guide is wrong in one way: it tells you to run zpool create <poolname> nvd0p4 (or ada0p4, or whatever your matching device is). You should never do that in TrueNAS. Apart from the boot pool, always refer to disks via their gptid; otherwise the UI won't show your disks and the pool status correctly. That's why I asked. The guide that I wrote respects that.
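
If you want to double-check which gptid belongs to which partition, glabel status lists the mappings. A sketch of what to look for, reusing the UUID from the error message above:
Code:
glabel status | grep ada0p4
# gptid/1bd5436e-9781-11eb-b91a-d050992848c3     N/A  ada0p4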
 

zierbeek

Junior Member
Joined
Apr 4, 2021
Messages
20
Hmm, all right, I get it. Have you rewritten the whole guide, or are you referring to your statements above?
 

Patrick M. Hausen

Dedicated Sage
Joined
Nov 25, 2013
Messages
2,777
The guide in this thread is not mine, but in the second post from the top I linked to mine.

Quote:

10. Create a new pool in the available space

Again we will need the command line; when we are finished, the pool will be available in the GUI like any other:
Code:
gpart add -t freebsd-zfs -a 1m ada4
gpart add -t freebsd-zfs -a 1m ada5
gpart list ada4
gpart list ada5


Look for these in the output of the gpart list command:
Code:
3. Name: ada4p3
[...]
rawuuid: 25fe934a-19d6-11ea-82a1-ac1f6b76641c
[...]
3. Name: ada5p3
[...]
rawuuid: 3fc8e29a-19d0-11ea-9848-ac1f6b76641c


Now create the pool - use the UUIDs from the previous step:
Code:
zpool create ssd mirror gptid/25fe934a-19d6-11ea-82a1-ac1f6b76641c gptid/3fc8e29a-19d0-11ea-9848-ac1f6b76641c


Last, we export the pool from the command line:
Code:
zpool export ssd


11. Import the new pool from the GUI
 

Lemming

Newbie
Joined
May 5, 2014
Messages
2
Hey all,

I just stumbled across a much easier way to do this that also works for UEFI-booting NVMe devices.

Basically I did the normal install procedure as per the start of this guide, but at the drive selection step I selected both the USB drive and the SSD.

After running through the installer, I shut the machine down, immediately unplugged the USB drive, and then booted the machine off the SSD alone.

I then opened a shell and ran the remaining steps listed below (on nvd0).

Code:
#ssd drive - add jail partition - will be ada0p4
gpart add -t freebsd-zfs -l jail0 ada0

#setup jail pool
zpool create jail /dev/ada0p4
umount /jail
zpool export jail

#wait for resilver! run 'zpool status' to check the status
zpool offline freenas-boot /dev/da0p2
zpool detach freenas-boot /dev/da0p2
I rebooted the machine once more, logged into the UI, and was able to import the jail pool and start using it. Everything seems to be working fine, and it's certainly a lot simpler.
 

Patrick M. Hausen

Dedicated Sage
Joined
Nov 25, 2013
Messages
2,777
@Lemming Great idea! Thanks for sharing. Minor improvement, though:
Don't use zpool create jail /dev/ada0p4 but use zpool create jail gptid/<rawuuid of ada0p4> instead.

Reason: apart from the boot pool, always refer to disks via their gptid, or the UI won't show your disks and the pool status correctly (see my post further up).
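
A minimal sketch of the amended step, keeping Lemming's pool name and with the rawuuid as a placeholder:
Code:
gpart list ada0                     # find the rawuuid of ada0p4
zpool create jail gptid/<rawuuid-of-ada0p4>
umount /jail
zpool export jail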
 

GuiPoM

Newbie
Joined
Apr 22, 2021
Messages
2
Hi !

Sorry for asking, I am not so familiar with TrueNAS and freebsd and zfs, and I guess what I am asking here might be technically not possible, but this topic is exactly what I was looking for.

Imagine you own a nice 500 GB M.2 SSD, and that you have already populated your SATA slots, so you rely on this SSD to run everything. This topic is a great start.
But would it make sense, if it is technically possible, to have the cache and the OS sharing the same physical drive?
I know from my limited experience with Proxmox that with the zpool command you can add a cache on a partition rather than on a whole physical drive.

Would it make sense, if someone knows how to proceed, to enhance these steps with that option?

It would be great to have the OS + jails + cache on one SSD and buy 2-4 drives for storage. That stays cheap; maybe not the most reliable setup you could ask for, but good enough for average use, I guess.

(I really hope this question is not stupid! ^^)
Thanks!
 

Patrick M. Hausen

Dedicated Sage
Joined
Nov 25, 2013
Messages
2,777
@GuiPoM What most people new to ZFS think of as a cache device in reality isn't one.

1. There is no write cache device in ZFS. At all. An SLOG is not a write cache device.
2. An L2ARC is a read cache device. But maintaining the accounting for that read cache needs RAM. So unless you already have 64 GB of memory or more, the fact that an L2ARC eats into your memory, and hence into your primary cache (the ARC), by far outweighs its possible benefits. Your performance will probably get worse.

Specifically, #2 and the "64 GB" figure are of course rules of thumb, but in most cases, really, don't think about "cache": put more memory into the box if it's technically feasible and you can afford it.
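
For completeness, since the question was about adding a cache from a partition: if you did have plenty of RAM, attaching an L2ARC partition would look roughly like this. A sketch only; the pool name tank and the gptid are placeholders:
Code:
# add a partition as an L2ARC ("cache" vdev) to an existing pool
zpool add tank cache gptid/<rawuuid-of-the-cache-partition>
# cache vdevs can be removed again at any time if they do not help
zpool remove tank gptid/<rawuuid-of-the-cache-partition>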

Now, what to do with that nice, completely oversized SSD? This thread mainly revolves around partitioning it (which FreeNAS/TrueNAS does not support out of the box) into a boot device plus another zpool, which gives you an SSD-backed zpool in addition to your SATA-HDD-backed one.

You can put jails/plugins ("containers" in which you can run anything that runs on FreeBSD) on that pool, e.g. the Nextcloud application while the data resides on your HDD pool. Or you can put VMs on that pool to run Linux or Windows VMs.

And you can protect yourself against losing all of that by replicating the jails/plugins/VMs on that single non-redundant SSD to your HDD pool, say, hourly, or at least daily. That's cheap in terms of storage and computing resources.
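As a sketch with hypothetical dataset and snapshot names (the UI can schedule the same thing as periodic snapshot and replication tasks):
Code:
# recursively snapshot the jail datasets on the SSD pool
zfs snapshot -r ssd/iocage@daily-backup
# replicate the snapshot to the redundant HDD pool
zfs send -R ssd/iocage@daily-backup | zfs recv -F tank/backup/iocage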


So what I would recommend: unless you run a large system in an enterprise environment (but then you would not be asking about a single, non-mirrored SSD), forget the "cache" idea. But if you want to run additional applications on your TrueNAS, read the whole of this thread; you can use your SSD to speed these up significantly. Just don't forget the replication to some redundant pool :wink:
 

GuiPoM

Newbie
Joined
Apr 22, 2021
Messages
2
Many thanks. It makes a lot more sense now that I have read your explanation. I thought adding a cache to my RAIDZ would help it.

And you are right, this is a home NAS I am running: a Pentium Gold G6400 (4 GHz, so quite fast, but not many threads) and 16 GB of RAM, which is maybe not that much for TrueNAS but is already a very good start, I guess.

Again, thanks for the clarification. Maybe I will not be the only one who wondered whether it could have been an option here!
 