How do I partition a new disk in TrueNAS?

fsociety3765

Explorer
Joined
Feb 2, 2021
Messages
61
Hi all,

I want to partition some 1 TB SSDs. How can I do this? There doesn't seem to be a way to do it through the GUI.

The two drives are showing up as da1 and da2.

Running gpart show on the CLI doesn't list the drives' GEOMs.

I want to create four partitions on each drive so I can add a 2x striped L2ARC and a 2x mirrored SLOG to each of two zpools.

Thanks,

FS
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
You cannot do that from the UI and it is an unsupported configuration. One drive, one role only.

If you want to go ahead, you need to create the partitions from the CLI like this:
Code:
gpart create -s gpt da1    # create a GPT partition scheme on each drive
gpart create -s gpt da2
gpart add -s 16g -a 1m -t freebsd-zfs da1    # first partition: 16 GiB, 1 MiB aligned (SLOG)
gpart add -s 16g -a 1m -t freebsd-zfs da2
gpart add -a 1m -t freebsd-zfs da1    # second partition: rest of the drive (L2ARC)
gpart add -a 1m -t freebsd-zfs da2

That creates a 16G partition for the SLOG and the rest of the drive for the L2ARC.
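
If you really do want to feed two pools from the same SSDs as you described, the same approach extends to four partitions per drive. A rough sketch, assuming 16 G for each SLOG and an arbitrary split of the remainder between the two L2ARCs (the 400g figure is only an example):
Code:
gpart add -s 16g -a 1m -t freebsd-zfs da1     # p1: SLOG for pool 1
gpart add -s 16g -a 1m -t freebsd-zfs da1     # p2: SLOG for pool 2
gpart add -s 400g -a 1m -t freebsd-zfs da1    # p3: L2ARC for pool 1
gpart add -a 1m -t freebsd-zfs da1            # p4: L2ARC for pool 2, rest of the drive

Same again for da2.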

The next step is important! You must use the GPTIDs when adding the partitions to your pool!

So get them with glabel status (gpart show by itself won't display the GPTIDs; gpart list da1 also shows each partition's rawuuid), then do something like this:
Code:
zpool add <yourpool> cache gptid/id-of-da1p2    # striped L2ARC across both SSDs
zpool add <yourpool> cache gptid/id-of-da2p2
zpool add <yourpool> log mirror gptid/id-of-da1p1 gptid/id-of-da2p1    # mirrored SLOG
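
Afterwards a quick sanity check; the cache and log sections of the output should list the gptid devices:
Code:
zpool status <yourpool>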

Whether that makes sense at all I leave up to you and possibly others who will chime in. But that's how you get it working the way you like.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I want to create four partitions on each drive so I can add a 2x striped L2ARC and a 2x mirrored SLOG to each of two zpools.
You probably need to read up on what you're intending to do there...

Using two SSDs for L2ARC is probably fine if you are convinced that ARC isn't enough and you've already got enough RAM (a minimum of 64 GB before L2ARC should be considered).
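
A rough way to sanity-check whether the ARC is actually running short before you bother with an L2ARC (these are the standard FreeBSD ZFS ARC kstats; compare hits against misses):
Code:
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
sysctl kstat.zfs.misc.arcstats.size    # current ARC size in bytes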

Doing what you propose for SLOG is an entirely different thing and may actually make things worse unless your pool design is already right and supports sufficient IOPS. The SSDs you have chosen also need to suit the SLOG role (power-loss protection and consistently low write latency), or it's really not worth it.

 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
A 1 TB SSD is probably not an Optane DC (960 GB), so the setup must be wrong.
 

fsociety3765

Explorer
Joined
Feb 2, 2021
Messages
61
Thanks for the advice.

If I may be completely candid...

I am completely new to TrueNAS and ZFS, so please forgive my inexperience. I should mention that this is not a production environment; it's a home office/lab and nothing is mission-critical. Although I would like to follow the recommended implementations where possible, I'm also open to, and not afraid of, hacking a few things in "it's possible, but not really supported" situations if it achieves the desired result. All of the data that will be stored on TrueNAS is already backed up in the cloud. It's just sitting there at the moment, waiting for me to pull it down once I get everything set up. So if something goes wrong and I have to bin it all and start again, it's really not a big deal for me.

Most of what I have learned so far about TrueNAS and ZFS has been from watching YouTube videos, and I will admit that while I do understand the caching feature, I don't completely understand the SLOG; I had never heard of it until I started looking into TrueNAS. But I watched a YouTube video by Victor Bart (Retro Machines) where he had a set of disks similar to mine and partitioned them to do the striped L2ARC and the mirrored SLOG across two SSDs.

I will not profess to be a Unix expert, but I am fairly comfortable on the Linux CLI and very comfortable in a terminal in general. However, I have not used FreeBSD on the CLI before, and things are obviously a little different from Linux.

I only have the two SSDs assigned to TrueNAS for now, and that won't be changing for a while, so I want to make the best use of what I have.

The setup I have is as follows:

Proxmox Server

TrueNAS VM Assigned Hardware:
4 CPU Cores
128GB RAM
LSI SAS3008 9300-8I HBA - PCIe Passthrough to VM
4x 16TB SAS drives - via HBA
8x 6TB SATA drives - via HBA
2x 1TB SSD - Direct Passthrough to VM

I will have two pools: one of the 4x 16TB drives and another of the 8x 6TB drives. All the drives are showing up in TrueNAS, and I have already created a RAIDZ2 zpool for the 8x 6TB drives. That pool is already up and running and online.

I can always scale the CPU cores and RAM up if necessary but for now, in terms of drives, this is what I have.

With that in mind, would you be able to advise how to make the best use of the two SSDs? Do you think I even need them, given the current hardware I have allocated to the VM?

Also, I appreciate the links to the articles. I will have a look over them, but I must admit that I suffer from dyslexia, so reading big articles or blocks of text can be challenging for me. I'm more of a visual learner.

Thanks,

FS
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
With 128 GB of RAM you probably won't need an L2ARC.
You probably won't need an SLOG either, but that depends.

What you could do to put these large SSDs to good use: create a second mirrored pool on them and put the workloads that profit from running on an SSD there, like jails and VMs.
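
(In the TrueNAS UI that is just Storage → Pools → Add with the two SSDs as a mirror vdev; on the CLI the shape would be something like zpool create ssdpool mirror ..., but let the UI do it so the partition layout and swap match what TrueNAS expects. The "ssdpool" name is just an example.)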

That's what I do with my setup. It's a bit smaller and without Proxmox, but the "part SSD, part HDD" approach is just the same. Additionally, you can replicate the data on the SSDs onto the HDD pool periodically, in case both SSDs fail more or less simultaneously due to wear.
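
The periodic replication is easiest to set up as a snapshot task plus a local replication task in the UI; the CLI equivalent is a sketch like this (pool and dataset names here are made up):
Code:
zfs snapshot -r ssdpool@backup-20210205
zfs send -R ssdpool@backup-20210205 | zfs recv -u hddpool/ssdpool-backup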

What type of SSDs are these? What do you mean by "direct passthrough"? NVMe, i.e. PCIe passthrough? If yes, that's good. Going to work well, probably.
 

fsociety3765

Explorer
Joined
Feb 2, 2021
Messages
61
I'm open to taking any advice to get the best set up possible using the disks I have.
 

fsociety3765

Explorer
Joined
Feb 2, 2021
Messages
61
With 128 GB of RAM you probably won't need an L2ARC.
You probably won't need an SLOG either, but that depends.

What you could do to put these large SSDs to good use: create a second mirrored pool on them and put the workloads that profit from running on an SSD there, like jails and VMs.

That's what I do with my setup. It's a bit smaller and without Proxmox, but the "part SSD, part HDD" approach is just the same. Additionally, you can replicate the data on the SSDs onto the HDD pool periodically, in case both SSDs fail more or less simultaneously due to wear.

What type of SSDs are these? What do you mean by "direct passthrough"? NVMe, i.e. PCIe passthrough? If yes, that's good. Going to work well, probably.
Would it be better to reduce the amount of RAM and have the L2ARC instead?

The two SSDs are not connected to the HBA; they are connected to the SATA controller on the motherboard. They are just standard 2.5" SSDs, Samsung 860 EVO SATA drives. In Proxmox on the CLI I had to drill down to get the /dev/disk/by-id path and then attach each individual disk to the TrueNAS VM as a SCSI device using the "qm set" command. They show up in the VM hardware list in Proxmox as just extra hard drives connected.
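
For reference, the commands looked something like this (the VM ID and device names here are placeholders, not my actual values):
Code:
qm set 100 -scsi1 /dev/disk/by-id/ata-Samsung_SSD_860_EVO_1TB_SERIAL1
qm set 100 -scsi2 /dev/disk/by-id/ata-Samsung_SSD_860_EVO_1TB_SERIAL2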
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Would it be better to reduce the amount of RAM and have the L2ARC instead?
No. RAM is always best. And managing the L2ARC uses RAM, which will then not be available for caching.

The 2x SSD's are not connected to the HBA. They are connected to the SATA controller on the motherboard. In Proxmox on the CLI I had to drill down to get the "dev-by-id" and then assign each individual disk to the TrueNAS VM using iSCSI using the "qm set" command.
Does that mean Proxmox is passing the entire SATA controller to the TrueNAS VM? If not, you should not use these SSDs that way. TrueNAS needs direct access to the drive hardware without any emulation layer in between.

I am not familiar with Proxmox, unfortunately. I run all my virtualisation workloads on TrueNAS. Linux, Windows, FreeBSD ...
 

fsociety3765

Explorer
Joined
Feb 2, 2021
Messages
61
No. RAM is always best. And managing the L2ARC uses RAM, which will then not be available for caching.


Does that mean Proxmox is passing the entire SATA controller to the TrueNAS VM? If not, you should not use these SSDs that way. TrueNAS needs direct access to the drive hardware without any emulation layer in between.

I am not familiar with Proxmox, unfortunately. I run all my virtualisation workloads on TrueNAS. Linux, Windows, FreeBSD ...
No, the whole motherboard SATA controller is not passed through, as Proxmox needs access to some of the connected drives for the OS and for VM disk images.

I got the instructions from here...
Passthrough Physical Disk to Virtual Machine (VM) - Proxmox VE

And also got the idea from this...
Craft Computing - Proxmox with TrueNAS VM, passing individual SATA drives to TrueNAS
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
If I read the Proxmox documentation correctly, this emulates a SCSI controller and attached disk, using the raw SATA block device as the storage backend. This is not passthrough. Not recommended at all, no matter what the fancy beer connoisseur is doing in his video.
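
You can see what TrueNAS actually thinks it is talking to from its own shell (an illustrative check; with this kind of emulation the disks typically identify as QEMU hardware rather than as Samsung drives, and SMART data is usually not available in the guest):
Code:
camcontrol devlist     # emulated disks tend to show up as "QEMU QEMU HARDDISK"
smartctl -i /dev/da1   # SMART identity will likely fail or report the emulated device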

Proceed at your own risk. You might want to read this guide first:

Why do you want to run TrueNAS virtualised in Proxmox? As far as I know the product, it offers:
  • ZFS for storage
  • virtual machines
TrueNAS offers:
  • ZFS for storage
  • virtual machines
So, why?
 
Last edited:

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Proxmox uses KVM for virtualization, while TrueNAS Core uses bhyve. Arguably KVM is superior to bhyve. That said, TrueNAS SCALE, which also uses KVM, will go into alpha this month, and even the pre-alpha builds have been decent. If there's the ability to layer on functionality as it's added to SCALE, then using the functionality that is expected to be solid in alpha - ZFS and VMs - can make sense.
 

fsociety3765

Explorer
Joined
Feb 2, 2021
Messages
61
I think it's a case of either/or, isn't it? Personal preference. Either can do both virtualization and file serving. I did consider running TrueNAS as the main OS when building this server. Virtualization is the number one focus of this server, though, which is why I opted for Proxmox: it's a pure hypervisor first and foremost, it has good hardware passthrough, and, as it's Debian-based, I'm much more familiar with the CLI.

What sparked all this off was my 5-year-old QNAP NAS dying, and QNAP saying it couldn't be repaired as they don't make the parts anymore and I would just have to buy a new one. I didn't want to be at the mercy of QNAP any longer. Having TrueNAS as my QNAP replacement is important to me as well, and it made sense (in my mind at least) to virtualize it. Or at least try to.

If it becomes a real problem down the line I can look at running it bare metal.

I have both zpools up and running now. Data is currently being pulled down from Dropbox.

I haven't done anything with the two SSDs and L2ARC/SLOG yet.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Why do you want to run TrueNAS virtualised in Proxmox?
Proxmox is a pretty lousy file server (i.e., it isn't one at all). And while it does (or can) use ZFS for storage, and its pool management is better than it was a few years ago, it doesn't give you anywhere near the pool management tools that FreeNAS does. Meanwhile, TrueNAS isn't that great as a hypervisor IME (which admittedly is minimal; I have a three-node Proxmox cluster, so I don't really need to run VMs on my FreeNAS box).
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
I guess it depends on what you want to spend your time on. For me it was important not to have an "exotic" setup overall. Back in the day (mid-1990s) I loved squeezing the last bit out of my hardware and (mostly) had a lot of fun. Today, though, I want the infrastructure to be hassle-free. That is why I decided to have a separate machine for VMs.
 