How can I set up my NVMe pool properly?

ECC

Explorer
Joined
Nov 8, 2020
Messages
65
Hello,
I have a main pool (see signature), which is used for media storage & backups (e.g. big files, mainly sequential read/write workloads).

Additionally, I have a pair of Intel Optane NVMe drives (110 GB each), which I want to use for multiple purposes:
- mirror for VMs
- persistent L2ARC for metadata for my main pool, to reduce access times for small files (e.g. thumbnails)
- optional: mirror for ZIL

To achieve this, my plan was to create multiple partitions:
1) swap, size 2G (on each NVMe)
2) mirrored partition (across both NVMes) for storing VMs, size 70G
3) L2ARC for metadata only, size 30G striped (on each NVMe, 60G in total)
4) rest for ZIL (if configured)

This is how I started:

Code:
gpart destroy -F /dev/nvd0
gpart create -s gpt /dev/nvd0

gpart add -t freebsd-swap -s 2G /dev/nvd0   # swap
-> nvd0p1 added
gpart add -t freebsd-zfs -s 70G /dev/nvd0   # VM mirror
-> nvd0p2 added
gpart add -t freebsd-zfs -s 30G /dev/nvd0   # L2ARC
-> nvd0p3 added
gpart add -t freebsd-zfs /dev/nvd0          # rest for SLOG
-> nvd0p4 added
# same for nvd1
zpool create -R /mnt optane_vm mirror gptid/XXX gptid/YYY  # VM partition
zpool export optane_vm  # necessary to import optane_vm via GUI

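For reference, the gptid/XXX placeholders above stand for the GPT IDs of the new partitions; a minimal sketch of how to look them up on FreeBSD:

Code:
# map gptid/... labels to their partitions (nvd0p1, nvd0p2, ...)
glabel status | grep nvd0
# or print the raw UUID of every partition on the drive
gpart list nvd0 | grep -E 'Name|rawuuid'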

I tried to create a VM on this pool via the GUI, but I got some weird error messages (sorry, I didn't copy them; I will provide them tomorrow). Do you see any mistake I made during the creation of the partitions above?

My next question: How can I add the striped L2ARC (for metadata only) to my existing pool? Because of the partitioning, I cannot use the GUI for this. The GUI only allows dedicating whole drives to each purpose (VM pool, L2ARC, ZIL, ...)

So far I have not been successful with this approach, so maybe you can help me with it, please?
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
The GUI only allows dedicating whole drives to each purpose (VM pool, L2ARC, ZIL, ...)
Yup. There is a reason for this (simplicity and ease of support/recovery - the same reason for dedicated boot drives). When you start trying to CLI this, the GUI gets confused, because what you are doing is not supported in TrueNAS.


ZIL is the wrong terminology; the ZIL always exists, inside the pool itself. What you are referring to is a SLOG, which just moves the ZIL onto a separate device. A SLOG is rarely useful outside of VM and database storage, and ONLY applies to sync writes, which CIFS does not issue by default.
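Whether a SLOG would even be exercised is governed by the per-dataset sync property; a quick sketch using standard zfs commands (the pool name is the OP's example):

Code:
# show the current sync-write policy of a dataset
zfs get sync optane_vm
# sync=standard: only honours explicit sync requests such as fsync() (where a SLOG helps)
# sync=always:   forces every write through the ZIL
# sync=disabled: never uses the ZIL, so a SLOG would sit idle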

persistent L2ARC for metadata for my main pool, to reduce access times for small files (e.g. thumbnails)
If you put your metadata on a special vdev, that would NOT reduce access times for small files. The files themselves would still be in the pool. All that would speed up is metadata operations; the file data will still have to come from disk.

In addition to that, often-used small files will be in ARC already, and you don't really have enough RAM to bother with L2ARC in the first place (IIRC 64 GB is the bare minimum - might be 128 GB, but I don't remember for sure).
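You can sanity-check that on your own box: FreeBSD exposes the ARC counters via sysctl, so you can see how well the existing RAM cache is already doing before buying any L2ARC. A rough sketch:

Code:
# current ARC size in bytes
sysctl kstat.zfs.misc.arcstats.size
# lifetime hit/miss counters; a high hit ratio means L2ARC would add little
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses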

I highly recommend just making your SSDs VM storage and continuing with life.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Additionally, I have a pair of Intel Optane NVMe drives (110 GB each), which I want to use for multiple purposes:
- mirror for VMs
- persistent L2ARC for metadata for my main pool, to reduce access times for small files (e.g. thumbnails)
- optional: mirror for ZIL
Typical newbie disease here: overthinking and overcomplicating things in order to use more ZFS features. (I know it well: I'm in the process of removing dedup on some of my datasets, whose primary copies reside on a pool with a 900p Optane partitioned for SLOG and persistent metadata L2ARC…)
As described, your main pool does not appear to require sync writes or to have any use for a SLOG. A VM pool on mirrored Optane certainly does NOT need a SLOG: the data vdev is already as good as, or better than, any dedicated SLOG. So keep it simple:
Drop the SLOG, and get a third SSD (not necessarily Optane) to act as a persistent metadata L2ARC for the main pool.
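If you go that route, persistent L2ARC and metadata-only caching are both plain settings in OpenZFS 2.0+. A hedged sketch, assuming the main pool is called "tank":

Code:
# let L2ARC contents survive a reboot (OpenZFS 2.0+; may already be the default)
sysctl vfs.zfs.l2arc.rebuild_enabled=1
# cache only metadata on the L2ARC device; set on the root dataset it is inherited pool-wide
zfs set secondarycache=metadata tank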

The only issue is having enough lanes… I suppose your HDDs are in an external shelf, so you need the -16e HBA and the 10 GbE NIC, which leaves just two x4 slots (from the chipset, so everything goes through the x4 uplink to the CPU).
Maybe using SATA SSDs on the chipset ports would be an easier solution, even if they perform worse than Optane.

My next question: How can I add the striped L2ARC (for metadata only) to my existing pool? Because of the partitioning, I cannot use the GUI for this. The GUI only allows dedicating whole drives to each purpose (VM pool, L2ARC, ZIL, ...)
Add the devices from the command line, using GPTIDs rather than /dev/nvdXpX.
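A minimal sketch of that, assuming the main pool is called "tank" and using the GPTIDs reported by glabel status (the IDs below are placeholders):

Code:
# add both 30G Optane partitions as L2ARC; cache devices are always
# independent (striped) - ZFS never mirrors L2ARC
zpool add tank cache gptid/XXX gptid/YYY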
But my real advice is above: Keep it simple, and within supported configurations.
 