Multiple vdevs on the same disks in a storage pool

leshankinson

Cadet
Joined
Jan 6, 2023
Messages
3
Hi,

I realize that the purpose of a vdev is to create a virtual disk out of many physical disks (so a pool of disks). However, is it possible within TrueNAS SCALE to create more than one vdev on the same set of hard disks? I need/want to create a SLOG, but it needs so little space that I don't want to use up two SATA or M.2 slots and waste a bunch of storage (since the SLOG uses so little and storage is cheap) if I don't have to.

Ideally, with a pool of two disks (let's say 250 GB), I would be able to put two vdevs side by side, or even just partition the vdev so part is used by the SLOG and part by something else that doesn't require a lot of IOPS on a regular basis (like some backup storage). Honestly, unless I just don't understand (newbie), this seems like a big limitation with TrueNAS and maybe ZFS... it would be great if a future version allowed partitioning of the pool. Please let me know if there is a workaround.

Thanks,
TrueNAS newbie
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
You can have as many vdevs in a pool as you like. To increase capacity or IOPS, for example. Pools made from a dozen mirrored pairs or a handful of RAIDZn vdevs are common. You cannot have a single disk in more than one vdev with TrueNAS. You can work with partitions in ZFS on the command line but the UI and middleware will get confused and you will not be able to use the UI for replacing a failed disk etc.
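To make the distinction concrete, here is a rough sketch of both situations. The pool name and device names are placeholders, `sgdisk` is assumed to be available, and the second half is exactly the kind of hand-rolled setup the TrueNAS UI will not track:

```shell
# SUPPORTED: one pool built from several vdevs, each vdev made of whole disks.
# "tank" and sda..sdd are placeholder names for your actual pool and disks.
zpool create tank mirror sda sdb mirror sdc sdd   # two mirrored pairs, one pool

# UNSUPPORTED by the TrueNAS UI/middleware: splitting one disk into
# partitions and using them for different purposes. Done by hand it
# looks roughly like this:
sgdisk -n1:0:+16G -t1:BF01 /dev/sde   # small partition, e.g. for a SLOG
sgdisk -n2:0:0    -t2:BF01 /dev/sde   # rest of the disk for data
zpool create data /dev/sde2
zpool add data log /dev/sde1
# The middleware does not know about these partitions, so a failed-disk
# replacement must also be done entirely on the command line.
```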

The more important things to consider are whether you need an SLOG at all and whether you have a suitable device. An SLOG is not a write cache. Repeat after me: an SLOG is not a write cache.

Do you have synchronous writes (iSCSI, NFS, VMs running directly on TrueNAS and sync set to "always", ...)? If the answer is no, an SLOG will not do you any good. It will simply never be used.

In case you really need an SLOG, consider that in normal operation this device will be constantly written to and never read. Anything less capable than a small Optane drive, and specifically consumer/"prosumer" SSDs, is not fit for this task.

You might want to read the ZFS Primer and the documents linked therein.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
I think ZFS doesn't make custom partition schemes like this easy partly because, for the most part, it's a niche use case and completely unnecessary with ZFS: you can create "partitions" (aka datasets) on demand, since ZFS is both a file system AND a volume manager at once.

I think what confuses people is that a dataset (for better or worse) is presented to the user as if it were just another folder/directory. In reality, it's a completely separate file system (kind of like a separate partition), which is why moving files from one dataset to another isn't instant (it's not a simple pointer-reassignment operation) and also why recursively shared datasets each require their own distinct NFS exports.
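The "partitions on demand" idea above can be sketched in a few commands. This is a minimal illustration, assuming a pool named `tank` (a placeholder), showing how a dataset behaves like a resizable partition with its own properties:

```shell
# Create a dataset for backups; it is a full file system, created instantly.
zfs create tank/backups

# Cap its size by policy, like a fixed-size partition, but adjustable later.
zfs set quota=200G tank/backups

# Properties apply per dataset, not per pool.
zfs set compression=lz4 tank/backups

# Each dataset appears with its own mountpoint and its own space accounting.
zfs list -r tank
```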
 

leshankinson

Cadet
Joined
Jan 6, 2023
Messages
3
Thanks for the responses. I have heard I don't need sync (and therefore don't need a SLOG). It is probably something iX should consider revising/qualifying in its documentation. As I understand it, the SLOG really only matters if there is a power outage, and you would lose only the write speed times the 5 seconds or so that the SLOG holds the data before it is dumped to disk... which is why SLOG drives need only be so small. Outside of a power outage, I don't really understand why sync would be important... maybe for family photos or business-critical data. There should be something to qualify this really wasteful need for two tiny SLOG SSDs/M.2s. Frankly, it is probably less expensive overall for many users to just buy more disk space and keep a backup copy of your backup for the really critical things... should the power actually go out.

I also think it would be really cool if iX could at least respect, if not create/manage, some sort of BIOS/UEFI/boot-level disk boundaries to allow for separate vdevs on the same set (e.g. RAIDZ) of physical disks. In theory, I could have two disks in TrueNAS in a mirror array and use half of the array for a data vdev for rarely used storage, and the other half for a log (SLOG), metadata, or cache (L2ARC) vdev. Or even a separate vdev to put all my TrueNAS/TrueCharts apps on (to keep them separate from my other data pool). My old QNAP (which is also not perfect), for instance, allows 2 separate/distinct pools on the same RAID 10 array of 4 hard disks. It would be nice in a future edition to have this managed by the TrueNAS OS interface (so officially, not unofficially, and not having it wiped out accidentally because it is unsupported).

Thanks again.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
business critical data
Well, yes.
I also think it would be really cool if iX could at least respect, if not create/manage, some sort of BIOS/UEFI/boot-level disk boundaries to allow for separate vdevs on the same set (e.g. RAIDZ) of physical disks. In theory, I could have two disks in TrueNAS in a mirror array and use half of the array for a data vdev for rarely used storage, and the other half for a log (SLOG), metadata, or cache (L2ARC) vdev. Or even a separate vdev to put all my TrueNAS/TrueCharts apps on (to keep them separate from my other data pool). My old QNAP (which is also not perfect), for instance, allows 2 separate/distinct pools on the same RAID 10 array of 4 hard disks. It would be nice in a future edition to have this managed by the TrueNAS OS interface (so officially, not unofficially, and not having it wiped out accidentally because it is unsupported).
What you say makes little sense. The system firmware does not play any role in this. Additionally, if you really want to use a disk for multiple pools (which is rarely productive), you're free to partition the disks and create the pools on your own, then import them in TrueNAS. Of course, you'd be on your own for disk replacements because there is no way in hell iX can predict what crazy setups might have been created.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@leshankinson

Some clarifications on what TrueNAS is and is not;
  • Is written for Enterprise users, (meaning people who pay for both hardware & software support)
  • Is not intended for low end servers, (but it can work on them!)
  • Sharing disks via partitions among multiple pools introduces complexities that Enterprise users don't need / want
  • iXsystems is making TrueNAS CORE & SCALE available for free, but does not generally accept feature requests that won't help their Enterprise (aka paying) customers. (You can still ask via "Report a Bug" at the top of every forum page.)
  • Sync is used to prevent SERIOUS corruption in things like VM storage & databases
  • SLOG is used to help stabilize Sync datasets & zVols
The last 2 are pretty important. VM storage & databases, (plus other things), can be totally corrupted by missing writes. This may mean a loss of a day or week's worth of new data. Thus, Sync is used. SLOG can speed up and stabilize Sync writes.
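As a rough sketch of how those last two points look in practice (pool and device names are placeholders, and adding an SLOG is only worthwhile with a suitable power-loss-protected device):

```shell
# Force synchronous semantics on a dataset holding VM disks or databases,
# so every write is committed to stable storage before being acknowledged.
zfs set sync=always tank/vms

# Attach a dedicated SLOG device so those sync writes land on fast,
# power-loss-protected flash instead of the main pool disks.
zpool add tank log /dev/nvme0n1

# A mirrored SLOG is safer for a pool you care about:
#   zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1

# The log vdev appears in its own section of the pool layout.
zpool status tank
```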

All that said, you can do pretty much anything you want with the software, and you can ask for advice about unusual things here. It is just that we, the free TrueNAS users, tend to want reliability over extreme tuning that could be lost on a software update. You could say we are data-loss averse.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
I don't really understand why Sync would be important
Sync is important if the system writing to the storage needs certain guarantees about data being committed to stable storage. As @Arwen wrote, that is mostly databases and hypervisor hosts with their virtual disks. Definitely not your family pictures. Regular file sharing writes to memory; the data is collected in what is called a transaction group, then all of it is written to disk.

If the system crashes or the power fails before the data is on stable storage, you still have the original on your desktop/phone/whatever, right? And the flushing to disk happens every couple of seconds. So five minutes after you batch-copied that huge picture collection, all will be well.
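The "every couple of seconds" above is the OpenZFS transaction-group timeout, which defaults to 5 seconds. On a Linux-based system like SCALE you can inspect it (a sketch; the path assumes the `zfs` kernel module is loaded):

```shell
# Seconds between forced transaction-group commits; the OpenZFS default is 5.
cat /sys/module/zfs/parameters/zfs_txg_timeout
```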

Compare to a hypervisor where a missed write might trash the entire virtual disk.

Now to your idea of sharing an SSD for multiple purposes - this will not work out as you think it will in most cases, because e.g. read IOPS drop to very low figures if the same device is simultaneously written to.
 

leshankinson

Cadet
Joined
Jan 6, 2023
Messages
3
Thanks for the replies. As mentioned, I am a newbie (and not an enterprise user). I appreciate the courteous clarifications and suggestions. My data might not be business-critical, but it is mine and important to me. I very much appreciate iX sharing this software with users like me (the free community).

thanks!
 