
TrueNAS 12.0 Features


kspare

Senior Member
Joined
Feb 19, 2015
Messages
405
Ah OK, I get you now. You left out the 4-NIC part earlier. We run 10 Gb and 40 Gb, so I've been lucky and haven't had to worry about that at all.

We're stuck with Veeam, so no going back there.

What vCenter integration are you using?
 

Lothian

Member
Joined
May 12, 2018
Messages
28
I have a question: will TrueNAS 12 be based on FreeBSD 12? Perhaps this is obvious, but I wanted to be clear about it.
 

jenksdrummer

Member
Joined
Jun 7, 2011
Messages
160
4x with 4 NICs in 4 subnets; in VMware you feed it the 4 IP addresses of the FreeNAS filer when you configure an NFS 4.1 datastore. That assumes you're on the same switch, or that your cross-connected switches have a big pipe/backplane. In VMware you have to tell it that for vlan1 it should use vnic0 as active and the others as standby, and repeat that logic for the three other NICs.
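For reference, a multipath NFS 4.1 datastore against several filer addresses can be created from the ESXi shell roughly like this; the datastore name, export path, and IPs below are made-up examples, not from my setup:

```shell
# Mount an NFS 4.1 datastore against all four filer addresses at once;
# ESXi multipaths across them. Names, path, and IPs are examples only.
esxcli storage nfs41 add \
    -H 10.0.1.10,10.0.2.10,10.0.3.10,10.0.4.10 \
    -s /mnt/tank/vmstore \
    -v nfs41-vmstore

# Verify the mount and its host list
esxcli storage nfs41 list
```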

Veeam is terrible, they do dirty snapshots. If you do the vCenter integration with FreeNAS, even the NFS snapshots are application-aware, as they talk to the VMware Tools.
Sorry, that doesn't seem to match my experience with Veeam - there are a couple of ways to use Veeam for VMware backups...

vCenter integration (i.e., point Veeam at the vCenter box with credentials) - this uses the VMware API that just about every backup product uses when interacting with VMware. If you have a VMware-based proxy deployed, it can then mount the snapped VMDK and back it up; otherwise, if you have a hybrid Hyper-V/VMware environment and the proxy is Hyper-V based (like my production environment), it uses the management interface on the ESXi host.

Storage snapshots - if you have a supported storage platform (there are a good number of them, though I think FreeNAS is not one), Veeam can trip a VM snapshot via the VMware API, discover which datastores correlate to the VM, take a storage snapshot, release the VM snapshot, back up from the storage snapshot, and then release the storage snapshot. Bonus points: it can batch based on what else is on the same storage. This can be ridiculously fast. One VM in our prod environment is about 10TB; it took 68 hours to complete its first full backup. Using a storage-based job, it completed inside of 12 hours with no other differences between jobs.

Agent-based - in a nutshell, treat it like a physical machine. The bonus here is that you can also restore to alternative locations, such as AWS/Azure/Hyper-V or pretty much whatever: build the destination machine, install the agent, target the restore, and let it rip.
 

jenksdrummer

Member
Joined
Jun 7, 2011
Messages
160
Microbursting (and having switches with big enough buffers) has been the biggest issue I've had with iSCSI; from anything this side of a Cisco 4948 I've had underwhelming performance. With FC, not so.

The non-automatic unmap - I presume you're referring to the VMware limitation in vSphere < 6.5 where you can only unmap from esxcli. They fixed that (IIRC) from 6.7 onwards; there's a page in the datastore configuration that lets you enable automatic unmap and set its priority, and it works for me.
6.5 has it too, though you have to be on the right VMFS version; i.e., migrate to new datastores.
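For anyone stuck on the older releases, a manual reclaim can still be kicked off per datastore from the ESXi shell; the datastore label here is just an example:

```shell
# Manually reclaim dead space on a VMFS datastore; on vSphere < 6.5
# this was the only way to unmap. The volume label is an example.
esxcli storage vmfs unmap --volume-label=datastore1

# Optionally limit the reclaim unit size (VMFS blocks per pass)
# to soften the I/O impact on a busy array:
esxcli storage vmfs unmap --volume-label=datastore1 --reclaim-unit=200
```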
 

Herr_Merlin

Member
Joined
Oct 25, 2019
Messages
89
Just noted that dual-port SAS drives will become an Enterprise feature, while they work perfectly fine with FreeNAS 11.x. That is, dual-port SAS drives connected to two HBAs work as desired: if I fail an HBA, the other takes over, and vice versa. Dual-port SAS drives offer a great way to achieve more redundancy and split the HBA load if implemented correctly.
This has worked with every HBA, backplane and external enclosure I have seen so far.
Is this feature really paid-only and limited to TrueNAS hardware?
Is there a way to buy a license for our own hardware?
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
259
Just noted that dual-port SAS drives will become an Enterprise feature, while they work perfectly fine with FreeNAS 11.x.
Hi Merlin,

There is no change in behavior: if it worked in FreeNAS, it will work in TrueNAS CORE.

"Dual-ported SAS" was shorthand for "dual storage controllers with SCSI reservations on dual-ported SAS drives". It's hard to fit that in a small cell.

Morgan
 

EtienneB

Junior Member
Joined
Feb 19, 2018
Messages
23
Do I understand correctly that a special metadata vdev is permanently attached? Contrary to L2ARC, which you can remove (I know it is not a similar kind of cache).
And will the metadata be backed up to the HDDs (when idle), to prevent losing the pool when the metadata vdev dies?
 

Patrick M. Hausen

Dedicated Sage
Joined
Nov 25, 2013
Messages
1,901
Yes and no. Yes, the metadata vdev is permanently attached, and no, the metadata is not backed up to any other storage. You are supposed to create a metadata vdev with a suitable level of redundancy.
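As a sketch, that looks like the following on the command line; pool and disk names are examples, not from any particular system:

```shell
# Add a mirrored special (metadata) vdev to an existing pool.
# Pool and device names are examples. Match the redundancy to the
# pool: losing this vdev loses the pool, so mirror it at least as
# well as the data vdevs.
zpool add tank special mirror da4 da5

# A three-way mirror for extra safety:
# zpool add tank special mirror da4 da5 da6

# The special vdev shows up in its own section of the status output:
zpool status tank
```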
 

EtienneB

Junior Member
Joined
Feb 19, 2018
Messages
23
Yes and no. Yes, the metadata vdev is permanently attached, and no, the metadata is not backed up to any other storage. You are supposed to create a metadata vdev with a suitable level of redundancy.
Thanks. Clear.
A RAIDZ-2 pool with just a mirror vdev for the metadata is then a reduction in security, effectively towards RAIDZ-1, if I understand correctly.
 

Herr_Merlin

Member
Joined
Oct 25, 2019
Messages
89
Or can we go with a 3-way mirror? I would have 3x 800 GB SAS ZeusIOPS drives free for that once 12 is released...
 

Patrick M. Hausen

Dedicated Sage
Joined
Nov 25, 2013
Messages
1,901

jasonsansone

Member
Joined
Jul 18, 2019
Messages
79
I apologize if this has already been asked. I don't see Persistent L2ARC on the feature list. Is that still slated for inclusion in TrueNAS 12? Is it in the beta? My understanding is that the commit has already been merged to OpenZFS master. https://github.com/openzfs/zfs/pull/9582
The BETA2 release article discusses persistent L2ARC. Does anything need to be set or configured or should it "just work"?

ZFS Persistent L2ARC: The L2ARC (flash-based read cache) is typically cleared on a controller reboot or failover. For smaller systems with less than a TB of L2ARC, that can be OK; for larger systems with 10TB of L2ARC, it may take hours or even days to rehydrate. The persistent L2ARC option avoids clearing the cache, allowing performance-sensitive systems to get back to full speed without delay.

Also, how can we schedule manual trim jobs?

ZFS Asynchronous TRIM: OpenZFS 2.0 includes asynchronous automatic and manual TRIM capabilities. Manual TRIMs can be scheduled overnight or each weekend to provide more performance during business hours.
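For what it's worth, the manual version is a single command in OpenZFS 2.0, so an overnight schedule can be as simple as a cron entry; the pool name here is an example:

```shell
# Start a manual TRIM of every top-level vdev in the pool
zpool trim tank

# Watch its progress in the status output
zpool status -t tank

# Example root crontab line: TRIM every Sunday at 02:00
# 0 2 * * 0 /sbin/zpool trim tank
```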
 

ornias

Senior Member
Joined
Mar 6, 2020
Messages
473
The BETA2 release article discusses persistent L2ARC. Does anything need to be set or configured or should it "just work"?
It's a "just works" thing, AFAIK... according to the GitHub discussions with the developers of persistent L2ARC :)


Something else:
@Kris Moore
Considering Zstandard compression is merged into OpenZFS 2.0 and dRAID might be integrated into OpenZFS 2.0 (depending on review feedback and bug testing), are you guys going to support those features too, or is TrueNAS CORE 12 going to ship without actual support for these key selling points (if dRAID makes the cut)?

*edit*
ZSTD is merged :D
 
Last edited:

jasonsansone

Member
Joined
Jul 18, 2019
Messages
79
It's a "just work" thing, afaik... according to the github discussions with the developers of persistent l2arc :)


something else:
@Kris Moore
Considering Zstandard compression is most likely to be integrated into OpenZFS 2.0 and dRAID might be integrated into OpenZFS 2.0 (depending on review feedback and bug testing), are you guys going to support those features too, or is TrueNAS CORE 12 going to ship without actual support for these key features (if dRAID makes the cut)?

I'm asking because Zstandard compression would require some significant GUI rework (considering the many levels, you might want to split the compression level from the algorithm selection) and dRAID would actually require a totally new GUI...

*edit*
Once we get Zstandard merged I might look into submitting a PR for the required middleware + GUI changes for TrueNAS... but don't hold me to it ;)
Thank you. That was my impression from the PR, but just checking. I’ve been tracking your work on the zstd PR. Awesome contribution. Looks very close.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
259
Persistent L2ARC is expected to get a button because, in some cases, users will prefer that it not be persistent. It may increase boot time with large L2ARCs.

Manual trims will be enabled via the CLI or a script in the short term. If someone has a reason to use this, please experiment and provide some feedback.
 

ornias

Senior Member
Joined
Mar 6, 2020
Messages
473
Persistent L2ARC is expected to get a button because in some cases, users will prefer it is not persistent. It may increase boot time with large L2ARCs.
Thanks Morgan. My comment indeed wasn't as complete as I would have hoped: it's "just works, but can be disabled", not simply "just works".
 

jasonsansone

Member
Joined
Jul 18, 2019
Messages
79
Persistent L2ARC is expected to get a button because in some cases, users will prefer it is not persistent. It may increase boot time with large L2ARCs.

Manual trims will be enabled via CLI or script in short term. If someone has a reason to use this, please experiment and provide some feedback.
Manual trim via the CLI works fine. Persistent L2ARC doesn't appear to be functioning; the L2ARC reset on reboot (see images).

Screen Shot 2020-08-24 at 11.11.18 AM.png


Screen Shot 2020-08-24 at 11.14.36 AM.png
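If anyone else wants to poke at this, persistent L2ARC in OpenZFS 2.0 is governed by the l2arc_rebuild_enabled tunable; the sysctl OIDs below are my assumption of how it's exposed on a FreeBSD-based build, so verify the names on your system:

```shell
# Check whether L2ARC rebuild (persistence) is enabled; 1 = on.
# Assumed FreeBSD sysctl name for OpenZFS's l2arc_rebuild_enabled.
sysctl vfs.zfs.l2arc.rebuild_enabled

# Compare the L2ARC size before and after a reboot: a persistent
# L2ARC should come back warm instead of starting from zero.
sysctl kstat.zfs.misc.arcstats.l2_size
```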
 

Yorick

Dedicated Sage
Joined
Nov 4, 2018
Messages
1,700
That's ARC, not L2ARC. There's no L2ARC on your pool from what I can see, or am I missing something?
ARC, rather obviously, can't be persistent, it being in DRAM and all.
 