ZFS Compatibility between FreeNAS & Linux


NASbox

Guru
Joined
May 8, 2012
Messages
644
Hi All

I noticed from discussions on this forum that @Arwen is using ZFS on her Linux laptop, so I decided to give ZFS on Linux a shot for my home desktop PC. So far so good, but what about interoperability with FreeNAS ZFS?

Since ZFS comes from two development streams (FreeBSD on one side, Linux/Debian/Ubuntu on the other), am I likely to run into problems if I want to send/receive ZFS snapshots (for backup/restore)? Does anyone have experience doing this? Am I likely to corrupt a pool doing this?

Maybe someone familiar with the process/politics of the upstream development could comment on the likelihood of long-term compatibility?

I've included some details of my current ZFS implementations below. Any guidance, suggestions or references on the web would be most helpful.

FreeNAS
Code:
dmesg | grep -i zfs
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
Trying to mount root from zfs:freenas-boot/ROOT/11.1-U6 []...

#> zpool get all TANK
NAME  PROPERTY                       VALUE                  SOURCE
TANK  health                         ONLINE                 -
TANK  version                        -                      default
TANK  bootfs                         -                      default
TANK  delegation                     on                     default
TANK  autoreplace                    off                    default
TANK  cachefile                      /data/zfs/zpool.cache  local
TANK  failmode                       continue               local
TANK  listsnapshots                  off                    default
TANK  autoexpand                     on                     local
TANK  dedupditto                     0                      default
TANK  dedupratio                     3.01x                  -
TANK  free                           23.3T                  -
TANK  allocated                      20.2T                  -
TANK  readonly                       off                    -
TANK  comment                        -                      default
TANK  expandsize                     -                      -
TANK  freeing                        0                      default
TANK  fragmentation                  3%                     -
TANK  leaked                         0                      default
TANK  feature@async_destroy          enabled                local
TANK  feature@empty_bpobj            active                 local
TANK  feature@lz4_compress           active                 local
TANK  feature@multi_vdev_crash_dump  enabled                local
TANK  feature@spacemap_histogram     active                 local
TANK  feature@enabled_txg            active                 local
TANK  feature@hole_birth             active                 local
TANK  feature@extensible_dataset     enabled                local
TANK  feature@embedded_data          active                 local
TANK  feature@bookmarks              enabled                local
TANK  feature@filesystem_limits      enabled                local
TANK  feature@large_blocks           enabled                local
TANK  feature@sha512                 enabled                local
TANK  feature@skein                  enabled                local


Linux
Code:
$ dmesg | grep -i zfs
ZFS: Loaded module v0.6.5.6-0ubuntu25, ZFS pool version 5000, ZFS filesystem version 5

$ zpool get all RAID
NAME  PROPERTY                    VALUE    SOURCE
RAID  health                      ONLINE   -
RAID  version                     -        default
RAID  bootfs                      -        default
RAID  delegation                  on       default
RAID  autoreplace                 off      default
RAID  cachefile                   -        default
RAID  failmode                    wait     default
RAID  listsnapshots               off      default
RAID  autoexpand                  off      default
RAID  dedupditto                  0        default
RAID  dedupratio                  1.00x    -
RAID  free                        789G     -
RAID  allocated                   139G     -
RAID  readonly                    off      -
RAID  ashift                      0        default
RAID  comment                     -        default
RAID  expandsize                  -        -
RAID  freeing                     0        default
RAID  fragmentation               10%      -
RAID  leaked                      0        default
RAID  feature@async_destroy       enabled  local
RAID  feature@empty_bpobj         active   local
RAID  feature@lz4_compress        active   local
RAID  feature@spacemap_histogram  active   local
RAID  feature@enabled_txg         active   local
RAID  feature@hole_birth          active   local
RAID  feature@extensible_dataset  enabled  local
RAID  feature@embedded_data       active   local
RAID  feature@bookmarks           enabled  local
RAID  feature@filesystem_limits   enabled  local
 

mjt5282

Contributor
Joined
Mar 19, 2013
Messages
139
Almost all ZFS implementations are downstream of OpenZFS: FreeBSD, ZFS on Linux, ZFS on Mac OS X. The major exception is Solaris's ZFS, which is closed source. My backup server can currently boot into FreeNAS 11.2-RC1 and Debian 9 (Stretch) with ZFS on Linux (ZoL). Debian ZoL can import and mount the main ZFS pool successfully. ZoL has many enhancements that aren't available in FreeBSD's implementation. I would suggest you try ZoL; it will at the very least be able to accept zfs send/receive streams.
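If you want to test that, something like this is the basic shape of it (an untested sketch; the snapshot name is made up, and the host/dataset names are examples based on the pools in your post):
Code:
# Take a snapshot on the Linux side, then stream it to FreeNAS over SSH
zfs snapshot RAID/home@backup1
zfs send RAID/home@backup1 | ssh root@freenas zfs receive -u TANK/backups/home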
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
As long as the features enabled are supported it should be compatible.
 

NASbox

Guru
Joined
May 8, 2012
Messages
644
Thanks very much @mjt5282 & @Stux for the input on your experiences and the info on ZFS development.

IIUC, the Linux version of ZFS is likely a bit more advanced than the version used in FreeNAS. It's good to know that I could likely mount a FreeNAS volume if I had to (say, because of hardware failure), but in my expected use case I would be moving data from Linux -> FreeNAS (backup to server), and very rarely from FreeNAS -> Linux (restore if a volume got corrupted).

As long as the features enabled are supported it should be compatible.
Can you elaborate please? Based on what I have shown above, it appears that the features are slightly different. How significant is that? Do I need to make changes?

Thanks in advance for any additional input/advice.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
Can you elaborate please? Based on what I have shown above, it appears that the features are slightly different. How significant is that? Do I need to make changes?
Recently added features (in FreeNAS 11.2-RC1) include the ability to remove a device from a pool; if you needed that, as an example, you would have to go to the release candidate to get it right now, whereas ZFS on Linux probably has it already.

Generally, good operating procedure would be to create the pools on FreeNAS and never do a zpool upgrade on the Linux side, only ever in FreeNAS, which would keep you operable on both all the time.

It's probably as simple as that. All reports I've seen say that pools are portable between OSes with no issue if the versioning is properly handled as I suggest above.
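If you want to see where each side stands before touching anything, a quick check is something like this (a sketch; the pool names are taken from the earlier posts):
Code:
# List every feature flag the running implementation supports
zpool upgrade -v
# Show which flags a given pool has, and whether they are enabled/active
zpool get all TANK | grep feature@    # on FreeNAS
zpool get all RAID | grep feature@    # on Linux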
 

mjt5282

Contributor
Joined
Mar 19, 2013
Messages
139
OpenZFS is very granular in specifying which options can be / should be enabled. For example, on Linux there is a big performance increase from storing the extended attributes in the inodes (xattr=sa). This would probably break compatibility with the FreeBSD version, so you have to be precise and disciplined if you are going cross-platform with (Open)ZFS.
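For what it's worth, xattr is an ordinary dataset property, so it's easy to check and control (a sketch; the dataset name is an example):
Code:
# 'on' (directory-based) is the portable default; 'sa' is the ZoL fast path
zfs get xattr RAID/home
# Opting in only affects newly written extended attributes
zfs set xattr=sa RAID/home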
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,600
@NASbox, it looks like your ZFS on Linux implementation is a bit old. I run 0.7.9 and have these additional pool feature flags available:

feature@multi_vdev_crash_dump
feature@large_blocks
feature@large_dnode
feature@sha512
feature@skein
feature@edonr
feature@userobj_accounting


However, since I don't need or use them, I try to create my Linux ZFS pools with them disabled (except large_blocks...), mostly so I don't use one just to find out that GRUB 2 won't like it and refuses to boot.

In regards to sending your Linux ZFS snapshots to FreeNAS, it looks like it would work just fine. Your Linux ZFS pool features are a subset of FreeNAS's pool features. Perfect for your normal use of backing up Linux to FreeNAS. Note that you don't want certain features used (and you can force that by leaving them disabled in the pool). Any feature that FreeNAS does not yet support, like the edonr checksum or large_dnode, should be avoided for now.
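One low-risk way to sanity-check a transfer before committing to it is a dry run on both ends (a sketch; the snapshot and host names are made up):
Code:
# Sending side: -n generates nothing, -v prints the estimated stream size
zfs send -n -v RAID/home@weekly
# Receiving side: parses a real stream but writes nothing to the pool
zfs send RAID/home@weekly | ssh root@freenas zfs receive -n -v TANK/backups/home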

Here are some links showing what is different (they can be a bit behind current):

http://open-zfs.org/wiki/Platform_code_differences
http://open-zfs.org/wiki/Features
https://docs.google.com/spreadsheets/d/1CFapSYxA5QRFYy5k6ge3FutU7zbAWbaeGN2nKVXgxCI/edit#gid=0
 

NASbox

Guru
Joined
May 8, 2012
Messages
644
Thanks @mjt5282 and @sretalla for the quick replies. You both make some excellent points, which make me wonder if it's practical to use ZFS send/receive or if I should just use rsync.


Recently added features (in FreeNAS 11.2-RC1) include the ability to remove a device from a pool; if you needed that, as an example, you would have to go to the release candidate to get it right now, whereas ZFS on Linux probably has it already.

Generally, good operating procedure would be to create the pools on FreeNAS and never do a zpool upgrade on the Linux side, only ever in FreeNAS, which would keep you operable on both all the time.

It's probably as simple as that. All reports I've seen say that pools are portable between OSes with no issue if the versioning is properly handled as I suggest above.

Here are the issues that I can imagine might occur - are there "practical" work-arounds?

Since I am using a "distro" and not building Linux from scratch, I might have an update come down through the package management system that "adds a feature".

What happens when I upgrade FreeNAS? Do I then have to blow away the pool on Linux and completely recreate it? That's time-consuming and puts a lot of data at potential risk if I screw up (which is possible, since I don't do that type of thing very often; I've removed a few files from snapshots, and I've replaced a bad drive and resilvered, but I've only had to blow away and replace a pool from backup once in the 4-5 years I've been a FreeNAS user).

OpenZFS is very granular in specifying which options can be / should be enabled. For example, on Linux there is a big performance increase from storing the extended attributes in the inodes (xattr=sa). This would probably break compatibility with the FreeBSD version, so you have to be precise and disciplined if you are going cross-platform with (Open)ZFS.

Given the very different pool configurations and workloads (-L-: Linux desktop, 2x1TB mirror; -F-: FreeNAS, 8x6TB RAIDZ2), differences between the pools make sense. If I need them completely identical, then this is a deal-breaker.

I've done a comparison of my two main pools as they are currently configured, and below are the differences. I assume that zfs send Linux -> FreeNAS -> Linux would result in corruption? (I'm afraid to try, as I only have my production systems!)

[Maybe I should add that I am unlikely to send/receive the "root" dataset, but rather I would be sending a bunch of small datasets, and the root is just a container for them.]

Code:
NAME  PROPERTY                       VALUE                  SOURCE
-F-   cachefile                      /data/zfs/zpool.cache  local
-L-   cachefile                      -                      default

-F-   failmode                       continue               local
-L-   failmode                       wait                   default

-F-   autoexpand                     on                     local
-L-   autoexpand                     off                    default

-F-   dedupratio                     3.01x                  -
-L-   dedupratio                     1.00x                  -

-F-   feature@multi_vdev_crash_dump  enabled                local
-F-   feature@large_blocks           enabled                local
-F-   feature@sha512                 enabled                local
-F-   feature@skein                  enabled                local

-L-   ashift                         0                      default


Thanks again, I appreciate the advice... should I just give up on zfs send/receive and use rsync, or is there a practical work-around that is maintainable for the long term?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
The area you need to focus on is the "feature" family of flags.

The features being "enabled" is not an issue; it means the pool supports them but is not using them. If they become "active", though, then the pool cannot be imported on a ZFS implementation that doesn't support them (or at least implement some manner of handling them, like ZoL's handling of multi_vdev_crash_dump by simply ignoring the presence of dumps that use it, IIRC).
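A quick way to see which flags have actually gone active on a pool (using the pool name from the first post):
Code:
# 'enabled' is harmless; 'active' is what ties the pool to an implementation
zpool get all TANK | grep feature@ | grep active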

dedupratio showing a 3x reduction makes me wonder what you're using it on though. ;)
 

NASbox

Guru
Joined
May 8, 2012
Messages
644
Thanks @Arwen, you were posting while I was writing my reply to the other posts. Your response is VERY helpful:

@NASbox, it looks like your ZFS on Linux implementation is a bit old. I run 0.7.9 and have these additional pool feature flags available:

feature@multi_vdev_crash_dump
feature@large_blocks
feature@large_dnode
feature@sha512
feature@skein
feature@edonr
feature@userobj_accounting

That's an unfortunate side effect of using a "derivative distro" - I'm not a guru (at least not yet) who can cope with a distro like Gentoo, so I have to wait for stuff to trickle down.

The only way I could find to determine the "version" was from dmesg, and the info on FreeNAS just says version 5000 (which I assume means OpenZFS); it doesn't add a separate line to dmesg with the version number like Linux does.

@NASbox
However, since I don't need or use them, I try to create my Linux ZFS pools with them disabled (except large_blocks...), mostly so I don't use one just to find out that GRUB 2 won't like it and refuses to boot.

I'm not brave enough to go ZFS boot since that's not what my distro uses, so I'm sort of stuck that way, as I need a stable system. I wish I could; OS snapshot/rollback would be absolutely awesome!!!

IIRC you are running ZFS on a fairly "generic" laptop with a single drive? HD/SSD?
Do you push backups of /home (and possibly other directories) to your FreeNAS box?

I currently have a 2x1TB ZFS mirror (a big improvement over mdadm RAID/LVM) for high-value data, 1x2TB ext4 for downloads and other "junk", and an ext4 boot drive with 2 partitions: one for the OS, and one for data. (This simplifies backup, as /home is super small, on one file system, and only contains "config data" that I didn't explicitly install and don't really know what is there.) I then use symlinks into the other drives for my data directories, i.e. /Documents, /Downloads etc., which point to the appropriate storage volume.

@NASbox
In regards to sending your Linux ZFS snapshots to FreeNAS, it looks like it would work just fine. Your Linux ZFS pool features are a subset of FreeNAS's pool features. Perfect for your normal use of backing up Linux to FreeNAS. Note that you don't want certain features used (and you can force that by leaving them disabled in the pool). Any feature that FreeNAS does not yet support, like the edonr checksum or large_dnode, should be avoided for now.

Sorry for my noobishness... I'm assuming that you mean I should explicitly run zfs set parameter=disable against my Linux pool?

Can you please be a bit more specific what features I should disable and/or give me a bit of guidance as to how I can figure it out myself?

According to smartctl, my mirror drives are:
Sector Sizes: 512 bytes logical, 4096 bytes physical
should I be enabling large_blocks?


Thanks for the references... I'll take a bit of time to study in detail. About that google doc... where did you find this reference? ... and ... am I correct in my interpretation that ZFS on Linux doesn't support TRIM?

Thanks again... I really appreciate the input/advice.
 

NASbox

Guru
Joined
May 8, 2012
Messages
644
Thanks for the reply @HoneyBadger, I was replying to another post while you were posting:

The area you need to focus on is the "feature" family of flags.

The features being "enabled" is not an issue; it means the pool supports them but is not using them. If they become "active", though, then the pool cannot be imported on a ZFS implementation that doesn't support them (or at least implement some manner of handling them, like ZoL's handling of multi_vdev_crash_dump by simply ignoring the presence of dumps that use it, IIRC).

Can you give me some guidance as to what I need to do? Are there specific things I need to disable?
Do you zfs send/receive data between Linux and FreeNAS?

dedupratio showing a 3x reduction makes me wonder what you're using it on though. ;)

Does dedupratio refer to the dedup option?

Your question caused me to check, and I only found one small test dataset with 4 child datasets with dedup=verify in the pool. I was doing some testing with web server backups (a bunch of WordPress sites, a lot of duplicate files) with dedup on. I got busy and didn't clean things up, but my recollection is that I didn't appear to save much if any space, so I just went with gzip-6 compression, which seemed to have a much bigger impact, but I don't remember the details.

The biggest share of the pool is my library of audio/video/work files - things that don't change very often. I also have a Windows box that I back up with Veeam - a full image backup every 30 days and a differential backup every night - and I keep about 70 snapshots. There may be a lot of redundant data there.

The server also contains a lot of rsync backups of a web server full of WordPress sites, which I snapshot afterward, and I keep a ton of snapshots so I can go back if something happens. Almost nothing changes on a daily basis, and a lot of files are duplicated since I am running multiple copies of WordPress.

Any idea what that is about from my description?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,600
So many questions...
...
The only way I could find to determine the "version" was from dmesg, and the info on FreeNAS just says version 5000 (which I assume means OpenZFS); it doesn't add a separate line to dmesg with the version number like Linux does.
...
Here is a way in Linux to get the ZFS version:
Code:
> modinfo zfs | grep -iw version
version:        0.7.9-r0-gentoo
> modinfo spl | grep -iw version
version:        0.7.9-r0-gentoo

...
I'm not brave enough to go ZFS boot since that's not what my distro uses, so I'm sort of stuck that way, as I need a stable system. I wish I could; OS snapshot/rollback would be absolutely awesome!!!
...
Not necessary, though the OS alternate boot environments are nice.
...
IIRC you are running ZFS on a fairly "generic" laptop with a single drive? HD/SSD?
Do you push backups of /home (and possibly other directories) to your FreeNAS box?
...
Yes, my laptop is a cheap Asus with a single drive. Right now it has a 1TB Samsung SSD. I carve out 2 partitions for each of "/boot", swap and the root pool (which includes my home directory). Then the remaining space is simply an un-mirrored pool for general dumping of files. The laptop was fine speed-wise until Meltdown. Now its performance sucks when performing updates (Gentoo style...).

As for /home, no, I don't bother pushing it anywhere. I do make 2 groups of snapshots on my /home: 24 hourly ones, and 7 day-of-week ones (taken hourly but overwriting the previous hour). Rsync backups are sent to my FreeNAS about once a month, which does keep snapshots of the prior backups.
...
I currently have a 2x1TB ZFS mirror (a big improvement over mdadm RAID/LVM) for high-value data, 1x2TB ext4 for downloads and other "junk", and an ext4 boot drive with 2 partitions: one for the OS, and one for data. (This simplifies backup, as /home is super small, on one file system, and only contains "config data" that I didn't explicitly install and don't really know what is there.) I then use symlinks into the other drives for my data directories, i.e. /Documents, /Downloads etc., which point to the appropriate storage volume.
...
Sounds like you have a decent setup, which you understand. Sometimes that's more important than fancy setups which are poorly understood.
...
Sorry for my noobishness... I'm assuming that you mean I should explicitly run zfs set parameter=disable against my Linux pool?
...
Some parameters allow you to do that. Others are "once enabled, always enabled". So I create the pool without any features and add back only the ones I want, similar to this (the pool name and devices follow the feature list):
Code:
zpool create -d -o ashift=12 \
  -o feature@async_destroy=enabled \
  -o feature@empty_bpobj=enabled \
  -o feature@lz4_compress=enabled \
  ...

...
Can you please be a bit more specific what features I should disable and/or give me a bit of guidance as to how I can figure it out myself?

According to smartctl, my mirror drives are:
Sector Sizes: 512 bytes logical, 4096 bytes physical
should I be enabling large_blocks?
...
No, you likely don't need large_blocks. That's for support of ZFS blocks greater than 128 KByte (if I remember correctly), not disk block sizes.

As for which feature flags to use on Linux, I would limit them to ones common between FreeNAS and the ZFS on Linux version you have available. Thus, make sure not to use ones like feature@edonr.
...
Thanks for the references... I'll take a bit of time to study in detail. About that google doc... where did you find this reference? ... and ... am I correct in my interpretation that ZFS on Linux doesn't support TRIM?

Thanks again... I really appreciate the input/advice.
I picked up that spreadsheet link from the ZFS leadership e-mail thread:
https://openzfs.topicbox.com/groups/developer/T9efc2d0ff44a52b0/second-openzfs-leadership-meeting

Correct, ZFS on Linux does not yet support TRIM.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
Can you give me some guidance as to what I need to do? Are there specific things I need to disable?

As mentioned by @Arwen, some features can be disabled on the fly, but most of them are set at pool creation time and will likely be stuck in their enabled state. For most of them you can avoid their becoming active by not using the feature, e.g. never use the sha512 or skein algorithms for checksumming, and don't set a recordsize greater than 128K on any dataset. For multi_vdev_crash_dump, though, you might be out of luck, although I believe ZoL has the ability to import a pool that's actively using it; it just ignores its presence.
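Keeping the per-dataset knobs on the portable side is easy to audit (a sketch; adjust the pool/dataset names):
Code:
# Check for anything that could activate a non-portable feature
zfs get -r checksum,recordsize RAID
# 'on' means the default fletcher4 checksum; 128K is the default recordsize
zfs set checksum=on RAID/home
zfs set recordsize=128K RAID/home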

Do you zfs send/receive data between Linux and FreeNAS?

No, I don't have any use-cases for this, and I make very limited use of ZoL in general.

Does dedupratio refer to the dedup option?

Yes, and I'm glad that you're only using it in a very limited space. ZFS deduplication eats memory like popcorn, so I'd suggest you turn it off on the datasets where it's enabled. If you're able, consider migrating the data off of them and destroying them entirely to free up the memory used by the deduplication tables.
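Note that zfs set dedup=off only affects new writes; blocks already in the DDT stay there until they are rewritten or destroyed. A sketch of the migration (the dataset names are made up):
Code:
# Stop deduplicating new writes
zfs set dedup=off TANK/webtest
# Copy the data to a fresh dataset, then destroy the old one to drain the DDT
zfs snapshot TANK/webtest@migrate
zfs send TANK/webtest@migrate | zfs receive TANK/webtest-new
zfs destroy -r TANK/webtest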
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175

NASbox

Guru
Joined
May 8, 2012
Messages
644
Thanks @Arwen, @HoneyBadger & @Ericloewe for the replies.

So many questions...
.....
Correct, ZFS on Linux does not yet support TRIM.
So many questions... that's what makes this stuff so difficult: there is often a lot of prerequisite knowledge, and I often find that I don't know what I don't know, or am not sure what question to ask, or it is cross-project and the active members of the forum may not be using the other project the way I need/want to. I will say the community here has been great, and I am very appreciative of the effort others have made to help!!!!

What are the implications of using ZFS on an SSD? I thought TRIM was vital to decent disk life?


Thanks... this makes all the other answers make sense now.

For multi_vdev_crash_dump, though, you might be out of luck, although I believe ZoL has the ability to import a pool that's actively using it; it just ignores its presence.

I'm thinking of sending backups from Linux->FreeNAS and very rarely (disaster recovery), sending the original back from FreeNAS->Linux. IIUC, that shouldn't be a problem? Am I correct, or am I missing something?

Yes, and I'm glad that you're only using it in a very limited space. ZFS deduplication eats memory like popcorn, so I'd suggest you turn it off on the datasets where it's enabled. If you're able, consider migrating the data off of them and destroying them entirely to free up the memory used by the deduplication tables.

Any way of assessing the memory consumption?
The dataset in question is very rarely used - does that memory ever get released if needed, or does the existence of a dataset with dedup load those tables even if the dataset is not accessed?

Putting all this together, it seems that FreeNAS has more features enabled than my version of Linux (these are "enabled" on FreeNAS and don't appear on my current Linux setup):

feature@multi_vdev_crash_dump
feature@sha512
feature@skein

Based on comments by @Arwen it appears as though feature@multi_vdev_crash_dump might be added in an upcoming update.

I started to dig into man zpool-features (relevant extracts below), and I'm missing the context with reference to com.example:feature_name; how do I make use of this info? I assume that I should be looking at these references?

(@Ericloewe is it appropriate/relevant to add a short 1/2 sentence note to your excellent resource?)

Any input/suggestions/recommendations are much appreciated.

man zpool-features from FreeNAS
Code:
   Identifying features
	 Every feature has a guid of the form com.example:feature_name.  The
	 reverse DNS name ensures that the feature's guid is unique across all ZFS
	 implementations.  When unsupported features are encountered on a pool
	 they will be identified by their guids.  Refer to the documentation for
	 the ZFS implementation that created the pool for information about those
	 features.

	 Each supported feature also has a short name.  By convention a feature's
	 short name is the portion of its guid which follows the ':' (e.g.
	 com.example:feature_name would have the short name feature_name ),
	 however a feature's short name may differ across ZFS implementations if
	 following the convention would result in name conflicts.

man zpool-features from Linux
Code:
   Identifying features
	 Every feature has a guid of the form com.example:feature_name.  The
	 reverse DNS name ensures that the feature's guid is unique across all
	 ZFS implementations.  When unsupported features are encountered on a
	 pool they will be identified by their guids.  Refer to the documentation
	 for the ZFS implementation that created the pool for information about
	 those features.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
I'm thinking of sending backups from Linux->FreeNAS and very rarely (disaster recovery), sending the original back from FreeNAS->Linux.
send and recv are compatible across versions of ZFS by default. You can enable some of the specific features manually (compressed send sends the blocks without decompressing them but requires support for the same compression, etc.).
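In practice the plain form is the safe cross-platform choice, and the compressed variant is opt-in (a sketch; the names are examples, and -c requires a ZoL/FreeNAS version new enough to support compressed send):
Code:
# Portable default: blocks are decompressed on send, recompressed on receive
zfs send RAID/home@weekly | ssh root@freenas zfs receive TANK/backups/home
# Compressed send: ships blocks as-is, so both sides must support the codec
zfs send -c RAID/home@weekly | ssh root@freenas zfs receive TANK/backups/home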

I'm missing the context with reference to com.example:feature_name
Those are just the formal names for the features. They use standard Sun syntax (popular in Java, for instance) of using the "responsible" entity's domain in reverse order. So, async_destroy was introduced by Delphix in Illumos, and Delphix's domain is delphix.com. So, the feature's GUID becomes com.delphix:async_destroy.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
What are the implications of using ZFS on an SSD? I thought TRIM was vital to decent disk life?

It is, and also to sustained performance under write load. Thankfully that only affects ZFS on Linux; the FreeBSD implementation of OpenZFS has functional TRIM support. You need to confirm that your HBA will pass the TRIM commands through, though, and strongly consider increasing the spare area on the SSD beyond the factory setting (to reserve some space for overhead).
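On the FreeNAS side you can confirm TRIM is on and actually reaching the devices (a sketch; these sysctl names are from FreeBSD's ZFS of this era and may differ between versions):
Code:
# 1 means ZFS TRIM is enabled
sysctl vfs.zfs.trim.enabled
# Counters for TRIMs issued, succeeded, failed or unsupported by the device
sysctl kstat.zfs.misc.zio_trim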

I'm thinking of sending backups from Linux->FreeNAS and very rarely (disaster recovery), sending the original back from FreeNAS->Linux. IIUC, that shouldn't be a problem? Am I correct, or am I missing something?

Shouldn't be a problem unless you cause a feature to become active by accident - it sounds like your ZoL version is rather old if it's lacking the features you mentioned; I'm going to guess 0.6.5

Any way of assessing the memory consumption?

zpool status -D poolname | grep dedup will result in a line similar to DDT entries X, size Y on disk, Z in core

Multiply "Entries" by "Size in Core" and your result is how much memory your deduplication table occupies in bytes.

The dataset in question is very rarely used - does that memory ever get released if needed, or does the existence of a dataset with dedup load those tables even if the dataset is not accessed?

The mere existence of deduplicated data in a pool will cause that table to be loaded into the metadata area of your ARC (by default, this is 1/4 of your total ARC size). If it's too big to fit into that area, Very Bad Things tend to happen: your pool being various levels of "intolerably slow to write to" if there's only a small overflow of the DDT or it lands in L2ARC, up to your pool being effectively un-importable due to a DDT that bloats beyond capacity, or the L2ARC device you were relying on deciding to fail and leaving you to walk through a multi-GB DDT from spinning disks for every write.
 

NASbox

Guru
Joined
May 8, 2012
Messages
644
Thanks @Ericloewe / @HoneyBadger for the replies.

It is, and also to sustained performance under write load. Thankfully that only affects ZFS on Linux; the FreeBSD implementation of OpenZFS has functional TRIM support. You need to confirm that your HBA will pass the TRIM commands through, though, and strongly consider increasing the spare area on the SSD beyond the factory setting (to reserve some space for overhead).

Shouldn't be a problem unless you cause a feature to become active by accident - it sounds like your ZoL version is rather old if it's lacking the features you mentioned; I'm going to guess 0.6.5

My problem is likely to be with Linux, if I set up my OS on an SSD.

On FreeNAS I have a 120G HP Drive that I use for the system, the database and a small custom dataset for maintenance/scripts. There is currently about 100G free which I would assume would make the endurance of the SSD far exceed the life of the hardware it is running on? (Obviously anything could fail at any time, but based on expected/rated wear and tear for the workload given all the unallocated space for wear leveling.) Or do I need to do something to make sure proper wear leveling can take place?

zpool status -D poolname | grep dedup will result in a line similar to DDT entries X, size Y on disk, Z in core

Multiply "Entries" by "Size in Core" and your result is how much memory your deduplication table occupies in bytes.

The mere existence of deduplicated data in a pool will cause that table to be loaded into the metadata area of your ARC (by default, this is 1/4 of your total ARC size). If it's too big to fit into that area, Very Bad Things tend to happen: your pool being various levels of "intolerably slow to write to" if there's only a small overflow of the DDT or it lands in L2ARC, up to your pool being effectively un-importable due to a DDT that bloats beyond capacity, or the L2ARC device you were relying on deciding to fail and leaving you to walk through a multi-GB DDT from spinning disks for every write.

This is what I got from the command you recommended:
dedup: DDT entries 11734, size 1270 on disk, 191 in core

IIUC, and if I've done the math correctly, that is a bit over 2M, which shouldn't be a big deal on a system with 32G and a light workload.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
On FreeNAS I have a 120G HP Drive that I use for the system, the database and a small custom dataset for maintenance/scripts. There is currently about 100G free which I would assume would make the endurance of the SSD far exceed the life of the hardware it is running on? (Obviously anything could fail at any time, but based on expected/rated wear and tear for the workload given all the unallocated space for wear leveling.) Or do I need to do something to make sure proper wear leveling can take place?

For your boot volume you shouldn't be seeing enough writes to make hardware over-provisioning really necessary. Keeping 100GB free should give it more than enough extra area to cycle about merrily, as well as having TRIM to clean it up when needed.

This is what I got from the command you recommended:
dedup: DDT entries 11734, size 1270 on disk, 191 in core

IIUC, and if I've done the math correctly, that is a bit over 2M, which shouldn't be a big deal on a system with 32G and a light workload.

Correct. Looks like your dedup data is very small and hardly impacting things at all. Keeping it off is the safe bet though.
 

usergiven

Dabbler
Joined
Jul 15, 2015
Messages
47
Has anyone heard this news about rebasing FreeBSD's ZFS on ZoL? I don't really understand it, but my first thought was: what is going to happen to FreeNAS?

The future of ZFS in FreeBSD
 