Writing drops to 0MB/s, DM-SMR, Z1, LZ4, dedup

draand28

Cadet
Joined
Feb 10, 2022
Messages
3
Hello!
I'm a TrueNAS newbie, just got up and running my first machine a few days ago.
I'm having write issues over SMB on my new NAS. Whenever I try to write files over 1GB, the write speed falls to 0MB/s for minutes, then comes back up to 5-20MB/s for a few seconds, and then falls back to 0 again for some more minutes. This loop repeats constantly, and it kinda bothers me as it takes about 30 hours to copy 900GB to the NAS.
At that point I had only 4GB of RAM, so I thought this would be normal, especially with DM-SMR drives, LZ4 and dedup, all while using Z1 (basically the worst-case scenario), so I upgraded to 20GB, but the RAM isn't even being used, nor is the CPU. I am aware that I need about 5GB of RAM per TB for dedup, but to be honest, I don't think that's the problem here.
I read some topics here for a while, but other than the RAM, I can't put my finger on what the issue could be.
On the other hand, reads work great, fully saturating the 1gbps link (soon to be 10gbps, as the NAS is my only machine without 10gbps).
In the end I don't really mind if it's that slow, as I'm using it for archiving purposes, but of course I would be more than happy to get some actually decent write speeds. I really wanted to use dedup, as my use case includes a lot of duplicates scattered all over the place and I save quite a lot of space with dedup.
I tried without dedup, only LZ4 compression, and it works great: a constant 113MB/s (basically a full 1Gbps).
I also tried to measure the actual speed of the array with dedup+LZ4, with LZ4 only, and without compression, but it is always the same, 3.6GB/s (I'm probably using dd wrong).
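I suspect that 3.6GB/s mostly means dd was writing easily compressed data straight into RAM rather than to the disks. As a rough sketch of a more honest test (the dataset path below is just a placeholder, not my actual layout), something like this should take LZ4 and the write cache out of the picture:

# write incompressible data and force it out to disk, so LZ4 and the
# ZFS write cache can't make the result look faster than the drives are
# (note: /dev/urandom itself can bottleneck, but it is still far more
# realistic than /dev/zero)
dd if=/dev/urandom of=/mnt/pool/test/ddtest.bin bs=1M count=4096 conv=fsync

# read it back, ideally after a reboot or pool export/import, so the
# file isn't simply served from the ARC
dd if=/mnt/pool/test/ddtest.bin of=/dev/null bs=1M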
[Attached image: WhatsApp Image 2022-02-10 at 10.46.51 AM.jpeg]


Specs:
RAIDZ1 with 4x4TB DM-SMR drives (probably part of the issue) with LZ4 compression and deduplication (this kinda is the issue)
120GB 2.5-inch Hitachi HDD for boot (I really didn't have any spare drives left; I will upgrade this to a SATA SSD when I have the money). On that note, I noticed that while writing to the NAS, when the speed drops to 0, the built-in WebGUI Shell and Reporting graphs don't load at all; might that have something to do with the boot drive?
4+16GB DDR4 2400MHz (I know it's not really dual channel, but at the moment I am very budget limited)
Intel Pentium G4400
No HBA, just using the built-in motherboard SATA ports (some ASRock H110M board).
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I'm having write issues over SMB on my new NAS. Whenever I try to write files over 1GB, the write speed falls to 0MB/s for minutes, then comes back up to 5-20MB/s for a few seconds, and then falls back to 0 again for some more minutes. This loop repeats constantly, and it kinda bothers me as it takes about 30 hours to copy 900GB to the NAS.

This is totally expected with, and characteristic of, DM-SMR hard drives, which do not play well with ZFS. ZFS has a large write buffer; it fills up, and when it flushes to disk much more slowly than expected, you get throttled to zero, because the hard drives are busy slowly writing out transactions that should have gone through quickly but aren't, thanks to DM-SMR.
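If you want to watch it happen, something like the following (with "tank" standing in for your actual pool name) will show the write rates collapsing while a copy is stalled:

# show per-vdev I/O once per second while a stalled SMB copy is running;
# with DM-SMR drives you will typically see short bursts followed by long
# stretches of near-zero write throughput while the drives rewrite
# shingled zones internally
zpool iostat -v tank 1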

At that point I had only 4GB of RAM

The minimum requirement for TrueNAS is 16GB per the download page.

For dedupe, you need an additional 5GB per TB of disk space, so with your 4x4=16TB pool, you should really have no less than 32GB. This isn't optional or suggested, it's actually needed in order to accommodate the dedupe tables. It won't use it all right away, but it is needed as the amount of data stored grows.
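If you want a rough idea of what the dedup table actually costs on a given pool, a couple of commands are worth knowing (again, "tank" is just a placeholder for your pool name; on TrueNAS you may need to point zdb at the system's zpool.cache with -U):

# -D adds dedup table (DDT) statistics: entry counts and the in-core
# size per entry, which is what ultimately has to fit in RAM
zpool status -D tank

# simulate dedup on a pool that isn't deduplicated yet and print a DDT
# histogram; this walks the whole pool, so it is slow and I/O heavy
zdb -S tank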

In general, we do not recommend the use of dedupe because no one wants to throw the needed resources at it. @Stilez has an excellent writeup of what it took to make dedup successful:


writing to the NAS, when the speed drops to 0, the built-in WebGUI Shell and Reporting graphs don't load at all; might that have something to do with the boot drive?

It probably has to do with the UI calling some ZFS functions that end up getting blocked waiting for the transaction group being written to flush first.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Oh wow, that got bumped up to 16 GB? Not a bad idea, all things considered.

There are still references to 8GB floating around, including in the "official" docs.

As I am the person who originally bumped the requirement up to 8GB years ago, you can probably guess that I am tired of butting heads with people who feel that 8GB was merely a recommendation, or is somehow onerous.

Every single Galaxy S20 comes with 12GB of RAM. The ASUS ROG Phone 5 Ultimate comes with 18GB of damn RAM. Those are frickin' cell phones.
This is not 2005 and 8GB of RAM is no longer $1800 (per some of our distributor invoices).

Quite frankly, I run FreeNAS in virtualized environments and generally that's at least 12GB, because I found 8GB to be a bit nonperformant. I'm probably not going to go to 16GB yet, simply because I expect some of that requirement is driven by SCALE services like Gluster and containers/VMs/etc., but I can easily add dozens of GB with just a knob twist and a reboot, yay virtualization.

The middleware design and competition for system memory always made 8GB a bit of a compromise. I had no problem going to bat with iXsystems for the 8GB change because I was able to produce a list of people with 4GB APUs who had trashed a ZFS pool, and some 6GB examples too. I'm pretty sure the pool loss/panic issues have been solved for years, but I had been warily observing the growth of other things on the system, and the Linux default tuning for ARC size is another questionable variable as well.
 

draand28

Cadet
Joined
Feb 10, 2022
Messages
3
Thank you so much for your thorough and quick answer. I have now also read the entire post you shared about deduplication, to understand it better. I think I'll keep dedupe off when I'm transferring data that wouldn't benefit much from it and turn it on when it would.
Also I'll probably upgrade to 2x16GB of RAM in the near future.

One more question, if you don't mind: is there any way I can smooth out the transfer speed when using dedupe, so that I can get a somewhat accurate ETA?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
What kind of duplicate data do you have that makes dedup worth the hassle, and that you can't manually deduplicate using generic filesystem tools that find duplicates?
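For example (assuming the data sits somewhere you can run one of the usual duplicate finders, and with made-up paths), something like fdupes or jdupes can find identical files and collapse them into hard links:

# list sets of identical files under a directory tree
fdupes -r /path/to/backups

# or replace duplicates with hard links to a single copy
jdupes -r -L /path/to/backups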
 

draand28

Cadet
Joined
Feb 10, 2022
Messages
3
To be honest, I didn't think to use a generic tool to deduplicate the files before moving them to the NAS. That probably would've been smarter. One issue with it, though: I believe it won't really work, as the total size of all my backups exceeds 4TB and I don't have SSDs or non-SMR HDDs of that size to do a full dedupe locally, where it would be faster.
I usually store backups of VMs, photos and videos (identical files, but a lot of them are duplicated many times over).
On the other hand, is there a cronjob script for doing a dedupe when the system is idle or at a certain time interval?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
ZFS deduplication is so painful to use that I wouldn't worry about not having SSDs to speed up the process.
On the other hand, is there a cronjob script for doing a dedupe when the system is idle or at a certain time interval?
@jgreco wrote something about that the other day; he might remember the context and ease the search.
 