Slow write speeds

Lipe123

Dabbler
Joined
Dec 14, 2022
Messages
13
I read a ton of threads on this, and almost every one turned out to be SMB or general networking issues. I ruled those out and still have no luck.

I have an older i7-2600k CPU and 16Gb of RAM

Using some random 250GB SSD as boot drive and 3 Silicon Power 2Tb SSD's as the raid-z pool

Using DD I get:
Code:
dd if=/dev/zero of=/mnt/DupPool/SMB/testfile bs=1G count=100
100+0 records in
100+0 records out
107374182400 bytes (107 GB, 100 GiB) copied, 33.1105 s, 3.2 GB/s

I specifically went for something large to try to get past issues like write caching.
I get the same ~3GB/s with every size of DD test I'm doing.

Using iperf to the server I get 940Mbs which seems perfectly normal for 1Gb ethernet.

When I try and transfer a bunch of video files to the NAS write speeds hover around 30-40Mb/s.
After a while the transfer completely stalls and then picks back up to 20Mb/s again for a few seconds.
I tried SFTP as a comparison and that was around the same speed, so it's not just an SMB thing.

CPU is nowhere near 100% and RAM usage looks fine too.

If DD shows good speeds, does that mean the drives are fine and something else is a problem, or is that not a true test of the drive performance?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
not a true test of the drive performance?

That's a completely awesome test of the compression speed your CPU is capable of. Zeroes compress very nicely at 3.2GB/sec.

Please post a description of your hardware, especially including your specific motherboard and what sort of ethernet chipset you're using. Many times, performance problems result from the use of gamer-grade or desktop-grade boards that have poor performance characteristics. We can't help you if you don't tell us what you're using.

Further, it's hard to tell what you're saying here because you're not conveying units accurately. You say you have "16Gb" of RAM, which corresponds to 2GBytes; you talk about "2Tb" SSD's which is a unit that NO one measures SSD's in, you don't mention whether these are SATA SSD's or NVMe, and you use the horrifying abbreviation of "940Mbs" which is only clear from context that you mean Mbit/sec.

Please stop over at the Terminology and Abbreviations Primer and then please try to use either standard abbreviations, or at least unambiguous ones. This will make it easier to determine whether you really meant that you were seeing speeds slow down to 20 megabits per second, or if you meant 20 megabytes per second. These are very different things.
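
For reference, a rough conversion between the two (back-of-envelope, ignoring protocol overhead):

Code:
940 Mbit/s ÷ 8 bits/byte ≈ 117 MByte/s   (gigabit line rate, i.e. the iperf result)
 20 Mbit/s ÷ 8           =  2.5 MByte/s
 20 MByte/s × 8          = 160 Mbit/s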
 

Lipe123

Dabbler
Joined
Dec 14, 2022
Messages
13
Motherboard: Asus MAXIMUS IV GENE-Z/GEN3
Ram: 16 GB (2x8)
CPU: Core(TM) i7-2600K CPU @ 3.40GHz
SSD: 3x SATA Silicon Power A58 2TB (https://www.silicon-power.com/web/product-A58) in a RAID-Z1 configuration
NIC: Onboard Intel card, I'm not exactly sure what the model is.

Already confirmed with iperf3 that the ethernet speed is 940Mb/s which should rule out simple things like a rogue 10/100 switch or something similar. Unless of course iperf3 also does not give any useful information.

So with that out of the way, you said using DD just writes 0's and means nothing? (it was hard to tell under all the judgement and sarcasm)
I have looked all over these forums and the internet for a way to measure the disk performance, and everyone recommended that method. If there is a better way, can you please point me in the right direction?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Unless of course iperf3 also does not give any useful information.
all the judgement and sarcasm

Please lose the inappropriate attitude. One of my functions as a moderator is to steer discussions in a productive direction; when people are not communicating clearly, you do yourself a disservice because many forum members will just skip over your post rather than trying to clarify.

So with that out of the way, you said using DD just writes 0's and means nothing?

ZFS compresses blocks of data prior to writing them to disk. You either need to use incompressible data, which I view as a bad idea, since ZFS will still make the attempt to compress the data, or disable compression, in which case writing zeroes will happen at whatever speed the underlying hardware is capable of.
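
If you do want to test with incompressible data anyway, something like this works as a sketch (reusing the path from your earlier test; note that /dev/urandom generation can itself be a bottleneck, and ZFS still spends CPU attempting to compress it):

Code:
# incompressible test data; urandom read speed may cap the result
dd if=/dev/urandom of=/mnt/DupPool/SMB/testfile bs=1M count=10240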
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
So with that out of the way, you said using DD just writes 0's and means nothing?

Yes; DD will only write zeroes. You can quickly disable compression (for the purposes of testing, I strongly suggest keeping it on for almost every real-world workload) by using the command zfs set compression=off poolname/datasetname from a shell.
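
For example, assuming your earlier dd target corresponds to a dataset named DupPool/SMB:

Code:
zfs get compression DupPool/SMB      # note the current setting first
zfs set compression=off DupPool/SMB  # disable for the test
# ...re-run the dd test...
zfs set compression=lz4 DupPool/SMB  # restore afterwards (lz4 is the usual default)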

I have looked all over these forums and the internet for a way to measure the disk performance, and everyone recommended that method. If there is a better way, can you please point me in the right direction?

DD is great for filesystems that don't do compression, and for testing sequential access such as your "write a bunch of movie files" test. A more adjustable option is fio although it requires a bit more research to identify a pattern that will accurately represent your actual workload.
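
As a rough starting point, a sequential-write job along these lines approximates the large-video-file case (a sketch only; the directory is assumed from your earlier dd tests):

Code:
fio --name=seqwrite --directory=/mnt/DupPool/SMB \
    --rw=write --bs=1M --size=4G --numjobs=1 \
    --ioengine=posixaio --group_reporting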

In your case, disabling compression first and then running the DD command again should suffice. Although the A58 is a DRAMless TLC drive, I'd expect sustained performance from three of them in a Z1 to be well above gigabit networking speeds.
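
Back-of-envelope, using the rated write figures from Silicon Power's product page:

Code:
3-wide RAID-Z1 ≈ 2 disks' worth of data bandwidth on streaming writes
2 × ~450 MB/s rated sequential write ≈ ~900 MB/s theoretical ceiling
gigabit Ethernet ≈ 117 MB/s          -> nearly an order of magnitude of headroom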
 

Lipe123

Dabbler
Joined
Dec 14, 2022
Messages
13
Thanks HoneyBadger

I tried a Fio command from another thread and it caused the entire system to lock up, had to power cycle it to get it back haha.

I disabled compression to test as 90% of the data I plan to store is video and photo content that does not really compress.

I used rsync to sync one folder with about 2.5GB of video files to another; initially transfer speeds are around 150-200MB/s, but after about 5 seconds it drops down to 1-2MB/s and the entire system becomes completely unresponsive.
The reporting screen in the GUI with the graphs just stops updating completely.
Using DD now with compression disabled I get way better results than my rsync test, but I'm not sure if that means there is hope yet or if I just need to give up on this project.

Code:
root@truenas[~]# dd if=/dev/zero of=/mnt/DupPool/SMB/testfile bs=1G count=10
10+0 records in
10+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 31.4165 s, 342 MB/s
root@truenas[~]# dd if=/dev/zero of=/mnt/DupPool/SMB/testfile bs=1G count=20
20+0 records in
20+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 53.0126 s, 405 MB/s


Honestly I'm confused right now: I manually ran "zpool trim <poolname>" and for whatever reason things are working fine now.
I don't see any option in the pool settings to enable trim, I only see it on the disks.
While typing this I managed to transfer 18GB of data at 45-50MB/s consistently.

I'm cautiously optimistic that it's somehow related to trim, but I'm not sure where/how to schedule it to run every day.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Thanks HoneyBadger

I tried a Fio command from another thread and it caused the entire system to lock up, had to power cycle it to get it back haha.

That's concerning to me, and suggests there's bad or incompatible hardware in play. Have you run something like memtest and checked your CPU temperatures/thermal paste? The 2600K is pretty "vintage" at this point. A heavy workload could cause a slow or "temporarily unresponsive" system, but a full lock-up requiring a power cycle isn't normal.

I disabled compression to test as 90% of the data I plan to store is video and photo content that does not really compress.

I used rsync to sync one folder with about 2.5GB of video files to another; initially transfer speeds are around 150-200MB/s, but after about 5 seconds it drops down to 1-2MB/s and the entire system becomes completely unresponsive.
The reporting screen in the GUI with the graphs just stops updating completely.

Was this an internal rsync? I'd like to come back to:

Using some random 250GB SSD as boot drive

If the boot device chokes up, it might cause the UI to stall out. There are a couple of controllers that historically haven't played nice with FreeBSD's TRIM implementation - I believe they were the Silicon Motion brand often used in WD's Green SSD line, among others - but autotrim is disabled on the boot pool now. What model SSD is it?
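
You can check what's actually set from a shell (pool names here are the TrueNAS defaults; yours may differ):

Code:
zpool get autotrim boot-pool   # boot pool; expect "off"
zpool get autotrim DupPool     # data pool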

Using DD now with compression disabled I get way better results than my rsync test, but I'm not sure if that means there is hope yet or if I just need to give up on this project.

Code:
root@truenas[~]# dd if=/dev/zero of=/mnt/DupPool/SMB/testfile bs=1G count=10
10+0 records in
10+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 31.4165 s, 342 MB/s
root@truenas[~]# dd if=/dev/zero of=/mnt/DupPool/SMB/testfile bs=1G count=20
20+0 records in
20+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 53.0126 s, 405 MB/s


Honestly I'm confused right now: I manually ran "zpool trim <poolname>" and for whatever reason things are working fine now.
I don't see any option in the pool settings to enable trim, I only see it on the disks.
While typing this I managed to transfer 18GB of data at 45-50MB/s consistently.

I'm cautiously optimistic that it's somehow related to trim, but I'm not sure where/how to schedule it to run every day.

45-50MB/s is still well below what you should get on sequential writes to HDDs, let alone SSDs. Again, I know they're DRAMless TLC, so not exactly a "racecar" kind of speed, but that's still an extremely poor rsync/copy result.

I do have to ask - your pool is named "DupPool" ... you aren't using deduplication, are you?
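
If you want to double-check from the shell:

Code:
zfs get -r dedup DupPool   # should read "off" for every dataset
zpool status -D DupPool    # -D prints dedup table stats if a DDT exists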

Is your SATA controller set to RAID mode in the BIOS, or AHCI? Your DD results seem to suggest it isn't in RAID mode.
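
From the shell, something like this can confirm how the disks are attached (a sketch; the second command applies on CORE/FreeBSD):

Code:
dmesg | grep -i ahci     # the AHCI driver attaching means the controller is in AHCI mode
camcontrol devlist -v    # on CORE: buses should show as ahcich*, not a RAID device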
 

Lipe123

Dabbler
Joined
Dec 14, 2022
Messages
13
That's concerning to me, and suggests there's bad or incompatible hardware in play. Have you run something like memtest and checked your CPU temperatures/thermal paste? The 2600K is pretty "vintage" at this point. A heavy workload could cause a slow or "temporarily unresponsive" system, but a full lock-up requiring a power cycle isn't normal.
I did a full memtest run, yes. First I had a 3770K given to me that turned out to be bad, and it took me a day of struggles with that; memtest was part of that diagnostic round. The 2600K IS old, yes, but it's served me well; I've used the machine as a game server for a few years now with no issues.
I re-applied new thermal paste and CPU temps reach a max of around 60°C running TrueNAS.

Was this an internal rsync? I'd like to come back to:
If the boot device chokes up, it might cause the UI to stall out.
Yes, just from one folder in the pool to another. The boot drive is not horrible, it's a Corsair Force LS 250GB drive. I used the same disk before to run a Debian setup with the game server thing.

I do have to ask - your pool is named "DupPool" ... you aren't using deduplication, are you?

Is your SATA controller set to RAID mode in the BIOS, or AHCI? Your DD results seem to suggest it isn't in RAID mode.
No deduplication, just a fun coincidence with a last name.
The SATA controller is set to AHCI.
The DD results were in line with the drive spec sheet:
  • Performance Read (max.): ATTO up to 560MB/s, CDM up to 500MB/s
  • Performance Write (max.): ATTO up to 530MB/s, CDM up to 450MB/s
Honestly I'm 100% fine with 50MB/s; it's just to store old family photos and stuff. I was able to transfer close to 100GB last night at that sustained speed, the GUI remained responsive, and while the transfer was happening I could still access other folders via SMB in Windows with no latency issues. I don't know if the unscheduled power cycle did anything to help (I did reboot the system normally a few days ago already) or if the zpool trim command was the ticket.

I saw another thread somewhere here complaining about trim on the pool and issues with it; their solution was to disable autotrim and put the zpool command into a weekly cron job. I've done the same on my setup now.
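
For anyone finding this later, the raw crontab line looks something like this (the schedule is just an example; TrueNAS would normally have you create it under Tasks → Cron Jobs in the GUI):

Code:
# trim the pool every Sunday at 03:00
0 3 * * 0 zpool trim DupPool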
I don't see any option in the pool settings to enable trim, I only see it on the disks.
I lied here and was just confused, but I could not edit my post as it was awaiting moderation.
The option was enabled on the pool and there is no option on the disks.
What I am a little confused by is how the boot drive gets trimmed; I only see it in the Disks section but nowhere else in the GUI.

TL;DR: I'm happy now; the system remains responsive and I'm able to transfer a large amount of data at reasonable speeds.
 