Seagate IronWolf 6TB Write Speeds Below Single-Disk Speed on SMB Transfers

dizydre21

Dabbler
Joined
Apr 10, 2023
Messages
15
Hello,

The title says it all. In TrueNAS SCALE, I have a two-drive mirrored pool for testing, with more drives to be added when they arrive. I am seeing 270MB/s transfers until the cache flushes, and then the transfers dip below 100MB/s, sometimes as low as 30 or 40MB/s. They never really stabilize at one speed. The drives are rated for 190MB/s on single-disk transfers, but it is not clear whether that is read only or includes write speed. I am using Intel 2.5GbE NICs, and I know people tend to blame these, but all of my testing points to them being just fine. The reasons are below.

  • I have run iperf3 between machines with the i225 NICs and it pegs 280MB/s until I stop it. Tested for several minutes at a time.
  • I created a single-disk pool with a SATA SSD. Transfers to and from the same machines to the SSD pool pegged 280MB/s for the entire 20+GB transfer.
  • I ran several of the fio tests from this website: https://forums.lawrencesystems.com/t/linux-benchmarking-with-fio/11122
    • Speeds looked normal. I can retest and paste in the results if needed. I am mostly concerned with larger file transfers such as movies and other media. (The rough shape of these tests is sketched just after this list.)
  • I have sent files to my backup Synology NAS, and large files peg 110MB/s because it has a 1GbE NIC. Those speeds do not really fluctuate at all.
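For reference, the rough shape of the network and fio tests looks like this (the server IP, the pool name "tank", and the test directory are placeholders for whatever your setup uses):

  # Network throughput; run "iperf3 -s" on the other machine first
  iperf3 -c 192.168.1.10 -t 120

  # Sequential 1MiB writes, similar to a large media copy; end_fsync makes
  # fio wait for the data to actually reach the disks before reporting
  fio --name=seqwrite --directory=/mnt/tank/test --rw=write --bs=1M --size=20G --end_fsync=1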

Some additional information:
I am running TrueNAS as a VM in Proxmox with an LSI 9211-8i passed through to it. It is in IT mode with P20 firmware. For shits and giggles, I created another single-disk pool with another 6TB IronWolf drive connected to a motherboard SATA port. Same behavior.
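For anyone wanting to verify their own card, the LSI sas2flash utility (if installed) reports the firmware version and which image it is running:

  # List the adapter, firmware version (P20 = 20.00.xx.00), and product (IT vs IR)
  sas2flash -list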

I played around with 3 or 4 compression types but went back to LZ4, as there wasn't a huge difference. Sorry, I don't remember which ones I tried.
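For reference, checking and setting compression per dataset is straightforward ("tank/media" here is a hypothetical dataset name):

  # Show the current setting plus the achieved compression ratio
  zfs get compression,compressratio tank/media

  # Back to LZ4; inherited by child datasets unless overridden
  zfs set compression=lz4 tank/media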

I have allotted anywhere from 16GB to 28GB of RAM (32GB total on the host machine) to the VM, with only it running. The additional RAM does lengthen the initial burst of speed before the cache flushes, but does not change the slow and fluctuating speeds. This machine has an i7-8700K, and I have also tried giving the TrueNAS VM varying core counts, to no avail.
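My understanding is that the initial burst is writes landing in RAM before ZFS flushes them to the disks, which would explain why more RAM only stretches the burst. For what it's worth, the ARC picture can be inspected while a transfer runs (command name as of recent SCALE releases):

  # Summarize ARC size, target, and hit rates
  arc_summary | head -40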

On Monday, I took a spare NVMe drive, installed TrueNAS on bare metal, and imported my pools. The same behavior occurred, but the speeds were a little more stable and a little faster.


So what about ZFS/TrueNAS and SMB transfers is tanking my READ AND WRITE speeds? Are there parameters I can tune to help? I do not think I am a candidate for cache devices (SLOG/L2ARC) and the like.
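One way I could narrow this down is to watch per-disk throughput while a transfer runs; if one drive lags behind its mirror partner, it should show up here ("tank" is a placeholder pool name):

  # Per-vdev/per-disk bandwidth, refreshed every 5 seconds
  zpool iostat -v tank 5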


Hardware list below.

ASUS Prime Z370-A
i7-8700K
32GB 3000MHz RAM - non-ECC
2x 6TB Seagate IronWolf 5400RPM HDDs - ST6000VN001-2BB186
Samsung 970 EVO boot drive for Proxmox (250GB)
SanDisk SATA SSD - dedicated disk for TrueNAS
LSI 9211-8i - HDDs connected here and passed through to the TrueNAS VM - in a PCIe x16 slot running at x8
RTX 2070 Super - installed in the first x16 slot, but running x8/x8 with the LSI card. The motherboard manual says it is capable of this.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
How full is the pool? ZFS is a copy-on-write filesystem and naturally gets slower as the space fragments and the pool fills.
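A quick way to check both is zpool list; the CAP column is percent full and FRAG is free-space fragmentation:

  zpool list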
 

dizydre21

Dabbler
Joined
Apr 10, 2023
Messages
15
How full is the pool? ZFS is a copy-on-write filesystem and naturally gets slower as the space fragments and the pool fills.
It's just a hair over 50% full.

I'm going to destroy the pool and start from scratch tonight. I have a 3rd and 4th identical drive to add and was thinking I would do a 2x2 vdev setup to see how performance changes with a lower percentage of the pool filled.
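For reference, that layout is two mirrored vdevs striped together. TrueNAS builds it through the UI, but the command-line equivalent is roughly this (device names here are hypothetical; /dev/disk/by-id paths are safer in practice):

  zpool create tank mirror sda sdb mirror sdc sdd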

Besides that, any suggestions on why I am regularly seeing less than single-disk speeds? 80% of the theoretical 190MB/s would still be about 150MB/s. Is 50% full enough to start seeing speeds slow down?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It can depend on many factors. The speeds you're quoting such as "190MBytes/sec" are only valid for strictly sequential data reads or writes. It is important to remember that a hard drive incurring a seek after every 4KByte I/O is only capable of about 400KBytes/sec. That's certainly an engineered worst-case scenario, but it is important to note regardless. Things in the real world rarely work as well as the theoretical best case scenario. That said, there are lots of reasons you might not be getting good performance. Does it work better on the bare metal platform? PCIe passthru on Proxmox is graded "experimental" by the Proxmox folks, and this would not be the first time I've heard about weird performance problems on Proxmox with a virtualized TrueNAS. It's best to try experimenting with one thing at a time to isolate the cause.
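To put numbers on that worst case: if every 4KByte I/O costs roughly a 10ms seek, the drive completes about 100 I/Os per second, and 100 x 4KBytes = 400KBytes/sec.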
 

dizydre21

Dabbler
Joined
Apr 10, 2023
Messages
15
It can depend on many factors. The speeds you're quoting such as "190MBytes/sec" are only valid for strictly sequential data reads or writes. It is important to remember that a hard drive incurring a seek after every 4KByte I/O is only capable of about 400KBytes/sec. That's certainly an engineered worst-case scenario, but it is important to note regardless. Things in the real world rarely work as well as the theoretical best case scenario. That said, there are lots of reasons you might not be getting good performance. Does it work better on the bare metal platform? PCIe passthru on Proxmox is graded "experimental" by the Proxmox folks, and this would not be the first time I've heard about weird performance problems on Proxmox with a virtualized TrueNAS. It's best to try experimenting with one thing at a time to isolate the cause.
Running it bare metal did seem to perform a little better, but I did see speeds dip similarly, albeit to a lesser degree. I suppose I could go back to the spare NVMe and troubleshoot further on bare metal regardless. I basically just installed it, ran a couple of transfers, and then pulled it out. I probably should have taken notes to compare. Wouldn't the fast transfer to the SSD pool rule out issues with passthrough, though? It is also connected via the HBA card.

Another thing I was thinking about trying was TrueNAS CORE, to see if it makes a difference. I will probably only use TrueNAS for its NAS functions if I am able to run it in Proxmox. I created a VM with CORE this morning and will test it first when I get home. Then I'll wipe the pool to see if that changes anything. As a last resort I will go back to bare metal, but I really don't want to do that. Anything else I could try before going bare metal?
 
dizydre21

Dabbler
Joined
Apr 10, 2023
Messages
15
So far I'm happy, and I have some additional comments to add.

Moving to TrueNAS CORE actually yielded a bit better performance with the above-mentioned hardware, still running in Proxmox. Write transfers settled at about 130MB/s with the occasional dip on what I assume were cache flushes. I only ran a couple of short read transfers; they basically pegged the 2.5GbE network, and I didn't see them dip on several single-digit-GB transfers.

Anyway, my 4th 6TB disk came in this evening, so I wiped my entire pool and set up another mirrored pool with two 2x6TB vdevs (just under 11TB usable). I am about 120GB into the first major transfer on the 2.5GbE network and have not dipped below 260MB/s. Most of the time I am above 270MB/s.

In closing, I have no idea what was causing my issues. Perhaps it was the still-young SCALE OS and some conflict with Proxmox?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I have no idea what was causing my issues.

There are days like that for all of us. We create these incredibly complicated houses of cards (TrueNAS, Proxmox, etc.) and then stack them on top of each other, and they sometimes interact bizarrely with the hardware or other software bits. You wouldn't be the first person for whom a reboot or power cycle cleared up some weird problem.
 
Joined
Jun 15, 2022
Messages
674
There are days like that for all of us. We create these incredibly complicated houses of cards (TrueNAS, Proxmox, etc.) and then stack them on top of each other, and they sometimes interact bizarrely with the hardware or other software bits. You wouldn't be the first person for whom a reboot or power cycle cleared up some weird problem.
This is why I run TrueNAS on bare metal with no VMs.
A NAS should be far cleaner than the mess that was "last night..."
 

dizydre21

Dabbler
Joined
Apr 10, 2023
Messages
15
Thanks for chiming in. I will post again once something else stumps me.
 