How to achieve continuous 10G speeds?

ArCoN

Cadet
Joined
Jun 12, 2023
Messages
5
Hi World

I have a little problem getting the full speed out of my 10G connection,
so I'd like to ask what can be done to achieve that.

I am running a RAIDZ2 of 12 spinning disks on an E-2356G system with 64 GB of RAM, so it should be fast enough?

For the first 4 seconds it runs at full speed, then it drops to around 400 MB/s.

Is it possible to expand that write cache, or add an L2ARC, to retain the speed?

Thanks.
 

nabsltd

Contributor
Joined
Jul 1, 2022
Messages
133
Like ARC in RAM, L2ARC is a read cache, not a write cache. L2ARC gets filled by reads from the ZFS pool.

The behavior you see is the transaction queue for an async write being written to RAM, but then once there is too much data to safely hold in RAM, the system starts to write it out to the pool. So, the first 4 seconds are where no data was actually written to disk, but was instead stored in RAM. After that, you are at the mercy of the speed of the disks.

It is possible to increase the amount of time that ZFS holds data in RAM before flushing the async write to disk, but this means your data is in danger if the system crashes. Even if you don't care about that, if you are truly saturating the network connection for a long time, eventually you'd run out of RAM and be right where you are now, but with far less data safety.
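
If you want to watch that behaviour happen, one simple way (assuming your pool is called tank; substitute your own pool name) is to leave zpool iostat running while you copy a big file over the network. You should see almost no disk activity for the first few seconds, followed by a sustained flush at whatever rate the vdev can manage:

Code:
# report pool-wide and per-vdev bandwidth once per second
zpool iostat -v tank 1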

You could increase the write speed if you changed your pool configuration to something like 3x RAIDz, with each vdev having 4 disks total. This is because ZFS stripes writes across vdevs. So, you would get 2-3x the performance compared to what you now have.
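
Purely as an illustration, that layout would look roughly like this at the command line (hypothetical device names, and single-parity RAIDZ1 assumed; on TrueNAS you would build the pool through the UI rather than with zpool directly):

Code:
# three 4-disk RAIDZ1 vdevs in one pool; ZFS stripes writes across all three
zpool create tank \
    raidz1 sda sdb sdc sdd \
    raidz1 sde sdf sdg sdh \
    raidz1 sdi sdj sdk sdl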
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Hi @ArCoN

Being able to run at "full speed" - assumed to be around 1GB/s - for the first few seconds means that the problem likely doesn't reside in the network layer. @nabsltd has accurately summarized the "transactional" behavior of ZFS when handling writes, so the question becomes "what is the maximum throughput your pool can sustain?"

What is the storage controller being used, how are the drives connected, and what make/model are the drives?
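
One way to answer that question with the network taken out of the picture (assuming fio is installed and your pool is mounted at /mnt/tank; adjust the path to suit) is to run a sustained sequential write locally and see where it levels off after the first burst:

Code:
# 1M sequential writes, large enough to outrun the in-RAM dirty data buffer
fio --name=seqwrite --directory=/mnt/tank --rw=write --bs=1M \
    --size=20G --numjobs=1 --end_fsync=1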
 

ArCoN

Cadet
Joined
Jun 12, 2023
Messages
5
#2 Thanks for clarifying. If I decide to play around with a bigger write cache, how do I do that? :)

#3 You actually forced me to look at the specs for the motherboard (P12R-E) :D and discovered the HBA (SAS 9305-16i) is only connected to an x4 lane, so it fits with the ~400 MB/s :P
4 of the drives are going to the onboard controller, because of one dead port on the HBA :/

So first I will try to move the HBA to one of the x8 slots and see what happens ;)

Thanks again!
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Your problem is that the underlying storage is not able to even remotely sustain 10 Gbps. Move to NVMe SSDs in mirrors if you really need a sustained speed of 1 GB/s. A cache will only help for data that has been read before.

12 disks in a single RAIDZ2 vdev is not ideal either, but that is somewhat separate from your question.

Please spend a few hours and read up on ZFS. My signature has a number of good starting points.

Also, you will get better answers with a detailed description of your hardware (as per the forum rules).
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
PCIe 3.0 x1 is capable of around 1 GB/s, so an x4 link shouldn't be throttling you down to those speeds. What are the drive models?
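
To confirm what link the HBA actually negotiated, you could check with lspci; the 01:00.0 address below is only an example, so look up your own first:

Code:
# find the HBA's PCI address
lspci | grep -i sas
# LnkCap = what the card supports, LnkSta = what was actually negotiated
lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'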

You can increase the value of the tunable zfs_dirty_data_max (measured in bytes), but record the existing value first (the first line below, using cat) so you can revert if desired.

Code:
cat /sys/module/zfs/parameters/zfs_dirty_data_max
echo NEWVALUEINBYTES > /sys/module/zfs/parameters/zfs_dirty_data_max
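
For example, to raise the limit to 8 GiB (an arbitrary illustrative value, not a recommendation):

Code:
# 8 GiB expressed in bytes (8 * 1024^3)
echo 8589934592 > /sys/module/zfs/parameters/zfs_dirty_data_max

Note that writing to /sys only lasts until the next reboot; to keep the setting you would add it as a system tunable.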
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I have a little problem getting the full speed out of my 10G connection,
so I'd like to ask what can be done to achieve that.

I am running a RAIDZ2 of 12 spinning disks on an E-2356G system with 64 GB of RAM, so it should be fast enough?
I changed my pool layout from 2x 8-way RAIDZ2 to mirrors in order to achieve full 10Gbit speed.

Also better for VMs.
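
For comparison with the RAIDZ sketch above, 12 disks arranged as striped mirrors would look something like this (hypothetical device names again; the UI is the normal way to build it):

Code:
# six 2-way mirrors; each mirror is its own vdev, so writes stripe across six vdevs
zpool create tank \
    mirror sda sdb  mirror sdc sdd  mirror sde sdf \
    mirror sdg sdh  mirror sdi sdj  mirror sdk sdl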
 

ArCoN

Cadet
Joined
Jun 12, 2023
Messages
5
#6 Haha, I just assumed it was 1 gigabit per lane, not gigabyte. Anyway, thanks, I will try to play with the tunables. 7 of the drives are ST8000DM004; I can't remember the rest :P I guess it's the slowest one that sets the agenda.

#7 I don't see the reason to change my pool layout? If I want to optimize space and have redundancy for 2 HDD failures, I can live with the write speeds if there is no other option.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
#6 Haha, I just assumed it was 1 gigabit per lane, not gigabyte. Anyway, thanks, I will try to play with the tunables. 7 of the drives are ST8000DM004; I can't remember the rest :P I guess it's the slowest one that sets the agenda.

#7 I don't see the reason to change my pool layout? If I want to optimize space and have redundancy for 2 HDD failures, I can live with the write speeds if there is no other option.
It is one of those things, like trying to watch the grass grow: you realize it is growing much faster when you are actually not looking at it.
 

ArCoN

Cadet
Joined
Jun 12, 2023
Messages
5
It is one of those things, like trying to watch the grass grow: you realize it is growing much faster when you are actually not looking at it.
If I could just give the PC some fertilizer, it would be much easier.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
ST8000DM004

Well, there's a major contributing factor. Those are SMR (Shingled Magnetic Recording) drives, which are known to have pathological performance problems with ZFS if they have to perform any rewrites.
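
If you want to check the models of the remaining drives without pulling them, something like this will list them all, and the model numbers can then be checked against the manufacturer's CMR/SMR documentation:

Code:
# show the model and size of every whole disk in the system
lsblk -d -o NAME,MODEL,SIZE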
 

ArCoN

Cadet
Joined
Jun 12, 2023
Messages
5
You are probably right ;) I got zfs_dirty_data_max to work in the tunables; it seems to do the job for me.

Thanks again!
 