Kuro Houou
Contributor
- Joined: Jun 17, 2014
- Messages: 193
Curious if this is really fixed or not... one person said no, the other yes...
Same here. I went back to FreeNAS 11 and everything is back: the speed, the reliability, and the options.
I wonder whether the upgrade itself was the problem. If I have time, I'll try a fresh install of TrueNAS 12 instead of a migration.
Actually, it's the fastest I've ever seen! Reads never went above 800s on 11.3.
> Is this exclusively via SMB, or are you seeing similar issues with other protocols (iSCSI, NFS, etc.)?

I have only seen it on 10G connections, both for myself and for others who have posted. It's possible that it's just less noticeable on 1G. My 1G connections are actually much faster than my 10G, though, when I switch over.
> Curious, did you go from 11.3 straight to 12.1? Or were you on 12.0?

I never said 12.1; it's 12.0-U1. I don't think there is an actual upgrade path per se; it just boots up directly into it, like network switch firmware, AFAIK. I just booted from 11.3-U5 into 12.0-U1. You can always boot back to the old version if you don't upgrade your zpool.
> I never said 12.1, it's 12.0-U1. I don't think there is an actual upgrade path per se, it just boots up directly to it like a network switch firmware AFAIK. I just booted from 11.3-U5 to 12.0-U1.

In 12.0-U1 I made several tweaks / bugfixes to Samba's AIO, specifically with regard to behavior in overload situations. Big picture: AIO can be fine-tuned by modifying the values of the `vfs.aio.*` sysctls (see `sysctl -a | grep aio`), in particular `vfs.aio.target_aio_procs` and `vfs.aio.max_aio_procs`. These control the low-water and high-water marks for the kernel threads handling AIO requests. You can view `vfs.aio.num_queue_count` to see the current queue depth for AIO requests. The kernel will create new AIO threads as needed until it hits the high-water mark. As we receive AIO requests, if the current queue is too deep, aio_read() or aio_write() will fail with EAGAIN.

> Can you detail the amendments of the AIO settings?

Not really a bug; it's just a matter of how you handle overload situations. We bumped `vfs.aio.target_aio_procs` to 16 and `vfs.aio.max_aio_procs` to 128. You can do further tweaking if you need to by setting different values for these sysctls.
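As a reference, the inspection and tuning described above can be sketched as shell commands on a FreeBSD-based system such as TrueNAS CORE; this is a hedged sketch, not an official procedure, and the values shown are simply the ones mentioned in this thread:

```shell
# Inspect the current AIO tunables and queue depth (FreeBSD / TrueNAS CORE)
sysctl vfs.aio.target_aio_procs   # low-water mark for AIO kernel threads
sysctl vfs.aio.max_aio_procs      # high-water mark for AIO kernel threads
sysctl vfs.aio.num_queue_count    # current AIO request queue depth

# Apply the values mentioned in this thread (run as root)
sysctl vfs.aio.target_aio_procs=16
sysctl vfs.aio.max_aio_procs=128
```

Settings applied this way do not survive a reboot; to persist them you would add them as sysctl-type tunables in the TrueNAS UI (or in /etc/sysctl.conf on plain FreeBSD).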
This sounds similar to: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=212128
> Not really a bug, it's just a matter of how you handle overload situations. We bumped target aio procs to 16 and max aio procs to 128. You can do further tweaking if you need to by setting different values for these sysctls.

Have you tested the case with AIO reads and writes enabled plus SMB encryption?
> Have you tested the case with AIO reads and writes enabled plus SMB encryption?

I'm not aware of issues with Samba's AIO and SMB encryption. Can you provide a reference for this issue?
With both AIO and SMB encryption enabled at the same time, transfers produce data errors; with SMB encryption turned off, the data errors do not occur.