SOLVED Nextcloud file transfer is slow: how to debug the cause? Could it be the jail performance?

Fire-Dragon-DoL

Explorer
Joined
Dec 22, 2018
Messages
97
Hello,
I've been a happy user of FreeNAS, but the upload and download speed of Nextcloud doesn't exceed 100 Mbit/s (up or down). This is odd given that I have a gigabit internet connection and everything is connected by cable.
On the Nextcloud forum I was told that Nextcloud itself has little involvement in file transfer and that the problem is more likely related to the machine.

Could it be a problem with the jail configuration?

The hardware I'm using (which is usually at 0% load) is the following:
- SuperMicro CSE-846 X9DRi-F 4U Server 2x CPU E5-2650 v2 @ 2.60GHz 2x PSU LSI9750-8i W/RAILS
- HP H220 6Gbps SAS PCI-E 3.0 HBA LSI 9207-8i P20 IT Mode for ZFS FreeNAS unRAID
- Micron (2x 16GB) 2Rx4 PC3L-12800R DDR3-1600Mhz ECC REG Server Memory RAM 240Pin
- 6x 3TB HDDs (WD Red, bought before the whole SMR "shingling" issue popped up). I have 3 pools, and each pool is just 1 HDD + a mirror of it

The guide I followed for setting up Nextcloud is: https://www.samueldowling.com/2020/...n-freenas-iocage-jail-with-hardened-security/
I'm also a software developer and familiar with operations, which means I'm usually able to infer what's going on, but there is a lot at play here. I initially thought it was a problem with a missing sendfile configuration, but fixing that didn't do the trick.

I noticed that while a file is being downloaded, CPU usage is around 13%. That's high for a single transfer, but with all cores and hyperthreading this machine can supposedly show up to 800% CPU usage, so it was nowhere near saturated.

Any idea what I should be looking at?

Notice that the jail cpuset is already set to "all"
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
I have 3 pools and each pair is just 1HDD + a mirror

Do you have 3 pools, or 3 vdevs in a single pool? Usually there are only a few reasons to go for multiple pools...

As for the speed, is there any encryption involved? Here, I do server-side encryption in my Nextcloud.

A jail uses a virtual interface. Did you check whether that one is only 100 Mbit/s?
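
For reference, one way to check is from a shell inside the jail (the jail and interface names below are only examples; a VNET jail typically gets an epair interface):
Code:
iocage console nextcloud
ifconfig epair0b
The "media" line of the ifconfig output shows the speed the virtual interface reports.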

Are you sure all of your network ports are 1G, including those in your switch? What model is that switch?

Are you doing any of this over WiFi?

You must give us the complete details of your entire network setup for us to be able to review it...
 

Fire-Dragon-DoL

Explorer
Joined
Dec 22, 2018
Messages
97
Do you have 3 pools, or 3 vdevs in a single pool? Usually there are only a few reasons to go for multiple pools...

As for the speed, is there any encryption involved? Here, I do server-side encryption in my Nextcloud.

A jail uses a virtual interface. Did you check whether that one is only 100 Mbit/s?

Are you sure all of your network ports are 1G, including those in your switch? What model is that switch?

Are you doing any of this over WiFi?

You must give us the complete details of your entire network setup for us to be able to review it...

Thanks, I think I can answer all this.
I have 3 pools; I did this on purpose, and that's why I have a 24-slot server. Each pool is just 1 HDD + 1 mirror of it. Honestly, I've forgotten much of the rationale behind it, but I don't store much data and I value the safety of that data a lot (I've had plenty of bad HDD experiences). The simplicity of each pool being 1 HDD + 1 mirror has also been extremely valuable; it has proven the right decision for my use case. (I think the rationale was about re-using my existing 3TB HDDs without being locked into that smaller size.)

Where do I check the speed of the jail's interface? I've been looking through the UI and couldn't find a way.

Yes, I'm sure. Everything is 1G, including the switch, and the cables are Cat 7. The router is an ASUS RT-N66U; it can handle this speed (I presume), given that on LAN I can get ~95 MB/s and when downloading from external servers (Steam) I can reach ~85 MB/s.

WiFi is not involved in this setup (it's disabled on my laptop).

I discovered something very important, though: when I access Nextcloud via the direct LAN IP address (192.168.1.21) and download the file from there, I get ~95 MB/s, which is what I would expect. So the problem is limited to WAN access (when connecting through a domain that points to my external IP address).
 

Fire-Dragon-DoL

Explorer
Joined
Dec 22, 2018
Messages
97
With additional testing, it seems I can't get past ~15 MB/s over WAN (but I can on LAN), so I wonder whether the culprit is indeed the router.
What I can't explain is that on speedtest.net I can reach 80 MB/s up and down (against various servers).

I set up a DigitalOcean server with 4 CPUs and ran some tests against it. I was able to reach 288 Mbit/s upload (~35 MB/s) using iperf and ~28 MB/s with scp.
The router might be a bottleneck, however the gap (15 vs 35) is so wide that I'm not completely confident that's the case. Not to mention, I get full speed over LAN.
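
For reference, the iperf test was along these lines (assuming iperf3; the droplet hostname is a placeholder):
Code:
# on the DigitalOcean droplet
iperf3 -s
# from home, upload test towards the droplet for 30 seconds
iperf3 -c droplet.example.com -t 30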
 

Fire-Dragon-DoL

Explorer
Joined
Dec 22, 2018
Messages
97
I did some more serious tests this morning, and it seems I can upload at ~760 Mbit/s to a friend (who also has gigabit) by serving a file with nginx from my laptop (directly from RAM). I can download a file from a server at the same speed. Both put about 5 to 10% CPU load on my router.
However, when I download a file "from myself" by going out over the internet and back in, the router's CPU usage skyrockets to 100% and I get ~150 Mbit/s.
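
For anyone who wants to reproduce that kind of raw-throughput test, a minimal nginx server block pointing at a RAM-backed directory is enough; the port and path below are only examples (e.g. a tmpfs on Linux), not necessarily my exact setup:
Code:
# inside the http { } block of nginx.conf
server {
    listen 8080;
    # serve the test file straight from a tmpfs so disks are not involved
    root /dev/shm;
    sendfile on;
}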

It seems like it's a router misconfiguration (or a firmware issue, who knows). I'll start investigating!
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Each pool is just 1 HDD + 1 mirror of it. Honestly, I've forgotten much of the rationale behind it, but I don't store much data and I value the safety of that data a lot (I've had plenty of bad HDD experiences). The simplicity of each pool being 1 HDD + 1 mirror has also been extremely valuable; it has proven the right decision for my use case. (I think the rationale was about re-using my existing 3TB HDDs without being locked into that smaller size.)
You would be much safer AND faster by going triple-mirror and having just 1 pool. Actually, even plain mirrors in one pool might be safer, due to decreased rebuild times: data is spread over more disks, and a resilver only copies the space actually in use.

Considering you have trouble with basic ZFS terminology, maybe also re-read the ZFS primer?
Because you made some basic mistakes too... multiple vdevs in a pool wouldn't lock you into the size of the smallest drive; that constraint applies per vdev. So if you merged all those pools into one pool with 3x 2-disk mirrors, you would have exactly the same usable size you have now, except faster.
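
For illustration, the layout being suggested here would be created roughly like this (pool and disk names are just examples):
Code:
zpool create tank mirror ada0 ada1 mirror ada2 ada3 mirror ada4 ada5
ZFS then stripes writes across the three mirror vdevs, which is where the speed gain comes from.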
 

Fire-Dragon-DoL

Explorer
Joined
Dec 22, 2018
Messages
97
You would be much safer AND faster by going triple-mirror and having just 1 pool. Actually, even plain mirrors in one pool might be safer, due to decreased rebuild times: data is spread over more disks, and a resilver only copies the space actually in use.

Considering you have trouble with basic ZFS terminology, maybe also re-read the ZFS primer?
Because you made some basic mistakes too... multiple vdevs in a pool wouldn't lock you into the size of the smallest drive; that constraint applies per vdev. So if you merged all those pools into one pool with 3x 2-disk mirrors, you would have exactly the same usable size you have now, except faster.

Thanks for the suggestion, but I think the discussion is going a bit off topic.

Also, please remember that the first pool existed long before the second one was added. The 3rd pool is made of a single disk (discardable data; it works as a backup point for the other PCs, and that data is already replicated on the PCs).

Yeah, regarding the terminology, a re-read would help. I did the setup 2 years ago (or more, I've lost track) and never had to touch it again until much later. Speed, however, wasn't a concern at all.
Again, there was some rationale, but I can't remember all of it.
 

Fire-Dragon-DoL

Explorer
Joined
Dec 22, 2018
Messages
97
OK, I was able to solve this nightmare without purchasing a new router.
So apparently, when I hit machines on my local network via my WAN address, the router uses "NAT loopback" (which makes sense). However, that path is not hardware-accelerated, hence the CPU usage.
My solution was to use dnsmasq (preinstalled on the router) to point my domains at the local IP addresses. (A hosts file entry on the router would have achieved the same.)
It's not perfect, but it does the trick: I'm now getting ~90 MB/s download speed.
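
For anyone hitting the same issue, the dnsmasq side is a one-liner (the domain is a placeholder for my actual one; 192.168.1.21 is the jail's LAN address):
Code:
# added to the router's dnsmasq configuration
address=/cloud.example.com/192.168.1.21
With that in place, LAN clients resolve the domain straight to the local IP, so traffic never touches the NAT loopback path.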
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Thanks for the suggestion, but I think the discussion is going a bit off topic.
Things go off topic quickly, because your setup can be categorised as "wrong, due to misunderstanding ZFS".

Also, please remember that the first pool existed long before the second one was added. The 3rd pool is made of a single disk (discardable data; it works as a backup point for the other PCs, and that data is already replicated on the PCs).
That's pretty irrelevant; you can add vdevs to a pool whenever you like. So there never was a reason for a second pool (unless it's an all-SSD pool, a system pool or, as with your third pool, a discardable pool).
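
For example, extending an existing pool with another mirror vdev is a single command (pool and disk names are placeholders):
Code:
zpool add tank mirror ada4 ada5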

Yeah, regarding the terminology, a re-read would help. I did the setup 2 years ago (or more, I've lost track) and never had to touch it again until much later. Speed, however, wasn't a concern at all.
Again, there was some rationale, but I can't remember all of it.
There are a few reasons to multi-pool (not many), and I don't see them here.
Maybe your rationale back then was just wrong, as you also seemed to think that mixing and matching sizes in a pool isn't possible and that adding vdevs later isn't possible, where both are perfectly fine ;-)
 

Fire-Dragon-DoL

Explorer
Joined
Dec 22, 2018
Messages
97
Things go off topic quickly, because your setup can be categorised as "wrong, due to misunderstanding ZFS".


That's pretty irrelevant; you can add vdevs to a pool whenever you like. So there never was a reason for a second pool (unless it's an all-SSD pool, a system pool or, as with your third pool, a discardable pool).


There are a few reasons to multi-pool (not many), and I don't see them here.
Maybe your rationale back then was just wrong, as you also seemed to think that mixing and matching sizes in a pool isn't possible and that adding vdevs later isn't possible, where both are perfectly fine ;-)

I appreciate the thought; however, I'm feeling attacked rather than supported.

Way back then I had a very good grasp of all the concepts, and the setup was validated by multiple people from this community (some well-known people on the forum, some in chat). I just didn't need to touch it for so long.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
I appreciate the thought; however, I'm feeling attacked rather than supported.
I'm not here to give you supportive feelings. I give feedback on what I read, and you seem to be ignoring all of it.
That's fine; you're free to ignore feedback, just as I'm free to give it.


Way back then I had a very good grasp of all the concepts, and the setup was validated by multiple people from this community (some well-known people on the forum, some in chat).
Why would I care if Santa himself validated it? It doesn't matter.
You never cared to explain why it was a good idea to go this route in the first place, and you don't have to... but don't go hiding behind anonymous folk.

As you are clearly not open to any feedback that isn't related to your question, I'll leave it at this.
But remember: if you start defending weird configs with complete nonsense (like saying a pool can't be expanded), don't be amazed when people start commenting on it.


*edit*
I went back through your post history looking for who on earth advised you to run 2 mirror pools instead of 1 pool with 2 vdevs and (most importantly) why. It seems no one did.
 

Fire-Dragon-DoL

Explorer
Joined
Dec 22, 2018
Messages
97
I re-read the primer.
Your suggestion is to have 1 pool with 2 vdevs, each vdev composed of 2 HDDs (one a mirror of the other). Essentially striping across mirrors. Is that correct?
 

Fire-Dragon-DoL

Explorer
Joined
Dec 22, 2018
Messages
97
I'm not here to give you supportive feelings. I give feedback on what I read, and you seem to be ignoring all of it.
That's fine; you're free to ignore feedback, just as I'm free to give it.



Why would I care if Santa himself validated it? It doesn't matter.
You never cared to explain why it was a good idea to go this route in the first place, and you don't have to... but don't go hiding behind anonymous folk.

As you are clearly not open to any feedback that isn't related to your question, I'll leave it at this.
But remember: if you start defending weird configs with complete nonsense (like saying a pool can't be expanded), don't be amazed when people start commenting on it.


*edit*
I went back through your post history looking for who on earth advised you to run 2 mirror pools instead of 1 pool with 2 vdevs and (most importantly) why. It seems no one did.

I'm happy to receive feedback; I'm not happy with the presumption that no research was done back then, as if the possibility that I might simply have forgotten things weren't an option. My memory is poor, but I read an incredible amount of material before taking any step. No doubt I could have made mistakes, but I'd like to be treated without malice. The reasoning was not due to a lack of understanding.

Regarding the pools, the rationale from back then is potentially no longer valid (and it was given in the IRC chat; I chatted a lot there). I can't remember it, and unfortunately I didn't take notes at the time (that was definitely a mistake).
It could have been something as simple as "I want to be able to easily copy the content of this pool onto an external hard drive". Of course, if I keep adding drives to a pool, at some point that won't be possible due to the size. With 3TB drives it's still doable for now.

What happens with backups? I currently use one hard drive (1 vdev in 1 pool) to back up a single pool with zfs send. Given the mirroring in the other pool, this is enough for my needs: I can just unplug that hard drive after copying the data over and store it in a safe box.
However, with 2 vdevs in the main pool, the only option seems to be converting my "backup pool" to striping as well (1 pool of 2 vdevs, striped; and since I foresee the pool growing further, eventually 3 vdevs), otherwise the data won't fit. Is that correct? That does seem more dangerous for my backups, given the higher chance of failure (if any one drive fails, all backups are lost).
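
For context, the backup flow I'm describing is just the usual snapshot-and-send pattern, something along these lines (pool and snapshot names are placeholders, not my actual ones):
Code:
zfs snapshot -r tank@offsite-backup
zfs send -R tank@offsite-backup | zfs receive -F backup/tank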
 

Piereligio

Dabbler
Joined
Mar 9, 2021
Messages
13
OK, I was able to solve this nightmare without purchasing a new router.
So apparently, when I hit machines on my local network via my WAN address, the router uses "NAT loopback" (which makes sense). However, that path is not hardware-accelerated, hence the CPU usage.
My solution was to use dnsmasq (preinstalled on the router) to point my domains at the local IP addresses. (A hosts file entry on the router would have achieved the same.)
It's not perfect, but it does the trick: I'm now getting ~90 MB/s download speed.
Thanks to you, I had a suspicion it was my router; otherwise I would have kept messing with my TrueNAS/Nextcloud settings for who knows how long.

But I should point out that it can happen for a completely different reason: SQM egress shaping, in my case.

It seems SQM on my OpenWrt router decides that Nextcloud traffic has to be cut down to about 70-80% of my internet upload speed.
Maybe it thinks the traffic is going over the WAN and would saturate the upload, even though I'm actually on the LAN?

As a workaround, I disabled both SQM ingress and egress shaping, so it no longer applies a fixed limit to download or upload speed on any traffic. I hope it will still manage to prioritise traffic without getting the network stuck in any scenario.
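
If it helps anyone, disabling SQM from the command line on OpenWrt looks roughly like this (assuming the sqm-scripts package with a single queue section; check /etc/config/sqm for the actual section name):
Code:
uci set sqm.@queue[0].enabled='0'
uci commit sqm
/etc/init.d/sqm restart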
 