Read in RAID as slow as single drive

  • Thread starter DeletedUser122199

DeletedUser122199

Guest
Hello. I am new to TrueNAS and NAS in general, but I believe I have set things up as I should, yet I've run into a strange issue.

Hardware - repurposed Asustor Nimbustor 4-AS5304T:
CPU: Intel Celeron J4105
RAM: 2x4GB 2400MHz
Boot drive: repurposed WD Blue 3D NAND, M.2 - 500 GB, connected in an external case over USB-C to USB-A 3.x
Storage drives: 4xWD Red Pro 8TB WD8003FFBX
LAN: 2x2.5Gbit => 2x2.5Gbit switch => 10Gbit PCI-E card in PC

I have tried configuring the drives in RAIDZ1, RAIDZ2, and mirror, but in all configurations the read performance is the same as from a single drive. Eventually I created one volume from two drives in a mirror and another volume from a single drive, and compared read performance by simply moving files to and from the PC. The speeds were pretty much the same in both cases and matched the single-drive read/write specs of the WD8003FFBX, about 200 MB/s. In the screenshot you can also see that the two mirrored drives just split the read load and stayed under the speed of a single drive, while the single drive used its full read speed and reached the same speed as the two.

I am not aware of changing any special settings that could cause this, and I have verified that the machine as a whole can deliver much higher read speeds, since it did so when I tried the original Asustor OS for a while. However, I am not really an expert, so I might be wrong.

I would appreciate any advice you could give me.
 

Attachments

  • Screenshot 2023-02-01 141926.png
    597 KB · Views: 141

c77dk

Patron
Joined
Nov 27, 2019
Messages
468
1: you're under the minimum RAM
2: what speeds do you expect? With the 2.5Gbps NICs you have a _theoretical_ maximum single-stream throughput of about 250 MB/s - combine this with no info about the chipset, and I would guess it's some cr*ppy Realtek.
 

DeletedUser122199

Guest
1: you're under the minimum RAM
2: what speeds do you expect? With the 2.5Gbps NICs you have a _theoretical_ maximum single-stream throughput of about 250 MB/s - combine this with no info about the chipset, and I would guess it's some cr*ppy Realtek.
1. I'm not under the minimum RAM requirement.
2. I have 2x2.5Gbit, and even if the theoretical maximum were 250 MB/s, that would still mean I am getting 50 MB/s less than I should.
3. You've completely ignored that I reached much higher speeds with a different OS (which was actually using just 4GB RAM, and so could this one, because I haven't seen it use more than 3GB besides caching).
4. If you are just going to act like an arrogant j*rk without even reading what I say, then don't bother writing here, because it's helping no one, except maybe your own ego. Thanks.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
RAM: 2x4GB 2400MHz

Your system has barely enough memory to run; the recommended minimum is 16GB. ZFS is highly dependent on its ARC cache for performance. Starving it of ARC causes it to cache much less, and it won't try to do anywhere near as much readahead, etc.
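For reference, a sketch of how to check how much ARC you actually have, using the stock OpenZFS counters (field names are the standard arcstats ones; the exact output format varies by version):

```shell
# On CORE (FreeBSD): current ARC size and target maximum, in bytes
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max

# On SCALE (Linux): the same counters live in procfs;
# arcstats rows are "name type data", so print name and value
awk '$1 == "size" || $1 == "c_max" { print $1, $3 }' /proc/spl/kstat/zfs/arcstats
```

If `size` is pinned near `c_max` and both are only a few GB, the ARC is as starved as described above.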

CPU: Intel Celeron J4105

This is really a very weak CPU. It's 1.5GHz although thankfully four cores. It is likely to be VERY slow at compression, so you might try turning off ZFS's default compression to see if that speeds things up.

LAN: 2x2.5Gbit

I think it was previously established by another user that these are Realtek parts. They are not expected to perform well.

My opinion is that you're just expecting too much out of this. Your best tweak is probably to disable compression.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
1. I'm not under the minimum RAM requirement.

CORE has a 16GB requirement, SCALE is only 8GB, but 16GB is strongly recommended. I'm actually unclear on why that isn't 32GB with the way ARC management works on Linux. Either way, ZFS performs poorly with limited resources, and it is correct to indicate that you are very short on memory.

You've completely ignored that I reached much higher speeds with a different OS (which was actually using just 4GB RAM, and so could this one, because I haven't seen it use more than 3GB besides caching).

So you had an operating system that was presumably carefully tuned and tweaked by the manufacturer for the hardware of your device. You got happy speeds. You then discarded that, and instead loaded on an operating system that is known to be a heavy resource consumer, and certainly not optimized for your particular tiny platform, and are now complaining that your speeds are poor? Maybe it's just me, but I feel like that's a bit irrational. You are certainly going to find some generic Linux BusyBox with EXT3, or whatever your vendor firmware had, to be a lot faster than ZFS. ZFS is not designed for this.

4. If you are just going to act like an arrogant j*rk without even reading what I say, then don't bother writing here, because it's helping no one, except maybe your own ego. Thanks.

[mod hat] You're a new user, so we can certainly grant you a little leeway. However, please note that this tone is generally unacceptable. Please note the Forum Rules, conveniently linked at the top of every page in red, and in particular note that forum members are expected to be friendly and pleasant. This is a global forum, and many users do not speak English as their native tongue. You should avoid reading into what is presented any offense unless offense was clearly and unambiguously meant (in which case, you may report the content to the moderation team). Please drop the attitude and instead assume that people are generously taking time out of their day to try to help you out with your questions. It makes for a much more pleasant forum experience, and there's a lot of stuff to be learned. Thanks!
 

DeletedUser122199

Guest
Perhaps I assumed too much; if so, I apologize. However, I am not getting any relevant advice here, only things that are quite literally in conflict with the reality I see, and I am being told those things with such confidence that I cannot help but feel somewhat unsure of the intentions.

I haven't seen the CPU go beyond 50% yet, and RAM almost always has 2GB free, which I know does not necessarily mean that RAM could not be a bottleneck, but we aren't talking about a NAS getting hammered by 10 different PCs each downloading a different file at the same time. This is just one connection and one large zip file, and the read speed is not fluctuating; once caching is no longer possible, it's the same speed the whole time.

The previous operating system did not last one day before corrupting my volumes, so that's why I decided to switch, and everyone on the internet always points to TrueNAS.

After further investigation I believe the ethernet ports seem to be the problem: not that they could not together deliver a higher speed, but rather that one is always being underused. No matter whether reading or writing, only one of them is used at a time, and if I want to use the other one I have to manually disconnect the cable. If you can give me any hint as to why that could be happening, I would be glad.

If you are still sure the problem is in the CPU/RAM/chipset itself, well, ok... I give up and admit I probably had no idea what I was doing in the first place.
 

Attachments

  • Screenshot 2023-02-01 161214.png
    605.7 KB · Views: 149
  • Screenshot 2023-02-01 161319.png
    531.8 KB · Views: 153

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Perhaps I assumed too much; if so, I apologize. However, I am not getting any relevant advice here, only things that are quite literally in conflict with the reality I see, and I am being told those things with such confidence that I cannot help but feel somewhat unsure of the intentions.

I've been here a decade. I'm a moderator. I've read this entire thread. I deem everything said to be quite reasonable. You are not the first person to be disappointed by performance, and most of the users who answer questions have seen these things too. We say things with confidence because you are not unique, your experience is not unique, and the resolution to issues like this was worked out years ago.

One of the oldest servers here is an AMD Athlon II Neo based unit; two 1.3GHz cores, 16GB RAM. It's slow. Compression is awful, and the original 8GB RAM was terrible. It got much faster with 16GB RAM. No rocket ship, but faster.

I haven't seen the CPU go beyond 50% yet, and RAM almost always has 2GB free, which I know does not necessarily mean that RAM could not be a bottleneck, but we aren't talking about a NAS getting hammered by 10 different PCs each downloading a different file at the same time. This is just one connection and one large zip file, and the read speed is not fluctuating; once caching is no longer possible, it's the same speed the whole time.

You have four CPU cores, and a situation where probably only two of them will be busy: one is ZFS in-kernel, busy doing stuff like compression, and the other is whatever userland daemon, like SMB, is serving your files. 50% is eminently reasonable, but it is also going to be part of the reason you are not going faster. Steady speeds are a good thing; they mean there aren't unexpected variables contributing to your troubles.

The previous operating system did not last one day before corrupting my volumes,

Sorry to hear it.

After further investigation I believe the ethernet ports seem to be the problem: not that they could not together deliver a higher speed, but rather that one is always being underused.

Of course. You're not allowed to have multiple network interfaces on a single network; that's broken, bad network design. You could potentially aggregate them using LACP if you had multiple clients and a switch that did LACP, but that won't help single-client performance. Please read the following links.



If you are still sure the problem is in the CPU/RAM/chipset itself, well, ok... I give up and admit I probably had no idea what I was doing in the first place.

As I previously suggested, please try disabling compression and see if that boosts your speeds. On a slow CPU, ZFS will spend a lot of time doing compression, and it really drags you down.
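A minimal sketch of that test from the shell, assuming a pool named `tank` and a dataset named `media` (hypothetical names; substitute your own):

```shell
# Check the current compression setting (lz4 is the ZFS default)
zfs get compression tank/media

# Turn compression off for this dataset; existing data stays compressed,
# so copy a fresh test file over before re-running the read benchmark
zfs set compression=off tank/media
```

The same toggle is also available per dataset in the TrueNAS UI, if you prefer not to use the shell.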
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@hb.beny - One note: Realtek NICs (which are probably what your board is using) have optimized drivers for MS Windows, and potentially vendor-supplied specialized software/firmware.

However, the open-source operating systems underneath TrueNAS (Linux for SCALE and FreeBSD for CORE) are known to have less optimized Realtek NIC drivers. This generally presents itself as less than full wire speed, like 70-80%, though it sometimes shows up as an unreliable connection.


Another issue is the speed of the CPUs. In certain cases, higher-speed CPUs work better than more cores. As @jgreco mentioned, SMB will likely take one core. But he did not mention that if that core is 100% busy, SMB WON'T assign another core. So the solution to slow SMB on the NAS side is faster CPU cores.


I hope you get something working reliably.


ZFS (and TrueNAS) works better on better-specced hardware. That's not to say it won't work well and reliably on lighter hardware; it is just that ZFS's data protection and other features, like compression, actually make ZFS slower than a simpler file system like NTFS or EXT3.
 
Last edited:

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
After further investigation I believe the ethernet ports seem to be the problem, but not that they could not together deliver higher speed, but rather that one is always being underused. No matter if read or write, only one of them is being used at the time, and if I want to use the other one I have to manually disconnect the cable. If you can give me any hint for why that could be happening I would be glad.
Have you tried checking whether the network itself is the bottleneck? Try running an iperf3 test between the two machines.
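A sketch of that test, assuming the NAS answers at 192.168.1.10 (hypothetical address; iperf3 is bundled with TrueNAS):

```shell
# On the NAS: start an iperf3 server (listens on the default port 5201)
iperf3 -s

# On the PC: push TCP traffic to the NAS for 10 seconds (write direction)
iperf3 -c 192.168.1.10 -t 10

# Same test reversed (NAS sends to PC), which matches the read path
iperf3 -c 192.168.1.10 -t 10 -R
```

If iperf3 reports close to 2.5Gbit line rate in both directions, the NICs are not the bottleneck and attention moves back to ZFS, CPU, and RAM.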
 

DeletedUser122199

Guest
Thanks to everyone responding, even if I acted as a bit of a j*rk...

I really might be wrong, but from what I have seen in the resource monitors, and from my general PC hardware knowledge, I do not believe the problem is the main parts themselves, only that I misunderstood how devices with multiple ethernet ports communicate with each other.

I assumed that 2 x 2.5GBit ports can be used together to get effectively 5GBit, which, as I understand it after your explanations, is just not the case.

I would try the original OS again to see whether I was mistaken about the original speed, but it seems I have unintentionally corrupted it, and from what I understand it cannot be reinstalled with a simple USB key and the BIOS, the way TrueNAS can.

No matter... I originally bought this pre-made system because I didn't want to study the details of NAS and networking in general, but that didn't end well, so...

I'm putting together a DIY NAS with these parts. Please let me know if you think there might be something wrong with the configuration.

CPU: R5 4600g
RAM: 2x8GB 3200MHz (I know you might say to go for 32 just to be sure, but I have these 16GB lying around, so I'll try them first)
Motherboard: ASRock A520M-ITX AC
LAN PCIE 10Gbit: ASUS XG-C100C
Boot drive: WD Blue 3D NAND, M.2 - 500 GB
Storage drives: 4xWD Red Pro 8TB WD8003FFBX

Connected to switch: QNAP QSW-2104-2T with 2x10GBit
Connected to PC with LAN PCIE 10GBit: Zyxel XGN100C 10G RJ45

I believe these should be more than enough for my needs, and the max read speeds should come somewhat close to 10Gbit, assuming of course the drives will all be configured as mirrors. But if I am wrong, please let me know.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Thanks to everyone responding, even if I acted as a bit of a j*rk...

The community here is generally forgiving. Frustration is understandable; we've all been at the start of the ZFS journey at some point. This is harder than it looks. :smile:

LAN PCIE 10Gbit: ASUS XG-C100C

The Aquantia ethernet cards are basically garbage; they are the Realteks of the 10G world. I posted a resource that discusses the difference between client-optimized ethernet cards (like the Aquantia) and real server-grade ethernet chipsets (Chelsio, Intel). You need to be aware that this card may not work particularly well. Read this, substituting "Aquantia" for "Realtek" in your head.


RAM: 2x8GB 3200MHz (I know you might say to go for 32 just to be sure, but I got these 16GB just lying around, so I'll try those first)

This is probably very tight. There's absolutely no harm in trying it if you have it though, especially if you're aware that 32GB might do better for you later.

Storage drives: 4xWD Red Pro 8TB WD8003FFBX

This may not be enough to get "near 10G", especially with the Aquantia and low RAM. The problem is that your read speeds will be highly dependent on having lots of data cached and available in the ARC to send to the client, and 16GB isn't good for readahead. SCALE is even worse because Linux only allocates half the RAM to ARC, so you really only have an 8GB ARC.

The raw math works out that even if all four of your drives were optimally reading at 200MBytes/sec, you're still going to fall short of 10Gbps.
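Spelled out, with the ~200MB/s per-drive figure from the discussion above (integer shell arithmetic, so Gbit/s is computed in tenths):

```shell
drives=4
mb_per_drive=200                        # approximate WD8003FFBX sequential read, MB/s
total_mb=$(( drives * mb_per_drive ))   # 800 MB/s best case, all four drives streaming
tenths_gbit=$(( total_mb * 8 / 100 ))   # MB/s -> tenths of Gbit/s (x8 bits, /1000, x10)
echo "${total_mb} MB/s = $(( tenths_gbit / 10 )).$(( tenths_gbit % 10 )) Gbit/s"
# prints: 800 MB/s = 6.4 Gbit/s
```

And 6.4Gbit/s is the ceiling before SMB, TCP, and ARC-miss overhead, so "somewhat close to 10Gbit" is optimistic even with perfect hardware.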

We've been discussing somewhat similar stuff in another thread you might find helpful.

 