Poor pool performance due to weak hardware?

boggie1688

Explorer
Joined
Jul 9, 2015
Messages
58
Background Info:

TrueNAS SCALE 22.02.0 Release
AMD 3400G
MSI B450-A Pro MAX
Corsair 2x16GB DDR4 2933
Intel x520-DA2
Sun Oracle 375-3609-03 8 Port 6 Gb/s SAS HBA
Netapp DS-4246 with IOM6

Pool #1:
7x Toshiba 5TB 7200rpm 128MB cache
Raidz1

Pool #2:
1x Toshiba 18TB 7200rpm 256MB cache

Desktop:
AMD 3950x
Intel x520-DA2
32GB DDR4 3200

Long story short, I've been struggling to get my 10G network to produce decent speeds. I bought two Intel x520-DA2s, stuck one in my desktop and the other in the NAS box. I purchased two transceivers from FS.com, both coded for Intel hardware. Upon plugging everything in and configuring the network for a direct connection, my SMB file copies produced some really POOR results.

I'll fully admit I'm not a pro user like many people here, but I did notice one interesting thing. If I copy from Pool#1, my speeds vary wildly. On a fresh reboot with no apps running, I can see upwards of 800 MB/s. However, if my apps are running, my copy speeds bounce between 50 MB/s and 400 MB/s, averaging somewhere around 250-300 MB/s. Whether or not I have apps running, if I copy a file from Pool#2, it always copies at around 240 MB/s.

Given that Pool#2 is just a single drive and Pool#1 has 7 drives, I assume there is something wrong with how Pool#1 is configured. I've had this pool for almost 7 years. I also think I noticed that copying files from Pool#1 seems to draw on some CPU power, whereas copying from Pool#2 doesn't. My CPU doesn't come close to maxing out, though; the dashboard shows only 35-50% usage.

Is my Pool#1 performance due to having a weak processor? Or is there something else I should be looking into?
 
Last edited:

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
reads and writes use RAM first (ARC), then dump to disk in consolidated writes. if your test file fits into RAM, your measurement will be RAM speed, not your disks.
300MB/s from a single-vdev raidz1 is about right. with mirrors you might get more, but ultimately zfs is limited by the slowest vdev, vdev speed is limited by the slowest drive, and HDDs top out at ~150MB/s.
getting actual 10GbE speeds pretty much requires SSDs, or a huge number of vdevs.
raidz will naturally take more calculation than a stripe, as a stripe has zero parity calculation. mirrors have the best overall performance.
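if you want to see whether a given copy is coming out of ARC or off the disks, here's a quick sketch (pool name "tank" is a placeholder for yours):

# watch per-vdev activity during a copy; if the disks sit near idle
# while the transfer is fast, the reads are being served from ARC
zpool iostat -v tank 5

# summarize ARC size and hit rate (arc_summary ships with OpenZFS on SCALE)
arc_summary | head -40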
 

boggie1688

Explorer
Joined
Jul 9, 2015
Messages
58
Thanks for replying.

I specifically picked files much larger than RAM, and never copied the same file back to back, to avoid pulling directly from the ARC.

If 300 MB/s sounds about right, then I'm happy. I just thought that with 7 drives it would be more, and sometimes I would randomly get 800 MB/s. BUT perhaps there was some caching involved.

Thanks again!
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
most zfs performance is based on vdev performance and vdev count, not the number of drives: more vdevs, more performance. this changes a bit with raidz, which can give more throughput on certain single-stream reads/writes (mostly things like movies). but this is why mirrors, which usually are 3 drives at most and have no parity calculation (they basically just write the data to 2 drives instead of 1), give the best overall performance. mirrors also have the potential for much better read performance, since any drive in the mirror can supply data, so different drives can be fulfilling different reads, or, i think, different parts of the same read.
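for comparison, a striped-mirror layout looks something like this (sketch only; "tank" and the device names are placeholders, not your actual disks):

# 3 mirror vdevs = roughly 3x the IOPS of one vdev, at 50% usable space
zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf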

also note that raidz1 is strongly not recommended for disks larger than 2-3TB or so. a resilver has a very high chance of killing another drive before it can complete, losing the entire pool.
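rough back-of-envelope on why (assuming ~150MB/s sustained rebuild speed, which is optimistic on a pool still serving data):

# 5TB drive: 5,000,000 MB / 150 MB/s / 3600 ~= 9 hours best case
echo $((5 * 1000 * 1000 / 150 / 3600))
# an 18TB drive pushes that past 33 hours, with the surviving disks
# being hammered with reads the whole time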
 
Last edited:

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
Is your system dataset on Pool#1?

Ideally you have a mirrored boot pool (SSD) large enough to hold the system dataset and swap (32-64GB), thereby keeping that IO off your data pools.
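A quick way to check where the system dataset currently lives (a sketch; the dataset name follows whichever pool holds it):

# the system dataset appears as a .system child of its host pool
zfs list -r -o name,used | grep '\.system'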
 

boggie1688

Explorer
Joined
Jul 9, 2015
Messages
58
Is your system dataset on Pool#1?

Ideally you have a mirrored boot pool (SSD) large enough to hold the system dataset and swap (32-64GB), thereby keeping that IO off your data pools.

When I started with FreeNAS years ago, it was just a simple NAS. I threw a bunch of drives in and upgraded the software whenever new versions came out. I only recently migrated to SCALE, and within the last 5 months realized I could move my system dataset off my spinning disks onto my SSD. I also moved the application pool from Pool#1 to the same SSD that holds the system dataset. Seriously, poking around the GUI always leads to new discoveries for me.

I found out last night that pointing the applications at an SSD doesn't truly migrate all the data: I deleted the ix-applications dataset on Pool#1, and half my applications would not start. 4 out of 9 started fine, but the other 5 did not. I spent a couple of hours reinstalling the applications, and my copy/write performance has improved.

Long story short, I think I discovered what you were getting at. It seems these applications may have been accessing Pool#1 and eating into some of the performance. Tests today show reads are much better, averaging around 450 MB/s, with writes around 275 MB/s.

By swap, are you saying my SSD should have an extra 32-64GB of space for when my memory fills up? The SSD has about 140GB of free space, but it's not mirrored. Given how cheap these small SSDs are these days, maybe I should buy a pair.

To be honest, I probably don't have this stuff configured anywhere near where it is supposed to be. I didn't even know you could have multiple vdevs in a pool, but thanks to artlessknave, I've been reading up and watching videos since last night.
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
[attached screenshot: truenas_forum_temp.PNG]
 