Newbie NVMe read speed slow - point of despair

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
On the other hand, TrueNAS is mentioned as a super NAS system for video editing and rendering everywhere you look.
A lot of this content is bullshit. There is a fundamental difference between most of the videos about TrueNAS and real life. Most videos are about "building a NAS". They basically end when the machine boots successfully and serves some files.

What people should think about instead is not building, but operating a NAS over a long period of time, with hardware issues along the way. Only when the data on the NAS has survived that hardship can one really determine whether the system is fit for duty. Otherwise it is simply geek entertainment, but nothing to use as a basis for valuable data.
 

MrGuvernment

Patron
Joined
Jun 15, 2017
Messages
268
A lot of this content is bullshit. There is a fundamental difference between most of the videos about TrueNAS and real life. Most videos are about "building a NAS". They basically end when the machine boots successfully and serves some files.

What people should think about instead is not building, but operating a NAS over a long period of time, with hardware issues along the way. Only when the data on the NAS has survived that hardship can one really determine whether the system is fit for duty. Otherwise it is simply geek entertainment, but nothing to use as a basis for valuable data.

Agreed. I have also been going through endless videos before even considering asking a question. Most videos miss key content or settings, and many are made by people in an environment where everything has already been configured, so they leave out those important little things you need to do before you can even do the other things.

"Set up an NFS share, just turn on NFS, create a share, and you're done!" - Well, no... it is not that easy from a clean install.
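To illustrate those missing "little things": on a clean Linux client the NFS utilities usually have to be installed first, and the export mounted explicitly before anything appears. A minimal sketch (the server IP and mount point are placeholders, not taken from this thread; the export path matches the pool layout discussed later):

```shell
# Placeholder IP and mount point; assumes nfs-utils (Arch/Manjaro) or
# nfs-common (Debian/Ubuntu) is installed and the export is permitted
# on the NAS side.
# One-off mount:
#   mount -t nfs 10.0.0.2:/mnt/poolintel/dataset /mnt/nas
# Persistent /etc/fstab entry:
10.0.0.2:/mnt/poolintel/dataset  /mnt/nas  nfs  rw,noatime,vers=4  0  0
```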
 

metanamorph

Dabbler
Joined
Nov 25, 2022
Messages
14
A lot of this content is bullshit. There is a fundamental difference between most of the videos about TrueNAS and real life. Most videos are about "building a NAS". They basically end when the machine boots successfully and serves some files.

What people should think about instead is not building, but operating a NAS over a long period of time, with hardware issues along the way. Only when the data on the NAS has survived that hardship can one really determine whether the system is fit for duty. Otherwise it is simply geek entertainment, but nothing to use as a basis for valuable data.
Welcome to the real....
You are right.

So I will give it another try.

The current setup is
- direct connection of NIC. No switch/hub.
- two Intel SSDs as a stripe, served as an NFS share. Changing settings in TrueNAS or via systemd on the client does not noticeably influence the speeds.

All seems to be fine.
- iperf3 shows ~10 Gb/s in both directions, so the link should be OK.
- fio shows the expected speed on every SSD/NVMe. I've double-checked with Samsung NVMes as well.
- NFS writes to the NAS: ~1000 MB/s
- NFS reads from the NAS: ~500 MB/s

I have tried SMB with worse performance in both directions.

Then I switched to FTP. FileZilla on the client gave me quite the opposite result:
- FTP writes ~500MB/s
- FTP reads ~1000MB/s

Any guess?
 

metanamorph

Dabbler
Joined
Nov 25, 2022
Messages
14
What hint or suggestion posted in this thread was outdated or misleading? Or did you mean "someone was wrong on another part of the internet"?



I am no Linux advocate (any more). But the idea behind it was "be free to learn and do it yourself", not a point-and-click, set-and-forget process. *rant* Even though modern Linux distributions (apart from kits like Gentoo/Funtoo or LFS) now "advertise" a Windows-/Mac-like experience. *cough* */rant* That is why the search above is not "the right way" and leads to frustration. Search instead for documentation on how to set up a Linux client for NFS/SMB in general. It exists. There is no need for trial and error. (If you choose your distribution wisely, that is. If I were you, I would ditch any systemd distribution any time, even though the recommendations in your topic on the Manjaro forums pointed in the opposite direction. It is not about "old ways" or "new ways", but about knowledge and control.)

If you do not care, then all of this is not ... well ... "within the scope of your application". You will be better off buying any off the shelf solution. Get a decent support contract, too.

I do not want to sound rude. And I agree that it is wrong to mistake the instrument for achieving a goal for the goal itself. Most of the time we just want "that job done". But with software it is completely different, IMHO.

All the best (from Germany)!

I apologize. My mistake. I should have made clear that it was about things I found on the internet. I'm happy to get support here.
I'm also open (now) to digging into TrueNAS. Using Manjaro was the same: learning by doing, with the same frustration from time to time.

I will have to upgrade my NAS sooner or later; for now I have to work with the hardware I have.
I'm quite sure an upgrade would make things easier.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Any guess?
Could you please add your pool configuration to the information about your NAS? That will make it easier to find, especially since this is now a relatively long thread. Thanks!
 

metanamorph

Dabbler
Joined
Nov 25, 2022
Messages
14
Could you please add your pool configuration to the information about your NAS? That will make it easier to find, also since this is now a relatively long thread. Thanks!
Code:
poolHDD (System Dataset Pool)  ONLINE  |  12.44 MiB (0%) Used  |  7.14 TiB Free

Name     Type        Used       Available  Compression     Ratio  Readonly  Dedup
poolHDD  FILESYSTEM  12.44 MiB  7.14 TiB   lz4             14.34  false     OFF
dataset  FILESYSTEM  96 KiB     7.14 TiB   Inherits (lz4)  1.00   false     OFF

poolintel  ONLINE  |  250.08 GiB (14%) Used  |  1.56 TiB Free

Name        Type        Used        Available  Compression     Ratio  Readonly  Dedup
poolintel   FILESYSTEM  250.08 GiB  1.56 TiB   off             1.00   false     OFF
dataset     FILESYSTEM  180.08 GiB  1.56 TiB   Inherits (off)  1.00   false     OFF
datasetftp  FILESYSTEM  70 GiB      1.56 TiB   lz4             1.00   false     OFF
Desktop
- Motherboard: ROG Rampage VI Extreme
- Operating System: Manjaro Linux KDE, Plasma 5.26.3, KDE Frameworks 5.99.0, Qt 5.15.7
- Kernel: 5.15.78-1-MANJARO (64-bit), Graphics Platform: X11
- Processor: 18 x Intel Core i9-7980XE @ 2.60GHz
- Memory: 125.5 GiB RAM
- Graphics: NVIDIA GeForce RTX 3070
- Ethernet: Aquantia AQC107 (onboard)

Server
- Motherboard: ASRock Z79M OC Formula
- Operating System: TrueNAS-13.0-U3
- Processor: 4 x Intel Core i7-4770K @ 3.50GHz
- Memory: 32 GiB DDR3 RAM
- Ethernet: Intel X540
- Storage: 2 x 2TB Intel 660p SSD (stripe), 1 x 8TB IronWolf (second one for a mirror is on my wishlist)
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Your speed will basically be that of your single HDD combined with the RAM cache. The description of your tests suggests a single execution of each one and does not specify the file size.

What needs to be done is to run the tests in such a way that it is clear what is served by the cache and what is not. I recommend running each test multiple times, running multiple sessions per execution in parallel, and using files bigger than your cache. That should yield additional insights.
 

metanamorph

Dabbler
Joined
Nov 25, 2022
Messages
14
Your speed will basically be that of your single HDD combined with the RAM cache.
I'm happy with the speed of the HDD pool. Slow as expected.
Writing is fast until the RAM is full. I'm talking about the Intel SSD pool.
I will create a new pool as a mirror of the two 2 TB 660p drives.
The description of your tests suggests a single execution of each one and does not specify the file size.
?
What needs to be done is to run the tests in such a way that it is clear what is served by the cache and what is not. I recommend running each test multiple times, running multiple sessions per execution in parallel, and using files bigger than your cache. That should yield additional insights.
Sure, I will test the new SSD pool, which has a 128k record size, no compression, and standard sync.
There are 32 GB of RAM available, so I will use a 35 GB file for the tests.
Unfortunately I have no idea how to run the tests in parallel. Is it a fio option? Or should I just open several TrueNAS GUIs?

Code:
root@truenas[~]# sync;fio --randrepeat=1 --ioengine=posixaio --gtod_reduce=1 --name=test --filename=/mnt/poolintel/dataset/testX --direct=1 --bs=128k --iodepth=4 --size=35G --readwrite=randread --ramp_time=4

test: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=posixaio, iodepth=4
fio-3.28
Starting 1 process
test: Laying out IO file (1 file / 35840MiB)
Jobs: 1 (f=1): [r(1)][61.5%][r=4516MiB/s][r=36.1k IOPS][eta 00m:05s]
test: (groupid=0, jobs=1): err= 0: pid=2263: Thu Dec  8 01:05:36 2022
read: IOPS=35.5k, BW=4434MiB/s (4649MB/s)(17.4GiB/4015msec)
bw (  MiB/s): min= 4265, max= 4547, per=100.00%, avg=4444.45, stdev=104.47, samples=8
iops        : min=34122, max=36382, avg=35555.50, stdev=835.71, samples=8
cpu          : usr=8.00%, sys=8.87%, ctx=121316, majf=0, minf=1
IO depths    : 1=0.1%, 2=6.2%, 4=93.8%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=142408,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: bw=4434MiB/s (4649MB/s), 4434MiB/s-4434MiB/s (4649MB/s-4649MB/s), io=17.4GiB (18.7GB), run=4015-4015msec


Code:
root@truenas[~]# sync;fio --randrepeat=1 --ioengine=posixaio --gtod_reduce=1 --name=test --filename=/mnt/poolintel/dataset/testX --direct=1 --bs=1M --iodepth=4 --size=35G --readwrite=randread --ramp_time=4
test: (g=0): rw=randread, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=posixaio, iodepth=4
fio-3.28
Starting 1 process
Jobs: 1 (f=1): [r(1)][62.5%][r=3136MiB/s][r=3135 IOPS][eta 00m:06s]
test: (groupid=0, jobs=1): err= 0: pid=2356: Thu Dec  8 01:14:02 2022
read: IOPS=3287, BW=3288MiB/s (3448MB/s)(17.0GiB/5286msec)
bw (  MiB/s): min= 3042, max= 4134, per=100.00%, avg=3301.78, stdev=366.44, samples=10
iops        : min= 3042, max= 4134, avg=3301.50, stdev=366.59, samples=10
cpu          : usr=1.10%, sys=1.59%, ctx=17424, majf=0, minf=1
IO depths    : 1=0.0%, 2=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=17377,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: bw=2303MiB/s (2414MB/s), 2303MiB/s-2303MiB/s (2414MB/s-2414MB/s), io=26.0GiB (27.9GB), run=11546-11546msec


Code:
root@truenas[~]# sync;fio --randrepeat=1 --ioengine=posixaio --gtod_reduce=1 --name=test --filename=/mnt/poolintel/dataset/testX --direct=1 --bs=32k --iodepth=4 --size=35G --readwrite=randread --ramp_time=4

test: (g=0): rw=randread, bs=(R) 32.0KiB-32.0KiB, (W) 32.0KiB-32.0KiB, (T) 32.0KiB-32.0KiB, ioengine=posixaio, iodepth=4
fio-3.28
Starting 1 process
Jobs: 1 (f=1): [r(1)][88.2%][r=1518MiB/s][r=48.6k IOPS][eta 00m:04s]
test: (groupid=0, jobs=1): err= 0: pid=2473: Thu Dec  8 01:25:33 2022
read: IOPS=39.3k, BW=1229MiB/s (1289MB/s)(30.7GiB/25550msec)
bw (  MiB/s): min= 1142, max= 1518, per=99.68%, avg=1225.44, stdev=80.99, samples=50
iops        : min=36574, max=48603, avg=39213.78, stdev=2591.71, samples=50
cpu          : usr=6.30%, sys=12.10%, ctx=857040, majf=0, minf=1
IO depths    : 1=0.1%, 2=6.7%, 4=93.3%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1005095,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: bw=1229MiB/s (1289MB/s), 1229MiB/s-1229MiB/s (1289MB/s-1289MB/s), io=30.7GiB (32.9GB), run=25550-25550msec
 

metanamorph

Dabbler
Joined
Nov 25, 2022
Messages
14
OK, I found a good site explaining fio options... ;-)

So here are some parallel tests:

Code:
root@truenas[~]# sync;fio --randrepeat=1 --ioengine=posixaio --gtod_reduce=1 --name=test --filename=/mnt/poolintel/dataset/testX --direct=1 --bs=128k --iodepth=1 --size=35G --readwrite=randread --ramp_time=1 --numjobs=2 --group_reporting
test: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=posixaio, iodepth=1
...
fio-3.28
Starting 2 processes
Jobs: 1 (f=1): [r(1),_(1)][97.7%][r=779MiB/s][r=6231 IOPS][eta 00m:01s]
test: (groupid=0, jobs=2): err= 0: pid=2631: Thu Dec  8 01:41:09 2022
  read: IOPS=13.7k, BW=1717MiB/s (1800MB/s)(67.7GiB/40377msec)
   bw (  MiB/s): min= 1545, max= 2323, per=100.00%, avg=1791.92, stdev=123.68, samples=154
   iops        : min=12366, max=18585, avg=14334.61, stdev=989.50, samples=154
  cpu          : usr=1.04%, sys=1.55%, ctx=554983, majf=0, minf=1
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=554643,0,0,0 short=0,0,0,0 dropped=0,0,0,0
    latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):

   READ: bw=1717MiB/s (1800MB/s), 1717MiB/s-1717MiB/s (1800MB/s-1800MB/s), io=67.7GiB (72.7GB), run=40377-40377msec


Code:
root@truenas[~]# sync;fio --randrepeat=1 --ioengine=posixaio --gtod_reduce=1 --name=test --filename=/mnt/poolintel/dataset/testX --direct=1 --bs=128k --iodepth=1 --size=35G --readwrite=randread --ramp_time=1 --numjobs=4 --group_reporting
test: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=posixaio, iodepth=1
...
fio-3.28
Starting 4 processes
Jobs: 2 (f=2): [r(2),_(2)][97.4%][r=2147MiB/s][r=17.2k IOPS][eta 00m:01s]
test: (groupid=0, jobs=4): err= 0: pid=2707: Thu Dec  8 01:46:40 2022
  read: IOPS=30.3k, BW=3784MiB/s (3968MB/s)(136GiB/36828msec)
   bw (  MiB/s): min= 3380, max= 4618, per=100.00%, avg=3943.62, stdev=68.88, samples=279
   iops        : min=27041, max=36945, avg=31547.35, stdev=551.04, samples=279
  cpu          : usr=1.40%, sys=1.83%, ctx=1119017, majf=0, minf=1
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1114955,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=3784MiB/s (3968MB/s), 3784MiB/s-3784MiB/s (3968MB/s-3968MB/s), io=136GiB (146GB), run=36828-36828msec
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
I'm happy with the speed of the HDD pool. Slow as expected.
Writing is fast until the RAM is full. I'm talking about the Intel SSD pool.
Sorry, I missed that.
 