Newbie NVMe read speed slow - point of despair

metanamorph

Dabbler
Joined
Nov 25, 2022
Messages
14
Hi all,

I'm a German street artist and desperately in need of a fast NAS.
So I upgraded an old Z97 ASRock board to work as a TrueNAS server. See details in my signature, please.

This is all brand new to me and the learning curve is steep.
Now I'm facing an issue with an Intel SSD 660p 2TB.
The write speed is 1.1-1.2 GB/s; reading, however, is super slow in comparison: ~300 MB/s.

Only the system disk and the Intel SSD are installed.

I hope this is not the average outcome for an NVMe drive. I'm quite confused by the designations and usage of these drives.
I built a pool with this single SSD:
lz4 | 128K | NFS
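
For reference, a quick way to double-check those dataset settings from the TrueNAS shell; a minimal sketch, assuming the pool/dataset names used in the fio commands below:

Code:
# show compression, record size and sync behaviour of the dataset
zfs get compression,recordsize,sync pool-intel/dataset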

Despite the fio results (see below), the read performance on my desktop is terrible.
I've tried many settings with no success. It just gets worse.

I hope someone can help me.
Please let me know what information you might need in order to find a solution.
Thank you for your time.


Code:
root@truenas[~]# fio --filename=/mnt/pool-intel/dataset/test --sync=1 --rw=randread --bs=1M --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300 && rm /mnt/pool-intel/dataset/test

fio-3.28
Starting 1 process
test: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1)
test: (groupid=0, jobs=1): err= 0: pid=3837: Sat Nov 26 05:39:22 2022
read: IOPS=5848, BW=5848MiB/s (6132MB/s)(10.0GiB/1751msec)
clat (usec): min=90, max=400, avg=170.38, stdev= 5.33
lat (usec): min=90, max=401, avg=170.42, stdev= 5.33
clat percentiles (usec):
|  1.00th=[  165],  5.00th=[  165], 10.00th=[  167], 20.00th=[  167],
| 30.00th=[  169], 40.00th=[  169], 50.00th=[  169], 60.00th=[  172],
| 70.00th=[  172], 80.00th=[  174], 90.00th=[  178], 95.00th=[  180],
| 99.00th=[  184], 99.50th=[  188], 99.90th=[  192], 99.95th=[  194],
| 99.99th=[  223]
bw (  MiB/s): min= 5839, max= 5863, per=100.00%, avg=5852.95, stdev=12.25, samples=3
iops        : min= 5839, max= 5863, avg=5852.67, stdev=12.34, samples=3
lat (usec)   : 100=0.03%, 250=99.96%, 500=0.01%
cpu          : usr=0.40%, sys=99.54%, ctx=28, majf=0, minf=257
IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete  :



root@truenas[~]# fio --filename=/mnt/pool-intel/dataset/test --sync=1 --rw=randwrite --bs=1M --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300 && rm /mnt/pool-intel/dataset/test

fio-3.28
Starting 1 process
test: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [w(1)][100.0%][w=658MiB/s][w=658 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=3987: Sat Nov 26 05:52:58 2022
write: IOPS=663, BW=664MiB/s (696MB/s)(10.0GiB/15424msec); 0 zone resets
clat (usec): min=1278, max=19988, avg=1496.00, stdev=602.73
lat (usec): min=1288, max=20001, avg=1505.20, stdev=602.76
clat percentiles (usec):
|  1.00th=[ 1303],  5.00th=[ 1319], 10.00th=[ 1401], 20.00th=[ 1401],
| 30.00th=[ 1418], 40.00th=[ 1418], 50.00th=[ 1418], 60.00th=[ 1434],
| 70.00th=[ 1483], 80.00th=[ 1516], 90.00th=[ 1696], 95.00th=[ 1745],
| 99.00th=[ 1909], 99.50th=[ 2008], 99.90th=[15401], 99.95th=[15795],
| 99.99th=[19792]
bw (  KiB/s): min=656095, max=699017, per=100.00%, avg=680666.37, stdev=12931.45, samples=30
iops        : min=  640, max=  682, avg=664.20, stdev=12.71, samples=30
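
Worth noting: the random-read result above (~6 GB/s) is well above what a single 660p can deliver from flash (roughly 1.8 GB/s sequential), so those reads are almost certainly coming from ZFS's RAM cache (ARC), since the 10 GiB test file had just been written. A sketch of a read test that is more likely to touch the disk, assuming there is room for a test file larger than the server's RAM (adjust the 64G to your RAM size):

Code:
# same fio read test, but with a file too large to fit in ARC
fio --filename=/mnt/pool-intel/dataset/bigtest --rw=randread --bs=1M --numjobs=1 --iodepth=4 --group_reporting --name=bigtest --filesize=64G --runtime=300 && rm /mnt/pool-intel/dataset/bigtest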

 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
Please post the output of

Code:
pciconf -lv
 

metanamorph

Dabbler
Joined
Nov 25, 2022
Messages
14
Please post the output of

Code:
pciconf -lv
Code:
root@truenas[~]# pciconf -lv
hostb0@pci0:0:0:0:      class=0x060000 rev=0x06 hdr=0x00 vendor=0x8086 device=0x0c00 subvendor=0x1849 subdevice=0x0c00
    vendor     = 'Intel Corporation'
    device     = '4th Gen Core Processor DRAM Controller'
    class      = bridge
    subclass   = HOST-PCI
pcib1@pci0:0:1:0:       class=0x060400 rev=0x06 hdr=0x01 vendor=0x8086 device=0x0c01 subvendor=0x1849 subdevice=0x0c01
    vendor     = 'Intel Corporation'
    device     = 'Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller'
    class      = bridge
    subclass   = PCI-PCI
pcib2@pci0:0:1:1:       class=0x060400 rev=0x06 hdr=0x01 vendor=0x8086 device=0x0c05 subvendor=0x1849 subdevice=0x0c05
    vendor     = 'Intel Corporation'
    device     = 'Xeon E3-1200 v3/4th Gen Core Processor PCI Express x8 Controller'
    class      = bridge
    subclass   = PCI-PCI
vgapci0@pci0:0:2:0:     class=0x030000 rev=0x06 hdr=0x00 vendor=0x8086 device=0x0412 subvendor=0x1849 subdevice=0x0412
    vendor     = 'Intel Corporation'
    device     = 'Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller'
    class      = display
    subclass   = VGA
xhci0@pci0:0:20:0:      class=0x0c0330 rev=0x00 hdr=0x00 vendor=0x8086 device=0x8cb1 subvendor=0x1849 subdevice=0x8cb1
    vendor     = 'Intel Corporation'
    device     = '9 Series Chipset Family USB xHCI Controller'
    class      = serial bus
    subclass   = USB
none0@pci0:0:22:0:      class=0x078000 rev=0x00 hdr=0x00 vendor=0x8086 device=0x8cba subvendor=0x1849 subdevice=0x8cba
    vendor     = 'Intel Corporation'
    device     = '9 Series Chipset Family ME Interface'
    class      = simple comms
ehci0@pci0:0:26:0:      class=0x0c0320 rev=0x00 hdr=0x00 vendor=0x8086 device=0x8cad subvendor=0x1849 subdevice=0x8cad
    vendor     = 'Intel Corporation'
    device     = '9 Series Chipset Family USB EHCI Controller'
    class      = serial bus
    subclass   = USB
pcib3@pci0:0:28:0:      class=0x060400 rev=0xd0 hdr=0x01 vendor=0x8086 device=0x8c90 subvendor=0x1849 subdevice=0x8c90
    vendor     = 'Intel Corporation'
    device     = '9 Series Chipset Family PCI Express Root Port 1'
    class      = bridge
    subclass   = PCI-PCI
pcib4@pci0:0:28:2:      class=0x060400 rev=0xd0 hdr=0x01 vendor=0x8086 device=0x8c94 subvendor=0x1849 subdevice=0x8c94
    vendor     = 'Intel Corporation'
    device     = '9 Series Chipset Family PCI Express Root Port 3'
    class      = bridge
    subclass   = PCI-PCI
pcib5@pci0:0:28:4:      class=0x060400 rev=0xd0 hdr=0x01 vendor=0x8086 device=0x8c98 subvendor=0x1849 subdevice=0x8c98
    vendor     = 'Intel Corporation'
    device     = '9 Series Chipset Family PCI Express Root Port 5'
    class      = bridge
    subclass   = PCI-PCI
ehci1@pci0:0:29:0:      class=0x0c0320 rev=0x00 hdr=0x00 vendor=0x8086 device=0x8ca6 subvendor=0x1849 subdevice=0x8ca6
    vendor     = 'Intel Corporation'
    device     = '9 Series Chipset Family USB EHCI Controller'
    class      = serial bus
    subclass   = USB
isab0@pci0:0:31:0:      class=0x060100 rev=0x00 hdr=0x00 vendor=0x8086 device=0x8cc4 subvendor=0x1849 subdevice=0x8cc4
    vendor     = 'Intel Corporation'
    device     = 'Z97 Chipset LPC Controller'
    class      = bridge
    subclass   = PCI-ISA
ahci0@pci0:0:31:2:      class=0x010601 rev=0x00 hdr=0x00 vendor=0x8086 device=0x8c82 subvendor=0x1849 subdevice=0x8c82
    vendor     = 'Intel Corporation'
    device     = '9 Series Chipset Family SATA Controller [AHCI Mode]'
    class      = mass storage
    subclass   = SATA
ichsmb0@pci0:0:31:3:    class=0x0c0500 rev=0x00 hdr=0x00 vendor=0x8086 device=0x8ca2 subvendor=0x1849 subdevice=0x8ca2
    vendor     = 'Intel Corporation'
    device     = '9 Series Chipset Family SMBus Controller'
    class      = serial bus
    subclass   = SMBus
aq0@pci0:1:0:0: class=0x020000 rev=0x02 hdr=0x00 vendor=0x1d6a device=0x07b1 subvendor=0x7053 subdevice=0x1001
    vendor     = 'Aquantia Corp.'
    device     = 'AQC107 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion]'
    class      = network
    subclass   = ethernet
nvme0@pci0:2:0:0:       class=0x010802 rev=0x03 hdr=0x00 vendor=0x8086 device=0xf1a8 subvendor=0x8086 subdevice=0x390d
    vendor     = 'Intel Corporation'
    device     = 'SSD 660P Series'
    class      = mass storage
    subclass   = NVM
nvme1@pci0:3:0:0:       class=0x010802 rev=0x00 hdr=0x00 vendor=0x144d device=0xa808 subvendor=0x144d subdevice=0xa801
    vendor     = 'Samsung Electronics Co Ltd'
    device     = 'NVMe SSD Controller SM981/PM981/PM983'
    class      = mass storage
    subclass   = NVM
alc0@pci0:4:0:0:        class=0x020000 rev=0x10 hdr=0x00 vendor=0x1969 device=0xe091 subvendor=0x1849 subdevice=0xe091
    vendor     = 'Qualcomm Atheros'
    device     = 'Killer E220x Gigabit Ethernet Controller'
    class      = network
    subclass   = ethernet
nvme2@pci0:5:0:0:       class=0x010802 rev=0x03 hdr=0x00 vendor=0x8086 device=0xf1a8 subvendor=0x8086 subdevice=0x390d
    vendor     = 'Intel Corporation'
    device     = 'SSD 660P Series'
    class      = mass storage
    subclass   = NVM
 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
Please add the output of

Code:
dmesg -a


Code:
kldstat


The Aquantia driver for your network card is not considered "stable", even though it exists for FreeBSD. And even if it "works" (in a way), I would not expect peak performance from this card. If I were you, the first thing I'd do is get an Intel or Chelsio (T5) NIC with confirmed stable driver support.

What switch do you connect to?
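
If it helps while gathering that output, the driver and negotiated link speed of the NAS-side interface can also be checked from the TrueNAS (FreeBSD) shell; a small sketch (the interface name will differ per system):

Code:
# list interfaces with their media/link status; look at the 10GbE port's "media:" and "status:" lines
ifconfig -a | grep -E 'flags=|media:|status:'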
 
Last edited:

metanamorph

Dabbler
Joined
Nov 25, 2022
Messages
14
Please add the output of

Code:
uname -a


Code:
dmesg -a


Code:
kldstat


The Aquantia driver for your network card is not considered "stable", even though it exists for FreeBSD. Even if it "works" (in a way), I would not expect peak performance via NFS from this card. If I were you, the first thing I'd do is get an Intel NIC.

What switch do you connect to?
Code:
$uname -a
Linux edgar-system 5.15.78-1-MANJARO #1 SMP PREEMPT Thu Nov 10 20:50:09 UTC 2022 x86_64 GNU/Linux

$dmesg -a                                                                                ✔  
dmesg: invalid option -- a
Try 'dmesg --help' for more information.

$kldstat
zsh: command not found: kldstat

 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
Well, Linux doesn't support these commands and/or options. Do it on TrueNAS (GUI -> side menu -> Shell), please. And the uname part isn't necessary. I didn't read carefully enough and missed that you had already written in your signature that you are running TrueNAS 13-U3.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
The Aquantia NIC is in the Linux client, not in the NAS.
The NAS uses a "Synology E10G18-T1", which is an unknown quantity to me.

But the network cannot explain why a QLC drive can read at a quarter of its write speed.
 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
When fio read numbers come out low, it's usually because someone is running single jobs. But going by what he posted:

Code:
fio --filename=/mnt/pool-intel/dataset/test --sync=1 --rw=randread --bs=1M --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300 && rm /mnt/pool-intel/dataset/test
[...]
read: IOPS=5848, BW=5848MiB/s (6132MB/s)(10.0GiB/1751msec)
[...]


The bottlenecked ~300 MB/s are over the network (NFS), as far as I understand it. And the Synology card comes with an Aquantia chip.
 
Last edited:

metanamorph

Dabbler
Joined
Nov 25, 2022
Messages
14
I apologize,

here are the outputs from the commands run in the TrueNAS shell.
I have to attach them as a file because of the 30,000-character limit here.

And the output of kldstat:

Code:
Id Refs Address                Size Name
1  113 0xffffffff80200000  23c08e0 kernel
2    1 0xffffffff825c1000    9ce58 hptrr.ko
3    1 0xffffffff8265e000    175b8 if_atlantic.ko
4    1 0xffffffff82677000    32cf8 if_bnxt.ko
5    1 0xffffffff826aa000    839c8 hptnr.ko
6    1 0xffffffff8272e000    859a0 ispfw.ko
7    1 0xffffffff827b4000    11968 ipmi.ko
8    3 0xffffffff827c6000     3cb0 smbus.ko
9    1 0xffffffff827ca000    a7cf8 ice_ddp.ko
10    1 0xffffffff82872000   224328 if_qlxgbe.ko
11    1 0xffffffff82a97000   5a2c28 openzfs.ko
12    1 0xffffffff8303a000   11bac0 hpt27xx.ko
13    1 0xffffffff83518000     3250 ichsmb.ko
14    1 0xffffffff83600000   53e438 vmm.ko
15    1 0xffffffff8351c000     21cc nmdm.ko
16    1 0xffffffff8351f000    3de40 ctl.ko
17    1 0xffffffff8355d000     2268 dtraceall.ko
18    9 0xffffffff83560000     8a60 opensolaris.ko
19    9 0xffffffff83569000    372f8 dtrace.ko
20    1 0xffffffff835a1000     2274 dtmalloc.ko
21    1 0xffffffff835a4000     2cb8 dtnfscl.ko
22    1 0xffffffff835a7000     3331 fbt.ko
23    1 0xffffffff83b3f000    55570 fasttrap.ko
24    1 0xffffffff835ab000     2258 sdt.ko
25    1 0xffffffff835ae000     91b4 systrace.ko
26    1 0xffffffff835b8000     91b4 systrace_freebsd32.ko
27    1 0xffffffff835c2000     234c profile.ko
28    1 0xffffffff835c5000     589c geom_multipath.ko
29    1 0xffffffff835cb000    11624 hwpmc.ko
30    1 0xffffffff835dd000    16438 t4_tom.ko
31    1 0xffffffff835f4000     20f0 toecore
32    1 0xffffffff835f7000     2a08 mac_ntpd.ko
 

Attachments

  • dmesg-a.txt
    32 KB · Views: 96

metanamorph

Dabbler
Joined
Nov 25, 2022
Messages
14
Code:
$ iperf3 -c 192.200.1.0 -p 5201 -f m -R                                                          ✔
Connecting to host 192.200.1.0, port 5201
Reverse mode, remote host 192.200.1.0 is sending
[  5] local 192.200.1.1 port 45896 connected to 192.200.1.0 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.08 GBytes  9309 Mbits/sec                 
[  5]   1.00-2.00   sec  1.10 GBytes  9409 Mbits/sec                 
[  5]   2.00-3.00   sec  1.09 GBytes  9402 Mbits/sec                 
[  5]   3.00-4.00   sec  1.09 GBytes  9405 Mbits/sec                 
[  5]   4.00-5.00   sec  1.09 GBytes  9401 Mbits/sec                 
[  5]   5.00-6.00   sec  1.10 GBytes  9406 Mbits/sec                 
[  5]   6.00-7.00   sec  1.09 GBytes  9404 Mbits/sec                 
[  5]   7.00-8.00   sec  1.09 GBytes  9404 Mbits/sec                 
[  5]   8.00-9.00   sec  1.09 GBytes  9396 Mbits/sec                 
[  5]   9.00-10.00  sec  1.09 GBytes  9392 Mbits/sec                 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  10.9 GBytes  9393 Mbits/sec    0             sender
[  5]   0.00-10.00  sec  10.9 GBytes  9393 Mbits/sec                  receiver


$ iperf3 -c 192.200.1.0 -p 5201 -f m                                                     ✔  10s 
Connecting to host 192.200.1.0, port 5201
[  5] local 192.200.1.1 port 59686 connected to 192.200.1.0 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.06 GBytes  9112 Mbits/sec    0   1.68 MBytes       
[  5]   1.00-2.00   sec  1.05 GBytes  9007 Mbits/sec    0   1.68 MBytes       
[  5]   2.00-3.00   sec  1.06 GBytes  9070 Mbits/sec    0   1.68 MBytes       
[  5]   3.00-4.00   sec  1.04 GBytes  8965 Mbits/sec    0   1.68 MBytes       
[  5]   4.00-5.00   sec  1.05 GBytes  9039 Mbits/sec    0   1.68 MBytes       
[  5]   5.00-6.00   sec  1.07 GBytes  9203 Mbits/sec    0   1.68 MBytes       
[  5]   6.00-7.00   sec  1.06 GBytes  9115 Mbits/sec    0   1.68 MBytes       
[  5]   7.00-8.00   sec  1.05 GBytes  9018 Mbits/sec    0   1.68 MBytes       
[  5]   8.00-9.00   sec  1.06 GBytes  9102 Mbits/sec    0   1.68 MBytes       
[  5]   9.00-10.00  sec  1.06 GBytes  9091 Mbits/sec    0   1.68 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  10.6 GBytes  9072 Mbits/sec    0             sender
[  5]   0.00-10.00  sec  10.6 GBytes  9069 Mbits/sec                  receiver



Code:
root@truenas[~]# dd if=/dev/zero of=/mnt/poolintel/dataset/testMeNow2 oflag=direct bs=1024M count=10 oflag=direct
10+0 records in
10+0 records out
10737418240 bytes transferred in 4.936961 secs (2174904386 bytes/sec)

root@truenas[~]# dd of=/dev/zero if=/mnt/poolintel/dataset/testMeNow2 iflag=direct
20971520+0 records in
20971520+0 records out
10737418240 bytes transferred in 27.093779 secs (396305671 bytes/sec)
root@truenas[~]#
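
One detail about the read test above: no bs= was given, so dd fell back to its tiny default block size (20971520 records for 10 GiB works out to 512 bytes per read), which on its own caps throughput. A sketch of the same read with a 1 MiB block size; note that a file written moments earlier may still be served largely from ARC:

Code:
# re-read the test file with 1 MiB blocks (bs given in bytes so it works with both BSD and GNU dd)
dd if=/mnt/poolintel/dataset/testMeNow2 of=/dev/null bs=1048576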
 

metanamorph

Dabbler
Joined
Nov 25, 2022
Messages
14
Am I wrong about this?

When I copy a file to the NAS share, it stays in memory. When I read that file from the share right after copying it, it should effectively be a RAM read, not a disk read. So the bottleneck is NOT the bandwidth of the network and NOT the performance of any SSD/NVMe, whether striped or single.

I have three very old 2.5" HDDs. I put them in for a test run, set up as a raidz1 pool.
fio shows ~130 MB/s read on 10 GB. When I read that 10 GB file from the desktop, the transfer speed starts at ~300 MB/s and drops down to the HDD pool's actual read speed.
That would prove that it is not about the disk speed. It is a RAM thing.
I have the same effect on the NVMe pool, except that there it stays at ~300 MB/s read performance.
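
That assumption can be checked directly on the NAS: ZFS exposes ARC counters, so you can see whether those ~300 MB/s reads are actually being answered from RAM. A sketch for the TrueNAS CORE (FreeBSD) shell:

Code:
# ARC size plus hit/miss counters; note the values, run the read test, then compare
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses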
 
Last edited:

awasb

Patron
Joined
Jan 11, 2021
Messages
415
Nope. Not necessarily so. ZFS is about data integrity.

[1] ZFS collects data and writes transaction groups to disk while "managing" free space. In a simplified view: any block that "fits" gets written to a large enough stretch of free space. The more data gets written (over time, that is), the more the pool gets fragmented. (In addition: NFS enforces sync writes.) So reads may end up being collected from many different blocks. Shouldn't be a big hit with NVMe, though.

[2] Reads are not necessarily served from ARC. The cache is organized in a combined way: "statistically" (by access frequency) and "chronologically" (by recency). Shouldn't be a big hit either with your testing scenario. (Not that big.)

How do you share data? Via NFS3 or NFS4? TCP/UDP? Multichannel/Nconnect enabled? Please post the output of mount on your client machine.

The next step I would try is reading large (sequential) data from the NAS to a client's RAM disk, maybe, and keeping an eye on CPU spikes on either side.

Could you possibly try another client machine and switch to another sharing protocol?

Maybe I have overlooked it, but again: What switch do you connect your machines to?
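
A minimal sketch of that RAM-disk test on the Linux client, assuming the share is mounted at /mnt/nas/data and there is enough free RAM for the file being copied:

Code:
# copy a large file from the NAS into a RAM-backed mount, taking the client's local disks out of the picture
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=12G tmpfs /mnt/ramdisk
time cp /mnt/nas/data/some-large-file /mnt/ramdisk/   # some-large-file is a placeholder
sudo umount /mnt/ramdisk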
 
Last edited:

metanamorph

Dabbler
Joined
Nov 25, 2022
Messages
14
Thank you for the detailed reply. I watched some YouTube videos about ZFS/NFS yesterday; they underline your explanation.
Don't get me wrong, I'm quite happy with the speed.
I just want to make sure that it is a good setup (more or less) before I start storing data.

On the other hand, TrueNAS is mentioned as a super NAS system for video editing and rendering work wherever you look.
I'm dealing with tons of data. All files are JPEGs, videos, RAW files. And read speed is way more important than write speed.
I wonder about all these videos of guys speeding up their NAS with extra SSDs. How come, when reading is kind of slow?

Nope. Not necessarily so. ZFS is about data integrity.

[1] ZFS collects data and writes transaction groups to disk while "managing" free space. In a simplified view: any block that "fits" gets written to a large enough stretch of free space. The more data gets written (over time, that is), the more the pool gets fragmented. (In addition: NFS enforces sync writes.) So reads may end up being collected from many different blocks. Shouldn't be a big hit with NVMe, though.
So setting the async option on the client side doesn't make sense?

[2] Reads are not necessarily served from ARC. The cache is organized in a combined way: "statistically" (by access frequency) and "chronologically" (by recency). Shouldn't be a big hit either with your testing scenario. (Not that big.)

How do you share data? Via NFS3 or NFS4? TCP/UDP? Multichannel/Nconnect enabled? Please post the output of mount on your client machine.
Direct connection, no switch or hub.
NFSv4; tried NFSv3 as well.
Tried both TCP/UDP. *edit: as the protocol for NFS
Multichannel (where is this set for NFS?)
nconnect enabled by an option in the systemd unit on the client: "nconnect=16"

Code:
$ mount

proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sys on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
dev on /dev type devtmpfs (rw,nosuid,relatime,size=65774408k,nr_inodes=16443602,mode=755,inode64)
run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755,inode64)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
/dev/nvme0n1p2 on / type ext4 (rw,noatime)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=48560)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,nr_inodes=1048576,inode64)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
systemd-1 on /mnt/nas/data type autofs (rw,relatime,fd=48,pgrp=1,timeout=60,minproto=5,maxproto=5,direct,pipe_ino=40980)
systemd-1 on /mnt/nas2/data type autofs (rw,relatime,fd=49,pgrp=1,timeout=60,minproto=5,maxproto=5,direct,pipe_ino=40983)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
/dev/mapper/vg_data1-lv_data1 on /mnt/data-I type xfs (rw,nosuid,nodev,noatime,attr2,inode64,logbufs=8,logbsize=64k,sunit=128,swidth=512,noquota,x-gvfs-show,x-gvfs-name=DATA-I)
/dev/md2 on /mnt/cache type ext4 (rw,nosuid,nodev,noatime,stripe=256,x-gvfs-show,x-gvfs-name=CACHE)
/dev/md1 on /mnt/work-I type ext4 (rw,nosuid,nodev,noatime,stripe=256,x-gvfs-show,x-gvfs-name=WORK-I)
/dev/nvme0n1p1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=13158072k,nr_inodes=3289518,mode=700,uid=1000,gid=1000,inode64)
portal on /run/user/1000/doc type fuse.portal (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
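
Since the listing above only shows the autofs placeholders, the NFS options actually in effect (version, transport, rsize/wsize and so on) are easier to see once the share has been accessed; a sketch, run on the Linux client:

Code:
# touch the automount point so the NFS mount is established, then list its negotiated options
ls /mnt/nas/data > /dev/null
nfsstat -m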



The next step I would try is reading large (sequential) data from the NAS to a client's RAM disk, maybe, and keeping an eye on CPU spikes on either side.

Could you possibly try another client machine and switch to another sharing protocol?

Maybe I have overlooked it, but again: What switch do you connect your machines to?
Unfortunately I don't have the option of trying a different client machine with 10 GbE, but I will give SMB another try.
No switch. Peer to peer.

THANK you for your time.
 
Last edited:

awasb

Patron
Joined
Jan 11, 2021
Messages
415
[...]
On the other hand, TrueNAS is mentioned as a super NAS system for video editing and rendering work wherever you look.
I'm dealing with tons of data. All files are JPEGs, videos, RAW files. And read speed is way more important than write speed.
I wonder about all these videos of guys speeding up their NAS with extra SSDs. How come, when reading is kind of slow?
[...]
I'm doing some video editing myself. My "scratch disk" is a locally attached Thunderbolt drive. To be a little dramatic: I would rather die than accept the latency over the network (it is there while skimming footage, no matter what bandwidth the network connection advertises; at least at my desk with my equipment). But the editing machine backs up projects (hourly) to the TrueNAS server.


So setting the async option on the client side doesn't make sense?

No. Sync is all about writing. It makes sense on the server side if you want to make sure that data gets written to physical disk as soon as it has been transmitted. It will not affect reading.
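
For completeness, the server-side knob being referred to is the dataset's sync property; a sketch (it changes write behaviour only, and sync=disabled trades safety for speed, so treat it as an experiment rather than a recommendation):

Code:
# inspect and, only if the risk is acceptable, relax sync writes on the NFS-shared dataset
zfs get sync pool-intel/dataset
zfs set sync=disabled pool-intel/dataset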

Direct connection, no switch or hub.
NFSv4; tried NFSv3 as well.
Tried both TCP/UDP.
Multichannel (where is this set for NFS?)
nconnect enabled by an option in the systemd unit on the client: "nconnect=16"

OK. (nconnect effectively means "multichannel", as parallel TCP connections are established to maximize bandwidth.)
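
To illustrate, a hypothetical one-off mount on the Linux client that sets those options explicitly; the IP and paths are taken from earlier in this thread (192.200.1.0 being the NAS in the iperf3 test) and the exact values are an example, not a recommendation:

Code:
# manual NFSv4 mount with parallel TCP connections and 1 MiB read/write sizes
sudo mount -t nfs4 -o nconnect=16,noatime,rsize=1048576,wsize=1048576 192.200.1.0:/mnt/pool-intel/dataset /mnt/nas/data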

Code:
$ mount

[...]
systemd-1 on /mnt/nas/data type autofs (rw,relatime,fd=48,pgrp=1,timeout=60,minproto=5,maxproto=5,direct,pipe_ino=40980)
systemd-1 on /mnt/nas2/data type autofs (rw,relatime,fd=49,pgrp=1,timeout=60,minproto=5,maxproto=5,direct,pipe_ino=40983)
[...]
Another reason to hate systemd. Could you please try to mount the NFS share(s) "statically"? Just for kicks/testing. Try mounting via /etc/fstab by adding the following line (don't forget to adapt the IP and the mount point!):

Code:
$SERVERIP:/mnt/nas/data /mnt/$MOUNTDIR nfs4 _netdev,nofail,noatime,nodiratime,noauto,x-systemd.automount


and then

Code:
sudo mount -o remount /mnt/$MOUNTDIR


Rerun tests. Please post the output of sudo nfsstat on the Linux client.

Unfortunately I don't have the option to switch the client machine with 10GB. But I will give SMB another try.
No switch. Pear to pear.
[...]
Just for a quick comparison; it should be an easy thing to do. Don't expect miracles, though. SMB depends even more on the implementation than NFS does.
 
Last edited:

metanamorph

Dabbler
Joined
Nov 25, 2022
Messages
14
I find myself hating this share stuff. I will put this on hold for a while.
I never thought it would be this difficult. It feels like déjà vu from back when I was using Windows.
I can't look at any more error messages. What a mess!
I will put an 8 TB HDD into my desktop and be happy for a while. At least I can start doing fun stuff.
Sorry for stealing your time.
 

MrGuvernment

Patron
Joined
Jun 15, 2017
Messages
268
TrueNAS is not really a quick, install-and-click-next-next-next solution. As many will tell you, TrueNAS is an enterprise-level storage system; the fact that it runs on low-end hardware does not mean it should be used in that manner. (I am learning all this myself, so I'm in a similar situation to yours, finding out all the nitty-gritty details!)

If you want proper data protection, you set up TrueNAS in a proper manner, which does take time, but once you do set it up, it should be solid for you.

Now, you noted you just want some fast storage: how big are the files you are working with?

Also note that QLC is the lowest end of SSD storage, the slowest, and such drives usually come with no DRAM cache either, so performance tanks very quickly on them. The 660p is still a great drive, but if you plan to work with larger files often, you will notice its slowdowns compared to other SSDs. Of course, you do the best you can with the budget you have!

And with that, please make sure you have proper backups as well!
 
Last edited:

metanamorph

Dabbler
Joined
Nov 25, 2022
Messages
14
TrueNAS is not really a quick, install-and-click-next-next-next solution. As many will tell you, TrueNAS is an enterprise-level storage system; the fact that it runs on low-end hardware does not mean it should be used in that manner. (I am learning all this myself, so I'm in a similar situation to yours, finding out all the nitty-gritty details!)

If you want proper data protection, you set up TrueNAS in a proper manner, which does take time, but once you do set it up, it should be solid for you.

Now, you noted you just want some fast storage: how big are the files you are working with?

Also note that QLC is the lowest end of SSD storage, the slowest, and such drives usually come with no DRAM cache either, so performance tanks very quickly on them. The 660p is still a great drive, but if you plan to work with larger files often, you will notice its slowdowns compared to other SSDs. Of course, you do the best you can with the budget you have!

And with that, please make sure you have proper backups as well!

I'm just tired and can't look at outdated or misleading solutions any more.
When I search for "TrueNAS set up SMB share Manjaro" or similar, I cannot find a working solution.
Is it really that complicated, or does it take magic, to set up an SMB share between TrueNAS and Manjaro?
I think after two weeks of trial and error I will have to stop here.
NFS is working, but with slow reads as mentioned, also on a Samsung Pro SSD. I won't set up a system that delivers a quarter of the actual NVMe speed. Why should I?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
In all fairness, this is not (only) a TrueNAS issue, this is performance optimisation, addressing the whole setup, NAS, network and client. And this is tedious and difficult by its very nature.
 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
I'm just tired and can't look at outdated or misleading solutions any more.
What hint or suggestion posted in this thread was outdated or misleading? Or did you mean "someone was wrong on another part of the internet"?

When I search for "TrueNAS set up SMB share Manjaro" or similar, I cannot find a working solution.
Is it really that complicated, or does it take magic, to set up an SMB share between TrueNAS and Manjaro?
[...]

I am no Linux advocate (any more). But the idea behind it was "be free to learn and to do it yourself", not a point-and-click set-and-forget process. *rant* Even though modern Linux distributions (apart from kits like Gentoo/Funtoo or LFS) now "advertise" a Windows-/Mac-like experience. COUGHING */rant* That's why the above search is not "the right way" and leads to frustration. Search for documentation on how to set up a Linux client for NFS/SMB in general. It exists. There is no need for "trial and error". (If you choose your distribution wisely, that is. If I were you, I would ditch any systemd distribution any time, even though the recommendations in your topic on the Manjaro forums pointed in the opposite direction. It is not about "old ways" or "new ways", but about knowledge and control.)

If you do not care, then all of this is not ... well ... "within the scope of your application". You will be better off buying an off-the-shelf solution. Get a decent support contract, too.

I do not want to sound rude. And I agree that it is wrong to mistake the instrument used to achieve a goal for the goal itself. Most of the time we just want "that job done". But with software it is completely different, IMHO.

All the best (from Germany)!
 
Last edited:

MrGuvernment

Patron
Joined
Jun 15, 2017
Messages
268
I'm just tired and can't look at outdated or misleading solutions any more.
When I search for "TrueNAS set up SMB share Manjaro" or similar, I cannot find a working solution.
Is it really that complicated, or does it take magic, to set up an SMB share between TrueNAS and Manjaro?
I think after two weeks of trial and error I will have to stop here.
NFS is working, but with slow reads as mentioned, also on a Samsung Pro SSD. I won't set up a system that delivers a quarter of the actual NVMe speed. Why should I?


For Linux (I run Manjaro myself) I found NFS always had better performance than SMB did.
 