5MB/s will be a function of the test workload...
What software is doing the testing
What access protocol
I/O size vs record size
Queue depth of the test software.
You need to specify your tests well before anyone can tell whether this is expected or not.
Are you IOPS- or bandwidth-oriented for your use case? What reliability is required? An important database, or photos?
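To make those parameters explicit: a fio job roughly equivalent to the dd sync test below would look like this (a sketch only; the job name, path and values are my own illustrative choices, not my actual job file):

```ini
; Sequential sync writes at queue depth 1, matching dd with oflag=dsync
; (sync=1 uses O_SYNC; dd's oflag=dsync is O_DSYNC -- close enough here)
[seq-sync-write]
filename=/mnt/raid/default/fio-testfile
rw=write
bs=1M
size=512m
ioengine=psync
iodepth=1
sync=1
```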
I did quite a few tests, but here is the simplest one:
Context: I have another all-flash pool with two NVMe drives in a mirror, so all IOPS-oriented workloads are placed there anyway. The RAID pool is targeted at high-volume data (primarily home and group drives), accessed directly via the NAS and by Nextcloud, PhotoPrism or similar tools.
I tested with fio, dd and a network copy from my other NAS... Here is an example output of dd running directly on the TrueNAS box, just to see how the filesystems behave before SMB etc. comes into play:
I created 4 different datasets to evaluate the impact of sync and compression:
drwx-w---- 2 1000 root 3 Sep 7 01:34 all_off
drwxrwxr-x 2 root root 3 Sep 6 08:27 default
drwxrwxr-x 2 root root 4 Sep 6 08:27 no_compress
drwxrwxr-x 2 root root 3 Sep 6 07:59 no_sync
Then I ran dd with a bandwidth focus, with different setups, on these 4 datasets:
Skip caching to test array performance
dd if=/dev/random of=/mnt/raid/all_off/testfile bs=1024000 count=500 oflag=dsync
dd if=/dev/random of=/mnt/raid/default/testfile bs=1024000 count=500 oflag=dsync
dd if=/dev/random of=/mnt/raid/no_compress/testfile bs=1024000 count=500 oflag=dsync
dd if=/dev/random of=/mnt/raid/no_sync/testfile bs=1024000 count=500 oflag=dsync
With caching
dd if=/dev/random of=/mnt/raid/all_off/testfile bs=1024000 count=500
dd if=/dev/random of=/mnt/raid/default/testfile bs=1024000 count=500
dd if=/dev/random of=/mnt/raid/no_compress/testfile bs=1024000 count=500
dd if=/dev/random of=/mnt/raid/no_sync/testfile bs=1024000 count=500
With zeros to eliminate the CPU bottleneck
dd if=/dev/zero of=/mnt/raid/all_off/testfile bs=1024000 count=500
dd if=/dev/zero of=/mnt/raid/default/testfile bs=1024000 count=500
dd if=/dev/zero of=/mnt/raid/no_compress/testfile bs=1024000 count=500
dd if=/dev/zero of=/mnt/raid/no_sync/testfile bs=1024000 count=500
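A side note on the /dev/zero runs: on the datasets with compression enabled a block of zeros collapses to almost nothing, and without oflag=dsync the writes land in RAM first anyway, so the GB/s figures below say more about CPU and ARC than about the disks. A quick illustration of how compressible those blocks are (using Python's zlib purely for illustration; ZFS itself defaults to lz4):

```python
import zlib

# One dd-sized block of zeros, as produced by the /dev/zero tests above
block = b"\x00" * 1024000

compressed = zlib.compress(block)
ratio = len(block) / len(compressed)
print(f"{len(block)} bytes -> {len(compressed)} bytes (~{ratio:.0f}x smaller)")
```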
this results in the following:
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 1.84614 s, 277 MB/s
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 18.8364 s, 27.2 MB/s
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 18.2582 s, 28.0 MB/s
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 18.1279 s, 28.2 MB/s
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 1.86237 s, 275 MB/s
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 1.87071 s, 274 MB/s
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 1.85611 s, 276 MB/s
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 1.91697 s, 267 MB/s
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 0.289289 s, 1.8 GB/s
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 0.343046 s, 1.5 GB/s
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 0.354501 s, 1.4 GB/s
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 0.355857 s, 1.4 GB/s
Especially the ones marked in red are surprising to me... (PS: it's now 28 MB/s, since I added two more drives and now have two 3+1 RAIDZ vdevs vs. the single 5+1 vdev where I only got 5 MB/s in these tests.)
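My working theory for why the sync runs bottom out like this (an assumption on my side, not something I have measured): with oflag=dsync, dd waits for each 1000 KB block to be committed to the ZIL on the spinning vdevs before issuing the next one, so throughput is simply block size divided by per-commit latency:

```python
# Back-of-envelope: sync write throughput = block size / per-commit latency
bs = 1024000            # dd block size from the tests above (bytes)
commit_latency = 0.036  # assumed ~36 ms per ZIL commit on the HDD raidz vdevs

mb_per_s = bs / commit_latency / 1e6
print(f"~{mb_per_s:.0f} MB/s")
```

With ~36 ms per commit this lands right around the 27-28 MB/s I measured; assuming an NVMe SLOG brings the effective commit latency down to ~4 ms, the same formula gives ~256 MB/s, which is in the range of the SLOG results below.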
If I now add another 256 GB NVMe drive as a SLOG, I can solve the issue, but I'm still not really able to understand why ZFS would end up at 27 MB/s in a test like that:
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 1.82277 s, 281 MB/s
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 2.77786 s, 184 MB/s
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 2.37096 s, 216 MB/s
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 2.99992 s, 171 MB/s
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 2.0491 s, 250 MB/s
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 1.89032 s, 271 MB/s
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 2.03548 s, 252 MB/s
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 1.85154 s, 277 MB/s
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 0.33542 s, 1.5 GB/s
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 0.363928 s, 1.4 GB/s
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 0.357674 s, 1.4 GB/s
500+0 records in
500+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 0.331725 s, 1.5 GB/s
I hope this gives some insight :)