TrueNAS SCALE - iSCSI Speed Issue

Johnephen

Dabbler
Joined
Apr 8, 2022
Messages
35
Good day,
I would like to use my iSCSI share for VMs and similar workloads, but I don't want everything to take a long time to load. That's why I ran a speed test and was a bit shocked by the slow performance of the share. I also tested the SMB share and it's faster(?) than the iSCSI share. So if you have any hints on what I should adjust, that would be very helpful.

The TrueNAS server:
Ryzen 5 1400
32GB RAM
2.5GbE PCI card

1 pool
1x 120GB M.2 cache SSD
4x 4TB HDD RAIDZ vdev
4x 3TB HDD RAIDZ vdev

The server is connected to my PC directly with a 2.5GbE USB 3.0 adapter.
 

Attachments

  • disk test 06 z.png (53.5 KB)
  • disk test 05 y.png (60.9 KB)

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
There's nothing commendable about this system. You have too little memory, a bad pool design, and possibly other issues. Please be sure to read the forum guide to successful block storage.

 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
iSCSI and SMB are very different protocols.
For example, SMB does not require writes to be committed to stable media (e.g. a SLOG) before acknowledging them.
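As a quick check on that point, you can look at how the zvol backing the iSCSI extent is set to handle sync writes (the dataset name here is just a placeholder for your own zvol):

Code:
# Placeholder zvol name -- substitute the zvol that backs your iSCSI extent.
# With sync=standard, writes the initiator flags as synchronous must reach
# stable storage (the pool or a SLOG) before they are acknowledged.
zfs get sync,volblocksize tank/iscsi-vol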
 

Johnephen

Dabbler
Joined
Apr 8, 2022
Messages
35
So if I made a new pool with only 8x 4TB HDDs installed, instead of the 4x 3TB, would it be faster in RAIDZ2? I don't want to go as far as four vdevs of two-disk mirrors just to get usable disk speed. And how much more RAM would it take to make it more efficient?
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
Well, let's continue gathering some information here before I can help steer you in a direction.

Are you familiar with iPerf? https://iperf.fr/

It's a tool that is included with TrueNAS. We can use it to see whether part of your problem is due to the USB Ethernet adapter, which I believe is a contributing factor. You can use it like this:

On my TrueNAS I typed:
Code:
root@prod[/mnt]# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------


On my PC I typed:
Code:
C:\Users\nickf.FUSCO\Downloads\iperf-3.1.3-win64 (1)\iperf-3.1.3-win64>iperf3 -c 10.69.10.8
Connecting to host 10.69.10.8, port 5201
[  4] local 10.69.10.57 port 60282 connected to 10.69.10.8 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   809 MBytes  6.78 Gbits/sec
[  4]   1.00-2.00   sec   772 MBytes  6.48 Gbits/sec
[  4]   2.00-3.00   sec   796 MBytes  6.68 Gbits/sec
[  4]   3.00-4.00   sec   847 MBytes  7.11 Gbits/sec
[  4]   4.00-5.00   sec   848 MBytes  7.11 Gbits/sec
[  4]   5.00-6.00   sec   854 MBytes  7.16 Gbits/sec
[  4]   6.00-7.00   sec   840 MBytes  7.04 Gbits/sec
[  4]   7.00-8.00   sec   790 MBytes  6.62 Gbits/sec
[  4]   8.00-9.00   sec   832 MBytes  6.98 Gbits/sec
[  4]   9.00-10.00  sec   832 MBytes  6.98 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  8.03 GBytes  6.89 Gbits/sec                  sender
[  4]   0.00-10.00  sec  8.03 GBytes  6.89 Gbits/sec                  receiver

iperf Done.


What does yours say?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So if I made a new pool with only 8x 4TB HDDs installed, instead of the 4x 3TB, would it be faster in RAIDZ2? I don't want to go as far as four vdevs of two-disk mirrors just to get usable disk speed. And how much more RAM would it take to make it more efficient?

I pointed you at a resource that answers both of these questions.

I don't want to go as far as four vdevs of two-disk mirrors just to get usable disk speed.

Then get yourself much faster disks. SSDs are good. The 4TB 870 EVO has fallen in price this year as Samsung has become desperate for profits. It can easily be found for $220 or sometimes even $210 per unit.

And how much more RAM would it take to make it more efficient?

For ZFS, we don't recommend less than 64GB of ARC to do block storage, and more is better. For SCALE, you have to use double that number for RAM (i.e. 128GB) to get 64GB of usable ARC.
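As a rough way to see what ZFS is actually working with, the ARC ceiling and current usage can be read from the kstats (arc_summary should report the same numbers in a friendlier format):

Code:
# c_max = configured ARC ceiling, size = current ARC usage, both in bytes.
# On SCALE the default ceiling is roughly half of installed RAM.
grep -wE 'c_max|size' /proc/spl/kstat/zfs/arcstats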

You should really go and read that article that I linked to previously.

I also tested the SMB share and it's faster(?) than the iSCSI share.

Of course it is; why does this come as a surprise?


This is, I believe, linked to from the block storage resource. You should really go read that and the linked articles it contains too. There's nothing here that hasn't been explained there.
 

Johnephen

Dabbler
Joined
Apr 8, 2022
Messages
35
Well, let's continue gathering some information here before I can help steer you in a direction.

Are you familiar with iPerf? https://iperf.fr/

It's a tool that is included with TrueNAS. We can use it to see whether part of your problem is due to the USB Ethernet adapter, which I believe is a contributing factor. [...]

What does yours say?

Code:
D:\Download\iperf-3.1.3-win64>iperf3.exe -c 192.168.2.101
Connecting to host 192.168.2.101, port 5201
[ 4] local 192.168.2.32 port 57292 connected to 192.168.2.101 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 95.8 MBytes 803 Mbits/sec
[ 4] 1.00-2.00 sec 95.0 MBytes 797 Mbits/sec
[ 4] 2.00-3.00 sec 95.5 MBytes 801 Mbits/sec
[ 4] 3.00-4.00 sec 95.4 MBytes 800 Mbits/sec
[ 4] 4.00-5.00 sec 95.2 MBytes 800 Mbits/sec
[ 4] 5.00-6.00 sec 95.4 MBytes 800 Mbits/sec
[ 4] 6.00-7.00 sec 95.4 MBytes 799 Mbits/sec
[ 4] 7.00-8.00 sec 95.4 MBytes 801 Mbits/sec
[ 4] 8.00-9.00 sec 95.4 MBytes 800 Mbits/sec
[ 4] 9.00-10.00 sec 95.2 MBytes 799 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 954 MBytes 800 Mbits/sec sender
[ 4] 0.00-10.00 sec 954 MBytes 800 Mbits/sec receiver

iperf Done.
 

Johnephen

Dabbler
Joined
Apr 8, 2022
Messages
35
Then get yourself much faster disks. SSDs are good. The 4TB 870 EVO has fallen in price this year as Samsung has become desperate for profits. It can easily be found for $220 or sometimes even $210 per unit.
Yes, I could upgrade to SSDs, but 4x $210 is already $840. Unfortunately I don't have that kind of money for my server. I only use it privately and for testing.

For ZFS, we don't recommend less than 64GB of ARC to do block storage, and more is better. For SCALE, you have to use double that number for RAM (i.e. 128GB) to get 64GB of usable ARC.
Unfortunately, with the mainboard I use, I only have 4 slots for RAM, and a RAM kit with 4x 16GB already costs $110, and one twice as big costs twice as much. Can a 4-core processor actually handle this well? I actually wanted to upgrade to the Ryzen 5 5600G, but does a RAM upgrade make more sense?

And of course I could have read the documentation better beforehand, but I hadn't found anything specific and was hoping for a quick solution. And some of the documentation is difficult to understand, since unfortunately English is not my first language.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Code:
D:\Download\iperf-3.1.3-win64>iperf3.exe -c 192.168.2.101
[...]
[ 4] 0.00-10.00 sec 954 MBytes 800 Mbits/sec sender
[ 4] 0.00-10.00 sec 954 MBytes 800 Mbits/sec receiver


So somehow you are reading/writing over SMB at 250 MByte/s = 2000 Mbit/s on a network that is only delivering 800 Mbit/s.

The test setup doesn't appear to be valid... perhaps you need a much longer interval?
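If it's the network test that needs a longer run, iperf3 can do that directly: -t sets the duration in seconds and -P the number of parallel streams. For example, a 60-second test with four streams against the same target as above:

Code:
D:\Download\iperf-3.1.3-win64>iperf3.exe -c 192.168.2.101 -t 60 -P 4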
 

Johnephen

Dabbler
Joined
Apr 8, 2022
Messages
35
So somehow you are reading/writing over SMB at 250 MByte/s = 2000 Mbit/s on a network that is only delivering 800 Mbit/s.

The test setup doesn't appear to be valid... perhaps you need a much longer interval?
I made a mistake. I used the 1G network for the first test, which is actually my home network, but I use the 2.5G USB adapter for iSCSI and SMB, and I have to use that IP for the test too.

Code:
D:\Download\iperf-3.1.3-win64>iperf3.exe -c 169.254.100.3
Connecting to host 169.254.100.3, port 5201
[ 4] local 169.254.100.2 port 50375 connected to 169.254.100.3 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 281 MBytes 2.36 Gbits/sec
[ 4] 1.00-2.00 sec 283 MBytes 2.37 Gbits/sec
[ 4] 2.00-3.00 sec 282 MBytes 2.37 Gbits/sec
[ 4] 3.00-4.00 sec 281 MBytes 2.36 Gbits/sec
[ 4] 4.00-5.00 sec 282 MBytes 2.37 Gbits/sec
[ 4] 5.00-6.00 sec 283 MBytes 2.37 Gbits/sec
[ 4] 6.00-7.00 sec 283 MBytes 2.37 Gbits/sec
[ 4] 7.00-8.00 sec 283 MBytes 2.37 Gbits/sec
[ 4] 8.00-9.00 sec 283 MBytes 2.37 Gbits/sec
[ 4] 9.00-10.00 sec 283 MBytes 2.37 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 2.76 GBytes 2.37 Gbits/sec sender
[ 4] 0.00-10.00 sec 2.76 GBytes 2.37 Gbits/sec receiver

iperf Done.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yes, I could upgrade to SSDs, but 4x $210 is already $840. Unfortunately I don't have that kind of money for my server. I only use it privately and for testing.

This is unfortunately not magic but rather computer science. There isn't a magic incantation like "iSCSIo Maxima" or anything of the sort; you need to do one of the things that will actually make it go faster. With hard disks and RAIDZ2, you are fundamentally tied down to a bunch of inconvenient facts that are an outcome of the copy-on-write design of ZFS, which typically makes things slow to begin with and only makes them slower as time passes and fragmentation sets in.
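For what it's worth, the fill level and free-space fragmentation of each pool can be checked with zpool list, e.g.:

Code:
# FRAG is free-space fragmentation; CAP is how full the pool is. High values
# of either on a RAIDZ pool serving block storage tend to mean poor performance.
zpool list -o name,size,alloc,free,frag,cap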

Unfortunately, with the mainboard I use, I only have 4 slots for RAM, and a RAM kit with 4x 16GB already costs $110, and one twice as big costs twice as much. Can a 4-core processor actually handle this well? I actually wanted to upgrade to the Ryzen 5 5600G, but does a RAM upgrade make more sense?

I've had very good luck on a six-core Xeon E5-1650v3 platform serving iSCSI; with the exception of using high compression (like gzip-9) or deduplication, I was never able to get the system to less than about 80% idle.

ZFS more or less lives on the amount of memory you can give it. A vague and general answer is that more memory never hurts. I've often said that I would much rather have 512GB of DDR3-1066 than 64GB of DDR4-2400. Unless you are serving something like 100 gigabit Ethernet, the much larger quantity of slower memory is far preferable.

And of course I could have read the documentation better beforehand, but I hadn't found anything specific and was hoping for a quick solution. And some of the documentation is difficult to understand, since unfortunately English is not my first language.

Well, I suggest looking at the resources I've produced. I try to avoid unnecessary complexity and I try to write in a manner that is accessible to users who are not already expert in the topics at hand.
 

Johnephen

Dabbler
Joined
Apr 8, 2022
Messages
35
I found a used 4x 16GB RAM kit at 3200MHz for $60. I've ordered it and hope it makes a visible difference, or am I wrong?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It will help, but fixing your pool will help more.
 

Johnephen

Dabbler
Joined
Apr 8, 2022
Messages
35
Couldn't I also create two pools? The first pool, with the 4x 3TB HDDs, would run in RAID-5/Z for containers like UrBackup, which I use for backups and which doesn't need to be particularly fast. The remaining 4x 4TB would go into one RAID-10 layout, i.e. two mirrors that are striped, which I could then use as the iSCSI share. Would that increase my speed? I can replace the 3TB HDDs later with larger ones.
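For reference, a rough command-line sketch of the two layouts being proposed (pool and device names are placeholders; on TrueNAS you would normally create the pools through the web UI instead):

Code:
# 4x 3TB in a single RAIDZ vdev for backups/containers (the "RAID-5/Z" pool)
zpool create backup raidz sda sdb sdc sdd

# 4x 4TB as two striped mirrors ("RAID 10") to back the iSCSI zvol
zpool create share mirror sde sdf mirror sdg sdh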
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yes, you have options available to you. Which one will work out best is left as an exercise for you to discover, since you are the person with the hardware.
 

Johnephen

Dabbler
Joined
Apr 8, 2022
Messages
35
I have restructured the pools and now use two: 'Main', with the 4x 3TB HDDs in RAIDZ for backups and containers, and 'Share', with the 4x 4TB HDDs in RAID 10 plus the 128GB cache SSD. The speed is noticeably faster, but I haven't been able to install the 64GB RAM kit yet, because I'm still backing up everything that was on the server and the RAM has only just been delivered.

The pictures show the SMB share on Z and the iSCSI share on N.
 

Attachments

  • disk test 09 y Auf SHARE nach pool Umstruckturrirung.png (52.2 KB)
  • disk test 08 z Auf SHARE nach pool Umstruckturrirung.png (49.8 KB)