10Gb Ethernet hitting 1GB/s then dropping to 180MB/s

SomeDumbNAS

Dabbler
Joined
Aug 23, 2021
Messages
22
So I have a 10Gb network card. Transfer speeds start off at 1GB/s for about 2-3 seconds, then drop to 160-200mbs for the remainder of the download (5GB file). I can ping my NAS using openspeedtest on the NAS and it reads 900mbs down and 1gb up. So it reaches the speeds to show the 10gb NIC is working, but it drops very quickly. Any idea what settings I need to change, or something along those lines? Also, for any info you need, could you please tell me the command to get it from the shell?

Truenas Scale specs:
CPU - I7-6850k 6core
ram - 32GB
MoBo - MSI godlike gaming carbon
HDD - WD1002FAEX-00Z3A0, WD 1TB SATA, x2 in RAIDZ1
Cache - Samsung 950 Pro NVMe 512GB - M.2
NIC - Asus XG-C100C on NAS and Desktop
Cat6 100ft cable.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
You do realize the Aquantia driver on the TrueNAS side isn't even an alpha; it's a developer preview, and hasn't been fully debugged or optimized?

Also, the transfer speed floor you're seeing is probably the raw throughput of your WD drive. In addition, your drive is SMR, which is known for exactly this sort of poor performance under ZFS.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
For an explanation of SMR, and why it has dreadful performance with ZFS, see:

 

SomeDumbNAS

Dabbler
Joined
Aug 23, 2021
Messages
22
Okay, thank you. Actually it's my buddy who is having the issue; it's just a random build he wanted to put together. But can you explain how the cache works? I was thinking it would take the data in, store it on the cache, then write it to the HDD at a slower speed?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Without knowing the details of this pool: a cache VDEV implements an L2ARC read cache. If the M.2 is instead configured as a SLOG VDEV, that isn't a write cache either, but an indirect write journal.
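To see which role the NVMe actually plays, the pool layout can be checked from the TrueNAS shell (a sketch; `zpool status` is standard ZFS tooling, and the `command -v` guard just keeps this runnable on machines without ZFS installed):

```shell
# Show the pool layout: an L2ARC device appears under a "cache"
# section, a SLOG device under a "logs" section.
if command -v zpool >/dev/null 2>&1; then
    zpool status
else
    echo "zpool not available on this machine"
fi
```

If the M.2 is listed under `cache`, it is an L2ARC (read cache only); under `logs` it would be a SLOG, which only journals synchronous writes and never buffers an ordinary bulk copy.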
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
I think calling the vdev "cache" on the storage screen was a mistake. It should have been called an L2ARC vdev, without mentioning the word cache at all. It confuses everyone who hasn't done their research properly.

 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I can ping my NAS using openspeedtest on the NAS and it reads 900mbs down and 1gb up. So it reaches the speeds to show the 10gb NIC is working
Incidentally, you need to pay attention to the difference between "bits" and "bytes", "b" and "B", and either properly capitalise or write in full. Otherwise it will soon be impossible to follow. 1 GB/s on 10 Gb/s link is indeed fine. 1 gb/s on 10 gb/s would NOT be fine.
(Not to mention the difference between "g" (gram) and "G" (giga)…)

ram - 32gb
cache -samsung 950pro NVMe Pro 512gb - m.2
32 GB RAM is not enough to add an L2ARC. A 950 Pro is not suitable as SLOG.
So on either (mis)interpretation of "cache", the configuration has issues.
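The bit/byte conversion above is just a factor of eight; a quick shell sanity check (using 1 Gb = 1000 Mb for round numbers):

```shell
# Theoretical ceiling of a 10 Gb/s link, in MB/s: divide the bits by 8.
echo "10 Gb/s link: $(( 10 * 1000 / 8 )) MB/s"   # 1250 MB/s, ~1.25 GB/s
# Same conversion for a 1 Gb/s link.
echo "1 Gb/s link: $(( 1 * 1000 / 8 )) MB/s"     # 125 MB/s
```

So a 1 GB/s burst is consistent with a working 10 Gb/s link, and if the later 160-200 figure is MB/s, it already exceeds what a 1 Gb/s link could carry, pointing at the disks rather than the network.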
 

SomeDumbNAS

Dabbler
Joined
Aug 23, 2021
Messages
22
Incidentally, you need to pay attention to the difference between "bits" and "bytes", "b" and "B", and either properly capitalise or write in full. Otherwise it will soon be impossible to follow. 1 GB/s on 10 Gb/s link is indeed fine. 1 gb/s on 10 gb/s would NOT be fine.
(Not to mention the difference between "g" (gram) and "G" (giga)…)


32 GB RAM is not enough to add an L2ARC. A 950 Pro is not suitable as SLOG.
So on either (mis)interpretation of "cache", the configuration has issues.
You are correct on that; I'll fix it, just for good practice.

So the drives are in RAIDZ1, but could them being SMR also be the issue? I removed the M.2 from "cache" and made it a single drive, and transfer speeds to it were 1.05 GB/s the whole time. But when writing to the HDDs it transfers at 180-220 MB/s, no better than 1 Gb but not near 10 Gb speeds. With CMR drives, what should I expect to see? Roughly, of course.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
The write speed seems adequate for spinning drives. CMR may or may not bring an improvement for sustained writes, but the main issue with SMR is that these drives resilver very slowly; in some cases, the strain of resilvering has caused drives to fail and the pool to be lost.
For the safety of your data, replace all SMR drives with CMR drives as soon as possible!
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
when writing to the HDDs it transfers at 180-220 MB/s, no better than 1 Gb but not near 10 Gb speeds.
Writes aren't amplified by RAID, only reads are (IIRC), so the bandwidth you're getting for writes is that of a single drive.
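A rough back-of-the-envelope for the pool in this thread (the 180 MB/s per-drive figure is an assumption for a 1 TB 7200 rpm SATA disk, not a measured value):

```shell
drive_mbs=180   # assumed sustained write speed of one drive, MB/s
n=2             # drives in the RAIDZ1 vdev
parity=1        # RAIDZ1 dedicates one drive's worth of space to parity
# Streaming writes land on the data drives only, so the rough ceiling is:
echo "$(( (n - parity) * drive_mbs )) MB/s"   # 180 MB/s
```

With only two drives in RAIDZ1 there is a single data drive, so the observed 180-220 MB/s is about what this pool can sustain; a wider vdev (or mirrors) would raise the ceiling.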
 