Slow write speeds over 10Gb

isternbu

Dabbler
Joined
Jan 22, 2017
Messages
13
Hi. I am getting slow write speeds when copying files from Windows desktop to FreeNAS via SMB share over 10Gb copper. Any suggestions? Thank you.

I've tried adding the 10Gb tunable settings from this page; no change.

Similar results with the Windows Explorer copy and TeraCopy.


Results (44GB single file)
From Windows desktop to FreeNAS
169 MB/s

From FreeNAS to Windows desktop
547 MB/s

Also tried to my Synology DS1812 (6-disk SHR-2)
From Windows desktop to Synology
325 MB/s

From Synology to Windows desktop
439 MB/s


Here is my setup
FreeNAS-9.10.2-U6
X9SRL-F
E5-1620v2
96GB RAM
12-disk 8TB RAIDZ3 w/ 1MiB recordsize
X540-T2 NIC

Windows 10 version 1809
C246-WU4
E-2288G
32GB RAM
P3600 SSD
X550-T2 NIC

Netgear XS505M
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
spindle disks rarely achieve more than 150MB/s read or write. zfs can give you a bit of a boost, but you are limited by physics. zfs currently doesn't have any hybrid setups available (possibly FreeNAS 12 will). the best you can do is add a SLOG, but that is expensive, takes a bit of work, and introduces risk if done wrong. I don't think SMB even does synchronous writes, which means the SLOG would do nothing anyway (unless you turn sync on for the whole pool, and if you do, your SLOG had better be good hardware as it will absorb ALL writes).
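fwiw you can check whether a dataset is even doing sync writes before spending money on a SLOG. a minimal check, assuming a pool named tank and a dataset named media (substitute your own names):

zfs get sync tank/media
# sync=standard means SMB traffic is written async and a SLOG sits idle
zfs set sync=always tank/media    # force everything through the ZIL/SLOG, for testing only
zfs set sync=standard tank/media  # put it back afterwards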
try putting in an SSD and testing to that; you will probably see much higher speeds. I assume from the speeds you posted that windows is running on an SSD.
I'm not sure what "(6-disk SHR-2)" is, as I only use FreeNAS, but it seems to be some kind of proprietary synology hybrid-RAID-like feature, possibly with write caching. if so, and it is either using SSDs or an SSD write cache, then it would be faster than an all-spindle zfs array.
I did exactly the same thing, added 10gbe and got 120MB/s and was like "WTF?!?!".
 

isternbu

Dabbler
Joined
Jan 22, 2017
Messages
13
I know individual disk speeds are in the ~150 MB/s range, but this is a 12-disk RAIDZ3, and read speeds on the same pool are 547 MB/s.
SHR-2 is Synology's version of RAID6/RAIDZ2.
Yes, the disk on the Windows side is an SSD with read/write speeds over 1000 MB/s.
I do plan to put a couple of SSDs in the FreeNAS in RAID0 just for another data point
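For that test a throwaway striped pool would do, e.g. (da12 and da13 being placeholder names for whatever devices the SSDs show up as):

zpool create ssdtest da12 da13
zfs set compression=off ssdtest   # so /dev/zero test files aren't compressed away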
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
yes, but are those read speeds of data already in the ARC? because that will be lightning fast. also, copying TO freenas is write speed not read speed. read speeds will be faster, but write speeds will be a function of the write speeds of the disks.
not only does raidz not give you an aggregate of speed, your raidz3 is 12 disks wide. your write speeds are exactly equal to the write speed of the slowest device in each vdev; you only HAVE one vdev, and the fastest write speed possible within that vdev is ~150MB/s. I'm surprised you're even getting 169MB/s; that's probably zfs's improvement, or else you have drives that are a bit faster than average.
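you can watch this live while a copy is running. a rough check, assuming the pool is named tank (5-second samples):

zpool iostat -v tank 5
# the per-disk write columns show each member's share of the load;
# the vdev as a whole can only commit as fast as its slowest member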
 

drinking12many

Contributor
Joined
Apr 8, 2012
Messages
148
artless is right: if you want speed, choosing any raidz is not the way to go. Where I used to work we did triple mirroring before flash was so cheap; I think we had something like 24 triple-mirror 15K vdevs plus slogs/logs, and it was still tough to fully saturate multiple 10Gb links on reads/writes. (2011 or so)
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
any raidz is not the way to go
this is not really accurate. raidz can give you speed, but to do so you need more vdevs. mirror is fast partly because it has more vdevs. the problem here isn't raidz but a configuration of it that sacrifices speed for storage space.
a 12-drive raidz3 has 1 less parity disk than 2x 6-disk raidz2, but only half the potential write speed. dual raidz2 wouldn't give you double the write speed, but it should be at least noticeably more than the 12-disk raidz3.
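as a sketch, the difference is just in how the pool is laid out at creation time (da0 through da11 are placeholder device names):

# one 12-wide raidz3 vdev (what you have now):
zpool create tank raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11
# two 6-wide raidz2 vdevs; writes stripe across both:
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 raidz2 da6 da7 da8 da9 da10 da11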
I use 3x mirrors in my own nas, but that's largely because raidz is too inflexible for management (mirror vdevs can be modified, added, and removed from a pool). I hope that will change with the raidz vdev expand/remove updates that are in the pipeline, and then I might be able to move to raidz.
 
Last edited:

drinking12many

Contributor
Joined
Apr 8, 2012
Messages
148
Yeah, it's possible as you said, but most home NAS users aren't going to have 10 RaidZ vdevs. Personally, if I want performance I automatically go mirrored; as you said it's faster because you have more vdevs, but also easier to expand etc. I pretty much only use RaidZ of any type if I just want bulk storage and care less about speed. It's always up to the person setting it up, but there are plenty of primers out there explaining what you're getting and giving up with each setup type. I do miss the Oracle ZFS appliance we had at that place, though; they had their quirks, but before flash you couldn't beat them for price/performance.
 

isternbu

Dabbler
Joined
Jan 22, 2017
Messages
13
I don’t understand how a 12-disk vdev would only get 170 MB/s. I’ve also seen multiple examples of similar setups getting in the 400-500 MB/s range for write speeds.

I’m going to do some tests of write speeds directly on the FreeNAS machine and I’ll report back if anyone is interested.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
what examples? are you sure those examples weren't using a SLOG? because that could dramatically improve the pool speed, since a SLOG would change the write location...
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
You should be able to get decent speeds out of that pool. I have an 8-drive RAIDZ2 single-vdev pool which I can read and write to at over 1 GB/s (locally, and over the network).

My understanding is that large files are streamed over multiple drives at once; it's smaller files and random-access loads that are more subject to the IOPS limitations of one drive. I've tested the same pool of drives as referenced above with all mirror vdevs, and for large file writes it was actually slower.

I usually use dd to write and read back a 100GB file. In your case you may want to increase the size of that file so that it's significantly over your ARC size. There should be instructions for this on the forum. If that gives you higher speeds, the next step might be iperf to isolate network-specific issues.
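Something like this, for example (path and sizes are placeholders; with 96GB of RAM you'd want a test file of roughly 200GB, and compression off on the dataset so the zeros aren't compressed away):

dd if=/dev/zero of=/mnt/tank/test.dat bs=2048k count=100000   # ~200GB sequential write
dd if=/mnt/tank/test.dat of=/dev/null bs=2048k                # read it back
# then the network leg, assuming iperf3 is available on both ends:
iperf3 -s                       # on the FreeNAS box
iperf3 -c <freenas-ip> -t 30    # on the Windows desktop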
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
hmm. it is my understanding that those numbers are not possible, so either my understanding is wrong or there is something skewing those numbers. I'm not sure which.
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
For reference, this is mine; again, 8 x 8TB RAIDZ2:

root@nas:/mnt/MainPool # dd if=/dev/zero of=test.dat bs=2048k count=50000
50000+0 records in
50000+0 records out
104857600000 bytes transferred in 95.481032 secs (1098203466 bytes/sec)
root@nas:/mnt/MainPool # dd of=/dev/null if=test.dat bs=2048k count=50000
50000+0 records in
50000+0 records out
104857600000 bytes transferred in 85.704446 secs (1223479119 bytes/sec)
This is on a supermicro X11SSH-CTF with 64GB RAM and an i3-7320.

Dataset is set to sync standard with 1MB recordsize, compression off. There was data stored on the pool, but utilization was low. No SLOG or L2ARC.
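If anyone wants to replicate that dataset config, it's just (MainPool being my pool name; point it at your own dataset):

zfs set recordsize=1M MainPool
zfs set compression=off MainPool
zfs set sync=standard MainPool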
 
Last edited:

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
ok, well, first off, this one is writing zeros to disk and then reading those zeros back from disk... I'm very doubtful that that would give anything other than skewed numbers.
[shdt1s@glpnas1 /mnt/prod/tmp]$ dd if=/dev/random of=./testfile.out bs=4096 count=10000000
10000000+0 records in
10000000+0 records out
40960000000 bytes transferred in 273.435843 secs (149797479 bytes/sec)
/mnt/prod/tmp]$ dd if=/dev/zero of=./testfile.out0 bs=4096 count=10000000
10000000+0 records in
10000000+0 records out
40960000000 bytes transferred in 48.883991 secs (837902130 bytes/sec)
/mnt/prod/tmp]$
/mnt/prod/tmp]$ dd if=./testfile.out of=/dev/null bs=4096
10000000+0 records in
10000000+0 records out
40960000000 bytes transferred in 71.741553 secs (570938294 bytes/sec)
/mnt/prod/tmp]$ dd if=./testfile.out0 of=/dev/null bs=4096
10000000+0 records in
10000000+0 records out
40960000000 bytes transferred in 132.852370 secs (308312152 bytes/sec)
oddly, the /random file was read faster than the /zero file, so I assume my test is also a bit flawed somewhere
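one possible flaw on the write side: the /dev/random write landed right at ~150MB/s, which could just be the throughput of /dev/random itself rather than the pool's. a way to take the RNG out of the path, assuming fio is installed (it's in FreeBSD pkg; sizes and the path are just my own):

fio --name=seqwrite --rw=write --bs=1M --size=100g --directory=/mnt/prod/tmp --ioengine=psync
fio --name=seqread --rw=read --bs=1M --size=100g --directory=/mnt/prod/tmp --ioengine=psync
# fio fills buffers with roughly incompressible data by default, so compression can't inflate the numbers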
 
Last edited:

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
Why would the numbers be skewed? It's a sequential read/write designed to test the streaming bandwidth of the pool. It's not a test applicable to an iSCSI load or something else, but my numbers are about the same if, for example, I write a 50GB ISO file.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
also, I only just remembered: SMB/CIFS is, I think, single-core on freenas, and that greatly limits speed. it's been a while since I looked into this, when I first upgraded to 10gbe, so I'm trying to remember all the things that, at the time, I thought explained only getting 150MB/s.
Why would the numbers be skewed? It's a sequential read/write designed to test the streaming bandwidth of the pool. It's not a test applicable to an iSCSI load or something else, but my numbers are about the same if, for example, I write a 50GB ISO file.

all kinds of things can skew numbers; if your methodology is flawed, the data can be made to mean almost anything. I'm not saying that it IS skewed, because I am not confident enough that my own understanding isn't flawed to be able to accurately rate these particular numbers.
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
@artlessknave Your test looks like it was 40GB, of which a significant amount might fit in ARC, thus skewing your results on reading it back.

In my experience SMB performance does place a significant single-threaded load on the CPU if you are trying to max out 10gb. Both ends might need good per-core CPU performance to max out 10gb, though I would think the CPUs that @isternbu listed should be adequate. For me, using jumbo frames from FreeNAS to my Windows desktop gave me a boost, although most on the forums will not recommend using jumbo frames.
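If you do try jumbo frames, the MTU has to match end to end: the FreeNAS interface, every switch port in the path, and the Windows NIC. On the FreeNAS side, assuming the X540 shows up as ix0 (check ifconfig for the actual name):

ifconfig ix0 mtu 9000
# to make it stick across reboots, set mtu 9000 in the interface's Options field in the GUI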

@isternbu I'm not sure if you included what disks you are using, how full the pool is, or how active it is at the time, but that would help, along with your tests directly on the pool.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
@artlessknave Your test looks like it was 40GB, of which a significant amount might fit in ARC, thus skewing your results on reading it back.
absolutely. the system has 64gb. so I'm rerunning it with a bigger file, but that takes... time.
 
Last edited:

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
I get pretty much the same result with a 99 GB file, including the faster read for the random file, which makes no sense to me.
/mnt/prod/tmp]$ dd if=/dev/random of=./testfile.out.rand bs=4096 count=26000000
26000000+0 records in
26000000+0 records out
106496000000 bytes transferred in 709.529994 secs (150093725 bytes/sec)
/mnt/prod/tmp]$ dd if=/dev/zero of=./testfile.out.zero bs=4096 count=26000000
26000000+0 records in
26000000+0 records out
106496000000 bytes transferred in 124.474234 secs (855566620 bytes/sec)
/mnt/prod/tmp]$ dd if=./testfile.out.rand of=/dev/null bs=4096
26000000+0 records in
26000000+0 records out
106496000000 bytes transferred in 185.819053 secs (573116685 bytes/sec)
/mnt/prod/tmp]$ dd if=./testfile.out.zero of=/dev/null bs=4096
26000000+0 records in
26000000+0 records out
106496000000 bytes transferred in 350.356585 secs (303964602 bytes/sec)
 