New to FreeNAS - slow write speeds


andrewmoore

Cadet
Joined
Aug 23, 2017
Messages
6
Hi all,

I've just set up FreeNAS for the first time. As part of my migration I moved my data onto a Synology DS216j temporarily; while moving the data onto the Synology I was getting about 100MB/s write.

I've just tried to move the data off the Synology onto my new FreeNAS build and the write speeds are really slow, peaking at around 25MB/s for larger files.

The two methods I've tried are as follows:

Creating an NFS share on the Synology, mounting it on FreeNAS, then using rsync to copy it over to my FreeNAS volume:
Code:
mount -t nfs 172.16.1.207:/volume1/backup /mnt/backup/
rsync -avhP /mnt/backup /mnt/Storage

This resulted in 25MB/s write.

Running rsync on the Synology, pushing to FreeNAS over SSH:
Code:
rsync -avhP /volume1/backup/ root@172.16.1.240:/mnt/Storage/

Again, 25MB/s write.

If I run a local disk speed test I get a slightly higher result, but it's still very poor.

Code:
dd if=/dev/urandom of=/mnt/Storage/testfile bs=1024 count=5000000 
5000000+0 records in
5000000+0 records out
5120000000 bytes transferred in 137.445803 secs (37251046 bytes/sec)

Config is as follows:
  • FreeNAS 11.1-U1
  • 8x 3TB WD Red in RAIDZ2
  • 32GB DDR4
  • 2x gigabit ports (LACP - Intel I350 controller) - I've tested without LACP and the speed doesn't change.
The Synology I'm copying from has 2x 2TB WD Blacks in RAID0, so I don't think read speed on that side is the issue.
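If it would help, I suppose I could confirm that with a quick read test on the Synology itself, e.g. reading a large file (bigger than the DS216j's RAM, to avoid cache effects) and throwing the output away. The filename below is just a placeholder:
Code:
# 'somefile' stands in for any sufficiently large file on the array
dd if=/volume1/backup/somefile of=/dev/null bs=1M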

Any help would be appreciated. New to all this, thanks in advance.
 

andrewmoore

Cadet
Joined
Aug 23, 2017
Messages
6
What's the output of ifconfig?
I don't think this is a network issue anymore, since my latest dd test shows low write speeds as well. Could this thread be moved to the "New to FreeNAS?" section?

Here is ifconfig output anyway:
Code:
root@nas01:~ # ifconfig
igb0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
	options=6403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
	ether 5c:b9:01:7b:8f:83
	hwaddr 5c:b9:01:7b:8f:83
	nd6 options=9<PERFORMNUD,IFDISABLED>
	media: Ethernet autoselect (1000baseT <full-duplex>)
	status: active
igb1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
	options=6403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
	ether 5c:b9:01:7b:8f:83
	hwaddr 5c:b9:01:7b:8f:84
	nd6 options=9<PERFORMNUD,IFDISABLED>
	media: Ethernet autoselect (1000baseT <full-duplex>)
	status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
	options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
	inet6 ::1 prefixlen 128
	inet6 fe80::1%lo0 prefixlen 64 scopeid 0x3
	inet 127.0.0.1 netmask 0xff000000
	nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
	groups: lo
lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
	options=6403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
	ether 5c:b9:01:7b:8f:83
	inet 172.16.1.240 netmask 0xffffff00 broadcast 172.16.1.255
	nd6 options=9<PERFORMNUD,IFDISABLED>
	media: Ethernet autoselect
	status: active
	groups: lagg
	laggproto lacp lagghash l2,l3,l4
	laggport: igb0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
	laggport: igb1 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
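If it's worth double-checking, I suppose I could run a raw network throughput test between the two boxes with something like iperf3 (assuming I can get it installed on both ends), which would take the disks out of the picture:
Code:
# on the FreeNAS box, start a listener
iperf3 -s
# on the Synology (or any other client), run the default 10-second TCP test against it
iperf3 -c 172.16.1.240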
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
dd if=/dev/urandom of=/mnt/Storage/testfile bs=1024 count=5000000
You don't describe your hardware (please do), but on modern drives with 4K blocks, that tiny buffer size will result in much more overhead and slow write speeds. Always use at least a 64K buffer size when writing with dd. 1M is not too much. Note that dd understands humanized notation, so that can be bs=1m.
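For example, the same 5 GB test with a 1M buffer (count scaled down to match) would look roughly like this; keep in mind that /dev/urandom itself may become the bottleneck at higher speeds, so it still isn't a pure disk benchmark:
Code:
# same 5 GB total, written in 1M chunks instead of 1K
dd if=/dev/urandom of=/mnt/Storage/testfile bs=1m count=5000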
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
True. Linux has an oddly strict units implementation in dd, so that would be bs=1M.
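For instance, the same sort of test run on the Linux side (the target path here is just a placeholder) needs the uppercase suffix:
Code:
# GNU coreutils dd is case-sensitive about size suffixes: bs=1M, not bs=1m
dd if=/dev/urandom of=/volume1/testfile bs=1M count=5000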
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
True. Linux has an oddly strict units implementation in dd, so that would be bs=1M.

I was referring to older stuff. I'm too lazy to be certain, but I think at least one of BSD4.2/SunOS4, Ultrix, or IRIX really wanted the whole number. A lot of my UNIX habits tend to be "backwards compatible" with quirks that may not have existed in years; because I hated typing things twice (remember the days when /bin/sh lacked history and editing? Before FreeBSD 9?), I had a strong tendency to learn the least-common-denominator solutions. So it would really be "bs=1048576". :tongue:
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
The /bin/sh history and editing is pretty new, actually. Just a couple of years.

But anyway, point taken. If anybody wants to know why they can't erase a disk with dd on IRIX, I'll be ready. :)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
The /bin/sh history and editing is pretty new, actually. Just a couple of years.

It appears I'm in error (but you are even more!). History and editing seem to have been enabled by default starting in FreeBSD 9 (which is still more than a "couple of years" ago), but the support actually dates back all the way to FreeBSD 5, when /bin/sh stopped being linked statically.

It annoys me, in part because it introduces dependencies on more libraries (libedit/ncurses). Back in the good old days, there were no dependencies on the dynamic linker for such critical-path items as /bin/sh. This botch is pretty annoying to anyone who has done manual repair and reconstruction of a damaged production system, since you then have to use /rescue/*. Fortunately FreeBSD 5 and 6 were train wrecks in their own right in other ways, so we pretty much skipped them and kept building 4.11R until around the time 7.0R came about, which fixed some of the worst SMP panics and crashes for doing things you'd think would be trivial.

But on the other hand, I guess I am very happy that I can't recall any recent history where I had to do an OS recovery, probably due to everything being virtualized these days with RAID1 datastore backing. The days of heads wearing a hole in your OS disk may finally be past.
 