Problem with spiky read/writes on FreeNAS 9.10 ZFS volume

Status
Not open for further replies.

jafin

Explorer
Joined
May 30, 2011
Messages
51
I'm attempting to set up a new FreeNAS system. The build is:

HP ML10 v2 box
16GB RAM
6 x 3TB WD Red
Booting off an 8GB USB stick

One volume configured as ZFS2.

When I run an rsync from an existing FreeNAS 9.10 box over to this system, I'm noticing spikes in disk performance. Reads show a similar pattern.

I have also attempted to transfer data from another system and noticed similar behaviour, which excludes the source NAS as the culprit.
Graphs show minimal CPU activity and unsaturated network.

The files being copied are large (>1GB); there are no small files.


What I noticed is that a transfer will start out hovering around the 100MB/s mark, then after approximately one minute dip to ~1MB/s, sit there for a little while, then climb back up to the 100MB/s mark.
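While a transfer is running, this is how I've been watching the pool (a rough sketch; tank1 is my pool name):
Code:
zpool iostat -v tank1 1    # per-disk read/write throughput, refreshed every second
gstat -p                   # per-disk %busy, to spot a single slow drive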

I've run dd tests on the server, and they give mixed results.
Code:
[root@freenas] /mnt/tank1/temp# dd if=/dev/zero of=ddfile bs=8k count=4000000  
4000000+0 records in
4000000+0 records out
32768000000 bytes transferred in 29.260153 secs (1119884778 bytes/sec)

[root@nas2] /mnt/tank1/temp# dd if=ddfile of=/dev/zero bs=8k count=4000000
4000000+0 records in
4000000+0 records out
32768000000 bytes transferred in 60.502920 secs (541593694 bytes/sec)


dmesg doesn't show anything interesting.
I've enabled SMART and nothing is coming back with any errors.

The graphs below are from an rsync from the other NAS, which will saturate the gigabit LAN without peaks when transferring to another system.

[Attached: freenas-network.jpg (network throughput graph), Freenas-cpu.jpg (CPU load graph)]


I don't know what else to test/toggle to attempt to resolve the issue.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
What is the NIC chipset?

Did you burn in the drives? The graph for ada0 looks different from the other disks. Could be something, or nothing, but a drive with erratic write performance would cause the whole pool to behave erratically.

Did you run your dd tests on a dataset with compression enabled? If so, they aren't revealing pool performance.
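If it was enabled, something along these lines avoids the problem (a sketch; substitute your actual dataset, and note the read-back can be served from ARC rather than disk):
Code:
zfs get compression tank1/temp                   # lz4 makes /dev/zero numbers meaningless
dd if=/dev/random of=rndfile bs=1m count=4096    # ~4GiB of incompressible test data
dd if=rndfile of=/dev/null bs=1m                 # read it back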

Have you tested the transfer with the systems directly connected?
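If not, iperf will take the disks out of the picture entirely (a sketch; FreeNAS ships iperf, and the address is a placeholder for your receiver):
Code:
iperf -s                           # on the receiving NAS
iperf -c 192.168.1.10 -t 60 -i 5   # on the sender: 60-second run, reports every 5s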
 

jafin

Explorer
Joined
May 30, 2011
Messages
51
@robert,

Sorry, ada0 wasn't part of the zvol.
I have connected the two NASes directly together and the problem still persisted.
I have only run smartctl -t short on the drives, which is possibly not enough.
I'll trash the zvol on the weekend and run a proper burn-in test on each of the drives as a next step, to ensure the drives are all OK. Something like the sketch below is what I have in mind.
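(A rough sketch only, and the badblocks pass is destructive, so only on drives with no data on them. badblocks isn't in the FreeBSD base system; it comes with the e2fsprogs port/package.)
Code:
smartctl -t long /dev/ada1     # extended SMART self-test, repeat per drive
badblocks -ws /dev/ada1        # destructive write + verify pass over the whole disk
smartctl -A /dev/ada1 | egrep "Reallocated|Pending|Uncorrect"   # look for new bad-sector counters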
I disabled lz4 compression on the zvol.
NIC is the onboard HP 332i adapter.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Please say "RAIDZ2" when you mean "RAIDZ2", and "pool" when you mean "pool", and "dataset" when you mean "dataset", and "zvol" when you mean "zvol", etc. Otherwise, clarity of communication tends to suffer.
> Sorry, ada0 wasn't part of the zvol.
So, what else was going on while you were running this test, to make ada0 just as busy as the other disks? You have a RAIDZ2 volume, and four disks total, and one not part of the pool? Something doesn't add up.
> I have only run smartctl -t short on the drives, which is possibly not enough.
Not even close.
> I disabled lz4 compression on the zvol.
Just to be clear, leave it enabled for normal use, just disable it if you're trying to test performance using /dev/zero.
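For example (tank1/temp standing in for your dataset):
Code:
zfs set compression=off tank1/temp   # only while benchmarking with /dev/zero
zfs set compression=lz4 tank1/temp   # restore for normal use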
Broadcom? What does dmesg show for it?
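Something like this should pull out the relevant lines (assuming the 332i attaches via the bge(4) driver, as HP's onboard Broadcom gigabit NICs usually do on FreeBSD):
Code:
dmesg | grep -i bge                   # driver attach messages and link state
pciconf -lv | grep -B2 -i broadcom    # confirm the exact chipset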
 

maglin

Patron
Joined
Jun 20, 2015
Messages
299
We need system specs, array specs, and how the dataset was being accessed for the transfer.

Your system was under some decent load. What command did you use to start the transfer? We also need the specs of the other FreeNAS system, along with how full the source pool is and how fragmented it is.
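On the source system, something like this shows both at a glance (a sketch; the short column names should work on any reasonably recent ZFS):
Code:
zpool list -o name,size,alloc,free,frag,cap,health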


 