Issue with writing to disk

Status
Not open for further replies.
Joined: Feb 16, 2014 | Messages: 2
Hi!
I have spent a whole day trying to get my "new" FreeNAS box to work. My problem is that when I start copying big files to it, the first gig or so is copied at high speed. After that it takes a break (see screenshot), and then it continues. This only happens during writes; there are no pauses when reading the same data back.

My hardware:

ASUS P6X58D-E
Intel Core i7-950
24 GB Corsair Vengeance DDR3 1600 MHz
Gigabit network
Western Digital 1 TB (for testing)

Setup:
FreeNAS 9.2.1 x64
ZFS file system
AFP share

Please help me.


(Attached screenshots: Skjermbilde 2014-02-16 kl. 18.36.48.png, Skjermbilde 2014-02-16 kl. 18.20.15.png)
 

eraser
Contributor | Joined: Jan 4, 2013 | Messages: 147
If I read your attached graph correctly you are writing data at an average rate of 87.4 MB/sec. That is close to the average write speed of your single 1 TB disk.

The chart is showing behavior that I would expect. ZFS accepts incoming data into a 'txg' (transaction group) for 5 seconds, or until an upper size limit is reached, and then writes the txg out to disk. If the physical disk is not fast enough to keep up with the incoming data, ZFS will throttle/pause acceptance of new data until the disk can catch up.
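The fill-then-stall pattern above can be sketched with a toy model. This is not real ZFS code, and the rates and buffer cap are assumed numbers for illustration (the real limits are tunables such as the txg timeout and dirty-data cap): data arrives from the network faster than one disk can flush it, so once the in-memory buffer fills, the writer is stalled until a flush frees space.

```python
# Toy model of ZFS-style write throttling (assumed numbers, not real ZFS code).
NETWORK_MBPS = 110.0     # assumed gigabit ingest rate, MB/s
DISK_MBPS = 80.0         # assumed sustained write speed of one disk, MB/s
BUFFER_LIMIT_MB = 400.0  # assumed dirty-data cap before writers are stalled

def simulate(seconds):
    """Simulate one-second ticks; return (average accepted MB/s, stalled seconds)."""
    buffered = 0.0
    accepted = 0.0
    stalled_seconds = 0
    for _ in range(seconds):
        free = BUFFER_LIMIT_MB - buffered
        taken = min(NETWORK_MBPS, free)  # excess is refused: the copy "pauses"
        if taken < NETWORK_MBPS:
            stalled_seconds += 1
        accepted += taken
        # The disk drains the buffer at its own (slower) rate.
        buffered = max(0.0, buffered + taken - DISK_MBPS)
    return accepted / seconds, stalled_seconds

avg_rate, pause_seconds = simulate(60)
# The copy starts at full network speed, then throttles toward the disk's rate.
```

Over a long run the average accepted rate converges on the disk's speed, which is why the transfer graph shows an initial burst followed by pauses.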

If you don't like seeing the starts and pauses on your transfer chart, you could add more disks to your ZFS pool to increase its write throughput. Your bottleneck would then shift to your network card.
 
Joined: Feb 16, 2014 | Messages: 2
Thanks for the reply!

Why does this not happen with a similar setup that has 6 × 2 TB WD Green drives in RAIDZ2?
 

eraser
Contributor | Joined: Jan 4, 2013 | Messages: 147
Oops, I just saw that you had two graphs in your post. I assume the top graph shows your maximum write speed, and the bottom graph shows the write speed during the "pauses".

I think my first reply is still technically correct. Perhaps the 87.4 MB/sec you see in the top graph is the maximum speed at which you can move data across the network using AFP. If your 1 TB disk can't keep up with a sustained write speed of 87.4 MB/sec, it is still the bottleneck.

=-=-=

The reason you don't see the same copy behavior with a 6-disk RAIDZ2 pool is that the 6-disk pool can write data faster than your network can deliver it. So the bottleneck has moved to your network instead of your disks.

See here for some more information:

http://constantin.glez.de/blog/2010/06/closer-look-zfs-vdevs-and-performance#raidz

"For write bandwidth, you may get more: Large blocks at the file system level are split into smaller blocks at the disk level that are written in parallel across the vdev's individual data disks and therefore you may get up to n times an individual disk's bandwidth for an n+m type RAID-Z vdev."
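A quick back-of-the-envelope calculation shows why the bottleneck moves. The per-disk speed below is an assumed figure for a WD Green, and the RAID-Z upper bound of "n times one disk" is the idealized best case from the quote above, ignoring parity and metadata overhead:

```python
# Rough estimate (assumed numbers) of streaming write bandwidth for an
# n+m RAID-Z vdev, per the rule of thumb quoted above: up to n (data disks)
# times one disk's bandwidth.
DISK_WRITE_MBPS = 80.0  # assumed sustained write speed of one WD Green, MB/s
GIGABIT_MBPS = 117.0    # ~1 Gbit/s link after framing overhead, MB/s

def raidz_write_bandwidth(total_disks, parity_disks, per_disk=DISK_WRITE_MBPS):
    """Idealized upper bound on streaming write bandwidth of a RAID-Z vdev."""
    data_disks = total_disks - parity_disks
    return data_disks * per_disk

pool_bw = raidz_write_bandwidth(6, 2)   # RAIDZ2: 4 data disks
bottleneck = min(pool_bw, GIGABIT_MBPS)  # the slower of pool and network wins
```

With 4 data disks the pool's idealized write bandwidth (~320 MB/s here) far exceeds a gigabit link, so the network, not the disks, limits the copy, and the txg pauses disappear from the graph.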
 