Very bad performance 8.3beta

Status
Not open for further replies.

sirdir

Dabbler
Joined
Apr 20, 2012
Messages
25
Hi!

I have 2 identical servers, one running under 8.2 and one under 8.3
Performance on the 8.3 server is very bad, and it seems to be at the ZFS level.

8.2 looks like this:
dd if=/dev/zero of=test bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 15.038447 secs (697263481 bytes/sec)

not great, but OK.

now 8.3beta1:
dd if=/dev/zero of=Test bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 100.252199 secs (10459382 bytes/sec)

Last time I tried it was even 350 seconds... does anybody have an idea what's going wrong here? Unfortunately I already upgraded the pool, so there's no going back.
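For what it's worth, the two dd runs above work out to roughly a 67x difference. A back-of-the-envelope check in Python, using only the byte counts and times dd printed:

```python
# Throughput of the two dd runs quoted above, from the reported
# byte counts and elapsed times (bytes / seconds).
fast = 10485760000 / 15.038447    # the 8.2 server
slow = 1048576000 / 100.252199    # the 8.3beta1 server
print(f"8.2:      {fast / 1e6:.1f} MB/s")   # ~697 MB/s
print(f"8.3beta1: {slow / 1e6:.1f} MB/s")   # ~10.5 MB/s
print(f"slowdown: {fast / slow:.0f}x")      # roughly 67x
```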

regards
Patrick
 

ben

FreeNAS GUI Developer
Joined
May 24, 2011
Messages
373
What's the ZFS volume configuration on each one?
 

sirdir

Dabbler
Joined
Apr 20, 2012
Messages
25
Hi! I'm not sure what you're referring to. It's more or less identical. But the slow one was quite full: 'only' ~90 GB free (hm, I remember times when I thought I'd never need more than 90 MB in my life...) out of something over 5 TB. I guess that was the problem? I deleted some snapshots and now performance is back to normal...
 

ben

FreeNAS GUI Developer
Joined
May 24, 2011
Messages
373
i.e. ZFS mirror, RAID-Z1, RAID-Z2. If the volume is very nearly full, that would absolutely explain it: ZFS is copy-on-write and needs free space available any time a write is made, or else it spends a lot of time seeking (any filesystem would suffer with that little free space, though).
 

sirdir

Dabbler
Joined
Apr 20, 2012
Messages
25
OK, it's RAID-Z1. I wasn't aware that 90 GB free is considered 'that' low on capacity, but I'll try to free up some more space.

Regards
Patrick
 

ben

FreeNAS GUI Developer
Joined
May 24, 2011
Messages
373
It's a tiny percentage of the disk, so it would absolutely add seek time to every write. Chances are the free areas are highly fragmented, as well.
 

sirdir

Dabbler
Joined
Apr 20, 2012
Messages
25
It's a tiny percentage of the disk, so it would absolutely add seek time to every write. Chances are the free areas are highly fragmented, as well.

To be honest, I still don't really 'see' it. Of course it's a small percentage, but when I try to write 1 GB and 90 GB are free, I still don't see how copy-on-write could slow the copy down that much. And I don't see how fragmentation across 4 disks could slow things down *that* much. Even a heavily fragmented disk 20 years ago was much faster, and I don't see why 4 drives should be any slower. But anyway, I'm glad I found the problem.
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
To be honest, I still don't really 'see' it. Of course it's a small percentage, but when I try to write 1 GB and 90 GB are free, I still don't see how copy-on-write could slow the copy down that much.
You don't see how, when your filesystem is a bit more than 98% full? It's staring you in the face. Not to mention that ZFS has switched from performance-based to space-based optimization at this point.

Unless you are running a read-only zpool, most recommendations are to keep the zpool under 80% utilization.
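Plugging in the figures from this thread (about 90 GB free out of roughly 5 TB; both numbers are approximations), the utilization paleoN is pointing at works out like this:

```python
# Rough pool utilization from the numbers in this thread.
# 5 TB usable and 90 GB free are both approximate figures.
total_gb = 5000
free_gb = 90
used_pct = 100 * (total_gb - free_gb) / total_gb
print(f"~{used_pct:.1f}% full")   # ~98.2%, well past the ~80% guideline
```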
 

ben

FreeNAS GUI Developer
Joined
May 24, 2011
Messages
373
Think of it this way: every time the filesystem wants to write a 512KB block, it has to find an empty one (ZFS won't just overwrite the existing block; that's what copy-on-write means). Because the disk is full end-to-end, it has to perform an expensive seek operation. Additionally, very few of the open spaces will be next to one another, so it performs these time-consuming seeks far more often than usual, rather than writing data relatively contiguously.
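A toy model of what ben describes. This is not ZFS's real allocator (block sizes, metaslabs, and the actual allocation policy are all glossed over); it's just a sketch of how scattered free space multiplies the number of separate disk regions, and hence seeks, that a single write has to touch:

```python
import random

def extents_for_write(free_blocks, nblocks):
    """Count the discontiguous runs (each run ~ one seek) needed to
    place nblocks blocks into the first nblocks entries of a sorted
    free-block list (a crude first-fit)."""
    picked = sorted(free_blocks[:nblocks])
    runs = 1
    for a, b in zip(picked, picked[1:]):
        if b != a + 1:
            runs += 1
    return runs

random.seed(1)
disk = 100_000  # total blocks on the "disk"

# ~98% full: the 2% of free blocks are scattered all over the platter.
scattered = sorted(random.sample(range(disk), disk // 50))
# Mostly empty: the free space is one contiguous region.
contiguous = list(range(disk))

print("scattered free space: ", extents_for_write(scattered, 100), "extents")
print("contiguous free space:", extents_for_write(contiguous, 100), "extent")  # 1
```

On the nearly-full "disk" a 100-block write lands in dozens of separate extents; on the mostly-empty one it lands in a single contiguous run.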
 

sirdir

Dabbler
Joined
Apr 20, 2012
Messages
25
Think of it this way: every time the filesystem wants to write a 512KB block, it has to find an empty one (ZFS won't just overwrite the existing block; that's what copy-on-write means). Because the disk is full end-to-end, it has to perform an expensive seek operation. Additionally, very few of the open spaces will be next to one another, so it performs these time-consuming seeks far more often than usual, rather than writing data relatively contiguously.

Well, probably it's just that I don't know enough about ZFS, and maybe the 'space-based' optimization mode that paleoN mentioned is a good explanation. Still, the performance hit is really striking. Look at it this way: to get performance this bad, ten times or even far more than every bit written has to be ... read or written. And that seems like a lot to me. And even if 90 GB is only 2% of my pool, it's still enough to write huge tables, btrees, whatever you want..
Anyway, as I said, I'm glad I know what to do now.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Well, probably it's just that I don't know enough about ZFS, and maybe the 'space-based' optimization mode that paleoN mentioned is a good explanation. Still, the performance hit is really striking. Look at it this way: to get performance this bad, ten times or even far more than every bit written has to be ... read or written. And that seems like a lot to me. And even if 90 GB is only 2% of my pool, it's still enough to write huge tables, btrees, whatever you want..
Anyway, as I said, I'm glad I know what to do now.

It's not a ZFS thing. All modern filesystems work that way, though some are more efficient near-full than others. ben's post says it most simply. While "90GB" is "still enough to write huge" whatever-you-haves, if that 90GB is in teeny chunks all over the place (which it is), it's going to be slow. Do you have a pile of USB keys laying around? Many people do. How much space do you think you have free? Probably a lot. But maybe only 50MB on this one and 20MB on that one and 30MB on another. Writing to that "free" 100MB is going to be slower than writing to 100MB of contiguous space on a single USB key, because you have to swap USB keys several times.

ZFS, however, is really designed to operate with substantial free space available; this is a natural side-effect of COW filesystems in general. You need to have space nearby for a write to succeed, or else long seeks and severe fragmentation are the eventual result.
 