ZFS Noob
Contributor
Joined: Nov 27, 2013
Messages: 129
I'm still trying to figure out why I can't get decent synchronous transfers on my system, and I'm looking at whether there's a problem with either my SSD or the card I've used to mount it in the system.
Here's what I've come up with today:
1. Adding the Intel 320 SSD as an SLOG seems to decrease write performance.
I'm using the dd command I've seen referenced here, so I'm measuring throughput directly from the command line. Here's the command (2 MiB blocks × 51,200 blocks, i.e. a 100 GiB test file):
dd if=/dev/zero of=/mnt/nas1mirror/testing/tmp.000 bs=2048k count=50k
And here's what I got:

[screenshot: dd throughput results, with and without the SLOG]
Those are megabytes per second, by the way. Looks like ZFS is caching the hell out of the writes when it can to get great performance on async writes. Great. But the SSD speed decrease? That sucks.
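For reference, this is roughly how I've been flipping the SLOG in and out between runs. ada1 is just what the SSD shows up as on my box; your device name will differ:

zpool add nas1mirror log ada1              # attach the Intel 320 as a log device
zpool remove nas1mirror ada1               # pull it back out for the no-SLOG run
zfs set sync=always nas1mirror/testing     # (optional) force every write through the ZIL
zfs set sync=standard nas1mirror/testing   # back to the default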
2. The SSD isn't getting "boosted" the way the disk pool is.
So I pulled the SSD out of the pool, set it up as its own pool, and reran the test, with sync=standard on both datasets:

[screenshot: dd throughput results, SSD pool vs. striped-mirror pool]
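For the record, the standalone pool was created more or less like this (ssdtest is just a placeholder name I'm using here):

zpool remove nas1mirror ada1      # detach the SSD from the mirror pool
zpool create ssdtest ada1         # single-disk pool on the Intel 320
zfs set sync=standard ssdtest
dd if=/dev/zero of=/mnt/ssdtest/tmp.000 bs=2048k count=50k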
The SSD transfer number is in the right ballpark, though higher than I'd expect: reviews of the Intel 320 I've read online suggest it can sustain 220 MB/s or so on writes, and I got 409 MB/s. Maybe the extra is attributable to the on-disk cache. Cool.
But the 4 drives I've got as a striped mirror? WTH? If ZFS is caching writes in memory and pushing transactions as I'd expect, shouldn't it be doing that for both? Why the better performance here?
Or is this more likely to be 409 MB/s for the SSD after caching is taken into account, in which case I've got a problem with either the SSD or the card it's mounted in?
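One check I can think of (a sketch, using my pool name nas1mirror) is to watch per-device throughput while the dd runs, to see how much of what dd reports is RAM caching versus actual disk writes:

zpool iostat -v nas1mirror 1    # per-vdev read/write bandwidth, once a second
gstat -p                        # FreeBSD: per-disk busy % and throughput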
Where should I be looking next to help diagnose this?
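For what it's worth, the raw-device tests I know of on FreeBSD are these (assuming the SSD shows up as ada1; run them against the bare device, not through the pool):

diskinfo -t /dev/ada1    # FreeBSD's built-in seek/transfer benchmark
smartctl -a /dev/ada1    # sanity-check the drive's SMART data while I'm at it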
Thanks. Sorry for the wall of text. At least I threw in pictures to make it easier...