sync=always is always slow?

Status
Not open for further replies.

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Having the slog/zil on the zpool itself instead of on an external device has no effect on where the data will ultimately be stored, and therefore has no effect on fragmentation.
 

jixam

Dabbler
Joined
May 1, 2015
Messages
47
In my case, I've got the zfs equivalent of a 4-drive RAID10 of SSDs. If a VM lives in the SSD-based datastore in vSphere, a simple "dd" test (dd if=/dev/zero of=zero bs=1M) shows that it can write at just over 60MB/sec if sync=always is set for the zpool (or zvol) in question. If I change it to sync=standard, the throughput goes up to close to 650MB/sec.

Now that I have finally received the gear I mentioned previously (2x Intel P3600 NVMe), I can do some tests of my own.

However, it seems that I do not have enough information to reproduce your test. You only mention "dd if=/dev/zero of=zero bs=1M" and your zfs sync= settings, so I have these questions:
  • What was the value of the zfs recordsize property?
  • What was the value of the zfs compression property?
  • What was the filesystem used within the VM?
  • How do you know that the data fully hit the physical storage (with apparently no fsync inside the VM)?
  • If you are just testing the sync=always impact, why not devise a non-networked test?
I have tried dd on FreeNAS itself, and my very preliminary testing indicates that the recordsize can really affect throughput, while compression (it's all zeroes!), sync=always and indeed the SLOG have little impact.
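
To give an idea of what I mean by a local, non-networked test, a minimal sketch could look something like this (pool and dataset names are placeholders, and the sizes are arbitrary):

Code:
# Create a throwaway dataset with a known recordsize and compression disabled,
# so the /dev/zero data is actually written out instead of compressed away.
zfs create -o recordsize=128K -o compression=off tank/synctest

# Baseline: sync=standard, i.e. dd's writes are buffered and flushed asynchronously.
zfs set sync=standard tank/synctest
dd if=/dev/zero of=/mnt/tank/synctest/testfile bs=1m count=4096

# Same write again, but with every write forced through the ZIL (and SLOG, if present).
zfs set sync=always tank/synctest
dd if=/dev/zero of=/mnt/tank/synctest/testfile bs=1m count=4096

# Clean up.
zfs destroy tank/synctest

That takes the network and the hypervisor out of the picture entirely.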
 

Will Dormann

Explorer
Joined
Feb 10, 2015
Messages
61
All ZFS properties (recordsize, compression, etc.) other than sync=always were left at the FreeNAS defaults. The guest OS was a stripped-down Ubuntu VM, which defaults to ext4. I know that the data hit the physical storage only as much as I trust the vSphere/iSCSI/ZFS chain. I'm not convinced that the benchmark is very valid at all, especially considering compression, but it was able to expose the difference between "terrible" and "good" performance.
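
For what it's worth, one way to take some of that trust out of the equation would be to have dd itself flush its writes inside the guest; something like this (the output file is just an example, and whatever ESXi or the iSCSI layer caches further down the chain is of course still out of dd's hands):

Code:
# GNU dd inside the Ubuntu guest: oflag=direct bypasses the guest's page cache
# and conv=fsync forces a final fsync, so the reported rate reflects data that
# at least left the VM rather than sitting in guest memory.
dd if=/dev/zero of=zero bs=1M count=4096 oflag=direct conv=fsync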

One thing that I did notice was that the SAS3 HGST Ultrastar seemed to be a touch faster than the SAS2 ZeusRAM in the above silly benchmark. In practice, though, I suspect no difference would be noticed.

I did test out a local-only dd experiment, but we've had some hardware issues that have distracted me in the meantime.
 

jixam

Dabbler
Joined
May 1, 2015
Messages
47
I'm not convinced that the benchmark is very valid at all, especially considering compression, but it was able to expose the difference between "terrible" and "good" performance.

With compression enabled in your test, essentially no data is stored (all-zero blocks compress to "nothing"). Thus, the 650MB/s baseline is most likely not correct. Maybe the real "full speed" of your disks (for a VM workload) is about 120MB/s, and then your benchmark doesn't seem as shocking.
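
One quick sanity check would be to look at the dataset's space accounting after the test (the dataset name is a placeholder), or to feed dd from /dev/urandom so the data cannot be compressed away:

Code:
# If the all-zero test data is being compressed away, the dataset's space
# usage will barely change before and after the dd run.
zfs get compression,compressratio,used tank/vmstore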

Unfortunately, I cannot repeat your test, as I currently only have access to 1Gbit/s networking and I can fully saturate that no matter what I do ...
 