Turn on iSCSI Sync Write

eroji

Contributor
Joined
Feb 2, 2015
Messages
140
I found a lot of sources saying to turn on sync writes for iSCSI, but nothing saying where or how to set this. Can someone tell me the steps?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yes, you have to set it from the command line. "zfs set sync=always pool/foo/bar"

Expect to see a performance hit.
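
As a concrete sketch (the pool/dataset name here is just a placeholder - point it at whatever zvol or dataset backs your iSCSI extent; children inherit the setting unless they override it):

zfs set sync=always tank/iscsi
zfs get sync tank/iscsi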
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Expect to see a performance hit.

That's putting it mildly.

If your workload actually requires sync writes (i.e. VMs on iSCSI storage), then you should get a good SLOG device and install it first; otherwise your VMs may not appreciate having their lightning-fast write speeds brought back down to sync-write reality.
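
Attaching the SLOG itself is a one-liner once the device is in the box; a minimal sketch, with ada4/ada5 standing in for whatever your SSDs enumerate as - run one or the other:

zpool add tank log ada4              # single log device
zpool add tank log mirror ada4 ada5  # or a mirrored log, which is the safer choice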
 

GeoffK

Dabbler
Joined
Apr 29, 2015
Messages
29
Yes, you have to set it from the command line. "zfs set sync=always pool/foo/bar"

Expect to see a performance hit.

Reiterating this - you need sync=always for VMware iSCSI pools.

And yes, if you don't have a SLOG, it's gonna suck - on an array with 7200 RPM drives, a cap of 700 IOPS or thereabouts.

Get an Intel DC S3500/S3700 and this will boost to 30-40k IOPS - don't forget to resize it to 8-16GB...
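
One way to do that resizing is to give ZFS only a small partition and leave the rest of the SSD unallocated for overprovisioning; a rough sketch (da5 is a placeholder for the SSD's device name):

gpart create -s gpt da5
gpart add -t freebsd-zfs -a 4k -s 16G -l slog0 da5
zpool add tank log gpt/slog0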
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
on an array with 7200 RPM drives, a cap of 700 IOPS or thereabouts.

I cannot even begin to imagine how you would come up with that.

A single mirror vdev of two 7200 RPM drives might get you 100 IOPS write or 150 IOPS read.

A shelf of 48 7200 RPM drives in mirror vdevs should get you at least 2000 IOPS write or 3000 IOPS read.
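
The rough arithmetic behind those figures, assuming ~100 write IOPS per 7200 RPM spindle: a mirror vdev has to write to both disks, so on writes it performs like a single drive, while reads can be serviced by either side. 48 drives arranged as 24 two-way mirrors therefore works out to roughly 24 x 100 = ~2400 write IOPS, and somewhat more on reads.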
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Real world experience.

It was 16 Mirrored vdevs ;)

Well, the point was that it's really dependent on the size and configuration of the pool; you can't just say "on an array with 7200 RPM drives".

But that strikes me as maybe a little on the low side for 16 mirror vdevs. Are you sure you had enough parallelism in the workload while testing?
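
For what it's worth, one way to get that parallelism is to run the benchmark with plenty of outstanding I/O rather than a single worker; purely as an illustrative sketch (tool, path, and numbers are examples, not gospel):

fio --name=synctest --directory=/mnt/tank/iscsi --rw=randwrite --bs=4k --size=4G --numjobs=4 --iodepth=32 --ioengine=posixaio --runtime=60 --time_based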
 

GeoffK

Dabbler
Joined
Apr 29, 2015
Messages
29
But that strikes me as maybe a little on the low side for 16 mirror vdevs. Are you sure you had enough parallelism in the workload while testing?

Eh, it was hardly empirical - it was a single IOMeter guest on ESXi :)
 