Write cache enable or disable?

JoeAtWork

Contributor
Joined
Aug 20, 2018
Messages
165
Hi All,

Is it good practice, or a requirement, to disable the on-disk write cache? I see a 10-13% fluctuation in vdev usage, which I take to mean the larger vdev will stall the scrub and slow it down. This is in my lab, where the data set is random 100 GB files generated with dd, which I then copy to fill the pool to 79%.
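For reference, the test files are generated roughly like this (the dataset path and file name are just examples):

# create one 100 GB file of random data (repeated for each test file)
dd if=/dev/urandom of=/mnt/tank/random-1 bs=1m count=102400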

I noticed this with 24 mirror vdevs in a stripe, and when I switched to 6 RAIDZ2 vdevs I could really see one vdev doing all the work during a scrub. It takes 6 hours or so to scrub this 79%-full zpool, and partway through I can see all the I/O going to the first vdev. I see the same thing with striped mirrors.
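The per-vdev distribution can be watched while the scrub runs with something like this ("tank" is a placeholder for the pool name):

# show I/O statistics broken out per vdev, refreshing every 5 seconds
zpool iostat -v tank 5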

[attached screenshot]
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Do you mean the ZIL?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Hi All,

Is it good practice, or a requirement, to disable the on-disk write cache?

No, provided that the write cache is "honest" and does not lie about its capabilities. ZFS is aware of and leverages the write cache on physical devices, issuing flushes where necessary to protect things like metadata (written synchronously) and uberblock updates.
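As a side note, whether ZFS issues those flushes is controlled by a tunable; the name below is the OpenZFS-on-FreeBSD spelling (older releases used vfs.zfs.cache_flush_disable), so treat it as a sketch:

# 0 (the default) means ZFS still sends SYNCHRONIZE CACHE commands to the disks
sysctl vfs.zfs.nocacheflush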

ZFS should be correctly enabling the write cache on disks connected via HBAs or non-RAID SATA; introducing a RAID controller, however, removes any guarantee of this. You can verify on a per-disk basis with camcontrol identify daX, looking/grepping for the "write cache" line to ensure the feature shows as supported and enabled.
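For example (a sketch; da0 stands in for each of your devices, and the exact column spacing varies by drive):

# ATA/SATA disk: the identify data lists the write cache feature and its state
camcontrol identify da0 | grep -i "write cache"
# write cache                    yes      yes   <- supported / enabled

# SAS disk: the equivalent information is the WCE bit in the Caching mode page
camcontrol modepage da0 -m 0x08 | grep WCE
# WCE: 1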
 

JoeAtWork

Contributor
Joined
Aug 20, 2018
Messages
165
Do you mean the ZIL?

No, I mean the on-disk write cache on each drive, which I enable across all the disks with:

# set the WCE (write cache enable) bit in the Caching mode page (0x08) on da0 through da61; camcontrol reads the new value from stdin when piped (brace expansion needs bash/zsh)
for i in {0..61}; do echo "WCE: 1" | camcontrol modepage da$i -m 0x08 -e; done
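The result can then be spot-checked per disk, e.g.:

# confirm the bit took effect on each drive (expect WCE: 1)
for i in {0..61}; do camcontrol modepage da$i -m 0x08 | grep WCE; done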
 