ezekiel.incorrigible
Dabbler
- Joined
- Aug 10, 2018
- Messages
- 46
You can adjust the size of the "buffer" that ZFS will allow to be full of dirty (uncommitted-to-pool) data, but unless that buffer lives on a stable storage device (e.g. NAND flash), you also have to accept the risk of losing that much data in the event of a crash.
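For what it's worth, the tunable being described is OpenZFS's `zfs_dirty_data_max`. A sketch of inspecting and raising it, assuming OpenZFS on Linux; the 4 GiB figure is purely illustrative, not a recommendation:

```shell
# Current cap on outstanding dirty data, in bytes (OpenZFS on Linux).
cat /sys/module/zfs/parameters/zfs_dirty_data_max

# Raise it to 4 GiB until the next reboot (illustrative value only).
echo $((4 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_dirty_data_max

# On FreeBSD / TrueNAS CORE the equivalent knob is exposed as a sysctl:
sysctl vfs.zfs.dirty_data_max
```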
There's also the issue of how quickly you're going to "fill" and "empty" that buffer. Assuming an infinitely large buffer: if you have a 10 Gbps connection (about 1.25 GB/s) and shuttle in a 10 GB file, filling takes roughly 8 seconds. But you'll only be able to empty it at the pool's write speed, whether that's 800 MB/s as proposed above, or 400 MB/s once things get full and fragmented over time. If the writes are spaced out far enough to let the buffer drain, cool, nothing bad happens. But keep leaning on it too hard and you'll fall back to disk speed, and you'll also be impacting reads while it's doing the writes.
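A quick back-of-the-envelope check of those numbers (the 10 Gbps ingest and 800/400 MB/s drain rates are the ones from the post; the arithmetic is just illustrative):

```shell
# Fill vs. drain times for a 10 GB burst through the dirty-data buffer.
fill=$(awk 'BEGIN { printf "%.0f", 10e9 / (10e9 / 8) }')     # 10 GB in at 10 Gbps = 1.25 GB/s
drain_fast=$(awk 'BEGIN { printf "%.1f", 10e9 / 800e6 }')    # 10 GB out at 800 MB/s
drain_slow=$(awk 'BEGIN { printf "%.1f", 10e9 / 400e6 }')    # 10 GB out at 400 MB/s

echo "fill: ${fill}s, drain: ${drain_fast}s at 800 MB/s, ${drain_slow}s at 400 MB/s"
```

So the buffer fills faster than it can drain, which is exactly why sustained ingest eventually falls back to pool speed.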
Writes will be relatively infrequent, so the buffer being able to flush itself shouldn't be an issue, and I would be happy to risk losing the dirty data in a power cut with sync=disabled. But if there is a power cut while the system is halfway through an asynchronous write of dirty data, straight from RAM, is there an elevated chance of losing my whole pool versus leaving it at the default sync=standard?
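For reference, the property in question and its valid values; `tank/media` here is just a placeholder dataset name:

```shell
# Inspect the current setting (valid values: standard | always | disabled).
zfs get sync tank/media

# Trade durability of in-flight async data for speed:
zfs set sync=disabled tank/media

# Revert to the default behavior:
zfs set sync=standard tank/media
```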