SuperWhisk
Dabbler
- Joined: Jan 14, 2022
- Messages: 19
My current understanding is that ZFS's default sync setting of "standard" lets the client application decide whether each write should be synchronous (and 99% of the time, this is exactly how it should stay). Async writes are cached in memory until a transaction group fills or a timeout is reached. Sync writes are immediately written to the ZIL (possibly interrupting the writing of an earlier transaction group), then added to the in-memory transaction group just like async writes.
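For reference, this behavior is controlled per dataset via the `sync` property (the dataset name below is a hypothetical one, substitute your own):

```shell
# Hypothetical dataset name "tank/share" -- substitute your own.
zfs get sync tank/share
# sync=standard  honor each write's own sync/async request (the default)
# sync=always    treat every write as synchronous (everything hits the ZIL)
# sync=disabled  acknowledge sync writes without committing them to the ZIL
```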
Then there's NFS (I'm using NFSv4). You can mount an NFS share on the client machine as either sync or async, and this affects (among other things) whether the client will cache writes locally before sending them to the NFS server in a batch - conceptually similar to ZFS transaction groups, though probably very different in implementation.
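For concreteness, the two mount variants look like this (server, export, and mountpoint names are made up, adjust to your setup):

```shell
# Hypothetical server/export/mountpoint names -- adjust to your setup.
mount -t nfs4 -o sync server:/mnt/tank/share /mnt/share   # push every write to the server immediately
mount -t nfs4 -o async server:/mnt/tank/share /mnt/share  # client may cache and batch writes
```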
The part I am much less sure about:
From what I can gather, it seems that if I use the "sync" mount option on the NFS client, all writes will be synchronous, regardless of what the application running on the client machine wanted. This in turn affects how ZFS handles these writes, forcing it to immediately write everything to the ZIL before continuing with "regular" operation.
Assuming all of the above is correct (if not, please help me set the record straight), is there a way to have NFSv4 not cache writes locally on the client, but also not force ZFS to immediately flush all writes to disk?
Basically I want writes to be immediately flushed to the server and acknowledged to the client as complete, but still allow the server to decide when it wants to commit those to disk.
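To make the question concrete, I imagine the answer (if one exists) looking something like the combination below - but I don't know whether this actually behaves as described, which is why I'm asking (dataset and mount names are hypothetical):

```shell
# On the TrueNAS server: let ZFS batch all writes into transaction groups,
# even ones the client marks synchronous. (Hypothetical dataset name.)
zfs set sync=disabled tank/share

# On the client VM: don't cache writes locally; send each one to the
# server immediately. (Hypothetical server/export/mountpoint names.)
mount -t nfs4 -o sync server:/mnt/tank/share /mnt/share
```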
Now you might say "well that seems dangerous, why would you want that? Just do everything sync" and certainly you might have a point, but if I am going to allow asynchronous writing for performance reasons, I'd rather have writes cached and batched in a single location, not two.

As far as data loss concerns go, the solitary NFS client in this case is a VM running on the TrueNAS host that is exporting the NFS share. They are connected over a virtual network bridge. If the server goes down, there won't be a client left running with a different idea of what the data should be anyway. This is also all protected from power loss with a UPS, and there are hourly, daily, and monthly snapshots, with the latter two being replicated to a separate TrueNAS machine daily. In the case of unexpected shutdown or crash, I would likely lose no more than an hour, or up to a day in a really bad situation, which is more than adequate for my needs.