EdvBeratung
Dabbler
Joined: Oct 25, 2014
Messages: 12
Hi FreeNAS gurus, nerds and heroes, :)
I have been reading through tons of threads and documentation, but I am still not 100% clear on whether it is safe to use vfs.zfs.cache_flush_disable, so I hope someone can help me with a few answers.
Quick story:
I am trying to use FreeNAS with a custom-built database application that I don't fully trust, so I am using "sync=always" to make sure the data stays consistent. Unfortunately, the DB stores huge binaries but transacts in 4 kB records, and even with an LSI 9207 (IT firmware) and Samsung 850 Pro SSDs (which are terribly fast) as SLOG, I can't get higher than 5 MB/s in writes. Iozone (with the "-r 4k" option) confirms that. With larger record sizes I get 400 MB/s and more, but unfortunately the DB can't be adjusted and always sends tiny 4 kB records.
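For reference, this is roughly how I set things up and benchmarked (the pool/dataset name "tank/db" and file size are just placeholders, not my real layout):

```shell
# Force synchronous semantics on the dataset so every write hits stable
# storage (dataset name is a placeholder)
zfs set sync=always tank/db
zfs get sync tank/db

# iozone write test approximating the DB's 4 kB pattern:
# -i 0 = write/rewrite test, -o = open the file with O_SYNC,
# -r 4k = record size, -s 1g = file size, -f = test file on the pool
iozone -i 0 -o -r 4k -s 1g -f /mnt/tank/db/testfile
```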
Then I found an option called vfs.zfs.cache_flush_disable; after setting it to "1" under Tunables -> Loader, I'm now getting nearly 50 MB/s write speed, which is about 10 times faster - great.
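As far as I understand, the Tunables -> Loader screen just ends up writing a loader tunable like the line below (it only takes effect after a reboot); you can confirm the live value via sysctl:

```shell
# /boot/loader.conf entry corresponding to the Tunables -> Loader setting
vfs.zfs.cache_flush_disable="1"

# After reboot, verify the tunable actually took effect
sysctl vfs.zfs.cache_flush_disable
```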
Since data consistency is important I want to make sure I understand what I'm doing and therefore I have some questions.
1. What exactly does vfs.zfs.cache_flush_disable affect? The LSI 9207 in IT mode doesn't have any cache and basically works as a passthrough. So does it affect the disks' integrated write caches? Or cache in the PC's RAM? Or both? If it affected PC RAM, then the data would be no more secure than with "sync=disabled", correct?
2. The system has redundant power supplies and a separate UPS for each, so it should never go down because of power issues.
But if the BSD kernel crashes (panic etc.) for whatever reason and there is still something in the on-device write cache of the SLOG drive, will that still be written out even though the kernel is dead, as long as the drive keeps power?
3. I will try to simulate crashes anyway, since I'm curious to see whether replay from the SLOG works. Are there any logs anywhere that would show me that the pool has been recovered using information from the SLOG?
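In case it helps anyone answering: the only inspection I've found so far is zdb's intent-log dump; whether a ZIL replay itself gets logged anywhere is exactly what I don't know (pool name "tank" is a placeholder):

```shell
# Dump intent log (ZIL) entries for each dataset in the pool;
# a second -i / extra -v increases verbosity
zdb -iv tank

# General pool health after the simulated crash and import
zpool status -v tank
```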