Drive write cache and UFS/ZFS

Status
Not open for further replies.

ldx00

Cadet
Joined
Aug 9, 2014
Messages
3
All,

Complete noob here, although I've done a lot of reading, so hopefully this isn't too obvious a question!

I've got one of those HP MicroServers, and the "drive write cache" option is turned off by default. I have a UPS, but I was wondering whether FreeNAS has some built-in resistance to the issues that turning drive write caching on might cause, or whether it's only recommended if you have a UPS. I haven't found anything that definitively says it's better on or off, but I have read a few threads where people said turning it on stopped weird hanging behaviour and dramatically improved throughput (both over the network and for internal transfers). In truth I'm not too bothered about raw speed here; I'm just looking to do backups and simple file serving at home, probably mostly via Plex with no transcoding.

Any help appreciated!
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
I wondered this too, and equally failed to find a definitive answer to the question of why the option is there if it is obviously a bad thing.
 

ldx00

Cadet
Joined
Aug 9, 2014
Messages
3
OK, I've had the weekend to look into it a bit more and frankly a lot of the below is guesswork based on my limited knowledge of discs and file systems, but I was hoping some expert would set me straight so I can stop worrying about it!

I'm not sure of the exact mechanism, but I think it works like this: with write caching on, instead of writing data directly to disc, the controller writes it to a cache on board the drive, and when enough data has accumulated, it's written out to the platters in one go, which improves performance. But because that cache is just a bit of RAM, anything sitting in it is lost if you suffer a power failure. With caching disabled, everything is committed to disc directly.

My understanding is that, depending on how your operating system behaves, it can insist that unless a write completes correctly, the file system treats it as never having happened. It probably works along the lines of: write the data, and only once the disc reports that the data has actually been written is the file system updated with the information about that file. In many respects this is probably how most file systems work, but the cache gets in the way of that, because the drive may report that anything committed to its cache is as good as written.

Either way, for the large sequential reads and writes I'm doing, my testing shows the setting makes only a minor difference to performance, and I'm willing to take the hit in favour of better data integrity.
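The "only count the write once the disc confirms it" ordering described above is essentially what `fsync` is for. A minimal Python sketch (the file name and helper are just illustrative examples, not anything FreeNAS-specific):

```python
import os

# A write() call may only land in a volatile cache (the OS page cache,
# and potentially the drive's own on-board write cache). Durability is
# only claimed once fsync() returns.
def durable_write(path, data):
    with open(path, "wb") as f:
        f.write(data)         # may sit in a cache; not yet durable
        f.flush()             # push Python's internal buffer to the OS
        os.fsync(f.fileno())  # ask the OS (and, if the drive honours
                              # it, the disk) to flush to stable media
    # Only after fsync returns is it reasonable to record elsewhere
    # that this file definitely exists with this content.

durable_write("example.bin", b"hello")
```

The caveat, as discussed in this thread, is that if the drive's write cache acknowledges a flush without actually committing the data, even `fsync` can't deliver on that guarantee.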

The UPS and (when I get round to it) ZFS are both meant to help preserve the integrity of my data, so if this write-caching setting puts that in jeopardy, I'd really like to know about it. As mentioned above, in my scenario it (fortunately) doesn't seem to make a great deal of difference, so I'm playing it safe and turning it off. But lots of people seem to have it on and don't appear to be getting a bashing for doing so (unlike with the whole ECC-or-not-ECC RAM debate), so my only real worry is that I'm giving up performance for nothing.

Sorry for mini essay, and as ever, any help appreciated!
 
L

L

Guest
By default, ZFS sends the disks a "flush disk cache" command at regular intervals. On some systems I have seen this paralyse performance while writes are being flushed; some disks completely ignore the request, and ZFS sits and waits for the drives to return a completed message. So the best thing to do with your disks is test.
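Since the advice is "test your disks", here's a rough Python sketch of the kind of test that shows what a cache flush costs: it times a batch of writes with and without forcing a flush (`fsync`) after each one. The file name, block count, and sizes are arbitrary, and absolute numbers will vary wildly by drive and cache setting; it's only meant to show the comparison.

```python
import os
import time

def time_writes(path, sync, count=50, size=64 * 1024):
    """Write `count` blocks of `size` bytes, optionally forcing a
    flush to stable media after each one; return elapsed seconds."""
    buf = b"\0" * size
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(count):
            f.write(buf)
            if sync:
                f.flush()
                os.fsync(f.fileno())  # forces a cache flush per write
    elapsed = time.monotonic() - start
    os.remove(path)
    return elapsed

buffered = time_writes("bench.tmp", sync=False)
synced = time_writes("bench.tmp", sync=True)
print(f"buffered: {buffered:.3f}s  fsync-per-write: {synced:.3f}s")
```

On a drive that honours flush requests, the fsync-per-write run is typically much slower; a drive that acknowledges flushes instantly (or ignores them) will show little difference, which is itself informative.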
 

ldx00

Cadet
Joined
Aug 9, 2014
Messages
3
Thanks for the reply, Linda. I've got WD Reds, the second-gen ones I think. From limited Googling these seem to behave themselves, but I'm still not really convinced this write-cache thing isn't going to mess something up. It appears that cache flushing occurs to ensure all data is committed to the disk proper, but in some respects, the more I read about write caching, the less I like the sound of it! I could be misunderstanding the situation, but it seems there are times when data is held in the cache for a fairly long time (at least long enough to warrant flushing), which can really only be a bad thing. On the other hand, it also means that ZFS is aware of these caches and is, in some respects, trying to work around their limitations.

Still, I've basically convinced myself now that it doesn't seem like a good idea. I suppose the fact that the default setting is OFF on my server should really have been the give-away!

Anyway, thanks to those who pitched in!
 