Suggested setup for MS SQL storage

Status
Not open for further replies.

MilesB

Dabbler
Joined
Nov 26, 2014
Messages
16
Thanks, appreciate it.
From what I've read, the SLOG is optional: the ZIL either resides on a dedicated SLOG device or in the main pool. For async writes the ZIL isn't involved at all; writes are simply buffered in system memory until the next transaction group commits. The only way I can see myself using a SLOG device is if I can lay my hands on an NVRAM drive cheaply.
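For anyone tuning this later: sync behaviour is a per-dataset property and the SLOG is added at the pool level. A minimal sketch, assuming a pool named `tank` and placeholder device names (`ada3`, `ada4`):

```shell
# Control how the ZIL is used for a dataset:
#   standard - honour the application's sync requests (default)
#   always   - force every write through the ZIL
#   disabled - ack sync writes immediately (data-loss risk on power failure)
zfs set sync=standard tank/db

# Dedicate a fast device as SLOG so sync writes bypass the in-pool ZIL
zpool add tank log ada3

# Or mirror the SLOG to avoid a single point of failure
zpool add tank log mirror ada3 ada4
```

Note that a SLOG only helps sync writes; with `sync=disabled` (or purely async workloads) it is never touched.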
 

MilesB

Dabbler
Joined
Nov 26, 2014
Messages
16
After having read numerous blog posts on using COW file systems with databases, I am moving away from ZFS. Unfortunately FreeNAS no longer supports any other file systems, which is a real shame for me, as I will have to look for another product that supports non-COW file systems. I was really looking forward to using FreeNAS for its excellent management interface.

For anyone who reads this post later while in my position, I recommend you investigate experiences with Oracle on ZFS. People are noticing phenomenal slowdowns even on volumes at less than 50% utilization. This is caused by the nature of COW: every update moves blocks around the disk, so you lose the ability to do a fast table scan via sequential reads. There are also problems with increased creation of gang blocks as the disk fills. IMHO this problem faces all COW file systems (especially for databases) and the products based on them: FreeNAS and TrueNAS, as well as btrfs and Microsoft's ReFS. I would really like to see a hybrid of the two approaches: overwrite-in-place with a ZIL, plus the excellent block checksumming and caching capabilities of ZFS. There are some excellent articles written on issues like keeping the ZIL in the pool, fragmentation slowdown versus disk fullness and time, and comparisons of Oracle updates on ext3 and ZFS.
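For future readers who want to see whether this fragmentation slowdown is creeping up on their own pool: modern ZFS exposes a free-space fragmentation metric, and record size can be matched to the database block size to reduce read-modify-write churn. A sketch, with `tank` and `tank/db` as placeholder names:

```shell
# FRAG reports fragmentation of the pool's free space, which is
# what drives the COW slowdown as the pool fills over time
zpool list -o name,size,capacity,fragmentation tank

# Matching recordsize to the database block size (e.g. 8K for
# PostgreSQL) keeps each COW rewrite to a single record
zfs set recordsize=8K tank/db
```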

Thank you to those who contributed constructively to my questions. The mention of COW slowdown (which I had already done a *little* reading on) is what helped me avert a major problem a couple of months down the line. No thanks at all to the original "contributor" in this thread, zambanini.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,545
Glad we could help. It always pays to do research before putting things into production.

If you have some time, post links to the articles that convinced you CoW wasn't appropriate for your use case (for the benefit of that guy two years from now who'll be reading this).
 

zmi

Cadet
Joined
Jul 8, 2015
Messages
9
Interesting read. The second link, from Delphix, was interesting. Wouldn't using a SLOG have helped resolve part of the problem? A SLOG should group I/Os together, reducing fragmentation. And what's totally missing from the whole discussion is the fact that most storage systems are not used for one single application or server. We have lots of servers doing a mix of everything: databases, file serving, NFS, FTP, mail, www, ... And of course with virtualisation you use snapshots. All of that together leads to fragmentation, and is the reason for large over-provisioning: performance suffers after some time, as the storage fragments. That's why NetApp storage has a defrag tool built in, for example. Using an L2ARC is one way to help here.
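To follow up on the L2ARC suggestion: adding a cache device is a one-liner, and you can watch whether it actually absorbs the random reads that fragmentation would otherwise turn into seeks. A sketch with placeholder pool and device names (`tank`, `ada5`):

```shell
# Add an SSD as L2ARC read cache; unlike a SLOG, losing this
# device is harmless (it only ever holds copies of pool data)
zpool add tank cache ada5

# Per-vdev I/O statistics every 5 seconds, including the cache device,
# to see how much read traffic the L2ARC is soaking up
zpool iostat -v tank 5
```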
 