vitaprimo
Dabbler
- Joined
- Jun 28, 2018
- Messages
- 27
I've a tiny TrueNAS system exclusively for VMDK storage over NFS. The inflated thin VMDKs don't go beyond 1TB, so it's only a single mirrored VDEV plus a matching flash mirrored VDEV for metadata. The flash pool is relatively massive compared to the spinning pool.
Searching for how to enable special_small_blocks storage on the metadata VDEV instead, I learned that I had another issue to fix first: the record size. I've been reading articles and blog posts for hours and most of them point towards a 16K-ish record size as more appropriate for generic VMs (database storage is elsewhere, but there's a low-traffic Exchange Server--circular logging enabled). I dug around the settings for the pool and the datasets and found that the record size is not greyed out like some of the other settings of a dataset are.
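For reference, these are the properties I think I'd end up setting from the shell (the pool really is named "z"; the dataset name here is made up), on the assumption that special_small_blocks is a per-dataset property that just steers small blocks onto the special VDEV already in the pool:

zfs set special_small_blocks=16K z/vms
zfs set recordsize=16K z/vms

Correct me if I've got that model wrong.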
And sure, I found it can be changed, but I don't know if it takes effect on the fly. I do know that it won't grow the record size by itself: data must be rewritten. I'm not growing anything though, so I think I'm in the clear on that front, but I did find something saying that shrinking it also only happens when the data is rewritten, through something like send/receive. There were also warnings about something-something involving zdb, but it wasn't all that clear how to proceed in the first place, and I lost the page buried in one of the browser windows with way too many tabs.
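The shape of what I half-remember seeing was roughly the below (names made up); I honestly don't know whether the receiving side actually re-splits the existing blocks at the new record size, which is part of what I'm asking:

zfs snapshot z/vms@pre-shrink
zfs send z/vms@pre-shrink | zfs receive z/vms-16k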
Oh, and speaking of zdb: this one command string I found to size up metadata no longer works. I have a feeling it's because my only pool is named "z", as in /mnt/z/someDatasetOrZvol; I now get an error about some cache, or that the thing I'm referencing (z) doesn't exist. I'm not that concerned about that anyway, the record size thing is stealing my attention. :/
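If it matters, what I was trying to run was the block-size histogram variant that gets passed around, something like the line below; I've since read that on TrueNAS you may need to point zdb at the pool cache file explicitly, though I haven't confirmed the exact path on my box:

zdb -Lbbbs -U /data/zfs/zpool.cache z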
Can I safely reduce the record size of an existing dataset? If so, what are the next steps so data blocks are broken apart into smaller ones? I created a new dataset with the desired 16K size, but before moving files--thankfully only about 200GB worth of VMDKs--I thought it might be a good idea to ask first; hopefully I can keep that copy off the network. Isolated, but still. :)
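If copying the files over is indeed what gets the data re-blocked, my plan was to do it locally from the TrueNAS shell with something like this (dataset names made up), rather than dragging it back and forth over NFS:

rsync -a --progress /mnt/z/vms/ /mnt/z/vms-16k/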