Changing volblocksize

Dunuin
Contributor
Joined: Mar 7, 2013
Messages: 110
Hi,

One of the 3 SSDs in my raidz1 pool failed, so I thought it would be a good idea to destroy the pool and redo it with 5 SSDs as raidz1. I've got 3 VMs with 2 zvols each. These use a 16K volblocksize because that was optimal for a 3-disk raidz1, but the new pool needs a 32K volblocksize.

I know from ProxmoxVE that the volblocksize of a zvol can only be set at creation and can't be changed later. Also, my old zvols were unencrypted (because they were stored on a GELI-encrypted legacy pool), but the new zvols should use ZFS native encryption like the dataset they are children of.
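For clarity, creating one of the new zvols looks roughly like this (pool and dataset names are just examples): the volblocksize has to be passed at creation time, and a zvol created below an encrypted dataset inherits the parent's native encryption automatically.

zfs create -V 32G -o volblocksize=32K newpool/encrypted/vm-100-disk-0   # volblocksize is fixed from here on
zfs get volblocksize,encryption newpool/encrypted/vm-100-disk-0   # encryption is inherited from the parent dataset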

So how do I do that?

My first idea was to use "zfs send | zfs recv" so a new zvol would be created, but this doesn't allow changing the volblocksize via arguments, and it complains about the ZFS native encryption (I think it doesn't like copying datasets/zvols from an unencrypted to an encrypted dataset).
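The attempt was roughly of this form (example names again; the exact options I tried may have differed, but recv wouldn't take the volblocksize override and complained about the encryption):

zfs snapshot oldpool/vm-100-disk-0@migrate
zfs send oldpool/vm-100-disk-0@migrate | zfs recv -o volblocksize=32K newpool/encrypted/vm-100-disk-0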

Then I tried it with dd. I created new zvols with the same sizes and options, but with native encryption enabled and the new volblocksize. Then I used "dd bs=1M if=/dev/zvol/oldzvol of=/dev/zvol/newzvol" to copy the contents at block level from the old zvols to the new zvols. For all zvols, dd wrote the same number of blocks it had read. Then I changed my VMs to use the new zvols instead. For one of my 3 VMs this worked and the VM booted into Debian. But the other 2 VMs (OPNsense and another Debian) won't start anymore. The Debian one stops at the initramfs because it can't find a root partition with the correct UUID anymore.
Maybe my 1M blocksize was wrong? What blocksize would be correct? The VMs are using virtio with 512B and 4K blocksize. The old zvols use a volblocksize of 16K or 64K; the new zvols all use a 32K volblocksize.
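Per zvol, the steps were roughly this (example names and size; the size was taken from the old zvol):

zfs get volsize oldpool/vm-100-disk-1   # exact size of the old zvol
zfs create -V 20G -o volblocksize=32K newpool/encrypted/vm-100-disk-1   # same size, new volblocksize, encryption inherited
dd bs=1M if=/dev/zvol/oldpool/vm-100-disk-1 of=/dev/zvol/newpool/encrypted/vm-100-disk-1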

So what's the correct way to do this?
 