Output of zpool status, please.
Thank you for your comment! Like this?
This is a pool consisting of two single disk vdevs, so you cannot remove a disk without destroying the pool.
P.S. Next time please enter the command zpool status as asked and copy & paste the resulting text output into a "code" block. Thanks!
Code:
root@freenas-pmh[/mnt/hdd]# zpool remove testpool /mnt/hdd/disk1
root@freenas-pmh[/mnt/hdd]# zpool status testpool
pool: testpool
state: ONLINE
remove: Removal of vdev 0 copied 37K in 0h0m, completed on Fri Oct 2 14:05:08 2020
120 memory used for removed device mappings
Code:
root@truenas[~]# truncate -s 1T zfs-sparse-0
root@truenas[~]# truncate -s 1T zfs-sparse-1
root@truenas[~]# zpool create BadIdeaMan /root/zfs-sparse-0 /root/zfs-sparse-1
root@truenas[~]# zpool status
pool: BadIdeaMan
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
BadIdeaMan ONLINE 0 0 0
/root/zfs-sparse-0 ONLINE 0 0 0
/root/zfs-sparse-1 ONLINE 0 0 0
root@truenas[~]# zpool remove BadIdeaMan /root/zfs-sparse-1
root@truenas[~]# zpool status
pool: BadIdeaMan
state: ONLINE
remove: Removal of vdev 1 copied 63K in 0h0m, completed on Fri Oct 2 08:09:35 2020
216 memory used for removed device mappings
config:
NAME STATE READ WRITE CKSUM
BadIdeaMan ONLINE 0 0 0
/root/zfs-sparse-0 ONLINE 0 0 0
errors: No known data errors
root@truenas[~]# zpool attach BadIdeaMan /root/zfs-sparse-0 /root/zfs-sparse-1
root@truenas[~]# zpool status
pool: BadIdeaMan
state: ONLINE
scan: resilvered 278K in 00:00:00 with 0 errors on Fri Oct 2 08:10:17 2020
remove: Removal of vdev 1 copied 63K in 0h0m, completed on Fri Oct 2 08:09:35 2020
216 memory used for removed device mappings
config:
NAME STATE READ WRITE CKSUM
BadIdeaMan ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
/root/zfs-sparse-0 ONLINE 0 0 0
/root/zfs-sparse-1 ONLINE 0 0 0
errors: No known data errors
Almost. Why are you posting a picture? Copy and paste the text. Like this:
Code:
root@freenas-pmh[~]# zpool status ssd
pool: ssd
state: ONLINE
scan: scrub repaired 0B in 00:36:44 with 0 errors on Sat Sep 26 17:16:47 2020
config:
NAME STATE READ WRITE CKSUM
ssd ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
gptid/8d299abe-e22e-11ea-9ee7-ac1f6b76641c ONLINE 0 0 0
gptid/0c661dcc-e247-11ea-b73e-ac1f6b76641c ONLINE 0 0 0
errors: No known data errors
But back to your question: what are you trying to achieve? Of course you can create a pool of more than one disk, but you have to consider the level of redundancy and the performance constraints first. A pool consists of vdevs, and in general you cannot remove a vdev afterwards, because that would require ZFS to move data around. ZFS does not needlessly rewrite data that is already on stable storage: what is written in a certain place stays there unless it is changed in some way, and then the changed data is written to a new place (copy-on-write) and the old space is freed.
A vdev consists of one or more disks that can be configured as a single disk, a mirror, or one of the various RAIDZn levels ...
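For illustration, a minimal sketch of what those layouts look like at creation time; the pool name tank and the device names da0 through da3 are placeholders, and the syntax is that of zpool(8):
Code:
# a pool made of one single-disk vdev (no redundancy)
zpool create tank da0
# a pool made of one mirror vdev (two-way redundancy)
zpool create tank mirror da0 da1
# a pool made of one RAIDZ2 vdev (any two of the four disks may fail)
zpool create tank raidz2 da0 da1 da2 da3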
But you always need to plan ahead. Adding vdevs is easy, but adding and removing disks at will is not a usage scenario that ZFS is designed for.
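"Adding vdevs is easy" would look roughly like this, again only as a sketch with placeholder names: the pool grows by another top-level vdev and stripes new writes across all of them, while taking a vdev back out again falls under the removal restrictions described above.
Code:
# extend the existing pool by a second mirror vdev
zpool add tank mirror da2 da3
# from now on data is striped across both mirror vdevs; undoing this
# requires zpool remove, which has to copy the vdev's data elsewhere
# and is not available for raidz top-level vdevs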
You might want to read this:
27. ZFS Primer — FreeNAS® 11.3-U5 User Guide (www.ixsystems.com)
HTH,
Patrick
Apparently I need to devote more time to details...
This is a pool consisting of two single disk vdevs, so you cannot remove a disk without destroying the pool.
Yes, you can. man zpool and take a look at the remove command. Not available through the GUI AFAIK, but it can be done.
@Tasmana you say VM - is this block storage for a hypervisor, via iSCSI or NFS? If so, read the path to success with block storage sticky. Among other pearls of wisdom, it explains why you really don’t want to go above roughly 50% full in your pool if using block storage.
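To keep an eye on that, a minimal sketch of how you could watch pool occupancy; tank is a placeholder pool name and the commands and properties are standard zpool(8)/zfs(8):
Code:
# overall pool occupancy and fragmentation
zpool list -o name,size,allocated,free,capacity,fragmentation tank
# per-dataset breakdown including snapshots and reservations
zfs list -o space -r tank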