Sprint (Explorer) · Joined Mar 30, 2019 · Messages: 72
So I know most people's attitude to deduplication is "don't bother". I know the risks, I have the horsepower, and I wanted to try it, so I am... I'm using it for iSCSI storage for my Proxmox servers (I have two nodes).
The pool in question is 6x 1TB 860 Evo SSDs in two RAIDZ1 vdevs (I considered mirrors, but a single vdev was already able to saturate my 10Gb links, so I decided I wanted the extra capacity).
I'm also running 2x 280GB 900p Optane drives, but I didn't want to dedicate these entirely to DDTs, as I also wanted a SLOG for each pool. Again, I know people are going to say "you shouldn't use the same drive for more than one purpose", but these Optane drives have more than enough IOPS and throughput to handle it; until recently I had a single Optane hosting 3 SLOG partitions and 200GB of L2ARC, and it worked superbly! (Plus I haven't run out of PCIe lanes.)
So I partitioned each Optane drive identically: two 15GB partitions and one 30GB partition.
The plan was to assign 15GB (mirrored) to my main spinning-rust pool, and another 15GB (mirrored) to my SSD array. (That's all working great.)
The two 30GB partitions were for my DDT.
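(For anyone wondering about the 15GB size: a SLOG only ever holds a couple of transaction groups' worth of in-flight sync writes, so it can be sized from the link speed. This is a back-of-envelope sketch using common-guidance numbers, not anything measured on my system:)

```python
# Rough SLOG sizing check (rule of thumb: the SLOG only needs to hold a
# couple of transaction groups' worth of in-flight sync writes).
LINK_GBIT = 10            # 10GbE link
TXG_SECONDS = 5           # default ZFS txg flush interval (assumption)
TXGS_IN_FLIGHT = 2        # generous allowance

max_ingest_bytes_per_s = LINK_GBIT / 8 * 1e9        # ~1.25 GB/s
slog_needed_gb = max_ingest_bytes_per_s * TXG_SECONDS * TXGS_IN_FLIGHT / 1e9
print(f"worst case ~{slog_needed_gb:.1f} GB")       # so a 15G partition is ample
```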
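As a sanity check on the 30GB figure: the commonly quoted rule of thumb is roughly 320 bytes of DDT per unique block, which at a 64K average block size works out to about 5GB per TB of unique data. Both numbers are rough assumptions rather than anything from my pool, and zvols with a smaller volblocksize need several times more:

```python
# Rule-of-thumb DDT sizing (approximations, not measured on this pool):
BYTES_PER_DDT_ENTRY = 320          # often-quoted per-unique-block cost
AVG_BLOCK_SIZE = 64 * 1024         # assumed 64K average block size

def ddt_gb_per_tb(avg_block=AVG_BLOCK_SIZE):
    """Estimated GB of DDT per TB of unique data."""
    unique_blocks = 1e12 / avg_block                    # blocks in 1 TB
    return unique_blocks * BYTES_PER_DDT_ENTRY / 1e9

print(f"~{ddt_gb_per_tb():.1f} GB of DDT per TB at 64K blocks")
print(f"~{ddt_gb_per_tb(16 * 1024):.1f} GB per TB at 16K (a common zvol volblocksize)")
```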
Here are the commands I used to create the partitions, for reference:
# Optane drive 0 partitions
gpart create -s gpt nvd0
# Create SLOGs
gpart add -t freebsd-zfs -s 15G nvd0
gpart add -t freebsd-zfs -s 15G nvd0
# Create DDT partition
gpart add -t freebsd-zfs -s 30G nvd0

# Optane drive 1 partitions
gpart create -s gpt nvd1
# Create SLOGs
gpart add -t freebsd-zfs -s 15G nvd1
gpart add -t freebsd-zfs -s 15G nvd1
# Create DDT partition
gpart add -t freebsd-zfs -s 30G nvd1
...and the commands I used to add the partitions to the pools:
zpool add Primary_Array log nvd0p1 nvd1p1
zpool add SSD_Array log nvd0p2 nvd1p2
zpool add SSD_Array special mirror nvd0p3 nvd1p3
(L2ARC is now on its own NVMe drive.)
Anyway, it's working superbly, speeds are great, and I'm seeing a 1.3x dedup ratio with only a handful of VMs loaded so far. But looking at the capacity, I think I should have made the DDT partitions larger than 30GB: I'm using 9GB and the pool is only 7% full (see below, under the "special" vdev in "SSD_Array").
----------------------------------------------  -----  -----  -----  -----  -----  -----
                                                   capacity      operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
Primary_Array                                   91.1T  25.2T      0      0      0      0
  raidz2                                        57.6T   596G      0      0      0      0
    gptid/ae89d119-5e38-11eb-a3ef-000c291d8b0c      -      -      0      0      0      0
    gptid/af47c6f6-5e38-11eb-a3ef-000c291d8b0c      -      -      0      0      0      0
    gptid/b0210bb3-5e38-11eb-a3ef-000c291d8b0c      -      -      0      0      0      0
    gptid/b0092041-5e38-11eb-a3ef-000c291d8b0c      -      -      0      0      0      0
    gptid/b05786d4-5e38-11eb-a3ef-000c291d8b0c      -      -      0      0      0      0
    gptid/b0324733-5e38-11eb-a3ef-000c291d8b0c      -      -      0      0      0      0
    gptid/b0c01156-5e38-11eb-a3ef-000c291d8b0c      -      -      0      0      0      0
    gptid/b0e656cc-5e38-11eb-a3ef-000c291d8b0c      -      -      0      0      0      0
  raidz2                                        33.5T  24.6T      0      0      0      0
    gptid/0819d83e-5165-11ec-84c2-000c29f20725      -      -      0      0      0      0
    gptid/08566e8c-5165-11ec-84c2-000c29f20725      -      -      0      0      0      0
    gptid/087bdb7d-5165-11ec-84c2-000c29f20725      -      -      0      0      0      0
    gptid/08c528f0-5165-11ec-84c2-000c29f20725      -      -      0      0      0      0
    gptid/09097e7a-5165-11ec-84c2-000c29f20725      -      -      0      0      0      0
    gptid/096a562f-5165-11ec-84c2-000c29f20725      -      -      0      0      0      0
    gptid/0978ab4b-5165-11ec-84c2-000c29f20725      -      -      0      0      0      0
    gptid/09ccbd73-5165-11ec-84c2-000c29f20725      -      -      0      0      0      0
  logs                                              -      -      -      -      -      -
    nvd0p1                                        40K  14.5G      0      0      0      0
    nvd1p1                                       128K  14.5G      0      0      0      0
  cache                                             -      -      -      -      -      -
    nvd2p1                                       200G  92.3M      0      0      0      0
----------------------------------------------  -----  -----  -----  -----  -----  -----
SSD_Array                                        228G  5.21T      0      2      0   160K
  raidz1                                         109G  2.60T      0      0      0      0
    gptid/653f3d1e-5c72-11ec-b267-ac1f6b781c6e      -      -      0      0      0      0
    gptid/65594328-5c72-11ec-b267-ac1f6b781c6e      -      -      0      0      0      0
    gptid/65865955-5c72-11ec-b267-ac1f6b781c6e      -      -      0      0      0      0
  raidz1                                         109G  2.60T      0      0      0      0
    gptid/637fef3b-5c72-11ec-b267-ac1f6b781c6e      -      -      0      0      0      0
    gptid/65485f99-5c72-11ec-b267-ac1f6b781c6e      -      -      0      0      0      0
    gptid/65b01095-5c72-11ec-b267-ac1f6b781c6e      -      -      0      0      0      0
  special                                           -      -      -      -      -      -
    mirror                                      9.79G  19.7G      0      0      0      0
      nvd0p3                                        -      -      0      0      0      0
      nvd1p3                                        -      -      0      0      0      0
  logs                                              -      -      -      -      -      -
    nvd0p2                                      1.02M  14.5G      0      1      0  97.0K
    nvd1p2                                      1.02M  14.5G      0      0      0  63.4K
----------------------------------------------  -----  -----  -----  -----  -----  -----
boot-pool                                       1.19G  94.3G      0      0      0      0
  mirror                                        1.19G  94.3G      0      0      0      0
    ada0p2                                          -      -      0      0      0      0
    ada1p2                                          -      -      0      0      0      0
----------------------------------------------  -----  -----  -----  -----  -----  -----
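Doing a naive linear extrapolation from those numbers is what worries me (this assumes the special vdev usage grows roughly in line with pool fill, which is only an approximation):

```python
# Numbers from the zpool iostat output above (SSD_Array special vdev)
special_alloc_gb = 9.79       # alloc on the special mirror
pool_fill = 0.07              # pool is ~7% full

# Naive linear projection to a full pool (approximate: assumes the DDT
# and metadata grow proportionally with allocated data)
projected_gb = special_alloc_gb / pool_fill
print(f"projected special usage at 100% full: ~{projected_gb:.0f} GB")
```

...which is well beyond the 30GB partitions.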
So my question is simple: can I extend the partitions in situ (they are followed by unallocated space), with the pool registering the extra space once both are done (a bit like when I swapped all the drives in a pool for bigger ones)...
or
do I remove one partition from the pool at a time, delete it, recreate it larger, re-add it, let it resilver, then repeat...
or
do I need to back all the data up, destroy the pool, delete the partitions, and rebuild the pool before restoring the VMs?
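For reference, here's roughly what I imagine the first two options would look like. These are completely untested sketches; the partition index (3), the pool/device names, and the 60G size are just placeholders based on my layout above:

```shell
# Option 1 (untested): grow the partitions in place, then tell ZFS to
# expand into the new space. Assumes the DDT partition is index 3 on
# each Optane drive and is followed by free space.
gpart resize -i 3 nvd0
gpart resize -i 3 nvd1
zpool online -e SSD_Array nvd0p3
zpool online -e SSD_Array nvd1p3

# Option 2 (untested): rebuild one side of the special mirror at a time.
zpool detach SSD_Array nvd0p3
gpart delete -i 3 nvd0
gpart add -t freebsd-zfs -s 60G nvd0    # 60G is just an example size
zpool attach SSD_Array nvd1p3 nvd0p3
zpool status SSD_Array                  # wait for the resilver, then repeat for nvd1
```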
I didn't want to break anything, or go down the third route if there was a smarter way to do it.
Thanks in advance
2x Xeon E5-2630 v4
256GB DDR4
16x 8TB (8x WD Reds in one vdev, 8x WD Golds in another)
6x 1TB Samsung 860 Evo SSDs in twin 3-drive RAIDZ1 vdevs
Intel X520 10Gb NIC
PCIe x16 ASRock 4x M.2 carrier card (board bifurcated 4x4x4x4)
200GB Crucial M.2 L2ARC
2x 280GB Optane 900p connected via M.2
3x 9207-8i HBAs
2x 120GB boot SSDs