Adding special vdev to existing pool sanity check

Ricko1

Dabbler
Joined
Jan 29, 2017
Messages
12
Hello everyone,

Currently I'm planning to change my data storage for better security, faster accessibility, and to remove unnecessary duplication on one device (plus clean up a decade of junk data).

My goal is to have automatic sync to my PC and laptop, plus access to the NAS for my mobile devices. Critical data would then have at least 3 copies (PC, laptop and NAS).

Last year I upgraded my network, so I can access my NAS at 2.5Gbit from my PC to an Intel X520 10Gb NIC in the NAS. Services that don't rely on local storage are now on my new Proxmox server (Intel 1220P NUC), which backs up to the NAS.

I would like to increase the speed of small files and the IOPS in general use. The Blackmagic Design speed test shows only 43MB/s on write.
The random IO is a bit annoying (e.g. loading the snapshot list takes ~30 seconds; I just cleaned up a terabyte of old snapshots).

Relevant hardware
Supermicro X11SCH-F
Intel Core i3-9100F (SRF7W)
2x Crucial CT16G4WFD8266 16GB ECC 2666MHz (32GB total)
Crucial BX500 120GB boot SSD
6x WD Red Plus, 4TB (64MB cache) in RAIDZ2, 14.4 TiB available main array
APC Back-UPS BX1400U-GR (battery replaced last year)
Intel X520 10Gb NIC (added 2023)

The pool is currently 51% full.
I have 2 free M.2 slots on my motherboard


Questions
  1. If I understand correctly, I need to copy the dataset(s) to another pool (e.g. my external 6TB WD Elements backup), delete the dataset, and copy it back to rewrite the metadata onto the special vdev. I can do this one dataset at a time, right?
  2. Would a 1TB mirror be enough if I replace the six 4TB drives with 16TB drives when my storage needs increase? 1% of 64TB is 640GB, so it should be fine? If I go to 128TB it would need to be bigger.
  3. Power loss protection is recommended for SSDs with DRAM. What about DRAM-less SSDs?
    1. I'm looking at two WD Blue SN580 1TB drives. 600TBW should be plenty for metadata and very small files, right?
    2. Another option is the Samsung 980 1TB, which has 500,000 IOPS instead of 600,000 IOPS (PCIe version doesn't matter since my board is 3.0). Am I looking at the right class of SSD? Both are in the €65 to €70 range right now.
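For reference, the round trip in question 1 could look something like this sketch. The pool and dataset names ("tank/photos", "backup") are placeholders, not my actual layout:

```shell
# Snapshot the dataset and send it to the backup pool (placeholder names).
zfs snapshot tank/photos@migrate
zfs send tank/photos@migrate | zfs recv backup/photos
# Destroy the original, then send it back; the rewrite places
# metadata (and small blocks, if configured) on the special vdev.
zfs destroy -r tank/photos
zfs send backup/photos@migrate | zfs recv tank/photos
```

Repeating this per dataset would let the main pool stay online throughout, at the cost of one full copy per dataset.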
I ran a speed test of the current setup:

root@truenas[/mnt/tank/temp]# fio --randrepeat=1 --direct=1 --gtod_reduce=1 --name=test --filename=test1 --bs=4k --iodepth=64 --size=4G --readwrite=randwrite --ramp_time=4
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=64
fio-3.28
Starting 1 process
test: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [w(1)][90.9%][w=160MiB/s][w=41.0k IOPS][eta 00m:07s]
test: (groupid=0, jobs=1): err= 0: pid=98326: Sat Feb 24 08:38:36 2024
  write: IOPS=14.7k, BW=57.3MiB/s (60.0MB/s)(3780MiB/66005msec); 0 zone resets
  bw ( KiB/s): min=10435, max=210300, per=98.77%, avg=57913.46, stdev=38532.37, samples=131
  iops : min= 2608, max=52575, avg=14478.13, stdev=9633.09, samples=131
  cpu : usr=2.24%, sys=39.01%, ctx=311565, majf=0, minf=1
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  issued rwts: total=0,967567,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=57.3MiB/s (60.0MB/s), 57.3MiB/s-57.3MiB/s (60.0MB/s-60.0MB/s), io=3780MiB (3963MB), run=66005-66005msec

Thank you in advance for your time
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
If I understand correctly, I need to copy the dataset(s) to another pool (e.g. my external 6TB WD Elements backup), delete the dataset, and copy it back to rewrite the metadata onto the special vdev. I can do this one dataset at a time, right?
Metadata would be moved on the next write, so yes, copying each dataset off and back will migrate it, one dataset at a time.
Would a 1TB mirror be enough if I replace the six 4TB drives with 16TB drives when my storage needs increase?
I haven't really looked into this myself, but that sounds sane.
Power loss protection is recommended for SSDs with DRAM. What about DRAM-less SSDs?
Keep in mind that once metadata is on a special vdev, that special vdev becomes a critical part of the pool, and losing it means losing the pool. This means that losing any write to it can mean losing the pool, or at least leaving it damaged. As such, you generally want this vdev to be as bulletproof as you can manage.
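On the question of very small files: metadata goes to the special vdev automatically, but small file blocks only do if the `special_small_blocks` dataset property is set. A sketch, with "tank" as a placeholder pool name and 32K as an example threshold, not a recommendation:

```shell
# Send blocks of 32K or smaller to the special vdev (example threshold).
zfs set special_small_blocks=32K tank
# Confirm the setting.
zfs get special_small_blocks tank
```

Note this also only affects blocks written after the property is set, and a larger threshold fills the special vdev faster, so size it with this in mind.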

Something you can do, however, is have an L2ARC of metadata only; this can improve read speeds, with the advantage that, being L2ARC, it's just a copy of the metadata rather than the original, so losing the device doesn't endanger the pool. This requires a manual setting, I believe.
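The manual setting mentioned is the `secondarycache` property; a minimal sketch, again with placeholder pool and device names:

```shell
# Add a cache (L2ARC) device -- safe to lose, it's only a copy.
zpool add tank cache nvme0n1
# Restrict the L2ARC to caching metadata only.
zfs set secondarycache=metadata tank
```

Unlike a special vdev, this needs no redundancy and no data migration, which makes it the lower-risk option of the two.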
 