Migrating pool to SSD

Patrick_3000

Contributor
Joined
Apr 28, 2021
Messages
167
I'm wondering if someone can give me advice on migrating from a hard drive pool to an SSD (M.2 NVMe) pool.

I have a TrueNAS SCALE installation with a pool that's a three-way mirror on three 10TB HDDs (currently containing around 4.75 TB of data). I have a server motherboard: an ASRock Rack X570D4U-2L2T, with a Ryzen Pro 4350G CPU and 32GB of ECC memory.

My network is 10-gigabit Ethernet, but since it's a hard drive pool, transfers max out around 1.75 Gbps when writing to the pool and 2.75 Gbps when reading from it.

I'm considering migrating to an all-SSD pool. In particular, I'm looking at five 4TB NVMe SSDs in a RAIDZ2 configuration, which would give me 12TB of usable storage. SSDs have dropped so much in price that plenty of cheap, consumer-grade 4TB SSDs sell for around $200, so the upgrade would cost around $1,000. Moreover, my motherboard supports bifurcation, so I'll be able to put four of the SSDs on a PCIe carrier card in an x16 slot bifurcated to x4/x4/x4/x4, and the remaining SSD in an open slot I have on the motherboard.
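For concreteness, (5 - 2) x 4TB = 12TB is where the usable-space figure comes from. If I were building it from the CLI rather than the TrueNAS UI, I imagine it would look roughly like this (the pool name and device names are just placeholders):

  # Hypothetical 5-wide RAIDZ2; "flash" and the device names are placeholders
  # Usable space: (5 - 2) x 4 TB = 12 TB, before metadata/padding overhead
  zpool create flash raidz2 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1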

I've tested SSD-to-SSD transfers on my network between other devices (not TrueNAS) and am getting close to 10 Gbps even with cheap NVMe SSDs, so I'm guessing I'll see something like that transfer rate once I set up the pool.
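(For anyone wanting to reproduce this, the raw link speed can also be sanity-checked independently of the disks with iperf3, assuming it's installed on both machines; the IP below is a placeholder:)

  # On one machine, start a listener
  iperf3 -s
  # On the other, run a 30-second test against it (placeholder IP)
  iperf3 -c 192.168.1.100 -t 30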

Does anyone have any thoughts or suggestions? In particular, has anyone migrated to SSD recently? Does RAIDZ2 sound like a good configuration?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
For high-speed transfers, having lots of RAM is not optional... especially on SCALE. I'd double your current amount.

Also, read the following resource.

Regarding layout performance, please read the following resource.

Make sure to buy a proper expansion card and not a port multiplier.

You can use zfs send | zfs receive. Search for related threads.
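A minimal sketch of that replication, assuming your old pool is called tank and the new one flash (the pool names and snapshot label are placeholders):

  # Recursive snapshot of the source pool (all names are placeholders)
  zfs snapshot -r tank@migrate
  # Send the full tree and receive it into the new pool
  zfs send -R tank@migrate | zfs recv -F flash

Run it from a shell on the NAS itself so the data never crosses the network.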
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Also consider the following: depending on what SSD you are thinking of, performance will drop to near-HDD speeds after you have used up the DRAM cache, or, for drives that don't have DRAM, the SLC (or equivalent) cache.
Also, many consumer drives use host system memory as their cache (HMB, because it's cheaper), but I believe there is no support for this on Linux or FreeBSD.
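A sustained sequential write with fio will show this cliff if it exists; make the file bigger than any plausible cache (the path and size below are placeholders):

  # Long sequential write with direct I/O, large enough to outrun the SLC cache
  fio --name=sustained --filename=/mnt/flash/testfile --size=100G \
      --rw=write --bs=1M --ioengine=libaio --direct=1

Watch the live bandwidth figure; if it falls off sharply partway through, that's the cache running out.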

So be very careful. And if you are thinking of Samsung QVO drives (of which I have a slightly irrational dislike), then think again.
 

Patrick_3000

Contributor
Joined
Apr 28, 2021
Messages
167
Thanks for the suggestions. It sounds like I'll need to upgrade my memory from 32GB to 64GB. I will say that with my current configuration, 32GB is more than enough, as I don't run any apps or VMs and SCALE always shows something like 16GB free at a minimum; but with the faster speed of SSD, perhaps memory usage will go up. In any event, RAM, like SSD, has fallen in price, so the upgrade won't be particularly expensive.
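(In case it's useful to anyone else: the dashboard's "free" figure doesn't show the whole picture, since ZFS keeps its own cache. The ARC numbers can be read directly from a shell on the SCALE box:)

  # Current ARC size and ceiling
  arc_summary | head -n 25
  # Or the raw kstat counters
  grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats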
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
SCALE only ever uses 50% of RAM for the ARC, unfortunately - it's a feature of ZFS on Linux.
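You can see (and, at your own risk, raise) that cap through the zfs_arc_max module parameter; 0 means the ZFS-on-Linux default of half of RAM. The 48 GiB figure below is just an example, and TrueNAS may reset it on reboot or update:

  # Show the current ARC cap in bytes (0 = default, i.e. 50% of RAM)
  cat /sys/module/zfs/parameters/zfs_arc_max
  # Example: raise it to 48 GiB (48 * 1024^3 bytes) until next reboot
  echo 51539607552 > /sys/module/zfs/parameters/zfs_arc_max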
 
