I am trying to benchmark some CMR drives that will replace some SMR drives.
Benchmark system specs
- TrueNAS SCALE 22.12.3
- ASUS P7P55D-PRO
- Intel Core i7-870
- 16GB DDR3
- 1x Kingston 120GB SSD (boot-pool)
- 3x Seagate Barracuda 8TB (ST8000DM004) (SMR) (raidz-1)
- 3x HGST Ultrastar DC HC510 8TB (HUH728080ALE600) (CMR/PMR) (raidz-1)
- 2x Seagate Enterprise 240G SSD (mirror)
- LSI 9240-8i (PCI-E v2.0)
Do I need more data in the dataset to expose the very low IOPS caused by SMR rewrites during testing? I'm using fio to try to show the difference between the two pools of drives, but I'm getting some confusing results.
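For context on what I mean by "more data": one idea (a rough sketch; the file name, 200G size, and one-hour runtime below are just placeholder values) is to lay the test file down sequentially first, then run sustained random overwrites against it for long enough that the drives can't hide the rewrites in their media cache:

sync && sudo fio --ioengine=libaio --direct=1 --name=prefill --filename=/mnt/[HDD-]Storage/share/testPrefill --size=200G --bs=1M --rw=write --numjobs=1
# placeholder size/runtime; the idea is just to overwrite more data than the SMR media cache can absorb
sync && sudo fio --ioengine=libaio --direct=1 --name=overwrite --filename=/mnt/[HDD-]Storage/share/testPrefill --size=200G --iodepth=16 --bs=4k --rw=randwrite --time_based --runtime=3600 --group_reporting --numjobs=1

Would something along those lines be enough, or does the pool itself need to be mostly full?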
Or do I need to force a resilver to see what all the fuss is about?
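If a resilver really is the better demonstration, my rough plan would be something like the following (device names are placeholders; I'd use the actual vdev members shown by zpool status):

sudo zpool offline Storage /dev/sdX    # take one SMR member offline (placeholder device)
sudo zpool online Storage /dev/sdX     # bring it back; only data written while it was offline gets resilvered
sudo zpool replace Storage /dev/sdX /dev/sdY   # or swap in a wiped/spare disk for a full resilver
sudo zpool status -v Storage           # watch resilver progress and speed

Though I suspect the offline/online delta would be too small to show the SMR penalty, and only a full replace would.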
The confusing part is that the 3x Seagate Barracuda SMR pool (Storage) usually outperforms, or performs similarly to, the HGST CMR pool (HDD-Storage) that I want to replace it with.
20 minutes of 1M random writes (test taken from a YouTube video), run twice:
sync && sudo fio --ioengine=libaio --direct=1 --name=test[hdd] --filename=/mnt/[HDD-]Storage/share/testWrite --size=10G --iodepth=4 --bs=1M --rw=randwrite --group_reporting --time_based --runtime=1200 --numjobs=1
From the 2nd run:

20 minutes of 4k random writes:
sync && sudo fio --ioengine=libaio --direct=1 --name=test[hdd] --filename=/mnt/[HDD-]Storage/share/testWrite2 --size=100G --iodepth=16 --bs=4k --rw=randwrite --group_reporting --time_based --runtime=1200 --numjobs=1

A few days ago, running
sync && sudo fio --ioengine=libaio --direct=1 --name=test --filename=/mnt/[HDD-]Storage/share/testWrite --size=10G --iodepth=4 --bs=4k --rw=randwrite --group_reporting --time_based --runtime=600 --numjobs=1
originally resulted in the Seagates averaging ~20GiB written and ~13GiB on the HGST. Today (after the previous tests with screenshots), I got very different results.
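One thing I can't rule out is that the SMR drives were still destaging the previous test's data in the background between runs, which would make back-to-back runs hard to compare. Next time I'll watch per-disk activity between tests with something like (5-second interval, just as an example):

sudo zpool iostat -v Storage 5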


If anyone could suggest particular tests that would give more concrete results, it would be greatly appreciated.
Please ask if you need more information or the logs of today's fio tests; I will try to edit them in (they made the post very long).
Thanks!