Benchmarking CMR vs SMR

rjp421

Cadet
Joined
Jun 16, 2023
Messages
5
I am trying to benchmark some CMR drives that will replace some SMR drives.

benchmark system specs
  • TrueNAS SCALE 22.12.3
  • ASUS P7P55D-PRO
  • Intel Core i7-870
  • 16GB DDR3
  • 1x Kingston 120GB SSD (boot-pool)
  • 3x Seagate Barracuda 8TB (ST8000DM004) (SMR) (raidz-1)
  • 3x HGST Ultrastar DC HC510 8TB (HUH728080ALE600) (CMR/PMR) (raidz-1)
  • 2x Seagate Enterprise 240G SSD (mirror)
  • LSI 9240-8i (PCI-E v2.0)
There are two separate pools, one for the 3x HGST (CMR) and one for the 3x Seagate (SMR), both in RAIDZ1, both with approximately the same data (replicated after setting up the new pool), and both with approx 8.1TB used and 13.7TB free. Excuse the redundant pool names: "Storage" is the Seagates, and "HDD-Storage" is the HGST. Eventually Storage will be removed/renamed.

Do I need more data in the dataset in order to show the super-low IOPS caused by SMR rewrites during testing? Using fio to try to show the difference between the two pools of drives, I am getting some confusing results.

Or do I need to force a resilver to see what all the fuss is about?


The confusing part is that the 3x Seagate Barracuda SMR pool (Storage) usually outperforms, or performs similarly to, the HGST CMR pool (HDD-Storage) I want to replace it with.

20 minutes of 1M random writes (from a YouTube video), run twice:
sync && sudo fio --ioengine=libaio --direct=1 --name=test[hdd] --filename=/mnt/[HDD-]Storage/share/testWrite --size=10G --iodepth=4 --bs=1M --rw=randwrite --group_reporting --time_based --runtime=1200 --numjobs=1
from the 2nd run:
[screenshot of fio results]



20 minutes of 4k random writes:
sync && sudo fio --ioengine=libaio --direct=1 --name=test[hdd] --filename=/mnt/[HDD-]Storage/share/testWrite2 --size=100G --iodepth=16 --bs=4k --rw=randwrite --group_reporting --time_based --runtime=1200 --numjobs=1
[screenshot of fio results]




A few days ago, running
sync && sudo fio --ioengine=libaio --direct=1 --name=test --filename=/mnt/[HDD-]Storage/share/testWrite --size=10G --iodepth=4 --bs=4k --rw=randwrite --group_reporting --time_based --runtime=600 --numjobs=1
originally resulted in the Seagates averaging ~20GiB written and ~13GiB on the HGST.


Today (after the previous tests with screenshots), I got very different results.
[screenshot of fio results]

[screenshot of fio results]



If anyone could suggest particular tests that would give more concrete results, it would be greatly appreciated.

Please ask if you need more information or the logs of today's fio tests; I will try to edit them in (they made the post very long).


thanks!
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
The fundamental issue with SMR and ZFS is that the CMR/buffer area on SMR disks is not big enough to cope with operations like a resilver or a very large write (a 1TB disk might have something like 30-100GB of CMR area).

As soon as the CMR area is full, performance is forced to slow as the data in the CMR area is immediately written out to the SMR area to clear space for new incoming writes (which is painfully slow and detrimental to the incoming writes).

I don't see anything that you stand to gain from learning the exact performance differences between SMR and CMR disks, as SMR disks simply aren't, and never will be, appropriate as part of a ZFS pool.

If it's something you really want to see, you need to write something like 100GB+ in your test. It's your time, waste it as you please.
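
Something along these lines, for example (just a sketch; the path, filename and size are illustrative):
Code:
# sequential 1M writes, sized well past the drive's CMR cache area
sync && sudo fio --ioengine=libaio --direct=1 --name=cachefill \
  --filename=/mnt/Storage/share/testBig --size=200G \
  --bs=1M --iodepth=4 --rw=write --group_reporting --numjobs=1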
 

rjp421

Cadet
Joined
Jun 16, 2023
Messages
5
The fundamental issue with SMR and ZFS is that the CMR/buffer area on SMR disks is not big enough to cope with operations like a resilver or a very large write (a 1TB disk might have something like 30-100GB of CMR area).

As soon as the CMR area is full, performance is forced to slow as the data in the CMR area is immediately written out to the SMR area to clear space for new incoming writes (which is painfully slow and detrimental to the incoming writes).

I don't see anything that you stand to gain from learning the exact performance differences between SMR and CMR disks, as SMR disks simply aren't, and never will be, appropriate as part of a ZFS pool.

If it's something you really want to see, you need to write something like 100GB+ in your test. It's your time, waste it as you please.
Thanks for the reply.

I am trying to prove to myself that these new drives are going to fix whatever problem the SMR disks posed.

The HGST disks do not appear to be performing as I expected, and seem to do worse than the Seagates at random writes.

As for tests writing more than 100G, I did that yesterday. Or do I need a file larger than 100G, and not just 100G of writes?


From yesterday's 20 minutes of 1M random writes:
Code:
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=16
fio-3.25
Starting 1 process
test: Laying out IO file (1 file / 102400MiB)

test: (groupid=0, jobs=1): err= 0: pid=640526: Fri Jun 16 10:26:25 2023
  write: IOPS=963, BW=3854KiB/s (3946kB/s)(4516MiB/1200002msec); 0 zone resets
    slat (usec): min=10, max=543265, avg=1032.61, stdev=5858.06
    clat (usec): min=7, max=803489, avg=15571.55, stdev=32803.25
     lat (usec): min=525, max=812720, avg=16604.78, stdev=34456.67
    clat percentiles (usec):
     |  1.00th=[   611],  5.00th=[   701], 10.00th=[   766], 20.00th=[   857],
     | 30.00th=[   988], 40.00th=[  1205], 50.00th=[  1876], 60.00th=[  3621],
     | 70.00th=[ 10945], 80.00th=[ 26084], 90.00th=[ 51119], 95.00th=[ 69731],
     | 99.00th=[109577], 99.50th=[156238], 99.90th=[404751], 99.95th=[442500],
     | 99.99th=[549454]
   bw (  KiB/s): min=    8, max=76440, per=99.98%, avg=3853.06, stdev=8369.68, samples=2399
   iops        : min=    2, max=19110, avg=963.20, stdev=2092.43, samples=2399
  lat (usec)   : 10=0.01%, 500=0.01%, 750=8.83%, 1000=22.00%
  lat (msec)   : 2=20.15%, 4=10.30%, 10=7.86%, 20=7.33%, 50=13.11%
  lat (msec)   : 100=9.01%, 250=1.02%, 500=0.38%, 750=0.02%, 1000=0.01%
  cpu          : usr=0.55%, sys=6.63%, ctx=297859, majf=0, minf=11
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,1156103,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
  WRITE: bw=3854KiB/s (3946kB/s), 3854KiB/s-3854KiB/s (3946kB/s-3946kB/s), io=4516MiB (4735MB), run=1200002-1200002msec
 

rjp421

Cadet
Joined
Jun 16, 2023
Messages
5
Apologies, that last paste was from the 20 minutes of 4k random writes.


This is from the 20 minutes of 1M random writes.
Code:
test: (g=0): rw=randwrite, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=4                                                                     
fio-3.25                                                                                                                                                                                    
Starting 1 process                                                                                                                                                                          
test: Laying out IO file (1 file / 102400MiB)                                                                                                                                               
                                                                                                                                                                                            
test: (groupid=0, jobs=1): err= 0: pid=3872620: Fri Jun 16 09:48:47 2023                                                                                                                    
  write: IOPS=102, BW=102MiB/s (107MB/s)(120GiB/1200035msec); 0 zone resets                                                                                                                 
    slat (usec): min=204, max=523167, avg=9750.23, stdev=23377.25                                                                                                                           
    clat (usec): min=5, max=649327, avg=29278.47, stdev=45855.54                                                                                                                            
     lat (usec): min=874, max=721214, avg=39029.78, stdev=55778.56                                                                                                                          
    clat percentiles (usec):                                                                                                                                                                
     |  1.00th=[   758],  5.00th=[   865], 10.00th=[  1123], 20.00th=[  1778],                                                                                                              
     | 30.00th=[  6980], 40.00th=[ 11076], 50.00th=[ 13960], 60.00th=[ 18744],                                                                                                              
     | 70.00th=[ 23725], 80.00th=[ 34866], 90.00th=[ 85459], 95.00th=[123208],                                                                                                              
     | 99.00th=[208667], 99.50th=[263193], 99.90th=[429917], 99.95th=[471860],                                                                                                              
     | 99.99th=[530580]                                                                                                                                                                     
   bw (  KiB/s): min= 4096, max=2195456, per=100.00%, avg=104940.03, stdev=91705.84, samples=2398                                                                                           
   iops        : min=    4, max= 2144, avg=102.45, stdev=89.55, samples=2398                                                                                                                
  lat (usec)   : 10=0.01%, 750=0.78%, 1000=7.68%                                                                                                                                            
  lat (msec)   : 2=13.23%, 4=3.76%, 10=11.54%, 20=25.54%, 50=21.30%                                                                                                                         
  lat (msec)   : 100=8.53%, 250=7.09%, 500=0.53%, 750=0.02%
  cpu          : usr=0.50%, sys=3.82%, ctx=104957, majf=0, minf=14
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,122973,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
  WRITE: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=120GiB (129GB), run=1200035-1200035msec
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
randwrite slows down the process of generating the content to be written, perhaps enough that SMR can swallow it. You need a dataset with compression off, and to write zeroes using the "write" option for rw in fio. Random 4k writes complicate your measure as, again, everything slows down for that.
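
For example, something like this (a sketch only; the pool and dataset names are placeholders):
Code:
# scratch dataset with compression off, so zero-filled writes are not compressed away
sudo zfs create -o compression=off Storage/fio-test

# sequential zero-filled writes; --zero_buffers avoids spending CPU generating random data
sync && sudo fio --ioengine=libaio --direct=1 --name=seqzero \
  --filename=/mnt/Storage/fio-test/testWrite --size=200G \
  --bs=1M --iodepth=4 --rw=write --zero_buffers --group_reporting --numjobs=1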

SMR drives have a large cache, so smaller writes will probably be faster than on CMR drives, which have a smaller cache to play with.

The average calculations will potentially include both the time when only CMR writes are happening and when the drive has exhausted the CMR area, so I would say some more thinking needs to go into generating a test that can cause write timeouts via SMR overload first, then run the test for a true representation.
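
One way to catch that transition, rather than relying on the end-of-run average, is to watch per-vdev throughput and latency in a second shell while the fio job runs (pool name is just an example):
Code:
# per-vdev bandwidth and latency, refreshed every 5 seconds
zpool iostat -v -l HDD-Storage 5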

It's all moot, since we already know that SMR will catastrophically kill your pool when all of the SMR drives are simultaneously pushed to the same timeout-heavy condition under heavy enough write loads.
 

rjp421

Cadet
Joined
Jun 16, 2023
Messages
5
I guess that was the point of my original post: to ask the community, which would know more, for help figuring out a test that will do that.

thanks.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
SMR drives don't tend to suffer from performance issues until the drive itself needs to overwrite a physical LBA in the middle of a zone.

Based on the numbers above, 20 minutes of 1M writes is only putting about 130G into your pool, which still has 13.7T free (thanks to what I assume is some very good compression of the existing 8.1T of data written to it) - so assuming that the disks were clean when put into service, you're still quite a ways away from this biting you.
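
(You can check that with something like the following, using the pool names from the first post:)
Code:
zfs get compressratio,used,available Storage HDD-Storage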

DM-SMR drives will also take periods when they don't receive host I/O to internally re-shingle themselves, much like an idle SSD will use that time to prepare dirty NAND pages for the next write.

Other sites have benchmarked the impact of SMR in resilvering events, which I think is likely the easiest way to simulate the performance problems:

 

rjp421

Cadet
Joined
Jun 16, 2023
Messages
5
Thank you both for the info.

I have decided that since there are 3 copies of the data (the original data is on another 8TB drive, stored in a safe-ish place, plus the two RAIDZ1 pools), I might as well go ahead and try to resilver the (new, first) RAIDZ1 just to see.

Is there a specific how-to or guide on the best/safest way to force a resilver of a single drive?

Also, do I need to write more data to the drives for a resilver to show the performance issues? If so, how much?

Should I create a dataset with compression off and use fio rw=write (as mentioned above), then write enough data to bring the pool to >60% capacity?
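
Something like this is what I have in mind for the resilver part (untested sketch; device names are placeholders, and I assume the GUI's disk "Replace" workflow is the safer route):
Code:
# take one member of the new pool offline
sudo zpool offline HDD-Storage sdX

# wipe its ZFS label so it looks like a fresh disk (or wipe it from the GUI instead)
sudo zpool labelclear -f /dev/sdX

# replace the offlined member with itself, which triggers a full resilver
sudo zpool replace HDD-Storage sdX

# watch progress
zpool status -v HDD-Storage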


thanks.
 

Cupcake

Dabbler
Joined
Jan 1, 2014
Messages
42
Out of curiosity I replaced a failed WD Red 4TB CMR drive with a 2.5" 5TB Barracuda SMR drive. This meant that the drive's first task was the resilvering of a 16TB pool (6x4TB in RAIDZ2).

I did not observe any notable slowdowns or prolonged re-silvering duration. Unfortunately I did not save the numbers, so I'm tempted to run the experiment again now. If I recall correctly, it took around 12 hours to resilver, which is in line with what I observed before with the WD drives.

To me, the SMR topic in this forum is tainted by the initial issues with the WD SMR drives that everyone refers to. Every thread I see about SMR has people saying "you are asking for trouble" and "SMR will never be good for ZFS". I don't doubt that there are performance impacts, particularly if one were to use 10Gb Ethernet and a more modern CPU. My NAS has 1Gb/s interfaces only, and a crusty old AMD Athlon II X2 270. But if in practice and under real load I do not notice any negative impact, and the re-silvering only takes maybe 10-20% longer, SMR drives are a valid option in my opinion. I for one don't want to dismiss this option just because one manufacturer messed up the initial firmware on a newly released SMR drive.

I will keep testing on my setup with 5x 4TB WD Red + 1x 5TB Barracuda 2.5". Maybe I'll provide accurate numbers once I've done more scientific tests.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
This is one of the threads I really don't understand. You use ZFS if you want maximum protection for your data. If that is not a priority, it does not make too much sense to use ZFS with its limitations and performance demands. While you can of course still use it, there are other file systems that make more sense here.

Using SMR drives increases the risk of losing data. Yes, depending on the use-case this may not show at all during normal operation. But that argument is like saying "in 30 years of driving my car, I never really needed the seat belt, so I will not use it any more". You build a NAS with paranoia in mind, if you value your data. If the latter is not your concern, feel free to ignore this.

But if others read this thread, they need to be aware that anecdotal observations have little value when it comes to looking at risk. This is a statistical exercise, and statistics often go against gut feeling. So the question I need to ask myself is basically this: Do I want to risk many years of photos etc. by saving 200 USD/EUR?
 

Cupcake

Dabbler
Joined
Jan 1, 2014
Messages
42
This is one of the threads I really don't understand. You use ZFS if you want maximum protection for your data. If that is not a priority, it does not make too much sense to use ZFS with its limitations and performance demands. While you can of course still use it, there are other file systems that make more sense here.
Most things in life require a cost-value analysis. Are you running a 64-core Threadripper in your NAS for photo backups? Probably not. With CPUs, the cost, performance and risk of under-performance are clearly defined and well understood. With SMR drives, not so much. But that doesn't mean users should categorically exclude them, in my opinion.

Some people also gravitate to TrueNAS out of love for open source.
Using SMR drives increases the risk of losing data.
How? If re-silvering does not take significantly longer in low-power setups, where is the increased risk?

Yes, depending on the use-case this may not show at all during normal operation. But that argument is like saying "in 30 years of driving my car, I never really needed the seat belt, so I will not use it any more". You build a NAS with paranoia in mind, if you value your data. If the latter is not your concern, feel free to ignore this.
It is not the same. I'm not arguing based on the fact that my system has been running fine with an SMR drive for a few months; I'm arguing based on the fact that I did not observe any performance impacts while copying to, reading from, or re-silvering the pool. If I can show numbers that for my use case there is no difference, that's not anecdotal. It's a fact about my exact setup.

The question remains how you know that SMR drives increase the risk of data loss with ZFS. Did you also take into account that, for a given budget, one could run SMR drives in RAIDZ3 or CMR drives in RAIDZ2? We don't have the data today to clearly say which is safer and which is not.
Or what if, by using low-power 2.5" drives, we can lower the risk of the PSU dying because of all drives spinning up simultaneously? A PSU outage is a significant risk for all disks that are spinning. The TrueNAS Mini sold by iXsystems does not come with a redundant power supply.

But if others read this thread, they need to be aware that anecdotal observations have little value when it comes to looking at risk. This is a statistical exercise, and statistics often go against gut feeling.
I'm not operating on gut feeling. That's why I bought one SMR drive for testing and comparing against other CMR drives that I own.

So the question I need to ask myself is basically this: Do I want to risk many years of photos etc. by saving 200 USD/EUR?
I agree that users should be aware of past issues. But this forum dismisses SMR drives completely and always links to the same blog post from someone in 2020. The post describes a setup using a brand-new product from WD which, we now know, had firmware issues. The honest answer would be that we don't have enough data to support either claim. This is my point. Things are not black and white.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
There is a reason even WD says that SMR drives aren't for ZFS. SMR drives will spit out errors and get thrown out of the pool, period. It's proven and documented; just search for threads in this forum.

The increased amount of sustained random writes during ZFS resilvering (similar to a rebuild) causes a lack of idle time for DM-SMR drives to execute internal data management tasks, resulting in the significantly lower performance reported by users.
 
Last edited:

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703

AlexGG

Contributor
Joined
Dec 13, 2018
Messages
171
Maybe I'll provide accurate numbers once I've done more scientific tests.

That's a good idea, but you need an old, used, aged pool. Back it up and then replace the drives in it with SMR drives; see what happens. I don't know what will happen, but I know there is a significant (and very difficult to quantify) difference between a brand-new, written-once filesystem and a system that has been in use for several years.
 

Cupcake

Dabbler
Joined
Jan 1, 2014
Messages
42
There is a reason even WD says that SMR drives aren't for ZFS. SMR drives will spit out errors and get thrown out of the pool, period. It's proven and documented; just search for threads in this forum.
Interesting, thanks for the link. I admit that if SMR drives get kicked out of the pool, I do see how this can be a problem, especially if you have a pool with only SMR drives and it constantly degrades. It was my understanding that this was a particular issue with the WD drives, but I'm not sure right now to be honest.

That's a good idea, but you need an old, used, aged pool. Back it up and then replace the drives in it with SMR drives; see what happens. I don't know what will happen, but I know there is a significant (and very difficult to quantify) difference between a brand-new, written-once filesystem and a system that has been in use for several years.
Yeah, that's true. The hardware and pool are almost 10 years old, and even back then it was a low-performance build. So I don't even know what to expect from decent hardware in terms of re-silvering speed, etc.
Currently I'm planning to upgrade to an A2SDi-4C-HLN4F or A2SDi-8C-HLN4F as soon as I figure out whether I need 4 or 8 cores.

Yeah, it's probably pointless to look into SMR now. I was hoping to get a very compact build with 2.5" drives only some day, but I'll probably just wait until SSDs are affordable enough for the space that I need. We are getting there.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
It was my understanding that this was a particular issue with the WD drives, but I'm not sure right now to be honest.
Recalling from memory I can say I have seen way more issues with WD's SMR drives than with Seagate's SMR drives... whether this means Seagate's SMR is better for ZFS or simply that WD's SMR drives were more popular, I don't know; however, I clearly remember a (single) thread about Seagate's SMR drives having issues.

If you are willing to bring us data about this, we will be happy.
 
Last edited:

Skeleton66

Cadet
Joined
Aug 15, 2023
Messages
1
Recalling from memory I can say I have seen way more issues with WD's SMR drives than with Seagate's SMR drives... whether this means Seagate's SMR is better for ZFS or simply that WD's SMR drives were more popular, I don't know; however, I clearly remember a (single) thread about Seagate's SMR drives having issues.

If you are willing to bring us data about this, we will be happy.

I'm joining the SMR vs. CMR party under very unfortunate circumstances (3 days ago I didn't even know these things existed).

Originally I had TrueNAS SCALE set up on one very old computer (*all specs will be below) with an old Seagate 1TB HDD to try it out. This was something very new to me, but I wanted to give it a go to set up a media server. It worked amazingly and I fell in love. So I decided I needed more storage, did some research and cost calculations, and found a nicely priced 4TB Seagate SkyHawk HDD on Amazon, which I then used in a stripe (I know, I know) with my old 1TB HDD. Everything worked amazingly, and I ran out of storage. So I thought, I really love this thing, let's do it properly and have redundancy and more space. I settled on a 3x 4TB HDD setup in RAIDZ1: enough space, and if I need more I can add another pool with 3x drives. I'm only worried about losing data in the sense of how much time it would take me to index my media again and set everything up all over. So from a cost perspective this looked ideal. I found well-priced Seagate HDDs that matched in RPM and disk size, and did my best to get the 2 disks from different batches so as not to tempt fate.

I set everything up, created the 3-disk pool, and moved all the data over. It was a bit of a troublesome process, which I thought was just my inexperience and me doing something wrong, but I couldn't use the replication tasks from the GUI as they kept failing. In the end I got it all working, but as soon as I started running things, the problems started popping up. The first thing that caught my eye was
Code:
Device /dev/disk/by-partuuid/5c4d39b3-b612-483c-90c4-718241f96b47 is causing slow I/O on pool Data1

type errors popping up. I didn't think much of it; I figured I was writing a boatload of data right now, so it seemed understandable. Then apps started freezing and rebooting constantly. qBittorrent was totally unusable and crashed every 5 minutes. After searching around, I'm now realising my issue might have something to do with my pool being made up of 2 SMR and 1 CMR drives. I moved qBittorrent to download to the old 1TB HDD, which now works but still crashes every 30-70 minutes or so (the app itself is still on the RAIDZ1 pool), and I'm constantly getting the slow I/O warnings; things just don't seem to work right.

In short: I've set up a RAIDZ1 pool with 2 SMR and 1 CMR Seagate 4TB drives and now I'm fucked; qBittorrent crashes every 5 minutes, apps don't function right, and I get constant
Code:
... is causing slow I/O on pool Data1
alerts.

My setup:
Code:
CPU:            AMD FX-8350
Motherboard:    GA-78LMT-USB3 (rev. 5.0)
Boot drive:     120GB Kingston SSD (SA400S37)
"Old 1TB HDD":  1TB Seagate HDD (ST1000DM003)

RAIDZ1 Pool:
Newly bought drives for the raid:  2x Seagate Barracuda (ST4000DM004) (SMR, as I now know)
First new drive:                   1x Seagate SkyHawk (ST4000VX016) (CMR, luckily)
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I'm joining the SMR vs. CMR party under very unfortunate circumstances (3 days ago I didn't even know these things existed).
Thank you for your feedback.
 
Joined
Jun 15, 2022
Messages
674
I'm joining the SMR vs. CMR party under very unfortunate circumstances (3 days ago I didn't even know these things existed).
Most unfortunately, ZFS + SMR is more of a funeral, and it's a cash bar.
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,377
Can I ask a silly question?

I have an array of 3x 750GB 2.5" drives in my system. I use them just as a 'scrap backup' to copy content to once per night. The performance of this doesn't matter, within reason.

Assuming no faulty disks, if I upgrade to 3x 2TB SMR disks, what kind of write speed might I expect? Could I mirror, say, 2TB from midnight to 6am?
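
Rough arithmetic, assuming the full 2TB actually has to move each night:
Code:
# 2 TB over the 6 hours between midnight and 06:00, in MB/s
echo $((2 * 1000 * 1000 / (6 * 3600)))    # ~92 MB/s sustained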

I'm not a huge fan of SMR (I'm not silly), but I can only fit 3x 2.5" drives in my particular case.
Option B is that I simply go without, but I did like the idea of a secondary random backup.
 