Short-stroke a RAID-Z array?

Status
Not open for further replies.

nattan

Explorer
Joined
May 19, 2013
Messages
57
I know that on Windows you can partition off some percentage of a drive's platter, which increases write speed by confining the usable area to the faster outer edge of the disk. Would there be any unusual issues if this were done with a RAID-Z array (aside from a smaller pool)?


Would this be possible, or would the disks have to be partitioned before they are put into the array?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
You can't really do this with FreeNAS.

It's also a somewhat silly way of dealing with things. Need more performance? Do one of the following:
  • Use several striped mirrors
  • Use faster drives (not worth it, typically)
  • Use SSDs
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
Perhaps you could post the details of the performance problem you are having, i.e. your hardware, FreeNAS version, sharing protocol being used, network details, and some metrics on how bad the performance is.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I haven't tested this, but I'm betting short stroking would hurt performance.

When a pool first starts filling, it's extremely fast. By the time the pool is 30% full you're at something like 80% of peak performance. As you use more of the pool's space, performance tanks rapidly before leveling out for nearly-full pools.

So unless you planned to short-stroke the disks, THEN use less than 30% of the now-smaller pool, I'd bet your gain is zero.

And considering how your data is spread across the disks (not to mention cached in RAM before being written, and reads and writes "taking turns" with ZFS), I'd be surprised if there were any way to gain speed short of short-stroking the disks to something like 25% of their capacity and then only using 25% of that. Who'd buy a 4TB drive just to get a "really fast" 250GB of disk space? You could buy SSDs instead and get a *guaranteed* performance increase versus this attempt.
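To make the arithmetic explicit (the 25%/25% fractions are the illustrative figures from the paragraph above, not measured values):

```python
# Rough arithmetic behind the "4 TB drive -> 250 GB usable" point above.
# Assumptions, purely for illustration: short-stroke the drive down to 25%
# of its capacity, then keep that pool under 25% full to stay in the fast
# region of the fill curve.

drive_tb = 4.0                # nominal drive size, TB
short_stroke_fraction = 0.25  # keep only the fast outer 25%
pool_fill_limit = 0.25        # then use only 25% of that pool

usable_tb = drive_tb * short_stroke_fraction * pool_fill_limit
print(f"Usable fast space per drive: {usable_tb * 1000:.0f} GB")  # 250 GB
```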

Not to mention that you'd have to do the zpool creation yourself, and you'd have to turn off autotune.

Seems like a real hassle for a small, potentially non-existent gain. :P

I wouldn't even try it "just to see what happens" when SSDs are an option.
 

nattan

Explorer
Joined
May 19, 2013
Messages
57
anodos said:
Perhaps you could post the details of the performance problem you are having, i.e. your hardware, FreeNAS version, sharing protocol being used, network details, and some metrics on how bad the performance is.

I think my major issue is low IO / long seek travel. I have a 3-drive RAID-Z (nearing 90% usage) and an SSD for plugins/torrents/game servers. I have noticed that when streaming files from Plex, usually around 5-6 streams (I share it), there will be sudden stutter reading files from the RAID-Z. If I play a file from the CIFS share directly, there is no issue for the local stream; I am not 100% sure how it affects remote streams at the moment.

The reason I was wondering about short stroking: if I just "removed" let's say 10% off each drive, I think that could improve my issues. I am basing this on a drive I short-stroked in my daily PC. I was having horrible random-IO issues until I cut off ~10% of the drive's capacity; after that, everything worked wonderfully. I'm not sure exactly why it worked. I assumed 10% wouldn't exactly keep the heads from reaching the center of the platter, and the minimum sequential speed did not increase much either, but the random IO was far superior.
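A back-of-the-envelope model hints at why a 10% capacity cut can help random IO more than you'd expect: with roughly constant areal density, the innermost tracks hold the least data per unit of head travel, so trimming the innermost 10% of *capacity* shortens the seek *stroke* by noticeably more than 10%. The inner-radius ratio below is an assumption of mine, purely for illustration:

```python
import math

# Toy model: capacity between radii r and R scales with R^2 - r^2
# (constant areal density assumed), and the inner radius is taken to be
# 0.3x the outer radius.  Both are illustrative assumptions, not specs.

R = 1.0      # outer radius (normalised)
r_min = 0.3  # assumed innermost usable radius
cut = 0.10   # fraction of capacity removed from the inside

full_cap = R**2 - r_min**2          # total capacity (arbitrary units)
kept_cap = (1 - cut) * full_cap     # capacity left after the cut
r_new = math.sqrt(R**2 - kept_cap)  # new innermost radius after the cut

old_stroke = R - r_min
new_stroke = R - r_new
print(f"Stroke shortened by {1 - new_stroke / old_stroke:.0%}")  # ~18%
```

So under these assumptions, a 10% capacity cut shortens the worst-case head travel by almost a fifth, which lines up with the "better than expected" random IO.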
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
If you really need more IO, you can fill your pools less (which avoids fragmentation), add vdevs (N vdevs should give roughly N times the IO performance - in practice, this means mirrors), or be hardcore and move to SSDs.

90% is clearly "don't expect performance" territory. 80% is the recommended maximum, 95% is "you're fscked if you don't do something".
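The vdev-scaling point above can be sketched with some toy arithmetic; the 100-IOPS-per-disk figure is an assumption of mine for illustration, not a number from the thread:

```python
# Toy arithmetic for "N vdevs -> roughly N times the IO".  ZFS stripes
# across vdevs, and a vdev's random-IO capability is roughly that of a
# single member disk.  Per-disk IOPS below is an assumed ballpark figure
# for a 7200 rpm drive.

PER_DISK_IOPS = 100  # assumption, for illustration only

def pool_random_iops(vdevs: int, iops_per_vdev: int = PER_DISK_IOPS) -> int:
    """Total random IOPS: vdevs contribute additively."""
    return vdevs * iops_per_vdev

# Same six drive bays, two layouts:
print(pool_random_iops(1))  # one 6-disk RAIDZ2 vdev    -> ~100 IOPS
print(pool_random_iops(3))  # three 2-disk mirror vdevs -> ~300 IOPS
```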
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
That sounds like you are CPU-bound. If it plays fine over CIFS, then your CPU is trying to transcode for Plex and can't keep up with the workload.

Plus what Eric said. 90% full is asking for poor performance due to fragmentation issues. And you can't undo the damage once it's done, except by emptying the pool out or recreating a new pool. There's a reason why you get the WebGUI warning at 80%. ;)
 

nattan

Explorer
Joined
May 19, 2013
Messages
57
So at this point, would it be better to buy 3 more drives and make another RAID-Z, or back up, flush it, and go RAID-Z2?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'd do a 6 disk RAIDZ2.
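The usable-space arithmetic behind the options discussed, assuming six identical drives (the 4 TB drive size is a placeholder of mine, not from the thread):

```python
# Usable capacity of the layouts under discussion, ignoring ZFS
# metadata/padding overhead.  Drive size is an assumed placeholder.

drive_tb = 4.0  # assumed drive size, TB

def raidz_usable(drives: int, parity: int, size_tb: float) -> float:
    """RAID-Z usable capacity: (drives - parity) data disks' worth."""
    return (drives - parity) * size_tb

print(raidz_usable(3, 1, drive_tb))  # existing 3-disk RAIDZ1 ->  8.0 TB
print(raidz_usable(6, 2, drive_tb))  # 6-disk RAIDZ2          -> 16.0 TB
# Two 3-disk RAIDZ1 vdevs would also give 16 TB, but RAIDZ2 survives
# ANY two-disk failure, which is why it's the recommendation here.
```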
 