SOLVED Conceptual idea of running a stripe in production

lachdanan

Dabbler
Joined
Sep 15, 2020
Messages
19
Ok, so before yelling, please hear me out.

With Plex or Emby you can designate a directory to use for transcoding. My idea is to create a separate zpool of 2 striped drives (similar to RAID 0), attach it to the jails, and use it exclusively as a transcoding directory. My thinking is that each disk would then absorb only about half the writes that a single disk normally would, giving better longevity regardless of drive quality (consumer vs. enterprise, etc.).

My question is what happens with ZFS when one of the drives in the stripe fails. Obviously whatever was being transcoded at the time would fail: the movie or TV show would stop playing as it tried to read data that is no longer retrievable. Assuming that happened, and the movie was stopped or exited: when the movie was resumed or started a second time, would ZFS still be able to use the striped volume, writing/reading the new transcode data only to the single remaining drive (as it is the only place left to write), or would it simply fail to work at all?

To restate my specific question: does anyone know whether a stripe that loses one of its disks can still be written to and used?

If there is a better way of doing something similar, please let me know. I would like to avoid the following if possible:
1. RAM disk - I only have 64GB of memory, so I don't want to burn any for this if at all possible; an ordinary SATA SSD is much faster than my transcoding needs require.
2. ZFS 'RAID 1/5/6' equivalents - I would like to avoid increasing my writes if at all possible, as a ZFS mirror or RAIDZ would (it's just temporary data anyway).
3. Going down - as stated above, the movie failing to play back due to the loss of a drive is acceptable; I would just like it to recover if you try to play it again.
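
For concreteness, the layout I'm imagining would be created something like this (the pool name, device names, and mountpoint are just placeholders for whatever the actual system uses):

```shell
# Hypothetical striped transcode pool: two single-disk vdevs, no redundancy.
# Listing two disks with no mirror/raidz keyword stripes across them.
zpool create transcode ada1 ada2

# Mount it somewhere the jail can reach and point Plex/Emby at it.
zfs set mountpoint=/mnt/transcode transcode
```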
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
No, when your pool loses a vdev (your "stripe" devices are each a vdev), the pool is toast and cannot recover. The loss of either drive in your scenario torches the pool.
 

lachdanan

Dabbler
Joined
Sep 15, 2020
Messages
19
Thank you,
You have saved me a decent amount of testing time.

Looks like the only way to do something similar would be to use a hot spare, set to replace the failed drive upon failure.
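
For reference, a spare would be declared something like this (names are placeholders), though from what I can tell, a spare only resilvers into redundant vdevs, so it wouldn't actually bring a dead stripe back:

```shell
# Hypothetical stripe with a hot spare attached (device names are placeholders).
zpool create transcode ada1 ada2 spare ada3

# Caveat: a spare can only resilver into a vdev that has redundancy
# (mirror/RAIDZ). If a disk in a plain stripe dies, there is no surviving
# copy to rebuild from, so the spare cannot save the pool; it would have
# to be destroyed and recreated.
```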
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
But... Why bother? I can't imagine any reasonable transcode being limited by how fast the storage is, and the longevity argument is weird, since SSDs don't wear out that quickly unless you're going crazy with how much you write.
 
Joined
Oct 22, 2019
Messages
3,641
since SSDs don't wear out that quickly unless you're going crazy with how much you write.

Not only that, but correct me if I'm wrong: the real bottleneck for transcoding is your GPU/CPU. Isn't the "temporary transcodes" folder simply used as a cache to hold the converted video for future use and seeking? How would re-locating this folder to its own dedicated pool (made solely of a striped vdev) help the clients livestream the movie any smoother?

As for reducing wear and tear, like @Ericloewe mentioned, you'd have to reach crazy levels of constant writes to kill any quality SSD.
 

lachdanan

Dabbler
Joined
Sep 15, 2020
Messages
19
But... Why bother? I can't imagine any reasonable transcode being limited by how fast the storage is and the longevity argument is weird, since SSDs don't wear out that quickly unless you're going crazy with how much you write.

With Plex and Emby transcoding in general, a lot of people try to put it on a faster drive. I have seen obsessions go so far as putting it on a RAM disk. Right now my transcoding is being written to and read from my general RAIDZ2 of spindle drives, and I haven't had issues even with this setup personally. However, separating transcoding onto SSDs is a pretty normal thing for people to do with Plex/Emby. As is obvious from my posts above, that turned into the idea that a mirror would double the writes compared to a single disk, while a stripe would cut them in half, when everything written is temporary anyway.

I definitely agree SSDs are very impressive for write endurance. I have honestly not had a single one fail on me yet; even the very first WD Blues I bought 6+ years ago for what are now old desktops still work. It was more an investigation into the behavior of ZFS when a stripe fails, as my needs would be unusual compared to what most people use a pool for.

I think what I'll end up doing is a mirror. With my particular setup I don't really have any limits when it comes to SATA ports, so an unnecessary setup like this isn't difficult for me to implement.
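
Something like this, presumably (pool and device names are placeholders):

```shell
# A simple two-way SSD mirror for the transcode directory.
# Losing either disk leaves the pool degraded but still usable,
# which covers the "recover on retry" requirement from my first post.
zpool create transcode mirror ada1 ada2
zfs set mountpoint=/mnt/transcode transcode
```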
 
Last edited:

lachdanan

Dabbler
Joined
Sep 15, 2020
Messages
19
Not only that, but correct me if I'm wrong: the real bottleneck for transcoding is your GPU/CPU. Isn't the "temporary transcodes" folder simply used as a cache to hold the converted video for future use and seeking? How would re-locating this folder to its own dedicated pool (made solely of a striped vdev) help the clients livestream the movie any smoother?

As for reducing wear and tear, like @Ericloewe mentioned, you'd have to reach crazy levels of constant writes to kill any quality SSD.

Sorry, yeah, I didn't give enough information in my first post to show why I want to move to a different directory. Right now it's using a RAIDZ2 of spindle drives for the transcode directory, and even with this there is no bottleneck at the disks. There is obviously more delay in using RAIDZ2 spindle drives that are also serving the video than if there were a separate SSD for the transcoding to write to, so there would be a slight increase in performance, though probably not very noticeable. I am definitely going to change it to some SSDs, but wanted to get some ideas of ways it could be done. Probably the smartest way is just to make a mirror and call it a day, instead of playing the games I have been thinking of.
 

tripodal

Dabbler
Joined
Oct 8, 2020
Messages
19
Generally, recovering non-recommended pools with odd vdev arrangements can be a PITA. After torching dozens of pools and recovering a few, I've come to the conclusion that lots of mirrored vdevs is the best time-to-result value for me.

If you want to use 'weird' configurations, you will want to put in a lot of hours really digging into how ZFS works, and spend time causing intentional failures so you know what to expect.

I built a vdev (RAIDZ2) with 24 used laptop drives in various states of failure, and used some cache SSDs that only powered up every so often.

I accidentally built a pool that had 1 SMR and 3 PMR drives.

I encrypted a 30 TB iSCSI target on SMR drives with dedup and compression enabled.

These things were mistakes.

But I still use the above hardware as simple mirrored vdevs (throwing out the actually failing hardware), and it's actually stable.

So really what I'm saying is mirrors are good.
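
As a sketch, assuming placeholder device names, "loads of mirrored vdevs" looks like this, and growing the pool later is just adding another mirror:

```shell
# A pool striped across multiple two-way mirror vdevs.
# Each mirror can lose one disk without taking the pool down.
zpool create tank mirror ada1 ada2 mirror ada3 ada4

# Expanding later is a single command: add another mirror vdev.
zpool add tank mirror ada5 ada6
```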
 