Help Me Optimize for MythTV (simultaneously reading and writing 2+ GB mpeg2 files)


mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Hey all,

I was hoping someone might give me some guidance on how to optimize settings for use in this environment.

Setup:

FreeNAS and my MythTV backend are directly connected on a separate subnet. File sharing happens via NFS.

My FreeNAS backend has an 8-drive RAIDZ2 volume.

What works:
The MythTV backend can write six MPEG-2 streams (the limit of my tuner, each up to 18 Mbit/s) to the FreeNAS volume without a problem, at the same time as clients are playing OTHER files from the volume. So I know there is plenty of performance there to handle my workload.

What Doesn't:
The problem arises when I try to view a recording at the same time it is being recorded. MythTV does this for all live recordings (write the file to disk, build up a couple of seconds of buffer, play the same file from disk to clients); it is how it supports pause, fast-forward, and similar operations.

I know the system has plenty of performance for the reads and writes I need, but the NFS/ZFS combo somehow gets hung up when any given file is being simultaneously read from and written to, occasionally (every 15 minutes or so) freezing up for a second or two before continuing. (Sometimes this is enough to cause the frontend to time out during playback.)

What I've Tried

1.) Having a separate local SSD in the Mythbuntu backend for Live TV recordings, and only pushing scheduled recordings to FreeNAS.

This works reasonably well, except when a scheduled recording is being written to FreeNAS and I want to watch it while catching up to live. (This is usually the case for sports, like the World Cup recently.)

2.) Using a local SSD on the Mythbuntu box together with cachefilesd/FS-Cache, mounting the NFS share with the fsc option to cache it.

I had hoped this would solve the issue, so that reads and writes to recent/in-progress recordings would all come off the fast 60GB partition on the SSD, but for whatever reason it has the same problem when reading and writing the same file at the same time as going directly to NFS does.

So I suspect something in the NFS/ZFS combo, in how files are locked or how caches are committed to disk, is causing this problem. Either it comes down to my configuration, or it is inherent to the protocols.


If it is configuration-based, I'm hoping someone with experience optimizing these things might give me some pointers. :)


Server specs in sig.

Current configuration:
Link: virtual 10-gigabit Ethernet (tops out at ~4 Gbit/s with iperf in practice)
Server NFS settings: 100 servers (nfsd threads), to make sure I don't run out
Client NFS mount options: "rsize=8192,wsize=8192,timeo=14,intr,fsc,rw,user"
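
For reference, the full client-side mount looks something like the /etc/fstab entry below (the server name and both paths are made up; the options are the ones above):

Code:
# /etc/fstab on the Mythbuntu backend -- 'freenas' and both paths are hypothetical
freenas:/mnt/tank/mythtv  /srv/mythtv  nfs  rsize=8192,wsize=8192,timeo=14,intr,fsc,rw,user  0  0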

Maybe different block sizes or something might help?

Appreciate any thoughts / suggestions :)
 

Jason Hamilton

Contributor
Joined
Jul 4, 2013
Messages
141
I have been experiencing this as well, with exactly the same results. It also prevents me from having Myth do real-time commercial detection during the recording; I have to wait for the recording to complete, and only then does Myth run its commercial detection.

I have my recordings going to a mirror on the FreeNAS to keep them separate from the rest of my storage (no need to bother my main storage with TV recordings). I only have the Myth drives in the NAS because the Dell tower I bought from the university can't hold the additional drives, and otherwise it works just fine as a backend.

I've tried modifying whatever I could to improve this, but no luck. I even gave my Myth box a dedicated LAN run between it and the FreeNAS to see if that would help.
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Thank you for sharing your experiences. I will continue searching, and if I find a solution I'll share it here.

MythTV DOES suggest that individual drives are better for performance than RAID, due to the IOPS limitations inherent in RAID. But the fact that I can record six streams and play two (the most I have tested) all at the same time, as long as none of them is the same file, suggests to me that something other than raw performance is at play here.

I opted to ignore this recommendation, as I didn't want to buy dedicated drives for the MythTV backend, and having one centralized shared storage location appeals to me. Besides, doing something other than the "recommended" approach is really just an interesting challenge, right? :p

What is really puzzling to me is that this problem persists even with a 60GB dedicated SSD cache on the Myth backend. In theory, with FS-Cache it ought to write through the SSD, and read operations should be fast and unaffected by ZFS, since they hit the local cache rather than the ZFS volume.

Considering how many people use FreeNAS, NFS, and ZFS as a VMware datastore and successfully run their guests off images on their volumes, where there are FAR more simultaneous reads and writes to the same file than with a simple sequential write and read of a video file, it really ought to work.

Maybe there is some setting in cachefilesd I can change that will help. I will have to research that as well.
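
For anyone else poking at this, the cachefilesd knobs live in /etc/cachefilesd.conf; here is a minimal sketch (the cache path is hypothetical, and the percentages are just the usual defaults):

Code:
# /etc/cachefilesd.conf -- backing store for FS-Cache (the NFS 'fsc' mount option)
# cache directory, here assumed to be the dedicated SSD partition
dir /ssd/fscache
# arbitrary tag identifying this cache
tag mythcache
# culling stops once free space rises back above 10%...
brun 10%
# ...starts when free space drops below 7%...
bcull 7%
# ...and caching is suspended entirely below 3% free
bstop 3%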

My backup plan is as follows (though I don't like it as much)

Create two file locations in the Myth backend for each of Live TV and recordings: one on the local SSD and one on the mounted FreeNAS volume. Then somehow tell mythbackend not to write to the ones on FreeNAS. (I'm hoping this setting exists; I can't check right now, as the box has scheduled recordings, and entering backend setup shuts it down.)

Then run a daily cron job at 5am or so, moving all recordings (live and scheduled) older than 24 hours to FreeNAS with something like this (note that find needs -mtime +0, not -mtime 1, to match files last modified more than 24 hours ago):

find /sourcedirectory -mtime +0 -exec mv "{}" /destination/directory/ \;

It's not as elegant, but as a last resort it should work. I highly doubt I'll fill the backend's local 60GB SSD in a day, so this way all files being simultaneously read from and written to will be on the local SSD, while older recordings that only need to be read from will be on FreeNAS and should work just fine...
 

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
At least double your RAM. You might also try mounting the filesystem async, if you haven't already.

After that, there's a lot of instrumentation you can use to see where the bottleneck is. But first, you need a ton of available RAM.
 

Jason Hamilton

Contributor
Joined
Jul 4, 2013
Messages
141
c32767a, where should this ton of RAM be located? On the backend or the FreeNAS machine? My backend currently has 4GB (it came with it) and my FreeNAS has 12GB (that's what it came with as well). I'll also check the fstab on the Myth box to make sure it's set up for async. Thanks.
 

c32767a

Patron
Joined
Dec 13, 2012
Messages
371

On the FreeNAS box.

The cache is unified and is only a subset of the available RAM on the NAS. I would up your RAM to 24 or 32GB and see how it behaves. You'll also need to make sure you either manually tune, or rerun the autotune script, so that the kernel limits get resized for your available RAM.


Have you reviewed these:

http://doc.freenas.org/index.php/Arcstat

http://christopher-technicalmusings.blogspot.com/2010/09/zfs-and-nfs-performance-with-zil.html
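
If you just want a quick look without the scripts, the raw ARC counters are also exposed via sysctl on FreeBSD/FreeNAS:

Code:
# current ARC size plus hit/miss counters, from the FreeNAS shell
sysctl kstat.zfs.misc.arcstats.size \
       kstat.zfs.misc.arcstats.hits \
       kstat.zfs.misc.arcstats.misses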
 

Jason Hamilton

Contributor
Joined
Jul 4, 2013
Messages
141
I'll have to add that to my wish list, lol. I don't have the free funds for that kind of RAM at the moment.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Autotune in FreeNAS is broken. You shouldn't be enabling it, and you should delete any sysctls or tunables left over from Autotune in the past.
 

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
Is there a bug or release note that details this?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
It's in the manual that it doesn't add value and actually makes ZFS slower... Is that not enough of a hint that you shouldn't use it?
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
c32767a said:
At least double your RAM. You might also try mounting the filesystem async, if you haven't already.


Hmm. Well, my server is currently maxed out at 32GB (unless I can find some of those elusive 16GB ECC UDIMMs), and 25GB is about all I can afford to give to FreeNAS; my other guests are already very low on RAM as it is, so I'm hoping RAM isn't the only solution here.

What doesn't make sense to me, however, is that writing six streams and reading two at the same time is perfectly fluid as long as they are different files, but writing one file while reading from that same file is not.

What is the theory behind what an async mount accomplishes?

Thanks,
Matt
 

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
cyberjock said:
It's in the manual that it doesn't add value and actually makes ZFS slower...


Yeah.

I was confused because this page of the manual, http://doc.freenas.org/index.php/Settings, says nothing about not using autotune. It only has a vague warning about trying to use autotune to overcome <4GB of RAM.

I was further confused because this bug is marked as fixed in 9.2.1.5: https://bugs.freenas.org/issues/4597

I haven't seen anything that says it doesn't add any value or makes ZFS slower.
 

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
mattlach said:
What doesn't make sense to me, however, is that writing six streams and reading two at the same time is perfectly fluid as long as they are different files, but writing one file while reading from that same file is not.


I think as long as you have enough RAM available to cache all the files you're simultaneously writing, plus some spare space, you should be "good enough".

With synchronous writes turned on, each write is fully flushed through to disk, effectively disabling any write caching. If you enable async, the NFS server lies to the client and confirms the write was fully written to disk while it may still be only in RAM on the NFS server. By enabling async you expose yourself to the possibility that data 'in the air', not yet on disk, will be lost in an unclean shutdown, but you gain performance.
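
On the FreeNAS side this maps to the dataset's sync property (or the async export option); a sketch, using a hypothetical dataset name and carrying the same data-loss caveat:

Code:
# disable synchronous write semantics on the recordings dataset
# ('tank/mythtv' is a hypothetical name; adjust to your pool/dataset)
zfs set sync=disabled tank/mythtv
# confirm the setting
zfs get sync tank/mythtv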

I think the relevant issue for you is that multiple IO streams to different files are fine, while reading and writing the same file is problematic. That implies a caching problem with that one file. If you run gstat or another tool that reads disk statistics while the problem is occurring, I think you'll find high IO latency and your drives very busy doing random I/O, which would suggest that file is not getting cached properly.
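
Something like the following, run on the FreeNAS box while a recording is being watched, should make that pattern visible (both tools ship with the base system; 'tank' is a hypothetical pool name):

Code:
# per-disk busy percentage and I/O latency, refreshed every second
gstat -I 1s
# per-vdev throughput and IOPS for the pool
zpool iostat -v tank 1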
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
c32767a said:
I haven't seen anything that says it doesn't add any value or makes ZFS slower.

Well, there's this, from the link you posted:
FreeNAS® provides an autotune script which attempts to optimize the system depending upon the hardware which is installed. For example, if a ZFS volume exists on a system with limited RAM, the autotune script will automatically adjust some ZFS sysctl values in an attempt to minimize ZFS memory starvation issues. It should only be used as a temporary measure on a system that hangs until the underlying hardware issue is addressed by adding more RAM. Autotune will always slow the system down as it caps the ARC.
 

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
danb35 said:
Well, there's this, from the link you posted:


Perhaps the manual section needs to be rewritten, then, because at the bottom of that section it says:

"If you are trying to increase the performance of your FreeNAS® system and suspect that the current hardware may be limiting performance, try enabling autotune."

That is misleading: either autotune limits performance or it doesn't. I took the fact that the warning was in the first paragraph to mean it only applied when there was <4GB of RAM.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Umm... if you are using FreeNAS in a VM, performance sucks because of how ESXi's process manager works. You shouldn't be running FreeNAS in a VM, and you do so at your own risk (or reward).
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
Re: autotune. Anyone who wants to see what it does can review the script. I hope it's not "broken", because it's an available option in the just-released 9.2.1.6. It would be odd to ship RELEASE software with a known-broken feature enabled (or able to be turned on).

Basically the script does two things: size the ARC, and look at (and possibly adjust) some sysctls related to networking and the L2ARC settings. It has been rewritten in 9.2.1.6 versus previous versions. It used to "severely" cap the ARC (e.g. to 10G on a 16G system; now it sizes it to ~13G, at least based on my 16G system). I haven't reviewed all the changes in the script itself, though; I just followed a couple of bug reports in the 9.2.1.x branches.
 

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
Yeah, I'd read the script when I first started using it. That's why I was confused by the tone of the warning about using it.

It caps the ARC at a percentage of system RAM. In theory you don't want your filesystem cache to eat all system RAM (hi, Solaris), so a cap is a good thing in my mind. Does it force you to possibly leave some RAM on the table that could otherwise be used for FS cache? Probably. But I'm not sure how that leads to a slower filesystem.
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
Yes. But when it capped the ARC to 10G I basically overrode the settings; that was far too aggressive a cap. The ARC is supposed to size itself down automatically if the system requests more RAM; when I left autotune off, the system hovered around 14G used. I'm OK with a cap, and with the current cap of 13G. Seems reasonable.

I think the point about slowing down the FS comes down simply to limiting the ARC: the smaller it is, the lower the hit rate in the memory cache (on most workloads), hence the "net" slowdown for a smaller ARC.
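
For reference, the cap autotune applies is just the vfs.zfs.arc_max tunable, so it can be set by hand; a sketch (the value is illustrative, and on FreeNAS you would add this through the Tunables screen rather than editing the file directly):

Code:
# /boot/loader.conf -- manually cap the ARC at 13G
vfs.zfs.arc_max="13G"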
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Anyway,

I promised I'd post back here with what I did to solve the issue. I couldn't find anything wrong on the FreeNAS side, so I dedicated a 60GB partition on my local SSD to Live TV and recordings, and set up a cron job that runs every morning at 4:45am and moves all files that haven't been modified in more than four hours, using this command:
Code:
find /ssd/recording/directory/* -mmin +240 -exec mv "{}" /FreeNAS/archive/folder/ \;
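
The matching crontab entry looks something like this (same paths as above, run daily at 4:45am):

Code:
# m  h  dom mon dow  command
45   4  *   *   *    find /ssd/recording/directory/* -mmin +240 -exec mv "{}" /FreeNAS/archive/folder/ \;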

Then I added the archive folder as a new storage group in MythTV. When the MythTV database can't find a file where it expects it, in the local recording directory, it searches all storage groups for it and updates itself. This takes a fraction of a second, so there is no delay.

Works very well for me.

In the meantime, I might have found a reason why this occurs only while reading from and writing to the same file, here and here.

It seems it might be a database issue. I'm not sure why it goes away on local drives for me, but maybe slower networked storage just exacerbates the existing problem.

The good news is that fixes are going into MythTV 0.28. The bad news is that it does not appear any 0.27 releases will be patched, and a stable 0.28 is still a ways off.
 