Help Me Optimize for MythTV (simultaneously reading and writing 2+ GB mpeg2 files)

Status
Not open for further replies.

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
Anyway, works very well for me.

Meantime, I might have found a reason why this occurs only while reading and writing the same file, here and here.

Seems like it might be a database issue. Not sure why it goes away on local drives for me, but maybe having slower networked storage just exacerbates the existing problem.

Good news is that fixes are going into MythTV 0.28. Bad news is, it does not appear any 0.27 versions will be patched, and a stable 0.28 is a ways off yet.

I didn't catch if you said where your Myth BE stores its database? I assume locally on your BE machine? Or is the DB on the FreeNAS as well?

Lots of moving parts. Glad you at least found a workaround and maybe the answer.
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
I didn't catch if you said where your Myth BE stores its database? I assume locally on your BE machine? Or is the DB on the FreeNAS as well?

Lots of moving parts. Glad you at least found a workaround and maybe the answer.

Have to revisit this.

No, the database is local on the SSD; only the storage groups are remote via NFS.

I finished my FreeNAS upgrade a while back.

My MythTV database degraded terribly, to the point where it was barely usable. I'm thinking my workaround script above was to blame. The backend is supposed to be able to find moved files in different storage groups and automatically update the database, but I think the massive number of moves I was doing was just too much, and over time it degraded the database.

I figured now that my FreeNAS box is faster, has more RAM, more drives and cache up the wazoo, I'd give MythTV with storage over NFS another try.

I still have the same issue. Live TV skips, and frequently freezes completely. Scheduled recordings do better for some reason, but if I watch them before they have finished recording, performance is iffy.

I have read the MythTV NFS Server Guide, but its suggestions haven't helped much. I've tried pretty much every combination of mount options, wsize, rsize, tcp, udp, nfsvers=3, etc., with no success. I have also tried both a dual Intel gigabit adapter in LACP mode and 10gig. Both result in the same LiveTV freezes.
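
For reference, the mount lines I've been permuting look something like this (the server name, export path, and exact rsize/wsize values here are just placeholders, not a recommendation):

Code:
  # /etc/fstab on the MythTV backend -- one of many variants tried
  freenas:/mnt/tank/mythtv  /var/lib/mythtv  nfs  rw,nfsvers=3,proto=tcp,rsize=65536,wsize=65536,hard  0  0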

I don't think my storage pool in and of itself is the issue. It is very fast, with 12 drives configured as two 6-drive RAIDz2 vdevs striped together into one large pool. The storage box has 72GB of RAM for plenty of caching, and has two striped 128GB SSDs for read cache (L2ARC). I created a dedicated dataset just for MythTV in which I forced sync=off, just in case my mirrored Intel S3700 SLOGs weren't cutting it, but still no dice. When MythTV is recording to the storage box, disk activity is very, very light, so it can't be that the drives can't keep up.
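
For anyone who wants to replicate that dataset, it's roughly the following (pool/dataset names are mine; note the actual ZFS property value is "disabled", not "off"):

Code:
  # dedicated dataset for MythTV recordings, with sync writes turned off
  zfs create tank/mythtv
  zfs set sync=disabled tank/mythtv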

When I benchmarked it (making sure to first disable lz4 compression), I achieved 975MB/s reads and 675MB/s writes, using tests large enough (500GB) to make sure I wasn't just measuring cache speed.
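
The benchmark was nothing fancy; something along these lines, run locally on the FreeNAS box (dataset name and sizes are illustrative):

Code:
  # turn off compression so the zeroes aren't just compressed away
  zfs set compression=off tank/bench
  # ~500GB sequential write, then read it back
  dd if=/dev/zero of=/mnt/tank/bench/testfile bs=1m count=500000
  dd if=/mnt/tank/bench/testfile of=/dev/null bs=1m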

Everything else connected to the FreeNAS box performs very well, which leads me to believe there is something specific about MythTV and how it interacts with NFS.

I have temporarily installed an old 1.5TB WD Green drive I had kicking around, dedicated to LiveTV, and everything works well. Recordings still go directly over NFS, and they seem to be working, but they appear to be a bit skippy when watching in-progress recordings.

I'd really rather fix this correctly though, and have everything going directly to FreeNAS.

Does anyone have any suggestions? Does MythTV just not play nice with NFS? Would going with SMB be a better choice?

I'd appreciate any thoughts or recommendations!

Thanks,
Matt
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
So you virtualized FreeNAS and are having problems....

Anyone think this is surprising?
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
So you virtualized FreeNAS and are having problems....

Anyone think this is surprising?

I don't think FreeNAS is the problem. I have seen threads describing similar problems combining MythTV and NFS, some involving FreeNAS too, and the overwhelming majority of them were on bare metal. The fact that I virtualize does not mean every issue is attributable to virtualization. My theory is that MythTV may be very sensitive to network mounts. I have faith these issues can be overcome with correct configuration of either MythTV or FreeNAS.

If you are just going to hop in and blame virtualization without anything to back it up, then you are not being helpful. I understand you personally are opposed to virtualization, and are willing to ignore the many examples of such configurations working over extended periods of time. No one is asking you to virtualize your own setup, but it would be kind of nice if you backed off your zealotry and argumentativeness on this subject.

Don't get me wrong. You are very knowledgeable when it comes to FreeNAS and ZFS, and your help is often appreciated, but the purpose of forums is not to beat down and continually insult those who try different solutions. If you don't want to support people who virtualize, then don't. Allow other users who might want to reply, or who have run into similar situations, to do so.

This is supposed to be a user forum, where users help each other solve issues. If you don't want to help, then maybe your time and energy are better spent elsewhere.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
Do you have a SLOG? If not, use zilstat to see if you're making lots of sync writes. BTW, IOPS on a single RAIDZ2 vdev will suck once you start hitting the spinning rust.
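
zilstat ships with FreeNAS; from the console it's roughly this (the argument is the sampling interval in seconds):

Code:
  # sample ZIL activity once per second; sustained non-zero ops means lots of sync writes
  zilstat 1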
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Do you have a SLOG? If not, use zilstat to see if you're making lots of sync writes. BTW, IOPS on a single RAIDZ2 vdev will suck once you start hitting the spinning rust.
Thank you,

Yes, I do have a set of mirrored S3700s as my SLOG, but I set up a dataset for MythTV with sync=off (my thought process is that if something goes wrong and the system goes down, the in-progress recordings will be junk anyway, so no need for sync). I also have two RAIDz2 vdevs. Does this improve IOPS, or are they just as bad (or worse)?

Appreciate the response,
Matt
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
Thank you,

Yes, I do have a set of mirrored S3700s as my SLOG, but I set up a dataset for MythTV with sync=off (my thought process is that if something goes wrong and the system goes down, the in-progress recordings will be junk anyway, so no need for sync). I also have two RAIDz2 vdevs. Does this improve IOPS, or are they just as bad (or worse)?

Appreciate the response,
Matt
More vdevs = more IOPS. I haven't bothered to read this thread in depth, but it seems that there are too many variables at play right now to figure out your problem. You can try striped mirrors. I would turn sync back on.
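
For illustration, striped mirrors out of your twelve disks would look roughly like this (device names are placeholders):

Code:
  # six 2-way mirror vdevs striped together: ~3x the random IOPS of two RAIDZ2 vdevs,
  # at the cost of capacity (50% usable instead of ~66%)
  zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 \
      mirror da6 da7 mirror da8 da9 mirror da10 da11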
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
More vdevs = more IOPS.

Thank you. I knew adding vdevs helped with sequential reads and writes, but I was not certain about the impact on IOPS.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
I always find it interesting when great gear falls on its face. I'd be isolating things to see what's up. Not sure if you've tested how much bang for the buck you get out of those two drives for L2ARC... but with your nice chunk of RAM it would take a really interesting workload to see a huge uptick... not to mention a single 850 is likely every bit as fast for this scenario. Anyway, I only mention them as they are there, begging you to isolate your mythtv workload on a high-IOPS, low-latency pool. Basically, if a zpool made of those SSDs falls flat, the spinning rust doesn't have a hope, imho.

Your pool should be very high throughput, but 2 vdevs is roughly 2 spinny drives' worth of IOPS, and from your linked data it seems like mythtv is sending enough extra seeks and writes to the main pool to interrupt the stream. No choice but to break things down and isolate to look for the problem. You may be able to easily compare bare-metal performance as well and rule out interference by the hypervisor layer. Guess it depends how badly you want it to work. But I'd want to validate an NFS share from an SSD pool to test. If that works, what do we need to do to get the 'main' pool working acceptably? I'd hit cifs and iscsi as well for testing... but I sometimes go a little crazy ;)

Not a MythTV wiz or I'd offer more. But I do love making things go fast. Good Luck.
 

Tywin

Contributor
Joined
Sep 19, 2014
Messages
163
I don't think FreeNAS is the problem. I have seen threads describing similar problems combining MythTV and NFS, some involving FreeNAS too, and the overwhelming majority of them were on bare metal. The fact that I virtualize does not mean every issue is attributable to virtualization. My theory is that MythTV may be very sensitive to network mounts. I have faith these issues can be overcome with correct configuration of either MythTV or FreeNAS.

If you are just going to hop in and blame virtualization without anything to back it up, then you are not being helpful. I understand you personally are opposed to virtualization, and are willing to ignore the many examples of such configurations working over extended periods of time. No one is asking you to virtualize your own setup, but it would be kind of nice if you backed off your zealotry and argumentativeness on this subject.

Don't get me wrong. You are very knowledgeable when it comes to FreeNAS and ZFS, and your help is often appreciated, but the purpose of forums is not to beat down and continually insult those who try different solutions. If you don't want to support people who virtualize, then don't. Allow other users who might want to reply, or who have run into similar situations, to do so.

This is supposed to be a user forum, where users help each other solve issues. If you don't want to help, then maybe your time and energy are better spent elsewhere.

Very well said. I have been having this sentiment also -- just because a user is virtualizing or running non-ECC RAM does not mean every problem they hit is caused by it.

The easiest way to debunk this is if you can replicate the issue with a bare-metal FreeNAS install. I don't know if you have enough hardware or not to do so, but it would sure remove any ambiguity about virtualization being the cause of your problem.
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
More vdevs = more IOPS. I haven't bothered to read this thread in depth, but it seems that there are too many variables at play right now to figure out your problem. You can try striped mirrors. I would turn sync back on.

I have sync=always set for my root pool, as I don't want to lose or damage any of my important data, but I figured MythTV recordings weren't critical, and if I lost one or a couple, it wouldn't be the end of the world. Besides, if a recording were interrupted mid-stream due to some sort of crash or unexpected shutdown, I wouldn't be interested in watching half of a show anyway.

Is there any specific reason you think sync being enabled for MythTV recordings is a good idea? The database isn't on FreeNAS; it is stored on an SSD locally mounted in the MythBuntu backend. I also do database backups to FreeNAS, to a separate mount located in the sync=always area of my pool, as I believe that data is more critical.
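
To be concrete, the sync layout is roughly this (names are mine):

Code:
  zfs set sync=always tank              # pool-wide default for the important data
  zfs set sync=disabled tank/mythtv     # disposable in-progress recordings
  # tank/backups inherits sync=always from the pool, so the DB dumps stay safe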

Appreciate any thoughts!
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Thank you for your thoughts!

I always find it interesting when great gear falls on its face. I'd be isolating things to see what's up. Not sure if you've tested how much bang for the buck you get out of those two drives for L2ARC... but with your nice chunk of RAM it would take a really interesting workload to see a huge uptick... not to mention a single 850 is likely every bit as fast for this scenario. Anyway, I only mention them as they are there, begging you to isolate your mythtv workload on a high-IOPS, low-latency pool. Basically, if a zpool made of those SSDs falls flat, the spinning rust doesn't have a hope, imho.

Yeah, I didn't think I actually needed striped SSDs for performance. Throughput is going to be limited by the network interfaces anyway (even the 10gig ones, as I find moving to 10gig doesn't scale anywhere near 10x gigabit in practice).

The reason I used them as a striped pair was simply that I already had them from another project that didn't pan out. I had two open slots in my backplane and wanted to make sure my working set fit in my cache, so I put them both in to get 256GB rather than just 128GB. Any extra performance from the striping would likely only show up in local file transfers when SSHing into FreeNAS, but I guess it's a small bonus.

Your pool should be very high throughput, but 2 vdevs is roughly 2 spinny drives' worth of IOPS, and from your linked data it seems like mythtv is sending enough extra seeks and writes to the main pool to interrupt the stream. No choice but to break things down and isolate to look for the problem. You may be able to easily compare bare-metal performance as well and rule out interference by the hypervisor layer. Guess it depends how badly you want it to work. But I'd want to validate an NFS share from an SSD pool to test. If that works, what do we need to do to get the 'main' pool working acceptably? I'd hit cifs and iscsi as well for testing... but I sometimes go a little crazy ;)

I could plug a spare drive into FreeNAS as a standalone drive (create a new temporary UFS pool?) and see if it still has the issue, or possibly even try writing to a separate machine on the network to rule things out.

I could also temporarily export my pool, boot from a USB stick, and re-import it, to test virtualization vs. bare metal.
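
The mechanics of that test should just be (pool name assumed):

Code:
  # in the FreeNAS VM, before shutting the host down
  zpool export tank
  # ...boot the same hardware from a bare-metal FreeNAS USB stick, then:
  zpool import tank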

Tough to find time to do all this, as, while not a production machine, it is often in use as "home production", but I can try to find low-use time slots to do some testing.

Not a MythTV wiz or I'd offer more. But I do love making things go fast. Good Luck.

I definitely appreciate your input.

One thing I have noticed about my performance troubles is that I never have them when a file is only being written. I can record six ~20Mbit streams just fine while playing back a separate recording, and everything works without a hiccup.

The problems seem to occur when doing anything that involves reading and writing the same file at the same time. MythTV does this all the time: LiveTV streams are written to the drive with a small upfront buffer and read back from the same file for playback. Issues also happen when starting to play recordings that have not yet completed.

So, it seems related to doing reads and writes at the same time to the same file. Is there any theoretical reason ZFS shouldn't be good at this? Considering how many people use ZFS to host VMware datastores with disk images that are continually read and written at the same time, I would imagine not, but it is a pattern I have noticed, so I figured I'd bring it up.
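
A crude way to reproduce that pattern outside of MythTV, if anyone wants to try (paths are placeholders, and it assumes pv is installed on the backend):

Code:
  # writer: append to a file on the NFS mount at roughly 20Mbit/s, like a recording
  cat /dev/urandom | pv -qL 2500k > /mnt/myth/test.ts &
  # reader: a few seconds later, stream the same file from the start while it grows
  sleep 5
  tail -c +1 -f /mnt/myth/test.ts > /dev/null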

Thanks again,
Matt
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Very well said. I have been having this sentiment also -- just because a user is virtualizing or running non-ECC RAM does not mean every problem they hit is caused by it.

The easiest way to debunk this is if you can replicate the issue with a bare-metal FreeNAS install. I don't know if you have enough hardware or not to do so, but it would sure remove any ambiguity about virtualization being the cause of your problem.

In order not to derail this thread, I am going to drop this subject here, but it's nice to know I am not the only one who feels this way. The user forums should serve to keep people in the fray, not to drive them away.

As mentioned above, one thing I could test would be to export my pool, boot FreeNAS from a USB stick, re-import it, and see if the problem persists. This would be difficult, as I'd have to find a different machine to install the MythTV backend on. (They are currently both running on the same ESXi host, connected internally using the VMXNet3 virtual networking driver. I also tried using direct-I/O forwarded Intel gigabit NICs, just to make sure VMXNet3 wasn't the problem, as I have had compatibility issues with it in the past, but this didn't help.)

Anyway, thanks for your reply!

--Matt
 

Tywin

Contributor
Joined
Sep 19, 2014
Messages
163
As mentioned above, one thing I could test would be to export my pool, boot FreeNAS from a USB stick, re-import it, and see if the problem persists. This would be difficult, as I'd have to find a different machine to install the MythTV backend on. (They are currently both running on the same ESXi host, connected internally using the VMXNet3 virtual networking driver. I also tried using direct-I/O forwarded Intel gigabit NICs, just to make sure VMXNet3 wasn't the problem, as I have had compatibility issues with it in the past, but this didn't help.)

Anyway, thanks for your reply!

Can you use one of your client boxes to simulate a bare-bones FreeNAS server? You wouldn't need ECC or redundant hard drives just to test basic functionality. I don't have a feel for how much oomph you would need on the hardware side. If you could get something handling the streaming load you originally mentioned, you could see if the issue asserts itself. If not, then I guess you wouldn't be able to tell anything.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Out of curiosity, how many cores are assigned to MythTV and the other VMs? And have you looked at the CPU usage in FreeNAS? 6 cores seems excessive and could actually make contention issues worse (I'd start with 2 and work up from there). Any chance you can look at the ESXi host performance stats while this issue is happening, to look for clues?
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Out of curiosity, how many cores are assigned to MythTV and the other VMs? And have you looked at the CPU usage in FreeNAS? 6 cores seems excessive and could actually make contention issues worse (I'd start with 2 and work up from there). Any chance you can look at the ESXi host performance stats while this issue is happening, to look for clues?

MythTV has 4 cores assigned to it, as it occasionally gets some semi-heavy workloads searching for ads in recorded files and during transcodes (which I haven't set up yet).

FreeNAS got 6 cores, as I found in the past that it can be rather heavy on the CPU during local speed tests. I was concerned that the wimpy little L5640 cores (2.26GHz Westmere era, with turbo up to 2.8GHz) wouldn't be sufficient to keep up with all the parity calculations required for high-speed writes to the dual RAIDz2 vdevs, so I probably overprovisioned a little. I probably haven't seen the FreeNAS guest get more than 60% utilization on those six cores, but I figured having a bit of a margin wouldn't be a bad thing.

Everything else has one or two cores assigned to it. From a core-count perspective, my 12-core (24 logical with HT) server is a little overkill for what I'm doing, compared to the vCPU-to-physical-core ratios I have seen in the IT world:

This is all I have running on it right now:
  • pfSense (2 cores, 1GB RAM)
  • FreeNAS (6 cores, 72GB RAM)
  • MythTV (4 cores, 4GB RAM)
  • General Purpose Ubuntu Server (2 cores, 2GB RAM)
  • Dedicated UPS control Ubuntu Server (for automated shutdowns) (1 core, 128MB RAM)
  • Dedicated, isolated SFTP Ubuntu Server (1 core, 128MB RAM)
  • (in progress) Dedicated crashplan backup Ubuntu server (undetermined)

So, as you can see, I have assigned 16 cores to a 12-core (24 logical) machine, many of which spend most of their time idling. The only guests that see significant CPU load are FreeNAS, MythTV, and occasionally pfSense, when the NAT table is large and I am maxing out my 150/150Mbit connection.

Many would consider that somewhat underutilized. I could scale them back, but I don't want to hurt performance. pfSense could probably run on one core, but I used two because I don't want some local script or update spiking and causing a performance drop. The same can be said for the general-purpose Ubuntu Server. FreeNAS could probably drop down a little, but probably no lower than 4 based on utilization during high traffic. MythTV could drop down to two if necessary, but I'd have to reduce the number of simultaneous jobs in the processing queue from 3 down to 1. (I like having one core for all backend activity and a separate one for all processing activity, so they don't fight with each other and leave recordings with insufficient CPU resources.)

Based on the explanation of my setup, do you still think this is a potential issue?

Most of the assigned cores spend most of their time idling and only rarely spike, rarely at the same time. The exceptions are probably pfSense, which may spike together with whatever else is accessing the outside world, and FreeNAS, which will spike together with whatever is accessing the storage. I've probably never seen overall system utilization go over ~25%. I figured that since I was dealing with a lot of idling processes, I wouldn't have a problem with cores conflicting with each other as ESXi does its magic and dynamically reallocates things.

Appreciate the help!
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Are you able to look at esxtop stats while you're getting these stalls? Specifically, look for %RDY and %CSTP under CPU, and N%L under MEM.

I'm curious to see whether downsizing the FreeNAS VM to 48GB and pinning it to a single NUMA node would be beneficial, but that's a pretty big change to make as an experiment.
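
If it helps, esxtop runs from an SSH session on the host; roughly like this (the batch-mode flags shown are just one way to capture a window of samples):

Code:
  esxtop                                    # interactive: 'c' = CPU view (%RDY, %CSTP), 'm' = memory (N%L)
  esxtop -b -d 5 -n 60 > esxtop-stats.csv   # batch: 60 samples at 5s intervals to CSV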
 

Jason Hamilton

Contributor
Joined
Jul 4, 2013
Messages
141
So just a quick reply here. I have my MythTV on a bare-metal server alongside my FreeNAS box. The MythTV machine has an Intel Core 2 Duo in it (not sure on the specs, as I stuck it in my crawlspace over a year ago and haven't touched it) with 4GB of RAM, and my FreeNAS is a Xeon L5420 with 12GB of ECC RAM. I know I am nowhere near the hardware that the OP has here; I could only wish, lol.

However, I too experience the same exact issue when watching a file while it's being recorded. My LiveTV goes to local storage, but my recordings go to the NAS, on a 1TB mirror. I didn't want to put the Myth drives into the main pool due to how active those drives are. So I can confirm that this happens to me as well. I also have the flag turned on to search for commercials while recording. I've recorded up to 4 shows at once; so far I haven't attempted to max my tuners out at 6 recordings at once, although I'm sure it could handle it well.

I sometimes wonder, if the MythTV jail were to come back, whether that would fix this? Just a thought.
 

ljw1

Dabbler
Joined
Apr 29, 2012
Messages
16
Everything you have discussed could be attributed to the copy-on-write nature of ZFS. Every time the file is written to, a new copy of the written blocks is made, which will not allow access until it is completed. NFS has an annoying behaviour of locking the file system while it is being used, which could easily time out the playback machine. As for your NFS settings, the defaults are woefully small for rsize and wsize; please change them to a reasonable value. I have no idea why they are left as the defaults, as they are only suitable for a 10Mbit network, not a 10G network. The other thing you could try is mounting the share using a different protocol, e.g. CIFS or sshfs, and seeing if the behaviour persists.
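
For the protocol comparison, something like this on the backend (server name, share, user, and mountpoints are placeholders):

Code:
  # CIFS/SMB mount of the same dataset
  mount -t cifs //freenas/mythtv /mnt/myth-cifs -o username=myth
  # sshfs mount for comparison
  sshfs myth@freenas:/mnt/tank/mythtv /mnt/myth-sshfs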
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Are you able to look at esxtop stats while you're getting these stalls? Specifically, look for %RDY and %CSTP under CPU, and N%L under MEM.

I'm curious to see whether downsizing the FreeNAS VM to 48GB and pinning it to a single NUMA node would be beneficial, but that's a pretty big change to make as an experiment.


Thank you for the suggestions. I will look at esxtop.

I highly doubt NUMA is the issue here, as I had the exact same problem on my previous build (the one I was using back when I started this thread), which was a single-socket FX-8350 with 32GB total RAM, 25GB of it assigned to FreeNAS. There's no NUMA in single-socket systems. I will give it a try, though.
 