praecorloth
Contributor
Hello,
I'm running into a couple of issues and I was wondering if people wouldn't mind sharing some knowledge. I've been around the internet a couple of times on these issues and I'm drawing a blank. First off, though, my actual issue is with ZFS on Linux, not a FreeNAS box. The questions I have relate to tools available on both systems, so I hope it's okay to ask about those tools here.
The first question I have is just about interpreting the output of arcstat.py.
Code:
time     read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
17:31:19    3     3    100     3  100     0    0     1  100   3.7G  3.7G
17:31:20 2.3K   467     20   434   18    33   76    10   58   3.7G  3.7G
17:31:21 1.1K   791     71   502   62   289   98     8   38   3.7G  3.7G
17:31:22 1.3K   832     63   468   50   364   92    38   20   3.7G  3.7G
17:31:23 1.9K  1.2K     66   643   51   598   97    28   11   3.5G  3.5G
17:31:24 2.2K   749     34   515   27   234   72    33   26   3.5G  3.5G
17:31:25 2.1K  1.0K     50   446   31   595   93    32    8   3.5G  3.5G
17:31:26 1.6K   880     54   862   53    18   66    34   82   3.5G  3.5G
I've been doing a lot of reading recently, and from what I recall, arcsz is the current size of the ARC and c is the maximum size. In this particular case I have a server that maxes out its ARC (not a big shock given the 3.7G size), but once it hits that, the actual ARC size shrinks and the max shrinks right along with it. At its lowest it was about 50MB. Crazy. I just want to verify that my understanding is correct: that c is supposed to be the ARC max, and that the ARC max should not change dynamically without human intervention.
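To keep an eye on this between arcstat samples, I've been pulling the raw counters straight out of the kstats with a quick Python sketch like the one below. It assumes a ZFS on Linux box where /proc/spl/kstat/zfs/arcstats exists, and that size, c, and c_max mean what I think they mean (current size, target, and hard ceiling respectively):
Code:
#!/usr/bin/env python3
# Minimal sketch (assumption: ZFS on Linux exposes /proc/spl/kstat/zfs/arcstats).
# Reads the raw ARC kstats so I can compare the current ARC size against c and
# c_max without going through arcstat.py.

def read_arcstats(path="/proc/spl/kstat/zfs/arcstats"):
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:        # skip the two kstat header lines
            name, _type, value = line.split()
            stats[name] = int(value)
    return stats

if __name__ == "__main__":
    s = read_arcstats()
    gib = 1024 ** 3
    print(f"size  (current ARC) : {s['size'] / gib:6.2f} GiB")
    print(f"c     (ARC target?) : {s['c'] / gib:6.2f} GiB")
    print(f"c_max (ARC ceiling) : {s['c_max'] / gib:6.2f} GiB")

If c and c_max really are two separate knobs, then maybe only one of them is the "max" I'm thinking of, which is exactly the kind of thing I'd like confirmed.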
The other has to do with disk performance, zvols, and iostat.
Code:
Device:  rrqm/s  wrqm/s    r/s     w/s    rkB/s   wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
sda        0.00    0.00  77.00  115.00   308.00  640.00     9.88     2.02  10.33    9.92   10.61   3.58  68.80
sdb        0.00    0.00  81.00  116.00   324.00  644.00     9.83     1.32   6.72    6.42    6.93   2.50  49.20
...
sde        0.00    0.00  77.00  117.00   308.00  640.00     9.77     1.16   6.25    5.25    6.91   2.35  45.60
sdf        0.00    0.00  78.00  116.00   312.00  640.00     9.81     1.25   6.45    5.64    7.00   2.47  48.00
...
zd32       0.00    0.00   0.00  197.00     0.00  788.00     8.00     1.09   5.54    0.00    5.54   5.06  99.60
In FreeBSD the last column is %b, for %busy, like we see in gstat. In this case, since it's Linux, it's %util. What I'm wondering about here is the zvol performance. This particular snippet was taken when the system was largely bored out of its mind; there was just one VM doing Windows updates. We see the zvol for that VM is pegged at just shy of 100%, while the underlying disks aren't nearly as busy.
My question here is: should that be the case? Shouldn't a zvol's utilization or busy percentage only max out when the underlying disks are maxed out? Or are there valid scenarios in which this wouldn't be the case?
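For what it's worth, here is my (possibly wrong) understanding of where Linux gets that %util number, as a small Python sketch. It samples the io_ticks counter (time spent doing I/Os, in ms) from /proc/diskstats twice and divides by the interval, which is how I believe iostat computes it; the device names sda and zd32 are just the ones from my output above:
Code:
#!/usr/bin/env python3
# Minimal sketch of how I understand %util: the fraction of wall-clock time a
# block device had at least one request in flight, taken from the io_ticks
# column of /proc/diskstats. The device names below are assumptions from my setup.

import time

def io_ticks(device):
    """Milliseconds the device spent with at least one I/O in flight."""
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return int(fields[12])    # 10th stat field after the name: io_ticks (ms)
    raise ValueError(f"device {device!r} not found in /proc/diskstats")

def util_percent(device, interval=1.0):
    before = io_ticks(device)
    time.sleep(interval)
    after = io_ticks(device)
    return 100.0 * (after - before) / (interval * 1000.0)

if __name__ == "__main__":
    for dev in ("sda", "zd32"):           # assumed device names from the iostat output above
        print(f"{dev:<5} {util_percent(dev):5.1f} %util")

If that's the right counter, then a zvol only needs one outstanding request for the whole interval to show ~100%, regardless of how hard the disks underneath are working. But I may be misreading it, hence the question.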
Just to be clear, I know this is the FreeNAS forums, and as such I don't expect anyone to jump in and start troubleshooting the root cause of these problems on Linux. What I'm looking for are potential, ZFS-specific reasons for the output above.