Sudden SAN Performance Drop


Steven Sedory
Hi all. Just started having some big performance issues.

Setup:
FreeNAS-9.10.1-U4
2x E5-2620 v3
256GB RAM
Pool:
14x 4TB 7.2k NL-SAS drives in 7 mirror vdevs
2x 400GB P3700s mirrored for SLOG
1x 800GB P3700 for L2ARC (cache)
28TB usable; committing to only use 50%, so 14TB really usable

This was all set up in December. We have a 3-node Hyper-V cluster that accesses the SAN via iSCSI. I wish I had metrics to share, but I don't, so the best way I can explain the issue is: the VMs used to boot up (all 50 or so) in about 5 minutes. It now takes over 2 hours.

I believe the issue might be due to my poorly chosen way to chop up the "usable" 50% space.

What I did, for the purpose of... I guess separating virtual disks (which I realize doesn't really matter, as they're all on the same zpool), was to create a number of zvols totaling just under 50% of the total pool. There were no problems at all up front; performance was great. But very recently things slowed waaayyyy down. The only thing I see is that a couple of those zvols are right around 80% full, though the pool itself is only at 18% or so.

So, it would seem the 80% rule applies to an individual zvol, regardless of the free space on the underlying zpool. Is this true? If so, I'm going to work on moving some virtual disks around, and then expanding or recreating the zvols somehow. If not, I need to dig further.
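(For reference, per-zvol usage versus the pool can be checked from the shell; a rough sketch, assuming the zvols all live under vol0. The referenced vs. volsize numbers are what matter for the "zvol at 80%" question.)

Code:
# How full is each zvol relative to its advertised size?
zfs list -r -t volume -o name,volsize,referenced,refreservation,compressratio vol0

# Pool-wide capacity and fragmentation
zpool list vol0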
 

Steven Sedory
Here's the zpool list info:

Code:
NAME   SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
vol0   25.4T  4.67T  20.7T  -         23%   18%  1.00x  ONLINE  /mnt
 

Steven Sedory
No, this is still going on :/

I've set sync=standard for now, which seems to have helped some, but that isn't the solution.
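(For reference, sync is just a per-dataset ZFS property, so it can be checked and flipped per zvol from the shell; a sketch, with vol0/vm-store as a hypothetical zvol name. The same toggle is exposed in the FreeNAS GUI.)

Code:
# Show the current sync setting for the pool and everything under it
zfs get -r sync vol0

# Change it on a single zvol (vol0/vm-store is a placeholder name)
zfs set sync=standard vol0/vm-store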
 

rs225
What are the guest OSes, and are they defragging themselves? What is the blocksize on the zvols?
 

Steven Sedory
Guest OS:
- 47 total VMs on the cluster
- 35 Windows VMs (7, 10, 2008R2, 2012R2)
- 12 Linux (CentOS, Debian, Ubuntu)

Defrag:
Quite possibly. Why on earth haven't I checked this already? I've meant to do this from the beginning, but honestly totally forgot. Beyond that, I have the iSCSI target presenting the LUNs as SSDs, to help prevent defragging. But I'm guessing that doesn't get passed on to the guest OSes? If not, then yes, we can assume at least most of the Windows VMs are defragging away.

Now that said, with the very large amount of free space I still have available, would defrag still be a probable cause?

Blocksize:
zfs get volblocksize shows 16K for each zvol. However, under extents, the Logical Block Size for each is set to 512.
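(For completeness, the volblocksize of every zvol can be listed in one shot; a sketch assuming they all live under vol0. The 512-byte Logical Block Size on the extent is what gets reported to the iSCSI initiators and is separate from the zvol's volblocksize.)

Code:
zfs get -r -t volume volblocksize vol0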

Thanks in advance.

UPDATE: I went through the Windows VMs and disabled defrag. About half of them already had it disabled (from a few years back). Of the ones that didn't, about half had never run the task; the other half had been running it.
 

darkwarrior
Hello,

Indeed, you still have a lot of free space on your pool. The "keep it under 50% full" performance best practice applies to the whole pool, not to each individual zvol.

Concerning the defrag matter:
It could indeed cause much more fragmentation, but it should not be too much of a worry, since FreeNAS presents the LUNs as "SSDs" by default.
All Windows versions since Windows 7 and 2008R2 have a scheduled defragmentation task, but it is skipped on any volume reported as an SSD.
This is why you saw that it never ran on most systems.

Concerning the performance:
1. What are the symptoms? Do the VMs seem laggy/slow?
Did you measure and compare the disk throughput between now and before?

2. How do gstat and/or zpool iostat -v <poolname> 1 look? (Example invocations below.)
Are all drives busy? Maybe some of them more than others?

3. Did you try setting up a new zvol, moving a VM onto it, and seeing if it reacts the same?
What about the swap/pagefile of these machines? Is it stored with the VM, or on a dedicated datastore?
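(For reference, the two commands above can be run from an SSH session roughly like this; a sketch, with vol0 as the pool name.)

Code:
# Per-disk busy %, IOPS and latency, refreshed every second
gstat -I 1s

# Per-vdev IOPS and bandwidth, one sample per second
zpool iostat -v vol0 1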
 

Steven Sedory
Makes sense why most weren't defragging.

1. Yes, laggy/slow. Unfortunately, I don't have any metrics to share from before, so the best way to explain it is how the VMs have slowed down. When this system was set up, there was time for install and some burn-in, and then it went right into production. The best way to describe that experience is that most everything booted as if running on SSDs, and then performed that way afterwards. The way I knew something went sour was a few things: Asterisk servers started giving users slowed-down voicemails along with some call-quality issues, and a complete startup of all 47 VMs took over 2 hours when it used to take 5 or maybe 10 minutes. We've since had a handful of users complain about performance as well.

I wish it were more scientific. To help, we can go offline tonight if someone can give me a few good tests to run that will help answer the question.

Again, I didn't measure throughput before, but now I'm getting 10-20 MB/s transfer speeds when copying some 2GB .iso files from one drive to another within the same server, one virtual disk to the other, both residing on the same zvol. When I tested it just now, it started around 150 MB/s, but within a few seconds dropped to 10-20; it shot back up towards the end.

2. They all fluctuate. In gstat, it's all over the place: over a minute's time I see everything from every drive green to every drive red around 99% busy, and everything in between. In zpool iostat, all drives are doing about the same amount of work.

3. I have not tried setting up another zvol yet.

Yes, the swap for each VM does exist on the same volume as its OS. How should I have it configured?
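(Regarding good tests for tonight's offline window: a simple sequential write/read from the FreeNAS shell, sketched below, gives a repeatable baseline. It bypasses the iSCSI path entirely, and lz4 compression inflates the numbers for zeroed data, so treat the results as a ceiling rather than the real VM experience; the file path is just an example.)

Code:
# Sequential write of a ~4 GiB test file
dd if=/dev/zero of=/mnt/vol0/ddtest bs=1m count=4096

# Sequential read of the same file back, then clean up
dd if=/mnt/vol0/ddtest of=/dev/null bs=1m
rm /mnt/vol0/ddtest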
 

Steven Sedory
Also, I'd like to add that the drives rarely go over 3 MB/s. It's like they're too choked with IO to reach normal transfer speeds.

And for most of the VMs, if I look at the disk response time inside the guest OS, it's 20-50 ms (at least right now). So it seems like there may be an IO issue.

I'm using MPIO with round robin. Each of the 3 nodes has two 10G direct twinax connections to the SAN.
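(To rule out the network leg, a quick iperf run between a Hyper-V node and the SAN might also be worth doing; a sketch assuming iperf is available on both ends, with 10.0.0.1 standing in for the SAN's iSCSI-facing address.)

Code:
# On the FreeNAS box
iperf -s

# On a Hyper-V node: 4 parallel streams for 30 seconds
iperf -c 10.0.0.1 -P 4 -t 30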
 

darkwarrior

The fact that the transfers start at a decent speed and then suddenly fall off is actually interesting.
That could indicate that the transaction groups (TXGs) are filling up in RAM, and then we hit a speed bump when the data actually has to be written out to disk,
maybe because the pool is busy handling reads instead of concentrating on writes.

Concerning the swap:
It's common practice to configure the servers to use a dedicated swap drive and move those virtual disk files (VMDK/VHDX) off to another datastore to reduce the IOs.

Could you actually post a run of zpool iostat -v <poolname> 1?
It also shows read/write IOPS, so that could tell us a lot.

How are the ARC/L2ARC stats looking?
Run arc_summary.py to get those.

Is the disk response time always hovering around 20-50ms?
Did you have a look at the stats over a week/month?

When did you actually start hitting that problem?
Or did the issue appear slowly as the number of VMs grew?
 

Steven Sedory
I don't think it's a TXG issue, but heck, what do I know. I've been scratching my head about this for two weeks now. Note that things had already slowed way down before I set sync=standard (just a few days ago); it had been sync=always since the SAN was set up in December. I have two 400GB Intel P3700s mirrored for SLOG.

So for the swaps, I essentially need to set up a separate storage server?

Code:
[root@hvc01san01] ~# zpool iostat -v 1
  capacity  operations  bandwidth
pool  alloc  free  read  write  read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot  686M  28.3G  0  0  1.63K  10.8K
  mirror  686M  28.3G  0  0  1.63K  10.8K
  gptid/dd322f3b-b78c-11e6-8e2f-b8ca3a6c6380  -  -  0  0  1.55K  10.8K
  gptid/dd5d1e5e-b78c-11e6-8e2f-b8ca3a6c6380  -  -  0  0  1.21K  10.8K
--------------------------------------  -----  -----  -----  -----  -----  -----
vol0  4.75T  20.6T  403  1.07K  4.90M  13.8M
  mirror  690G  2.95T  57  131  712K  945K
  gptid/a8a450bb-b871-11e6-a9f6-b8ca3a6c6380  -  -  23  23  373K  946K
  gptid/ac876fb5-b871-11e6-a9f6-b8ca3a6c6380  -  -  23  23  375K  946K
  mirror  695G  2.95T  57  131  718K  947K
  gptid/b06952c9-b871-11e6-a9f6-b8ca3a6c6380  -  -  23  23  378K  947K
  gptid/b44661ae-b871-11e6-a9f6-b8ca3a6c6380  -  -  23  23  378K  947K
  mirror  696G  2.95T  57  131  719K  946K
  gptid/b822fcab-b871-11e6-a9f6-b8ca3a6c6380  -  -  23  23  378K  946K
  gptid/bc031cee-b871-11e6-a9f6-b8ca3a6c6380  -  -  23  23  378K  946K
  mirror  694G  2.95T  57  131  716K  944K
  gptid/bfe22c41-b871-11e6-a9f6-b8ca3a6c6380  -  -  23  23  377K  944K
  gptid/c3c8a78a-b871-11e6-a9f6-b8ca3a6c6380  -  -  23  23  377K  944K
  mirror  695G  2.95T  57  131  718K  945K
  gptid/c79fe9e8-b871-11e6-a9f6-b8ca3a6c6380  -  -  23  23  379K  946K
  gptid/cb7d08d4-b871-11e6-a9f6-b8ca3a6c6380  -  -  23  23  377K  946K
  mirror  694G  2.95T  57  131  717K  945K
  gptid/cf5e8f11-b871-11e6-a9f6-b8ca3a6c6380  -  -  23  23  377K  946K
  gptid/d344af65-b871-11e6-a9f6-b8ca3a6c6380  -  -  23  23  378K  946K
  mirror  696G  2.95T  57  131  719K  947K
  gptid/d729d3ab-b871-11e6-a9f6-b8ca3a6c6380  -  -  23  23  378K  947K
  gptid/db115396-b871-11e6-a9f6-b8ca3a6c6380  -  -  23  23  378K  947K
logs  -  -  -  -  -  -
  mirror  145M  370G  0  177  91  7.30M
  gptid/db990d37-b871-11e6-a9f6-b8ca3a6c6380  -  -  0  177  93  7.30M
  gptid/dbc91c5a-b871-11e6-a9f6-b8ca3a6c6380  -  -  0  177  93  7.30M
cache  -  -  -  -  -  -
  gptid/dbf613a1-b871-11e6-a9f6-b8ca3a6c6380  644G  101G  185  171  2.25M  4.64M
--------------------------------------  -----  -----  -----  -----  -----  -----

  capacity  operations  bandwidth
pool  alloc  free  read  write  read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot  686M  28.3G  0  0  0  0
  mirror  686M  28.3G  0  0  0  0
  gptid/dd322f3b-b78c-11e6-8e2f-b8ca3a6c6380  -  -  0  0  0  0
  gptid/dd5d1e5e-b78c-11e6-8e2f-b8ca3a6c6380  -  -  0  0  0  0
--------------------------------------  -----  -----  -----  -----  -----  -----
vol0  4.75T  20.6T  549  2.96K  6.08M  21.5M
  mirror  690G  2.95T  64  427  938K  2.95M
  gptid/a8a450bb-b871-11e6-a9f6-b8ca3a6c6380  -  -  27  95  545K  2.95M
  gptid/ac876fb5-b871-11e6-a9f6-b8ca3a6c6380  -  -  19  95  535K  2.95M
  mirror  695G  2.95T  172  292  1.74M  2.23M
  gptid/b06952c9-b871-11e6-a9f6-b8ca3a6c6380  -  -  77  19  1.13M  2.23M
  gptid/b44661ae-b871-11e6-a9f6-b8ca3a6c6380  -  -  69  19  718K  2.23M
  mirror  696G  2.95T  132  421  1.69M  2.51M
  gptid/b822fcab-b871-11e6-a9f6-b8ca3a6c6380  -  -  61  79  959K  2.51M
  gptid/bc031cee-b871-11e6-a9f6-b8ca3a6c6380  -  -  55  79  828K  2.51M
  mirror  694G  2.95T  89  579  957K  2.97M
  gptid/bfe22c41-b871-11e6-a9f6-b8ca3a6c6380  -  -  41  84  675K  2.97M
  gptid/c3c8a78a-b871-11e6-a9f6-b8ca3a6c6380  -  -  33  84  454K  2.97M
  mirror  695G  2.95T  38  473  400K  2.55M
  gptid/c79fe9e8-b871-11e6-a9f6-b8ca3a6c6380  -  -  26  64  352K  2.56M
  gptid/cb7d08d4-b871-11e6-a9f6-b8ca3a6c6380  -  -  8  66  66.2K  2.57M
  mirror  694G  2.95T  22  326  199K  2.31M
  gptid/cf5e8f11-b871-11e6-a9f6-b8ca3a6c6380  -  -  6  66  31.1K  2.31M
  gptid/d344af65-b871-11e6-a9f6-b8ca3a6c6380  -  -  15  67  168K  2.31M
  mirror  696G  2.95T  28  376  224K  2.79M
  gptid/d729d3ab-b871-11e6-a9f6-b8ca3a6c6380  -  -  11  50  106K  2.79M
  gptid/db115396-b871-11e6-a9f6-b8ca3a6c6380  -  -  16  50  118K  2.79M
logs  -  -  -  -  -  -
  mirror  145M  370G  0  132  0  3.17M
  gptid/db990d37-b871-11e6-a9f6-b8ca3a6c6380  -  -  0  132  0  3.17M
  gptid/dbc91c5a-b871-11e6-a9f6-b8ca3a6c6380  -  -  0  132  0  3.17M
cache  -  -  -  -  -  -
  gptid/dbf613a1-b871-11e6-a9f6-b8ca3a6c6380  644G  101G  36  256  221K  4.71M
--------------------------------------  -----  -----  -----  -----  -----  -----

  capacity  operations  bandwidth
pool  alloc  free  read  write  read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot  686M  28.3G  0  0  0  0
  mirror  686M  28.3G  0  0  0  0
  gptid/dd322f3b-b78c-11e6-8e2f-b8ca3a6c6380  -  -  0  0  0  0
  gptid/dd5d1e5e-b78c-11e6-8e2f-b8ca3a6c6380  -  -  0  0  0  0
--------------------------------------  -----  -----  -----  -----  -----  -----
vol0  4.75T  20.6T  2.49K  480  35.0M  3.52M
  mirror  690G  2.95T  266  112  3.62M  296K
  gptid/a8a450bb-b871-11e6-a9f6-b8ca3a6c6380  -  -  65  69  1.81M  300K
  gptid/ac876fb5-b871-11e6-a9f6-b8ca3a6c6380  -  -  65  69  2.40M  300K
  mirror  695G  2.95T  257  112  3.22M  296K
  gptid/b06952c9-b871-11e6-a9f6-b8ca3a6c6380  -  -  58  37  2.24M  300K
  gptid/b44661ae-b871-11e6-a9f6-b8ca3a6c6380  -  -  51  36  1.41M  300K
  mirror  696G  2.95T  320  0  4.81M  0
  gptid/b822fcab-b871-11e6-a9f6-b8ca3a6c6380  -  -  86  0  2.54M  0
  gptid/bc031cee-b871-11e6-a9f6-b8ca3a6c6380  -  -  84  0  2.85M  0
  mirror  694G  2.95T  452  9  6.04M  16.5K
  gptid/bfe22c41-b871-11e6-a9f6-b8ca3a6c6380  -  -  108  6  3.89M  16.5K
  gptid/c3c8a78a-b871-11e6-a9f6-b8ca3a6c6380  -  -  95  6  2.89M  16.5K
  mirror  695G  2.95T  512  16  7.11M  33.9K
  gptid/c79fe9e8-b871-11e6-a9f6-b8ca3a6c6380  -  -  126  13  4.67M  31.9K
  gptid/cb7d08d4-b871-11e6-a9f6-b8ca3a6c6380  -  -  91  12  3.61M  20.9K
  mirror  694G  2.95T  436  9  5.85M  16.5K
  gptid/cf5e8f11-b871-11e6-a9f6-b8ca3a6c6380  -  -  99  6  3.69M  16.5K
  gptid/d344af65-b871-11e6-a9f6-b8ca3a6c6380  -  -  95  6  3.04M  16.5K
  mirror  696G  2.95T  308  112  4.35M  296K
  gptid/d729d3ab-b871-11e6-a9f6-b8ca3a6c6380  -  -  63  62  3.81M  300K
  gptid/db115396-b871-11e6-a9f6-b8ca3a6c6380  -  -  43  62  1.17M  300K
logs  -  -  -  -  -  -
  mirror  121M  370G  0  105  0  2.59M
  gptid/db990d37-b871-11e6-a9f6-b8ca3a6c6380  -  -  0  105  0  2.59M
  gptid/dbc91c5a-b871-11e6-a9f6-b8ca3a6c6380  -  -  0  105  0  2.59M
cache  -  -  -  -  -  -
  gptid/dbf613a1-b871-11e6-a9f6-b8ca3a6c6380  644G  101G  91  330  1016K  8.35M
--------------------------------------  -----  -----  -----  -----  -----  -----

  capacity  operations  bandwidth
pool  alloc  free  read  write  read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot  686M  28.3G  0  0  0  0
  mirror  686M  28.3G  0  0  0  0
  gptid/dd322f3b-b78c-11e6-8e2f-b8ca3a6c6380  -  -  0  0  0  0
  gptid/dd5d1e5e-b78c-11e6-8e2f-b8ca3a6c6380  -  -  0  0  0  0
--------------------------------------  -----  -----  -----  -----  -----  -----
vol0  4.75T  20.6T  464  658  5.34M  23.7M
  mirror  690G  2.95T  81  41  771K  160K
  gptid/a8a450bb-b871-11e6-a9f6-b8ca3a6c6380  -  -  32  5  322K  160K
  gptid/ac876fb5-b871-11e6-a9f6-b8ca3a6c6380  -  -  39  9  501K  277K
  mirror  695G  2.95T  43  138  275K  1.68M
  gptid/b06952c9-b871-11e6-a9f6-b8ca3a6c6380  -  -  17  14  117K  1.68M
  gptid/b44661ae-b871-11e6-a9f6-b8ca3a6c6380  -  -  25  14  158K  1.68M
  mirror  696G  2.95T  31  31  227K  425K
  gptid/b822fcab-b871-11e6-a9f6-b8ca3a6c6380  -  -  21  11  148K  425K
  gptid/bc031cee-b871-11e6-a9f6-b8ca3a6c6380  -  -  9  15  79.8K  735K
  mirror  694G  2.95T  14  10  115K  134K
  gptid/bfe22c41-b871-11e6-a9f6-b8ca3a6c6380  -  -  2  14  16.5K  717K
  gptid/c3c8a78a-b871-11e6-a9f6-b8ca3a6c6380  -  -  11  15  98.2K  341K
  mirror  695G  2.95T  28  28  419K  311K
  gptid/c79fe9e8-b871-11e6-a9f6-b8ca3a6c6380  -  -  15  15  328K  339K
  gptid/cb7d08d4-b871-11e6-a9f6-b8ca3a6c6380  -  -  8  13  123K  311K
  mirror  694G  2.95T  101  13  1.40M  197K
  gptid/cf5e8f11-b871-11e6-a9f6-b8ca3a6c6380  -  -  35  13  767K  1.25M
  gptid/d344af65-b871-11e6-a9f6-b8ca3a6c6380  -  -  38  15  801K  644K
  mirror  696G  2.95T  160  113  2.17M  635K
  gptid/d729d3ab-b871-11e6-a9f6-b8ca3a6c6380  -  -  63  16  1.27M  946K
  gptid/db115396-b871-11e6-a9f6-b8ca3a6c6380  -  -  68  15  1.21M  635K
logs  -  -  -  -  -  -
  mirror  121M  370G  0  278  0  20.2M
  gptid/db990d37-b871-11e6-a9f6-b8ca3a6c6380  -  -  0  278  0  20.2M
  gptid/dbc91c5a-b871-11e6-a9f6-b8ca3a6c6380  -  -  0  278  0  20.2M
cache  -  -  -  -  -  -
  gptid/dbf613a1-b871-11e6-a9f6-b8ca3a6c6380  644G  101G  70  468  712K  8.41M
--------------------------------------  -----  -----  -----  -----  -----  -----

  capacity  operations  bandwidth
pool  alloc  free  read  write  read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot  686M  28.3G  0  0  0  0
  mirror  686M  28.3G  0  0  0  0
  gptid/dd322f3b-b78c-11e6-8e2f-b8ca3a6c6380  -  -  0  0  0  0
  gptid/dd5d1e5e-b78c-11e6-8e2f-b8ca3a6c6380  -  -  0  0  0  0
--------------------------------------  -----  -----  -----  -----  -----  -----
vol0  4.75T  20.6T  496  5.01K  4.12M  191M
  mirror  690G  2.95T  222  186  1.57M  2.21M
  gptid/a8a450bb-b871-11e6-a9f6-b8ca3a6c6380  -  -  114  66  904K  2.21M
  gptid/ac876fb5-b871-11e6-a9f6-b8ca3a6c6380  -  -  95  62  851K  2.10M
  mirror  695G  2.95T  41  450  333K  4.35M
  gptid/b06952c9-b871-11e6-a9f6-b8ca3a6c6380  -  -  20  36  229K  4.35M
  gptid/b44661ae-b871-11e6-a9f6-b8ca3a6c6380  -  -  21  36  104K  4.35M
  mirror  696G  2.95T  99  405  1.12M  3.87M
  gptid/b822fcab-b871-11e6-a9f6-b8ca3a6c6380  -  -  40  107  547K  3.87M
  gptid/bc031cee-b871-11e6-a9f6-b8ca3a6c6380  -  -  35  108  745K  3.78M
  mirror  694G  2.95T  26  483  341K  4.90M
  gptid/bfe22c41-b871-11e6-a9f6-b8ca3a6c6380  -  -  11  98  279K  4.35M
  gptid/c3c8a78a-b871-11e6-a9f6-b8ca3a6c6380  -  -  9  112  104K  5.60M
  mirror  695G  2.95T  26  566  268K  4.81M
  gptid/c79fe9e8-b871-11e6-a9f6-b8ca3a6c6380  -  -  15  102  152K  4.78M
  gptid/cb7d08d4-b871-11e6-a9f6-b8ca3a6c6380  -  -  10  104  116K  4.81M
  mirror  694G  2.95T  39  583  335K  6.04M
  gptid/cf5e8f11-b871-11e6-a9f6-b8ca3a6c6380  -  -  28  79  212K  5.02M
  gptid/d344af65-b871-11e6-a9f6-b8ca3a6c6380  -  -  11  77  123K  5.62M
  mirror  696G  2.95T  40  572  181K  5.54M
  gptid/d729d3ab-b871-11e6-a9f6-b8ca3a6c6380  -  -  14  99  88.0K  5.24M
  gptid/db115396-b871-11e6-a9f6-b8ca3a6c6380  -  -  26  104  93.4K  5.66M
logs  -  -  -  -  -  -
  mirror  121M  370G  0  1.85K  0  160M
  gptid/db990d37-b871-11e6-a9f6-b8ca3a6c6380  -  -  0  1.85K  0  160M
  gptid/dbc91c5a-b871-11e6-a9f6-b8ca3a6c6380  -  -  0  1.85K  0  160M
cache  -  -  -  -  -  -
  gptid/dbf613a1-b871-11e6-a9f6-b8ca3a6c6380  644G  101G  321  538  4.39M  11.0M
--------------------------------------  -----  -----  -----  -----  -----  -----

  capacity  operations  bandwidth
pool  alloc  free  read  write  read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot  686M  28.3G  0  0  0  0
  mirror  686M  28.3G  0  0  0  0
  gptid/dd322f3b-b78c-11e6-8e2f-b8ca3a6c6380  -  -  0  0  0  0
  gptid/dd5d1e5e-b78c-11e6-8e2f-b8ca3a6c6380  -  -  0  0  0  0
--------------------------------------  -----  -----  -----  -----  -----  -----
vol0  4.75T  20.6T  440  1.18K  3.42M  16.7M
  mirror  690G  2.95T  97  469  732K  4.05M
  gptid/a8a450bb-b871-11e6-a9f6-b8ca3a6c6380  -  -  57  80  437K  4.05M
  gptid/ac876fb5-b871-11e6-a9f6-b8ca3a6c6380  -  -  37  89  307K  4.35M
  mirror  695G  2.95T  178  0  1.40M  0
  gptid/b06952c9-b871-11e6-a9f6-b8ca3a6c6380  -  -  83  0  653K  0
  gptid/b44661ae-b871-11e6-a9f6-b8ca3a6c6380  -  -  86  0  829K  0
  mirror  696G  2.95T  75  276  472K  2.77M
  gptid/b822fcab-b871-11e6-a9f6-b8ca3a6c6380  -  -  42  82  263K  2.77M
  gptid/bc031cee-b871-11e6-a9f6-b8ca3a6c6380  -  -  32  90  209K  3.23M
  mirror  694G  2.95T  40  175  466K  2.14M
  gptid/bfe22c41-b871-11e6-a9f6-b8ca3a6c6380  -  -  13  49  140K  2.14M
  gptid/c3c8a78a-b871-11e6-a9f6-b8ca3a6c6380  -  -  24  33  356K  1.22M
  mirror  695G  2.95T  18  115  186K  1.40M
  gptid/c79fe9e8-b871-11e6-a9f6-b8ca3a6c6380  -  -  6  31  63.1K  1.40M
  gptid/cb7d08d4-b871-11e6-a9f6-b8ca3a6c6380  -  -  11  31  123K  1.40M
  mirror  694G  2.95T  10  0  73.6K  0
  gptid/cf5e8f11-b871-11e6-a9f6-b8ca3a6c6380  -  -  5  0  38.3K  0
  gptid/d344af65-b871-11e6-a9f6-b8ca3a6c6380  -  -  4  0  35.3K  0
  mirror  696G  2.95T  17  40  134K  652K
  gptid/d729d3ab-b871-11e6-a9f6-b8ca3a6c6380  -  -  7  6  52.7K  652K
  gptid/db115396-b871-11e6-a9f6-b8ca3a6c6380  -  -  9  5  81.0K  525K
logs  -  -  -  -  -  -
  mirror  121M  370G  0  133  0  5.67M
  gptid/db990d37-b871-11e6-a9f6-b8ca3a6c6380  -  -  0  133  0  5.67M
  gptid/dbc91c5a-b871-11e6-a9f6-b8ca3a6c6380  -  -  0  133  0  5.67M
cache  -  -  -  -  -  -
  gptid/dbf613a1-b871-11e6-a9f6-b8ca3a6c6380  644G  101G  157  419  1.84M  8.03M
--------------------------------------  -----  -----  -----  -----  -----  -----

  capacity  operations  bandwidth
pool  alloc  free  read  write  read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot  686M  28.3G  0  0  0  0
  mirror  686M  28.3G  0  0  0  0
  gptid/dd322f3b-b78c-11e6-8e2f-b8ca3a6c6380  -  -  0  0  0  0
  gptid/dd5d1e5e-b78c-11e6-8e2f-b8ca3a6c6380  -  -  0  0  0  0
--------------------------------------  -----  -----  -----  -----  -----  -----
vol0  4.75T  20.6T  2.25K  557  26.9M  7.07M
  mirror  690G  2.95T  253  141  2.84M  915K
  gptid/a8a450bb-b871-11e6-a9f6-b8ca3a6c6380  -  -  16  76  269K  923K
  gptid/ac876fb5-b871-11e6-a9f6-b8ca3a6c6380  -  -  44  65  3.20M  610K
  mirror  695G  2.95T  242  100  3.36M  268K
  gptid/b06952c9-b871-11e6-a9f6-b8ca3a6c6380  -  -  37  29  1.38M  271K
  gptid/b44661ae-b871-11e6-a9f6-b8ca3a6c6380  -  -  42  29  2.34M  268K
  mirror  696G  2.95T  642  52  6.32M  815K
  gptid/b822fcab-b871-11e6-a9f6-b8ca3a6c6380  -  -  70  15  2.11M  815K
  gptid/bc031cee-b871-11e6-a9f6-b8ca3a6c6380  -  -  107  1  5.57M  127K
  mirror  694G  2.95T  561  9  5.91M  24.4K
  gptid/bfe22c41-b871-11e6-a9f6-b8ca3a6c6380  -  -  67  9  3.14M  24.4K
  gptid/c3c8a78a-b871-11e6-a9f6-b8ca3a6c6380  -  -  74  9  3.31M  24.4K
  mirror  695G  2.95T  277  9  3.94M  24.4K
  gptid/c79fe9e8-b871-11e6-a9f6-b8ca3a6c6380  -  -  37  7  2.22M  24.4K
  gptid/cb7d08d4-b871-11e6-a9f6-b8ca3a6c6380  -  -  75  8  2.02M  24.4K
  mirror  694G  2.95T  156  8  2.21M  23.9K
  gptid/cf5e8f11-b871-11e6-a9f6-b8ca3a6c6380  -  -  35  7  1.87M  23.9K
  gptid/d344af65-b871-11e6-a9f6-b8ca3a6c6380  -  -  12  8  519K  23.9K
  mirror  696G  2.95T  170  103  2.26M  271K
  gptid/d729d3ab-b871-11e6-a9f6-b8ca3a6c6380  -  -  27  51  1.92M  271K
  gptid/db115396-b871-11e6-a9f6-b8ca3a6c6380  -  -  26  53  511K  271K
logs  -  -  -  -  -  -
  mirror  121M  370G  0  130  0  4.78M
  gptid/db990d37-b871-11e6-a9f6-b8ca3a6c6380  -  -  0  130  0  4.78M
  gptid/dbc91c5a-b871-11e6-a9f6-b8ca3a6c6380  -  -  0  130  0  4.78M
cache  -  -  -  -  -  -
  gptid/dbf613a1-b871-11e6-a9f6-b8ca3a6c6380  644G  101G  799  73  35.1M  1.13M
--------------------------------------  -----  -----  -----  -----  -----  -----

  capacity  operations  bandwidth
pool  alloc  free  read  write  read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot  686M  28.3G  0  0  0  0
  mirror  686M  28.3G  0  0  0  0
  gptid/dd322f3b-b78c-11e6-8e2f-b8ca3a6c6380  -  -  0  0  0  0
  gptid/dd5d1e5e-b78c-11e6-8e2f-b8ca3a6c6380  -  -  0  0  0  0
--------------------------------------  -----  -----  -----  -----  -----  -----
vol0  4.75T  20.6T  239  648  2.94M  6.28M
  mirror  690G  2.95T  4  182  49.3K  1.30M
  gptid/a8a450bb-b871-11e6-a9f6-b8ca3a6c6380  -  -  3  93  45.3K  1.30M
  gptid/ac876fb5-b871-11e6-a9f6-b8ca3a6c6380  -  -  0  97  3.98K  1.32M
  mirror  695G  2.95T  14  179  137K  1.12M
  gptid/b06952c9-b871-11e6-a9f6-b8ca3a6c6380  -  -  5  50  49.8K  1.36M
  gptid/b44661ae-b871-11e6-a9f6-b8ca3a6c6380  -  -  8  51  87.6K  1.12M
  mirror  696G  2.95T  11  112  141K  524K
  gptid/b822fcab-b871-11e6-a9f6-b8ca3a6c6380  -  -  3  58  54.8K  528K
  gptid/bc031cee-b871-11e6-a9f6-b8ca3a6c6380  -  -  7  58  86.7K  528K
  mirror  694G  2.95T  65  18  757K  85.7K
  gptid/bfe22c41-b871-11e6-a9f6-b8ca3a6c6380  -  -  26  2  284K  87.2K
  gptid/c3c8a78a-b871-11e6-a9f6-b8ca3a6c6380  -  -  34  2  502K  85.7K
  mirror  695G  2.95T  92  2  1.23M  21.4K
  gptid/c79fe9e8-b871-11e6-a9f6-b8ca3a6c6380  -  -  50  2  700K  21.4K
  gptid/cb7d08d4-b871-11e6-a9f6-b8ca3a6c6380  -  -  41  2  561K  21.4K
  mirror  694G  2.95T  43  3  612K  27.9K
  gptid/cf5e8f11-b871-11e6-a9f6-b8ca3a6c6380  -  -  25  7  349K  39.3K
  gptid/d344af65-b871-11e6-a9f6-b8ca3a6c6380  -  -  17  6  262K  31.9K
  mirror  696G  2.95T  4  50  48.8K  625K
  gptid/d729d3ab-b871-11e6-a9f6-b8ca3a6c6380  -  -  2  24  24.9K  754K
  gptid/db115396-b871-11e6-a9f6-b8ca3a6c6380  -  -  1  23  23.9K  628K
logs  -  -  -  -  -  -
  mirror  61.3M  370G  0  97  0  2.61M
  gptid/db990d37-b871-11e6-a9f6-b8ca3a6c6380  -  -  0  97  0  2.61M
  gptid/dbc91c5a-b871-11e6-a9f6-b8ca3a6c6380  -  -  0  97  0  2.61M
cache  -  -  -  -  -  -
  gptid/dbf613a1-b871-11e6-a9f6-b8ca3a6c6380  644G  101G  87  340  898K  8.69M
--------------------------------------  -----  -----  -----  -----  -----  -----

  capacity  operations  bandwidth
pool  alloc  free  read  write  read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot  686M  28.3G  0  0  0  0
  mirror  686M  28.3G  0  0  0  0
  gptid/dd322f3b-b78c-11e6-8e2f-b8ca3a6c6380  -  -  0  0  0  0
  gptid/dd5d1e5e-b78c-11e6-8e2f-b8ca3a6c6380  -  -  0  0  0  0
--------------------------------------  -----  -----  -----  -----  -----  -----
vol0  4.75T  20.6T  759  3.14K  9.50M  34.9M
  mirror  690G  2.95T  91  375  1.18M  3.61M
  gptid/a8a450bb-b871-11e6-a9f6-b8ca3a6c6380  -  -  26  109  304K  3.61M
  gptid/ac876fb5-b871-11e6-a9f6-b8ca3a6c6380  -  -  33  122  932K  3.78M
  mirror  695G  2.95T  63  1.04K  845K  11.9M
  gptid/b06952c9-b871-11e6-a9f6-b8ca3a6c6380  -  -  19  97  171K  11.7M
  gptid/b44661ae-b871-11e6-a9f6-b8ca3a6c6380  -  -  14  104  704K  12.5M
  mirror  696G  2.95T  56  361  783K  3.42M
  gptid/b822fcab-b871-11e6-a9f6-b8ca3a6c6380  -  -  8  111  161K  3.54M
  gptid/bc031cee-b871-11e6-a9f6-b8ca3a6c6380  -  -  14  105  661K  3.42M
  mirror  694G  2.95T  130  379  1.47M  3.88M
  gptid/bfe22c41-b871-11e6-a9f6-b8ca3a6c6380  -  -  38  94  285K  4.95M
  gptid/c3c8a78a-b871-11e6-a9f6-b8ca3a6c6380  -  -  45  79  1.21M  3.88M
  mirror  695G  2.95T  126  325  1.61M  3.35M
  gptid/c79fe9e8-b871-11e6-a9f6-b8ca3a6c6380  -  -  33  98  413K  3.70M
  gptid/cb7d08d4-b871-11e6-a9f6-b8ca3a6c6380  -  -  46  87  1.26M  3.35M
  mirror  694G  2.95T  164  282  1.93M  2.99M
  gptid/cf5e8f11-b871-11e6-a9f6-b8ca3a6c6380  -  -  69  78  1.44M  2.99M
  gptid/d344af65-b871-11e6-a9f6-b8ca3a6c6380  -  -  54  86  645K  3.18M
  mirror  696G  2.95T  126  361  1.72M  3.53M
  gptid/d729d3ab-b871-11e6-a9f6-b8ca3a6c6380  -  -  36  105  528K  3.41M
  gptid/db115396-b871-11e6-a9f6-b8ca3a6c6380  -  -  38  113  1.33M  3.83M
logs  -  -  -  -  -  -
  mirror  61.3M  370G  0  70  0  2.17M
  gptid/db990d37-b871-11e6-a9f6-b8ca3a6c6380  -  -  0  70  0  2.17M
  gptid/dbc91c5a-b871-11e6-a9f6-b8ca3a6c6380  -  -  0  70  0  2.17M
cache  -  -  -  -  -  -
  gptid/dbf613a1-b871-11e6-a9f6-b8ca3a6c6380  644G  101G  90  369  641K  12.9M
--------------------------------------  -----  -----  -----  -----  -----  -----

 

Steven Sedory
And here's the arc_summary.py output:

Code:
[root@hvc01san01] ~#  arc_summary.py
System Memory:

  0.00%  11.79  MiB Active,  0.33%  854.58  MiB Inact
  97.96%  244.23  GiB Wired,  0.00%  0  Bytes Cache
  1.70%  4.24  GiB Free,  0.00%  0  Bytes Gap

  Real Installed:  256.00  GiB
  Real Available:  99.95%  255.88  GiB
  Real Managed:  97.43%  249.31  GiB

  Logical Total:  256.00  GiB
  Logical Used:  98.02%  250.93  GiB
  Logical Free:  1.98%  5.07  GiB

Kernel Memory:  4.16  GiB
  Data:  99.37%  4.13  GiB
  Text:  0.63%  26.88  MiB

Kernel Memory Map:  319.85  GiB
  Size:  68.93%  220.48  GiB
  Free:  31.07%  99.37  GiB
  Page:  1
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
  Storage pool Version:  5000
  Filesystem Version:  5
  Memory Throttle Count:  0

ARC Misc:
  Deleted:  494.06m
  Mutex Misses:  261.68k
  Evict Skips:  261.68k

ARC Size:  99.16%  228.37  GiB
  Target Size: (Adaptive)  99.16%  228.37  GiB
  Min Size (Hard Limit):  12.50%  28.79  GiB
  Max Size (High Water):  8:1  230.29  GiB

ARC Size Breakdown:
  Recently Used Cache Size:  16.50%  37.67  GiB
  Frequently Used Cache Size:  83.50%  190.70  GiB

ARC Hash Breakdown:
  Elements Max:  79.59m
  Elements Current:  96.11%  76.49m
  Collisions:  1.19b
  Chain Max:  15
  Chains:  22.29m
  Page:  2
------------------------------------------------------------------------

ARC Total accesses:  2.00b
  Cache Hit Ratio:  73.66%  1.48b
  Cache Miss Ratio:  26.34%  527.99m
  Actual Hit Ratio:  69.51%  1.39b

  Data Demand Efficiency:  75.30%  1.43b
  Data Prefetch Efficiency:  42.17%  275.98m

  CACHE HITS BY CACHE LIST:
  Anonymously Used:  1.82%  26.90m
  Most Recently Used:  24.79%  366.13m
  Most Frequently Used:  69.57%  1.03b
  Most Recently Used Ghost:  2.65%  39.08m
  Most Frequently Used Ghost:  1.16%  17.20m

  CACHE HITS BY DATA TYPE:
  Demand Data:  73.09%  1.08b
  Prefetch Data:  7.88%  116.39m
  Demand Metadata:  18.85%  278.29m
  Prefetch Metadata:  0.19%  2.77m

  CACHE MISSES BY DATA TYPE:
  Demand Data:  67.05%  354.01m
  Prefetch Data:  30.22%  159.58m
  Demand Metadata:  2.29%  12.09m
  Prefetch Metadata:  0.44%  2.31m
  Page:  3
------------------------------------------------------------------------

L2 ARC Summary: (HEALTHY)
  Passed Headroom:  2.04m
  Tried Lock Failures:  16.92m
  IO In Progress:  1.32k
  Low Memory Aborts:  25
  Free on Write:  242.67k
  Writes While Full:  279.16k
  R/W Clashes:  0
  Bad Checksums:  0
  IO Errors:  0
  SPA Mismatch:  4.00m

L2 ARC Size: (Adaptive)  978.52  GiB
  Header Size:  0.42%  4.13  GiB

L2 ARC Evicts:
  Lock Retries:  48.03k
  Upon Reading:  0

L2 ARC Breakdown:  527.99m
  Hit Ratio:  33.84%  178.67m
  Miss Ratio:  66.16%  349.32m
  Feeds:  1.07m

L2 ARC Buffer:
  Bytes Scanned:  117.65  TiB
  Buffer Iterations:  1.07m
  List Iterations:  4.26m
  NULL List Iterations:  252.43k

L2 ARC Writes:
  Writes Sent:  100.00% 1.06m
  Page:  4
------------------------------------------------------------------------

DMU Prefetch Efficiency:  438.90m
  Hit Ratio:  3.86%  16.95m
  Miss Ratio:  96.14%  421.95m

  Page:  5
------------------------------------------------------------------------

  Page:  6
------------------------------------------------------------------------

ZFS Tunable (sysctl):
  kern.maxusers  16712
  vm.kmem_size  343437246464
  vm.kmem_size_scale  1
  vm.kmem_size_min  0
  vm.kmem_size_max  1319413950874
  vfs.zfs.vol.unmap_enabled  1
  vfs.zfs.vol.mode  2
  vfs.zfs.sync_pass_rewrite  2
  vfs.zfs.sync_pass_dont_compress  5
  vfs.zfs.sync_pass_deferred_free  2
  vfs.zfs.zio.exclude_metadata  0
  vfs.zfs.zio.use_uma  1
  vfs.zfs.cache_flush_disable  0
  vfs.zfs.zil_replay_disable  0
  vfs.zfs.version.zpl  5
  vfs.zfs.version.spa  5000
  vfs.zfs.version.acl  1
  vfs.zfs.version.ioctl  6
  vfs.zfs.debug  0
  vfs.zfs.super_owner  0
  vfs.zfs.min_auto_ashift  9
  vfs.zfs.max_auto_ashift  13
  vfs.zfs.vdev.write_gap_limit  4096
  vfs.zfs.vdev.read_gap_limit  32768
  vfs.zfs.vdev.aggregation_limit  131072
  vfs.zfs.vdev.trim_max_active  64
  vfs.zfs.vdev.trim_min_active  1
  vfs.zfs.vdev.scrub_max_active  2
  vfs.zfs.vdev.scrub_min_active  1
  vfs.zfs.vdev.async_write_max_active  10
  vfs.zfs.vdev.async_write_min_active  1
  vfs.zfs.vdev.async_read_max_active  3
  vfs.zfs.vdev.async_read_min_active  1
  vfs.zfs.vdev.sync_write_max_active  10
  vfs.zfs.vdev.sync_write_min_active  10
  vfs.zfs.vdev.sync_read_max_active  10
  vfs.zfs.vdev.sync_read_min_active  10
  vfs.zfs.vdev.max_active  1000
  vfs.zfs.vdev.async_write_active_max_dirty_percent  60
  vfs.zfs.vdev.async_write_active_min_dirty_percent  30
  vfs.zfs.vdev.mirror.non_rotating_seek_inc  1
  vfs.zfs.vdev.mirror.non_rotating_inc  0
  vfs.zfs.vdev.mirror.rotating_seek_offset  1048576
  vfs.zfs.vdev.mirror.rotating_seek_inc  5
  vfs.zfs.vdev.mirror.rotating_inc  0
  vfs.zfs.vdev.trim_on_init  0
  vfs.zfs.vdev.larger_ashift_minimal  0
  vfs.zfs.vdev.bio_delete_disable  0
  vfs.zfs.vdev.bio_flush_disable  0
  vfs.zfs.vdev.cache.bshift  16
  vfs.zfs.vdev.cache.size  0
  vfs.zfs.vdev.cache.max  16384
  vfs.zfs.vdev.metaslabs_per_vdev  200
  vfs.zfs.vdev.trim_max_pending  10000
  vfs.zfs.txg.timeout  5
  vfs.zfs.trim.enabled  1
  vfs.zfs.trim.max_interval  1
  vfs.zfs.trim.timeout  30
  vfs.zfs.trim.txg_delay  32
  vfs.zfs.space_map_blksz  4096
  vfs.zfs.spa_slop_shift  5
  vfs.zfs.spa_asize_inflation  24
  vfs.zfs.deadman_enabled  1
  vfs.zfs.deadman_checktime_ms  5000
  vfs.zfs.deadman_synctime_ms  1000000
  vfs.zfs.debug_flags  0
  vfs.zfs.recover  0
  vfs.zfs.spa_load_verify_data  1
  vfs.zfs.spa_load_verify_metadata  1
  vfs.zfs.spa_load_verify_maxinflight  10000
  vfs.zfs.ccw_retry_interval  300
  vfs.zfs.check_hostid  1
  vfs.zfs.mg_fragmentation_threshold  85
  vfs.zfs.mg_noalloc_threshold  0
  vfs.zfs.condense_pct  200
  vfs.zfs.metaslab.bias_enabled  1
  vfs.zfs.metaslab.lba_weighting_enabled  1
  vfs.zfs.metaslab.fragmentation_factor_enabled  1
  vfs.zfs.metaslab.preload_enabled  1
  vfs.zfs.metaslab.preload_limit  3
  vfs.zfs.metaslab.unload_delay  8
  vfs.zfs.metaslab.load_pct  50
  vfs.zfs.metaslab.min_alloc_size  33554432
  vfs.zfs.metaslab.df_free_pct  4
  vfs.zfs.metaslab.df_alloc_threshold  131072
  vfs.zfs.metaslab.debug_unload  0
  vfs.zfs.metaslab.debug_load  0
  vfs.zfs.metaslab.fragmentation_threshold  70
  vfs.zfs.metaslab.gang_bang  16777217
  vfs.zfs.free_bpobj_enabled  1
  vfs.zfs.free_max_blocks  18446744073709551615
  vfs.zfs.no_scrub_prefetch  0
  vfs.zfs.no_scrub_io  0
  vfs.zfs.resilver_min_time_ms  3000
  vfs.zfs.free_min_time_ms  1000
  vfs.zfs.scan_min_time_ms  1000
  vfs.zfs.scan_idle  50
  vfs.zfs.scrub_delay  4
  vfs.zfs.resilver_delay  2
  vfs.zfs.top_maxinflight  32
  vfs.zfs.delay_scale  500000
  vfs.zfs.delay_min_dirty_percent  60
  vfs.zfs.dirty_data_sync  67108864
  vfs.zfs.dirty_data_max_percent  10
  vfs.zfs.dirty_data_max_max  4294967296
  vfs.zfs.dirty_data_max  4294967296
  vfs.zfs.max_recordsize  1048576
  vfs.zfs.zfetch.array_rd_sz  1048576
  vfs.zfs.zfetch.max_distance  33554432
  vfs.zfs.zfetch.min_sec_reap  2
  vfs.zfs.zfetch.max_streams  8
  vfs.zfs.prefetch_disable  0
  vfs.zfs.send_holes_without_birth_time  1
  vfs.zfs.mdcomp_disable  0
  vfs.zfs.nopwrite_enabled  1
  vfs.zfs.dedup.prefetch  1
  vfs.zfs.l2c_only_size  0
  vfs.zfs.mfu_ghost_data_lsize  14109196288
  vfs.zfs.mfu_ghost_metadata_lsize  5689496064
  vfs.zfs.mfu_ghost_size  19798692352
  vfs.zfs.mfu_data_lsize  209672318464
  vfs.zfs.mfu_metadata_lsize  412371968
  vfs.zfs.mfu_size  214459128320
  vfs.zfs.mru_ghost_data_lsize  193753674240
  vfs.zfs.mru_ghost_metadata_lsize  31657463296
  vfs.zfs.mru_ghost_size  225411137536
  vfs.zfs.mru_data_lsize  13650699264
  vfs.zfs.mru_metadata_lsize  149949952
  vfs.zfs.mru_size  16018729472
  vfs.zfs.anon_data_lsize  0
  vfs.zfs.anon_metadata_lsize  0
  vfs.zfs.anon_size  7278592
  vfs.zfs.l2arc_norw  0
  vfs.zfs.l2arc_feed_again  1
  vfs.zfs.l2arc_noprefetch  0
  vfs.zfs.l2arc_feed_min_ms  200
  vfs.zfs.l2arc_feed_secs  1
  vfs.zfs.l2arc_headroom  2
  vfs.zfs.l2arc_write_boost  40000000
  vfs.zfs.l2arc_write_max  10000000
  vfs.zfs.arc_meta_limit  61818704179
  vfs.zfs.arc_free_target  453112
  vfs.zfs.arc_shrink_shift  7
  vfs.zfs.arc_average_blocksize  8192
  vfs.zfs.arc_min  30909352089
  vfs.zfs.arc_max  247274816716
  Page:  7



Regarding the disk response time, that's within the VMs, to be clear. One of our Exchange servers has 5-10 files being accessed with that kind of lag right now. Another Remote Desktop server, with about 8 users, has 20 or so files being accessed with latencies anywhere from 20ms all the way to 100ms.

The problem seems to have crept up over the last couple weeks. No real VM change, just time and usage.
 

Steven Sedory
So, I wanted to update this thread with some more info.

Here's a graph of transferring some files inside a VM from one virtual drive to another, both drives sitting on the same 3.5TB zvol:

upload_2017-3-24_14-31-10.png


Here's what top -P shows:

upload_2017-3-24_14-31-36.png


But here's htop (isn't the high CPU usage strange, with no processes to justify it?):

upload_2017-3-24_14-32-14.png


So is there something the CPU is doing that FreeNAS isn't seeing, which is using up CPU and causing this performance drop? Note what FreeNAS sees as far as CPU goes (not much):

upload_2017-3-24_14-33-42.png
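(One way to see where that CPU time is going is to include kernel threads in top, since ZFS work runs in kernel threads that the default process views can miss; a minimal sketch.)

Code:
# -S: include system/kernel processes, -H: show individual threads, -P: per-CPU usage
top -SHP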
 