
SOLVED Strange performance issues


sethgoldin

Junior Member
Joined
May 9, 2015
Messages
24
Specs/configuration:
  • FreeNAS-11.1-U6
  • Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
  • 262,071 MB ECC RAM
  • Six 10-GbE ports in a lagg going into a NETGEAR ProSafe M7100-24x
  • Jumbo frames enabled everywhere, in each member of the trunk, in all the ports in the switch, and at each client
  • Four RAID-Z3 vdevs, each with 11 HGST Ultrastar He8s
  • Two 400 GB SSDs striped as L2ARC
Performance to a variety of workstations has been great for years, anywhere from 400-800 MB/s, ideal for video post-production.

Current clients include a mix of:
  • SMB3 to macOS 10.12 and 10.13
  • SMB3 to Windows 10 Pro
  • NFSv4 to CentOS Linux 7.3
Problem: Lately, performance to the macOS and Windows clients has been quite strange. On a Windows 10 client, Adobe Media Encoder will perform well for anywhere from 30 seconds to several minutes and then tank to speeds below 5 MB/s.

On a 10.13 Mac, an rsync transfer from an SMB3 share to a DAS [LaCie Rugged RAID] will be chugging along at 150 MB/s or so, and then after a minute or so, also plummet to below 5 MB/s.

Obviously, it's not going to fly for a server I've been relying on at 400-800 MB/s for post-production to suddenly drop to 5 MB/s.

Question: Any ideas as to why this system is choking? I've poked around and tried a few things, but there's nothing obviously wrong with the configuration. Thanks in advance.
 

diskdiddler

Dedicated Sage
Joined
Jul 9, 2014
Messages
2,187
What has changed lately? Did you upgrade to U6 recently for example?

When things drop to 5MB/s - any chance you can SSH in and do a manual copy of data from X to Y (or even X to X) and see if the speeds match?
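A minimal sketch of such a local test, assuming shell access; the path is a placeholder and the `dd` flags are the portable spellings:

```shell
# Local read/write check that bypasses SMB/NFS and the network entirely.
# TESTFILE is a placeholder; point it at the affected dataset under /mnt.
TESTFILE=/tmp/ddtest.bin

# Write ~100 MiB (zeroes compress well, so prefer /dev/urandom if lz4 is on):
dd if=/dev/zero of="$TESTFILE" bs=1048576 count=100

# Read it back and compare the reported rate with what clients see:
dd if="$TESTFILE" of=/dev/null bs=1048576

rm "$TESTFILE"
```

If the local rates hold up while SMB transfers tank, the problem is more likely in the network or protocol layer than in the pool itself.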
 

kdragon75

Dedicated Sage
Joined
Aug 7, 2016
Messages
2,465
Considering your hardware, your performance was awful from the beginning. What's up with the six-port 10GbE lagg? Also, an 800GB L2ARC could be killing you depending on your block/record size.
Try running arcstat.py from the console when you're expecting the slowdown. This will give us some insight.
 

kdragon75

Dedicated Sage
Joined
Aug 7, 2016
Messages
2,465
We should also look at your free space fragmentation.
 

HoneyBadger

Mushroom! Mushroom!
Joined
Feb 6, 2014
Messages
3,176
What has changed lately? Did you upgrade to U6 recently for example?
This is the first place I'd look - "what's changed"? The SMB server was updated in 11.1 to 4.7.x which defaults to supporting strict sync.
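If strict sync turns out to be the culprit, it can be overridden per share. A hedged smb.conf fragment (the share name is a placeholder; on FreeNAS this would go in the share's Auxiliary Parameters rather than a hand-edited file):

```ini
[share]
    ; Samba 4.7 changed the default of "strict sync" from no to yes;
    ; reverting restores the pre-11.1 behavior at some durability cost.
    strict sync = no
```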
 

kdragon75

Dedicated Sage
Joined
Aug 7, 2016
Messages
2,465
You also initially mentioned that you're running macOS 10.12 and then later 10.13. Perhaps 10.13 is requesting sync writes? With that many drives and their combined cache, about a minute seems right if ZFS is still using the drives' write cache.
 

HoneyBadger

Mushroom! Mushroom!
Joined
Feb 6, 2014
Messages
3,176
Quick way to tell - open a shell to your machine and enter the command zilstat

If you see anything but a row of zeroes, you've got activity hitting the ZIL.
 

sethgoldin

Junior Member
Joined
May 9, 2015
Messages
24
What has changed lately? Did you upgrade to U6 recently for example?

When things drop to 5MB/s - any chance you can SSH in and do a manual copy of data from X to Y (or even X to X) and see if the speeds match?
Yes, I did recently upgrade to U6, and I had also deleted the entire previous pool. I also ran a scrub after destroying the previous pool.

When you say manually copy X to Y or X to X, are you referring to cp'ing within the same pool or dataset?

Considering your hardware, your performance was awful from the beginning. What's up with the six-port 10GbE lagg? Also, an 800GB L2ARC could be killing you depending on your block/record size.
Try running arcstat.py from the console when you're expecting the slowdown. This will give us some insight.
What's wrong with 6 trunks in a 10 GbE LACP lagg?

Also, given that L2ARC can only be striped via the GUI, it seems that it's only 400 GB worth of usable L2ARC cache. How would I check the block or record size?

Output of arcstat.py before and after the choke is the same. Now it's choking within about 10 seconds of the render starting on a Windows client on SMB3.

Code:
root@redacted:~ # arcstat.py
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
13:17:48	 0	 0	  0	 0	0	 0	0	 0	0	52G  248G


We should also look at your free space fragmentation.
How would I go about doing that?

This is the first place I'd look - "what's changed"? The SMB server was updated in 11.1 to 4.7.x which defaults to supporting strict sync.
The pool and all of its child datasets have Sync set to Inherit (standard).

You also initially mentioned that you're running macOS 10.12 and then later 10.13. Perhaps 10.13 is requesting sync writes? With that many drives and their combined cache, about a minute seems right if ZFS is still using the drives' write cache.
Using both. The particular problematic rsync transfer was from 10.13. However, on a different 10.13 machine, I've been able to offload to LTO tapes just fine, to LTFS via YoYottaID.

Quick way to tell - open a shell to your machine and enter the command zilstat

If you see anything but a row of zeroes, you've got activity hitting the ZIL.
All 0s, before and after it chokes up.
 

kdragon75

Dedicated Sage
Joined
Aug 7, 2016
Messages
2,465
What's wrong with 6 trunks in a 10 GbE LACP lagg?
Nothing inherently, just WAY OVERKILL.
Also, given that L2ARC can only be striped via the GUI, it seems that it's only 400 GB worth of usable L2ARC cache. How would I check the block or record size?
Stripes add space; mirrors divide space. If we look at zpool list -v we will see exactly how the "cache" is configured. To check the record size, run zfs get recordsize, or zfs get recordsize <pool>/<dataset> to check one specific dataset.
My understanding is that each record or ZFS block stored in the L2ARC requires about 70 bytes of ARC. Small records are great for low-latency random I/O but add a lot of overhead for "large" sequential reads and writes. When working with video files, a larger-than-default record size can improve performance and L2ARC efficiency.
arcstat.py is telling us that your ARC is only 52GB with a "target" of 248GB. Was this system recently rebooted?
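As a back-of-envelope check of that overhead (assumptions: the ~70 bytes per record figure above, the default 128 KiB recordsize, and one fully populated 400 GB cache device):

```shell
# ARC header cost of a full L2ARC at 128 KiB records (all figures assumed).
l2arc_bytes=$((400 * 1000 * 1000 * 1000))  # 400 GB cache device
recordsize=$((128 * 1024))                 # default 128 KiB records
records=$((l2arc_bytes / recordsize))
echo "headers: $((records * 70 / 1024 / 1024)) MiB of ARC"
```

At the 128 KiB default the header cost is trivial; it only becomes painful if the recordsize is small relative to the cache device.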
How would I go about doing that?
Code:
zpool list -v
Will also show free space fragmentation.

Regarding arcstat.py and zilstat
arcstat.py should be run as arcstat.py 1 and be left running during active work. This will log to the screen once per second and will give us an idea of the trend in ARC usage.
zilstat should log once per second by itself, and we would prefer to see the full log during a symptomatic transfer as well, not just a before and after (unless it's all 0 all the time).
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
7,282
If you're concerned that samba's making sync writes you can run the following script

Code:
#!/usr/sbin/dtrace -s

syscall::fsync:entry
/execname=="smbd"/
{
    printf(" %Y", walltimestamp);
}


It'll basically output sync writes as they happen. Just write to file, chmod +x, then execute without arguments.
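Concretely, saving and launching it might look like this (the filename is arbitrary; dtrace must run as root):

```shell
# Save the DTrace script; the shebang line lets it execute directly.
cat > /tmp/smbd_fsync.d <<'EOF'
#!/usr/sbin/dtrace -s
syscall::fsync:entry
/execname=="smbd"/
{
    printf(" %Y", walltimestamp);
}
EOF
chmod +x /tmp/smbd_fsync.d

# Then, while reproducing the slowdown, run it (commented out here
# because it needs dtrace and root privileges):
# /tmp/smbd_fsync.d
```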
 

sethgoldin

Junior Member
Joined
May 9, 2015
Messages
24
Can you try setting the following sysctl: sysctl net.inet.tcp.reass.maxqueuelen=1437. This sysctl was introduced in 11.1-U6 to address the following security advisory.

https://www.freebsd.org/security/advisories/FreeBSD-SA-18:08.tcp.asc
Would I do that just by adding a tunable and restarting?
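For reference, the GUI entry would look roughly like this (a sketch; a tunable of type "sysctl" should take effect on save without a full restart, though that is worth verifying):

```
System -> Tunables -> Add Tunable
  Variable: net.inet.tcp.reass.maxqueuelen
  Value:    1437
  Type:     sysctl
```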

Nothing inherently, just WAY OVERKILL.

Stripes add space; mirrors divide space. If we look at zpool list -v we will see exactly how the "cache" is configured. To check the record size, run zfs get recordsize, or zfs get recordsize <pool>/<dataset> to check one specific dataset.
My understanding is that each record or ZFS block stored in the L2ARC requires about 70 bytes of ARC. Small records are great for low-latency random I/O but add a lot of overhead for "large" sequential reads and writes. When working with video files, a larger-than-default record size can improve performance and L2ARC efficiency.
arcstat.py is telling us that your ARC is only 52GB with a "target" of 248GB. Was this system recently rebooted?

Code:
zpool list -v
Will also show free space fragmentation.

Regarding arcstat.py and zilstat
arcstat.py should be run as arcstat.py 1 and be left running during active work. This will log to the screen once per second and will give us an idea of the trend in ARC usage.
zilstat should log once per second by itself, and we would prefer to see the full log during a symptomatic transfer as well, not just a before and after (unless it's all 0 all the time).
Output of zpool list -v:
Code:
root@redacted:~ # zpool list -v
NAME									 SIZE  ALLOC   FREE  EXPANDSZ   FRAG	CAP  DEDUP  HEALTH  ALTROOT
freenas-boot							14.4G   912M  13.5G		 -	  -	 6%  1.00x  ONLINE  -
  da46p2								14.4G   912M  13.5G		 -	  -	 6%
pool									 320T  4.47T   316T		 -	 0%	 1%  1.00x  ONLINE  /mnt
  raidz3								  80T  1.04T  79.0T		 -	 0%	 1%
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
  raidz3								  80T  1.16T  78.8T		 -	 0%	 1%
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
  raidz3								  80T  1.13T  78.9T		 -	 0%	 1%
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
  raidz3								  80T  1.14T  78.9T		 -	 0%	 1%
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
	gptid/redacted	  -	  -	  -		 -	  -	  -
cache									   -	  -	  -		 -	  -	  -
  gptid/redacted   447G  9.45G   438G		 -	 0%	 2%
  gptid/redacted   447G  9.47G   438G		 -	 0%	 2%


Checking zfs get recordsize shows that the pool and all datasets are at the default of 128K.

I did indeed reboot the system last night while troubleshooting, so neither the ARC nor the L2ARC has had much chance to fill up yet.

zilstat was indeed 0 the whole time.
 

sethgoldin

Junior Member
Joined
May 9, 2015
Messages
24
Stress-testing with arcstat.py 1 running shows:

Code:
root@redacted:~ # arcstat.py 1
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
16:34:52	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:34:53	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:34:54	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:34:55	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:34:56	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
root@redacted:~ # arcstat.py 1
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
16:36:00	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:01	 3	 1	 33	 1   33	 0	0	 0	0   9.5G  248G
16:36:02	 6	 5	 83	 4   80	 1  100	 0	0   9.5G  248G
16:36:03	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:04	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:05	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:06	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:07	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:08	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:09	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:10	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:11	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:12	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:13	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:14	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:15	59	52	 88	18   81	34   91	 7   87   9.5G  248G
16:36:16   123   122	 99	58  100	64   98	50   98   9.5G  248G
16:36:17	 9	 8	 88	 8   88	 0	0	 0	0   9.5G  248G
16:36:18	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:19	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:20	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
16:36:21	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:22	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:23	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:25	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:26	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:27	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:28	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:29	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:30	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:31	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:32	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:33	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:34	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:35	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:36	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:37	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:38	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:39	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:40	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:41	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:42	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
16:36:43	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:44	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:45	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:46   253	48	 18	48   18	 0	0	48   18   9.5G  248G
16:36:47	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:48	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:49	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:50	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:51	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:52	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:53	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:55	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:56	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:57	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:58	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:36:59	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:00	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:01	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:02	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:03	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:04	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
16:37:05	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:06	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:07	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:08	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:09	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:10	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:11	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:12	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:13	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:14	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:15	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:16   219   147	 67   147   67	 0	0   147   67   9.5G  248G
16:37:17	 1	 1	100	 1  100	 0	0	 1  100   9.5G  248G
16:37:19	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:20	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:21	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:22	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:23	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:24	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:25	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:26	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
16:37:27	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:28	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:29	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:30	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:31	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:32	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:33	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:34	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:35	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:36	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:37	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:38	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:39	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:40	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:41	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:42	 2	 2	100	 2  100	 0	0	 2  100   9.5G  248G
16:37:43	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:45	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:46	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:47   253	48	 18	48   18	 0	0	48   18   9.5G  248G
16:37:48	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
16:37:49	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:50	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:51	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:52	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:53	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:54	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:55	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:56	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:57	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:58	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:37:59	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:00	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:01	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:02	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:03	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:04	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:05	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:06	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:07	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:08	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:10	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
16:38:11	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:12	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:13	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:14	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:15	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:16	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:17	50	50	100	50  100	 0	0	50  100   9.5G  248G
16:38:18	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:19	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:20	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:21	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:22	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:23	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:24	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:25	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:26	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:27	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:28	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:30	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:31	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:32	83	57	 68	57   87	 0	0	40   60   9.5G  248G
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
16:38:33	23	22	 95	22   95	 0	0	15   93   9.5G  248G
16:38:34	73	72	 98	72   98	 0	0	45   97   9.5G  248G
16:38:35	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:36	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:37	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:38	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:39	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:40	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:41	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:42	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:43	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:44	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:45	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:46   253	48	 18	48   18	 0	0	48   18   9.5G  248G
16:38:47	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:48	 0	 0	  0	 0	0	 0	0	 0	0   9.5G  248G
16:38:49   164   146	 89	95   84	51   98	48   88   9.6G  248G
16:38:50   127	83	 65	35   44	48  100	16   94   9.6G  248G
16:38:52	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:38:53	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:38:54	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
16:38:55	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:38:56	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:38:57	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:38:58	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:38:59	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:00	 2	 2	100	 2  100	 0	0	 2  100   9.6G  248G
16:39:01	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:02	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:03	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:04	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:05	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:06	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:07	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:08	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:09	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:10	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:11	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:13	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:14	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:15	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:16	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
16:39:17	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:18	50	50	100	50  100	 0	0	50  100   9.6G  248G
16:39:19	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:20	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:21	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:22	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:23	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:24	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:25	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:26	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:27	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:28	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:29	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:30	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:31	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:32	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:33	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:34	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:35	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:36	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:38	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
16:39:39	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:40	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:41	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:42	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:43	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:44	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:45	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:46	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:47   253	48	 18	48   18	 0	0	48   18   9.6G  248G
16:39:48	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:49	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:50	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:51	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:52	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:53	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:54	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:55	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:56	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:57	69	61	 88	53   86	 8  100	35  100   9.6G  248G
16:39:58	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:39:59	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
16:40:01	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:40:02	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:40:03	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:40:04	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:40:05	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:40:06	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:40:07	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:40:08	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:40:09	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:40:10	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:40:11	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:40:12	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:40:13	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:40:14	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:40:15	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:40:16	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:40:17	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:40:18	50	50	100	50  100	 0	0	50  100   9.6G  248G
16:40:19	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:40:20	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:40:21	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
16:40:22	 0	 0	  0	 0	0	 0	0	 0	0   9.6G  248G
16:40:23   125	93	 74	13   44	80   83	 6  100   9.6G  248G
16:40:24   602   286	 47	 6	2   280   87	 6  100   9.7G  248G
16:40:25	 0	 0	  0	 0	0	 0	0	 0	0   9.7G  248G
16:40:26	 0	 0	  0	 0	0	 0	0	 0	0   9.7G  248G
16:40:27	 0	 0	  0	 0	0	 0	0	 0	0   9.7G  248G
16:40:28	 0	 0	  0	 0	0	 0	0	 0	0   9.7G  248G
16:40:29	 0	 0	  0	 0	0	 0	0	 0	0   9.7G  248G
16:40:30	 0	 0	  0	 0	0	 0	0	 0	0   9.7G  248G
16:40:32	 0	 0	  0	 0	0	 0	0	 0	0   9.7G  248G
16:40:33	 0	 0	  0	 0	0	 0	0	 0	0   9.7G  248G
16:40:34	 0	 0	  0	 0	0	 0	0	 0	0   9.7G  248G
16:40:35	 0	 0	  0	 0	0	 0	0	 0	0   9.7G  248G
16:40:36	 0	 0	  0	 0	0	 0	0	 0	0   9.7G  248G
16:40:37	 0	 0	  0	 0	0	 0	0	 0	0   9.7G  248G
16:40:38  1.2K   600	 50	 0	0   600   99	 0	0   9.8G  248G
16:40:39   120	64	 53	 0	0	64  100	 0	0   9.8G  248G
16:40:40	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:40:41	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:40:42	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:40:43	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
16:40:44	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:40:45	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:40:46	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:40:47   253	48	 18	48   18	 0	0	48   18   9.8G  248G
16:40:48	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:40:49	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:40:50	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:40:51	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:40:52	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:40:53	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:40:55	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:40:56	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:40:57	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:40:58	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:40:59	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:00	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:01	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:02	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:03	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:04	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:05	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
16:41:06	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:07	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:08	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:09	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:10	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:11	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:12	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:13	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:14	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:15	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:16	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:17	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:18	49	49	100	49  100	 0	0	49  100   9.8G  248G
16:41:19	 1	 1	100	 1  100	 0	0	 1  100   9.8G  248G
16:41:20	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:21	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:22	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:23	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:24	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:25	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:27	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
16:41:28	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:29	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:30	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:31	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:32	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:33	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:34	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:35	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:36	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:37	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:38	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:39	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:40	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:41	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:42	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:43	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:44	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:45	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:46	 2	 2	100	 2  100	 0	0	 2  100   9.8G  248G
16:41:47   253	48	 18	48   18	 0	0	48   18   9.8G  248G
16:41:48	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
16:41:49	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:50	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:51	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:52	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:53	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:54	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:56	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:57	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:58	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:41:59	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:00	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:01	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:02	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:03	 2	 1	 50	 1   50	 0	0	 1   50   9.8G  248G
16:42:04	 6	 6	100	 6  100	 0	0	 6  100   9.8G  248G
16:42:05	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:06	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:07	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:08	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:09	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:10	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
16:42:11	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:12	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:13	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:14	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:15	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:16	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:17	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:18	40	40	100	40  100	 0	0	40  100   9.8G  248G
16:42:19   180   108	 60   108   60	 0	0   108   60   9.8G  248G
16:42:21	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:22	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:23	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:24	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:25	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:26	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:27	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:28	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:29	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:30	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:31	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:32	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
16:42:33	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:34	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:35	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:36	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:37	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:38	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:39	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:40	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:41	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:42	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:44	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:45	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:46	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:47   253	48	 18	48   18	 0	0	48   18   9.8G  248G
16:42:48	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:49	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:50	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:51	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:52	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:53	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:54	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
16:42:55	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:56	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:57	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:58	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:42:59	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:43:00	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:43:01	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:43:02	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:43:04	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:43:05	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:43:06	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:43:07	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:43:08	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:43:09	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:43:10	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:43:11	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:43:12	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:43:13	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:43:14	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:43:15	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:43:16	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
16:43:17	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G
16:43:18	 0	 0	  0	 0	0	 0	0	 0	0   9.8G  248G


Adobe Media Encoder chokes within a few seconds.

This was right after a reboot, following some other troubleshooting attempts.
 
Last edited:

sethgoldin

Junior Member
Joined
May 9, 2015
Messages
24
Now, I can't even really read from a dataset. Blackmagic Disk Speed Test will show up to 600 MB/s on writes, but navigating to an SMB share and loading the file listing takes maybe 5 to 10 seconds.

Even SSHing in, cding into a directory in the pool, and trying to ls everything out takes 5 to 10 seconds. It seems that something is deeply wrong within the ZFS pool itself.
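When the pool itself feels this sluggish, a quick per-disk latency check over SSH will often pinpoint a single dragging drive. A minimal sketch, assuming FreeBSD's zpool/gstat tools and da* device names; flag_slow is a hypothetical helper, and the 50 ms cutoff is purely illustrative, not a tuned threshold:

```shell
# On the live box you'd start with:
#   zpool status -x     # reports errors, or "all pools are healthy"
#   gstat -b            # one-shot per-disk snapshot; a dying disk shows ms/r
#                       # far above its vdev siblings
# flag_slow parses gstat-style columns and prints outlier disks.
flag_slow() {
  # columns: L(q) ops/s r/s kBps ms/r w/s kBps ms/w %busy Name
  awk '$NF ~ /^da[0-9]+$/ && $5 > 50 { print $NF }'
}

# Demo with canned, hypothetical numbers (da6 is the laggard):
sample='    0      120     60   7680    4.1     60   7680    5.0   22.3 da5
    4      130     65   8000  812.7     65   8000  650.2   99.9 da6'
printf '%s\n' "$sample" | flag_slow
```

On real output, one member of a vdev with read latency orders of magnitude above its siblings is the classic signature of a drive stuck in internal error recovery.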
 

HoneyBadger

Mushroom! Mushroom!
Joined
Feb 6, 2014
Messages
3,176
Check the SMART info on all of the disks (manually with smartctl from a shell if you have to) - this feels like a sudden hardware problem: a bad disk, failing cable, etc.
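For readers following along, that manual check is easy to script. A sketch assuming da* device names; check_health is a hypothetical helper that parses `smartctl -H`-style output, demoed here against a canned line rather than real hardware:

```shell
# Extract the overall result word from `smartctl -H` output on stdin.
check_health() {
  awk -F: '/overall-health self-assessment/ { gsub(/^[ \t]+/, "", $2); print $2 }'
}

# On the live system (as root) you would loop over the disks:
# for d in /dev/da*; do
#   printf '%s: ' "$d"; smartctl -H "$d" | check_health
# done

# Demo against a canned (hypothetical) smartctl -H line:
sample='SMART overall-health self-assessment test result: PASSED'
result=$(printf '%s\n' "$sample" | check_health)
echo "$result"
```

Note that `-H` only reports the drive's own pass/fail verdict; a drive can report PASSED while still logging read errors, so the full `smartctl -a` output is worth a look too.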
 

sethgoldin

Junior Member
Joined
May 9, 2015
Messages
24
Yup, looks like one of the disks is failing. Would this really be the difference between night and day in terms of read performance though?
Code:
  • CRITICAL: Nov. 1, 2018, 1:56 a.m. - Device: /dev/da6, Self-Test Log error count increased from 1 to 2
  • CRITICAL: Nov. 1, 2018, 4:27 a.m. - Device: /dev/da6, Read SMART Self-Test Log Failed

I thought ZFS was built to be robust--can one drive failure in a RAID-Z3 vdev really cause performance to totally tank everywhere?
 

HoneyBadger

Mushroom! Mushroom!
Joined
Feb 6, 2014
Messages
3,176
Yup, looks like one of the disks is failing. Would this really be the difference between night and day in terms of read performance though?
Yes, because data is striped across the whole vdev, so any read of any size can end up waiting on that failing disk.

Offline that disk from the GUI and your performance should (mostly) return.

I thought ZFS was built to be robust--can one drive failure in a RAID-Z3 vdev really cause performance to totally tank everywhere?
If the drive doesn't support TLER (Time Limited Error Recovery), it may keep retrying reads and writes against a failing area and become extremely slow, because it's doing what works best for a typical desktop user: "return the data at all costs, because this is the only copy." In a RAID scenario you want the opposite: fail quickly and just pull the data from another drive. This isn't unique to ZFS; the same thing happens with hardware RAID or software like Linux md.

Question: Are your He8s SAS or SATA, and were they shucked from external enclosures by any chance? SATA drives (especially shucked ones) usually ship with TLER present but disabled by default.
 

sethgoldin

Junior Member
Joined
May 9, 2015
Messages
24
Wonderful! Offlined the drive and the pool is usable again. Stress-testing now, and the performance isn't great, but once I get a fresh drive in and resilver, I'd expect full performance again. I'll also need some time for the ARC and L2ARC to fill up again.

Don't know if these drives support TLER, but it wasn't shucked from some other enclosure.
 

HoneyBadger

Mushroom! Mushroom!
Joined
Feb 6, 2014
Messages
3,176
If you just offlined the drive, ARC/L2ARC won't be impacted by that - if you had to shut down the whole system to do it though, then obviously yes. Persistent L2ARC isn't in OpenZFS yet. ;)

Don't know if these drives support TLER, but it wasn't shucked from some other enclosure.
Ultrastar He8s should support TLER; it just might not be enabled. Try smartctl -l scterc /dev/da0 and see what it comes back with. If you see something like

Code:
SCT Error Recovery Control:
Read: Disabled
Write: Disabled

Then it's there, but not set. You should be able to use smartctl -l scterc,70,70 /dev/da0 to set it - the "70" is the TLER timeout in tenths of a second - and then re-run the first smartctl command and make sure the setting took.

Code:
SCT Error Recovery Control:
Read:	 70 (7.0 seconds)
Write:	70 (7.0 seconds)
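One caveat worth noting: on most drives the SCT ERC setting does not survive a power cycle, so if the command works, it needs to be reapplied at boot (e.g. via a Post Init task). A hedged sketch; erc_state is a hypothetical parser for `smartctl -l scterc` output, demoed on canned text rather than a live disk:

```shell
# Print the Read setting ("Disabled" or a timeout value) from
# `smartctl -l scterc` output on stdin.
erc_state() {
  awk '/^[ \t]*Read:/ { print $2; exit }'
}

# Boot-time loop for a Post Init task (not run here):
# for d in /dev/da*; do
#   [ "$(smartctl -l scterc "$d" | erc_state)" = "Disabled" ] && \
#     smartctl -l scterc,70,70 "$d"
# done

# Demo against canned output matching the post above:
state=$(printf 'SCT Error Recovery Control:\n           Read: Disabled\n          Write: Disabled\n' | erc_state)
echo "$state"
```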
 

sethgoldin

Junior Member
Joined
May 9, 2015
Messages
24
Ultrastar He8's should support TLER, it just might not be on. Try smartctl -l scterc /dev/da0 and see what it comes back with. If you see something like

Code:
SCT Error Recovery Control:
Read: Disabled
Write: Disabled

Then it's there, but not set.
Running that for any of the He8s just returned smartctl's version information.
 