ZFS/NETWORK Tuning advice?


tbaror (Contributor)
Hi All,

I am building central storage for our dev build/compile machines, so they can build in one central location and we can avoid buying expensive SSDs for each of them.

I set up an experimental storage box with the following spec:
Build: FreeNAS 9.3-BETA
Platform: Intel(R) Xeon(R) CPU E5504 @ 2.00GHz
Memory: 49061 MB
Disks: 6x 250 GB Seagate in a single raidz vdev, plus 1x 480 GB OCZ Vertex 460 as ZIL (SLOG) and 1x 480 GB OCZ Vertex 460 as L2ARC
Network: 1x 1 GbE (console/management)
Network: 1x 10 GbE (storage), serving a file-based iSCSI extent

The workload will be many small write I/Os, on files ranging from a few KB up to a few MB.
My question: given this spec, can someone advise me on the best ZFS and network tuning for such a system?
Thanks
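
(For context on the network side: these are the kinds of sysctl tunables I have seen suggested elsewhere for 10GbE on FreeBSD 9.x. The sysctl names are stock FreeBSD, but the values below are placeholders I have not validated on this box, so treat them only as an illustration of what could be tuned, not a recommendation.)

# sysctl.conf-style entries (illustrative values only)
kern.ipc.maxsockbuf=16777216        # allow larger socket buffers
net.inet.tcp.sendbuf_max=16777216   # cap for TCP send buffer autotuning
net.inet.tcp.recvbuf_max=16777216   # cap for TCP receive buffer autotuning
net.inet.tcp.sendbuf_inc=65536      # grow the send buffer in larger steps
net.inet.tcp.recvbuf_inc=65536      # grow the receive buffer in larger steps
net.inet.tcp.delayed_ack=0          # sometimes suggested for iSCSI latency

On FreeNAS these would go in under System -> Tunables rather than by editing sysctl.conf directly.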

ARC Summary
===========
System Memory:

0.74% 351.74 MiB Active, 0.50% 236.83 MiB Inact
75.58% 35.11 GiB Wired, 0.00% 1016.00 KiB Cache
23.18% 10.77 GiB Free, 0.00% 1.29 MiB Gap

Real Installed: 48.00 GiB
Real Available: 99.82% 47.91 GiB
Real Managed: 96.95% 46.45 GiB

Logical Total: 48.00 GiB
Logical Used: 77.08% 37.00 GiB
Logical Free: 22.92% 11.00 GiB

Kernel Memory: 381.79 MiB
Data: 93.82% 358.18 MiB
Text: 6.18% 23.61 MiB

Kernel Memory Map: 45.23 GiB
Size: 74.58% 33.73 GiB
Free: 25.42% 11.49 GiB
Page: 1
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
Storage pool Version: 5000
Filesystem Version: 5
Memory Throttle Count: 0

ARC Misc:
Deleted: 23
Recycle Misses: 252
Mutex Misses: 0
Evict Skips: 0

ARC Size: 73.24% 33.29 GiB
Target Size: (Adaptive) 73.60% 33.45 GiB
Min Size (Hard Limit): 12.50% 5.68 GiB
Max Size (High Water): 8:1 45.45 GiB

ARC Size Breakdown:
Recently Used Cache Size: 93.75% 31.36 GiB
Frequently Used Cache Size: 6.25% 2.09 GiB

ARC Hash Breakdown:
Elements Max: 295.25k
Elements Current: 100.00% 295.25k
Collisions: 35.59k
Chain Max: 3
Chains: 5.09k
Page: 2
------------------------------------------------------------------------

ARC Total accesses: 25.33m
Cache Hit Ratio: 96.81% 24.52m
Cache Miss Ratio: 3.19% 806.67k
Actual Hit Ratio: 96.77% 24.51m

Data Demand Efficiency: 92.85% 9.05m
Data Prefetch Efficiency: 86.80% 5.24k

CACHE HITS BY CACHE LIST:
Anonymously Used: 0.04% 9.93k
Most Recently Used: 5.31% 1.30m
Most Frequently Used: 94.64% 23.21m
Most Recently Used Ghost: 0.00% 347
Most Frequently Used Ghost: 0.00% 1

CACHE HITS BY DATA TYPE:
Demand Data: 34.28% 8.40m
Prefetch Data: 0.02% 4.55k
Demand Metadata: 65.68% 16.10m
Prefetch Metadata: 0.02% 5.73k

CACHE MISSES BY DATA TYPE:
Demand Data: 80.20% 646.95k
Prefetch Data: 0.09% 692
Demand Metadata: 19.60% 158.11k
Prefetch Metadata: 0.11% 922
Page: 3
------------------------------------------------------------------------

File-Level Prefetch: (HEALTHY)
DMU Efficiency: 39.26m
Hit Ratio: 59.61% 23.40m
Miss Ratio: 40.39% 15.86m

Colinear: 15.86m
Hit Ratio: 0.00% 506
Miss Ratio: 100.00% 15.86m

Stride: 22.90m
Hit Ratio: 100.00% 22.90m
Miss Ratio: 0.00% 926

DMU Misc:
Reclaim: 15.86m
Successes: 0.11% 16.87k
Failures: 99.89% 15.84m

Streams: 495.22k
+Resets: 0.00% 4
-Resets: 100.00% 495.22k
Bogus: 0
Page: 5
------------------------------------------------------------------------

ZFS Tunable (sysctl):
kern.maxusers 3402
vm.kmem_size 49876267008
vm.kmem_size_scale 1
vm.kmem_size_min 0
vm.kmem_size_max 329853485875
vfs.zfs.l2c_only_size 0
vfs.zfs.mfu_ghost_data_lsize 41992192
vfs.zfs.mfu_ghost_metadata_lsize 60928
vfs.zfs.mfu_ghost_size 42053120
vfs.zfs.mfu_data_lsize 1308554240
vfs.zfs.mfu_metadata_lsize 1916928
vfs.zfs.mfu_size 1313045504
vfs.zfs.mru_ghost_data_lsize 1478706176
vfs.zfs.mru_ghost_metadata_lsize 563200
vfs.zfs.mru_ghost_size 1479269376
vfs.zfs.mru_data_lsize 34194030080
vfs.zfs.mru_metadata_lsize 16794624
vfs.zfs.mru_size 34261998080
vfs.zfs.anon_data_lsize 0
vfs.zfs.anon_metadata_lsize 0
vfs.zfs.anon_size 164864
vfs.zfs.l2arc_norw 1
vfs.zfs.l2arc_feed_again 1
vfs.zfs.l2arc_noprefetch 1
vfs.zfs.l2arc_feed_min_ms 200
vfs.zfs.l2arc_feed_secs 1
vfs.zfs.l2arc_headroom 2
vfs.zfs.l2arc_write_boost 8388608
vfs.zfs.l2arc_write_max 8388608
vfs.zfs.arc_meta_limit 12200631296
vfs.zfs.arc_meta_used 239099088
vfs.zfs.arc_shrink_shift 5
vfs.zfs.arc_average_blocksize 8192
vfs.zfs.arc_min 6100315648
vfs.zfs.arc_max 48802525184
vfs.zfs.dedup.prefetch 1
vfs.zfs.mdcomp_disable 0
vfs.zfs.nopwrite_enabled 1
vfs.zfs.zfetch.array_rd_sz 1048576
vfs.zfs.zfetch.block_cap 256
vfs.zfs.zfetch.min_sec_reap 2
vfs.zfs.zfetch.max_streams 8
vfs.zfs.prefetch_disable 0
vfs.zfs.delay_scale 500000
vfs.zfs.delay_min_dirty_percent 60
vfs.zfs.dirty_data_sync 67108864
vfs.zfs.dirty_data_max_percent 10
vfs.zfs.dirty_data_max_max 4294967296
vfs.zfs.dirty_data_max 4294967296
vfs.zfs.free_max_blocks 131072
vfs.zfs.no_scrub_prefetch 0
vfs.zfs.no_scrub_io 0
vfs.zfs.resilver_min_time_ms 3000
vfs.zfs.free_min_time_ms 1000
vfs.zfs.scan_min_time_ms 1000
vfs.zfs.scan_idle 50
vfs.zfs.scrub_delay 4
vfs.zfs.resilver_delay 2
vfs.zfs.top_maxinflight 32
vfs.zfs.mg_fragmentation_threshold 85
vfs.zfs.mg_noalloc_threshold 0
vfs.zfs.condense_pct 200
vfs.zfs.metaslab.bias_enabled 1
vfs.zfs.metaslab.lba_weighting_enabled 1
vfs.zfs.metaslab.fragmentation_factor_enabled 1
vfs.zfs.metaslab.preload_enabled 1
vfs.zfs.metaslab.preload_limit 3
vfs.zfs.metaslab.unload_delay 8
vfs.zfs.metaslab.load_pct 50
vfs.zfs.metaslab.min_alloc_size 33554432
vfs.zfs.metaslab.df_free_pct 4
vfs.zfs.metaslab.df_alloc_threshold 131072
vfs.zfs.metaslab.debug_unload 0
vfs.zfs.metaslab.debug_load 0
vfs.zfs.metaslab.fragmentation_threshold 70
vfs.zfs.metaslab.gang_bang 16777217
vfs.zfs.spa_load_verify_data 1
vfs.zfs.spa_load_verify_metadata 1
vfs.zfs.spa_load_verify_maxinflight 10000
vfs.zfs.ccw_retry_interval 300
vfs.zfs.check_hostid 1
vfs.zfs.spa_asize_inflation 24
vfs.zfs.deadman_enabled 1
vfs.zfs.deadman_checktime_ms 5000
vfs.zfs.deadman_synctime_ms 1000000
vfs.zfs.recover 0
vfs.zfs.space_map_blksz 32768
vfs.zfs.trim.max_interval 1
vfs.zfs.trim.timeout 30
vfs.zfs.trim.txg_delay 32
vfs.zfs.trim.enabled 1
vfs.zfs.txg.timeout 5
vfs.zfs.min_auto_ashift 9
vfs.zfs.max_auto_ashift 13
vfs.zfs.vdev.trim_max_pending 64
vfs.zfs.vdev.trim_max_bytes 2147483648
vfs.zfs.vdev.metaslabs_per_vdev 200
vfs.zfs.vdev.cache.bshift 16
vfs.zfs.vdev.cache.size 0
vfs.zfs.vdev.cache.max 16384
vfs.zfs.vdev.larger_ashift_minimal 0
vfs.zfs.vdev.bio_delete_disable 0
vfs.zfs.vdev.bio_flush_disable 0
vfs.zfs.vdev.trim_on_init 1
vfs.zfs.vdev.mirror.non_rotating_seek_inc 1
vfs.zfs.vdev.mirror.non_rotating_inc 0
vfs.zfs.vdev.mirror.rotating_seek_offset 1048576
vfs.zfs.vdev.mirror.rotating_seek_inc 5
vfs.zfs.vdev.mirror.rotating_inc 0
vfs.zfs.vdev.write_gap_limit 4096
vfs.zfs.vdev.read_gap_limit 32768
vfs.zfs.vdev.aggregation_limit 131072
vfs.zfs.vdev.trim_max_active 64
vfs.zfs.vdev.trim_min_active 1
vfs.zfs.vdev.scrub_max_active 2
vfs.zfs.vdev.scrub_min_active 1
vfs.zfs.vdev.async_write_max_active 10
vfs.zfs.vdev.async_write_min_active 1
vfs.zfs.vdev.async_read_max_active 3
vfs.zfs.vdev.async_read_min_active 1
vfs.zfs.vdev.sync_write_max_active 10
vfs.zfs.vdev.sync_write_min_active 10
vfs.zfs.vdev.sync_read_max_active 10
vfs.zfs.vdev.sync_read_min_active 10
vfs.zfs.vdev.max_active 1000
vfs.zfs.vdev.async_write_active_max_dirty_percent 60
vfs.zfs.vdev.async_write_active_min_dirty_percent 30
vfs.zfs.snapshot_list_prefetch 0
vfs.zfs.version.ioctl 4
vfs.zfs.version.zpl 5
vfs.zfs.version.spa 5000
vfs.zfs.version.acl 1
vfs.zfs.debug 0
vfs.zfs.super_owner 0
vfs.zfs.cache_flush_disable 0
vfs.zfs.zil_replay_disable 0
vfs.zfs.sync_pass_rewrite 2
vfs.zfs.sync_pass_dont_compress 5
vfs.zfs.sync_pass_deferred_free 2
vfs.zfs.zio.use_uma 1
vfs.zfs.vol.unmap_enabled 1
vfs.zfs.vol.mode 2
Page: 7
------------------------------------------------------------------------
 

marbus90 (Guru)
SSDs - toss them in the nearest trash bin. Get two Intel S3700 or ZeusRAM SSDs for the SLOG (which is the ZIL stored on a separate SSD). Expensive, but reliable.
HDDs - raidz is not a good fit for lots of random I/O; in the worst case a raidz vdev delivers only the read IOPS of a single disk. raid10 (striped mirrors) is the best practice for that kind of workload.
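
For illustration, a six-disk pool laid out as striped mirrors (instead of one raidz vdev) would be created roughly like this; the pool name and da* device names here are hypothetical:

# three 2-way mirrors striped together
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5
# optionally attach a dedicated SLOG device afterwards (again, hypothetical device)
zpool add tank log da6

Each mirror vdev contributes its own I/O queue, which is where the random-IOPS advantage over a single raidz vdev comes from.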

Honestly, flash storage is cheap. Better to rely on Intel S3500/S3700 SSDs for the zpool itself, without SLOG or L2ARC; you should still see nice IOPS - and definitely more consistent performance than with that SLOG/L2ARC juggling act.
 

cyberjock
What marbus90 said. Start with proper hardware. Once that is done, if performance is an issue (and only if performance is an issue) should you even consider tuning.
 