Slow SMB - Lost - Need direction

vexter0944

Dabbler
Joined
Jan 4, 2023
Messages
34
OK - so I've been setting up TrueNAS SCALE and working to migrate off my older QNAP onto my new TrueNAS server.

Specs:
INTEL S1200SPLR Motherboard
LSI 9211-8i HBA in IT Mode
Intel I3-7100T CPU
16 GB of ECC RAM
4x12TB WDC WD120EDBZ-11B1HA0 shucked drives
AOC-STGN-I2S REV 2.10 Supermicro 10GbE Dual Port SFP+ Network Card Intel 82599

I am testing using a server I built for use with Emby:
Intel I5 10400 CPU
16 GB RAM
All SSD drives
Windows 10 Pro
AOC-STGN-I2S REV 2.10 Supermicro 10GbE Dual Port SFP+ Network Card Intel 82599

The 10GbE network cards are all connected to a Brocade ICX-6450 24P switch, all via SFP+/fiber.

I can get about 250 MB/s to 300 MB/s while transferring from the emby server to the truenas server (large video files)
I can get about 400 MB/s to 500 MB/s while transferring from the truenas server to the emby server (large video files)

iperf and ntttcp show max throughput of about 9.xx Gbps over the network.
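For anyone wanting to reproduce the test, something along these lines works (iperf3 shown; the flags and address are just an example), with iperf3 -s listening on the TrueNAS box:

iperf3 -c <truenas-ip> -P 4 -t 30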

I've noticed that if I copy a file up to the NAS and copy it right back down, it looks like it hits the ARC and I see close to 10Gbps - but once it leaves the cache (I assume), it drops pretty quickly.

I'm not sure if the bottleneck is hardware or software (CIFS/SMB) - but I'm pretty sure it's not the 10GbE network, based on the iperf/ntttcp results and the fact that cached data comes back pretty quickly.

Could it be the LSI 9211 (as in, should I move to a 3008-series HBA)?
Could it be tuning of some sort I need to do?
More ram?

Please help! lol! I'm happy to run tests/send screenshots - whatever I need to do to troubleshoot this, as I'm puzzled.

TIA for any help offered!
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
What is your pool layout?
Please read the following resource.
 

vexter0944

Dabbler
Joined
Jan 4, 2023
Messages
34
see attached - I'll go read that link as well. Thanks!


[screenshots of the pool layout attached]
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I can get about 250 MB/s to 300 MB/s while transferring from the emby server to the truenas server (large video files)
I can get about 400 MB/s to 500 MB/s while transferring from the truenas server to the emby server (large video files)
As you can understand from the resource I asked you to read, 250-300 MB/s for writes and 400-500 MB/s for reads are standard speeds for a pair of 2-way mirrors. I see nothing strange here; your spinning rust simply can't go faster.

Could it be tuning of some sort I need to do?
I would suggest setting the dataset property called "record size" to 1M in order to optimize head reads/sector allocation (or something along those lines, it's a bit late). Do note that this applies to new writes only.
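A minimal sketch from the shell, assuming a hypothetical pool/dataset named tank/media (the same setting can also be changed when editing the dataset in the web UI):

zfs set recordsize=1M tank/media
zfs get recordsize tank/media   # verify; only blocks written after the change use the new size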

I've noticed that if I copy a file up to the NAS and copy it right back down, it looks like it hits the ARC and I see close to 10Gbps - but once it leaves the cache (I assume), it drops pretty quickly.
This is likely correct. Please look here for a similar example.

More ram?
More ARC (RAM) will help you sustain those speeds for longer, provided that the data you want is in it.
If you want to investigate, you can use the arc_summary command.
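For example (hypothetical usage; the -s section filter depends on your arc_summary version):

arc_summary | head -40   # quick look at ARC size and hit ratios
arc_summary -s arc       # just the ARC section, if supported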

Could it be the LSI 9211 (as in, should I move to a 3008-series HBA)?
@jgreco will be able to tell you more, but it shouldn't be an issue.
 
Last edited:

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
@jgreco will be able to tell you more, but it shouldn't be an issue.

I agree that it shouldn't be an issue. The basic issue is that the LSI SAS2008 was really designed for hard drives and used older/slower MIPS CPU cores, and is limited to an aggregate speed less than the 48Gbps max theoretically offered by its eight 6Gbps SAS lanes. It is a PCIe 2.0 x8 design, and with PCIe 2.0 lanes being limited to 5Gbps, that means that you have a 40Gbps cap. In practice it seems like the CPU is a little draggy as well, and I have seen credible measurements as low as maybe 30Gbps aggregate against a bank of known-fast SATA SSD's. It is definitely recommended to avoid overloading the SAS2008 (or accepting reduced performance). The 2308 improved on this with overclocked MIPS cores (hot!) and PCIe 3.0, which comes much closer to full theoretical performance. Even the 3008 has a slight "data handling tax" from the onboard CPU, but you should get performance very close to SATA AHCI out of a 3008 talking directly to modern SSD's.

Bearing in mind that 4x conventional HDD will not even be generating 3Gbps of traffic each, you should be fine running an LSI 2008. If you are going to run this virtualized under ESXi, or move to using SSD's, or be using an external shelf with an SAS3 SAS expander, then there would be reasons to upgrade to the 3008. Otherwise I wouldn't recommend it.
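Rough back-of-the-envelope, just to put numbers on it: even at a generous 250MBytes/sec (~2Gbps) of sequential throughput per drive, four drives only generate about 8Gbps aggregate, nowhere near even the ~30Gbps worst-case figure above.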

You can find out what your pool performance is like by running solnet-array-test available in the Resources section. It is designed to help quantify the actual performance you are getting.
 

vexter0944

Dabbler
Joined
Jan 4, 2023
Messages
34
@Davvo and @jgreco - thanks so much for the quick responses and input, they are very welcome and appreciated!!!

A couple of quick notes/thoughts.

1. I'm migrating from my QNAP and will have an 8x12TB setup when done - so it sounds like I should be OK hardware-wise (at least on the LSI 9211 front at this time)
2. I'm thinking I need to get to 32 or 64GB of RAM - is more always better for truenas or is 32gb good to go for my usage (mostly tv/movie files being sent to Emby)?
3. Can anyone give me any idea(s) why a friend of mine, who has a QNAP (one of the newer ones - I don't know the model this minute) and uses the same drives I do, is more performant and can generally hit 700 MB/s or more while transferring? I'm just puzzled - I've read in the forums it might be more about the file systems (ext3/4 vs ZFS) - but with so little hardware/memory? I know the OS is very specific to the hardware, and if that's the answer, then so be it - but I am quite curious.

Thanks again for the input - I'll look forward to your thoughts, and I think that will set me on a good path moving forward once I hear back on 1 and 2, if you have the time.

Appreciate the help!
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
1. I'm migrating from my QNAP and will have an 8x12TB setup when done - so it sounds like I should be OK hardware-wise (at least on the LSI 9211 front at this time)
We are gonna need to talk about your pool layout.

2. I'm thinking I need to get to 32 or 64GB of RAM - is more always better for truenas or is 32gb good to go for my usage (mostly tv/movie files being sent to Emby)?
Depends what you want. If you aren't running Emby on the TrueNAS system you can go with just 16GB. It really depends on the performance you seek... if you wanna use that 10Gbps network, you want as much as you can fit.

3. Can anyone give me any idea(s) why a friend of mine, who has a QNAP (one of the newer ones - I don't know the model this minute) and uses the same drives I do, is more performant and can generally hit 700 MB/s or more while transferring? I'm just puzzled - I've read in the forums it might be more about the file systems (ext3/4 vs ZFS) - but with so little hardware/memory? I know the OS is very specific to the hardware, and if that's the answer, then so be it - but I am quite curious.
If I am not wrong, QNAP doesn't use ZFS: life is easier when you don't have checksums and hashes.
 

vexter0944

Dabbler
Joined
Jan 4, 2023
Messages
34
@Davvo - thanks for those points!

So on the pool layout - right now I have 4 new 12TB drives installed in the Truenas box with 4 12TB in the old QNAP NAS - when I'm done migrating - I'm going to move the 4 QNAP 12TB drives into the new Truenas server. So what should my pool layout look like from your point of view?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
@Davvo - thanks for those points!
To expand on the last one, quoting the Introduction to ZFS:
To this end, ZFS is completely Copy-on-Write (CoW) and checksums all data and metadata.
Checksums are kept separate from their blocks, so ZFS can verify that data is valid and that it is the correct data and not something else that your evil disks or storage controller sent your way.

So on the pool layout - right now I have 4 new 12TB drives installed in the Truenas box with 4 12TB in the old QNAP NAS - when I'm done migrating - I'm going to move the 4 QNAP 12TB drives into the new Truenas server. So what should my pool layout look like from your point of view?
Well, what do you want? Storage efficiency? To saturate your 10Gbps network as much as possible?
Also, yeah with 8x12TB drives you want at least 32GB of RAM.
 

vexter0944

Dabbler
Joined
Jan 4, 2023
Messages
34
To expand on the last one, quoting the Introduction to ZFS:



Well, what do you want? Storage efficiency? To saturate your 10Gbps network as much as possible?
Also, yeah with 8x12TB drives you want at least 32GB of RAM.
32GB it is as a minimum, and I'll try for 64GB if I can swing it. Thanks for that note!

I'm looking for performance overall with at least 'some' kind of fault tolerance. I have another QNAP I'll be backing up to for the data I need to make sure I don't lose (pics, documents, etc), but I won't be backing up tv/movies as I can always replace them (although it would take a while... so I'd rather not lose them if possible...). But yes - I'd like to leverage as much of my newly installed 10GbE network as possible. I hope that helps - if not, ask more questions and I'll answer asap.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I'm looking for performance overall with at least 'some' kind of fault tolerance. I have another QNAP I'll be backing up to for the data I need to make sure I don't lose (pics, documents, etc), but I won't be backing up tv/movies as I can always replace them (although it would take a while... so I'd rather not lose them if possible...). But yes - I'd like to leverage as much of my newly installed 10GbE network as possible. I hope that helps - if not, ask more questions and I'll answer asap.
I think your best shot is 4x 2-way mirrors, basically what you were already doing. The alternatives would be a single 8-disk-wide vdev in RAIDZ2, or 2x 4-wide vdevs in RAIDZ1, and both are ugly (and not worth it) in their own way.
With the mirrors you get a good level of resilience and great read performance, and are able to continue the expansion without having to destroy your current pool.
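To sketch the expansion (hypothetical pool name and device names - check yours before running anything; the same can be done from the web UI's pool management page):

zpool add tank mirror /dev/sde /dev/sdf   # after migrating data off the first pair of QNAP drives
zpool add tank mirror /dev/sdg /dev/sdh   # and the second pair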

32GB it is as a minimum, and I'll try for 64GB if I can swing it. Thanks for that note!
There is a way you could kinda work around increasing your RAM: you could add two or three SSDs in a mirror to the pool as special vdevs.
Quoting the fusion pool documentation:
A special VDEV can store metadata such as file locations and allocation tables. The allocations in the special class are dedicated to specific block types. By default, this includes all metadata, the indirect blocks of user data, and any deduplication tables. The class can also be provisioned to accept small file blocks. This is a great use case for high performance but smaller sized solid-state storage. Using a special vdev drastically speeds up random I/O and cuts the average spinning-disk I/Os needed to find and access a file by up to half.
It could end up cheaper than buying more RAM, but it does have its heavy disadvantages: if you lose the vdev (all the SSDs) the pool is gone and being a vdev it's not removable, so no step back is allowed once the choice is made.
Honestly this would probably benefit the single-vdev RAIDZ2 option more, but it could work as a sort of cache (it's not a cache!!) for the metadata.

The more RAM option is likely better, especially if you manage to reach 64GB since you would be able to use a real cache (L2ARC) for metadata.
Also, the community's experience with the special vdevs is (at least apparently) way less than what we have with the L2ARC.
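To make the two options concrete (hypothetical pool and device names; the special vdev must be mirrored because losing it means losing the pool):

zpool add tank special mirror /dev/ssd1 /dev/ssd2   # option A: metadata special vdev (no going back)

zpool add tank cache /dev/ssd3                      # option B: L2ARC device (can be removed later)
zfs set secondarycache=metadata tank                # restrict the L2ARC to metadata only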
 
Last edited:

vexter0944

Dabbler
Joined
Jan 4, 2023
Messages
34
I think your best shot is 4x 2-way mirrors, basically what you were already doing. The alternatives would be a single 8-disk-wide vdev in RAIDZ2, or 2x 4-wide vdevs in RAIDZ1, and both are ugly (and not worth it) in their own way.
With the mirrors you get a good level of resilience and great read performance, and are able to continue the expansion without having to destroy your current pool.


There is a way you could kinda work around increasing your RAM: you could add two or three SSDs in a mirror to the pool as special vdevs.
Quoting the fusion pool documentation:

It could end up cheaper than buying more RAM, but it does have its heavy disadvantages: if you lose the vdev (all the SSDs) the pool is gone and being a vdev it's not removable, so no step back is allowed once the choice is made.
Honestly this would probably benefit the single-vdev RAIDZ2 option more, but it could work as a sort of cache (it's not a cache!!) for the metadata.

The more RAM option is likely better, especially if you manage to reach 64GB since you would be able to use a real cache (L2ARC) for metadata.
Also, the community's experience with the special vdevs is (at least apparently) way less than what we have with the L2ARC.
@Davvo - the plan was to continue adding drives in pairs as vdevs and adding them to the pool - so I guess it would be 2x mirror, 8 wide, if I'm correct. If not - please correct me :)

I ended up ordering another 16gb of ram to get me to 32 for now - if that doesn't work I'll save up and order 64gb (2x32) and be done with it since my motherboard maxes out at 64gb anyway.

If there's more to consider etc - let me know. Otherwise, I'll go with this plan for now and report back in once the ram arrives and go from there.

Thanks a ton for stepping in and giving me your advice - very much appreciated!
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
@Davvo - the plan was to continue adding drives in pairs as vdevs and adding them to the pool - so I guess it would be 2x mirror, 8 wide, if I'm correct. If not - please correct me :)
It's 4 (vdevs) x (each composed of a) 2-way (2-disk) mirror, or 2-way mirror x4 if you prefer. The final pool, I mean (you already have half of it).

I ended up ordering another 16gb of ram to get me to 32 for now - if that doesn't work I'll save up and order 64gb (2x32) and be done with it since my motherboard maxes out at 64gb anyway.
Cheers!

If there's more to consider etc - let me know. Otherwise, I'll go with this plan for now and report back in once the ram arrives and go from there.
Nah, with such a layout you should pump out nice reads... according to my 3am math about 8Gbps, but being limited to roughly 2000 read IOPS will likely bring that number down quite a bit.
Just do note that the usable space (80% of the total storage, since we want to leave 20% of the pool free so performance doesn't crawl) will be about 37TB.
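Back-of-the-envelope: 4 mirrors x 12TB is roughly 48TB of mirrored capacity; 80% of that is about 38TB, and a bit less once ZFS's own reservations are counted, hence the ~37TB figure.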

Thanks a ton for stepping in and giving me your advice - very much appreciated!
It's my pleasure!
 
Last edited:

vexter0944

Dabbler
Joined
Jan 4, 2023
Messages
34
@Davvo - I like it!! Will report back in when ram gets here and see if that helps and go from there. We'll be in touch soon! Take care!
 

vexter0944

Dabbler
Joined
Jan 4, 2023
Messages
34
I wanted to report back in - adding RAM made a big difference. I'm hitting 500 MB/s up and down to the TrueNAS server consistently now, and have seen over 900 MB/s on some transfers (seems file dependent... like the one I saw 900+ on was a 1GB exe installer - but media files for Emby are more like 500) - at some point I'll most likely put in 64GB and max the server. But this did help quite a bit. Thanks again for the help @Davvo and @jgreco !
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I wanted to report back in - adding RAM made a big difference. I'm hitting 500 MB/s up and down to the TrueNAS server consistently now, and have seen over 900 MB/s on some transfers (seems file dependent... like the one I saw 900+ on was a 1GB exe installer - but media files for Emby are more like 500) - at some point I'll most likely put in 64GB and max the server. But this did help quite a bit. Thanks again for the help @Davvo and @jgreco !

Thanks for the report back. There is nothing that convinces you that ZFS is a resource pig like actually seeing such results. At least it is no longer thousands of dollars worth of RAM to get that kind of improvement. Enjoy!
 

vexter0944

Dabbler
Joined
Jan 4, 2023
Messages
34
@jgreco - you're right about that on all counts. It's about $200 for 64GB of ECC UDIMM 2400 - so very doable in the near future. I'm loading up data for the cutover as we speak - so I should be cut over in the next few days. Thanks again for all the help and input!
 

vexter0944

Dabbler
Joined
Jan 4, 2023
Messages
34
@jgreco @Davvo - wanted to pop back in on this for a minute and see if you guys have any other suggestions/thoughts. I'm thinking some kind of possible performance tuning, but not sure.

I dropped the 64GB in yesterday - little to no change in terms of speed. About 400-500 MB/s reading from TrueNAS to the Emby server - I can write from the Emby server to TrueNAS at about the same rate. However, I would have thought the read performance would have been higher than the write performance?

I did add an L2ARC cache drive to the media pool this morning - no real change there either.

Are there possibly any other performance tuning tips/guides I should be looking at or is this kind of speed (both read and write) pretty much expected with my setup?

Including some metrics jic.

Thanks as always!

[screenshots of network and disk throughput metrics attached]
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
How big is this L2ARC drive?
Cache grows with time (and reads).

Can you post the output of arc_summary?
 

vexter0944

Dabbler
Joined
Jan 4, 2023
Messages
34
The L2ARC is a 100GB SSD.


------------------------------------------------------------------------
ZFS Subsystem Report Fri Jan 27 12:19:13 2023
Linux 5.15.79+truenas 2.1.6-1
Machine: truenas (x86_64) 2.1.6-1

ARC status: HEALTHY
Memory throttle count: 0

ARC size (current): 76.0 % 23.7 GiB
Target size (adaptive): 100.0 % 31.2 GiB
Min size (hard limit): 6.2 % 2.0 GiB
Max size (high water): 16:1 31.2 GiB
Most Frequently Used (MFU) cache size: 93.1 % 22.0 GiB
Most Recently Used (MRU) cache size: 6.9 % 1.6 GiB
Metadata cache size (hard limit): 75.0 % 23.4 GiB
Metadata cache size (current): 1.2 % 289.1 MiB
Dnode cache size (hard limit): 10.0 % 2.3 GiB
Dnode cache size (current): 1.8 % 43.6 MiB

ARC hash breakdown:
Elements max: 91.9k
Elements current: 85.7 % 78.8k
Collisions: 1.9k
Chain max: 2
Chains: 396

ARC misc:
Deleted: 38.6k
Mutex misses: 10
Eviction skips: 7.2k
Eviction skips due to L2 writes: 0
L2 cached evictions: 11.6 GiB
L2 eligible evictions: 49.0 GiB
L2 eligible MFU evictions: 93.1 % 45.6 GiB
L2 eligible MRU evictions: 6.9 % 3.4 GiB
L2 ineligible evictions: 6.4 GiB

ARC total accesses (hits + misses): 4.7M
Cache hit ratio: 97.5 % 4.5M
Cache miss ratio: 2.5 % 117.6k
Actual hit ratio (MFU + MRU hits): 97.2 % 4.5M
Data demand efficiency: 98.4 % 1.2M
Data prefetch efficiency: 0.1 % 86.8k

Cache hits by cache type:
Most frequently used (MFU): 87.7 % 4.0M
Most recently used (MRU): 12.0 % 542.2k
Most frequently used (MFU) ghost: < 0.1 % 842
Most recently used (MRU) ghost: 0.1 % 3.2k
Anonymously used: 0.2 % 10.8k

Cache hits by data type:
Demand data: 27.0 % 1.2M
Demand prefetch data: < 0.1 % 125
Demand metadata: 72.6 % 3.3M
Demand prefetch metadata: 0.3 % 14.9k

Cache misses by data type:
Demand data: 17.1 % 20.1k
Demand prefetch data: 73.7 % 86.6k
Demand metadata: 8.0 % 9.4k
Demand prefetch metadata: 1.2 % 1.5k

DMU prefetch efficiency: 557.9k
Hit ratio: 17.9 % 100.1k
Miss ratio: 82.1 % 457.9k

L2ARC status: HEALTHY
Low memory aborts: 0
Free on write: 0
R/W clashes: 0
Bad checksums: 0
I/O errors: 0

L2ARC size (adaptive): 18.4 GiB
Compressed: 99.9 % 18.4 GiB
Header size: < 0.1 % 303.8 KiB
MFU allocated size: 95.5 % 17.6 GiB
MRU allocated size: 4.5 % 849.4 MiB
Prefetch allocated size: 0.0 % 0 Bytes
Data (buffer content) allocated size: 100.0 % 18.4 GiB
Metadata (buffer content) allocated size: < 0.1 % 5.2 MiB

L2ARC breakdown: 86.2k
Hit ratio: 0.5 % 407
Miss ratio: 99.5 % 85.8k
Feeds: 3.3k

L2ARC writes:
Writes sent: 100 % 1.3k

L2ARC evicts:
Lock retries: 0
Upon reading: 0

Solaris Porting Layer (SPL):
spl_hostid 0
spl_hostid_path /etc/hostid
spl_kmem_alloc_max 8388608
spl_kmem_alloc_warn 65536
spl_kmem_cache_kmem_threads 4
spl_kmem_cache_magazine_size 0
spl_kmem_cache_max_size 32
spl_kmem_cache_obj_per_slab 8
spl_kmem_cache_reclaim 0
spl_kmem_cache_slab_limit 16384
spl_max_show_tasks 512
spl_panic_halt 1
spl_schedule_hrtimeout_slack_us 0
spl_taskq_kick 0
spl_taskq_thread_bind 0
spl_taskq_thread_dynamic 1
spl_taskq_thread_priority 1
spl_taskq_thread_sequential 4

Tunables:
dbuf_cache_hiwater_pct 10
dbuf_cache_lowater_pct 10
dbuf_cache_max_bytes 18446744073709551615
dbuf_cache_shift 5
dbuf_metadata_cache_max_bytes 18446744073709551615
dbuf_metadata_cache_shift 6
dmu_object_alloc_chunk_shift 7
dmu_prefetch_max 134217728
ignore_hole_birth 1
l2arc_exclude_special 0
l2arc_feed_again 1
l2arc_feed_min_ms 200
l2arc_feed_secs 1
l2arc_headroom 2
l2arc_headroom_boost 200
l2arc_meta_percent 33
l2arc_mfuonly 0
l2arc_noprefetch 1
l2arc_norw 0
l2arc_rebuild_blocks_min_l2size 1073741824
l2arc_rebuild_enabled 1
l2arc_trim_ahead 0
l2arc_write_boost 8388608
l2arc_write_max 8388608
metaslab_aliquot 1048576
metaslab_bias_enabled 1
metaslab_debug_load 0
metaslab_debug_unload 0
metaslab_df_max_search 16777216
metaslab_df_use_largest_segment 0
metaslab_force_ganging 16777217
metaslab_fragmentation_factor_enabled 1
metaslab_lba_weighting_enabled 1
metaslab_preload_enabled 1
metaslab_unload_delay 32
metaslab_unload_delay_ms 600000
send_holes_without_birth_time 1
spa_asize_inflation 24
spa_config_path /etc/zfs/zpool.cache
spa_load_print_vdev_tree 0
spa_load_verify_data 1
spa_load_verify_metadata 1
spa_load_verify_shift 4
spa_slop_shift 5
vdev_file_logical_ashift 9
vdev_file_physical_ashift 9
vdev_removal_max_span 32768
vdev_validate_skip 0
zap_iterate_prefetch 1
zfetch_array_rd_sz 1048576
zfetch_max_distance 67108864
zfetch_max_idistance 67108864
zfetch_max_sec_reap 2
zfetch_max_streams 8
zfetch_min_distance 4194304
zfetch_min_sec_reap 1
zfs_abd_scatter_enabled 1
zfs_abd_scatter_max_order 13
zfs_abd_scatter_min_size 1536
zfs_admin_snapshot 0
zfs_allow_redacted_dataset_mount 0
zfs_arc_average_blocksize 8192
zfs_arc_dnode_limit 0
zfs_arc_dnode_limit_percent 10
zfs_arc_dnode_reduce_percent 10
zfs_arc_evict_batch_limit 10
zfs_arc_eviction_pct 200
zfs_arc_grow_retry 0
zfs_arc_lotsfree_percent 10
zfs_arc_max 0
zfs_arc_meta_adjust_restarts 4096
zfs_arc_meta_limit 0
zfs_arc_meta_limit_percent 75
zfs_arc_meta_min 0
zfs_arc_meta_prune 10000
zfs_arc_meta_strategy 1
zfs_arc_min 0
zfs_arc_min_prefetch_ms 0
zfs_arc_min_prescient_prefetch_ms 0
zfs_arc_p_dampener_disable 1
zfs_arc_p_min_shift 0
zfs_arc_pc_percent 0
zfs_arc_prune_task_threads 1
zfs_arc_shrink_shift 0
zfs_arc_shrinker_limit 10000
zfs_arc_sys_free 0
zfs_async_block_max_blocks 18446744073709551615
zfs_autoimport_disable 1
zfs_btree_verify_intensity 0
zfs_checksum_events_per_second 20
zfs_commit_timeout_pct 5
zfs_compressed_arc_enabled 1
zfs_condense_indirect_commit_entry_delay_ms 0
zfs_condense_indirect_obsolete_pct 25
zfs_condense_indirect_vdevs_enable 1
zfs_condense_max_obsolete_bytes 1073741824
zfs_condense_min_mapping_bytes 131072
zfs_dbgmsg_enable 1
zfs_dbgmsg_maxsize 4194304
zfs_dbuf_state_index 0
zfs_ddt_data_is_special 1
zfs_deadman_checktime_ms 60000
zfs_deadman_enabled 1
zfs_deadman_failmode wait
zfs_deadman_synctime_ms 600000
zfs_deadman_ziotime_ms 300000
zfs_dedup_prefetch 0
zfs_delay_min_dirty_percent 60
zfs_delay_scale 500000
zfs_delete_blocks 20480
zfs_dirty_data_max 4294967296
zfs_dirty_data_max_max 4294967296
zfs_dirty_data_max_max_percent 25
zfs_dirty_data_max_percent 10
zfs_dirty_data_sync_percent 20
zfs_disable_ivset_guid_check 0
zfs_dmu_offset_next_sync 1
zfs_embedded_slog_min_ms 64
zfs_expire_snapshot 300
zfs_fallocate_reserve_percent 110
zfs_flags 0
zfs_free_bpobj_enabled 1
zfs_free_leak_on_eio 0
zfs_free_min_time_ms 1000
zfs_history_output_max 1048576
zfs_immediate_write_sz 32768
zfs_initialize_chunk_size 1048576
zfs_initialize_value 16045690984833335022
zfs_keep_log_spacemaps_at_export 0
zfs_key_max_salt_uses 400000000
zfs_livelist_condense_new_alloc 0
zfs_livelist_condense_sync_cancel 0
zfs_livelist_condense_sync_pause 0
zfs_livelist_condense_zthr_cancel 0
zfs_livelist_condense_zthr_pause 0
zfs_livelist_max_entries 500000
zfs_livelist_min_percent_shared 75
zfs_lua_max_instrlimit 100000000
zfs_lua_max_memlimit 104857600
zfs_max_async_dedup_frees 100000
zfs_max_log_walking 5
zfs_max_logsm_summary_length 10
zfs_max_missing_tvds 0
zfs_max_nvlist_src_size 0
zfs_max_recordsize 1048576
zfs_metaslab_find_max_tries 100
zfs_metaslab_fragmentation_threshold 70
zfs_metaslab_max_size_cache_sec 3600
zfs_metaslab_mem_limit 25
zfs_metaslab_segment_weight_enabled 1
zfs_metaslab_switch_threshold 2
zfs_metaslab_try_hard_before_gang 0
zfs_mg_fragmentation_threshold 95
zfs_mg_noalloc_threshold 0
zfs_min_metaslabs_to_flush 1
zfs_multihost_fail_intervals 10
zfs_multihost_history 0
zfs_multihost_import_intervals 20
zfs_multihost_interval 1000
zfs_multilist_num_sublists 0
zfs_no_scrub_io 0
zfs_no_scrub_prefetch 0
zfs_nocacheflush 0
zfs_nopwrite_enabled 1
zfs_object_mutex_size 64
zfs_obsolete_min_time_ms 500
zfs_override_estimate_recordsize 0
zfs_pd_bytes_max 52428800
zfs_per_txg_dirty_frees_percent 5
zfs_prefetch_disable 0
zfs_read_history 0
zfs_read_history_hits 0
zfs_rebuild_max_segment 1048576
zfs_rebuild_scrub_enabled 1
zfs_rebuild_vdev_limit 33554432
zfs_reconstruct_indirect_combinations_max 4096
zfs_recover 0
zfs_recv_queue_ff 20
zfs_recv_queue_length 16777216
zfs_recv_write_batch_size 1048576
zfs_removal_ignore_errors 0
zfs_removal_suspend_progress 0
zfs_remove_max_segment 16777216
zfs_resilver_disable_defer 0
zfs_resilver_min_time_ms 3000
zfs_scan_blkstats 0
zfs_scan_checkpoint_intval 7200
zfs_scan_fill_weight 3
zfs_scan_ignore_errors 0
zfs_scan_issue_strategy 0
zfs_scan_legacy 0
zfs_scan_max_ext_gap 2097152
zfs_scan_mem_lim_fact 20
zfs_scan_mem_lim_soft_fact 20
zfs_scan_strict_mem_lim 0
zfs_scan_suspend_progress 0
zfs_scan_vdev_limit 4194304
zfs_scrub_min_time_ms 1000
zfs_send_corrupt_data 0
zfs_send_no_prefetch_queue_ff 20
zfs_send_no_prefetch_queue_length 1048576
zfs_send_queue_ff 20
zfs_send_queue_length 16777216
zfs_send_unmodified_spill_blocks 1
zfs_slow_io_events_per_second 20
zfs_spa_discard_memory_limit 16777216
zfs_special_class_metadata_reserve_pct 25
zfs_sync_pass_deferred_free 2
zfs_sync_pass_dont_compress 8
zfs_sync_pass_rewrite 2
zfs_sync_taskq_batch_pct 75
zfs_traverse_indirect_prefetch_limit 32
zfs_trim_extent_bytes_max 134217728
zfs_trim_extent_bytes_min 32768
zfs_trim_metaslab_skip 0
zfs_trim_queue_limit 10
zfs_trim_txg_batch 32
zfs_txg_history 100
zfs_txg_timeout 5
zfs_unflushed_log_block_max 131072
zfs_unflushed_log_block_min 1000
zfs_unflushed_log_block_pct 400
zfs_unflushed_log_txg_max 1000
zfs_unflushed_max_mem_amt 1073741824
zfs_unflushed_max_mem_ppm 1000
zfs_unlink_suspend_progress 0
zfs_user_indirect_is_special 1
zfs_vdev_aggregate_trim 0
zfs_vdev_aggregation_limit 1048576
zfs_vdev_aggregation_limit_non_rotating 131072
zfs_vdev_async_read_max_active 3
zfs_vdev_async_read_min_active 1
zfs_vdev_async_write_active_max_dirty_percent 60
zfs_vdev_async_write_active_min_dirty_percent 30
zfs_vdev_async_write_max_active 10
zfs_vdev_async_write_min_active 2
zfs_vdev_cache_bshift 16
zfs_vdev_cache_max 16384
zfs_vdev_cache_size 0
zfs_vdev_default_ms_count 200
zfs_vdev_default_ms_shift 29
zfs_vdev_initializing_max_active 1
zfs_vdev_initializing_min_active 1
zfs_vdev_max_active 1000
zfs_vdev_max_auto_ashift 14
zfs_vdev_min_auto_ashift 9
zfs_vdev_min_ms_count 16
zfs_vdev_mirror_non_rotating_inc 0
zfs_vdev_mirror_non_rotating_seek_inc 1
zfs_vdev_mirror_rotating_inc 0
zfs_vdev_mirror_rotating_seek_inc 5
zfs_vdev_mirror_rotating_seek_offset 1048576
zfs_vdev_ms_count_limit 131072
zfs_vdev_nia_credit 5
zfs_vdev_nia_delay 5
zfs_vdev_queue_depth_pct 1000
zfs_vdev_raidz_impl cycle [fastest] original scalar sse2 ssse3 avx2
zfs_vdev_read_gap_limit 32768
zfs_vdev_rebuild_max_active 3
zfs_vdev_rebuild_min_active 1
zfs_vdev_removal_max_active 2
zfs_vdev_removal_min_active 1
zfs_vdev_scheduler unused
zfs_vdev_scrub_max_active 3
zfs_vdev_scrub_min_active 1
zfs_vdev_sync_read_max_active 10
zfs_vdev_sync_read_min_active 10
zfs_vdev_sync_write_max_active 10
zfs_vdev_sync_write_min_active 10
zfs_vdev_trim_max_active 2
zfs_vdev_trim_min_active 1
zfs_vdev_write_gap_limit 4096
zfs_vnops_read_chunk_size 1048576
zfs_wrlog_data_max 8589934592
zfs_xattr_compat 0
zfs_zevent_len_max 512
zfs_zevent_retain_expire_secs 900
zfs_zevent_retain_max 2000
zfs_zil_clean_taskq_maxalloc 1048576
zfs_zil_clean_taskq_minalloc 1024
zfs_zil_clean_taskq_nthr_pct 100
zil_maxblocksize 131072
zil_nocacheflush 0
zil_replay_disable 0
zil_slog_bulk 786432
zio_deadman_log_all 0
zio_dva_throttle_enabled 1
zio_requeue_io_start_cut_in_line 1
zio_slow_io_ms 30000
zio_taskq_batch_pct 80
zio_taskq_batch_tpq 0
zvol_inhibit_dev 0
zvol_major 230
zvol_max_discard_blocks 16384
zvol_prefetch_bytes 131072
zvol_request_sync 0
zvol_threads 32
zvol_volmode 2

VDEV cache disabled, skipping section

ZIL committed transactions: 42.9k
Commit requests: 3.4k
Flushes to stable storage: 3.4k
Transactions to SLOG storage pool: 0 Bytes 0
Transactions to non-SLOG storage pool: 86.5 MiB 3.5k
 