Slow when exporting video

Joined
Mar 3, 2016
Messages
8
Hi, whenever I use Autodesk Flame or even a third-party video transcoder I run into a very strange issue. When I transcode video from either local storage or the FreeNAS and write the output file to the FreeNAS from the application (Flame), it is extremely slow. When performing the same operation against a simple network share it works fast, as expected, but when the target is the FreeNAS it is very slow.

Currently, in system tests I've been getting 700+ on writes and 400+ on reads (10 Gig network).

Supermicro
32 GB RAM
Xeon
10 Gig
4 Promise x10 disk chassis
ZIL and L2ARC on SSDs
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Supermicro
That leaves about 100 possible models.

That leaves about 200 possible models.

That leaves about 20 possible models.

ZIL and L2ARC on SSDs
That leaves about 10 000 models. Each.

So, since we can't offer any insightful commentary when we have no idea which of the 4*10^13 possible configurations you have, I highly recommend you provide us with some more detail.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
4 Promise x10 disk chassis
And, to add to what Ericloewe said, we don't know how many disks you have in these, nor how you laid out your pool.

Do they use hardware RAID?
 
Joined
Mar 3, 2016
Messages
8
Here is some more info on the hardware:

Server: Supermicro SDR-6028R-T
Proc: Intel Xeon E5-2600 v3 series
RAM: 32 GB ECC
FreeNAS 9.3
1x 128 GB SSD for ZIL
1x 128 GB SSD for L2ARC
10 Gig Myricom NIC
QLogic QLE-2562 (I have two 2-port cards; each fibre line goes to a Promise chassis)
Storage: 3 Promise x10 16-bay chassis (NO HW RAID)
RAIDZ x2 per chassis
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
How big are the disks?

Do you have 16 drives in each Promise chassis? What about the Supermicro box?


 
Joined
Mar 3, 2016
Messages
8
16 drives per chassis; they are Seagate Constellation 3TB drives.
The L2ARC and ZIL SSDs are in the Supermicro.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Can we start with the output of zpool status, zfs list, and arc_summary.py in code tags? And what protocol are you using to share this storage?

And can you clarify how you have the Promise enclosures connected? I saw "Qlogic (Fibre Channel), Fiber, and JBOD" and I'm confused. The x10 looks like a JBOD, but maybe I found the wrong part.
 
Joined
Mar 3, 2016
Messages
8
Code:
[root@freenas] ~# zpool status

  pool: creative

state: ONLINE

status: One or more devices has experienced an error resulting in data

    corruption.  Applications may be affected.

action: Restore the file in question if possible.  Otherwise restore the

    entire pool from backup.

  see: http://illumos.org/msg/ZFS-8000-8A

  scan: resilvered 41.5M in 1h31m with 0 errors on Fri Feb 26 17:56:08 2016

config:



    NAME                                            STATE     READ WRITE CKSUM

    creative                                        ONLINE       0     0     0

      raidz1-0                                      ONLINE       0     0     0

        gptid/febd9d05-d80e-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/ff34ba6f-d80e-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/ffa7eb6d-d80e-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/004d81c1-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/01153cea-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/01d24c7b-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/028e3a2f-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

      raidz1-1                                      ONLINE       0     0     0

        gptid/03c94e04-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/048379ba-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/052e25f6-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/05e1d90a-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/069a56eb-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/07531d3f-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/082327b4-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

      raidz1-2                                      ONLINE       0     0     0

        gptid/097aa2b8-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/0a50a94e-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/0b13dd08-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/0be99b5d-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/0cb82e5f-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/0d792ce2-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/0e49169d-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

      raidz1-3                                      ONLINE       0     0     0

        gptid/0f7db54f-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/102bad50-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/10dabb9c-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/1199c5e5-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/12613754-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/1319667a-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/13de0ff3-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

      raidz1-4                                      ONLINE       0     0     0

        gptid/15403b93-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/160216e9-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/16c176b7-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/176a585c-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/1829efa4-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/18fd0f46-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/19dd2b09-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

      raidz1-5                                      ONLINE       0     0     0

        gptid/1b24d3ee-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/1bf3e2b4-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/1cb8cba1-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/1d74c327-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/1e301fdb-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/1eefba20-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

        gptid/1fb5d675-d80f-11e5-973d-8cdcd4ab4594  ONLINE       0     0     0

    logs

      gptid/2085387e-d80f-11e5-973d-8cdcd4ab4594    ONLINE       0     0     0

    cache

      gptid/20e1714f-d80f-11e5-973d-8cdcd4ab4594    ONLINE       0     0     0



errors: 7 data errors, use '-v' for a list



  pool: freenas-boot

state: ONLINE

  scan: none requested

config:



    NAME        STATE     READ WRITE CKSUM

    freenas-boot  ONLINE       0     0     0

      ada0p2    ONLINE       0     0     0



errors: No known data errors

[root@freenas] ~#

[root@freenas] ~# zfs list

NAME                                                        USED  AVAIL  REFER  MOUNTPOINT

creative                                                   44.6T  17.1T  18.9T  /mnt/creative

creative/.system                                           15.2M  17.1T  7.28M  legacy

creative/.system/configs-0dc2ca1e7fa9464d8c4d7c4fd81f6855  1.64M  17.1T  1.64M  legacy

creative/.system/cores                                     1.20M  17.1T  1.20M  legacy

creative/.system/rrd-0dc2ca1e7fa9464d8c4d7c4fd81f6855       162K  17.1T   162K  legacy

creative/.system/samba4                                    3.43M  17.1T  3.43M  legacy

creative/.system/syslog-0dc2ca1e7fa9464d8c4d7c4fd81f6855   1.47M  17.1T  1.47M  legacy

creative/nl02                                              10.2T  17.1T  10.2T  /mnt/creative/nl02

creative/nl03                                              15.6T  17.1T  15.6T  /mnt/creative/nl03

freenas-boot                                                720M   115G   144K  none

freenas-boot/ROOT                                           710M   115G   144K  none

freenas-boot/ROOT/Initial-Install                             8K   115G   674M  legacy

freenas-boot/ROOT/default                                   710M   115G   706M  legacy

freenas-boot/grub                                          7.76M   115G  7.76M  legacy

[root@freenas] ~#

[root@freenas] ~# arc_summary.py

System Memory:



    0.37%    235.87    MiB Active,    1.67%    1.04    GiB Inact

    89.08%    55.17    GiB Wired,    0.01%    4.97    MiB Cache

    8.86%    5.49    GiB Free,    0.00%    1.16    MiB Gap



    Real Installed:                64.00    GiB

    Real Available:            99.79%    63.87    GiB

    Real Managed:            96.97%    61.93    GiB



    Logical Total:                64.00    GiB

    Logical Used:            89.80%    57.47    GiB

    Logical Free:            10.20%    6.53    GiB



Kernel Memory:                    1.25    GiB

    Data:                98.17%    1.22    GiB

    Text:                1.83%    23.32    MiB



Kernel Memory Map:                61.21    GiB

    Size:                74.89%    45.84    GiB

    Free:                25.11%    15.37    GiB

                                Page:  1

------------------------------------------------------------------------



ARC Summary: (HEALTHY)

    Storage pool Version:            5000

    Filesystem Version:            5

    Memory Throttle Count:            0



ARC Misc:

    Deleted:                3.10m

    Recycle Misses:                146.56k

    Mutex Misses:                222

    Evict Skips:                222



ARC Size:                83.38%    50.80    GiB

    Target Size: (Adaptive)        83.47%    50.86    GiB

    Min Size (Hard Limit):        12.50%    7.62    GiB

    Max Size (High Water):        8:1    60.93    GiB



ARC Size Breakdown:

    Recently Used Cache Size:    93.75%    47.68    GiB

    Frequently Used Cache Size:    6.25%    3.18    GiB



ARC Hash Breakdown:

    Elements Max:                2.07m

    Elements Current:        84.23%    1.74m

    Collisions:                885.55k

    Chain Max:                6

    Chains:                    171.34k

                                Page:  2

------------------------------------------------------------------------



ARC Total accesses:                    42.80m

    Cache Hit Ratio:        74.01%    31.68m

    Cache Miss Ratio:        25.99%    11.12m

    Actual Hit Ratio:        59.77%    25.58m



    Data Demand Efficiency:        63.22%    11.58m

    Data Prefetch Efficiency:    73.85%    7.41m



    CACHE HITS BY CACHE LIST:

      Anonymously Used:        18.72%    5.93m

      Most Recently Used:        13.44%    4.26m

      Most Frequently Used:        67.32%    21.32m

      Most Recently Used Ghost:    0.24%    76.94k

      Most Frequently Used Ghost:    0.28%    88.63k



    CACHE HITS BY DATA TYPE:

      Demand Data:            23.10%    7.32m

      Prefetch Data:        17.28%    5.47m

      Demand Metadata:        57.55%    18.23m

      Prefetch Metadata:        2.08%    657.53k



    CACHE MISSES BY DATA TYPE:

      Demand Data:            38.27%    4.26m

      Prefetch Data:        17.42%    1.94m

      Demand Metadata:        43.23%    4.81m

      Prefetch Metadata:        1.07%    119.43k

                                Page:  3

------------------------------------------------------------------------



L2 ARC Summary: (HEALTHY)

    Passed Headroom:            14.09m

    Tried Lock Failures:            1.72k

    IO In Progress:                0

    Low Memory Aborts:            1

    Free on Write:                17.00k

    Writes While Full:            7.62k

    R/W Clashes:                0

    Bad Checksums:                0

    IO Errors:                0

    SPA Mismatch:                238.52m



L2 ARC Size: (Adaptive)                86.32    GiB

    Header Size:            0.19%    169.00    MiB



L2 ARC Evicts:

    Lock Retries:                0

    Upon Reading:                0



L2 ARC Breakdown:                11.12m

    Hit Ratio:            0.40%    44.46k

    Miss Ratio:            99.60%    11.08m

    Feeds:                    293.15k



L2 ARC Buffer:

    Bytes Scanned:                18.33    TiB

    Buffer Iterations:            293.15k

    List Iterations:            18.61m

    NULL List Iterations:            15.20k



L2 ARC Writes:

    Writes Sent:            100.00%    19.97k

                                Page:  4

------------------------------------------------------------------------



File-Level Prefetch: (HEALTHY)

DMU Efficiency:                    338.27m

    Hit Ratio:            97.17%    328.69m

    Miss Ratio:            2.83%    9.58m



    Colinear:                9.58m

      Hit Ratio:            0.09%    8.35k

      Miss Ratio:            99.91%    9.57m



    Stride:                    329.83m

      Hit Ratio:            98.98%    326.48m

      Miss Ratio:            1.02%    3.35m



DMU Misc:

    Reclaim:                9.57m

      Successes:            2.21%    211.58k

      Failures:            97.79%    9.36m



    Streams:                2.21m

      +Resets:            0.02%    391

      -Resets:            99.98%    2.21m

      Bogus:                0

                                Page:  5

------------------------------------------------------------------------



                                Page:  6

------------------------------------------------------------------------



ZFS Tunable (sysctl):

    kern.maxusers                           4423

    vm.kmem_size                            66496450560

    vm.kmem_size_scale                      1

    vm.kmem_size_min                        0

    vm.kmem_size_max                        1319413950874

    vfs.zfs.l2c_only_size                   77451220992

    vfs.zfs.mfu_ghost_data_lsize            42203619840

    vfs.zfs.mfu_ghost_metadata_lsize        410561024

    vfs.zfs.mfu_ghost_size                  42614180864

    vfs.zfs.mfu_data_lsize                  5890801664

    vfs.zfs.mfu_metadata_lsize              8055808

    vfs.zfs.mfu_size                        5962265600

    vfs.zfs.mru_ghost_data_lsize            10866037248

    vfs.zfs.mru_ghost_metadata_lsize        51564032

    vfs.zfs.mru_ghost_size                  10917601280

    vfs.zfs.mru_data_lsize                  40092598784

    vfs.zfs.mru_metadata_lsize              2316882944

    vfs.zfs.mru_size                        42793547264

    vfs.zfs.anon_data_lsize                 0

    vfs.zfs.anon_metadata_lsize             0

    vfs.zfs.anon_size                       163840

    vfs.zfs.l2arc_norw                      1

    vfs.zfs.l2arc_feed_again                1

    vfs.zfs.l2arc_noprefetch                1

    vfs.zfs.l2arc_feed_min_ms               200

    vfs.zfs.l2arc_feed_secs                 1

    vfs.zfs.l2arc_headroom                  2

    vfs.zfs.l2arc_write_boost               8388608

    vfs.zfs.l2arc_write_max                 8388608

    vfs.zfs.arc_meta_limit                  16355677184

    vfs.zfs.arc_shrink_shift                5

    vfs.zfs.arc_average_blocksize           8192

    vfs.zfs.arc_min                         8177838592

    vfs.zfs.arc_max                         65422708736

    vfs.zfs.dedup.prefetch                  1

    vfs.zfs.mdcomp_disable                  0

    vfs.zfs.nopwrite_enabled                1

    vfs.zfs.zfetch.array_rd_sz              1048576

    vfs.zfs.zfetch.block_cap                256

    vfs.zfs.zfetch.min_sec_reap             2

    vfs.zfs.zfetch.max_streams              8

    vfs.zfs.prefetch_disable                0

    vfs.zfs.max_recordsize                  1048576

    vfs.zfs.delay_scale                     500000

    vfs.zfs.delay_min_dirty_percent         60

    vfs.zfs.dirty_data_sync                 67108864

    vfs.zfs.dirty_data_max_percent          10

    vfs.zfs.dirty_data_max_max              4294967296

    vfs.zfs.dirty_data_max                  4294967296

    vfs.zfs.free_max_blocks                 131072

    vfs.zfs.no_scrub_prefetch               0

    vfs.zfs.no_scrub_io                     0

    vfs.zfs.resilver_min_time_ms            3000

    vfs.zfs.free_min_time_ms                1000

    vfs.zfs.scan_min_time_ms                1000

    vfs.zfs.scan_idle                       50

    vfs.zfs.scrub_delay                     4

    vfs.zfs.resilver_delay                  2

    vfs.zfs.top_maxinflight                 32

    vfs.zfs.mg_fragmentation_threshold      85

    vfs.zfs.mg_noalloc_threshold            0

    vfs.zfs.condense_pct                    200

    vfs.zfs.metaslab.bias_enabled           1

    vfs.zfs.metaslab.lba_weighting_enabled  1

    vfs.zfs.metaslab.fragmentation_factor_enabled  1

    vfs.zfs.metaslab.preload_enabled        1

    vfs.zfs.metaslab.preload_limit          3

    vfs.zfs.metaslab.unload_delay           8

    vfs.zfs.metaslab.load_pct               50

    vfs.zfs.metaslab.min_alloc_size         33554432

    vfs.zfs.metaslab.df_free_pct            4

    vfs.zfs.metaslab.df_alloc_threshold     131072

    vfs.zfs.metaslab.debug_unload           0

    vfs.zfs.metaslab.debug_load             0

    vfs.zfs.metaslab.fragmentation_threshold  70

    vfs.zfs.metaslab.gang_bang              16777217

    vfs.zfs.spa_load_verify_data            1

    vfs.zfs.spa_load_verify_metadata        1

    vfs.zfs.spa_load_verify_maxinflight     10000

    vfs.zfs.ccw_retry_interval              300

    vfs.zfs.check_hostid                    1

    vfs.zfs.spa_slop_shift                  5

    vfs.zfs.spa_asize_inflation             24

    vfs.zfs.deadman_enabled                 1

    vfs.zfs.deadman_checktime_ms            5000

    vfs.zfs.deadman_synctime_ms             1000000

    vfs.zfs.recover                         0

    vfs.zfs.space_map_blksz                 32768

    vfs.zfs.trim.max_interval               1

    vfs.zfs.trim.timeout                    30

    vfs.zfs.trim.txg_delay                  32

    vfs.zfs.trim.enabled                    1

    vfs.zfs.txg.timeout                     5

    vfs.zfs.min_auto_ashift                 9

    vfs.zfs.max_auto_ashift                 13

    vfs.zfs.vdev.trim_max_pending           10000

    vfs.zfs.vdev.metaslabs_per_vdev         200

    vfs.zfs.vdev.cache.bshift               16

    vfs.zfs.vdev.cache.size                 0

    vfs.zfs.vdev.cache.max                  16384

    vfs.zfs.vdev.larger_ashift_minimal      0

    vfs.zfs.vdev.bio_delete_disable         0

    vfs.zfs.vdev.bio_flush_disable          0

    vfs.zfs.vdev.trim_on_init               1

    vfs.zfs.vdev.mirror.non_rotating_seek_inc  1

    vfs.zfs.vdev.mirror.non_rotating_inc    0

    vfs.zfs.vdev.mirror.rotating_seek_offset  1048576

    vfs.zfs.vdev.mirror.rotating_seek_inc   5

    vfs.zfs.vdev.mirror.rotating_inc        0

    vfs.zfs.vdev.write_gap_limit            4096

    vfs.zfs.vdev.read_gap_limit             32768

    vfs.zfs.vdev.aggregation_limit          131072

    vfs.zfs.vdev.trim_max_active            64

    vfs.zfs.vdev.trim_min_active            1

    vfs.zfs.vdev.scrub_max_active           2

    vfs.zfs.vdev.scrub_min_active           1

    vfs.zfs.vdev.async_write_max_active     10

    vfs.zfs.vdev.async_write_min_active     1

    vfs.zfs.vdev.async_read_max_active      3

    vfs.zfs.vdev.async_read_min_active      1

    vfs.zfs.vdev.sync_write_max_active      10

    vfs.zfs.vdev.sync_write_min_active      10

    vfs.zfs.vdev.sync_read_max_active       10

    vfs.zfs.vdev.sync_read_min_active       10

    vfs.zfs.vdev.max_active                 1000

    vfs.zfs.vdev.async_write_active_max_dirty_percent  60

    vfs.zfs.vdev.async_write_active_min_dirty_percent  30

    vfs.zfs.snapshot_list_prefetch          0

    vfs.zfs.version.ioctl                   4

    vfs.zfs.version.zpl                     5

    vfs.zfs.version.spa                     5000

    vfs.zfs.version.acl                     1

    vfs.zfs.debug                           0

    vfs.zfs.super_owner                     0

    vfs.zfs.cache_flush_disable             0

    vfs.zfs.zil_replay_disable              0

    vfs.zfs.sync_pass_rewrite               2

    vfs.zfs.sync_pass_dont_compress         5

    vfs.zfs.sync_pass_deferred_free         2

    vfs.zfs.zio.use_uma                     1

    vfs.zfs.vol.unmap_enabled               1

    vfs.zfs.vol.mode                        2

                                Page:  7

------------------------------------------------------------------------
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Please post each output inside CODE tags (wrap it in [CODE] ... [/CODE]). It keeps the formatting and provides the proper indentation. Thanks!
 
Joined
Mar 3, 2016
Messages
8
Unfortunately, I'm not sure how to be clearer about how my QLogic card connects to the chassis's controller. I apologize if you're not familiar with the Promise hardware; you can purchase an x10 or x30 either with a controller or with a SAS expansion card. The expansion card is how you attach the "JBOD" to the "controller". However, because ZFS manages data placement on the platters itself, it's always best to disable the RAID function of your controller and export all the drives in JBOD mode.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Unfortunately, I'm not sure how to be clearer about how my QLogic card connects to the chassis's controller. I apologize if you're not familiar with the Promise hardware; you can purchase an x10 or x30 either with a controller or with a SAS expansion card. The expansion card is how you attach the "JBOD" to the "controller". However, because ZFS manages data placement on the platters itself, it's always best to disable the RAID function of your controller and export all the drives in JBOD mode.
I'm wondering how you are hooking up a Fibre Channel card, over fibre, to a SAS JBOD. My guess is that you aren't, and that you are instead connecting a Fibre Channel SAN to FreeNAS and presenting each drive as a separate target.
What is the Promise family or model number? "Promise x10 16 drive" isn't turning up anything useful.

You have a complicated situation, and the more info you can provide the better we can help. We are all volunteers.
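
If it helps narrow that down, it might also be worth posting how FreeBSD actually sees the attached drives and HBAs. A generic check from the FreeNAS shell (not specific to the Promise gear) would be something like:

Code:
# list every disk and enclosure device the OS can see
camcontrol devlist

# see whether the QLogic FC ports attached via the isp(4) driver
dmesg | grep -i isp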
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Have you checked that the network is working properly? In other words, have you run iperf and arrived at reasonable (for 10GbE) results?
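
For reference, a minimal test would look something like the following (iperf ships with FreeNAS 9.3; you may need to install it on the workstation side, and the IP below is a placeholder for your FreeNAS address):

Code:
# on the FreeNAS box (server side)
iperf -s

# on the workstation (client side), run for 30 seconds with 4 parallel streams
iperf -c 192.168.1.100 -t 30 -P 4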
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
One or more devices has experienced an error resulting in data corruption. Applications may be affected.
Seagate Constellations 3TB
raidz1-0
raidz1-1
raidz1-2
raidz1-3
raidz1-4
raidz1-5
Are you running regular SMART tests, with properly configured email notifications? I suspect at least one of your drives is flaky. Hopefully there's nothing important on this system that isn't backed up elsewhere.
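
If you haven't set those up yet, a quick manual check from the FreeNAS shell would be something along these lines (device names are examples; adjust da0 to your actual disks):

Code:
# print the SMART health summary and error log for one disk
smartctl -a /dev/da0

# start a long self-test on that disk
smartctl -t long /dev/da0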
 
Joined
Mar 3, 2016
Messages
8
I'm consistently getting 700 megs read and 400 write, so I'm thinking this is more likely something with pre-allocation. It only happens when I am exporting a video from a video transcoder. Whether the video source is on the local drive or on the FreeNAS makes no difference; however, when I try it against another SMB share (another Mac's drive) it runs at normal speed, about 20 times faster. So why does writing to the FreeNAS over the 10 Gig line run slow? I also tried it on the 1 Gig line with no success.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
I'm a little confused by your statement that says you have the issue on both a local drive as well as FreeNAS.

We still don't know which protocol you are using to export the file to FreeNAS.

One thing I would do is delete the 7 corrupted files on your server; zpool status -v will list them.
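
For reference, something like this will list the affected files by path (pool name taken from your output above):

Code:
# show the per-file list of permanent errors
zpool status -v creative

# after deleting or restoring those files, reset the pool's error counters
zpool clear creative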


 
Joined
Mar 3, 2016
Messages
8
We are sharing it out over SMB. I know it's a bit confusing; let me try to explain a little better.

When transcoding video you need a source file to transcode and an output file that is written during or after the transcode. If the source file were on the FreeNAS and the output were also being written to the FreeNAS, that could indicate either a problem reading the source file or a problem writing the target file. To rule out the source file, I've placed it on the local drive; from there I can write super fast to the other NAS but still super slow to the FreeNAS.

This problem is consistent across other video transcoders.
 