Rsync to new Volume Very, Very Slow.


Nomad

Contributor
Joined
Oct 14, 2013
Messages
125
Code:
[root@freenas] ~# rsync --progress -av --exclude /mnt/zedpm/jails/ --log-file=/mnt/zedpm/rsync.log.final /mnt/zedpm/ /mnt/zero/
sending incremental file list
rsync.log.final
6173 100% 0.00kB/s 0:00:00 (xfer#1, to-check=1010/1014)
x/y/ <some file 4gb>
4679587840 100% 12.85MB/s 0:05:47 (xfer#2, to-check=1705/1760)
x/y/ <next file>
1803321344 43% 28.46MB/s 0:01:20

Volume 1: zedpm = 4x3TB striped mirrors, ~5TB usable
Volume 2: zero = 2x3TB stripe (RAID0), ~5TB usable
Hardware is in my sig.
This is a local-to-local transfer.
Any ideas? Or any commands I can run to help diagnose it?
When it first starts it will burst to 100-200MB/s, then just "freeze" with no updates to the screen, then jump to the rates displayed above.
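For reference, this is what I've been running to watch per-disk activity while the copy goes (its output is in my next post); it takes no special options:

Code:
# show I/O ops, bandwidth, queue depth and %busy per GEOM provider
gstat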
Code:
last pid: 83990;  load averages:  0.69,  0.29,  0.11  up 13+17:29:56  11:24:24
45 processes:  2 running, 43 sleeping
CPU:  7.1% user,  0.0% nice, 15.3% system,  0.5% interrupt, 77.1% idle
Mem: 196M Active, 405M Inact, 13G Wired, 650M Buf, 1363M Free
ARC: 12G Total, 5753M MFU, 3434M MRU, 2123M Anon, 106M Header, 704M Other
Swap: 12G Total, 12G Free
 
  PID USERNAME    THR PRI NICE  SIZE    RES STATE  C  TIME  WCPU COMMAND
83929 root          1  48    0 82604K  4664K CPU2    2  0:25 35.89% rsync
83931 root          1  44    0 82604K  4388K select  3  0:23 31.30% rsync
2433 root          7  20    0  122M 16696K uwait  5  24:13  0.00% collectd
7840 root          1  20    0 91784K 14644K select  4  15:59  0.00% smbd
3117    972      13  20    0  252M  130M uwait  2  5:57  0.00% Plex Media
1597    972      13  35  15  297M 86788K select  4  3:22  0.00% python
3755    972      12  20    0  227M 84172K uwait  4  2:18  0.00% Plex DLNA
2134 root          6  20    0  466M  140M usem    1  0:48  0.00% python2.7
3348 root          6  36    0  155M 34068K usem    0  0:43  0.00% python2.7
2084 root          1  20    0 73724K  7620K select  2  0:24  0.00% nmbd
1810 root          1  20    0 22212K  3912K select  5  0:23  0.00% ntpd
1999 root          1  20    0 34292K  4924K select  4  0:11  0.00% proftpd
3120 root          1  32  10 18600K  3188K wait    0  0:08  0.00% sh
2402 avahi        1  20    0 30316K  3060K select  1  0:06  0.00% avahi-daem
1566 root          1  20    0 12032K  1732K select  3  0:06  0.00% syslogd
2562 root          1  52    0 14124K  1836K nanslp  1  0:06  0.00% cron


Code:
  pool: zedpm
 state: ONLINE
  scan: scrub repaired 0 in 17h20m with 0 errors on Sun Feb 23 17:20:32 2014
config:
 
        NAME                                            STATE     READ WRITE CKSUM
        zedpm                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/97cfacd2-348e-11e3-80b2-20cf3007fa56  ONLINE       0     0     0
            gptid/98341b7e-348e-11e3-80b2-20cf3007fa56  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/41c4b4da-351e-11e3-ac34-20cf3007fa56  ONLINE       0     0     0
            gptid/427ec90c-351e-11e3-ac34-20cf3007fa56  ONLINE       0     0     0
 
errors: No known data errors
 
  pool: zero
 state: ONLINE
  scan: resilvered 1.21M in 0h2m with 0 errors on Sun Feb  9 12:56:10 2014
config:
 
        NAME                                          STATE     READ WRITE CKSUM
        zero                                          ONLINE       0     0     0
          gptid/2fec0f8c-8eb2-11e3-9a4c-20cf3007fa56  ONLINE       0     0     0
          gptid/30a9ac79-8eb2-11e3-9a4c-20cf3007fa56  ONLINE       0     0     0
 
errors: No known data errors
 

Nomad

Contributor
Joined
Oct 14, 2013
Messages
125
Looks like one of the drives is hitting 100% busy, but I don't understand the output or why that would be.

Code:
dT: 1.001s  w: 1.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0      0      0      0    0.0      0      0    0.0    0.0| ada0
    0      0      0      0    0.0      0      0    0.0    0.0| ada0p1
    0      0      0      0    0.0      0      0    0.0    0.0| ada0p2
    0      0      0      0    0.0      0      0    0.0    0.0| ada1
    0      0      0      0    0.0      0      0    0.0    0.0| ada2
    0      0      0      0    0.0      0      0    0.0    0.0| ada3
   10    599      0      0    0.0    599  76721   16.6   99.9| ada4
    0      0      0      0    0.0      0      0    0.0    0.0| ada5
    0      0      0      0    0.0      0      0    0.0    0.0| da0
    0      0      0      0    0.0      0      0    0.0    0.0| ada1p1.eli
    0      0      0      0    0.0      0      0    0.0    0.0| gptid/98341b7e-348e-11e3-80b2-20cf3007fa56
    0      0      0      0    0.0      0      0    0.0    0.0| ada1p1
    0      0      0      0    0.0      0      0    0.0    0.0| ada1p2
    0      0      0      0    0.0      0      0    0.0    0.0| ada2p1
    0      0      0      0    0.0      0      0    0.0    0.0| ada2p2
    0      0      0      0    0.0      0      0    0.0    0.0| ada3p1
    0      0      0      0    0.0      0      0    0.0    0.0| ada3p2
    0      0      0      0    0.0      0      0    0.0    0.0| ada4p1
   10    599      0      0    0.0    599  76721   16.7  100.0| ada4p2
    0      0      0      0    0.0      0      0    0.0    0.0| ada5p1
    0      0      0      0    0.0      0      0    0.0    0.0| ada5p2
    0      0      0      0    0.0      0      0    0.0    0.0| da0s1
    0      0      0      0    0.0      0      0    0.0    0.0| da0s2
    0      0      0      0    0.0      0      0    0.0    0.0| da0s3
    0      0      0      0    0.0      0      0    0.0    0.0| da0s4
    0      0      0      0    0.0      0      0    0.0    0.0| ada3p1.eli
    0      0      0      0    0.0      0      0    0.0    0.0| gptid/41c4b4da-351e-11e3-ac34-20cf3007fa56
    0      0      0      0    0.0      0      0    0.0    0.0| ada0p1.eli
    0      0      0      0    0.0      0      0    0.0    0.0| gptid/97cfacd2-348e-11e3-80b2-20cf3007fa56
    0      0      0      0    0.0      0      0    0.0    0.0| ada4p1.eli
    0      0      0      0    0.0      0      0    0.0    0.0| gptid/427ec90c-351e-11e3-ac34-20cf3007fa56
    0      0      0      0    0.0      0      0    0.0    0.0| ada5p1.eli
   10    599      0      0    0.0    599  76721   16.7  100.0| gptid/2fec0f8c-8eb2-11e3-9a4c-20cf3007fa56
    0      0      0      0    0.0      0      0    0.0    0.0| gptid/30a9ac79-8eb2-11e3-9a4c-20cf3007fa56
    0      0      0      0    0.0      0      0    0.0    0.0| da0s1a
    0      0      0      0    0.0      0      0    0.0    0.0| ufs/FreeNASs3
    0      0      0      0    0.0      0      0    0.0    0.0| ada2p1.eli
    0      0      0      0    0.0      0      0    0.0    0.0| ufs/FreeNASs4
    0      0      0      0    0.0      0      0    0.0    0.0| ufs/FreeNASs1a
    0      0      0      0    0.0      0      0    0.0    0.0| md0
    0      0      0      0    0.0      0      0    0.0    0.0| md1
    0      0      0      0    0.0      0      0    0.0    0.0| md2
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
%busy is not a useful metric. The documentation for any tool that reports %busy explains that the way that 100% figure is derived makes it misleading for people who don't understand how the number is arrived at. Look at the kBps columns for actual throughput.
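If you want a number that's easier to read than raw gstat output, zpool iostat reports per-pool bandwidth directly; the 5-second interval here is just an example:

Code:
# read/write bandwidth for each pool, averaged over 5-second samples
zpool iostat zedpm zero 5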

rsync is slow. Period. Doing 5TB over rsync could take hours or days. rsync is also single-threaded (kind of... it's really two processes, but not in the way you think) and is very sensitive to latency. Even 1ms of latency from network traffic is a performance killer for rsync. When I was migrating from my "old" server to my "new" server and the systems were cross-connected, I got 25-35MB/sec tops. That was despite the fact that they were connected directly via gigabit and could each do over 600MB/sec internally.
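As a rough illustration of that single-stream limitation, some people work around it by running one rsync per top-level directory so several copies proceed in parallel. This is only a sketch: it assumes an sh-compatible shell (not the default csh) and whatever directory layout actually exists under /mnt/zedpm, and it skips jails like the original command did.

Code:
# one background rsync job per top-level directory, then wait for all of them
cd /mnt/zedpm
for d in */ ; do
    [ "$d" = "jails/" ] && continue
    rsync -a "/mnt/zedpm/$d" "/mnt/zero/$d" &
done
wait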

You go with rsync to save on network traffic and to be reasonably sure the destination file isn't corrupt. Notice I didn't say "to get lightning-fast backups". If you want lightning-fast backups, you want ZFS snapshots and replication.
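For a one-shot local copy, a minimal sketch of the snapshot/replication route might look like this; the snapshot name "migrate" is made up, and it assumes you want everything on zedpm to land under zero:

Code:
# take a recursive snapshot of zedpm and every dataset under it
zfs snapshot -r zedpm@migrate

# send the full replication stream into the zero pool:
# -R includes child datasets and properties, -d keeps the dataset paths,
# -u leaves the received datasets unmounted, -F forces the target to roll back
zfs send -R zedpm@migrate | zfs receive -duF zero

Later backups can then send only the changes as an incremental stream (zfs send -i) instead of re-copying everything.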
 