Hi Everyone,
I am desperate for another set of eyes to look at my problem.
I've been using FreeNAS for a while now in a home NAS environment.
Here's my hardware, which has worked fine until now: an ASRock J3710-ITX board with an Intel Pentium J3710 and 8 GB RAM.
Until now I've been running a RAIDZ1 pool of 3x 2 TB WD Red disks; those are in a pool called "Old". I decided I needed more storage, so I bought 2x 8 TB drives and put them into a new mirror pool named "Daten". Daten is a GELI-encrypted pool; Old is not encrypted.
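Daten was created through the FreeNAS GUI with the encryption option, so the mirror vdev sits on top of two GELI providers. As far as I understand it, that boils down to roughly the following (the key path and flags are my assumption of what the middleware does, and the gptids are placeholders, not copied from my box):
Code:
# illustrative only: FreeNAS creates and manages the GELI key and providers itself
geli init -s 4096 -P -K /data/geli/pool_Daten.key -B none /dev/gptid/<uuid-disk1>
geli attach -p -k /data/geli/pool_Daten.key /dev/gptid/<uuid-disk1>
# (same two steps for the second disk)
zpool create Daten mirror /dev/gptid/<uuid-disk1>.eli /dev/gptid/<uuid-disk2>.eli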
I have zfs send | zfs recv'd all my data from Old to Daten. I don't know exactly how long that took, but it felt like a reasonable pace, certainly less than 12 hours.
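From memory, the copy went roughly like this (the snapshot name is just illustrative, I don't remember the exact one):
Code:
# recursive snapshot of the old pool, then replicate everything into the new pool
root@freenas[/mnt]# zfs snapshot -r Old@migration
root@freenas[/mnt]# zfs send -R Old@migration | zfs recv -dF Daten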
Using the system now, I get write speeds of less than 5 MB/s on Daten. Write speeds on Old are normal, and read speeds on both pools are normal, which makes me assume the CPU is not the bottleneck due to encryption. I've been testing by writing to files in the pool as well as writing to zvols in the pool; performance seems to be the same either way. I am unsure how to diagnose this issue. The CPU does not seem to be the bottleneck as far as I can tell; dd sits at around 50% CPU usage.
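The file and zvol tests were simple dd runs along these lines (the file name and size are just examples; note that zeros compress if lz4 is enabled, so this mainly exercises the write path rather than raw disk speed):
Code:
# write a few GB into a file on the encrypted pool
root@freenas[/mnt]# dd if=/dev/zero of=/mnt/Daten/ddtest bs=1m count=8192
# the same amount straight into a test zvol on the same pool
root@freenas[/mnt]# dd if=/dev/zero of=/dev/zvol/Daten/testzvol bs=1m count=8192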
After having resilvered the pool a couple of times, I am now at around 25 MiB/s, which is still slow. I have no idea why resilvering improved the situation, though. I suspect I might have messed something up with GELI the first time I tested my ability to restore the pool.
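One thing I still want to rule out on the GELI side is the provider sector size, which should be 4096 to match the 4K drives. This is what I would check (output omitted here):
Code:
# show the full GELI metadata for one of the Daten providers, including Sectorsize
root@freenas[/mnt]# geli list gptid/0bd46e1a-500f-11e9-ae97-7085c22555c2.eli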
Here is one of the tests, reading from /dev/da2 and writing into a test zvol on Daten:
Code:
root@freenas[/mnt]# dd if=/dev/da2 bs=5m | pv -s256g | dd of=/dev/zvol/Daten/testzvol
952MiB 0:00:40 [22.4MiB/s]

/dev/da2 should be capable of much higher speeds, as tested on another machine:
Code:
$ dd if=/cygdrive/e/de_windows_8_1_x64_dvd_2707227.iso bs=1M seek=7340032 of=/dev/sdc
3949217792 Bytes (3,9 GB, 3,7 GiB), 41,5303 s, 95,1 MB/s

Do you guys have any idea? How can I analyze this issue? Here is what I have looked at so far.
zpool list
Code:
NAME    SIZE   ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
Daten  7.25T   1.77T  5.48T        -         -     0%    24%  1.00x  ONLINE  /mnt
Old    5.44T   2.48T  2.96T        -         -    19%    45%  1.00x  ONLINE  /mnt
zpool status
Code:
  pool: Daten
 state: ONLINE
  scan: resilvered 71.6M in 0 days 00:00:02 with 0 errors on Fri Mar 29 22:03:28 2019
config:
        NAME                                                STATE     READ WRITE CKSUM
        Daten                                               ONLINE       0     0     0
          mirror-0                                          ONLINE       0     0     0
            gptid/0bd46e1a-500f-11e9-ae97-7085c22555c2.eli  ONLINE       0     0     0
            gptid/0d1c9f79-500f-11e9-ae97-7085c22555c2.eli  ONLINE       0     0     0
errors: No known data errors
  pool: Old
 state: ONLINE
  scan: resilvered 4K in 0 days 00:00:01 with 0 errors on Wed Mar 27 21:03:50 2019
config:
        NAME                                            STATE     READ WRITE CKSUM
        Old                                             ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/b6053b61-422f-11e7-afe7-7085c22555c2  ONLINE       0     0     0
            gptid/b6da5f17-422f-11e7-afe7-7085c22555c2  ONLINE       0     0     0
            gptid/b7b89b77-422f-11e7-afe7-7085c22555c2  ONLINE       0     0     0
errors: No known data errors

The resilver was because I had been testing the disks in another machine, just to make sure.
Code:
root@freenas[/mnt]# geli status
                                          Name  Status  Components
                              mirror/swap1.eli  ACTIVE  mirror/swap1
                              mirror/swap2.eli  ACTIVE  mirror/swap2
gptid/0d1c9f79-500f-11e9-ae97-7085c22555c2.eli  ACTIVE  gptid/0d1c9f79-500f-11e9-ae97-7085c22555c2
gptid/0bd46e1a-500f-11e9-ae97-7085c22555c2.eli  ACTIVE  gptid/0bd46e1a-500f-11e9-ae97-7085c22555c2

The ashift seems to be fine as well, according to zdb; my drives have 4K sectors:
Code:
zdb -U /data/zfs/zpool.cache
Daten:
    version: 5000
    name: 'Daten'
    state: 0
    txg: 50424
    pool_guid: 15516724398489669549
    hostid: 3116882024
    hostname: 'freenas.local'
    com.delphix:has_per_vdev_zaps
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 15516724398489669549
        create_txg: 4
        children[0]:
            type: 'mirror'
            id: 0
            guid: 16389544199983235145
            metaslab_array: 39
            metaslab_shift: 36
            ashift: 12
            asize: 7984378019840
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_top: 36
            children[0]:
                type: 'disk'
                id: 0
                guid: 10871911267321680461
                path: '/dev/gptid/0bd46e1a-500f-11e9-ae97-7085c22555c2.eli'
                whole_disk: 1
                DTL: 80
                create_txg: 4
                com.delphix:vdev_zap_leaf: 37
            children[1]:
                type: 'disk'
                id: 1
                guid: 17126202940566585759
                path: '/dev/gptid/0d1c9f79-500f-11e9-ae97-7085c22555c2.eli'
                whole_disk: 1
                DTL: 84
                create_txg: 4
                com.delphix:vdev_zap_leaf: 38
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
Old:
    version: 5000
    name: 'Old'
    state: 0
    txg: 7368637
    pool_guid: 420297742057380397
    hostid: 3116882024
    hostname: 'freenas.local'
    com.delphix:has_per_vdev_zaps
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 420297742057380397
        create_txg: 4
        children[0]:
            type: 'raidz'
            id: 0
            guid: 4413359244737712072
            nparity: 1
            metaslab_array: 40
            metaslab_shift: 35
            ashift: 12
            asize: 5994739924992
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_top: 36
            children[0]:
                type: 'disk'
                id: 0
                guid: 7363565121042714212
                path: '/dev/gptid/b6053b61-422f-11e7-afe7-7085c22555c2'
                whole_disk: 1
                DTL: 453
                create_txg: 4
                com.delphix:vdev_zap_leaf: 37
            children[1]:
                type: 'disk'
                id: 1
                guid: 16961918986075126235
                path: '/dev/gptid/b6da5f17-422f-11e7-afe7-7085c22555c2'
                whole_disk: 1
                DTL: 452
                create_txg: 4
                com.delphix:vdev_zap_leaf: 38
            children[2]:
                type: 'disk'
                id: 2
                guid: 12975178285482062768
                path: '/dev/gptid/b7b89b77-422f-11e7-afe7-7085c22555c2'
                whole_disk: 1
                DTL: 450
                create_txg: 4
                com.delphix:vdev_zap_leaf: 39
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
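In case it helps, this is what I plan to watch during the next slow write, to see whether the disks, the GELI threads, or ZFS itself are the busy part (commands from memory; better suggestions are welcome):
Code:
# per-disk latency and busy% while a dd onto Daten is running
root@freenas[/mnt]# gstat -p
# pool-level throughput, refreshed every second
root@freenas[/mnt]# zpool iostat -v Daten 1
# per-thread CPU usage, to see whether the geli worker threads max out a core
root@freenas[/mnt]# top -SH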
			