Hi,
I am a little bit frustrated ... here is my setup:
Dell R710 with 288GB RAM and 16 cores (a former ESXi host ... a waste to discard it)
H200 HBA in IT mode
D2600 shelf with 6x 10TB HGST SAS drives
purpose is a backup repository for Veeam
Situation:
the backup server is a Win2016 VM with the zvol LUN connected as RDM
the disk on the client is formatted with ReFS (64k block size)
phoron61 is the name of the FreeNAS box .. so tests are done locally on the machine
backups run quite well (write throughput is about 400MB/s and more ... the bottleneck is the source SAN)
but reading from the zvol is terribly slow (except cached reads, of course)
root@phoron61v:~ # pv < /dev/zvol/backup2/bkp2_zvol1 > /dev/null
^C.2GiB 0:00:14 [51.7MiB/s] [ <=> ]
so reading from the zvol is about 40-100MB/s ...
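In case it helps anyone reproducing this: a plain dd with a large block size gives a cleaner sequential-read number than pv's default read size. A minimal sketch; the zvol path is the one from above, and the helper name is just for illustration:

```shell
# seqread: sequential-read throughput of a file or device
# (hypothetical helper; 1M reads are large enough that the 64K
# volblocksize and ZFS prefetch are not hidden behind tiny read() calls)
seqread() {
    dd if="$1" of=/dev/null bs=1M 2>&1 | tail -n 1
}

# against the zvol from above:
# seqread /dev/zvol/backup2/bkp2_zvol1
```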
BTW:
is this a sequential read?
or is this - because of COW - random reads?
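A back-of-envelope number for the random-read worst case: if COW fragmentation really turns the scan into one seek per 64K volblocksize record, a single RAIDZ1 vdev delivers roughly the IOPS of one member disk. Sketch (the ~150 IOPS figure for a 7.2K nearline SAS drive is an assumption):

```shell
# worst case: every 64K record needs its own seek,
# and one RAIDZ1 vdev ~ random IOPS of a single member disk
iops=150     # assumed random IOPS of one 7.2K SAS drive
blk_kb=64    # volblocksize of the zvol
echo "$((iops * blk_kb / 1024)) MB/s"   # prints: 9 MB/s
```

The observed 40-100MB/s sits between this bound and the ~600MB/s sequential figure, which would fit partially fragmented data plus prefetch.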
dd with different block sizes shows the same ...
reading/writing a local test file (1TB, so no cache hits) works fine ... at about 600MB/s
dd if=/dev/dax of=/dev/null with different block sizes also utilizes the disks
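A related sanity check: reading all six member disks in parallel shows the aggregate ceiling the raw hardware can deliver. Sketch (the daX names are placeholders; take the real members from the gptid list in zpool status):

```shell
# parallel_read: read several devices/files concurrently and wait;
# the combined rate is the hard ceiling for any read from the pool
# (add count=... when pointing this at whole disks)
parallel_read() {
    for dev in "$@"; do
        dd if="$dev" of=/dev/null bs=1M 2>/dev/null &
    done
    wait
}

# e.g. (placeholder device names):
# parallel_read /dev/da0 /dev/da1 /dev/da2 /dev/da3 /dev/da4 /dev/da5
```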
any ideas?
any hints?
I will try some tests with NFS instead of FC/iSCSI, so no zvol is involved ... but that's just a fallback
FreeNAS data
Build FreeNAS-11.2-U5
Platform Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Memory 294859MB
zpool
root@phoron61v:~ # zpool list
NAME           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
backup2         54T  10.5T  43.5T        -         -     0%    19%  1.00x  ONLINE  /mnt
freenas-boot    68G  1.52G  66.5G        -         -      -     2%  1.00x  ONLINE  -
root@phoron61v:~ # zfs get recordsize backup2
NAME     PROPERTY    VALUE  SOURCE
backup2  recordsize  128K   default
root@phoron61v:~ # zfs get volblocksize backup2/bkp2_zvol1
NAME                PROPERTY      VALUE  SOURCE
backup2/bkp2_zvol1  volblocksize  64K    -
root@phoron61v:~ # zpool status backup2
  pool: backup2
 state: ONLINE
  scan: resilvered 36K in 0 days 00:00:01 with 0 errors on Thu Sep 12 10:38:41 2019
config:

	NAME                                            STATE     READ WRITE CKSUM
	backup2                                         ONLINE       0     0     0
	  raidz1-0                                      ONLINE       0     0     0
	    gptid/57525669-d547-11e9-8492-d067e5edcba6  ONLINE       0     0     0
	    gptid/5855889e-d547-11e9-8492-d067e5edcba6  ONLINE       0     0     0
	    gptid/58d986cb-d547-11e9-8492-d067e5edcba6  ONLINE       0     0     0
	    gptid/59c9e8f0-d547-11e9-8492-d067e5edcba6  ONLINE       0     0     0
	    gptid/5abd4c62-d547-11e9-8492-d067e5edcba6  ONLINE       0     0     0
	    gptid/5b3222a7-d547-11e9-8492-d067e5edcba6  ONLINE       0     0     0

errors: No known data errors
Thanks for any hints,
Chris