So I'm testing my ZFS pool with the following tests.
As you can see, my single Crucial MX500 is just as fast as six Samsung 860 Pro drives. I've done a ton of reading and I don't see the issue here.
Code:
#This is my Crucial MX500 boot disk.
dd if=/dev/zero of=/tempfile bs=1M count=5k
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 14.0936 s, 381 MB/s
dd if=/dev/zero of=/tempfile bs=1M count=5k oflag=direct
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 12.7091 s, 422 MB/s
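One caveat worth flagging about these numbers: /dev/zero is an extremely compressible (and dedup-friendly) payload, so on ZFS it can end up measuring the checksum/compression pipeline and RAM rather than the disks. A minimal sketch of the same test with incompressible data (the /tmp path is just an example, not from the original session):

```shell
# Stage an incompressible payload first; /dev/urandom is too slow to
# feed dd directly without skewing the timing.
dd if=/dev/urandom of=/tmp/random.bin bs=1M count=1024

# Time the real write; conv=fdatasync forces a flush to stable storage
# before dd reports its rate.
dd if=/tmp/random.bin of=/tempfile bs=1M conv=fdatasync
```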
#This is my array
zdb
kvm:
    version: 5000
    name: 'kvm'
    state: 0
    txg: 67
    pool_guid: 11505207513529118781
    errata: 0
    hostid: 4285015651
    hostname: 'prod'
    vdev_children: 3
    vdev_tree:
        type: 'root'
        id: 0
        guid: 11505207513529118781
        children[0]:
            type: 'mirror'
            id: 0
            guid: 13930234995489822907
            metaslab_array: 38
            metaslab_shift: 33
            ashift: 9
            asize: 1000189984768
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 17328537525774326006
                path: '/dev/disk/by-id/scsi-35002538e40c2eb67-part1'
                whole_disk: 1
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 7440773068342029255
                path: '/dev/disk/by-id/scsi-35002538e40e5c209-part1'
                whole_disk: 1
                create_txg: 4
        children[1]:
            type: 'mirror'
            id: 1
            guid: 13633949727752237663
            metaslab_array: 36
            metaslab_shift: 33
            ashift: 9
            asize: 1000189984768
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 13401956080122052633
                path: '/dev/disk/by-id/scsi-35002538e40da4ca2-part1'
                whole_disk: 1
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 9163985048442606291
                path: '/dev/disk/by-id/scsi-35002538e000cd7ac-part1'
                whole_disk: 1
                create_txg: 4
        children[2]:
            type: 'mirror'
            id: 2
            guid: 15210515774431942742
            metaslab_array: 34
            metaslab_shift: 33
            ashift: 9
            asize: 1000189984768
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 4547121569999765888
                path: '/dev/disk/by-id/scsi-35002538e40e0481c-part1'
                whole_disk: 1
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 12916523593271101090
                path: '/dev/disk/by-id/scsi-35002538e40e5d5d9-part1'
                whole_disk: 1
                create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
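One detail in the zdb output above: every vdev reports ashift: 9, i.e. 512-byte sectors. SSDs generally perform better with ashift=12 (4 KiB), and ashift is fixed when a vdev is created, so it's worth checking what the drives actually report. A sketch of the check (the zpool create line is illustrative only, not a command from this session):

```shell
# Compare the drives' reported physical vs. logical sector sizes.
lsblk -o NAME,PHY-SEC,LOG-SEC

# ashift can only be chosen at vdev creation time, e.g.:
# zpool create -o ashift=12 kvm mirror <disk1> <disk2>
```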
#These are the speeds I'm getting on the array
root@prod:~# dd if=/dev/zero of=/kvm/testfile bs=1G count=5 oflag=direct
5+0 records in
5+0 records out
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 12.0899 s, 444 MB/s
root@prod:~# dd if=/dev/zero of=/kvm/testfile bs=1G count=5
5+0 records in
5+0 records out
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 14.778 s, 363 MB/s
root@prod:~# zfs set dedup=off kvm
root@prod:~# dd if=/dev/zero of=/kvm/testfile bs=1G count=5
5+0 records in
5+0 records out
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 14.6284 s, 367 MB/s
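Another thing that can skew the non-direct runs: without oflag=direct or a sync option, dd may return before the data actually reaches the disks, so the reported rate partly measures buffering. bs=1G also forces dd to allocate and fill a full gigabyte per write call. A sketch of the same 5 GiB test with smaller blocks and a flush before dd reports (same payload, just more honest accounting):

```shell
# 5 GiB in 1 MiB blocks; conv=fdatasync makes dd fdatasync() the file
# before printing its rate, so unwritten cached data doesn't inflate it.
dd if=/dev/zero of=/kvm/testfile bs=1M count=5120 conv=fdatasync
```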
#These are the pool settings; I get similar speeds whether dedup is on or off.
#(The command below is missing a property argument - it should have been 'zfs get all kvm'.)
zfs get kvm
bad property list: invalid property 'kvm'
usage:
get [-rHp] [-d max] [-o "all" | field[,...]]
[-t type[,...]] [-s source[,...]]
<"all" | property[,...]> [filesystem|volume|snapshot] ...
The following properties are supported:
PROPERTY EDIT INHERIT VALUES
available NO NO <size>
clones NO NO <dataset>[,...]
compressratio NO NO <1.00x or higher if compressed>
creation NO NO <date>
defer_destroy NO NO yes | no
logicalreferenced NO NO <size>
logicalused NO NO <size>
mounted NO NO yes | no
origin NO NO <snapshot>
refcompressratio NO NO <1.00x or higher if compressed>
referenced NO NO <size>
type NO NO filesystem | volume | snapshot | bookmark
used NO NO <size>
usedbychildren NO NO <size>
usedbydataset NO NO <size>
usedbyrefreservation NO NO <size>
usedbysnapshots NO NO <size>
userrefs NO NO <count>
written NO NO <size>
aclinherit YES YES discard | noallow | restricted | passthrough | passthrough-x
acltype YES YES noacl | posixacl
atime YES YES on | off
canmount YES NO on | off | noauto
casesensitivity NO YES sensitive | insensitive | mixed
checksum YES YES on | off | fletcher2 | fletcher4 | sha256
compression YES YES on | off | lzjb | gzip | gzip-[1-9] | zle | lz4
context YES NO <selinux context>
copies YES YES 1 | 2 | 3
dedup YES YES on | off | verify | sha256[,verify]
defcontext YES NO <selinux defcontext>
devices YES YES on | off
exec YES YES on | off
filesystem_count YES NO <count>
filesystem_limit YES NO <count> | none
fscontext YES NO <selinux fscontext>
logbias YES YES latency | throughput
mlslabel YES YES <sensitivity label>
mountpoint YES YES <path> | legacy | none
nbmand YES YES on | off
normalization NO YES none | formC | formD | formKC | formKD
overlay YES YES on | off
primarycache YES YES all | none | metadata
quota YES NO <size> | none
readonly YES YES on | off
recordsize YES YES 512 to 1M, power of 2
redundant_metadata YES YES all | most
refquota YES NO <size> | none
refreservation YES NO <size> | none
relatime YES YES on | off
reservation YES NO <size> | none
rootcontext YES NO <selinux rootcontext>
secondarycache YES YES all | none | metadata
setuid YES YES on | off
sharenfs YES YES on | off | share(1M) options
sharesmb YES YES on | off | sharemgr(1M) options
snapdev YES YES hidden | visible
snapdir YES YES hidden | visible
snapshot_count YES NO <count>
snapshot_limit YES NO <count> | none
sync YES YES standard | always | disabled
utf8only NO YES on | off
version YES NO 1 | 2 | 3 | 4 | 5 | current
volblocksize NO YES 512 to 128k, power of 2
volsize YES NO <size>
vscan YES YES on | off
xattr YES YES on | off | dir | sa
zoned YES YES on | off
userused@... NO NO <size>
groupused@... NO NO <size>
userquota@... YES NO <size> | none
groupquota@... YES NO <size> | none
written@<snap> NO NO <size>