Unequal data written to vdevs in a mirror

RegularJoe

Patron
Joined
Aug 19, 2013
Messages
330
Hi All,

I have a host with 48 drives arranged as 24 mirror vdevs. I am using dd to fill the pool to about 75%, and I notice that one vdev has 23% less data on it than the average.

Some vdevs sit at about 101% of the average allocation and others are well below that (the sketch after the iostat output below is roughly how I am comparing them).

Should this concern me?

Thanks,
Joe

Code:
dd if=/dev/random of=/mnt/vold/benchmark/dataRandFile.dd bs=1024k count=102400


Code:
for i in {100..349}; do cp -v dataRandFile.dd "/mnt/vol54/benchmark/dataRandFile_$i.dd" ; done


Code:
root@store54[/mnt/vold/benchmark]# zpool iostat -v vol54
                                                  capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
vol54                                           23.8T  41.5T      0  1.97K  2.41K  1.31G
  mirror                                        1.01T  1.71T      0     96    141  57.1M
    gptid/27e73d12-edae-11eb-9251-246e968dc06a      -      -      0     57     70  28.6M
    gptid/27f708f8-edae-11eb-9251-246e968dc06a      -      -      0     38     70  28.6M
  mirror                                         959G  1.78T      0     68    105  53.3M
    gptid/8a6cb504-edae-11eb-9251-246e968dc06a      -      -      0     34     55  26.6M
    gptid/8a7b7f39-edae-11eb-9251-246e968dc06a      -      -      0     33     49  26.6M
  mirror                                        1.01T  1.71T      0     93    105  57.6M
    gptid/a7cc9933-edae-11eb-9251-246e968dc06a      -      -      0     58     51  28.8M
    gptid/a7de9fa8-edae-11eb-9251-246e968dc06a      -      -      0     34     54  28.8M
  mirror                                        1.01T  1.71T      0     74    106  57.6M
    gptid/c3a44e64-edae-11eb-9251-246e968dc06a      -      -      0     34     53  28.8M
    gptid/c3b8c75a-edae-11eb-9251-246e968dc06a      -      -      0     39     53  28.8M
  mirror                                        1.11T  1.60T      0    105    101  63.7M
    gptid/e05ad3be-edae-11eb-9251-246e968dc06a      -      -      0     41     50  31.8M
    gptid/e06b8e92-edae-11eb-9251-246e968dc06a      -      -      0     63     50  31.8M
  mirror                                        1.01T  1.71T      0     75     93  57.9M
    gptid/001fc690-edaf-11eb-9251-246e968dc06a      -      -      0     35     47  28.9M
    gptid/0055bbe0-edaf-11eb-9251-246e968dc06a      -      -      0     40     46  28.9M
  mirror                                        1.12T  1.60T      0    107    102  64.0M
    gptid/19337621-edaf-11eb-9251-246e968dc06a      -      -      0     66     51  32.0M
    gptid/1969532d-edaf-11eb-9251-246e968dc06a      -      -      0     41     51  32.0M
  mirror                                        1.01T  1.71T      0     74    104  57.7M
    gptid/31c44ade-edaf-11eb-9251-246e968dc06a      -      -      0     40     54  28.9M
    gptid/31da7dbf-edaf-11eb-9251-246e968dc06a      -      -      0     34     50  28.9M
  mirror                                         944G  1.80T      0     93    114  53.0M
    gptid/5277c406-edaf-11eb-9251-246e968dc06a      -      -      0     32     59  26.5M
    gptid/528d97a2-edaf-11eb-9251-246e968dc06a      -      -      0     60     55  26.5M
  mirror                                        1.00T  1.71T      0     74    109  57.8M
    gptid/6ec685de-edaf-11eb-9251-246e968dc06a      -      -      0     39     55  28.9M
    gptid/6eddc531-edaf-11eb-9251-246e968dc06a      -      -      0     35     54  28.9M
  mirror                                        1.16T  1.56T      0     94    117  66.6M
    gptid/8c46ceb8-edaf-11eb-9251-246e968dc06a      -      -      0     53     57  33.3M
    gptid/8c8378a1-edaf-11eb-9251-246e968dc06a      -      -      0     40     60  33.3M
  mirror                                         956G  1.78T      0     69    100  53.9M
    gptid/abf284a1-edaf-11eb-9251-246e968dc06a      -      -      0     34     47  26.9M
    gptid/ac2c8c27-edaf-11eb-9251-246e968dc06a      -      -      0     35     53  26.9M
  mirror                                        1.15T  1.57T      0     97    132  66.5M
    gptid/cc2f7a5d-edaf-11eb-9251-246e968dc06a      -      -      0     39     66  33.2M
    gptid/cc435735-edaf-11eb-9251-246e968dc06a      -      -      0     57     65  33.2M
  mirror                                        1.01T  1.71T      0     76    114  58.5M
    gptid/f0006667-edaf-11eb-9251-246e968dc06a      -      -      0     41     57  29.3M
    gptid/f014a3e8-edaf-11eb-9251-246e968dc06a      -      -      0     35     56  29.3M
  mirror                                        1.01T  1.71T      0     95    117  58.6M
    gptid/15d3085a-edb0-11eb-9251-246e968dc06a      -      -      0     34     64  29.3M
    gptid/15bfe491-edb0-11eb-9251-246e968dc06a      -      -      0     60     53  29.3M
  mirror                                         768G  1.97T      0     77     98  43.6M
    gptid/3bdac715-edb0-11eb-9251-246e968dc06a      -      -      0     32     47  21.8M
    gptid/3bf43b64-edb0-11eb-9251-246e968dc06a      -      -      0     44     50  21.8M
  mirror                                        1.01T  1.71T      0     95     97  58.6M
    gptid/683671b3-edb0-11eb-9251-246e968dc06a      -      -      0     34     48  29.3M
    gptid/68780d56-edb0-11eb-9251-246e968dc06a      -      -      0     60     49  29.3M
  mirror                                         949G  1.79T      0     69    100  54.0M
    gptid/92d2ddc8-edb0-11eb-9251-246e968dc06a      -      -      0     34     52  27.0M
    gptid/92ec40f2-edb0-11eb-9251-246e968dc06a      -      -      0     34     48  27.0M
  mirror                                         936G  1.80T      0     94     90  53.4M
    gptid/be53cb42-edb0-11eb-9251-246e968dc06a      -      -      0     61     45  26.7M
    gptid/be70cc6c-edb0-11eb-9251-246e968dc06a      -      -      0     33     44  26.7M
  mirror                                         882G  1.86T      0     69     99  50.4M
    gptid/ec24c6cd-edb0-11eb-9251-246e968dc06a      -      -      0     37     50  25.2M
    gptid/ec9982df-edb0-11eb-9251-246e968dc06a      -      -      0     31     48  25.2M
  mirror                                        1.02T  1.70T      0    102    103  59.7M
    gptid/1d99cec1-edb1-11eb-9251-246e968dc06a      -      -      0     63     55  29.8M
    gptid/1dec1148-edb1-11eb-9251-246e968dc06a      -      -      0     39     47  29.8M
  mirror                                        1.02T  1.70T      0     87     84  60.0M
    gptid/50133054-edb1-11eb-9251-246e968dc06a      -      -      0     52     42  30.0M
    gptid/505f52aa-edb1-11eb-9251-246e968dc06a      -      -      0     35     42  30.0M
  mirror                                         942G  1.80T      0     96     83  54.5M
    gptid/e3e81ab3-edb1-11eb-9251-246e968dc06a      -      -      0     62     41  27.2M
    gptid/e3fe0b5a-edb1-11eb-9251-246e968dc06a      -      -      0     33     42  27.2M
  mirror                                         962G  1.78T      0     70     84  55.8M
    gptid/0bf011df-edb2-11eb-9251-246e968dc06a      -      -      0     35     42  27.9M
    gptid/0c073dfe-edb2-11eb-9251-246e968dc06a      -      -      0     35     42  27.9M
----------------------------------------------  -----  -----  -----  -----  -----  -----
root@rstore54[/mnt/vold/benchmark]#
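In case it helps to see how I am comparing them: roughly something like the sketch below, which pulls each mirror's alloc figure out of zpool iostat -v and divides it by the average. The field positions, the assumption that vdev rows show up as plain "mirror" (as they do in the output above), and the crude T/G unit handling are all guesses about this particular output format rather than anything guaranteed.

Code:
# Rough sketch: compare each mirror vdev's allocated space to the average.
# Assumes vdev rows appear as plain "mirror", that alloc is the second
# column, and that the values only carry T or G suffixes.
zpool iostat -v vol54 | awk '
  $1 == "mirror" {
    a = $2
    if (a ~ /T$/) { sub(/T$/, "", a); a *= 1024 }   # TiB -> GiB
    else          { sub(/G$/, "", a) }              # assume GiB otherwise
    alloc[++n] = a; total += a
  }
  END {
    avg = total / n
    for (i = 1; i <= n; i++)
      printf "mirror %2d: %6.0f GiB  (%.0f%% of average)\n", i, alloc[i], 100 * alloc[i] / avg
  }'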
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,176
It's not inherently a problem, but it points towards some lower-performance vdevs possibly in the mix. It could be just normal disk performance variation between units, a narrower pipe to the affected disks (one more SAS expander hop?), some disks receiving more noise, ...
I hesitate to add failing disks to the list because the distribution you see is not really crazy or anything.
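If you want to compare the raw disks directly, a rough loop like the one below gives you a quick per-disk sequential-read number to eyeball. Treat it as a sketch: it assumes the pool members enumerate as da0 through da47 (adjust to however your system numbers them), and diskinfo -t on a busy pool will both disturb and be disturbed by other I/O, so run it while the pool is idle.

Code:
# Quick per-disk outer-track read rate; assumes the drives are da0..da47.
# Run while the pool is otherwise idle.
for d in $(seq 0 47); do
  printf "da%-3s " "$d"
  diskinfo -t /dev/da$d | awk '/outside:/ {print $(NF-1), $NF}'
done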
 

RegularJoe

Patron
Joined
Aug 19, 2013
Messages
330
The diskinfo -citv results for both disks look good:

Code:
root@store54[/mnt/vold/benchmark]# diskinfo -citv /dev/da19
/dev/da19
        512             # sectorsize
        3000592982016   # mediasize in bytes (2.7T)
        5860533168      # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        364801          # Cylinders according to firmware.
        255             # Heads according to firmware.
        63              # Sectors according to firmware.
        SEAGATE ST33000650SS    # Disk descr.
        Z295L56G        # Disk ident.
        id1,enc@n5001438028c48080/type@0/slot@9 # Physical path
        No              # TRIM/UNMAP support
        7200            # Rotation rate in RPM
        Not_Zoned       # Zone Mode

I/O command overhead:
        time to read 10MB block      0.175608 sec       =    0.009 msec/sector
        time to read 20480 sectors   1.844049 sec       =    0.090 msec/sector
        calculated command overhead                     =    0.081 msec/sector

Seek times:
        Full stroke:      250 iter in   4.932825 sec =   19.731 msec
        Half stroke:      250 iter in   3.650686 sec =   14.603 msec
        Quarter stroke:   500 iter in   5.868195 sec =   11.736 msec
        Short forward:    400 iter in   1.754105 sec =    4.385 msec
        Short backward:   400 iter in   2.308485 sec =    5.771 msec
        Seq outer:       2048 iter in   0.216428 sec =    0.106 msec
        Seq inner:       2048 iter in   2.894013 sec =    1.413 msec

Transfer rates:
        outside:       102400 kbytes in   0.741436 sec =   138110 kbytes/sec
        middle:        102400 kbytes in   0.909188 sec =   112628 kbytes/sec
        inside:        102400 kbytes in   1.474911 sec =    69428 kbytes/sec

Asynchronous random reads:
        sectorsize:     856 ops in    3.595347 sec =      238 IOPS
        4 kbytes:       735 ops in    3.788959 sec =      194 IOPS
        32 kbytes:      708 ops in    3.775741 sec =      188 IOPS
        128 kbytes:     586 ops in    3.879611 sec =      151 IOPS

root@store54[/mnt/vold/benchmark]# diskinfo -citv /dev/da35
/dev/da35
        512             # sectorsize
        3000592982016   # mediasize in bytes (2.7T)
        5860533168      # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        364801          # Cylinders according to firmware.
        255             # Heads according to firmware.
        63              # Sectors according to firmware.
        IBM-XIV ST33000650SS B1 # Disk descr.
        Z29560XF00009314G6YX    # Disk ident.
        id1,enc@n5001438023682700/type@0/slot@9 # Physical path
        No              # TRIM/UNMAP support
        7200            # Rotation rate in RPM
        Not_Zoned       # Zone Mode

I/O command overhead:
        time to read 10MB block      0.079987 sec       =    0.004 msec/sector
        time to read 20480 sectors   1.711372 sec       =    0.084 msec/sector
        calculated command overhead                     =    0.080 msec/sector

Seek times:
        Full stroke:      250 iter in   4.900277 sec =   19.601 msec
        Half stroke:      250 iter in   3.542248 sec =   14.169 msec
        Quarter stroke:   500 iter in   3.620755 sec =    7.242 msec
        Short forward:    400 iter in   1.733959 sec =    4.335 msec
        Short backward:   400 iter in   1.923815 sec =    4.810 msec
        Seq outer:       2048 iter in   0.114103 sec =    0.056 msec
        Seq inner:       2048 iter in   0.188406 sec =    0.092 msec

Transfer rates:
        outside:       102400 kbytes in   0.683563 sec =   149803 kbytes/sec
        middle:        102400 kbytes in   0.806302 sec =   127000 kbytes/sec
        inside:        102400 kbytes in   1.339599 sec =    76441 kbytes/sec

Asynchronous random reads:
        sectorsize:     867 ops in    3.639245 sec =      238 IOPS
        4 kbytes:       712 ops in    3.772032 sec =      189 IOPS
        32 kbytes:      699 ops in    3.779598 sec =      185 IOPS
        128 kbytes:     632 ops in    3.860267 sec =      164 IOPS

root@store54[/mnt/vold/benchmark]#
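One thing I did notice is that the two disks report different enclosures in their physical paths above. A loop like this (device names are my assumption again) lists the enclosure/slot path for every drive, which should show whether some of them sit behind an extra expander hop:

Code:
# Print the enclosure/slot path for each drive; da0..da47 is assumed.
for d in $(seq 0 47); do
  printf "da%-3s " "$d"
  diskinfo -v /dev/da$d | awk '/Physical path/ {print $1}'
done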
 

RegularJoe

Patron
Joined
Aug 19, 2013
Messages
330
I have to say I am a tool.

I was running the copy operation while I had also told TrueNAS to run a long SMART test on all of those same disks, so it could be that the one disk did not like doing double duty.
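If I want a clean benchmark run, I suppose the right move is to abort any self-tests that are still running before kicking off the copy again; I believe something like this would do it (the da0..da47 device names are an assumption about how my drives enumerate):

Code:
# Abort any in-progress (non-captive) SMART self-test on each drive.
# da0..da47 is an assumption; adjust to your enumeration.
for d in $(seq 0 47); do
  smartctl -X /dev/da$d
done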
 

RegularJoe

Patron
Joined
Aug 19, 2013
Messages
330
And I just checked: /dev/da19 is still running the long test at 10.5 hours in, when the smartctl results say it should take 7.5 hours. I am checking all the others now to see whether any other disks are still running their tests and not yet completed.
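Roughly along these lines; the grep pattern is a guess, since the wording for an in-progress test differs between ATA and SCSI drives, and da0..da47 is again an assumption about the device names:

Code:
# Check every drive for a self-test that is still running.  The pattern is
# approximate; ATA and SCSI drives word the in-progress status differently.
for d in $(seq 0 47); do
  echo "=== da$d ==="
  smartctl -a /dev/da$d | grep -i -E "self.?test.*(progress|remaining)" \
    || echo "no test in progress reported"
done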
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
You've identified your oversight, corrected it, and posted a clear update that can help others who might be in the same situation in the future.

I'd say in the very literal sense, you're a tool - something aiding others in the accomplishment of a task. :)
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,176
10.5 hours when the smartctl results say it should take 7.5 hours
Those "estimates" are optimistic when there is no workload. With workload, they're fantasy numbers.
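For what it's worth, that 7.5-hour figure is just the drive's own advertised self-test duration, which you can pull straight out of smartctl; the exact label differs between ATA and SCSI drives, so the pattern below is only approximate:

Code:
# The drive's advertised self-test duration.  The label differs between
# ATA and SCSI drives, so this grep pattern is approximate.
smartctl -a /dev/da19 | grep -i -E "self.?test.*(duration|polling time)"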
 