Rewrite performance and general question


elec (Cadet, joined Aug 23, 2015, 5 messages)
System Hardware:

CPU: Intel(R) Xeon(R) CPU L5420 @ 2.50GHz (2500.14-MHz K8-class CPU)
Real memory = 35165044736 (33536 MB)
mps0: <LSI SAS2308> port 0x2000-0x20ff mem 0xda240000-0xda24ffff,0xda200000-0xda23ffff irq 16 at device 0.0 on pci3
mps0: Firmware: 16.00.00.00, Driver: 16.00.00.00-fbsd
mps0: IOCCapabilities: 1285c<ScsiTaskFull,DiagTrace,SnapBuf,EEDP,TransRetry,EventReplay,HostDisc>

There are 16 disks like the following in a RAID10-style setup:
da0 at mps0 bus 0 scbus0 target 8 lun 0
da0: <TOSHIBA MG03SCA200 0108> Fixed Direct Access SCSI-5 device
da0: Serial Number 45D0A02LFZZ9
da0: 600.000MB/s transfers
da0: Command Queueing enabled
da0: 1907729MB (3907029168 512 byte sectors: 255H 63S/T 243201C)

Benchmark command: dd if=/dev/daXX of=/dev/null bs=262144
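
The single-disk numbers below come from one dd against one device; the aggregate test further down reads all sixteen drives at once, which can be launched roughly like this (a sketch of one dd per disk, not the exact command history):

Code:
# one sequential read per active multipath member (da16-da31)
for n in $(seq 16 31); do
    dd if=/dev/da$n of=/dev/null bs=262144 &
done
wait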

Single HD performance:

Code:
[root@nas01] ~# iostat da16 1
       tty            da16             cpu
tin  tout  KB/t tps  MB/s  us ni sy in id
   0   135 112.83  50  5.49   0  0  2  0 97
   0   133 128.00 1279 159.89   0  0  0  0 100
   0    48 128.00 1298 162.21   0  0  0  0 99
   0    47 128.00 1303 162.84   0  0  0  0 100
   0    48 128.00 1304 162.96   0  0  0  0 99
   0    47 128.00 1299 162.34   0  0  1  0 99
   0    47 128.00 1281 160.09   0  0  0  0 100
   0    48 128.00 1281 160.09   0  0  0  0 99
   0    47 128.00 1325 165.58   0  0  1  0 99


Aggregate HD performance (all 16 disks read at once):

Code:
[root@nas01] ~# iostat da16 da17 da18 da19 da20 da21 da22 da23 da24 da25 da26 da27 da28 da29 da30 da31 1
       tty            da16             da17             da18             da19             da20             da21             da22             da23             da24             da25             da26             da27             da28             da29             da30             da31             cpu
tin  tout  KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
   0   135 112.40  49  5.38  112.30  49  5.37  112.29  49  5.37  112.37  49  5.36  112.45  49  5.38  112.35  49  5.36  112.33  49  5.39  112.39  49  5.37  112.43  49  5.38  111.73  50  5.42  112.49  49  5.34  112.66  49  5.38  112.69  49  5.37  112.57  49  5.38  112.52  49  5.36  112.44  49  5.36   0  0  2  0 97
   0   912 128.00 386 48.22  128.00 387 48.34  128.00 387 48.34  128.00 386 48.22  128.00 383 47.84  128.00 379 47.35  128.00 392 48.97  128.00 401 50.09  128.00 387 48.34  128.00 381 47.59  128.00 391 48.84  128.00 403 50.34  128.00 381 47.59  128.00 377 47.10  128.00 382 47.72  128.00 393 49.09   0  0  2  0 97
   0   315 128.00 388 48.45  128.00 381 47.58  128.00 402 50.20  128.00 386 48.20  128.00 383 47.83  128.00 384 47.95  128.00 395 49.33  128.00 397 49.58  128.00 396 49.45  128.00 381 47.58  128.00 391 48.83  128.00 385 48.08  128.00 387 48.33  128.00 381 47.58  128.00 375 46.83  128.00 384 47.95   0  0  2  1 97
   0   315 128.00 394 49.20  128.00 395 49.33  128.00 406 50.70  128.00 394 49.20  128.00 385 48.08  128.00 396 49.45  128.00 384 47.95  128.00 381 47.58  128.00 376 46.95  128.00 386 48.20  128.00 381 47.58  128.00 390 48.70  128.00 382 47.70  128.00 383 47.83  128.00 375 46.83  128.00 390 48.70   0  0  3  0 97
   0   315 128.00 383 47.83  128.00 380 47.45  128.00 396 49.45  128.00 381 47.58  128.00 384 47.95  128.00 389 48.58  128.00 387 48.33  128.00 389 48.58  128.00 381 47.58  128.00 387 48.33  128.00 398 49.70  128.00 399 49.82  128.00 375 46.83  128.00 387 48.33  128.00 389 48.58  128.00 392 48.95   0  0  2  1 97
   0   315 128.00 383 47.83  128.00 398 49.70  128.00 386 48.20  128.00 391 48.83  128.00 394 49.20  128.00 381 47.58  128.00 388 48.45  128.00 396 49.45  128.00 390 48.70  128.00 383 47.83  128.00 388 48.45  128.00 388 48.45  128.00 382 47.70  128.00 393 49.08  128.00 376 46.95  128.00 380 47.45   0  0  2  2 97
   0   315 128.00 385 48.08  128.00 400 49.95  128.00 382 47.70  128.00 394 49.20  128.00 385 48.08  128.00 379 47.33  128.00 394 49.20  128.00 392 48.95  128.00 407 50.82  128.00 390 48.70  128.00 393 49.08  128.00 378 47.20  128.00 378 47.20  128.00 391 48.83  128.00 378 47.20  128.00 372 46.45   0  0  2  1 97
   0   315 128.00 381 47.58  128.00 401 50.07  128.00 384 47.95  128.00 395 49.33  128.00 383 47.83  128.00 389 48.58  128.00 387 48.33  128.00 396 49.45  128.00 390 48.70  128.00 390 48.70  128.00 382 47.70  128.00 384 47.95  128.00 379 47.33  128.00 385 48.08  128.00 390 48.70  128.00 381 47.58   0  0  2  1 97
   0   315 128.00 388 48.45  128.00 386 48.20  128.00 388 48.45  128.00 384 47.95  128.00 385 48.08  128.00 390 48.70  128.00 399 49.83  128.00 385 48.08  128.00 393 49.08  128.00 387 48.33  128.00 383 47.83  128.00 402 50.20  128.00 384 47.95  128.00 375 46.83  128.00 376 46.95  128.00 391 48.83   0  0  3  0 97
   0   315 128.00 386 48.20  128.00 386 48.20  128.00 376 46.95  128.00 386 48.20  128.00 378 47.20  128.00 389 48.58  128.00 387 48.33  128.00 390 48.70  128.00 390 48.70  128.00 391 48.83  128.00 405 50.57  128.00 386 48.20  128.00 387 48.33  128.00 386 48.20  128.00 381 47.58  128.00 393 49.08   0  0  2  1 96


I am not an expert, but it looks as if the LSI card (or something between it and the disks) is limiting the bandwidth to about 50 MB/s per drive, roughly 770 MB/s in total across the 16 drives. Is that expected?

ZFS test. Compression was disabled for these tests; the remaining parameters are at their defaults. All tests were run locally on the NAS, with no iSCSI or NFS involved.
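
For reference, compression is a per-dataset property; turning it off for the test amounts to something like the following, with tank being the pool name from the prompt below.

Code:
# turn compression off for the benchmark run
zfs set compression=off tank
# turn it back on afterwards (lz4 is the usual FreeNAS default)
zfs set compression=lz4 tank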

Code:
[root@nas01] /mnt/tank# iozone -r 4k -r 8k -r 16k -r 32k -r 64k -r 128k -s 64g -i 0 -i 1
        Iozone: Performance Test of File I/O
                Version $Revision: 3.420 $
                Compiled for 64 bit mode.
                Build: freebsd

        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                     Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                     Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
                     Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
                     Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
                     Vangel Bojaxhi, Ben England, Vikentsi Lapa.

        Run began: Sun Aug 23 00:39:31 2015

        Record Size 4 KB
        Record Size 8 KB
        Record Size 16 KB
        Record Size 32 KB
        Record Size 64 KB
        Record Size 128 KB
        File size set to 67108864 KB
        Command line used: iozone -r 4k -r 8k -r 16k -r 32k -r 64k -r 128k -s 64g -i 0 -i 1
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                            random  random    bkwd   record   stride                                  
              KB  reclen   write rewrite    read    reread    read   write    read  rewrite     read   fwrite frewrite   fread  freread
        67108864       4  272495   47919   529063   508837                                                                         
        67108864       8  261976   49975   581346   650581                                                                         
        67108864      16  265560   50986   599449   650338                                                                         
        67108864      32  279449   51662   631565   656612                                                                         
        67108864      64  263125   52159   641028   667229                                                                         
        67108864     128  303179  307754   645718   665924 


I guess all of these numbers look normal except for the rewrite column. Is that expected?
 

elec (Cadet, joined Aug 23, 2015, 5 messages)
I missed a small detail that might matter: I am using multipath.

Code:
[root@nas01] ~# gmultipath list
Geom name: disk16
Type: AUTOMATIC
Mode: Active/Passive
UUID: a84c32e2-485a-11e5-8658-002590294d4e
State: OPTIMAL
Providers:
1. Name: multipath/disk16
   Mediasize: 2000398933504 (1.8T)
   Sectorsize: 512
   Mode: r0w0e0
   State: OPTIMAL
Consumers:
1. Name: da31
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
2. Name: da15
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: PASSIVE

Geom name: disk15
Type: AUTOMATIC
Mode: Active/Passive
UUID: a83230da-485a-11e5-8658-002590294d4e
State: OPTIMAL
Providers:
1. Name: multipath/disk15
   Mediasize: 2000398933504 (1.8T)
   Sectorsize: 512
   Mode: r0w0e0
   State: OPTIMAL
Consumers:
1. Name: da30
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
2. Name: da14
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: PASSIVE

Geom name: disk14
Type: AUTOMATIC
Mode: Active/Passive
UUID: a81b5dd6-485a-11e5-8658-002590294d4e
State: OPTIMAL
Providers:
1. Name: multipath/disk14
   Mediasize: 2000398933504 (1.8T)
   Sectorsize: 512
   Mode: r0w0e0
   State: OPTIMAL
Consumers:
1. Name: da29
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
2. Name: da13
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: PASSIVE

Geom name: disk13
Type: AUTOMATIC
Mode: Active/Passive
UUID: a802851b-485a-11e5-8658-002590294d4e
State: OPTIMAL
Providers:
1. Name: multipath/disk13
   Mediasize: 2000398933504 (1.8T)
   Sectorsize: 512
   Mode: r0w0e0
   State: OPTIMAL
Consumers:
1. Name: da28
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
2. Name: da12
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: PASSIVE

Geom name: disk12
Type: AUTOMATIC
Mode: Active/Passive
UUID: a7e9886a-485a-11e5-8658-002590294d4e
State: OPTIMAL
Providers:
1. Name: multipath/disk12
   Mediasize: 2000398933504 (1.8T)
   Sectorsize: 512
   Mode: r0w0e0
   State: OPTIMAL
Consumers:
1. Name: da27
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
2. Name: da11
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: PASSIVE

Geom name: disk11
Type: AUTOMATIC
Mode: Active/Passive
UUID: a7bd98a5-485a-11e5-8658-002590294d4e
State: OPTIMAL
Providers:
1. Name: multipath/disk11
   Mediasize: 2000398933504 (1.8T)
   Sectorsize: 512
   Mode: r0w0e0
   State: OPTIMAL
Consumers:
1. Name: da26
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
2. Name: da10
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: PASSIVE

Geom name: disk10
Type: AUTOMATIC
Mode: Active/Passive
UUID: a7a82945-485a-11e5-8658-002590294d4e
State: OPTIMAL
Providers:
1. Name: multipath/disk10
   Mediasize: 2000398933504 (1.8T)
   Sectorsize: 512
   Mode: r0w0e0
   State: OPTIMAL
Consumers:
1. Name: da25
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
2. Name: da9
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: PASSIVE

Geom name: disk9
Type: AUTOMATIC
Mode: Active/Passive
UUID: a7932210-485a-11e5-8658-002590294d4e
State: OPTIMAL
Providers:
1. Name: multipath/disk9
   Mediasize: 2000398933504 (1.8T)
   Sectorsize: 512
   Mode: r0w0e0
   State: OPTIMAL
Consumers:
1. Name: da24
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
2. Name: da8
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: PASSIVE

Geom name: disk8
Type: AUTOMATIC
Mode: Active/Passive
UUID: a7797260-485a-11e5-8658-002590294d4e
State: OPTIMAL
Providers:
1. Name: multipath/disk8
   Mediasize: 2000398933504 (1.8T)
   Sectorsize: 512
   Mode: r0w0e0
   State: OPTIMAL
Consumers:
1. Name: da23
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
2. Name: da7
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: PASSIVE

Geom name: disk7
Type: AUTOMATIC
Mode: Active/Passive
UUID: a762aa6d-485a-11e5-8658-002590294d4e
State: OPTIMAL
Providers:
1. Name: multipath/disk7
   Mediasize: 2000398933504 (1.8T)
   Sectorsize: 512
   Mode: r0w0e0
   State: OPTIMAL
Consumers:
1. Name: da22
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
2. Name: da6
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: PASSIVE

Geom name: disk6
Type: AUTOMATIC
Mode: Active/Passive
UUID: a733dd0c-485a-11e5-8658-002590294d4e
State: OPTIMAL
Providers:
1. Name: multipath/disk6
   Mediasize: 2000398933504 (1.8T)
   Sectorsize: 512
   Mode: r0w0e0
   State: OPTIMAL
Consumers:
1. Name: da21
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
2. Name: da5
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: PASSIVE

Geom name: disk5
Type: AUTOMATIC
Mode: Active/Passive
UUID: a71dad25-485a-11e5-8658-002590294d4e
State: OPTIMAL
Providers:
1. Name: multipath/disk5
   Mediasize: 2000398933504 (1.8T)
   Sectorsize: 512
   Mode: r0w0e0
   State: OPTIMAL
Consumers:
1. Name: da20
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
2. Name: da4
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: PASSIVE

Geom name: disk4
Type: AUTOMATIC
Mode: Active/Passive
UUID: a70691b9-485a-11e5-8658-002590294d4e
State: OPTIMAL
Providers:
1. Name: multipath/disk4
   Mediasize: 2000398933504 (1.8T)
   Sectorsize: 512
   Mode: r0w0e0
   State: OPTIMAL
Consumers:
1. Name: da19
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
2. Name: da3
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: PASSIVE

Geom name: disk3
Type: AUTOMATIC
Mode: Active/Passive
UUID: a6ef7ea0-485a-11e5-8658-002590294d4e
State: OPTIMAL
Providers:
1. Name: multipath/disk3
   Mediasize: 2000398933504 (1.8T)
   Sectorsize: 512
   Mode: r0w0e0
   State: OPTIMAL
Consumers:
1. Name: da18
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
2. Name: da2
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: PASSIVE

Geom name: disk2
Type: AUTOMATIC
Mode: Active/Passive
UUID: a6c1e8de-485a-11e5-8658-002590294d4e
State: OPTIMAL
Providers:
1. Name: multipath/disk2
   Mediasize: 2000398933504 (1.8T)
   Sectorsize: 512
   Mode: r0w0e0
   State: OPTIMAL
Consumers:
1. Name: da17
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
2. Name: da1
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: PASSIVE

Geom name: disk1
Type: AUTOMATIC
Mode: Active/Passive
UUID: a6abb0d7-485a-11e5-8658-002590294d4e
State: OPTIMAL
Providers:
1. Name: multipath/disk1
   Mediasize: 2000398933504 (1.8T)
   Sectorsize: 512
   Mode: r0w0e0
   State: OPTIMAL
Consumers:
1. Name: da16
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
2. Name: da0
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: PASSIVE

[root@nas01] ~#
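
For reference, AUTOMATIC geoms like these are the kind written by gmultipath label; doing one pair by hand would look roughly like this (device pair taken from the disk1 entry above):

Code:
# write an on-disk multipath label covering both paths to one drive
# (per the listing, da16 and da0 are the two paths behind multipath/disk1)
gmultipath label disk1 da16 da0
gmultipath status disk1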
 

mav@ (iXsystems, joined Sep 29, 2011, 1,428 messages)
As for the read speed: how are your drives connected? If you are going through an expander, and especially an old one, it can be a bottleneck. That said, about 800 MB/s total is indeed low; I would expect up to roughly 2.4 GB/s over a typical 4x-wide 6 Gbps SAS 2.0 port, unless your expander is an old SAS 1.0 model.
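
If you want to check where the link tops out, the negotiated PHY rates can be inspected from the host; a rough sketch, assuming the enclosure's expander shows up as ses0:

Code:
# show which controller/enclosure each device hangs off
camcontrol devlist -v
# list the expander PHYs and their negotiated link rates
# (ses0 is an assumption; substitute the ses device for your SA120)
camcontrol smpphylist ses0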

As for rewrite: that is a known weak point of ZFS. If the dataset record size is left at the default of 128K, smaller rewrites turn into a series of read-modify-write operations, which are very slow. You can see in your results that the 128K rewrites are fast. If you need smaller rewrites to be faster, you can reduce the dataset record size, but that will increase metadata overhead and may somewhat reduce the speed of large operations.
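
If you want to experiment with that, record size is a per-dataset property and only newly written blocks pick up the new value; a sketch, with the dataset name below as an example only:

Code:
# check the current record size
zfs get recordsize tank
# use smaller records on a dataset that sees small random rewrites
# (tank/vmdata is a hypothetical dataset name)
zfs set recordsize=16K tank/vmdata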
 

elec (Cadet, joined Aug 23, 2015, 5 messages)
I have a single SAS9207-8e card plugged into a PCIe x4 slot, with two SFF-8088 cables (two for multipath) running to two daisy-chained Lenovo SA120s.

This pool is mainly for virtualization (VMware and Proxmox). Is there a way to watch, in real time, which record sizes the storage is seeing the most while I run a workload from VMware, or is there no way other than benchmarks?
 

mav@ (iXsystems, joined Sep 29, 2011, 1,428 messages)
At this point FreeNAS defaults to 16-32K block sizes for ZVOLs, which are used for block storage for VMware, etc. Lower values would be more efficient for rewrite speed, but they are too expensive in terms of overhead; 16K seems like an acceptable middle ground. The block sizes used by virtualization depend not on VMware but on the guest OSes and their workloads. For Windows with NTFS a 4K block size would be ideal, but that is too expensive, so we had to increase it.
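
For a ZVOL the equivalent knob is volblocksize, and it can only be chosen when the ZVOL is created; a sketch, with the name and size as placeholders only:

Code:
# create a zvol for VM block storage with an explicit 16K block size;
# volblocksize cannot be changed after the zvol exists
zfs create -V 500G -o volblocksize=16K tank/vmware-zvol
zfs get volblocksize tank/vmware-zvol

As for watching it live, the KB/t column of iostat or gstat on the pool members shows the average I/O size while the VMs are running.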
 