zvol on RAIDz incredibly slow
Hi,
Given: a new HP ProLiant MicroServer N40L (4 GB RAM) and a newly created RAIDz1 with 3x2TB disks. I created a 500G zvol and exported it via iSCSI (Gigabit LAN). In Linux (the iSCSI initiator):
Code:
# hdparm -tT /dev/sdd

/dev/sdd:
 Timing cached reads:   1608 MB in  2.00 seconds = 803.78 MB/sec
 Timing buffered disk reads:  222 MB in  3.02 seconds = 73.46 MB/sec
Everything looks OK. Then, creating ext4 on this volume:
Code:
# mkfs.ext4 /dev/sdd
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1 blocks, Stripe width=256 blocks
32768000 inodes, 131072000 blocks
6553600 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
4000 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000

Writing inode tables: 1800/4000
Hang (or very slow progress).
In the meantime:
Code:
# hdparm -tT /dev/sdd

/dev/sdd:
 Timing cached reads:   1216 MB in  2.00 seconds = 608.15 MB/sec
 Timing buffered disk reads:  2 MB in 122.25 seconds = 16.75 kB/sec
I have done this twice now; it is repeatable.
Last week I did performance experiments with smaller disks (1GB) to see whether this setup meets my requirements. That gave an end-to-end performance (rsync) of 30-40 MB/s, which is acceptable.
What is suddenly going wrong here?
Regards,
divB
EDIT: I did some experiments and got results I cannot explain. I created a 500G zvol and exported it via iSCSI, then did linear writes with dd in 4M blocks, once against a single drive and once against a 3x2TB RAIDz1 (with "force 4K blocks" enabled).
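For reference, the setup behind this test corresponds roughly to the following commands. This is only a sketch: the pool name "tank" and the zvol name are placeholders, and the pool was actually built through the FreeNAS GUI with the "force 4K sectors" option (i.e. ashift=12).
Code:
# 3-disk RAIDZ1 pool (the FreeNAS GUI does the equivalent, forcing 4K sectors / ashift=12)
zpool create tank raidz ada1 ada2 ada3
# 500G zvol that istgt then exports as an iSCSI LUN
zfs create -V 500G tank/iscsivol

# on the Linux initiator: linear writes in 4M blocks against the iSCSI disk
dd if=/dev/zero of=/dev/sdd bs=4M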
The result diagrams are attached: while I get realistic performance without RAID (>50 MB/s), performance drops horribly with RAIDz1. Apart from the high initial throughput (filled buffers, I guess), the throughput decreases roughly linearly afterwards. I stopped the experiment at 7.8 GB, at which point it was down to 2.7 MB/s.
I can imagine that more random write patterns, such as formatting the volume with mkfs.ext4, would soon drop to a few kB/s.
Is there anything I can do about this? It is completely different from e.g. http://forums.freenas.org/showthrea...amarks-and-Cache&p=24532&viewfull=1#post24532 ...
I have read that RAIDz1 performance is not great, but these numbers are nowhere near usable.
The disks are from different vendors, but all are SATA 300 MB/s:
Code:
ada1 at ahcich1 bus 0 scbus1 target 0 lun 0
ada1: <SAMSUNG HD204UI 1AQ10001> ATA-8 SATA 2.x device
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada2 at ahcich2 bus 0 scbus2 target 0 lun 0
ada2: <ST32000542AS CC34> ATA-8 SATA 2.x device
ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled
ada2: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada3 at ahcich3 bus 0 scbus3 target 0 lun 0
ada3: <WDC WD20EARX-00PASB0 51.0AB51> ATA-8 SATA 3.x device
ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada3: Command Queueing enabled
ada3: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
One more EDIT: For testing I did not use zvol+iSCSI, but created a dataset and copied data to it via rsync+ssh (same network!). df now reports 86776412k blocks used since I started 50 minutes ago. This works out to about 28 MB/s, which would be acceptable. What the hell is going on here? Maybe there is a problem with istgt? Or with zvols?
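The rsync test was along these lines (hostname and paths are just placeholders), and the 28 MB/s figure is simply the df delta divided by the elapsed time:
Code:
# copy test data from the Linux client into a plain dataset over ssh (example paths)
rsync -a /data/testfiles/ root@freenas:/mnt/tank/testds/

# rough throughput estimate from the df figure:
#   86776412 KiB / (50 min * 60 s/min) ≈ 28900 KiB/s ≈ 28 MiB/s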