tmueko
Explorer
- Joined
- Jun 5, 2012
- Messages
- 82
Tonight, iSCSI on FreeNAS-8.3.1 stopped working, again.
The installation is 2 ESXi hosts with about 15 VMs and 1 FreeNAS box just running iSCSI. Snapshots are taken every hour and mirrored to a FreeBSD-9.1 machine.
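For context, the hourly snapshot + mirror step is roughly the usual `zfs snapshot` / `zfs send | zfs receive` pattern. This is only a sketch of what such a job might look like; the actual script is not part of this post, and the host name "backuphost" and the `hourly-<timestamp>` naming are assumptions. The commands are printed rather than executed here, since the real job is not shown:

```shell
#!/bin/sh
# Sketch of an hourly snapshot + replication job (hypothetical:
# "backuphost" and the hourly-<timestamp> naming are assumptions;
# "daten" is the pool from the zpool status output in this post).
POOL="daten"
NOW="$(date +%Y%m%d%H)"
SNAP="${POOL}@hourly-${NOW}"

# Take a recursive snapshot of the pool (printed as a dry run):
echo "zfs snapshot -r ${SNAP}"

# Mirror it incrementally to the FreeBSD 9.1 box; PREV stands in for
# the previous hour's snapshot name:
echo "zfs send -i ${POOL}@hourly-PREV ${SNAP} | ssh backuphost zfs receive -F ${POOL}"
```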
The error message was this:
Code:
Apr 17 04:02:41 wolfgang istgt[1818]: istgt_iscsi.c:4764:istgt_iscsi_transfer_out: ***ERROR*** iscsi_read_pdu() failed, r2t_sent=0
Apr 17 04:02:41 wolfgang istgt[1818]: istgt_iscsi.c:5852:worker: ***ERROR*** iscsi_task_transfer_out() failed on <IQN>:wolfgang,t,0x0001(<IQN>.esxi02,i,0x00023d000001)
After stopping and starting the iSCSI service via the FreeNAS GUI, the VMs on that machine came back to life, so I don't think it's a hardware problem (we have already changed the hardware running FreeNAS).
The error message on the ESXi side was:
Code:
Device t10.FreeBSD_iSCSI_Disk______120000010________________ _______ performance has deteriorated. I/O latency increased from average value of 18663 microseconds to 1812759 microseconds.
warning  17.04.2013 05:24:38  192.168.222.11
There was no problem with the zpool:
Code:
[root@wolfgang] ~# zpool status -v
  pool: daten
 state: ONLINE
  scan: none requested
config:

	NAME                                          STATE     READ WRITE CKSUM
	daten                                         ONLINE       0     0     0
	  gptid/d9f131e8-a1a6-11e2-8e6c-0025909ac99e  ONLINE       0     0     0
The FreeNAS hardware is:
Supermicro X9DR3-F
2x Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
32 GB RAM
3ware 9750-4i with 4 x 3TB WD red
Code:
[root@wolfgang] ~# tw_cli /c0 show all
/c0 Driver Version = 10.80.00.003
/c0 Model = 9750-4i
/c0 Available Memory = 488MB
/c0 Firmware Version = FH9X 5.12.00.013
/c0 Bios Version = BE9X 5.11.00.007
/c0 Boot Loader Version = BT9X 6.00.00.004
/c0 Serial Number = SV23104102
/c0 PCB Version = Rev 001
/c0 PCHIP Version = B4
/c0 ACHIP Version = 05000e00
/c0 Controller Phys = 8
/c0 Connections = 4 of 128
/c0 Drives = 4 of 127
/c0 Units = 1 of 127
/c0 Active Drives = 4 of 127
/c0 Active Units = 1 of 32
/c0 Max Drives Per Unit = 32
/c0 Total Optimal Units = 1
/c0 Not Optimal Units = 0
/c0 Disk Spinup Policy = 1
/c0 Spinup Stagger Time Policy (sec) = 1
/c0 Auto-Carving Policy = off
/c0 Auto-Carving Size = 2048 GB
/c0 Auto-Rebuild Policy = on
/c0 Rebuild Mode = Adaptive
/c0 Rebuild Rate = 1
/c0 Verify Mode = Adaptive
/c0 Verify Rate = 1
/c0 Controller Bus Type = PCIe
/c0 Controller Bus Width = 8 lanes
/c0 Controller Bus Speed = 5.0 Gbps/lane

Unit  UnitType  Status  %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
------------------------------------------------------------------------------
u0    RAID-5    OK      -       -       256K    8381.87   RiW    ON

VPort Status  Unit  Size     Type  Phy  Encl-Slot  Model
------------------------------------------------------------------------------
p0    OK      u0    2.73 TB  SATA  0    -          WDC WD30EFRX-68AX9N0
p1    OK      u0    2.73 TB  SATA  1    -          WDC WD30EFRX-68AX9N0
p2    OK      u0    2.73 TB  SATA  2    -          WDC WD30EFRX-68AX9N0
p3    OK      u0    2.73 TB  SATA  3    -          WDC WD30EFRX-68AX9N0
I have also attached the output of "sysctl -a", "dmesg" and "arc_summary.py", in case it helps.