"no ping reply (NOP-Out) after 5 seconds" kills iSCSI volumes under load? (v11.1)


Greg10

Dabbler
Joined
Dec 16, 2016
Messages
24
I've been building a new FreeNAS iSCSI SAN and running into a problem where iSCSI connections drop and don't recover after running under load for several hours.

The console is filled with the following text:

Code:
Jan 21 03:16:23 san1 daemon[6448]:	 2018/01/21 03:16:23 [WARN] Timed out (30s) running check '/usr/local/etc/consul-checks/freenas_health.sh'
Jan 21 03:18:53 san1 daemon[6448]:	 2018/01/21 03:18:53 [WARN] Timed out (30s) running check '/usr/local/etc/consul-checks/freenas_health.sh'
Jan 21 03:21:23 san1 daemon[6448]:	 2018/01/21 03:21:23 [WARN] Timed out (30s) running check '/usr/local/etc/consul-checks/freenas_health.sh'
Jan 21 03:23:53 san1 daemon[6448]:	 2018/01/21 03:23:53 [WARN] Timed out (30s) running check '/usr/local/etc/consul-checks/freenas_health.sh'
Jan 21 03:26:23 san1 daemon[6448]:	 2018/01/21 03:26:23 [WARN] Timed out (30s) running check '/usr/local/etc/consul-checks/freenas_health.sh'
Jan 21 03:27:45 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): no ping reply (NOP-Out) after 5 seconds; dropping connection
Jan 21 03:27:45 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): no ping reply (NOP-Out) after 5 seconds; dropping connection
Jan 21 03:27:45 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): waiting for CTL to terminate 1 tasks
Jan 21 03:27:45 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): tasks terminated
Jan 21 03:27:45 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): waiting for CTL to terminate 1 tasks
Jan 21 03:27:45 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): tasks terminated
Jan 21 03:27:51 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): no ping reply (NOP-Out) after 5 seconds; dropping connection
Jan 21 03:28:00 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): no ping reply (NOP-Out) after 5 seconds; dropping connection
Jan 21 03:28:00 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): no ping reply (NOP-Out) after 5 seconds; dropping connection
Jan 21 03:28:17 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): no ping reply (NOP-Out) after 5 seconds; dropping connection
Jan 21 03:28:20 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): no ping reply (NOP-Out) after 5 seconds; dropping connection
Jan 21 03:28:28 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): no ping reply (NOP-Out) after 5 seconds; dropping connection
Jan 21 03:28:28 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): waiting for CTL to terminate 1 tasks
Jan 21 03:28:28 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): tasks terminated
Jan 21 03:28:33 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): no ping reply (NOP-Out) after 5 seconds; dropping connection
Jan 21 03:28:33 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): waiting for CTL to terminate 1 tasks
Jan 21 03:28:33 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): tasks terminated
Jan 21 11:29:12 san1 ctld[67350]: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): read: Connection reset by peer
Jan 21 03:29:12 san1 ctld[20887]: child process 67350 terminated with exit status 1
Jan 21 03:29:31 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): no ping reply (NOP-Out) after 5 seconds; dropping connection
Jan 21 03:29:36 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): no ping reply (NOP-Out) after 5 seconds; dropping connection
Jan 21 03:29:39 san1 ctld[67443]: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): read: Connection reset by peer
Jan 21 03:29:39 san1 ctld[20887]: child process 67443 terminated with exit status 1
Jan 21 11:29:45 san1 ctld[67361]: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): read: Connection reset by peer
Jan 21 03:29:45 san1 ctld[20887]: child process 67361 terminated with exit status 1
Jan 21 03:29:59 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): no ping reply (NOP-Out) after 5 seconds; dropping connection
Jan 21 03:30:19 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): no ping reply (NOP-Out) after 5 seconds; dropping connection
Jan 21 03:30:19 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): no ping reply (NOP-Out) after 5 seconds; dropping connection


This issue crops up on both v11 and v11.1.

Hardware:

FreeNAS:

Supermicro H8SGL-F Motherboard
AMD Opteron Processor 6128 (8 cores @ 2GHz)
16GB RAM
LSI 9240-8i (Flashed to 9220-IT Mode)
8x Samsung EVO 850 250GB SSD in RAID-10 pool in FreeNAS
500GB Samsung magnetic disk for OS (I've also tried this on a 16GB USB stick with similar results.)
HP Infiniband 4X DDR Connect-X PCI-e Dual Port HBA (Flashed to Mellanox 2.9.1)

I have configured the IB HBA in Ethernet mode running at 10Gbps, and it is directly connected to a Windows Server 2016 machine running a similar card. Both sides of the link are running an MTU of 9014.

Server 2016 box:
HP DL160 G6
2x Xeon E5620 @ 2.4GHz (8 cores, 16 threads)
48GB RAM
HP Infiniband 4X DDR Connect-X PCI-e Dual Port HBA (Flashed to Mellanox 2.9.1)

My test setup is this:

FreeNAS
2 targets configured with 3 100GB file-based extents each, running under the same target IP address, which is associated with one of the ports on the Mellanox adapter (roughly the ctl.conf shape sketched after this list)

Server 2016
6 extents mounted as local iSCSI volumes
IOmeter configured with two workers pointing to each of the six volumes (12 workers total)
- Access Specification: All In One (a variable mix of sequential and random reads/writes of varying sizes)
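
For reference, FreeNAS generates /etc/ctl.conf from these GUI settings; a minimal hand-written sketch of the same shape (the portal IP, pool paths, and names here are illustrative, not the actual config) would be:

Code:
portal-group pg0 {
	discovery-auth-group no-authentication
	listen 10.100.1.1
}

target iqn.2018-01.local.bleh:target0 {
	auth-group no-authentication
	portal-group pg0
	lun 0 {
		# file-backed extents require an explicit size
		path /mnt/tank/iscsi/t0e0
		size 100G
	}
	# luns 1 and 2 repeat the pattern; the second target is analogous
}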


After kicking off the test, network traffic hits 9+Gbps as the drives are initialized, then levels off at 2Gbps inbound/outbound during the test, generating a stable 12,000 combined IOPS until the test dies after about four hours. During that time, the FreeNAS CPU is running at about 97% utilization with System Load at about 15 or so.

Memory shows 200M free with 15G used by Wired and swap utilization stable at 520M.
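
(For anyone wanting to watch the same counters live, the stock FreeBSD tools on the box show all of this from the shell:)

Code:
# Per-CPU and per-thread utilization: S = system procs, H = threads,
# P = per-CPU rows, z = hide idle processes
top -SHPz
# Per-disk busy percentage and latency for the pool members
gstat -p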

How can I make this setup stable under load?
 

Greg10

Dabbler
Joined
Dec 16, 2016
Messages
24
Also, tail -f /var/log/messages gives me a bunch of these messages during the test:

[WARN] Timed out (30s) running check '/usr/local/etc/consul-checks/freenas_health.sh'

Edit: Attaching debug file. I ran a recent test at approximately 12:10am and it ran until about 3:30am before failing.
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
Have you tried the same test on 11.1-U1? It includes quite a few bug fixes.
 

Greg10

Dabbler
Joined
Dec 16, 2016
Messages
24
Yep. I upgraded last night and ran it with the same results. I then applied the "kern.cam.ctl.iscsi.ping_timeout" loader tunable with a value of 30 seconds, and it failed again:

Code:
Jan 22 04:44:11 san1 daemon[36105]:	 2018/01/22 04:44:11 [WARN] agent: Check 'freenas_health' is now warning
Jan 22 04:45:55 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): waiting for CTL to terminate 3 tasks
Jan 22 04:45:55 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): tasks terminated
Jan 22 04:45:55 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): connection error; dropping connection
Jan 22 04:46:12 san1 daemon[36105]:	 2018/01/22 04:46:12 [WARN] agent: Check 'freenas_health' is now warning
Jan 22 04:46:14 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): waiting for CTL to terminate 1 tasks
Jan 22 04:46:14 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): tasks terminated
Jan 22 04:46:49 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): no ping reply (NOP-Out) after 30 seconds; dropping connection
Jan 22 04:46:49 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): waiting for CTL to terminate 1 tasks
Jan 22 04:46:49 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): tasks terminated
Jan 22 04:46:54 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): no ping reply (NOP-Out) after 30 seconds; dropping connection
Jan 22 04:46:54 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): waiting for CTL to terminate 1 tasks
Jan 22 04:46:54 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): tasks terminated
Jan 22 04:47:25 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): no ping reply (NOP-Out) after 30 seconds; dropping connection
Jan 22 12:47:57 san1 ctld[31452]: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): read: Connection reset by peer
Jan 22 04:47:57 san1 ctld[2348]: child process 31452 terminated with exit status 1
Jan 22 04:48:02 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): connection error; dropping connection
Jan 22 04:48:12 san1 daemon[36105]:	 2018/01/22 04:48:12 [WARN] agent: Check 'freenas_health' is now warning
Jan 22 04:48:33 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): no ping reply (NOP-Out) after 30 seconds; dropping connection
Jan 22 04:49:14 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): no ping reply (NOP-Out) after 30 seconds; dropping connection
Jan 22 04:50:13 san1 daemon[36105]:	 2018/01/22 04:50:13 [WARN] agent: Check 'freenas_health' is now warning
Jan 22 04:50:16 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): connection error; dropping connection
Jan 22 04:51:17 san1 ctld[31905]: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): exiting due to timeout
Jan 22 04:51:17 san1 ctld[2348]: child process 31905 terminated with exit status 1
Jan 22 04:51:55 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): no ping reply (NOP-Out) after 30 seconds; dropping connection
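
For anyone wanting to try the same tunable, a shell-level sketch of the equivalent (assuming the sysctl is runtime-writable on your build; if not, only the reboot route applies) is:

Code:
# Try it live first
sysctl kern.cam.ctl.iscsi.ping_timeout=30
# Persist across reboots; on FreeNAS the GUI Tunables page is the
# supported way, since loader.conf is regenerated at boot
echo 'kern.cam.ctl.iscsi.ping_timeout="30"' >> /boot/loader.conf.local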
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
That AMD CPU in your FreeNAS box is quite weak, and at 97% utilization I'd expect to see timeout issues. 16GB of RAM is also quite low. Do you have compression turned on for the datasets that host your extents? If yes, you might try turning off compression, creating new extents (by the way, I'd use device-based extents backed by zvols rather than file-based ones unless you have a specific reason not to), and trying again. I'm wondering if your CPU is straining too much with compression.
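
A minimal sketch of checking and changing that from the shell, assuming a pool named tank with the extents under tank/iscsi (the GUI exposes the same options):

Code:
# Is compression on for the dataset hosting the extents, and how much
# is it actually compressing?
zfs get compression,compressratio tank/iscsi
# Turn it off (affects only data written from now on)
zfs set compression=off tank/iscsi
# Create a 100G zvol to use as a device-based extent
zfs create -V 100G tank/iscsi/extent1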
 

Greg10

Dabbler
Joined
Dec 16, 2016
Messages
24
Yes to all of the above. :)

I've killed my extents and rebuilt them as compression-free zvols instead of files, and kicked off a new test. *Crosses fingers*


Edit: Something is already different: the disk initialization process is only driving about 5.7Gbps over the link instead of the typical 9.7Gbps from prior tests. Curious to see what happens to disk I/O performance without compression.
 

Greg10

Dabbler
Joined
Dec 16, 2016
Messages
24
Nope. It crashed about 80 minutes in:

Code:
Jan 22 13:27:13 san1 daemon[36105]:	 2018/01/22 13:27:13 [WARN] agent: Check 'freenas_health' is now warning
Jan 22 13:29:15 san1 daemon[36105]:	 2018/01/22 13:29:15 [WARN] agent: Check 'freenas_health' is now warning
Jan 22 13:29:53 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): no ping reply (NOP-Out) after 30 seconds; dropping connection
Jan 22 13:29:53 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): waiting for CTL to terminate 1 tasks
Jan 22 13:29:53 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): tasks terminated
Jan 22 13:29:54 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): no ping reply (NOP-Out) after 30 seconds; dropping connection
Jan 22 13:29:54 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): waiting for CTL to terminate 1 tasks
Jan 22 13:29:54 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): tasks terminated
Jan 22 13:31:17 san1 daemon[36105]:	 2018/01/22 13:31:17 [WARN] agent: Check 'freenas_health' is now warning
Jan 22 13:32:03 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): connection error; dropping connection
Jan 22 13:32:03 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): connection error; dropping connection
Jan 22 13:32:03 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): connection error; dropping connection
Jan 22 13:33:18 san1 daemon[36105]:	 2018/01/22 13:33:18 [WARN] agent: Check 'freenas_health' is now warning
Jan 22 13:35:18 san1 daemon[36105]:	 2018/01/22 13:35:18 [WARN] agent: Check 'freenas_health' is now warning
Jan 22 13:36:17 san1 WARNING: 10.100.1.2 (iqn.1991-05.com.microsoft:db1.bleh.local): connection error; dropping connection
Jan 22 13:37:19 san1 daemon[36105]:	 2018/01/22 13:37:19 [WARN] agent: Check 'freenas_health' is now warning


 

Greg10

Dabbler
Joined
Dec 16, 2016
Messages
24
Update:

I've gutted the box, ripped out the storage subsystem, and stuck it into an HP DL160 G6 with dual 2.4GHz Xeons and 48GB of registered ECC RAM.

Going on nine hours now without a hiccup, so things are looking promising!


Funny thing about hardware: when you buy it new, it's awesome and works great, but as time passes I seldom notice that software requirements have moved on. What was an awesome 8-core, 16GB box a few years ago now barely keeps up with a modern desktop.

Grrr... Off to eBay for a new CPU/motherboard/RAM combo.
 

Greg10

Dabbler
Joined
Dec 16, 2016
Messages
24
From eBay I purchased:

Supermicro X8DTE-F dual Xeon motherboard ($95)
Matched pair of Xeon E5620 quad-core CPUs @ 2.4GHz ($12.95)
48GB of Hynix PC3-10600R DDR3-1333 registered ECC RAM ($126)


This hardware goes in with the LSI SAS2008 HBA, an Intel dual-port GigE NIC, and a Mellanox ConnectX HBA.

Without any modifications to FreeNAS 11.1-U1 like MTU size or whatnot, I'm throwing two IOmeter tests at this new box from two separate Windows servers across 8 extents of varying sizes.

The test has been running for about three hours, and IOPS are holding steady at around 25,000 for the All-In-One IOmeter specification. The 15-minute system load is 14.2, and the eight CPU cores hover at about 60% utilization. Memory usage is flat at 30G.

We shall see what the morning brings. Amazing what you can buy for $250.
 

Greg10

Dabbler
Joined
Dec 16, 2016
Messages
24
After 48 hours of nonstop IOmeter load testing, my FreeNAS box is solidly generating 25,000 IOPS between two load-generating nodes using the All-In-One specification. I stopped that test and am now running the 100% read, 100% sequential test with 512-byte chunks, and am seeing 120,000 IOPS between the two nodes.

Thanks to bigphil for pointing out the problem caused by anemic hardware!
 

ZataH

Cadet
Joined
Jul 17, 2017
Messages
5
I have the exact same problem. It has happened twice in the last week.

The issues first started after I added the 10G network.
 

Dudleydogg

Explorer
Joined
Aug 30, 2014
Messages
50
So was this ever resolved? I have an 8-core Xeon CPU and 64GB of RAM, so it's not a resource issue, and yes, it also started after I added the 10G network. I'd love a solution; right now my only option is to reboot FreeNAS nightly to get through the day.
 

ZataH

Cadet
Joined
Jul 17, 2017
Messages
5
So was this ever resolved? I have an 8-core Xeon CPU and 64GB of RAM, so it's not a resource issue, and yes, it also started after I added the 10G network. I'd love a solution; right now my only option is to reboot FreeNAS nightly to get through the day.
Mine seems to have stopped for now. I am running FreeNAS-11.1-U5. Except for changing the MTU back to 1500, I don't think I did anything other than update my FreeNAS.
 

Dudleydogg

Explorer
Joined
Aug 30, 2014
Messages
50
Yes, my MTU is 9000 in VMware, on the switch, and in FreeNAS. I sometimes get a day or so out of it, and then it starts again. If I go to the shell, I can ping the IP that claims to be down. So maybe I should experiment with setting everything to 1500?
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I have an 8-core Xeon CPU and 64GB of RAM, so it's not a resource issue, and yes, it also started after I added the 10G network.
You don't say which Xeon you're using, so it could still be a resource issue; we can't say for sure, as you've provided insufficient information.
Yes, my MTU is 9000 in VMware, on the switch, and in FreeNAS. I sometimes get a day or so out of it, and then it starts again. If I go to the shell, I can ping the IP that claims to be down. So maybe I should experiment with setting everything to 1500?
Did you also set the initiator to 9000? In the case of VMware, you would need both the vSwitch and the VMkernel port to be set to an MTU of 9000.
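
A quick way to check and set both from the ESXi shell, plus an end-to-end jumbo-frame test (vSwitch1, vmk1, and the target IP are placeholders for your own names):

Code:
# Set the MTU on the standard vSwitch and the iSCSI VMkernel port
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
# Verify the whole path with Don't Fragment set
# (8972 = 9000 - 20-byte IP header - 8-byte ICMP header)
vmkping -d -s 8972 172.16.10.100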
 

Dudleydogg

Explorer
Joined
Aug 30, 2014
Messages
50
You don't say which Xeon you're using, so it could still be a resource issue; we can't say for sure, as you've provided insufficient information.

Did you also set the initiator to 9000? In the case of VMware, you would need both the vSwitch and the VMkernel port to be set to an MTU of 9000.
Yes. My CPU runs at about 84% idle, with system load at 13% max.
Build: FreeNAS-11.1-U6
Platform: Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz
Memory: 65387MB
The switch ports are at 9000, the iSCSI VMkernel is set to 9000, and the FreeNAS NIC options include "mtu 9000". When I was setting things up, nothing worked until I got them all set to 9000. Also, when I see the "no ping reply" message on the FreeNAS console, I can SSH to the box and ping the exact IP it says is unreachable. My 10GbE card in FreeNAS is a Chelsio 110-1088 dual-port PCIe optical adapter. Please let me know what other info I could provide. Thanks!
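
The matching check from the FreeNAS (FreeBSD) side would be something like this; the interface name is from this setup, and the peer IP is a placeholder for one of the ESXi VMkernel addresses:

Code:
# Confirm the MTU the driver is actually running
ifconfig cxgb0 | grep mtu
# Jumbo-frame path test with the Don't Fragment bit set
ping -D -s 8972 172.16.10.50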
 
Joined
Dec 29, 2014
Messages
1,135
If I go to the shell, I can ping the IP that claims to be down. So maybe I should experiment with setting everything to 1500?

The MTU will only have an impact if one side exceeds what the other can do. The ping packets are almost certainly 100 bytes or less, so the MTU wouldn't be relevant. It sure sounds to me like there are other issues, most likely network ones. Your original post says the FreeNAS is directly connected to the Windows server. I can't give you too much more to go on, but I would suspect Windows is doing something wonky. It might have to do with name resolution or something like that. Windows has a nasty habit of wanting to bridge interfaces together if you aren't careful. Run arp -an on both sides when the connection is working, and again when it is not. I bet the ARP table is getting jacked up.
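
A simple way to do that comparison, assuming shell access on both ends (on Windows the equivalent command is arp -a):

Code:
# On FreeNAS, while the connection is healthy:
arp -an > /tmp/arp_good.txt
# ...and again once the drops start:
arp -an > /tmp/arp_bad.txt
# A changed or vanished MAC entry for the initiator's IP is the smoking gun
diff /tmp/arp_good.txt /tmp/arp_bad.txt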
 

Dudleydogg

Explorer
Joined
Aug 30, 2014
Messages
50
The MTU will only have an impact if one side exceeds what the other can do. The ping packets are almost certainly 100 bytes or less, so the MTU wouldn't be relevant. It sure sounds to me like there are other issues, most likely network ones. Your original post says the FreeNAS is directly connected to the Windows server. I can't give you too much more to go on, but I would suspect Windows is doing something wonky. It might have to do with name resolution or something like that. Windows has a nasty habit of wanting to bridge interfaces together if you aren't careful. Run arp -an on both sides when the connection is working, and again when it is not. I bet the ARP table is getting jacked up.

No Windows servers, only ESXi hosts, via a NetApp switch. I have the dual-port Chelsio 10GbE card, which goes into a 16-port 10GbE NetApp switch. Each ESXi host has two 10GbE NICs that also connect to the switch. No VLANs; all ports are set to MTU 9000. Each NIC on FreeNAS is on a different subnet (172.16.10.x and 172.16.11.x). I initially used a bridge with four NICs in FreeNAS to get ESXi to communicate; since I had this ping issue, I got the switch, thinking a 10G switch would help, but it turns out the same issue persists, so it has to be a setting or something with FreeNAS. I don't use any hostnames; everything is IPv4. I rebooted FreeNAS last night and now all the ping errors are gone. When everything is working, the 10G network is fast: I can put an ESXi host in maintenance mode and 40 VMs will evacuate to the other host in about 8 seconds. Boot time for each VM is 6-8 seconds. So it works great; it's just that once it gets into this ping-timeout mode, everything goes to lunch. The NetApp switch MTU is set to 12288, which is the maximum.

  • Hypervisor:VMware ESXi, 6.0.0, 9313334
  • Model:Z10PE-D16 Series
  • Processor Type:Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
  • Logical Processors:24
  • NICs:6
  • Virtual Machines:54
  • State:Connected
  • Uptime:23 hours

  • Hypervisor:VMware ESXi, 6.0.0, 9313334
  • Model:Z10PE-D16 Series
  • Processor Type:Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
  • Logical Processors:24
  • NICs:6
  • Virtual Machines:17
  • State:Connected
  • Uptime:23 hours
Build FreeNAS-11.1-U6
  • Platform Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz
    Memory 65387MB
    System Time Sat, 6 Oct 2018 11:55:52 -0400
    Uptime 11:55AM up 14:17, 1 user
    Load Average 0.09, 0.15, 0.18
  • Interfaces (name: IPv4 addresses)
  • cxgb0: 172.16.10.100/24, 172.16.10.200/24
  • cxgb1: 172.16.11.100/24, 172.16.11.200/24
  • igb0: 10.0.1.222/24
  • igb5: 10.0.5.25/24
  • lagg0: 10.0.2.20/24
  • Nameservers: 10.0.1.8 10.0.1.10 10.0.1.254
  • Default route: 10.0.1.1
  • (CN1610) #show sysinfo
    System Description............................. NetApp CN1610, 1.2.0.7, Linux 3.8.13-4ce360e8
    interface 0/1
    description 'Esxi Host 4'
    mtu 12288
    exit
    interface 0/2
    description 'Esxi Host 4'
    mtu 12288
    exit
    interface 0/3
    mtu 12288
    exit
    interface 0/4
    mtu 12288
    exit
    interface 0/5
    description 'Esxi Host 0'
    mtu 12288
    exit
    interface 0/6
    description 'Esxi Host 0'
    mtu 12288
    exit
    interface 0/7
    mtu 12288
    exit
    interface 0/8
    mtu 12288
    exit
    interface 0/9
    description 'Freenas iSCSI Vmotion'
    mtu 12288
    exit
    interface 0/10
    description 'Freenas iSCSI Vmotion'
    mtu 12288
    exit
    interface 0/11
    description 'Freenas iSCSI Vmotion'
    mtu 12288
    exit
    interface 0/12
    description 'Freenas iSCSI Vmotion'
    mtu 12288
    exit

    exit
 
Joined
Dec 29, 2014
Messages
1,135
Why are there two IP addresses on each NIC? I don't see any VLAN definitions in the switch, so all the traffic is getting blended together. I can't tell you for certain that this is the problem, but it certainly isn't how I would do it.
 

Dudleydogg

Explorer
Joined
Aug 30, 2014
Messages
50
On the 10G network I did not set up any VLANs. There's no reason for the two IP addresses; I just added an additional one when I changed the IP address. No ping timeouts since that last change.
I would like to set up VLANs for iSCSI on the NetApp switch; I need to learn more of the NetApp CLI.
Is all the traffic blended together? Yes, but all of the traffic is iSCSI and vMotion.
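
From what I can tell, the CN1610 CLI is FASTPATH-based, so isolating the iSCSI ports into their own VLAN would look roughly like this (untested; the VLAN ID and port are examples based on generic FASTPATH syntax, so verify against the CN1610 switch guide first):

Code:
vlan database
vlan 10
exit
configure
interface 0/9
vlan participation include 10
vlan pvid 10
exit
exit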
 