The Journey of a Network Engineer

July 9, 2011  3:37 AM

Review – SolarWinds Engineer’s Toolset – Part 1

Sulaiman Syed

SolarWinds provides various solutions for network management and monitoring, as well as storage, VMware, and server monitoring and management. We have purchased a license for the Engineer’s Toolset. It has certainly made my life easier when it comes to managing and monitoring critical Cisco devices.

In this entry, I will list all the functions available in the Engineer’s Toolset; in later entries we will look at some of them and see how they work.

Once the Toolset is installed, the following tools are available.

  1. SolarWinds Engineer’s Toolset
    1. WorkSpace Studio
    2. Classic Tools
      1. Cisco Tools
        1. Cisco Router Password Decryption
        2. Compare Running vs Startup Configs
        3. Config Downloader
        4. Config Transfer
        5. Config Upload
        6. Config Viewer
        7. CPU Gauge
        8. IP Network Browser
        9. Netflow configurator
        10. Netflow Realtime
        11. Proxy Ping
        12. Router CPU load
        13. TFTP server
      2. IP address Management
        1. Advanced Subnet Calculator
        2. DHCP Scope Monitor
        3. DNS & Who Is resolver
        4. DNS Analyzer
        5. DNS Audit
        6. IP address Management
        7. IP Network Browser
        8. Ping Sweep
      3. Network Discovery
        1. DNS Audit
        2. IP Address Management
        3. Mac Address Discovery
        4. IP Network Browser
        5. Network Sonar
        6. Ping Sweep
        7. Ping
        8. Port Scanner
        9. SNMP Sweep
        10. Subnet List
        11. Switch Port Mapper
      4. Network Monitoring
        1. Advance CPU load
        2. Bandwidth Gauges
        3. Network Monitor
        4. Network Performance Monitor
        5. Real time interface Monitor
        6. Router CPU Load
        7. SNMP Real time Graph
        8. Syslog Server
        9. Watch it!
      5. Ping & diagnostic
        1. DNS Analyzer
        2. Enhanced Ping
        3. Ping Sweep
        4. Ping
        5. Proxy Ping
        6. Send Page
        7. Spam Blacklist
        8. TraceRoute
        9. Wake-On-LAN
        10. WAN Killer
      6. Security
        1. Cisco Router Password Decryption
        2. Edit Dictionaries
        3. Port Scanner
        4. Remote TCP Session Reset
        5. SNMP Brute Force Attack
        6. SNMP Dictionary Attack
        7. Spam Blacklist
      7. SNMP Tools
        1. MIB Viewer
        2. MIB Walk
        3. SNMP MIB Browser
        4. SNMP Trap Editor
        5. SNMP Trap Receiver
        6. Update System MIB

The list contains some duplicated items because several of the tools fall under more than one category.
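To give a feel for what one of these classic tools does, here is a minimal sketch of a Ping Sweep in Python. This is my own illustration, not SolarWinds code; the function names and the pluggable `probe` parameter are assumptions for the example. It enumerates every host in a subnet and keeps the ones that answer a ping.

```python
import subprocess
from ipaddress import ip_network

def default_probe(host):
    """Send one ICMP echo via the system ping command (Linux-style flags)."""
    return subprocess.run(
        ["ping", "-c", "1", "-W", "1", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode == 0

def ping_sweep(cidr, probe=default_probe):
    """Return the host addresses in `cidr` that answer the probe."""
    return [str(h) for h in ip_network(cidr).hosts() if probe(str(h))]
```

A real tool would probe hosts in parallel and resolve names as well; this only shows the core idea of walking a subnet's host range.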

July 3, 2011  2:39 AM

How to manipulate BGP Routes

Sulaiman Syed

Border Gateway Protocol (BGP) is the backbone protocol that connects the Internet. It falls under the Exterior Gateway Protocols (EGPs); interestingly, it is the only routing protocol in use between external networks.

BGP is a robust protocol that handles hundreds of thousands of routes, and the number keeps growing. That is just for IPv4; IPv6 will add even more routes!

Manipulation of routes within the BGP cloud is one of the most challenging tasks a network engineer will be given. To manipulate routes, various path attributes (PAs) can be changed. This is done mainly by using Weight, Local Preference, AS-Path prepending, and the Multi-Exit Discriminator (MED).

Articles have been posted on how to use the above-mentioned attributes. It is not easy and takes getting used to. Happy BGP routing!
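To see why changing those attributes steers traffic, here is a small sketch of the first few BGP best-path tie-breakers in Python. This is my own simplified model, not router code; the class and field names are assumptions, and a real router evaluates many more steps (origin, eBGP vs iBGP, router ID, etc.).

```python
from dataclasses import dataclass, field

@dataclass
class BgpPath:
    weight: int = 0          # Cisco-local attribute, higher wins
    local_pref: int = 100    # shared inside the AS, higher wins
    as_path: list = field(default_factory=list)  # shorter wins
    med: int = 0             # compared between same neighbor AS, lower wins

def best_path(paths):
    """Pick the winner using the first few best-path tie-breakers, in order."""
    return min(paths, key=lambda p: (-p.weight, -p.local_pref, len(p.as_path), p.med))
```

For example, prepending your own AS to a path makes it lose to a shorter one, while raising local preference overrides AS-path length entirely; that ordering is exactly what makes these the main manipulation knobs.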

July 3, 2011  2:08 AM

Review “ManageEngine IT360”

Sulaiman Syed

We had to test ManageEngine IT360 for use in our enterprise network. IT360 is a move toward IT service management (ITSM), which is part of ITIL.

In brief, IT360 is:

IT360 is an integrated IT management solution by ManageEngine designed to monitor and manage IT infrastructure for medium and large enterprises. ManageEngine IT360 adds a business context to monitoring IT resources, thereby helping the various stakeholders understand the impact of downtimes on the business.

This review is actually old, but I am making a formal post where links to all the blog entries can be found in one place. The review consisted of three parts, as follows:

  1. Manage Engine IT360 Review – Part 1
  2. Manage Engine IT360 Review – Part 2
  3. Manage Engine IT360 Review – Part 3
Hopefully the review was comprehensive and covered most aspects. I will be doing further reviews of other solutions that I use.

May 30, 2011  4:35 AM

GRE Tunnel ARP entry never times out! – part 2

Sulaiman Syed

I have been trying to figure out why the ARP entries from the tunnels don’t time out as they naturally should. It seems the default 4-hour timer is not being applied here, for some reason yet unknown. We have opened a TAC case with Cisco. Roger Nobel (CCIE Wireless #23679) has been really helpful and efficient.

So far in our troubleshooting, we have tested how the MN (mobile node) associates with the AP, whether the association with the AP remains after the MN is disconnected, and whether the SUP720 maintains a record for this MN. What we found so far is the following.

After the MN is disconnected from the AP, the AP clears the association in less than one minute, and within another five minutes the association is cleared from the SUP720 as well. This can be seen with the following commands:

WLAN-CORE-1#show mobility mn ip
MN Mac Address  MN IP Address  AP IP Address  Wireless Network-ID  Flags
————–  ————-  ————-  ——————-  —–
b407.f9ea.a941  8                      F

Flags: D=Dynamic network ID, F=Fresh, G=Grace Period

WLAN-CORE-1#show mobility mn ip
MN with ip is not found in database

Now, naturally, the ARP entry should stay for 4 hours (the Cisco default), but in our case it stays forever! We have ARP entries as old as 10 days without any configuration changes. The command does not even show a timeout timer, as it does for the physical interfaces.

WLAN-CORE-1#show int gig 5/1
GigabitEthernet5/1 is up, line protocol is up (connected)
Hardware is C6k 1000Mb 802.3, address is 0011.5cb4.c2a4 (bia 0011.5cb4.c2a4)
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is T
input flow-control is off, output flow-control is off
Clock mode is auto
ARP type: ARPA, ARP Timeout 04:00:00

Here is what the tunnel interface looks like:

WLAN-CORE-1#show int tunnel 1
Tunnel1 is up, line protocol is up
Hardware is Tunnel
Internet address is X.X.X.253/20
MTU 1514 bytes, BW 1000000 Kbit, DLY 500000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation TUNNEL, loopback not set
Keepalive not set
Tunnel source X.X.X.1 (Loopback1), fastswitch TTL 255
Tunnel protocol/transport multi-GRE/IP, key disabled, sequencing disabled
Checksumming of packets disabled, fast tunneling enabled
Last input 00:00:00, output 00:00:01, output hang never
Last clearing of “show interface” counters never
Input queue: 0/75/125/37 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/0 (size/max)
5 minute input rate 318000 bits/sec, 226 packets/sec
5 minute output rate 3458000 bits/sec, 355 packets/sec
L2 Switched: ucast: 0 pkt, 0 bytes – mcast: 0 pkt, 0 bytes
L3 in Switched: ucast: 0 pkt, 0 bytes – mcast: 0 pkt, 0 bytes mcast
L3 out Switched: ucast: 0 pkt, 0 bytes mcast: 2989660 pkt, 922842977 bytes
249194378 packets input, 54362827775 bytes, 0 no buffer
Received 1308901 broadcasts (71327 IP multicasts)
0 runts, 0 giants, 18 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
327413145 packets output, 259801658657 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 output buffer failures, 0 output buffers swapped out

I will wait for Mr. Roger to come back and see what could possibly be causing this.

May 24, 2011  5:56 AM

GRE Tunnel ARP entry never times out! – part 1

Sulaiman Syed

I would like the ARP entries on the GRE tunnels (built by the WLSM to the APs) to be cleared automatically. Here are the configurations of the tunnels:

interface Loopback1
description tunnel_source
ip address 10.x.x.1

interface Tunnel1
description TO_Wireless_Faculty
bandwidth 1000000
ip address 10.x.x.253
ip access-group deny_nbns in
ip helper-address 10.x.x.100
ip helper-address 10.x.x.101
no ip redirects
ip mtu 1476
ip pim sparse-dense-mode
tunnel source Loopback1
tunnel mode gre multipoint
mobility network-id 1
mobility trust
mobility tcp adjust-mss
mobility multicast

The output of show ip arp:

show ip arp
Protocol  Address          Age (min)  Hardware Addr   Type   Interface
Internet  10.x.x.114        5652   3038.5541.5214  TUNNEL Tunnel8
Internet  10.x.x.126         994   9084.0da7.e68d  TUNNEL Tunnel8
Internet  10.x.x.66          6696   dc2b.6151.9bb4  TUNNEL Tunnel5
Internet  10.xx.124        1226   8c71.f8e5.ae28  TUNNEL Tunnel8
Internet  10.x.x.68         11103   a86a.6fa7.dc11  TUNNEL Tunnel5
Internet  10.x.x.115       11206   581f.aa17.dbda  TUNNEL Tunnel8
Internet  10.x.x.70          2333   b407.f938.c36b  TUNNEL Tunnel5
Internet  10.x.x.122       13955   e4ec.1047.a562  TUNNEL Tunnel8

The issue is that these entries never time out (we found entries as old as 10 days). Some of the mobile nodes leave and never come back, yet the ARP entry remains for 8 days (our DHCP lease time). Then, when a new mobile node gets that IP address, we get a message like this:

*May 22 02:24:17: %L3MM-4-DUP_IPADDR: MN 5c57.c8ed.d0ba is requesting ip which is being used by MN 7c6d.6215.6dcd

So, I would like to make the ARP entries on the tunnels expire in 8 days (exactly the same as the DHCP lease time, or less). This has been happening for quite some time, and I would like to solve it once and for all.
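For reference, IOS lets you set the ARP aging time per interface with the `arp timeout` command (in seconds). Assuming the platform honors it on the mGRE tunnel interface, something like this would cap the entries at the DHCP lease time (8 days = 691200 seconds):

```
interface Tunnel1
 arp timeout 691200
```

Whether the WLSM-created tunnels actually apply this timer is exactly what the TAC case should clarify.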

May 22, 2011  5:07 AM

How to troubleshoot EIGRP?

Sulaiman Syed

In the previous entry I explained how EIGRP is configured. I would like to follow up on that: how to check the operation of EIGRP, and the commands required for troubleshooting.

First, check the neighbors:

#show ip eigrp neighbors
IP-EIGRP neighbors for process 10
H   Address                 Interface   Hold Uptime   SRTT   RTO  Q  Seq Type
(sec)         (ms)       Cnt Num
8             Vl1           12 5d22h      25   200  0  76435
7             Vl1           12 5d22h      25   200  0  4134
3             Vl1           10 5d22h      23   200  0  571
1             Vl1           11 5d22h      22   200  0  54511
0             Vl1           14 5d22h      18   200  0  3354
10           Gi2/9         13 4w0d        1   200  0  764
19           Gi3/8         12 13w4d       2   200  0  4008
16           Gi3/1         11 15w4d       1   200  0  1007
13           Gi3/6         10 15w4d      21   200  0  1010
5            Gi2/3         11 16w4d       1   200  0  37489
31           Gi2/11        10 16w4d      15   200  0  54827
2            Gi2/1         14 16w6d       2   200  0  4024
47            Gi2/2         14 17w0d      16   200  0  2925

H is the order of neighbor discovery. Interface is where the neighbor is located. Hold is the timer after which the neighbor is declared dead if Hellos stop arriving. Uptime is self-explanatory. SRTT is the time between transmitting a packet and receiving its acknowledgment. RTO: when a multicast delivery fails, the router retransmits as unicast to the neighbor, and RTO is how long it waits for the acknowledgment of that unicast. Q Cnt is the number of queued packets. Seq Num is the sequence number of the last EIGRP packet received from that neighbor.

Second, check the topology table; it indicates the cost and how many routes are available for each destination network.

#show ip eigrp topology
IP-EIGRP Topology Table for AS(10)/ID(

Codes: P – Passive, A – Active, U – Update, Q – Query, R – Reply,
r – reply Status, s – sia Status

P, 2 successors, FD is 28928
via (28928/28672), Vlan1
via (28928/28672), GigabitEthernet2/3
P, 1 successors, FD is 3072
via (3072/2816), Vlan1
P, 1 successors, FD is 281600
via Rstatic (281600/0)
P, 1 successors, FD is 3072
via (3072/2816), GigabitEthernet3/1
P, 1 successors, FD is 2816
via Connected, GigabitEthernet3/3
P, 1 successors, FD is 2816
via Connected, GigabitEthernet3/1
P, 1 successors, FD is 3072
via (3072/2816), GigabitEthernet3/3
P, 2 successors, FD is 3072
via (3072/2816), GigabitEthernet2/6
via (3072/2816), Vlan1
P, 1 successors, FD is 3072
via (3072/2816), GigabitEthernet3/6
P, 1 successors, FD is 2816
via Connected, GigabitEthernet2/7
P, 2 successors, FD is 3072
via (3072/2816), GigabitEthernet3/3
via (3072/2816), Vlan 1

P means passive, which indicates a stable route. Active means a lost route for which the protocol is trying to find an alternative path through queries. The first number in each entry (e.g. 28928) is the feasible distance (FD): the neighbor's advertised cost plus the cost to reach that neighbor, i.e. the total cost. The second number (28672) is the reported distance (RD): the cost advertised by the neighbor to that network. For a path to qualify as a feasible successor, its RD must be less than the current FD; this ensures loop-free routing. The “via” lines list the neighbors and the interfaces through which they are connected.
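The feasibility condition is simple enough to state in a few lines of Python. This is just an illustration of the rule (the function name is mine), using the 28928/28672 pair from the first topology entry:

```python
def is_feasible_successor(reported_distance, feasible_distance):
    """EIGRP feasibility condition: a backup path is loop-free (and usable
    without a query) only if the neighbor's reported distance is strictly
    less than the current feasible distance."""
    return reported_distance < feasible_distance

# First topology entry above: FD 28928, neighbor's RD 28672 -> feasible.
```

A neighbor whose RD equals or exceeds the FD might be routing through us, which is why equality is not good enough.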

Lastly, the command that gives a summary of all routing protocols running on the router/switch:

#show ip protocols
Routing Protocol is “eigrp 10”
Outgoing update filter list for all interfaces is not set
Incoming update filter list for all interfaces is not set
Default networks flagged in outgoing updates
Default networks accepted from incoming updates
EIGRP metric weight K1=1, K2=0, K3=1, K4=0, K5=0
EIGRP maximum hopcount 100
EIGRP maximum metric variance 1
Default redistribution metric is 10000 100 255 1 1500
Redistributing: static, eigrp 10
Automatic network summarization is not in effect
Maximum path: 4
Routing for Networks:

Routing Information Sources:
Gateway         Distance      Last Update
                90            05:15:10
                90            05:15:10
                90            05:15:10

Here we can see which networks are advertised, any access-list filtering of routes, the K values, redistribution, and (where configured) any passive interfaces. The last table lists the neighboring routing information sources.

May 20, 2011  12:01 PM

How to configure EIGRP?

Sulaiman Syed

EIGRP is a Cisco proprietary protocol and one of the most widely used within enterprises running Cisco switches and routers. It stands for Enhanced Interior Gateway Routing Protocol. The reason for such wide deployment is its ease of use compared to OSPF, and the effectiveness of the protocol.

Before EIGRP can exchange topology information, it must first build relationships between EIGRP-enabled routers. For two routers to become neighbors, the following conditions must be met:

  • The Autonomous System number must match.
  • The K values must match (they do if left at their defaults).
  • The routers' interfaces must be in the same subnet.

Here is the syntax for configuring EIGRP:

Router> enable
Router# config terminal
Router(config)# router eigrp 1
Router(config-router)# network ?
  A.B.C.D  EIGRP wild card bits
Router(config-router)# network
Router(config-router)# no auto-summary
Router(config-router)# end

By default, EIGRP auto-summarizes routes under a certain condition: the router summarizes to the classful boundary when it sits between two different networks (not subnets), for example between two different class C networks.

The calculation of the metric (cost) is slightly complicated, but with default K values the equation is straightforward. Here is the full equation:

Cost = [(K1 × Bandwidth + (K2 × Bandwidth)/(256 − load) + K3 × delay) × K5/(K4 + reliability)] × 256

(The K5/(K4 + reliability) factor is applied only when K5 ≠ 0; with K5 = 0 it is skipped.)

With the default values K1, K3 = 1 and K2, K4, K5 = 0, the equation becomes:

Cost = (Bandwidth + delay) X 256

where bandwidth is 10^7 divided by the minimum bandwidth along the path (in kbps), and delay is the cumulative delay along the path in tens of microseconds.
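The default-K equation above can be checked against the topology output from the troubleshooting entry. Here is a small Python sketch (the function name is mine) that reproduces the 3072 and 2816 feasible distances seen for the Gigabit links:

```python
def eigrp_metric(min_bandwidth_kbps, delays_usec):
    """Classic EIGRP metric with default K values (K1=K3=1, K2=K4=K5=0).
    Bandwidth term: 10^7 / slowest link in kbps; delay term: cumulative
    delay converted to tens of microseconds."""
    bw = 10**7 // min_bandwidth_kbps
    delay = sum(delays_usec) // 10
    return 256 * (bw + delay)

# One connected GigabitEthernet hop (10 usec delay): (10 + 1) * 256 = 2816
# Two GigabitEthernet hops:                        (10 + 2) * 256 = 3072
```

Those match the "FD is 2816" (connected) and "FD is 3072" (one neighbor away) entries in the show ip eigrp topology output, which is a handy sanity check when verifying path selection.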

May 16, 2011  12:49 PM

What is the difference between M1 and F1 Cisco Nexus Line cards?

Sulaiman Syed

Cisco Nexus series switches brought a new technology to the data center. The whole design has changed from the Catalyst 6500 series: the Nexus no longer depends on the supervisor's backplane; it is more of a midplane architecture. Let me elaborate a little. What that statement means is that if there is any speed limitation, it is imposed by the line card. So how do the line cards communicate with each other? Through the fabric modules. Read on for further details on the basic architecture difference between the Catalyst 6500 and the Nexus 7000.

Nexus line card modules fall into two major categories: M1 and F1. There is another variation of the M1, the M1-XL. Brad Hedlund wrote a good reference article titled “Cisco Nexus 7000 connectivity solutions for Cisco UCS”.

M1, M1-XL

The M1 series were the introductory line cards offered by Cisco for the Nexus. They come with 80 Gbps of fabric bandwidth. These cards have 10 Gigabit links, making them ideal for the distribution layer. Let's put down the specifications/performance metrics from the data sheets. These cards provide both Layer 2 and Layer 3 connectivity! You can always multiply these numbers by the maximum number of line cards a chassis can hold to get the marketing figures.
1- Delivery of 60 million packets per second (Mpps) for Layer 2/3 IPv4.
2- Delivery of 30 Mpps for IPv6 unicast.
3- Access control list (ACL) capacity of 64k entries per module. Entries include Layer 2/3/4 addresses and Cisco's metadata fields, the security group tags (SGTs).
4- On the 32-port line card, each group of 4 ports shares 10 Gbps of fabric. They can run either 1 port at a dedicated 10 Gig (disabling ports 2, 3, and 4) or all 4 in shared mode.
5- 1 GB of DRAM.
6- Network management: Cisco DCNM 4.0.
7- MAC address table of 128k entries.
8- FIB table of 128k entries.
9- NetFlow support for 512k entries, both ingress and egress.
10- 16,384 bridge domains and 4,096 VLANs per Virtual Device Context (VDC).
11- 16k policer entries.

The M1-XL series offers the flexibility and performance for Internet-facing deployments, with wider transceiver module support. What it basically offers is the possibility of a larger FIB:
* up to 1M IPv4 routes (depending on prefix distribution)
* up to 350k IPv6 routes (depending on prefix distribution)

This was not possible on the M1 line cards. The M1-XL also provides extra ACL entry support compared to the M1, thanks to the increased DRAM:
1- 2 GB of DRAM.
2- ACL capacity of 128k entries per module.
3- Network management: Cisco DCNM 5.1.

The F1 series line cards were introduced after the M1. They provide a slightly cheaper option with higher port density, but with ONLY Layer 2 forwarding. This makes them ideal line cards for the access layer. What happens if Layer 3 processing is required? The line card forwards that traffic to the M1/M1-XL cards for processing. These cards have 230 Gbps of fabric.

1- 480 Mpps of Layer 2 forwarding.
2- ACL capacity of 32k entries per module. Entries include Layer 2/3/4 addresses and Cisco's metadata fields, the security group tags (SGTs).
3- 32-port line card with 230 Gbps of fabric.
4- 1 GB of DRAM.
5- Network management: Cisco DCNM 5.1.
6- MAC address table of 16k entries per forwarding engine.

The forwarding engine is something new. Every two ports are served by a switch-on-chip (SoC); these SoCs are the forwarding engines, and each supports 16k MAC entries. What this implies (and how the marketing figure came about) is that for 32 ports we have 16 SoCs. With careful planning, using one VLAN per SoC, we get a total of 256k of MAC address support. But if we span one VLAN across all the SoCs, then we are bound by the per-SoC limit of 16k MAC entries.
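The MAC-capacity arithmetic above is worth making explicit, since it is the difference between the best-case marketing figure and the worst-case spanned-VLAN figure. A quick sketch (my own helper, with 16k taken at the round marketing value):

```python
def f1_mac_capacity(ports=32, ports_per_soc=2, macs_per_soc=16_000):
    """Aggregate MAC capacity of an F1 card when each SoC holds a disjoint
    set of VLANs. A single VLAN spanning every SoC is still capped at one
    SoC's table (macs_per_soc)."""
    socs = ports // ports_per_soc
    return socs * macs_per_soc
```

So 16 SoCs × 16k gives the 256k aggregate, but only if VLANs are localized per SoC; one campus-wide VLAN collapses that back to 16k.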

These cards support Cisco FabricPath technology. From the data sheet:

The benefits of Cisco FabricPath include:

• Operational simplicity: Cisco FabricPath embeds an autodiscovery mechanism that does not require any additional platform configuration. By offering Layer 2 connectivity, this “VLAN anywhere” characteristic simplifies provisioning and offers workload flexibility across the network.

• High resiliency and performance: Since Cisco FabricPath is a Layer 2 routed protocol, it offers stability, scalability, and optimized resiliency along with network failure containment.

• Massively scalable fabric: By building a forwarding model on 16-way ECMP, Cisco FabricPath helps prevent bandwidth bottlenecks and allows capacity to be added dynamically, without network disruption.

They also have the ability to carry FCoE. These features include:
1- Virtual SANs (VSANs)
2- Inter-VSAN routing
3- PortChannels (up to 16 links)
4- Storage VDC

This sums up what I found. I will add more later as I learn or gather more details.

May 12, 2011  2:47 AM

Nexus 7000 vs Catalyst 6500 (Backplane Capacity)

Sulaiman Syed

Cisco has introduced the Nexus, a new line of data center switches. They come in the 7000, 5000, 2000, and 1000 variants.

The Nexus 7000, by its functionality, sits at the distribution layer, while the Nexus 5000 goes in the access layer. The Nexus 2000 is nothing but an extension of the 5k switches; by an easier analogy, they work like line cards in a 6500 chassis.

In this article I will discuss why and when to use the Nexus 7000 in the enterprise core layer. The Nexus was designed for the data center, but with the increased requirements on the backbone and continued network growth, the current top-of-the-line 6500 switches fall short.

The backplane/fabric of the 6500 is part of the supervisor engine; in the case of the SUP720, 40 Gbps per line card slot is the maximum bandwidth. If you connect an 8-port 10 Gig line card, you are oversubscribing at a 2:1 ratio; with a 16-port 10 Gig line card this doubles to 4:1. The problem is that when many (30 or more) distribution switches are uplinked at 10 Gig, a chassis with 9 slots is no longer enough, at least if you want to connect them without oversubscription.

The Nexus switches have a different architecture. The line cards no longer depend exclusively on the supervisor engine's fabric for traffic processing; the fabric is provided by fabric modules that can be upgraded independently. Each fabric module supports 46 Gbps per slot, and a Nexus 7000 with 10 slots supports 5 fabric modules, which equals 230 Gbps per module slot. This is 5.75 times more than the original 6500 fabric. Still, the 32-port 10 Gig line card has an 80 Gbps connection to the fabric, so those 32 ports are oversubscribed at a 4:1 ratio, while the 8-port line cards are non-oversubscribed.

Honestly, I still can't figure out the point of all that fabric module capacity, since the line card modules themselves are all limited to 80 Gbps of fabric connectivity. So with 8 line cards we have a requirement of 640 Gbps, and we are still lacking fabric support for these line cards.
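The oversubscription ratios quoted above are easy to sanity-check. Here is a small helper (my own, not vendor math) that computes offered port bandwidth against the fabric bandwidth available to the card:

```python
def oversubscription(ports, port_speed_gbps, fabric_gbps):
    """Ratio of offered line-card bandwidth to available fabric bandwidth
    (n:1). A value of 1.0 means line rate with no oversubscription."""
    return (ports * port_speed_gbps) / fabric_gbps

# SUP720, 40 Gbps/slot: 8 x 10G -> 2:1, 16 x 10G -> 4:1
# Nexus 7000, 32 x 10G against the card's 80 Gbps fabric connection -> 4:1
```

Note the ratio depends on the card's own fabric connection (80 Gbps here), not on the 230 Gbps the chassis fabric modules could deliver per slot, which is exactly the mismatch questioned above.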

May 5, 2011  3:53 PM

Solved: Distribution Switch Acting Weird

Sulaiman Syed

In my previous entry, “Troubleshoot: Distribution Switch Acting Weird”, I mentioned a strange problem that was happening in the network. I went through a lot of trouble to find out what was going on. We checked spanning tree in full detail, drawing all ports, roots, etc. We were sure that something was stopping traffic on our Server Farm VLAN from propagating into the Routing VLAN. As mentioned in the earlier post, we knew the general idea, but we were looking in the wrong place.

We concentrated on the distribution switch since the traffic was stopping there. The problem was at the core switch! We had never applied VTP pruning at the interface level, so we never really thought it could be an issue. What we found was that the vtp pruning command was enabled; checking its operation requires typing “show vtp status | in pruning”. Since it was “enabled”, the following scenario happened.

The core switches were the VTP servers, while the wireless devices were clients. Between those two sat the server farm distribution switch, operating in transparent mode. This meant the distribution switch passed all VTP packets without processing them; in other words, effective communication was taking place between client and server in the VTP domain. When we shut down the VLAN interface on the VTP client, it sent a message to the server that the Server Farm VLAN could be pruned. This stopped communication for that VLAN on the link between the server farm distribution switch and the core switch. The scenario can be seen in the figure below.

[Figure: VTP pruning]

This is why we should never use “vtp pruning”. Just prune the unwanted VLANs from the trunk links manually. That gives proper control and predictable network behavior.

In general, it is best to do everything manually in networks. Never use “auto”, whether for speed, duplex negotiation, trunking, EtherChannel, or route summarization. The more predictable the network behavior, the easier it is to troubleshoot.
