Saturday, May 11, 2013

Catalyst 6500 Switches



Catalyst 6500

The 6500 is a modular chassis switch, capable of forwarding up to 400 million packets per second and powering up to 471 PoE devices at 15.4 W each.

A 6500 comprises a chassis, power supplies, one or two supervisors, line cards and service modules. A chassis can have 3, 4, 6, 9 or 13 slots, each chassis taking one or two modular power supplies. The supervisor engine provides centralised forwarding information and processing, the line cards provide port connectivity, and service modules allow devices such as firewalls to be integrated within the switch.

Supervisor


Supervisor 32

The Supervisor 32 (or Sup32) is a classic-only supervisor engine, comprising an MSFC2A and a PFC3B.

The MSFC2A is the software portion of the supervisor. This particular version comprises just a route processor, with the switch processor being part of the base board (similar to the Supervisor 2).

The PFC3B is the hardware component. Feature-wise it is identical to that of the Supervisor 720, and thus the majority of hardware features available in the 720 are available in the Supervisor 32.
Neither MSFC nor PFC are optional on the Supervisor 32.
Presently, the Supervisor 32 comes in two options:


  • Supervisor 32 8-GE - 8xSFP 1GE Uplinks + 1 10/100/1000 management port (9 switchports total)

  • Supervisor 32 10-GE - 2xXENPAK 10GE uplinks + 1 10/100/1000 management port

Note: The management ports are merely designated as such. In practice, they are just regular switchports.


Supervisor 720

Sup720 components

The Supervisor 720 (or Sup720) is a fabric-enabled supervisor engine, comprising an MSFC3 and a PFC3B. It supports all flavours of line card: Classic, cef/dcef256 and cef/dcef720.

The MSFC3 is the software portion of the supervisor. Version 3 of the MSFC includes both the route and switch processors, and thus handles all software processing of the supervisor (unlike the Supervisor 32 and Supervisor 2 where this functionality is on the base board).

The PFC3B is the hardware component. Feature-wise it is identical to that of the Supervisor 32, and thus the majority of hardware features available in the 720 are also available in the Supervisor 32.

Original Supervisor 720 units shipped with the PFC3A. The PFC3B addressed a number of limitations, notably adding hardware MPLS support, an improved NetFlow hash, ACL counters, 4K ACL labels (as opposed to 512) and an increase in ACL LOU registers from 32 to 64.

Neither MSFC nor PFC are optional on the Supervisor 720.

Unlike the Supervisor 32, the Supervisor 720 has the option of either PFC3B or PFC3BXL.

Port Options

The 6500 supports four port configuration options (a configuration sketch follows the list). A port may be:
  • An access interface - Carries a single vlan; typically used for wiring closets.
  • A trunk interface - Carries multiple vlans, using either ISL or 802.1q encapsulation. Note that a port with a voice vlan is technically a trunk link.
  • A routed interface - Has an IP address and makes layer 3 routing decisions, like a router port. Importantly, these interfaces consume an internal vlan from the 6500's available pool.
  • A subinterface - Carries multiple virtual links across a single physical link.
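As a rough sketch in native IOS (the interface numbers, vlans and addresses are illustrative, not from any real deployment), the four types look like this:

interface GigabitEthernet3/1
 description access port - carries vlan 10 only
 switchport
 switchport mode access
 switchport access vlan 10
!
interface GigabitEthernet3/2
 description 802.1q trunk - carries multiple vlans
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
!
interface GigabitEthernet3/3
 description routed port - consumes one internal vlan
 no switchport
 ip address 192.0.2.1 255.255.255.0
!
interface GigabitEthernet3/4
 description parent for subinterfaces
 no switchport
!
interface GigabitEthernet3/4.100
 description one virtual link per vlan tag
 encapsulation dot1Q 100
 ip address 198.51.100.1 255.255.255.0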

Operating System

The 6500 currently supports three operating systems: CatOS, Native IOS and Modular IOS.

CatOS

CatOS supports layer 2 (switching) operations only. To perform routing (layer 3) operations, the switch must run in hybrid mode, where CatOS runs on the Switch Processor (SP) portion of the MSFC and IOS runs on the Route Processor (RP). To make configuration changes, the user must then manually switch between the two environments.

While CatOS does have some functionality missing from IOS, it's generally considered obsolete compared to running a switch in Native Mode.

Native IOS

Cisco IOS can be run on both the SP and RP. In this instance, the user is unaware of where a command is being executed on the switch, even though technically two IOS images are loaded -- one on each processor. This mode is the default shipping mode for Cisco products and enjoys support of all new features and line cards.

Modular IOS

Modular IOS (or, more properly, IOS with Software Modularity) is a version of Cisco IOS that employs a modern UNIX-based kernel to overcome many of the limitations of IOS. See Modular IOS for a breakdown of the improvements brought in this software release.

Control and Data Planes

The 6500 is a hardware switch: all basic forwarding functions are provided in hardware through CEF, achieved by the PFC. To obtain layer 3 information, the MSFC first builds a routing table from the running routing protocols in the control plane; this is the Routing Information Base, or RIB. It is then converted to an optimised format, the Forwarding Information Base (FIB), and copied to the PFC TCAM in the data plane.
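As a quick sketch (the prefix is illustrative), the same route can be inspected at each stage: show ip route displays the RIB entry built by the MSFC, show ip cef the FIB entry derived from it, and show mls cef summary the totals programmed into the PFC TCAM. Exact command availability varies slightly by supervisor and release.

Cat6K# show ip route 192.0.2.0
Cat6K# show ip cef 192.0.2.0
Cat6K# show mls cef summary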

On ingress, a key is made from the incoming packet and the TCAM is queried in parallel. The best match contains a pointer to the adjacency table, which provides the L2 rewrite information and, if applicable, any MPLS labels to push on to or pop from the packet.

The PFC TCAM is a shared memory, into which ACLs, routes, NetFlow entries etc. are all placed. It is therefore important to provision it appropriately when ordering a 6500.

MSFC

The MSFC, or Multilayer Switch Feature Card, provides the software functionality, largely building the routing table and maintaining layer 2 and layer 3 protocols.

The MSFC itself really comprises two systems, each with its own DRAM, CPU and bootflash: the Route Processor (RP) and the Switch Processor (SP).

Switch Processor

The SP handles the initial boot-up of the switch when running native or modular IOS. The SP portion of the bootflash is first copied to the SP DRAM and booted. Upon completion, the SP copies the RP portion of IOS to the RP's DRAM and hands control of the boot process to the RP; from here on, the user interface is controlled from the RP.

During normal operation, the SP handles layer 2 protocols such as ARP and spanning tree. While you can upgrade the SP's memory, doing so is largely unnecessary because most L2 information is now stored in the PFC.

Route Processor

The RP provides the general UI of the 6500, handing control over to the SP for L2-specific functions. When running native or modular IOS, it is generally not helpful to think of the 6500 as two separate processors, but instead as a parallel machine. The RP's bootflash is not used in native or modular IOS and therefore requires no code to be placed on it.

During normal operation, the RP runs the routing protocols (OSPF, EIGRP, BGP etc.) and builds the RIB in software before programming the FIB into the PFC TCAM. Additionally, any QoS commands not supported in hardware will be handled by the RP.

Generally, out-of-memory errors on the 6500 are addressed by upgrading the RP DRAM. Running the SP and RP with different DRAM amounts is a supported configuration.
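Because each processor has its own DRAM, memory is checked per processor; remote command switch runs an exec command on the SP from the RP console. A sketch, assuming native IOS:

Cat6K# show memory statistics
Cat6K# remote command switch show memory statistics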

PFC

The PFC, or Policy Feature Card, is where hardware forwarding is performed on the 6500. The PFC3B is the only option for the Supervisor 32 and the default for the Supervisor 720. This card supports up to 239K (where K = 1024) routes, ACLs etc. as a shared memory, which is typically adequate for campus environments with default routes.

The PFC3BXL is also an option for the Supervisor 720, allowing up to 1M IPv4 routes, ACLs etc. as a shared memory. This is the PFC TCAM size required to hold full Internet routing tables.

Newer line cards support the PFC3C, which allows up to 96K MAC addresses as well as some additional undisclosed enhancements.

Note: Each PFC variety has an equivalent DFC option. In the case of mismatching DFCs and PFCs, the chassis will drop to the lowest common denominator (e.g. a chassis with a PFC3B and a DFC3BXL will run as PFC3B throughout).
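To check which PFC and DFCs a chassis is actually running (and therefore which mode it will fall back to), show module lists the PFC and any DFCs as sub-modules, while show mls cef maximum-routes reports how the shared TCAM is partitioned. A sketch, assuming native IOS:

Cat6K# show module
Cat6K# show mls cef maximum-routes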

Methods of Operation

The 6500 has five major modes of operation: Classic, cef256, dcef256, cef720 and dcef720.
These make use of the switch fabric and the classic bus. All 6500 chassis have both of these, but the mode of operation depends on whether, how and when each is used.

Classic Bus

The 6500 classic architecture provides 32 Gbps of centralised forwarding performance. The design is such that an incoming packet is first queued on the line card, then placed on to the global data bus (dBus) and copied to all other line cards, including the supervisor. The supervisor then looks up the correct egress port, access lists, policing and any relevant rewrite information on the PFC. The result is placed on the result bus (rBus) and sent to all line cards. Those line cards for which the data is not required terminate processing; the others continue forwarding and apply the relevant egress queuing.

The classic bus runs at 16 Gbps full duplex (hence the 32 Gbps figure) and is the only supported way of connecting a Supervisor 32 engine to a 6500.

cef256

This method of forwarding was first introduced with the Supervisor 2 engine. When used in combination with a Switch Fabric Module, each line card has an 8 Gbps connection to the switch fabric in addition to its classic bus connection. In this mode, assuming all line cards have a switch fabric connection, an ingress packet is queued as before and its headers are sent along the dBus to the supervisor. They are looked up in the PFC (including ACLs etc.) and the result is placed on the rBus. The ingress line card takes this information and forwards the data to the correct line card across the switch fabric. The main advantage here is the dedicated 8 Gbps connection between the line cards. The receiving line card queues the egress packet before sending it from the desired port.

The '256' is derived from the switch fabric offering 2x8 Gbps channels to each of 8 payload slots on a 6509: 16 Gbps × 8 slots = 128 Gbps, doubled to 256 Gbps because the switch fabric is full duplex.

dcef256

dcef256 uses distributed forwarding. These line cards have 2x8 Gbps connections to the switch fabric and no classic bus connection.

Unlike the previous examples, each line card holds a full local copy of the supervisor's routing tables, as well as its own L2 adjacency table (i.e. MAC addresses). This eliminates the need for any connection to the classic bus, or any use of the shared resource of the supervisor. In this instance, an ingress packet is queued, but its destination is looked up locally. The packet is then sent across the switch fabric and queued on the egress line card before being sent.

cef720
This mode of operation is similar to cef256, except there are now 2x20 Gbps connections to the switch fabric and no need for a Switch Fabric Module (the fabric is now integrated into the supervisor). It was first introduced with the Supervisor Engine 720.

In addition, the classic bus architecture on the line card has gone, replaced by a fabric ASIC controlling forwarding from the ports to the fabric.

The '720' is derived from a chassis using 2x20 Gbps (40 Gbps total) fabric connections on 9 slots of a 6509: 40 Gbps × 9 slots = 360 Gbps, doubled to 720 Gbps because the switch fabric is full duplex. The calculation uses 9 slots rather than the 8 used for cef256 because a slot no longer has to be given over to the Switch Fabric Module.

dcef720

This mode of operation is identical to cef720, except each line card now has a DFC (Distributed Forwarding Card), which is able to make forwarding decisions locally (like the dcef256 architecture).

Mixing Modes
When modes are mixed, the classic bus runs in one of three major methods of operation: Bus/Flowthrough, Compact and Truncated.

Bus/Flowthrough is the classic means of sending packets and is used for communication between two classic cards. In this case, the whole packet is sent along the dBus in 16-byte cycles, with a wait cycle sent once sending completes. Centralised box performance in this case is 15 Mpps in the best case (64-byte packets).

Compact is used when a chassis has only fabric-enabled cards. In this case, a special 32-byte header is sent along the classic bus containing information such as source/destination IP, CoS, etc. Because the header size is guaranteed (2 clock cycles), there is no need to send wait cycles, and 30 Mpps is guaranteed regardless of packet size.

Truncated is used when a fabric chassis has a classic card installed. As classic cards will generate bus errors on compact frames (which they cannot parse), full 64-byte packets must be sent (with a full L2/L3 header and a null payload), and a wait cycle must also be sent for the classic modules. Performance is reduced to a best case of 15 Mpps (64-byte packets, fabric cards communicating).

Note: Distributed cards are unaffected by the above modes, as they only ever use the switch fabric for forwarding data and never use the classic bus for header lookup. In this case, 48 Mpps can be gained per slot, giving (for example) 432 Mpps for a fully loaded 6509 chassis.
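The bus mode each module has actually negotiated can be verified from the CLI; a sketch, assuming native IOS on a fabric-capable chassis:

Cat6K# show fabric switching-mode
Cat6K# show fabric utilization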

Power Redundancy Options

The 6500 supports dual power supplies for redundancy. These may be run in one of two modes, redundant or combined mode.

Redundant Mode
When running in redundant mode, each power supply provides approximately 50% of its capacity to the chassis. In the event of a failure, the unaffected power supply then provides 100% of its capacity and an alert is generated. As there was already enough power for the chassis, there is no interruption to service in this configuration. This is also the default and recommended way to configure power supplies.

Combined Mode
In combined mode, each power supply provides approximately 83% of its capacity to the chassis. This allows for greater utilisation of the power supplies and potentially increased PoE densities.

In the event of a failure, all devices except the supervisor are powered down. During this time, there will be a temporary network outage while power is returned to the system. The order in which this is done is as follows:

  1. First, service modules are powered up from the top down.
  2. Then, line cards are powered up from the top-most slot to the bottom-most. PoE is _not_ permitted at this stage.
  3. Next, PoE is powered up from the highest line card and the highest port (i.e. line card 0/port 0) down through to the lowest.

This continues until the capacity of the remaining power supply is reached. Normally, a single power supply will be able to power all service modules and line cards, but not provide the PoE densities required.
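The mode is a single global setting, and show power reports capacity and per-module draw. A minimal sketch, assuming native IOS:

Cat6K# conf t
Cat6K(config)# power redundancy-mode redundant
Cat6K(config)# end
Cat6K# show power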

QoS
There are three places where the 6500 gets involved in QoS: ingress queuing on the line card (normally a regular queue plus a priority queue), rate limiting and policing on the PFC, and finally queuing on the egress line card.

Port Trust

Generally, a port will be set to a certain trust:

  • Un-trusted - all QoS values are ignored and internal DSCP is set to a default (normally 0)
  • Trust-CoS - The CoS value will be remapped to internal DSCP and all other fields ignored
  • Trust-IPP - The IPP value will be remapped to internal DSCP and all other fields ignored
  • Trust-DSCP - The DSCP value will be copied to internal DSCP and all other fields ignored

By default the egress packet will inherit the internal DSCP value in all fields. We can, however, preserve the original settings should this be desired.
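QoS must be enabled globally before a per-port trust state takes effect. A minimal sketch, assuming native IOS (the interface number is illustrative):

Cat6K# conf t
Cat6K(config)# mls qos
Cat6K(config)# interface gigabitEthernet 3/1
Cat6K(config-if)# mls qos trust dscp
Cat6K(config-if)# end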

Packet Flow

An ingress packet has its CoS or ToS value copied to internal DSCP (see above) and is then queued in either the standard or the priority queue of that line card. It is then ingress- and egress-policed by ACLs on the PFC (forwarded, marked or dropped) based on the burst (see Token Buckets) and burst rate parameters. The packet is then queued on the egress card, either in a standard queue (chosen by mapped CoS) or in the priority queue. In the standard queues, a scheduling algorithm determines de-queuing (e.g. Weighted Round Robin, Shaped Round Robin etc.).
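Policing on the PFC is configured through the usual MQC constructs. A sketch that drops traffic matching an illustrative ACL beyond 10 Mbps (the class and policy names, ACL and rates are all made up for the example):

Cat6K(config)# access-list 101 permit tcp any any eq ftp
Cat6K(config)# class-map match-all BULK
Cat6K(config-cmap)# match access-group 101
Cat6K(config-cmap)# exit
Cat6K(config)# policy-map INGRESS-POLICE
Cat6K(config-pmap)# class BULK
Cat6K(config-pmap-c)# police 10000000 312500 conform-action transmit exceed-action drop
Cat6K(config-pmap-c)# exit
Cat6K(config-pmap)# exit
Cat6K(config)# interface gigabitEthernet 3/1
Cat6K(config-if)# service-policy input INGRESS-POLICE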

Line Card Notation
All queuing is performed on the individual line cards. When looking at a line card, there is a standard notation describing the queuing features you get, of the form XpYqZt, where:

  • X is the number of priority queues (1 or 0)
  • Y is the number of standard queues
  • Z is the number of queuing thresholds

For example, 1p4q8t means 1 priority queue, 4 regular queues and 8 thresholds.

Note that, today, the 6500 does not support DSCP-to-queue mapping (only CoS-to-queue), with the exception of the 6708.
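The queuing structure of any given port can be read directly from the switch; a sketch (the interface number is illustrative):

Cat6K# show queueing interface gigabitEthernet 3/1
Cat6K# show interface gigabitEthernet 3/1 capabilities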

Online Insertion & Removal

OIR is a feature of the 6500 that allows you to hot-swap most line cards without first powering down the chassis, meaning one may perform an in-service upgrade. Before attempting this, however, it is important to understand the process of OIR and how it may still force a reload.

To prevent bus errors, the chassis has three pins in each slot which correspond with the line card. Upon insertion, the longest of these makes contact first and stalls the bus (to avoid corruption). As the line card is pushed in further, the middle pin makes the data connection. Finally, the shortest pin removes the bus stall and allows the chassis to continue operation.

However, if any part of this operation is skipped, errors will occur (resulting in a stalled bus and ultimately a chassis reload). Common problems include:

  • Line cards being inserted incorrectly (making contact with only the stall and data pins and thus never releasing the bus)
  • Line cards being inserted too quickly (so that the stall removal signal is not registered)
  • Line cards being inserted too slowly (so that the bus is stalled for too long, forcing a reload)

For the above reasons, you are strongly advised not to perform OIR outside of maintenance windows. It is also why OIR is sometimes jokingly expanded to "On Insertion, Reload".

Friday, May 10, 2013

Virtual Switch System

Virtual Switch System
The Virtual Switch System 1440 (or VSS-1440) on the Catalyst 6500 is a mode of operation that virtualises two switches into a single unit. At a minimum it requires a Supervisor 720-10GE to operate.


The primary benefit of this mode is that it enables Multichassis EtherChannel (MEC). Additional benefits include a common configuration across two aggregation switches, NSF/SSO failover between chassis and the ability to avoid layer 3 in the access (desirable in a datacentre deployment).


Architecture

At the heart of the Virtual Switching System is the Virtual Switch Link, or VSL. The VSL is a dedicated 10GE link (or bundle of links) used to carry control plane traffic between VSS members.


The VSL essentially acts as an extension of the backplane, allowing control traffic to travel between chassis. As it uses existing X2 optics, the distance between VSS members is limited only by the supported 10GE Ethernet standards.


When a VSS system is configured, the two chassis share a common configuration (see below for how this works). The devices also share a common MAC address and can truly load balance. In many ways, it is similar to how a Catalyst 3750 stack operates.


A common application is the datacentre or high-end distribution layer, where layer 2 links can be etherchannelled into two chassis for high availability.
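From the access switch's point of view, a MEC is configured exactly like an ordinary EtherChannel; the two physical uplinks simply land on different VSS chassis. A sketch (interface numbers illustrative; desirable mode runs PAgP, which also enables the Enhanced PAgP dual-active detection covered in a later post):

access-sw(config)# interface range gigabitEthernet 1/0/1 - 2
access-sw(config-if-range)# channel-group 20 mode desirable
access-sw(config-if-range)# no shut
access-sw(config-if-range)# end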


Hardware Requirements

The system must be running the Supervisor 720-10GE with a PFC3C or PFC3CXL. In addition, any 10GE ports used for the VSL must be:
  • 6708 (8-port 10GE line card) ports or Supervisor (Sup720-10GE) uplinks

  • Using the DFC3C or DFC3CXL in conjunction with the PFC3C or PFC3CXL Supervisor

A caveat is that any DFCs in the system must also be upgraded to DFC3C or DFC3CXL; the DFC3 and DFC3B and their XL equivalents are not supported. All line cards must also be 67xx cards (CEF or dCEF 720).

Dual Active

In the worst-case failure of a VSS (the link between the two switches going down), we end up with a dual-active situation, because the standby switch assumes the active has gone down and takes over.

Configuration

The configuration is summarised in the following steps:

  1. Configure the Virtual Switch Domain on both devices and designate each switch as either Switch 1 (primary) or Switch 2 (secondary)

  2. (Optional) Configure switch priority settings

  3. Configure the virtual switch links

  4. Run the conversion (causing switches to reload)

  5. Reconfigure the standby switch's VSL on the active switch to complete the configuration

Configuring the Virtual Switch Domain

The Virtual Switch Domain defines the grouping for the switches within the VSS system. The domain itself is an ID between 1 and 255 and should be unique for its layer 2 domain.

On Switch 1:

router-1# conf t

router-1(config)# switch virtual domain 10

router-1(config-vs-domain)# switch 1

On Switch 2:

router-2# conf t

router-2(config)# switch virtual domain 10

router-2(config-vs-domain)# switch 2

Configuring Switch Priorities (Optional)

VSS priorities are similar in nature to HSRP priorities: the switch with the highest priority becomes the active node and the other the standby. Both switches must have the same priority configuration settings to form a VSS system. By default, switch 1 will become the active (primary) switch and switch 2 the secondary.

To make switch 2 the primary switch, do the following on both switches:

router(config-vs-domain)# switch 1 priority 100

router(config-vs-domain)# switch 2 priority 110

Configure the Virtual Switching Link

The VSL link is a special link that carries control plane data between the two chassis. This must be configured on a 10GE port from either the Sup720-10GE or a 6708 line card.

In a deployment scenario the individual interfaces will differ. This article assumes the supervisor ports will be used to create the VSL link.

On Switch 1:

router-1(config)# interface port-channel 1

router-1(config-if)# no shut

router-1(config-if)# switch virtual link 1

router-1(config-if)# exit

router-1(config)# interface range tenGigabitEthernet 1/4 - 5

router-1(config-if-range)# no shut

router-1(config-if-range)# channel-group 1 mode on

router-1(config-if-range)# end

On Switch 2:

router-2(config)# interface port-channel 2

router-2(config-if)# no shut

router-2(config-if)# switch virtual link 2

router-2(config-if)# exit

router-2(config)# interface range tenGigabitEthernet 1/4 - 5

router-2(config-if-range)# no shut

router-2(config-if-range)# channel-group 2 mode on

router-2(config-if-range)# end

Executing the Conversion

This step creates the VSS system. It converts all interface names to a three-part notation, chassis/slot/port. Executing the conversion requires a reload, in order to merge both switches' configurations, renumber all ports and negotiate NSF/SSO etc. between the chassis and supervisors.
On both switches, issue:

router# switch convert mode virtual

You should select "yes" to reload the switch.

Finalising the Conversion

The conversion must be finalised by reconfiguring the port channel on the secondary switch.

On the active switch (probably Switch 1 unless you have set priority) enter the following:

router(config)# interface port-channel 2

router(config-if)# no shut

router(config-if)# switch virtual link 2

router(config-if)# exit

router(config)# interface range tenGigabitEthernet 2/1/4 - 5

router(config-if-range)# channel-group 2 mode on

router(config-if-range)# no shut

router(config-if-range)# end

The system should now be a virtual switch! At this point, you should save your config and verify the system.

You can do this via the show switch virtual command.

router# sh switch virtual

Switch mode : Virtual Switch

Local switch number : 1

Local switch operational role: Virtual Switch Active

Peer switch number : 2

Peer switch operational role : Virtual Switch Standby

Thursday, May 9, 2013

Layer 3 BFD

Layer 3 BFD
If no Enhanced PAgP neighbors are available to assist in dual-active detection, then another method is required to perform this function; use of a dedicated Layer 3 direct-link heartbeat mechanism between the virtual switches is an inexpensive way to determine whether a dual-active scenario has occurred.

Bidirectional Forwarding Detection (BFD) assists in the fast detection of a failed VSL, bringing with it the benefits that BFD natively offers, such as subsecond timers and pseudo-preemption. To take advantage of this feature, you must first configure BFD on the interfaces that will participate in IP-BFD dual-active detection, noting that these interfaces must be directly connected to each other:

vss#conf t
Enter configuration commands, one per line. End with CNTL/Z.
vss(config)#int gig 1/5/1
vss(config-if)#ip address 10.1.1.1 255.255.255.0
vss(config-if)#bfd interval 100 min_rx 100 multiplier 50
vss(config-if)#no shutdown
vss(config-if)#int gig 2/5/1
vss(config-if)#ip address 10.1.2.1 255.255.255.0
vss(config-if)#bfd interval 100 min_rx 100 multiplier 50
vss(config-if)#no shutdown

vss(config-if)#exit

Note that in a Cisco Virtual Switching System environment, both interfaces are seen to be Layer 3 routed interfaces on the same logical router and hence require different network addresses even though they are directly connected together.

To enable IP-BFD for dual-active detection, use the following configuration:
vss(config)#switch virtual domain 10
vss(config-vs-domain)#dual-active detection bfd
vss(config-vs-domain)#dual-active pair interface gig1/5/1 interface gig2/5/1 bfd

adding a static route 10.1.2.0 255.255.255.0 Gi1/5/1 for this dual-active pair
adding a static route 10.1.1.0 255.255.255.0 Gi2/5/1 for this dual-active pair


vss#show switch virtual dual-active bfd
Bfd dual-active detection enabled: Yes
Bfd dual-active interface pairs configured:
interface-1 Gi1/5/1 interface-2 Gi2/5/1

Note that by configuring these commands, static routes are automatically added for the remote addresses and are installed in the Routing Information Base (RIB) only if a dual-active scenario occurs. As a result, no packets are forwarded between the switches through the heartbeat interfaces until the VSL is brought down.

When the VSL does go down, a unique internal MAC address (selected from the pool of MAC addresses reserved for the line card) is assigned for each of the local interfaces, and sending BFD heartbeat packets brings up BFD neighbors. If the standby virtual switch has taken over as active, a BFD “adjacency-up” event is generated, indicating that a dual-active situation has occurred.

Action upon Dual-Active Detection
Upon detecting the dual-active condition, the original active chassis enters into recovery mode and brings down all of its interfaces except the VSL and nominated management interfaces, effectively removing the device from the network.

To nominate specific interfaces to be excluded from being brought down during dual-active detection recovery, use the following commands:

vss(config)#switch virtual domain 10
vss(config-vs-domain)#dual-active exclude interface gigabitEthernet 1/5/3

WARNING: This interface should only be used for access to the switch when in dual-active recovery mode and should not be configured for any other purpose

vss(config-vs-domain)#dual-active exclude interface gigabitEthernet 2/5/3
WARNING: This interface should only be used for access to the switch when in dual-active recovery mode and should not be configured for any other purpose

vss(config-vs-domain)#
To verify this configuration is correct, issue the following commands:
vss#sh switch virtual dual-active summary
Pagp dual-active detection enabled: Yes
Ip bfd dual-active detection enabled: Yes
Interfaces excluded from shutdown in recovery mode:
Gi1/5/3
Gi2/5/3

In dual-active recovery mode: No
You will see the following messages on the active virtual switch to indicate that a dual-active scenario has occurred:
*Jun 26 16:06:36.157: %VSLP-SW2_SPSTBY-3-VSLP_LMP_FAIL_REASON: Port 5/4: Link down
*Jun 26 16:06:36.782: %VSLP-SW1_SP-3-VSLP_LMP_FAIL_REASON: Port 5/4: Link down
*Jun 26 16:06:36.838: %VSL-SW1_SP-5-VSL_CNTRL_LINK: vsl_new_control_link NEW VSL Control Link 5/5
*Jun 26 16:06:37.037: %VSLP-SW1_SP-3-VSLP_LMP_FAIL_REASON: Port 5/5: Link down
*Jun 26 16:06:37.097: %VSL-SW1_SP-2-VSL_STATUS: ======== VSL is DOWN ========

The following messages on the standby virtual switch console indicate that a dual-active scenario has occurred:
*Jun 26 16:06:36.161: %VSL-SW2_SPSTBY-5-VSL_CNTRL_LINK: vsl_new_control_link NEW VSL Control Link 5/5
*Jun 26 16:06:37.297: %VSLP-SW2_SPSTBY-3-VSLP_LMP_FAIL_REASON: Port 5/5: Link down
*Jun 26 16:06:37.297: %VSL-SW2_SPSTBY-2-VSL_STATUS: -======== VSL is DOWN ========-
*Jun 26 16:06:37.301: %PFREDUN-SW2_SPSTBY-6-ACTIVE: Initializing as Virtual Switch ACTIVE processor
*Jun 26 16:06:37.353: %SYS-SW2_SPSTBY-3-LOGGER_FLUSHED: System was paused for 00:00:00 to ensure console debugging output.
*Jun 26 16:06:37.441: %DUALACTIVE-SP-1-VSL_DOWN: VSL is down - switchover, or possible dual-active situation has occurred


Recovery from Dual-Active Scenario

You are notified of the situation through the CLI, syslog messages, etc., and it is your responsibility to restore the original active virtual switch as part of the Cisco Virtual Switching System. You can restore it by reconnecting or restoring the VSL.

If a VSL flap occurs, the system recovers automatically. Upon a link-up event from any of the VSL links, the previous active supervisor engine that is now in recovery mode reloads itself, allowing it to initialize as the hot-standby supervisor engine. If the peer chassis is not detected because the VSL is down again, the dual-active detection mechanism determines whether or not the peer chassis is active. If the peer chassis is detected, this event is treated as another VSL failure event and the chassis once again enters into recovery mode.

When the VSL is restored, the following messages are displayed on the console and the switch in recovery mode (the previous active virtual switch) reloads:

*Jun 26 16:23:34.877: %DUALACTIVE-1-VSL_RECOVERED: VSL has recovered during dual-active situation: Reloading switch 1
*Jun 26 16:23:34.909: %SYS-5-RELOAD: Reload requested Reload Reason: Reload Command.
<…snip…>
***
*** --- SHUTDOWN NOW ---
***
*Jun 26 16:23:42.012: %SYS-SW1_SP-5-RELOAD: Reload requested
*Jun 26 16:23:42.016: %OIR-SW1_SP-6-CONSOLE: Changing console ownership to switch processor
*Jun 26 16:23:42.044: %SYS-SW1_SP-3-LOGGER_FLUSHED: System was paused for 00:00:00 to ensure console debugging output.
System Bootstrap, Version 8.5(1)
Copyright (c) 1994-2006 by cisco Systems, Inc.
<…snip…>

After the chassis reloads, it reinitializes and the supervisor engine enters into standby virtual switch mode. If Switch Preemption is configured to prioritize this chassis to become active, it assumes this role after the preempt timer expires.

Wednesday, May 8, 2013

Enhanced PAgP


Enhanced PAgP
With the introduction of the Cisco Virtual Switching System in its first software release, an enhancement to the PAgP protocol (Enhanced PAgP, or PAgP+) was implemented to assist in dual-active detection.

The result of this detection is that the standby virtual switch (switch 2) always transitions to become an active virtual switch and the active virtual switch (switch 1) always enters into recovery mode.

Upon detecting the VSL going down, switch 2 will immediately transmit a PAgP message on all port channels enabled for Enhanced PAgP dual-active detection, with a Type-Length-Value (TLV) containing its own Active ID = 2. When the access switch receives this PAgP message on any member of the port channel, it sees a new active ID value and takes the change as an indication that switch 2 is now the active virtual switch. In turn, the access switch modifies its local active ID to Active ID = 2 and immediately sends a message to both virtual switches on all members of the port channel with the new Active ID = 2, indicating that it now considers switch 2 to be the active virtual switch.

From this point onward, the access switch sends TLVs containing Active ID = 2 to the virtual switches in all its regularly scheduled PAgP messages.

Use the following commands to configure the Cisco Virtual Switching System to take advantage of dual-active detection using Enhanced PAgP:

vss#conf t
Enter configuration commands, one per line. End with CNTL/Z.
vss(config)#switch virtual domain 10
vss(config-vs-domain)#dual-active detection pagp
vss(config-vs-domain)#dual-active trust channel-group 20
vss(config-vs-domain)#

To verify the configuration and ensure that Enhanced PAgP is compatible with its neighbors, issue the following command:

vss#sh switch virtual dual-active pagp
PAgP dual-active detection enabled: Yes
PAgP dual-active version: 1.1
Channel group 10 dual-active detect capability w/nbrs
Dual-Active trusted group: No
Dual-Active Partner Partner Partner
Port Detect Capable Name Port Version
Gi1/8/1 No SAL0802SHG 5/2 N/A
Gi2/8/1 No SAL0802SHG 5/1 N/A
Channel group 20 dual-active detect capability w/nbrs
Dual-Active trusted group: Yes
Dual-Active Partner Partner Partner
Port Detect Capable Name Port Version
Te1/1/1 Yes vs-access-2 Te5/1 1.1
Te2/1/1 Yes vs-access-2 Te5/2 1.1

Action upon Dual-Active Detection

Upon detecting the dual-active condition, the original active chassis enters into recovery mode and brings down all of its interfaces except the VSL and nominated management interfaces, effectively removing the device from the network.

Recovery from Dual-Active Scenario
You are notified of the situation through the CLI, syslog messages, etc., and it is your responsibility to restore the original active virtual switch as part of the Cisco Virtual Switching System. You can restore it by reconnecting or restoring the VSL.

If a VSL flap occurs, the system recovers automatically. Upon a link-up event from any of the VSL links, the previous active supervisor engine that is now in recovery mode reloads itself, allowing it to initialize as the hot-standby supervisor engine. If the peer chassis is not detected because the VSL is down again, the dual-active detection mechanism determines whether or not the peer chassis is active. If the peer chassis is detected, this event is treated as another VSL failure event and the chassis once again enters into recovery mode.

Tuesday, May 7, 2013

Configuring HSRP in Cisco 6500 Switches


Configuring HSRP in Cisco 6500 Switches

Here is a sample configuration for HSRP in Cisco 6500 Series switches for high availability and for VLAN redundancy.

In this example we have two Cisco 6513 switches with Sup720 running IOS 12.2(17d)SXB11. Switch A is in the active state of HSRP and Switch B is in the standby state. A VLAN, VLAN 101, is created. The group (virtual) IP address is 10.2.0.1; 10.2.0.2 is assigned to Switch A and 10.2.0.3 to Switch B.

For Switch 1
1) Create VLAN 101 & assign the IP address
2) Configure the standby IP address.
3) Configure standby preempt. (With preempt, Switch 1 will be the active switch as long as it's available.)
4) Configure standby timer for HSRP update

For Switch 2
1) Create VLAN 101 & assign the IP address
2) Configure the standby IP address.
3) Configure standby priority less than 100 (in this case 50)
4) Configure standby timer for HSRP update

Now let’s look at the configuration

Switch01#sho run interface vlan 101
Building configuration...
Current configuration : 255 bytes
interface Vlan101
ip address 10.2.0.2 255.255.254.0
ip helper-address 10.0.1.100
ip helper-address 10.0.1.101
standby 2 ip 10.2.0.1
standby 2 timers 5 15
standby 2 preempt
end
Switch01#

Switch02#sho run interface vlan 101
Building configuration...
Current configuration : 278 bytes
interface Vlan101
ip address 10.2.0.3 255.255.254.0
ip helper-address 10.0.1.100
ip helper-address 10.0.1.101
standby 2 ip 10.2.0.1
standby 2 timers 5 15
standby 2 priority 50
standby 2 preempt
end
Switch02#

You can use the show standby command when in Privileged Mode to check the status of HSRP. This command tells you which Switch is active and which is standby, as well as a number of other statistics.

Switch01#sho standby vlan 101
Vlan101 - Group 2
Local state is Active, priority 100, may preempt
Hellotime 5 sec, holdtime 15 sec
Next hello sent in 0.908
Virtual IP address is 10.2.0.1 configured
Active router is local
Standby router is 10.2.0.3 expires in 12.676
Virtual mac address is 0000.0c07.ac02
2 state changes, last state change 22w0d
IP redundancy name is "hsrp-Vl101-2" (default)
Switch01#

Switch02#sho standby vlan 101
Vlan101 - Group 2
Local state is Standby, priority 50, may preempt
Hellotime 5 sec, holdtime 15 sec
Next hello sent in 4.185
Virtual IP address is 10.2.0.1 configured
Active router is 10.2.0.2, priority 100 expires in 12.296
Standby router is local
1 state changes, last state change 12w2d
IP redundancy name is "hsrp-Vl101-2" (default)
Switch02#

On the PC, the default gateway should point to 10.2.0.1, not to either switch's own address. This way, if one of the switches goes down, the other will take over.

HSRP is a valuable tool for ensuring high availability and router redundancy.

Monday, May 6, 2013

FWSM Configuration


FWSM Configuration

Step 1: Assigning VLANs to the FWSM

Define the VLANs the FWSM will protect, in switch configuration mode:

Cat6K(config)#vlan 150
Cat6K(config-vlan)#vlan 151
Cat6K(config-vlan)#vlan 152

Step 2: Firewall Group Creation

Create a firewall group for the FWSM to manage:

Cat6K(config)#firewall vlan-group 100 150-152

Attach the firewall group to the FWSM:

Cat6K(config)#firewall module 6 vlan-group 100

Step 3: Accessing the FWSM

Now session into the firewall module:

Cat6K# session slot 6 processor 1

Now type in the password (cisco by default) to get the welcome screen:

FWSM passwd: cisco
Welcome to the FWSM firewall
Type help or '?' for a list of available commands
FWSM>

Type enable to enter privileged mode.

Step 4: Configuring Interfaces

The FWSM supports 100 VLAN interfaces.
Interfaces are created using the following commands.

Creates VLAN interface 150 as an inside interface with security level 100
FWSM(config)# nameif vlan150 inside security100

Creates VLAN interface 152 as an outside interface with security level 0
FWSM(config)# nameif vlan152 outside security0

Step 5: Assigning Addresses

Assign IP addresses to the corresponding interfaces:

FWSM(config)# ip address inside 10.1.1.1 255.255.255.0
FWSM(config)# ip address outside 203.10.47.1 255.255.255.0

Step 6: Assigning ACLs

Configure the corresponding ACLs to define the policies:

FWSM(config)# access-list in_acl permit tcp any host 10.1.1.1 eq 80
FWSM(config)# access-group in_acl in interface inside

Failover

FWSM: Single Chassis Failover

1. Ability to failover to a redundant FWSM located in the same chassis

2. FWSM pairs act in an active-standby relationship

3. A failover VLAN is required to be configured between both FWSMs

4. The failover VLAN is used to send heartbeats between the primary and backup FWSM

5. Failover is stateful - the backup FWSM understands the full state of existing sessions

FWSM: Multiple Chassis Failover

Ability to failover to a redundant FWSM located in a remote chassis.

The setup is the same as for single chassis failover, except no failover cable is required (unlike with the PIX).

Configuring Failover

Pre-requisites

1. Create VLAN interface for failover protocol
2. Assign IP Address to VLAN interface
3. Associate VLAN interface to failover
4. Define firewall role (Primary/Secondary)
5. Define IP address for backup firewall
6. Define failover link (if remote chassis)
7. Force failover

The following are the steps to follow when configuring failover.

Step 1: Define VLAN

Define the VLAN that carries the failover protocol information between FWSMs:

FWSM(config)# nameif vlan500 bkup-link security99

Step 2: Assign IP Address

Assign an IP address to the failover VLAN (on the primary and secondary FWSM respectively):

FWSM(config)# ip address bkup-link 10.1.1.1 255.255.255.0

FWSM(config)# ip address bkup-link 10.1.1.2 255.255.255.0

Step 3: Define Failover VLAN

Define VLAN 500 as the failover VLAN (issued on each FWSM):

FWSM(config)# failover lan interface bkup-link

FWSM(config)# failover lan interface bkup-link

Step 4: Define Role

Define the role of each FWSM (primary on one unit, secondary on the other):

FWSM(config)# failover lan unit primary

FWSM(config)# failover lan unit secondary

Step 5: Define Backup IP Address

Define the IP address of the backup FWSM

FWSM(config)# failover ip address bkup-link 10.1.1.2

Step 6: Define Failover Link

Define the link that will be used for failover

FWSM(config)# failover link bkup-link

Step 7: Force Failover

Force failover on the FWSM by issuing the failover command:

FWSM(config)# failover

With these commands you will successfully establish failover connectivity.

Confirm the failover configuration:

FWSM(config)# show failover
Failover On
Failover unit Primary
Failover LAN Interface bkup-link
Reconnect timeout 0:00:00
Poll frequency 15 seconds
This host: Primary - Active
Active time: 29925 (sec)
Interface outside (10.11.1.2): Normal
Interface inside (10.2.1.1): Normal
Other host: Secondary - Standby
Active time: 285 (sec)
Interface outside (10.11.1.3): Normal
Interface inside (10.2.1.2): Normal
Stateful Failover Logical Update Statistics
Link : Unconfigured.