![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRpIYCQ_ylxrxCEZq2UqaZDYAHNBRdTqYm63rfYfiAxR2cShSnTdGOtmU1nu6UdK8_zvRUL0VgpubycGjrKvSAj5YpMjmzN0yfGK-vvLlaHYTURLbIoGFbzMgxtFnTWbvb-xoeTw98IzEB/s320/6500.bmp)
The 6500 is a modular chassis switch, capable of forwarding up to 400 million packets per second and powering up to 471 devices at 15.4 W each.
A 6500 comprises a chassis, power supplies, one or two supervisors, line cards and service modules. A chassis may have 3, 4, 6, 9 or 13 slots and takes one or two modular power supplies. The supervisor engine provides centralised forwarding information and processing, the line cards provide port connectivity and the service modules allow devices such as firewalls to be integrated within the switch.
Supervisor
Supervisor 32
The Supervisor 32 (or Sup32) is a classic-only supervisor engine, comprising an MSFC2A and a PFC3B.
The MSFC2A is the software portion of the supervisor. This particular version comprises just a route processor, with the switch processor being part of the base board (similar to the Supervisor 2).
The PFC3B is the hardware component. Feature-wise, it is identical to the PFC3B on the Supervisor 720, and thus the majority of hardware features available in the 720 are available in the Supervisor 32.
Neither MSFC nor PFC are optional on the Supervisor 32.
Presently, the Supervisor 32 comes in two options:
- Supervisor 32 8-GE - 8xSFP 1GE Uplinks + 1 10/100/1000 management port (9 switchports total)
- Supervisor 32 10-GE - 2xXenpak 10GE Uplinks + 1 10/100/1000 management port
Note: The management ports are only nominally such. In practice, they are just regular switchports.
Supervisor 720
The Supervisor 720 (or Sup720) is a fabric-enabled supervisor engine, comprising an MSFC3 and a PFC3B. It supports all flavours of line card, including Classic, cef/dcef256 and cef/dcef720.
The MSFC3 is the software portion of the supervisor. Version 3 of the MSFC includes both the route and switch processors, and thus handles all software processing of the supervisor (unlike the Supervisor 32 and Supervisor 2 where this functionality is on the base board).
The PFC3B is the hardware component. Feature-wise, it is identical to the PFC3B on the Supervisor 32, and thus the majority of hardware features available in the 720 are available in the Supervisor 32.
Original Supervisor 720 units shipped with the PFC3A. The PFC3B addressed a number of limitations, notably adding hardware MPLS support, an improved NetFlow hash, ACL counters, 4K ACL labels (as opposed to 512) and an increase in the ACL LOU registers from 32 to 64.
Neither MSFC nor PFC are optional on the Supervisor 720.
Unlike the Supervisor 32, the Supervisor 720 has the option of either PFC3B or PFC3BXL.
Port Options
The 6500 supports four port configuration options. A port may either be:
- An access interface - This port type carries a single VLAN and is typically used in wiring closets.
- A trunk interface - This may carry multiple VLANs. A port with a voice VLAN is technically a trunk link. Either ISL or 802.1Q may be used to carry VLANs.
- A routed interface - This port has an IP address and is used to make Layer 3 routing decisions, as on a router. Importantly, these interfaces consume an internal VLAN out of the 6500's available pool (see the sketch after this list).
- A subinterface - Used to carry multiple virtual links across a single physical link.
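To illustrate that last point about routed interfaces, here is a minimal Python sketch of why converting ports to routed mode depletes the internal VLAN pool. The pool range, class and interface names are my own assumptions for illustration, not the 6500's actual allocator.

```python
# Hypothetical sketch: each routed interface silently consumes one
# internal VLAN from the chassis pool. Range and names are illustrative.

class InternalVlanPool:
    def __init__(self, first=1006, last=4094):
        # Assumed range: extended-range VLANs used for internal allocation.
        self.available = list(range(first, last + 1))

    def allocate(self):
        if not self.available:
            raise RuntimeError("internal VLAN pool exhausted")
        return self.available.pop(0)

pool = InternalVlanPool()

# Converting four ports to routed mode costs four internal VLANs:
routed = {f"Gi1/{port}": pool.allocate() for port in range(1, 5)}
print(routed)  # {'Gi1/1': 1006, 'Gi1/2': 1007, 'Gi1/3': 1008, 'Gi1/4': 1009}
```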
Operating System
The 6500 currently supports three operating systems: CatOS, Native IOS and Modular IOS.
CatOS
CatOS is supported for Layer 2 (switching) operations only. To perform routing (Layer 3) operations, the switch must be run in hybrid mode. In this case, CatOS runs on the Switch Processor (SP) portion of the MSFC and IOS runs on the Route Processor (RP). To make configuration changes, the user must manually switch between the two environments. While CatOS does have some functionality missing from IOS, it is generally considered obsolete compared to running a switch in Native Mode.
Native IOS
Cisco IOS can be run on both the SP and RP. In this instance, the user is unaware of where a command is being executed on the switch, even though technically two IOS images are loaded -- one on each processor. This is the default shipping mode for Cisco products and enjoys support for all new features and line cards.
Modular IOS
Modular IOS (or, more properly, IOS with Software Modularity) is a version of Cisco IOS that employs a modern UNIX-based kernel to overcome many of the limitations of IOS. See Modular IOS for a breakdown of the improvements brought in this software release.
Control and Data Planes
The 6500 is a hardware switch: all basic forwarding functions are performed in hardware, through CEF, by the PFC. To obtain Layer 3 information, the MSFC first builds a routing table from the running routing protocols in the Control Plane. This is called the Routing Information Base, or RIB. It is then converted to an optimised format, called the Forwarding Information Base (FIB), and copied to the PFC TCAM in the Data Plane.
On ingress, a key is made from the incoming packet and the TCAM is queried in parallel. The longest (most specific) match contains a pointer to the adjacency table, which provides the L2 rewrite information and, if applicable, any MPLS information to push on to or pop from the packet.
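The following is a minimal Python sketch of the lookup just described. A real PFC compares the key against every TCAM entry simultaneously in hardware; here the longest-prefix match is simulated in software, and all prefixes, MAC addresses and port names are made up.

```python
import ipaddress

# Illustrative FIB: prefix -> adjacency index (entries are made up)
fib = {
    ipaddress.ip_network("0.0.0.0/0"): 0,
    ipaddress.ip_network("10.0.0.0/8"): 1,
    ipaddress.ip_network("10.1.1.0/24"): 2,
}

# Adjacency table: L2 rewrite info per next hop (values are made up)
adjacency = [
    {"egress": "Gi1/1", "dst_mac": "00:00:0c:aa:bb:01"},
    {"egress": "Gi2/1", "dst_mac": "00:00:0c:aa:bb:02"},
    {"egress": "Gi3/1", "dst_mac": "00:00:0c:aa:bb:03"},
]

def lookup(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    # The TCAM evaluates every entry at once; we emulate that by taking
    # the matching prefix with the longest mask.
    best = max((p for p in fib if dst in p), key=lambda p: p.prefixlen)
    return adjacency[fib[best]]

print(lookup("10.1.1.5"))  # the /24 wins -> Gi3/1 rewrite information
```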
The PFC is a shared memory into which ACLs, routes, NetFlow entries and so on are all placed. It is therefore important to provision it appropriately when ordering a 6500.
MSFC
The MSFC, or Multilayer Switch Feature Card, provides software functionality, largely involving building the routing table and maintaining Layer 2 and 3 protocols.
The MSFC itself really comprises two systems, each with its own DRAM, CPU and bootflash: the Route Processor (RP) and the Switch Processor (SP).
Switch Processor
The SP handles the initial bootup of the switch when running native or modular IOS. In this case, the SP portion of the IOS image is first copied from bootflash to the SP's DRAM and booted. Upon completion, the SP copies the RP portion of IOS to the RP's DRAM. The SP then hands control of the boot process to the RP, and from here on the user interface is controlled from the RP.
During normal operation, the SP handles Layer 2 protocols such as ARP and Spanning Tree. While the SP's memory can be upgraded, doing so is largely redundant, as most L2 information is now stored in the PFC.
Route Processor
The RP handles the general UI of the 6500, handing control over to the SP for L2-specific functions. When running native or modular IOS, it is generally not helpful to think of the 6500 as two separate processors, but instead as a parallel machine. The RP's bootflash is not used in native or modular IOS and therefore requires no code to be placed on it.
During normal operation, the RP handles routing protocols (such as OSPF, EIGRP and BGP) and builds the RIB in software before programming the FIB into the PFC TCAM. Additionally, any QoS commands not supported in hardware will be handled by the RP.
Generally, out-of-memory errors on the 6500 are addressed by upgrading the RP DRAM. Running the SP and RP with different amounts of DRAM is a supported configuration.
PFC
The PFC, or Policy Feature Card, is where hardware forwarding is performed on the 6500. The PFC3B is the only option for the Supervisor 32 and the default for the Supervisor 720. This card supports up to 239K (where K = 1024) routes, ACLs and so on as a shared memory, which is typically adequate for campus environments running default routes.
The PFC3BXL is also an option for the Supervisor 720, allowing up to 1M IPv4 routes, ACLs and so on as a shared memory. This is the PFC TCAM size required to hold full Internet routing tables.
Newer line cards support the PFC3C, which allows up to 96K MAC addresses as well as some additional undisclosed enhancements.
Note: Each PFC variety has an equivalent DFC option. In the case of mismatching DFCs and PFCs, the chassis will drop to the lowest common denominator (i.e. a chassis with PFC3B and a DFC3BXL will run at PFC3B throughout).
Methods of Operation
The 6500 has five major modes of operation. Classic, cef256, dcef256, cef720 and dcef720.
These make use of the Switch Fabric and the Classic Bus. All 6500 chassis have both of these, but the mode of operation depends on if, how and when each of them is used.
Classic Bus
The 6500 classic architecture provides 32 Gbps of centralised forwarding performance. The design is such that an incoming packet is first queued on the line card, then placed on to the global data bus (dBus), which copies it to all other line cards, including the supervisor. The supervisor then looks up the correct egress port, access lists, policing and any relevant rewrite information on the PFC. This result is placed on the result bus (rBus) and sent to all line cards. Those line cards for which the data is not required terminate processing; the others continue forwarding and apply the relevant egress queuing.
The speed of the classic bus is 16 Gbps full duplex (hence the 32 Gbps), and it is the only supported way of connecting a Supervisor 32 engine to a 6500.
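As a rough illustration of the dBus/rBus sequence, here is a toy Python model. The class names and the lookup stub are mine; a real PFC lookup involves the TCAM, ACLs and policers described elsewhere in this article.

```python
class LineCard:
    def __init__(self, name):
        self.name, self.held = name, []
    def buffer(self, pkt): self.held.append(pkt)
    def flush(self, pkt): self.held.remove(pkt)
    def transmit(self, result): print(self.name, "sends via", result["port"])

class Supervisor:
    def pfc_lookup(self, pkt):
        # Stub: a real PFC consults the TCAM for egress port, ACLs,
        # policing and rewrite information.
        return {"egress_card": pkt["dst_card"], "port": pkt["dst_port"]}

def classic_bus_forward(pkt, cards, sup):
    for card in cards:            # dBus: every card receives a copy
        card.buffer(pkt)
    result = sup.pfc_lookup(pkt)  # lookup on the supervisor's PFC
    for card in cards:            # rBus: result broadcast to every card
        if card.name == result["egress_card"]:
            card.transmit(result)
        else:
            card.flush(pkt)       # non-egress cards terminate processing

cards = [LineCard("slot1"), LineCard("slot2")]
classic_bus_forward({"dst_card": "slot2", "dst_port": "Gi2/1"}, cards, Supervisor())
```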
cef256
This method of forwarding was first introduced with the Supervisor 2 engine. When used in combination with a Switch Fabric Module, each line card has an 8 Gbps connection to the switch fabric in addition to its connection to the classic bus. In this mode, assuming all line cards have a switch fabric connection, an ingress packet is queued as before and its headers are sent along the dBus to the supervisor. They are looked up in the PFC (including ACLs etc.) and the result is placed on the rBus. The ingress line card takes this information and forwards the data to the correct line card across the switch fabric. The main advantage here is that there is a dedicated 8 Gbps connection between the line cards. The receiving line card queues the egress packet before sending it from the desired port.
The '256' is derived from a 6509 chassis using 2x8 Gbps fabric ports on 8 slots: 2 x 8 Gbps x 8 slots = 128 Gbps, doubled to 256 Gbps because the switch fabric is full duplex.
dcef256
dcef256 uses distributed forwarding. These line cards have 2x8 Gbps connections to the switch fabric and no classic bus connection.
Unlike in the previous examples, the line card holds a full copy of the supervisor's routing tables locally, as well as its own L2 adjacency table (i.e. MAC addresses). This eliminates the need for any connection to the classic bus, or any requirement to use the shared resource of the supervisor. In this instance, an ingress packet is queued, but its destination is looked up locally. The packet is then sent across the switch fabric and queued in the egress line card before being sent.
cef720
This mode of operation is similar to cef256, except that there are now 2x20 Gbps connections to the switch fabric and no need for a Switch Fabric Module (this is now integrated into the supervisor). It was first introduced with the Supervisor Engine 720.
In addition, the concept of a classic bus architecture on the line card has gone, replaced instead by a fabric ASIC controlling the forwarding from ports to the fabric.
The '720' is derived from a 6509 chassis using the 2x20 Gbps connections (40 Gbps per slot) on 9 slots: 40 Gbps x 9 slots = 360 Gbps, doubled to 720 Gbps because the switch fabric is full duplex. The reason we use 9 slots for this calculation, instead of the 8 used for cef256, is that we no longer need to waste a slot on the Switch Fabric Module.
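Both marketing numbers fall out of the same arithmetic, sketched below in Python (the function name is my own):

```python
def fabric_capacity_gbps(gbps_per_channel, channels_per_slot, slots):
    # Marketing figures count both directions of the full-duplex fabric,
    # hence the final doubling.
    return gbps_per_channel * channels_per_slot * slots * 2

print(fabric_capacity_gbps(8, 2, 8))   # cef256: 256 (8 usable slots in a 6509)
print(fabric_capacity_gbps(20, 2, 9))  # cef720: 720 (all 9 slots usable)
```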
dcef720
This mode of operation acts identically to cef720, except that we now have a DFC (Distributed Forwarding Card), which is able to make forwarding decisions locally (like the dcef256 architecture).
Mixing Modes
Depending on the mix of cards installed, the classic bus runs in one of three modes: Bus/Flowthrough, Compact and Truncated.
Bus/Flowthrough is the classic means of sending packets and is used for communication between two classic cards. In this case, the whole packet is sent along the dBus in 16-byte cycles, with a wait cycle once sending is complete. Centralised box performance in this mode is 15 Mpps in the best case (64-byte packets).
Compact is used when a chassis has all fabric-enabled cards. In this case, a special 32-byte header is sent along the classic bus containing information such as source/destination IP, CoS, etc. Because we have a guaranteed header size (2 clock cycles), there is no need to send wait cycles and we are guaranteed 30 Mpps regardless of packet size.
Truncated is used when a fabric chassis has a classic card installed. As classic cards will generate bus errors on compact frames (which are not valid frames to them), we have to send full 64-byte packets (with a full L2/L3 header and a null payload). In this case, we must also send a wait cycle for the classic modules. Performance is reduced to 15 Mpps best case (64-byte packets, when fabric cards communicate).
Note: Distributed cards are unaffected by the above modes, as they only ever use the switch fabric for forwarding data and never use the classic bus for header lookups. In this case, 48 Mpps can be gained per slot, giving (for example) 432 Mpps for the box on a fully loaded 6509 chassis.
Power Redundancy Options
The 6500 supports dual power supplies for redundancy. These may be run in one of two modes: redundant or combined.
Redundant Mode
When running in Redundant Mode, each power supply provides approximately 50% of its capacity to the chassis. In the event of a failure, the unaffected power supply then provides 100% of its capacity and an alert is generated. As a single supply was already sufficient to power the chassis, there is no interruption to service in this configuration. This is the default and recommended way to configure power supplies.
Combined Mode
In combined mode, each power supply provides approximately 83% of its capacity to the chassis. This allows for greater utilisation of the power supplies and potentially increased PoE densities.
In the event of a failure, we power down all devices except the supervisor. During this time, there will be a temporary network outage while we return power to the system. The order in which we do this is as follows:
- First we power up service modules from the top down
- Then we power up line cards from the topmost slot to the bottommost. We do _not_ permit PoE at this stage.
- Next we power up PoE from the highest line card and the highest port (i.e. line card 0/port 0) down through to the lowest.
We go through the above steps until we have hit the capacity of the remaining power. Normally, a single power supply will be able to power all the service modules and line cards, but will not provide the PoE densities required.
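The recovery ordering can be sketched as a simple greedy allocation in Python. The wattages, device names and capacity below are invented purely for illustration:

```python
# Hedged sketch of combined-mode recovery: service modules first, then
# line cards, then PoE ports until the remaining budget runs out.
# All wattages and names are invented for illustration.

def power_up(capacity_w, service_modules, line_cards, poe_ports):
    powered = []

    def try_power(name, watts):
        nonlocal capacity_w
        if watts <= capacity_w:
            capacity_w -= watts
            powered.append(name)
            return True
        return False

    for mod in service_modules:   # 1. service modules, top down
        try_power(mod, 100)
    for card in line_cards:       # 2. line cards, topmost slot first (no PoE yet)
        try_power(card, 80)
    for port in poe_ports:        # 3. PoE, highest card/port first
        if not try_power(port, 15.4):
            break                 # remaining capacity exhausted
    return powered

ports = [f"Gi2/{p}" for p in range(1, 49)]
print(len(power_up(500, ["FWSM"], ["slot2", "slot3"], ports)))
# 18: the module and both cards power up, but only 15 of 48 PoE ports fit
```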
QoS
When performing QoS, there are three places where the 6500 gets involved: ingress queuing on the line card (normally a regular queue plus a priority queue), rate limiting and policing on the PFC, and finally egress queuing on the egress line card.
Port Trust
Generally, a port will be set to a certain trust:
- Untrusted - all QoS values are ignored and the internal DSCP is set to a default (normally 0)
- Trust-CoS - The CoS value will be remapped to internal DSCP and all other fields ignored
- Trust-IPP - The IPP value will be remapped to internal DSCP and all other fields ignored
- Trust-DSCP - The DSCP value will be copied to internal DSCP and all other fields ignored
By default the egress packet will inherit the internal DSCP value in all fields. We can, however, preserve the original settings should this be desired.
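A rough Python sketch of how trust determines the internal DSCP is below. The multiplications reflect the usual default maps (DSCP = value x 8); the function itself is my own illustration, not a Cisco API.

```python
# Illustrative mapping of port trust to internal DSCP.
# The x8 maps are the common defaults; actual maps are configurable.

def internal_dscp(trust, cos=0, ipp=0, dscp=0):
    if trust == "untrusted":
        return 0            # default internal DSCP, normally 0
    if trust == "trust-cos":
        return cos * 8      # default CoS-to-DSCP map: 0, 8, 16, ... 56
    if trust == "trust-ipp":
        return ipp * 8      # IPP occupies the top 3 bits of the ToS byte
    if trust == "trust-dscp":
        return dscp         # copied through unchanged
    raise ValueError(trust)

print(internal_dscp("trust-cos", cos=5))     # 40
print(internal_dscp("trust-dscp", dscp=46))  # 46
```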
Packet Flow
An ingress packet will have its CoS or ToS value copied to internal DSCP (see above) and then be queued in either the standard or the priority queue of that line card. At this point, it will be ingress- and egress-policed by ACLs on the PFC (and either forwarded, marked or dropped) based on the burst (see Token Buckets) and burst rate parameters. We then queue on the egress card, either in a standard queue (chosen by mapped CoS) or in the priority queue. In the standard queue, a scheduling algorithm determines de-queuing (e.g. Weighted Round Robin, Shaped Round Robin, etc.).
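Since policing here is token-bucket based, a minimal single-rate token bucket sketch may help. The parameters and actions are illustrative; hardware policers differ in implementation detail.

```python
import time

# Minimal single-rate token bucket, conceptually like a PFC policer.

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0   # refill rate in bytes/second
        self.burst = burst_bytes     # bucket depth (the "burst" parameter)
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def conforms(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True              # in profile: forward (or re-mark)
        return False                 # out of profile: drop (or mark down)

policer = TokenBucket(rate_bps=1_000_000, burst_bytes=2_000)
print([policer.conforms(1_500) for _ in range(3)])
# [True, False, False]: the first packet fits the burst, the rest exceed it
```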
Line Card Notation
All queuing is performed on the individual line cards. When looking at a line card, there is a standard notation to determine what queuing features you will get, in the form: XpYqZt, where:
- X is the number of priority queues (1 or 0)
- Y is the number of standard queues
- Z is the number of queuing thresholds
For example, 1p4q8t would be 1 priority queue, 4 regular queues and 8 thresholds.
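For illustration, here is a small Python helper (my own, not a Cisco tool) that unpacks the notation:

```python
import re

def parse_queue_notation(spec):
    # "1p4q8t" -> 1 priority queue, 4 standard queues, 8 thresholds.
    # The priority part is optional, covering cards with 0 priority queues.
    m = re.fullmatch(r"(?:(\d)p)?(\d)q(\d+)t", spec)
    if not m:
        raise ValueError(f"not XpYqZt notation: {spec!r}")
    return {
        "priority_queues": int(m.group(1) or 0),
        "standard_queues": int(m.group(2)),
        "thresholds": int(m.group(3)),
    }

print(parse_queue_notation("1p4q8t"))
# {'priority_queues': 1, 'standard_queues': 4, 'thresholds': 8}
```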
Note: today the 6500 does not support DSCP-to-queue mapping (only CoS-to-queue), with the exception of the 6708.
Online Insertion & Removal
OIR is a feature of the 6500 allowing you to hot swap most line cards without first powering down the chassis. The advantage of this is that one may perform an in-service upgrade. However, before attempting this, it is important that one understands the process of OIR and how it may still require a reload.
To prevent bus errors, the chassis has three pins of differing lengths in each slot, which correspond with the line card. Upon insertion, the longest of these makes contact first and stalls the bus (so as to avoid corruption). As the line card is pushed in further, the middle pin makes the data connection. Finally, the shortest pin removes the bus stall and allows the chassis to continue operation.
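A toy Python model of that sequence, with my own simplified error outcomes, is below:

```python
# Toy model of the three-pin OIR sequence. Pin names and outcomes are
# simplifications for illustration only.

EXPECTED = ["stall", "data", "release"]  # longest pin first, shortest last

def insert_card(pins_in_contact_order):
    if pins_in_contact_order == EXPECTED:
        return "card online, bus stall released"
    if "release" not in pins_in_contact_order:
        # e.g. card seated crookedly, or inserted too slowly: the bus
        # stays stalled and the chassis eventually reloads.
        return "bus stalled too long -> chassis reload"
    # Out-of-order or skipped contact corrupts the bus.
    return "bus error -> chassis reload"

print(insert_card(["stall", "data", "release"]))  # clean insertion
print(insert_card(["stall", "data"]))             # release pin never made contact
```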
However, if any part of this operation is skipped, errors will occur (resulting in a stalled bus and ultimately a chassis reload). Common problems include:
- Line cards being inserted incorrectly (thus making contact with only the stall and data pins and never releasing the bus)
- Line cards being inserted too quickly (and thus the stall removal signal is not received)
- Line cards being inserted too slowly (and thus the bus is stalled for too long and forces a reload).
You are therefore strongly advised not to perform OIR outside of maintenance windows. It is also for the above reasons that OIR is commonly referred to as 'On Insertion, Reload'.