Fast Switching Technology

What is Switched Ethernet?


There are two big LAN killers: the increased demands that new technologies, such as multimedia and videoconferencing, place on available bandwidth; and the trend towards distributed computing, driven by mature implementations of the client/server model of corporate computing. With a high-speed switch backplane, it is possible to have all ports communicating at wire speed with minimal latency and low packet loss.

A switch is an intranetwork device designed to increase performance through LAN segmentation. Switching uses microsegmentation to isolate traffic: when a packet arrives at the switch, its destination address is read and the packet is sent directly to the relevant port - not to all ports, as it would be with a repeater.

For networks experiencing a shortage of bandwidth, the introduction of a 10Mbps switch will only move the bottleneck from the hub to the 10Mbps server pipe. A minimal improvement will be apparent, due mainly to the decrease in the number of packet collisions.

To significantly increase performance it is necessary to open up the pipe to the server. In the past this was achieved by segmenting the network and installing multiple NICs in the server, or putting the server on a high-speed backbone such as FDDI. Vendors have now integrated a Fast Ethernet downlink into the switch for connection to either the server or backbone. Depending on your circumstances, this can result in a seven to eight-fold increase in performance.
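The reasoning above can be sketched as a back-of-envelope throughput model. This is an illustrative simplification with made-up numbers (the function name and parameters are not from the original text): it assumes each client's throughput is limited by the narrowest link it must share, ignoring protocol overhead and collision losses.

```python
# Illustrative model only: per-client throughput to a server, assuming the
# narrowest shared link is the bottleneck.

def per_client_mbps(clients, client_link, server_link, shared_media):
    """Approximate Mbps available to each of `clients` stations.

    On a repeater/hub, all clients contend for one shared collision
    domain; on a switch, each client has a dedicated link but still
    shares the server's uplink.
    """
    pool = client_link if shared_media else server_link
    return min(client_link, pool / clients)

# Ten clients on 10Mbps links:
hub       = per_client_mbps(10, 10, 10, shared_media=True)    # shared hub
switch10  = per_client_mbps(10, 10, 10, shared_media=False)   # 10Mbps server pipe
switch100 = per_client_mbps(10, 10, 100, shared_media=False)  # Fast Ethernet downlink
```

In this toy model, a 10Mbps switch alone leaves each client at the same 1Mbps as the hub, because the bottleneck has merely moved to the server pipe; a 100Mbps downlink lifts each client to its full 10Mbps, in line with the seven to eight-fold improvement the text describes once real-world overheads are taken into account.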

For larger workgroups with multiple hubs operating as a single stack and sharing 10Mbps, it is possible to break up the stack and cascade each repeater off a dedicated 10Mbps pipe on the switch. After identifying your power users, you can move them directly to a port on the switch. While waiting for ATM's arrival, network administrators are increasingly basing their backbones on Fast Ethernet and ignoring FDDI. But using 100Mbps Ethernet as a backbone technology has both advantages and disadvantages.

As a technology it is easier and cheaper to implement than FDDI and will run on both multimode fiber and category 5 cabling. Links can also be made directly to servers, hubs/switches and power users without the need for costly hardware or recabling. However, Fast Ethernet is not suitable for a campus-wide backbone as it suffers from hop and distance limitations, as well as not providing the redundancy of FDDI and ATM.

How Fast Ethernet is implemented depends on the structure of your environment, the location of users and servers and the use of virtual LANs, which allow users to be associated with a specific workgroup regardless of physical location. This is particularly important if you have client/server databases being accessed throughout your organization, or staff in various locations sharing large amounts of data.

The Fast Ethernet backbone can be based on either a shared or switched environment. Fast Ethernet is best deployed at the heart of a collapsed backbone architecture with servers concentrated in a server farm. Each server should have a high-speed pipe to a dedicated Fast Ethernet port. High-speed links over either fiber or copper then distribute the data to various organizational workgroups and power users. Even if ATM doesn't make it to the desktop you will already have an effective and scalable means of distributing data. By basing your network around high-speed switching you will optimize performance and ease the migration process to an ATM backbone.

Cut-through and Modified Cut-through Switching
There are three different packet-forwarding techniques - cut-through, modified cut-through and store-and-forward - which differ in latency: the time it takes for a frame to travel through the switch.

To minimize latency, cut-through switching reads only the data packet header and then immediately forwards the packet to its destination port. Because it usually reads only the first 14 bytes (the frame header), cut-through switching cannot detect runts, collision fragments or CRC errors. Modified cut-through switching accepts slightly higher latency: the packet is temporarily held in a buffer long enough to rule out runt packets. Runts - fragments produced by CSMA/CD collisions - are the most common type of error packet, and forwarding them onto the network wastes bandwidth. Modified cut-through switching therefore typically reads the first 64 bytes (the minimum valid frame size) before forwarding, which is enough to identify and filter out all runts.

Modified cut-through switching provides reasonably low latency while eliminating the bandwidth wasted on runt packets. Under heavy network load, however, the latency advantage of cut-through techniques shrinks, because packets must be buffered anyway while they wait for a busy output port.
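The difference between the three techniques comes down to how many bytes the switch reads before it starts forwarding. A minimal sketch (the function and its mode names are illustrative, not from the text; the byte counts are the 14-byte header and 64-byte minimum frame size given above):

```python
# Decision points for the three forwarding strategies described in the text.
HEADER_BYTES = 14     # dest MAC (6) + src MAC (6) + type/length (2)
MIN_FRAME_BYTES = 64  # minimum valid Ethernet frame; anything shorter is a runt

def forward_point(mode, frame_len):
    """Return how many bytes the switch reads before forwarding a frame."""
    if mode == "cut-through":
        return HEADER_BYTES                      # forward as soon as the header arrives
    if mode == "modified-cut-through":
        return min(MIN_FRAME_BYTES, frame_len)   # buffer enough to spot runts
    if mode == "store-and-forward":
        return frame_len                         # read the whole frame, then verify it
    raise ValueError("unknown mode: %s" % mode)
```

For a full-size 1500-byte frame, cut-through commits after 14 bytes, modified cut-through after 64, and store-and-forward only after all 1500 - which is exactly the latency ordering the text describes.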

Store-and-Forward Switching
To keep the network efficient and reliable, store-and-forward switching is the best choice. Store-and-forward provides thorough error checking, filtering out CRC errors, runts and collision fragments.

Store-and-forward reads the entire data packet, verifies it and only then sends it to the destination port. No bad packets reach the network, and packets can be switched between ports running at different speeds. Store-and-forward latency equals the time needed to receive a complete packet, so you will need to take this latency into account when testing performance.
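The checks described above can be sketched as follows. This is a simplified illustration, not a real switch implementation: the frame layout is reduced to "payload plus 4-byte FCS", and Python's zlib.crc32 stands in for the Ethernet CRC-32.

```python
import zlib

def store_and_forward(frame):
    """Buffer a whole frame, verify length and CRC, and return the
    payload only if the frame is valid (illustrative sketch)."""
    if len(frame) < 64:                  # runt: shorter than a minimum frame
        return None
    payload, fcs = frame[:-4], frame[-4:]
    # Ethernet transmits the FCS least-significant byte first.
    if zlib.crc32(payload).to_bytes(4, "little") != fcs:
        return None                      # CRC error: drop the frame
    return payload

# Build a minimal valid "frame": 60 bytes of payload plus a 4-byte CRC.
payload = bytes(60)
frame = payload + zlib.crc32(payload).to_bytes(4, "little")
```

A valid frame passes through; a frame with a corrupted payload byte fails the CRC check, and anything under 64 bytes is dropped as a runt - exactly the filtering the text attributes to store-and-forward switching.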

Full-duplex
Full-duplex is a transmission method that effectively doubles the bandwidth of a link between a network card and a switch. It disables the collision-detection mechanism, so the card and the switch can transmit and receive concurrently at full wire speed on each of the transmit and receive paths. A full-duplex segment can use the same Category 5 UTP cable used by 10BaseT Ethernet and Fast Ethernet.

Although full duplex has not yet been standardized, an effort to complete a standard within IEEE 802.3 is currently underway.

Virtual LAN
The rise of switched internetworked architectures is mirrored by the growth of Virtual LANs (VLANs). To define a VLAN, it may be helpful to compare it with the conventional physical LAN which consists of a common physical medium to which workstations or similar entities are connected. An Ethernet network with bridges or routers to extend its reach is an example of a physical LAN.

A VLAN, on the other hand, is a logical collection of user stations. These stations can be physically dispersed, perhaps in different buildings or locales, but are ultimately connected via hubs or switches to accomplish assigned tasks.

Several Ethernet segments may be interconnected, with individual members of these segments comprising a VLAN. Each station receives broadcast transmissions sent by the other user stations on the same VLAN, in a manner identical to user stations on a physical LAN.
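The broadcast behavior described above can be shown in a few lines. The port-to-VLAN table below is hypothetical, invented purely for illustration: a broadcast entering on one port is flooded only to the other ports in the same VLAN, regardless of where those ports physically sit.

```python
# Hypothetical port/VLAN assignments for a small switch.
vlan_of_port = {1: "eng", 2: "eng", 3: "sales", 4: "eng", 5: "sales"}

def broadcast_ports(ingress_port):
    """Ports that receive a broadcast arriving on ingress_port:
    every other port in the same VLAN, and no port outside it."""
    vlan = vlan_of_port[ingress_port]
    return sorted(p for p, v in vlan_of_port.items()
                  if v == vlan and p != ingress_port)
```

A broadcast from port 1 ("eng") reaches ports 2 and 4 but never the "sales" ports, just as a broadcast on a physical LAN reaches only stations on that LAN.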

Web Management vs SNMP
Network management encompasses many things. Normally, the first products that come to mind are SNMP-based network management platforms such as HP's OpenView. These also integrate vendor-specific configuration applications, making device configuration a matter of locating a device on the network map and automatically launching a detailed configuration application. The disadvantage of these systems is their bulk and complexity. Using such a product means either sitting in front of the management workstation itself, or relying on X Windows to access the application remotely. This leaves little latitude for a network manager to access the management application from outside the network control center.

The major benefit of Web-based network management (WBM) is that it simplifies network management. WBM merges Web functionality with network management to give MIS staff capabilities beyond traditional, complicated tools such as SNMP consoles. MIS staff can easily monitor and control enterprise networks with any Web browser at any node. Web-based management can be extremely valuable, providing access to powerful network resources. Newer Web management applications use Java applets to varying degrees, instead of simple HTML forms and back-end processing. Primarily an interface enhancement, a Java applet can graphically represent data or present user-manipulated controls and widgets.

Naturally, Web-based network management is fundamentally insecure. But it comes up smelling like roses when you compare it with security on existing network management systems - particularly SNMP. Although later versions of the SNMP protocol are in the process of standardization and acceptance, all current SNMP products rely on SNMP version 1, whose only security measure is a community string carried in cleartext in every request packet.
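To see why the community string is so weak a protection, consider how an SNMPv1 message is laid out: the community string is BER-encoded in cleartext near the start of every packet, so anyone who can read the wire can read it. The sketch below hand-builds a minimal message prefix (not a real capture) and extracts the string; it assumes short-form BER lengths for simplicity.

```python
# Illustration of SNMPv1's cleartext community string. An SNMPv1 message
# begins: SEQUENCE { version INTEGER, community OCTET STRING, ... }.

def community_string(packet):
    """Pull the community string out of an SNMPv1 message
    (assumes short-form BER lengths; sketch, not a full parser)."""
    body = packet[2:]              # skip outer SEQUENCE tag (0x30) + length
    body = body[2 + body[1]:]      # skip version INTEGER: tag 0x02, length, value
    assert body[0] == 0x04         # community is an OCTET STRING
    length = body[1]
    return body[2:2 + length].decode()

# Minimal message prefix: SEQUENCE { version=0 (SNMPv1), community="public" }
msg = bytes([0x30, 0x0B, 0x02, 0x01, 0x00, 0x04, 0x06]) + b"public"
```

Anyone sniffing this packet recovers "public" directly - no decryption needed - which is why the text calls the community string SNMPv1's only, and inadequate, security measure.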

Contrary to popular belief, the Web might present a more secure network management paradigm. Web services already offer the ability to establish encrypted sessions, through S-HTTP. Meanwhile, new technologies such as IPsec promise to deliver secure IP communications at any level. At the same time, increased use of digital certificates and network-wide directory services will aid authentication and access control. The future of network management security may follow the path of secure commerce applications.