Networking

Course Introduction
Pre-course Assessment

Module 1: Ethernet Overview
1.1: What is Ethernet?
1.2: Ethernet Development
Module 1 Exercise

Module 2: Ethernet Basics
2.1: Transmitting and Receiving Data
2.2: Ethernet and the OSI Reference Model
2.3: The Physical Layer
2.4: The MAC Sublayer
2.5: Repeaters, Switches and Bridges
Module 2 Exercise

Module 3: Ethernet Operations
3.1: The CSMA/CD Algorithm
3.2: Maximum Distance between Stations
3.3: Exponential Backoff Algorithm
3.4: Collisions and Performance Considerations
3.5: Network Segmentation
Module 3 Exercise

Module 4: Ethernet Frame Composition
4.1: Basic Ethernet Frame Composition
4.2: Preamble/SFD
4.3: Destination Address and Source Address Fields
4.4: Type/Length Field
4.5: Data Field
4.6: Frame Check Sequence
4.7: Interframe Gap
Module 4 Exercise

Module 5: Ethernet Frame Types
5.1: Overview
5.2: Ethernet II Frame
5.3: IEEE 802.3 Ethernet Frame with IEEE 802.2 LLC Header
5.4: IEEE 802.3 Frame with SNAP Encapsulation
5.5: Novell 802.3 Frame
Module 5 Exercise

Module 6: Full-duplex Ethernet
6.1: Full-duplex and Half-duplex Compared
6.2: The Benefits of Full-duplex Ethernet
6.3: Full-duplex and Distance Limitations
6.4: Full-duplex Mode and Gigabit Ethernet
Module 6 Exercise

Module 7: Ethernet Operation at 10Mbps
7.1: 10Base-5 and 10Base-2
7.2: 10Base-T
7.3: 10Base-FL
7.4: Implementation: 10Mbps Ethernet Configuration Guidelines
Module 7 Exercise

Module 8: Fast Ethernet
8.1: The Growth of Fast Ethernet
8.2: 10Mbps Ethernet vs. Fast Ethernet
8.3: 100Base-TX
8.4: 100Base-FX
8.5: Implementation: Fast Ethernet Configuration Guidelines
8.6: Auto-negotiation
Module 8 Exercise

Module 9: Gigabit Ethernet
9.1: Why Gigabit Ethernet is Needed
9.2: Gigabit Ethernet Defined
9.3: Implementation of Gigabit Ethernet
9.4: Gigabit Ethernet and CSMA/CD
9.5: Considerations for Early Adoption
Module 9 Exercise

Module 10: Ethernet and Other Physical-layer Technologies
10.1: Overview: Ethernet and Other Technologies
10.2: Ethernet Compared
10.3: Specific Examples
Module 10 Exercise

Module 11: Ethernet and the Upper-layer Protocols
11.1: The OSI Model Revisited
11.2: Running Multiple Protocols
Module 11 Exercise
Post-course Assessment
Course Evaluation

Fundamentals of Ethernet Technology
Course Description
This web-based course teaches the fundamentals of Ethernet networking. It contains eleven modules, which can be selected individually. Each module contains a number of lessons that discuss Ethernet concepts, generic implementation types, definitions and basic processes. The course does not include hands-on lab exercises, nor 'how to' directions for specific Intel products. The course begins with a brief overview of what Ethernet is, describes the historical setting in which Ethernet was developed and delineates the reasons for the tremendous success of Ethernet products in the marketplace. The second module serves as a comprehensive introduction to the basic methods Ethernet uses to facilitate communication between computers. Following the high-level overviews of Ethernet that make up the first two modules, the remaining modules each focus on a specific aspect of Ethernet technology, ranging from Ethernet's collision detection system and data transfer methods to specific types of Ethernet, including 10Base-T, Fast Ethernet and Gigabit Ethernet. General configuration guidelines for each Ethernet type are also discussed. Following a description of Gigabit Ethernet operations, the course discusses Ethernet's relationship to other physical-layer technologies, such as Token Ring, ATM and FDDI. The course concludes with an examination of Ethernet's relationship to the upper-layer protocols that Ethernet serves, reinforcing the concept of Ethernet as an Open Systems technology.

Recommended Prerequisites
Knowledge of networking fundamentals

Course Goal
After completing this self-study course, students should understand the fundamentals of Ethernet technology. Subsequent Intel certification courses are based on the assumption that students understand the basic concepts covered in this course. Certification courses will not attempt to cover these topics. This minimizes the amount of time students spend outside of their work environment in advanced technical/sales training.

Course Objectives
- Identify the characteristics of the layers in the OSI layering model for data communications, with particular emphasis on the functionality included in OSI layers one (physical layer) and two (data link layer)
- Identify the communication process that Ethernet standards define
- Identify common terminology used in the IEEE set of Ethernet standards
- Identify the role of the Ethernet bus and the concept of collision domains
- Identify the function and characteristics of Ethernet NICs, hubs, repeaters, bridges and switches
- Identify the operation of the CSMA/CD algorithm
- Identify each field in an Ethernet frame and its purpose, as well as the construction of specific Ethernet frame types
- Identify the similarities of and differences between Ethernet running at 10Mbps and 100Mbps
- Identify the basic specifications for Gigabit Ethernet and the basic configuration guidelines for 10Mbps, 100Mbps and 1000Mbps Ethernet
- Identify the basic operation and characteristics of Ethernet over copper media and optical fiber media, as well as the differences between the Cat 3, 4 and 5 UTP cabling certifications
- Identify the concepts of half-duplex and full-duplex Ethernet operation, auto-negotiation of speed and duplex operation
- Identify Ethernet's relationship to other networking technologies

Duration The course consists of an introduction and eleven separate course modules. Estimated time of completion is six hours, depending on your reading speed and the level of detail you desire.


Fundamentals of Ethernet Technology
Welcome!
Welcome to the Fundamentals of Ethernet Technology course! Before you begin the course, please take a moment to review the following information.

Before you get started: If you have never taken an Intel web-based training course, go through the web-based training tutorial before you begin this course. The tutorial explains how course content is organized, how the exercises and course assessments work, how to navigate through the course, and how to get the most out of the course.

Ensuring Your System Is Set Up Properly
Please ensure your computer is properly set up to take advantage of the interactive training at this site. For more information about the system requirements, including web browsers, plug-ins and screen resolution, read the System Requirements/Troubleshooting FAQs.

Preparing Yourself for Training
Before you begin, set aside some time to take this course. It will be most beneficial to your learning experience to spend an hour or so at each session. Remove possible distractions. Turn off the phone and consider using ear plugs to prevent unwanted noise. When it is time to take a break, take one at a logical stopping point, such as the end of a module.

Logging In
Remember to log in each time you take a course. Registering allows you to participate in course exercises, track your progress, receive credit for completed courses and receive incentives that are associated with these courses. For more information on logging in, read the Registration/Log-on FAQs.

Using the Course Syllabus to Navigate within a Course
The course syllabus provides you with easy navigation through the course, allowing you to quickly reach the modules and lessons that contain the information that you want to learn. For example, when you return to a course after a break, you can use the course syllabus to jump to the last module or lesson you were taking. For more information, read the Course Taking FAQs.

Tracking Your Progress
Use the Student Records to track your progress. Check out courses you've enrolled in, parts of courses you've completed and your scores.

Encountering Technical Problems
Many of the most common technical issues are described in the System Requirements/Troubleshooting FAQs.


ETHERNET OVERVIEW
Module Description
In addition to providing a high-level overview of the role Ethernet plays in network computing, this module also provides an historical perspective on the development of Ethernet. Lesson 1.1 defines Ethernet's relationship to upper-layer network protocols and introduces some of the basic components of an Ethernet network. Lesson 1.2 focuses on the historical development of Ethernet, and concludes by identifying the reasons for Ethernet's success in the marketplace.

Module Objectives
- Identify Ethernet's basic role in computer networking
- Identify the components used on an Ethernet LAN
- Identify the key milestones in the historical development of Ethernet
- Identify the benefits of distributed processing
- Identify the benefits of Open Systems solutions
- Identify the reasons for Ethernet's success in the marketplace


What is Ethernet?
Lesson Objectives
- Identify Ethernet's basic role in computer networking
- Identify the components used on an Ethernet LAN

Ethernet Defined Ethernet is a highly popular and internationally standardized networking technology that enables computers to communicate with each other. Ethernet's role in the landscape of network communication is limited, however, to the hardware-level transfer of data from one point to another. Beyond the hardware-level, or physical layer, of networking, data transport is handled by software protocols, such as TCP/IP, IPX, NetBEUI, DECnet and others. Network operating systems, such as Windows NT*, UNIX, NetWare* and others, along with the applications that run on them, use these protocols, which in turn use Ethernet, to provide the broad range of networking services that people depend upon. Ethernet equipment is manufactured by a wide variety of vendors. Today, nearly every brand of modern computer can be equipped to communicate on an Ethernet network. Ethernet technology can provide network speeds from 10Mbps (10 megabits per second) to 1Gbps (1 gigabit per second), which makes Ethernet equally suitable for both small and large networks.


What is Ethernet? (Continued)
Course Introduction The basic concepts of Ethernet are very easy to understand. Like all inventions, Ethernet originates from a series of innovations on older technologies. The basic concepts of Ethernet evolve directly from the basic concepts behind telegraph, telephone and radio technology. As you work through the modules of this course, you will learn that just like the communication technologies that have come before it, Ethernet is nothing more than a practical solution to practical problems. The basic components of an Ethernet network include cabling, network interface cards (NICs), clients, servers, hubs and switches. The figure on the previous page shows you these basic components of an Ethernet network and how they fit together. By the end of this course, you will understand how each of these components works and how the Ethernet standard as a whole works to make communication between computers, printers and other office devices possible.


Ethernet Development
Lesson Objectives
- Identify the key milestones in the historical development of Ethernet
- Identify the benefits of distributed processing
- Identify the benefits of Open Systems solutions
- Identify the reasons for Ethernet's success in the marketplace

Overview Understanding a little about the history of Ethernet is important for two reasons:

1. It gives you a foundation for understanding the practical and technical computing problems that Ethernet addresses.
2. It gives you the cultural background you need to feel competent when discussing network technology with advanced systems administrators and engineers.

The High Cost of Mainframe Computing
In the 1970's and 1980's, mainframe computing began to present a number of significant limitations for large and small businesses alike. First, mainframes are not easily scalable. Companies must either plan to continually upgrade an entry-level mainframe, or they must make a significant capital investment in a large mainframe (and later upgrade it as well, as technology improves). Both options are expensive. Second, the demand for advanced word processors, window-based graphical user interfaces, computer-aided design programs and statistical analysis tools requires an amount of computing power and quick response time that mainframes simply cannot provide at any cost. Third, and finally, because most mainframe solutions from different vendors are incompatible with each other, once a company chooses a particular vendor, it is generally cost prohibitive to change vendors later on. As an alternative to the mainframe-centric world of the 1970's, the search for a decentralized, distributed and multivendor approach to data processing, now known as Open Systems, became the driving force behind the development and adoption of Ethernet, as well as other LAN technologies.


Ethernet Development (Continued)
The Development of Ethernet Standards In September 1980, Digital, Intel and Xerox jointly published the first commercial Ethernet standard for connecting computers together. This original Ethernet standard, sometimes referred to as the DIX standard (D[igital] I[ntel] X[erox]), evolved directly from the Xerox Palo Alto Research Center's (PARC) experimental networks of the 1970's. Prior to the joint publication of this standard, it was generally not possible to network computers manufactured by different vendors. By publishing the first Ethernet standard jointly, Digital, Intel and Xerox made openly available an easy-to-understand, easy-to-implement and easy-to-maintain technology for high-speed communication between computers from either the same or different manufacturers. In 1985, the Institute of Electrical and Electronics Engineers (IEEE) published the first internationally approved set of Ethernet standards under the somewhat obscure title IEEE 802.3 Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications. Shortly thereafter, this standard was adopted by the International Standards Organization (ISO), which effectively positioned Ethernet technology in a way that enabled it to become the most widely used method for connecting Local Area Networks (LANs). The widespread adoption of distributed computing and Open Systems (the concept of a modular, vendor-independent set of "open" interoperability standards) offers companies an economical way to purchase additional computing power in the form of file servers and PC workstations on an as needed basis, and from whatever vendor they believe will deliver the best value for their money. Open communications technologies like Ethernet provide the basis for a modular solution that:

1. Gives users mainframe-like access to shared information.
2. Distributes the processing load required for advanced applications.
3. Allows companies to network a mixture of hardware and software solutions from different vendors.
4. Provides a scalable network architecture at an affordable cost.


In 1973, Robert Metcalfe (later founder of 3Com*) designed the first Ethernet network, named the Alto Aloha Network, while working for Xerox at the now famous Palo Alto Research Center (PARC). In the mid- to late 1970's, a number of exciting things were happening at PARC, including the development of a graphics-based monitor, the windowed display concept, the mouse, the laser printer, the desktop workstation and even something called the Worm, which was originally a software management and network maintenance strategy, yet later became a prototype for the modern computer virus. Somewhere along the line, the name Alto Aloha (derived from the FM radio-based ALOHA Network System built at the University of Hawaii) was dropped in favor of the slightly more mystical-sounding Ethernet. The ether of Ethernet is a reference to the hypothetical element, "luminiferous ether," which, from the 18th century up until Einstein's theory of relativity, many physicists believed to permeate the entire universe, holding it together and providing a medium for electromagnetic (light) waves. The abstract concept of an Ethernet is, then, of a network of wires that can serve as a binding medium across which all the different parts of the computer universe can communicate.


Ethernet Development (Continued)
Ethernet Today
Today, Ethernet technology has achieved commodity status and is available from a wide range of manufacturers and distributors, generally leaving consumers free to pick and choose from a variety of alternatives.

1997 shipments of network interface cards by technology. Source: IDC 1997 PC NIC Market Forecast Summary, 1995-2001.

Since the first Ethernet components appeared on the market in the beginning of the 1980's, Ethernet has gone on to become the most successful LAN technology in the marketplace. As the figure above shows, Ethernet dominates the market today and is likely to continue to do so for the foreseeable future. There are several reasons why:
- Ethernet interfaces are available for almost any type of computer, from laptops to mainframes.
- Ethernet devices are relatively easy to design and manufacture, and as a result, relatively inexpensive.
- Ethernet is easy to install, maintain and troubleshoot, keeping the cost of ownership down.
- Ethernet has proved capable of meeting demands for higher LAN speeds in a cost-effective manner. 100Mbps Fast Ethernet, only recently introduced, has been highly successful, and an IEEE standard for Gigabit Ethernet is expected to be finalized by the end of 1998.


ETHERNET BASICS
Module Description
This module describes the details of Ethernet operation and is the largest module of the course. Lesson 2.1 introduces the Ethernet bus wire and describes the roles that network interface cards (NICs) and data frames play in Ethernet communications. It also describes Ethernet's method for controlling access to the shared broadcast medium by drawing an analogy between Ethernet and two-way radio, introduces and explains the CSMA/CD algorithm, and illustrates the importance of configuration guidelines. Lesson 2.2 describes the place Ethernet occupies in the OSI reference model and identifies the basic network services that Ethernet provides. Lesson 2.3 describes Ethernet cabling schemes in general terms and explains the difference between physical and logical topologies. Lesson 2.4 identifies the role of the MAC sublayer and the Ethernet operations that take place at the MAC sublayer. Lesson 2.5 concludes Module 2 with an explanation of repeaters, switches and bridges.

Module Objectives
- Identify the basic characteristics of the Ethernet bus
- Identify the function of network interface cards
- Identify the basic components of the Ethernet frame
- Identify the fundamental process of Ethernet communication
- Identify the basic concepts of the CSMA/CD algorithm
- Identify the importance of configuration guidelines
- Identify the place Ethernet occupies in the OSI model
- Identify Ethernet's relationship to the upper layers of the OSI model
- Identify Ethernet bus and star topologies
- Identify the difference between physical network topology and the logical topology
- Identify the role of the MAC sublayer
- Identify the roles that repeaters, switches and bridges play


Transmitting and Receiving Data
Lesson Objectives
- Identify the basic characteristics of the Ethernet bus
- Identify the function of network interface cards
- Identify the basic components of the Ethernet frame
- Identify the fundamental process of Ethernet communication
- Identify the basic concepts of the CSMA/CD algorithm
- Identify the importance of configuration guidelines

The Ethernet Bus Computers on an Ethernet network communicate with each other by broadcasting packets of data on a shared wire, called an Ethernet bus. The Ethernet bus is a single, continuous length of wire that serves as a medium for packet broadcasts. Each computer that participates in the network must connect directly to the Ethernet bus. In the hypothetical small office environment shown in the figure below, the Ethernet bus winds through the entire office, passing closely by each computer. Each computer's physical connection to the network is composed of a network interface card (NIC), a short drop line and a connector that taps directly into the bus-wire.

Bus-type, coaxial cabling scheme for a small office.

Network Interface Card (NIC) Addresses and Ethernet Frames
Ethernet distinguishes one computer from another by a unique address assigned to each NIC. Each NIC is both a sender and receiver of packets of data called Ethernet frames. When a communication technology like Ethernet packages, or frames, data, it simply means that routing information is added to the beginning and the end of the original data. Once the data has been routed successfully, the routing information is discarded, just as a postal envelope is often discarded once its contents have been extracted. Furthermore, like postal envelopes, Ethernet frames can transport only a certain amount of data at a time. (For a standard frame, the maximum size of the data field is 1500 bytes.) Like sending a large letter in a series of envelopes, Ethernet transports larger amounts of data in multiple frames. Upon receiving each frame, the receiving computer discards the routing data and puts the original data back in order.


Transmitting and Receiving Data (Continued)
Unlike postal envelopes, however, which arrive all at once, an Ethernet frame arrives one bit at a time. Each Ethernet frame contains a structured series of data fields that identify:

1. The beginning of the frame
2. The address of the intended receiver
3. The address of the sender
4. The type of data being sent

Immediately following this information, the Ethernet frame includes the original data, or the "content" of the frame. The frame ends with a mathematical value (called a cyclic redundancy check, or CRC) that the receiving NIC uses to verify the frame has been received correctly. The figure below illustrates the structure of a standard Ethernet frame.

Structure of a standard Ethernet frame.
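As a concrete illustration of this layout, the following minimal Python sketch splits a raw frame (minus the preamble/SFD, which the hardware handles) into the fields just described. The function name is made up for this summary, and reading the FCS bytes as little-endian is an assumption about how the checksum appears in a captured frame, not something the course specifies.

```python
# Illustrative only: field boundaries of a standard Ethernet frame.
import struct
import zlib

def split_frame(frame: bytes) -> dict:
    if len(frame) < 64:
        raise ValueError("Ethernet frames must be at least 64 bytes long")
    dest, src = frame[0:6], frame[6:12]                 # 6-byte destination and source addresses
    (frame_type,) = struct.unpack("!H", frame[12:14])   # 2-byte type/length field
    data, fcs = frame[14:-4], frame[-4:]                # data field (46-1500 bytes) and 4-byte FCS
    crc_ok = zlib.crc32(frame[:-4]) == int.from_bytes(fcs, "little")  # CRC-32 check (assumed byte order)
    return {"dest": dest.hex(":"), "src": src.hex(":"),
            "type": hex(frame_type), "data_bytes": len(data), "crc_ok": crc_ok}
```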

Listening, Sending and Receiving Ethernet was designed to be a relatively simple communications protocol. As a result, Ethernet bears many characteristics similar to common technologies such as telephone, telegraph and radio. Though recent advances in Ethernet switching technology have allowed Ethernet to operate more like a telephone system, the original Ethernet bus actually operates more like a two-way radio network of taxicab drivers.


Transmitting and Receiving Data (Continued)
Listening, Sending and Receiving (Continued)
Taxicab fleets often use two-way radios to share information about changing traffic patterns, pick-up locations, destinations and emergencies. All radio operators on the taxicab network hear all messages that are broadcast. Most messages, however, are intended for use by only one out of the many drivers. A typical radio message might sound as follows: "Message to Taxi 99. This is Central. Pick-up at 144th and Broadway. Reply." Very much like an Ethernet frame, the routing data ("Message to Taxi 99. This is Central") frames the core message ("Pick-up at 144th and Broadway"). There may be thousands of messages like this broadcast over the radio each day. A taxicab operator becomes accustomed, however, to overhearing all of the messages on his or her radio, yet paying close attention to, or "processing," only those messages specifically addressed to his or her taxicab. Computers on an Ethernet network function essentially the same way. Like the two-way radio network of taxicab drivers, all computers connected to the Ethernet bus hear all broadcasts. Whenever one computer sends a frame (or message) to another computer, the frame is broadcast over the entire length of the bus-cable, which all computers share. As the frame arrives at each computer, the network interface card (NIC) checks the frame's address information. If the destination address of the frame matches the NIC's address, the NIC processes the frame by checking the data packet's integrity and removing the routing information. If the frame's destination address does not match the NIC's address, the NIC does not process any of the information and waits for the next frame to arrive. Just like a taxicab driver, each NIC "listens" to all messages, but only processes those messages specifically addressed to it.


Transmitting and Receiving Data (Continued)
Collisions
On the taxicabs' radio network, if no one is currently speaking, all operators are free to contend for the open channel. Because only one radio operator can be heard at a time, if one driver is already speaking, every other driver must wait until that person finishes before broadcasting a message of his or her own. If two operators begin to speak at the same time, both messages are garbled, and each operator must stop speaking and wait until the channel is free. One of the operators may not realize that his or her message was garbled; in which case, a third operator will ask him or her to restate the message. Once again, Ethernet communication takes place in essentially this same way. When a computer's NIC has a frame ready to send, it first listens to the network for any frames from other computers already being broadcast. If a frame is already being transmitted on the bus, the NIC waits until that frame is completed and the bus is free. It is possible, however, that two or more computers with frames to send will listen to the network at the same time and, thinking that the bus is free, broadcast their frames simultaneously. The resulting garbled transmission is called a collision. The first NIC to detect the garbled transmission sends out what is called a jam signal, which informs all the computers on the network that a collision has occurred. The computers whose frames collided must then wait for a random time before trying to resend their frames. Because the time to wait is randomly chosen, one computer's wait time will likely be shorter than the other's. The computer with the shorter wait time will gain access to the open bus first, and the second computer will then wait until the first computer's broadcast is completed. Collisions are a normal part of Ethernet operations. The set of rules by which Ethernet handles collisions is called the CSMA/CD algorithm. CSMA/CD stands for "carrier sense multiple access with collision detection," which is simply a technical way to refer to the sense-to-see-if-the-line-is-free-before-you-send method that the multiple computers on an Ethernet network use to share access to the broadcast channel. The collision detection part of the CSMA/CD algorithm defines many of the physical limitations of Ethernet and directly affects how Ethernet networks must be configured.


Transmitting and Receiving Data (Continued)
The Fundamental Rule
The fundamental rule of all Ethernet configuration guidelines is that collisions must be detected before a sending station completes the transmission of its frame. Basically, this means that the network connection between the two computers with the greatest distance between them must be short enough, and the frame transmission long enough, that if one of these two computers happens to begin a transmission the instant before a transmission from the other computer arrives, the resulting collision can be detected before either transmission is completed. Once a sending station completes its frame transmission without having been interrupted by a jam signal, it assumes that the frame has been received intact and that it has been processed correctly. If a jam signal is received by a sending station after it has completed its transmission, the sending station will assume that the collision belongs to a set of stations elsewhere on the network. The importance of Ethernet stations being able to detect collisions before the completion of each frame transmission cannot be overstated. If the jam signal that results from a collision is not detected by the sending station before the sending station completes its transmission, the sending station has no way of knowing that it must retransmit its frame. Properly configured Ethernet networks ensure that the distance between the two stations farthest apart on the network is short enough that when a collision occurs, neither of these two stations will have had time to finish its transmission before being interrupted by the jam signal.


Usually, the upper-layer protocol responsible for the frame's data packet will expect a reply from the same upper-layer protocol on the receiving station. If the upper-layer protocol does not receive the expected reply within a certain time frame, it will use Ethernet to resend the data. But as far as Ethernet is concerned, once the Ethernet NIC has been able to transmit its frame without the interruption of a jam signal, Ethernet assumes the frame has arrived intact and has been processed correctly.


Ethernet and the OSI Reference Model
Lesson Objectives
- Identify the place Ethernet occupies in the OSI model
- Identify Ethernet's relationship to the upper layers of the OSI model

Though Ethernet plays a critical role in network communications, this role is limited to a specific set of services that combine with upper-layer networking services to produce practical benefits such as network management, data security, file transfer, remote access and messaging. Let's say, for example, you have created a document in a word processor, and you would now like to save this document to a file in a directory on your workgroup's server before you attach it to an e-mail message. When you give the command to save your file, in addition to the actual transfer of data from your word processor to the hard disk on the server, a variety of communications takes place between your computer and the server, including requests for directory information, access rights and file creation. Though all of these communications are broadcast by Ethernet over the Ethernet bus, Ethernet neither initiates these kinds of network communications nor controls them in a substantive way. Network services like file transfer, network management, remote terminal access and network security are all facilitated by what are called upper-layer protocols. Technically, Ethernet is merely a taxi service for these upper-layer protocols, which employ Ethernet to help accomplish their work. According to Open Systems Interconnection (OSI) standards, in the hierarchy of network services, Ethernet works at the bottom as a servant to all the layers above it.

Open Systems Interconnection Reference Model
The International Standards Organization and the IEEE published the first Open Systems Interconnection standards in 1977. OSI standards are explained graphically using the OSI reference model, shown in the figure below. The OSI reference model provides a comprehensive and modular framework for interconnecting computer systems from different manufacturers. The OSI model defines seven separate and distinct layers of communication that together provide a comprehensive suite of network services.

The Open Systems Interconnection reference model.

The OSI model does not define a specific technology for each layer. The OSI model requires only that every technology be able to accept data from the immediate layer below it and deliver data to the immediate layer above it using universally accepted methods. Though most networking solutions today do not strictly conform to the boundaries of the OSI model, the OSI model still provides a solid framework for understanding how networking technologies interoperate.


Layers of the OSI Model

Layer 7 - Application: This layer provides services to user applications.
Layer 6 - Presentation: This layer describes how data should be formatted when presented to applications. It can also provide services like encryption and compression.
Layer 5 - Session: This layer establishes, manages and ends connections between users and resources.
Layer 4 - Transport: This layer provides a reliable end-to-end connection across a network.
Layer 3 - Network: This layer is responsible for routing packets between end stations in a network.
Layer 2 - Data Link: This layer can provide error handling and flow control, and it arbitrates medium access.
Layer 1 - Physical: This layer defines the electrical, optical and mechanical characteristics of a network connection.


Ethernet and the OSI Reference Model (Continued)
Ethernet and the OSI Model The Ethernet specification covers only the bottom layers of the model, the physical layer and the lower half of the data link layer. Ethernet provides two general services:

1. Ethernet connects computers together physically with cabling and network interface cards.
2. Ethernet transports data packets from the network layer service on one computer to the network layer service of either one or a number of other computers.

As an Open Systems technology, Ethernet does not specifically exclude any particular network layer technology. For example, when Ethernet receives a TCP/IP packet from the network layer, it treats the TCP/IP packet exactly the same as it would a NetBEUI or IPX/SPX packet. Like a taxicab, Ethernet simply transports the fare; it does not ask for names, only the destination.

Ethernet and the OSI Model


The Physical Layer
Lesson Objectives
- Identify Ethernet bus and star topologies
- Identify the difference between physical network topology and the logical topology

At the physical layer, the lowest layer of the OSI reference model, Ethernet specifications cover details about the cabling requirements for Ethernet, including the use of coaxial cable, twisted-pair wire, optical fiber and connectors. Physical layer specifications also define data rates, as well as the electrical, mechanical and signaling characteristics of the physical medium. Physical layer specifications describe how Ethernet represents data as either electrical signals sent over a wire or as light pulses sent through a fiber optic cable. Physical layer specifications also describe how Ethernet activates and deactivates connections. Today, a variety of specific cabling schemes can be used in the design of Ethernet networks. The advantages, disadvantages and limitations of each are discussed in more detail in Modules 7 through 9, which cover operations, guidelines and specifications for Ethernet running at 10Mbps, 100Mbps and 1000Mbps. In general, the physical configuration of an Ethernet network conforms to one of two basic network topologies: bus or star.

Ethernet Bus Topology
Originally, the Ethernet bus was constructed using coaxial (10Base-5 and 10Base-2) cable, and computers connected to the bus using coaxial drop cables, or by connecting directly to the bus itself. On these networks, the Ethernet bus-cable stretches from one end of the building to the other. Terminating resistors placed at each end of the bus ensure that each broadcast signal travels the length of the wire only once. A series of T-connectors, inserted along the length of the bus, provide fixed tap points for individual computers. The bus winds through the building close enough to each computer's NIC that each computer can be connected to a T-connector either directly or by using a short drop cable. The figure below shows the basic schematic of Ethernet bus topologies.

Bus Topology


The Physical Layer (Continued)
Ethernet Star Topology
Today, Ethernet LANs are built almost exclusively using twisted-pair wire to connect computers to a central hub (sometimes called a concentrator). There are several reasons for this:

- Twisted-pair wiring is easier to install than coaxial cable.
- Twisted-pair wiring is significantly less expensive than coaxial cable.
- Twisted-pair wiring can be used for other purposes besides Ethernet, for example, to carry voice.
- The star configuration makes data traffic easier to monitor and troubleshooting simpler by concentrating the location of physical network connections in a small space (the hub).

Star network topology

In the star configuration shown in the figure above, the hub forms a central wiring closet that physically takes the place of the long, coaxial Ethernet bus. Even though the star topology shown in the figure above looks radically different from the bus topology shown in the figure before it, the basic operations of Ethernet are the same for both. The evolution from bus to star topologies is perhaps best understood as simply a dramatic shortening of the Ethernet bus and an equally dramatic elongation of the drop lines that connect individual computers to the shared broadcast medium.


The Physical Layer (Continued)
The figure below shows the same office space shown in Lesson 2.1, this time cabled using a star topology.

Star-shaped, twisted-pair wiring scheme for a small office.

A Physical Star, A Logical Bus
Both Ethernet star and bus topologies connect computers in such a way that packet broadcasts from one station are received by all other stations on the network. The network cabling still forms a shared broadcast medium. Regardless of the shape of the network cabling scheme (star or bus), the logical topology of Ethernet networks is a bus. The logical scheme for the bus topologies shown in the previous figures is the same as that depicted for the star topologies.


If you find the concept of a star being a bus difficult to grasp, simply imagine a very short bus enclosed inside the hub and very long drop cables connecting the network stations to the bus.


The MAC Sublayer
Lesson Objective
- Identify the role of the MAC sublayer

The IEEE 802 series of network standards divides the second layer of the OSI reference model, the data link layer, into two sublayers called the medium access control (MAC) layer and the logical link control (LLC) layer. The IEEE 802.3 Ethernet specification covers the physical layer and the MAC sublayer, but not the LLC sublayer. The LLC sublayer uses the MAC sublayer to provide medium-independent link functions to the network layer above it.

MAC sublayer

When a computer transmits a frame, Ethernet operations at the MAC sublayer assemble the destination and source addresses for each Ethernet frame and calculate the frame's CRC checksum. At the receiving end, Ethernet operations at the MAC sublayer process the destination address and verify the integrity of the frame using the CRC checksum. Ethernet's collision detection and handling protocol, the CSMA/CD algorithm, also operates at the MAC sublayer.
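A matching send-side sketch of the operations just described: assemble the addresses and type, pad the data field to its 46-byte minimum, and append the CRC-32 frame check sequence. Ethernet's FCS uses the same CRC-32 polynomial that Python's zlib.crc32 implements; the little-endian packing of the checksum is an assumption about byte order, and the helper name is made up for this summary.

```python
# Hypothetical sketch of send-side MAC assembly; preamble/SFD are added by the hardware.
import struct
import zlib

def assemble_frame(dest: bytes, src: bytes, frame_type: int, data: bytes) -> bytes:
    if len(data) < 46:
        data = data + bytes(46 - len(data))              # pad the data field to the 46-byte minimum
    header = dest + src + struct.pack("!H", frame_type)
    fcs = struct.pack("<I", zlib.crc32(header + data))   # 4-byte frame check sequence
    return header + data + fcs

frame = assemble_frame(b"\xff" * 6,                      # broadcast destination address
                       b"\x02\x00\x00\x00\x00\x01",      # locally administered source address
                       0x0800,                           # type field: IPv4
                       b"hello")
assert len(frame) == 64                                  # padded up to the minimum frame size
```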


Repeaters, Switches and Bridges
Lesson Objectives
- Identify the roles that repeaters, switches and bridges play in Ethernet networking

Repeaters, switches and bridges are physical networking components that will be discussed in some detail later in the course. They are introduced here, however, to provide a complete overview of the basic components of Ethernet networking.

Repeaters
Technically, an Ethernet hub is also a repeater, because it regenerates the strength of all incoming signals and repeats them individually to each port. Because collisions on an Ethernet network must be detected before a station completes the transmission of its frame (IEEE 802.3 standards require that a collision be detectable within the first 512 bits of a transmission), the maximum allowable distance between any two stations on an Ethernet network operating at 10Mbps is 2,500 m (meters). Over distances much shorter than this, however, electrical signals transmitted from an NIC lose their clarity and strength due to a natural weakening called attenuation. 10Mbps signals over twisted-pair wire, for example, become undecipherable at a distance of a little over 100 m. For thick coaxial cable, this distance is 500 m. Network hubs and repeaters work at the physical layer to regenerate the strength of electrical signals so that distant segments of a network can share the same broadcast medium.

Switches
Ethernet switches operate like intelligent hubs that repeat incoming frames only to the computer (or computers) to which each frame is addressed. Thus, on a switched network with four computers (A, B, C and D), computer A can broadcast to computer B, and computer C can broadcast to computer D simultaneously, without a collision. In this simple example, Ethernet switching effectively doubles the total throughput of the network by allowing computers A and C to broadcast at full network speed without having to wait for each other's broadcasts to finish. As a result of their ability to significantly increase overall network throughput, switches are becoming an increasingly popular replacement for Ethernet hubs.

Bridges
Bridges operate at both the physical layer and the MAC sublayer and connect otherwise completely separate Ethernet networks. Bridges sit between each network and repeat only those frames that are specifically addressed to computers on the other side. By designing separate network domains connected with bridges, network traffic can be isolated without sacrificing system-wide connectivity. Bridges can also connect networks running at different speeds with different topologies or communication protocols.
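To make the forwarding decision concrete, here is a small, hypothetical sketch of how a switch (or a two-port bridge) decides where to send a frame. Real devices learn which port each address lives on by watching source addresses, forward frames for known addresses out a single port, and flood everything else; the class and method names below are illustrative, not taken from the course.

```python
# Hypothetical sketch of address learning and selective forwarding.
class LearningSwitch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.table: dict[str, int] = {}          # MAC address -> port it was last seen on

    def handle_frame(self, in_port: int, src: str, dest: str) -> list[int]:
        self.table[src] = in_port                # learn (or refresh) where the sender lives
        if dest in self.table:
            out = self.table[dest]
            return [] if out == in_port else [out]   # forward only toward the destination
        return [p for p in range(self.num_ports) if p != in_port]   # unknown address: flood

sw = LearningSwitch(4)
print(sw.handle_frame(0, "aa:aa", "bb:bb"))      # bb:bb not learned yet -> flood to ports 1, 2, 3
print(sw.handle_frame(1, "bb:bb", "aa:aa"))      # aa:aa was learned on port 0 -> [0]
```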


ETHERNET OPERATIONS
Module Description
This module describes the details of data transmission and access control on Ethernet networks. Lesson 3.1 covers the CSMA/CD algorithm and focuses particularly on the process of collision detection. Lesson 3.2 explains the reason for Ethernet's maximum distance specifications and also explains, in general, how maximum distances are calculated. Lesson 3.3 defines the exponential backoff algorithm and discusses the role it plays in the collision detection and retransmission process. Lesson 3.4 discusses the effect of collisions and excessive collisions on performance. Lesson 3.5 concludes Module 3 by illustrating a number of ways that Ethernet networks can be configured to reduce collisions and increase performance.

Module Objectives
- Identify the operation of the CSMA/CD algorithm
- Identify the reasons there is a limit to the distance between stations on an Ethernet network
- Identify Ethernet's distance limitation in bit times
- Identify the operation and purpose of the backoff algorithm used to control retransmissions on an Ethernet
- Identify how to determine whether an Ethernet network is experiencing too many collisions
- Identify some of the ways Ethernet networks can be segmented to reduce collisions


The CSMA/CD Algorithm
Lesson Objectives
- Identify the operation of the CSMA/CD algorithm
- Identify the reasons there is a limit to the distance between stations on an Ethernet network

Module 2 briefly introduced the CSMA/CD algorithm and how it works to control access to Ethernet's shared, physical medium. The CSMA/CD algorithm defines when stations are allowed to transmit and for how long, as well as how to manage situations in which two or more stations attempt to transmit at the same time. The following two flow charts illustrate the decision making processes that an Ethernet NIC completes when sending and receiving frames.

Flow chart for Ethernet frame transmissions.


The CSMA/CD Algorithm (Continued)

Flow chart for receiving Ethernet frame transmissions.

Because an Ethernet network uses a shared broadcast medium, network stations must take turns transmitting data across the medium. If more than one station transmits data at the same time, the transmissions collide and the signal becomes undecipherable as a result. CSMA/CD stands for Carrier Sense, Multiple Access with Collision Detection. "Carrier Sense" means that network stations with data to transmit should first listen to determine if another station is sending data. "Multiple Access" means that Ethernet provides a number of stations the opportunity to transmit on the single cable. "Collision Detection" refers to the process by which stations detect simultaneous transmissions.
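The transmit-side flow chart above can be sketched in a few lines of Python. This is illustrative only: the medium object and its methods (carrier_sense, collision_detected, and so on) are placeholders for behavior that a real NIC implements in hardware, and the backoff line anticipates the exponential backoff rule covered in Lesson 3.3.

```python
# Illustrative sketch of the transmit side of CSMA/CD as described above.
import random

MAX_ATTEMPTS = 16                                # after 16 failed attempts the frame is discarded

def csma_cd_transmit(frame, medium) -> bool:
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while medium.carrier_sense():            # carrier sense: defer while the bus is busy
            pass
        medium.start_sending(frame)              # bus appears free: begin transmitting
        if not medium.collision_detected():      # monitor for a collision while sending
            return True                          # finished without a collision: assume success
        medium.send_jam_signal()                 # collision: jam so every station notices it
        slots = random.randint(0, 2 ** min(attempt, 10) - 1)   # exponential backoff (Lesson 3.3)
        medium.wait_slot_times(slots)
    return False                                 # excessive collisions: give up on this frame
```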


The CSMA/CD Algorithm (Continued)
Even though data signals travel near the speed of light, they still take time to travel over the network medium. As illustrated by the series of figures below, collisions can occur even though each station must check first to see if the medium is free. As a result, stations must continue to monitor for collisions even after gaining access to the medium.

Station A transmitting

The figure above shows two stations connected to an Ethernet bus, 500 m apart (about 1,640 ft [feet]). After station A begins to transmit, the signal travels away from station A in both directions. The speed of signal propagation through the cable varies slightly, depending on the cable type used. (Propagation is a technical term for the process by which signals, or waves, travel across a medium, such as wire, water or atmosphere.) Generally, signal propagation speed through copper and fiber cable is 2/3 c, where c is the speed of light in a vacuum. Hence, it will take the transmitted signal about 2.5 µs (microseconds) to travel 500 m. On a 10Mbps Ethernet network, this means that station A will have transmitted 25 bits (2.5 µs x 10,000,000 bits/second) by the time the first of those bits reaches station B. Now, assume that station B decides to begin a transmission immediately before the first bit from A's transmission has traveled the 500 m distance between the two stations. Because station B believes that the cable is free, it will begin to transmit. Consequently, the two signals collide on the wire immediately afterwards, as shown in the figure below.

Collision

Station B discovers the collision right away and transmits a jam pattern to ensure that all stations on the network detect the collision. A jam pattern is a sequence of bits that is put together in such a way that the signal cannot be mistaken for a valid transmission.

Station A detects the collision

It takes another 2.5 µs before the jam signal has traveled the 500 m from station B to station A. By the time station A discovers the collision and stops transmitting, 5 µs have elapsed, and station A has already transmitted 50 bits.
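The arithmetic in this example can be checked directly. The short calculation below reproduces the figures above, assuming the 2/3 c propagation speed mentioned earlier; the variable names are only for illustration.

```python
# Reproducing the figures from the example above (assumed propagation speed of 2/3 c).
PROPAGATION_SPEED = 2.0e8      # metres per second, roughly 2/3 the speed of light in a vacuum
DISTANCE_M = 500               # distance between station A and station B
BIT_RATE = 10_000_000          # 10Mbps

one_way_delay = DISTANCE_M / PROPAGATION_SPEED            # 2.5e-06 s, i.e. 2.5 microseconds
bits_before_b_hears_a = one_way_delay * BIT_RATE          # 25 bits already on the wire
bits_before_a_hears_jam = 2 * one_way_delay * BIT_RATE    # 50 bits by the time the jam returns
print(one_way_delay, bits_before_b_hears_a, bits_before_a_hears_jam)
```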


Maximum Distance between Stations
Lesson Objectives
- Identify the reasons there is a limit to the distance between stations on an Ethernet network
- Identify Ethernet's distance limitation in bit times

If the distance between station A and station B increases, station A will transmit more and more bits without discovering a collision. If the stations are placed too far apart, station A will complete its transmission before the collision is discovered. If this happens, when station A receives the jam signal, it will not assume that its own transmission was involved in the collision. Station A will, instead, assume that the collision belongs to some other set of computers. One of the significant assumptions of Ethernet operations is that once an Ethernet station is able to finish its transmission without being interrupted by either a jam signal or a collision, that station assumes that its transmission has been received successfully. Usually, the upper-layer protocol responsible for the frame's data packet expects a response from the same upper-layer protocol on the receiving station. When the expected reply is not received within a specified time unique to each protocol, the upper-layer protocol on the sending station will use Ethernet to resend the original data. These kinds of retransmissions, however, not only result in unacceptable delays and network inefficiency, but they are also unnecessary. The Ethernet standard contains several specifications that ensure collisions will be detected before a station finishes its transmission. First, the standard limits the maximum distance between two stations in such a way that a station will not have transmitted more than 512 bits before a collision is discovered. On a 10Mbps Ethernet network, the maximum distance between two stations cannot exceed 2,500 m (about 8,200 ft). On a 100Mbps Ethernet network, the maximum distance is much shorter because data is transmitted ten times faster; thus, stations have less time to discover collisions. Second, the standard specifies that an Ethernet frame must always be at least 512 bits (64 bytes) long. Third, Ethernet standards require transmitting stations to monitor the cable for collisions throughout the first 512 bits of every transmission. After that, stations are free to assume that a collision will not occur.


The round-trip propagation delay at 2,500 m is about 25 µs, or only 250 bit times at 10Mbps. (Propagation is a technical term for the process by which signals, or waves, travel across a medium, such as wire, water or atmosphere. The phrase "round-trip propagation delay" refers specifically to the time it takes for a single Ethernet transmission to travel the length of the wire twice. The first signal is assumed to be a data transmission and the second a jam signal.) What about the remaining 262 bit times? Some of this margin allows for small delays in electronic circuits such as repeaters and network interface cards; the rest is a safety margin. The maximum allowable distance between any two stations on an Ethernet network operating at 10Mbps is 2,500 m. Over distances much shorter than this, however, electrical signals lose their clarity and strength due to a natural weakening called attenuation. Thus, repeaters are used to regenerate the strength of electrical signals so that distant segments of a network can share the same broadcast medium. (Technically, an Ethernet hub is also a repeater because it regenerates the strength of all incoming signals and repeats them individually to each port.)
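The figures in this note can also be verified with a quick calculation; as before, the 2/3 c propagation speed is an assumption carried over from the earlier example.

```python
# Checking the note above: the worst-case round trip at 2,500 m comes to about
# 250 bit times at 10Mbps, leaving 262 of the 512-bit minimum frame as headroom.
PROPAGATION_SPEED = 2.0e8                          # metres per second (assumed 2/3 c)
round_trip_s = 2 * 2500 / PROPAGATION_SPEED        # 2.5e-05 s = 25 microseconds
round_trip_bits = round_trip_s * 10_000_000        # 250 bit times at 10Mbps
remaining_bits = 512 - round_trip_bits             # 262 bit times for electronics and safety margin
print(round_trip_s, round_trip_bits, remaining_bits)
```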


Exponential Backoff Algorithm
Lesson Objective
- Identify the operation and purpose of the backoff algorithm used to control retransmissions on an Ethernet

After checking the broadcast medium and finding it free, the sending station assumes that no other station has frames to send, and so begins to transmit. After a collision has occurred, however, there are always at least two stations on the network with frames to send. If, after a collision, the affected stations were simply to retransmit their frames the moment the jam signal finished, the exact same collision would occur again. To avoid repeated collisions, Ethernet uses an exponential backoff algorithm that requires each station affected by a collision to wait a randomly selected amount of time before retransmitting. The station that randomly chooses the shortest backoff time will then be able to transmit its frame without interference from the station that contributed to the prior collision. Once that transmission has been recognized by all other stations on the network, all stations must wait until it is completed before once again contending for access to the network. According to Ethernet's exponential backoff algorithm, once a collision occurs, each station involved in the collision waits a randomly chosen number of time units, called slot times, before transmitting again. The length of a slot time depends on the transmission speed: on 10Mbps Ethernet networks, it is 51.2 µs; on Fast Ethernet networks, it is only 5.12 µs.


Exponential Backoff Algorithm (Continued)
If two stations both wait for the same number of slot times, their transmissions will collide again. Thus, there is a range of slot times from which stations must randomly choose after each unsuccessful transmission. The specified ranges are shown in Table 3-3 below. Each time a station retransmits and encounters another collision, the size of the range is doubled. If the first transmission attempt fails due to a collision, each station involved in the collision waits either 0 or 1 slot times before attempting to transmit the frame a second time. On 10Mbps Ethernet, this means that the waiting time will be either 0 µs or 51.2 µs. On the third attempt to transmit the frame, the waiting time will be between 0 and 3 slot times, on the fourth attempt between 0 and 7 slot times, and so on. After ten successive collisions, the maximum stops growing and stays at 1,023 slot times. An Ethernet station will attempt to transmit the same frame up to 16 times. After that, if the transmission has not been successfully completed, the station gives up and discards the frame.

Transmission Attempt | Minimum Wait (Slot Times) | Maximum Wait (Slot Times)
 1    | N/A     | N/A
 2    | 0       | 1
 3    | 0       | 3
 4    | 0       | 7
 5    | 0       | 15
 6    | 0       | 31
 7    | 0       | 63
 8    | 0       | 127
 9    | 0       | 255
10    | 0       | 511
11    | 0       | 1023
12-16 | 0       | 1023
17    | Give up | Give up

Table 3-3. Backoff algorithm slot times
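Written out as code, the backoff calculation is short. The sketch below follows the ranges in Table 3-3: a random whole number of slot times, with the range roughly doubling after each collision and capped at 1,023. The function name is made up for this summary.

```python
# A minimal sketch of the truncated binary exponential backoff described above.
import random

SLOT_TIME_US = 51.2          # slot time on 10Mbps Ethernet (5.12 µs on Fast Ethernet)

def backoff_slot_times(collisions_so_far: int) -> int:
    """Random number of slot times to wait before the next transmission attempt."""
    k = min(collisions_so_far, 10)           # the range stops growing after 10 collisions
    return random.randint(0, 2 ** k - 1)     # 0..1, then 0..3, ... up to 0..1,023

# Example: wait before the 4th transmission attempt (3 collisions so far)
slots = backoff_slot_times(3)
print(f"wait {slots} slot times = {slots * SLOT_TIME_US:.1f} microseconds")
```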


Collisions and Performance Considerations
Lesson Objective
- Identify how to determine whether an Ethernet network is experiencing too many collisions

It is important to understand that collisions are a normal occurrence on an Ethernet; a collision is not an error. Too many collisions will, however, decrease network performance. To decrease the number of collisions, you must, among other things, limit the number of stations that share an Ethernet collision domain. To determine whether a network is experiencing too many collisions, a network administrator must first determine the quality of service he or she expects to maintain. A good rule of thumb is that the following inequality should hold true:

(Number of deferred transmissions + number of retransmissions) / Total number of transmissions < 5 percent

In other words, if 95 percent of all transmissions are not deferred (do not have to wait because another transmission is already taking place) and do not have to be retransmitted because of a collision, the delay experienced by users is probably within acceptable limits. The definition of "acceptable limits" is, of course, subjective. In some cases, users and management may be willing to accept a poorer quality of service from the network, and in other situations, a 95 percent average availability may not be sufficient. Another way to measure whether an Ethernet network is experiencing too many collisions is to calculate the number of successive collisions that a transmitting station experiences. In general, a transmitting station should experience a collision no more than two times before it successfully transmits data.
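The rule of thumb above can be expressed as a one-line check. The counter names below are illustrative and not taken from any particular monitoring tool.

```python
# A minimal sketch of the 5 percent rule of thumb described above.
def collision_load_acceptable(deferred: int, retransmissions: int, total: int) -> bool:
    return (deferred + retransmissions) / total < 0.05

# Example: 120 deferred + 80 retransmitted out of 10,000 transmissions is 2 percent
print(collision_load_acceptable(deferred=120, retransmissions=80, total=10_000))   # True
```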


Common Ethernet Errors

The following is a brief description of the most common errors on Ethernet networks and their most likely causes. Error statistics can be obtained from several different sources, such as dedicated troubleshooting equipment like network monitors and probes, as well as from network equipment such as bridges, switches and routers.

Short. A frame which is shorter than the minimum 64 bytes. Short frames can be caused by noisy connections, cable faults and faults in network hardware. If they occur often, remedial action should be taken. Correcting the problem usually means replacing defective cables or equipment.

CRC Error. A frame which has been corrupted during transmission. A CRC error is registered when the 4-byte checksum is invalid, that is, the CRC information in the FCS field does not match the CRC value computed by the receiving station. Frames with CRC errors are discarded by the receiving hardware. CRC errors are usually caused by cable faults and other faults in the network. Correcting the problem usually means replacing defective cables or equipment.

Alignment Error. Frames with alignment errors are frames that are longer than 64 bytes, have a bad CRC and are not an integral number of bytes in length; that is, the number of bits in the frame is not divisible by 8. Frames with alignment errors are discarded by the receiving station because they have an invalid CRC. Alignment errors are usually caused by cable faults or problems with network interface cards. Correcting the problem usually means replacing defective cables or network interface cards.

Long. A frame that is longer than the legal maximum length of 1518 bytes but shorter than 6,000 bytes. Longs can have a negative impact on general network performance and may result in users being disconnected. The station transmitting the oversized frames has a hardware or software error. It should be found and removed from the network.

Giant. A frame that is longer than 6,000 bytes. The station transmitting the oversized frames has a hardware or software error. It should be found and removed from the network.

Jabber. A long frame with a CRC or alignment error. "Jabbers" are usually caused by a malfunctioning network interface card or external transceiver. The faulty equipment should be replaced.

Late Collision. A collision that occurs after the first 512 bits have been transmitted by the sending station. Late collisions should never occur in a healthy Ethernet network or segment. A late collision can cause severe performance degradation because it cannot be detected by the sending station. Late collisions are typically caused by misconfiguration, such as cable distances that are too long or more than four repeaters between one or more network stations in a 10Mbps environment. The problem must be solved by changing the network configuration so that it complies with the guidelines.

Excessive Collisions. As previously described, an Ethernet station will try up to 16 times to transmit a frame. If all transmission attempts fail due to collisions, the frame is discarded by the sending station. This situation is called an excessive collision. If this occurs frequently due to heavy network traffic, the network should be redesigned to relieve the congestion.
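
A minimal sketch of how the length-based categories above might be distinguished in a monitoring script; the 64-, 1518- and 6,000-byte thresholds are the ones quoted in this lesson, and the function name is illustrative only.

    def classify_frame_length(length_bytes):
        """Classify a received frame purely by its length in bytes."""
        if length_bytes < 64:
            return "short"          # below the 64-byte minimum
        if length_bytes <= 1518:
            return "legal length"   # within the normal 64-1518 byte range
        if length_bytes < 6000:
            return "long"           # over the legal maximum but under 6,000 bytes
        return "giant"              # 6,000 bytes or more

    for size in (32, 64, 1518, 3000, 9000):
        print(size, classify_frame_length(size))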


Collisions and Performance Considerations (Continued)
The figure below shows an example of how collision statistics might look on an Ethernet segment that is performing well. Notice that most transmissions succeed after no more than two successive collisions. The number of transmissions that experience more than six successive collisions is too small to be visible on the chart.

An Ethernet segment that is performing well. The total of all blue bars in the graph represents the total number of collisions on the network.

By contrast, the following figure shows collision statistics on a heavily loaded Ethernet segment. Note that more of the transmissions experience multiple collisions on this segment.

A heavily loaded Ethernet segment. The total of all blue bars in the graph represents the total number of collisions on the network. Most of the retransmissions on this network experience two or more successive collisions. Network congestion is so great that some transmissions reach the excessive collision limit and are dropped by the sending station's NIC.


Network Segmentation
Lesson Objective

Identify some of the ways Ethernet networks can be segmented to reduce collisions

Network managers can decrease the total number of collisions on a network in several ways. Network managers can:
- Create multiple, small collision domains by segmenting network traffic.
- Increase network efficiency by using switches in the place of hubs.
- Increase network speed by implementing Fast Ethernet and Gigabit Ethernet on high-traffic backbones, links between servers and clusters of power workstations.

Bridges Between Segments

Later courses in advanced network design and management will discuss these solutions in detail. This lesson focuses only on the general concept of network segmentation. The figure below shows a basic network configuration that uses a single collision domain.


Network Segmentation (Continued)
The figure below shows a segmented network configuration that includes multiple collision domains.

In the figure above, a bridge is used to connect segments. The bridge functions as a selective repeater that retransmits the frames it receives only when they are specifically addressed to devices on the other side. As a general rule, network engineers try to keep at least 80% of all traffic generated by a collision domain within that same domain.


Network Segmentation (Continued)
Separate Domains on a Single Server

Some network operating systems allow a single server to use multiple NICs to create segmented networks. In this situation, each NIC in the server functions as a separate network segment. Software running on the server performs the function of a bridge. The server reads the destination address of each frame and repeats only those frames addressed to devices on the other segment. Using segmented collision domains on a single server not only decreases the number of total collisions on the network, but also allows the server to receive packets from two or more segments simultaneously. The figure below illustrates a network configuration that includes two collision domains and only one server.


ETHERNET FRAME COMPOSITION
Module Description

This module discusses the composition of the Ethernet frame. Lesson 4.1 gives a general overview of the contents of the Ethernet frame. Lessons 4.2 through 4.7 discuss in detail the content and purpose of the Ethernet frame's fields and the interframe gap.

Module Objectives

- Identify the fields found in an Ethernet frame
- Identify the purpose of the preamble/SFD, type/length, data and FCS fields
- Identify the structure of MAC addresses
- Identify the difference between unicast, multicast and broadcast addresses
- Identify the purpose of having an interframe gap


Basic Ethernet Frame Composition
Lesson Objective

Identify the fields found in an Ethernet frame

The Ethernet frame can, perhaps, best be thought of as a container for safely and efficiently transporting data packets from one station to another. The general format of an Ethernet frame is shown in the figure below.

Ethernet frame composition

Ethernet frames contain six fields in total. Each field, with the exception of the data field, is precisely defined both in length and content. Announcing the arrival of each frame, a 7-byte preamble serves to synchronize the sending station's and the receiving stations' clocks, ensuring that each frame is received at the same speed it was sent. Following the preamble, a 1-byte start-of-frame delimiter signals to the receiving station that the substantive portion of the frame is about to start. The destination address, source address and type/length fields together form what is commonly referred to as the Ethernet header. The Ethernet header contains control information used by Ethernet to identify the source, destination, size and protocol of the upper-layer data packet contained in the data field. The data field immediately follows the header fields and varies in length between 46 and 1500 bytes. All other fields in the frame have fixed lengths. A frame check sequence field marks the end of the Ethernet frame and contains a checksum value that can be used to verify the frame has not been corrupted in transit. The checksum is the result of a calculation of bit values derived from all other fields in the frame. Since the original Ethernet standard was published in 1980, a number of variant Ethernet frame types have been developed and are now in common use. All of them, however, follow the basic structure shown in the figure above. Lessons 4.2 through 4.6 describe in detail the six basic fields of an Ethernet frame.


Preamble/SFD
Lesson Objective

Identify the purpose of the preamble/SFD field

Ethernet frame composition: preamble/SFD field

Timing

Ethernet is sometimes described as a bit-serial, synchronous transmission facility. Bit-serial means that frames are transmitted and received one bit at a time across the medium. The phrase synchronous transmission refers to the fact that the clocks in both the sender and receiver must be synchronized in order for each bit to be correctly detected. Clock synchronization is important because Ethernet uses precisely timed changes in signal strength to create recognizable high-to-low and low-to-high patterns that, upon receipt, are interpreted as digital 0's or 1's. An unsynchronized clock will time the signal incorrectly and will either not be able to interpret the signal at all or will misinterpret the signal by reading high-to-low sequences as low-to-high sequences, and vice versa. In such instances, the frame is said to be misaligned.

Synchronization

The Ethernet frame enables the receiving station to synchronize its clock with the sending station by using a 7-byte (56-bit) series of alternating 1's and 0's, called a preamble. The steady alternation of 1's and 0's in the preamble constitutes a simple way to encode clocking information in the signal itself. Like a drum roll used to synchronize the feet of soldiers in a very fast march, the preamble's 56 bits of alternating 1's and 0's allow the receiving station to adjust its clock until the steady alternations of the preamble are timed correctly.

Start of Frame Delimiter

The start of frame delimiter (SFD) is also an alternating binary 1's and 0's pattern, except for the last two bits, which are "11." The binary "11" sequence alerts the receiver to the fact that the preamble has ended and the Ethernet header will now begin. At first glance, the SFD appears unnecessary, because it may seem that the receiver could simply count bits until 8 bytes (64 bits) have been received and then start copying the frame to memory. However, the sender and receiver clocks may be so far out of synchronization that the receiver will not be able to synchronize on the signal immediately, thus allowing an indeterminable portion of the preamble to pass by unrecognized.
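
To make the bit patterns described above concrete, the following sketch simply writes out the 56-bit preamble and the 8-bit SFD as strings of 1's and 0's (illustrative only; real hardware generates these patterns in silicon).

    PREAMBLE_BYTE = "10101010"   # one byte of the alternating preamble pattern
    SFD_BYTE = "10101011"        # same pattern, except the last two bits are "11"

    preamble_bits = PREAMBLE_BYTE * 7   # 7 bytes = 56 bits of alternating 1's and 0's
    print(len(preamble_bits))           # 56
    print(preamble_bits[:16])           # 1010101010101010
    print(SFD_BYTE)                     # 10101011 -- the "11" marks the end of the preamble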


Destination Address and Source Address Fields
Lesson Objectives
- Identify the structure of MAC addresses
- Identify the difference between unicast, multicast and broadcast addresses

Ethernet frame composition: destination address and source address fields

The Structure of MAC Addresses

Every NIC attached to an Ethernet network must have a unique 6-byte hardware address, usually called a MAC address. Illustrated in the figure below, the MAC address consists of two parts: a 3-byte manufacturer ID and a unique 3-byte NIC ID number, assigned by the manufacturer. (On certain first- and second-generation NICs, the MAC address can be manually set using a series of jumper connections on the circuit board.)

MAC address structure

Technical manuals usually record the bytes of the MAC address using hexadecimal notation, as opposed to writing out the binary digits. In an example MAC address such as "00-A0-C9-CE-20-03," the first three bytes, 00-A0-C9, represent the manufacturer ID (Intel), and the second three bytes, CE-20-03, represent the NIC ID. Using this same example, the first two bytes written using binary notation would read 00000000-10100000. (Although, to make things somewhat interesting, Ethernet actually reverses the bit-ordering of each byte when it is transmitted. So, the same two bytes would actually be transmitted as 00000000-00000101.)

Destination Address

The MAC address is, of course, each station's unique destination address (DA). When a station recognizes its own MAC address in the destination address field of an Ethernet frame, the station copies the rest of the frame to memory for further processing by the CPU. When a station recognizes that the destination address is not its own, that station simply disregards the rest of the frame.

Source Address

The source address (SA) field contains the MAC address of the sending station. The source address field gives a receiving station the opportunity to respond to the originating station, either by confirming receipt of the frame, requesting that the frame be resent or answering a particular request.

Unicast, Multicast and Broadcast Frames

In addition to Ethernet's ability to send individual frames to a single workstation (sometimes called a unicast), Ethernet also has the ability to send frames either to a group of stations on a single segment (a multicast) or to all Ethernet stations on the network (a broadcast).

Unicast Addresses. The unicast address for a particular destination is simply another name for a NIC's unique MAC address. An Ethernet frame sent to a unicast address is intended for one station only. Unicast transmissions are used by clients requesting application or file services from a particular server, and by servers responding to client-specific requests.
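
The split between manufacturer ID and NIC ID, and the bit-order reversal mentioned above, can be sketched as follows; the helper names are hypothetical and the example address is the one used in this lesson.

    def split_mac(mac):
        """Split a MAC address written in hex notation into its 3-byte
        manufacturer ID (OUI) and its 3-byte NIC ID."""
        octets = mac.split("-")
        return "-".join(octets[:3]), "-".join(octets[3:])

    def wire_bit_order(octet_hex):
        """Show one byte as written and as transmitted: Ethernet sends the
        least significant bit of each byte first."""
        as_written = format(int(octet_hex, 16), "08b")
        return as_written, as_written[::-1]

    print(split_mac("00-A0-C9-CE-20-03"))   # ('00-A0-C9', 'CE-20-03')
    print(wire_bit_order("A0"))             # ('10100000', '00000101')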


Destination Address and Source Address Fields (Continued)
Multicast Addresses. A multicast address identifies an entire group of stations attached to the same Ethernet segment. Routing information updates, for instance, are often sent to a multicast address. Routers on the network copy these frames from the wire, while other stations disregard them. Other examples of the use of multicast addresses are video distribution and bridge packets used by the spanning tree algorithm. So that NICs instantly recognize a multicast address, the first bit of a multicast address is always a binary 1.

Broadcast Addresses. Each network has only one broadcast address. The broadcast address, as it appears in the destination address field, is composed entirely of binary 1's. All stations on the network are expected to copy Ethernet frames sent to the broadcast address and pass them to the CPU for further processing. Ethernet LAN services such as address resolution and service advertisements that rely on recurrent transmissions to all stations frequently send packets to the broadcast address. Specific examples include the services provided by Address Resolution Protocol (ARP [a component of the TCP/IP suite]) and NetWare* SAPs.

Multicast and broadcast addresses can appear only in the destination address field, never in the source address field. A frame can be intended for a group of stations, but it will always be sent from a specific station on the network.
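
A small sketch of how a destination address could be classified using the rules above. Note that the "first bit" of the address is the first bit placed on the wire, which, because of the bit-ordering described earlier, is the least significant bit of the first byte as normally written.

    def classify_destination(mac_bytes):
        """Classify a 6-byte destination address as broadcast, multicast or unicast."""
        if mac_bytes == b"\xff\xff\xff\xff\xff\xff":
            return "broadcast"       # all binary 1's
        if mac_bytes[0] & 0x01:      # first bit on the wire is a 1
            return "multicast"
        return "unicast"

    print(classify_destination(bytes.fromhex("ffffffffffff")))  # broadcast
    print(classify_destination(bytes.fromhex("01005e0000fb")))  # multicast
    print(classify_destination(bytes.fromhex("00a0c9ce2003")))  # unicast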


The spanning tree algorithm is an IEEE 802 standardized method of communication between bridges and switches. Most switches and bridges keep track of the layout of a network by building a table of address information. The table tells the bridge or switch which packets to forward and which not to forward. Sometimes bridges and switches contain conflicting address information, and, as a result, frames can become trapped in an endless loop of bridges and switches. In this case a group of bridges or switches mistakenly forwards the packet in a circle among themselves without ever forwarding the packet to the actual segment to which the packet is addressed. The spanning tree algorithm works to help bridges and switches communicate in order to both avoid loops and operate efficiently.


Type/Length Field
Lesson Objective

Identify the purpose of the type/length field

Ethernet frame composition: type/length field

Ethernet frames come in slightly different variants, with the main difference being the type of information that is placed in the 2-byte field following the source address field. Generally speaking, this portion of the frame is used to designate either the size of the data field or the upper-layer protocol to which the contents of the data field should be delivered (i.e., IPX, IP, DECnet, AppleTalk, etc.). The contents and purpose of the type/length field are discussed in detail in Module 5, which discusses four of the most common Ethernet frame types.


Data Field
Lesson Objective

Identify the purpose of the data field

Ethernet frame composition: data field

Minimum Length Requirements

The data field contains the data packet that will be delivered to an upper-layer protocol, such as TCP/IP, IPX or DECnet. To ensure correct detection of collisions on the network, the total length of an Ethernet frame cannot be less than 64 bytes. The destination and source address fields, the type/length field and the frame check sequence field together account for 18 bytes. So the data field can never be less than 46 bytes long. Frames received that are less than 64 bytes long are usually the byproducts of collisions and are called runts. Switches and bridges, which examine an entire frame before forwarding it, immediately discard all runts, preventing them from propagating throughout the rest of the network. NICs, as well, immediately discard all frames that do not meet the 64-byte minimum length requirement.

Transmitting Small Amounts of Data

What happens, though, if a station has less than 46 total bytes to transmit? In a Telnet session, for instance, a single keystroke may be the only data a station needs to transmit. In this case, the upper-layer protocol that requests Ethernet to transmit the data simply pads the remaining portion of the data field with extra bytes until the 46-byte minimum requirement is met. At the receiving end, the same upper-layer protocol is then required to remove the extra bytes before passing the content of the data field (in this case, a single keystroke) to the application.

Transmitting small amounts of data.
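
A minimal sketch of the padding rule described above. The lesson only says that "extra bytes" are added; zero bytes are used here purely for illustration.

    MIN_DATA_FIELD = 46   # minimum data field length, in bytes

    def pad_payload(payload):
        """Pad a small payload (for example, a single Telnet keystroke)
        up to the 46-byte minimum data field length."""
        if len(payload) >= MIN_DATA_FIELD:
            return payload
        return payload + b"\x00" * (MIN_DATA_FIELD - len(payload))

    keystroke = b"a"
    print(len(pad_payload(keystroke)))   # 46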


Data Field (Continued)
Transmitting Larger Amounts of Data

The upper-limit size of the data field is 1500 bytes, meaning that the total length of an Ethernet frame, including the DA, SA, type/length and FCS fields, cannot exceed 1518 bytes. The 1500-byte data field limit denotes Ethernet's compromise between transmission efficiency on the one hand and network availability on the other.

Striking a Balance

If Ethernet were to allow the transmission of very large frames, of, say, 65,536 bytes or more, Ethernet's 18-byte overhead for each frame would become comparatively insignificant next to the data-packet portion of the frame. It would take, however, approximately 52 ms to transmit a frame of this size at 10Mbps. By computer standards 52 ms is a long time and could lead to unacceptably long wait times for other stations. Though smaller upper limits on frame size can substantially decrease wait times for access to the network, they unfortunately increase the overall amount of overhead on the wire, and thus decrease the total amount of data throughput that the network can provide. Using Ethernet's minimum 64-byte frame size, with a minimum 46 bytes for data packets, overhead for small packets is 28% of the total transmission. For maximum-sized, 1518-byte Ethernet frames, overhead falls to about 1.2% of the total transmission. While it is true that certain applications would benefit from a larger maximum Ethernet frame size, many other applications would benefit equally well from a smaller frame size. As Ethernet speeds continue to increase, however, from 10Mbps to 100Mbps, and now 1000Mbps, the relative efficiency or inefficiency of Ethernet compared to other possible networking schemes continues to diminish in importance.
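
The figures quoted above follow from simple arithmetic: 18 bytes of DA, SA, type/length and FCS overhead per frame (the preamble is not counted here), and 8 bits per byte at 10Mbps.

    OVERHEAD_BYTES = 6 + 6 + 2 + 4        # DA + SA + type/length + FCS = 18 bytes

    for frame_size in (64, 1518):
        print(frame_size, "bytes:", f"{OVERHEAD_BYTES / frame_size:.1%} overhead")
    # 64 bytes:   28.1% overhead
    # 1518 bytes:  1.2% overhead

    # Time to send a hypothetical 65,536-byte frame at 10Mbps:
    print(65_536 * 8 / 10_000_000 * 1000, "ms")   # about 52 ms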


Frame Check Sequence
Lesson Objective

Identify the purpose of the FCS field

Ethernet frame composition: frame check sequence

The frame check sequence (FCS) field contains a checksum called a cyclic redundancy check (CRC) that can be used to verify that the frame has not been corrupted in transit. The frame check sequence is simply the result of a complex division problem applied to the contents of the frame. The transmitting station calculates the CRC value as the frame is transmitted and places the result in the FCS field. When the frame is received, the receiving station performs the same calculation and compares the resulting CRC value with the one found in the FCS field. If the two values match, the receiving station accepts the frame. If the two values do not match, the receiving station assumes the frame has been corrupted, and consequently discards it.
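
The check can be sketched as follows. Ethernet's FCS is a 32-bit CRC; the sketch below uses Python's zlib.crc32, which implements the same CRC-32 polynomial, and glosses over the byte-ordering details of the real FCS field.

    import zlib

    def fcs_ok(frame_without_fcs, received_fcs):
        """Recompute the CRC over the received frame contents and compare it
        with the value carried in the FCS field."""
        return (zlib.crc32(frame_without_fcs) & 0xFFFFFFFF) == received_fcs

    contents = b"example frame contents"
    sent_fcs = zlib.crc32(contents) & 0xFFFFFFFF     # value the sender would place in the FCS field
    print(fcs_ok(contents, sent_fcs))                # True
    print(fcs_ok(contents + b"!", sent_fcs))         # False: the frame was altered in transit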

Note that the FCS field does not constitute a security mechanism. A sophisticated user with malicious intentions and the appropriate tools could change the frame, recalculate the CRC value and place it in the FCS field. The receiver would then be unable to detect that the frame has been tampered with. The FCS is intended only to protect against errors caused by noise on the transmission medium or by malfunctioning network equipment.


Interframe Gap
Lesson Objective

Identify the purpose of having an interframe gap

After a frame has been successfully transmitted and received, Ethernet specifications require that an interframe gap of at least 96 bit-times pass before any station on the network can transmit the next frame. At 10Mbps, 96 bit-times translates to 9.6 µs. The reason for the 9.6 µs interframe gap is to allow enough time for the station that last transmitted to cycle its circuitry from transmit mode to receive mode. Without the interframe gap, it is possible for a station that has just completed a transmission to miss a frame destined for it because it has not yet cycled back into receive mode. Even though modern Ethernet devices are capable of cycling from send mode to receive mode in a shorter time than the 9.6 µs allowed, the 96 bit-time interframe gap specification is still a part of the official standard, and is included in the specifications for Fast Ethernet as well. At Fast Ethernet speeds, however, 96 bit-times translates to 960 ns (nanoseconds), one-tenth of the time for 10Mbps Ethernet.

Some Ethernet manufacturers currently market NICs (and Ethernet switches, as well) that use an interframe gap that is smaller than the 96 bit-times specified by IEEE 802.3 standards. By shortening the interframe gap, manufacturers can claim an increased overall network throughput compared to their competitors. Network administrators must be cautious, however, when devices that meet the 9.6 µs specification are combined with devices that use a shorter interframe spacing. Mixing devices that use different interframe gap times increases the potential for 'dropped' packets, which in turn results in upper-layer protocol initiated retransmissions. Dropped packets can significantly reduce overall network performance, and, in certain instances, cause client stations to lose their connection to the network.

Optional Exercise**

Check your understanding of an Ethernet Frame! This interactive exercise allows you to apply your knowledge of frame fields and sizes.
**This exercise requires the Macromedia Shockwave* plugin.
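
The gap times quoted in this lesson follow directly from the 96 bit-time specification; the Gigabit Ethernet line below is included only as an extrapolation of the same arithmetic.

    GAP_BIT_TIMES = 96

    for name, rate_bps in (("10Mbps Ethernet", 10e6),
                           ("Fast Ethernet", 100e6),
                           ("Gigabit Ethernet", 1e9)):
        print(f"{name}: {GAP_BIT_TIMES / rate_bps * 1e9:.0f} ns")
    # 10Mbps Ethernet: 9600 ns (9.6 microseconds)
    # Fast Ethernet: 960 ns
    # Gigabit Ethernet: 96 ns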


ETHERNET FRAME TYPES
Module Description

Since the publication of the original DIX Ethernet standard in 1980, a variety of Ethernet frame types have been developed. This module identifies these variant frame types and the situations in which they are most commonly used.

Module Objectives

- Identify the structure of variant Ethernet frame types
- Identify the reasons behind the development of each variant frame type
- Identify naming conventions for variant Ethernet frame types


Overview
Lesson Objectives
- Identify the concept of data overhead
- Identify the use of multiple frame types

From the point of view of a network-layer protocol, everything in an Ethernet frame, excluding the data field, must be considered overhead. Generally speaking, the amount of overhead a frame uses is directly related to the sophistication of the services it can support. Each of the frame types covered in this module attempts to strike a useful balance between efficiency and sophistication. The Ethernet II frame, for example, includes only 26 bytes of overhead information (counting the preamble) for each data packet. The Ethernet SNAP frame, on the other hand, includes 34 bytes of overhead information (counting the preamble) for each data packet. Though the Ethernet II frame provides a more efficient use of network bandwidth, the Ethernet SNAP frame is able to support a broader range of upper-layer protocols, including AppleTalk. Many networks support several frame types at the same time. A network using TCP/IP, IPX and AppleTalk, for example, may support up to three different frame types: Ethernet II for TCP/IP, Novell 802.3 for IPX, and Ethernet SNAP for AppleTalk support. The purpose of this module is to describe the overhead information unique to each Ethernet frame type, and the reasons why particular information is included in some frames and excluded in others.


Ethernet II Frame
Lesson Objectives
- Identify the structure of the Ethernet II frame
- Identify the reasons behind the development of the Ethernet II frame

The original Ethernet standard published by Digital, Intel and Xerox defines the format for the Ethernet II frame. (What might be called the Ethernet I frame was used only in the developmental stages of Ethernet and was not published as part of the jointly developed standard.) As illustrated in the figure below, the Ethernet II frame includes a 2-byte type field that immediately follows the source address.

Ethernet II Frame

The type field is used to contain a value called an EtherType that identifies the type of data in the data field. By assigning a unique value to each upper-layer protocol, the type field indicates to the receiving station which protocol (i.e., IPX, IP, DECnet, AppleTalk, etc.) should handle the contents of the data field. If the data field contains IP data, for example, the EtherType value is set to 0x0800. If the data field contains IPX data, the EtherType value is set to 0x8137, and for AppleTalk the value is 0x809B. All EtherType values are equal to decimal numbers greater than 1500, which serves to distinguish Ethernet II frames from IEEE 802.3 frames, which replace the type field with a size field (whose value is always 1500 or less).

Generally speaking, the Ethernet II frame is the most commonly used frame type. TCP/IP-based networks use the Ethernet II frame almost exclusively, as does DECnet. Many Novell networks, as well, are configured to use Ethernet II frames.

Optional Exercise**

Check your understanding of an Ethernet II Frame! This interactive exercise allows you to apply your knowledge of frame fields.
**This exercise requires the Macromedia Shockwave* plugin.


IEEE 802.3 Ethernet Frame with IEEE 802.2 LLC Header
Lesson Objectives
- Identify the structure of the IEEE 802.3 frame
- Identify the 802.2 LLC header
- Identify the reasons behind the development of the IEEE 802.3 frame
- Identify naming conventions for variant frame types

Most networks can be configured to use either Ethernet II or IEEE 802.3 frames, and sometimes both. For example, Novell networks now use the IEEE 802.3 frame by default for IPX/SPX packets and the Ethernet II frame for TCP/IP packets, though some network administrators prefer to use the Ethernet II frame for both. A network administrator's decision to support the Ethernet II frame but not the IEEE 802.3 frame, or vice versa, is highly dependent upon the particular circumstances and is often a matter of mere preference. Generally speaking, even though the IEEE 802.3 frame is the officially recognized international standard, the Ethernet II frame is still the most widely implemented and widely supported frame type. The IEEE 802.3 frame replaces Ethernet II's 2-byte type field with a 2-byte length field and adds a 3-byte LLC (logical link control) header to the data field. The figure below illustrates the format of the IEEE 802.3 frame.

Ethernet IEEE 802.3 frame with IEEE 802.2 LLC header

You will remember that the IEEE 802 standard breaks up the OSI model's data link layer into the MAC (medium access control) sublayer and the LLC sublayer. While the IEEE 802.3 standard specifies operations for the physical layer and the MAC sublayer, the IEEE 802.2 standard specifies operations for the LLC sublayer. The LLC header, thus, contains information that enables the LLC layer to hand off the contents of the data field to the appropriate network-layer protocol. The LLC header is actually comprised of three 1-byte fields: the destination service access point (DSAP), the source service access point (SSAP) and the control field.

Length Field

The length field identifies the combined length of the LLC and data fields in number of bytes. The value of the length field will always be a number between 46 and 1500. Since EtherType values always equal a number greater than 1500, and 802.3 length values always equal a number 1500 or less, it is possible for most network hardware and software to distinguish between Ethernet II and IEEE frame types, and thus support both types of frames running on the same network.
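
The discrimination rule described above can be sketched in a few lines; the EtherType shown for IP is the value given in the previous lesson.

    def interpret_type_length(value):
        """Interpret the 2-byte field that follows the source address."""
        if value <= 1500:
            return f"IEEE 802.3 frame, length field = {value} bytes"
        return f"Ethernet II frame, EtherType = 0x{value:04X}"

    print(interpret_type_length(0x0800))   # Ethernet II frame, EtherType = 0x0800 (IP)
    print(interpret_type_length(46))       # IEEE 802.3 frame, length field = 46 bytes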


IEEE 802.3 Ethernet Frame with IEEE 802.2 LLC Header (Continued)
DSAP and SSAP Fields

The DSAP (destination service access point) field serves the same purpose as the EtherType used in the Ethernet II frame. The DSAP field identifies which protocol should handle the contents of the data field. For instance, if the data field contains a NetWare* IPX/SPX packet, the DSAP field's hexadecimal value is set to 0xE0. The SSAP (source service access point) field identifies the upper-layer protocol that sent the data packet. Because the source protocol and the destination protocol are typically the same for each data field, the corresponding values for the SSAP and DSAP fields are also typically the same.

Ctrl Field

The ctrl field's value distinguishes between different types of LLC headers. The operation of the LLC layer is not, however, specified by Ethernet standards.

IEEE 802.3 Naming Conventions

Because a number of frame types are based on the IEEE 802.3 frame, commonly used naming conventions for both the IEEE 802.3 frame and its variants can be confusing. For example, certain networking vendors refer to the IEEE 802.3 frame as the IEEE 802.3/802.2 frame, while other vendors, such as Novell, refer to the IEEE 802.3 frame simply as the IEEE 802.2 frame. The following table correlates commonly used naming conventions with technically correct terminology.

Common Name: Technical Description
- DIX frame: Ethernet II frame
- 802.3/802.2 frame: IEEE 802.3 frame with 802.2 header
- 802.2 frame: IEEE 802.3 frame with 802.2 header
- SNAP frame: IEEE 802.3 frame with 802.2 header and Sub-Network Access Protocol (SNAP) encapsulation
- Novell 802.3 frame (also called "802.3 raw"): a frame type that uses the structure of the IEEE 802.3 frame without the 802.2 header; it is used only on Novell networks

Optional Exercise**

Check your understanding of an IEEE 802.3 Ethernet Frame with IEEE 802.2 LLC Header! This interactive exercise allows you to apply your knowledge of frame fields.

**This exercise requires the Macromedia Shockwave* plugin.


IEEE 802.3 Frame with SNAP Encapsulation
Lesson Objectives
- Identify the structure of the SNAP frame
- Identify the reasons behind the development of the SNAP frame

Driven largely by the TCP/IP community, the addition of the sub-network access protocol (SNAP) to the IEEE 802.3 frame was designed to expand the number of upper-layer protocols that Ethernet can support. Manufacturers have not, however, implemented SNAP on even a modest scale. Today, the Ethernet SNAP frame is most commonly used to support Ethernet Macintosh* clients running AppleTalk.

Expanding Protocol Support

Because the DSAP and SSAP fields of the IEEE 802.2 frame are only eight bits (one byte) long, with two of the eight bits reserved for other purposes, the IEEE 802.2 frame can assign unique values to only 64 protocols. In order to provide support for more than 64 protocols, the Ethernet SNAP frame includes an additional 5 bytes of header information, usually called the SNAP ID. The SNAP ID is divided into two parts: a 3-byte organizationally unique identifier (OUI), sometimes called a code field, and a 2-byte type field (equivalent to the EtherType field in the Ethernet II frame).

Ethernet SNAP Frame

On networks that support multiple frame types, Ethernet SNAP frames are recognized by the content of the DSAP, SSAP and ctrl fields. For Ethernet SNAP frames, the hexadecimal value of both the DSAP and SSAP fields equals 0xAA, and the value of the ctrl field equals 0x03.

The SNAP ID Fields

The purpose of the OUI, or code, field is to give individual vendors the ability to assign their own unique values for protocols running on their own equipment. In practical use, however, with the exception of certain Apple protocols, the content of the code field is always set to "00-00-00" and the value in the "type" field is the same value that is used in the Ethernet II type field.

Optional Exercise**

Check your understanding of an IEEE 802.3 Frame with SNAP Encapsulation! This interactive exercise allows you to apply your knowledge of frame fields.
**This exercise requires the Macromedia Shockwave* plugin.
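
As a quick sketch of the SNAP recognition rule described in this lesson, the following checks the first three bytes of the data field (the LLC header) for the values that mark SNAP encapsulation; the helper name is hypothetical.

    def is_snap(llc_header):
        """True when DSAP = 0xAA, SSAP = 0xAA and ctrl = 0x03, the values
        that identify SNAP encapsulation."""
        return (llc_header[0] == 0xAA and
                llc_header[1] == 0xAA and
                llc_header[2] == 0x03)

    print(is_snap(bytes([0xAA, 0xAA, 0x03])))   # True: SNAP frame
    print(is_snap(bytes([0xE0, 0xE0, 0x03])))   # False: e.g. NetWare IPX over 802.2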


Novell 802.3 Frame
Lesson Objectives
- Identify the structure of the Novell 802.3 frame
- Identify the reasons behind the development of the Novell 802.3 frame

Novell Ethernet 802.3 frame

Like the IEEE 802.3 frame, the Novell 802.3 frame differs from the Ethernet II frame by using a length field in place of Ethernet II's type field. The Novell 802.3 frame is used exclusively on Novell networks. The Novell 802.3 frame is often called "802.3 raw" because it does not use the 802.2 LLC header in the data field. Because the Novell 802.3 frame does not provide a "type" or "DSAP" field for encoding protocol information, the Novell 802.3 frame cannot support upper-layer protocols of different types, and is used for Novell IPX data only. As the figure above shows, the Novell 802.3 frame contains less overhead than either the IEEE 802.3 frame or the SNAP frame and the same amount of overhead as the Ethernet II frame. Because of its low overhead, the Novell 802.3 frame is extremely efficient. Its lack of sophistication, however, has caused even many Novell network administrators to resist its use.

Optional Exercise**

Check your understanding of a Novell 802.3 Frame! This interactive exercise allows you to apply your knowledge of frame fields.
**This exercise requires the Macromedia Shockwave* plugin.


FULL-DUPLEX ETHERNET
Module Description

This module explains how full-duplex Ethernet works, where it is commonly implemented and what its benefits are. Lesson 6.1 compares full-duplex and half-duplex operation by using analogies that include telephone and two-way radio communication models. Lesson 6.2 explains how full-duplex links can be used to increase overall network throughput, and Lesson 6.3 explains how distance limitations are affected by full-duplex operation. Lesson 6.4 concludes the module by briefly identifying the role full-duplex plays in the use of Gigabit Ethernet.

Module Objectives

- Identify how full-duplex Ethernet works
- Identify the benefits and limitations of full-duplex Ethernet
- Identify how full-duplex Ethernet affects distance limitations
- Identify connection limits of full-duplex Ethernet
- Identify the use of full-duplex with Gigabit Ethernet


Full-duplex and Half-duplex Compared
Lesson Objective

Identify how full-duplex Ethernet works

To understand the difference between full-duplex and half-duplex communication modes, consider the difference in convenience between using modern telephones and using two-way radios. Two-way radio operates in half-duplex communication mode, where all communicants share a single broadcast channel and, when one person speaks, all must listen if anybody is to be heard at all. Ethernet, of course, works in a similar way and uses the CSMA/CD algorithm to establish rules for sharing the same broadcast channel. Modern telephones, on the other hand, operate in full-duplex mode using two broadcast channels simultaneously. Full-duplex mode allows one person's transmission channel to function exclusively as the other person's receive channel, and vice versa. Using full-duplex communication, both parties can speak and listen at the same time without encountering the garbled transmissions that would otherwise result from simultaneous broadcasts.


The Benefits of Full-duplex Ethernet
Lesson Objectives
- Identify the benefits of full-duplex Ethernet
- Identify connection limits of full-duplex Ethernet

Using twisted-pair, twinax, or fiber optic cabling and full-duplex compatible NICs, full-duplex Ethernet allows two stations to transmit and receive data simultaneously.

Full-duplex Ethernet operation. In this case, two computers are directly connected using a medium type that has two separate channels. Full-duplex operation allows Station A to transmit on Station B's receive channel at the same time Station B is transmitting on Station A's receive channel.

Full-duplex links not only double potential throughput, but also eliminate collisions, as well as the need for each station to wait until the other station finishes transmitting. If reads and writes on a full-duplex link are symmetric, data throughput can be doubled. In actual usage, however, bandwidth improvements are more modest. Full-duplex Ethernet and Fast Ethernet links are particularly useful for server-to-server, server-to-switch and switch-to-switch connections. On a switch equipped with a full-duplex port, a packet arriving at a half-duplex port can be relayed on the full-duplex port as soon as the switch has determined that the incoming packet on the half-duplex port has not been damaged by a collision. Additionally, packets arriving from a full-duplex port can be forwarded as soon as the destination is determined, since there are no collisions on a full-duplex link. With both Ethernet and Fast Ethernet, full-duplex segments are, however, limited to a single connection between two devices (for example, between a server and a switch). For three or more devices attached to the same segment, only half-duplex operation is possible.


Full-duplex and Distance Limitations
Lesson Objective

Identify the distance limitations of full-duplex Ethernet

Because full-duplex operation eliminates the need to detect collisions, distances between devices can be extended to the full length that the medium is able to transmit a recognizable data signal. For example, with Fast Ethernet running on multimode fiber the maximum distance between devices is extended from about 400 m to approximately 2000 m. Due to UTP cable's higher rate of attenuation, the maximum distance between devices for UTP cable is 100 m, the same as for half-duplex.


Full-duplex Mode and Gigabit Ethernet
Lesson Objective

Identify use of full-duplex with Gigabit Ethernet

Gigabit Ethernet has been developed to use full-duplex mode almost exclusively. Running at Gigabit Ethernet speeds, a switch-like device called a buffered distributor will connect multiple full-duplex Gigabit Ethernet devices. Initially, all Gigabit Ethernet devices manufactured will support full-duplex operation.


ETHERNET OPERATION AT 10MBPS
Module Description

The IEEE 802.3 specification designates Ethernet implementation types according to the cabling used and the speed of data transfer. This module covers the specifications for Ethernet operating at 10Mbps over coaxial, unshielded twisted-pair and fiber optic cabling. Lesson 7.1 covers Ethernet designations 10Base-5 and 10Base-2, which specify the use of coaxial and thin coaxial cable. Lesson 7.2 covers the designation 10Base-T, which specifies the use of unshielded twisted-pair cabling, the most widely used medium for new Ethernet implementations. Lesson 7.3 covers the designation 10Base-FL, which specifies the use of fiber optic cabling. Lastly, Lesson 7.4 reviews a number of configuration guidelines for 10Mbps Ethernet implementations.

Module Objectives

- Identify the characteristics of 10Base-5 and 10Base-2
- Identify the topology and limitations of 10Base-5 and 10Base-2 networks
- Identify the differences and similarities between 10Base-T and 10Base-5/10Base-2
- Identify the characteristics of 10Base-T and 10Base-FL
- Identify twisted-pair (UTP) wiring categories
- Identify UTP connectors
- Identify important guidelines to follow when building 10Mbps Ethernet networks
- Identify rules specific to 10Base-FL


10Base-5 and 10Base-2
Lesson Objectives
- Identify the differences and similarities between 10Base-T and 10Base-5/10Base-2
- Identify the characteristics of 10Base-FL
- Identify important guidelines to follow when building 10Mbps Ethernet networks

The 10Base-5 designation was the first Ethernet implementation type to be defined by Ethernet standards. 10Base-5 designates a network that is implemented at 10Mbps, uses baseband transmission, and can carry a signal a maximum distance of 500 m without the use of a repeater. Unfortunately, cabling used for 10Base-5 is rigid, difficult to work with and expensive to install. 10Base-2 was, however, intended to be easier to use. Because it is thinner, 10Base-2 cable is cheaper to buy and to install than 10Base-5. 10Base-2 cable segments can only be 185 m (about 600 ft) long. Both 10Base-5 and 10Base-2 networks use a physical bus topology. On 10Base-5 networks, workstations attach to the bus cable using drop cables over distances up to 40 m (about 130 ft) long. On 10Base-2 networks, a computer’s network interface card attaches directly to the bus, using a T-connector. Both types of coaxial bus cables require terminating resistors placed at each end of the cable. Without terminators, signals are reflected back into the medium from the end of the bus cable, causing each transmission to collide with itself. The figure below shows how computers are attached to the bus cable on 10Base-5 and 10Base-2 networks.

10Base-5 and 10Base-2 bus


10Base-5 and 10Base-2 (Continued)
Because the 10Base-5 and 10Base-2 specifications require networks to use the physical bus topology, these implementations present a number of limitations, including the following:
- A cable or connection problem anywhere on the network's bus is likely to cause problems for all users. For instance, if a user on a 10Base-2 cable segment breaks the bus by removing his or her workstation's T-connector, the users on that cable segment lose access to the network, as shown in the animation below.
- There is no central location from which users can be added to or removed from the bus without disrupting the entire network. Adding a new user to a 10Base-2 cable requires that the cable be cut to insert a new T-connector. If a workstation on a 10Base-5 network must be moved more than 40 m from the cable, the bus cable must be moved to accommodate that workstation.

If a user on a 10Base-2 cable segment breaks the bus by removing his or her workstation’s T-connector, the users on that cable segment lose access to the network.


10Base-T
Lesson Objectives
- Identify the differences and similarities between 10Base-T and 10Base-5/10Base-2
- Identify the characteristics of 10Base-T
- Identify twisted-pair (UTP) wiring categories
- Identify UTP connectors

The IEEE addressed the implementation and maintenance difficulties of Ethernet bus topologies with specifications for Ethernet 10Base-T. The 10Base-T designation not only includes the use of inexpensive, unshielded twisted-pair cabling (which is similar to telephone wire), but it also specifies the use of a star topology, which makes both the implementation and maintenance of Ethernet 10Base-T networks significantly easier compared to 10Base-2 and 10Base-5. In addition, because 10Base-T uses two wire pairs, one for transmitting data and one for receiving data, 10Base-T makes full-duplex operation possible. 10Base-2 and 10Base-5, on the other hand, allow only half-duplex operation. The definitive advantages of 10Base-T over coaxial-based networks have made it the most widely implemented Ethernet standard. 10Base-T networks use Category 3, or higher, unshielded twisted-pair (UTP) cable. UTP cabling categories are defined in the Electronic Industries Association and Telecommunications Industry Association (EIA/TIA) 568 cabling standards, which currently include 5 categories for UTP cable. Categories are distinguished by the quality of the cable, or the speed at which reliable communication can take place. In appearance, all UTP cables look similar to telephone wire. The figure below shows a Category 5 UTP cable.

Category 5 Unshielded twisted-pair


10Base-T (Continued)
The table below lists all five UTP cabling categories and their associated performance standards:

UTP Category        Rated Performance         Applications
Category 1 (cat 1)  No performance criteria   Used in some older telephone systems.
Category 2 (cat 2)  Rated to 1MHz             Used for telephone wiring.
Category 3 (cat 3)  Rated to 16MHz            Used for 10Base-T. Widely deployed, especially in older installations.
Category 4 (cat 4)  Rated to 20MHz            Used for 10Base-T and Token Ring.
Category 5 (cat 5)  Rated to 100MHz           Used for 10Base-T, 100Base-T (Fast Ethernet), and other high-speed network technologies.

On 10Base-T networks each computer is attached to a central hub using UTP cables over distances up to 100 m (328 ft) long. When the maximum 100 m distance is used, the cable running from the wall plate to the cable closet should be no longer than 90 m, leaving 10 m for the connection between the computer and the wall plate and for the patch cables used in the wiring closet. Computers are attached to the UTP cable by an RJ-45 style connector, shown in the figure below.

RJ-45 jack and connector


10Base-T (Continued)
The hubs at the center of a 10Base-T network are actually multiport repeaters. A signal from one station enters the hub on one port and is repeated on all the other hub ports as illustrated in the figure below.

Repeater hub operation

Because 10Base-T networks use a star topology with hubs at the center, 10Base-T networks provide several advantages over 10Base-5 and 10Base-2.

1. The hubs repeat only valid signals, so if there is a problem on a cable, it affects only the workstation directly attached to that cable, as shown in the animation below.
2. With a hub, administrators can add or remove computers from the network without disrupting other computers.
3. On 10Base-T networks, both hubs and NICs show whether a connection is active or not by using green LEDs that give users live feedback about the status of the connection. This makes troubleshooting a 10Base-T network much simpler than troubleshooting 10Base-5 and 10Base-2 networks.


10Base-FL
Lesson Objective

Identify the characteristics of 10Base-FL

The 10Base-FL specification resembles 10Base-T in several respects. Each computer on a 10Base-FL network connects to a central hub. In addition, 10Base-FL can operate in full-duplex mode. The main difference between 10Base-T and 10Base-FL is 10Base-FL's use of optical fiber cable instead of UTP. Fiber optic cabling is most commonly used to connect hubs to other hubs. The fiber used is multimode 62.5/125 fiber. Each fiber connects to networking equipment using a bayonet-type connector known as an ST connector. Optical fiber cable (specifically, the transmitters and receivers designed to work with fiber) is more expensive than UTP cable. However, optical fiber cable can span much greater distances than UTP cable. Thus, on 10Base-FL networks, full-duplex links between hubs can be up to 2,000 m (about 6,560 ft). Moreover, optical fiber cable can potentially support future data transmission rates of several hundreds of megabits per second.


62.5/125 means that the fiber's core is 62.5 microns in diameter, with an outer cladding of 125 microns. Multimode fiber has a relatively large core diameter and uses inexpensive light emitters and receivers. It is the type of fiber optic cabling most often used on LANs. By contrast, monomode fiber has a narrow core diameter and uses expensive transmitters and receivers. It can be used over longer distances than multimode fiber cable.


Implementation: 10Mbps Ethernet Configuration Guidelines
Lesson Objectives
- Identify important guidelines to follow when building 10Mbps Ethernet networks
- Identify rules specific to 10Base-FL

In addition to the Ethernet specifications described in the previous lessons, there are a number of general guidelines that must be followed when implementing a 10Mbps Ethernet LAN.

The 5-4-3 Rule

All 10Mbps Ethernet networks must follow the 5-4-3 rule. (The 5-4-3 rule applies only to 10Mbps Ethernet. Fast Ethernet and Gigabit Ethernet's faster wire speeds reduce the maximum allowable distance between stations, and also reduce the number of repeaters that can be used in a single collision domain.) The 5-4-3 rule states that a single 10Mbps collision domain can consist of five cable segments connected by four repeaters. Only three of the cable segments, however, may be populated with network stations. The figure below shows one possible configuration that the 5-4-3 rule allows.

This configuration consists of 5 total segments of 100 m each, 4 repeaters, with only 3 of the segments populated with network devices. The longest distance between any two stations is between the PCs on the left and the servers on the right. The total network diameter is 500 m.

Because only three of the five Ethernet segments are allowed to be populated with network devices, one of the four repeaters must serve only to connect one repeater to another, as the third repeater from the left does in the figure above. At first, this repeater may seem unnecessary. Why couldn't you simply connect the second and fourth repeaters to each other directly? The simplest answer is that the second and fourth repeaters are too far apart (200 m) to hear each other.
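
A small sketch of the 5-4-3 rule applied to a single path between two stations; the argument names are hypothetical, and the check ignores cable lengths, which must be verified separately.

    def path_obeys_5_4_3(cable_segments, repeaters, populated_segments):
        """Check one path between two stations against the 5-4-3 rule:
        at most 5 cable segments, 4 repeaters, and 3 populated segments."""
        return (cable_segments <= 5 and
                repeaters <= 4 and
                populated_segments <= 3 and
                populated_segments <= cable_segments)

    print(path_obeys_5_4_3(5, 4, 3))   # True: the configuration shown in the figure above
    print(path_obeys_5_4_3(6, 5, 3))   # False: too many segments and repeaters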


Implementation: 10Mbps Ethernet Configuration Guidelines (Continued)
As Ethernet signals travel across the wire, they diminish in strength until they are no longer recognizable as valid data transmissions. Attenuation is the technical term used to describe this natural degradation in signal quality as signals travel across the network medium. Even fiber optic transmissions are affected by attenuation. Repeaters serve to restore data signals to their original strength so that they may be heard at distances that would otherwise not be possible. The two following figures (on this page and the next) illustrate the diminishing effect of attenuation, as well as the restorative effect of Ethernet repeaters. On 10Base-5 and 10Base-2 networks, repeaters are inserted between the individual cable segments, chaining the cables together. The resulting topology is shown in the figure below.

Repeater use on 10Base-5 and 10Base-2 networks.

10Base-5 and 10Base-2 Ethernet segments that are connected using repeaters form a single collision domain. Inside a collision domain, all stations must contend for access to the shared medium. Collision domains are bounded by switches and routers. For networks that have a router or a switch, each of the network segments that connect to a switch or router port belongs to its own collision domain. Inside a single collision domain, the following simple configuration rules apply:
- There cannot be more than five cable segments and four repeaters between any two stations in a collision domain.
- Only three of these segments can be multistation segments (e.g., 10Base-2 or 10Base-5). The last two segments must connect only a station to a hub or a hub to a hub.


The word attenuation comes from the Latin word attenuatus, which means 'made thin.'


Implementation: 10Mbps Ethernet Configuration Guidelines (Continued)
The same rules apply to 10Base-T networks; there can be no more than four 10Base-T repeater hubs between any two stations on the network. A typical configuration is shown in the figure below. All the servers and workstations in this figure are in the same collision domain, sharing the same half-duplex transmission medium.

Network with multiple repeater hubs

Even though there are six hubs in the network in the figure above, the 5-4-3 rule is not violated because there are no paths between stations in the network that involve more than three repeater hops.

10Base-FL-Specific Rules

In addition to following the 5-4-3 rule described above, 10Base-FL networks must be built according to the rules listed below:
- With four repeaters and five cable segments, 10Base-FL segments must not exceed 500 m (1,640 ft).
- With three repeaters and four cable segments, 10Base-FL segments must not exceed 1,000 m (3,280 ft).
- With two repeaters and three cable segments, 10Base-FL segments can be up to 2,000 m (6,561 ft).


FAST ETHERNET
Module Description

Fast Ethernet operates at a data transfer speed of 100Mbps. Lesson 8.1 explains that because of upward trends in network growth, Fast Ethernet will soon surpass 10Mbps Ethernet in sales. Lesson 8.2 discusses some of the basic differences between 10Mbps Ethernet and Fast Ethernet operations. Lessons 8.3 and 8.4 cover Ethernet types 100Base-TX and 100Base-FX. Lesson 8.5 includes a comprehensive discussion of Fast Ethernet implementation guidelines, and Lesson 8.6 concludes this module by describing how Fast Ethernet's auto-negotiation feature enables 10/100Mbps devices to automatically configure themselves for either 10Mbps or 100Mbps operation.

Module Objectives

- Identify reasons for the development of Fast Ethernet
- Identify the similarities and differences between 10Mbps Ethernet and Fast Ethernet
- Identify the basic characteristics of 100Base-TX and 100Base-FX
- Identify and use simple Fast Ethernet configuration guidelines
- Identify advanced Fast Ethernet configuration calculations
- Identify how the limitations to the size of a Fast Ethernet network can be avoided through the use of Ethernet switches
- Identify the purpose of auto-negotiation


The Growth of Fast Ethernet
Lesson Objective

Identify reasons for the development of Fast Ethernet

In the early 1990s, it became clear that 10Mbps Ethernet implementations were not fast enough for many larger networks. Network backbones, in particular, were becoming clogged with traffic. While other high-speed LAN technologies, such as FDDI (Fiber Distributed Data Interface), existed, they represented, for most companies, a significant challenge to implement and maintain, and, for many companies, technologies such as FDDI and ATM were simply too expensive. In 1995, however, with the IEEE publication of the 100Mbps Fast Ethernet specification, companies soon had a relatively inexpensive way to significantly increase the speed of their high-traffic links. With Fast Ethernet, organizations can install high-speed LAN segments at a very reasonable cost. And because it uses the same basic technology as 10Mbps Ethernet, Fast Ethernet equipment is easy to install and manage. As shown in the figure below, Fast Ethernet will soon become the most widely used Ethernet implementation, with sales of 100Mbps network interface cards expected to surpass sales of 10Mbps cards in 1998. Many network interface cards already support both 10Mbps and 100Mbps transmission rates, and the prices of Fast Ethernet hub and switch ports are dropping rapidly.

IDC World-wide adapter market forecast. Source: IDC


10Mbps Ethernet vs. Fast Ethernet
Lesson Objective

Identify the similarities and differences between 10Mbps Ethernet and Fast Ethernet

In almost all respects, Fast Ethernet is simply Ethernet scaled by a factor of ten. Like Ethernet, Fast Ethernet uses the CSMA/CD algorithm to control access to a shared broadcast medium. Ethernet frame types are also the same on 10Mbps and Fast Ethernet networks. With Fast Ethernet, the interframe gap is still 96 bit times, but because transmission speeds are multiplied by ten, the interframe gap is only 960 ns instead of 9.6 µs. The major difference between 10Mbps Ethernet and Fast Ethernet is that the maximum diameter of Fast Ethernet networks is smaller than the maximum diameter of 10Mbps Ethernet networks. On Fast Ethernet networks, stations must still be able to detect collisions within the first 512 bits transmitted, yet since the data transmission rate is ten times as fast, stations on a Fast Ethernet network must be ten times as close in order to detect collisions within the same number of bit times as for 10Mbps. 10Mbps Ethernet's maximum distance of 2,500 m between stations is reduced to 250 m for Fast Ethernet. Another difference between the two technologies is that on Fast Ethernet networks, there can be only one or possibly two repeaters or hubs between transmitting stations.
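
The scaling described above can be seen directly in the collision window: 512 bits takes one-tenth as long to transmit at 100Mbps, so the allowable network diameter shrinks by roughly the same factor. A sketch of the arithmetic, using the diameters quoted in this lesson:

    SLOT_BITS = 512   # collisions must be detected within the first 512 bits

    for name, rate_bps, max_diameter_m in (("10Mbps Ethernet", 10e6, 2500),
                                           ("Fast Ethernet", 100e6, 250)):
        window_us = SLOT_BITS / rate_bps * 1e6
        print(f"{name}: collision window {window_us:.2f} microseconds, "
              f"maximum diameter about {max_diameter_m} m")
    # 10Mbps Ethernet: collision window 51.20 microseconds, maximum diameter about 2500 m
    # Fast Ethernet: collision window 5.12 microseconds, maximum diameter about 250 m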


100Base-TX
Lesson Objective
- Identify the basic characteristics of 100Base-TX

100Base-TX is very similar to 10Base-T. Stations on 100Base-TX networks are connected to a central hub using UTP cable. The RJ-45-type connector is also used. The maximum distance between the workstation and the hub is 100 m (328 ft). Like 10Base-T, 100Base-TX provides separate transmit and receive channels, so full-duplex operation is possible. Servers and other high-performance network stations attached using 100Base-TX can transmit at 100Mbps and receive at 100Mbps at the same time, effectively boosting the bandwidth on the link to 200Mbps. However, 100Base-TX requires Category 5 cable, so if an organization wants to upgrade a 10Mbps Ethernet network using Category 3 cable to Fast Ethernet, it must either recable or implement a 100Base-T4 network. A 100Base-T4 network enables an organization to run Fast Ethernet over Category 3 or 4 cables; however, all four wire pairs are required. 100Base-T4 supports only half-duplex operation, and 100Base-T4 equipment is much less common than 100Base-TX equipment.


10Base-T uses a Manchester encoding scheme that results in a digital signal with a fundamental frequency of 10MHz, meaning that the signal can be carried by any cable certified for 15MHz signals or better. Cat 3 is certified for 16MHz, Cat 4 for 20MHz and Cat 5 for 100MHz, so any of these can be used. 100Base-TX uses a different encoding scheme that results in a signal with a fundamental frequency of 31.25MHz, which requires a cable certified for at least 46.875MHz. This makes Cat 5 the only option for 100Base-TX.
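The figures above imply that the required cable rating is roughly 1.5 times the signal's fundamental frequency (10MHz needs 15MHz; 31.25MHz needs 46.875MHz). The sketch below applies that inferred 1.5x margin to the category ratings listed; the factor itself is an assumption drawn from those numbers, not a stated rule.

```python
# Cable bandwidth check, assuming the 1.5x margin implied by the figures
# above (10MHz -> 15MHz required, 31.25MHz -> 46.875MHz required).

CABLE_RATINGS_MHZ = {"Cat 3": 16, "Cat 4": 20, "Cat 5": 100}

def suitable_cables(fundamental_mhz, margin=1.5):
    required = fundamental_mhz * margin
    return [cat for cat, rating in CABLE_RATINGS_MHZ.items() if rating >= required]

print("10Base-T (10.00 MHz):", suitable_cables(10.0))      # Cat 3, Cat 4, Cat 5
print("100Base-TX (31.25 MHz):", suitable_cables(31.25))   # Cat 5 only
```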


100Base-FX
Lesson Objective
- Identify the basic characteristics of 100Base-FX

The specification for Fast Ethernet over optical fiber cable is known as 100Base-FX. Like 10Base-FL, 100Base-FX uses two strands of multimode 62.5/125 fiber. The connectors can be ST connectors, which are also used on 10Base-FL networks, but more commonly, the cheaper SC connector is used. The SC connector is keyed to reduce the risk of accidentally swapping the transmit and receive fibers. 100Base-FX is typically used for one of two reasons:

1. Optical fiber cable spans greater distances than UTP cable; up to 2,000 m (6,561 ft) is possible on full-duplex links.

2. Optical fiber cable can support much higher bandwidths than UTP cable, so if an organization anticipates upgrading to an even faster LAN technology in the future, it might install optical fiber cable.


Implementation: Fast Ethernet Configuration Guidelines
Lesson Objectives
- Identify simple Fast Ethernet configuration guidelines
- Identify advanced Fast Ethernet configuration calculations
- Identify how the limitations to the size of a Fast Ethernet network can be avoided through the use of Ethernet switches

At 100Mbps, collisions must be detected within 5.12 µs. If stations are too far apart or have too many repeaters between them, the timing requirements cannot be met. Both simple and advanced configuration rules can be used to verify that a particular Fast Ethernet network meets configuration requirements.

Simple Configuration Rules

Hubs and repeaters add a small delay (or latency) when an Ethernet frame is received and retransmitted. Using simple configuration rules, hubs are placed in two groups depending on the length of this delay:
- Class I hubs add less than 0.7 µs of latency.
- Class II hubs add less than 0.46 µs of latency.

Because Class II hubs are faster, two Class II hubs are allowed between stations in a single collision domain, but only one Class I hub is permitted. The length of each cable segment is also restricted:
- A UTP segment can be up to 100 m long.
- A (half-duplex) fiber segment can be up to 412 m long.

Simple configuration rules are summarized in Table 8-1.

Hub Type             UTP      Fiber (FX)   UTP and Fiber (FX)
Single Segment       100 m    412 m        N/A
One Class I Hub      200 m    272 m        260 m (100 m UTP)
One Class II Hub     200 m    320 m        308 m (100 m UTP)
Two Class II Hubs    205 m    228 m        216 m (105 m UTP)

Table 8-1. Simple Fast Ethernet configuration rules.

In Table 8-1, you will notice the following:
- With a Class I hub, you can have a maximum distance between any two stations on the repeated segment of 200 m using UTP cable and of 272 m using fiber optic cable. If you combine UTP cable and fiber optic cable, you can have 100 m of UTP and 160 m of fiber.
- With one Class II hub, the maximum distance using UTP is unchanged, since no UTP segment can exceed 100 m. The maximum distance increases to 320 m when fiber is used, because the Class II hub adds less latency than the Class I hub.
- With two Class II hubs, you can have 205 m of UTP cable (e.g., two 100 m segments and a 5 m segment between the hubs). The maximum distance using fiber is 228 m. The maximum distance using fiber and UTP is 216 m (two UTP segments of 100 m and 5 m, plus one 111 m fiber segment).
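For quick checks, the limits in Table 8-1 can be treated as a simple lookup. The following sketch encodes the table as written; the function and key names are our own and purely illustrative.

```python
# Maximum collision-domain diameters from Table 8-1, expressed as a lookup.
# "mixed" means 100 m (or 105 m) of UTP plus the remainder in fiber, as
# described in the bullets above.

MAX_DIAMETER_M = {
    ("single segment",    "utp"):   100,
    ("single segment",    "fiber"): 412,
    ("one class I hub",   "utp"):   200,
    ("one class I hub",   "fiber"): 272,
    ("one class I hub",   "mixed"): 260,
    ("one class II hub",  "utp"):   200,
    ("one class II hub",  "fiber"): 320,
    ("one class II hub",  "mixed"): 308,
    ("two class II hubs", "utp"):   205,
    ("two class II hubs", "fiber"): 228,
    ("two class II hubs", "mixed"): 216,
}

def within_simple_rules(topology, media, span_m):
    """Return True if the end-to-end span fits the simple rules."""
    return span_m <= MAX_DIAMETER_M[(topology, media)]

print(within_simple_rules("one class II hub", "utp", 195))     # True  (limit 200 m)
print(within_simple_rules("two class II hubs", "fiber", 240))  # False (limit 228 m)
```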


Implementation: Fast Ethernet Configuration Guidelines (Continued)
Advanced Configuration Guidelines

Rather than relying on the "canned" configuration rules given above, the exact delay between any two stations can be calculated using the following guidelines. In some cases, these guidelines allow a greater network diameter than the configuration rules given above. The advanced configuration rules rely on calculating the exact delay between any two stations in the network, based on the exact specifications for network interface cards, network cables and hubs.

The delay between two stations in a network can be calculated using the following formula:

Total delay = Hub delay + Cable delay + Network interface delay

To enable a station at one end of the network to detect a collision with a station at the other end, the following inequality must hold true:

2 x Total delay < 5.12 µs

To comply with the Fast Ethernet configuration guidelines, you must ensure that this requirement is fulfilled between any two stations in the network. To perform this calculation, fill in the delay values in the center column in Table 8-2 with the actual values that apply to your network. Sample values are given in the right column.

Component                            Delay    Typical value
Two network interface cards                   0.25 µs x 2
UTP cable                                     0.0055 µs/m (100 m maximum)
Fiber cable                                   0.0050 µs/m (412 m maximum)
Class I hub                                   Less than 0.7 µs
Class II hub(s)                               Less than 0.46 µs
Total delay (sum of the above)
Round-trip delay (Total delay x 2)            Must be less than 5.12 µs

Table 8-2. Calculating round-trip delays.

To be on the safe side, you should add a safety margin to your calculations. If you get very close to the limit, you may have problems later on, for instance if a network component is exchanged for another with a slower response time.

While performing the exact calculation of round-trip delays using the exact specifications of the network adapters, cables and hubs used in the network may in some instances allow you to go beyond the distances specified in the simple configuration rules, this approach cannot in general be recommended, because it adds a significant extra administrative burden to network maintenance. Every time a network component or cable is exchanged for another, you must repeat the calculations above to verify that the maximum delay is still within the specified limits.
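As an illustration of the round-trip calculation, the following sketch plugs the typical values from Table 8-2 into the formula above. The topology in the example (two 100 m UTP runs joined by one Class II hub) and all names are illustrative assumptions, not part of the specification.

```python
# Worst-case round-trip delay check for a shared (half-duplex) Fast Ethernet
# collision domain, using the typical component delays from Table 8-2.

NIC_PAIR_DELAY_US = 0.25 * 2        # two network interface cards
UTP_DELAY_US_PER_M = 0.0055         # UTP cable delay per metre
FIBER_DELAY_US_PER_M = 0.0050       # fiber cable delay per metre
CLASS_I_HUB_US = 0.7                # upper bound per Class I hub
CLASS_II_HUB_US = 0.46              # upper bound per Class II hub
COLLISION_WINDOW_US = 5.12          # 512 bit times at 100Mbps

def round_trip_delay_us(utp_m=0.0, fiber_m=0.0, class1_hubs=0, class2_hubs=0):
    """Return 2 x total one-way delay for the path between two stations."""
    total = (NIC_PAIR_DELAY_US
             + utp_m * UTP_DELAY_US_PER_M
             + fiber_m * FIBER_DELAY_US_PER_M
             + class1_hubs * CLASS_I_HUB_US
             + class2_hubs * CLASS_II_HUB_US)
    return 2 * total

# Example: two stations, each 100 m of UTP to a single Class II hub.
rtt = round_trip_delay_us(utp_m=200, class2_hubs=1)
print(f"round-trip delay = {rtt:.2f} us, ok = {rtt < COLLISION_WINDOW_US}")
# round-trip delay = 4.12 us, ok = True
```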


Implementation: Fast Ethernet Configuration Guidelines (Continued)
Fast Ethernet Switches

By using switches on a Fast Ethernet network, you can build a network much larger than would normally be allowed. Each port on an Ethernet switch forms its own collision domain. The network segments attached to each switch port must still conform to the configuration guidelines, but the total size of a switched 100Mbps network can grow much larger.


Auto-negotiation
Lesson Objective
- Identify the purpose of auto-negotiation

The Fast Ethernet specification defines a process called auto-negotiation that enables Ethernet devices to exchange information about their capabilities, such as the speed and duplex mode at which they operate. Auto-negotiation also provides a method that enables network administrators to:

1. Discover the reason a connection has been refused.
2. Determine what capabilities the network devices have.
3. Change connection speeds.

Making Migrations to Fast Ethernet Easier

Many recently manufactured Ethernet devices, including NICs, hubs and switches, use dual-speed interfaces that allow a single device to operate at either 10Mbps or 100Mbps. Auto-negotiation enables an Ethernet device, such as an NIC, to automatically configure itself for either 10Mbps or 100Mbps mode depending upon the capabilities of the device on the other end of the connection.

Perhaps the most significant benefit of auto-negotiation is that it allows network administrators to incrementally upgrade their network hardware easily, without having to perform manual configurations for each device. For example, a company that cannot afford to upgrade its entire network all at once can pursue an incremental migration by purchasing 10/100 NICs for all new machines, and perhaps a 10/100 hub or switch as well, and then upgrading the NICs of older machines over a period of time. Using auto-negotiation, the 10/100 hub will automatically configure itself to achieve the highest possible performance for the devices attached to it.


Auto-negotiation (Continued)
How Auto-negotiation Works

Auto-negotiation is an extension of the link test methods used by 10Base-T and 10Base-FL to verify the integrity of the link between devices. Auto-negotiation advertises a device's abilities by encoding a 16-bit data packet, called a link code word (LCW), within a burst of 17 to 33 link pulses, called a fast link pulse (FLP) burst. FLP bursts have an approximate duration of 2 ms and are transmitted at 16.8 ms intervals (the same interval as for the normal link pulses used by 10Base-T and 10Base-FL). The link code word contains two fields (called the selector field and the technology ability field), which together serve to identify a device's capabilities.

It may seem that because the fast link pulse and the normal link pulse use the same interval, older devices might not be compatible with auto-negotiation. This is, however, not the case. For example, a 10Base-T device that does not have auto-negotiation capabilities sees fast link pulse bursts simply as a link test signal. A 10Base-T device will respond to the fast link pulse burst with its usual normal link pulse signal. At the other end of the link, a 10/100-capable device will recognize the normal link pulse and choose 10Mbps operation.

Auto-negotiation attempts to find the greatest common denominator for the two devices on the link in the following order of preference:

1. 100Base-TX full-duplex
2. 100Base-T4
3. 100Base-TX
4. 10Base-T full-duplex
5. 10Base-T half-duplex

Once the greatest common denominator of settings is determined, each device equipped with auto-negotiation will configure itself automatically. In certain cases where automatic configurations are not desired, auto-negotiation provides a way for these settings to be overridden manually.
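Conceptually, resolving the greatest common denominator is a simple priority search. The following sketch illustrates the idea using the preference list above; it is not the bit-level link code word exchange, and the ability strings and function names are our own.

```python
# Priority order from the lesson (highest preference first).
PRIORITY = [
    "100Base-TX full-duplex",
    "100Base-T4",
    "100Base-TX",
    "10Base-T full-duplex",
    "10Base-T half-duplex",
]

def negotiate(local_abilities, remote_abilities):
    """Return the highest-priority mode advertised by both ends, or None."""
    common = set(local_abilities) & set(remote_abilities)
    for mode in PRIORITY:            # walk from most to least preferred
        if mode in common:
            return mode
    return None

# Example: a 10/100 NIC linked to a 10Base-T-only hub falls back to 10Mbps.
print(negotiate(
    ["100Base-TX full-duplex", "100Base-TX", "10Base-T half-duplex"],
    ["10Base-T half-duplex"],
))  # -> 10Base-T half-duplex
```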


GIGABIT ETHERNET
Module Description

The first two lessons in this module identify the basic operations of Gigabit Ethernet and some of the reasons that Gigabit Ethernet is needed in the marketplace. Lesson 9.3 covers Gigabit Ethernet implementation strategies, and also introduces a device new to Ethernet technology, the buffered distributor. Lesson 9.4 discusses Gigabit Ethernet's use of the CSMA/CD algorithm, and Lesson 9.5 concludes the module by identifying certain issues that network administrators must consider when implementing first-generation Gigabit Ethernet equipment.

Module Objectives
- Identify factors contributing to the need for Gigabit Ethernet
- Identify the key characteristics of Gigabit Ethernet
- Identify Gigabit Ethernet types 1000Base-SX, -LX, -CX and -T
- Identify possible migration strategies for Gigabit Ethernet
- Identify how the buffered distributor works
- Identify Gigabit Ethernet's use of full-duplex mode
- Identify modified specifications for Gigabit Ethernet running in half-duplex mode
- Identify considerations for early Gigabit Ethernet implementation


Why Gigabit Ethernet is Needed
Lesson Objective
- Identify factors contributing to the need for Gigabit Ethernet

The growing need for network bandwidth in excess of the 100Mbps delivered by Fast Ethernet is driven by several factors.
- Intranet and Internet traffic is growing at an exponential rate.
- Mission-critical mainframe applications continue to be replaced with distributed solutions.
- High-traffic document management, workflow, imaging and other information management and distributed database applications are becoming integral parts of core business strategies.
- Due to the increasing complexity of desktop publishing, scientific modeling, high-resolution imaging and three-dimensional engineering applications, average file size is expanding.
- Increasingly popular applications like multimedia computer-based training, desktop video conferencing and interactive whiteboarding require high-bandwidth connections that can deliver a constant and reliable data stream.
- Finally, the extension of Fast Ethernet to the desktop reintroduces the congestion that Fast Ethernet backbones were originally designed to eliminate.

Compared to the alternative solutions for high speed networking, such as ATM and FDDI, Gigabit Ethernet offers the advantage of using protocols directly compatible with currently implemented Ethernet standards, making lower-cost, incremental migrations from Ethernet and Fast Ethernet possible.


Gigabit Ethernet Defined
Lesson Objectives
- Identify the key characteristics of Gigabit Ethernet
- Identify Gigabit Ethernet types 1000Base-SX, -LX, -CX and -T

Gigabit Ethernet has the following key characteristics:
- The transmission speed for Gigabit Ethernet is 1,000Mbps – 100 times that of 10Mbps Ethernet.
- The IEEE specification for Gigabit Ethernet will be IEEE 802.3z.
- Gigabit Ethernet uses the 802.3 Ethernet frame format.
- Gigabit Ethernet uses the CSMA/CD access method with support for one repeater per collision domain.
- At the MAC layer, Gigabit Ethernet is equivalent to Fast Ethernet scaled by a factor of ten.
- The upcoming IEEE 802.3z standard is expected to define Gigabit Ethernet running over multimode fiber and, over short distances, on shielded copper wire.
- A separate standards effort (IEEE working group 802.3ab) will specify Gigabit Ethernet operation over Cat 5 UTP cabling at distances up to 100 m.

Gigabit Ethernet Alliance

The Gigabit Ethernet Alliance is an open forum that promotes industry cooperation in an effort to accelerate the development and standardization of Gigabit Ethernet. Over 120 Ethernet vendors, including Intel, currently participate in the Gigabit Ethernet Alliance by contributing technical expertise, testing interoperability standards and fostering open communications between potential suppliers and consumers. Because Fast Ethernet's success can be attributed largely to its compatibility with 10Mbps Ethernet, leaving unchanged as much of the original Ethernet specification as possible is a core strategy for making Gigabit Ethernet successful as well.

The IEEE Standards Board expects to achieve final ratification of the 802.3z standard in either June or September 1998. However, since many vendors are developing products concurrently with the standardization effort, many Gigabit Ethernet products are currently available. International Data Corporation (IDC), a commonly referenced research firm, expects the value of the market for Gigabit Ethernet products to exceed USD 1 billion by the year 2000.


Gigabit Ethernet Defined (Continued)
1000Base-SX and 1000Base-LX

Two physical-layer standards, 1000Base-SX and 1000Base-LX, designate Gigabit Ethernet transmitted over fiber optic cabling. In order to minimize the time-to-market for new products, Gigabit Ethernet incorporates optical signaling components and encoding and decoding schemes borrowed from Fibre Channel. 1000Base-SX works best as a short-distance (up to 260 m) backbone and utilizes low-cost, multimode, 62.5 micron fiber optic cabling. Designed for longer-distance connections, 1000Base-LX uses multimode fiber to allow connections over distances up to 440 m and single-mode fiber for distances up to 3,000 m. 1000Base-SX and 1000Base-LX use the same SC connectors (shown in the figures below) used for 100Base-FX systems.

SC fiber optic connectors

SC connector cross section

1000Base-CX

1000Base-CX designates Gigabit Ethernet transmitted over twinax, a 150-Ohm balanced, shielded, specialty cable. 1000Base-CX's distance limitation of only 25 m makes it best suited for interconnecting switching closets, server farms and power workgroups. 1000Base-CX supports two kinds of connectors: standard 9-pin D connectors and HSSC (High Speed Serial Card) connectors, also referred to as 8-pin Fibre Channel Type 2 connectors (shown below).

HSSC/8-pin Fibre Channel Type 2 Connector


Gigabit Ethernet Defined (Continued)
1000Base-T

1000Base-T designates Gigabit Ethernet transmitted over Category 5 UTP cable. Under the IEEE 802.3ab standard, 1000Base-T connections can run up to 100 m. The standard for 1000Base-T comprises the second phase of the Gigabit Ethernet standards process and falls under the purview of the IEEE 802.3ab task force. 1000Base-T will be designed to take advantage of existing UTP cable already widely deployed for Ethernet and Fast Ethernet. The IEEE Standards Board does not expect to ratify the 1000Base-T standard until early 1999. To accommodate the use of cost-effective UTP cabling, IEEE 802.3z, which is designed primarily for fiber cabling, will specify a way to use encoding schemes other than the Fibre Channel encoding scheme used by 1000Base-SX, -LX and -CX.

The table below summarizes Gigabit Ethernet media types and their distance limitations.

Specification   Medium                         Maximum Distance
1000Base-SX     Multimode fiber                260 m
1000Base-LX     Multimode fiber                440 m
1000Base-LX     Single-mode fiber              3,000 m
1000Base-CX     Twinax copper                  25 m
1000Base-T      Four pairs of Category 5 UTP   100 m


Implementation of Gigabit Ethernet
Lesson Objectives
- Identify possible migration strategies for Gigabit Ethernet
- Identify how the buffered distributor works
- Identify Gigabit Ethernet's use of full-duplex mode

For the most part, Gigabit Ethernet implementation scenarios will mirror those for Fast Ethernet. As the general availability of Gigabit Ethernet products increases, the most likely targets for Gigabit Ethernet implementation will be links between routers, switches, hubs, repeaters and servers. Early implementations of Gigabit Ethernet may, however, include lower-risk, non-mission-critical targets such as the server-to-router and server-to-switch connections of power workgroups.

Migration and Rollout Strategies

The Gigabit Ethernet Alliance identifies the five most likely upgrade scenarios for Gigabit Ethernet:

1. Upgrading switch-to-switch links
2. Upgrading switch-to-server links
3. Upgrading switched Fast Ethernet backbones
4. Upgrading shared FDDI backbones
5. Upgrading high-performance workgroups

Due to the inherent risk of any first-generation technology, rather than jeopardizing mission-critical applications, many network managers will initially implement Gigabit Ethernet in lower-risk segments of the network, where, at the same time, they will be able to clearly measure a return on investment. Once companies have been able to deploy Gigabit Ethernet successfully on a limited scale, expanding the implementation of Gigabit Ethernet to mission-critical backbones, server links and wiring closets will become more natural.

The Buffered Distributor

Most Gigabit Ethernet products are simply faster versions of the Ethernet components you already know quite well. They include switches, uplink/downlink modules, NICs and router interfaces. Gigabit Ethernet does, however, introduce one new device, called a buffered distributor. The buffered distributor is a full-duplex, multiport, hub-like device that interconnects two or more Ethernet links operating at 1000Mbps. Like a standard repeater, the buffered distributor forwards all incoming packets to all connected links (except the original incoming link), creating a shared broadcast domain comparable to an Ethernet collision domain. Unlike a standard repeater, the buffered distributor is permitted to buffer one or more incoming frames on each link before forwarding them, thus avoiding collisions.

Full-duplex Gigabit Ethernet

Ethernet and Fast Ethernet support full-duplex operation only as a single link between two devices; adding a third device to full-duplex Ethernet and Fast Ethernet links is not possible. The switching capabilities of the Gigabit Ethernet buffered distributor, however, enable full-duplex networks to be created using a hub-like star configuration for server farms and power workgroups. All first-generation Gigabit Ethernet devices currently slated for production by major manufacturers are full-duplex devices.




Gigabit Ethernet and CSMA/CD
Lesson Objectives
- Identify modified specifications for Gigabit Ethernet running in half-duplex mode
- Identify considerations for early Gigabit Ethernet implementation

Depending upon the market success of the buffered distributor, half-duplex Gigabit Ethernet devices may never be manufactured. Even so, the Gigabit Ethernet standard has preserved the CSMA/CD algorithm so that Gigabit Ethernet half-duplex operation is at least possible.

512-byte Minimum Carrier Event

As transfer speeds increase, the time that each frame is on the wire decreases, and as a result, the maximum allowable distance between stations also decreases. For example, Fast Ethernet uses the same minimum frame size of 64 bytes (512 bits) that is used for 10Mbps Ethernet. As a result, the maximum allowable distance between stations for Fast Ethernet is only 250 m, compared to 2,500 m for 10Mbps Ethernet. However, because the maximum allowable distance for UTP cabling is much shorter than 250 m (100 m for both 10Mbps Ethernet and Fast Ethernet), Fast Ethernet's smaller maximum distance limitations directly affect only the number of repeaters that may be used between stations, and not the maximum distance for UTP cabling itself. Even with its restrictions on the total number of repeaters, Fast Ethernet still allows for a reasonably sized maximum network diameter.

If, on the other hand, Gigabit Ethernet were to use the same 64-byte minimum frame size, the maximum allowable distance between stations for Gigabit Ethernet would be less than 25 m. In order to support a distance limitation comparable to Fast Ethernet (100 m from a repeating hub to each device), Gigabit Ethernet extends the minimum CSMA/CD carrier event time from 64 bytes to 512 bytes. For packets shorter than 512 bytes, Gigabit Ethernet adds a non-data carrier extension to the end of the packet transmission, allowing stations to occupy the wire long enough to detect collisions without modifying the 802.3 frame structure. The minimum Ethernet frame length of 64 bytes remains the same.

Packet Bursting

For small packets, extending the minimum carrier event time decreases the ratio of data to non-data by a factor of as much as eight. For example, a 64-byte frame would need to be extended with a 448-byte non-data carrier signal. To offset the inefficiency of transmitting small packets individually, Gigabit Ethernet will allow servers, switches and other devices to use a method called packet bursting to send multiple small packets in a single transmission event. By replacing non-data carrier extensions with additional packets, packet bursting increases the ratio of data to non-data for each transmission, effectively increasing the overall speed of the network by utilizing bandwidth more efficiently.
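The carrier extension arithmetic described above is easy to illustrate. The sketch below shows how much non-data extension a given frame size would need in half-duplex Gigabit Ethernet, and what fraction of the carrier event actually carries data; the function and constant names are illustrative only.

```python
# Carrier extension arithmetic for half-duplex Gigabit Ethernet: frames
# shorter than the 512-byte minimum carrier event are padded on the wire
# with non-data carrier symbols (the frame itself is unchanged).

MIN_FRAME_BYTES = 64           # unchanged Ethernet minimum frame size
MIN_CARRIER_EVENT_BYTES = 512  # minimum time on the wire for collision detection

def carrier_extension_bytes(frame_bytes):
    """Non-data carrier symbols appended after the frame, if any."""
    return max(0, MIN_CARRIER_EVENT_BYTES - frame_bytes)

for size in (64, 300, 512, 1518):
    ext = carrier_extension_bytes(size)
    efficiency = size / (size + ext)
    print(f"{size:4d}-byte frame -> {ext:3d} bytes of extension "
          f"(data is {efficiency:.1%} of the carrier event)")
# A 64-byte frame needs 448 bytes of extension, so only 12.5% of the
# transmission carries data -- the inefficiency that packet bursting addresses.
```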


Considerations for Early Adoption
Lesson Objective
- Identify considerations for early Gigabit Ethernet implementation

Compatibility

Some vendors are already shipping Gigabit Ethernet devices even though the 802.3z standard is not yet finalized. Despite the fact that exhibits at the Fall 1997 Networld+Interop in Atlanta demonstrated interoperability between Gigabit Ethernet equipment from different vendors, early adopters of Gigabit Ethernet technology run a slight risk that the equipment they buy may not conform to the final standard.

Pricing

Pricing for first-generation Gigabit Ethernet devices may present a barrier to entry for many companies. Currently, it is possible to purchase Fast Ethernet NICs for under USD 100, and Fast Ethernet switches for under USD 200 per port. Initially, Gigabit Ethernet NICs may be priced as high as USD 1,700, and Gigabit Ethernet switches will likely be priced between USD 2,000 and USD 4,000 per port. Over the past two years, however, Fast Ethernet switches have decreased in price by approximately 36%. Gigabit Ethernet components are expected to follow a similar trend.


ETHERNET AND OTHER PHYSICAL-LAYER TECHNOLOGIES
Module Description

This section describes Ethernet's relationship to major networking technologies, such as Token Ring, FDDI and ATM, that either provide alternatives to or work in conjunction with Ethernet.

Module Objectives
- Identify the relationship between Ethernet and other networking technologies such as Token Ring, ATM and FDDI
- Identify the advantages Ethernet has in comparison to these technologies


Overview: Ethernet and Other Technologies
Lesson Objective
- Identify real-world situations in which understanding the relationship between Ethernet and other networking technologies is useful

As an Open Systems technology, Ethernet helps to illustrate the basic concepts of modularity and hierarchy from which the OSI reference model was born. The purpose of this module, and Module 11, is to deepen your understanding of the place Ethernet occupies in the overall landscape of computer networking. Having a clear understanding of the relationship between Ethernet and other popular technologies will help you to more quickly understand what your customers are saying, more intelligently address their concerns and more competently provide solutions to their problems. If you were told that Company X has already implemented FDDI and ATM, and has decided against using Frame Relay, does that mean Company X has also decided against using Ethernet? Or to use a slightly different situation, if you know that Company Y uses Token Ring in their order processing center, and they ask you about upgrading the engineering department's network to accommodate a new document imaging and workflow application, is it reasonable that they should consider using Ethernet? (Specific answers for each of these questions appear in Lesson 10.3.)


Ethernet Compared
Lesson Objectives
- Identify the relationship between Ethernet and other networking technologies such as Token Ring, ATM and FDDI
- Identify the advantages Ethernet has in comparison to these technologies

Token Ring

IBM first adopted Token Ring technology as a core networking strategy in the early 1980s. Today, many IBM-based, distributed networking solutions are implemented on Token Ring networks. Compared to Ethernet's approximately 85% market share, only 10% of total network components sold in 1997 were Token Ring. Compared to Token Ring, the reasons for Ethernet's success include:

1. Ethernet networking is less complex than Token Ring and easier to troubleshoot.
2. Ethernet components are relatively simpler and, thus, less expensive to manufacture.
3. Even though token passing uses bandwidth more efficiently than the contention method used by Ethernet, overall Ethernet performance has generally kept pace with Token Ring and, with the introduction of Fast Ethernet and Gigabit Ethernet, has outpaced it.

Token Ring controls access to the physical medium by passing a control frame from one computer to the next. Only the computer possessing the control frame has the right to send data. Token Ring generally works best for networks with a large number of workstations that must constantly exchange data with a centrally located resource, such as a distributed database or mainframe application. In contrast, Ethernet's contention method works best on networks that transmit large amounts of data intermittently. Such situations include engineering groups using CAD/CAM applications and three-dimensional modeling tools, or a customer service department that uses workflow and document imaging to process customer complaints and access customer account information.

Companies do not, however, always have to choose to implement only Token Ring or only Ethernet throughout the enterprise. As is often the case, Ethernet segments can be connected to existing Token Ring networks (as shown in the figure below) using a router that serves to bridge the two networks together.

Token Ring network connected to an Ethernet network using a router.

In the future, Token Ring's market share will likely continue to decrease. Ethernet's recent advances in speed have made the slim performance advantages of Token Ring over 10Mbps Ethernet virtually disappear.


Ethernet Compared (Continued)
FDDI

Companies usually implement FDDI as a high-speed, shared backbone connecting servers, switches, bridges and routers. FDDI operates at 100Mbps and uses a token passing access control method on fiber optic cabling configured as a dual ring (the second ring serves as a backup in case the primary ring is broken). An Ethernet hub or switch, equipped with one or more FDDI interfaces, connects Ethernet workstations to the FDDI backbone. The figure below illustrates a typical FDDI configuration.

Network configuration using an FDDI ring for the network backbone.

FDDI is one of the most expensive networking solutions to implement. Consequently, with the introduction of Ethernet switches, 100Base-T and 100VG AnyLAN, most network managers consider 100Mbps Ethernet backbones viable and economical alternatives to FDDI. In general, the economic advantages of 100Mbps Ethernet over FDDI are two-fold:

1. 100Mbps Ethernet NICs are less expensive than FDDI NICs.
2. 100Mbps Ethernet can run on copper wire, which is substantially less expensive than fiber.


Ethernet Compared (Continued)
For companies interested in migrating to fiber optic cable for either security or future bandwidth needs, Ethernet 100Base-FX and Gigabit Ethernet also provide cost-effective solutions compared to FDDI. The figure below shows the same basic network configuration used in the figure above, but uses switched Ethernet instead of FDDI on the backbone.

Network configuration using switched Fast Ethernet in place of the FDDI ring shown in the previous figure.

Network engineers should keep in mind that it takes only ten 10Mbps Ethernet clients transmitting files at the same time to reach FDDI's 100Mbps maximum throughput on a shared ring. Switched Ethernet configurations, however, can provide multiple 100Mbps pipelines by routing each packet only to the station addressed, thus allowing multiple stations to transmit and receive simultaneously. Though switched FDDI solutions are available, in general FDDI switches have proven less efficient than Ethernet switches, and on average cost up to eight times more per port.


Ethernet Compared (Continued)
ATM

Asynchronous Transfer Mode (ATM) is both a LAN and a WAN technology. Much of the original lure of ATM was its potential to become a single, widely supported protocol for wide area networking, backbone connectivity and workstation connectivity as well. Recently, however, even ardent supporters of ATM have given up hope for success against Ethernet at the workstation. In general, ATM has failed to achieve widespread adoption for three reasons:

1. Lack of standards
2. High price
3. Complexity
ATM uses fixed-size packets (53 bytes) called cells and provides data transfer rates from 25Mbps to 2,400Mbps (OC-3 = 155Mbps and OC-12 = 622Mbps). Using standard-sized cells enables ATM to provide the constant, high-speed data streams that audio, video and imaging applications require. ATM can be used with a variety of transmission media, including twisted-pair and fiber optic cable. The figure below depicts a network configuration that uses ATM on the backbone.

Network configuration using ATM for both backbone and WAN connectivity.

Most Ethernet component manufacturers will market Gigabit Ethernet as an alternative to ATM backbones. In the past, network managers have looked to ATM as the only reliable way to achieve Quality-of-Service (QoS) grade connectivity for applications such as real-time databases, medical imaging and video conferencing. Gigabit Ethernet will provide QoS connectivity by working in combination with upper-layer QoS protocols such as RSVP and 802.1Q. QoS protocols enable individual packets to be prioritized so that high-priority, time-sensitive data streams, like those required for real-time video, are not interrupted by lower-priority, non-time-sensitive applications, such as e-mail.

If Gigabit Ethernet becomes successful as quickly as Fast Ethernet has, the future of ATM will likely remain at the WAN level of connectivity. Compared to ATM, Gigabit Ethernet promises to be simpler to implement, more cost-effective and more compatible with existing LANs.


Ethernet Compared (Continued)
Summary

For workstation-level interconnectivity, Ethernet and Token Ring should generally be thought of as competitors. FDDI and ATM, on the other hand, have in the past filled particular needs that Ethernet running at 10Mbps could not. With the advent of Fast Ethernet and Gigabit Ethernet, however, Ethernet technology can now meet the bandwidth needs of high-traffic backbones and in many instances compete directly with FDDI and ATM solutions. Currently, Ethernet is not often thought of as a WAN technology. Though Ethernet-based satellite communications systems have been researched, solutions like Frame Relay and ATM running over public telecommunications networks will continue for some time to be the WAN technologies of choice for linking local, Ethernet-based networks.


Specific Examples
Lesson Objective
- Identify answers for the hypothetical questions posed in Lesson 10.1

In reference to the questions posed at the end of Lesson 10.1, Company X, which has chosen to implement ATM rather than Frame Relay for wide area connectivity and has implemented FDDI on its backbone, must still choose a physical-layer technology to link individual workstations to the backbone. More often than not, its preferred workstation-level connection strategy will be Ethernet. Later on, as Company X grows and its backbone becomes saturated with traffic, extending Fast Ethernet or Gigabit Ethernet solutions to the backbone, as opposed to implementing ATM or a new switched FDDI solution, will likely be the most cost-effective strategy for Company X to adopt.

In the case of Company Y, implementing Ethernet, or Fast Ethernet, in the engineering department and using a router as a bridge to the order processing center's Token Ring network is both technically and economically a reasonable option. Because network traffic between the engineering department and the order processing department is likely to be very low, bridging the two networks is not likely to produce a bottleneck. In the final analysis, Ethernet's better performance for large-file-size transactions, lower cost of implementation and easier management make considering Ethernet highly reasonable, even for companies that currently support Token Ring.


ETHERNET AND THE UPPER-LAYER PROTOCOLS
Module Description

This section explains Ethernet's practical relationship to the technologies it serves. Lesson 11.1 returns to a consideration of the OSI layers and Ethernet's role in the OSI reference model. Lesson 11.2 completes the course with a consideration of Ethernet as an Open Systems technology.

Module Objectives
- Identify some of the upper-layer protocols that Ethernet supports directly and indirectly
- Identify the role Ethernet plays in relationship to a number of specific and popular network protocols


The OSI Model Revisited
Lesson Objective
- Identify some of the upper-layer protocols that Ethernet supports directly and indirectly

Module 10 focused on Ethernet's relationship to technologies that operate at OSI layers 1 and 2, the same layers at which Ethernet operates. This module focuses on Ethernet's relationship to network technologies that operate at OSI layers 3 and above. The table below reproduces the OSI model and categorizes a number of example technologies according to the OSI layer services they provide. Clarifying Ethernet's relationship to upper-layer protocols will help you to quickly understand many practical, real-world situations.

LAYER              EXAMPLE PROTOCOLS
7 – Application    NetWare*, Vines*, NTAS, SNA
6 – Presentation   NAPLPS, MAP, SMB
5 – Session        NetBIOS, NCP, RIP
4 – Transport      TCP, NetBEUI, SNMP
3 – Network        IP, IPX, DECnet, X.25, RSVP, 802.1Q
2 – Data Link      Ethernet, Fast Ethernet, Gigabit Ethernet, FDDI, Token Ring, ATM
1 – Physical       Twisted-pair, coaxial, twinax and fiber optic cabling

Table 11-1. Example protocols at each OSI layer.


Running Multiple Protocols
Lesson Objective
- Identify the role Ethernet plays in relationship to a number of specific popular network protocols

Two Networks in One

At the physical and data link layers of the OSI model, it is neither possible nor reasonable to implement two different technologies simultaneously on a single network segment. For example, though it is possible to use a bridge to link separate Ethernet and Token Ring networks, it is not possible (nor would it ever be desirable) to connect a room of computers together using both Ethernet and Token Ring hardware at the same time.

At the network layer, however, it is not only possible, but in many cases advantageous, to implement multiple protocols and run them at the same time. For example, most Novell NetWare* networks use a protocol called IPX/SPX at the network layer and above. Microsoft networks, on the other hand, use TCP/IP and/or NetBEUI. Oftentimes, Windows 95* workstations are configured to handle both Novell IPX/SPX packets and Microsoft TCP/IP packets. This allows individual Windows 95* stations to establish client-server connections with NetWare* servers (over IPX/SPX) and peer-to-peer connections to other Windows 95* workstations (over TCP/IP) at the same time.

Ethernet as an Open Systems Solution

In the example above, how does Ethernet fit into the picture? It almost sounds as if the example refers to two entirely separate networks: a Novell network and a Microsoft network. Can a single Ethernet network, using only Ethernet cabling schemes and Ethernet NICs, support both the Novell network and the Microsoft network described? Of course, the answer is yes. In the example above, the terms Novell network and Microsoft network refer only to layer 3 networking services and higher. Ethernet, in relationship to each networking protocol, works only at layers 1 and 2 to provide the physical transportation of data packets from one network client to another. At layer 3, the software programs Client for Novell Networks and Client for Microsoft Networks run simultaneously on each computer; they accept data packets from the data link layer and process each packet according to the rules of the layer 3 protocol the client supports. When the data field of the Ethernet frame includes a TCP/IP packet, the TCP/IP packet is handled by the Microsoft client software. When the data field of the Ethernet frame includes an IPX/SPX packet, the IPX/SPX packet is handled by the Novell client software.

Upper-layer protocols are covered in more detail in other courses. This lesson attempts merely to reinforce the fact that, by limiting Ethernet operations to a clearly defined network space, Ethernet is capable of supporting a wide range of specific networking technologies. Ethernet can support not only networks running both Novell and Microsoft networking protocols simultaneously, but also other combinations of network layer protocols.
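Conceptually, the layer-2/layer-3 hand-off described above is a demultiplexing step keyed on the frame's Type field. The following sketch is purely illustrative: the EtherType values for IPv4 (0x0800) and Novell IPX (0x8137) are standard, but the handler strings and the function itself are hypothetical stand-ins for the client software.

```python
# A minimal sketch of layer-2 demultiplexing: the Ethernet driver looks at
# the Type field of an Ethernet II frame and hands the payload to the
# matching layer-3 protocol stack.

ETHERTYPE_HANDLERS = {
    0x0800: "TCP/IP stack (Client for Microsoft Networks)",  # IPv4
    0x8137: "IPX/SPX stack (Client for Novell Networks)",    # Novell IPX
}

def deliver(frame_type, payload):
    """Route the frame's data field to the protocol stack bound to its Type."""
    handler = ETHERTYPE_HANDLERS.get(frame_type)
    if handler is None:
        return f"0x{frame_type:04x}: no protocol bound, frame discarded"
    return f"0x{frame_type:04x}: {len(payload)}-byte packet passed to {handler}"

print(deliver(0x0800, b"\x45" + b"\x00" * 39))      # an IP packet
print(deliver(0x8137, b"\xff\xff" + b"\x00" * 28))  # an IPX packet
```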

