
Data Communication & Computer Networks

Published on May 2016

Data Communication & Computer Networks
MCA – II Semester III
  This book presents the syllabus concisely, with additional background on each topic.


Chapter 1. Introduction to Networking
A network is a set of equipment (often referred to as data terminal equipment, DTE, or simply terminals or nodes) connected by a communication channel, which can be either guided or unguided media. A DTE can be a computer, printer or any device capable of sending and/or receiving data generated by other nodes on the network. A computer network is an interconnected collection of autonomous computers.
Why networking?
- Sharing of hardware: computer hardware resources such as disks and printers.
- Sharing of software: multiple single-user licenses are more expensive than one multi-user license, and centrally shared software is easier to maintain.
- Sharing of information: several individuals can interact with each other, and working groups can be formed.
- Communication: e-mail, Internet telephony, audio conferencing, video conferencing.
- Scalability: individual subsystems can be created and combined into a main system to enhance overall performance.
- Distributed systems: in a networked environment, computers can distribute the workload among themselves while keeping this transparent to the end user.
The goals of a computer network include:
• Resource sharing: programs (O.S., applications), data and equipment (printers, disks) are available to all users of the network regardless of location.
• High reliability: by replicating files on different machines and having spare CPUs, users are more immune to hardware/software failure.
• Less cost: small machines have about 1/10 the power of a mainframe but 1/1000 the cost. By using such machines with file server machine(s), a local area network (LAN) can be installed cheaply, and it is easy to increase capacity by adding new machines.
• Communications medium: users have access to e-mail and the Internet.
Data communications has an ancient history, as people have always had an interest in communicating with each other. Different methods have been used and
Prof. Jadhav Dattatraya Subhash (SICS-MCA, Korti) Page 2


associated with each method are various advantages and disadvantages. A major problem with communications is ensuring that the receiver gets the message sent by the transmitter. In every form of communication there are common elements:
1. transmitter (sender, source)
2. receiver (destination)
3. message to be communicated
4. medium (how the message is carried)
Examples of medium, and the noise that affects each:
- Smoke signals: fog, darkness
- Tomtom drum: thunder
- Pony express: bandits
- Carrier pigeon: hunters
- Post: strikes, loss
- Telegraph: broken wires
- Telephone: electrical noise
- Computer: cable/electrical noise

Anything that interferes with the message is technically called noise. The entire data communication system revolves around three fundamental concepts:
Delivery: the system should transmit the message to the correct intended destination. The destination can be another user or another computer.
Reliability: the system should deliver the data to the destination faithfully. Any unwanted signals (noise) added along with the original data may play havoc!
Timeliness: the system should transmit the data as fast as possible within the technological constraints. Audio and video data must be received in the same order as they are produced, without any significant added delays.

Hardware Architecture :
User: there will be a source that generates the message and a transducer that converts the message into an electrical signal. The source can be a person in front of a microphone or a computer itself sending a file. The user terminal is known as data terminal equipment (DTE).
Transmitter: the transmitter can be a radio-frequency modulator combining the carrier with the signal coming out of the data terminal equipment; here the radio frequency acts as the carrier for the data signal. In the case of direct digital transmission, the transmitter can be a Manchester encoder transmitting digital signals directly.


Communication channel: the channel can be guided media (twisted pair, coaxial cable, fibre optic, etc.) or unguided media (air, water, etc.). In both cases communication is in the form of electromagnetic waves. With guided media, the electromagnetic waves are guided along a physical path. With unguided media, also called wireless, the electromagnetic waves are not guided along a physical path; they are radiated through air, vacuum, water, etc.
Receiver: the receiver amplifies the received signal, removes any unwanted signals (noise) introduced by the communication channel during propagation, and feeds it to the destination.
Destination: the user at the other end finally receives the message through the data terminal equipment stationed on the other side.

Fig (b) shows a typical dial-up network setup. The data communication equipment (DCE) at the transmitting end converts the digital signals into audio tones (modulation) so that voice-grade telephone lines can be used as guided media during transmission. At the far end, the received audio tones are converted back to digital signals (demodulation) by the data communication equipment (DCE) and fed to the far-end data terminal equipment (DTE).
Types of communication : Based on the requirements, communications can be of different types:
Simplex communication: in simplex communication, communication is possible only in one direction. There is one sender and one receiver; the sender and receiver cannot change roles.
Half-duplex communication: half-duplex communication is possible in both directions between two entities (computers or persons), but only in one direction at a time. A walkie-talkie uses this approach. The person who wants to talk presses a talk button on his handset to start talking, while the other person's handset is in receive mode. When the sender finishes, he terminates with an "over" message. The other person

can then press the talk button and start talking. Such systems require limited channel bandwidth, so they are low-cost systems.
Full-duplex communication: in a full-duplex communication system, the two parties (the caller and the called) can communicate simultaneously, as in a telephone system. Note, however, that the communication system allows simultaneous transmission of data; when two persons talk simultaneously, there is no effective communication! It is the ability of the communication system to transport data in both directions at once that defines the system as full duplex.
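The three modes can be sketched in code. The following Python illustration (the class and method names are invented for this example, not part of any standard) models the half-duplex rule that only one party may hold the channel at a time:

```python
from enum import Enum

class Mode(Enum):
    SIMPLEX = "one direction only"
    HALF_DUPLEX = "both directions, one at a time"
    FULL_DUPLEX = "both directions simultaneously"

class HalfDuplexChannel:
    """A channel only one party may transmit on at a time (walkie-talkie style)."""
    def __init__(self):
        self.talker = None
    def press_talk(self, who):
        # A second party pressing talk while the channel is held is rejected.
        if self.talker is not None and self.talker != who:
            raise RuntimeError(f"channel busy: {self.talker} is transmitting")
        self.talker = who
    def over(self, who):
        """The 'over' message releases the channel for the other party."""
        if self.talker == who:
            self.talker = None

ch = HalfDuplexChannel()
ch.press_talk("A")   # A transmits; B's handset is in receive mode
ch.over("A")         # A says "over"
ch.press_talk("B")   # now B may transmit
```

A full-duplex channel would simply omit the busy check, since both parties may transmit at once.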

Topologies :
The topology defines how the devices (computers, printers, etc.) are connected and how the data flows from one device to another. There are two conventions when representing topologies: the physical topology defines how the devices are physically wired, while the logical topology defines how the data flows from one device to another. Topologies are broadly categorized into: I) Bus II) Ring III) Star IV) Mesh

Bus topology: in a bus topology, all devices are connected to a single transmission medium that acts as a backbone. There must be a terminator at each end of the bus to avoid signal reflections, which may distort the original signal. The signal is sent in both directions, although some buses are unidirectional. A bus is good for small networks.


The main problem with the bus topology is that a failure of the medium seriously affects the whole network. Any small break in the media causes the signal to reflect back and produce errors, and the whole network must be shut down and repaired. In such situations it is difficult to troubleshoot and locate the break in the cable, or to identify which machine is causing the fault; when one device fails, the rest of the LAN fails.
Ring topology : the ring topology dates from the beginning of the LAN era. In a ring topology, each system is connected to the next, as shown in the following picture.

Each device has a transceiver which behaves like a repeater, moving the signal around the ring; this is ideal for token-passing access methods. In this topology signal degeneration is low, and only the device that holds the token can transmit, which reduces collisions. On the negative side, it is difficult to locate a problem cable segment, and the hardware is expensive.
Star topology : in a star topology, each station is connected to a central node. The central node can be either a hub or a switch. The star topology does not have the problem seen in the bus topology: the failure of one cable segment does not affect the entire network, and the other stations can continue to operate until the damaged segment is repaired.


The advantages are that cabling is inexpensive, it is easy to wire, and it is more reliable and easier to manage: the hubs allow defective cable segments to be routed around, locating and repairing bad cables is easier because of the concentrators, and network growth is easier. The disadvantages are that all nodes receive the same signal, therefore dividing bandwidth, and that the maximum number of computers on such a LAN is 1,024. The maximum UTP (unshielded twisted pair) length is 100 metres; the distance between computers is 2.5 metres. This is the dominant physical topology today.
Mesh topology : in a mesh physical topology, every device on the network is connected to every other device; it is most commonly used in WAN configurations. It helps find the quickest route on the network and provides redundancy, but it is very expensive and not easy to set up.
Hybrid topology : a hybrid topology is a combination of two or more network topologies in such a way that the resulting network does not have one of the standard forms. For example, a tree network connected to a tree network is still a tree network, but two star networks connected together exhibit a hybrid topology. A hybrid topology is always produced when two different basic network topologies are connected.
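The wiring-cost differences between these topologies can be made concrete by counting the links each one needs. A small Python sketch (the counting conventions below are one common way to tally links, not a standard):

```python
def links_needed(topology: str, n: int) -> int:
    """Point-to-point links (or cable runs) needed to connect n devices."""
    if topology == "bus":
        return 1                  # one shared backbone segment for everyone
    if topology == "ring":
        return n                  # each node wired to its neighbour, closing the loop
    if topology == "star":
        return n                  # one cable per station into the central hub/switch
    if topology == "mesh":
        return n * (n - 1) // 2   # every device wired to every other device
    raise ValueError(f"unknown topology: {topology}")

# Mesh cost grows quadratically, which is why full mesh is mostly seen in WAN cores:
print(links_needed("mesh", 10))   # 45 links for only 10 devices
```

The quadratic growth of the mesh count is the quantitative reason it is called "very expensive and not easy to set up" above.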

Media :
Analog Transmission: analog transmission dominated the last 100 years and is here for a while yet. Network designers made use of the existing telephone network, which was aimed at voice transmission. This is actually very poor for computer networking: for example, two computers connected by a direct cable can achieve a data rate of up to 100 Mbps with a very low error rate, while over phone lines 56 Kbps is the maximum transmission speed, with a relatively high error rate. Taken together this is many orders of magnitude worse; the difference is of the same order as that between the cost of a bus ticket to town and the cost of a moon landing.
Modems : Phone lines deal with frequencies of 300 to 3000 Hz. A computer outputs a serial stream of bits (1's and 0's). A modem is a device that accepts such a bit stream and converts it to an analog signal, using modulation; it also performs the inverse conversion. Thus two computers can be connected using two modems and a phone line. Using a modem, a continuous signal (tone) is sent in the range 1000 to 2000 Hz. To transmit information, this carrier signal is modulated: its amplitude, frequency, phase or a combination of these can be modulated.
Digital Transmission :

Digital transmission takes place in the form of pulses representing bits (1's and 0's). This is the type of communication used internally in computers. The high-speed trunks linking central telephone exchanges use digital transmission, which has a lower error rate than analog transmission. The local loop (from phone to exchange) is still analog, so it must be converted to digital at the exchange. A device called a codec (coder/decoder) does this: it samples the analog signal 8000 times per second and encodes the signal digitally by representing each sample as a binary number. The technique used is called Pulse Code Modulation (PCM).
Transmission Techniques :
- Copper wire: twisted pair, coaxial cable
- Fibre optic
Twisted Pairs : twisted pairs are used by telephones for the local loop (the connection between your home phone and the local telephone exchange). They carry electrical signals. A twisted pair consists of two insulated copper wires (1 mm diameter) twisted together to reduce electrical interference. Capacity is dependent on the distances involved but can be up to several Mbps over a few km. For example, ISDN (Integrated Services Digital Network) lines offer speeds from 64 Kbps to over 1 Mbps and have been available to home users for Internet access for several years. More recently (2003), DSL (Digital Subscriber Line) and in particular ADSL (Asymmetric DSL) lines have become available to home users with speeds of 1.5 to 6 Mbps. ISDN and ADSL both use digital transmission and so must use a digital line, unlike the standard analog telephone line, where a modem is used. You must install an ISDN card or an ADSL card into your PC to use an ISDN or ADSL line.
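Returning to the codec described above: its behaviour, 8000 samples per second with each sample encoded as a binary number, can be sketched as follows (the 8-bit sample width is an assumption for illustration; the text does not fix it):

```python
import math

SAMPLE_RATE = 8000   # the codec samples the analog signal 8000 times per second
BITS = 8             # assumed sample width: each sample becomes an 8-bit number

def pcm_encode(signal, duration_s):
    """Sample an analog signal (a function of time returning values in
    [-1.0, 1.0]) and quantize each sample to one of 2**BITS levels."""
    levels = 2 ** BITS
    n = int(SAMPLE_RATE * duration_s)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        v = signal(t)
        q = min(int((v + 1.0) / 2.0 * levels), levels - 1)  # integer in 0..255
        samples.append(q)
    return samples

# One millisecond of a 1 kHz tone yields 8 samples.
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
samples = pcm_encode(tone, 0.001)
```

At 8000 samples/s × 8 bits per sample this yields 64 Kbps, the classic digital voice-channel rate, which is also the basic ISDN speed mentioned above.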

Twisted pairs may be shielded (STP) or unshielded (UTP), with the shielded type having extra insulation. However, it is the rate of twisting (the number of twists per inch) that is the most important characteristic. They are also classified into Category-5 (CAT-5) and Category-6 (CAT-6). CAT-5 can carry 10 or 100 Mbps (10/100 Mbps) over short distances, e.g. up to approximately 100 metres. This is the type of cable that is often used in a building to connect PCs to a LAN. Usually, the CAT-5 cable connects to a device known as a hub which is less than 100 metres from each PC. There may be a hub for each floor/laboratory in a building. CAT-6 cable operates at 100/1000 Mbps (Gigabit Ethernet) and is typically used to interconnect hubs. It is more expensive than CAT-5 cable. Large organisations frequently have a so-called "backbone" network that interconnects separate LANs in different buildings/rooms, as in the diagram below. Over short distances CAT-6 cable may be used, but optic fibre is also often used as it can cover longer distances.

Coaxial (Coax) Cable : coax cables carry electrical signals. A coax cable consists of a copper core surrounded by three outer layers of insulation and shielding. It has a high bandwidth and good noise immunity. The original Ethernet standard was based on 10 Mbps coaxial cable. Ethernet is the most popular LAN standard and was developed at Rank Xerox (who also developed the mouse, the laser printer and Graphical User Interface (GUI) software). Ethernet LANs can be based on twisted pair, coax or optic fibre.


Capacity : 10 to 100 Mbps for distances of up to 1 km. Coax is frequently used in LANs but is being replaced by UTP/STP in most LANs.
Optic Fibre : optic fibre uses light to carry data and has a huge bandwidth; very thin glass fibres are used. To date, a capacity of 1000 Mbps over 1 km is feasible.

It is used in WANs, in LANs for interconnecting hubs, and also for linking telephone exchanges. It has excellent noise immunity, as it does not suffer from electrical interference, and is therefore suitable for harsh environments such as a factory floor. Although computing technology is rapidly advancing, it is not gaining ground nearly as fast as communication technology is; fibre optics is one of the advances that has propelled communication technology into the future at high speeds. Communication over fibre optics requires a source (of light), a line (the transmission medium, i.e. the fibre), and a destination (to detect the light). The light stays within the fibre because of the angle at which it hits the fibre's surface: instead of passing through the surface (like a window), the light bounces off it (like a mirror). The light propagates down the fibre because it continually reflects off the surface from the inside; the light never escapes the fibre until the receiver detects it. Like copper, fibre optics suffers problems when transmitting over a distance: attenuation (a weakening of the power of a signal) occurs, as well as dispersion (the spreading out of light waves over a distance). The discovery of solitons has helped wipe out the problem of dispersion, though. A fibre cable is heavily insulated like coax, but it has several differences: the core of the cable is a glass strand, which is surrounded by a thick glass covering, which is in turn covered by plastic. When compared to copper for its overall purposes, fibre wins because it is lighter, has higher bandwidth, is easier to install, is harder to tap, and the signal stays stronger longer than in copper. The only drawback to fibre at this point in time is the engineering community's relative lack of familiarity with fibre technology compared to copper.
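The attenuation mentioned above is normally quoted in decibels per kilometre, and the arithmetic is easy to sketch (the 0.2 dB/km figure below is an assumed, typical value for modern single-mode fibre, not from the text):

```python
import math

def attenuation_db(p_in_mw: float, p_out_mw: float) -> float:
    """Signal loss in decibels: 10 * log10(P_in / P_out)."""
    return 10 * math.log10(p_in_mw / p_out_mw)

def power_after_km(p_in_mw: float, loss_db_per_km: float, km: float) -> float:
    """Power remaining after a fibre run with a given per-km loss."""
    return p_in_mw / (10 ** (loss_db_per_km * km / 10))

# With an assumed loss of 0.2 dB/km, a 1 mW signal after 50 km:
remaining = power_after_km(1.0, 0.2, 50)   # 10 dB total loss -> 0.1 mW
```

Because the loss is logarithmic, every additional 10 dB of attenuation costs another factor of 10 in power, which is why long runs need amplifiers or repeaters.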
Wireless Transmission : Line of sight: infrared and microwave. Physical cables have a major problem if you have to cross private or public property, where it may be difficult or very expensive to get permission, in addition to the costs of laying the cable. Using line-of-sight transmitters avoids this problem.

Lasers can be used for wireless communication. A laser link is a relatively low-cost way to connect two buildings' LANs, but it has drawbacks: the laser is difficult to target on the destination's receiver because the beam is so small, and laser light diffuses easily in poor atmospheric conditions such as rain, fog or intense heat.
Infrared light is used for close-range communication, such as remote controls, because it does not pass through objects well. This is also a plus, because infrared communications in one room do not interfere with infrared communications in another room. Infrared communication is more secure than other options, such as radio, but it cannot be used outside due to interference from the Sun.
Radio waves are easy to generate and are omnidirectional, but have low transmission rates. Also, depending on their frequency, radio waves either cannot travel very far or are absorbed by the earth. In some cases, though, High Frequency (HF) waves are reflected back to earth by the ionosphere (a layer of the atmosphere).
Microwaves can be used over long distances: a tall (e.g. 100 m) tower can transmit data for distances on the order of 100 km, which is cheaper than digging a trench, and relatively high speeds of 10 Mbps upwards are possible. Microwave transmission is popular for its ability to travel in straight lines: a source can be directly focused on its destination without interfering with neighbouring transmissions. Because they travel in straight lines, though, the curvature of the earth can interfere with microwave transmission; the solution is to add repeaters between the source and destination to redirect the data path. Microwaves are used for long-distance communication (Microwave Communications, Inc. = MCI), cellular phones, garage door openers, and much more.
Satellite: satellites operate in the same fashion as microwaves; the satellite acts as a 'big microwave repeater in the sky'!
Satellite communication has a high bandwidth, giving speeds of up to 50 Mbps, and a given satellite may be able to carry many "channels" at this speed.
Wireless: radio LANs or wireless (Wi-Fi) LANs are becoming common in offices, universities, hotels, restaurants and airports. A wireless LAN enables users to connect to the Internet from a laptop computer with a wireless network card. In UCD, Commerce students use such laptops with wireless cards to connect to the college network for course work and email.
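The line-of-sight limit that the earth's curvature imposes on microwave towers, mentioned above, follows from a standard rule-of-thumb approximation (this formula is not given in the text; it is the commonly quoted radio-horizon estimate):

```python
import math

def radio_horizon_km(antenna_height_m: float) -> float:
    """Approximate distance to the radio horizon for an antenna of the
    given height, using the common d ≈ 4.12 * sqrt(h) rule of thumb
    (which folds a 4/3 refraction factor into the earth's radius)."""
    return 4.12 * math.sqrt(antenna_height_m)

def max_hop_km(h_tx_m: float, h_rx_m: float) -> float:
    """Longest line-of-sight microwave hop: the sum of the two horizons."""
    return radio_horizon_km(h_tx_m) + radio_horizon_km(h_rx_m)
```

By this approximation a single 100 m tower sees about 41 km, and two 100 m towers can span roughly 82 km; taller masts or intermediate repeaters extend the path further, as the text notes.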

Switching :
A mechanism for communicating by sharing resources. Switching is the generic method for establishing a path for point-to-point communication in a network. It involves the nodes in the network utilizing their direct communication lines to other nodes so that a path is established in a piecewise fashion. Each node has the capability to ‘switch’ to a neighbouring node (i.e., a node to which it is directly connected) to further stretch the path until it is completed.

One of the most important functions of the network layer is to employ the switching capability of the nodes in order to route messages across the network. There are two basic methods of switching: circuit switching and packet switching. When information has to go over a switch in the communications system, there are several choices of how to switch it. A circuit could be set up, causing no delay between switches but incurring setup time. Message switching could be used, which involves sending an entire message from one switch to the next before forwarding is possible. And then there is packet switching, in which a message is cut into several smaller fixed-size packets, reducing the wait time at each switch compared to message switching.
Circuit switching: a method of communicating over a circuit that is allocated before communication begins. In circuit switching, two communicating stations are connected by a dedicated communication path which consists of intermediate nodes in the network and the links that connect these nodes. What is significant about circuit switching is that the communication path remains intact for the duration of the connection, engaging the nodes and the links involved in the path for that period. (However, these nodes and links are typically capable of supporting many channels, so only a portion of their capacity is taken away by the circuit.)

Circuit switching relies on dedicated equipment especially built for the purpose, and is the dominant form of switching in telephone networks. Its main advantage lies in its predictable behaviour: because it uses a dedicated circuit, it can offer a constant throughput with no noticeable delay in the transfer of data. This property is important in telephone networks, where even a short delay in voice traffic can have disruptive effects. Circuit switching's main weakness is its inflexibility in dealing with computer-oriented data. A circuit uses a fixed amount of bandwidth, regardless of whether it is used or not. In the case of voice traffic the bandwidth is usually well used, because most of the time one of the two parties in a telephone conversation is speaking. However, computers behave differently: they tend to go through long silent periods followed by

a sudden burst of data transfer. This leads to significant underutilization of circuit bandwidth. Another disadvantage of circuit switching is that the network is only capable of supporting a limited number of simultaneous circuits. When this limit is reached, the network blocks further attempts at connection until some of the existing circuits are released.
Packet switching: a method of communicating by dividing data into packets. Nodes (switches) perform communication processing on individual packets without determining the route before communication begins. Packet switching was designed to address the shortcomings of circuit switching in dealing with data communication. Unlike circuit switching, where communication is continuous along a dedicated circuit, in packet switching communication is discrete, in the form of packets. Each packet is of a limited size and can hold up to a certain number of octets of user data. Larger messages are broken into smaller chunks so that they can be fitted into packets. In addition to user data, each packet carries additional information (in the form of a header) that enables the network to route it to its final destination. A packet is handed over from node to node across the network. Each receiving node temporarily stores the packet until the next node is ready to receive it, and then passes it on to the next node. This technique is called store-and-forward and overcomes one of the limitations of circuit switching. A packet-switched network has a much higher capacity for accepting further connections: additional connections are usually not blocked but simply slow down existing connections, because they increase the overall number of packets in the network and hence the delivery time of each packet.
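The packetization just described, breaking a large message into fixed-size packets that each carry a header so the destination can reassemble them, can be sketched as follows (the 4-byte sequence-number header is an invented, minimal format, not a real protocol):

```python
HEADER = 4  # bytes reserved for a sequence number (illustrative header format)

def packetize(message: bytes, payload_size: int) -> list:
    """Cut a message into fixed-size packets, each carrying a sequence
    number so the receiver can reassemble them in the right order."""
    packets = []
    for seq, start in enumerate(range(0, len(message), payload_size)):
        header = seq.to_bytes(HEADER, "big")
        packets.append(header + message[start:start + payload_size])
    return packets

def reassemble(packets: list) -> bytes:
    """Sort by sequence number and strip the headers."""
    ordered = sorted(packets, key=lambda p: int.from_bytes(p[:HEADER], "big"))
    return b"".join(p[HEADER:] for p in ordered)

msg = b"store and forward"
pkts = packetize(msg, 4)        # 17 bytes -> 5 packets
```

Because each packet carries its own sequence number, the original message is recovered even if the network delivers the packets out of order.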


Chapter 2. Common Network Architecture
Connection oriented N/Ws & Connectionless N/Ws : In general, transport protocols can be characterized as being either connection-oriented or connectionless. Connection-oriented services must first establish a connection with the desired service before passing any data, whereas a connectionless service can send the data without any need to establish a connection first. In general, connection-oriented services provide some level of delivery guarantee, whereas connectionless services do not.
Connection oriented N/Ws : Connection-oriented means that when devices communicate, they perform handshaking to set up an end-to-end connection. The handshaking process may be as simple as synchronization, as in the transport-layer protocol TCP, or as complex as negotiating communications parameters, as with a modem. Connection-oriented systems can only work in bi-directional communications environments: to negotiate a connection, both sides must be able to communicate with each other, which will not work in a unidirectional environment. A session connection (analogous to a phone call) must be established before any data can be sent. This method is often called a "reliable" network service; it can guarantee that data will arrive in the same order it was sent. Connection-oriented services set up virtual links between end systems through a network. Connection-oriented service involves three phases:
- connection establishment
- data transfer
- connection termination
During connection establishment, the end nodes may reserve resources for the connection. The end nodes also may negotiate and establish certain criteria for the transfer, such as the window size used in TCP connections. This resource reservation is one of the things exploited in some denial-of-service (DoS) attacks: an attacking system sends many requests to establish a connection but never completes them. The attacked computer is then left with resources allocated for many never-completed connections.
Then, when an end node tries to complete an actual connection, there are not enough resources for the valid connection. The data transfer phase occurs when the actual data is transmitted over the connection. During data transfer, most connection-oriented services will monitor for lost packets and handle resending them. The protocol is generally also responsible for putting the packets in the right sequence before passing the data up the protocol stack.

When the transfer of data is complete, the end nodes terminate the connection and release the resources reserved for it. Connection-oriented network services have more overhead than connectionless ones: they must negotiate a connection, transfer data, and tear down the connection, whereas a connectionless transfer can simply send the data without the added overhead of creating and tearing down a connection. Each has its place in internetworks.
Connection-oriented service is modelled after the telephone system. To talk to someone, you pick up the phone, dial the number, talk and then hang up. Similarly, to use a connection-oriented network service, the service user first establishes a connection, uses the connection, and then releases it. The essential aspect of a connection is that it acts like a tube: the sender pushes objects in at one end, and the receiver takes them out at the other end. In most cases the order is preserved, so that the bits arrive in the order they were sent. In some cases, when a connection is established, the sender, receiver and subnet conduct a negotiation about parameters to be used, such as maximum message size, quality of service required and other issues. Typically, one side makes a proposal and the other side can accept it, reject it, or make a counter-proposal.
Connectionless N/Ws : Connectionless means that no effort is made to set up a dedicated end-to-end connection. Connectionless communication is usually achieved by transmitting information in one direction, from source to destination, without checking to see whether the destination is still there or whether it is prepared to receive the information. When there is little interference and plenty of speed available, these systems work fine. In environments where there is difficulty transmitting to the destination, information may have to be re-transmitted several times before the complete message is received. Walkie-talkies and Citizens Band radios are good examples of connectionless communication.
You speak into the mike, and the radio transmitter sends out your signal. If the person receiving you doesn't understand you, there's nothing his radio can do to correct things; the receiver must send you a message back asking you to repeat your last message. IP, UDP, ICMP, DNS, TFTP and SNMP are examples of connectionless protocols in use on the Internet. A connectionless service does not require a session connection between sender and receiver: the sender simply starts sending packets (called datagrams) to the destination. This service does not have the reliability of the connection-oriented method, but it is useful for periodic burst transfers. Neither system must maintain state information for

the systems to which they send transmissions or from which they receive transmissions. A connectionless network provides minimal services. Connectionless service is modelled after the postal system: each message carries the full destination address, and each one is routed through the system independently of all the others. It is possible that the first one sent is delayed so that the second one arrives first.
Each service can be characterized by a quality of service. Some services are reliable in the sense that they never lose data. Usually, a reliable service is implemented by having the receiver acknowledge the receipt of each message, so the sender is sure that it arrived. The acknowledgement process introduces overhead and delays, which are often worth it but are sometimes undesirable. A typical situation in which a reliable connection-oriented service is appropriate is file transfer: the owner of the file wants to be sure that all the bits arrive correctly and in the same order they were sent. Very few file-transfer customers would prefer a service that occasionally scrambles or loses a few bits, even if it is much faster.
Reliable connection-oriented service has two minor variations: message sequences and byte streams. In the former variant, the message boundaries are preserved. When a user logs into a remote server, a byte stream from the user's computer to the server is all that is needed; message boundaries are not relevant.
Sometimes the convenience of not having to establish a connection to send one short message is desired, but reliability is still essential. The acknowledged datagram service can be provided for these applications: it is like sending a registered letter and requesting a return receipt. When the receipt comes back, the sender knows that the letter was delivered to the intended party and not lost along the way. Still another service is the request-reply service, in which the sender transmits a single datagram containing a request and the reply contains the answer.
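Datagram behaviour is easy to demonstrate with Python's standard socket module: the sender below transmits a UDP datagram to the receiver's address with no connection setup at all (the loopback address and the message are chosen for illustration):

```python
import socket

# Receiver: bind to any free port on the loopback interface.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
addr = rx.getsockname()          # (host, port) the sender will target

# Sender: no handshake, no session -- the datagram is simply sent.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"datagram", addr)

data, sender = rx.recvfrom(1024)
tx.close()
rx.close()
```

Note that nothing in this exchange tells the sender whether the datagram arrived; any acknowledgement or retransmission would have to be added on top, which is exactly what the reliable services above provide.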
Service                      Example
--------------------------   --------------------
Connection-oriented:
  Reliable message stream    Sequence of pages
  Reliable byte stream       Remote login
  Unreliable connection      Digitized voice
Connectionless:
  Unreliable datagram        Electronic junk mail
  Acknowledged datagram      Registered mail
  Request-reply              Database query

Figure :- Table of six different types of services
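The request-reply service in the last row of the table maps naturally onto UDP datagrams: one datagram carries the request, one carries the answer, with no connection set up. A minimal loopback sketch in Python (the `ANSWER:` prefix and the helper name are illustrative, not part of any standard):

```python
import socket

def run_request_reply(request: bytes) -> bytes:
    """Send one request datagram and return the single reply datagram."""
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))          # OS picks a free port
    addr = server.getsockname()

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.sendto(request, addr)           # one datagram carries the full request

    data, client_addr = server.recvfrom(1024)
    server.sendto(b"ANSWER:" + data, client_addr)  # the reply carries the answer

    reply, _ = client.recvfrom(1024)
    client.close()
    server.close()
    return reply

if __name__ == "__main__":
    print(run_request_reply(b"database query"))
```

Note that no connection exists at any point: each datagram stands alone, exactly as in the database-query example above.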
Prof. Jadhav Dattatraya Subhash (SICS-MCA, Korti) Page 16


Connection-oriented methods may be implemented in the data link layer and/or the transport layer of the protocol stack, depending on the physical connections in place and the services required by the communicating systems. TCP (Transmission Control Protocol) is a connection-oriented transport protocol, while UDP (User Datagram Protocol) is a connectionless transport protocol. Both operate over IP.

The physical, data link, and network layer protocols have been used to implement guaranteed data delivery. For example, X.25 packet-switching networks perform extensive error checking and packet acknowledgment because the services were originally implemented on poor-quality telephone connections. Today, networks are more reliable, and it is generally believed that the underlying network should do what it does best: deliver data bits as quickly as possible. Therefore, connection-oriented services are now primarily handled in the transport layer by end systems, not the network. This allows lower-layer networks to be optimized for speed.

LANs operate as connectionless systems. A computer attached to a network can start transmitting frames as soon as it has access to the network; it does not need to set up a connection with the destination system ahead of time. However, a transport-level protocol such as TCP may set up a connection-oriented session when necessary.

The Internet is one big connectionless packet network in which all packet deliveries are handled by IP. However, TCP adds connection-oriented services on top of IP, providing all the upper-level connection-oriented session requirements to ensure that data is delivered properly. MPLS is a relatively new connection-oriented networking scheme for IP networks that sets up fast label-switched paths across routed or layer 2 networks. A WAN service that uses the connection-oriented model is frame relay.
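The TCP/UDP contrast described above can be seen directly with sockets: TCP must establish a connection (the three-way handshake happens inside `connect`/`accept`) before any data flows, while UDP simply sends a datagram. A hedged loopback sketch in Python (the helper names are illustrative; ports are chosen by the OS):

```python
import socket
import threading

def tcp_echo_once() -> bytes:
    """Connection-oriented: a connection must exist before data moves."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    addr = srv.getsockname()

    def server():
        conn, _ = srv.accept()        # accept the incoming connection
        conn.sendall(conn.recv(1024)) # echo the data back
        conn.close()

    t = threading.Thread(target=server)
    t.start()
    cli = socket.create_connection(addr)  # three-way handshake happens here
    cli.sendall(b"hello over TCP")
    reply = cli.recv(1024)
    cli.close()
    t.join()
    srv.close()
    return reply

def udp_send_once() -> bytes:
    """Connectionless: a datagram is sent with no prior setup."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", 0))
    cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    cli.sendto(b"hello over UDP", srv.getsockname())  # no connection needed
    data, _ = srv.recvfrom(1024)
    cli.close()
    srv.close()
    return data
```

Both functions move the same few bytes; the difference is purely the connection setup that TCP requires and UDP omits.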
The service provider sets up PVCs (permanent virtual circuits) through the network as required or requested by the customer. ATM is another networking technology that uses the connection-oriented virtual circuit approach.

Example networks : Since the beginning of networking, a debate has been going on between the people who support connectionless subnets and the people who support connection-oriented subnets.

P2P : Peer-to-peer (P2P) computing or networking is a distributed application architecture that partitions tasks or workloads between peers. Peers are equally privileged, equipotent participants in the application. They are said to form a peer-to-peer network of nodes.


P2P networking has generated tremendous interest worldwide among both Internet surfers and computer networking professionals. P2P software systems like Kazaa and Napster rank among the most popular software applications ever, and numerous businesses and Web sites have promoted "peer-to-peer" technology as the future of Internet networking.

"A type of network in which each workstation has equivalent capabilities and responsibilities. This differs from client/server architectures, in which some computers are dedicated to serving the others." This definition captures the traditional meaning of peer-to-peer networking. Computers in a peer-to-peer network are typically situated physically near to each other and run similar networking protocols and software. Before home networking became popular, only small businesses and schools built peer-to-peer networks.

Peers make a portion of their resources, such as processing power, disk storage, or network bandwidth, directly available to other network participants, without the need for central coordination by servers or stable hosts. Peers are both suppliers and consumers of resources, in contrast to the traditional client-server model, where only servers supply and clients consume.

Peer-to-peer (P2P) networking is a fairly popular concept. Networks such as BitTorrent and eMule make it easy for people to find what they want and share what they have. The concept of sharing seems benign enough: if I have something you want and you have something I want, why shouldn't we share?

In its simplest form, a peer-to-peer (P2P) network is created when two or more PCs are connected and share resources without going through a separate server computer. A P2P network can be an ad hoc connection, such as a couple of computers connected via Universal Serial Bus to transfer files. A P2P network also can be a permanent infrastructure that links a half-dozen computers in a small office over copper wires.
Or a P2P network can be a network on a much grander scale in which special protocols and applications set up direct relationships among users over the Internet.

On a P2P network, when a user wants a file, installed P2P software locates any copies of the file within the P2P network. It then allows the user to create multiple connections with several sources that have all or part of the requested file. As parts of the file are received, they are also uploaded to other users that are requesting that file. This protocol of matching several sources to a request makes for an efficient download scheme.

P2P technology is legal, but sharing copyrighted materials is not. Some websites that archive illegal P2P files have been targeted by organizations representing recording artists and the movie industry.


P2P is a peer-to-peer application model in which both parties participate in the communication service. Communication can be initiated by either party, but the peers must have similar characteristics; only then is a communication or talk session between them considered P2P. The P2P model can be compared with the client-server model, in which a client interacts with a server: there, one side provides the service and the other consumes it, whereas in the P2P model both parties play both roles. Communication can take place between two users, or among different groups of users at the same time. A node serves as the communication endpoint for each of the two parties, and whichever party wishes to start the conversation can begin it.

In P2P networking, users can not only communicate with each other but also share and transfer files. P2P transfer is faster in the sense that P2P does not need a central server to share files and data, which is why P2P is very popular for file sharing.

X.25 : In the early 1970s there were many data communication networks (also known as public networks), which were owned by private companies, organizations, and government agencies. Since those public networks were quite different internally, and the interconnection of networks was growing very fast, there was a need for a common network interface protocol. In 1976, X.25 was recommended as the desired protocol by the International Consultative Committee for Telegraphy and Telephony (CCITT), called the International Telecommunication Union (ITU) since 1993.
X.25 is a standard for WAN communications that defines how connections between user devices and network devices are established and maintained. X.25 is designed to operate effectively regardless of the type of systems connected to the network. It is typically used in the packet-switched networks (PSNs) of common carriers, such as the telephone companies. Subscribers are charged based on their use of the network.

X.25 network devices fall into three general categories: data terminal equipment (DTE), data circuit-terminating equipment (DCE), and packet-switching exchange (PSE). Data terminal equipment (DTE) devices are end systems that communicate across the X.25 network. They are usually terminals, personal computers, or network hosts, and are located on the premises of individual subscribers.



Data circuit-terminating equipment (DCE) devices are communications devices, such as modems and packet switches, that provide the interface between DTE devices and a PSE; they are generally located in the carrier's facilities.

PSEs are switches that compose the bulk of the carrier's network. They transfer data from one DTE device to another through the X.25 PSN.

Packet Assembler/Disassembler :
The packet assembler/disassembler (PAD) is a device commonly found in X.25 networks. PADs are used when a DTE device, such as a character-mode terminal, is too simple to implement the full X.25 functionality. The PAD is located between a DTE device and a DCE device, and it performs three primary functions: buffering (storing data until a device is ready to process it), packet assembly, and packet disassembly. The PAD buffers data sent to or from the DTE device. It also assembles outgoing data into packets and forwards them to the DCE device. (This includes adding an X.25 header.) Finally, the PAD disassembles incoming packets before forwarding the data to the DTE.
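The three PAD functions described above can be sketched in a few lines. This is an illustrative model only: the 3-byte header format (a logical channel number plus a 2-byte payload length) and the 128-byte maximum payload are assumptions for the sketch, not the real X.25 header layout:

```python
MAX_PAYLOAD = 128   # assumed maximum packet payload for this sketch

def assemble(data: bytes, lcn: int) -> list[bytes]:
    """Packet assembly: split a buffered character stream into packets,
    prefixing each with a (hypothetical) 3-byte header."""
    packets = []
    for i in range(0, len(data), MAX_PAYLOAD):
        chunk = data[i:i + MAX_PAYLOAD]
        header = bytes([lcn]) + len(chunk).to_bytes(2, "big")
        packets.append(header + chunk)
    return packets

def disassemble(packets: list[bytes]) -> bytes:
    """Packet disassembly: strip the headers and rebuild the byte stream
    before handing the data to the DTE."""
    out = bytearray()
    for pkt in packets:
        length = int.from_bytes(pkt[1:3], "big")
        out += pkt[3:3 + length]
    return bytes(out)
```

The buffering function is implicit: `data` is the buffered stream collected from a slow character-mode terminal, and the round trip `disassemble(assemble(data, lcn))` reproduces it exactly.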



The X.25 protocol suite maps to the lowest three layers of the OSI reference model.
- Physical layer: Deals with the physical interface between an attached station and the link that attaches that station to the packet-switching node. X.21 is the most commonly used physical layer standard.
- Frame layer: Facilitates reliable transfer of data across the physical link by transmitting the data as a sequence of frames. Uses a subset of HDLC known as Link Access Procedure, Balanced (LAPB), a bit-oriented protocol.
- Packet layer: Responsible for the end-to-end connection between two DTEs. Functions performed are: establishing a connection, transferring data, terminating a connection, and error and flow control. With the help of the X.25 packet layer, data are transmitted in packets over external virtual circuits.

Physical Layer :
At the physical layer, X.21 is specifically defined for X.25 by ITU-T. The X.21 interface operates over eight interchange circuits (signal ground, DTE common return, transmit, receive, control, indication, signal element timing, and byte timing); their functions are defined in Recommendation X.24 and their electrical characteristics in Recommendation X.27. The recommendation specifies how the DTE can set up and clear calls by exchanging signals with the DCE. The physical connector has 15 pins, but not all of them are used. The DTE uses the T and C circuits to transmit data and control information. The DCE uses the R and I circuits for data and control. The S circuit contains a signal stream emitted by the DCE to provide timing information so the DTE knows when each bit interval


starts and stops. The B circuit may also be provided to group the bits into byte frames. If this option is not provided, the DCE and DTE must begin every control sequence with at least two SYN characters to enable each other to deduce the implied frame boundary.

Link Layer
The link layer (also called level 2, or the frame level) ensures reliable transfer of data between the DTE and the DCE by transmitting the data as a sequence of frames (a frame is an individual data unit which contains address, control, and information fields). The functions performed by the link level include:
- Transfer of data in an efficient and timely fashion.
- Synchronization of the link to ensure that the receiver is in step with the transmitter.
- Detection of transmission errors and recovery from such errors.
- Identification and reporting of procedural errors to higher levels, for recovery.
The link level uses data link control procedures that are compatible with High-level Data Link Control (HDLC), standardized by ISO, and with the Advanced Data Communications Control Procedures (ADCCP), standardized by the American National Standards Institute (ANSI). There are several protocols which can be used at the link level:
- Link Access Procedure, Balanced (LAPB) is derived from HDLC and is the most commonly used. In addition to all the other characteristics of HDLC, it enables the formation of a logical link connection.
- Link Access Procedure (LAP) is an earlier version of LAPB and is seldom used today.
- Link Access Procedure, D channel (LAPD) is derived from LAPB and is used for Integrated Services Digital Networks (ISDN); it enables data transmission between DTEs through the D channel, especially between a DTE and an ISDN node.
- Logical Link Control (LLC) is an IEEE 802 local area network (LAN) protocol that enables X.25 packets to be transmitted through a LAN channel.
Now let us discuss the most commonly used link layer protocol, LAPB.
LAPB is a bit-oriented protocol that ensures that frames are correctly ordered and error-free. There are three kinds of frames:
1. Information: This kind of frame contains the actual information being transferred and some control information. The control field in these frames contains the frame sequence number. I-frame functions include sequencing, flow control, and error detection and recovery. I-frames carry send- and receive-sequence numbers.
2. Supervisory: The supervisory frame (S-frame) carries control information. S-frame functions include requesting and suspending transmissions, reporting on status, and acknowledging the receipt of I-frames. S-frames carry only receive-sequence numbers. There are various types of supervisory frames:


- RECEIVE READY (RR): Acknowledgment frame indicating the next frame expected.
- REJECT (REJ): Negative acknowledgment frame used to indicate detection of a transmission error.
- RECEIVE NOT READY (RNR): Like RECEIVE READY, but tells the sender to stop sending due to temporary problems.

3. Unnumbered: This kind of frame is used only for control purposes. U-frame functions include link setup and disconnection, as well as error reporting. U-frames carry no sequence numbers.

Packet Level
This level governs the end-to-end communication between the different DTE devices. Layer 3 is concerned with connection setup and teardown and flow control between the DTE devices, as well as network routing functions and the multiplexing of simultaneous logical connections over a single physical connection. PLP (Packet Layer Protocol) is the network layer protocol of X.25.

Call setup mode is used to establish SVCs (switched virtual circuits) between DTE devices. PLP uses the X.121 addressing scheme to set up the virtual circuit. Call setup is executed on a per-virtual-circuit basis, which means that one virtual circuit can be in call setup mode while another is in data transfer mode. This mode is used only with SVCs, not with PVCs. To establish a connection on an SVC, the calling DTE sends a Call Request packet, which includes the address of the remote DTE to be contacted. The destination DTE decides whether or not to accept the call (the Call Request packet includes the sender's DTE address, as well as other information that the called DTE can use to decide whether or not to accept the call). A call is accepted by issuing a Call Accepted packet, or cleared by issuing a Clear Request packet. Once the originating DTE receives the Call Accepted packet, the virtual circuit is established and data transfer may take place.

Data transfer mode is used for transferring data between two DTE devices across a virtual circuit. In this mode, PLP handles segmentation and reassembly, bit padding, and error and flow control. This mode is executed on a per-virtual-circuit basis and is used with both PVCs and SVCs.

Idle mode is used when a virtual circuit is established but data transfer is not occurring. It is executed on a per-virtual-circuit basis and is used only with SVCs.
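The three LAPB frame types described earlier can be told apart by the low-order bits of the HDLC control field: a 0 in the last bit marks an I-frame, the bits 01 mark an S-frame, and 11 marks a U-frame. A minimal sketch, assuming standard modulo-8 HDLC numbering (the function name and output strings are illustrative):

```python
# Supervisory subtype is selected by two bits of the control field.
S_TYPES = {0b00: "RECEIVE READY", 0b01: "RECEIVE NOT READY",
           0b10: "REJECT", 0b11: "SELECTIVE REJECT"}

def classify(control: int) -> str:
    """Classify a single (modulo-8) HDLC/LAPB control byte."""
    if control & 0b01 == 0:                    # low bit 0 -> Information frame
        ns = (control >> 1) & 0b111            # send-sequence number N(S)
        nr = (control >> 5) & 0b111            # receive-sequence number N(R)
        return f"I-frame N(S)={ns} N(R)={nr}"
    if control & 0b11 == 0b01:                 # low bits 01 -> Supervisory frame
        nr = (control >> 5) & 0b111            # S-frames carry only N(R)
        return f"S-frame {S_TYPES[(control >> 2) & 0b11]} N(R)={nr}"
    return "U-frame"                           # low bits 11 -> Unnumbered frame
```

For example, `classify(0b00000011)` reports a U-frame, while a control byte ending in 0 decodes into an I-frame with its send and receive sequence numbers.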
Call clearing mode is used to end communication sessions between DTE devices and to terminate SVCs. This mode is executed on a per-virtual-circuit basis and is used only with SVCs. When either DTE wishes to terminate the call, a Clear Request packet is sent to the remote DTE, which responds with a Clear Confirmation packet.



Restarting mode is used to synchronize transmission between a DTE device and a locally connected DCE device. This mode is not executed on a per-virtual-circuit basis; it affects all of the DTE device's established virtual circuits.
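The per-virtual-circuit modes above behave like a small state machine on an SVC: idle, call setup, data transfer, and call clearing, driven by the packets named in the text. A hedged sketch (the class and the packet-name strings are illustrative, not an implementation of the X.25 packet encoding):

```python
class SVC:
    """Illustrative per-virtual-circuit mode tracker for an SVC."""

    def __init__(self):
        self.mode = "idle"

    def send(self, packet: str) -> None:
        # Valid (mode, packet) -> next mode transitions from the text.
        transitions = {
            ("idle", "Call Request"): "call setup",
            ("call setup", "Call Accepted"): "data transfer",
            ("call setup", "Clear Request"): "idle",        # call refused
            ("data transfer", "Clear Request"): "call clearing",
            ("call clearing", "Clear Confirmation"): "idle",
        }
        key = (self.mode, packet)
        if key not in transitions:
            raise ValueError(f"{packet!r} not valid in mode {self.mode!r}")
        self.mode = transitions[key]
```

Walking one circuit through Call Request, Call Accepted, Clear Request, and Clear Confirmation returns it to idle; any packet outside the table is rejected, mirroring the per-mode rules above.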

Ethernet : Ethernet is a physical and data link layer technology for local area networks (LANs). Ethernet was invented by engineer Robert Metcalfe. When first widely deployed in the 1980s, Ethernet supported a maximum theoretical data rate of 10 megabits per second (Mbps). Later, so-called "Fast Ethernet" standards increased this maximum data rate to 100 Mbps. Today, Gigabit Ethernet technology further extends peak performance up to 1000 Mbps.

Higher-level network protocols like Internet Protocol (IP) use Ethernet as their transmission medium. Data travels over Ethernet inside protocol units called frames. The run length of individual Ethernet cables is limited to roughly 100 meters, but Ethernet networks can be easily extended to link entire schools or office buildings using network bridge devices.

Ethernet is the least expensive high-speed LAN alternative. Ethernet adapters transmit and receive data at speeds of 10 million bits per second through up to 300 feet of telephone wire to a "hub" device normally stacked in a wiring closet. Data is transferred between wiring closets using either a heavy coax cable ("Thicknet") or fiber optic cable.

Ethernet uses a protocol called CSMA/CD. This stands for "Carrier Sense, Multiple Access, Collision Detect". The "Multiple Access" part means that every station is connected to a single copper wire (or a set of wires that are connected together to form a single data path). The "Carrier Sense" part says that before


transmitting data, a station checks the wire to see if any other station is already sending something. If the LAN appears to be idle, then the station can begin to send data.

Figure : Metcalfe's original Ethernet sketch

An Ethernet station sends data at a rate of 10 megabits per second, which allows 100 nanoseconds per bit. Light and electricity travel about one foot in a nanosecond. Therefore, after the electric signal for the first bit has traveled about 100 feet down the wire, the station has begun to send the second bit. However, an Ethernet cable can run for hundreds of feet. If two stations are located, say, 250 feet apart on the same cable, and both begin transmitting at the same time, then they will be in the middle of the third bit before the signal from each reaches the other station.

This explains the need for the "Collision Detect" part. Two stations can begin to send data at the same time, and their signals will "collide" nanoseconds later. When such a collision occurs, the two stations stop transmitting, "back off", and try again later after a randomly chosen delay period.

While an Ethernet can be built using one common signal wire, such an arrangement is not flexible enough to wire most buildings. Unlike an ordinary telephone circuit, Ethernet wire cannot be just spliced together, connecting one copper wire to another. Ethernet requires a repeater. A repeater is a simple station that is connected to two wires. Any data that it receives on one wire it repeats bit-for-bit on the other wire. When collisions occur, it repeats the collision as well. In common practice, repeaters are used to convert the Ethernet signal from one type of wire to another. In particular, when the connection to the desktop uses ordinary telephone wire, the hub back in the telephone closet contains a repeater for every phone circuit. Any data coming down any phone line is copied onto the main Ethernet coax cable, and any data from the main cable is duplicated and transmitted


down every phone line. The repeaters in the hub electrically isolate each phone circuit, which is necessary if a 10 megabit signal is going to be carried 300 feet on ordinary wire.

Every set of rules is best understood by characterizing its worst case. The worst case for Ethernet starts when a PC at the extreme end of one wire begins sending data. The electric signal passes down the wire through repeaters, and just before it gets to the last station at the other end of the LAN, that station (hearing nothing and thinking that the LAN is idle) begins to transmit its own data. A collision occurs. The second station recognizes this immediately, but the first station will not detect it until the collision signal retraces the first path all the way back through the LAN to its starting point.

Any system based on collision detect must control the time required for the worst round trip through the LAN. As the term "Ethernet" is commonly defined, this round trip is limited to 50 microseconds (millionths of a second). At a signaling speed of 10 million bits per second, this is enough time to transmit 500 bits. At 8 bits per byte, this is slightly less than 64 bytes. To make sure that the collision is recognized, Ethernet requires that a station must continue transmitting until the 50 microsecond period has ended. If the station has less than 64 bytes of data to send, then it must pad the data by adding zeros at the end.

In simpler days, when Ethernet was dominated by heavy duty coax cable, it was possible to translate the 50 microsecond limit and other electrical restrictions into rules about cable length, number of stations, and number of repeaters. However, by adding new media (such as fiber optic cable) and smarter electronics, it becomes difficult to state physical distance limits with precision. However those limits work out, they are ultimately reflections of the constraint on the worst case round trip.
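The worst-case arithmetic above can be checked directly; a small sketch, assuming the classic 50 microsecond round trip and 10 Mbps rate quoted in the text:

```python
def min_frame_bytes(round_trip_s: float, bit_rate_bps: float) -> float:
    """Bytes a station must keep sending to be sure it hears a collision."""
    bits = round_trip_s * bit_rate_bps   # bits transmitted during the round trip
    return bits / 8                      # 8 bits per byte

classic = min_frame_bytes(50e-6, 10e6)   # 50 us * 10 Mbps = 500 bits = 62.5 bytes
```

500 bits works out to 62.5 bytes, i.e. slightly less than 64, which is why Ethernet pads short frames up to the 64-byte minimum before transmitting.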
It would be possible to define some other Ethernet-like collision system with a 40 microsecond or 60 microsecond period. Changing the period, the speed, and the minimum message size simply requires a new standard and some alternate equipment. AT&T, for example, once promoted a system called "Starlan" that transmitted data at 1 megabit per second over older phone wire. Many such systems are possible, but the term "Ethernet" is generally reserved for a system that transmits 10 megabits per second with a round trip delay of 50 microseconds.

10Base2 :
• 10: 10 Mbps; 2: under 185 (~200) meters maximum cable length
• Thin coaxial cable in a bus topology
• Repeaters used to connect multiple segments; a repeater repeats the bits it hears on one interface to its other interfaces: a physical layer device only!



10BaseT and 100BaseT :
• 10/100 Mbps rate; T stands for Twisted Pair
• Hub(s) connected by twisted pair facilitate a "star topology"
• Distance of any node to the hub must be < 100 m

• Most popular packet-switched LAN technology
• Bandwidths: 10 Mbps, 100 Mbps, 1 Gbps
• Max bus length: 2500 m (500 m segments with 4 repeaters)
• Bus and star topologies are used to connect hosts
  - Hosts attach to the network via an Ethernet transceiver, hub, or switch, which detects line state and sends/receives signals
  - Hubs are used to facilitate shared connections
  - All hosts on an Ethernet are competing for access to the medium; switches break this model
• Problem: a distributed algorithm that provides fair access
• Ethernet by definition is a broadcast protocol: any signal can be received by all hosts; switching enables individual hosts to communicate
• Network layer packets are transmitted over an Ethernet by encapsulating them in Ethernet frames


Wireless LANs : Wireless LAN networking consists of the following components:
• Stations
• Wireless access points

Stations
A station (STA) is a computing device that is equipped with a wireless LAN network adapter. A personal computer equipped with a wireless LAN network adapter is known as a wireless client. Wireless clients can communicate directly with each other or through a wireless access point. Wireless clients can be mobile. A Windows wireless client is a wireless client that has a wireless network adapter and driver installed and is running Windows Vista, Windows XP, Windows Server Code Name "Longhorn," or Windows Server 2003.

Wireless access points
A wireless access point (AP) is a networking device equipped with a wireless LAN network adapter that acts as a bridge between STAs and a traditional wired network. An access point contains:
• At least one interface that connects the wireless AP to an existing wired network (such as an Ethernet backbone).
• Radio equipment with which it creates wireless connections with wireless clients.
• IEEE 802.1D bridging software, so that it can act as a transparent bridge between wireless and wired LAN segments.

The wireless AP is similar to a cellular phone network's base station; wireless clients communicate with the wired network and other wireless clients through the wireless AP. Wireless APs are not mobile and act as peripheral bridge devices to extend a wired network.

The logical connection between a wireless client and a wireless AP is a point-to-point bridged LAN segment, similar to an Ethernet-based network client connected to an Ethernet switch. All frames sent from a wireless client, whether unicast, multicast, or broadcast, are sent on the point-to-point LAN segment between the wireless client and the wireless AP. For frames sent by the wireless AP to wireless clients, unicast frames are sent on the point-to-point LAN segment, and multicast and broadcast frames are sent to all connected wireless clients at the same time.



802.11 : IEEE 802.11 is an industry standard for a shared-access wireless local area network (WLAN) that defines the Physical layer and the media access control (MAC) sublayer for wireless communications.

802.11 Physical Layer
At the Physical layer, IEEE 802.11 defines both direct-sequence spread spectrum (DSSS) and frequency-hopping spread spectrum (FHSS) transmission schemes. The original bit rates for IEEE 802.11 were 2 and 1 megabits per second (Mbps) using the S-Band 2.4-2.5 gigahertz (GHz) Industrial, Scientific, and Medical (ISM) frequency band. The maximum bit rate for IEEE 802.11b is 11 Mbps (using DSSS). The maximum bit rate for IEEE 802.11a is 54 Mbps, using the orthogonal frequency-division multiplexing (OFDM) transmission scheme and frequencies in the 5 GHz range, including the 5.725-5.875 GHz C-Band ISM frequency band. The IEEE 802.11g standard uses OFDM, has a maximum bit rate of 54 Mbps, and uses the S-Band ISM.

802.11 MAC Sublayer
At the MAC sublayer, IEEE 802.11 uses the carrier sense multiple access with collision avoidance (CSMA/CA) media access control protocol, which works in the following way:
• A wireless station with a frame to transmit first listens on the wireless channel to determine if another station is currently transmitting (carrier sense).
• If the medium is being used, the wireless station calculates a random backoff delay. Only after the random backoff delay can the wireless station again listen for a transmitting station. By instituting a random backoff delay, multiple stations that are waiting to transmit do not end up trying to transmit at the same time (collision avoidance).

The CSMA/CA scheme does not ensure that a collision never takes place, and it is difficult for a transmitting node to detect that a collision is occurring. Additionally, depending on the placement of the wireless AP and the wireless clients, a radio frequency (RF) barrier can prevent a wireless client from sensing that another wireless node is transmitting. This is known as the hidden station problem.

To provide better detection of collisions and a solution to the hidden station problem, IEEE 802.11 also defines the use of an acknowledgment (ACK) frame, to indicate that a wireless frame was successfully received, and the use of Request to Send (RTS) and Clear to Send (CTS) messages. When a station wants to transmit a frame, it sends an RTS message indicating the amount of time it needs to send the frame. The wireless AP sends a CTS message to all stations, granting permission to the requesting station and informing all other stations that they are not allowed to transmit for the time reserved by the RTS message. The exchange of RTS and CTS messages eliminates collisions due to hidden stations.
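The listen-then-backoff behaviour of CSMA/CA described above can be sketched as a small simulation. The medium is modelled as a sequence of busy/idle samples, and the slot counts are illustrative, not the exact 802.11 timing parameters:

```python
import random

def csma_ca_attempts(medium, max_backoff=15, seed=0):
    """Count how many random backoffs a station takes before it senses
    the medium idle and may transmit. `medium` is a list of booleans:
    True = busy at that sample, False = idle."""
    rng = random.Random(seed)   # seeded for a reproducible sketch
    backoffs = 0
    i = 0
    while i < len(medium) and medium[i]:          # carrier sense: busy?
        backoffs += 1                             # collision avoidance: defer
        i += 1 + rng.randint(0, max_backoff)      # random backoff, listen again
    return backoffs                               # idle: transmit, await the ACK
```

A station facing an idle medium transmits immediately (zero backoffs); one that hears a busy channel defers for a random number of slots before sensing again, which is what keeps multiple waiting stations from all transmitting at once.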


802.11b
The major enhancement to IEEE 802.11 by IEEE 802.11b is the standardization of the Physical layer to support higher bit rates. IEEE 802.11b supports two additional speeds, 5.5 Mbps and 11 Mbps, using the S-Band ISM. IEEE 802.11b uses the DSSS transmission scheme to provide the higher data rates. The bit rate of 11 Mbps is achievable in ideal conditions. In less-than-ideal conditions, 802.11b uses the slower speeds of 5.5 Mbps, 2 Mbps, and 1 Mbps.

802.11a
IEEE 802.11a operates at a data transmission rate as high as 54 Mbps and uses the C-Band ISM. Instead of DSSS, 802.11a uses OFDM. OFDM allows data to be transmitted by subfrequencies in parallel, which provides greater resistance to interference and greater throughput. This higher-speed technology allows wireless LAN networking to perform better for video and conferencing applications. Because they are not on the same frequencies as Bluetooth or microwave ovens, OFDM and IEEE 802.11a provide both a higher data rate and a cleaner signal. The bit rate of 54 Mbps is achievable in ideal conditions. In less-than-ideal conditions, 802.11a uses the slower speeds of 48 Mbps, 36 Mbps, 24 Mbps, 18 Mbps, 12 Mbps, and 6 Mbps.

802.11g
IEEE 802.11g, a relatively new standard, operates at a bit rate up to 54 Mbps but uses the S-Band ISM and OFDM. 802.11g is also backward compatible with 802.11b and can operate at the 802.11b bit rates and use the DSSS transmission scheme. 802.11g wireless network adapters can connect to an 802.11b wireless AP, and 802.11b wireless network adapters can connect to an 802.11g wireless AP. Thus, 802.11g provides a migration path for 802.11b networks to a frequency-compatible standard technology with a higher bit rate. Existing 802.11b wireless network adapters cannot be upgraded to 802.11g by updating the firmware of the adapter and must be replaced.
Unlike migrating from 802.11b to 802.11a (in which all the network adapters in both the wireless clients and the wireless APs must be replaced at the same time), migrating from 802.11b to 802.11g can be done incrementally. Like 802.11a, 802.11g uses 54 Mbps in ideal conditions and the slower speeds of 48 Mbps, 36 Mbps, 24 Mbps, 18 Mbps, 12 Mbps, and 6 Mbps in less-than-ideal conditions.

IEEE 802.11 Operating Modes
IEEE 802.11 defines the following operating modes:
• Ad hoc mode
• Infrastructure mode

Prof. Jadhav Dattatraya Subhash (SICS-MCA, Korti)

Page 30

Data Communication & Computer Networks

Ad Hoc Mode
In ad hoc mode, wireless clients communicate directly with each other without the use of a wireless AP or a wired network. Ad hoc mode is also called peer-to-peer mode. Wireless clients in ad hoc mode form an Independent Basic Service Set (IBSS), which is two or more wireless clients that communicate directly without the use of a wireless AP. Ad hoc mode is used to connect wireless clients together when there is no wireless AP present, when the wireless AP rejects an association due to failed authentication, or when the wireless client is explicitly configured to use ad hoc mode.

Infrastructure Mode
In infrastructure mode, there is at least one wireless AP and one wireless client. The wireless client uses the wireless AP to access the resources of a traditional wired network. The wired network can be an organization intranet or the Internet, depending on the placement of the wireless AP. A single wireless AP supporting one or multiple wireless clients is known as a Basic Service Set (BSS). A set of two or more wireless APs connected to the same wired network is known as an Extended Service Set (ESS). An ESS is a single logical network segment (also known as a subnet) and is identified by its Service Set Identifier (SSID).

When a wireless adapter is turned on, it begins to scan across the wireless frequencies for wireless APs and other wireless clients. Scanning is a listening process in which the wireless adapter listens on all the channels for beacon frames sent by wireless APs and other wireless clients. After scanning, a wireless adapter chooses a wireless AP with which to associate. This selection is made automatically by using the SSID of the wireless network and the wireless AP with the best signal strength (the highest signal-to-noise ratio). Next, the wireless client switches to the assigned channel of the chosen wireless AP and negotiates the use of a logical wireless point-to-point connection. This is known as an association.
Whether the wireless client prefers to associate with wireless APs or individual wireless clients is determined by configuration settings of the wireless client. By default, a Windows wireless client prefers to associate with a wireless AP rather than another wireless client. If the signal strength of the wireless AP is too low, the error rate too high, or if instructed by the operating system (in the case of Windows, every 60 seconds), the wireless client scans for other wireless APs to determine whether a different wireless AP can provide a stronger signal to the same wireless network. If so, the wireless client switches to the channel of that wireless AP. This is known as reassociation.
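The association choice described above (match the configured SSID, then prefer the strongest signal) can be sketched as follows; the scan-record and field names here are hypothetical:

```python
# Hypothetical scan results: each AP advertises an SSID, and the adapter
# measures a signal-to-noise ratio (dB) for it.
scan_results = [
    {"ssid": "CampusNet", "channel": 1,  "snr_db": 18},
    {"ssid": "CampusNet", "channel": 6,  "snr_db": 31},
    {"ssid": "GuestNet",  "channel": 11, "snr_db": 40},
]

def choose_ap(scan_results, configured_ssid):
    """Associate with the matching-SSID AP that has the best SNR."""
    candidates = [ap for ap in scan_results if ap["ssid"] == configured_ssid]
    if not candidates:
        return None
    return max(candidates, key=lambda ap: ap["snr_db"])

best = choose_ap(scan_results, "CampusNet")
print(best["channel"])  # 6: the strongest CampusNet signal wins
```

Note that the GuestNet AP has the highest SNR overall but is never chosen, because SSID matching comes first.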


Reassociation with a different wireless AP can occur for many different reasons. The signal can weaken because the wireless client moves away from the wireless AP, or the wireless AP can become congested with too much other traffic or interference. By switching to another wireless AP, the wireless client can distribute the load over other wireless APs, increasing the performance for other wireless clients. By placing wireless APs so that their coverage areas overlap slightly but their channels do not, wireless connectivity for large areas can be achieved. As a wireless client moves its physical location, it can associate and reassociate from one wireless AP to another, maintaining a continuous connection during physical relocation.

If the coverage areas of the wireless APs within an ESS overlap, then a wireless client can roam, or move from one location (with a wireless AP) to another (with a different wireless AP), while maintaining Network layer connectivity. For example, for TCP/IP, a wireless client is assigned an IP address when it connects to the first wireless AP. When the wireless client roams within the ESS, it creates wireless connections with other wireless APs but keeps the same IP address because all the wireless APs are on the same logical subnet. When the wireless client roams to a different ESS, the IP address configuration is no longer valid.

For a Windows XP and Windows Server 2003 wireless client, a reassociation is interpreted as a media disconnect/connect event. This event causes Windows to perform a DHCP renewal for the TCP/IP protocol. Therefore, for reassociations within the ESS, the DHCP renewal refreshes the current IP address configuration. When the Windows wireless client reassociates with a wireless AP across an ESS boundary, the DHCP renewal process obtains a new IP address configuration that is relevant for the logical IP subnet of the new ESS.
IEEE 802.11 Wireless Security
For authentication, the original 802.11 standard defined the open system and shared key authentication types. For data confidentiality (encryption), the original 802.11 standard defined Wired Equivalent Privacy (WEP). The original 802.11 standard did not define or provide a WEP key management protocol for automatic WEP encryption key determination and renewal. This is a limitation of IEEE 802.11 security services, especially for infrastructure mode networks with a large number of wireless clients. The authentication and key management issues of the original 802.11 standard are solved by using the combination of IEEE 802.1X port-based network access control and either Wi-Fi Protected Access (WPA) or Wi-Fi Protected Access 2 (WPA2).

802.11 Authentication
The original IEEE 802.11 standard defined the following types of authentication:
• Open System Authentication
• Shared Key Authentication

Open System Authentication
Open system authentication does not provide authentication, only identification using the wireless adapter's MAC address. It is used when no authentication is required, and it is the default authentication algorithm. It uses the following process:
1. The authentication-initiating wireless client sends an IEEE 802.11 authentication management frame that contains its identity.
2. The receiving wireless node checks the initiating station's identity and sends back an authentication verification frame.
With some wireless APs, you can configure the MAC addresses of allowed wireless clients using a feature known as MAC filtering. However, MAC filtering does not provide any security, because the MAC address of a wireless client can be easily determined and spoofed. By default, a Windows wireless client that is configured to perform open system authentication sends its MAC address as the identity.

Shared Key Authentication
Shared key authentication verifies that an authentication-initiating station has knowledge of a shared secret. According to the original 802.11 standard, the shared secret is delivered to the participating wireless clients by means of a secure channel that is independent of IEEE 802.11. In practice, the shared secret is manually configured on the wireless AP and the wireless client. Shared key authentication uses the following process:
1. The authentication-initiating wireless client sends a frame consisting of an identity assertion and a request for authentication.
2. The authenticating wireless node responds to the authentication-initiating wireless node with challenge text.
3. The authentication-initiating wireless node replies to the authenticating wireless node with the challenge text encrypted using WEP and an encryption key derived from the shared key authentication secret.
4. The authenticating wireless node decrypts the reply; the authentication result is positive if the decrypted challenge text matches the challenge text originally sent in the second frame. The authenticating wireless node sends the authentication result.
Because the shared key authentication secret must be manually distributed and typed, this method of authentication does not scale appropriately in large infrastructure mode networks (for example, corporate campuses and public places).
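The four-step challenge-response exchange can be sketched as follows. Note this is a toy illustration only: real shared key authentication encrypts the challenge with WEP (RC4), whereas here a simple XOR keystream stands in for the cipher purely to show the message flow, and all names are hypothetical:

```python
import os

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Stand-in cipher for illustration; NOT the real WEP/RC4 algorithm.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def shared_key_auth(client_secret: bytes, ap_secret: bytes) -> bool:
    challenge = os.urandom(16)                        # step 2: AP sends challenge
    response = xor_encrypt(challenge, client_secret)  # step 3: client encrypts it
    decrypted = xor_encrypt(response, ap_secret)      # step 4: AP decrypts reply
    return decrypted == challenge                     # match -> positive result

secret = b"manually-typed-shared-key"
print(shared_key_auth(secret, secret))        # True: both sides know the secret
print(shared_key_auth(secret, b"wrong-key"))  # False: decryption fails to match
```

Only a station holding the same secret as the AP can produce a response that decrypts back to the original challenge.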

Additionally, shared key authentication is not secure and its use is strongly discouraged.

802.1X :
The IEEE 802.1X standard defines port-based network access control, used to provide authenticated network access for Ethernet networks. This port-based network access control uses the physical characteristics of the switched LAN infrastructure to authenticate devices attached to a LAN port. Access to the port can be denied if the authentication process fails. Although this standard was designed for wired Ethernet networks, it has been adapted for use on 802.11 wireless LANs. IEEE 802.1X defines the following terms:
• Port access entity
• Authenticator
• Supplicant
• Authentication server

Port Access Entity
A LAN port, also known as a port access entity (PAE), is the logical entity that supports the IEEE 802.1X protocol and is associated with a port. A PAE can adopt the role of the authenticator, the supplicant, or both.

Authenticator
An authenticator is a LAN port that enforces authentication before allowing access to services accessible using that port. For wireless connections, the authenticator is the logical LAN port on a wireless AP through which wireless clients in infrastructure mode gain access to other wireless clients and the wired network.

Supplicant
The supplicant is a LAN port that requests access to services accessible using the authenticator. For wireless connections, the supplicant is the logical LAN port on a wireless LAN network adapter that requests access to the other wireless clients and the wired network by associating with and then authenticating itself to an authenticator. Whether for wireless connections or wired Ethernet connections, the supplicant and authenticator are connected by a logical or physical point-to-point LAN segment.

Authentication server
To verify the credentials of the supplicant, the authenticator uses an authentication server. The authentication server checks the credentials of the
supplicant on behalf of the authenticator, and then responds to the authenticator indicating whether or not the supplicant is authorized to access the authenticator's services. The authentication server may be:

• A component of the access point. In this case, the access point must be configured with the sets of user credentials corresponding to the wireless clients that will be attempting to connect. This is typically not implemented for wireless APs.
• A separate entity. In this case, the access point forwards the credentials of the wireless connection attempt to a separate authentication server. Typically, the wireless AP uses the Remote Authentication Dial-In User Service (RADIUS) protocol to send the connection attempt parameters to a RADIUS server.

Controlled and Uncontrolled Ports
The authenticator's port-based access control defines the following types of logical ports that access the wired LAN via a single physical LAN port:

• Uncontrolled port. The uncontrolled port allows an uncontrolled exchange between the authenticator (the wireless AP) and other networking devices on the wired network, regardless of any wireless client's authorization state. Frames sent by the wireless client are never sent using the uncontrolled port.
• Controlled port. The controlled port allows data to be sent between a wireless client and the wired network only if the wireless client is authorized by 802.1X. Before authentication, the switch is open and no frames are forwarded between the wireless client and the wired network. When the wireless client is successfully authenticated using IEEE 802.1X, the switch is closed and frames can be sent between the wireless client and nodes on the wired network.

On an authenticating Ethernet switch, the wired Ethernet client can send Ethernet frames to the wired network as soon as authentication is complete. The switch identifies the traffic of a specific wired Ethernet client using the physical port to which the Ethernet client is connected; typically, only a single Ethernet client is connected to a physical port on the Ethernet switch. Because multiple wireless clients contend for access to the same channel and send data using the same channel, an extension to the basic IEEE 802.1X protocol is required to allow a wireless AP to identify the secured traffic of a particular wireless client. This is done through the mutual determination of a per-client unicast session key by the wireless client and wireless AP. Only authenticated wireless clients have knowledge of their per-client unicast session key. Without a valid unicast session key tied to a successful authentication, a wireless AP discards the traffic sent from the wireless client.
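The controlled-port behaviour above can be sketched as a simple gate: frames are forwarded only for clients that have authenticated and hold the per-client session key. The class and field names are hypothetical:

```python
# Hypothetical sketch of an authenticator's controlled port: frames from a
# wireless client reach the wired LAN only when the client has authenticated
# and presents its per-client unicast session key.
class ControlledPort:
    def __init__(self):
        self.session_keys = {}   # client MAC -> unicast session key

    def authenticate(self, client_mac, session_key):
        # Recorded after a successful 802.1X exchange (not modeled here).
        self.session_keys[client_mac] = session_key

    def forward(self, client_mac, session_key, frame):
        """Return the frame for the wired LAN, or None if it is discarded."""
        if self.session_keys.get(client_mac) != session_key:
            return None          # unauthenticated traffic is discarded
        return frame

port = ControlledPort()
print(port.forward("aa:bb:cc:dd:ee:ff", "k1", b"data"))  # None: not yet authenticated
port.authenticate("aa:bb:cc:dd:ee:ff", "k1")
print(port.forward("aa:bb:cc:dd:ee:ff", "k1", b"data"))  # b'data': forwarded
```

The uncontrolled port, by contrast, would pass 802.1X/EAPOL traffic regardless of the client's authorization state.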

To provide a standard authentication mechanism for IEEE 802.1X, the Extensible Authentication Protocol (EAP) was chosen. EAP is a Point-to-Point Protocol (PPP)-based authentication mechanism that was adapted for use on pointto-point LAN segments. EAP messages are normally sent as the payload of PPP
frames. To adapt EAP messages to be sent over Ethernet or wireless LAN segments, the IEEE 802.1X standard defines EAP over LAN (EAPOL), a standard way to encapsulate EAP messages.

Gigabit :
In data communications, a gigabit is one billion bits, or 1,000,000,000 (that is, 10^9) bits. It is commonly used for measuring the amount of data that is transferred in a second between two telecommunication points. For example, Gigabit Ethernet is a high-speed form of Ethernet (a local area network technology) that can provide data transfer rates of about 1 gigabit per second. Gigabits per second is usually shortened to Gbps.

Some sources define a gigabit to mean 1,073,741,824 (that is, 2^30) bits. Although the bit is a unit of the binary number system, bits in data communications are discrete signal pulses and have historically been counted using the decimal number system. For example, 28.8 kilobits per second (Kbps) is 28,800 bits per second. Because of computer architecture and memory address boundaries, bytes are always some multiple or exponent of two.

Gigabit Ethernet is an extension to the family of Ethernet computer networking and communication standards. The Gigabit Ethernet standard supports a theoretical maximum data rate of 1 Gbps (1000 Mbps). At one time, it was believed that achieving Gigabit speeds with Ethernet required fiber optic or other special cables. However, Gigabit Ethernet can be implemented on ordinary twisted pair copper cable (specifically, the CAT5e and CAT6 cabling standards). Migration of existing computer networks from 100 Mbps Fast Ethernet to Gigabit Ethernet happened slowly at first: much legacy Ethernet technology existed (in both 10 and 100 Mbps varieties), these older technologies offered sufficient performance in many cases, and for a time Gigabit Ethernet was found mainly in research institutions.
A decrease in cost, an increase in demand, and improvements in other aspects of LAN technology were required before Gigabit Ethernet could surpass other forms of wired networking in terms of adoption.

Also known as: 1000 Mbps Ethernet. Gigabit Ethernet (GbE or 1 GigE) is a term describing various technologies for transmitting Ethernet frames at a rate of a gigabit per second (1,000,000,000 bits per second), as defined by the IEEE 802.3-2008 standard. It came into use beginning in 1999, gradually supplanting Fast Ethernet in wired local networks since it was ten times faster. The cables and equipment are very similar to those of previous standards, and as of 2011 were very common and economical.
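The decimal versus binary definitions of a gigabit, and the bit-counting arithmetic above, can be checked directly (the 700 MB file size is just an illustrative value):

```python
# Decimal (SI) vs. binary definitions of a gigabit, as discussed above.
decimal_gigabit = 10**9          # 1,000,000,000 bits
binary_gigabit = 2**30           # 1,073,741,824 bits
print(binary_gigabit - decimal_gigabit)  # 73741824 bits of difference

# Data rates are counted decimally: 28.8 Kbps is 28,800 bits per second.
print(int(28.8 * 1000))          # 28800

# Rough time to move a 700 MB (megabyte) file over 1 Gbps, ignoring overhead.
file_bits = 700 * 10**6 * 8
print(file_bits / decimal_gigabit)  # 5.6 seconds
```

In practice, framing overhead and protocol headers make the real transfer time somewhat longer than this idealized figure.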

Half-duplex gigabit links connected through hubs are allowed by the specification, but in the marketplace full-duplex with switches is normal.

Gigabit Ethernet allows network transfers up to 1,000 Mbps using standard Cat 5 UTP (unshielded twisted pair) cabling. How can this be accomplished, since Cat 5 cables can run only up to 100 Mbps? We will explain this and also other very interesting issues regarding Gigabit Ethernet performance. Ethernet Cat 5 cables have eight wires (four pairs), but under the 10BaseT and 100BaseT standards (10 Mbps and 100 Mbps, respectively) only four (two pairs) of these wires are actually used: one pair is used for transmitting data and the other pair is used for receiving data.

Pin  Colour              Function
1    White with Green    +TD
2    Green               -TD
3    White with Orange   +RD
4    Blue                Not Used
5    White with Blue     Not Used
6    Orange              -RD
7    White with Brown    Not Used
8    Brown               Not Used

The Ethernet standard uses a technique against electromagnetic noise called cancellation. As electrical current is applied to a wire, it generates an electromagnetic field around the wire. If this field is strong enough, it can create electrical interference on the wires right next to it, corrupting the data that were being transmitted there. This problem is called crosstalk. What cancellation does is transmit the same signal twice, with the second signal "mirrored" (inverted polarity) compared to the first one, as you can see in Figure 1. When receiving the two signals, the receiving device can compare them; they must be equal but "mirrored". The difference between the two signals is noise, making it very simple for the receiving device to know what is noise and to discard it. The "+TD" wire stands for "Transmitting Data" and the "+RD" wire stands for "Receiving Data". "-TD" and "-RD" are the "mirrored" versions of the same signals being transmitted on "+TD" and "+RD", respectively.

1000BASE-X :
1000BASE-X is used in industry to refer to gigabit Ethernet transmission over fiber, where options include 1000BASE-CX, 1000BASE-SX, 1000BASE-LX, 1000BASE-LX10, 1000BASE-BX10, or the non-standard -ZX implementations.
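The cancellation idea (send the signal and its mirror image, then take the difference at the receiver) can be shown numerically; the signal and noise values below are made up for illustration and use integer "volt units" so the arithmetic is exact:

```python
# Differential signaling sketch: transmit s on +TD and -s on -TD. Crosstalk
# couples (roughly) equally onto both wires, so the receiver recovers the
# signal by halving the difference of the two received values:
# (s + n) - (-s + n) = 2s, regardless of the noise n.
signal = [5, -5, 5, 5, -5]
noise  = [2, 2, -1, 1, 0]    # the same interference hits both wires

plus_td  = [s + n for s, n in zip(signal, noise)]
minus_td = [-s + n for s, n in zip(signal, noise)]

recovered = [(p - m) // 2 for p, m in zip(plus_td, minus_td)]
print(recovered)  # [5, -5, 5, 5, -5] -- the noise cancels out
```

Any noise that affects only one of the two wires would not cancel; the technique relies on the pair being twisted together so both wires pick up nearly identical interference.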


1000BASE-CX :
1000BASE-CX is an initial standard for gigabit Ethernet connections over twinaxial cabling, with maximum distances of 25 meters using balanced shielded twisted pair and either a DE-9 or an 8P8C connector. The short segment length is due to the very high signal transmission rate. Although it is still used for specific applications where cabling is done by IT professionals (for instance, the IBM BladeCenter uses 1000BASE-CX for the Ethernet connections between the blade servers and the switch modules), 1000BASE-T has succeeded it for general copper wiring use.

1000BASE-SX :
1000BASE-SX is a fiber optic gigabit Ethernet standard for operation over multi-mode fiber using a 770 to 860 nanometer, near infrared (NIR) light wavelength. The standard specifies a distance capability between 220 metres (62.5/125 µm fiber with low modal bandwidth) and 550 metres (50/125 µm fiber with high modal bandwidth). In practice, with good quality fiber, optics, and terminations, 1000BASE-SX will usually work over significantly longer distances. This standard is highly popular for intra-building links in large office buildings, co-location facilities and carrier-neutral internet exchanges.

1000BASE-LX :
1000BASE-LX is a fiber optic gigabit Ethernet standard specified in IEEE 802.3 Clause 38 which uses a long wavelength laser (1,270–1,355 nm) and a maximum RMS spectral width of 4 nm. 1000BASE-LX is specified to work over a distance of up to 5 km over 10 µm single-mode fiber. 1000BASE-LX can also run over all common types of multi-mode fiber with a maximum segment length of 550 m. For link distances greater than 300 m, the use of a special launch conditioning patch cord may be required. This launches the laser at a precise offset from the center of the fiber, which causes it to spread across the diameter of the fiber core, reducing the effect known as differential mode delay. This effect occurs when the laser couples onto only a small number of available modes in multi-mode fiber.
1000BASE-LX10 : 1000BASE-LX10 was standardized six years after the initial gigabit fiber versions as part of the Ethernet in the First Mile task group. It is very similar to 1000BASE-LX, but achieves longer distances up to 10 km over a pair of single-mode fiber due to higher quality optics. Before it was standardized 1000BASE-LX10 was essentially already in widespread use by many vendors as a proprietary extension called either 1000BASE-LX/LH or 1000BASE-LH.
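The reach figures quoted for the variants discussed so far can be collected into a small lookup table; a simple helper then shows which variants cover a given link length. The numbers are the approximate maxima from the text, and the function name is hypothetical:

```python
# Approximate maximum reach (metres) of the gigabit variants described above.
MAX_REACH_M = {
    "1000BASE-CX":   25,     # twinax copper
    "1000BASE-T":    100,    # Cat 5 or better copper
    "1000BASE-SX":   550,    # multi-mode fiber (high modal bandwidth)
    "1000BASE-LX":   5000,   # single-mode fiber
    "1000BASE-LX10": 10000,  # single-mode fiber, higher quality optics
    "1000BASE-ZX":   70000,  # single-mode fiber, 1550 nm (non-standard)
}

def candidates_for(distance_m):
    """Variants whose quoted reach covers the requested link length."""
    return sorted(name for name, reach in MAX_REACH_M.items()
                  if reach >= distance_m)

print(candidates_for(8000))  # ['1000BASE-LX10', '1000BASE-ZX']
```

Real deployments also weigh cost, connector type, and installed fiber grade, not just distance.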

1000BASE-BX10 :
1000BASE-BX10 is capable of up to 10 km over a single strand of single-mode fiber, with a different wavelength going in each direction. The terminals on each side of the fibre are not equal: the one transmitting downstream (from the center of the network to the outside) uses the 1,490 nm wavelength, and the one transmitting upstream uses the 1,310 nm wavelength.

1000BASE-ZX :
1000BASE-ZX is a non-standard but industry-accepted term for gigabit Ethernet transmission using the 1,550 nm wavelength to achieve distances of at least 70 km over single-mode fiber.

1000BASE-T :
1000BASE-T (also known as IEEE 802.3ab) is a standard for gigabit Ethernet over copper wiring. Each 1000BASE-T network segment can be a maximum length of 100 meters (328 feet) and must use Category 5 cable or better. Category 5e cable or Category 6 cable may also be used. The data is transmitted over four copper pairs, eight bits at a time. First, eight bits of data are expanded into four 3-bit symbols through a non-trivial scrambling procedure based on a linear feedback shift register; this is similar to what is done in 100BASE-T2, but uses different parameters. The 3-bit symbols are then mapped to voltage levels which vary continuously during transmission. One example mapping is as follows:

Symbol   Line signal level
000       0
001      +1
010      +2
011      -1
100       0
101      +1
110      -2
111      -1
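The example mapping above can be expressed as a lookup table. Note that the real 1000BASE-T mapping also depends on scrambler state; this only illustrates the final symbol-to-level step:

```python
# The example symbol-to-level mapping from the table above, as a lookup dict.
SYMBOL_TO_LEVEL = {
    "000":  0, "001": +1, "010": +2, "011": -1,
    "100":  0, "101": +1, "110": -2, "111": -1,
}

def symbols_to_levels(symbols):
    """Map a sequence of 3-bit symbols to line signal levels."""
    return [SYMBOL_TO_LEVEL[s] for s in symbols]

print(symbols_to_levels(["010", "110", "000", "111"]))  # [2, -2, 0, -1]
```

Because several symbols share a level (for example, 000 and 100 both map to 0), the mapping alone is not invertible; the scrambler state is what lets the receiver recover the original bits.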


Chapter 3. The OSI Reference Model
Over the past couple of decades many of the networks that were built used different hardware and software implementations; as a result they were incompatible, and it became difficult for networks using different specifications to communicate with each other. To address this problem, the International Organisation for Standardisation (ISO) researched various network schemes. The ISO, an international standards organisation responsible for a wide range of standards including many relevant to networking, recognised there was a need to create a NETWORK MODEL that would help vendors create interoperable network implementations. In 1984, in order to aid network interconnection without necessarily requiring complete redesign, the Open Systems Interconnection (OSI) reference model was approved as an international standard for communications architecture. It is now considered the primary architectural model for inter-computer communications. The Open Systems Interconnection (OSI) reference model is a descriptive network scheme. It ensures greater compatibility and interoperability between various types of network technologies. The OSI model describes how information or data makes its way from application programmes (such as spreadsheets) through a network medium (such as wire) to another application programme located on another network. The OSI reference model divides the problem of moving information between computers over a network medium into SEVEN smaller and more manageable problems. This separation into smaller, more manageable functions is known as layering.

Protocol Layering :
The OSI Reference Model is composed of seven layers, each specifying particular network functions. The process of breaking up the functions or tasks of networking into layers reduces complexity. Each layer provides a service to the layer above it in the protocol specification.


Each layer communicates with the same layer's software or hardware on other computers. The lower four layers (transport, network, data link and physical; Layers 4, 3, 2, and 1) are concerned with the flow of data from end to end through the network. The upper three layers of the OSI model (application, presentation and session; Layers 7, 6 and 5) are orientated more toward services to the applications. Data is encapsulated with the necessary protocol information as it moves down the layers before network transit.
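The encapsulation idea can be sketched as each layer prepending its own header to whatever the layer above handed down; the header labels here are purely illustrative:

```python
# Hypothetical sketch of encapsulation: as data moves down the stack, each
# layer wraps the payload from the layer above with its own header.
LAYERS = ["application", "presentation", "session", "transport",
          "network", "data link", "physical"]

def encapsulate(data: str) -> str:
    packet = data
    for layer in LAYERS:         # top (Layer 7) down to bottom (Layer 1)
        packet = f"[{layer}]" + packet
    return packet

print(encapsulate("hello"))
# [physical][data link][network][transport][session][presentation][application]hello
```

On the receiving side the process runs in reverse: each layer strips its own header and passes the remaining payload up the stack.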

LAYER 7: APPLICATION :
• The application layer is the OSI layer that is closest to the user. It provides network services to the user's applications.
• It differs from the other layers in that it does not provide services to any other OSI layer, but rather, only to applications outside the OSI model.
• Examples of such applications are spreadsheet programs, word processing programs, and bank terminal programs.
• The application layer establishes the availability of intended communication partners, synchronizes and establishes agreement on procedures for error recovery and control of data integrity.

LAYER 6: PRESENTATION :
• The presentation layer ensures that the information that the application layer of one system sends out is readable by the application layer of another system.



• If necessary, the presentation layer translates between multiple data formats by using a common format.
• Provides encryption and compression of data.
Examples :- JPEG, MPEG, ASCII, EBCDIC, HTML.

LAYER 5: SESSION :
• The session layer defines how to start, control and end conversations (called sessions) between applications.
• This includes the control and management of multiple bi-directional messages using dialogue control.
• It also synchronizes dialogue between two hosts' presentation layers and manages their data exchange.
• The session layer offers provisions for efficient data transfer.
Examples :- SQL, ASP (AppleTalk Session Protocol).

LAYER 4: TRANSPORT :
• The transport layer regulates information flow to ensure end-to-end connectivity between host applications reliably and accurately.
• The transport layer segments data from the sending host's system and reassembles the data into a data stream on the receiving host's system.
• The boundary between the transport layer and the session layer can be thought of as the boundary between application protocols and data-flow protocols. Whereas the application, presentation, and session layers are concerned with application issues, the lower four layers are concerned with data transport issues.
• Layer 4 protocols include TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).


LAYER 3: NETWORK :
• Defines end-to-end delivery of packets.
• Defines logical addressing so that any endpoint can be identified.
• Defines how routing works and how routes are learned so that the packets can be delivered.
• The network layer also defines how to fragment a packet into smaller packets to accommodate different media.
• Routers operate at Layer 3.
Examples :- IP, IPX, AppleTalk.

LAYER 2: DATA LINK :
• The data link layer provides access to the networking media and physical transmission across the media and this enables the data to locate its intended destination on a network.
• The data link layer provides reliable transit of data across a physical link by using the Media Access Control (MAC) addresses.




• The data link layer uses the MAC address to define a hardware or data link address in order for multiple stations to share the same medium and still uniquely identify each other.
• Concerned with network topology, network access, error notification, ordered delivery of frames, and flow control.
Examples :- Ethernet, Frame Relay, FDDI.

LAYER 1: PHYSICAL :
• The physical layer deals with the physical characteristics of the transmission medium.
• It defines the electrical, mechanical, procedural, and functional specifications for activating, maintaining, and deactivating the physical link between end systems.
• Such characteristics as voltage levels, timing of voltage changes, physical data rates, maximum transmission distances, physical connectors, and other similar attributes are defined by physical layer specifications.
Examples :- EIA/TIA-232, RJ45, NRZ.

There was no standard for networks in the early days, and as a result it was difficult for networks to communicate with each other. The International Organisation for Standardisation (ISO) recognised this and researched various network schemes, and in 1984 introduced the Open Systems Interconnection (OSI) reference model.

The OSI reference model's standards give vendors greater compatibility and interoperability between various types of network technologies. The OSI reference model organizes network functions into seven numbered layers. Each layer provides a service to the layer above it in the protocol specification and communicates with the same layer's software or hardware on other computers. Layers 1-4 are concerned with the flow of data from end to end through the network, and Layers 5-7 are concerned with services to the applications.

TCP/IP Model :
TCP/IP stands for Transmission Control Protocol / Internet Protocol. TCP/IP is a suite of protocols, also known as the Internet Protocol Suite. It was originally developed for the US Department of Defense Advanced Research Projects Agency (DARPA) network, but it is now the basis for the Internet.

As with the OSI model, the TCP/IP suite uses a layered model. The TCP/IP model has four or five layers, depending on who you talk to and which books you read! Some people call it a four-layer suite (Application, Transport, Internet and Network Access); others split the Network Access layer into its Physical and Datalink components.

Network access :
 The combination of datalink and physical layers deals with pure hardware (wires, satellite links, network interface cards, etc.)
 Access methods such as CSMA/CD (carrier sense multiple access with collision detection)


 Ethernet exists at the network access layer - its hardware operates at the physical layer and its medium access control method (CSMA/CD) operates at the datalink layer.

Internet :
 This layer is responsible for the routing and delivery of data across networks.
 It allows communication across networks of the same and different types and carries out translations to deal with dissimilar data addressing schemes. IP (Internet Protocol) and ARP (Address Resolution Protocol) are both to be found at the Internet layer.

Transport :
 The transport layer is similar to the OSI transport model, but with elements of the OSI session layer functionality.
 The two protocols found at the transport layer are:
 TCP (Transmission Control Protocol): a reliable, connection-oriented protocol that provides error checking and flow control through a virtual link that it establishes and finally terminates. Applications that use it include FTP and e-mail.
 UDP (User Datagram Protocol): an unreliable, connectionless protocol that does not error check or offer any flow control. Applications that use it include SNMP.

Application :
 This layer is broadly equivalent to the application, presentation and session layers of the OSI model.
 It gives an application access to the communication environment.
 Examples:
 Telnet
 HTTP (Hyper Text Transfer Protocol)
 SMTP (Simple Mail Transfer Protocol)
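The connectionless nature of UDP described above can be seen in a minimal loopback exchange: there is no handshake or acknowledgement, a datagram is simply sent and read back (delivery on the local loopback interface is reliable in practice, which keeps this sketch deterministic):

```python
import socket

# Minimal UDP loopback sketch: no connection setup, no acknowledgement --
# one datagram is sent to a bound socket and read back.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))            # let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", addr)              # fire and forget: no delivery guarantee

data, _ = receiver.recvfrom(1024)
print(data)  # b'hello'

sender.close()
receiver.close()
```

A TCP equivalent would first require `listen()`, `connect()`, and `accept()` to establish the virtual link before any data could flow; that handshake is exactly what UDP omits.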


OSI vs. TCP/IP :
The OSI and TCP/IP reference models have much in common. Both are based on the concept of a stack of independent protocols, and the functionality of the layers is roughly similar. Despite these fundamental similarities, the two models have many differences:

OSI :
- The model is general: it clearly distinguishes services, interfaces and protocols, and it is not tied to any one protocol stack.
- It has seven layers.
- The network layer supports both connectionless and connection-oriented communication; the transport layer supports only connection-oriented communication.
- Its main flaws were bad timing (TCP/IP was already well established in academia when OSI products appeared), bad technology and bad implementations.

TCP/IP :
- The model only describes the TCP/IP stack, and it does not clearly separate services, interfaces and protocols.
- It has four layers.
- The internet layer supports only connectionless communication; the transport layer supports both.
- Its main flaws are that it is too specific to one protocol suite to describe other networks, and it makes no distinction between the physical and data link layers.

In practice, TCP/IP is the model actually used in the real world, while OSI survives mainly as a conceptual reference. Knowing which model to use depends on your context - whether you need a connectionless or connection-oriented service, for instance - and it is often easiest to blend the two and use what works best.


Chapter 4. Local Area Networks
A LAN is a high-speed data network that covers a relatively small geographic area. It typically connects workstations, personal computers, printers, servers, and other devices. LANs offer computer users many advantages, including shared access to devices and applications, file exchange between connected users, and communication between users via electronic mail and other applications.

A LAN is made up of hardware as well as software components. The hardware consists of interface cards in all the machines and the cables that tie them together; the software includes the drivers for all peripherals and the network operating system that manages the network. The internal network, and therefore the LAN, exists to link all of the PCs, laptops, servers, printers, and anything else that might be useful for a computer to talk to. Most LANs have a cable running from every computer to a wall jack. The wall jack is connected to a very similar type of cable that runs to a patch panel in a wiring closet. Local area networks are usually fairly modern and very fast and make up a great portion of the internal network. However, they are almost always connected to an Internet link that is significantly slower than the LAN itself.


Components & Technology :
The various components of a LAN are:

Data Terminal Equipment (DTEs):
- Server
- Workstation
- Terminal
- Printer / Plotter / Fax / Scanner
- Laptop / Notebook / PDA

Data Communication Equipment (DCEs):
- Modem
- Hub
- Bridge
- Router
- Switch
- Repeater

Transmission Media:
- Wired:
  - Copper cable: Twisted Pair, Coaxial Cable
  - Fiber Optics
- Wireless:
  - Radio wave
  - Laser
  - Infrared
  - Microwave
  - Satellite
Figure : Representation of the various components of a LAN

Server : One interesting feature that has emerged is that a LAN can have more than one file server. One of these servers can be used as a backup: it stores copies of every file on the other servers and can become the primary server if the actual primary server fails. This is known as apparent redundancy. Future servers are going to be much more powerful than today's file servers; in fact, some have already emerged. One of them is called the Communication Server, an extraordinarily powerful product. Any PC attached to this server can communicate directly with any large computer, such as a minicomputer or a mainframe, outside the LAN.


In fact, it will work exactly like a terminal attached to that computer, and the results of the work performed on this PC can be used in the same way as any PC's output. Another powerful server is the Database Server, which allows a PC to access the superior database processing capabilities of large computers. Both the communications server and the database server can work hand in hand with the LAN software at the same time.

Workstation : A workstation has its own local operating system depending on the machine type; workstations can be DOS-based PCs or Apple Macintoshes running the Mac OS. A workstation's main job is to execute program files retrieved from the network. With the advent of network-based client-server computing, the role of the server has changed: in this distributed processing environment, the processing burden is shared by the server and the workstation.

Repeater : A repeater is a physical layer device used to interconnect the media segments of an extended network. A repeater essentially enables a series of cable segments to be treated as a single cable. Repeaters receive signals from one network segment and amplify, retime, and retransmit those signals to another network segment. These actions prevent signal deterioration caused by long cable lengths and large numbers of connected devices. Repeaters are incapable of performing complex filtering and other traffic processing. In addition, all electrical signals, including electrical disturbances and other errors, are repeated and amplified. The total number of repeaters and network segments that can be connected is limited due to timing and other issues.

Hub : A hub is a physical-layer device that connects multiple user stations, each via a dedicated cable. Electrical interconnections are established inside the hub. Hubs are used to create a physical star network while maintaining the logical bus or ring configuration of the LAN. In some respects, a hub functions as a multiport repeater.
Bridge : Bridges analyze incoming frames, make forwarding decisions based on information contained in the frames, and forward the frames toward the destination. In some cases, such as source-route bridging, the entire path to the destination is contained in each frame. In other cases, such as transparent bridging, frames are forwarded one hop at a time toward the destination.

Switch : Switches are data link layer devices that, like bridges, enable multiple physical LAN segments to be interconnected into a single larger network. Like bridges, switches forward and flood traffic based on MAC addresses. Because switching is performed in hardware instead of in software, however, it is significantly faster. Switches use either store-and-forward switching or cut-through switching when forwarding traffic. Many types of switches exist, including ATM switches, LAN switches, and various types of WAN switches.


Router : Routers perform two basic activities: determining optimal routing paths and transporting information groups (typically called packets) through an internetwork. In the context of the routing process, the latter is referred to as switching. Although switching is relatively straightforward, path determination can be very complex.

Topologies :
The topology defines how the devices (computers, printers, etc.) are connected and how data flows from one device to another. There are two conventions for representing topologies: the physical topology defines how the devices are physically wired, while the logical topology defines how the data flows from one device to another. Topologies are broadly categorized into I) Bus II) Ring III) Star IV) Mesh

Bus topology : In a bus topology all devices are connected to a single transmission medium, the backbone. There must be a terminator at each end of the bus to avoid signal reflections, which may distort the original signal. The signal is normally sent in both directions, though some buses are unidirectional. A bus is good for small networks.

The main problem with the bus topology is that a failure of the medium seriously affects the whole network. Any small break in the media causes the signal to reflect back and produce errors, and the whole network must be shut down and repaired. In such situations it is difficult to troubleshoot and locate the break in the cable, or which machine is causing the fault; when the medium fails, the whole LAN fails.

Ring topology : The ring topology dates from the early days of LANs. In a ring topology, each system is connected to the next, as shown in the following picture.

Each device has a transceiver which behaves like a repeater, moving the signal around the ring; this is ideal for token-passing access methods. In this topology signal degeneration is low, and only the device that holds the token can transmit, which reduces collisions. On the negative side, it is difficult to locate a problem cable segment, and the hardware is expensive.

Star topology : In a star topology each station is connected to a central node, which can be either a hub or a switch. The star topology does not have the problem seen in the bus topology: the failure of one cable segment does not affect the entire network, and the other stations can continue to operate until the damaged segment is repaired.

The advantages are that cabling is inexpensive, easy to wire, more reliable and easier to manage, because the hubs allow defective cable segments to be routed around; locating and repairing bad cables is easier because of the concentrators; and network growth is easier. The disadvantages are that all nodes receive the same signal, therefore dividing bandwidth, and that the number of computers on a LAN is limited to 1,024. The maximum UTP (unshielded twisted pair) run is 100 metres, and the minimum distance between computers is 2.5 metres. This is the dominant physical topology today.

Mesh topology : In a mesh physical topology, every device on the network is connected to every other device; it is most commonly used in WAN configurations. It helps find the quickest route through the network and provides redundancy, but it is very expensive and not easy to set up.
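The wiring cost of these topologies can be compared by counting the cable runs each needs for n devices; a quick sketch (the formulas follow directly from the descriptions above):

```python
def links_bus(n):
    return 1                  # one shared backbone segment, tapped by all n devices

def links_ring(n):
    return n                  # each device is cabled to the next around the ring

def links_star(n):
    return n                  # one dedicated cable from each device to the hub/switch

def links_mesh(n):
    return n * (n - 1) // 2   # every device cabled to every other device

for n in (4, 8, 16):
    print(f"n={n:2d}  ring={links_ring(n):3d}  star={links_star(n):3d}  mesh={links_mesh(n):4d}")
```

The mesh count grows quadratically - 16 devices already need 120 links - which is why a full mesh is normally reserved for small WAN cores.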


Hybrid topology : A hybrid topology is a combination of two or more network topologies such that the resulting network does not have one of the standard forms. For example, a tree network connected to a tree network is still a tree network, but two star networks connected together exhibit a hybrid topology. A hybrid topology is produced whenever two different basic network topologies are connected.

Ethernet - Ethernet is a 10 Mbps LAN that uses the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol to control access to the network. When an endstation (network device) transmits data, every endstation on the LAN receives it. Each endstation checks the data packet to see whether the destination address matches its own address. If the addresses match, the endstation accepts and processes the packet; if they do not match, it disregards the packet. If two endstations transmit data simultaneously, a collision occurs and the result is a composite, garbled message. All endstations on the network, including the transmitting endstations, detect the collision and ignore the message. Each endstation that wants to transmit then waits a random amount of time before attempting to transmit again. This method is used on traditional Ethernet LANs.
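The accept-or-discard rule that each endstation applies can be sketched as follows (the MAC addresses here are made up for illustration; a real adapter additionally accepts multicast groups it has joined):

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

def accepts(own_mac, dest_mac):
    """An endstation processes a frame only if it is addressed to it
    or to everyone (broadcast); otherwise it disregards the frame."""
    return dest_mac == own_mac or dest_mac == BROADCAST

frame_dest = "aa:bb:cc:00:00:02"          # hypothetical destination address
for mac in ("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"):
    verdict = "accepts" if accepts(mac, frame_dest) else "discards"
    print(mac, verdict)
```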


TDMA (time division multiple access) was originally devised for digital microwave and satellite communications systems. It is still used with many such systems, as well as with some fiber optic systems. Fixed time slots are made available, regardless of whether they are actually used. The complete end-to-end bit sequence within each time slot is usually called a serial packet, and comprises source and destination addresses, data bits, and control and status bits. The system is accessed through terminal stations and repeaters. Transmission is into an empty packet or packets, and reception occurs via packet address recognition. A monitor station monitors the integrity of the system during normal operation and places framing bits around packets during initialization.

FDDI (Fiber Distributed Data Interface) - FDDI provides data speeds of 100 Mbps, which is faster than Token Ring and Ethernet LANs. FDDI comprises two independent, counter-rotating rings: a primary ring and a secondary ring. Data flows in opposite directions on the rings. The counter-rotating ring architecture prevents data loss in the event of a link failure, a node failure, or the failure of both the primary and secondary links between any two nodes. This technology is usually implemented for backbone networks.
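The fixed-slot behaviour of TDMA is easy to simulate: every station owns one slot per frame whether or not it has anything to send, so idle stations waste their slots. A sketch (the station names and queued data are invented for illustration):

```python
def tdma_frames(stations, queues, n_frames):
    """Each frame contains one fixed slot per station, in a fixed order.
    A slot carries a packet only if its owner has data queued;
    otherwise the slot goes out empty (None)."""
    log = []
    for _ in range(n_frames):
        for st in stations:
            packet = queues[st].pop(0) if queues[st] else None
            log.append((st, packet))
    return log

log = tdma_frames(["A", "B", "C"],
                  {"A": ["a1", "a2"], "B": [], "C": ["c1"]},
                  n_frames=2)
print(log)
```

Station B's slots are transmitted empty in both frames - exactly the inefficiency that contention schemes such as CSMA/CD avoid.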

Access Techniques : All computers attached to an Ethernet use CSMA/CD to coordinate their activities. A computer wishing to transmit checks for electrical activity on the cable, informally called a carrier. If there is no carrier, the computer can transmit. If a carrier is present, the computer waits for the sender to finish before proceeding. However, it is possible for two or more computers to detect the lack of carrier and start transmitting simultaneously. The signals travel at approximately 70% of the speed of light and interfere with one another. This interference is called a collision. A sending computer monitors the signal on the cable, and if it differs from the signal it is sending, a collision has occurred and the computer stops transmitting. Following a collision, a computer waits for the cable to become idle before retransmitting. However, if the computers start transmitting as soon as the cable becomes free, another collision will occur. Ethernet therefore requires each computer to delay after a collision. The standard specifies a maximum delay, d, and requires each computer to choose a random delay less than d. In this case, the computer choosing the shortest delay will transmit first. If subsequent collisions still occur, the computers double the maximum delay (2d, 4d, ...) until the range is large enough for one computer to choose a short delay and transmit without a collision. This technique is called binary exponential backoff.

Media contention occurs when two or more network devices have data to send at the same time. Because multiple devices cannot talk on the network simultaneously, some method must be used to allow one device access to the network media at a time. This is done in two main ways: carrier sense multiple access with collision detection (CSMA/CD) and token passing.

In networks using CSMA/CD technology, such as Ethernet, network devices contend for the network media. When a device has data to send, it first listens to see if any other device is currently using the network. If not, it starts sending its data. After finishing its transmission, it listens again to see if a collision occurred. A collision occurs when two devices send data simultaneously. When a collision happens, each device waits a random length of time before resending its data. In most cases, a collision will not occur again between the two devices. Because of this type of network contention, the busier a network becomes, the more collisions occur. This is why the performance of Ethernet degrades rapidly as the number of devices on a single network increases.
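Binary exponential backoff can be sketched in a few lines (the cap of 10 doublings matches classic Ethernet practice, but treat the constants here as illustrative):

```python
import random

def backoff_slots(collisions, max_doublings=10):
    """After the n-th successive collision, wait a random number of slot
    times chosen uniformly from 0 .. 2**n - 1; the range doubles with each
    collision until it is capped."""
    n = min(collisions, max_doublings)
    return random.randrange(2 ** n)

random.seed(42)                      # deterministic for the demo
print([backoff_slots(c) for c in (1, 2, 3, 4, 5)])
```

Doubling the range after each collision quickly spreads the contending stations apart, so one of them almost always picks a clearly shortest delay.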
In token-passing networks such as Token Ring and FDDI, a special network frame called a token is passed around the network from device to device. When a device has data to send, it must wait until it has the token; it then sends its data. When the data transmission is complete, the token is released so that other devices may use the network media. The main advantage of token-passing networks is that they are deterministic: it is easy to calculate the maximum time that will pass before a device has the opportunity to send data. This explains the popularity of token-passing networks in some real-time environments such as factories, where machinery must be capable of communicating at a determinable interval.
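That determinism claim can be made concrete: if each of the n stations may hold the token for at most a fixed time, the worst-case wait before any station gets to transmit is bounded. A sketch with illustrative parameter names (a fuller model would also add per-hop propagation delays):

```python
def max_token_wait_ms(n_stations, max_hold_ms, ring_latency_ms=0.0):
    """Worst case: each of the other n-1 stations uses its full token
    holding time before the token returns, plus the time the token itself
    spends circulating the ring."""
    return (n_stations - 1) * max_hold_ms + ring_latency_ms

print(max_token_wait_ms(10, 10))     # 10 stations, 10 ms each -> 90 ms bound
```

No such bound exists for CSMA/CD, where a station can in principle keep colliding indefinitely.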

For CSMA/CD networks, switches segment the network into multiple collision domains. This reduces the number of devices per network segment that must contend for the media. By creating smaller collision domains, the performance of a network can be increased significantly without requiring addressing changes.

Normally CSMA/CD networks are half-duplex, meaning that while a device is sending information, it cannot receive at the same time. While that device is talking, it is incapable of also listening for other traffic. This is much like a walkie-talkie: when one person wants to talk, he presses the transmit button and begins speaking, and while he is talking, no one else on the same frequency can talk. When the sender is finished, he releases the transmit button and the frequency is available to others.

When switches are introduced, full-duplex operation is possible. Full-duplex works much like a telephone: you can listen and talk at the same time. When a network device is attached directly to the port of a network switch, the two devices may be capable of operating in full-duplex mode. In full-duplex mode, performance can be increased, but not quite as much as some like to claim. A 100 Mbps Ethernet segment is nominally capable of carrying 200 Mbps of data, but only 100 Mbps can travel in one direction at a time. Because most data connections are asymmetric (with more data traveling in one direction than the other), the gain is not as great as many claim. However, full-duplex operation does increase the throughput of most applications because the network media is no longer shared: two devices on a full-duplex connection can send data as soon as it is ready.

Token-passing networks such as Token Ring can also benefit from network switches. In large networks, the delay between turns to transmit may be significant because the token must be passed around the entire network.
Transmission Protocol : Standards and protocols are required to govern the physical and logical connections between terminals, computers and other equipment. They are vital for data communications and computer networking. Typically, standards fall into two groups: official standards (from national standards bodies) and de facto standards established by common usage. Standards for LANs (local area networks) were proposed by the Institute of Electrical and Electronics Engineers (IEEE), an influential organisation.


A recommendation for standards called X.25, covering access to and transmission methods for packet switched data networks (PSDNs), was proposed by the CCITT (now known as the ITU-T). The existence of different standards bodies regulating data communications is obviously a handicap for global standardisation. In addition, manufacturers have developed their own standards to maintain their market position, e.g. Digital's DECnet standards. (The Digital corporation was taken over by Compaq, which in turn was taken over by Hewlett-Packard.) The International Standards Organisation (ISO) took an initiative to develop universal data communication standards to unite standards bodies, computer and telecommunications manufacturers, and users. The ISO Open Systems Interconnection (OSI) reference model was put forward as a framework for developing standards for data communication products. An open system is one that is prepared to communicate with any other open system by using agreed rules, or protocols, on how the communication should take place.

The Internet uses a network protocol called IP (Internet Protocol) to handle the interconnection of WANs to LANs, and a transport protocol called TCP (Transmission Control Protocol) to govern the transmission of data. The two are often referred to together as TCP/IP, the major protocols of the Internet. The Internet also provides protocols for file transfer (FTP), remote login (TELNET) and e-mail (SMTP); these three are still very important and widely used. Internetworking is the term used for the connection of two networks. The growth of internetworking between LANs and WANs, and between WANs and WANs, led to what is now referred to as the Internet. A computer that provides for the interconnection of two different networks is called a gateway.


Transmission Media :

Analog Transmission : Analog transmission dominated the last 100 years and will be here for a while yet. Network designers made use of the existing telephone network, which was designed for voice transmission. It is actually very poor for computer networking. For example, two computers connected by a direct cable can achieve a data rate of up to 100 Mbps with a very low error rate; using phone lines, 56 Kbps is the maximum transmission speed, with a relatively high error rate. The difference is many orders of magnitude - comparable to the difference in cost between a bus ticket to town and a moon landing.

Modems : Phone lines deal with frequencies of 300 to 3000 Hz. A computer outputs a serial stream of bits (1's and 0's). A modem is a device that accepts such a bit stream and converts it to an analog signal, using modulation; it also performs the inverse conversion. Thus two computers can be connected using two modems and a phone line. Using a modem, a continuous carrier signal (tone) is sent in the range 1000 to 2000 Hz. To transmit information, this carrier signal is modulated: its amplitude, frequency, phase, or a combination of these can be modulated.

Digital Transmission : Digital transmission takes place in the form of pulses representing bits (1's and 0's). This is the type of communication used internally in computers. The high-speed trunks linking central phone exchanges use digital transmission, which has a lower error rate than analog transmission. The local loop (from phone to exchange) is still analog, so the signal must be converted to digital at the exchange. A device called a codec (coder/decoder) does this: it samples the analog signal 8000 times per second and encodes the signal digitally by representing each sample as a binary number. The technique used is called Pulse Code Modulation, or PCM.

1. Wired Transmission :

Twisted Pair : Twisted pairs are used by telephones for the local loop (the connection between your home phone and the local telephone exchange). They carry electrical signals. A twisted pair consists of two insulated copper wires (about 1 mm in diameter) twisted together to reduce electrical interference. Capacity depends on the distances involved but can be up to several Mbps over a few kilometres. For example, ISDN (Integrated Services Digital Network) lines offer speeds from 64 Kbps to over 1 Mbps and have been available to home users for Internet access for several years. More recently (2003), DSL (Digital Subscriber Line) and in particular ADSL (Asymmetric DSL) lines have become available to home users, with speeds of 1.5 to 6 Mbps.
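The codec figures mentioned earlier (8000 samples per second) translate directly into the classic digital voice-channel rate. A quick check, assuming the standard telephony value of 8 bits per sample (the sample width is not stated in the text above):

```python
def pcm_bit_rate_bps(sample_rate_hz=8000, bits_per_sample=8):
    """PCM bit rate = samples per second x bits per sample."""
    return sample_rate_hz * bits_per_sample

print(pcm_bit_rate_bps())    # 64000 bps = 64 Kbps, one digital voice channel
```

This 64 Kbps figure is the basic building block of digital telephony, and it reappears below as the ISDN B-channel rate.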


ISDN and ADSL both use digital transmission, so they require a digital line, unlike the standard analog telephone line where a modem is used. You must install an ISDN card or an ADSL card in your PC to use an ISDN or ADSL line.

Twisted pairs may be shielded (STP) or unshielded (UTP), with the shielded type having extra insulation. However, it is the rate of twisting (number of twists per inch) that is the most important characteristic. Twisted-pair cables are also classified into Category 5 (CAT-5) and Category 6 (CAT-6). CAT-5 can carry 10 or 100 Mbps (10/100 Mbps) over short distances, e.g. up to approximately 100 metres. This is the type of cable often used in buildings to connect PCs to a LAN. Usually, the CAT-5 cable connects to a device known as a hub which is less than 100 metres from each PC; there may be a hub for each floor or laboratory in a building. CAT-6 cable operates at 100/1000 Mbps (Gigabit Ethernet) and is typically used to interconnect hubs. It is more expensive than CAT-5 cable. Large organisations frequently have a so-called "backbone" network that interconnects separate LANs in different buildings or rooms, as in the diagram below. Over short distances CAT-6 cable may be used, but optic fibre is also often used as it can cover longer distances.

Coaxial (Coax) Cable : Coaxial cables carry electrical signals. A coax cable consists of a copper core surrounded by three outer layers of insulation and shielding. It has a high bandwidth and good noise immunity. The original Ethernet standard was based on 10 Mbps coaxial cable. Ethernet is the most popular LAN standard and was developed at Xerox PARC (which also developed the mouse, the laser printer and Graphical User Interface (GUI) software). Ethernet LANs can be based on twisted pair, coax or optic fibre.

Capacity : 10 to 100 Mbps for distances of up to 1 km. Coax was frequently used in LANs but is being replaced by UTP/STP in most LANs.

Optic Fibre : Optic fibre uses light to carry data and has a huge bandwidth. Very thin glass fibres are used. To date, a capacity of 1000 Mbps over 1 km is feasible.

It is used in WANs, in LANs for interconnecting hubs, and for linking telephone exchanges. It has excellent noise immunity, as it does not suffer from electrical interference, and is therefore suitable for harsh environments such as the factory floor. Although computing technology is rapidly advancing, it is not gaining ground nearly as fast as communication technology. Fiber optics is one of the advances that has propelled communication technology forward at high speed.

Communication over fiber optics requires a source (of light), a line (the transmission medium, i.e. the fiber), and a destination (to detect the light). The light stays within the fiber because of the angle at which it hits the fiber's surface: instead of passing through the surface (like a window), the light bounces off it (like a mirror). The light propagates down the fiber line because it continually reflects off the surface from the inside; the light never escapes the fiber until the receiver detects it.

Like copper, fiber optics suffers problems when transmitting over a distance. Attenuation (a weakening of the power of a signal) occurs, as well as dispersion (the spreading out of light waves over a distance). The discovery of solitons has helped reduce the problem of dispersion, though. A fiber cable is heavily insulated like coax, but it has several differences: the core of the cable is a glass strand, which is surrounded by a thick glass covering, which is then covered by plastic. When compared to copper, fiber wins because it is lighter, has higher bandwidth, is easier to install, is harder to tap, and the signal stays stronger for longer than in copper. The only drawback to fiber at this point is that the engineering community is less familiar with fiber technology than with copper.

2. Wireless Transmission :

Line of Sight - Infrared and Microwave : Physical cables have a major problem if you have to cross private or public property, where it may be difficult or very expensive to get permission, in addition to the costs of laying the cable. Using line-of-sight transmitters avoids this problem.

Lasers can be used for wireless communication. A laser link is a relatively low-cost way to connect two buildings' LANs, but it has drawbacks: the laser is difficult to target on the destination's receiver because the beam is so small, and laser light diffuses easily in poor atmospheric conditions such as rain, fog, or intense heat.

Infrared light is used for close-range communication, such as remote controls, because it does not pass through objects well. This is also a plus, because infrared communications in one room do not interfere with infrared communications in another room. Infrared communication is more secure than other options such as radio, but it cannot be used outside because of interference from the Sun.

Radio waves are easy to generate and are omnidirectional, but have low transmission rates. Also, depending on their frequency, radio waves either cannot travel very far or are absorbed by the earth. In some cases, though, High Frequency (HF) waves are reflected back to earth by the ionosphere (a layer of the atmosphere).

Microwaves can be used over long distances: a 100 m tower can transmit data for distances of over 100 km, which is cheaper than digging a trench, and relatively high speeds of 10 Mbps and upwards are possible. Microwave transmission is popular for its ability to travel in straight lines: a source can be directly focused on its destination without interfering with neighboring transmissions. Because they travel in straight lines, though, the curvature of the earth can interfere with microwave transmission; the solution is to add repeaters between the source and destination to redirect the data path. Microwaves are used for long-distance communication (Microwave Communications, Inc. = MCI), cellular phones, garage door openers, and much more.

Satellite : Satellites operate in the same fashion as microwaves - a satellite is essentially a 'big microwave repeater in the sky'! Satellite communication has a high bandwidth, giving speeds of up to 50 Mbps, and a given satellite may be able to carry many channels at this speed.

Wireless LANs : Radio LANs or wireless (Wi-Fi) LANs are becoming common in offices, universities, hotels, restaurants and airports. A wireless LAN enables users to connect to the Internet from a laptop computer with a wireless network card. In UCD, Commerce students use such laptops with wireless cards to connect to the college network, for course work and email.


Chapter 5. Broad Band Networks

Integrated Services Digital Network (ISDN) : ISDN is a digital communications technology that enables a small business or an individual to connect directly to both the Internet and other sites/users (e.g. for videoconferencing). ISDN provides a standard interface for voice, fax, video, graphics, and data, all on a single telephone line. 'Integrated Services' refers to ISDN's ability to deliver two simultaneous connections, in any combination of voice, fax, data, and video, over a single line; multiple devices can be attached to the line and used as needed. 'Digital' refers to the fact that it is a purely digital transmission, as opposed to the analog transmission method used by conventional telephone lines. 'Network' refers to the fact that ISDN is not simply a point-to-point connection like a leased telephone line: ISDN networks extend from the local telephone exchange to the remote user, and include all the switching equipment in between.

If your ISDN equipment includes analog capabilities, you can also connect to telephones, fax machines, and analog modems, even though they may be connected to standard analog telephone lines. ISDN service is provided by the same companies that provide telephone service, and you get much faster, more dependable connections for voice, fax, data, and video, all through a single connection. While not new (ISDN has been around for over 15 years), the advent of international standards has made ISDN viable, as telephone companies around the world have upgraded their equipment to these ISDN standards. It is now commonly available in Europe, Japan, Australia, and from most major North American telephone companies; AT&T, MCI, and Sprint can provide long-distance ISDN lines for global connections. One of the reasons for its widespread use is that it works on the ordinary copper wire already in place in the telephone system.
One advantage of ISDN over other digital communications technologies is its ability to handle all types of information, such as voice, computer data, studio-quality sound, and video. In addition, up to eight devices (such as telephones, computers, and fax machines) can be connected to one ISDN line. These can all be separate telephone numbers, or multiples of the same number, allowing one call to still ring through while another is busy. The simplest ISDN connection (called Basic Rate or BRI) consists of two 64 Kbps (kilobits-per-second) data channels (called B-channels) plus a 16 Kbps control channel (called the D-channel). This is sometimes referred to as 2B+D. On the other end of the spectrum is Primary Rate ISDN (called PRI), with 23 B-channels plus a D-channel (i.e., 23B+D).
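The channel arithmetic behind these figures can be checked directly (a small illustration; note that the PRI D-channel is itself a full 64 Kbps channel):

```python
# ISDN channel arithmetic, rates in kilobits per second.
B = 64        # one bearer (B) channel
D_BRI = 16    # BRI control (D) channel
D_PRI = 64    # PRI control (D) channel is a full 64 Kbps channel

bri_user_rate = 2 * B            # usable data rate of 2B+D
bri_total = 2 * B + D_BRI        # 144 Kbps on the BRI line
pri_user_rate = 23 * B           # usable data rate of 23B+D
pri_total = 23 * B + D_PRI       # 1536 Kbps (the payload of a T1 line)
```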
Broadband ISDN :

ATM Traffic Management : In the latest generation of IP networks, with the growing implementation of Voice over IP (VoIP) and multimedia applications, the addition of voice and video traffic to the traditional IP data network has become increasingly common. Voice, video, and data traffic types have different transmission characteristics and service-level requirements. The ATM technology is well suited to transport mixed traffic because of its built-in ability to negotiate and guarantee a certain level of quality of service (QoS) from the source to the end device. This makes ATM a desirable transport method for mixed traffic through an IP network over a WAN.

Traffic Characteristics : Voice, video, and data traffic are differentiated by the following transmission characteristics:
• Voice—Traffic flows with a regular pattern at a constant rate that is sensitive to delay and delay variation. When compression techniques are in use, voice traffic is more sensitive to error than uncompressed voice.
• Video—Real-time video traffic has transmission characteristics similar to voice traffic, but also requires high bandwidth. When compression techniques are in use, video traffic is more sensitive to error than uncompressed video.
• Data—Traffic flows with an irregular pattern that is often called bursty because of its variability in rate and amount of traffic. Data traffic is not sensitive to delay or delay variation, but it is sensitive to error.

Traffic management is vital to the performance and overall health of the ATM network. ATM uniquely satisfies the different transmission requirements of mixed traffic on a common network through its multiple service categories and QoS implementation.

Traffic Contract : An ATM WAN is frequently a public network owned and managed by a service provider who supports multiple customers. These customers agree upon and pay for a certain level of bandwidth and performance from the service provider over that WAN.
This agreement becomes the basis of the traffic contract, which defines the traffic parameters and the QoS that are negotiated for each virtual connection for that user on the network. The traffic contract in an ATM network represents two things. First, it represents an actual service agreement between the user and the service provider for the expected network-level support. Second, it refers to the specific traffic parameters and QoS values negotiated for

an ATM virtual connection at call setup, which are implemented during data flow to support that service agreement. The traffic contract also establishes the criteria for policing of ATM virtual connections on the network, to ensure that violations of the agreed-upon service levels do not occur.

ATM Traffic Parameters : The following traffic parameters are used to qualify the different ATM service categories:
• Minimum Cell Rate (MCR)—Cell rate (cells per second) at which the edge device is always allowed to transmit. For UBR+, the MCR is the minimum cell rate requested by the edge device as a guaranteed service level for the SVC.
• Peak Cell Rate (PCR)—Cell rate (cells per second) that the edge device cannot exceed. Some service categories have a limit on the number of cells that can be sent at the PCR without penalty for violation of the traffic contract.
• Cell Delay Variation Tolerance (CDVT)—Allowable deviation in cell times for a PVC that is transmitting above the PCR. For a given cell interarrival time expected by the ATM switch, CDVT allows for some variance in the transmission rate: it allows a certain number of cells to arrive faster than the expected cell interarrival time without penalty for violation of the traffic contract.
• Sustainable Cell Rate (SCR)—Upper boundary for the average rate at which the edge device can transmit cells without loss.
• Maximum Burst Size (MBS)—Number of cells that the edge device can transmit at up to the PCR for a limited period of time without penalty for violation of the traffic contract.

ATM QoS Parameters : The ATM Forum specifications define specific QoS parameters that are used to manage cell delay and cell loss over the ATM network for each of the different ATM service categories. Some of these QoS parameters are considered negotiable and some are not. For SVCs, ATM switches evaluate the requested traffic parameters and QoS parameters using the Connection Admission Control (CAC) algorithm.
CAC ensures that the requested QoS can be served throughout the duration of the connection over the network, from the source to the destination, without impacting other connections.
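Policing against parameters such as the PCR and CDVT is commonly described with the Generic Cell Rate Algorithm, GCRA(T, τ), where T is the expected inter-cell interval (1/PCR) and τ the tolerance. A minimal sketch of its virtual-scheduling form, offered as an illustration (the time units and parameter values are arbitrary):

```python
def make_gcra(T, tau):
    """GCRA(T, tau) conformance test, virtual-scheduling form.
    T = expected inter-cell interval (1/PCR), tau = tolerance (CDVT).
    Returns a function that, given a cell's arrival time, reports
    whether the cell conforms to the traffic contract."""
    tat = 0.0  # theoretical arrival time of the next cell

    def arrive(t):
        nonlocal tat
        if t < tat - tau:
            return False            # cell arrived too early: non-conforming
        tat = max(t, tat) + T       # schedule the next theoretical arrival
        return True
    return arrive

police = make_gcra(T=10, tau=2)
# With T=10 and tau=2, cells at t=0 and t=8 conform; a cell at t=7
# would be 3 time units early (more than tau) and would be tagged
# or dropped by the policer.
```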


Negotiable QoS Parameters : The following cell delay and cell loss parameters are considered negotiable because the information is exchanged through signaling between the UNI edge device and the network-to-network interface (NNI) switch while an ATM connection is being established.

Cell Delay Parameters : The ATM Forum specifications support two negotiable parameters for cell delay:
• Maximum cell transfer delay (maxCTD)—Maximum length of time allowed for the network to transmit a cell from the source UNI device to the destination UNI device.
• Peak-to-peak cell delay variation (peak-to-peak CDV)—Maximum variation allowed from the fixed CTD for each cell transmitted from the source UNI device to the destination UNI device. Represents the allowable jitter, or distortion, between cell interarrival times over the network.

Cell Loss Parameters : The ATM Forum specifications support the following negotiable parameter for cell loss:
• Cell loss ratio (CLR)—Allowable percentage of cells (lost cells divided by total number of cells transmitted) that the network can discard due to congestion.

Non-Negotiable QoS Parameters : The following QoS parameters are not exchanged during connection setup on the ATM network:
• Cell error ratio (CER)—Allowable percentage of cells (errored cells divided by the total number of transmitted cells) that can be in error.
• Severely errored cell block ratio (SECBR)—Allowable percentage of cell blocks (severely errored cell blocks divided by the total number of transmitted cell blocks) that can be severely in error. A cell block is a number of consecutively transmitted cells on a particular connection. A cell block is considered severely errored when more than a maximum number of errored cells, lost cells, or misinserted cells occur within that cell block.
• Cell misinsertion rate (CMR)—Allowable rate of misinserted cells (misinserted cells divided by the time period during which misinserted cells were collected).
This rate does not include severely errored cell blocks. Misinserted cells are cells that are received with an incorrect VPI/VCI value.

Congestion on an ATM Network : Well-behaved traffic that conforms to the agreed-upon service levels is critical to the performance of the public ATM WAN. Without the proper controls and management in place, certain customers can consume bandwidth above the agreed-upon rate. This can cause congestion, which not only deprives other users of their fair access to that bandwidth, but can also significantly degrade the performance of the network. The cost of congestion to ATM network performance is better understood when you consider what happens if one or more cells are marked and dropped during transmission of a packet. Consider an AAL5 PDU. It is important to recall that the cells are reassembled and the CRC of the packet is checked at the destination. This means that regardless of when or how many cells are dropped during transmission, all of the remaining cells associated with the packet are still transmitted across the ATM network. Then, when the destination receives the last cell with the end-of-message bit turned on, it reassembles the cells. When an application [such as the Transmission Control Protocol (TCP)] detects an error in the packet due to the lost cells, it requests that the source resend the entire packet. This results in more traffic being sent across the ATM network, creating even more congestion, which makes the problem worse. The congestion problem can grow exponentially out of control. When congestion occurs, packets are marked and dropped, which causes retransmissions. A disruptive phenomenon called global synchronization can occur network-wide, particularly with TCP applications. During a global synchronization event, the queues fill and retransmissions occur. If the backoff period (or window) for retransmissions is too short, then when the cells are retransmitted onto the network, the queues again quickly fill and the cells are dropped again.
Even with an ATM network that has been traffic engineered, congestion on the network can occur. The ATM public network also must be configured properly to manage all of the flows from the UNIs and NNIs that it supports. However, effective management of traffic on the ATM network begins with well-managed ATM traffic at the edge devices, such as the Cisco 7200 series router. Therefore, the primary goal of ATM traffic management is congestion prevention at the UNI interface. If the UNI device can present cells to the public ATM network in a predictable way, then the ATM network can be managed more efficiently and effectively.

Traffic Control Functions in ATM Traffic Management : Two of the most important aspects of ATM traffic management are the traffic control functions of shaping and policing. The Cisco 7200 series routers support both of these traffic control functions for ATM.

Traffic Shaping : Traffic shaping at the edge device of an ATM network is considered a preventive measure for the control of network congestion. Traffic shaping controls the flow of traffic onto the network to smooth out peaks of traffic. The concept of traffic shaping is particularly relevant for data transfer, which is characterized by variable bursts of traffic onto the network. These bursts create peaks of traffic, and can cause periodic violations of the traffic contract by exceeding the allowable rate of transfer. Bursty traffic patterns also make inefficient use of the network bandwidth.

Traffic Shaping on the Cisco 7200 Series Router : The Cisco 7200 series router is normally an edge device located on the UNI side of the ATM network. It is very important to configure traffic shaping on the Cisco 7200 series router to effectively control the traffic going onto the ATM network so that it conforms to the traffic contract—but this is only one aspect of the flow. When you implement traffic shaping, cells are sent onto the network in consistent patterns with fixed, minimum intercell gaps. This rate is based on the traffic shaping parameters that you configure for that PVC or SVC. However, by shaping the traffic, and with the likely support of multiple service categories with competing transmission characteristics, you effectively create congestion on the router itself—this is where queueing comes in, along with certain Cisco IOS QoS software features that manage the performance of the queues. You begin with traffic shaping to configure the performance levels that you want to support on the ATM network. From there, because traffic shaping produces congestion, you need to optimize the applicable hardware and software queues to increase the overall performance of the flow of traffic through the router.
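The smoothing idea can be sketched as computing conforming departure times with a fixed minimum inter-cell gap: cells that arrive in a burst are queued and released on schedule rather than dropped. This is an illustration of the principle, not the router's actual implementation:

```python
def shape(arrival_times, T):
    """Delay cells so that consecutive departures are at least T apart
    (T = 1/PCR). Returns the departure time of each cell; a cell that
    arrives before its slot is queued until the gap has elapsed."""
    departures = []
    next_allowed = 0.0
    for t in arrival_times:
        d = max(t, next_allowed)   # hold the cell until the gap has elapsed
        departures.append(d)
        next_allowed = d + T
    return departures

# A burst of three cells at t = 0, 1, 2 plus a late cell at t = 30
# leaves the shaper evenly spaced: departures [0, 10, 20, 30].
```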
Port Adapter Support for Traffic Shaping on the Cisco 7200 Series Router : It is very important to understand that each ATM port adapter on the Cisco 7200 series routers supports different ATM service categories and also implements traffic shaping functions uniquely. All ATM port adapters on the Cisco 7200 series routers support traffic shaping except the PA-A1 ATM port adapter. Although the PA-A1 does support the UBR service category, this is a best-effort service and technically does not perform the function of shaping the traffic over the PVC. The PA-A3 and PA-A6 ATM port adapters provide enhanced functionality over the PA-A1 port adapter, and are highly recommended for

ATM traffic shaping. The PA-A6 ATM port adapter is an enhanced version of the PA-A3 ATM port adapter and supports twice as many virtual circuits.

Design Objectives for ATM Traffic Management : The result of successful ATM traffic management is the efficient transport of traffic through the network with minimal congestion, while providing fair and sufficient bandwidth access for all service categories when needed. To efficiently transport mixed traffic through an ATM network, the challenge lies in meeting the following design objectives over the network:
• Prevent congestion on the network by creating a more consistent flow of traffic at the edge device—this is known as traffic shaping.
• Control cell delay and cell loss while satisfying the transmission requirements of the different traffic types—this is the basis of QoS for ATM.
• Maximize the use of network bandwidth to fulfill the traffic contract, but prevent a particular application or location from monopolizing the bandwidth—this is part of queue management on the Cisco 7200 edge device; on the ATM network, the enforcement of bandwidth usage is known as traffic policing.

Introduction to Very Small Aperture Terminal (VSAT) :


Chapter 6. IP Addressing & Routing
IP addresses :

Internet Protocols :
Internet Architecture and Philosophy : A TCP/IP internet provides three sets of services, as shown in the following figure:

Connectionless Delivery System
• The most fundamental internet service consists of a packet delivery system, which is unreliable, best-effort, and connectionless.
• Unreliable: packets may be lost, duplicated, delayed, or delivered out of order.
• Connectionless: each packet is treated independently from all others.
• Best-effort: the Internet software makes an earnest attempt to deliver packets.

Purpose of the Internet Protocol
• The IP protocol defines the basic unit of data transfer (the IP datagram).
• IP software performs the routing function.
• IP includes a set of rules that embody the idea of unreliable packet delivery:
– How hosts and routers should process packets
– How and when error messages should be generated
– The conditions under which packets can be discarded

IP Datagram Encapsulation

IP Datagram Encapsulation for Ethernet

IP Header :

IP Header Format
• VERS: current version is 4, i.e. IPv4; there is a proposal for IPv6, which will have a different header.
• HLEN: header length in number of 32-bit words. Normally 5, i.e. 20-octet IP headers; maximum 60 bytes. The header can be variable length (IP options).
• TYPE OF SERVICE: 3-bit precedence field (unused), 4 TOS bits, 1 unused bit set to 0. TOS bit 1 (min delay), 2 (max throughput), 3 (max reliability), 4 (min cost): only one can be set; typically all are zero, for best-effort service. DiffServ proposes to use TOS for IP QoS.
• TOTAL LENGTH: of the datagram, in bytes. Maximum size is 65535 bytes (64K - 1).
• IDENT, FLAGS, FRAGMENT OFFSET: used for fragmentation and reassembly, discussed later.
• TTL (Time To Live): upper limit on the number of routers that a datagram may pass through. Initialized by the sender, and decremented by each router; when it reaches zero, the datagram is discarded. This can stop routing loops. Example: ping -t TTL allows us to specify the TTL field. Question: normal users are not supposed to be able to modify the TTL field; how does ping do that? (the SetUID concept). Question: how to implement traceroute, i.e., how to find the routers to a destination (without using IP options)?
• TYPE: IP needs to know to what protocol it should hand the received IP datagram. In essence, this field specifies the format of the DATA area; it demultiplexes incoming IP datagrams into UDP, TCP, ICMP, etc.
• HEADER CHECKSUM: 16-bit 1's complement checksum, calculated only over the header and recomputed at each hop.

An example of IP datagram:
- Header length: 20 octets
- TYPE: 01 (ICMP)
- Source IP: 128.10.2.3
- Destination IP: 128.10.2.8

IP OPTIONS
- The IP OPTIONS field is not required in every datagram.
- Options are included primarily for network testing or debugging.
- The length of the IP OPTIONS field varies depending on which options are selected.

Record Route Option
- The sender allocates enough space in the option to hold IP addresses of the routers (i.e., an empty list is included in the option field).
- Each router records its IP address in the record route list.
- If the list is full, a router will stop adding to the list.
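The header checksum described above (a 16-bit one's complement sum computed over the header only) can be sketched as follows. The sample header uses the example datagram's addresses; the other field values are chosen arbitrarily for illustration:

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """Internet checksum: one's-complement sum of 16-bit words, with the
    checksum field zeroed while computing (RFC 1071 technique)."""
    if len(header) % 2:
        header += b"\x00"
    total = 0
    for (word,) in struct.iter_unpack("!H", header):
        total += word
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

# A 20-byte header: VERS/HLEN = 0x45, TOS = 0, TOTAL LENGTH = 20,
# IDENT = 0, FLAGS/OFFSET = 0, TTL = 64, TYPE = 1 (ICMP), checksum
# zeroed, source 128.10.2.3, destination 128.10.2.8.
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20, 0, 0, 64, 1, 0,
                  bytes([128, 10, 2, 3]), bytes([128, 10, 2, 8]))
csum = ipv4_checksum(hdr)
# Inserting csum into the header and recomputing yields 0, which is how
# each hop verifies the header.
```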
Timestamp Option
- Works like the record route option.
- Each router along the path fills in a 32-bit integer timestamp.

Source Routing
- Provides a way for the sender to dictate a path through the Internet.
- Strict Source Routing: the list of addresses specifies the exact path the datagram must follow to reach its destination. An error results if a router cannot follow a strict source route.
- Loose Source Routing: the list of addresses specifies that the datagram must follow the sequence of IP addresses, but allows multiple network hops between successive addresses on the list.
- Question: how are these two types of source routing implemented?

IP Fragmentation :

Why do we need fragmentation?
- MTU: Maximum Transmission Unit.
- An IP datagram can contain up to 65535 total octets (including header).
- Network hardware limits the maximum size of a frame (e.g., Ethernet is limited to 1500 octets, i.e., MTU = 1500; FDDI is limited to approximately 4470 octets per frame).

Illustration of When Fragmentation is Needed :

IP fragmentation
- Routers divide an IP datagram into several smaller fragments based on the MTU.
- A fragment uses the same header format as a datagram.
- Each fragment is routed independently.

How is an IP datagram fragmented?
- IDENT: unique number to identify an IP datagram; fragments with the same identifier belong to the same IP datagram.
- FRAGMENT OFFSET: specifies where the data belongs in the original datagram; a multiple of 8 octets.
- FLAGS:
  - bit 0: reserved
  - bit 1: do not fragment
  - bit 2: more fragments. This bit is turned off in the last fragment. (Q: why do we need this bit? A: the TOTAL LENGTH field in each fragment refers to the size of the fragment and not to the size of the original datagram, so without this bit the destination does not know the size of the IP datagram.)
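The offset arithmetic can be sketched as follows. This is a simplified illustration; real routers also handle options and the do-not-fragment bit:

```python
def fragment(payload_len, mtu, header_len=20):
    """Split a datagram payload into fragments that fit the MTU.
    Returns (offset in 8-octet units, fragment data length,
    more-fragments flag) for each fragment. Every fragment except
    the last must carry a multiple of 8 octets of data."""
    max_data = (mtu - header_len) // 8 * 8   # round down to an 8-octet multiple
    frags = []
    sent = 0
    while sent < payload_len:
        size = min(max_data, payload_len - sent)
        more = sent + size < payload_len
        frags.append((sent // 8, size, more))
        sent += size
    return frags

# A 1200-octet payload over a link that allows 400 data octets per
# fragment yields offsets 0, 50, 100, matching the worked example
# that follows.
```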

An Example of IP Fragmentation :

Example: Header + 400 + 400 + 400
- Header 1: FLAGS=001 and OFFSET = 0
- Header 2: FLAGS=001 and OFFSET = 400/8 = 50
- Header 3: FLAGS=000 and OFFSET = 800/8 = 100

How are IP fragments reassembled?
- All the IP fragments of a datagram will be assembled before the datagram is delivered to the layers above.
- Where should they be assembled, at routers or at the destination? They are assembled at the destination.
- IP reassembly uses a timer. If the timer expires and there are still missing fragments, all the fragments will be discarded.

Question: if you are implementing IP fragmentation, what (malicious) situations do you need to consider? Malicious situations are

those that are intentionally created by adversaries, rather than occurring naturally.
- What do you do if you never get the last missing piece?
- What do you do if you get overlapping fragments?
- What do you do if the last byte of a fragment would go over the maximum size of an IP packet, i.e., if the size of all reassembled fragments is larger than the maximum size of an IP packet?

IP Spoofing :
• Spoofing: any host can send packets pretending to be from any IP address; replies will be routed to the appropriate subnet.
• Egress (outgoing) filtering: remove packets that couldn't be coming from your network; however, it doesn't benefit you directly, so few people do it.
• Ingress (incoming) filtering: remove packets from invalid (e.g. local) addresses.
• To conduct IP spoofing, one needs the superuser privilege.


Anatomy of an IP address :
• The IP address is a 32-bit address that consists of two components.
• One component is the network portion of the address, consisting of the network bits.
– The network bits make up the left portion of the address.
– They consist of the first bit up to some boundary.
• The second component is the host portion of the address, consisting of the host bits.
– The host bits make up the right portion of the address.
– They consist of the remaining bits not included with the network bits.

The Mask :
• The network portion of the address is separated from the host portion of the address by a mask.
• The mask simply indicates how many bits are used for the network portion, leaving the remaining bits for the host portion.
• A 24-bit mask indicates that the first 24 bits of the address are network bits, and the remaining 8 bits are host bits.
• A 16-bit mask indicates that the first 16 bits of the address are network bits, and the remaining 16 bits are host bits.
• And so forth…

Quick review on binary math :
• Binary math is based on powers of 2, as opposed to powers of 10 for decimal math.
– Whereas decimal math has a 1s place, 10s place, 100s place, and so forth…
– Binary math has a 1s place, 2s place, 4s place, 8s place, and so forth.
• Given an octet (8 bits), when a bit in the octet is set (1) its value is…
– 128 = left-most bit (most significant bit) = 2^7
– 64 = next bit = 2^6
– 32 = next bit = 2^5
– 16 = next bit = 2^4
– 8 = next bit = 2^3
– 4 = next bit = 2^2
– 2 = next bit = 2^1
– 1 = right-most bit (least significant bit) = 2^0
• When a bit in an octet is not set (0) its value is zero.
• The decimal value of an octet is the sum of each set bit's value.
– 11000000 = 128 + 64 = 192
– 10101000 = 128 + 32 + 8 = 168
– 11111111 = 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255

Dotted decimal notation :
• Machines read the IP address as a stream of 32 bits.
• However, for human consumption, the IP address is written in dotted decimal notation.
– The 32-bit address is divided into 4 groups of 8 bits (an octet or a byte).
– Each octet is written as a decimal number ranging from 0 to 255.
– The decimal numbers are separated by periods, or dots.
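The octet arithmetic and the dotted decimal conversion above can be sketched directly:

```python
def octet_value(bits):
    """Decimal value of an 8-bit binary string: the sum of each set
    bit's place value (128, 64, 32, ..., 1)."""
    return sum(2 ** (7 - i) for i, b in enumerate(bits) if b == "1")

def to_dotted(bits32):
    """Write a 32-bit address in dotted decimal notation: four octets,
    each 0 to 255, separated by dots."""
    return ".".join(str(octet_value(bits32[i:i + 8])) for i in range(0, 32, 8))

# octet_value("11000000") is 192, and a full 32-bit string such as
# "11000000101010000000000100000001" renders as "192.168.1.1".
```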

Network, host, and broadcast addresses :
• For a given IP network…
– the network bits remain fixed and the host bits vary.
– the network address is the one that results when none of the host bits are set (the result of performing an AND operation on the address and its mask).
– the broadcast address is the one that results when all the host bits are set.
– host addresses are those that result from all remaining combinations of the host bits.
• The following two examples show how to determine the various addresses for two networks.
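The AND-with-mask procedure can be sketched as follows; the address 192.168.10.77/24 is just an illustrative pick:

```python
def network_and_broadcast(ip_str, prefix_len):
    """Network address (host bits cleared, via AND with the mask) and
    broadcast address (host bits set) for a dotted-decimal address."""
    octets = [int(o) for o in ip_str.split(".")]
    ip = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
    mask = (0xFFFFFFFF << (32 - prefix_len)) & 0xFFFFFFFF
    network = ip & mask                         # host bits cleared
    broadcast = network | (~mask & 0xFFFFFFFF)  # host bits set
    dotted = lambda a: ".".join(str((a >> s) & 0xFF) for s in (24, 16, 8, 0))
    return dotted(network), dotted(broadcast)

# With a 24-bit mask, 192.168.10.77 sits on network 192.168.10.0 and
# its broadcast address is 192.168.10.255.
```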


24-bit mask (255.255.255.0) :

16-bit mask (255.255.0.0) :

Formula to determine the number of hosts on a given network :
• Given that there are N host bits in an address, the number of hosts for that network is 2^N - 2. Two addresses are subtracted for the network address and the broadcast address.
• 8 host bits: 2^8 - 2 = 254 hosts
• 16 host bits: 2^16 - 2 = 65534 hosts
• 24 host bits: 2^24 - 2 = 16777214 hosts
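The same formula in code:

```python
def usable_hosts(mask_len):
    """Usable hosts on a network with the given mask length: 2^N - 2,
    where N = 32 - mask length; the two subtracted addresses are the
    network address and the broadcast address."""
    n = 32 - mask_len
    return 2 ** n - 2

# usable_hosts(24) -> 254, usable_hosts(16) -> 65534,
# usable_hosts(8) -> 16777214
```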


Public addresses :
• Most IP addresses are public addresses. Public addresses are registered as belonging to a specific organization.
• Internet Service Providers (ISPs) and extremely large organizations in the U.S. obtain blocks of public addresses from the American Registry for Internet Numbers (ARIN, http://www.arin.net). Other organizations obtain public addresses from their ISPs.
• There are ARIN counterparts in other parts of the world, and all of these regional registration authorities are subject to the global Internet Assigned Numbers Authority (IANA, http://www.iana.org).
• Public IP addresses are routed across the Internet, so that hosts with public addresses may freely communicate with one another globally.
• No organization is permitted to use public addresses that are not registered to that organization!

Private addresses :
• RFC 1918 designates the following as private addresses.
– Class A range: 10.0.0.0 through 10.255.255.255.
– Class B range: 172.16.0.0 through 172.31.255.255.
– Class C range: 192.168.0.0 through 192.168.255.255.
• Private addresses may be used by any organization, without any requirement for registration.
• Because private addresses are ambiguous – you can't tell where they're coming from or going to, because anyone can use them – private addresses are not permitted to be routed across the Internet.
• ISPs block private addresses from being routed across their infrastructure.
• Note: The use of private addresses, network address translation (NAT), and proxy servers solved the IP address shortage problem for the short and medium terms.

Address Classes : Three main classes:
- Class A networks
- Class B networks
- Class C networks

Class A networks
– First octet values range from 1 through 126.
– First octet starts with bit 0.
– Network mask is 8 bits, written /8 or 255.0.0.0.
– 1.0.0.0 through 126.0.0.0 are class A networks, with 16777214 hosts each.


Class B networks
– First octet values range from 128 through 191.
– First octet starts with binary pattern 10.
– Network mask is 16 bits, written /16 or 255.255.0.0.
– 128.0.0.0 through 191.255.0.0 are class B networks, with 65534 hosts each.

Class C networks
– First octet values range from 192 through 223.
– First octet starts with binary pattern 110.
– Network mask is 24 bits, written /24 or 255.255.255.0.
– 192.0.0.0 through 223.255.255.0 are class C networks, with 254 hosts each.

Two additional classes, and reserved addresses :

Class D addresses
– First octet values range from 224 through 239.
– First octet starts with binary pattern 1110.
– Class D addresses are multicast addresses.

Class E addresses
– Essentially everything that's left.
– Experimental class.

Reserved addresses
– 0.0.0.0 is the default IP address, and it is used to specify a default route.
– Addresses beginning with 127 are reserved for internal loopback addresses. It is common to see 127.0.0.1 used as the internal loopback address on many devices. Try pinging this address on a PC or Unix station.
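The classful ranges above can be checked from the first octet (a small sketch; 0 and 127 are treated as reserved, per the notes above):

```python
def address_class(ip_str):
    """Classify an IPv4 address from its first octet, per the classful
    ranges: A (1-126), B (128-191), C (192-223), D (224-239), E (240+).
    0 is the default address and 127 is loopback, both reserved."""
    first = int(ip_str.split(".")[0])
    if first in (0, 127):
        return "reserved"
    if first <= 126:
        return "A"
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    if first <= 239:
        return "D"
    return "E"
```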

Assigning Host IDs :
- a.b.c.1 through a.b.c.10 – usually routers and servers
- a.b.c.11 through a.b.c.204 – usually workstations
- a.b.c.241 through a.b.c.254 – usually UNIX (or Linux) hosts

Intranet Network IDs :
- 10.0.0.0 through 10.255.255.255 – usually internal networks
- 172.16.0.0 through 172.31.255.255 – usually intranets not connected to the Internet
- 192.168.0.0 through 192.168.255.255 – usually networks connected to the Internet (and usually behind a firewall)

Loopback address : Loopback (loop-back) describes ways of routing electronic signals, digital data streams, or flows of items from their originating facility back to the source without intentional processing or modification. This is primarily a means of testing the transmission or transportation infrastructure. This section covers router access, security, information gathering, configuration and scalability.
• Most ISPs make use of the router loopback interface.
• The IP address configured is a host address.
• Configuration example:
interface loopback 0
 description Loopback Interface of CORE-GW3

 ip address 215.18.3.34 255.255.255.255
• Loopback interfaces on an ISP backbone are usually numbered:
- out of one contiguous block, or
- using a geographical scheme, or
- using a per-PoP scheme
• The aim is to aid recognition and improve security.
With routers using a loopback address as the source for all IP packets originating from the router, it becomes very easy to construct appropriate filters to protect management systems in the ISP's network operation centres.

Accessing the Router :
• Put the mapping of the router loopback address to the router name into forward and reverse DNS.
• Telnet to the router using the loopback address, not an interface address. ISP routers usually have multiple external paths and many interfaces.
• DNS configuration example:
core-gw3 A 215.17.1.8 ; Loopback of router gw3

Remote access using Telnet :
• Remote access from the router using familiar telnet.
• Configure telnet so that the loopback address is used in packets originating from the router.
• Configuration example:
ip telnet source-interface Loopback0

RADIUS User Authentication :
• RADIUS: distributed authentication system for dial user access to routers.
• Configure RADIUS so that the loopback address is used in packets originating from the router.
• Configuration example:
ip radius source-interface Loopback0
radius-server host 215.17.1.1 auth-port 1645 acct-port 1646


Logging Information :
• Send logging information to a Unix or Windows SYSLOG server.
• Log packets leave the router with the loopback interface address as source.
• Configuration example:
logging source-interface loopback0

Network Time Protocol :
• Network Time Protocol (NTP) is used to synchronize the time on all the devices.
• NTP packets leave the router with the loopback address as source.
• Configuration example:
ntp source loopback0
ntp server 169.223.1.1 source loopback 1

SNMP :
• If SNMP is used, send traps from the router using the loopback address as source.
• Configuration example:
snmp-server trap-source Loopback0
snmp-server host 169.223.1.1 community

Interface Configuration :
• "IP Unnumbered": no need for an IP address on point-to-point links; keeps the IGP small.
• Configuration example:
interface loopback 0
 ip address 215.17.3.1 255.255.255.255
!
interface Serial 5/0
 ip unnumbered loopback 0
!
ip route 215.34.10.0 255.255.252.0 Serial 5/0
• The loopback interface is not "redundant" or "superfluous".
• It has a multitude of uses to ease security, access, management, information gathering and scalability of the router and network.
• It protects the ISP's management systems.
• Use the loopback!

IP routing concepts :
Routing : path finding from one end to the other
- routing occurs at layer 3
- bridging occurs at layer 2


IP performs:
- a search for a matching host address
- a search for a matching network address
- a search for a default entry
Routing is done by the IP router when it searches the routing table and decides which interface to send a packet out.

Routing functions include:
– route calculation
– maintenance of the routing table
– execution of routing protocols
• On commercial routers these are handled by a single general-purpose processor, called the route processor.
• IP forwarding is per-packet processing.
• On high-end commercial routers, IP forwarding is distributed: most of the work is done on the interface cards.


• Hardware components of a router:
  – network interfaces
  – interconnection network
  – processor with memory and CPU

IP routing is the process of forwarding a packet based on the destination IP address. Routing occurs at a sending TCP/IP host and at an IP router. In each case, the IP layer at the sending host or router must decide where to forward the packet. For IPv4, routers are also commonly referred to as gateways. To make these decisions, the IP layer consults a routing table stored in memory. Routing table entries are created by default when TCP/IP initializes, and entries can be added either manually or automatically.

Direct and Indirect Delivery :
Forwarded IP packets use at least one of two types of delivery, based on whether the IP packet is forwarded to the final destination or to an IP router. These two types of delivery are known as direct and indirect delivery.
• Direct delivery occurs when the IP node (either the sending host or an IP router) forwards a packet to the final destination on a directly attached subnet. The IP node encapsulates the IP datagram in a frame for the Network Interface layer. For a LAN technology such as Ethernet or Institute of Electrical and Electronic Engineers (IEEE) 802.11, the IP node addresses the frame to the destination's media access control (MAC) address.
• Indirect delivery occurs when the IP node (either the sending host or an IP router) forwards a packet to an intermediate node (an IP router) because the final destination is not on a directly attached subnet. For a LAN technology such as Ethernet or IEEE 802.11, the IP node addresses the frame to the IP router's MAC address.



End-to-end IP routing across an IP network combines direct and indirect deliveries. Find path - forward packet, forward packet, forward packet, forward packet... Find alternate path - forward packet, forward packet, forward packet, forward packet… Repeat until powered off

Routing = building maps and giving directions.
• The path is derived from information received from a routing protocol.
• Several alternative paths may exist; the best next hop is stored in the forwarding table.
• Decisions are updated periodically or as the topology changes (event driven).
• Decisions are based on: topology, policies and metrics (hop count, filtering, delay, bandwidth, etc.)

Routing Tables :
Routing is carried out in a router by consulting the routing table. There is no unique format for routing tables; typically a table contains:
- the address of a destination
- the IP address of the next hop router
- the network interface to be used
- the subnet mask for this interface
- the distance to the destination
A routing table is present on every IP node. The routing table stores information about IP destinations and how packets can reach them (either directly or indirectly). Because all IP nodes perform some form of IP routing, routing tables are not exclusive to IP routers. Any node using the TCP/IP protocol has a routing table. Each table contains a series of default entries according to the configuration of the node, and additional entries can be added manually, for example by administrators that use TCP/IP tools, or automatically, when nodes listen for routing information messages sent by routers. When IP forwards a packet, it uses the routing table to determine:
- The next-hop IP address: for a direct delivery, the next-hop IP address is the destination address in the IP packet. For an indirect delivery, the next-hop IP address is the IP address of a router.
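The lookup described above (most specific route wins, falling back to a default entry) can be sketched in a few lines. This is an illustrative model only, not a real router implementation; the table contents, prefixes and interface names below are invented for the example.

```python
import ipaddress

# A toy routing table: (destination prefix, next-hop, interface).
# A next-hop of None marks a directly-attached subnet (direct delivery).
ROUTING_TABLE = [
    (ipaddress.ip_network("215.17.3.0/24"), None, "eth0"),           # directly-attached subnet
    (ipaddress.ip_network("215.34.8.0/22"), "215.17.3.254", "eth0"), # remote subnet route
    (ipaddress.ip_network("0.0.0.0/0"), "215.17.3.1", "eth1"),       # default route
]

def lookup(dest):
    """Return (next_hop_ip, interface) using longest-prefix match."""
    dest = ipaddress.ip_address(dest)
    candidates = [(net, nh, ifc) for net, nh, ifc in ROUTING_TABLE if dest in net]
    if not candidates:
        raise ValueError("no route to host")
    # The most specific route (longest prefix) wins; the /0 default
    # route matches everything but loses to any more specific entry.
    net, nh, ifc = max(candidates, key=lambda e: e[0].prefixlen)
    # Direct delivery: the next-hop address is the destination itself.
    return (str(dest) if nh is None else nh, ifc)
```

For example, `lookup("215.17.3.9")` is a direct delivery out of eth0, `lookup("215.34.10.1")` is forwarded to the 215.17.3.254 router, and any other destination falls through to the default route.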

- The next-hop interface: the interface identifies the physical or logical interface that forwards the packet.

Routing Table Entries :
A typical IP routing table entry includes the following fields:
• Destination - either an IP address or an IP address prefix.
• Prefix Length - the prefix length corresponding to the address or range of addresses in the destination.
• Next-Hop - the IP address to which the packet is forwarded.
• Interface - the network interface that forwards the IP packet.
• Metric - a number that indicates the cost of the route so that IP can select the best route among potentially multiple routes to the same destination. The metric sometimes indicates the number of hops (the number of links to cross) in the path to the destination.

Routing table entries can store the following types of routes:
• Directly-attached subnet routes - routes for subnets to which the node is directly attached. For directly-attached subnet routes, the Next-Hop field can either be blank or contain the IP address of the interface on that subnet.
• Remote subnet routes - routes for subnets that are available across routers and are not directly attached to the node. For remote subnet routes, the Next-Hop field is the IP address of a neighbouring router.




• Host routes - a route to a specific IP address. Host routes allow routing to occur on a per-IP-address basis.
• Default route - used when a more specific subnet or host route is not present. The next-hop address of the default route is typically the default gateway or default router of the node.

Routing Components :
• Three important routing elements:
- algorithm
- database
- protocol
Algorithm: can be differentiated based on several key characteristics.
Database: the table in the routers, i.e. the routing table.
Protocol: the way routing information is gathered and distributed.

Routing algorithm types :
- static vs dynamic
- source routing vs hop-by-hop
- centralized vs distributed

- distance vector vs link state

Static route :
- manually configured routing table
- cannot react dynamically to network changes such as a router crash
- works well with small networks or simple topologies
- UNIX hosts use the route command to add an entry

Dynamic route :
- a routing protocol adjusts the table automatically for topology or traffic changes
- UNIX hosts run a routing daemon such as routed or gated


Source routing :
- the source determines the entire route
- routers act only as store-and-forward devices

Hop-by-hop :
- routers determine the path based on their own calculations

Distance vector :
- "distance" means the routing metric
- "vector" means the destination
- each router floods its routing table only to its neighbours
- RIP is an example
- also known as the Bellman-Ford algorithm or Ford-Fulkerson algorithm

Link state :
- flood routing information to all nodes
- each router finds out which neighbours are up and floods this information to all other routers
- each router uses the link states to build a shortest-path map to every destination
- also known as the Shortest Path First (SPF) algorithm
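The distance-vector idea above — each router repeatedly improving its distances from what its neighbours advertise (Bellman-Ford relaxation) — can be sketched as follows. The three-node topology is invented for illustration, and real protocols such as RIP exchange vectors over the network rather than in a shared table.

```python
# Distance-vector routing sketch: iterate Bellman-Ford relaxations
# until every router's distance vector stops changing (convergence).
INF = float("inf")

def distance_vector(links, nodes):
    """links: {(u, v): cost} per directed link; returns {node: {dest: cost}}."""
    # Initially each router knows only itself and its attached links.
    table = {n: {d: (0 if d == n else INF) for d in nodes} for n in nodes}
    for (u, v), cost in links.items():
        table[u][v] = min(table[u][v], cost)
    changed = True
    while changed:
        changed = False
        for (u, v), cost in links.items():
            for dest in nodes:
                # u can reach dest via neighbour v at cost(u,v) + v's distance.
                via_v = cost + table[v][dest]
                if via_v < table[u][dest]:
                    table[u][dest] = via_v
                    changed = True
    return table

topology = {("A", "B"): 1, ("B", "A"): 1,
            ("B", "C"): 2, ("C", "B"): 2,
            ("A", "C"): 5, ("C", "A"): 5}
dv = distance_vector(topology, ["A", "B", "C"])
```

Here the direct A-C link costs 5, but after convergence A learns the cheaper two-hop path through B, so `dv["A"]["C"]` is 3.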


Chapter 7. Domain Name System (DNS)
The Domain Name System (DNS) is a distributed database system that provides name-to-resource mapping (usually an IP address) and other information for computers on an internetwork. Any computer on the Internet can use a DNS server to locate any other computer on the Internet. DNS is made up of two distinct components, the hierarchy and the name service. The DNS hierarchy specifies the structure, naming conventions, and delegation of authority in the DNS service. The DNS name service provides the actual name-to-address mapping mechanism.

DNS Hierarchy
DNS uses a hierarchy to manage its distributed database system. The DNS hierarchy, also called the domain name space, is an inverted tree structure, much like eDirectory. The DNS tree has a single domain at the top of the structure called the root domain. A period or dot (.) is the designation for the root domain. Below the root domain are the top-level domains that divide the DNS hierarchy into segments. Listed below are the top-level DNS domains and the types of organizations that use them. Below the top-level domains, the domain name space is further divided into subdomains representing individual organizations.

Table : Top-Level DNS Domains

.com   Commercial organizations
.edu   Educational institutions
.gov   Government agencies
.mil   Military organizations
.net   Networking organizations
.org   Non-profit organizations
.int   International organizations
(plus two-letter country-code domains, e.g. .in, .uk, .us)


Figure : DNS Hierarchy

Domains and Subdomains
A domain is a label of the DNS tree. Each node on the DNS tree represents a domain. Domains under the top-level domains represent individual organizations or entities. These domains can be further divided into subdomains to ease administration of an organization's host computers. For example, Company A creates a domain called companya.com under the .com top-level domain. Company A has separate LANs for its locations in Chicago, Washington, and Providence. Therefore, the network administrator for Company A decides to create a separate subdomain for each location. Any domain in a subtree is considered part of all domains above it. Therefore, chicago.companya.com is part of the companya.com domain, and both are part of the .com domain.

Figure : Domains and Subdomains

DNS design goals
The design goals of the DNS influence its structure. They are:
- The primary goal is a consistent name space which will be used for referring to resources. In order to avoid the problems caused by ad hoc encodings, names should not be required to contain network identifiers, addresses, routes, or similar information as part of the name.
- The sheer size of the database and frequency of updates suggest that it must be maintained in a distributed manner, with local caching to improve performance. Approaches that attempt to collect a consistent copy of the entire database will become more and more expensive and difficult, and hence should be avoided. The same principle holds for the structure of the name space, and in particular mechanisms for creating and deleting names; these should also be distributed.
- Where there are tradeoffs between the cost of acquiring data, the speed of updates, and the accuracy of caches, the source of the data should control the tradeoff.
- The costs of implementing such a facility dictate that it be generally useful, and not restricted to a single application. We should be able to use names to retrieve host addresses, mailbox data, and other as yet undetermined information. All data associated with a name is tagged with a type, and queries can be limited to a single type.
- Because we want the name space to be useful in dissimilar networks and applications, we provide the ability to use the same name space with different protocol families or managements. For example, host address formats differ between protocols, though all protocols have the notion of address. The DNS tags all data with a class as well as a type, so that we can allow parallel use of different formats for data of type address.
- We want name server transactions to be independent of the communications system that carries them.
Some systems may wish to use datagrams for queries and responses, and only establish virtual circuits for transactions that need the reliability (e.g., database updates, long transactions); other systems will use virtual circuits exclusively.
- The system should be useful across a wide spectrum of host capabilities. Both personal computers and large timeshared hosts should be able to use the system, though perhaps in different ways.

Elements of the DNS
The DNS has three major components:
- The DOMAIN NAME SPACE and RESOURCE RECORDS, which are specifications for a tree structured name space and data associated with the names. Conceptually, each node and leaf of the domain name space tree names a set of

information, and query operations are attempts to extract specific types of information from a particular set. A query names the domain name of interest and describes the type of resource information that is desired. For example, the Internet uses some of its domain names to identify hosts; queries for address resources return Internet host addresses. - NAME SERVERS are server programs which hold information about the domain tree's structure and set information. A name server may cache structure or set information about any part of the domain tree, but in general a particular name server has complete information about a subset of the domain space, and pointers to other name servers that can be used to lead to information from any part of the domain tree. Name servers know the parts of the domain tree for which they have complete information; a name server is said to be an AUTHORITY for these parts of the name space. Authoritative information is organized into units called ZONEs, and these zones can be automatically distributed to the name servers which provide redundant service for the data in a zone. - RESOLVERS are programs that extract information from name servers in response to client requests. Resolvers must be able to access at least one name server and use that name server's information to answer a query directly, or pursue the query using referrals to other name servers. A resolver will typically be a system routine that is directly accessible to user programs; hence no protocol is necessary between the resolver and the user program.

Domain Names :
The domain name represents an entity's position within the structure of the DNS hierarchy. A domain name is simply a list of all domains in the path from the local domain to the root. Each label in the domain name is delimited by a period. For example, the domain name for the Providence domain within Company A is providence.companya.com, as shown in above figure. The domain name space consists of a tree of domain names. Each node or leaf in the tree has zero or more resource records, which hold information associated with the domain name. The tree sub-divides into zones beginning at the root zone. A DNS zone may consist of only one domain, or may consist of many domains and sub-domains, depending on the administrative authority delegated to the manager. A domain is identified by a domain name, and consists of that part of the domain name space that is at or below the domain name which specifies the domain. A domain is a subdomain of another domain if it is contained within that domain. This relationship can be tested by seeing if the subdomain's name ends with the containing domain's name. For example, A.B.C.D is a subdomain of B.C.D, C.D, D, and " ".
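The suffix test described in the last paragraph — a domain is a subdomain of another if its name ends with the containing domain's name — can be written directly. This is an illustrative helper (the function name is ours), treating the root domain as the empty name:

```python
def is_subdomain(name, parent):
    """True if `name` lies at or below `parent` in the DNS tree.

    A domain is a subdomain of another if its label sequence ends with
    the containing domain's label sequence; the root is the empty name.
    """
    name_labels = name.lower().rstrip(".").split(".")
    parent = parent.strip()
    if parent in ("", "."):          # every domain is under the root
        return True
    parent_labels = parent.lower().rstrip(".").split(".")
    if len(parent_labels) > len(name_labels):
        return False
    return name_labels[len(name_labels) - len(parent_labels):] == parent_labels

# From the text: chicago.companya.com is part of companya.com,
# and a.b.c.d is a subdomain of b.c.d, c.d, d, and the root.
```

Comparing whole labels (rather than raw string suffixes) avoids wrongly treating, say, companya.com as a subdomain of anya.com.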

Delegating Authority :
Domain delegation gives an organization authority for a domain. Having authority for a domain means that the organization's network administrator is

responsible for maintaining the DNS database of hostname and address information for that domain. A group of domains and subdomains for which an organization has authority is called a zone. All host information for a zone is maintained in a single, authoritative database. For example, the companya.com. domain is delegated to Company A, creating the companya.com. zone. There are three subdomains within the companya.com. domain:
- chicago.companya.com.
- washington.companya.com.
- providence.companya.com.

The Company A administrator maintains all host information for the zone in a single database and also has authority to create and delegate subdomains. For example, Company A's Chicago location has its own network administrator. The companya.com administrator delegates the chicago.companya.com zone to the Chicago location and no longer has authority over it. Company A now has two zones:
- companya.com, which has authority over the companya.com, washington.companya.com, and providence.companya.com zones
- chicago.companya.com, which has authority over the chicago.companya.com zone

Resource Records :
Resource records (RRs) contain the host information maintained by the name servers and make up the DNS database. Different types of records contain different types of host information. For example, an Address record provides the name-to-address mapping for a given host, while a Start of Authority (SOA) record specifies the start of authority for a given zone. A DNS zone must contain several types of resource records for DNS to function properly. Other RRs can be present, but the following records are required for standard DNS:
• Name server (NS) - binds a domain name with a hostname for a specific name server. The DNS zone must contain NS records for each primary and secondary name server in the zone, and NS records to link the zone to higher- and lower-level zones within the DNS hierarchy.
• Start of Authority (SOA) - indicates the start of authority for the zone. The name server must contain one SOA record specifying its zone of authority.
• Canonical name (CNAME) - specifies the canonical or primary name for the owner. The owner name is an alias.
• Address (A) - provides the IP address for a host in the zone.

For example, the name server for a zone must contain the following:
- An SOA record identifying its zone of authority
- An NS record for the primary name server within the zone
- An NS record for each secondary name server within the zone
- An A record that maps each name server specified in the NS records to an IP address

Table : Resource Record Types and Field Differences

A Resource Record (RR) is the basic data element in the domain name system. Each record has a type (A, MX, etc.), an expiration time limit, a class, and some type-specific data. Resource records of the same type define a resource record set (RRset). The order of resource records in a set, returned by a resolver to an application, is undefined, but often servers implement round-robin ordering to achieve load balancing. DNSSEC, however, works on complete resource record sets in a canonical order.

RR (Resource Record) fields:

Field      Description                                              Length (octets)
---------  -------------------------------------------------------  ---------------
NAME       Name of the node to which this record pertains           (variable)
TYPE       Type of RR in numeric form (e.g. 15 for MX RRs)          2
CLASS      Class code                                               2
TTL        Count of seconds that the RR stays valid                 4
           (the maximum is 2^31 - 1, which is about 68 years)
RDLENGTH   Length of the RDATA field                                2
RDATA      Additional RR-specific data                              (variable)
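The fixed-size fields that follow NAME can be illustrated with Python's struct module. The sample values are invented, and the real wire format also encodes NAME with label compression, which is omitted here:

```python
import struct

# Pack the fixed portion of a resource record that follows the NAME field:
# TYPE (2 octets), CLASS (2), TTL (4), RDLENGTH (2), big-endian ("network order").
def pack_rr_fixed(rtype, rclass, ttl, rdlength):
    return struct.pack("!HHIH", rtype, rclass, ttl, rdlength)

# Example: an MX record (TYPE 15), class IN (1), TTL 3600, 9 octets of RDATA.
fixed = pack_rr_fixed(15, 1, 3600, 9)
assert len(fixed) == 2 + 2 + 4 + 2  # matches the octet lengths listed above
```

Unpacking with the same `"!HHIH"` format string recovers the four field values from the wire bytes.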

SOA (Start of Authority) records :
The SOA record is the first record in a properly configured zone. It contains information about the zone in a string of fields. An SOA record tells the server to be authoritative for the zone. The SOA record takes the format:
<domain.name.> IN SOA <hostname.domain.name.> <mailbox.domain.name> <serial-number> <refresh> <retry> <expire> <minimum-ttl>
Where:
- serial-number : the version number of the zone data, incremented whenever the zone changes
- refresh : how often (in seconds) a secondary server checks the primary for updates
- retry : how long (in seconds) a secondary waits before retrying a failed refresh
- expire : how long (in seconds) a secondary keeps serving the zone when it cannot reach the primary
- minimum-ttl : the default time-to-live for records in the zone


The “;” character in the example above indicates that the rest of the line is a comment that should be ignored by the nameserver. Also note: a trailing dot (“.”) after a hostname marks it as fully qualified. Without the dot, the nameserver appends the current zone to the name. For example, without a trailing dot, ns.apnic.net would be interpreted as ns.apnic.net.28.12.202.in-addr.arpa.
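Putting the required records together, a minimal zone file for the companya.com example might look like the following. This is a hypothetical illustration: the server names, mailbox and addresses are invented, and the timer values are arbitrary.

```
companya.com.      IN  SOA  ns1.companya.com. admin.companya.com. (
                            2016050101 ; serial-number
                            86400      ; refresh
                            7200       ; retry
                            3600000    ; expire
                            172800 )   ; minimum-ttl
companya.com.      IN  NS   ns1.companya.com.   ; primary name server
companya.com.      IN  NS   ns2.companya.com.   ; secondary name server
ns1.companya.com.  IN  A    215.17.1.10         ; A record for each name server
ns2.companya.com.  IN  A    215.17.1.11
www.companya.com.  IN  CNAME companya.com.      ; alias; canonical name is companya.com.
```

Every name carries a trailing dot, so the nameserver treats each one as fully qualified rather than appending the current zone.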

DNS protocol :
The DNS protocol (Domain Name System) allows names to be 'resolved' by the AX3000. Resolving means retrieving the IP address associated with a name. The DNS protocol is only available with certain AX3000 models. A domain (computer network) can be considered as a tree, with branches (nodes) such as hubs, switches, routers, print servers etc., and leaves, for example PCs, terminals and printers. The domain system makes no distinction between the use of interior nodes and leaves, and this documentation uses the term "nodes" to refer to both (i.e. any network resource). Each node has a name (label) which must be unique among the nodes at the same level, but not necessarily unique within the whole network.

Figure : AX3000 Interface Box

Label syntax:
- Permissible characters are letters (a..z, A..Z), numbers (0..9) and the hyphen (-).
- A label must begin with a letter and end with a letter or a number.
- Resolution is not case-sensitive.
To set the DNS protocol, select the [Configuration]_[TCP/IP]_[DNS] menu. A dialog box is shown in the figure below:

Figure : DNS Box

The parameters of the DNS box are:
- DNS Servers: to resolve a name, the AX3000 sends DNS requests to a DNS server. The IP address of this DNS server must be known. The AX3000 set-up procedure allows two DNS servers to be set. Note: if 'DNS Servers' is enabled in the AX3000 Interface box, these two parameters are supplied by DHCP and cannot be accessed here.
- Default DNS Domains: these domains can be used during the resolving operation. Note: if the '1st DNS Search Domain' is enabled in the AX3000 Interface box, the '1st Domain' parameter is automatically set and cannot be accessed here.

DHCP & Scope Resolution :
The Dynamic Host Configuration Protocol (DHCP) provides configuration parameters to Internet hosts. DHCP consists of two components: a protocol for delivering host-specific configuration parameters from a DHCP server to a host and a mechanism for allocation of network addresses to hosts. DHCP is built on a client-server model, where designated DHCP server hosts allocate network addresses and deliver configuration parameters to dynamically configured hosts. Throughout the remainder of this document, the term "server" refers to a host providing initialization parameters through DHCP, and the term "client" refers to a host requesting initialization parameters from a DHCP server. A host should not act as a DHCP server unless explicitly configured to do so by a system administrator. The diversity of hardware and protocol implementations in the Internet would preclude reliable operation if random hosts were allowed to respond to DHCP requests. For example, IP requires the setting of many parameters within the protocol implementation software. Because IP can be used on many dissimilar kinds of network hardware, values for those parameters cannot be guessed or assumed to have correct defaults. Also, distributed address allocation schemes depend on a polling/defense mechanism for discovery of addresses that are already in use. IP hosts may not always be able to defend their network addresses, so that such a distributed address allocation scheme cannot be guaranteed to avoid allocation of duplicate network addresses. DHCP supports three mechanisms for IP address allocation. In "automatic allocation", DHCP assigns a permanent IP address to a client. In "dynamic allocation", DHCP assigns an IP address to a client for a limited period of time (or until the client explicitly relinquishes the address). In "manual allocation", a client's IP address is assigned by the network administrator, and DHCP is used simply to convey the assigned address to the client. 
A particular network will use one or more of these mechanisms, depending on the policies of the network administrator.
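As a rough illustration of dynamic allocation, the following toy lease pool hands out addresses for a limited time and reuses them once leases expire or are released. This is our own simplified sketch, not part of the DHCP specification: the class and method names are invented, and the actual message exchange (DISCOVER/OFFER/REQUEST/ACK), renewals and persistence are all omitted.

```python
import time

class LeasePool:
    """Toy dynamic-allocation pool: addresses are leased for a limited
    time and reused automatically once a lease expires or is released."""

    def __init__(self, addresses, lease_seconds):
        self.free = list(addresses)
        self.lease_seconds = lease_seconds
        self.leases = {}  # client_id -> (address, expiry timestamp)

    def allocate(self, client_id, now=None):
        now = time.time() if now is None else now
        # Reclaim expired leases so their addresses can be reused.
        for cid, (addr, expiry) in list(self.leases.items()):
            if expiry <= now:
                del self.leases[cid]
                self.free.append(addr)
        # A returning client keeps its previous address while still leased.
        if client_id in self.leases:
            addr, _ = self.leases[client_id]
        else:
            if not self.free:
                return None  # pool exhausted
            addr = self.free.pop(0)
        self.leases[client_id] = (addr, now + self.lease_seconds)
        return addr

    def release(self, client_id):
        # The client explicitly relinquishes its address.
        if client_id in self.leases:
            addr, _ = self.leases.pop(client_id)
            self.free.append(addr)

pool = LeasePool(["10.0.0.10", "10.0.0.11"], lease_seconds=60)
```

The `now` parameter exists only to make time explicit in the example; a real server would use the clock directly and also defend addresses against duplicates, as the text notes.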

Dynamic allocation is the only one of the three mechanisms that allows automatic reuse of an address that is no longer needed by the client to which it was assigned. Thus, dynamic allocation is particularly useful for assigning an address to a client that will be connected to the network only temporarily, or for sharing a limited pool of IP addresses among a group of clients that do not need permanent IP addresses. Dynamic allocation may also be a good choice for assigning an IP address to a new client being permanently connected to a network where IP addresses are sufficiently scarce that it is important to reclaim them when old clients are retired. Manual allocation allows DHCP to be used to eliminate the error-prone process of manually configuring hosts with IP addresses in environments where (for whatever reasons) it is desirable to manage IP address assignment outside of the DHCP mechanisms.

DHCP is designed to supply DHCP clients with the configuration parameters defined in the Host Requirements RFCs. After obtaining parameters via DHCP, a DHCP client should be able to exchange packets with any other host in the Internet. Not all of these parameters are required for a newly initialized client. A client and server may negotiate for the transmission of only those parameters required by the client or specific to a particular subnet. DHCP allows but does not require the configuration of client parameters not directly related to the IP protocol. DHCP also does not address registration of newly configured clients with the Domain Name System (DNS). DHCP is not intended for use in configuring routers.

Requirements
Throughout this document, the words that are used to define the significance of particular requirements are capitalized. These words are:

"MUST" - This word or the adjective "REQUIRED" means that the item is an absolute requirement of this specification.

"MUST NOT" - This phrase means that the item is an absolute prohibition of this specification.
"SHOULD" - This word or the adjective "RECOMMENDED" means that there may exist valid reasons in particular circumstances to ignore this item, but the full implications should be understood and the case carefullyweighed before choosing a different course. "SHOULD NOT"


- This phrase means that there may exist valid reasons in particular circumstances when the listed behavior is acceptable or even useful, but the full implications should be understood and the case carefully weighed before implementing any behavior described with this label.

"MAY" - This word or the adjective "OPTIONAL" means that this item is truly optional. One vendor may choose to include the item because a particular marketplace requires it or because it enhances the product, for example; another vendor may omit the same item.

Terminology
"DHCP client" - A DHCP client is an Internet host using DHCP to obtain configuration parameters such as a network address.

"DHCP server" - A DHCP server is an Internet host that returns configuration parameters to DHCP clients.

"BOOTP relay agent" - A BOOTP relay agent or relay agent is an Internet host or router that passes DHCP messages between DHCP clients and DHCP servers. DHCP is designed to use the same relay agent behavior as specified in the BOOTP protocol specification.

"binding" - A binding is a collection of configuration parameters, including at least an IP address, associated with or "bound to" a DHCP client. Bindings are managed by DHCP servers.

Figure : DNS resolution sequence


Figure : DHCP Servers


Chapter 8. Network Applications

E-Mail :
Electronic mail was originally designed to allow a pair of individuals to communicate via computer. The first electronic mail software provided only a basic facility: it allowed a person using one computer to type a message and send it across the Internet to a person using another computer. Current electronic mail systems provide services that permit complex communication and interaction. For example, electronic mail can be used to:
- Send a single message to many recipients.
- Send a message that includes text, audio, video or graphics.
- Send a message to a user on a network outside the Internet.
- Send a message to which a computer program responds.

To appreciate the capabilities and significance of electronic mail, one must understand a few basic facts. The next sections consider how electronic mail appears to a user. Later sections describe how electronic mail systems work and discuss the impact of electronic mail. Researchers working on early computer networks realized that networks can provide a form of communication among individuals that combines the speed of telephone communication with the permanence of postal mail. A computer can transfer small notes or large documents across a network almost instantaneously. The designers called the new form of communication electronic mail, often abbreviated as email. The concept of email has become extremely popular on the Internet as well as on most other computer networks. To receive electronic mail, a user must have a mailbox, a storage area, usually on disk, that holds incoming email messages until the user has time to read them. In addition, the computer on which a mailbox resides must also run email

software. When a message arrives, email software automatically stores it in the user's mailbox. An email mailbox is private in the same way that postal mailboxes are private: anyone can send a message to a mailbox, but only the owner can examine the mailbox contents or remove messages. Like a post office mailbox, each email mailbox has a mailbox address. To send email to another user, one must know the recipient's mailbox address. Thus:
- Each individual who participates in electronic mail exchange has a mailbox identified by a unique address.
- Any user can send mail across the Internet to another user's mailbox if they know the mailbox address.
- Only the owner can examine the contents of a mailbox and extract messages.
To send electronic mail across the Internet, an individual runs an email application program on their local computer. The local application operates much like a word processor: it allows a user to compose and edit a message and to specify a recipient by giving a mailbox address. Once the user finishes entering the message and adds attachments, email software sends it across the Internet to the recipient's mailbox. When an incoming email message arrives, system software can be configured to inform the recipient. Some computers print a text message or highlight a small graphic on the user's display (e.g., a small picture of letters in a postal mailbox). Other computers sound a tone or play a recorded message. Still other computers wait for the user to finish viewing the current application before making an announcement. Most systems allow a user to suppress notification altogether, in which case the user must periodically check to see if email has arrived. Once email has arrived, a user can extract messages from his or her mailbox using an application program. The application allows a user to view each message and, optionally, to send a reply. Usually, when an email application begins, it tells the user about the messages waiting in the mailbox.
The initial summary contains one line for each email message that has arrived; the line gives the sender's name, the time the message arrived, and the length of the message. After examining the summary, a user can select and view messages on the list. Each time a user selects a message from the summary, the email system displays the message contents. After viewing a message, a user must choose an action: send a reply to whoever sent the message, leave the message in the mailbox so it can be viewed again, save a copy of the message in a file, or discard the message. To summarize:
- A computer connected to the Internet needs application software before users can send or receive electronic mail.
- Email software allows a user to compose and send messages or to read messages that have arrived.
- A user can send a reply to any message.
Prof. Jadhav Dattatraya Subhash (SICS-MCA, Korti) Page 111

Usually, the sender only needs to supply information for the TO and SUBJECT lines in a message header because email software fills in the date and the sender's mailbox address automatically. In a reply, the mail interface program constructs the entire header automatically: it uses the contents of the FROM field in the original message as the contents of the TO field in the reply, and it copies the SUBJECT field from the original message into the reply. Having software fill in the header lines is convenient, and it also makes it more difficult to forge email. In practice, most email systems supply additional header lines that help identify the sending computer, give the full name of the person who sent the message, provide a unique message identifier that can be used for auditing or accounting, and identify the type of message (e.g., text or graphics). Thus, email messages can arrive with dozens of lines in the header. A lengthy header can be annoying to a recipient who must skip past it to find the body of a message, so software used to read email can help the recipient by hiding most header lines. To summarize: although most email messages contain many lines of header, software generates most of the header automatically, and user-friendly software hides unnecessary header lines when displaying an email message.
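The automatic header handling described above can be sketched with Python's standard email package. This is only an illustration, not the internals of any particular mail program; the addresses and subject are hypothetical.

```python
from email.message import EmailMessage

def build_reply(original: EmailMessage, body: str) -> EmailMessage:
    """Fill in a reply header the way mail software does automatically:
    the FROM of the original becomes the TO of the reply, and the
    SUBJECT is copied (prefixed with "Re:")."""
    reply = EmailMessage()
    reply["To"] = original["From"]          # reply goes back to the sender
    subject = original["Subject"] or ""
    reply["Subject"] = subject if subject.lower().startswith("re:") else "Re: " + subject
    reply.set_content(body)
    return reply

# Example with hypothetical addresses:
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Meeting"
msg.set_content("Can we meet at noon?")

reply = build_reply(msg, "Noon works for me.")
```

The Date and Message-ID lines mentioned in the text would be added the same way, by the software rather than the user.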

E-Mail Operation : A computer communication always involves interaction between two programs called a client and a server. E-mail systems follow the client/server approach: two programs cooperate to transfer an email message from the sender's computer to the recipient's mailbox (transfer requires two programs because an application running on one computer cannot store data directly in a mailbox on another computer's disk). When a user sends an email message, a program on the sender's computer becomes a client. It contacts an email server program on the recipient's computer and transfers a copy of the message. The server stores the message in the recipient's mailbox.

Figure : Email operation

Client software starts automatically when a user finishes composing an email message. The client uses the recipient's email address to determine which remote computer to contact, then uses TCP to send a copy of the email message across the Internet to the server. When the server receives a message, it stores the message in the recipient's mailbox and informs the recipient that email has arrived. The interaction between a client and a server is complex because at any time the computers, or the Internet connecting them, can fail (e.g., someone can accidentally turn off one of the computers). To ensure that email is delivered reliably, the client keeps a copy of the message during the transfer. Only after the server informs the client that the message has been received and stored on disk does the client erase its copy. A computer cannot receive email unless it has an email server program running. On large computers, the system administrator arranges to start the server when the system boots and leaves the server running at all times. The server waits for an email message to arrive, stores the message in the appropriate mailbox on disk, and then waits for the next message. A user whose personal computer is frequently powered down or disconnected from the Internet cannot receive email while the computer is inactive. Therefore, most personal computers do not receive email directly. Instead, a user arranges to have a mailbox on a large computer with a server that always remains ready to accept an email message and store it in the user's mailbox. For example, a user can choose to place their mailbox on their company's main computer, even if they use a personal computer for most work. To read email from a personal computer, a user must contact the main computer system and obtain a copy of their mailbox. 
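The keep-a-copy-until-acknowledged behaviour described above can be illustrated with a small simulation. This is a toy model, not real mail software: `flaky_server` is a stand-in for a remote server that sometimes fails, and the client retries until it receives an acknowledgement.

```python
import random

random.seed(42)  # make the example deterministic

def flaky_server(message, mailbox, fail_rate=0.5):
    """Stand-in for a remote email server: randomly 'fails' to simulate
    a crashed machine or broken connection. Returns True (an
    acknowledgement) only after the message is safely stored."""
    if random.random() < fail_rate:
        return False                # no acknowledgement received
    mailbox.append(message)         # message stored in recipient's mailbox
    return True

def reliable_send(message, mailbox):
    """The client keeps its copy of the message and retries until the
    server acknowledges that the message has been stored; only then
    does the client discard its copy."""
    pending = message               # client-side copy kept during transfer
    while pending is not None:
        if flaky_server(pending, mailbox):
            pending = None          # ack received: safe to erase the copy
    return True

mailbox = []
reliable_send("Hello, Bob", mailbox)
```

Real systems additionally queue messages on disk so the copy survives a crash of the client machine itself.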
File Transfer Protocol (FTP) : Although services like email and Internet fax can be used for sending files over the net, they are not designed to handle large volumes of data. For sending large volumes of data reliably over the net, the File Transfer Protocol (FTP) is preferred instead. FTP works in an interactive environment: type ftp at the command prompt to enter an interactive ftp session. FTP responds to each command the user enters. For example, when a session begins, the user enters a command to identify a remote computer, and FTP establishes a connection to that computer. In the same way, to terminate a session the user tells FTP to relinquish the connection. The objectives of FTP are 1) to promote sharing of files (computer programs and/or data), 2) to encourage indirect or implicit (via programs) use of remote computers, 3) to shield a user from variations in file storage systems among hosts, and 4) to transfer data reliably and efficiently. FTP, though usable directly by a user at a terminal, is designed mainly for use by programs.


Files are transferred only via the data connection. The control connection is used for the transfer of commands, which describe the functions to be performed, and the replies to these commands. Several commands are concerned with the transfer of data between hosts. These data transfer commands include the MODE command, which specifies how the bits of the data are to be transmitted, and the STRUcture and TYPE commands, which define the way in which the data are to be represented. The transmission and representation are basically independent, but the "Stream" transmission mode depends on the file structure attribute, and if the "Compressed" transmission mode is used, the nature of the filler byte depends on the representation type.
FTP commands : There are around 58 separate commands, but the average user needs to know only the following three basic commands:
- open <name of the ftp computer>: connect to a remote computer.
- get <filename>: retrieve a file from the remote computer.
- bye: terminate the connection and leave the ftp session.
FTP can be used not only for retrieving files but also for uploading files with the send command. Once a connection has been established, type send along with the name of the file to be sent, and a copy of the file will be transferred to the remote computer. Of course, the FTP server on the remote site must be configured to allow file storage; many Internet sites that run ftp do.
FTP File Types : FTP understands only two basic file formats: it classifies each file as either a text file or a binary file. A text file uses ASCII encoding, and FTP has commands to convert a non-ASCII text file to an ASCII text file. FTP uses the binary classification for all non-text files. For example, the following types of files should be specified as binary:
- A computer program
- Audio data
- A graphic or video image
- A spreadsheet
- A word processor document
- Compressed files

A compressed file is a file that has been processed by a file-compression utility to reduce its size. Using a file-decompression utility such as unzip, the original file can be reconstructed.


Choosing between binary and ASCII transfer can sometimes be difficult; when you are unsure about the type of a file, choose the binary option. If a user asks FTP to perform a transfer using the incorrect type, the transferred copy may be damaged.
FTP login : The user must log in to the ftp site as an authentic user before performing any ftp transactions. Usually the user is provided with a login name and password; this protects the site from malicious users and keeps the data secure. To make files available to the general public, a system administrator can configure FTP to honor anonymous FTP. It works like standard FTP, except that it allows anyone to access public files. To use anonymous FTP, a user enters the login name anonymous and the password guest. A few sites prompt for the user's email address so that, in case of errors such as failed transfers, error messages can be emailed. Most users invoke FTP through a web browser so that ftp transactions can be carried out in a graphical user interface (GUI) environment.
FTP operation : FTP operation is also based on the client/server model.

Figure : FTP operation
The user invokes a local FTP program or enters a URL that specifies FTP. The local FTP program or the user's browser becomes an FTP client that uses TCP to contact an FTP server program on the remote computer. Each time the user requests a file transfer, the client and server programs interact to send a copy of the data across the Internet.
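The interactive open/get/bye sequence can be sketched with Python's standard ftplib module. The host name below is hypothetical, and the mode-choosing rule simply defaults to binary when unsure, as the text recommends; the network call is left commented out.

```python
from ftplib import FTP

# File extensions we can reasonably assume are plain text; when unsure,
# the safe default is binary (a wrongly typed transfer may be damaged).
TEXT_EXTENSIONS = {".txt", ".csv", ".html"}

def transfer_mode(filename: str) -> str:
    """Pick "ascii" only for known text files; default to "binary"."""
    dot = filename.rfind(".")
    ext = filename[dot:].lower() if dot != -1 else ""
    return "ascii" if ext in TEXT_EXTENSIONS else "binary"

def fetch(host: str, remote_name: str, local_name: str) -> None:
    """Anonymous-FTP equivalent of: open <host>, get <file>, bye."""
    ftp = FTP(host)                     # like the interactive "open" command
    ftp.login()                         # anonymous login (user "anonymous")
    if transfer_mode(remote_name) == "binary":
        with open(local_name, "wb") as f:
            ftp.retrbinary("RETR " + remote_name, f.write)
    else:
        with open(local_name, "w") as f:
            ftp.retrlines("RETR " + remote_name, lambda line: f.write(line + "\n"))
    ftp.quit()                          # like "bye"

# fetch("ftp.example.com", "README", "README")   # needs a reachable server
```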


The FTP server locates the file that the user requested and uses TCP to send a copy of the entire contents of the file across the Internet to the client. As the client program receives data, it writes the data into a file on the user's local disk. After the file transfer completes, the client and server programs terminate the TCP connection used for the transfer.
TELNET : The purpose of the TELNET protocol is to provide a fairly general, bi-directional, eight-bit byte oriented communications facility. Its primary goal is to allow a standard method of interfacing terminal devices and terminal-oriented processes to each other. It is envisioned that the protocol may also be used for terminal communication ("linking") and process-process communication (distributed computation).

The Telnet protocol is often thought of as simply providing a facility for remote logins to the computer via the Internet. This was its original purpose although it can be used for many other purposes. A TELNET connection is a Transmission Control Protocol (TCP) connection used to transmit data with interspersed TELNET control information. The TELNET Protocol is built upon three main ideas: first, the concept of a "Network Virtual Terminal"; second, the principle of negotiated options; and third, a symmetric view of terminals and processes. It is best understood in the context of a user with a simple terminal using the local telnet program (known as the client program) to run a login session on a remote computer where his communications needs are handled by a telnet server program on the remote computer. It should be emphasized that the telnet server can pass on the data it has received from the client to many other types of process including a remote login server.
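The "negotiated options" idea can be made concrete: a TELNET stream intersperses ordinary Network Virtual Terminal data with control sequences introduced by the IAC (Interpret As Command) byte, 255, followed by a verb such as WILL or DO and an option number. A minimal sketch of separating the two (the byte values are from the TELNET specification; the parsing code is illustrative only):

```python
# TELNET negotiation byte values (from the TELNET specification)
IAC, WILL, WONT, DO, DONT = 255, 251, 252, 253, 254
NAMES = {WILL: "WILL", WONT: "WONT", DO: "DO", DONT: "DONT"}

def split_stream(data: bytes):
    """Separate ordinary NVT data from interspersed IAC negotiation
    commands, returning (plain_text, [(verb, option), ...])."""
    text = bytearray()
    commands = []
    i = 0
    while i < len(data):
        if data[i] == IAC and i + 2 < len(data) and data[i + 1] in NAMES:
            commands.append((NAMES[data[i + 1]], data[i + 2]))
            i += 3                  # consume IAC + verb + option number
        else:
            text.append(data[i])
            i += 1
    return bytes(text), commands

# A server offering to echo (option 1) mixed with the text "login: "
stream = bytes([IAC, WILL, 1]) + b"login: "
text, cmds = split_stream(stream)
```

A real client would answer each WILL with DO or DONT (and each DO with WILL or WONT), which is what makes the negotiation symmetric.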


Figure : Telnet session
Once a connection has been established between the client and server, the software allows the user to interact directly with the remote computer's operating system: for all of the user's input, the server sends back output, which is displayed on the user's screen. After a user logs out of the remote computer, the server terminates the Internet connection, informs the client that the session has ended, and control of the keyboard, mouse and display returns to the local computer. Remote access by telnet is significant for three reasons. First, it moves computation away from the user: instead of sending a data file or a message from one computer to another, remote access allows a program to accept input, process it, and send back the result to the remote user. Second, once a user logs in to the remote computer, the user can execute any program residing on the remote server. Finally, for users working on heterogeneous platforms, telnet can become a common interface to different machines. Here's an example of a telnet session to sics:

    $ telnet
    telnet> toggle options
    Will show option processing.
    telnet> open sics
    Trying 172.19.1.21
    Connected to linux sics
    Escape character is '^]'.

MIME : MIME stands for Multipurpose Internet Mail Extensions, an official Internet standard that specifies how messages must be formatted so that they can be exchanged between different email systems. MIME is a very flexible format, permitting one to include virtually any type of file or document in an email message: MIME messages can contain text, images, audio, video, or other application-specific data. 
Specifically, MIME allows mail messages to contain:
- Multiple objects in a single message.
- Text having unlimited line length or overall length.
- Character sets other than ASCII, allowing non-English-language messages.
- Multi-font messages.
- Binary or application-specific files.
- Images, audio, video and multimedia messages.
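A message combining several of these content types can be built with Python's standard email package. This sketch attaches a (truncated, placeholder) binary image to a text message; the addresses are hypothetical, and the library generates the multipart boundary discussed below.

```python
from email.message import EmailMessage

# Build a multipart MIME message: a text body plus a binary attachment.
msg = EmailMessage()
msg["From"] = "alice@example.com"      # hypothetical addresses
msg["To"] = "bob@example.com"
msg["Subject"] = "Report attached"
msg.set_content("The report is attached.")                # text part
msg.add_attachment(b"\x89PNG...", maintype="image",       # binary part
                   subtype="png", filename="chart.png")

# Serialize; the generator inserts a boundary between the parts.
raw = msg.as_string()
```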

A MIME multipart message contains a boundary in the Content-Type: header; this boundary, which must not occur in any of the parts, is placed between the parts and at the beginning and end of the body of the message.
A secure version of MIME, S/MIME (Secure/Multipurpose Internet Mail Extensions), is defined to support encryption of email messages. Based on the MIME standard, S/MIME provides the following cryptographic security services for electronic messaging applications: authentication, message integrity, non-repudiation of origin, privacy and data security. S/MIME can be used by traditional mail user agents (MUAs) to add cryptographic security services to mail that is sent, and to interpret cryptographic security services in mail that is received. However, S/MIME is not restricted to mail; it can be used with any transport mechanism that transports MIME data, such as HTTP. As such, S/MIME takes advantage of the object-based features of MIME and allows secure messages to be exchanged in mixed-transport systems. Further, S/MIME can be used by automated message transfer agents that apply cryptographic security services without human intervention, such as the signing of software-generated documents and the encryption of FAX messages sent over the Internet.
MIME Header Fields : MIME defines a number of header fields that are used to describe the content of a MIME entity. These header fields occur in at least two contexts: (1) as part of a regular RFC 822 message header, and (2) in a MIME body part header within a multipart construct. The formal definition of these header fields is as follows:

    entity-headers := [ content CRLF ]
                      [ encoding CRLF ]
                      [ id CRLF ]
                      [ description CRLF ]
                      *( MIME-extension-field CRLF )

    MIME-message-headers := entity-headers
                            fields
                            version CRLF
                            ; The ordering of the header
                            ; fields implied by this BNF
                            ; definition should be ignored.
    MIME-part-headers := entity-headers
                         [ fields ]
                         ; Any field not beginning with
                         ; "content-" can have no defined
                         ; meaning and may be ignored.
                         ; The ordering of the header
                         ; fields implied by this BNF
                         ; definition should be ignored.

SMTP : The objective of the Simple Mail Transfer Protocol (SMTP) is to transfer mail reliably and efficiently. SMTP is a mail service modeled on the FTP file transfer service: it transfers mail messages between systems and provides notification regarding incoming mail. SMTP is independent of the particular transmission subsystem and requires only a reliable ordered data stream channel; while the specification discusses transport over TCP, other transports are possible. An important feature of SMTP is its capability to transport mail across networks, usually referred to as "SMTP mail relaying". A network consists of the mutually-TCP-accessible hosts on the public Internet, the mutually-TCP-accessible hosts on a firewall-isolated TCP/IP intranet, or hosts in some other LAN or WAN environment utilizing a non-TCP transport-level protocol. Using SMTP, a process can transfer mail to another process on the same network, or to some other network via a relay or gateway process accessible to both networks. In this way, a mail message may pass through a number of intermediate relay or gateway hosts on its path from sender to ultimate recipient. The Mail eXchanger mechanisms of the Domain Name System are used to identify the appropriate next-hop destination for a message being transported.
Figure : Basic Structure of SMTP Mail eXchanger
When an SMTP client has a message to transmit, it establishes a two-way transmission channel to an SMTP server. The responsibility of an SMTP client is to transfer mail messages to one or more SMTP servers, or to report its failure to do so. The means by which a mail message is presented to an SMTP client, and how that client determines the domain names to which mail messages are to be transferred, is a local matter not addressed by the SMTP specification. In some cases, the domain names transferred to, or determined by, an SMTP client will identify the final destination of the mail message. In other cases, common with SMTP clients associated with implementations of the POP or IMAP protocols, or when the SMTP client is inside an isolated transport service environment, the domain name determined will identify an intermediate destination through which all mail messages are to be relayed. SMTP clients that transfer all traffic regardless of the target domain names, or that do not maintain queues for retrying message transmissions that initially cannot be completed, may otherwise conform to the specification but are not considered fully capable. Fully capable SMTP implementations, including the relays used by less capable ones, and their destinations, are expected to support all of the queuing, retrying, and alternate-address functions of the protocol. Once an SMTP client has determined a target domain name, it determines the identity of an SMTP server to which a copy of the message is to be transferred, establishes a two-way transmission channel to that server, and performs the transfer. 
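The command sequence a client issues over that two-way channel can be sketched as follows (server replies such as "250 OK" are omitted for brevity; the host and mailbox names are hypothetical):

```python
def smtp_dialogue(sender, recipients, body_lines, host="client.example.com"):
    """Build the sequence of commands an SMTP client sends once the
    two-way transmission channel is open."""
    cmds = ["HELO " + host,
            "MAIL FROM:<%s>" % sender]
    for r in recipients:
        cmds.append("RCPT TO:<%s>" % r)   # one RCPT per recipient mailbox
    cmds.append("DATA")
    cmds.extend(body_lines)               # message header and body
    cmds.append(".")                      # a lone dot ends the message body
    cmds.append("QUIT")
    return cmds

dialogue = smtp_dialogue("alice@example.com", ["bob@example.com"],
                         ["Subject: Hello", "", "Hi Bob."])
```

After each command the client waits for a numeric reply from the server before proceeding; a relay host would repeat this dialogue toward the next hop.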
An SMTP client determines the address of an appropriate host running an SMTP server by resolving a destination domain name to either an intermediate Mail eXchanger host or a final target host. An SMTP server may be either the ultimate destination or an intermediate "relay" (it may assume the role of an SMTP client after receiving the message) or "gateway" (it may transport the message further using some protocol other than SMTP). SMTP commands are generated by the SMTP client and sent to the SMTP server; SMTP replies are sent from the SMTP server to the SMTP client in response to the commands.
Mailbox :
As used in the SMTP specification, an "address" is a character string that identifies a user to whom mail will be sent or a location into which mail will be deposited; the term "mailbox" refers to that depository. The two terms are used interchangeably unless the distinction between the location in which mail is placed (the mailbox) and a reference to it (the address) is important. An address normally consists of user and domain specifications. The standard mailbox naming convention is "local-part@domain"; contemporary usage permits a much broader set of applications than simple user names. Consequently, and due to a long history of problems when intermediate hosts have attempted to optimize transport by modifying them, the local-part MUST be interpreted and assigned semantics only by the host specified in the domain part of the address.
POP : The Post Office Protocol (POP) is designed to allow a workstation with an email client to dynamically access a maildrop on a server host over a TCP/IP network. POP3 is version 3 (the latest version) of the Post Office Protocol, which has obsoleted the earlier versions POP1 and POP2. POP is not intended to provide extensive manipulation of mail on the server; normally, mail is downloaded and then deleted. POP is an application-layer Internet standard protocol used by local email clients to retrieve email from a remote server over a TCP/IP connection. POP and IMAP (Internet Message Access Protocol) are the two most prevalent Internet standard protocols for email retrieval, and virtually all modern email clients and servers support both. Like IMAP, POP3 is supported by most webmail services such as Hotmail, Gmail and Yahoo! Mail. POP supports simple download-and-delete access to remote mailboxes. 
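The download-and-delete model can be sketched with Python's standard poplib module. The host and credentials below are hypothetical and the network function is not invoked here; the small `parse_status` helper shows the +OK/-ERR reply convention POP3 servers use.

```python
import poplib

def parse_status(line: bytes):
    """POP3 replies begin with +OK (success) or -ERR (failure)."""
    if line.startswith(b"+OK"):
        return True, line[3:].strip().decode()
    if line.startswith(b"-ERR"):
        return False, line[4:].strip().decode()
    raise ValueError("not a POP3 status line")

def download_and_delete(host, user, password):
    """Typical POP behaviour: retrieve every message from the maildrop,
    then delete it from the server (assumed host and credentials)."""
    box = poplib.POP3(host)
    box.user(user)
    box.pass_(password)
    messages = []
    count, _size = box.stat()            # number of waiting messages
    for n in range(1, count + 1):
        _resp, lines, _octets = box.retr(n)
        messages.append(b"\n".join(lines))
        box.dele(n)                      # delete only after download
    box.quit()
    return messages

ok, text = parse_status(b"+OK 2 messages in maildrop")
```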
Although most POP clients have an option to leave mail on the server after download, email clients using POP generally connect, retrieve all messages, store them on the user's PC as new messages, delete them from the server, and then disconnect. Other protocols, notably IMAP, provide more complete and complex remote access to typical mailbox operations. Many email clients support both POP and IMAP to retrieve messages; however, fewer ISPs support IMAP.
Proxy Server : A proxy server acts as an intermediate server that relays requests between a client and a server. The proxy server keeps track of all client-server interactions, which allows you to monitor exactly what is going on without having to access the main server.


You can use the proxy server to monitor all client-server interaction, regardless of the communication protocol. For example, you can monitor the following protocols:
• HTTP for Web pages
• HTTPS for secure Web pages
• SMTP for email messages
• LDAP for user management
You can also use the proxy server as a simple port-forwarding proxy if you need to test a WSM instance using a different port number than your standard port.
A proxy server is a server (a computer system or an application) that acts as an intermediary for requests from clients seeking resources from other servers. A client connects to the proxy server, requesting some service, such as a file, connection, web page, or other resource, available from a different server. The proxy server evaluates the request according to its filtering rules; for example, it may filter traffic by IP address or protocol. If the request is validated by the filter, the proxy provides the resource by connecting to the relevant server and requesting the service on behalf of the client. A proxy server may optionally alter the client's request or the server's response, and sometimes it may serve the request without contacting the specified server; in this case, it 'caches' responses from the remote server and returns subsequent requests for the same content directly. The proxy concept was invented in the early days of distributed systems as a way to simplify and control their complexity. Today, most proxies are web proxies, allowing access to content on the World Wide Web.
A proxy server may be used:
• To keep machines behind it anonymous, mainly for security.
• To speed up access to resources (using caching); web proxies are commonly used to cache web pages from a web server.
• To apply access policy to network services or content, e.g. to block undesired sites.
• To access sites prohibited or filtered by your ISP or institution.
• To log / audit usage, i.e. to provide company employee Internet usage reporting.
• To bypass security / parental controls.
• To circumvent Internet filtering to access content otherwise blocked by governments.
• To scan transmitted content for malware before delivery.
• To scan outbound content, e.g. for data-leak protection.
• To allow a web site to make web requests to externally hosted resources (e.g. images, music files, etc.) when cross-domain restrictions prohibit the web site from linking directly to the outside domains.
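The first job of any such proxy is working out, from the client's request, which server to contact on the client's behalf. A minimal sketch of that step for HTTP (illustrative only; real proxies also consult the Host header and handle HTTPS via CONNECT):

```python
def target_of(request_line: str):
    """A forward proxy receives request lines whose URL names the target
    server; extract (host, port) so the proxy knows where to connect."""
    method, url, _version = request_line.split(" ", 2)
    if "://" in url:
        url = url.split("://", 1)[1]      # strip the http:// scheme
    hostport = url.split("/", 1)[0]       # host[:port] before the path
    if ":" in hostport:
        host, port = hostport.split(":", 1)
        return host, int(port)
    return hostport, 80                   # default HTTP port

host, port = target_of("GET http://www.example.com/index.html HTTP/1.1")
```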

A proxy server that passes requests and responses unmodified is usually called a gateway or sometimes tunneling proxy. A proxy server can be placed in the user's local computer or at various points between the user and the destination servers on the Internet. A reverse proxy is (usually) an Internet-facing proxy used as a front-end to control and protect access to a server on a private network, commonly also performing tasks such as load-balancing, authentication, decryption or caching.

Types of proxy :
1. Forward proxies
A forward proxy takes requests from an internal network and forwards them to the Internet. Forward proxies are proxies in which the client names the target server to connect to; they are able to retrieve from a wide range of sources (in most cases, anywhere on the Internet). The terms "forward proxy" and "forwarding proxy" are a general description of behavior (forwarding traffic) and thus ambiguous. Except for the reverse proxy, the types of proxies described here are more specialized sub-types of the general forward-proxy concept.
2. Reverse proxies

A reverse proxy takes requests from the Internet and forwards them to servers in an internal network. Those making requests connect to the proxy and may not be aware of the internal network.

A reverse proxy (or surrogate) is a proxy server that appears to clients to be an ordinary server. Requests are forwarded to one or more origin servers which handle the request. The response is returned as if it came directly from the proxy server. Reverse proxies are installed in the neighborhood of one or more web servers. All traffic coming from the Internet and with a destination of one of the web servers goes through the proxy server. The use of "reverse" originates in its counterpart "forward proxy" since the reverse proxy sits closer to the web server and serves only a restricted set of websites. There are several reasons for installing reverse proxy servers:
• Encryption / SSL acceleration: when secure web sites are created, the SSL encryption is often not done by the web server itself, but by a reverse proxy that is equipped with SSL acceleration hardware.
• Load balancing: the reverse proxy can distribute the load to several web servers, each web server serving its own application area. In such a case, the reverse proxy may need to rewrite the URLs in each web page (translation from externally known URLs to the internal locations).
• Serve/cache static content: a reverse proxy can offload the web servers by caching static content like pictures and other static graphical content.
• Compression: the proxy server can optimize and compress the content to speed up the load time.
• Spoon feeding: reduces resource usage caused by slow clients on the web servers by caching the content the web server sent and slowly "spoon feeding" it to the client. This especially benefits dynamically generated pages.
• Security: the proxy server is an additional layer of defense and can protect against some OS and web-server specific attacks. However, it does not provide any protection against attacks on the web application or service itself, which is generally considered the larger threat.
• Extranet publishing: a reverse proxy server facing the Internet can be used to communicate with a firewalled server internal to an organization, providing extranet access to some functions while keeping the servers behind the firewalls. If used in this way, security measures should be considered to protect the rest of the infrastructure in case this server is compromised, as its web application is exposed to attack from the Internet.
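The load-balancing role can be sketched in a few lines: the reverse proxy picks a backend for each incoming request, here by simple round-robin. The backend names are hypothetical, and a real implementation would also forward the request over the network and relay the response.

```python
import itertools

class ReverseProxy:
    """Round-robin load balancing across backend web servers
    (hypothetical backend names)."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def route(self, request_path: str) -> str:
        backend = next(self._cycle)       # next server in rotation
        return backend + request_path     # internal URL to forward to

proxy = ReverseProxy(["http://app1.internal", "http://app2.internal"])
first = proxy.route("/index.html")
second = proxy.route("/index.html")
third = proxy.route("/index.html")
```

Clients only ever see the proxy's address; the rotation among internal servers is invisible to them.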

Uses of proxy servers :
Filtering : A content-filtering web proxy server provides administrative control over the content that may be relayed through the proxy. It is commonly used in both commercial and non-commercial organizations (especially schools) to ensure that Internet usage conforms to an acceptable-use policy. In some cases users can
circumvent the proxy, since there are services designed to proxy information from a filtered website through a non-filtered site to allow it through the user's proxy. A content-filtering proxy will often support user authentication to control web access. It also usually produces logs, either to give detailed information about the URLs accessed by specific users or to monitor bandwidth usage statistics. It may also communicate with daemon-based and/or ICAP-based antivirus software to provide security against viruses and other malware by scanning incoming content in real time before it enters the network. Many workplaces, schools, and colleges restrict the web sites and online services that are made available in their buildings. This is done either with a specialized proxy, called a content filter (both commercial and free products are available), or by using a cache-extension protocol such as ICAP that allows plug-in extensions to an open caching architecture. Some common methods used for content filtering include: URL or DNS blacklists, URL filtering, MIME filtering, and content keyword filtering. Some products have been known to employ content-analysis techniques to look for traits commonly used by certain types of content providers. Requests made to the open Internet must first pass through an outbound proxy filter. The web-filtering company provides a database of URL patterns (regular expressions) with associated content attributes. This database is updated weekly by site-wide subscription, much like a virus-filter subscription. The administrator instructs the web filter to ban broad classes of content (such as sports, pornography, online shopping, gambling, or social networking). Requests that match a banned URL pattern are rejected immediately. Assuming the requested URL is acceptable, the content is then fetched by the proxy. At this point a dynamic filter may be applied on the return path. 
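The reject-on-banned-pattern step can be sketched with regular expressions. The patterns below are invented examples standing in for a filtering vendor's database; a real deployment would load thousands of patterns grouped by category.

```python
import re

# Banned URL patterns (regular expressions), as a filtering company's
# database might supply for categories the administrator has blocked.
BANNED_PATTERNS = [
    r"^https?://([^/]+\.)?gambling-example\.com/",  # a blocked domain
    r"casino",                                      # a blocked keyword
]
_banned = [re.compile(p, re.IGNORECASE) for p in BANNED_PATTERNS]

def allowed(url: str) -> bool:
    """Reject immediately any request matching a banned pattern;
    otherwise the proxy goes on to fetch the content."""
    return not any(p.search(url) for p in _banned)

ok = allowed("http://www.example.com/news")
blocked = allowed("http://www.gambling-example.com/slots")
```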
For example, JPEG files could be blocked based on fleshtone matches, or language filters could dynamically detect unwanted language. If the content is rejected then an HTTP fetch error is returned and nothing is cached. Extranet Publishing: a reverse proxy server facing the Internet can be used to communicate to a firewalled server internal to an organization, providing extranet access to some functions while keeping the servers behind the firewalls. If used in this way, security measures should be considered to protect the rest of your infrastructure in case this server is compromised, as its web application is exposed to attack from the Internet. Most web filtering companies use an internet-wide crawling robot that assesses the likelihood that a content is a certain type. The resultant database is
Prof. Jadhav Dattatraya Subhash (SICS-MCA, Korti) Page 125

Data Communication & Computer Networks

then corrected by manual labor based on complaints or known flaws in the contentmatching algorithms. Web filtering proxies are not able to peer inside secure sockets HTTP transactions, assuming the chain-of-trust of SSL/TLS has not been tampered with. As a result, users wanting to bypass web filtering will typically search the internet for an open and anonymous HTTPS transparent proxy. They will then program their browser to proxy all requests through the web filter to this anonymous proxy. Those requests will be encrypted with https. The web filter cannot distinguish these transactions from, say, a legitimate access to a financial website. Thus, content filters are only effective against unsophisticated users. As mentioned above, the SSL/TLS chain-of-trust does rely on trusted root certificate authorities; in a workplace setting where the client is managed by the organization, trust might be granted to a root certificate whose private key is known to the proxy. Concretely, a root certificate generated by the proxy is installed into the browser CA list by IT staff. In such scenarios, proxy analysis of the contents of a SSL/TLS transaction becomes possible. The proxy is effectively operating a man-inthe-middle attack, allowed by the client's trust of a root certificate the proxy owns. A special case of web proxies is "CGI proxies". These are web sites that allow a user to access a site through them. They generally use PHP or CGI to implement the proxy functionality. These types of proxies are frequently used to gain access to web sites blocked by corporate or school proxies. Since they also hide the user's own IP address from the web sites they access through the proxy, they are sometimes also used to gain a degree of anonymity, called "Proxy Avoidance".
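The outbound URL-filtering step described above (match the requested URL against a database of banned patterns before fetching) can be sketched as follows. The category names and patterns in BANNED_PATTERNS are purely hypothetical stand-ins for a vendor-supplied database:

```python
import re

# Hypothetical rule set mapping a content category to URL regular
# expressions, standing in for the vendor-supplied pattern database.
BANNED_PATTERNS = {
    "gambling": [r".*casino.*", r".*poker.*"],
    "social":   [r".*facebook\.com.*"],
}

def filter_request(url, banned_categories):
    """Return (allowed, matched_category) for an outbound request."""
    for category in banned_categories:
        for pattern in BANNED_PATTERNS.get(category, []):
            if re.match(pattern, url):
                return False, category   # request rejected immediately
    return True, None                    # content fetched by the proxy as normal

print(filter_request("http://www.bigcasino.example/slots", {"gambling"}))
# (False, 'gambling')
```

A real content filter would also apply the dynamic return-path checks described above after the fetch; this sketch covers only the request-side pattern match.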
Caching

A caching proxy server accelerates service requests by retrieving content saved from a previous request made by the same client or even other clients. Caching proxies keep local copies of frequently requested resources, allowing large organizations to significantly reduce their upstream bandwidth usage and costs, while significantly increasing performance. Most ISPs and large businesses have a caching proxy. Caching proxies were the first kind of proxy server. Some poorly-implemented caching proxies have had downsides (e.g., an inability to use user authentication).

Another important use of the proxy server is to reduce the hardware cost. An organization may have many systems on the same network or under control of a single server, prohibiting the possibility of an individual connection to the Internet for each system. In such a case, the individual systems can be connected to one proxy server, and the proxy server connected to the main server. An example of a software caching proxy is Squid.
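The cache-hit/cache-miss behaviour described above can be sketched in a few lines. This is a minimal in-memory model, not a real proxy: there is no HTTP parsing, and the fetch_upstream callable and the fixed TTL are illustrative assumptions (real caches honour per-response cache headers):

```python
import time

class CachingProxy:
    """Minimal sketch of a caching proxy: serve a fresh local copy when one
    exists, otherwise fetch from upstream and store the result."""
    def __init__(self, fetch_upstream, ttl=300):
        self.fetch_upstream = fetch_upstream  # callable(url) -> content
        self.ttl = ttl                        # seconds a cached copy stays fresh
        self.cache = {}                       # url -> (fetch_time, content)

    def get(self, url):
        entry = self.cache.get(url)
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1]                      # cache hit: no upstream traffic
        content = self.fetch_upstream(url)       # cache miss: go upstream
        self.cache[url] = (time.time(), content)
        return content

# Demonstration with a stub upstream that records each real fetch.
upstream_calls = []
def fetch(url):
    upstream_calls.append(url)
    return "contents of " + url

proxy = CachingProxy(fetch)
proxy.get("http://example.com/")   # miss: fetched upstream
proxy.get("http://example.com/")   # hit: served from the local copy
```

The second request never reaches the upstream server, which is exactly the bandwidth saving the paragraph above describes.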

DNS proxy

A DNS proxy server takes DNS queries from a (usually local) network and forwards them to an Internet Domain Name Server. It may also cache DNS records.

Implementation of proxies :

Web proxy
A web proxy is a proxy server that passes along HTTP protocol requests like any other proxy server. However, the web proxy accepts target URLs within a user's browser window, processes the request, and then displays the contents of the requested URL immediately back within the user's browser. This is generally quite different than a corporate intranet proxy, which some people may refer to as a web proxy.

Suffix proxies
A suffix proxy server allows a user to access web content by appending the name of the proxy server to the URL of the requested content (e.g. "en.wikipedia.org.SuffixProxy.com"). Suffix proxy servers are easier to use than regular proxy servers, but they do not offer anonymity, and their primary use is bypassing web filters. However, this is rarely done now due to more advanced web filters.

Transparent proxies
An intercepting proxy, also known as a forced proxy or transparent proxy, is a proxy which intercepts normal communication without clients needing any special configuration to use the proxy. Clients do not even need to be aware of the existence of the proxy. Intercepting proxies are normally located between the client and the Internet, with the proxy performing some of the functions of a gateway or router.

Standard definitions:
"A 'transparent proxy' is a proxy that does not modify the request or response beyond what is required for proxy authentication and identification."
"A 'non-transparent proxy' is a proxy that modifies the request or response in order to provide some added service to the user agent, such as group annotation services, media type transformation, protocol reduction, or anonymity filtering."
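The DNS proxy described at the start of this section (forward queries upstream, cache the answers) can be sketched as below. A real DNS proxy would speak the DNS wire format over UDP port 53 and honour record TTLs; both are omitted here, and the upstream resolver is a stand-in callable:

```python
class DnsProxy:
    """Sketch of a DNS proxy: answer from a local cache when possible,
    otherwise forward the query to the upstream resolver and cache the record."""
    def __init__(self, upstream_resolve):
        self.upstream_resolve = upstream_resolve  # callable(name) -> address
        self.cache = {}                           # name -> cached address
        self.forwarded = 0                        # queries sent upstream

    def resolve(self, name):
        if name not in self.cache:
            self.forwarded += 1
            self.cache[name] = self.upstream_resolve(name)
        return self.cache[name]

# Demonstration with a stub upstream resolver returning a fixed address.
proxy = DnsProxy(lambda name: "93.184.216.34")
proxy.resolve("example.com")
proxy.resolve("example.com")   # answered from the cache, not forwarded
```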

Chapter 9. SNMP
SNMP Introduction :
SNMP is the most popular protocol used to manage networked devices. It was designed in the late 1980s to facilitate the exchange of management information between networked devices, operating at the application layer of the ISO/OSI model. Since its creation in 1988 as a short-term solution to manage elements in the growing Internet and other attached networks, the Simple Network Management Protocol has achieved widespread acceptance and has become the de facto standard for internetwork management. SNMP was first defined by the IETF (Internet Engineering Task Force) in 1989, and has been widely extended since.

Implicit in the SNMP architectural model is a collection of network management stations and network elements. Network management stations execute management applications which monitor and control network elements. Network elements are devices such as hosts, gateways, terminal servers, and the like, which have management agents responsible for performing the network management functions requested by the network management stations. The Simple Network Management Protocol (SNMP) is used to communicate management information between the network management stations and the agents in the network elements. SNMP is applicable to TCP/IP networks, as well as other types of networks.

SNMP was derived from its predecessor SGMP (Simple Gateway Management Protocol) and was intended to be replaced by a solution based on the CMIS/CMIP (Common Management Information Service/Protocol) architecture. This long-term solution, however, never received the widespread acceptance of SNMP.

The following definitions are used in SNMP:

MIB
The conceptual repository for management information is called the Management Information Base (MIB). It does not hold any data, merely a definition of what data can be accessed. A definition of an MIB is a description of a collection of managed objects.

SMI
The MIB is specified in an adapted subset of the Abstract Syntax Notation One (ASN.1) language. This adapted subset is called the Structure of Management Information (SMI).

ASN.1
ASN.1 is used in two different ways in SNMP. The SMI is based on ASN.1, and the messages in the protocol are defined by using ASN.1.

Managed object
A resource to be managed is represented by a managed object, which resides in the MIB. In an SNMP MIB, the managed objects are either:
• scalar variables, which have only one instance per context. They have single values, not multiple values like vectors or structures.
• tables, which can grow dynamically.
• a table element, which is a special type of scalar variable.

Operations
SNMP relies on three basic operations: get (object), set (object, value) and get-next (object).

Instrumentation function
An instrumentation function is associated with each managed object. This is the function which actually implements the operations and will be called by the agent when it receives a request from the management station.

Manager
A manager generates commands and receives notifications from agents. There usually are only a few managers in a system.

Agent
An agent responds to commands from the manager and sends notifications to the manager. There are potentially many agents in a system.

SNMP architecture :
SNMP is based on the manager/agent model consisting of a manager, an agent, a database of management information, managed objects and the network protocol. The manager provides the interface between the human network manager and the management system. The agent provides the interface between the manager and the physical device(s) being managed, such as bridges, hubs, routers or network servers. These managed objects might be hardware, configuration parameters, performance statistics, and so on. These objects are arranged in what is known as a virtual information database, called a Management Information Base (MIB). SNMP allows managers and agents to communicate for the purpose of accessing these objects.

NMS : Network Management Station

Figure : The model of network management architecture

Network Configuration
One of the most common uses of SNMP is for remote management of network devices. SNMP is popular because it is flexible. Vendors can easily add network-management functions to their existing products. An SNMP-managed network typically consists of three components: managed devices, agents, and one or more network management systems.

A managed device can be any piece of equipment that sits on your data network and is SNMP compliant. Routers, switches, hubs, workstations, and printers are all examples of managed devices. An agent is typically software that resides on a managed device. The agent collects data from the managed device and translates that information into a format that can be passed over the network using SNMP. A network-management system monitors and controls managed devices.

The network management system issues requests, and devices return responses. Network-management systems and agents communicate using messages. SNMPv1 supports five different types of messages: GetRequest, SetRequest, GetNextRequest, GetResponse, and Trap. A single SNMP message is referred to as a Protocol Data Unit (PDU). These messages are constructed using Abstract Syntax Notation One (ASN.1) and translated into binary format using Basic Encoding Rules (BER). Each message type has a different purpose:

• GetRequest is typically used by the network-management system to retrieve one or more values from an agent.
• SetRequest is used by the network-management system to set the values within a device.
• GetNextRequest is used by the network-management system to retrieve the next value in a table or a list within an agent.
• GetResponse informs the management station of the results of a GetRequest or SetRequest by returning an error indication and a list of variable/value bindings.
• Trap messages are sent from agents to managers. Trap messages are unsolicited (the manager does not issue a request message) and may indicate a warning or error condition or otherwise notify the manager about the agent's state. In essence, Trap messages provide an immediate notification for an event that might only be discovered during infrequent polling.

SNMP uses five basic messages (Get, GetNext, GetResponse, Set, and Trap) to communicate between the manager and the agent. The Get and GetNext messages allow the manager to request information for a specific variable. The agent, upon receiving a Get or GetNext message, will issue a GetResponse message to the manager with either the information requested or an error indication as to why the request cannot be processed. A Set message allows the manager to request that a change be made to the value of a specific variable, for example on an alarm remote that will operate a relay. The agent will then respond with a GetResponse message indicating the change has been made, or an error indication as to why the change cannot be made. The Trap message allows the agent to spontaneously inform the manager of an 'important' event.

Goals of the Architecture :
The SNMP explicitly minimizes the number and complexity of management functions realized by the management agent itself. This goal is attractive in at least four respects:
(1) The development cost for management agent software necessary to support the protocol is accordingly reduced.
(2) The degree of management function that is remotely supported is accordingly increased, thereby admitting fullest use of internet resources in the management task.
(3) The degree of management function that is remotely supported is accordingly increased, thereby imposing the fewest possible restrictions on the form and sophistication of management tools.
(4) Simplified sets of management functions are easily understood and used by developers of network management tools.
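The five SNMPv1 message types described above can be sketched as a toy agent dispatcher. The in-memory MIB below is hypothetical, and a real agent would invoke instrumentation functions on the device and encode PDUs with ASN.1/BER rather than Python tuples:

```python
# Toy SNMPv1 agent dispatcher over a hypothetical in-memory MIB.
MIB = {
    "1.3.6.1.2.1.1.1.0": "toy router v1.0",  # sysDescr.0
    "1.3.6.1.2.1.1.3.0": 123456,             # sysUpTime.0
}

def oid_key(oid):
    """Compare OIDs component-wise as integers (lexicographic ordering)."""
    return tuple(int(part) for part in oid.split("."))

def handle_request(pdu_type, oid, value=None):
    """Answer a request PDU with a GetResponse tuple (pdu, oid, value-or-error)."""
    if pdu_type == "GetRequest":
        return ("GetResponse", oid, MIB.get(oid, "noSuchName"))
    if pdu_type == "GetNextRequest":
        later = sorted((k for k in MIB if oid_key(k) > oid_key(oid)), key=oid_key)
        if later:
            return ("GetResponse", later[0], MIB[later[0]])
        return ("GetResponse", oid, "noSuchName")   # walked past the end
    if pdu_type == "SetRequest":
        MIB[oid] = value
        return ("GetResponse", oid, value)
    return ("GetResponse", oid, "genErr")

def send_trap(enterprise, bindings):
    """Agent-initiated, unsolicited notification with OID/value bindings."""
    return ("Trap", enterprise, bindings)
```

Note that GetNextRequest returns the lexicographically next variable in the MIB, which is what lets a manager walk an entire table without knowing its contents in advance.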

A second goal of the protocol is that the functional paradigm for monitoring and control be sufficiently extensible to accommodate additional, possibly unanticipated aspects of network operation and management. A third goal is that the architecture be, as much as possible, independent of the architecture and mechanisms of particular hosts or particular gateways.

The Management Information Base :
The manager and agent use a Management Information Base and a relatively small set of commands to exchange information. The MIB is organized in a tree structure with individual variables, such as point status or description, being represented as leaves on the branches. A long numeric tag or object identifier (OID) is used to distinguish each variable uniquely in the MIB and in SNMP messages.

The MIB lists the unique object identifier of each managed element in an SNMP network. The SNMP manager can't monitor devices unless it has compiled their MIB files. The MIB is also a guide to the capabilities of SNMP devices. For example, if the MIB lists OIDs for Traps but not for GetResponse messages, it will report alarms, but will not respond to alarm polls.

Each SNMP element manages specific objects, with each object having specific characteristics. Each object/characteristic has a unique object identifier consisting of numbers separated by decimal points (i.e., 1.3.6.1.4.1.2682.1). These object identifiers naturally form a tree as shown below. The MIB associates each OID with a readable label and various other parameters related to the object. The MIB then serves as a data dictionary or code book that is used to assemble and interpret SNMP messages.

Figure : The MIB tree structure

When an SNMP manager wants to know the value of an object/characteristic, such as the state of an alarm point, the system name, or the element uptime, it will assemble a Get packet that includes the OID for each object/characteristic of interest. The element receives the request and looks up each OID in its code book (MIB). If the OID is found (the object is managed by the element), a response packet is assembled and sent with the current value of the object/characteristic included. If the OID is not found, a special error response is sent that identifies the unmanaged object.

When an element sends a Trap packet, it can include OID and value information (bindings) to clarify the event. Remote units send a comprehensive set of bindings with each Trap to maintain traditional telemetry event visibility. Well-designed SNMP managers can use the bindings to correlate and manage the events. SNMP managers will also generally display the readable labels to facilitate user understanding and decision-making.

Figure : Protocol layers in SNMP communication

SNMP runs on a multitude of devices and operating systems, including, but not limited to:
• core network devices (routers, switches, hubs, bridges, and wireless network access points)
• operating systems
• consumer broadband network devices (cable modems and DSL modems)
• consumer electronic devices (cameras and image scanners)
• networked office equipment (printers, copiers, and FAX machines)
• network and systems management/diagnostic frameworks (network sniffers and network analyzers)
• Uninterruptible Power Supplies (UPS)
• networked medical equipment (imaging units and oscilloscopes)
• manufacturing and processing equipment

The SNMP protocol enables network and system administrators to remotely monitor and configure devices on their network, such as routers, switches, hubs, and servers. For example, if a system administrator wants to know how much traffic is flowing through a network device, she might poll the device using SNMP. Once the data is pulled from the router or switch, it can be interpreted in a number of different ways. Network traffic throughput is not the only thing you can monitor using SNMP.

Object Identifiers :
The names for all object types in the MIB are defined explicitly either in the Internet-standard MIB or in other documents which conform to the naming conventions of the SMI. The SMI requires that conformant management protocols define mechanisms for identifying individual instances of those object types for a particular network element.

Each instance of any object type defined in the MIB is identified in SNMP operations by a unique name called its "variable name." In general, the name of an SNMP variable is an OBJECT IDENTIFIER of the form x.y, where x is the name of a non-aggregate object type defined in the MIB and y is an OBJECT IDENTIFIER fragment that, in a way specific to the named object type, identifies the desired instance. This naming strategy admits the fullest exploitation of the semantics of the GetNextRequest-PDU, because it assigns names for related variables so as to be contiguous in the lexicographical ordering of all variable names known in the MIB.

The type-specific naming of object instances is defined below for a number of classes of object types. Instances of an object type to which none of the following naming conventions are applicable are named by OBJECT IDENTIFIERs of the form x.0, where x is the name of said object type in the MIB definition.

For example, suppose one wanted to identify an instance of the variable sysDescr. The object class for sysDescr is:

iso(1) . org(3) . dod(6) . internet(1) . mgmt(2) . mib(1) . system(1) . sysDescr(1)

Hence, the object type, x, would be 1.3.6.1.2.1.1.1 to which is appended an instance sub-identifier of 0. That is, 1.3.6.1.2.1.1.1.0 identifies the one and only instance of sysDescr.
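The construction of the instance name x.y from the tree path above can be sketched in a couple of lines (the path list simply transcribes the sysDescr branch shown above):

```python
# The (label, sub-identifier) path for sysDescr from the MIB tree above.
SYS_DESCR_PATH = [("iso", 1), ("org", 3), ("dod", 6), ("internet", 1),
                  ("mgmt", 2), ("mib", 1), ("system", 1), ("sysDescr", 1)]

def instance_oid(path, instance=0):
    """Join the sub-identifiers along the path (x) and append the instance (y)."""
    return ".".join(str(n) for _label, n in path) + "." + str(instance)

print(instance_oid(SYS_DESCR_PATH))  # 1.3.6.1.2.1.1.1.0
```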

Chapter 10. Cryptography & Network Security
What is Cryptology?

• The art and science of keeping messages secure is cryptography, and it is practiced by cryptographers.
• Cryptanalysts are practitioners of cryptanalysis, the art and science of breaking ciphertext; that is, seeing through the disguise.
• The branch of mathematics encompassing both cryptography and cryptanalysis is cryptology, and its practitioners are cryptologists.
• Modern cryptologists are generally trained in theoretical mathematics; they have to be.

Cryptography – Basic Terminology

The concept of securing messages through cryptography has a long history. Indeed, Julius Caesar is credited with creating one of the earliest cryptographic systems to send military messages to his generals.

Messages and Encryption
• A message is plaintext (sometimes called cleartext). The process of disguising a message in such a way as to hide its substance is called encryption.
• An encrypted message is ciphertext. The process of turning ciphertext back into plaintext is called decryption.

Algorithms and Keys
• A cryptographic algorithm, also called a cipher, is the mathematical function used for encryption and decryption.
• The security of a modern cryptographic algorithm is based on a secret key. This key might be any one of a large number of values. The range of possible key values is called the keyspace.
• Both encryption and decryption operations are dependent on the key K, and this is denoted by the K subscript in the functions EK(P) = C and DK(C) = P.
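The notation EK(P) = C and DK(C) = P can be made concrete with a toy symmetric cipher. The repeating-key XOR below is chosen only because it is short enough to read at a glance; it is emphatically not a secure cipher:

```python
def E(K, P):
    """Toy symmetric cipher, E_K(P) = C, using repeating-key XOR.
    Illustrates the E/D notation only; do not use for real secrecy."""
    return bytes(b ^ K[i % len(K)] for i, b in enumerate(P))

def D(K, C):
    """D_K(C) = P. XOR is its own inverse, so decryption reuses E."""
    return E(K, C)

K = b"secretkey"
C = E(K, b"attack at dawn")          # ciphertext looks like noise
assert D(K, C) == b"attack at dawn"  # D_K(E_K(P)) = P
```

The defining property, that decryption with the same key K exactly undoes encryption, is the assertion on the last line.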

Encryption
Encryption is the process of scrambling the contents of a file or message to make it unintelligible to anyone not in possession of the "key" required to unscramble the file or message. There are two types of encryption: symmetric (private/secret) key and asymmetric (public) key encryption.

Throughout history, however, there has been one central problem limiting widespread use of cryptography. That problem is key management. In cryptographic systems, the term key refers to a numerical value used by an algorithm to alter information, making that information secure and visible only to individuals who have the corresponding key to recover the information. Consequently, the term key management refers to the secure administration of keys to provide them to users where and when they are required.

Historically, encryption systems used what is known as symmetric cryptography. Symmetric cryptography uses the same key for both encryption and decryption. Using symmetric cryptography, it is safe to send encrypted messages without fear of interception (because an interceptor is unlikely to be able to decipher the message); however, there always remains the difficult problem of how to securely transfer the key to the recipients of a message so that they can decrypt the message.

A major advance in cryptography occurred with the invention of public-key cryptography. The primary feature of public-key cryptography is that it removes the need to use the same key for encryption and decryption. With public-key cryptography, keys come in pairs of matched "public" and "private" keys. The public portion of the key pair can be distributed in a public manner without compromising the private portion, which must be kept secret by its owner. An operation (for example, encryption) done with the public key can only be undone with the corresponding private key.

Prior to the invention of public-key cryptography, it was essentially impossible to provide key management for large-scale networks. With symmetric cryptography, as the number of users increases on a network, the number of keys required to provide secure communications among those users increases rapidly. For example, a network of 100 users would require almost 5000 keys if it used only symmetric cryptography. Doubling such a network to 200 users increases the number of keys to almost 20,000. Thus, when only using symmetric cryptography, key management quickly becomes unwieldy even for relatively small-scale networks.
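The key counts quoted above follow directly from counting user pairs: every pair of users needs its own shared secret, so n users need n(n-1)/2 symmetric keys, while public-key cryptography needs only one key pair per user:

```python
def symmetric_keys(n):
    """One shared secret per pair of users: n*(n-1)/2 keys."""
    return n * (n - 1) // 2

def asymmetric_keys(n):
    """One public/private key pair per user: 2*n keys in total."""
    return 2 * n

print(symmetric_keys(100))   # 4950  -- the "almost 5000" above
print(symmetric_keys(200))   # 19900 -- the "almost 20,000" above
print(asymmetric_keys(200))  # 400
```

The contrast between 19,900 and 400 keys for the same 200-user network is the scalability argument the paragraph makes.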
The invention of public-key cryptography was of central importance to the field of cryptography and provided answers to many key management problems for large-scale networks. For all its benefits, however, public-key cryptography did not provide a comprehensive solution to the key management problem. Indeed, the possibilities brought forth by public-key cryptography heightened the need for sophisticated key management systems to answer questions such as the following:

"How can I easily encrypt a file once for a number of different people using public-key cryptography?"
"If I lose my keys, how can I decrypt all of my files that were encrypted with those keys?"
"How do I know that I really have Alice's public key and not the public key of someone pretending to be Alice?"
"How can I know that a public key is still trustworthy?"
Symmetric Key Encryption
When most people think of encryption, it is symmetric key cryptosystems that they think of. Symmetric key, also referred to as private key or secret key, is based on a single key and algorithm being shared between the parties who are exchanging encrypted information. The same key both encrypts and decrypts messages. The strength of the scheme is largely dependent on the size of the key and on keeping it secret. Generally, the larger the key, the more secure the scheme. In addition, symmetric key encryption is relatively fast.

The main weakness of the system is that the key or algorithm has to be shared. You can't share the key information over an unsecured network without compromising the key. As a result, private key cryptosystems are not well suited for spontaneous communication over open and unsecured networks. In addition, symmetric key provides no process for authentication or nonrepudiation.

Figure : Symmetric Key Encryption

Remember, nonrepudiation is the ability to prevent individuals or entities from denying (repudiating) that a message was sent or received or that a file was accessed or altered, when in fact it was. This ability is particularly important when conducting e-commerce.
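The claim above that larger keys make a scheme more secure can be made concrete by counting the keyspace: each additional key bit doubles the number of values a brute-force attacker must try. The key sizes below are illustrative (56 bits matches the DES key discussed later in this chapter):

```python
def keyspace(bits):
    """Number of possible keys for a key of the given length in bits."""
    return 2 ** bits

print(keyspace(56))                   # 72057594037927936 possible 56-bit keys
print(keyspace(128) // keyspace(56))  # a 128-bit keyspace is 2**72 times larger
```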


Data Encryption Standard (DES)
DES is one of the oldest and most widely used algorithms. DES was developed by IBM with the encouragement of the National Security Agency (NSA). It was originally deployed in the mid 1970s. DES consists of an algorithm and a key. The key is a sequence of eight bytes, each containing eight bits, for a 64-bit key. Since each byte contains one parity bit, the key is actually 56 bits in length.

According to author James Bamford in his book The Puzzle Palace, IBM originally intended to release the DES algorithm with a 128-bit key, but the NSA convinced IBM to release it with the 56-bit key instead. Supposedly this was done to make it easier for the NSA to decrypt covertly intercepted messages. DES is widely used in automated teller machine (ATM) and point-of-sale (POS) networks, so if you use an ATM or debit card you are using DES. DES has been enhanced with the development of triple DES. However, DES has been broken. It is gradually being phased out of use.

Asymmetric Key Encryption
For centuries, all cryptography was based on symmetric key cryptosystems. Then in 1976, two computer scientists, Whitfield Diffie and Martin Hellman of Stanford University, introduced the concept of asymmetric cryptography. Asymmetric cryptography is also known as public key cryptography.

Public key cryptography uses two keys as opposed to one key for a symmetric system. With public key cryptography there is a public key and a private key. The keys' names describe their function. One key is kept private, and the other key is made public. Knowing the public key does not reveal the private key. A message encrypted by the private key can only be decrypted by the corresponding public key. Conversely, a message encrypted by the public key can only be decrypted by the private key.

Figure : Asymmetric Key Encryption
With the aid of public key cryptography, it is possible to establish secure communications with any individual or entity when using a compatible software or hardware device. For example, if Alice wishes to communicate in a secure manner with Bob, a stranger with whom she has never communicated before, Alice can give Bob her public key. Bob can encrypt his outgoing transmissions to Alice with Alice's public key. Alice can then decrypt the transmissions using her private key when she receives them. Only Alice's private key can decrypt a message encrypted with her public key. If Bob transmits to Alice his public key, then Alice can transmit secure encrypted data back to Bob that only Bob can decrypt.

It does not matter that they exchanged public keys on an unsecured network. Knowing an individual's public key tells you nothing about his or her private key. Only an individual's private key can decrypt a message encrypted with his or her public key. The security breaks down if either of the parties' private keys is compromised.

While symmetric key cryptosystems are limited to securing the privacy of information, asymmetric or public key cryptography is much more versatile. Public key cryptosystems can provide a means of authentication and can support digital certificates. With digital certificates, public key cryptosystems can provide enforcement of nonrepudiation. Unlike symmetric key cryptosystems, public key allows for secure spontaneous communication over an open network. In addition, it is more scalable for very large systems (tens of millions of users) than symmetric key cryptosystems. With symmetric key cryptosystems, the key administration for large networks is very complex.
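The Alice-and-Bob exchange above can be illustrated with textbook RSA using deliberately tiny primes. Real deployments use primes hundreds of digits long plus padding schemes, so this is a sketch of the key relationship only, never something to use for actual secrecy:

```python
# Textbook-RSA sketch: Bob encrypts with Alice's PUBLIC key; only Alice's
# PRIVATE key can decrypt. Tiny primes are used purely for readability.
p, q = 61, 53
n = p * q                    # 3233: the modulus, part of both keys
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent; Alice publishes (e, n)
d = pow(e, -1, phi)          # 2753: private exponent, kept secret by Alice

m = 65                       # Bob's message, encoded as a number < n
c = pow(m, e, n)             # Bob encrypts with the public key
assert pow(c, d, n) == m     # only the private exponent recovers the message
```

Knowing (e, n) tells an eavesdropper nothing practical about d, because recovering d requires factoring n; with tiny primes that is trivial, but with real key sizes it is computationally infeasible.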


Threats
A threat is anything that can disrupt the operation, functioning, integrity, or availability of a network or system. This can take any form and can be malevolent, accidental, or simply an act of nature. There are different categories of threats. There are natural threats, occurrences such as floods, earthquakes, and storms. There are also unintentional threats that are the result of accidents and stupidity. Finally, there are intentional threats that are the result of malicious intent. Each type of threat can be deadly to a network.

Firewalls
A network security domain is a contiguous region of a network that operates under a single, uniform security policy. Whenever domains intersect, there is a potential need for security to control traffic allowed into the network. Firewall technology can be used to filter this traffic. The most common boundary where firewalls are applied is between an organization's internal network and the internet. This report will provide readers with a resource for understanding firewall design principles used in network security.

Network firewalls operate at different layers of the OSI and TCP/IP network models. The lowest layer at which a firewall can operate is the third level, which is the network layer for the OSI model and the Internet Protocol layer for TCP/IP. At this layer a firewall can determine if a packet is from a trusted source but cannot grant or deny access based on what it contains. Firewalls that operate at the highest layer, which is the application layer, know a large amount of information including the source and the packet contents. Therefore, they can be much more selective in granting access. This may give the impression that firewalls functioning at a higher layer must be better, which is not necessarily the case. The lower the layer at which the packet is intercepted, the more secure the system. If the intruder cannot get past the third layer, it is impossible to gain control of the operating system.
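The network-layer filtering described above (decide on a packet using only header fields such as source address and port, not its contents) can be sketched as a first-match rule table. The rule table and addresses are hypothetical, and real firewalls match on far richer header fields:

```python
# Hypothetical first-match rule table for a packet-filtering firewall.
# Unmatched packets fall through to a default deny.
RULES = [
    # (source prefix, destination port or None for any, action)
    ("10.0.0.", 23,   "drop"),     # block Telnet from the internal LAN
    ("10.0.0.", None, "forward"),  # allow all other internal traffic out
]

def filter_packet(src_ip, dst_port):
    for prefix, port, action in RULES:
        if src_ip.startswith(prefix) and (port is None or port == dst_port):
            return action          # first matching rule wins
    return "drop"                  # default deny

print(filter_packet("10.0.0.5", 23))     # drop
print(filter_packet("10.0.0.5", 80))     # forward
print(filter_packet("192.168.1.9", 80))  # drop (no rule matches)
```

Note that the decision uses only header fields; as the text says, a filter at this layer cannot judge what the packet contains.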

Figure : Firewall Interaction with the OSI and TCP/IP Network Models

Firewalls fall into four broad categories: packet filters, circuit level gateways, application level gateways, and stateful multilayer inspection firewalls. Packet filtering firewalls operate at the network level of the OSI model or the IP layer of TCP/IP. In a packet filtering firewall, each packet is compared to a set of rules before it is forwarded; the firewall can drop the packet, forward it, or send a message to the source. Circuit level gateways operate at the session layer of the OSI model, or the TCP
layer of TCP/IP. Circuit level gateways examine each connection setup to ensure that it follows a legitimate TCP handshake. Application level gateways, or proxies, operate at the application layer; packets received or leaving cannot access services for which there is no proxy. Stateful multilayer inspection firewalls combine aspects of the other three types: they filter packets at the network layer, determine whether packets are valid at the session layer, and assess the contents of packets at the application layer.

Firewall Architectures

Figure : Dual Firewall with DMZ Network Architecture

After deciding the security requirements for the network, the first step in designing a firewall is choosing a basic architecture. There are two classes of firewall architecture: single layer and multiple layer. In a single layer architecture, one host is allocated all firewall functions. This method is usually chosen when either cost is a key factor or there are only two networks to connect. The advantage of this architecture is that any change to the firewall need only be made at a single host. The biggest disadvantage of the single layer approach is that it provides a single point of entry; if this entry point is breached, the entire network becomes vulnerable to an intruder. In a multiple layer architecture the firewall functions are distributed among two or more hosts, normally connected in series. This method is more difficult to design
and manage and is also more costly, but it can provide significantly greater security by diversifying the firewall defense. A common design approach for this type of architecture uses two firewall hosts with a demilitarized zone (DMZ) network between them, separating the Internet and the internal network. With this setup, traffic between the internal network and the Internet must pass through two firewalls and the DMZ.

Firewall Types

After the security requirements are established and a basic architecture is selected, firewall functions can be chosen to meet these needs. The following is a detailed discussion of the four firewall categories:

1) Packet Filtering Firewalls

Figure : Packet Filtering

The first generation of firewall architectures appeared around 1985 and came out of Cisco's IOS software division. These are called packet filter firewalls. Packet filtering is usually performed by a router as part of a firewall: where a normal router decides where to direct the data, a packet filtering router decides whether it should forward the data at all. Packet filtering is at the core of most modern firewalls, but few firewalls sold today do only stateless packet filtering. Unlike more advanced filters, packet filters are not concerned with the content of packets. Their access control functionality is governed by a set of directives referred to as a ruleset. Packet filtering capabilities are built into most operating systems and devices capable of routing; the most common example of a pure packet filtering device is a network router that employs access control lists. In their most basic form, firewalls with packet filters operate at the network layer. This provides network access control based on several pieces of information contained in a packet, including:


• The packet’s source IP address—the address of the host from which the packet originated (such as 192.168.1.1)
• The packet’s destination address—the address of the host the packet is trying to reach (e.g., 192.168.2.1)
• The network or transport protocol being used to communicate between source and destination hosts, such as TCP, UDP, or ICMP
• Possibly some characteristics of the transport layer communications sessions, such as session source and destination ports (e.g., TCP 80 for the destination port belonging to a web server, TCP 1320 for the source port belonging to a personal computer accessing the server)
• The interface being traversed by the packet, and its direction (inbound or outbound)
Filtering inbound traffic is known as ingress filtering. Outgoing traffic can also be filtered, a process referred to as egress filtering. Here, organizations can implement restrictions on their internal traffic, such as blocking the use of external file transfer protocol (FTP) servers or preventing denial of service (DoS) attacks from being launched from within the organization against outside entities. Organizations should only permit outbound traffic that uses the source IP addresses in use by the organization—a process that helps block traffic with spoofed addresses from leaking onto other networks. Spoofed addresses can be caused by malicious events such as malware infections or compromised hosts being used to launch attacks, or by inadvertent misconfigurations.

Packet filtering rules can be set on the following: the physical network interface the packet arrives on; the source or destination IP address; the type of transport layer protocol (TCP, UDP, ICMP); or the transport layer source or destination ports.

Packet filtering firewalls are low cost, have only a small effect on network performance, and do not require client computers to be configured in any particular way. However, packet filtering firewalls are not considered to be very secure on their own because they do not understand application layer protocols and therefore cannot make content-based decisions on the packets; this makes them less secure than application level and circuit level firewalls. Another disadvantage is that they are stateless and do not retain the state of a connection. They also have little or no logging capability, which makes it hard to detect whether the network is under attack. Testing the grant and deny rules is also difficult, which may leave the network vulnerable or incorrectly configured.
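The ruleset behaviour described above (first matching rule wins, deny by default) can be sketched in a few lines of Python. The classes, field names, and rules here are illustrative, not a real firewall API:

```python
# Toy stateless packet filter: each packet is compared to an ordered
# ruleset; the first matching rule decides its fate, and any packet that
# matches no rule is denied by default.

from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    protocol: str          # "TCP", "UDP", or "ICMP"
    dst_port: int

@dataclass
class Rule:
    action: str            # "ALLOW" or "DENY"
    protocol: str          # protocol to match, "*" for any
    dst_port: int          # destination port to match, -1 for any

    def matches(self, pkt):
        return (self.protocol in ("*", pkt.protocol)
                and self.dst_port in (-1, pkt.dst_port))

def filter_packet(ruleset, pkt):
    for rule in ruleset:
        if rule.matches(pkt):
            return rule.action
    return "DENY"          # deny by default

rules = [
    Rule("ALLOW", "TCP", 80),    # permit inbound web traffic
    Rule("ALLOW", "TCP", 443),
]

print(filter_packet(rules, Packet("192.168.1.1", "10.0.0.5", "TCP", 80)))  # ALLOW
print(filter_packet(rules, Packet("192.168.1.1", "10.0.0.5", "UDP", 53)))  # DENY
```

Note that the filter is stateless: each packet is judged in isolation, which is exactly why such firewalls cannot notice abuse of an already established connection.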

2) Circuit Level Gateways

Figure : Circuit Level Gateway

Around 1989-1990, Dave Presotto and Howard Trickey of AT&T Bell Labs pioneered the second generation of firewall architectures with research in circuit relays, which were called circuit level gateways. Circuit level gateways are used for TCP connections to observe the handshaking between packets and ensure a requested session is legitimate. Normally, a gateway stores the following information: a unique session identifier, the state of the connection (i.e., handshake, established, or closing), sequencing information, the source and destination IP addresses, and the physical network interface through which the packet arrives or departs. The firewall then checks that the sending host has permission to send to the destination, and that the receiving host has permission to receive from the sender. If the connection is acceptable, all packets are routed through the firewall with no further security tests.

The advantages of circuit level gateways are that they are usually faster than application layer firewalls, because they perform fewer evaluations, and that they can protect a network by blocking connections between specific Internet sources and internal hosts. The main disadvantages are that they cannot restrict access to protocol subsets other than TCP and, as with packet filtering, testing the grant and deny rules can be difficult, which may leave the network vulnerable or incorrectly configured.
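The handshake check described above can be sketched as a small state machine: only connections that complete a legitimate SYN, SYN-ACK, ACK sequence are marked established, after which packets pass with no further tests. The states, flags, and class names are illustrative only:

```python
# Toy circuit-level gateway: validates the TCP three-way handshake per
# connection; once a session is established, further packets are forwarded
# without additional security tests, and malformed handshakes are dropped.

HANDSHAKE = {
    ("CLOSED", "SYN"): "SYN_SEEN",
    ("SYN_SEEN", "SYN-ACK"): "SYNACK_SEEN",
    ("SYNACK_SEEN", "ACK"): "ESTABLISHED",
}

class CircuitGateway:
    def __init__(self):
        self.sessions = {}            # (src, dst) -> connection state

    def observe(self, src, dst, flag):
        key = (src, dst)
        state = self.sessions.get(key, "CLOSED")
        if state == "ESTABLISHED":
            return "FORWARD"          # session already validated
        nxt = HANDSHAKE.get((state, flag))
        if nxt is None:
            self.sessions.pop(key, None)
            return "DROP"             # illegitimate handshake sequence
        self.sessions[key] = nxt
        return "FORWARD"

gw = CircuitGateway()
print(gw.observe("A", "B", "SYN"))      # FORWARD
print(gw.observe("A", "B", "SYN-ACK"))  # FORWARD
print(gw.observe("A", "B", "ACK"))      # FORWARD (session now established)
print(gw.observe("C", "B", "ACK"))      # DROP (no handshake came first)
```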

3) Application Level Gateways

Figure : Application Level Gateways

The third generation of firewall architectures, called application level gateways, was independently researched and developed during the late 1980s and early 1990s, mainly by Gene Spafford of Purdue University, Marcus Ranum, and Bill Cheswick of AT&T Bell Laboratories. Application level gateways, or proxy firewalls, are software applications with two primary modes (proxy server or proxy client). When a user on a trusted network wants to connect to a service on an untrusted network such as the Internet, the request is directed to the proxy server on the firewall. The proxy server pretends to be the real server on the Internet. It checks the request and decides whether to permit or deny it based on a set of rules. If the request is approved, the server passes it to the proxy client, which contacts the real server on the Internet. Connections from the Internet are made to the proxy client, which then passes them on to the proxy server for delivery to the real client. This method ensures that incoming connections are always made with the proxy client, while outgoing connections are always made with the proxy server. Therefore, there is no direct connection between the trusted and untrusted networks.

The main advantages are that application level gateways can set rules based on high-level protocols, maintain state information about the communications passing through the firewall server, and keep detailed activity records. The main disadvantages are that their complex filtering and access control decisions can require significant computing resources, which can cause performance delays, and that they are vulnerable to operating system and application level bugs.
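The permit-or-deny decision the proxy server makes before relaying a request can be sketched as below. Because the gateway works at the application layer, the rules can inspect application-level fields (here an HTTP-like method and path); the rules, names, and paths are invented for illustration:

```python
# Toy application-level gateway decision: the proxy inspects the request
# content before relaying it to the real server, something a plain packet
# filter cannot do.

def proxy_decide(method, path, allowed_methods=("GET", "HEAD")):
    """Return True if the proxy should relay this request to the real server."""
    if method not in allowed_methods:
        return False               # e.g. block uploads via POST/PUT
    if "/admin" in path:
        return False               # content-based rule: hide the admin area
    return True

print(proxy_decide("GET", "/index.html"))   # True
print(proxy_decide("POST", "/upload"))      # False
print(proxy_decide("GET", "/admin/login"))  # False
```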

4) Stateful Multilayer Inspection Firewalls

Figure : Stateful Multilayer Firewalls

Check Point Software released the first commercial product based on this fourth generation architecture, called stateful multilayer inspection firewalls, in 1994. Stateful multilayer inspection firewalls provide the best security of the four firewall types by monitoring the data being communicated at the application socket or port layer as well as at the protocol and address level, to verify that the request is functioning as expected. For example, if during an FTP session the port numbers being used or an IP address were to change, the firewall would not permit the connection to continue. Another advantage is that when a specific session is complete, any ports that were being used are closed. Stateful inspection systems can dynamically open and close ports for each session, which differs from basic packet filtering, where ports are left in a constant opened or closed state. The main disadvantage of stateful multilayer inspection firewalls is that they can be costly, because they require the purchase of additional hardware and/or software that is not normally packaged with a network device.

There are no specific rules that can be applied when designing a firewall because there are too many factors to consider, but there are general guidelines that will help if followed. Start by denying all access to the network by default. In other words, start with a gateway that routes no traffic. Determine the inbound access policy and then specify the outbound access policy. Once the inbound and outbound policies have been specified, an architecture with appropriate firewall functions can be chosen that fits within the budget. External resources may be needed if the complexity of the firewall required to satisfy the security requirements is too great for the in-house expertise. A costly firewall that is complex and not administered properly can be less effective than a straightforward firewall costing many times less.
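The dynamic open-and-close behaviour described above can be sketched as follows. The class and method names are invented for illustration; a real product tracks far more per-session state:

```python
# Toy stateful inspection: a port is opened only for the lifetime of a
# session and closed again when the session completes, unlike a static
# packet filter that leaves ports permanently opened or closed.

class StatefulFirewall:
    def __init__(self):
        self.open_ports = set()

    def session_start(self, port):
        self.open_ports.add(port)       # dynamically open the port

    def session_end(self, port):
        self.open_ports.discard(port)   # close it when the session completes

    def permit(self, port):
        return port in self.open_ports

fw = StatefulFirewall()
fw.session_start(20)                    # e.g. an FTP data connection
print(fw.permit(20))                    # True while the session is active
fw.session_end(20)
print(fw.permit(20))                    # False once the session is over
```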
Firewall Policies and Rules
A firewall policy dictates how firewalls should handle network traffic for specific IP addresses and address ranges, protocols, applications, and content types (e.g., active content) based on the organization’s information security policies. Before a firewall policy is created, some form of risk analysis should be performed to develop a list of the types of traffic needed by the organization and to categorize how they must be secured—including which types of traffic can traverse a firewall under what circumstances. This risk analysis should be based on an evaluation of threats; vulnerabilities; countermeasures in place to mitigate vulnerabilities; and the impact if systems or data are compromised. Firewall policy should be documented in the system security plan and maintained and updated frequently as new classes of attacks or vulnerabilities arise, or as the organization’s needs regarding network applications change. The policy should also include specific guidance on how to address changes to the ruleset.

Generally, firewalls should block all inbound and outbound traffic that has not been expressly permitted by the firewall policy—traffic that is not needed by the organization. This practice, known as deny by default, decreases the risk of attack and can also reduce the volume of traffic carried on the organization’s networks. Because of the dynamic nature of hosts, networks, protocols, and applications, deny by default is a more secure approach than permitting all traffic that is not explicitly forbidden.

1) Policies Based on IP Addresses and Protocols

Firewall policies should only allow necessary IP protocols through. Examples of commonly used IP protocols, with their IP protocol numbers, are ICMP (1), TCP (6), and UDP (17). Other IP protocols, such as the IPsec components Encapsulating Security Payload (ESP) (50) and Authentication Header (AH) (51) and routing protocols, may also need to pass through firewalls.
These necessary protocols should be restricted whenever possible to the specific hosts and networks within the organization with a need to use them. By permitting only necessary protocols, all unnecessary IP protocols are denied by default. Some IP protocols are rarely passed between an outside network and an organization’s LAN, and therefore can simply be blocked in both directions at the firewall. For example, IGMP is a protocol used to control multicast networks, but multicast is rarely used, and when it is, it is often not used across the Internet. Therefore, blocking all IGMP traffic in both directions is feasible if multicast is not used. Firewalls at the network perimeter should block all incoming traffic to networks and hosts that should not be accessible from external networks. These firewalls
should also block all outgoing traffic from the organization’s networks and hosts that should not be permitted to access external networks. Deciding which addresses should be blocked is often one of the most time-consuming aspects of developing firewall IP policies. It is also one of the most error-prone, because the IP address associated with an undesired entity often changes over time.

2) Policies Based on Applications

Most early firewall work involved simply blocking unwanted or suspicious traffic at the network boundary. Inbound application firewalls or application proxies take a different approach—they let traffic destined for a particular server into the network, but capture that traffic in a server that processes it like a port-based firewall. The application-based approach provides an additional layer of security for incoming traffic by validating some of the traffic before it reaches the desired server. The theory is that the inbound application firewall’s or proxy’s additional security layer can protect the server better than the server can protect itself—and can also remove malicious traffic before it reaches the server, helping to reduce server load. In some cases, an application firewall or proxy can remove traffic that the server might not be able to remove on its own because it has greater filtering capabilities. An application firewall or proxy also prevents the server from having direct access to the outside network. If possible, inbound application firewalls and proxies should be used in front of any server that does not have sufficient security features to protect it from application-specific attacks.

The main considerations when deciding whether or not to use an inbound application firewall or proxy are:
• Is a suitable application firewall available? Or, if appropriate, is a suitable application proxy available?
• Is the server already sufficiently protected by existing firewalls?
• Can the main server remove malicious content as effectively as the application firewall or proxy?
• Is the latency caused by an application proxy acceptable for the application?
• How easy is it to update the filtering rules on the main server and the application firewall or proxy to handle newly developed threats?

Application proxies can introduce problems if they are not highly capable. Unless an application proxy is significantly more robust than the server and easy to keep updated, it is usually best to stay with the application server alone. Application firewalls can also introduce problems if they are not fast enough to handle the traffic destined for the server.

However, it is also important to consider the server’s resources—if the server does not have sufficient resources to withstand attacks, the application firewall or proxy could be used as a shield.

3) Policies Based on User Identity

Traditional packet filtering does not see the identities of the users who are communicating in the traffic traversing the firewall, so firewall technologies without more advanced capabilities cannot have policies that allow or deny access based on those identities. However, many other firewall technologies can see these identities and can therefore enact policies based on user authentication. One of the most common ways to enforce user identity policy at a firewall is by using a VPN. Both IPsec VPNs and SSL VPNs have many ways to authenticate users, such as with secrets that are provisioned on a user-by-user basis, with multi-factor authentication (e.g., time-based cryptographic tokens protected with PINs), or with digital certificates controlled by each user. NAC has also become a popular method for firewalls to allow or deny users access to particular network resources. In addition, application firewalls and proxies can allow or deny access to users based on the user authentication within the applications themselves.

Firewalls that enforce policies based on user identity should be able to reflect these policies in their logs. That is, it is probably not useful to log only the IP address from which a particular user connected if the user was allowed in by a user-specific policy; it is important to log the user’s identity as well.

4) Policies Based on Network Activity

Many firewalls allow the administrator to block established connections after a certain period of inactivity. For example, if a user on the outside of a firewall has logged into a file server but has not made any requests during the past 15 minutes, the policy might be to block any further traffic on that connection.
Time-based policies are useful in thwarting attacks caused by a logged-in user walking away from a computer and someone else sitting down and using the established connections (and therefore the logged-in user’s credentials). However, these policies can also be bothersome for users who make connections but do not use them frequently. For instance, a user might connect to a file server to read a file and then spend a long time editing it. If the user does not save the file back to the file server before the firewall-mandated timeout, the timeout could cause the changes to the file to be lost. Some organizations have mandates about when firewalls should block connections that are considered to be inactive, when applications should disconnect sessions if there is no activity, and so on. A firewall used by such an organization should be able to set policies that match the mandates while being specific enough to match the security objective of the mandates.

A different type of firewall policy based on network activity is one that throttles or redirects traffic if the rate of traffic matching the policy rule is too high. For
example, a firewall might redirect the connections made to a particular inside address to a slower route if the rate of connections is above a certain threshold. Another policy might be to drop incoming ICMP packets if the rate is too high. Crafting such policies is quite difficult because throttling and redirecting can cause desired traffic to be lost or have difficult-to-diagnose transient failures.
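The inactivity-timeout policy discussed above (for example, the 15-minute file-server rule) can be sketched as a connection table that refreshes a timestamp on every permitted packet and blocks connections that have been idle too long. The table structure, names, and timeout value are illustrative only:

```python
# Toy inactivity policy: connections idle longer than IDLE_TIMEOUT are
# blocked; each permitted packet refreshes the connection's last-activity
# timestamp. Time is passed in explicitly to keep the sketch deterministic.

IDLE_TIMEOUT = 15 * 60   # seconds (the 15-minute example above)

class ConnectionTable:
    def __init__(self):
        self.last_activity = {}     # connection id -> time of last packet

    def packet_seen(self, conn, now):
        self.last_activity[conn] = now

    def allowed(self, conn, now):
        last = self.last_activity.get(conn)
        if last is None or now - last > IDLE_TIMEOUT:
            return False            # never seen, or idle too long
        self.packet_seen(conn, now)  # activity refreshes the timer
        return True

tbl = ConnectionTable()
tbl.packet_seen("alice->fileserver", now=0)
print(tbl.allowed("alice->fileserver", now=600))    # True (10 minutes idle)
print(tbl.allowed("alice->fileserver", now=2000))   # False (idle too long)
```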

SSL – Secure Socket Layer
SSL was developed by Netscape to provide security when transmitting information on the Internet. Netscape recognized the need for a process that would ensure confidentiality when entering and transmitting information on the Web; without such a process, very few individuals would feel comfortable entering information like credit card numbers on a Web site. Netscape recognized that e-commerce on the Web would never get off the ground without consumer confidence. As a result, SSL was developed to address the security needs of Web surfers.

It is somewhat ironic that we require such a high level of security for transactions on the Web. Most knowledgeable individuals would never enter their Visa or MasterCard number on a site that did not employ SSL for fear of having the information intercepted. However, those same individuals would not hesitate to give that same information over the phone to an unknown person when ordering flowers, nor would they fear giving their credit cards to a waiter at a restaurant. Consider that this involves handing a card over to someone you have never met who inevitably disappears for 10 minutes. Where is the security in that exchange? For some reason we hold transactions on the Web to a higher standard of security than we do most other types of transactions.

The risk that a credit card number will be stolen in transit on the Internet is very small. A greater risk is that the credit card number will be stolen from a system on which it is stored. That is precisely what happened to me: a while back I received an e-mail from my Internet service provider (ISP) informing me that a computer, which had been stolen from the ISP, may have contained credit card information for a number of its customers. The e-mail went on to state that it was possible that my credit card information was on the stolen machine. The company said that the file containing the credit card numbers was encrypted, so it did not believe that there was any real risk.
Nevertheless, the firm said that it was advising its customers of the incident so they could take appropriate action. The original transaction with the ISP in which I gave the company my credit card information was not over the Internet; it was a traditional low-tech transaction. Like most companies, the ISP stored the user account information, including credit card numbers, in a database on a network. That is where the real risk lies.

SSL utilizes both asymmetric and symmetric key encryption to set up and transfer data in a secure mode over an unsecured

network. When used with a browser client, SSL establishes a secure connection between the client browser and the server, usually HTTP over SSL (HTTPS). It sets up an encrypted tunnel between a browser and a Web server over which data packets can travel. No one tapping into the connection between the browser and the server can decipher the information passing between the two. Integrity of the information is established by hashing algorithms; confidentiality is ensured with encryption.

Figure : SSL session handshake

To set up an SSL session, both sides exchange random numbers. The server sends its public key with a digital certificate signed by a recognized CA, attesting to the authenticity of the sender's identity and binding the sender to the public key. The server also sends a session ID. The browser client creates a pre_master_secret key, encrypts it using the server's public key, and transmits the encrypted pre_master_secret key to the server. Then both sides generate a session key using the pre_master_secret and the random numbers.

The SSL session set-up thus begins with asymmetric encryption: the server presents the browser client with its public key, which the client uses to encrypt the pre_master_secret. However, once the client sends the encrypted pre_master_secret key back to the server, the two parties switch over to symmetric encryption, employing the session key to secure the connection. This is done because symmetric encryption creates much less overhead; less overhead means better throughput and a faster response time. Asymmetric cryptosystems are much more CPU-intensive and would significantly slow the exchange of information. As a result, for spontaneous exchanges, asymmetric encryption is used initially to establish a secure connection and to authenticate identities (using digital certificates). Once identities are established and public keys are exchanged, the communicating entities switch to symmetric encryption for efficiency.

Even with the use of symmetric encryption, network throughput is significantly diminished with SSL. Cryptographic processing is extremely CPU-intensive. Web
servers that would normally be able to handle hundreds of connections may only be able to handle a fraction of that when employing SSL. In 1999, Internet Week reported on a test of a Sun 450 server and the effects of SSL. At full capacity, the server could handle about 500 connections per second of normal HTTP traffic; the same server could only handle about three connections per second when the connections employed SSL. The fact that SSL can have such a hindering effect on network performance has to be included in any capacity planning for e-commerce sites.

There are SSL accelerators available that can enhance the performance of Web servers that employ SSL. Products from Hewlett-Packard, Compaq, nCipher, and others offer solutions that speed up the cryptographic processing. Usually, these products are separate boxes that interface with a server and off-load the SSL processing from the server's CPU. They can also take the form of accelerator boards that are installed in the server.
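The key-derivation step in the handshake described above can be sketched in a few lines. Real SSL/TLS uses a dedicated pseudo-random function; plain SHA-256 here is a simplification, and all values are illustrative:

```python
# Both endpoints combine the pre_master_secret (which travelled encrypted
# under the server's public key) with the two exchanged random numbers to
# derive the same symmetric session key. The session key itself never
# crosses the network.

import hashlib

def derive_session_key(pre_master_secret, client_random, server_random):
    material = pre_master_secret + client_random + server_random
    return hashlib.sha256(material).hexdigest()

pre_master = b"secret-chosen-by-client"
client_rand, server_rand = b"client-nonce", b"server-nonce"

client_key = derive_session_key(pre_master, client_rand, server_rand)
server_key = derive_session_key(pre_master, client_rand, server_rand)
print(client_key == server_key)   # True: both sides hold the same session key
```

An eavesdropper who saw only the two random numbers cannot reproduce the key, because the pre_master_secret was never sent in the clear.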

IPSec (Internet Protocol Security)
IPSec, a set of protocols developed by the IETF to support secure exchange of packets at the IP layer, is used to implement VPNs on the Internet and intranets. IPSec operates at the network layer (layer 3) and supports two modes: transport mode and tunnel mode.

IPSec Transport Mode

Transport mode encrypts only the data or information portion (payload) of each IP packet; it leaves the header untouched. Because the header information is untouched, transport mode provides end-to-end encryption and no special setup is required for the network devices. Transport mode is usually used for secure communications between hosts. With transport mode, someone sniffing the network will not be able to decipher the encrypted payload. However, since the header information is not encrypted, sniffers will be able to analyze traffic patterns.

IPSec Tunnel Mode

Tunnel mode encrypts the entire packet, both the header and the payload. The receiving device must be IPSec-compliant to be able to decrypt each packet, interpret it, and then re-encrypt it before forwarding it on to the appropriate destination. As such, it is a node-to-node encryption protocol. Tunnel mode safeguards against traffic analysis, since someone sniffing the network can only determine the tunnel endpoints and not the true source and destination of the tunneled packets. The sending and receiving devices exchange public key information using a protocol known as Internet Security Association and Key Management Protocol/Oakley (ISAKMP/Oakley). This protocol enables the receiver to obtain a public key and authenticate the sender using the sender's digital certificates. Tunnel mode is considered more secure than transport mode, since it conceals or encapsulates the IP control information.
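The difference between the two modes can be sketched as follows. A toy XOR "cipher" stands in for real IPSec encryption, and the packet layout is reduced to header and payload byte strings; everything here is illustrative only:

```python
# Transport mode encrypts only the payload and leaves the original IP
# header readable; tunnel mode encrypts the whole packet and wraps it in a
# new outer header that names only the tunnel endpoints.

def toy_encrypt(data, key=0x5A):
    # Trivial XOR stand-in for real encryption (XOR twice restores data)
    return bytes(b ^ key for b in data)

def transport_mode(header, payload):
    return header, toy_encrypt(payload)

def tunnel_mode(header, payload, gateway_header):
    return gateway_header, toy_encrypt(header + payload)

hdr, data = b"src=10.0.0.1;dst=10.0.0.9", b"hello"

t_hdr, t_body = transport_mode(hdr, data)
print(t_hdr)      # inner addresses remain visible to a traffic analyst

u_hdr, u_body = tunnel_mode(hdr, data, b"src=gwA;dst=gwB")
print(u_hdr)      # only the tunnel endpoints are visible on the wire
```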
Virtual Private Networks
Firewall devices at the edge of a network are sometimes required to do more than block unwanted traffic. A common requirement for these firewalls is to encrypt and decrypt specific network traffic flows between the protected network and external networks. This nearly always involves virtual private networks (VPN), which use additional protocols to encrypt traffic and provide user authentication and integrity checking. VPNs are most often used to provide secure network communications across untrusted networks. For example, VPN technology is widely used to extend the protected network of a multi-site organization across the Internet, and sometimes to provide secure remote user access to internal organizational networks via the Internet. Two common choices for secure VPNs are IPSec and Secure Sockets Layer (SSL)/Transport Layer Security (TLS).

The two most common VPN architectures are gateway-to-gateway and host-to-gateway. Gateway-to-gateway architectures connect multiple fixed sites over public lines through the use of VPN gateways—for example, to connect branch offices to an organization’s headquarters. A VPN gateway is usually part of another network device such as a firewall or router. When a VPN connection is established between the two gateways, users at branch locations are unaware of the connection and do not require any special settings on their computers. The second type of architecture, host-to-gateway, provides a secure connection to the network for individual users, usually called remote users, who are located outside of the organization (at home, in a hotel, etc.). Here, a client on the user machine negotiates the secure connection with the organization’s VPN gateway.

For gateway-to-gateway and host-to-gateway VPNs, the VPN functionality is often part of the firewall itself. Placing it behind the firewall would require VPN traffic to be passed through the firewall while encrypted, preventing the firewall from inspecting the traffic.
All remote access (host-to-gateway) VPNs allow the firewall administrator to decide which users have access to which network resources. This access control is normally available on a per-user and per-group basis; that is, the VPN policy can specify which users and groups are authorized to access which resources, should an organization need that level of granularity. VPNs generally rely on authentication protocols such as Remote Authentication Dial In User Service (RADIUS). RADIUS uses several different types of authentication credentials, with the most common examples being username and
Prof. Jadhav Dattatraya Subhash (SICS-MCA, Korti) Page 154

password, digital signatures, and hardware tokens. Another authentication protocol often used by VPNs is the Lightweight Directory Access Protocol (LDAP); it is particularly useful for making access decisions for individual users and groups.

Running VPN functionality on a firewall requires additional resources that depend on the amount of traffic flowing across the VPN and the type of encryption being used. For some environments, the added traffic associated with VPNs might require additional capacity planning and resources. Planning is also needed to determine the type of VPN (gateway-to-gateway and/or host-to-gateway) that should be included in the firewall. Many firewalls include hardware acceleration for encryption to minimize the impact of VPN services.

Digital Signatures
A digital signature allows a receiver to authenticate (to a limited extent) the identity of the sender and to verify the integrity of the message. For the authentication process, you must already know the sender's public key, either from prior knowledge or from some trusted third party. Digital signatures are used to ensure message integrity and authentication. In its simplest form, a digital signature is created by hashing the entire contents of the message to produce a message digest, then encrypting that digest with the sender's private key. The recipient uses the sender's public key to recover the digest and compares it with a digest recomputed from the received message. This process ensures the integrity of the message and authenticates the sender.

Figure: Digital signature

To sign a message, senders usually append their digital signature to the end of the message and encrypt the whole thing using the recipient's public key. Recipients decrypt the message using their own private key, then verify the sender's identity and the message integrity by decrypting the sender's digital signature with the sender's public key.

Once again we will use Alice and Bob to illustrate how digital signatures work. Alice has a pair of keys, her private key and her public key. She sends a
message to Bob that includes both a plaintext message and a version of the plaintext that has been encrypted using her private key. The encrypted version of her message is her digital signature. Bob receives the message from Alice and decrypts the signature using her public key. He then compares the decrypted text to the plaintext message. If they are identical, he has verified that the message has not been altered and that it came from Alice. He can authenticate that the message came from Alice because he decrypted the signature with Alice's public key, so it could only have been encrypted with Alice's private key, to which only Alice has access.

The strengths of digital signatures are that they are almost impossible to counterfeit and easy to verify. However, if Alice and Bob are strangers who have never communicated with each other before, and Bob receives Alice's public key with no means to verify who Alice is other than Alice's own assertion that she is who she claims to be, then the digital signature is useless for authentication. It will still verify that a message has arrived unaltered from the sender, but it cannot be used to authenticate the identity of the sender. In cases where the parties have no prior knowledge of one another, a trusted third party is required to authenticate the identity of the transacting parties.

The process of digitally signing starts by taking a mathematical summary (called a hash code) of the check. This hash code is a uniquely identifying digital fingerprint of the check: if even a single bit of the check changes, the hash code changes dramatically. The next step in creating a digital signature is to sign the hash code with your private key. This signed hash code is then appended to the check. How is this a signature? The recipient of your check can verify the hash code sent by you, using your public key.
At the same time, a new hash code can be created from the received check and compared with the hash code recovered from your signature. If the hash codes match, then the recipient has verified that the check has not been altered. The recipient also knows that only you could have sent the check, because only you have the private key that signed the original hash code.
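The hash-then-sign flow described above can be sketched in Python using the standard hashlib module and a textbook-sized RSA key pair. The numbers here (modulus 3233 = 61 x 53, exponents 17 and 2753, and the "Pay Bob" messages) are classroom-scale assumptions for illustration; real signatures use keys of 2048 bits or more, and with a modulus this tiny a tampered message has a small chance of colliding mod n, so the demo only checks the positive case.

```python
import hashlib

# Avalanche effect: changing even one character changes the digest radically.
d1 = hashlib.sha256(b"Pay Bob $100").hexdigest()
d2 = hashlib.sha256(b"Pay Bob $101").hexdigest()
assert d1 != d2

# Toy RSA key pair: public (n, e), private exponent d.
n, e, d = 3233, 17, 2753

def sign(message: bytes) -> int:
    # Hash the message, then encrypt the digest with the private exponent.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Recompute the digest and compare it with the signature
    # decrypted using the public exponent.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"Pay Bob $100")
assert verify(b"Pay Bob $100", sig)   # intact message verifies
```

Note that `verify` uses only the public key (n, e): anyone can check the signature, but only the holder of d could have produced it, which is exactly the sender-authentication property described above.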

Elliptic-Curve Cryptography (ECC)
Another promising development in cryptography is elliptic-curve cryptography (ECC). ECC, which is widely expected to be a next-generation algorithm, has been proposed for use as a public-key cryptosystem. ECC's strength comes from the fact that it is computationally very difficult to solve the elliptic-curve discrete logarithm problem. The appeal of ECC algorithms is that they hold the possibility of offering security comparable to the RSA algorithms while using smaller keys. Smaller keys mean that less computation is required, so less time and fewer CPU resources are needed to implement this technology on the network, which translates into lower cost. As a result, interest in these algorithms is keen.
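The asymmetry ECC relies on can be seen even on a toy curve. The sketch below uses a small classroom curve from the cryptography literature (y^2 = x^3 + 2x + 2 over GF(17), base point (5, 1)); the point sizes and private key are illustrative assumptions, and real ECC uses curves over fields of 256 bits or more. Multiplying a point by k is fast, but recovering k from the result (the discrete logarithm) has no shortcut here other than trying every value.

```python
# Toy curve y^2 = x^3 + 2x + 2 over GF(17), base point G = (5, 1).
p, a = 17, 2
G = (5, 1)
INF = None  # point at infinity (the group identity)

def point_add(P, Q):
    if P is INF:
        return Q
    if Q is INF:
        return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return INF                                   # P + (-P) = identity
    if P == Q:                                       # point doubling
        lam = (3 * P[0] ** 2 + a) * pow(2 * P[1], -1, p) % p
    else:                                            # ordinary addition
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam ** 2 - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def scalar_mult(k, P):
    # Double-and-add: computing k*P is fast even for enormous k.
    R = INF
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

pub = scalar_mult(7, G)     # public key for private key k = 7
# Going backwards means solving the discrete log; here only brute force works:
k = next(i for i in range(1, 20) if scalar_mult(i, G) == pub)
assert k == 7
```

On this 17-element field the brute-force search is instant, but on a 256-bit curve the same search space is astronomically large, which is why comparable security is achieved with far smaller keys than RSA requires.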
It has also been said that ECC is more difficult to break than RSA. While both RSA with a 512-bit key and ECC with a 97-bit key have been broken, the ECC algorithm is reported to be the more difficult of the two to break: in 1999 a team of 195 volunteers in 20 countries, using 740 computers, took 40 days to recover a 97-bit ECC private key. Although ECC holds great promise, I am not aware of any practical implementation of the technology in any product now on the market.

No matter what algorithm you employ, it is important to be cognizant of the fact that as computing power increases and becomes less expensive, cryptographic key sizes will have to increase to ensure security. Not too far in the future, a 2,048-bit key will not be sufficient to ensure security.

The Limitations of Encryption

Communications are not necessarily secure simply because they are encrypted. It is important to remember that useful information can be discerned even from encrypted communications. I like to use an example from the book Blind Man's Bluff, in which authors Sherry Sontag and Christopher and Annette Drew tell the story of U.S. submarine espionage during the Cold War. In the 1970s and 1980s, Soviet missile subs were using effective cryptosystems in conjunction with sophisticated transmitters that compressed their encrypted communications into microsecond bursts. While the United States was not able to break the Soviet transmission code, America was able to gather a great deal of information from the transmissions themselves. U.S. analysis of the transmission patterns revealed almost as much information as the actual content of the transmissions would have. For example, the United States was able to determine that the messages were coming from Soviet subs on their way to and from patrol.
They were also able to distinguish one sub from another by slight variations in the frequencies of the transmissions, and they learned that the Soviet subs sent transmissions at regular points, or milestones, in their patrols. Consequently, the United States could determine a Soviet sub's location when it reached its patrol sector, the halfway point, or a particular landmark. The analysis of transmission patterns enabled the United States to track Soviet subs on patrol without ever breaking the transmissions' code. It is important to understand that simply using encryption is no guarantee of confidentiality or secrecy.

In addition, studies have shown that the randomness of the data in encrypted files stored on media can be used to distinguish those files from other stored data. Generally, operating systems do not store data in a random manner. Data is normally stored in a manner that optimizes retrieval, space, or speed. Encrypted files and algorithm keys, by their nature, must be random data. As a result, when large encrypted files and public/private key sets are stored on a disk drive, their
randomness stands out against the normally organized data on the drive. There are programs available that purport to be able to find keys and encrypted files on a disk drive. If true, this could mean that someone with access to the drive on which key pairs are stored could steal them.

At the same time, it is important to understand that developments in the field of cryptography and digital signature technology are the enabling force behind the recent explosion in e-commerce on the Internet. Without these technologies, Internet e-commerce would not be possible. As a result, those who want to participate in this new world of e-commerce, either as entrepreneurs or consumers, need to understand the essential technology that is enabling its development.
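The claim that encrypted data stands out statistically can be illustrated by estimating Shannon entropy, a standard measure of randomness in bits per byte. The "file contents" below are invented stand-ins: a repetitive record layout for ordinary stored data, and seeded pseudo-random bytes in place of ciphertext.

```python
import math
import random
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    # Bits of information per byte: 8.0 means indistinguishable from
    # uniform random data; structured files score much lower.
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Ordinary stored data is organized for retrieval, not randomness.
structured = b"HEADER: record 0001; payload=AAAAAAAA\n" * 100

# Seeded pseudo-random bytes stand in for an encrypted file or key material.
random_like = random.Random(0).randbytes(4096)

assert shannon_entropy(structured) < 4.5     # organized data: low entropy
assert shannon_entropy(random_like) > 7.5    # "ciphertext": near-maximal entropy
```

This gap is exactly what the scanning programs mentioned above would look for: a large region of near-8.0 entropy on an otherwise organized disk is a strong hint that it holds encrypted data or key material.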
