COMPUTER NETWORKING

Network interface cards can transmit data at high speed over Ethernet cables. A computer network is a system for communication between computers. These networks may be fixed (cabled, permanent) or temporary (as via modems or null modems) and generally involve the use of a telecommunications system.

In the earliest days, carrying instructions between calculating machines and computers was done by human users. In September 1940 George Stibitz used a teletype machine to send instructions for a problem set from his Model K at Dartmouth College in New Hampshire to his Complex Number Calculator in New York, and received results back by the same means. Linking output systems like teletypes to computers was an interest at the Advanced Research Projects Agency (ARPA) when, in 1962, J.C.R. Licklider was hired and developed a working group he called the "Intergalactic Network", a precursor to the ARPANET. In 1964 researchers at Dartmouth developed a time-sharing system for distributed users of large computer systems. The same year, at MIT, a research group supported by General Electric and Bell Labs used a computer (DEC's PDP-8) to route and manage telephone connections. In 1968 Paul Baran proposed a network system consisting of datagrams or packets that could be used in a packet-switching network between computer systems. In 1969 the University of California at Los Angeles, the Stanford Research Institute (SRI), the University of California at Santa Barbara, and the University of Utah were connected as the beginning of the ARPANET, using 50 kbit/s circuits.

Networks, and the technologies needed to connect and communicate through and between them, continue to drive the computer hardware, software, and peripherals industries. This expansion is mirrored by growth in the numbers and types of network users, from researchers and businesses to families and individuals in everyday use.

Contents

1 Categorizing
  1.1 By functional relationship
  1.2 By network topology
  1.3 By specialized function
2 Protocol stacks
3 Suggested topics
  3.1 Layers
  3.2 Data transmission
    3.2.1 Wired transmission
    3.2.2 Wireless transmission
  3.3 Other

Local area network

A local area network (LAN) is a computer network covering a small local area, such as a home, an office, or a small group of buildings such as a college campus. Current LANs are most likely to be based on switched Ethernet or Wi-Fi technology running at 10 to 10,000 Mbit/s. The defining characteristics of LANs, in contrast to WANs, are: (a) much higher data rates, (b) smaller geographic range (at most a few kilometers), and (c) no reliance on leased telecommunication lines. "LAN" usually does not refer to data running over local analog telephone lines, as on a private branch exchange (PBX).

Technical aspects

Although switched Ethernet is now most common at the physical layer, with TCP/IP as the protocol, historically many different options have been used (see below) and some continue to be popular in niche areas. Larger LANs have redundant links and routers or switches capable of using the spanning tree protocol and similar techniques to recover from failed links. LANs connect to other LANs via routers and leased lines to create a WAN. Most also have connections to the large public network known as the Internet, and links to other LANs can be 'tunnelled' across it using VPN technologies.

History

In the days before personal computers, a site might have just one central computer, with users accessing it via computer terminals over simple low-speed cabling. Networks such as IBM's SNA (Systems Network Architecture) were aimed at linking terminals or other mainframes at remote sites over leased lines; hence these were wide area networks. The first LANs were created in the late 1970s and used to create high-speed links between several large central computers at one site. Of the many competing systems created at this time, Ethernet and ARCNET were the most popular.

The growth of CP/M- and then DOS-based personal computers meant that a single site began to have dozens or even hundreds of computers. The initial attraction of networking these was generally to share disk space and laser printers, which were both very expensive at the time. There was much enthusiasm for the concept, and for several years from about 1983 onward computer industry pundits would regularly declare the coming year to be "the year of the LAN". In reality the concept was marred by a proliferation of incompatible physical layer and network protocol implementations, and confusion over how best to share resources. Typically each vendor had its own type of network card, cabling, protocol, and network operating system. A solution appeared with the advent of Novell NetWare, which offered (a) even-handed support for the 40 or so competing card and cable types and (b) a much more sophisticated operating system than most of its competitors.

NetWare dominated the personal computer LAN business from soon after its introduction in 1983 until the mid-1990s, when Microsoft introduced Windows NT Advanced Server and Windows for Workgroups. Of the competitors to NetWare, only Banyan Vines had comparable technical strengths, but Banyan never gained a secure base. Microsoft and 3Com worked together to create a simple network operating system which formed the base of 3Com's 3+Share, Microsoft's LAN Manager and IBM's LAN Server. None of these was particularly successful. In the same timeframe, Unix workstations from vendors such as Sun Microsystems, Hewlett-Packard, Silicon Graphics, Intergraph, NeXT and Apollo were using TCP/IP-based networking. Although this market segment is now much reduced, the technologies developed in this area continue to be influential on the Internet and in both Linux and Apple Mac OS X networking.

Metropolitan area network

Metropolitan area networks, or MANs, are large computer networks usually spanning a campus or a city. They typically use wireless infrastructure or optical fiber connections to link their sites. For instance, a university or college may have a MAN that joins together many of its local area networks (LANs) situated around a site of a fraction of a square kilometer. From its MAN it could then have several wide area network (WAN) links to other universities or to the Internet. Some technologies used for this purpose are ATM, FDDI and SMDS. These older technologies are in the process of being displaced by Ethernet-based MANs (e.g. Metro Ethernet) in most areas. MAN links between LANs have also been built without cables, using microwave, radio, or infrared free-space optical communication links. DQDB, the Distributed Queue Dual Bus, is the metropolitan area network standard for data communication, specified in IEEE 802.6. Using DQDB, networks can be up to 30 miles long and operate at speeds of 34 to 155 Mbit/s. Several notable networks started as MANs, such as the Internet peering points MAE-West and MAE-East and the Sohonet media network.

Wireless community network

Wireless community networks or wireless community projects are the largely hobbyist-led development of interlinked computer networks using wireless LAN technologies, taking advantage of the availability of cheap, standardised 802.11b (Wi-Fi) devices to build growing clusters of linked, citywide networks. Some are used to link to the wider Internet, particularly where individuals can obtain unmetered Internet connections such as ADSL and/or cable modem at fixed cost and share them with friends. Where such access is unavailable or expensive, they can act as a low-cost partial alternative, since the only cost is the fixed cost of the equipment. Such projects started to evolve in 1998 with the availability of 802.11 equipment and are gradually spreading to cities and towns around the world. In mid-2002 most such networks were still embryonic, with small groups of people experimenting and gradually interconnecting with each other, thus expanding the domain and utility of the networks. As of mid-2005, wireless community networks have become increasingly popular and exist throughout many cities.

Such networks have a distributed rather than a tree-like topography and have the potential to replace the congested and vulnerable backbones of the wired Internet in most places. These projects are in many senses an evolution of amateur radio and, more specifically, packet radio, as well as an outgrowth of the free software community (which itself substantially overlaps with amateur radio), and they share its freewheeling, experimental, adaptable culture.

The key to using standard wireless networking devices, designed for short-range use, for multi-kilometre linkups is the use of high-gain antennas. Commercially available examples are relatively expensive and not that readily available, so much experimentation has gone into home-built antenna construction. One striking design is the cantenna, which performs better than many commercial antenna designs and is constructed from a steel food can. Most wireless community network projects are coordinated by citywide user groups who freely share information and help using the Internet. They often spring up as a grassroots movement offering free, anonymous Internet access to anyone with Wi-Fi capability.

Wireless MAN

A wireless metropolitan area network (MAN) offers broadband network access via exterior antennas. The antennas communicate with base stations which are connected to a core network. This is a good alternative to fixed-line networks, and it is generally simple to build and relatively inexpensive. 802.16 is an Institute of Electrical and Electronics Engineers (IEEE) standard which specifies the WirelessMAN air interface for wireless metropolitan area networks. The standard was completed in October 2001 and published on 8 April 2002. 802.16 is a "last mile" technology which uses spectrum between 10 and 66 GHz; because of the short wavelength, line of sight is required. The standard supports point-to-multipoint topology, frequency-division duplex (FDD) and time-division duplex (TDD) in a consistent framework, and full quality of service (QoS). With QoS it is possible to carry sound, video and other delay-sensitive traffic. The standard specifies 120 Mbit/s on each 25 MHz channel.

802.16a followed the 802.16 standard. It was completed in November 2002 and published on 1 April 2003. It uses spectrum between 2 and 11 GHz, supports a mesh architecture in addition to point-to-multipoint, and does not require line of sight. With mesh support, subscriber stations can communicate with other subscriber stations rather than only directly with the base station.

Wide area network

A wide area network, or WAN, is a computer network covering a wide geographical area, involving a vast array of computers. This is different from personal area networks (PANs), metropolitan area networks (MANs) or local area networks (LANs), which are usually limited to a room, building or campus. The best-known example of a WAN is the Internet.

WANs are used to connect local area networks (LANs) together, so that users and computers in one location can communicate with users and computers in other locations. Many WANs are built for one particular organization and are private. Others, built by Internet service providers, provide connections from an organization's LAN to the Internet. WANs are most often built using leased lines. At each end of the leased line, a router connects to the LAN on one side and to a hub within the WAN on the other.

Network protocols including TCP/IP deliver transport and addressing functions. Protocols including Packet over SONET/SDH, MPLS, ATM and Frame Relay are often used by service providers to deliver the links that make up WANs. X.25 was an important early WAN protocol, and is often considered to be the "grandfather" of Frame Relay, as many of the underlying protocols and functions of X.25 are still used (with upgrades) by Frame Relay. Academic research into wide area networks can be broken down into three areas: mathematical models, network emulation and network simulation.

Personal area network

A personal area network (PAN) is a computer network used for communication among computer devices (including telephones and personal digital assistants) close to one person. The devices may or may not belong to the person in question. The reach of a PAN is typically a few meters. PANs can be used for communication among the personal devices themselves (intrapersonal communication), or for connecting to a higher-level network and the Internet (an uplink). Personal area networks may be wired with computer buses such as USB and FireWire. A wireless personal area network (WPAN) can also be built with network technologies such as IrDA and Bluetooth.

Wireless

A Bluetooth PAN is also called a piconet, and is composed of up to 8 active devices in a master-slave relationship (up to 255 devices can be connected in "parked" mode). The first Bluetooth device in the piconet is the master, and all other devices are slaves that communicate with the master. A piconet typically has a range of 10 meters, although ranges of up to 100 meters can be reached under ideal circumstances. Recent innovations in Bluetooth antennas have allowed these devices to far exceed the range for which they were originally designed. At DEF CON 12, with the right equipment, a group of hackers known as "Flexilis" was able to achieve connectivity to Bluetooth devices more than half a mile away. The antenna used was homemade and Yagi-based; they named it "The BlueSniper". It is a rifle stock with a scope and Yagi antenna attached, with a cable connecting the antenna to a Bluetooth card in a PDA or laptop computer. The laptop can be carried in a backpack with the cables running into it, giving the whole thing the Ghostbusters look.

Another PAN technology, Skinplex, transmits via the capacitive near field of human skin. Skinplex can detect and communicate up to one meter from a human body. It is already used for access control (door locks) and jamming protection (so people are not caught in convertible roofs) in cars.

By functional relationship

Client-server

Client/server is a network architecture which separates the client (often a graphical user interface) from the server. Each instance of the client software can send requests to a server or application server.

Although this idea is applied in a variety of ways and to many different kinds of applications, the easiest example to visualize is the use of web pages on the Internet. For example, if you are reading this article on Wikipedia, your computer and web browser would be considered a client, and the computers, databases, and applications that make up Wikipedia would be considered the server. When your web browser requests a particular article from Wikipedia, the Wikipedia server finds all of the information required to display the article in the Wikipedia database, assembles it into a web page, and sends it back to your web browser for you to look at.

Introduction

A client/server architecture is intended to be scalable: each computer or process on the network is either a client or a server. Server software generally, but not always, runs on powerful computers dedicated exclusively to running the business application. Client software, on the other hand, generally runs on ordinary PCs or workstations. Clients rely on servers for most of their information, such as configuration files, stock quotes and business application programs, or offload compute-intensive tasks to the server in order to keep the client computer (and its user) free to perform other tasks.

Properties of a server:
• Passive (slave)
• Waits for requests
• Serves requests as they arrive and sends a reply

Properties of a client:
• Active (master)
• Sends requests
• Waits until a reply arrives

Servers can be stateless or stateful. A stateless server does not keep any information between requests; an example is an HTTP server for static HTML pages. A stateful server can remember information between requests, with either global or per-session scope; an example is Apache Tomcat. The interaction between client and server is often described using sequence diagrams, which are standardized in UML.

Another type of network architecture is known as a peer-to-peer architecture, because each node or instance of the program is both a "client" and a "server" and each has equivalent responsibilities. Both architectures are in wide use.
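As a rough illustration of the request/reply pattern and of a stateless server as described above, here is a minimal Python sketch using the standard socket library; the host, port and message are invented for the example and are not tied to any particular product.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9090   # illustrative values only

def run_server() -> None:
    """Stateless server: serves one request and keeps nothing between requests."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()                           # passive: wait for a request
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)          # serve the request ...
            conn.sendall(b"echo: " + request)  # ... and send a reply

def run_client() -> None:
    """Client: the active side; sends a request and waits for the reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"hello server")
        print(cli.recv(1024).decode())         # prints "echo: hello server"

if __name__ == "__main__":
    server = threading.Thread(target=run_server)
    server.start()
    time.sleep(0.2)    # crude wait for the listener to come up (fine for a sketch)
    run_client()
    server.join()
```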

Multi-tier architectures

A generic client/server architecture has two types of nodes on the network: clients and servers. As a result, these generic architectures are sometimes referred to as "two-tier" architectures. Some networks consist of three different kinds of nodes: clients, application servers which process data for the clients, and database servers which store data for the application servers. This is called a three-tier architecture. In general, an n-tier or multi-tier architecture may deploy any number of distinct services, including transitive relations between application servers implementing different functions of business logic, each of which may or may not employ a distinct or shared database system.

The advantage of an n-tier architecture compared with a two-tier architecture (or of a three-tier compared with a two-tier) is that it separates out the processing so as to better balance the load on the different servers; it is more scalable. The disadvantages of n-tier architectures are:

1. It puts a greater load on the network.
2. It is much more difficult to program and test than a two-tier architecture, because more devices have to communicate in order to complete a user's transaction.

Addressing

Methods of addressing in client-server environments can be described as follows (a small sketch after the list illustrates the machine-process form):


• Machine-process addressing: the address is written as process@machine, so 56@453 indicates process 56 on computer 453.
• Name server: name servers hold an index of the names and addresses of all servers in the relevant domain.
• Localization packets: broadcast messages are sent to all computers in the distributed system to determine the address of the destination computer.
• Trader: a trader is a system that indexes all the services available in a distributed system; a computer requiring a particular service checks with the trading service for the address of a computer providing it.
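A toy illustration of the machine-process form; the helper name is mine, not from the text.

```python
def parse_machine_process_address(addr: str) -> tuple[int, int]:
    """Split a 'process@machine' address, e.g. '56@453' -> (56, 453)."""
    process, machine = addr.split("@", 1)
    return int(process), int(machine)

# Process 56 on computer 453:
print(parse_machine_process_address("56@453"))   # (56, 453)
```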

Examples

A popular client in widespread use today is the web browser, which communicates with web servers over the Internet to fetch and display web page content. The X Window System is a client-server architecture with an unusual property: the server is always local (near the user) and the client can be local or remote. This is less confusing if you think of the server (the X display) as making a resource available (a windowing display system) and the client as making use of that resource.

Peer-to-peer


GNUnet

A peer-to-peer (or P2P) computer network is a network that relies on the computing power and bandwidth of the participants in the network rather than concentrating it in a relatively small number of servers. P2P networks are typically used for connecting nodes via largely ad hoc connections. Such networks are useful for many purposes: sharing content files (see file sharing) containing audio, video, data or anything in digital format is very common, and real-time data, such as telephony traffic, is also passed using P2P technology.

A pure peer-to-peer network does not have the notion of clients or servers, but only equal peer nodes that simultaneously function as both "clients" and "servers" to the other nodes on the network. This model of network arrangement differs from the client-server model, where communication is usually to and from a central server. A typical example of a non-peer-to-peer file transfer is FTP, where the client and server programs are quite distinct: the clients initiate downloads and uploads, and the servers react to and satisfy these requests.

Operation of peer-to-peer networks

Three major types of P2P network are described below; a short sketch after the lists shows a node playing both the client and server roles.

Pure P2P:
• Peers act as both clients and servers
• There is no central server managing the network
• There is no central router

Hybrid P2P:
• Has a central server that keeps information on peers and responds to requests for that information.
• Peers are responsible for hosting the information (as the central server does not store files), for letting the central server know what files they want to share, and for making their shareable resources available to peers that request them.
• Route terminals are used as addresses, which are referenced by a set of indices to obtain an absolute address.

Mixed P2P:


• Has both pure and hybrid characteristics
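A very small sketch of the "pure P2P" idea above, in which each process both listens for peers (server role) and connects out to another peer (client role); the ports and messages are invented for illustration.

```python
import socket
import threading
import time

def serve(port: int) -> None:
    """Server role: accept one incoming peer and send it a greeting."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("127.0.0.1", port))
        s.listen()
        conn, _ = s.accept()
        with conn:
            conn.sendall(f"hello from the peer listening on {port}".encode())

def fetch(port: int) -> None:
    """Client role: connect to another peer and print its greeting."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect(("127.0.0.1", port))
        print(s.recv(1024).decode())

# Two peers; each listens for the other and also connects out to it.
listeners = [threading.Thread(target=serve, args=(p,)) for p in (9101, 9102)]
for t in listeners:
    t.start()
time.sleep(0.2)   # crude wait for the listeners to come up (fine for a sketch)
fetch(9102)       # the peer on 9101 acting as a client of the peer on 9102
fetch(9101)       # and vice versa
for t in listeners:
    t.join()
```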

Advantages of peer-to-peer networks

An important goal in peer-to-peer networks is that all clients provide resources, including bandwidth, storage space, and computing power. Thus, as nodes arrive and demand on the system increases, the total capacity of the system also increases. This is not true of a client-server architecture with a fixed set of servers, in which adding more clients could mean slower data transfer for all users. The distributed nature of peer-to-peer networks also increases robustness in case of failures, by replicating data over multiple peers and, in pure P2P systems, by enabling peers to find the data without relying on a centralized index server. In the latter case, there is no single point of failure in the system.

When the term peer-to-peer was used to describe the Napster network, it implied that the peer protocol was important, but, in reality, the great achievement of Napster was the empowerment of the peers (i.e., the fringes of the network) in association with a central index, which made it fast and efficient to locate available content. The peer protocol was just a common way to achieve this.

Attacks on peer-to-peer networks

Many peer-to-peer networks are under constant attack by people with a variety of motives. Examples include:
• poisoning attacks (e.g. providing files whose contents are different from the description)
• polluting attacks (e.g. inserting "bad" chunks/packets into an otherwise valid file on the network)
• defection attacks (users or software that make use of the network without contributing resources to it)
• insertion of viruses into carried data (e.g. downloaded or carried files may be infected with viruses or other malware)
• malware in the peer-to-peer network software itself (e.g. distributed software may contain spyware)
• denial-of-service attacks (attacks that may make the network run very slowly or break completely)
• filtering (network operators may attempt to prevent peer-to-peer network data from being carried)
• identity attacks (e.g. tracking down the users of the network and harassing or legally attacking them)
• spamming (e.g. sending unsolicited information across the network, not necessarily as a denial-of-service attack)

Most attacks can be defeated or controlled by careful design of the peer-to-peer network and through the use of encryption. P2P network defense is in fact closely related to the "Byzantine Generals Problem". However, almost any network will fail when the majority of the peers are trying to damage it, and many protocols may be rendered impotent by far smaller numbers of attackers.

Networks, protocols and applications
• Ares: Ares Galaxy, Warez P2P
• BitTorrent: ABC [Yet Another BitTorrent Client], Azureus, BitComet, BitSpirit, BitTornado, BitTorrent, BitTorrent.Net, G3 Torrent, mlMac, MLdonkey, QTorrent, Shareaza, Transmission, µTorrent
• Direct Connect network: BCDC++, DC++, NeoModus Direct Connect
• eDonkey2000: aMule, eDonkey2000, eMule, LMule, MLdonkey, mlMac, Shareaza, xMule, iMesh
• FastTrack: giFT, Grokster, iMesh (and its variants stripped of adware including iMesh Light), Kazaa (and its variants stripped of adware such as Kazaa Lite), KCeasy, Mammoth, MLdonkey, mlMac, Poisoned
• Freenet: Entropy (on its own network), Freenet
• Gnutella: Acquisition, BearShare, BetBug, Cabos, Gnucleus, Grokster, iMesh, gtk-gnutella, Kiwi Alpha, LimeWire, FrostWire, MLdonkey, mlMac, Morpheus, Phex, Poisoned, Swapper, Shareaza, XoloX
• Gnutella2: Adagio, Caribou, Gnucleus, iMesh, Kiwi Alpha, MLdonkey, mlMac, Morpheus, Shareaza, TrustyFiles
• Joltid PeerEnabler: Altnet, Bullguard, Joltid, Kazaa, Kazaa Lite
• Kad Network (using the Kademlia protocol): aMule, eMule, MLdonkey
• MANOLITO/MP2P: Blubster, Piolet, RockItNet
• MFPnet: Amicima
• Napster: Napigator, OpenNap, WinMX
• Peercasting-type networks: PeerCast, IceShare, Freecast
• LiveP2P-type networks: CoolStreaming, Cybersky-TV
• WPNP: WinMX
• Other networks: Akamai, ANts P2P, Applejuice, AsagumoWeb, Audiogalaxy, Avalanche, CAKE, Chord, The Circle, Coral, Dijjer, EarthStation 5, FileTopia, FotoSwap, GNUnet, Groove, Hamachi, iFolder, iGlance, konspire2b, Madster/Aimster, MUTE, OpenExt, OpenFT, P-Grid, Qnext, IRC, JXTA, Peersites, MojoNation, Mnet, Octoshape, OmilyX, Overnet, Scour, Skype, Solipsis, soribada, Soulseek, SPIN, Swarmcast, WASTE, Winny

An earlier generation of peer-to-peer systems was called "metacomputing" or classed as "middleware". These include Legion, Globus, Condor and ByteTornado.

Multi-network applications


• aMule (eDonkey network, Kad Network) (Linux, Mac OS X, FreeBSD, NetBSD, OpenBSD, Windows and Solaris Operating Environment) (open source)
• eMule (eDonkey network, Kad Network) (Windows, Linux) (open source)
• giFT (own OpenFT protocol, and with plugins FastTrack, eDonkey and Gnutella) (open source)
• Gnucleus (Gnutella, Gnutella2) (Windows) (open source)
• iMesh (FastTrack, eDonkey network, Gnutella, Gnutella2) (Microsoft Windows) (closed source)
• Kiwi Alpha (Gnutella, Gnutella2) (Windows) (closed source)
• MLdonkey (BitTorrent, eDonkey, FastTrack, Gnutella, Gnutella2, Kademlia) (Windows, Linux, Mac OS X) (open source)
• Morpheus (Gnutella, Gnutella2) (Windows) (closed source)
• Napshare (MUTE, Key Network) (Linux, Windows) (open source)
• Shareaza (BitTorrent, eDonkey, Gnutella, Gnutella2) (Windows) (open source)

Network topology

A network topology is the pattern of links connecting pairs of nodes of a network. A given node has one or more links to others, and the links can appear in a variety of different shapes. The simplest connection is a one-way link between two devices; a second return link can be added for two-way communication. Modern communications cables usually include more than one wire in order to facilitate this, although very simple bus-based networks have two-way communication on a single wire.

Network topology is determined only by the configuration of connections between nodes; it is therefore a part of graph theory. Distances between nodes, physical interconnections, transmission rates, and/or signal types are not a matter of network topology, although they may be affected by it in an actual physical network.
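Since a topology is just a graph describing which nodes link to which, here is a minimal sketch (node numbering is illustrative) of building star and ring topologies as adjacency lists:

```python
def star(n: int) -> dict[int, set[int]]:
    """Star: node 0 is the central hub, nodes 1..n-1 are peripherals."""
    links = {i: set() for i in range(n)}
    for i in range(1, n):
        links[0].add(i)
        links[i].add(0)
    return links

def ring(n: int) -> dict[int, set[int]]:
    """Ring: each node is linked to exactly two neighbours."""
    links = {i: set() for i in range(n)}
    for i in range(n):
        j = (i + 1) % n
        links[i].add(j)
        links[j].add(i)
    return links

print(star(5))   # {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(ring(4))   # {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
```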

Decentralisation

In a mesh topology there are at least two nodes with two or more paths between them. A special kind of mesh, limiting the number of hops between two nodes, is a hypercube. The number of arbitrary forks in mesh networks makes them more difficult to design and implement, but their decentralized nature makes them very useful. This is similar in some ways to a grid network, where a linear or ring topology is used to connect systems in multiple directions. A multi-dimensional ring has a toroidal (torus) topology, for instance.

A fully connected, complete, or full mesh topology is a network topology in which there is a direct link between every pair of nodes. In a fully connected network with n nodes, there are n(n-1)/2 direct links (a short calculation after the list below illustrates this). Networks designed with this topology are usually very expensive to set up, but have a high degree of reliability due to the multiple paths data can travel on. This topology is mostly seen in military applications.

Hybrids

Hybrid networks use a combination of any two or more topologies in such a way that the resulting network does not have one of the standard forms. For example, a tree network connected to a tree network is still a tree network, but two star networks connected together (known as an extended star) exhibit a hybrid network topology. A hybrid topology is always produced when two different basic network topologies are connected. Two common examples of hybrid networks are the star ring network and the star bus network:
• A star ring network consists of two or more star topologies connected using a multistation access unit (MAU) as a centralized hub.
• A star bus network consists of two or more star topologies connected using a bus trunk (the bus trunk serves as the network's backbone).
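A one-line check of the full mesh link count mentioned above:

```python
def full_mesh_links(n: int) -> int:
    """Direct links needed so that every pair of n nodes is connected: n(n-1)/2."""
    return n * (n - 1) // 2

for n in (4, 10, 100):
    print(n, full_mesh_links(n))   # 4 -> 6, 10 -> 45, 100 -> 4950
```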

While grid networks have found popularity in high-performance computing applications, some systems have used genetic algorithms to design custom networks that have the fewest possible hops between different nodes. Some of the resulting layouts are nearly incomprehensible, although they function quite well.

Bus network

Image showing bus network layout

A bus network is a network architecture in which a set of clients are connected via a shared communications line, called a bus. There are several common instances of the bus architecture, including one on the motherboard of most computers and those in some versions of Ethernet networks.

Bus networks are the simplest way to connect multiple clients, but often run into problems when two clients want to transmit at the same time on the same bus. Systems which use a bus network architecture therefore normally have some scheme of collision handling or collision avoidance for communication on the bus, quite often using Carrier Sense Multiple Access or a bus master which controls access to the shared bus resource; a rough sketch of collision handling with random backoff is given below.

A true bus network is passive: the computers on the bus simply listen for a signal; they are not responsible for moving the signal along. However, many active architectures can also be described as a "bus", as they provide the same logical functions as a passive bus; for example, switched Ethernet can still be regarded as a logical bus network, if not a physical one. Indeed, the hardware may be abstracted away completely in the case of a software bus. With the dominance of switched Ethernet over passive Ethernet, passive bus networks are uncommon in wired networks. However, almost all current wireless networks can be viewed as examples of passive bus networks, with radio propagation serving as the shared passive medium.
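A toy simulation of contention on a shared bus with random backoff, in the spirit of CSMA-style collision handling; the station count, frame counts and backoff range are invented for illustration and do not correspond to any particular standard.

```python
import random

def simulate_shared_bus(stations: int = 4, frames_each: int = 3, seed: int = 1) -> int:
    """Each station retries with a random backoff until its frame gets the bus alone."""
    random.seed(seed)
    pending = {s: frames_each for s in range(stations)}   # frames left to send
    next_try = {s: 0 for s in range(stations)}            # earliest slot a station may transmit
    slot = 0
    while any(pending.values()):
        ready = [s for s, n in pending.items() if n and next_try[s] <= slot]
        if len(ready) == 1:                   # exactly one sender: the frame goes through
            pending[ready[0]] -= 1
        elif len(ready) > 1:                  # collision: everyone backs off by a random delay
            for s in ready:
                next_try[s] = slot + random.randint(1, 8)
        slot += 1
    return slot

print("time slots used:", simulate_shared_bus())
```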
Advantages and Disadvantages of a Bus Network

Advantages

• Easy to implement and extend
• Well suited for temporary networks (quick setup)
• Typically the cheapest topology to implement
• Failure of one station does not affect others

Disadvantages
• Difficult to administer/troubleshoot
• Limited cable length and number of stations
• A cable break can disable the entire network
• Maintenance costs may be higher in the long run
• Performance degrades as additional computers are added or under heavy traffic
• Low security (all computers on the bus can see all data transmissions)
• One virus in the network will affect all of them (though not as badly as in a star or ring network)
• Proper termination is required (the loop must form a closed path)

Star network

Image showing star network layout

Star networks are one of the most common computer network topologies. In its simplest form, a star network consists of one central switch, hub or computer, which acts as a router to transmit messages. When applied to a bus-based network, this central hub rebroadcasts all transmissions received from any peripheral node to all peripheral nodes on the network, sometimes including the originating node (a small sketch after the lists below illustrates this rebroadcast behaviour). All peripheral nodes may thus communicate with all others by transmitting to, and receiving from, the central node only. The failure of a transmission line linking any peripheral node to the central node will result in the isolation of that peripheral node from all others, but the rest of the systems will be unaffected.

If the central node is passive, the originating node must be able to tolerate the reception of an echo of its own transmission, delayed by the two-way transmission time (i.e. to and from the central node) plus any delay generated in the central node. An active star network has an active central node that usually has the means to prevent echo-related problems.

Comparing star networks to other types of network

Advantages
• Easy to implement and extend, even in large networks
• Well suited for temporary networks (quick setup)
• The failure of a non-central node will not have major effects on the functionality of the network
• Reliable, market-proven system
• No problems with data collisions, since each station has its own cable to the server/hub
• Security can be implemented in the hub/switch

Disadvantages
• Limited cable length and number of stations
• Maintenance costs may be higher in the long run
• Failure of the central node can disable the entire network
• One virus in the network can affect them all
• Depending on the transmission media, length limitations may be imposed from the central location
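The hub rebroadcast behaviour mentioned above, reduced to a few lines; the node names and frame format are purely illustrative.

```python
def hub_rebroadcast(frame: str, source: str, peripherals: list[str],
                    echo_to_source: bool = False) -> dict[str, str]:
    """A hub-style central node: forward a frame from one peripheral to all the others."""
    targets = [p for p in peripherals if echo_to_source or p != source]
    return {p: frame for p in targets}

nodes = ["A", "B", "C", "D"]
print(hub_rebroadcast("hello from A", "A", nodes))
# {'B': 'hello from A', 'C': 'hello from A', 'D': 'hello from A'}
```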

Ring network

Image showing ring network layout

A ring network is a topology of computer networks in which each node is connected to two other nodes, so as to form a ring. The most popular example is a token ring network. Ring networks tend to be inefficient when compared to star networks because data must travel through more points before reaching its destination. For example, if a given ring network has eight computers on it, to get from computer one to computer four the data must travel from computer one, through computers two and three, to its destination at computer four. It could also go from computer one through eight, seven, six and five until reaching four, but this route is slower because it travels through more computers.
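A quick check of that hop counting, assuming data may travel either way around the ring; the function name is mine.

```python
def ring_hops(src: int, dst: int, size: int) -> int:
    """Fewest links to traverse between two stations on a ring of `size` nodes."""
    clockwise = (dst - src) % size
    return min(clockwise, size - clockwise)

# Eight computers, going from computer 1 to computer 4:
print(ring_hops(1, 4, 8))   # 3 links, passing through computers 2 and 3
# The long way round (through 8, 7, 6 and 5) would take 5 links instead.
```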





Advantages and Disadvantages of a Ring Network

Advantages
• All stations have equal access
• Each node on the ring acts as a repeater, allowing ring networks to span greater distances than other physical topologies
• Because data travels in one direction, high transmission speeds are possible
• When coaxial cable is used to create a ring network, the service becomes much faster

Disadvantages
• Often the most expensive topology
• If one node fails, the rest of the network could fail as well
• Damage to the ring will affect the whole network

Mesh networking


Image showing mesh network layout

Mesh networking is a way to route data, voice and instructions between nodes. It allows for continuous connections and reconfiguration around blocked paths by "hopping" from node to node until a connection can be established. Mesh networks are self-healing: the network can still operate even when a node breaks down or a connection goes bad, so a very reliable network is formed. The concept is applicable to wireless networks, wired networks, and software interaction.

A mesh network is a networking technique which allows inexpensive peer network nodes to supply backhaul services to other nodes in the same network. It effectively extends a network by sharing access to higher-cost network infrastructure. Mesh networks differ from other networks in that the component parts can all connect to each other.

An MIT project developing "hundred dollar laptops" for under-privileged schools in developing nations plans to use mesh networking to create a robust and inexpensive infrastructure for the students who will receive the laptops. The instantaneous connections made by the laptops would reduce the need for an external infrastructure such as the Internet to reach all areas, because a connected node could share the connection with nodes nearby.
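A small sketch of the "hop from node to node until a connection can be established" idea: breadth-first search over a mesh whose links are given as an adjacency list. The example graph is made up for illustration.

```python
from collections import deque

def find_route(links: dict[str, set[str]], src: str, dst: str) -> list[str] | None:
    """Return one shortest node-to-node route through the mesh, or None if unreachable."""
    parents = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            route = []
            while node is not None:
                route.append(node)
                node = parents[node]
            return route[::-1]
        for neighbour in links.get(node, set()):
            if neighbour not in parents:
                parents[neighbour] = node
                queue.append(neighbour)
    return None

mesh = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"},
        "D": {"B", "C", "E"}, "E": {"D"}}
print(find_route(mesh, "A", "E"))               # e.g. ['A', 'B', 'D', 'E']
mesh["B"].discard("D"); mesh["D"].discard("B")  # a link goes bad ...
print(find_route(mesh, "A", "E"))               # ... and the route heals: ['A', 'C', 'D', 'E']
```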

Star-bus network

A star-bus network is a combination of a star network and a bus network. A hub (or concentrator) is used to connect the nodes to the network. It combines the linear bus and star topologies and operates over one main communication line.

Server farm

A typical server farm.

A server farm is a collection of computer servers, usually maintained by an enterprise to accomplish server needs far beyond the capability of one machine. Often, server farms will have both a primary and a backup server allocated to a single task, so that in the event of the failure of the primary server, a backup server will take over the primary server's function. Server farms are typically co-located with the network switches and/or routers which enable communication between the different parts of the cluster and the users of the cluster.

Server farms are commonly used for cluster computing. Many modern supercomputers consist of giant server farms of high-speed processors connected by either Gigabit Ethernet or custom interconnects such as Myrinet. Another common use of server farms is web hosting.

XML appliance

DataPower XA35 XML Accelerator

Sarvega XML Content Router

An XML appliance is a separate computer system with deliberately narrow functionality that exchanges XML messages with other computer systems. XML appliances are designed specifically to be easy to install, configure and manage. They frequently include specialized hardware and software to accelerate the processing of XML messages.

History of XML appliances

The first XML appliances were created by engineers who needed to perform a large volume of XML transformations. They built specialized application-specific integrated circuits (ASICs) that performed transformations up to 100 times faster than software-only solutions.

Although there were some early adopters of these systems, use was initially restricted to large e-commerce sites such as Yahoo! and Amazon. Early entrants to this field include vendors such as DataPower (now owned by IBM) and Sarvega (now owned by Intel). A second round of XML appliances started to appear around 2003, when these devices were used to exchange SOAP XML messages between computers on public networks. These messages required advanced security features such as encryption, digital signatures and denial-of-service attack prevention. Because the setup and configuration of software-only systems was time consuming, companies could save a great deal of money by using appliances that came pre-packaged with the WS-Security standards built in.

Common features of XML appliances

Tarari Hardware XML Processor
• They assume that most messages entering or leaving the appliance are well-formed XML files.
• They have customized hardware and software optimized to make parsing and analysis of XML files efficient. The DataPower XG4 XML chipset and the Tarari RAX-XSLT chipset are examples of such hardware.
• They have custom software to make the appliances easy to install, configure and manage.
• They have built-in support for many XML standards such as XSLT, XPath, SOAP and WS-Security.

Classification of XML appliances

Although "XML appliance" is the most general term for these devices, most vendors use alternative terminology that describes more specific functionality. The following are alternative names used for XML appliances (a small sketch after the list illustrates the kind of XPath evaluation the accelerators speed up):


• XML accelerators: devices that typically use custom hardware to accelerate XPath processing. This hardware typically provides a boost of between 10 and 100 times in the number of messages per second that can be processed.
• Integration appliances (also known as application routers): devices designed to make the integration of computer systems easier.
• XML firewalls (also known as XML security gateways): devices that support the WS-Security standards. These appliances typically offload encryption and decryption to specialized hardware devices.
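For illustration only, this is the kind of XPath query that such hardware accelerates, here evaluated in software with Python's standard library; the document and element names are invented.

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<orders>
  <order id="1"><total currency="USD">120.00</total></order>
  <order id="2"><total currency="EUR">80.50</total></order>
</orders>
""")

# A simple XPath-style query: the total of every order in the message.
for total in doc.findall("./order/total"):
    print(total.get("currency"), total.text)
# USD 120.00
# EUR 80.50
```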

XML appliance vendors
• Cast Iron Systems
• InfoTone Communications
• DataPower
• Reactivity
• Sarvega

Computer bus

In computer architecture, a bus is a subsystem that transfers data or power between computer components inside a computer or between computers. Unlike a point-to-point connection, a bus can logically connect several peripherals over the same set of wires. Each bus defines its set of connectors for physically plugging devices, cards or cables together.

Early computer buses were literally parallel electrical buses with multiple connections, but the term is now used for any physical arrangement that provides the same logical functionality as a parallel electrical bus. Modern computer buses can use both parallel and bit-serial connections, and can be wired in either a multidrop (electrically parallel) or daisy-chain topology, or connected by switched hubs, as in the case of USB.

Bus topology

In a network, the master scheduler controls the data traffic. If data is to be transferred, the requesting computer sends a message to the scheduler, which puts the request into a queue. The message contains an identification code which is broadcast to all nodes of the network. The scheduler works out priorities and notifies the receiver as soon as the bus is available. The identified node takes the message and performs the data transfer between the two computers. Once the data transfer is complete, the bus becomes free for the next request in the scheduler's queue. The benefit of a bus is that any computer can be accessed directly and messages can be sent in a relatively simple and fast way; the disadvantage is that a scheduler is needed to assign frequencies and priorities to organize the traffic.
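A toy version of the scheduler behaviour described above, using a simple priority queue; the priorities and node names are invented for illustration.

```python
import heapq

class BusScheduler:
    """Queues transfer requests and grants the shared bus to one request at a time."""

    def __init__(self) -> None:
        self._queue = []   # entries are (priority, arrival order, requester, receiver)
        self._order = 0

    def request(self, requester: str, receiver: str, priority: int = 5) -> None:
        heapq.heappush(self._queue, (priority, self._order, requester, receiver))
        self._order += 1

    def grant_next(self) -> str | None:
        """Give the bus to the highest-priority waiting request (lower number wins)."""
        if not self._queue:
            return None
        _, _, requester, receiver = heapq.heappop(self._queue)
        return f"bus granted: {requester} -> {receiver}"

sched = BusScheduler()
sched.request("node 3", "node 7")
sched.request("node 1", "node 2", priority=1)   # more urgent
print(sched.grant_next())   # bus granted: node 1 -> node 2
print(sched.grant_next())   # bus granted: node 3 -> node 7
```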
Examples of internal computer buses

Parallel

• CAMAC, for instrumentation systems
• Extended ISA or EISA
• Industry Standard Architecture or ISA
• Low Pin Count or LPC
• MicroChannel or MCA
• MBus

• Multibus, for industrial systems
• NuBus or IEEE 1196
• Peripheral Component Interconnect or PCI
• S-100 bus or IEEE 696, used in the Altair and similar microcomputers
• SBus or IEEE 1496
• VESA Local Bus or VLB or VL-bus (for video cards)
• VMEbus, the VERSAmodule Eurocard bus
• STD Bus, for 8- and 16-bit microprocessor systems

Serial
• 1-Wire
• HyperTransport
• I2C
• PCI Express or PCIe
• Serial Peripheral Interface Bus or SPI bus

Examples of external computer buses

Parallel

• Advanced Technology Attachment or ATA (aka PATA, IDE, EIDE, ATAPI, etc.), a disk/tape peripheral attachment bus (the original ATA is parallel, but see also the more recent Serial ATA below)
• Centronics parallel (generally connects a single device, occasionally two daisy-chained)
• HIPPI, HIgh Performance Parallel Interface
• IEEE-488 (aka GPIB, General-Purpose Instrumentation Bus, and HPIB, Hewlett-Packard Instrumentation Bus)
• PCMCIA, now known as PC Card, much used in laptop computers and other portables, but fading with the introduction of USB and built-in network and modem connections
• SCSI, Small Computer System Interface, a disk/tape peripheral attachment bus

Serial
• ACCESS.bus (A.b)
• Apple Desktop Bus (ADB)
• Controller Area Network (CAN)
• Serial Peripheral Interface (SPI)
• I²C
• Fibre Channel
• IEEE 1394 (FireWire)
• RS-485
• Serial ATA or SATA
• Serial Storage Architecture (SSA)
• Universal Serial Bus (USB)

Proprietary


• Floppy drive connector

Examples of internal/external computer buses
• Futurebus
• InfiniBand
• QuickRing
• SCI

Electrical bus

Symbolic representation of a bus: the thick line is the bus; the slash through the bus arrow and the "3" indicate that it represents three wires.

An electrical bus (sometimes spelled buss) is a physical electrical interface where many devices share the same electric connection. This allows signals to be transferred between devices, so that information or power can be shared. A bus often takes the form of an array of wires that terminate at a connector, which allows a device to be plugged onto the bus.
• Buses are used for connecting components of a computer; a common example is the PCI bus in PCs (see computer bus).
• Buses are used for communicating between computers, often microprocessors (see computer bus).
• Buses are used for distributing electrical power to the components of a system. The (usually) thick conductors used are called busbars. In an electrical laboratory, for example, a bare busbar will sometimes line the wall, to be used by engineers and technicians for its high current-carrying capacity, which allows a convenient approximation to zero voltage (called ground in the US and earth in the UK).



• In the analysis of an electric power network, a "bus" is any node of the single-line diagram at which voltage, current, power flow, or other quantities are to be evaluated. These may or may not correspond to heavy electrical conductors at a substation.

ARCNET

ARCNET (also written ARCnet, an acronym for Attached Resource Computer NETwork) is a local area network (LAN) protocol, similar in purpose to Ethernet or Token Ring. ARCNET was the first widely available networking system for microcomputers and became popular in the 1980s for office automation tasks. It has since gained a following in the embedded systems market, where certain features of the protocol are especially useful.

AppleTalk

AppleTalk is a suite of protocols developed by Apple Computer for computer networking. It was included in the original Macintosh (1984) and is now deprecated by Apple in favor of TCP/IP networking.

Addressing

An AppleTalk address was a 4-byte quantity, consisting of a two-byte network number, a one-byte node number, and a one-byte socket number. Of these, only the network number required any configuration, being obtained from a router. Each node dynamically chose its own node number, according to a protocol which handled contention between different nodes accidentally choosing the same number. For socket numbers, a few well-known numbers were reserved for special purposes specific to the AppleTalk protocol itself; apart from these, all application-level protocols were expected to use dynamically assigned socket numbers at both the client and server end.

Because of this dynamism, users could not be expected to access services by specifying their address. Instead, all services had names which, being chosen by humans, could be expected to be meaningful to users, and could also be long enough to minimize the chance of conflicts. Note that, because a name translated to an address which included a socket number as well as a node number, a name in AppleTalk mapped directly to a service being provided by a machine, entirely separate from the name of the machine itself. Thus, services could be moved to a different machine and, so long as they kept the same service name, there was no need for users to do anything differently to continue accessing the service. And the same machine could host any number of instances of services of the same type, without any network connection conflicts.

Contrast this with A records in the DNS, where a name translates only to a machine address, not including the port number that might be providing a service. Thus, if people are accustomed to using a particular machine name to access a particular service, their access will break when the service is moved to a different machine.

This can be mitigated somewhat by insisting on CNAME records that indicate the service rather than actual machine names, but there is no way of guaranteeing that users will follow such a convention. (Some newer protocols, such as Kerberos and Active Directory, use DNS SRV records to identify services by name, which is much closer to the AppleTalk model.)

Protocols

AppleTalk Address Resolution Protocol

AARP resolves AppleTalk addresses to physical-layer (usually MAC) addresses. It is functionally equivalent to ARP. AARP is a fairly simple system. When powered on, an AppleTalk machine broadcasts an AARP probe packet asking for a network address, intending to hear back from controllers such as routers. If no address is provided, one is picked at random from the "base subnet", 0. It then broadcasts another packet saying "I am selecting this address", and waits to see if anyone else on the network complains. If another machine already has that address, it picks another address and keeps trying until it finds a free one. On a network with many machines it may take several tries before a free address is found, so for performance purposes the successful address is "written down" in NVRAM and used as the default address in the future. This means that in most real-world setups, where machines are added a few at a time, only one or two tries are needed before the address effectively becomes constant.

AppleTalk Data Stream Protocol

This was a comparatively late addition to the AppleTalk protocol suite, made when it became clear that a TCP-style reliable connection-oriented transport was needed. Significant differences from TCP were:
• a connection attempt could be rejected
• there were no "half-open" connections; once one end initiated a tear-down of the connection, the whole connection would be closed (i.e. ADSP is full-duplex, not dual simplex)

Apple Filing Protocol

The Apple Filing Protocol (AFP), formerly AppleTalk Filing Protocol, is the protocol for communicating with AppleShare file servers. Built on top of ASP, it provided services for authenticating users (extensible to different authentication methods, including two-way random-number exchange) and for performing operations specific to the Macintosh HFS filesystem.

AppleTalk Session Protocol

ASP was an intermediate protocol, built on top of ATP, which in turn was the foundation of AFP. It provided basic services for requesting responses to arbitrary commands and performing out-of-band status queries. It also allowed the server to send asynchronous attention messages to the client.

AppleTalk Transaction Protocol

ATP was the original reliable session-level protocol for AppleTalk, built on top of DDP. At the time it was being developed, a full reliable connection-oriented protocol like TCP was considered too expensive to implement for most of the intended uses of AppleTalk. Thus, ATP was a simple request/response exchange, with no need to set up or tear down connections. An ATP request packet could be answered by up to eight response packets. The requestor then sent an acknowledgement packet containing a bit mask indicating which of the response packets it had received, so the responder could retransmit the remainder (a short sketch of this bitmask bookkeeping is given after these protocol descriptions). ATP could operate in either "at-least-once" mode or "exactly-once" mode. Exactly-once mode was essential for operations which were not idempotent; in this mode, the responder kept a copy of the response buffers in memory until successful receipt of a release packet from the requestor, or until a timeout elapsed. This way, it could respond to duplicate requests with the same transaction ID by resending the same response data, without performing the actual operation again.

Datagram Delivery Protocol

DDP was the lowest-level data-link-independent transport protocol. It provided a datagram service with no guarantees of delivery. All application-level protocols, including the infrastructure protocols NBP, RTMP and ZIP, were built on top of DDP.

Name Binding Protocol

NBP was a dynamic, distributed system for managing AppleTalk names. When a service started up on a machine, it registered a name for itself on that machine, as chosen by a human administrator. At this point, NBP provided a system for checking that no other machine had already registered the same name. Later, when a client wanted to access that service, it used NBP to query machines to find that service. NBP provided browseability ("what are the names of all the services available?") as well as the ability to find a service with a particular name. As would be expected from Apple, names were truly human readable, containing spaces and upper and lower case letters, and including support for searching.
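A rough sketch of the ATP acknowledgement bitmask mentioned above: up to eight response packets, one bit each, and the acknowledgement tells the responder which ones to retransmit. The function names are mine, not part of the protocol definition.

```python
def ack_bitmask(received_packets: set[int]) -> int:
    """Build an 8-bit mask: bit i set means response packet i (0..7) arrived."""
    return sum(1 << i for i in received_packets if 0 <= i < 8)

def packets_to_retransmit(mask: int, total_sent: int) -> list[int]:
    """Given the requestor's acknowledgement mask, list the responses to resend."""
    return [i for i in range(total_sent) if not mask & (1 << i)]

# The responder sent 5 response packets; packets 0, 1 and 3 arrived.
mask = ack_bitmask({0, 1, 3})
print(f"mask = {mask:08b}")             # mask = 00001011
print(packets_to_retransmit(mask, 5))   # [2, 4]
```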

Printer Access Protocol

PAP was the standard way of communicating with PostScript printers. It was built on top of ATP. When a PAP connection was opened, each end sent the other an ATP request which basically meant "send me more data". The client's response to the server was to send a block of PostScript code, while the server could respond with any diagnostic messages that might be generated as a result, after which another "send-more-data" request was sent. This use of ATP provided automatic flow control; each end could only send data to the other end if there was an outstanding ATP request to respond to.

PAP also provided for out-of-band status queries, handled by separate ATP transactions. Even while it was busy servicing a print job from one client, a PAP server could continue to respond to status requests from any number of other clients. This allowed other Macintoshes on the LAN that were waiting to print to display status messages indicating that the printer was busy, and what job it was busy with.

Routing Table Maintenance Protocol

RTMP was the protocol by which routers kept each other informed about the topology of the network. This was the only part of AppleTalk that required periodic unsolicited broadcasts: every 10 seconds, each router had to send out a list of all the network numbers it knew about and how far away it thought they were.

Zone Information Protocol

ZIP was the protocol by which AppleTalk network numbers were associated with zone names. A zone was a subdivision of the network that made sense to humans (for example, "Accounting Department"); but while a network number had to be assigned to a topologically contiguous section of the network, a zone could include several discontiguous portions of the network.

Physical Implementation

The initial default hardware implementation for AppleTalk was a high-speed serial protocol known as LocalTalk, which used the Macintosh's built-in RS-422 ports at 230.4 kbit/s. LocalTalk used a splitter box in the RS-422 port to provide an upstream and a downstream cable from a single port. The system was slow by today's standards, but at the time the additional cost and complexity of networking PC machines was such that it was common for Macs to be the only networked machines in an office. Other physical implementations were also available. One common replacement for LocalTalk was PhoneNet, a third-party solution (from a company called Farallon) that also used the RS-422 port and was indistinguishable from LocalTalk as far as Apple's LocalTalk port drivers were concerned, but ran over the two unused wires in existing phone cabling. PhoneNet was considerably less expensive to install and maintain. Ethernet and Token Ring were also supported, known as EtherTalk and TokenTalk respectively.

AppleTalk as Ethernet became generally popular in the PC industry throughout the 1990s.

Networking Model

OSI Model       Corresponding AppleTalk layers
Application     Apple Filing Protocol (AFP)
Presentation    Apple Filing Protocol (AFP)
Session         Zone Information Protocol (ZIP), AppleTalk Session Protocol (ASP), AppleTalk Data Stream Protocol (ADSP)
Transport       AppleTalk Transaction Protocol (ATP), AppleTalk Echo Protocol (AEP), Name Binding Protocol (NBP), Routing Table Maintenance Protocol (RTMP)
Network         Datagram Delivery Protocol (DDP)
Data link       EtherTalk Link Access Protocol (ELAP), LocalTalk Link Access Protocol (LLAP), TokenTalk Link Access Protocol (TLAP), Fiber Distributed Data Interface (FDDI)
Physical        LocalTalk driver, Ethernet driver, Token Ring driver, FDDI driver

Cross Platform Solutions

The BSD and Linux operating systems support AppleTalk through an open source project called Netatalk, which implements the complete protocol suite and allows them to both act as native file or print servers for Macintoshes and print to LocalTalk printers over the network. In addition, Columbia University released the Columbia AppleTalk Package (CAP), which implemented the protocol suite for various Unix flavors including Ultrix, SunOS, *BSD and IRIX. This package is no longer actively maintained.

Asynchronous Transfer Mode

Asynchronous Transfer Mode, or ATM for short, is a cell relay network protocol which encodes data traffic into small fixed-sized cells (53 bytes: 48 bytes of data and 5 bytes of header information) instead of variable-sized packets (sometimes known as frames) as in packet-switched networks (such as the Internet Protocol or Ethernet). It is a connection-oriented technology, in which a connection is established between the two endpoints before the actual data exchange begins.

Introduction

ATM was intended to provide a single unified networking standard that could support both synchronous channel networking (PDH, SDH) and packet-based networking (IP, Frame Relay, etc.), whilst supporting multiple levels of quality of service for packet traffic. ATM sought to resolve the conflict between circuit-switched networks and packet-switched networks by mapping both bitstreams and packet-streams onto a stream of small fixed-size 'cells' tagged with virtual circuit identifiers. The cells are typically sent on demand within a synchronous time-slot pattern in a synchronous bit-stream: what is asynchronous here is the sending of the cells, not the low-level bitstream that carries them. In its original conception, ATM was to be the enabling technology of the 'Broadband Integrated Services Digital Network' (B-ISDN) that would replace the existing PSTN. The full suite of ATM standards provides definitions for layer 1 (physical connections), layer 2 (data link layer) and layer 3 (network) of the classical OSI seven-layer networking model. The ATM standards drew on concepts from the telecommunications community, rather than the computer networking community. For this reason, extensive provision was made for integration of most existing telco technologies and conventions into ATM. As a result, ATM is a highly complex technology, with features intended for applications ranging from global telco networks to private local area computer networks. ATM has been a partial success as a technology, with widespread deployment, but

generally only used as a transport for IP traffic; its goal of providing a single integrated technology for LANs, public networks, and user services has largely failed.

Successes and Failures of ATM Technology

Numerous telcos have implemented wide-area ATM networks, and many ADSL implementations use ATM. However, ATM has failed to gain wide use as a LAN technology, and its great complexity has held back its full deployment as the single integrating network technology in the way that its inventors originally intended. Many people, particularly in the Internet protocol-design community, considered this vision to be mistaken. Their argument went something like this: we know that there will always be both brand-new and obsolescent link-layer technologies, particularly in the LAN area, and it is fair to assume that not all of them will fit neatly into the SDH model that ATM was designed for. Therefore, some sort of protocol is needed to provide a unifying layer over both ATM and non-ATM link layers, and ATM itself cannot fill that role. Conveniently, we have this protocol called "IP" which already does that. Ergo, there is no point in implementing ATM at the network layer. In addition, the need for cells to reduce jitter has disappeared as transport speeds increased (see below), and improvements in voice over IP have made the integration of speech and data possible at the IP layer, again removing the incentive for ubiquitous deployment of ATM. Most telcos are now planning to integrate their voice network activities into their IP networks, rather than their IP networks into the voice infrastructure. Many technically sound ideas from ATM were adopted by MPLS, a generic Layer 2 packet switching protocol. ATM remains widely deployed, and is used as a multiplexing service in DSL networks, where its compromises fit DSL's low-data-rate needs well. In turn, DSL networks support IP (and IP services such as VoIP) via PPP over ATM. ATM will remain deployed for some time in higher-speed interconnects where carriers have already committed themselves to existing ATM deployments; ATM is used here as a way of unifying PDH/SDH traffic and packet-switched traffic under a single infrastructure. However, ATM is increasingly challenged by the speed and traffic shaping requirements of converged networks. In particular, the complexity of SAR imposes a performance bottleneck, as the fastest SARs known run at 2.5 Gbit/s and have limited traffic shaping capabilities. Currently it seems likely that Ethernet implementations (10 Gigabit Ethernet, Metro Ethernet) will replace ATM in many locations.

Recent developments

Interest in using native ATM for carrying live video and audio has increased recently. In these environments, low latency and very high quality of service are required to handle linear audio and video streams. Towards this goal, standards are being developed such as

AES47 (IEC 62365), which provides a standard for professional uncompressed audio transport over ATM. This is worth comparing with professional video over IP.

ATM Concepts

Why Cells?

The motivation for the use of small data cells was the reduction of jitter (delay variance, in this case) in the multiplexing of data streams; reducing this (and also end-to-end round-trip delays) is particularly important when carrying voice traffic. This is because the conversion of digitized voice back into an analog audio signal is an inherently real-time process, and to do a good job, the codec that does this needs an evenly spaced (in time) stream of data items. If the next data item is not available when it is needed, the codec has no choice but to produce silence - and if the data does arrive, but late, it is useless, because the time period when it should have been converted to a signal has already passed. Now consider a speech signal reduced to packets, and forced to share a link with bursty data traffic (i.e. some of the data packets will be large). No matter how small the speech packets could be made, they would always encounter full-size data packets, and under normal queuing conditions, might experience maximum queuing delays. At the time ATM was designed, 155 Mbit/s SDH (135 Mbit/s payload) was considered a fast optical network link, and many PDH links in the digital network were considerably slower, ranging from 1.544 to 45 Mbit/s in the USA (2 to 34 Mbit/s in Europe). At this rate, a typical full-length 1500 byte (12000 bit) data packet would take 89 µs to transmit. On a lower-speed link, such as a 1.544 Mbit/s T1 link, a 1500 byte packet would take up to 7.8 milliseconds (see the short calculation after the list below). A queueing delay induced by several such data packets might be several times the figure of 7.8 ms, in addition to any packet generation delay in the shorter speech packet. This was clearly unacceptable for speech traffic, which needs to have low jitter in the data stream being fed into the codec if it is to produce good-quality sound. A packet voice system can provide this low jitter in a number of ways:


• Have a playback buffer between the network and the codec, one large enough to tide the codec over almost all the jitter in the data. This allows smoothing out the jitter, but the delay introduced by passage through the buffer would be such that echo cancellers would be required even in local networks; this was considered too expensive at the time. Also, it would have increased the delay across the channel, and human conversational mechanisms tend not to work well with high-delay channels.
• Build a system which can inherently provide low jitter (and minimal overall delay) to traffic which needs it.
• Operate on a 1:1 user basis (i.e., a dedicated pipe).
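The serialization-delay figures quoted above follow directly from the packet size and the link rate; a couple of lines of Python reproduce them (the 135 Mbit/s figure is the SDH payload rate mentioned in the text).

# Serialization delay = bits on the wire / link bit rate.
def serialization_delay(bits, link_bit_rate):
    return bits / link_bit_rate

packet_bits = 1500 * 8                                 # full-length 1500-byte packet
print(serialization_delay(packet_bits, 135e6))         # ~8.9e-05 s, i.e. about 89 microseconds
print(serialization_delay(packet_bits, 1.544e6))       # ~7.8e-03 s, i.e. about 7.8 milliseconds on a T1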





Cells In Practice

The rules for segmenting and reassembling packets and streams into cells are known as ATM Adaptation Layers. The most important two are AAL 1, used for streams, and AAL 5, used for most types of packets. Which AAL is in use for a given cell is not encoded in the cell. Instead, it is negotiated by or configured at the endpoints on a per-virtual-connection basis. Since ATM was designed, networks have become much faster. As of 2001, a 1500 byte (12000 bit) full-size Ethernet packet takes only 1.2 µs to transmit on a 10 Gbit/s optical network, removing the need for small cells to reduce jitter. Some consider that this removes the need for ATM in the network backbone. Additionally, the hardware for implementing the service adaptation for IP packets is expensive at very high speeds. Specifically, the cost of segmentation and reassembly (SAR) hardware at OC-3 and above speeds makes ATM less competitive for IP than Packet over SONET (POS). SAR performance limits mean that the fastest IP router ATM interfaces are OC-12 to OC-48 (STM-4 to STM-16), while (as of 2004) POS can operate at OC-192 (STM-64), with higher speeds expected in the future. On slow links (2 Mbit/s and below) ATM still makes sense, and this is why so many ADSL systems use ATM as an intermediate layer between the physical link layer and a Layer 2 protocol like PPP or Ethernet. At these lower speeds, ATM's ability to carry multiple logical circuits on a single physical or virtual medium provides a compelling business advantage. DSL can be used as an access method for an ATM network, allowing a DSL termination point in a telephone central office to connect to many internet service providers across a wide-area ATM network. In the United States, at least, this has allowed DSL providers to provide DSL access to the customers of many internet service providers. Since one DSL termination point can support multiple ISPs, the economic feasibility of DSL is substantially improved.

Why Virtual Circuits?

ATM is a channel-based transport layer. This is encompassed in the concept of the Virtual Path (VP) and Virtual Circuit (VC). Every ATM cell has an 8- or 12-bit Virtual Path Identifier (VPI) and 16-bit Virtual Circuit Identifier (VCI) pair defined in its header. The length of the VPI varies according to whether the cell is sent on the user-network interface (on the edge of the network) or on the network-network interface (inside the network). As these cells traverse an ATM network, switching is achieved by changing the VPI/VCI values. Although the VPI/VCI values are not necessarily consistent from one end of the connection to the other, the concept of a circuit is consistent (unlike IP, where any given packet could get to its destination by a different route than the others). Another advantage of the use of virtual circuits is the ability to use them as a multiplexing layer, allowing different services (such as voice, Frame Relay, n*64 channels, IP, SNA, etc.) to share a common ATM connection without interfering with one another.
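To illustrate the label-swapping behaviour described above, here is a toy Python sketch of a VPI/VCI switching table. The table entries and port numbers are invented for the example; a real ATM switch programs such tables through signalling or management, and also polices traffic per circuit.

# Toy VPI/VCI label switching (illustration only, not a real ATM switch).
switching_table = {
    # (in_port, vpi, vci) -> (out_port, new_vpi, new_vci)
    (1, 0, 33): (3, 5, 120),
    (2, 4, 17): (1, 0, 42),
}

def switch_cell(in_port, vpi, vci):
    """Look up the incoming labels and rewrite them for the next link of the circuit."""
    return switching_table[(in_port, vpi, vci)]

print(switch_cell(1, 0, 33))   # -> (3, 5, 120): same circuit, different labels on the next hop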

Structure of an ATM Cell

An ATM cell consists of a 5 byte header and a 48 byte payload. The payload size of 48 bytes was a compromise between the needs of voice telephony and packet networks, obtained by a simple averaging of the US proposal of 64 bytes and the European proposal of 32, said by some to be motivated by a European desire not to need echo cancellers on national trunks. ATM defines two different cell formats: NNI (network-network interface) and UNI (user-network interface). Most ATM links use the UNI cell format.

UNI cell: GFC (4 bits) | VPI (8 bits) | VCI (16 bits) | PT (3 bits) | CLP (1 bit) | HEC (8 bits), followed by the 48-byte payload.
NNI cell: VPI (12 bits) | VCI (16 bits) | PT (3 bits) | CLP (1 bit) | HEC (8 bits), followed by the 48-byte payload.
GFC = Generic Flow Control (4 bits) (default: four zero bits)
VPI = Virtual Path Identifier (8 bits UNI, 12 bits NNI)
VCI = Virtual Channel Identifier (16 bits)
PT = Payload Type (3 bits)
CLP = Cell Loss Priority (1 bit)
HEC = Header Error Correction (8 bits) (checksum of the header only)

The PT field is used to designate various special kinds of cells for Operation and Management (OAM) purposes, and to delineate packet boundaries in some AALs. Several of ATM's link protocols use the HEC field to drive a CRC-based framing algorithm which allows the position of the ATM cells to be found with no overhead required beyond what is otherwise needed for header protection. In a UNI cell the GFC field is reserved for an (as yet undefined) local flow control/submultiplexing system between network and user. All four GFC bits must be zero by default.
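As a concrete illustration of the field layout above, the following Python sketch unpacks a UNI cell header. It assumes the bit positions described in the text, uses a dummy cell as input, and does not verify the HEC (a CRC-8 over the first four header bytes in real equipment).

# Minimal sketch: unpack UNI header fields from a 53-byte cell (no HEC check).
def parse_uni_cell(cell):
    assert len(cell) == 53, "an ATM cell is always 5 header bytes + 48 payload bytes"
    h = cell[:5]
    gfc = h[0] >> 4
    vpi = ((h[0] & 0x0F) << 4) | (h[1] >> 4)                   # 8-bit VPI (UNI format)
    vci = ((h[1] & 0x0F) << 12) | (h[2] << 4) | (h[3] >> 4)    # 16-bit VCI
    pt = (h[3] >> 1) & 0x07                                    # 3-bit payload type
    clp = h[3] & 0x01                                          # cell loss priority bit
    return {"GFC": gfc, "VPI": vpi, "VCI": vci, "PT": pt, "CLP": clp,
            "HEC": h[4], "payload": cell[5:]}

dummy_cell = bytes([0x00, 0x12, 0x00, 0x50, 0x6A]) + bytes(48)
print(parse_uni_cell(dummy_cell)["VPI"], parse_uni_cell(dummy_cell)["VCI"])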

The NNI cell format is almost identical to the UNI format, except that the 4-bit GFC field is re-allocated to the VPI field, extending the VPI to 12 bits. Thus, a single NNI ATM interconnection is capable of addressing almost 2^12 VPs of up to almost 2^16 VCs each (in practice some of the VP and VC numbers are reserved).

Bluetooth

Bluetooth is an industrial specification for wireless personal area networks (PANs). Bluetooth provides a way to connect and exchange information between devices like personal digital assistants (PDAs), mobile phones, laptops, PCs, printers and digital cameras via a secure, low-cost, globally available short-range radio frequency. The name comes from the 10th century king of Denmark, Harald Blåtand (anglicized as Harold Bluetooth), who engaged in diplomacy which led warring parties to negotiate with each other. The inventors of the Bluetooth technology thought this a fitting name for a technology which allows different devices to talk to each other [1].

Introduction

A typical Bluetooth mobile phone headset.

Bluetooth is a radio standard primarily designed for low power consumption, with a short range (power class dependent: 1 meter, 10 meters, 100 meters) and with a low-cost transceiver microchip in each device. Bluetooth lets these devices talk to each other when they come in range, even if they are not in the same room, as long as they are within up to 100 meters of each other, dependent on the power class of the product. Products are available in one of three power classes:

Class     Power (mW)   Power (dBm)   Range (approximate)
Class 1   100 mW       20 dBm        ~100 meters
Class 2   2.5 mW       4 dBm         ~10 meters
Class 3   1 mW         0 dBm         ~1 meter
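The mW and dBm columns in the table are two ways of writing the same power level, related by P(dBm) = 10·log10(P / 1 mW); a two-line Python check reproduces the table's figures.

# dBm is just a logarithmic restatement of the milliwatt figure.
from math import log10

for cls, mw in [("Class 1", 100.0), ("Class 2", 2.5), ("Class 3", 1.0)]:
    print(cls, round(10 * log10(mw)), "dBm")   # 20, 4 and 0 dBm respectively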

Bluetooth applications

A Bluetooth mouse.
• Wireless networking between desktops and laptops, or desktops in a confined space and where little bandwidth is required.
• Bluetooth peripherals such as printers, mice, keyboards and digital pens.
• Bluetooth cell phones have been sold in large numbers, and are able to connect to computers, personal digital assistants (PDAs), certain automobile hands-free systems and various other devices. The standard also includes support for more powerful, longer-range devices suitable for constructing wireless LANs.
• Transfer of files (images, MP3s, etc.) between mobile phones, PDAs and computers via OBEX.
• Certain MP3 players and digital cameras that transfer files to and from computers.
• Bluetooth headsets for mobile phones and smartphones.
• Some testing equipment is Bluetooth enabled.
• Some medical applications are under development.
• Certain GPS receivers transfer NMEA data via Bluetooth.
• Bluetooth car kits — In 2002 Audi, with the Audi A8, was the first motor vehicle manufacturer to install Bluetooth technology in a car, enabling the passenger to use a wireless in-car phone. Later, BMW added it as an option on its 3 Series, 5 Series, 7 Series and X5 vehicles. Since then, other manufacturers have followed suit with many vehicles, including the 2004 Toyota Prius and the 2004 Lexus LS 430. Bluetooth car kits allow users with Bluetooth-equipped cell phones to make use of some of the phone's features, such as making calls, while the phone itself can be left in a suitcase or in the boot/trunk, for instance. Companies like Parrot or Motorola manufacture Bluetooth hands-free car kits for well-known brand car manufacturers.
• Certain data logging equipment transmits data to a computer via Bluetooth.
• Remote controls, where infrared was traditionally used.
• Hearing aids — Starkey Laboratories have created a device to plug into some hearing aids [2].
• A number of unscrupulous advertising firms in the greater Los Angeles area debuted Bluetooth-enabled billboards along roads and highways, broadcasting advertisements to passing motorists' Bluetooth-enabled cellular phones or PDAs, much to the motorists' annoyance. [3]
• The Nintendo Revolution and Sony's PlayStation 3 will use Bluetooth technology for their wireless controllers. Hip Gear has already released a Bluetooth controller for the Xbox.
• Newer-model Zoll defibrillators, for the purpose of transmitting defibrillation and patient monitoring/ECG data between the unit and a reporting PC using Zoll Rescue Net software.
• The upcoming LEGO Mindstorms NXT will use Bluetooth as an alternative way to receive programs from the computer.

Specifications and Features

The Bluetooth specification was first developed by Ericsson (now Sony Ericsson), and was later formalized by the Bluetooth Special Interest Group (SIG). The SIG was formally announced on May 20, 1999. It was established by Ericsson, IBM, Intel, Toshiba and Nokia, and later joined by many other companies as Associate or Adopter members. Bluetooth is also known as IEEE 802.15.1.

Bluetooth 1.0 and 1.0B

Versions 1.0 and 1.0B had numerous problems, and the various manufacturers had great difficulties in making their products interoperable. 1.0 and 1.0B also had mandatory Bluetooth Hardware Device Address (BD_ADDR) transmission in the handshaking process, rendering anonymity impossible at a protocol level, which was a major setback for services planned to be used in Bluetooth environments, such as Consumerium.

Bluetooth 1.1

In version 1.1:

• Many errata found in the 1.0B specifications were fixed.
• Support for non-encrypted channels was added.
• The Received Signal Strength Indicator (RSSI) was added.

Bluetooth 1.2

This version is backwards compatible with 1.1 and the major enhancements include


• Adaptive Frequency-Hopping spread spectrum (AFH), which improves resistance to radio frequency interference by avoiding crowded frequencies in the hopping sequence.
• Higher transmission speeds in practice.
• Extended Synchronous Connections (eSCO), which improve voice quality of audio links by allowing retransmissions of corrupted packets.
• Host Controller Interface (HCI) support for 3-wire UART.
• HCI access to timing information for Bluetooth applications.

Bluetooth 2.0

This version is backwards compatible with 1.x. The main enhancement is the introduction of Enhanced Data Rate (EDR) of 2.1 Mbit/s. This has the following effects (Bluetooth SIG, 2004):

• 3 times faster transmission speed (up to 10 times in certain cases).
• Lower power consumption through a reduced duty cycle.
• Simplification of multi-link scenarios due to more available bandwidth.
• Further improved BER (bit error rate) performance.

The future of Bluetooth

The next version of Bluetooth, currently code-named Lisbon, includes a number of features to increase the security, usability and value of Bluetooth. The following features are defined:


• Atomic Encryption Change - allows encrypted links to change their encryption keys periodically, increasing security, and also allows role switches on an encrypted link.
• Extended Inquiry Response - provides more information during the inquiry procedure to allow better filtering of devices before connection. This information includes the name of the device and a list of services, with other information.
• Sniff Subrating - reduces the power consumption when devices are in the sniff low-power mode, especially on links with asymmetric data flows. Human interface devices (HID) are expected to benefit the most, with mice and keyboards increasing battery life by a factor of 3 to 10.
• QoS Improvements - these will enable audio and video data to be transmitted at a higher quality, especially when best-effort traffic is being transmitted in the same piconet.
• Simple Pairing - this improvement will radically improve the pairing experience for Bluetooth devices, while at the same time increasing the use and strength of security. It is expected that this feature will significantly increase the use of Bluetooth.

The version of Bluetooth after Lisbon, code-named Seattle, has a number of the same features, but the main one announced is its alignment with Ultra-WideBand (UWB). This will allow the use of Bluetooth profiles over the UWB radio, enabling very fast data transfers, synchronizations and file pushes, while also building on the low-power idle modes of Bluetooth. The combination of a low-power radio used when no data needs to be transmitted, and a high-data-rate radio used to transmit bulk data, could be the start of software radios. Bluetooth, given its worldwide regulatory approval, low-power operation, and robust data transmission capabilities, provides an ideal signalling channel to enable the soft radio concept to start with WiMedia UWB.

Technical information

Communication and connection

A Bluetooth device playing the role of the "master" can communicate with up to 7 devices playing the role of the "slave". This group of up to 8 devices (1 master + 7 slaves) is called a piconet. At any given time, data can be transferred between the master and one slave, but the master switches rapidly from slave to slave in a round-robin fashion. (Simultaneous transmission from the master to multiple slaves is possible, but not used much in practice.) Either device may switch the master/slave role at any time. The Bluetooth specification also allows connecting 2 or more piconets together to form a scatternet, with some devices acting as a bridge by simultaneously playing the master role in one piconet and the slave role in another. Such devices have yet to appear, though they are expected next year (2007).

Setting up connections

Any Bluetooth device will transmit the following sets of information on demand:
• Device Name
• Device Class
• List of services
• Technical information, e.g. device features, manufacturer, Bluetooth specification, clock offset

Anything may perform an "inquiry" to find other devices to which to connect, and any device can be configured to respond to such inquiries. However if the device trying to connect knows the address of the device it will always respond to direct connection requests and will transmit the information shown in the list above if requested for it. Use

of the device's services, however, may require pairing or acceptance by its owner, but the connection itself can be started by any device and held until it goes out of range. Some devices can only be connected to one device at a time, and connecting to them will prevent them from connecting to other devices and showing up in inquiries until they disconnect from the other device.

Pairing

Pairs of devices may establish a trusted relationship by learning (by user input) a shared secret known as a "passkey". A device that wants to communicate only with a trusted device can cryptographically authenticate the identity of the other device. Trusted devices may also encrypt the data that they exchange over the air so that no one can listen in. The encryption can, however, be turned off, and passkeys are stored on the device's file system, not on the Bluetooth chip itself. Since the Bluetooth address is permanent, a pairing is preserved even if the Bluetooth name is changed. Pairs can be deleted at any time by either device. Devices will generally require pairing, or will prompt the owner, before a remote device is allowed to use any or most of their services. Some devices, such as Sony Ericsson phones, will usually accept OBEX business cards and notes without any pairing or prompts. Certain printers and access points allow any device to use their services by default, much like unsecured Wi-Fi networks.

Air interface

The protocol operates in the license-free ISM band at 2.45 GHz. In order to avoid interfering with other protocols which use the 2.45 GHz band, the Bluetooth protocol divides the band into 79 channels (each 1 MHz wide) and changes channels up to 1600 times per second. Implementations with versions 1.1 and 1.2 reach speeds of 723.1 kbit/s. Version 2.0 implementations feature Bluetooth Enhanced Data Rate (EDR), and thus reach 2.1 Mbit/s. Technically, version 2.0 devices have a higher power consumption, but the three-times-faster rate reduces the transmission times, effectively reducing consumption to half that of 1.x devices (assuming equal traffic load). Bluetooth differs from Wi-Fi in that the latter provides higher throughput and covers greater distances, but requires more expensive hardware and higher power consumption. They use the same frequency range, but employ different multiplexing schemes. While Bluetooth is a cable replacement for a variety of applications, Wi-Fi is a cable replacement only for local area network access. A glib summary is that Bluetooth is wireless USB whereas Wi-Fi is wireless Ethernet, both operating at much lower bandwidth than the cable systems they are trying to replace (with the exception of the newest version of the Wireless N protocol, which operates at a maximum speed of 108 Mbit/s, double that of a normal Wireless G connection). Many USB Bluetooth adapters are available, some of which also include an IrDA adapter. Older (pre-2003) Bluetooth adapters, however, limit the number of services by offering only the Bluetooth Enumerator and a less-powerful incarnation of Bluetooth Radio. Such devices are able to link computers via Bluetooth, but they don't offer much in the way of the twelve or more services that modern adapters are able to utilize.
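To make the channel-hopping numbers above concrete, here is a toy Python sketch. The real Bluetooth hop-selection sequence is derived from the master's clock and device address, so the random choice below is only a stand-in used for illustration.

# Toy illustration of 79-channel hopping at 1600 hops/s (NOT the real hop-selection algorithm).
import random

CHANNELS_MHZ = [2402 + k for k in range(79)]   # 79 channels, 1 MHz apart, in the 2.4 GHz ISM band
HOPS_PER_SECOND = 1600

def toy_hop_sequence(seed, hops):
    rng = random.Random(seed)                  # stand-in for the clock/address-driven selector
    return [rng.choice(CHANNELS_MHZ) for _ in range(hops)]

print(toy_hop_sequence(seed=0xA5, hops=5))     # channel centre frequencies visited, in MHz
print("dwell time per hop:", 1 / HOPS_PER_SECOND, "seconds")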

Security

Security measures

Bluetooth uses the SAFER+ algorithm for authentication and key generation. The E0 stream cipher is used for encrypting packets. This makes eavesdropping on Bluetooth-enabled devices more difficult.

Security concerns

In April 2005, Cambridge University security researchers published results of their actual implementation of passive attacks against the PIN-based pairing between commercial Bluetooth devices, confirming the attacks to be practicably fast and Bluetooth's symmetric key establishment method to be vulnerable. To rectify this vulnerability, they carried out an implementation which showed that stronger, asymmetric key establishment is feasible for certain classes of devices, such as handphones. In June 2005 Yaniv Shaked and Avishai Wool published the paper "Cracking the Bluetooth PIN", which shows both passive and active methods for obtaining the PIN for a Bluetooth link. The passive attack would allow a suitably equipped attacker to eavesdrop on communications and spoof if they were present at the time of initial pairing. The active method makes use of a specially constructed message that must be inserted at a specific point in the protocol, to make the master and slave repeat the pairing process. After that, the first method may be used to crack the PIN. This attack's major weakness is that it requires the user of the devices under attack to re-enter their PIN during the attack when their device prompts them to. Also, this active attack will most likely require custom hardware, as most commercially available Bluetooth devices are not capable of the necessary timing. In August 2005, police in Cambridgeshire, England, issued warnings about thieves using Bluetooth-enabled phones to track other devices left in cars. Police are advising users to ensure that any mobile networking connections are de-activated if laptops and other devices are left in this way; better still, do not leave any valuable devices in cars at all.

Bluetooth profiles

In order to use Bluetooth, a device must be able to interpret certain Bluetooth profiles. These define the possible applications. The following profiles are defined and adopted by the Bluetooth SIG:

Advanced Audio Distribution Profile (A2DP)
Also referred to as the AV profile, it is designed to transfer a stereo audio stream, like music from an MP3 player, to a headset or car radio. This profile relies on AVDTP and GAVDP. It includes mandatory support for low complexity

Sub_Band_Codec (SBC) and supports optionally: MPEG-1,2 Audio, MPEG-2,4 AAC and ATRAC, and is extensable to support manufacturer defined codecs. Bluetake's I-Phono Hi-Fi Sport Headphones are an example of this profile being employed. Most bluetooth stacks implement the SCMS-T copyright protection. In these cases it is not possible to connect the A2DP headphones for high quality audio. E.g. the Motorola HT820 can be used for high quality audio only with certain versions of the Toshiba bluetooth stack. Audio/Video Remote Control Profile (AVRCP) This profile is designed to provide a standard interface to control TVs, Hi-fi equipment, etc. to allow a single remote control (or other device) to control all of the A/V equipment that a user has access to. It may be used in concert with A2DP or VDP. It has the possibility for vendor-dependent extensions. The Generic Media Control Profile (GMCP) is proposed to be an open standard for transfer of media content related information using those extensions. Basic Imaging Profile (BIP) This profile is designed for sending images between devices and includes the ability to resize, and convert images to make them suitable for the receiving device. It may be broken down into smaller pieces: Image Push Allows the sending of images from a device the user controls. Image Pull Allows the browsing and retrieval of images from a remote device. Advanced Image Printing print images with advanced options using the DPOF format developed by Canon, Kodak, Fujifilm, and Matsushita Automatic Archive Allows the automatic backup of all the new images from a target device. For example, a laptop could download all of the new pictures from a camera whenever it is within range. Remote Camera Allows the initiator to remotely use a digital camera. For example, a user could place a camera on a tripod for a group photo, use their phone handset to check that everyone is in frame, and activate the shutter with the user in the photo. Remote Display

Allows the initiator to push images to be displayed on another device. For example, a user could give a presentation by sending the slides to a digital projector. Basic Printing Profile (BPP) This allows devices to send text, e-mails, vCards, or other items to printers based on print jobs. It differs from HCRP in that it needs no printer-specific drivers. This makes it more suitable for embedded devices such as mobile phones and digital cameras which cannot easily be updated with drivers dependent upon printer vendors. Common ISDN Access Profile (CIP) This provides unrestricted access to the services, data and signalling that ISDN offers. Cordless Telephony Profile (CTP) This is designed for cordless phones to work using Bluetooth. It is hoped that mobile phones could use a Bluetooth CTP gateway connected to a landline when within the home, and the mobile phone network when out of range. It is central to the Bluetooth SIG's '3-in-1 phone' use case. Dial-up Networking Profile (DUN) This profile provides a standard to access the Internet and other dial-up services over Bluetooth. The most common scenario is accessing the Internet from a laptop by dialling up on a mobile phone, wirelessly. It is based on SPP, and provides for relatively easy conversion of existing products, through the many features that it has in common with the existing wired serial protocols for the same task. These include the AT command set specified in ETSI 07.07, and PPP. Fax Profile (FAX) This profile is intended to provide a well defined interface between a mobile phone or fixed-line phone and a PC with Fax software installed. Support must be provided for ITU T.31 and / or ITU T.32 AT command sets as defined by ITU-T. Data and voice calls are not covered by this profile. File Transfer Profile (FTP) Provides access to the file system on another device. This includes support for getting folder listings, changing to different folders, getting files, putting files and deleting files. It uses OBEX as a transport and is based on GOEP. General Audio/Video Distribution Profile (GAVDP) Provides the basis for A2DP, and VDP.

Generic Access Profile (GAP) Provides the basis for all other profiles. Generic Object Exchange Profile (GOEP) provides a basis for other data transfer profiles. Based on OBEX. Hard Copy Cable Replacement Profile (HCRP) This provides a simple wireless alternative to a cable connection between a device and a printer. Unfortunately it does not set a standard regarding the actual communications to the printer, so drivers are required specific to the printer model or range. This makes this profile less useful for embedded devices such as digital cameras and palmtops, as updating drivers can be problematic. Hands Free Profile (HFP) This is commonly used to allow car hands free kits to communicate with mobile phones in the car. It uses SCO to carry a mono, PCM audio channel. It is considered to be the killer app for Bluetooth as more Governments are passing legislation to ban the direct use of mobile phones while driving. Human Interface Device Profile (HID) provides support for devices such as mice, joysticks, keyboards, etc. It is designed to provide a low latency link, with low power requirements. Popular devices that feature support for this profile include: Logitech diNovo Media Desktop 2.0, Microsoft Optical Desktop Elite. The unreleased PlayStation 3 controllers will also use BT HID. Headset Profile (HSP) This is the most commonly used profile, providing support for the popular Bluetooth Headsets to be used with mobile phones. It relies on SCO for audio and a subset of AT commands from GSM 07.07 for minimal controls including the ability to ring, answer a call, hang up and adjust the volume. Intercom Profile (ICP) This is often referred to as the walkie-talkie profile. It is another TCS based profile, relying on SCO to carry the audio. It is proposed to allow voice calls between two Bluetooth capable handsets, over Bluetooth. Object Push Profile (OPP) A basic profile for sending "objects" such as pictures, virtual business cards, or appointment details. It is called push because the transfers are always instigated by the sender (client), not the receiver (server). Personal Area Networking Profile (PAN) This profile is intended to allow the use of Bluetooth Network Encapsulation Protocol on Layer 3 protocols for transport over a Bluetooth link.

SIM Access Profile (SAP) This allows devices such as car phones with built in GSM transceivers to connect to a SIM card in a phone with Bluetooth, so the car phone itself doesn't require a separate SIM card. Service Discovery Application Profile (SDAP) This mandatory profile is used to find out which profiles are offered by the Server device. Serial Port Profile (SPP) This profile is based on the ETSI TS07.10 specification and uses the RFCOMM protocol. It emulates a serial cable to provide a simply implemented wireless replacement for existing RS232 based serial communications applications, including familiar control signals. It provides the basis for DUN, FAX, HSP and LAN profiles. Synchronisation Profile (SYNCH) This profile allows synchronisation of Personal Information Manager (PIM) items. As this profile originated as part of the infrared specifications but has been adopted by the Bluetooth SIG to form part of the main Bluetooth specification, it is also commonly referred to as IrMC Synchronization. Video Distribution Profile (VDP) This profile allows the transport of a video stream. It could be used for streaming a recorded video from a PC media centre to a portable player, or from a digital video camera to a TV. Support for H.263 baseline is mandatory. Support for MPEG-4 Visual Simple Profile, H.263 profiles 3 and 8 are optionally supported, and covered in the specification. The remaining profiles are still not finalised, but are currently proposed within the Bluetooth SIG:
• Handsfree Profile 1.5 (HFP 1.5)
• Unrestricted Digital Information (UDI)
• Wireless Application Protocol over BT (WAP)
• Extended Service Discovery Profile (ESDP)
• Local Positioning Profile (LPP)
• Video Conferencing Profile (VCP)
• Device ID (DID): allows a device to be identified according to the specification version met, the manufacturer, product, product version, etc. It enables similar applications to those the Plug-and-Play specification allows.

Compatibility of products with profiles can be verified on the Bluetooth Qualification website.

Future of Bluetooth Bluetooth technology already plays a part in the rising Voice over IP (VOIP) scene, with Bluetooth headsets being used as wireless extensions to the PC audio system. As VOIP becomes more popular, and more suitable for general home or office users than wired phone lines, Bluetooth may be used in Cordless handsets, with a base station connected to the Internet link. In March 2006, the Bluetooth Special Interest Group (SIG) announced its intent to work with UWB manufacturers to develop a next-generation Bluetooth technology using UWB technology and delivering UWB speeds. This will enable Bluetooth technology to be used to deliver high speed network data exchange rates required for wireless VOIP, music and video applications. Competing Technologies
• ANT[4] — low data rate, low power wireless personal area network
• BACnet — a competing protocol which can also be transported over LonWorks
• Bluetooth — industrial specification for wireless personal area networks (PANs)
• KNX — intelligent electrical installation networking
• HomePlug — powerline protocol
• INSTEON — an integrated dual-band mesh network that combines wireless radio frequency (RF) with the home's existing electrical wiring
• IrDA — industry standard infrared protocol
• LonWorks — a competing protocol
• nanoNET[5] — proprietary set of wireless sensor protocols, designed to compete with ZigBee
• OBEX — communications protocol that facilitates the exchange of binary objects between devices
• RadioRa[6] — proprietary two-way RF protocol, developed by Lutron for use in residential lighting control
• TinyOS — mesh network OS using the NesC language
• Topdog[7] — proprietary protocol for wireless networking, for use in residential and commercial lighting control
• UPB[8] — powerline protocol that offers improved performance and reliability over X10
• Wi-Fi — product compatibility standards for wireless local area networks (WLANs)
• Wireless USB — wireless extension to USB
• X10 — powerline protocol
• ZigBee — set of high level protocols designed for low power digital radios

DECnet

DECnet is a proprietary suite of network protocols created by Digital Equipment Corporation, originally released in 1975 in order to connect two PDP-11 minicomputers.

It evolved into one of the first peer-to-peer network architectures, thus making DEC into a networking powerhouse in the 1980s. Initially built with four layers, it later (1992) evolved into a seven-layer OSI-compliant networking protocol, around the time when open systems (POSIX compliant, i.e. Unix-like) were grabbing market share from proprietary OSes like VAX/VMS and AlphaVMS. DECnet was built right into the DEC flagship operating system (VAX/VMS) from its inception. Digital ported it to its own Ultrix variant of UNIX, as well as to Apple Macintosh computers and PCs running both DOS and Windows under the name DEC Pathworks, transforming these systems into DECnet end-nodes on a network of VAX machines. More recently, an open-source version has been developed for the Linux OS: see LinuxDECnet on Sourceforge.

Brief overview of the evolution of DECnet

DECnet refers to a specific set of hardware and software networking products which implement the DIGITAL Network Architecture (DNA). The DIGITAL Network Architecture is essentially a set of documents which define the network architecture in general, state the specifications for each layer of the architecture, and describe the protocols which operate within each layer. Although network protocol analyzer tools tend to categorize all protocols from DIGITAL as "DECnet", strictly speaking, non-routed DIGITAL protocols such as LAT, SCS, AMDS and LAST/LAD are not DECnet protocols and are not part of the DIGITAL Network Architecture. To trace the evolution of DECnet is to trace the development of DNA. The beginnings of DNA were in the early 1970s. DIGITAL published its first DNA specification at about the same time that IBM announced its Systems Network Architecture (SNA). Since that time, development of DNA has evolved through several phases, up to the DECnet Phase IV protocol suite summarized below.

DECnet Phase IV protocol suite

Application     FAL: File Access Listener; NML: Network Management Listener
Presentation    DAP: Data Access Protocol
Session         CTERM: Command Terminal; SCP: Session Control Protocol
Transport       NSP: Network Service Protocol
Network         DRP: DECnet Routing Protocol
Data link       DDCMP: Digital Data Communications Message Protocol; MOP: Maintenance Operation Protocol; Ethernet, Token ring, HDLC, FDDI, ...
Physical        Ethernet, Token ring, FDDI, ...

Ethernet
Ethernet is a frame-based computer networking technology for local area networks (LANs). The name comes from the physical concept of the ether. It defines wiring and signaling for the physical layer, and frame formats and protocols for the media access control (MAC)/data link layer of the OSI model. Ethernet is mostly standardized as IEEE 802.3. It has become the most widespread LAN technology in use from the 1990s to the present, and has largely replaced all other LAN standards such as token ring, FDDI, and ARCNET.

General description

A 1990s Ethernet network interface card. This is a combo card that supports both coaxial-based 10BASE2 (BNC connector, left) and twisted-pair-based 10BASE-T (RJ-45 connector, right).

Ethernet is based on the idea of peers on the network sending messages in what was essentially a radio system, captive inside a common wire or channel, sometimes referred to as the ether, which is an oblique reference to the luminiferous aether through which 19th century physicists incorrectly theorized that electromagnetic radiation traveled. Each peer has a unique 48-bit key known as the MAC address to ensure that all systems in an Ethernet network have distinct addresses. By default, network cards come programmed with a globally unique address, though this can usually be overridden. Due to the ubiquity of Ethernet and the ever-decreasing cost of the hardware needed to support it, most manufacturers now build the functionality of an Ethernet card directly into PC motherboards, obviating the installation of a separate network card. Despite the huge changes in Ethernet from a thick coaxial cable bus running at 10 Mbit/s to point-to-point links running at 1 Gbit/s (see gigabit ethernet) and beyond, the different variants remain essentially the same from the programmer's point of view and are easily interconnected using readily available inexpensive hardware.

CSMA/CD shared medium Ethernet

A scheme known as carrier sense multiple access with collision detection (CSMA/CD) governs the way the computers share the channel. Originally developed in the 1960s for the ALOHAnet in Hawaii using radio, the scheme is relatively simple compared to token ring or master-controlled networks. When one computer wants to send some information, it obeys the following algorithm:

Main procedure
1. Frame ready for transmission.
2. Is the medium idle? If not, wait until it becomes ready, then wait the interframe gap period (9.6 μs in 10 Mbit/s Ethernet).
3. Start transmitting.
4. Does a collision occur? If so, go to the collision detected procedure.
5. End successful transmission.

Collision detected procedure
1. Continue transmission until the minimum packet time is reached (jam signal) to ensure that all receivers detect the collision.
2. Is the maximum number of transmission attempts reached? If so, abort transmission.
3. Calculate and wait a random backoff period.
4. Re-enter the main procedure at stage 1.

This works something like a dinner party, where all the guests talk to each other through a common medium (the air). Before speaking, each guest politely waits for the current guest to finish. If two guests start speaking at the same time, both stop and wait for short, random periods of time. The hope is that by each choosing a random period of time, both guests will not choose the same time to try to speak again, thus avoiding another collision. Exponentially increasing back-off times (determined using the truncated binary exponential backoff algorithm) are used when there is more than one failed attempt to transmit.

Ethernet originally used a shared coaxial cable winding around a building or campus to every attached machine. Computers were connected to an Attachment Unit Interface (AUI) transceiver, which in turn connected to the cable. While a simple passive wire was highly reliable for small Ethernets, it was not reliable for large extended networks, where damage to the wire in a single place, or a single bad connector, could make the whole Ethernet segment unusable. Multipoint systems are also prone to very strange failure modes when an electrical discontinuity reflects the signal in such a manner that some nodes work just fine while others work slowly due to excessive retries, or not at all (see standing wave for an explanation of why); these could be much more painful to diagnose than a complete failure of the segment. Debugging such failures often involved several people crawling around wiggling connectors while others watched the displays of computers running ping and shouted out reports as performance changed.
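The two procedures above can be restated compactly as code. The following is a simulation-style sketch in Python, not driver code: medium_idle() and transmit() are hypothetical stand-ins for the hardware, and the timing constants are those of classic 10 Mbit/s Ethernet.

# Simplified CSMA/CD sender with truncated binary exponential backoff (sketch only).
import random
import time

SLOT_TIME = 51.2e-6          # 10 Mbit/s Ethernet slot time
INTERFRAME_GAP = 9.6e-6      # interframe gap for 10 Mbit/s Ethernet
MAX_ATTEMPTS = 16

def send_frame(frame, medium_idle, transmit):
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while not medium_idle():                 # wait for the channel, then the interframe gap
            pass
        time.sleep(INTERFRAME_GAP)
        collided = transmit(frame)               # assumed to return True if a collision was detected
        if not collided:
            return True                          # successful transmission
        k = min(attempt, 10)                     # truncated binary exponential backoff
        time.sleep(random.randint(0, 2 ** k - 1) * SLOT_TIME)
    return False                                 # too many attempts: abort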

Ethernet repeaters and hubs As Ethernet grew, the Ethernet hub was developed to make the network more reliable and the cables easier to connect. For signal degradation and timing reasons, Ethernet segments have a restricted size which depends on the medium used. For example, 10BASE5 coax cables have a maximum length of 500 metres (1,640 feet). A greater length can be obtained by using an Ethernet repeater, which takes the signal from one Ethernet cable and repeats it onto another cable. Repeaters can be used to connect up to five Ethernet segments, three of which can have attached devices. This also alleviates the problem of cable breakages: when an Ethernet coax segment breaks, all devices on that segment are unable to communicate; repeaters allowed the other segments to continue working. Like most other high-speed buses, Ethernet segments must be terminated with a resistor at both ends. For coaxial cable, each end of the cable must have a 50-ohm resistor and heatsink attached, called a terminator and affixed to a male N or BNC connector. If this is not done, the result is the same as if there is a break in the cable: the AC signal on the bus will be reflected, rather than dissipated, when it reaches the end. This reflected signal is indistinguishable from a collision, and so no communication can take place. A repeater electrically isolates the segments connected to it, regenerating and retiming the signal. Network vendors such as DEC and SynOptics sold hubs which connected many 10BASE-2 thin coaxial segments.

Coaxial cable is used to transmit 10BASE-2 Ethernet The development of Ethernet on unshielded twisted-pair cables (UTP), beginning with StarLAN and continuing with 10BASE-T eventually made Ethernet over coax obsolete. These variations allowed unshielded twisted-pair Cat-3/Cat-5 cable and RJ45 telephone connectors to connect endpoints to hubs, replacing coaxial and AUI cables. Hubs made Ethernet networks more reliable by preventing problems with one cable or device from affecting other devices on the network. Twisted-pair Ethernet resolves the termination problem by making every segment point-to-point, so termination can be built into the hardware rather than requiring a special external resistor.

A Twisted pair 10BASE-T Cable is used to transmit 10BASE-T Ethernet Despite the physical star topology, hubbed Ethernet networks are half-duplex and still use CSMA/CD, with only minimal cooperation from the hub in dealing with packet collisions. Every packet is sent to every port on the hub, so bandwidth and security problems aren't addressed. The total throughput of the hub is limited to the speed of a single link, either 10 or 100 Mbit/s, minus the overhead for preambles, inter-frame gaps, headers, trailers, and padding. Collisions also reduce the total throughput, especially when the network is heavily loaded. In the worst case when there are lots of hosts with long cables that transmit many short frames, excessive collisions that seriously reduce throughput can happen with loads as low as 50%. A more typical configuration can tolerate higher loads before collisions seriously reduce throughput. Bridging and Switching While repeaters could isolate some aspects of Ethernet segments, such as cable breakages, they still forward all traffic to all Ethernet devices. This creates significant limits on how many machines can communicate on an Ethernet network. To alleviate this, bridging was created to communicate at the data link layer while isolating the physical layer. With bridging, only well-formed packets are forwarded from one Ethernet segment to another; collisions and packet errors are isolated. Bridges learn where devices are, by watching MAC addresses, and do not forward packets across segments when they know the destination address is not located in that direction. Control mechanisms like spanning-tree protocol enable a collection of bridges to work together in coordination. Dual speed hubs In the early days of Fast Ethernet, fast ethernet switches were relatively expensive devices. However, hubs suffered from the problem that if there were any 10BASE-T devices connected then the whole system would have to run at 10 Mbit. Therefore a compromise between a hub and a switch appeared known as a dual speed hub. These effectively split the network into two sections, each acting like a hubbed network at its respective speed then acted as a two port switch between those two sections. This meant they allowed mixing of the two speeds without the cost of a Fast Ethernet switch. Ethernet frame types and the EtherType field Frames are the format of data packets on the wire. There are several types of Ethernet frame:


• The Ethernet Version 2 or Ethernet II frame, the so-called DIX frame (named after DEC, Intel, and Xerox); this is the most common today, as it is often used directly by the Internet Protocol.
• Novell's homegrown variation of IEEE 802.3 ("raw 802.3 frame") without IEEE 802.2 LLC.
• IEEE 802.2 LLC frame.
• IEEE 802.2 LLC/SNAP frame.

In addition, Ethernet frames may optionally contain an IEEE 802.1Q tag to identify the VLAN to which they belong and their IEEE 802.1p priority (quality of service). This doubles the potential number of frame types. The different frame types have different formats and MTU values, but can coexist on the same physical medium.

The most common Ethernet frame format, type II

It is claimed that some older (Xerox?) Ethernet specification had a 16-bit length field, although the maximum length of a packet was 1500 bytes. Versions 1.0 and 2.0 of the Digital/Intel/Xerox (DIX) Ethernet specification, however, have a 16-bit sub-protocol label field called the EtherType, with the convention that values between 0 and 1500 indicate the use of the original Ethernet format with a length field, while values of 1536 decimal (0600 hexadecimal) and greater indicate the use of the new frame format with an EtherType sub-protocol identifier.

Varieties of Ethernet

Ethernet has many varieties that vary both in speed and physical medium used. Perhaps the most common forms used are 10BASE-T, 100BASE-TX, and 1000BASE-T. All three utilize twisted-pair cables and run at 10 Mbit/s, 100 Mbit/s, and 1 Gbit/s, respectively. 10-gigabit Ethernet is becoming more popular in both enterprise and carrier networks, with discussions starting on 40G and 100G Ethernet.
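The length-versus-EtherType convention described above is easy to express in code. The sketch below classifies the 16-bit field that follows the source MAC address; the frame contents are dummy bytes, and only the well-known IPv4 EtherType (0x0800) is used as an example.

# Classify the 2-byte field after the destination and source MAC addresses.
def ethertype_or_length(frame):
    value = int.from_bytes(frame[12:14], "big")    # bytes 0-5: dst MAC, 6-11: src MAC
    if value <= 1500:
        return ("IEEE 802.3 length field", value)
    if value >= 0x0600:                            # 1536 decimal and above
        return ("Ethernet II EtherType", hex(value))
    return ("undefined range", value)

dummy_frame = bytes(12) + (0x0800).to_bytes(2, "big") + bytes(46)   # IPv4 EtherType
print(ethertype_or_length(dummy_frame))            # ('Ethernet II EtherType', '0x800')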
Fiber distributed data interface
In computer networking, fiber distributed data interface (FDDI) provides a standard for data transmission in a local area network that can extend in range up to 200 kilometers (124 miles). The FDDI protocol uses as its basis the token ring protocol. In addition to covering large geographical areas, FDDI local area networks can support thousands of users. As a standard underlying medium it uses optical fiber (though it can use copper cable, in which case one can refer to CDDI). FDDI uses a dual-attached, counter-rotating token-ring topology.

FDDI, as a product of American National Standards Institute X3-T9, conforms to the open system interconnect (OSI) model of functional layering of LANs using other protocols. FDDI-II, a version of FDDI, adds the capability to add circuit-switched service to the network so that it can also handle voice and video signals. Work has started to connect FDDI networks to the developing Synchronous Optical Network (SONET). The four FDDI standards comprise:
• ANSI X3T9.5, containing the Physical Media Dependent (PMD) specifications
• ANSI X3T9.5, containing the Physical (PHY) specifications
• ANSI X3.139, containing the Media Access Control (MAC) specifications
• ANSI X39.5, containing the Station Management (SMT) specifications

Frame relay

In the context of computer networking, frame relay (also written "frame-relay") is an efficient data transmission technique used to send digital information quickly and cheaply in a relay of frames to one or many destinations from one or many end-points. Network providers commonly implement frame relay for voice and data as an encapsulation technique, used between local area networks (LANs) over a wide area network (WAN). Each end-user gets a private line (or leased line) to a frame-relay node. The frame-relay network handles the transmission over a frequently-changing path transparent to all end-users.

As of 2006 ATM and native IP-based protocols have gradually begun to displace frame relay. With the advent of the VPN and other dedicated broadband services such as cable modem and DSL, the end may loom for the frame relay protocol and encapsulation. There remain, however, many rural areas lacking DSL and cable modem services, and in such cases the least expensive type of "always-on" connection remains a 128-kilobit frame-relay line. Thus a retail chain, for instance, may use frame relay for connecting rural stores into its corporate WAN (probably with a VPN encryption layer for security).

Frame Relay description

The designers of frame relay aimed at a telecommunication service for cost-efficient data transmission for intermittent traffic between local area networks (LANs) and between end-points in a wide area network (WAN). Frame relay puts data in variable-size units called "frames" and leaves any necessary error correction (such as re-transmission of data) up to the end-points. This speeds up overall data transmission. For most services, the network provides a permanent virtual circuit (PVC), which means that the customer sees a continuous, dedicated connection without having to pay for a full-time leased line, while the service provider figures out the route each frame travels to its destination and can charge based on usage.

Frame Relay versus X.25

The design of X.25 aimed to provide error-free delivery over links with high error rates. Frame relay takes advantage of newer links with lower error rates, enabling it to eliminate many of the services provided by X.25. The elimination of functions and fields, combined with digital links, enables frame relay to operate at speeds 20 times greater than X.25. X.25 specifies processing at layers 1, 2 and 3 of the OSI model, while frame relay operates at layers 1 and 2 only. This means that frame relay has significantly less processing to do at each node, which improves throughput by an order of magnitude. X.25 prepares and sends packets, while frame relay prepares and sends frames. X.25 packets contain several fields used for error and flow control, none of which frame relay needs. The frames in frame relay contain an expanded address field that enables frame relay nodes to direct frames to their destinations with minimal processing. X.25 has a fixed bandwidth available; it uses or wastes portions of its bandwidth as the load dictates. Frame relay can dynamically allocate bandwidth during call setup negotiation at both the physical and logical channel level.

Virtual circuits

As a WAN protocol, frame relay is most commonly implemented at Layer 2 (the data link layer) of the Open Systems Interconnection (OSI) seven-layer model. Two types of circuits exist: permanent virtual circuits (PVCs), which are used to form logical end-to-end links mapped over a physical network, and switched virtual circuits (SVCs). The latter are analogous to the circuit-switching concepts of the public switched telephone network (PSTN), the global phone network we are most familiar with today. While SVCs exist and are part of the frame relay specification, they are rarely applied to real-world scenarios. SVCs are most often considered harder to configure and maintain and are generally avoided without appropriate justification.

IEEE 802.11

IEEE 802.11, the Wi-Fi standard, denotes a set of wireless LAN/WLAN standards developed by working group 11 of the IEEE LAN/MAN Standards Committee (IEEE 802). The term 802.11x is also used to denote this set of standards and is not to be mistaken for any one of its elements; there is no single 802.11x standard. The term IEEE 802.11 is also used to refer to the original 802.11, which is now sometimes called "802.11legacy". For the application of these standards see Wi-Fi.

A Cisco Aironet 1200 Access Point

A Compaq 802.11b PCI card

The 802.11 family currently includes six over-the-air modulation techniques that all use the same protocol. The most popular (and prolific) techniques are those defined by the b, a, and g amendments to the original standard; security was originally included and was later enhanced via the 802.11i amendment. Other standards in the family (c–f, h–j, n) are service enhancements and extensions or corrections to previous specifications. 802.11b was the first widely accepted wireless networking standard, followed (somewhat counterintuitively) by 802.11a and 802.11g.

802.11b and 802.11g standards use the 2.4 gigahertz (GHz) band, operating under Part 15 of the FCC Rules and Regulations. The 802.11a standard uses the 5 GHz band. Operating in the 2.4 gigahertz frequency band, 802.11b and 802.11g equipment can incur interference from microwave ovens, cordless telephones, Bluetooth devices, and other appliances using the same 2.4 GHz band.

Which part of the radio frequency spectrum may be used varies between countries, with the strictest limitations in the USA. While it is true that in the USA 802.11a and g devices may be legally operated without a license, it is not true that 802.11a and g operate in an unlicensed portion of the radio frequency spectrum. Unlicensed (legal) operation of 802.11a and g is covered under Part 15 of the FCC Rules and Regulations. Frequencies used by channels one (1) through six (6) (802.11b) fall within the range of the 2.4 gigahertz Amateur Radio band. Licensed amateur radio operators may operate 802.11b devices under Part 97 of the FCC Rules and Regulations.

Protocols

802.11 legacy

The original version of the standard IEEE 802.11, released in 1997, specifies two raw data rates of 1 and 2 megabits per second (Mbit/s) to be transmitted via infrared (IR) signals or in the Industrial Scientific Medical frequency band at 2.4 GHz. IR remains a part of the standard but has no actual implementations. The original standard also defines Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) as the media access method. A significant percentage of the available raw channel capacity is sacrificed (via the CSMA/CA mechanisms) in order to improve the reliability of data transmissions under diverse and adverse environmental conditions.

At least five different, somewhat-interoperable, commercial products appeared using the original specification, from companies like Alvarion (PRO.11 and BreezeAccess-II), Netwave Technologies (AirSurfer Plus and AirSurfer Pro), Symbol Technologies (Spectrum24), and Proxim (OpenAir). A weakness of this original specification was that it offered so many choices that interoperability was sometimes challenging to realize. It is really more of a "meta-specification" than a rigid specification, allowing individual product vendors the flexibility to differentiate their products. Legacy 802.11 was rapidly supplemented (and popularized) by 802.11b. Widespread adoption of 802.11 networks only occurred after 802.11b was ratified, and as a result few networks ran on the 802.11 standard.

802.11b

The 802.11b amendment to the original standard was ratified in 1999. 802.11b has a maximum raw data rate of 11 Mbit/s and uses the same CSMA/CA media access method defined in the original standard. Due to the CSMA/CA protocol overhead, in practice the maximum 802.11b throughput that an application can achieve is about 5.9 Mbit/s over TCP and 7.1 Mbit/s over UDP.

Channels and international compatibility

802.11b and 802.11g divide the spectrum into 14 overlapping, staggered channels whose center frequencies are 5 megahertz (MHz) apart.

It is a common misconception that channels 1, 6 and 11 (and, if available in the regulatory domain, channel 14) do not overlap and that those channels (or other sets with similar gaps) can be used so that multiple networks can operate in close proximity without interfering with each other; this statement is somewhat over-simplified. The 802.11b and 802.11g standards do not specify the width of a channel; rather, they specify the center frequency of the channel and a spectral mask for that channel. The spectral mask for 802.11b requires that the signal be attenuated by at least 30 dB from its peak energy at ±11 MHz from the center frequency, and attenuated by at least 50 dB from its peak energy at ±22 MHz from the center frequency.

802.11a

The 802.11a amendment to the original standard was ratified in 1999. The 802.11a standard uses the same core protocol as the original standard, operates in the 5 GHz band, and uses 52-subcarrier orthogonal frequency-division multiplexing (OFDM) with a maximum raw data rate of 54 Mbit/s, which yields realistic net achievable throughput in the mid-20 Mbit/s range. The data rate is reduced to 48, 36, 24, 18, 12, 9 then 6 Mbit/s if required. 802.11a has 12 non-overlapping channels, 8 dedicated to indoor use and 4 to point-to-point links. It is not interoperable with 802.11b, except if using equipment that implements both standards.

Data rate (Mbit/s)   Modulation   Coding rate   Ndbps   1472-byte transfer duration (µs)
6                    BPSK         1/2           24      2012
9                    BPSK         3/4           36      1344
12                   4-QAM        1/2           48      1008
18                   4-QAM        3/4           72      672
24                   16-QAM       1/2           96      504
36                   16-QAM       3/4           144     336
48                   64-QAM       2/3           192     252
54                   64-QAM       3/4           216     224

Standards

The following IEEE standards and task groups exist within the IEEE 802.11 working group (the official 802.11 WG project timelines can be found at http://www.ieee802.org/11/802.11_Timelines.htm):

• IEEE 802.11 - The original 1 Mbit/s and 2 Mbit/s, 2.4 GHz RF and IR standard (1999)
• IEEE 802.11a - 54 Mbit/s, 5 GHz standard (1999, shipping products in 2001)
• IEEE 802.11b - Enhancements to 802.11 to support 5.5 and 11 Mbit/s (1999)
• IEEE 802.11c - Bridge operation procedures; included in the IEEE 802.1D standard (2001)
• IEEE 802.11d - International (country-to-country) roaming extensions (2001)
• IEEE 802.11e - Enhancements: QoS, including packet bursting (2005)
• IEEE 802.11F - Inter-Access Point Protocol (2003), withdrawn February 2006
• IEEE 802.11g - 54 Mbit/s, 2.4 GHz standard (backwards compatible with b) (2003)
• IEEE 802.11h - Spectrum Managed 802.11a (5 GHz) for European compatibility (2004)
• IEEE 802.11i - Enhanced security (2004)
• IEEE 802.11j - Extensions for Japan (2004)
• IEEE 802.11k - Radio resource measurement enhancements
• IEEE 802.11l - (reserved and will not be used)
• IEEE 802.11m - Maintenance of the standard; odds and ends
• IEEE 802.11n - Higher throughput improvements
• IEEE 802.11o - (reserved and will not be used)
• IEEE 802.11p - WAVE - Wireless Access for the Vehicular Environment (such as ambulances and passenger cars)
• IEEE 802.11q - (reserved and will not be used, can be confused with 802.1Q VLAN trunking)
• IEEE 802.11r - Fast roaming
• IEEE 802.11s - ESS Mesh Networking
• IEEE 802.11T - Wireless Performance Prediction (WPP) - test methods and metrics
• IEEE 802.11u - Interworking with non-802 networks (e.g., cellular)
• IEEE 802.11v - Wireless network management
• IEEE 802.11w - Protected Management Frames
• IEEE 802.11x - (reserved and will not be used)
• IEEE 802.11y - 3650-3700 MHz operation in the USA

Internet Protocol

Internet protocol suite
Layer         Protocols
Application   DNS, TLS/SSL, TFTP, FTP, HTTP, IMAP, IRC, NNTP, POP3, SIP, SMTP, SNMP, SSH, TELNET, BitTorrent, RTP, rlogin, ENRP, …
Transport     TCP, UDP, DCCP, SCTP, IL, RUDP, …
Network       IP (IPv4, IPv6), ICMP, IGMP, ARP, RARP, …
Link          Ethernet, Wi-Fi, Token ring, PPP, SLIP, FDDI, ATM, DTM, Frame Relay, SMDS, …

The Internet Protocol (IP) is a data-oriented protocol used for communicating data across a packet-switched internetwork. IP is a network layer protocol in the internet protocol suite and is encapsulated in a data link layer protocol (e.g., Ethernet). As a lower layer protocol, IP provides the service of communicable unique global addressing amongst computers. This implies that the data link layer need not provide this service. Ethernet provides globally unique addresses, but it is not globally communicable (i.e., two arbitrarily chosen Ethernet devices will only be able to communicate if they are on the same bus).

Packetization

Encapsulation of user data in a UDP datagram inside an IP packet.

Data from an upper layer protocol is encapsulated inside one or more packets/datagrams (the terms are basically synonymous in IP). No circuit setup is needed before a host tries to send packets to a host it has previously not communicated with (this is the point of a packet-switched network). This is quite unlike Public Switched Telephone Networks, which require the setup of a circuit before a phone call may go through.

Services provided by IP

Because of the abstraction provided by encapsulation, IP can be used over a heterogeneous network (i.e., a network connecting two computers can be any mix of Ethernet, ATM, FDDI, Wi-Fi, Token ring, etc.) and it makes no difference to the upper layer protocols.

All the data link layers can (and do) have their own set of addressing (or possibly the complete lack of it), so IP addresses need to be resolved to data link addresses. This resolution is handled by the Address Resolution Protocol (ARP).

Reliability

IP provides an unreliable service (i.e., best effort delivery). This means that the network makes no guarantees about the packet and none, some, or all of the following may apply:
• data corruption
• out-of-order arrival (packet A may be sent before packet B, but B can arrive before A)
• duplicate arrival
• lost or dropped/discarded packets

In terms of reliability the only thing IP does is ensure the IP packet's header is error-free through the use of a checksum. This has the side-effect of discarding packets with bad headers on the spot, with no required notification to either end (though an ICMP message may be sent). To address any of these reliability issues, an upper layer protocol must handle it. For example, to ensure in-order delivery the upper layer may have to cache data until it can be passed up in order.

The primary reason for the lack of reliability is to reduce the complexity of routers. While this does give routers carte blanche to do as they please with packets, anything less than best effort yields a poorer experience for the user. So, even though no guarantees are made, the better the effort made by the network, the better the experience for the user.

IP addressing and routing

Perhaps the most complex aspects of IP are addressing and routing. Addressing refers to how end hosts become assigned IP addresses and how subnetworks of IP host addresses are divided and grouped together. IP routing is performed by all hosts, but most importantly by internetwork routers, which typically use either interior gateway protocols (IGPs) or exterior gateway protocols (EGPs) to help make IP datagram forwarding decisions across IP-connected networks.

Version history

IP is the common element found in today's public Internet. The current and most popular network layer protocol in use today is IPv4; this version of the protocol is assigned version 4. IPv4 was adopted by the United States Department of Defense as MIL-STD-1778. IPv6 is the proposed successor to IPv4 whose most prominent change is the addressing. IPv4 uses 32-bit addresses (~4 billion addresses) while IPv6 uses 128-bit addresses (~3.4×10^38 addresses).
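The header checksum mentioned under Reliability is the standard ones'-complement Internet checksum (RFC 1071). A minimal sketch (Python; the sample header bytes are hypothetical):

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum over 16-bit big-endian words.

    The sender computes this with the checksum field zeroed and stores the
    result; a receiver recomputing it over the whole header (checksum field
    included) gets 0 if the header is intact.
    """
    if len(data) % 2:                 # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return (~total) & 0xFFFF

# Hypothetical 20-byte IPv4 header with its checksum field (bytes 10-11)
# zeroed before computation:
hdr = bytearray.fromhex("4500003c1c4640004006" "0000" "ac100a63ac100a0c")
hdr[10:12] = internet_checksum(bytes(hdr)).to_bytes(2, "big")
assert internet_checksum(bytes(hdr)) == 0   # receiver-side verification
```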

Versions 0 through 3 were either reserved or unused; version 5 was used for an experimental stream protocol. Other version numbers have been assigned, usually for experimental protocols, but have not been widely used.

Token ring

Token-Ring local area network (LAN) technology was developed and promoted by IBM in the early 1980s and standardised as IEEE 802.5 by the Institute of Electrical and Electronics Engineers. Initially very successful, it went into steep decline after the introduction of 10BASE-T for Ethernet and the EIA/TIA 568 cabling standard in the early 1990s. A fierce marketing effort led by IBM sought to claim better performance and reliability over Ethernet for critical applications due to its deterministic access method, but was no more successful than similar battles in the same era over their Micro Channel architecture. IBM no longer uses or promotes Token-Ring. Madge Networks, a one-time competitor to IBM, is now considered to be the market leader in Token Ring.

Overview

Stations on a Token-Ring LAN are logically organized in a ring topology, with data being transmitted sequentially from one ring station to the next and a control token circulating around the ring controlling access. This token passing mechanism is shared by ARCNET, Token Bus, and FDDI, and has theoretical advantages over the stochastic CSMA/CD of Ethernet.

Token Ring network

Physically, a Token-Ring network is wired as a star, with 'hubs' and arms out to each station and the loop going out-and-back through each. Cabling is generally IBM "Type 1" shielded twisted pair, with unique hermaphroditic connectors. Initially (in 1985) Token-Ring ran at 4 Mbit/s, but in 1989 IBM introduced the first 16 Mbit/s Token-Ring products and the 802.5 standard was extended to support this. In 1981, Apollo Computers introduced their proprietary 12 Mbit/s Apollo Token Ring (ATR) and Proteon introduced their 10 Mbit/s ProNet-10 Token Ring network. However, IBM Token-Ring was not compatible with ATR or ProNet-10.

More technically, Token-Ring is a local area network protocol which resides at the data link layer (DLL) of the OSI model. It uses a special three-byte frame called a token that travels around the ring. Token ring frames travel completely around the loop.

Token frame

When no station is transmitting a data frame, a special token frame circles the loop. This special token frame is repeated from station to station until arriving at a station that needs to transmit data. When a station needs to transmit a data frame, it converts the token frame into a data frame for transmission. The special token frame consists of three bytes as follows:

• Starting Delimiter — consists of a special bit pattern denoting the beginning of the frame. The bits from most significant to least significant are J,K,0,J,K,0,0,0. J and K are code violations. Since Manchester encoding is self-clocking and has a transition for every encoded bit 0 or 1, the J and K codings violate this and will be detected by the hardware.
• Access Control — this byte field consists of the following bits from most significant to least significant bit order: P,P,P,T,M,R,R,R. The P bits are priority bits, T is the token bit which when set specifies that this is a token frame, M is the monitor bit which is set by the Active Monitor (AM) station when it sees this frame, and the R bits are reserved bits.
• Ending Delimiter — the counterpart to the starting delimiter, this field marks the end of the frame and consists of the following bits from most significant to least significant: J,K,1,J,K,1,I,E. I is the intermediate frame bit and E is the error bit.
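To make the Access Control bit layout concrete, here is a small sketch (Python; the field names are descriptive labels, not terms from the standard) that unpacks the P,P,P,T,M,R,R,R bits of such a byte:

```python
def parse_access_control(ac: int) -> dict:
    """Split a Token Ring Access Control byte into its P,P,P,T,M,R,R,R bits.

    Bits 7..5: priority (P), bit 4: token (T), bit 3: monitor (M),
    bits 2..0: reservation (R).
    """
    return {
        "priority":    (ac >> 5) & 0b111,
        "token":       (ac >> 4) & 0b1,
        "monitor":     (ac >> 3) & 0b1,
        "reservation":  ac       & 0b111,
    }

# Example byte with priority 3, monitor bit set, reservation 1:
print(parse_access_control(0b01101001))
# {'priority': 3, 'token': 0, 'monitor': 1, 'reservation': 1}
```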

Token ring frame format

A data token ring frame is an expanded version of the token frame that is used by stations to transmit medium access control (MAC) management frames or data frames from upper layer protocols and applications. The token ring frame format is defined as follows:
• Starting Delimiter — as described above.
• Access Control — as described above.
• Frame Control — a one byte field that contains bits describing the data portion of the frame contents.
• Destination address — a six byte field used to specify the destination(s).
• Source address — a six byte field that is either the locally assigned address (LAA) or universally assigned address (UAA) of the sending station adapter.
• Data — a variable length field of 0 or more bytes, the maximum allowable size depending on ring speed, containing MAC management data or upper layer information.
• Frame Check Sequence — a four byte field used to store the calculation of a CRC for frame integrity verification by the receiver.
• Ending Delimiter — as described above.
• Frame Status — a one byte field used as a primitive acknowledgement scheme on whether the frame was recognized and copied by its intended receiver.

Transmission Control Protocol

The Transmission Control Protocol (TCP) is one of the core protocols of the Internet protocol suite. Using TCP, applications on networked hosts can create connections to one another, over which they can exchange data or packets. The protocol guarantees reliable and in-order delivery of sender-to-receiver data. TCP also distinguishes data for multiple, concurrent applications (e.g. a Web server and an email server) running on the same host.

TCP supports many of the Internet's most popular application protocols and resulting applications, including the World Wide Web, email and Secure Shell. In the Internet protocol suite, TCP is the intermediate layer between the Internet Protocol below it and an application above it. Applications often need reliable pipe-like connections to each other, whereas the Internet Protocol does not provide such streams, but rather only unreliable packets. TCP does the task of the transport layer in the simplified OSI model of computer networks.

Applications send streams of octets (8-bit bytes) to TCP for delivery through the network, and TCP divides the byte stream into appropriately sized segments (usually delineated by the maximum transmission unit (MTU) size of the data link layer of the network the computer is attached to). TCP then passes the resulting packets to the Internet Protocol, for delivery through a network to the TCP module of the entity at the other end. TCP checks to make sure that no packets are lost by giving each packet a sequence number, which is also used to make sure that the data are delivered to the entity at the other end in the correct order. The TCP module at the far end sends back an acknowledgement for packets which have been successfully received; a timer at the sending TCP will cause a timeout if an acknowledgement is not received within a reasonable round-trip time (RTT), and the (presumably lost) data will then be re-transmitted. TCP checks that no bytes are damaged by using a checksum; one is computed at the sender for each block of data before it is sent, and checked at the receiver.

Protocol operation

An abridged version of the TCP state diagram

Connection establishment

To establish a connection, TCP uses a 3-way handshake. Before a client attempts to connect with a server, the server must first bind to a port to open it up for connections: this is called a passive open. Once the passive open is established, a client may initiate an active open. To establish a connection, the 3-way (or 3-step) handshake occurs:

1. The active open is performed by sending a SYN to the server.
2. In response, the server replies with a SYN-ACK.
3. Finally the client sends an ACK back to the server.

A minimal socket sketch of the passive and active open follows the feature list below.

Data transfer

There are a few key features that set TCP apart from UDP:

• Error-free data transfer
• Ordered data transfer
• Retransmission of lost packets
• Discarding duplicate packets
• Congestion throttling
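A minimal sketch of the passive and active open (Python standard library sockets; the host address and port are placeholders). The operating system performs the SYN, SYN-ACK, ACK exchange inside accept() and connect():

```python
import socket

HOST, PORT = "127.0.0.1", 5000   # placeholder address and port

# Server side: passive open (bind + listen), then accept incoming handshakes.
def run_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))       # claim the port
        srv.listen()                 # passive open: ready for SYNs
        conn, peer = srv.accept()    # completes the 3-way handshake
        with conn:
            data = conn.recv(1024)
            conn.sendall(data)       # echo the bytes back

# Client side: active open; connect() sends the SYN and waits for the SYN-ACK.
def run_client():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))    # active open
        cli.sendall(b"hello")
        print(cli.recv(1024))        # b'hello'
```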

TCP window size

TCP sequence numbers and windows behave very much like a clock. The window, whose width (in bytes) is defined by the receiving host, shifts each time it receives and acknowledges a segment of data. Once it runs out of sequence numbers, it wraps back to 0.

The TCP receive window size is the amount of received data (in bytes) that can be buffered during a connection. The sending host can send only that amount of data before it must wait for an acknowledgment and window update from the receiving host.

TCP ports

TCP uses the notion of port numbers to identify sending and receiving applications. Each side of a TCP connection has an associated 16-bit unsigned port number assigned to the sending or receiving application. Ports are categorized into three basic categories: well-known, registered and dynamic/private. The well-known ports are assigned by the Internet Assigned Numbers Authority (IANA) and are typically used by system-level or root processes. Well-known applications running as servers and passively listening for connections typically use these ports. Some examples include: FTP (21), TELNET (23), SMTP (25) and HTTP (80). Registered ports are typically used by end-user applications as ephemeral source ports when contacting servers, but they can also identify named services that have been registered by a third party. Dynamic/private ports can also be used by end-user applications, but less commonly so. Dynamic/private ports do not carry any meaning outside of a particular TCP connection. There are 65535 possible ports officially recognized.

Packet structure

A TCP packet consists of two sections:


• header
• data

The header consists of 11 fields, of which only 10 are required. The 11th field is optional and aptly named: options.

Header
+          Bits 0-3        Bits 4-9      Bits 10-15     Bits 16-31
0          Source Port                                  Destination Port
32         Sequence Number
64         Acknowledgement Number
96         Data Offset     Reserved      Flags          Window
128        Checksum                                     Urgent Pointer
160        Options (optional)
160/192+   Data

For checksum computation over IPv4, the TCP header is prefixed with a pseudo-header taken from the IP header:

+          Bits 0-7              Bits 8-15     Bits 16-31
0          Source address
32         Destination address
64         Zeros                 Protocol      TCP length
96+        TCP header and data (as laid out above)

The source and destination addresses are those in the IPv4 header. The protocol is that for TCP (see List of IPv4 protocol numbers): 6. The TCP length field is the length of the TCP header and data.

Urgent pointer

If the URG flag is set then this field is a 16-bit offset from the sequence number.

Options

Additional header fields (called options) may follow the urgent pointer. If any options are present then the total length of the options field must be a multiple of a 32-bit word and the data offset field adjusted appropriately.
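A small parsing sketch (Python, using struct from the standard library; the sample segment bytes are hypothetical) showing how the fixed 20-byte header maps onto the fields described above:

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Unpack the fixed 20-byte TCP header from the start of a segment."""
    (src_port, dst_port, seq, ack,
     offset_reserved, flags, window, checksum, urgent) = struct.unpack(
        "!HHIIBBHHH", segment[:20])
    data_offset = offset_reserved >> 4          # header length in 32-bit words
    return {
        "src_port": src_port,
        "dst_port": dst_port,
        "seq": seq,
        "ack": ack,
        "header_bytes": data_offset * 4,         # > 20 means options are present
        "flags": flags & 0x3F,                   # URG, ACK, PSH, RST, SYN, FIN
        "window": window,
        "checksum": checksum,
        "urgent_pointer": urgent,
    }

# Hypothetical SYN segment from port 54321 to port 80:
syn = struct.pack("!HHIIBBHHH", 54321, 80, 1000, 0, 5 << 4, 0x02, 65535, 0, 0)
print(parse_tcp_header(syn))
```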

Data

The last field is not a part of the header. The contents of this field are whatever the upper layer protocol wants; the protocol is not set in the header and is presumed based on the port selection.

TCP tuning

TCP tuning techniques adjust some parameters of a TCP connection over high-bandwidth, high-latency networks. One observation is the "wizard gap": people with well-tuned networks perform 10x to 1000x as fast as ordinary users, especially on high-speed (gigabit and beyond) networks.

Network and system characteristics

Bandwidth-delay product (BDP)

The bandwidth × delay product (BDP) is a term primarily used in conjunction with TCP to refer to the number of bytes necessary to fill a TCP "path", i.e. it is equal to the maximum number of simultaneous packets in transit between the transmitter and the receiver. TCP has a concept of windows which are used for congestion control and for determining the optimum size of packet that is resilient to packet loss, packet truncation (due to link layer maximum transmission unit) or reordering. High performance networks have very large BDPs, on the order of (xxx) bytes. To give a practical example, in the case of two satellites located 0.5 light-seconds apart, communicating over a radio link with a bandwidth of 10 Gbit/s, there will be at most 0.5 s × 10 Gbit/s = 5 Gbit = 625 MB of data in the space between them. Operating systems and protocols designed as recently as a few years ago, when networks were slower, were tuned for BDPs orders of magnitude smaller, with implications for tuning.

Buffers

The original TCP configurations supported buffers of 64 kilobytes, which was adequate for slow links or links with small round-trip times (RTTs). Larger buffers are required by the high performance options described below. Buffering is used throughout high performance network systems to handle delays in the system. In general, buffer size will need to be scaled proportionally to the amount of data "in flight" at any time. For very high performance applications that are not sensitive to network delays, it is possible to interpose large end-to-end buffering delays by putting in intermediate data storage points in an end-to-end system, and then to use automated and scheduled non-real-time data transfers to get the data to their final endpoints.

TCP Networking Options for High Performance
• RFC 2018 - TCP Selective Acknowledgment Options
• RFC 1323 - TCP Extensions for High Performance
• Maximum buffer sizes on the host
• Application buffers
• Path MTU
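Tying the BDP discussion above to buffer sizing, a small sketch of the arithmetic (Python; the second link's numbers are arbitrary example values):

```python
def bandwidth_delay_product(bandwidth_bits_per_s: float, delay_s: float) -> float:
    """Bytes in flight on a path: bandwidth (bit/s) x delay (s) / 8."""
    return bandwidth_bits_per_s * delay_s / 8

# Satellite example from the text: 10 Gbit/s link, satellites 0.5 light-seconds apart.
print(bandwidth_delay_product(10e9, 0.5) / 1e6, "MB")      # 625.0 MB

# Buffer sizing uses the round-trip time: a 100 Mbit/s path with an 80 ms RTT
# needs roughly this much socket buffer to stay full, versus the classic
# 64 KB default window.
print(bandwidth_delay_product(100e6, 0.08) / 1024, "KiB")  # ~976.6 KiB
```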

Universal Serial Bus

"USB" redirects here; for other uses, see USB (disambiguation).

Type A USB connector

Dual images of the two Type B USB connectors, mini and full size, side and front view, compared with a U.S. 5¢ piece (nickel) in both images for scale.

USB 2.0 "trident" logo Universal Serial Bus (USB) provides a serial bus standard for connecting devices, usually to computers such as PCs and the Apple Macintosh, but is also becoming commonplace on video game consoles such as Sony's PlayStation 2, Microsoft's Xbox 360, Nintendo's Revolution, and PDAs, and even devices like televisions and home stereo equipment. Overview A USB system has an asymmetric design, consisting of a host controller and multiple daisychained devices. Additional USB hubs may be included in the chain, allowing branching into a tree structure, subject to a limit of 5 levels of branching per controller. Not more than 127 devices, including the bus devices, may be connected to a single host controller. Modern computers often have several host controllers, allowing a very large number of USB devices to be connected. USB cables do not need to be terminated. USB 2 uses bursts, unlike FireWire. Despite the capability of daisy-chaining several USB devices and that early USB announcements foresaw that each future USB device could replicate the USB port on itself and allow for a long chain of devices, this was never widespread for economical and technical reasons, and typically only USB hubs actually replicate and multiply USB ports, thus making most USB devices effectively "consuming" an USB port, disallowing daisychaining or shared use. USB was designed to allow peripherals to be connected without the need to plug expansion cards into the computer's ISA, EISA, or PCI bus, and to improve plug-and-play capabilities by allowing devices to be hot-swapped (connected or disconnected without powering down or rebooting the computer). When a device is first connected, the host enumerates and recognises it, and loads the device driver it needs.

A USB hub

USB can connect peripherals such as mice, keyboards, gamepads and joysticks, scanners, digital cameras, printers, external storage, networking components, etc. For many devices such as scanners and digital cameras, USB has become the standard connection method. USB is also used extensively to connect non-networked printers, replacing the parallel ports which were widely used; USB simplifies connecting several printers to one computer. As of 2004 there were about 1 billion USB devices in the world. As of 2005, the only large classes of peripherals that cannot use USB, because they need a higher data rate than USB can provide, are displays and monitors, and high-quality digital video components.

Standardization

The design of USB is standardized by the USB Implementers Forum (USB-IF), an industry standards body incorporating leading companies from the computer and electronics industries. Notable members have included Apple Computer, Hewlett-Packard, NEC, Microsoft, Intel, and Agere. The USB specification is at version 2.0 (with revisions) as of March 2006. Hewlett-Packard, Intel, Lucent, Microsoft, NEC, and Philips jointly led the initiative to develop a higher data transfer rate than the 1.1 specification. The USB 2.0 specification was released in April 2000 and was standardized by the USB-IF at the end of 2001. Previous notable releases of the specification were 0.9, 1.0, and 1.1. Equipment conforming with any version of the standard will also work with devices designed to any of the previous specifications (backwards compatibility). Smaller USB plugs and receptacles, called Mini-A and Mini-B, are also available, as specified by the On-The-Go Supplement to the USB 2.0 Specification. The specification is at revision 1.0a (Jan 2006).

Technical details

PCB-mounting female USB connectors

USB connects several devices to a host controller through a chain of hubs. In USB terminology devices are referred to as functions, because in theory what we know as a device may actually host several functions, such as a router that is a Secure Digital Card reader at the same time. The hubs are special-purpose devices that are not officially considered functions. There always exists one hub known as the root hub, which is attached directly to the host controller.

USB communication takes place over logical channels called pipes; these pipes are divided into four different categories by way of their transfer type:
• control transfers - typically used for short, simple commands to the device, and a status response; used e.g. by the bus control pipe number 0
• isochronous transfers - at some guaranteed speed (often but not necessarily as fast as possible) but with possible data loss, e.g. realtime audio or video
• interrupt transfers - for devices that need guaranteed quick responses (bounded latency), e.g. pointing devices and keyboards
• bulk transfers - large sporadic transfers using all remaining available bandwidth (but with no guarantees on bandwidth or latency), e.g. file transfers

When a device (function) or hub is attached to the host controller through any hub on the bus, it is given a unique 7 bit address on the bus by the host controller.

USB Enumeration Trace

The host controller then polls the bus for traffic, usually in a round-robin fashion, so no device can transfer any data on the bus without an explicit request from the host controller. Interrupt transfers on corresponding endpoints do not actually interrupt any traffic on the bus; they are just scheduled to be queried more often and in between other large transfers. Thus "interrupt traffic" on a USB bus is really only high-priority traffic.

Standard USB signaling

USB Standard-A and Standard-B plugs showing pin numbers (not drawn to scale)

Standard USB connector pinout
Pin   Function (host)       Function (device)
1     VBUS (4.75–5.25 V)    VBUS (4.4–5.25 V)
2     D−                    D−
3     D+                    D+
4     Ground                Ground

USB signals are transmitted on a twisted pair of data cables, labelled D+ and D−. These collectively use half-duplex differential signaling to combat the effects of electromagnetic noise on longer lines. D+ and D− operate together; they are not separate simplex connections. Transmitted signal levels are 0.0–0.3 V for low and 2.8–3.6 V for high.

Transfer speed

USB supports three data rates.
• A Low Speed rate of 1.5 Mbit/s (183 KiB/s) that is mostly used for Human Interface Devices (HID) such as keyboards, mice and joysticks.
• A Full Speed rate of 12 Mbit/s (1.4 MiB/s). Full Speed was the fastest rate before the USB 2.0 specification and many devices fall back to Full Speed. Full Speed devices divide the USB bandwidth between them on a first-come, first-served basis and it is not uncommon to run out of bandwidth with several isochronous devices. All USB hubs support Full Speed.
• A Hi-Speed rate of 480 Mbit/s (57 MiB/s).
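The parenthetical figures above are simple unit conversions from the raw signalling rates and say nothing about protocol overhead; a quick sketch of the arithmetic:

```python
def mbit_to_bytes_per_s(mbit_per_s: float) -> float:
    """Convert a signalling rate in Mbit/s to bytes per second."""
    return mbit_per_s * 1_000_000 / 8

for name, rate in [("Low Speed", 1.5), ("Full Speed", 12), ("Hi-Speed", 480)]:
    bps = mbit_to_bytes_per_s(rate)
    print(f"{name}: {bps / 1024:.0f} KiB/s = {bps / 1024**2:.1f} MiB/s")
# Low Speed: 183 KiB/s = 0.2 MiB/s
# Full Speed: 1465 KiB/s = 1.4 MiB/s
# Hi-Speed: 58594 KiB/s = 57.2 MiB/s
```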

Mini USB signaling

USB Mini-A, B plugs showing pin numbers (Not drawn to scale)

Mini-A (left, rounded) and Mini-B (right, square)

Mini USB connector pinout
Pin   Function
1     VBUS (4.4–5.25 V)
2     D−
3     D+
4     ID
5     Ground

Most of the pins of a mini USB connector are the same as on a standard USB connector, except pin 4. Pin 4 is called ID and, in a Mini-A plug, is connected to pin 5; this indicates that a device supporting USB On-The-Go (with a mini-AB socket) should initially act as host. In a Mini-B plug the ID pin is open circuit. The Mini-A plug also has an additional piece of plastic inside to prevent insertion into a slave-only device.

USB connectors

The connectors which the USB committee specified were designed to support a number of USB's underlying goals, and to reflect lessons learned from the varied menagerie of connectors then in service. In particular:

USB compared to other standards

Storage

A flash drive, a typical USB mass-storage device

USB implements connections to storage devices using a set of standards called the USB mass-storage device class. This was initially intended for traditional magnetic and optical drives, but has been extended to support a wide variety of devices. USB is not intended to be a primary bus for a computer's internal storage: buses such as ATA (IDE) and SCSI fulfill that role.

USB 2.0 Hi-Speed vs FireWire

The signalling rate of USB 2.0 Hi-Speed mode is 480 megabits per second, while the signalling rate of FireWire 400 (IEEE 1394a) is 393.216 Mbit/s [4]. USB can require more host resources than FireWire due to the need for the host to provide the arbitration and scheduling of transactions. USB transfer rates are generally higher than FireWire due to the need for FireWire devices to arbitrate for bus access. A single FireWire device may achieve a transfer rate for FireWire 400 as high as 41 MB/s, while for USB 2.0 the rate can be as high as 55 MB/s (for a single device). In a multi-device environment FireWire rapidly loses ground to USB: FireWire's mixed-speed networks and long connection chains dramatically affect its performance.

Version history

USB


• USB 1.0 FDR: Released in November 1995, the same year that Apple adopted the IEEE 1394 standard known as FireWire.
• USB 1.0: Released in January 1996.
• USB 1.1: Released in September 1998.
• USB 2.0: Released in April 2000. The major feature of this standard was the addition of a high-speed mode. This is the current revision.
• USB 2.0: Revised in December 2002. Added a three-speed distinction to this standard, allowing all devices to be USB 2.0 compliant even if they were previously considered only 1.1 or 1.0 compliant. This makes the backwards compatibility explicit, but it becomes more difficult to determine a device's throughput without seeing the symbol. As an example, a computer's port could be incapable of USB 2.0's hi-speed transfer rates, but still claim USB 2.0 compliance (since it supports some of USB 2.0).

Wireless USB

Released on May 12, 2005. Wireless USB uses UWB (Ultra Wide Band) as the radio technology.

User Datagram Protocol

The User Datagram Protocol (UDP) is one of the core protocols of the Internet protocol suite. Using UDP, programs on networked computers can send short messages known as datagrams to one another. UDP does not provide the reliability and ordering guarantees that TCP does; datagrams may arrive out of order or go missing without notice. However, as a result, UDP is faster and more efficient for many lightweight or time-sensitive purposes. Also, its stateless nature is useful for servers that answer small queries from huge numbers of clients. Common network applications that use UDP include the Domain Name System (DNS), streaming media applications, Voice over IP, the Trivial File Transfer Protocol (TFTP), and online games.

Ports

Main article: List of TCP and UDP port numbers

UDP utilizes ports to allow application-to-application communication. The port field is 16 bits, so the valid range is 0 to 65,535. Port 0 is reserved and shouldn't be used. Ports 1 through 1023 are named "well-known" ports, and on Unix-derived operating systems binding to one of these ports requires root access. Ports 1024 through 49,151 are registered ports. Ports 49,152 through 65,535 are ephemeral ports and are used as temporary ports, primarily by clients when communicating with servers.

The UDP datagram consists of an 8-byte header followed by the data:

+     Bits 0-15       Bits 16-31
0     Source Port     Destination Port
32    Length          Checksum
64+   Data
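Since UDP needs no connection setup, sending a datagram is a single call. A minimal sketch (Python standard library sockets; the address and port are placeholders):

```python
import socket

ADDR = ("127.0.0.1", 9999)   # placeholder address and port

# Receiver: bind a UDP socket and read whatever datagrams arrive,
# in whatever order (or not at all) the network delivers them.
def receiver():
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(ADDR)
        data, peer = sock.recvfrom(1024)   # one datagram, up to 1024 bytes
        print(f"{peer} sent {data!r}")

# Sender: no handshake; each sendto() becomes one datagram on the wire.
def sender():
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(b"ping", ADDR)
```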

X.25

X.25 is an ITU-T standard protocol suite for wide area networks using the phone or ISDN system as the networking hardware. It defines the standard physical layer, data link layer and network layer (layers 1 through 3) of the OSI model. "Packet switched network" was the common name given to the international collection of X.25 providers, typically the various national telephone companies. Their combined network had large global coverage during the 1980s and into the 1990s, and it is still in use, mainly in transaction systems.

History

X.25 was developed in the ITU Study Group VII based upon a number of emerging data network projects, such as the research project at the UK's National Physical Laboratory under the direction of Donald Davies, who developed the concepts of packet switched networks. In the late 1960s a test network was started, and by 1974 a number of sites had been linked together to form SERCnet (Science and Engineering Research Council Network). SERCnet would later grow and be re-organized as JANET in 1984, which continues in service today, but as a TCP/IP network. Other contributions to the standardising process came from the ARPA project as well as French, Canadian, Japanese and Scandinavian projects emerging in the early 1970s. Various updates and additions were worked into the standard, eventually recorded in the ITU series of technical books describing the telecoms systems. These books were published every fourth year with different colored covers.

Architecture

A Televideo terminal model 925 made around 1982

The general concept of X.25 was to create a universal and global packet-switched network on what was then the bit-error-prone analog phone system. Much of the X.25 system is a description of the rigorous error correction needed to achieve this, a system known as LAPB. The X.25 model was based on the concept of establishing "virtual calls" through the network, with "data terminating equipment" (DTEs) providing endpoints to users that looked like point-to-point connections.

X.25 was developed in the era of dumb terminals connecting to host computers. Instead of dialing directly “into” the host computer — which would require the host to have its own pool of modems and phone lines, and require non-local callers to make long-distance calls — the host could have an X.25 connection to a network service provider. Now dumb-terminal users could dial into the network's local “PAD” (Packet Assembly/Disassembly facility), a gateway device connecting modems and serial lines to the X.25 link as defined by the ITU-T X.29 and X.3 standards.

Addressing and Virtual Circuits

An X.25 modem once used to connect to the German Datex-P network

The X.121 address consists of a three-digit Data Country Code (DCC) plus a network digit, together forming the four-digit Data Network Identification Code (DNIC), followed by the Network Terminal Number (NTN) of at most ten digits. Note the use of a single network digit, seemingly allowing for only 10 network carriers per country, but some countries are assigned more than one DCC to avoid this limitation.
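A small sketch (Python; the sample address and the validity check are simplified illustrations, not taken from the standard) of how such an address splits into DCC, DNIC and NTN:

```python
def parse_x121(address: str) -> dict:
    """Split an X.121 address into DCC, DNIC and NTN (simplified check only)."""
    if not address.isdigit() or len(address) > 14:
        raise ValueError("X.121 addresses are at most 14 decimal digits")
    return {
        "dcc": address[:3],          # three-digit Data Country Code
        "network_digit": address[3:4],
        "dnic": address[:4],         # four-digit Data Network Identification Code
        "ntn": address[4:],          # Network Terminal Number, up to ten digits
    }

# A made-up 14-digit address (4-digit DNIC followed by a 10-digit NTN):
print(parse_x121("26245890123456"))
# {'dcc': '262', 'network_digit': '4', 'dnic': '2624', 'ntn': '5890123456'}
```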
Layers

[Table: the OSI model layers compared with the TCP/IP model layers.]
