Definition


Definition: A network switch is a small hardware device that joins multiple computers together within one local area network (LAN). Technically, network switches operate at layer two (the Data Link Layer) of the OSI model. Network switches appear nearly identical to network hubs, but a switch generally contains more intelligence (and a slightly higher price tag) than a hub. Unlike hubs, network switches are capable of inspecting data packets as they are received, determining the source and destination device of each packet, and forwarding each packet appropriately. By delivering messages only to the device they are intended for, a network switch conserves network bandwidth and offers generally better performance than a hub.
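
The learning-and-forwarding behavior described above can be sketched in a few lines. Below is a minimal, illustrative Python model (not any vendor's implementation) of how a layer-2 switch learns which port a source MAC address lives on and then forwards each frame only to the intended port, flooding only while the destination is still unknown; the class and MAC strings are invented for illustration.

```python
class LearningSwitch:
    """Toy model of a layer-2 switch with a MAC learning table."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def handle_frame(self, src_mac, dst_mac, in_port):
        # Learn (or refresh) the port on which the source MAC was seen.
        self.mac_table[src_mac] = in_port
        if dst_mac in self.mac_table:
            # Known destination: forward out of exactly one port.
            return [self.mac_table[dst_mac]]
        # Unknown destination: flood to every port except the ingress,
        # which is the hub-like fallback behavior.
        return [p for p in range(self.num_ports) if p != in_port]

switch = LearningSwitch(num_ports=4)
print(switch.handle_frame("aa:aa", "bb:bb", in_port=0))  # flood: [1, 2, 3]
print(switch.handle_frame("bb:bb", "aa:aa", in_port=2))  # learned: [0]
```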

In electronics, a crossbar switch (also known as a cross-point switch, crosspoint switch, or matrix switch) is a switch connecting multiple inputs to multiple outputs in a matrix manner. Originally the term was used literally, for a matrix switch controlled by a grid of crossing metal bars, and it was later broadened to matrix switches in general. It is one of the principal switch architectures, together with the rotating switch, the memory switch and the crossover switch.
Contents

1 General properties
2 Applications
3 Implementations
  3.1 Mechanical
  3.2 Electromechanical/telephony
  3.3 Electromechanical/instrumentation
  3.4 Telephone exchange
  3.5 Semiconductor
4 Arbitration
5 See also
6 References
7 External links

General properties

A crossbar switch is an assembly of individual switches between multiple inputs and multiple outputs. The switches are arranged in a matrix. If the crossbar switch has M inputs and N outputs, then it has a matrix with M × N cross-points, or places where the "bars" cross. At each cross-point is a switch; when closed, it connects one of the M inputs to one of the N outputs. A given crossbar is a single-layer, non-blocking switch. Collections of crossbars can be used to implement multiple-layer and/or blocking switches. A crossbar switching system is also called a co-ordinate switching system.
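
As a concrete illustration of the M × N matrix just described, here is a small hypothetical Python model of a non-blocking crossbar: closing the switch at cross-point (i, j) connects input i to output j, and a connection is refused only when the requested output is already driven. The class and method names are invented for illustration.

```python
class Crossbar:
    """Toy M x N crossbar: a set of closed cross-points (input, output)."""

    def __init__(self, m_inputs, n_outputs):
        self.m, self.n = m_inputs, n_outputs
        self.output_of_input = {}  # input i -> output j (closed cross-point)

    def connect(self, i, j):
        if not (0 <= i < self.m and 0 <= j < self.n):
            raise ValueError("no such cross-point")
        if j in self.output_of_input.values():
            raise RuntimeError(f"output {j} already driven")
        self.output_of_input[i] = j  # close the switch at cross-point (i, j)

    def disconnect(self, i):
        self.output_of_input.pop(i, None)

xbar = Crossbar(4, 4)
xbar.connect(0, 2)
xbar.connect(3, 1)  # any free input can reach any free output: non-blocking
```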

Applications

Crossbar switches are most famously used in information processing applications such as telephony and circuit switching, but they are also used in applications such as mechanical sorting machines. The matrix layout of a crossbar switch is also used in some semiconductor memory devices (see nanotechnology). Here the "bars" are extremely thin metal wires, and the "switches" are fusible links. The fuses are blown or opened using a high voltage and read using a low voltage. Such devices are called programmable read-only memories.[1] At the 2008 NSTI Nanotechnology Conference, a paper was presented which discussed a nanoscale crossbar implementation of an adding circuit used as an alternative to logic gates for computation.[2]

Furthermore, matrix arrays are fundamental to modern flat-panel displays. Thin-film-transistor LCDs have a transistor at each crosspoint, so they could be considered to include a crossbar switch as part of their structure.

For video switching in home and professional theater applications, a crossbar switch (or matrix switch, as it is more commonly called in this application) is used to make the output of multiple video appliances available simultaneously to every monitor or every room throughout a building. In a typical installation, all the video sources are located on an equipment rack and are connected as inputs to the matrix switch. Where central control of the matrix is practical, a typical rack-mount matrix switch offers front-panel buttons to allow manual connection of inputs to outputs. An example of such a usage might be a sports bar, where numerous programs are displayed simultaneously. To accomplish this without a matrix switch, a sports bar would ordinarily need to purchase a separate cable or satellite subscription for each display for which independent control is desired. The matrix switch enables the signals to be re-routed on a whim, thus allowing the establishment to purchase only those subscriptions needed to cover the total number of unique programs viewed anywhere in the building.

Such switches are also used in high-end home theater applications. Video sources typically shared include set-top cable/satellite receivers or DVD changers; the same concept applies to audio as well. The outputs are wired to televisions in individual rooms. The matrix switch is controlled via an Ethernet or RS-232 serial connection by a whole-house automation controller, such as those made by AMX, Crestron, or Control4, which provides the user interface that enables the user in each room to select which appliance to watch. The actual user interface varies by system brand, and might include a combination of on-screen menus, touch-screens, and handheld remote controls. The system is necessary to enable the user to select the program they wish to watch from the same room they will watch it in; otherwise it would be necessary (and arguably absurd) for them to walk to the equipment rack.

The special crossbar switches used in distributing satellite TV signals are called multiswitches.

Implementations

Historically, a crossbar switch consisted of metal bars associated with each input and output, together with some means of controlling movable contacts at each cross-point. In the later part of the 20th century these literal crossbar switches declined, and the term came to be used figuratively for rectangular-array switches in general. Modern "crossbar switches" are usually implemented with semiconductor technology. An important emerging class of optical crossbars is being implemented with MEMS technology.

Mechanical

A type of mid-19th-century telegraph exchange consisted of a grid of vertical and horizontal brass bars with a hole at each intersection. The operator inserted a brass pin to connect one telegraph line to another.

Electromechanical/telephony

A telephony crossbar switch is an electromechanical device for switching telephone calls. The first design of what is now called a crossbar switch was Western Electric's "coordinate selector" of 1915. To save money on control systems, this system was organized on the stepping-switch or selector principle rather than the link principle. It was little used in America, but the Swedish governmental agency Televerket manufactured its own design (the Gotthilf Betulander design from 1919, inspired by the Western Electric system) and used it in Sweden from 1926 until the digitalization of the 1980s in small and medium-sized A204 model switches. The system design used in AT&T's 1XB crossbar exchanges, which entered revenue service from 1938, was developed by Bell Telephone Laboratories; it was inspired by the Swedish design but based on the rediscovered link principle. In 1945, a similar design by Swedish Televerket was installed in Sweden, making it possible to increase the capacity of the A204 model switch. Delayed by the Second World War, several million urban 1XB lines were installed from the 1950s in the United States.

In 1950, the Swedish company Ericsson developed its own versions of the 1XB and A204 systems for the international market. In the early 1960s, the company's sales of crossbar switches exceeded those of its rotating 500-switching system, as measured in the number of lines. Crossbar switching quickly spread to the rest of the world, replacing most earlier designs like the Strowger (step-by-step) and panel systems in larger installations in the U.S. Starting with entirely electromechanical control on introduction, crossbar systems were gradually elaborated to have full electronic control and a variety of calling features, including short-code and speed dialing. In the UK, the Plessey Company produced a range of TXK crossbar exchanges, but their widespread rollout by the British Post Office began later than in other countries and was then inhibited by the parallel development of TXE reed-relay and electronic exchange systems, so they never achieved a large number of customer connections, although they did find some success as tandem switch exchanges.

Crossbar switches use switching matrices made from a two-dimensional array of contacts arranged in an x-y format. These switching matrices are operated by a series of horizontal bars arranged over the contacts. Each such "select" bar can be rocked up or down by electromagnets to provide access to two levels of the matrix. A second set of vertical "hold" bars is set at right angles to the first (hence the name "crossbar") and also operated by electromagnets. The select bars carry spring-loaded wire fingers that enable the hold bars to operate the contacts beneath the bars. When the select and then the hold electromagnets operate in sequence to move the bars, they trap one of the spring fingers to close the contacts beneath the point where two bars cross. This makes the connection through the switch as part of setting up a calling path through the exchange. Once the connection is made, the select magnet is released so it can use its other fingers for other connections, while the hold magnet remains energized for the duration of the call to maintain the connection. In the UK, the crossbar switching interface was referred to as the TXK or TXC switch (Telephone eXchange Crossbar).

Network on a chip

Network-on-Chip or Network-on-a-Chip (NoC or NOC) is an approach to designing the communication subsystem between IP cores in a System-on-a-Chip (SoC). NoCs can span synchronous and asynchronous clock domains or use unclocked asynchronous logic. NoC applies networking theory and methods to on-chip communication and brings notable improvements over conventional bus and crossbar interconnections. NoC improves the scalability of SoCs and the power efficiency of complex SoCs compared to other designs. Research has been done on integrated optical waveguides and devices comprising an optical network-on-chip (ONoC).[1][2]
Contents

1 Emerging paradigm
2 Parallelism and scalability
3 Benefits of adopting NoCs
4 Research on on-chip networks
5 NoC Benchmark
6 See also
7 References
8 External links

Emerging paradigm

Network-on-Chip (NoC) is an emerging paradigm for communications within large VLSI systems implemented on a single silicon chip. Sgroi et al. call "the layered-stack approach to the design of the on-chip inter-core communications the Network-on-Chip (NOC) methodology." In a NoC system, modules such as processor cores, memories and specialized IP blocks exchange data using a network as a "public transportation" sub-system for the information traffic. A NoC is constructed from multiple point-to-point data links interconnected by switches (a.k.a. routers), such that messages can be relayed from any source module to any destination module over several links, by making routing decisions at the switches.

A NoC is similar to a modern telecommunications network, using digital bit-packet switching over multiplexed links. Although packet switching is sometimes claimed as a necessity for a NoC, there are several NoC proposals utilizing circuit-switching techniques. The definition based on routers is usually interpreted so that a single shared bus, a single crossbar switch or a point-to-point network are not NoCs, but practically all other topologies are. This is somewhat confusing, since all of the above are networks (they enable communication between two or more devices), but they are not considered networks-on-chip. Note that some articles erroneously use NoC as a synonym for mesh topology, although the NoC paradigm does not dictate the topology. Likewise, the regularity of the topology is sometimes considered a requirement, which is, obviously, not the case in research concentrating on "application-specific NoC topology synthesis".
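
To make the idea of "routing decisions at the switches" concrete, the sketch below implements dimension-ordered (XY) routing on a 2D mesh, one common NoC topology (as noted above, the NoC paradigm itself does not mandate a mesh). The coordinates and function name are illustrative, not taken from any particular NoC.

```python
def xy_route(src, dst):
    """Return the list of (x, y) router coordinates a packet visits
    under XY routing: travel along x first, then along y."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                     # first correct the x coordinate
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                     # then correct the y coordinate
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

# A packet from core (0, 0) to core (2, 1) crosses several point-to-point
# links; other packets may use the remaining links at the same time.
print(xy_route((0, 0), (2, 1)))
# [(0, 0), (1, 0), (2, 0), (2, 1)]
```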

Parallelism and scalability

The wires in the links of the NoC are shared by many signals. A high level of parallelism is achieved, because all links in the NoC can operate simultaneously on different data packets. Therefore, as the complexity of integrated systems keeps growing, a NoC provides enhanced performance (such as throughput) and scalability in comparison with previous communication architectures (e.g., dedicated point-to-point signal wires, shared buses, or segmented buses with bridges). Of course, the algorithms must be designed in such a way that they offer large parallelism and can hence utilize the potential of the NoC.

Benefits of adopting NoCs

Traditionally, ICs have been designed with dedicated point-to-point connections, with one wire dedicated to each signal. For large designs, in particular, this has several limitations from a physical design viewpoint. The wires occupy much of the area of the chip, and in nanometer CMOS technology, interconnects dominate both performance and dynamic power dissipation, as signal propagation in wires across the chip requires multiple clock cycles. (See Rent's rule for a discussion of wiring requirements for point-to-point connections.)

NoC links can reduce the complexity of designing wires for predictable speed, power, noise, reliability, etc., thanks to their regular, well-controlled structure. From a system design viewpoint, with the advent of multi-core processor systems, a network is a natural architectural choice. A NoC can provide separation between computation and communication, support modularity and IP reuse via standard interfaces, handle synchronization issues, serve as a platform for system test, and, hence, increase engineering productivity.

Research on on-chip networks

Although NoCs can borrow concepts and techniques from the well-established domain of computer networking, it is impractical to blindly reuse features of "classical" computer networks and symmetric multiprocessors. In particular, NoC switches should be small, energy-efficient, and fast. Neglecting these aspects, along with proper quantitative comparison, was typical of early NoC research, but nowadays they are considered in more detail. The routing algorithms should be implemented by simple logic, and the number of data buffers should be minimal. Network topology and properties may be application-specific.

Some researchers think that NoCs need to support quality of service (QoS), namely achieve the various requirements in terms of throughput, end-to-end delays and deadlines. Real-time computation, including audio and video playback, is one reason for providing QoS support. However, current system implementations like VxWorks, RTLinux or QNX are able to achieve sub-millisecond real-time computing without special hardware. This may indicate that for many real-time applications the service quality of the existing on-chip interconnect infrastructure is sufficient, and that dedicated hardware logic would be necessary only to achieve microsecond precision, a degree that is rarely needed in practice for end users (sound or video jitter need only a latency guarantee of tenths of milliseconds). Another motivation for NoC-level quality of service is to support multiple concurrent users sharing the resources of a single chip multiprocessor in a public cloud computing infrastructure. In such instances, hardware QoS logic enables the service provider to make contractual guarantees on the level of service that a user receives, a feature that may be deemed desirable by some corporate or government clients.

To date, several prototype NoCs have been designed and analyzed in both industry and academia, but only a few have been implemented on silicon. However, many challenging research problems remain to be solved at all levels, from the physical link level through the network level, and all the way up to the system architecture and application software. The first dedicated research symposium on networks on chip was held at Princeton University in May 2007.[3] The second IEEE International Symposium on Networks-on-Chip was held in April 2008 at Newcastle University.

System on a chip

The AMD Geode is an x86 compatible system on a chip

A system on a chip or system on chip (SoC or SOC) is an integrated circuit (IC) that integrates all components of a computer or other electronic system into a single chip. It may contain digital, analog, mixed-signal, and often radio-frequency functions, all on a single chip substrate. A typical application is in the area of embedded systems.

The contrast with a microcontroller is one of degree. Microcontrollers typically have under 100 kB of RAM (often just a few kilobytes) and often really are single-chip systems, whereas the term SoC is typically used for more powerful processors, capable of running software such as the desktop versions of Windows and Linux, which need external memory chips (flash, RAM) to be useful, and which are used with various external peripherals. In short, for larger systems, "system on a chip" is hyperbole, indicating technical direction more than reality: increasing chip integration to reduce manufacturing costs and to enable smaller systems. Many interesting systems are too complex to fit on just one chip built with a process optimized for just one of the system's tasks.

When it is not feasible to construct an SoC for a particular application, an alternative is a system in package (SiP) comprising a number of chips in a single package. In large volumes, SoC is believed to be more cost-effective than SiP, since it increases the yield of the fabrication and because its packaging is simpler.[1]

Another option, as seen for example in higher-end cell phones and on the Beagle Board, is package-on-package stacking during board assembly. The SoC chip includes processors and numerous digital peripherals, and comes in a ball grid package with lower and upper connections. The lower balls connect to the board and various peripherals, while the upper balls in a ring hold the memory buses used to access NAND flash and DDR2 RAM. Memory packages could come from multiple vendors.
Contents

1 Structure
2 Design flow
3 Fabrication
4 Books
5 See also
6 Notes
7 External links

Structure

A typical SoC consists of:

- A microcontroller, microprocessor or DSP core(s). Some SoCs, called multiprocessor systems on chip (MPSoC), include more than one processor core.
- Memory blocks including a selection of ROM, RAM, EEPROM and flash memory.
- Timing sources including oscillators and phase-locked loops.
- Peripherals including counter-timers, real-time timers and power-on reset generators.
- External interfaces including industry standards such as USB, FireWire, Ethernet, USART, SPI.
- Analog interfaces including ADCs and DACs.
- Voltage regulators and power management circuits.

These blocks are connected by either a proprietary or industry-standard bus such as the AMBA bus from ARM Holdings. DMA controllers route data directly between external interfaces and memory, bypassing the processor core and thereby increasing the data throughput of the SoC.

Microcontroller-based system on a chip

Design flow

An SoC consists of both the hardware described above and the software that controls the microcontroller, microprocessor or DSP cores, peripherals and interfaces. The design flow for an SoC aims to develop this hardware and software in parallel.

Most SoCs are developed from pre-qualified hardware blocks for the hardware elements described above, together with the software drivers that control their operation. Of particular importance are the protocol stacks that drive industry-standard interfaces like USB. The hardware blocks are put together using CAD tools; the software modules are integrated using a software development environment.

Chips are verified for logical correctness before being sent to the foundry. This process is called functional verification, and it accounts for a significant portion of the time and energy expended in the chip design life cycle (although the often-quoted figure of 70% is probably an exaggeration).[2] With the growing complexity of chips, hardware verification languages like SystemVerilog, SystemC, e, and OpenVera are being used. Bugs found in the verification stage are reported to the designer.
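
At its simplest, functional verification means driving the design with stimulus and comparing its outputs against a reference ("golden") model. In practice this is done in a hardware verification language such as SystemVerilog, but the pattern can be sketched in Python; the adder below is a deliberately buggy, entirely hypothetical design under test.

```python
import random

def golden_adder(a, b):
    """Reference model: the behavior the specification demands."""
    return (a + b) & 0xFF  # 8-bit adder, result wraps around

def dut_adder(a, b):
    """Design under test: a hypothetical implementation with a bug."""
    return (a + b) & 0x7F  # BUG: drops the top result bit

random.seed(0)
for _ in range(1000):  # constrained-random stimulus
    a, b = random.randrange(256), random.randrange(256)
    expected, actual = golden_adder(a, b), dut_adder(a, b)
    if expected != actual:
        print(f"MISMATCH: {a}+{b}: expected {expected}, got {actual}")
        break  # the bug is reported back to the designer
```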

System-on-a-chip design flow

Often, one step in the verification flow is emulation: The hardware is mapped onto an emulation platform based on a field-programmable gate array (FPGA) that mimics the behavior of the SoC, and the software modules are loaded into the memory of the emulation platform. Once programmed, the emulation platform enables the hardware and software of the SoC to be tested and debugged at close to its full operational speed. Emulation is generally preceded by extensive software simulation. In fact, sometimes the FPGAs are used primarily to speed up some parts of the simulation work. After emulation the hardware of the SoC follows the place-and-route phase of the design of an integrated circuit before it is fabricated.

Fabrication

SoCs can be fabricated by several technologies, including:

- Full custom
- Standard cell
- FPGA

SoC designs usually consume less power and have a lower cost and higher reliability than the multi-chip systems that they replace. And with fewer packages in the system, assembly costs are reduced as well. However, like most VLSI designs, the total cost is higher for one large chip than for the same functionality distributed over several smaller chips, because of lower yields and higher NRE costs.

MPSoC

The multiprocessor system-on-chip (MPSoC) is a system-on-a-chip (SoC) which uses multiple processors (see multi-core), usually targeted for embedded applications. It is used by platforms that contain multiple, usually heterogeneous, processing elements with specific functionalities reflecting the needs of the expected application domain, a memory hierarchy (often using scratchpad RAM and DMA) and I/O components. All these components are linked to each other by an on-chip interconnect. These architectures meet the performance needs of multimedia applications, telecommunication architectures, network security and other application domains while limiting power consumption through the use of specialised processing elements and architecture.

Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system. The term also refers to the ability of a system to support more than one processor and/or the ability to allocate tasks between them.[1] There are many variations on this basic theme, and the definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined (multiple cores on one die, multiple dies in one package, multiple packages in one system unit, etc.).

Multiprocessing sometimes refers to the execution of multiple concurrent software processes in a system, as opposed to a single process at any one instant. However, the terms multitasking or multiprogramming are more appropriate to describe this concept, which is implemented mostly in software, whereas multiprocessing more appropriately describes the use of multiple hardware CPUs. A system can be both multiprocessing and multiprogramming, only one of the two, or neither.
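
The distinction can be made concrete with a small sketch using Python's standard multiprocessing module: each task below runs in its own OS process, so separate hardware CPUs can execute them truly in parallel, whereas running the same function in a loop would be one process multitasked by the operating system. The workload is invented for illustration.

```python
from multiprocessing import Pool, cpu_count

def count_primes(limit):
    """CPU-bound work: count primes below `limit` by trial division."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    tasks = [40_000, 50_000, 60_000, 70_000]
    # Each task runs in its own OS process, so separate CPUs can
    # execute them in parallel (multiprocessing, not multitasking).
    with Pool(processes=min(cpu_count(), len(tasks))) as pool:
        print(pool.map(count_primes, tasks))
```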

Point-to-multipoint communication

Point-to-multipoint communication is a term used in the telecommunications field to refer to communication accomplished via a specific and distinct type of one-to-many connection, providing multiple paths from a single location to multiple locations.[1] Point-to-multipoint is often abbreviated as P2MP, PTMP, or PMP.

Point-to-multipoint telecommunications is most typically used (as of 2003) in wireless Internet and IP telephony via gigahertz radio frequencies. P2MP systems have been designed both as single- and bi-directional systems. A central antenna or antenna array broadcasts to several receiving antennas, and the system uses a form of time-division multiplexing to allow for the back-channel traffic.
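
A hedged sketch of the time-division idea: the central station owns the channel, and each subscriber antenna is granted a recurring time slot in which to send its back-channel traffic. The frame length, station names, and round-robin assignment below are invented for illustration.

```python
def tdma_schedule(stations, num_slots):
    """Assign each station a recurring slot in a TDMA frame (round-robin)."""
    return {slot: stations[slot % len(stations)] for slot in range(num_slots)}

# Four subscriber stations sharing an 8-slot back-channel frame.
frame = tdma_schedule(["A", "B", "C", "D"], num_slots=8)
for slot, station in frame.items():
    print(f"slot {slot}: station {station} may transmit")
```
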
Point-to-point connection

In telecommunications, a point-to-point connection refers to a communications connection between two nodes or endpoints. An example is a telephone call, in which one telephone is connected with one other, and what is said by one caller can only be heard by the other. This is contrasted with a point-to-multipoint or broadcast communication topology, in which many nodes can receive information transmitted by one node. Other examples of point-to-point communications links are leased lines, microwave relay links, and two-way radio. Examples of point-to-multipoint communications systems are radio and television broadcasting.

The term is also used in computer networking and computer architecture to refer to a wire or other connection that links only two computers or circuits, as opposed to other network topologies such as buses or crossbar switches which can connect many communications devices. Point-to-point is sometimes abbreviated as P2P or Pt2Pt. This usage of P2P is distinct from P2P referring to peer-to-peer file sharing networks.
Basic point-to-point data link

A traditional point-to-point data link is a communications medium with exactly two endpoints and no data or packet formatting. The host computers at either end had to take full responsibility for formatting the data transmitted between them. The connection between the computer and the communications medium was generally implemented through an RS-232 interface, or something similar. Computers in close proximity may be connected by wires directly between their interface cards.

When connected at a distance, each endpoint would be fitted with a modem to convert between analog telecommunications signals and a digital data stream. When the connection used a telecommunications provider, the connections were called dedicated, leased, or private lines.

The ARPANET used leased lines to provide point-to-point data links between its packet-switching nodes, which were called Interface Message Processors.
Modern point-to-point links

As of 2003, the term point-to-point telecommunications relates to fixed wireless data communications for Internet or voice over IP via radio frequencies in the multi-gigahertz range. It also includes technologies such as laser links, but in all cases it expects that the transmission medium is line-of-sight and capable of being fairly tightly beamed from transmitter to receiver. The Telecommunications Industry Association's engineering committees develop U.S. standards for point-to-point communications and related cellular tower structures.[1] Online tools help users determine whether they have such line of sight.[2] The telecommunications signal is typically bi-directional, using either time-division multiple access (TDMA) or channelization.

Among hubs and switches, a hub provides a point-to-multipoint (or simply multipoint) circuit which divides the total bandwidth supplied by the hub among each connected client node. A switch, on the other hand, provides a series of point-to-point circuits via microsegmentation, which allows each client node to have a dedicated circuit and the added advantage of full-duplex connections.
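
The bandwidth difference is easy to quantify. A minimal worked example, assuming 100 Mbit/s ports and a perfectly fair hub (the function and figures are illustrative only):

```python
def per_node_bandwidth(total_mbps, nodes, switched):
    """Shared hub bandwidth vs. dedicated switched bandwidth per node."""
    if switched:
        # Microsegmentation: every node gets its own full-duplex circuit.
        return total_mbps
    # Hub: one shared, half-duplex collision domain split across nodes.
    return total_mbps / nodes

print(per_node_bandwidth(100, 8, switched=False))  # 12.5 Mbit/s shared
print(per_node_bandwidth(100, 8, switched=True))   # 100 Mbit/s dedicated
```
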
Field-programmable gate array

An Altera Stratix IV GX FPGA

An example of a Xilinx Spartan 6 FPGA programming/evaluation board

A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by the customer or designer after manufacturing, hence "field-programmable". The FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC); circuit diagrams were previously used to specify the configuration, as they were for ASICs, but this is increasingly rare. FPGAs can be used to implement any logical function that an ASIC could perform. The ability to update the functionality after shipping, partial re-configuration of a portion of the design,[1] and the low non-recurring engineering costs relative to an ASIC design[2] (notwithstanding the generally higher unit cost) offer advantages for many applications.

FPGAs contain programmable logic components called "logic blocks", and a hierarchy of reconfigurable interconnects that allow the blocks to be "wired together", somewhat like many (changeable) logic gates that can be inter-wired in (many) different configurations. Logic blocks can be configured to perform complex combinational functions, or merely simple logic gates like AND and XOR. In most FPGAs, the logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory.[2]
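
In many FPGA families the configurable element inside a logic block is a k-input lookup table (LUT): a tiny memory whose contents define an arbitrary truth table. The sketch below models a 3-input LUT in Python; the class name and configuration bit patterns are illustrative, not taken from any vendor's architecture.

```python
class LUT3:
    """3-input lookup table: 8 configuration bits define any function."""

    def __init__(self, truth_table_bits):
        assert len(truth_table_bits) == 8
        self.bits = truth_table_bits

    def __call__(self, a, b, c):
        # The three inputs form an address into the configuration memory.
        return self.bits[(a << 2) | (b << 1) | c]

# The same hardware becomes XOR or AND just by reloading its bits,
# which is what "configuring" a logic block amounts to.
xor3 = LUT3([0, 1, 1, 0, 1, 0, 0, 1])  # parity of a, b, c
and3 = LUT3([0, 0, 0, 0, 0, 0, 0, 1])
print(xor3(1, 0, 1), and3(1, 1, 1))  # 0 1
```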

In addition to digital functions, some FPGAs have analog features. The most common analog feature is programmable slew rate and drive strength on each output pin, allowing the engineer to set slow rates on lightly loaded pins that would otherwise ring unacceptably, and to set stronger, faster rates on heavily loaded pins on high-speed channels that would otherwise run too slowly.[3][4] Another relatively common analog feature is differential comparators on input pins designed to be connected to differential signaling channels. A few "mixed-signal FPGAs" have integrated peripheral analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) with analog signal conditioning blocks, allowing them to operate as a system-on-a-chip.[5] Such devices blur the line between an FPGA, which carries digital ones and zeros on its internal programmable interconnect fabric, and a field-programmable analog array (FPAA), which carries analog values on its internal programmable interconnect fabric.
Contents

1 History
  1.1 Modern developments
  1.2 Gates
  1.3 Market size
  1.4 FPGA design starts
2 FPGA comparisons
  2.1 Complex programmable logic devices
  2.2 Security considerations
3 Applications
4 Architecture
5 FPGA design and programming
6 Basic process technology types
7 Major manufacturers
8 See also
9 References
10 Further reading
11 External links

History

The FPGA industry sprouted from programmable read-only memory (PROM) and programmable logic devices (PLDs). PROMs and PLDs both had the option of being programmed in batches in a factory or in the field (field-programmable); however, programmable logic was hard-wired between logic gates.[6]

In the late 1980s the Naval Surface Warfare Department funded an experiment proposed by Steve Casselman to develop a computer that would implement 600,000 reprogrammable gates. Casselman was successful, and a patent related to the system was issued in 1992.[6]

Some of the industry's foundational concepts and technologies for programmable logic arrays, gates, and logic blocks are founded in patents awarded to David W. Page and LuVerne R. Peterson in 1985.[7][8]

Xilinx co-founders Ross Freeman and Bernard Vonderschmitt invented the first commercially viable field-programmable gate array in 1985 – the XC2064.[9] The XC2064 had programmable gates and programmable interconnects between gates, the beginnings of a new technology and market.[10] The XC2064 boasted a mere 64 configurable logic blocks (CLBs), with two 3-input lookup tables (LUTs).[11] More than 20 years later, Freeman was entered into the National Inventors Hall of Fame for his invention.[12]

Xilinx continued unchallenged and grew quickly from 1985 to the mid-1990s, when competitors sprouted up, eroding significant market share. By 1993, Actel was serving about 18 percent of the market.[10]

The 1990s were an explosive period for FPGAs, both in sophistication and in volume of production. In the early 1990s, FPGAs were primarily used in telecommunications and networking. By the end of the decade, FPGAs found their way into consumer, automotive, and industrial applications.[13]

FPGAs got a glimpse of fame in 1997, when Adrian Thompson, a researcher working at the University of Sussex, merged genetic algorithm technology and FPGAs to create a sound-recognition device. Thompson's algorithm configured an array of 10 × 10 cells in a Xilinx FPGA chip to discriminate between two tones, utilising analogue features of the digital chip. The application of genetic algorithms to the configuration of devices like FPGAs is now referred to as evolvable hardware.[6][14]

Modern developments

A recent trend has been to take the coarse-grained architectural approach a step further by combining the logic blocks and interconnects of traditional FPGAs with embedded microprocessors and related peripherals to form a complete "system on a programmable chip". This work mirrors the architecture by Ron Perlof and Hana Potash of Burroughs Advanced Systems Group, which in 1982 combined a reconfigurable CPU architecture on a single chip called the SB24. Examples of such hybrid technologies can be found in the Xilinx Virtex-II PRO and Virtex-4 devices, which include one or more PowerPC processors embedded within the FPGA's logic fabric. The Atmel FPSLIC is another such device, which uses an AVR processor in combination with Atmel's programmable logic architecture. The Actel SmartFusion devices incorporate an ARM architecture Cortex-M3 hard processor core (with up to 512 kB of flash and 64 kB of RAM) and analog peripherals such as a multi-channel ADC and DACs in their flash-based FPGA fabric.

In 2010, an extensible processing platform was introduced for FPGAs that fused features of an ARM high-end microcontroller (hard-core implementations of a 32-bit processor, memory, and I/O) with an FPGA fabric to make FPGAs easier for embedded designers to use. By incorporating the ARM processor-based platform into a 28 nm FPGA family, the extensible processing platform enables system architects and embedded software developers to apply a combination of serial and parallel processing to address the challenges they face in designing today's embedded systems, which must meet ever-growing demands to perform highly complex functions. By allowing them to design in a familiar ARM environment, embedded designers can benefit from the time-to-market advantages of an FPGA platform compared to the more traditional design cycles associated with ASICs.[15][16][17][18][19]

An alternative approach to using hard-macro processors is to make use of soft processor cores that are implemented within the FPGA logic. MicroBlaze and Nios II are examples of popular soft-core processors. As previously mentioned, many modern FPGAs can be reprogrammed at "run time", and this is leading to the idea of reconfigurable computing or reconfigurable systems: CPUs that reconfigure themselves to suit the task at hand. Additionally, new non-FPGA architectures are beginning to emerge. Software-configurable microprocessors such as the Stretch S5000 adopt a hybrid approach by providing an array of processor cores and FPGA-like programmable cores on the same chip.

Router

A router is a device that forwards data packets between computer networks, creating an overlay internetwork. A router is connected to two or more data lines from different networks. When a data packet comes in on one of the lines, the router reads the address information in the packet to determine its ultimate destination. Then, using information in its routing table or routing policy, it directs the packet to the next network on its journey. Routers perform the "traffic directing" functions on the Internet. A data packet is typically forwarded from one router to another through the networks that constitute the internetwork until it gets to its destination node.[1]

The most familiar type of routers are home and small office routers that simply pass data, such as web pages and email, between the home computers and the owner's cable or DSL modem, which connects to the Internet through an ISP. However, more sophisticated routers range from enterprise routers, which connect large business or ISP networks, up to the powerful core routers that forward data at high speed along the optical fiber lines of the Internet backbone.

Contents

1 Applications
  1.1 Access
  1.2 Distribution
  1.3 Security
  1.4 Core
  1.5 Internet connectivity and internal use
2 Historical and technical information
3 Forwarding
4 References
5 External links

Applications

When multiple routers are used in interconnected networks, the routers exchange information about destination addresses using a dynamic routing protocol. Each router builds up a table listing the preferred routes between any two systems on the interconnected networks.

A router has interfaces for different physical types of network connections (such as copper cables, fiber optic, or wireless transmission). It also contains firmware for different networking protocol standards. Each network interface uses this specialized computer software to enable data packets to be forwarded from one protocol transmission system to another.

Routers may also be used to connect two or more logical groups of computer devices known as subnets, each with a different sub-network address. The subnet addresses recorded in the router do not necessarily map directly to the physical interface connections.[2]

A router has two stages of operation, called planes:[3]

- Control plane: A router records a routing table listing what route should be used to forward a data packet, and through which physical interface connection. It does this using internal pre-configured addresses, called static routes.
- Forwarding plane: The router forwards data packets between incoming and outgoing interface connections. It routes each packet to the correct network type using information that the packet header contains, and uses data recorded in the routing table by the control plane.

A typical home or small office router showing the ADSL telephone line and Ethernet network cable connections
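
For IP traffic, the forwarding plane's routing-table lookup is a longest-prefix match: the most specific route containing the destination wins. Below is a minimal sketch using Python's standard ipaddress module; the routes and interface names are invented for illustration.

```python
import ipaddress

# Routing table: (destination prefix, outgoing interface).
routes = [
    (ipaddress.ip_network("0.0.0.0/0"),   "isp-uplink"),  # default route
    (ipaddress.ip_network("10.0.0.0/8"),  "eth0"),
    (ipaddress.ip_network("10.1.2.0/24"), "eth1"),
]

def forward(dst_ip):
    """Pick the matching route with the longest prefix (most specific)."""
    dst = ipaddress.ip_address(dst_ip)
    candidates = [(net, iface) for net, iface in routes if dst in net]
    net, iface = max(candidates, key=lambda r: r[0].prefixlen)
    return iface

print(forward("10.1.2.7"))  # eth1 (the /24 beats the /8)
print(forward("10.9.9.9"))  # eth0
print(forward("8.8.8.8"))   # isp-uplink (default route)
```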

Routers may provide connectivity within enterprises, between enterprises and the Internet, and between the networks of internet service providers (ISPs). The largest routers (such as the Cisco CRS-1 or Juniper T1600) interconnect the various ISPs, or may be used in large enterprise networks. Smaller routers usually provide connectivity for typical home and office networks. Other networking solutions may be provided by a backbone Wireless Distribution System (WDS), which avoids the costs of introducing networking cables into buildings.[4]

All sizes of routers may be found inside enterprises.[5] The most powerful routers are usually found in ISPs and academic and research facilities. Large businesses may also need more powerful routers to cope with ever-increasing demands of intranet data traffic. A three-layer model is in common use, not all of which need be present in smaller networks.[6]
