CN


Q.1 To Study Communication Guidance Systems
Guidance system
A guidance system is a device or group of devices used to navigate a ship, aircraft, missile, rocket, satellite, or other craft. Typically, this refers to a system that navigates without direct or continuous human control; systems intended to have a high degree of human interaction are usually referred to as navigation systems. One of the earliest examples of a true guidance system is the one used in the German V-1 during World War II. It consisted of a simple gyroscope to maintain heading, an airspeed sensor to estimate flight time, an altimeter to maintain altitude, and other redundant systems.
A guidance system has three major sub-sections: inputs, processing, and outputs. The input section includes sensors, course data, radio and satellite links, and other information sources. The processing section, composed of one or more CPUs, integrates this data and determines what actions, if any, are necessary to maintain or achieve a proper heading. This is then fed to the outputs, which can directly affect the system's course. The outputs may control speed by interacting with devices such as turbines and fuel pumps, or they may more directly alter course by actuating ailerons, rudders, or other devices.

Guidance systems
Guidance systems consist of three essential parts: navigation, which tracks current location; guidance, which combines navigation data and target information to tell flight control "where to go"; and control, which accepts guidance commands to effect change in aerodynamic and/or engine controls.
Navigation is the art of determining where you are, a science that has seen tremendous focus since the Longitude prize of 1714. Navigation aids either measure position from a fixed point of reference (e.g. a landmark, the north star, a LORAN beacon), measure position relative to a target (e.g. radar, infra-red), or track movement from a known position or starting point (e.g. an IMU). Today's complex systems use multiple approaches to determine current position. For example, among today's most advanced navigation systems is that of the anti-ballistic RIM-161 Standard Missile 3, which leverages GPS, IMU and ground-segment data in the boost phase and relative position data for intercept targeting. Complex systems typically have multiple redundancy to address drift, improve accuracy (e.g. relative to a target) and address isolated system failure. Navigation systems therefore take multiple inputs from many different sensors, both internal to the system and external (e.g. ground-based updates). A Kalman filter provides the most common approach to combining navigation data from multiple sensors to resolve current position. Example navigation approaches:
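As a sketch of the sensor-fusion idea behind the Kalman filter mentioned above, the scalar predict/update cycle below fuses an IMU-style dead-reckoned position with a GPS-style fix. This is a minimal 1-D illustration; the function names, noise values and positions are assumptions for the example, not part of any real guidance system.

```python
# A minimal 1-D Kalman filter sketch: fusing a predicted position with a
# noisy sensor reading. All names and numbers are illustrative.

def kalman_predict(pos, var, velocity, dt, q):
    """Dead-reckoning prediction step: move the estimate forward and
    grow its uncertainty by the process noise q."""
    return pos + velocity * dt, var + q

def kalman_update(pos, var, z, r):
    """Combine a position estimate (pos, with variance var) with a
    measurement z of variance r; returns the fused estimate."""
    k = var / (var + r)            # Kalman gain: how much to trust z
    new_pos = pos + k * (z - pos)
    new_var = (1 - k) * var
    return new_pos, new_var

# Fuse an IMU-style prediction with a GPS-style fix:
pos, var = kalman_predict(100.0, 4.0, velocity=10.0, dt=1.0, q=1.0)  # predict
pos, var = kalman_update(pos, var, z=112.0, r=5.0)                   # correct
```

The fused estimate lands between the prediction and the measurement, weighted by their variances, which is exactly how multiple navigation sensors are combined to resolve current position.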

 Celestial navigation is a position-fixing technique that was devised to help sailors cross the featureless oceans without having to rely on dead reckoning to strike land. It uses angular measurements (sights) between the horizon and a common celestial object; the Sun is most often measured. Skilled navigators can use the Moon, planets or one of 57 navigational stars whose coordinates are tabulated in nautical almanacs. Historical tools include a sextant, watch and ephemeris data. Today's space shuttle, and most interplanetary spacecraft, use optical systems to calibrate inertial navigation systems: the Crewman Optical Alignment Sight (COAS)[9] and the Star Tracker.[10]
 Long-range Navigation (LORAN): the predecessor of GPS, and to an extent still used, primarily in commercial sea transportation. The system works by triangulating the ship's position based on directional reference to known transmitters.
 Global Positioning System (GPS): GPS was designed by the US military with the primary purpose of addressing "drift" within the inertial navigation of submarine-launched ballistic missiles (SLBMs) prior to launch. GPS transmits two signal types: military and commercial. The accuracy of the military signal is classified but can be assumed to be well under 0.5 meters. GPS is a system of 24 satellites orbiting in unique planes 10,914.4 nautical miles above the earth. The satellites are in well-defined orbits and transmit highly accurate time information which can be used to triangulate position.
 Inertial Measurement Units (IMUs) are the primary inertial system for maintaining current position (navigation) and orientation in missiles and aircraft. They are complex machines with one or more rotating gyroscopes that can rotate freely in three degrees of motion within a complex gimbal system. IMUs are "spun up" and calibrated prior to launch. A minimum of three separate IMUs are in place within most complex systems.
In addition to relative position, IMUs contain accelerometers which can measure acceleration along all axes. The position data, combined with the acceleration data, provide the necessary inputs to "track" the motion of a vehicle. IMUs have a tendency to "drift" due to friction and sensor inaccuracy. Error correction to address this drift can be provided via ground-link telemetry, GPS, radar, optical celestial navigation and other navigation aids. When targeting another (moving) vehicle, relative vectors become paramount; in this situation, navigation aids which provide updates of position relative to the target are more important. In addition to the current position, inertial navigation systems also typically estimate a predicted position for future computing cycles. See also Inertial navigation system.
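To illustrate why IMU drift matters, the sketch below integrates accelerometer samples twice (simple 1-D dead reckoning) and shows how a small constant sensor bias accumulates into position error. The bias value, sample rate and duration are illustrative assumptions.

```python
# A sketch of 1-D inertial dead reckoning: integrating accelerometer
# samples twice to track position, showing how a tiny constant sensor
# bias accumulates into position "drift". All numbers are illustrative.

def dead_reckon(accels, dt):
    """Integrate acceleration samples into velocity, then position."""
    vel, pos = 0.0, 0.0
    for a in accels:
        vel += a * dt       # first integration: acceleration -> velocity
        pos += vel * dt     # second integration: velocity -> position
    return pos

true_accel = [0.0] * 100                  # the vehicle is actually at rest
bias = 0.01                               # assumed accelerometer bias (m/s^2)
measured = [a + bias for a in true_accel]

# After 10 s of samples the biased IMU reports about half a meter of
# motion that never happened, which is why external fixes (GPS, ground
# link, celestial sights) are used for correction.
drift = dead_reckon(measured, dt=0.1)
```

Because the bias is integrated twice, the error grows with the square of elapsed time, so drift correction becomes more important the longer the flight.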


 Radar/Infrared/Laser: this form of navigation provides information to guidance relative to a known target; it has both civilian (e.g. rendezvous) and military applications.
o Active (employs its own radar to illuminate the target)
o Passive (detects the target's radar emissions)
o Semi-active radar homing
o Infrared homing: this form of guidance is used exclusively for military munitions, specifically air-to-air and surface-to-air missiles. The missile's seeker head homes in on the infrared (heat) signature from the target's engines (hence the term "heat-seeking missile")
o Ultraviolet homing, used in the FIM-92 Stinger; more resistant to countermeasures than IR homing

 Laser designation: a laser designator device calculates relative position to a highlighted target. Most are familiar with the military uses of the technology on laser-guided bombs. The space shuttle crew leverages a hand-held device to feed information into rendezvous planning. The primary limitation of this device is that it requires a line of sight between the target and the designator.
 Terrain contour matching (TERCOM): uses a ground-scanning radar to "match" topography against digital map data to fix current position. Used by cruise missiles such as the BGM-109 Tomahawk.

Guidance is the "driver" of a vehicle. It takes input from the navigation system (where am I) and uses targeting information (where do I want to go) to send signals to the flight control system that will allow the vehicle to reach its destination (within the operating constraints of the vehicle). The "targets" for guidance systems are one or more state vectors (position and velocity) and can be inertial or relative. During powered flight, guidance is continually calculating steering directions for flight control. For example, the space shuttle targets an altitude, velocity vector, and gamma to drive main engine cut-off; similarly, an intercontinental ballistic missile also targets a vector. The target vectors are developed to fulfill the mission and can be preplanned or dynamically created.
Control: flight control is accomplished either aerodynamically or through powered controls such as engines. Guidance sends signals to flight control. A Digital Autopilot (DAP) is the common term used to describe the interface between guidance and control. Guidance and the DAP are responsible for calculating the precise instruction for each flight control, and the DAP provides feedback to guidance on the state of the flight controls.

Q. To Implement Dijkstra's And Bellman-Ford Algorithms
Dijkstra's algorithm


Dijkstra's algorithm, conceived by Dutch computer scientist Edsger Dijkstra in 1956 and published in 1959,[1][2] is a graph search algorithm that solves the single-source shortest path problem for a graph with non-negative edge path costs, producing a shortest-path tree. This algorithm is often used in routing and as a subroutine in other graph algorithms. For a given source vertex (node) in the graph, the algorithm finds the path with lowest cost (i.e. the shortest path) between that vertex and every other vertex. It can also be used for finding the cost of a shortest path from a single vertex to a single destination vertex by stopping the algorithm once the shortest path to the destination vertex has been determined. For example, if the vertices of the graph represent cities and edge path costs represent driving distances between pairs of cities connected by a direct road, Dijkstra's algorithm can be used to find the shortest route between one city and all other cities. As a result, shortest-path-first search is widely used in network routing protocols, most notably IS-IS and OSPF (Open Shortest Path First). Dijkstra's original algorithm does not use a min-priority queue and runs in O(|V|^2) time. The idea of this algorithm is also given in (Leyzorek et al. 1957). The implementation based on a min-priority queue implemented by a Fibonacci heap, running in O(|E| + |V| log |V|) time, is due to (Fredman & Tarjan 1984). This is asymptotically the fastest known single-source shortest-path algorithm for arbitrary directed graphs with unbounded non-negative weights. (For an overview of earlier shortest-path algorithms and later improvements and adaptations, see: single-source shortest-path algorithms for directed graphs with non-negative weights.)

Algorithm

Illustration of Dijkstra's algorithm searching for a path from a start node to a goal node in a robot motion-planning problem: open nodes represent the "tentative" set; filled nodes are visited ones, with color representing the distance (the greener, the further). Nodes in all directions are explored uniformly, appearing as a more-or-less circular wavefront, as Dijkstra's algorithm uses a heuristic identically equal to 0.
Let the node at which we are starting be called the initial node, and let the distance of node Y be the distance from the initial node to Y. Dijkstra's algorithm will assign some initial distance values and will try to improve them step by step.
1. Assign to every node a tentative distance value: set it to zero for our initial node and to infinity for all other nodes.
2. Mark all nodes unvisited. Set the initial node as current. Create a set of the unvisited nodes called the unvisited set, consisting of all the nodes except the initial node.
3. For the current node, consider all of its unvisited neighbors and calculate their tentative distances. For example, if the current node A is marked with a tentative distance of 6, and the edge connecting it with a neighbor B has length 2, then the distance to B (through A) will be 6 + 2 = 8. If this distance is less than the previously recorded tentative distance of B, then overwrite that distance. Even though a neighbor has been examined, it is not marked as visited at this time, and it remains in the unvisited set.
4. When we are done considering all of the neighbors of the current node, mark the current node as visited and remove it from the unvisited set. A visited node will never be checked again; its distance recorded now is final and minimal.
5. If the destination node has been marked visited (when planning a route between two specific nodes) or if the smallest tentative distance among the nodes in the unvisited set is infinity (when planning a complete traversal), then stop. The algorithm has finished.
6. Otherwise, set the unvisited node marked with the smallest tentative distance as the next "current node" and go back to step 3.

Pseudocode
In the following algorithm, the code u := vertex in Q with smallest dist[] searches for the vertex u in the vertex set Q that has the least dist[u] value. That vertex is removed from the set Q and returned to the user. dist_between(u, v) calculates the length of the edge between the two neighbor nodes u and v. The variable alt on line 15 is the length of the path from the root node to the neighbor node v if it were to go through u. If this path is shorter than the current shortest path recorded for v, that current path is replaced with this alt path. The previous array is populated with, for each node, a pointer to the "next-hop" node on the source graph along the shortest route back to the source.
 1  function Dijkstra(Graph, source):
 2      for each vertex v in Graph:            // Initializations
 3          dist[v] := infinity ;              // Unknown distance from source to v
 4          previous[v] := undefined ;         // Previous node in optimal path from source
 5      end for ;
 6      dist[source] := 0 ;                    // Distance from source to source
 7      Q := the set of all nodes in Graph ;   // All nodes are unoptimized, thus in Q
 8      while Q is not empty:                  // The main loop
 9          u := vertex in Q with smallest distance in dist[] ;
10          if dist[u] = infinity:
11              break ;                        // All remaining vertices are inaccessible from source
12          end if ;
13          remove u from Q ;
14          for each neighbor v of u:          // where v has not yet been removed from Q
15              alt := dist[u] + dist_between(u, v) ;
16              if alt < dist[v]:              // Relax (u, v)
17                  dist[v] := alt ;
18                  previous[v] := u ;
19                  decrease-key v in Q ;      // Reorder v in the queue
20              end if ;
21          end for ;
22      end while ;
23      return dist[] ;
24  end Dijkstra.

If we are only interested in a shortest path between vertices source and target, we can terminate the search at line 13 if u = target. Now we can read the shortest path from source to target by iteration:
1  S := empty sequence
2  u := target
3  while previous[u] is defined:
4      insert u at the beginning of S
5      u := previous[u]
6  end while ;

Now sequence S is the list of vertices constituting one of the shortest paths from source to target, or the empty sequence if no path exists. A more general problem would be to find all the shortest paths between source and target (there might be several different ones of the same length). Then, instead of storing only a single node in each entry of previous[], we would store all nodes satisfying the relaxation condition. For example, if both r and source connect to target and both of them lie on different shortest paths through target (because the edge cost is the same in both cases), then we would add both r and source to previous[target]. When the algorithm completes, the previous[] data structure will actually describe a graph that is a subset of the original graph with some edges removed. Its key property will be that if the algorithm was run with some starting node, then every path from that node to any other node in the new graph will be a shortest path between those nodes in the original graph, and all paths of that length from the original graph will be present in the new graph. Then, to actually find all these shortest paths between two given nodes, we would use a path-finding algorithm on the new graph, such as depth-first search.

Q. IMPLEMENTATION AND COMPARISON OF VARIOUS TYPES OF CRYPTOGRAPHY
Different Types of Cryptographic Algorithms

RSA
RSA is a public-key algorithm invented by Rivest, Shamir and Adleman. The key used for encryption is different from (but related to) the key used for decryption. The algorithm is based on modular exponentiation. Numbers e, d and N are chosen with the property that if A is a number less than N, then (A^e mod N)^d mod N = A. This means that you can encrypt A with e and decrypt using d. Conversely, you can encrypt using d and decrypt using e (though doing it this way round is usually referred to as signing and verification).
• The pair of numbers (e, N) is known as the public key and can be published.
• The pair of numbers (d, N) is known as the private key and must be kept secret.

The number e is known as the public exponent, the number d is known as the private exponent, and N is known as the modulus. When talking of key lengths in connection with RSA, what is meant is the modulus length. An algorithm that uses different keys for encryption and decryption is said to be asymmetric. Anybody knowing the public key can use it to create encrypted messages, but only the owner of the secret key can decrypt them. Conversely the owner of the secret key can encrypt messages that can be decrypted by anybody with the public key. Anybody successfully decrypting such messages can be sure that only the owner of the secret key could have encrypted them. This fact is the basis of the digital signature technique. Without going into detail about how e, d and N are related, d can be deduced from e and N if the factors of N can be determined. Therefore the security of RSA depends on the difficulty of factorizing N. Because factorization is believed to be a hard problem, the longer N is, the more secure the cryptosystem. Given the power of modern computers, a length of 768 bits is considered reasonably safe, but for serious commercial use 1024 bits is recommended. The problem with choosing long keys is that RSA is very slow compared with a symmetric block cipher such as DES, and the longer the key the slower it is. The best solution is to use RSA for digital signatures and for protecting DES keys. Bulk data encryption should be done using DES.
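The encrypt/decrypt round trip described above can be demonstrated with the classic textbook primes 61 and 53. These numbers are deliberately tiny (nowhere near the 768-bit or 1024-bit moduli recommended above) and are purely illustrative.

```python
# A toy RSA round trip with deliberately tiny numbers; never use key
# sizes like this in practice.
p, q = 61, 53
N = p * q                   # the modulus (3233)
phi = (p - 1) * (q - 1)     # 3120; used to derive d from e
e = 17                      # public exponent, coprime to phi
d = pow(e, -1, phi)         # private exponent: modular inverse of e (Python 3.8+)

A = 65                      # a message value, must be less than N
cipher = pow(A, e, N)       # encrypt: A^e mod N
plain = pow(cipher, d, N)   # decrypt: (A^e mod N)^d mod N == A
```

Reversing the roles of e and d in the two pow() calls gives the signing/verification direction mentioned above.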

Q. To Study Various Types Of Routers And Bridges
Network Devices
Routers, brouters, and gateways are internetworking devices used for connecting different networks.
Repeaters
A repeater connects two segments of your network cable. It retimes and regenerates the signals to proper amplitudes and sends them to the other segment. When talking about Ethernet topology, you are probably talking about using a hub as a repeater. Repeaters require a small amount of time to regenerate the signal. This can cause a propagation delay which can affect network communication when there are several repeaters in a row. Many network architectures limit the number of repeaters that can be used in a row. Repeaters work only at the physical layer of the OSI network model.
Bridges
A bridge reads the outermost section of data on the data packet to tell where the message is going. It reduces the traffic on other network segments, since it does not send all packets. Bridges can be programmed to reject packets from particular networks. Bridging occurs at the data link layer of the OSI model, which means the bridge cannot read IP addresses, but only the outermost hardware address of the packet. In our case the bridge can read the Ethernet data, which gives the hardware address of the destination, not the IP address. Bridges forward all broadcast messages. Only a special bridge called a translation bridge will allow two networks of different architectures to be connected; bridges do not normally allow connection of networks with different architectures. The hardware address is also called the MAC (media access control) address. To determine the network segment a MAC address belongs to, bridges use one of:

 Transparent bridging - the bridge builds a table of addresses (the bridging table) as it receives packets. If the destination address is not in the bridging table, the packet is forwarded to all segments other than the one it came from. This type of bridge is used on Ethernet networks.
 Source-route bridging - the source computer provides path information inside the packet. This is used on Token Ring networks.
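The learn-and-forward behaviour of transparent bridging can be sketched in a few lines. The port numbers and MAC address strings below are illustrative.

```python
# A sketch of transparent bridging: the bridge learns which port each
# source MAC address lives on, floods frames for unknown destinations,
# and filters frames whose destination is on the incoming segment.

class Bridge:
    def __init__(self, ports):
        self.ports = ports
        self.table = {}                      # bridging table: MAC -> port

    def handle(self, frame_src, frame_dst, in_port):
        """Return the list of ports the frame is forwarded out of."""
        self.table[frame_src] = in_port      # learn the source address
        out = self.table.get(frame_dst)
        if out is None:                      # unknown destination: flood
            return [p for p in self.ports if p != in_port]
        if out == in_port:                   # same segment: filter (drop)
            return []
        return [out]                         # known destination: forward

b = Bridge(ports=[1, 2, 3])
b.handle("aa:aa", "bb:bb", in_port=1)   # destination unknown: flood to 2 and 3
b.handle("bb:bb", "aa:aa", in_port=2)   # aa:aa was learned, forward to port 1 only
```

Note the bridge keys its table on hardware (MAC) addresses only, consistent with operating at the data link layer.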

Routers
A router is used to route data packets between two networks. It reads the information in each packet to tell where it is going. If the packet is destined for an immediate network the router has access to, it will strip the outer packet (an IP packet, for example), readdress the packet to the proper Ethernet address, and transmit it on that network. If the packet is destined for another network and must be sent to another router, it will repackage the outer packet to be received by the next router and send it on. Routing occurs at the network layer of the OSI model. Routers can connect networks with different architectures, such as Token Ring and Ethernet. Although they can transform information at the data link level, routers cannot transform information from one protocol suite such as TCP/IP to another such as IPX/SPX. Routers do not forward broadcast packets or corrupted packets. If the routing table does not indicate the proper address for a packet, the packet is discarded. There are two types of routers:
1. Static routers - configured manually; they route data packets based on information in a router table.
2. Dynamic routers - use dynamic routing algorithms. There are two types of algorithms:
o Distance vector - based on hop count; the routing table is periodically broadcast to other routers, which takes more network bandwidth, especially with more routers. RIP uses distance vectoring. It does not work on WANs as well as it does on LANs.
o Link state - routing tables are broadcast at startup and then only when they change. The Open Shortest Path First (OSPF) protocol uses the link-state routing method to configure routes, in contrast to a distance-vector algorithm (DVA).
Common routing protocols include:
 IS-IS - Intermediate System to Intermediate System, a routing protocol for the OSI suite of protocols.
 IPX - Internet Packet Exchange; used on NetWare systems.
 NLSP - NetWare Link Services Protocol; uses the OSPF algorithm and is replacing IPX to provide internet capability.
 RIP - Routing Information Protocol; uses a distance-vector algorithm.

There is a device called a brouter which functions like a bridge for network transport protocols that are not routable, and functions as a router for routable protocols. It operates at the network and data link layers of the OSI network model.
Gateways
A gateway can translate information between different network data formats or network architectures. It can translate TCP/IP to AppleTalk so computers supporting TCP/IP can communicate with Apple brand computers. Most gateways operate at the application layer, but they can operate at the network or session layer of the OSI model. A gateway starts at the lower level, strips information until it gets to the required level, repackages the information, and works its way back down toward the hardware layer of the OSI model. To confuse issues, when talking about a router that is used to interface to another network, the word gateway is often used. This does not mean the routing machine is a gateway as defined here, although it could be.
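The periodic distance-vector exchange described above (the method RIP uses) amounts to merging a neighbor's advertised table into the router's own, counting one extra hop through that neighbor. The router and network names below are illustrative.

```python
# A sketch of one distance-vector (RIP-style) exchange: merge a
# neighbor's advertised hop counts into our own routing table.

def merge_advert(own, neighbor_name, advert):
    """own: {dest: (hops, next_hop)}; advert: {dest: hops} as received
    from the neighbor. Keep a route only if going via the neighbor
    (its advertised hops plus one) beats what we already have."""
    for dest, hops in advert.items():
        via = hops + 1                       # one extra hop to reach the neighbor
        if dest not in own or via < own[dest][0]:
            own[dest] = (via, neighbor_name)
    return own

table = {"net1": (0, "direct")}              # directly attached network
merge_advert(table, "routerB", {"net1": 5, "net2": 1})
# net1 keeps its direct route; net2 becomes reachable in 2 hops via routerB
```

Repeating this merge on every periodic broadcast is what consumes the extra bandwidth noted above, and is why link-state protocols such as OSPF advertise only on change.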

Q. CASE STUDY OF VOIP CONCEPT
Internet
The Internet is a global system of interconnected computer networks that use the standard Internet protocol suite (often called TCP/IP, although not all protocols use TCP) to serve billions of users worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks, of local to global scope, that are linked by a broad array of electronic, wireless and optical networking technologies. The Internet carries an extensive range of information resources and services, such as the inter-linked hypertext documents of the World Wide Web (WWW) and the infrastructure to support email.
Most traditional communications media, including telephone, music, film, and television, are reshaped or redefined by the Internet, giving birth to new services such as Voice over Internet Protocol (VoIP) and Internet Protocol Television (IPTV). Newspaper, book and other print publishing are adapting to Web site technology, or are being reshaped into blogging and web feeds. The Internet has enabled or accelerated new forms of human interaction through instant messaging, Internet forums, and social networking. Online shopping has boomed both for major retail outlets and small artisans and traders. Business-to-business and financial services on the Internet affect supply chains across entire industries.
The origins of the Internet reach back to research of the 1960s, commissioned by the United States government in collaboration with private commercial interests to build robust, fault-tolerant, and distributed computer networks. The funding of a new U.S. backbone by the National Science Foundation in the 1980s, as well as private funding for other commercial backbones, led to worldwide participation in the development of new networking technologies, and the merger of many networks.
The commercialization of what was by the 1990s an international network resulted in its popularization and incorporation into virtually every aspect of modern human life. As of 2011, more than 2.2 billion people (nearly a third of Earth's population) use the services of the Internet.[1] The Internet has no centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own standards. Only the overreaching definitions of the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System, are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.

Terminology

See also: Internet capitalization conventions
Internet is a short form of the technical term internetwork,[2] the result of interconnecting computer networks with special gateways or routers. The Internet is also often referred to as the Net. The term the Internet, when referring to the entire global system of IP networks, has been treated as a proper noun and written with an initial capital letter. In the media and popular culture, a trend has also developed to regard it as a generic term or common noun and thus write it as "the internet", without capitalization. Some guides specify that the word should be capitalized as a noun but not capitalized as an adjective.[3][4] The terms Internet and World Wide Web are often used in everyday speech without much distinction. However, the Internet and the World Wide Web are not one and the same. The Internet establishes a global data communications system between computers. In contrast, the Web is one of the services communicated via the Internet: a collection of interconnected documents and other resources, linked by hyperlinks and URLs.[5]

Routing

Internet packet routing is accomplished among various tiers of Internet service providers. Internet service providers connect customers (thought of as at the "bottom" of the routing hierarchy) to customers of other ISPs. At the "top" of the routing hierarchy are ten or so Tier 1 networks, large telecommunication companies which exchange traffic directly "across" to all other Tier 1 networks via unpaid peering agreements. Tier 2 networks buy Internet transit from other ISPs to reach at least some parties on the global Internet, though they may also engage in unpaid peering (especially for local partners of a similar size). ISPs can use a single "upstream" provider for connectivity, or use multihoming to provide protection from problems with individual links. Internet exchange points create physical connections between multiple ISPs, often hosted in buildings owned by independent third parties.

Q. To Study Various Types Of LAN Equipment
A local area network (LAN) is a computer network that interconnects computers in a limited area such as a home, school, computer laboratory, or office building using network media.[1] The defining characteristics of LANs, in contrast to wide area networks (WANs), include their usually higher data-transfer rates, smaller geographic area, and lack of a need for leased telecommunication lines. ARCNET, Token Ring and other technology standards have been used in the past, but Ethernet over twisted-pair cabling and Wi-Fi are the two most common technologies currently used to build LANs.
[Figure: a conceptual diagram of a local area network using 10BASE5 Ethernet]
The increasing demand for and use of computers in universities and research labs in the late 1960s generated the need to provide high-speed interconnections between computer systems. A 1970 report from the Lawrence Radiation Laboratory detailing the growth of their "Octopus" network[2][3] gave a good indication of the situation. The Cambridge Ring was developed at Cambridge University in 1974[4] but was never developed into a successful commercial product. Ethernet was developed at Xerox PARC in 1973–1975[5] and filed as U.S. Patent 4,063,220. In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper, "Ethernet: Distributed Packet-Switching For Local Computer Networks."[6] ARCNET was developed by Datapoint Corporation in 1976 and announced in 1977.[7] It had the first commercial installation in December 1977 at Chase Manhattan Bank in New York.[8]
Standards evolution
The development and proliferation of personal computers using the CP/M operating system in the late 1970s, and later DOS-based systems starting in 1981, meant that many sites grew to dozens or even hundreds of computers. The initial driving force for networking was generally to share storage and printers, which were both expensive at the time. There was much enthusiasm for the concept, and for several years, from about 1983 onward, computer industry pundits would regularly declare the coming year to be "the year of the LAN".[9][10][11]

In practice, the concept was marred by the proliferation of incompatible physical layer and network protocol implementations, and a plethora of methods of sharing resources. Typically, each vendor would have its own type of network card, cabling, protocol, and network operating system. A solution appeared with the advent of Novell NetWare, which provided even-handed support for dozens of competing card/cable types and a much more sophisticated operating system than most of its competitors. NetWare dominated[12] the personal computer LAN business from early after its introduction in 1983 until the mid-1990s, when Microsoft introduced Windows NT Advanced Server and Windows for Workgroups. Of the competitors to NetWare, only Banyan Vines had comparable technical strengths, but Banyan never gained a secure base. Microsoft and 3Com worked together to create a simple network operating system which formed the base of 3Com's 3+Share, Microsoft's LAN Manager and IBM's LAN Server, but none of these was particularly successful. During the same period, Unix computer workstations from vendors such as Sun Microsystems, Hewlett-Packard, Silicon Graphics, Intergraph, NeXT and Apollo were using TCP/IP-based networking. Although this market segment is now much reduced, the technologies developed in this area continue to be influential on the Internet and in both Linux and Apple Mac OS X networking, and the TCP/IP protocol has now almost completely replaced IPX, AppleTalk, NBF, and other protocols used by the early PC LANs.
Cabling
Early LAN cabling had been based on various grades of coaxial cable. Shielded twisted pair was used in IBM's Token Ring LAN implementation. In 1984, StarLAN showed the potential of simple unshielded twisted pair by using Cat3 cable, the same simple cable used for telephone systems. This led to the development of 10Base-T (and its successors) and structured cabling, which is still the basis of most commercial LANs today.
In addition, fiber-optic cabling is increasingly used in commercial applications. Because cabling is not always practical, Wi-Fi is now very common in residential premises and wherever support for mobile laptops and smartphones is important.

Technical aspects
Network topology describes the layout of interconnections between devices and network segments. At the Data Link Layer and Physical Layer, a wide variety of LAN topologies have been used, including ring, bus, mesh and star, but the most common LAN topology in use today is switched Ethernet. At the higher layers, the Internet Protocol suite (TCP/IP) has become the standard, replacing NetBEUI, IPX/SPX, AppleTalk and others.

Simple LANs generally consist of one or more switches. A switch can be connected to a router, cable modem, or ADSL modem for Internet access. Complex LANs are characterized by their use of redundant links between switches running the spanning tree protocol to prevent loops, their ability to manage differing traffic types via quality of service (QoS), and their ability to segregate traffic with VLANs. A LAN can include a wide variety of network devices such as switches, firewalls, routers, load balancers, and sensors.[13]

LANs can maintain connections with other LANs via leased lines, leased services, or the Internet using virtual private network technologies. Depending on how the connections are established and secured, and the distance involved, such a network may also be classified as a metropolitan area network (MAN) or a wide area network (WAN).
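The boundary between hosts that can reach each other directly through a switch and hosts that must go through a router is drawn by the IP subnet. As a minimal sketch of this idea (the prefix 192.168.1.0/24 and the addresses below are assumed examples of a typical home LAN, not values from the text), Python's standard ipaddress module can test whether two hosts lie in the same subnet:

```python
import ipaddress

def same_subnet(host_a: str, host_b: str, prefix: str) -> bool:
    """Return True if both hosts fall within the given network prefix,
    i.e. they could reach each other through a LAN switch without
    crossing a router."""
    network = ipaddress.ip_network(prefix)
    return (ipaddress.ip_address(host_a) in network
            and ipaddress.ip_address(host_b) in network)

# Two hosts on the same (assumed) home LAN:
print(same_subnet("192.168.1.10", "192.168.1.42", "192.168.1.0/24"))  # True
# A host outside the LAN, beyond the router:
print(same_subnet("192.168.1.10", "203.0.113.5", "192.168.1.0/24"))   # False
```

In a real deployment the effective subnet also depends on each host's configured netmask and on any VLAN tagging; this check only captures the addressing side of the picture.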

Q. To study various types of error-correcting techniques

It is to S. P. Corder that Error Analysis owes its place as a scientific method in linguistics. As Rod Ellis notes (p. 48), "it was not until the 1970s that EA became a recognized part of applied linguistics, a development that owed much to the work of Corder". Before Corder, linguists observed learners' errors, divided them into categories, and tried to see which ones were common and which were not, but little attention was paid to their role in second language acquisition. It was Corder who showed to whom information about errors would be helpful (teachers, researchers, and students) and how.

Corder's article "The significance of learners' errors" introduced many major concepts, among which we encounter the following:

1) It is the learner who determines what the input is. The teacher can present a linguistic form, but this is not necessarily the input; it is simply what is available to be learned.

2) Keeping the above point in mind, learners' needs should be considered when teachers/linguists plan their syllabuses. Before Corder's work, syllabuses were based on theories and not so much on learners' needs.

3) Mager (1962) points out that the learner's built-in syllabus is more efficient than the teacher's syllabus. Corder adds that if such a built-in syllabus exists, then learners' errors would confirm its existence and would be systematic.

4) Corder introduced the distinction between systematic and non-systematic errors. Unsystematic errors occur in one's native language; Corder calls these "mistakes" and states that they are not significant to the process of language learning. He keeps the term "errors" for the systematic ones, which occur in a second language.

5) Errors are significant in three ways: to the teacher, they show a student's progress; to the researcher, they show how a language is acquired and what strategies the learner uses; to the learner, who can learn from these errors.

6) When a learner has made an error, the most efficient way to teach him the correct form is not by simply giving it to him, but by letting him discover it and test different hypotheses. (This is derived from Carroll's proposal (Carroll 1955, cited in Corder), who suggested that the learner should find the correct linguistic form by searching for it.)

7) Many errors are due to the learner using structures from his native language. Corder claims that possession of one's native language is facilitative, and that errors in this case are not inhibitory, but rather evidence of one's learning strategies.

The above insights played a significant role in linguistic research, and in particular in the approach linguists took towards errors. Here are some of the areas that were influenced by Corder's work:

STUDIES OF LEARNER ERRORS

Corder introduced the distinction between errors (in competence) and mistakes (in performance). This distinction directed the attention of researchers of SLA to competence errors and provided a more concentrated framework. Thus, in the 1970s researchers started examining learners' competence errors and tried to explain them. We find studies such as Richards's "A non-contrastive approach to error analysis" (1971), where he identifies sources of competence errors: L1 transfer results in interference errors; incorrect (incomplete or overgeneralized) application of language rules results in intralingual errors; and construction of faulty hypotheses in L2 results in developmental errors. Not all researchers have agreed with the above distinction; Dulay and Burt (1974), for example, proposed three categories of errors: developmental, interference and unique. Stenson (1974) proposed another category, that of induced errors, which result from incorrect instruction of the language.

Like most research methods, error analysis has weaknesses (such as in methodology), but these do not diminish its importance in SLA research; this is why linguists such as Taylor (1986) reminded researchers of its importance and suggested ways to overcome these weaknesses.

As mentioned previously, Corder noted to whom (or in which areas) the study of errors would be significant: to teachers, to researchers and to learners. In addition to studies concentrating on error categorization and analysis, various studies concentrated on these three areas. In other words, research was conducted not only in order to understand errors per se, but also in order to apply what is learned from error analysis to improve language competence. Such studies include Kroll and Schafer's "Error-Analysis and the Teaching of Composition", where the authors demonstrate how error analysis can be used to improve writing skills. They analyze possible sources of error in non-native English writers, and attempt to provide a process approach to writing in which error analysis can help achieve better writing skills. These studies, among many others, show that thanks to Corder's work, researchers recognized the importance of errors in SLA and started to examine them in order to achieve a better understanding of SLA processes, i.e. of how learners acquire an L2.
