Artificial Intelligence


Assignment No. 1, compiled by D. Prabhu, M.Tech CSE (www.datatycoon.blogspot.com)

DEFINITIONS FOR ARTIFICIAL INTELLIGENCE
Artificial Intelligence is a branch of science which deals with helping machines find solutions to complex problems in a more human-like fashion. This generally involves borrowing characteristics from human intelligence and applying them as algorithms in a computer-friendly way. A more or less flexible or efficient approach can be taken depending on the requirements established, which influences how artificial the intelligent behavior appears.

Artificial Intelligence is the part of computer science concerned with designing intelligent computer systems, that is, computer systems that exhibit the characteristics we associate with intelligence in human behavior: understanding language, learning, reasoning and solving problems.

Artificial Intelligence is the study of how to make computers do things which, at the moment, people do better. This definition is ephemeral, as it refers to the current state of computer science, and it excludes a major area: problems that cannot be solved well either by computers or by people at the moment.

Artificial Intelligence is the branch of computer science that is concerned with the automation of intelligent behavior. AI is based upon the principles of computer science, namely the data structures used in knowledge representation, the algorithms needed to apply that knowledge, and the languages and programming techniques used in their implementation.

Artificial Intelligence is a field of study that encompasses computational techniques for performing tasks that apparently require intelligence when performed by humans.

Artificial Intelligence is the field of study that seeks to explain and emulate intelligent behavior in terms of computational processes. Artificial Intelligence is about generating representations and procedures that automatically or autonomously solve problems heretofore solved by humans.

Artificial Intelligence is the simulation of human intelligence on a machine, so as to make the machine able to identify and use the right piece of knowledge at a given step of solving a problem. A system capable of planning and executing the right task at the right time is generally called rational. While there is no universally accepted definition of intelligence, AI researchers have studied several traits that are considered essential.


AI Research
AI research uses tools and insights from many fields, including computer science, psychology, philosophy, neuroscience, cognitive science, linguistics, operations research, economics, control theory, probability, optimization and logic. AI research also overlaps with fields and tasks such as robotics, control systems, scheduling, data mining, logistics, speech recognition, facial recognition and many others.

Problems of AI

Deduction, reasoning, problem solving
Early AI researchers developed algorithms that imitated the process of conscious, step-by-step reasoning that human beings use when they solve puzzles, play board games, or make logical deductions. These early methods were unable to handle incomplete or imprecise information, but by the late 1980s and 1990s AI research had developed highly successful methods for dealing with uncertainty, employing concepts from probability and economics. For difficult problems, most of these algorithms can require enormous computational resources; most suffer a "combinatorial explosion": the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem-solving algorithms is a high priority for AI research.

It is not clear, however, that conscious human reasoning is any more efficient when faced with a difficult abstract problem. Cognitive scientists have demonstrated that human beings solve most of their problems using unconscious reasoning, rather than the conscious, step-by-step deduction that early AI research was able to model. For many problems, people seem to simply jump to the correct solution: they think "instinctively" and "unconsciously". These instincts seem to involve skills usually applied to other problems, such as motion and manipulation (our so-called "embodied" skills that allow us to deal with the physical world) or perception (for example, our skill at pattern matching). It is hoped that sub-symbolic methods, like computational intelligence and situated AI, will be able to model these instinctive skills. The problem of unconscious problem solving, which forms part of our commonsense reasoning, is largely unsolved.
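As a rough, hypothetical illustration of the combinatorial explosion mentioned above (the toy state space, branching factor and depth limit are invented for the example, not taken from this assignment), a blind breadth-first search must examine on the order of b^d states for branching factor b and depth d:

```python
from collections import deque

def breadth_first_search(start, successors, is_goal, max_depth):
    """Blind breadth-first search; the number of states examined grows
    roughly as branching_factor ** depth."""
    frontier = deque([(start, 0)])
    explored = set()
    while frontier:
        state, depth = frontier.popleft()
        if is_goal(state):
            return state, len(explored)
        if depth >= max_depth or state in explored:
            continue
        explored.add(state)
        for nxt in successors(state):
            frontier.append((nxt, depth + 1))
    return None, len(explored)

# Toy state space: every state has 3 distinct children and the goal is never
# found, so the search enumerates on the order of 3**max_depth states.
def successors(state):
    return [3 * state + i for i in (1, 2, 3)]

_, examined = breadth_first_search(0, successors, lambda s: False, max_depth=10)
print("states examined:", examined)  # tens of thousands already at depth 10
```

Even with this tiny branching factor, one more level of depth triples the work, which is why heuristic and probabilistic methods matter so much in practice.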

Knowledge representation and commonsense knowledge
Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A complete representation of "what exists" is an ontology (borrowing a word from traditional philosophy), and ontological engineering is the science of finding a general representation that can handle all of human knowledge. Among the most difficult problems in knowledge representation are the following (see the sketch after this list):

Default reasoning and the qualification problem: Many of the things people know take
the form of "working assumptions." For example, if a bird comes up in conversation, people typically picture a animal that is fist sized, sings, and flies. None of these things are true about birds in general. John McCarthy identified this problem in 1969 [46] as the qualification

2

Assignment no: 1 compiled by D.Prabhu M.Tech CSE www.datatycoon.blogspot.com
problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.

Unconscious knowledge: Much of what people know isn't represented as "facts" or
"statements" that they could actually say out loud. They take the form of intuitions or tendencies and are represented in the brain unconsciously and sub-symbolically. This unconscious knowledge informs, supports and provides a context for our conscious knowledge. As with the related problem of unconscious reasoning, it is hoped that situated AI or computational intelligence will provide ways to represent this kind of knowledge.

The breadth of commonsense knowledge: The number of atomic facts that the average person knows is astronomical. Research projects that attempt to build a complete knowledge base of commonsense knowledge, such as CYC, require enormous amounts of tedious step-by-step ontological engineering: they must be built, by hand, one complicated concept at a time.
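To make the representation issues above concrete, here is a minimal, hypothetical sketch (the facts, predicates and the "birds fly" default are invented for illustration; this is not how CYC or any particular system works) of storing knowledge as relations and applying a default rule that admits exceptions, in the spirit of the qualification problem:

```python
# Facts stored as (subject, relation, object) triples - a toy knowledge base.
facts = {
    ("tweety", "is_a", "canary"),
    ("canary", "is_a", "bird"),
    ("opus", "is_a", "penguin"),
    ("penguin", "is_a", "bird"),
}

# Explicit exceptions to the default rule "birds fly".
flightless = {"penguin"}

def categories(x):
    """Follow is_a links transitively to collect every category of x."""
    result, frontier = set(), {x}
    while frontier:
        item = frontier.pop()
        for subject, relation, obj in facts:
            if subject == item and relation == "is_a" and obj not in result:
                result.add(obj)
                frontier.add(obj)
    return result

def can_fly(x):
    """Default reasoning: a bird flies unless one of its categories is a known exception."""
    cats = categories(x)
    return "bird" in cats and not (cats & flightless)

print(can_fly("tweety"))  # True  (canary -> bird, no exception applies)
print(can_fly("opus"))    # False (penguin is a listed exception)
```

Real knowledge-representation systems use far richer logics, but the same tension between default rules and their exceptions appears at every scale.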

Planning: automated planning and scheduling

Intelligent agents must be able to set goals and achieve them. [49] They need a way to visualize the future: they must have a representation of the state of the world and be able to make predictions about how their actions will change it.

There are several types of planning problems. Classical planning problems assume that the agent is the only thing acting on the world, and that the agent can be certain what the consequences of its actions will be. Partial-order planning problems take into account the fact that sometimes it is not important which sub-goal the agent achieves first. If the environment is changing, or if the agent cannot be sure of the results of its actions, it must periodically check whether the world matches its predictions (conditional planning and execution monitoring) and it must change its plan as this becomes necessary (replanning and continuous planning). Some planning problems take into account the utility or "usefulness" of a given outcome; these can be analyzed using tools drawn from economics, such as decision theory or decision analysis [53] and information value theory. Multi-agent planning problems try to determine the best plan for a community of agents, using cooperation and competition to achieve a given goal. These problems are related to emerging fields like evolutionary algorithms and swarm intelligence.
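As a small sketch of classical planning under the assumptions just described (a single agent and deterministic actions), the following hypothetical example searches forward from an initial set of facts using actions defined by preconditions and effects; the door-and-key domain and action names are invented for illustration:

```python
from collections import deque

# Each action: name, preconditions, facts added, facts removed.
ACTIONS = [
    ("pick_up_key", {"at_door", "key_on_floor"}, {"has_key"}, {"key_on_floor"}),
    ("unlock_door", {"at_door", "has_key"}, {"door_unlocked"}, set()),
    ("open_door",   {"door_unlocked"},        {"door_open"},   set()),
]

def plan(initial, goal):
    """Breadth-first forward search over sets of facts (classical planning)."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, pre, add, delete in ACTIONS:
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"at_door", "key_on_floor"}, {"door_open"}))
# -> ['pick_up_key', 'unlock_door', 'open_door']
```

More realistic planners replace the blind search with heuristics, while partial-order, conditional and continuous planners relax the single-agent and certainty assumptions named above.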

Learning
Important machine learning problems are:

Unsupervised learning: find a model that matches a stream of input "experiences", and be able to predict what new "experiences" to expect.

Supervised learning, such as classification (being able to determine what category something belongs in, after seeing a number of examples of things from each category) and regression (given a set of numerical input/output examples, discover a continuous function that would generate the outputs from the inputs).

Reinforcement learning: the agent is rewarded for good responses and punished for bad ones. (These can be analyzed in terms of decision theory, using concepts like utility.)
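As a minimal illustration of the supervised setting in the list above (a hypothetical sketch with invented toy data, not part of the original assignment), a nearest-neighbour classifier assigns a new point to the category of its closest labelled example:

```python
import math  # math.dist requires Python 3.8+

# Labelled training examples: (features, category) - invented toy data.
training = [
    ((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
    ((5.0, 5.5), "large"), ((6.0, 5.0), "large"),
]

def classify(point):
    """1-nearest-neighbour: return the label of the closest training example."""
    _, label = min(training, key=lambda example: math.dist(example[0], point))
    return label

print(classify((1.1, 0.9)))  # -> "small"
print(classify((5.4, 5.2)))  # -> "large"
```

Regression works the same way in spirit: the examples carry numerical outputs instead of labels, and the learner fits a continuous function to them.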

Natural language processing

Natural language processing gives machines the ability to read and understand the languages human beings speak. The problem of natural language processing involves such subproblems as syntax and parsing; semantics and disambiguation; and discourse understanding. Many researchers hope that a sufficiently powerful natural language processing system would be able to acquire knowledge on its own, by reading the existing text available over the internet. Some straightforward applications of natural language processing include information retrieval (or text mining) and machine translation.

Perception: machine perception, computer vision, and speech recognition

Machine perception is the ability to use input from sensors (such as cameras, microphones, sonar and other, more exotic ones) to deduce aspects of the world. Computer vision is the ability to analyze visual input. A few selected subproblems are speech recognition, facial recognition and object recognition.

Motion and manipulation: robotics

The field of robotics is closely related to AI. Intelligence is required for robots to handle tasks such as navigation (referred to as robotic mapping, including the sub-problems of localization, knowing where you are; mapping, learning what is around you; and path planning, figuring out how to get there) and manipulating objects (usually described in terms of configuration space).

Social intelligence: affective computing

Emotion and social skills play two roles for an intelligent agent. It must be able to predict the actions of others, by understanding their motives and emotional states (this involves elements of game theory and decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions). For good human-computer interaction, an intelligent machine also needs to display emotions; at the very least it must appear polite and sensitive to the humans it interacts with. At best, it should appear to have normal emotions itself.

APPLICATIONS OF ARTIFICIAL INTELLIGENCE

Business
Banks use artificial intelligence systems to organize operations, invest in stocks, and manage properties. In August 2001, robots beat humans in a simulated financial trading competition (BBC News, 2001). A medical clinic can use artificial intelligence systems to organize bed schedules, make staff rotations, and provide medical information. Many practical applications depend on artificial neural networks, networks that pattern their organization in mimicry of a brain's neurons and have been found to excel at pattern recognition. Financial institutions have long used such systems to detect charges or claims outside of the norm, flagging these for human investigation. Neural networks are also being widely deployed in homeland security, speech and text recognition, medical diagnosis (such as in Concept Processing technology in EMR software), data mining, and e-mail spam filtering.

Robots have become common in many industries. They are often given jobs that are considered dangerous to humans. Robots have proven effective in jobs that are very repetitive, where a lapse in concentration may lead to mistakes or accidents, and in jobs which humans may find degrading. General Motors uses around 16,000 robots for tasks such as painting, welding, and assembly. Japan is the world leader in using and producing robots. In 1995, 700,000 robots were in use worldwide, over 500,000 of which were from Japan.

Toys and games
The 1990s saw some of the first attempts to mass-produce domestically aimed types of basic Artificial Intelligence for education or leisure. This prospered greatly with the Digital Revolution, and helped introduce people, especially children, to a life of dealing with various types of AI, specifically in the form of Tamagotchis and Giga Pets, the Internet (basic search engine interfaces are one simple form), and the first widely released robot, Furby. A mere year later, an improved type of domestic robot was released in the form of Aibo, a robotic dog with intelligent features and autonomy.

List of applications
Typical problems to which AI methods are applied:
- Pattern recognition
- Optical character recognition
- Handwriting recognition
- Speech recognition
- Face recognition
- Artificial creativity
- Computer vision, virtual reality and image processing
- Diagnosis (artificial intelligence)
- Game theory and strategic planning
- Game artificial intelligence and computer game bots
- Natural language processing, translation and chatterbots
- Non-linear control and robotics

Other fields in which AI methods are implemented:
- Artificial life
- Automated reasoning
- Automation
- Biologically-inspired computing
- Colloquis
- Concept mining
- Data mining
- Knowledge representation
- Semantic Web
- E-mail spam filtering
- Robotics
- Behavior-based robotics
- Cognitive cybernetics
- Developmental robotics
- Epigenetic robotics
- Evolutionary robotics
- Hybrid intelligent systems
- Intelligent agents
- Intelligent control
- Litigation
- Swarm intelligence

SWARM INTELLIGENCE
Swarm intelligence is the term used to denote artificial intelligence systems in which the collective behavior of simple agents causes coherent solutions or patterns to emerge. This has applications in swarm robotics. A population of unsophisticated agents interacting with their environment and each other makes up a swarm intelligence system. Because there is no set of global instructions on how these units act, the collective interactions of all the agents within the system often lead to some sort of collective behavior or intelligence. This type of artificial intelligence is used to explore distributed problem solving without a centralized control structure, and it is seen as a better alternative to centralized, rigid and preprogrammed control. Real-life swarm intelligence can be observed in ant colonies, beehives, bird flocks and animal herds.

Taxonomy of Swarm Intelligence
Swarm intelligence has a marked multidisciplinary character, since systems with the above-mentioned characteristics can be observed in a variety of domains. Research in swarm intelligence can be classified according to different criteria.

Natural vs. artificial: It is customary to divide swarm intelligence research into two areas according to the nature of the systems under analysis. We speak therefore of natural swarm intelligence research, where biological systems are studied, and of artificial swarm intelligence, where human artifacts are studied.

Scientific vs. engineering: An alternative and somewhat more informative classification of swarm intelligence research can be given based on the goals that are pursued: we can identify a scientific and an engineering stream. The goal of the scientific stream is to model swarm intelligence systems and to single out and understand the mechanisms that allow a system as a whole to behave in a coordinated way as a result of local individual-individual and individual-environment interactions. The goal of the engineering stream, on the other hand, is to exploit the understanding developed by the scientific stream in order to design systems that are able to solve problems of practical relevance.

Natural/scientific: foraging behavior of ants. In a now classic experiment carried out in 1990, Deneubourg and his group showed that, when given the choice between two paths of different length joining the nest to a food source, a colony of ants has a high probability of collectively choosing the shorter one.

Deneubourg has shown that this behavior can be explained via a simple probabilistic model in which each ant decides where to go by making random decisions based on the intensity of pheromone perceived on the ground, the pheromone being deposited by the ants while moving from the nest to the food source and back.

Artificial/scientific: clustering by a swarm of robots. Several ant species cluster corpses to form cemeteries. Deneubourg et al. (1991) were among the first to propose a distributed probabilistic model to explain this clustering behavior. In their model, ants pick up and drop items with probabilities that depend on information about corpse density which is locally available to the ants. Beckers et al. (1994) programmed a group of robots to implement a similar clustering behavior, demonstrating in this way one of the first scientifically oriented swarm intelligence studies in which artificial agents were used.

Artificial/engineering: swarm-based data analysis. Engineers have used the models of the clustering behavior of ants as an inspiration for designing data mining algorithms. A seminal work in this direction was undertaken by Lumer and Faieta in 1994. They defined an artificial environment in which artificial ants pick up and drop data items with probabilities that are governed by the similarities of the other data items already present in their neighborhood. The same algorithm has also been used for solving combinatorial optimization problems reformulated as clustering problems (Bonabeau et al. 1999).
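To make the probabilistic foraging model described above concrete, here is a small, hypothetical simulation of the two-path (double-bridge) setting; the deposit rule and parameter values are invented for illustration and are much simpler than Deneubourg's actual model:

```python
import random

# Double-bridge sketch: the short path is reinforced faster because ants
# using it complete trips sooner and so deposit pheromone more often.
pheromone = {"short": 1.0, "long": 1.0}
LENGTH = {"short": 1, "long": 2}   # the long path takes twice as many steps

def choose_path():
    """Pick a path with probability proportional to its pheromone level."""
    total = pheromone["short"] + pheromone["long"]
    return "short" if random.random() < pheromone["short"] / total else "long"

for _ in range(2000):
    path = choose_path()
    # Deposit an amount inversely proportional to path length, so the
    # shorter path accumulates pheromone more quickly.
    pheromone[path] += 1.0 / LENGTH[path]

total = pheromone["short"] + pheromone["long"]
print("probability of taking the short path:",
      round(pheromone["short"] / total, 2))   # usually close to 1.0
```

Because ants on the shorter path finish trips more often, its pheromone grows faster, and this positive feedback drives most of the simulated colony onto the short route.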

Properties of a Swarm Intelligence System
The typical swarm intelligence system has the following properties:
- it is composed of many individuals;
- the individuals are relatively homogeneous (i.e., they are either all identical or they belong to a few typologies);
- the interactions among the individuals are based on simple behavioral rules that exploit only local information that the individuals exchange directly or via the environment (stigmergy);
- the overall behavior of the system results from the interactions of individuals with each other and with their environment, that is, the group behavior self-organizes.

Examples of Swarm Intelligence Systems

Ant colony behavior
Ant colony behavior has been one of the most popular models of swarm behavior. Ants by themselves may seem to act randomly and without any discernible purpose, but when the collective interactions among ants are taken together, a collective intelligence and behavior emerges that has the capacity to solve many problems. Through swarm intelligence, ants can determine the shortest path to a food source, feed the whole colony, build large structures, and adapt to situations. The swarm intelligence model of ant colonies has already been applied in technology in recent years, for example in routing optimization of communication networks, decentralized control of UAVs, and factory scheduling.


Particle swarm optimization

Particle swarm optimization, on the other hand, is a type of swarm intelligence inspired by bird flocks and fish schools. This type of swarm optimization gives the individual agents within the swarm the ability to change their positions depending on their own limited intelligence and in comparison with the other agents in the population. This enables individual agents to modify their paths depending on the success of the other agents in the population in finding the correct solution. This type of swarm intelligence is used in practical applications such as artificial neural networks and grammatical evolution models.

A CHRONOLOGY OF ARTIFICIAL INTELLIGENCE

~3000 BC: A papyrus, bought in a Luxor antique shop by Edwin Smith in 1882, was prepared, representing 48 surgical observations of head wounds. The observations were stated as symptom-diagnosis-treatment-prognosis combinations: IF a patient has this symptom THEN he has this injury with this prognosis if this treatment is applied. This is the first known expert system.

13th century: Ramon Lull invented the Zairja, the first device that generated ideas by mechanical means.

1651: Leviathan, written by Thomas Hobbes (1588-1679), was published. In it he proposes that humans collectively, by virtue of their organization and use of their machines, would create a new intelligence. George B. Dyson refers to Hobbes as the patriarch of artificial intelligence in his book "Darwin Among the Machines: The Evolution of Global Intelligence" (p. 7, 1997).

17th century: Leibniz and Pascal invented mechanical computing devices. Pascal invented an eight-digit calculator, the Pascaline, in 1642. In 1694 Gottfried Leibniz built a computer which multiplied by repetitive addition, an algorithm still in use.

1726: Jonathan Swift anticipated an automatic book writer in Gulliver's Travels.

1805: Joseph-Marie Jacquard invented the first truly programmable device, to drive looms with instructions provided by punched cards.

1832: Charles Babbage designed the Analytical Engine, a mechanical programmable computer. He had earlier designed a more limited Difference Engine in 1822, which he never finished building.


1847: George Boole developed a mathematical symbolic logic, later called Boolean algebra, for reasoning about categories (i.e., sets of objects), which is also applicable to manipulating and simplifying logical propositions.

1879: Gottlob Frege went beyond Boole in his treatment of logic with his invention of predicate logic, making it possible to prove general theorems from rules.

~1890: Hand-driven mechanical calculators became available.

1890: Herman Hollerith patented a tabulating machine to process census data fed in on punched cards. His company, the Tabulating Machine Company, eventually merged into what was to become IBM.

Late 1800s: Leonardo Torres y Quevedo invented a relay-activated automaton that played end games in chess.

1898: Behaviorism was expounded by psychologist Edward Thorndike in "Animal Intelligence". The basic idea is that all actions, thoughts, or desires are reflexes triggered by stimuli, with humans just reacting to a higher form of stimulus.

1921: Karel Capek, a Czech writer, invented the term "robot" to describe intelligent machines that revolted against their human masters and destroyed them.

1928: John von Neumann introduced the minimax theorem, which is still used as a basis of game-playing programs.

1937: Alan Turing conceived of a universal Turing machine that could mimic the operation of any other computing machine. However, as did Gödel, he also recognized that there exist certain kinds of calculations that no machine could perform. Even recognizing this limit on computers, Turing still did not doubt that computers could be made to think.


~1938: Claude Shannon showed that calculations could be performed much faster using electromagnetic relays than with mechanical calculators. He applied Boolean algebra.

1943: Vacuum tubes replaced electromechanical relays in calculators. These were used in 1943 in Colossus, a faster successor of Robinson, to decipher increasingly complex German codes.

1945: ENIAC (Electronic Numerical Integrator and Calculator), which ran 1,000 times faster than the relay-operated computers, was ready to run in late 1945.

1945: Symbolic artificial intelligence emerged as a specific intellectual field. Key developments included Norbert Wiener's development of the field of cybernetics, in which he invented a mathematical theory of feedback in biological and engineered systems.

1947: The transistor was invented by William Shockley, Walter Brattain and John Bardeen.

1948: Norbert Wiener published Cybernetics, a landmark book on information theory. "Cybernetics" means "the science of control and communication in the animal and the machine".

1949: Donald O. Hebb suggested a way in which artificial neural networks might learn.

1950: Turing proposed his test, the Turing test, to recognize machine intelligence.

1950s: It became clear that computers could manipulate symbols representing concepts as well as numerical data.

1951: EDVAC, the first von Neumann computer, was built.

Marvin Minsky & Dean Edmonds built the first artificial neural network that simulated a rat finding its way through maze. 1955-1956 Logic Theorist, The first AI PROGRAM was written by Allen Newell, Herbert Simon and J.C. Shaw .It proved theorems using a combination of searching goal-oriented behavior and application of rules. ~1956 IBM released the 701 general purpose electronic computer, the first such machine on the market it was designed by Nathaniel Rochester. 1956 A two month summer conference on thinking machines was held at Dartmouth University. The attendees included John McCarthy, Marvin Minsky, Claude Shannon, Nathaniel Rochester, Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Herbert Simon and Allen Newell. It did not result in a consensus view of AI. John McCarthy names the new discipline, ―Artificial Intelligence‖. 1957 Edward Feigenbaum‘s EPAM (elementary perceiver and memorizer), provided a model of how people memorize nonsense syllables. Arthur Samuel Wrote a Checkers –Playing program that soon learned how to beat him. 1958 John McCarthy & Marvin Minsky found the Artificial intelligence laboratory at the Massachusetts institute of technology. John McCarthy developed the LISP program at MIT for AI work. Early 1960’s AI researchers concentrated on means of representing related knowledge in computers a necessary precursor to developing the ability of computers to learn. 1961 Mortimer Taube, an engineer, authored the first anti-AI book,‖computers and common sense: The Myth of Thinking machines‖. It did not receive much attention. 1962 The world‘s first industrial robots were marketed by an U.S. company. 1963

1963: Tom Evans, under Marvin Minsky's supervision, created the program ANALOGY. It was designed to solve problems that involved associating geometric patterns that occurred in a past case with the pattern in a current case. Stanford University founded its Artificial Intelligence Laboratory under John McCarthy.

1965: The brothers Hubert L. Dreyfus, a philosopher, and Stuart E. Dreyfus, a mathematician, wrote a strongly anti-AI paper, "Alchemy and AI", which was published reluctantly by the RAND Corporation, for whom Hubert was consulting.

Middle and late 1960s: Marvin Minsky and Seymour Papert directed the Blocks Microworld Project at the MIT AI Laboratory. This project improved computer vision, robotics and even natural language processing.

1966: The National Research Council ended all support for automatic translation research, a field closely related to AI.

1968: The tradition of mad computers was continued with the release of the film 2001: A Space Odyssey, directed by Stanley Kubrick, from Arthur C. Clarke's book.

1968-1969: Terry Winograd, a doctoral student under Seymour Papert, wrote SHRDLU. SHRDLU created a simulated blocks world and robotic arm on a computer, about which a user could ask questions and give commands in ordinary English.

1969: A mobile robot called Shakey was assembled at Stanford; it could navigate a blocks world in eight rooms and follow instructions in a simplified form of English. Marvin Minsky and Seymour Papert published their book Perceptrons: An Introduction to Computational Geometry.

1970: William Woods at Bolt, Beranek and Newman in Boston conceived a parsing scheme called the augmented transition network. By mixing syntax rules with semantic analysis, the scheme could discriminate between the meanings of sentences such as "The beach is sweltering" and "The boy is sweltering". DARPA's Speech Understanding Research (SUR) program, for which Carnegie Mellon was the prime contractor, was brought to an abrupt end. Although the goals were met, the product, which had a limited grammar, was not considered practical.

Alain Colmerauer & Phillipe Roussel wrote the computer language, PROLOG (for PROgrammation en LOGique).It was revised in 1974 to force logical statements (i.e., IF THEN) to be written only in the horn clause format.

1973: Sir James Lighthill, Cambridge University's Lucasian Chair of Applied Mathematics, advised the British government to cease most AI research in Britain.

1974: Funding for AI research at MIT, Carnegie Mellon, and Stanford from DARPA was cut drastically as a result of recent disappointing results. Diverging specialities in the AI field emerged; these included Edward Feigenbaum's work on expert systems, Roger Schank on language analysis, Marvin Minsky on knowledge representation, Douglas Lenat on automatic learning and the nature of heuristics, David Marr on machine vision, and others developing PROLOG.

1975: Marvin Minsky published a paper, "A Framework for Representing Knowledge", which he started with: "It seems to me that the ingredients of most theories in artificial intelligence and in psychology have been on the whole too minute, local and unstructured to account ... for the effectiveness of commonsense thought."

~1977: Roger Schank and others augmented conceptual dependency theory with the use of scripts and of knowledge of people's plans and goals, to make sense of stories told by people and to answer questions about those stories that would require inferences to be made. The first commercial expert system, XCON (for eXpert CONfigurer), was developed by John McDermott at Carnegie Mellon.

July 1979: World champion backgammon player Luigi Villa of Italy became the first human champion of a board game to be defeated by a computer program, written by Hans Berliner of Carnegie Mellon. The program chose its moves by evaluating a weighted set of criteria that measured the goodness of a move.

1980s: Fuzzy logic was introduced in a fuzzy predictive system used to operate the automated subway trains in Sendai, Japan. This system, designed by Hitachi, reduced energy consumption by 10% and lowered the margin of error in stopping the trains at specified positions to less than 10 centimeters. The first meeting of the American Association for Artificial Intelligence was held in Stanford, California.

1982: David Marr's book Vision was published posthumously (Marr died of leukemia in 1980). It provided a new view of how the human brain uses shading, stereopsis, texture, edges, color and the frame concept to recognize things.

Early and mid 1980s: A succession of early expert systems was built and put into use by companies. These included: a hydrostatic and rotary bacteria-killing cooker diagnosis program at Campbell's Soup, based on Aldo Cimino's knowledge; a lathe and grinder diagnosis analyzer at GM's Saginaw plant, using Charlie Amble's skill at listening for problems based on sounds; a mineral prospecting expert system called PROSPECTOR that found a molybdenum deposit; a Bell system that analyzed problems in telephone networks and recommended solutions; FOLIO, an investment portfolio advisor; and WILLIARD, a forecaster of large thunderstorms.

Mid 1980s: Resurgence of neural network technology with the publication of key papers by the Parallel Distributed Processing Study Group. Demonstrations of neural networks in diverse applications, such as artificial speech generation, learning to play backgammon, and driving a vehicle, illustrated the versatility of the technology.

1985: MIT's Media Laboratory, dedicated to researching media-related applications using computer science (including artificial intelligence) and sociology, was founded under Jerome Wiesner and Nicholas Negroponte. Speech systems were now able to provide any one of the following: a large vocabulary, continuous speech recognition, or speaker independence.

1987: Etienne Wenger published his book "Artificial Intelligence and Tutoring Systems: Computational and Cognitive Approaches to the Communication of Knowledge", a milestone in the development of intelligent tutoring systems. The inflexibility of expert systems in applying rules, and the tunnel vision implied by their limited knowledge, could result in poor conclusions; expert systems could not reverse their logical conclusions if later given contradictory facts.

End of 1980s: Expert systems were increasingly used in industry, and other AI techniques were being implemented jointly with conventional software, often unnoticed but with beneficial effect.

1990s: Emphasis on ontology began. Ontology is the study of the kinds of things that exist. In AI, programs and sentences deal with various kinds of objects, and AI researchers study what these kinds are and what their properties are.

1994: The World Wide Web emerged.

1997: Deep Blue, a highly parallel 32-node IBM RS/6000 SP supercomputer, beat Garry Kasparov, the world chess champion. Deep Blue did this by calculating millions of alternative plays for a number of moves ahead.

May 17, 1999: An artificial intelligence system, Remote Agent, was given primary control of a spacecraft for the first time. For two days Remote Agent ran on the on-board computer of Deep Space 1, while 60 million miles from Earth. The goal of such control systems is to provide less costly and more capable control.

2000s: AI applications of many, seemingly unrelated kinds are quietly being commercialized in greater numbers:
- Continuous speech recognition programs that accurately turn speech into text
- Face-recognition systems
- Washing machines that automatically adjust to different conditions to wash clothes better
- Automatic mortgage underwriting systems
- Automatic investment decision makers
- Data mining tools
- E-mail filters

REFERENCES

[1] http://en.wikipedia.org/wiki/Artificialintelligence
[2] Elaine Rich and Kevin Knight, Artificial Intelligence, 2nd Edition, Tata McGraw-Hill, 1992.
[3] Dan W. Patterson, Introduction to Artificial Intelligence and Expert Systems, 3rd Edition, Prentice Hall, 1990.
[4] P. H. Winston, Artificial Intelligence, Addison-Wesley, 1983.
[5] http://en.wikipedia.org/wiki/Swarmintelligence

[6] The Winchester, vol. 2, pp. 12-15, published by the Department of Computer Science, B.C.E.T., Karaikal.

