Artificial Intelligence

Artificial intelligence (AI) is intelligence exhibited by machines or software. It is an
academic field of study that generally pursues the goal of emulating human-like
intelligence, though variations such as strong AI and weak AI are also studied.
Major AI researchers and textbooks define the field as "the study and design of intelligent
agents", where an intelligent agent is a system that perceives its environment and takes
actions that maximize its chances of success. John McCarthy, who coined the term in 1955,
defines it as "the science and engineering of making intelligent machines".
Thinking machines and artificial beings appear in Greek myths, such as Talos of Crete, the
bronze robot of Hephaestus, and Pygmalion's Galatea. Human likenesses believed to have
intelligence were built in every major civilization: animated cult images were worshiped
in Egypt and Greece, and humanoid automatons were built by Yan Shi, Hero of
Alexandria and Al-Jazari. It was also widely believed that artificial beings had been created
by Jābir ibn Hayyān, Judah Loew and Paracelsus. By the 19th and 20th centuries, artificial
beings had become a common feature in fiction, as in Mary Shelley's Frankenstein or Karel
Čapek's R.U.R. (Rossum's Universal Robots). Pamela McCorduck argues that all of these are
examples of an ancient urge, as she describes it, "to forge the gods". Stories of these
creatures and their fates discuss many of the same hopes, fears and ethical concerns that are
presented by artificial intelligence.
Mechanical or "formal" reasoning has been developed by philosophers and mathematicians
since antiquity. The study of logic led directly to the invention of the programmable digital
electronic computer, based on the work of mathematician Alan Turing and others.
Turing's theory of computation suggested that a machine, by shuffling symbols as simple as
"0" and "1", could simulate any conceivable act of mathematical deduction. This, along with
concurrent discoveries in neurology, information theory and cybernetics, inspired a small
group of researchers to begin seriously considering the possibility of building an electronic brain.
The field of AI research was founded at a conference on the campus of Dartmouth College in
the summer of 1956. The attendees, including John McCarthy, Marvin Minsky, Allen
Newell and Herbert Simon, became the leaders of AI research for many decades. They and
their students wrote programs that were, to most people, simply astonishing: computers were
solving word problems in algebra, proving logical theorems and speaking English. By the
middle of the 1960s, research in the U.S. was heavily funded by the Department of
Defense and laboratories had been established around the world. AI's founders were
profoundly optimistic about the future of the new field: Herbert Simon predicted that
"machines will be capable, within twenty years, of doing any work a man can do" and Marvin
Minsky agreed, writing that "within a generation ... the problem of creating 'artificial
intelligence' will substantially be solved".
They had failed to recognize the difficulty of some of the problems they faced. In 1974, in
response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress
to fund more productive projects, both the U.S. and British governments cut off all undirected
exploratory research in AI. The next few years would later be called an "AI winter", a period
when funding for AI projects was hard to find.
In the early 1980s, AI research was revived by the commercial success of expert systems, a
form of AI program that simulated the knowledge and analytical skills of one or more human
experts. By 1985 the market for AI had reached over a billion dollars. At the same time,
Japan's fifth generation computer project inspired the U.S. and British governments to restore
funding for academic research in the field. However, beginning with the collapse of the Lisp
Machine market in 1987, AI once again fell into disrepute, and a second, longer lasting AI
winter began.
In the 1990s and early 21st century, AI achieved its greatest successes, albeit somewhat
behind the scenes. Artificial intelligence is used for logistics, data mining, diagnosis and
many other areas throughout the technology industry. The success was due to several factors:
the increasing computational power of computers (see Moore's law), a greater emphasis on
solving specific subproblems, the creation of new ties between AI and other fields working
on similar problems, and a new commitment by researchers to solid mathematical methods
and rigorous scientific standards.
On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a
reigning world chess champion, Garry Kasparov. In 2005, a Stanford robot won the DARPA
Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert
trail. Two years later, a team from CMU won the DARPA Urban Challenge when their
vehicle autonomously navigated 55 miles in an urban environment while responding to
traffic hazards and obeying all traffic laws. In February 2011, in a Jeopardy! quiz show exhibition
match, IBM's question answering system, Watson, defeated the two greatest Jeopardy
champions, Brad Rutter and Ken Jennings, by a significant margin. The Kinect, which
provides a 3D body–motion interface for the Xbox 360 and the Xbox One, uses algorithms
that emerged from lengthy AI research as does the iPhone's Siri.

The scientists and engineers at the Computer Vision and Pattern Recognition conference are
creating a world in which cars drive themselves, machines recognize people and
"understand" their emotions, and humanoid robots travel unattended, performing everything
from mundane factory tasks to emergency rescues.
C.V.P.R., as it is known, is an annual gathering of computer vision scientists, students,
roboticists, software hackers — and increasingly in recent years, business and entrepreneurial
types looking for another great technological leap forward.
The growing power of computer vision is a crucial first step for the next generation of
computing, robotic and artificial intelligence systems. Once machines can identify objects
and understand their environments, they can be freed to move around in the world. And once
robots become mobile they will be increasingly capable of extending the reach of humans or
replacing them.
Self-driving cars, factory robots and a new class of farm hands known as ag-robots are
already demonstrating what increasingly mobile machines can do. Indeed, the rapid advance
of computer vision is just one of a set of artificial intelligence-oriented technologies — others
include speech recognition, dexterous manipulation and navigation — that underscore a sea
change beyond personal computing and the Internet, the technologies that have defined the
last three decades of the computing world.
"During the next decade we're going to see smarts put into everything," said Ed Lazowska, a
computer scientist at the University of Washington who is a specialist in Big Data. "Smart
homes, smart cars, smart health, smart robots, smart science, smart crowds and smart
computer-human interactions."
The enormous amount of data being generated by inexpensive sensors has been a significant
factor in altering the centre of gravity of the computing world, he said, making it possible to
use centralized computers in data centres — referred to as the cloud — to take artificial
intelligence technologies like machine-learning and spread computer intelligence far beyond
desktop computers.
Apple was the most successful early innovator in popularizing what is today described as
ubiquitous computing. The idea, first proposed by Mark Weiser, a computer scientist with
Xerox, involves embedding powerful microprocessor chips in everyday objects.
Steve Jobs, during his second tenure at Apple, was quick to understand the implications of
the falling cost of computer intelligence. Taking advantage of it, he first created a digital
music player, the iPod, and then transformed mobile communication with the iPhone. Now
such innovation is rapidly accelerating into all consumer products.
"The most important new computer maker in Silicon Valley isn't a computer maker at all, it's
Tesla," the electric car manufacturer, said Paul Saffo, a managing director at Discern
Analytics, a research firm based in San Francisco. "The car has become a node in the network
and a computer in its own right. It's a primitive robot that wraps around you."
Here are several areas in which next-generation computing systems and more powerful
software algorithms could transform the world in the next half-decade.

With increasing frequency, the voice on the other end of the line is a computer.
It has been two years since Watson, the artificial intelligence program created by I.B.M., beat
two of the world's best "Jeopardy" players. Watson, which has access to roughly 200 million
pages of information, is able to understand natural language queries and answer questions.
The computer maker had initially planned to test the system as an expert adviser to doctors;
the idea was that Watson's encyclopedic knowledge of medical conditions could aid a human
expert in diagnosing illnesses, as well as contributing computer expertise elsewhere in
medicine.
In May, however, I.B.M. went a significant step farther by announcing a general-purpose
version of its software, the "I.B.M. Watson Engagement Advisor." The idea is to make the
company's question-answering system available in a wide range of call center, technical
support and telephone sales applications. The company says that as many as 61 percent of all
telephone support calls currently fail because human support-centre employees are unable to
give people correct or complete information.
Watson, I.B.M. says, will be used to help human operators, but the system can also be used in
a "self-service" mode, in which customers can interact directly with the program by typing
questions in a Web browser or by speaking to a speech recognition program.
That suggests a "Freakonomics" outcome: There is already evidence that call-center
operations that were once outsourced to India and the Philippines have come back to the
United States, not as jobs, but in the form of software running in data centres.

A race is under way to build robots that can walk, open doors, climb ladders and generally
replace humans in hazardous situations.
In December, the Defense Advanced Research Projects Agency, or Darpa, the Pentagon's
advanced research arm, will hold the first of two events in a $2 million contest to build a
robot that could take the place of rescue workers in hazardous environments, like the site of
the damaged Fukushima Daiichi nuclear plant.
Scheduled to be held in Miami, the contest will involve robots that compete at tasks as
diverse as driving vehicles, traversing rubble fields, using power tools, throwing switches and
closing valves.
In addition to the Darpa robots, a wave of intelligent machines for the workplace is coming
from Rethink Robotics, based in Boston, and Universal Robots, based in Denmark, which
have begun selling lower-cost two-armed robots to act as factory helpers. Neither company's
robots have legs, or even wheels, yet. But they are the first commercially available robots that
do not require cages, because they are able to watch and even feel their human co-workers, so
as not to harm them.
For the home, companies are designing robots that are more sophisticated than today's
vacuum-cleaner robots. Hoaloha Robotics, founded by the former Microsoft executive Tandy
Trower, recently said it planned to build robots for elder care, an idea that, if successful,
might make it possible for more of the aging population to live independently.
Seven entrants in the Darpa contest will be based on the imposing humanoid-shaped Atlas
robot manufactured by Boston Dynamics, a research company based in Waltham,
Massachusetts. Among the wide range of other entrants are some that look anything but
humanoid — with a few that function like "transformers" from the world of cinema. The
contest, to be held in the infield of the Homestead-Miami Speedway, may well have the
flavor of the bar scene in "Star Wars."

Intelligent Transportation:
Amnon Shashua, an Israeli computer scientist, has modified his Audi A7 by adding a camera
and artificial-intelligence software, enabling the car to drive the 65 kilometers, or 40 miles,
between Jerusalem and Tel Aviv without his having to touch the steering wheel.
In 2004, Darpa held the first of a series of "Grand Challenges" intended to spark interest in
developing self-driving cars. The contests led to significant technology advances, including
"Traffic Jam Assist" for slow-speed highway driving; "Super Cruise" for automated freeway
driving, already demonstrated by General Motors and others; and self-parking, a feature
already available from a number of car manufacturers.
Recently General Motors and Nissan have said they will introduce completely autonomous
cars by the end of the decade. In a blend of artificial-intelligence software and robotics,
Mobileye, a small Israeli manufacturer of camera technology for automotive safety that was
founded by Mr. Shashua, has made considerable progress. While Google and automotive
manufacturers have used a variety of sensors including radars, cameras and lasers, fusing the
data to provide a detailed map of the rapidly changing world surrounding a moving car,
Mobileye researchers are attempting to match that accuracy with just video cameras and
specialized software.

Emotional Computing:
At a preschool near the University of California, San Diego, a child-size robot named Rubi
plays with children. It listens to them, speaks to them and understands their facial
expressions. Rubi is an experimental project of Prof. Javier Movellan, a specialist in machine
learning and
robotics. Professor Movellan is one of a number of researchers now working on a class of
computers that can interact with humans, including holding conversations.
Computers that understand our deepest emotions hold the promise of a world full of brilliant
machines. They also raise the specter of an invasion of privacy on a scale not previously
possible, as they move a step beyond recognizing human faces to the ability to watch the
array of muscles in the face and decode the thousands of possible movements into an
understanding of what people are thinking and feeling.
These developments are based on the work of the American psychologist Paul Ekman, who
explored the relationship between human emotion and facial expression. His research found
the existence of "micro expressions" that expose difficult-to-suppress authentic reactions. In
San Diego, Professor Movellan has founded a company, Emotient, that is one of a handful of
start-ups pursuing applications for the technology. A near-term use is in machines that can
tell when people are laughing, crying or skeptical — a survey tool for film and television
producers. Farther down the road, it is likely that applications will know exactly how people
are reacting as the conversation progresses, a step well beyond Siri, Apple's voice recognition
system.
Advances in artificial intelligence could lead to mass unemployment, warn experts

Experts have warned that rapidly improving artificial intelligence could lead to mass
unemployment just days after Google revealed the purchase of a London-based start-up
dedicated to developing this technology.
Speaking on Radio 4's Today programme, Dr Stuart Armstrong from the Future of Humanity
Institute at the University of Oxford said that there was a risk that computers could take over
human jobs "at a faster rate than new jobs could be generated."
"We have some studies looking at which jobs are the most vulnerable, and there are quite a
lot of them in logistics, administration, insurance underwriting," said Dr Armstrong.
"Ultimately, a huge swathe of jobs is potentially vulnerable to improved artificial
intelligence."
Dr Murray Shanahan, a professor of cognitive robotics at Imperial College London, agreed
that improvements in artificial intelligence were creating "short-term issues that we all need
to be talking about."
"It's very difficult to predict," said Dr Shanahan. "That is, of course, a concern. But in the
past when we have developed new kinds of technologies then often they have created jobs at
the same time as taking them over. But it certainly is something we ought to be discussing."
Both academics did, however, praise Google for creating an ethics board to look at how to
"deploy artificial intelligence safely and reduce the risks" after its £400 million purchase of
London-based start-up DeepMind.
Google's search technology powers devices such as Google Glass, allowing users
to perform searches and ask for help in natural language.
DeepMind has been operating largely unnoticed by the wider UK technology scene, although
its advances in artificial intelligence have obviously been of interest to the experts. Founded
only in 2012, DeepMind is Google's largest European acquisition to date.
Dr Shanahan hailed DeepMind as "a company with some outstanding people working for it,"
noting that the company has mainly been working in the areas of machine learning and deep
learning, which he described as "all about finding patterns in very large quantities of data."
Google's purchase of the company has led to speculation as to how it might implement the
technology. Although there had been some talk of using DeepMind's algorithms to give
'brains' to Google's recent robotics purchases, insiders have said that the acquisition was about
improving search functionality, not AI.
Regardless of how DeepMind's expertise will be used, Google's purchase of the company
underscores increasing fears over the impact of technology on employment.
Academics note that although professions have always been threatened by the forces of
'progress' (a nebulous concept that can cover anything from speedier computers to more
efficient steam engines), current trends suggest jobs are being destroyed faster than they are
being created.
A recent paper by Carl Benedikt Frey and Michael A. Osborne of Oxford University suggests
that nearly half (47 per cent) of all American jobs are under threat and could be automated in
"a decade or two".

Artificial Neural Systems (ANS)
A neural network is an electronic model of the brain, consisting of many interconnected
simple processors. It loosely imitates the way the biological brain works.
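As a rough illustration (not part of the original text), the short Python sketch below wires up a tiny network of such "simple processors" and trains it on the XOR problem; the layer sizes, learning rate and number of training steps are arbitrary choices made for this example.

# A minimal feed-forward neural network: a few interconnected "simple processors"
# (neurons) learning XOR by adjusting their connection weights.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR truth table
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for a 2-4-1 network, initialised randomly
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: each neuron sums its weighted inputs and squashes the result
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: push the output error back through the connections
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # typically close to [[0], [1], [1], [0]] after training

The network is never given an explicit rule for XOR; it learns one by adjusting its connection weights from examples, which is why the list below notes that such systems do not need to be programmed in order to learn.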

Applications of artificial neural systems:
• Learning to read postcodes
• Stock market prediction
• Debt risk assessment

Advantages of artificial neural systems:
• They do not need to be explicitly programmed in order to learn.

Disadvantages of artificial neural systems:
• Set-up takes time and money, as it requires plenty of expert advice.

Vision systems:
• The need to interpret, fully understand and make sense of visual input on a
computer, i.e. artificial intelligence is used to try to interpret and understand
an image, for example in industrial or military use and satellite photo
interpretation (a rough code sketch follows this list).
• A spy plane takes photographs, which experts then analyse to work out whether
they show an enemy area.
• Police using a computer to produce a photofit image of a criminal.
• Doctors using the system to make a diagnosis of a patient.
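As a rough code sketch (not from the original document), the following shows one common way a computer can "make sense of" an image: passing it through a pretrained image classifier. It assumes a recent version of PyTorch/torchvision is installed, and "photo.jpg" is just a placeholder file name.

# Hypothetical sketch: classify an image with a pretrained network (torchvision).
import torch
from torchvision import models, transforms
from PIL import Image

# Load a small classifier pretrained on ImageNet (weights download on first use)
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing: resize, crop, convert to tensor, normalise
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("photo.jpg").convert("RGB")   # placeholder image file
batch = preprocess(img).unsqueeze(0)           # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)
    top = probs.argmax(dim=1).item()

print("Most likely ImageNet class index:", top)

Real vision systems of the kind listed above (satellite analysis, photofit matching, medical diagnosis) layer detection, segmentation and domain-specific models on top of this basic idea.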

Speech recognition:
The ability of a computer to understand a human talking to it. There are many problems
associated with this: humans have different accents, use slang words, there may be noise in
the background, or the speaker may be feeling poorly (flu, cold, etc.). This means that the
computer has to be trained to recognise the voice of the human: by talking to the computer
system beforehand, i.e. training it, the user ensures the system will be able to recognise their
words, sentences, etc.

Handwriting recognition:
This is where human handwriting is turned into text that can then be edited, for example when
input into a palmtop computer or a tablet. A stylus is used to write on the computer screen,
and handwriting recognition software then changes the writing into text; a teacher using a
smart board can turn their own writing into text in the same manner.
Optical character recognition (OCR) is closely related: it allows you to scan in a page
containing text, and the OCR software will convert it into editable text. It does this by
recognising the shapes of the letters and converting them into ASCII text.
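As a small illustrative sketch (not from the original document), OCR of a scanned page can be done with the pytesseract wrapper around the Tesseract engine; this assumes pytesseract, Pillow and the Tesseract program itself are installed, and "scanned_page.png" is a placeholder file name.

# Hypothetical sketch: convert a scanned page of text into editable text with OCR.
from PIL import Image
import pytesseract

page = Image.open("scanned_page.png")      # placeholder scanned image
text = pytesseract.image_to_string(page)   # recognise the letter shapes

print(text)                                # editable text recovered from the page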

Intelligent robots:
An intelligent robot has many different sensors, powerful processors and a large memory in
order to exhibit intelligence. Such robots learn from their mistakes and are able to adapt to
new situations that arise.
An intelligent robot can be programmed with its own expert system: for example, if a factory
floor is blocked by fallen boxes, an intelligent robot will remember this and take a different
route, as sketched below.
These intelligent robots carry out many different tasks such as automated delivery in a
factory, pipe inspection, bomb disposal, and exploration of dangerous or unknown
environments.

Advantages of intelligent robots:
• They can work 24/7, 365 days a year, unlike human workers, and do not need holidays.
• They are cheaper: they do not need to be paid, so the company makes more money in the
long run.
• They are more accurate.
• They are safer than sending a human into dangerous places, e.g. nuclear power stations.


Knowledge Representation:
Semantic Net

A semantic net is a knowledge representation technique: a way of showing the
relationships between members of a set of objects, i.e. facts.
For example, a simple semantic net might represent the following facts (a small
code sketch follows the list):
• cat is a mammal
• dog is a mammal
• dog likes meat
• dog likes water
• cat likes cream
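As a small illustration (not from the original document), these facts can be stored as a set of (subject, relation, object) links and queried; the relation names "is_a" and "likes" are just labels chosen for this sketch.

# Hypothetical sketch: the semantic net stored as subject-relation-object links.
facts = {
    ("cat", "is_a", "mammal"),
    ("dog", "is_a", "mammal"),
    ("dog", "likes", "meat"),
    ("dog", "likes", "water"),
    ("cat", "likes", "cream"),
}

def objects_of(subject, relation):
    """Return everything the subject is linked to by the given relation."""
    return {obj for (subj, rel, obj) in facts if subj == subject and rel == relation}

print(objects_of("dog", "likes"))   # {'meat', 'water'}
print(objects_of("cat", "is_a"))    # {'mammal'}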

