Neural Networks Ver1

Published on February 2017

Centre for Advanced Technology
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SOFTWARE DEVELOPMENT * EMBEDDED SYSTEMS

#109, 2nd Floor, Bombay Flats, Nungambakkam High Road,
Nungambakkam, Chennai - 600 034.
Phone - 044 - 2823 5816, 98412 32310
E-Mail: [email protected], [email protected], URL: ncctchennai.com
Dedicated to Commitments, Committed to
Technologies

Where Technology and Solutions Meet

INTRODUCTION

The purpose of this session is to give a technical presentation on NEURAL NETWORKS AND THEIR APPLICATIONS.

About NCCT

NCCT is a leading IT organization backed by strong R & D, concentrating on Software Development & Electronics product development.

The major activities of NCCT include System Software Design and Development, Networking and Communication, Enterprise Computing, Application Software Development and Web Technologies Development.

• Machine learning and the human brain
• Introduction to Neural Networks
• Computer neurons
• Architecture of Neural Networks
• Need for Neural Networks
• Uses of Neural Networks
• Algorithms
• Applications

• Machine learning involves adaptive mechanisms that enable computers to learn from experience, learn by example and learn by analogy.
• Learning capabilities can improve the performance of an intelligent system over time.
• The most popular approaches to machine learning are Artificial Neural Networks and Genetic Algorithms.
• This session is dedicated to NEURAL NETWORKS.

LEARNING

• SUPERVISED LEARNING
  – Recognizing hand-written digits, pattern recognition, regression.
  – Labeled examples: (input, desired output)
  – Neural Network models: perceptron, feed-forward, radial basis function, support vector machine.
• UNSUPERVISED LEARNING
  – Finding similar groups of documents on the web, content-addressable memory, clustering.
  – Unlabeled examples (different realizations of the input alone)
  – Neural Network models: self-organizing maps, Hopfield networks.


THE BRAIN vs THE MACHINE

THE BRAIN excels at: Pattern Recognition, Association, Complexity, Noise Tolerance

THE MACHINE excels at: Calculation, Precision, Logic


• The Von Neumann architecture uses a single processing unit:
  – Tens of millions of operations per second
  – Absolute arithmetic precision

• The brain uses many slow, unreliable processors acting in parallel:
  – Ten billion neurons
  – Several thousand connections per neuron on average
  – Hundreds of operations per second
  – Low reliability; neurons die off frequently and are never replaced
  – Compensates for these problems by massive parallelism

• The brain has been extensively studied by scientists.
• Its vast complexity prevents all but a rudimentary understanding.
• Even the behaviour of an individual neuron is extremely complex.
• Single “percepts” are distributed among many neurons.
• Localized parts of the brain are responsible for certain well-defined functions (e.g., vision, motion).
• Which features are integral to the brain's performance?
• Which are incidental, imposed by the fact of biology?

WHAT ARE NEURAL
NETWORKS

A NEURAL NETWORK can be defined as a model of
reasoning based on the human brain.

The brain consists of a densely interconnected set of
nerve cells, or basic information - processing units,
called neurons.

The human brain incorporates nearly 10 billion neurons
and 60 trillion connections, synapses, between them.

By using multiple neurons simultaneously, the brain
can perform its functions much faster than the fastest
computers in existence today


A NEURON consists of a cell body, soma, a number
of fibres called dendrites, and a single long fibre
called the axon

Each neuron has a very simple structure, but an army of such elements constitutes tremendous processing power

The neurons are connected by weighted links
passing signals from one neuron to another

Neural Networks are a type of artificial intelligence
that attempts to imitate the way a human brain
works.
WHAT ARE NEURAL
NETWORKS


Rather than using a digital model, in which all
computations manipulate zeros and ones, a neural
network works by creating connections between
processing elements, the computer equivalent of
neurons.

The organization and weights of the connections
determine the output

Information is stored and processed in a neural network
simultaneously throughout the whole network, rather
than at specific locations

In other words, in neural networks, both data and its
processing are global rather than local
WHAT ARE NEURAL
NETWORKS

BIOLOGICAL NEURAL NETWORK

[Figure: two biological neurons, showing the soma, dendrites, axon and connecting synapses]

DIAGRAM OF A NEURON

[Figure: a neuron receiving input signals x1, x2, …, xn through weights w1, w2, …, wn and emitting output signal Y]

ANALOGY BETWEEN BIOLOGICAL AND ARTIFICIAL NEURAL NETWORKS

[Figure: the soma, dendrites, axon and synapses of biological neurons set against the input layer, middle layer and output layer of an artificial network carrying input and output signals]

ARCHITECTURE OF A TYPICAL ARTIFICIAL NEURAL NETWORK

[Figure: input signals entering the input layer, flowing through the middle layer, and leaving the output layer as output signals]

USES OF NEURAL NETWORKS

• Neural networks are used for both regression and classification.
• Regression covers function approximation and time-series prediction.
• In classification, the objective is to assign input patterns to one of several categories or classes, usually represented by outputs restricted to lie in the range from 0 to 1.


• Neural-network function approximation can give better results than classical regression techniques.
• It can work very well for nonlinear systems.

SIMPLE EXPLANATION
HOW A NEURAL NETWORK WORKS

• Neural Networks use a set of processing elements (or nodes) loosely analogous to neurons in the brain.
• These nodes are interconnected in a network that can then identify patterns in data as it is exposed to the data. In a sense, the network learns from experience just as people do.
• This distinguishes neural networks from traditional computing programs, which simply follow instructions in a fixed sequential order.

SIMPLE EXPLANATION
HOW A NEURAL NETWORK WORKS

The structure of a neural network looks something like this:

• The bottom layer is the input layer, in this case with 5 inputs labelled X1 through X5.
• In the middle is the hidden layer, with a variable number of nodes. It is the hidden layer that performs much of the work of the network.
• The output layer in this case has two nodes, Z1 and Z2, representing output values we are trying to determine from the inputs.
• For example, we may be trying to predict sales (output) based on past sales, price and season (inputs).

SIMPLE EXPLANATION
THE HIDDEN LAYER

• Each node in the hidden layer is fully connected to the inputs. That means what is learned in a hidden node is based on all the inputs taken together.
• This hidden layer is where the network learns interdependencies in the model.
• The following diagram provides some detail of what goes on inside a hidden node.

SIMPLE EXPLANATION
THE HIDDEN LAYER

• Simply speaking, a weighted sum is performed: X1 times W1, plus X2 times W2, on through X5 and W5.
• This weighted sum is performed for each hidden node and each output node, and is how interactions are represented in the network.
• Each summation is then transformed using a nonlinear function before the value is passed on to the next layer.
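The weighted-sum-then-nonlinearity step above can be sketched in Python. This is a minimal illustration: the choice of sigmoid as the nonlinear function and the sample input and weight values are assumptions, not values from the slides.

```python
import math

def hidden_node_output(inputs, weights):
    """One hidden node: weighted sum of the inputs, then a nonlinear transform."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    # Sigmoid squashes the sum into the range (0, 1) before it is passed on
    return 1.0 / (1.0 + math.exp(-weighted_sum))

# Hypothetical inputs X1..X5 and weights W1..W5
inputs = [1.0, 0.5, 0.0, 0.2, 0.8]
weights = [0.4, -0.3, 0.1, 0.7, -0.2]
print(hidden_node_output(inputs, weights))
```

Each hidden node runs this same computation with its own weight vector, which is how the network represents interactions between inputs.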

HEBBIAN LEARNING

• TWO NEURONS REPRESENT TWO CONCEPTS
  The synaptic strength between them indicates the strength of association of the concepts.
• HEBBIAN LEARNING
  Connections are strengthened whenever two concepts occur together.
• PAVLOVIAN CONDITIONING
  An animal is trained to associate two events, e.g. dinner is served after a bell rings.
THE PERCEPTRON

• In 1958, Frank Rosenblatt introduced a training algorithm that provided the first procedure for training a simple ANN: a perceptron.
• The perceptron is the simplest form of a neural network. It consists of a single neuron with adjustable synaptic weights and a hard limiter.
• The operation of Rosenblatt’s perceptron is based on the McCulloch and Pitts neuron model. The model consists of a linear combiner followed by a hard limiter.
• The weighted sum of the inputs is applied to the hard limiter, which produces an output equal to +1 if its input is positive and −1 if it is negative.

[Figure: single-neuron perceptron — inputs x1 and x2 enter through weights w1 and w2 into a linear combiner with threshold θ, followed by a hard limiter producing output Y]

HOW DOES THE PERCEPTRON LEARN ITS CLASSIFICATION TASKS?

• This is done by making small adjustments in the weights to reduce the difference between the actual and desired outputs of the perceptron.
• The initial weights are randomly assigned, usually in the range [−0.5, 0.5], and then updated to obtain output consistent with the training examples.

HOW DOES THE PERCEPTRON LEARN ITS CLASSIFICATION TASKS?

• If at iteration p the actual output is Y(p) and the desired output is Yd(p), then the error is given by:

  e(p) = Yd(p) − Y(p),  where p = 1, 2, 3, . . .

  Iteration p here refers to the pth training example presented to the perceptron.
• If the error, e(p), is positive, we need to increase perceptron output Y(p), but if it is negative, we need to decrease Y(p).

THE PERCEPTRON LEARNING RULE

  wi(p + 1) = wi(p) + α · xi(p) · e(p),  where p = 1, 2, 3, . . .

α is the learning rate, a positive constant less than unity.

The perceptron learning rule was first proposed by Rosenblatt in 1960. Using this rule we can derive the perceptron training algorithm for classification tasks.

PERCEPTRON’S TRAINING ALGORITHM

STEP 1: INITIALISATION
Set initial weights w1, w2, …, wn and threshold θ to random numbers in the range [−0.5, 0.5].

STEP 2: ACTIVATION
Activate the perceptron by applying inputs x1(p), x2(p), …, xn(p) and desired output Yd(p). Calculate the actual output at iteration p = 1:

  Y(p) = step[ Σ(i=1..n) xi(p) wi(p) − θ ]

where n is the number of the perceptron inputs, and step is a step activation function.

PERCEPTRON’S TRAINING ALGORITHM

STEP 3: WEIGHT TRAINING
Update the weights of the perceptron:

  wi(p + 1) = wi(p) + Δwi(p)

where Δwi(p) is the weight correction at iteration p. The weight correction is computed by the delta rule:

  Δwi(p) = α · xi(p) · e(p),  where e(p) = Yd(p) − Y(p)

STEP 4: ITERATION
Increase iteration p by one, go back to Step 2 and repeat the process until convergence.
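Steps 1 to 4 can be sketched in Python, trained here on the logical AND function as a sample classification task. One assumption beyond the slides: the threshold θ is also updated during training, by treating it as a weight on a constant input of −1 (a common convention); the slides themselves update only the weights wi.

```python
import random

def step(x):
    """Step activation: 1 if x >= 0, else 0."""
    return 1 if x >= 0 else 0

def train_perceptron(examples, n_inputs, alpha=0.1, max_epochs=100):
    # Step 1: initialisation -- weights and threshold in [-0.5, 0.5]
    random.seed(1)
    w = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
    theta = random.uniform(-0.5, 0.5)
    for _ in range(max_epochs):            # Step 4: iterate until convergence
        converged = True
        for x, y_d in examples:
            # Step 2: activation -- Y(p) = step[sum(xi * wi) - theta]
            y = step(sum(xi * wi for xi, wi in zip(x, w)) - theta)
            e = y_d - y                    # error e(p) = Yd(p) - Y(p)
            if e != 0:
                converged = False
                # Step 3: weight training by the delta rule
                w = [wi + alpha * xi * e for wi, xi in zip(w, x)]
                # Assumption: theta trained as a weight on a constant input of -1
                theta = theta - alpha * e
        if converged:
            break
    return w, theta

# Logical AND as the classification task
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, theta = train_perceptron(examples, n_inputs=2)
for x, y_d in examples:
    y = step(sum(xi * wi for xi, wi in zip(x, w)) - theta)
    print(x, "->", y)   # matches the desired AND output after convergence
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop terminates with correct weights.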

NEURON COMPUTATION

• The neuron computes the weighted sum of the input signals and compares the result with a threshold value, θ:

  X = Σ(i=1..n) xi wi

• If the net input is less than the threshold, the neuron output is −1. But if the net input is greater than or equal to the threshold, the neuron becomes activated and its output attains the value +1.
• The neuron uses the following transfer, or activation, function:

  Y = +1 if X ≥ θ;  Y = −1 if X < θ

• This type of activation function is called a sign function.

ACTIVATION FUNCTIONS

[Figure: graphs of the step, sign, sigmoid and linear functions, each plotted as Y against X between −1 and +1]

  Step function:    Y_step = 1 if X ≥ 0;  0 if X < 0
  Sign function:    Y_sign = +1 if X ≥ 0;  −1 if X < 0
  Sigmoid function: Y_sigmoid = 1 / (1 + e^(−X))
  Linear function:  Y_linear = X
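The four activation functions can be written directly in Python; the sample evaluation points are arbitrary choices for illustration.

```python
import math

def step(x):
    """Step: 1 when the input reaches zero, 0 otherwise."""
    return 1 if x >= 0 else 0

def sign(x):
    """Sign: +1 when the input reaches zero, -1 otherwise."""
    return 1 if x >= 0 else -1

def sigmoid(x):
    """Sigmoid: smooth squashing of any input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def linear(x):
    """Linear: passes the weighted sum through unchanged."""
    return x

print(step(-0.3), sign(-0.3), sigmoid(0.0), linear(2.5))
# step(-0.3) = 0, sign(-0.3) = -1, sigmoid(0.0) = 0.5, linear(2.5) = 2.5
```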

EXAMPLE

[Figure: perceptron with inputs x1 and x2, weights w1 and w2, a linear combiner with threshold θ, and a hard limiter producing output Y]

A neuron uses a step function as its activation function, with θ = 0.2, W1 = 0.1 and W2 = 0.4. What is the output Y for the following values of x1 and x2?

  x1  x2  Y
   1   1  ?
   1   0  ?
   0   1  ?
   0   0  ?
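Working the exercise through (a quick sketch, assuming the step function fires when the weighted sum reaches θ, and that W1 pairs with x1 and W2 with x2):

```python
def neuron_output(x1, x2, w1=0.1, w2=0.4, theta=0.2):
    """Step-activation neuron: fires (1) when the weighted sum reaches the threshold."""
    x = x1 * w1 + x2 * w2
    return 1 if x >= theta else 0

for x1, x2 in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    print(x1, x2, neuron_output(x1, x2))
# (1,1) -> 1, (1,0) -> 0, (0,1) -> 1, (0,0) -> 0
```

The neuron fires only when x2 = 1, since w2 = 0.4 alone clears the threshold while w1 = 0.1 does not.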


• The output signal is transmitted through the neuron’s outgoing connection.
• The outgoing connection splits into a number of branches that transmit the same signal.
• The outgoing branches terminate at the incoming connections of other neurons in the network.


NETWORK ARCHITECTURES

The ARCHITECTURE of a neural network is linked with the learning algorithm used to train it. Neurons are organized in layers.

• THREE DIFFERENT CLASSES OF NETWORK ARCHITECTURES
  – Single-layer Feed-forward
  – Multi-layer Feed-forward
  – Recurrent

NETWORK ARCHITECTURES
SINGLE-LAYER FEED-FORWARD

[Figure: an input layer of source nodes connected directly to an output layer of neurons]

NETWORK ARCHITECTURES
MULTI-LAYER FEED-FORWARD

[Figure: a 3-4-2 network — an input layer of 3 nodes, a hidden layer of 4 neurons, and an output layer of 2 neurons]

NETWORK ARCHITECTURES
RECURRENT NETWORK

A recurrent network with hidden neuron(s): the unit-delay operator z⁻¹ implies a dynamic system.

[Figure: input, hidden and output layers with z⁻¹ unit-delay feedback connections]


APPLICATIONS

• Biomedical Applications
• Business Forecasting Applications
• Demand Analysis and Forecasting
• Marketing Applications
• Financial Applications
• Space Research
• Psychiatric Diagnosis

Example: neural networks have achieved 90% accuracy at learning head pose and recognizing 1 of 20 faces.


Redefining the Learning

Specialization, Design, Development and Implementation with Projects. Experience the learning with the latest new tools and technologies…

Project Specialization Concept

• NCCT, in consultation with its Export-Software Division, offers live electronics-related projects, to experience the learning with the latest new tools and technologies.
• NCCT believes in specialized hardware design, development training and implementation, with an emphasis on development principles and standards.
• NCCT plays a dual positive role by satisfying your academic requirements as well as giving the necessary training in electronics and embedded product development.

WE ARE OFFERING PROJECTS FOR THE
FOLLOWING DISCIPLINES

COMPUTER SCIENCE AND ENGINEERING

INFORMATION TECHNOLOGY

ELECTRONICS AND COMMUNICATION ENGINEERING

ELECTRICAL AND ELECTRONICS ENGINEERING

ELECTRONICS AND INSTRUMENTATION

MECHANICAL AND MECHATRONICS

PROJECTS IN THE AREAS OF

System Software Development

Application Software Development, Porting

Networking & Communication related

Data Mining, Neural Networks, Fuzzy Logic, AI based

Bio Medical related

Web & Internet related

Embedded Systems - Microcontrollers, VLSI, DSP, RTOS

WAP, Web-enabled Internet Applications

UNIX / LINUX based Projects

Projects @ NCCT
SAMPLE PROJECTS @ NCCT
ANN TECHNOLOGY
CHARACTER AND PATTERN RECOGNITION USING NEURAL NETWORKS

Projects @ NCCT
BRIEF IDEA

TO DETERMINE HANDWRITTEN CHARACTERS USING ARTIFICIAL NEURAL NETWORKS

FEATURES

USING ANN TECHNOLOGY

ACCURACY

EASY TO IMPLEMENT

FOOL-PROOF MECHANISM

Projects @ NCCT
SAMPLE PROJECTS @ NCCT
NEURAL NETWORK BASED
MEDICAL SYSTEMS
NEURAL NETWORK BASED
DIAGNOSTIC SYSTEM

Projects @ NCCT
BRIEF IDEA

FORECASTING FETAL HEART BEATS USING NEURAL NETWORKS

COMBINES INPUT WINDOWS, HIDDEN LAYERS, FEEDBACK AND A SELF-RECURRENT UNIT

FEATURES

ADDITIONAL SELF-RECURRENT INPUT

COMBINES SEVERAL TECHNIQUES

FOR PROCESSING TEMPORAL ASPECTS OF THE INPUT SEQUENCE

Placements @ NCCT

NCCT has an enormous placement wing, which enrolls all candidates in its placement bank and keeps in constant touch with various IT-related industries in India and abroad that are in need of computer-trained, quality manpower.

Each candidate goes through a complete pre-placement session before placement is made by NCCT.

The placement division also helps students in getting projects, and organizes guest lectures, group discussions, soft learning skills, mock interviews, personality development skills, easy learning skills, technical discussions, student meetings, etc.

For every student, we communicate with IT organizations using the following documents:
* A curriculum highlighting the candidate's skills
* A brief write-up of the software knowledge acquired at NCCT, and the syllabus taught at NCCT
* Projects and specialization work done at NCCT
* Additional skills learnt
THE FOLLOWING SKILL SET IS SECURED

Software Applications: C, C++, Visual C++, ASP, XML, EJB
Embedded Technologies: Embedded Systems, PLC
Other Areas: VLSI, ULSI, DSP, Bio Informatics & Technology
Emerging Technologies: WAP, Remote Computing, Wireless Communications, VoIP, Bluetooth in Embedded, LINUX based applications
Evergreen Technologies: UNIX, C

NCCT
Quality is Our Responsibility
Dedicated to Commitments
and Committed to
Technology
