(IJCNS) International Journal of Computer and Network Security,
Vol. 2, No. 4, April 2010

RD-Optimisation analysis for H.264/AVC scalable
video coding
Sinzobakwira Issa¹, Abdi Risaq M. Jama² and Othman Omar Khalifa³

¹ Olympia College, School of Engineering,
Persiaran Raja Chulan, 50200 Kuala Lumpur, Malaysia
[email protected]

²,³ International Islamic University Malaysia, Department of Electrical and Computer Engineering,
Jalan Gombak, Box 53100 Kuala Lumpur, Malaysia

Abstract: The growth of multimedia applications has led to a great expansion in video transmission over heterogeneous media and over delivery platforms with dedicated content requirements. Conventional video coding systems encode video content at a given bitrate adapted to a specific function or application. As a result, conventional video coding does not meet the fundamental requirements of state-of-the-art flexible digital media applications. The new technology of scalable video coding appears as a modus operandi able to satisfy these requirements. In this work, a multi-user scenario was considered for optimising performance across multiple streams, and a rate-distortion-optimised video frame dropping strategy was developed that can be applied at active network nodes during periods of high traffic intensity. The concept of scalability introduces a high degree of flexibility into the coding and decoding systems: a base layer provides a suitable base quality, while enhancement layers take care of improving the video quality.
Keywords: bitrates, PSNR, bandwidth, multi-user scenario, RDO.
1. Introduction
Over the past few decades, starting in the early nineties, remarkable development has been achieved in the field of video compression. A lot of effort has been, and still is being, devoted to compressing data, storing it on digital media, and distributing it over the web.
It is useful to start from the idea of a monochrome digital video sequence, which is a set of individual pictures, called frames, occurring at predetermined time increments. Each frame can be considered as a two-dimensional light-intensity function f(x, y), where x and y denote spatial coordinates and the value of f at any point (x, y) is proportional to the brightness of the frame (the gray level, for monochrome) at that point. The standard speed at which these frames are displayed is 30 frames per second.
This representation is called the canonical representation. However, the canonical representation has a serious drawback: it requires very large amounts of memory, making it impractical to store, to share on the web, or to transmit over a digital channel. A simple illustration makes this clear.
Consider a 100-minute movie displayed at 30 frames per second, with a frame size of 640x480 pixels and each pixel taking 3 bytes of memory. Each second of the movie then requires at least 27 MB of memory; as a result, the entire movie needs almost 162 GB. If this movie were stored on DVDs, then given the current DVD capacity of 4.7 GB, it would require roughly 35 DVDs. Therefore, video needs to be compressed considerably for efficient storage and sharing over the web [1].
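As a quick check of these numbers, the following sketch (in Python, purely for illustration) computes the raw storage requirement under the stated assumptions:

width, height, bytes_per_pixel = 640, 480, 3
fps = 30
duration_s = 100 * 60  # 100-minute movie

bytes_per_frame = width * height * bytes_per_pixel   # 921,600 bytes per frame
bytes_per_second = bytes_per_frame * fps             # ~27.6 MB per second
total_bytes = bytes_per_second * duration_s          # ~166 GB (162 GB when rounded to 27 MB/s)

print(f"{bytes_per_second / 1e6:.1f} MB/s, {total_bytes / 1e9:.1f} GB total")
print(f"{total_bytes / 4.7e9:.0f} DVDs of 4.7 GB each")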
However, there are a lot of redundancies within the video
data that can be eliminated yielding file size reduction or
compression.
2. H.264/AVC Scalable Video Coding
2.1 Basic H.264/AVC structure
The H.264/AVC standard has a range of coding tools
contributing to its high compression performance, flexibility
and robustness. However, the performance improvements
come at a cost of significantly high computational
complexity. Therefore, encoder implementations should
make use of the available coding tools effectively to achieve
the desired compression performance with the available
processing resources.
H.264/AVC is an extremely scalable video codec,
delivering excellent quality across the entire bandwidth
spectrum, from high definition television to the video
conferencing and 3G mobile multimedia. Its most important differences from earlier standards can be summarized as follows:
• Enhanced motion prediction capability
• Use of a small block-size exact match transform
• Adaptive in-loop deblocking filter
• Enhanced entropy coding methods


Figure 1. H.264/AVC structure
2.2 Scalable Video Coding

Scalable video coding is desirable in heterogeneous and error-prone environments for various reasons. For example, scalable coding helps streaming servers avoid network congestion by allowing the server to reduce the bitrate of a bitstream whilst still transmitting a usable bitstream.
One application for scalability is to improve error
resilience in transport systems that allow different qualities
of service.
For example, the essential information could be delivered through a channel with high error protection. Scalability can also be used to enable different quality representations depending on the processing power of the playback device. Devices with greater processing power can decode and display the full-quality version, whereas devices with lower processing power decode a lower-quality version.
2.3 Types of SVC
There are three conventional types of scalability: temporal,
quality and spatial. Temporal scalability enables adjustment
of picture rate.
a) This is commonly carried out with either disposable
pictures or disposable sub-sequences, which are
explained later on. Picture rate adjustment is then
simply done by removing these disposable parts
from the coded sequence thus lowering the frame
rate.
b) In conventional quality scalability, also known as
SNR scalability, an enhancement layer is achieved
with pictures having finer quantisers than the
corresponding picture in the lower reference layer [3].
In coarse-granularity quality scalability, pictures in
enhancement layers may be used as prediction
references and therefore all the enhancement layer
pictures in a group of pictures typically have to be
disposed as a unit. In fine granularity scalability,
the use of enhancement layer pictures as prediction
sources is limited and therefore finer steps of
bitrate can be achieved compared to coarse-
granularity scalability.
c) Finally, spatial scalability is used for creation of
multi-resolution bitstreams to meet different
display requirements or constraints and is very
similar to SNR scalability [5].
A spatial enhancement layer enables recovery of coding loss
between an up-sampled version of the reconstructed layer
used as a reference by the enhancement layer and a higher
resolution version of the original picture.

3. Rate Distortion Optimization
3.1 Lagrangian multiplier method

In H.264/AVC, it is up to the encoder to find an effective way of encoding a given video sequence by selecting among a wide range of modes and parameters. The encoder aims to achieve optimum rate-distortion performance by choosing the best modes and parameters for a given video; in doing so, it seeks to minimise the distortion of the encoded sequence subject to a rate constraint.
Rate-distortion Optimisation (RDO) methods used in video
compression are discussed in [6] [2], which include dynamic
programming and Lagrange optimisation methods.
The Lagrange optimisation method, also known as the Lagrange multiplier method, was proposed because it offers computationally less complex (although sometimes sub-optimal) solutions to the optimisation problem. Due to its lower complexity, a specific form of the Lagrange optimisation method has been used in the rate-distortion optimisation of H.264/AVC [10].

3.2 Constrained Optimisation Problem

Constrained optimisation minimises (or maximises) an objective function subject to resource constraints. In the case of video coding, the problem can be stated as minimising the distortion of a given video sequence subject to a constraint on the number of bits available for encoding that sequence [4].
Below is the mathematical representation of the constrained optimisation problem.

Let S represent the set of all allowable vectors and let B be an element of S (B ∈ S). The objective function D(B) and the constraint function R(B) are defined for all B in S. The constrained problem can be presented as: given a constraint R_c, find

B* = arg min_{B ∈ S} D(B)    (1)

subject to

R(B) ≤ R_c.    (2)

The solution B* ∈ S* of the problem satisfies R(B*) ≤ R_c and D(B*) ≤ D(B) for all B in S*, where

S* = { B ∈ S : R(B) ≤ R_c }.    (3)

That is, if the solution to the problem is B*, then there is no other B in S satisfying the constraint R_c that results in a smaller value of the objective function than D(B*). Lagrange multiplier theory offers a way of solving the above constrained problem (i.e. finding B*) by representing it as an unconstrained problem [3].

3.3 Major Theorem

The constrained optimisation problem was presented in equations (1)-(2) of the previous section. Lagrange theory represents the constrained problem as an unconstrained problem as follows.

Theorem: for any λ ≥ 0, the solution B*(λ) to the unconstrained problem

B*(λ) = arg min_{B ∈ S} [ D(B) + λ R(B) ]    (4)

is also the solution of the constrained problem in (1)-(2) with R_c = R(B*(λ)) as the constraint.
Proof of the theorem:
If B*(λ) is the solution to the unconstrained problem (4), then

D(B*(λ)) + λ R(B*(λ)) ≤ D(B) + λ R(B)  for all B ∈ S.    (5)

Therefore,

D(B*(λ)) − D(B) ≤ λ [ R(B) − R(B*(λ)) ].    (6)

If this is true for all B in S, it is true for the subset of B in S where

R(B) ≤ R(B*(λ)).    (7)

Now, for this subset and for any λ ≥ 0, the right-hand side of (6) is non-positive, so

D(B*(λ)) ≤ D(B).    (8)

Therefore, with the constraint R_c = R(B*(λ)), the solution B*(λ) of the unconstrained problem is also the solution of the constrained problem.
It should be noted that the theory does not guarantee a solution for every constrained problem. It only states that for any λ ≥ 0, the solution of the unconstrained problem is also the solution of a corresponding constrained problem.

3.4 Optimisation problem

Consider a macroblock, which the encoder can encode using only one of K possible modes given by the set m = {m_1, m_2, …, m_K}. Let M (M ∈ m) be the mode selected to code the macroblock. In the context of H.264/AVC, these modes could be any allowable combination of macroblock partition modes, Quantisation Parameters (QP), choice of reference frames, etc., so that the K possible modes include all the admissible parameter combinations for the macroblock.

Define the objective function D(M) and the constraint function R(M), where D(M) and R(M) are the distortion and rate of the macroblock resulting from selecting a particular coding mode. If the rate constraint is R_c, the constrained problem is defined as: find the coding mode

M* = arg min_{M ∈ m} D(M)    (9)

subject to

R(M) ≤ R_c.    (10)

This may be written as an unconstrained problem using a Lagrange multiplier:

M* = arg min_{M ∈ m} [ D(M) + λ R(M) ],    (11)

where the solution M* to (11) satisfies

R(M*) ≤ R_c and D(M*) ≤ D(M) for all M with R(M) ≤ R_c.    (12)

The optimum coding mode M* (if one exists) can be found by solving (11). That means that when the macroblock is coded in mode M*, it satisfies the target rate (R(M*) = R_c), and all other modes (if they exist) that satisfy R(M) ≤ R_c will have a higher distortion than D(M*).

The term D(M) + λ·R(M) in equation (11) is called the Lagrangian rate-distortion cost. The mode that minimises the Lagrangian rate-distortion cost for a particular λ ≥ 0 (one which satisfies the rate constraint of the constrained problem) is selected as the solution mode for the constrained problem.
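As an illustration of this selection rule, the following sketch (in Python; the mode names and the per-mode distortion and rate values are hypothetical, not measured) picks the mode with the minimum Lagrangian cost:

# Hypothetical (distortion, rate) measurements for one macroblock
modes = {
    "SKIP":       (9.5, 1.0),
    "INTER16x16": (6.0, 4.0),
    "INTER8x8":   (4.5, 7.5),
    "INTRA4x4":   (3.8, 12.0),
}

def best_mode(modes, lam):
    """Return the mode minimising the Lagrangian cost D(M) + lambda * R(M)."""
    return min(modes, key=lambda m: modes[m][0] + lam * modes[m][1])

print(best_mode(modes, lam=0.1))  # small lambda favours low distortion -> INTRA4x4
print(best_mode(modes, lam=2.0))  # large lambda favours low rate       -> SKIP

Each value of λ implicitly selects a point on the rate-distortion curve: sweeping λ from large to small traces out solutions with increasing rate and decreasing distortion.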

4. Methodology

4.1 Objective video quality measurement

Objective video quality measurements are used to measure
the video quality, typically in situations where fast
(sometimes online) and repeatable measurements of the
distortion or the difference between the video under test and
a reference video are needed [7].

4.2 PSNR

The Peak Signal to Noise Ratio (PSNR) is the most commonly used objective measure of video quality. PSNR is computed as

PSNR = 10 log_10 [ (2^n − 1)^2 / MSE ],    (13)

where n is the bit depth and MSE is the Mean Squared Error between corresponding pixel values of the original image and the current image of the sequence under test. For an M × N array of pixels, the MSE is given by

MSE = (1 / (M·N)) Σ_{i=1..M} Σ_{j=1..N} [ P_o(i, j) − P_i(i, j) ]^2,    (14)

where P_o(i, j) denotes a pixel from the original image and P_i(i, j) the corresponding pixel from the test image. The indices i and j point to a position in the pixel arrays.
The MSE in itself can be a measure of distortion. However, PSNR is preferred because the logarithmic scale provides a more realistic mapping to quality variations. PSNR therefore continues to be the most commonly used objective quality measure [5].
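A minimal sketch of this computation (in Python with NumPy; an 8-bit depth is assumed):

import numpy as np

def psnr(original, test, bit_depth=8):
    """PSNR in dB between two equally sized images, per equations (13)-(14)."""
    mse = np.mean((original.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    peak = (2 ** bit_depth) - 1
    return 10 * np.log10(peak ** 2 / mse)

# Example: compare a frame with a noisy copy of itself
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
noisy = np.clip(frame + rng.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
print(f"{psnr(frame, noisy):.2f} dB")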

5. Implementation

To achieve these objectives, the JSVM software video simulation tool was used to implement and test the algorithms. There are several different H.264/AVC reference software implementations; JSVM was chosen for this research because of the flexibility with which its parameters can be varied. The JSVM codec is commonly used to test new algorithms in the video community, and the use of this reference software enables realistic comparison of the performance of algorithms developed by different researchers. The source code is essentially written in the C programming language [8].

6. Results analysis

In this part of the simulation, basic parameters such as a frame rate of 30 Hz, 300 frames and a group-of-pictures size of 16 were used. A set of standard test video sequences was used to evaluate the performance: Foreman, Garden, Football, Flower, Claire and Carphone. The PSNR-versus-bitrate graphs for various groups of pictures were studied under different circumstances. The following cases were considered.

Initially, the spatial dimensions represented by QCIF and CIF were coded without additional progressive refinement (PR) slices. With additional PR slices, the transform coefficients are refined, thus improving the quality of the reconstructed pictures. The results clearly show that the PSNR varies with the quality.


Figure 2. Sequential scalable coding (Foreman)

In this case, several spatial resolutions or bitrates are provided by the encoded bitstream. The results show that the PSNR increases with the bitrate.

Figure 3. Single Layer coding

Based on the Lagrangian cost function, a video frame that is to be sent on the outgoing link is first placed in the output buffer. Note that, for simplicity, buffer limitations are not considered in these simulations. If the outgoing link cannot accommodate all the video packets, the node first drops the additional enhancement PR slices one by one. If the link is still overloaded, the spatial enhancement layers are dropped next in the same spirit, i.e. the enhancement layers are removed completely, keeping only the base layer. The optimised SVC offers better quality than the unoptimised SVC.
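A minimal sketch of this layered dropping policy (in Python; the packet sizes and layer labels are hypothetical, and buffer limits are ignored as in the paper):

# Packets tagged by layer priority: base layer, then spatial enhancement, then PR slices
packets = [
    {"id": 1, "layer": "base",    "bits": 300},
    {"id": 2, "layer": "spatial", "bits": 200},
    {"id": 3, "layer": "pr",      "bits": 150},
    {"id": 4, "layer": "pr",      "bits": 150},
]
DROP_ORDER = ["pr", "spatial"]  # drop PR slices first, then spatial layers

def shape_to_link(packets, link_bits):
    """Drop enhancement packets one by one until the remainder fits the link."""
    kept = list(packets)
    for layer in DROP_ORDER:
        while sum(p["bits"] for p in kept) > link_bits:
            victims = [p for p in kept if p["layer"] == layer]
            if not victims:
                break  # nothing left in this layer; try the next one
            kept.remove(victims[-1])
    return kept

print([p["id"] for p in shape_to_link(packets, link_bits=550)])  # -> [1, 2]

The base layer is never dropped, which matches the strategy of scaling out the enhancement layers completely while sticking to the base layer.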


Figure 4. Rate distortion optimization for scalable coding

Compared with single-layer coding at higher bitrates, when the outgoing capacity R_out is larger than the required incoming rate (at 1670 kB/s), the RD-optimised and unoptimised coding perform the same. This is expected: at higher bitrates the network link rarely overflows and very few or no video packets are lost. However, when the outgoing rate is very small, the SVC strategy leads to good improvements in reconstructed video quality. Table 1 shows the improvements obtained for the individual video streams with an outgoing link of R_out = 600 kbit/s.


Figure 5. Evaluation of SVC and SLC

Table 1: Comparison of different video streams (PSNR, dB)

Sequences | SVC Optimized | SVC Unoptimized | SLC Optimized | SLC Unoptimized
Garden    | 45.0008 | 38.5645 | 42.5682 | 40.2658
Foreman   | 34.5545 | 35.2564 | 32.5654 | 30.1254
Football  | 37.2356 | 37.0052 | 36.2545 | 36.5485
Flower    | 40.3215 | 39.0235 | 37.5468 | 37.6256
Claire    | 36.2597 | 36.4566 | 31.2564 | 32.2564
Carphone  | 41.3255 | 38.4552 | 38.2545 | 39.2545

(SVC = Scalable Video Coding; SLC = Single-Layer Coding.)


7. Recommendations

Although current video coding standards exhibit acceptable quality-compression performance in many visual communication applications, further improvements are desired and more features need to be added, especially for specific applications. The important considerations for video coding schemes to be used within future networks are compression efficiency, robustness with respect to packet loss, adaptability to different available bandwidths, and adaptability to the memory and computational power of different clients.
Several other communication and networking issues are also
relevant, such as scalability, robustness, and interactivity.
In our simulations, a network with a single active node was considered. This could be extended to more practical situations with a hierarchy of many active network nodes, performing rate shaping at every node accordingly. Different values of the Lagrangian multiplier λ could be modelled for more stringent buffer conditions; a reasonable value for λ can be determined by minimising the Lagrangian cost function, since λ can be expressed as a function of buffer fullness.
The scalable video coding approach could be further
extended to MCTF based scalable video codec which
employs an open-loop architecture.

8. Conclusion
The choice of a Scalable Video Coding framework in this
context brings technical and economical advantages. Under
this framework, network elements can adapt the video
streams to the channel conditions and transport the adapted
video streams to receivers with acceptable perceptual
quality. The advantages of deploying such an adaptive
framework are that it can achieve suitable QoS for video
over wired and wireless networks, bandwidth efficiency and
fairness in sharing resources [11].
The adaptive scalable video coding technology produces bitstreams decodable at different bitrates, requiring different computational power and channel bitrates. In addition, the bitstream is organised with a hierarchical syntax that enables users to easily extract a subpart of the data contained in the bitstream and still be able to decode the original input video, though at a reduced spatial resolution or frame rate. This process can be applied recursively: once a new bitstream is extracted from the original, it can undergo successive extractions corresponding to successively lower resolutions.
References
[1] C. S. Kannangara, I. E. G. Richardson, M Bystrom, J.
Solera, Y. Zhao, A. MacLennan & R. Cooney,
"Complexity Reduction of H.264 using Lagrange
Optimization Methods," IEE VIE 2005, Glasgow, 4~6
April, 2005.
[2] H. Kim and Y. Altunbasak, "Low-complexity
macroblock mode selection for H.264/AVC encoders,"
presented at International Conference on Image
Processing, Singapore, 2004.
[3] K. P. Lim, "JVT -I020, Fast INTER Mode Selection."
San Diego: ISO/IEC MPEG and ITU-T VCEG Joint
Video Team, 2003.
[4] X. Li. Scalable video compression via over complete
motion compensated wavelet coding. Signal Processing:
Image Communication, special issue on
subband/wavelet interframe video coding, 19:637—651,
August 2004.
[5] S.-R. Kang, Y. Zhang, M. Dai, and D. Loguinov, "Multi-layer active queue management and congestion control for scalable video streaming," in Proc. IEEE ICDCS, Tokyo, Japan, Mar. 2004, pp. 768-777.
[6] T. Oelbaum, V. Baroncini, T. K. Tan, and C. Fenimore, "Subjective quality assessment of the emerging AVC/H.264 video coding standard," International Broadcasting Conference (IBC), Sept. 2004.
[7] R. Leung and D. Taubman. Impact of motion on the
random access efficiency of scalable compressed video.
Proc. IEEE Int. Conf. Image Processing, 3:169—172,
September 2005.
[8] R. Leung and D. Taubman. Perceptual mappings for
visual quality enhancement in scalable video
compression. Proc. IEEE Int. Conf. Image Processing,
2:65—68, September 2005.
[9] R. Leung and D. Taubman. Transform and embedded
coding techniques for maximum efficiency and random
accessibility in 3-D scalable compression. IEEE Trans.
Image Processing, 14(10):1632—1646, October 2005.
[10] R. Leung and D. Taubman. Minimizing the perceptual
impact of visual distortion in scalable wavelet
compressed video. Proc. IEEE Int. Conf. Image
Processing, October 2006.
[11] R. Leung and D. Taubman. Perceptual optimization for
scalable video compression based on visual masking
principles. IEEE Trans. Circuits Syst. Video Technol.,
submitted in 2006.
[12] T. Wedi and Y. Kashiwagi, “Subjective quality
evaluation of H.264/AVC FRExt for HD movie
content,” Joint Video Team document JVT-L033, July,
2004.
[13] ISO/IEC JTC 1/SC 29/WG 11 (MPEG), “Report of the
formal verification tests on AVC/H.264,” MPEG
document N6231, Dec., 2003 (publicly available at
http://www.chiariglione.org/mpeg/quality_tests.htm).
[14] T. Schierl, T. Stockhammer and T. Wiegand, "Mobile
Video Transmission using Scalable Video Coding
(SVC)," IEEE Trans. On Circuits and Systems for
Video Technology, Special issue on Scalable Video
Coding, scheduled June 2007.
[15] S. Wenger, Y.-K. Wang and T. Schierl, “Transport and
Signaling of SVC in IP networks,” IEEE Transactions
on Circuits and Systems for Video Technology, Special
issue on Scalable Video Coding, scheduled for: March
2007.
Markov Based Mathematical Model of Blood Flow Pattern in Fetal Circulatory System

Sarwan Kumar¹, Sneh Anand², Amit Sengupta³

¹ Dr B R Ambedkar National Institute of Technology,
Jalandhar - 144011, Punjab, India
[email protected]

²,³ Indian Institute of Technology,
CBME, Delhi, India

Abstract: This paper presents a novel approach to estimating blood flow characteristics in the fetal circulatory system during pregnancy. We have developed a mathematical model of the fetal circulatory system using a two-node concept based on a Markov model. Oxygenated blood flows from the mother's side through the placenta to the fetus, and deoxygenated blood flows from the fetus back to the mother via the umbilical cord. When simulated, the model shows how oxygenated blood flows from the placenta (one node) to the umbilicus (the second node) and deoxygenated blood back to the placenta from the fetus. The same model is also simulated at different conductivities of the umbilical cord path and different quantities of blood available at the placenta, and it shows the effect of uterine contractions on the blood supply to the fetus. All simulations have been performed in the LabVIEW environment under various conditions of the vein and arteries.

Keywords: Markov model, placenta, umbilical cord, mathematical model, uterine contraction.
1. Introduction
The baby develops in the uterus with a life support system composed of the umbilical cord, the placenta and the amniotic fluid. The placenta is a pancake-shaped temporary organ that is attached to the uterus and is connected to the fetus through the umbilical cord. The umbilical cord is the lifeline between the fetus and the placenta: as soon as it is formed, it functions throughout pregnancy to protect the vessels that travel between the fetus and the placenta. The responsibility of the placenta is to act as a point of exchange between the circulatory systems of the mother and the baby.
It is very important to know the relationship between the concentration (quantity) of blood available at the placenta and how quickly it passes to the fetus through the vein, the only path carrying oxygenated blood from mother to fetus; the waste products of the fetus are transferred back to the mother's blood through the umbilical arteries. The umbilical cord is therefore called the lifeline, and it is through this cord that the placenta and the fetus are attached to each other. There are three blood vessels in the umbilical cord: two small arteries and a vein [10]. This cord can grow to a length of 50-60 cm, which allows the baby enough space to move safely without damaging the placenta or the umbilical cord. The placental conductivity increases with the age of the pregnancy [8]. The complete circulatory system is shown in figure 1.
During pregnancy the umbilical cord may form a knot, or at times the cord may be wrapped around the body of the baby. This is a common phenomenon, there is no way to prevent it, and it does not in itself pose a risk or threat to the baby or the mother. There may also be complications of the placenta during pregnancy, the most common of which is placenta previa [10], in which the placenta is attached over or near the cervix. With the growth of the fetus there is pressure on the placenta, and for this reason there may be bleeding [11]. If this condition occurs, medical care is needed to ensure a safe labour for the baby. Many complications can decrease the blood supply to the fetus, which leads to asphyxia and an increase in heart rate [12]. Hence, compromise of the fetal blood flow through the umbilical cord vessels can have serious deleterious effects on the health of the fetus and newborn. It is therefore necessary to know the blood flow profile in the fetus. This paper discusses a novel mathematical model of the circulation of blood in the fetal circulatory system, using a two-node concept based on a Markov model [4], to determine the blood flow profile. The same model is simulated at various conductivities of the blood vessels and various quantities of blood available at the placenta. We have also demonstrated the effect of uterine contractions on the blood flow profile, which would be useful in developing a new bioelectric sensor for the evaluation of actual blood flow time.
2. Markov Model
A Markov model is a stochastic process whose dynamic behaviour is such that its future development depends only on the present state. In other words, the description of the present state fully captures all the information that could influence the future evolution of the process. Being a stochastic process means that all state transitions are probabilistic: at each step the system may change its state from the current state to another state (or remain in the same state) according to a probability distribution. The changes of state are called transitions, and the probabilities associated with the various state changes are called transition probabilities. In order to formulate a Markov model we must first define all the mutually exclusive states of the system. The states of the system at t = 0 are called the initial states (P0), and those representing a final or equilibrium state are the final states (P1). The set of Markov state equations describes the probabilistic transition from the initial to the final states.
The transition probabilities must obey the following two rules:
1. The probability of a transition in time Δt from one state to another is given by the gain (conductivity) multiplied by Δt.
2. The probabilities of more than one transition in time Δt are infinitesimals of higher order and can be neglected.
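For concreteness, these rules yield the following one-step transition matrix over a small interval Δt (a Python sketch; the gain values are hypothetical):

import numpy as np

dt = 0.01
gv, u = 1.0, 0.6  # hypothetical gains: vein conductivity and combined artery gain

# States: 0 = placenta node, 1 = umbilicus node.
# Rule 1: transition probability over dt equals gain * dt.
# Rule 2: multiple transitions within dt are neglected.
P = np.array([
    [1 - gv * dt, gv * dt],
    [u * dt,      1 - u * dt],
])
state = np.array([1.0, 0.0])  # all blood at the placenta node at t = 0
print(state @ P)              # distribution after one step: [0.99 0.01]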

3. Proposed Model
Figure 1 presents the complete fetal circulatory system, and figure 2 its equivalent Markov chain. The problems most commonly related to node Ia are Intrauterine Growth Restriction (IUGR) and preeclampsia [5]. These are due to high blood pressure, diabetes, infection, kidney disease, heart or respiratory disease, alcohol, drugs and cigarette smoking (figure 3), which may lead to fetal hypoxia, fetal death, low birth weight and placental abruption (figure 4) [5]. The problems related to the umbilical cord, i.e. node II, are two vessels, long cord, nuchal cord and short cord (figure 5). Node Ib and node III are less significant in the fetal circulation and are ignored. The modified node representation of the fetal circulatory system and its equivalent signal flow graph are shown in figure 6. In terms of the mathematical model described by the Markov model [7], node I represents the oxygen-rich blood on the mother's side and node II represents the fetus side; the umbilical cord connects the two nodes.
There are two stages:
Stage I: the placenta, attached to the mother's side and full of oxygenated blood, say node I.
Stage II: the umbilicus, the entry point to the fetus, say node II.
Let P0(t) be the quantity of oxygenated blood at node I, P1(t) the quantity of blood reaching node II through the vein, gv the conductivity gain of the vein, and u1 and u2 the conductivity gains of artery 1 and artery 2, respectively.

After a time Δt, the blood at node I and node II is given by

P0(t+Δt) = P0(t)(1 − gv Δt) + P1(t)(u1 + u2)Δt    (1)

P1(t+Δt) = P1(t)(1 − (u1 + u2)Δt) + P0(t) gv Δt    (2)

From equations 1 and 2,

P0(t+Δt) − P0(t) = −P0(t) gv Δt + P1(t)(u1 + u2)Δt    (3)

[P0(t+Δt) − P0(t)] / Δt = −P0(t) gv + P1(t)(u1 + u2)    (4)

and, letting Δt → 0,

dP0/dt = −gv P0 + (u1 + u2) P1.    (5)

Similarly,

dP1/dt = gv P0 − (u1 + u2) P1.    (6)

Taking Laplace transforms, with p0 and p1 denoting the initial quantities P0(0) and P1(0), the solutions of equations 5 and 6 are

P0(s) = [p0(s + u1 + u2) + p1(u1 + u2)] / [(s + gv)(s + u1 + u2) − gv(u1 + u2)]    (7)

P0(t) = (u1 + u2)(p0 + p1)/(gv + u1 + u2) + [p0 gv − p1(u1 + u2)]/(gv + u1 + u2) · e^−(gv + u1 + u2)t    (8)

and

P1(s) = [p1(s + gv) + gv p0] / [(s + gv)(s + u1 + u2) − gv(u1 + u2)]    (9)

P1(t) = gv(p0 + p1)/(gv + u1 + u2) + [p1(u1 + u2) − gv p0]/(gv + u1 + u2) · e^−(gv + u1 + u2)t    (10)
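A minimal simulation sketch of equations 5 and 6 (in Python; the conductivity values and initial blood quantities are illustrative assumptions, not measured data):

# Forward-Euler integration of the two-node model:
#   dP0/dt = -gv*P0 + (u1+u2)*P1   (placenta node)
#   dP1/dt =  gv*P0 - (u1+u2)*P1   (umbilicus node)
gv, u1, u2 = 1.0, 0.3, 0.3  # hypothetical conductivity gains
p0, p1 = 1.0, 0.0           # all blood initially at the placenta node
dt = 0.001

for _ in range(int(10 / dt)):  # simulate 10 seconds
    dp0 = (-gv * p0 + (u1 + u2) * p1) * dt
    dp1 = ( gv * p0 - (u1 + u2) * p1) * dt
    p0, p1 = p0 + dp0, p1 + dp1

# The curves settle exponentially, like an RC discharge, as noted in section 4.
print(f"P0(10) = {p0:.3f}, P1(10) = {p1:.3f}")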



Figure 1. Fetal Circulatory System

Figure 2. Fetal Circulatory System in nodes representation
(Node Ia Uterine Artery, Node Ib Placenta, Node II
Umbilical Cord, Node III Fetal Heart)



Figure 3. Problems related to node Ia and their causes



Figure 4. Problems to fetus due to Intrauterine Growth
Restriction (IUGR) and Preeclampsia




Figure 5. Problems related to node II (Umbilical Cord)

4. Simulation and analysis

The software is designed in LabVIEW and simulated at different levels of P0, P1, gv, u1 and u2. When equation 8 is simulated for various values of the conductivities of the vein (gv) and the two arteries (u1, u2), an exponential curve is obtained. The response of P0(t) is shown in figure 7 for gv equal to unity, i.e. 100% conductivity. This represents blood transferring from the mother-side placenta to the fetus through a vein with 100% conductivity at the start of the process, and indicates that the blood takes about 4 seconds to reach the fetus. This exponential curve is the same as the current discharge through a capacitor and resistor, which has already been established in tissue impedance characterisation [3], [4]; here the tissue is represented by a capacitor in parallel with a resistance, as shown in figure 8 [8]. The transfer time increases as the conductivity (gv) decreases, because of a knot or some other reason, as shown in figure 9, where the time is approximately 10 seconds. This may expose the child to a number of dangerous effects (depression of the central nervous system, breathing paralysis, etc.).




Figure 6. Actual system and Equivalent Markov Signal
Flow Graph

The transfer time will increase further if the contractions are stronger [1]. The quantity of the blood supplied from the mother is strongly affected by uterine contractions, because contractions raise the intramyometrial pressure (120 mmHg) above the arterial pressure (85 mmHg) [1]. In this case the initial blood supply is lower, and a smaller amount of blood crosses the umbilical cord to feed the fetus. This is a situation of reduced oxygen to the fetus and may lead to hypoxia. Our model shows this through the output magnitude of P0 in terms of the available blood quantity; the output for 50% available blood is shown in figure 10. Maternal blood enters the placenta through the spiral arteries (the terminal branches of the uterine artery), which traverse the myometrium (the muscular contractile layer of the uterus) and flow into the intervillous space. At this level the mother exchanges substances with the fetus through the "placental barrier." Anaesthetics and analgesics are of low molecular weight and are easily exchanged by diffusion as a result of the concentration gradient between the maternal and fetal compartments. Caldeyro-Barcia et al. have found that when the uterus is at rest, without contraction, the mother's arterial blood easily crosses the intervillous space, since the average arterial pressure is about 85 mmHg and the intramyometrial pressure external to the arteries is about 10 mmHg. During uterine contractions, however, the intramyometrial pressure rises to 120 mmHg, exceeding the arterial pressure which, under such conditions, is about 90 mmHg. The arteries therefore become temporarily occluded because of the external pressure, and the placenta becomes disconnected from the maternal circulation [1].
If the conductivity is reduced to 10%, the blood takes approximately 50 seconds to reach its final stage, i.e. the fetal heart. This is a very dangerous situation for the fetus, as the reduced oxygen supply leads to extreme asphyxia; the fetus may die because of insufficient blood or oxygen.




Figure 7. Output at gv unity with blood flow time of 5
seconds



Figure 8. Tissue’s Cell Membrane & its Electrical
Equivalent




Figure 9. Output at gv 0.5 with blood flow time of 10
seconds




Figure 10. 50 % available blood during contractions and
same amount to fetus



Figure 11. Deoxygenated blood flow curve, when both
arteries are good




Figure 12. Deoxygenated blood flow curve, when one artery
good




Figure 13. Blood flow time increases with decrease in
length of the umbilical vein

Equation 10 was simulated at various levels of the arterial path (u1 + u2). When both arteries are working, the response is as shown in figure 11: it takes approximately 2 seconds to transfer the waste to the placenta. This time increases as the conductivity decreases, and the settling time doubles when either artery fails, as shown in figure 12. This may increase the acidity (pH) level in the fetus, which can damage the umbilical cord. The effects of the length and diameter of the vein have also been simulated. The transit time also increases with a decrease in the length of the cord, even when the blood supply at the placenta is 100%; the result is shown in figure 13, indicating that the blood reaches the fetus after a longer time. When the blood flow from the model is compared with the actual Doppler FVW flow taken from [5], it shows the same pattern as the actual flow. Figure 14(a) shows the simulated blood flow, while 14(b) shows the actual flow.


(a)

(b)
Figure 14. Comparison of the blood flow from the model with the actual flow: (a) blood flow response of the Markov model; (b) actual blood flow, a frame extracted from Doppler FVW [5]

5. Conclusion

The blood flow timing between placenta and fetus, and between fetus and placenta, is given by equations 8 and 10 respectively and has been simulated using LabVIEW software. The flow is exponential, which shows that the umbilical cord structure (vein and arteries) acts as a capacitor in parallel with a resistance. The time taken by the blood to reach the fetus increases as the conductivity decreases. The time also increases when less blood is available, whether due to uterine contractions, a knot or any other reason, and the simulated results show a larger settling time for a short cord length. This work on blood flow would be useful in developing a sensor for evaluating the conductivity of the umbilical cord and placenta during pregnancy, for the well-being of the fetus. We are developing a stand-alone instrument for monitoring the various parameters of the fetal model.

References

[1] C. Hernandez Sande, G. Rodriguez-Izquierdo, and M. Iglesias, "Intermittent Drug Administration During Labor and Protection of the Fetus," IEEE Transactions on Biomedical Engineering, pp. 615-619, 1983.
[2] S. M. Sims, E. Daniel and R. E. Garfield, "Improved Electrical Coupling in Uterine Smooth Muscle Is Associated with Increased Numbers of Gap Junctions," Journal of General Physiology, pp. 353-375, 1982.
[3] S. Gandhi, D. C. Walker, B. B. Brown and D. Anumba, "Comparison of human uterine cervical electrical impedance measurements derived using two tetrapolar probes of different sizes," Biomedical Engineering, pp. 1-7, 2006.
[4] R. J. Halter, A. Hartov, J. A. Heaney, K. D. Paulsen and A. R. Schned, "Electrical Impedance Spectroscopy of the Human Prostate," IEEE Transactions on Biomedical Engineering, pp. 1321-1327, 2007.
[5] A. Gaysen, S. K. Dua, A. Sengupta and Nagchoudhuri, "Effect of Non-Linearity Doppler Waveforms Through Novel Model," Biomedical Engineering Online, pp. 1-13, 2003.
[6] A. S. Gordon, J. Strauss and G. A. Misrahy, "Electrical Impedance of Isolated Amnion," Biophysical Journal, pp. 855-865, 2000.
[7] G. D. Clifford, F. Azuaje and P. E. McSharry, "Advanced Methods and Tools for ECG Data Analysis," Artech House, pp. 295-300, 2006.
[8] Guyton, Textbook of Medical Physiology, Eighth Edition, 1991.
[9] Ross and Wilson, Anatomy and Physiology in Health and Illness, Tenth Edition, 2006.
[10] T. Erkinaro, "Fetal and Placental Haemodynamic Responses to Hypoxaemia, Maternal and Vasopressor Therapy in a Chronic Sheep Model," Acta University, pp. 1-96, 2006.
[11] J. C. Huhta, "Fetal congestive heart failure," Seminars in Fetal & Neonatal Medicine 10, pp. 542-552, 2005.
[12] F. Kovacs, M. Torok and I. Habermajer, "A Rule-Based Phonocardiographic Method for Long-Term Fetal Heart Rate Monitoring," IEEE Transactions on Biomedical Engineering, pp. 124-130, 2000.

Authors Profile

Sarwan Kumar received the BTech and MTech degrees in Electrical Engineering from Regional Engineering College, Kurukshetra, in 1992 and 1997, respectively. He is an associate professor at the National Institute of Technology, Jalandhar. He is now pursuing a PhD at IIT Delhi, India, under the guidance of Professor Sneh Anand, IIT Delhi, and Dr. Amit Sengupta, Consulting Obstetrician & Gynaecologist (CHS), Mumbai.


























Redevelopment of Graphical User Interface for
IPMS Web Portal
Hao Shi

Victoria University, School of Engineering and Science
Melbourne, Australia
[email protected]

Abstract: IPMS, short for Industry Project Management System, is a web portal for industry project team management. IPMS is a very useful project management tool for managing students, allocating projects, coordinating supervisors and liaising with industry sponsors. It has sped up the process and allowed the stakeholders to focus on their key tasks. However, the originally developed IPMS stopped working after migrating to a new server. As a result, manual management was brought back, which was both time-consuming and tedious for both the project students and the course coordinator. This project aims to upgrade IPMS to PHP 5.0 and re-develop a new GUI (Graphical User Interface) with enhanced system functionality. In this paper, the background information about IPMS is first described. Then the newly developed GUI is presented, and a usability test conducted on the re-developed GUI is reported. It is concluded that the newly developed GUI meets the user requirements and is better than the previous GUI.
Keywords: IPMS, Project Management Systems, GUI
(Graphical User Interface), Industry Project, Web Portal.
1. Introduction
Many final-year projects are offered in tertiary computing degree programs to give project students teamwork and real-world project experience under the supervision of an external project sponsor and academic staff [1]. However, managing software project teams is a complex task [2]: it should occupy about 20% of the project coordinator's time, but in practice it can take more than 80% [3]. In order to reduce the administrative load, many project management tools have been produced [4]. Some tools monitor the full cycle of software engineering projects, while others emphasise particular aspects of project management [2].
SourceForge is the best-known web portal, currently hosting over one hundred thousand projects and over a million users [5]. Open source tools such as DokuWiki, Trac, and Subversion can be integrated to provide a low-cost platform for student collaboration on team projects [6]. By consolidating project artifacts in a central location, the wiki software serves as both a repository for project information and a means of communication between team members and course instructors who may be working from different physical locations. Integrated version control helps students track changes in their documents and provides a safety net for recovering information that has previously been deleted from project artifacts [6].
Even Moodle and Blackboard, dedicated eLearning tools [7, 8], have incorporated group management tools which allow the course designer to create groups and manage group activities in addition to course contents. Unfortunately, there is no "one size fits all" solution in project management [8]. Many higher education institutions continue to build their own project management tools, as these provide significant benefits to teaching and learning, and project teams in industry, academia, and the open source community are increasingly reliant on web-based project management portals [9].
The market for tools to improve software project management and software quality management is growing fast. It has been shown that vendors of software project and quality management tools can "walk the talk" by using quantitative data to manage the development project and process [10]. Recently, agile software development methods have become popular because software must be developed in a short period. However, conventional project management techniques are often not adaptable to such new development methodologies. A new tool based on a communication model has been developed for agile software development [11], which allows product quality control and progress control to be monitored. Some of these tools focus mainly on project management for teaching and learning [8], while others fully support administrative tasks such as student registration, team formation, project confirmation, supervisor allocation and document management [4, 12, 13].
This paper aims to upgrade the existing IPMS and re-develop a new GUI for it.
2. Background
IPMS (Industry Project Management System) is an "All-In-One" web portal. It was primarily developed to automate and streamline the management of the final-year Industry Projects at the Faculty of Health, Engineering and Science at Victoria University. The IPMS prototype was initially developed, based on the FDD methodology, using the Linux-Apache-MySQL-PHP (LAMP) three-tier client-server architecture [12], as shown in Figure 1. PHP (Hypertext Preprocessor) is the programming language which generates the dynamic web GUI (graphical user interface). Apache is employed as the web server running under the Linux operating system, while MySQL is the database management system behind the IPMS web pages and supports the CMS (Content Management System). LDAP (Lightweight Directory Access Protocol) is used for user authentication [4, 12].


Figure 1. System topology [12]

Examples of the IPMS major menus are shown in Figures 2 and 3.




Figure 2. Student Menu [4]


3. Redevelopment of IPMS GUI
In the first semester of 2009, one of the industry project teams was assigned the task of upgrading the existing IPMS and re-developing its GUI, because the system no longer worked after PHP was upgraded from version 4.0 to 5.0 on a new server. The project team consisted of four final-year computer science students. Their aims were to maintain the existing system functionality in the new system, to improve the user interface and the logical flow of pages, and to add possible new functionality [14]. In the following subsections, the newly developed Admin menu and Student menu are presented.



Figure 3. Admin Menu [4]
3.1 Top-down Module Design
After the structured system analysis, the top-down design of the IPMS module using the Gane and Sarson graphical specification technique is shown in Figure 4.
3.2 Admin Menu
Once a user is logged in as an admin user, the admin menu becomes available on the left side. The major change is that the menus are grouped by major functionality, as shown in Figure 5.




Figure 4. Top-down Module Design for IPMS [15]















Figure 5. Newly developed GUI for Admin Menu [15]
3.2.1 General Information Menu
The general information menu consists of six submenus,
namely About Industrial Project, Projects, Supervisors,
Sponsors, Industrial Partners and FAQs as shown in Figure
6.
3.2.2 Administrator menu
The Administrator Area contains the key menus, such as Overview Information, Reports, Emails, User Administration, Project Administration, Team Administration, Database Access and Content Management, as shown in Figure 7.


Figure 6. General Information submenus



Figure 7. Administrator Menu
3.2.3 My Team menu
The My Team menu contains three submenus, namely RCM3001, RCM3002 and Assessment; each of these contains several submenus of its own, as shown in Figure 8.

[Figure 4 diagram labels: IPMS Module — Login, Logout, Check_Login, Register, Database, PHPMyAdmin, Web_Content, Update_Web_Content, Update, Add_New, User, Email, User_Details, Report, User_Report, Team_Report, Project_Report, Project, Team, Team_Document, Former_Student, Details, Delete, External. Menu annotations: home page link; general web pages meant for everyone; supervisor evaluation forms; assigned team(s) with contact details; team personal information pages; coordinator forms for modifying user, project and team information; logout.]

(a) RCM3001 submenu


(b) Assessment submenu
Figure 8. My Team menu

3.3 Student Menu
Once a user logs in as a student, the student menu is displayed as shown in Figure 9.
3.3.1 My Team menu
This menu contains five submenus, i.e. Team Details, Create, Join, Upload Photo and Leave, as shown in Figure 10.
4. User Acceptance Test
Many usability tests were carried out by the project coordinator during the course of the IPMS re-development, and improvements and changes were made to enhance the GUI. The usability test reported in this paper took the form of a student user experience survey on the newly developed GUI, covering the same aspects as in [12]:
Q1. Registration/Signup process
Q2. Availability of other students to form a team
Q3. Team formation
Q4. Registration of an available project to a team or
project proposal
Q5. Efficiency from registration to team formation
Q6. Overall experience

The detailed results of the usability test are shown in Figure
11.





Figure 9. Student Menu [15]


Figure 10. My Team submenus


[Survey result charts: each panel plots the percentage of users (y-axis: No. of Users, 0-70%) against a five-point scale (Very Hard, Hard, Neutral, Easy, Very Easy), comparing the previous paper-based process with the new GUI (IPMS portal).]
(a) Q1. Registration/Signup process
(b) Q2. Availability of other students to form a team
(c) Q3. Team formation
(d) Q4. Registration of an available project to a team or project proposal
(e) Q5. Efficiency from registration to team formation
(f) Q6. Overall experience
Figure 11. User experience survey
5. Conclusions
IPMS has been upgraded to PHP 5.0 after one year of development. The usability test has shown the newly developed GUI to be efficient and user friendly. The upgraded IPMS removes the tedious manual process and provides smooth management functionality for students, supervisors, the coordinator and industry sponsors. It is concluded that the newly developed IPMS meets the user requirements and is better than the previous version.
Acknowledgements
The author would like to thank the project team, Riad El
Tabbal (team leader), Leang Heng It, Jack Toke and Duncan
Tu and the project supervisor, Associate Professor Xun Yi
for their contributions in revitalising the IPMS GUI.
References
[1] J. Ceddia and J. Sheard, “Evaluation of WIER – A Capstone
Project Management Tool”, Proceedings of the International
Conference on Computers in Education (ICCE), pp. 777-781,
2002.
[2] J. L. Smith, S. A. Bohner, D. S. McCrickard, “Project
Management for the 21st Century: Supporting Collaborative
Design through Risk Analysis”, Proceedings of 43rd ACM
Southeast Conference, pp. 2-300- 2-305, 2005.
[3] G. Jones, "One Solution for Project Management", Proceedings of the SIGUCCS (Special Interest Group on University and College Computing Services) Fall Conference, pp. 65-69, 2001.
[4] H. Shi, “IPMS: A Web Portal for Industry Project Team
Management”, International Journal of Communication, Vol.
7 No. 4, April 2007, pp. 111-116.
[5] Source-Forge, http://sourceforge.net [Accessed: Feb. 12,
2010]
[6] E. R. Haley, G. B. Collins, and D. J. Co, "The wonderful
world of wiki benefits students and instructors", IEEE
Potentials, Volume: 27, Issue: 2, pp. 21-26, 2008.
[7] Blackboard, http://blackboard.com [Accessed: Feb. 12, 2010]
[8] Moodle, open-source course management system, http://moodle.com [Accessed: Feb. 12, 2010]
[9] A. N. Norita and P. A. Laplante, “ Software Project
Management Tools: Making a Practical Decision Using
AHP”, Proceedings of the 30th Annual IEEE/NASA
Software Engineering Workshop, 24-28, 2006.
[10] G. V. Seshagiri and S. Priya, "Walking the Talk: Building
Quality into the Software Quality Management Tool",
Proceedings of the Third International Conference On
Quality Software (QSIC), pp. 67 – 74, 2003.
[11] N. Hanakawa and K. Okura, "A project management support
tool using communication for agile software development",
Proceedings of the 11th Asia-Pacific Software Engineering
Conference (APSEC), pp. 316 - 323, 2004.
[12] R. Martin and H. Shi “Design and Implementation of IPMS
Web Portal”, Proceedings of International Conference on
Computers and Advanced Technology in Education (CATE),
pp. 16-21, 2007.
[13] H. Shi, "Reshaping ICT Industry Projects - My Three-Year
Experience", Proceedings of AusWIT06 Australian Women
in IT Conference, 4-5 December, Adelaide, Australia, pp.36-
46, 2006
[14] R. El Tabbal, L. H. It, J. Toke and D. Tu, "Redevelopment of Industry Project Management System", Final-year Industry Project Design Report, School of Engineering and Science, Victoria University, November 2009.
[15] R. El Tabbal, L. H. It, J. Toke and D. Tu, “Redevelopment of
Industry Project Management System”, Software Design
Document and User Manual, School of Engineering and
Science, Victoria University, June 2009.

Author Profile


Hao Shi obtained her BE in Electronics Engineering from Shanghai Jiao Tong University, China, and her PhD from the University of Wollongong. She is now an Associate Professor and ICT Industry Project coordinator at the School of Engineering and Science, Victoria University. She has established the Industry-Based Learning program at the School and has won a number of Teaching and Learning grants and awards. She is currently managing more than a dozen ICT university scholarships with local industry partners via her grants from the Victorian Government, Australia.






Analysis of Searching Techniques and Design of Improved Search Algorithm for Unstructured Peer-to-Peer Networks

Dr. Yash Pal Singh¹, Rakesh Rathi², Jyoti Gajrani³, Vinesh Jain⁴

¹ Bundelkhand Institute of Engg. and Tech.,
Jhansi, India
[email protected]

² Govt. Engg. College, Ajmer,
Badliya Circle, NH 08, Ajmer
[email protected]

³ Govt. Engg. College, Ajmer,
Badliya Circle, NH 08, Ajmer
[email protected]

⁴ Govt. Engg. College, Ajmer,
Badliya Circle, NH 08, Ajmer
[email protected]


Abstract: We study the performance of several search algorithms on unstructured peer-to-peer networks, covering both classic search algorithms such as flooding and random walk and a new hybrid algorithm proposed in this paper. The hybrid algorithm uses two-level random walks for adaptive probability search (APS). We compare the performance of the search algorithms on several graphs corresponding to common topologies proposed for peer-to-peer networks. We find that the Local Indices algorithm gives average performance, while Intelligent Search and Routing Indices consume more bandwidth; further work on reducing the size of the query message would in turn reduce this bandwidth. APS is the most efficient technique among those studied. It can be improved further by the proposed search algorithm, which uses a two-level k-walker random walk with APS instead of a plain k-walker random walk. The two-level walk further reduces collisions between walkers and can help in searching distant nodes in the network, although it may slightly increase the response time.

Keywords: peer-to-peer networks, adaptive probability search.
1. Introduction
A P2P network is a distributed network composed of a large number of distributed, heterogeneous, autonomous, and highly dynamic peers, in which participants share a part of their own resources such as processing power, storage capacity, software and files. A participant in a P2P network can act as a server and a client at the same time. P2P systems constitute highly dynamic networks of peers with complex topology. This topology creates an overlay network, which may be totally unrelated to the physical network that connects the different nodes (computers). P2P systems can be differentiated by the degree to which these overlay networks contain some structure or are created ad hoc. Network structure here means the way in which the content of the network is located with respect to the network topology. Unstructured, loosely structured and highly structured are the categories of P2P networks based on the control over data location and network topology. In this paper we are mainly concerned with a comparative study of the various available search algorithms for unstructured P2P systems; we also present the design of a new proposed search algorithm for unstructured P2P systems.
2. Unstructured P2P systems
In unstructured networks, the placement of data (files) is completely unrelated to the overlay topology. Since there is no information about which nodes are likely to have the relevant files, searching essentially amounts to random search, in which various nodes are probed and asked whether they have any files matching the query. These systems differ in the way in which they construct the overlay topology and in the way in which they distribute queries from node to node. The advantage of such systems is that they can easily accommodate a highly transient node population. The disadvantage is that it is hard to find the desired files without distributing queries widely, and for this reason unstructured P2P systems are often considered unscalable. However, work has been done towards increasing the scalability of unstructured systems. Napster, Gnutella, Kazaa and Morpheus [1] are examples of unstructured P2P systems.
3. Searching in unstructured Systems [4]
Initially, flooding, which is basically breadth-first search (BFS), was used to search for a specific data item, but it generates a large number of duplicate messages and does not scale well, so a number of alternative schemes have been proposed to address this problem.
These works include iterative deepening, k-walker
random walk, modified random BFS, two-level k-walker
random walk, directed BFS, intelligent search, local indices
based search, routing indices based search, attenuated bloom
filter based search, adaptive probabilistic search, and
dominating set based search.

Searching strategies in unstructured P2P systems are
either blind search or informed search. In a blind search
such as iterative deepening, no node has information about
the location of the desired data. In an informed search such
as routing indices, each node keeps some metadata about the
data location. To restrict the total bandwidth consumption,
data queries in unstructured P2P systems may be terminated
prematurely before the desired existing data is found;
therefore, the query may not return the desired data even if
the data actually exists in the system. An unstructured P2P network cannot offer bounded routing efficiency due to its lack of structure.
The searching schemes in unstructured P2P systems can
also be classified as deterministic or probabilistic. In a
deterministic approach, the query forwarding is
deterministic. In a probabilistic approach, the query
forwarding is probabilistic, random, or is based on ranking.
Another way to categorize searching schemes in
unstructured P2P systems is regular-grained or coarse-
grained. In a regular-grained approach, all nodes
participate in query forwarding. In a coarse-grained scheme,
the query forwarding is performed by only a subset of nodes
in the entire network.
4. Comparison of Existing Search Algorithms
Based on search method, query forwarding, message overhead and node duplication, the various searching methods are compared as follows:

Algorithm                      Search method    Query forwarding       Message overhead   Node duplication
Flooding                       BFS, Blind       Broadcast              High               High
Iterative Deepening            BFS, Blind       Broadcast              High               High
Local Indices                  BFS, Informed    Broadcast              Medium             Medium
Directed BFS                   BFS, Informed    Partial broadcast      Medium             High
Intelligent Search             BFS, Informed    Subset of neighbours   Medium             Medium
Routing Indices                BFS, Informed    Subset of neighbours   Medium             Medium
Std. random walk               BFS, Blind       One neighbour          Low                Low
k-walker random walk           BFS, Blind       Subset of neighbours   High               High
2-level k-walker random walk   BFS, Blind       Subset of neighbours   Low                Low
APS                            BFS, Informed    Subset of neighbours   Medium             Medium

Based on scalability, response time (RT), success rate (SR) and bandwidth, the various searching methods are compared as follows:
Algorithm                      Scalable   Response time (RT)   Success rate (SR)   Bandwidth
Flooding                       No         High                 Medium              Low
Iterative Deepening            Yes        High                 Medium              Medium
Local Indices                  Yes        Medium               Medium              Medium
Directed BFS                   Yes        Medium               Medium              High
Intelligent Search             Yes        Medium               Medium              High
Routing Indices                Yes        Medium               Medium              High
Std. random walk               Yes        High                 Medium              Low
k-walker random walk           Yes        Medium               Medium              Low
2-level k-walker random walk   Yes        Medium               Medium              Low
APS                            Yes        Low                  High                Medium

Among these algorithms, Adaptive Probability Search (APS) is the most efficient. APS is based on the k-walker random walk and probabilistic (not random) forwarding. Another interesting algorithm is the two-level random walk, in which walkers search for an object in two levels, reducing the redundancy of nodes visited.
5. Adaptive Probability Search (APS) [6]
In the Adaptive Probabilistic Search (APS) [6], it is
assumed that the storage of objects and their copies in the
network follows a replication distribution. The number of
query requests for each object follows a query distribution.
The search process does not affect object placement and the
P2P overlay topology.
The APS is based on k-walker random walk and
probabilistic (not random) forwarding. The querying node
simultaneously deploys k walkers. On receiving the query,
each node looks up its local repository for the desired object.
If the object is found, the walker stops successfully.
Otherwise, the walker continues. The node forwards the
query to the best neighbor that has the highest probability
value. The probability values are computed based on the
results of the past queries and are updated based on the
result of the current query. The query processing continues
until all k walkers terminate either successfully or fail (in
which case the TTL limit is reached). To select neighbors
probabilistically, each node keeps a local index about its
neighbors. There is one index entry for each object which
the node has requested or forwarded requests for through
each neighbor. The value of an index entry for an object and
a neighbor represents the relative probability of that
neighbor being selected for forwarding a query for that
object. The higher the index entry value the higher the
probability. Initially, all index values are assigned the same
value. Then, the index values are updated as follows. When
the querying node forwards a query, it makes some guess
about the success of all the walkers.
The guess is made based on the ratio of the successful
walkers in the past. If it assumes that all walkers will
succeed (optimistic approach), the querying node pro-
actively increases the index values associated with the
chosen neighbors and the queried object. Otherwise
(pessimistic approach), the querying node proactively
decreases the index values. Using the guess determined by
the querying node, every node on the query path updates the
index values similarly when forwarding the query.
Upon walker termination, if the walker is successful,
there is nothing to be done in the optimistic approach. If the
walker fails, index values relative to the requested object
along the walker’s path must be corrected. Using
information available inside the search message, the last
node in the path sends an “update” message to the preceding
node. This node, after receiving the update message,
decreases its index value for the last node to reflect the
failure. The update procedure continues along the reverse
path towards the requester, with intermediate nodes
decreasing their local index values relative to the next hops
for that walker. Finally, the requester decreases its index
value that relates to its neighbour for that walker. If we
employ the pessimistic approach, this update procedure
takes place after a walker succeeds, having nodes increase
the index values along the walker’s path. There is nothing
to be done when a walker fails.
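The forwarding and index-update logic just described can be sketched compactly. The following Python fragment is a minimal illustration of the pessimistic policy; the data structures and the adjustment amounts (10 and 20, taken from the worked example below) are assumptions rather than the authors' implementation:

import random

INITIAL_INDEX = 30    # every (object, neighbour) entry starts at the same value
SEARCH_PENALTY = 10   # pessimistic policy: subtract while forwarding
SUCCESS_REWARD = 20   # added back on the reverse path when a walker succeeds

def choose_neighbor(indices, obj, neighbors):
    # Pick a neighbour with probability proportional to its index value.
    weights = [indices.setdefault(obj, {}).setdefault(n, INITIAL_INDEX)
               for n in neighbors]
    return random.choices(neighbors, weights=weights, k=1)[0]

def forward_pessimistic(indices, obj, neighbors):
    # One forwarding step: select a neighbour and proactively decrease
    # its index value, assuming for now that the walker will fail.
    nxt = choose_neighbor(indices, obj, neighbors)
    indices[obj][nxt] -= SEARCH_PENALTY
    return nxt

def on_success(path, obj):
    # Reverse-path update for a successful walker: each (index table,
    # next hop) pair along the path gets its index value increased.
    for node_indices, next_hop in reversed(path):
        node_indices[obj][next_hop] += SUCCESS_REWARD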

Figure 1. Searching object using pessimistic approach of
APS with walkers.

Figure 1 shows an example of how the search process
works. Node A initiates a request for an object owned by
node F using two walkers. Assume that all index values
relative to this object are initially equal to 30 and the
pessimistic approach is used. The paths of the two walkers
are shown with thicker arrows. During the search, the index
value for a chosen neighbour is reduced by 10. One walker
with path (A,B,C,D) fails, while the second with path
(A,E,F) finds the object. The update process is initiated for
the successful walker on the reverse path (along the dotted
arrows). First node E, then node A increase the value of
their indices for their next hops (nodes F, E respectively) by
20 to indicate object discovery through that path. In a
subsequent search for the same object, peer A will choose peer B with probability 2/9 (= 20/(20+40+30)), peer E with probability 4/9 and peer G with probability 3/9.
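The arithmetic of this example can be checked in a few lines (values taken directly from the text):

# Index values at node A after the example's updates
indices_at_A = {"B": 30 - 10,       # failed walker: 20
                "E": 30 - 10 + 20,  # successful walker: 40
                "G": 30}            # not chosen: 30
total = sum(indices_at_A.values())  # 90
probabilities = {p: v / total for p, v in indices_at_A.items()}
# {'B': 0.222..., 'E': 0.444..., 'G': 0.333...}, i.e. 2/9, 4/9, 3/9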

APS requires no message exchange on any dynamic
operation such as node arrivals or departures and object
insertions or deletions. The nature of the indices makes the
handling of these operations simple: If a node detects the
arrival of a new neighbour, it will associate some initial
index value with that neighbour when a search will take
place.
If a neighbour disconnects from the network, the node
removes the relative entries and stops considering it in
future queries. No action is required after object updates,
since indices are not related to file content. So, although our
algorithm actively uses information, its maintenance cost on
any of these events is zero, a major advantage over most
current approaches.
5.1 Discussion on APS
Each node stores a relative probability (an unsigned
integer) for each of its neighbours for each requested object.
So for R such objects and N neighbours, O(R x N) space is
needed.
For a typical network node, this amount of space is not a
burden. On nodes with limited storage capacities, index
values for objects not requested for some time can be erased.
This can be achieved by assigning a time-to-expire value on
each newly-created or updated index. Each search or update
message carries path information, storing a maximum of
TTL peer addresses. Alternatively, each node can associate
the search and requester node IDs with the preceding peer in
the path of the walker. Updates then follow the reverse path
back to the requester. This information expires after a
certain amount of time. The number of messages exchanged by the APS method in the worst case is (2 × k × TTL), where all k walkers travel TTL hops and then invoke the update procedure; thus the method has the same complexity as its random counterpart. The only
extra messages that occur in APS are the update messages
along the reverse path. This is where the two index update
policies are used.
Along the paths of all k walkers, indices are updated so
that better next hop choices are made with bigger
probability. The learning feature includes both positive and negative feedback from the walkers under both update approaches. In the pessimistic approach, each node on the
walker’s path decreases the relative probability of its next
hop for the requested object concurrently with the search. If
the walker succeeds, the update procedure increases those
index values by more than the subtracted amount (positive
feedback). So, if the initial probability of a node for a certain
object was P, it becomes bigger than P if the object was
discovered through (or at) that node and smaller than P if
the walker failed. Conversely, if many of our walkers hit
their targets on average, the optimistic approach should be
considered. This is the only invariant we require from our
update process.
The learning process in the optimistic approach operates in the opposite fashion. Learning is important to achieve both high performance and discovery of newly inserted objects. Unlearning helps the search process adjust to object deletions and node departures, redirecting the walkers elsewhere. All the nodes participating in the search benefit from the process.

Besides standard resource-sharing in P2P systems, APS
achieves the distribution of search knowledge over a large
number of peers.
6. Performance of APS [6]
The main metrics used to evaluate the performance of a
search algorithm are the success rate, the number of
discovered objects (Hits per Query) and the number of
messages produced.


Figure 2. Success rate vs. number of deployed walkers for APS and random walk algorithms

Figure 3. Message production vs. number of deployed walkers for APS and random walk algorithms

Figure 4. Hits per query vs. number of deployed walkers for APS and random walk algorithms

7. Two-Level Random Walk [7]
This is an efficient search algorithm which increases the total number of nodes searched for a given total number of search steps, and reduces the redundancy, i.e. the average number of times a particular node is searched. It works in the following manner. When a node wishes to send a query with
following manner. When a node wishes to send a query with
a certain search key, it composes a search message and
broadcasts it to k1 randomly selected neighbours. The
message has an initial TTL1 = l1 hops. When an
intermediate node receives this message, it checks the TTL1
timer. If the latter is still more than 0 then it decrements the
timer by one, selects one random neighbour and forwards
the message to it. This process continues until one of the
nodes, say node E, receives the message with an expired
TTL1 timer (i.e. TTL1 = 0). We call such a node an edge
node. The message will then “explode” into k2 search
messages forwarded from this node. Specifically, node E
will compose a message with TTL1=0, and a second timer
TTL2=l2. It will then randomly select k2 of its neighbours,
excluding the one it just received the message from, and
broadcast the message to them. To illustrate this process: at level one, a source node sends k1 random messages to a set of k1 randomly selected nodes among its neighbours. This constitutes k1 threads (or random
walks) which travel from the source node to the edge nodes
(a node where TTL1 expires). Each of the k1 threads will
then explode into k2 threads (with TTL2 = l2 ) at each of
the k1 edge nodes. This algorithm reduces redundancy by
decreasing the average number of times a node is searched.
In the one-level k-walk algorithm k random threads are
generated from the source and they are likely to have
“thread collisions” (i.e. threads run into each other)
especially near the source. This results in having redundant
hits in the same nodes (nodes being searched multiple
times). On the other hand, the two-level algorithm sends
fewer threads from the source node which results in a
smaller probability of thread collisions near the source. Each
of the k1 threads will then explode into k2 threads once it is
”sufficiently” away from the source and the other threads.
This way, the same number of search threads can be
generated (k=k1*k2) but with a larger number of nodes
searched and a smaller probability of redundant searches to
the same nodes using the same number of total search steps.
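A compact Python sketch of this two-level expansion, assuming an adjacency-list graph (graph maps each node to its neighbour list); it illustrates the scheme rather than reproducing the protocol of [7]:

import random

def two_level_walk(graph, source, target, k1, ttl1, k2, ttl2):
    # Level 1: k1 walkers of TTL1 hops; each edge node then "explodes"
    # into k2 walkers of TTL2 hops (total threads k = k1 * k2).
    def walk(start, came_from, ttl):
        node, prev, path = start, came_from, [start]
        for _ in range(ttl):
            if node == target:
                return True, path
            choices = [n for n in graph[node] if n != prev]
            if not choices:
                break
            prev, node = node, random.choice(choices)
            path.append(node)
        return node == target, path

    firsts = random.sample(graph[source], min(k1, len(graph[source])))
    for first in firsts:                        # level-1 threads
        found, path = walk(first, source, ttl1)
        if found:
            return True
        edge = path[-1]                         # node where TTL1 expired
        prev = path[-2] if len(path) > 1 else source
        candidates = [n for n in graph[edge] if n != prev]
        for nxt in random.sample(candidates, min(k2, len(candidates))):
            found, _ = walk(nxt, edge, ttl2)    # level-2 threads
            if found:
                return True
    return False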

8. Enhancing Performance of APS
The proposed algorithm uses a two-level random walk in the existing APS algorithm [6] instead of the k-walker random walk. The advantage of the two-level walk [7] over the one-level walk is that it increases the total number of nodes searched for a given total number of search steps, and reduces the redundancy, i.e. the average number of times a particular node is searched. Collisions between walkers can thus be further reduced, and distant objects can also be searched efficiently. The two-level walk will also help in further reducing the message overhead. The only disadvantage is an increase in response time.

9. Algorithm of the Proposed Technique
Assumptions:
    k1 = k2 = k3 = k   – number of walkers in each level
    ttlcount           – counter for the TTL value
    ttl1 = ttl2 = ttl  – time to live for each level
    level              – variable for the level number
    kcount             – counter for k3, i.e. the number of walkers

Select a querying node
kcount = 0
level = 1

while (level <= 2)
{
    while (kcount <= k3)
    {
        while (ttlcount <= ttl)
        {
            select a neighbouring node by applying APS and process the node;
            if object is not found then
                increment ttlcount by one and continue;
            else
                come out of the loop (exit);
        }
        increment kcount by one;
    }
    increment level by one;
}
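A runnable Python sketch of the intended behaviour of this algorithm, with an assumed per-node index table, a has_object callback and hypothetical helper names (k1 = k2 = k and ttl1 = ttl2 = ttl, per the assumptions above):

import random

def aps_choose(local_indices, obj, neighbors, penalty=10, initial=30):
    # APS-style probabilistic selection with a pessimistic index decrease.
    weights = [local_indices.setdefault(obj, {}).setdefault(n, initial)
               for n in neighbors]
    choice = random.choices(neighbors, weights=weights, k=1)[0]
    local_indices[obj][choice] -= penalty
    return choice

def two_level_aps(graph, indices, source, obj, has_object, k, ttl):
    # Level 1: k APS walkers of ttl hops from the querying node; each
    # edge node then launches k more APS walkers of ttl hops.
    def aps_walk(start, hops):
        node = start
        for _ in range(hops):
            if has_object(node, obj):
                return True, node
            if not graph[node]:
                break
            node = aps_choose(indices[node], obj, graph[node])
        return has_object(node, obj), node

    for _ in range(k):                       # level-1 walkers
        found, edge = aps_walk(source, ttl)
        if found:
            return True
        for _ in range(k):                   # level-2 walkers from the edge node
            found, _ = aps_walk(edge, ttl)
            if found:
                return True
    return False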
10. Conclusion
In this research work, various searching techniques in unstructured P2P networks have been studied and compared. A new search technique is proposed which helps in further enhancing the performance of APS.


References

[1] Stephanos Androutsellis-Theotokis, "A Survey of Peer-to-Peer File Sharing Technologies", White Paper, ELTRUN, Athens University of Economics and Business, Greece, 2002.
[2] V. Vishnumurthy and P. Francis, "On heterogeneous overlay construction and random node selection in unstructured P2P networks", in Proc. IEEE Infocom, 2006.
[3] M.A. Jovanovic, "Modelling large-scale peer-to-peer networks and a case study of Gnutella", Master's thesis, Department of Electrical and Computer Engineering and Computer Science, University of Cincinnati, June 2000.
[4] Xiuqi Li and Jie Wu, "Searching Techniques in Peer-to-Peer Networks", Department of Computer Science and Engineering, Florida Atlantic University.
[5] V. Vishnumurthy and P. Francis, "A comparison of structured and unstructured P2P approaches to heterogeneous random peer selection", in Proc. Usenix Annual Technical Conference, 2007.
[6] D. Tsoumakos and N. Roussopoulos, "Adaptive Probabilistic Search (APS) for Peer-to-Peer Networks", Technical Report CS-TR-4451, University of Maryland, 2003.
[7] Imad Jawhar and Jie Wu, "A Two-Level Random Walk Search Protocol for Peer-to-Peer Networks", Department of Computer Science and Engineering, Florida Atlantic University.
[8] Beverly Yang and Hector Garcia-Molina, "Improving Search in Peer-to-Peer Networks", Computer Science Department, Stanford University.

Steganography Security for Copyright Protection of
Digital Images Using DWT

K.T. Talele¹, Dr. S.T. Gandhe², Dr. A.G. Keskar³

¹ Electronics Engineering Dept., Sardar Patel Institute of Technology, Andheri (W), Mumbai, India
[email protected]

² Electronics Engineering Dept., Sardar Patel Institute of Technology, Andheri (W), Mumbai, India
[email protected]

³ Electronics Engineering Dept., Visvesvaraya National Institute of Technology, Nagpur, India
[email protected]

Abstract: The proposed system combines cryptography and steganography for copyright protection of digital images using DWT. The proposed algorithm is tested against various attacks such as median filtering, wavelet compression, fading and resizing by comparing performance parameters such as mean square error, peak signal-to-noise ratio and correlation coefficient, and the results are very encouraging. The sensitivity is least observed in the DWT method, where the watermark maintains a fair level of resistance to noise and other attacks. The proposed system can be used for enhanced copyright protection, detection of misappropriated images, and detection of alteration of images stored in a digital library.
Keywords: Cryptography, Encryption, Decryption, Steganography.
1. Introduction
Security is one of the major concerns in today’s age. Unlike
the past, most of the transactions between people take place
over the internet. But internet itself is not a secure medium.
So, when it comes to sending highly important documents
over the internet, an extra precaution has to be taken. In
other cases, authenticity of digital data is a big concern.
With the widespread usage of digital media, the demand for copyright protection has increased manifold, as is evident in the audio recording industry. The same extra precaution for copyrighting digital media is required here as well.
One of the ways to take this extra precaution is to use
Steganography. Steganography helps to hide the content of
interest which is to be protected, inside any image, audio or
video file. To further ensure that interception of the content does not happen, the content can be encrypted using one of the popular cryptographic algorithms [1][2]. Fragile watermarking is used where tamper detection and authenticity have a higher priority, whereas robust watermarking deals with copyright protection [3][4][5][6]. The proposed system hides a logo in images using DWT.
The watermark should be imperceptible to anyone and
sensitive to any kind of tampering done on the image under
consideration. The system is compared for various
algorithms for embedding the logo. The algorithms are
compared on the basis of Mean Square Error (MSE), Peak
Signal to Noise Ratio (PSNR) and Correlation Coefficient
(CC) values of the extracted logo for different attacks[7].

2. Proposed System
The block diagram for the proposed system is shown in figure 1. First the logo is encrypted and inserted into a given image using DWT; the logo is later extracted and decrypted to recover the original logo.



Figure 1. Block Diagram of Proposed System
3. Algorithm
3.1 Cryptography Algorithm
The encryption algorithm works by swapping pixel values at 128 randomly generated locations in every row of the logo. It is important that the set of 128 locations so generated is derived with the help of a password and that the locations are all unique. The steps of the algorithm are as follows:
a) Take the input logo.
b) Ask the user to enter an 8-character key.
c) Generate 8 random vectors of size 1X128.
d) Specify the 'state' of the random number generator by giving the ASCII value of each character in the key for every random vector. This will generate 8 random vectors.
e) Generation of the input vector:
   • The outer loop controls the column traversal of the watermark logo.
   • The middle loop controls the selection of the random vectors previously generated.
   • The inner loop controls the row traversal of the watermark logo.
   • Inside the innermost loop, we select a random vector based on the value of the middle loop. With every turn of the innermost loop we take two consecutive values, r(1,k,j) and r(1,k+1,j), from this random vector and swap the corresponding location values from the watermark image in the same column, storing the result in another array called 'encrypted' at the same position.
f) Every time the middle loop finishes, the random
vectors are considered corresponding to the first
character of the password.
g) This cycle continues till the last column is covered.
h) Thus the watermark image is encrypted into a new
image file ‘encrypted.bmp’.
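A minimal Python/NumPy sketch of this password-seeded swap encryption; the seeding scheme and names are assumptions for illustration (the paper's implementation details may differ), and note that re-applying the same swaps decrypts, since each swap is its own inverse:

import numpy as np

def encrypt_logo(logo, key):
    # logo: 2-D uint8 array (e.g. 128x128); key: 8-character password.
    h, w = logo.shape
    encrypted = logo.copy()
    # One random location vector per key character, seeded by its ASCII value;
    # a permutation guarantees that all locations are unique.
    vectors = [np.random.RandomState(ord(c)).permutation(h) for c in key]
    for col in range(w):                      # outer loop: columns
        r = vectors[col % len(vectors)]       # middle loop: pick a vector
        for i in range(0, h - 1, 2):          # inner loop: rows, in pairs
            a, b = r[i], r[i + 1]             # two consecutive random locations
            encrypted[a, col], encrypted[b, col] = (encrypted[b, col],
                                                    encrypted[a, col])
    return encrypted

def decrypt_logo(encrypted, key):
    # Swapping is an involution with the same locations, so decryption
    # simply re-applies the identical swaps.
    return encrypt_logo(encrypted, key)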
3.2 Proposed Steganography in color images
Algorithm

a) Consider any color image having size 512 X 512 as
a host image. If size of host image is not 512 X 512
then make it 512 X 512.
b) Split the image into three planes viz. Red, Green and
Blue.
c) Decompose the host image (all three planes) by
using discrete wavelet transform. Store the first
level coefficients i.e. LL1, LH1, HL1, HH1 as first
level watermark key coefficients of host
image.[8][9]
d) Approximation coefficient of first level is LL1 which
is further decomposed into new coefficients i.e.
LL2, LH2, HL2, HH2 as second level watermark
key coefficients of host image.
e) Consider the gray scale image having size 128 X
128 as a logo to be hidden. If size of watermark
logo is not 128 X 128 then make it 128 X 128.
f) Decompose the watermark logo by using discrete
wavelet transform. Store the first level
approximation coefficients i.e. LL1, LH1, HL1,
HH1 as first level watermark key coefficients of
watermark logo.
g) Insert the LL1 coefficients of the watermark logo into the LH2 part of the host image, pixel by pixel.
h) Perform the two level inverse discrete wavelet
transform of host image (all three planes) by using
approximation coefficients of three planes of host
image.
i) Find the mean square error (MSE) and peak signal
to noise ratio (PSNR) and the correlation
coefficient (CC) between the original host image
and invisible watermark image by using the related
formulae as these are the important performance
parameters.
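A sketch of steps c) through h) for one colour plane using PyWavelets; the wavelet family ('haar') and the placement of the logo band in the top-left of LH2 are assumptions made for illustration, since the paper does not specify them:

import numpy as np
import pywt

def embed_logo(host_plane, logo, wavelet="haar"):
    # Two-level decomposition of the 512x512 host plane
    LL1, (LH1, HL1, HH1) = pywt.dwt2(host_plane, wavelet)
    LL2, (LH2, HL2, HH2) = pywt.dwt2(LL1, wavelet)
    # First-level decomposition of the 128x128 watermark logo
    wLL1, _ = pywt.dwt2(logo, wavelet)
    # Write the logo's LL1 band into LH2 (top-left block: an assumption,
    # since the band sizes differ; the paper says "pixel by pixel")
    LH2 = LH2.copy()
    h, w = wLL1.shape
    LH2[:h, :w] = wLL1
    # Two-level inverse transform reassembles the watermarked plane
    LL1 = pywt.idwt2((LL2, (LH2, HL2, HH2)), wavelet)
    return pywt.idwt2((LL1, (LH1, HL1, HH1)), wavelet)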
3.3 Performance Parameters

3.3.1 Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE)
The imperceptibility of a watermark is measured by the watermarked image quality in terms of Peak Signal-to-Noise Ratio (PSNR), in dB. The most common difference measure between two images is the mean square error, which is popular because it correlates reasonably well with subjective visual tests and is mathematically tractable.
Consider a discrete image A(m, n), for m = 1, 2, …, M and n = 1, 2, …, N, which is regarded as the reference image, and a second image Â(m, n) of the same spatial dimensions that is to be compared to the reference image.
Under the assumption that A(m, n) and Â(m, n) represent samples of a stochastic process, the MSE is given as

    MSE = (1/MN) Σ_m Σ_n E{ [A(m, n) − Â(m, n)]² }

where E(·) is the expectation operator.
The normalized mean square error is given as

    NMSE = E{ [A(m, n) − Â(m, n)]² } / E{ [A(m, n)]² }

and for deterministic image arrays it is defined as

    NMSE = Σ_m Σ_n [A(m, n) − Â(m, n)]² / Σ_m Σ_n [A(m, n)]²

Image error measures are often expressed as a signal-to-noise ratio; for 8-bit images,

    PSNR = 10 log₁₀( 255² / MSE )  (dB)

We use PSNR to determine the difference between the original image A(m, n) and the watermarked image Â(m, n). The value of the mean square error should be as small as possible, and the value of the peak signal-to-noise ratio as large as possible.

3.3.2 Correlation Coefficient (CC)
The robustness of watermark extraction is evaluated by the normalized correlation coefficient, r, between the extracted watermark A and the original watermark B:

    r = Σ_m Σ_n A(m, n) B(m, n) / √( Σ_m Σ_n A(m, n)² · Σ_m Σ_n B(m, n)² )

where A and B are, respectively, the original and extracted watermark images normalized by subtracting their corresponding mean values. The magnitude range of r is [0, 1], and unity holds if the extracted image perfectly matches the original one. The correlation coefficient is used to compare the original image with the watermarked image, and also to compare the original watermark with the retrieved watermark.
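These three metrics are straightforward to compute; the following NumPy sketch (an illustration for 8-bit grayscale arrays, not the paper's MATLAB code) implements them:

import numpy as np

def mse(a, b):
    # Mean square error between two images of equal shape.
    a, b = a.astype(np.float64), b.astype(np.float64)
    return np.mean((a - b) ** 2)

def psnr(a, b, peak=255.0):
    # Peak signal-to-noise ratio in dB (infinite for identical images).
    m = mse(a, b)
    return np.inf if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def corr_coeff(a, b):
    # Normalized correlation coefficient of mean-subtracted images.
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))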
4. Results and Discussion
The algorithms, along with the various attacks, are implemented in MATLAB. The results of the various algorithms are shown in figures 2 through 19.
4.1 Cryptography


(a) (b)
(c)
Figure 2: (a) Original Logo (b) Encrypted Logo (c)
Decrypted Logo

Figure 2(a) shows the original logo, figure 2(b) the encrypted logo and figure 2(c) the decrypted logo.
4.2 Steganography
4.2.1 Steganography in images using DWT


(a) (b)
Figure 3: (a) Original Image (b) Watermarked Logo (to be
hidden)


(a) (b)
Figure 4: (a) Original Image (b) Extracted Logo

Figure 3(a) shows the original image and figure 3(b) the watermark logo. Figure 4(a) shows the watermarked image and figure 4(b) the extracted logo.

4.3 Comparative Study of Different Watermarking Algorithms for Different Original Images
The insertion algorithm is applied to five different input images, as shown in figures 5 through 9, and the results are compared using the different performance parameters. The comparison is shown in Table 1. We can insert secret information in these images for copyright protection.

(a) (b)
Figure 5: (a) Original Image (b) Watermarked Image for
lena image.


(a) (b)
Figure 6: (a) Original Image (b) Watermarked Image for
medical image


(a) (b)
Figure 7: (a) Original Image (b) Watermarked Image for
satellite image.


(a) (b)
Figure 8: (a) Original Image (b) Watermarked Image for
satellite Scene.


(a) (b)
Figure 9: (a) Original Image (b) Watermarked Image for
text image.

Table 1: PSNR, MSE, and CC for original image and
watermark image for Invisible Watermarking for five
different images


The value of PSNR is sufficiently high, MSE is very low and CC is nearly equal to 1, so this algorithm has created minimal disturbance to the host image and perceptually both images are alike.

Figure 10: Graph of (a) PSNR, (b) MSE, (c) CC for steganography in images using DWT

Graphically, these values are shown in figure 10.

4.4 Attacks

The algorithm is tested for various attacks[7] such as
median filter, wavelet compression, fading, noise, resizing
etc.

4.4.1 Median Filter

Figures 11 through 14 show the algorithm tested with a median filter under four mask sizes: 3X3, 5X5, 7X7 and 9X9. For each of these cases the peak signal-to-noise ratio, mean square error and correlation coefficient are calculated, as shown in figure 15.



(a) (b)
Figure 11: (a) Median filtered Watermark Image with mask
size 3X3, (b) Extracted Watermarked Logo


(a) (b)
Figure 12: (a) Median filtered Watermark Image with mask
size 5X5, (b) Extracted Watermarked Logo



(a) (b)
Figure 13: (a) Median filtered Watermarked Image with
mask size 7X7, (b) Extracted Watermark Logo


(a) (b)
Figure 14: (a) Median filtered Watermarked Image with
mask size 9X9, (b) Extracted Watermark Logo

Figure 15: Graph of (a) PSNR, (b) MSE, (c) CC for the watermarked image for different mask sizes of the median filter

4.4.2 Wavelet Compression

The algorithm is tested for wavelet compression, as shown in figure 16. The performance parameters are MSE = 0.0060, PSNR = 46.02619 dB and CC = 0.9907.


(a) (b)
Figure 16 (a) Wavelet compressed Watermarked image (b)
Extracted Watermark Logo

4.4.3 Fading

The algorithm is tested for fading, where each pixel value of the image is increased by 50; the result and the extracted watermark logo are shown in figure 17. The performance parameters are MSE = 0.0561, PSNR = 36.5775 dB and CC = 0.9954.


(a) (b)
Figure 17: (a) Faded Watermarked Image (Original+50),
(b) Extracted Watermark Logo


4.4.4 Noise


(a) (b)
Figure 18: (a) Noise added Watermarked image,
(b) Extracted Watermark Logo

The algorithm is tested for noise, as shown in figure 18. The performance parameters are MSE = 0.0322, PSNR = 38.9902 dB and CC = 0.8103.

4.4.5 Resizing
The algorithm is tested for resizing, where the watermarked image is resized by a scaling factor of 2, as shown in figure 19. The performance parameters are MSE = 0.0028, PSNR = 49.5556 dB and CC = 0.9972.


(a) (b)
Figure 19: (a) Resized Watermarked Image, (b) Extracted
Watermark Logo

5. Conclusion
Thus we have developed a system which successfully embeds a logo imperceptibly in a given cover image. The logo is sensitive to tampering and compression. On the other hand, the sensitivity is least observed in the DWT method, where the watermark maintains a fair level of resistance to noise and other attacks. In order to further prevent the watermark logo from detection, we encrypt it before embedding it in an image. The algorithm used is based on swapping the pixel values of the watermark logo using password-generated random vectors. The system is not immune to compression techniques: whenever the watermarked image is stored in compressed format, a considerable loss of information is observed. The system can be extended to counter the compression effect. An extracted watermark face image can be recognized using an automated intelligent face detection algorithm [11]. The proposed system can be used for enhanced copyright protection, detection of misappropriated images, and detection of alteration of images stored in a digital library.

References
[1] W. Diffie and M. E. Hellman, "New Directions in Cryptography", IEEE Trans. on Information Theory, Vol. IT-22, No. 6, Nov. 1976.
[2] B. M. Macq, J. J. Quisquater, "Cryptography for Digital TV Broadcasting", Proc. of the IEEE, Vol. 83, No. 6, June 1995, pp. 944-957.
[3] J. M. Acken, "How Watermarking Adds Value to Digital Content", Comm. of the ACM, July 1998, Vol. 41, No. 7, pp. 75-77.
[4] Yongjian Hu, Sam Kwong, Jiwu Huang, "An Algorithm for Removable Visible Watermarking", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 16, No. 1, January 2006.
[5] Zhang Fan, Zhang Hongbin, "Capacity and Reliability of Digital Watermarking", International Conference on the Business of Electronic Product Reliability and Liability, 2004.
[6] Fan Zhang, Hongbin Zhang, "Digital Watermarking Capacity and Reliability", Proceedings of the IEEE International Conference on E-Commerce Technology, 2004.
[7] Chun-Hsiang Huang and Ja-Ling Wu, "Attacking Visible Watermarking Schemes", IEEE Transactions on Multimedia, Vol. 6, No. 1, February 2004.
[8] S. T. Gandhe, K. T. Talele and A. G. Keskar, "Face Recognition Using DWT+PCA", International Conference INDICON 2007, organized by IEEE India Council at Bangalore, 6th-8th September 2007.
[9] S. T. Gandhe, K. T. Talele, A. G. Keskar, "Intelligent Face Recognition: A Comparative Study", ICGST's Graphics Vision & Image Processing Journal, Vol. 7, Issue 2, 2007, pp. 53-60.

Authors Profile

Dr. A.G. Keskar is a Professor in the Electronics Engg. Dept., Visvesvaraya National Institute of Technology, Nagpur. He is a senior member of IEEE. He has published 10 journal papers and 25 papers in international conferences. His areas of interest include fuzzy logic, embedded systems and machine vision.

Dr. S.T. Gandhe is a Professor in the Electronics Engg. Dept., S. P. Institute of Technology, Mumbai, India. He is a member of IEEE. His areas of interest include image processing, pattern recognition and robotics. He has published 12 papers in national conferences, 10 papers in international conferences and 3 papers in international journals.

K.T. Talele is an Assistant Professor in the Electronics Engg. Dept., S. P. Institute of Technology, Mumbai, India. He is a member of IEEE. His areas of interest include DSP, image processing and multimedia communication. He has published twenty papers in national conferences, three papers in international journals and 14 papers in international conferences.
Design and Implementation of A GUI Based on
Neural Networks for the Currency Verification
¹ Ajay Goel, ² O.P. Sahu, ³ Rupesh Gupta and ⁴ Sheifali Gupta

¹ Department of CSE, Singhania University, Rajasthan, India, [email protected]
² Department of ECE, N.I.T. Kurukshetra, India, [email protected]
³ Department of ME, Singhania University, Rajasthan, India, [email protected]
⁴ Department of ECE, Singhania University, Rajasthan, India, [email protected]

Abstract: The technological development in the era of image processing and machine vision has two faces. One face is to help society through automation; the other has serious implications for society, such as cyber crimes, e.g. web hacking, cracking, etc. One of the emerging crimes nowadays is the preparation of fake legal documents. These documents have social value; a degree certificate, for example, certifies the educational qualification of a person. Legal documents contain many symbols, such as kinegrams, holograms and watermarks, by which their authenticity can be verified. Digital watermarking emerged as a tool for protecting multimedia data from copyright infringement. In this paper an attempt has been made to verify legal documents on the basis of the watermark. In this work, correlation mapping with a neural network is used for extracting the watermark to verify legal documents. This technique gives an elevated accuracy rate with less time needed to extract the watermark. The method can also be implemented in additional applications such as stamp verification, currency verification etc.
Keywords: Watermarking, Multilayered-Network, Certifying,
Epochs.
1. Introduction
With the increasing use of the internet and the effortless copying, tampering and distribution of digital data, copyright protection [1] for multimedia data has become an important issue. There are many symbols present on a printed document for its identification, but in this work the watermark has been chosen for verification. Basically, legal documents can be verified by two methods: first-line inspection methods and second-line inspection methods. First-line inspection methods include watermarks, ultraviolet fluorescence and intaglio printing (further divided into micro text, holograms and kinegrams); second-line inspection methods include Isocheck/Isograms. Recent public literature shows that some researchers have tried to apply watermarking to printing systems. In the geometric transform domain, Pun [5] has devised a watermarking algorithm robust to printing and scanning. The PhotoCheck software
developed at the AlpVision company by Kutter [6] is mainly focused on authentication detection of passports. As a passport belongs to the owner with his photo, this belongs to a content-based watermarking technique. When the photo is changed, the image with the watermark is of course lost, and this just requires that the watermark hidden in the owner's passport be robust to one cycle of print and scan. Considering the special characteristics of the FFT with respect to rotation, scaling and cropping, Lin [7][8] carried out research on fragile watermarking rather early and obtained many useful conclusions on the distortion brought about by print and scan. Researchers in China [9] began to hide information in printing materials, using functions offered by Photoshop. All of these are focused on watermarks robust to one cycle of print and scan.
2. Basic Concept
1. Since watermark making requires a highly efficient technique and the watermark can be seen only by its shadow, the watermark can be an effective key to certify a currency note.
2. In certifying a currency note, since a note in normal use is folded and noise sometimes occurs, feedback learning on the watermarks of used currency notes is needed. The back-propagation neural network is suitable for certifying the watermark, because it can be designed with many layers and many nodes, so it can be used to recognize complicated patterns [3].

They consider correlation as the basis for finding matches of a sub-image w(x, y) of size J×K within an image f(x, y) of size M×N, where it is assumed that J≤M and K≤N. A template of each type of note is prepared, and correlation is then applied between each stored note and the note being tested. A zero value of the correlation coefficient gives the location of the watermark [10].
3. Certifying
To certify the watermark, it is input to a back-propagation neural network, whose result is used to certify the currency note. First the neural network must be trained by presenting the ideal watermark to it. The input sent to the neural network has a size of about 4225 nodes.
4. Trainable Multilayered Network

The BP weight update is given by eq. (1.1) as [6]:

    ΔW_ij(t) = η δ_i x_j + α ΔW_ij(t − 1)    (1.1)

where W_ij is the weight connecting an output of unit j to an input of unit i, η is the step size, α is a momentum coefficient, and x_j is an input signal from unit j. The quantity δ_i is an error term, computed as

    δ_i = O_i′ (t_i − O_i)          for output units
    δ_i = O_i′ Σ_m δ_m W_mi         for hidden units    (1.2)

where t_i is the desired signal for unit i, O_i is the actual output of unit i, and O_i′ is the derivative of O_i. The weights are adjusted according to the target: when there is a large difference between output and target, the neural network needs more training. As the number of epochs increases, training improves; although the time taken to train the network grows, the target is achieved with a lower mean squared error [4].
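As a minimal illustration of update rule (1.1) and the output-unit error term of (1.2), assuming sigmoid activations and NumPy arrays (hypothetical helper names, not the paper's MATLAB implementation):

import numpy as np

def bp_update(W, delta, x, dW_prev, eta=0.1, alpha=0.9):
    # One back-propagation weight update with momentum:
    # dW = eta * delta_i * x_j + alpha * dW_prev  (eq. 1.1)
    dW = eta * np.outer(delta, x) + alpha * dW_prev
    return W + dW, dW

def output_delta(target, out):
    # Error term for output units with a sigmoid activation:
    # delta_i = O_i' * (t_i - O_i), where O_i' = O_i * (1 - O_i)  (eq. 1.2)
    return out * (1.0 - out) * (target - out)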
5. Methodology
In this methodology, the search for the watermark is done by first splitting the document into blocks, each block having the same size as the ideal watermark. After splitting, each block is stored in a separate variable and correlated with the ideal watermark. This correlation yields a correlation coefficient; the block which contains the watermark gives the highest correlation coefficient. That block is then extracted and given to the neural network. The correlation coefficient [3] is given by equation (1.3):

    c(s, t) = Σ_x Σ_y f(x, y) w(x − s, y − t)    (1.3)

where x, y are the coordinates of the selected block, s = 0, 1, 2, …, M and t = 0, 1, 2, …, N, with M×N the size of the ideal watermark. Each pixel of the selected block is matched with the corresponding pixel of the ideal watermark block, and the difference between them, called the correlation coefficient, is calculated. The correlation coefficient of each block is calculated by the same procedure, and the block with the highest correlation coefficient is selected. The system software flowchart is shown in figure 1; through it, the location of the watermark is detected [7].

Figure 1. Flowchart of a system
The edge information from the shadow of the watermark is derived from the shone-through image and is input to the neural network for certifying. The edge information of the entire currency note is input to the neural network for checking.
6. The Proposed Scheme
We apply an algorithm which tries to find the watermark more effectively. The first step is to break the image into pieces, the size of each piece being equal to the ideal watermark size. The algorithm then finds the correlation between each piece and the ideal watermark. The correlation process gives correlation coefficients, and the method picks the piece with the greatest correlation coefficient, because this piece most probably contains the watermark. The piece found in the previous step is given to the neural network for verification; the neural network is trained with the ideal watermark as the target.
This is implemented in MATLAB 7.0 and VB.NET: the GUI has been created in VB.NET, the main implementation has been done in MATLAB 7.0, and the GUI is linked with MATLAB 7.0 with the help of M-files.
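A sketch of the block-wise correlation search in Python with NumPy; the paper's implementation is in MATLAB 7.0, so the stride, normalization and function name here are illustrative assumptions:

import numpy as np

def find_watermark_block(document, template):
    # Split the document into template-sized blocks and return the block
    # whose normalized correlation with the ideal watermark is highest.
    th, tw = template.shape
    t = template.astype(np.float64) - template.mean()
    best_score, best_block = -np.inf, None
    for r in range(0, document.shape[0] - th + 1, th):
        for c in range(0, document.shape[1] - tw + 1, tw):
            block = document[r:r + th, c:c + tw].astype(np.float64)
            b = block - block.mean()
            denom = np.sqrt(np.sum(b ** 2) * np.sum(t ** 2))
            score = np.sum(b * t) / denom if denom else 0.0
            if score > best_score:
                best_score, best_block = score, block
    return best_block, best_score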

Figure 2. Main form of GUI, “Document verification system”

In figure 2, a photo of the 1000 currency note is shown. After acquiring the image, the neural network is trained; figure 3 shows the training of the neural network.

Figure 3. Training of Neural Network
After clicking OK, the main form is displayed again, and the document can then be verified. Selecting 'verify' from the menu bar starts the verification process. The graph in figure 4 shows the accuracy of the watermarks present on the document.

Figure 4. Accuracy of the two watermarks present on the document.
7. Result
The notes are divided into two categories: a training set and a testing set. After inputting the image of a currency note, the location of the watermark is detected by correlation mapping, splitting the image into blocks and finding the correlation of each block. The block is then given to the neural network for certification. The neural network is trained on the training set; after the goal is accomplished, we test the neural network on the test data. Mean squared error has been used in this neural network, so the training graph is plotted as target versus input. The neural network has been trained with 1000 epochs.

Figure 5. Training neural network with 1000 epochs
This bar chart is plotted on a scale of 1. The first watermark shows an accuracy of 99% and the second watermark an accuracy of 100%, both above the threshold, so we can say that the currency note is genuine. This accuracy is checked on the basis of percentage matching with the real watermarks, because the neural network has been trained with the real watermark and gives the mean squared error when tested. Two neural networks, one for each watermark, have been created separately. If the accuracy is below the threshold, the note is rejected.

Figure 6. Bar chart showing percentage accuracy of two
watermarks.
We have implemented our technique on Indian currency and Indian postage stamps, whereas the earlier technique was implemented on Thai currency. The output of that technique is also different: it did not show the accuracy of the watermark, and it searches for the watermark in the whole image, while our technique splits the image into blocks and applies correlation between each block and the ideal watermark, which gives a correlation coefficient. The value of the correlation coefficient gives an idea of the similarity between two images. Our technique takes a shorter time to find the watermark in the note.
References
[1] Ingemar J. Cox, Matthew L. Miller, and Jeffrey A. Bloom, Digital Watermarking, Morgan Kaufmann Publishers, 2002.
[2] Francisco J. Gonzalez-Serrano, Harold. Y. Molina-
Bulla, and Juan J. Murillo- Fuentes,” Independent
component analysis applied to digital image
watermarking,” International Conference on Acoustic,
Speech and Signal Processing (ICASSP), vol. 3, pp.
1997-2000, May 2001.
[3] Dan Yu, Farook Sattar, and Kai-Kuang Ma,
“Watermark detection and extraction using
independent component analysis method,” EURASIP
Journal on Applied Signal Processing, vol. 1, pp. 92–
104, 2002.
[4] Minfen Shen, Xinjung Zhang, and Lisha Sun, P. J.
Beadle, F. H. Y. Chan, “A method for digital image
watermarking using ICA,” 4th International
Symposium on Independent Component Analysis and
Blind Signal Separation (ICA 2003), Nara, Japan,
April 2003, pp. 209-214.
[5] Ju Liu , Xingang Zhang, Jiande Sun, and Miguel
Angel Lagunas, “A digital watermarking scheme based
on ICA detection,” 4th International Symposium on
Independent Component Analysis and Blind Signal
Separation, (ICA 2003), Nara, Japan, April 2003, pp.
215-220.
[6] Stephane Bounkong, Boremi Toch, David Saad, and
David Lowe, “ICA for watermarking digital images,”
Journal of Machine Learning Research 4, pp. 1471-
1498, 2003.
[7] Viet Thang Nguyen and Jagdish Chandra Patra,
“Digital image watermarking using independent
component analysis,” PCM 2004, Lecture Notes in
Computer Science 3333, pp. 364-371, Springer-
Verlag, 2004.
[8] Thai Duy Hien, Zensho Nakao, and Yen-Wei Chen,
“Robust multi-logo watermarking by RDWT and
ICA”, Signal Processing, Elsevier, vol. 86, pp. 2981-
2993, 2006.
[9] Aapo Hyvarinen, “Survey on Independent Component
Analysis”, Neural Computing Surveys,vol. 2, pp. 94-
128, 1999.
[10] Hyvarinen, Karhunen, and Oja, “Introduction,”
Chapter 1 in Independent Component Analysis,John
Wiley, pp. 1-12, 2001.




An Approach to Protection at Peer-To-Peer
Networks

* Dr. G. Srinivasa Rao, * Dr. G. Appa Rao, * S. Venkata Lakshmi, * D. Veerabhadra Rao, ** B. Venkateswar Reddy, * P. Venkateswara Rao, * K. Sanjeevulu

* GITAM University, ** CVSR Engineering College
[email protected]


Abstract: Open networks are often insecure and
provide an opportunity for viruses and DDOS activities
to spread. To make such networks more resilient against
these kinds of threats, we propose the use of a peer-to-
peer architecture whereby each peer is responsible for:
(a) detecting whether a virus or worm is uncontrollably
propagating through the network resulting in an
epidemic; (b) automatically dispatching warnings and
information to other peers of a security-focused group;
and (c) taking specific precautions for protecting their
host by automatically hardening their security measures
during the epidemic. This can lead to auto-adaptive
secure operating systems that automatically change the
trust level of the services they provide. We demonstrate
our approach through a prototype application based on
the JXTA peer-to-peer infrastructure.

Keywords: Peer-to-peer, Antivirus, Intrusion Detection,
JXTA

1. Introduction
The rapid evolution of the Internet, coupled with the reduction in the cost of hardware, has brought forth very significant changes in the way personal computers are used. Nowadays, the penetration of the Internet is wide, at least in the developed world, and a high percentage of connectivity is handled through broadband technologies
such as DSL, cable modems, satellite links and even 3G
mobile networks. Many companies have permanent
connections to the Internet through leased lines and optical
fibers, and many home users through the aforementioned
broadband connections. If one also takes into account the
significant development of wireless networking technologies
(such as Wireless LAN, HyperLAN), the immediate result is
an almost universal connection of most users on a 24-hour
basis. Although the potential benefits arising from these
developments are various and important, so are the dangers
that follow from the possibility of malicious abuse of this
technology.
The proliferation of viruses and worms, as well as the
installation of Trojan horses on a large number of
computers aiming at Denial of Service (DoS) attacks
against large servers, constitute one of the major current
security problems. This is due to the extent to which critical
infrastructures and operations such as hospitals, airports,
power plants, aqueducts etc. are based on networked
software-intensive systems. The measures taken for
protection against such threats include [45] the use of
firewalls, anti-virus software and intrusion detection
systems (IDS). Considerable importance is also placed on
the topology of the network being protected [43], as well as
to its fault tolerance to ensure that its operation will
continue even if a part of it is damaged.
A significant increase in the spread of viruses, worms
and Trojan horses over the Internet has been observed in the
recent years. Recent evidence shows that older boot sector
viruses, as well as viruses transmitted over floppy disks no
longer constitute a considerable threat [12]. At the same
time, though, modern viruses have become more dangerous,
employing complex mutation, stealth and polymorphism
techniques [37] to avoid detection by anti-virus software
and intrusion detection systems. These techniques are
particularly advanced and, combined with the fact that
antivirus software is often not properly updated with the
latest virus definitions, can lead to uncontrollable
situations.
In the last two years it has been proven both theoretically
[38, 23] but mainly practically that the infection of
hundreds of thousands of computers within a matter of hours, or even minutes, is feasible. At the theoretical level,
Staniford [38] presented scanning techniques (random
scans, localized scans, hit-list scans, permutation scans)
which, used by a worm, can perform attacks of this order.
Indeed such worms are often referred to as Warhol worms or
Flash worms due to their potential velocity of transmission.
A similar confirmation was obtained practically in the
cases of the worms Code Red [31], Code Red (CRv2) [5],
Code Red II [13], Nimda [26, 27, 20], and Slammer [22],
which were characterized as epidemics by the scientific
community [44] (although a more appropriate
epidemiological term would be pandemics). Recently the
Blaster-worm [24, 21] caused significant disruption in the
Internet, although the infection rate of the specific worm
was relatively slow in comparison with the previously
mentioned worms. The reason for the effectiveness of the
Blaster-worm was the exploitation of the Windows DCOM
RPC interface buffer overrun vulnerability. This
vulnerability affects all unpatched Windows NT /2000/ XP
systems, as opposed to Code Red worms variations or the
Slammer worm which were focused on machines acting as
Web Servers or SQL Servers respectively.
All of the above is evidence that rapid malcode is
extremely hard to confront using the “traditional” way of
isolating and studying the code to extract the appropriate
signature and update the IDS in real time.
We now invite the reader to consider human behavior during a flu epidemic. Obviously a visit to a doctor
and the use of vaccines is essential, however there is also
need for an increased awareness and use of hygiene rules:
avoiding crowded spaces, increasing the ventilation of our
working area etc. Once the epidemic subsides, these
measures can be suspended; a person showing symptoms of
the disease, of course, should still visit a doctor to receive
medical care, regardless of whether the epidemic is still
taking place.
The classic computer protection methods can be likened
to the above medical situation: The vaccination of the
population can be compared to updating the virus signature
files; the lookout for symptoms may be compared to
detection by an IDS; while the hygiene rules followed,
which are essential for the protection of the larger, still
unaffected population, may be compared to the operation of
our proposed system, described in the
following sections.

2. Architecture
Peer-to-peer networks, which we will hereafter reference
as p2p networks, are often considered to be security threats
for organizations, companies or plain users, mainly due to
the use of p2p-based applications for illegal file sharing,
and to the ability of worms to be spread through such
applications (e.g. VBS.GWV.A [41, 40] and W32.Gnuman
[10]). Our work indicates, however, that p2p networks can
also be positively utilized to significantly reinforce network
security, by offering substantial help in the protection
against malicious applications. We propose an effective way
to achieve this by collecting and exchanging information
that will allow us to obtain a global overview of the network
status, with reference to ongoing security attacks. The goal
of our methodology is to select the most appropriate security
policy, based on the level of danger posed by rapid malcode
circulating in the network.
P2p networks leverage the principle that a much
better utilization of resources (processing power, bandwidth,
storage etc.) is achieved if the client/server model is
replaced by a network of equivalent peers. Every node in
such a p2p network is able to both request and offer services
to other peer nodes, thus acting as a server and a client at
the same time (hence the term “servent” = SERVer +
cliENT which is sometimes used).
The motivation behind basing applications on p2p
architectures or infrastructures derives to a large extent
from their adaptability to variable operating environments,
i.e. their ability to function, scale and self-organize in the
presence of a highly transient population of nodes (or
computers/users), hardware failures and network
outages, without the need for a central administrative server.
Our proposed application, which we call
“NetBiotic”, requires the cooperation of several computers
within a common peer group, in which messages are
exchanged describing the attacks received by each
computer. It consists of two independent entities: a Notifier
and a Handler. These entities act as independent daemons
for UNIX systems, services for Windows NT/2000/XP or
processes for Windows 9x/Me. From now on we will be
referring to these entities as daemons for simplicity. Figure
1 illustrates the architecture of the proposed system within a
group of cooperating peer computers.
The Notifier is a daemon responsible for monitoring the
computer on which it runs and collecting any information
relevant to probable security attacks. There is a plethora of
different approaches to incorporate in the Notifier; for
simplicity in our preliminary implementation we only
monitor the log files of several security related applications,
such as firewalls, anti-virus software and IDS systems.
These are applications that collect information about
security threats and attacks to the computer system on
which they are running and either notify the user of these
attacks or take specific measures, while at the same time
storing information relevant to the attacks into log files. By
regularly reading the log files generated by these
applications, the Notifier detects any recently identified
security attacks on the computer it is running on. At regular time intervals t, the Notifier of node n will record the number of hits (h_t^n) the node received over the past interval. It will then calculate and transmit the percentage p_t^n by which this number differs from the average hits in an aggregate of the k latest intervals, given by

    p_t^n = h_t^n / ( (1/k) Σ_{j=t−k}^{t−1} h_j^n )

Figure 1. The architecture of the NetBiotic system
within a group of cooperating peer
computers.


where:
• t is the ordinal number of a fixed time interval.
• n is a node identifier.
• h_t^n is the number of attacks node n received in the interval t.
• p_t^n is the percentage increase or decrease in attacks during the current interval t on node n.
• k (>0) is the size of the "window" used, in number of t time intervals, within which the average attack rate is calculated.
Selecting the appropriate length of the time interval t is
currently a subject of further research. In our current
implementation we use a value of 15 minutes, which we
feel provides a balance between increased network traffic
and delay in notifying the network of attacks. This will be
further discussed in the next Section.
A value of p_t^n significantly greater than 1.0 is an indication that node n is under security attack during the interval t. The actual threshold used for p_t^n is set by
experience, and can vary according to the tolerance for
false positives/negatives one has. With a small threshold it
is possible to falsely interpret slightly increased rapid
malcode activity as an epidemic (false positive), leading to
an unnecessary activation of the available countermeasures,
which in turn can disrupt some non critical useful services
and cause inconvenience to the users. A very large
threshold on the other hand, would probably fail to identify
a rapid malcode epidemic (false negative) leaving the
system protected only by its standard built-in security
mechanisms. We tend to believe that it is much better to tune
the NetBiotic system towards a large threshold, because
rapid malcode epidemics cause a number of side-effects
that are unlikely to remain unnoticed. For us it is more
important to ensure the timely recognition of these
symptoms, in order to increase the security level of the
protected system before a circulating worm may manage to
launch an attack against it.
The Handler is also a daemon, responsible for
receiving the messages sent from the Notifiers of other
computers, and for taking the appropriate measures when it
is deemed necessary. More specifically, it records the hit
rates h^t and percentage changes p^t received from the
different nodes in the peer group within a predefined period
of time t, and calculates the overall change in attack rate,
averaged over all n nodes of the peer group that transmitted a
message during the last interval:

p_avg = (1/n) · Σ_{i=1}^{n} p_i^t
The architecture supports countermeasures based
upon predefined thresholds for p_avg, which are again set by
experience. If p_avg exceeds an upper threshold, the security
level of the computer is raised. If, on the other hand, it
drops below a lower threshold for a large period of time, the
security level at which the computer functions is reduced.

Selecting the appropriate thresholds τ_high and τ_low for
increasing or decreasing the security levels is crucial. In our
approach, the thresholds are selected empirically and we have:
• if p_avg > τ_high, then increase the security policy.
• if p_avg < τ_low, then decrease the security policy.
• if τ_low ≤ p_avg ≤ τ_high, do nothing.
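A minimal sketch of this decision rule follows; the threshold values are hypothetical, and the dwell-time condition for lowering the security level (remaining below τ_low for a large period of time) is omitted for brevity.

```python
TAU_HIGH = 2.0  # assumed upper threshold; set empirically in practice
TAU_LOW = 0.8   # assumed lower threshold; set empirically in practice

def handler_decision(reports):
    """Average the p values reported by peers in the last interval
    and map the result to a security-policy action."""
    if not reports:
        return "do nothing"
    p_avg = sum(reports) / len(reports)
    if p_avg > TAU_HIGH:
        return "increase security policy"
    if p_avg < TAU_LOW:
        return "decrease security policy"
    return "do nothing"

print(handler_decision([2.5, 3.1, 1.9]))  # epidemic-like: increase
print(handler_decision([0.9, 1.1, 1.0]))  # normal activity: do nothing
```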

We base our decision for modifying the security policy
on the rate of change of attacks, rather than on the actual
number of attacks, to normalize the inputs from all peers
with respect to their regular susceptibility to attacks; a peer
whose actual number of attacks during a monitored time
interval has increased from 1000 to 1100 has only
experienced a 10% change in the attack rate, while a peer
whose number of attacks increased from 50 to 150 within
the same interval has experienced a 200% change in the
attack rate; still, they have both received 100 attacks more
than usual. As far as the actual utilization of our
architecture for protecting the computer system is
concerned, the countermeasures taken will depend on many
factors. A simple personal computer will require a
different protection strategy than the central server of a
large company. The type of operating system is also an
important factor. The proposed system is not suggested as a
replacement for traditional protection software (anti-
viruses, IDS, firewalls etc.). The aim of NetBiotic is to
assemble an additional, overall picture of the network status
and suggest the basic security measures to be taken in the
event of an epidemic. The NetBiotic architecture might not
be capable of protecting against a specific attack; however, it
will engage the standard measures that in many cases are
crucial (such as disabling HTML previewing in several
mail clients, not allowing Active X controls in various web
browsers, disabling macros in some office application etc.).

In our prototype design, the recommended measures for
a simple personal computer running Microsoft Windows
would be to increase the security level of the default mail
client and web browser. It would be additionally helpful to
alert the user of the increased threat, in order to minimize
threats of automated social engineering attacks. Servers can
similarly disable non-critical networked services (e.g. by
modifying the inetd.conf file in the case of Linux/Unix
based operating systems). Figure 2 illustrates the operation
and interaction of the Notifier and Handler daemons.

3. Implementation
The prototype system we present here was developed
using the JXTA protocol [15]. JXTA is a partially
centralized p2p protocol implementation introduced in early
2001, designed for maximum peer autonomy and
independence. It allows applications to be developed in any
language, it is independent of operating system type and is
not limited to the TCP/IP protocol for data transfer. This
allows an application such as NetBiotic to be easily ported
to various operating systems, which is crucial to its
operation, as its effectiveness will depend on the size of the
peer group that will adopt it. An additional benefit of JXTA
is its availability under an open source software license
agreement, similar to the Apache License [1].
Due to the nature of our application, security issues are
of particular interest. Security provisions are usually
incorporated in p2p architectures by means of various
cryptographic mechanisms such as the information
dispersal algorithm [30] or Shamir’s secret sharing code
[33], anonymous cryptographic relays [32], distributed
steganographic file systems [11], erasure coding [19],
SmartCards or various secure routing primitives [7].
JXTA peers function under a role-based trust model,
whereby individual peers function under the authority of
third-party peers to carry out specific tasks. Public key
encryption of the messages exchanged, which may be in
XML format, as well as the use of signed certificates are
supported, providing confidentiality to the system. The use
of message digests provides data integrity, while the use of
credentials (special tokens that authenticate a peer’s
permission to send a message to a specific endpoint)
provides authentication and authorization. JXTA also
supports the use of secure pipes based on the TLS protocol.
Further work is being carried out based on the security
issues of the JXTA system, notably the implementation of a
p2p based web of trust in the Poblano Project [4], which
will be discussed in the future work Section.
Our system was implemented in Java (Java 2
version 1.4.0_02) using JXTA version 1.0, and uses the
winreg [36] tool to administer the Windows registry and
modify the security settings of the various applications. The
main advantages of Java are its compatibility with most
operating systems as well as its strong built-in security
model.
In our preliminary implementation, the Handler
modifies the security settings of the Microsoft Outlook mail
client and the Microsoft Internet Explorer web
browser. These two applications were selected as they are
often the target of viruses. The simple operation of
increasing their security settings is therefore enough to
provide effective protection to a large number of users.
Most anti-virus programs can be adjusted to
produce log files with the attacks they intercept. By
regularly monitoring such log files, the Notifier daemon is
able to detect a security attack and notify the peers. To test
our prototype system, we created a software tool which
randomly appends supposed security attack entries to these
log files.
The NetBiotic architecture is compatible with any IDS
or anti-virus software that can be set up to record the
security attacks against the system it is protecting in a log
file. Our aim is to make the NetBiotic system as
independent as possible from the IDS with which it
cooperates and the underlying operating system. This
independence, however,
cannot be total, as the following factors will be unavoidably
system dependent:

• Log files
In its simplest form, the system can simply check the size of
the log file. For a more sophisticated operation, though, it
would be necessary to incorporate a parser that would
extract specific information from the log files. Such a parser
has to be specific to each different type of log file used (a
minimal sketch of the simplest size check follows this list).

• Countermeasures taken
System independence cannot be achieved in the case of the
countermeasures taken, which will depend on the operating
system. Different scripts have to be used to modify the
security levels of applications in different operating
systems.
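As a sketch of the size check mentioned under “Log files” above, the generator below watches a log file and reports growth; the path and polling interval are illustrative assumptions, not values prescribed by NetBiotic.

```python
import os
import time

LOG_PATH = "/var/log/antivirus.log"  # hypothetical log location

def watch_log_growth(path, poll_seconds=60):
    """Yield the number of bytes appended to the log since the last
    poll; growth suggests newly recorded attacks."""
    last_size = os.path.getsize(path) if os.path.exists(path) else 0
    while True:
        time.sleep(poll_seconds)
        size = os.path.getsize(path) if os.path.exists(path) else 0
        if size > last_size:
            yield size - last_size
        last_size = size

# Usage (runs indefinitely):
# for appended in watch_log_growth(LOG_PATH):
#     print(f"{appended} new bytes of security log entries")
```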


Figure 2. Operation of the Notifier and Handler daemons

Our system has been tested in a laboratory
environment as well as in a peer group that was set up for
this purpose, in which virus attacks were simulated on
some peers, resulting in the modification of the security
settings of Microsoft Outlook and Internet Explorer on
other peer computers. No real viruses were deployed. A
program was running on each of the peer computers and
periodically edited the log file of the antivirus software,
simply changing its size to simulate a security attack event.
The average frequency with which these events were
simulated was random and different for each computer. The
exchange of messages, individual and overall average hit
rates as well as the resulting changes in the security settings
of the application were recorded and verified against our
theoretical expectations.
Finally, since our system consists of two independent
daemons, it is possible to only install one of the two on
certain peer computers. For instance, the Notifier daemon
would be particularly useful running on a large company
server, and supplying the peers with information about the
security threats it faces. The administrators of such a server
may prefer not to install the Handler daemon, and instead
manually take action in the event of security attacks.
Similarly, for a personal computer user who may not have
adequate security measures and antivirus software installed
(for either financial or other reasons), installing the Handler
daemon itself may provide an adequate level of protection.
In this case, the Handler daemon would modify the local
security level based on information received from the
security-focused peer group. The Handler would therefore operate
relying on the trustworthiness of the information received
from the peer group only, which may in some cases be a
disadvantage.

4. Related Work
The research that is most relevant to our proposed
system has been carried out within the framework of project
Indra [14], with which we partially share a common
philosophy. We agree on the basic principle of using p2p
technology to share security attack information between
computers in a network in order to activate security
countermeasures if necessary.
We differ however in the circumstances under which
specific countermeasures should be taken. According to the
Indra project team, in the event that a security attack is
detected countermeasures should be immediately initiated,
by using the appropriate plugins to protect the computer
system. A single security attack anywhere in the network is
enough for them to generate a response. In short, Indra is
designed to respond to every single security attack.
In contrast, our system’s goal is to determine if there
is a general increase in the virus or worm attacks in the
network, or more importantly a virus or worm epidemic
outbreak. Measures taken in this case, such as the increase
in security settings of mail clients, web browsers and anti-
virus programs will only be effective during the epidemic,
and the system will return to its original state after it is
finished. In our design, individual virus or worm attacks in
the network are not considered separately. Furthermore, we
believe that our design can be expanded to very large
network sizes without considerably increasing the overall
network traffic.
A number of highly distributed systems rely on peer
communications. The Hummingbird system [28] is based
on a cooperative intrusion detection framework that relies
on the exchange of security related information between
networks or systems in the absence of central
administration. The structure of the Hummingbird system
is significantly more complex and advanced than
NetBiotic, using a combination of Manager-Hosts,
Managed Hosts, Slave Hosts as well as Peer, Friend and
Symbiote relationships for the exchange of security related
information. The Hummingbird system includes advanced
visualization tools for its configuration and monitoring of
log files, and although it may require considerable effort
and expert knowledge for fine tuning the cooperation of
each host with the others, it is particularly effective for
distributed security attacks (such as doorknob, chaining,
loopback attacks etc.). A potential secondary use of the
Hummingbird system, in our view, could also be in the
detection of malcode.
Emerald [29, 25] is a system targeted towards the
exchange of security incident related information between
different domains or large networks. It consists of a layered
architecture that provides a certain abstraction, and
requires the adjustment of parameters relevant to the trust
relationships between cooperating parties. We believe that
Emerald, like Hummingbird, can be invaluable in
protecting a computer system or network against
distributed and targeted attacks. NetBiotic may not be in
a position to confront such attacks with the same
effectiveness, as its goal is the seamless and automated
creation of a network of peers for the fast exchange of
information regarding rapid spread malcode activity,
leveraging the benefits of peer-to-peer architectures and
topologies, and providing basic protection to the
participating peers.
Bakos and Berk [2] presented a system for the
detection of virus outbreaks. The fastest spreading worms
use scanning techniques for identifying potential target
computers. As a result, they also scan a large number of
addresses that do not correspond to actual computers. The
routers that intercept such scanning messages usually reply
with an ICMP Destination Unreachable (also known as
ICMP Type 3 or ICMP-T3) message. The authors propose
that a carbon copy message be sent by the routers to a
central collector system, which will be responsible for
collecting, correlating and analyzing this data. Bakos and
Berk have implemented such a system by modifying the
kernel of the Linux operating system to act as a router. The
central collector receives the messages and forwards them
to an analyzer system, which extracts the valuable
information. It should however be examined whether the
time required for the entire processing prohibits the use of
this system for fast spreading worms, as described by
Staniford [38].
Systems that use an extended network to gather
information yet rely on a centralized client/server model
were also examined. DeepSight [6] is a system developed
by Symantec based on a client/server architecture, whereby
centralized servers collect and re-distribute security attack
information. Since it is a commercial system it is not
available for scientific research, however it does include a
very widespread data collection network.
An approach similar to DeepSight is taken by
DShield, in which hundreds of computers communicate
with central servers and transmit their IDS log files. The
servers process the data and publish on a web site
information about the currently active malware, the IP
addresses from which most attacks originated and other
useful information. Through the incorporation of different
parsers, DShield supports various different IDS systems.
DShield has been active for more than two years, with a
significant number of users. A disadvantage of the system is
that the large volume of data collected requires considerable
processing time for extracting useful information. The
theoretical times taken by the Flash and Warhol worms as
well as the measured times for the Slammer worm [22, 38]
to spread through the Internet are probably beyond the
ability of DShield to react.
Both DeepSight and DShield aim at providing a
global view of the Internet security status, however they are
both subject to the disadvantages of the client/server
architecture they follow: their dependence on a single
server for their operation and their lack of adaptability
makes them vulnerable to targeted attacks. An original
approach taken by AAFID [35], whereby agents are
used to collect virus attack information, also follows a
centralized control structure. The same holds for the GrIDS
system [39], which uses activity graphs to control large
scale networks and identify suspicious activities, based on
the judgment of a System Security Officer.
Finally, the following two approaches propose
different ways of monitoring the overall security state and
threat level of a network: In the DIDS system [34], the
overall security state of a network under observation is
represented by a numerical value ranging between 0 (safest)
and 100 (least safe), while a clearly visual approach to
representing the network security state has been proposed
[42, 8]. We find both approaches very descriptive and
useful to a System Security Officer. In our prototype
NetBiotic implementation, however, we are currently
adopting a much simpler approach which consists of
choosing between three different security states (regular,
low risk and high risk), as described in Section 2.

5. Future Work
The NetBiotic system is an evolving research
prototype. It is currently being extended in a number of
ways as discussed below, in order to subsequently be
released as open source software to allow the collaboration
with other research groups working in similar directions.
At this stage, our goal is to propose an architecture,
accompanied by a basic implementation for proof-of-
concept purposes, which, based on a p2p network
infrastructure can provide security services for computer
systems. Although our prototype performed well in the
situation in which we tested it, it is not suitable for
performing large-scale testing.
We expect that, before more advanced versions of
our application will be implemented, the scientific
community will examine the use of p2p networks in security
applications from a theoretical standpoint and provide
insight into the advantages and disadvantages of such an
approach.
The following conceptual and implementation
improvements are currently being considered:

¯ Vulnerability to malicious attacks
A major drawback of our current design is its
inability to effectively verify the information transmitted in
the network. If one or more malicious users manage to
introduce into the peer network a large number of false hit
rate indications, the result may be the unwanted decrease of
the security measures of the computers in the network,
rendering them vulnerable to virus attacks.
We propose that all members of the security peer group will
have to be authenticated and verified, probably through the
use of certificates, to enforce a consistent authentication and
authorization policy.
At the implementation level, to confront the
problem of malicious users introducing false information we
further propose the following approaches, based on the
capabilities offered by JXTA:

1. JXTA supports the exchange of encrypted messages
over secure pipes based on the TLS protocol [3], which will
be used for the transmission of warning messages.
2. JXTA message digests will be used for data integrity
purposes.
3. Other research groups are involved in the creation of a
p2p-based web of trust. We intend to study these systems to
examine to what extent they can be used to enhance the
NetBiotic architecture.
¯ Use of epidemiological models
We believe that the incorporation of mathematical
epidemiological models for the detection of epidemic
outbreaks in the network and determining the threshold for
initiating security level modifications should significantly
enhance the robustness of our system. A key point in our
future research will be the selection of the thresholds for
modifying security policies. These thresholds will be
variable and will depend on each system’s characteristics
and on an analysis of the attack data collected. Studies [9,
18, 16, 17] show that there is a correlation between the
patterns of spread of biological viruses and computer
viruses. These studies were mainly limited to closed local
area networks. P2p models are ideal for gathering large
scale network virus information, which can subsequently be
processed and adapted to epidemiological models, leading
to decision tools for concluding, or perhaps even predicting,
whether there is — or is likely to be — an epidemic
outbreak in the network.
¯ Choice of appropriate security policy
In conjunction with other factors, such as the role of the
system being protected, our system should be able to
effectively choose the most appropriate security policy for
the specific period of time. In this way, single incidents of
virus attacks
may not be the cause of any concern, yet the detection of
epidemic outbreaks would initiate a modification of the
security policies.
¯ Platform porting
In porting our system to Unix/Linux platforms, the
operating system could be instructed to launch or halt
applications, or automatically request updates. The
configuration of these operating systems can be edited
through plain text files, which is an additional benefit for
our system.

6. Conclusions
Even the best protected organizations, companies
or personal users are finding it difficult to effectively shield
themselves against all malicious security attacks due to the
increasing rate with which they appear and spread.
Antivirus applications, as well as IDS systems, identify
unknown malware by employing behavior-based heuristic
algorithms. These algorithms are particularly effective
under a strict security policy, however they tend to produce
an increased number of false alarms, often disrupting and
upsetting the smooth operation of a computer system and
the organization or users it supports. On the other hand, if
the security policy is relaxed, the threat of a virus infection
becomes imminent.
We propose a platform based on p2p technology in
which the computers participating as peers of a network
automatically notify each other of security threats they
receive. Based on the rate of the warning messages
received, our system will increase or decrease the security
measures taken by the vulnerable applications running on
the computer. Our approach automates elements of the
process of choosing the appropriate security policy, based
on data useful for adjusting the security levels of both the
operating system (by launching and terminating related
applications) and the security applications (by modifying
the security parameters of the heuristic algorithms they
employ).
An important aspect of our design is that the traffic
introduced in the network by the peer nodes as a result of
the transmission of hit rate information is minimal. We
believe that, with the inclusion of the future extensions we
are currently working on, our approach may lead to
operating systems, antivirus programs, IDS software and
applications that will be able to self-adjust their security
policies.

References:
[1] Apache license: Current on-line (June 2003):
http://httpd.apache.org/docs/license.
[2] G. Bakos and V. Berk. Early detection of internet
worm activity by metering icmp destination
unreachable messages. In Proceedings of the the
SPIE Aerosense, 2002.
[3] B.J. Wilson. JXTA. New Riders, Indianapolis, IN, USA,
June 2002.
[4] R. Chen and W. Yeager. Poblano: A distributed trust
model for peer-to-peer networks. Technical report,
Sun Microsystems.
[5] Code Red CRv2. Current on-line (June 2003):
http://www.caida.org/analysis/security/code-
red/coderedv2 analysis.xml.
[6] Deepsight threat management system: Current on-line
(June 2003): http://www.securityfocus.org.
[7] P. Druschel and A. Rowstron. Past: A large-scale,
persistent peer-to-peer storage utility. In Proceedings
of the Eighth Workshop on Hot Topics in Operating
Systems, May 2001.
[8] R. Erbacher, K. Walker, and D. Frincke. Intrusion and
misuse detection in large scale systems. IEEE
Computer Graphics and Applications, 22(1), 2002.
[9] S. Forrest, S. Hofmeyr, and A. Somayaji. Computer
immunology. Communications of the ACM,
40(10):88–96, 1997.
[10] W32.gnuman.worm: Current on-line (June 2003):
http://service1.symantec.com/sarc/sarc.nsf/html/w32.
gnuman.worm.html.
[11] S. Hand and T. Roscoe. Mnemosyne: Peer-to-peer
steganographic storage. In Proceedings of the 1st
International Workshop on Peer-to-Peer Systems
(IPTPS ’02), MIT Faculty Club, Cambridge, MA,
USA, March 2002.
[12] Icsa labs 2002 computer virus prevalence survey.
Current on-line (June 2003):
http://www.trusecure.com/download/dispatch/vps200
2.pdf.
[13] Code Red II. Current on-line (June 2003):
http://www.eeye.com/html/research/advisories/al200
10804.html.
[14] R. Janakiraman, M. Waldvogel, and Q. Zhang. Indra:
A peer-to-peer approach to network intrusion
detection and prevention. In Proceedings of the 2003
IEEE WET ICE Workshop on Enterprise Security,
Linz, Austria, June 2003.
[15] Project jxta v2.0 java programmer’s guide: Current
on-line (June 2003):
http://www.jxta.org/docs/jxtaprogguide v2.pdf.
[16] J. Kephart. How topology affects population dynamics.
In Proceedings of Artificial Life 3, Santa Fe, New
Mexico, June 1992.
[17] J. Kephart, D. Chess, and S. White. Computers and
epidemiology. IEEE Spectrum, May 1993.
[18] J. Kephart and S. White. Directed-graph
epidemiological models of computer viruses. In
Proceedings of the IEEE Computer Society Symposium
on Research in Security and Privacy, pages 343–361,
Oakland, CA, 1991.
[19] J. Kubiatowicz, D. Bindel, Y. Chen, P. Eaton, D.
Geels, S.R. Gummadi, H. Weatherspoon, W.
Weimer, C. Wells, and B. Zhao. Oceanstore: An
architecture for global-scale persistent storage. In
Proceedings of ACM ASPLOS. ACM, November
2000.
[20] A. Mackie, J. Roculan, R. Russell, and M. VanVelzen.
Nimda worm analysis - incident analysis report
version II, September 2001.
[21] J. Miller, J. Gough, B. Konstanecki, J. Talbot, and J.
Roculan. Deepsight threat management system
threat alert - microsoft DCOM RPC worm alert.
Current on-line (August 2003):
https://tms.symantec.com/members/analystreports/030811-
alert-dcomworm.pdf.
[22] D. Moore, V. Paxson, S. Savage, C. Shannon, S.
Staniford, and N. Weaver. The spread of the
sapphire/slammer worm. Current on-line (June
2003):
http://www.caida.org/outreach/papers/2003/sapphire/
sapphire.html. Technical
report, 2003.
[23] D. Moore, G. Voelker, and S. Savage. Internet
quarantine:requirements for containing self-
propagating code. In Proceedings of the 2003 IEEE
Infocom Conference, San Francisco California,
USA, April 2003.
[24] Microsoft security bulletin ms03-026. Current on-line
(August 2003):
http://www.microsoft.com/technet/treeview/default.as
p? url=/technet/security/bulletin/ms03-026.asp.
[25] P. Neumann and P. Porras. Experience with
EMERALD to date. In First USENIX Workshop on
Intrusion Detection and Network Monitoring, pages
73–80, Santa Clara, California, April 1999.
[26] Current on-line (June 2003):
http://www.incidents.org/react/nimda.pdf.
[27] Current on-line (June 2003): http://www.f-
secure.com/v-descs/nimda.shtml.
[28] D. Polla, J. McConnell, T. Johnson, J. Marconi, D.
Tobin, and D. Frincke. A framework for cooperative
intrusion detection. In Proceedings of the 21st
National Information Systems Security Conference,
pages 361–373, October 1998.
[29] P. Porras and P. Neumann. EMERALD: Event
monitoring enabling responses to anomalous live
disturbances. In Proceedings of the National
Information Systems Security Conference, October
1997.
[30] M.O. Rabin. Efficient dispersal of information for
security, load balancing and fault tolerance. Journal
of the ACM, 36(2):335–348, April 1989.
[31] Code Red. Current on-line (June 2003):
http://www.eeye.com/html/research/advisories/al200
10717.html.
[32] A. Serjantov. Anonymizing censorship resistant
systems. In Proceedings of the 1st International
Workshop on Peer-to-Peer Systems (IPTPS ’02), MIT
Faculty Club, Cambridge, MA, USA, March 2002.
[33] A. Shamir. How to share a secret. Communications
of the ACM, 22:612–613, November 1979.
[34] S. Snapp, J. Brentano, G. Dias, T. Goan, T.
Heberlein, C. Ho, K. Levitt, B. Mukherjee, S. Smaha,
T. Grance, D. Teal, and D. Mansur. DIDS
(distributed intrusion detection system) - motivation,
architecture, and an early prototype. In
Proceedings of the 14th National Computer Security
Conference, pages 167–176, Washington, DC, 1991.
[35] E. Spafford and D. Zamboni. Intrusion detection
using autonomous agents. Computer Networks,
(34):547–570, October 2000.
[36] D. Spinellis. Outwit: Unix tool-based programming
meets the windows world. In Proceedings of the
USENIX 2000 Technical Conference, pages 149–
158, San Diego, CA, USA, June 2000.
[37] D. Spinellis. Reliable identification of bounded-length
viruses is np-complete. IEEE Transactions on
Information Theory, 49(1):280–284, January 2003.
[38] S. Staniford, V. Paxson, and N. Weaver. How to own
the internet in your spare time. In Proceedings of the
11th USENIX Security Symposium, 2002.
[39] S. Staniford-Chen, S. Cheung, R. Crawford, M.
Dilger, J. Frank, J. Hoagland, K. Levitt, C. Wee, R.
Yip, and D. Zerkle. GrIDS – A graph-based
intrusion detection system for large networks. In
Proceedings of the 19th National Information
Systems Security Conference, 1996.
[40] VBS.Gnutella. Current on-line (June 2003):
http://service1.symantec.com/sarc/sarc.nsf/html/vbs.
gnutella.html.
[41] VBS.Gnutella. Current on-line (June 2003):
http://vil.nai.com/vil/content/v 98666.html.
[42] G. Vert, J. McConnell, and D. Frincke. A visual
mathematical model for intrusion detection. In
Proceedings of the 21st National Information
Systems Security Conference, pages 329–337,
October 1998.
[43] C. Wang, J.C. Knight, and M.C. Elder. On computer
viral infection and the effect of immunization. In
Annual Computer Security Applications Conference
(AC-SAC), pages 246–256, December 2000.
[44] V. Yegneswaren, P. Barford, and J. Ullrich. Internet
intrusions: Global characteristics and prevalence. In
Proceedings of ACM SIGMETRICS, June 2003.
[45] R.L. Ziegler. Linux Firewalls. New Riders
Publishing, Indianapolis IN, USA., 2002.

Authors Profile

Dr. G. Srinivasa Rao, M.Tech, Ph.D, is a Sr. Asst. Professor
with four years of industrial experience and over 8 years of
teaching experience at GITAM University, where he has
handled courses for B.Tech and M.Tech. His research areas
include Computer Networks and Data Communications. He
has published 6 papers in various National and International
Conferences and Journals.

Dr. G. Appa Rao, M.Tech., M.B.A., Ph.D. in Computer
Science and Engineering from Andhra University, has over
12 years of teaching experience at GITAM University, where
he has handled courses for B.Tech and M.Tech. His research
areas include Data Mining and AI. He has published 8 papers
in various National and International Conferences and
Journals.

Mrs. S. Venkata Lakshmi holds an M.Tech in Information
Technology from Andhra University and is an Asst. Prof. at
GITAM University. She has over 2 years of teaching
experience with GITAM University and Andhra University,
handling courses for B.Tech and M.C.A., and 2 years of
industry experience as a software engineer. She has published
2 papers in various International Conferences and Journals.

Mr. D. Veerabhadra Rao, M.Tech. in Information
Technology, has over 7 years of teaching experience at
GITAM University, where he has handled courses for B.Tech
and M.Tech. He has published one research paper in an
international journal and one in a conference.


B. Venkateswar Reddy, M.Sc (Maths) from Osmania
University and M.Tech (CS) from Satyabhama University,
has one year of teaching experience, having handled courses
for B.Tech and M.Tech at CVSR Engineering College,
Hyderabad.




P. Venkateswara Rao, M.Sc (Physics) from Acharya
Nagarjuna University, is pursuing an M.Tech (CST) at
GITAM University.





K. Sanjeevulu, M.Sc (Maths) from Osmania University, is
pursuing an M.Tech (CST) at GITAM University.




New Power Aware Energy Adaptive Protocol with
Hierarchical Clustering for WSN

Mehdi Golsorkhtabar1, Mehdi Hosinzadeh2, Mir Javad Heydari1, Saeed Rasouli1

1 Islamic Azad University, Tabriz Branch, Department of Computer Engineering,
Tabriz, Iran
{m.golsorkhtabar;m.heydari;s.rasouli}@iaut.ac.ir

2 Science and Research Branch, Islamic Azad University,
Tehran, Iran
[email protected]

Abstract: In recent years, progress in wireless communication
has made possible the development of low cost wireless sensor
networks. Clustering sensor nodes is an effective topology
control approach. In this paper, we propose a new routing
algorithm which can increase the network lifetime. We assume
that each node can estimate its residual energy, and a new
clustering method is then proposed to increase network
lifetime. This assumption is shared by many other proposed
routing algorithms for WSN and is a plausible one. In the new
algorithm, a predefined number of nodes with the maximum
residual energy are first selected as cluster-heads based on a
special threshold value, and the members of each cluster are
then determined based on the distances between the node and
the cluster head and between the cluster head and the base
station. Finally, the simulation results show that our method
achieves a longer lifespan and reduces energy consumption in
wireless sensor networks.
Keywords: wireless sensor networks; clustering algorithm;
energy adaptive; network lifetime
1. Introduction
A Wireless Sensor Network (WSN) comprises micro-sensor
nodes, which are usually battery-operated sensing devices
with limited energy resources. In most cases, replacing the
batteries is not an option [1-2].
Wireless sensor networks are usually heterogeneous, and
protocols should be designed for typical heterogeneous
wireless sensor networks. Most clustering algorithms, such
as [3-4], are designed for homogeneous wireless sensor
networks and are not optimized when the network's nodes
are heterogeneous.
In this paper, we propose and evaluate PEAP (a new Power
aware Energy adaptive Protocol with hierarchical clustering
for WSN). In the considered wireless sensor network, nodes
send sensing information to a cluster-head (CH), and the CH
then transmits the data to the base station. Clustering
algorithms of this kind periodically elect cluster-heads; the
cluster-heads then aggregate the data of their cluster nodes
and send it to the base station. We assume that the nodes of
the network are spread heterogeneously, that initially all
nodes have equal battery power, that all sensor nodes have
limited energy, that the base station is fixed and not located
between the sensor nodes, and that most nodes are static with
only a few mobile.
2. Related work
In recent years, many protocols and algorithms for clustering
have been proposed. They usually differ in CH selection and
cluster organization. Generally, some of these clustering
schemes apply to homogeneous networks while others apply
to heterogeneous networks. Moreover, most of the currently
popular clustering algorithms, such as LEACH [3], PEGASIS
[4] and HEED [5], are not fault tolerant.
LEACH is the most popular clustering algorithm, and many
CH selection algorithms are based on LEACH’s architecture.
The algorithm of [6] elects the CHs according to the energy
remaining in each node; we call this clustering protocol
LEACH-E. In the rest of this section, we review the LEACH
algorithm and discuss its limitations, because LEACH is very
popular among wireless sensor network clustering protocols.

2.1 LEACH (Low Energy Adaptive Clustering Hierarchy)
In the LEACH protocol, energy efficiency is achieved by
having nodes serve as CH in turn, thereby distributing the
total network energy load impartially across nodes, lowering
energy consumption and increasing network lifespan. CH
election depends on the total number of CHs in the network
and on how many times each node has been CH so far. The
principal scenarios for this protocol are:
• The base station is fixed and located far from the sensor
nodes.
• All the nodes in the wireless sensor network have the
same initial battery power and are homogeneous in all other
ways.
In the first phase, the algorithm chooses nodes stochastically,
on the following principle: all sensor nodes compute a value
T(n) according to the following formula at the beginning of
each round:

T(n) = p / (1 − p · (r mod (1/p)))    if n ∈ G
T(n) = 0                              otherwise          (1)
where P describes the desired percentage of CHs (e.g.,
P = 0.05), r is the number of the current round, and G is the
set of nodes that have not been CH in the last 1/P rounds.
For each node, a random number between 0 and 1
is generated. If this random number is less than T(n), the
sensor node becomes a cluster head in this round and
broadcasts an advertisement message to the other sensor
nodes near it.
Each node that has elected itself cluster head for the current
round broadcasts an advertisement message to the rest of the
nodes in the network. All the non-cluster-head nodes, after
receiving this advertisement message, decide on the cluster to
which they will belong for this round. This decision is based
on the received signal strength of the advertisement messages.
After the cluster head receives all the messages from the
nodes that would like to be included in the cluster, and based
on the number of nodes in the cluster, the cluster head creates
a TDMA schedule and assigns each node a time slot in which
it can transmit [3].
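To make equation (1) concrete, the sketch below evaluates the LEACH threshold and the random self-election test; the values of P and r in the usage lines are examples only.

```python
import random

def leach_threshold(p, r, in_G):
    """T(n) of equation (1): p / (1 - p * (r mod 1/p)) for nodes that
    have not been CH in the last 1/p rounds (the set G), else 0."""
    if not in_G:
        return 0.0
    return p / (1 - p * (r % round(1 / p)))

def elects_itself(p, r, in_G):
    """A node becomes CH this round if a uniform draw falls below T(n)."""
    return random.random() < leach_threshold(p, r, in_G)

# Usage: with P = 0.05 the threshold grows over each block of 20 rounds,
# so every node in G is elected once per block.
print(leach_threshold(0.05, 0, True))   # 0.05 at the start of a block
print(leach_threshold(0.05, 19, True))  # 1.0: remaining nodes must serve
```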
Despite the many advantages of the LEACH protocol for
cluster organization, CH selection and increasing network
lifetime, there are a few features the protocol does not
support. LEACH assumes node energies are homogeneous;
in a real wireless sensor network scenario, sensor node
energies are spread in a heterogeneous manner.
3. The New Protocol
In this section, the details of PEAP are introduced. The
major application of a wireless sensor network is the
monitoring of a remote environment. Data from individual
nodes are usually not very important; since the data of
sensor nodes are correlated with those of their neighbor
nodes, data aggregation can increase the reliability of the
measured parameter and decrease the amount of traffic to the
base station. PEAP uses this observation to increase the efficiency
of the network. In order to develop the PEAP, some
assumptions are made about sensor nodes and the
underlying network model. For sensor nodes, it is assumed
that all nodes are able to transmit with enough power to
reach the BS if needed, that the nodes can adjust the amount
of transmit power, and that each node can support different
Medium Access Control (MAC) protocols and perform
signal processing functions. These assumptions are
reasonable due to the technological advances in radio
hardware and low-power computing [3]. For the network, it
is assumed that nodes have always data to send to the end
user and the nodes located close to each other have
correlated data.
As in LEACH, in the first phase PEAP chooses nodes
stochastically: all sensor nodes compute a value T(n)
according to the following formula at the beginning of each
round.
T(n) = [p / (1 − p · (r mod (1/p)))] · [E_n_current/E_n_max + (r_s div (1/p)) · (1 − E_n_current/E_n_max)]        (2)

where P is the desired percentage of CHs (e.g., P = 0.05) in
the current round, E_n_current is the current energy and
E_n_max the initial energy of the node, and r_s is the number
of consecutive rounds in which a node has not been CH. Thus
the chance of node n to become cluster head increases
because of the higher threshold, and a possible blockade of
the network is avoided. Additionally, r_s is reset to 0 when a
node becomes CH. Thus, we ensure that data is transmitted
to the base station as long as nodes are alive [6].
Our clustering model is based on a confidence value
associated with the broadcasts from CHs. The confidence
value of a CH is a function of three parameters: (1) the
distance between the CH and the node, (2) the CH's current
battery power, and (3) the number of nodes that are already
members of this CH. Basically, our model first checks
whether, with its current battery power, the CH would be
able to support the current members at the maximum data
broadcast rate. A node decides to join a CH if the head can
still support the node with its remaining power. The
confidence value is given by

C_v(i) = B_p / (C_m · D_c)

where B_p is the battery power of the given node, C_m is the
number of nodes already a member of the given CH, and D_c
is the distance between the CH and the node.
Like LEACH, in order to reduce the probability of
collision among join-REQ messages during the setup phase,
CSMA (Carrier Sense Multiple Access) is utilized as the
MAC layer protocol. When a cluster head has data to send,
it must sense the channel to see if anyone else is
transmitting using the BS spreading code. If so, the cluster
head waits to transmit the data. Otherwise, the cluster head
sends the data using the BS spreading code [3].
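The sketch below shows how a node might rank advertising CHs by the confidence value defined above; the guard for an empty cluster and all numeric values are our assumptions, not part of PEAP's PHP simulator.

```python
import math

def confidence(battery_power, members, distance):
    """C_v(i) = B_p / (C_m * D_c): higher battery, fewer members and a
    shorter distance all raise the confidence. A CH with no members yet
    is treated as having one to avoid division by zero (our assumption)."""
    return battery_power / (max(members, 1) * distance)

def choose_cluster_head(node_xy, heads):
    """heads: list of (x, y, battery_joules, member_count) tuples.
    Join the CH with the highest confidence value."""
    def score(h):
        d = math.dist(node_xy, (h[0], h[1]))
        return confidence(h[2], h[3], d)
    return max(heads, key=score)

# Usage: a nearer but heavily loaded CH loses to a farther, lighter one.
print(choose_cluster_head((0, 0), [(10, 0, 2.0, 9), (25, 0, 2.5, 2)]))
```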
4. Simulation Results
In order to evaluate the performance of the PEAP protocol,
a simulator specific to the needs of our model was coded
in PHP with Apache HTTP server version 2.2, using
PHP/SWF Charts for its graphical needs.
We assume a simple model for the radio hardware energy
dissipation where the transmitter dissipates energy to run
the radio electronics and the power amplifier, and the
receiver dissipates energy to run the radio electronics, as
shown in Fig. 1. For the experiments described here, both
the free space (d^2 power loss) and the multipath fading (d^4
power loss) channel models were used, depending on the
distance between the transmitter and receiver [7]. Power
control can be used to invert this loss by appropriately
setting the power amplifier. If the distance is less than a
threshold d_0, the free space (fs) model is used; otherwise, the
multipath (mp) model is used. Thus, to transmit an l-bit
message over a distance d, the radio expends
E_Tx(l, d) = E_Tx-elec(l) + E_Tx-amp(l, d)
           = l·E_elec + l·ε_fs·d^2,    d < d_0
           = l·E_elec + l·ε_mp·d^4,    d ≥ d_0          (3)
And to receive this message, the radio expends:
E_Rx(l) = E_Rx-elec(l) = l·E_elec          (4)
The electronics energy, E_elec, depends on factors such as
the digital coding, modulation, filtering, and spreading of
the signal, whereas the amplifier energy, ε_fs·d^2 or ε_mp·d^4,
depends on the distance to the receiver and the acceptable
bit-error rate.
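A small sketch evaluating equations (3) and (4) with the Table 1 parameter values follows; the message size and distances in the usage lines are examples.

```python
E_ELEC = 50e-9       # J/bit, electronics energy (Table 1)
EPS_FS = 10e-12      # J/bit/m^2, free-space amplifier (Table 1)
EPS_MP = 0.0013e-12  # J/bit/m^4, multipath amplifier (Table 1)
D0 = 70.0            # m, crossover distance (Table 1)

def e_tx(l_bits, d):
    """Energy to transmit l bits over distance d, equation (3)."""
    if d < D0:
        return l_bits * E_ELEC + l_bits * EPS_FS * d ** 2
    return l_bits * E_ELEC + l_bits * EPS_MP * d ** 4

def e_rx(l_bits):
    """Energy to receive l bits, equation (4)."""
    return l_bits * E_ELEC

# Usage: one 8192-bit message in free space (50 m) and multipath (150 m).
print(e_tx(8192, 50.0))   # ~6.1e-4 J
print(e_tx(8192, 150.0))  # ~5.8e-3 J
print(e_rx(8192))         # ~4.1e-4 J
```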
We consider a wireless sensor network with N = 100 nodes
randomly distributed in a 300 m × 300 m field. We assume
the base station to be fixed and located at the origin (0, 0) of
the coordinate system. The radio parameters used in our
simulations are shown in Table 1.

Figure 1. Radio energy dissipation model
Table 1: Parameters used in simulations

Parameter       Value
E_elec          50 nJ/bit
ε_fs            10 pJ/bit/m^2
ε_mp            0.0013 pJ/bit/m^4
E_0             3 J
E_DA            5 nJ/bit/message
d_0             70 m
Message size    8192 bits

Fig. 2 presents the energy consumption of the clustering
protocols when the number of nodes deployed in the network
is 100. The x-axis indicates the number of rounds while the
y-axis indicates the mean residual energy of each node. The
results demonstrate that the energy consumption of our
algorithm is generally smaller than that of LEACH and
LEACH-E.

Figure 2. Total network energy (N=100)

Fig. 3 presents the number of dead nodes under each
clustering protocol. This result is closely related to the
network lifetime of the wireless sensor networks.

Figure 3. Total nodes transmitting (N=100).
5. Conclusion
In wireless sensor networks, energy consumption and
network lifetime are important issues in the research of
routing protocols. This paper introduces PEAP, a new
power aware energy adaptive protocol with hierarchical
clustering for wireless sensor networks that distributes load
among the more powerful nodes. Compared to the existing
clustering protocols, PEAP has better performance in CH
election and forms an adaptive, power-efficient clustering
hierarchy. The simulation results showed that PEAP
significantly improves the lifespan and the energy
consumption of wireless sensor networks compared with
existing clustering protocols. Further directions of this study
will deal with clustered sensor networks using more than
three parameters in the threshold calculation and more
parameters in the confidence value calculation.
References
[1] C. Buratti, A. Conti, D. Dardari, R. Verdone, “An
Overview on Wireless Sensor Networks Technology
and Evolution” , Sensors 2009, 9, 6869-6896;
doi:10.3390/s90906869.
[2] I. F. Akyildiz, W. Su, and Y. Sankarasubramaniam, “A
survey on sensor networks”, IEEE Communications
Magazine, 2002, 40(8), pp.102-114.
[3] W.R. Heinzelman, A.P. Chandrakasan, H.
Balakrishnan, “An application-specific protocol
architecture for wireless microsensor net- works”,
IEEE Transactions on Wireless Communications 1
(4) (2002) 660–670.
[4] S. Lindsey, C.S. Raghavenda, “PEGASIS: power
efficient gathering in sensor information systems”,
Proceeding of the IEEE Aerospace Conference, Big
Sky, Montana, March 2002.
[5] O. Younis, S. Fahmy, “HEED: A hybrid, energy-
efficient, distributed clustering approach for ad hoc
sensor networks”, IEEE Transactions on Mobile
Computing 3 (4) (2004) 660–669.
[6] M.J. Handy, M. Haase, D. Timmermann,“Low energy
clustering hierarchy with deterministic cluster head
selection”, Proceedings of IEEE MWCN, 2002.
[7] T. Rappaport, Wireless Communications: Principles &
Practice. Englewood Cliffs, NJ: Prentice-Hall, 1996.


Hexagonal Coverage by Mobile Sensor Nodes

G.N. Purohit1, Seema Verma2 and Megha Sharma3

1 Department of Mathematics, AIM & ACT, Banasthali University,
Banasthali-304022
[email protected]

2 Department of Electronics, AIM & ACT, Banasthali University,
Banasthali-304022
[email protected]

3 Department of Computer Science, AIM & ACT, Banasthali University,
Banasthali-304022
[email protected]


Abstract: Before the advent of mobile sensor nodes, static
nodes were used to provide coverage, with the focus on
positioning sensors to achieve coverage. Mobile sensor nodes
provide a dynamic approach to coverage: targets that might
never be detected in a stationary sensor network can be
detected by moving sensors. Mobile sensors can compensate
for a lack of sensors and improve network coverage. Here, we
focus on coverage of a rectangular region which is divided into
regular hexagons. The region is covered with mobile sensor
nodes (MSNs), where a group of four MSNs position themselves
on four vertices of a hexagon. We can employ N ≥ 4 MSNs for
this purpose; basically only 4 MSNs are needed, but extras are
employed in case any MSN fails.

Key Words: coverage, mobile sensor nodes, energy
efficiency, hexagonal coverage.
1. Introduction
The coverage problem is a fundamental issue in WSNs, and
mainly concerns a fundamental question: how well is a
sensor field observed by the deployed sensors? To optimize
network coverage, the traditional approach is to deploy a
large number of stationary sensor nodes and then to schedule
their sensing activities in an efficient way [6]. Recently,
mobile sensor nodes have received much attention, since
network performance can be greatly improved by using just
a few mobile nodes. Mobile sensor nodes have the movement
capability to collaboratively reinstate network coverage.
They are extremely valuable in situations where traditional
deployment mechanisms fail or are not suitable, for example
in a hostile environment where sensors cannot be manually
deployed or air-dropped. It is well known that mobility
increases the capacity of networks: it reduces the number of
relays for routing in mobile ad hoc networks (MANETs),
prolongs the lifespan of wireless sensor networks (WSNs),
and ensures network connectivity in delay-tolerant networks
(DTNs) by using mobile nodes to connect different parts of a
disconnected network. In this paper we present the Mobile
Traversal Algorithm (MTA), where the region of interest
(ROI), considered as a rectangular area, is covered by
regular hexagons. The MSNs are placed at four vertices of a
hexagon and move in a systematic manner over the
rectangular and triangular parts of the hexagons. Previous
work on mobile traversal has been done using
triangulation-based coverage [3], but the hexagonal approach
proves to be more efficient, as the total distance traveled and
the time taken are comparatively less. Deploying a good
topology is also beneficial to management and energy
saving, and the hexagonal topology provides 2-coverage, as
we wish to ensure optimal and energy-efficient coverage. A
deterministic energy-efficient protocol for sensor networks,
focusing on energy-efficient coverage of the ROI, is used in
[1]. Energy-efficient distributed algorithms for sensor target
coverage based on properties of an optimal schedule are
included in [2]. Power-efficient organization of wireless
sensor networks is done in [4]. A coverage-preserving node
scheduling scheme for large wireless sensor networks is
discussed in [5].

The proposed objectives of our approach are:

(i) Covering the sensing area with a minimum number of
sensors, N ≥ 4, while providing a highly reliable and long
system lifetime, which is the main design challenge in
sensor networks.
(ii) Upon a failure, the remaining (N − 4) MSNs efficiently
complete the coverage of the targeted area; otherwise they
remain in sleeping mode.

We assume the following:
(i) The sensing range of a sensor x is a disc of radius r
centered at x, defined by S_x(r) = {a ∈ R² : |x − a| ≤ r},
where |x − a| stands for the Euclidean distance between x
and a.
(ii) A location in region A is said to be covered by sensor x if
it is within x's sensing range. A location in A is said to be
covered if it is within at least K sensors' sensing range.
(iii) All sensors are considered to have identical
sensing range.
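These definitions translate directly into a small membership check, sketched below with illustrative sensor positions; K = 2 matches the 2-coverage that the hexagonal layout is designed to provide.

```python
import math

def covered_by(point, sensor, r):
    """A point a is covered by sensor x when |x - a| <= r."""
    return math.dist(point, sensor) <= r

def k_covered(point, sensors, r, k):
    """A point is K-covered when at least k sensors cover it."""
    return sum(covered_by(point, s, r) for s in sensors) >= k

# Usage: three sensors of range r = 50 around a test point.
sensors = [(0.0, 0.0), (60.0, 0.0), (30.0, 50.0)]
print(k_covered((30.0, 20.0), sensors, r=50.0, k=2))  # True
```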

This paper is organized as follows. The problem is
formulated in Section 2. In Section 3 the MTA is presented,
and the total traveling distance for various side lengths ‘a’ of
the hexagons is determined. In Section 4 we conclude the paper.

2. Problem Formulation
We consider a rectangular field of length ‘L’ and breadth
‘B’ which is divided into regular hexagons of side a. All
sensors have a uniform sensing range r, and the side a of the
regular hexagon is taken to be less than r. The hexagon is
further subdivided into three parts: two isosceles triangles
and a rectangle, as shown in Fig. 1. The whole rectangular
area is covered by m*n hexagons.


Figure 1. Division of rectangular field into regular
hexagons

However, the area is over-covered by some hexagons
covering the perimeter of the ROI, shown as the shaded area
in Figure 1. The length (L) and breadth (B) of the targeted
region are related to m, n and the side a of the hexagon by
the following relations:

L = a(m − 1)√3,   B = a(2n − 1)

Thus there are n rows of hexagons and in each row there are
m hexagons.

The rows are numbered 1 to n and the columns are
numbered 1 to m. For the sake of convenience we consider
the center of the top-most left hexagon as the origin of
reference, the x-axis along the horizontal line and the y-axis
as the vertical line downwards.
The coordinate positions of the centers of the hexagons
dividing the rectangular field are represented as
(x, y) = (a√3·i, a·j), which represents the position of the
centre of the hexagon in the i-th row and j-th column, where
1 ≤ i ≤ n and 1 ≤ j ≤ m.
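The relations above can be turned into a short sketch that recovers m and n from the field dimensions and enumerates the hexagon centers using the stated convention; rounding m and n up to whole hexagons is our reading of the over-covered perimeter.

```python
import math

def grid_dimensions(L, B, a):
    """Smallest m, n for which the hexagon grid covers an L x B field,
    from L = a(m - 1)sqrt(3) and B = a(2n - 1), rounded up."""
    m = math.ceil(L / (a * math.sqrt(3))) + 1
    n = math.ceil((B / a + 1) / 2)
    return m, n

def hexagon_centers(m, n, a):
    """Centers (x, y) = (a*sqrt(3)*i, a*j) for the hexagon in the i-th
    row and j-th column, 1 <= i <= n, 1 <= j <= m, per the text."""
    return [(a * math.sqrt(3) * i, a * j)
            for i in range(1, n + 1) for j in range(1, m + 1)]

# Usage: the 4500 x 2000 field of Section 3.2.1 with side a = 50.
print(grid_dimensions(4500.0, 2000.0, 50.0))  # (53, 21)
```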

We can see that each point in the ROI is covered by at least
2 sensors, as depicted in the figure below.

Figure 2. K-coverage of a point
3. MTA (Mobile Traversal Algorithm)
The side of the regular hexagons is taken to be less than or
equal to the sensing range of the sensors, i.e., a ≤ r. Given
N ≥ 4 MSNs, we take four sensors, which are placed at
a(0, 0), b(0, a), c(a√3/2, a), d(0, a√3/2), and the sensors
move as indicated in Figure 3. Considering only the
hexagons in the ROI, i.e., the rectangular region, the 4 MSNs
a, b, c, d move towards the right until the last column is
reached.



Figure 3. Traversal schemes of MSNs
Out of these four MSNs, only three cover the triangles
below the rectangles of the hexagons in the first row. After
reaching the leftmost triangle of the first row, these three
MSNs cover the top triangles of the hexagons in the second
row, until they reach the rightmost top triangle of the second
row. Then these three MSNs, along with the remaining
MSN, position themselves on the four vertices of the
rightmost half rectangle of the second row and cover all the
rectangles as done earlier in the first row. This process
continues until the whole unshaded area in Figure 3 is
covered. The distances traveled by these MSNs covering
rectangles and triangles in the indicated directions are
detailed in the next paragraph. The time taken is
proportional to the distance covered by the MSNs.

In the first movement only two MSNs ‘a’ and ‘b’ together
travel a distance of 3√3a to cover the first full rectangle,
Fig. 4(a). To cover the remaining full rectangles the 4 MSNs
together move a total distance of 4√3a in each move, to
cover a rectangle, Fig. 4(b). To cover the last rightmost top
half-rectangle the MSNs ‘a’ and ‘b’ together travel a
distance of 3√3a, Fig. 4(c). To cover the right-angled
triangle (rightmost half triangle in the first row), two
MSNs (namely ‘d’ and ‘c’ in the figure) out of these 4 MSNs
are kept stationary and the other two move along the
indicated directions, traveling together a distance 2a,
Fig. 4(d). To cover the isosceles triangle one MSN (namely
‘c’ in the figure) moves along the indicated direction a
distance of (√13/2)a, Fig. 4(e). To cover the next isosceles
triangles MSN ‘b’ travels a distance of √7a, Fig. 4(f). To
cover the right-angled triangle (rightmost half triangle) in
the next row MSNs ‘a’, ‘b’, ‘c’ travel a distance of 3a,
Fig. 4(g). To cover the rightmost half-rectangle in the next
row the four MSNs (namely ‘a’, ‘b’, ‘c’, ‘d’ in the figure)
travel a total distance of 5a, Fig. 4(h). This way the MSNs
cover the rectangular region (ROI) up to m columns and n
rows.

Figure 4. MSN movements between hexagons

3.1 MTA with failure tolerance
In order to provide failure tolerance to the MTA described
above, we add a few extra MSNs which, in the event of the
failure of a particular MSN, would occupy its position;
otherwise these extra MSNs stay in sleep mode.


Figure 5. Extra sensors to provide coverage in case of
sensor failure

If a sensor fails at, say, the i-th row and j-th column, then
the sensor staying at the corner nearest to the coordinate
position (i, j) will move to cover that point.
3.2 Total traveling Distance
Based on the number of moves and the individual traveling
distances of the MSNs, the total traveling distance, D, is
calculated as:

D = n·[6√3·a + 4√3·a·(m − 2)] + (n − 1)·[2a + 3a + 5a + (√13/2 + √7)·a·(m − 1)]        (1)
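Equation (1) is heavily garbled in the source, so the grouping below (per-row rectangle sweeps plus per-transition triangle and half-rectangle moves) is our reconstruction from the movement description and should be treated as an assumption; with m = 53, n = 21 and a = 50 it lands near, but not exactly on, the Table 1 value for a = 50.

```python
import math

def total_distance(m, n, a):
    """Evaluate equation (1) as reconstructed above."""
    # n rows of rectangles: 3*sqrt(3)a to enter, (m - 2) moves of
    # 4*sqrt(3)a, and 3*sqrt(3)a for the final half-rectangle per row.
    rect = n * (6 * math.sqrt(3) * a + 4 * math.sqrt(3) * a * (m - 2))
    # (n - 1) row transitions: 2a + 3a + 5a plus the isosceles-triangle
    # moves of (sqrt(13)/2)a and sqrt(7)a across the row.
    tri = (n - 1) * (10 * a
                     + (math.sqrt(13) / 2 + math.sqrt(7)) * a * (m - 1))
    return rect + tri

print(total_distance(53, 21, 50.0))  # ~6.23e5 for a = 50
```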

3.2.1 Total traveling distance for different side lengths of
the hexagon

To compare our results with those of [3], we have
considered the ROI as a rectangular plot of size 4500*2000
units of measure. The side of the hexagon (taken less than
the sensing range) is considered at 45, 50, 55, 60 and 65
units, and the total traveling distance of the MSNs is
determined for each. The traveling distance of the MSNs
decreases as the length a of the side is increased; i.e., the
longer the side of the hexagon, the shorter the distance the
MSNs must travel. The data is graphically represented in
Figure 5 and in tabular form in Table 1.


Figure 5. Total distance covered versus side length of the hexagon

Table 1: Total distance covered by MSNs for different side
lengths of the hexagon

Length of side of hexagon (a)    Total traveling distance (D)
45                               690677.78
50                               624499.33
55                               569478.6
60                               536880.36
65                               465344.62


In [3] the authors have taken an equilateral triangle with
sides of length a = 50 units; the distance traveled by the
MSNs covering the rectangular region, starting at arbitrary
points in the region, varies from 7.38×10^5 to 7.54×10^5,
which is much more than the 6.91×10^5 obtained in our case
for the regular hexagon with sides of length a = 50.
4. Conclusions
In this paper, we presented an MTA for the coverage of a rectangular field by N ≥ 4 mobile sensors. Though only 4 sensors are sufficient to cover the region, in case of failure extra MSNs kept in reserve/sleep mode are activated; thus the system provides reliable coverage. We also observe that as the side a of the hexagon is increased, the total traveling distance covered by the MSNs decreases. The hexagonal topology provides efficient and reliable coverage, as each point in the ROI is covered by at least 2 sensors.
References
[1] A. Dhawan, S. K. Prasad, "Energy efficient distributed algorithm for sensor target coverage based on properties of an optimal schedule," Proceedings of the International Conference on High Performance Computing, 2008.
[2] A. Khan, C. Qiao, S. K. Tripathi, "Mobile Traversal Schemes based on Triangulation Coverage," Mobile Netw Appl, Vol. 12, pp. 422-437, 2007.
[3] B. Wang, H. B. Lim, D. Ma, "A survey of movement strategies for improving network coverage in wireless sensor networks," Computer Communications, Vol. 32, pp. 1427-1436, 2009.
[4] D. Brinza, Al. Zelikovsky, “Deeps: Deterministic
energy- efficient protocol for sensor networks,”
Proceedings of the International Workshop on Self
Assembling Wireless Networks (SAWN), pp. 261–
266, 2006.
[5] D. Tian, N. D. Georganas, “A coverage-preserving
node scheduling scheme for large wireless sensor
networks,” In WSN Proceedings of the 1st ACM
international workshop on Wireless sensor networks
and applications, New York, NY, USA, ACM, pp. 32–
41, 2002.
[6] S. Slijepcevic, M. Potkonjak, “Power efficient
organization of wireless sensor networks,” IEEE
International Conference on Communications (ICC),
Vol. 2, pp. 472– 476, 2001.

Authors Profile

Megha Sharma received the B.C.A and
M.C.A degree from I.G.N.O.U in 2004 and
2008, respectively. She is currently working towards a Ph.D. degree in Computer Science at Banasthali University, Rajasthan. Her
research interests include wireless sensor
networks with a focus on the coverage of
wireless sensor networks.

Prof. G. N. Purohit is a Professor in
Department of Mathematics & Statistics at
Banasthali University (Rajasthan). Before
joining Banasthali University, he was
Professor and Head of the Department of
Mathematics, University of Rajasthan, Jaipur.
He had been Chief-editor of a research journal and regular
reviewer of many journals. His present interest is in O.R., Discrete
Mathematics and Communication networks. He has published
around 40 research papers in various journals.



Mrs. (Dr.) Seema Verma obtained her M.Tech and Ph.D. degrees from Banasthali University in 2003 and 2006 respectively. She is an Associate Professor of Electronics. Her research areas are VLSI design and communication networks. She has published around 18 research papers in various journals.


Powers of a Graph and Associated Graph
Labeling

G. N. Purohit, Seema Verma and Usha Sharma

Centre for Mathematical Sciences,
Banasthali University, Rajasthan 304022
[email protected]


Abstract: Graph coloring is a classical topic in graph theory, and vertex coloring of a graph is closely associated with channel assignment in wireless (sensor) networks. The Unit Disk graph is a suitable model for connectivity in sensor networks. This paper is concerned with the power of a graph in general and the power of a Unit Disk graph in particular. L(1,1,1)-labeling is used to avoid interference between communicating channels. We develop an L(1,1,1)-labeling of a UD graph, making use of a cellular partition algorithm. We prove that the cube of any UD graph can be properly colored by at most 25ω colors, where ω is the maximum clique size.

Keywords: Graph Labeling, wireless network.

1. Introduction

A graph G = (V, E) consists of a set V of vertices and a set E of edges; each edge, i.e. element of E, is an unordered pair of elements of V. Among the many graphs induced from a graph, the power graph finds a special place. Powers of a graph have been considered in [1]. The square of a graph is a graph on the same vertex set in which vertices at distance 2 are connected by an edge. The cube of a graph is also the graph on the same set of vertices; additionally, there is an edge between two vertices whenever they are at distance at most 3.

Graph coloring is a classical problem in graph theory, and proper coloring of a graph means assigning distinct colors (labels) to adjacent vertices. The minimum number of colors required to color a graph G properly is called the chromatic number of G, denoted χ(G). A lot of research has been done on the chromatic number of graphs; χ is bounded by ω ≤ χ ≤ ∆ + 1 [5], where ω is the maximum clique size in the graph and ∆ is the maximum degree of the graph. The chromatic number of powers of a graph has been studied in [1].

Besides proper coloring there are many types of coloring (labeling) of vertices. One such generalization is L(p,q)-labeling, in which the labels at adjacent vertices should differ by at least p and the labels at vertices at distance 2 should differ by at least q [6]. The L(p,q)-labeling problem has attracted the attention of many researchers in the past [7]. Particular cases of L(p,q)-labeling, (i) L(1,1)-labeling and (ii) L(2,1)-labeling, have been defined and a lot of research has been done in this area. L(1,1)-labeling is also known as the distance-two coloring problem and is equivalent to the proper coloring of the square of a graph; [11] includes the labeling of many important graphs. Another generalization of labeling (coloring) is L(h,1,1)-labeling, in which the labels on adjacent vertices differ by at least h and the labels on vertices at distance 2 or 3 are distinct [10]. This concept is applied in the channel assignment problem and in wireless (sensor) networks.

The Unit Disk graph [8] is another class of graph which finds application in modeling a wireless (sensor) network, since the radio coverage range of sensors is based on the Euclidean distance between the nodes. We therefore utilize the concept of Euclidean distance in a graph; this concept has given rise to a new branch termed geometric graph theory. One can extend the concept of powers of graphs to the UD graph to obtain the square and cube of graphs, and also the Euclidean distance two graph [8] and the Euclidean distance three graph. The chromatic number of a UD graph and of the square of a UD graph is considered in [8]. These results are useful in wireless sensor network technology. In this paper we describe some powers of a graph and powers of a unit disk graph, and develop an L(1,1,1)-labeling of a UD graph by using a cellular partition algorithm.

This paper is organized as follows. In Section 2 we provide some auxiliary definitions; in particular we obtain some results related to powers of a cycle and of a complete bipartite graph. In Section 3 we define the Unit Disk graph and its powers, and prove some results for powers of a UD graph. In Section 4 we give the cellular partition algorithm [8]. The main result of the paper is Theorem 4.1, which shows that, using the developed cellular partition algorithm, the cube of any UD graph can be properly colored using 25ω colors, where ω is the maximum clique size. In the last section we give the conclusion.

2. Auxiliary Definitions

2.1 Graph Powers

In this section we consider different powers of a graph, which find application in channel assignment, L(p,q)-coloring of graphs, etc.

2.1.1 Square of a graph (G^2) - The square G^2 of a graph G = (V, E) is the graph whose vertex set is V itself and in which there is an edge between two vertices v_i and v_j if and only if their graph distance (the length of the shortest path between v_i and v_j) in G is at most 2. Examples of a graph and its square are given in the following figures:

Figure 1. (a) Cycle C_6; (b) Square of cycle C_6

2.1.2 Cube of a graph (G^3) - The cube G^3 of a graph G = (V, E) is the graph whose vertex set is V and in which there is an edge between two vertices v_i and v_j if and only if their graph distance in G is at most 3. Examples of a graph and its cube are given in the following figures:

Figure 2. (a) Cycle C_6; (b) Cube of cycle C_6


We can generalize the above definitions as follows:

2.1.3 k-th power of a graph (G^k) - The k-th power G^k of a graph G = (V, E) is the graph whose vertex set is V and in which there is an edge between two vertices v_i and v_j if and only if their graph distance in G is at most k.

As a special case we prove the following results for the cycle C_n and the complete bipartite graph K_{m,n}.



Theorem 2.1 If G = C_n (the cycle with n vertices), then (C_n)^k = K_n for 4 ≤ n ≤ 2k, and (C_n)^k is a 2k-regular graph on n vertices for n ≥ 2k + 1.

Proof: Case (i): Let G = C_n (4 ≤ n ≤ 2k) and let v_i and v_j be any two arbitrary vertices of G. The maximum distance between v_i and v_j can be k in this case, and thus any pair of vertices has graph distance at most k. Thus max {d(v_i, v_j) | ∀ v_i, v_j ∈ V} ≤ k.

From the definition of G^k, two vertices will be adjacent if d(v_i, v_j) ≤ k. Since this condition is satisfied by all pairs of vertices in G, all pairs of vertices will be adjacent in G^k; hence G^k is a complete graph on n vertices and thus (C_n)^k = K_n.



Case (ii): Let G = C_n (n ≥ 2k + 1) and let v_i be an arbitrary vertex of graph G. We have to show that deg(v_i) = 2k for all v_i ∈ C_n. We know that deg(v_i) = 2 for all v_i ∈ C_n. From the definition of G^k, two vertices will be adjacent if the distance between them is at most k.

There are exactly 2k vertices which are at a distance of at most k from v_i. On one side of v_i, the k vertices v_{i+1}, v_{i+2}, v_{i+3}, ..., v_{i+k} are at distance 1, 2, 3, ..., k from v_i respectively. Similarly, on the other side of v_i, the k vertices v_{i-1}, v_{i-2}, v_{i-3}, ..., v_{i-k} are at distance 1, 2, 3, ..., k from v_i respectively. Out of these 2k vertices, v_{i+1} and v_{i-1} are already adjacent to v_i in C_n, and the remaining 2k − 2 vertices become adjacent to v_i in (C_n)^k. Therefore deg(v_i) = 2 + 2k − 2 = 2k. Thus (C_n)^k is a 2k-regular graph on n vertices.

Theorem 2.2 If G = K_{m,n}, then G^k = G^{k-1} = G^{k-2} = ... = G^3 = G^2 = K_{m+n}.





Proof: Let G = K_{m,n} be a bipartite graph. Let V_1 and V_2 be the two partitions of the vertex set V of G, with m and n vertices respectively. Let v_i be an arbitrary vertex of V_1. Then all the vertices of V_2 are at distance 1 from v_i. Moreover, all other vertices of V_1 are at distance 2 from v_i. Since v_i is an arbitrary vertex, this holds for all v_i in V_1 as well as for all vertices in V_2. Thus all pairs of vertices are adjacent in G^2, and G^2 is a complete graph on m + n vertices. G^k (k > 2) cannot add anything beyond K_{m+n}. Thus G^k = G^{k-1} = G^{k-2} = ... = G^3 = G^2 = K_{m+n}.
2.2 Labeling of a graph G (V, E)
2.2.1 L(p,q)-Labeling - For two positive integers p and q, an L(p,q)-labeling of a graph G is a function C: V(G) → N such that |C(v_i) − C(v_j)| ≥ p if vertices v_i and v_j are adjacent, and |C(v_i) − C(v_j)| ≥ q if vertices v_i and v_j are at distance 2.
In particular, L(1,1)-labeling and L(2,1)-labeling are well-known examples of L(p,q)-labeling.

2.2.2 L(1,1)-Labeling - It is also called the proper labeling of a graph G. It is the labeling of the vertices with non-negative integers such that the labels on adjacent vertices differ by at least 1.
2.2.3 L(2,1)-Labeling - It is a labeling of the vertices with non-negative integers such that the labels on adjacent vertices differ by at least 2 and the labels on vertices at distance 2 differ by at least 1.
We can generalize the above definition as follows:
2.2.4 L(p,q,r)-Labeling - For three positive integers p, q and r, an L(p,q,r)-labeling of a graph G is a function C: V(G) → N such that |C(v_i) − C(v_j)| ≥ p if vertices v_i and v_j are adjacent, |C(v_i) − C(v_j)| ≥ q if vertices v_i and v_j are at distance 2, and |C(v_i) − C(v_j)| ≥ r if vertices v_i and v_j are at distance 3.
In particular, L(1,1,1)-labeling is more useful in the channel assignment problem and in wireless (sensor) networks than the others.

2.2.5 L(1,1,1)-Labeling - It is the labeling of the vertices with non-negative integers such that the labels on adjacent vertices, on vertices at distance 2 and on vertices at distance 3 are all different.
We can generalize it as follows:
2.2.6 L(d_1, d_2, d_3, ..., d_i, ..., d_k)-Labeling - It is a labeling of the vertices with non-negative integers such that the labels on vertices at distance i from each other differ by at least d_i.

3. Powers of a Unit Disk graph

For the sake of completeness, we first define the unit disk graph.

3.1 Unit Disk Graph - A graph G is a Unit Disk graph if there is an assignment of unit disks centered at its vertices such that two vertices are adjacent if and only if one vertex is within the unit disk centered at the other vertex. We denote a unit disk graph by G_UD.

3.2 Square of a Unit Disk Graph (G_UD^2) - The square G_UD^2 of a Unit Disk graph G_UD (V, E) is the graph whose vertex set is V and in which there is an edge between two vertices v_i and v_j if and only if their graph distance in G_UD is at most 2.
3.3 Euclidean distance two graph of a Unit Disk graph (G_UD^ED2) - The Euclidean distance two graph of a unit disk graph G_UD (V, E) is the graph whose vertex set is V and in which there is an edge between two vertices v_i and v_j if and only if their Euclidean distance in G_UD is at most 2.

Figure 3. (a) Square of a UD graph; (b) ED-2 graph of a UD graph



3.4 Cube of a Unit Disk graph (G_UD^3) - The cube G_UD^3 of a Unit Disk graph G_UD (V, E) is the graph whose vertex set is V and in which there is an edge between two vertices v_i and v_j if and only if their graph distance in G_UD is at most 3.
3.5 Euclidean distance three graph of a Unit Disk graph (G_UD^ED3) - The Euclidean distance three graph of a unit disk graph G_UD (V, E) is the graph whose vertex set is V and in which there is an edge between two vertices v_i and v_j if and only if their Euclidean distance in G_UD is at most 3.

Figure 4. (a) Cube of a UD graph (G_UD^3); (b) ED-3 graph of a UD graph (G_UD^ED3)
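All of the distance graphs of Sections 3.1-3.5 can be built from node coordinates with a single predicate; together with the graph_power sketch of Section 2.1, this also yields G_UD^2 and G_UD^3. A minimal sketch (the point-list representation is our own assumption):

import math
from itertools import combinations

def disk_graph(points, r):
    """Graph on point indices with an edge whenever the Euclidean distance
    is at most r: r = 1 gives G_UD (Section 3.1), while r = 2 and r = 3
    give the ED-2 and ED-3 graphs (Sections 3.3 and 3.5)."""
    adj = {i: set() for i in range(len(points))}
    for i, j in combinations(range(len(points)), 2):
        if math.dist(points[i], points[j]) <= r:
            adj[i].add(j)
            adj[j].add(i)
    return adj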

Now we discuss some results relating G_UD and G_UD^ED.
Theorem 3.1 For any Unit Disk graph G_UD, G_UD^2 ⊆ G_UD^ED2.
Proof: The proof of this theorem is given in [8].
Theorem 3.2 For any Unit Disk graph G_UD, G_UD^3 ⊆ G_UD^ED3.
Proof: Let G_UD be a Unit Disk graph, G_UD^3 the cube of G_UD, and G_UD^ED3 the Euclidean distance three graph of G_UD. Since both graphs are on the same vertex set, it is sufficient to prove that the edge set of G_UD^3 is a subset of the edge set of G_UD^ED3. Let (c, w) be an edge in G_UD^3. Then there must exist two vertices u and v such that (c, u), (u, v), (v, w) are three edges in G_UD, since G_UD is a Unit Disk graph.
If d_ED(c, w) denotes the Euclidean distance between c and w, then
d_ED(c, w) ≤ d_ED(c, v) + d_ED(v, w)
           ≤ d_ED(c, u) + d_ED(u, v) + d_ED(v, w)
           = 1 + 1 + 1 = 3.
Hence d_ED(c, w) ≤ 3, and the edge (c, w) is an edge in G_UD^ED3. Hence G_UD^3 ⊆ G_UD^ED3.
Further, G_UD^3 may be a proper subgraph of G_UD^ED3 in some instances: there might be an edge in G_UD^ED3 but not in G_UD^3. As shown in figure 4(b), there might be a vertex x in G_UD^ED3 such that 1 < d_ED(c, x) ≤ 3, but there are no two vertices u' and v' such that (c, u'), (u', v') and (v', x) are edges in G_UD. Thus (c, x) is an edge in G_UD^ED3 but not in G_UD^3. Similarly, there might be a vertex y in G_UD^ED3 such that 1 < d_ED(c, y) ≤ 2, but there is no vertex w' such that (c, w') and (w', y) are edges in G_UD. Thus (c, y) is an edge in G_UD^ED3 but not in G_UD^3.
Theorem 3.3 For any UD graph G_UD, a coloring scheme χ(G_UD^ED3) for coloring G_UD^ED3 would also color G_UD^3, which is equivalent to an L(1,1,1)-labeling of G_UD.
Proof: Since we have proved in the previous theorem that G_UD^3 is a subgraph of G_UD^ED3, a coloring scheme χ(G_UD^ED3) that colors G_UD^ED3 properly is sufficient for any of its subgraphs; therefore it also colors G_UD^3. Since an L(1,1,1)-labeling of G_UD is equivalent to a proper coloring of G_UD^3, χ(G_UD^ED3) fulfills the L(1,1,1)-labeling of G_UD.
4. Cellular Partition Algorithm
The concept of the UD graph, as well as labeling, can be applied to wireless sensor networks, since we can model a wireless sensor network as a UD graph. In this modeling, sensors are denoted as vertices; the sensing coverage area of a sensor is represented by a unit disk centered at the corresponding vertex; and two sensors are connected if one sensor is within the sensing coverage area of the other. If G_UD represents a model of a wireless sensor network, then G_UD^2 and G_UD^3 give the possible interfering sensor nodes. To avoid this interference we need a proper labeling of G_UD^3, which is equivalent to an L(1,1,1)-labeling of G_UD.
In order to cover the targeted area by sensors, we divide the whole area into smaller cells. We have chosen regular hexagons, based on the observation that the hexagon is the most suitable polygon for the purpose: it tiles the plane with no overlap and is thus an efficient way to cover it.
Now, in order to label the nodes, we adopt the Cellular Partition algorithm. In this algorithm we first partition the whole plane into unit hexagonal cells with side length ½, so that the diagonal length of each cell is 1. For any UD graph in this plane, the vertices of the graph inside a cell form a clique, since no two vertices in the same hexagon have a Euclidean distance greater than 1. If the maximum clique size is ω, then there cannot be more than ω vertices in the same hexagonal cell, and ω colors are sufficient to color each hexagonal cell. Therefore we can color the whole graph properly.
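The cell assignment itself is a standard point-to-hexagon computation. The sketch below assumes a pointy-top axial grid with side ½ and a simple modular 5 × 5 patch numbering; whether this modular indexing reproduces the exact patch orientation of Figure 5 is an assumption, but it illustrates how a node's color can be computed from its coordinates and its slot within the cell's clique:

import math

HEX_SIDE = 0.5  # side length 1/2, so the cell diagonal is 1 (as above)

def point_to_cell(x, y, s=HEX_SIDE):
    """Axial (q, r) coordinates of the pointy-top hexagonal cell containing (x, y)."""
    qf = (math.sqrt(3) / 3 * x - y / 3) / s
    rf = (2 / 3 * y) / s
    xf, zf = qf, rf          # round in cube coordinates (x + y + z = 0)
    yf = -xf - zf
    rx, ry, rz = round(xf), round(yf), round(zf)
    dx, dy, dz = abs(rx - xf), abs(ry - yf), abs(rz - zf)
    if dx > dy and dx > dz:
        rx = -ry - rz
    elif dy > dz:
        ry = -rx - rz
    else:
        rz = -rx - ry
    return int(rx), int(rz)

def color_of(x, y, clique_slot, omega):
    """Color in [0, 25*omega): one of 25 patch positions times omega slots
    per cell; clique_slot is the node's index (0 .. omega-1) in its cell."""
    q, r = point_to_cell(x, y)
    patch_pos = (q % 5) + 5 * (r % 5)   # repeats with period 5 in both axes
    return patch_pos * omega + clique_slot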
Using the above Cellular Partition algorithm we prove the
following theorem:
Theorem 4.1 The Euclidean distance three graph G_UD^ED3 of any UD graph G_UD can be properly colored by at most 25ω colors, where ω is the maximum clique size.
Proof: We partition the whole plane into hexagonal cells with side ½ and diagonal 1. All vertices included in any hexagon form a clique. Since ω is the maximum clique size, at most ω vertices fall into each cell.
Next we construct a patch of 25 hexagons and use 25ω colors to color the patch. An example of the patch is shown in Figure 5; we keep the same orientation of the patches of 25 to cover the whole plane, as shown in Figure 6. Now we prove that a vertex in the i-th hexagon of a patch is at a Euclidean distance of at least 3 from any other vertex in the i-th hexagon of any adjacent patch.

We maintain the same numbering orientation in all patches over the whole plane, so the distance between two vertices in the i-th hexagons of adjacent patches is constant. As an example, let A, B and C be the centers of the center hexagons in three adjacent patches, as shown in Figure 6. Their distances can be computed as follows.
We know that AB = 10√3/4 = 4.33 > 4.
Also, AD = ½ + ½ + 1 + ½ + 1 + ¼ = 15/4.

Figure 5. A patch of 25 hexagons


Figure 6. Cover the whole plane with the patches of 25
hexagons

CD = 5√3/4
AD² + CD² = AC²
Therefore AC = √((15/4)² + (5√3/4)²) = 4.33 > 4.

Since the vertices in each of these hexagons are at a distance of at most ½ from their centers A, B and C, the distance from any vertex in the hexagon with center A to any vertex in the hexagon with center B is greater than 3. Similarly, the distance from any vertex in the hexagon with center A to any vertex in the hexagon with center C is also greater than 3. Therefore our patch fulfills the coloring of G_UD^ED3.
In the above coloring scheme each hexagon with a certain color pattern is far away from its sibling hexagon with the same color pattern, so it is a valid coloring for G_UD^ED3: we can properly color any G_UD^ED3 graph with 25ω colors. Using Theorem 3.2, it follows that this is a valid color scheme for G_UD^3 also.
5. Conclusion
Using the developed cellular partition algorithm, the cube of any UD graph can be properly colored using 25ω colors, where ω is the maximum clique size. This is equivalent to an L(1,1,1)-labeling of the unit disk graph and can be used to avoid interference between communicating channels in a wireless (sensor) network. The number 25ω is an upper bound; we are also looking to obtain a suitable lower bound.
Acknowledgement
Ms Usha Sharma, one of the authors of this paper, acknowledges the grant received from the Department of Science & Technology (D.S.T.), Government of India, New Delhi, for carrying out this research.


References

[1] N. Alon and B. Mohar, “The chromatic number of graph
powers”, Comb. Probab. Comput. 11, 1 (2002), 1–10.
[2] G. J. Chang and D. Kuo, “The l(2,1)-labeling problem
on graphs”, SIAM J. Discret. Math. 9, 2 (1996), 309–
316.
[3] B. N. Clark, C. J. Colbourn and D. S. Johnson, “Unit
disk graphs”, Discrete Math. 86, 1-3 (1990), 165–177.
[4] A. Graf, M. Stumpf and G.Weisenfels, “On coloring
unit disk graphs”, Algorithmica 20, 3 (1998), 277–293.
[5] D. B. West, “Introduction to Graph Theory”, Second
edition Prentice Hall, 2001.
[6] P. Bella, D. Kral, B. Mohar and K. Quittnerova, "Labeling planar graphs with a condition at distance two," in Proceedings 2005 European Conference on Combinatorics, Graph Theory and Applications, 2005.
[7] M. M. Halldórsson, "Approximating the L(h, k)-labelling problem," Engineering Research Institute, University of Iceland, Tech. Rep. VHI 03-2005. Available: citeseer.ist.psu.edu/252952.html
[8] T. Ren, K. L. Bryan, and L. Thoma, “On coloring the
square of unit disk graph,” University of Rhode Island
Dept. of Computer Science and Statistics, Tech. Rep.,
2006.
[9] Kevin L. Bryan, Tiegeng Ren, Lisa DiPippo, Timothy
Henry, Victor Fay-Wolfe, “Towards Optimal TDMA
Frame Size in Wireless Sensor Networks”, University of
Rhode Island Dept. of Computer Science and Statistics,
Tech. Rep.
[10] T. Calamoneri, E.G. Fusco, R.B. Tan and P. Vocca, “L
(h,1,1)- labeling of outerplanar graphs”, Mathematical
Methods of Operations research, Volume 69, Number 2,
May 2009, 307-312.
[11] T. Calamoneri, “The L(h, k)-Labelling Problem: A
Survey and Annotated Bibliography”, The Computer
Journal Vol. 49 No. 5, 2006.

A Method of Access in a Dynamic Context Aware
Role Based Access Control Model for Wireless
Networks
1 Dr. A. K. Santra, 2 Nagarajan S

1 Director and Professor, MCA Department, Bannari Amman Institute of Technology, Sathyamangalam, Tamil Nadu.
2 Research Scholar, Bharathiar University, Coimbatore and Selection Grade Lecturer, Alliance Business Academy, Bangalore, Karnataka.

Abstract: This paper addresses security in dynamic context aware systems. Context awareness is emerging as an important element in wireless systems. Security challenges in context aware systems include integrity, confidentiality and availability of context information, as well as the end user's privacy. The paper addresses the dynamic changes happening in the mapping between roles and permissions depending on context information, and presents an access control method using artificial neural networks. It represents the data in terms of bits to express the roles and permissions, which helps in reducing data transmission and is a good fit for wireless networks with lower bandwidth. It also introduces a novel method for storing the information in a reduced format. Instead of accessing the access control tables, the machine learns them, which in turn reduces the time required to access the tables. Being dynamic in nature, no manual changes are required; any change is taken care of by the machine learning itself. Further, the algorithm is simple and easy to implement in wireless networks.

Keywords: Dynamic Context, Wireless Networks.

Keywords: Dynamic Context, Wireless Networks.
1. Introduction
It has been proved that Dynamic Role Based Access Control (DRBAC) can manage access control and security, and more and more mobile devices are incorporating this feature. Pervasive communication technology is becoming an everyday feature and is changing the way we communicate with the external world. DRBAC requires the following tables: 1. User Location Table, 2. User Role Table, 3. Role-Permission Table and 4. Mutual Exclusive Role Table. Each time anybody accesses the system, the first three tables are searched.
Further, there is a very complex mapping of locations, users, roles and permissions, and it has been observed that frequently searching the tables reduces the efficiency of access control. A disadvantage of wireless devices is that they have less power, storage, computing and transmission ability; hence, performing access control in wireless environments is actually more complex than in wired environments. Therefore, any approach to access control must be relatively simple and very efficient.
This paper addresses the following points. It gives an access control algorithm in which storage is reduced using EAR decomposition, with the information retrieved accordingly. It also uses an ANN to train the system, so that the procedure is learnt by the system rather than obtained by searching the tables. The algorithm assigns the user different permissions in different sessions, depending on the context aware data available at that point of time. Using only bits reduces the data storage and transmission, making the approach easy to implement in networks where the bandwidth is very low. The aim of anytime, anywhere access infrastructures is to enable a new generation of applications that can be continuously managed, adapted and, finally, optimized.
The major challenge faced in wireless applications is managing the security of the system using Access Control Lists (ACLs). ACLs are a very common mechanism used in access control; they are used to check for permission to access resources or services. Another point to be noted at this juncture is that such an approach is inadequate for wireless applications, since most proposed models do not take context information into consideration.
There is a need for granting access in a dynamic way, as the context changes according to location, time, system resources, network security configuration, etc. Therefore, an access control mechanism that changes the permissions of a user dynamically based on context information is very much essential.
In this direction, [3] proposed a GRBAC model, representing the system using state machines. Using this model, we represent the information for the newly proposed algorithm and show how it can be stored and retrieved. Finally, we show how this can be used to train the system without accessing the matrix.
2. Background
Location, User, Role and Permission are the major components of a DRBAC, represented as follows:
L = {L1, L2, ..., Li}
U = {U1, U2, ..., Ui}
R = {R1, R2, ..., Ri}
P = {P1, P2, ..., Pi}
T = {T1, T2, T3}
A permission maps directly to only one role; in case many roles need to own the same permission, this has to be done using role inheritance, since conflicting permissions also need to be addressed.

3. Dynamic Context Aware Role Based Access
Control
DRBAC addresses the dynamic requirement of applications
in pervasive environments. It extends the traditional RBAC
model to use dynamic context information while making
access control decisions. The DRBAC addresses the
following:
1. A user's access privileges must change when the
user's context changes.
2. A resource must adjust its access permission when
its system information changes.
4. DRBAC Definitions
The DRBAC definitions are taken from the RBAC formalisms presented in [3] and [4].
USER: A user is an entity whose access is being controlled. USERS represents a set of users.
ROLES: A role is a job function within the context of an organization, with some associated semantics regarding the authority and responsibility conferred on the user assigned to the role. ROLES represents a set of roles.
PERMS: A permission is an approval to access one or more RBAC protected resources. PERMS represents a set of permissions.
LOCATIONS: The points from which the user accesses the resources. LOCATIONS is the set of points of access.
TIMES: The times at which the user accesses the resources. TIMES is the set of times at which the user has access.
SESSIONS: A session is a set of interactions between subjects and objects. A user is assigned a set of roles during each session; the active role changes dynamically among the assigned roles for each interaction. SESSIONS represents a set of sessions.
UA: UA is the mapping that assigns roles to a user. In a session, each user is assigned a set of roles, and the context information is used to decide which role is active. The user accesses the resource with the active role.
PA: PA is the mapping that assigns permissions to a role. Every role that has a privilege to access the resource is assigned a set of permissions, and the context information is used to decide which permission is active for that role.
Definition of the Agent: A Central Authority checks the user's access rights and gives the privileges that are active for the user in that session.
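A minimal sketch of these mappings (all concrete names and the pick_active_role hook are hypothetical placeholders, not part of the model):

UA = {            # user -> set of roles assignable in a session
    "U1": {"R1", "R5"},
    "U2": {"R2"},
}
PA = {            # role -> set of permissions for that role
    "R1": {"P8", "P11"},   # e.g. Read and Write
    "R2": {"P8"},
    "R5": {"P8"},
}

def active_permissions(user, context, pick_active_role):
    """The agent's check: context selects the active role among the
    user's assigned roles; the role's permissions are then granted."""
    roles = UA.get(user, set())
    role = pick_active_role(roles, context)
    return PA.get(role, set())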
5. Explanation of the DRBAC Model
The environment considered is an educational institute. The designations are Professor, Associate Professor, Assistant Professor and Teaching Assistant. At the office they have both read and write permissions. For this we represent the locations, roles and times in the following way:

Locations
L1 = Campuses Abroad
L2 = Campuses coming under the home country
L3 = Campuses in each City
L4 = Campuses within the city
L5 = Residence
Time
T1 = 8:00 AM to 8:00 PM (Office Hours)
T2 = 5:30 AM to 7:59 AM (Morning)
T3 = 8:01 PM to 5:29 AM (Night)
Roles
For Time T1
R1 = Professor
R2 = Associate Professor
R3 = Assistant Professor
R4 = Teaching Assistant
R5 = Professor Remote
R6 = Associate Professor Remote
R7 = Assistant Professor Remote
R8 = Teaching Assistant Remote
For Time T2
R9 = Professor
R10 = Associate Professor
R11 = Assistant Professor
R12 = Teaching Assistant
R13 = Professor Remote
R14 = Associate Professor Remote
R15 = Assistant Professor Remote
R16 = Teaching Assistant Remote
For Time T3
R17 = Professor
R18 = Associate Professor
R19 = Assistant Professor
R20 = Teaching Assistant
R21 = Professor Remote
R22 = Associate Professor Remote
R23 = Assistant Professor Remote
R24 = Teaching Assistant Remote
Permission
P1 = Append
P2 = Create.
P3 = Execute.
P4 = Get attribute.
P5 = I/O Control.
P6 = Link.
P7 = Lock.
P8 = Read.
P9 = Rename.
P10 = Unlink.
P11 = Write.
The access control algorithm for wireless applications is as follows. For the sake of this study it is assumed that static IP addresses are used. A wireless infrastructure implementing a WLAN is used for logins inside the campus, while broadband wireless internet is used to log in remotely.
Step 1: Authentication is done using IPSec labeling, as described in [5].
Step 2: Using the IP address associated with the user, the location of the user is determined.
Step 3: Depending on the user's location, a role is assigned, which is further associated with permissions.
Using the following information we ascertain, via Matrix1, whether a user is permitted to log in from a particular location or not. If the user has access rights from that location, step 2 of the algorithm is executed, i.e., the IP address is mapped to a role; otherwise access is denied.

Matrix1
L1 L2 L3 L4 L5
U1 1 1 1 1 1
U2 0 1 1 1 1
U3 0 1 1 1 1
U4 0 0 0 1 0
. 0 0 0 1 0
U5 1 1 1 1 1

The second matrix defines the relationship between locations and roles for the time at which the user logs in. Depending on the login time, roles are assigned. The matrix is used to check whether a role has access rights at various locations or not; further, the permissions for a role are defined at the time the role is created. If the role's column in the matrix is 1, that role can be granted access for that location and step 3 of the algorithm is executed; otherwise access for that role is denied.

Matrix2
For Time T1
R1 R2 R3 R4 R5 R6 R7 R8
L1 1 0 0 0 0 0 0 0
L2 1 1 1 0 0 0 0 0
L3 1 1 1 0 0 0 0 0
L4 1 1 1 1 0 0 0 0
L5 1 1 1 0 1 1 1 0

For Time T2
R9 R10 R11 R12 R13 R14 R15 R16
L1 1 0 0 0 0 0 0 0
L2 1 1 1 0 0 0 0 0
L3 1 1 1 0 0 0 0 0
L4 1 1 1 1 0 0 0 0
L5 1 1 1 0 1 1 1 0

For Time T3
R17 R18 R19 R20 R21 R22 R23 R24
L1 1 0 0 0 0 0 0 0
L2 1 1 1 0 0 0 0 0
L3 1 1 1 0 0 0 0 0
L4 1 1 1 1 0 0 0 0
L5 1 1 1 0 1 1 1 0

Based on the permission rights for that user, access is allowed. These two matrices are represented in the form of a graph, and the open ear decomposition technique is then used to reduce this information and store it.
6. Performance test of the algorithm
The test bed was created as a kernel program in SELinux, running with the same modules that SELinux has, in addition to the modules created for this purpose. Whenever somebody logs into the system, it uses the authentication methods presently provided by the operating system. Using this to our advantage, we put static addresses specific to the location, based on the labeling of IPSec objects called labeled IPSec; this particular feature is available in mainline Linux from version 2.6.16 itself. This performs the authorization process as described in [5], and we also use the same information to determine the location of the user. Once the user's location is ascertained, the next step is to look at the time at which the login has been requested; this is done with the help of the system clock. With the context information generated in this way, access roles are accordingly assigned.
The SELinux user identities are different from UNIX identities. Here, for experimentation, the normal roles defined are R1, R2, R3, R4, ..., R24, and the corresponding SELinux roles defined are R1_r, R2_r, R3_r, R4_r, ..., R24_r. These roles are associated with the user. The normal users are U1, U2, U3, U4, ..., Un, and the corresponding SELinux users defined are U1_u, U2_u, U3_u, U4_u, ..., Un_u. Here _r identifies a role, while _u identifies a user.
SELinux identities are applied as part of the security label and can be changed in real time under limited conditions. SELinux identities are not primarily used in the targeted policy: in the targeted policy, processes and objects are system_u, and the default for Linux users is user_u. When identities are part of the policy scheme, they are usually identical to the Linux account name (UID), and are compiled into the policy. In such a strict policy, some system accounts may run under a generic, unprivileged user_u identity, while other accounts have direct identities in the policy database.
_t identifies a type. SELINUX_SRC/rbac is where it is specified which roles are allowed to attain which other roles. Types are the primary security attribute SELinux uses in making authorization decisions, as defined under permissions above; this is defined in /etc/security/selinux/src/policy. Depending on this, roles can be assigned.
7. Representation of the Matrix and Decomposition/Retrieval
Using the three matrices defined in the above method, the next step is to apply the well-known Hungarian algorithm to represent each matrix in the form of a graph. The steps of the Hungarian algorithm are as follows:



Step 1
Generate an initial labeling l and matching M in E_l.

Step 2
If M is perfect, stop. Otherwise pick a free vertex u ∈ X. Set S = {u}, T = ∅.

Step 3
If N_l(S) = T, update the labels (forcing N_l(S) ≠ T):
α_l = min { l(x) + l(y) − w(x, y) : x ∈ S, y ∉ T }
l'(v) = l(v) − α_l if v ∈ S
l'(v) = l(v) + α_l if v ∈ T
l'(v) = l(v) otherwise

Step 4
If N_l(S) ≠ T, pick y ∈ N_l(S) − T.
If y is free, u − y is an augmenting path; augment M and go to Step 2.
If y is matched, say to z, extend the alternating tree: S = S ∪ {z}, T = T ∪ {y}; go to Step 3.
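For experimentation, the matching computed by the labeling-based steps above can also be obtained with an off-the-shelf routine. The sketch below deliberately swaps in SciPy's linear_sum_assignment (a different implementation of the same assignment problem) rather than re-coding Steps 1-4:

import numpy as np
from scipy.optimize import linear_sum_assignment

# Matrix1 of Section 5 as a 0/1 user-by-location matrix.
m1 = np.array([
    [1, 1, 1, 1, 1],   # U1
    [0, 1, 1, 1, 1],   # U2
    [0, 1, 1, 1, 1],   # U3
    [0, 0, 0, 1, 0],   # U4
    [1, 1, 1, 1, 1],   # U5
])
# linear_sum_assignment minimises cost, so negate to maximise matched 1s.
rows, cols = linear_sum_assignment(-m1)
matching = [(f"U{r+1}", f"L{c+1}") for r, c in zip(rows, cols) if m1[r, c]]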

Matrix1 and its graph representation G1


Matrix2 and its graph representation G2

Similarly, the graphs for the other two matrices are drawn and reduced as shown.
Now, using the two graphs, we apply the path ear decomposition algorithm, whose steps are as follows.
An ear decomposition D = [P_0, P_1, P_2, ..., P_{r-1}] of an undirected graph G = (V, E) is a partition of E into an ordered collection of edge-disjoint simple paths P_0, P_1, P_2, ..., P_{r-1} such that P_0 is an edge, P_0 ∪ P_1 is a simple cycle, each end point of P_i, for i > 1, is contained in some P_j, j < i, and none of the internal vertices of P_i are contained in any P_j, j < i. The paths in D are called ears. An ear is open if it is non-cyclic, and closed otherwise. A trivial ear is an ear containing a single edge. D is an open ear decomposition if all ears are open.
Let D = [P_0, P_1, P_2, ..., P_{r-1}] be an ear decomposition for a graph G = (V, E). For a vertex v in V, we denote by ear(v) the index of the lowest numbered ear that contains v; for an edge e = (x, y) in E, we denote by ear(e) (or ear(x, y)) the index of the unique ear that contains e. A vertex v belongs to P_ear(v).
The path ear decomposition algorithm:
Input: A connected graph G = (V, E) with a root r ∈ V, and with |V| = n.
Output: A depth first search tree of G, together with a label on each edge in E indicating its ear number.

Set T of edges; integer count;

procedure dfs(vertex v);
{* This is a recursive procedure. The call dfs(r) of the main program constructs a depth first search tree T of G rooted at r; a recursive call dfs(w) constructs the subtree of T rooted at w. The depth first search tree is constructed by placing the tree edges in the set T and labeling the vertices in the subtree rooted at vertex v in pre-order numbering, starting with count. The procedure assigns ear labels to the edges of G while constructing the depth first search tree. An edge that does not belong to any ear is given the label (∞, ∞). Initially, all vertices are unmarked. *}
vertex w;
mark v;
pre-order(v) := count; count := count + 1; low(v) := n; ear(v) := (n, n);
for each vertex w adjacent to v do
  {* This loop performs a depth first search of each child of v in turn and assigns ear labels to the tree and non-tree edges incident on vertices in the subtrees rooted at the children of v. *}
  if w is not marked then
    add (v, w) to T; parent(w) := v; dfs(w);
    if low(w) ≥ pre-order(w) then
      ear(parent(w), w) := (∞, ∞)
    else  {* low(w) < pre-order(w) *}
      ear(parent(w), w) := ear(w)
    fi;
    low(v) := min(low(v), low(w));
    ear(v) := lexmin(ear(v), ear(w))
  fi;
  if w is marked and w ≠ parent(v) then
    low(v) := min(low(v), pre-order(w));
    ear(w, v) := (pre-order(w), pre-order(v));
    ear(v) := lexmin(ear(v), ear(w, v))
  fi
rof
end dfs;

{* Main program *}
T := ∅; count := 0; dfs(r);
Sort the ear labels of the edges in lexicographically non-decreasing order and relabel the distinct labels (except labels (∞, ∞)) in order as 1, 2, 3, 4, ...;
Relabel the non-tree edge with label 1 as 0.
End.
Using the algorithm, the graphs G1, G2 and G3 reduce to G11, G21 and G31 respectively.
Graph G1 reduced to the form G11:
P1 = { <U1, L1>, <U5, L1> }
P2 = { <U1, L2>, <U5, L2> }
P3 = { <U1, L3>, <U5, L3> }
P4 = { <U1, L4>, <U5, L4> }
P5 = { <U1, L5>, <U5, L5> }
P6 = { <U2, L2>, <U3, L2>, <U3, L3> }
P7 = { <U2, L3> }
P8 = { <U3, L4>, <U4, L4> }
P9 = { <U3, L5> }
P10 = { <U4, L4> }
Therefore, G11 = { P1, P2, P3, P4, P5, P6, P7, P8, P9, P10 }.
Graph G2 reduced to the form G21:
P1 = { <L1, R1>, <L2, R1>, <L2, R2>, <L3, R3>, <L4, R3>, <L4, R4> }
P2 = { <L2, R3>, <L5, R3>, <L5, R5> }
P3 = { <L5, R6>, <L5, R7> }
Therefore, G21 = { P1, P2, P3 }.
A similar operation is performed on the other two graphs. G11 and G21 are referred to as partition matrices and can be called partition path matrices. The path decomposition is edge-disjoint, whence the union of the reduced paths gives back the entire graphs G1 and G2.
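Since the ears are edge-disjoint, retrieval is just a union. A minimal sketch over a few of the paths of G11 above (the pair encoding is our own):

G11_SAMPLE = [                                    # a few ears of G11
    [("U1", "L1"), ("U5", "L1")],                 # P1
    [("U2", "L2"), ("U3", "L2"), ("U3", "L3")],   # P6
    [("U2", "L3")],                               # P7
]

def union_of_paths(paths):
    """Union of the edge-disjoint ears: restores the adjacency of the
    original bipartite graph."""
    adj = {}
    for path in paths:
        for u, v in path:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    return adj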
8. Conclusion
It has been observed that any dynamic context aware system needs to search the relative tables to get the user permissions. This paper presents a dynamic context aware algorithm using SELinux in which the number of tables is reduced, and shows a way to store and retrieve them. On executing our module, the roles are assigned according to location and time. Hence it can be implemented with ease in a wireless networked environment.
Acknowledgements
We would like to thank Prof. K. A. Venkatesh, HOD, Department of Computer Applications, Alliance Business Academy, for all his support and discussions. We would also like to thank Mr. Mahesh M S for the experimental support provided in the lab during the preparation of this algorithm and module.


References
[1] K. Wang, Z. Ding, L. Zhou, "Efficient Access Control in Wireless Networks," Proceedings of the IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, pp. 85-88, ISBN 0-7695-2749-3, 2006.
[2] K. Wang, Z. Ma, "Fast Access Control Algorithm in Wireless Network," Grid and Pervasive Computing Workshops 2008 (GPC Workshops '08), The 3rd International Conference on, pp. 347-351, ISBN 978-0-7695-3177-9, 25-28 May 2008.
[3] G. Zhang, M. Parashar, "Context-Aware Dynamic Access Control for Pervasive Applications," Proceedings of the Communication Networks and Distributed Systems Modeling and Simulation Conference (CNDS 2004), 2004 Western MultiConference (WMC), pp. 219-225, January 2004.
[4] K. Beznosov, J. Barkley, J. Uppal, "Supporting relationships in access control using role based access control," Symposium on Access Control Models and Technologies, Proceedings of the fourth ACM workshop on Role-based access control, Fairfax, Virginia, United States, pp. 55-65, ISBN 1-58113-180-1, 1999.
[5] T. Jaeger, D. King, K. Butler, J. McCune, R. Caceres, S. Hallyn, J. Latten, R. Sailer, X. Zhang, "Leveraging IPsec for Distributed Authorization," nsrc.cse.psu.edu/tech_report/NAS-TR-0037-2006.pdf, 2006.
Authors Profile
Dr. A.K.Santra is presently working as
the Director (Computer Applications), at
the Bannari Amman Institute of
Technology in Sathyamangalam. He has
close to 40 years of experience in both industry and teaching. He has published 17 papers in various international journals and conferences. He is presently guiding a number of students for their Ph.D. degrees, and is on the board of, and a reviewer for, various international journals.


Mr. Nagarajan S is presently working
as Selection Grade Lecturer, at the
Alliance Business Academy, Bangalore.
He is also a Research Scholar at
Bharathiar University at Coimbatore. He
has about 13 years of industry and teaching experience. He has published one paper in an international journal and 5 papers in various conferences.
Design of a Novel Cryptographic Algorithm using
Genetic Functions

Praneeth Kumar G 1 and Vishnu Murthy G 2

1 C V S R College of Engineering, Ghatkesar, Andhra Pradesh, India
[email protected]
2 C V S R College of Engineering, Ghatkesar, Andhra Pradesh, India
[email protected]

Abstract: Information security plays a key role in the field of modern computing. In this paper we present a new cryptographic algorithm which is resistant to cryptanalysis, brute-force and timing attacks. The algorithm uses the Blum Blum Shub generator, a Cryptographically Secure Pseudorandom Bit Generator (CSPRBG), for deriving the key, and genetic functions in the process of encryption. A comparison of the proposed technique with the existing and industrially accepted RSA and Triple-DES has also been done in terms of resistance to attacks and the various features of the algorithm.
Keywords: Encryption, Decryption, Blum Blum Shub Generator, Genetic Functions.
1. Introduction
Information security is a vital aspect of modern computing systems. With the global acceptance of the Internet, virtually every computer is connected to every other, so maintaining the secrecy and security of information has become a necessity. For these reasons, different types of research work on encryption and decryption are going on, and various algorithms have been developed in this field. The process of encoding a message so that it can be read only by the sender and the intended recipient is known as encryption; the encoded version is known as ciphertext, and the process of decoding the ciphertext is known as decryption.
The algorithm uses the Blum Blum Shub generator for generating the key, and the genetic functions CROSSOVER and MUTATION in the process of encryption and decryption. For security, the algorithm uses a key of four parameters, which makes it resistant against brute-force attack:
Key = {p, q, s, k}
where p and q are two large prime numbers, s is a randomly chosen number relatively prime to n (the product of p and q), and k is the key size used.
2. Literature Survey
2.1 Blum, Blum, Shub Generator
A popular approach for generating secure pseudorandom numbers is the Blum, Blum, Shub (BBS) generator, named for its developers [1]. The procedure is as follows. First, choose two large prime numbers, p and q, that both have a remainder of 3 when divided by 4, that is,
p ≡ q ≡ 3 (mod 4),
meaning that (p mod 4) = (q mod 4) = 3. Let n = p × q. Next, choose a random number s such that s is relatively prime to n; this is equivalent to saying that neither p nor q is a factor of s. Then the BBS generator produces a sequence of numbers X_i according to the following algorithm:
X_0 = s^2 mod n
for i = 1 to ∞: X_i = (X_{i-1})^2 mod n
The BBS is referred to as a cryptographically secure pseudorandom bit generator (CSPRBG). A CSPRBG is defined as one that passes the next-bit test, which is defined as follows: "A pseudorandom bit generator is said to pass the next-bit test if there is no polynomial-time algorithm that, on input of the first k bits of an output sequence, can predict the (k+1)st bit with probability significantly greater than 1/2". The security of BBS is based on the difficulty of factoring n, that is, of determining its two prime factors p and q.
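The generator is only a few lines of code. A minimal sketch (the streaming-generator interface is our own choice); the assertion replays the worked example of Section 3:

from itertools import islice

def bbs_stream(p, q, s):
    """Blum, Blum, Shub: requires p = q = 3 (mod 4) and s relatively prime
    to n = p*q; yields x_0 = s^2 mod n, then x_i = (x_{i-1})^2 mod n."""
    assert p % 4 == 3 and q % 4 == 3
    n = p * q
    x = (s * s) % n
    while True:
        yield x
        x = (x * x) % n

# With the parameters of the worked example in Section 3:
assert list(islice(bbs_stream(7, 19, 100), 4)) == [25, 93, 4, 16]

Taking the low-order bit of each x_i gives the pseudorandom bit stream.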
2.2 Genetic Functions
In the proposed algorithm we use two genetic functions, CROSSOVER and MUTATION.
Crossover is a genetic function which can be described by the following figure: as illustrated, the binary representations of the key and the plain text are crossed over. We have two forms of crossover, single and double, taking 1 breaking point for a single crossover and 2 breaking points for a double crossover.

Crossover:

Mutation is a genetic function where the bit at a given position is inverted (i.e., 0 to 1 and vice versa).
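On bit strings both operations reduce to slicing. A minimal sketch (the helper names are our own), with the double crossover returning offspring in the DBF/AEC order of the diagrams in Section 3:

def single_crossover(plain, key, p):
    """Exchange the tails of two equal-length bit strings at point p:
    (AB, CD) -> (AD, CB)."""
    return plain[:p] + key[p:], key[:p] + plain[p:]

def double_crossover(plain, key, p1, p2):
    """Exchange around two breaking points: (ABC, DEF) -> (DBF, AEC)."""
    return (key[:p1] + plain[p1:p2] + key[p2:],
            plain[:p1] + key[p1:p2] + plain[p2:])

def mutate(bits, pos):
    """Invert the bit at the given position (0 -> 1 and vice versa)."""
    return bits[:pos] + ("1" if bits[pos] == "0" else "0") + bits[pos + 1:]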
3. Proposed Algorithm
The algorithm consists of two phases: the first phase generates random numbers and the other performs encryption/decryption.
3.1 Key Generation
The algorithm uses a 4-tuple key {p, q, s, k}, where p and q are large prime numbers, s is a chosen random number relatively prime to n, the product of p and q, and k is the key size. The key size is variable.
The algorithm then uses the Blum, Blum, Shub generator (described in Section 2.1) for generating the random numbers, which are used as keys in each iteration of encryption.
1. Choose p = 7 and q = 19.
2. This implies n = 7 × 19 = 133.
3. Choose s = 100, relatively prime to 133.
4. Then:
X_0 = s^2 mod n = (100)^2 mod 133 = 25
X_1 = (X_0)^2 mod n = (25)^2 mod 133 = 93
X_2 = (X_1)^2 mod n = (93)^2 mod 133 = 4
X_3 = (X_2)^2 mod n = (4)^2 mod 133 = 16
...
Here, the key is represented as {7, 19, 100, 8}.
3.2 Encryption/Decryption Algorithm
The proposed algorithm follows the method given below for encryption and decryption. The random numbers should be generated concurrently in both processes.
3.2.1 Encryption
The encryption process is carried out as:
for every character in the file until EOF:
  if the random number x_i generated is odd:
    perform CROSSOVER between the plain text (binary representation of the ASCII value) and the random number (in binary representation), where the breaking point is x_i % k.
  else if the number generated is even:
    perform double CROSSOVER between the plain text (binary representation of the ASCII value) and the random number (in binary representation), where the first breaking point is x_i % k and the second one is (x_i + s) % k.
  perform MUTATION at the (2*x_i) % k position in the offspring.

The set of two numbers from the above output is the cipher text.
Single Crossover
Suppose that the message is AB and the key is CD, where A is the part of the plain text before the breaking point, B is the part of the plain text after the breaking point, C is the part of the key before the breaking point, and D is the part of the key after the breaking point.

(A B, C D) → (Crossover) → (A D, C B) → (Mutation) → (A'D, C'B)

Double Crossover
Suppose that the message is ABC and the key is DEF, where A is the part of the plain text before the first breaking point, B is the part of the plain text between the first and second breaking points, C is the part of the plain text after the second breaking point, D is the part of the key before the first breaking point, E is the part of the key between the first and second breaking points, and F is the part of the key after the second breaking point.

(A B C, D E F) → (Double Crossover) → (D B F, A E C) → (Mutation) → (D B'F, A E'C)

Then, the plain text (binary representation of the ASCII code) is crossed over with the key (binary representation) generated by BBS (Section 3.1).

Here, the cipher text that is sent consists of the two numbers A'D and C'B, instead of AB, in reverse order (if single crossover is performed), or of DB'F and AE'C, instead of ABC, in reverse order (if double crossover is performed).

For the plain text "TEXT" the encryption process is as follows:

Character   ASCII Value   Binary Value
T           83            01010011
E           69            01000101
X           87            01010111
T           83            01010011

(01010011 (83), 00011001 (25)) → (Crossover) → (01011001, 00010011) → (Mutation) → (01001001 (73), 00000011 (3))

So the cipher text is (3, 73). This process is continued until all the text in the source file (plain text) is completed.
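This worked example can be replayed mechanically; the block below is self-contained. The breaking point 4 and mutation position 3 are read off the example itself, since how the paper's x_i % k and (2*x_i) % k indices map onto bit positions is not spelled out (our assumption):

def single_crossover(a, b, p):
    return a[:p] + b[p:], b[:p] + a[p:]

def mutate(bits, pos):
    return bits[:pos] + ("1" if bits[pos] == "0" else "0") + bits[pos + 1:]

plain, key = format(83, "08b"), format(25, "08b")   # 'T' and x_i = 25
off1, off2 = single_crossover(plain, key, 4)        # 01011001, 00010011
c1, c2 = mutate(off1, 3), mutate(off2, 3)           # 01001001, 00000011
assert (int(c2, 2), int(c1, 2)) == (3, 73)          # the cipher pair (3, 73)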
3.2.2 Decryption

The decryption process is carried out as:
Generate the random numbers concurrently.
for every pair in the file (cipher text) until EOF:
  if the random number x_i generated is odd:
    read two characters at a time.
    perform CROSSOVER between the second number read and x_i (in binary representations), where the breaking point is x_i % k.
    perform MUTATION at the (2*x_i) % k position in the crossed-over numbers.
    perform CROSSOVER between the first offspring of the above phase and the first character read (in binary representations).
  else if the number generated is even:
    perform double CROSSOVER between the second number and the key (binary representation of the ASCII value), where the first breaking point is x_i % k and the second one is (x_i + s) % k.
    perform MUTATION at the (2*x_i) % k position in the crossed-over numbers.
    perform CROSSOVER between the first number of the above output and the first character read (in binary representations).

The first number of the above output is the plain text (if single crossover is performed):

(A'D, C D) → (Crossover) → (A'D, C'D) → (Mutation) → (A D, C'D)

(A D, C'B) → (Crossover) → (A B, C'D),   where A B is the plain text.

If double crossover is to be performed:

(D B'F, D E F) → (Double Crossover) → (D B'F, D E F) → (Mutation) → (D B F, D E F)

(D B F, A E'C) → (Double Crossover) → (A B C, D E'F),   where A B C is the plain text.
4. Analysis
The proposed algorithm has the following advantages:
Suitable for hardware or software: The algorithm uses only primitive computational operations that can be implemented inexpensively in hardware, which is not possible with RSA and Triple-DES.
Variable-length key: The key length can be varied in the algorithm, which is possible in RSA but not in Triple-DES.
Low memory requirement: A low memory requirement makes the proposed algorithm suitable for smart cards and other devices with restricted memory, which is not possible with RSA and Triple-DES.
Resistant to known plain text, known cipher text and brute-force attacks.
Resistant to timing attack: As the algorithm uses the Blum, Blum, Shub generator for key generation, it is resistant to timing attacks (Section 2.1). RSA is prone to this kind of attack, but Triple-DES is not.
Computationally secure: As the proposed algorithm maps each character in the plain text to two characters in the cipher text, it is hard to break the cipher. This feature is present in both the RSA and Triple-DES algorithms.
Ease of analysis: The algorithm is explained concisely here, even though it is difficult to cryptanalyze. RSA and DES lack this feature.
5. Conclusion and Future Enhancements
The paper proposes a new algorithm which is comparably secure with RSA and Triple-DES and which can be easily implemented in hardware. Future work will be devoted to extending the algorithm to achieve other security services such as authentication, data integrity, etc.
References
[1] L. Blum, M. Blum and M. Shub, "Comparison of two pseudo-random number generators," Proc. CRYPTO '82, pp. 61-78, New York, 1983.
[2] W. Stallings, "Cryptography and Network Security," Prentice Hall, 3rd Edition.
[3] S. Som, J. K. Mandal and S. Basu, "A Genetic Functions Based Cryptosystem (GFC)," IJCSNS, September 2009.
[4] A. Fadia, "Network Security," Macmillan India Ltd.


Authors Profile

Praneeth Kumar G received the B.Tech degree in Computer Science and Engineering from Progressive Engineering College in 2008. From May 2008 to August 2009 he worked at Concepts in Computing (CIC) as a Software Engineer. He is presently working at C V S R College of Engineering as an Assistant Professor. His areas of interest include software engineering and information security.

Vishnu Murthy G received the B.Tech.
and M.Tech. degrees in Computer Science
and Engineering. He is resource person for
IEG and Birla Off campus programmes.
He is presently pursuing his Ph.D in
J.N.T.U. and heading the Department of
Computer Science and Engineering in C V
S R College Of Engineering. His areas of
interest include software Engineering,
Information Security and Image
Processing.

Cluster Management using Cluster Size Ratio in Ad
Hoc Networks

D K L V Chandra Mouly, Ch D V Subba Rao and M M Naidu

Department of Computer Science and Engineering
S V University College of Engineering, Tirupati - 517502, India.
[email protected], [email protected]

Abstract: Cluster Management using Cluster Size Ratio
(CMCSR) is a completely distributed algorithm for partitioning
a given set of mobile nodes into clusters. The proposed algorithm
tries to reduce the amount of computational and information
overhead while maintaining a stable cluster formation. It
constructs and maintains a backbone topology based on a
minimal dominating set (MDS) of the network. According to this
algorithm, each node determines the membership in the MDS for
itself and its one-hop neighbors based on one-hop neighbor
information that is disseminated among neighboring nodes
using the willingness and priority information of the nodes. The algorithm then ensures that the members of the MDS are connected into a connected dominating set (CDS), which can be used to form the backbone infrastructure of the communication network to facilitate routing. The algorithm outperforms existing algorithms with respect to stability. The heuristic used in this algorithm is load balancing among cluster heads using the cluster size ratio.

1. Introduction

This section discusses elementary issues of ad hoc networks
and benefits of clustering.

1.1 Ad Hoc Networks
In the next generation of wireless communication systems,
there will be a need for the rapid deployment of independent
mobile users. Significant examples include establishing
survivable, efficient, dynamic communication for
emergency/ rescue operations, disaster relief efforts, and
military networks. Such network scenarios cannot rely on
centralized and organized connectivity, and can be
conceived as applications of ad hoc networks. An ad hoc
network is an autonomous collection of mobile users that
communicate over relatively bandwidth constrained wireless
links. Since the nodes are mobile, the network topology may
change rapidly and unpredictably over time. The network is decentralized: all network activity, including discovering the topology and delivering messages, is handled by the nodes themselves, i.e., routing functionality is incorporated into the mobile nodes.

The set of applications for ad hoc networks is diverse,
ranging from small, static networks that are constrained by
power sources, to large-scale, mobile, highly dynamic
networks. The design of network protocols for these
networks is a complex issue. Regardless of the application,
ad hoc networks need efficient distributed algorithms to
determine network organization, link scheduling, and
routing. However, determining viable routing paths and delivering messages in a decentralized environment where the network topology fluctuates is not a well-defined problem [1].

1.2 Clustering in Ad Hoc Networks
A wireless ad hoc network consists of nodes that move freely
and communicate with each other using wireless links. Ad hoc networks do not use specialized routers for path discovery and traffic routing. One way to support efficient communication between nodes is to develop a wireless backbone architecture; this means that certain nodes must be
selected to form the backbone. Over time, the backbone
must change to reflect the changes in the network topology
as nodes move around. The algorithm that selects the
members of the backbone should naturally be fast, but also
should require as little communication between nodes as
possible, since mobile nodes are often powered by batteries.
One way to solve this problem is to group the nodes into
clusters, where one node in each cluster functions as cluster
head, responsible for routing [2].

1.3 Benefits of clustering
Ad hoc networks are suited for use in situations where an infrastructure is unavailable or where deploying one is not cost-effective. One of many possible uses of mobile ad hoc networks is in business environments, where the need for collaborative computing might be more important outside the office than inside it, such as in a business meeting held outside the office to brief clients on a given assignment.
Mobile ad hoc networks allow the construction of flexible and adaptive networks with no fixed infrastructure. These networks are expected to play an important role in future wireless generations. Future wireless technology will require highly adaptive mobile networking technology to effectively manage multi-hop ad hoc network clusters, which will not only operate autonomously but also be able to attach at some point to fixed networks.

2. Literature Review

This section emphasizes some of the past clustering
techniques.
2.1 Types of Topology Management
There are two approaches to topology management in ad
hoc networks:
• Power control.
• Hierarchical topology organization.
2.1.1 Power Control
Power control mechanisms adjust the transmission power on a per-node basis, so that one-hop neighbor connectivity is balanced and overall network connectivity is ensured [4, 5, 6]. Li [7] proved that network connectivity is minimally maintained as long as the decreased power level keeps at least one neighbor connected in every 2π/3 to 5π/6 angular separation. Ramanathan [8] proposed to incrementally adjust node power levels so as to keep network connectivity at each step. Topologies derived from power-control schemes often result in unidirectional links that create harmful interference due to the different transmission ranges among one-hop neighbors [9]. Dependencies on volatile information in mobile networks, such as node locations [4] and signal strength or angular positions [8], also contribute to the instability of topology control algorithms based on power control.
2.1.2 Hierarchical topology control
This approach to topology control is often called clustering, and consists of selecting a set of cluster heads such that every node is associated with a cluster head, and cluster heads are connected with one another directly or by means of gateways, so that the union of gateways and cluster heads constitutes a connected backbone [10, 14, 15]. Once elected, the cluster heads and gateways help reduce the complexity of maintaining topology information, and can simplify essential functions such as routing, bandwidth allocation, channel access, power control and virtual circuit support. For clustering to be effective, the links and nodes that are part of the backbone (i.e., cluster heads, gateways, and the links that connect them) must be kept close to the minimum and must also remain connected.
2.2 Types of Cluster Head Elections
Cluster heads can be elected in four ways.
• Deterministic Clustering
• Non-Deterministic Clustering
• Reactive Clustering
• Proactive Clustering
A. Deterministic Clustering
Deterministic clustering can determine the cluster heads in a
single round. Different heuristics have been used to form
clusters and to elect cluster heads. Several approaches [12]
utilized the node identifiers to elect the cluster heads within
one or multiple hops.

B. Non-Deterministic Clustering
In non-deterministic clustering, negotiations are used.
Negotiations require multiple incremental steps, and may
incur an election jitter during the process, because of the
lack of consensus about the nodes being elected as the
cluster heads. Examples of this approach are the “core”
extraction algorithm [13] and the spanning tree algorithm
[14]. SPAN [13] allows a node to delay the announcement
of becoming a cluster head for random amounts of time to
attempt to attain minimum conflicts between cluster heads
in its one-hop neighborhood.
C. Reactive Clustering
Reactive clustering is an on-demand clustering approach. There is no periodic exchange of clustering information in the network. Instead, whenever there is data traffic, cluster-related information is piggybacked in outgoing data packets and extracted from received packets.
D. Proactive Clustering
Proactive clustering algorithms require periodic broadcasts of cluster-related information. SPAN [13] adaptively elects coordinators according to the remaining energy and the number of pairs of neighbors a node can connect.
2.3 Scope for Present Work
The efficiency of a communication network depends not only on its control protocols, but also on its topology. Our work, CMCSR, proposes a distributed topology management algorithm that constructs and maintains a backbone topology based on a Minimal Dominating Set (MDS) of the network. Without topology management, each node would have to maintain routing information for all the nodes it needs to reach. With topology management, a subset of nodes called cluster heads is selected, and each cluster head performs the routing work for its members.

3. System Model
3.1 Assumptions
This work assumes that an ad hoc network comprises a
group of mobile nodes communicating through a common
broadcast channel using omni-directional antennas with the
same transmission range. The topology of an ad hoc
network is thus represented by an undirected graph G = (V, E), where V is the set of network nodes, and E ⊆ V × V is the set of links between nodes. The existence of a link (u, v) ∈ E also means (v, u) ∈ E, and that nodes u and v are within packet-reception range of each other, in which case u and v are called one-hop neighbors of each other. The set of one-hop neighbors of a node i is denoted by Ni. Two nodes that are not connected but share at least one common one-hop neighbor are called two-hop neighbors of each other.

Each node has a unique identifier, and all transmissions are omni-directional with the same transmission range. The nodes move with constant mobility, and energy decreases linearly. Different types of nodes consume energy at different rates: we ignore the energy consumed by local computations and assume that the energy consumption rate depends only on the type of the node. In these algorithms, a host consumes 0.6% of the total energy per minute, whereas a cluster head consumes 3%.
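A minimal C sketch of this linear energy model (the function and parameter names are illustrative, not from the paper):

    /* Remaining energy after 'minutes' of operation, given that a host
       consumes 0.6% and a cluster head 3% of the total energy per minute. */
    double remaining_energy(double total, double minutes, int is_cluster_head) {
        double rate = is_cluster_head ? 0.03 : 0.006;   /* fraction per minute */
        double e = total * (1.0 - rate * minutes);
        return e > 0.0 ? e : 0.0;                       /* clamp at empty */
    }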

3.2 Model
In an ad hoc network all nodes are alike and all are mobile.
There are no base stations to coordinate the activities of
subsets of nodes. Therefore, all the nodes have to
collectively make decisions. All communication is over
wireless links. A wireless link can be established between a
pair of nodes only if they are within wireless range of each
(IJCNS) International Journal of Computer and Network Security,
Vol. 2, No. 4, April 2010

60
other. We will only consider bidirectional links. It is
assumed the MAC layer will mask unidirectional links and
pass only bidirectional links. Beacons could be used to
determine the presence of neighboring nodes. After the
absence of some number of successive beacons from a
neighboring node, it is concluded that the node is no longer
a neighbor. Two nodes that have a wireless link will,
henceforth, be said to be one wireless hop away from each
other. They are also said to be immediate neighbors.
Communication between nodes is over a single shared
channel.

In ad hoc networks the nodes within each neighborhood are
not known a priori. The individual cluster may transition to
spatial TDMA for inter-cluster and intra-cluster
communication. All nodes broadcast their node identity
periodically to maintain neighborhood integrity. Due to
mobility, a node’s neighborhood changes with time. As the
mobility of nodes may not be predictable, changes in
network topology over time are arbitrary. However, nodes
may not be aware of changes in their neighborhood.
Therefore, clusters and cluster heads must be updated
frequently to maintain accurate network topology.

3.2.1 Attributes of a node
The attributes of a node and their functionality are given in Table 1.

Table 1: Attributes of a node and their functionality

ATTRIBUTE       FUNCTION
ID              Unique name given to the node
ENERGY          The remaining energy capacity of the node
MOBILITY        The speed of the node when it is moving
WILLINGNESS     How much the node is willing to be a cluster head
PRIORITY        The priority among other nodes to become a cluster head
CLUSTER SIZE    The cluster size ratio of the node
TYPE            Whether the node is a cluster head, gateway, doorway or member
NEIGHBORS       Number of one-hop neighbors

3.2.2 Computing Priorities of Nodes
Given that cluster heads provide the backbone for a number
of network control functions, their energy consumption is
more pronounced than that of ordinary hosts. Low-energy
nodes must try to avoid serving as cluster heads to save
energy. However, to balance the load of serving as cluster
heads, every node should take the responsibility of serving
as a cluster head for some period of time with some
likelihood. Furthermore, node mobility has to be considered in cluster head elections. To take the mobility and energy levels of nodes into account in their election, we define the two-hop neighbor information needed to assign node priorities, which consists of two components: (a) the neighboring nodes, and (b) a willingness value assigned to a node as a function of its mobility and energy level.
We denote the willingness value of node i by Wi, the speed of node i by a scalar Mi that ranges from 0 to 1 meters per second, and the remaining energy of node i by Ei in the range of 0 to 1. The willingness Wi is a function that should be defined according to the following criteria:

1. To enhance survivability, each node should have the
responsibility of serving as a cluster head with some
nonzero probability determined by its willingness value.
2. To promote stability and limit the frequency with which cluster head elections must take place, the willingness value of a node should remain constant as long as the variations in the speed and energy level of the node do not exceed some threshold values.
3. To avoid electing cluster heads that quickly lose
connectivity with their neighbors after being elected, the
willingness value of a node should decrease drastically after
the mobility of the node exceeds a given value.
4. To prolong the battery life of a node, its willingness
value should decrease drastically after the remaining energy
of the node drops below the given level.

The willingness value Wi is specified as:

Wi = 2^(log2(Ei + 0.9) * log2(Mi + 2))          (1)

Here the constants 0.9 and 2 in Eq. (1) eliminate the
boundary conditions in the logarithmic operations. The
logarithmic operations on the speed and the remaining
energy values render higher willingness values in the high
energy and low speed field, while giving close to zero values
in the low energy and high-speed region.

The priority value of node i is a function of its willingness and its number of one-hop neighbors n:

i.prio = 2^(log2(Wi) / n)          (2)
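A minimal C sketch of Eqs. (1) and (2); the function names are illustrative, since the paper gives no reference implementation:

    #include <math.h>

    /* Willingness per Eq. (1): ei is the remaining energy in [0, 1],
       mi the speed in [0, 1] m/s. */
    double willingness(double ei, double mi) {
        return pow(2.0, log2(ei + 0.9) * log2(mi + 2.0));
    }

    /* Priority per Eq. (2): wi is the willingness value, n > 0 the
       number of one-hop neighbors. */
    double node_priority(double wi, int n) {
        return pow(2.0, log2(wi) / (double)n);
    }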

Figure 1 illustrates the effect of the two factors on the priority values. From Figure 1 we can conclude that the priority increases with both the willingness value and the number of neighbors.
[Figure: priority (0 to 1) plotted against willingness (0.1 to 0.9) and number of neighbors (1 to 10).]
Figure 1. Priority Graph

3.2.3 MDS and Cluster Head Election
The approach to establishing a minimal dominating set (MDS) is based on three key observations. First, using negotiations among nodes to establish which nodes should belong to the MDS incurs substantial overhead when nodes move around and the quality of links changes frequently. Hence, nodes should be allowed to make MDS membership decisions based on local information. Second, because in an MDS every node is one hop away from a cluster head, the local information needed at any node needs to include only nodes that are one and two hops away from the node itself. Third, having too many cluster heads around the same set of nodes does not lead to an MDS. Hence, to attain a selection of nodes into the MDS without negotiation, nodes should rank one another using the two-hop neighborhood information they maintain. Based on the above, the approach adopted in CMCSR consists of each node communicating to its neighbors information about all its two-hop neighbors. Using this information, each node computes a priority for each node in its two-hop neighborhood, such that no two nodes can have the same priority at the same instant of time. A node becomes a cluster head if it has the highest priority in its two-hop neighborhood.
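A minimal C sketch of this local decision rule (the types and names are illustrative, assuming each node already knows the priorities in its two-hop neighborhood):

    /* Hypothetical per-node record; the paper only requires that each
       node knows the priorities in its two-hop neighborhood. */
    typedef struct {
        int id;
        double prio;    /* priority from Eq. (2) */
    } NodeInfo;

    /* Returns 1 if 'self' elects itself into the MDS (cluster head):
       no node in its two-hop neighborhood 'nbrs' (size n) has a higher
       priority. Priorities are assumed unique, as the paper requires. */
    int in_mds(const NodeInfo *self, const NodeInfo *nbrs, int n) {
        for (int i = 0; i < n; i++)
            if (nbrs[i].prio > self->prio)
                return 0;
        return 1;
    }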

3.2.4 Connected Dominating Set Election
The CDS [4] of a network topology is constructed in two
steps. In the first step, if two cluster heads in the MDS are
separated by three hops and there are no other cluster heads
between them, a node with the highest priority on the
shortest paths between the two cluster heads is elected as a
doorway, and is added to the CDS. Therefore, the addition
of a doorway brings the connected components in which the
two cluster heads reside one hop closer. In the second step,
if two cluster heads or one cluster head and one doorway
node are only two hops away and there are no other cluster
heads between them, one of the nodes between them with
the highest priority becomes a gateway to connect cluster
head to cluster head or doorway to cluster head. After these
steps, the CDS is formed.
CDS is constructed in two steps
• Selecting doorway
• Selecting gateway
(a) Selecting Doorway
Node i can become a doorway for cluster heads n and j if the following conditions are satisfied: i) cluster heads n and j are not two hops away; ii) there is no other cluster head m on the shortest path between n and j; iii) there is no other node m with higher priority than node i.

(b) Selecting Gateway
Node i can become a gateway for cluster heads n and j if the following conditions are satisfied: i) there is no cluster head or doorway between n and j; ii) there is no node with higher priority than node i.
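A C sketch of these two checks, under the simplifying assumption that the relevant hop counts and competing priorities are already known locally (CMCSR derives them from two-hop neighbor information; all names here are illustrative):

    /* Doorway: heads n and j are three hops apart (not two), node i lies
       on a shortest path between them with no cluster head on it, and no
       higher-priority candidate exists on those paths. */
    int is_doorway(double my_prio, int hops_between_heads,
                   int on_headless_shortest_path, double best_other_prio) {
        return hops_between_heads == 3
            && on_headless_shortest_path
            && my_prio > best_other_prio;
    }

    /* Gateway: the two endpoints (head-head or head-doorway) have no
       cluster head or doorway between them, and node i has the highest
       priority among the candidates in between. */
    int is_gateway(double my_prio, int no_head_or_doorway_between,
                   double best_other_prio) {
        return no_head_or_doorway_between && my_prio > best_other_prio;
    }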

3.2.5 Computing Cluster Size Ratio
The objective is to develop an enhancement for existing
heuristics to provide a contiguous balance of loading on the
elected cluster heads. Once a node is elected as cluster head
it is desirable for it to stay as a cluster head up to some
maximum specified amount of time, or budget. The budget
is a user defined constraint placed on the heuristic and can
be modified to meet the unique characteristics of the system,
i.e., the battery life of individual nodes. Some of the goals of
the heuristic are:
1. Minimize the number and size of the data structures
required to implement the heuristic,
2. Extend the cluster head duration budget based on an input
parameter,
3. Allow every node equal opportunity to become a cluster
head in time,
4. Maximize the stability in the network.

Data Structures
The data structures necessary for the heuristic consist of one local variable: the Physical ID (PID). The PID is the initial id given to a node and is unique for each individual node. However, this value changes with time to represent the electability of a node.

Basic Idea
The node-id load heuristic operates on the principle of load balancing. That is, the id of each non-cluster-head node cycles through the queue at a rate of 1 unit per run of the load-balancing heuristic. Each node has a minimum value of 0 and a maximum value of Max_Cluster_Size. Upon reaching Max_Cluster_Size, a node's value rotates to 0 on the next run of the cluster election heuristic. As the cluster election heuristics run, they use the priorities to determine the cluster heads of the network. A cluster head maintains its value until it has exhausted its cluster head duration budget. At that point it sets its value to 0, i.e., less than that of any other node, and becomes a normal node.
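A C sketch of this rotation, under the assumptions just stated (MAX_CLUSTER_SIZE and the names are illustrative, not from the paper):

    #define MAX_CLUSTER_SIZE 10   /* illustrative budget parameter */

    /* One run of the load-balancing heuristic for a single node:
       ordinary nodes advance their value by 1, wrapping at
       MAX_CLUSTER_SIZE; a cluster head that has exhausted its duration
       budget resets its value to 0 and steps down. */
    void rotate_value(int *value, int is_cluster_head, int *budget_left) {
        if (is_cluster_head) {
            if (--(*budget_left) <= 0)
                *value = 0;       /* lowest value: becomes a normal node */
        } else {
            *value = (*value + 1) % (MAX_CLUSTER_SIZE + 1);
        }
    }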

4. Performance Evaluation
We have conducted simulation experiments to evaluate the
performance of the proposed heuristic i.e. CMCSR. These
simulation results were then compared against Topology Management by Priority Ordering (TMPO) [15]. We assumed a variety of systems running with 10, 20, 40, 60, 80 and 100 nodes to simulate ad hoc networks with varying levels of node density. Two nodes are said to have a wireless link between them if they are within communication range of each other. Additionally, the span of a cluster, i.e. the maximum number of wireless hops between a node and its cluster head (d), was set to 2. The entire simulation was conducted in a 1150 * 1150 unit region. Initially, each node was assigned a unique node id and (x, y) coordinates within the region. The nodes were then allowed to move at random in any direction at a speed of not greater than half the wireless range of a node per second. The simulation duration was set to 2000 seconds, and the network was sampled every 2 seconds. At each sample time the proposed cluster size ratio and cluster election heuristic was run to determine cluster heads and their associated clusters. Each simulation run of 2000 seconds measures several performance metrics.
The main simulation metric measured was Cluster head
Duration, which provides a basis for evaluating the
performance of the proposed load-balancing heuristic.
For the purposes of these simulations we have set the cluster head budget to be a function of the maximum amount of work a node performs. That is, once a node becomes a cluster head, it remains a cluster head until it has exhausted its maximum work load, or until it loses out to another cluster head based on the rules of the cluster election heuristic.

The proposed CMCSR algorithm makes a noticeable difference in the cluster head duration (ranging from 4% to 28%). This shows that the load-balancing heuristic generates longer cluster head durations; it also produces much tighter and more deterministic responses (stability). These results are not surprising: once a cluster head is elected, it continues as cluster head for at most the programmed budget, which accounts for the longer cluster head durations we observe. The cluster size ratio heuristic is continuously rotating, moving ordinary nodes into position to become cluster heads. Therefore, once a cluster head's budget is exceeded, a different cluster head is elected and the process repeats. This produces the cluster-size-ratio effect of distributing the responsibility of being a cluster head among all nodes. We present below three graphs of our simulation results: first, the average cluster head duration; second, the average number of cluster heads; and finally, the improvement graph for the cluster head duration.

4.1 Nodes Vs Cluster Head Duration
Figure 2 shows the graph of the average cluster head duration. The x-axis shows the number of nodes and the y-axis the cluster head duration in seconds. The topology management is executed for 1800 seconds for each value of x and the values are noted; in total the program is executed for 18000 seconds. The diamond-shaped line indicates the cluster head duration without load, i.e. in the case of TMPO.
[Figure: cluster head duration in seconds (0 to 30) vs. number of nodes (10 to 100), comparing TMPO and CMCSR.]
Figure 2. Average cluster head duration vs. no. of nodes

Then the topology management is executed for 600 seconds for each value of x and the values are noted; in total the program is executed for 18000 seconds. The square-shaped line indicates the cluster head duration with load, i.e. in the case of CMCSR.

4.2 Nodes Vs Number of Cluster Heads
Figure 3 shows the graph of the average number of cluster heads formed during topology management. The topology management is executed for 1800 seconds for each value of x and the values are noted. The diamond-shaped line indicates the number of cluster heads formed during topology management without load (TMPO). Then the topology management is executed for 600 seconds for each value of x and the values are noted; in total the program is executed for 18000 seconds. The square-shaped line indicates the cluster heads formed during topology management with load (CMCSR).

[Figure: number of clusters (0 to 16) vs. number of nodes (10 to 100), comparing TMPO and CMCSR.]
Figure 3. Average no. of clusters

[Figure: average cluster head duration in seconds (0 to 25) vs. system execution time (900 to 3600 s), comparing TMPO and CMCSR.]
Figure 4. Average cluster head duration
4.3 Improvement in Cluster Head Duration
Figure 4 shows the graph of the average cluster head duration. The x-axis shows the system execution time in seconds and the y-axis the average cluster head duration in seconds. The graph is constructed under the following conditions: both TMPO and CMCSR are run for 900, 1800, 2700 and 3600 seconds with the total number of nodes set to 50. The diamond-shaped line indicates the cluster head duration without load and the square-shaped line the cluster head duration with load. The results in the above three graphs indicate that CMCSR outperforms TMPO.


5. Conclusions
Cluster-size load-balancing heuristics have been proposed for ad hoc networks. The cluster election heuristics favor the election of cluster heads based on node willingness and number of neighbors. The heuristic places a cluster-size budget on the contiguous amount of time that a node acts as cluster head. As seen from the simulation results, this heuristic produces longer cluster head durations while decreasing the cluster size and enhancing stability.

Our proposed CMCSR is a novel energy-aware topology management approach based on dynamic node priorities and cluster-size load in ad hoc networks. CMCSR consists of two parts that implement the MDS and CDS elections respectively. Compared to five prior heuristics for MDS and CDS elections in ad hoc networks, CMCSR offers four key advantages: i) CMCSR obtains the MDS and CDS of the network without any negotiation stage; only two-hop neighbor information is needed; ii) CMCSR allows nodes in the network to periodically re-compute their priorities, so as to balance the cluster head role and prolong the battery life of each node; iii) CMCSR introduces the willingness value of a node, which decides the probability of the node being elected into the MDS according to the battery life and mobility of the node; and iv) CMCSR introduces the doorway concept for the CDS in addition to the well-known gateway and cluster head concepts.
A key contribution of this work consists of converting the static attributes of a node, such as the node identifier, into a dynamic control mechanism that incorporates the three key factors for topology management in ad hoc networks: nodal battery life, mobility, and cluster-size load balancing. Although existing proposals have addressed all these aspects, CMCSR constitutes a more comprehensive approach.

References
[1] http://w3.antd.nist.gov/wahn_mahn.shtml.
[2] Tomas Johansson and Lenka Carr-Motyčková, "On Clustering in Ad Hoc Networks," First Swedish National Computer Networking Workshop, SNCNW 2003, 8-10 September, 2003.
[3] R.Pandian, P.Seethalakshmi and V.Ramachandran,
“Enhanced Routing Protocol for Video Transmission
over Mobile Adhoc Network,” Journal of Applied
Sciences Research 2(6): 336-340, INSInet Publication,
2006.
[4] L. Hu. “Topology Control for Multihop Packet Radio
Networks,”. IEEE Transactions on Communications,
41(10), Oct. 1993.
[5] S. Narayanaswamy, V. Kawadia, R. S. Sreenivas, and
P. R. Kumar, “Power Control in Ad-Hoc Networks:
Theory, Architecture, Algorithm and Implementation of
the COMPOW Protocol,” Proceedings of the European
Wireless Conference on Next Generation Wireless
Networks: Technologies, Protocols, Services and
Applications, pages 156-162, Florence, Italy, Feb. 25-
28, 2002.
[6] H. Takagi and L. Kleinrock, “Optimal Transmission
Ranges for Randomly Distributed Packet Radio
Terminals,” IEEE Transactions on Communications,
32(3),7, Mar. 1984.
[7] L. Li, V. Bahl, Y.M. Wang, and R. Wattenhofer,
“Distributed Topology Control for Power Efficient
Operation in Multihop Wireless Ad Hoc Networks,”
Proceedings of IEEE Conference on Computer
Communications (INFOCOM), Apr. 2001.
[8] R. Ramanathan and R. Rosales-Hain, “Topology
Control of Multihop Wireless Networks using Transmit
Power Adjustment,” Proceedings of IEEE Conference
on Computer Communications (INFOCOM), IEEE,
Mar. 26-30, 2000.
[9] R. Prakash, “Unidirectional Links Prove Costly in
Wireless Ad-Hoc Networks,” Proceedings of the
Discrete Algorithms and Methods for Mobile
Computing and Communications - DialM, Seattle, WA,
Aug. 20, 1999.
[10] S. Bandyopadhyay and E. J. Coyle, “An Energy
Efficient Hierarchical Clustering Algorithm for
Wireless Sensor Networks”, In Proc. INFOCOM 2003,
San Francisco, Apr, 2003.
[11] M. Maeda and Ed Callaway, "Cluster Tree Protocol
(ver.0.6)",http://www.ieee802.org/15/pub/2001/May01/
01189r0P80215_ TG4-Cluster-Tree-Network.pdf.
[12] L. Bao and J.J. Garcia-Luna-Aceves, “Transmission
Scheduling in Ad Hoc Networks with Directional
Antennas,” Proc. ACM Eighth Annual International
Conference on Mobile Computing and networking,
Atlanta, Georgia, USA, Sep, 23-28 2002.
[13] B. Chen, K. Jamieson, H. Balakrishnan, and R.
Morris, “Span: an Energy-Efficient Coordination
Algorithm for Topology Maintenance in Ad Hoc
Wireless Networks,” In Proc. 7th ACM MOBICOM,
Rome, Italy, Jul, 2001.
[14] C.C. Chiang, H.K. Wu, W. Liu, and M. Gerla,
“Routing in Clustered Multihop, Mobile Wireless
Networks with Fading Channel,” IEEE Singapore
International Conference on Networks SICON'97, pages
197-211, Singapore, Apr. 14-17, 1997.
[15] L. Bao and J.J. Garcia-Luna-Aceves, "Topology Management in Ad Hoc Networks," Proc. of the 4th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MOBIHOC), Annapolis, Maryland, USA, Jun. 2003.


Authors Profile

Mr D K L V Chandra Mouly received the M.Tech. (CSE) degree from S V University College of Engineering, Tirupati, India in 2007. Currently he is pursuing his Ph.D. (part-time) at S V University, Tirupati. His areas of interest are Computer Networks and Distributed Systems.

Dr Ch D V Subba Rao received his Ph.D. (CSE) from S V University, Tirupati, India in 2008. He has 18 years of teaching experience. At present, he is working as Associate Professor, Dept. of Computer Science and Engineering, S V University College of Engineering, Tirupati, India. His areas of interest are Distributed Systems, Operating Systems, Computer Networks and Programming Language Concepts.

Dr M M Naidu received his Ph.D. (IIT Delhi) in the year 1988. He has 32 years of teaching experience. Currently he is working as Professor in the Dept. of Computer Science and Engineering, S V University College of Engineering, Tirupati, India. His areas of interest include Software Engineering, Enterprise Resource Planning, Computer Networks and Computer Graphics.

Analysis and Proposal of a Geocast Routing
Algorithm Intended For Street Lighting System
Based On Wireless Sensor Networks
Rodrigo Palucci Pantoni1 and Dennis Brandão1

1 Eng. School of São Carlos, University of São Paulo, São Carlos, Brazil
[email protected] and [email protected]

Abstract: This work is part of a research project that studies and develops efficient technologies highlighted by the ReLuz (Brazilian National Program of Efficient Public Lighting and Traffic Lights) program. In this context, the development of a remote command and monitoring infrastructure for managing large areas is proposed. Specifically, this paper covers the research and analysis of routing algorithms that are efficient in terms of energy consumption, guarantee of package delivery and performance (minimum delay). Two new geocast routing algorithms are implemented and compared with other candidates applied to a street lighting system. The algorithms target IEEE 802.15.4 wireless sensor networks with multi-hop, short-range, low-cost communication. The results show that the proposed GGPSRII algorithm saves more energy and presents a good delivery rate and better performance than the compared algorithms.
Keywords: Lighting system, Geocast routing algorithm, IEEE
802.15.4, Wireless sensor network.
1. Introduction
Improvements in the quality of public lighting systems have a direct impact on the quality of life of a large part of the population, as well as on the efficiency and rationality of electric power use. This work is part of a research project that intends to integrate the study and development of efficient technologies in the scope of the Brazilian federal government program ReLuz (National Program of Efficient Public Lighting and Traffic Lights) [1]: a telecommand system for managing public lighting in large areas. The expected result is the economic operation of a public lighting system with an economy index superior to the indexes currently registered in the mentioned program, due to the efficiency provided by the use of high-performance electronic systems together with a telecommand system.
The telecommand system is composed of devices attached to the points of light, which are interconnected via a network, and software tools used for monitoring and control.
This work covers the research and analysis for choosing and implementing network routing algorithms that are efficient in terms of electric power consumption; that is, besides the rational use of electric power, there is also a concern with saving energy in network communication. The routing algorithm targets IEEE 802.15.4 mesh-based wireless sensor networks (WSN) with multi-hop communication based on short-range, low-cost links and minimum electric power consumption, with distributed routing for retransmitting information to the final destination. The IEEE 802.15.4 standard was chosen because of its minimal consumption, low cost and protocol simplicity.
The main challenge for this work is handling the limitations
of the IEEE 802.15.4 with the characteristics of dense
networks, trying to reach a balance among guarantee of
delivery, performance (minimum delay) and electric power
efficiency for the specific purpose of public lighting.
2. Correlated works of public lighting system
Several proposals found in the literature related to public lighting use PLC (Power Line Communication) technology [2, 3, 4]. However, this technology has a few limitations when applied to public lighting [3, 5], such as noise and impedance variations. Furthermore, the Brazilian infrastructure would have to be in perfect condition to achieve good operation, but in reality the infrastructure is old and would have to be rebuilt.
A company [6] applied the IEEE 802.11 physical and data link layers of the ISO/OSI model as the wireless network protocol, which has higher electric power consumption and high communication rates. In this solution, each point of light has a network point that communicates and sends information to an Internet point.
In [7], the use of ZigBee [8] was proposed as the protocol for the lighting system. [9] mentioned that the use of ZigBee is not suitable for public lighting systems, although this has not been quantitatively proved. They suggested the use of the network protocol 6LoWPAN [10] for IEEE 802.15.4 devices [11] together with GPS. 6LoWPAN is intended for devices with low cost, low power consumption and low communication rates, and its main goal is to integrate the Internet with WSNs naturally. However, the use of 6LoWPAN over the layers defined in IEEE 802.15.4 does not by itself define a routing algorithm. Nevertheless, such a proposal naturally matches the work proposed here.
3. Proposed public lighting system
To design the routing algorithm, it is necessary to briefly outline the entire architecture and its requirements. The requirements are: point supervision (device status, whether the device is connected to the network or not; battery power; lifetime estimation for
battery and lamp; and LED luminosity level), control (switching the lamp luminosity level on or off; switching on/off a lamp post, a selected segment, a street, a neighbourhood, a city, etc.; automatic programmed actions and freedom to actuate the devices through a remote tool) and diagnostics and alarms (triggering an event when a network, hardware or lamp failure occurs).
During each device's initialization, a GPS unit assigns the geographic coordinates of its position.
The user would be able to select a specific area on the map
besides the previously programmed area (for example,
selecting a segment from a street) to actuate (switch the
lights on or off). Figure 1 shows the area selected by the user
through the system supervision and control tool.


Figure 1. System Supervision and Control Screen

Mechanisms to automate the process of entering information into the public lighting system must be applied in order to make the process simpler and faster. Such mechanisms are not in the scope of this paper.
4. Correlated works of routing algorithms for
WSN
This section intends to review researches found in the
literature related to routing algorithms for WSN in general,
independent of the application.
Routing algorithms for mesh-based networks, such as AODV (Ad-Hoc On-Demand Distance Vector) [12], DSR (Dynamic Source Routing) [13], and DREAM (Distance Routing Effect Algorithm for Mobility) [13] were developed to provide mobility. These algorithms are reactive, that is, routes are determined by flooding through nodes searching for the addressee node when a flow of information (triggered by the upper layer) occurs. When the route is determined, it is stored in the memory of the participating nodes. This mechanism incurs high energy, performance and guarantee-of-delivery costs. Besides, devices would have to keep large routing tables in dense networks, which would be impossible considering that such devices have little memory available. It is also important to keep the overhead in the network package to a minimum, because the frame is limited to 127 bytes by the IEEE 802.15.4 specification at the data link layer.
In contrast to reactive algorithms, there are proactive algorithms, such as DSDV (Dynamic Destination-Sequenced Distance-Vector) [14] and OLSR (Optimized Link State Routing) [15]. Instead of building optimized routes only when they are necessary, proactive algorithms keep a matrix of connections always updated, which reflects the current network status and the channels available for data transfer. From the computational and electric power consumption points of view, these algorithms are too costly, especially when providing mobility or when a failure occurs.
Geographic routing algorithms use the geographic location of the devices as a reference, and the location can be obtained from a GPS. The great advantage of this type of routing is that routing tables are not necessary, because each device decides where to forward the package according to, for example, the smallest Euclidean distance to the destination coordinate.
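A minimal C sketch of this greedy decision (the types and names are illustrative; real implementations such as GPSR add a recovery mode for when no neighbor improves on the current distance):

    #include <math.h>

    typedef struct { float x, y; } Pos;   /* illustrative position type */

    static float dist(Pos a, Pos b) {
        return hypotf(a.x - b.x, a.y - b.y);   /* Euclidean distance */
    }

    /* Returns the index of the neighbor strictly closer to the
       destination than we are, minimizing that distance, or -1 if
       greedy forwarding fails (no neighbor improves on our own
       distance). */
    int greedy_next_hop(Pos self, Pos dst, const Pos *nbr, int n) {
        int best = -1;
        float best_d = dist(self, dst);
        for (int i = 0; i < n; i++) {
            float d = dist(nbr[i], dst);
            if (d < best_d) { best_d = d; best = i; }
        }
        return best;
    }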
Since those algorithms were designed for mobile devices, one of the steps in this type of routing is the transmission of "hello" messages to all neighbouring devices (in radio range): each device periodically sends packages with its identification (such as the network address) and its position, so that devices store the locations of their neighbours. They apply greedy routing [16] by forwarding a message to the neighbour that is relatively closer in distance to the final destination. For any variation of the greedy algorithm, it is important to define a discard criterion to prevent the message from being transmitted uninterruptedly over the network in case the specified destination is not located. Moreover, in cases where it is necessary to find a balance between performance and guarantee of delivery, the discard criterion must be applied even if there is a path to the final addressee.
To assure package delivery, greedy algorithms are
frequently used combined with recovering strategies,
providing two operation modes. Such strategies are used
when a package is discarded in “pure” greedy mode, in case
there is an obstacle or a non-operating network device, for
example.
The most prominent recovering strategy uses planar graphs. Basically, the idea is to represent the network as a planar graph and forward the message along the adjacent faces, which consequently forwards the package to the final destination. Such strategies are extensively studied, as in GFG (Greedy-Face-Greedy) [17], GPVFR (Greedy Path Vector Face Routing) [18], GPSR (Greedy Perimeter Stateless Routing) [19] and GOAFR++ (Greedy Other Adaptive Face Routing plus plus) [20].
In the GPSR algorithm, the recovering strategy is named
perimeter mode and uses the right hand rule to direct the
flow of network packages through the devices. In case the
distance from the device to the destination is smaller than
the distance to its neighbours, the algorithm returns to the
greedy mode.
The term unicast means a point-to-point connection where
data is sent from a sender to a receiver. The most
appropriate type of routing in our context, however, is the so-called geocast, which also requires the devices to know their geographic positions via GPS. Geocast algorithms deliver network messages to the devices in a specific geographic area, delivering a message from one device to many devices.
There are several routing algorithms developed for that purpose: some based on flooding messages, some on directed flooding, and some without flooding. Flooding algorithms find the path to the destination area the same way AODV does: the first package arriving at the destination area is broadcast to all nodes in the area. Directed flooding algorithms, on the other hand, define two types of areas, the destination area and the routing area. An example of directed flooding is the LBM (Location-Based Multicast) algorithm [21], which is executed as follows: the routing area is defined as an area in the direction of the destination area, and packages forwarded outside these two areas are discarded.
5. Proposed Routing Algorithms
The algorithms proposed in this study, without flooding, are
named GGPSR (Geocast Greedy Perimeter Stateless Routing
Protocol) and GGPSR II (Geocast Greedy Perimeter
Stateless Routing Protocol II). They consist of two parts:
modified GPSR to find the destination area and geocast to
broadcast the message to all addressee devices.
Instead of using the specific coordinate of a device as the destination, the central point of the destination area is calculated. The package is then forwarded to this point and, when it reaches the destination area, the first device receiving the message broadcasts it to all devices in the area. As soon as a device receives the broadcast message, it checks whether it has already received this message, by checking a sequence number, thereby avoiding unnecessary retransmissions. If the device has not received the message yet but is in the destination area, it accepts and rebroadcasts the message to the network.
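The following C sketch shows one plausible way to compute the central point of the destination area and to test membership in the geocast region, assuming a four-vertex polygon area; the vertex-average centroid and the ray-casting test are illustrative choices, not specified by the paper:

    typedef struct { float x, y; } Pt;    /* illustrative coordinate type */

    /* Central point of the four-vertex destination area (vertex average). */
    Pt area_center(const Pt v[4]) {
        Pt c = {0.0f, 0.0f};
        for (int i = 0; i < 4; i++) { c.x += v[i].x; c.y += v[i].y; }
        c.x /= 4.0f; c.y /= 4.0f;
        return c;
    }

    /* Ray-casting point-in-polygon test: is device position p inside
       the geocast region bounded by vertices v[0..3]? */
    int in_region(Pt p, const Pt v[4]) {
        int inside = 0;
        for (int i = 0, j = 3; i < 4; j = i++) {
            if (((v[i].y > p.y) != (v[j].y > p.y)) &&
                (p.x < (v[j].x - v[i].x) * (p.y - v[i].y) /
                       (v[j].y - v[i].y) + v[i].x))
                inside = !inside;
        }
        return inside;
    }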
Table 1 shows the GGPSR pseudo-algorithm in simplified form.
Considering the GPSR part of the proposed GGPSR algorithm only, this work suggests some modifications for lighting system applications. The first modification is related to the "hello" messages. For the discussed application, devices are fixed. Initially, it was assumed that a single "hello" message when the device is initialized would be enough. However, it is worth keeping this functionality, but with a much longer period than what is used for mobile devices. The reason to keep this periodicity is that a device can simply stop operating because of a permanent or temporary failure caused by an obstruction. Neighbor information on the devices affects network reliability, because each device is also a message router.
The second modification is related to storing the geographic positions of the neighbors (applied to GGPSR and GGPSRII). Originally, GPSR has three types of messages: "hello" messages, queries for destination locations, and data messages [19]. "Hello" messages are responsible for informing a new device's location to its neighbors; data messages are responsible for forwarding the packages; and, finally, query messages are responsible for obtaining the locations of addressees from one or more location databases given a device's unique identification (for example, the network address). Since the scenario has no mobility, query messages and location servers can be removed. Their functionality was implemented through "hello" messages that send the unique identification and the location. Besides, the supervision and control software knows the locations of all devices on the network. Thus, the package is sent by the system with the geographic coordinate of the destination, instead of carrying only the network address and requiring the current device to obtain the location through a location server.
The difference between GGPSR and GGPSRII consists only in the trigger condition of the "hello" messages. In GGPSR, the trigger is invoked at a pre-determined frequency (period). In GGPSRII, on the other hand, the trigger is invoked only if a data message does not reach the destination (geocast region). Thus, it is necessary to implement a confirmation message to report the data message forwarding failure. Table 2 shows the GGPSRII simplified pseudo-algorithm.
Table 1: GGPSR simplified pseudo-algorithm

// Initialization
For all devices
    Send_Broadcast_Hello_Neighbors ();

Start Hello_Timer (period);

If Hello_Timer_Expire
    Send_Broadcast_Hello_Neighbors ();

// Send
If (Packet.Destination_Position != myPosition && myPosition == UNICAST)
    ModifiedGPSR_Forward (Packet);
Else If (Packet.Destination_Position == myPosition && myPosition == GEOCAST) {
    If (Packet.seqNo_ < ReceivedSeqNo) {
        Broadcast_Neighborhood_Geocast_Region (Packet);
    }
}

// Receive
If (Packet.Destination_Position == myPosition && myPosition == UNICAST)
    ModifiedGPSR_Receive (Packet);
Else If (Packet.Destination_Position == myPosition && myPosition == GEOCAST) {
    If (Packet.seqNo_ < ReceivedSeqNo) {
        ModifiedGPSR_Receive (Packet);
    }
}

Regarding the destination area, it can have the shape of a four-vertex polygon, a circle or a point (in this last case, the communication is unicast).
The geographic position is represented through geodesic coordinates (latitude and longitude). Each coordinate is stored as a float, which in the C language takes 4 bytes and offers about seven decimal digits of precision. In relation to the value ranges, this size is more than enough: the degrees field of the longitude coordinate varies between -180 and 180, whereas latitude varies between -90 and 90.
Table 3 shows the C-language header structs of the packet types, including the "hello" messages (hdr_gpsr_hello) and data messages (hdr_gpsr_data) of the proposed protocol.

Table 2: GGPSRII simplified pseudo-algorithm

// Initialization
For all devices
    Send_Broadcast_Hello_Neighbors ();

// Send
If (Packet.Source_Position == myPosition && myPosition == UNICAST) {
    ModifiedGPSR_Forward (Packet);
    Start Confirmation_Timer (period);
}
Else If (Packet.Destination_Position != myPosition && myPosition == UNICAST)
    ModifiedGPSR_Forward (Packet);
Else If (Packet.Destination_Position == myPosition && myPosition == GEOCAST) {
    If (Packet.seqNo_ < ReceivedSeqNo) {
        Broadcast_Neighborhood_Geocast_Region (Packet);
    }
}

// Receive
If (Packet.Destination_Position == myPosition && myPosition == UNICAST)
    ModifiedGPSR_Receive (Packet);
Else If (Packet.Destination_Position == myPosition && myPosition == GEOCAST) {
    If (Packet.seqNo_ < ReceivedSeqNo) {
        ModifiedGPSR_Receive (Packet);
    }
}

// Any time
If Confirmation_Timer_Expire
    Send_Broadcast_Hello_Neighbors ();

In the "hello" packet, the field "type_" indicates the packet type (whether it is a "hello" or a data packet). The fields "x_" and "y_" are the source device's geodesic coordinates. The field "seqNo_" is used to control the receiving and rebroadcast actions, i.e., a device broadcasts a given "hello" message to its neighbours only once.
In the data packet, the fields "sx_" and "sy_" are the source device's geodesic coordinates. The field "ts_" is the timestamp, used to calculate latency. The fields "sx_GF_Failed_" and "sy_GF_Failed_" correspond to the coordinates where greedy mode failed, used to decide whether a packet in perimeter mode can return to greedy mode. The field "seqNo_" is used to control the receiving and rebroadcast actions in the geocast region. The remaining fields are the coordinates of the destination polygon's vertices.
Table 3: GGPSR and GGPSRII header packets
struct hdr_gpsr_hello {
u_int8_t type_;
float x_;
float y_;
int seqNo_;
};
struct hdr_gpsr_data {
u_int8_t type_;
u_int8_t mode_;
float sx_;
float sy_;
float ts_;
float sx_GF_Failed_;
float sy_GF_Failed_;
float dst_x1;
float dst_y1;
float dst_x2;
float dst_y2;
float dst_x3;
float dst_y3;
float dst_x4;
float dst_y4;
int seqNo_;
};
6. Simulation and results
The simulations were analyzed to obtain quantitative data in order to decide which routing algorithm would be most appropriate for the problem under discussion. First, the ZigBee protocol (which uses the AODV routing algorithm) was evaluated, because it is a consolidated standard that provides interoperability among several manufacturers and reduced production costs. For this reason, the AODV protocol is simulated and compared with the GPSR, GGPSR, GGPSRII and LBM protocols.
Simulations were performed on a simulator widely used in academia, ns-2.33 [22]. With respect to routing algorithms, ns-2.33 originally implemented only the AODV algorithm; hence, the authors implemented the GPSR, LBM, GGPSR and GGPSRII protocols. The configurations used for simulating all routing protocols are presented in Table 4. Devices are equally spaced, 50 meters apart vertically and horizontally, to simulate lamp posts in a simplified way. Figure 2 shows the locations of the devices. Data always flows from device 98 to all nodes on the "last line" (devices 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9), so device 98 is the network coordinator. In addition, ns-2 oTcl code was implemented to keep the energy of device 98 always at its initial value.
Table 4: Configurations for simulation

Network Interface          Phy/WirelessPhy/802_15_4
MAC                        Mac/802_15_4
IFQ                        Queue/DropTail/PriQueue
Link Layer                 LL
Antenna                    Antenna/OmniAntenna
Dimension X                170
Dimension Y                270
IFQLEN                     50
Propagation                Propagation/TwoRayGround
Phy/WirelessPhy Pt_        7.214e-3 (100 m)
Number of Devices          100
Duration                   1000 simulation time units
Transmission Power         0.3 mW
Reception Power            0.28 mW
Initial Energy             1 Joule
Packet size (less header)  64 bytes
Flow                       CBR (Constant Bit Rate)

The simulation covers the basic operation scenario: network traffic is generated every twelve hours (assuming the simulation unit is hours), that is, switching the lights on and off for a street segment, for example. The "hello" message periodicity of GPSR and GGPSR was configured to 12 hours, which is the data transmission period.
It is important to emphasize that in case of unicast
algorithms, ten messages are sent from device 98 to all
devices in the “last line”, as mentioned before. In case of
geocast algorithms, only one message is sent to all
addressees.
Figure 3 compares the electric power of all devices in the network, summed over the time interval. The AODV protocol was verified to be the least efficient, and GGPSRII the most efficient. It can be concluded that the use of the ZigBee protocol is strongly discouraged.

Figure 2. Simulated scenario
Figure 4 shows the delivery rate, in percent, for each evaluated protocol. For the AODV traffic, it was necessary to configure the beginning of the information flows in a non-simultaneous way, with a short period of time between transmissions: if all messages were sent to the ten devices simultaneously, no message would be successfully received. The same scheduling configuration was also applied to the GPSR routing for comparison, even though GPSR does not need this artifice.
Observe that, because of the AODV flooding characteristic, there is a very significant loss of messages, which further discredits the ZigBee protocol. For the same reason, Figure 5 shows a much higher average delay for the AODV algorithm.


Figure 3. Comparing the network total energy

Figure 4. Delivery rate

Figure 5. Delay average (performance)

Table 5 compares the packet headers. The GGPSRII header is smaller than that of "LBM - request", which is the largest header in bytes.
Table 5: Packet header comparison
Packet Header Size (bytes)
AODV - request 28
AODV - response 26
AODV - error 14
AODV - confirmation 4
LBM - request 62
LBM - response 52
LBM - error 14
LBM - confirmation 4
GPSR – hello 12
GPSR - data 34
GGPSRII - hello 12
GGPSRII - data 58
7. Conclusions and future works
The proposed GGPSRII routing algorithm proved to be efficient in terms of energy, and provides a good balance between performance and guarantee of delivery. The quantitative results showed that the research has a solid basis, that is, resources can be invested in hardware prototypes for the devices (an alternative to ZigBee) and in implementing the supervision and control software.
Later on, this algorithm will be implemented to transmit a
message to multiple areas, using the Fermat point concept
proposed by [23].
Final validation tests for the protocol will be simulated in
real scenarios obtained from lamp posts mapping, usually
archived in city halls.
The authors will also develop an alarm and network
diagnostic mechanism for the lighting system based on the
protocol of the proposed routing. Further on, application
layer services will be included to switch the lights, trigger
alarms, provide diagnostics, etc.
Acknowledgment
The authors gratefully acknowledge the academic support and research infrastructure of the Engineering School of São Carlos - University of São Paulo. The authors also acknowledge the important technical contributions of Smar International Corporation, and thank Prof. Tracy Camp for providing an implementation of the LBM algorithm, which was very helpful for the LBM implementation in this work.
References
[1] Reluz Program (Jan. 2010). “National Program of
Efficient Public Lighting and Traffic Lights”. Available:
http://www.eletrobras.gov.br/ EM_Programas_Reluz
[2] Ei-Shirbeeny, E.H.T.; Bakka, M.E.. “Experimental pilot
project for automating street lighting system in Abu Dhabi
using powerline communication”. Proceedings of the 10th
IEEE International Conference on Electronics, Circuits
and Systems. Vol. 2, p.743 – 746, Dec. 2003
[3] Chueiri, I.J.; Bianchim, C.G. “Sistema de Comando e
Controle de Potência em Grupo para Iluminação Pública”.
BR n.PI0201334-7, 2002.
[4] Sungkwan C.; Dhingra, V. “Street lighting control
based on LonWorks power line communication Power
Line Communications and Its Applications”. IEEE
Symposium, 2008.
[5] Sutterlin, P.; Downey, W. (1999). “A power line
communication tutorial – challenges and technologies”,
Technical Report, Echelon Corporation, 1999.
[6] Streetlight Intelligence (Jan. 2010). Available:
http://www.streetlightiq.com
[7] Barriquello, C.H. ; Garcia, J.M. ; Corrêa, C. ; Menezes,
C.V. ; Campos, A. ; Do Prado, R.N. “Sistema Inteligente
Baseado em Zigbee para Iluminação Pública com
Lâmpadas de LEDS”. In: XVII Congresso Brasileiro de
Automática. Anais do XVII CBA. Juiz de Fora, 2008.
[8] Zigbee. ZigBee PRO Specification, ZigBee Alliance.
2007.
[9] Denardin, G.W.; Barriquello, C.H.; Campos, A. Do
Prado, R.N. “An Intelligent System for Street Lighting
Monitoring and Control”. 10° Congresso Brasileiro de
Eletrônica de Potência. Brasil, Bonito, 2009.
[10] Kushalnagar, N.; Montenegro, G.; Schumacher, C..
“IPv6 over Low-Power Wireless Personal Area Networks
(6LoWPANs): Overview, Assumptions, Problem
Statement, and Goals”. Request for Comments: 4919,
2007.
[11] IEEE 802.15.4. “Wireless Medium Access Control
(MAC) and Physical Layer (PHY) Specifications for Low-
Rate Wireless Personal Area Networks”, IEEE Computer
Society, 2006.
[12] Perkins, C. E.; Belding-Royer, E. M.; Das, S. R.. “Ad
Hoc On-Demand Distance Vector Routing”, Request for
Comments: 3561, 2003.
[13] Basagni, S.; Chlamtac, I.; Syrotiuk, V.R.; Woodward,
B.A. “A distance routing effect algorithm for mobility
(dream)”. In Proceedings of ACM/IEEE MobiCom ’98,
1998.
[14] Perkins, C.; Bhagwat, P. “Highly Dynamic Destination
Sequenced Distance-Vector Routing for Mobile
Computers”. Comp. Commun. 1994.
[15] Clausen, T.; Jacquet, P. “Optimized Link State Routing
Protocol”. Request for Comments: 3626, 2003.
[16] Finn, G.G. “Routing and addressing problems in large
metropolitan scale internetworks”. Technical Report
ISI/RR-87-180, ISI, 1987.
[17] Bose, P.; Morin, P.; Stojmenovic, I.; Urrutia, J. "Routing with guaranteed delivery in ad hoc wireless networks". In: Proceedings of the 3rd International Workshop on Discrete Algorithms and Methods for Mobile Computing and Communications, ACM Press, 1999.
[18] Leong, B.; Mitra, S.; Liskov. B. “Path vector face
routing: Geographic routing with local face information”.
In Proceedings of the IEEE Conference on Network
Protocols, 2005.
[19] Karp, B; Kung, H. T. (2000) GPSR: greedy perimeter
stateless routing for wireless networks. in Proceedings of
the 6th ACM/IEEE MobiCom. 2000, pp. 243-254, ACM
Press.
[20] Kuhn,F.;Wattenhofer,R.;Zhang,Y.;Zollinger, A.
“Geometric ad-hoc routing: of theory and practice”. in
Proceedings of the 22nd annual symposium on principles
of distributed computing, 2003.
[21] Ko, Y.; Vaidya, N.H.”Geocasting in mobile ahoc
networks: Location-based multicast algnrithms” .In
Proceedings of WMCSA, pages 101-110, 1999.
[22] Network Simulator NS2 (Jan. 2010). Available:
http://www.isi.edu/nsnam/ns
[23] Lee,S.;Ko,Y. “Geometry-driven Scheme for Geocast
Routing in Mobile Ad Hoc Networks”. The 2006 IEEE
63rd Vehicular Technology Conference (VTC),
Melbourne, Australia, 2006.

Authors Profile

Rodrigo Palucci Pantoni, R&D Systems Analyst, received his
Computer Science degree in 2000 and subsequently received his
M.S. in 2006 at the University of São Paulo (USP). He is attending
the Ph.D. course at the same university as part of his job at the
Smar R&D department, in the area of software development for
automation control and fieldbuses. He joined Smar in 2000, working
in the Smar R&D department, where he conducts research and
development of host systems, including a Fieldbus Foundation
Asset Management system and a Configurator system. He now teaches
computer networks in the Information Systems course at University
Dr. Francisco Maeda.


Dennis Brandão received his Ph.D. degree in mechanical
engineering from the University of São Paulo in 2005. He now
teaches “Industrial Automation” at the Department of Electrical
Engineering of the same university. His research activities are
mainly in the area of fieldbus technology and application, with a
particular interest in distributed systems and continuous process
control.






Routing Protocols in Wireless Networks

1Vineet Agrawal, 2Dr. Yashpal Singh, 3Manish Varshney, 4Vidushi Gupta

1Reader, Deptt. of Computer Science & Engg., RBCET Bareilly, India. Email: [email protected]
2Reader, Deptt. of Computer Science & Engg., BIET Jhansi, India. Email: [email protected]
3Sr. Lecturer, Deptt. of Computer Science & Engg., SRMSWCET Bareilly, India. Email: [email protected]
4Lecturer, Deptt. of Computer Science & Engg., SRMSWCET Bareilly, India. Email: [email protected]



Abstract: An ad hoc mobile network is a group of mobile nodes
that are dynamically and arbitrarily located in such a manner
that the interconnections between nodes are capable of changing
on a continual basis. In order to facilitate communication within
the network, a routing protocol is used to discover routes
between nodes. The primary goal of such an ad hoc network
routing protocol is correct and efficient route establishment
between a pair of nodes so that messages may be delivered in a
timely manner. In this paper we examine routing protocols for
ad hoc networks and evaluate them based on a given set of
parameters. The scope of the paper is to test the routing
performance of three routing protocols (AODV, DSR and DSDV) in
variable network sizes. Various types of scenarios are generated,
each protocol is simulated on each of them, and their parameters,
such as throughput, packet delivery ratio and delay, are compared.
The performance differentials are analyzed using varying pause
time, constant nodes and dynamic topology. Based on the
observations, we draw conclusions about which protocol performs
better in which conditions.

Keywords: Ad hoc, table-driven, demand-driven, AODV, DSDV, DSR, MANET.
1. Introduction
Mobile hosts and wireless networking hardware are
becoming widely available, and extensive work has been
done recently in integrating these elements into
conventional networks such as the Internet. Oftentimes,
however, mobile users will want to communicate in
situations in which no fixed wired infrastructure such as this
is available, either because it may not be economically
practical or physically possible to provide the necessary
infrastructure or because the expediency of the situation
does not permit its installation. In networks comprised
entirely of wireless stations, communication between source
and destination nodes may require traversal of multiple
hops, as radio ranges are finite. Since their emergence in the
1970s, wireless networks have become increasingly popular in the
computing industry.
particularly true within the past decade, which has seen
wireless networks being adapted to enable mobility. There
are currently two variations of mobile wireless networks.
The first is known as the infrastructure network (i.e., a
network with fixed and wired gateways).

Figure 1. A simple ad hoc network of three wireless mobile
hosts

The bridges for these networks are known as base stations.
A mobile unit within these networks connects to, and
communicates with, the nearest base station that is within
its communication radius. As the mobile travels out of range
of one base station and into the range of another, a
“handoff” occurs from the old base station to the new, and
the mobile is able to continue communication seamlessly
throughout the network. Typical applications of this type of
network include office wireless local area networks
(WLANs). The second type of mobile wireless network is the
infrastructureless mobile network, commonly known as an ad hoc
network. Infrastructureless networks have no fixed routers; all
nodes are capable of movement and can be connected dynamically in
an arbitrary manner. Nodes of these networks function as routers
which discover and maintain routes to other nodes in the network.
Example
applications of ad hoc networks are emergency search-and-
rescue operations, meetings or conventions in which persons
wish to quickly share information, and data acquisition
operations in inhospitable terrain. A community of ad hoc
network researchers has proposed, implemented, and
measured a variety of routing algorithms for such networks.
The observation that topology changes more rapidly on a
mobile, wireless network than on wired networks, where the
use of Distance Vector (DV), Link State (LS), and Path
Vector routing algorithms is well established, motivates this
body of work. DV and LS algorithms require continual distribution
of a current map of the entire network’s topology to all routers.
DV’s Bellman-Ford approach
constructs this global picture transitively; each router
includes its distance from all network destinations in each of
its periodic beacons. LS’s Dijkstra approach directly floods
announcements of the change in any link’s status to every
router in the network. Small inaccuracies in the state at a
router under both DV and LS can cause routing loops or
disconnection [7,9]. When the topology is in constant flux,
as under mobility, LS generates torrents of link status
change messages, and DV either suffers from out-of-date
state [4], or generates torrents of triggered updates.
The two dominant factors in the scaling of a routing algorithm
are the rate of change of the topology and the number of routers
in the routing domain. Both factors affect
the message complexity of DV and LS routing algorithms:
intuitively, pushing current state globally costs packets
proportional to the product of the rate of state change and
number of destinations for the updated state. Hierarchy is
the most widely deployed approach to scale routing as the
number of network destinations increases. Without
hierarchy, Internet routing could not scale to support today’s
number of Internet leaf networks. An Autonomous System
runs an intra-domain routing protocol inside its borders, and
appears as a single entity in the backbone inter-domain
routing protocol, BGP. This hierarchy is based on well-
defined and rarely changing administrative and topological
boundaries. It is therefore not easily applicable to freely
moving ad-hoc wireless networks, where topology has no
well-defined AS boundaries, and routers may have no
common administrative authority. Caching has come to
prominence as a strategy for scaling ad-hoc routing
protocols: instead of continually pushing complete topology, routers running these protocols request
topological information in an on-demand fashion as
required by their packet forwarding load, and cache it
aggressively. When their cached topological information
becomes out-of-date, these routers must obtain more current
topological information to continue routing successfully.
Because a packet may need to traverse several hops (multi-hop)
before reaching the destination, a routing protocol is needed.
The routing protocol has two
main functions, selection of routes for various source-
destination pairs and the delivery of messages to their
correct destination. The second function is conceptually
straightforward using a variety of protocols and data
structures (routing tables). This paper is focused on
selecting and finding routes.
This paper examines routing protocols designed for these
wireless networks by first describing the operation of each of
the protocols and then comparing their various
characteristics. The remainder of the paper is organized as
follows. The next section presents a discussion of two
subdivisions of ad hoc routing protocols. Another section
discusses current table-driven protocols, while a later section
describes those protocols which are classified as on-demand.
The paper then presents Simulation parameters and the
performance evaluation including, a general comparison of
table-driven and on-demand protocols. Finally, the last
section concludes the paper.

2. Ad Hoc Routing Protocols
Since the advent of Defense Advanced Research Projects
Agency (DARPA) packet radio networks in the early 1970s
[1], several protocols have been developed for ad hoc mobile
networks. Such protocols must deal with the typical
limitations of these networks, which include high power
consumption, low bandwidth, and high error rates. As
shown in Fig.2, these routing protocols may generally be
categorized as:
• Table-driven (proactive)
• Source-initiated / demand-driven (reactive)

2.1 Table-Driven Routing Protocols
Table-driven routing protocols attempt to maintain reliable,
up-to-date routing information from each node to every
other node in the network. These protocols require each
node to maintain one or more tables to store routing
information, and they react to changes in network topology
by propagating updates throughout the network in order to
maintain a consistent network view. The areas in which they
differ are the number of necessary routing-related tables and
the methods by which changes in network structure are
broadcast. The following sections discuss some of the
existing table-driven ad hoc routing protocols.

2.1.1 Destination-Sequenced Distance-Vector Routing
The Destination-Sequenced Distance-Vector Routing
protocol (DSDV) described in [2] is a table-driven algorithm
based on the classical Bellman-Ford routing mechanism [3].
The improvements made to the Bellman-Ford algorithm include
freedom from loops in routing tables.

Figure 2.Categorization of ad hoc routing protocols

DSDV [2] is a hop-by-hop distance vector routing protocol in
which every mobile node maintains a routing table that records
all of the possible destinations within the network and, for each
destination, the next hop and the number of hops to it. Each
entry is marked with a sequence number
assigned by the destination node. The sequence numbers
enable the mobile nodes to distinguish stale routes from new
ones, thereby avoiding the formation of routing loops.
Routing table updates are periodically transmitted
throughout the network in order to maintain table
consistency. DSDV basically is distance vector with small
adjustments to make it better suited for ad-hoc networks.
These modifications consist of triggered updates that will
take care of topology changes in the time between
broadcasts. To reduce the amount of information in these
packets there are two types of update messages defined: full
and incremental dump. The full dump carries all available
routing information and the incremental dump that only
carries the information that has changed since the last
dump. Because DSDV is dependent on periodic broadcasts it
needs some time to converge before a route can be used.
This convergence time can probably be considered negligible in
a static wired network, where the topology is not changing
so frequently. In an ad-hoc network, on the other hand,
where the topology is expected to be very dynamic, this
convergence time will probably mean a lot of dropped packets
before a valid route is detected. The periodic broadcasts also
add a large amount of overhead into the network. New route
broadcasts contain the address of the destination, the
number of hops to reach the destination, the sequence
number of the information received regarding the
destination, as well as a new sequence number unique to the
broadcast [2].
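
To make the update rule concrete, the route-selection logic can be sketched as follows (a minimal Python illustration of the behaviour described above; the data layout and names are our own assumptions, not part of the DSDV specification):

    # Sketch of the DSDV route-update rule: an advertised route replaces the
    # current entry when it carries a newer destination sequence number, or
    # an equal sequence number with a smaller hop count.
    from dataclasses import dataclass

    @dataclass
    class Route:
        next_hop: str
        hops: int
        seq_no: int  # sequence number assigned by the destination node

    routing_table: dict[str, Route] = {}

    def on_advertisement(dest: str, via: str, hops: int, seq_no: int) -> None:
        current = routing_table.get(dest)
        if (current is None
                or seq_no > current.seq_no              # fresher information wins
                or (seq_no == current.seq_no and hops < current.hops)):
            routing_table[dest] = Route(next_hop=via, hops=hops, seq_no=seq_no)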

2.1.2 The Wireless Routing Protocol
The Wireless Routing Protocol (WRP) described in [5] is a
table-based protocol with the goal of maintaining routing
information among all nodes in the network. To describe
WRP, we model a network as an undirected graph G = (V, E), where
V is the set of nodes and E is the set of links (or edges)
connecting the nodes. Each node
represents a router and is a computing unit involving a
processor, local memory and input and output queues with
unlimited capacity. In a wireless network, a node has radio
connectivity with multiple nodes and a single physical radio
link connects a node with many other nodes. Each node in
the network is responsible for maintaining four tables:
• Distance table
• Routing table
• Link-cost table
• Message retransmission list (MRL) table
Each entry of the MRL contains the sequence number of the update
message, a retransmission counter, an acknowledgment-required
flag vector with one entry per neighbor, and a list of updates
sent in the update message. A link is assumed to exist between
two nodes only if there is radio connectivity between the two
nodes and they can exchange update messages reliably with
a certain probability of success. The MRL records which
updates in an update message need to be retransmitted and
which neighbors should acknowledge the retransmission [5].
Mobiles inform each other of link changes through the use
of update messages. An update message is sent only between
neighboring nodes and contains a list of updates (the
destination, the distance to the destination, and the
predecessor of the destination), as well as a list of responses
indicating which mobiles should acknowledge (ACK) the
update. Mobiles send update messages after processing
updates from neighbors or detecting a change in a link to a
neighbor. In the event of the loss of a link between two
nodes, the nodes send update messages to their neighbors.
The neighbors then modify their distance table entries and
check for new possible paths through other nodes. Nodes
learn of the existence of their neighbors from the receipt of
acknowledgments and other messages. If a node is not
sending messages, it must send a hello message within a
specified time period to ensure connectivity. Otherwise, the
lack of messages from the node indicates the failure of that
link; this may cause a false alarm. Because of the broadcast
nature of the radio channel, a node can send a single update
message to inform all its neighbors about changes in its
routing table; however, each such neighbor sends an ACK to
the originator node. To ensure that connectivity with a
neighbor still exists when there are no recent transmissions
of routing table updates or ACKs, periodic update messages
without any routing table changes (null update messages)
are sent to the neighbors. The time interval between two
such null update messages is the HelloInterval. If a node
fails to receive any type of message from a neighbor for a
specified amount of time (e.g., three or four times the
HelloInterval known as the Router Dead-Interval), the node
must assume that connectivity with that neighbor has been
lost. When a mobile receives a hello message from a new
node, that new node is added to the mobile’s routing table,
and the mobile sends the new node a copy of its routing
table information. Part of the novelty of WRP stems from
the way in which it achieves loop freedom. In WRP, routing
nodes communicate the distance and second-to-last hop
information for each destination in the wireless network.
WRP belongs to the class of path-finding algorithms with an
important exception. It avoids the “count-to-infinity”
problem [6] by forcing each node to perform consistency
checks of predecessor information reported by all its
neighbors. This ultimately (although not instantaneously)
eliminates looping situations and provides faster route
convergence when a link failure event occurs.
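
For illustration, the four tables and an MRL entry described above might be organized as in the following Python sketch (field names and layout are our assumptions; the exact contents are defined in [5]):

    # Illustrative data structures for the four WRP tables and an MRL entry.
    from dataclasses import dataclass

    @dataclass
    class MRLEntry:
        seq_no: int                    # sequence number of the update message
        retransmit_counter: int
        ack_required: dict[str, bool]  # acknowledgment-required flag per neighbor
        updates: list                  # list of updates sent in the update message

    distance_table = {}   # (destination, neighbor) -> distance reported by that neighbor
    routing_table = {}    # destination -> (distance, predecessor, next hop)
    link_cost_table = {}  # neighbor -> cost of the link to that neighbor
    mrl: list[MRLEntry] = []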

2.2 Source-Initiated On-Demand Routing

A different approach from table-driven routing is source-initiated
on-demand routing. This type of routing creates routes only when
desired by the source node. When a node requires a route to
a destination, it initiates a route discovery process within the
network. This process is completed once a route is found or
all possible route permutations have been examined. Once a
route has been established, it is maintained by a route
maintenance procedure until either the destination becomes
inaccessible along every path from the source or until the
route is no longer desired.

2.2.1 Ad Hoc On-Demand Distance Vector Routing
(AODV)
The Ad Hoc On-Demand Distance Vector routing protocol
(AODV) is an improvement of the Destination-Sequenced
Distance-Vector routing protocol (DSDV). DSDV is efficient mainly
in smaller ad-hoc networks: since it requires periodic
advertisement and global dissemination of connectivity
information for correct operation, it leads to recurrent
system-wide broadcasts, and the size of DSDV ad-hoc networks is
therefore strongly limited. When using DSDV, every mobile node
also needs to maintain a complete list of routes for each
destination within the mobile network. The advantage of AODV is
that it tries to reduce the number of required broadcasts. It
creates routes on an on-demand basis, as opposed to maintaining a
complete list of routes for each destination. Therefore, the
authors of AODV classify it as a pure on-demand route acquisition
system [3].

2.2.1.1 Path Discovery Process
When trying to send a message to a destination node
without knowing an active route to it, the sending node
will initiate a path discovery process. A route request
message (RREQ) is broadcasted to all neighbors, which
continue to broadcast the message to their neighbors, and so
on. The forwarding process is continued until the
destination node is reached or until an intermediate node
knows a route to the destination that is new enough. To
ensure loop-free and most recent route information, every
node maintains two counters: sequence number and
broadcast_id. The broadcast_id and the address of the
source node uniquely identify a RREQ message.
broadcast_id is incremented for every RREQ the source
node initiates. An intermediate node can receive multiple
copies of the same route request broadcast from various
neighbors. In this case – if a node has already received a
RREQ with the same source address and broadcast_id – it
will discard the packet without rebroadcasting it.
When an intermediate node forwards the RREQ message, it
records the address of the neighbor from which it received
the first copy of the broadcast packet. This way, the reverse
path from all nodes back to the source is being built
automatically. The RREQ packet contains two sequence
numbers: the source sequence number and the last
destination sequence number known to the source. The
source sequence number is used to maintain “freshness”
information about the reverse route to the source while the
destination sequence number specifies how fresh a route to the
destination must be before it is accepted by the source [3].
When the route request broadcast reaches the
destination or an intermediate node with a fresh enough
route, the node responds by sending a unicast route reply
packet (RREP) back to the node from which it received the
RREQ. The packet is thus sent back along the reverse path
built during broadcast forwarding. A route is considered
fresh enough, if the intermediate node’s route to the
destination node has a destination sequence number which
is equal or greater than the one contained in the RREQ
packet. As the RREP is sent back to the source, every
intermediate node along this path adds a forward route entry
to its routing table. The forward route is set active for some
time indicated by a route timer entry. The default value is
3000 milliseconds, as referred in the AODV RFC [4]. If the
route is no longer used, it will be deleted after the specified
amount of time. Since the RREP packet is always sent back along
the reverse path established by the route request, AODV
only supports symmetric links.
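
The path discovery logic can be summarized in the following simplified Python sketch (the rreq fields and node methods are hypothetical stand-ins for illustration, not the message formats of the AODV RFC [4]):

    # Sketch of RREQ handling at an AODV node. Duplicates are identified by
    # the (source address, broadcast_id) pair; the reverse path is built from
    # the neighbor that delivered the first copy of the request.
    seen = set()        # (source, broadcast_id) pairs already processed
    reverse_route = {}  # source -> neighbor the first RREQ copy came from

    def on_rreq(rreq, from_neighbor, node):
        key = (rreq.source, rreq.broadcast_id)
        if key in seen:
            return                       # duplicate copy: discard silently
        seen.add(key)
        reverse_route[rreq.source] = from_neighbor

        route = node.routes.get(rreq.dest)
        fresh = route is not None and route.seq_no >= rreq.dest_seq_no
        if node.address == rreq.dest or fresh:
            # unicast the RREP back along the reverse path
            node.unicast_rrep(to=from_neighbor, rreq=rreq)
        else:
            node.broadcast(rreq)         # keep flooding towards the destination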


Figure.3. AODV Path Discovery Process.

2.2.2 Dynamic Source Routing
The Dynamic Source Routing (DSR) protocol presented in [8] is an
on-demand routing protocol based on the concept of source
routing: the sender determines the complete sequence of nodes
through which a packet must pass. DSR [3][12][13] belongs to the
class of reactive protocols and allows nodes to dynamically
discover a route across multiple network hops to any destination.
DSR uses no periodic routing messages, thereby
reducing network bandwidth overhead, conserving battery
power and avoiding large routing updates throughout the
ad-hoc network. Instead DSR relies on support from the
MAC layer (the MAC layer should inform the routing
protocol about link failures). The two basic modes of
operation in DSR are route discovery and route
maintenance.

Figure 4. AODV Route Maintenance by using Link
failure Notification Message

2.2.2.1 Route Discovery
Route discovery allows any host in the ad hoc network to
dynamically find out a route to any other host in the ad hoc
network, whether directly reachable within wireless
transmission range or reachable through one or more
intermediate network hops through other hosts. A host
initiating a route discovery broadcasts a route request packet
which may be received by those hosts within wireless
transmission range of it. The route request packet identifies
the host, referred to as the target of the route discovery, for
which the route is requested. If the route discovery is
successful the initiating host receives a route reply packet
listing a sequence of network hops through which it may
reach the target. In addition to the address of the original
initiator of the request and the target of the request, each
route request packet contains a route record, in which is
accumulated a record of the sequence of hops taken by the
route request packet as it is propagated through the ad hoc
network during this route discovery. Each route request
packet also contains a unique request id, set by the initiator
from a locally-maintained sequence number. In order to detect
duplicate route requests, each host in the ad hoc network
maintains a list of the ⟨initiator address, request id⟩ pairs
that it has recently received on any route request.
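
A simplified sketch of this route-request forwarding is given below (packet fields and helpers are hypothetical illustrations; the actual header formats are defined in [8][9]):

    # Sketch of DSR route-request handling with a route record and duplicate
    # detection via <initiator address, request id> pairs.
    recently_seen = set()

    def on_route_request(req, node):
        if ((req.initiator, req.request_id) in recently_seen
                or node.address in req.route_record):
            return                                    # duplicate: discard
        recently_seen.add((req.initiator, req.request_id))
        if node.address == req.target:
            # reply with the accumulated sequence of hops (the source route)
            node.send_route_reply(req.initiator, req.route_record + [node.address])
        else:
            req.route_record.append(node.address)     # record this hop
            node.broadcast(req)                       # propagate the discovery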

2.2.2.2 Route Maintenance
Route maintenance can be accomplished by two different
processes:
• Hop-by-hop acknowledgement at the data link layer
• End-to-end acknowledgements
Hop-by-hop acknowledgement at the data link layer allows
an early detection and retransmission of lost or corrupt
packets. If the data link layer determines a fatal
transmission error (for example, because the maximum
number of retransmissions is exceeded), a route error packet is
sent back to the sender of the packet. The route error packet
contains two pieces of information: the address of the node
detecting the error and the address of the host to which it was
trying to transmit the packet. Whenever a node
receives a route error packet, the hop in error is removed
from the route cache and all routes containing this hop are
truncated at that point. End-to-end acknowledgement may
be used, if wireless transmission between two hosts does not
work equally well in both directions. As long as a route
exists by which the two end hosts are able to communicate,
route maintenance is possible. There may be different routes
in both directions. In this case, replies or acknowledgements
on the application or transport layer may be used to indicate
the status of the route from one host to the other. However,
with end-to-end acknowledgement it is not possible to find
out the hop which has been in error.
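
The cache truncation triggered by a route error packet can be sketched as follows (a minimal illustration, assuming the route cache holds each route as a list of node addresses):

    # On a route error, remove the broken hop from every cached route by
    # truncating the route at the node that detected the error.
    route_cache = []  # each route is a list of node addresses

    def on_route_error(from_node, to_node):
        for i, route in enumerate(route_cache):
            for j in range(len(route) - 1):
                if route[j] == from_node and route[j + 1] == to_node:
                    route_cache[i] = route[:j + 1]  # keep the prefix before the bad hop
                    break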

3. Simulation and Its Parameters

3.1 Methodology
The main focus of the project was to test the ability
of different routing protocols to respond on network
topology changes (for instance link breaks, node movement,
and so on). Furthermore the focus was set on different
network sizes, varying number of nodes and area sizes. Our
investigations did not include the protocol’s operation under
heavy load, e.g. its operation in congestion situations.
Therefore only rather small packet sizes and one source
node were selected. As in many other related studies, our
protocol evaluations are based on the simulation of 50
wireless nodes forming an ad hoc network, moving about
over a rectangular (1500m X 300m) flat space for 200
seconds of simulated time. We chose a rectangular space in
order to force the use of longer routes between nodes than
would occur in a square space with equal node density. In
order to enable direct, fair comparisons between the
protocols, it was critical to challenge the protocols with
identical loads and environmental conditions. Each run of
the simulator accepts as input a scenario file that describes
the exact motion of each node and the exact sequence of
packets originated by each node, together with the exact
time at which each change in motion or packet origination
is to occur. We pre-generated 9 different scenario files with
varying movement patterns and traffic loads, and then ran
all three routing protocols against each of these scenario
files. Since each protocol was challenged in an identical
fashion, we can directly compare the performance results of
the protocols.

3.2 Mobility Model
An important factor in mobile ad-hoc networks is the
movement of nodes, which is characterized by speed,
direction and rate of change. Mobility in the “physical
world” is unpredictable, often unrepeatable, and it has a
dramatic effect on the protocols developed to support node
movement. Therefore, different “synthetic” types of mobility
models have been proposed to simulate new protocols.
Synthetic models aim to represent node movement realistically
without using network traces. Nodes in the simulation
move according to a model that we call the “random
waypoint” model. The movement scenario files we used for
each simulation are characterized by a pause time. Each
node begins the simulation by remaining stationary for
pause time seconds. It then selects a random destination in
the 1500m x 300m space and moves to that destination at a
speed distributed uniformly between 0 and some maximum
speed. Upon reaching the destination, the node pauses again
for pause time seconds, selects another destination, and
proceeds there as previously described, repeating this
behavior for the duration of the simulation. Each simulation
ran for 200 seconds of simulated time. We ran our
simulations with movement patterns generated for 9
different pause times: 2, 10, 15, 25, 35, 50, 75, 85, 100
seconds. A pause time of 0 seconds corresponds to
continuous motion, and a pause time of 200 (the length of
the simulation) corresponds to no motion. Hence reducing
pause time increases mobility. In this way we put our
protocols in networks of varying mobility. Because the
performance of the protocols is very sensitive to movement
pattern, we generated scenario files with 9 different pause
times. All routing protocols were run on the same 9 scenario
files. We report in this paper data from simulations using a
maximum node speed of 20 meters per second (average
speed 10 meters per second).
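
A minimal generator for this movement model is sketched below (an illustration using the parameters of this study; the actual scenario files were pre-generated and fed to the simulator as described above):

    # Random-waypoint movement: pause, pick a uniform destination in the
    # 1500 m x 300 m area, move at a uniform speed in (0, 20] m/s, repeat.
    import random

    def random_waypoint(sim_time=200.0, pause=10.0, max_speed=20.0,
                        width=1500.0, height=300.0):
        t = 0.0
        x, y = random.uniform(0, width), random.uniform(0, height)
        waypoints = []
        while t < sim_time:
            t += pause                                   # remain stationary
            nx, ny = random.uniform(0, width), random.uniform(0, height)
            speed = random.uniform(1e-3, max_speed)      # avoid zero speed
            waypoints.append((t, nx, ny, speed))         # departure record
            t += ((nx - x) ** 2 + (ny - y) ** 2) ** 0.5 / speed
            x, y = nx, ny
        return waypoints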

3.3 Communication Model
As the purpose of our simulation was to compare the
performance of each routing protocol, we selected our traffic
sources to be constant bit rate (CBR) sources. When
defining the parameters of the communication model, we
experimented with sending rates of 3 packets per second,
networks containing maximum connection of 35, and packet
sizes of 512 bytes. All communication patterns were peer-to-
peer, and connections were started at times uniformly
distributed between 0 and 180 seconds. The routing protocols were
compared on the 9 different scenario files with a maximum node
movement speed of 20 m/s under the random waypoint model.

3.4 Performance Metrics
In order to compare routing protocols, the following
performance metrics are considered:
• Throughput: the portion of the channel capacity used for useful
transmission (i.e., data packets correctly delivered to the
destinations).
• Average end-to-end delay: the average end-to-end delay of data
packets, i.e. the period between the data packet generation time
and the time when the last bit arrives at the destination.
• Packet delivery ratio: the ratio between the number of packets
received by the CBR sinks at the final destination and the number
of packets originated by the “application layer” sources. It is a
measure of the efficiency of the protocol.
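
These metrics can be computed from a packet trace as in the following sketch (the record layout with send time, receive time and packet size is our assumption):

    # Compute throughput (bit/s), average end-to-end delay (s) and packet
    # delivery ratio from a list of per-packet trace records.
    def compute_metrics(trace, sim_time):
        delivered = [p for p in trace if p.recv_time is not None]
        if not delivered:
            return 0.0, float("inf"), 0.0
        throughput = sum(p.size_bits for p in delivered) / sim_time
        avg_delay = sum(p.recv_time - p.sent_time for p in delivered) / len(delivered)
        pdr = len(delivered) / len(trace)
        return throughput, avg_delay, pdr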

4. Performance Analysis

In terms of delay, DSDV, a table-driven proactive routing
protocol, completely prevails over the on-demand reactive routing
protocols AODV and DSR. Since DSDV proactively maintains routes
to all destinations in its table, it does not have to initiate
the route request process as frequently as AODV and DSR do while
sending packets. Hence, on average, DSDV clearly has less delay.
We can also see
that DSR is the worst protocol in terms of delay. At high
mobility and higher network load (512-byte packets at 3
packets/sec), the aggressive route caching strategy of DSR fails.
In these stressful conditions links break very often,
invalidating the cached routes; stale cached routes are then
picked up, consuming additional network bandwidth and interface
queue slots even though the packets are ultimately dropped, which
leads to more delay. DSR also performed inefficiently in our
other metrics (PDR and throughput) in these “stressful”
situations (higher mobility, more network load). The reason for
this is the aggressive use of route caching in DSR. In our
observation, such caching provides a significant benefit up to a
certain extent; with higher loads the degree of caching becomes
too large to benefit performance. Often, stale routes are chosen,
since route length (and not any freshness criterion) is the only
metric used to pick routes from the cache when faced with
multiple choices. Picking stale routes causes two problems:
• Consumption of additional network bandwidth and interface queue
slots even though the packet is eventually dropped or delayed
• Possible pollution of caches in other nodes
With high mobility, the chances of the caches being stale are
quite high in DSR. Eventually, when a route discovery is
initiated, the large number of replies (as all RREQs are replied
to) is associated with higher MAC overhead and causes increased
interference to data traffic. Hence, cache staleness and high MAC
overhead together result in a significant degradation of DSR’s
performance under high mobility. An efficient mechanism to remove
stale cached routes could improve the performance of DSR. On the
other hand, since in AODV only the first arriving route request
(RREQ) is answered and further RREQs are not, fewer replies
(RREPs) are generated. Also, route error packets (RERRs) are
broadcast in AODV, which leads to a smaller MAC load compared to
the unicast RERRs of DSR.


Figure 5. Average End-to-End Delay

5. Comparisons
The following section provides a comparison of the previously
described routing algorithms, contrasting the table-driven and
on-demand approaches.

5.1 Table-Driven vs. On-Demand Routing
As discussed former, the table-driven ad hoc routing border
on is similar to the connectionless approach of forwarding
packets, with no regard to when and how frequently such
routes are preferred. It relies on an underlying routing table
revise mechanism that involves the stable propagation of
routing information. This is not the case, however, for on-
demand routing protocols. When a node using an



Figure 5. Throughput of Receiving Packets


Figure 6. Packet Delivery Ratios

desires a route to a new destination, it will have to wait until
such a route can be discovered. On the other hand, since
routing information is constantly propagated and
maintained in table-driven routing protocols, a route to
every other node in the ad hoc network is always available,
regardless of whether or not it is needed. This feature,
although useful for datagram traffic, incurs substantial
signaling traffic and power consumption. Since both
bandwidth and battery power are scarce resources in mobile
computers, this becomes a serious limitation.

6. Conclusion
In this paper we have provided descriptions of several routing
schemes proposed for ad hoc mobile networks, together with a
classification of these schemes according to the routing strategy
(i.e., table-driven and on-demand). We have presented a
comparison of these two categories of routing protocols,
highlighting their features, differences and characteristics, and
we have compared the performance of DSDV, AODV and DSR using a
detailed simulation model to demonstrate the performance
characteristics of these protocols. From the simulations we can
argue that if delay is the main criterion then DSDV is the best
choice, but if reliability and throughput are the main parameters
for selection then AODV gives better results than the others,
because its throughput and packet delivery ratio are the best
among the three. While there are many other issues that need to
be considered in analyzing the performance of ad hoc networks, we
believe that our work can provide intuition for future protocol
selection and analysis in ad hoc
networks. While we focused only on network throughput,
reliability and delay, it would be interesting to consider other
metrics such as power consumption, the number of hops to route a
packet, fault tolerance, and minimizing the number of control
packets.
References

[1] J. Jubin and J. Tornow, “The DARPA Packet Radio Network
Protocols,” Proc. IEEE, vol. 75, no. 1, 1987, pp. 21–32.
[2] C. E. Perkins and P. Bhagwat, “Highly Dynamic Destination-
Sequenced Distance-Vector Routing (DSDV) for Mobile Computers,”
Comp. Commun. Rev., Oct. 1994, pp. 234–44.
[3] L. R. Ford Jr. and D. R. Fulkerson, Flows in Networks,
Princeton Univ. Press, 1962.
[4] C. Perkins, E. Belding-Royer, and S. Das, “RFC 3561: Ad hoc
on-demand distance vector (AODV) routing,” July 2003, category:
experimental. [Online]. Available: ftp://ftp.isi.edu/in-notes/rfc3561.txt
[5] S. Murthy and J. J. Garcia-Luna-Aceves, “An Efficient Routing
Protocol for Wireless Networks,” ACM Mobile Networks and App. J.,
Special Issue on Routing in Mobile Communication Networks,
Oct. 1996, pp. 183–97.
[6] A. S. Tanenbaum, Computer Networks, 3rd ed., Ch. 5, Englewood
Cliffs, NJ: Prentice Hall, 1996, pp. 357–58.
[7] C. E. Perkins and E. M. Royer, “Ad-hoc On-Demand Distance
Vector Routing,” Proc. 2nd IEEE Wksp. Mobile Comp. Sys. and
Apps., Feb. 1999, pp. 90–100.
[8] D. B. Johnson and D. A. Maltz, “Dynamic Source Routing in
Ad-Hoc Wireless Networks,” Mobile Computing, T. Imielinski and
H. Korth, Eds., Kluwer, 1996, pp. 153–81.
[9] J. Broch, D. B. Johnson, and D. A. Maltz, “The Dynamic Source
Routing Protocol for Mobile Ad Hoc Networks,” IETF Internet
draft, draft-ietf-manet-dsr-01.txt, Dec. 1998 (work in progress).
[10] V. D. Park and M. S. Corson, “A Highly Adaptive Distributed
Routing Algorithm for Mobile Wireless Networks,” Proc.
INFOCOM ’97, Apr. 1997.
[11] M. S. Corson and A. Ephremides, “A Distributed Routing
Algorithm for Mobile Wireless Networks,” ACM/Baltzer Wireless
Networks J., vol. 1, no. 1, Feb. 1995, pp. 61–81.
[12] C-K. Toh, “A Novel Distributed Routing Protocol To Support
Ad-Hoc Mobile Computing,” Proc. 1996 IEEE 15th Annual Int’l.
Phoenix Conf. Comp. and Commun., Mar. 1996, pp. 480–86.
[13] R. Dube et al., “Signal Stability based Adaptive Routing
(SSA) for Ad-Hoc Mobile Networks,” IEEE Pers. Commun., Feb. 1997,
pp. 36–45.
[14] C-K. Toh, “Associativity-Based Routing for Ad-Hoc Mobile
Networks,” Wireless Pers. Commun., vol. 4, no. 2, Mar. 1997,
pp. 1–36.
[15] S. Murthy and J. J. Garcia-Luna-Aceves, “Loop-Free Internet
Routing Using Hierarchical Routing Trees,” Proc. INFOCOM ’97,
Apr. 7–11, 1997.
[16] C. E. Perkins and E. M. Royer, “Ad Hoc On Demand Distance
Vector (AODV) Routing,” IETF Internet draft,
draft-ietf-manet-aodv-02.txt, Nov. 1998 (work in progress).

Authors Profile

Vineet Agrawal has more than 15 years of experience in teaching
and industry. He is presently working as an Asst. Director of
Rakshpal Bahadur College of Engineering & Technology, Bareilly.
He holds an MCA and an M.Tech from the Birla Institute of
Technology, Mesra, Ranchi, and worked at Synthetic & Chemicals
Ltd. for four years, from 1995 to 1999, as a Sr. Engineer of
Computer Application. Mr. Agrawal is the author of a number of
books on various topics such as DBMS, Data Structures and
Algorithms, and is also pursuing his Ph.D. in computer science.
He has presented a number of papers at national conferences, and
a number of his papers have been published in national and
international journals. He has also attended various Faculty
Development Programs conducted by Infosys and TCS.

Dr. Yashpal Singh is a Reader and HOD (CS)
in BIET, Jhansi (U.P.). He obtained Ph.D.
degree in Computer Science from Bundelkhand
University, Jhansi. He has experience of
teaching in various courses at undergraduate
and postgraduate level since 1999. His areas of
interest are Computer Network, OOPS, DBMS.
He has authored many popular books of
Computer Science for graduate and postgraduate level. He has
attended many national and international repute seminars and
conferences. He has also authored many research papers of
international repute.

Manish Varshney received his M.Sc. (CS) degree from Dr. B.R.A.
University, Agra, and his M.Tech. (IT) from Allahabad University,
and is pursuing a Ph.D. in Computer Science. He is working as HOD
(CS/IT) in SRMSWCET, Bareilly. He has been teaching various
subjects of computer science for more than half a decade and is
known for his skill at bringing advanced computer topics down to
the novice’s level. He has experience of industry as well as of
teaching various courses. He has authored various popular books,
such as Data Structures, Database Management Systems, Design and
Implementation of Algorithms, and Compiler Design, for technical
students at graduate and postgraduate level. He has published
various research papers in national and international journals.
He has also attended a faculty development program organized by
Oracle Mumbai on Introduction to Oracle 9i SQL and DBA
Fundamentals I.

Vidushi Gupta received her B.Tech (CS) degree from Uttar Pradesh
Technical University, Lucknow, and is pursuing an M.Tech from
Karnataka University. She is working as a Lecturer (CS/IT
department) in SRMSWCET, Bareilly. She has published a research
paper in an international journal and has also attended a faculty
development program based on “Research Methodologies”.


















A Secure Iris Image Encryption Technique Using
Bio-Chaotic Algorithm
1Abdullah Sharaf Alghamdi, 2Hanif Ullah

Department of Software Engineering, College of Computer and Information Sciences,
King Saud University, Riyadh, Kingdom of Saudi Arabia
[email protected], [email protected]

Abstract: Due to dramatic enhancements in computers and
communications and the huge use of electronic media, security is
gaining more and more importance, especially in organizations
where information is critical. Older techniques such as
conventional cryptography use encryption keys, which are long bit
strings that are very hard to memorize, and which can easily be
attacked using brute force. Instead of the traditional
cryptographic techniques, biometrics such as iris, fingerprints
and voice uniquely identify a person and provide a secure basis
for a stream cipher, because biometric characteristics are ever
living and unstable in nature (with respect to recognition). In
this paper we use the idea of a bio-chaotic stream cipher, which
encrypts images over electronic media and is also used to encrypt
images stored in databases, making them more secure by using a
biometric key and a bio-chaotic function. This enhances the
security of the images so that they cannot easily be compromised.
The idea also gives birth to a new kind of stream cipher named
the bio-chaotic stream cipher. The paper also describes how to
generate an initial key, also called the initial condition, from
a biometric string, and how to encrypt and decrypt the desired
data using the bio-chaotic function.
Keywords: Biometric, stream cipher, bio-chaotic algorithm
(BCA), cryptography, key.
1. Introduction
Due to dramatic enhancements in computers and communications and
the huge use of electronic media, security is gaining more and
more importance; in particular, the security of biometric images
has become a hot issue. Biometric images are widely used in
authentication systems because of their ever living and unstable
(with respect to recognition) characteristics. Conventional
symmetric or asymmetric cryptography is well suited to text files
but cannot readily be used for huge files such as images and
videos.
Image encryption techniques are extensively used to
overcome the problem of secure transmission for both
images and text over the electronic media by using the
conventional cryptographic algorithms. The problem, however, is
that they cannot handle huge amounts of data and high-resolution
images [2].
Instead of the traditional way of cryptography for image
encryption we can also use biometrics, e.g. fingerprint, iris,
face or voice, for the same purpose. The main advantage of a
biometric is that it is an ever living and unstable
characteristic of a human being and cannot easily be compromised.
However, it also suffers from some biometric-specific threats,
namely the privacy risk in biometric systems. An attacker can
intercept a person’s biometric data and use it for many illegal
operations, such as masquerading as a particular person or
monitoring the person’s private data [3].
Similarly some chaos-based cryptosystems are used to solve
the privacy and security problems of biometric templates.
The secret keys are randomly generated and each session
has different secret keys. Thus biometric templates are
encrypted by means of a chaotic cryptographic scheme, which
makes them more difficult to decipher under attacks [4].
Moreover, some chaotic fingerprint image encryption techniques
have also been proposed which combine the shuttle operation and a
nonlinear dynamic chaos system. The
proposed image encryption technique provides an efficient
and a secure way for fingerprint images encryption and
storage [5].
Similarly, a new image encryption technique based on hyper-chaos
has also been proposed, which uses an image total shuffling
matrix to shuffle the pixel positions of the plain image; the
states combination of hyper-chaos is then used to change the gray
values of the shuffled image [6].
In order to improve the security of the images we propose a new
type of algorithm, called the Bio-Chaotic stream cipher Algorithm
(BCA), for image encryption, which overcomes the problems of some
of the algorithms previously used for the same purpose. In this
algorithm we take iris images and extract their features using
the L.Rosa [9] iris feature extraction code. These features are
then used to generate the initial condition for the secret key
using the Hamming Distance technique; the secret key is then
XORed with the extracted iris features to generate another key
called the biometric key. This biometric key is then used in the
chaotic function to generate the bio-chaotic stream cipher for
further encryption.
The rest of the paper is organized as follows. Section 2 presents
the related work. Section 3 describes the basic working and idea
of the BCA. Section 4 presents the graphical representation of
the key generation process and the logistic map for the
algorithm. Section 5 gives some mathematical comparisons with
other algorithms. Finally, section 6 draws a conclusion.
2. Related work
This work builds on our already published conference paper, in
which the same algorithm was used for the encryption of iris
images. In this paper we elaborate the algorithm in more detail
and add some new features to the previously proposed system [19].

The work most relevant to ours is that of Haojiang Gao, Yisheng
Zhang, Shuyun Liang and Dequn Li, who proposed a new chaotic
algorithm for image
encryption [2]. In this paper they presented a new nonlinear
chaotic algorithm (NCA) which uses power function and
tangent function instead of linear function. The
experimental results demonstrated in this paper for the
image encryption algorithm based on NCA shows
advantages of large key space and high-level security, while
maintaining acceptable efficiency [2].

Similarly, the work done by Song Zhao, Hengjian Li, and Xu Yan on
the security and encryption of fingerprint images is closely
related to ours [5]. In this paper they proposed a novel chaotic
fingerprint image encryption scheme combining shuttle operation
and a nonlinear dynamic
chaos system. The proposed system in this paper shows that
the image encryption scheme provides an efficient and
secure way for fingerprint images encryption and storage
[5].
Also, the work done by Muhammad Khurram Khan and Jiashu Zhang on
implementing template security in remote biometric authentication
systems is relevant to us [4].
In this paper they presented a new chaos-based cryptosystem
to solve the privacy and security issues in remote biometric
authentication over the network. Experimental results
derived in this paper shows that the security, performance
and accuracy of the presented system are encouraging for
the practical implementation in real environment [4].

Similarly, a new image encryption technique was introduced by
Tiegang Gao and Zengqiang Chen in their paper, based on an image
total shuffling matrix to shuffle the positions of the image
pixels, and then a hyper-chaotic function to complicate the
relationship between the plain image and the cipher image. The
suggested image encryption algorithm
has the advantage of large key space and high security [6].

Moreover, a coupled nonlinear chaotic map and a novel chaos-based
image encryption technique were used to encrypt color images by
Sahar Mazloom and Amir Masud Eftekhari-Moghadam in their paper
[10]. They used chaotic cryptography, which is basically
symmetric-key cryptography with a stream cipher structure.
They used a 240-bit secret key to generate the initial condition
and to increase the security of the proposed system [10].

3. Proposed System: Bio-Chaotic Algorithm (BCA)
The basic idea of the algorithm is as follows. We take an iris
image and extract its features using the L.Rosa code [9], which
generates a binary pattern from the given iris image. The binary
pattern is further divided into small blocks of binary data to
simplify the process, because it is very difficult to encrypt a
binary pattern of hundreds of thousands of bits at once. In our
case each block is 128 bits, to keep it simple and to encrypt
each block easily. A random block is then selected to create the
initial condition for the secret key. Random selection of the
block is preferred so that an attacker cannot easily determine
which block was selected for the initial condition.
At transmission time the bits of this randomly selected block are
encrypted using a Quantum Encryption technique [8]. Quantum
encryption uses light particles, also called photons, instead of
bits at communication time. A photon can have one of four
orientations or shapes: 45° diagonal, -45° diagonal, horizontal
or vertical. Each of these represents a bit: - and / represent a
0, while | and \ represent a 1 [8].
Fig. 1 presents the block diagram of the proposed bio-chaotic
algorithm. The basic steps of the algorithm are as follows.

I. Generation of the initial condition from the randomly selected
block taken from the binary pattern of the iris image. The
technique used to create the initial condition is the Hamming
Distance, i.e.

   IC = (1/N) Σ (Xn ⊕ Yn), for n = 1, 2, 3, 4, ..., N

Some other techniques can also be used for the same purpose.



Figure 1. Block Diagram of Bio-Chaotic Algorithm

II. This initial condition is then converted into the secret key
using the LFSR method. An LFSR of length n over a finite field Fq
consists of n stages [a(n-1), a(n-2), a(n-3), ..., a0] with
ai ∈ Fq, and a feedback polynomial

   C(x) = 1 + c1·x + c2·x^2 + ... + cn·x^n
III. The secret key and the iris template are then XORed in
parallel to generate the biometric key:

   BioKey = SecretKey ⊕ IrisTemplate
IV. This biometric key is further XORed with the different blocks
of the iris template (divided into blocks of 128 bits each),
which encrypts the image in such a way that no intruder or
attacker can easily decrypt it.
V. To make the algorithm stronger and more secure we add the
chaotic function to the biometric key and apply it over the iris
image, which encrypts it in a more secure way. We use the
following logistic equation [4]:

   x(n+1) = r · x(n) · (1 - x(n))

where n = 1, 2, 3, ... is the map iteration index and r is the
value taken from the algorithm. On the basis of this logistic
equation we generate the logistic map for different values of the
algorithm, the details of which are given in the next section.
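
The steps above can be summarized in the following simplified Python sketch. It is an illustration under stated assumptions: the normalization of the Hamming distance, the expansion of the initial condition into seed bits, the LFSR tap positions and the threshold quantization of the logistic output are our stand-ins, not the exact parameters of the proposed algorithm.

    # Step I: initial condition as a normalized Hamming distance (assumed form).
    def initial_condition(block, reference):
        hd = sum(a != b for a, b in zip(block, reference))
        return hd / len(block)                 # real value between 0 and 1

    def seed_bits(x0, n=16):
        # Assumed detail: binary expansion of the initial condition.
        bits = []
        for _ in range(n):
            x0 *= 2
            bit = int(x0 >= 1.0)
            bits.append(bit)
            x0 -= bit
        return bits

    # Step II: generate a 128-bit secret key with an LFSR (taps are assumed).
    def lfsr_key(seed, taps=(0, 2, 3, 5), length=128):
        state, key = list(seed), []
        for _ in range(length):
            fb = 0
            for t in taps:
                fb ^= state[t]                 # feedback from the tap stages
            key.append(state.pop())            # output the last stage
            state.insert(0, fb)
        return key

    # Step III: biometric key = secret key XOR iris template block.
    def biometric_key(secret_key, template_block):
        return [a ^ b for a, b in zip(secret_key, template_block)]

    # Step V: logistic map x(n+1) = r*x(n)*(1 - x(n)), quantized to bits.
    def chaotic_keystream(x0, r, n):
        x, bits = x0, []
        for _ in range(n):
            x = r * x * (1 - x)
            bits.append(1 if x > 0.5 else 0)   # assumed threshold quantization
        return bits

    # Step IV and decryption: XOR a 128-bit block with the key material;
    # applying the same operation twice recovers the original block.
    def xor_block(block, keystream):
        return [a ^ b for a, b in zip(block, keystream)]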
3.1 Decryption Process
The decryption of the image is carried out in the same way, using
the same key as for encryption but in the opposite direction,
i.e. the ciphered image is XORed with the biometric key to get
the image back in its original form. The receiver first decrypts
the randomly selected block using the same technique as in the
encryption process, i.e. the Quantum decryption technique [8].
After decrypting the selected block, the receiver generates the
initial condition by the same procedure used for encryption and
decrypts the image. The equation used for the decryption process
is

   PlainImage = CipheredImage ⊕ BioKey

where ⊕ denotes the exclusive-OR operation.

4. Experimental Analysis of the Algorithm
In order to evaluate and check the performance of the proposed
Bio-Chaotic Algorithm we took iris images from the well-known
CASIA database (Chinese Academy of Sciences, Institute of
Automation) [11]. The database contains a large number of iris
images taken from different people’s eyes; we used two or three
of these images to carry out our experimental process. They are
shown in Fig. 2.

The algorithm was analyzed and tested using different values for
x, where x is any real value between 0 and 1. Some of the
logistic maps based on the experimental analysis performed over
sample and encrypted iris images are included in this section.
The logistic maps are derived on the basis of the mathematical
function

   x(n+1) = r · x(n) · (1 - x(n))

Using this equation we generate different logistic maps for
different values. Figs. 3 and 4 show the statistical correlation
curves of the sequence. Observing the maps carefully, it is clear
that even a small change in the value makes the whole map
different.
Fig. 5 shows the images encrypted using different chaotic values.
The figure makes clear how strong the encryption process is:
changing even a small part of the value makes the image more and
more unrecognizable. The decryption process is as simple as the
encryption: the ciphered image is simply XORed with the key to
recover the original image.





Figure 2. Iris images used for the experiments



Figure 3. Logistic map for cipher and decipher data when value = 0.54000000000001 (x-axis: size of biometric template; y-axis: real value between 0 and 1)

Figure 4. Logistic map when value = 0.58000000001




Figure 5. (a) Encrypted image at value = 0.580000000000

Figure 5. (b) Encrypted image at value = 0.7000000000001

Figure 5. (c) Encrypted image at value = 0.9800000000001




Table 1: Avalanche effect of the BCA

NO    AvalanchePC effect    AvalanchePK effect    AvalancheCK effect
1     48.8758 %             47.9530 %             52.9465 %


5. Statistical Analysis of Bio-Chaotic
Algorithm (BCA)
In this section statistical analyses and mathematical
observations such as the Avalanche effect, confusion and
diffusion, and the entropy of the proposed algorithm are
presented.

5.1 Avalanche Effect

The Avalanche effect refers to a desirable property of
cryptographic algorithms: when an input is changed slightly (for
example, by flipping a single bit), the output changes
significantly (e.g., half of the output bits flip). In the case
of quality block ciphers, such a small change in either the key
or the plain text should cause a drastic change in the cipher
text. In our case the Avalanche effect of the proposed system
between the plain image and the ciphered image is determined
using the following equations [12]:

   percentPC = (Acc / T) × 100

where percentPC is the percent difference between the plain image
and the ciphered image, T is the total number of bits, and Acc is
an accumulator given by

   Acc = Σ DiffPC

Here DiffPC denotes the difference between the plain image and
the ciphered image, which is basically the XOR operation between
the plain image and the cipher image. A similar methodology is
used to find the Avalanche effect between the plain image and the
key, and between the ciphered image and the key. The results of
these equations are tabulated in Table 1.
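
For concreteness, the percentPC computation can be sketched as follows (a minimal illustration over two equal-length bit strings):

    # DiffPC = P XOR C, Acc = number of differing bits,
    # percentPC = Acc / T * 100 with T the total number of bits.
    def avalanche_percent(plain_bits, cipher_bits):
        diff = [p ^ c for p, c in zip(plain_bits, cipher_bits)]  # DiffPC
        acc = sum(diff)                                          # Acc
        return acc / len(diff) * 100.0                           # percentPC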

The table shows that the Avalanche effect between the plain
image and ciphered image, and plain image and key is less
than 50 percent that is a more desirable value for any
algorithm. Similarly, the Avalanche effect between the ciphered
image and the key is around 50 percent, slightly higher than the
other two, but again a desirable value for our proposed
algorithm.

5.2 Confusion and Diffusion

Confusion and diffusion are two properties of the operation of a
secure cipher. Confusion refers to making the relationship
between the key and the cipher text as complex and as involved as
possible. Diffusion refers to the property that redundancy in the
statistics of the plain text is dissipated in the statistics of
the cipher text [12]. Confusion and diffusion are closely related
to the Avalanche effect elaborated in the previous section. The
confusion and diffusion of the proposed algorithm are around 49%,
which shows the strength of the proposed system.

5.3 Entropy

Entropy is a measure of the uncertainty or randomness
associated with a random variable. It is basically a measure
of the average information content one is missing when one
does not know the value of the random variable [12].
Entropy can be found using the equation

H(X) = − Σ_i p(x_i) log2 p(x_i)

Using the above equation, we found the entropy of our
proposed system to be roughly 127.3. This value shows good
uncertainty and randomness of the bits in the algorithm; the
probability of each bit is 0.5. The entropy is higher when
there is more randomness in the bits used in the ciphered
image. Table 2 shows the entropy of our proposed system.
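As a small illustrative sketch (our naming, assuming independent and identically distributed key bits, as the paper's per-bit probability of 0.5 suggests), the total entropy of a key can be estimated as the per-bit Shannon entropy times the key length:

class EntropySketch {
    // Total entropy (in bits) of a bit sequence, assuming i.i.d. bits:
    // H = n * (-p log2 p - (1-p) log2 (1-p)).
    static double keyEntropy(int[] bits) {
        int ones = 0;
        for (int b : bits) ones += b;                  // count the 1-bits
        double p1 = (double) ones / bits.length;       // P(bit = 1)
        double p0 = 1.0 - p1;                          // P(bit = 0)
        double h = 0.0;
        if (p1 > 0) h -= p1 * Math.log(p1) / Math.log(2);
        if (p0 > 0) h -= p0 * Math.log(p0) / Math.log(2);
        return h * bits.length;    // close to 128 for a balanced 128-bit key
    }
}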


Table 2: Entropy of Bio-chaotic Algorithm

Bio-chaotic Algorithm | Entropy (H(X))
1                     | 64.67

5.4 Histogram of the Images

Figure 6 shows the histograms of the plain and of the
encrypted (ciphered) images used in the bio-chaotic algorithm.



Figure 6. (a) Histogram of the plain image



Figure 6. (b) Histogram of the ciphered image


6. Conclusion
This paper presents a novel idea for the encryption and
decryption of iris images. The proposed algorithm, called the
Bio-Chaotic Algorithm (BCA), takes an iris image and, with
the help of L. Rosa's code [9], generates the iris features,
i.e. the binary bit pattern of the image. This bit pattern is
then divided into small blocks of 128 bits each to simplify
the process. A random block is selected from these blocks to
create the initial condition, which is then passed through the
LFSR to generate a 128-bit secret key. This secret key is
then used for the encryption of the iris image. A quantum
encryption technique is also used to encrypt the randomly
selected block, so that no one can easily attack the block
used for the generation of the secret key. The same procedure
is used at the receiver end to decrypt the iris image. A
chaotic function is used to make the algorithm more secure
and to make the processes of encryption and decryption more
complex. Experimental and statistical analysis of the
algorithm shows that it is strong and secure and can be used
for the practical encryption of iris images.

7. Future Work

In the future we would like to use the same technique for the
encryption of fingerprint images. We would also like to use
a block size of more than 128 bits to make the algorithm
stronger and more secure.

References
[1] Arroyo David, Li Chengqing, Li Shujun, Alvarez Gonzalo,
Halang A. Wolfgang,” Cryptanalysis of an image encryption
scheme based on a new total shuffling algorithm”, Elsevier ,
Science Direct, Volume 41, Issue 5, 15 September 2009, Pages
2613-2616
[2]Haojiang Gao, Yisheng Zhang , Shuyun Liang , Dequn Li, “A
new Chaotic Algorithm for image Encryption”, Elsevier ,
Science Direct , Aug 2005.
[3] Andrew Teoh Beng Jin, David Ngo Chek Ling, Alwyn Goh, “
Biohashing : two factor authentication featuring fingerprint
data and tokenized random number “ April 2004,”The Journal
Of The Pattern Recognition Society “ , Elsevier , April 2004.
[4] Muhammad Khurram Khan, Jiashu Zhang, “Implementing
Templates Security in Remote Biometric Authentication
Systems”, IEEE Conf. Proceedings on CIS’06, China, pp.
1396-1400, Vol.2, 2006.
[5] Song Zhao, Xu Yan,”A secure and efficient fingerprint images
encryption scheme” Proceedings of the IEEE, 2008, pp- 2803-
2808.
[6] Gao Tiegang, Chen Zengqiang,” A new image encryption
algorithm based on hyper-chaos” Elsevier, Science Direct,
Physics Letters A, Volume 372, Issue 4, p. 394-400, 2007.
[7] Muhammad Khurram Khan, Jiashu Zhang, “Improving the
Security of ‘A Flexible Biometrics Remote User Authentication
Scheme’”, Computer Standards and Interfaces (CSI), Elsevier
Science UK, vol. 29, issue 1, pp. 84-87, 2007.
[8] T Morkel 1, JHP Eloff,” Encryption Techniques: A Timeline
Approach”, Information and Computer Security Architecture
(ICSA) Research Group Department of Computer Science
University of Pretoria, 0002, Pretoria, South Africa
[9] Iris code by Luigi ROSA, L'Aquila ITALY
(19600bits)”http://www.advancedsourcecode.com/irisphase.asp
[10] Mazloom Sahar, Eftekhari-Moghadam Masud Amir”, Color
image encryption based on Coupled Nonlinear Chaotic Map”,
the journal of Chaos, Solitons and Fractals 42 (2009) 1745–
1754, ELSEVIER, 2009.
[11] CASIA Iris Database. [Online March, 2006]
http://sinobiometrics.com.
[12] Shannon, C. E., “A Mathematical Theory of Communication,”
Bell System Technical Journal, July 1948, p.623.
[13] Yao-Jen Chang, Wende Zhang, and Tsuhan Chen,
“Biometrics-Based Cryptographic Key Generation” 2004 IEEE,
USA.
[14] Ren Honge, Shang Zhenwei, Wang Yuanzhi, Zhang Jian,
“A Chaotic Algorithm of Image Encryption Based on
Dispersion Sampling”, The Eighth International Conference
on Electronic Measurement and Instruments, IEEE, 2007.
[15] Shenglin Yang, Ingrid M. Verbauwhede,”Secure Fuzzy Vault
Based Fingerprint Verification System”, 2004 IEEE.
[16] The MathWorks™, “Accelerating the pace of engineering
and science”, www.mathworks.com. Date accessed: 2 Feb 2009.
[17] Eli Biham, Louis Granboulan, Phong Q. Nguyen, “Impossible
Fault Analysis of RC4 and Differential Fault Analysis of RC4”,
Computer Science Department, Technion – Israel Institute of
Technology, Haifa.
[18] J. Daugman,”High confidence visual recognition of persons by
a test of statistical independence “, IEEE Transactions on
Pattern Analysis and Machine Intelligence vol.15, 1993,
pp.1148-61.
[19] Alghamdi S. Abdullah, Ullah Hanif, Mahmud Maqsood, Khan
K. Muhammad., "Bio-chaotic Stream Cipher-Based Iris Image
Encryption," cse, vol. 2, pp.739-744, 2009 International
Conference on Computational Science and Engineering,
Canada.
[20] J.G. Daugman, “Uncertainty Relation for Resolution in Space,
Spatial Frequency, and Orientation Optimized by Two-
Dimensional Visual Cortical Filters,” J. Optical Soc. Amer.,
vol. 2,no. 7, pp. 1,160-1,169, 1985.

Authors Profile

Dr. Abdullah Alghamdi is a full time
associate professor, SWE Department,
College of Computer and Information
Sciences, KSU. He holds a Ph.D. in the
field of Software Engineering from the
department of computer science,
Sheffield University, UK, 1997. He
obtained his M.Sc. in the field of
software development technologies
from the UK in 1993. In the academic
year 2004/5 he worked as a visiting
professor at School of IT and Engineering, University of Ottawa,
Ottawa, Canada, where he conducted intensified research in Web
Engineering as part of his Post-Doc program. He recently
published a number of papers in the field of Web engineering
methodologies and tools. Dr. Abdullah worked as a part-time
consultant with a number of governmental and private
organizations in the field of IT strategic planning and headed a
number of IT committees inside and outside KSU. Currently he
chairs the Software Engineering Department at KSU and is a
part-time consultant at the Ministry of Defense and Aviation.


Hanif Ullah received the BIT (Hons) and
MSc degrees in Information Technology
from Iqra University Karachi in 2004 and
Quaid-e-Azam University Islamabad,
Pakistan in 2007, respectively. In January
2008 he joined King Saud University,
Saudi Arabia as a Research Assistant and
started working on network and information
security related topics. Currently he is
working as a Lecturer in the Department
of Software Engineering, College of
Computer and Information Sciences, King Saud University, Saudi
Arabia.



An ASes stable solution in I-Domain

G. Mohammed Nazer¹ and Dr. A. Arul Lawrence Selvakumar²

¹Asst. Professor & Head, Dept of MCA, IFET College of Engineering, Villupuram, India
[email protected]

²Professor & Head, Dept of CSE & IT, Kuppam College of Engineering, Kuppam, India
[email protected]

Abstract: Routers on the Internet use an interdomain routing
protocol called the Border Gateway Protocol (BGP) to share
routing information between Autonomous Systems (ASes).
These ASes define local BGP policies that can lead to various
routing anomalies such as BGP divergence. In this paper, we
close a long-standing open question of Griffin and Wilfong by
showing that, for any network structure, if there exist two stable
routing outcomes, then BGP oscillations are possible. Our
results provide the first non-trivial necessary condition for BGP
safety: uniqueness of the stable routing outcome.
Another closely related question is how long BGP takes to
converge to a stable routing outcome. We also address this by
analyzing a formal measure of the convergence time of BGP for
the policies presented by Gao and Rexford. Even for this
restricted class of preferences, we prove that (i) the worst-case
convergence time is linear in the size of the network, and (ii)
BGP's running time cannot be more than (roughly) twice the
length of the longest customer-provider chain in the network.

Keywords: BGP, Border Gateway Protocol, Interdomain routing,
network security, routing, networks, routing protocols, BGP safety.

1. Introduction

BGP is the de facto protocol enabling interdomain routing in
the Internet. The task of interdomain routing is to establish
routes between the administrative domains of the Internet,
called Autonomous Systems (ASes). Global routes are formed
from local decisions based on private routing policies, and
these routing selections are communicated by the ASes to
their neighboring ASes. Persistent routing oscillations can
arise from the lack of global coordination between the local
routing policies.

BGP safety – unique stable routing outcome: The main
contribution of this paper is to show that BGP safety
necessitates the existence of a unique stable solution. This
result closes the long-standing open question first posed by
Griffin and Wilfong [8]. More precisely, two stable solutions
in a network imply that the network is unstable and can
oscillate. To analyze the BGP dynamics in a simplified
form, we use a convenient structure called the
state-transition graph. The state-transition graph is not only
a useful conceptual tool for evaluating and designing
network configurations, but it also assists in detecting
potential routing oscillations and in debugging them.
BGP convergence time analysis: How long does it take BGP
to converge to a stable routing outcome? This is another
question closely related to BGP. To answer it, we require a
formal way of measuring the convergence rate, since the
Internet is asynchronous.
We analyze the BGP convergence time in particular,
Internet-like settings. In the Gao-Rexford setting, every pair
of neighboring ASes has either a business (customer-provider)
relationship or a peering relationship, which places natural
constraints on the ASes' routing policies.
However, our first result is negative. We show that, even
for this restricted class of preferences, there are instances in
which the convergence time of BGP is linear in the size of
the network; specifically, in a network with n nodes it can
take n phases to converge. We also prove that this lower
bound is tight: BGP is always guaranteed to converge within
n phases. As there are thousands of ASes in today's Internet,
the linear bound is not reassuring. However, one would
expect BGP to converge much more quickly in practice, as
ASes' routing policies are local in the sense that they are not
influenced by ASes that are far away. We prove that the
number of phases required for convergence is bounded by
approximately twice the depth of the customer-provider
hierarchy.

2. A formal Model

2.1 BGP dynamics
Network model and its policies: In our model, we define
a network by an AS graph G = (N, L), where N represents
the set of ASes and L represents the physical communication
links between ASes. N consists of n source-nodes {1,…,n}
and a unique destination node d. P_i denotes the set of all
simple (non-cyclic) routes from i to d in G. Each source-node
i has a ranking function ≤_i that defines a strict order over
P_i (that is, i has strict preferences over all routes from i to
d). We allow ties between two routes in P_i only if they
share the same first link (i, j). The routing policy of each
node i consists of ≤_i together with i's import policy and
export policy.
i's import policy dictates the set of routes Im(i) ⊆ P_i
along which i is willing to send traffic. We assume that
∅ ≤_i R_i for any route R_i ∈ Im(i) (i prefers any route in
Im(i) to not getting a route at all) and that R_i ≤_i ∅ for any
route R_i ∉ Im(i) (i would rather send no traffic at all than
send traffic along a route not in Im(i)).
i's export policy dictates the set of routes Ex(i, j) ⊆ P_i
that i is willing to announce to each neighbor j.
Update Messages and Activation Sequence: BGP belongs
to a family of routing protocols called path-vector protocols.
In this model, there are two kinds of actions that an active
node may carry out to potentially change the global routing
state:
• A node i may select a route, or change its selected
route, from the routes to the given destination d that it
currently believes to be available.
• A node j may send an update message to a
neighboring node i, informing i of the route that j
is currently using to destination d. The update
message is assumed to be delivered immediately
(without propagation delay) and is immediately
reflected in the updated beliefs that i has about j's
route.
The selection and update actions can occur at arbitrary
times. In particular, note that update messages are not
required to be sent at given time intervals or whenever j's
route changes. It is easy to show that a stable solution always
takes the form of a tree rooted at d. Further, the import
and export policies can be folded into the routing policies
by modifying the preferences so that routes that are filtered
out have the lowest possible value.
2.2 The State-Transition Graph

In this subsection, we describe the state transition graph –
a tool that we use to analyze the convergence of BGP on
different instances.
The state-transition graph of an instance of BGP is
defined as follows. The graph consists of a finite number of
states; each state s is represented by an n-dimensional
forwarding vector of routing choices r^s = (r^s_1, …, r^s_n)
and an n × n knowledge matrix K^s = {k^s_ij}. r^s_i
specifies the identity of the node to which node i's traffic is
being forwarded, and k^s_ij specifies the (loop-free) route
that node i believes its neighboring node j is using. We
define k^s_ij = NULL when j is not a neighbor of i; any
knowledge that i has about non-neighboring nodes' routes is
irrelevant to i's route selection and advertisement decisions.
We assume, naturally, that node i knows to whom it is
forwarding traffic: r^s_i must be the first hop in k^s_ii. We
allow two types of atomic actions that lead to transitions
from s to s':
lead to transitions from s to s’:
• Route transition – Route selection actions: Informally, a
route transition arises when a node I updates its selected
route by picking its favorite route from its current
knowledge set of routes used by its neighbors. Formally,
there is an i-route transition from state s to state s’ if
there is a node i such that: The forwarding vector in s’
is identical to the forwarding vector in s with the
possible exception of i.
• Knowledge transition – Informally, a knowledge
transition is an update message sent from a specific
node i to a neighboring node j announcing the route
that i believes it is sending traffic along. Formally, there
is a ji-knowledge transition from state s to state s’ if
there is a node i and a neighboring node j such that:
The forwarding vectors in the two states are identical.
The knowledge matrix in s’ is identical to the
knowledge matrix in s with the exception of i’s belief
about j, and k
s’
ij
= k
s
ij
. In other words, i learns of the
route that j currently believes it is using.
This definition reflects the restricted asynchrony of our
dynamic model. We can phrase this restriction equivalently
as follows: update messages can be delayed in transit, but
when a delayed message is delivered, a fresh update message
from the same sender is delivered immediately (and thus
overrides the delayed update). Thus, the state description
does not have to include messages in transit.
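As an illustrative sketch of this state representation (the field names are ours, with routes encoded as arrays of node identifiers ending at d; this is not code from the paper):

class State {
    int[] forward;   // forward[i] = node to which i currently forwards traffic
    int[][][] know;  // know[i][j] = route i believes neighbor j is using (null if not neighbors)

    State(int[] forward, int[][][] know) {
        this.forward = forward;
        this.know = know;
    }

    // ji-knowledge transition: i learns the route j currently believes it is using.
    // The forwarding vector is unchanged; only i's belief about j is updated.
    State knowledgeTransition(int j, int i) {
        int[][][] k = know.clone();      // shallow copy of the outer array
        k[i] = know[i].clone();          // copy i's row before mutating it
        k[i][j] = know[j][j];            // k'_ij = k_jj
        return new State(forward.clone(), k);
    }
}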

Stability and Oscillations in the State-Transition Graph:

A stable state is one in which the nodes forward traffic
along a stable solution, and have complete and accurate
knowledge about their neighbors’ routes. We want to prove
the existence of potential BGP oscillations in the state
transition graph. In many cases, oscillations occur only for
specific timings of asynchronous events. In particular,
starting at any given point of time, every node eventually
updates its route selection if its knowledge of routes has
changed, and every node eventually receives update
messages from each neighbor that has changed a route.
Further, for a given router, only a finite number of other
activations can take place between its subsequent route
selections or updates. For this reason, we look for
oscillations that can arise through a fair activation sequence.
An infinite activation sequence σ is said to be fair if each
transition in A appears infinitely often in σ. A fair cycle in
the state-transition graph is a finite cyclic path that does not
contain a sink, such that every action in A is taken at least
once in each traversal of the cycle.
2.3 Implications for the evaluation model of Griffin
We modify the dynamic evaluation model of Griffin in
two ways:
• Update messages are not delayed; instead, they arrive
at their destinations immediately.
• In a BGP execution, a node need not inform a
neighboring node of every route change; it is enough
to announce once in a while.
3. Two stable solutions lead to BGP
oscillations
In this section we prove our main result, that if there are
two stable solutions then the network is unstable in the sense
that persistent route oscillations are possible.
Theorem: If the AS graph G contains two stable solutions,
then there is a fair activation sequence under which BGP
will oscillate on G. That is, two stable solutions imply that
the network is unstable, in the sense that it could plausibly
lead to persistent route oscillations. Therefore, to achieve
BGP stability, the network must have a unique stable
solution.
The intuition behind our proof is as follows. In the
state-transition graph, each stable state has a corresponding
"attractor region": a subset of states (possibly just the stable
state itself, or much larger) from which, once reached, the
system is certain to ultimately converge to that stable state.
We can visualize the state-transition graph as a map, with
each attractor region a different color – red, blue, etc.
However, there will also be some states that do not lie in any
one attractor region, because different evolutions from such
a state can lead to different stable states. We label these
states with a distinct color – purple, say – and show that the
zero state must belong to this subset.
The key to the proof is showing that, starting from any
purple state, we can find a fair activation sequence that ends
at another purple state. We use the properties of route
selection and update actions to show that we can swap the
order of any two consecutive activations, perhaps repeating
one of them, and achieve the same result as the original
order. Thus, it is not possible that a given activation a
leads to a red state in the original order but to a blue
state in the perturbed order. Using this, we show that we can
add each activation while staying within the purple region.
As the graph is finite, this implies the existence of a fair
cycle. If an instance of BGP results in a state-transition
graph (for a given destination) that has a fair cycle, we
infer that there is a plausible sequence of route selections
and updates that will cause BGP to oscillate.
4. BGP’s convergence Rate
In this section, we handle the question of how long BGP
takes to converge to the unique stable solution. BGP is an
asynchronous protocol, and individual messages may be lost
or delayed arbitrarily. As we cannot assume a bound on the
actual elapsed time of a single message, any model of
convergence “time” needs to define a unit of time
measurement that remains meaningful in this asynchronous
setting. Let us consider the following definition:
Definition: A BGP phase is a period of time in which all
nodes get at least one update message from each
neighboring node, and all nodes are activated at least once
after receiving updates from their neighbors.
We analyze the number of BGP phases required for the
network to converge. The underlying principle in this
definition is that, although it is difficult for the analyst to
assert numerical bounds on the update frequencies at
different nodes, it is reasonable to expect all nodes to update
on similar timescales. The definition of phases admits
asynchrony, thus capturing the realistic possibility that
different sequences of update activations can lead to
different transient behavior. At the same time, by tying the
unit of measurement to the slowest node's update instead of
a fixed time unit (or the fastest update), we avoid
pathological worst-case time bounds that are only attained
if, for example, one node's update cycle is measured in years
instead of seconds or minutes.
How many consecutive phases does it take BGP to
converge to a stable solution in the worst case? Routes are
propagated through the network one hop at a time, so the
best we can hope for is a time proportional to the length of
the longest route in the stable solution. It is easy to construct
instances with n nodes in which there are routes of length
Ω(n). However, such instances are unnatural; current
Internet routes tend to be much shorter. For this reason, we
focus on bounding the BGP convergence time on
Internet-like graphs.
Example: The graph in Figure 1 depicts a network with n
nodes, and a destination node d. Node 1 prefers to go
directly to d. Any other node i prefers the route i → i−1 →
d over the direct route i → d. All routes of length greater
than 2 are less desirable to any node. This set of path
preferences is compatible with the Gao-Rexford constraints
for the following set of customer-provider relationships: 1 is
a customer of 2, 2 is a customer of 3, etc.; and, additionally,
d is a customer of every other node.
Figure 1. The example network: nodes 1, …, n and a destination node d.
In each phase, first all update messages go through, and
then all nodes are activated. In the first phase, only node 1
changes its routing choice and routes directly to d. In the
next phase, only node 2 changes its routing choice and
routes through node 1. Then node 3 changes its route to go
directly to d, and so on. The network eventually converges
to the routing outcome in which all odd nodes route directly
to d and all even nodes route through their counter-clockwise
neighbor, as sketched below.
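A small simulation of this worst-case activation order (one node re-evaluating per phase; the node count and the printing are our illustrative choices, not the paper's code) reproduces the described outcome:

public class RingConvergence {
    public static void main(String[] args) {
        int n = 8;
        int[] via = new int[n + 1];  // via[i] = 0 means i -> d directly, else i -> via[i] -> d
        for (int k = 1; k <= n; k++) {
            // i -> (i-1) -> d is preferred, but only if i-1 currently routes directly to d;
            // any route of length greater than 2 is worse than the direct route i -> d.
            via[k] = (k > 1 && via[k - 1] == 0) ? (k - 1) : 0;
            System.out.println("phase " + k + ": node " + k +
                (via[k] == 0 ? " routes directly to d" : " routes via node " + via[k]));
        }
        // Output: odd nodes go directly to d, even nodes via their counter-clockwise neighbor.
    }
}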
We prove that this bound is tight for the class of instances
that satisfy the Gao-Rexford conditions. In fact, we prove a
slightly stronger result: the following proposal shows that
our bound on BGP's convergence rate is tight on the larger
class consisting of all instances in which the "No Dispute
Wheel" condition of [3], [5] holds.
Proposal: If "No Dispute Wheel" holds, then BGP's
convergence rate is at most n phases.
Proof: Assume that the "No Dispute Wheel" condition
indeed holds in a network graph G with a destination node d.
In every phase, one of the nodes of the graph converges to a
route that will not change from that point on. The first node
that converges, in the first phase, is the destination node d,
which has the empty path and announces that path to its
neighbors. We now show that there must exist a node in the
network that is a direct neighbor of the destination d and
whose most preferred path goes directly to d.
To see that this is indeed the case, pick an arbitrary node
v and look at its most preferred path to the destination. This
path goes through a neighbor of d right before it reaches d;
we denote this neighbor by v_1. Now consider the most
preferred path of node v_1, and the node closest to d on that
path, which we denote by v_2. In this manner we define the
nodes v_i for i = 1, 2, 3, ... At some point, nodes in the
sequence we defined must start repeating. If only one node
repeats infinitely, then this node must have the direct route
as its most preferred path, and we are done. Otherwise, the
sequence of repeating nodes v_k, v_{k+1}, …, v_{k+l} (for
some k, l) forms a dispute wheel: each node prefers to go
through the next one in the sequence rather than directly to
d. This contradicts our assumption. Therefore, there exists a
node that prefers to go directly to d over any other path. It
will choose this path to d in the second phase, send update
messages to its neighbors, and never again change its path
(since no path will be better).
We now continue to follow the convergence process and
observe that in any phase there must exist a node v that
converges to its most preferred route, given the routes of the
nodes in the system that have already permanently
converged. This node will never again change its path
(because unless previous nodes change, it will have no better
path, and those previous nodes have also converged). To
prove that such a node must exist, we fix the routes of all
permanently converged nodes and pick an arbitrary node
v_1 that did not converge. We once again define the
sequence of nodes v_1, v_2, v_3, …, defining v_{i+1} as the
node closest to d, on the most preferred path of node v_i,
that did not permanently converge. The set of paths from
which we select this most preferred path is the set of paths
consistent with the nodes that have already permanently
converged. Once again, this sequence of nodes must repeat,
and since it cannot contain a dispute wheel, it must have
only a single repeating node: the closest node that did not
converge on its own most preferred path. In the next phase,
this node's path converges. We have thus shown that if the
AS graph contains no dispute wheels, the convergence time
of BGP is bounded by the number of nodes in the network
graph.

5. Conclusion

We studied fundamental questions related to BGP:
whether it will converge to a unique stable solution, and how
long convergence takes. We proved that, for any network, if
there exist two stable routing outcomes, then persistent BGP
oscillations are possible; the existence of a unique stable
routing outcome is therefore a necessary condition for safe
BGP convergence. We also analyzed the worst-case
convergence time of BGP on instances that satisfy the
Gao-Rexford conditions. We proved that the convergence
time on a graph with n nodes is Θ(n) in the worst case, but
is much smaller in networks with shallow customer-provider
hierarchies.
An interesting direction for future research is proposing
formal models for addressing these issues and assessing
their impact on our necessary condition for BGP safety.
First, can we close the gap between our necessary condition
and known sufficient conditions for safe convergence?
Second, can we develop a compositional theory for safe
policies? If we put together two subnetworks with unique
stable solutions, when does the combination also have a
unique stable solution? It would also be valuable to extend
the convergence-time analysis to broader classes of
preferences, and to characterize the average-case (instead of
worst-case) convergence time following a network change.
Answers to these questions could provide network operators
with new principles for trading off the desire for flexible
autonomous policies against the need for global routing
efficiency. Finally, there are practical aspects of BGP
operation not considered in this paper, such as the MRAI
(Minimum Route Advertisement Interval) and RFD (Route
Flap Damping [19]), which play a significant role in BGP
convergence [20], [21].

References

[1] K. Varadhan, R. Govindan, and D. Estrin,
“Persistent route oscillations in inter-domain
routing,” Computer Networks, vol. 32, no. 1, pp. 1–
16, March 2000.
[2] T. G. Griffin and G. Wilfong, “An analysis of BGP
convergence properties,” in Proceedings of
SIGCOMM 1999.
[3] T. G. Griffin, F. B. Shepherd, and G. Wilfong, “The
stable paths problem and interdomain routing,”
IEEE/ACM Transactions on Networking, vol. 10,
no. 2, pp. 232–243, April 2002.
[4] L. Gao and J. Rexford, “Stable Internet routing
without global coordination,” IEEE/ACM
Transactions on Networking, vol. 9, no. 6, pp.
681–692, 2001.
[5] L. Gao, T. G. Griffin, and J. Rexford, “Inherently
safe backup routing
with BGP,” in 20th INFOCOM. Pistacaway: IEEE,
2001, pp. 547–556.
[6] T. G. Griffin, A. D. Jaggard, and V. Ramachandran,
“Design principles of policy languages for path
vector protocols,” in SIGCOMM ’03: Proceedings
of the 2003 conference on Applications,
technologies, architectures, and protocols for
computer communications. New York: ACM, 2003,
pp. 61–72.
[7] A. D. Jaggard and V. Ramachandran, “Robustness
of class-based path- vector systems,” in
Proceedings of ICNP’04, IEEE Computer Society.
IEEE Press, October 2004, pp. 84–93.
[8] N. Feamster, R. Johari, and H. Balakrishnan,
“Implications of autonomy for the expressiveness of
policy routing,” in SIGCOMM ’05: Proceedings of
the 2005 conference on Applications, technologies,
architectures, and protocols for computer comm.
New York, NY, USA: ACM Press, 2005.
[9] Sobrinho, “An algebraic theory of dynamic network
routing,” IEEE/ACM Transactions on
Networking, vol. 13, no. 5, pp. 1160–1173, 2005.
[10] T. G. Griffin and G. Huston, “RFC 4264: BGP
wedgies,” 2005.
[11]L. Subramanian, S. Agarwal, J. Rexford, and R.
Katz, “Characterizing the internet hierarchy from
multiple vantage points,” INFOCOM 2002. Twenty-
First Annual Joint Conference of the IEEE
Computer and Comm.Societies. Proceedings. IEEE,
vol. 2, pp. 618–627, 2002.
[12] C. Labovitz, A. Ahuja, A. Bose, and F. Jahanian,
“Delayed internet routing convergence,” SIGCOMM
Comput. Commun. Rev., vol. 30, no. 4, pp. 175–187,
2000.
[13] J. Feigenbaum, R. Sami, and S. Shenker,
“Mechanism design for policy routing.”
Distributed Computing, vol. 18, no. 4, pp. 293–305,
2006.
[14] H. Karloff, “On the convergence time of a path-
vector protocol,” in SODA ’04: Proceedings of the
fifteenth annual ACM-SIAM symposium on
Discrete algorithms. Philadelphia, PA, USA: Society
for Industrial and Applied Mathematics, 2004, pp.
605–614.
[15] T. G. Griffin and G. Wilfong, “A safe path vector
protocol,” in Proceedings of IEEE INFOCOM
2000, IEEE Communications Society. IEEE Press,
March 2000.
[16] H. Levin, M. Schapira, and A. Zohar, “Interdomain
routing and games,” in Proceedings of the 40th
ACM Symposium on Theory of Computing
(STOC), May 2008.
[17] A. Fabrikant and C. Papadimitriou, “The
complexity of game dynamics: BGP oscillations,
sink equilibria, and beyond,” in Proceedings of
SODA 2008.
[18] G. Huston, “Interconnection, peering, and
settlements,” in Internet Global Summit (INET).
The Internet Society, 1999.
[19]Z. M. Mao, R. Govindan, G. Varghese, and R. H.
Katz, “Route flap damping exacerbates internet
routing convergence,” in SIGCOMM ’02:
Proceedings of the 2002 conference on Applications,
technologies, architectures, and protocols for
computer communications. New York, NY, USA:
ACM, 2002, pp. 221–233.
[20]E. C. Jr., Z. Ge, V. Misra, and D. Towsley,
“Network resilience: Exploring cascading failures
within bgp,” in Allerton Conference on
Communication, Control and Computing, October
2002.
[21]K. Sriram, D. Montgomery, O. Borchert, O. Kim,
and D. R. Kuhn, “Study of bgp peering session
attacks and their impacts on routing performance,”
IEEE Journal on Selected Areas in
Communications, vol. 24, no. 10, pp. 1901–1915,
2006.








Biometrics Based File Transmission Using RSA
Cryptosystem

Mr. P. Balakumar¹ and Dr. R. Venkatesan²

¹Assistant Professor, Department of Computer Science and Engineering, Selvam College of Technology, Namakkal, Tamilnadu, India
[email protected]

²Professor & Head, Department of Information and Technology, PSG College of Technology, Coimbatore, Tamilnadu, India
[email protected]

Abstract: Biometrics provides natural, user-friendly and fast
authentication methods for high-security applications. Most
implementations of public key cryptosystems use the RSA
algorithm, an asymmetric algorithm that employs two keys, one
private and one public. The work in this paper merges the
biometric concept with asymmetric cryptography to secure the
document sending process in a distributed network. To send a
document, the sender encrypts the message using the receiver's
public key, and the receiver decrypts it using his private key.
The system uses fingerprints as the security-providing medium.
It is developed under a Graphical User Interface environment
that is easy for users to operate, and it is written in the Java
language so that it can be executed on any platform. The design
supports both Internet and intranet environments. The dynamic
key generation process is the main contribution of this work.

Keywords: Cryptography, Biometrics, RSA, DSS, KDC

1. Introduction

A biometric system is a standard method for verifying the
identity of a human being based on personal or physical
characteristics. The functions of a biometric system are
determining, measuring and codifying the unique
characteristics of an individual and comparing them with
those already recorded. In recent years there has been rapid
growth in the use of biometrics for user authentication,
because biometric-based authentication provides several
benefits over knowledge- and possession-based methods.
General biometric systems consist of four phases: data
collection, which includes sensing and pre-processing;
signal analysis, which includes feature extraction and
template generation; storage; and decision making with a
matcher, as shown in Fig. 1.

A secure biometric does not change widely over a long
time, whereas a less secure biometric is likely to change
with time. For example, the iris does not change over a
human's lifetime, so iris recognition is more secure than
voice identification. Uniqueness of a human biometric is a
measure of the variation or difference in the biometric
pattern across the worldwide population. A high degree of
uniqueness produces a more unique identifier; a low degree
of uniqueness indicates a biometric pattern that is found
commonly in the general population. The iris and retina
have higher levels of uniqueness than the hand, voice or
fingerprint. The nature of an application helps determine
the degree of robustness and uniqueness needed. The
involvement of living persons distinguishes biometric
verification from forensics, which does not involve
real-time recognition of a living human being.




Figure 1. General biometric system.

Information sharing is a necessary part of our life; hence,
information must be secured against mishandling. A
cryptography mechanism provides a pair of data
transformations, called encryption and decryption, to send
data in a secure manner. Encryption is applied to the normal
message, i.e. the plaintext, to produce the coded message
(encrypted data), which differs from the original data, using
an encryption key. Decryption uses the decryption key to
convert the coded message back to the original message (the
original data). If the encryption key and the decryption key
are the same, or one can be derived from the other, the
scheme is said to be symmetric cryptography.

Symmetric cryptography has a drawback: the sender must
deliver the same key to the receiver through another secured
channel. An attacker who captures it obtains the original
secret key. Such a cryptographic system can be easily broken
once the key used for encryption or decryption is known.

To overcome this drawback of symmetric cryptography,
we move towards the Public Key Cryptography system
introduced in 1976 by Whitfield Diffie and Martin Hellman
of Stanford University [22]. It uses a pair of associated keys,
one for encryption and another for decryption. One key,
known as the private key, is kept strictly secret by the user;
the other, the public key, is distributed to all other users.

2. Security of Biometrics

Regular biometrics can help reduce the problems
associated with existing methods of user verification.
Hackers find the weak points of an existing system and
attack it accordingly. Unlike key systems, which can be
broken by finding the key through brute-force attack,
biometric-based systems are difficult to crack; they require
considerably more attempts to break through. Although
standard encryption techniques help in many ways to avoid
breaches of security, some new types of attacks become
possible. If a biometric system is used as a supervised
verification tool, there may not be problems; but in a distant
unattended application, such as a web-oriented e-commerce
application, hackers may have sufficient time to make
frequent attempts before being noticed, or may even be able
to actually break the remote client [8].

2.1 Comparison to Password
A real benefit of biometric signals is that they are much
longer than a password or pass phrase: they vary from a
hundred bytes to over a megabyte, and the information
content of such signals is correspondingly high. It is almost
impossible to memorize a 2K password, and it would take a
tediously long time to type one in (particularly without
errors). Fortunately, automated biometrics can offer the
security advantages of long passwords while still retaining
the speed and simplicity of short passwords. One
dissimilarity, however, is that there is no "fake password"
input detector equivalent to a fake-biometric detector
(although perhaps a password found in a standard dictionary
could be deemed "fake"). Additionally, in a password- or
token-based verification system, no effort is made to prevent
replay attacks (since the "signal" does not differ from one
presentation to another). However, in an automated
biometric-based verification system, one can go to the extent
of checking the liveliness of the input signal.

Another significant difference concerns the matching
subsystems. A password-based method always provides a
crisp result: if the password matches, it grants access, and
otherwise it refuses access. The performance of a pattern
recognition system, in contrast, generally depends on several
factors, such as the quality of the input and enrollment data,
along with the basic characteristics of the underlying
algorithm. This is typically reflected in a graded overall
match "score" between the submitted biometric and a stored
reference. In a biometrics-based system, one can purposely
set a threshold on the score to directly control the false
acceptance and false rejection rates. Inverting this, given a
good matching score, the system can guarantee that the
probability of the signal coming from a genuine person is
significantly high. Such a calibrated confidence measure can
be used to tackle non-repudiation support, something that
passwords cannot provide [8].

3. RSA Algorithm

The Rivest-Shamir-Adleman (RSA) scheme is a block-cipher
asymmetric cryptosystem in which the plaintext and
ciphertext are integers between 0 and n−1 for some n. A
typical size for n is 1024 bits, or 309 decimal digits. In the
RSA system every user must generate a private key
KR = {d, n}, keep it secret, and store the public key
KU = {e, n} in a Key Distribution Centre (KDC). The sender
obtains the receiver's public key from the KDC and encrypts
the message using that key. The receiver uses his private key
to decrypt the coded message; the private key is known only
to the receiver himself.
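As a minimal sketch of this key generation, encryption and decryption flow using java.math.BigInteger (our illustrative code for textbook RSA without padding; the key sizes, the class name and the message are assumptions, not the paper's implementation):

import java.math.BigInteger;
import java.security.SecureRandom;

public class RsaSketch {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        BigInteger p = BigInteger.probablePrime(512, rnd);
        BigInteger q = BigInteger.probablePrime(512, rnd);
        BigInteger n = p.multiply(q);                         // modulus, about 1024 bits
        BigInteger phi = p.subtract(BigInteger.ONE)
                          .multiply(q.subtract(BigInteger.ONE));
        BigInteger e = BigInteger.valueOf(65537);             // public exponent
        BigInteger d = e.modInverse(phi);                     // private exponent
        BigInteger m = new BigInteger("Hello".getBytes());    // plaintext as an integer < n
        BigInteger c = m.modPow(e, n);                        // encrypt with KU = {e, n}
        BigInteger r = c.modPow(d, n);                        // decrypt with KR = {d, n}
        System.out.println(new String(r.toByteArray()));      // prints "Hello"
    }
}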
3.1 Finger Prints
Fingerprint biometrics is an automated digital version of
the old ink-and-paper method used for more than a century
for recognition, mainly by law enforcement agencies. Some
sample fingerprint images are shown in Figure 2. The
biometric device involves users placing a finger on a platen
for the print to be read. The minutiae are then extracted by
the vendor's algorithm, which also performs a fingerprint
pattern analysis. Fingerprint template sizes are typically 50
to 1,000 bytes.

Fingerprint biometrics currently has three main application
areas: large-scale Automated Finger Imaging Systems
(AFIS), generally used for law enforcement purposes; fraud
prevention in entitlement programs; and physical and
computer access.










Figure 2. Sample Fingerprints
3.2 Characteristics of Biometrics
Table 1 compares the seven mainstream biometrics in terms
of a number of properties, ranging from how robust and
distinctive [10] they are to what they can be used for (i.e.,
identification and verification, or verification alone). The
table is an effort to help the reader categorize biometrics
along important dimensions. Because this industry is still
working to establish comprehensive standards, and the
technology is changing rapidly, it is difficult to make
assessments with which everyone would agree. The table
shows an assessment based on consultation with
technologists, vendors, and program managers. It is not
intended as an aid to those in the market for biometrics;
rather, it is a guide for the non-specialist.


Table 1: Comparison of Mainstream Biometrics

Biometric          | Identify versus Verify | Robust | Distinctive | Intrusive
Fingerprint        | Either                 | Medium | High        | Touching
Hand               | Verify                 | Medium | Low         | Touching
Facial             | Either                 | Medium | Medium      | 12+ inches
Voice              | Verify                 | Medium | Low         | Remote
Iris Scan          | Either                 | High   | High        | 12+ inches
Retinal            | Either                 | High   | High        | 1–2 inches
Keystroke Dynamics | Verify                 | Low    | Low         | Touching

Comparing ways of using biometrics, half can be used for
both identification and verification, while the rest can be
used only for verification. In particular, hand geometry has
only been used for verification applications, such as physical
access control and time-and-attendance verification. In
addition, voice detection, because of the need for enrollment
and matching using a pass-phrase, is used for verification
only.

There is considerable variability in terms of robustness
and distinctiveness. Fingerprinting is fairly robust and, even
though it is distinctive, a small proportion of the population
has unusable prints, usually because of age, genetics, injury,
occupation, exposure to chemicals, or other occupational
hazards. Hand/finger geometry is moderate on the
distinctiveness scale but not very robust, while facial
recognition is neither highly robust nor highly distinctive.

In voice recognition, assuming the voice and not the
pronunciation is being measured, the biometric is moderately
robust and distinctive. Iris scans are both highly robust,
because the iris is not highly vulnerable to routine changes
or damage, and distinctive, because iris patterns are
randomly formed. Finally, dynamic signature verification
and keystroke dynamics are neither robust nor distinctive.

4. Problem Statement

Even though the RSA algorithm can use the fingerprint
biometric system for public and private key generation,
there are some problems with that approach.

They are:
1. Brute-force attack: The maximum size of the public and
private keys obtained by the RSA algorithm is 155
digits. Such a key can be recovered by a brute-force
attacker using thousands of machines with about three
months of computation [Journal of Telecommunications
and Information Technology, Vol. 4/2002, pp. 8–9].
2. Increased key storage requirement: RSA key storage
(private and public keys) requires significant amounts
of memory, so the keys have to be stored on some
equipment or memorized [Journal of Telecommunications
and Information Technology, Vol. 4/2002, pp. 41–56].
3. No dynamic key generation: There is no dynamic key
generation in the RSA algorithm, so the user must keep
his private key secret. The private key may be lost,
stolen or forgotten, in which case the user may lose the
data.

5. Proposed Scheme

The architecture of the proposed scheme is shown in
Figure 3. The client generates the public key and sends it to
the KDC. During the document send process, it retrieves the
receiver's public key from the KDC and encodes the data
with the aid of that public key, then sends the encoded data
to the receiver. When viewing the document, the receiver
dynamically generates the private key, which is used to
decode the encoded data.

The proposed digital signature algorithm is a version of the
RSA algorithm that overcomes the problems in the RSA
system. A brute-force attacker can hack a private key by
trying every possible combination of a numeric key. In our
system we use alphanumeric keys (combinations of
alphabetic and numeric characters), so the attacker cannot
obtain the key values easily.

The second problem with the existing RSA algorithm is
the key storage requirement. In our proposed system we
generate the private key dynamically, hence there is no need
for key storage. The third problem with the existing system
is the lack of dynamic key generation: normally, with the
RSA algorithm, users have to generate their public and
private keys, send the public key to the key distribution
centre, and keep their private keys secret themselves.

In our proposed algorithm we generate the public key from
the fingerprint and send that public key to the key
distribution centre. When encrypting data, the sender gets
the receiver's public key from the key distribution centre and
encrypts the data with that public key.

To decrypt the ciphertext the receiver requires his private
key, and only at decryption time is the receiver's private key
generated and known to him. This process is called 'dynamic
private key generation'.

Figure 3. Architecture of the proposed scheme (client side: public key generation, document send with encoding, key retrieval from the KDC, and document view with dynamic private key generation and decoding)

6. Key Distribution Centre

The KDC has a very significant role in the asymmetric key
cryptosystem. It receives public key values from the clients
and stores them locally, and it is the only authoritative
system that distributes the public key values to requesting
users. The KDC application is a server application with two
modules: a key management module and a key distribution
module. The key management module is mainly for receiving
and maintaining the key values; the key distribution module
distributes the public key values based on client requests.
6.1 Key Management Process
The key management module performs the key
maintenance process. It has two main tasks: the key receive
process and the key expiry management process. The key
receive process runs as a separate thread: the KDC listens
for key values and sends a key in response to a client
request. The receiving process uses a UDP socket and does
not establish a connection with the client application. This
module maintains all the received public key values. The key
expiry management module tracks the validity of the key
values: the KDC automatically removes key values from the
key list if the client application process is terminated.
Clients can also change and update their key value, in which
case the existing key value is replaced in the list by the new
one.

6.2 Client
The client application is designed to handle the document
transfer process and the key generation process. It is divided
into four modules: the key generation module, the sender
module, the receiver module and the document view module.
The key generation module generates the key from the
fingerprint data. The sender module is used to encode and
send a document. The receiver module receives the encoded
documents sent by other clients. The document view module
maintains received documents; after the decoding process
the user can view the document.
6.3 Key Generation Module
The key generation process is shown in Figure 4. This
module generates the public key from the fingerprint data.
The input is given to the system as an image, whose data is
used to create the key base value from which the public key
value is generated.
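A minimal sketch of this pixel-folding step (the file name, the folding function, and the use of javax.imageio rather than the paper's PixelGrabber-based extraction are our illustrative assumptions):

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class KeyBaseSketch {
    public static void main(String[] args) throws Exception {
        BufferedImage img = ImageIO.read(new File("fingerprint.jpg")); // JPEG/GIF input
        long keyBase = 0;
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                keyBase = keyBase * 31 + (img.getRGB(x, y) & 0xFF);    // fold pixel data
            }
        }
        System.out.printf("key base = %016x%n", keyBase);  // seed value for key generation
    }
}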


















Figure 4. Key generation (capture the fingerprint data; dot-matrix conversion; generate key base; generate public key; send the key to the KDC; key update)

The public key value is stored in the KDC along with its
client details. The system supports the JPEG and GIF image
formats. A pixel matrix is constructed from the image data,
and the key base is generated from the image data matrix
values. The system has been implemented as a GUI-based
application developed in Java. The main menu has three
options: key preparation, document list and send process.
The key generation window receives the fingerprint image
file as input; the Generate button initiates the key generation
process and the Send button starts the key transfer process.
The key distribution centre is designed to receive and
maintain all the public key values. The message sending
process transfers a file from one client to another, and the
message file is encoded before sending. All received
messages are listed in the inbox, from which the user can
select a file and view it. Documents are decoded before
viewing, and the private key value is generated at the time
of the decoding
process. The decoded documents are stored in the specified
folder.

7. Testing and Implementation

Testing is an important phase in the system development
process. The system is developed as a GUI-based application
and is tested before the implementation process using
different testing methods: unit test, integration test, system
test, validation test and stress test. The system is tested in
different network and platform environments; it uses an
image scanner to capture fingerprint image data and has
been tested in an intranet environment. Each module is
tested separately in the unit test; for example, the RSA key
generation, encoding and decoding operations are tested in
their corresponding modules.

The client application and the key distribution application
are tested separately. The integration test is performed after
all the modules are connected to the main menu, and the
entire system is tested across all operations using a set of
fingerprint values. The stress test evaluates the load
management strength of the client application and the key
distribution centre application: the key distribution centre is
stressed by connecting multiple clients to the KDC, and the
client application is stressed by sending a large file to
another client. The validation test is performed for all input
values; the availability of the fingerprint image is checked
before the key base generation process.

The system is developed to distribute documents securely
using biometrics; it has been tested and the results are very
good. The implementation of the system is conducted as a
direct changeover: the new system is installed and activated
directly for usage. The system can be implemented in any
network environment and supports all types of file transfer
operations. It has been developed as two applications: the
key distribution centre and the client application. The key
distribution centre application is loaded onto a separate
machine, while the client application can be loaded onto all
other client machines.

All client applications should be configured with the key
distribution centre's IP address for the key updating and
request processes. The system is currently designed to obtain
fingerprint images from image files, so it must be connected
to an image scanner; it can also be connected to dedicated
fingerprint image scanner devices. The client application
and the key distribution application can be executed
continuously to maintain the connection and the message
receive process. All messages are received directly by the
client applications. The system requires only a small amount
of hard disk space to store the received and decoded
documents. The key distribution centre should be connected
with all the client applications, and the system can run
across one or more network environments.

7.1 Software Selection
The simulation tool is developed using the Java language
under the Windows platform. Java supports multiple
platforms, GUI design and network operations. Image
processing, cryptographic operations, network transmission
and file processing are the major areas in the system, and
Java provides a variety of packages and classes to support
all these tasks. The user interface is designed with GUI
support, and the application is designed to run under any
platform. The fingerprint values are retrieved from image
files; the image file data are extracted and converted into a
pixel matrix. The system performs this processing using
Java classes such as Image, MediaTracker and PixelGrabber.
The Image class is used to convert an image into an object,
while the MediaTracker and PixelGrabber classes support
the data extraction and pixel conversion process. These
classes are available in the java.awt and java.awt.image
packages.

Java provides a separate package, the JCE, for
cryptography, but the JCE requires service providers for its
implementation. In Java, cryptography can thus be
implemented in two ways: using the JCE with service
providers, or writing the code for the cryptographic
algorithms directly. In this work the second method is
applied. The RSA algorithm is implemented with the
support of the java.math package: RSA requires support for
high-bit-length values, and Java provides the BigInteger
class to process arbitrarily large integers. All RSA key
generation, encoding and decoding operations are done
using the BigInteger class.

The file processing and data transmission processes are implemented with the support of the java.io and java.net packages. All files are processed using the byte stream classes. The data transmission tasks use the TCP/IP support classes in the java.net package: the client application transfers files using the ServerSocket and Socket classes, while the key distribution centre application is designed around UDP, with the DatagramSocket and DatagramPacket classes used in the KDC process.
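A minimal sketch of the UDP side of a key request is given below; the request format, the KDC address and the port number are hypothetical, since the paper does not specify them:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class KdcKeyRequest {
    public static void main(String[] args) throws Exception {
        byte[] request = "PUBKEY client42".getBytes();            // hypothetical request format
        DatagramSocket socket = new DatagramSocket();
        InetAddress kdc = InetAddress.getByName("192.168.1.10");  // example KDC address
        socket.send(new DatagramPacket(request, request.length, kdc, 9999)); // example port

        byte[] buffer = new byte[1024];
        DatagramPacket reply = new DatagramPacket(buffer, buffer.length);
        socket.receive(reply);                                    // blocks until the KDC replies
        System.out.println(new String(reply.getData(), 0, reply.getLength()));
        socket.close();
    }
}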

8. Conclusion

The system is developed to provide security for the file transfer process in a distributed environment. Document transmission between systems in a distributed environment is a routine task, and the same environment is shared by many members, so the system should ensure the security of the documents that are transferred. Different cryptographic techniques are used to secure the data, and in recent years biometrics has been used to recognize users. This work combines biometrics and cryptography to provide security for the document transmission process in a distributed environment, where traditionally passwords and smart cards are used for security.

The system uses biometrics technology, specifically fingerprints, as the security-providing medium. Passwords can be hacked by trial and error, but a biometrics-based security system is far harder to break. The system is developed as two applications: the key distribution centre application and the client application. The KDC supplies the public key values to the required clients, while the client application is designed to handle all the data transfer and security operations.

The system uses a purpose-designed key base generation algorithm together with the RSA algorithm. It has been tested with various samples and clients, and its performance is very good. It has also been tested with different file formats, and the results show that it supports all of them. The public key values for all clients are stored and distributed by the key distribution centre, and no storage process is required for the secret key: using the fingerprint values, the system can generate both the public key and the private key. Damage to the fingerprint, however, may impact the recovery of the documents.

In the future the system can be extended to other authentication modalities such as capillary patterns in the retina, hand geometry, facial characteristics, signature dynamics, voice patterns, and keystroke timing. Data compression techniques can be used to reduce content size, processing time and transmission time. The system will also include noise detection and filtering facilities for the input process.
References
[1] Bruce Schneier, "Applied Cryptography: Protocols, Algorithms", 2nd Edition, Wiley.
[2] P. Naughton and H. Schildt, "Java 2: The Complete Reference", McGraw-Hill, 1999.
[3] William Stallings, "Cryptography and Network Security: Principles and Practice", 2nd Edition, Prentice Hall, Upper Saddle River.
[4] Anil Jain, Lin Hong, Sharath Pankanti, and Ruud Bolle, "An Identity Authentication System Using Fingerprints", Department of Computer Science, Michigan State University, East Lansing.
[5] James L. Wayman, "Biometrics Identification", Communications of the ACM, February 2000.
[6] Katrin Franke, Javier Ruiz-del-Solar, and Mario Köppen, "Soft-Biometrics: Soft-Computing for Biometric Applications", Dept. of Pattern Recognition, Fraunhofer IPK, Berlin, Germany.
[7] Nalini K. Ratha, Jonathan H. Connell, and Ruud M. Bolle, "An Analysis of Minutiae Matching Strength", IBM T.J. Watson Research Center.
[8] T. Rowley, "Silicon Fingerprint Readers: A Solid State Approach to Biometrics", Proc. of CardTech/SecurTech, Orlando, Florida, May 1997.
[9] B. Schneier, "The Uses and Abuses of Biometrics", Communications of the ACM, August 1999.
[10] B. Schneier, "Security Pitfalls in Cryptography", Proc. of CardTech/SecurTech, Washington D.C., April 1998.
[11] C.K. Wong and S.S. Lam, "Digital Signatures for Flows and Multicasts", IEEE/ACM Transactions on Networking, August 1999.
[12] www.rand.org/publications/MR/MR1237/MR1237.appa.pdf
[13] www.mit.bme.hu/events/minisy2003/papers/orvos.pdf
[14] http://rpmfreelancer.no-ip.com:8080/duncan21/biometrics/finger.html
[15] www.cost275.gts.tsc.uvigo.es/presentations/COST275_Jain.pdf
[16] www.research.ibm.com/ecvg/pubs/sharat-proc.pdf
[17] R.E. Sorace, V.S. Reinhardt, and S.A. Vaughn, "High-speed digital-to-RF converter", U.S. Patent 5 668 842, Sept. 16, 1997.
[18] The IEEE website. (2002) [Online]. Available: http://www.ieee.org/
[19] M. Shell. (2002) IEEE Transaction homepage on CTAN. [Online]. Available: http://www.ctan.org/tex-archive/macros/latex/contrib/supported/IEEEtran/
[20] W. Diffie and M. Hellman, "New Directions in Cryptography", IEEE Transactions on Information Theory, IT-22 (1976), pp. 644-654.


Authors Profile

Mr. P. Balakumar received the B.E. and M.E. degrees in Computer Science and Engineering from PSG College of Technology, Coimbatore, in 1997 and from Anna University, Chennai, in 2004, respectively. During 1999-2001 he worked as a Lecturer at PSG College of Technology, Coimbatore. During 2003-2008 he worked as a Lecturer and Assistant Professor at AMS Engineering College, Namakkal. He is now with Selvam College of Technology, Namakkal, Tamilnadu, India, as an Assistant Professor in the Department of Computer Science and Engineering.

Dr. R. Venkatesan was born in Tamilnadu, India, in 1958. He received his B.E (Hons) degree from Madras University in 1980 and completed his Masters degree in Industrial Engineering from Madras University in 1982. He obtained a second Masters degree, an MS in Computer and Information Science, from the University of Michigan, USA, in 1999, and was awarded a PhD by Anna University, Chennai, in 2007. He is currently Professor and Head of the Department of Information Technology, PSG College of Technology, Coimbatore, India. His research interests are in simulation and modeling, software engineering, algorithm design, and software process management.










Artifact Extraction and Removal from EEG Data Using ICA

Ashish Sasankar 1, Dr. N.G. Bawane 2, Sonali Bodkhe 3 and M.N. Bawane 4

1 G.H. Raisoni College of Engineering, CRPF Gate 3, Digdoh Hills, Hingna Road, Nagpur (India)
[email protected]

2 G.H. Raisoni College of Engineering, CRPF Gate 3, Digdoh Hills, Hingna Road, Nagpur (India)
[email protected]

3 G.H. Raisoni College of Engineering, CRPF Gate 3, Digdoh Hills, Hingna Road, Nagpur (India)
[email protected]

4 Govt. Polytechnic, Nagpur (India)
[email protected]

Abstract: Independent Component Analysis (ICA) has emerged as a promising new tool for performing artifact correction on EEG data. In this paper, ICA is used to perform artifact correction on three types of artifacts, namely frontal (eye), occipital (rear-head), and muscle artifacts. EEG is composed of electrical potentials arising from several sources. Each source (including separate neural clusters, blink artifacts or pulse artifacts) projects a unique topography onto the scalp, a 'scalp map', which may be 2-D or 3-D. These maps are mixed according to the principle of linear superposition. Independent component analysis attempts to reverse the superposition by separating the EEG into mutually independent scalp maps, or components.
Keywords: EEG, Independent Component Analysis (ICA), BCI
1. Introduction
EEG stands for Electroencephalogram. It senses electrical
impulses within the brain through electrodes placed on the
scalp and records them. It is a recording of brain activity,
which is the result of the activity of billions of neurons in the
brain. EEG can help diagnose conditions such as seizure
disorders, strokes, brain tumors, head trauma, and other
physiological problems. The pattern of EEG activity changes
with the level of a person's arousal. A relaxed person has
many slow EEG waves whereas an excited person has many
fast waves. A standardized system of electrode placement is
the international 10-20 system. A common problem with
EEG data is contamination from muscle activity on the
scalp. It is desirable to remove such artifacts to get a better
picture of the internal workings of the brain. In a recent publication [20], a new method was proposed for EEG signal classification in BCI systems using a nonlinear ICA algorithm. An ICA-based EEG feature extraction and modeling approach for person authentication is presented in [21]. The efficient use of modern DSP and soft computing tools in the area of medical diagnosis is covered in [22].
In this paper we will focus on extracting and removing
artifacts from recorded EEG data using Independent
Component Analysis.
2. EEG Database and Preprocessing
The BIOMED team and the Department of Clinical and Experimental Neurology, both at Katholieke Universiteit Leuven (KUL), Belgium, have given public access to two of their long-term EEG recordings from patients suffering from mesial temporal lobe epilepsy. The patient was a 35-year-old male. The data was collected from 21 scalp electrodes placed according to the international 10-20 system, with additional electrodes T1 and T2 on the temporal region. The sampling frequency was 250 Hz and an average reference montage was used. The electrocardiogram (ECG) for each patient was also simultaneously acquired and is available in channel 22 of each recording.
Under this system, the EEG electrodes are placed on the
scalp at 10 and 20 percent of a measured distance. For
example, if a circumference measurement around the skull
was approximately 55 cm, a base length of 10% or 5.5 cm
and 20% or 11.0 cm would be used to determine electrode
locations around the skull. The skull may be different from
patient to patient but the percentage relationships remain the
same. Figure 1 shows a typical 10-20 electrode placement looking down on the skull. Each site has a letter and a number or another letter to identify the hemisphere location. The letters Fp, F, T, C, P, and O stand for Fronto-polar, Frontal, Temporal, Central, Parietal and Occipital respectively. Even numbers (2, 4, 6, 8) refer to the right hemisphere whereas odd numbers (1, 3, 5, 7) refer to the left hemisphere. The letter z refers to an electrode placed on the midline. The smaller the number, the closer the position is to the midline.


Figure 1. Typical electrode placements under the International 10-20 system

The original EEG segment is taken from f1.EDF, from second 7500 to second 8100 (f1_750to810.set, 12 Mbytes). This EEG frame contains a seizure [18][19].
3. Independent Component Analysis(ICA)
Independent Component Analysis (ICA) is one of a group of
algorithms to achieve blind separation of sources [Jutten &
Herault 1991]. ICA finds an unmixing matrix which linearly
decomposes the multichannel EEG data into a sum of
maximally temporally independent and spatially fixed
components. These Independent Components (ICs) account
for artifacts, stimulus and response locked events and
spontaneous EEG activity. One of the standard applications
of ICA to EEG includes artifact detection and removal. Selected components responsible for artifacts are set to zero, and all other ICs can be projected back onto the scalp, yielding EEG in true polarity and amplitude. Related approaches to magnetoencephalographic signals can be found in the literature. Some simple neural network algorithms can blindly separate mixtures of independent sources: maximizing the joint entropy H(y) of the output of a neural processor minimizes the mutual information among the output components y_i = g(u_i), where g(u_i) is an invertible bounded nonlinearity and u = Wx is a version of the original sources. ICA is suitable for performing blind source separation on EEG data because: (1) it is plausible that EEG data recorded at multiple scalp sensors are linear sums of temporally independent components arising from spatially fixed, distinct brain or extra-brain networks; and (2) EEG mixing by volume conduction does not involve significant time delays. In EEG analysis, the rows of the input matrix x are the EEG signals recorded at different electrodes, while the columns are measurements recorded at different time points.

3.1 Types of artifacts

Severe contamination of EEG activity by artifacts such as eye movements, blinks, head movements, muscle activity, and line noise creates a problem for proper EEG interpretation and analysis. The three types of artifacts studied in this paper are:
1) Eye artifacts - project mainly to the frontal side
2) Rear head artifacts - project mainly to the occipital side
3) Muscle artifacts - dispersed throughout the brain.
3.2 Assumptions for the ICA model

The following assumptions ensure that the ICA model estimates the independent components meaningfully. The first assumption is the only true requirement which ICA demands; the other assumptions ensure that the estimated independent components are unique.

(1) The latent variables (or independent components) are statistically independent and the mixing is linear.
(2) There is no more than one gaussian signal among the latent variables, and the latent variables have cumulative density functions not much different from a logistic sigmoid.
(3) The number of observed signals, m, is greater than or equal to the number of latent variables, n (i.e. m >= n). If n > m, we come to a special category of Independent Component Analysis called ICA with over-complete bases. In such a case the mixed signals do not have enough information to separate the independent components. There have been attempts to solve this particular problem, but no rigorous proofs exist as of yet. If m > n then there is redundancy in the mixed signals. The ICA model works ideally when n = m.
(4) The mixing matrix is of full column rank, which means that the rows of the mixing matrix are linearly independent. If the mixing matrix is not of full rank then the mixed signals will be linear multiples of one another.
(5) The propagation delay of the mixing medium is negligible.

3.3 The ICA model applied to EEG Data

In the case of EEG signals we have m scalp electrodes picking up correlated brain signals, and we would like to know which effectively independent brain sources produced these signals. The ICA model appears well suited for this scenario because it satisfies most of the model assumptions considered in section 3.2. We start by assuming that EEG data can be modeled as a collection of statistically independent brain signals. Assumption (5) is valid since volume conduction in the brain is effectively instantaneous, and assumption (2) is plausible. In this paper, we attempt to separate the m observed EEG signals into n statistically independent components (thus satisfying assumptions (3) and (4)). However, it is questionable to assume that EEG data recorded from m electrodes is made up of exactly n statistically independent components, since we ultimately cannot know the exact number of independent components embedded in the EEG data. Nonetheless, this assumption is usually enough to identify and separate artifacts that are concentrated in certain areas of the brain, such as eye, temporal, and occipital artifacts. The ICA model tends to have a more difficult time separating artifacts that are spread out over the scalp, such as muscle artifacts.

[Figure 2 here: flowchart of the Bell & Sejnowski Infomax algorithm. The weights are initialised to the identity matrix and, while step < maxsteps, each data block is passed through u = weights * x + w0 and y = 1/(1 + exp(-u)) and the weights are updated with the learning rate (lrate) according to eqs. (14)-(15) below. If the weights blow up, the learning rate is lowered, the rank of the data is checked (if rank < channels, the channels are not independent), and training restarts; otherwise the step counter, weight change and angle delta are updated until the weight change falls below the stopping threshold (nochange) or maxsteps is reached, after which the final weight matrix W is returned.]

Figure 2. The Bell & Sejnowski Infomax Algorithm flowchart.


3.4 The ICA Algorithm
We now present a brief derivation of the Bell-Sejnowski information maximization algorithm. We consider the simple case of a one-input one-output system to derive the ICA algorithm; the general multi-input multi-output system is derived similarly, with n-dimensional matrices of vector-valued random variables in place of the scalar-valued functions.
Consider a scalar-valued variable x with a gaussian pdf f_x(x) that passes through a transformation function y = g(x) to produce the output with pdf f_y(y) (Figure 3). This is analogous to our matrix operation:

Y = WX
For our work with EEG data we take the transformation function y to be the logistic sigmoid function defined as

y = g(x) = \frac{1}{1 + e^{-u}}, \qquad u = wx + w_0

where w is the slope of y (also called the weight) and w_0 is a bias weight that aligns the high density parts of the input with y (refer to Figure 3).

Figure 3. Transformation of the pdf f_x(x) of x when x is passed through a sigmoid function

An increase in the joint entropy of the output, H(y), means a decrease in its mutual information. The entropy of the output is maximized when we align the high density parts of the pdf of x with the highly sloping parts of the function g(x) (hence the need for the biasing weight w_0). The function g(x) is monotonically increasing (i.e. has a unique inverse) and thus the output pdf f_y(y) can be written as a function of the input pdf f_x(x) as:

f_y(y) = \frac{f_x(x)}{\left| \partial y / \partial x \right|}    (1)
The entropy of the output is given by

H(y) = -E\{\ln f_y(y)\} = -\int_{-\infty}^{\infty} f_y(y) \ln f_y(y)\, dy    (2)

Substituting (1) into (2) gives

H(y) = E\left\{ \ln \left| \frac{\partial y}{\partial x} \right| \right\} - E\{\ln f_x(x)\}    (3)

We now would like to maximize H(y) of eq. (3) for statistical independence. Looking at the right-hand side, we see that f_x(x) is fixed and the only variable we can change is y, or more precisely the slope w of y. Hence we take the partial derivative of H(y) with respect to w. The second term in eq. (3) does not depend on w and can therefore be ignored. The change in slope, \Delta w, necessary for the maximum change in entropy is then:

\Delta w \propto \frac{\partial H(y)}{\partial w} = \frac{\partial}{\partial w} E\left\{ \ln \left| \frac{\partial y}{\partial x} \right| \right\}    (4)



We now come to an important step. We would like to compute the derivative, but we cannot compute the expectation. Hence, we make the stochastic gradient approximation

E\left\{ \ln \left| \frac{\partial y}{\partial x} \right| \right\} \approx \ln \left| \frac{\partial y}{\partial x} \right|

to get rid of the expectation [4]. The equation then simplifies to:

\Delta w \propto \frac{\partial H(y)}{\partial w} = \frac{\partial}{\partial w} \left( \ln \left| \frac{\partial y}{\partial x} \right| \right) = \left( \frac{\partial y}{\partial x} \right)^{-1} \frac{\partial}{\partial w} \left( \frac{\partial y}{\partial x} \right)    (5)

The above equation is the general form of the weight change rule for any transformation function y. For the logistic sigmoid function, the terms in eq. (5) are evaluated as:

\frac{\partial y}{\partial x} = w y (1 - y)    (6)

\frac{\partial}{\partial w} \left( \frac{\partial y}{\partial x} \right) = y (1 - y) \big( 1 + wx(1 - 2y) \big)    (7)

Substituting the above equations into eq. (5) gives the weight update rule for y = logistic sigmoid:

\Delta w \propto w^{-1} + x(1 - 2y)    (8)

Similarly, the bias weight update, \Delta w_0, can be evaluated as:

\Delta w_0 \propto 1 - 2y    (9)

Following similar steps, we can derive the learning rules for multivariate data with a sigmoid nonlinearity:

\Delta W \propto \left[ W^T \right]^{-1} + (1 - 2y) x^T    (10)

\Delta w_0 \propto 1 - 2y    (11)
4. Matlab Implementation
Equations (10) and (11) give the learning rules for updating the weights to perform ICA. Implementing them directly in Matlab would involve computing a matrix inverse, which is computationally very intensive. We therefore modify eq. (10) by multiplying it by W^T W (this does not change anything since W is orthogonal):

\Delta W \propto \frac{\partial H(y)}{\partial W} (W^T W)
\Rightarrow \Delta W \propto \left( \left[ W^T \right]^{-1} + (1 - 2y) x^T \right) W^T W
\Rightarrow \Delta W \propto \left( I + (1 - 2y) u^T \right) W    (12)

where u = Wx. The bias weight update rule remains the same:

\Delta w_0 \propto 1 - 2y    (13)

The proportionality constant in eqs. (12) and (13) is called the learning rate (lrate). In summary, the following two weight update rules are used to perform ICA in Matlab:

W_{new} = W_{old} + lrate \cdot \left( I + (1 - 2y) u^T \right) W_{old}    (14)

w_{0,new} = w_{0,old} + lrate \cdot (1 - 2y)    (15)

where
lrate = learning rate;
W = weight matrix;
w_0 = bias weight;
I = identity matrix;
y = logistic sigmoid output;
u = W x + w_0.
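For concreteness, the following is a minimal sketch of a single update of eqs. (14) and (15) on toy two-channel data (written in Java rather than Matlab; the data sample and learning rate are illustrative):

public class InfomaxUpdateSketch {
    public static void main(String[] args) {
        int n = 2;                         // number of channels/components
        double lrate = 0.01;               // learning rate
        double[][] W = {{1, 0}, {0, 1}};   // weight matrix, initialised to the identity
        double[] w0 = new double[n];       // bias weights
        double[] x = {0.3, -0.7};          // one (toy) sample of mixed data

        // u = W*x + w0 and y = logistic(u)
        double[] u = new double[n], y = new double[n];
        for (int i = 0; i < n; i++) {
            u[i] = w0[i];
            for (int j = 0; j < n; j++) u[i] += W[i][j] * x[j];
            y[i] = 1.0 / (1.0 + Math.exp(-u[i]));
        }

        // G = I + (1 - 2y) u^T, then W = W + lrate * G * W   (eq. 14)
        double[][] G = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                G[i][j] = (i == j ? 1.0 : 0.0) + (1.0 - 2.0 * y[i]) * u[j];
        double[][] Wnew = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                double dw = 0;
                for (int k = 0; k < n; k++) dw += G[i][k] * W[k][j];
                Wnew[i][j] = W[i][j] + lrate * dw;
            }

        // w0 = w0 + lrate * (1 - 2y)   (eq. 15)
        for (int i = 0; i < n; i++) w0[i] += lrate * (1.0 - 2.0 * y[i]);

        System.out.println("W[0][0] after one step: " + Wnew[0][0]);
    }
}

In practice this update is applied block by block over the whole recording, with the learning rate lowered and training restarted whenever the weights blow up, as summarised in the flowchart of Figure 2.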

5. Result Discussion
The data set f1.set considered in this paper contains 600 seconds of data with a sampling frequency Fs = 250 Hz and 21 channels. The data was collected from electrodes placed on the scalp at standard locations using the international 10-20 system [4]. The EEG data is plotted using a function implemented in Matlab [18] and is depicted in Figure 4.
The data contains a seizure onset from around second 300 onwards on the T3-T5 channel, with the appearance of rhythmic waves. Occipital artifacts appear on O1 and O2, eye blink artifacts on Fp1 and Fp2, and muscle artifacts on all channels.


Figure 4. EEG data from Data Set (f1.set)

5.1 Independent Components
The data is processed through the Matlab functions of the EEG toolbox [19]. The resulting independent components are shown in Figure 5.

Figure 5. Independent Components of Dataset(f1.set)

5.2 Topographical Projections
The topographical projections of the independent components are shown in Figure 6.

Figure 6. Independent Components with their respective
topographical projection of Dataset(f1.set)


5.3 Corrected EEG Data
The selected right, left and frontal artifacts were removed from the EEG data using the ICA technique. The resulting artifact-corrected EEG data is shown in Figure 7. A comparison with the original EEG data (Figure 4) clearly shows that the identified muscle artifacts have been greatly reduced.

Figure 7. Corrected EEG Data of Dataset (f1.set)

6. Conclusions
It is clear that Independent Component Analysis is well suited to performing artifact correction on EEG data. The topographical views provided the first clues as to which components might be artifacts; these plots, together with the time plots of the independent components, were used to identify the eye and occipital artifacts. One of the unique properties of ICA is that it can eliminate the artifacts alone without disturbing the surrounding EEG activity. An alternative approach to artifact extraction would be simply subtracting the frontal, temporal, and occipital readings from the EEG data, but this would lead to considerable loss of collected information.
The muscle artifacts appearing on all channels of dataset f1.set after a seizure onset could not be removed or reduced significantly. One reason could be that these artifacts are not concentrated in any one region, so the ICA algorithm cannot interpret them as originating from any single electrode. The ICA technique will be useful in various BCI applications such as mental task detection, detecting brain disorders such as epilepsy, and image preprocessing.
References
[1] M. Ungureanu, C. Bigan, R. Strungaru, V. Lazarescu,
Independent Component Analysis Applied in
Biomedical Signal Processing, Measurement Science
Review, Vol. 4, Section 2, 2004.
[2] Arnaud Delorme, Scott Makeig, EEGLAB: An open
source toolbox for analysis of single-trial EEG dynamics
including independent component analysis, Journal of
Neuroscience Methods, 134, pp. 9-21, 2004.
[3] In J. Laidlaw, A. Richens, and D. Chadwick, editors,
ATextbook of Epilepsy, pages 1–22. Churchill
Livingstone, London, 4th edition, 1993.
[4] J. Gotman. Seizure recognition and analysis. In J.
Gotman, J.R. Ives, and P. Gloor, editors, Long-term
Monitoring in Epilepsy,volume 37. Elsevier,
Amsterdam, eeg supplement no. 37 edition, 1985.
[5] H. G. Wieser. Monitoring of seizures. In M. R.
Timberland and E. H. Reynolds, editors, What is
Epilepsy? The Clinical and Scientific Basis of Epilepsy,
pages 67–81. Churchill Livingstone, London, 1986.

[6] Anthony J Bell & Terrance J Sejnowski, An
information-Maximisation approach to blind
separation and blind deconvolution, Neural
Computation, 7,6, 1004-1034 (1995).
[7] Dominic Chan, Blind Signal Separation, PhD Dissertation, University of Cambridge; Jean-Francois Cardoso, "Blind signal separation: statistical principles", Proceedings of the IEEE, vol. 86, no. 10, pp. 2009-2025, Oct 1998.
[8] Scott Makeig, "Independent Component Analysis of Electroencephalographic Data", Advances in Neural Information Processing Systems 8, MIT Press, Cambridge MA, 1996.
[9] Foldvary N, Klem G, Hammel J, Bingaman W, Najm I, Lüders H. (2001) The localizing value of ictal EEG in focal epilepsy. Neurology 57:2022-2028.
[10] Klass DW.(1995) The continuing challenge of artifacts
in the EEG. 35: 239–269.
[11] B. Boashash, M. Mesbah, and P. Colditz, "Time
Frequency detection of EEG abnormalities," chapter
15, Article 15.5. pp. 663-669, Elsevier 2003.
[12] R.G. Andrzejak, G.Widman, K. Lehnertz, C. Rieke, P.
David, C.E. Elger, "The epileptic process as nonlinear
deterministic dynamics in a stochastic environment: an
evaluation on mesial temporal lobe epilepsy," Epilepsy
Res. Vol. 44, pp. 129-140, 2001.
[13] H.D.I. Abarbanel, R. Brown, and M.B. Kennel,
"Lyapunov exponents in chaotic systems: Their
importance and their evaluation using bserved data,"
International Journal of Modern Physics, vol. 5(9),
pp.1347-1375, 1991
[14] A. Wolf, J.B. Swift, H.L. Swinney, and J.A. Vastano,
"Determining Lyapunov exponents from a time series,"
Physica D, vol. 16(3), pp. 285–317, 1985.
[15] P. Grassberger, T. Schrieber, "Nonlinear time sequence
analysis," Int. J. Bifurcat. Chaos 1 (3), pp.512-547,
1991.
[16] N. Kannathal, M.L. Choo, U.R. Acharya, P.K. Sadasivan, "Entropies for detection of epilepsy in EEG," Computer Methods and Programs in Biomedicine, 2005.
[17] A. Subasi, "Epileptic seizure detection using dynamic
Wavelet network," Expert Systems with Applications,
vol. 29, pp 343–355, 2005
[18] De Clercq, W., Vergult, A., Vanrumste, B., Van Paesschen, W., and Van Huffel, S., "Canonical correlation analysis applied to remove muscle artifacts from the electroencephalogram", IEEE Trans. Biomed. Eng. 2006; 53:2583-2587.
[19] Vergult, A., De Clercq, W., Palmini, A., Vanrumste, B., Dupont, P., Van Huffel, S., Van Paesschen, W., "Improving the Interpretation of Ictal Scalp EEG: BSS-CCA Algorithm for Muscle Artifact Removal", Epilepsia 2007; 48(5):950-958.
[20] Farid Oveisi, "EEG signal classification using nonlinear independent component analysis", pp. 361-364, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2009.
[21] Chen He and Wang, J., "An Independent Component Analysis (ICA) Based Approach for EEG Person Authentication", Proc. of Bioinformatics and Biomedical Engineering, IEEE Xplore, pp. 1-2, 2009.
[22] Prabhakar Khandait, Narendra Bawane, Shyam Limaye, "Efficient ECG Signal Analysis using Wavelet Technique for Arrhythmia Detection: An ANFIS Approach", Proc. of SPIE, Vol. 7546, pp. 75461G1-G6, 2010.

Author Profile

Ashish B. Sasankar received the MCA and M.Phil degrees in Computer Science from HVPM, Amravati University, in 1996 and 2008 respectively. He is currently pursuing an M.Tech in Computer Science at GHRCE, RTM Nagpur University (India).

Analysis of Three Phase Four Wire Inverter for UPS Fed Unbalanced Star Connected Load

R. Senthil Kumar 1, Dr. Jovitha Jerome 2 and S. NithyaBhama 3

1 Department of Electrical and Electronics Engineering, Bannari Amman Institute of Technology, Anna University, Tamil Nadu, India
[email protected]

2 Department of Control and Instrumentation Engineering, PSG College of Technology, Anna University, Tamil Nadu, India
[email protected]

3 Department of Electrical and Electronics Engineering, Bannari Amman Institute of Technology, Anna University, Tamil Nadu, India
[email protected]


Abstract: A three phase inverter with a neutral connection, i.e., a three phase four wire inverter, is proposed. The uninterruptible power supply (UPS) system is fed by the three phase four wire inverter, and the load neutral point voltage is kept low to meet the requirements of the system. Four leg inverters effectively provide the neutral connection in a three phase four wire system. They are used in many applications to handle the neutral current caused by unbalanced and non-linear loads, which arise where the neutral of the loads is accessible. The four leg inverter produces the three output voltages independently with one additional leg.
The main feature of a three phase inverter with an additional neutral leg is its ability to deal with load unbalance in a system. The goal of the three phase four leg inverter is to maintain the desired sinusoidal output voltage waveform for all loading conditions and transients. The neutral connection is present to handle the ground current due to unbalanced loads. The feasibility of the proposed modulation technique is verified in MATLAB/SIMULINK.

Keywords: Four wire inverter, Rectifier, THD, UPS.
1. Introduction
The primary function of a UPS is to maintain a constant voltage and constant frequency supply for critical loads, irrespective of variations in the input source or load condition [2]. A neutral connection for three phase four wire systems can be provided using a four leg inverter topology, by tying the neutral point to the midpoint of the fourth (neutral) leg. The three phase four wire inverter has more control flexibility, because the two additional power switches double the number of inverter output states from 8 (= 2^3) to 16 (= 2^4), which allows the output waveform quality to be improved.
In a medium or low power UPS, an output transformer is used to mitigate the neutral-to-earth voltage. In a high power UPS, the aim is to eliminate the output transformer so that the load is fed by the inverter directly, and the neutral-to-earth voltage therefore emerges. The currents flowing in each phase are generally not balanced; since a transformer is not used, a connection to the neutral terminal should be provided by adding an extra wire to the inverter.
The load neutral terminal can be connected to the inverter using two topologies:
• Three phase four-wire, in which the neutral point is connected directly to the midpoint of the supply by means of a capacitor divider.
• Three phase four-leg, employing an additional inverter leg that permits the neutral point voltage to be modified.
The first topology is certainly the simplest one, but the three-phase inverter turns into three independent single-phase inverters. As a consequence, zero-sequence harmonics are generated; moreover, especially when the load is unbalanced or non-linear, a high voltage ripple over the supply capacitors is produced by the neutral currents. A further limitation is the maximum value that the amplitude of each phase's fundamental harmonic can reach.
The second topology requires additional power switches and a more complex control strategy, but it offers several advantages, such as an increased maximum output voltage value, a reduction of neutral currents and the possibility of neutral point voltage control [5-7].
The block diagram of the four wire inverter for the online UPS is shown in Figure 1.

Figure 1. Block diagram of the four wire inverter

The main components of the UPS are the rectifier, battery, four wire inverter, UPS switch and load. When the mains supply is present, the rectifier provides power to the inverter as well as to the battery, and the battery is charged. The inverter is on and feeds power to the load through the UPS switch. The UPS switch is always on and connects the load to the inverter output. If the UPS fails, the load is connected directly to the mains through the main switch. When the mains supply is not available, the battery bank supplies power to the inverter. Thus the inverter is always on, taking power from either the rectifier or the battery.
The three phase four wire inverter is suitable for use in high power UPS because of its advantages in feeding unbalanced loads and its higher dc voltage utilization [3]. The load fed by a three phase three wire inverter is shown in Figure 2. In this paper, the load neutral point voltage for the three phase four leg inverter is analysed; the topology is shown in Figure 3.

Figure 2. Three phase three wire inverter

Figure 3. Three phase four wire inverter
2. Three Phase Four Wire Inverter
The three phase four wire inverter, obtained by replacing the three wire switching network with a four wire switching network, is shown in Figure 4.

Figure 4. Four wire switching network

The simplified diagram of the four leg inverter circuit feeding a four wire load is shown in Figure 5. The neutral inductor L_n can reduce the switching frequency ripple.

Figure 5. Simplified diagram of the four wire inverter

The switches in the inverter legs R, Y, B, N are denoted S_k (S_R, S_Y, S_B, S_N), corresponding to each vector V_k; for S = 1 the upper switch in the inverter leg is conducting, and for S = 0 the lower switch is conducting. The vector V(1011) represents the switching state shown in Figure 5 [8]. The equivalent circuits for states (1011) and (1010) are represented in Figure 6(a) and Figure 6(b) respectively.


Figure 6(a). For switching state S_R S_Y S_B S_N = 1011: V_RN = V_BN = 0 and V_YN = -2V_d

Figure 6(b). For switching state S_R S_Y S_B S_N = 1010: V_RN = V_BN = 2V_d and V_YN = 0

The comparison of three phase 3 wire and 4 wire voltage source inverters is shown in Table 1.

Table 1: Comparison of three phase 3 wire and 4 wire inverters

S.No | Parameter | Three phase three wire load | Three phase four wire load
1 | Number of required power switches | 6 | 8
2 | Equivalent topology | Three independent single phase half bridges | Three dependent single phase full bridges
3 | Number of output vectors | 6 (no zero vectors) | 16 (14 active + 2 zero vectors)
4 | Maximum achievable peak value of line to neutral voltage | 0.5 Vd | 0.577 Vd
There are 16 switching states, which are listed in Table 2; they can be taken from the graphical representation of switching vectors in Figure 7. There are 14 non-zero voltage vectors and two zero vectors, (1111) and (0000). The three phase variables k_r, k_y and k_b can be transferred to the orthogonal coordinates k_α, k_β, k_γ using eq. (1); any three phase sinusoidal set of quantities can be transformed to an orthogonal reference. For given switching states of the inverter, the voltage vector components can be calculated as
\begin{pmatrix} k_\alpha \\ k_\beta \\ k_\gamma \end{pmatrix}
= \frac{2}{3}
\begin{bmatrix}
\cos\theta & \cos(\theta - 2\pi/3) & \cos(\theta - 4\pi/3) \\
\sin\theta & \sin(\theta - 2\pi/3) & \sin(\theta - 4\pi/3) \\
1/2 & 1/2 & 1/2
\end{bmatrix}
\begin{pmatrix} k_r \\ k_y \\ k_b \end{pmatrix}    (1)
where θ is the angle of the orthogonal set α-β-0 with respect to an arbitrary reference. If the α-β-0 axes are stationary and the α-axis is aligned with the r-axis, then θ = 0 at all times. Thus, we get
\begin{pmatrix} k_\alpha \\ k_\beta \\ k_\gamma \end{pmatrix}
= \frac{2}{3}
\begin{bmatrix}
1 & -1/2 & -1/2 \\
0 & \sqrt{3}/2 & -\sqrt{3}/2 \\
1/2 & 1/2 & 1/2
\end{bmatrix}
\begin{pmatrix} k_r \\ k_y \\ k_b \end{pmatrix}    (2)
The above matrix equation can be rewritten as

V_\alpha = \frac{1}{3} V_d \,(2 S_R - S_Y - S_B)    (3)

V_\beta = \frac{1}{\sqrt{3}} V_d \,(S_Y - S_B)    (4)

V_\gamma = \frac{1}{3} V_d \,\big( (S_R + S_Y + S_B) - 3 S_N \big)    (5)

Table 2: Switching combination and output voltages for
3 phase 4-wire inverter

No. | S_R S_Y S_B S_N | V_α | V_β | V_γ
0 | 0000 | 0 | 0 | 0
1 | 0001 | 0 | 0 | -Vd
2 | 0010 | -1/3 Vd | -1/√3 Vd | 1/3 Vd
3 | 0011 | -1/3 Vd | -1/√3 Vd | -2/3 Vd
4 | 0100 | -1/3 Vd | 1/√3 Vd | 1/3 Vd
5 | 0101 | -1/3 Vd | 1/√3 Vd | -2/3 Vd
6 | 0110 | -2/3 Vd | 0 | 2/3 Vd
7 | 0111 | -2/3 Vd | 0 | -1/3 Vd
8 | 1000 | 2/3 Vd | 0 | 1/3 Vd
9 | 1001 | 2/3 Vd | 0 | -2/3 Vd
10 | 1010 | 1/3 Vd | -1/√3 Vd | 2/3 Vd
11 | 1011 | 1/3 Vd | -1/√3 Vd | -1/3 Vd
12 | 1100 | 1/3 Vd | 1/√3 Vd | 2/3 Vd
13 | 1101 | 1/3 Vd | 1/√3 Vd | -1/3 Vd
14 | 1110 | 0 | 0 | Vd
15 | 1111 | 0 | 0 | 0

Figure 7. Switching vectors for the three phase four wire inverter
3. Circuit Description of the Four Wire Voltage Source Inverter
The three phase four wire voltage source inverter, commonly used for three phase voltage generation, is shown in Figure 8. It consists of eight switches S_rp - S_xn, a filter of inductors L_R - L_X, and capacitors C_R - C_B. The LC filter filters out the switching harmonics. The voltage source inverter is able to generate balanced and high quality AC output voltages, as shown in Figure 8.

Figure 8. Three phase output voltages
In the three phase output voltage waveform shown in Figure 8, one line cycle is divided into six regions. In the regions 0˚-60˚, 120˚-180˚ and 240˚-300˚, the voltage waveforms in Figure 8 have a similar pattern, i.e., one phase voltage is always lower than the other two [2].
The modulation method for the four wire inverter in these regions is:
1) The switch S_in (i = r, y, b) for the phase with the lowest voltage is always turned ON and the corresponding S_ip for this phase is always turned OFF.
2) The switches S_in and S_ip for the other two phases are driven complementarily.
3) The switches S_xn and S_xp for the neutral phase are driven complementarily.
The main circuit diagram in Figure 3 is equivalent to Figure 9(a) in the 0˚-60˚ region, which can be further reorganized into Figure 9(b). The same equivalent circuit is also applicable to the 120˚-180˚ and 240˚-300˚ regions. The switching of the inverter is shown in Table 3.


(a)

(b)

Figure 9. Equivalent circuit of the four wire VSI for 0˚-60˚

In the regions 60˚-120˚, 180˚-240˚ and 300˚-360˚, the voltage waveforms in Figure 8 have the opposite pattern, i.e., one phase voltage is always higher than the other two [2].
The modulation method for the four wire inverter in these regions is:
1) The switch S_ip (i = r, y, b) for the phase with the highest voltage is always turned ON and the corresponding S_in for this phase is always turned OFF.
2) The switches S_ip and S_in for the other two phases are driven complementarily.
3) The switches S_xn and S_xp for the neutral phase are driven complementarily.
With this, Figure 3 is equivalent to Figure 10(a) in the 60˚-120˚ region, which can be further reorganized into Figure 10(b). The same equivalent circuit is also applicable to the 180˚-240˚ and 300˚-360˚ regions. The switching of the inverter is shown in Table 3; a small sketch of the clamping rule follows Figure 10.

(a)

(b)

Figure 10. Equivalent circuit of the four wire VSI for 60˚-120˚
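The clamping rule in the two modulation methods above can be sketched as follows. This is a rough illustration under the assumption of a balanced set of phase references (the sample values below are arbitrary), not the authors' implementation:

public class ClampingRuleSketch {
    public static void main(String[] args) {
        double[] v = {0.86, -0.50, -0.36};   // instantaneous phase references r, y, b (example)
        int min = 0, max = 0;
        for (int i = 1; i < 3; i++) {
            if (v[i] < v[min]) min = i;
            if (v[i] > v[max]) max = i;
        }
        char[] phase = {'r', 'y', 'b'};
        // For a balanced set (v_r + v_y + v_b = 0), |min| > |max| exactly when one phase
        // is lower than the other two; otherwise one phase is higher than the other two.
        if (Math.abs(v[min]) > Math.abs(v[max]))
            System.out.println("Clamp S_" + phase[min] + "n ON; PWM the other two legs");
        else
            System.out.println("Clamp S_" + phase[max] + "p ON; PWM the other two legs");
    }
}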
For further analysis, the following assumptions are made:
1) L_R = L_Y = L_B = L_X = L.
2) C_R = C_Y = C_B = C.
3) The switching frequency is much higher than the fundamental frequency.

Table 3: Switching logics for the proposed controller

Degrees | S1 | S2 | S3 | S4 | S5 | S6 | N1 | N2
0˚-60˚ | ON | OFF | OFF | OFF | OFF | ON | ON | OFF
60˚-120˚ | ON | ON | OFF | OFF | OFF | OFF | OFF | ON
120˚-180˚ | OFF | ON | ON | OFF | OFF | OFF | ON | OFF
180˚-240˚ | OFF | OFF | ON | ON | OFF | OFF | OFF | ON
240˚-300˚ | OFF | OFF | OFF | ON | ON | OFF | ON | OFF
300˚-360˚ | OFF | OFF | OFF | OFF | ON | ON | OFF | ON
4. Simulation Results
Figure 11 shows the three phase AC rectifier circuit and its output.


Figure 11.Simulation circuit for rectifier.

Figure 12.Simulation result for rectifier

The rectified output voltage in Figure 12 is obtained across the capacitor. Figure 13 shows the proposed three phase four wire inverter for the online UPS.

Figure 13.Simulation circuit for three phase four wire
inverter

From the simulation analysis of Figure 13:
(i) The wire N provides a lower impedance loop for unbalanced currents and triplen harmonics, so the imbalance of the output is dramatically reduced.
(ii) The neutral inductance L_n can reduce the current that flows through the switching components of wire N.

Figure 14.Input voltage for three phase AC source

Figure 14 shows the three phase input source voltage for the UPS.

Figure 15.Simulation result for four wire inverter

Figure 15 shows the simulation result for the four wire inverter; the three phase outputs are each phase shifted by 120˚.


Figure 16 shows the DC source input voltage for the four wire inverter.

Figure 16.Simulation circuit for DC source four wire
inverter

The three line voltages V_RY, V_YB and V_BR are step waves, with step heights V_dc/2 and V_dc. The three line voltages are mutually phase shifted by 120˚ as shown in Figure 17.

Figure 17.Simulation result for three phase four wire
inverter

Table 4: Simulation parameters

Parameter | Value
Voltage for each phase | 100 V
Frequency | 50 Hz
DC input voltage | 200 V
Inductance (L) | 1 mH
Capacitance (C) | 1000 µF
Neutral inductance (Ln) | 1 mH
Rated resistive load | 100 Ω

The neutral voltage waveform for the four wire inverter is shown in Figure 18.

Figure 18. Simulation result for the neutral voltage

Figure 19 shows the THD level for the three phase four wire system: the harmonic distortion is reduced, and the THD level is 3.92%.

Figure 19.THD level for three phase four wire inverter

5. Conclusion
A three phase four wire UPS has been proposed in this paper. The fourth wire gives the inverter the ability to handle unbalanced loads, and the inductor in the fourth wire reduces the current through the switching components. The inverter control has the advantages of both a lower switching-to-fundamental frequency ratio and an outstanding ability to carry unbalanced loads.
References
[1] Fanghua Zhang, and Yangguang Yan “Selective
Harmonic Elimination PWM Control Scheme on a
Three-Phase Four-Leg Voltage Source Inverter” IEEE
Trans. Power Electronics, vol. 24, no. 7, July 2009.
[2] Lihua Li and Keyue Ma Smedley, “A New Analog
Controller for Three-Phase Four-Wire Voltage
Generation Inverters” IEEE Trans. Power Electronics,
vol. 24, no. 7, July 2009.
[3] Liu Zeng, Liu Jinjun and Li Jin “Modeling, Analysis
and Mitigation of Load Neutral Point Voltage for
Three-phase Four-leg Inverter” IPEMC2009.

[4] Salvador Ceballos, Josep Pou, Jordi Zaragoza, José L.
Martín, Eider Robles, Igor Gabiola, and Pedro Ibanez,
“Efficient Modulation Technique for a Four-Leg Fault-
Tolerant Neutral-Point-Clamped Inverter” IEEE Trans.
Industrial Electronics, vol. 55, no. 3, March 2008.
[5] Armando Bellini and Stefano Bifaretti “Modulation
Techniques for Three-Phase Four-Leg Inverters”
Proceedings of the 6th WSEAS International
Conference on Power Systems, Lisbon, Portugal,
September 22-24, 2006.
[6] Bellini and S. Bifaretti “A Simple Control Technique
for three phase four leg inverters”. SPEEDAM 2006.
[7] Richard Zhang, V. Himamshu Prasad, Dushan Boroyevich and Fred C. Lee, "Three-Dimensional Space Vector Modulation for Four-Leg Voltage-Source Converters", IEEE Trans. Power Electronics, vol. 17, no. 3, May 2002.
[8] Salem M. Ali and Marian P. Kazmierkowski, "PWM Voltage and Current Control of Four-Leg VSI", IEEE, 1998.





Authors Profile

R. Senthil Kumar was born in Tamilnadu, India, on November 2, 1966. He received the B.E degree in Electrical and Electronics Engineering from Madurai Kamaraj University in 1989 and his M.E (Power Systems) from Annamalai University in 1991. He has 15 years of teaching experience. Currently he is working as Assistant Professor in the EEE department, Bannari Amman Institute of Technology, Sathyamangalam, where he is doing research in the field of power converters for UPS applications.

Dr. Jovitha Jerome was born in Tamilnadu, India, on June 2, 1957. She received the B.E. degree in Electrical and Electronics Engineering and the M.E. degree in Power Systems from the College of Engineering, Guindy, Chennai, and her DEng in Power Systems. Presently she is working as Professor and Head of the Instrumentation and Control Engineering Department of PSG College of Technology, Coimbatore.

S. NithyaBhama was born on September 4, 1987. She received her B.E degree in Electrical and Electronics Engineering from Erode Sengunthar Engineering College, Thudupathi, Anna University. Currently she is pursuing an M.E in Power Electronics and Drives at Bannari Amman Institute of Technology, affiliated to Anna University.


Self Encrypting Data Streams for Digital Signals

A. Chandra Sekhar *, Ch. Suneetha 1 and G. Naga Lakshmi 2

* Professor, Department of Engineering Mathematics, GITAM University, Visakhapatnam, India
[email protected]

1,2 Asst. Professor, Department of Engineering Mathematics, GITAM University, Visakhapatnam, India


Abstract: Cryptography plays a vital role in implementing electronic security systems. Cryptography can be used for signing electronic documents digitally, digital rights management, banking, and for controlling access to key documents. Public key cryptography is secure if and only if extracting the secret key from the public information is intractable; hence the design of easily replicable cryptographic primitives is essential. In this paper a new technique for self encrypting data streams is proposed, based on matrices and quadratic forms. The algorithm uses a quadratic form as the private key. The key matrix is also sent along with the cipher text in the form of a string, which is further encrypted. The elements of the key matrix will be different for different data streams. This amounts to data encryption at two levels, and the cipher text so obtained becomes very difficult to break or intercept.

Keywords: Quadratic forms, cryptography, key matrix
1. Introduction
Public key cryptography was introduced by Diffie and Hellman [5][6]. In public key cryptosystems, the decryption keys must be kept secret; a decryption key is therefore called a "secret key" or a "private key". Some important public key cryptosystems are the RSA cryptosystem, Rabin encryption and El Gamal encryption. However, the known public key cryptosystems are not as efficient as many symmetric cryptosystems. Therefore, in practice, hybrid cryptosystems, i.e., combinations of public key systems and symmetric systems, are used. For example, if Alice wants to send a message m in encrypted form to Bob, she generates a session key for an efficient symmetric cryptosystem. She then encrypts the message m using the session key and the symmetric system, obtaining the cipher text C. This encryption is fast because an efficient symmetric cryptosystem has been used. Alice also encrypts the session key with Bob's public key; since the session key is small, this encryption is also fast. Bob decrypts the session key using his private key, then decrypts the cipher text C with the session key and obtains the original message m. In this paper we apply this method to matrices and quadratic forms.
2. Quadratic Form
A homogeneous polynomial of the second degree in n variables, i.e., the expression

\varphi = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} x_i x_j, \qquad a_{ij} \in \mathbb{R}, \; a_{ij} = a_{ji},

is called a quadratic form in the n variables x_1, x_2, ..., x_n over the real field. Thus

\varphi = a_{11} x_1^2 + a_{12} x_1 x_2 + \dots + a_{1n} x_1 x_n
        + a_{21} x_2 x_1 + a_{22} x_2^2 + \dots + a_{2n} x_2 x_n
        + \dots
        + a_{n1} x_n x_1 + a_{n2} x_n x_2 + \dots + a_{nn} x_n^2.

In the quadratic form there are n square terms x_1^2, x_2^2, ..., x_n^2 and nC2 product terms x_1 x_2, x_2 x_3, ..., x_{n-1} x_n, so there are n + nC2 = n + n(n-1)/2 = n(n+1)/2 terms in all. For our discussion we confine ourselves to quadratic forms in three variables:

\varphi = ax^2 + by^2 + cz^2 + 2hxy + 2fyz + 2gzx.
We know very well that a homogeneous second degree equation

\varphi = ax^2 + by^2 + cz^2 + 2hxy + 2fyz + 2gzx = 0

represents a pair of planes if and only if abc + 2fgh - af^2 - bg^2 - ch^2 = 0, that is,

\begin{vmatrix} a & h & g \\ h & b & f \\ g & f & c \end{vmatrix} = 0.

From this we can say that any quadratic form can be written in the form X' A X, where

X = \begin{pmatrix} x \\ y \\ z \end{pmatrix}, \qquad A = \begin{bmatrix} a & h & g \\ h & b & f \\ g & f & c \end{bmatrix}.

Here A is a symmetric matrix, so a symmetric matrix can be recognized as the representative of a quadratic form. The determinant of A is known as the discriminant of the quadratic form. If det A ≠ 0 then we say the quadratic form is non-singular; in this paper we deal with non-singular quadratic forms.
Let x_1, x_2, ..., x_n and y_1, y_2, ..., y_n be two sets of variables and consider the following system of linear equations, expressing each y_i in terms of x_1, x_2, ..., x_n:

y_1 = b_{11} x_1 + b_{12} x_2 + ... + b_{1n} x_n
y_2 = b_{21} x_1 + b_{22} x_2 + ... + b_{2n} x_n
.......................................
y_n = b_{n1} x_1 + b_{n2} x_2 + ... + b_{nn} x_n

This system of linear equations can be written as the matrix equation Y = BX, where

Y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}, \qquad
B = \begin{bmatrix} b_{11} & b_{12} & \dots & b_{1n} \\ b_{21} & b_{22} & \dots & b_{2n} \\ \vdots & & & \vdots \\ b_{n1} & b_{n2} & \dots & b_{nn} \end{bmatrix}, \qquad
X = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}.

If B is non-singular, then we can write X = B^{-1} Y, which shows that there is a one-to-one correspondence between the vectors X and Y. In this context we say that the system Y = BX determines a linear transformation, and B is called the matrix of the linear transformation.
2.1 Linear transformation of a quadratic form
Let \varphi = X' A X be a real quadratic form in n variables. Since A is a symmetric matrix, \varphi can be reduced by a non-singular linear transformation X = PY over the real field [1][2][3][4]. Now

X' A X = (PY)' A (PY) = Y' (P' A P) Y = Y' B Y, where B = P' A P.

We note that B' = (P' A P)' = P' A' (P')' = P' A P = B. Clearly Y' B Y is also a quadratic form over the real field. Thus if a quadratic form X' A X is subjected to a linear transformation X = PY, then the transform is also a quadratic form Y' B Y, where B = P' A P.
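As a quick numerical illustration of this congruence (the matrices A, P and the vector below are arbitrary examples chosen for the sketch, not values from the scheme), one can verify that X'AX = Y'BY when X = PY and B = P'AP:

public class CongruenceCheck {
    // v' A v for a symmetric matrix A
    static double quad(double[][] A, double[] v) {
        double s = 0;
        for (int i = 0; i < v.length; i++)
            for (int j = 0; j < v.length; j++)
                s += v[i] * A[i][j] * v[j];
        return s;
    }

    public static void main(String[] args) {
        double[][] A = {{2, 1, 2}, {1, 3, 2}, {2, 2, 4}};  // symmetric matrix of a quadratic form
        double[][] P = {{1, 1, 0}, {0, 1, 1}, {1, 0, 1}};  // an arbitrary non-singular P
        double[] y = {1, 2, 3};

        double[] x = new double[3];                        // x = P y
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++) x[i] += P[i][j] * y[j];

        double[][] B = new double[3][3];                   // B = P' A P
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    for (int l = 0; l < 3; l++)
                        B[i][j] += P[k][i] * A[k][l] * P[l][j];

        System.out.println(quad(A, x) + " == " + quad(B, y)); // the two values agree
    }
}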
3. Proposed Method

Alice wants to send a message M to Bob. She converts the message into its equivalent numbers according to the letter-number table shown below [7][8][9]. She chooses a non-singular quadratic form, which serves as the private key, in such a way that the message matrix M and the corresponding matrix Q of the quadratic form are conformable for multiplication. She multiplies M by Q, resulting in a matrix N. The elements of N are then reduced mod 27, which gives the encrypted message matrix E. This encrypted matrix E is sent to Bob over a public channel. Along with it she also sends a session key matrix K, whose elements are the integer quotients of the elements of matrix N on division by 27. To maintain absolute secrecy this key matrix K is sent in the form of a string, which is further reduced to binary numbers. Bob, with the help of the cipher text, the secret key matrix K and the private key Q, gets the original message back.
4. Algorithm
1. A non-singular quadratic form is chosen.
2. The matrix Q of the quadratic form is obtained.
3. The plain text is converted into its equivalent message matrix M, which is multiplied by Q to obtain N.
4. N is reduced mod 27 to obtain E.
5. The key matrix K is obtained from the adjustment factor (the quotient on division by 27) for each entry.
6. The key matrix K and E are interleaved to form the cipher.
7. For decryption, K and E are separated from the cipher to recover N.
8. N is multiplied by the inverse of the matrix Q and adjusted mod 27 to get the required plain text M.

4.1 Example
4.1.1 Encryption
If Alice wants to send the message GOOD LUCK to Bob, she converts the message to the matrix
M = [6 14 14; 3 26 11; 20 2 10]
She takes the non-singular quadratic form
2x^2 + 3y^2 + 4z^2 + 2xy + 4yz + 4zx
so that
Q = [2 1 2; 1 3 2; 2 2 4]
N = M * Q = [54 76 96; 54 103 102; 62 46 84]
N is reduced mod 27, resulting in
E = [0 22 15; 0 22 21; 8 19 3]
and the key matrix K = [2 2 3; 2 3 3; 2 1 3]
The cipher text is
C = [0 2 22 2 15 3 0 2 22 3 21 3 8 2 19 1 3 3]
which is equivalent to ACWCPDACWDVDICTBDD

4.1.2 Decryption
The string ACWCPDACWDVDICTBDD is converted into its equivalent code to obtain the cipher text C = [0 2 22 2 15 3 0 2 22 3 21 3 8 2 19 1 3 3]
The key matrix K and E are separated as
K = [2 2 3; 2 3 3; 2 1 3]
E = [0 22 15; 0 22 21; 8 19 3]
By multiplying the key matrix by 27 and adding the cipher matrix we get
N = [54 76 96; 54 103 102; 62 46 84]
By multiplying N by the inverse of Q we get the original message back as [6 14 14 3 26 11 20 2 10], which is GOOD LUCK.
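The whole example can be reproduced with the short sketch below, a direct transcription of the algorithm in Section 4; the 3x3 inverse is applied exactly via the adjugate and determinant, so no rounding is needed:

public class QuadraticFormCipher {
    static final String ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ ";  // index 26 = space

    static int[][] mul(int[][] A, int[][] B) {
        int[][] C = new int[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    C[i][j] += A[i][k] * B[k][j];
        return C;
    }

    // Adjugate of a 3x3 matrix: adj[j][i] is the cofactor C_ij (cyclic-index form).
    static int[][] adjugate(int[][] A) {
        int[][] adj = new int[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                adj[j][i] = A[(i + 1) % 3][(j + 1) % 3] * A[(i + 2) % 3][(j + 2) % 3]
                          - A[(i + 1) % 3][(j + 2) % 3] * A[(i + 2) % 3][(j + 1) % 3];
        return adj;
    }

    public static void main(String[] args) {
        int[][] M = {{6, 14, 14}, {3, 26, 11}, {20, 2, 10}};  // GOOD LUCK
        int[][] Q = {{2, 1, 2}, {1, 3, 2}, {2, 2, 4}};        // matrix of the quadratic form

        // Encryption: N = M*Q, then interleave E = N mod 27 with K = N div 27.
        int[][] N = mul(M, Q);
        StringBuilder cipher = new StringBuilder();
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++) {
                cipher.append(ALPHABET.charAt(N[i][j] % 27));  // E entry
                cipher.append(ALPHABET.charAt(N[i][j] / 27));  // K entry
            }
        System.out.println("Cipher: " + cipher);               // ACWCPDACWDVDICTBDD

        // Decryption: N = 27*K + E (already held in N here), then M = N * Q^{-1},
        // computed exactly as (N * adj(Q)) / det(Q).
        int[][] adj = adjugate(Q);
        int det = Q[0][0] * adj[0][0] + Q[0][1] * adj[1][0] + Q[0][2] * adj[2][0];
        int[][] R = mul(N, adj);
        StringBuilder plain = new StringBuilder();
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                plain.append(ALPHABET.charAt(R[i][j] / det));
        System.out.println("Plain:  " + plain);                // GOOD LUCK
    }
}

Since N = M·Q has integer entries and det Q = 8 divides every entry of N·adj(Q), the division recovers M exactly.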

5. Conclusions
In this paper a new technique for self encrypting data streams is proposed, based on matrices and quadratic forms. The algorithm uses a quadratic form as the private key. The key matrix is also sent along with the cipher text in the form of a string, which is further encrypted. This gives data encryption at two levels, and the cipher text so obtained becomes very difficult to break or intercept, while the computational overhead is very low. It is extremely difficult to extract the original information in the proposed method even if the algorithm is known.
A  B  C  D  E  F  G  H  I  J  K   L   M
0  1  2  3  4  5  6  7  8  9  10  11  12
N   O   P   Q   R   S   T   U   V   W   X   Y   Z
13  14  15  16  17  18  19  20  21  22  23  24  25
Null or Space = 26
References
[1] K.R. Sudha, A. Chandra Sekhar and Prasad Reddy P.V.G.D, "Cryptography Protection of Digital Signals using Some Recurrence Relations", IJCSNS International Journal of Computer Science and Network Security, Vol. 7, No. 5, May 2007, pp. 203-207.
[2] A.P. Stakhov, "The 'golden' matrices and a new kind of cryptography", Chaos, Solitons and Fractals 32 (2007), pp. 1138-1146.
[3] A.P. Stakhov, "The golden section and modern harmony mathematics", Applications of Fibonacci Numbers, 7, Kluwer Academic Publishers (1998), pp. 393-399.
[4] A.P. Stakhov, "The golden section in the measurement theory", Comput. Math. Appl. 17 (1989), pp. 613-638.
[5] Whitfield Diffie and Martin E. Hellman, "New Directions in Cryptography", IEEE Transactions on Information Theory, Vol. IT-22, No. 6, November 1976, pp. 644-654.
[6] Whitfield Diffie and Martin E. Hellman, "Privacy and Authentication: An Introduction to Cryptography", Proceedings of the IEEE, Vol. 67, No. 3, March 1979, pp. 397-427.
[7] C.E. Shannon, "Communication Theory of Secrecy Systems". The material in this paper appeared in a confidential report "A Mathematical Theory of Cryptography", dated Sept. 1, 1946, which has now been declassified.
[8] C.E. Shannon, "A Mathematical Theory of Communication", Bell System Technical Journal 27 (1948), pp. 379-423, 623-656.
[9] A. Chandra Sekhar, K.R. Sudha and Prasad Reddy P.V.G.D, "Data Encryption Technique Using Random Number Generator", Granular Computing 2007 (GRC 2007), IEEE International Conference, 2-4 Nov. 2007, pp. 573-576.

Authors Profile

Dr. A. Chandra Sekhar received his PhD degree in number theory from JNT University and his M.Sc. degree, with specialization in algebraic number theory, from Andhra University. He secured the prestigious K. Nagabhushanam Memorial Award in M.Sc. for obtaining the university first rank, and completed his M.Phil at Andhra University in 2000. He was with Gayatri Degree College from 1991 to 1995 and later joined GITAM Engineering College in 1995. Presently he is working as Professor and Head of the Department of Engineering Mathematics at GITAM Engineering College, Visakhapatnam, India.

Mrs. Ch. Suneetha is presently working as Assistant Professor in the Department of Engineering Mathematics. She is pursuing her PhD in number theory and cryptography under the guidance of Dr. A. Chandra Sekhar.




Mrs. G. Naga Lakshmi is working as Assistant Professor in the Department of Engineering Mathematics. She is pursuing her M.Phil in number theory and cryptography under the guidance of Dr. A. Chandra Sekhar.