
(IJCNS) International Journal of Computer and Network Security, Vol. 2, No. 3, March 2010

Cryptanalysis of an efficient biometrics-based remote user authentication scheme using smart cards

Fuw-Yi Yang 1 and Jian-Sen Wang 2

1 Department of Computer Science and Information Engineering, Chaoyang University of Technology, 168 Jifong E. Rd., Wufong Township, Taichung County 41349, Taiwan, R.O.C. [email protected]
2 Department of Computer Science and Information Engineering, Chaoyang University of Technology, 168 Jifong E. Rd., Wufong Township, Taichung County 41349, Taiwan, R.O.C. [email protected]

Abstract: The authors Li and Hwang have proposed an efficient biometrics-based remote user authentication scheme using smart cards. Security is provided through one-way hash functions and biometrics verification. The scheme is more efficient than other related schemes and enables users to change their passwords freely. However, it has several flaws: it is vulnerable to impersonation attacks, password-guessing attacks, and power analysis attacks. This paper therefore shows that the scheme proposed by Li and Hwang is susceptible to these types of attacks.

Keywords: Biometrics, remote user authentication, impersonation attacks, password-guessing attacks, power analysis attacks

1. Introduction
In an insecure network environment, user authentication is a significant component of security. Remote user authentication schemes are used to verify the validity of a user's login request. In 1981, Lamport [1] proposed a remote user authentication scheme with verification tables. However, Hwang and Li pointed out that if the verification tables were modified or stolen, the remote authentication system would be compromised. Therefore, in 2000, Hwang and Li proposed [2] a remote user authentication scheme using smart cards, without any verification tables. In general, ID-based remote user authentication schemes are based on passwords [3], [4]. As simple passwords are easy to break, many schemes have been proposed to enhance the security of remote user authentication. But passwords can be lost, forgotten, or shared with other people, and thus there is no way to know who the actual user is; password-based schemes therefore cannot provide non-repudiation. Hence, biometric keys have been proposed [5], which are based on personal characteristics such as fingerprints, palm prints, and irises. Li and Hwang have proposed an efficient biometrics-based remote user authentication scheme using smart cards [6], which, however, cannot withstand impersonation and power analysis attacks. This paper shall point out the flaws of their scheme. The rest of this paper is organized as follows. In Section 2, we give a brief review of Li and Hwang's proposed scheme; in Section 3, we demonstrate the scheme's weaknesses; and we conclude the paper in Section 4.

2. Review of Li and Hwang's proposed scheme
Li and Hwang proposed an efficient biometrics-based remote user authentication scheme using smart cards. The scheme is divided into three phases: the registration phase, the login phase, and the authentication phase. Here, we briefly introduce the three phases. Table 1 lists the notations and abbreviations used in their scheme.

Table 1: Notations used in their scheme
Ci      Client
Si      Server
Ri      Trusted registration center
IDi     Identity of the user
PWi     Password of the user shared between Ci and Si
Bi      Biometrics template of the user
h(.)    One-way hash function
Xs      Secret information maintained by Si
Rc      Random number chosen by Ci
Rs      Random number chosen by Si
||      Concatenation
⊕       XOR operation

2.1 The Registration Phase
Before the users log in to the system, they must perform the following steps, as shown in Figure 1.
Step 1: The users offer their personal biometrics, Bi, on the specific device and give the password, PWi, and the user identity, IDi, to the registration center in person.


Step 2: The registration center computes ri = h(PWi || fi) and ei = h(IDi || Xs) ⊕ h(PWi || fi), where fi = h(Bi) and Xs is the secret information generated by Si.
Step 3: The registration center stores (IDi, h(.), fi, ei) in the user's smart card and then sends the card to the user through a secure channel.
Figure 1. The registration phase

2.2 The Login Phase
Whenever the users want to log in to the server, they need to perform the following steps, as shown in Figure 2.
Step 1: The users insert their smart card into the smart card reader of a terminal and offer their personal biometrics, Bi, on the specific device for biometrics verification. The system checks whether h(Bi) = fi.
Step 2: If it holds, the user passes the biometrics verification and then inputs PWi. Otherwise, the user did not pass the biometrics verification and the client terminates the session.
Step 3: After receiving Ci's password, the smart card computes the messages ri' = h(PWi || fi), M1 = ei ⊕ ri' = h(IDi || Xs), and M2 = M1 ⊕ Rc, where Rc is a random number generated by the user.
Step 4: Finally, Ci sends the messages (IDi, M2) to Si.

Figure 2. The login phase

2.3 The Authentication Phase
On receiving the login request message, Si authenticates whether the user is legal in the following manner, as shown in Figure 3.
Step 1: Si checks whether the format of Ci's IDi is valid.
Step 2: If the above holds, Si computes the messages M3 = h(IDi || Xs), M4 = M2 ⊕ M3 = Rc, M5 = M3 ⊕ Rs, and M6 = h(M2 || M4) to provide mutual authentication between client and server.
Step 3: Next, Si sends the messages (M5, M6) to Ci.
Step 4: On receiving Si's message, Ci checks whether M6 = h(M2 || Rc).
Step 5: If the above holds, Ci considers Si authenticated and then computes the following messages to offer mutual authentication between client and server: M7 = M5 ⊕ M1 = Rs and M8 = h(M5 || M7), where M7 is the random number of the server. The client, which knows M1 = h(IDi || Xs), can send back the message M8 = h((h(IDi || Xs) ⊕ Rs) || Rs).
Step 6: Ci sends the message M8 to Si.
Step 7: On receiving Ci's message, Si checks whether M8 = h(M5 || Rs).
Step 8: If it holds, the server accepts Ci's login request; otherwise, it rejects it.

Figure 3. The authentication phase

2.4 Changing of password
Whenever the users want to change their passwords, they can easily and freely change the password PWi to a new password, PWi^new, as shown in Figure 4.
Step 1: The users insert their smart card in the smart card reader and offer their biometrics to the specific device in order to verify the user biometrics.
Step 2: If the verification holds, the user inputs the old password, PWi, and the new password, PWi^new.
Step 3: The smart card computes ri' = h(PWi || fi), ei' = ei ⊕ ri' = h(IDi || Xs), and ei'' = ei' ⊕ h(PWi^new || fi). Then ei is replaced with ei'', and PWi has been changed to PWi^new.


Figure 4. Change password phase
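To make the reviewed message flow concrete, the following is a minimal Python sketch of the three phases, using SHA-256 as a stand-in for the unspecified one-way hash h(.) and byte-wise XOR; all names and key sizes are illustrative, not part of Li and Hwang's specification.

# Minimal sketch of Li and Hwang's scheme; SHA-256 stands in for h(.).
import hashlib
import os

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Registration: Ri computes fi = h(Bi) and ei = h(IDi || Xs) XOR h(PWi || fi).
def register(IDi: bytes, PWi: bytes, Bi: bytes, Xs: bytes):
    fi = h(Bi)
    ei = xor(h(IDi + Xs), h(PWi + fi))
    return fi, ei          # stored on the smart card together with IDi, h(.)

# Login: the card recovers M1 = h(IDi || Xs) and blinds a nonce Rc into M2.
def login(IDi, PWi, fi, ei):
    ri = h(PWi + fi)
    M1 = xor(ei, ri)       # = h(IDi || Xs)
    Rc = os.urandom(32)
    M2 = xor(M1, Rc)
    return M1, Rc, M2      # (IDi, M2) is sent to the server

# Authentication: the server derives Rc and answers with (M5, M6); the client
# checks M6, extracts Rs, and answers with M8, which the server checks.
def server_round(IDi, M2, Xs):
    M3 = h(IDi + Xs)
    M4 = xor(M2, M3)       # = Rc
    Rs = os.urandom(32)
    M5 = xor(M3, Rs)
    M6 = h(M2 + M4)
    return Rs, M5, M6

def client_round(M1, Rc, M2, M5, M6):
    assert M6 == h(M2 + Rc)        # authenticates the server
    M7 = xor(M5, M1)               # = Rs
    return h(M5 + M7)              # = M8

# End-to-end run of the sketch:
Xs = os.urandom(32)
fi, ei = register(b"alice", b"pw", b"biometric-template", Xs)
M1, Rc, M2 = login(b"alice", b"pw", fi, ei)
Rs, M5, M6 = server_round(b"alice", M2, Xs)
M8 = client_round(M1, Rc, M2, M5, M6)
assert M8 == h(M5 + Rs)            # server's final check passes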

3. The Weaknesses of Li and Hwang's Proposed Scheme

It can be seen that Li and Hwang's proposed scheme enables users to change their passwords freely and provides mutual authentication between the user and the server. The most significant feature of the scheme is that its operating mechanism is based on the users' personal biometrics. However, Li and Hwang's proposed scheme still retains three weaknesses, as explained below.

3.1 It cannot protect against impersonation attacks
In the authentication phase, the server checks the format of the client's identity; if it is valid, the server computes the messages (M3, M4, M5, M6), having obtained the message (IDi, M2) from the client in the login phase. A malicious server can use the secret information Xs to compute M1 = M3 = h(IDi || Xs); therefore, an attacker who obtains this value can impersonate the client. The detailed procedure is given below:
Step 1: The malicious server computes the hash value h(IDi || Xs) itself and sends h(IDi || Xs) to the adversary.
Step 2: After receiving the message h(IDi || Xs), the adversary chooses a random number Rc' and computes M2' = h(IDi || Xs) ⊕ Rc'. The adversary can thus masquerade as IDi and send the login request message (IDi, M2') to the server.
Step 3: The adversary can use the hash value h(IDi || Xs) to recover the random number Rs chosen by the server, and then compute M8 = h((h(IDi || Xs) ⊕ Rs) || Rs) to complete the login process successfully.
This drawback exists because the server does not execute the biometrics verification process, which enables the insider attack described above. Furthermore, the scheme cannot achieve non-repudiation.
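Continuing the sketch above, the impersonation attack amounts to a few lines: once h(IDi || Xs) leaks, the whole handshake can be completed without any user secret (same illustrative assumptions as before).

# Impersonation: anyone holding h(IDi || Xs) can complete the protocol
# without the password, biometrics, or smart card. Reuses h, xor,
# server_round, and Xs from the sketch above.
leaked = h(b"alice" + Xs)              # h(IDi || Xs), leaked by a malicious server
Rc_adv = os.urandom(32)
M2_adv = xor(leaked, Rc_adv)           # forged login message (IDi, M2')
Rs, M5, M6 = server_round(b"alice", M2_adv, Xs)   # server accepts the request
Rs_adv = xor(M5, leaked)               # adversary recovers Rs from M5
M8_adv = h(M5 + Rs_adv)
assert M8_adv == h(M5 + Rs)            # login succeeds without any user secret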

3.2 It cannot withstand password-guessing attacks
According to previous research [7], [8], data stored in a smart card is vulnerable, because the secret information in the card can be extracted by monitoring its power consumption (power analysis attacks). Using a power analysis attack, the attacker can obtain the messages (fi, ei) and intercept the messages (M2, M6) from the network. The attacker can then guess a password PWi', compute Rc' = ei ⊕ h(PWi' || fi) ⊕ M2, and check whether M6 = h(M2 || Rc'). If it holds, the attacker can masquerade as the user; otherwise, the attacker tries the next guessed password until M6 = h(M2 || Rc') is true. A detailed description is given below:
Step 1: Using power analysis, the adversary obtains the messages (fi, ei).
Step 2: The adversary intercepts the messages (M2, M6) from the network.
Step 3: Choosing a password PWi', the adversary computes Rc' = ei ⊕ h(PWi' || fi) ⊕ M2 and checks whether M6 = h(M2 || Rc').
Step 4: If the above holds, the guessed password PWi' is the correct password; thus, the adversary can masquerade as the user.
Step 5: Otherwise, the adversary tries the next password guess until M6 = h(M2 || Rc') is true.

3.3 The adversary can impersonate not only the client but also the server
Through a power analysis attack and the password-guessing attack of Section 3.2, the attacker can obtain the value of h(IDi || Xs); the attacker can then masquerade not only as the client but also as the server. Since the attacker can intercept the message (IDi, M2) from the network, it can use h(IDi || Xs) to compute the messages (M3, M4, M5, M6) and thus masquerade as the server as well.
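The guessing loop of Section 3.2 is equally direct to sketch; note that a successful guess also yields M1 = h(IDi || Xs), which by Section 3.3 lets the adversary masquerade as the server as well (same illustrative setup as the sketches above; the word list is invented).

# Off-line guessing: given (fi, ei) from power analysis and (M2, M6)
# from the network, each candidate password PW' is verified locally.
def guess_password(fi, ei, M2, M6, wordlist):
    for PW in wordlist:
        Rc_guess = xor(xor(ei, h(PW + fi)), M2)   # Rc' = ei XOR h(PW'||fi) XOR M2
        if M6 == h(M2 + Rc_guess):
            return PW                              # correct password found
    return None

print(guess_password(fi, ei, M2, M6, [b"123456", b"pw", b"letmein"]))  # -> b'pw'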

4. Conclusions
This paper has pointed out that the scheme proposed by Li and Hwang is not sufficiently secure: it cannot withstand impersonation and power analysis attacks. An attacker can break in as a legal user and intercept messages from the network to masquerade as the user. Although Li and Hwang's biometrics-based scheme is practical in some settings, its security is substandard.

Acknowledgment
This work was partially supported by the National Science Council, Taiwan, R.O.C., under Grant NSC 98-2221-E-324-019.


References
[1] L. Lamport, "Password authentication with insecure communication", Communications of the ACM, vol. 24, pp. 770–772, 1981.
[2] M.S. Hwang and L.H. Li, "A new remote user authentication scheme using smart cards", IEEE Transactions on Consumer Electronics, vol. 46, pp. 28–30, 2000.
[3] M. Kim and C.K. Koc, "A simple attack on a recently introduced hash-based strong-password authentication scheme", International Journal of Network Security, vol. 1, pp. 77–80, 2005.
[4] N.Y. Lee and Y.C. Chiu, "Improved remote authentication scheme with smart card", Computer Standards and Interfaces, vol. 27, pp. 177–180, 2005.
[5] C.T. Li and M.S. Hwang, "An online biometrics-based secret sharing scheme for multiparty cryptosystem using smart cards", International Journal of Innovative Computing, Information and Control, 2009.
[6] C.T. Li and M.S. Hwang, "An efficient biometrics-based remote user authentication scheme using smart cards", Journal of Network and Computer Applications, vol. 33, pp. 1–5, 2010.
[7] T.S. Messerges, E.A. Dabbish, and R.H. Sloan, "Examining smart-card security under the threat of power analysis attacks", IEEE Transactions on Computers, vol. 51, pp. 541–552, 2002.
[8] P. Kocher, J. Jaffe, and B. Jun, "Differential power analysis", Proceedings of Advances in Cryptology, pp. 388–397, 1999.


New Illumination Compensation Method for Face Recognition

Heng Fui Liau 1 and Dino Isa 2

1 Faculty of Engineering, School of Electrical and Electronic Engineering, University of Nottingham Malaysia Campus, Jalan Broga, 43500 Semenyih, Selangor, Malaysia. [email protected]
2 Faculty of Engineering, School of Electrical and Electronic Engineering, University of Nottingham Malaysia Campus, Jalan Broga, 43500 Semenyih, Selangor, Malaysia. [email protected]

Abstract: This paper proposes a method for illumination-invariant face recognition based on the discrete cosine transform (DCT), addressing the effect of varying illumination on the performance of appearance-based face recognition systems. The proposed method aims to correct the illumination variation rather than simply discard it. Illumination variation, which lies mainly in the low frequency band, is normalized in the DCT domain. Other effects of illumination variation, which manifest themselves as shadows and specular defects, are corrected by manipulating the properties of the odd and even components of the DCT. The proposed method possesses several advantages. First, it does not require training images or an illumination model and can be applied directly to the test image. Second, it is simple: only one parameter needs to be determined, making it easy to implement. Experimental results on the Yale face database B, using PCA- and support vector machine (SVM)-based face recognition algorithms, show that the proposed method gives performance comparable to other well-known but more complicated methods.

Keywords: face recognition, illumination invariant, discrete cosine transform, biometrics.

1. Introduction
Face recognition has gained much attention in the past two decades, particularly for its potential role in information and forensic security. Principal component analysis (PCA) [1] and linear discriminant analysis (LDA) [2] are the two most well-known and widely used appearance-based methods. Existing face recognition systems, including PCA and LDA, do not perform well in the presence of illumination variation. Adini et al. [3] observed that the variation between images of the same person due to illumination is often larger than the variation due to a change in face identity. Because of the importance of this issue, much recent research has focused on developing robust face recognition systems; however, illumination variation remains a challenging problem [4]. The problems that illumination variation causes for facial images are mainly due to the different appearance of the 3D shape of human faces under lighting from different directions. Early work showed that the variability of images of a Lambertian surface in fixed pose, under variable lighting and without shadowing, forms a 3D linear subspace [5]-[7]. One approach to the problem is to build a 3D illumination model. Basri et al. [5] proposed a spherical harmonic model in which illuminated images are represented in a low-dimensional subspace. Lee et al. [8] proposed a 9D linear subspace approach using nine images captured under nine different lighting directions; face images obtained under different lighting conditions are regarded as a convex Lambertian surface and can be approximated well by a 9D linear subspace [8]. The illumination convex cone method [9]-[11] showed that the set of images of an object in a fixed pose under all possible illumination conditions is a convex cone in the image space. This method requires a number of training images for each face, taken with different lighting directions, to build the generative model. Unfortunately, the illumination cone is extremely complex to build. Several simpler models that approximate it with minimum complexity are presented in [10], such as the low-dimensional subspace model and the cones-attached and cones-cast models using extreme rays. Gross et al. [12] proposed a 3D method based on light-field estimation: it estimates a representation of the light field of the subject's head, and the light field is then used as the set of features for recognition. Recently, Zhang et al. [13] proposed a 3D spherical harmonic basis morphable model. They showed that a face under an arbitrary unknown lighting condition can be represented by three low-dimensional vectors corresponding to shape, spherical harmonics, and illumination. One major drawback of 3D illumination model-based approaches is that a number of training images of the subject under varying lighting conditions, or information about the 3D shape, are needed during the training phase. Moreover, their application range is limited by the 3D face model, and their expensive computation requirements are a significant drawback for real-time systems. Every pixel of an illuminated face image can be regarded as a product of reflectance and luminance at that point. Luminance varies slowly, while reflectance can change abruptly. In other words, luminance, which corresponds to the illumination variation, is located mainly in the low frequency band, while reflectance, which corresponds to the facial features that are stable under illumination variation, is located in the higher frequency band.


Land's "retinex" model [14] approximates the reflectance as the ratio of the image and its low-pass version, which serves as an estimate of the luminance. Chen et al. [15] discard the low frequency DCT components in the logarithm domain to achieve illumination invariance. They first expand the pixel intensities in the dark regions using a logarithm transform; the discrete cosine transform is then applied to the image in the logarithm domain, and the low frequency components are discarded by setting the corresponding coefficients to zero. Nanni and Lumini [16] investigated discrete wavelet transform features under different lighting conditions. They found that the most useful sub-bands, robust against illumination variation, are the first level of decomposition obtained by the Biorthogonal wavelet and the horizontal details of the third level of decomposition obtained by the Reverse Biorthogonal wavelet. Franco and Nanni [17] proposed a classifier fusion scheme: PCA is applied to extract and reduce the dimensionality of the useful features from the DWT domain [16] and the DCT domain [15], the individual classifier for each feature is a Kernel Laplacian Eigenmap, and the outputs of the classifiers are fused using the sum rule. Generally, human faces are similar in shape: each has two eyes, a mouth, and a nose. Each of these components casts distinctive shadows and specularities depending on the direction of the lighting in a fixed pose. Using this characteristic, the lighting direction can be estimated and the illumination variation can be compensated. The quotient image (QI) [18] is an effective method for handling the illumination problem; it is very practical because it requires only one training image per person. The authors of [19] and [20] further improved the original QI method and proposed the self-quotient image (SQI) method, in which the luminance is estimated as a smoothed version of the image obtained by Gaussian filtering, and the illumination variation is eliminated by dividing by the estimated luminance. However, the parameter selection for the weighted Gaussian filter is empirical and complicated. Shan et al. [21] proposed the quotient illumination relighting (QIR) method, which synthesizes images under a predefined normal lighting condition from face images captured under non-uniform lighting conditions. Zhao et al. [22] proposed the illumination ratio image; one training image for each lighting condition is required to simulate the distribution of images under varying lighting conditions. Liu et al. [23] proposed an illumination restoration method: the illuminated image is restored to an image under a frontal light source using a ratio image between the face image under different lighting conditions and a face image under a frontal light source, both blurred by a Gaussian filter. The image is further enhanced through an iterative algorithm in which Gaussian filters of different sizes are applied to different regions of the face image. Vucini et al. [24] adopted Liu et al.'s method to restore and enhance the illuminated image, employed LDA as the feature extractor, and proposed an image synthesis method based on QI to generate training images and overcome the small-sample-size problem. Xie and Lam [25] normalize the lighting effect using a local normalization technique: the face image is partitioned into blocks of fixed size, and for each block the pixel intensities are normalized to zero mean and unit variance. The advantage of this method is its low computational complexity.

This paper proposes a new illumination compensation method for face recognition. The proposed method aims to correct the illumination variation rather than simply discard it. Compared to existing DCT-based methods, the method proposed here does not discard the low frequency components, which represent most of the effects of illumination variation. This allows us to retain more information that might be useful for face recognition. Furthermore, some of the effects of illumination variation lie in the higher frequency band, at approximately the same frequencies as some important facial features. As previously mentioned, these kinds of illumination variation create shadows and specularities in the image, so removing the low frequency components does not help in this case. The method proposed here manipulates the odd and even DCT components to remove these artifacts. Using the DCT approach, the complexity of the previously described methods is avoided while retaining comparable performance [8]-[10], [15], [21]-[24]. The proposed method is based solely on 2D images, does not need to estimate the 3D shape, and requires much less computational resources than the 3D model-based methods cited above. Besides that, it does not require any prior information about the illumination model or training images. Furthermore, the parameter selection is simple: only one parameter, the cut-off frequency of the filter, needs to be determined. Experimental results on the Yale face database B [11], using PCA- and support vector machine (SVM)-based face recognition algorithms, show that the proposed method achieves good performance compared to some of the more complicated methods described above.

2. Method
As mentioned above, an illuminated face image I(x, y) can be regarded as the product of reflectance R(x, y) and luminance L(x, y), as shown in (1):

I(x, y) = R(x, y) · L(x, y)    (1)

Taking the logarithm of (1), we obtain (2):

log I(x, y) = log R(x, y) + log L(x, y)    (2)

The logarithm transform turns (1) into a linear equation: the logarithm of the illuminated image is the sum of the logarithm of the reflectance and the logarithm of the luminance, as shown in (2). In image processing, the logarithm transform is often employed to expand the values of dark pixels. Fig. 1 compares an original image with the same image after the logarithm transform; the brightness of the transformed image is spread more uniformly.
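As a small illustration of (1) and (2), assuming numpy, multiplication of reflectance and luminance becomes addition in the log domain; the values below are invented for the example.

# (1)-(2): multiplicative luminance becomes additive in the log domain.
import numpy as np

R = np.array([[0.8, 0.2], [0.6, 0.9]])   # reflectance
L = np.array([[0.1, 0.1], [1.0, 1.0]])   # luminance: left column poorly lit
I = R * L                                 # (1) illuminated image
log_I = np.log(I)                         # expands the dark-pixel range
assert np.allclose(log_I, np.log(R) + np.log(L))   # (2)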


Figure 1. On the left is the original face image; on the right is the output of the logarithm transform.

The luminance component lies mainly in the low frequency band, while the reflectance component lies mainly in the higher frequency band. The luminance component can be approximated as the 'low-pass version' of the illuminated image. The reflectance component of an illuminated face image represents the facial features that are stable under varying lighting conditions. Thus, the illumination variations can be compensated for, provided we can estimate them relatively well. Let L(x, y) be the incident luminance and Lu(x, y) be the uniform luminance. A uniformly illuminated image, Iu(x, y), can be expressed as (3):

Iu(x, y) = R(x, y) · Lu(x, y)    (3)

In order to normalize the brightness across all of the images, the mean value of Iu is set to be the same for all images.

The luminance component corresponds to the illumination variation. As previously described, the illumination variation can be well approximated by the low frequency components of the illuminated face image. In Section 2.1, an illumination compensation method based on low frequency DCT components is presented. In fact, illumination variation cannot be perfectly separated from the original image in the frequency domain. Under large and complex illumination variations, shadows and specularities are cast on the face due to the unique 3D shape of the human face. Some shadows and specularities lie in the same frequency band as the reflectance, which corresponds to the facial features. In this case, a compensation technique based on low frequency components is unable to remove the artifacts. In Section 2.2, an illumination compensation method based on the properties of odd and even DCT components in the horizontal direction, which can remove artifacts in the high frequency band, is presented.

2.1 Illumination normalization using low frequency DCT components
The DCT has been widely used in modern image and video compression. It possesses some fine properties, such as de-correlation and energy compaction. The DCT is chosen because it gives only real values, unlike the discrete Fourier transform (DFT), which gives complex numbers. Furthermore, the DCT has lower computational complexity than the discrete wavelet transform. For an N×N image f(x, y), the 2D DCT is given by (5):

C(u, v) = α(u) α(v) Σ_{x=0}^{N-1} Σ_{y=0}^{N-1} f(x, y) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N]    (5)

for u, v = 0, 1, 2, …, N−1, where u and v represent the horizontal and vertical frequency components respectively, f(x, y) is the value of the pixel at coordinate (x, y), and α(u), α(v) are the usual DCT normalization factors. For the 2D DCT, the frequency is computed as the square root of the sum of u² and v². The inverse 2D DCT is expressed as (6):

f(x, y) = Σ_{u=0}^{N-1} Σ_{v=0}^{N-1} α(u) α(v) C(u, v) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N]    (6)

From (6), the uniform luminance component can be obtained from the original image by adding a compensation term that compensates for the non-uniform illumination. Generally, human faces are almost symmetrical: the left half of the face is almost identical to the right half in most cases. For non-uniformly illuminated images, part of the face is brightened and the rest is darkened. The luminance component is approximated as the 'low-pass version' of the image with a certain cut-off frequency. The compensation term can be estimated as follows. First, the mean of an image reconstructed from the low frequency DCT components is computed according to (7):

μ = (1 / N²) Σ_{x} Σ_{y} f_low(x, y)    (7)

where f_low is the image reconstructed using only the low frequency DCT components. A negative difference from this mean indicates that a pixel is 'dark' and a positive difference indicates that it is 'bright'. Each pixel is adjusted by halving the difference between the low-pass pixel value and the mean value, as shown in (8):

f'(x, y) = f(x, y) − [f_low(x, y) − μ] / 2    (8)

In other words, the intensity of each pixel is adjusted according to its difference from the mean intensity of the image reconstructed using only low frequency components. The illumination variations, which lie mainly in the low frequency band, are reduced, while the important facial features are preserved. In other words, the low-pass version of the image in the DCT domain serves as an estimate of the luminance component. Fig. 2 shows the effect of illumination normalization in the low frequency DCT components on a face image. The left-most image is the original image; the second, third, and fourth images are the processed images with the cut-off frequency set at 2, 4, and 6 respectively. As can be seen, the shadow effects at the left side of the face are reduced as the value of the cut-off frequency increases, while the facial features are unaffected. Unlike other similar methods, only one parameter needs to be determined: the cut-off frequency of the low-pass filter. However, the normalization may lose some information due to wrong estimations of the 'dark' and 'bright' zones.
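The following sketch shows one possible reading of this normalization using SciPy's DCT routines; the forms of (7) and (8), the frequency measure, and the cut-off value follow the description above, and the function name is ours.

# Sketch of the Section 2.1 normalization: luminance is estimated as the
# image reconstructed from DCT coefficients below a cut-off frequency,
# and every pixel is pulled halfway toward the mean of that low-pass image.
import numpy as np
from scipy.fft import dctn, idctn

def normalize_low_freq(img: np.ndarray, cutoff: int = 6) -> np.ndarray:
    coeffs = dctn(img, norm='ortho')
    # keep only coefficients with frequency sqrt(u^2 + v^2) <= cutoff
    u, v = np.meshgrid(np.arange(img.shape[0]), np.arange(img.shape[1]),
                       indexing='ij')
    low = np.where(np.sqrt(u**2 + v**2) <= cutoff, coeffs, 0.0)
    low_img = idctn(low, norm='ortho')    # 'low-pass version' of the image
    mu = low_img.mean()                   # (7) mean of the low-pass image
    return img - 0.5 * (low_img - mu)     # (8) halve the dark/bright deviation

# Typically applied to a log-transformed image, e.g.
# normalize_low_freq(np.log(I + 1e-6))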


Figure 2. Normalization on the low frequency DCT components. The left-most image is the original image.

2.2 Illumination Correction Using Odd and Even DCT Components in the Horizontal Direction
Under large and complex illumination variation, the effect of illumination variation occurs across almost the entire frequency band. Some variations, such as shadows and specularities, lie in the same frequency band as some important facial features. In this section, we propose an illumination correction method that uses the properties of the odd and even DCT components to eliminate shadows and specularities and to restore the original features.

Figure 3. 2D-DCT basis functions

In Fig. 3, different images have been produced by varying the DCT components in the horizontal direction. The second and fourth images have been reconstructed from different even DCT components in the horizontal direction, and the symmetry is evident. The first and third images, reconstructed from odd DCT components in the horizontal direction, are asymmetrical. Non-uniform illumination gives rise to the odd DCT components and affects the magnitude of the even DCT components; equation (5) can accordingly be rewritten as the sum of its even and odd horizontal-frequency terms.

The odd DCT components can be regarded as noise or luminance components due to illumination variation, while the even DCT components represent the wanted information, which may or may not be slightly corrupted by the non-uniform luminance. Under this model, the luminance components can be estimated from the odd DCT components, thereby also giving an estimate of the correction term. Fig. 4 shows the difference between the odd DCT components of an image under variable illumination and those of an image of the same person under uniform illumination. As shown in Fig. 4, the odd components of the illuminated image fluctuate greatly, with magnitudes greater than one, which shows that illumination variation occurs across the entire frequency band. The signal strength of the uniformly illuminated image is always below 0.5. For the non-uniformly illuminated image, the strong signals (greater than one) are located in the low frequency band, but the middle and high frequency bands also contain points with strength greater than 0.5. This shows that the illumination variation also corrupts the high frequency components, which correspond to the reflectance. Hence, no features are robust enough to be immune to the illumination variation.

Figure 4. The top and bottom are the odd DCT components of the original image and the illuminated image respectively.

Since human faces generally have the symmetry property, we can compensate the illumination variation in the face based on the properties of the odd and even components of the DCT. Here, an illumination compensation method based on manipulating the even and odd DCT components is proposed. First, two new images are reconstructed from the odd and the even DCT components in the horizontal direction of the original image. The pixels on the left half are compared with the pixels on the right half. If the left-side pixel is negative and the corresponding right-side pixel is positive, the values of both pixels are adjusted according to (10) and (11), which involve a compensation term. If the left-side pixel is positive and the corresponding right-side pixel is negative, the values of both pixels are adjusted according to (12) and (13). Fig. 5 shows an illuminated image and the corresponding odd and even reconstructions.

Figure 5. From left to right: the illuminated image and its odd and even DCT reconstructions.
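The odd/even decomposition that this section relies on can be reproduced with a 1D DCT along the rows. The sketch below shows only the decomposition and its symmetry properties, since the exact correction equations (10)-(14) are not reproduced above; the function name is ours.

# Zeroing the even (resp. odd) horizontal-frequency DCT coefficients yields
# the antisymmetric (resp. symmetric) part of the image about its vertical
# midline, which is the decomposition the correction step operates on.
import numpy as np
from scipy.fft import dct, idct

def odd_even_horizontal(img: np.ndarray):
    coeffs = dct(img, axis=1, norm='ortho')    # 1D DCT along each row
    odd = coeffs.copy();  odd[:, 0::2] = 0.0   # keep odd horizontal components
    even = coeffs.copy(); even[:, 1::2] = 0.0  # keep even horizontal components
    return idct(odd, axis=1, norm='ortho'), idct(even, axis=1, norm='ortho')

img = np.random.rand(8, 8)
I_odd, I_even = odd_even_horizontal(img)
assert np.allclose(img, I_odd + I_even)        # exact decomposition
assert np.allclose(I_even, I_even[:, ::-1])    # even part is mirror-symmetric
assert np.allclose(I_odd, -I_odd[:, ::-1])     # odd part is antisymmetric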


3. Experiment results on Yale Face database B
The Yale face database B is commonly used to evaluate the performance of illumination-invariant face recognition. It contains 10 individuals in nine different poses, and for each pose there are 64 different illumination conditions. All images were manually aligned, and the face images were cropped to 64×55 pixels. The face images were divided into four subsets, subset 1 to subset 4, according to the angle between the direction of the incident light and the camera axis, following [11].

3.1 Face recognition using principal component analysis
PCA is a classical face recognition method. PCA seeks a set of projection vectors that project the input data in such a way that the covariance of the training images is maximized. Six images in subset 1 were selected as training images to build the covariance matrix for PCA, as in [13] and [17]. Eigenvalue decomposition was applied to the covariance matrix to find the eigenvectors that project the data in the directions of maximum variance. The nearest neighbour rule was used as the classifier, with the Euclidean distance as the distance metric. Fig. 8 shows that the largest thirty principal components were sufficient to give the best result and to represent the face image; therefore, thirty principal components were used to form the projection matrix. Fig. 9 and Fig. 10 show the performance of the proposed method for subsets 3 and 4 respectively under different values of the cut-off frequency. Setting the cut-off frequency at six gave the best results for both subsets.
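A sketch of this PCA baseline, assuming scikit-learn; X_train, y_train, X_test, and y_test are placeholder matrices of flattened, preprocessed face images and their labels.

# PCA projection to 30 components followed by 1-nearest-neighbour
# classification under the Euclidean distance, as described above.
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def pca_face_recognizer(X_train, y_train, n_components=30):
    pca = PCA(n_components=n_components)
    Z = pca.fit_transform(X_train)
    clf = KNeighborsClassifier(n_neighbors=1, metric='euclidean').fit(Z, y_train)
    return pca, clf

# pca, clf = pca_face_recognizer(X_train, y_train)
# accuracy = (clf.predict(pca.transform(X_test)) == y_test).mean()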

Figure 6. The sum of the correction term and the even DCT components gives the illumination-corrected image.

The new pixel intensity for the compensated image can then be computed using (14). As shown in Fig. 6, the shadow created by the nose has been eliminated. Fig. 7 shows the illumination-corrected image obtained using the proposed method: illumination variations are largely reduced while the important facial features are preserved. We emphasize that the proposed method is able to recover facial features as long as the specularities or shadow regions occur on only one side of the face. The output image looks different from the original image because the proposed method corrects the illumination variation based on the symmetry of the human face; the output image is aligned in such a way that the left half is identical to the right half. Because of subtle variations between the two halves of every face, a face image is not exactly symmetrical; for example, the two eyes have slightly different sizes, or there is a mole on only one side of a cheek. Therefore, the output images may show some adverse effects. All of the images are normalized to a mean of 0.5.

Figure 7. Illumination normalization and compensation based on the proposed method

The flow of the proposed illumination-invariant face recognition system can be summarized as follows:
1. Perform the logarithm transform on the input images to expand the values of the dark pixels.
2. Correct the non-uniform luminance in the low frequency domain using the method proposed in Section 2.1.
3. Eliminate the specularities and shadows that lie in the same frequency band as the reflectance using the method proposed in Section 2.2.

Figure 8. Recognition rate for subsets 3 and 4 versus the number of principal components

Figure 9. Recognition rate for subset 3 versus the cut-off frequency


Figure 10. Recognition rate for subset 4 versus the cut-off frequency

Table 1: Face recognition using PCA
Method                   Subset 3    Subset 4
Histogram equalization   58.4%       15.6%
Proposed method          100%        92.1%

Without illumination normalization in the low frequency band, the proposed method was only able to achieve 96.3% and 64.1% recognition rates in subsets 3 and 4 respectively. The poor results were due to the large variation in the low frequency band, which the illumination correction algorithm was unable to cope with. As we increased the value of the cut-off frequency, the performance improved steadily, although with some small fluctuations. The best performance was obtained when the cut-off frequency was set at 6 (100% for subset 3). For subset 4, the best performance was 92.1%, obtained with the cut-off frequency at 6, 7, or 9. Beyond that, the performance decreased because some important features may have been lost in the illumination normalization process. Table 1 shows that our method achieves a better performance level than histogram equalization alone.

3.2 Face recognition using support vector machine
The support vector machine (SVM) [26]-[28] is a popular technique for classification, motivated by results from statistical learning theory. Unlike traditional methods such as neural networks, which minimize the empirical training error, the SVM is designed based on the structural risk minimization principle. The SVM maps the training data into a higher-dimensional feature space using the kernel trick, in which an optimal hyperplane with a large separating margin between the two classes of labeled data is constructed. The training data for the SVM are the PCA feature sets described in the previous section. In this paper, several types of kernel, such as the linear kernel, wavelet kernel, polynomial kernel, and radial basis function (RBF) kernel, were studied; the RBF kernel gave the best result. The RBF kernel is defined as

K(x, y) = exp(−||x − y||² / (2σ²)),

where σ is the width of the RBF.

Table 2 shows the performance of the SVM under the different methods. Our algorithm improved the performance of the SVM tremendously in subset 4. Our method scored a 100% recognition rate in subset 3 and 96.4% in subset 4.

Table 2: Face recognition using SVM
Method                   Subset 3    Subset 4
Original                 13.6%       55%
Histogram equalization   100%        61.4%
Proposed method          100%        96.4%

3.3 Comparison with other methods
The results for subsets 3 and 4, along with those of other well-known methods, are presented in Table 3. Table 3 shows that the proposed method gives results comparable to other well-known but more complicated methods, while being much simpler to implement and requiring much less computational resources. The baseline system scored only 58.4% and 15.6% in subsets 3 and 4 respectively. The proposed method outperformed the linear subspace method, the cones-attached method, the illumination ratio image method, and the QIR method. As mentioned above, these methods require training images to build the illumination model or to estimate the lighting conditions. The authors of [15] discard the low frequency components, keeping only the high frequency ones, which correspond to the reflectance component. Fig. 11 shows the difference between the proposed method and [15]: with [15], shadows and specular defects remain on the image because they lie in the same frequency band as the reflectance component. Discarding low frequency components might also remove important facial features that lie in the low frequency band and degrade the performance of the face recognition system when the number of classes is large.

The proposed method aims to correct the illumination variation rather than simply discard it in order to achieve illumination invariance. However, it was unable to completely remove artifacts that appeared on both sides of the face; this caused the proposed method to have a rather unimpressive performance in subset 4 compared to [15]. Our method is robust against illumination variation and easy to implement, and it can serve as a pre-processing stage for other face recognition algorithms.

Table 3: Recognition rate comparison with other methods
Method                                  Subset 3 (%)   Subset 4 (%)
Linear subspace [10]                    100            85
Cones-attached [10]                     100            91.4
Cone-cast [10]                          100            100
Illumination ratio image [22]           96.7           81.4
Quotient illumination relighting [21]   100            90.6
Illumination restoration [23]           98.3           96.4
DCT in logarithm domain [15]            100            99.8
Vucini et al.'s method [24]             100            95
Our method + PCA                        100            92.1
Our method + SVM                        100            96.4
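A corresponding sketch of the SVM stage on the PCA features, again assuming scikit-learn; the gamma setting is illustrative, as the paper tunes the RBF width.

# RBF-kernel SVM on the PCA feature set from the sketch above.
from sklearn.svm import SVC

# Z = pca.fit_transform(X_train), as in the PCA sketch above
svm = SVC(kernel='rbf', gamma='scale')   # K(x, y) = exp(-gamma * ||x - y||^2)
# svm.fit(Z, y_train)
# predictions = svm.predict(pca.transform(X_test))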


Figure 11. Comparison between the proposed method (bottom) and [15] (top)

3.4 Computation time
The proposed illumination compensation method in the DCT domain is efficient and simple. All experiments were conducted using Matlab 2008a on a Core 2 Duo E6750 CPU with 2 GB of RAM. It takes 2.050 seconds and 2.311 seconds to process all of the face images in subset 3 and subset 4 respectively; the mean computation time for each image is 0.0167 seconds. The complexity is low, and the method can be implemented in a real-time system.

4. Conclusion
This paper proposes a novel illumination compensation method in the DCT domain. The illumination is first normalized based on the low frequency components of the DCT. Subsequently, the illumination variations that create shadows and specularities are further corrected using the properties of the odd and even components of the DCT. The proposed method is simple in design: only one parameter needs to be determined, the cut-off frequency of the filter used in the illumination normalization process. The proposed method gives results comparable to other well-known but more complicated methods on the Yale face database B. Its performance in subset 4 is somewhat lower because large illumination variation creates shadows and specularities on both sides of the face. Our method is robust against illumination variation and easy to implement, and it can serve as a pre-processing stage for other face recognition algorithms.

References
[1] M.A. Turk and A.P. Pentland, "Eigenfaces for recognition", Journal of Cognitive Neuroscience, 3, pp. 71–86, 1991.
[2] P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman, "Eigenfaces versus Fisherfaces", IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), pp. 711–720, 1997.
[3] Y. Adini, Y. Moses, and S. Ullman, "Face recognition: the problem of compensating for changes in illumination direction", IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), pp. 721–732, 1997.
[4] P.J. Phillips, W.T. Scruggs, A.J. O'Toole, P.J. Flynn, K.W. Bowyer, C.L. Schott, and M. Sharpe, "FRVT 2006 and ICE 2006 large-scale results", National Institute of Standards and Technology Report.
[5] R. Basri and D.W. Jacobs, "Lambertian reflectance and linear subspaces", IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(2), pp. 218–233, 2003.
[6] S. Nayar and H. Murase, "Dimensionality of illumination manifold in eigenspace", Technical Report CUCS-021-94, Columbia University.
[7] S. Amnon, "Photometric issues in 3D visual recognition from a single 2D image", International Journal of Computer Vision, 2, pp. 99–122, 1997.
[8] K.C. Lee, J. Ho, and D.J. Kriegman, "Nine points of light: acquiring subspaces for face recognition under variable lighting and pose", In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 519–526, 2001.
[9] P.N. Belhumeur and D.J. Kriegman, "What is the set of images of an object under all possible illumination conditions?", In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 270–277, 1996.
[10] A.S. Georghiades, P.N. Belhumeur, and D.W. Jacobs, "From few to many: illumination cone models for face recognition under variable lighting and pose", IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(6), pp. 630–660, 2001.
[11] H.F. Chen, P.N. Belhumeur, and D.J. Kriegman, "In search of illumination invariants", In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 13–15, 2000.
[12] R. Gross, I. Matthews, and S. Baker, "Appearance-based face recognition and light-fields", IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(4), pp. 449–465, 2004.
[13] L. Zhang, S. Wang, and D. Samaras, "Face synthesis and recognition from a single image under arbitrary unknown lighting using a spherical harmonic basis morphable model", In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 209–216, 2005.
[14] E.H. Land and J.J. McCann, "Lightness and retinex theory", Journal of the Optical Society of America, 61(1), pp. 1–11, 1971.
[15] W. Chen, M.J. Er, and S. Wu, "Illumination compensation and normalization for robust face recognition using discrete cosine transform in logarithm domain", IEEE Transactions on Systems, Man and Cybernetics, Part B, 36(2), pp. 458–466, 2006.
[16] L. Nanni and A. Lumini, "Wavelet decomposition tree selection for palm and face authentication", Pattern Recognition Letters, 29, pp. 343–353, 2008.
[17] A. Franco and L. Nanni, "Fusion of classifiers for illumination robust face recognition", Expert Systems with Applications, 36, pp. 8946–8954, 2009.
[18] S. Amnon and T. Riklin-Raviv, "The quotient image: class-based re-rendering and recognition with varying illuminations", IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(2), pp. 129–139, 2001.
[19] H. Wang, S.Z. Li, and Y. Wang, "Generalized quotient image", In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 498–505, 2004.
[20] H. Wang, S.Z. Li, and Y. Wang, "Face recognition under varying lighting conditions using self quotient image", In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, pp. 819–824, 2004.
[21] S. Shan, W. Gao, B. Gao, and D. Zhao, "Illumination normalization for robust face recognition against varying lighting conditions", In Proceedings of the IEEE Workshop on AMFG, pp. 157–164, 2003.
[22] J. Zhao, Y. Su, D. Wang, and S. Luo, "Illumination ratio image: synthesizing and recognition with varying illuminations", Pattern Recognition Letters, 24, pp. 2703–2710, 2003.
[23] D.H. Liu, K.M. Lam, and L.S. Shen, "Illumination invariant face recognition", Pattern Recognition, 38, pp. 1705–1716, 2005.
[24] E. Vucini, M. Gökmen, and E. Gröller, "Face recognition under varying illumination", In Proceedings of the 15th Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, pp. 57–64, 2007.
[25] X. Xie and K.M. Lam, "An efficient illumination normalization method for face recognition", Pattern Recognition Letters, 27, pp. 609–617, 2006.
[26] V. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, New York, 1995.
[27] B.E. Boser, I. Guyon, and V. Vapnik, "A training algorithm for optimal margin classifiers", In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pp. 144–152, 1992.
[28] C. Cortes and V. Vapnik, "Support-vector networks", Machine Learning, 20, pp. 273–297, 1995.


Towards In-Car Ad Hoc Network for Pervasive Multimedia Services

Kamal Sharma 1, Hemant Sharma 2, and Dr. A K Ramani 3

1 Institute of Computer Science & IT, Devi Ahilya University, Indore (M.P.), India [email protected]
2 Software Architect, Delphi Delco Electronics & Safety Europe GmbH, Tec Centre, D-31162 Bad Salzdetfurth, Germany [email protected]
3 Prof. & Head, Institute of Computer Science & IT, Devi Ahilya University, Indore (M.P.), India [email protected]

Abstract: Pervasive computing systems inside modern automobiles are made up of hundreds of interconnected, often replaceable, devices and software components. In-vehicle multimedia components and applications are becoming complex artifacts due to advances in technology and increased competition. There is a growing need for a software platform that enables efficient deployment of multimedia services in the automotive environment. This paper presents the architecture of a Bluetooth-based multimedia ad hoc network in a car. The architecture is explained and presented together with a prototype implementation running on hand-held devices. Further, the fundamental architectural trade-offs are analyzed based on measurements.

Keywords: Ad Hoc Networks, Pervasive Computing, In-Vehicle Multimedia, Bluetooth.

1. Introduction
During the last few years, the proliferation of miniaturized devices with networking capabilities has provided the technological grounds for pervasive networking even in automotive environments. Growing demand for personal device connectivity, mobile Internet access, remote monitoring and diagnostics, as well as enhanced safety and security, is driving vehicle manufacturers and suppliers to seek out new wireless technologies. Wireless technology integration strategies can enhance the value proposition of vehicles by integrating advanced electronic systems such as infotainment systems, safety and stability systems, and comfort and convenience enhancement systems. The evolutionary development of in-car electronic systems has led to a significant increase in the number of connecting cables within a car. To reduce the amount of cabling and to simplify the interworking of dedicated devices, appropriate wired bus systems are currently being considered. These systems involve high costs and effort for the installation of cables and accessory components. Thus, wireless systems are a flexible and very attractive alternative to wired connections.

However, the realization of a fully wireless car bus system is still far away. Fiber optic connections for multimedia bus systems offer advantages regarding costs and bandwidth, and the reliability demanded of mission-critical networks still requires wired connections to ensure the safe operation of the car. Nevertheless, cost-effective wireless subsystems that could extend or partly replace wired bus systems are already conceivable today. A very promising technology in this context is specified by the recent Bluetooth 3.0 standard [1]. The provisioning of multimedia streaming applications over a wireless network inside the vehicle requires managing differentiated performance levels depending on application, user, and device requirements in order to properly allocate network bandwidth, especially the limited bandwidth available in the wireless last meter [3]. In particular, the Bluetooth specification [1] offers limited support for performance differentiation, by allowing a choice among the three kinds of logical transports and a static configuration of performance requirements for ACL transports. In addition, current implementations of the Bluetooth software stack do not allow applications to exploit the limited performance functions included in the specification in a portable way. The result is that the development of Bluetooth operations in multimedia ad hoc applications currently depends on the specific implementation details of the target Bluetooth hardware/software platform. This considerably complicates service design and implementation, limits the portability of developed applications, and calls for adequate modeling of the performance parameters of potential ad hoc applications and services. The design presented in this paper is for a wireless streaming system that offers a means of bringing participatory media and bulk content distribution into the wireless domain. The basis for the service is an opportunistic distribution of content among network nodes: users in our system exchange data when their corresponding applications receive a trigger. The system is going to be


open to any node that wants to provide and consume content. Hence, it is based on unlicensed short-range communication, specifically Bluetooth.


By relying on short-range communication, the network will be highly disrupted most of the time. The communication is further challenged by relatively short transfer opportunities, which may be in the range of a few seconds when, for example, two nodes communicate with each other. The contribution of this paper is twofold:
- a mechanism for content streaming based on opportunistic communication;
- an evaluation of the mechanism through the use of realistic use cases and real traces.

The rest of the paper is organized as follows. Section 2 provides a description of the system and potential application scenarios. Section 3 discusses the architecture and design of the system. Section 4 evaluates the design of the system. Section 5 provides an overview of related research, and Section 6 concludes the paper.



2. System Description
An ad hoc multimedia network in a car and the participating devices are shown in Figure 1. The diagram presents an ad hoc network containing a smart phone, an infotainment system, and a rear-seat entertainment system. The devices are equipped with Bluetooth modules and are capable of establishing a Bluetooth connection. A piconet can be established between the infotainment system and an iPhone when the driver or any other occupant of the vehicle pairs the phone. Similarly, a piconet can be established between the infotainment system and the rear-seat entertainment system. The piconet enables sharing of iPhone data or access to information from the internet, provided an appropriate application framework is available on the infotainment system.

Figure 1. Overview of In-Car Multimedia Network

2.1 The Multimedia PAN
Bluetooth technology is based on a master-slave concept, in which the master device controls data transmissions through a polling procedure. The master is defined as the device that initiates the connection. A collection of slave devices associated with a single master device is referred to as a piconet. The master dictates packet transmissions within a piconet according to a time-slot process: the channel is divided into time slots that are numbered according to an internal clock running on the master. A time division duplex (TDD) scheme is used, in which the master and slaves alternately transmit packets; even-numbered time slots are reserved for master-to-slave transmissions, while odd-numbered time slots are reserved for slave-to-master transmissions (see the sketch below).

Figure 2. The Multimedia PAN
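As a toy illustration of the TDD slot rule just described, where the device names follow Figure 2 and the polling order is invented for the example:

# Master transmits in even-numbered slots; the polled slave answers in the
# following odd-numbered slot.
def slot_owner(slot: int, polled_slave: str) -> str:
    return "infotainment (master)" if slot % 2 == 0 else polled_slave

for slot in range(4):
    polled = ["iPhone", "rear-seat unit"][(slot // 2) % 2]  # master polls in turn
    print(slot, "->", slot_owner(slot, polled))
# 0 -> infotainment (master), 1 -> iPhone,
# 2 -> infotainment (master), 3 -> rear-seat unit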


Figure 2 provides an overview of the multimedia PAN inside the car with the roles of the different nodes. The piconet here consists of one master, the infotainment device, and two slaves, the smart phone (iPhone) and the rear-seat entertainment device. As soon as the infotainment system is up and running, its Bluetooth module is ready to pair with other available devices. Pairing with the iPhone, with the rear-seat entertainment unit, or with both establishes the network.

Figure 3. Overview of Ad Hoc Multimedia Communication Architecture

2.2 Application Scenarios
The ad hoc multimedia network consisting of the infotainment system and the slave devices shall provide the following services:
- Access to audio streams from the iPhone on the infotainment system, so that they can be played over the vehicle's sound system.
- Access to video streams from the iPhone on the rear-seat entertainment unit via the piconet master.
- Access to the internet from the rear-seat unit using the iPhone via the piconet master.
- Applications based on information received by the iPhone that help safe driving, such as weather or traffic information.

3. System Architecture and Implementation
This section describes the design and implementation of our ad hoc network system. We first give a brief overview, then describe the system architecture, and finally discuss the protocols involved in a pair-wise association when devices in the network communicate.

3.1 System Architecture
The architecture organizes application data into communication channels to facilitate the identification and communication of the intended data streams. Every application subscribes to the channels it is interested in, and its device node will try to retrieve any data belonging to those channels. Following the approach used in Internet-based podcasting protocols, the framework structures channels into different media streams. To make efficient use of contacts of small duration, the streams are further divided into data packets, transport-level data units of a size that can typically be downloaded in a single node encounter. Each data stream is further divided into protocol data units (PDUs), the atomic transport units of the network. The resulting system architecture is illustrated in Figure 3. The transport layer acts directly on top of the link layer, without any routing layer. To distribute information among the communicating nodes, the framework does not rely on any explicit multi-hop routing scheme. Instead of explicitly routing data through specific nodes, it relies on a receiver-driven, application-level dissemination model, in which data content is routed implicitly as nodes retrieve information that they request from neighboring nodes. The architecture distinguishes between an application layer, a transport layer, and a data link layer. In a first level of aggregation, the application organizes the data into media stream channels. Below the application layer, the transport layer organizes the data into data streams. Streams are smaller data units that can be communicated over short contacts. The use of smaller file blocks also supports the integration of forward error correction, for instance the use of

16

(IJCNS) International Journal of Computer and Network Security, Vol. 2, No. 3, March 2010

fountain codes, to speed up and secure the data transfer, specifically when packets are received unordered. The streams packets themselves are then again cut into smaller parts to optimize the interaction with the data link layer; i.e., the size is set to the PDU size of the data link layer. The proposed system is designed to work on any MAC architecture, however, to be effective even in the presence of short contact durations, short setup times and high data rates are important for achieving high application communication throughput. The design is further characterized by two fundamental choices. Ø First, it allows only pair wise associations even when the MAC layer supports multi-point communication. Second, it never pushes data in the network and relies instead on receiver-driven dissemination.
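The data organization just described can be made concrete with a short sketch; the type and field names below are illustrative assumptions, not taken from the authors' implementation:

    #include <cstdint>
    #include <string>
    #include <vector>

    struct PDU {                       // atomic transport unit
        uint32_t streamId;
        uint32_t sequence;             // position of this PDU within its stream
        std::vector<uint8_t> payload;  // at most the link-layer PDU size
    };

    struct Stream {                    // transport-level unit, sized for one encounter
        uint32_t id;
        uint64_t publishedAt;          // used by the "newer than" retrieval queries
        std::vector<PDU> pdus;
    };

    struct Channel {                   // application-level aggregation
        std::string name;              // e.g. "traffic", "weather", "video"
        std::vector<Stream> streams;
    };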

The design is further characterized by two fundamental choices:
• First, it allows only pair-wise associations, even when the MAC layer supports multi-point communication.
• Second, it never pushes data into the network and relies instead on receiver-driven dissemination.

The arguments for these two choices are simplicity and optimal usage of short contact durations. Furthermore, the transport layer is able to optimize the flow control between the nodes and is not constrained by the slowest receiver in range. Since the framework does not perform multi-hop routing explicitly, the system performance is mainly determined by the selection of the nodes it synchronizes with and the order in which it transfers data from the peers. This task is performed by the synchronization service, depicted in Figure 1. The synchronization service is responsible for maintaining state about past synchronization encounters and the current devices in the PAN.

3.2 Association Phases
This section describes the process when two nodes associate. The framework differentiates two phases: a discovery phase, in which nodes detect that they are in range, and a data exchange phase, in which the nodes negotiate and perform the data transfer.

1) Device Discovery Phase
We assume that every device that participates in the opportunistic data sharing belongs to the same network, e.g., with IEEE 802.15 every device is configured in ad hoc mode. While the detection of new devices in the network is handled by the MAC layer, the application in the nodes has to take care of the discovery of devices that participate in the wireless communication service. The synchronization service keeps track of the discovery messages received from each peer and maintains a history list. An important aspect of the discovery process is to identify peers with a good connection. Since the system is designed for ad hoc scenarios, it has to consider the specifics of wireless communication.

2) Data Transfer Phase
Once a device node has been detected, the synchronization service switches to the transfer mode, which handles the synchronization. The transfer mode is based on a client-server model. The communication protocol is a request-response system, although not a strict one: some requests do not generate a response, whereas other requests can generate several responses. The goal is to send as few data packets as possible to reduce the communication overhead. Figure 4 illustrates the state diagram of the transfer mode. It shows the three most important stages of synchronization between two device nodes: the negotiation, data query, and data communication stages.

In the Negotiation stage, both devices determine whether some of the subscribed channels are available on the other device. Instead of querying every single channel, the devices exchange a channel filter that contains all channel subscriptions a device offers. The devices then test their subscribed channels against the filter of the other device and create a list of matching channels. This is a local process and does not involve the exchange of messages.
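One plausible realization of the channel filter, consistent with the authors' later remark that Bloom filters are used when searching for data, is sketched below; the filter size, hash count, and names are assumptions:

    #include <bitset>
    #include <functional>
    #include <string>

    class ChannelFilter {
        static constexpr std::size_t kBits = 1024;
        static constexpr int kHashes = 4;
        std::bitset<kBits> bits_;

        static std::size_t hash(const std::string& name, int i) {
            // Derive k hash values by salting the channel name.
            return std::hash<std::string>{}(name + '#' + std::to_string(i)) % kBits;
        }
    public:
        void add(const std::string& channel) {
            for (int i = 0; i < kHashes; ++i) bits_.set(hash(channel, i));
        }
        // May return true for a channel that was never added (false positive),
        // but never false for one that was added.
        bool mayContain(const std::string& channel) const {
            for (int i = 0; i < kHashes; ++i)
                if (!bits_.test(hash(channel, i))) return false;
            return true;
        }
    };

Because membership tests can yield false positives but never false negatives, each device can safely test its own subscriptions against the peer's filter locally, which is why the matching step exchanges no further messages.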

Figure 4. State Diagram – Data Transfer Modes.

In the Data Query stage, a device confirms the channels selected in the previous stage and then retrieves a list of streams offered by the remote device within those channels. Our implementation supports three different types of stream retrieval (a selection sketch follows the list):
1. The peer requests any random stream within a channel that the remote peer offers.
2. The peer requests any stream newer than a given date, starting with the newest stream.
3. The peer requests any stream newer than a given date, starting with the oldest stream.
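A minimal sketch of how these three retrieval types could be applied to the offered streams, reusing the Stream type sketched earlier; the function and parameter names are assumptions:

    #include <algorithm>
    #include <cstdint>
    #include <random>
    #include <vector>

    enum class QueryType { RandomStream, NewestSince, OldestSince };

    std::vector<const Stream*> selectStreams(const std::vector<Stream>& offered,
                                             QueryType type, uint64_t since,
                                             std::mt19937& rng) {
        std::vector<const Stream*> out;
        for (const Stream& s : offered)
            if (type == QueryType::RandomStream || s.publishedAt > since)
                out.push_back(&s);
        switch (type) {
        case QueryType::RandomStream:                 // any single stream
            std::shuffle(out.begin(), out.end(), rng);
            if (out.size() > 1) out.resize(1);
            break;
        case QueryType::NewestSince:                  // newest first
            std::sort(out.begin(), out.end(), [](auto* a, auto* b) {
                return a->publishedAt > b->publishedAt; });
            break;
        case QueryType::OldestSince:                  // oldest first
            std::sort(out.begin(), out.end(), [](auto* a, auto* b) {
                return a->publishedAt < b->publishedAt; });
            break;
        }
        return out;
    }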

The actual communication of streams is handled in the Data Communication stage, which usually takes up most of the connection time. The device starts to process the list of PDU streams that was created in the previous stage. The communication itself works analogously to the download process of BitTorrent [7]: missing PDUs that are available at the remote peer are randomly selected and downloaded. Note that a stream is divided into several pieces (PDUs), which are requested individually.
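The BitTorrent-like piece selection can be summarized in a few lines; the names are illustrative, not the authors' API:

    #include <cstdint>
    #include <random>
    #include <set>
    #include <vector>

    // Among the PDUs we are still missing, request a random one the peer has.
    // Returns -1 when this peer has nothing left that we need.
    int pickNextPdu(const std::set<uint32_t>& pdusWeHave,
                    const std::vector<uint32_t>& pdusAtPeer,
                    std::mt19937& rng) {
        std::vector<uint32_t> candidates;
        for (uint32_t seq : pdusAtPeer)
            if (pdusWeHave.count(seq) == 0) candidates.push_back(seq);
        if (candidates.empty()) return -1;
        std::uniform_int_distribution<std::size_t> pick(0, candidates.size() - 1);
        return static_cast<int>(candidates[pick(rng)]);
    }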

4. Evaluation
This section evaluates important design choices of the proposed architecture as well as the performance that might be expected from a wireless opportunistic application and content sharing service. Our evaluation relies on our prototype implementation and focuses on typical deployment scenarios in which:
• the nodes are co-located (e.g., all devices are in the car); or
• the nodes are not co-located (the paired iPhone is not in the car).

4.1 Evaluation Setup and Methodology
All devices communicated over their integrated Bluetooth interface (802.15) turned into ad hoc mode. Our data communication application is written in C++. The devices that participate in the data communication system are configured in ad hoc mode using the same identifier. We evaluated the framework's accuracy by comparing its data flows to actual Bluetooth flows between the same devices. Performing this evaluation on a large scale is difficult because it requires control over the software running at both end-points of a Bluetooth flow. We performed our evaluation with a pre-instrumented Bluetooth stack on two classes of devices: the infotainment unit and the iPhone.

4.2 Device Discovery Phase
An important aspect of opportunistic wireless podcasting is the discovery time. The discovery time is the time between the moment two nodes move into transmission range and the moment they discover each other at the application layer and start the synchronization phase. This time is directly impacted by the interval at which the discovery messages are sent: the more frequently these messages are sent, the quicker the nodes will discover each other and be able to transfer data streams.

4.3 Device Synchronization
We next look at the synchronization time. The synchronization time is the time between the moment two devices have discovered each other and associate and the moment they start data communication. Figure 5 shows the performance of the architecture for device synchronization with different numbers of channels, using channel filters, compared to the time it would take without filters.

Figure 5. Device Synchronization with and without channel filter.

4.4 Data Transfer
The system limits the association time to ensure that the system remains fair when multiple nodes are interested in the data provided by just a few nodes. Without this artificial limit, the first node that associates with the node holding new data content would be able to communicate for an indefinite amount of time, letting the other nodes starve for data.

To show how our implementation performs on different PAN nodes, Figure 6 shows a comparison of the download time with the iPhone and the infotainment unit. The results are from average data communication measurements of around 12 MB video streams, showing the download time when two or five devices are part of the PAN inside the car.

Figure 6. Average Data Exchange Time between the PAN devices.

4.5 PAN Accuracy
When sending data to multiple receivers, a Bluetooth master must form a piconet. In a piconet, the master sends data to each receiver one packet at a time in a round-robin fashion. To evaluate the architecture's accuracy in piconet mode, we performed the following experiment. We set up a sending device to transmit a very large video file to each slave joining its piconet. We set up two iPhones as slaves to join the piconet one by one, every 90 seconds. We measured the transfer rate of the initial flow established between the master and the first joining slave, and we plot how this rate changes over time in Figure 7.

Figure 7. Data communication rate in the PAN scenario, when the iPhone and the Infotainment Unit have been paired.

Our results demonstrate that the framework is accurate: its emulated flows behave similarly to Bluetooth flows with respect to the number of packets exchanged and packet sizes. Finally, we found that the framework remains accurate when running in piconet mode.
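The round-robin service order just described explains the rate drop observed in Figure 7 when a second slave joins: each new slave takes an equal share of the master's packet turns. A minimal sketch of such a master, with assumed names:

    #include <cstddef>
    #include <string>
    #include <utility>
    #include <vector>

    struct Slave { std::string name; /* link state would live here */ };

    class PiconetMaster {
        std::vector<Slave> slaves_;
        std::size_t next_ = 0;
    public:
        void join(Slave s) { slaves_.push_back(std::move(s)); }
        // Returns the slave whose turn it is to receive the next packet;
        // with n slaves, each one sees roughly 1/n of the master's rate.
        Slave* nextReceiver() {
            if (slaves_.empty()) return nullptr;
            Slave* s = &slaves_[next_];
            next_ = (next_ + 1) % slaves_.size();
            return s;
        }
    };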

5. Related Research
Bluetooth is a low-cost technology initially designed for cable replacement [4] but more generally intended for all kinds of Personal Area Network (PAN) applications [5]. It is probable that, in the very near future, Bluetooth will be embedded in almost every mobile device. These features, coupled with the interoperability provided by the Bluetooth specifications [1], make this wireless technology very appealing for applications in automotive environments [2]. As an example, Bluetooth headsets are very popular as a wireless audio link to a mobile phone, also for vehicular use. These reasons make Bluetooth the most suitable technology for the design of the low-power Wireless Communication Network (WCN).

There has been substantial work on peer-to-peer content distribution for wireless networks and the Internet. BitTorrent is a successful instance of such systems, in which users who retrieve data act simultaneously as clients and servers; it has subsequently gained interest in the research community, see for instance [7]. Our system has similarities to BitTorrent [7], but mobility-assisted delivery means that data are provided in a random order from a random mix of peers, whereas peer-to-peer content distribution systems like BitTorrent select peers based on specific rules.

The closest research field is delay-tolerant networking. The Delay Tolerant Network Research Group (DTNRG) [14] has proposed an architecture [13] to support communication that may be used by delay-tolerant applications. The architecture consists mainly of the addition of an overlay, called the bundle layer, above a network transport layer. Messages of any size are transferred in bundles in an atomic fashion that ensures node-to-node reliability. Multicast for delay-tolerant networks has been proposed in [12]; in contrast to multicast, our work assumes open user groups. The info-station concept is akin to our proposal, and the paper in Ref. [10] studies means for avoiding exploitation of other nodes. We differ in that we make the nodal exchanges governed by a protocol instead of a social contract between users. In [9], it has been shown that delay-tolerant broadcasting between mobile nodes results in sufficiently high application-level throughput even for streaming. This is the case in urban pedestrian areas with reasonably high densities of users, as well as in public transportation and in places where people gather occasionally (e.g., sports fields, shopping malls, recreational areas). Contact patterns of human mobility have been analyzed in the Haggle project [6], which aims at developing an application-independent networking architecture for delay-tolerant networks. In contrast, we implement the podcasting service directly on top of the link layer to exploit application-specific policies in the way information spreads across the mobile users. BlueTorrent [8] is a cooperative content sharing system for Bluetooth. It differs from our approach in its search mechanisms and content structuring: we rely on a channel-based content structure with a subscription model, whereas BlueTorrent employs flat structuring with traditional query-string search. TACO-DTN [11] is a content-based dissemination system for delay-tolerant networks. It is implemented as a publish/subscribe system and was mainly designed to distribute temporal events, whereas our approach is implemented as a pure receiver-driven system and optimized for dissemination of streaming media. We use Bloom filters when searching for data; a survey of the use of Bloom filters in networking is given in [15].

6. Conclusions
In this paper, we presented an architecture of a wireless opportunistic communication network for in-vehicle multimedia applications. The prototype implementation is targeted at devices that communicate over IEEE 802.15. We have evaluated different tradeoffs in the discovery, synchronization, and data communication phases.

The overall resulting performance is promising and shows the feasibility of wireless opportunistic communication inside a vehicle. With the proposed opportunistic communication system, the scope of multimedia content exchange may broaden, and new participatory wireless broadcasting applications using the proposed concepts may emerge in the near future. We anticipate two main directions for future work:
• Developing analytical models for relevant measures of performance for the described ad hoc network.
• Performance analysis of the multimedia protocol for different multimedia resources and performance attributes.

References
[1] Bluetooth Special Interest Group, "Core Specification of the Bluetooth System v3.0," Apr. 2009.
[2] R. Nüsser and R. Pelz, "Bluetooth-based Wireless Connectivity in an Automotive Environment," Proceedings of the IEEE Vehicular Technology Conference, Fall 2000, vol. 4, pp. 1935-1942.
[3] S. Burleigh, A. Hooke, L. Torgerson, K. Fall, V. Cerf, B. Durst, and K. Scott, "Delay-tolerant Networking: An Approach to Interplanetary Internet," IEEE Communications Magazine, 41(6):128-136, 2003.
[4] J. C. Haartsen, "The Bluetooth Radio System," IEEE Personal Communications, vol. 7, no. 1, pp. 28-36, Feb. 2000.
[5] P. Johansson, M. Kazantzidis, R. Kapoor, and M. Gerla, "Bluetooth: An Enabler for Personal Area Networking," IEEE Network, vol. 15, no. 5, pp. 28-37, Sept.-Oct. 2001.
[6] A. Chaintreau, P. Hui, J. Crowcroft, C. Diot, R. Gass, and J. Scott, "Impact of Human Mobility on the Design of Opportunistic Forwarding Algorithms," Proceedings of IEEE INFOCOM, Barcelona, Spain, April 2006.
[7] M. Izal, G. Urvoy-Keller, E. W. Biersack, P. Felber, A. Al Hamra, and L. Garcés-Erice, "Dissecting BitTorrent: Five Months in a Torrent's Lifetime," Proceedings of the Passive and Active Measurements Conference, April 2004.
[8] S. Jung, U. Lee, A. Chang, D.-K. Cho, and M. Gerla, "BlueTorrent: Cooperative Content Sharing for Bluetooth Users," Proceedings of PerCom, NY, USA, March 2007.
[9] G. Karlsson, V. Lenders, and M. May, "Delay-Tolerant Broadcasting," Proceedings of the ACM SIGCOMM CHANTS Workshop, Pisa, Italy, September 2006.
[10] W. H. Yuen, R. D. Yates, and S. C. Sung, "Non-cooperative content distribution in mobile infostation networks," Proceedings of the IEEE WCNC, 2003.
[11] G. Sollazzo, M. Musolesi, and C. Mascolo, "TACO-DTN: A Time-Aware COntent-based dissemination system for Delay Tolerant Networks," Proceedings of the First International Workshop on Mobile Opportunistic Networking, Puerto Rico, June 2007.
[12] W. Zhao, M. Ammar, and E. Zegura, "Multicasting in delay tolerant networks: Semantic models and routing algorithms," Proceedings of the SIGCOMM Workshop on Delay Tolerant Networking, August 2005.
[13] S. Burleigh, A. Hooke, L. Torgerson, K. Fall, V. Cerf, B. Durst, and K. Scott, "Delay-tolerant Networking: An Approach to Interplanetary Internet," IEEE Communications Magazine, 41(6):128-136, 2003.
[14] Delay Tolerant Network Research Group (DTNRG), http://www.dtnrg.org.
[15] A. Broder and M. Mitzenmacher, "Network applications of Bloom filters: A survey," Proceedings of the 40th Annual Allerton Conference, 2002.

Authors Profile
Kamal Sharma received the M.Sc. (Electronics) and M.Tech. (Future Studies & Planning) degrees from Devi Ahilya University in 2002 and 2005, respectively. He is currently with Devi Ahilya University, Indore, India.

Hemant Sharma received the M.Sc., M.Tech. and Ph.D. degrees from Devi Ahilya University in 1996, 1998 and 2009, respectively. He has been designing and developing platform architectures for in-vehicle infotainment systems for more than 10 years. He is currently with Delphi Delco Electronics Europe GmbH, Germany.

Dr. A. K. Ramani received his Master of Engineering (Digital Systems) and Ph.D. from Devi Ahilya University, Indore. He worked as a research engineer at the ISRO Satellite Center, Dept. of Space, Bangalore, India, during 1979-83. Since January 1990, he has been a professor with the School of Computer Science at Devi Ahilya University. He was an associate professor at University Putra Malaysia, Dept. of Computer Science, from May 1995 to May 1999. From September 2005 to July 2006, he was with the College of Computer Science and Information Technology at King Faisal University (KFU), Kingdom of Saudi Arabia. He has guided 13 PhDs in different areas of Computer Science and Information Technology and has authored about 70 research papers.


An Integrated Approach for Legacy Information System Evolution
Dr. Shahanawaj Ahamad1
Department of Computer Science College of Arts & Science in Wadi Al-Dawasir, King Saud University Wadi Al-Dawasir-11991 Kingdom of Saudi Arabia. [email protected]

Abstract: Legacy evolution has always been a great challenge because of continuous changes in business operations driven by changing requirements, the pursuit of large commercial benefits, and developments in information and communication technologies. Legacy systems were usually not built to accommodate such fast-changing advancements; this is one of the basic challenges of legacy evolution, and renovation also requires forward and backward procedures and specific knowledge generation for renovators. Web-enabling legacy and COBOL-based applications to interact with e-commerce applications is potentially hard to maintain and can cost organizations a large part of their economic assets. This paper proposes solution procedures in this evolutionary direction, so that on-demand legacy evolution can be performed through adaptive maintenance.

Keywords: Legacy Systems; Software Evolution; Legacy Modules; Legacy Code; Legacy Restructuring.

1. Introduction
This paper presents a closer look at software renovation and explains how legacy software evolution takes place for future change. It is based on the view that an organization's software systems provide valuable functionality that has been proven in practice. As such, they should be reused whenever possible. At the same time, the packaging of this business functionality is usually far from optimal: legacy systems are often based on old languages, database systems, and transaction monitors, are monolithic in design, and are non-maintainable as a result of repeated modification without supporting documents. As a consequence, legacy systems are very hard to change. A software system can be effectively evolved with the following procedures:
• Modularization
• Restructuring
• Analysis
• Reformation
• Transformation
The proposed procedures are explained in the following sections.

2. Related Work
This work is motivated by the migration approach suggested in [11], which explains why a module-based migration approach can be implemented for software maintenance. This paper also proceeds in that direction, but with legacy technical aspects and implementation procedures.

3. Modularization
Figure 1 depicts how the legacy source is divided into interacting modules.

Figure 1: Dividing legacy source into modules

The following are some identified issues associated with this procedure:
• Availability of the legacy sources.
• Language used to develop the legacy source.
• Complexity of source understanding and comprehension.
• Status of documentation and complexity of re-documentation.
• Implementation of the tools for analysis and description of the results.
• Feasibility of dividing the source into modules.
If division of the source into modules is feasible and the above-mentioned issues are resolved, the next sequence of the procedural approach can be undertaken.

4. Restructuring
The intercommunication of the modules through an organized architecture replaces the internal structure of the legacy source. Legacy modules are wrapped to interact with the organized architecture, in which they appear as modules. This process is a purely forward process, since the only modifications made to the legacy system are:
• Division into sub-modules.
• Wrapping of the modules, while the major parts of the legacy code remain untouched.

Figure 2: Restructuring procedure

The issues associated with this procedure are:
• Reengineer the databases.
• Build a software access layer to access the databases (legacy and new).
• Restore the programs so that they can access the new databases.
• Externalize a wrapper for each module (a sketch follows this list).
• Write organized scripts to interconnect the modules.
• Test the restructured system.
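A hedged sketch of the wrapping step referred to above: the legacy routine is kept untouched while a thin wrapper exposes it through a stable interface. All identifiers below are illustrative; the legacy entry point is a stand-in, not an actual routine from the paper:

    #include <stdexcept>
    #include <string>

    // Stand-in for the untouched legacy routine (in reality compiled COBOL or C).
    extern "C" int LEGACY_CALC_PREMIUM(const char* /*policyId*/, double* out) {
        *out = 42.0;   // placeholder result so the sketch is self-contained
        return 0;      // 0 = success in this assumed convention
    }

    // The module interface as seen by the organized architecture.
    class PremiumModule {
    public:
        virtual ~PremiumModule() = default;
        virtual double premiumFor(const std::string& policyId) = 0;
    };

    // The wrapper: delegates to the legacy code and translates its status
    // code into the error model used by the new architecture.
    class LegacyPremiumWrapper : public PremiumModule {
    public:
        double premiumFor(const std::string& policyId) override {
            double result = 0.0;
            if (LEGACY_CALC_PREMIUM(policyId.c_str(), &result) != 0)
                throw std::runtime_error("legacy premium computation failed");
            return result;
        }
    };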

5. Analysis of Legacy Source
The objective is to examine the sources of the legacy system and extract information from them that reveals their purpose and architecture.

5.1 Analysis Methods for Legacy Sources
The objective of analysis is to extract information from the legacy source that reveals its purpose and architecture. Analysis techniques are crucial for the preceding and subsequent procedures; this section covers those analysis techniques that can help to find candidate modules for reformation and development in a legacy system. Tools collect all classes of information about the system, which is used by the renovation engineer to find candidate modules. There is no single way to find all good modules; therefore, the tools should provide many different views on one legacy source system and show potential combinations of these views. This requires a hypertext-based browsing mechanism, potentially supported by intelligent agents and automata producing specialized reports, as well as an online explanation mechanism.

The interactive search for modules can have any one of a number of different starting points:
• The first is to interview, if possible, the users, maintainers, and designers of the legacy system in order to get a picture of the overall functionality of the system and, more specifically, to get an impression of which functionality should be preserved and which is redundant.
• Another starting point is to use the persistent data stores. Of all the different sorts of data playing a role in the legacy system, it is likely that the data stored in the database represents business (domain-specific) entities. Following such data down to the actual computations, as in program/data CRUD analysis, leads to those programs or procedures that could be candidate (domain-specific) modules.
• Another starting point consists of the program call relationships, i.e., inter-program relationship analysis, which can disclose the cohesion and coupling of modules, or the layers built into the legacy system. If one procedure invokes many others (high fan-out) and does not get invoked itself, it is likely to be a control (organizing) module with little built-in functionality. Likewise, if a procedure is called by many others (high fan-in), it is likely to be some sort of utility routine, dealing with error handling or logging. The procedures with both low fan-in and low fan-out are the ones that are likely to contain business logic (a sketch of this heuristic follows at the end of this section).
• The screen/report sequences can be identified, together with the key strokes leading to each subsequent screen or report. Such screen/report sequences are very close to use cases, telling what actions an end user performs. Moreover, following the flow of screen input fields through the program identifies those program slices that implement the given use case. This form of analysis can very well be supported by automated, interactive tools.
• Techniques for combining legacy elements in original ways in order to arrive at coherent modules include concept analysis and cluster analysis. Such techniques can be used to spot combined usage of pieces of data and functionality. For example, they can be used to group data elements into candidate classes, based on their usage in programs or procedures, which can then be made into methods of the derived class. In particular, concept analysis can be used to display the various combination possibilities in a concise and meaningful manner.
• Search for modules with specific functions, for example, a module for valuing insurance options. A hypertext-based legacy browsing system can provide various starting points for such a search, such as indexes on words occurring in comments, column names, inferred types, and so on.


Moreover, typical computations necessary for the behavior of the module sought may be identified using plan recognition, and these computations may then be packaged into the required modules.
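As one example of tool support, the fan-in/fan-out heuristic mentioned in the call-relationship analysis above can be sketched as follows; the call-graph representation and the thresholds are illustrative assumptions:

    #include <cstddef>
    #include <map>
    #include <set>
    #include <string>

    using CallGraph = std::map<std::string, std::set<std::string>>;  // caller -> callees

    enum class Role { Control, Utility, BusinessLogic, Unclassified };

    Role classify(const CallGraph& g, const std::string& proc) {
        std::size_t fanOut = 0, fanIn = 0;
        if (auto it = g.find(proc); it != g.end()) fanOut = it->second.size();
        for (const auto& [caller, callees] : g)
            if (caller != proc && callees.count(proc)) ++fanIn;
        const std::size_t kHigh = 10, kLow = 2;               // assumed thresholds
        if (fanOut >= kHigh && fanIn <= kLow) return Role::Control;        // organizing module
        if (fanIn >= kHigh && fanOut <= kLow) return Role::Utility;        // e.g. logging, error handling
        if (fanIn <= kLow && fanOut <= kLow)  return Role::BusinessLogic;  // candidate module
        return Role::Unclassified;
    }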

6. Reformation
This procedure is the reformation of individual modules: it considers the renovation and upgrading of each module, as depicted in Figure 3. The issues associated with this procedure are:
• Determine for each module which renovation strategy to use:
  - Leave it as is.
  - Completely replace it by commercial off-the-shelf software.
  - Perform a very detailed analysis of the component in order to extract its business logic, which can be used to rebuild or regenerate the module.
• Apply the selected reformation strategy.
• Test the modules.

Figure 3: Reformation with modules

7. Transformation
The goal is to technically restructure and improve the sources of the legacy modules. Transformation is a forward process, as the legacy sources are modified.

7.1 Transformation Techniques for Legacy Sources
Transformation techniques perform forward, systematic modifications of the legacy module source in order to enable its evolution and increase its flexibility. The legacy module source is transformed in order to:
• Restructure the whole system.
• Restructure the code of individual modules.
• Apply uniform comment conventions.
• Eliminate deprecated language features.
• Convert to a new language version.
• Translate to another language.
The reformation automata usually have three inputs:
• The sources of the legacy system.
• The repository resulting from the system analysis phase.
• A set of detailed requirements specifications for the transformation automata.
The crucial elements in successful reformation tools are:
• Language knowledge.
• Domain knowledge.
• System knowledge.
• Requirements and strategy.
It is a fact that major parts of the reformation procedure can be automated by combining human intelligence with full automation of repetitive tasks, gaining speed, quality, reproducibility, and traceability.

7.2 Restructuring Support
The division of the legacy source resulting from modularization now requires replacing the direct connections between legacy modules with indirect connections that are leveraged by the organized architecture. This procedure is a forward process, as the system is reorganized into modules while the code of each module is hardly affected. The following sequence of operations is needed to achieve this:
• Reengineer the databases.
• Build a software access layer to access the databases, legacy and new.
• Restore the programs so that they can access the new databases.
• Identify the level of granularity at which the legacy system will be decomposed. The major issues associated with this are:
  - The total number of modules should be manageable.
  - Smaller modules with many cross-relationships are better handled as a single, larger module.
• Wrap the identified modules in order to connect them with the organized architecture.
• Write organized scripts that simulate the connectivity in the original legacy system.

7.3 Reformation Support
The reformation can be started with individual modules. This may change the original code of a module. The most interesting properties of this approach are:
• The mutual dependencies between the evolution projects of the various modules have been eliminated or minimized.
• The reformation strategy may differ per module, as some modules may be replaced by COTS software.
In those cases where the decision is to perform a detailed reformation based on the existing code of a module, typically when much useful business logic is contained in it, transformation techniques may be applied to the code of the module. The issues associated with this procedure are:
• Transformations intended for code improvement, applying uniform layout conventions, goto elimination, code restructuring, and dead code and dead data elimination.
• Transformations intended for the replacement of certain properties of the code, such as a change of the user interface or the database engine.
• Full translation of the code to another language or platform, such as conversion between COBOL dialects or translation from an obsolete 4GL to a standard 3GL.

8. Conclusion
The work presented in this paper provides a procedural, integrated approach to overcoming the major challenges for future business operations, namely alignment with changing business goals and changing technologies, while retaining the legacy systems that support today's business operations as assets. The automated analysis and transformation of legacy systems was also discussed, along with the necessary techniques for the analysis, transformation, and reformation of legacy systems. The paper also concludes that legacy evolution is an economically motivated task and that a legacy system is a valuable asset for an organization and its business operations. The discussed approach focuses on the modularization of the whole legacy system and its issues, and proposes a successive development procedure for effective evolution.

References
[1] A. V. Aho, R. Sethi, and J. D. Ullman. Compilers: Principles, Techniques and Tools. Addison-Wesley, 1986.
[2] G. Visaggio, "Value-based decision model for renovation processes in software maintenance," Annals of Software Engineering, 9, Kluwer Academic Publishers, May 2000, pp. 215-233.
[3] G. Visaggio, "Ageing of a Data-Intensive Legacy System: Symptoms and Remedies," Journal of Software Maintenance: Research and Practice, John Wiley, vol. 13, pp. 281-308, 2001.
[4] C. Szyperski. Component Software: Beyond Object-Oriented Programming. Addison-Wesley, 1998.
[5] E. Stroulia, M. El-Ramly, L. Kong, P. Sorenson, and B. Matichuck, "Reverse engineering legacy interfaces: An interaction-driven approach," in 6th Working Conference on Reverse Engineering, WCRE'99, pages 292-301, IEEE Computer Society, 1999.
[6] M. Fowler, K. Beck, J. Brant, W. Opdyke, and D. Roberts. Refactoring: Improving the Design of Existing Code. Addison-Wesley, 1999.
[7] I. Jacobson, M. Griss, and P. Jonsson. Software Reuse: Architecture, Process and Organization for Business Success. Addison-Wesley, 1997.
[8] C. Jones. Applied Software Measurement: Assuring Productivity and Quality. McGraw-Hill, 1991.
[9] Oluwaseyi Adeyinka, "Service Oriented Architecture & Web Services: Guidelines for Migrating from Legacy Systems and Financial Consideration," Master thesis, Blekinge Institute of Technology, 2008.
[10] M. Simos. Organization domain modelling (ODM) guidebook version 2.0. Technical Report STARS-VCA025/001/00, Synquiry Technologies, Inc., 1996. URL: http://www.synquiry.com/. 450 pp.
[11] B. Meyer and C. Mingins, "Component-based development: From buzz to spark," IEEE Computer, 23(7):35-37, 1999.
[12] Vladimir Bacvanski, "An Object-Oriented, Component-based Approach to Migrating Legacy Systems," 2004.

Authors Profile

Dr. Shahanawaj Ahamad is an active academician and researcher in the field of software reverse engineering with ten years of experience, working with King Saud University's College of Arts and Science in Wadi Al-Dawasir, K.S.A. He is a member of various national and international academic and research groups, a member of journal editorial boards, and a reviewer. He is currently working on legacy system migration, evolution, and reverse engineering, and has published more than twenty papers in national and international journals and conference proceedings. He holds an M.Tech. degree in Information Technology followed by a Ph.D. in Computer Science, major Software Engineering, and has supervised many bachelor projects and master theses.


Improving Information Accessibility In Maintenance Using Wearable Computers
Lt.Dr.S Santhosh Baboo1 and Nikhil Lobo2
1 Reader, P.G. & Research Dept. of Computer Science, D.G. Vaishnav College, Chennai 106
2 Research Scholar, Bharathiar University

Abstract: In aerospace and defense, maintenance is being carried out using technical manuals in hardcopy. Manuals in hardcopy format are very difficult to carry, access information from, and understand while carrying out maintenance. This paper brings out the concept of Interactive Electronic Technical Manuals that can run on wearable computers, making maintenance more effective while reducing the effort involved. Manuals are now compact and easy to handle, easier to access, can be read in a systematic manner, and are easily comprehensible.

Keywords: Wearable Computer, Interactive Electronic Technical Manual, Manuals, Documentation

1. Introduction
An aircraft is required to be maintained in airworthy condition. During maintenance of aircraft, a technician is often required to refer to manuals for maintenance procedures. These manuals are at present in hardcopy, consuming time to access the relevant information. These hardcopy manuals can be replaced by Interactive Electronic Technical Manuals (IETMs) and accessed using wearable computers, improving the performance of technicians by increasing accessibility to information. Moreover, these technicians can receive the latest information updates electronically without waiting for the amended manuals in hardcopy. The following are the objectives to be attained by using Interactive Electronic Technical Manuals instead of paper-based manuals:
• 100% identification of causes of problems (fault isolation)
• Time spent in solving the problem (troubleshooting) decreased by 20-25%
• Error reduction in performing removals and replacements by 35-40%
2. Interactive Electronic Technical Manuals (IETMs)
Interactive Electronic Technical Manuals (IETMs) are manuals in electronic format with interactivity, designed for display on computers. An IETM is intended to be the functional equivalent of a paper-based technical manual and in most cases a total replacement for the paper manual (Eric Jorgensen, 1994). The information in these manuals is presented in a window frame with navigation tools for zooming and panning, allowing easy comprehension. Information is displayed based on the user's selection of chapters, sections and subsections.

Figure 1. Interactive Electronic Technical Manual

3. Need for Interactive Electronic Technical Manuals (IETMs) when compared to Paper-Based Manuals
• Information can be retrieved easily
• Since information is presented in electronic form, less storage is required
• Unlike hardcopies, pages are not subject to wear and tear
• IETMs can be easily loaded into portable and wearable computers, which maintenance personnel can take to the field
• Difficult procedures can be integrated with multimedia elements like video and animations to make understanding easier
• Information on various configurations of equipment can be maintained and displayed by storing only the differences
• The IETM can be used as a basis for developing computer-based training packages on the same subject

Figure 2. IETMs compared to Paper-Based Manuals

4. Types of Interactive Electronic Technical Manuals
• Page images obtained by scanning in raster format, with the table of contents, list of tables, list of figures and index hyperlinked to the respective contents of the manual. A user can select a topic from the table of contents and the respective raster page is displayed. The page orientation of the manual is retained, and it can be viewed and directly printed as per format specifications.
• An ASCII or PDF document in a scrolling text window. In addition to the above hyperlinks, it also contains hyperlinks to sections, tables and figures. This document can be linked to video, audio and external applications, and may contain raster and vector graphics. Bookmarks, search and sticky notes are provided. The manual can be viewed and directly printed as per format specifications.
• SGML files with the content structured so that it can be viewed as smaller logical blocks of text with very limited use of scrolling. It is viewed as an indexed PDF file. The information authored is fully structured and hierarchical.
• In a relational database, data tagged with SGML is stored to prevent data redundancy and enforce data integrity. Cross-references are dynamic and the search feature is robust. This type of IETM is integrated with data from other processes and systems like expert systems, test equipment and diagnostics.

5. Features of Interactive Electronic Technical Manuals
Authoring Module
To create manuals with their respective chapters, sections and subsections and incorporate them into the IETM.
Features:
• User Authentication – to enter the Authoring Module, a user is prompted to enter a login identification (ID) and password
• Manual Description – facility to add or delete manuals and their respective chapters, sections and subsections
• Builder – an editor that allows users to create individual pages to be incorporated into the manual
• Publishing – publishing a Viewer Module

Viewer Module
To view manuals by clicking on their respective chapters, sections and subsections.
Features:
• User Authentication – to enter the Viewer, a user is prompted to enter a login identification (ID) and password
• Split Window – to view text pages and corresponding illustrations simultaneously
• Image Viewer – to pan and zoom illustrations
• Safety Notes – warnings, cautions and notes used in the manual are displayed in separate windows to alert users to their importance
• Acronyms and Abbreviations – glossary containing the list of acronyms and abbreviations used in the manuals
• Find – to search for a specific word in a page
• Search – to search for a specific word within manuals
• Notes – facility for users to create their own notes while browsing
• History Back and Forward – keeps track of the order of pages that have been visited by the user
• Bookmark – allows a user to bookmark a page to return to it later
• Print – to print a desired chapter, section or subsection
• Help – to guide the user in how to use the Viewer Module
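Purely as an illustration (the paper gives no code), the manual / chapter / section / subsection hierarchy that both the Authoring and Viewer modules navigate could be held in a simple tree; all names below are hypothetical:

    #include <memory>
    #include <string>
    #include <utility>
    #include <vector>

    struct Node {
        std::string title;
        std::string pageContent;                  // text shown in the split window
        std::vector<std::unique_ptr<Node>> children;

        Node* addChild(std::string t) {
            children.push_back(std::make_unique<Node>());
            children.back()->title = std::move(t);
            return children.back().get();
        }
    };

    int main() {
        // Manual -> chapter -> section, as the Viewer's tree presents it.
        Node manual{"Maintenance Manual", "", {}};
        Node* chapter = manual.addChild("Chapter 1: Hydraulic System");
        chapter->addChild("Section 1.1: Removal and Installation");
        return 0;
    }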






6. Manuals commonly used in IETMs

Maintenance Manual
The description and operation of each aircraft system is explained in this manual, with procedures for removal and installation of the assemblies that constitute that system. Procedures for repair and cleaning are also included, along with inspection and testing.

Illustrated Parts Catalogue
The assemblies along with their respective parts are listed in this manual. Each part has a unique part number, nomenclature, and the quantity of that part in the assembly. The vendor that supplies each part is also mentioned.

Description and Operation Manual
The description and operation of each aircraft system is explained in this manual.

Flight Manual
The description and operation of each aircraft system is explained in this manual, and recommended procedures for normal operations and emergencies are given.

Consumable Products Manual
For each aircraft system, this manual lists the consumables required to carry out maintenance of that system. The unique part number, along with the nomenclature and quantity, is mentioned.

Master Servicing Schedules
This manual covers the preventive maintenance operations to be carried out to keep the aircraft in airworthy condition. It also includes basic servicing, component replacements, and unconditional inspections and checks.

Storage and Preservation Manual
This manual details the procedures to be carried out for storage and preservation of the aircraft and its components. It also includes instructions for packing and transportation of an aircraft with its on-board components.

TTGE
The tools and ground support equipment required for maintenance of an aircraft are listed in this manual.

7. Wearable Computers
A wearable computer is "a computer that is subsumed into the personal space of the user, controlled by the wearer, and that is always on and always accessible" (Steve Mann, 1997). It is a portable computer worn on the body of a technician, into which commands can be entered and executed while the technician is performing maintenance operations. These portable computers normally provide voice recognition and a head-mounted display as input and output interfaces. The technician is now able to access the capabilities of a desktop and is in contact with Interactive Electronic Technical Manuals containing maintenance instructions at all times. Three dominant aspects characterize wearable computers: they are always on and always ready, they are totally controlled by the user, and they are considered both by the user and by others around to belong to the user's personal space (Mann, 1997). A wearable computer is a potential platform for many different applications that require privacy, mobility and continuous access to information (Lehikoinen, 2002).

Figure 3. WT4000 wearable computer with a scanner attachment

The WT4000 is 5.7 inches in length, 3.7 inches in width and 1.0 inch in height, weighing approximately 390.2 grams. It has a keyboard consisting of 23 alphanumeric keys and a memory (FLASH/RAM) of 64/128 MB. The display is a backlit color TFT with a resolution of QVGA in landscape mode (320x240). The operating system is Microsoft Windows CE 5.0 Professional Version. It can be operated in temperatures ranging from -20° to 50° C and can withstand multiple drops onto concrete from a height of 4 feet. Optional accessories include the wearable scanners RS309 and RS409.

8. Creation of Interactive Electronic Technical Manuals for a Military Helicopter
Interactive electronic technical manuals were developed and implemented in the field using wearable computers for the following manuals:
• Description and Operation
• Maintenance Manual
• Fault Isolation Manual
• Airplane Illustrated Parts Catalog
The following metrics were collected in the field during maintenance of the helicopter.

Table 1: Comparison between Paper-Based Manuals and Interactive Electronic Technical Manuals for a Military Helicopter

Task: Correct identification of causes of 12 problems (faults)
  Paper-Based Manuals: 7
  Interactive Electronic Technical Manuals: 12

Task: Time spent in solving the 12 problems (troubleshooting)
  Paper-Based Manuals: 82 hours
  Interactive Electronic Technical Manuals: 63 hours

Task: Number of errors encountered when performing 38 removals and 12 replacements
  Paper-Based Manuals: 14
  Interactive Electronic Technical Manuals: 9



The above metrics indicate the following:
• Improvement in identification of causes of problems (fault isolation) from 58% to 100%
• Time spent in solving the problems (troubleshooting) decreased by 23%
• Error reduction in performing removals and replacements of 36%
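These percentages follow directly from Table 1: 7 of the 12 faults were correctly isolated with paper manuals (7/12 ≈ 58%) against 12 of 12 with IETMs; troubleshooting time fell from 82 to 63 hours, a (82 − 63)/82 ≈ 23% decrease; and errors fell from 14 to 9, a (14 − 9)/14 ≈ 36% reduction. The transport aircraft figures in Section 9 are computed in the same way.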

9. Creation of Interactive Electronic Technical Manuals for a Transport Aircraft
Interactive electronic technical manuals were developed and implemented in the field using wearable computers for the following manuals:
• Maintenance Manual
• Fault Isolation Manual
• Airplane Illustrated Parts Catalog
The following metrics were collected in the field during maintenance of the transport aircraft.
Table 2: Comparison between Paper-Based Manuals and Interactive Electronic Technical Manuals for a Transport Aircraft

Task: Correct identification of causes of 9 problems (faults)
  Paper-Based Manuals: 5
  Interactive Electronic Technical Manuals: 9

Task: Time spent in solving the 9 problems (troubleshooting)
  Paper-Based Manuals: 58 hours
  Interactive Electronic Technical Manuals: 46 hours

Task: Number of errors encountered when performing 16 removals and 4 replacements
  Paper-Based Manuals: 8
  Interactive Electronic Technical Manuals: 5

The above metrics indicate the following:
• Improvement in identification of causes of problems (fault isolation) from 56% to 100%
• Time spent in solving the problems (troubleshooting) decreased by 21%
• Error reduction in performing removals and replacements of 38%

10. Conclusion
The metrics collected during maintenance carried out on a military helicopter and a transport aircraft using IETMs on wearable computers were analyzed, and the following are the benefits:
• 100% identification of causes of problems (fault isolation)
• Time spent in solving the problem (troubleshooting) decreased by 20-25%
• Error reduction in performing removals and replacements by 35-40%
In conclusion, Interactive Electronic Technical Manuals run on wearable computers result in manuals that are compact and easy to handle, easier to access, can be read in a systematic manner, and are easily comprehensible.

References
[1] Steve Mann, "Smart Clothing: The Wearable Computer and WearCam," Personal Technologies, Vol. 1, No. 1, Mar. 1997.
[2] Eric L. Jorgensen, "The Interactive Electronic Technical Manual Overview – Setting the Stage," AFEI CALS Expo International, Oct. 1994.
[3] Steve Mann, "An Historical Account of the WearComp and WearCam Inventions Developed for Applications in Personal Imaging," The First International Symposium on Wearable Computers: Digest of Papers, IEEE Computer Society, pp. 66-73, 1997.
[4] J. Lehikoinen, "Interacting with wearable computers: techniques and their application in wayfinding using digital maps," Ph.D. Thesis, University of Tampere, Department of Computer and Information Sciences, Report A-2002-2, 2002.


Remote Laboratory for Teaching Mobile Systems
Adil SAYOUTI1, Adil LEBBAT 1, Hicham MEDROMI1 and Fatima QRICHI ANIBA1
Laboratoire d’Informatique, Systèmes et Énergies Renouvelables Team Architecture of Systems, ENSEM, BP 8118, Oasis, Casablanca, Morocco [email protected], [email protected], [email protected] and [email protected]

Abstract: This paper presents the current contribution of ENSEM (Hassan II University) to the Innovative Educational Concepts for Autonomous and Teleoperated Systems project, which aims to create an innovative educational tool allowing students to perform remote laboratory experiments on autonomous and teleoperated mobile systems. Although the Internet offers a cheap and readily available communication channel for teleoperation, there are still many problems that need to be solved before successful real-world applications can be realized. These problems include its restricted bandwidth and arbitrarily large transmission delay, which influence the performance of remote control over the Internet. In this article, we propose a solution that consists in equipping the mobile systems with a high degree of local intelligence in order for them to autonomously handle the uncertainty in the real world as well as the arbitrary network delay.

Keywords: E-learning, Control Architecture, Software Architecture, Multi-Agent System, Distributed System, Mobile System, Internet.

1. Introduction
Experimentation is a very important part of education in engineering. This is also true for mechatronic engineering, which is a relatively new field combining three engineering disciplines: mechanical engineering, electrical engineering and software engineering. The equipment needed for experiments in mechatronics is generally expensive. One solution for expensive equipment is sharing the available equipment with other universities around the world [1]. There are two possibilities to realize this:
• Remotely accessible student laboratory facilities, which - with the advent of the Internet and its rapidly spreading adoption in almost all spheres of society - have become feasible and are increasingly gaining popularity.
• Virtual reality (VR), a system which allows one or more users to move and react in a computer-generated environment.
At present, several e-learning laboratories have been developed. Two categories of them can be distinguished:
• Remote laboratories (Figure 1), which offer remote access to real laboratory equipment and instruments;
• Virtual laboratories (Figure 2), which offer access to a virtual environment using simulation software.

Figure 1. Remote laboratories

Figure 2. Virtual laboratories

2. Multi-Agent Control Systems for Autonomous Systems

The organization of a system - or its control architecture - determines its capacity to achieve autonomous tasks and to react to events [2]. The control architecture of an autonomous mobile system must have both decision-making and reactive capabilities: situations must be anticipated and the adequate actions decided by the mobile system accordingly, tasks must be instantiated and refined at execution time according to the actual context, and the mobile system must react in a timely fashion to events. This can be defined as rational behavior, measured by the mobile system's effectiveness and robustness in carrying out tasks. To meet this global requirement, the control system architecture should have the following properties [3]:
• Programmability: a useful mobile system cannot be designed for a single environment or task, programmed in detail. It should be able to achieve multiple tasks described at an abstract level. The functions should be easily combinable according to the task to be executed.
• Autonomy and adaptability: the mobile system should be able to carry out its actions and to refine or modify the task and its own behavior according to the current goal and execution context as perceived.
• Reactivity: the mobile system has to take into account events with time bounds compatible with the correct and efficient achievement of its goals (including its own safety).
• Consistent behavior: the reactions of the mobile system to events must be guided by the objectives of its task.
• Robustness: the control architecture should be able to exploit the redundancy of the processing functions. Robustness will require the control to be decentralized to some extent.
• Extensibility: integration of new functions and definition of new tasks should be easy. Learning capabilities are important to consider here: the architecture should make learning possible.

We note an interesting link between the desirable properties of an intelligent control architecture for autonomous mobile systems and the behavior of agent-based systems:
• Agent-based approaches to software and algorithm development have received a great deal of research attention in recent years and are becoming widely utilised in the construction of complex systems.
• Agents use their own localised knowledge for decision-making, supplementing this with information gained by communication with other agents.
• Remaining independent of any kind of centralised control while taking a local view of decisions gives rise to a tendency for robust behavior.
• The distributed nature of such an approach also provides a degree of tolerance to faults, both those originating in the software/hardware system itself and those in the wider environment.

It is for these reasons that we consider an agent-based system to be a suitable model on which to base an intelligent control architecture for complex systems requiring a large degree of autonomy. Although widely used, multi-agent systems research has also led to a number of definitions of agency; once again, in some cases, these definitions are inconsistent. In our context, the terms agent or intelligent agent refer to a material or software entity with one or more independent threads of execution, which is entirely responsible for its own input and output from/to the environment in which it is situated [4]. It is therefore autonomous. We assume that the agent has well-defined objectives or goals and exercises problem-solving behavior in pursuit of these goals, reacting in a timely fashion. It is this behavior that allows us to refer to the agent as intelligent. While agents are flexible problem solvers in their own right, the power of agents is only fully realised once multiple agents are combined and communicating. This is referred to as a multi-agent system (MAS). As agents are equipped with different abilities and different goals, each agent has a distinct sphere of influence within the environment in which all the agents are situated. These spheres of influence may overlap, defining a fundamental relationship between agents. Further relationships may be superimposed through the use of communication channels. A MAS, therefore, has all the basic properties of a complex system: autonomy, asynchronicity, concurrency, reactivity and extensibility.
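The notion of agent used here, an entity with one or more independent threads of execution responsible for its own input and output, can be sketched minimally; the class and method names are our illustrative assumptions, not the authors' code:

    #include <atomic>
    #include <functional>
    #include <thread>
    #include <utility>

    class Agent {
        std::function<void()> step_;      // one perception/decision/action cycle
        std::atomic<bool> running_{false};
        std::thread worker_;
    public:
        explicit Agent(std::function<void()> step) : step_(std::move(step)) {}
        ~Agent() { stop(); }
        void start() {
            running_ = true;
            worker_ = std::thread([this] { while (running_) step_(); });
        }
        void stop() {
            running_ = false;
            if (worker_.joinable()) worker_.join();
        }
    };

    // Usage: Agent perception([]{ /* read sensors, update shared map */ });
    //        perception.start();  /* ... */  perception.stop();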

3. System Architecture
Remote laboratories provide a live performance laboratory accessible via the Internet, which can be used to cover the experimental issues in any tele-education system. Clearly, as bandwidth increases and higher-speed network access reaches users, these factors play an important role in user adoption of remote laboratories. The concept of a remote laboratory is defined as a mechatronic workspace for distance collaboration and experimentation in research or other creative activity, generating and delivering results using distributed information and communication technologies. To implement a remote laboratory, a common Internet-based teleoperation model [5] is used, as shown in Figure 3.

Figure 3. System Architecture

The remote user, through an Internet browser, addresses an HTTP request to a web server and downloads an application onto his workstation. A connection is then established to the server in charge of the management of the mobile system to be controlled. The user is then able to take remote control of it. In parallel, other connections are established to multimedia servers broadcasting signals (video, sound) from the system being controlled. The Internet (a network without quality of service) limits the quantity of information that can be transmitted (bandwidth) and introduces delays, which can make remote control difficult or impossible. The solution proposed through this work to face the limitations of the Internet is founded on the autonomy and intelligence, based on multi-agent systems, granted to the mobile system in order to interact with its environment and to collaborate with the remote user. The need to assign the mobile system the maximum of autonomy and intelligence brought us to examine in detail the choice of a remote control architecture [6].


4. Remote Control Architecture
3.4 Control architecture Humans are sophisticated autonomous agents that are able to function in complex environments through a combination of reactive behavior and deliberative reasoning. Motivated by this observation, we propose a hybrid control architecture, called EAAS [7] for EAS (Equipe Architecture des Systèmes) Architecture for Autonomous System. Our architecture combines a behavior-based reactive component and a logic-based deliberative component. EAAS is useful in advanced mobile systems that require or can benefit from highly autonomous operation in unknown environment, time-varying surroundings, such as in space robotics and planetary exploration systems, where large distances and communication infrastructure limitations render human teleoperation exceedingly difficult. The proposed generic architecture consists in associating a deliberative approach for the high part and a reactive approach for the low part. The deliberative part or hierarchical agent allows decision-making and actions planning thanks to the use of the agent selection of actions (Figure 4). This last is composed of three levels: pilot, navigator and path planner.

The path planner generates the path using as inputs the goal, the mobile system localization and the global map of the environment. The reactive part of our architecture, based on couples of perception/action agents, allows the mobile system to react to unforeseen events (Figure 5).

Figure 5. Reactive Part of the EAAS Architecture

4.2 Remote Control Software Architecture
A software architecture has been defined to make remote control of mobile systems possible. Our software architecture is based on a set of independent agents running in parallel. The left side of Figure 6 represents the server side. It is basically composed of three main agents. The "Connection Manager" manages the different connected clients according to a control algorithm; this algorithm is chosen by the designer of the system depending on the application: master/slave, priority, timeout, and so on (a simple sketch is given below). The "Media" agent communicates with the camera in order to broadcast signals (video, images) of the mobile system in its environment. The "SMA EAAS" (EAS Architecture for Autonomous Systems) represents our control architecture; EAAS is a hybrid control architecture including a deliberative part (the actions selection agent) and a reactive part, the latter based on direct links between the sensors (perception agent) and the effectors (action agent).
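The control algorithm of the "Connection Manager" can be as simple as a priority queue with a per-client timeout. The following Python sketch is our own illustration, not the paper's implementation; the paper only names master/slave, priority and timeout as possible policies, so every name and policy detail here is an assumption:

```python
import time
import heapq

class ConnectionManager:
    """Grant control of the mobile system to one client at a time,
    ordered by priority, with a per-client control timeout."""
    def __init__(self, timeout=60.0):
        self.timeout = timeout
        self.waiting = []          # heap of (priority, arrival, client)
        self.current = None        # (client, granted_at)

    def connect(self, client, priority=0):
        heapq.heappush(self.waiting, (priority, time.monotonic(), client))

    def owner(self):
        """Return the client currently allowed to send orders."""
        now = time.monotonic()
        if self.current and now - self.current[1] > self.timeout:
            self.current = None    # control expired: pass the hand
        if self.current is None and self.waiting:
            _, _, client = heapq.heappop(self.waiting)
            self.current = (client, now)
        return self.current[0] if self.current else None

cm = ConnectionManager(timeout=60.0)
cm.connect("student-1", priority=2)
cm.connect("teacher", priority=0)  # lower value = higher priority
print(cm.owner())                  # teacher controls first
```

A master/slave policy would simply pin `current` to the master client; the timeout variant above frees the robot automatically when a client stops interacting.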

Figure 4. Actions Selection Agent

The pilot generates the setting points needed by the action agent, based on a trajectory provided as input. This trajectory is expressed in a different frame (e.g., a Cartesian frame) from that of the setting points; it describes, in time, the position, kinematic and/or dynamic parameters of the mobile system in its workspace. The pilot's function is to convert these trajectories into setting points to be performed by the action agent. The navigator generates the trajectories for the pilot based on data received from the upper level. These input data are of a geometrical type, still in a Cartesian frame, but not necessarily in the mobile system frame. Moreover, these data do not integrate dynamic or kinematic aspects: contrary to the trajectory, there is no strict definition of the velocity, the acceleration or the force versus time. These input data are called a path (continuous or discontinuous) in a Cartesian frame. The navigator must translate a path into a trajectory.
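To make the path planner, navigator and pilot pipeline concrete, the following Python sketch (ours, not from the paper; all class and method names, and the data shapes, are illustrative assumptions) shows how the three levels could be chained, each level refining the output of the level above:

```python
# Illustrative sketch of the three-level actions selection agent:
# path planner -> navigator -> pilot. Names and data shapes are assumptions.

class PathPlanner:
    """Produces a geometric path (waypoints) from the goal, the robot
    localization, and the global map; no timing information yet."""
    def plan(self, goal, localization, global_map):
        # A real planner (A*, RRT, ...) would search global_map here.
        return [localization, goal]  # trivial straight-line path

class Navigator:
    """Turns a path (pure geometry) into a trajectory (geometry + time)."""
    def to_trajectory(self, path, speed=0.2):
        trajectory = []
        t = 0.0
        for a, b in zip(path, path[1:]):
            dist = ((b[0]-a[0])**2 + (b[1]-a[1])**2) ** 0.5
            t += dist / speed          # attach timing to each segment
            trajectory.append((b, t))  # (waypoint, arrival time)
        return trajectory

class Pilot:
    """Converts a trajectory into setting points for the action agent,
    e.g. target poses with deadlines in the mobile system's own frame."""
    def to_setting_points(self, trajectory):
        return [{"target": wp, "deadline": t} for wp, t in trajectory]

# Deliberative pass: goal -> path -> trajectory -> setting points
planner, navigator, pilot = PathPlanner(), Navigator(), Pilot()
path = planner.plan(goal=(5.0, 3.0), localization=(0.0, 0.0), global_map=None)
setting_points = pilot.to_setting_points(navigator.to_trajectory(path))
print(setting_points)
```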

Figure 6. Proposed remote control software architecture


The right side of the figure represents the client side, where agents are loaded in a web browser. The "Remote Client" corresponds to a graphical user interface which allows the user to send orders to the mobile system and receive information about the environment. The "Sender" and "Receiver" agents allow communication between the client and the server. The "Pinger" and "Ponger" agents are used to dynamically observe the network, as sketched below. If the connection is accepted, the "Connection Manager" informs the "Local Client" agent, which provides the interface with the "SMA EAAS" for transmitting orders to the mobile system.
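As an illustration of the Pinger/Ponger pair, the sketch below is our own simplification: the agent names follow the paper, but the implementation is entirely assumed. It measures the round-trip time of small probe messages so that the client could adapt the remote control loop to the current network delay; a pair of in-process queues stands in for the real TCP link so the example is self-contained:

```python
import time
from queue import Queue

# Simplified Pinger/Ponger agents. In the real system they would run as
# independent threads exchanging messages over the network; two queues
# stand in for that link here.
to_server, to_client = Queue(), Queue()

def ponger_step():
    """Server-side Ponger: echo any probe it receives."""
    probe = to_server.get()
    to_client.put(probe)  # echo back the timestamped probe

def pinger_probe():
    """Client-side Pinger: send a timestamped probe, wait for the echo,
    and return the measured round-trip time in seconds."""
    to_server.put(time.monotonic())
    ponger_step()                     # in reality the server does this
    sent_at = to_client.get()
    return time.monotonic() - sent_at

rtt = pinger_probe()
# The client could, e.g., lower the command rate when rtt grows too large.
print(f"estimated RTT: {rtt*1000:.2f} ms")
```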

The link between the client(s) and the server is based on two standard Internet protocols. First, the communication is based on the HTTP protocol, through a home page linking to the applets used to move the mobile system; these applets are interpreted by the JVM of the browser. Then, the applets downloaded to the client communicate with the application server through the TCP/IP protocol.

5. Application
The mobile system used in our application is a Lego robot. Lego Mindstorms [8] is a development kit for building a robot using Lego blocks, and is gaining widespread acceptance in the field of technical education. Using Mindstorms, a robot can be built for various purposes and functions, and it is beginning to be considered as a component of experimental equipment in robotics research. The Lego mobile robot (Figure 7) is powered by three reversible motors coupled to wheels and equipped with four sensors: a sonar sensor, a sound sensor, a light sensor and a touch sensor. The data produced by these sensors are used by the perception agent to build a global map of the Lego robot's environment. This global map, the goal and the Lego robot localisation are used by the actions selection agent to define a plan of actions to achieve its mission. The Lego robot is equipped with a Bluetooth connection that permits communication with the application server and facilitates its movement in the environment in order to reach its objective.

Figure 7. General Architecture

The Web interface of our application is designed with the intention of making remote control easy for researchers and students interacting with the Lego mobile robot. A simple interface is designed to provide as much information as possible for remote control. This user interface consists of several Java applets, as shown in Figure 8, and can work in any web browser.

Figure 8. Web Interface

The remote users can take online knowledge tests in order to follow a training course suited to their needs. Different training modules (Cursus link) are available on our web site, namely: remote control, multi-agent systems, systems architecture, control architecture and autonomous mobile systems. We have also provided the remote users with a discussion forum within our application, so that they can exchange ideas on the subject of remote control.

6. Conclusion
In this paper, we have presented a Web-based remote control application so that Internet users, especially researchers and students, can control the mobile robot to explore a dynamic environment remotely from their homes and share this unique robotic system with us. In the first part, an analysis of the existing control architectures and the approaches for their development guided us in designing a hybrid control architecture, called EAAS for EAS Architecture for Autonomous System. The proposed generic architecture consists of associating a deliberative approach for the high level and a reactive approach for the low level. The deliberative level allows decision-making and action planning thanks to the actions selection agent. The reactive level, based on couples of perception/action agents, allows the mobile system to react to unforeseen events. Then, the software implementation of our architecture was presented. It is achieved in the shape of a multi-agent system by reason of its autonomy, intelligence, flexibility and its various possibilities of evolution. In the second part, in order to validate the choice of our architecture, we presented one of the applications achieved by the system architecture team of the ENSEM.




References
[1] P. Le Parc, J. Vareille and L. Marce, "E-productique ou contrôle et supervision distante de systèmes mécaniques sur l'Internet", Journal européen des systèmes automatisés (JESA), Vol. 38, No. 5, pp. 525-558, 2004.
[2] C. Novales, G. Mourioux and G. Poisson, "A multi-level architecture controlling robots from autonomy to teleoperation", First National Workshop on Control Architectures of Robots, Montpellier, April 6-7, 2006.
[3] R. Alami, R. Chatila, S. Fleury, M. Ghallab and F. Ingrand, "An architecture for autonomy", The International Journal of Robotics Research, Special Issue on Integrated Architectures for Robot Control and Programming, Vol. 17, No. 4, pp. 315-337, 1998.
[4] J. Ferber, "Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence", Addison Wesley Longman, Harlow, UK, 1999.
[5] A. Sayouti, F. Qrichi Aniba and H. Medromi, "Remote Control Architecture over Internet Based on Multi Agents Systems", International Review on Computers and Software (I.RE.CO.S), Vol. 3, No. 6, pp. 666-671, November 2008.
[6] A. Sayouti, "Conception et Réalisation d'une Architecture de Contrôle à Distance Via Internet à Base des Systèmes Multi-Agents", PhD Thesis, ENSEM, Hassan II University, 2009.
[7] A. Sayouti, H. Medromi, F. Qrichi Aniba, S. Benhadou and A. Echchahad, "Modeling Autonomous Mobile System with an Agent Oriented Approach", International Journal of Computer Science and Network Security (IJCSNS), Vol. 9, No. 9, pp. 316-321, September 2009.
[8] LEGO Company, "Lego Mindstorms official page", http://mindstorms.lego.com, 1997.


Authors Profile

Adil Sayouti received the PhD in computer science from the ENSEM, Hassan II University, Casablanca, Morocco, in July 2009. In the same year he received the prize of excellence for the best defended thesis of 2009. In 2003 he obtained the Microsoft Certified Systems Engineer (MCSE) certification. In 2005 he joined the system architecture team of the ENSEM, Casablanca, Morocco. His main research interests concern remote control over the Internet based on multi-agent systems.

Hicham Medromi received the PhD in engineering science from the Sophia Antipolis University, Nice, France, in 1996. He is responsible for the system architecture team of the ENSEM, Hassan II University, Casablanca, Morocco. His main research interest concerns control architectures for mobile systems based on multi-agent systems. Since 2003 he has been a full professor of automatics, productics and computer science at the ENSEM, Hassan II University, Casablanca.

Fatima Qrichi Aniba received an electrical engineer's degree from the ENSAM, Meknes, Morocco, in 2003. In 2005 she obtained her Degree in High Education Deepened in automatics and productics from the ENSEM, Hassan II University, Casablanca, Morocco, and joined the system architecture team of the ENSEM. Her main research is about real-time architectures based on multi-agent systems.

Adil Lebbat received his Degree in High Education Deepened in Networks & Telecom in 2004 from the Chouaib Doukkali University, El Jadida, Morocco. In 2006 he joined the system architecture team of the ENSEM. His main research is about real-time distributed platforms based on multi-agent systems: applications based on the Linux kernel for telecommunications products.


Resource Allocation in Wireless Networks
Manish Varshney 1, Dr. Yashpal Singh 2 and Vidushi Gupta 3

1 Sr. Lecturer, Deptt. of Computer Science & Engg., SRMSWCET, Bareilly, India, [email protected]
2 Reader, Deptt. of Computer Science & Engg., BIET, Jhansi, India, [email protected]
3 Lecturer, Deptt. of Computer Science & Engg., SRMSWCET, Bareilly, India, [email protected]

Abstract: In this paper, we study resource allocation techniques in wireless networks, describing utility-based functions and protocols using directional antennas. In other words, the study is based on utility-based maximization for resource allocation. We consider two types of traffic, i.e., best effort and hard QoS, and develop some essential theorems for optimal wireless resource allocation. Directional antenna technology provides the capability for a considerable increase in spatial reuse, which is essential in the wireless medium. A bandwidth reservation protocol for QoS routing in TDMA-based MANETs using directional antennas is also presented. The routing algorithm allows a source node to reserve a path to a particular destination with the needed bandwidth, which is represented by the number of slots in the data phase of the TDMA frame. The performance of the proposed schemes is evaluated via simulations. The results show that optimal wireless resource allocation depends on traffic types, total available resource and channel quality, rather than solely on the channel quality or traffic types as assumed in most existing work. Further optimizations to improve the efficiency and resource utilization of the network are provided.

Keywords: Utility-based maximization, wireless networks, resource allocation, mobile ad hoc networks (MANETs), quality of service (QoS), routing, time division multiple access (TDMA).

1. Introduction
Resource allocation is an important research topic in wireless networks [1-7]. In such networks, radio resource is limited, and the channel quality of each user may vary with time. Given the channel conditions and the total amount of available resource, the system may allocate resource to users according to performance metrics such as throughput and fairness [1], [2], or according to the types of traffic [3]. Another foremost requirement in wireless networks, however, is spatial reuse. In order to communicate with a node in a particular location, a node transmitting with an omnidirectional antenna radiates its power equally in all directions, which prevents other nodes located in the area covered by the transmission from using the medium simultaneously. Directional antennas address this problem: they allow a transmitting node to focus its antenna in a particular direction, and similarly allow a receiving node to focus its antenna in a particular direction, which increases sensitivity in that direction while significantly reducing multi-path effects and co-channel interference (CCI). This allows directional antennas to accomplish two objectives: (1) power saving: a smaller amount of power can be used to cover the same desired range; and (2) spatial reuse: since transmission is focused in a particular direction, the surrounding area in the other directions can still be used by other nodes to communicate.

Returning to the throughput and fairness required in resource allocation, "throughput" and "fairness" are conflicting performance metrics. To maximize system throughput, the system will allocate more resource to the users in better channel conditions. This may cause the radio resource to be monopolized by a small number of users, leading to unfairness. On the other hand, to provide fairness to all users, the system tends to allocate more resource to the users in worse channel conditions so as to compensate for their shares; as a result, the system throughput may be degraded dramatically. The work in [6-7] shows that the system can behave either "throughput-oriented" or "fairness-oriented" by adjusting certain parameters. However, it does not describe how to determine and justify the values of these parameters, leaving this trade-off unsolved.

In this paper, we focus on the basic techniques required for resource allocation in wireless networks, and two basic factors have to be resolved. 1) The first factor relates to "user satisfaction": since it is unlikely that the different demands of all users can be fully satisfied, we turn to maximizing the total degree of user satisfaction, thereby avoiding the "throughput-fairness" dilemma. The degree of user satisfaction with a given amount of resource can be described by a utility function U(r), a non-decreasing function of the given amount of resource r: the more resource is allocated, the more the user is satisfied. The marginal utility function, defined by u(r) = dU(r)/dr, is the derivative of the utility function U(r) with respect to the given amount of resource r. The exact expression of a utility function may depend on traffic types and can be obtained by studying the behavior and feelings of users. We leave the work of finding utility functions to psychologists and economists, and focus on maximizing the total utility for a given set of utility functions. 2) The second factor is linked to directional antennas. Different models have been presented in the literature for directional antennas [8]. In this paper, the multi-beam adaptive array (MBAA) system is used [1]; it is capable of forming multiple beams for simultaneous transmissions or receptions of different data messages.

The rest of the paper is organized as follows. In Section 2, resource allocation in wireless networks through utility functions is presented and proved to be optimal under certain conditions, and resource allocation through directional antennas is described. Section 3 analyzes the performance of both approaches. Finally, the paper is concluded in Section 4.



2. Resource Allocation In Wireless Networks Through Utility Functions
In [4], a utility-based power control scheme with respect to channel quality is proposed. In that scheme, users with higher SIR values have higher utilities and are thus more likely to transmit packets; therefore, the wireless medium can be better utilized and transmission power can be conserved, while adapting to channel conditions and guaranteeing the minimum utility requested by each user. In [8-9], the authors design a utility-based fair allocation scheme to ensure the same utility value for each user. However, letting users with different traffic demands achieve an identical level of satisfaction may not be an efficient way of using wireless resource; worse, traffic that is difficult to satisfy tends to consume most of the system resource, leading to another kind of unfairness. In [5], a utility-based scheduler together with a Forward Error Correction (FEC) and ARQ scheme is proposed; that work gives lagging users more resource and thus results in a similar performance level (i.e., a fixed utility value) for each user. The work in [13-14] targets multi-hop wireless networks. Utility functions have also been widely used in Internet pricing and congestion control [6]. The typical approach is to set a price on radio resource and to allocate tokens to users; the objective is then to maximize the "social welfare" through a bidding process. These kinds of bidding schemes, while useful for Internet pricing and congestion control, may not be practical for wireless networks. In wireless environments, the types of traffic, the number of users and the channel conditions are all time-varying. It would be very expensive to implement a wireless bidding process, because the users would have to keep exchanging control messages for real-time bidding, and the control protocols of the wireless system would also have to be modified to accommodate this process. Finally, the complexity and efficiency of wireless bidding have not been analyzed, and it is hard to estimate the time needed to reach the Nash equilibrium. We consider two common types of traffic, hard QoS and best effort, and propose three allocation algorithms for them: 1) the HQ allocation for hard-QoS traffic, 2) the elastic allocation for best effort traffic, and 3) the mixed allocation for the co-existence of both types of traffic. These three allocation schemes are all polynomial-time solutions and are proved to be optimal under certain conditions; in any case, the difference between the total utilities obtained by our solutions and the optimal utility is bounded. The performance of the proposed schemes is validated via simulations. The results show that optimal wireless resource allocation depends on the traffic demand, the total available resource and the wireless channel quality, rather than solely on channel quality or traffic type as assumed in most existing work.

2.1 Resource Allocation for Hard QoS, Best Effort and Mixed Traffic through Utility-Based Functions

2.1.1 Problem Statement and Definitions
Suppose that there are n users served by a base station. Let rtotal denote the total amount of radio resource available at the base station, and ri the amount of resource to be allocated to user i. Users with the same kind of traffic may not feel the same way when given the same amount of resource, because the wireless channel quality of each user may not be identical. Let qi denote the channel quality of user i, with 0 ≤ qi ≤ 1 and i = 1, 2, ..., n; the smaller the value of qi, the worse the channel quality. Given an amount of resource ri and channel quality qi, the amount of resource actually beneficial to user i is given by θi = ri · qi. Let T(i) denote the type of traffic of user i. The utility function of user i is expressed by Ui(ri) = UT(i)(ri · qi), where UT(i)(.) is the utility function of traffic type T(i) and Ui(.) is the utility function for that type of traffic taking the channel quality of user i into account. The marginal utility function of Ui(.) is ui(.), and that of UT(i)(.) is uT(i)(.). Our objective is to maximize Σi Ui(ri) subject to Σi ri ≤ rtotal and ri ≥ 0 for all i. An optimal allocation for n users with total available resource rtotal is defined below; note that the optimal allocation may not be unique.
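As a worked example (ours; it uses the best-effort utility function that appears later in the simulation section), consider a best effort user with channel quality qi:

```latex
U_i(r_i) = U_{BE}(r_i q_i) = 1 - e^{-r_i q_i / 10}, \qquad
u_i(r_i) = \frac{dU_i(r_i)}{dr_i} = \frac{q_i}{10}\, e^{-r_i q_i / 10}
```

The marginal utility is positive and strictly decreasing, so Ui is concave: each extra unit of raw resource is worth less than the previous one, and a poor channel (small qi) scales down the benefit of every allocated unit.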

Figure 1. The utility functions of two types of traffic

• Definition 2.1: A resource allocation R+ = {r1, r2, ..., rn} for n users is an optimal allocation if, for all feasible allocations Ra = {r'1, r'2, ..., r'n}, U(R+) ≥ U(Ra), where U(R) denotes the total utility Σi Ui(ri) of an allocation R.
• Definition 2.2: R = {r1, r2, ..., rn} is a full allocation if Σi ri = rtotal.

2.1.2 HQ Allocation for Hard QoS Traffic
The utility function of a user i with hard QoS traffic is described by Ui(ri) = UMi · fu(ri · qi - rpref,i), where fu(.) is a unit-step function, qi is the channel quality of this user, Mi indicates the kind of QoS traffic (UMi being the utility obtained when the demand is met), and rpref,i is the preferred amount of beneficial resource to be allocated. Suppose that there are n users in the queue, all with hard QoS traffic, and let rres denote the residual resource in the system. The resource allocation algorithm designed for users whose utility functions are all unit-step functions is referred to as the HQ allocation, and its output is denoted by RHQ = {r1, r2, ..., rn}. Given the total available resource rtotal and the channel quality qi and utility function Ui(.) of every user i, RHQ is obtained as follows (a sketch follows the listing):
1) Initialize ri ← 0 for i = 1, 2, ..., n, and rres ← rtotal.
2) Sort all users i in the queue in descending order of their utility gain per unit of required resource.
3) Repeat Steps (4) and (5) until the queue becomes empty.
4) Pop out the user i now at the head of the queue.
5) If rpref,i / qi > rres, then set ri = 0; otherwise set ri = rpref,i / qi and rres = rres - ri.
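The following Python sketch is our own rendering of the greedy HQ allocation; the sort key and tie-handling are assumptions, since the published pseudocode is partly garbled in this copy:

```python
def hq_allocation(users, r_total):
    """Greedy HQ allocation for hard-QoS (unit-step utility) users.

    Each user is a dict with:
      'q'    : channel quality in (0, 1]
      'pref' : preferred (beneficial) amount of resource r_pref
      'U'    : utility U_M obtained when the demand is fully met
    A user is either fully served (r = pref / q) or gets nothing.
    """
    r = {u['id']: 0.0 for u in users}
    r_res = r_total
    # Serve users that yield the most utility per unit of raw resource
    # first (assumed sort key; the original criterion is illegible here).
    queue = sorted(users, key=lambda u: u['U'] * u['q'] / u['pref'],
                   reverse=True)
    for u in queue:
        cost = u['pref'] / u['q']     # raw resource needed to deliver pref
        if cost <= r_res:
            r[u['id']] = cost
            r_res -= cost
    return r, r_res

users = [{'id': 1, 'q': 0.9, 'pref': 10, 'U': 1},
         {'id': 2, 'q': 0.4, 'pref': 10, 'U': 1},
         {'id': 3, 'q': 0.7, 'pref': 10, 'U': 1}]
alloc, leftover = hq_allocation(users, r_total=40)
print(alloc, leftover)   # user 2 (worst channel) is left unserved
```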




Figure 2. Allocation ordering of k users in the HQ allocation.

2.1.3 Elastic Allocation for Best Effort Traffic
We next consider best effort traffic. The resource allocation algorithm for users with concave utility functions is referred to as the elastic allocation, and its output is denoted by RE = {r1, r2, ..., rn}. Given the total available resource rtotal and the channel quality qi and marginal utility function ui(.) of each user i, RE is obtained as follows:
1) For each user i, derive ui^(-1)(.), the inverse function of ui(.).
2) Derive g(.) by summing up ui^(-1)(.) over all users i, i.e., g(.) = Σi ui^(-1)(.).
3) Find g^(-1)(.), the inverse function of g(.).
4) Find u*, which is equal to g^(-1)(rtotal).
5) For all i, i = 1, 2, ..., n: if u* < ui(0), then ri = ui^(-1)(u*); else ri = 0.
The allocation rule of this scheme is to 1) derive the aggregated function g(.) from the inverse marginal utility functions of all users, 2) calculate the allocated marginal utility level u* from g(.), and 3) determine ri for each user.

2.1.4 Mixture of Hard QoS and Best Effort Traffic
Finally, we consider the co-existence of QoS and best effort traffic in the system. The corresponding algorithm is referred to as the mixed allocation, and its output is denoted by RM = {r1, r2, ..., rn}. Let rres denote the amount of residual resource to be given to best effort traffic, and ΔUi the utility gain obtained by allocating resource to QoS user i; other notations remain the same as in the HQ and elastic allocations. Given the total available resource rtotal and the channel quality qi and marginal utility function ui(.) of each user i, RM is obtained as follows:
1) Initialize ri ← 0, i = 1, 2, ..., n, and rres ← rtotal.
2) Sort all QoS users i in descending order of their utility gain per unit of required resource, and store them in the queue.
3) For each best effort user j, derive uj^(-1)(.) from uj(.); find g(.) by summing up uj^(-1)(.) over all best effort users j; and find g^(-1)(.), the inverse function of g(.).
4) If the queue is not empty, pop out the QoS user i at the head of the queue; else go to Step (8).
5) For the popped user i: if rpref,i / qi > rres, set ri = 0 and go to Step (4).
6) Compute ΔUi, the net utility gain of serving QoS user i, i.e., the utility gained by serving user i minus the best effort utility lost by removing rpref,i / qi from the residual resource.

7) If ΔUi > 0, then set ri = rpref,i / qi and rres = rres - ri, and go to Step (4); else set ri = 0 and go to Step (8).
8) For each best effort user j: if g^(-1)(rres) < uj(0), then rj = uj^(-1)(g^(-1)(rres)); else rj = 0.
The allocation rule of this mixed allocation is to: 1) allocate resource to the first k QoS users in the sorted queue, and 2) then allocate the residual bandwidth to all best effort users based on the elastic allocation; a numerical sketch of this elastic step is given below. The value of k is determined by the requirement that there is sufficient resource for each served QoS user and that the utility gain ΔUk is positive (i.e., ΔUk > 0).
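Rather than forming the aggregate inverse function g(.) symbolically, the common marginal utility level u* of the elastic allocation can be found numerically. The sketch below is ours: it assumes the exponential best-effort utility U(θ) = 1 - e^(-θ/10) from the simulation section, so that ui(r) = (qi/10)·e^(-r·qi/10), and bisects on u* until the full allocation Σ ri = rtotal is met (water-filling):

```python
import math

def elastic_allocation(qs, r_total, u_base=lambda th: math.exp(-th/10)/10):
    """Numerical elastic allocation for best-effort users.

    qs      : list of channel qualities q_i in (0, 1]
    r_total : total raw resource at the base station
    u_base  : marginal utility of the *beneficial* resource theta
              (here for U(theta) = 1 - e^{-theta/10}; an assumption).

    The marginal utility of user i w.r.t. raw resource r is
    u_i(r) = q_i * u_base(r * q_i).  We bisect on the common level
    u* so that sum_i u_i^{-1}(u*) == r_total.
    """
    def demand(level):
        # r_i(level) = u_i^{-1}(level), clipped at 0 when u_i(0) < level.
        rs = []
        for q in qs:
            if q * u_base(0.0) > level:
                # invert (q/10) e^{-r q /10} = level  (closed form)
                rs.append(-(10.0 / q) * math.log(10.0 * level / q))
            else:
                rs.append(0.0)
        return sum(rs), rs

    lo, hi = 1e-12, max(qs) / 10.0    # u* lies in this bracket
    for _ in range(200):              # bisection on the marginal level
        mid = (lo + hi) / 2.0
        total, rs = demand(mid)
        if total > r_total:
            lo = mid                  # level too low: users ask too much
        else:
            hi = mid
    return rs

print(elastic_allocation([0.9, 0.5, 0.2], r_total=30.0))
```

Users with better channels end up with a larger beneficial share; a user whose initial marginal utility is already below u* receives nothing, exactly as in step 5 of the elastic allocation.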

Figure 3. An example of mixed allocation

2.2 Resource Allocation Using Directional Antennas
Medium Access Control (MAC) protocols for directional antenna systems can be classified into two categories: on-demand and scheduled. In the on-demand scheme, nodes must exchange short signals to establish a communication session. In one such scheme, data message transmission is done using the omnidirectional mode and reception is done using the directional mode; directional antennas are used to transmit request-to-send (RTS) signals and to receive clear-to-send (CTS) signals, while the receiver antenna remains in the omnidirectional mode during this exchange. In [13], communicating pairs are set up using the multi-beam forming ability of directional antennas. Through caching of the angle of arrival (AoA), Takai et al. [14] avoided the use of the omnidirectional mode, which is used only when the AoA information is not available.

Figure 4. (a) Transmission pattern of an omnidirectional antenna. (b) Transmission pattern of a directional antenna.

2.2.1 Directional Antenna System Assumptions and Definitions
In this paper, it is assumed that each node in the network is equipped with an MBAA antenna system.



Each antenna is capable of transmitting or receiving using any one of k beams, which can be directed towards the node with which communication is desired. In order for node x to transmit to a node y, node x directs one of its k antennas to transmit in the direction of node y, and node y in turn directs one of its k antennas to receive from the direction of node x. Radio signals transmitted by omnidirectional antennas propagate equally in all directions. Directional antennas, on the other hand, install multiple antenna elements so that the individual omnidirectional RF radiations from these elements interfere with each other in a constructive or destructive manner.

This causes the signal strength to increase in one or multiple directions. The increase of the signal strength in a desired direction, and the lack of it in other directions, is modeled as a lobe. The angle of the directions, relative to the center of the antenna pattern, where the radiated power drops to one half of the maximum value of the lobe is defined as the antenna beamwidth, denoted by β [9]. With the advancement of silicon and DSP technologies, the DSP modules in directional antenna systems can form several antenna patterns in different desired directions (for transmission or reception) simultaneously. Figure 4(a) shows the transmission pattern of an omnidirectional antenna, and Figure 4(b) shows that of a directional antenna. In this paper, it is assumed that an MBAA antenna system is capable of detecting the precise angular position of a single source for locating and tracking neighbor nodes. Figure 5 shows a node equipped with an MBAA antenna array with k = 4 beams; each beam can be oriented in a different desired direction, with Figure 5(a) showing the array in transmission mode and Figure 5(b) in reception mode.

Figure 5. Transmission pattern of an MBAA antenna system with k = 4 beams: (a) beams in transmission mode; (b) beams in reception mode.

2.2.2 Protocols for Directional Antennas
The networking environment assumed in this paper is TDMA, where a single channel is used to communicate between nodes. The TDMA frame is composed of a control phase and a data phase [17]. Each node in the network has a designated control time slot, which it uses to transmit its control information. However, the different nodes in the network must compete for the use of the data time slots in the data phase of the frame. In this section, the slot allocation rules for the TDMA directional antenna environment are presented. The hidden and exposed terminal problems make each node's allocation of a slot dependent on its 1-hop and 2-hop neighbors' current use of that slot. The model used in this protocol is similar to that used in [11] and [12], but includes modifications to support directional antenna systems.

Each node keeps track of the slot status information of its 1-hop and 2-hop neighbors. This is necessary in order to allocate slots in a way that does not violate the slot allocation conditions imposed by the nature of the wireless medium, and to take the hidden and exposed terminal problems into consideration.

2.2.3 Slot Allocation Conditions for Directional Antennas
A time slot t is considered free to be allocated to send data from a node x to a node y if the following conditions are true:
1) Slot t is not scheduled for receiving at node x or for sending at node y by any of the antennas of either node (i.e., the antennas of x must not be scheduled to receive and the antennas of y must not be scheduled to transmit in slot t).
2) Slot t is not scheduled for sending from node x to any node z that is a 1-hop neighbor of x, where z and y are in the same angular direction with respect to x (i.e., their angular groups overlap).
3) Slot t is not scheduled for receiving at node y from any node z that is a 1-hop neighbor of y, where x and z are in the same angular direction with respect to y (i.e., their angular groups overlap).
4) Slot t is not scheduled for communication (receiving or transmitting) between two nodes z and w that are 1-hop neighbors of x, where w and y are in the same angular direction with respect to z, and x and z are in the same angular direction with respect to w.
In Figure 6, which illustrates allocation rule 2, node x cannot transmit to node y using slot t because it is already using slot t to transmit to node z, which is in the same angular direction as node y. In Figure 7, which illustrates allocation rule 3, node x cannot allocate slot t for sending to node y because slot t is already scheduled for receiving at y from node z, whose angular group overlaps that of x. In Figure 8, which illustrates allocation rule 4, slot t cannot be allocated to send from x to y because it is already scheduled for communication between two nodes z and w, 1-hop neighbors of x, with overlapping angular groups. A simplified sketch of these checks follows.
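The sketch below is ours; the schedule representation and the angular-group encoding are assumptions. It shows how a node could test conditions 1 and 2 against its local slot-status tables; conditions 3 and 4 are checked the same way, using the 2-hop neighbor information:

```python
def overlaps(groups_a, groups_b):
    """Two sets of angular groups conflict if they intersect."""
    return bool(set(groups_a) & set(groups_b))

def slot_free(t, x, y, schedule, angular):
    """Check (a subset of) the slot allocation conditions for x -> y in
    slot t.

    schedule[node] maps slot -> list of entries
        {'role': 'tx'|'rx', 'peer': node}
    angular[(a, b)] is the set of angular groups node a uses towards b.
    """
    # Condition 1: x must not be receiving and y must not be transmitting.
    if any(e['role'] == 'rx' for e in schedule.get(x, {}).get(t, [])):
        return False
    if any(e['role'] == 'tx' for e in schedule.get(y, {}).get(t, [])):
        return False
    # Condition 2: x must not already transmit in slot t towards some
    # neighbor z lying in the same angular direction as y.
    for e in schedule.get(x, {}).get(t, []):
        if e['role'] == 'tx' and overlaps(angular[(x, e['peer'])],
                                          angular[(x, y)]):
            return False
    return True

# Toy example: x already sends to z in slot 3, and z lies in the same
# angular group (0) as y, so slot 3 is not free for x -> y (rule 2).
schedule = {'x': {3: [{'role': 'tx', 'peer': 'z'}]}}
angular = {('x', 'z'): {0}, ('x', 'y'): {0}}
print(slot_free(3, 'x', 'y', schedule, angular))   # False
```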

Figure 6. Illustration of allocation rule 2.

Figure 7. Illustration of allocation rule 3.

Figure 8. Illustration of allocation rule 4.

2.2.4 The QoS Path Reservation Algorithm
When a node S wants to send data to a node D with a bandwidth requirement of b slots, it initiates the QoS path discovery process.


Node S determines if enough slots are available to send from itself to at least one of its 1-hop neighbors. If that is the case, it broadcasts a QREQ(S, D, id, b, x, PATH, NH) message to all of its neighbors. The message contains the following fields (a compact sketch follows the list):
• S, D and id: the IDs of the source, the destination and the session. The (S, D, id) triple is therefore unique for every QREQ message and is used to prevent looping.
• b: the number of slots required.
• x: the node ID of the host forwarding this message.
• PATH: a list of the form ((h1, l1), (h2, l2), ...). It contains the accumulated list of hosts and time slots which have been allocated by this QREQ message so far: hi is the i-th host in the path, and li is the list of slots used by hi to send to hi+1. Each element of li contains the slot number that would be used, along with the corresponding set of angular groups, which represents the direction in which the sending antenna of host i must be pointed during that slot to send data to host i+1.
• NH: a list of the form ((h1, l1), (h2, l2), ...). It contains the next-hop information. If node x is forwarding this QREQ message, then NH contains a list of the next-hop host candidates: each couple (hi, li) is the ID of a host which can be the next hop in the path, along with the list of slots which can be used to send data from x to hi; li has the same format as in PATH.
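A QREQ message can be represented compactly as a record; the sketch below is ours, with field types that are assumptions consistent with the description above, and it also shows the loop check on the (S, D, id) triple:

```python
from dataclasses import dataclass, field
from typing import List, Tuple, Set

# (slot number, angular groups used during that slot)
SlotInfo = Tuple[int, Set[int]]
# (host id, slots used to reach the next hop)
HopEntry = Tuple[str, List[SlotInfo]]

@dataclass
class QREQ:
    S: str                      # source node id
    D: str                      # destination node id
    id: int                     # session id -- (S, D, id) is unique
    b: int                      # number of slots required
    x: str                      # node currently forwarding the message
    PATH: List[HopEntry] = field(default_factory=list)  # allocated so far
    NH: List[HopEntry] = field(default_factory=list)    # next-hop candidates

seen: Set[Tuple[str, str, int]] = set()

def accept(msg: QREQ) -> bool:
    """Drop duplicate QREQs to prevent looping, keyed on (S, D, id)."""
    key = (msg.S, msg.D, msg.id)
    if key in seen:
        return False
    seen.add(key)
    return True

req = QREQ(S='S', D='D', id=7, b=2, x='S',
           NH=[('A', [(3, {0}), (5, {1})])])
print(accept(req), accept(req))   # True False
```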


3. Performance Analysis

In this paper we have described two methods of resource allocation: resource allocation through utility functions and resource allocation using directional antennas. Since both methods address resource allocation, it is necessary to analyze the performance of both, which leads to a comparison of the two. In this section, we conduct simulations to evaluate the performance of our allocation algorithms, first analyzing resource allocation through utility functions. We consider QoS traffic, best effort traffic and the co-existence of both. The simulation parameters are as follows. For QoS traffic, the utility function is a unit-step function with rpref = 10 and UM = 1, i.e., UQoS(r) = fu(r - 10); for best effort traffic, UBE(r) = 1 - e^(-r/10). The value of qi is randomly generated from a uniform distribution over [0, 1]. We then measure the distributions of ri and θi under different values of rtotal. In Figure 9, different resource allocation schemes are compared with the proposed allocation schemes. The comparison is based on the scheme proposed in [10], which allocates radio resource proportionally to the factor qi^α. Depending on the setting of the value α, the system can be tuned to work with different performance metrics. The curve denoted "throughput" is for α = 1, which gives more resource to the users in better channel conditions, thereby leading to a larger system throughput. The curve denoted "fairness" is for α = -1, giving all users an identical value of θi = ri · qi. The curve denoted "fixed" is for α = 0, which provides the same amount of resource to all users. Note that the schemes proposed in [18-19] are examples of the "fairness" scheme (i.e., α = -1), and the GR+ scheme in [9] is an example of the "throughput" scheme. Figure 9(a) compares the proposed HQ allocation with these schemes, and Figure 9(b) compares the proposed elastic allocation with them; note that the rtotal axis in Figure 9(b) is in logarithmic scale. The results show that the "throughput" scheme has a higher total utility when rtotal is small, but the "fixed" allocation comes closer to the proposed scheme as rtotal increases. Finally, when rtotal becomes very large, the "fairness" scheme achieves the highest utility. In order to verify and analyze the performance of the directional antenna protocols presented in this paper, simulation experiments were also conducted. Several performance measures were computed as the traffic rate (messages/second) was varied: the overall percentage of packets received successfully, the average number of requests per successful acquisition of a QoS path, the average number of requests per session, and the average QoS path acquisition time.
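The baseline scheme of [10] is easy to reproduce. The sketch below is ours; it allocates ri proportionally to qi^α and shows how α = 1, 0, -1 yield the "throughput", "fixed" and "fairness" behaviors discussed above:

```python
def proportional_allocation(qs, r_total, alpha):
    """Allocate r_i proportionally to q_i**alpha (scheme of [10])."""
    weights = [q ** alpha for q in qs]
    scale = r_total / sum(weights)
    return [w * scale for w in weights]

qs = [0.9, 0.5, 0.2]
for alpha, name in [(1, 'throughput'), (0, 'fixed'), (-1, 'fairness')]:
    r = proportional_allocation(qs, r_total=30, alpha=alpha)
    theta = [ri * qi for ri, qi in zip(r, qs)]   # beneficial resource
    print(f"{name:10s} r = {[round(x, 2) for x in r]}  "
          f"theta = {[round(x, 2) for x in theta]}")
```

With α = -1 every user ends up with the same beneficial resource θi (fairness), while α = 1 concentrates resource on the best channels (throughput).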


Figure 9. Utility comparison with different resource allocation schemes: (a) QoS traffic; (b) best effort traffic.

Simulation results clearly demonstrate the increased efficiency and performance of the network as the number of directional antennas increases. As indicated earlier, this increased performance is due to the considerable increase in spatial reuse and the ability of each node to simultaneously send or receive data in different directions. This functionally increases the effective number of data slots by a multiple of the number of antennas (or directions) used, which significantly improves performance. As the data show, the increase in performance, or speed-up factor, when the number of antennas is increased by a factor of 2 (i.e., doubled from 1 to 2, and then from 2 to 4) is significant (speed-up factor > 1). As expected, however, it is still below the theoretical speed-up factor of 2. For the first set of experiments, for example, the data show that the ratio of the overall average percentage (averaged over all data traffic rates) of successfully received packets of the two-antenna case to the one-antenna case is 1.61, which is > 1 and < 2. The ratio for the



four-antenna case to the two-antenna case is 1.84, which is also > 1 and < 2, and the ratio for the four-antenna case to the one-antenna case is 2.95, which is < 4. This is to be expected from the theory of parallel and distributed systems, because the actual speed-up factor is always below the ratio of the number of parallel units, or antennas.

4. Conclusion
In this paper, we have studied two basic methods of allocating resources in wireless networks. The first was utility-based maximization for resource allocation in infrastructure-based wireless networks.

For this method we developed some essential theorems for utility-based resource management, and three polynomial-time resource allocation algorithms were proposed for two types of utility functions. We proved that, in any case, the difference between the total utilities obtained by our proposed solutions and the optimal utility is bounded, and that under certain conditions all three schemes achieve the maximum total utility (i.e., are optimal). From the simulation results, we find that different types of traffic require different kinds of schemes to achieve optimal allocation. In addition, when rtotal is small, the system tends to allocate more resource to the users in better channel conditions, i.e., it is "throughput-oriented"; however, when rtotal is abundant, the system becomes "fairness-oriented". This means that, even with the same traffic, the preferred trade-off between throughput and fairness can still differ, which leads us to conclude that existing channel-dependent-only resource allocation schemes and schedulers cannot provide optimal allocation in wireless networks. The second method was a protocol for TDMA-based bandwidth reservation for QoS routing in MANETs using directional antennas. The protocol takes advantage of the significant increase in spatial reuse provided by the directional antenna environment, which drastically increases the efficiency of communication in MANETs. This is due to the reduction in signal interference and in the amount of power necessary to establish and maintain communication sessions. Additionally, this protocol provides a relatively smaller hop count for QoS paths, due to the extended range of directional antennas using the same total transmission power compared to the omnidirectional case; in turn, this results in reduced end-to-end delay. The simulation results clearly show a significant gain in performance, with an increase in the number of successfully received packets as well as a decrease in the QoS path acquisition time. However, as expected, this gain in performance is still below the theoretical speed-up factor. In the future, we intend to improve this protocol through the employment of additional optimization techniques, and to perform more simulations in order to further study, analyze and improve the performance of the protocol under different network environments, including different mobility rates and traffic conditions.

References
[1] D. Angelini and M. Zorzi, "On the throughput and fairness performance of heterogeneous downlink packet traffic in a locally centralized CDMA/TDD system," in Proc. IEEE VTC-Fall 2002.
[2] A. Furuskar et al., "Performance of WCDMA high speed packet data," in Proc. IEEE VTC 2002-Spring.
[3] Y. Cao, V. O. K. Li, and Z. Cao, "Scheduling delay-sensitive and best-effort traffic in wireless networks," in Proc. IEEE ICC 2003.
[4] M. Xiao, N. B. Shroff, and E. K. P. Chong, "A utility-based power control scheme in wireless cellular systems," IEEE/ACM Trans. Netw., vol. 11, no. 2, 2003.
[5] X. Gao, T. Nandagopal, and V. Bharghavan, "Achieving application level fairness through utility-based wireless fair scheduling," in Proc. IEEE Globecom 2001.
[6] F. P. Kelly, "Charging and rate control for elastic traffic," European Trans. Telecommun., Jan. 1997.
[7] V. A. Siris, B. Briscoe, and D. Songhurst, "Economic models for resource control in wireless networks," in Proc. IEEE PIMRC 2002, Lisbon, Portugal, Sept. 2002.
[8] L. Chen, S. H. Low, and J. C. Doyle, "Joint congestion control and media access control design for wireless ad hoc networks," in Proc. IEEE Infocom, Miami, FL, March 2005.
[9] L. Bao and J. J. Garcia-Luna-Aceves, "Transmission scheduling in ad hoc networks with directional antennas," in Proc. 8th Annual International Conference on Mobile Computing and Networking, pp. 48-58, September 2002.
[10] Q. Dai and J. Wu, "Construction of power efficient routing tree for ad hoc wireless networks using directional antenna," in Proc. 24th International Conference on Distributed Computing Systems Workshops, pp. 718-722, March 2004.
[11] I. Jawhar and J. Wu, "A race-free bandwidth reservation protocol for QoS routing in mobile ad hoc networks," in Proc. 37th Annual Hawaii International Conference on System Sciences (HICSS'04), IEEE Computer Society, vol. 9, January 2004.

Figure 10. Simulation results.


[12] W.-H. Liao, Y.-C. Tseng, and K.-P. Shih, "A TDMA-based bandwidth reservation protocol for QoS routing in a wireless mobile ad hoc network," in Proc. IEEE International Conference on Communications (ICC 2002), vol. 5, pp. 3186-3190, 2002.
[13] A. Nasipuri, S. Ye, J. You, and R. E. Hiromoto, "A MAC protocol for mobile ad hoc networks using directional antennas," in Proc. IEEE Wireless Communications and Networking Conference (WCNC 2000), vol. 3, pp. 1214-1219, September 2000.
[14] M. Takai, J. Martin, and R. Bagrodia, "Directional virtual carrier sensing for directional antennas in mobile ad hoc networks," in Proc. ACM International Symposium on Mobile Ad Hoc Networking and Computing (MOBIHOC), Lausanne, Switzerland, June 2002.
[15] J. Ward and R. T. Compton, Jr., "High throughput slotted ALOHA packet radio networks with adaptive arrays," IEEE Transactions on Communications, vol. 41, no. 3, pp. 460-470, March 1993.
[16] J. Zander, "Slotted ALOHA multihop packet radio networks with directional antennas," Electronics Letters, vol. 26, no. 25, pp. 2098-2100, December 1990.
[17] C. R. Lin and J.-S. Liu, "QoS routing in ad hoc wireless networks," IEEE Journal on Selected Areas in Communications, vol. 17, no. 8, pp. 1426-1438, August 1999.
[18] I. Jawhar and J. Wu, "QoS support in TDMA-based mobile ad hoc networks," Journal of Computer Science and Technology (JCST), Springer, vol. 20, no. 6, pp. 797-810, November 2005.
[19] I. Jawhar and J. Wu, "Race-free resource allocation for QoS support in wireless networks," Ad Hoc and Sensor Wireless Networks: An International Journal, vol. 1, no. 3, pp. 179-206, May 2005.


Authors Profile
Dr. Yashpal Singh is a Reader and HOD (CS) in BIET, Jhansi (U.P.). He obtained a Ph.D. degree in Computer Science from Bundelkhand University, Jhansi. He has experience of teaching various courses at undergraduate and postgraduate level since 1999. His areas of interest are computer networks, OOPS and DBMS. He has authored many popular books of computer science for graduate and postgraduate level, has attended many seminars and conferences of national and international repute, and has authored many research papers of international repute.

Manish Varshney received his M.Sc. (CS) degree from Dr. B.R.A. University, Agra, and his M.Tech. (IT) from Allahabad University, and is pursuing a PhD in Computer Science. He is working as HOD (CS/IT) in SRMSWCET, Bareilly. He has been teaching various subjects of computer science for more than half a decade and is known for his skill at bringing advanced computer topics down to the novice's level. He has experience of industry as well as of teaching various courses. He has authored various popular books, such as Data Structure, Database Management System, Design and Implementation of Algorithms, and Compiler Design, for technical students at graduation and post-graduation level. He has published various research papers in national and international journals, and has attended a faculty development program organized by Oracle Mumbai on Introduction to Oracle 9i SQL and DBA Fundamentals.

Vidushi Gupta received her B.Tech. (CS) degree from Uttar Pradesh Technical University, Lucknow, and is pursuing an M.Tech. from Karnataka University. She is working as a Lecturer (CS/IT department) in SRMSWCET, Bareilly. She has attended a faculty development program on research methodologies.



Comparative Analysis of Wireless MAC Protocol for Collision Detection and Collision Avoidance in Mobile Ad Hoc Networks
D. Sivaganesan 1 and Dr. R. Venkatesan 2

1 Department of Computer Science and Engineering, Karpagam College of Engineering, Coimbatore-32, TamilNadu, India, [email protected]
2 Department of Information Technology, PSG College of Technology, Coimbatore, TamilNadu, India

Abstract: To handle packet collisions at the Medium Access Control (MAC) layer, distributed wireless networks use a combination of carrier sensing and collision avoidance. When the collision avoidance strategy fails, such schemes cannot detect collisions, and corrupted data frames are still transmitted in their entirety, thereby wasting channel bandwidth and significantly reducing network throughput. To address this problem, this paper compares the wireless MAC protocols CSMA, MACA and IEEE 802.11 with respect to collision detection and collision avoidance. The performance of the MAC protocols has been investigated using extensive analysis and simulations. Our results show that, compared with the CSMA and MACA MAC protocols, IEEE 802.11 has significant performance gains in terms of node throughput and reduces network collisions.

Keywords: MAC, collision detection, collision avoidance, CSMA, CSMA/CD, MACA.

1. Introduction
Distributed Medium Access Control (MAC) protocols such as the IEEE 802.11 Distributed Coordination Function (DCF) are widely used in computer networks to allow users to statistically share a common channel for their data transmissions. In wireless networks, a critical drawback of distributed MAC protocols is the inability of nodes to detect collisions while they are transmitting. As a result, bandwidth is wasted in transmitting corrupted packets, and the achieved throughput degrades. This situation is exacerbated as the number of nodes in the network increases, since the rate of collisions then increases. To address this issue, this paper examines MAC protocols capable of detecting collisions in wireless networks. The Aloha protocol [1] was the first MAC protocol proposed for packet radio networks. With pure Aloha, a node sends out a packet immediately upon its arrival at the MAC sublayer, and a collided packet is retransmitted with a probability P immediately or after each packet transmission time. Carrier Sense Multiple Access with Collision Detection (CSMA/CD) [2] employs two mechanisms to enhance medium utilization in wired local area networks (LANs): "carrier sense" and "collision detection." Carrier sense requires a node to listen before transmitting, and collision detection requires a node to transmit and listen at the same time in order to terminate a possible collision. Although CSMA/CD has proven very successful in wired LANs, it cannot be directly employed in wireless networks because of two problems [21]. The first is the hidden terminal problem [3]. Two mutually hidden terminals are two nodes that cannot sense each other (due to the distance or obstacles between them) but can still interfere with each other at a receiver; with hidden terminals, carrier sense alone cannot effectively avoid collisions. The other problem for CSMA/CD in wireless networks is that, in the same wireless channel, the outgoing signal can easily overwhelm the incoming signal due to high signal attenuation in wireless channels. This makes it difficult for a sender to directly detect collisions in a wireless channel. Some existing MAC protocols [5], [6], [7], [8] depend on in-band control frames for exploring the possible future channel condition for a data frame and for reserving the medium for the data frame; however, when the collision avoidance strategy fails, a corrupted data frame is still fully transmitted. Another category of protocols [3], [9], [10] uses one or more out-of-band control channels to avoid collisions. These protocols are more effective in dealing with hidden terminals and thus reduce the probability of collisions in a network. However, they are incapable of detecting collisions either, and if the collision prevention strategies of these protocols fail, the collided data frames are still transmitted in their entirety.

2. Carrier Sensing and Collision Avoidance
The most widely used mechanism to avoid collisions in contention-based MAC is probably "carrier sensing" [11], which is used in both wired and wireless networks. This mechanism has drawbacks that motivate the development of a scheme with collision detection. With carrier sensing, a node listens before it transmits. If the medium is busy, the node defers its transmission. After the medium has been sensed idle for a specified amount of time, the node usually takes a random backoff before transmitting its frame; the random backoff serves to avoid collisions with other nodes that are also contending for the medium. Besides the "physical" carrier sensing technique


introduced above, the IEEE 802.11 DCF and MACA (Multiple Access Collision Avoidance) also employ a technique called "virtual" carrier sensing. The virtual carrier sensing technique relies on in-band control frames to deal with hidden terminals. Before sending a data frame into the idle medium, after proper deferrals and backoffs, a source sends out a Request to Send (RTS) frame to contact the receiver and reserve the medium around the source. If the receiver receives the RTS frame and its channel is determined to be clear, the receiver sends out a Clear to Send (CTS) frame to respond to the sender and reserve the medium as well. The data transmission then begins if the handshake and medium reservation process succeeds. Several situations may cause difficulties for the virtual carrier sensing technique. One of them is the "chained" hidden terminal phenomenon: in a data transaction at the MAC layer, the CTS frame sent by a receiver to suppress the hidden terminals of the initiating sender may be lost at the receiver's neighbors due to the receiver's own hidden terminals. In such a case, some hidden terminals of the initiating sender may not be suppressed. An example is shown in Figure 1, where node A is the initiating sender and node B is the receiver; the CTS frame generated by node B is corrupted at node C (a hidden terminal of node A) by the signals of node D, which is a hidden terminal of node B. Node mobility may also limit the effectiveness of the virtual carrier sensing technique, with a small probability: with virtual carrier sensing, only nodes that have received the medium reservation message know when to defer, so when a node newly moves into a neighborhood and misses the preceding reservation information, it becomes an unsuppressed hidden terminal to an ongoing data transaction. Another phenomenon that may impact virtual carrier sensing is that the interference range of a node can be larger than its data transmission range [12]; therefore, even if a node is out of the range in which it could successfully receive another node's CTS frame, it may still interfere with that node's data reception. A more effective way to suppress hidden terminals is to use an out-of-band control channel. With a single data channel, control information cannot be delivered while a data frame is in transmission. With an additional control channel, however, control signals can always be present whenever necessary, which improves the ability to suppress hidden terminals.

3. Spectrum Reuse and the Capture Phenomenon

The radio spectrum needs to be spatially reused in multi-hop wireless networks to improve network throughput: better spectrum reuse allows more transmissions to proceed simultaneously in the network without collisions. A phenomenon closely related to spectrum reuse is "capture," which means that, when two frames collide at a receiver in a wireless network, one of the frames may still be correctly decoded if its received power is higher than that of the other by a threshold. However, as we now show, the capture effect is not sufficient to eliminate collisions, and collision detection is required to prevent bandwidth wastage on corrupted frames. To illustrate the possibility of collisions in the presence of the capture effect, two scenarios are shown in Figures 2a and 2b (the nodes are placed in a line for easy demonstration). In the first case, nodes A and D are the initiating senders, whereas nodes B and C are their receivers, respectively. In the second case, nodes B and C are the senders and nodes A and D are their receivers, respectively. In these two cases, assuming the same transmission power levels and ambient noise, capture of the data frames may easily happen at the receivers because the senders are much closer to their receivers than the interference sources. However, to combat high link error rates, acknowledgments for data frames are widely used in the MAC sublayer of wireless networks; therefore, interference may come not only from the initiating senders, but also from their receivers. In both cases shown in Figure 2, the two senders have to finish their transmissions at almost the same time for all the data and acknowledgment frames to be received without collisions. For example, in case A, shown in Figure 2a, if node A finishes its data transmission earlier than node D, then node B will send its acknowledgment frame to node A while node C is still receiving the data frame from node D; a collision may therefore easily occur at node C. Similarly, if node D finishes its transmission earlier, then node B may easily suffer a collision. The same is true for case B; the corrupted frame, however, would be an acknowledgment instead of a data frame. In reality, two nodes may not finish their transmissions at the same time, since their frames may have different sizes and their transmissions may begin at different times. Thus, collision detection is important in these cases to terminate the colliding transmissions.
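In power terms, capture means a frame survives a collision only when its received power exceeds the interfering signal by a threshold. A minimal check (ours; the 10 dB threshold is an assumed illustrative value, not one given by the paper):

```python
def captured(p_frame_dbm, p_interf_dbm, threshold_db=10.0):
    """A colliding frame is still decodable if it is stronger than the
    interfering signal by at least `threshold_db` (capture effect)."""
    return p_frame_dbm - p_interf_dbm >= threshold_db

print(captured(-60, -75))   # True : 15 dB margin, frame captured
print(captured(-60, -65))   # False: only 5 dB, both frames corrupted
```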

Figure 1. Hidden Terminal Problem

Figure 2a. Exposed Terminal Problem



Figure 2b. Collision involving capture

4. The MAC Protocol IEEE 802.11
IEEE 802.11: Carrier Sensing and Collision Avoidance
The MAC protocol considered in this paper assumes that each node has the ability to simultaneously transmit on two channels, the control and data channels, with two antennas and their associated communication circuitry. The control channel has a much smaller bandwidth than the data channel and is used for transmitting medium reservation related signals, whereas the data channel is used for transmitting the data and acknowledgments. Instead of relying on bit-based frames, the control channel employs pulses to deliver control information. The pulses in the control channel are single-frequency waves with random-length pauses. In this protocol, pulses only appear in the control channel, and the control channel only carries pulses. When a node is an active sender or receiver on the data channel, it monitors the control channel at all times, except when it is itself transmitting on the control channel. If a node is transmitting on the data channel but detects a pulse on the control channel, it aborts its transmissions. To describe the operation of the protocol, we consider what happens when the MAC sublayer at a node, say node A, receives a packet to transmit to node B. Before node A can transmit, it first listens to the control channel to make sure that it is idle. If the control channel is found idle for a period of time longer than the maximum pause duration of a pulse, node A starts a random backoff timer whose value is drawn from the node's contention window. If the node detects no pulse before its backoff timer expires, it proceeds to transmit the packet upon the expiration of the timer; otherwise, the node cancels its backoff timer and keeps monitoring the control channel. As soon as the backoff timer of node A expires, it starts to transmit pulses on the control channel along with the packet on the data channel.

Once the node has finished transmitting the frame header in the data channel, it expects the intended receiver, node B, to have received the information and to reply with a CTS pulse in the control channel (Figure 3 shows the RTS/CTS dialogue). The CTS pulse is transmitted by node B during a pause in the pulses being sent by node A in the control channel. If node A does not obtain the expected CTS pulse in the pause period following the transmission of the frame header, it aborts its transmission in both channels. If node A obtains the expected CTS pulse, it keeps transmitting. Node A may, however, still abort its transmissions after obtaining the expected CTS pulse if it detects a pulse of another node in one of its subsequent pulse pauses, which indicates a colliding situation. If the node aborts its transmissions due to the lack of the expected CTS pulse or the detection of a pulse of another node, it doubles its contention window and then returns to monitoring the control channel. After node A fully transmits the packet, it expects an acknowledgment from the receiver. If the node does not obtain the expected acknowledgment, it doubles its contention window and starts to monitor the control channel again to look for a retransmission opportunity. The whole process repeats until either node A obtains an acknowledgment for the packet or the retry limit is reached. The node discards the packet in the latter case and resets its contention window to the minimum size in both cases. The above description is for the case of a unicast packet; for a broadcast packet, the proposed protocol uses the basic CSMA protocol as in the IEEE 802.11 DCF.
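The retry behavior (doubling the contention window on a missing CTS pulse, a foreign pulse, or a missing acknowledgment, up to a retry limit) can be summarized as a small loop. The constants and the transmit_attempt callback below are illustrative assumptions, not values from the paper.

    CW_MIN, CW_MAX, RETRY_LIMIT = 8, 256, 7  # assumed constants

    def send_unicast(transmit_attempt):
        # transmit_attempt(cw) -> 'acked' | 'no_cts' | 'pulse_seen' | 'no_ack'.
        # Doubles the contention window on any failure and drops the packet
        # after RETRY_LIMIT attempts; the window is local, so it effectively
        # resets to CW_MIN for the next packet in both cases.
        cw = CW_MIN
        for _ in range(RETRY_LIMIT):
            if transmit_attempt(cw) == 'acked':
                return True
            cw = min(cw * 2, CW_MAX)  # no CTS, foreign pulse, or no ACK
        return False                  # retry limit reached: discard packet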


Figure 3. RTS/CTS dialogue (IEEE 802.11 MAC handshake)

3.7 Collision Avoidance and Detection

This section further explains how the proposed MAC protocol achieves collision avoidance and collision detection. As in the CSMA case, the protocol considers it a potential colliding situation when a transmitting node detects another transmitting node. For collision avoidance, the proposed protocol uses handshake and medium-reservation procedures like those used by traditional wireless MAC protocols. The difference is that, in the proposed protocol, these procedures are moved to the control channel, where CTS pulses are used for handshaking and pulse relay is used for medium reservation. This is when the collision detection mechanism comes into play, and it is the essential difference between the proposed protocol and other wireless MAC protocols. To understand how the proposed protocol resolves collisions, consider the case where two neighboring nodes cause collisions. If two neighboring nodes draw the same backoff delays at a contention point for medium access, they start to transmit signals in the data and control channels at almost the same time. If neither receiver of the two senders can correctly read the frame headers due to the resulting collision (that is, the address or another field in the header does not have a legitimate value), then neither will send back a CTS pulse. Both senders will therefore terminate their transmissions, and the collision is resolved automatically. If only one of the two receivers can correctly read the frame header, then the sender of the other receiver will, in general, abort its transmissions due to the lack of a legitimate CTS pulse; the collision is therefore also resolved in such a case. If both receivers can correctly read the frame headers, then each will send back a CTS pulse with the length specified in the MAC header of its received data frame. If the two initiating senders do not draw the same CTS length, then the sender that draws the shorter one may not receive a legitimate CTS pulse and thus aborts its transmissions. If both senders receive legitimate CTS pulses, then one sender will usually still need to abort its transmissions (since their acknowledgment frames may be interfered with or cause interference). The collision detection mechanism starts to work in such a case: with pauses of random lengths, the pulses of the two senders will desynchronize over time, and the collision is therefore resolved. The above description of collision detection is not restricted to two transmitting nodes that are neighbors; two nodes that are hidden terminals to each other still detect each other if they transmit at the same time.

5. Results

The MAC protocols CSMA, MACA, and IEEE 802.11 have been compared using the GloMoSim 2.03 network simulator. In our simulation each node always has packets to send, and the destination of each packet is randomly drawn among the neighbors of the node. We report results for the three wireless MAC protocols in terms of collision rate and throughput. Figures 3a and 3b show the collision rate and throughput simulation graphs for CSMA; Figures 4a and 4b show those for MACA; and Figures 5a and 5b show those for IEEE 802.11. The protocol IEEE 802.11 achieves significant performance gains in terms of node throughput and reduces the network collisions.

Figure 3a. CSMA – Nodes vs. Collision


Figure 3b. CSMA – Nodes vs. Throughput

Figure 5a. IEEE 802.11 – Nodes vs. Collision

Figure 4a. MACA – Nodes vs. Collision
Figure 5b. IEEE 802.11 – Nodes vs. Throughput

6. Conclusion
From the simulation results, we conclude that better behavior is obtained when using CSMA instead of MACA because of the RTS/CTS messages: sending RTS packets whenever a source has a data packet to transmit, without first sensing the channel, results in an increase in packet collisions and hence decreased throughput. The collision avoidance mechanism incorporated into IEEE 802.11 for the transmission of RTS packets aids in reducing the number of collisions and improves the throughput; consequently, more data packets reach their destination. We conclude that the MAC protocol IEEE 802.11 has significant performance gains in terms of node throughput and reduces the network collisions in mobile ad hoc networks.

Figure 4b. MACA – Nodes vs. Throughput



Authors Profile
D. Sivaganesan received his B.E. degree in Computer Science Engineering in 1999 and his M.Tech. degree in Information Technology in 2004. He is currently working as an Assistant Professor in the Department of CSE, Karpagam College of Engineering, Coimbatore, and is currently pursuing a Ph.D. His research interests include mobile computing, mobile agents, web programming, object computing, simulation, and microprocessor-based systems. He has published 10 technical papers in international and national conferences and journals.

Dr. R. Venkatesan received his Ph.D. degree in Computer Science and Engineering. He is currently working as Professor and Head, Department of Information Technology, PSG College of Technology, Coimbatore. His research interests include simulation and modeling, software engineering, algorithm design, database technology, software project management, and software process management. He has published 25 technical papers in international and national conferences and journals.


Data Encryption technique using Location based key dependent Permutation and circular rotation
Prasad Reddy P.V.G.D.*, K. R. Sudha2, S. Krishna Rao3

* Department of Computer Science and Systems Engineering, Andhra University, Visakhapatnam, India, [email protected]
2 Department of Electrical Engineering, Andhra University, Visakhapatnam, India, [email protected]
3 Department of Computer Science and Systems Engineering, Sir C.R.Reddy College of Engineering, Eluru, [email protected]

Abstract: Wireless networks deliver data through public channels to unspecified clients in mobile distributed systems. In such a scenario, a need for secure communication arises. Secure communication is possible through encryption of data, and many encryption techniques have evolved over time. However, most data encryption techniques are location-independent: data encrypted with such techniques can be decrypted anywhere, and the encryption technology cannot restrict the location of data decryption. GPS-based encryption (or geo-encryption) is an innovative technique that uses GPS technology to encode location information into the encryption keys to provide location-based security. In this paper a new data encryption technique using a location-based, key-dependent permutation and circular rotation is proposed for mobile information systems.

Keywords: Location-dependent encryption, GPS, permutation, circular rotation

1. Introduction

The dominant trend in telecommunications in recent years is towards mobile communication. The next-generation network will extend today's voice-only mobile networks to multi-service networks, able to carry data and video services alongside the traditional voice services. Wireless communication is the fastest growing segment of the communication industry. Wireless became a commercial success in the early 1980s with the introduction of cellular systems. Today wireless has become a critical business tool and a part of everyday life in most developed countries. Applications of wireless range from common appliances used every day, such as cordless phones and pagers, to high-frequency applications such as cellular phones. The widespread deployment of cellular phones based on the frequency-reuse principle has clearly indicated the need for mobility and convenience. The concept of mobility in applications is not limited to voice transfer over the wireless medium; it also covers data transfer in the form of text, alphanumeric characters, and images, which includes the

transfer of credit card information, financial details, and other important documents. The basic goal of most cryptographic systems is to transmit some data, termed the plaintext, in such a way that it cannot be decoded by unauthorized agents [5][6][7][8][9]. This is done by using a cryptographic key and algorithm to convert the plaintext into encrypted data, or ciphertext. Only authorized agents should be able to convert the ciphertext back to the plaintext. GPS-based encryption (or geo-encryption) is an innovative technique that uses GPS technology to encode location information into the encryption keys to provide location-based security [12][13]. It adds another layer of security on top of existing encryption methods by restricting the decryption of a message to a particular location, and it can be used with both fixed and mobile nodes. The terms location-based encryption and geo-encryption refer to any method of encryption in which the encrypted information, called ciphertext, can be decrypted only at a specified location. If someone attempts to decrypt the data at another location, the decryption process fails and reveals no details about the original plaintext. The device performing the decryption determines its location using some type of location sensor, such as a GPS receiver. Location-based encryption can be used to ensure that data cannot be decrypted outside a particular facility, for example, the headquarters of a government agency or corporation, or an individual's office or home. Alternatively, it may be used to confine access to a broad geographic region. Time as well as space constraints can be placed on the decryption location. Adding security to transmissions uses location-based encryption to limit the area inside which the intended recipient can decrypt messages. In LDEA, the latitude/longitude coordinate of node B is used as the key for the data encryption. When the target coordinate is determined, using a GPS receiver, for data encryption, the ciphertext can only be decrypted at the expected location. A toleration distance (TD) is designed to overcome the inaccuracy and inconsistency of GPS receivers: the sender determines the TD, and the receiver can decrypt the ciphertext within the range of the TD. Denning's model is effective when the sender of a message knows the


recipient's location L and the time that the recipient will be there, and can be applied especially effectively in situations where the recipient remains stationary at a well-known location. The mobile client transmits a target latitude/longitude coordinate to the information server, and an LDEA key is obtained for data encryption. The client can only decrypt the ciphertext when the coordinate acquired from its GPS receiver matches the target coordinate. For improved security, a random key (R-key) is incorporated in addition to the LDEA key. In the present paper the objective is to modify the cipher by introducing the concept of key-dependent circular rotation: the bits are rotated depending upon the R-key after whitening with the LDEA key using the Exclusive-OR operation.
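As a rough sketch of the location gate implied by the toleration distance, the function below compares a haversine great-circle distance against the TD before decryption would proceed. The coordinates and the 50 m TD in the example are made up for illustration.

    import math

    def within_tolerance(gps, target, td_m):
        # Haversine distance between the receiver's GPS fix and the target
        # coordinate; decryption proceeds only inside the toleration distance.
        lat1, lon1, lat2, lon2 = map(math.radians, (*gps, *target))
        a = (math.sin((lat2 - lat1) / 2) ** 2 +
             math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 6371000 * 2 * math.asin(math.sqrt(a)) <= td_m

    # Decryption is attempted only when the fix is within TD of the target.
    print(within_tolerance((17.73, 83.32), (17.7301, 83.3199), td_m=50))  # True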

2. Random number generator using a quadruple vector

For the generation of the random numbers a quadruple vector is used [7][10]. The quadruple vector T is generated for the 4^4 = 256 values, i.e., for the 0-255 ASCII values:

T = [0 0 0 0 0 0 0 0 1 1 ...
     0 0 0 0 1 1 1 1 2 2 ...
     0 1 2 3 0 1 2 3 0 1 ... 3]

The recurrence matrix [1][2][3][4]

$A = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$

is used to generate the random sequence for the 0-255 ASCII characters by multiplying r = [A]·[T] and taking the values mod 4. The random sequence is then obtained using the formula [4^0 4^1 4^2]·r [10].

3. Development of the cipher

Consider a plain text represented by P, where

P = [P_ij], i = 1 to n, j = 1 to n.  (1)

Let the key matrix be defined by

K = [K_ij], i = 1 to n, j = 1 to n.  (2)

Let the cipher text corresponding to the plain text (1) be denoted by C = [C_ij], i = 1 to n, j = 1 to n. For the sake of convenience the matrices P, K, and C are represented as P = [p_1 p_2 ... p_{n^2}], K = [k_1 k_2 ... k_{n^2}], and C = [c_1 c_2 ... c_{n^2}], where the components are taken row-wise from the corresponding matrices. The process of encryption and decryption is as follows.

1. The components of P are first converted into their corresponding binary bits in the form p_11 p_12 ... p_18, p_21 ... p_{n^2,1} ... p_{n^2,8}, where p_11 p_12 ... are the binary bits corresponding to p_1, p_2, .... As the numbers in the plain text are between 0 and 255, we have 8 bits in every number, so there are 8n^2 binary bits in total. We divide the string of 8n^2 binary bits into 8 substrings, each having n^2 binary bits.

2. The plain text P is whitened using the Exclusive-OR operation with the LDEA key to get the first stage of the cipher text, denoted by X and consisting of the binary bits x_11 x_12 x_13 ... x_{n^2,1} ... x_{n^2,8}. As the numbers in the first stage of the cipher text are between 0 and 255, we again have 8n^2 binary bits, divided into 8 substrings of n^2 bits each.

3. In the next stage of the cipher we transpose the matrix X consisting of the substrings and interchange the first bit of the string with the k_1-th bit of the entire string, the second bit with the k_2-th bit, and so on for all the bits in X.

4. Finally, the bits in X are rotated left d_i times, where d_i = k_i mod n^2, to get the final cipher text. In the decryption process the bits are rotated d_i bits right.

3.1 Algorithm

Algorithm for Encryption:
    read n, K, P, r
    for i = 1 to n
        p = convert(P)
        X = p XOR LDEA-key
        C1 = Permute(X)
        C = Lrotate(C1)
    write C

Algorithm for Decryption:
    read LDEA-key, R-key, n, C
    for i = 1 to n
        C1 = Rrotate(C)
        X = Permute(C1)
        p = X XOR LDEA-key
        P = convert(p)
    write P
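The following Python sketch mirrors the encryption and decryption algorithms above: XOR whitening with the LDEA key, a key-dependent bit permutation standing in for Permute, and a circular rotation for Lrotate/Rrotate. The byte-level key, the permutation drawn with random.sample, and the rotation amount d = 5 are illustrative assumptions, not the paper's exact key schedule.

    import random

    def bits(data):                # bytes -> list of bits, MSB first
        return [b >> (7 - i) & 1 for b in data for i in range(8)]

    def bytes_from(bitlist):       # list of bits -> bytes
        return bytes(sum(bit << (7 - i) for i, bit in enumerate(bitlist[k:k + 8]))
                     for k in range(0, len(bitlist), 8))

    def encrypt(plain, ldea_key, perm, d):
        # XOR-whiten with the LDEA key bits, permute with the R-key-derived
        # permutation `perm`, then rotate left by d bits (d = k_i mod n^2).
        x = [p ^ k for p, k in zip(bits(plain), bits(ldea_key))]
        x = [x[j] for j in perm]            # key-dependent transposition
        return bytes_from(x[d:] + x[:d])    # circular left rotation

    def decrypt(cipher, ldea_key, perm, d):
        x = bits(cipher)
        x = x[-d:] + x[:-d]                 # rotate right to undo
        inv = [0] * len(perm)
        for j, p in enumerate(perm):
            inv[p] = j
        x = [x[j] for j in inv]             # inverse permutation
        return bytes_from([c ^ k for c, k in zip(x, bits(ldea_key))])

    msg = b"The distance between"           # one block of plaintext
    key = bytes(random.randrange(256) for _ in msg)
    perm = random.sample(range(8 * len(msg)), 8 * len(msg))
    assert decrypt(encrypt(msg, key, perm, d=5), key, perm, d=5) == msg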

4. Illustration of the cipher

Encryption. Plain text: "The distance between every pair of points in the universe is negligible by virtue of communication facilities. Let us reach each point in the sky. This is the wish of scientists."

ASCII equivalent:
P = [84 104 101 32 100 105 115 116 97 110 99 101 32 98 101 116 119 101 101 110 32 101 118 101 114 121 32 112 97 105 114 32 111 102 32 112 111 105 110 116 115 32 105 110 32 116 104 101 32 117 110 105 118 101 114 115 101 32 105 115 32 110 101 103 108 105 103 105 98 108 101 32 98 121 32 118 105 114 116 117 101 32 111 102 32 99 111 109 109 117 110 105 99 97 116 105 111 110 ... 46]

LDEA key: r1 = [0 0 1 0 0 0 1 1]; r2 = [1 0 0 1 1 1 1 1]; r3 = [1 1 1 1 1 1 1 1].

X = P xor LDEA key. Permuting with the R-key, applying the key-dependent circular rotation d_i times where d_i = k_i mod 16, and transposing gives

C = [34 88 14 9 32 120 38 3 101 125 86 87 53 89 95 85 99 94 110 13 58 120 55 43 39 104 86 30 33 25 122 69 49 89 111 69 102 124 22 27 121 111 119 102 127 30 91 61 99 31 74 77 60 104 55 50 39 105 86 94 37 25 122 85 49 89 79 69 100 124 22 19 39 104 118 30 35 25 122 77 107 93 78 109 52 122 55 19 109 110 118 54 59 27 91 109 123 87 79 109 92 126 53 51 124 109 55 114 119 31 75 93 103 92 78 29 48 121 55 67 113 109 119 70 119 28 91 29 65 76 76 4 48 48 19 3 61 105 119 118 103 31 90 93 81 13 73 68 116 36 19 18 61 105 119 118 103 31 90 93 117 93 79 85 116 125 23 83 101 110 118 22 59 25 91 109]

Cipher text: C = "Xx& e}VW5Y_Uc^n:x7+'hV-! zE1YoEf| yowf -[=c JM<h72'iV^% zU1YOEd| 'hv-# zMk]Nm4z7 mnv6; [m{WOm\~53|m7rwK] g\N 0y7CqmwFw [ ALL 00  =iwvgZ] Q IDt$ =iwvgZ] u]OUt} Senv ; [m"

Decryption. Starting from the cipher text C above, after the key-dependent inverse permutation and circular rotation the ASCII equivalent P is recovered, which decodes back to the plain text: "The distance between every pair of points in the universe is negligible by virtue of communication facilities. Let us reach each point in the sky. This is the wish of scientists."

5. Cryptanalysis

If the latitude and longitude coordinate were simply used as the key for data encryption, the strength would not be sufficient; that is why a random key is incorporated into the LDEA algorithm. Let us consider the cryptanalysis of the cipher. In this cipher the length of the key is 8n^2 binary bits, hence the key space is 2^{8n^2}. Due to this fact the cipher cannot be broken by a brute-force attack. The cipher also cannot be broken with a known-plaintext attack, as there is no direct relation between the plain text and the cipher text even if the longitude and latitude details are known. It is noted that the key-dependent permutation plays an important role in displacing the binary bits at the various stages of iteration, and this induces enormous strength in the cipher.

Avalanche effect. With a change in the LDEA key from 2334719 to 2334718, the cipher text changes to

G = [34 88 14 9 32 120 38 3 37 121 86 87 37 89 94 85 99 94 110 13 58 120 55 43 103 108 86 30 49 25 123 69 49 89 111 69 102 124 22 27 57 107 119 102 111 30 90 61 99 31 74 77 60 104 55 50 103 109 86 94 53 25 123 85 49 89 79 69 100 124 22 19 103 108 118 30 51 25 123 77 107 93 78 109 52 122 55 19 45 106 118 54 43 27 90 109 123 87 79 109 92 126 53 51 60 105 55 114 103 31 74 93 103 92 78 29 48 121 55 67 49 105 119 70 103 28 90 29 65 76 76 4 48 48 19 3 125 109 119 118 119 31 91 93 81 13 73 68 116 36 19 18 125 109 119 118 119 31 91 93 117 93 79 85 116 125 23 83 37 106 118 22 43 25 90 109]

The change in the cipher text is 44 bits.
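The 44-bit avalanche figure can be checked by counting differing bits between the two cipher arrays; the snippet below does this for a short slice of C and G (over the full arrays the count is 44, per the text above).

    def bit_changes(c1, c2):
        # Count differing bits between two cipher arrays (avalanche effect).
        return sum(bin(a ^ b).count('1') for a, b in zip(c1, c2))

    c = [101, 125, 86, 87, 53]   # values 9-13 of C
    g = [37, 121, 86, 87, 37]    # values 9-13 of G
    print(bit_changes(c, g))     # 3 for this slice; 44 over the full arrays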

6. Conclusions

In this paper a cipher is developed using the LDEA-key-dependent permutation and circular rotation as the primary concept. The cryptanalysis discussed above indicates that the cipher is strong and cannot be broken by any


cryptanalytic attack, since it includes transposition of the binary bits of the plain text at every stage.

7. Acknowledgements

This work was supported by grants from the All India Council for Technical Education (AICTE) under the RPS Scheme, file No. F.No.8023/BOR/RID/RPS114/2008-09.

References

[1] K. R. Sudha, A. Chandra Sekhar and Prasad Reddy P.V.G.D, "Cryptography protection of digital signals using some recurrence relations," IJCSNS International Journal of Computer Science and Network Security, vol. 7, no. 5, May 2007, pp. 203-207.
[2] A. P. Stakhov, "The "golden" matrices and a new kind of cryptography," Chaos, Solitons and Fractals, 32 (2007), pp. 1138-1146.
[3] A. P. Stakhov, "The golden section and modern harmony mathematics. Applications of Fibonacci numbers," 7, Kluwer Academic Publishers (1998), pp. 393-399.
[4] A. P. Stakhov, "The golden section in the measurement theory," Comput. Math. Appl., 17 (1989), pp. 613-638.
[5] W. Diffie and M. E. Hellman, "New directions in cryptography," IEEE Transactions on Information Theory, vol. 22, no. 6, November 1976, pp. 644-654.
[6] W. Diffie and M. E. Hellman, "Privacy and authentication: An introduction to cryptography," Proceedings of the IEEE, vol. 67, no. 3, March 1979, pp. 397-427.
[7] A. V. N. Krishna, S. N. N. Pandit and A. Vinaya Babu, "A generalized scheme for data encryption technique using a randomized matrix key," Journal of Discrete Mathematical Sciences & Cryptography, vol. 10 (2007), no. 1, pp. 73-81.
[8] C. E. Shannon, "Communication theory of secrecy systems." The material in this paper appeared in a confidential report, "A Mathematical Theory of Cryptography," dated Sept. 1, 1946, which has now been declassified.
[9] C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, 27 (1948), pp. 379-423, 623-656.
[10] A. Chandra Sekhar, K. R. Sudha and Prasad Reddy P.V.G.D, "Data encryption technique using random number generator," Granular Computing, 2007 (GRC 2007), IEEE International Conference, 2-4 Nov. 2007, p. 576.
[11] V. Tolety, "Load Reduction in Ad Hoc Networks using Mobile Servers," Master's thesis, Colorado School of Mines, 1999.
[12] L. Scott and D. Denning, "Geo-encryption: Using GPS to enhance data security," GPS World, April 1, 2003.
[13] A. Al-Fuqaha and O. Al-Ibrahim, "Geo-encryption protocol for mobile networks," Computer Communications, 30 (2007), pp. 2510-25.

Authors Profile

Dr. Prasad Reddy P.V.G.D is a Professor of Computer Engineering at Andhra University, Visakhapatnam, India. He works in the areas of enterprise/distributed technologies and XML-based object models, and is specialized in scalable web applications as an enterprise architect. With over 20 years of experience in the field of IT and teaching, Dr. Prasad Reddy has developed a number of products and completed several industry projects. He is a regular speaker at many conferences and contributes technical articles to international journals and magazines, with research interests in software engineering, image processing, data engineering, communications, and bioinformatics.

K. R. Sudha received her B.E. degree in Electrical Engineering from GITAM, Andhra University, in 1991. She received her M.E. in Power Systems in 1994 and was awarded her doctorate in Electrical Engineering in 2006 by Andhra University. During 1994-2006 she worked with GITAM Engineering College, and presently she is working as Professor in the Department of Electrical Engineering, Andhra University, Visakhapatnam, India.

S. Krishna Rao received his M.Tech. degree in Computer Science and Systems Engineering from Andhra University in 2000. He is presently working as Associate Professor in the Department of Computer Science and Engineering at Sir C.R.Reddy College of Engineering, Eluru.


Implementation of Watermarking for a Blind Image using Wavelet Tree Quantization
S. M. Ramesh1, Dr. A. Shanmugam2, B. Gomathy3

1 Senior Lecturer, Dept. of ECE, Bannari Amman Institute of Technology, Erode-638401, India, [email protected]
2 Professor, Dept. of ECE, Bannari Amman Institute of Technology, Erode-638401, India, [email protected]
3 Lecturer, Dept. of CSE, Bannari Amman Institute of Technology, Erode-638401, India, [email protected]
Abstract: This paper proposes an implementation of blind image watermarking using wavelet tree quantization for copyright protection. In such a quantization scheme, there exists a large significant difference between embedding a watermark bit 1 and a watermark bit 0, so no original image or watermark is required during watermark extraction. As a result, the watermarked images look visually lossless in comparison with the original ones, and the proposed method can effectively resist common image-processing attacks, especially JPEG compression and low-pass filtering. Moreover, by designing an adaptive threshold value in the extraction process, our method is more robust against common attacks such as median filtering, average filtering, and Gaussian noise. Experimental results show that the watermarked image looks visually identical to the original, and the watermark can be effectively extracted even after unintentional image processing or intentional image attacks.

Keywords - Wavelet tree quantization, JPEG compression, Low-pass filtering, Discrete Wavelet Transform.

1. Introduction
The Internet has grown tremendously in the last decade. Due to the digitalization of documents, images, music, videos, etc., people can access and propagate them easily via the network. The watermarking technique has been widely applied to digital contents for copyright protection, image authentication, proof of ownership, etc. This technique embeds information so that it is not easily perceptible; that is, the viewer cannot see any information embedded in the contents. The issue here is the detection of the existence of watermarks in the digital contents to prove ownership. The spatial and spectral domains are two common venues for watermarking. In the spatial domain, the watermark is embedded in selected areas of the texture of the host image. In the spectral domain, since spread-spectrum communication is robust against many types of interference and jamming, the host image is transformed to the frequency domain using methods such as the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT); the watermark is then embedded in the

mid-frequency range to ensure the transparency and robustness of the watermarked image. One approach suggested inserting the watermark into the perceptually significant portion of the whole DCT-transformed image, wherein a predetermined range of low-frequency components excludes the DC component; this watermarking scheme has been shown to be robust against common attacks such as compression, filtering, and cropping. Another proposed watermarking method, in accordance with multi-threshold wavelet coding (MTWC), adopts successive subband quantization (SSQ) to search for the significant coefficients; the watermark is added by quantizing the significant coefficients in the significant band using different weights. A further proposed watermarking method is based on the qualified significant wavelet tree (QSWT), which is derived from the embedded zerotree wavelet algorithm (EZW); the watermark is embedded in each of the two subbands of the wavelet tree. Several watermarking methods used two sets of coefficients, one to represent the watermark bit 0 and the other to represent the watermark bit 1; according to the embedded watermark bit, only one set of coefficients is quantized each time. A blind watermarking approach called differential energy watermarking has also been proposed: a set of several 8×8 DCT blocks is composed and divided into two parts to embed a watermark bit, and the high-frequency DCT coefficients in the JPEG/MPEG stream are selectively discarded to produce energy differences in the two parts of the same set. In another scheme, the wavelet coefficients of the host image are grouped into wavelet trees, and each watermark bit is embedded using two trees; one of the two trees is quantized with respect to a quantization index, and the two trees exhibit a large statistical difference between the quantized tree and the unquantized tree, which can later be used for watermark extraction. EL [10] improved the SHW [11] method by adding the Human Visual System (HVS) to effectively resist geometric attacks. BKL [9] improved the SHW [11] method by using four trees to represent two watermark bits


to improve visual quality: one of the four trees is quantized according to the binary value of the two embedded watermark bits. But these methods [9-11] cannot effectively resist low-pass filtering and JPEG compression attacks. In this paper, we propose a blind watermarking method based on wavelet tree quantization. A watermarking technique is denoted as blind if the original image is not needed for extraction. In previous research, watermarks embedded in the significant coefficients were found to be robust; the common issue with blind detection is to ensure that the extracting order is the same as the embedding order. Hence, we propose a watermarking method which embeds a watermark bit in the maximum wavelet coefficient of a wavelet tree. The proposed method differs from others which use two trees or two blocks to embed a watermark bit: we embed the watermark by scaling the magnitude of the significant difference between the two largest wavelet coefficients in a wavelet tree to improve the robustness of the watermarking. The trees are quantized so that they exhibit a sufficiently large energy difference between a watermark bit 1 and a watermark bit 0, which is then used for watermark extraction. During extraction, an adaptive threshold value is designed: a watermark bit 1 is extracted if the significant difference is greater than the adaptive threshold value; otherwise, a watermark bit 0 is extracted. Experimental results show that the proposed method is very efficient in resisting various kinds of attacks.

2. Watermarking by Quantization of Wavelet Trees

2.1 Wavelet Trees

A host image of size n by n is transformed into wavelet coefficients using the 4-level discrete wavelet transform (DWT). With 4-level decomposition we have 13 frequency bands, as shown in Fig. 1. The parent-child relationship between the sub-nodes forms a wavelet tree; if the root consists of more than one node, an image will have many wavelet trees after the DWT. In this case, we have the 3 subbands LH4, HL4, and HH4 as roots, and there are 3×(n/2^4 × n/2^4) wavelet trees in total after an image of size n by n is transformed by a 4-level wavelet transform. A higher resolution level (such as level 3 in Fig. 1) has more significant coefficients than a lower resolution level (such as level 2 in Fig. 1). When n = 512, there are 85 coefficients in a wavelet tree constructed from a node in LH4 down to LH1 following the parent-child relationship. To withstand attacks such as low-pass filtering, our proposed method only needs the largest two coefficients; these are selected from the one coefficient in LH4 and the four coefficients of the same orientation in the same spatial location in LH3, as shown in Fig. 2.

2.2 The Preprocess

With a four-level DWT we have 13 frequency bands, as shown in Fig. 1. A higher-level subband is more significant than a lower one. Using the LL4 subband as a root is not suitable for embedding a watermark, since it is a low-frequency band that contains important information about the image and easily causes image distortions. Embedding a watermark in the HH4 subband is also not suitable, since that subband can easily be eliminated, for example by lossy compression. The LH4 subband is more significant than the HL4 subband; hence LH4 has a higher priority than HL4 in the selection. A binary watermark image W of N_w (≤ S = n/2^4 × n/2^4) bits is embedded. We represent each watermark bit as 1 or 0, and use a pseudorandom function with another seed to shuffle the N_w bits. According to the watermark bits to be embedded, we select N_w non-overlapping wavelet trees and compute the global average significant difference over the N_w wavelet trees using Eq. (1):

$\varepsilon = \left\lfloor \frac{1}{N_w} \sum_{i=1}^{N_w} (\max_i - \mathrm{sec}_i) \right\rfloor$  (1)

Figure 1. The watermark embedding procedure.

Figure 2. The watermark extraction procedure.


where ε is the global average significant difference over all N_w wavelet trees, ⌊·⌋ is the floor function, max_i is the local maximum wavelet coefficient of the ith wavelet tree, and sec_i is


the local second maximum wavelet coefficient of the ith wavelet tree, 1 ≤ i ≤ N_w.

2.3 Watermark Embedding

Let max_i and sec_i be the local maximum and the local second maximum wavelet coefficients in a wavelet tree; the difference between them is called the local significant difference. We select a threshold value β as an increment for quantization: the larger β is, the better the robustness but the worse the distortion of the watermarked image. Each time we embed a watermark bit, we quantize the maximum coefficient in a wavelet tree:

if max_i < 0, then max_i = 0,  (2)
Δ_i = max_i − sec_i,  (3)

where Δ_i denotes the significant difference between the maximum and the second maximum coefficients in the ith of the N_w wavelet trees. Because some maximum coefficients in a wavelet tree may be negative, and a positive value has higher robustness than a negative value under attacks, Δ_i becomes more significant if we modify the maximum coefficient from a negative to a positive value; the more significant Δ_i is, the more accurate the watermark extraction will be. To make the new maximum coefficient positive and to decrease the distortion of the watermarked image due to quantization, the new maximum coefficient is set here to the smallest positive value, zero.

When embedding a watermark bit 1, max_i is quantized by

max_i^new = max_i, if Δ_i ≥ Maximum(ε, T),  (4)
max_i^new = max_i + β, if Δ_i < Maximum(ε, T) and max_i is a root,  (5)
max_i^new = max_i + β × γ, if Δ_i < Maximum(ε, T) and max_i is not a root,  (6)

where max_i^new denotes the new maximum coefficient in the ith wavelet tree after embedding the watermark bit. A max_i located at a lower resolution level (child node) is less robust than one located at a higher resolution level (root node), as stated in Section 2.1. According to the band sensitivity, coefficients quantized at different resolution levels are given different weights: the quantized coefficient at the lower resolution level is given a heavier weight than that at the higher resolution level. Hence, if max_i is not located at the highest resolution level, we quantize it by adding γ times the β energy, where γ is a scale parameter set to γ = 1.5.

Otherwise, we embed a watermark bit 0 according to Eq. (7):

if sec_i < 0, then max_i^new = 0 and sec_i^new = 0; otherwise max_i^new = sec_i,  (7)

where sec_i^new denotes the new second maximum coefficient in the ith wavelet tree after embedding the watermark bit 0. When embedding a watermark bit 1, the local maximum significant coefficient is quantized and incremented by β if Δ_i < Maximum(ε, T); otherwise (i.e., Δ_i ≥ Maximum(ε, T)), it is kept the same as before, so as not to increase the distortion of the image. Some images have a small ε, implying that their significant difference is not obvious; an extra parameter T is needed to improve the robustness. The larger T is, the higher the probability that max_i is quantized to a larger value, but the more the image is distorted as well. For example, let ε = 12 and Δ_i = 13. If T is set to be less than ε, such as T = 11, max_i will not be quantized, since Δ_i > ε = 12. On the other hand, if T is set to be larger than ε, such as T = 14, max_i will be quantized and increased by β, since Δ_i < T = 14. On the contrary, when embedding a watermark bit 0, max_i is quantized by decreasing it to the local second maximum sec_i, so the new Δ_i will be equal to 0. Based on this strategy, there exists a large energy difference between embedding a watermark bit 1 and a watermark bit 0.

3. Design of Watermark Decoder

3.1 The Decoder Design

In the proposed method, neither the original image nor the original watermark image is required for the extraction process. During the embedding process, we embed a watermark bit 1 by adding an energy β (or β × γ) to the local maximum wavelet coefficient in the wavelet tree, and embed a watermark bit 0 by setting max_i = sec_i. Hence, if a wavelet tree was embedded with a watermark bit 0, the local significant difference between the largest two coefficients will be close to zero; otherwise, if the wavelet tree was embedded with a watermark bit 1, the local significant difference will be greater than β. To extract watermark bits correctly, the value of y is computed by Eq. (8):

$y = \frac{1}{N_w \times \alpha} \sum_{j=1}^{N_w \times \alpha} \phi_j$  (8)

where φ = {max_1 − sec_1, max_2 − sec_2, ..., max_i − sec_i} for i = 1, 2, ..., N_w; the sorted version of φ is denoted ϕ = {ϕ_1, ϕ_2, ..., ϕ_{N_w}} with ϕ_1 < ϕ_2 < ... < ϕ_{N_w}; and α is the scale parameter, 0 < α ≤ 1. α is crucial to y and determines what percentage of the significant differences in φ is used for the average; hence α marks the minimal y value for extracting the watermark. The larger α is, the larger y will be.
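A compact sketch of the embedding rules of Eqs. (1)-(7) is given below. The example trees are made up, and the parameter values T = 10, β = 7, γ = 1.5 are taken from the experimental section purely for illustration.

    import math

    def global_eps(trees):
        # Eq. (1): floored mean of (max_i - sec_i) over the Nw trees.
        return math.floor(sum(m - s for m, s in trees) / len(trees))

    def embed_bit(maxi, seci, bit, eps, T=10, beta=7, gamma=1.5, is_root=True):
        # Quantize the largest tree coefficient per Eqs. (2)-(7).
        maxi = max(maxi, 0.0)                 # Eq. (2): force a positive maximum
        delta = maxi - seci                   # Eq. (3)
        if bit == 1:
            if delta >= max(eps, T):
                return maxi                   # Eq. (4): already separated enough
            step = beta if is_root else beta * gamma  # Eqs. (5)/(6)
            return maxi + step
        return 0.0 if seci < 0 else seci      # Eq. (7): bit 0 collapses the gap

    trees = [(25.0, 18.0), (14.0, 13.5), (-3.0, -7.0)]
    eps = global_eps(trees)
    print(eps, [embed_bit(m, s, b, eps) for (m, s), b in zip(trees, [1, 0, 1])])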


Suppose all embedded watermark bits are 1 in the watermark. This means that the difference between the maximum and second maximum wavelet coefficients for any embedded wavelet tree is greater than β. The value of α should then be set as small as possible to avoid extraction errors (see Eqs. (9), (10)); the reason is that a small α excludes the big significant differences of the embedded wavelet trees in Eq. (8). On the other hand, if all embedded watermark bits are 0 in the watermark, the value of α must be set as large as possible. Therefore, α is sensitive to the content of the watermark.

3.2 Watermark Extraction

Following Eqs. (8), (9), and (10), it is easy to extract the watermark. If the local significant difference is greater than or equal to y, where 0 < y ≤ β, the embedded watermark bit is taken to be 1; otherwise it is taken to be 0. The watermark bit is extracted based on Eqs. (9) and (10) as follows:

watermark bit = 1, if (max_i − sec_i) > y,  (9)
watermark bit = 0, otherwise.  (10)
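The decision rule of Eqs. (8)-(10) can be sketched as follows, averaging the N_w·α smallest sorted differences to form y (α = 0.7 as in the experiments; the example trees are made up).

    def extract_bits(trees, alpha=0.7):
        # Eqs. (8)-(10): y is the mean of the smallest Nw*alpha sorted
        # significant differences; a bit is 1 iff max - sec exceeds y.
        diffs = [m - s for m, s in trees]
        k = max(1, int(len(diffs) * alpha))
        y = sum(sorted(diffs)[:k]) / k               # Eq. (8)
        return [1 if d > y else 0 for d in diffs]    # Eqs. (9)/(10)

    # Trees embedded with bits 1, 0, 1: bit-1 trees keep a gap, bit-0 trees ~0.
    print(extract_bits([(32.0, 18.0), (13.5, 13.5), (7.0, 0.0)]))  # [1, 0, 1]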

The peak signal-to-noise ratio (PSNR) is used to evaluate the quality of the attacked image relative to the original image. After extracting the watermark, the normalized correlation coefficient (NC) between the original watermark and the extracted watermark is computed to judge the existence of the watermark. The NC coefficient is defined as

$NC = \frac{1}{w_h \times w_w} \sum_{i=0}^{w_h - 1} \sum_{j=0}^{w_w - 1} w(i, j) \times w'(i, j)$  (11)

where w_h and w_w are the height and width of the watermark, and w(i, j) and w'(i, j) are the values at coordinate (i, j) in the original and extracted watermarks, respectively. Here w(i, j) is set to 1 if it is a watermark bit 1 and to −1 otherwise, and w'(i, j) is set in the same way, so the value of w(i, j) × w'(i, j) is either −1 or 1.
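The NC of Eq. (11) thus reduces to an average of elementwise products over ±1-valued matrices; a minimal implementation follows (the 2×2 example watermark is made up).

    def normalized_correlation(w, w_prime):
        # Eq. (11): entries are +1 (bit 1) or -1 (bit 0); NC = 1 means
        # a perfectly recovered watermark.
        h, wd = len(w), len(w[0])
        total = sum(w[i][j] * w_prime[i][j] for i in range(h) for j in range(wd))
        return total / (h * wd)

    w = [[1, -1], [1, 1]]
    print(normalized_correlation(w, w))                   # 1.0 (identical)
    print(normalized_correlation(w, [[1, -1], [1, -1]]))  # 0.5 (one bit flipped)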
Table 1: Normalized correlation coefficients (NC) after attacks by JPEG compression with quality factors (QF) 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100.

QF:  10    20    30    40    50    60    70   80   90   100
NC:  0.33  0.65  0.78  0.82  0.89  0.96  1    1    1    1

Figure 3. (a) The original image of Lena of size 512×512. (b) The original binary watermark of size 32×16.

Figure 4. (a) Watermarked Lena with PSNR = 41.88 dB. (b) The extracted watermark with NC = 1.

4. Experimental Results

The Lena image (512×512 pixels, 8 bits/pixel) was attacked using the StirMark benchmark and the PhotoImpact 11 software tools to simulate common image attacks. For the no-attack case, for the sake of brevity only the Lena image and the binary watermark are shown in Fig. 3, and Fig. 4 shows the watermarked image and the extracted result. We predetermine the scale parameters as T = 10, β = 7, γ = 1.5, and α = 0.7. In the following, both geometric and nongeometric attacks are considered; nongeometric attacks include JPEG compression, low-pass filtering, histogram equalization, and sharpening. From Table 1, the proposed method can correctly extract the watermark while the quality factor is greater than 70, and extraction becomes worse as the quality factor is


decreased. The lower the quality factor, the more vague the extracted watermark.

4.1 Experiment Analysis

We compare the proposed method with BKL [9], EL [10], and SHW [11] using the Lena image. The watermark consists of 256 bits of 1 and 256 bits of 0. The results are shown in Table 2. From Table 2, the PSNR of the proposed method is better than that of [11]. Our method does not perform well for rotation attacks with degrees greater than ±0.70, but it is far better than the listed methods in other respects, especially for low-pass filtering and JPEG compression attacks.
Table 2: Comparison of the proposed method and the methods in [9], [10], and [11].

Attack / NC         SHW [11]          EL [10]           BKL [9]            Proposed
                    (PSNR = 38.2 dB)  (PSNR = 40.6 dB)  (PSNR = 41.54 dB)  (PSNR = 41.88 dB)
JPEG (QF = 10)      NA                0.15              0.17               0.33
JPEG (QF = 20)      NA                0.34              0.61               0.65
JPEG (QF = 30)      0.15              0.52              0.79               0.78
JPEG (QF = 40)      0.23              0.52              0.83               0.82
JPEG (QF = 50)      0.28              0.52              0.89               0.89
JPEG (QF = 60)      0.46              0.59              0.94               0.96
JPEG (QF = 70)      0.57              0.63              0.97               1
JPEG (QF = 80)      0.89              0.71              0.99               1
JPEG (QF = 90)      1                 0.78              1                  1
JPEG (QF = 100)     1                 0.88              1                  1

5. Conclusion

In this paper, we propose a wavelet-tree-based blind watermarking method that embeds each watermark bit by quantizing the maximum wavelet coefficient in a wavelet tree. The trees are quantized so that they exhibit a sufficiently large energy difference between a watermark bit 0 and a watermark bit 1; this difference, denoted the significant difference, is then used for later watermark extraction. During extraction, an adaptive threshold value is designed, and the magnitude of the significant difference in a wavelet tree is compared against it. Furthermore, since each wavelet tree is embedded with one watermark bit, we can not only embed more bits in an image but also extract the watermark without any need for the original image or watermark. In addition to copyright protection, the proposed method can also be applied to data hiding and image authentication.

References

[1] D. P. Mukherjee, S. Maitra, and S. T. Acton, "Spatial domain digital watermarking of multimedia objects for buyer authentication," IEEE Trans. Multimedia, vol. 6, pp. 1-15, Feb. 2004.
[2] N. Nikolaidis and I. Pitas, "Robust image watermarking in the spatial domain," Signal Processing, vol. 66, pp. 385-403, 1998.
[3] A. K. Mahmood and A. Selin, "Spatially adaptive wavelet thresholding for image watermarking," in Proc. IEEE ICME, Toronto, 2006.
[4] J. R. Hernandez, M. Amado, and F. Perez-Gonzalez, "DCT-domain watermarking techniques for still images: detector performance analysis and a new structure," IEEE Trans. Image Processing, vol. 9, pp. 55-68, Jan. 2000.
[5] L. Sin-Joo and J. Sung-Hwan, "A survey of watermarking techniques applied to multimedia," in Proc. IEEE ISIE 2001, pp. 272-277.
[6] V. M. Potdar, S. Han, and E. Chang, "A survey of digital image watermarking techniques," in Proc. IEEE INDIN 2005.
[7] H. J. Wang, P. C. Su, and C. C. J. Kuo, "Wavelet-based digital image watermarking," Optics Express, vol. 3, pp. 491-496, Dec. 1998.
[8] M. Hsieh, D. Tseng, and Y. Huang, "Hiding digital watermarks using multiresolution wavelet transform," IEEE Trans. Ind. Electron., vol. 48, pp. 875-882, Oct. 2001.
[9] B. K. Lien and W. H. Lin, "A watermarking method based on maximum distance wavelet tree quantization," in Proc. 19th Conf. Computer Vision, Graphics and Image Processing, 2006.
[10] E. Li, H. Liang, and X. Niu, "An integer wavelet based multiple logo-watermarking scheme," in Proc. IEEE WCICA, 2006.
[11] S. H. Wang and Y. P. Lin, "Wavelet tree quantization for copyright protection watermarking," IEEE Trans. Image Processing, vol. 13, pp. 154-165, Feb. 2004.
[12] G. C. Langelaar and R. L. Lagendijk, "Optimal differential energy watermarking of DCT encoded images and video," IEEE Trans. Image Processing, vol. 10, pp. 148-158, Jan. 2001.
[13] F. A. P. Petitcolas, Weakness of existing watermark schemes, 1997. [Online]. Available: http://www.petitcolas.net/fabien/watermarking/stirmark/index.html
[14] PhotoImpact 11 software, http://www.ulead.com, Ulead Systems, Inc.


Authors Profile
S. M. Ramesh received the B.E. degree in Electronics and Communication Engineering from the National Institute of Technology (formerly Regional Engineering College), Trichy, Bharathidhasan University, India, in 2001 and the M.E. degree in Applied Electronics from RVS College of Engineering and Technology, Dindugal, Anna University, India, in 2004. From 2004 to 2005 he served as a Lecturer in the Department of Electronics and Communication Engineering, Maharaja Engineering College, Coimbatore, India, and from 2005 to 2006 as a Lecturer in the Department of Electronics and Communication Engineering, Nandha Engineering College, Erode, India. Since June 2006 he has served as Senior Lecturer in the Department of Electronics and Communication Engineering, Bannari Amman Institute of Technology, Sathyamangalam, Erode, India. He is currently pursuing the Ph.D. degree at Anna University, Chennai, India, working closely with Prof. Dr. A. Shanmugam and Prof. Dr. R. Harikumar.

Dr. A. Shanmugam received the B.E. degree in Electronics and Communication Engineering from PSG College of Technology, Coimbatore, Madras University, India, in 1972, the M.E. degree in Applied Electronics from the College of Engineering, Guindy, Chennai, Madras University, India, in 1978, and the Ph.D. in Computer Networks from PSG College of Technology, Coimbatore, Bharathiyar University, India, in 1994. From 1972 to 1976 he served as a Testing Engineer at the Test and Development Center, Chennai, India. From 1978 to 1979 he served as a Lecturer in the Department of Electrical Engineering, Annamalai University, India. From 1979 to 2002 he served at different levels as Lecturer, Assistant Professor, Professor, and Head in the Department of Electronics and Communication Engineering of PSG College of Technology, Coimbatore, India. Since April 2004 he has been the Principal of Bannari Amman Institute of Technology, Sathyamangalam, Erode, India. He works in the fields of optical networks, broadband computer networks and wireless networks, and signal processing, specializing particularly in inverse problems, sparse representations, and overcomplete transforms. Dr. A. Shanmugam received the "Best Project Guide Award" five times from the Tamil Nadu state government. He is also the recipient of the "Best Outstanding Fellow Corporate Member Award" by the Institution of Engineers (IE), India, 2004; the "Jewel of India" Award by the International Institute of Education and Management, New Delhi, 2004; the "Bharatiya Vidya Bhavan National Award for Best Engineering College Principal 2005" by the Indian Society for Technical Education (ISTE); and the "Education Excellence Award" by the All India Business & Community Foundation, New Delhi.


A Hierarchical Genetic Algorithm for Topology Optimization and Traffic Management in ATM Networks Utilizing Bandwidth and Throughput
Alaa Sheta1, Malik Braik2 and Mohammad Salamah3

1 Taif University, Information Systems Department, College of Computers and Information Systems, Taif, Saudi Arabia, [email protected]
2 Al-Balqa' Applied University, Information Technology Department, Prince Abdullah Bin Ghazi Faculty of Science and Information Technology, Salt, Jordan, [email protected]
3 National Education Center for Robotics (NECR), King Hussein Foundation, Amman, Jordan, [email protected]

Abstract: The Asynchronous Transfer Mode (ATM) network is expected to become a backbone network for high-speed digital multimedia services in distributed environments. This paper explores an optimization-based hierarchical approach using Genetic Algorithms (GAs) for installing an ATM network with an optimal structure and traffic flow. A Hierarchical Genetic Algorithm (HGA) is used to solve this optimization problem. The HGA approach can improve the performance of a conventional GA, though it consumes more computation time. It is capable of reducing the overall delay while increasing the bits transmitted over the network; this improves the network performance and meets the requirements of multimedia service environments. The preliminary results indicate that HGA-based ATM network design can be very efficacious.

Keywords: Genetic Algorithms, ATM Network, Flow Management

1. Introduction
There is a rapidly growing demand for telecommunication networks to function efficaciously despite obstacles such as disabled portions of the network, limited link capacities, high cabling costs, and so on [1]; networks must constantly be able to maintain an acceptable level of performance. Despite this need, the problem of dynamically redesigning functioning networks has received little attention. Today, with the intensifying role of the Internet and networking technologies, there have been immense advances in telecommunications services in the information society. Asynchronous Transfer Mode (ATM) is one of the most promising networking technologies. ATM was designed to reduce the complexity of the network and improve the flexibility of traffic performance; it is well suited to transmitting many kinds of information and to carrying traffic over all kinds of networks. Furthermore, ATM has been

increasingly used in telecommunications systems for high-speed multimedia services [2], [3]. Recently, flow control and management have become core concepts in the optimal structuring of ATM networks; hence, they have drawn much attention from researchers in the telecommunication field. In addition, the nodes in an ATM network design must be linked in an economical way to handle the expected traffic and capacity constraints [1], [4]. As the models for the design of ATM networks are quite complex, and one of the limiting factors is the requirement of expensive exchange-based equipment, Genetic Algorithms (GAs) were used to solve such problems as traffic assignment and topological design of local area networks [5], [6], flow optimization in survivable ATM networks [7], and flow assignment in ATM networks [8]-[10]. Many programming models have also been developed that deal with telecommunication network planning [11]. In this context, Evolutionary Algorithms (EAs) have frequently been applied to telecommunication services in recent years [9], [12]. This paper handles the design of an optimal ATM network structure (i.e., topology and link capacities) using a Hierarchical Genetic Algorithm (HGA) [13], [14]. The objectives of the network planning are: designing the network structure to carry the estimated traffic, and minimizing the cost of the network [15], [16]. Consequently, we deal with flow assignment and link capacities as complex optimization problems using GAs at two levels: the first GA level selects the optimal network structure, and the second GA level reduces the overall delay while maintaining the throughput; this cycle continues until terminated by some condition. The remainder of the paper is organized as follows. In Section 2 we explain the details of the GA levels; each level of the HGA is described extensively. The case


study and the schematic diagram of the ATM network are discussed in Section 3. In Section 4 we provide an extensive and comprehensive formulation of the use of the HGA for solving the ATM design problem. In Section 5 we present our preliminary results for the ATM network case study. Finally, in Section 6 we offer several concluding remarks and address the challenges ahead in future enhancements.

2. Hierarchical GA
The ATM design problem is treated as a parameter estimation problem, and GAs are used in a hierarchical approach to optimize the ATM network. The proposed HGA optimization scheme is shown in Figure 1. Two levels are evolved in parallel to produce an optimal ATM network design; the fitness score of the individuals in each level depends on the performance of the chromosomes in both levels.

Figure 1. Hierarchical GAs in two levels

2.1 GA Level 1
The GA is used to install a new network with a set of network links that satisfies the demand constraints and has minimum delay over the network. At this level, a new network structure is evolved that has all nodes of the original network and a subset of its links. The network configuration must satisfy the constraints defined in Equations 1 and 2:

    f_i \le C_i \times 95\%    (1)

Thus, the maximum flow value f_i for link i in Equation 1 is restricted not to exceed 95% of the capacity C_i.

    \mathrm{Throughput} \ge 0.5 \times \sum_{i=1}^{n} C_i    (2)

The throughput of the network is not allowed to be less than half the sum of all link capacities. Note that the boundary range of the gene values is based on the link IDs of the ATM experiments, and gene redundancy must be eliminated: a duplicated gene leads to a duplicated link, which in turn increases the network delay.

2.2 GA Level 2
The GA is used to find the values of f_i such that the overall delay is minimized. The computed network delay is then returned to GA Level 1 to be considered as the new fitness value. The fitness criterion at this level is the mean time delay, computed as in Equation 3:

    T = \frac{1}{\theta} \sum_{i=1}^{N} \frac{f_i}{C_i - f_i}, \qquad \theta = \sum_{i=1}^{N} f_i    (3)

where \theta is the total traffic over the network and N is the number of links. This idea was presented in [16].

3. Case Study: ATM Network
There are two sets of customers to be considered while planning the ATM network design: 1) the users, who use services through the network. The network should meet the users' needs in terms of quality of service. The flow assignment and the link capacities are the two main entities affecting the cost of the ATM network; the maximum flow assignment must be tested, and the network should maintain a high capacity. 2) the company that will build and maintain the ATM network. The network operation should be as cost effective as possible for both installation and maintenance. Minimizing the total cost is mainly a matter of selecting the best ATM network design. The schematic diagram for the ATM network under study, with all specified links and capacities, is shown in Figure 2.

Figure 2. The ATM Network Topology

This ATM network was presented in [17], [18]. The network has 7 nodes with 14 links; the links and capacities for each node are presented in Table 1. Each network link is characterized by a set of attributes; for a given link i these attributes are the flow f_i and the capacity C_i. The flow f_i is defined as the effective quantity of information transported by link i, and the capacity C_i is defined as the maximal quantity of information that can be transmitted by link i.
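To make the two fitness criteria concrete, here is a minimal Python sketch of the Level-1 feasibility checks (Equations 1 and 2) and the Level-2 mean-delay fitness (Equation 3). The link capacities follow Table 1; the flow vector is an arbitrary illustrative assignment, not a result from the paper.

```python
def mean_delay(flows, capacities):
    """Mean time delay T = (1/theta) * sum(f_i / (C_i - f_i)), Eq. 3."""
    theta = sum(flows)  # total traffic over the network
    return sum(f / (c - f) for f, c in zip(flows, capacities)) / theta

def feasible(flows, capacities):
    """Check the two Level-1 constraints."""
    under_capacity = all(f <= 0.95 * c for f, c in zip(flows, capacities))  # Eq. 1
    throughput_ok = sum(flows) >= 0.5 * sum(capacities)                     # Eq. 2
    return under_capacity and throughput_ok

capacities = [150, 150, 225, 225] + [150] * 10   # the 14 link capacities from Table 1
flows = [0.9 * c for c in capacities]            # hypothetical flow assignment
print(feasible(flows, capacities), mean_delay(flows, capacities))
```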

Table 1: ATM network capacity of each link

Link ID   Start Node   End Node   Link Capacity
1         1            4          150
2         1            6          150
3         1            3          225
4         1            2          225
5         2            3          150
6         2            6          150
7         2            5          150
8         3            4          150
9         3            5          150
10        4            6          150
11        6            5          150
12        5            7          150
13        6            7          150
14        4            7          150


4. Problem Formulation
The optimization problem of the ATM network is implemented in a hierarchical manner, with two levels of GAs:
• GA Level 1 is used to select the best network topology and forward it to GA Level 2. The ATM network should be effectively connected, such that at least half the network bandwidth is busy.
• GA Level 2 computes the minimum delay over the network; at this level we determine the f_i parameter for each network link i.
The HGA requires a number of steps for the problem formulation, as described below.
Encoding Mechanism: This step encodes the variables of the optimization problem in terms of genes. In this work, a table was created for all pairs of link combinations; the links in the table correspond to the virtual paths included between pairs of nodes. In the proposed algorithm each link is identified by an ID number corresponding to its row number in the table. The proposed GA has 14 genes; each gene corresponds to a link, and each link has an identified capacity C_i.
Initialization: This step uses the encoding method to create a random initial population by randomly generating a suitable number of chromosomes. In this paper, the chromosome length equals the number of links in the original network.
Number of Populations: Various population sizes were used while running the GA at each level.
Selection Mechanism: The selection scheme helps determine the convergence properties of a GA. A selection scheme in GAs is the process which favorably selects better individuals of the current population for the mating pool [19], [20]; the selection method determines how individuals are chosen for mating. The first step in the

selection method is the fitness assignment: each individual in the selection pool receives a reproduction probability depending on its objective value, and this fitness is used in the subsequent selection step. Stochastic uniform selection is used in this work.
Reproduction: This step increases the number of good chromosomes in the next generation and determines how the new individuals are assimilated into the population. Many reproduction operators have been presented in the literature [3], [7].
Crossover: This procedure exchanges genes between the parents. Two chromosomes are randomly selected from the population as parent chromosomes, and two new chromosomes carrying genes from both parents are obtained. A scattered crossover function is used in this work.
Mutation: This step produces a new chromosome which differs from the chromosomes already in the population. A chromosome is randomly selected as the mutated chromosome. In this paper, the mutation type is a Gaussian function with Scale equal to 1 and Shrink equal to 1.
Fitness Evaluation: The fitness function is computed using the mean time delay given in Equation 3. This fitness is based on the objective function at Level 2, which is the minimum time delay over the network evolved at Level 1. Each chromosome in the population is assigned a specific value associated with its gene arrangement in order to select the best individuals. Finally, the effectiveness of the proposed algorithm was tested based on the GA utilization. The GA utilization criterion is computed as given in Equation 4:
    \mathrm{Utilization} = \frac{f_i}{C_i}    (4)

Improving GA utilization in all cases will increase the flow assignment of the links of the ATM network. The HGA requires a number of tuning parameters in accordance with GA optimization. We ran the GA using various population sizes at the two levels of the HGA; the resulting network topologies for the various population sizes are shown in Figures 3 through 11.
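As a rough illustration of how the two levels interact, the following is a hedged skeleton of the hierarchical loop, reusing mean_delay, feasible and capacities from the sketch in Section 2. The random sampling below merely stands in for the paper's stochastic-uniform selection, scattered crossover and Gaussian mutation operators, which are not reproduced here.

```python
import random

N_LINKS = 14

def random_topology():
    # Level-1 chromosome: a subset of the 14 link IDs, duplicates eliminated
    return sorted(random.sample(range(1, N_LINKS + 1), k=random.randint(7, N_LINKS)))

def level2_best_delay(topology, evals=50):
    # Level 2: search for flows f_i on the chosen links minimizing the mean delay
    best = float("inf")
    caps = [capacities[i - 1] for i in topology]
    for _ in range(evals):
        flows = [random.uniform(1, 0.95 * c) for c in caps]
        if feasible(flows, caps):
            best = min(best, mean_delay(flows, caps))
    return best

best_topo, best_delay = None, float("inf")
for _ in range(50):                      # Level-1 generations (Table 2: 50)
    topo = random_topology()
    delay = level2_best_delay(topo)      # Level-2 result fed back as Level-1 fitness
    if delay < best_delay:
        best_topo, best_delay = topo, delay
print(best_topo, best_delay)
```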

Figure 3. Network Topology evolved by Pop. Size L1 = 10 and Pop. Size L2 = 25

Figure 4. Network Topology evolved by Pop. Size L1 = 10 and Pop. Size L2 = 50. Delay time = 11.0223

Figure 5. Network Topology evolved by Pop. Size L1 = 10 and Pop. Size L2 = 75. Delay time = 10.76282

Figure 6. Network Topology evolved by Pop. Size L1 = 30 and Pop. Size L2 = 25. Delay time = 10.8009

Figure 7. Network Topology evolved by Pop. Size L1 = 30 and Pop. Size L2 = 50. Delay time = 10.7581

Figure 8. Network Topology evolved by Pop. Size L1 = 30 and Pop. Size L2 = 75. Delay time = 10.7585

Figure 9. Network Topology evolved by Pop. Size L1 = 50 and Pop. Size L2 = 25. Delay time = 10.76630

Figure 10. Network Topology evolved by Pop. Size L1 = 50 and Pop. Size L2 = 50. Delay time = 10.7643

Figure 11. Network Topology evolved by Pop. Size L1 = 50 and Pop. Size L2 = 75

The HGA-related parameters used in the experiments are shown in Table 2.

Table 2: Parameters used for the GA-based experiments

Operator               Type
Creation function      Random initial with uniform distribution
Initial range          Lower bound 1; upper bound 95% of link capacity
Selection mechanism    Stochastic uniform
Crossover type         Scattered function
Mutation type          Gaussian function with Scale = 1 and Shrink = 1
Maximum generations    50


5. Experimental Results
From the developed results, the proposed HGA was able to provide a number of ATM networks with various topologies. Looking closer at each developed structure, we find that all evolved networks have seven links, except one network which has eight links (see Figure 4). This network has the maximum delay, which is expected, since the number of links increased. The evolved networks were able to achieve the criteria specified by the designer (i.e., increasing the network throughput to at least 50% of the network capacity and managing the traffic with minimum delay). The developed networks could be unreliable in some sense, because the reduced number of links may cause a problem if a link fails; this could be another objective to investigate in the future. Of course, the most reliable network would be the fully connected network, which would, however, be very expensive to implement. The computed delay times for various population sizes of the GA at Levels 1 and 2 are shown in Table 3.

Table 3: Delay time for various population sizes of the GA levels

GA Level 1 Pop. Size   GA Level 2 Pop. Size   Delay time
10                     25                     10.81690
10                     50                     11.02230
10                     75                     10.76282
30                     25                     10.80090
30                     50                     10.75810
30                     75                     10.75850
50                     25                     10.76630
50                     50                     10.76430
50                     75                     10.76006

It was found that the minimum delay was achieved with population size 30 for GA Level 1 and 50 for GA Level 2. In general, as a common practice, the convergence of the GA was examined to show the performance of the proposed HGA algorithm; Figure 12 shows the convergence process of the GAs with various population sizes for both Level 1 and Level 2.

Figure 12. The convergence process of GAs with various population sizes

The algorithm converges to the optimal solution at "high" speed and finds a good solution in less than 20 generations of running. Moreover, the algorithm reached the solution with the lowest (i.e., optimal) delay in most cases. In some cases the algorithm finds the optimal solution very quickly; in others, longer convergence times are needed to obtain the optimal solution.

6. Conclusions and Future Work
A Hierarchical Genetic Algorithm (HGA) based optimization mechanism for Asynchronous Transfer Mode (ATM) networks has been formulated. Minimizing the total cost is the main purpose of the proposed approach; accordingly, the work addresses the maximum allowable flow assignment in each link while simultaneously keeping the overall delay within the minimum acceptable range. It can be inferred from the results obtained that ATM network design using the HGA produces good network plans in terms of network throughput and GA utilization. It also allows more flexibility in traffic management and reduces the complexity of the planning task. The current work could be improved by adding new issues related to dynamic capacity allocation within the network. Additionally, improvement can be achieved by using Parallel Genetic Algorithms (PGAs); this way the computation time can be reduced to a reasonable computational effort.

References
[1] L. Painton and J. Campbell, “Genetic algorithms in optimization of system reliability,” IEEE Transactions on Reliability, vol. 44, pp. 172–178, 1995.


[2] D. Raychaudhuri and D. Wilson, “ATM-based transport architecture for multiservices wireless personal communication networks,” IEEE Journal on Selected Areas in Communications, vol. 12, no. 8, pp. 1401–1413, 1994.
[3] H. El-Madbouly, “Design and bandwidth allocation of embedded ATM networks using genetic algorithm,” in Proceedings of World Academy of Science, Engineering and Technology, vol. 8, pp. 249–252, 2005.
[4] D. W. Coit and A. E. Smith, “Reliability optimization of series-parallel systems using genetic algorithm,” IEEE Transactions on Reliability, vol. 45, pp. 254–260, 1996.
[5] A. Sheta, M. Salamah, and M. Braik, “Topology Optimization and Traffic Management in Telecommunication Network Using Hierarchical Genetic Algorithms,” in Proceedings of ICICIS 2009, Ain Shams University, Cairo, Egypt, pp. 143–148, March 2009.
[6] R. Elbaum and M. Sidi, “Topological design of local-area networks using genetic algorithms,” IEEE/ACM Transactions on Networking, vol. 4, pp. 766–778, 1996.
[7] K. Walkowiak, “Genetic algorithms for backup virtual path routing in survivable ATM networks,” in Proceedings of MOSIS 99, vol. 2, pp. 123–130, 1999.
[8] A. Kasprzak and K. Walkowiak, “Algorithms for flow assignment in ATM virtual private networks,” in Proceedings of the First Polish-German Teletraffic Symposium PGTS 2000, vol. 2, pp. 24–26, 2000.
[9] E. Alba, “Parallel evolutionary algorithms in telecommunications: Two case studies,” in Proceedings of CACIC02, Buenos Aires, Argentina, 2002.
[10] L. He and N. Mort, “Hybrid genetic algorithms for telecommunications network back-up routing,” BT Technology Journal, vol. 18, no. 4, pp. 42–50, 2000.
[11] L. Xian, “Network capacity allocation for traffic with time priorities,” Journal of International Network Management, vol. 13, pp. 411–417, 2003.
[12] C. Rose and R. Yates, “Genetic algorithms and call admission to telecommunications networks,” Computers and Operations Research, vol. 23, pp. 9–6, 1996.
[13] L. Davis and S. Coombs, Optimizing network link sizes with genetic algorithms, Modeling and Simulation Methodology, Knowledge Systems' Paradigms. Amsterdam, The Netherlands: Elsevier, 1989.
[14] S. Pierre and H. H. Hoang, “An artificial intelligence approach for improving computer communication network design,” Journal of the Operational Research Society, vol. 41, no. 5, pp. 405–418, 1990.
[15] M. Gerla, J. A. S. Monteiro, and R. Pazos, “Topology design and bandwidth allocation in ATM nets,” IEEE JSAC, vol. 7, no. 8, pp. 1253–1262, 1989.
[16] A. Sheta, M. Salamah, H. Turabieh, and H. Heyasat, “Selection of appropriate traffic of a computer network with fixed topology using GAs,” International Journal of Computer Science, IAENG, vol. 34, no. 1, 2007.
[17] S. Routray, A. Sherry, and B. Reddy, “Bandwidth optimization through dynamic routing in ATM networks: Genetic algorithm and tabu search approach,” International Journal of Computer Science, vol. 1, no. 3, 2006.
[18] R. Susmi, A. M. Sherry, and B. V. Reddy, “ATM network planning: A genetic algorithm approach,” Journal of Theoretical and Applied Information Technology, vol. 1, pp. 74–79, 2007.
[19] D. Thierens and D. E. Goldberg, “Convergence models of genetic algorithm selection schemes,” in PPSN III: Proceedings of the Third Conference on Parallel Problem Solving from Nature, London, UK: Springer-Verlag, pp. 119–129, 1994.
[20] B. L. Miller and D. E. Goldberg, “Genetic algorithms, selection schemes, and the varying effects of noise,” Evolutionary Computation, vol. 4, no. 2, pp. 113–131, 1996.

Authors Profile
Alaa Sheta received his B.E. and M.Sc. degrees in Electronics and Communication Engineering from the Faculty of Engineering, Cairo University, in 1988 and 1994, respectively. He received his Ph.D. degree from the Computer Science Department, School of Information Technology and Engineering, George Mason University, Fairfax, VA, USA. His published work exceeds 70 publications among book chapters, journal papers, conferences and invited talks. His research interests include Evolutionary Computation, Computer Networks, Image Processing, Software Reliability and Software Cost Estimation. Currently, Dr. Sheta is a Professor with the College of Computers and Information Systems and the Director of the Vision and Robotics Laboratory at Taif University, Taif, Saudi Arabia. He is on leave from the Electronics Research Institute (ERI), Giza, Egypt.

Malik Braik received his B.Sc. degree in Electrical Engineering from the Faculty of Engineering, Jordan University of Science and Technology, in 2000, and his M.Sc. in Computer Science from the Department of Information Technology, Al-Balqa Applied University, Jordan, in 2005. His research interests focus on Evolutionary Computation, Image Processing, Data Security and Computer Networks. Currently, Braik is working with the Department of Information Technology, Prince Abdullah Bin Ghazi Faculty of Science and Information Technology, Al-Balqa Applied University, Al-Salt, Jordan.



Mohammad Salameh received his B.E. degree in Computer Science from the King Abdullah II School for Information Technology (KASIT), University of Jordan, in 2005, and his M.Sc. degree in Computer Science from the Faculty of Graduate Studies and Scientific Research, Al-Balqa` Applied University, in 2009. Mohammad Salameh is a junior researcher; his research interests include Robotics Design and Programming, Evolutionary Computation, Computer Networks and Image Processing. He is a Robotics Trainer and Programmer at the National Education Center for Robotics (NECR), King Hussein Foundation, Amman, Jordan.


Capacity Enhancement of 3G Cellular System using Switched Beam Smart Antenna with Windowed Beam former
Prof. T. B. Lavate¹, Prof. V. K. Kokate², Prof. Dr. A. M. Sapkal³

¹ Department of E&TC, College of Engineering Pune, Maharashtra, India, [email protected], [email protected]
² Department of E&TC, Indira College of Engineering & Management, Pune, Maharashtra, India, [email protected]
³ Department of E&TC, College of Engineering Pune-5, Maharashtra, India, [email protected]
Abstract: The 3G cellular networks are being designed to provide high-bit-rate services and multimedia traffic, in addition to voice calls, within the limited bandwidth available. A suitable solution to the bandwidth limitation is the smart antenna. Smart antennas at the base station of a cellular system have interference rejection (signal-to-interference-plus-noise ratio improvement) capability, which in turn improves the capacity of the cellular system. Smart antennas can generally be classified as either switched beam or adaptive array smart antennas. The switched beam smart antenna is cheaper to implement in many applications and hence is investigated here in detail. A switched beam smart antenna (SBSA) creates a group of overlapping beams that together result in omnidirectional coverage. The SBSA has interference rejection capability depending on its side lobe level. The simplest way to reduce the side lobe level and improve the SINR of an SBSA is to use non-adaptive windowed beam forming functions. In this paper the performance of a linear SBSA is investigated using MATLAB for various windowed beam forming functions such as the Hamming, Gaussian and Kaiser-Bessel functions. It has been observed that the Kaiser-Bessel weights provide one of the lowest array side lobe levels while maintaining nearly the same beam width as uniform weights; consequently, the Kaiser-Bessel function can widely be used with the SBSA to improve the capacity of a 3G cellular system.

Keywords: SBSA, DS/CDMA, Windowed Beam forming

1. Introduction
Nowadays wireless operators are faced with increasing capacity demands for both voice & data services. To achieve this, multiple access techniques such as TDMA & CDMA can be employed. For instance, in TDMA based wireless communication systems (2G systems) the frequency reuse factor is normally greater than 6; that is, these systems employ the frequency reuse concept, which increases their capacity to some extent, but frequency reuse also creates co-channel interference. In CDMA systems the frequency reuse factor is 1, which enables them to offer higher capacities; the capacity improvement by CDMA technology may be as high as 13 times that of TDMA. But in CDMA all subscribers use the same frequency & this creates inter-cell & intra-cell co-channel interference. Thus the actual performance of a CDMA system is still interference limited and is affected by adverse channel conditions created by multipath propagation. It is obvious that the capacity of a CDMA based wireless communication system can be improved by interference reduction techniques, and these can be realized by using smart antennas such as the switched beam smart antenna (SBSA) or the adaptive array smart antenna. In this paper, Section 2 describes the switched beam smart antenna used to improve the capacity of a CDMA cellular system.

2. Switched Beam Smart Antenna and its Array Pattern
2.1 Concept of SBSA
Depending upon the various aspects of smart antenna technology, smart antennas are categorized as switched beam smart antennas and adaptive array smart antennas. Here we investigate the switched beam smart antenna in detail. The switched beam smart antenna (SBSA) has multiple fixed beams in different directions; this can be accomplished using a feed network referred to as a beam former, of which the most commonly used is the Butler matrix. In terms of radiation patterns, switched beam is an extension of the cellular sectorization method of splitting a typical cell. That is, a switched beam antenna increases the capacity of a cellular system by creating micro sectors; each micro sector contains a pre-determined fixed beam pattern with the greatest sensitivity located in the centre of the beam & less sensitivity elsewhere. The receiver selects the beam that provides the greatest signal enhancement & interference reduction. The SBSA enhances received signals & switches from one beam to another as the desired user moves throughout the sector. An N-beam switched beam antenna generally provides an N-fold antenna gain and some diversity gain by combining the received signal from different beams, as shown in Fig. 1.


Figure 1. Switched beam smart antenna.

However, these switched beam smart antennas have non-uniform gain with respect to the AOA due to scalloping; they can also lock onto the wrong beam because of multipath fading or interference, and they provide only limited interference suppression, since they cannot suppress interference that falls in the same beam as the desired signal. Hence switched beam smart antenna solutions work best in minimal to moderate co-channel interference scenarios. On the other hand, they are often much less complex and are easier to retrofit to existing wireless technologies.

2.2 Signal model and array pattern of SBSA
The SBSA creates a number of two-way spatial channels on a single conventional channel in frequency, time or code. Each of these spatial channels has the interference rejection capability of the array, depending on the side lobe level (γ). The steering matrix for an N element SBSA is given by

    A = [a_1 \; a_2 \; a_3 \; \cdots \; a_N]    (1)

where a_i = [1, e^{-j\Psi_i}, e^{-j2\Psi_i}, e^{-j3\Psi_i}, \ldots, e^{-j(N-1)\Psi_i}] is the steering vector, \Psi_i = (2\pi d/\lambda)\sin\Theta_i, \Theta_i is the ith reference angle, and d is the spacing between the antenna elements. The matrix A forms spatial filters with the orthogonality properties a_i^H a_k = N if i = k, and a_i^H a_k = 0 if i ≠ k. In practice the SBSA creates several simultaneous fixed beams through the use of equation (1) and Butler matrix theory. With a Butler matrix, the array factor of an N element SBSA can be given as

    AF(\Theta) = \frac{\sin[N\pi (d/\lambda)(\sin\Theta - \sin\Theta_\ell)]}{N\,\sin[\pi (d/\lambda)(\sin\Theta - \sin\Theta_\ell)]}    (2)

where \sin\Theta_\ell = \ell\lambda/(Nd) and \ell = ±1/2, ±3/2, \ldots, ±(N−1)/2. If the element spacing is d = λ/2, the beams of the SBSA are evenly distributed over a span of 180°. Equation (2) was simulated for N = 8 using MATLAB 7.0, and the simulation results are as shown in Fig. 2. It is obvious that an N element SBSA forms N spatial channels whose input signal is given by

    S = [S_1 \; S_2 \; S_3 \; \cdots \; S_N]    (3)

where S_i = \sqrt{P_i}\,[1, e^{-j\varphi_i}, e^{-j2\varphi_i}, e^{-j3\varphi_i}, \ldots, e^{-j(N-1)\varphi_i}] is the ith user signal, \varphi_i = (2\pi d/\lambda)\sin\Phi_i, and \Phi_i is the angle of arrival of the ith user signal with power P_i. Taking into account equations (1) and (3), the output of the SBSA is

    Y = A^H S = [Y_1 \; Y_2 \; Y_3 \; \cdots \; Y_N]    (4)

If a user is located exactly at \Theta_i = \Phi_i, the vector Y_i = [0, 0, 0, 0, \sqrt{P_i}, 0, 0, \ldots, 0]; but if \Theta_i \neq \Phi_i, we have to evaluate the maximum element of each vector to detect the active users.

Figure 2. SBSA array pattern for number of antenna elements N = 8.

It is apparent from Fig. 2 that the array factor of the SBSA has a side lobe level of γ = −16 dB. The presence of side lobes means that the array radiates in unintended directions, and the side lobes can receive the same signal from multiple angles, which may cause fading in communication systems. These harmful side lobes of the SBSA can be suppressed by windowing the array elements, also called array weighting, as discussed in Section 3.
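A small runnable sketch of the beam-forming and beam-selection idea in equations (1)-(4), written in Python/NumPy rather than the MATLAB used by the authors; N = 8, d = λ/2 and the 20° user direction are illustrative assumptions.

```python
import numpy as np

N = 8
ell = np.arange(-(N - 1), N, 2) / 2.0       # l = ±1/2, ±3/2, ..., ±(N-1)/2
beam_angles = np.arcsin(ell * 2 / N)        # sin(theta_l) = l*lambda/(N*d), with d = lambda/2

def steering(theta):
    # a(theta) = [1, e^{-j*Psi}, ..., e^{-j(N-1)Psi}], Psi = pi*sin(theta) for d = lambda/2
    return np.exp(-1j * np.pi * np.arange(N) * np.sin(theta))

A = np.stack([steering(t) for t in beam_angles], axis=1)   # N x N steering matrix, Eq. (1)
user = np.sqrt(2.0) * steering(np.radians(20))             # one user at 20 deg with power 2
Y = A.conj().T @ user                                      # Eq. (4): Y = A^H S
print("selected beam:", int(np.argmax(np.abs(Y))), "of", N)
```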

3. Windowed Beam Formers for SBSA

Figure 3. N element linear antenna array with weights.


The array factor of an N element uniform linear array is given by

    AF(\Theta) = \frac{\sin(N\Psi/2)}{N\,\sin(\Psi/2)}    (5)

where \Psi = kd\sin\Theta + \beta and \beta is the phase shift from element to element. Such a uniform linear array, also called a Boxcar window, exhibits side lobe levels of about γ = −16 dB. The array side lobes can be suppressed by weighting or windowing the array elements, as shown in Figure 3. The array factor of such an N element windowed linear array is given by

    AF_n(\Theta) = \sum_{n=1}^{N/2} w_n \cos\!\left(\tfrac{2n-1}{2}\,kd\sin\Theta\right)    (6)

To determine the weights w_n there are a number of useful window functions that can provide weights for each element in the array, viz. the Hamming, Gaussian and Kaiser-Bessel functions. By carefully controlling the side lobes in windowed SBSA arrays, most interference can be reduced to an insignificant level; however, side lobe suppression is achieved at the expense of main lobe beam width. These windowed non-adaptive beam formers have the following advantages:
• cheaper to implement than adaptive beam forming,
• no main lobe distortion due to interference,
• any level of side lobe suppression is possible with the correct choice of window.
Their disadvantages are:
• the distribution and power of the interference affects their performance,
• an increase in main lobe beam width (∆).

3.1 Window weight functions for SBSA
The useful window functions for the SBSA are investigated below.

3.1.1 Hamming weight function: The Hamming weights are chosen by

    w(n+1) = 0.54 - 0.46\cos\!\left(\frac{2\pi n}{N-1}\right), \quad n = 0, 1, 2, \ldots, N-1    (7)

The Hamming array weights can be found using the hamming(N) command in MATLAB. The array pattern for an N = 8 element linear array with normalized Hamming weights is plotted in Fig. 4. It is apparent that Hamming weights provide side lobe suppression of γ = −39 dB at the expense of a large increase in beam width, ∆ = 2.

Table 1: Hamming normalized weights for array N=8

Weight types   w1   w2      w3       w4
Hamming        1    0.673   0.2653   0.0838

Figure 4. Array factor with Hamming weights for N=8

3.1.2 Gaussian weight function: The Gaussian weights are given by the Gaussian function

    w(n+1) = \exp\!\left(-0.5\left(\alpha\,\frac{n-N/2}{N/2}\right)^{2}\right), \quad n = 0, 1, \ldots, N, \ \alpha \ge 2    (8)

The Gaussian array weights can be determined using the gaussian(N) command in MATLAB. The normalized Gaussian array weights for N = 8 and α = 2.5 are w1 = 1, w2 = 0.6766, w3 = 0.3098 and w4 = 0.0960. The Gaussian weighted array factor is plotted in Fig. 5. It is clear that the Gaussian weighted array pattern provides side lobe suppression of γ = −48 dB with an increase in main lobe beam width of ∆ = 1.85.

Figure 5. Array factor with Gaussian weights for N=8

3.1.3 Kaiser-Bessel weight function: The Kaiser-Bessel weights are determined by

    w(n) = \frac{I_0\!\left(\alpha\sqrt{1-(2n/N)^{2}}\right)}{I_0(\alpha)}, \quad n = 0, \ldots, N/2, \ \alpha > 1    (9)

The Kaiser-Bessel normalized weights for N = 8 are found using the kaiser(N, α) command in MATLAB.

Table 2: Kaiser-Bessel normalized weights for array N=8

Weight types    w1   w2       w3       w4
Kaiser-Bessel   1    0.8136   0.5136   0.210

The Kaiser-Bessel weighted array factor is plotted in Fig. 6 using MATLAB, in which γ = −33 dB and ∆ = 1.2.

Figure 6. Array factor with Kaiser-Bessel weights for N=8

Table 3: Important parameters of windowed beams for an N=8 element linear array

Window                    Beam width (∆)   γ (dB)
Boxcar                    1.0              −16
Hamming                   2.0              −38
Gaussian (α = 2.5)        1.85             −48
Kaiser-Bessel (α = 3.0)   1.2              −33

Table 3 shows that the Kaiser-Bessel function provides side lobe suppression of γ = −33 dB with the minimum increase in main lobe beam width (∆ = 1.2) compared to the Hamming and Gaussian weight functions; hence the Kaiser-Bessel weight function is chosen to suppress the side lobes in the switched beam smart antenna (SBSA).
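The window comparison above can be reproduced with a short Python sketch; scipy.signal.windows stands in for the paper's MATLAB commands, the Gaussian std is mapped from α = 2.5 as std = (N−1)/(2α), and beta = 3.0 is taken for the paper's Kaiser α. The side-lobe estimator is a crude local-maximum scan, not the paper's measurement.

```python
import numpy as np
from scipy.signal import windows

N = 8
theta = np.radians(np.linspace(-90, 90, 3601))
psi = np.pi * np.sin(theta)                  # k*d*sin(theta) with d = lambda/2

def pattern_db(w):
    """Normalized array factor |AF| in dB for element weights w."""
    n = np.arange(N)[:, None]
    af = np.abs((w[:, None] * np.exp(1j * n * psi)).sum(axis=0))
    return 20 * np.log10(af / af.max() + 1e-12)

def side_lobe_level(af_db):
    """Crude estimate: highest local maximum below the main-beam peak."""
    mid = af_db[1:-1]
    peaks = mid[(mid > af_db[:-2]) & (mid > af_db[2:]) & (mid < -0.5)]
    return peaks.max() if peaks.size else -np.inf

for name, w in [("Boxcar", np.ones(N)),
                ("Hamming", windows.hamming(N)),
                ("Gaussian", windows.gaussian(N, std=(N - 1) / (2 * 2.5))),
                ("Kaiser-Bessel", windows.kaiser(N, beta=3.0))]:
    print(f"{name}: SLL ~ {side_lobe_level(pattern_db(w)):.1f} dB")
```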

4. Performance Analysis of Kaiser-Bessel Windowed SBSA and Simulation Results
We consider a DS/CDMA system model in which the data is modulated using the BPSK format. The pulse and PN code amplitudes are all independently and identically distributed random variables. We assume that the PN code length is M = 128 and that the power of each mobile station is perfectly controlled by the same base station (BS), which employs the switched beam smart antenna. As shown in [2], the bit error rate (BER) for a DS-CDMA, 120° sectorized system is given by
    P_e = Q(\,\cdot\,)    (10)

where E_b(1)/N_0 is the SINR for the user of interest (user #1), and E_b(k)/N_0 is the same quantity for the interfering users. We extend equation (10) to the switched beam smart antenna as

    P_e = Q(\,\cdot\,)    (11)

where k1 is the number of interfering users with the same PN code as user #1 received through a side lobe, k2 is the number of interfering users received through the main lobe, and k3 is like k2 but received through side lobes.

Figure 7. The BER performance of DS/CDMA with a 120° sectored antenna, a Boxcar SBSA and a Kaiser-Bessel SBSA, for 100 users and N = 8


For the Boxcar (non-windowed) SBSA the side lobe level (as shown in Fig. 2) is γ = −16 dB, while the Kaiser-Bessel weights shown in Table 2 can be selected for the SBSA so that its side lobe level (as shown in Fig. 6) is reduced to γ = −33 dB. Equations (10) and (11) were simulated using MATLAB 7.0 with γ = −16 dB and γ = −33 dB, and the simulation results are presented in Fig. 7, where the BER is plotted as a function of Eb/N0 for 100 active users in the service area. Fig. 7 presents the BER of a DS/CDMA system with a conventional 120° sectored antenna, a non-windowed SBSA (N = 8, γ = −16 dB) and a Kaiser-Bessel windowed SBSA (N = 8, γ = −33 dB). As follows from Fig. 7, for a fixed level of BER (e.g., for a 3G communication system the acceptable BER is Pe = 10⁻⁴), the application of the SBSA can increase the number of active users in a conventional CDMA system by up to 1.12 times for the non-windowed SBSA and up to 1.4 times for the Kaiser-Bessel windowed SBSA. That is, the Kaiser-Bessel windowed SBSA can increase the number of active users in a 3G system significantly without loss of performance quality.

5. Conclusion
In this paper the performance of the windowed SBSA is analyzed. The BER relation for DS/CDMA is extended to the case of an SBSA applied at the base station. As the results show, the application of the Kaiser-Bessel windowed SBSA improves the BER, which in turn increases the number of active users in a conventional CDMA system significantly; hence the SBSA remains an attractive solution to increase the capacity of existing 3G cellular wireless communication systems.

References
[1] L. C. Godara, “Application of antenna arrays to mobile communications II: Beam-forming and direction-of-arrival considerations,” Proceedings of the IEEE, vol. 85, no. 8, pp. 1195–1245, August 1997.
[2] T. S. Rappaport, Smart Antennas for Wireless Communications, Prentice Hall, India, 2005.
[3] Frank Gross, Smart Antennas for Wireless Communication, McGraw-Hill, New York, 2005.
[4] J. C. Liberti and T. S. Rappaport, Smart Antennas for Wireless Communications: IS-95 and Third Generation CDMA Applications, Prentice Hall PTR, New Jersey, 1999.
[5] Md. Bhakar, Vani R. M., and P. V. Hunagund, “Smart Antennas Systems: Concepts of Beam steered Array Configurations,” in Proceedings of the IEEE International Symposium on Microwaves, Bangalore Section, 2008.
[6] David Cabrera and Joel Rodriguez (adviser: Victor V. Zaharov), “Switched Beam Smart Antenna BER Performance Analysis for 3G CDMA Cellular Communication,” Polytechnic University of Puerto Rico.

Authors Profile
T. B. Lavate received the ME (Microwave) degree in E&TC in 2002 from Pune University. He is pursuing his Ph.D. at College of Engineering, Pune-5, affiliated to Pune University. He has published nine papers on wireless communication & smart antennas. He is a member of IETE & ISTE.

V. K. Kokate graduated in E&TC from the University of Pune in 1973 and received his Masters in E&TC (Spl: Microwave) from the University of Pune. Presently he is working as H.O.D. of the E&TC Engineering Department of Indira College of Engineering, Pune, having over 35 years of teaching and administrative experience. His fields of interest are Radar, Microwaves, Antennas and EMI & C. He is a Fellow Member of IETE and a Member of ISTE/IEEE. To his credit, he has about twenty papers published in conferences and journals of international/national repute.

A. M. Sapkal graduated in E&TC from the University of Pune in 1987, received his Masters in E&TC (Spl: Microwave) from the University of Pune in 1992, and obtained his Ph.D. in 2008. Presently he is working as Professor in the Dept. of E&TC Engineering of College of Engineering, Pune. Having over 20 years of teaching experience, his fields of interest are power electronics, microwaves, and image processing. He is a member of ISTE, IETE & IEEE. To his credit, he has about twenty-five papers published in conferences and journals of national/international repute.



DDOM: The Dynamic Data Oriented Model for Image Using Binary Tree
A. Shahrzad Khashandarag¹*, A. Mousavi², R. Aliabadian³, D. Kheirandish⁴ and A. Ranjide Rezai⁵

Young Researchers Club of Tabriz, Islamic Azad University Tabriz Branch, Tabriz, Iran
¹ [email protected], ² [email protected], ³ [email protected], ⁴ [email protected], ⁵ [email protected]

Abstract: This paper presents a dynamic data oriented model (DDOM) of an image. The model enables very fast image processing compared with the Data Oriented Model that Habibizad and co-workers proposed in [8]. In our approach, the Sobel algorithm is used for edge detection of the image, histogram thresholding is used for clustering of the image, and a binary tree is used for the anatomy of the model. Each node of this tree represents a feature of the image or of a sub-image. Using the model presented in this paper, tasks such as measuring similarity ratio, segmentation and clustering can be done with high precision and corresponding speed [8].

Keywords: Data oriented modeling; Image Segmentation; histogram thresholding; Binary Tree.

1. Introduction
Recently many researchers have studied data structures for image processing tasks; the data structures of images are explained in [1-7]. The main idea of this paper is to present an optimized and dynamic data oriented model of an image. Dynamic Data Oriented Modeling (DDOM) is an approach which models concepts by using data structures. We have introduced dynamic data oriented modeling of images for fast image processing. Conventional methods use the color of the pixels to model the image and try to reduce the size of the image (compression) [8]. Previous image models are:
• BMP
• JPEG
• GIF
• DOM

.BMP or .DIB (device-independent bitmap) is a bitmapped graphics format used internally by the Microsoft Windows and OS/2 graphics subsystem (GDI), and used commonly as a simple graphics file format on those platforms. This format is in two parts. Part one is the data, which indicates whole-image features including width, height, color palette, etc. The second part consists of a block of bytes that describes the image, pixel by pixel. Pixels are stored starting in the bottom left corner, going from left to right and then row by row from the bottom to the top. Each pixel is described using one or more bytes [9].

GIF (Graphics Interchange Format) is an 8-bit-per-pixel bitmap image format that was introduced by CompuServe in 1987 and has since come into widespread usage on the World Wide Web due to its wide support and portability. The format uses a palette of up to 256 distinct colors from the 24-bit RGB color space. GIF images are compressed using the LZW lossless data compression technique to reduce the file size without degrading the visual quality [10].

In computing, JPEG (pronounced JAY-peg) is a commonly used standard method of compression for photographic images. The name JPEG stands for Joint Photographic Experts Group, the name of the committee that created the standard. The group was organized in 1986, issuing a standard in 1992, which was approved in 1994 as ISO 10918-1. The compression method is usually lossy, meaning that some visual quality is lost in the process, although there are variations on the standard baseline JPEG which are lossless. There is also an interlaced "progressive" format, in which data is compressed in multiple passes of progressively higher detail. This is ideal for large images that will be displayed while downloading over a slow connection, allowing a reasonable preview before all the data has been retrieved [11].

DOM (Data Oriented Model) was proposed by Habibizad and co-workers. This model is designed for tasks that must be very fast; it is explained in the Related Works section.

2. Related Works
In this paper we want to optimize the DOM model, which Habibizad and co-workers proposed in [8]. They proposed the following.

2.1 Data Oriented Modeling
Habibizad and co-workers suppose that Fig. 1 is the image for data oriented modeling. Fig. 2 shows the ADBT of Fig. 1. ADBT is the Average-Difference Binary Tree of the image, which is stored in an array. This ADBT has three levels, numbered 0, 1 and 2. The features of the pixels are stored in the leaves. Features of F are obtained by combining the features of A and C. In the same way, features of G are obtained from B and


D. Finally, the features of the entire image, E, are obtained by combining the features of F and G. By putting F and G together, we can achieve a smoothed version of the original image, as shown in Fig. 3 [8]. Fig. 4 shows the array which stores the ADBT of Fig. 2.

Figure 1. An I_2^1 image [8].

Figure 2. ADBT of an I_2^1 image [8].

Figure 3. Smoothed version of the original image [8].

Figure 4. ADBT of Fig. 2 stored in an array [8].

For ease of understanding, the above steps are described top-down; but to increase the speed of making the ADBT, it is created bottom-up [8]. The leaves are initialized with the values of the produced vector; therefore, the count of leaves is equal to the count of pixels of the image. Each leaf corresponds to a pixel of the original image, and its A and D are the same as the pixel's color. For each non-leaf node, A is equal to the average of its children's A, and D is equal to the difference of its children's A [8].

3. Our Works
3.1 Dynamic Data Oriented Modeling
This paper presents a dynamic data oriented model (DDOM) of an image. To illustrate the concept, Fig. 5 shows a 3×3 image with nine pixels, labeled A, B, C, D, E, F, G, H and I respectively.

Figure 5. An instance image

Fig. 6 shows the segmented version of Fig. 5; this segmented image is created with histogram thresholding. Fig. 7 shows the reverse-N scan order used for image processing in this model. Fig. 8 shows the ADBT of Fig. 6; this ADBT has three levels, numbered 0, 1 and 2, and the features of the regions are stored in the leaves. Features of L0 are obtained by combining the features of R0 and R1; in the same way, features of L1 are obtained from R2 and R3. Finally, the features of the entire image, L2, are obtained by combining the features of L0 and L1. By putting L0 and L1 together, we can achieve a smoothed version of the original image, as shown in Fig. 9. Fig. 10 shows the ADBT of Fig. 6 stored in an array. In Fig. 8, A, B and D have similar intensity.

Figure 6. A segmented image.

Figure 7. The reverse-N scan order for image processing.

Figure 8. ADBT of Fig. 6

Figure 9. Smoothed version of the original image

Figure 10. ADBT of Fig. 6 stored in an array
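The ADBT construction described above can be sketched in a few lines of Python; the flat root-first array layout and the sample region intensities below are illustrative assumptions, not the authors' exact storage format.

```python
def build_adbt(leaves):
    """Build an ADBT bottom-up; each node is a pair (A, D), where A is the
    average of its children's A values and D is their difference.
    Assumes a power-of-two number of leaves."""
    level = [(float(v), float(v)) for v in leaves]   # leaf: A = D = region value
    tree = [level]
    while len(level) > 1:
        parents = []
        for left, right in zip(level[0::2], level[1::2]):
            a = (left[0] + right[0]) / 2.0           # A: average of children's A
            d = left[0] - right[0]                   # D: difference of children's A
            parents.append((a, d))
        tree.append(parents)
        level = parents
    # Flatten levels root-first into a single array, as the model stores it
    return [node for lvl in reversed(tree) for node in lvl]

# Example: four hypothetical region intensities R0..R3
print(build_adbt([100, 120, 40, 60]))
```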

3.2 Edge Detection with the Sobel Algorithm
Edges characterize boundaries and are therefore a problem of fundamental importance in image processing. Edges in images are areas with strong intensity contrasts, i.e., a jump in intensity from one pixel to the next. Edge detecting an image significantly reduces the amount of data and filters out useless information while preserving the important structural properties of the image. There are many ways to perform edge detection; however, the majority of methods may be grouped into two categories, gradient and Laplacian. The gradient method detects edges by looking for the maximum and minimum in the first derivative of the image. The Laplacian method searches for zero crossings in the second derivative of the image to find edges [12]. The Sobel operator performs a 2-D spatial gradient measurement on an image; it is shown in Fig. 11.
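A minimal Python sketch of the gradient-based Sobel detection just described, using the standard 3×3 kernels and the common |G| ≈ |Gx| + |Gy| magnitude approximation; the exact masks behind the paper's Fig. 11 are not reproduced here.

```python
import numpy as np
from scipy.ndimage import convolve

GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal mask
GY = np.array([[ 1, 2, 1], [ 0, 0, 0], [-1, -2, -1]], dtype=float)  # vertical mask

def sobel_edges(img):
    gx = convolve(img.astype(float), GX)   # horizontal gradient
    gy = convolve(img.astype(float), GY)   # vertical gradient
    return np.abs(gx) + np.abs(gy)         # gradient magnitude approximation

img = np.random.rand(16, 16)               # placeholder input image
edges = sobel_edges(img)
```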
Figure 11. The Sobel operator.

Note: G is the gradient and f is the image. The result of applying the Sobel algorithm is shown in Fig. 12.

Figure 12. Edge detection with the Sobel algorithm

3.3 Image Segmentation Algorithm
Following S. Arora and co-workers' results in [13], the segmentation algorithm proceeds as follows:
1. Repeat steps 2-6 n times, where n is the number of thresholds.
2. Range R = [a, b]; initially a = 0 and b = 255.
3. Find the mean μ and standard deviation σ of all the pixels in R.
4. Calculate the sub-range boundaries T1 = μ − k1·σ and T2 = μ + k2·σ (Equations 5 and 6), where k1 and k2 are free parameters.
5. Pixels with intensity values in the intervals [a, T1] and [T2, b] are assigned threshold values equal to the respective weighted means of their values.
6. Set a = T1 + 1 and b = T2 − 1.
7. Finally, repeat step 5 with the updated range [a, b].

Fig. 13 shows the result of the image segmentation algorithm, and Fig. 14 shows the resulting histogram of Lena.
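A runnable Python sketch of the recursive thresholding loop above; k1, k2 and the depth n are the free parameters, and the weighted-mean assignment of steps 4-5 is simplified here to plain sub-range means.

```python
import numpy as np

def multilevel_threshold(img, n=2, k1=1.0, k2=1.0):
    out = img.astype(float).copy()
    a, b = 0.0, 255.0
    for _ in range(n):
        region = out[(out >= a) & (out <= b)]
        if region.size == 0:
            break
        mu, sigma = region.mean(), region.std()
        t1, t2 = mu - k1 * sigma, mu + k2 * sigma      # sub-range boundaries (Eqs. 5-6)
        low = (out >= a) & (out <= t1)
        high = (out >= t2) & (out <= b)
        if out[low].size:
            out[low] = out[low].mean()                 # simplified mean of [a, T1]
        if out[high].size:
            out[high] = out[high].mean()               # simplified mean of [T2, b]
        a, b = t1 + 1, t2 - 1                          # shrink the range (step 6)
    return out
```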

4. Conclusion
In this paper, a dynamic data oriented model for images is introduced which models an image as a binary tree. Each node of this tree represents a feature of the image or of a sub-image. Using the presented model, tasks such as measuring similarity ratio, segmentation and clustering can be done with high precision and corresponding speed.


Figure 13. Results: Lena. (a) Lena gray, (b) histogram, (c) 2-level thresholding, (d) 4 level, (e) 6 level and (f) 8 level.

Figure 14. Result of the histogram

References
[1] Zhiyong Wang, Dagan Feng, and Zheru Chi, “Region-Based Binary Tree Representation for Image Classification,” IEEE International Conference on Neural Networks & Signal Processing, Nanjing, China, 2003.
[2] Xiaolin Wu, “Image Coding by Adaptive Tree-Structured Segmentation,” IEEE Transactions on Information Theory, vol. 38, no. 6, 1992.
[3] G. S. Seetharaman and B. Zavidovique, “Image Processing in a Tree of Peano Coded Images,” in Proceedings of the Fourth IEEE International Workshop on Computer Architecture for Machine Perception (CAMP ’97), 1997.
[4] M. Kunt, M. Benard, and R. Leonardi, “Recent results in high-compression image coding,” IEEE Transactions on Circuits and Systems, vol. CAS-34, no. 11, pp. 1306–1336, Nov. 1987.
[5] R. Leonardi and M. Kunt, “Adaptive split-and-merge for image analysis and coding,” Proc. SPIE, vol. 594, 1985.
[6] G. J. Sullivan and R. L. Baker, “Efficient quadtree coding of images and video,” in ICASSP Proc., May 1991, pp. 2661–2664.
[7] J. Vaisey and A. Gersho, “Image compression with variable block size segmentation,” IEEE Trans. Signal Processing, vol. SP-40, pp. 2040–2060, Aug. 1992.
[8] A. Habibizad Navin, A. Sadighi, M. Naghian Fesharaki, M. Mirnia, M. Teshnelab, and R. Keshmiri, “Data Oriented Model of image: as a framework for image processing,” World Academy of Science, Engineering and Technology, vol. 34, 2007.
[9] http://en.wikipedia.org/wiki/Windows_bitmap
[10] http://en.wikipedia.org/wiki/GIF
[11] http://en.wikipedia.org/wiki/JPEG
[12] http://www.pages.drexel.edu/~weg22/edge.html
[13] S. Arora, J. Acharya, A. Verma, and Prasanta K. Panigrahi, “Multilevel thresholding for image segmentation through a fast statistical recursive algorithm,” Pattern Recognition Letters, vol. 29, pp. 119–125, 2008 (Elsevier).

Authors Profile
Asghar Shahrzad Khashandarag received the B.S. degree in computer engineering from Payame Noor University Bonab Branch, Iran, in 2008, and the M.S. degree in computer engineering from the Islamic Azad University Tabriz Branch, Iran, in 2009. Since 2008 he has worked as a researcher with the Young Researchers Club of Tabriz. He has published more than 10 papers in various journals and conference proceedings. His research interests include image processing, signal processing and wireless sensor networks.

Alireza Mousavi received his B.Sc. in computer engineering (software engineering) from Allameh Mohaddes Nouri University, Mazandaran, Iran, in 2008, and the M.S. degree in computer engineering from the Islamic Azad University Tabriz Branch, Iran, in 2010. His research interests include image processing, residue number systems and wireless sensor networks.

Ramin Aliabadian received the B.Sc. degree in computer engineering from Shomal University, Amol, Iran, in 2008, and the M.S. degree in computer engineering from the Islamic Azad University Arak Branch, Iran, in 2010. His research interests include image processing, computer architecture and computer networks.

Davar Kheirandish Taleshmekaeil received the B.Sc. degree in computer hardware engineering from Allameh Mohaddes Nouri University, Mazandaran, Iran, in 2008, and the M.S. degree in computer engineering from the Islamic Azad University Tabriz Branch, Tabriz, Iran, in 2010. His research interests include image processing, computer architecture and computer networks.

Ali Ranjide Rezai received the B.Sc. degree in computer engineering from Shomal University, Amol, Iran, in 2008, and the M.S. degree in computer engineering from the Islamic Azad University Tabriz Branch, Iran, in 2010. His research interests include image processing and computer architecture.



Comprehensive Analysis and Enhancement of Steganographic Strategies for Multimedia Data Hiding and Authentication
Ali Javed, Asim Shahzad, Romana Shahzadi, Fahad Khan
Faculty of Telecommunication and Information Engineering, University of Engineering and Technology, Taxila, Pakistan
[email protected], [email protected], [email protected]

Abstract: This research paper focuses on the analysis and enhancement of steganographic strategies for multimedia data hiding and authentication. Based on an authentication game between an image, its authorized receiver, and an opponent, the security of authentication watermarking is measured by the opponent's inability to launch a successful attack. In this work, we consider two stages of the data hiding mechanism: hiding the data in an image along with conditional security, and detecting the hidden data. First we detect whether a hidden message exists within the image; then, applying the conditional security mechanism, we extract that hidden message. We propose a novel security enhancement strategy that results in an efficient and secure LSB-based embedding and verification scheme. Both theoretical analysis and experimental results are presented. They show that using our approach, protection is achieved without a significant increase in image size or color distortion, and without sacrificing image or video quality.

Keywords: Steganography, LSB, Stego image, Stego key, payload, Watermarking, Covert Communication

1. Introduction
The word steganography literally means covered writing, as derived from Greek. It includes a vast array of methods of secret communication that conceal the very existence of the message. Among these methods are invisible inks, microdots, character arrangement (other than the cryptographic methods of permutation and substitution), digital signatures, covert channels and spread-spectrum communications. Steganography is the art of concealing the existence of information within seemingly inoffensive carriers. Steganography can be viewed as parallel to cryptography; both have been used throughout recorded history as means to protect information. At times these two technologies seem to converge, while their objectives differ. Cryptographic techniques "scramble" messages so that, if intercepted, they cannot be understood. Steganography, in essence, "camouflages" a message to hide its existence and make it seem "invisible", thus concealing the fact that a message is being sent at all. An encrypted message may draw suspicion, while an invisible message will not [1]. Steganography refers to the problem of sending messages hidden in "innocent looking" communications over a public channel so that an adversary eavesdropping on the channel cannot even detect the presence of the hidden messages. Simmons gave the most popular formulation of the problem: two prisoners, Alice and Bob [2], wish to plan an escape from jail.

Figure 1. Steganography Process

However, the prison warden, Ward, can monitor any communication between Alice and Bob, and if he detects any hint of "unusual" communications, he throws them both in solitary confinement. Alice and Bob must therefore transmit their secret plans so that nothing in their communication seems "unusual" to Ward. There have been many proposed solutions to this problem, ranging from rudimentary schemes using invisible ink to a protocol which is provably secure assuming that one-way functions exist.

Figure 2. General Model of Steganography

2. Literature Survey
2.1 Steganography Techniques
2.1.1 Physical steganography
Steganography has been widely used through recent historical times and into the present day. Possible permutations are endless, and known examples include:
• Hidden messages within wax tablets: in ancient Greece, people wrote messages on the wood and then covered it with wax, upon which an innocent covering message was written.
• Hidden messages on a messenger's body: also in ancient Greece. Herodotus tells the story of a message tattooed on a slave's shaved head, hidden by the growth of his hair, and exposed by shaving his head again. The message allegedly carried a warning to Greece about Persian invasion plans. This method has obvious drawbacks, such as delayed transmission while waiting for the slave's hair to grow, and its one-off use, since additional messages require additional slaves. In WWII, the French Resistance sent some messages written on the backs of couriers using invisible ink.
• Hidden messages on paper written in secret inks, under other messages or on the blank parts of other messages.
• Messages written in Morse code on knitting yarn and then knitted into a piece of clothing worn by a courier.
• Messages written on the backs of postage stamps.

2.1.2 Digital steganography
Modern steganography entered the world in 1985 with the advent of the personal computer applied to classical steganography problems. Development following that was slow, but has since taken off, going by the number of 'stego' programs available: over 725 digital steganography applications have been identified by the Steganography Analysis and Research Center.

Figure 3. Cover Image

A cover image of a tree is shown in Fig. 3. By removing all but the last 2 bits of each color component, an almost completely black image results; making the resulting image 85 times brighter produces the image below.

Figure 4. Stego Image

Digital steganography techniques include:
• Concealing messages within the lowest bits of noisy images or sound files.
• Concealing data within encrypted data: the data to be concealed is first encrypted before being used to overwrite part of a much larger block of encrypted data.
• Chaffing and winnowing.
• Mimic functions, which convert one file to have the statistical profile of another. This can thwart statistical methods that help brute-force attacks identify the right solution in a ciphertext-only attack.
• Concealed messages in tampered executable files, exploiting redundancy in the i386 instruction set.
• Pictures embedded in video material (optionally played at slower or faster speed).
• Injecting imperceptible delays into packets sent over the network from the keyboard: delays in key presses in some applications (telnet or remote desktop software) can mean a delay in packets, and the delays in the packets can be used to encode data.
• Content-Aware Steganography, which hides information in the semantics a human user assigns to a datagram. These systems offer security against a non-human adversary/warden.
• Blog-Steganography: messages are fractionalized and the (encrypted) pieces are added as comments on orphaned weblogs (or pin boards on social network platforms). In this case the selection of blogs is the symmetric key that sender and recipient are using; the carrier of the hidden message is the whole blogosphere.

2.2 Types of Steganography
Pure steganography: only the technique itself must be known to be able to read off the message.
Private steganography: a password must be known as well as the technique.
Public steganography: analogous to public key cryptography, there are a public and a private key in addition to knowledge of the technique. A standard library for pure steganography is not very smart, because it would allow people to simply try the techniques.

2.3 Image Steganography
Steganography is the art of hiding the fact that communication is taking place, by hiding information in other information. Many different carrier file formats can be used, but digital images are the most popular because of their frequency on the Internet. For hiding secret information in images there exists a large variety of steganographic techniques, some more complex than others, and all of them have their respective strong and weak points. As stated earlier, images are the most popular cover objects used for steganography. In the domain of digital images many different image file formats exist, most of them for specific applications, and for these different image file formats, different steganographic algorithms exist.
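As a concrete illustration of the two-bit LSB plane shown in Figs. 3 and 4, the following Python sketch keeps only the lowest bits of each pixel and rescales them to the visible range; the ×85 factor mentioned above corresponds to stretching the 2-bit values 0-3 toward 255. The demo array is a hypothetical stand-in for a real image.

```python
import numpy as np

def show_low_bits(img, bits=2):
    low = img & ((1 << bits) - 1)                    # keep only the lowest `bits` bits
    scale = 255 // ((1 << bits) - 1)                 # 85 for bits = 2
    return (low * scale).astype(np.uint8)            # brighten so the plane is visible

demo = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
print(show_low_bits(demo))
```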

When working with larger images of greater bit depth, the images tend to become too large to transmit over a standard Internet connection. In order to display an image in a reasonable amount of time, techniques must be incorporated to reduce the image's file size. These techniques make use of mathematical formulas to analyze and condense image data, resulting in smaller file sizes. This process is called compression.


In images there are two types of compression, lossy and lossless. Both methods save storage space, but the procedures they implement differ. Lossy compression creates smaller files by discarding excess image data from the original image. It removes details that are too small for the human eye to differentiate, resulting in close approximations of the original image, although not an exact duplicate. An example of an image format that uses this compression technique is JPEG (Joint Photographic Experts Group). Lossless compression, on the other hand, never removes any information from the original image, but instead represents data in mathematical formulas. The original image's integrity is maintained, and the decompressed image output is bit-by-bit identical to the original image input. The most popular image formats that use lossless compression are GIF (Graphical Interchange Format) and 8-bit BMP (a Microsoft Windows bitmap file).

2.5 Video Steganography

Since a video can be viewed as a sequence of still images, video steganography can be viewed simply as an extension of image steganography. The Internet and the World Wide Web have revolutionized the way in which digital data is distributed. The widespread and easy access to multimedia content has motivated the development of technologies for digital steganography, or data hiding, with emphasis on access control, authentication, and copyright protection. Steganography deals with information hiding, as opposed to encryption. Much of the recent work in data hiding concerns copyright protection of multimedia data, also referred to as digital watermarking. Digital watermarking for copyright protection typically requires very few bits, of the order of 1% or less of the host data size. These watermarks could be alphanumeric characters, or could be multimedia data as well. One of the main objectives of this watermarking is to be able to identify the rightful owners by authenticating the watermarks. As such, it is desirable that the methods of embedding and extracting digital watermarks be resistant to typical signal processing operations, such as compression, and to intentional attacks to remove the watermarks. While transparent or visible watermarks are acceptable in many cases, hidden data for control or secure communication need to be perceptually invisible. The signature message data is the data that we would like to embed or conceal. The source data is used to hide the signature data; we often refer to the source as the host data. After embedding a signature into a host, we get the watermarked or embedded data. The recovered data, also referred to as the reconstructed data, is the signature that is extracted from the embedded data. [11]

3. Proposed Methodology

3.1 Proposed Main Flow Diagram
The proposed methodology in this work is to embed the stego message into images and video using the LSB technique.

Figure 5. Main Flow Diagram

3.2 Text in Image Flow Diagram

Figure 6. Text in Image Flow Diagram

3.3 Image in Image Flow Diagram

Figure 7. Image in Image Flow Diagram

3.4 Text in Video Flow Diagram

Figure 8. Text in Video Flow Diagram


3.5 LSB Technique
The most widely used technique to hide data is the LSB (Least Significant Bit) technique. Least Significant Bit insertion is a simple approach to embedding information in a cover file [11]. The LSB is the lowest-order bit in a binary value, an important concept in computer data storage and programming that applies to the order in which data are organized, stored or transmitted [12]. Usually, three bits from each pixel can be used, hiding data in the LSBs of each byte of a 24-bit image. Furthermore, data can be hidden in the least and second-least significant bits, and the resulting stego-image will still be indistinguishable from the cover image to the human visual system [11]. When using a 24-bit image, one bit of each of the red, green and blue color components can be used, since they are each represented by a byte; in other words, one can store 3 bits in each pixel. An 800 × 600 pixel image can thus store a total of 1,440,000 bits, or 180,000 bytes, of embedded data [13]. For example, a grid for 3 pixels of a 24-bit image can be as follows:
(00101101 00011100 11011100)
(10100110 11000100 00001100)
(11010010 10101101 01100011)
When the number 200, whose binary representation is 11001000, is embedded into the least significant bits of this part of the image, the resulting grid is as follows:
(00101101 00011101 11011100)
(10100110 11000101 00001100)
(11010010 10101100 01100011)
Although the number was embedded into the first 8 bytes of the grid, only 3 bits actually needed to be changed according to the embedded message. On average, only half of the bits in an image need to be modified to hide a secret message using the maximum cover size [13]. Since there are 256 possible intensities of each primary color, changing the LSB of a pixel results in small changes in the intensity of the colors. These changes cannot be perceived by the human eye, and thus the message is successfully hidden. With a well-chosen image, one can even hide the message in the least as well as the second-to-least significant bit and still not see the difference. In the above example, consecutive bytes of the image data, from the first byte to the end of the message, are used to embed the information. This approach is very easy to detect [14]. A slightly more secure system is for the sender and receiver to share a secret key that specifies only certain pixels to be changed; should an adversary suspect that LSB steganography has been used, he has no way of knowing which pixels to target without the secret key [15]. In its simplest form, LSB makes use of BMP images, since they use lossless compression; unfortunately, to hide a secret message inside a BMP file, one would require a very large cover image.
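The worked example above can be captured in a short sketch. The following is illustrative code, not the authors' implementation: it embeds one byte, most significant bit first, into the LSBs of the first eight colour bytes, exactly as in the grid above.

# Minimal sketch of sequential LSB embedding/extraction over a flat list of
# colour bytes (e.g. the R, G, B values of consecutive pixels).

def embed_byte(pixels, value):
    # Overwrite the LSB of the first 8 bytes with the bits of value, MSB first.
    out = list(pixels)
    for i in range(8):
        bit = (value >> (7 - i)) & 1          # i-th bit of the message byte
        out[i] = (out[i] & 0b11111110) | bit  # clear the LSB, then set it
    return out

def extract_byte(pixels):
    # Read the LSBs of the first 8 bytes back into one message byte.
    value = 0
    for i in range(8):
        value = (value << 1) | (pixels[i] & 1)
    return value

# The 3-pixel (9-byte) grid from the text; 200 = 0b11001000.
grid = [0b00101101, 0b00011100, 0b11011100,
        0b10100110, 0b11000100, 0b00001100,
        0b11010010, 0b10101101, 0b01100011]

stego = embed_byte(grid, 200)
assert extract_byte(stego) == 200

Running the sketch changes exactly the three bytes noted above; longer messages simply continue into the following bytes.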

3.6 Implementation of LSB Technique
To illustrate the implementation of the LSB technique, consider the following image of parrots, shown in its true-color palette.


This image is composed of red, green, and blue color channels. The pixel at the top-left corner of the picture has the values 122, 119, and 92 for its red, green, and blue color components respectively. In binary, these values may be written as:
01111010 01110111 01011100

Figure 9. Parrot (True Color Palette Image)

Figure 10. Red, Green and Blue color channels of Parrot Image

To hide the character "a" in the image, the LSB (the rightmost bit) of each of the three 8-bit color values above will be replaced with the bits that form the binary equivalent of the character "a" (i.e., 01100001). This replacement operation is generally called embedding. After embedding, the color values would change to:
01111010 01110111 01011101
Since there are only three values, only three of the eight bits of the character "a" can fit in this pixel; the succeeding pixels of the image hold the remaining bits. In the three color values shown above, only the last value actually changed as a result of LSB encoding, which means almost nothing has changed in the appearance of the image. Even in the case where all LSBs are changed, most images would still retain their original appearance, because the LSBs represent a very minute portion (roughly 1/255, or 0.39%) of the whole image. The resulting difference between the new and the original color value is called the embedding error. Since there are only three LSBs per pixel, the total number of bits that can be hidden is only three times the total number of pixels (the image dimensions being 768x512).

4. Simulation and Results

The following images were taken and processed by the application of digital steganography, with the results as follows.

4.1 Procedure for Text in Image Hiding

The procedure starts with opening the image in which you want to hide the data and then entering the stego message in the specified space. The image is shown in fig. 11 and the stego message is shown in the text box.

Figure 11. Open Image

The stego message is then merged into the original image as shown in fig. 12.

Figure 12. Merge Text

The stego message is detected on the receiving end as shown in fig. 13.


Figure 13. Detect

The stego message is finally extracted, as shown in the text box in fig. 14.

Figure 14. Extract

The difference between the original image and the image after insertion of the stego message is shown in fig. 15. The two images look exactly the same, which is why image steganography is so important and useful in hiding data or sending secret messages.

Figure 15. View Difference

4.2 Procedure for Image in Image Steganography

In this phase the stego message is also an image. The image of a cat in fig. 16 is the stego image in this case, and the image of a mountain on the left is the original image in which the stego image will be embedded.

Figure 16. Open Images

For merging the stego image into the original image, the user has to enter a password, after which the stego image is embedded into the original image.

Figure 17. Merge Image

On the receiving end the stego image can be recovered by entering the password, as shown in fig. 18.

Figure 18. Enter Password

The stego image is finally extracted after entering the password, as shown in fig. 19.
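The application's password step is not specified beyond the screenshots, but one common construction, already hinted at in section 3.5, is to let a shared secret determine the order in which pixels are used. A hypothetical sketch, assuming the password seeds a pseudo-random permutation of byte indices:

# Hypothetical sketch: derive an embedding order from a password so that only
# holders of the password know which colour bytes carry the hidden LSBs.
import hashlib
import random

def keyed_order(password, n_bytes):
    # Seed a PRNG from the password and return a permutation of byte indices.
    seed = int.from_bytes(hashlib.sha256(password.encode()).digest(), "big")
    rng = random.Random(seed)
    order = list(range(n_bytes))
    rng.shuffle(order)
    return order

# Sender and receiver derive the same permutation from the shared password,
# then embed/extract message bits at bytes order[0], order[1], ...
order = keyed_order("secret-password", 800 * 600 * 3)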


Figure 19. Extract Image

The difference between the original image and the image after insertion of the stego message is shown in fig. 20. The two images look exactly the same.

Figure 20. View Difference

4.3 Procedure for Text in Video Steganography

In this phase the stego message is embedded in the video. As shown in fig. 21, the video is added in order to enter the stego message.

Figure 21. Add Video

The frames can be viewed in fig. 22.

Figure 22. View Frames

The stego text to be embedded in the video is added as shown in fig. 23.

Figure 23. Add Text

The stego text is detected as shown in fig. 24.

Figure 24. Detect Text

Finally the stego data is extracted on the receiving end as shown in fig. 25.
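Since a video is treated as a sequence of still images, the same LSB embedding applies frame by frame. A minimal sketch, assuming OpenCV and NumPy are available; "input.avi" is an illustrative file name, and the output would have to be written with a lossless codec, since lossy re-encoding destroys the LSBs.

# Minimal sketch of text-in-video hiding: read the video as a sequence of
# frames and write message bits into the LSBs of the first frame.
import cv2
import numpy as np

cap = cv2.VideoCapture("input.avi")
frames = []
while True:
    ok, frame = cap.read()        # each frame is an HxWx3 NumPy array
    if not ok:
        break
    frames.append(frame)
cap.release()

message = "hidden text".encode()
bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))

flat = frames[0].reshape(-1)      # flatten the first frame's colour bytes
flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # one bit per byte LSB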

Figure 25. Extract

5. Conclusion

Thus we conclude that steganographic techniques can be used for a number of purposes beyond covert communication and deniable data storage: confidential communication, protection against data alteration, secret data storage, media database systems, copyright protection, feature tagging and, perhaps most importantly, digital watermarking. A digital watermark is invisible to the eye and undetectable without the appropriate secret key, but contains a small ownership identification. Digital watermarking, a much wider field of steganography, allows copyright owners to incorporate into their work identifying information invisible to the human eye, protecting against the dangers of copyright infringement.

References

[1] Neil F. Johnson, Zoran Duric, Sushil Jajodia, Information Hiding: Steganography and Watermarking - Attacks and Countermeasures, Kluwer Academic Press, Norwell, MA, 2000.
[2] G. Simmons, "The Prisoners' Problem and the Subliminal Channel," CRYPTO, pp. 51-67, 1983.
[3] C. Kurak, J. McHugh, "A Cautionary Note On Image Downgrading," IEEE Eighth Annual Computer Security Applications Conference, 1992, pp. 153-159.
[4] Tuomas Aura, "Invisible Communication," EET, 1995.
[5] Warren Zevon, Lawyers, Guns, and Money. Music track released on the albums Excitable Boy, 1978; Stand in the Fire, 1981; A Quiet Normal Life, 1986; Learning to Flinch, 1993.
[6] David Kahn, The Codebreakers, The Macmillan Company, New York, NY, 1967.
[7] Herbert S. Zim, Codes and Secret Writing, William Morrow and Company, New York, NY, 1948.
[8] Anderson, R.J. & Petitcolas, F.A.P., "On the Limits of Steganography," IEEE Journal on Selected Areas in Communications, May 1998.
[9] Owens, M., "A Discussion of Covert Channels and Steganography," SANS Institute, 2002.
[10] Johnson, N.F. & Jajodia, S., "Steganalysis of Images Created Using Current Steganography Software," Proceedings of the 2nd Information Hiding Workshop, April 1998.
[11] N. F. Johnson, S. Jajodia, "Exploring Steganography: Seeing the Unseen," IEEE Computer, February 1998, pp. 26-34.
[12] Julie K. Petersen, The Telecommunications Illustrated Dictionary, CRC Press, 2002, ISBN 084931173X.
[13] Krenn, R., "Steganography and Steganalysis."
[14] Wang, H. & Wang, S., "Cyber Warfare: Steganography vs. Steganalysis," Communications of the ACM, 47:10, October 2004.
[15] Anderson, R.J. & Petitcolas, F.A.P., "On the Limits of Steganography," IEEE Journal on Selected Areas in Communications, May 1998.
[16] E. Kawaguchi and R. O. Eason, "Principle and Applications of BPCS-Steganography," Proceedings of SPIE: Multimedia Systems and Applications, Vol. 3528, pp. 464-473, 1998.

Author Profile

Engr. Ali Javed has been serving as a Lecturer in the Software Engineering Department at the University of Engineering & Technology Taxila, Pakistan, since September 2007. He received his MS degree in Computer Engineering from the University of Engineering & Technology Taxila, Pakistan, in January 2010, and his B.Sc. degree in Software Engineering from the same university in September 2007. His areas of interest are digital image processing, computer vision, video summarization, machine learning, software design and software testing.


Modeling and Simulation of Synchronous Buck Converter for PMG in Low Power Applications
R. Bharanikumar1, A. Nirmal Kumar2 and K.T. Maheswari3.
1, 2, 3 Department of Electrical and Electronics Engineering, Bannari Amman Institute of Technology, Anna University, Tamil Nadu, India. E-mail: [email protected], [email protected]

Abstract: This paper focuses on the design and development of a synchronous buck converter for a low-power generation system. The buck topology suffers from low efficiency at light loads due to dissipation that does not scale with load current. In this paper we present a method for improving the efficiency of the buck converter by reducing gate-drive losses. Results of a PSIM simulation are presented.

Keywords: Buck Converter, Synchronous Rectification, PMG, Wind turbine.

1. Introduction

Small wind turbines offer a promising alternative for many remote electrical uses where there is a good wind resource. The goal of this work is to characterize small wind turbines, wind-diesel hybrid system components and wind-hybrid systems, and to develop new off-grid applications for small wind turbines in order to expand the international market for these systems. Projects fall into two classifications: applications development and testing. Testing includes both small turbines and wind-hybrid systems. Although the projects that fall under applications development and testing are varied, they all focus on the remote power market and all include small wind turbines as the power source.

2. Block Diagram

The block diagram consists of a rectifier stage, a buck converter and a controller. Many small wind turbine generators consist of a variable-speed rotor driving a permanent-magnet synchronous generator. The principal application of such wind turbines is battery charging, in which the generator is connected through a rectifier to a battery bank. The wind turbine electrical interface is essentially the same whether the turbine is part of a remote power supply for telecommunications, a stand-alone residential power system or a hybrid village power system.

Figure 1. Block Diagram

3. Small Wind Turbine in Battery Charging Application

The performance limitations of permanent magnet wind turbine generators in battery-charging applications are caused by the poor match of the rotor, generator and load characteristics over most of the operating wind speed range. To address this, we placed an optimizing DC/DC voltage converter between the rectifier and the batteries. We can control the current output of the synchronous buck converter, which allows us to control the power going to the batteries. Battery-charging systems are very important in developing countries, where rural families cannot afford a solar-battery home system or other electricity options. Even the small amount of energy (1 kWh) that these batteries store can sufficiently improve the quality of life in such areas, giving people access to electrical lighting, TV/radio, and other household conveniences. The technical aspects of charging numerous 28-V batteries with a small permanent-magnet-alternator wind turbine suggest that a special battery-charging station needs to be developed. The major advantage of a centralized battery-charging station is that it can bring electric service to a very low-income segment of the population. This performance improvement comes at higher system capital cost; however, the cost per charged battery of the system with the individual charge controllers is lower because of better performance characteristics.

4. Permanent Magnet Generator

Permanent magnet alternators are the most powerful and cost-effective solution for building a wind generator. Their low-rpm performance is excellent, and at high speeds they can really crank out the current due to their efficiency. They provide an optimal solution for varying-speed wind turbines of gearless or single-stage-gear configuration [5]. The evolution of the control design of PM drives begins with the cost reduction of permanent magnet material and follows the progress of control theory of AC electric machinery. The main difference between PM drives and their earlier-developed counterparts lies in the removal of the excitation field circuitry, with its troublesome brushes, and its replacement with permanent magnets. However, the use of PMs precludes classical field-weakening control, because the magnets produce a constant magnetic field intensity. With the cost reduction of rare-earth permanent magnet materials, PM machines became very popular in industry due to their:
• Simple structure


• High efficiency
• Robustness
• High torque to size ratio

4.1 Types of Permanent Magnet Generators
Modern permanent magnet generators need no separate excitation system. They can be gearless or geared, and are fully controlled with variable speed and reactive power supply. They provide the highest power quality and efficiency for the end user. Three different concepts of permanent magnet generator technology are offered.

4.1.1. Low Speed Robust Gearless System
In a direct-drive application the turbine and the generator are integrated to form a compact and structurally integrated unit. The design gives free access to all parts for easy installation and maintenance. The simple and robust low-speed rotor design, with no separate excitation or cooling system, results in minimum wear, reduced maintenance requirements, lower life-cycle costs, and a long lifetime.

4.1.2. Medium Speed Compact and Economical Unit
This is a very compact slow-speed system in which the turbine main bearing and the permanent magnet generator are integrated with a single-stage gearbox, giving high efficiency with low maintenance needs. It uses the same simple and robust low-speed rotor design with no separate excitation or cooling system, resulting in less wear, reduced maintenance requirements, lower life-cycle costs, and a long lifetime.

4.1.3. High Speed Small Power Pack
The system is mechanically similar to the doubly fed type, with even smaller space requirements. It delivers extremely high power in a small size; the typical speed range is from 1000 to 2000 rpm for a 6- or 8-pole generator.

The following key specifications evolved for the PMG:
PMG: AC 3φ, 4-pole machine
Rated speed: 500 - 3000 RPM
Output power: 1.0 - 1.5 kW
Output voltage: 65 V
Output current: 20 A

5. Buck Converter

Figure 2. Basic Circuit of Buck Converter

The operation of the buck converter is explained first. This circuit can operate in any of the three states explained below. The first state corresponds to the case when the switch is ON. In this state, the current through the inductor rises, as the source voltage is greater than the output voltage, whereas the capacitor current may be in either direction, depending on the inductor current and the load current. When the inductor current rises, the energy stored in it increases; during this state, the inductor acquires energy. When the switch is closed the diode is in the OFF state, and as shown in Figure 2 the capacitor is being charged. The second state corresponds to the condition when the switch is OFF and the diode is ON. In this state, the inductor current freewheels through the diode and the inductor supplies energy to the RC network at the output. The energy stored in the inductor falls in this state: the inductor discharges its energy, and the capacitor current may be in either direction, depending on the inductor current and the load current.

Figure 3. Second Stage of Basic Circuit

When the switch is open, the inductor discharges its energy. When it has discharged all its energy, its current falls to zero and tends to reverse, but the diode blocks conduction in the reverse direction. In the third state, both the diode and the switch are OFF. During this state, the capacitor discharges its energy and the inductor is at rest, with no energy stored in it; the inductor neither acquires nor discharges energy in this state.

Figure 4. Third Stage of Basic Circuit

Here it is assumed that the source voltage remains constant with no ripple, and that the frequency of operation is kept fixed with a fixed duty cycle. When both the input voltage and the output voltage are constant, the current through the inductor rises linearly when the switch is ON and falls linearly when the switch is OFF. Under this condition, the current through the capacitor also varies linearly while it is being charged or discharged.
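These assumptions lead to the standard steady-state relations for the buck converter: the output voltage equals the duty cycle times the input voltage, and the peak-to-peak inductor ripple follows from the linear current rise during the on-time. A minimal numeric sketch follows; the component values are illustrative assumptions, not values from this paper.

# Steady-state relations for an ideal buck converter in continuous conduction,
# under the constant-voltage, fixed-duty-cycle assumptions above.
v_in = 65.0     # V, e.g. the rectified PMG output (illustrative)
duty = 0.4      # fixed duty cycle D (illustrative)
f_sw = 100e3    # Hz, switching frequency (illustrative)
L = 100e-6      # H, output inductor (illustrative)

v_out = duty * v_in                  # V_out = D * V_in
t_on = duty / f_sw                   # switch on-time per cycle
ripple = (v_in - v_out) * t_on / L   # delta_i = (V_in - V_out) * t_on / L

print(f"V_out = {v_out:.1f} V, ripple = {ripple:.2f} A peak-to-peak")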


6. Synchronous Buck Converter
The synchronous-rectified buck converter uses current-mode control to regulate the output voltage. This control mode allows the converter to respond to changes in line voltage without delay. Also, the output inductance can be reduced to increase the converter's response to dynamic-load conditions.



Although these features would appear to favor current-mode control in applications that require a fast dynamic response, this control method has some disadvantages. For example, it tends to be sensitive to noise in the control loop. Also, the current-mode control method requires two feedback loops, a current inner loop and a voltage outer loop, which complicates the design. Finally, the controller uses a current-sensing resistor in series with the output inductor. This current-sensing resistance typically dissipates as much power as the MOSFETs do, further reducing the current-mode converter's efficiency. Voltage-mode control is attractive for low-voltage buck converters because it involves a single control loop, exhibits good noise immunity and allows a wide range for the PWM duty-cycle ratio. Also, voltage-mode converters do not require a resistor for sensing current. However, the transfer function of standard voltage-mode buck converters that use Schottky diodes changes from no load to full load, making it difficult to achieve fast response to large dynamic loads. The voltage drop of a MOSFET is much less than that of a Schottky diode, which improves the efficiency of buck converters using synchronous rectification. Synchronous rectification increases the efficiency of a buck converter by replacing the Schottky diode with a low-side N-channel MOSFET. The resulting voltage drop across the MOSFET is smaller than the forward voltage drop of the Schottky diode. A more comprehensive comparison includes the switching losses for both the MOSFET and the Schottky diode; however, at typical operating frequencies and voltages, a buck regulator's switching losses are usually small in comparison with the conduction losses. The low-side MOSFET conducts current in its third quadrant during the off times of the high-side MOSFET. This synchronous switch operates in the third quadrant because the current flows from the source to the drain, which results in a negative bias across the switch. A positive voltage at the gate of the device still enhances the channel.

Figure 5. Synchronous Buck Converter

As Figure 5 shows, conventional synchronous-rectified buck converters partition the PWM-control and synchronous-drive functions into a single IC that drives discrete MOSFETs. The control and driver circuits synchronize the timing of both MOSFETs with the switching frequency. The upper MOSFET conducts to transfer energy from the input, and the lower MOSFET conducts to circulate inductor current. The synchronous PWM control block regulates the output voltage by modulating the conduction intervals of the upper and lower MOSFETs. Under light loads, the control block usually turns the lower MOSFET off to emulate a diode. Synchronous rectification with discrete MOSFETs causes variable switching delays because of the variations in gate charge and threshold voltage from one MOSFET to another. Standard control circuits compensate for these variations by delaying the turn-on drive of the lower MOSFET until after the gate voltage of the upper MOSFET falls below a threshold. This delay creates a dead time in which neither MOSFET conducts. The dead time eliminates the possibility of a destructive shoot-through condition, in which both MOSFETs conduct simultaneously. Standard designs use the same method to delay the turn-on of the upper device. A typical design delays discrete-MOSFET conduction with a 60-nsec dead time and limits the converter switching frequency to 300 kHz.

6.1 Conventional Vs Synchronous Buck Converter
The comparison of efficiency between a synchronous rectifier with a parallel Schottky diode and that of a Schottky diode alone is shown in Figure 6.

Figure 6. Efficiency Graph
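The efficiency advantage in Figure 6 is driven mainly by conduction loss: a Schottky diode dissipates roughly its forward drop times the load current, while the synchronous MOSFET dissipates the squared current times its on-resistance. A rough comparison follows, with assumed, illustrative part values:

# Rough conduction-loss comparison behind the efficiency curves of Figure 6.
# V_f and R_DS(on) below are assumed part values, not taken from the paper.
i_load = 20.0    # A, converter output current
d_low = 0.6      # fraction of the period the low-side device conducts

v_f = 0.45       # V, Schottky forward drop (assumed)
r_ds_on = 0.005  # ohm, low-side MOSFET on-resistance (assumed)

p_schottky = v_f * i_load * d_low        # ~ V_f * I * (1 - D)
p_mosfet = i_load**2 * r_ds_on * d_low   # ~ I^2 * R_DS(on) * (1 - D)

print(f"Schottky: {p_schottky:.2f} W, synchronous MOSFET: {p_mosfet:.2f} W")

With these numbers the diode dissipates about 5.4 W against roughly 1.2 W for the MOSFET, which is the gap the figure illustrates.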

7. Test Results
7.1 Generator Testing Details
7.1.1. Generator Speed Vs Generator Output Voltage
The table shows various readings of generator speed and their corresponding output voltages. It can be seen from the table that a generator output voltage of 51.1 volts is obtained at the maximum speed of 1900 rpm.


Table 1: Speed and generator voltage

Generator speed (rpm)    Generator output voltage (V)
900                      19.2
1200                     26.33
1350                     29.41
1700                     46.2
1900                     51.1

The following graph has been drawn for the generator speed versus generator output voltage. Generator speed is taken on the x-axis and generator output voltage on the y-axis.






Figure 7. Generator Speed Vs Generator Output Voltage

7.1.2. Generator Output Voltage Vs Wind Velocity
The table shows various readings of wind velocity and their corresponding output voltages. It can be seen from the table that the generator output voltage of 51.1 volts is obtained at the maximum wind velocity of 6.5 m/s.


Table 2: Voltage with wind velocity

Generator output voltage (V)    Wind velocity (m/s)
19.2                            3.07
26.33                           4.10
29.41                           4.62
46.2                            5.81
51.1                            6.50

The following graph has been drawn for the wind velocity versus generator output voltage. Wind velocity is taken on the x-axis and generator output voltage on the y-axis. The generated voltage is maximum when the wind velocity is between 6 and 12 m/s.

Figure 8. Generator Output Voltage Vs Wind Velocity

7.1.3. Generator Speed Vs Wind Velocity
The table shows various readings of generator speed and their corresponding wind velocities. It can be seen from the table that a wind velocity of 6.5 m/s corresponds to the maximum speed of 1900 rpm.

Table 3: Speed with wind velocity

Generator speed (rpm)    Wind velocity (m/s)
900                      3.07
1200                     4.10
1350                     4.62
1700                     5.81
1900                     6.50

The following graph has been drawn for the generator speed versus wind velocity. Generator speed is taken on the x-axis and wind velocity on the y-axis.

Figure 9. Generator Speed Vs Wind Velocity

8. Simulation Results

Electronic circuit design requires accurate methods for evaluating circuit performance. Because of the enormous complexity of modern integrated circuits, computer-aided circuit analysis is essential and can provide information about circuit performance that is almost impossible to obtain with laboratory prototype measurements. PSIM is a general-purpose circuit program that simulates electronic circuits. PSIM can perform various analyses of electronic circuits: the operating points of transistors, a time-domain response, a small-signal frequency response, and so on. Simulation work was done for all the circuits and the results are attached.

8.1 Simulated Circuit Diagram


Figure 10. Simulated Circuit Diagram


The figure shows the simulated circuit diagram. The synchronous buck converter operates in current-programmed mode control. The unit is a PI-controlled device that controls the power level at which the converter operates. The unit is primarily designed to operate from the three-phase alternating-current output of the wind turbine.
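The paper does not give the PI gains or sampling details, so the following is only a schematic sketch of such a PI loop adjusting the duty cycle toward a setpoint; every number here is an assumption for illustration.

# Schematic sketch of a PI control loop for the converter: the controller
# adjusts the duty cycle to drive the measured output toward the setpoint.
# Gains, setpoint and timing are illustrative assumptions, not the paper's.
kp, ki = 0.05, 20.0   # assumed proportional and integral gains
v_ref = 26.0          # V, assumed output-voltage setpoint
dt = 1e-5             # s, assumed control period
integral = 0.0

def pi_step(v_measured):
    # One PI update: return a duty cycle clamped to [0, 1].
    global integral
    error = v_ref - v_measured
    integral += error * dt
    duty = kp * error + ki * integral
    return min(max(duty, 0.0), 1.0)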

The following graph shows the PSIM simulation results for the output voltage from the synchronous buck converter.


Figure 11. Output result of Synchronous buck converter

9. Conclusion
Synchronous rectification is possible with all commonly used converter topologies. It is achieved by simply adding a MOSFET in parallel with the freewheeling diode; as a result of this addition, the efficiency improves significantly. A simple-to-operate and robust wind-electric battery-charging station has been developed and tested. Future work for the applications development and testing team will include continued testing of commercial or near-commercial products for the remote electrification market.

References

[1] Monica Chinchilla, Santiago Arnaltes, Juan Carlos Burgos, "Control of Permanent-Magnet Generators Applied to Variable-Speed Wind-Energy Systems Connected to the Grid," IEEE Transactions on Energy Conversion, vol. 21, no. 1, pp. 130-135, March 2006.
[2] F. Z. Peng, "Z-Source Inverter," IEEE Trans. Ind. Applicat., vol. 39, pp. 504-510, Mar./Apr. 2003.
[3] F. Z. Peng, M. Shen, and Z. Qian, "Maximum Boost Control of the Z-Source Inverter," IEEE Transactions on Power Electronics, vol. 20, no. 4, pp. 833-838, July 2005.
[4] Shigeo Morimoto, Hideaki Nakayama, Masayuki Sanada, Yoji Takeda, "Sensorless Output Maximization Control for Variable-Speed Wind Generation System Using IPMSG," IEEE Transactions on Industrial Applications, 2003, pp. 1464-1471.
[5] Tomonobu Senjyu, Sathoshi Tamaki, Naomitusu Urasaki, Katsumi Uezato, Toshihisa Funabashi, Hideki Fujita, "Wind Velocity and Position Sensorless Operation for PMSG Wind Generator," Proceedings of IEEE Power Electronics Conference, pp. 787-791, 2006.
[6] A.B. Raju, K. Chatterjee and B.G. Fernandes, "A Simple Power Point Tracker for Grid-Connected Variable-Speed Wind Energy Conversion System with Reduced Switch Count Power Converters," IEEE Power Electronics Specialists Conference, 2003, pp. 456-462.

Authors Profile

Bharanikumar.R was born in Tamilnadu, India, on May 30, 1977. He received the B.E degree in Electrical and Electronics Engineering from Bharathiar University in 1998 and his M.E in Power Electronics and Drives from the College of Engineering Guindy, Anna University, in 2002. He has 9 years of teaching experience. Currently he is working as Asst. Professor in the EEE department, Bannari Amman Institute of Technology, Sathyamangalam, Tamil Nadu, India. He is doing research in the field of power converters for special machines, vector-controlled synchronous machine drives, and converters for wind energy conversion systems.

A. Nirmal Kumar was born in the year 1951. He completed his PG and UG in Electrical Engineering from Kerala and Calicut University respectively. He completed his PhD in Power Electronics in the year 1992 from P.S.G. College of Technology, Coimbatore, under Bharathiar University. He was with N.S.S. College of Engineering for nearly 28 years in various posts before joining Bannari Amman Institute of Technology, Sathyamangalam, Tamil Nadu, India in the year 2004. He is a recipient of the Institution of Engineers Gold Medal in the year 1989. His current research areas include power converters for wind energy conversion systems and controllers for induction motor drives.

Maheswari K.T was born in Tamilnadu, India, on Dec 20, 1980. She received her B.E degree in Electrical and Electronics Engineering from Erode Sengunthar Engineering College, Erode, Bharathiyar University. Currently she is pursuing her M.E in Power Electronics and Drives at Bannari Amman Institute of Technology, affiliated to Anna University. Her fields of interest include PMG and power converters.
