Chapter 1 Introduction
Chapter 1 Introduction

In the recent past, the terror of several terrorist groups and acts of outlaws have been spreading rapidly in India. Tracking such acts before they are carried out, or recovering from them afterwards, has become difficult for our security system. That failure is due to the lack of integrated information across the different departments of India, such as passport, transport, police, defense and income tax. Moreover, tracking in-depth information about every individual is hampered by the overpopulation of our country. To overcome these security problems, a system is needed that integrates the information of all departments of India along with a biometric system to uniquely identify every individual. A biometric system provides automatic recognition of an individual based on some unique feature or characteristic possessed by the individual. Biometric systems work by first capturing a sample of the feature, such as recording a digital sound signal for voice recognition, or taking a digital color image for face recognition. The sample is then transformed using a mathematical function into a biometric template. The biometric template provides a normalized, efficient and highly discriminating representation of the feature, which can then be objectively compared with other templates in order to determine identity. Most biometric systems allow two modes of operation: an enrolment mode for adding templates to a database, and an identification mode, where a template is created for an individual and a match is then searched for in the database of pre-enrolled templates.

1.1 OBJECTIVE
The development and implementation of this Iris Based Security System with iris recognition aims to:
Develop an integrated iris based security system containing detailed information on every citizen in the country.
Enable the security system with a biometric component, where the iris of the human eye is used as the unique feature for validating a citizen.
Provide fast, easy and integrated access to information on a citizen.
Reduce the time needed to search for information on a citizen across the different departments of Indian security.

1.2 PROBLEM STATEMENT
Conventional systems that check passwords, PINs, chip cards and identity cards cannot verify that the user providing the correct data is also the lawful owner. Because biometric techniques work with person-linked characteristics (which can neither be lost nor forgotten, and are not easy to steal), they promise a new dimension in quality, comfort and security in personal authentication. However, many biometric authentication systems are overshadowed by very high failure rates. Recognition systems such as fingerprint recognition and voice recognition are usually impeded by external factors: a fingerprint can be damaged by scarring, and the voice may change with age. Another ocular-based recognition system, the retinal scan, even though highly accurate, is not very user friendly because it is intrusive. Hence, these systems tend to falsely reject users or are unfriendly to them. Iris recognition, on the other hand, is rarely affected by external factors, as the iris is protected by the transparent cornea and the eyelids, and it is stable: a single enrolment in the database can last a lifetime. Unlike a retinal scan, an iris image can be captured from a distance using a camera illuminated by infrared light. Even though iris recognition is a relatively young biometric authentication technique, it is one of the fastest emerging recognition systems.
1.3 SCOPE

The system will have a console implementation and a three-tier architecture. The middle layer will contain the logic of the iris pattern generator and matcher, and the data will be stored in external data stores, which the user (the admin) can access through a graphical user interface.
[Figure: three tiers — Graphical user interface, Middleware containing logic, Data storage.]
Fig.1.3.1: Three tier architecture of Iris based security system
Admin can log in to the system and perform various processes:
Admin can add or edit person information.
Admin can search for a person in the system by providing the necessary credentials, and the result can also be printed.
Admin can add organization details.
Admin can add institutional details.
Study of the Iris

The iris is located behind the transparent cornea and aqueous humour of the eye, but in front of the lens. It is a membrane in the eye, responsible for controlling the diameter and size of the central darker pupil and the amount of light reaching the retina. It is a colored ring around the pupil. The color of the iris is the "eye color", which can be green, blue, or brown. Its only physiological purpose is to control the amount of light that enters the eye through the pupil, by the action of its dilator and sphincter muscles that control pupil size, but its construction from elastic connective tissue gives it a complex, fibrillose pattern. The larger the pupil, the more light can enter.
Fig 1.3.2: Schematic Diagram of the Human Eye

Structure of the iris
Fig 1.3.3: Cross Section of the Iris

The iris is divided into two major regions:
1. The pupillary zone is the inner region whose edge forms the boundary of the pupil.
2. The ciliary zone is the rest of the iris that extends to its origin at the ciliary body.
Iris surface features
1. The pupillary ruff is a series of small ridges at the pupillary margin formed by the continuation of the pigmented epithelium from the posterior surface.
2. The circular contraction folds, also known as contraction furrows, are a series of circular bands or folds about midway between the collarette and the origin of the iris. These folds result from changes in the surface of the iris as it dilates.
3. Crypts at the base of the iris are additional openings that can be observed close to the outermost part of the ciliary portion of the iris.
Features of the Iris
The iris has many features that can be used to distinguish one iris from another. One of the primary visible characteristics is the trabecular meshwork, a tissue which gives the appearance of dividing the iris in a radial fashion and which is permanently formed by the eighth month of gestation. The development of the iris pattern is not under genetic influence; it arises through a process known as chaotic morphogenesis that occurs during the seventh month of gestation, which means that even identical twins have different irises. The fact that the iris is protected behind the eyelid, cornea, and aqueous humor means that, unlike other biometrics such as fingerprints, the likelihood of damage or abrasion is minimal. The iris is also not subject to the effects of aging, which means it remains in a stable form from about the age of one until death. The use of glasses or contact lenses (colored or clear) has little effect on the representation of the iris and hence does not interfere with the recognition technology.

Fig 1.3.4: A front-on view of the human eye

Since the iris in this sense is highly unique, stable, easily captured and complex, it can be used as a biometric signature.

1.4 BIOMETRICS
Biometrics is the process of uniquely identifying humans based on their physical or behavioral traits. Biometric systems are mainly based on fingerprints, facial features, voice, hand geometry, handwriting, and, in our project, the iris. A biometric system first captures a sample of the feature. The extracted feature is then transformed, using a mathematical function, into a biometric template. The biometric template is a normalized, efficient and highly discriminating representation of the feature, which can then be objectively compared with other templates in order to determine identity. Biometric characteristics can be divided into two main classes:
Physiological characteristics are related to the shape of the body, e.g. fingerprints, facial features and hand geometry.
Behavioral characteristics are related to the behavior of the person, e.g. signature, keystroke dynamics, voice or speech.

A good biometric template is characterized by the use of a feature that is:
1. Highly unique: a biometric template should be unique, which allows a person to be identified with minimum failures.
2. Stable: the biometric feature of the person should remain the same, or change very little, so that the enrolled template in the database will still be matchable. The likelihood of damage and abrasion to this feature should be minimal.
3. Easily captured: the biometric system should be able to extract the biometric feature effectively and efficiently; hence, the feature should be externally visible.
Fig 1.4.1: General Block Diagram of a Biometric System

1.5 IRIS RECOGNITION
Iris recognition is a method of biometric authentication that recognizes a person by the pattern of the iris. Iris recognition should not be confused with another ocular-based technology, retina scanning. The automated method of iris recognition is relatively young, existing in patent only since 1994, and is still considered the most accurate. It uses specialized camera technology, with subtle infrared illumination reducing specular reflection from the convex cornea, to create images of the detail-rich, intricate structures of the iris. Converted into digital templates, these images provide mathematical representations of the iris that yield unambiguous positive identification of an individual. Iris recognition technology offers the highest accuracy in identifying individuals of any method available. This is because no two irises are alike, not between identical twins, or even between the left and right eye of the same person. Irises are also stable; unlike other identifying characteristics that can change with age, the pattern of one's iris is fully formed by ten months of age and remains the same for the duration of a lifetime. Iris recognition is rarely impeded by glasses or contact lenses, and the iris can be scanned from 10 cm to a few meters away. Gathering unique information about an individual from the iris pattern requires extracting this pattern and encoding it into a bit-wise biometric template. Therefore, iris recognition algorithms need to isolate and exclude artifacts as well as locate the circular iris region in the acquired eye image. Artifacts in the iris include eyelids and eyelashes partially covering it. The extracted iris region then needs to be normalized. The normalization process produces iris regions which have the same constant dimensions, so that two photographs of the same iris taken under different conditions will have characteristic features at the same spatial location; it involves unwrapping the doughnut-shaped extracted iris into a rectangle of constant dimensions. The significant features of the normalized iris must then be encoded so that comparisons between templates can be made. Most iris recognition algorithms make use of a band-pass decomposition of the iris image to create a biometric template. Finally, templates are matched using the Hamming distance. The Hamming distance gives a measure of how many bits differ between two bit patterns (for example, the codes 10110 and 10011 differ in two of five positions, giving a normalized Hamming distance of 0.4). Using the Hamming distance of two bit patterns, a decision can be made as to whether the two patterns were generated from different irises or from the same one. The process of an iris recognition system can be summarized as follows:
Fig 1.5.1: Block Diagram of Iris Recognition System
Thus, the iris is an externally visible yet protected organ whose unique pattern remains stable throughout human life. These characteristics make it one of the most preferred biometric techniques for identifying individuals. Digital image processing techniques can be employed to extract the unique iris pattern from a digitized image of the eye and encode it into a biometric template. This biometric template contains an objective mathematical representation of the unique information stored in the iris, and allows comparisons to be made between templates.
Chapter 2
Literature Survey
The iris has historically been recognized to possess characteristics unique to each individual. In the mid-1980s, two ophthalmologists, Dr. Leonard Flom and Dr. Aran Safir, proposed the concept that no two irises are alike. They researched and documented the potential of using the iris for identifying people and were awarded a patent in 1987. Soon after, a sophisticated algorithm that brought the concept to reality was developed by Dr. John Daugman and patented in 1994.
Since then many other systems have been developed; the most notable include the systems of Wildes, Boles and Boashash, Lim, and Noh.
2.1 EXISTING SYSTEM

The existing security systems practiced in India are usually primitive.
Fig 2.1.1: Birth Certificate
A citizen is identified on the basis of paper documents like a birth certificate, identification card or PAN card, or by an identification parade.
Fig 2.1.2: PAN Card
Some security areas are automated with motion detector alarm systems, barcode systems, smart cards, etc.
Fig 2.1.3: Smart card
Fig 2.1.4: Barcode ID
Biometric systems are also used for the unique identification of citizens, such as fingerprint recognition, face recognition and DNA testing.
Fig 2.1.5: Fingerprint recognition

The data of all security departments, such as defense, police or the C.B.I., is not integrated.
2.1.1 DRAWBACKS OF EXISTING SYSTEM
The following are some of the drawbacks that were observed in the existing system.
• High maintenance
o Identification documents like a birth certificate, PAN card, smart card or barcode ID can wear out with time or be spoiled by contact with a foreign element such as water, so they need to be maintained. Moreover, an identification document always needs to be carried along.
• Time taken
o The time taken for identification through an identification parade is long. Biometric identification by DNA testing is a long procedure and takes a lot of time.
• High chance of fraudulence
o There are high chances of fraudulence with identification documents, including smart cards and barcode IDs, which can cheat the security system.
• Faulty results
o An identification parade or other identification document can also lead to faulty results in some cases, as the wrong person may be identified in an identification parade. Moreover, in a biometric system such as fingerprinting, prints can be very similar in the case of twins.
• Affected by the nature of work
o Fingerprint readability may also be affected by the work an individual does. For example, transportation workers such as mechanics, or food workers, may present fingerprints that are difficult to read due to dryness or the presence of foreign substances on the fingers.
• Distributed information
o Information about an individual is distributed across different departments of India, which makes tracking all of the information in a short time difficult.
2.2 PROPOSED SYSTEM

The system at hand can be divided into two inter-related subsystems.

Iris pattern generator and matcher system: this part of the system deals with generating and comparing iris patterns using various image processing techniques. The processes handled by this subsystem are:
1. The unique features of the iris are extracted.
2. The iris pattern is generated and stored in the database.
3. If required, an iris pattern is matched against the iris patterns stored in the database.
[Figure: the system comprises a Management system (adding and editing human information; searching and retrieving human information) and an Iris pattern generator and matcher system (feature extractor, pattern generator, matcher) operating on the pre-processed image supplied by the user.]
Fig 2.2: Block Diagram of Iris Based Security System

Management system: this part deals with the management side of the security system. The processes handled by this subsystem are:
• The management can register and edit details of a person, such as personal information, educational information, etc.
• The management can search and retrieve the information about any person.
• The management can register new educational institution and organization names with their unique codes.

ADVANTAGES OF PROPOSED SYSTEM
An attractive and user friendly graphical user interface.
A unique biometric feature is used for identifying the citizen.
It provides help which makes it easy for the user to understand how to use the system.
It is an integrated iris based security system containing detailed information on every citizen in the country.
The security system is enabled with a biometric component, where the iris of the human eye is used as the unique feature for validating a citizen. This is advantageous because the iris is a body part that is less prone to damage compared to fingers, hands and other parts used for physical biometrics.
It provides fast, easy and integrated access to information on a citizen.
It reduces the time needed to search for information on a citizen across the different departments of Indian security.
Chapter 3
Analysis
The system has been implemented in the following environment:

Operating System Environment : Windows XP Professional with SP3
Application Development      : MATLAB®
Presentation Tier            : UI programming using MATLAB, Flash & Photoshop
Middle Tier                  : Logic development in the MATLAB language
Data Tier                    : Relational tables in MS Access
RDBMS                        : Microsoft Access

Hardware requirements:
CPU        : Intel Pentium, Intel Celeron, AMD Athlon or AMD Duron processor running at 300 MHz or faster; an Intel Pentium III processor (or equivalent) or later
RAM        : 512 MB
Disk space : 120 GB available disk space

Feasibility study

When developing software, the highlighted concern is mainly the feasibility of the system, i.e. whether the system is feasible in the following contexts.

Technical feasibility
Technical aspects were considered while the feasibility study was conducted. Since the nation has licensed copies of all the software required for the system, as well as the necessary hardware to meet the requirement of an infrared camera for the new iris based security system, it can be concluded that the system is technically feasible.

Operational feasibility
New employees are required to administer and maintain the system, for which a small training program is required. This training will include a briefing on the iris based security system with cases and role-plays. Initially a demonstration will be given, allowing staff to use the iris based security system in the presence of the developers, so that any problems encountered can be taken care of. As the users have originated the request for a new system, acceptability is expected to be high. Hence we can conclude that the iris based security system is operationally feasible.

Financial feasibility
A financial feasibility study was carried out to determine the financial viability of the project in terms of the amount of investment in the project and the output expected.
The study includes the cost involved at the time of development of the project as well as future costs in terms of maintenance of the software and other miscellaneous expenditure. The proposed hardware and software are of affordable cost, and the cost of developing the software is very small. On the basis of the analysis, the study concluded that the project is financially viable.

Start-up cost
1. Salaries of programmers
2. Cost of training
3. Preparation of manuals and other documents
Operating cost
1. Salaries of administrator and management people
2. Cost of installation and maintenance
3. Cost of a high-end infrared camera
Result of feasibility study
On all three levels, i.e. the technical, operational and economical level, the proposed system is feasible. The iris based security system provides more benefits when compared with traditional practice, mainly on the basis of costs in terms of time and overhead incurred.
Chapter 4
Methodology
4.1 IMAGE ACQUISITION
This step is one of the most important and deciding factors for obtaining a good result. A good and clear image eliminates the need for noise removal and also helps avoid errors in calculation. In this case, computational errors are avoided because of the absence of reflections and because the images have been taken from close proximity. This project uses the images provided by CASIA (Institute of Automation, Chinese Academy of Sciences). These images were taken solely for the purpose of iris recognition software research and implementation. Infrared light was used for illuminating the eye, and hence the images do not contain any specular reflections. The parts of the computation which involve the removal of errors due to reflections in the image were therefore not implemented.
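As a simple, hedged illustration of this step (the file name is a placeholder, not an actual CASIA path, and imshow assumes the Image Processing Toolbox), an acquired eye image can be loaded and inspected in MATLAB as follows:

% Minimal sketch: load one greyscale eye image for the later processing steps.
% The file name is illustrative; any 8-bit greyscale CASIA-style image will do.
eyeimage = imread('eye_001.bmp');
eyeimage = double(eyeimage);         % work in double precision from here on
figure, imshow(uint8(eyeimage));     % quick visual check of the acquired image
title('Acquired eye image');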
4.2 EYE IMAGE SEGMENTATION
Segment Eye performs automatic segmentation of the iris region from an eye image; it also isolates noise areas such as occluding eyelids and eyelashes. Here we find:
circleiris     – centre coordinates and radius of the detected iris boundary
circlepupil    – centre coordinates and radius of the detected pupil boundary
imagewithnoise – original eye image, but with the location of noise marked with NaN values
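As a rough sketch of how this output can be produced once the boundary circles are known (the circle parameters below are hard-coded placeholders, not values from the actual segmentation, and eyeimage is the illustrative image loaded in Section 4.1), pixels outside the iris annulus can be marked with NaN like this:

% Sketch only: given iris/pupil boundaries as [yc, xc, radius] triples
% (placeholders here), mark everything that is not iris tissue with NaN,
% which is what the imagewithnoise output described above holds.
circleiris  = [150, 160, 110];    % illustrative iris boundary
circlepupil = [152, 158, 40];     % illustrative pupil boundary

[rows, cols] = size(eyeimage);
[x, y] = meshgrid(1:cols, 1:rows);

imagewithnoise = eyeimage;
outsideIris = (x - circleiris(2)).^2  + (y - circleiris(1)).^2  > circleiris(3)^2;
insidePupil = (x - circlepupil(2)).^2 + (y - circlepupil(1)).^2 < circlepupil(3)^2;
imagewithnoise(outsideIris | insidePupil) = NaN;   % non-iris pixels become noise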
The top eyelid and bottom eyelid are then located and masked as well.

4.3 IRIS LOCALIZATION
To find the main iris boundary we use Canny edge detection and the Hough transform.

4.3.1 EDGE DETECTION
Edges characterize boundaries and are therefore a problem of fundamental importance in image processing. Edges in images are areas with strong intensity contrasts, i.e. a jump in intensity from one pixel to the next. Edge detecting an image significantly reduces the amount of data and filters out useless information, while preserving the important structural properties of the image. The Canny edge detection algorithm is known to many as the optimal edge detector. Canny's intention was to enhance the many edge detectors already available at the time he started his work. He was very successful in achieving his goal, and his ideas and methods can be found in his paper, "A Computational Approach to Edge Detection". In that paper, he followed a list of criteria to improve the existing methods of edge detection. The first and most obvious is a low error rate: it is important that edges occurring in images are not missed and that there are no responses to non-edges. The second criterion is that the edge points be well localized; in other words, the distance between the edge pixels as found by the detector and the actual edge is to be at a minimum. A third criterion is to have only one response to a single edge. This was added because the first two criteria were not sufficient to completely eliminate the possibility of multiple responses to an edge.
Based on these criteria, the Canny edge detector first smooths the image to eliminate noise. It then finds the image gradient to highlight regions with high spatial derivatives. The algorithm then tracks along these regions and suppresses any pixel that is not at the maximum (non-maximum suppression). The gradient array is further reduced by hysteresis. Hysteresis is used to track along the remaining pixels that have not been suppressed. Hysteresis uses two thresholds: if the magnitude is below the low threshold, it is set to zero (made a non-edge); if the magnitude is above the high threshold, it is made an edge; and if the magnitude is between the two thresholds, it is set to zero unless there is a path from this pixel to a pixel with a gradient above the high threshold. The steps of the Canny edge detector include:

GAUSSIAN LOWPASS FILTER
The image was filtered using a Gaussian filter, which blurs the image and reduces effects due to noise. The degree of smoothing is decided by the standard deviation σ, which is taken to be 2 in this case.
Example:

Step 1: In order to implement the Canny edge detector algorithm, a series of steps must be followed. The first step is to filter out any noise in the original image before trying to locate and detect any edges. Because the Gaussian filter can be computed using a simple mask, it is used exclusively in the Canny algorithm. Once a suitable mask has been calculated, the Gaussian smoothing can be performed using standard convolution methods. A convolution mask is usually much smaller than the actual image; as a result, the mask is slid over the image, manipulating a square of pixels at a time. The larger the width of the Gaussian mask, the lower the detector's sensitivity to noise. The localization error in the detected edges also increases slightly as the Gaussian width is increased. The Gaussian mask used in the implementation is obtained by sampling the 2-D Gaussian and normalizing it so that its coefficients sum to one.
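A hedged MATLAB sketch of this smoothing step (the mask half-width of 3σ is an assumption, not the project's actual mask size):

% Sketch: build a normalized Gaussian mask with sigma = 2 (as stated above)
% and convolve it with the eye image using plain MATLAB.
sigma = 2;
halfwidth = ceil(3 * sigma);                   % mask half-width (assumption)
[gx, gy] = meshgrid(-halfwidth:halfwidth);
gmask = exp(-(gx.^2 + gy.^2) / (2 * sigma^2));
gmask = gmask / sum(gmask(:));                 % coefficients sum to 1
smoothed = conv2(eyeimage, gmask, 'same');     % Gaussian-smoothed image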
Step 2: After smoothing the image and eliminating the noise, the next step is to find the edge strength by taking the gradient of the image. The Sobel operator performs a 2-D spatial gradient measurement on an image. Then, the approximate absolute gradient magnitude (edge strength) at each point can be found. The Sobel operator uses a pair of 3x3 convolution masks, one estimating the gradient in the x-direction (columns) and the other estimating the gradient in the y-direction (rows).
The magnitude, or EDGE STRENGTH, of the gradient is then approximated using the formula: |G| = |Gx| + |Gy|
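For illustration, the Sobel masks and this magnitude approximation can be written in MATLAB as follows (a sketch that reuses the smoothed image from the previous step):

% Sketch: Sobel gradient estimation and the |G| = |Gx| + |Gy| approximation.
GxMask = [-1 0 1; -2 0 2; -1 0 1];     % gradient in the x-direction (columns)
GyMask = [ 1 2 1;  0 0 0; -1 -2 -1];   % gradient in the y-direction (rows)
Gx = conv2(smoothed, GxMask, 'same');
Gy = conv2(smoothed, GyMask, 'same');
edgeStrength = abs(Gx) + abs(Gy);      % approximate gradient magnitude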
Step 3: Finding the edge direction is trivial once the gradients in the x and y directions are known. However, an error will be generated whenever Gx is equal to zero, so the code has to handle this case: whenever the gradient in the x-direction is equal to zero, the edge direction has to be either 90 degrees or 0 degrees, depending on the value of the gradient in the y-direction. If Gy has a value of zero, the edge direction will equal 0 degrees; otherwise the edge direction will equal 90 degrees. The formula for finding the edge direction is just:

theta = arctan(Gy / Gx)

Step 4: Once the edge direction is known, the next step is to relate the edge direction to a direction that can be traced in an image. So if the pixels of a 5x5 image are aligned as follows:

x x x x x
x x x x x
x x a x x
x x x x x
x x x x x
Then, by looking at pixel "a", it can be seen that there are only four possible directions when describing the surrounding pixels: 0 degrees (in the horizontal direction), 45 degrees (along the positive diagonal), 90 degrees (in the vertical direction), or 135 degrees (along the negative diagonal). So the edge orientation has to be resolved into one of these four directions depending on which direction it is closest to (e.g. if the orientation angle is found to be 3 degrees, make it zero degrees). Think of this as taking a semicircle and dividing it into five regions.
Therefore, any edge direction falling within the yellow range (0 to 22.5 and 157.5 to 180 degrees) is set to 0 degrees. Any edge direction falling in the green range (22.5 to 67.5 degrees) is set to 45 degrees. Any edge direction falling in the blue range (67.5 to 112.5 degrees) is set to 90 degrees. And finally, any edge direction falling within the red range (112.5 to 157.5 degrees) is set to 135 degrees.

Step 5: After the edge directions are known, non-maximum suppression now has to be applied. Non-maximum suppression is used to trace along the edge in the edge direction and suppress any pixel value (set it equal to 0) that is not considered to be an edge. This gives a thin line in the output image.

Step 6: Finally, hysteresis is used as a means of eliminating streaking. Streaking is the breaking up of an edge contour caused by the operator output fluctuating above and below the threshold. If a single threshold T1 is applied to an image, and an edge has an average strength equal to T1, then due to noise there will be instances where the edge dips below the threshold; equally, it will also extend above the threshold, making the edge look like a dashed line. To avoid this, hysteresis uses two thresholds, a high one and a low one. Any pixel in the image that has a value greater than the high threshold is presumed to be an edge pixel and is marked as such immediately. Then, any pixels that are connected to this edge pixel and that have a value greater than the low threshold are also selected as edge pixels. In other words, to follow an edge you need a gradient above the high threshold to start, but you do not stop until you hit a gradient below the low threshold.
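A short MATLAB sketch of the direction quantization described in Step 4 (it reuses Gx and Gy from the earlier sketch; note that atan2 avoids the division-by-zero issue mentioned in Step 3, and non-maximum suppression and hysteresis would then operate on this quantized direction map):

% Sketch: compute the edge direction from the gradients and fold it into the
% four traceable directions (0, 45, 90 and 135 degrees) described above.
theta = atan2(Gy, Gx) * 180 / pi;            % direction in degrees, (-180, 180]
theta(theta < 0) = theta(theta < 0) + 180;   % fold into [0, 180)
qdir = zeros(size(theta));                   % default 0 degrees (0-22.5 and 157.5-180)
qdir(theta >= 22.5  & theta < 67.5)  = 45;
qdir(theta >= 67.5  & theta < 112.5) = 90;
qdir(theta >= 112.5 & theta < 157.5) = 135;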
Fig 4.3: Canny edge image

After Canny edge detection we perform the gamma adjustment: values in the range 0-1 enhance the contrast of bright regions, while values greater than 1 enhance the contrast in dark regions.
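These two operations can be sketched in MATLAB as follows (this assumes the Image Processing Toolbox; the thresholds, sigma and gamma value are illustrative, not the project's tuned parameters):

% Sketch: Canny edge map plus a simple gamma adjustment of the eye image.
im = mat2gray(eyeimage);                     % scale intensities to [0, 1]
edgemap = edge(im, 'canny', [0.1 0.2], 2);   % [low high] thresholds, sigma = 2
gamma = 1.9;                                 % > 1: enhance contrast in dark regions
gammaim = im .^ (1 / gamma);                 % gamma adjustment under the convention above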
4.3.2 HOUGH TRANSFORM
The Hough transform is used to identify parametric shapes, specifically ellipses and circles. It is a feature extraction technique used in image analysis, computer vision, and digital image processing. The purpose of the technique is to find imperfect instances of objects within a certain class of shapes by a voting procedure. This voting procedure is carried out in a parameter space, from which object candidates are obtained as local maxima in a so-called accumulator space that is explicitly constructed by the algorithm for computing the Hough transform. The classical Hough transform was concerned with the identification of lines in an image, but it has since been extended to identify positions of arbitrary shapes, most commonly circles or ellipses. We use a scaling factor of 0.4 applied to the image and radius to speed up the Hough transform, and consider a range of iris and pupil radii provided in research from the Chinese university. The borders of the inner and outer circles are then marked with white lines, and the noisy regions containing the eyelids are extracted from the image.
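To make the voting idea concrete, here is a minimal circular Hough accumulator in plain MATLAB for a single candidate radius (the radius value and the number of angular samples are illustrative; the actual implementation searches over a whole radius range on the downscaled image):

% Sketch: circular Hough transform voting over the edge map for one radius.
[rows, cols] = size(edgemap);
r = 55;                                  % one candidate iris radius in pixels (illustrative)
accum = zeros(rows, cols);               % accumulator over candidate centre positions
[ey, ex] = find(edgemap);                % coordinates of the edge pixels
for k = 1:numel(ey)
    for t = linspace(0, 2*pi, 60)        % each edge pixel votes for centres at distance r
        xc = round(ex(k) - r * cos(t));
        yc = round(ey(k) - r * sin(t));
        if xc >= 1 && xc <= cols && yc >= 1 && yc <= rows
            accum(yc, xc) = accum(yc, xc) + 1;
        end
    end
end
[~, best] = max(accum(:));               % most-voted centre for this radius
[bestY, bestX] = ind2sub([rows cols], best);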
4.4 IMAGE NORMALIZATION
The image is then normalized, and the circle iris, circle pupil and noise patterns are written out as a pattern. Once the iris region is segmented, the next stage is to normalize this part, to enable generation of the iris code and its comparison. Since variations in the eye, such as the optical size of the iris, the position of the pupil in the iris, and the iris orientation, change from person to person, it is required to normalize the iris image so that the representation is common to all, with similar dimensions. The normalization process involves unwrapping the iris and converting it into its polar equivalent. It is done using Daugman's rubber sheet model. The center of the pupil is considered as the reference point, and a remapping formula is used to convert the points on the Cartesian scale to the polar scale. The modified form of the model is shown below.
Fig 4.4.1: Normalization process
where r1 = iris radius
The radial resolution was set to 100 and the angular resolution to 2400 pixels. For every pixel in the iris, an equivalent position is found on the polar axes. The normalized image was then interpolated to the size of the original image using the interp2 function. The parts of the normalized image which yield a NaN are divided by the sum to get a normalized value.
Fig 4.4.2: Unwrapping the iris
Fig 4.4.3: Normalized iris image
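The remapping itself can be sketched as follows (a hedged illustration of the rubber sheet idea; it reuses the illustrative circle parameters and meshgrid coordinates from Section 4.2 and the resolutions stated above):

% Sketch: rubber-sheet remapping of the iris annulus into a fixed-size
% rectangle. x, y, circleiris, circlepupil and imagewithnoise come from the
% earlier (illustrative) segmentation sketch.
radialRes  = 100;                              % as stated in the text
angularRes = 2400;
ang   = linspace(0, 2*pi, angularRes);
rnorm = linspace(0, 1, radialRes).';

% boundary points on the pupil and iris circles at each angle
xp = circlepupil(2) + circlepupil(3) * cos(ang);
yp = circlepupil(1) + circlepupil(3) * sin(ang);
xi = circleiris(2)  + circleiris(3)  * cos(ang);
yi = circleiris(1)  + circleiris(3)  * sin(ang);

% linear interpolation between the two boundaries: a radialRes x angularRes grid
xo = (1 - rnorm) * xp + rnorm * xi;
yo = (1 - rnorm) * yp + rnorm * yi;

polarIris = interp2(x, y, imagewithnoise, xo, yo);   % NaN wherever noise was marked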
4.5 ENCODING
The final process is the generation of the iris code. For this, the most discriminating feature in the iris pattern is extracted. Only the phase information in the pattern is used, because the phase angles are assigned regardless of the image contrast; amplitude information is not used since it depends on extraneous factors. Extraction of the phase information, according to Daugman, is done using 2D Gabor wavelets, which determine in which quadrant the resulting phasor lies. An easier way of using the Gabor filter is to break up the 2D normalized pattern into a number of 1D signals, which are then convolved with 1D Gabor wavelets.
Gabor filters are used to extract localized frequency information but, due to a few of their limitations, log-Gabor filters are more widely used for coding natural images. It was suggested by Field that log filters (which use Gaussian transfer functions viewed on a logarithmic scale) can code natural images better than Gabor filters (viewed on a linear scale). Statistics of natural images indicate the presence of high-frequency components; since ordinary Gabor filters under-represent high-frequency components, the log filters become a better choice.

Log-Gabor filters are constructed using the frequency response G(f) = exp( -(log(f/f0))^2 / (2 (log(sigma/f0))^2) ), where f0 is the centre frequency and sigma controls the filter bandwidth.

Since the attempt at implementing this function was unsuccessful, the gaborconvolve function written by Peter Kovesi was used. It outputs a cell array containing the complex-valued convolution results, of the same size as the input image. The parameters used for the function were:

nscale = 1
norient = 1
minwavelength = 3
mult = 2
sigmaOnf = 0.5

Using the output of gaborconvolve, the iris code is formed by assigning 2 elements for each pixel of the image. Each element contains a value 1 or 0 depending on the sign (+ or -) of the real and imaginary part respectively. Noise bits are assigned to those elements whose magnitude is very small, and these are combined with the noisy part obtained from normalization. The generated iris code is shown below.
Fig 4.5: Iris Code image
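The quadrant-sign quantization into bits can be illustrated with the following sketch (the complex signal here is only a stand-in for an actual log-Gabor filter response such as gaborconvolve would return, and the noise threshold is an assumption):

% Sketch: turn a complex filter response into the two-bits-per-element iris
% code, with a noise mask for elements of very small magnitude.
sig = polarIris(50, :);                     % one ring of the normalized iris
sig(isnan(sig)) = 0;                        % zero-fill noise for this illustration
E1  = fft(sig);                             % stand-in for a 1D log-Gabor response

template = false(1, 2 * numel(E1));
template(1:2:end) = real(E1) > 0;           % first bit: sign of the real part
template(2:2:end) = imag(E1) > 0;           % second bit: sign of the imaginary part

small = abs(E1) < 1e-3;                     % very small magnitude -> unreliable phase
noisemask = reshape([small; small], 1, []); % mark both bits of a noisy element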
4.6 CODE MATCHING
Patterns are then matched using the Hamming distance to find their similarity. The Hamming distance between two strings of equal length is the number of positions at which the corresponding symbols are different. Put another way, it measures the minimum number of substitutions required to change one string into the other, or the number of errors that transformed one string into the other.
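A minimal sketch of such a comparison, assuming each template comes with a noise mask as produced in the previous step (the function name is illustrative and would live in its own hammingdist.m file; it is not the project's actual matcher):

% Sketch: fractional Hamming distance between two binary iris templates,
% ignoring bit positions flagged as noise in either template.
function hd = hammingdist(tplA, maskA, tplB, maskB)
    usable = ~(maskA | maskB);                          % bits valid in both codes
    if any(usable)
        hd = sum(xor(tplA(usable), tplB(usable))) / sum(usable);
    else
        hd = NaN;                                       % nothing left to compare
    end
end

A distance below a chosen decision threshold would then indicate that the two codes were generated from the same iris.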
[Figure: process flow — Start → Acquisition of pre-processed image → Segmentation for iris boundary definition → Normalization → Feature encoding → Check if the pattern exists; if NO, the iris pattern is stored onto the data disk; if YES, matching is performed using the Hamming distance → Result → Stop.]
Fig 4: Process Flow Diagram
Chapter 5 Details of Hardware and Software
The software used to develop the system is as follows:

MATLAB
MATLAB® is a high-level technical computing language and interactive environment for algorithm development, data visualization, data analysis, and numeric computation. Using the MATLAB product, you can solve technical computing problems faster than with traditional programming languages, such as C, C++, and FORTRAN. You can use MATLAB in a wide range of applications, including signal and image processing, communications, control design, test and measurement, financial modeling and analysis, and computational biology. Add-on toolboxes (collections of special-purpose MATLAB functions, available separately) extend the MATLAB environment to solve particular classes of problems in these application areas. You can integrate your MATLAB code with other languages and applications, and distribute your MATLAB algorithms and applications.

Key features:
• High-level language for technical computing
• Development environment for managing code, files, and data
• Interactive tools for iterative exploration, design, and problem solving
• Mathematical functions for linear algebra, statistics, Fourier analysis, filtering, optimization, and numerical integration
• 2-D and 3-D graphics functions for visualizing data
• Tools for building custom graphical user interfaces
• Functions for integrating MATLAB based algorithms with external applications and languages, such as C, C++, FORTRAN, Java, COM, and Microsoft Excel

Adobe Flash
Adobe Flash (previously called Macromedia Flash) is a multimedia platform created by Macromedia and currently developed and distributed by Adobe Systems. Since its introduction in 1996, Flash has become a popular method for adding animation and interactivity to web pages; Flash is commonly used to create animation, advertisements, and various web page components, to integrate video into web pages, and, more recently, to develop rich applications.
Adobe Photoshop
Adobe Photoshop is a professional image editing software package that can be used by experts and novices alike. Comprehensive guidance on its tools is available on the web and in the application's help menu.

Microsoft Access
Microsoft Office Access, previously known as Microsoft Access, is a relational database management system from Microsoft that combines the relational Microsoft Jet Database Engine with a graphical user interface and software development tools. It is a member of the 2007 Microsoft Office system. Access stores data in its own format based on the Access Jet Database Engine. It can also import or link directly to data stored in other Access databases, Excel, SharePoint lists, text, XML, Outlook, HTML, dBase, Paradox, Lotus 1-2-3, or any ODBC-compliant data container including Microsoft SQL Server, Oracle, MySQL and PostgreSQL. Software developers and data architects can use it to develop application software, and non-programmer "power users" can use it to build simple applications. It supports some object-oriented techniques but falls short of being a fully object-oriented development tool.
Chapter 6 Design Details
6.1 DATA FLOW DIAGRAM
A data flow diagram (DFD) is a graphical representation of the "flow" of data through an information system. DFDs can also be used for the visualization of data processing.
[Figure: context-level diagram — the Admin exchanges login credentials, person details, code specifications and admin specifications with the central process 0.0, Iris Based Security System, receiving the corresponding acknowledgements, new user setup and the home page in return.]
LEVEL 1
[Figure: Level 1 — the Admin's login credentials flow to process 1.0, Login Process, which returns a login permission/acknowledgement; the Admin's specification flows to process 4.0, Admin Enroll Process, which returns an acknowledgement of the admin details.]
Fig 6.1: Data Flow Diagram of Iris based Security System
Chapter 7 Implementation Plan for next semester
Login Process
Identifier and name : Login Process
Initiator           : Admin user
Goal                : To log into the system
Pre-condition       : The system is running; the user is added as an admin in the system.
Post-condition      : The user is successfully able to log into the system.
When the admin wishes to add, edit or do some other processing, they need to log in; the user supplies their credentials to the system. The system verifies the supplied credentials and then allows the user to perform various admin level tasks, as follows:
• Adding new person details.
• Editing person details.
• Searching person information.
• Adding new organization details.
• Adding new institution details.
Add new person detail
Identifier and name : Add new person
Initiator           : Admin user
Goal                : The system should be able to add a new user.
Pre-condition       : The system is running.
Post-condition      : The admin is successfully able to add new personal information to the system.
In order to add a new person to the system, the admin needs to fill up forms such as Personal Information, Contact Detail, and Health Detail in the system. The system verifies whether a person with the same or a similar iris pattern exists. If it does, it gives the details of the persons having the same or similar iris pattern, and the admin can take the necessary steps.

Edit the person detail

Identifier and name : Edit the person detail
Initiator           : Admin user
Goal                : The system should allow the admin to edit the information about a person existing in the database.
Pre-condition       : The system should be running.
Post-condition      : The admin should be successfully able to edit person details.

In order to edit a person that exists in the system, the admin needs to refill forms such as Personal Information, Contact Detail, and Health Detail, except the iris pattern, in the system. The system verifies whether a person with the same or a similar iris pattern exists. If it does, it gives the details of the persons having the same or similar iris pattern, and the admin can take the necessary steps.

Search Process
Identifier and name : Search Process
Initiator           : Admin user
Goal                : The system should allow the admin to find a person in the system.
Pre-condition       : The system should be running.
Post-condition      : The user should be successfully able to find the person's information by providing the necessary credentials.
When the admin wishes to get information about a particular person, the admin needs to provide the necessary credentials, after which the admin is able to get all the information about that person.

Add institutional detail
Identifier and name : Add institutional detail
Initiator           : Admin user
Goal                : The system should allow the admin to add a new institution in the system.
Pre-condition       : The system should be running.
Post-condition      : The admin should be successfully able to add a new institution to the system.
When the institution list is updated with new names, the admin is able to update the same in the system so that they become available for use.

Add organization detail
Identifier and name : Add organization detail
Initiator           : Admin user
Goal                : The system should allow the admin to add a new organization in the system.
Pre-condition       : The system should be running.
Post-condition      : The admin should be successfully able to add a new organization to the system.
When the organization list is updated with new names, the admin is able to update the same in the system so that they become available for use.
GANTT CHART
• Gantt charts allow you to assess how long a project should take.
• Gantt charts lay out the order in which tasks need to be carried out.
• Gantt charts help manage the dependencies between tasks.
• A Gantt chart is a useful way to present the general flow of a project's tasks. Such charts are particularly useful for coordinating multiple activities.
[Gantt chart: work tasks 1-7 (Planning, Analysis, Design Phase, Coding, Debugging, Testing, Implementation) plotted against the months August-October 2011 and into 2012, with the completed cells marked P (Planning), A (Analysis) and D (Design).]
P- Planning
A- Analysis
D- Design
T- Testing
I- Implementation