Palm Print Authentication


A PROJECT REPORT ON Wavelet Based Palmprint Authentication System


Abstract
Palmprint-based personal verification has quickly entered the biometric family due to its ease of acquisition, high user acceptance and reliability. This report proposes a palmprint-based identification system that uses textural information, employing different wavelet transforms. The transforms employed have been analysed for their individual as well as combined performance at feature level. The wavelets used for the analysis are the Biorthogonal, Symlet and Discrete Meyer wavelets. The analysis of these wavelets is carried out on 500 images, acquired through an indigenously made image acquisition system.

ACKNOWLEDGEMENT

We are very grateful to our Head of the Department of Electronics and Communication Engineering, Mr. _______, ____________ College of Engineering & Technology, for having provided the opportunity to take up this project.

We would like to express our sincere gratitude and thanks to Mr. _________, Department of Computer Science & Engineering, _______ College of Engineering & Technology, for having allowed us to do this project.

Special thanks to Krest Technologies for permitting us to do this project work in their esteemed organization, and for guiding us through the entire project. We also extend our sincere thanks to our parents and friends for their moral support throughout the project work. Above all, we thank God Almighty for His manifold mercies in carrying out the project successfully.

INTRODUCTION

Biometrics-based personal identification is gaining wide acceptance in the networked society, replacing passwords and keys due to its reliability, uniqueness and the ever-increasing demand for security. The common modalities in use are fingerprint and face, but face authentication still struggles with pose and illumination invariance, whereas the fingerprint does not have a good psychological effect on the user because of its wide use in crime investigation. If any biometric modality is to succeed in the future, it should have traits like uniqueness, accuracy, richness, ease of acquisition, reliability and, above all, user acceptance. Palmprint-based personal identification is a newer biometric modality which is gaining wide acceptance and has all the necessary traits to make it a part of our daily life.

This project investigates the use of the palmprint for personal identification using a combination of different wavelets. The palmprint not only carries unique information comparable to the fingerprint but offers far more detail in terms of principal lines, wrinkles and creases. Moreover, it can easily be combined with hand-shape biometrics to form a highly accurate and reliable biometric personal identification system. Palmprint-based personal verification has become an increasingly active research topic over the years. The palmprint is rich in information and has been analysed for a variety of discriminating features; earlier work in which the wavelet transform was used for feature extraction motivated us to investigate the effectiveness of combining multiple wavelets for the textural analysis of the palmprint.

Personal identification is ubiquitous in our daily lives. For example, we often have to prove our identity to access a bank account, enter a protected site, draw cash from an ATM, log in to a computer, and so on. Conventionally, we identify ourselves and gain access by physically carrying passports, keys and access cards, or by remembering passwords, secret codes and personal identification numbers (PINs). Unfortunately, passports, keys and access cards can be lost, duplicated, stolen or forgotten, and passwords, secret codes and PINs can easily be forgotten, compromised, shared or observed.

Such loopholes and deficiencies of conventional personal identification techniques have caused major problems for all concerned. For example, hackers often disrupt computer networks, and credit card fraud is estimated at billions of dollars per year worldwide. The cost of forgotten passwords is high: they account for 40%-80% of all IT help desk calls, and resetting forgotten or compromised passwords costs as much as US$340 per user per year. Therefore, robust, reliable and foolproof personal identification solutions must be sought to address the deficiencies of conventional techniques, solutions that can verify that someone is physically the person he or she claims to be.

A biometric is a unique, measurable characteristic or trait of a human being that can be used to automatically recognize or verify identity. With biometric identification, individual verification is done through statistical analysis of a biological characteristic. This measurable characteristic can be physical, e.g. the eye, face, finger image and hand, or behavioural, e.g. signature and typing rhythm. Besides bolstering security, biometric systems also enhance user convenience by removing the need to devise and remember multiple complex passwords. No wonder large-scale systems have been deployed in such diverse applications as US-VISIT and entry to the Disney parks in Orlando. In spite of the fact that automatic biometric recognition systems based on fingerprints (called AFIS) have been used by law enforcement agencies worldwide for over 40 years, biometric recognition remains a very difficult pattern recognition problem. A biometric system has to contend with problems related to noisy images (failure to enroll), lack of distinctiveness (finite error rate), large intra-class variations (false reject) and spoof attacks (system security). Therefore, a proper system design is needed to verify a person quickly and automatically.

In this project, a multibiometric system is proposed for human verification, i.e. authenticating the identity of an individual. The proposed multibiometric system uses both hand and finger-stripe geometry for the verification process. An Artificial Neural Network (ANN) is applied for feature learning and verification.

DEVELOPMENT OF IMAGE ACQUISITION PLATFORM
There are two types of systems available for capturing the palmprint of an individual: scanners and pegged systems. Scanners are not hygienically safe, whereas pegged systems cause considerable inconvenience to the user; hence both suffer from low user acceptability. Ease of acquisition and hygienic safety are of paramount importance for any biometric modality. The proposed image acquisition setup satisfies these criteria by providing a contactless, peg-free system (Figure 2). It is an enclosed black box, simple in architecture, and employs a ring light source for uniform illumination. Two plates are kept inside the setup: the upper plate holds the camera and the light source, while the bottom plate is used to place the individual's hand. The distance between the two plates is kept constant to avoid any mismatch due to scale variation; after empirical testing it was fixed at 14 inches. Palmprint images have been collected from 50 individuals, 10 images each, making a total dataset of 500 images. The dataset contains images of males with an age distribution between 22 and 56 years, with a high percentage between 22 and 25 years. A low resolution of 72 dpi has been used, employing a Sony DSC-W35 Cyber-shot camera for palmprint image acquisition.


IMAGE REGISTRATION
Our image registration approach follows a previously proposed technique and is summarized as follows. The acquired colour (RGB) parameters of the palmprint are converted to HSI parameters. The hue value of skin is essentially the same across the dataset, so it was safely neglected along with the less discriminating saturation value. The palmprint has been analysed for its texture using the gray-level (intensity) values among the HSI components. Gray-level images retain all the useful discriminating information required for personal identification, with a considerable reduction in processing time. The colour images are converted to gray level using the following equation:

I = (0.2989 x R) + (0.5870 x G) + (0.1140 x B)        (1)
The gray-level images are normalized and thresholded to obtain a binary image. Hysteresis thresholding has been adopted because of its effectiveness under varying illumination conditions and undesirable background noise.
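A minimal MATLAB sketch of this pre-processing chain, assuming the Image Processing Toolbox; the file name and the threshold values are illustrative assumptions, not taken from the report:

rgb = im2double(imread('palm_sample.jpg'));                        % hypothetical input image
I   = 0.2989*rgb(:,:,1) + 0.5870*rgb(:,:,2) + 0.1140*rgb(:,:,3);   % gray level, Eq. (1)
I   = mat2gray(I);                                                 % normalize to [0, 1]
tLow = 0.15;  tHigh = 0.35;                                        % assumed hysteresis thresholds
weak   = I > tLow;                                                 % candidate palm pixels
strong = I > tHigh;                                                % definite palm pixels
bw = imreconstruct(strong, weak);                                  % keep weak pixels connected to strong ones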

Although user training ensures optimal and standard acquisition of the palmprint, a rotational alignment step is incorporated in our approach to cater for inadvertent rotations. The longest line in a palm passes through the middle finger, and any rotation is measured with reference to this line. The second-order moments help analyse the elongation, or eccentricity, of any binary shape. By finding the eigenvalues and eigenvectors, we can determine the eccentricity of the shape from the ratio of the eigenvalues, and the direction of elongation from the eigenvector corresponding to the largest eigenvalue. The parameters of the best-fitting ellipse have been extracted using second-order statistical moments of the binarized palmprint, corresponding to the longest line. Consequently, the offset (theta) between the normal axis and the longest line passing through the middle finger is calculated. Theta is calculated using the following equation,


where a, b and c are the second-order normalized moments of the pixels and are calculated using the following equations:
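As a hedged illustration (not code from the report), the offset theta can be estimated in MATLAB from the second-order moments of the binary palm mask bw obtained above; the variable names and the alternative regionprops route are assumptions:

[rows, cols] = find(bw);                           % coordinates of palm pixels
x = cols - mean(cols);  y = rows - mean(rows);     % centred coordinates
a = sum(x.^2);  b = 2*sum(x.*y);  c = sum(y.^2);   % second-order moments
theta = 0.5 * atan2(b, a - c);                     % orientation of the best-fitting ellipse (radians)
stats = regionprops(bw, 'Orientation', 'Eccentricity');          % equivalent toolbox route
alignedBW = imrotate(bw, -rad2deg(theta), 'bilinear', 'crop');   % undo the inadvertent rotation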

The palmprint is adversely affected by variations in illumination. The problem has been addressed by computing the normalized energy of the decomposition blocks so as to minimize feature variance due to non-uniform illumination. The energy computed from each block for the three wavelet types is concatenated to form a feature vector of length 27 for an individual palmprint. The normalized energy of the ROI image block B associated with a sub-band is defined as

where the normalized energy is given as

where 'n' equals the total number of blocks present in the image. Matching is performed by calculating the Euclidean distance between the input feature vector and the template feature vector. The Euclidean distance between two vectors is obtained from the squared differences between corresponding elements. For p(x, y) and q(s, t), the Euclidean distance between p and q is defined as:
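The equation images did not survive conversion; a hedged reconstruction consistent with the definitions above (the notation in the original may differ) is:

E_B = \sum_{u,v} \lvert C_B(u,v) \rvert^2, \qquad \hat{E}_B = \frac{E_B}{\sum_{j=1}^{n} E_j}, \qquad D(p,q) = \left[ (x-s)^2 + (y-t)^2 \right]^{1/2}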


A detailed analysis of the results revealed that rotation of the palmprint caused considerable blur in the vertically aligned images due to interpolation.

FEATURE EXTRACTION AND CLASSIFICATION
We obtained ten images of each individual, of which five were used for training and the rest for validation. The registered palmprint image has been analysed for its texture using different symmetrical wavelet families, namely biorthogonal 3.9, symlet 8 and discrete Meyer. The 256x256 palmprint region has been decomposed into three scales for each wavelet type.
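A sketch of this feature-extraction and matching stage, assuming the Wavelet Toolbox; the block handling is simplified to one energy per detail sub-band, and roi and templateFeat are assumed variables:

energyOf = @(C) sum(C(:).^2);                    % energy of a coefficient block
feat = [];
for wname = {'bior3.9', 'sym8', 'dmey'}          % the three wavelet families used
    [C, S] = wavedec2(roi, 3, wname{1});         % three-scale 2-D decomposition
    for lvl = 1:3
        [H, V, D] = detcoef2('all', C, S, lvl);  % detail sub-bands at this scale
        feat = [feat, energyOf(H), energyOf(V), energyOf(D)];   % 3 wavelets x 9 = 27 energies
    end
end
feat = feat / sum(feat);                         % normalized energies (illumination robustness)
d = sqrt(sum((feat - templateFeat).^2));         % Euclidean distance to the stored template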

An intelligent solution to the interpolation-blur problem noted above is devised by rotating the axes of the region of interest instead of the palm image itself.


A reverse transformation is computed from the affine transform, as follows:

Using the above equations, a rotation-invariant region of interest is cropped from the palmprint. The approximation (interpolation) error still exists, but the results show improved performance and accuracy. The selected wavelets have been analysed for their individual performance by formulating similar energy-based feature vectors of length 27 (nine sub-band energies per wavelet from the three-level decomposition).
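A hedged sketch of cropping such a rotation-invariant region by rotating the ROI axes rather than the palm image, reusing I, theta, rows and cols from the earlier sketches; the 256x256 ROI size follows the text, while centring the ROI on the palm centroid is an assumption:

roiSize = 256;
cx = mean(cols);  cy = mean(rows);                       % palm centroid from the binary mask
[u, v] = meshgrid(-roiSize/2 : roiSize/2 - 1);           % ROI grid in the rotated frame
R = [cos(theta) -sin(theta); sin(theta) cos(theta)];     % reverse (inverse) rotation
xy = R * [u(:)'; v(:)'];                                 % map ROI coordinates back to the image frame
roi = interp2(I, cx + xy(1,:), cy + xy(2,:), 'linear');  % interpolate only the ROI pixels
roi = reshape(roi, roiSize, roiSize);                    % rotation-invariant region of interest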

NORMALIZED ENERGY
We denote the additional energy for performing a certain computation-intensive job in each system as the energy unit (EU). The computation-intensive job we chose was the jpeg_fdct_islow routine in the file jfdctint.c from the Independent JPEG Group's implementation of JPEG, which comes with the MiBench benchmarks. It performs a forward discrete cosine transform (DCT) on an eight-by-eight block of integers. Three different sets of inputs are randomly chosen from the large image file included in MiBench. The job is memory-intensive as well, since input data are read from memory each time before a DCT is performed. To obtain the additional energy for performing one such DCT, we repeat the DCT a total of … times over the set of chosen input data. This is assumed to be the target, which takes our systems about four seconds to complete. The context in this case simply involves making the system idle. The energy of every benchmark is measured with a companion measurement of the EU. In most cases, we report experimental results normalized to the EU thus obtained. This accounts for differences in the hardware and OS. The EU for the three systems we studied is between … and … Joules.

The benefits of using the EU are as follows. Experiments were conducted on different days for different benchmarks. The absolute energy figure for an event varied slightly from day to day. However, the energy remained quite constant if normalized to the corresponding EU (within 1%). Moreover, since the EU depends only on the CPU and memory, the comparison of non-LCD energy consumption of different systems is fairer after normalization.

CONCEPT OF PALM, JUST LIKE FINGER

Palm identification, just like fingerprint identification, is based on the aggregate of information presented in a friction ridge impression. This information includes the flow of the friction ridges, the presence or absence of features along the individual friction ridge paths and their sequences, and the intricate detail of a single ridge. To understand this recognition concept, one must first understand the physiology of the ridges and valleys of a fingerprint or palm. When recorded, a fingerprint or palmprint appears as a series of dark lines representing the high, peaking portion of the friction ridged skin, while the valleys between these ridges appear as white space, the low, shallow portion of the friction ridged skin. This is shown in the figure below.


Figure: Fingerprint ridges (dark lines) vs. fingerprint valleys (white lines).

Palm recognition technology exploits some of these palm features. Friction ridges do not always flow continuously throughout a pattern and often result in specific characteristics such as ending ridges, dividing ridges and dots. A palm recognition system is designed to interpret the flow of the overall ridges to assign a classification and then extract the minutiae detail, a subset of the total amount of information available, yet enough information to effectively search a large repository of palmprints. Minutiae are limited to the location, direction and orientation of the ridge endings and bifurcations (splits) along a ridge path. The images in Figure 2 present a pictorial representation of the regions of the palm, two types of minutiae, and examples of other detailed characteristics used during the automatic classification and minutiae extraction processes.

Palm showing two types of minutiae and characteristics

TEXTURE ANALYSIS
In many machine vision and image processing algorithms, simplifying assumptions are made about the uniformity of intensities in local image regions. However, images of real objects often do not exhibit regions of uniform intensity. For example, the image of a wooden surface is not uniform but contains variations of intensity which form certain repeated patterns called visual texture. The patterns can be the result of physical surface properties such as roughness or oriented strands, which often have a tactile quality, or they could be the result of reflectance differences such as the color on a surface. We recognize texture when we see it, but it is very difficult to define. This difficulty is demonstrated by the number of different texture definitions attempted by vision researchers. Coggins has compiled a catalogue of texture definitions in the computer vision literature, and we give some examples here.
• "We may regard texture as what constitutes a macroscopic region. Its structure is simply attributed to the repetitive patterns in which elements or primitives are arranged according to a placement rule."
• "A region in an image has a constant texture if a set of local statistics or other local properties of the picture function are constant, slowly varying, or approximately periodic."
• "The image texture we consider is nonfigurative and cellular... An image texture is described by the number and types of its (tonal) primitives and the spatial organization or layout of its (tonal) primitives... A fundamental characteristic of texture: it cannot be analyzed without a frame of reference of tonal primitive being stated or implied. For any smooth gray-tone surface, there exists a scale such that when the surface is examined, it has no texture. Then as resolution increases, it takes on a fine texture and then a coarse texture."
• "Texture is defined for our purposes as an attribute of a field having no components that appear enumerable. The phase relations between the components are thus not apparent. Nor should the field contain an obvious gradient. The intent of this definition is to direct the attention of the observer to the global properties of the display, i.e., its overall 'coarseness', 'bumpiness', or 'fineness'. Physically, non-enumerable (aperiodic) patterns are generated by stochastic as opposed to deterministic processes. Perceptually, however, the set of all patterns without obvious enumerable components will include many deterministic (and even periodic) textures."
• "Texture is an apparently paradoxical notion. On the one hand, it is commonly used in the early processing of visual information, especially for practical classification purposes. On the other hand, no one has succeeded in producing a commonly accepted definition of texture. The resolution of this paradox, we feel, will depend on a richer, more developed model for early visual information processing, a central aspect of which will be representational systems at many different levels of abstraction. These levels will most probably include actual intensities at the bottom and will progress through edge and orientation descriptors to surface, and perhaps volumetric descriptors. Given these multi-level structures, it seems clear that they should be included in the definition of, and in the computation of, texture descriptors."

This collection of definitions demonstrates that the "definition" of texture is formulated by different people depending on the particular application, and that there is no generally agreed-upon definition. Some are perceptually motivated, and others are driven completely by the application in which the definition will be used. Image texture, defined as a function of the spatial variation in pixel intensities (gray values), is useful in a variety of applications and has been a subject of intense study by many researchers. One immediate application of image texture is the recognition of image regions using texture properties. Texture is the most important visual cue in identifying such homogeneous regions; this is called texture classification. The goal of texture classification is to produce a classification map of the input image in which each uniform textured region is identified with the texture class it belongs to. We could also find the texture boundaries even if we could not classify these textured surfaces; this is the second type of problem that texture analysis research attempts to solve, texture segmentation. Texture synthesis is often used for image compression applications. It is also important in computer graphics, where the goal is to render object surfaces that look as realistic as possible. The shape-from-texture problem is one instance of a general class of vision problems known as "shape from X", first formally pointed out in the perception literature by Gibson. The goal is to extract three-dimensional shape information from various cues such as shading, stereo and texture. The texture features (texture elements) are distorted by the imaging process and the perspective projection, which provides information about surface orientation and shape.


WAVELETS

FOURIER ANALYSIS
Signal analysts already have at their disposal an impressive arsenal of tools. Perhaps the best known of these is Fourier analysis, which breaks down a signal into constituent sinusoids of different frequencies. Another way to think of Fourier analysis is as a mathematical technique for transforming our view of the signal from time-based to frequency-based.


For many signals, Fourier analysis is extremely useful because the signal's frequency content is of great importance. So why do we need other techniques, like wavelet analysis? Fourier analysis has a serious drawback: in transforming to the frequency domain, time information is lost. When looking at a Fourier transform of a signal, it is impossible to tell when a particular event took place. If the signal properties do not change much over time (that is, if it is what is called a stationary signal) this drawback is not very important. However, most interesting signals contain numerous non-stationary or transitory characteristics: drift, trends, abrupt changes, and beginnings and ends of events. These characteristics are often the most important part of the signal, and Fourier analysis is not suited to detecting them.

SHORT-TIME FOURIER ANALYSIS
In an effort to correct this deficiency, Dennis Gabor (1946) adapted the Fourier transform to analyse only a small section of the signal at a time, a technique called windowing the signal. Gabor's adaptation, called the Short-Time Fourier Transform (STFT), maps a signal into a two-dimensional function of time and frequency.


The STFT represents a sort of compromise between the time- and frequency-based views of a signal. It provides some information about both when and at what frequencies a signal event occurs. However, you can only obtain this information with limited precision, and that precision is determined by the size of the window. While the STFT compromise between time and frequency information can be useful, the drawback is that once you choose a particular size for the time window, that window is the same for all frequencies. Many signals require a more flexible approach, one where we can vary the window size to determine more accurately either time or frequency.

WAVELET ANALYSIS
Wavelet analysis represents the next logical step: a windowing technique with variable-sized regions. Wavelet analysis allows the use of long time intervals where we want more precise low-frequency information, and shorter regions where we want high-frequency information.

Here is what this looks like in contrast with the time-based, frequency-based, and STFT views of a signal:


You may have noticed that wavelet analysis does not use a time-frequency region, but rather a time-scale region. For more information about the concept of scale and the link between scale and frequency, see "How to Connect Scale to Frequency?"

What Can Wavelet Analysis Do?
One major advantage afforded by wavelets is the ability to perform local analysis, that is, to analyse a localized area of a larger signal. Consider a sinusoidal signal with a small discontinuity, one so tiny as to be barely visible. Such a signal could easily be generated in the real world, perhaps by a power fluctuation or a noisy switch.

A plot of the Fourier coefficients (as provided by the fft command) of this signal shows nothing particularly interesting: a flat spectrum with two peaks representing a single frequency. However, a plot of the wavelet coefficients clearly shows the exact location in time of the discontinuity.
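A small MATLAB sketch of this example, with assumed signal parameters (the Wavelet Toolbox is needed for dwt):

N = 1024;  t = (0:N-1)/N;
s = sin(2*pi*8*t);
s(500:end) = s(500:end) + 1e-3;              % tiny jump, barely visible in the waveform
F = abs(fft(s));                             % spectrum shows only the dominant frequency
[cA, cD] = dwt(s, 'db4');                    % one-stage discrete wavelet transform
subplot(3,1,1); plot(s);        title('Signal with a small discontinuity');
subplot(3,1,2); plot(F(1:N/2)); title('Magnitude of the Fourier coefficients');
subplot(3,1,3); plot(abs(cD));  title('Level-1 detail coefficients: spike at the jump');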


Wavelet analysis is capable of revealing aspects of data that other signal analysis techniques miss: trends, breakdown points, discontinuities in higher derivatives, and self-similarity. Furthermore, because it affords a different view of data than that presented by traditional techniques, wavelet analysis can often compress or de-noise a signal without appreciable degradation. Indeed, in their brief history within the signal processing field, wavelets have already proven themselves to be an indispensable addition to the analyst's collection of tools and continue to enjoy a burgeoning popularity today.

WHAT IS WAVELET ANALYSIS?
Now that we know some situations in which wavelet analysis is useful, it is worthwhile asking "What is wavelet analysis?" and, even more fundamentally, "What is a wavelet?" A wavelet is a waveform of effectively limited duration that has an average value of zero. Compare wavelets with sine waves, which are the basis of Fourier analysis.


Sinusoids do not have limited duration; they extend from minus to plus infinity. And where sinusoids are smooth and predictable, wavelets tend to be irregular and asymmetric.

Figure: Fourier analysis consists of breaking up a signal into sine waves of various frequencies; similarly, wavelet analysis is the breaking up of a signal into shifted and scaled versions of the original (or mother) wavelet.

Just looking at pictures of wavelets and sine waves, you can see intuitively that signals with sharp changes might be better analysed with an irregular wavelet than with a smooth sinusoid, just as some foods are better handled with a fork than a spoon. It also makes sense that local features can be described better with wavelets that have local extent.

NUMBER OF DIMENSIONS
Thus far, we have discussed only one-dimensional data, which encompasses most ordinary signals. However, wavelet analysis can be applied to two-dimensional data (images) and, in principle, to higher-dimensional data. This toolbox uses only one- and two-dimensional analysis techniques.

THE CONTINUOUS WAVELET TRANSFORM
Mathematically, the process of Fourier analysis is represented by the Fourier transform:
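The formula itself appeared as an image in the original; the standard form it describes is:

F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-j\omega t}\, dt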


which is the sum over all time of the signal f(t) multiplied by a complex exponential. (Recall that a complex exponential can be broken down into real and imaginary sinusoidal components.) The results of the transform are the Fourier coefficients F(w), which, when multiplied by a sinusoid of frequency w, yield the constituent sinusoidal components of the original signal. Graphically, the process looks like this:

Similarly, the continuous wavelet transform (CWT) is defined as the sum over all time of the signal multiplied by scaled, shifted versions of the wavelet function ψ:
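Again the equation was lost as an image; the standard form it describes is:

C(\mathrm{scale}, \mathrm{position}) = \int_{-\infty}^{\infty} f(t)\, \psi(\mathrm{scale}, \mathrm{position}, t)\, dt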

The result of the CWT is a set of many wavelet coefficients C, which are a function of scale and position. Multiplying each coefficient by the appropriately scaled and shifted wavelet yields the constituent wavelets of the original signal:


SCALING
We have already alluded to the fact that wavelet analysis produces a time-scale view of a signal, and now we are talking about scaling and shifting wavelets. What exactly do we mean by scale in this context? Scaling a wavelet simply means stretching (or compressing) it. To go beyond colloquial descriptions such as "stretching", we introduce the scale factor, often denoted by the letter a. If we are talking about sinusoids, for example, the effect of the scale factor is very easy to see:

The scale factor works exactly the same way with wavelets: the smaller the scale factor, the more "compressed" the wavelet.

It is clear from the diagrams that, for a sinusoid sin(wt), the scale factor 'a' is related (inversely) to the radian frequency 'w'. Similarly, with wavelet analysis, the scale is related to the frequency of the signal.

SHIFTING
Shifting a wavelet simply means delaying (or hastening) its onset.

THE DISCRETE WAVELET TRANSFORM
Calculating wavelet coefficients at every possible scale is a fair amount of work, and it generates an awful lot of data. What if we choose only a subset of scales and positions at which to make our calculations? It turns out, rather remarkably, that if we choose scales and positions based on powers of two (so-called dyadic scales and positions), then our analysis will be much more efficient and just as accurate. We obtain such an analysis from the discrete wavelet transform (DWT). An efficient way to implement this scheme using filters was developed in 1988 by Mallat. The Mallat algorithm is in fact a classical scheme known in the signal processing community as a two-channel subband coder. This very practical filtering algorithm yields a fast wavelet transform: a box into which a signal passes, and out of which wavelet coefficients quickly emerge. Let's examine this in more depth.

ONE-STAGE FILTERING: APPROXIMATIONS AND DETAILS
For many signals, the low-frequency content is the most important part. It is what gives the signal its identity. The high-frequency content, on the other hand, imparts flavour or nuance. Consider the human voice. If you remove the high-frequency components, the voice sounds different, but you can still tell what is being said. However, if you remove enough of the low-frequency components, you hear gibberish. In wavelet analysis, we often speak of approximations and details. The approximations are the high-scale, low-frequency components of the signal. The details are the low-scale, high-frequency components. The filtering process, at its most basic level, looks like this:


The original signal passes through two complementary filters and emerges as two signals. Unfortunately, if we actually perform this operation on a real digital signal, we wind up with twice as much data as we started with. Suppose, for instance, that the original signal consists of 1000 samples of data. Then the resulting signals will each have 1000 samples, for a total of 2000. These signals A and D are interesting, but we get 2000 values instead of the 1000 we had. There exists a more subtle way to perform the decomposition using wavelets. By looking carefully at the computation, we may keep only one point out of two in each of the two 1000-length samples to get the complete information. This is the notion of downsampling. We produce two sequences called cA and cD.

The process on the right, which includes downsampling, produces the DWT coefficients. To gain a better appreciation of this process, let's perform a one-stage discrete wavelet transform of a signal. Our signal will be a pure sinusoid with high-frequency noise added to it. Here is our schematic diagram with real signals inserted into it:


Notice that the detail coefficients cD are small and consist mainly of high-frequency noise, while the approximation coefficients cA contain much less noise than does the original signal. You may observe that the actual lengths of the detail and approximation coefficient vectors are slightly more than half the length of the original signal. This has to do with the filtering process, which is implemented by convolving the signal with a filter. The convolution "smears" the signal, introducing several extra samples into the result.
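A usage sketch of this one-stage decomposition (the signal parameters are assumptions):

t = 0:0.001:1;
s = sin(2*pi*20*t) + 0.3*randn(size(t));   % pure sinusoid plus high-frequency noise
[cA, cD] = dwt(s, 'db2');                  % filtering and downsampling in one call
[length(s) length(cA) length(cD)]          % cA and cD are slightly more than half as long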

MULTIPLE-LEVEL DECOMPOSITION
The decomposition process can be iterated, with successive approximations being decomposed in turn, so that one signal is broken down into many lower-resolution components. This is called the wavelet decomposition tree.


Looking at a signal's wavelet decomposition tree can yield valuable information.
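For example, a three-level decomposition of the signal s from the previous sketch can be computed and unpacked with standard Wavelet Toolbox commands:

[C, L] = wavedec(s, 3, 'db2');             % C holds all coefficients, L the bookkeeping lengths
cA3 = appcoef(C, L, 'db2', 3);             % level-3 approximation coefficients
[cD1, cD2, cD3] = detcoef(C, L, [1 2 3]);  % detail coefficients at levels 1 to 3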

NUMBER OF LEVELS
Since the analysis process is iterative, in theory it can be continued indefinitely. In reality, the decomposition can proceed only until the individual details consist of a single sample or pixel. In practice, you will select a suitable number of levels based on the nature of the signal, or on a suitable criterion such as entropy.


WAVELET RECONSTRUCTION
We have learned how the discrete wavelet transform can be used to analyse, or decompose, signals and images. This process is called decomposition or analysis. The other half of the story is how those components can be assembled back into the original signal without loss of information. This process is called reconstruction, or synthesis. The mathematical manipulation that effects synthesis is called the inverse discrete wavelet transform (IDWT). To synthesize a signal in the Wavelet Toolbox, we reconstruct it from the wavelet coefficients:

Where wavelet analysis involves filtering and downsampling, the wavelet reconstruction process consists of upsampling and filtering. Upsampling is the process of lengthening a signal component by inserting zeros between samples:

The Wavelet Toolbox includes commands such as idwt and waverec that perform single-level and multilevel reconstruction, respectively, on the components of one-dimensional signals. These commands have two-dimensional analogues, idwt2 and waverec2.
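A brief usage sketch, continuing the decomposition sketches above (s, C, L, cA and cD are reused from them):

sRec = waverec(C, L, 'db2');               % multilevel reconstruction from [C, L]
max(abs(s - sRec))                         % essentially zero: perfect reconstruction
s1 = idwt(cA, cD, 'db2');                  % single-level inverse DWT of the earlier pair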

RECONSTRUCTION FILTERS
The filtering part of the reconstruction process also bears some discussion, because it is the choice of filters that is crucial in achieving perfect reconstruction of the original signal. The downsampling of the signal components performed during the decomposition phase introduces a distortion called aliasing. It turns out that by carefully choosing filters for the decomposition and reconstruction phases that are closely related (but not identical), we can "cancel out" the effects of aliasing. The low- and high-pass decomposition filters (L and H), together with their associated reconstruction filters (L' and H'), form a system of what are called quadrature mirror filters:

RECONSTRUCTING APPROXIMATIONS AND DETAILS
We have seen that it is possible to reconstruct our original signal from the coefficients of the approximations and details.


It is also possible to reconstruct the approximations and details themselves from their coefficient vectors. As an example, let's consider how we would reconstruct the first-level approximation A1 from the coefficient vector cA1. We pass the coefficient vector cA1 through the same process we used to reconstruct the original signal. However, instead of combining it with the level-one detail cD1, we feed in a vector of zeros in place of the detail coefficients vector:

The process yields a reconstructed approximation A1, which has the same length as the original signal and which is a real approximation of it. Similarly, we can reconstruct the first-level detail D1, using the analogous process:

The reconstructed details and approximations are true constituents of the original signal. In fact, we find when we combine them that:

A1 + D1 = S

Note that the coefficient vectors cA1 and cD1 (because they were produced by downsampling and are only half the length of the original signal) cannot directly be combined to reproduce the signal. It is necessary to reconstruct the approximations and details before combining them. Extending this technique to the components of a multilevel analysis, we find that similar relationships hold for all the reconstructed signal constituents. That is, there are several ways to reassemble the original signal:
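A sketch of this reconstruction, feeding zeros in place of the unused coefficient vector as described above (s is reused from the earlier sketches):

[cA1, cD1] = dwt(s, 'db2');
A1 = idwt(cA1, zeros(size(cD1)), 'db2', length(s));   % zeros in place of the detail coefficients
D1 = idwt(zeros(size(cA1)), cD1, 'db2', length(s));   % zeros in place of the approximation
max(abs(A1 + D1 - s))                                 % ~0: A1 + D1 = S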

Relationship of Filters to Wavelet Shapes
In the section "Reconstruction Filters", we spoke of the importance of choosing the right filters. In fact, the choice of filters not only determines whether perfect reconstruction is possible, it also determines the shape of the wavelet we use to perform the analysis. To construct a wavelet of some practical utility, you seldom start by drawing a waveform. Instead, it usually makes more sense to design the appropriate quadrature mirror filters, and then use them to create the waveform. Let's see how this is done by focusing on an example. Consider the low-pass reconstruction filter (L') for the db2 wavelet.


The filter coefficients can be obtained from the dbaux command:

Lprime = dbaux(2)
Lprime = 0.3415    0.5915    0.1585   -0.0915

If we reverse the order of this vector (see wrev), and then multiply every even sample by -1, we obtain the high-pass filter H':

Hprime = -0.0915   -0.1585    0.5915   -0.3415

Next, upsample Hprime by two (see dyadup), inserting zeros in alternate positions:

HU = -0.0915   0   -0.1585   0    0.5915   0   -0.3415   0

Finally, convolve the upsampled vector with the original low-pass filter:

H2 = conv(HU, Lprime);
plot(H2)

If we iterate this process several more times, repeatedly upsampling and convolving the resultant vector with the four-element filter vector Lprime, a pattern begins to emerge:


The curve begins to look progressively more like the db2 wavelet. This means that the wavelet's shape is determined entirely by the coefficients of the reconstruction filters. This relationship has profound implications. It means that you cannot choose just any shape, call it a wavelet, and perform an analysis. At least, you cannot choose an arbitrary wavelet waveform if you want to be able to reconstruct the original signal accurately. You are compelled to choose a shape determined by quadrature mirror decomposition filters.
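The iteration described above can be written as a short loop; this is a sketch, and the number of iterations is an arbitrary choice:

Lprime = dbaux(2);
w = fliplr(Lprime) .* [1 -1 1 -1];      % reversed low-pass filter with alternating signs (H')
for k = 1:5                             % repeat the upsample-and-convolve cascade
    up = zeros(1, 2*length(w));
    up(1:2:end) = w;                    % insert zeros in alternate positions
    w = conv(up, Lprime);               % convolve with the original low-pass filter
end
plot(w); title('Progressively closer approximation of the db2 wavelet')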

The Scaling Function
We have seen the interrelation of wavelets and quadrature mirror filters. The wavelet function is determined by the high-pass filter, which also produces the details of the wavelet decomposition. There is an additional function associated with some, but not all, wavelets: the so-called scaling function. The scaling function is very similar to the wavelet function. It is determined by the low-pass quadrature mirror filters, and thus is associated with the approximations of the wavelet decomposition. In the same way that iteratively upsampling and convolving the high-pass filter produces a shape approximating the wavelet function, iteratively upsampling and convolving the low-pass filter produces a shape approximating the scaling function.

Multi-step Decomposition and Reconstruction
A multi-step analysis-synthesis process can be represented as:

This process involves two aspects: breaking up a signal to obtain the wavelet coefficients, and reassembling the signal from the coefficients. We have already discussed decomposition and reconstruction at some length. Of course, there is no point breaking up a signal merely to have the satisfaction of immediately reconstructing it. We may modify the wavelet coefficients before performing the reconstruction step. We perform wavelet analysis because the coefficients thus obtained have many known uses, de-noising and compression being foremost among them. But wavelet analysis is still a new and emerging field; no doubt many uncharted uses of the wavelet coefficients lie in wait. The Wavelet Toolbox can be a means of exploring possible uses and hitherto unknown applications of wavelet analysis. Explore the toolbox functions and see what you discover.


DIGITAL IMAGE PROCESSING

BACKGROUND
Digital image processing is an area characterized by the need for extensive experimental work to establish the viability of proposed solutions to a given problem. An important characteristic underlying the design of image processing systems is the significant level of testing and experimentation that is normally required before arriving at an acceptable solution. This implies that the ability to formulate approaches and quickly prototype candidate solutions generally plays a major role in reducing the cost and time required to arrive at a viable system implementation.

What is DIP?

An image may be defined as a two-dimensional function f(x, y), where x and y are spatial coordinates and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y and the amplitude values of f are all finite discrete quantities, we call the image a digital image. The field of DIP refers to processing digital images by means of a digital computer. A digital image is composed of a finite number of elements, each of which has a particular location and value; these elements are called pixels. Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves. They can also operate on images generated by sources that humans are not accustomed to associating with images.

There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. This is a limiting and somewhat artificial boundary. The area of image analysis (image understanding) lies between image processing and computer vision.

There are no clear-cut boundaries in the continuum from image processing at one end to complete vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid- and high-level processes. A low-level process involves primitive operations such as image pre-processing to reduce noise, contrast enhancement and image sharpening; it is characterized by the fact that both its inputs and outputs are images. A mid-level process on images involves tasks such as segmentation, description of objects to reduce them to a form suitable for computer processing, and classification of individual objects; it is characterized by the fact that its inputs are generally images but its outputs are attributes extracted from those images. Finally, higher-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with human vision. Digital image processing, as already defined, is used successfully in a broad range of areas of exceptional social and economic value.

What is an image?
An image is represented as a two-dimensional function f(x, y), where x and y are spatial coordinates and the amplitude of f at any pair of coordinates (x, y) is called the intensity of the image at that point.

Gray scale image
A grayscale image is a function I(x, y) of the two spatial coordinates of the image plane; I(x, y) is the intensity of the image at the point (x, y). Assuming the image is bounded by a rectangle [0, a] x [0, b], we have I: [0, a] x [0, b] → [0, ∞).

Color image
A color image can be represented by three functions: R(x, y) for red, G(x, y) for green and B(x, y) for blue. An image may be continuous with respect to the x and y coordinates and also in amplitude. Converting such an image to digital form requires that the coordinates as well as the amplitude be digitized. Digitizing the coordinate values is called sampling; digitizing the amplitude values is called quantization.

Coordinate convention
The result of sampling and quantization is a matrix of real numbers. We use two principal ways to represent digital images. Assume that an image f(x, y) is sampled so that the resulting image has M rows and N columns. We say that the image is of size M x N. The values of the coordinates (x, y) are discrete quantities; for notational clarity and convenience, we use integer values for these discrete coordinates. In many image processing books, the image origin is defined to be at (x, y) = (0, 0), and the next coordinate values along the first row of the image are (x, y) = (0, 1). It is important to keep in mind that the notation (0, 1) is used to signify the second sample along the first row; it does not mean that these are the actual values of the physical coordinates when the image was sampled. The following figure shows the coordinate convention. Note that x ranges from 0 to M-1 and y from 0 to N-1 in integer increments. The coordinate convention used in the toolbox to denote arrays differs from the preceding paragraph in two minor ways. First, instead of using (x, y), the toolbox uses the notation (r, c) to indicate rows and columns. Note, however, that the order of coordinates is the same as in the previous paragraph, in the sense that the first element of a coordinate tuple, (a, b), refers to a row and the second to a column. The other difference is that the origin of the coordinate system is at (r, c) = (1, 1); thus, r ranges from 1 to M and c from 1 to N in integer increments. IPT documentation refers to these as pixel coordinates. Less frequently, the toolbox also employs another coordinate convention, called spatial coordinates, which uses x to refer to columns and y to refer to rows. This is the opposite of our use of the variables x and y.


Image as Matrices
The preceding discussion leads to the following representation for a digitized image function:

f(x, y) = [ f(0,0)     f(0,1)     ...  f(0,N-1)
            f(1,0)     f(1,1)     ...  f(1,N-1)
            ...
            f(M-1,0)   f(M-1,1)   ...  f(M-1,N-1) ]

The right side of this equation is a digital image by definition. Each element of this array is called an image element, picture element, pixel or pel. The terms image and pixel are used throughout the rest of our discussion to denote a digital image and its elements. A digital image can be represented naturally as a MATLAB matrix:

f = [ f(1,1)   f(1,2)   ...  f(1,N)
      f(2,1)   f(2,2)   ...  f(2,N)
      ...
      f(M,1)   f(M,2)   ...  f(M,N) ]

where f(1,1) = f(0,0) (note the use of a monospace font to denote MATLAB quantities). Clearly the two representations are identical, except for the shift in origin. The notation f(p, q) denotes the element located in row p and column q; for example, f(6,2) is the element in the sixth row and second column of the matrix f. Typically we use the letters M and N, respectively, to denote the number of rows and columns in a matrix. A 1xN matrix is called a row vector, whereas an Mx1 matrix is called a column vector. A 1x1 matrix is a scalar.

Matrices in MATLAB are stored in variables with names such as A, a, RGB, real_array and so on. Variables must begin with a letter and contain only letters, numerals and underscores. As noted in the previous paragraph, all MATLAB quantities are written using monospace characters. We use conventional Roman italic notation, such as f(x, y), for mathematical expressions.

Reading Images
Images are read into the MATLAB environment using the function imread, whose syntax is

imread('filename')

Format name   Description                          Recognized extensions
TIFF          Tagged Image File Format             .tif, .tiff
JPEG          Joint Photographic Experts Group     .jpg, .jpeg
GIF           Graphics Interchange Format          .gif
BMP           Windows Bitmap                       .bmp
PNG           Portable Network Graphics            .png
XWD           X Window Dump                        .xwd

Here filename is a string containing the complete name of the image file (including any applicable extension). For example, the command line

>> f = imread('chestxray.jpg');

reads the JPEG image chestxray into image array f. Note the use of single quotes (') to delimit the string filename. The semicolon at the end of a command line is used by MATLAB to suppress output; if a semicolon is not included, MATLAB displays the results of the operation(s) specified in that line. The prompt symbol (>>) designates the beginning of a command line, as it appears in the MATLAB Command Window.

Data Classes
Although we work with integer coordinates, the values of the pixels themselves are not restricted to be integers in MATLAB. The table below lists the data classes supported by MATLAB and IPT for representing pixel values. The first eight entries are referred to as numeric data classes, the ninth is the char class, and the last is the logical data class. All numeric computations in MATLAB are done with double quantities, so double is a frequent data class in image processing applications. Class uint8 is also encountered frequently, especially when reading data from storage devices, as 8-bit images are the most common representation found in practice. These two data classes, class logical and, to a lesser degree, class uint16 constitute the primary data classes on which we focus; many IPT functions, however, support all the data classes listed. Class double requires 8 bytes to represent a number, uint8 and int8 require 1 byte each, uint16 and int16 require 2 bytes, and uint32, int32 and single require 4 bytes each.

Name      Description
double    Double-precision floating-point numbers (8 bytes per element).
uint8     Unsigned 8-bit integers in the range [0, 255] (1 byte per element).
uint16    Unsigned 16-bit integers in the range [0, 65535] (2 bytes per element).
uint32    Unsigned 32-bit integers in the range [0, 4294967295] (4 bytes per element).
int8      Signed 8-bit integers in the range [-128, 127] (1 byte per element).
int16     Signed 16-bit integers in the range [-32768, 32767] (2 bytes per element).
int32     Signed 32-bit integers in the range [-2147483648, 2147483647] (4 bytes per element).
single    Single-precision floating-point numbers (4 bytes per element).
char      Characters (2 bytes per element).
logical   Values are 0 or 1 (1 byte per element).

The char data class holds characters in Unicode representation; a character string is merely a 1xn array of characters. A logical array contains only the values 0 and 1, with each element stored in memory using one byte; logical arrays are created using the function logical or by using relational operators.

Image Types
The toolbox supports four types of images: 1. intensity images; 2. binary images; 3. indexed images; 4. RGB images. Most monochrome image processing operations are carried out using binary or intensity images, so our initial focus is on these two image types; indexed and RGB colour images are described afterwards.

Intensity Images
An intensity image is a data matrix whose values have been scaled to represent intensities. When the elements of an intensity image are of class uint8 or class uint16, they have integer values in the range [0, 255] and [0, 65535], respectively. If the image is of class double, the values are floating-point numbers; values of scaled double intensity images are in the range [0, 1] by convention.

Binary Images
Binary images have a very specific meaning in MATLAB: a binary image is a logical array of 0s and 1s. Thus, an array of 0s and 1s whose values are of data class, say, uint8 is not considered a binary image in MATLAB. A numeric array is converted to binary using the function logical. Thus, if A is a numeric array consisting of 0s and 1s, we create a logical array B using the statement

B = logical(A)

If A contains elements other than 0s and 1s, use of the logical function converts all nonzero quantities to logical 1s and all entries with value 0 to logical 0s. Using relational and logical operators also creates logical arrays. To test whether an array is logical, we use the islogical function:

islogical(C)

If C is a logical array, this function returns a 1; otherwise it returns a 0. Logical arrays can be converted to numeric arrays using the data class conversion functions.

Indexed Images
An indexed image has two components: a data matrix of integers, X, and a colormap matrix, map. The matrix map is an m x 3 array of class double containing floating-point values in the range [0, 1]. The length m of the map is equal to the number of colors it defines. Each row of map specifies the red, green and blue components of a single color. An indexed image uses "direct mapping" of pixel intensity values to colormap values: the color of each pixel is determined by using the corresponding value of the integer matrix X as a pointer into map. If X is of class double, then all of its components with values less than or equal to 1 point to the first row in map, all components with value 2 point to the second row, and so on. If X is of class uint8 or uint16, then all components with value 0 point to the first row in map, all components with value 1 point to the second row, and so on.

RGB Image
An RGB color image is an M x N x 3 array of color pixels, where each color pixel is a triplet corresponding to the red, green and blue components of an RGB image at a specific spatial location. An RGB image may be viewed as a "stack" of three gray-scale images that, when fed into the red, green and blue inputs of a color monitor, produce a color image on the screen. By convention, the three images forming an RGB color image are referred to as the red, green and blue component images. The data class of the component images determines their range of values: if an RGB image is of class double, the range of values is [0, 1]; similarly, the range of values is [0, 255] or [0, 65535] for RGB images of class uint8 or uint16, respectively. The number of bits used to represent the pixel values of the component images determines the bit depth of an RGB image. For example, if each component image is an 8-bit image, the corresponding RGB image is said to be 24 bits deep. Generally, the number of bits in all component images is the same; in this case the number of possible colors in an RGB image is (2^b)^3, where b is the number of bits in each component image. For the 8-bit case, the number is 16,777,216 colors.


INTRODUCTION TO MATLAB

What Is MATLAB?
MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation.

Typical uses of MATLAB
• Math and computation
• Algorithm development
• Data acquisition
• Modeling, simulation, and prototyping
• Data analysis, exploration, and visualization
• Scientific and engineering graphics
• Application development, including graphical user interface building


The main features of MATLAB
1. Advanced algorithms for high-performance numerical computation, especially in the field of matrix algebra.
2. A large collection of predefined mathematical functions and the ability to define one's own functions.
3. Two- and three-dimensional graphics for plotting and displaying data.
4. A complete online help system.
5. A powerful, matrix- or vector-oriented, high-level programming language for individual applications.
6. Toolboxes available for solving advanced problems in several application areas.


Figure: The MATLAB system - programming language (user-written and built-in functions); graphics (2-D and 3-D graphics, color and lighting, animation); computation (linear algebra, signal processing, quadrature, etc.); toolboxes (signal processing, image processing, control systems, neural networks, communications, robust control, statistics); external interface (interfaces with C and FORTRAN programs).

Features and capabilities of MATLAB

MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This allows you to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar, non-interactive language such as C or FORTRAN. The name MATLAB stands for matrix laboratory. MATLAB was originally written to provide easy access to matrix software developed by the LINPACK and EISPACK projects. Today, MATLAB engines incorporate the LAPACK and BLAS libraries, embedding the state of the art in software for matrix computation. MATLAB has evolved over a period of years with input from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis. MATLAB features a family of add-on, application-specific solutions called toolboxes. Very important to most users of MATLAB, toolboxes allow you to learn and apply specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and many others.

The MATLAB System
The MATLAB system consists of five main parts:

Development Environment
This is the set of tools and facilities that help you use MATLAB functions and files. Many of these tools are graphical user interfaces. It includes the MATLAB desktop and Command Window, a command history, an editor and debugger, and browsers for viewing help, the workspace, files, and the search path.


The MATLAB Mathematical Function Library
This is a vast collection of computational algorithms ranging from elementary functions, like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.

The MATLAB Language
This is a high-level matrix/array language with control flow statements, functions, data structures, input/output, and object-oriented programming features. It allows both "programming in the small" to rapidly create quick and dirty throw-away programs, and "programming in the large" to create complete, large and complex application programs.

Graphics
MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as annotating and printing these graphs. It includes high-level functions for two-dimensional and three-dimensional data visualization, image processing, animation, and presentation graphics. It also includes low-level functions that allow you to fully customize the appearance of graphics as well as to build complete graphical user interfaces for your MATLAB applications.

The MATLAB Application Program Interface (API)
This is a library that allows you to write C and Fortran programs that interact with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and for reading and writing MAT-files.


MATLAB WORKING ENVIRONMENT

MATLAB Desktop: The MATLAB Desktop is the main MATLAB application window. The desktop contains five sub-windows: the Command Window, the Workspace Browser, the Current Directory window, the Command History window, and one or more Figure windows, which are shown only when the user displays a graphic.


The Command Window is where the user types MATLAB commands and expressions at the prompt (>>) and where the output of those commands is displayed. MATLAB defines the workspace as the set of variables that the user creates in a work session. The Workspace Browser shows these variables and some information about them. Double-clicking on a variable in the Workspace Browser launches the Array Editor, which can be used to obtain information and, in some instances, edit certain properties of the variable.

The Current Directory tab above the Workspace tab shows the contents of the current directory, whose path is shown in the Current Directory window. For example, in the Windows operating system the path might be C:\MATLAB\Work, indicating that the directory "Work" is a subdirectory of the main directory "MATLAB", which is installed in drive C. Clicking on the arrow in the Current Directory window shows a list of recently used paths, and clicking on the button to the right of the window allows the user to change the current directory.

MATLAB uses a search path to find M-files and other MATLAB-related files, which are organized in directories in the computer file system. Any file run in MATLAB must reside in the current directory or in a directory that is on the search path. By default, the files supplied with MATLAB and MathWorks toolboxes are included in the search path. The easiest way to see which directories are on the search path, or to add or modify a search path, is to select Set Path from the File menu of the desktop and then use the Set Path dialog box. It is good practice to add any commonly used directories to the search path to avoid repeatedly having to change the current directory.

The Command History window contains a record of the commands a user has entered in the Command Window, including both current and previous MATLAB sessions. Previously entered MATLAB commands can be selected and re-executed from the Command History window by right-clicking on a command or sequence of commands. This action launches a menu from which to select various options in addition to executing the commands, which is a useful feature when experimenting with commands in a work session.
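Much of this environment can also be inspected from the command line; the short session below is a sketch, and the folder C:\MATLAB\Work is simply the example path mentioned above:

x = rand(3);                  % create a variable so the workspace is not empty
whos                          % list workspace variables with their size and class
pwd                           % display the current directory
addpath('C:\MATLAB\Work')     % add a folder to the search path
path                          % display the full search path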


Implementations

Arithmetic operations

Entering Matrices
The best way for you to get started with MATLAB is to learn how to handle matrices. Start MATLAB and follow along with each example. You can enter matrices into MATLAB in several different ways:
• Enter an explicit list of elements.
• Load matrices from external data files.
• Generate matrices using built-in functions.
• Create matrices with your own functions in M-files.
Start by entering Dürer's matrix as a list of its elements. You only have to follow a few basic conventions:
• Separate the elements of a row with blanks or commas.
• Use a semicolon (;) to indicate the end of each row.
• Surround the entire list of elements with square brackets, [ ].
To enter the matrix, simply type in the Command Window
A = [16 3 2 13; 5 10 11 8; 9 6 7 12; 4 15 14 1]
MATLAB displays the matrix you just entered:
A =
    16     3     2    13
     5    10    11     8
     9     6     7    12
     4    15    14     1
This matrix matches the numbers in the engraving. Once you have entered the matrix, it is automatically remembered in the MATLAB workspace, and you can refer to it simply as A.
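Individual elements of A are reached with subscripts; a brief sketch:

A(4, 2)       % the element in row 4, column 2, which is 15
A(1, :)       % the entire first row: 16 3 2 13
A(:, end)     % the last column: 13, 8, 12, 1
sum(A(:))     % the sum of all sixteen elements, 136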

Sum, transpose, and diag
You are probably already aware that the special properties of a magic square have to do with the various ways of summing its elements. If you take the sum along any row or column, or along either of the two main diagonals, you will always get the same number. Let us verify that using MATLAB. The first statement to try is
sum(A)
MATLAB replies with
ans =
    34    34    34    34


When you do not specify an output variable, MATLAB uses the variable ans, short for answer, to store the results of a calculation. You have computed a row vector containing the sums of the columns of A. Sure enough, each of the columns has the same sum, the magic sum, 34.

How about the row sums? MATLAB has a preference for working with the columns of a matrix, so one way to get the row sums is to transpose the matrix, compute the column sums of the transpose, and then transpose the result. For an additional way that avoids the double transpose, use the dimension argument for the sum function.

MATLAB has two transpose operators. The apostrophe operator (e.g., A') performs a complex conjugate transposition. It flips a matrix about its main diagonal, and also changes the sign of the imaginary component of any complex elements of the matrix. The apostrophe-dot operator (e.g., A.') transposes without affecting the sign of complex elements. For matrices containing all real elements, the two operators return the same result. So
A'
produces
ans =
    16     5     9     4
     3    10     6    15
     2    11     7    14
    13     8    12     1
and
sum(A')'
produces a column vector containing the row sums
ans =
    34
    34
    34
    34
The sum of the elements on the main diagonal is obtained with the sum and diag functions:
diag(A)
produces
ans =
    16
    10
     7
     1
and
sum(diag(A))
produces
ans =
    34
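As noted above, the dimension argument of sum gives the row sums directly and avoids the double transpose; a brief sketch:

sum(A, 2)     % sum along dimension 2 (across each row): the column vector 34 34 34 34
A.'           % the non-conjugating transpose; identical to A' here, since A is real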

The other diagonal, the so-called antidiagonal, is not so important mathematically, so MATLAB does not have a ready-made function for it. But a function originally intended for use in graphics, fliplr, flips a matrix from left to right:
sum(diag(fliplr(A)))
produces
ans =
    34
You have verified that the matrix in Dürer's engraving is indeed a magic square and, in the process, have sampled a few MATLAB matrix operations.

Operators
Expressions use familiar arithmetic operators and precedence rules.
+      Addition
-      Subtraction
*      Multiplication
/      Division
\      Left division (described in "Matrices and Linear Algebra" in the MATLAB documentation; illustrated after the matrix-generation examples below)
^      Power
'      Complex conjugate transpose
( )    Specify evaluation order

Generating Matrices
MATLAB provides four functions that generate basic matrices:
zeros    All zeros
ones     All ones
rand     Uniformly distributed random elements
randn    Normally distributed random elements


Here are some examples:
Z = zeros(2, 4)
Z =
     0     0     0     0
     0     0     0     0
F = 5*ones(3, 3)
F =
     5     5     5
     5     5     5
     5     5     5
N = fix(10*rand(1, 10))
N =
     9     2     6     4     8     7     4     0     8     4
R = randn(4, 4)
R =
    0.6353    0.0860   -0.3210   -1.2316
   -0.6014   -2.0046    1.2366    1.0556
    0.5512   -0.4931   -0.6313   -0.1132
   -1.0998    0.4620   -2.3252    0.3792
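The left-division operator listed in the Operators table above is the usual way to solve a linear system; the coefficients below are arbitrary example values:

M = [2 1; 1 3];    % coefficient matrix
b = [3; 5];        % right-hand side
x = M \ b          % solves M*x = b, giving x = [0.8; 1.4]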

Using the MATLAB Editor to create M-Files

The MATLAB Editor is both a text editor specialized for creating M-files and a graphical MATLAB debugger. The Editor can appear in a window by itself, or it can be a sub-window in the desktop. M-files are denoted by the extension .m, as in pixelup.m. The MATLAB Editor window has numerous pull-down menus for tasks such as saving, viewing, and debugging files. Because it performs some simple checks and also uses color to differentiate between various elements of code, this text editor is recommended as the tool of choice for writing and editing M-functions. To open the Editor, type edit at the prompt; typing edit filename opens the M-file filename.m in an editor window, ready for editing. As noted earlier, the file must be in the current directory or in a directory on the search path.

Getting Help
The principal way to get help online is to use the MATLAB Help Browser, opened as a separate window either by clicking on the question mark symbol (?) on the desktop toolbar, or by typing helpbrowser at the prompt in the Command Window. The Help Browser is a web browser integrated into the MATLAB desktop that displays HyperText Markup Language (HTML) documents. The Help Browser consists of two panes: the Help Navigator pane, used to find information, and the display pane, used to view the information. Self-explanatory tabs other than the Navigator pane are used to perform a search.
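A few representative commands are shown below; pixelup.m is the example file named above and is assumed to be on the search path:

edit pixelup       % open pixelup.m in the Editor
help sum           % print the help text for sum in the Command Window
doc sum            % open the reference page for sum in the Help Browser
lookfor wavelet    % search the first help line of every M-file for a keyword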


CONCLUSION

This project investigates the combination of multiple wavelets at feature level for a palmprint based authentication system, using an indigenously developed peg-free image acquisition platform. The results show the superiority of combined wavelets over individual wavelet features for palmprint authentication using coarse-level information. The project also presents a new approach to rotation invariance, which proved its effectiveness by enhancing the genuine acceptance rate.



