
International Journal of Scientific Research Engineering & Technology (IJSRET)
Volume 1, Issue 2, pp. 021-026, May 2012, www.ijsret.org, ISSN 2278-0882

Object Tracking System Using Motion Detection
Harsha K. Ingle*, Prof. Dr. D.S. Bormane**
*Department of Electronics and Telecommunication, Pune University, Pune, India (Email: [email protected])
**Department of Electronics and Telecommunication, Pune University, Pune, India (Email: [email protected])

ABSTRACT
Automatic visual monitoring of activities with cameras, without human intervention, is a challenging problem; this motivates the need for an automatic object tracking system. This paper presents a new object tracking model that systematically combines region and boundary features. We design a new boundary-based object detector for accurate and robust tracking in the low-contrast and complex scenes that commonly appear in monochrome surveillance systems.

Keywords – Contour, Motion detection, Object detection, Object tracking, Shape features

I. INTRODUCTION
Object tracking is important in many computer vision applications, such as surveillance, traffic control, virtual reality, video compression, robotics, and navigation. The task of tracking is to associate object locations in a sequence of image frames over time. Object detection is the process of detecting and locating an object of interest in an image, such as people, faces, computers, or robots; its many applications range from national security to scientific research. Object tracking can be described as predicting the future behavior of an object from its past behavior; in many scientific and commercial applications, it is necessary to anticipate what an object will do in the near future.

Difficulties in object tracking include:
1. Abrupt object motion.
2. Changing appearance patterns of both the object and the scene.
3. Non-rigid object structures.
4. Object-to-object and object-to-scene occlusions.
5. Camera motion.

Motion detection is the process of confirming a change in the position of an object relative to its surroundings, or a change in the surroundings relative to the object. Motion detection helps save CPU time, since it narrows the region of investigation.

II. METHODOLOGY

To represent objects during tracking, many methods simplify them with geometric shapes such as a rectangle or an ellipse, which describe only rough locations rather than exact object boundaries [1],[2],[3]. Such fixed shapes have trouble characterizing real object shape variations across frame sequences, e.g. for non-rigid objects. In addition, simple shape-based tracking cannot support high-level motion analysis such as pose recognition.
1) Comaniciu et al. [1] characterize moving objects with color histograms, and the most probable object locations are found by the mean-shift algorithm. Compared to color, texture is more robust to illumination variations in tracking.
2) Mansouri [4], assuming that object color remains constant over frames, models object contour tracking as a Bayesian estimation problem.
3) In [6], a Markov process is used to quickly detect the texture boundary along a line, from which the projected contour of the object can be reconstructed.

IJSRET @ 2012


A single fixed feature is generally insufficient to track objects in complex scenes.
4) In [7], objects are distinguished from the background by texture analysis; a tracker then establishes the correspondence of object locations over frames using a distance measure that unifies color, texture, and motion.
5) Paragios and Deriche [8] combine the frame difference with the moving object boundary to evolve a geodesic active contour for object boundary detection. They also design an energy function that integrates boundary, intensity, and motion information, so that the initial curve can be deformed toward the object boundary in subsequent frames by a partial differential equation (PDE).
6) In [9], point correspondences from dense optical flow are combined with region features to determine the object location in 3D space for some challenging cases. Beyond tracking, combined features, such as color and infrared, or edge and motion, can also be applied to moving object detection in videos.
7) In [10], object contours are tracked with occlusion handling in video acquired from mobile cameras, while [11] tracks multiple objects using color, texture, and motion jointly.

III. PROPOSED WORK

1) Input Image. The image sequence is taken from a standard image database, e.g. the 'highway.bmp' sequence. The images in a sequence share the same background and the same size.
2) Preprocessing. First, each color image is converted to gray scale, since a single-channel gray image is easier and faster to process than three color channels. A median filter is then applied to remove noise: the median filter is a low-pass filter that removes salt-and-pepper noise while preserving the edges of objects in the image.
3) Motion Detection. Motion between images is detected by frame differencing, i.e. subtracting one image from the next. Regions with motion appear white; regions without motion appear black.
4) Motion Estimation. The residual error, i.e. the frame difference across all frames, is computed using the sum of absolute differences.
5) Contour Tracking. Tracking is performed by applying the motion detection algorithm.
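The preprocessing and motion detection steps above can be sketched in Python with NumPy (the paper's implementation is in MATLAB; this is only an illustrative translation, and the grayscale weights and motion threshold are assumptions, not values from the paper):

```python
import numpy as np

def to_gray(rgb):
    # Convert an RGB image to a single gray channel
    # (ITU-R BT.601 luminosity weights, an assumed choice)
    return rgb @ np.array([0.299, 0.587, 0.114])

def median3x3(img):
    # 3x3 median filter: stack the 9 shifted neighborhoods of each pixel
    # and take the per-pixel median; removes salt-and-pepper noise
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    windows = [p[r:r + h, c:c + w] for r in range(3) for c in range(3)]
    return np.median(np.stack(windows), axis=0)

def detect_motion(ref, frame, thresh=25):
    # Frame differencing: True (white) where the scene changed,
    # False (black) elsewhere; thresh is an assumed value
    return np.abs(ref.astype(float) - frame.astype(float)) > thresh
```

With two identical frames the motion map is all black, matching the zero-difference rows in Tables I and II.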

We propose to implement an object tracking system using motion detection with region and boundary features, such as the frame difference and shape features, and to compute the energy of these features for object tracking.

IV. EXPERIMENTAL EVALUATION

Fig. 1: The proposed model architecture

The image database, downloaded from the internet, contains sequences such as 'highway.bmp' and 'editingsequences.bmp'. In general, tracking performance depends strongly on whether the selected features can efficiently distinguish the objects of interest from the background; regular features include color, texture, edge, motion, and the frame difference. All programming here is done in MATLAB. The workflow is as follows:
1. Take one reference image and an image sequence from the standard database as input: the reference image Iref(X,Y) and the input frame Iframe(X,Y).


2. Convert the color image to gray scale.
3. Filter the gray image with a median (low-pass) filter.
4. Calculate the absolute difference between the two images to detect the motion between them:

abs_diff(X,Y) = | Iref(X,Y) – Iframe(X,Y) |   (1)

5. To estimate motion, find shape (edge) features such as area and centroid, as shown in Table III.
6. To track the contour, find the energy of all the image features (frame difference, edge feature, and color feature) using the following formula:
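Step 4's residual error, the sum of absolute differences, is a one-liner in NumPy; a minimal sketch (the paper uses MATLAB), producing the kind of frame-difference values listed in Tables I and II:

```python
import numpy as np

def sad(ref, frame):
    # Sum of absolute differences: the per-pixel absolute frame
    # difference of equation (1), summed over the whole image
    return int(np.abs(ref.astype(np.int64) - frame.astype(np.int64)).sum())
```

Identical frames give a SAD of zero, which is why the first row of each table (the reference frame differenced with itself) is 0.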

Energy = ( Σ_{X,Y} abs_diff(X,Y)² ) / (X · Y)   (2)

Fig. 3: Difference between two different images. The difference between the two images is shown in Fig. 3, and the detected motion is shown by the third histogram.
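Under this reading of equation (2), that is, the squared feature values summed and normalized by the image dimensions X·Y, the energy of a feature map can be computed as follows. This is a sketch of the reconstructed formula, not the authors' MATLAB code:

```python
import numpy as np

def feature_energy(abs_diff):
    # Energy of a feature map per equation (2): squared values
    # summed over the image, normalized by the dimensions X * Y
    X, Y = abs_diff.shape
    return float((abs_diff.astype(float) ** 2).sum() / (X * Y))
```

The same function applies to any of the feature maps (frame difference, edge, or a color channel), which is how Table IV lists one energy column per feature.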

V. RESULTS

The following are the results of the motion detection and motion estimation blocks.

Fig. 4: Results for images 1 to 20 (first sample). The detected motion for the first 20 images (frames), with their histograms, is shown in Fig. 4.

Fig. 2: Difference between identical images. To detect motion, we first compute the difference between two identical image samples, as shown in Fig. 2; the difference is zero, i.e. there is no motion.

Fig. 5: Difference between images 1 to 4 (second sample)


The results of the motion detection block for the second sample, with their histograms, are shown in Fig. 5. The frame differences, i.e. the residual errors, for the first and second samples are shown in Tables I and II.

Table I. MOTION DETECTION FOR FIRST SAMPLE IMAGES

  Image (input)     Frame difference
  Highway1.bmp      0
  Highway2.bmp      244073
  Highway3.bmp      420826
  Highway4.bmp      526865
  Highway5.bmp      617548
  Highway6.bmp      683535
  Highway7.bmp      725608
  Highway8.bmp      819075
  Highway9.bmp      928426
  Highway10.bmp     944244

Table II. MOTION DETECTION FOR SECOND SAMPLE IMAGES

  Image (input)            Frame difference
  Editingsequences1.bmp    0
  Editingsequences2.bmp    159078
  Editingsequences3.bmp    133627
  Editingsequences4.bmp    160207

Table III. TABLE OF SHAPE FEATURES

  Shape feature     Sample 1 (Highway1.bmp)          Sample 2 (Editingsequences1.bmp)
  Area              76570                            22027
  Centroid          [160.0823 120.3795]              [92.3049 60.5039]
  BoundingBox       [0.5000 0.5000 320 240]          [0.5000 0.5000 184 120]
  SubarrayIdx       {[1x240 double] [1x320 double]}  {[1x120 double] [1x184 double]}
  MajorAxisLength   368.7553                         212.1217
  MinorAxisLength   277.3695                         138.7287
  Eccentricity      0.6590                           0.7565
  Orientation       0.2560                           -0.0111
  ConvexHull        [1105x2 double]                  [609x2 double]
  ConvexImage       [240x320 logical]                [120x184 logical]
  ConvexArea        76800                            22080
  Image             [240x320 logical]                [120x184 logical]
  FilledImage       [240x320 logical]                [120x184 logical]
  FilledArea        76646                            22080
  EulerNumber       -9                               2
  Extrema           [8x2 double]                     [8x2 double]
  EquivDiameter     312.2370                         167.4683
  Solidity          0.9970                           0.9976
  Extent            0.9970                           0.9976
  PixelIdxList      [76570x1 double]                 [22027x1 double]
  PixelList         [76570x2 double]                 [22027x2 double]
  Perimeter         1.1484e+003                      604

The results of the second block, motion estimation, i.e. the shape features, are shown in Table III. Shape features are essentially edge features. Of the 22 features, only three are used to track the object: area, filled area, and centroid. Edge features alone are not sufficient to track the object, so one region feature, color, is added. A color image has three basic channels, red, green, and blue, and their distributions are obtained from the color image with the histogram method.

Table IV. TOTAL ENERGY FOR EDGE AND COLOR FEATURES


  Energy of     Energy of     Energy of     Energy of
  red color     green color   blue color    edge
  235.8829      224.1365      274.9072      0.2884
  234.4773      221.1587      271.1070      0.2905
  234.6728      220.0188      270.1876      0.2908
  232.9874      218.7467      269.0933      0.2911
  232.2294      216.6070      269.8252      0.2935
  233.4927      214.3735      262.1006      0.2967
  234.9931      210.3201      259.4942      0.2981
  237.1716      211.0153      261.2022      0.2974
  239.8719      213.6013      259.3852      0.2974
  242.1313      211.2906      260.0263      0.2977
  249.4221      211.6740      264.3953      0.2984
  253.9584      213.9107      268.8634      0.2998
  266.0230      221.3265      268.2790      0.3027
  269.3694      223.0158      263.8507      0.3030
  280.0433      223.1982      262.2647      0.3033
  266.6685      218.8692      261.0901      0.3020
  254.1319      215.7705      256.8016      0.2965
  256.7886      216.3735      259.1190      0.3012
  243.6326      215.5610      253.0131      0.3007
  242.1333      213.2468      254.4189      0.3009
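The shape features of Table III and the per-channel color histograms can be sketched as follows. MATLAB's regionprops provides all 22 shape features directly; this NumPy version, covering only area and centroid, is an illustrative assumption, not the paper's code:

```python
import numpy as np

def shape_features(mask):
    # Area and centroid of a binary object mask
    # (two of the Table III shape features)
    ys, xs = np.nonzero(mask)
    area = int(ys.size)
    centroid = (float(xs.mean()), float(ys.mean()))
    return area, centroid

def channel_histograms(rgb, bins=256):
    # Histogram of each channel (red, green, blue) of an 8-bit
    # color image, as used to obtain the color region feature
    return [np.histogram(rgb[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
```

Feeding the three channel histograms and the edge map through the energy formula of equation (2) yields one energy column per feature, as in Table IV.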

Compared with existing approaches, our work makes two main contributions: 1) our model fuses different features into two types of energy terms and combines them in a complementary fashion, so that each alleviates the disadvantages of the other; it can therefore achieve more robust performance in many challenging cases than current models based on either region or boundary energy functionals alone. 2) The region features are used to compute the posterior probability of pixels, which generates the force that deforms the contour toward the object region.

REFERENCES
Journal Papers:
[1] D. Comaniciu, V. Ramesh, and P. Meer, "Kernel-based object tracking," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 5, pp. 564-575, 2003.
[2] R. T. Collins, "Mean-shift blob tracking through scale space," in Proc. IEEE Conf. Comput. Vision Pattern Recognition, vol. 2, 2003, pp. 234-240.
[3] A. Jepson, D. Fleet, and T. Elmaraghi, "Robust online appearance models for visual tracking," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 10, pp. 1296-1311, 2003.
[4] A.-R. Mansouri, "Region tracking via level set PDEs without motion computation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 7, pp. 947-961, 2002.
[5] M. Heikkila and M. Pietikainen, "A texture-based method for modeling the background and detecting moving objects," IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 4, pp. 657-662, 2006.
[6] A. Shahrokni, T. Drummond, and P. Fua, "Fast texture-based tracking and delineation using texture entropy," in Proc. IEEE Int. Conf. Comput. Vision, vol. 2, 2005, pp. 1154-1160.
[7] R. Collins, Y. Liu, and M. Leordeanu, "On-line selection of discriminative tracking features," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 10, pp. 1631-1643, 2005.
[8] M. S. Allili and D. Ziou, "Object of interest segmentation and tracking by using feature selection and active contours," in Proc. IEEE Conf. Comput. Vision Pattern Recognition, 2007, pp. 1-8.

Table IV above shows the energies of the color and edge features, which are used for tracking the contour of the object.

Fig. 6: Centroid of sample 1. The object was tracked using the centroid in still image frames, as shown in Fig. 6 for the first sample.

VI. CONCLUSION

In this paper, we propose a new object boundary tracking model that systematically combines region and boundary features into a single energy functional.


[9] K. Zimmermann, J. Matas, and T. Svoboda, "Tracking by an optimal sequence of linear predictors," IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 4, pp. 677-692, 2009.
[10] A. Yilmaz, X. Li, and M. Shah, "Contour-based object tracking with occlusion handling in video acquired using mobile cameras," IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 11, pp. 1531-1536, 2004.
[11] V. Takala and M. Pietikainen, "Multi-object tracking using color, texture and motion," in Proc. IEEE Conf. Comput. Vision Pattern Recognition, 2007, pp. 1-7.
[12] Ling Cai, Lei He, Yamasita Takayoshi, Yiren Xu, Yuming Zhao, and Xin Yang, "Robust contour tracking by combining region and boundary information," IEEE Trans. Circuits Syst. Video Technol.
Books:
[13] S. Sridhar, Digital Image Processing, Oxford Higher Education, 2011.
[14] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Pearson Education, 2009.

IJSRET @ 2012
