International Journal of Scientific Research Engineering & Technology (IJSRET)
Volume 1, Issue 2, pp. 021-026, May 2012, www.ijsret.org, ISSN 2278-0882
IJSRET © 2012
Object Tracking System Using Motion Detection
Harsha K. Ingle*, Prof. Dr. D.S. Bormane**
*Department of Electronics and Telecommunication, Pune University, Pune, India
Email: [email protected]
**Department of Electronics and Telecommunication, Pune University, Pune, India
Email: [email protected]
ABSTRACT
Automatic visual monitoring of activities with cameras, without human intervention, is a challenging problem, which motivates the need for an automatic object tracking system. This paper presents a new object tracking model that systematically combines region and boundary features. We design a new boundary-based object detector for accurate and robust tracking in low-contrast and complex scenes, which commonly occur in the widely used monochrome surveillance systems.
Keywords – Contour, Motion detection, Object
detection, Object tracking, Shape features
I. INTRODUCTION
Object tracking is important in many computer vision applications, such as surveillance, traffic control, virtual reality, video compression, robotics, and navigation. The task of tracking is to associate the locations of an object across a sequence of image frames over time. Object detection is the process of scanning an image for an object of interest, such as a person, face, computer, or robot; it has numerous applications, including national security and many scientific domains. Object tracking can be described as predicting the future behavior of an object from its past behavior. In many scientific and commercial applications, it is necessary to predict what an object will be doing in the near future.
Difficulties in object tracking include:
1. Abrupt object motion.
2. Changing appearance patterns of both the object and the scene.
3. Non-rigid object structures.
4. Object-to-object and object-to-scene occlusions.
5. Camera motion.
Motion detection is the process of confirming a change in the position of an object relative to its surroundings, or a change in the surroundings relative to the object. Motion detection saves CPU time because it narrows the region of investigation. Object detection is the process of finding and localizing an object of interest in an image.
II. METHODOLOGY
To represent objects during tracking, many methods simplify them with geometric shapes such as a rectangle or an ellipse, which describe only rough locations rather than the exact object boundaries [1], [2], [3]. Such fixed shapes cannot characterize real-time object shape variations across frame sequences, e.g. for non-rigid objects. In addition, simple shape-based tracking cannot support high-level motion analysis such as pose recognition.
1) Comaniciu et al. [1] characterize moving objects with color histograms, and the most probable object locations are found by the mean-shift algorithm. Compared to color, texture is more robust to illumination variations in tracking.
2) Mansouri [4] models object contour tracking as a Bayesian estimation problem, under the assumption that the object color remains constant over frames.
3) A Markov process is used in [6] to quickly detect the texture boundary along a line, from which the projected contour of the object can be reconstructed.
A fixed feature is generally insufficient to track
objects in complex scenes.
4) In [7], objects are distinguished from the
background by texture analysis. A tracker establishes
the correspondence of the object locations over
frames based on the distance measure unifying color,
texture and motion.
5) Paragios and Deriche [8] combine the frame
difference with the moving object boundary to evolve
the geodesic active contour for object boundary
detection. They also design an energy function to
integrate boundary, intensity and motion information
together so that the initial curve can be deformed
towards the object boundary in subsequent frames by
a partial differential equation (PDE).
6) In [9], point correspondences from dense optical flow are combined with region features to determine the object location in 3D space in some challenging cases. Beyond tracking, combined features such as color and infrared, or edge and motion, can also be applied to moving-object detection in videos.
7) In [10], a contour-based tracker with occlusion handling tracks objects in video acquired with mobile cameras.
III. PROPOSED WORK
We propose to implement an object tracking system using motion detection with region and boundary features, such as the frame difference and shape features, and to compute the energy of these features for object tracking.
Fig.1 The Proposed Model Architecture
1). Input Image
The image sequence is taken from a standard image database, such as the 'highway.bmp' sequence. The images in a sequence share the same background and size.
2). Preprocessing
In preprocessing, we first convert each color image to grayscale, because a single-channel gray image is easier and faster to process than three color channels. We then apply a median filter to remove noise. The median filter is a low-pass filter that removes salt-and-pepper noise while preserving the edges of objects in the image.
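The preprocessing steps above can be sketched in Python. The paper's implementation is in MATLAB; this NumPy/SciPy version is only an illustrative equivalent, and the luminance weights (ITU-R BT.601) and the 3x3 filter window are common defaults assumed here, not values stated in the paper.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess(rgb):
    """Convert an RGB frame to grayscale, then median-filter it.

    rgb: (H, W, 3) uint8 array. Returns a float (H, W) array.
    """
    # A single luminance channel (BT.601 weights, an assumed choice)
    # is cheaper to process than three color channels.
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    # A 3x3 median filter suppresses salt-and-pepper noise while
    # preserving object edges better than a mean filter would.
    return median_filter(gray, size=3)
```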
3). Motion Detection
We detect the motion between consecutive images: where there is motion in the scene, it is shown in white; where there is none, in black. Motion detection here means finding the difference between two images, i.e. subtracting one image from the next.
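A minimal sketch of this frame-differencing step follows (illustrative Python, not the paper's MATLAB code; the binarization threshold is an assumed value, as the paper does not specify one).

```python
import numpy as np

def detect_motion(frame_a, frame_b, threshold=25):
    """Return a binary motion mask: True (white) where the two frames
    differ by more than `threshold`, False (black) elsewhere.

    frame_a, frame_b: grayscale frames of identical size.
    threshold: assumed value; not stated in the paper.
    """
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(frame_a.astype(np.int32) - frame_b.astype(np.int32))
    return diff > threshold
```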
4). Motion Estimation
Here we calculate the residual error, i.e. the frame difference between frames, using the sum of absolute differences (SAD).
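The SAD residual can be written directly; this is a sketch of the measure behind the frame-difference values in Tables I and II, not the paper's exact MATLAB code.

```python
import numpy as np

def sad(frame_a, frame_b):
    """Sum of absolute differences between two equally sized frames,
    used as the residual error (frame difference) between them."""
    # Signed accumulation avoids unsigned wrap-around on uint8 frames.
    return int(np.abs(frame_a.astype(np.int64) - frame_b.astype(np.int64)).sum())
```

Note that SAD of a frame with itself is zero, which matches the first row of Tables I and II.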
5) Contour Tracking
Here, tracking is performed by applying the motion detection algorithm.
IV. EXPERIMENTAL EVALUATION
We use an image database downloaded from the internet, containing the 'highway.bmp' and 'editingsequences.bmp' sequences. In general, tracking performance depends strongly on whether the selected features can efficiently distinguish the objects of interest from the background. Common features include color, texture, edge, motion, and frame difference. All programming was done in MATLAB. The workflow is as follows:
1. Take one reference image and an image sequence from the standard image database as input. Denote the reference image by Iref(X, Y) and an input image by Iframe(X, Y).
2. Convert the color image to gray.
3. Filter the gray image with a median (low-pass) filter.
4. Calculate the absolute difference between the two images to detect the motion between them:

   absdiff(X, Y) = |Iref(X, Y) - Iframe(X, Y)|   (1)

5. To estimate motion, find shape (edge) features such as area and centroid, as shown in Table III.
6. To track the contour, find the energy of the image features, such as the frame difference, edge, and color features, using:

   Energy = Σ(X,Y) [absdiff(X, Y)]²   (2)
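Equation (2) can be sketched as follows (an illustrative Python version of the paper's MATLAB computation; any normalization is not specified in the paper, so none is applied here).

```python
import numpy as np

def feature_energy(ref, frame):
    """Energy of the frame-difference feature per Eq. (2):
    the sum over all pixels (X, Y) of absdiff(X, Y) squared."""
    absdiff = np.abs(ref.astype(np.float64) - frame.astype(np.float64))
    return float((absdiff ** 2).sum())
```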
V. RESULTS
The following are the results of the motion detection and motion estimation blocks.
Fig. 2: difference between identical images
To detect motion, we first compute the difference between two identical image samples, as shown in Fig. 2; the difference is zero, i.e. there is no motion.
Fig. 3: difference between two images
The difference between two different images is shown in Fig. 3, and the detected motion is shown in the third histogram.
Fig. 4: results for images 1 to 20 (first sample)
The detected motion for the first 20 images (frames) is shown in Fig. 4, with their histograms.
Fig. 5: difference between images 1 to 4 (second sample)
The results of the motion detection block for the second sample are shown in Fig. 5, with their histograms. The frame difference, i.e. the residual error, for the first and second samples is shown in Tables I and II.
Table I. MOTION DETECTION FOR FIRST SAMPLE IMAGES

IMAGE IN SEQUENCE (INPUT)   FRAME DIFFERENCE
Highway1.bmp                0
Highway2.bmp                244073
Highway3.bmp                420826
Highway4.bmp                526865
Highway5.bmp                617548
Highway6.bmp                683535
Highway7.bmp                725608
Highway8.bmp                819075
Highway9.bmp                928426
Highway10.bmp               944244
Table II. MOTION DETECTION FOR SECOND SAMPLE IMAGES

IMAGE IN SEQUENCE (INPUT)   FRAME DIFFERENCE
Editingsequences1.bmp       0
Editingsequences2.bmp       159078
Editingsequences3.bmp       133627
Editingsequences4.bmp       160207
Table III. SHAPE FEATURES

SHAPE FEATURE    SAMPLE 1 (Highway1.bmp)          SAMPLE 2 (Editingsequences1.bmp)
Area             76570                            22027
Centroid         [160.0823 120.3795]              [92.3049 60.5039]
BoundingBox      [0.5000 0.5000 320 240]          [0.5000 0.5000 184 120]
SubarrayIdx      {[1x240 double] [1x320 double]}  {[1x120 double] [1x184 double]}
MajorAxisLength  368.7553                         212.1217
MinorAxisLength  277.3695                         138.7287
Eccentricity     0.6590                           0.7565
Orientation      0.2560                           -0.0111
ConvexHull       [1105x2 double]                  [609x2 double]
ConvexImage      [240x320 logical]                [120x184 logical]
ConvexArea       76800                            22080
Image            [240x320 logical]                [120x184 logical]
FilledImage      [240x320 logical]                [120x184 logical]
FilledArea       76646                            22080
EulerNumber      -9                               2
Extrema          [8x2 double]                     [8x2 double]
EquivDiameter    312.2370                         167.4683
Solidity         0.9970                           0.9976
Extent           0.9970                           0.9976
PixelIdxList     [76570x1 double]                 [22027x1 double]
PixelList        [76570x2 double]                 [22027x2 double]
Perimeter        1.1484e+003                      604
The results of the second block, motion estimation, i.e. the shape features, are shown in Table III. Shape features here serve as edge features. Out of the 22 features, we consider only three for tracking the object: area, filled area, and centroid.
Shape features alone are not sufficient to track the object, so we add one region feature: color. A color image has three basic channels, red, green, and blue; using the histogram method, we extract these three color features from a color image.
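The shape and color features described above can be sketched as follows. These are illustrative Python analogues of the MATLAB computations (the area/centroid function mirrors the Area and Centroid fields of Table III, the histogram function the per-channel color features), not the paper's actual code.

```python
import numpy as np

def shape_features(mask):
    """Area and centroid of a binary object mask, analogous to the
    Area and Centroid fields reported in Table III."""
    ys, xs = np.nonzero(mask)
    area = len(xs)
    # Centroid in (x, y) order, as MATLAB's regionprops reports it.
    centroid = (float(xs.mean()), float(ys.mean()))
    return area, centroid

def rgb_histograms(rgb, bins=256):
    """Per-channel intensity histograms of an RGB image, giving the
    red, green, and blue color features."""
    return [np.histogram(rgb[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
```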
Table IV. TOTAL ENERGY FOR EDGE & COLOR FEATURES

ENERGY OF    ENERGY OF    ENERGY OF    ENERGY
RED COLOR    GREEN COLOR  BLUE COLOR   OF EDGE
235.8829     224.1365     274.9072     0.2884
234.4773     221.1587     271.1070     0.2905
234.6728     220.0188     270.1876     0.2908
232.9874     218.7467     269.0933     0.2911
232.2294     216.6070     269.8252     0.2935
233.4927     214.3735     262.1006     0.2967
234.9931     210.3201     259.4942     0.2981
237.1716     211.0153     261.2022     0.2974
239.8719     213.6013     259.3852     0.2974
242.1313     211.2906     260.0263     0.2977
249.4221     211.6740     264.3953     0.2984
253.9584     213.9107     268.8634     0.2998
266.0230     221.3265     268.2790     0.3027
269.3694     223.0158     263.8507     0.3030
280.0433     223.1982     262.2647     0.3033
266.6685     218.8692     261.0901     0.3020
254.1319     215.7705     256.8016     0.2965
256.7886     216.3735     259.1190     0.3012
243.6326     215.5610     253.0131     0.3007
242.1333     213.2468     254.4189     0.3009
Table IV shows the energy of the color and edge features, which is used for tracking the contour of the object.
Fig. 6: Centroid of sample 1
The object was tracked using the centroid in still image frames, as shown in Fig. 6 for sample 1.
VI. CONCLUSION
In this paper, we propose a new object boundary
tracking model to systematically combine both region
and boundary features into one energy functional.
Compared with existing approaches, our work has
two major contributions:
1) Our model fuses different features into two types of energy terms and combines them in a complementary fashion so that they offset each other's weaknesses. It can therefore achieve more robust performance in many challenging cases than current models based on either a region or a boundary energy functional alone.
2) The region features are used to compute the posterior probability of pixels, which generates the force that deforms the contour towards the object region.
REFERENCES
Journal Papers:
[1] D. Comaniciu, V. Ramesh, and P. Meer, "Kernel-based object tracking," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 5, pp. 564-575, 2003.
[2] R. T. Collins, “Mean-shift Blob Tracking through
Scale Space,” in Proc. IEEE Conf. Comput. Vision
Pattern Recognition, vol. 2, 2003, pp. 234-240.
[3] A. Jepson, D. Fleet, and T. El-Maraghi, "Robust online appearance models for visual tracking," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 10, pp. 1296-1311, 2003.
[4] Abdol-Reza Mansouri, “Region Tracking via
Level Set PDEs without Motion Computation,” IEEE
Trans. Pattern Anal. Mach. Intell., vol. 24, no. 7, pp.
947-961, 2002.
[5] M. Heikkila and M. Pietikainen, “A texture-
based method for modeling the background and
detecting moving objects,” IEEE Trans. Pattern
Anal. Mach. Intell., vol. 28, no. 4, pp. 657-662, 2006.
[6] A. Shahrokni, T. Drummond, P. Fua, “Fast
Texture-Based Tracking and Delineation Using
Texture Entropy,” in Proc. IEEE Int. Conf. Comput.
Vision, vol. 2, 2005, pp. 1154-1160.
[7] R. Collins, Y. Liu, and M. Leordeanu, “On-Line
Selection of Discriminative Tracking Features,”
IEEE Trans
Pattern Anal. Mach. Intell., vol. 27, no. 10, pp. 1631-
1643, 2005.
[8] M.S. Allili and D. Ziou, "Object of Interest Segmentation and Tracking by Using Feature Selection and Active Contours," in Proc. IEEE Conf. Comput. Vision Pattern Recognition, 2007, pp. 1-8.
[9] K. Zimmermann, J. Matas, and T. Svoboda,
“Tracking by an Optimal Sequence of Linear
Predictors,” IEEE Trans. Pattern Anal. Mach. Intell.,
vol. 31, no. 4, pp. 677-692, 2009.
[10] A. Yilmaz, X. Li, and M. Shah, “Contour based
object tracking with occlusion handling in video
acquired using mobile cameras,” IEEE Trans.
Pattern Anal. Mach. Intell., vol. 26, no. 11, pp. 1531-
1536, 2004.
[11] V. Takala and M. Pietikäinen, "Multi-object tracking using color, texture and motion," in Proc. IEEE Conf. Comput. Vision Pattern Recognition, 2007, pp. 1-7.
[12] Ling Cai, Lei He, Takayoshi Yamashita, Yiren Xu, Yuming Zhao, and Xin Yang, "Robust Contour Tracking by Combining Region and Boundary Information," IEEE Trans. Circuits Syst. Video Technol.
Books:
[13] S. Sridhar, Digital Image Processing, Oxford Higher Education, 2011.
[14] R.C. Gonzalez and R.E. Woods, Digital Image Processing, Pearson Education, 2009.