REAL TIME FINGER TRACKING FOR INTERACTION

Jaypee Institute of Information Technology

MINOR PROJECT ELECTRONICS AND COMMUNICATION 3rd Year
MENTOR: Prof. Kalyansundaram

Made by: Tavish Naruka (8102171), Jasmeet Kaur (), Komal Aggarwal (08502933)

CONTENTS
Abstract
1. Introduction
2. System Overview
3.1. Detecting Hand Region
3.2. Finding Contour of the Detected Hand
3.3. Computation of Fingertip Location
3.4. Tracking the Motion of the Fingertip
4. Display of Mouse Pointer
5. Future Applications
6. Limitations
7. Learning
8. Applications
8.1. Resize an Image
8.2. Rotate an Image
8.3. Moving Mouse Pointer
8.4. Drawing Surface on the Screen

Abstract

This report discusses the design of a system that tracks the fingertip of the index finger using a single camera in order to control a mouse pointer on the screen. Because tracking is not robust against cluttered backgrounds containing colors similar to skin, the system is used against backgrounds with no skin-like colors. The location of the fingertip is mapped to the screen so that its movement can be tracked.

1. Introduction
Over recent years, computer vision has started to play a significant role in human-computer interaction. With efficient object tracking algorithms, it is possible to track the motion of a human hand in real time using a simple web camera. This report discusses the design of a system that tracks the tip of the index finger for the purpose of controlling a mouse pointer on the screen. A single camera (web camera) is used to track the motion of the fingertip in real time. The camera is mounted on top of a stand and pointed directly at the paper over which the finger is moved. Since the underlying tracking algorithm works by segmenting skin color from the background, efficient tracking of the fingertip requires that the user's other hand is not in the scene. To move the mouse pointer, the user moves the index finger on a 2D plane (for example, a table or a sheet of paper); all other fingers are folded into a fist.

2. System Overview
The system consists of three parts. The first is hand detection, which uses background subtraction supported by skin detection. The second is fingertip detection, and the third is tracking the motion of the hand and finger.

3.1 Detecting hand region
This is done by background subtraction, in which two consecutive frames are subtracted. Stationary parts of the image cancel out to zero, while anything in motion gives a non-zero difference; in this way the moving hand is detected.

To distinguish the hand from other moving objects, a skin detection step is also applied. The image is converted from RGB, the default color space, to HSV, and only the pixels whose hue lies in the range 0 to 180 and whose saturation lies in the range 25 to 200 (the skin range used here) are kept, making the detection more specific to skin.
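The project's implementation uses OpenCV's older C interface; as a rough sketch of the two steps just described (frame differencing followed by an HSV skin mask using the hue and saturation ranges quoted above), the modern OpenCV C++ equivalents could look as follows. The function name, the motion threshold of 20, and the unconstrained value range are assumptions made for illustration, not details taken from the project code.

    #include <opencv2/opencv.hpp>

    // Sketch of hand-region detection: a motion mask from two consecutive frames
    // combined with a skin mask in HSV, using the ranges quoted in the report
    // (hue 0-180, saturation 25-200). Names and thresholds are illustrative.
    cv::Mat detectHandRegion(const cv::Mat& prevFrame, const cv::Mat& currFrame)
    {
        // Background subtraction: keep pixels that changed between the two frames.
        cv::Mat diff, motionMask;
        cv::absdiff(prevFrame, currFrame, diff);
        cv::cvtColor(diff, diff, cv::COLOR_BGR2GRAY);
        cv::threshold(diff, motionMask, 20, 255, cv::THRESH_BINARY);  // 20 is an assumed threshold

        // Skin detection: convert the current frame from BGR to HSV and keep
        // pixels inside the hue/saturation ranges given in the report.
        cv::Mat hsv, skinMask;
        cv::cvtColor(currFrame, hsv, cv::COLOR_BGR2HSV);
        cv::inRange(hsv, cv::Scalar(0, 25, 0), cv::Scalar(180, 200, 255), skinMask);

        // The moving hand is where both masks agree.
        cv::Mat handMask;
        cv::bitwise_and(motionMask, skinMask, handMask);
        return handMask;
    }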

3.2 Finding contour of the detected hand
After the hand is detected, we need the coordinates of points along its boundary so that the fingertip can be detected and tracked. This is done using two functions, cvCanny and cvFindContours. cvCanny finds the edges in the image and produces a binary image, which makes it suitable input for cvFindContours, since that function only accepts binary images. cvFindContours then extracts the contour of the binary image, connecting the edge points into a boundary and returning all the points lying on it.
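A sketch of this step using the modern C++ counterparts of cvCanny and cvFindContours is shown below. The Canny thresholds and the choice of the largest contour as the hand are assumptions of the sketch, not details taken from the project.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Sketch of contour extraction: Canny edges followed by findContours,
    // mirroring the cvCanny / cvFindContours steps described above.
    std::vector<cv::Point> findHandContour(const cv::Mat& handMask)
    {
        cv::Mat edges;
        cv::Canny(handMask, edges, 50, 150);  // thresholds are assumed values

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        // Pick the contour with the largest area as the hand boundary (an assumption).
        std::vector<cv::Point> hand;
        double bestArea = 0.0;
        for (const auto& c : contours) {
            double area = cv::contourArea(c);
            if (area > bestArea) { bestArea = area; hand = c; }
        }
        return hand;
    }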

3.3 Computation of Fingertip location
To detect the fingertip, the convex hull of the hand contour is computed. For each convexity defect of the hull, three values are returned: the start point, the end point, and the depth of the defect. The pointing finger is the start or end point of one of these defects, and is detected in this way.
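The start/end/depth triples described above correspond to convexity defects computed on the convex hull; a sketch using OpenCV's cv::convexHull and cv::convexityDefects might look like this. Taking the topmost defect endpoint as the fingertip is an assumption of the sketch.

    #include <opencv2/opencv.hpp>
    #include <vector>
    #include <climits>

    // Sketch of fingertip detection: compute the convex hull of the hand contour
    // and its convexity defects (each defect carries start, end and depth, the
    // three values mentioned above), then take the topmost start/end point.
    cv::Point findFingertip(const std::vector<cv::Point>& handContour)
    {
        if (handContour.size() < 4)
            return cv::Point(-1, -1);  // not enough points to form a hull

        std::vector<int> hullIdx;
        cv::convexHull(handContour, hullIdx, false);

        std::vector<cv::Vec4i> defects;  // each: [start idx, end idx, farthest idx, depth]
        cv::convexityDefects(handContour, hullIdx, defects);

        // Topmost (smallest y) start/end point is taken as the fingertip (assumption).
        cv::Point fingertip(0, INT_MAX);
        for (const auto& d : defects) {
            for (int idx : {d[0], d[1]}) {
                const cv::Point& p = handContour[idx];
                if (p.y < fingertip.y)
                    fingertip = p;
            }
        }
        return fingertip;
    }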

3.4 Tracking Finger tip Motion
The topmost point of the hand detected above is tracked from frame to frame, and the tracked path is drawn on the screen.
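A minimal sketch of this step, accumulating the fingertip positions and drawing the path on the displayed frame, could be (names are illustrative):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Append the latest fingertip and redraw the accumulated path.
    void drawFingertipPath(std::vector<cv::Point>& path,
                           const cv::Point& fingertip, cv::Mat& display)
    {
        path.push_back(fingertip);
        for (size_t i = 1; i < path.size(); ++i)
            cv::line(display, path[i - 1], path[i], cv::Scalar(0, 0, 255), 2);
    }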

4. Display of mouse pointer
Once the fingertip is detected, its coordinates must be mapped to the coordinates of the mouse pointer on the monitor. However, the fingertip locations cannot be used directly, due to the following problems:

(i) Noise from sources such as segmentation error makes it difficult to position the mouse pointer accurately.
(ii) Due to the limit on the tracking rate, the fingertip coordinates may be discontinuous.
(iii) The difference in resolution between the camera image and the monitor makes it difficult to position the mouse pointer accurately.

To circumvent these problems, a simple method is implemented. The displacement of the detected fingertip is averaged over a few frames, and this average displacement is used to move the mouse cursor on the screen. If the displacement is below a threshold value, the cursor is not moved.
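A sketch of this smoothing scheme is given below; the five-frame averaging window, the two-pixel dead zone, and the moveCursorBy stub (the actual cursor move needs a platform-specific call) are all assumptions made for illustration.

    #include <opencv2/opencv.hpp>
    #include <deque>
    #include <cmath>

    void moveCursorBy(int dx, int dy);  // platform-specific, declaration only

    // Average the fingertip displacement over the last few frames and move the
    // cursor only when the averaged displacement exceeds a small threshold.
    void updateCursor(std::deque<cv::Point>& recentDisplacements,
                      const cv::Point& prevTip, const cv::Point& currTip)
    {
        const size_t window = 5;       // frames to average over (assumed)
        const double threshold = 2.0;  // dead zone in pixels (assumed)

        recentDisplacements.push_back(currTip - prevTip);
        if (recentDisplacements.size() > window)
            recentDisplacements.pop_front();

        cv::Point2d avg(0, 0);
        for (const auto& d : recentDisplacements) { avg.x += d.x; avg.y += d.y; }
        avg.x /= recentDisplacements.size();
        avg.y /= recentDisplacements.size();

        if (std::sqrt(avg.x * avg.x + avg.y * avg.y) >= threshold)
            moveCursorBy(cvRound(avg.x), cvRound(avg.y));
    }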

5. Future applications
This application gives the user an easy way to move the mouse cursor on the screen. Since the user is already using a hand, the homing time (the time needed to place the hand on the mouse) is greatly reduced. With more robust fingertip detection, this application could replace the mouse.

6. Limitations of the system
The system was evaluated on the basis of hand detection, fingertip detection, and the effort required to place the mouse pointer at a specific location on the screen. The following observations were made:

i. Although the hand detection algorithm runs in real time, it works well only in an environment free from background noise. Specifically, if the background contains colors similar to skin, the algorithm will lose track of the hand or report its location falsely.

ii. The hand pose detection works well in the specific setup. However, when the camera is out of focus, the system has reported false pose detections. A better way to detect hand pose would be to use a machine learning algorithm (for example a neural network or a support vector machine).

iii. The mouse cursor movement on the screen needs more smoothing. Also, the user is not able to cover the entire screen.

7. Learning

It was fun to implement this project, and along with the fun we learned many things, some of which we hope to apply later to make the project more usable. Our learnings are summarized below:

i. Skin segmentation. This is a hard problem and a lot of work has been done in the area. Various color spaces have been tried, such as RGB (red, green, blue) and HSV (hue, saturation, value). Hue corresponds closely to skin color, and this forms the basis of most skin segmentation algorithms; using saturation along with hue makes the detection more robust. However, segmenting skin from a cluttered background is still challenging, and varying illumination conditions make it harder still.

ii. Ways to track hands. People have long tried to use hands as an interface to computers. One way to track a hand robustly is to use motion to segment it from the background, either by background subtraction or by using optical flow to estimate motion. We used background subtraction: two consecutive frames are subtracted, and wherever the difference is non-zero something has moved, which in our setup is the hand.

8. Applications implemented using the above

8.1. Resize an image
In this application, any stored image can be resized simply by slowly folding or unfolding the palm. The resizing is driven by the area of the hand. The area is calculated using the function cvContourMoments, which returns the contour moments, from which three values are obtained: the X and Y coordinates of the center of the contour and the contour area. When 'r' or 'R' is pressed, the image size is multiplied by the area factor, so the image grows as the fist unfolds and shrinks as it folds. The resized image can be saved by pressing 's' or 'S'.
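A sketch of this resize step, using the modern cv::moments in place of cvContourMoments, might look as follows; the reference area used to normalize the scale and the clamping range are assumptions, not values from the project.

    #include <opencv2/opencv.hpp>
    #include <vector>
    #include <algorithm>

    // The contour moments give the hand area (m00) and centroid (m10/m00, m01/m00),
    // the values described above. The area becomes a scale factor for the image.
    cv::Mat resizeByHandArea(const cv::Mat& image,
                             const std::vector<cv::Point>& handContour,
                             double referenceArea /* area of the open palm, assumed */)
    {
        cv::Moments m = cv::moments(handContour);
        double area = m.m00;                              // contour area
        double scale = area / referenceArea;              // folding the fist shrinks the image
        scale = std::max(0.1, std::min(scale, 2.0));      // clamp to a sane range (assumed)

        cv::Mat resized;
        cv::resize(image, resized, cv::Size(), scale, scale);
        return resized;
    }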

8.2. Rotating an image
This application lets us rotate an image by any angle simply by rotating the hand. The angle through which the hand is rotated is calculated, and the same angle is used to rotate the image.

The slope of the hand is found for its two positions, and the angle between them is calculated from these slopes. When 'r' or 'R' is pressed, the image is rotated by the calculated angle. The rotated image can be saved by pressing 's' or 'S'.
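A sketch of the rotation step is shown below. It assumes the hand's slope is taken from the line between the palm center and the fingertip in each position, which is an assumption of the sketch; the rotation itself uses cv::getRotationMatrix2D and cv::warpAffine.

    #include <opencv2/opencv.hpp>
    #include <cmath>

    // Compute the hand's orientation in two positions, take the difference,
    // and rotate the image by that angle about its center.
    cv::Mat rotateByHandAngle(const cv::Mat& image,
                              cv::Point center1, cv::Point tip1,
                              cv::Point center2, cv::Point tip2)
    {
        double a1 = std::atan2(double(tip1.y - center1.y), double(tip1.x - center1.x));
        double a2 = std::atan2(double(tip2.y - center2.y), double(tip2.x - center2.x));
        double angleDeg = (a2 - a1) * 180.0 / CV_PI;  // rotation of the hand in degrees

        cv::Point2f imgCenter(image.cols / 2.0f, image.rows / 2.0f);
        cv::Mat R = cv::getRotationMatrix2D(imgCenter, angleDeg, 1.0);

        cv::Mat rotated;
        cv::warpAffine(image, rotated, R, image.size());
        return rotated;
    }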

8.3. Moving mouse pointer
Once the fingertip has been located, we use that point to move the mouse pointer on the screen. Since the motion was quite jerky, we used OpenCV's implementation of the Kalman filter to smooth it. Once we were able to use the filter in this application, we started using it elsewhere too.
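A sketch of this smoothing with cv::KalmanFilter, using a constant-velocity model over the fingertip position, could look like this; the noise covariances are assumed values, not the ones used in the project.

    #include <opencv2/opencv.hpp>

    // State (x, y, vx, vy), measurement (x, y): a constant-velocity Kalman filter.
    cv::KalmanFilter makeFingertipFilter(const cv::Point& start)
    {
        cv::KalmanFilter kf(4, 2, 0);
        kf.transitionMatrix = (cv::Mat_<float>(4, 4) <<
            1, 0, 1, 0,
            0, 1, 0, 1,
            0, 0, 1, 0,
            0, 0, 0, 1);
        cv::setIdentity(kf.measurementMatrix);
        cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-3));      // assumed
        cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-1));  // assumed
        cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1));
        kf.statePost = (cv::Mat_<float>(4, 1) << (float)start.x, (float)start.y, 0, 0);
        return kf;
    }

    // Per frame: predict, then correct with the measured fingertip; the corrected
    // state is the smoothed position used to move the cursor.
    cv::Point smoothFingertip(cv::KalmanFilter& kf, const cv::Point& measured)
    {
        kf.predict();
        cv::Mat meas = (cv::Mat_<float>(2, 1) << (float)measured.x, (float)measured.y);
        cv::Mat corrected = kf.correct(meas);
        return cv::Point(cvRound(corrected.at<float>(0)), cvRound(corrected.at<float>(1)));
    }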

8.4. Drawing surface on the screen
Once the fingertip has been located, we use the hand to draw on the screen. Pressing 'c' clears the canvas, and pressing 'a' toggles pen up/pen down.
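A sketch of the drawing-loop body, with the key bindings described above, might be as follows; frame capture, fingertip detection, and the cv::waitKey call that supplies the key are assumed to happen elsewhere in the loop.

    #include <opencv2/opencv.hpp>

    // Draw onto a persistent canvas image: 'c' clears, 'a' toggles the pen.
    void handleDrawing(cv::Mat& canvas, cv::Point prevTip, cv::Point currTip,
                       bool& penDown, int key)
    {
        if (key == 'c') canvas.setTo(cv::Scalar::all(0));  // clear the canvas
        if (key == 'a') penDown = !penDown;                // toggle pen up/down
        if (penDown)
            cv::line(canvas, prevTip, currTip, cv::Scalar(255, 255, 255), 2);
    }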
