REAL-TIME OBJECT TRACKING SYSTEM USING FPGA
ABSTRACT: Camera Link is an industry-standard interface to digital video cameras. It uses a standard connector and multiple serial links that carry image data from monochrome and colour cameras with line scan or area scan sensors, which makes it a convenient way to connect cameras to an FPGA system.

Introduction: Camera Link is a communication interface for vision applications. It extends the base technology of Channel Link to provide a specification better suited to vision applications. For years, the scientific and industrial digital video market lacked a standard method of communication: frame grabber and camera manufacturers developed products with different connectors, making cable production difficult for manufacturers and confusing for customers. A connectivity standard between digital cameras and frame grabbers has become increasingly necessary as cameras diversify and data rates continue to rise. The Camera Link interface reduces support time and the cost of that support, the standard cable can handle the increased signal speeds, and the common cable assembly allows customers to reduce costs through volume pricing.

General Description: Object tracking often involves the use of a camera to provide scene data from which the motion of real-world objects is mapped to system controls. Object tracking for control-based applications usually requires a real-time system, as sensing delays in the input can cause instability in closed-loop control. This is particularly important if the user must receive sensory feedback from the system. While image processing at video rates can be achieved on a serial processor such as a desktop computer, the required hardware is quite cumbersome. Furthermore, as the number of objects that must be detected and reliably tracked increases, the real-time processing capabilities of even the fastest desktop computer can be challenged. This is due to several factors, such as the large data set represented by a captured image and the complex operations that may need to be performed on it. At a real-time video rate of 25 frames per second, even a single operation performed on every pixel of a 768 by 576 colour image amounts to roughly 11 million operations per second.
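The figure above follows directly from the frame dimensions and the frame rate; the short calculation below makes the arithmetic explicit. The resolution and frame rate are the values quoted in the text, and the rest is plain arithmetic.

```python
# Pixel throughput implied by a single per-pixel operation at PAL video rates.
WIDTH, HEIGHT = 768, 576   # image size used in the text
FRAME_RATE = 25            # frames per second

pixels_per_frame = WIDTH * HEIGHT
operations_per_second = pixels_per_frame * FRAME_RATE

print(f"{pixels_per_frame:,} pixels per frame")            # 442,368
print(f"{operations_per_second:,} operations per second")  # 11,059,200
```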

The main difficulty in video tracking is to associate target locations in consecutive video frames, especially when the objects are moving fast relative to the frame rate. Video tracking systems therefore usually employ a motion model which describes how the image of the target might change under the different possible motions of the object being tracked. The role of the tracking algorithm is to analyse the video frames in order to estimate the motion parameters, which characterize the location of the target. Field programmable gate arrays (FPGAs) provide an alternative to using serial processors. Continual advances in the size and functionality of FPGAs over recent years have resulted in increasing interest in their use as implementation platforms for real-time video processing.

Tracking Algorithm: Several object tracking techniques could be used in this application. Direct, motion-based algorithms work on the differences between successive frames; by detecting these differences, the motion of an object may be inferred directly. Such motion-based methods require frame buffering and were not considered in this application for that reason. An alternative is a segmentation-based approach, in which the target objects are segmented from the rest of the scene in the captured image and the object is then tracked by considering its change of position in successive frames. One simple method is to segment the image based on colour by applying thresholds to each pixel. This is ideal for stream processing because thresholding is a point operation which can be implemented easily on the FPGA; a software sketch of this approach is given below.
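To make the segmentation step concrete, the following is a minimal software sketch of the colour-threshold point operation and a centroid-based position estimate, written here in Python with NumPy. The threshold values and function names are illustrative assumptions rather than part of the original design; on the FPGA the same per-pixel comparison is applied directly to the incoming pixel stream.

```python
import numpy as np

# Illustrative RGB thresholds for the target colour; the actual values are not
# given in the text and would be tuned to the object being tracked.
LOWER = np.array([150, 0, 0], dtype=np.uint8)    # minimum R, G, B
UPPER = np.array([255, 80, 80], dtype=np.uint8)  # maximum R, G, B

def segment_by_colour(frame: np.ndarray) -> np.ndarray:
    """Point operation: flag each pixel whose RGB value falls inside the thresholds.

    On the FPGA the same comparison is applied to each pixel as it streams in,
    so no frame buffer is required.
    """
    return np.all((frame >= LOWER) & (frame <= UPPER), axis=-1)

def centroid(mask: np.ndarray):
    """Estimate the target position as the centroid of the segmented pixels."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None            # target not visible in this frame
    return float(xs.mean()), float(ys.mean())
```

In hardware, the centroid can likewise be accumulated on the fly with running sums of the x and y coordinates and a pixel count, so both stages remain single-pass stream operations.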

Port Assignments: The Camera Link interface has three configurations. Since a single Channel Link chip is limited to 28 bits, some cameras may require several chips in order to transfer data efficiently. The naming conventions for the various configurations are:
• Base—Single Channel Link chip, single cable connector.
• Medium—Two Channel Link chips, two cable connectors.
• Full—Three Channel Link chips, two cable connectors.

Power: Power will not be provided on the Camera Link connector. The camera will receive power through a separate cable. Each camera manufacturer will define its own power connector, current, and voltage requirements.

Communication: Two LVDS pairs have been allocated for asynchronous serial communication to and from the camera and frame grabber. Cameras and frame grabbers should support at least 9600 baud. These signals are:
• SerTFG—Differential pair carrying serial communications to the frame grabber.
• SerTC—Differential pair carrying serial communications to the camera.
The serial interface will have the following characteristics: one start bit, one stop bit, no parity, and no handshaking. A host-side sketch of these settings follows the Camera Control Signals list below.

Camera Control Signals: Four LVDS pairs are reserved for general-purpose camera control. They are defined as camera inputs and frame grabber outputs, and camera manufacturers can define these signals to meet the needs of a particular product. The signals are:
• Camera Control 1 (CC1)
• Camera Control 2 (CC2)
• Camera Control 3 (CC3)
• Camera Control 4 (CC4)
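As a concrete illustration of the serial characteristics given under Communication (at least 9600 baud, one start bit, one stop bit, no parity, no handshaking), the sketch below opens a UART-style port with those settings using the pySerial library. The port name, the 8-data-bit assumption, and the camera command are placeholders, and in practice SerTC/SerTFG are usually reached through the frame grabber vendor's API rather than a raw serial device, so treat this as an assumed host-side approximation.

```python
import serial  # pySerial

# Assumed device name; replace with whatever port the frame grabber exposes.
PORT = "/dev/ttyUSB0"

# Serial characteristics from the Communication section: at least 9600 baud,
# one stop bit, no parity, no handshaking (8 data bits assumed).
link = serial.Serial(
    port=PORT,
    baudrate=9600,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    xonxoff=False,   # no software handshaking
    rtscts=False,    # no hardware handshaking
    timeout=1.0,
)

link.write(b"TRIGGER_MODE 1\r\n")  # hypothetical camera command
print(link.readline())             # camera reply, if any
link.close()
```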

Functional Block Diagram:
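The block diagram itself is not reproduced in this copy. As a rough stand-in, the skeleton below wires the stages described above (frame capture over Camera Link, colour-threshold segmentation, and centroid-based tracking) into a single frame-by-frame loop. The capture_frames generator and the dummy frames it yields are assumptions made so the outline stays self-contained; the two per-frame stages repeat the earlier sketch in compressed form.

```python
import numpy as np
from typing import Iterable, Optional, Tuple

Position = Optional[Tuple[float, float]]

def capture_frames(count: int = 100) -> Iterable[np.ndarray]:
    """Stand-in for the Camera Link receiver; here it just yields dummy RGB frames."""
    for _ in range(count):
        yield np.zeros((576, 768, 3), dtype=np.uint8)

def segment(frame: np.ndarray) -> np.ndarray:
    """Colour-threshold point operation (see the Tracking Algorithm sketch)."""
    return np.all((frame >= (150, 0, 0)) & (frame <= (255, 80, 80)), axis=-1)

def locate(mask: np.ndarray) -> Position:
    """Centroid of the segmented pixels, or None when the target is absent."""
    ys, xs = np.nonzero(mask)
    return (float(xs.mean()), float(ys.mean())) if xs.size else None

def run() -> None:
    previous: Position = None
    for frame in capture_frames():
        position = locate(segment(frame))
        if position and previous:
            # The change in position between consecutive frames is the tracked motion.
            print("motion:", position[0] - previous[0], position[1] - previous[1])
        previous = position

if __name__ == "__main__":
    run()
```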
