Tracking Module Theory
- Recovery of visual motion
- Image Level Features
- Our Implementation
Recovery of visual motion
Moving objects in our three-dimensional Euclidean world are
projected onto the two-dimensional image plane of our
camera (the device we use to observe our world).
One of the most important problems in Computer
Vision is the recovery of the motion of those moving
objects in the 3-D world. The first step is the recovery
of the motion of the projected 2-D images of those
objects. We distinguish between two types of
(image-plane) motion processing:
- Full-field motion processing
We use global
information (the full frame) and perform an optimization
(optical flow), applying iterative techniques.
This approach is unsatisfactory for real-time
applications, since it requires specialized hardware.
- Feature-based motion processing
We concentrate on spatially localized areas, rich in
geometric and physical information (e.g. edges,
corners, etc.). Our aim is to recover the motion
of those areas (see the sketch after this list).
These methods are less computationally intensive, because
we use only a part of the information that a full frame
carries. They are appropriate for
real-time applications and can be implemented in software
(a more flexible solution).
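To make the contrast concrete, here is a minimal sketch of feature-based
motion recovery. It is not XVision code; the image layout (8-bit,
row-major), patch size, and search radius are illustrative assumptions.
A small reference patch from the previous frame is located in the next
frame by minimizing the sum of squared differences (SSD) over a local
search window, so only a small neighborhood of the image is processed
rather than the full frame.

```cpp
// Minimal sketch of feature-based motion recovery via SSD block matching.
// Not XVision code; frame layout and parameters are illustrative.
#include <climits>
#include <vector>

struct Image {
    int width, height;
    std::vector<unsigned char> pixels;                      // row-major gray levels
    unsigned char at(int x, int y) const { return pixels[y * width + x]; }
};

// Sum of squared differences between the size x size patch at (px, py)
// in `prev` and the candidate patch at (cx, cy) in `next`.
long ssd(const Image& prev, int px, int py,
         const Image& next, int cx, int cy, int size) {
    long sum = 0;
    for (int dy = 0; dy < size; ++dy)
        for (int dx = 0; dx < size; ++dx) {
            long d = (long)prev.at(px + dx, py + dy) - (long)next.at(cx + dx, cy + dy);
            sum += d * d;
        }
    return sum;
}

// Recover the image-plane displacement (dx, dy) of the patch by an
// exhaustive search over a (2*radius+1)^2 window around its old position.
void trackPatch(const Image& prev, const Image& next,
                int px, int py, int size, int radius, int* dx, int* dy) {
    long best = LONG_MAX;
    *dx = 0; *dy = 0;
    for (int oy = -radius; oy <= radius; ++oy)
        for (int ox = -radius; ox <= radius; ++ox) {
            int cx = px + ox, cy = py + oy;
            if (cx < 0 || cy < 0 ||
                cx + size > next.width || cy + size > next.height)
                continue;                                   // skip out-of-bounds candidates
            long cost = ssd(prev, px, py, next, cx, cy, size);
            if (cost < best) { best = cost; *dx = ox; *dy = oy; }
        }
}
```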
Image Level Features
The first step of feature-based algorithms
is the extraction of image-level features.
A feature is a piece of information extracted from
an image region of interest. Some examples of features
are:
- Edges
Spatial discontinuities in the image
plane. These discontinuities are caused
by depth or surface-normal discontinuities
of a scene surface, by illumination
discontinuities (shadows) in the scene, or
by occlusion. Some of the attributes of
an edge are its location and
orientation on the image plane, as well
as its magnitude (strong or weak edges);
a sketch of computing these attributes follows this list.
- Corners
Intersecting edges. An attribute of a corner is
the angle between the two intersecting
edges. Of course, the corner also inherits the
attributes of its edges.
- Gray-level image region
A feature can also be an image region with
a specified gray-level pattern (this is the
kind of feature used in our application;
see setting tracking parameters).
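The following is a minimal sketch, again with an assumed 8-bit row-major
image layout rather than any XVision type, of how the edge attributes
listed above can be measured: the image gradient at a pixel is estimated
by central differences, its magnitude gives the edge strength (strong
vs. weak), and its angle gives the edge orientation. The strong/weak
threshold is an arbitrary illustrative value.

```cpp
// Minimal sketch of gradient-based edge attributes (magnitude, orientation).
// Not XVision code; the image layout and threshold are illustrative.
#include <cmath>
#include <vector>

struct GrayImage {
    int width, height;
    std::vector<unsigned char> pixels;                      // row-major gray levels
    unsigned char at(int x, int y) const { return pixels[y * width + x]; }
};

struct EdgeAttributes {
    double magnitude;     // gradient magnitude (edge strength)
    double orientation;   // gradient direction in radians
    bool strong;          // magnitude above the chosen threshold
};

// Gradient-based edge measurement at an interior pixel (x, y),
// using central differences in the horizontal and vertical directions.
EdgeAttributes edgeAt(const GrayImage& img, int x, int y, double threshold = 30.0) {
    double gx = (img.at(x + 1, y) - img.at(x - 1, y)) / 2.0;
    double gy = (img.at(x, y + 1) - img.at(x, y - 1)) / 2.0;
    EdgeAttributes e;
    e.magnitude = std::sqrt(gx * gx + gy * gy);
    e.orientation = std::atan2(gy, gx);
    e.strong = e.magnitude > threshold;
    return e;
}
```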
Our Implementation
Our implementation of tracking uses the
XVision package developed by Greg Hager at Yale.