
Model-Based Sensor Planning

Contact:
Steven Abrams <abrams@cs.columbia.edu>
The MVP system is a robust framework for planning sensor viewpoints. Given a
CAD description of an object and its environment, a model of a vision sensor,
and a specification of the features to be viewed, MVP generates a camera
location, orientation, and lens settings (focus-ring adjustment, focal length,
aperture) that ensure a robust view of the features. In this context, a
robust view is one that is unobstructed, in focus, properly magnified, and
well-centered within the field of view. In addition, MVP attempts to find a
viewpoint with as much margin for error in all parameters as possible.
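As a rough illustration of the kind of test such a planner applies to a candidate
viewpoint, the sketch below checks field-of-view containment, focus (depth of
field), and magnification for a set of feature points using simple thin-lens
formulas. The function and parameter names are hypothetical and this is not MVP's
actual interface or algorithm; MVP also handles occlusion and searches for the
viewpoint that maximizes the margin on each constraint.

    import numpy as np

    def satisfies_view_constraints(cam_pos, optical_axis, feature_pts,
                                   focal_mm, focus_dist_mm, f_number,
                                   half_fov_deg, pixel_mm, min_px_per_mm,
                                   blur_limit_mm=0.01):
        """Hypothetical viewpoint test: every feature point must lie inside the
        field of view, within the depth of field, and be imaged with enough
        magnification.  (Occlusion is not checked here.)"""
        axis = optical_axis / np.linalg.norm(optical_axis)
        for p in feature_pts:
            ray = p - cam_pos
            depth = float(np.dot(ray, axis))       # distance along the optical axis, mm
            if depth <= focal_mm:
                return False                       # behind or too close to the lens
            # Field of view: angle between the optical axis and the feature ray.
            angle = np.degrees(np.arccos(depth / np.linalg.norm(ray)))
            if angle > half_fov_deg:
                return False
            # Focus: blur-circle diameter from the thin-lens model.
            aperture = focal_mm / f_number
            blur = aperture * abs(depth - focus_dist_mm) / depth \
                   * focal_mm / (focus_dist_mm - focal_mm)
            if blur > blur_limit_mm:
                return False
            # Magnification: image-plane millimetres (hence pixels) per scene millimetre.
            magnification = focal_mm / (depth - focal_mm)
            if magnification / pixel_mm < min_px_per_mm:
                return False
        return True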
The next image shows a model of an object; two edges on the inner cube are to
be viewed. The following image shows the visibility volume: from anywhere
inside this volume, the features can be seen. The last image shows the view
from the viewpoint that the system computed.
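A visibility volume of this kind can be approximated crudely by sampling: keep the
candidate camera positions from which every feature point is unoccluded. The sketch
below idealizes obstacles as spheres so the occlusion test stays short; the names are
illustrative only, and this sampling scheme is not how MVP itself derives the volume
from the CAD model.

    import numpy as np

    def _point_segment_dist(p0, p1, c):
        """Shortest distance from point c to the segment p0->p1."""
        d = p1 - p0
        t = np.clip(np.dot(c - p0, d) / np.dot(d, d), 0.0, 1.0)
        return np.linalg.norm(p0 + t * d - c)

    def occluded(cam, feat, sphere_obstacles):
        """True if the sight line from cam to feat passes through any sphere."""
        return any(_point_segment_dist(cam, feat, center) <= radius
                   for center, radius in sphere_obstacles)

    def approximate_visibility_volume(candidate_positions, feature_pts, sphere_obstacles):
        """Keep the candidate camera positions from which every feature is visible."""
        return [cam for cam in candidate_positions
                if not any(occluded(cam, f, sphere_obstacles) for f in feature_pts)]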
A video showing a sensor placement experiment is also available.


We have added moving environment models to MVP and are exploring methods of
extending it to plan viewpoints in a dynamic environment. The first approach,
currently limited to the case of moving obstacles (the target, i.e. the
features to view, is stationary), is to sweep the model of each moving object
along its trajectory and to plan around the swept volumes rather than the
actual objects. A temporal interval search is used in conjunction with the
swept volumes to plan viewpoints that are valid for various intervals during
the task. This approach has been implemented in simulation, and experiments
are being carried out in our robotics lab. The lab setup involves two robot
arms: one carrying the camera, the other moving about in the environment.
The Dynamic MVP system plans viewpoints that guarantee robust views of a
stationary target despite the robot motion. The viewpoints are realized by
the first robot while the second robot moves about, performing its task.
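The sketch below illustrates the swept-volume idea combined with a temporal interval
search, again with spheres standing in for the CAD geometry and with hypothetical
names: the moving obstacle is replaced by the union of its placements sampled over a
time interval (an approximation of the true swept volume), and the interval is
recursively subdivided until sub-intervals are found during which a candidate
viewpoint stays unoccluded. This is a sketch of the idea, not the implemented
Dynamic MVP planner.

    import numpy as np

    def _point_segment_dist(p0, p1, c):
        """Shortest distance from point c to the segment p0->p1."""
        d = p1 - p0
        t = np.clip(np.dot(c - p0, d) / np.dot(d, d), 0.0, 1.0)
        return np.linalg.norm(p0 + t * d - c)

    def swept_spheres(trajectory, radius, t0, t1, n=16):
        """Approximate the swept volume of a moving spherical obstacle over
        [t0, t1] by the union of its placements at n sampled times."""
        return [(trajectory(t), radius) for t in np.linspace(t0, t1, n)]

    def viewpoint_clear(cam, feature_pts, spheres):
        """True if no sight line from cam to a feature passes through the swept volume."""
        return all(_point_segment_dist(cam, f, c) > r
                   for f in feature_pts for c, r in spheres)

    def valid_intervals(cam, feature_pts, trajectory, radius, t0, t1, min_span=0.05):
        """Recursively find sub-intervals of [t0, t1] over which the candidate
        viewpoint stays valid against the swept volume of the moving obstacle."""
        if viewpoint_clear(cam, feature_pts, swept_spheres(trajectory, radius, t0, t1)):
            return [(t0, t1)]                   # valid over the whole interval
        if t1 - t0 <= min_span:
            return []                           # interval too small to split further
        mid = 0.5 * (t0 + t1)
        return (valid_intervals(cam, feature_pts, trajectory, radius, t0, mid, min_span) +
                valid_intervals(cam, feature_pts, trajectory, radius, mid, t1, min_span))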
Want more details? Read on.
Here are some of our papers on model-based sensor planning (available
on-line). More are on the way.
Steven Abrams and Peter K. Allen.
Swept Volumes and Their Use in Viewpoint Computation in Robot Work-Cells.
To appear in Proceedings of the IEEE International Symposium on Assembly and
Task Planning, Pittsburgh, PA, August 1995.
Steven Abrams and Peter K. Allen.
Computing Swept Volumes for Sensor Planning Tasks. In Proceedings of the
1994 DARPA Image Understanding Workshop.
Steven Abrams, Peter K. Allen, and Konstantinos A. Tarabanis.
Dynamic Sensor Planning. In Proceedings of the 1993 IEEE International
Conference on Robotics and Automation, Atlanta, GA, May 1993.
Also in Proceedings of the 1993 DARPA Image Understanding Workshop.
Questions? Comments? Feedback?
If this research is of interest to you, please let us know!