The Virtual Vision Lab: Instructional Lab Modules for Machine Vision
NSF Combined Research and Curriculum Development Program, Award Number 9315517
PIs: Peter K. Allen, Columbia University, Terrance Boult, Lehigh University, and Daryl Lawton, Georgia Tech
This project's goal was to produce instructional lab modules for new and emerging techniques in robotic vision. This was done by creating a Virtual Vision Laboratory (VVL), an integrated multimedia presentation format that lets the student learn about robot vision techniques through textual sources, runnable algorithm code, live and canned digital imagery, interactive modification of program parameters, and insertion of student-developed code for certain parts of the tutorial. Its aim is to translate a research paper in robot vision into a usable and understandable laboratory exercise that highlights the important aspects of the research in a realistic environment combining both simulated virtual components and real camera imagery.

To properly understand the use of robot vision algorithms, one should work with a real robotics workcell. Due to the complexity, fragility, and expense of actual robotic equipment, however, students usually have no hands-on resources for testing the topics they learn in class. Simulation can provide a realistic robotic vision lab that is accessible to students. Our approach was to model our own robotics lab very accurately and translate it into a set of 3-D solid models that can be manipulated and moved as in a real workcell. All movement in the workspace is shown with animation, along with a simulated image from a robot-mounted camera.

Three modules have been completed: robotic pick and place using stereo triangulation, real-time motion tracking, and application of dynamic contours (snakes) for segmentation.
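The snake module rests on iterative energy minimization: contour points trade off internal smoothness against an image-derived external energy. As a rough illustration only (not the module's actual code), the following is a minimal greedy active-contour iteration in Python with NumPy; the function name, neighborhood size, and energy weighting are all illustrative assumptions.

```python
import numpy as np

def greedy_snake_step(points, ext_energy, alpha=1.0):
    """One greedy iteration of a discrete active contour ("snake").
    Each control point (row, col) moves to the position in its 3x3
    neighborhood that minimizes internal (spacing) energy plus
    external (image) energy. `ext_energy` is a 2-D array that is low
    where the contour should settle (e.g. negative gradient magnitude).
    This is a hypothetical sketch, not the tutorial's implementation."""
    h, w = ext_energy.shape
    new_points = points.copy()
    n = len(points)
    for i, (r, c) in enumerate(points):
        prev_pt = new_points[(i - 1) % n]   # already-updated neighbor
        next_pt = points[(i + 1) % n]       # not-yet-updated neighbor
        best, best_e = (r, c), np.inf
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if not (0 <= rr < h and 0 <= cc < w):
                    continue
                # Internal energy: keep neighboring points close.
                e_int = ((rr - prev_pt[0]) ** 2 + (cc - prev_pt[1]) ** 2
                         + (rr - next_pt[0]) ** 2 + (cc - next_pt[1]) ** 2)
                e = alpha * e_int + ext_energy[rr, cc]
                if e < best_e:
                    best_e, best = e, (rr, cc)
        new_points[i] = best
    return new_points
```

In practice the step is repeated until the contour stops moving; the weight `alpha` controls how strongly smoothness competes with the image term.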
There are two other tutorial modules, one on stereo triangulation and camera calibration and another on real-time motion tracking. Both of these tutorials use a set of 3-D object models and canned images to simulate a real robotics workcell environment. The programs are written using Inventor software, with a Netscape browser for the help files. The software for the calibration and visual tracking tutorials runs on a Silicon Graphics workstation (see this README file).
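The core computation behind the stereo triangulation tutorial can be sketched as follows. This is a hedged illustration rather than the tutorial's code: it uses standard linear (DLT) triangulation from two calibrated views, and the camera matrices and point coordinates below are made-up examples.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3-D point X such that
    x1 ~ P1 @ X and x2 ~ P2 @ X, where P1, P2 are 3x4 camera
    projection matrices and x1, x2 are matching (u, v) pixel
    coordinates in the two views."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A X = 0 via SVD; the solution is the right singular
    # vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize

# Two hypothetical cameras 0.2 units apart along the x axis,
# both with identity intrinsics for simplicity.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

X_true = np.array([0.1, 0.05, 1.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

print(np.round(triangulate(P1, P2, x1, x2), 6))  # -> [0.1  0.05 1.  ]
```

In a real workcell the projection matrices come from a prior camera calibration step, which is why the tutorial pairs calibration with triangulation.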
The links below lead to the help pages of these two modules to give you a flavor of the tutorials. The actual tutorials must run on a host machine, which invokes these help screens.