The ultimate goal of this module is for you to instruct the robot's gripper to pick up an object in the real 3D world and place it at a specified position. The problem is that the only data you have about the 3D world comes from the 2D images captured by the camera mounted on the robot.
You need to translate positions in those 2D images into 3D-world coordinates so that the gripper can locate the objects it will manipulate.
Finding that correspondence between points in the 2D images we acquire and their actual 3D locations is called "calibrating the camera".
To calibrate a camera, we need exact coordinates, so we must start with an object in the 3D world whose geometry we know completely: a calibration plate.
This is the calibration plate we will use:
We already know that there are 36 dots on the plate, and we know the distance between adjacent dots. By identifying the dots (or "centroids") in the 2D image, we can relate points in the image to points in the 3D world. However, once we have identified a point in the 2D image, we can only be sure that it lies somewhere on a ray from the camera in the direction of the identified point; we cannot determine its exact 3D coordinates from that image alone.
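The plate's known geometry is what makes calibration possible: every dot has an exact 3D position in the plate's own coordinate frame. As a sketch, assuming the 36 dots form a 6x6 grid with a 10 mm spacing (the tutorial states only the count and that the spacing is known; the grid shape and spacing value here are illustrative), the reference coordinates can be written down directly:

```python
import numpy as np

# Assumed layout: 6x6 grid, 10 mm between dot centres.
# These specific values are illustrative, not from the tutorial.
DOT_SPACING_MM = 10.0
GRID_SIZE = 6

# 3D coordinates of every dot in the plate's own coordinate frame.
# The plate is flat, so every dot lies in the z = 0 plane.
plate_points = np.array([
    (col * DOT_SPACING_MM, row * DOT_SPACING_MM, 0.0)
    for row in range(GRID_SIZE)
    for col in range(GRID_SIZE)
])

print(plate_points.shape)  # (36, 3)
```

These reference coordinates are what the identified centroids in the 2D image are matched against during calibration.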
We cannot determine the exact coordinates because, although we know the point lies on that ray, there is no way to tell how far along the ray it actually is.
To solve this problem, we take two pictures of the plate from different locations (which we will call camera stations). If we can identify the same point from both stations, the intersection of the rays computed from each station gives us the exact 3D coordinates of that point.
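The intersection of the two rays can be computed by solving a small linear system; a common, robust variant (a minimal sketch, not necessarily the tutorial's own method) finds the midpoint of the shortest segment between the rays, which coincides with the intersection when the rays meet exactly:

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Midpoint of the shortest segment between rays o1 + t1*d1 and o2 + t2*d2.

    For noise-free rays that truly intersect, this is their intersection."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Minimise |(o1 + t1*d1) - (o2 + t2*d2)|^2 over t1, t2:
    # setting the derivatives to zero gives a 2x2 linear system.
    b = o2 - o1
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    t1, t2 = np.linalg.solve(A, np.array([b @ d1, b @ d2]))
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

# Two hypothetical camera stations both observing the same 3D point.
target = np.array([2.0, 3.0, 10.0])
o1 = np.array([0.0, 0.0, 0.0])   # station 1 position
o2 = np.array([5.0, 0.0, 0.0])   # station 2 position
p = triangulate(o1, target - o1, o2, target - o2)
print(p)  # ~ [ 2.  3. 10.]
```

In practice the two rays rarely intersect exactly because of measurement noise, which is why the midpoint formulation is preferred over looking for a literal intersection.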
To begin the calibration process, select a camera station from which you would like to capture an image of the centroid plate. Choose the "Selecting a Camera Station" entry from the list of help topics below to read more about that stage of the tutorial.