When you calibrate the camera, the calibration is accurate only within a small region around the center of the calibration plate. If you choose two camera stations that are very similar, the transformation matrices produced during calibration will share a small window of accuracy that may not be large enough to ensure correct calculation of the 3D points the task requires. If you choose camera stations that differ considerably (stations 2 and 3, for example), your window of accuracy will be larger.
If you used the built-in centroid-finding algorithm, then the failure is not due to the identification of the centroids. If you supplied inaccurate centroid data, however, there are sure to be problems: without accurate centroid data, camera calibration is impossible. Improve whatever method you were using to find the centroids and try again.
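As one illustration of what accurate centroid finding involves, here is a minimal sketch of an intensity-weighted (sub-pixel) centroid of a blob in a grayscale image. The function name and the synthetic image are hypothetical, not part of the actual task's algorithm.

```python
import numpy as np

def centroid(image):
    """Return the (row, col) intensity-weighted centroid of `image`."""
    img = np.asarray(image, dtype=float)
    rows, cols = np.indices(img.shape)
    total = img.sum()
    # Weighted mean of pixel coordinates gives a sub-pixel estimate.
    return (rows * img).sum() / total, (cols * img).sum() / total

# A small synthetic blob, symmetric about pixel (2, 3).
img = np.zeros((5, 7))
img[2, 3] = 4.0
img[1, 3] = img[3, 3] = img[2, 2] = img[2, 4] = 1.0
r, c = centroid(img)   # → (2.0, 3.0)
```

A weighted centroid of this kind is far more reliable than simply picking the brightest pixel, because it averages out pixel-level noise.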
When you are asked to click on the feature points of the rocket and the launch pad, the algorithm assumes that you identified the same point on the object from both camera stations. If you identify two different positions, triangulation accuracy decreases and you are more likely to fail. Try to click a point in the middle of the rocket and a point in the middle of the launch pad from both camera stations.
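To see why consistent point identification matters, here is a sketch of linear (DLT) triangulation from two camera projection matrices. The matrices and pixel coordinates below are made-up illustrations, not values from the actual task; if the two clicked pixels correspond to different physical points, this system becomes inconsistent and the recovered 3D point drifts.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Triangulate a 3D point from pixel coords x1, x2 in two views.

    P1, P2 are 3x4 projection matrices; each image point contributes
    two linear constraints, and the SVD null vector solves the system.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize

# Two simple cameras: identity pose, and a 1-unit translation along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = X_true[:2] / X_true[2]                     # projection in view 1
x2 = (X_true - [1.0, 0.0, 0.0])[:2] / X_true[2] # projection in view 2
X = triangulate(P1, P2, x1, x2)                 # recovers X_true
```

With exactly matching image points the reconstruction is essentially exact; mismatched clicks would perturb the rows of the linear system and pull the solution away from the true point.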