Research Project P14:
Self-calibrating robot-vision system
Research Project Goal
Robotic arms generally measure their position with incremental
encoders, which track the rotation of each joint very precisely but
do not provide absolute positions. Many different methods exist for
calibrating the absolute position, all of which have significant
drawbacks. Since most robots have cameras which can see
their own end effectors, calibrating the position of the end effector
from the camera data would be very helpful. This calibration would
also provide the mapping from the arm's coordinate system to the
camera's coordinate system, which is essential for any type of
vision-guided manipulation.
Project Scope
Given a set of camera images of an arm, together with the measured
joint angles of the arm, estimate the joint-angle offsets and the
extrinsic parameters of the camera relative to the arm.
This will be done either with markers on the hand, or with the hand
holding an easily identified calibration pattern in a rigid grasp.
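Concretely, the estimation can be posed as a non-linear least-squares
problem. As a sketch of one possible formulation (not a prescribed
method): with joint-angle offsets d, camera extrinsics (R, t), and
marker positions X_k in the hand frame, minimize the total
reprojection error

    sum over images i and markers k of
        || project(K, R * FK(theta_i + d) * X_k + t) - u_ik ||^2

where FK is the arm's forward kinematics, K the camera intrinsics, and
u_ik the detected pixel location of marker k in image i.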
Two possible extensions are:
- Use the calibration to validate the full kinematics of the arm (e.g.
link lengths and axis alignment)
- Use feature extraction to perform this calibration without adding
markers to the hand
Sample Data
Robot kinematics (with sample MATLAB code to generate the pose of the
hand for a given set of joint angles)
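The provided sample code is in MATLAB; the minimal sketch below shows
the equivalent computation in Python, chaining per-joint homogeneous
transforms under the standard Denavit-Hartenberg convention. The DH
parameterization and the function names are assumptions for
illustration, not the arm's actual kinematic description.

    import numpy as np

    def dh_transform(theta, d, a, alpha):
        """Homogeneous transform for one joint (standard DH convention)."""
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([
            [ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0],
        ])

    def forward_kinematics(joint_angles, dh_params):
        """Pose of the hand in the arm base frame as a 4x4 matrix.

        dh_params: one (d, a, alpha) tuple per joint -- placeholder
        values, not the real arm's geometry.
        """
        T = np.eye(4)
        for theta, (d, a, alpha) in zip(joint_angles, dh_params):
            T = T @ dh_transform(theta, d, a, alpha)
        return T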
Data for a single calibration consists of several pictures of the arm
in different positions, together with the measured joint angles at
each position. We can gather data with different types of markers to
determine what works best.
Tasks
- Calibrate the camera
- Find the locations of the markers on the hand in the images
- Solve a non-linear system of equations to estimate encoder offsets,
camera extrinsic parameters, and the 3D locations of the markers on
the hand (see the sketch after this list).
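The first two tasks map onto standard tools (e.g. OpenCV's
calibrateCamera for intrinsics and findChessboardCorners for detecting
a calibration pattern). For the third, below is a minimal sketch of
the joint estimation using scipy.optimize.least_squares; the parameter
layout, the pinhole projection, and the reuse of forward_kinematics
from the earlier sketch are assumptions, not the project's prescribed
method.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def reprojection_residuals(params, joint_angle_sets, pixel_obs, K,
                               dh_params, n_markers):
        """Stacked reprojection errors over all images and markers.

        Assumed parameter layout:
        [joint offsets | camera rotation vector | camera translation |
         marker xyz coordinates in the hand frame].
        """
        n = len(dh_params)
        offsets = params[:n]
        R_cam = Rotation.from_rotvec(params[n:n + 3]).as_matrix()
        t_cam = params[n + 3:n + 6]
        markers = params[n + 6:].reshape(n_markers, 3)

        residuals = []
        for angles, obs in zip(joint_angle_sets, pixel_obs):
            # Hand pose from offset-corrected joint angles.
            T_hand = forward_kinematics(np.asarray(angles) + offsets,
                                        dh_params)
            for X_hand, uv in zip(markers, obs):
                # Marker in the base frame, then in the camera frame.
                X_base = T_hand @ np.append(X_hand, 1.0)
                X_cam = R_cam @ X_base[:3] + t_cam
                # Pinhole projection with intrinsics K.
                proj = K @ X_cam
                residuals.append(proj[:2] / proj[2] - uv)
        return np.concatenate(residuals)

    # Hypothetical usage: x0 stacks initial guesses for all unknowns.
    # result = least_squares(
    #     reprojection_residuals, x0,
    #     args=(joint_angle_sets, pixel_obs, K, dh_params, n_markers))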
Research Project Status
student names here
Point of Contact
Eric Berger, berger04@stanford.edu
Midterm Report
not yet submitted
Final Report
not yet submitted