Martin Jagersand, Professor
M.Sc., Physics, Chalmers University, Sweden
M.Sc., Computer Science, University of Rochester
Engineering Licentiate, Chalmers University
Ph.D., Computer Science, University of Rochester
Postdoc, Yale University
Previous image-based (model-free) synthesis has mostly been limited to changes in viewing angle. Here we show how to also simulate the actions of an articulated agent.
Experimental results with adaptive visual servoing, establishing some low-level properties important to visual-space robot programming.
Three-minute video showing a PUMA robot and Utah/MIT hand:
- Reach, pick up, insert and screw in an ordinary lightbulb using visual goals pointed out by the human (me). 320x240 preview mpeg
- Solve a shape sorter puzzle. (Only on full length ICRA proceedings video)
We achieve dexterous robot fine manipulation by combining two techniques: an approximate estimated motor model, based on the grasp tetrahedron acquired when grasping an object, and visual feedback to achieve accurate fine manipulation.
We present a promising approach for combined visual model acquisition and agent control. The approach differs from previous work in that a full coupled Jacobian is estimated on-line, without any prior models or special calibration movements. We show how the estimated models can be used for visual-space robot task specification, planning and control. In the other direction, the same type of models can be used for view synthesis.
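The core of on-line Jacobian estimation without prior models is a rank-one secant update: after each move, correct the Jacobian estimate so it would have predicted the image-feature change actually observed. Below is a minimal sketch of this idea using a Broyden-style update; the function names (`broyden_update`, `servo_step`), the gain values, and the least-squares step are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def broyden_update(J, dy, dq, lam=1.0):
    """Rank-one secant (Broyden-style) update of the estimated
    visual-motor Jacobian.

    J   : current Jacobian estimate (m image features x n joints)
    dy  : observed change in image features after the move
    dq  : joint-space move that was executed
    lam : update weight in (0, 1]
    """
    dq = dq.reshape(-1, 1)
    dy = dy.reshape(-1, 1)
    denom = float(dq.T @ dq)
    if denom < 1e-12:            # no motion: nothing to learn from
        return J
    # Correct J so it would have predicted the observed feature change.
    return J + lam * (dy - J @ dq) @ dq.T / denom

def servo_step(J, y, y_goal, gain=0.1):
    """One visual-servoing step: least-squares joint move that
    reduces the image-space error under the current Jacobian model."""
    e = y_goal - y
    dq, *_ = np.linalg.lstsq(J, gain * e, rcond=None)
    return dq
```

Alternating `servo_step` (act) and `broyden_update` (learn) drives the features to the goal while the Jacobian estimate improves along the visited directions, with no calibration phase.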
A more detailed description of the low level visual servo controller, including the convergence enhancing restricted step and homotopy methods.
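Two convergence aids of the kind described can be sketched simply: a restricted step caps the joint-space move so the local linear (Jacobian) model remains trustworthy, and a homotopy replaces a distant goal with a sequence of intermediate image-space goals. The helper names (`restricted_step`, `homotopy_goals`), the straight-line path, and the norm cap are illustrative assumptions, not the controller's actual formulation.

```python
import numpy as np

def restricted_step(J, e, gain=0.2, max_step=0.05):
    """Gauss-Newton-style servo step with a trust-region-like cap
    on the joint-space step norm."""
    dq, *_ = np.linalg.lstsq(J, gain * e, rcond=None)
    n = np.linalg.norm(dq)
    if n > max_step:             # restrict the step so the local
        dq *= max_step / n       # linear model stays valid
    return dq

def homotopy_goals(y_start, y_goal, n_waypoints=5):
    """Intermediate image-space goals along a straight path from the
    current features to the target, servoed one after another."""
    ts = np.linspace(0.0, 1.0, n_waypoints + 1)[1:]
    return [y_start + t * (y_goal - y_start) for t in ts]
```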
Defines information in the 3-DOF scale and spatial coordinate space of images, and derives an information infinitesimal, giving a pointwise information measure in this space, useful for attention selection.
How a special class of rotations can be used to learn about the shape of unknown objects.
A short description of higher-level aspects of uncalibrated visual control, with many experiments solving complex manipulation tasks in unstructured environments.
Long version of most of the hand-eye work.
How to deal with redundancy and complexity when doing real manipulation tasks in unstructured environments.
How to perform precisely controlled rotations using visual servoing on rotations defined in an affine image frame.
High degree of freedom (3, 6, 12, n) uncalibrated visual servoing with online estimation of the full coupled Jacobian.
Information in scale space is defined as a successive Kullback contrast between scale levels. Numerous experiments on real images show that this measure matches our intuitive sense of the scale at which image information resides.
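The flavor of a Kullback contrast between successive scale levels can be sketched as follows: blur the image one scale step, then measure the Kullback-Leibler divergence between the intensity distributions before and after, i.e. how much distributional detail the coarser level loses. This is a loose illustration, not the paper's derivation; the global intensity histograms, the box blur stand-in for a Gaussian scale step, and the names `blur` and `kl_contrast` are all assumptions.

```python
import numpy as np

def blur(img, k=2):
    """Crude separable box blur, a stand-in for one Gaussian scale step."""
    kernel = np.ones(2 * k + 1) / (2 * k + 1)
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='same'), 0, out)

def kl_contrast(fine, coarse, bins=32):
    """Kullback-Leibler divergence between intensity histograms of
    adjacent scale levels: how much detail the coarser level lost."""
    rng = (min(fine.min(), coarse.min()), max(fine.max(), coarse.max()))
    p, _ = np.histogram(fine, bins=bins, range=rng, density=True)
    q, _ = np.histogram(coarse, bins=bins, range=rng, density=True)
    eps = 1e-12                      # avoid log(0) in empty bins
    p = p + eps
    q = q + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))
```

On this measure a richly textured image loses far more under one blur step than a near-constant one, in line with the intuition that it carries more information at the fine scale.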
University of Alberta, Department of Computing Science.
Copyright © Department of Computing Science. All rights reserved.
Last modified: Sun Sep 9 19:05:00 2001