Uncalibrated Visual Servoing

Azad Shademan, Amir-massoud Farahmand, Martin Jagersand


References

M. Jägersand, O. Fuentes, R. Nelson, "Experimental Evaluation of Uncalibrated Visual Servoing for Precision Manipulation," Proc. IEEE Int. Conf. Robot. Automat., Apr. 20-25, 1997, pp. 2874-2880.

A.-M. Farahmand, A. Shademan, M. Jägersand, "Global Visual-Motor Estimation for Uncalibrated Visual Servoing," Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Oct. 2007, pp. 1969-1974.

A.-M. Farahmand, A. Shademan, M. Jägersand, C. Szepesvári, "Model-Based and Model-Free Reinforcement Learning for Visual Servoing," Proc. IEEE Int. Conf. Robot. Automat., May 12-17, 2009, pp. 2917-2924.

A. Shademan, A.-M. Farahmand, M. Jägersand, "Towards Learning Robotic Reaching and Pointing: An Uncalibrated Visual Servoing Approach," Proc. Canadian Conf. Comput. Robot Vis. (CRV '09), May 24-27, 2009, pp. 229-236.

A. Shademan, A.-M. Farahmand, M. Jägersand, "Robust Jacobian Estimation for Model-Free Uncalibrated Visual Servoing," in preparation.


Description

Visual Servoing (VS) is the nonlinear closed-loop control process that regulates a visual alignment error to zero in real time. The nonlinearities are due to the projective geometry of the camera and the rigid-body kinematics and dynamics of the serial-link manipulator. The visual alignment error may either be expressed in the image space and used directly, which is commonly known as Image-Based Visual Servoing (IBVS), or it may be used indirectly to infer a 3D task-space error after some 3D scene reconstruction or pose tracking, which is commonly known as Position-Based Visual Servoing (PBVS). The PBVS approach usually requires a 3D geometric model of the object, which is a limiting factor when visual servoing is to be performed in unstructured environments. The IBVS approach does not require such a model; however, the depth of the object with respect to the camera appears in the control law. This depth may be estimated, but doing so requires some knowledge of the intrinsic and extrinsic camera parameters.

An attractive class of image-based visual servos that does not need any a priori calibration information is called uncalibrated visual servoing (Jägersand et al. 1997). The uncalibrated approach is a strong candidate for achieving flexibility in unstructured environments. At the University of Alberta, we investigate projective-geometric and learning methods to improve the performance of uncalibrated visual servoing systems.

Methods

Without knowledge of the visual-motor model, a fully uncalibrated IBVS controller can be implemented by numerically estimating or updating the Jacobian from the visual-motor data.
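The closed-loop structure is the standard resolved-rate IBVS law applied with an estimated Jacobian. A minimal sketch of one control step (the function name, the pseudoinverse law, and the gain are illustrative, not a specific implementation from the papers above):

```python
import numpy as np

def servo_step(J_hat, s, s_star, gain=0.1):
    """One uncalibrated IBVS step.

    J_hat  : current numerical estimate of the visual-motor Jacobian (m x n),
             mapping joint velocities to image-feature velocities
    s      : current image-feature vector (m,)
    s_star : desired image-feature vector (m,)

    Returns the joint correction dq = -gain * pinv(J_hat) @ (s - s_star),
    which drives the image error toward zero when J_hat is accurate.
    """
    return -gain * np.linalg.pinv(J_hat) @ (s - s_star)
```

In a real loop, `dq` is sent to the robot, new features are tracked, and `J_hat` is refreshed by one of the estimation methods below.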

Secant Update Method (Broyden Update) for Jacobian Estimation: The early work in uncalibrated visual servoing is based on Broyden's secant update rule for the Jacobian (Jägersand et al. 1997).
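Given the joint displacement Δq and the observed feature displacement Δs over the last control step, the secant condition JΔq ≈ Δs is enforced by a rank-one correction to the current estimate. A minimal numpy sketch (the damping gain `lam` and variable names are illustrative):

```python
import numpy as np

def broyden_update(J, dq, ds, lam=1.0):
    """Rank-one secant (Broyden) update of the visual-motor Jacobian.

    J   : current Jacobian estimate (m x n)
    dq  : joint-space displacement over the last step (n,)
    ds  : observed image-feature displacement (m,)
    lam : update gain in (0, 1]; lam < 1 damps the correction
    """
    dq = dq.reshape(-1, 1)
    ds = ds.reshape(-1, 1)
    denom = float(dq.T @ dq)
    if denom < 1e-12:           # no motion: nothing to learn from this step
        return J
    # correct J only along the direction actually explored by dq
    return J + lam * (ds - J @ dq) @ dq.T / denom
```

Because each update only corrects the Jacobian along the most recent motion direction, the estimate stays local and can drift when the arm revisits distant parts of the workspace, which motivates the global least-squares method below.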

Least-Squares-Based Jacobian Estimation: Least-squares methods can be used to estimate the visual-motor Jacobian efficiently. We have used an efficient local least-squares method that provides an estimate of the visual-motor Jacobian in the vicinity of any previously observed point by looking at the global visual-motor space. This method is general and estimates the Jacobian at any point in the workspace directly from raw visual-motor data in a close neighborhood of the point under consideration (Farahmand et al. 2007).
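The idea can be sketched as fitting a local affine model s ≈ s0 + J(q - q0) to the k nearest stored visual-motor pairs. The following is an illustrative sketch, not the implementation from Farahmand et al. 2007; the neighbor count and selection are assumptions:

```python
import numpy as np

def local_ls_jacobian(q_data, s_data, q0, k=10):
    """Local least-squares estimate of the visual-motor Jacobian at q0.

    q_data : (N, n) previously visited joint configurations
    s_data : (N, m) corresponding image-feature vectors
    q0     : (n,) query configuration
    k      : number of nearest neighbors used in the fit
    """
    # k nearest stored configurations to q0, by joint-space distance
    d = np.linalg.norm(q_data - q0, axis=1)
    idx = np.argsort(d)[:k]
    dQ = q_data[idx] - q0
    # affine fit s ~ s0 + J dq: append a column of ones for the intercept
    X = np.hstack([dQ, np.ones((k, 1))])
    coef, *_ = np.linalg.lstsq(X, s_data[idx], rcond=None)
    return coef[:-1].T    # (m, n) Jacobian estimate; last row is s0
```

Unlike the Broyden update, this estimate uses all nearby past data rather than the single most recent step, so it remains valid when a previously visited region of the workspace is revisited.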

Robust M-Estimation of the Visual-Motor Jacobian: The assumption that the visual-motor data fits the visual-motor model being estimated may not hold when operating in unstructured environments. Outliers with respect to the visual-motor model may degrade the estimated Jacobian hyperplane, which either makes the system unstable or drives the arm toward a local minimum. We therefore estimate the uncalibrated (model-free) visual-motor Jacobian using the theory of robust M-estimation: a statistically robust estimator rejects the outliers caused by visual tracking errors (Shademan et al. 2009).
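One standard way to realize an M-estimator is iteratively reweighted least squares (IRLS) with a Huber weight function. The sketch below applies it to the local hyperplane fit; the tuning constants (c = 1.345, MAD-based scale) are conventional defaults for Huber estimation, not values taken from the paper:

```python
import numpy as np

def huber_weights(r, c=1.345):
    """Huber weights: 1 for small normalized residuals, c/|r| beyond c."""
    a = np.abs(r)
    w = np.ones_like(a)
    mask = a > c
    w[mask] = c / a[mask]
    return w

def robust_jacobian(dQ, dS, n_iter=10, c=1.345):
    """IRLS M-estimate of the Jacobian from local displacement pairs.

    dQ : (k, n) joint displacements around the operating point
    dS : (k, m) corresponding feature displacements, possibly
         contaminated by outliers from visual tracking failures
    """
    # ordinary least-squares initialization
    J, *_ = np.linalg.lstsq(dQ, dS, rcond=None)
    for _ in range(n_iter):
        r = dS - dQ @ J                          # residuals (k, m)
        rn = np.linalg.norm(r, axis=1)           # per-sample residual norm
        scale = 1.4826 * np.median(rn) + 1e-12   # robust (MAD-like) scale
        w = huber_weights(rn / scale, c)         # downweight large residuals
        Wsq = np.sqrt(w)[:, None]
        J, *_ = np.linalg.lstsq(Wsq * dQ, Wsq * dS, rcond=None)
    return J.T    # (m, n): maps joint velocities to feature velocities
```

Samples whose residuals are large relative to the robust scale receive small weights, so gross tracking failures have bounded influence on the fitted hyperplane, whereas a plain least-squares fit can be pulled arbitrarily far by a single outlier.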


