Mobile Manipulation and Telerobotics for Space Exploration

Azad Shademan, Alejandro Hernandez-Herdocia, David Lovi, Neil Birkbeck and Martin Jagersand




References

A. Shademan, A. Hernandez-Herdocia, A. Rachmielowski, A. Farahmand, N. Birkbeck, D. Lovi, M. Jagersand, and L. Hartman, "Towards Operational Semi-autonomous Telerobotics for Space Exploration," poster and extended abstract in Proceedings of the 6th Canadian Space Exploration Workshop (CSEW6), Saint-Hubert, Québec, Dec. 1-3, 2008, pp. 103-105.





Description

Current robots have neither humans' sophisticated decision-making capabilities nor their motor skills and dexterity, but they can travel farther and survive space environments far better than we can. Historically, human presence in space exploration has been preferred because of humans' intellectual capacities; however, human missions are expensive and carry safety risks. Both manned and unmanned missions could benefit from better robotic and autonomous capabilities. For instance, even during manned space station and shuttle missions, astronaut accomplishments are limited by the complexity of operating instruments and robotic equipment. The problem with full autonomy in space is the large gap between artificial and human intelligence. On the other hand, conventional telerobotic technologies work poorly in the presence of large communication delays. The challenge, then, is to research and develop the right balance of semi-autonomy for the remote robot while improving the operator interface by simplifying command languages and enriching the sensory feedback to the operator (telepresence). Failure to address these issues could lead to decreasing scientific return as mission control becomes increasingly complex. Our main thesis is to bring robotic technology to fit the human by means of manipulators with human-like properties, command interfaces that exploit humans' ability for visual-spatial reasoning, gesture-based commands, and computer-vision-based predictive display technology that provides high-fidelity visual immersion in the remote environment.

We have recently acquired a new WAM arm to develop a dual-arm manipulator that can operate in unstructured environments with minimal human supervision. The new dual-arm robot will be mounted on a Segway mobile platform to increase its effective workspace. Although the system is still under development, we presented results from an early prototype at the Canadian Space Exploration Workshop (CSEW) held at the Canadian Space Agency (CSA) headquarters in Saint-Hubert, Québec, in 2008, where we proposed an application of the new platform to space exploration research: a semi-autonomous telerobotic operation interface. A higher-level command interface is added to the operator interface using natural deictic visual commands (e.g., visually pointing to locations and gesturing to command manipulations), and these commands are carried out using visual servoing. Since these methods use visual (camera-coordinate) descriptions, there is no need for cumbersome camera-scene-robot calibrations. One advantage of working entirely in camera coordinates is flexibility and ease of adaptation to new, unstructured environments (as in planetary exploration); another is higher robustness and motion accuracy from continuous endpoint sensory feedback. A sketch of how such a deictic command can be expressed follows below. Once the platform is complete, we believe it will make a significant international impact as the only dual-arm manipulator developed at Canadian universities.
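To make the deictic interface concrete, here is a minimal Python sketch of how an operator's point-and-click could become an image-space goal. The `tracker` object and its `gripper_xy()` method are hypothetical stand-ins for a real feature tracker; nothing here is tied to a particular library.

    import numpy as np

    def deictic_target(click_xy, tracker):
        """Turn an operator's point-and-click (pixel coordinates) into an
        image-space goal for visual servoing. `tracker` is a hypothetical
        feature tracker reporting the gripper's current image position;
        both goal and gripper live in pixel space, so no camera-scene-robot
        calibration is required."""
        goal = np.asarray(click_xy, dtype=float)

        def image_error():
            # Error = current gripper position minus desired position, in pixels.
            return np.asarray(tracker.gripper_xy(), dtype=float) - goal

        return image_error

The resulting error function is exactly what a visual servoing loop (see the Technologies section below) drives to zero.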



System overview diagram



Technologies

Predictive display: Research shows that human performance degrades rapidly under delayed visual feedback; predictive display compensates for this communication delay. 3D geometry and appearance models of the remote environment are automatically captured from cameras mounted on the moving robot. By rendering the remote scene virtually, the operator is immersed in it: viewpoint changes and robot motion are rendered immediately in response to operator input, eliminating the round-trip delay of waiting for camera images. The camera images are instead used indirectly, to update the model with any changes in the remote scene or newly explored areas.
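The following Python sketch illustrates the separation between the immediate, locally rendered display and the delayed camera stream. The `render` and `update_model` callables are assumed hooks into the captured geometry/appearance model, not part of any real API.

    import numpy as np

    class PredictiveDisplay:
        """Minimal sketch of a predictive display loop, assuming `render`
        and `update_model` are hooks into a captured 3D scene model."""

        def __init__(self, render, update_model):
            self.render = render
            self.update_model = update_model
            self.predicted_pose = np.eye(4)  # operator-side pose estimate (4x4 transform)

        def on_operator_command(self, motion):
            # Apply the commanded motion to the local pose immediately and
            # re-render, so the operator never waits for the round trip.
            self.predicted_pose = self.predicted_pose @ motion
            return self.render(self.predicted_pose)

        def on_camera_frame(self, frame):
            # Delayed real images never drive the display directly; they are
            # used only to refine the model (scene changes, newly seen areas).
            self.update_model(frame)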

Uncalibrated visual servoing: The main idea is to improve performance by closing the motion control loop over external (visual) sensory feedback. While calibrated approaches provide a provably stable solution to the nonlinear visual control task, their main limitation is the need for a priori system parameters in the control law. In contrast, the uncalibrated approach requires no knowledge of camera or robot parameters: the system model is estimated and updated during manipulation. This is well suited to exploration, where one cannot model in advance the environment or the objects one may find and wish to pick up and manipulate.
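One common way to realize this, sketched below as a plausible reading of the approach rather than the exact implementation, is to estimate the image Jacobian online with a rank-one (Broyden) update and take least-squares joint steps against it. Only NumPy is assumed; the `robot` and `image_error` names in the commented loop are hypothetical.

    import numpy as np

    def broyden_update(J, dq, de, alpha=0.1):
        """Rank-one (Broyden) update of the estimated image Jacobian J,
        which maps joint displacements dq to image-feature changes de.
        No camera or robot parameters are needed; J is learned from motion."""
        denom = float(dq @ dq)
        if denom < 1e-9:            # negligible motion: nothing to learn
            return J
        return J + alpha * np.outer(de - J @ dq, dq) / denom

    def servo_step(J, e, gain=0.5):
        """Least-squares joint step driving the image-feature error e to zero."""
        dq, *_ = np.linalg.lstsq(J, -gain * e, rcond=None)
        return dq

    # Hypothetical control loop around the two functions above:
    # e = image_error()
    # while np.linalg.norm(e) > 1.0:        # tolerance in pixels
    #     dq = servo_step(J, e)
    #     robot.move_joints(dq)
    #     e_new = image_error()
    #     J = broyden_update(J, dq, e_new - e)
    #     e = e_new

Because the Jacobian estimate is refreshed from observed motion at every step, the loop adapts to unmodeled changes in the camera-robot geometry, which is the property the paragraph above emphasizes for unstructured exploration settings.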


