In this project we develop an approach to automatic 3D model construction from multiple 2D views. This enables users to create 3D models from standard 2D images and video input. An object can be scanned into a model by simply rotating it in front of a video camera, and a scene can be scanned using a handheld video camera to capture different viewpoints. The technical basis of the approach involves recently developed methods in non-Euclidean geometry which show that reconstruction is possible without cumbersome camera calibration.
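As a rough illustration of this kind of uncalibrated reconstruction (a sketch under assumptions, not our capture pipeline), the following recovers a projective reconstruction of matched points from two views; the use of OpenCV/NumPy, the input arrays pts1/pts2, and all parameter choices are illustrative assumptions.

    # Sketch: projective 3D reconstruction from two uncalibrated views.
    # pts1, pts2: Nx2 float arrays of matched image points (hypothetical input).
    import numpy as np
    import cv2

    def projective_reconstruction(pts1, pts2):
        # Estimate the fundamental matrix from correspondences alone; no camera calibration.
        F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
        inliers = mask.ravel().astype(bool)
        p1, p2 = pts1[inliers], pts2[inliers]

        # Epipole e' in the second image is the left null vector of F (F^T e' = 0).
        _, _, Vt = np.linalg.svd(F.T)
        e2 = Vt[-1]

        # Canonical uncalibrated camera pair: P1 = [I | 0], P2 = [[e']_x F | e'].
        P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
        e2x = np.array([[0.0, -e2[2], e2[1]],
                        [e2[2], 0.0, -e2[0]],
                        [-e2[1], e2[0], 0.0]])
        P2 = np.hstack([e2x @ F, e2.reshape(3, 1)])

        # Triangulate; the model is recovered only up to a projective transform,
        # which is what reconstruction without calibration provides.
        X = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
        return (X[:3] / X[3]).T   # Nx3 projective point cloud

The reconstruction is projective rather than metric; upgrading it via self-calibration or leaving it projective is a design choice that depends on the application.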
For this project we already have a lab infrastructure and a set of programs which perform the capture (see www.cs.ualberta.ca/~vis/ibmr). However, to make its use appealing in a wide range of applications we need to integrate it with existing modeling and rendering systems, as well as demonstrate its usefulness in a test production.
Specifically, for the summer we seek one computer science student to program a plug-in that integrates our system with "Maya", an industry-standard modeling and rendering system.
We seek a second person with artistic talent as well as some computer science knowledge to plan and produce a demonstration of the system. This will involve capturing a number of real-world objects and characters, modeling a scene, integrating the components into a virtual 3D world, and using this world to produce a computer animation. The proposed topic for the animation is recreating a historical scene from museum objects for educational purposes.
This project requires a student with solid experience in graphics programming to write a shader-based implementation of our dynamic texturing algorithm and to integrate the shader into our real-time image-based rendering system.
Specifically, the following will be researched and implemented:
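For a rough flavor of the per-pixel computation such a shader performs, the sketch below blends several captured reference images by how well their viewing directions agree with the novel view. This is an illustrative view-dependent weighting under assumed interfaces, not our actual dynamic texturing algorithm.

    # Sketch only: view-dependent blending of K captured reference colors at one
    # surface point, of the general flavor a fragment shader evaluates per pixel.
    import numpy as np

    def view_dependent_blend(view_dir, capture_dirs, capture_colors, sharpness=8.0):
        # view_dir: unit vector from the surface point toward the novel viewpoint.
        # capture_dirs: (K, 3) unit vectors toward the K capture cameras.
        # capture_colors: (K, 3) colors sampled from the K reference textures.
        # Exponentiated-cosine weighting is an illustrative choice, not a prescription.
        cos = np.clip(capture_dirs @ view_dir, 0.0, None)
        w = cos ** sharpness
        w /= max(w.sum(), 1e-8)
        return w @ capture_colors   # blended RGB color

In the shader version the same weights would be computed per fragment from interpolated directions, with the reference images bound as textures.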
In contrast, humans effortlessly interact physically and visually with the world -- a human can easily pick up a visible object, and can also watch a whole task, learn it, and transform the visual information into the necessary motor (muscle) movements. In vision-based robotics, instead of conventional programming, the human can show the robot what to do using gestures carrying symbolic (what) and deictic (where) information. In this approach, the robot and human share the same visual frame, and the robot interprets the visual directions to carry out the task.
A main challenge for hand-eye coordination in uncalibrated environments is how to transform visual information into the motor frame and how to use it for motion control while ensuring stable and convergent behavior. In visual feedback motion control this is a continuous process, and the instantaneous coordinate transforms can be estimated on-line using Broyden methods from optimization theory. Researchers have solved some example tasks using these ideas, but complete and coherent principles for applying uncalibrated or partially calibrated methods to arbitrary tasks are lacking.
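For concreteness, a minimal sketch of this on-line estimation and control loop follows, assuming hypothetical read_features() and move_joints() interfaces; the step size, damping constant, and termination test are assumptions rather than a prescribed design.

    # Sketch: uncalibrated visual servoing with a Broyden (secant) update of the
    # image Jacobian J ~ dy/dq, estimated on-line from the observed motion.
    import numpy as np

    def uncalibrated_visual_servo(read_features, move_joints, q0, y_target, J0,
                                  lam=0.1, tol=1e-3, max_iters=200):
        # read_features(): returns the current image feature vector y (hypothetical).
        # move_joints(dq): commands an incremental joint-space motion (hypothetical).
        # J0: initial Jacobian guess, e.g. obtained from a few small test motions.
        q, J = q0.astype(float).copy(), J0.astype(float).copy()
        y = read_features()
        for _ in range(max_iters):
            e = y - y_target
            if np.linalg.norm(e) < tol:
                break
            # Newton-like visual feedback step using the current Jacobian estimate.
            dq = -lam * (np.linalg.pinv(J) @ e)
            move_joints(dq)
            q = q + dq
            y_new = read_features()
            dy = y_new - y
            # Broyden rank-one update: correct J only along the direction actually moved.
            J = J + np.outer(dy - J @ dq, dq) / (dq @ dq + 1e-12)
            y = y_new
        return q, J

Whether such an iteration converges for a given task depends on the initial Jacobian estimate and the step size, which is exactly the kind of stability and convergence question the project would examine.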
In a summer project, one or more of the following can be studied: