Darius Burschka graduated from Technische Universität München, Germany, was a postdoc at Yale University, and is now a research scientist in the Computational Interaction and Robotics Lab (CIRL) at Johns Hopkins University. His research centers around sensing systems and methods, particularly real-time vision, tracking, and three-dimensional reconstruction, for use in human-computer interfaces and mobile robotics.

Dana Cobzas is a postdoc at INRIA Rhône-Alpes. Her interests are in the area of image-based modeling, with applications in graphics rendering and image-based mobile robot navigation. She has contributed several new methods for registration of camera and 3D laser range sensory data, and applied these to an image-based model acquisition system for indoor mobile robot localization and navigation. Lately she has also been researching and implementing Structure-From-Motion (SFM) methods used to build 3D models from uncalibrated video alone.

Zachary Dodds received the PhD from Yale University in 2000. He is now an assistant professor of computer science at Harvey Mudd College in Claremont, CA. His research centers around the geometry of imaging and hand-eye coordination. He has investigated the specification of alignments based on stereo camera information. Theoretically, he has shown which alignments are verifiable under different levels of camera calibration. Practically, he has implemented a number of complete specification languages for visual servoing tasks.

Gregory D. Hager received the PhD in computer science from the University of Pennsylvania in 1988. He then held appointments at the University of Karlsruhe, the Fraunhofer Institute, and Yale University. He is now a full professor of computer science at Johns Hopkins University and a member of the Center for Computer Integrated Surgical Systems and Technology. His current research interests include visual tracking, vision-based control, medical robotics, and human-computer interaction.

Martin Jagersand graduated from the University of Rochester in 1997, was then a postdoc at Yale University and a research scientist at Johns Hopkins University. He is now an assistant professor at the University of Alberta. His research centers around dynamic vision: how the visual image changes under different camera or object/scene motions. He has applied this to robotics, developing a novel method for vision-based motion control of an uncalibrated robot and vision system. On the vision and graphics side, he has applied the same kind of modeling to synthesize novel animations of camera motion or articulated agent actions.

Keith Yerex graduated from the University of Alberta in 2002 and is now a researcher at Virtual Universe Corporation. His interests lie at the intersection of computer graphics and vision. In particular, he has shown how to combine geometry and texture obtained from uncalibrated video and use them efficiently in real-time image-based rendering. He is also interested in image-based approaches to lighting and animation of non-rigid motion.