Poster abstracts

Recursive Structure from Motion using Hybrid Matching Constraints with Error Feedback

Fredrik Nyberg and Anders Heyden
Applied Mathematics Group, School of Technology and Society
Malmö University, Sweden

We propose an algorithm for recursive estimation of structure and motion in rigid body perspective dynamic systems, based on the novel concept of continuous-differential matching constraints for the estimation of the velocity parameters. The parameter estimation procedure is fused with a continuous-discrete extended Kalman filter for the state estimation. In addition, the structure and motion estimation processes are connected by a reprojection error constraint, in which feedback of the structure estimates is used to recursively obtain corrections to the motion parameters, leading to more accurate estimates and more robust performance of the method. The main advantages of the presented algorithm are that, after initialization, only three observed object point correspondences between consecutive pairs of views are required for the sequential motion estimation, and that both the parameter update and the correction step are performed using linear constraints only. Simulated experiments demonstrate the performance of the method.
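To illustrate the continuous-discrete Kalman filtering component, the sketch below shows a generic continuous-discrete EKF cycle: Euler integration of the state and covariance ODEs between measurements, followed by a standard discrete update. This is a minimal generic sketch with my own naming and discretization choices, not the authors' filter, which additionally couples the update to the matching-constraint parameter estimates.

```python
import numpy as np

def cd_ekf_step(x, P, f, F, h, H, Q, R, z, dt, n_sub=10):
    """One continuous-discrete EKF cycle.

    Integrates the state ODE x' = f(x) and the covariance ODE
    P' = F P + P^T F^T + Q with n_sub Euler sub-steps over dt,
    then applies a discrete measurement update with z = h(x)."""
    for _ in range(n_sub):
        Fx = F(x)                                  # Jacobian at current state
        x = x + (dt / n_sub) * f(x)
        P = P + (dt / n_sub) * (Fx @ P + P @ Fx.T + Q)
    Hx = H(x)                                      # measurement Jacobian
    S = Hx @ P @ Hx.T + R                          # innovation covariance
    K = P @ Hx.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - h(x))
    P = (np.eye(len(x)) - K @ Hx) @ P
    return x, P
```

On a linear system this reduces to the ordinary Kalman filter; in the structure-and-motion setting f and h would be the rigid-body dynamics and perspective projection.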

Spherical Catadioptric Arrays: Construction, Multi-View Geometry, and Calibration

By Douglas Lanman, Daniel Crispell, Megan Wachs, and Gabriel Taubin
Brown University

This paper introduces a novel imaging system composed of an array of spherical mirrors and a single high-resolution digital camera. We describe the mechanical design and construction of a prototype, analyze the geometry of image formation, present a tailored calibration algorithm, and discuss the effect that design decisions had on the calibration routine. This system is presented as a unique platform for the development of efficient multi-view imaging algorithms which exploit the combined properties of camera arrays and non-central projection catadioptric systems. Initial target applications include data acquisition for image-based rendering and 3D scene reconstruction. The main advantages of the proposed system include: a relatively simple calibration procedure, a wide field of view, and a single imaging sensor which eliminates the need for color calibration and guarantees time synchronization.

Reconstructing a 3D Line from a Single Catadioptric Image

By Douglas Lanman, Megan Wachs, Gabriel Taubin and Fernando Cukierman

This paper demonstrates that, for axial non-central optical systems, the equation of a 3D line can be estimated using only four points extracted from a single image of the line. This result, which is a direct consequence of the lack of vantage point, follows from a classic result in enumerative geometry: there are exactly two lines in 3-space which intersect four given lines in general position. We present a simple algorithm to reconstruct the equation of a 3D line from four image points. This algorithm is based on computing the Singular Value Decomposition (SVD) of the matrix of Plücker coordinates of the four corresponding rays. We evaluate the conditions for which the reconstruction fails, such as when the four rays are nearly coplanar. Preliminary experimental results using a spherical catadioptric camera are presented. We conclude by discussing the limitations imposed by poor calibration and numerical errors on the proposed reconstruction algorithm.
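The SVD step admits a compact sketch: a line x = (d, m) meets a ray (d_i, m_i) exactly when d·m_i + d_i·m = 0, i.e. when x is orthogonal to the "swapped" vector (m_i, d_i). The SVD of the 4×6 matrix of swapped ray coordinates therefore yields a two-dimensional null space, and intersecting that span with the Klein quadric d·m = 0 produces the (up to) two solution lines. The code below is an illustrative sketch of this construction, not the authors' implementation; function names and tolerances are my own.

```python
import numpy as np

def plucker(p, d):
    """Plücker coordinates (d, m) of the line through point p with direction d."""
    p, d = np.asarray(p, float), np.asarray(d, float)
    return np.concatenate([d, np.cross(p, d)])

def lines_meeting_four(rays, tol=1e-12):
    """Return the (up to two) lines meeting all four given rays.

    rays: four Plücker 6-vectors (d, m). Each row of W is a ray's swapped
    vector (m_i, d_i), so W x = 0 encodes incidence with all four rays."""
    W = np.array([np.concatenate([r[3:], r[:3]]) for r in rays])
    v1, v2 = np.linalg.svd(W)[2][-2:]        # basis of the 2-D null space
    q = lambda x: np.dot(x[:3], x[3:])       # Klein-quadric form d·m
    # q(v1 + t*v2) = C + B*t + A*t**2 = 0 selects the valid lines
    A, C = q(v2), q(v1)
    B = np.dot(v1[:3], v2[3:]) + np.dot(v2[:3], v1[3:])
    sols = []
    if abs(A) < tol:                         # v2 itself lies on the quadric
        if abs(B) > tol:
            sols.append(v1 - (C / B) * v2)
        sols.append(v2)
    else:
        disc = B * B - 4.0 * A * C
        if disc >= 0.0:                      # complex roots mean no real lines
            for t in ((-B + np.sqrt(disc)) / (2.0 * A),
                      (-B - np.sqrt(disc)) / (2.0 * A)):
                sols.append(v1 + t * v2)
    return sols
```

When the four rays are nearly coplanar, the null space of W grows beyond two dimensions and the construction degrades, matching the failure conditions discussed in the abstract.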

Synchronization and Calibration of a Camera Network for 3D Event Reconstruction from Live Video

Sudipta N. Sinha and Marc Pollefeys
Department of Computer Science, UNC Chapel Hill

Existing algorithms for automatic 3D reconstruction of dynamic scenes from multiple viewpoint video require calibrated and synchronized cameras. Our approach recovers all the necessary information by analyzing the motion of the silhouettes in video, removing the need for specific calibration data or a pre-calibration phase. The first step consists of independently recovering the temporal offset and epipolar geometry between different camera pairs, using an efficient RANSAC-based algorithm that randomly samples the 4D space of epipoles and finds corresponding extremal frontier points on the silhouettes. In the next stage, the calibration and synchronization of the complete camera network is recovered. For unsynchronized video streams, silhouettes interpolated based on sub-frame temporal offsets produce more accurate visual hulls. We demonstrate our approach on six different datasets acquired by computer vision researchers, containing 4 to 25 viewpoints with cameras in general configuration. We are currently exploring calibration of projector-camera systems and of heterogeneous camera networks containing video cameras, IR sensors, and 3D depth sensors, and we are working to handle severe silhouette extraction noise.
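The RANSAC stage described above follows the familiar hypothesize-and-verify pattern: draw a random minimal sample, fit a model hypothesis, and keep the hypothesis with the largest inlier set. A generic skeleton of that loop is sketched below, with my own parameter names; the paper's version samples the 4D epipole space and verifies against silhouette frontier points rather than fitting from point samples.

```python
import random

def ransac(data, sample_size, fit, error, threshold, n_iter=500, seed=0):
    """Generic RANSAC: repeatedly fit a model to a random minimal sample
    and keep the hypothesis consistent with the most data points."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        sample = rng.sample(data, sample_size)
        model = fit(sample)                  # may return None on degeneracy
        if model is None:
            continue
        inliers = [d for d in data if error(model, d) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

The number of iterations needed grows with the outlier ratio and the sample size, which is why an efficient sampling strategy over the epipole space matters in the silhouette setting.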