3D Reconstruction of the "Invisibles"
CVPR 2013 Tutorial

Sunday, June 24, 2013 PM

Organizer:      Jingyi Yu



Organizer

    Jingyi Yu is an Associate Professor in the Computer and Information Sciences Department at the University of Delaware. He received his B.S. from Caltech in 2000 and his M.S. and Ph.D. degrees in EECS from MIT in 2005. His research interests span a range of topics in computer vision, computer graphics, and computational photography, including multi-perspective imaging, unconventional cameras, robust 3D reconstruction, and real-time rendering. He has served as the Program Chair of the OMNIVIS Workshop ’11, the General Chair of Projector-Camera Systems ’08, and an Area and Session Chair of ICCV ’11. He received the NSF CAREER Award in 2009 and the Air Force Young Investigator Award in 2010.


Course Description

The problem of modeling and reconstructing the “invisibles”, e.g., specular or transparent objects such as 3D fluid wavefronts and gas flows, has attracted much attention in recent years. Successful solutions can benefit numerous applications in oceanography, fluid mechanics, and computer graphics, as well as lead to new insights into shape reconstruction algorithms. The problem, however, is inherently difficult for a number of reasons. First, such objects do not have an appearance of their own; instead, they borrow the appearance of nearby diffuse objects. Second, determining the light path through these objects for shape reconstruction is non-trivial, since refractions or reflections non-linearly alter the light paths. Finally, dynamic specular or transparent objects often exhibit spatially and temporally varying distortions that are hard to correct.
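
To make the second difficulty concrete: under Snell's law, the refracted direction depends non-linearly on the unknown surface normal and on the refractive indices, so shape and light transport are coupled even at a single interface. The following Python fragment is a minimal sketch of one refraction event, written for this description rather than taken from the tutorial materials; the function name and the air/water indices are illustrative assumptions.

    import numpy as np

    def refract(d, n, n1=1.0, n2=1.33):
        """Vector form of Snell's law for unit incident direction d.

        n is the unit normal pointing back toward the incident side;
        n1/n2 are, e.g., air/water indices (an assumption of this sketch).
        Returns None on total internal reflection.
        """
        eta = n1 / n2
        cos_i = -np.dot(n, d)
        sin2_t = eta * eta * (1.0 - cos_i * cos_i)
        if sin2_t > 1.0:
            return None                       # total internal reflection
        cos_t = np.sqrt(1.0 - sin2_t)
        return eta * d + (eta * cos_i - cos_t) * n

    # A small tilt of the (unknown) normal changes the refracted ray non-linearly.
    d = np.array([0.0, 0.0, 1.0])             # incident ray, looking straight down
    for tilt in (0.0, 0.1, 0.2):
        n = np.array([np.sin(tilt), 0.0, -np.cos(tilt)])
        print(tilt, refract(d, n))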

Existing solutions can essentially be viewed as a special class of correspondence matching algorithms. Often a known pattern such as a checkerboard is positioned near the surface; conceptually, one can analyze the corresponding feature points in the observed camera views and then apply stereo or volumetric reconstruction techniques. In this tutorial, we discuss a broad range of classical solutions as well as emerging approaches based on computational cameras/videos.
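
As a rough illustration of the first step in such pipelines, the sketch below recovers point-pixel correspondences from a distorted view of a checkerboard using OpenCV. The board size, square size, file name, and corner ordering are assumptions made for this description, not the setup used in any particular method covered by the tutorial.

    import cv2
    import numpy as np

    PATTERN = (9, 6)        # inner corners of the assumed checkerboard
    SQUARE_SIZE = 0.02      # assumed 2 cm squares

    # Known positions of the corners on the physical pattern plane (metres),
    # assumed here to match the row-major order of the detected corners.
    pattern_pts = np.array([(c * SQUARE_SIZE, r * SQUARE_SIZE)
                            for r in range(PATTERN[1])
                            for c in range(PATTERN[0])], dtype=np.float32)

    img = cv2.imread("distorted_view.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    assert img is not None, "expected a view of the pattern seen through the surface"

    found, corners = cv2.findChessboardCorners(img, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            img, corners, (5, 5), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        # Each row pairs an undistorted pattern point with its distorted pixel;
        # this displacement field is the input to shape-from-distortion methods.
        correspondences = np.hstack([pattern_pts, corners.reshape(-1, 2)])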

Classical Approaches. Most existing solutions build upon point-pixel correspondences: a special planar pattern such as a checkerboard is placed near the surface, and one or more cameras acquire the distorted pattern for shape reconstruction. A common issue in point-pixel based solutions is ambiguity: a pixel corresponds to a ray from the camera, while the specular surface point can lie at any position along that ray. We discuss state-of-the-art solutions that resolve this ambiguity by adding constraints such as smoothness, integrability, and multi-view consistency, and we examine the pros and cons of each method in depth.
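
The ambiguity is easy to see with a toy example. In the sketch below (made-up coordinates, written only to illustrate the point), a mirror surface is observed from a single view: every depth hypothesis along the pixel's ray admits a normal, given by the law of reflection, that explains the same observed reflection of a known reference point.

    import numpy as np

    def unit(v):
        return v / np.linalg.norm(v)

    cam_center = np.array([0.0, 0.0, 0.0])          # illustrative camera at the origin
    ray_dir    = unit(np.array([0.05, 0.0, 1.0]))   # ray through the observed pixel
    pattern_pt = np.array([0.3, 0.0, 0.2])          # known point seen in reflection

    for depth in (0.5, 1.0, 1.5):                   # three equally valid hypotheses
        surface_pt = cam_center + depth * ray_dir
        to_pattern = unit(pattern_pt - surface_pt)
        # Law of reflection: the normal bisects the view and reflected directions.
        normal = unit(to_pattern - ray_dir)
        print(depth, surface_pt, normal)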

Computational Imaging. An emerging class of solutions aims to use novel computational cameras and displays to directly resolve the point-pixel ambiguity by acquiring ray-ray correspondences. In this tutorial, we discuss four recent approaches: 1) the light path triangulation method by Kutulakos and Steger for recovering static specular objects, 2) the light field probe approach by Wetzstein et al. for acquiring both static and dynamic specular objects, 3) the Bokode approach by our group for reconstructing dynamic fluid wavefronts, and 4) the multi-camera approaches by Atcheson et al. and our group for recovering 3D gas flows and wavefronts.
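
To illustrate what a ray-ray correspondence buys, the sketch below triangulates the surface point from a camera ray and the corresponding refracted ray (for example, one decoded from a light field probe) and then recovers the normal from the vector form of Snell's law. The coordinates, refractive indices, and function names are assumptions made for illustration; the systems discussed in the tutorial differ in their details.

    import numpy as np

    def unit(v):
        return v / np.linalg.norm(v)

    def closest_point_between_rays(o1, d1, o2, d2):
        """Midpoint of the shortest segment between two (nearly intersecting) rays."""
        w = o1 - o2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w, d2 @ w
        denom = a * c - b * b
        t = (b * e - c * d) / denom
        s = (a * e - b * d) / denom
        return 0.5 * ((o1 + t * d1) + (o2 + s * d2))

    # Camera ray propagating toward the surface, and the refracted ray decoded
    # from the probe, expressed with its direction continuing from the surface
    # into the medium toward the probe (an assumption of this sketch).
    cam_o,   cam_d   = np.array([0.0, 0.0, 0.0]), unit(np.array([0.0, 0.1, 1.0]))
    probe_o, probe_d = np.array([0.0, 0.5, 2.0]), unit(np.array([0.0, 0.3, 1.0]))

    surface_pt = closest_point_between_rays(cam_o, cam_d, probe_o, probe_d)
    n_air, n_water = 1.0, 1.33
    # Snell's law in vector form implies the normal is parallel (up to sign)
    # to n1 * d_incident - n2 * d_transmitted.
    normal = unit(n_air * cam_d - n_water * probe_d)
    print(surface_pt, normal)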



Technical Content

A. Introduction to invisible (transparent and reflective) object reconstruction.

B. Problem definition and challenges.

C. Traditional approaches.

     1. Point-pixel correspondences.

     2. Reconstruction techniques.

     3. Ambiguity and solutions.

     4. Shape-from-distortion.

D. Computational camera approaches.

     1. Light path triangulation.

     2. The light field probe.

     3. The Bokode.

     4. Light field camera and camera array.

E. Static vs. Dynamic objects.

     1. Challenges.

     2. Multi-camera approach.

     3. Volumetric light path reconstruction.

     4. Applications.

F. Conclusions and future work.

     1. Theoretical directions.

     2. How to use new computational cameras.

     3. A list of unanswered questions.

     4. Sharing of code and images.

 

Relationship to Previous Short Courses/Tutorials

The organizer has given two closely related tutorials on multi-perspective imaging, at SIGGRAPH Asia 2008 and CVPR 2010, with a focus on computational photography. This new CVPR tutorial will tightly integrate computer vision and computational photography, providing a complete overview of both classical state-of-the-art solutions and emerging computational camera approaches. It aims to stimulate new ideas in both computer vision and computational photography, ranging from new 3D reconstruction algorithms to novel computational cameras.



Relevant Publications

1.   Angular Domain Reconstruction of Dynamic 3D Fluid Surfaces, Jinwei Ye, Yu Ji, Feng Li, and Jingyi Yu, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2012.

 

2.   Dynamic 3D Fluid Surface Acquisition Using a Camera Array, Yuanyuan Ding, Feng Li, Yu Ji, and Jingyi Yu, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2011.

 

3.   Multi-perspective Stereo Matching and Volumetric Reconstruction, Yuanyuan Ding, Jingyi Yu and Peter Sturm, in Proceedings of the Twelfth International Conference on Computer Vision (ICCV), 2009.

 

4.   State of the Art Report: Multi-perspective Rendering, Modeling, and Imaging, Jingyi Yu, Leonard McMillan and Peter Sturm, in Proceedings of Eurographics, 2008.

 

5.   General Linear Cameras, Jingyi Yu and Leonard McMillan, in the 8th European Conference on Computer Vision (ECCV), 2004, Volume 2: 14-27.