Multiscale Shape and Detail Enhancement from Multi-light Image Collections

Raanan Fattal, Maneesh Agrawala, Szymon Rusinkiewicz

Abstract

We present a new image-based technique for enhancing the shape and surface details of an object. The input to our system is a small set of photographs taken from a fixed viewpoint, but under varying lighting conditions. For each image we compute a multiscale decomposition based on the bilateral filter and then reconstruct an enhanced image that combines detail information at each scale across all the input images. Our approach does not require any information about light source positions or camera calibration, and can produce good results with 3 to 5 input images. In addition, our system provides a few high-level parameters for controlling the amount of enhancement and does not require pixel-level user input. We show that the bilateral filter is a good choice for our multiscale algorithm because it avoids the halo artifacts commonly associated with the traditional Laplacian image pyramid. We also develop a new scheme for computing our multiscale bilateral decomposition that is simple to implement, fast (O(N² log N)), and accurate.
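To make the pipeline concrete, the sketch below prototypes the two stages the abstract describes: a multiscale bilateral decomposition of each input image, followed by a per-scale combination of detail layers across the collection. It is a minimal illustration, not the authors' implementation: it assumes OpenCV's cv2.bilateralFilter for the smoothing step, and the max-magnitude rule, the `levels` and `gain` parameters, and the function names are all hypothetical choices made for this example.

import cv2
import numpy as np

def bilateral_decomposition(img, levels=4, sigma_s=2.0, sigma_r=0.1):
    """Decompose a float32 grayscale image in [0, 1] into a coarse base
    layer plus one detail layer per scale, using repeated bilateral
    filtering with progressively wider kernels.
    Satisfies: img == base + sum(details)."""
    base = img
    details = []
    for i in range(levels):
        smoothed = cv2.bilateralFilter(
            base, d=-1,
            sigmaColor=sigma_r * (2 ** i),
            sigmaSpace=sigma_s * (2 ** i))
        details.append(base - smoothed)  # detail retained at this scale
        base = smoothed
    return base, details

def combine_collection(images, levels=4, gain=1.5):
    """Combine detail layers across a multi-light image collection.
    At each scale and pixel, keep the detail value with the largest
    magnitude across the inputs (a simple stand-in for the paper's
    combination rule), boost it by `gain`, and add it to the mean base."""
    bases, all_details = [], []
    for img in images:
        base, details = bilateral_decomposition(img, levels)
        bases.append(base)
        all_details.append(details)

    result = np.mean(bases, axis=0)
    for lvl in range(levels):
        layer = np.stack([d[lvl] for d in all_details])   # (n_images, H, W)
        idx = np.abs(layer).argmax(axis=0)                 # strongest detail per pixel
        chosen = np.take_along_axis(layer, idx[None], axis=0)[0]
        result += gain * chosen
    return np.clip(result, 0.0, 1.0)

With 3 to 5 registered grayscale photographs loaded as float32 arrays, calling combine_collection(images) returns an enhanced image; lowering `gain` or averaging rather than taking the max-magnitude detail trades exaggeration for a more natural look, mirroring the two results shown in the figure.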

The Multi-Light Image Collection for this chard leaf contains 3 images taken under varying lighting conditions. The shading in each input image reveals different aspects of its shape and surface details. We combine the shading at multiple scales across the input images to generate the enhanced results. The result on the left exaggerates surface details by eliminating shadows, but yields a flat look. The result on the right is less extreme and includes some shadows to increase the perception of depth, at the cost of reducing some visible detail in the shadow regions.

Research Paper

PDF (39.5 MB)

Supplemental Materials

Multiscale Shape and Detail Enhancement from Multi-light Image Collections
SIGGRAPH 2007, August 2007. 51:1-51:9.