Conveying Shape: Lighting, Shading, Texture

Lecture on Apr 14, 2010

Slides

Readings

  • Perceiving Shape from Shading. Ramachandran (pdf)
  • Conveying Shape and Features with Image-Based Relighting. Akers et al. (html)
  • The lit sphere: a model for capturing NPR shading from art. Sloan et al. (html)

Optional Readings

  • Automatic lighting design using a perceptual quality metric. Shacked and Lischinski. (web)
  • Maximum entropy light source placement. Gumhold. (ieee)
  • Light Collages. Lee et al. (html)

Jon Barron - Apr 13, 2010 02:48:41 am

Ramachandran:

The main thesis of this work, from a visualization perspective, seems to be that shading is a powerful cue, but that it is easily overridden by higher-level cues, such as contours, lighting, priors on shapes, occlusion, etc.

I found the "illusory circle" figure on page 80 particularly interesting, as it highlights how effective illusory contours are for conveying shape, but demonstrates that *actual* contours, like dark lines, are less effective. This suggests that contours are important and effective, but that rendering contours as lines may be less than ideal.

I wasn't convinced by some of the figures, such as the perceptual grouping series of figures on page 81. They aren't nearly as convincing as Julesz's texton figures.

Akers:

This is an extremely neat idea, but I find many of the resulting composites very strange and unnatural to look at. Though they convey much more information than any single image, there's no natural interpretation of some of the composite features (such as the shadow under the occluding cheekbone in the skull, which doesn't appear to have been generated by any plausible physical process). This compositing seems best in the robotic assembly, where it is primarily used to remove shadows.

Given that the authors were able to control all aspects of the subject's lighting, other options were available to them that might have been more natural. A Debevec-esque "Light Stage" might also have produced natural-looking and informative results. Or they could have used the multiple light directions to estimate the 3D surface (using shape from shading, photometric stereo, etc.), and then rerendered that surface (or a modified, "contrast-enhanced" version of it) to make it easy to interpret.
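
To make the surface-estimation option concrete: with multiple images under known light directions, it is essentially photometric stereo. A minimal sketch of that idea (my own illustration, not code from the Akers paper, assuming a Lambertian surface and known unit light directions; function and variable names are made up):

    import numpy as np

    def photometric_stereo(images, light_dirs):
        """images: (K, H, W) grayscale stack; light_dirs: (K, 3) unit light vectors."""
        K, H, W = images.shape
        I = images.reshape(K, -1)                # (K, H*W) stacked intensities
        L = np.asarray(light_dirs, dtype=float)  # (K, 3) lighting matrix
        # Lambertian model: I = L @ g, where g = albedo * normal at each pixel.
        g, _, _, _ = np.linalg.lstsq(L, I, rcond=None)  # (3, H*W) least-squares fit
        albedo = np.linalg.norm(g, axis=0)              # per-pixel albedo
        normals = g / np.maximum(albedo, 1e-8)          # unit surface normals
        return normals.reshape(3, H, W), albedo.reshape(H, W)

The recovered normal field could then be integrated into a height field and rerendered under whatever lighting the illustrator prefers.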

Sloan:

This is a cool idea! They manually capture the "BRDF" of an artistic style, and render models according to that reflectance model. Unfortunately, the process is extremely manual, and the tool seems pretty cumbersome. It seems easier for an artist to simply paint what the sphere would look like, and just use that. It would be better if they automated the manual sphere-mapping operation, but that seems like an extremely difficult task.
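
For concreteness, once a sphere image exists, the core rendering step is a lit-sphere lookup: each pixel is shaded by indexing the painted sphere with its view-space surface normal. A minimal sketch of that lookup (my simplification of the technique, not the paper's code; the paper's tool additionally lets the artist build the sphere map interactively from existing art):

    import numpy as np

    def lit_sphere_shade(normals, sphere_img):
        """normals: (H, W, 3) view-space unit normals; sphere_img: (S, S, 3) painted sphere."""
        S = sphere_img.shape[0]
        # Map the front-facing hemisphere (nx, ny in [-1, 1]) onto the sphere image disk.
        u = ((normals[..., 0] * 0.5 + 0.5) * (S - 1)).astype(int)   # image columns
        v = ((-normals[..., 1] * 0.5 + 0.5) * (S - 1)).astype(int)  # image rows (y flipped)
        return sphere_img[v, u]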

Jeffrey Patzer - Apr 13, 2010 10:15:58 pm

Akers' paper is one I found extremely fascinating. Essentially it captures what most web design is focused on these days. By using gradients, pictures, and careful layouts, the web designer hopes to achieve the appearance of buttons, background shading, and other layout effects. Small shadows are created to give 3D appearances. Large specular highlights are used for icons. These are exaggerated uses of the concepts discussed in the paper. They have become so pervasive that on many phone OSes you get buttons with bright strips along the top (the virtual buttons, not the physical ones). It's applied to icons on the iPhone. Mac OS has a windowing system that creates a drop shadow around each window. The list goes on and on. The interesting observation is that when you look at old operating systems, you don't see such uses of shading (or at least they are much less pervasive). People have come to associate a good web 2.0 site with one that makes use of many gradients and button-like appearances. The thing I would like to see is the point at which the viewer becomes over-saturated with these effects. By trying to provide too many depth cues, do you lose depth? Or does this not happen? Are we able to perceive unlimited depth?

Paul Ivanov - Apr 14, 2010 06:29:48 pm

Ramachandran: What are the studies that quantify the prevalence of countershading, which seems to "flatten" the animal when naturally illuminated (from above)? It's very interesting; I had never heard anything about it before. It could be true, but it could also be one of those anecdotes that seems plausible, with some examples to back it up, but without systematic prevalence. Update: A quick Google search turned up this paper: [What, if anything, is the adaptive function of countershading?], which probably has more information and has been cited 22 times, as well as this more recent Science paper: [Bioluminescent Countershading in Midwater Animals: Evidence from Living Squid].

It took a bit of work to get the corrugated metal sheet (page 79) to look like it's lit from the far left (instead of the far right). The trick was to imagine viewing a vertically standing sheet such that the top portion is closer to the viewer than the bottom.

I'd be interested in seeing a demonstration of the specific instance of apparent motion described in the article.

Stephen Chu - Apr 19, 2010 08:21:46 pm

I found the moon images to be very interesting. They clearly show that the developed algorithm can create composite images that give additional information about an object's texture and shape. The fact that the moon image is not photorealistic is very obvious, but I would have a difficult time determining that the composite images of the baboon skull and robotic assembly are not photorealistic without prior knowledge. I was left wondering why detecting some non-photorealistic features is difficult for me. I'd also be interested in learning about any research into algorithms that attempt to turn non-photorealistic images into more realistic ones.

Arpad Kovacs - Apr 21, 2010 12:38:45 pm

Ramachandran:

Cool article. It was very interesting to see the conclusions from several experiments regarding the brain's methods of perception. It is probably not a big surprise that the brain assumes a single light source coming from above, since this resembles our everyday lighting conditions. It is also generally understood that we recover shape from outlines, orientation, and other cues, but the experiments with various settings strengthen these hypotheses. These observations can be used to make better illustrations and more realistic computer graphics.

Akers:

The described method for generating a composite image from hundreds of photographs is a very useful tool for depicting hard-to-see features and for combining many images into one that fully demonstrates all the details. The spatially-varying weights, which are constrained to sum to one at each pixel, help compose a well-balanced composite. The control window makes it very easy to select any image and paint features from the source into the composite. This tool makes it very easy to interactively create good-quality technical illustrations. At the same time, I am concerned that it would be easy to overuse the tool and overemphasize features that are not really present in the original set of pictures.
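
A rough sketch of this normalized compositing (my own simplification of the idea, not the paper's implementation): each source photograph gets a painted weight mask, the masks are normalized to sum to one at every pixel, and the output is the weighted sum.

    import numpy as np

    def composite(images, weights):
        """images: (K, H, W, 3) source photos; weights: (K, H, W) painted masks."""
        w = np.clip(weights, 0.0, None).astype(float)
        w = w / np.maximum(w.sum(axis=0, keepdims=True), 1e-8)  # weights sum to 1 per pixel
        return (w[..., None] * images).sum(axis=0)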

Sloan:

The paper describes a method that reverses the usual shading process: it captures a shading model from a 2D image, then reprojects it onto 3D geometry. However, there could be several problems with this projection because of the original 2D image. For example, part or even half of the texture of a sphere can be obscured by the lighting, so generating that part of the surface shading could be problematic.

Jaeyoung Choi - Apr 26, 2010 11:02:49 am

Ramachandran: This article shows many convincing examples of how our perception is governed by several rules (although the examples on page 83 weren't convincing to me). I found the author's descriptions of the effect of the light source, and of how objects are grouped, interesting. It would have been nice if experiments had been run under both direct sunlight and indoor lighting, to compare the effects of the light source and see whether sunlight has more influence than other sources. For the shape-from-shading system, the author suggests an evolutionary explanation for its behavior, which was interesting too.

Zev Winkelman - May 02, 2010 10:28:48 pm

Ramachandran - What I found most interesting about this article was yet another example of how millennia of human experience (with a single light source that comes from above - the sun) influence the perception of visual elements in computer graphics.

Akers et al - The demo of this system in class was amazing. It has made me rethink how I view professionally 'prepared' images in media and advertisements.

Prahalika Reddy - May 12, 2010 08:06:46 am

The slides about getting shape from shading were very intriguing. It's interesting how circles with the darker shading at the bottom look as if they are popping out while circles with the darker shading at the top look as if they are pressed in. It's not something I would think about unless I was actually looking at the image.
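
This convex/concave illusion is easy to reproduce; here is a small sketch (illustrative values, my own demo) that renders two disks with opposite vertical luminance gradients:

    import numpy as np
    import matplotlib.pyplot as plt

    # With the usual light-from-above assumption, the disk that is brighter on
    # top reads as a bump, and the one that is darker on top reads as a dent.
    n = 200
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]   # y runs top (-1) to bottom (+1)
    disk = x**2 + y**2 <= 1
    bump = np.where(disk, 0.5 - 0.4 * y, 0.5)   # brighter on top -> pops out
    dent = np.where(disk, 0.5 + 0.4 * y, 0.5)   # darker on top  -> pressed in
    fig, axes = plt.subplots(1, 2)
    for ax, img in zip(axes, [bump, dent]):
        ax.imshow(img, cmap="gray", vmin=0, vmax=1)
        ax.axis("off")
    plt.show()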

Lighting is another important topic when it comes to images. As seen in the examples, different lighting can make the entire image look very different. I feel that this flexibility has both advantages and disadvantages.

I've always felt that texture is one of the harder things to convey in an image. It's extremely difficult to show how something is supposed to feel in a picture.

Shimul Sachdeva - May 13, 2010 04:39:40 am

Hand-crafted illustrations are not always more effective than photographs for conveying shape - on the contrary, I think a photograph can depict relative size better than a hand-drawn image. It is easier to fudge size and depth variables in a hand-drawn image than in a photograph.

The discussion of lighting effects on objects was good and useful. Is that how auto-contrast in photo-editing software works - by introducing new light sources in the background?
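
(As far as I know, typical auto-contrast does not simulate new light sources; it globally remaps the existing intensity histogram. A minimal sketch of the common percentile-stretch approach, with illustrative names and parameters:)

    import numpy as np

    def auto_contrast(img, clip=0.01):
        """Stretch intensities so the clipped low/high percentiles map to 0 and 1."""
        lo, hi = np.quantile(img, [clip, 1.0 - clip])
        return np.clip((img - lo) / max(hi - lo, 1e-8), 0.0, 1.0)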


