Conveying Shape: Lighting, Shading, Texture

From CS294-10 Visualization Fa07


Lecture on Dec 3, 2007

Slides


Readings

  • Perceiving Shape from Shading. Ramachandran (pdf)
  • Using Non-Photorealistic Rendering to Communicate Shape. Gooch and Gooch. (pdf)
  • Conveying Shape and Features with Image-Based Relighting. Akers et al. (html)

Optional Readings

  • The lit sphere: a model for capturing NPR shading from art. Sloan et al. (html)
  • Automatic lighting design using a perceptual quality metric. Shacked and Lischinski. (web)
  • Maximum entropy light source placement. Gumhold. (ieee)
  • Light Collages. Lee et al. (html)

Omar - Dec 03, 2007 12:24:29 am

Regarding "Conveying Shape and Features with Image-Based Relighting" -- is this multi-source, pinpoint lighting also used for humans on magazine covers? Is it one of the reasons people look so unusually good? But seriously, if such views of an object are pretty much impossible in the real world, can this lead to mismatches and interaction problems with the real-world objects?

Also, for the first paper: I didn't see the Dalmatian until I read the caption, then I found it. The authors bring up the complicated interplay of high-level concepts with low-level perception, but in the intervening years I'm sure much has happened. Does anyone with more knowledge of the subject have pointers?

Willettw - Dec 03, 2007 01:24:43 am

Regarding Omar's comment - controlling the lighting in portrait photography, product photography, etc. is easily as important as shot composition to professional photographers, who will often go to great lengths to manipulate small details. A given studio shot might use dozens of different lights, reflectors, and blockers to control the lighting of individual facets and regions of the photographed object. In that light (no pun intended), it seems like a very logical step to use this sort of system for commercial photography, and I'd be surprised if it hasn't at least been tried. I can imagine that snapping a few dozen shots of a model under systematically varied lighting conditions over the course of a few seconds and then digitally recombining them later might be much easier than trying to manually arrange lights and reflectors to achieve the desired effect during a live shoot.
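The digital recombination described above works because light transport is linear: an image of a scene lit by several sources at once equals the weighted sum of images each lit by one source alone. A minimal sketch of that idea (the array shapes, function name, and toy data are my own, not from the Akers et al. paper):

```python
import numpy as np

def relight(basis_images, weights):
    """Combine single-light photographs into one relit image.

    Light transport is linear, so an image lit by several sources at
    once equals the weighted sum of images each lit by one source.

    basis_images: array of shape (n_lights, H, W, 3), one photo per light
    weights: length-n_lights intensity for each light
    """
    weights = np.asarray(weights, dtype=float)
    relit = np.tensordot(weights, basis_images, axes=1)  # sum_i w_i * I_i
    return np.clip(relit, 0.0, 1.0)  # keep result in displayable range

# Toy example: two "lights", each illuminating one half of a 2x2 image
left = np.array([[[1, 1, 1], [0, 0, 0]],
                 [[1, 1, 1], [0, 0, 0]]], dtype=float)
right = np.array([[[0, 0, 0], [1, 1, 1]],
                  [[0, 0, 0], [1, 1, 1]]], dtype=float)
combined = relight(np.stack([left, right]), [0.5, 0.5])  # evenly lit image
```

Systems like the one in the paper essentially let the user paint the weights per region instead of fixing one global weight per light.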

Mark Howison - Dec 03, 2007 01:42:38 pm

The slide from today's lecture showing the use of key, fill, and backlights to achieve balanced lighting of a subject in portrait photography reminded me of a similar solution used in stage lighting, called the McCandless system. The idea is to light many areas of the stage in similar ways to a portrait studio, using units of three or four lights above each area. Each light is either a key, fill, or backlight depending on its position relative to the area of the stage and its relative brightness to the other lights in the unit. It's also common to use subtle changes in the hue of the light depending on its direction.

The mathematical object behind the lit sphere shading method is, as Maneesh said, the Gauss sphere, not the Poincaré sphere as I had mistaken it for. Basically, there is a canonical map called the Gauss map that takes the unit normal at every point on a surface and maps it onto a region of the unit sphere. Here is some more information on the Gauss map, including interactive demonstrations.
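For a parametric surface the Gauss map can be computed directly: the unit normal at a point is the normalized cross product of the two tangent vectors given by the partial derivatives of the parameterization. A rough numerical sketch (the torus parameterization, step size, and function names are my own choices for illustration):

```python
import numpy as np

def gauss_map(surface, u, v, eps=1e-5):
    """Approximate the Gauss map N(u, v) of a parametric surface.

    The unit normal -- a point on the unit sphere -- is the normalized
    cross product of the two partial-derivative tangent vectors,
    estimated here by forward finite differences.
    """
    p = surface(u, v)
    du = (surface(u + eps, v) - p) / eps  # tangent along u
    dv = (surface(u, v + eps) - p) / eps  # tangent along v
    n = np.cross(du, dv)
    return n / np.linalg.norm(n)

def torus(u, v, R=2.0, r=0.5):
    """Torus with major radius R and minor (tube) radius r."""
    return np.array([(R + r * np.cos(v)) * np.cos(u),
                     (R + r * np.cos(v)) * np.sin(u),
                     r * np.sin(v)])

# At (u, v) = (0, 0), the outermost point, the normal points along +x
n = gauss_map(torus, 0.0, 0.0)
```

Lit sphere shading then looks up a color on an artist-painted sphere at the position this map returns for each surface point.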

Mcd - Dec 04, 2007 08:32:44 pm

The convex/concave work of Ramachandran is convincing--watching the animations in class only reinforced it for me. While I doubt it's innate, I think the assumption of light from above is strongly ingrained in perception. The chevron patterns, though, were less clear to me. I had a very hard time seeing anything other than two-dimensional patterns. It's interesting that he argues for similar perceptual results in the two-color chevron case, whereas on the page before he had used two-color circles to show the absence of just such an effect.

On another note, the photo-montage demonstrations were among my favorite demos of the semester. Perhaps I come with a very iSchool user-centered bias, but the tools were generally impressive and seemed quite useful.

Robin Held - Dec 07, 2007 01:17:35 pm

I find it very interesting how the use of multiple light sources (Akers et al) increases the visible detail on the surface of an object, but seems to reduce the overall depth in an image. The moon example illustrates this the best. The modified image makes the craters easy to distinguish, but the moon looks more like a disc than a sphere. This isn't necessarily a bad thing, as long as local detail is considered more important than the overall shape of an object. It's probably just a reminder that the choice of illustration style depends on the image content that one wants to convey.

Hazel Onsrud - Dec 09, 2007 03:41:15 pm

Regarding Willettw's comment: After class I kept thinking about how far one could take that or similar software. I let my imagination run one step further than yours and contemplated its use in film, in addition to photos. I realize a lot is already possible, but I would be interested in knowing more of the specifics.

James Andrews - Dec 10, 2007 07:34:46 am

Hazel -- Paul Debevec does a lot of work on this kind of automated relighting for video, some of which is used in films. A recent relevant paper is "Relighting Human Locomotion with Flowed Reflectance Fields" http://gl.ict.usc.edu/Research/RHL/index.html ... he has quite a fancy setup for this, though. It uses "a vertical array of high-speed cameras under a time-multiplexed lighting basis" and has a limited capture space.

N8agrin - Dec 14, 2007 11:36:09 pm

A few years ago I was in a museum where there was a display of many famous individuals' faces. What was interesting and somewhat disturbing about this visualization was that each face appeared to stare at you as you moved throughout the room. The effect was created by using a 3D mask that was not facing out at you, as you might expect, but rather facing inward. By looking at the inside of the mask, lit in a particular way, the inverse image was perceived as a visually stunning representation of a face that moved!

It seems as though Ramachandran's work touches on similar effects. What's most interesting is that even simple shading of an image can create what appears to be pre-attentive recognition of dimensionality. Also of note, though more of a subjective impression, is that the arrangement of the shaded objects can actually "feel" jarring. For whatever reason, when the shading is not all lined up consistently and the objects are the same shape, there is a distinctly unsettling sensation.

Gooch and Gooch's approach to object shading reminds me of High Dynamic Range (HDR) photography, where a photographer takes many exposures of the same scene and stitches them together to create one coherent image that contains very few over- or under-exposed areas. The effect is that an HDR image tends to look hyper-realistic. Here is an example: HDR Photo
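The HDR merge mentioned above can be sketched as a per-pixel weighted average: each exposure contributes most where its pixels are well exposed, and each is divided by its exposure time to put all shots on a common radiance scale. This is a crude illustration with a triangle weighting of my own choosing; real HDR pipelines also recover the camera response curve and tone-map the result:

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Crude HDR merge of aligned exposures of the same scene.

    Pixels near 0 or 1 carry little information, so a triangle weight
    favors mid-tones; dividing by exposure time converts each shot to
    a common relative-radiance scale before averaging.
    """
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # peaks at 0.5, zero at 0 and 1
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-8)  # estimated relative radiance

# Two exposures of a two-pixel scene: the 4x shot saturates the bright pixel,
# so the merge trusts the short exposure there and the long one in shadow.
short = np.array([0.1, 0.5])   # 1x exposure
long_ = np.array([0.4, 1.0])   # 4x exposure; second pixel blown out
radiance = merge_exposures([short, long_], [1.0, 4.0])  # ~[0.1, 0.5]
```

The "hyper-real" HDR look comes from the tone-mapping step that compresses this radiance back into displayable range, not from the merge itself.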

Ken-ichi - Dec 15, 2007 05:40:27 pm

I'm intrigued by the fact that most people perceive light as coming from above. This makes intuitive sense for a species that evolved in sunlight, where lighting from above is almost always a reasonable assumption. Does anyone know of any neurological work that has attempted to examine the physiological cause of this phenomenon? Is there an animal model for light perception?


