Conveying Shape: Lighting, Shading, Texture

From CS294-10 Visualization Sp11


Revision as of 20:48, 26 April 2011

Lecture on Apr 25, 2011

Slides

Readings

  • Perceiving Shape from Shading. Ramachandran (pdf)
  • Conveying Shape and Features with Image-Based Relighting. Akers et al. (html)
  • The lit sphere: a model for capturing NPR shading from art. Sloan et al. (html)

Optional Readings

  • Automatic lighting design using a perceptual quality metric. Shacked and Lischinski. (pdf)
  • Maximum entropy light source placement. Gumhold. (ieee)
  • Light Collages. Lee et al. (html)

Julian Limon - Apr 25, 2011 05:18:39 pm

The Schuman et al. paper that was discussed in class today reminded me of low-fidelity prototyping. Schuman et al. ran an experiment with architects to evaluate different computer-generated images (namely, CAD plots, shaded images, and sketches). They discovered that people tend to associate sketches with preliminary drafts and CAD plots with final presentations. They also found that sketches stimulate significantly more discussion and active changes than CAD plots and shaded images. This reminds me of why napkin-like prototyping is so powerful. When people look at these kinds of prototypes, they are more likely to engage with them, request changes, and discuss them. On the other hand, when prototypes are too realistic or pixel-perfect, people might reserve their comments. Specifically for our final project, I think it makes sense to gather feedback using low-fidelity sketches. Subjects would be less attached to the sketches and would be more likely to give honest comments. Even if some parts of the system are already built, effort can be made to make it "look" more like a sketch if we want to obtain useful feedback.


On a totally different note, Photomontage was criticized today because it is a user-driven system and is not fully automated. I see this as a pro rather than a con. Ultimately, artists know better than computers how they want to convey a certain story. I like Photomontage because, instead of trying to automate the whole process, it provides tools for artists to determine how they want the result to look. Fully automated techniques might lose some of the nuances that artists want to convey and might not be suited to some cases.

Brandon Liu - Apr 25, 2011 06:19:22 pm

The tradeoffs between artistic considerations and automation in the photomontage and image-based relighting systems are interesting. Specifically, if one were to automate such a system, how well would it perform? This reminds me of the 'Auto-' color/contrast/levels functions in Photoshop, which optimize for some quantity in an image. Could Photomontage take a similar approach and optimize detail in the final image? One strong argument against this is that the boundaries of interesting regions couldn't be determined automatically; instead, it is up to the human viewer to interpret which parts form a whole and can be meaningfully depicted.
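To make the "optimize for some quantity" idea concrete, here is a toy sketch of what an 'Auto-'-style operation does under the hood: a percentile-based contrast stretch that clips a small fraction of the darkest and brightest pixels and remaps the rest to the full range. This is an illustration of the general approach, not Photoshop's actual algorithm.

```python
def auto_contrast(pixels, clip=0.01):
    """'Auto Contrast'-style stretch: clip the darkest/brightest `clip`
    fraction of pixels and remap the remaining range to [0, 255].
    `pixels` is a flat list of 0-255 grayscale values."""
    ranked = sorted(pixels)
    n = len(ranked)
    lo = ranked[int(clip * (n - 1))]           # dark clipping point
    hi = ranked[int((1 - clip) * (n - 1))]     # bright clipping point
    if hi == lo:                               # flat image: nothing to stretch
        return list(pixels)
    scale = 255.0 / (hi - lo)
    return [min(255, max(0, round((p - lo) * scale))) for p in pixels]
```

The point of the sketch is that the optimized quantity (here, dynamic range) is global and easy to define, whereas "detail in the interesting regions" would require knowing which regions are interesting, which is exactly the part that resists automation.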

67.164.95.175 - Apr 26, 2011 12:54:31 am

The paper by Sloan et al. extracts lit spheres from objects in source images. This is done by having the user fit spherical triangles onto scene objects (ultimately covering a sufficient distribution of normals). The spherical triangles are then mapped onto a lit sphere model that is used to shade geometry. This is a very neat technique for extracting unusual (in particular, NPR) shading models that would otherwise be quite difficult to define, especially mathematically. The only concerns I had were regarding the ease and accuracy with which users can apply these triangles to scene objects, and whether these objects would typically exhibit enough information to construct the spheres, though the paper indicates these were not serious problems. Anyway, the technique's application to NPR reminded me of some work I've seen before that does a similar, though less interactive, extraction of brush stroke characteristics (http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4545847). That particular work does not go so far as to produce generative methods for the brush strokes, but it seems like it could.
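Once the lit sphere image exists, the shading lookup itself is simple: a view-space normal's x and y components index directly into the sphere image (the same trick "matcap" shading uses in modeling tools). A minimal sketch, assuming the sphere has been captured as a square 2D grid of colors:

```python
import math

def lit_sphere_shade(normal, sphere_img):
    """Look up a color from a lit-sphere (matcap) image.
    `normal` is a view-space surface normal; `sphere_img` is a square
    2D grid of colors sampled from the shaded reference sphere.
    On the sphere, the normal at image point (x, y) is
    (x, y, sqrt(1 - x^2 - y^2)), so a normal's x/y components
    index directly into the image."""
    nx, ny, nz = normal
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny = nx / length, ny / length          # normalize to unit length
    size = len(sphere_img)
    # map [-1, 1] -> [0, size-1]; y is flipped for image coordinates
    col = min(size - 1, int((nx + 1) * 0.5 * (size - 1) + 0.5))
    row = min(size - 1, int((1 - ny) * 0.5 * (size - 1) + 0.5))
    return sphere_img[row][col]
```

The hard part of the paper is constructing `sphere_img` from the user's spherical triangles; once that map exists, shading any geometry is just this per-normal lookup.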

Matthew Can - Apr 26, 2011 03:48:12 pm

Julian and Brandon brought up good points about fully automatic approaches to choosing lighting and shading conditions that best convey shape. I agree with Julian that this is not a one-size-fits-all problem, and that artists might want more control over the images they produce. At the same time, novices like me would be satisfied with an image that is good enough. And there are more novices than experts in the world, so it’s worth pursuing research in fully automated methods. The challenge is how to quantitatively encode how well an image reveals an object’s structure. If that were easy, we could try to optimize over lighting and shading conditions. Gumhold’s paper finds the position for a light source that maximizes the information in the resulting image. It’s not clear to me how well this information-theoretic approach matches the objective of conveying structure. In any case, this is still an open and interesting problem.
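The bare skeleton of the entropy-maximizing idea can be sketched in a few lines: shade a set of surface normals with a simple Lambertian model under each candidate light direction, and keep the direction whose intensity histogram has the highest Shannon entropy. This is only a toy illustration of the objective, not Gumhold's actual algorithm, which operates on rendered images.

```python
import math

def shading_entropy(normals, light):
    """Shannon entropy (bits) of quantized Lambertian intensities n·l."""
    bins = [0] * 16
    for n in normals:
        i = max(0.0, sum(a * b for a, b in zip(n, light)))  # clamp back-facing
        bins[min(15, int(i * 16))] += 1
    total = sum(bins)
    return -sum((c / total) * math.log2(c / total) for c in bins if c)

def best_light(normals, candidates):
    """Pick the candidate light direction with maximum shading entropy."""
    return max(candidates, key=lambda l: shading_entropy(normals, l))
```

Even this toy version exposes Matthew's concern: a light that spreads intensities evenly maximizes entropy, but nothing in the objective says those intensity variations line up with the structural features a human actually needs to see.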
