Conveying Structure

From CS294-10 Visualization Sp11

Lecture on March 30, 2011

Slides

Readings

  • Smart Visibility in Visualization. Viola and Gröller (html)
  • Using Deformation for Volumetric Browsing. McGuffin et al. (pdf)
  • Interactive Image-Based Exploded View Diagrams. Li et al. (html)

Optional Readings

  • Interactive Cutaway Illustrations of Complex 3D Models. Li et al. (html)
  • Non-Invasive Interactive Visualization of Dynamic Architectural Environments. Niederauer et al. (html)

Thomas Schluchter - Mar 30, 2011 07:26:33 am

The readings for this session articulated, more than others, an important category of tasks for visualizations: making the invisible visible. The examples that all papers present are inherently representational, as they try to show the structure of layered, intertwined, or nested elements of a whole. They fall squarely into the realm of scientific visualization. What was fascinating to me was to see that even here, a visualization tells a story: as the Smart Visibility paper shows, these images serve the purpose of making the "interesting" parts visible. What is considered interesting may change depending on context. In a diagnostic context, one would primarily look for anomalies in an image of an organ scan, while in an educational context, one would mainly be interested in the exemplary nature of the image. The manual or algorithmic determination of what's to be emphasized follows similar value judgments as in the visualization of financial data.

Another aspect of storytelling that stood out to me was the finding in the Deformations paper that to effectively convey structure, transitions and animations are needed to make clear why a deformed representation looks the way it looks. It's like seeing something (in some cases literally) unfold, which creates immediate visual evidence and understanding.

Julian Limon - Mar 30, 2011 02:35:30 pm

The Li et al. paper describes a very interesting approach to exploded view diagrams. I am not familiar with technical diagrams or with image editing techniques, so I found this whole new world fascinating.

Exploded diagrams are usually created to be shown in manuals and printed materials. I am not sure such a system would have been useful before. However, mobile electronic devices are now ubiquitous, and users can definitely benefit from an electronic version of the diagram that they can interact with. I can imagine that providers of manuals could offer phone- or tablet-optimized versions of their diagrams that users can look up online. If a person needs to search for a specific part in the car, she might not have a computer at hand, but she would probably have a smartphone available. Moreover, a picture of the bar codes or part numbers printed in the car could be used to optimize the search. The system could then present a diagram that the user can interact with. Eventually, other users could recommend hacks that they have used to fix certain parts. Or, the system could be linked to stores where the specific part can be fixed.

Finally, I feel that labeling could also benefit from a more social approach. Parts are usually named according to technical specifications. However, some users might not know what they are called. If the system allowed users to provide multiple alternative names for parts, search could be optimized.

Saung Li - Mar 30, 2011 09:28:42 pm

The techniques mentioned in the Smart Visibility paper, and also in lecture, such as cut-away views, section views, and ghosted views, are great in allowing people to see the "important" features of a visualization without losing their context. I remember when I took biology, the textbook had a lot of images that applied these techniques, and they definitely helped me visualize and understand the topics. I would like to see this taken further. People need to identify what the important features are, so it may be difficult to automate some of these techniques. Can an algorithm be made for applying them? I would also like to see more visualizations that use smart visibility techniques become interactive, so that users can look at the image from different angles instead of just one. These would be great supplements to the static images in textbooks and other graphics where people would like to look at an object's internals from multiple perspectives.
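As a thought experiment on the question above, here is a minimal Python sketch of one way an importance-driven ghosting rule could be automated. It is my own toy formulation, not an algorithm from the readings: any object that occludes a more important one along the view direction is faded toward transparency. The scene objects, importance values, and thresholds are all invented for illustration.

    # Toy sketch of importance-driven ghosting (assumed, not from the papers):
    # occluders in front of a more important object are made more transparent.
    from dataclasses import dataclass

    @dataclass
    class SceneObject:
        name: str
        depth: float        # distance from the camera along the view ray
        importance: float   # 0 = pure context, 1 = focus of the illustration

    def ghosting_opacities(objects, min_opacity=0.15):
        """Assign an opacity to each object: anything occluding a more
        important object is faded toward min_opacity."""
        opacities = {}
        for obj in objects:
            # Importance of the most important object hidden behind this one.
            behind = [o.importance for o in objects if o.depth > obj.depth]
            max_behind = max(behind, default=0.0)
            if max_behind > obj.importance:
                opacities[obj.name] = max(min_opacity, 1.0 - max_behind)
            else:
                opacities[obj.name] = 1.0
        return opacities

    # Example: an "engine" hidden behind a "hood" along one view ray.
    scene = [SceneObject("hood", depth=1.0, importance=0.2),
             SceneObject("engine", depth=2.0, importance=0.9)]
    print(ghosting_opacities(scene))  # the hood becomes mostly transparent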

Michael Cohen - Mar 30, 2011 10:03:33 pm

I think that the Non-Invasive Interactive Visualization of Dynamic Architectural Environments work could have many practical applications beyond video games. For instance, highlighting the actual location of an alarm or error condition raised by a building management system, or highlighting the location of a fire alarm for firefighters. To get a little more sci-fi, I could also see it being very useful for a high-end security system. If employees and visitors are required to carry a badge with, say, RFID tags that can be tracked around the building, then the exploded display could provide a comprehensive view of traffic within the building. If it were well integrated into the rest of the security system, the display could also highlight areas of the floor plan where cameras and/or motion detectors detected something but no RFID tags were detected.

In theory, much of this could be achieved with a static display, but in real buildings spaces and systems are reconfigured fairly often. The ability to tie maintenance/surveillance data into a dynamic 3D model that can be sliced and viewed automatically would make such a system more maintainable.

Michael Hsueh - Mar 31, 2011 05:10:14 am

The system for interactive exploded view diagrams by Li et al. is neat. The paper focuses on using static 2D input images and "converting" them to interactive models. In addition to mechanical part diagrams, I can see this working well for visualization of geological or biological information. I wonder how the system handles more complicated exploding paths, such as those of locking parts that may require multiple translations to be inserted into the correct position. This would also rely on the skill and ability of the author to specify the correct constraints for such parts. Thinking along the lines of having arbitrary exploding axes, perhaps one kind of user interaction is to have the parts explode away from the cursor. That is, instead of dragging parts along their exploding axes, the parts would explode, more or less along their defined explosion axes, away from a moving cursor.
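The cursor-driven explosion idea could plausibly be expressed as a simple distance falloff. The Python sketch below is my own interpretation, not part of Li et al.'s system; the part positions, axes, and parameters are made up, and each part still moves only along its authored explosion axis.

    # Hypothetical "explode away from the cursor" interaction (assumed, not Li et al.'s).
    import numpy as np

    def explode_from_cursor(part_centers, explosion_axes, cursor,
                            radius=2.0, max_offset=1.5):
        """part_centers: (n, 3) rest positions; explosion_axes: (n, 3) unit axes.
        Parts near the cursor slide farther out along their own axes."""
        centers = np.asarray(part_centers, dtype=float)
        axes = np.asarray(explosion_axes, dtype=float)
        cursor = np.asarray(cursor, dtype=float)
        dist = np.linalg.norm(centers - cursor, axis=1)
        # Falloff weight: 1 at the cursor, 0 beyond `radius`.
        weight = np.clip(1.0 - dist / radius, 0.0, 1.0)
        return centers + axes * (weight * max_offset)[:, None]

    # Two stacked parts exploding along +y as the cursor approaches them.
    parts = [[0, 0, 0], [0, 0.5, 0]]
    axes = [[0, 1, 0], [0, 1, 0]]
    print(explode_from_cursor(parts, axes, cursor=[0, 0.4, 0]))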

Reading the paper made me think about whether in the future there would be more interactive exploding diagrams available in general. Static exploded views are already common in textbooks, service manuals, and so on. Many of these views are presumably generated from 3D computer models and then printed for widespread consumption. Generating interactive diagrams from actual 3D models is likely easier and probably more accurate than doing so from static images, as in Li et al.'s system. With the proliferation of computing, it would not be surprising to see much greater use of interactive exploding diagrams, generated directly from the 3D models used for representation.

Sally Ahn - Apr 01, 2011 03:14:07 am

Viola and Gröller provide a nice overview of the various automated visualization techniques that were inspired by hand-drawn technical illustrations. In both automated and hand-drawn illustrations, the techniques aim to render three-dimensional models/objects in a way that best conveys "certain information," and most of these methods (cut-away view, ghosted view, section view, exploded view) deal with the difficulty of depicting occluded parts. As we saw in lecture, however, each technique has its own pros and cons. On the other hand, with computer graphics and interactivity, we can overcome many of the limitations caused by occlusion, and both McGuffin et al.'s and Li et al.'s papers demonstrate this. The interesting difference between the two papers is that Li et al. take 2D images as input, whereas McGuffin et al. require volumetric data, and yet Li et al.'s system still provides additional three-dimensional information about the object through interaction.

One thing I wondered while reading the Smart Visibility in Visualization paper was how the important features were determined. I wonder if this estimation can be automated with machine learning from existing illustrations drawn by artists who have already thought about this.

Brandon Liu - Apr 01, 2011 01:35:47 pm

I thought it was interesting how in the Niederauer et al. paper, the hand-drawn diagram had 'guide lines' indicating that the purple features in the building lined up. The final system didn't have this feature to create guide lines for the viewer. The benefit would be being able to see how level entrances/exits on each of the cuts match up. A way this could be implemented is by finding all the 'ramps' in the map based on their Y coordinate between the cuts and then drawing a set of dotted lines between the levels, as sketched below. The addition of the guide lines makes the hand-drawn diagram more effective. Another reason it's more effective is the use of color: some parts of the diagram that aren't structurally relevant are greyed out, making it clearer where exactly the floors are.
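The ramp-matching heuristic could be prototyped roughly as follows. This is a hypothetical Python sketch of the guide-line idea described above, not something implemented in the Niederauer et al. system; the data layout and tolerance are assumptions.

    # Assumed data: one list of (x, y, z) ramp positions per cut, bottom to top.
    def guide_lines(ramps_by_cut, y_tolerance=0.1):
        """Return pairs of points to connect with a dotted guide line
        between consecutive cuts, matched by height (y coordinate)."""
        lines = []
        for lower, upper in zip(ramps_by_cut, ramps_by_cut[1:]):
            for a in lower:
                for b in upper:
                    # A ramp at the top of one cut should sit at roughly the
                    # same height as its continuation in the next cut.
                    if abs(a[1] - b[1]) <= y_tolerance:
                        lines.append((a, b))
        return lines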

Dan - Apr 01, 2011 03:13:48 pm

The lecture on exploded views, cutaways and ghosting of 3D objects was really engaging. The framework for conveying structure (choosing good views and layering) was good. I also liked how important internal features were exposed for particular examples. I agree that the choice between culling (removing) objects and moving them (exploded views) is particularly interesting. It seems that depending on the case, each method provides a particular benefit. Perhaps it is good to know what type of presentation or underlying story is being told in order to make that call. The project that intercepts the OpenGL stream was amazing!

The volumetric browsing paper was great. The applications were amazing, especially the biomedical ones. This could be used to teach medical students body parts, or even regular students about the human body in general. The future for a technology like this is limitless. The Interactive Image-Based Exploded View Diagrams paper by Li et al. was really cool. They developed a system for exploded views for mechanical parts and diagrams. This could be used to automate tons of digital reference manuals and things of that nature for engineers and technicians.

Karl He - Apr 04, 2011 05:09:12 am

Cutaways and related diagrams are important for visualizing the internals of structures. Algorithmic generation of cutaways seems infeasible, however. In certain cases it is possible, such as when showing how to find a path through a building, since most of what needs to be shown is given by the path itself. Some other applications, such as showing the internals of a complex mathematical structure (like the one shown in class), would be more challenging. There is nothing specific that needs to be shown, but what is shown needs to give enough hints to the viewer such that he can interpolate the rest of the structure.
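As a rough illustration of how a path could drive the cutaway, here is a toy Python sketch (my own construction, not from the readings): walls whose bounding spheres block the line of sight from the camera to any point along the route are flagged for culling. The geometry representation, radius, and names are invented for illustration.

    # Assumed path-driven culling: cut away walls that occlude the route.
    import numpy as np

    def occluding_walls(wall_centers, wall_radius, camera, path_points):
        """Return indices of walls whose bounding sphere intersects a
        camera-to-path sight line, i.e. candidates for cutting away."""
        walls = np.asarray(wall_centers, dtype=float)
        cam = np.asarray(camera, dtype=float)
        cut = set()
        for p in np.asarray(path_points, dtype=float):
            d = p - cam
            length = np.linalg.norm(d)
            d /= length
            # Closest point on the cam->p segment to each wall center.
            t = np.clip((walls - cam) @ d, 0.0, length)
            closest = cam + t[:, None] * d
            dist = np.linalg.norm(walls - closest, axis=1)
            cut.update(np.nonzero(dist < wall_radius)[0].tolist())
        return sorted(cut)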

Matthew Can - Apr 05, 2011 12:58:01 am

From the reading, a key element of the Smart Visibility concept is that the visibility calculation accounts for the importance of the objects in the scene. More important objects will be made prominently visible when occluded by less important objects. Given importance values for the parts of several (self) occluding objects, it's a challenging algorithmic problem to create a visualization that conveys the most important structural information. But, as others have mentioned, I think a more interesting problem (and one where a solution would have greater impact) is how to automate the process of labeling important object parts. That is, what features of an object's structure make it worth showing? We talked in class about how the more asymmetrical parts of an object with sharp contours probably have high information content relative to the rest of the object. That could be one heuristic for calculating importance. This is probably a hard problem to solve for arbitrary 3D geometry, although I think it's reasonable to make an attempt in a specific domain.
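One way to make that contour heuristic concrete is sketched below in Python. It is a deliberately crude stand-in of my own, not an algorithm from the readings: a part whose surface normals are spread widely (suggesting sharp, varied contours) scores higher than a flat or gently curved one.

    # Toy importance heuristic (assumed): normal spread as a proxy for
    # sharp contours and geometric "surprise".
    import numpy as np

    def importance_score(face_normals):
        """face_normals: (n, 3) unit normals of one part's mesh faces.
        Tightly clustered normals give a low score; varied normals score higher."""
        normals = np.asarray(face_normals, dtype=float)
        mean_dir = normals.mean(axis=0)
        mean_dir /= np.linalg.norm(mean_dir) + 1e-9
        # Average alignment with the mean direction, mapped into [0, 1].
        alignment = np.clip(normals @ mean_dir, -1.0, 1.0)
        return float(1.0 - alignment.mean())

    # A flat plate (all normals equal) scores near 0; a spiky part scores higher.
    plate = [[0, 0, 1]] * 6
    spiky = [[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, -1, 0], [-1, 0, 0], [0, 0, -1]]
    print(importance_score(plate), importance_score(spiky))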


